https://iris.unito.it/handle/2318/145754
Search for new physics with long-lived particles decaying to photons and missing energy in pp collisions at $\sqrt{s}=7$ TeV
2020-08-04 05:50:22
https://math.stackexchange.com/questions/1758482/using-first-isomorphism-theorem-to-determine-existence-of-onto-homomorphisms
# Using the first isomorphism theorem to determine existence of onto homomorphisms

I'm trying to determine whether an onto group homomorphism $\alpha : \Bbb{R}^* \rightarrow C_2$ exists. By the first isomorphism theorem, I know I have to find $\ker(\alpha)$ and form a factor group that is isomorphic to $C_2$. It seems to me that the real numbers will be pretty hard to break down into two elements, but I'm having trouble expressing that. Also, if it were $\mathbb{Z}$ instead of $\mathbb{R}$, how would things be different? Any help is appreciated.

• Wouldn't $\alpha = \text{sgn}$ work, where $\text{sgn}$ is the sign function? – eepperly16 Apr 25 '16 at 19:31
• Yes, I think that would actually work. – ybce Apr 25 '16 at 19:34
• For $\mathbb{Z}$, I assume you mean the group of integers under addition. The canonical homomorphism from $\mathbb{Z}$ to $\mathbb{Z}/2\mathbb{Z}$ should do. – eepperly16 Apr 25 '16 at 19:36
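Spelling out the suggestion in the comments, here is a short sketch of why the sign map works, and what the analogous map for the integers looks like:

```latex
% The sign map is an onto homomorphism from the multiplicative reals to C_2:
\[
\operatorname{sgn}\colon \mathbb{R}^{*}\to C_{2}=\{1,-1\},\qquad
\operatorname{sgn}(x)=\begin{cases} \;\;\,1 & x>0,\\ -1 & x<0,\end{cases}
\qquad
\operatorname{sgn}(xy)=\operatorname{sgn}(x)\operatorname{sgn}(y).
\]
% Its kernel is the positive reals, so the first isomorphism theorem gives
\[
\ker(\operatorname{sgn})=\mathbb{R}^{>0},\qquad
\mathbb{R}^{*}/\mathbb{R}^{>0}\cong C_{2}.
\]
% For the integers under addition, reduction mod 2 plays the same role:
\[
\pi\colon \mathbb{Z}\to\mathbb{Z}/2\mathbb{Z},\qquad \pi(n)=n\bmod 2,\qquad
\ker(\pi)=2\mathbb{Z},\qquad \mathbb{Z}/2\mathbb{Z}\cong C_{2}.
\]
```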
2019-08-23 00:02:42
http://basilisk.fr/sandbox/easystab/diffmat_dif1D.m
# sandbox/easystab/diffmat_dif1D.m

This code is just like diffmat.m, but instead of building the differentiation matrices explicitly, we use the function dif1D.m. This is the way we do it in most of the codes, since producing the differentiation matrices is one of the fundamental elements of Easystab.

    clear all; clf

    % parameters
    L=2*pi; % domain length
    N=15;   % number of points

# dif1D.m

The output arguments are:

• D: the first-derivative differentiation matrix
• DD: the second-derivative differentiation matrix
• wx: the integration weights (they are used to integrate a function on the grid; please see integration_2D.m to learn how to build and understand these weights)
• x: the locations of the grid cells (increasing with cell number, that is, x(i+1)>x(i))

The input arguments are:

• 'fd': the interpolation method. Here 'fd' means finite differences. The other possible choices are: 'fp' for periodic finite differences (see periodicals%20boudaries.m for explanations), 'cheb' for Chebyshev polynomials, and 'fou' for periodic Fourier.
• 0: the start of the grid.
• L: the end of the grid.
• N: the number of grid cells.
• 3: the number of elements in the finite-difference stencil. Here, 3 means centered stencils that use the value on the right and the value on the left of the point at which we want to compute the derivative. If you choose 5, you will use two points on the left and two points on the right (an approximation of higher order). For choices other than finite differences this number is not used, since spectral differentiation matrices are dense (there are no zeros). To get a better idea of how this is done, please see dif1D.m.
    % building the differentiation matrices
    [D,DD,wx,x]=dif1D('fd',0,L,N,3);

# Validation of the computation

    % test of the derivatives
    f=cos(x);
    fp=-sin(x); % exact first derivative, for comparison
    plot(x,cos(x),'b.-',x,-sin(x),'r.-',x,D*cos(x),'r.--',x,-cos(x),'m.-',x,DD*cos(x),'m.--');
    legend('cos','-sin','D*cos(x)','-cos','DD*cos')
    print('-dsvg','diffmat.svg'); % save the figure

And here is the figure that is produced by the code:
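(The figure itself is not reproduced here.) For readers without Octave/MATLAB at hand, the same validation idea can be sketched in plain Python: build a first-derivative matrix with a 3-point centered stencil (with simple one-sided stencils at the ends, unlike dif1D.m's treatment, so this is an illustration rather than a port) and check it on cos(x):

```python
import math

def diff_matrix_centered(n, h):
    """First-derivative differentiation matrix on a uniform grid:
    centered 3-point stencil in the interior, one-sided at the ends."""
    D = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        D[i][i - 1] = -1 / (2 * h)
        D[i][i + 1] = 1 / (2 * h)
    # first-order one-sided stencils at the two boundaries
    D[0][0], D[0][1] = -1 / h, 1 / h
    D[-1][-2], D[-1][-1] = -1 / h, 1 / h
    return D

L = 2 * math.pi
N = 101
h = L / (N - 1)
x = [i * h for i in range(N)]
f = [math.cos(v) for v in x]

D = diff_matrix_centered(N, h)
df = [sum(row[j] * f[j] for j in range(N)) for row in D]

# at interior points the centered stencil matches -sin(x) to O(h^2)
err = max(abs(df[i] + math.sin(x[i])) for i in range(1, N - 1))
```

With N = 101 points on [0, 2π] the interior error is well below 1e-2, which is the second-order convergence the 3-point stencil promises.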
2019-11-13 10:43:12
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-10-10-1-matrices-and-systems-of-equations-10-1-exercises-page-711/80
Algebra and Trigonometry 10th Edition

The reduced row-echelon form of the augmented matrix is: $\left[\begin{array}{ccc|c} 1 & 5 & 0 & 0\\ 0 & 0 & 1 & 3\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right]$

This corresponds to the equations: x + 5y = 0 and z = 3. Solving for x in terms of y gives: x = -5y, z = 3. Introducing a parameter a for the free variable y, the solution is: x = -5a, y = a, z = 3, where a is any real number.
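As a quick sanity check, the parametric solution can be substituted back into the equations read off from the reduced row-echelon form:

```python
# Substitute x = -5a, y = a, z = 3 back into the two equations
# from the reduced row-echelon form: x + 5y = 0 and z = 3.
def solution(a):
    return (-5 * a, a, 3)

for a in (-2.0, 0.0, 1.5, 7.0):
    x, y, z = solution(a)
    assert x + 5 * y == 0   # first row of the RREF
    assert z == 3           # second row of the RREF
```

Every choice of the parameter a satisfies both equations, which is exactly what the one-parameter family of solutions means.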
2022-09-24 15:55:07
https://mathspace.co/textbooks/syllabuses/Syllabus-835/topics/Topic-18444/subtopics/Subtopic-250625/?textbookIntroActiveTab=overview&activeTab=interactive
# 1.01 The real number system

## Interactive practice questions

Which of the following is the set of all integers?

A. $\left\{0,1,2,3,\ldots\right\}$

B. $\left\{\ldots,-3,-2,-1,0,1,2,3,\ldots\right\}$

C. $\left\{1,2,3,\ldots\right\}$

D. $\left\{\ldots,-3,-2,-1,0\right\}$

Fill in the blank so that the resulting statement is true.

Using the diagram, determine whether the following statement is true or false: $\sqrt{13}$ is a rational number.

### Outcomes

#### 8.2 Describe the relationships between the subsets of the real number system
2022-01-26 09:10:42
http://math.stackexchange.com/questions/6557/proper-notation-for-distinct-sets
# Proper notation for distinct sets

Consider two sets, neither of which is a subset of the other (for example, the set of prime numbers and the set of odd integers). Is there a specific notation that combines the meanings of both ⊄ and ⊅? Also, is there a notation meaning that two sets are completely distinct and share no elements in common (for example, the sets of odd and even integers)? ≠ seems very inappropriate, and I'd like to avoid it.

Context: my area is actually computer science. I'm using this to refine my vocabulary in order to teach my students about subnetting (which is basically about sets and subsets of IP addresses), and to write some exercises for them. I'll probably use the words incomparable and disjoint, as suggested.

There is also no special symbol to denote the fact that two sets are disjoint; we simply write $A\cap B=\emptyset$.

Let $P$ be a poset (partially ordered set), i.e. a set equipped with a relation $\le$ that is reflexive, antisymmetric and transitive. Then one says that $x$ and $y$ are comparable if $x \le y$ or $y \le x$; otherwise they are incomparable, sometimes written $x \| y$. A chain of $P$ is a subset in which every two elements are comparable, and dually, an antichain is a subset in which every two elements are incomparable.
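Both notions (incomparable under inclusion, and disjoint) are easy to illustrate with Python's built-in set operations; the small ranges below are just for the example:

```python
# Primes vs odd integers below 50: neither set contains the other,
# i.e. they are incomparable in the subset order.
odds = set(range(1, 50, 2))
primes = {n for n in range(2, 50) if all(n % d for d in range(2, n))}

assert not primes <= odds   # 2 is prime but not odd
assert not odds <= primes   # 9 is odd but not prime

# Odds vs evens: disjoint sets, A ∩ B = ∅ (no special symbol needed).
evens = set(range(0, 50, 2))
assert odds.isdisjoint(evens)
assert odds & evens == set()
```

The subset comparisons `<=` and the `isdisjoint` method mirror the ⊆ and $A\cap B=\emptyset$ notations directly.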
2015-05-22 18:15:23
https://math.stackexchange.com/questions/2319667/cofinite-discrete-subspace-of-a-t1-space
# Cofinite/discrete subspace of a T1 space?

Let $(X,\tau)$ be a $T_1$-space, where $X$ is an infinite set. Then $(X,\tau)$ has a subspace homeomorphic to $(\mathbb{N},\tau_2)$, where $\tau_2$ is either the finite-closed (cofinite) topology or the discrete topology.

Update: as suggested by Daniel Fischer's comment, a solution is presented in the answer section.

• Of course the assertion is wrong for finite $T_1$-spaces. If you have an infinite $T_1$-space $X$, consider a countably infinite subspace $Y$ of it. Find a subspace of $Y$ that satisfies the conclusion. If $Y$ doesn't have an infinite discrete subspace, then … – Daniel Fischer Jun 12 '17 at 14:01
• Sorry, my bad! I forgot to add that $X$ is an infinite set! So my aim should be to first see under what condition there is a discrete subspace? And if there isn't, then there must necessarily be a cofinite subspace homeomorphic to the given space? @DanielFischer – Mann Jun 12 '17 at 14:20
• Precise conditions for the existence of an infinite discrete subspace are hard. So the strategy is to show that if $X$ doesn't have a subspace of one type, then it must have a subspace of the other type. – Daniel Fischer Jun 12 '17 at 14:33
• I will try this approach! Thank you. – Mann Jun 12 '17 at 14:38
• I suppose you assume that $X$ is countably infinite? (Otherwise, pick a countable subspace and work with that.) There's a bit more to do. Starting by splitting $X$ into $A = \{ x \in X : \{x\} \text{ is open}\}$ and $S = X\setminus A$ isn't bad. If $A$ is infinite, we're done. Otherwise, there's more work to do, e.g. if $X \cong C \times D$ where $C$ is $\mathbb{N}$ with the cofinite topology and $D$ is discrete and contains at least two points, then $S = X$, but we need a proper subspace of $X$. Do you already know about connectedness? – Daniel Fischer Jun 12 '17 at 18:05

Suppose that (a countable space) $X$ contains no infinite cofinite subspace. We will show it has a countable discrete subspace.
So we start by finding a non-empty open subset $U_0$ of $X$ such that $X \setminus U_0$ is infinite, and we pick $x_0 \in U_0$. We then choose points $x_0, x_1, x_2, \ldots$ and open sets $U_0, U_1, U_2, \ldots$ by recursion, such that after choosing $x_0, \ldots, x_{n-1}$ and $U_0, \ldots, U_{n-1}$ we have, for all $0 \le i,j \le n-1$:

• $x_i \in U_i$.
• $x_j \notin U_i$ for $j \neq i$.
• $A_{n-1} = X\setminus \bigcup\{U_m: 0\le m \le n-1\}$ is infinite.

Then we note that $A_{n-1}$ does not have the cofinite topology, so it has a relatively open non-empty subset $O$ with infinite complement in $A_{n-1}$, and so by $T_1$-ness (we have to avoid finitely many points) we have $U_n$ open in $X$ with $A_{n-1} \setminus U_n$ infinite and $U_n \cap \{x_0,\ldots,x_{n-1}\} = \emptyset$. This defines $U_n$, and finally we pick $x_n \in U_n \cap A_{n-1}$.

The last condition is needed to keep the recursion going, and the first two show that the set $Y := \{x_n: n \in \mathbb{N}\}$ is an infinite discrete subspace of $X$ (as $U_n \cap Y = \{x_n\}$ for all $n$, so all singletons are open in $Y$).

• Hi, I just wanted to ask: is there any possible extension of my own answer, in a similar recursive way? And how did you approach this construction; can you tell me what motivated you? I am not getting the intuition yet. I am guessing it has to do with removing open sets whose complement is infinite, but by removing all such open sets we must arrive at a finite cofinite topology, which is just a discrete space? There must not exist "so many" open sets to be removed, so that I get an infinite cofinite space? – Mann Jun 13 '17 at 13:10
• @Mann so you want to assume no discrete subspace to get a cofinite one? The intuition for my construction is clear: pick an open set and a point in it, pick a second one that misses the first, a third one that misses the first two, etc.
In order not to let it end, we keep needing infinitely many fresh points, which is why we use non-cofiniteness. – Henno Brandsma Jun 13 '17 at 13:18

• I see, I get yours now! But yes, I'd also like a hint for a construction in my case, if that's possible. I think I was able to show that the space $(X,\tau)$ must have only finitely many singleton open sets for an infinite discrete subspace not to exist. So I can get rid of these by removing a finite number of points. Then I have to prove that the subspace I obtain must have an infinite cofinite subspace. – Mann Jun 13 '17 at 13:27
• The proof I gave is almost exactly that of the original paper, so I think it's pretty optimal. – Henno Brandsma Jun 14 '17 at 18:15
• @Andreo we use recursion, not induction. It's justified by the ZFC axioms. – Henno Brandsma Mar 3 '18 at 7:06

Lemma: If $(X,\tau)$ is a $T_1$ space and some finite set $A=\{x_1,x_2,x_3,\ldots,x_n\}$ is open in $\tau$, then each singleton $\{x_i\}$ with $i\in \{1,2,3,\ldots,n\}$ is also open in $\tau$.

We know that the set $X\setminus A=X \setminus \bigcup^{n}_{i=1}\{x_i\}$ is closed in $\tau$. Since $(X,\tau)$ is a $T_1$ space, every singleton set is closed in $\tau$, and a finite union of closed sets is closed. Hence, for each $i$, the set $\left(X \setminus \bigcup^{n}_{j=1}\{x_j\}\right)\cup \bigcup_{j\neq i}\{x_j\}$, a union of one closed set with the $n-1$ closed singletons other than $\{x_i\}$, is closed. But this set is exactly $X\setminus \{x_i\}$, so $X\setminus \{x_i\}$ is closed, which implies that $\{x_i\}$ is open.

By contraposition: if finitely many singleton sets are each not open in $\tau$, then no union of them is open either.
Proof: I am going to assume that $(X,\tau)$ has no countably infinite discrete subspace, and note that a space is not discrete iff there exists some $x \in X$ for which $\{x\}$ is not open. Now, if there were only finitely many such points, I could always remove them to obtain a discrete space; hence there cannot be finitely many. Writing this condition more formally, denote the set $A=\{x : \{x\} \text{ is not open in } \tau\}$. Then it is certain that $X\setminus A$ is finite: if it were infinite, then either the space $(X \setminus A, \tau_s)$ would be a countably infinite discrete space, or one of its subspaces would be. Now consider the subspace $(A,\tau_a)$, obtained after removing finitely many singleton sets. By the lemma, this space cannot have any finite open set, and our only remaining problem is those open sets which are infinite and have an infinite complement as well. Let $\mathcal{O}=\{O\in \tau_a \mid A\setminus O \text{ is infinite}\}$. Since there are no finite open sets, we know that for any $O_i,O_j \in \tau_a$, the intersection $O_i\cap O_j$ is either $\emptyset$ or another infinite $O_k\in \mathcal{O}$. This implies that the set $\mathcal{O}$ can be partitioned by an equivalence relation: the sets $\bigcap \mathcal{O}_k$ are pairwise disjoint for each $k \in K$, with $K$ some index set. Picking one such $\bigcap \mathcal{O}_k$ and inducing a topology on it gets rid of any open set which has an infinite complement. The resulting space is either uncountably or countably infinite, and a subspace of it can be taken if needed. This completes the proof that a cofinite subspace necessarily exists.
2020-01-18 09:18:24
https://iacr.org/cryptodb/data/paper.php?pubkey=30137
## CryptoDB

### Paper: Koblitz Curves over Quadratic Fields

Authors: Thomaz Oliveira, Julio López, Daniel Cervantes-Vázquez, Francisco Rodríguez-Henríquez

DOI: 10.1007/s00145-018-9294-z

In this work, we retake an old idea that Koblitz presented in his landmark paper (Koblitz, in: Proceedings of CRYPTO 1991, LNCS, vol 576, Springer, Berlin, pp 279–287, 1991), where he suggested the possibility of defining anomalous elliptic curves over the base field ${\mathbb {F}}_4$. We present a careful implementation of the base and quadratic field arithmetic required for computing the scalar multiplication operation on such curves. We also introduce two ordinary Koblitz-like elliptic curves defined over ${\mathbb {F}}_4$ that are equipped with efficient endomorphisms. To the best of our knowledge, these endomorphisms have not been reported before. In order to achieve a fast reduction procedure, we adopted a redundant trinomial strategy that embeds elements of the field ${\mathbb {F}}_{4^{m}}$, with $m$ a prime number, into a ring of higher order defined by an almost irreducible trinomial. We also suggest a number of techniques that allow us to take full advantage of the native vector instructions of high-end microprocessors. Our software library achieves the fastest timings reported for the computation of the timing-protected scalar multiplication on Koblitz curves, and competitive timings with respect to the speed records established recently for scalar multiplication over binary and prime fields.

##### BibTeX

@article{jofc-2019-30137,
2022-05-28 07:00:35
https://www.physicsforums.com/threads/reynolds-transport-theorem-derivation-sign-enquiry.921484/
# Reynolds Transport Theorem Derivation Sign Enquiry

#### williamcarter

Hi, our lecturer explained the Reynolds transport theorem and its derivation, but I don't see where the minus sign on control surface 1 comes from. He said that the area vector points in the opposite direction compared with the system, but I can't visualise this in our picture. Can you please help me understand why we have the negative sign on control surface 1, while at the end of the theorem everything is positive?

The pictures are attached below. Fig 1 illustrates the minus-sign enquiry with regard to control surface 1: why is it negative here, and not positive? Fig 2 illustrates the final form of the Reynolds transport theorem, where all signs are positive; why? How do we know when the area vector is in the same direction as the system velocity, and when it is opposite?

#### Chestermiller (Mentor)

$\vec{dA}$ is defined as the scalar dA multiplied by an outwardly directed unit vector from the control volume.

#### williamcarter

Alright, thank you, but why is control surface 1 negative while control surface 3 is positive? How can we sense that? How do we know when it is in the same direction as the system velocity and when it is opposite? Thanks.

#### Chestermiller (Mentor)

The dot product of the outwardly directed unit vector with the velocity vector is either positive or negative.
If flow is entering, it comes out negative; if flow is leaving, it comes out positive.

#### williamcarter

Thank you. How come control surface 1 ends up positive in figure 2? Initially control surface 1 was negative, but in picture 2, in the final form of the Reynolds transport theorem, it ends up positive.

#### Chestermiller (Mentor)

I don't understand the figures too well. Just remember what I said, and you will be OK.

#### williamcarter

Alright, but why? In both cases the outwardly directed unit vector and the velocity vector are parallel. What makes it positive or negative?

#### Chestermiller (Mentor)

In a region where fluid flow is entering the control volume, the unit outwardly directed normal dotted with the (inwardly directed) velocity vector is negative.
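The mentor's rule can be made concrete with a toy calculation (the vectors below are illustrative, not taken from the thread's figures): for uniform flow in the +x direction, the outward normal on the upstream face points in the −x direction, so the dot product is negative there, and positive on the downstream face.

```python
# Sign of v · n̂ with n̂ the outward unit normal of the control volume:
# negative where flow enters, positive where flow leaves.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = (2.0, 0.0, 0.0)          # fluid velocity, flowing in the +x direction
n_inlet = (-1.0, 0.0, 0.0)   # outward normal on the upstream face (surface 1)
n_outlet = (1.0, 0.0, 0.0)   # outward normal on the downstream face (surface 3)

assert dot(v, n_inlet) < 0   # flow entering: the flux term carries a minus sign
assert dot(v, n_outlet) > 0  # flow leaving: the flux term is positive
```

This is why the final form of the theorem can be written with a single surface integral of v · dA carrying a uniform plus sign: the dot product itself supplies the minus sign on inflow surfaces.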
2019-06-19 20:58:46
https://spatial-model-editor.readthedocs.io/en/stable/reference/parameter-fitting.html
# Parameter optimization

To start parameter optimization, click on Tools->Optimization, or use the keyboard shortcut Ctrl+P. The parameter optimization interface displays a history of the best sets of parameters and their fitness, along with images of the optimization targets (species concentrations or concentration rates of change) compared with the resulting values from the current best set of parameters.

• Targets and results (left side)
  • If you have multiple targets, you can choose which one to display from the dropdown list
  • The upper image shows the target concentration (or concentration rate of change) values
  • The lower image shows the actual values from the simulation using the current best set of parameters
• Fitness and parameters (right side)
  • The upper plot shows the history of fitness values during the optimization (lower is better)
  • The lower plot shows the history of the best set of parameters at each optimization iteration
• Setup
  • Click the Setup button to change the optimization settings
  • See the Optimization Setup section for more details
• Start
  • Click the Start button to start optimizing
  • You can stop and continue the optimization

When you are happy with your parameters, you can press Stop and then OK. You will then be asked whether you want to apply the best parameters found during the optimization to your model. The optimization settings (but not any existing optimization history or results) are also saved in the model.
## Optimization Setup

• Algorithm
  • Choose which optimization algorithm to use
  • See the Pagmo algorithms documentation for more details about the available algorithms
• The number of populations to evolve in parallel
  • Typically this would be equal to the number of available CPU cores
• Population
  • The population size to evolve on each thread
  • Typically, the more parameters being optimized, the larger this should be
• Targets
  • A list of target concentrations (or rates of change of concentrations) to optimize for
  • See the Optimization Target section for more details
• Parameters
  • See the Optimization Parameter section for more details

## Optimization Parameter

• Parameter
  • This can be a model parameter or a reaction parameter
• Lower bound
  • The minimum allowed value this parameter can take
• Upper bound
  • The maximum allowed value this parameter can take

## Optimization Target

• Species
  • The species to target
• Target type
  • Either the concentration, or the concentration rate of change
• Simulation time
  • The timepoint in the simulation at which this target should apply
• Target values
  • The desired spatial distribution of values
  • If not specified, this defaults to zero everywhere in the compartment
  • See the Optimization Target Image Import section for more details
• Difference type
  • How to compare the target $$t$$ with the result $$r$$
  • Absolute: $$|r - t|$$
  • Relative: $$|r - t|/|t + \epsilon|$$
• Weight
  • The relative importance of this target
  • The cost function for this target is multiplied by this weight
  • Only relevant when there are multiple targets
• Epsilon
  • The $$\epsilon$$ parameter in the relative difference measure
  • Avoids infinities caused by dividing by zero

## Optimization Target Image Import

The spatial distribution of the target concentration (or rate of change of concentration) can be imported from a grayscale image in the same way as initial species concentrations.
- Minimum concentration
  - The concentration corresponding to a black pixel in the image
- Maximum concentration
  - The concentration corresponding to a white pixel in the image
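As a sketch of the linear mapping this implies (black pixel maps to the minimum concentration, white pixel to the maximum), assuming 8-bit grayscale values; this is an illustration, not the tool's actual code:

```python
import numpy as np

def pixel_to_concentration(pixels, c_min, c_max):
    """Map 8-bit grayscale values (0 = black, 255 = white) linearly onto
    the [c_min, c_max] concentration range, mirroring the minimum/maximum
    concentration settings described above."""
    return c_min + (np.asarray(pixels, dtype=float) / 255.0) * (c_max - c_min)

# e.g. a black pixel maps to c_min, a white pixel to c_max,
# and mid-gray values map linearly in between
```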
http://mathhelpforum.com/statistics/153125-finding-number-standard-deviations-between-sample-population-mean-print.html
# finding number of standard deviations between sample and population mean

• August 8th 2010, 10:13 PM
Jskid

Suppose that the mean mileage of all pickup trucks of a particular make is unknown and will be estimated from a random sample of 36 trucks. The sample mean mileage was found to be 6 litres per 100 km. Assume that the standard deviation of the mileages of all trucks is 0.5 litres per 100 km. There is a 0.95 probability that the mean of the sample will be within z SDs of the unknown mileage. What is z?

I believe I find the z-score associated with 0.95, but I'm not sure if I need to do something with the sample mean or SD.

• August 9th 2010, 02:52 AM

Quote: Originally Posted by Jskid (the question above)

A 0.95 probability that the sample mean is within z standard deviations means that you need to locate the two 2.5% regions at the tails. Hence, you need the z-score corresponding to 0.975 instead. Also, as you have the "population" SD,

$Z = \dfrac{\bar{x} - \mu}{\sigma/\sqrt{36}}$

where $\bar{x}$ is the sample mean and $\mu$ is the population mean. The above calculation is taken for a sample mean greater than the population mean; the graph is symmetrical, however.

• August 9th 2010, 07:17 PM
Jskid

The area under the normal distribution curve up to z = -1.96 is 2.5%, and up to z = 1.96 it is 97.5%. The answer to my question is 1.96. So why do I need $Z = \dfrac{\bar{x} - \mu}{\sigma/\sqrt{36}}$?
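The 1.96 figure in this exchange can be checked numerically; a quick sketch (not from the thread) using Python's standard library:

```python
# Numeric check of the thread's answer: the two-sided 95% z value is the
# 0.975 quantile of the standard normal (2.5% in each tail), and the
# standard error of the sample mean is sigma/sqrt(n) with sigma = 0.5, n = 36.
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)   # about 1.96
sigma, n = 0.5, 36
std_error = sigma / sqrt(n)       # 0.5 / 6
half_width = z * std_error        # 95% margin around the sample mean
```

The formula $Z = (\bar{x} - \mu)/(\sigma/\sqrt{n})$ is what lets you turn the abstract z value into an interval in the original units (litres per 100 km), which is why the responder brings it up.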
https://www.quantopian.com/posts/improved-minimum-variance-portfolio
Improved Minimum Variance Portfolio

After a nudge in the right direction from Grant Kiehne I made a new version of a minimum variance portfolio. It uses a Lagrangian to solve for the weights that minimize the variance. I used returns on the vwap for everything and the portfolio was randomly generated. I'm not a huge fan of shorts, so I added context.allow_shorts; if False, the algo will use non-negative least squares regression to solve the Lagrangian. I also added the ability to re-invest cash, but I'm thinking there's a cleaner way to do it. Ideas? I'm pretty happy with the results I've seen so far, I'm gonna have to port it over to min data at some point to start paper trading. Does anybody have any idea how to stop that negative cash dip on the first day it invests? I commented out what I tried (line 51).

Dave

[Backtest attached; results widget not captured]

    import numpy as np
    import pandas as pd
    from scipy.optimize import nnls

    def initialize(context):
        context.stocks = [
            sid(438), sid(9693), sid(6704), sid(6082), sid(3971),
            sid(1882), sid(7543), sid(4080), sid(2427), sid(20394),
            sid(20277), sid(10649), sid(7800), sid(2293), sid(6683),
            sid(21757), sid(17800), sid(19658), sid(14518), sid(22406)
        ]
        # Cash buffer: reduces investment on a per trade basis
        context.cash_buffer = 0
        # Days between rebalance
        context.rebal_days = 10
        # Number of observations used in calculations.
        context.nobs = 300
        context.re_invest_cash = True
        context.allow_shorts = False
        context.trading_days = 0  # day counter (reconstructed; referenced but not defined in the post as extracted)
        context.data = {i: [] for i in context.stocks}

    def handle_data(context, data):
        P = context.portfolio
        record(cash=P.cash, positions=P.positions_value)
        for i in data.keys():
            context.data[i].append(data[i].vwap(1))
            if len(context.data[i]) > context.nobs:
                context.data[i] = context.data[i][-1*context.nobs:]
        context.trading_days += 1
        n = context.trading_days  # ('n' was undefined in the post as extracted)
        if n > 50 and not n % context.rebal_days:
            vwaps = pd.DataFrame(context.data)
            context.df = vwaps.pct_change().dropna()
            weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)
            # logs = {sym.symbol: weights[sym] for sym in context.stocks if weights[sym] != 0}
            # log.info(logs)
            for sym in context.stocks:
                old_pos = P.positions[sym].amount
                if context.re_invest_cash:  # and context.trading_days != context.rebal_days:
                    new_pos = re_invest_order(sym, weights, context, data)
                else:
                    new_pos = int((weights[sym] * P.starting_cash / data[sym].price)*(1 - context.cash_buffer))
                order(sym, new_pos - old_pos)

    def re_invest_order(sym, weights, context, data):
        P = context.portfolio
        if P.cash > 0:
            new_pos = int((weights[sym] * (P.starting_cash + P.cash) / data[sym].price)*(1 - context.cash_buffer))
        else:
            new_pos = int((weights[sym] * P.starting_cash / data[sym].price)*(1 - context.cash_buffer))
        return new_pos

    def min_var_weights(returns, allow_shorts=False):
        '''
        Returns a dictionary of sid:weight pairs.
        allow_shorts=True  --> minimum variance weights returned
        allow_shorts=False --> least squares regression finds non-negative
                               weights that minimize the variance
        '''
        cov = 2*returns.cov()
        x = np.ones(len(cov) + 1)
        x[-1] = 1.0
        p = lagrangize(cov)
        if allow_shorts:
            weights = np.linalg.solve(p, x)[:-1]
        else:
            weights = nnls(p, x)[0][:-1]
        return {sym: weights[i] for i, sym in enumerate(returns)}

    def lagrangize(df):
        '''
        Utility function to format a DataFrame in order to solve a Lagrangian system.
        '''
        df = df
        df['lambda'] = np.ones(len(df))
        z = np.ones(len(df) + 1)
        x = np.ones(len(df) + 1)
        z[-1] = 0.0
        x[-1] = 1.0
        m = [i for i in df.as_matrix()]
        m.append(z)
        return pd.DataFrame(np.array(m))

This backtest was created using an older version of the backtester. Please re-run this backtest to see results using the latest backtester. Learn more about the recent changes. There was a runtime error.

19 responses

This fixes the crazy oscillations in the positions and cash. It performs a lot better too: over 400% on the same portfolio.

    def re_invest_order(sym, weights, context, data):
        P = context.portfolio
        if P.cash > 0:
            new_pos = int((weights[sym] * (P.positions_value + P.cash) / data[sym].price)*(1 - context.cash_buffer))
        else:
            new_pos = int((weights[sym] * P.positions_value / data[sym].price)*(1 - context.cash_buffer))
        return new_pos

Very interesting result with the non-negative least squares! I noticed that there seems to be a lot of churn in the portfolio with the NNLS approach; perhaps a slower re-calibration period may save on transaction costs/slippage if you included those, especially if you are considering paper trading. I would be interested to see some measure of portfolio turnover comparing the two methods. Just to note, I adjusted your function min_var_weights a bit where you called lagrangize; it's not really necessary, the version with short sales that I posted originally has the closed form built into it. If you look at the difference in the calculated weights you can see that it is within machine precision.

    def min_var_weights(returns, allow_shorts=False):
        '''
        Returns a dictionary of sid:weight pairs.
        allow_shorts=True  --> minimum variance weights returned
        allow_shorts=False --> least squares regression finds non-negative
                               weights that minimize the variance
        '''
        cov = 2*returns.cov()
        x = np.ones(len(cov) + 1)
        x[-1] = 1.0
        p = lagrangize(cov)
        if allow_shorts:
            # this is exactly the same as the line below (weights = ...) with less mess
            weights2 = np.linalg.solve(p, x)[:-1]
        else:
            weights2 = nnls(p, x)[0][:-1]
        precision = np.asmatrix(inv(returns.cov()))
        oned = np.ones((len(returns.columns), 1))
        weights = precision*oned / (oned.T*precision*oned)
        # these are nearly equal, check the logs
        weights2 = np.asmatrix(weights2).T
        log.info(weights - weights2)
        return {sym: weights2[i] for i, sym in enumerate(returns)}

Excellent coding style by the way, much neater than mine, nice job!

Nice, they are definitely within machine precision, that's good confirmation that they're both working as intended. I separated out the Lagrange function because they show up in other problems; I thought the portability might help at some point down the road. Thanks for the style props, I borrowed the enumerates and a couple other things from you, so you're complimenting yourself too. I fixed the churn in the portfolio by using the positions value rather than starting cash to re-balance. It smoothed everything out and made the returns a lot higher. I also added a couple lines to invest evenly on the first day while the number of observations builds up. That way there's no lag getting into the market, and it fixed the initial negative cash dip I was getting before. This test has those changes with the same portfolio and settings. Something worth noting is that I ran a test where everything went horribly wrong with the non-negative approach. It was caused by the algo going .999999 into a single security and 1e-15 ish in some others. I'm thinking that a ceiling needs to be put on the weight that can be given to a security, and any weights within machine precision of 0 need to be set to 0. I would be a poor man due to an aberration if that happened in a live situation.
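The observation that the Lagrangian solve and the closed-form minimum-variance weights agree to machine precision can be reproduced outside the backtester; this standalone sketch uses a made-up 3-asset covariance matrix (the numbers are illustrative, not market data):

```python
import numpy as np

# Made-up covariance matrix for three assets (illustration only).
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
k = len(sigma)

# Lagrangian system: [[2*Sigma, 1], [1^T, 0]] @ [w; lam] = b
p = np.zeros((k + 1, k + 1))
p[:k, :k] = 2 * sigma
p[:k, -1] = 1.0   # lambda column
p[-1, :k] = 1.0   # budget constraint row: weights sum to 1
b = np.zeros(k + 1)
b[-1] = 1.0       # right-hand side: zeros with a final 1 (the budget constraint)
w_lagrange = np.linalg.solve(p, b)[:-1]

# Closed form: w = Sigma^-1 1 / (1^T Sigma^-1 1)
ones = np.ones(k)
pre = np.linalg.inv(sigma) @ ones
w_closed = pre / pre.sum()
```

The two weight vectors should match to floating-point precision, and both sum to one by construction.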
[Backtest attached; results widget not captured]

    import numpy as np
    import pandas as pd
    from scipy.optimize import nnls

    def initialize(context):
        context.stocks = [
            sid(438), sid(9693), sid(6704), sid(6082), sid(3971),
            sid(1882), sid(7543), sid(4080), sid(2427), sid(20394),
            sid(20277), sid(10649), sid(7800), sid(2293), sid(6683),
            sid(21757), sid(17800), sid(19658), sid(14518), sid(22406)
        ]
        # Cash buffer: reduces investment on a per trade basis
        context.cash_buffer = 0.
        # Days between rebalance
        context.rebal_days = 10
        # Number of observations used in calculations.
        context.nobs = 250
        context.re_invest_cash = True
        context.allow_shorts = False
        context.trading_days = 0  # day counter (reconstructed; referenced but not defined in the post as extracted)
        context.data = {i: [] for i in context.stocks}

    def handle_data(context, data):
        P = context.portfolio
        context.trading_days += 1
        n = context.trading_days  # ('n' was undefined in the post as extracted)
        if n == 1:
            # invest evenly on the first day while observations build up
            # (guard reconstructed from the author's description above)
            w = 1./len(context.stocks)
            for sym in context.stocks:
                shares = (P.starting_cash * w // data[sym].price)*(1 - context.cash_buffer)
                order(sym, shares)
        record(cash=P.cash, positions=P.positions_value)
        for i in data.keys():
            context.data[i].append(data[i].vwap(1))
            if len(context.data[i]) > context.nobs:
                context.data[i] = context.data[i][-1*context.nobs:]
        if n > 50 and not n % context.rebal_days:
            vwaps = pd.DataFrame(context.data)
            context.df = vwaps.pct_change().dropna()
            weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)
            logs = {sym.symbol: weights[sym] for sym in context.stocks if weights[sym] != 0}
            log.info(logs)
            for sym in context.stocks:
                old_pos = P.positions[sym].amount
                if context.re_invest_cash:  # and context.trading_days != context.rebal_days:
                    new_pos = re_invest_order(sym, weights, context, data)
                else:
                    new_pos = int((weights[sym] * max(P.positions_value, P.starting_cash) / data[sym].price)*(1 - context.cash_buffer))
                order(sym, new_pos - old_pos)

    def re_invest_order(sym, weights, context, data):
        P = context.portfolio
        if P.cash > 0:
            new_pos = int((weights[sym] * (P.positions_value + P.cash) / data[sym].price)*(1 - context.cash_buffer))
        else:
            new_pos = int((weights[sym] * P.positions_value / data[sym].price)*(1 - context.cash_buffer))
        return new_pos

    def min_var_weights(returns, allow_shorts=False):
        '''
        Returns a dictionary of sid:weight pairs.
        allow_shorts=True  --> minimum variance weights returned
        allow_shorts=False --> least squares regression finds non-negative
                               weights that minimize the variance
        '''
        cov = 2*returns.cov()
        x = np.ones(len(cov) + 1)
        x[-1] = 1.0
        p = lagrangize(cov)
        if allow_shorts:
            weights = np.linalg.solve(p, x)[:-1]
        else:
            weights = nnls(p, x)[0][:-1]
        return {sym: weights[i] for i, sym in enumerate(returns)}

    def lagrangize(df):
        '''
        Utility function to format a DataFrame in order to solve a Lagrangian system.
        '''
        df = df
        df['lambda'] = np.ones(len(df))
        z = np.ones(len(df) + 1)
        x = np.ones(len(df) + 1)
        z[-1] = 0.0
        x[-1] = 1.0
        m = [i for i in df.as_matrix()]
        m.append(z)
        return pd.DataFrame(np.array(m))

This backtest was created using an older version of the backtester. Please re-run this backtest to see results using the latest backtester. Learn more about the recent changes. There was a runtime error.

How was the list of stocks chosen for this portfolio? Is it a random list? Has anyone tried other sets of symbols? Is there a process for picking a stock list for this algorithm if I wanted to try other portfolios? Sarvi

Ya it's a random list, I'm not sure what they are. You can just replace the context.stocks list with whatever securities you want. Just make sure they were traded throughout the time period you're testing. There's no method to it, just change that one list. Also, I think there's a mistake in this version.
    # in min_var_weights
    x = np.ones(len(cov) + 1)
    # should be
    x = np.array([0.]*(len(cov)+1))

The vector being solved for should be all zeros and one 1. For some reason this doesn't seem to change the outcome of the solutions, but it's worth mentioning. I would clone the updated version above, it got rid of the oscillations in the cash and positions.

I want a huge global universe of stocks, i.e. 1000, not just a cute handful of 15 stocks!

Backtests let you use up to 100. Look at DollarVolumeUniverse in the docs, it gives you a sample of the market.

Has anyone tried modifying this to run on minute data to replicate the results of daily data? You can't do paper trading until the algo can handle minute data. The simple tip I got about converting was to add this to the beginning:

    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    log.info('{hour}:{minute}:'.format(hour=exchange_time.hour, minute=exchange_time.minute))
    if exchange_time.hour != 10 or exchange_time.minute != 0:
        return

But that doesn't reproduce the results of daily data though. Sarvi

I have a version that runs on minute data; however, the backtester tells me the sids I used with daily data are not available in minute mode. I'll run a full test on minute data, then replicate it with a daily test after to avoid this. I'll post the results later.

Marcus, I have warned about this before here, but I repeat myself: let's assume you even wanted to use 43 stocks. One pitfall of this Markowitz type of analysis is the curse of dimensionality. You have 43 stocks, which means that you are estimating a covariance matrix containing 43*42/2 + 43 = 946 parameters utilizing just 252 observations. This is complete statistical nonsense, the parameters are not even uniquely identified. You are going to want to increase the length of your observation window to tighten the standard errors on the covariance estimates. 90,000 days would probably be sufficient for nice tight estimation with 43 assets.
Obviously that size of estimation window is unreasonable, which is why the next step would be to implement dimension reduction techniques. An exogenous factor model could work nicely, the principal components approach is another methodology, or the factors on demand methodology of Meucci. Asymptotic principal components from Connor and Korajczyk 1986 could be an excellent solution with a very large number of assets.

Wayne, I forgot that you wanted to see a comparison of the negative vs. non-negative results. I have found that the non-negative is a little bit more consistent, but neither one is conclusively better. For example, this backtest does 473% while allowing shorts and about 153% without, but the test above does 400% with the non-negative approach and about 180% when allowing negative weights. The code for this test works with minute and daily data, and the shorting can be toggled also. I made another version that solves the problem on page 12 of the paper in your comment above; it is more erratic though, especially with negative weights, and it goes all in on one security without the negative weights.

[Backtest attached; results widget not captured]

    import numpy as np
    import pandas as pd
    from scipy.optimize import nnls
    from pytz import timezone

    #
    # This is a version of the min variance algo without history().
    # It can run on daily or min data, just change context.period to
    # anything but 'daily'
    #

    def initialize(context):
        context.period = 'daily'
        context.stocks = [
            sid(12662), sid(2174), sid(5787), sid(4963), sid(21612), sid(1062),
            sid(5719), sid(7580), sid(8857), sid(2000), sid(5938), sid(19917),
            sid(12267), sid(3695), sid(14520), sid(2663), sid(24074), sid(12350)
        ]
        context.cash_buffer = 0.0
        context.EPSILON = 1e-8
        context.max_weight = 2
        context.order_cushion = .2
        # Days between rebalance
        context.rebal_days = 30
        context.today = None
        # Number of observations used in calculations.
        context.nobs = 250
        context.min_nobs = 50
        context.re_invest_cash = 1
        context.allow_shorts = 1
        context.invest_position = 1  # Uses starting cash if False, only used if not re-investing cash
        context.data = {i: [] for i in context.stocks}

    def handle_data(context, data):
        # data = history(bar_count=context.nobs, frequency
        dt = get_datetime().astimezone(timezone('US/Eastern'))
        P = context.portfolio
        w = 1./len(context.stocks)
        for sym in context.stocks:
            shares = (P.starting_cash * w // data[sym].price)*(1 - context.cash_buffer)
            order(sym, shares)
        record(cash=P.cash, positions=P.positions_value)
        if not dt.day == context.today:
            append_data(context, data)
            context.today = dt.day
        vwaps = pd.DataFrame(context.data)
        context.df = vwaps.pct_change().dropna()
        weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)
        for i in weights:
            if weights[i] > context.max_weight:
                weights = catch_max_weight(weights, context, data)
            if abs(weights[i]) < context.EPSILON:
                weights[i] = 0
        # logs = {sym.symbol: weights[sym] for sym in context.stocks if weights[sym] != 0}
        # log.info(logs)
        orders = {}
        for sym in context.stocks:
            old_pos = P.positions[sym].amount
            if context.re_invest_cash:  # and context.trading_days != context.rebal_days:
                new_pos = re_invest_order(sym, weights, context, data)
            else:
                if context.invest_position:
                    new_pos = int((1 - context.cash_buffer) * (weights[sym] * max(P.positions_value, P.starting_cash, P.cash)) / data[sym].price)
                else:
                    new_pos = int((1 - context.cash_buffer) * (weights[sym] * max(P.starting_cash, P.cash)) / data[sym].price)
            cost = abs(data[sym].price * new_pos)
            orders[sym] = (old_pos, new_pos, data[sym].price, weights[sym], cost)
        if check_orders(orders, context, data):
            for sym in orders:
                order_target(sym, orders[sym][1])

    def check_orders(orders, context, data):
        P = context.portfolio
        total_cost = sum([orders[i][-1] for i in orders])
        if total_cost > (P.positions_value + P.cash) * (1 + context.order_cushion):
            return False
        log.info('\nTotal Order: $ %s\n' % total_cost)
        log.info({i.symbol: orders[i] for i in orders})
        return True

    def catch_max_weight(weights, context, data):
        log.debug("\nOVER WEIGHT LIMIT\n%s" % {x.symbol: weights[x] for x in data.keys()})
        return {i: 1. / len(context.stocks) for i in weights}

    def append_data(context, data):
        for i in context.stocks:
            context.data[i].append(data[i].vwap(1))
            if len(context.data[i]) > context.nobs:
                context.data[i] = context.data[i][-1*context.nobs:]

    def re_invest_order(sym, weights, context, data):
        P = context.portfolio
        if P.cash > 0:
            new_pos = int((1 - context.cash_buffer) * (weights[sym] * (P.positions_value + P.cash)) / data[sym].price)
        else:
            new_pos = int((1 - context.cash_buffer) * (weights[sym] * P.positions_value) / data[sym].price)
        return new_pos

    def min_var_weights(returns, allow_shorts=False):
        '''
        Returns a dictionary of sid:weight pairs.
        allow_shorts=True  --> minimum variance weights returned
        allow_shorts=False --> least squares regression finds non-negative
                               weights that minimize the variance
        '''
        cov = 2*returns.cov()
        x = np.array([0.]*(len(cov)+1))
        # x = np.ones(len(cov) + 1)
        x[-1] = 1.0
        p = lagrangize(cov)
        if allow_shorts:
            weights = np.linalg.solve(p, x)[:-1]
        else:
            weights = nnls(p, x)[0][:-1]
        return {sym: weights[i] for i, sym in enumerate(returns)}

    def lagrangize(df):
        '''
        Utility function to format a DataFrame in order to solve a Lagrangian system.
        '''
        df = df
        df['lambda'] = np.ones(len(df))
        z = np.ones(len(df) + 1)
        x = np.ones(len(df) + 1)
        z[-1] = 0.0
        x[-1] = 1.0
        m = [i for i in df.as_matrix()]
        m.append(z)
        return pd.DataFrame(np.array(m))

    # (the def line of this helper, a rebalance-time check, was lost in extraction)
        if n < context.min_nobs or n % context.rebal_days:
            return False
        if context.period == 'daily':
            return True
        # Converts all time-zones into US EST to avoid confusion
        loc_dt = get_datetime().astimezone(timezone('US/Eastern'))
        if loc_dt.hour == 12 and loc_dt.minute == 0:
            return True
        else:
            return False

This backtest was created using an older version of the backtester. Please re-run this backtest to see results using the latest backtester. Learn more about the recent changes. There was a runtime error.

@ Wayne Nilsen: "One pitfall of this Markowitz type of analysis is the curse of dimensionality. You have 43 stocks which means that you are estimating a covariance matrix containing 43*42/2 + 43 = 946 parameters utilizing just 252 observations. This is complete statistical nonsense, the parameters are not even uniquely identified. You are going to want to increase the length of your observation window to tighten the standard errors on the covariance estimates. 90,000 days would probably be sufficient for nice tight estimation with 43 assets. Obviously that size of estimation window is unreasonable which is why the next step would be to implement dimension reduction techniques. An exogenous factor model could work nicely, the principal components approach is another methodology or, the factors on demand methodology of Meucci. Asymptotic principal components from Connor and Korajczyk 1986 could be an excellent solution with a very large number of assets."

Interesting.
I have to read it again to understand it :-) Could you please have a look at my paper: Davidsson, M (2013) "The Use of Least Squares in the Optimization of Investment Portfolios", International Journal of Management, Vol 30, No 10, pp. 310-321, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2366298. Let me know what you think?!

I agree with Wayne: the dimension of the covariance matrix is O(n^2), but the number of observations is O(n). However, I don't agree that the dimension of the covariance matrix is exactly n*(n-1)/2 + n; there are hidden relations there. So when there are too many equities in the portfolio, we need more, or many more, observations for the estimate to have statistical meaning. On the other hand, in a time series model, too many observations means using outdated data to do the prediction, which does not make sense either. Thus we cannot make the number of observations unlimited. The final conclusion: we need to restrain the number of equities in the portfolio, or this Markowitz type of analysis hits the curse of dimensionality.

Also, I rewrote part of the code to fix the negative cash dip on the first day problem. And I implemented the SPDR sectors as the portfolio picks, since they have low variance, which is our trading idea. I was going to use the set_universe feature of Quantopian, but we only have one implementation, DollarVolumeUniverse, which picks stocks by liquidity. However, if we pick high-liquidity equities and all of them have high volatility, that is not our trading strategy idea, which is to minimize the variance. Second, some of these liquid stocks are new IPOs, such as EEM, QID, and some are going into default, such as MER, OIH; they strongly influence our portfolio picks and rebalancing.
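To make the parameter-count arithmetic concrete, and to sketch the simplest form of the shrinkage/dimension-reduction remedy the discussion above points toward (the shrinkage intensity delta = 0.2 here is a hand-picked assumption, not an optimal Ledoit-Wolf-style estimate):

```python
import numpy as np

def n_cov_params(p):
    # Unique entries in a p x p covariance matrix:
    # p*(p-1)/2 off-diagonal pairs plus p variances.
    return p * (p - 1) // 2 + p

def shrunk_covariance(returns, delta=0.2):
    """Blend the sample covariance toward a scaled-identity target,
    trading a little bias for much lower estimation variance."""
    s = np.cov(returns, rowvar=False)
    target = (np.trace(s) / s.shape[0]) * np.eye(s.shape[0])
    return (1 - delta) * s + delta * target

# 43 assets -> 946 covariance parameters from only ~252 daily observations,
# which is the mismatch the thread is warning about.
```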
[Backtest attached; results widget not captured]

    import numpy as np
    import pandas as pd
    from scipy.optimize import nnls
    import math

    def initialize(context):
        context.stocks = [sid(19654), sid(19655), sid(19656), sid(19657), sid(19658),
                          sid(19659), sid(19660), sid(19661), sid(19662)]
        # Cash buffer: reduces investment on a per trade basis
        context.cash_buffer = 0
        # Days between rebalance
        context.rebal_days = 10
        # Number of observations used in calculations.
        context.nobs = 100
        context.re_invest_cash = True
        context.allow_shorts = False
        context.trading_days = 0  # day counter (reconstructed; referenced but not defined in the post as extracted)
        context.data = {i: [] for i in context.stocks}

    def handle_data(context, data):
        P = context.portfolio
        record(cash=P.cash, positions=P.positions_value)
        for i in data.keys():
            context.data[i].append(data[i].vwap(1))
            if len(context.data[i]) > context.nobs:
                context.data[i] = context.data[i][-1*context.nobs:]
        context.trading_days += 1
        n = context.trading_days  # ('n' was undefined in the post as extracted)
        if n > context.nobs and not n % context.rebal_days:
            vwaps = pd.DataFrame(context.data)
            context.df = vwaps.pct_change().dropna()
            weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)
            for sym in context.stocks:
                old_pos = P.positions[sym].amount
                if context.re_invest_cash:
                    new_pos = re_invest_order(sym, weights, context, data)
                else:
                    new_pos = math.floor((weights[sym] * P.starting_cash / data[sym].price)*(1 - context.cash_buffer))
                order(sym, new_pos - old_pos)

    def re_invest_order(sym, weights, context, data):
        P = context.portfolio
        if context.trading_days > context.nobs + 10:
            new_pos = math.floor((weights[sym] * (P.cash + P.starting_cash) / data[sym].price)*(1 - context.cash_buffer))
        else:
            new_pos = math.floor((weights[sym] * P.starting_cash / data[sym].price)*(1 - context.cash_buffer))
        return new_pos

    def min_var_weights(returns, allow_shorts=False):
        cov = returns.cov()
        x = np.ones(len(cov) + 1)
        x[-1] = 1.0
        p = lagrangize(cov)
        if allow_shorts:
            weights = np.linalg.solve(p, x)[:-1]
        else:
            weights = nnls(p, x)[0][:-1]
        return {sym: weights[i] for i, sym in enumerate(returns)}

    def lagrangize(df):
        df['lambda'] = np.ones(len(df))
        z = np.ones(len(df) + 1)
        z[-1] = 0.0
        m = [i for i in df.as_matrix()]
        m.append(z)
        return pd.DataFrame(np.array(m))

This backtest was created using an older version of the backtester. Please re-run this backtest to see results using the latest backtester. Learn more about the recent changes. There was a runtime error.

Last word: I don't believe this trading strategy can print money, since it is very conservative, but it should beat the market. The original poster only gets such good results because the picks are lucky. Say I long these 20 equities evenly on the first day and do nothing; my return is still 251%. :)

[Backtest attached; results widget not captured]

    def initialize(context):
        context.stocks = [
            sid(438), sid(9693), sid(6704), sid(6082), sid(3971),
            sid(1882), sid(7543), sid(4080), sid(2427), sid(20394),
            sid(20277), sid(10649), sid(7800), sid(2293), sid(6683),
            sid(21757), sid(17800), sid(19658), sid(14518), sid(22406)
        ]
        context.day = 1

    def handle_data(context, data):
        if context.day == 1:
            for sid in context.stocks:
                pos = context.portfolio.cash/20/data[sid].close_price
                order(sid, pos)
        context.day += 1

This backtest was created using an older version of the backtester. Please re-run this backtest to see results using the latest backtester. Learn more about the recent changes. There was a runtime error.

I agree that this is a long strategy. The game would be to put promising securities in there and just leave it. It has flaws but is a good start towards something better. There are new versions of this in this post: one is minute data with history(), and the other works like this one but can run on minute or daily data. They also don't have the churn in the cash and positions. There were a couple of mistakes in this version, see my earlier comments for the fixes.

I think there is a lot of promise in combining this with something like Frank Grossman's process of selecting a portfolio on the basis of relative strength and volatility. See http://02f27c6.netsolhost.com/papers/darwin-adaptive-asset-allocation.pdf for some qualitative ideas.

I have played around with this strategy and found that when I include assets that are uncorrelated or negatively correlated, the weighting algo overallocates weight to the assets that tend to be less or negatively correlated. I adjusted the max weight, but it only seems to register a log output. Any suggestions on how to adjust it to restrict overallocation?
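One hedged way to restrict overallocation after the solver runs (an illustrative sketch, not code from the algo above; the eps threshold and the fall-back-to-equal-weights behavior on breaching max_weight are assumptions):

```python
def sanitize_weights(weights, max_weight=0.5, eps=1e-8):
    """Clean solver output before trading on it (illustrative sketch)."""
    # Zero out anything within machine precision of 0.
    cleaned = {s: (0.0 if abs(w) < eps else w) for s, w in weights.items()}
    # If the solver went (nearly) all-in on one security, fall back to
    # equal weights rather than trade a degenerate solution.
    if max(cleaned.values()) > max_weight:
        n = len(cleaned)
        return {s: 1.0 / n for s in cleaned}
    # Renormalize so the surviving weights still sum to 1.
    total = sum(cleaned.values())
    return {s: w / total for s, w in cleaned.items()}
```

Note that simply capping a weight and renormalizing can push it back above the cap, which is why this sketch rejects the whole solution instead, mirroring the equal-weight fallback in catch_max_weight above.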
https://tex.stackexchange.com/questions/249209/how-to-alter-the-headnote-in-this-thesis-class-provided-in-sharelatex/249407
# How to alter the headnote in this thesis class provided in sharelatex?

I am using a graduate-thesis template in ShareLaTeX, but there is a problem with the headnote: it shows the "Physical Constants" headnote for all the other chapters. How do I make relevant headnotes for each chapter?

```latex
%%
%% This is file `Thesis.cls', based on 'ECSthesis.cls', by Steve R. Gunn
%% generated with the docstrip utility.
%%
%% Created by Steve R. Gunn, modified by Sunil Patel: www.sunilpatel.co.uk
\NeedsTeXFormat{LaTeX2e}[1996/12/01]
\ProvidesClass{Thesis}
              [2007/22/02 v1.0 LaTeX document class]
\def\baseclass{book}
\DeclareOption*{\PassOptionsToClass{\CurrentOption}{\baseclass}}
\def\@checkoptions#1#2{
  \edef\@curroptions{\@ptionlist{\@currname.\@currext}}
  \@tempswafalse
  \@tfor\@this:=#2\do{
    \@expandtwoargs\in@{,\@this,}{,\@curroptions,}
    \ifin@ \@tempswatrue \@break@tfor \fi}
  \let\@this\@empty
  \if@tempswa \else \PassOptionsToClass{#1}{\baseclass}\fi
}
\@checkoptions{11pt}{{10pt}{11pt}{12pt}}
\PassOptionsToClass{a4paper}{\baseclass}
\ProcessOptions\relax
\newcommand\bhrule{\typeout{------------------------------------------------------------------------------}}
\newcommand\Declaration[1]{
  \btypeout{Declaration of Authorship}
  \thispagestyle{plain}
  \null\vfil
  %\vskip 60\p@
  \begin{center}{\huge\bf Declaration of Authorship\par}\end{center}
  %\vskip 60\p@
  {\normalsize #1}
  \vfil\vfil\null
  %\cleardoublepage
}
\newcommand\btypeout[1]{\bhrule\typeout{\space #1}\bhrule}
\def\today{\ifcase\month\or
  January\or February\or March\or April\or May\or June\or
  July\or August\or September\or October\or November\or December\fi
  \space \number\year}
\usepackage{setspace}
\onehalfspacing
\setlength{\parindent}{0pt}
\setlength{\parskip}{2.0ex plus0.5ex minus0.2ex}
\usepackage{vmargin}
\setmarginsrb { 1.5in} % left margin
              { 0.6in} % top margin
              { 1.0in} % right margin
              { 0.8in} % bottom margin
              {   9pt} % foot height
              { 0.3in} % foot sep
\raggedbottom
\setlength{\topskip}{1\topskip \@plus 5\p@}
\doublehyphendemerits=10000 % No consecutive line hyphens.
\brokenpenalty=10000        % No broken words across columns/pages.
\widowpenalty=9999          % Almost no widows at bottom of page.
\clubpenalty=9999           % Almost no orphans at top of page.
\interfootnotelinepenalty=9999 % Almost never break footnotes.
\usepackage{fancyhdr}
\pagestyle{fancy}
\renewcommand{\chaptermark}[1]{\btypeout{\thechapter\space #1}\markboth{\@chapapp\ \thechapter\ #1}{\@chapapp\ \thechapter\ #1}}
\renewcommand{\sectionmark}[1]{}
\renewcommand{\subsectionmark}[1]{}
\def\cleardoublepage{\clearpage\if@twoside \ifodd\c@page\else
  \hbox{}
  \thispagestyle{empty}
  \newpage
  \if@twocolumn\hbox{}\newpage\fi\fi\fi}
\usepackage{amsmath,amsfonts,amssymb,amscd,amsthm,xspace}
\theoremstyle{plain}
\newtheorem{example}{Example}[chapter]
\newtheorem{theorem}{Theorem}[chapter]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{axiom}[theorem]{Axiom}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\usepackage[centerlast,small,sc]{caption}
\setlength{\captionmargin}{20pt}
\newcommand{\fref}[1]{Figure~\ref{#1}}
\newcommand{\tref}[1]{Table~\ref{#1}}
\newcommand{\eref}[1]{Equation~\ref{#1}}
\newcommand{\cref}[1]{Chapter~\ref{#1}}
\newcommand{\sref}[1]{Section~\ref{#1}}
\newcommand{\aref}[1]{Appendix~\ref{#1}}
\renewcommand{\topfraction}{0.85}
\renewcommand{\bottomfraction}{.85}
\renewcommand{\textfraction}{0.1}
\renewcommand{\dbltopfraction}{.85}
\renewcommand{\floatpagefraction}{0.75}
\renewcommand{\dblfloatpagefraction}{.75}
\setcounter{topnumber}{9}
\setcounter{bottomnumber}{9}
\setcounter{totalnumber}{20}
\setcounter{dbltopnumber}{9}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage[scriptsize]{subfigure}
\usepackage{booktabs}
\usepackage{rotating}
\usepackage{listings}
\usepackage{lstpatch}
\lstset{captionpos=b,
        frame=tb,
        basicstyle=\scriptsize\ttfamily,
        showstringspaces=false,
        keepspaces=true}
\lstdefinestyle{matlab} {
        language=Matlab,
        keywordstyle=\color{blue},
        stringstyle=\color[rgb]{0.7,0,0}
}
\usepackage[pdfpagemode={UseOutlines},bookmarks=true,bookmarksopen=true,
   bookmarksopenlevel=0,bookmarksnumbered=true,hypertexnames=false,
\pdfstringdefDisableCommands{
   \let\\\space
}
\newcommand*{\supervisor}[1]{\def\supname{#1}}
\newcommand*{\examiner}[1]{\def\examname{#1}}
\newcommand*{\degree}[1]{\def\degreename{#1}}
\newcommand*{\authors}[1]{\def\authornames{#1}}
\newcommand*{\university}[1]{\def\univname{#1}}
\newcommand*{\UNIVERSITY}[1]{\def\UNIVNAME{#1}}
\newcommand*{\department}[1]{\def\deptname{#1}}
\newcommand*{\DEPARTMENT}[1]{\def\DEPTNAME{#1}}
\newcommand*{\group}[1]{\def\groupname{#1}}
\newcommand*{\GROUP}[1]{\def\GROUPNAME{#1}}
\newcommand*{\faculty}[1]{\def\facname{#1}}
\newcommand*{\FACULTY}[1]{\def\FACNAME{#1}}
\newcommand*{\subject}[1]{\def\subjectname{#1}}
\newcommand*{\keywords}[1]{\def\keywordnames{#1}}
\supervisor {}
\examiner {}
\degree {}
\authors {}
\university {\texorpdfstring{\href{http://www.bracu.ac.bd/}
                {BRAC University}}
                {BRAC University}}
\UNIVERSITY {\texorpdfstring{\href{http://www.bracu.ac.bd/ }
                {BRAC University}}
                {BRAC University}}
                {Department of Mathematics and Natural Sciences}}
                {Department of Mathematics and Natural Sciences}}
                {Department of Mathematics and Natural Sciences}}
                {Department of Mathematics and Natural Sciences}}
\group {\texorpdfstring{\href{Research Group Web Site URL Here (include http://)}
                {Research Group Name}}
                {Research Group Name}}
\GROUP {\texorpdfstring{\href{Research Group Web Site URL Here (include http://)}
                {RESEARCH GROUP NAME (IN BLOCK CAPITALS)}}
                {RESEARCH GROUP NAME (IN BLOCK CAPITALS)}}
\faculty {\texorpdfstring{\href{Faculty Web Site URL Here (include http://)}
\FACULTY {\texorpdfstring{\href{Faculty Web Site URL Here (include http://)}
\subject {}
\keywords {}
\renewcommand\maketitle{
  \btypeout{Title Page}
  \hypersetup{pdftitle={\@title}}
  \hypersetup{pdfsubject=\subjectname}
  \hypersetup{pdfauthor=\authornames}
  \hypersetup{pdfkeywords=\keywordnames}
  \thispagestyle{empty}
  \begin{titlepage}
    \let\footnotesize\small
    \let\footnoterule\relax
    \let \footnote \thanks
    \setcounter{footnote}{0}
    \null\vfil
    \vskip 60\p@
    \begin{center}
      \setlength{\parskip}{0pt}
      {\large\textbf{\UNIVNAME}\par}
      \vfill
      {\huge \bf \@title \par}
      \vfill
      {\LARGE by \par}
      \smallskip
      {\LARGE \authornames \par}
      \vfill
      {\large A thesis submitted in partial fulfillment for the \par}
      {\large degree of Bachelors of Science \par}
      \bigskip
      \bigskip
      {\large in the \par}
      {\large \facname \par}
      {\large \deptname \par}
      \bigskip
      \bigskip
      \bigskip
      {\Large \@date \par}
      \bigskip
    \end{center}
    \par
    \@thanks
    \vfil\null
  \end{titlepage}
  \setcounter{footnote}{0}%
  \global\let\thanks\relax
  \global\let\maketitle\relax
  \global\let\@thanks\@empty
  \global\let\@author\@empty
  \global\let\@date\@empty
  \global\let\@title\@empty
  \global\let\title\relax
  \global\let\author\relax
  \global\let\date\relax
  \global\let\and\relax
  \cleardoublepage
}
\newenvironment{abstract}
{
  \btypeout{Abstract Page}
  \thispagestyle{empty}
  \null\vfil
  \begin{center}
    \setlength{\parskip}{0pt}
    {\normalsize \UNIVNAME \par}
    \bigskip
    {\huge{\textit{Abstract}} \par}
    \bigskip
    {\normalsize \facname \par}
    {\normalsize \deptname \par}
    \bigskip
    {\normalsize Bachelors of Science in Physics\par}
    \bigskip
    {\normalsize\bf \@title \par}
    \medskip
    {\normalsize by \authornames \par}
    \bigskip
  \end{center}
}
{
  \vfil\vfil\vfil\null
  \cleardoublepage
}
\setcounter{tocdepth}{6}
\newcounter{dummy}
\refstepcounter{dummy}
\renewcommand\tableofcontents{
\begin{spacing}{1}{
  \setlength{\parskip}{1pt}
  \if@twocolumn
    \@restonecoltrue\onecolumn
  \else
    \@restonecolfalse
  \fi
  \chapter*{\contentsname
      \@mkboth{
          \MakeUppercase\contentsname}{\MakeUppercase\contentsname}}
  \@starttoc{toc}
  \if@restonecol\twocolumn\fi
  \cleardoublepage
}\end{spacing}
}
\renewcommand\listoffigures{
\btypeout{List of Figures}
\begin{spacing}{1}{
  \setlength{\parskip}{1pt}
  \if@twocolumn
    \@restonecoltrue\onecolumn
  \else
    \@restonecolfalse
  \fi
  \chapter*{\listfigurename
      \@mkboth{\MakeUppercase\listfigurename}
              {\MakeUppercase\listfigurename}}
  \@starttoc{lof}
  \if@restonecol\twocolumn\fi
  \cleardoublepage
}\end{spacing}
}
\renewcommand\listoftables{
\btypeout{List of Tables}
\begin{spacing}{1}{
  \setlength{\parskip}{1pt}
  \if@twocolumn
    \@restonecoltrue\onecolumn
  \else
    \@restonecolfalse
  \fi
  \chapter*{\listtablename
      \@mkboth{
          \MakeUppercase\listtablename}{\MakeUppercase\listtablename}}
  \@starttoc{lot}
  \if@restonecol\twocolumn\fi
  \cleardoublepage
}\end{spacing}
}
\newcommand\listsymbolname{Abbreviations}
\usepackage{longtable}
\newcommand\listofsymbols[2]{
\btypeout{\listsymbolname}
\chapter*{\listsymbolname
      \@mkboth{
          \MakeUppercase\listsymbolname}{\MakeUppercase\listsymbolname}}
\begin{longtable}[c]{#1}#2\end{longtable}\par
\cleardoublepage
}
\newcommand\listconstants{Physical Constants}
\usepackage{longtable}
\newcommand\listofconstants[2]{
\btypeout{\listconstants}
\chapter*{\listconstants
      \@mkboth{
          \MakeUppercase\listconstants}{\MakeUppercase\listconstants}}
\begin{longtable}[c]{#1}#2\end{longtable}\par
\cleardoublepage
}
\newcommand\listnomenclature{Symbols}
\usepackage{longtable}
\newcommand\listofnomenclature[2]{
\btypeout{\listnomenclature}
\chapter*{\listnomenclature
      \@mkboth{
          \MakeUppercase\listnomenclature}{\MakeUppercase\listnomenclature}}
\begin{longtable}[c]{#1}#2\end{longtable}\par
\cleardoublepage
}
\newcommand\acknowledgements[1]{
\btypeout{Acknowledgements}
\thispagestyle{plain}
\begin{center}{\huge{\textit{Acknowledgements}} \par}\end{center}
{\normalsize #1}
\vfil\vfil\null
}
\newcommand\dedicatory[1]{
\btypeout{Dedicatory}
\thispagestyle{plain}
\null\vfil
\vskip 60\p@
\begin{center}{\Large \sl #1}\end{center}
\vfil\null
\cleardoublepage
}
\renewcommand\backmatter{
  \if@openright
    \cleardoublepage
  \else
    \clearpage
  \fi
  \btypeout{\bibname}
  \@mainmatterfalse}
\endinput
%%
%% End of file `Thesis.cls'.
```
And this is the thesis.tex file; I provide both because I have no idea which section of the files needs to be altered.

```latex
%% ----------------------------------------------------------------
%% Thesis.tex -- MAIN FILE (the one that you compile with LaTeX)
%% ----------------------------------------------------------------

% Set up the document
\documentclass[a4paper, 11pt, oneside]{Thesis}  % Use the "Thesis" style, based on the ECS Thesis style by Steve Gunn
\graphicspath{Figures/}  % Location of the graphics files (set up for graphics to be in PDF format)

% Include any extra LaTeX packages required
\usepackage[square, numbers, comma, sort&compress]{natbib}  % Use the "Natbib" style for the references in the Bibliography
\usepackage{verbatim}  % Needed for the "comment" environment to make LaTeX comments
\usepackage{vector}  % Allows "\bvec{}" and "\buvec{}" for "blackboard" style bold vectors in maths
\hypersetup{urlcolor=blue, colorlinks=true}  % Colours hyperlinks in blue, but this can be distracting if there are many links.
\usepackage{pgfplots}
\pgfplotsset{width=12cm,compat=1.9}
%\usepgfplotslibrary{external}
%\tikzexternalize

%% ----------------------------------------------------------------
\begin{document}
\frontmatter  % Begin Roman style (i, ii, iii, iv...) page numbering

% Set up the Title Page
\title {Anisotropic charged stellar models in Generalized Tolman IV spacetime}
\authors {\texorpdfstring
            {\href{h.tazkera@gmail.com}{Tazkera Haque}}
            {Tazkera Haque}
          }
\date {\today}
\subject {}
\keywords {}

\maketitle
%% ----------------------------------------------------------------

\setstretch{1.3}  % It is better to have smaller font and larger line spacing than the other way round

% Define the page headers using the FancyHdr package and set up for one-sided printing
\rhead{\thepage}  % Sets the right side header to show the page number
\pagestyle{fancy}  % Finally, use the "fancy" page style to implement the FancyHdr headers

%% ----------------------------------------------------------------
% Declaration Page required for the Thesis, your institution may give you a different text to place here
\Declaration{

I, Tazkera Haque, declare that this thesis titled, `Anisotropic charged stellar models in Generalized Tolman IV spacetime' and the work presented in it are my own. I confirm that:

\begin{itemize}
\item[\tiny{$\blacksquare$}] This work was done wholly or mainly while in candidature for a bachelor degree at this University.
\item[\tiny{$\blacksquare$}] Where I have consulted the published work of others, this is always clearly attributed.
\item[\tiny{$\blacksquare$}] Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.
\item[\tiny{$\blacksquare$}] I have acknowledged all main sources of help.
\item[\tiny{$\blacksquare$}] Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself.
\\
\end{itemize}

Signed:\\
\rule[1em]{25em}{0.5pt}  % This prints a line for the signature

Date:\\
\rule[1em]{25em}{0.5pt}  % This prints a line to write the date
}
\clearpage  % Declaration ended, now start a new page

%% ----------------------------------------------------------------
% The "Funny Quote Page"
\pagestyle{empty}  % No headers or footers for the following pages

\null\vfill
% Now comes the "Funny Quote", written in italics
\textit{``General relativity predicts that time ends inside black holes because the gravitational collapse squeezes matter to infinite density.''}

\begin{flushright}
Lee Smolin
\end{flushright}

\vfill\vfill\vfill\vfill\vfill\vfill\null
\clearpage  % Funny Quote page ended, start a new page
%% ----------------------------------------------------------------

% The Abstract Page
\abstract{
With the presence of electric charge and pressure anisotropy some anisotropic stellar models have been developed. An algorithm presented by Herrera et al. (Phys. Rev. D 77, 027502 (2008)) to generate static spherically symmetric anisotropic solutions of Einstein's equations has been used to derive relativistic anisotropic charged fluid spheres. In the absence of pressure anisotropy the fluid spheres reduce to some well-known Generalized Tolman IV exact metrics. The astrophysical significance of the resulting equations of state (EOS) for a particular case (Wyman-Leibovitz-Adler) for the anisotropic charged matter distribution has been discussed. The interior matter pressure, energy-density, and the adiabatic sound speed are expressed in terms of simple algebraic functions. The constant parameters involved in the solution have been set so that certain physical criteria are satisfied. Physical analysis shows that the relativistic stellar structure obtained in this work may reasonably model an electrically charged compact star, whose energy density associated with the electric fields is of the same order of magnitude as the energy density of fluid matter itself, like electrically charged bare strange quark stars.
}
\clearpage  % Abstract ended, start a new page
%% ----------------------------------------------------------------

\setstretch{1.3}  % Reset the line-spacing to 1.3 for body text (if it has changed)

% The Acknowledgements page, for thanking everyone
\acknowledgements{
I would like to give special thanks to God Almighty, without whom I would not have had the motivation and strength to complete this thesis. I would also like to thank my family for their moral support and encouragement. I thank my honorable faculty member Mohammad Hassan Murad for giving me the opportunity to work under his research project and for helping me through with this thesis. I thank the entire stellar models research group whose papers I have frequently used and consulted, and in particular Dr. Mofiz Uddin Ahmed, for their continuous support, advice, and help in the writing of this thesis. Thanks to you all and God bless.
}
\clearpage  % End of the Acknowledgements
%% ----------------------------------------------------------------

\pagestyle{fancy}  % The page style headers have been "empty" all this time, now use the "fancy" headers as defined before to bring them back

%% ----------------------------------------------------------------
\lhead{\emph{List of Figures}}  % Set the left side page header to "List of Figures"
\listoffigures  % Write out the List of Figures

%% ----------------------------------------------------------------
\lhead{\emph{List of Tables}}  % Set the left side page header to "List of Tables"
\listoftables  % Write out the List of Tables

%% ----------------------------------------------------------------
\setstretch{1.5}  % Set the line spacing to 1.5, this makes the following tables easier to read
\clearpage  % Start a new page
\listofsymbols{ll}  % Include a list of Abbreviations (a table of two columns)
{
% \textbf{Acronym} & \textbf{W}hat (it) \textbf{S}tands \textbf{F}or \\
\textbf{LAH} & \textbf{L}ist \textbf{A}bbreviations \textbf{H}ere \\
}

%% ----------------------------------------------------------------
\clearpage  % Start a new page
\lhead{\emph{Physical Constants}}  % Set the left side page header to "Physical Constants"
\listofconstants{lrcl}  % Include a list of Physical Constants (a four column table)
{
% Constant Name & Symbol & = & Constant Value (with units) \\
Speed of Light & $c$ & $=$ & $2.997\ 924\ 58\times10^{8}\ \mbox{m\,s}^{-1}$ (exact)\\
}

%% ----------------------------------------------------------------
\clearpage  % Start a new page
%% ----------------------------------------------------------------
% End of the preamble, contents and lists of things

% Begin the Dedication page
\setstretch{1.3}  % Return the line spacing back to 1.3
\dedicatory{This thesis is dedicated to m}

%% ----------------------------------------------------------------
\mainmatter  % Begin normal, numeric (1,2,3...) page numbering
\pagestyle{fancy}  % Return the page headers back to the "fancy" style

% Include the chapters of the thesis, as separate files
% Just uncomment the lines as you write the chapters
\input{Chapters/Chapter1}  % Introduction
%\input{Chapters/Chapter2}  % Background Theory
%\input{Chapters/Chapter3}  % Experimental Setup
%\input{Chapters/Chapter4}  % Experiment 1
%\input{Chapters/Chapter5}  % Experiment 2
%\input{Chapters/Chapter6}  % Results and Discussion
%\input{Chapters/Chapter7}  % Conclusion

%% ----------------------------------------------------------------
% Now begin the Appendices, including them as separate files
\appendix  % Cue to tell LaTeX that the following 'chapters' are Appendices
\input{Appendices/AppendixA}  % Appendix Title
%\input{Appendices/AppendixB}  % Appendix Title
%\input{Appendices/AppendixC}  % Appendix Title

\backmatter
%% ----------------------------------------------------------------
\label{Bibliography}
\bibliographystyle{unsrtnat}  % Use the "unsrtnat" BibTeX style for formatting the Bibliography
\bibliography{bibliography}  % The references (bibliography) information are stored in the file named "Bibliography.bib"
\end{document}  % The End
%% ----------------------------------------------------------------
```

• Try if `\lhead{\rightmark}` between `\mainmatter` and `\pagestyle{fancy}` does what you need. – Ignasi Jun 8 '15 at 14:34
• @Ignasi Wanna make that an answer? – Johannes_B Jun 9 '15 at 12:05
• @Johannes_B Done! – Ignasi Jun 9 '15 at 14:52
• @Johannes_B Do you know where lstpatch.sty comes from? – cfr Feb 1 '17 at 15:33
• @cfr Please see the chat ping. – Johannes_B Feb 1 '17 at 17:00

Thesis.cls defines how the left page headings should be:

```latex
\usepackage{fancyhdr}
\pagestyle{fancy}
```

but in thesis.tex these definitions are overwritten every time `\lhead{}`, `\lhead{\emph{Physical Constants}}` or similar commands are used. Therefore, in the mainmatter, although `\pagestyle{fancy}` forces the use of the official style again, `\lhead{\emph{Physical Constants}}` remains the content of the left headers. If you want to restore the original style, the command

```latex
\lhead[\rm\thepage]{\fancyplain{}{\sl{\rightmark}}}
```

should be declared before the first use of `\pagestyle{fancy}` in the mainmatter.

• Thanks for answering. By the way, that `Thesis.cls` haunts me at night. – Johannes_B Jun 9 '15 at 15:10
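Putting the answer's command in context, a minimal sketch of where the restore line would go in thesis.tex (the `\lhead[...]` line is the one quoted in the answer; its exact form in your copy of the class may differ):

```latex
% ... end of front matter (lists, dedication, etc.) ...

\mainmatter  % Begin normal, numeric (1,2,3...) page numbering

% Restore the class's chapter-based header before re-enabling "fancy",
% otherwise the last front-matter \lhead{...} (e.g. "Physical Constants")
% persists into the chapters:
\lhead[\rm\thepage]{\fancyplain{}{\sl{\rightmark}}}
\pagestyle{fancy}

\input{Chapters/Chapter1}
```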
https://www.math.wisc.edu/wiki/index.php/Colloquia
# Mathematics Colloquium

All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

## Spring 2015

Go to next semester, Fall 2015.

| date | speaker | title | host(s) |
| --- | --- | --- | --- |
| January 12 (special time: 3PM) | Botong Wang (Notre Dame) | Cohomology jump loci of algebraic varieties | Maxim |
| January 14 (special time: 11AM) | Jayadev Athreya (UIUC) | Counting points for random (and not-so-random) geometric structures | Ellenberg |
| January 15 (special time: 3PM) | Chi Li (Stony Brook) | On Kahler-Einstein metrics and K-stability | Sean Paul |
| January 21 | Jun Kitagawa (Toronto) | Regularity theory for generated Jacobian equations: from optimal transport to geometric optics | Feldman |
| January 23 (special room/time: B135, 2:30PM) | Nicolas Addington (Duke) | Recent developments in rationality of cubic 4-folds | Ellenberg |
| Monday January 26, 4pm | Minh Binh Tran (CAM) | Nonlinear approximation theory for the homogeneous Boltzmann equation | Jin |
| January 30 | Tentatively reserved for possible interview | | |
| Monday, February 2, 4pm | Afonso Bandeira (Princeton) | Tightness of convex relaxations for certain inverse problems on graphs | Ellenberg |
| February 6 | Morris Hirsch (UC Berkeley and UW Madison) | Fixed points of Lie transformation group, and zeros of Lie algebras of vector fields | Stovall |
| February 13 | Mihai Putinar (UC Santa Barbara, Newcastle University) | Quillen's property of real algebraic varieties | Budišić |
| February 20 | David Zureick-Brown (Emory University) | Diophantine and tropical geometry | Ellenberg |
| Monday, February 23, 4pm | Jayadev Athreya (UIUC) | The Erdos-Szusz-Turan distribution for equivariant point processes | Mari-Beffa |
| February 27 | Allan Greenleaf (University of Rochester) | Erdos-Falconer Configuration problems | Seeger |
| March 6 | Larry Guth (MIT) | Introduction to incidence geometry | Stovall |
| March 13 | Cameron Gordon (UT-Austin) | Left-orderability and 3-manifold groups | Maxim |
| March 20 | Aaron Naber (Northwestern) | Regularity and New Directions in Einstein Manifolds | Paul |
| March 27, 11am, B239 | Ilya Kossovskiy (University of Vienna) | On Poincare's "Probleme local" | Gong |
| March 27 | Kent Orr (Indiana University at Bloomington) | The Isomorphism Problem for metabelian groups | Maxim |
| April 3 | University holiday | | |
| April 10 | Jasmine Foo (University of Minnesota) | TBA | Roch, WIMAW |
| April 17 | Kay Kirkpatrick (University of Illinois-Urbana Champaign) | TBA | Stovall |
| April 24 | Marianna Csornyei (University of Chicago) | TBA | Seeger, Stovall |
| May 1 | Bianca Viray (University of Washington) | TBA | Erman |
| May 8 | Marcus Roper (UCLA) | TBA | Roch |

## Abstracts

### January 12: Botong Wang (Notre Dame)

#### Cohomology jump loci of algebraic varieties

In the moduli spaces of vector bundles (or local systems), cohomology jump loci are the algebraic sets where a certain cohomology group has prescribed dimension. We will discuss some arithmetic and deformation-theoretic aspects of cohomology jump loci. If time permits, we will also talk about some applications in algebraic statistics.

### January 14: Jayadev Athreya (UIUC)

#### Counting points for random (and not-so-random) geometric structures

We describe a philosophy of how certain counting problems can be studied by methods of probability theory and dynamics on appropriate moduli spaces. We focus on two particular cases: (1) Counting for Right-Angled Billiards: understanding the dynamics on and volumes of moduli spaces of meromorphic quadratic differentials yields an interesting universality phenomenon for billiards in polygons with interior angles integer multiples of 90 degrees. This is joint work with A. Eskin and A. Zorich. (2) Counting for almost every quadratic form: understanding the geometry of a random lattice yields striking diophantine and counting results for typical (in the sense of measure) quadratic (and other) forms. This is joint work with G. A. Margulis.

### January 15: Chi Li (Stony Brook)

#### On Kahler-Einstein metrics and K-stability

The existence of Kahler-Einstein metrics on Kahler manifolds is a basic problem in complex differential geometry.
This problem has connections to other fields: complex algebraic geometry, partial differential equations and several complex variables. I will discuss the existence of Kahler-Einstein metrics on Fano manifolds and its relation to K-stability. I will mainly focus on the analytic part of the theory, discuss how to solve the related complex Monge-Ampere equations and provide concrete examples in both smooth and conical settings. If time permits, I will also say something about the algebraic part of the theory, including the study of K-stability using the Minimal Model Program (joint with Chenyang Xu) and the existence of proper moduli space of smoothable K-polystable Fano varieties (joint with Xiaowei Wang and Chenyang Xu). ### January 21: Jun Kitagawa (Toronto) #### Regularity theory for generated Jacobian equations: from optimal transport to geometric optics Equations of Monge-Ampere type arise in numerous contexts, and solutions often exhibit very subtle qualitative and quantitative properties; this is owing to the highly nonlinear nature of the equation, and its degeneracy (in the sense of ellipticity). Motivated by an example from geometric optics, I will talk about the class of Generated Jacobian Equations; recently introduced by Trudinger, this class also encompasses, for example, optimal transport, the Minkowski problem, and the classical Monge-Ampere equation. I will present a new regularity result for weak solutions of these equations, which is new even in the case of equations arising from near-field reflector problems (of interest from a physical and practical point of view). This talk is based on joint works with N. Guillen. ### January 23: Nicolas Addington (Duke) #### Recent developments in rationality of cubic 4-folds The question of which cubic 4-folds are rational is one of the foremost open problems in algebraic geometry. 
I'll start by explaining what this means and why it's interesting; then I'll discuss three approaches to solving it (including one developed in the last year), my own work relating the three approaches to one another, and the troubles that have befallen each approach. ### January 26: Minh Binh Tran (CAM) #### Nonlinear approximation theory for the homogeneous Boltzmann equation A challenging problem in solving the Boltzmann equation numerically is that the velocity space is approximated by a finite region. Therefore, most methods are based on a truncation technique and the computational cost is then very high if the velocity domain is large. Moreover, sometimes, non-physical conditions have to be imposed on the equation in order to keep the velocity domain bounded. In this talk, we introduce the first nonlinear approximation theory for the Boltzmann equation. Our nonlinear wavelet approximation is non-truncated and based on a nonlinear, adaptive spectral method associated with a new wavelet filtering technique and a new formulation of the equation. The approximation is proved to converge and perfectly preserve most of the properties of the homogeneous Boltzmann equation. It could also be considered as a general framework for approximating kinetic integral equations. ### February 2: Afonso Bandeira (Princeton) #### Tightness of convex relaxations for certain inverse problems on graphs Many maximum likelihood estimation problems are known to be intractable in the worst case. A common approach is to consider convex relaxations of the maximum likelihood estimator (MLE), and relaxations based on semidefinite programming (SDP) are among the most popular. We will focus our attention on a certain class of graph-based inverse problems and show a couple of remarkable phenomena. In some instances of these problems (such as community detection under the stochastic block model) the solution to the SDP matches the ground truth parameters (i.e. 
achieves exact recovery) for information-theoretically optimal regimes. This is established using new nonasymptotic bounds for the spectral norm of random matrices with independent entries. On other instances of these problems (such as angular synchronization), the MLE itself tends to not coincide with the ground truth (although maintaining favorable statistical properties). Remarkably, these relaxations are often still tight (meaning that the solution of the SDP matches the MLE). For angular synchronization we can understand this behavior by analyzing the solutions of certain randomized Grothendieck problems. However, for many other problems, such as the multireference alignment problem in signal processing, this remains a fascinating open problem.

### February 6: Morris Hirsch (UC Berkeley and UW Madison)

#### Fixed points of Lie transformation group, and zeros of Lie algebras of vector fields

The following questions will be considered: When a connected Lie group G acts effectively on a manifold M, what general conditions on G, M and the action ensure that the action has a fixed point? If g is a Lie algebra of vector fields on M, what general conditions on g and M ensure that g has a zero? Old and new results will be discussed. For example:

Theorem: If G is nilpotent and M is a compact surface of nonzero Euler characteristic, there is a fixed point.

Theorem: Suppose G is supersoluble and M is as above. Then every analytic action of G on M has a fixed point, but this is false for continuous actions, and for groups that are merely solvable.

Theorem: Suppose M is a real or complex manifold that is 2-dimensional over the ground field, and g is a Lie algebra of analytic vector fields on M. Assume some element X in g spans a 1-dimensional ideal. If the zero set K of X is compact and the Poincaré-Hopf index of X at K is nonzero, then g vanishes at some point of K.

No special knowledge of Lie groups will be assumed.
### February 13: Mihai Putinar (UC Santa Barbara)

#### Quillen’s property of real algebraic varieties

A famous observation discovered by Fejér and Riesz a century ago is the quintessential algebraic component of every spectral decomposition result. It asserts that every non-negative polynomial on the unit circle is a hermitian square. About half a century ago, Quillen proved that a positive polynomial on an odd dimensional sphere is a sum of hermitian squares. This fact was independently rediscovered much later by D’Angelo and Catlin, and by Athavale. The main subject of the talk will be: on which real algebraic subvarieties of $\mathbb{C}^n$ is Quillen’s theorem valid? An interlace between real algebraic geometry, quantization techniques and complex hermitian geometry will provide an answer to the above question, and more. Based on recent work with Claus Scheiderer and John D’Angelo.

### February 20: David Zureick-Brown (Emory University)

#### Diophantine and tropical geometry

Diophantine geometry is the study of integral solutions to a polynomial equation. For instance, for integers $a,b,c \geq 2$ satisfying $\tfrac1a + \tfrac1b + \tfrac1c < 1$, Darmon and Granville proved that the individual generalized Fermat equation $x^a + y^b = z^c$ has only finitely many coprime integer solutions. Conjecturally something stronger is true: for $a,b,c \geq 3$ there are no non-trivial solutions. I'll discuss various other Diophantine problems, with a focus on the underlying intuition and conjectural framework. I will especially focus on the uniformity conjecture, and will explain new ideas from tropical geometry and our recent partial proof of the uniformity conjecture.

### Monday February 23: Jayadev Athreya (UIUC)

#### The Erdős–Szüsz–Turán distribution for equivariant point processes

We generalize a problem of Erdős, Szüsz and Turán on diophantine approximation to a variety of contexts, and use homogeneous dynamics to compute an associated probability distribution on the integers.
### February 27: Allan Greenleaf (University of Rochester)

#### Erdős–Falconer Configuration problems

In discrete geometry, there is a large collection of problems due to Erdős and various coauthors starting in the 1940s, which have the following general form: Given a large finite set P of N points in d-dimensional Euclidean space, and a geometric configuration (a line segment of a given length, a triangle with given angles or a given area, etc.), is there a lower bound on how many times that configuration must occur among the points of P? Relatedly, is there an upper bound on the number of times any single configuration can occur? One of the most celebrated problems of this type, the Erdős distinct distances problem in the plane, was essentially solved in 2010 by Guth and Katz, but for many problems of this type only partial results are known.

In continuous geometry, there are analogous problems due to Falconer and others. Here, one looks for results that say that if a set A is large enough (in terms of a lower bound on its Hausdorff dimension, say), then the set of configurations of a given type generated by the points of A is large (has positive measure, say). I will describe work on Falconer-type problems using some techniques from harmonic analysis, including estimates for multilinear operators. In some cases, these results can be discretized to obtain at least partial results on Erdős-type problems.

### March 6: Larry Guth (MIT)

#### Introduction to incidence geometry

Incidence geometry is a branch of combinatorics that studies the possible intersection patterns of lines, circles, and other simple shapes. For example, suppose that we have a set of L lines in the plane. An r-rich point is a point that lies in at least r of these lines. For a given L, r, how many r-rich points can we make? This is a typical question in the field, and there are many variations. What if we replace lines with circles? What happens in higher dimensions?
We will give an introduction to this field, describing some of the important results, tools, and open problems. We will discuss two important tools used in the area. One tool is to apply topology to the problem. This tool allows us to prove results in R^2 that are stronger than what happens over finite fields. The second tool is to look for algebraic structure in the problem by studying low-degree polynomials that vanish on the points we are studying. We will also discuss some of the (many) open problems in the field and try to describe the nature of the difficulties in approaching them.

### March 13: Cameron Gordon (UT-Austin)

#### Left-orderability and 3-manifold groups

The fundamental group is a more or less complete invariant of a 3-dimensional manifold. We will discuss how the purely algebraic property of this group being left-orderable is related to two other aspects of 3-dimensional topology, one geometric-topological and the other essentially analytic.

### March 20: Aaron Naber (Northwestern)

#### Regularity and New Directions in Einstein Manifolds

In this talk we give an overview of recent developments and new directions of manifolds which satisfy the Einstein equation Rc=cg, or more generally just manifolds with bounded Ricci curvature |Rc|<C. We will discuss the solution of the codimension four conjecture, which roughly says that Gromov-Hausdorff limits (M^n_i,g_i)->(X,d) of manifolds with bounded Ricci curvature are smooth away from a set of codimension four. In a very different direction, in this lecture we will also explain how Einstein manifolds may be characterized by the behavior of the analysis on path space P(M) of the manifold. That is, we will see a Riemannian manifold is Einstein if and only if certain gradient estimates for functions on P(M) hold. One can view this as an infinite dimensional generalization of the Bakry–Émery estimates.
### March 27 11am B239: Ilya Kossovskiy (University of Vienna)

#### On Poincaré's "Problème local"

In this talk, we describe a result giving a complete solution to the old question of Poincaré on the possible dimensions of the automorphism group of a real-analytic hypersurface in two-dimensional complex space. As the main tool, we introduce the so-called CR (Cauchy-Riemann manifolds) - DS (Dynamical Systems) technique. This technique suggests replacing a real hypersurface with certain degeneracies of the CR-structure by an appropriate dynamical system, and then studying mappings and symmetries of the initial real hypersurface accordingly. It turns out that symmetries of the singular differential equation associated with the initial real hypersurface are much easier to study than those of the real hypersurface, and in this way we obtain the solution to the problem of Poincaré. This work is joint with Rasul Shafikov.

### March 27: Kent Orr (Indiana University)

#### The Isomorphism Problem for metabelian groups

Perhaps the most fundamental outstanding problem in algorithmic group theory, the Isomorphism Problem for metabelian groups remains a mystery. I present an introduction to this problem intended to be accessible to graduate students. In collaboration with Gilbert Baumslag and Roman Mikhailov, I present a new approach to this ancient problem which potentially connects to algebraic geometry, cohomology of groups, number theory, Gromov's view of groups as geometric objects, and a fundamental algebraic construction developed for and motivated by the topology of knots and links.
http://azrael.top/CF291D%20Choosing%20Capital%20for%20Treeland%20%E6%A0%91%E5%BD%A2DP/
## Problem

### Description

The country Treeland consists of n cities; some pairs of them are connected by unidirectional roads. Overall there are n − 1 roads in the country. We know that if we don’t take the direction of the roads into consideration, we can get from any city to any other one.

The council of the elders has recently decided to choose the capital of Treeland. Of course, it should be a city of this country. The council is supposed to meet in the capital and regularly move from the capital to other cities (at this stage nobody is thinking about getting back to the capital from these cities). For that reason, if city a is chosen as the capital, then all roads must be oriented so that if we move along them, we can get from city a to any other city. For that, some roads may have to be reversed.

Help the elders choose the capital so that they have to reverse the minimum number of roads in the country.

### Input

The first input line contains the integer n — the number of cities in Treeland. The next n − 1 lines contain the descriptions of the roads, one road per line. A road is described by a pair of integers — the numbers of the cities connected by that road. The road is oriented from the first city to the second. Cities in Treeland are indexed from 1 to n.

### Output

In the first line print the minimum number of roads to be reversed if the capital is chosen optimally. In the second line print all possible choices of the capital — a sequence of city indexes in increasing order.

Input #1

Output #1

Input #2

Output #2
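Since the underlying graph is a tree (the n − 1 roads connect all cities when directions are ignored), the standard approach is a two-pass rerooting tree DP: count the reversals for one fixed root, then shift the capital across each edge in O(1). A Python sketch of this idea — the function name and the toy inputs are illustrative, not taken from the original post:

```python
from collections import defaultdict

def choose_capital(n, roads):
    """Return (min_reversals, sorted list of optimal capitals).

    roads: list of directed edges (u, v), cities 1-indexed; the
    underlying undirected graph is assumed to be a tree.
    """
    adj = defaultdict(list)
    for u, v in roads:
        adj[u].append((v, 0))  # following the road as-is costs nothing
        adj[v].append((u, 1))  # traversing against it costs one reversal

    cost = [0] * (n + 1)  # cost[v] = reversals needed with v as capital

    # first pass: iterative DFS from city 1, counting reversals for root 1
    order, parent = [], [0] * (n + 1)
    stack, seen = [1], [False] * (n + 1)
    seen[1] = True
    while stack:
        u = stack.pop()
        order.append(u)
        for v, w in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                cost[1] += w
                stack.append(v)

    # second pass (rerooting): moving the capital from u to its child v
    # flips only the edge between them: +1 if the road pointed u -> v
    # (it must now be reversed), -1 if it pointed v -> u.
    for u in order:
        for v, w in adj[u]:
            if parent[v] == u:
                cost[v] = cost[u] + (1 if w == 0 else -1)

    best = min(cost[1:n + 1])
    return best, [v for v in range(1, n + 1) if cost[v] == best]

# toy example: roads 2 -> 1 and 2 -> 3; capital 2 needs no reversals
print(choose_capital(3, [(2, 1), (2, 3)]))  # -> (0, [2])
```

The preorder produced by the first DFS guarantees that `cost[u]` is final before any child of `u` is processed, which is what makes the single rerooting sweep valid.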
https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922
# Fork choice rule for collation proposal mechanisms

#1

The current sharding phase 1 doc specifies running the proposer eligibility function getEligibleProposer onchain. We suggest an alternative approach based on a fork choice rule, complemented with optional “partial validation” and slashing. The benefit of the fork choice rule is that getEligibleProposer is run offchain. This saves gas when calling addHeader and unlocks the possibility for fancier proposer eligibility functions. At the end we detail two proposal mechanisms, one for variable-size deposits and one for private sampling.

Fork choice rule

Collation validity is currently done as a fork choice rule, and collation header validity is done onchain with addHeader. We suggest extending the fork choice rule to collation headers as follows:

• addHeader always returns True, and always records a corresponding CollationAdded log
• getEligibleProposer is run offchain and filtering of invalid collation headers is done as a fork choice rule

For the logic to fetch candidate heads (cf. fetching in reverse sorted order) to work and the fork choice rule to be enforceable, the CollationAdded logs need to be filterable for validity post facto. This relies on historical data availability of validator sets (and other auxiliary data for sampling, such as entropy). We are already assuming that the historical CollationAdded logs are available, so it suffices to extend this assumption to validator sets. A clean solution is to have ValidatorEvent logs for additions and removals, and an equivalent getNextLog method for such logs.

Partial validation and slashing condition

To simplify the fork choice rule and lower the dependence on historical availability of validator sets, there are two hybrid approaches that work well:

1. Partial validation: have addHeader return True only if the signature sig corresponds to some collateralised validator
2.
Slashing condition (building upon partial validation): if the validator that called addHeader does not match getEligibleProposer (run offchain), then a whistleblower can run getEligibleProposer onchain to burn half the validator’s deposit and keep the other half

Variable-size deposits

Let v_1, ..., v_n be the validators with deposits d_1, ..., d_n. Fairly sampling validators when the d_i can have arbitrary size can be tricky because the amount of work to run getEligibleProposer is likely bounded below by log(n), which is not ideal. With the fork choice rule we can take any (reasonable) fair sampling function and run it offchain. For concreteness let’s build getEligibleProposer as follows. Let E be 32 bytes of public entropy (e.g. a blockhash, as in the phase 1 sharding doc). Let S_j be the partial sums d_1 + ... + d_j and let \tilde{E} \in [0, S_n-1] where \tilde{E} \equiv E \mod S_n. Then getEligibleProposer selects the validator v_i such that S_{i-1} \le \tilde{E} \lt S_i.

Private sampling

We now look at the problem of private sampling. That is, can we find a proposal mechanism which selects a single validator per period and provides “private lookahead”, i.e. it does not reveal to others which validators will be selected next? There are various possible private sampling strategies (based on MPCs, SNARKs/STARKs, cryptoeconomic signalling, or fancy crypto) but finding a workable scheme is hard. Below we present our best attempt based on one-time ring signatures. The scheme has several nice properties:

1. Perfect privacy: private lookahead and private lookbehind (i.e. the scheme never matches eligible proposers with specific validators)
2. Full lookahead: the lookahead extends to the end of the epoch (epochs are defined below, and have roughly the same size as the validator set)
3.
Perfect fairness: within an epoch validators are selected proportionally according to deposit size, with zero variance

The setup assumes validators have deposits in fixed-size increments (e.g. multiples of 1000 ETH). Without loss of generality we have one validator per fixed-size deposit. The proposal mechanism is organised in variable-size epochs. From the beginning of each epoch to its end, every validator has the right to push a log to be elected in the next epoch. The log contains two things:

1. A once-per-epoch ring signature proving membership in the current validator set
2. An ephemeral identity

The logs are ordered chronologically in an array, and the size of the array at the end of one epoch corresponds to the size of the next epoch (measured in periods). To remove any time-based correlation across logs, we publicly shuffle the array using public entropy. This shuffled array then constitutes a sampling, each entry corresponding to one period. To call addHeader, the validator selected for a given period must sign the header with the corresponding ephemeral identity.

With regards to publishing logs, log shards work well for several reasons:

1. They provide cheap logging facilities (data availability, ordering, witnesses)
2. Gas is paid out-of-band, so this limits opportunities to leak privacy through onchain gas payments

#2

Interesting. The main concern I have is the practical efficiency issue. The scheme you propose would make it very easy to do a “51% attack” so that the longest chain in the VMC would be an invalid chain, and so clients would absolutely be required to check every single header in the chain personally. There would no longer be an easy option to “fast sync”.
Given that, in the current fixed validator size status quo, the cost of running getEligibleProposer is trivial, it’s not clear that the benefit of this kind of change is worth the cost, including the protocol and client complexity cost (currently, the complexity is literally 2-4 lines of code in the contract). Additionally, even if we want to take the log(N) approach (see implementation here; it’s surprisingly simple https://github.com/ethereum/casper/tree/master/misc), the gas cost is only O(log(N)) 200-gas SLOADs, so it should be under 4k gas. I actually now think that I was wrong to have been so uncomfortable with the O(log(N)) approach earlier.

Brilliant job on the ring signature-based lookahead-free validator selection!

#3

This is where partial validation and the slashing condition come in—with them doing a “51% attack” is basically equally hard as before, and fast sync works the same. Slashing acts as a finality mechanism reducing the “global” fork choice rule to a “local” fork choice for the last mile (the time it takes for whistleblowers to react). An attacker wanting to push a single phoney collation header will lose the minimum deposit (say, 1000 ETH), so an attack quickly becomes prohibitively expensive, and validators checking the last few headers is sufficient.

#4

Have you considered using a Common Coin algorithm? It provides a way for a set of nodes to generate a common secure random number even if some of the nodes are Byzantine. You could just generate a common random number C at the beginning of each block proposal, and then choose validator index I as I = C \% N

The CommonCoin algorithm takes less than a second to run in a realistic network with realistic latencies. HoneyBadger and Algorand use a common coin instance for each block. Ethereum could also run it for each block, or for each 10 blocks.

#5

I’ve lately been studying dfinity’s random beacon based on threshold signatures. I imagine that’s what you are referring to.
It is indeed a pretty awesome way to generate randomness as an alternative to, say, blockhashes or RANDAO. Having said that, I don’t think that it really helps much for private sampling other than improving the source of randomness to already proposed private sampling schemes. I don’t think the specific scheme you propose has either lookahead privacy or lookbehind privacy.

#6

Hi Justin, I am referring to “coin tossing schemes” (page 11 of [this](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.4165&rep=rep1&type=pdf)). Essentially there are 3t + 1 parties, out of which t are bad. The protocol allows the good guys to generate a random number in O(1) time even if the bad guys do not cooperate. It is used in some protocols, such as HoneyBadger. It is based on threshold signatures - in fact, once you have a deterministic threshold signature, you could just take the hash of the signature as a common random number.

What I am saying one could do, is at the start of each block proposal one could generate a common random number which would determine the block proposer for this block. I will read your proposal in more detail; I have a suspicion that it can be reformulated as a solution of the “coin tossing” scheme using a blockchain. Maybe it is better for the particular application of block proposal …

#7

The definition of lookahead privacy is that only the next block proposer knows he will be the next block proposer. In the scheme you suggest all validators know who will be the next block proposer.

#8

Yes … An interesting question is whether common coin can be modified somehow to preserve privacy …
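For concreteness, the partial-sum sampling for variable-size deposits described in the original post can be sketched offchain in a few lines. Python is used purely for illustration; the deposit units, the 32-byte big-endian entropy encoding, and the function name are assumptions, not part of the spec:

```python
import bisect
from itertools import accumulate

def eligible_proposer(entropy: bytes, deposits: list) -> int:
    """Select validator index i (0-based) with probability d_i / S_n.

    Implements the rule from the post: with partial sums
    S_j = d_1 + ... + d_j and ~E = E mod S_n, pick the i such that
    S_{i-1} <= ~E < S_i.
    """
    partial = list(accumulate(deposits))          # S_1, ..., S_n
    e_tilde = int.from_bytes(entropy, "big") % partial[-1]
    # binary search gives the O(log n) lookup mentioned in the thread
    return bisect.bisect_right(partial, e_tilde)

# toy example: deposits 3, 1, 6 (total 10); entropy reducing to 5 mod 10
# lands in the third validator's interval [4, 10)
print(eligible_proposer((5).to_bytes(32, "big"), [3, 1, 6]))  # -> 2
```

Selection probability is exactly d_i / S_n because each validator owns an interval of length d_i in [0, S_n), matching the "perfect fairness" goal for a uniform ~E.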
http://www.nag.com/numeric/MB/manual64_24_1/html/G01/g01fff.html
# NAG Toolbox: nag_stat_inv_cdf_gamma (g01ff)

## Purpose

nag_stat_inv_cdf_gamma (g01ff) returns the deviate associated with the given lower tail probability of the gamma distribution.

## Syntax

[result, ifail] = g01ff(p, a, b, 'tol', tol)

[result, ifail] = nag_stat_inv_cdf_gamma(p, a, b, 'tol', tol)

Note: the interface to this routine has changed since earlier releases of the toolbox: Mark 23: tol now optional (default 0).

## Description

The deviate, $g_p$, associated with the lower tail probability, $p$, of the gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$, is defined as the solution to
$$P(G \le g_p : \alpha, \beta) = p = \frac{1}{\beta^\alpha \Gamma(\alpha)} \int_0^{g_p} e^{-G/\beta} G^{\alpha-1} \, dG, \qquad 0 \le g_p < \infty;\ \alpha, \beta > 0.$$

The method used is described by Best and Roberts (1975), making use of the relationship between the gamma distribution and the $\chi^2$-distribution.

Let $y = 2 g_p / \beta$. The required $y$ is found from the Taylor series expansion
$$y = y_0 + \sum_r \frac{C_r(y_0)}{r!} \left( \frac{E}{\varphi(y_0)} \right)^r,$$
where $y_0$ is a starting approximation and
- $C_1(u) = 1$,
- $C_{r+1}(u) = \left( r\Psi + \frac{d}{du} \right) C_r(u)$,
- $\Psi = \frac{1}{2} - \frac{\alpha - 1}{u}$,
- $E = p - \int_0^{y_0} \varphi(u) \, du$,
- $\varphi(u) = \frac{1}{2^\alpha \Gamma(\alpha)} e^{-u/2} u^{\alpha - 1}$.
For most values of $p$ and $\alpha$ the starting value
$$y_{01} = 2\alpha \left( z \sqrt{\frac{1}{9\alpha}} + 1 - \frac{1}{9\alpha} \right)^3$$
is used, where $z$ is the deviate associated with a lower tail probability of $p$ for the standard Normal distribution.

For $p$ close to zero,
$$y_{02} = \left( p \alpha 2^\alpha \Gamma(\alpha) \right)^{1/\alpha}$$
is used.

For large $p$ values, when $y_{01} > 4.4\alpha + 6.0$,
$$y_{03} = -2\left[ \ln(1-p) - (\alpha - 1)\ln\left( \tfrac{1}{2} y_{01} \right) + \ln(\Gamma(\alpha)) \right]$$
is found to be a better starting value than $y_{01}$.

For small $\alpha$ ($\alpha \le 0.16$), $p$ is expressed in terms of an approximation to the exponential integral and $y_{04}$ is found by Newton–Raphson iterations.

Seven terms of the Taylor series are used to refine the starting approximation, repeating the process if necessary until the required accuracy is obtained.

## References

Best D J and Roberts D E (1975) Algorithm AS 91. The percentage points of the $\chi^2$ distribution. Appl. Statist. 24 385–388

## Parameters

### Compulsory Input Parameters

1: p – double scalar
$p$, the lower tail probability from the required gamma distribution.
Constraint: $0.0 \le p < 1.0$.

2: a – double scalar
$\alpha$, the shape parameter of the gamma distribution.
Constraint: $0.0 < a \le 10^6$.

3: b – double scalar
$\beta$, the scale parameter of the gamma distribution.
Constraint: $b > 0.0$.

### Optional Input Parameters

1: tol – double scalar
The relative accuracy required by you in the results. The smallest recommended value is $50\delta$, where $\delta = \max(10^{-18}, \text{machine precision})$. If nag_stat_inv_cdf_gamma (g01ff) is entered with tol less than $50\delta$ or greater than or equal to $1.0$, then $50\delta$ is used instead.
Default: $0.0$

### Output Parameters

1: result – double scalar
The result of the function.
2: ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Note: nag_stat_inv_cdf_gamma (g01ff) may return useful information for one or more of the following detected errors or warnings.

Errors or warnings detected by the function:

If on exit ifail = 1, 2, 3 or 5, then nag_stat_inv_cdf_gamma (g01ff) returns 0.0.

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

ifail = 1
On entry, p < 0.0, or p ≥ 1.0.

ifail = 2
On entry, a ≤ 0.0, or a > 10^6, or b ≤ 0.0.

ifail = 3
p is too close to 0.0 or 1.0 to enable the result to be calculated.

W ifail = 4
The solution has failed to converge in 100 iterations. A larger value of tol should be tried. The result may be a reasonable approximation.

ifail = 5
The series to calculate the gamma function has failed to converge. This is an unlikely error exit.

## Accuracy

In most cases the relative accuracy of the results should be as specified by tol. However, for very small values of $\alpha$ or very small values of $p$ there may be some loss of accuracy.

## Example

```
function nag_stat_inv_cdf_gamma_example
p = 0.01;
a = 1;
b = 20;
[result, ifail] = nag_stat_inv_cdf_gamma(p, a, b)
```
```
result = 0.2010
ifail = 0
```
```
function g01ff_example
p = 0.01;
a = 1;
b = 20;
[result, ifail] = g01ff(p, a, b)
```
```
result = 0.2010
ifail = 0
```

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013
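Because the example uses shape $a = 1$, the gamma distribution reduces to the exponential distribution, which gives a closed-form cross-check of the returned deviate that is independent of the Taylor-series machinery. A pure-Python sketch (the helper name is illustrative; the `ifail` references in the errors mirror the constraints above):

```python
import math

def gamma_ppf_shape1(p, beta):
    """Inverse lower-tail gamma CDF in the special case alpha = 1,
    where P(G <= x) = 1 - exp(-x/beta), so x = -beta * ln(1 - p)."""
    if not 0.0 <= p < 1.0:
        raise ValueError("p must satisfy 0.0 <= p < 1.0 (cf. ifail = 1)")
    if beta <= 0.0:
        raise ValueError("b must be positive (cf. ifail = 2)")
    # log1p keeps precision for p close to zero
    return -beta * math.log1p(-p)

print(round(gamma_ppf_shape1(0.01, 20.0), 4))  # -> 0.201
```

This agrees with the documented example output `result = 0.2010` for p = 0.01, a = 1, b = 20.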
http://math.stackexchange.com/questions/79312/reconstructing-a-matrix-from-random-matrix-vector-products
# Reconstructing a matrix from random matrix vector products

I am looking for new ideas on how to construct a guess for a (positive, hermitian) matrix A given some matrix-vector products Ax (with random vectors x). One such method would be to perform rank-one updates to A each time a new Ax becomes available, and this is what is used in, for example, the BFGS update formula in quasi-Newton methods. However, this just updates a one-dimensional subspace of the matrix. In my application it would be more natural to scale the whole matrix instead of performing a rank-one update, but just a simple scaling is not enough because it cannot be consistent for all the Ax's.

I have a rather good approximation to A to start with, so I know roughly the distributions of eigenvalues if that can help. My biggest problem is that I don't know how to even define the problem properly. Perhaps I can sharpen the question with some feedback.

Edit: Learned about shrinkage estimation; perhaps that is the way to go.

- I see that this question has been inactive for a very long time, but I find the question interesting. When you say you know $Ax$ for some random $x$, do you mean you know both $x$ and $Ax$, or do you only know $Ax$ and the distribution of $x$? –  Mårten W Sep 12 '13 at 22:05

- Can you give some more details? You say you have some information about $A$, that might be expressed as a prior distribution in a Bayes analysis, and the random matrix-vector products would be the data used to update the prior to give the posterior. Or maybe methods from tomography? –  kjetil b halvorsen Apr 9 '14 at 19:50
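As a baseline for the well-determined case (not the questioner's exact setting, where fewer products than dimensions are available): with n linearly independent probes, the products determine A exactly, and any incremental update scheme should at least be consistent with this. A NumPy sketch, where the dimensions and the positive-definite test matrix are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# hidden positive-definite hermitian matrix (real symmetric here)
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

# observe A x for n random probe vectors x (columns of X)
X = rng.standard_normal((n, n))
Y = A @ X

# with n independent probes, A X = Y determines A:
# solve A_hat @ X = Y, i.e. X.T @ A_hat.T = Y.T
A_hat = np.linalg.solve(X.T, Y.T).T

print(np.allclose(A_hat, A))  # -> True
```

With fewer than n probes the system is under-determined, which is exactly where regularisation toward the known good initial approximation (rank-one/BFGS-style updates, or the shrinkage estimation mentioned in the edit) comes in.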
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-9-quadratic-functions-and-equations-cumulative-test-prep-multiple-choice-page-595/12
Chapter 9 - Quadratic Functions and Equations - Cumulative Test Prep - Multiple Choice - Page 595: 12

H

Work Step by Step

The value doubles every 15 years:

- 60 years ago the value was 6
- 45 years ago it was $6\cdot2$
- 30 years ago it was $6\cdot2^2$
- 15 years ago it was $6\cdot2^3$
- today it is $6\cdot2^4$
- 15 years from now it will be $6\cdot2^5$
- 30 years from now it will be $6\cdot2^6$
- 45 years from now it will be $6\cdot2^7$
- 60 years from now it will be $6\cdot2^8$

$6\cdot2^8 = 6\cdot256 = 1536$
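The doubling arithmetic can be checked in one step, under the reading above (value 6 sixty years ago, doubling every 15 years):

```python
# from 60 years ago to 60 years from now is 120 years of doubling
doublings = (60 + 60) // 15
value = 6 * 2 ** doublings
print(value)  # -> 1536
```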
https://zbmath.org/?q=an:06797071
# zbMATH — the first resource for mathematics

An analogue of Vosper’s theorem for extension fields. (English) Zbl 1405.11134

Summary: We are interested in characterising pairs $$S, T$$ of $$F$$-linear subspaces in a field extension $$L/F$$ such that the linear span $$ST$$ of the set of products of elements of $$S$$ and of elements of $$T$$ has small dimension. Our central result is a linear analogue of Vosper’s theorem, which gives the structure of vector spaces $$S, T$$ in a prime extension $$L$$ of a finite field $$F$$ for which $\dim_FST =\dim_F S+\dim_F T-1,$ when $$\dim_FS, \dim_FT\geq 2$$ and $$\dim_FST\leq [L : F]-2$$.

##### MSC:

11P70 Inverse problems of additive number theory, including sumsets
05E30 Association schemes, strongly regular graphs
11B30 Arithmetic combinatorics; higher degree uniformity
12F10 Separable extensions, Galois theory

##### Keywords:

Vosper’s theorem; $$F$$-linear subspaces; field extension
http://planetmath.org/node/41261/source
mantissa function

\documentclass{article}
% this is the default PlanetMath preamble. as your knowledge
% of TeX increases, you will probably want to edit this, but
% it should be fine as is for beginners.

% almost certainly you want these
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsfonts}

% used for TeXing text within eps files
%\usepackage{psfrag}
% need this for including graphics (\includegraphics)
%\usepackage{graphicx}

% for neatly defining theorems and propositions
\usepackage{amsthm}

% making logically defined graphics
%%%\usepackage{xypic}
\usepackage{pstricks}
\usepackage{pst-plot}

% there are many more packages, add them here as you need them

% define commands here
\theoremstyle{definition}
\newtheorem*{thmplain}{Theorem}
\begin{document}
If we subtract from a real number $x$ the greatest integer not exceeding $x$, we obtain a number $y$ between 0 and 1, which can equal 0 if $x$ is an integer.\, In other \PMlinkescapetext{words},
$$y \;=\; x\!-\!\lfloor{x}\rfloor,$$
where $\lfloor{x}\rfloor$ is the floor of $x$.\, Such a number $y$ is called the {\em mantissa} of $x$.\, So we have for example\\
$2.7-2 \;=\; 0.7$,\\
$1.7-1 \;=\; 0.7$,\\
$0.7-0 \;=\; 0.7$,\\
$-0.3\!-\!(-1) = 0.7$,\\
$-1.3\!-\!(-2) = 0.7,$\\
i.e. these numbers 2.7, 1.7, 0.7, $-0.3$, $-1.3$ at mutual distances of an integer have the same mantissa (0.7).\, This is apparently always true --- thus the {\em mantissa function}
$$x \mapsto x\!-\!\lfloor{x}\rfloor$$
is periodic with period 1.

The mantissa is identical to the mantissa used in the Briggsian logarithm calculations.\\
When $x$ increases from an integer $n$ towards the next integer $n\!+\!1$, its mantissa $x\!-\!\lfloor{x}\rfloor$ increases with the same speed from 0 tending to 1, but at $n\!+\!1$ it falls back to 0.
\begin{center}
\begin{pspicture}(-5.5,-2.5)(5.5,3.5)
\psaxes[Dx=1,Dy=1]{->}(0,0)(-4.5,-1.9)(4.5,3)
\rput(0.3,3.1){$y$}
\rput(4.6,0.2){$x$}
\psdots[linecolor=blue](-4,0)(-3,0)(-2,0)(-1,0)(1,0)(2,0)(3,0)(4,0)
\psdot[linecolor=blue,linewidth=0.03](0,0)
\psline[linecolor=blue](-4,0)(-3,1)
\psline[linecolor=blue](-3,0)(-2,1)
\psline[linecolor=blue](-2,0)(-1,1)
\psline[linecolor=blue](-1,0)(0,1)
\psline[linecolor=blue](0,0)(1,1)
\psline[linecolor=blue](1,0)(2,1)
\psline[linecolor=blue](2,0)(3,1)
\psline[linecolor=blue](3,0)(4,1)
\rput(2.5,2.5){$\mbox{Graph\; } y = x\!-\!\lfloor{x}\rfloor$}
\end{pspicture}
\end{center}

Being a periodic function, the \PMlinkname{Fourier expansion}{DeterminationOfFourierCoefficients} of the function is easy to form:
$$x\!-\!\lfloor{x}\rfloor \;=\; \frac{1}{2}-\sum_{n=1}^\infty\frac{\sin 2n\pi{x}}{n\pi}$$
This is valid for\, $x \not\in \mathbb{Z}$,\, since the series gives in the jump discontinuity points the arithmetic means ($= \frac{1}{2}$) of \PMlinkname{left and right limits}{OneSidedLimit}.
%%%%%
%%%%%
\end{document}
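A quick numerical check, in Python, of the definition and the periodicity claim in this entry (separate from the LaTeX source above):

```python
import math

def mantissa(x):
    """The mantissa of x: x minus the greatest integer not exceeding x."""
    return x - math.floor(x)

# the examples from the entry: all five numbers share the mantissa 0.7
for x in (2.7, 1.7, 0.7, -0.3, -1.3):
    assert abs(mantissa(x) - 0.7) < 1e-12

# periodicity with period 1
assert abs(mantissa(3.25) - mantissa(0.25)) < 1e-12
print("mantissa checks pass")
```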
https://stacks.math.columbia.edu/recent-comments
Comments 1 to 20 out of 4766 in reverse chronological order.

On Matthieu Romagny left comment #5126 on Section 99.26 in Morphisms of Algebraic Stacks

Typo in the section introduction: replace "spaces" by "stacks" in the sentence "we may define what it means for a morphism of algebraic spaces..."

On Peng DU left comment #5125 on Lemma 10.22.2 in Commutative Algebra

May need add comma before "in other words".

On left comment #5124 on Lemma 56.12.6 in Derived Categories of Varieties

Yes, this is very confusing! Thanks and fixed here.

On left comment #5120 on Lemma 56.12.6 in Derived Categories of Varieties

The categories in the statement are claimed to be triangulated, but it needs abelian categories (and this is what is used in the proof anyway).

On slogan_bot left comment #5119 on Lemma 37.38.2 in More on Morphisms

Suggested slogan: Quasi-finite, separated morphisms are quasi-affine

Also, why is there an "(!)" in the proof?

On Weixiao Lu left comment #5118 on Section 21.2 in Cohomology on Sites

If $f: Sh(\mathcal C) \to Sh(\mathcal D)$ is a morphism of topoi, then $f_\ast(\mathcal F)$ might just be a sheaf of sets, not a sheaf of abelian groups, even if $\mathcal F$ is. Then how do we define $Rf_\ast(\mathcal F)$?

On J left comment #5117 on Section 56.2 in Derived Categories of Varieties

Typo: last paragraph $M\in D(\mathcal O_Y)$ not $\mathcal O_U$

On Laurent Moret-Bailly left comment #5116 on Lemma 62.3.2 in The Trace Formula

The meaning of the statement is not completely formal. To make sense of "the identity on cohomology" we need to show that we can identify $\mathcal{F}$ with $g_*(\mathcal{F})$ and/or $g^{-1}(\mathcal{F})$, for any $\mathcal{F}$. This is of course the case in subsequent lemmas where $\mathcal{F}$ is a constant sheaf.
On Tongmu He left comment #5115 on Lemma 53.20.12 in Algebraic Curves

It seems that in the statement of 53.20.12, we could add "the image of $a$ vanishes at the base point $v$ of $V$, and the base point $u$ of $U$ maps to the node of the fiber $W_v$", right?

On left comment #5114 on Section 5.10 in Topology

@James A. Myer: No. Every closed set with $\geq 2$ elements is reducible.

On pippo left comment #5113 on Section 62.3 in The Trace Formula

Typo: in (03SX) the first $\pi_X$ should be $\pi_x$.

On Mingchen left comment #5112 on Equation 90.6.0.2 in The Cotangent Complex

g should be from Sh(C) to Sh(C')

On left comment #5110 on Lemma 66.9.1 in Decent Algebraic Spaces

Dear Shiji, thanks for finding this error. There is a way of fixing this using limit arguments, but that would mean pushing this much later in the project. Another fix is the following.

We have to show: Given a quasi-compact open immersion $j : V \to Y$ of quasi-separated and quasi-compact algebraic spaces and an integral morphism $\pi : Z \to V$ we can extend $\pi$ to an integral morphism $\pi' : Z' \to Y$ in the sense that $Z$ is isomorphic to the inverse image of $V$ in $Z'$.

To do this, let $\mathcal{A}$ be the quasi-coherent sheaf of $\mathcal{O}_V$-algebras on $V$ corresponding to $\pi : Z \to V$, in other words, $\mathcal{A} = \pi_*\mathcal{O}_Z$. Pushforward along $j$ preserves quasi-coherence. Hence $j_*\mathcal{A}$ is a quasi-coherent $\mathcal{O}_Y$-algebra. Now we let $\mathcal{A}' \subset j_*\mathcal{A}$ be the integral closure of $\mathcal{O}_Y$. By Lemma 29.51.1 (translated over to the category of algebraic spaces; details omitted) we see that $\mathcal{A}'$ is a quasi-coherent $\mathcal{O}_Y$-algebra and we see that $\mathcal{A}'|_V \cong \mathcal{A}$ because the integral closure of $\mathcal{O}_V$ in $\mathcal{A}$ is $\mathcal{A}$ as $\pi$ is integral! Thus $Z' = \underline{\text{Spec}}_Y(\mathcal{A}') \to Y$ is the answer to our problem.
On Shiji Lyu left comment #5109 on Lemma 66.9.1 in Decent Algebraic Spaces

In the last paragraph we applied Lemma 0ABS to the morphism $Z \to Y$. However, it seems that this morphism is not necessarily quasi-finite since the morphism $Z \to V$ is just integral, not finite. Is there a way to resolve this?

On Tongmu He left comment #5108 on Lemma 53.19.13 in Algebraic Curves

Typo: in the proof of 53.19.13, the map $\mathcal{O}_{X, x}^\wedge \to \mathcal{O}_{Y, y}$ should be $\mathcal{O}_{X, x}^\wedge \to \mathcal{O}_{Y, y}^\wedge$.

On typo_bot left comment #5107 on Lemma 15.8.6 in More on Algebra

In (3), `element' should be `elements'. In (2) it would be good to parenthesize the argument of dim, even though one could argue that no confusion is possible and that this is a matter of taste.

On anon left comment #5106 on Lemma 10.119.7 in Commutative Algebra

If $x/1\in A[S^{-1}]$ is prime then doesn't ideal correspondence for localizations imply $x \in A$ is prime?

On Jordan Levin left comment #5105 on Section 10.130 in Commutative Algebra

There is a nice discussion of the exact sequences using only the universal properties in a down-to-Earth way in the book "Commutative Algebra" by Singh on page 249.

On Noah Olander left comment #5104 on Lemma 90.13.2 in The Cotangent Complex

I think the method of proof here works immediately for Koszul regular sequences (and is even a little simpler since you don't need the induction and you don't have to argue flat locally in the next lemma). Just compute $L_{B/A}$ for $A = \mathbf{Z} [x_1, \dots , x_r]$, and then if $(f_1, \dots , f_r)$ is a Koszul regular sequence on $C$, then $C / (f_1, \dots , f_r) = B \otimes _A ^{\mathbf{L}} C$.

On left comment #5103 on Lemma 48.18.3 in Duality for Schemes

Argh! It seems you are correct. This just is a terrible lemma. Luckily it seems we only use it once!
https://mathhelpboards.com/threads/polynomial.1073/
# polynomial

#### jacks

##### Well-known member

A polynomial $f(x)$ has integer coefficients such that $f(0)$ and $f(1)$ are both odd numbers. Prove that $f(x) = 0$ has no integer solution.

#### CaptainBlack

##### Well-known member

A polynomial $f(x)$ has integer coefficients such that $f(0)$ and $f(1)$ are both odd numbers. Prove that $f(x) = 0$ has no integer solution.

There are details you may need to fill in yourself but:

$$f(0)$$ odd implies that the constant term is odd. Then $$f(1)$$ odd implies that there are an even number of odd coefficients among the non-constant terms. So if $$x \in \mathbb{Z}$$ then $$f(x)$$ is odd, and so $$x$$ cannot be a root of $$f(x)$$.

CB
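CaptainBlack's parity argument is easy to check numerically; here is a small sketch with an illustrative polynomial (my own choice, not from the thread):

```python
def f(x):
    # integer coefficients, chosen so that f(0) = 3 and f(1) = 5 are both odd
    return 2*x**3 - x**2 + x + 3

assert f(0) % 2 == 1 and f(1) % 2 == 1

# f(even) has the parity of f(0) and f(odd) the parity of f(1),
# so every integer value of f is odd and can never equal 0
assert all(f(x) % 2 == 1 for x in range(-100, 101))
print("f has no integer root")
```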
https://thecuriousastronomer.wordpress.com/tag/absorption-lines/
## How do we know that the CMB is from a hot, early Universe?

Towards the end of July I had an article published in The Conversation about the Cosmic Microwave Background, follow this link to read that article. After the article had been up a few days, I got this question from a Mark Robson, which I thought was an interesting one.

Mark Robson’s original question, which he posed below the article I wrote for The Conversation.

I decided to blog an answer to this question, so the blogpost “What is the redshift of the Cosmic Microwave Background (CMB)?” appeared on my blog on the 30th of August, here is a link to that blogpost. However, it would seem that Mark Robson was not happy with my answer, and commented that I had not answered his actual question. So, here is his re-statement of his original question, except to my mind he has re-stated it differently (I guess to clarify what he actually meant in his first question). I said I would answer this slightly different/clarified question soon, but unfortunately I have not got around to doing so until today due to various other, more pressing, issues (such as attending a conference last week; and also writing articles for an upcoming book 30-second Einstein, which Ivy Press will be publishing next year).

The questions and comments that Mark Robson has since posted below my article about how we know the redshift of the CMB

## What is unique about the CMB data?

The very quick answer to Mark Robson’s re-stated question is that “the unique data possessed by the CMB which allow us to calculate its age or the temperature at which it was emitted” is that it is a perfect blackbody. I think I have already stated this in other blogs, but let me just re-state it here again: the spectrum of the CMB as measured by the COBE instrument FIRAS in 1990 showed it to be the most perfect blackbody spectrum ever seen in nature. Here is the FIRAS spectrum of the CMB to re-emphasise that.
The spectrum of the CMB as measured by the FIRAS instrument on COBE in 1990. It is the most perfect blackbody spectrum in nature ever observed. The error bars are four hundred times larger than normal, just so one can see them! So, we know, without any shadow of doubt, that this spectrum is NOT due to e.g. distant galaxies. Let me explain why we know this. ## The spectra of galaxies If we look at the spectrum of a nearby galaxy like Messier 31 (the Andromeda galaxy), we see something which is not a blackbody. Here is what the spectrum of M31 looks like. The spectrum of our nearest large galaxy, Messier 31 The spectrum differs from a blackbody spectrum for two reasons. First of all, it is much broader than a blackbody spectrum, and this is easy to explain. When we look at the light from M31 we are seeing the integrated light from many hundreds of millions of stars, and those stars have different temperatures. So, we are seeing the superposition of many different blackbody spectra, so this broadens the observed spectrum. Secondly, you notice that there are lots of dips in the spectrum. These are absorption lines, and are produced by the light from the surfaces of the stars in M31 passing through the thinner gases in the atmospheres of the stars. We see the same thing in the spectrum of the Sun (Josef von Fraunhofer was the first person to notice this in 1814/15). These absorption lines were actually noticed in the spectra of galaxies long before we knew they were galaxies, and were one of the indirect pieces of evidence used to argue that the “spiral nebulae” (as they were then called) were not disks of gas rotating around a newly formed star (as some argued), but were in fact galaxies outside of our own Galaxy. 
Spectra of gaseous regions (like the Orion nebula) were already known to be emission spectra, but the spectra of spiral nebulae were continuum spectra with absorption lines superimposed, a sure indicator that they were from stars, but stars too far away to be seen individually because they lay outside of our Galaxy.

The absorption lines, as well as giving us a hint many years ago that we were seeing the superposition of many many stars in the spectra of spiral nebulae, are also very useful because they allow us to determine the redshift of galaxies. We are able to identify many of the absorption lines and hence work out by how much they are shifted – here is an example of an actual spectrum of a very distant galaxy at a redshift of $z=5.515$, and below the actual spectrum (the smear of dark light at the top) is the identification of the lines seen in that spectrum at their rest wavelengths.

The spectrum of a galaxy at a redshift of z=5.515 (top) (z=5.515 is a very distant galaxy), and the features in that spectrum at their rest wavelengths

Some galaxies show emission spectra, in particular from the light at the centre; we call these types of galaxies active galactic nuclei (AGNs), and quasars are now known to be a particular class of AGNs along with Seyfert galaxies and BL Lac galaxies. These AGNs also have spectral lines (but this time in emission) which allow us to determine the redshift of the host galaxy; this is how we are able to determine the redshifts of quasars.

Notice, there are no absorption lines or emission lines in the spectrum of the CMB. Not only is it a perfect blackbody spectrum, which shows beyond any doubt that it is produced by something at one particular temperature, but the absence of absorption or emission lines in the CMB also tells us that it does not come from galaxies.
## The extra-galactic background light

We have also, over the last few decades, determined the components of what is known as the extra-galactic background light, which just means the light coming from beyond our galaxy. When I say “light”, I don’t just mean visible light, but light from across the electromagnetic spectrum from gamma rays all the way down to radio waves. Here are the actual data of the extra-galactic background light (EGBL):

Actual measurements of the extra-galactic background light

Here is a cartoon (from Andrew Jaffe) which shows the various components of the EGBL.

The components of the extra-galactic background light

I won’t go through every component of this plot, but the UV, optical and CIB (Cosmic Infrared Background) are all from stars (hot, medium and cooler stars); but notice they are not blackbody in shape, they are broadened because they are the integrated light from many billions of stars at different temperatures. The CMB is a perfect blackbody, and notice that it is the largest component in the plot (the y-axis is what is called $\nu I_{\nu}$, which means that the vertical position of any point on the plot is an indicator of the energy in the photons at that wavelength (or frequency)). The energy of the photons from the CMB is greater than the energy of photons coming from all stars in all the galaxies in the Universe; even though each photon in the CMB carries very little energy (because they have such a long wavelength or low frequency).
The answer is simple: the photons in the CMB do not have enough energy to excite any electrons in any hydrogen or helium atoms (which is what 99% of the Universe is), and so no absorption lines are produced. However, the photons are able to excite very low energy rotational states in the Cyanogen molecule, and in fact this was first noticed in the 1940s, long before it was realised what was causing it.

Also, the CMB is affected as it passes through intervening clusters of galaxies towards us. The gas between galaxies in clusters is hot, at millions of Kelvin, and hence is ionised. The free electrons actually give energy to the photons from the CMB via a process known as inverse Compton scattering, and we are able to measure this small energy boost in the photons of the CMB as they pass through clusters. The effect is known as the Sunyaev-Zel’dovich effect, named after the two Russian physicists who first predicted it in the 1960s. We not only see the SZ effect where we know there are clusters, but we have also recently discovered previously unknown clusters because of the SZ effect!

I am not sure if I have answered Mark Robson’s question(s) to his satisfaction. Somehow I suspect that if I haven’t he will let me know!

## Red stars, white stars, blue stars

Anyone who has looked at the night-time sky for any length of time will have noticed that stars have different colours. Some stars are red, some are white, and some are blue. A nice example of this can be seen in the Orion constellation. Betelgeuse, the star in the top left hand corner, has a distinct red colour, Rigel, the star in the bottom right hand corner, has a distinct blue colour, and Saiph, the star in the bottom left hand corner, is white.

The Orion constellation. Betelgeuse in the top left is a distinct red colour, Rigel in the bottom right is a distinct blue colour, and Saiph in the bottom left is white.
## Stars have different temperatures

These differences in colour are due to stars having different surface temperatures. As the figure below shows, the blackbody curve peaks at different wavelengths depending on a star’s temperature, so a red star is cooler than a white star, which in turn is cooler than a blue star. This is because of Wien’s displacement law, which I discussed in this blog.

Stars with different surface temperatures will appear to have different colours. This is due to their blackbody spectra peaking at different wavelengths.

## Fraunhofer’s spectrum of the Sun

In 1817 the German scientist Joseph von Fraunhofer published a spectrum of the Sun showing the continuum spectrum which had long been familiar, but superimposed on this were a series of dark lines. These dark lines were shown by Kirchhoff and Bunsen in 1859 to be due to absorption lines being produced by different elements. I discussed absorption spectra in this blog. We now know that the gases which produce the absorption lines are in the atmosphere of the Sun. The visible surface of the Sun is called the photosphere, and it is the photosphere of the Sun which produces its blackbody spectrum. The overlying, thinner gases in the Sun’s atmosphere produce the numerous absorption lines which Fraunhofer saw.

The spectrum of the Sun sketched by Fraunhofer in 1814/15, showing the dark lines he observed, superimposed on the Sun’s continuum spectrum.

## The Harvard stellar classification scheme

In the 1880s the Harvard College Observatory, under the Directorship of Edward C. Pickering, set about gathering the spectra of thousands of stars. The initial catalogue was published as the Draper Catalogue of Stellar Spectra in 1890. Williamina Fleming classified the spectra using the letters of the alphabet, with stars with the strongest Hydrogen absorption lines having the designation A, then the next strongest B, then C etc.
The Harvard stellar classification scheme was originally in alphabetical order based on the strength of the hydrogen absorption lines of a star You can see in this figure how the strength of the Hydrogen absorption lines (e.g. the $H\alpha, H\beta \text{ and } H\gamma$ lines) are strongest for the A, B and F-type stars. In 1901 Annie Jump Cannon revised the system. First of all she dropped most of the letters, leaving A,B,F,G,K,M and O. Secondly she subdivided each of these into 10 divisions, so for example A0, A1, A2…. A9. Thirdly, she re-ordered the letters based on the stars’ surface temperatures, not the strength of the Hydrogen line, with the hottest stars first. This is what has led to the O,B,A,F,G,K,M (Oh Be A Fine Girl/Guy Kiss Me) system we have today. The O-class stars are the hottest, the B-class stars the next hottest, all the way down to the M-class stars which are the coolest. Annie Jump Cannon ## The strength of the lines of different elements and their ions The figure below shows the variation of the strength of the absorption lines of different elements and their ions as a function of temperature. Stars which have the strongest Hydrogen lines (A-class stars) have a surface temperature of around 10,000 K. The bright star Vega is an A0-class star, and is white in appearance. The Sun is a G2-type star, and its strongest lines are the Calcium II lines (singly ionised Calcium). The strengths of the absorption lines of different elements (and their ions) as a function of temperature. The nomenclature e.g. Ca II means “singly ionised Calcium”, He I means “neutral Helium”, He II means “singly ionised Helium”, etc. The reason the strength of the Hydrogen absorption lines peak at around 10,000 K whereas the singly ionised Calcium lines (Ca II) peak at around 6,000 K is something I will explain in a future blog.
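The temperature ordering behind the O,B,A,F,G,K,M sequence can be made quantitative with Wien's displacement law, $\lambda_{max} = b/T$ with $b \approx 2.898 \times 10^{-3}$ m K. A short sketch (the temperatures are rough representative values I have assumed, not figures from the post):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in m*K

def peak_wavelength_nm(T):
    """Wavelength (in nm) at which a blackbody at temperature T (in K) peaks."""
    return WIEN_B / T * 1e9

# rough representative surface temperatures for a few spectral classes
for cls, T in [("M-class (red, e.g. Betelgeuse)", 3500),
               ("G-class (the Sun)", 5800),
               ("A-class (white, e.g. Vega)", 10000),
               ("B-class (blue, e.g. Rigel)", 12000)]:
    print(f"{cls}: peaks near {peak_wavelength_nm(T):.0f} nm")
```

Hotter stars peak at shorter wavelengths, which is why Rigel looks blue while Betelgeuse looks red.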
https://coding-gym.org/challenges/fair-rations/
# Fair Rations

See the original problem on HackerRank.

You are the benevolent ruler of Rankhacker Castle, and today you’re distributing bread. Your subjects are in a line, and some of them already have some loaves. Times are hard and your castle’s food stocks are dwindling, so you must distribute as few loaves as possible according to the following rules:

1. Every time you give a loaf of bread to some person $$i$$, you must also give a loaf of bread to the person immediately in front of or behind them in the line (i.e., persons $$i+1$$ or $$i-1$$).
2. After all the bread is distributed, each person must have an even number of loaves.

Given the number of loaves already held by each citizen, find and print the minimum number of loaves you must distribute to satisfy the two rules above. If this is not possible, print NO.

For example, the people in line have loaves $$B=[4,5,6,7]$$. We can first give a loaf to $$i=3$$ and $$i=4$$ so $$B=[4,5,7,8]$$. Next we give a loaf to $$i=2$$ and $$i=3$$ and have $$B=[4,6,8,8]$$, which satisfies our conditions. We had to distribute $$4$$ loaves.

### Input Format

The first line contains an integer $$N$$, the number of subjects in the bread line. The second line contains $$N$$ space-separated integers $$B[i]$$.

### Constraints

• $$2 \leq N \leq 1000$$
• $$1 \leq B[i] \leq 10$$, where $$1 \leq i \leq N$$

### Output Format

Print a single integer that denotes the minimum number of loaves that must be distributed so that every person has an even number of loaves. If it’s not possible to do this, print NO.

## Solutions

First of all, we can observe that the parity of the sum of all the loaves never changes, because we always distribute 2 loaves at a time. This means that if the initial sum of the array is odd, we can never satisfy the rule that each person ends up with an even number of loaves. This is just an observation that will be useful later; we do not need to check whether the sum is odd in advance.
The strategy to solve the problem could be simply to iterate over the array and add one loaf to any person holding an odd number of loaves, together with one loaf to the following person.

At the end of the scan, if the last person holds an even number of loaves then the algorithm has distributed the minimum amount of loaves and, at the same time, all the people have an even number of loaves. On the other hand, if the last person has an odd number of loaves then the problem could not be solved and the answer is “NO”.

This is a possible encoding of such an idea in C++:

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>
using namespace std;

constexpr bool isodd(int i) { return i % 2 == 1; }

int main()
{
    int n; cin >> n;
    vector<int> v(n);
    copy_n(istream_iterator<int>(cin), n, begin(v));
    auto cnt = 0;
    for (auto i = 0; i < n - 1; ++i)
    {
        if (isodd(v[i]))
        {
            v[i + 1]++;
            cnt += 2;
        }
    }
    cout << (isodd(v.back()) ? "NO" : to_string(cnt));
}
```

The dual (iterating right to left) of the algorithm works exactly the same way:

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>
using namespace std;

constexpr bool isodd(int i) { return i % 2 == 1; }

int main()
{
    int n; cin >> n;
    vector<int> v(n);
    copy_n(istream_iterator<int>(cin), n, begin(v));
    auto cnt = 0;
    // backwards
    for (auto i = n - 1; i > 0; --i)
    {
        if (isodd(v[i]))
        {
            v[i - 1]++;
            cnt += 2;
        }
    }
    cout << (isodd(v.front()) ? "NO" : to_string(cnt));
}
```

Alternative solution, without if, by alepez:

```cpp
#include <iostream>
#include <string>

int main()
{
    int count = 0;
    int n; std::cin >> n;
    int l; std::cin >> l;
    while (--n) {
        int r; std::cin >> r;
        count += l & 1;
        l = r + (l & 1);
    }
    std::cout << ((l & 1) ? "NO" : std::to_string(count * 2)) << "\n";
}
```

The same idea in Haskell:

```haskell
main :: IO ()
main = do
  _ <- getLine
  input <- getContents
  let lst = read <$> words input :: [Int]
  let fn (cnt, l) r = if (odd l) then (cnt + 2, r + 1) else (cnt, r)
  let (cnt, l) = foldl fn (0, 0) lst
  putStrLn $ if (odd l) then "NO" else (show cnt)
```

Another approach by umuril_lyerood consists in iteratively calculating the distance between the odd values of the initial array. Basically, any odd value will “propagate” +1 until another odd value is found.
The idea is coded in Python here below:

```python
def fairRations(B):
    pane = 0     # "pane" (Italian for bread): distance accumulated between odd values
    indice = -1  # "indice" (index): position of the last unmatched odd value
    for i, person in enumerate(B):
        if person % 2 != 0:
            if indice == -1:
                indice = i
            else:
                pane += i - indice
                indice = -1
    if indice != -1:
        return "NO"
    return pane * 2  # we distribute 2 loaves every time
```

## Greedy nature and proof

The algorithm presented falls into the category of greedy algorithms: such algorithms construct a solution to the problem by always making the choice that looks best at the moment. A greedy algorithm never takes back its choices, but directly constructs the final solution. For this reason, greedy algorithms are usually very efficient.

The difficulty in designing greedy algorithms is to find a greedy strategy that always produces an optimal solution to the problem. The locally optimal choices in a greedy algorithm should also be globally optimal. It is often difficult to argue that a greedy algorithm works.

We can prove the algorithm is optimal and correct (credits to HackerRank's original editorial):

### The algorithm prints NO only if no valid distribution exists

At the beginning we observed that if the initial sum of the array is odd, we can never satisfy the rule that each person ends up with an even number of loaves. If we run our algorithm, the last element will have the same parity as the original sum of the array (because that parity never changes). The algorithm prints "NO" if the last element (or the first, in the dual version) is odd, meaning that the original sum was odd.

### The algorithm is optimal (it distributes the minimum amount of loaves)

Let's prove the dual of the original algorithm. There is no benefit in giving bread to the same pair of adjacent people more than once (observation: we could even map the problem from the space of "numbers" to the space of "booleans" - either even or odd - and the algorithm would not change). We prove this algorithm is optimal by induction.
Let $$i$$ be the index of the last odd value in the array (meaning that all the following elements are even).

We have 3 cases:

• such an index does not exist: the problem is already solved
• $$i=0$$: the problem cannot be solved since the sum is odd (all the following elements are even)
• $$i>0$$: the element at position $$i$$ can become even

We have two ways to make $$arr[i]$$ even:

1. add 1 to both $$arr[i]$$ and $$arr[i+1]$$, or
2. add 1 to both $$arr[i]$$ and $$arr[i-1]$$

Our algorithm chooses the latter and, since the next odd index $$j$$ is before $$i$$ ($$j<i$$), it works by induction.

We can prove that choosing the former is wrong: if we give bread to $$arr[i]$$ and $$arr[i+1]$$, then $$arr[i+1]$$ becomes odd (the values following $$i$$ are even). Then, to make it even, we have no choice but to give bread to $$arr[i+2]$$, and so on. At the end of the array we can never make the last element even! This means the choice is not correct.

We've worked on this challenge in these gyms: modena  padua  milan  turin
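The optimality argument can also be cross-checked mechanically on small inputs, comparing the greedy scan against an exhaustive search (a verification sketch, not from the original post; giving the same pair twice never helps, so 0 or 1 distributions per adjacent pair suffice):

```python
from itertools import product

def greedy(B):
    B, cnt = B[:], 0
    for i in range(len(B) - 1):          # the left-to-right scan from above
        if B[i] % 2 == 1:
            B[i + 1] += 1
            cnt += 2
    return None if B[-1] % 2 == 1 else cnt

def brute(B):
    best = None
    for moves in product((0, 1), repeat=len(B) - 1):   # per adjacent pair
        C = B[:]
        for i, k in enumerate(moves):
            C[i] += k
            C[i + 1] += k
        if all(c % 2 == 0 for c in C):
            total = 2 * sum(moves)
            best = total if best is None else min(best, total)
    return best

assert all(greedy(list(B)) == brute(list(B))
           for B in product(range(1, 5), repeat=4))
print("greedy matches the exhaustive search on all 4-person lines")
```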
https://nukephysik101.wordpress.com/tag/clebsch-gordon/
Review on rotation

The rotation of a vector in a vector space can be done by either rotating the basis vectors or the coordinates of the vector. Here, we always use a fixed basis for rotation. For a rigid body, the rotation can be accomplished using Euler rotations, or a rotation around an axis.

Whenever a transform preserves the norm of the vector, it is a unitary transform. Rotation preserves the norm, so it is a unitary transform and it can be represented by a unitary matrix. As for any unitary matrix, the eigenstates are a convenient basis for the vector space.

We will start from 2-D space. Within the 2-D space, we discuss rotation of vectors first and then of functions. Vector functions are not explicitly discussed, but they are touched on when discussing functions. Throughout, the eigenstate is a key concept, as it is a convenient basis. We skip the discussion of 3-D space; the connection between 2-D and 3-D space was already discussed in a previous post. At the end, we talk about direct product spaces.

In 2-D space, a vector is rotated by a transform R, and the representation matrix of R has eigenvalues $\exp(\pm i \omega)$ and eigenvectors

$\displaystyle \hat{e}_\pm = \mp \frac{ \hat{e}_x \pm i \hat{e}_y}{\sqrt{2}}$

If every vector is expanded as a linear combination of the eigenvectors, then the rotation can be done by simply multiplying by the eigenvalues.

Now, for a 2-D function, the rotation is done by a change of coordinates. However, the functional space is also a vector space, such that

1. $a f_1 + b f_2$ is still in the space,
2. a unit and an inverse of addition exist,
3. the norm can be defined on a suitable domain by $\int |f(x,y)|^2 dxdy$

For example, take the two functions $\phi_1(x,y) = x, \phi_2(x,y) = y$; the rotation can be done by the rotation matrix

$\displaystyle R = \begin{pmatrix} \cos(\omega) & -\sin(\omega) \\ \sin(\omega) & \cos(\omega) \end{pmatrix}$

And the products $x^2, y^2, xy$ also form a basis.
The rotation on this new basis is induced from the original rotation:

$\displaystyle R_2 = \begin{pmatrix} c^2 & s^2 & -2cs \\ s^2 & c^2 & 2cs \\ cs & -cs & c^2 - s^2 \end{pmatrix}$

where $c = \cos(\omega), s = \sin(\omega)$. The space is "3-dimensional" because $xy = yx$; otherwise, it would become "4-dimensional".

The 2-D function can also be expressed in polar coordinates, $f(r, \theta)$, and further decomposed into $g(r) h(\theta)$. How can we find the eigenfunctions for the angular part? One way is to use an operator that commutes with rotation, so that the eigenfunctions of the operator are also eigenfunctions of the rotation. An example is the Laplacian. The eigenfunctions of the 2-D Laplacian form the Fourier basis. Therefore, if we can express the function as a polynomial in $r^n (\exp(i n \theta) , \exp(-i n \theta))$, the rotation of the function is simply multiplication by the rotation matrix. The eigenfunctions are

$\displaystyle \phi_{nm}(\theta) = e^{i m \theta}, \quad m = \pm n$

The D-matrix of a rotation by $\omega$ (D for Darstellung, representation in German) is

$D^n_{mm'}(\omega) = \delta_{mm'} e^{i m \omega}$

The delta function of $m, m'$ indicates that a rotation does not mix the spaces. The transformation of the eigenfunctions is

$\displaystyle \phi_{nm}(\theta') = \sum_{m'} \phi_{nm'}(\theta) D^n_{m'm}(\omega)$

For example, $f(x,y) = x^2 + k y^2$, written in polar coordinates:

$\displaystyle f(r, \theta) = r^2 (\cos^2(\theta) + k \sin^2(\theta)) = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta)$

where $a_0 = 2 + 2k$, $a_{2+} = a_{2-} = 1-k$, and all other $a_{nm} = 0$. The rotation is

$\displaystyle f(r, \theta' = \theta + \omega ) = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta) D^n_{mm}(\omega) = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta + \omega)$

If we write the rotated function in Cartesian form,

$f(x',y') = x'^2 + k y'^2 = (c^2 + k s^2)x^2 + (s^2 + k c^2)y^2 + 2(k-1) c s\, x y$

where $c = \cos(\omega), s = \sin(\omega)$.
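The induced matrix $R_2$ can be verified symbolically (a quick sketch with sympy, not part of the original post):

```python
from sympy import symbols, cos, sin, Matrix, simplify, zeros

x, y, w = symbols('x y omega')
c, s = cos(w), sin(w)
xp, yp = c*x - s*y, s*x + c*y                  # rotated coordinates

# the induced rotation on the basis (x^2, y^2, xy)
R2 = Matrix([[c**2, s**2, -2*c*s],
             [s**2, c**2,  2*c*s],
             [c*s, -c*s,  c**2 - s**2]])

basis   = Matrix([x**2, y**2, x*y])
rotated = Matrix([xp**2, yp**2, xp*yp])

diff = (rotated - R2*basis).applyfunc(simplify)
assert diff == zeros(3, 1)
print("R2 reproduces the rotation of x^2, y^2, xy")
```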
In 3-D space, the same logic is still applicable. The spherical harmonics $Y_{lm}$ serve as the basis for the eigenvalue $l(l+1)$, and eigenspaces for different $l$ are orthogonal. This is an extension of the 2-D eigenfunctions $\exp(\pm n i \theta)$. A 3-D function can be expressed in spherical harmonics, and the rotation is simply multiplication by the Wigner D-matrix.

Above, we showed an example of a higher-order rotation induced by a product space. I call it the induced space (I am not sure whether that is the correct name), because the space is the same, but the order is higher. For a two-particle system, the direct product space is formed by the product of the bases of two distinct spaces (which could be identical). Some common direct product spaces are

• combining two spins
• combining two orbital angular momenta
• two-particle systems

Whether induced space or direct product space, their structures are very similar. In 3-D rotation, the two spaces and the direct product space are related by the Clebsch-Gordan coefficients. In 2-D rotation, as we can see from the above discussion, the coefficient is simply 1.

Let's use 2-D space to show the "induced product" space. For order $n=1$, the primary base contains only $x, y$. For $n=2$, the space has $x^2, y^2, xy$, but the linear combination $x^2 + y^2$ is unchanged after rotation. Thus, the size of the space reduces to $3-1 = 2$. For $n = 3$, the space has $x^3, y^3, x^2y, xy^2$; this time, the linear combination $x^3 + xy^2 = x(x^2+y^2)$ behaves like $x$ and $y^3 + x^2y$ behaves like $y$, so the size of the space reduces to $4 - 2 = 2$. For higher order, the total number of combinations $x^ay^b, a+b = n$ is $C^{n+1}_1 = n+1$, and we can find $n-1$ repeated combinations, thus the size of the irreducible space of order $n$ is always 2. For 3-D space, the number of combinations $x^ay^bz^c, a + b + c = n$ is $C^{n+2}_2 = (n+1)(n+2)/2$.
We can find $n(n-1)/2$ repeated combinations, thus the size of the irreducible space of order $n$ is always $2n+1$.

Product of Spherical Harmonics

One mistake I made is that

$\displaystyle Y_{LM} = \sum_{m_1 m_2} C_{j_1m_1j_2 m_2}^{LM} Y_{j_1m_1} Y_{j_2m_2}$

because

$\displaystyle |j_1j_2JM\rangle = \sum_{m_1m_2} C_{j_1m_1j_2 m_2}^{JM} |j_1m_1\rangle |j_2m_2\rangle$

but this application is wrong. The main reason is that $|j_1j_2JM\rangle$ is "living" in a tensor product space, while $|jm \rangle$ is living in the ordinary space. We can also see that the norm of the left side is 1, but the norm of the right side is not.

Using the Clebsch-Gordan series, we can deduce the product of spherical harmonics. First, we need to know the relationship between the Wigner D-matrix and spherical harmonics. Using the equation

$\displaystyle Y_{lm}(R(\hat{r})) = \sum_{m'} Y_{lm'}(\hat{r}) D_{m'm}^{l}(R)$

we can set $\hat{r} = \hat{z}$ and $R(\hat{z}) = \hat{r}$:

$Y_{lm}(\hat{z}) = Y_{lm}(0, 0) = \sqrt{\frac{2l+1}{4\pi}}\, \delta_{m0}$

Thus,

$\displaystyle Y_{lm}(\hat{r}) = \sqrt{\frac{2l+1}{4\pi}} D_{0m}^{l}(R) \Rightarrow D_{0m}^{l} = \sqrt{\frac{4\pi}{2l+1}} Y_{lm}(\hat{r})$

Now, recall the Clebsch-Gordan series,

$\displaystyle D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2} = \sum_{jm} \sum_{M} C_{j_1m_1j_2m_2}^{jM} C_{j_1N_1j_2N_2}^{jm} D_{Mm}^{j}$

Set $m_1 = m_2 = M = 0$:

$\displaystyle D_{0N_1}^{j_1} D_{0 N_2}^{j_2} = \sum_{jm} C_{j_10j_20}^{j0} C_{j_1N_1j_2N_2}^{jm} D_{0m}^{j}$

Renaming some labels,

$\displaystyle Y_{l_1m_1} Y_{l_2m_2} = \sum_{lm} \sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2l+1)}} C_{l_10l_20}^{l0} C_{l_1m_1l_2m_2}^{lm} Y_{lm}$

We can multiply both sides by $C_{l_1m_1l_2m_2}^{LM}$ and sum over $m_1, m_2$, using

$\displaystyle \sum_{m_1m_2} C_{l_1m_1l_2m_2}^{lm}C_{l_1m_1l_2m_2}^{LM} = \delta_{mM} \delta_{lL}$

to get

$\displaystyle \sum_{m_1m_2} C_{l_1m_1l_2m_2}^{LM} Y_{l_1m_1} Y_{l_2m_2} = \sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}} C_{l_10l_20}^{L0} Y_{LM}$

Clebsch-Gordon Series

One of the important identities in angular momentum theory is the Clebsch-Gordan series, which involves the Wigner D-matrix. The series is deduced by evaluating the following quantity in two ways:

$\langle j_1 m_1 j_2 m_2 | U(R) |j m \rangle$

If we act the rotation operator on $|jm\rangle$, we insert

$\displaystyle \sum_{M} |jM\rangle \langle jM| = 1$

$\displaystyle \sum_{M} \langle j_1 m_1 j_2 m_2|jM\rangle \langle jM| U(R) |jm\rangle = \sum_{M} C_{j_1m_1j_2m_2}^{jM} D_{Mm}^{j}$

If we act the rotation operator on $\langle j_1 m_1 j_2 m_2|$, we insert

$\displaystyle \sum_{N_1 N_2 } |j_1 N_1 j_2 N_2\rangle \langle j_1 N_1 j_2 N_2| = 1$

$\displaystyle \sum_{N_1 N_2} \langle j_1 m_1 j_2 m_2|U(R) | j_1 N_1 j_2 N_2\rangle \langle j_1 N_1 j_2 N_2| jm\rangle = \sum_{N_1N_2} C_{j_1N_1j_2N_2}^{jm} D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2}$

Thus,

$\displaystyle \sum_{N_1N_2} C_{j_1N_1j_2N_2}^{jm} D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2} = \sum_{M} C_{j_1m_1j_2m_2}^{jM} D_{Mm}^{j}$

We can multiply both sides by $C_{j_1 N_1 j_2 N_2}^{jm}$ and sum over $j, m$, using the orthonormality relation

$\displaystyle \sum_{jm} C_{j_1 N_1 j_2 N_2}^{jm} C_{j_1N_1'j_2N_2'}^{jm} = \delta_{N_1N_1'}\delta_{N_2N_2'}$

to obtain

$\displaystyle D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2} = \sum_{jm} \sum_{M} C_{j_1m_1j_2m_2}^{jM} C_{j_1N_1j_2N_2}^{jm} D_{Mm}^{j}$

Wigner-Eckart theorem

The mathematical form of the theorem is: given a tensor operator of rank $k$, $T^{(k)}$, its matrix elements between eigenstates $\left|j,m\right>$ of the total angular momentum $J$ are

$\displaystyle \langle j m | T_q^{(k)} | j' m' \rangle = \langle j' m' k q | j m \rangle \langle j \| T^{(k)} \| j' \rangle$

where $\langle j \| T^{(k)} \| j' \rangle$ is the reduced matrix element. The power of the theorem is that, once the reduced matrix element is calculated for the system in a particular (maybe the simplest) case, all other matrix elements can be calculated. The theorem works only in spherical symmetry. The states are eigenstates of total angular momentum. We can imagine that, when the system is rotated, there is something unchanged (which is the reduced matrix element).
The quantum numbers $m, m'$ define particular orientations of the states, and these "directions" contribute an additional factor, which is the Clebsch-Gordan coefficient.

Another application is the replacement theorem. For any two spherical tensors $A^{(k)}, B^{(k)}$ of rank $k$, using the theorem we have

$\displaystyle \langle j m | A_q^{(k)} | j m' \rangle = \frac{\langle j \| A^{(k)} \| j \rangle}{\langle j \| B^{(k)} \| j \rangle} \langle j m | B_q^{(k)} | j m' \rangle$

This can prove the projection theorem, which is about rank-1 tensors. $L, J$ are the orbital and total angular momentum respectively. The projection of $L$ on $J$ is

$\displaystyle L\cdot J = L_z J_z + \frac{1}{2}\left(L_+ J_- + L_- J_+\right)$

Taking the expectation value in the state $\left|j m\right>$,

$\displaystyle \left< L\cdot J\right> = \left< L_z J_z\right> + \frac{1}{2}\left(\left< L_+ J_-\right> + \left< L_- J_+\right>\right)$

Using the Wigner-Eckart theorem, the right side becomes

$\displaystyle \left< L \cdot J \right> = c_j \langle j \| L \| j \rangle$

where the coefficient $c_j$ only depends on $j$, as the dot product is a scalar, which is isotropic. Similarly,

$\displaystyle \left< J \cdot J \right> = c_j \langle j \| J \| j \rangle$

with the same $c_j$. Using the replacement theorem,

$\displaystyle \left< L \right> = \frac{\langle j \| L \| j \rangle}{\langle j \| J \| j \rangle} \left< J \right>$

Thus, we have

$\displaystyle \left< L \right> = \frac{\left< L\cdot J \right>}{\left< J\cdot J \right>} \left< J \right>$

As the state is arbitrary,

$\displaystyle L = \frac{L\cdot J}{J\cdot J} J$

which is the same as the classical vector projection.

Angular distribution of emission that carries angular momentum

Before the decay, the nucleus is in a state with total angular momentum J and symmetry-axis quantization M: $\Phi_{JM}$

Say the emitted radiation (it can be an EM wave or a particle) carries angular momentum $l$ and axis quantization $m$; its wavefunction is $\phi_{lm}$.

Then the daughter nucleus has angular momentum $j$ and $m_j$, with wave function $\Psi_{j m_j}$.

Their relation is:

$\Phi_{JM} = \sum_{m, m_j}{\phi_{lm} \Psi_{j m_j} \left< l m j m_j | JM \right>}$

where $\left< l m j m_j | JM \right>$ is the Clebsch-Gordan coefficient.
The wave function of the emitted radiation from a central interaction takes the form:

$\phi_{lm} = A_0 u_{nl}(r) Y_l^m(\theta,\phi)$

The angular distribution is:

$\displaystyle \int |\phi_{lm}|^2 d^3x = A_0^2 \int |u_{nl}|^2 r^2 dr\, |Y_l^m|^2$

For a detector at a fixed distance, the radial part is a constant. Moreover, not every spherical harmonic contributes the same weight; there is a weighting factor due to the Clebsch-Gordan coefficient. Thus, the angular distribution is proportional to:

$\displaystyle W(\theta) \propto \sum_{m}{|Y_l^m|^2 \left|\left< l m j m_j | JM \right>\right|^2 }, \quad m_j = M - m$

For example, take $JM=00$. The possible $(l, j)$ are $(0,0), (1,1), (2,2)$ and so on, with $m=-m_j$. The C-G coefficients are

$\langle 0000 | 00 \rangle = 1$

$\displaystyle \langle l\, m\, l\, {-m} | 00 \rangle = \frac{(-1)^{l-m}}{\sqrt{2l+1}}$

thus, with

$\displaystyle Y_0^0 = \frac{1}{\sqrt{4\pi}}$

we get

$\displaystyle \sum_{m}{\left|Y_l^m\right|^2 \left|\langle l\, m\, l\, {-m} |0 0 \rangle \right|^2 } = \frac{1}{2l+1}\sum_m|Y_l^m|^2 = \frac{1}{4\pi} = \mathrm{constant}$

Thus, the angular distribution is isotropic.

Clebsch – Gordan Coefficient II

As the last post discussed, finding the CG coefficients by recursion is not as straightforward as textbooks say. However, there is another way around, which is by diagonalization of $J^2$.

First we use the identity:

$J^2 = J_1^2+J_2^2 + 2 J_{1z} J_{2z} + J_{1+} J_{2-} + J_{1-} J_{2+}$

When we "matrix-ize" the operator, we have 2 choices of basis. One is $\left| j_1,m_1;j_2,m_2 \right>$, which gives a non-diagonal matrix through the $J_{\pm}$ terms. The other is $\left|j,m\right>$, which gives a diagonal matrix. Thus, we have 2 matrices, and we can diagonalize the non-diagonal one. This gives us the unitary transform P from the 2-j basis to the j basis, and that is our CG coefficient. Oh, don't forget to normalize the unitary matrix. I found this one much easier to compute.

Clebsch – Gordan Coefficient

I am kind of stupid, so in most textbooks' algebra examples I easily get lost in the middle. Thus, now I am going to present a detailed calculation based on recursion relations.
We just need one equation and a few observations to calculate everything. I like to use the $J_-$ relation:

$K(j,-m-1) C_{m_1 m_2}^{j m}= K(j_1,m_1) C_{m_1+1\, m_2}^{j\, m+1}+ K(j_2,m_2) C_{m_1\, m_2+1}^ {j\, m+1}$

$K(j,m) = \sqrt{ j(j+1) - m(m+1)}$

(note that $K(j,-m-1) = K(j,m)$), where $C_{m_1 m_2}^{j m}$ is the coefficient. Notice that the relation holds for fixed $j$; thus we have an $(m_1, m_2)$ plane for each fixed $j$, so we have many planes from $j = j_1+j_2$ down to $j = |j_1-j_2|$.

We have 2 observations:

1. $C_{j_1, j_2}^{j_1+j_2 , j_1+j_2} = 1$, which is the maximum state. The minimum state also equals 1.
2. For $m \ne m_1+m_2$ the coefficient is ZERO.

Thus, on the $j = j_1 + j_2$ plane the upper-right corner is 1. Then, using the relation, we can get all elements down and to the left, and from there all elements on the plane.

The problem comes when we consider the $j = j_1 + j_2 -1$ plane. No relation is working! And no book tells us how to find it!

Let's take an example, a super easy one: $j_1 = 1/2 , j_2 = 1/2$. The possible $j = 0, 1$, so we have 2 planes. The $j = 1$ plane is no big deal. But on the $j = 0$ plane there are only 2 coefficients, and we can just relate them and find they differ only by a sign; we have to use the orthonormality condition to find their value. See? I really doubt whether anybody really does the actual calculation. J. J. Sakurai just skips the $j = l - 1/2$ case. He cheats!

When going to a higher $j_1+j_2$ case, we have to use the $J_-$ relation to evaluate all coefficients. The way is to start from the lower-left corner and use the $J_-$ relation to find the relationship between the lower-diagonal coefficients. Then, since all lower-diagonal coefficients have the same $m$ value, the sum of their squares should be normalized to 1. Then we have our base line and use $J_+$ to find the rest.

I will add a graph and another example, say $j_1 = 3, j_2 = 1$.
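The diagonalization method of the "Clebsch – Gordan Coefficient II" post above can be made concrete for the easy case $j_1 = j_2 = 1/2$ (a numerical sketch with numpy, not part of the original posts):

```python
import numpy as np

# spin-1/2 operators, hbar = 1
sz = np.diag([0.5, -0.5])
sp = np.array([[0., 1.], [0., 0.]])   # S+
sm = sp.T                              # S-
I2 = np.eye(2)

# operators on the product basis |++>, |+->, |-+>, |-->
J1z, J2z = np.kron(sz, I2), np.kron(I2, sz)
J1p, J2p = np.kron(sp, I2), np.kron(I2, sp)
J1m, J2m = np.kron(sm, I2), np.kron(I2, sm)

# J^2 = J1^2 + J2^2 + 2 J1z J2z + J1+ J2- + J1- J2+, with J1^2 = J2^2 = 3/4
Jsq = 1.5*np.eye(4) + 2*J1z@J2z + J1p@J2m + J1m@J2p

evals, evecs = np.linalg.eigh(Jsq)
# eigenvalues j(j+1): one 0 (the j=0 singlet) and three 2 (the j=1 triplet)
print(np.round(np.sort(evals), 10))

# the j=0 eigenvector reproduces the singlet CG coefficients, +-1/sqrt(2)
singlet = evecs[:, np.argmin(evals)]
print(np.round(np.abs(singlet), 6))
```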
https://brilliant.org/discussions/thread/vertical-angles/
# Vertical Angles

Vertical angles (also known as vertically opposite angles) are the angles formed by two intersecting lines. The angles opposite each other in any such pair of intersecting lines are always equal.

Note by Arron Kau 2 years, 7 months ago
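A one-line justification of the equality (not in the original note): each angle in a vertical pair is supplementary to the same adjacent angle.

```latex
% a and c are vertical angles; b is adjacent to both, lying on a straight
% line with each of them:
a + b = 180^\circ, \qquad b + c = 180^\circ
\;\Longrightarrow\; a = 180^\circ - b = c.
```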
https://www.isr-publications.com/jmcs/articles-217-dynamical-systems-on-finsler-modules
# Dynamical Systems on Finsler Modules

Volume 4, Issue 1, pp 19--24

### Authors

M. Hassani - Department of Mathematics, Faculty of Sciences, Mashhad Branch, Islamic Azad University, Mashhad, Iran

### Abstract

In this paper we investigate generalized derivations and show that if $E$ is a simple full Finsler $A$-module and $\delta: D(\delta)\subseteq E\rightarrow E$ is a d-derivation, then either $\delta$ is closable or both of the sets $\{x\pm \delta(x): x\in E\}$ are dense in $E\oplus E$. We also describe dynamical systems on a full Finsler module $E$ over a $C^*$-algebra $A$ as a one-parameter group.

### Keywords

• Derivation
• Finsler module
• Hilbert A-module
• Dynamical systems

•  46H40
•  46L57
https://questions.examside.com/past-years/jee/question/let-cos-left-alpha-beta-right-4-over-5-and-sin-left-alpha-be-jee-main-2010-marks-4-qwyukxl9ugk1ivva.htm
1

### AIEEE 2010

MCQ (Single Correct Answer)

Let $$\cos \left( {\alpha + \beta } \right) = {4 \over 5}$$ and $$\sin \left( {\alpha - \beta } \right) = {5 \over {13}},$$ where $$0 \le \alpha ,\,\beta \le {\pi \over 4}.$$ Then $$\tan 2\alpha$$ =

A $${56 \over 33}$$
B $${19 \over 12}$$
C $${20 \over 7}$$
D $${25 \over 16}$$

## Explanation

$$\cos \left( {\alpha + \beta } \right) = {4 \over 5} \Rightarrow \tan \left( {\alpha + \beta } \right) = {3 \over 4}$$

$$\sin \left( {\alpha - \beta } \right) = {5 \over {13}} \Rightarrow \tan \left( {\alpha - \beta } \right) = {5 \over {12}}$$

$$\tan 2\alpha = \tan \left[ {\left( {\alpha + \beta } \right) + \left( {\alpha - \beta } \right)} \right] = {{{3 \over 4} + {5 \over {12}}} \over {1 - {3 \over 4} \cdot {5 \over {12}}}} = {{56} \over {33}}$$

2

### AIEEE 2009

MCQ (Single Correct Answer)

Let A and B denote the statements

A: $$\cos \alpha + \cos \beta + \cos \gamma = 0$$
B: $$\sin \alpha + \sin \beta + \sin \gamma = 0$$

If $$\cos \left( {\beta - \gamma } \right) + \cos \left( {\gamma - \alpha } \right) + \cos \left( {\alpha - \beta } \right) = - {3 \over 2},$$ then:

A A is false and B is true
B both A and B are true
C both A and B are false
D A is true and B is false

## Explanation

$${\left( {\cos \alpha + \cos \beta + \cos \gamma } \right)^2} + {\left( {\sin \alpha + \sin \beta + \sin \gamma } \right)^2}$$ $$= 3 + 2\left[ {\cos \left( {\beta - \gamma } \right) + \cos \left( {\gamma - \alpha } \right) + \cos \left( {\alpha - \beta } \right)} \right] = 3 + 2\left( { - {3 \over 2}} \right) = 0$$

A sum of two squares is zero only if both terms vanish, so both A and B are true.

3

### AIEEE 2006

MCQ (Single Correct Answer)

The number of values of $$x$$ in the interval $$\left[ {0,3\pi } \right]$$ satisfying the equation $$2{\sin ^2}x + 5\sin x - 3 = 0$$ is

A 4
B 6
C 1
D 2

## Explanation

$$2{\sin ^2}x + 5\sin x - 3 = 0 \Rightarrow \left( {\sin x + 3} \right)\left( {2\sin x - 1} \right) = 0$$

$$\sin x = {1 \over 2}$$ (and $$\sin x \ne - 3$$, which is impossible).

Given that $$x \in \left[ {0,3\pi } \right]$$, the possible values of x are $$30^\circ$$, $$150^\circ$$, $$390^\circ$$, $$510^\circ$$. That means x has 4 values.
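A quick numerical cross-check of the two worked solutions above (a sketch, not part of the original page):

```python
from math import acos, asin, tan, sin, pi

# AIEEE 2010: cos(a+b) = 4/5 and sin(a-b) = 5/13 with 0 <= a, b <= pi/4
apb = acos(4/5)                     # a + b, in the first quadrant
amb = asin(5/13)                    # a - b
tan2a = tan(apb + amb)              # tan(2a) = tan((a+b) + (a-b))
print(abs(tan2a - 56/33) < 1e-9)    # True

# AIEEE 2006: count sign changes of (sin x + 3)(2 sin x - 1) on [0, 3*pi]
f = lambda x: (sin(x) + 3) * (2*sin(x) - 1)
xs = [k * 3*pi/100000 for k in range(100001)]
roots = sum(1 for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0)
print(roots)                        # 4
```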
4

### AIEEE 2006

MCQ (Single Correct Answer)

If $$0 < x < \pi$$ and $$\cos x + \sin x = {1 \over 2},$$ then $$\tan x$$ is

A $${{\left( {1 - \sqrt 7 } \right)} \over 4}$$
B $${{\left( {4 - \sqrt 7 } \right)} \over 3}$$
C $$- {{\left( {4 + \sqrt 7 } \right)} \over 3}$$
D $${{\left( {1 + \sqrt 7 } \right)} \over 4}$$

## Explanation

$$\cos x + \sin x = {1 \over 2} \Rightarrow {\left( {\cos x + \sin x} \right)^2} = {1 \over 4}$$

$$\Rightarrow {\cos ^2}x + {\sin ^2}x + 2\cos x\sin x = {1 \over 4}$$

$$\left[ \because {{{\cos }^2}x + {{\sin }^2}x = 1\, \,and \,\,2\cos x\sin x = \sin 2x} \right]$$

$$\Rightarrow 1 + \sin 2x = {1 \over 4} \Rightarrow \sin 2x = - {3 \over 4},$$ so $$x$$ is obtuse, and

$${{2\tan x} \over {1 + {{\tan }^2}x}} = - {3 \over 4} \Rightarrow 3{\tan ^2}x + 8\tan x + 3 = 0$$

$$\therefore$$ $$\tan x = {{ - 8 \pm \sqrt {64 - 36} } \over 6} = {{ - 4 \pm \sqrt 7 } \over 3}$$

Both roots are negative, so we need one more condition: since $$\cos x + \sin x = {1 \over 2} > 0$$ with $$\cos x < 0$$, we must have $$\sin x > \left| {\cos x} \right|$$, i.e. $$\left| {\tan x} \right| > 1$$.

$$\therefore$$ $$\tan x = {{ - 4 - \sqrt 7 } \over 3}$$
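The choice between the two negative roots can be double-checked numerically (a sketch, not part of the original solution):

```python
from math import atan, sqrt, pi, sin, cos

def check(t):
    """Place x = atan(t) in (pi/2, pi) and test cos x + sin x = 1/2."""
    x = atan(t) + pi               # both candidate roots are negative
    return abs(sin(x) + cos(x) - 0.5) < 1e-9

print(check(-(4 + sqrt(7)) / 3))   # True:  the accepted root
print(check((-4 + sqrt(7)) / 3))   # False: the rejected root gives -1/2
```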
https://javalab.org/en/one_side_of_the_moon_en/
# Why do we see only one side of the moon?

Why do we always see the same side of the moon? Because the moon's rate of rotation and its rate of revolution are the same.
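A tiny sketch (not part of the original page) of why equal rates imply a fixed near side: the lunar longitude pointing at the Earth stays constant when the spin rate matches the orbital rate.

```python
import math

def facing_longitude(t, spin_rate, orbit_rate):
    """Lunar longitude pointing at the Earth at time t (circular orbit)."""
    return (orbit_rate * t - spin_rate * t) % (2 * math.pi)

rate = 2 * math.pi / 27.3            # one turn per sidereal month, in days
synchronous = {round(facing_longitude(t, rate, rate), 9) for t in range(28)}
print(len(synchronous))              # 1: the same side always faces the Earth

different = {round(facing_longitude(t, 2 * rate, rate), 9) for t in range(28)}
print(len(different) > 1)            # True: otherwise the far side shows
```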
https://drchristiansalas.com/category/miscellaneous-notes-involving-primes/
Unexpected appearance of Pythagorean triples in a homeomorphism between the 1-sphere and the extended real line I was thinking about various kinds of mappings of prime numbers and wondered in particular what prime numbers would look like when projected from the (extended) real line to the 1-sphere by a homeomorphism linking these two spaces. When I did the calculations I was amazed to find that prime numbers are mapped to a family of Pythagorean triples on the 1-sphere! This came as a complete surprise to me but I later learned that the link between stereographic projection and Pythagorean triples is already well known. Nevertheless, in this note I want to quickly record how I stumbled on this result. Consider the three points $N$, $Q$ and $P$ in the diagram. Since they are collinear there must be a scalar $t$ such that $P = N + t(Q - N)$ Writing this equation in vector form we get $(x, 0) = (0, 1) + t\big((x_1, x_2) - (0, 1) \big)$ $= (x_1 t, t(x_2 - 1) + 1)$ from which we deduce $x = x_1 t$ $\implies t = \frac{x}{x_1}$ and $t(x_2 - 1) + 1 = 0$ $\implies t = \frac{1}{1 - x_2}$ Equating these two expressions for $t$ we get $\frac{x}{x_1} = \frac{1}{1 - x_2}$ $\implies x = \frac{x_1}{1 - x_2}$ The function $\pi(x_1, x_2) = \frac{x_1}{1 - x_2}$ is the homeomorphism which maps points on the 1-sphere to points on the extended real line. I was more interested in the inverse function $\pi^{-1}(x)$ which maps points on the extended real line to the 1-sphere. 
To find this, observe that

$x^2 + 1 = \frac{x_1^2}{(1 - x_2)^2} + \frac{(1 - x_2)^2}{(1 - x_2)^2}$

Using $x_1^2 + x_2^2 = 1$ we have $x_1^2 = 1 - x_2^2$, so the above equation can be written as

$x^2 + 1 = \frac{1 - x_2^2}{(1 - x_2)^2} + \frac{1 - 2x_2 + x_2^2}{(1 - x_2)^2} = \frac{2 - 2x_2}{(1 - x_2)^2} = \frac{2}{1 - x_2}$

Therefore

$x^2 + 1 = \frac{2}{1 - x_2} \implies x_2 = \frac{x^2 - 1}{x^2 + 1}$

and then

$x = \frac{x_1}{1 - x_2} \implies x_1 = \frac{2x}{x^2 + 1}$

Therefore if $x$ is a prime, the corresponding point on the 1-sphere is

$\bigg( \frac{2x}{x^2 + 1}, \frac{x^2 - 1}{x^2 + 1} \bigg)$

However, the numbers $2x$, $x^2 - 1$ and $x^2 + 1$ then form a Pythagorean triple, as can easily be demonstrated by checking that they satisfy the identity

$(2x)^2 + (x^2 - 1)^2 \equiv (x^2 + 1)^2$

A Euclidean-like division algorithm for converting prime reciprocals into ternary numbers

For the purposes of some work I was doing involving ternary numbers, I became interested in finding a quick and easily programmable method for converting prime reciprocals into ternary representation. By trial and error I found a Euclidean-like division algorithm which works well, as illustrated by the following application to the calculation of the ternary representation of the prime reciprocal $\frac{1}{7}$:

$3 = 0 \times 7 + 3$
$9 = 1 \times 7 + 2$
$6 = 0 \times 7 + 6$
$18 = 2 \times 7 + 4$
$12 = 1 \times 7 + 5$
$15 = 2 \times 7 + 1$

The first equation always has $3$ on the left-hand side, and the remainder in each equation is multiplied by $3$ to obtain the left-hand side of the next equation. The procedure halts when a remainder equal to $1$ is obtained. From then on, the pattern of the coefficients of the prime denominator will repeat indefinitely.
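The division procedure just described can be mechanized directly; a minimal sketch (the function name is mine), assuming a prime denominator $p > 3$ so that a remainder of $1$ is eventually reached:

```python
def ternary_digits(p):
    """Run the Euclidean-like division algorithm for 1/p in base 3,
    returning the coefficients up to the first remainder equal to 1."""
    digits, remainder = [], 1
    while True:
        remainder *= 3                 # left-hand side of the next equation
        digits.append(remainder // p)  # coefficient of p in this equation
        remainder %= p                 # remainder carried forward
        if remainder == 1:             # halting condition: cycle complete
            return digits

print(ternary_digits(7))  # -> [0, 1, 0, 2, 1, 2], i.e. the cycle of 1/7 in base 3
```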
In the above example a remainder of $1$ is obtained at the sixth step, and the pattern will repeat from then on, so looking at the coefficients of $7$ we conclude that the ternary representation of $\frac{1}{7}$ is $0.\dot{0}1021\dot{2}$, where the dot notation indicates that the string of digits between the dots repeats indefinitely. This is completely analogous to the fact that the decimal (base-10) representation of $\frac{1}{7}$ has the repeating cycle structure $0.\dot{1}4285\dot{7}$.

Note that the cycle length is $6$ in the ternary representation of $\frac{1}{7}$ precisely because $6$ is the order of $3$ modulo $7$. (The order of an integer modulo a prime $p$ is the lowest power of the integer which is congruent to $1$ mod $p$. A well-known theorem of Fermat guarantees that the order of any integer modulo a prime $p$ is at most $p - 1$.)

A further illustration is the computation of the ternary form of $\frac{1}{13}$, for which the division algorithm yields

$3 = 0 \times 13 + 3$
$9 = 0 \times 13 + 9$
$27 = 2 \times 13 + 1$

before the remainder repeats, giving the ternary representation $0.\dot{0}0\dot{2}$. The cycle length for $\frac{1}{13}$ is $3$, which is due to the fact that $3$ is the order of $3$ mod $13$. In general, for a prime reciprocal $\frac{1}{p}$, the ternary cycle length will always be the order of $3$ mod $p$, and the cycle will repeat from the first digit after the point.

To implement the above algorithm I wrote a Python program which reads a list of primes from a text file (in this case, the first $100$ primes greater than $3$), then computes the ternary representation of the corresponding prime reciprocals, depositing the output in another text file. The output for each prime reciprocal is the first cycle of ternary digits (the cycle repeats indefinitely thereafter), as well as a statement of the cycle length.
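A sketch of a program with the behaviour just described (the function names, file names and output format here are my placeholders, not the author's originals):

```python
def ternary_cycle(p):
    """First cycle of ternary digits of 1/p, for a prime p > 3."""
    digits, remainder = [], 1
    while True:
        remainder *= 3
        digits.append(remainder // p)
        remainder %= p
        if remainder == 1:  # cycle complete; it repeats from here on
            return digits

def write_ternary_reciprocals(in_path, out_path):
    """Read one prime per line from in_path and write the first ternary
    cycle of each reciprocal, with its cycle length, to out_path."""
    with open(in_path) as f:
        primes = [int(line) for line in f if line.strip()]
    with open(out_path, "w") as out:
        for p in primes:
            cycle = ternary_cycle(p)
            out.write("1/%d = 0.(%s) in base 3, cycle length %d\n"
                      % (p, "".join(map(str, cycle)), len(cycle)))

# Example usage: write_ternary_reciprocals("primes.txt", "ternary_output.txt")
```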
The full output from running the program with the first hundred primes greater than $3$ is shown here.

Note on a Computer Search for Primes in Arithmetic Progression by Weintraub (1977)

A paper by Weintraub (S. Weintraub, "Primes in arithmetic progression", BIT Numerical Mathematics, Vol. 17, No. 2, 1977, pp. 239-243) implements a computer search for primes in arithmetic progression (PAPs). It refers to a number $N$ which is set to what seems at first sight to be an arbitrary value of 16680. In this note I want to try to bring out some maths underlying this choice of $N$ in Weintraub's paper, and also record a brute-force implementation I carried out in Microsoft Excel of an adjustment factor in an asymptotic formula by Grosswald (1982) which yields the number of PAPs less than or equal to some specified number.

The number of prime arithmetic sequences of a given length that one can hope to find is determined by the chosen values of $m$ and $N$ in Weintraub's paper. On page 241 of his paper, Weintraub says: "…it is likely that with $m = 510510$ [and $N = 16680$] there exist between 20-30 prime sequences of 16 terms…"

I was pleased to find that Weintraub's estimate of 20-30 agrees with an asymptotic formula obtained later by Grosswald (in 1982), building on a conjecture by Hardy and Littlewood. The number of $q$-tuples of primes $p_1, \ldots, p_q$ in arithmetic progression, all of whose terms are less than or equal to some number $x$, was conjectured by Grosswald to be asymptotically equal to

$\frac{D_q x^2}{2(q-1)(\log x)^q}$

where the factor $D_q$ is

$\prod_{p > q} \bigg[\bigg( \frac{p}{p-1}\bigg)^{q-1} \cdot \frac{p - (q-1)}{p} \bigg] \times \prod_{p \leq q} \frac{1}{p}\bigg(\frac{p}{p-1}\bigg)^{q-1}$

When $q = 16$ as in Weintraub's paper, we get $D_{16} = 55651.46255350$ (see below). Using $m = 510510$ and $N = 16680$, Weintraub said on page 241 that one gets an upper prime limit of around $8 \times 10^9$ (Weintraub actually said: "approximately $7.7 \times 10^9$").
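Grosswald's asymptotic formula is straightforward to evaluate numerically; a minimal sketch using the values quoted in this note (the function name is mine):

```python
import math

def grosswald_count(q, D_q, x):
    """Grosswald's asymptotic count of q-term prime arithmetic
    progressions with all terms <= x: D_q * x^2 / (2(q-1) (log x)^q)."""
    return D_q * x**2 / (2 * (q - 1) * math.log(x)**q)

estimate = grosswald_count(16, 55651.46255350, 8e9)
print(round(estimate))  # -> 22, squarely inside Weintraub's 20-30 estimate
```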
Plugging the numbers $q = 16$, $D_{16} = 55651.46255350$ and $x = 8 \times 10^9$ into the first formula above, we get the answer 22, i.e., there are 22 prime sequences of 16 terms, in line with Weintraub's estimate of 20-30 on page 241 of his paper. Weintraub was clearly aware that $N = 16680$ (in conjunction with $m = 510510$) would make approximately 20-30 prime sequences of 16 terms available to his search.

The adjustment factor $D_{16} = 55651.46255350$ above can be obtained to a high degree of accuracy using a series approximation involving the zeta function (see, e.g., Caldwell, 2000, preprint, p. 11). However, I wanted to see how accurately one could calculate it by using the first one hundred thousand primes directly in Grosswald's formula for $D_q$ above. I did this in a Microsoft Excel sheet and got the answer $D_{16} = 55651.76179$. Directly using Grosswald's formula for $D_q$, it was not possible to get an estimate of $D_{16}$ accurate to the first decimal place even with one hundred thousand primes.

A note on finding more base-3 palindromic primes of the form 1 + 3^k + (3^k)^2

I stumbled across base-3 palindromes of the form $1 + 3^k + (3^k)^2$ for $k \in \mathbb{N}$ and became especially interested in them when I realised the reciprocal of any such number must belong to the middle-third Cantor set. In particular, primes of this form will be Cantor primes, which I have explored before. The only examples of primes of this form I am aware of are 13 (corresponding to $k = 1$) and 757 (corresponding to $k = 3$).

I am very interested in finding more primes of this base-3 palindromic form, if there are any, or if not, I would like to see a mathematical argument which shows there cannot be any more. (Since the question of whether or not there are infinitely many palindromic primes in general is a major open problem, it would be a major breakthrough to prove there are infinitely many primes of this particular palindromic form.)
At the very least, it would be nice to establish some conditions on $k$ which would eliminate a lot of cases in which $1 + 3^k + (3^k)^2$ cannot be prime. I have noticed the following four properties, which I will document here:

Result 1. All base-3 palindromes of the form $1 + 3^k + (3^k)^2$ for $k \in \mathbb{N}$ are such that their reciprocal belongs to the middle-third Cantor set. Therefore, in particular, all primes of this form are Cantor primes.

Proof: We have

$1 + 3^k + (3^k)^2 = \frac{3^{3k} - 1}{3^k - 1}$

Therefore, writing $p = 1 + 3^k + (3^k)^2$, we have

$\frac{1}{p} = \frac{3^k - 1}{3^{3k} - 1}$

Using the facts that

$\frac{3^k - 1}{2} = 1 + 3 + 3^2 + \cdots + 3^{k-1}$

and

$\frac{1}{3^{3k} - 1} = \frac{1}{3^{3k}} + \frac{1}{(3^{3k})^2} + \cdots$

we can write

$\frac{1}{p} = 2(1 + 3 + 3^2 + \cdots + 3^{k-1})\bigg\{\frac{1}{3^{3k}} + \frac{1}{(3^{3k})^2} + \cdots\bigg\}$

This is an expression for $1/p$ which corresponds to a base-3 representation involving only the digits 0 and 2, and therefore $1/p$ must belong to the middle-third Cantor set. QED

Result 2. The base-3 palindrome $1 + 3^k + (3^k)^2$ will be divisible by $1 + 3 + 3^2 = 13$ if and only if $\gcd(k, 3) = 1$. Therefore, apart from the case $k = 1$ (where the palindrome is $13$ itself), the base-3 palindrome $1 + 3^k + (3^k)^2$ cannot be a Cantor prime if $k$ is not a multiple of 3.

Proof: Begin by considering the cyclotomic polynomial

$\Phi_3(x) = 1 + x + x^2$

This has as its roots the two primitive cube roots of unity

$w = \exp(i2\pi/3) = (-1 + \sqrt{3}i)/2$

and

$w^2 = \exp(i4\pi/3) = (-1 - \sqrt{3}i)/2$

If $w$ is a root of $1 + x^k + (x^k)^2$ then so is $w^2$ (since roots of real polynomials always occur in conjugate pairs), and in this case it must be that $1 + x + x^2$ is a factor of $1 + x^k + (x^k)^2$. Now there are three possibilities for $k$:

I. $k = 3s$ for some $s \in \{0, 1, 2, \ldots \}$.
In this case the polynomial $1 + x^k + (x^k)^2$ becomes

$1 + x^{3s} + x^{6s}$

Setting $x = w$ we get

$1 + w^{3s} + w^{6s} = 1 + 1 + 1 = 3$

so $w$ is not a root of $1 + x^k + (x^k)^2$ here.

II. $k = 3s + 1$. In this case the polynomial becomes

$1 + x^{3s+1} + x^{6s+2}$

Setting $x = w$ we get

$1 + w + w^2 = 0$

so $w$ is a root of $1 + x^k + (x^k)^2$ here.

III. $k = 3s + 2$. In this case the polynomial becomes

$1 + x^{3s+2} + x^{6s+4}$

Setting $x = w$ we get

$1 + w^2 + w^4 = 1 + w^2 + w = 0$

so $w$ is a root of $1 + x^k + (x^k)^2$ here.

There are no other possibilities, so if we set $x = 3$ in the polynomials $1 + x + x^2$ and $1 + x^k + (x^k)^2$ the result is proved. QED

Result 3. The base-3 palindrome $1 + 3^k + (3^k)^2$ will be divisible by some number of the form $1 + 3^{3^r} + (3^{3^r})^2$, where $r \in \{0, 1, 2, \ldots\}$, if $k$ is not a power of 3. Therefore the base-3 palindrome $1 + 3^k + (3^k)^2$ cannot be a Cantor prime if $k$ is not a power of 3.

Proof: Consider the polynomial $1 + x^k + (x^k)^2$ and suppose that $k$ is not a power of 3. Then we can write $k = 3^r \cdot s$ for some integer $s > 1$ such that $\gcd(s, 3) = 1$. Then

$1 + x^k + (x^k)^2 = 1 + y^s + (y^s)^2$

where $y = x^{3^r}$, and the result now follows immediately from Result 2 and by setting $x = 3$. QED

Result 4. Any base-3 palindromic prime of the form $1 + 3^k + (3^k)^2$ will be of the type $4q + 1$ for $q \in \mathbb{N}$, and can therefore be expressed as a sum of two perfect squares $m^2 + n^2$ where $m, n \in \mathbb{N}$.

Proof: If $1 + 3^k + (3^k)^2$ is prime then by Result 3 $k$ must be a power of 3 and hence odd. Odd powers of 3 are always of the form $4q + 3$ and even powers of 3 are always of the form $4q + 1$. Therefore $1 + 3^k + (3^k)^2$ is a sum of one number of each of these two forms, plus the number 1, and this must always give a number of the form $4q + 1$.
It is a well known theorem of Fermat that all primes of the form $4q + 1$ can be expressed as a sum of two perfect squares. QED
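The results above lend themselves to a quick numerical check for small $k$; a brief sketch (the helper name `f` is mine):

```python
def f(k):
    """The base-3 palindrome 1 + 3^k + (3^k)^2."""
    return 1 + 3**k + 3**(2 * k)

# Result 2: 13 divides f(k) exactly when k is not a multiple of 3
for k in range(1, 31):
    assert (f(k) % 13 == 0) == (k % 3 != 0)

# The two known primes of this form
assert f(1) == 13 and f(3) == 757

# Result 4: both known primes are of the form 4q + 1 and sums of two squares
assert 13 % 4 == 1 and 13 == 2**2 + 3**2
assert 757 % 4 == 1 and 757 == 9**2 + 26**2
print("all checks passed")
```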
https://www.springerprofessional.de/en/multimedia-modeling/17527198
The two-volume set LNCS 11961 and 11962 constitutes the thoroughly refereed proceedings of the 25th International Conference on MultiMedia Modeling, MMM 2020, held in Daejeon, South Korea, in January 2020. Of the 171 submitted full research papers, 40 papers were selected for oral presentation and 46 for poster presentation; 28 special session papers were selected for oral presentation and 8 for poster presentation; in addition, 9 demonstration papers and 6 papers for the Video Browser Showdown 2020 were accepted. The papers of LNCS 11961 are organized in the following topical sections: audio and signal processing; coding and HVS; color processing and art; detection and classification; face; image processing; learning and knowledge representation; video processing; poster papers. The papers of LNCS 11962 are organized in the following topical sections: poster papers; AI-powered 3D vision; multimedia analytics: perspectives, tools and applications; multimedia datasets for repeatable experimentation; multi-modal affective computing of large-scale multimedia data; multimedia and multimodal analytics in the medical domain and pervasive environments; intelligent multimedia security; demo papers; and VBS papers.

### Correction to: MultiMedia Modeling

The original version of this book was revised. Due to a technical error, the first volume editor did not appear in the volumes of the MMM 2020 proceedings. This was corrected and the first volume editor was added.

Yong Man Ro, Wen-Huang Cheng, Junmo Kim, Wei-Ta Chu, Peng Cui, Jung-Woo Choi, Min-Chun Hu, Wesley De Neve

### Light Field Reconstruction Using Dynamically Generated Filters

Densely-sampled light fields have already shown unique advantages in applications such as depth estimation, refocusing, and 3D presentation, but they are difficult and expensive to acquire. Commodity portable light field cameras, such as Lytro and Raytrix, are easy to carry and easy to operate.
However, due to the camera design, there is a trade-off between spatial and angular resolution, which cannot both be sampled densely at the same time. In this paper, we present a novel learning-based light field reconstruction approach to increase the angular resolution of a sparsely-sampled light field image. Our approach treats the reconstruction problem as a filtering operation on the sub-aperture images of the input light field and uses a deep neural network to estimate the filtering kernels for each sub-aperture image. Our network adopts a U-Net structure to extract feature maps from the input sub-aperture images and the angular coordinates of the novel view; a filter-generating component is then designed for kernel estimation. We compare our method with existing light field reconstruction methods with and without depth information. Experiments show that our method obtains much better results both visually and quantitatively.

Xiuxiu Jing, Yike Ma, Qiang Zhao, Ke Lyu, Feng Dai

### Speaker-Aware Speech Emotion Recognition by Fusing Amplitude and Phase Information

The use of a convolutional neural network (CNN) for extracting deep acoustic features from spectrograms has become one of the most commonly used methods for speech emotion recognition. In those studies, however, common amplitude information is chosen as input with no special attention to phase-related or speaker-related information. In this paper, we propose a multi-channel method employing amplitude and phase channels for speech emotion recognition. Two separate CNN channels are adopted to extract deep features from amplitude spectrograms and modified group delay (MGD) spectrograms. A concatenation layer is then used to fuse the features. Furthermore, to gain more robust features, speaker information is considered in the stage of emotional feature extraction. Finally, the fused features, which take speaker-related information into account, are fed into an extreme learning machine (ELM) to distinguish emotions.
Experiments are conducted on the Emo-DB database to evaluate the proposed model. Results demonstrate an average F1 score of 94.82%, which significantly outperforms the baseline CNN-ELM model based on amplitude-only spectrograms, with a 39.27% relative error reduction.

Lili Guo, Longbiao Wang, Jianwu Dang, Zhilei Liu, Haotian Guan

### Gen-Res-Net: A Novel Generative Model for Singing Voice Separation

Modeling in the time-frequency domain is the most common approach to singing voice separation, since frequency characteristics differ between different sources. During the past few years, applying recurrent neural networks (RNNs) to series of split spectrograms has been the approach most often adopted by researchers to tackle this problem. Recently, however, the U-net's success has drawn the focus to treating the spectrogram as a 2-dimensional image with an auto-encoder structure, which indicates that some useful methods in image analysis may help solve this problem. Under this scenario, we propose a novel spectrogram-generative model, inspired by Residual blocks, Squeeze and Excitation blocks and WaveNet, to separate the two sources in the time-frequency domain. We apply non-reduced-size Residual blocks together with Squeeze and Excitation blocks in the main stream to extract features of the input spectrogram, while gathering the output layers in a skip-connection structure as used in WaveNet. Experimental results on two datasets (MUSDB18 and ccMixter) show that our proposed network performs better than the current state-of-the-art approach working on spectrograms of mixtures, the deep U-net structure.

Congzhou Tian, Hangyu Li, Deshun Yang, Xiaoou Chen

### A Distinct Synthesizer Convolutional TasNet for Singing Voice Separation

Deep learning methods have been used for music source separation for several years and have proved to be very effective.
Most of them choose the Fourier Transform as the front-end process to get a spectrogram representation, though this has its drawbacks. Perhaps the spectrogram representation is well suited for humans to understand sounds, but it is not necessarily the best representation for powerful neural networks performing singing voice separation. TasNet (Time-domain Audio Separation Network) has been proposed recently to solve monaural speech separation in the time domain by modeling each source as a weighted sum of a common set of basis signals. The fully-convolutional TasNet proposed more recently achieves great improvements in speech separation. In this paper, we first show convolutional TasNet can also be used in singing voice separation and brings about improvements on the DSD100 dataset in the singing voice separation task. Then, based on the fact that in singing voice separation the difference between the singing voice and the accompaniment is far more remarkable than the difference between the voices of two different people in speech separation, we employ separate sets of basis signals and separate encoder outputs for the singing voice and the accompaniment respectively, yielding a further improved model, the distinct synthesizer convolutional TasNet (ds-cTasNet).

Congzhou Tian, Deshun Yang, Xiaoou Chen

### Exploiting the Importance of Personalization When Selecting Music for Relaxation

Listening to music is not just a hobby or a leisure activity, but rather a way to achieve a specific emotional or psychological state, or even to better perform an activity, e.g., relaxation. Therefore, making the right choice of music for this purpose is fundamental. In the area of Music Information Retrieval (MIR), many works try to classify, in a general way, songs that are better suited to certain activities/contexts, but there is a lack of works that seek first to answer whether personalization is an essential criterion for the selection of songs in this context.
Thus, in order to investigate whether personalization plays a vital role in this context, more specifically in relaxation, we: (i) analyze more than 60 thousand playlists created by more than 5 thousand users from the 8tracks platform; (ii) extract high- and low-level features from the songs of these playlists; (iii) create a user perception based on these features; (iv) identify user groups by their perceptions; (v) analyze the contrasts between these groups, comparing their most relevant features. Our results help to understand how personalization is essential in the context of relaxing music, paving the way for more informed MIR systems.

### An Efficient Encoding Method for Video Compositing in HEVC

Video compositing for compressed HEVC streams is in high demand in instant communication applications such as video chat. As a flexible scheme, pixel-domain compositing involves the processes of completely decoding the streams, inserting the decoded video, and finally re-encoding the new composite video. The time consumption of the whole scheme comes almost entirely from the re-encoding process. In this paper, we propose an efficient encoding method to speed up the re-encoding process and improve encoding efficiency. The proposed method separately designs encoding for blocks inside the frame region covered by the inserted video and blocks in the non-inserted region, which overcomes numerous difficulties of utilizing information from the decoding process. Experimental results show that the proposed method achieves a PSNR increase of 0.33 dB, or a bit rate saving of 10.11% on average, compared with normal encoding using unmodified HM software. Furthermore, the encoding speed is 7.04 times that of the normal encoding method, equivalent to an average reduction of 85.8% in computational complexity.

Yunchang Li, Zhijie Huang, Jun Sun

There is a large amount of valuable video archives in Video Home System (VHS) format.
However, due to their analog nature, their quality is often poor. Compared to High-definition television (HDTV), VHS video not only has a dull color appearance but also has a lower resolution and often appears blurry. In this paper, we focus on the problem of translating VHS video to HDTV video and have developed a solution based on a novel unsupervised multi-task adversarial learning model. Inspired by the success of the generative adversarial network (GAN) and CycleGAN, we employ cycle consistency loss, adversarial loss and perceptual loss together to learn a translation model. An important innovation of our work is the incorporation of a super-resolution model and a color transfer model that together solve the unsupervised multi-task problem. To our knowledge, this is the first work dedicated to the study of the relation between VHS and HDTV and the first computational solution to translate VHS to HDTV. We present experimental results to demonstrate the effectiveness of our solution qualitatively and quantitatively.

Hongming Luo, Guangsen Liao, Xianxu Hou, Bozhi Liu, Fei Zhou, Guoping Qiu

### Improving Just Noticeable Difference Model by Leveraging Temporal HVS Perception Characteristics

Temporal HVS characteristics are not fully exploited in conventional JND models. In this paper, we improve the spatio-temporal JND model by fully leveraging the temporal HVS characteristics. From the viewpoint of visual attention, we investigate two related factors, positive stimulus saliency and negative uncertainty. This paper measures the stimulus saliency according to two stimulus-driven parameters, relative motion and duration along the motion trajectory, and measures the uncertainty according to two uncertainty-driven parameters, global motion and residue intensity fluctuation. These four different parameters are measured with self-information and information entropy, and unified for fusion with homogeneity. As a result, a novel temporal JND adjustment weight model is proposed.
Finally, we fuse the spatial JND model and the temporal JND weight to form the spatio-temporal JND model. The experimental results verify that the proposed JND model yields significant performance improvement, with much higher capability of distortion concealment compared to state-of-the-art JND profiles.

Haibing Yin, Yafen Xing, Guangjing Xia, Xiaofeng Huang, Chenggang Yan

### Down-Sampling Based Video Coding with Degradation-Aware Restoration-Reconstruction Deep Neural Network

Recently, deep learning techniques have shown remarkable progress in image/video super-resolution. These techniques can be employed in a video coding system for improving the quality of the decoded frames. However, unlike in conventional super-resolution work, the compression artifacts in the decoded frames must be taken into account. The straightforward solution is to integrate artifact-removal techniques before super-resolution. Nevertheless, some helpful features may be removed together with the artifacts, and the remaining artifacts can be exaggerated. To address these problems, we design an end-to-end restoration-reconstruction deep neural network (RR-DnCNN) using degradation-aware techniques. RR-DnCNN is applied to a down-sampling based video coding system. On the encoder side, the original video is down-sampled and compressed. On the decoder side, the decompressed down-sampled video is fed to the RR-DnCNN to recover the original video by removing the compression artifacts and performing super-resolution. Moreover, in order to enhance the network's learning capabilities, uncompressed low-resolution images/videos are utilized as ground-truth. The experimental results show that our work can obtain over 8% BD-rate reduction compared to standard H.265/HEVC. Furthermore, our method also outperforms in subjective comparisons of compression artifact reduction. Our work is available at https://github.com/minhmanho/rrdncnn .
Minh-Man Ho, Gang He, Zheng Wang, Jinjia Zhou

### Beyond Literal Visual Modeling: Understanding Image Metaphor Based on Literal-Implied Concept Mapping

Existing multimedia content understanding tasks focus on modeling the literal semantics of multimedia documents. This study explores the possibility of understanding the implied meaning behind the literal semantics. Inspired by the human implied imagination process, we introduce a three-step solution framework based on the mapping from literal to implied concepts by integrating external knowledge. Experiments on a self-collected metaphor image dataset validate the effectiveness in identifying accurate implied concepts for further metaphor understanding in a controlled environment.

Chengpeng Fu, Jinqiang Wang, Jitao Sang, Jian Yu, Changsheng Xu

### Deep Palette-Based Color Decomposition for Image Recoloring with Aesthetic Suggestion

Color editing is an important issue in image processing and graphic design. This paper presents a deep color decomposition based framework for image recoloring, allowing users to achieve professional color editing through simple interactive operations. Different from existing methods that perform palette generation and color decomposition separately, our method directly generates the recolored images with a light-weight CNN. We first formulate the generation of the color palette as an unsupervised clustering problem, and employ a fully point-wise CNN to extract the most representative colors from the input image. In particular, a pixel scrambling strategy is adopted to map the continuous image color to a compact discrete palette space, helping the CNN focus on color-relevant features. Then, we devise a deep color decomposition network to obtain the projected weights of the input image on the basis colors of the generated palette space, and leverage them for image recoloring in a user-interacted manner.
In addition, a novel aesthetic constraint derived from color harmony theory is proposed to augment the color reconstruction from user-specified colors, resulting in an aesthetically pleasing visual effect. Qualitative comparisons with existing methods demonstrate the effectiveness of our proposed method.

Zhengqing Li, Zhengjun Zha, Yang Cao

### On Creating Multimedia Interfaces for Hybrid Biological-Digital Art Installations

This paper discusses the application of real-time multimedia technologies to artworks that feature novel interfaces between human and non-human organisms (in this case plants and bacteria). Two projects are discussed: Microbial Sonorities, a real-time generative sound artwork based on bacterial voltages and machine learning, and PlantConnect, a real-time multimedia artwork that explores human-plant interaction via the human act of breathing, the bioelectrical and photosynthetic activity of plants, and computational intelligence to bring the two together. Part of larger investigations into alternative models for the creation of shared experiences and understanding with the natural world, these projects explore complexity and emergent phenomena by harnessing the material agency of non-human organisms and the capacity of emerging multimedia technologies as mediums for information transmission, communication and interconnectedness between the human and non-human. While primarily focusing on technical descriptions of these projects, this paper also hopes to open up a dialog about how the combination of emerging multimedia technologies and the often aleatoric unpredictability that living organisms can exhibit can be beneficial to digital arts and entertainment applications.
Carlos Castellanos, Bello Bello, Hyeryeong Lee, Mungyu Lee, Yoo Seok Lee, In Seop Chang

### Image Captioning Based on Visual and Semantic Attention

Most existing image captioning methods use only the visual information of the image to guide caption generation, lack the guidance of effective scene semantic information, and their visual attention mechanisms cannot adjust the focus intensity on the image. In this paper, we first propose an improved visual attention model. At each time step, we calculate the focus intensity coefficient of the attention mechanism from the context information of the model, and automatically adjust the focus intensity of the attention mechanism through this coefficient, so as to extract more accurate visual information from the image. In addition, we represent the scene semantic information of the image through topic words related to the image scene, and add them to the language model. We use an attention mechanism to determine the visual information and scene semantic information that the model pays attention to at each time step, and combine them to guide the model to generate more accurate and scene-specific captions. Finally, we evaluate our model on the MSCOCO dataset. The experimental results show that our approach generates more accurate captions and outperforms many recent advanced models on various evaluation metrics.

Haiyang Wei, Zhixin Li, Canlong Zhang

### An Illumination Insensitive and Structure-Aware Image Color Layer Decomposition Method

Decomposing an image into a set of color layers can facilitate many image editing manipulations, but high-quality layering remains challenging. We propose a novel illumination-insensitive and structure-aware layer decomposition approach.
To reduce the influence of non-uniform illumination on color appearance, we design a scheme of letting the decomposition work on the reflectance image output by intrinsic decomposition, rather than the original image commonly used in previous work. To obtain fine layers, we leverage image-specific structure information and enforce it by encoding a structure-aware prior into a novel energy minimization formulation. The proposed optimization considers the fidelity, our structure-aware prior and permissible ranges simultaneously. We provide a solver for this optimization to obtain the final layers. Experiments demonstrate that our method obtains finer layers compared to several state-of-the-art methods.

Wengang Cheng, Pengli Dou, Dengwen Zhou

### CartoonRenderer: An Instance-Based Multi-style Cartoon Image Translator

Instance-based photo cartoonization is one of the challenging image stylization tasks, aiming at transforming realistic photos into cartoon-style images while preserving the semantic contents of the photos. State-of-the-art Deep Neural Network (DNN) methods still fail to produce satisfactory results with input photos in the wild, especially photos with high contrast and rich textures. This is because cartoon-style images tend to have smooth color regions and emphasized edges, in contrast to realistic photos, which require clear semantic content, i.e., textures, shapes, etc. Previous methods have difficulty in satisfying cartoon-style textures and preserving semantic contents at the same time. In this work, we propose a novel "CartoonRenderer" framework which utilizes a single trained model to generate multiple cartoon styles. In a nutshell, our method maps a photo into a feature model and renders the feature model back into image space. In particular, cartoonization is achieved by conducting transformation manipulation in the feature space with our proposed Soft-AdaIN.
Extensive experimental results show our method produces higher quality cartoon-style images than prior art, with accurate semantic content preservation. In addition, due to the decoupling of the whole generating process into “Modeling-Coordinating-Rendering” parts, our method can easily process higher resolution photos, which is intractable for existing methods. Yugang Chen, Muchun Chen, Chaoyue Song, Bingbing Ni ### Multi-condition Place Generator for Robust Place Recognition As an image retrieval task, visual place recognition (VPR) encounters two technical challenges: appearance variations resulting from external environment changes, and the lack of cross-domain paired training data. To overcome these challenges, a multi-condition place generator (MPG) is introduced for data generation. The objective of MPG is two-fold: (1) synthesizing realistic place samples corresponding to multiple conditions; (2) preserving the place identity information during the generation procedure. While MPG smooths the appearance disparities under various conditions, it also suffers from image distortion. For this reason, we propose the relative quality based triplet (RQT) loss, which reshapes the standard triplet loss so that it down-weights the loss assigned to low-quality images. By taking advantage of the innovations mentioned above, a condition-invariant VPR model is trained without labeled training data. Comprehensive experiments show that our method outperforms state-of-the-art algorithms by a large margin on several challenging benchmarks. Yiting Cheng, Yankai Wang, Lizhe Qi, Wenqiang Zhang ### Guided Refine-Head for Object Detection Lingyun Zeng, You Song, Wenhai Wang ### Towards Accurate Panel Detection in Manga: A Combined Effort of CNN and Heuristics Panels are the fundamental elements of manga pages, and hence their detection serves as the basis of high-level manga content understanding.
Existing panel detection methods can be categorized into heuristic-based and CNN-based (Convolutional Neural Network-based) ones. Although the former can accurately localize panels, they cannot handle elaborate panels well and require considerable effort to hand-craft rules for every new hard case. In contrast, the detection results of CNN-based methods can be rough and inaccurate. We utilize CNN object detectors to propose coarse guide panels, then use heuristics to propose panel candidates, and finally optimize an energy function to select the most plausible candidates. The CNN assures roughly localized detection of almost all kinds of panels, while the follow-up procedure refines the detection results and minimizes the margin between detected panels and the ground truth with the help of heuristics and energy minimization. Experimental results show the proposed method surpasses previous methods regarding panel detection F1-score and page accuracy. Yafeng Zhou, Yongtao Wang, Zheqi He, Zhi Tang, Ching Y. Suen ### Subclass Deep Neural Networks: Re-enabling Neglected Classes in Deep Network Training for Multimedia Classification During minibatch gradient-based optimization, the contribution of observations to the updating of the deep neural network’s (DNN’s) weights for enhancing the discrimination of certain classes can be small, despite the fact that these classes may still have a large generalization error. This happens, for instance, due to overfitting, i.e. for classes whose error in the training set is negligible, or simply when the contributions of the misclassified observations to the updating of the weights associated with these classes cancel out. To alleviate this problem, a new criterion is proposed for identifying the so-called “neglected” classes during the training of DNNs, i.e. the classes which stop optimizing early in the training procedure.
Moreover, based on this criterion, a novel cost function is proposed that extends the cross-entropy loss using subclass partitions to boost the generalization performance of the neglected classes. In this way, the network is guided to emphasize the extraction of features that are discriminant for the classes that are prone to being neglected during the optimization procedure. The proposed framework can be easily applied to improve the performance of various DNN architectures. Experiments on several publicly available benchmarks, including the large-scale YouTube-8M (YT8M) video dataset, show the efficacy of the proposed method (Source code is made publicly available at: https://github.com/bmezaris/subclass_deep_neural_networks ). Nikolaos Gkalelis, Vasileios Mezaris ### Automatic Material Classification Using Thermal Finger Impression Natural surfaces offer the opportunity to provide augmented reality interactions in everyday environments without the use of cumbersome body-mounted equipment. One of the key techniques for detecting user interactions with natural surfaces is thermal imaging, which captures the body heat transmitted onto the surface. A major challenge for these systems is detecting user swipe pressure on different material surfaces with high accuracy. This is because the amount of heat transferred from the user’s body to a natural surface depends on the thermal properties of the material. If the surface material type is known, these systems can use a material-specific pressure classifier to improve the detection accuracy. In this work, we address this problem by proposing a novel approach that can detect the material type from a user’s thermal finger impression on a surface. Our technique requires the user to touch a surface with a finger for 2 s. The recorded heat dissipation time series of the thermal finger impression is then analyzed in a classification framework for material identification.
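The abstract does not specify the classification framework; a minimal sketch of the general idea of classifying heat-dissipation curves by material could use a hypothetical decay-rate feature and a nearest-centroid classifier (the feature set and classifier here are illustrative assumptions, not the paper's method):

```python
import numpy as np

def decay_features(series, dt=0.1):
    """Summarize a cooling curve as [decay rate, start temp, end temp].
    The decay rate is estimated by a crude log-linear fit of the excess
    temperature above the curve's minimum. (Hypothetical features.)"""
    t = np.arange(len(series)) * dt
    excess = series - series.min() + 1e-6
    slope, _ = np.polyfit(t, np.log(excess), 1)
    return np.array([-slope, series[0], series[-1]])

def fit_centroids(curves, labels):
    """One mean feature vector per material label."""
    feats = np.array([decay_features(c) for c in curves])
    return {lab: feats[np.array(labels) == lab].mean(axis=0) for lab in set(labels)}

def predict(curve, centroids):
    """Assign the material whose centroid is nearest in feature space."""
    f = decay_features(curve)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))
```

Materials with high thermal conductivity dissipate a finger's heat faster, so a faster-decaying curve lands nearer the centroid of such a material.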
We studied the interaction of 15 users on 7 different material types, and our algorithm achieves 74.65% material classification accuracy on the test data in a user-independent manner. Jacob Gately, Ying Liang, Matthew Kolessar Wright, Natasha Kholgade Banerjee, Sean Banerjee, Soumyabrata Dey ### Face Attributes Recognition Based on One-Way Inferential Correlation Between Attributes Attribute recognition of faces in the wild is receiving increasing attention with the rapid development of computer vision. Most prior work tends to apply a separate model for a single attribute or for attributes in the same region, which easily loses the information about correlations between attributes. Correlation (e.g., one-way inferential correlation) between face attributes, which has been neglected by much research, contributes to better performance in face attribute recognition. In this paper, we propose a face attributes recognition model based on one-way inferential correlation (OIR) between face attributes (e.g., the inferential correlation from goatee to gender). Toward that end, we propose a method to find such correlations based on the data imbalance of each attribute, and design an OIR-related attributes classifier using such correlations. Furthermore, we cut the face region into multiple parts according to the category of attributes, and use a novel approach for face feature extraction for all regional parts via transfer learning focusing on multiple neural layers. Experimental evaluations on the benchmark with multiple face attributes show the effectiveness of our proposed model in terms of recognition accuracy and computational cost. Hongkong Ge, Jiayuan Dong, Liyan Zhang ### Eulerian Motion Based 3DCNN Architecture for Facial Micro-Expression Recognition Facial micro-expressions are fast and subtle muscular movements, which typically reveal the underlying mental state of an individual.
Due to the low intensity and short duration of micro-expressions, the task of micro-expression recognition is a huge challenge. Our method adopts a new pre-processing technique based on Eulerian video magnification (EVM) for micro-expression recognition. Further, we propose a micro-expression recognition framework based on the simple yet effective Eulerian motion-based 3D convolution network (EM-C3D). Firstly, Eulerian motion feature maps are extracted by employing a temporal filtering approach at multiple spatial scales; then the multi-frame Eulerian motion feature maps are directly fed into the 3D convolution network with a global attention module (GAM) to encode rich spatiotemporal information, instead of being added to the raw images. Our algorithm achieves state-of-the-art results of 69.76% accuracy and 65.75% recall rate on the CASME II dataset, surpassing all baselines. Cross-domain experiments are also performed to verify the robustness of the algorithm. Yahui Wang, Huimin Ma, Xinpeng Xing, Zeyu Pan ### Emotion Recognition with Facial Landmark Heatmaps Facial expression recognition is a very challenging problem and has attracted more and more researchers’ attention. In this paper, considering that facial expression recognition is closely related to the features of key facial regions, we propose a facial expression recognition network that explicitly utilizes landmark heatmap information to precisely capture the most discriminative features. In addition to directly adding the information of facial fiducial points in the form of landmark heatmaps, we also propose an end-to-end network structure, the heatmap aiding emotion network (HAE-Net), by embedding a landmark detection module based on the stacked hourglass network into the facial expression recognition network.
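Landmark heatmaps of the kind fed to such a network are commonly rendered as 2-D Gaussians centered on each fiducial point; a minimal sketch (the resolution and σ here are illustrative, not values from the paper):

```python
import numpy as np

def landmark_heatmaps(landmarks, height, width, sigma=2.0):
    """Render one (H, W) heatmap per (x, y) landmark as a 2-D Gaussian
    peaked at the landmark location, as commonly used to inject facial
    geometry into an expression-recognition network."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for x, y in landmarks:
        d2 = (xs - x) ** 2 + (ys - y) ** 2
        maps.append(np.exp(-d2 / (2.0 * sigma ** 2)))
    return np.stack(maps)  # shape (num_landmarks, H, W)
```

Each map peaks at 1.0 on its landmark and falls off smoothly, so the stack can be concatenated with image features as extra input channels.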
Experiments on the CK+, RAF and AffectNet databases show that our method achieves better results than the state-of-the-art methods, which demonstrates that adding landmark information, as well as jointly training landmark detection and expression recognition, is beneficial to recognition performance. Siyi Mo, Wenming Yang, Guijin Wang, Qingmin Liao ### One-Shot Face Recognition with Feature Rectification via Adversarial Learning One-shot face recognition has attracted extensive attention with the ability to recognize persons at just one glance. With only one training sample, which cannot represent intra-class variance adequately, one-shot classes have poor generalization ability, and it is difficult to obtain appropriate classification weights. In this paper, we explore an inherent relationship between features and classification weights. In detail, we propose a feature rectification generative adversarial network (FR-GAN) which is able to rectify features closer to the corresponding classification weights by considering existing classification weight information. With one model, we achieve two purposes: without the fine-tuning via back-propagation used by previous CNN approaches, which is time-consuming and computationally expensive, FR-GAN can not only (1) generate classification weights for new classes using training data, but also (2) achieve more discriminative test feature representations. The experimental results demonstrate the remarkable performance of our proposed method: on the MS-Celeb-1M one-shot benchmark, our method achieves 93.12% coverage at 99% precision with the introduction of novel classes and maintains a high accuracy of 99.80% for base classes, surpassing most previous approaches based on fine-tuning.
Jianli Zhou, Jun Chen, Chao Liang, Jin Chen ### Visual Sentiment Analysis by Leveraging Local Regions and Human Faces Visual sentiment analysis (VSA) is a challenging task which attracts wide attention from researchers for its great application potential. Existing works on VSA mostly extract global representations of images for sentiment prediction, ignoring the different contributions of local regions. Some recent studies analyze local regions separately and achieve improvements in sentiment prediction performance. However, most of them treat regions equally in the feature fusion process, which ignores their distinct contributions, or use a global attention map whose performance is easily influenced by noise from non-emotional regions. In this paper, to solve these problems, we propose an end-to-end deep framework to effectively exploit the contributions of local regions to VSA. Specifically, a Sentiment Region Attention (SRA) module is proposed to estimate the contributions of local regions with respect to the global image sentiment. Features of these regions are then reweighed and further fused according to their estimated contributions. Moreover, since image sentiment is usually closely related to the humans appearing in the image, we also propose to model the contribution of human faces as a special local region for sentiment prediction. Experimental results on publicly available and widely used datasets for VSA demonstrate that our method outperforms state-of-the-art algorithms. Ruolin Zheng, Weixin Li, Yunhong Wang ### Prediction-Error Value Ordering for High-Fidelity Reversible Data Hiding Prediction-error expansion (PEE) is the most widely utilized reversible data hiding (RDH) technique. However, the performance of PEE is far from optimal since the correlations among prediction errors are not fully exploited. To enhance the embedding performance of PEE, a new RDH method named prediction-error value ordering (PEVO) is proposed in this paper.
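PEVO builds on classical prediction-error expansion. In its textbook form, PEE embeds one bit b into a pixel by expanding its prediction error e = x − pred to 2e + b, which the decoder can invert exactly (this is the generic PEE scheme, not the paper's PEVO variant):

```python
def pee_embed(pixel, pred, bit):
    """Embed one bit by expanding the prediction error: e -> 2e + bit."""
    e = pixel - pred
    return pred + 2 * e + bit

def pee_extract(marked, pred):
    """Recover the bit and restore the original pixel exactly.
    Python's floor division makes this correct for negative errors too."""
    e2 = marked - pred
    bit = e2 % 2
    return pred + e2 // 2, bit
```

Reversibility holds because the parity of the expanded error carries the bit while halving it restores the original error; practical schemes additionally restrict expansion to small errors and handle overflow, which this sketch omits.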
The main idea of PEVO is to exploit the inter-correlations of prediction errors by combining PEE with the recent RDH technique of pixel value ordering. Specifically, the prediction errors within an image block are first sorted, and then the maximum and minimum prediction errors of the block are predicted and modified for data embedding. With the proposed approach, the image redundancy is better exploited and promising embedding performance is achieved. Experimental results demonstrate that the proposed PEVO embedding method is better than the PEE-based ones and some other state-of-the-art works. Tong Zhang, Xiaolong Li, Wenfa Qi, Zongming Guo ### Classroom Attention Analysis Based on Multiple Euler Angles Constraint and Head Pose Estimation Classroom attention analysis aims to capture rich semantic information to analyze how students are reacting to a lecture. However, there are several challenges in constructing a uniform attention model of students in the classroom. Each student is an individual, and it is hard to make a unified judgment. The orientation of the head reflects the direction of attention, but changes in posture and space can interfere with it. To address these issues, this paper proposes a scoring module for converting head Euler angles into classroom attention. This module takes the head Euler angles in three directions as input, and introduces spatial information to correct the angles. The key idea of the proposed method lies in introducing the mutual constraint of multiple Euler angles together with head spatial information, aiming to make the attention model less susceptible to differences in head information. The mutual constraint of multiple Euler angles can provide more accurate head information, while the head spatial information can be utilized to correct the angles. Extensive experiments using classroom video data demonstrate that the proposed method can achieve more accurate results.
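The abstract does not define the scoring module itself; a generic illustration of mapping head Euler angles to an attention score (how closely the gaze direction aligns with a target such as the blackboard) could look like this. The target direction and the score formula are assumptions for illustration, not the paper's module:

```python
import numpy as np

def attention_score(yaw, pitch, roll, target_yaw=0.0, target_pitch=0.0):
    """Map head Euler angles (degrees) to a [0, 1] attention score via the
    cosine between the head's facing direction and a target direction.
    Roll barely changes the facing direction, so it is unused here."""
    def direction(y, p):
        y, p = np.radians(y), np.radians(p)
        return np.array([np.cos(p) * np.sin(y), np.sin(p), np.cos(p) * np.cos(y)])
    cos_sim = float(direction(yaw, pitch) @ direction(target_yaw, target_pitch))
    return max(0.0, cos_sim)
```

In this sketch the score is 1 when the head faces the target exactly and decays toward 0 as yaw or pitch deviates; the paper's spatial correction (accounting for each student's seat position) would amount to choosing a per-student target direction.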
Xin Xu, Xin Teng ### Multi-branch Body Region Alignment Network for Person Re-identification Person re-identification (Re-ID) aims to identify images of the same person from a gallery set across different cameras. Human pose variations, background clutter and misalignment of detected human images pose challenges for Re-ID tasks. To deal with these issues, we propose a Multi-branch Body Region Alignment Network (MBRAN) to learn discriminative representations for person Re-ID. It consists of two modules, i.e., body region extraction and feature learning. The body region extraction module utilizes a single-person pose estimation method to estimate human keypoints and obtain three body regions. In the feature learning module, four global or local branch-networks share base layers and are designed to learn feature representations on the three overlapping body regions and the global image. Extensive experiments indicate that our method outperforms several state-of-the-art methods on two mainstream person Re-ID datasets. Han Fang, Jun Chen, Qi Tian ### DeepStroke: Understanding Glyph Structure with Semantic Segmentation and Tabu Search Glyphs in many writing systems (e.g., Chinese) are composed of a sequence of strokes written in a specific order. Glyph structure interpreting (i.e., stroke extraction) is one of the most important processing steps in many tasks including aesthetic quality evaluation, handwriting synthesis, character recognition, etc. However, existing methods that rely heavily on accurate shape matching are not only time-consuming but also unsatisfactory in stroke extraction performance. In this paper, we propose a novel method based on semantic segmentation and tabu search to interpret the structure of Chinese glyphs. Specifically, we first employ an improved Fully Convolutional Network (FCN), DeepStroke, to extract strokes, and then use tabu search to obtain the order in which these strokes are drawn.
We also build the Chinese Character Stroke Segmentation Dataset (CCSSD), consisting of 67,630 character images that can be equally divided into 10 different font styles. This dataset provides a benchmark for both stroke extraction and semantic segmentation tasks. Experimental results demonstrate the effectiveness and efficiency of our method and validate its superiority against the state of the art. Wenguang Wang, Zhouhui Lian, Yingmin Tang, Jianguo Xiao ### 3D Spatial Coverage Measurement of Aerial Images Unmanned aerial vehicles (UAVs) such as drones are becoming significantly prevalent in both daily life (e.g., event coverage, tourism) and critical situations (e.g., disaster management, military operations), generating an unprecedented number of aerial images and videos. UAVs are usually equipped with various sensors (e.g., GPS, accelerometers and gyroscopes) and thus provide sufficient spatial metadata describing the spatial extent (referred to as the aerial field-of-view) of recorded imagery. Such spatial metadata can be used efficiently to answer a fundamental question about how well a collection of aerial imagery covers a certain area spatially, by evaluating the adequacy of the collected aerial imagery and estimating its sufficiency. This paper provides an answer to such questions by introducing 3D spatial coverage measurement models to collectively quantify the spatial and directional coverage of a geo-tagged aerial image dataset. Through experimental evaluation using real datasets, the paper demonstrates that our proposed models can be implemented with highly efficient computation of 3D space geometry. Abdullah Alfarrarjeh, Zeyu Ma, Seon Ho Kim, Cyrus Shahabi ### Instance Image Retrieval with Generative Adversarial Training While generative adversarial training has become a promising technology for many computer vision tasks, especially in the image processing domain, few works so far have applied it to instance-level image retrieval.
In this paper, we propose an instance-level image retrieval method with generative adversarial training (ILRGAN). In this proposal, adversarial training is adopted in the retrieval procedure. Both the generator and the discriminator are redesigned for the retrieval task: the generator tries to retrieve similar images and passes them to the discriminator, and the discriminator tries to discriminate the dissimilar images among those retrieved and then passes its decision back to the generator. The generator and discriminator play a min-max game until the generator retrieves images among which the discriminator cannot identify dissimilar ones. Experiments on four widely used databases show that adversarial training works for instance-level image retrieval and that the proposed ILRGAN achieves promising retrieval performance. Hongkai Li, Cong Bai, Ling Huang, Yugang Jiang, Shengyong Chen ### An Effective Way to Boost Black-Box Adversarial Attack Deep neural networks (DNNs) are vulnerable to adversarial examples. Generally speaking, adversarial examples are produced by adding a small-magnitude perturbation to input samples, which hardly misleads human observers but leads to misclassifications by well-trained models. Most existing iterative adversarial attack methods suffer from low success rates in fooling models in a black-box manner, and we find that this is because perturbations neutralize each other in the iterative process. To address this issue, we propose a novel boosted iterative method to effectively improve success rates. We conduct experiments on the ImageNet dataset with five normally trained classification models. The experimental results show that our proposed strategy can significantly improve the success rates of fooling models in a black-box manner. Furthermore, it also outperforms the momentum iterative method (MI-FGSM), which won first place in the NeurIPS Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
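The MI-FGSM baseline mentioned above counters exactly the perturbation-cancellation problem by accumulating a momentum of normalized gradients across iterations. A minimal sketch of its update rule (the generic textbook form, not the paper's boosted variant; `grad_fn` is a placeholder for the attacked model's input gradient):

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.3, steps=10, mu=1.0):
    """Momentum Iterative FGSM: accumulate L1-normalized gradients with decay
    factor mu, step along the sign of the momentum, and clip to the eps-ball.
    grad_fn(x_adv) must return the loss gradient w.r.t. the input."""
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

Because successive gradients are averaged into the momentum term rather than applied independently, oscillating components cancel in `g` instead of in the perturbation itself, stabilizing the update direction across iterations.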
Xinjie Feng, Hongxun Yao, Wenbin Che, Shengping Zhang ### Crowd Knowledge Enhanced Multimodal Conversational Assistant in Travel Domain We present a new solution towards building a crowd knowledge enhanced multimodal conversational system for travel. It aims to assist users in completing various travel-related tasks, such as searching for restaurants or things to do, in a multimodal conversation manner involving both text and images. In order to achieve this goal, we ground this research on the combination of multimodal understanding and recommendation techniques, which explores the possibility of a more convenient information seeking paradigm. Specifically, we build the system in a modular manner where each module is enriched with crowd knowledge from social sites. To the best of our knowledge, this is the first work that attempts to build intelligent multimodal conversational systems for travel, and it takes an important step towards developing human-like assistants for the completion of daily life tasks. Several current challenges are also pointed out as our future directions. Lizi Liao, Lyndon Kennedy, Lynn Wilcox, Tat-Seng Chua ### Improved Model Structure with Cosine Margin OIM Loss for End-to-End Person Search End-to-end person search is a novel task that integrates pedestrian detection and person re-identification (re-ID) into a joint optimization framework. However, the pedestrian features learned by most existing methods are not discriminative enough, due to the potential adverse interaction between the detection and re-ID tasks and the lack of discriminative power of the re-ID loss. To this end, we propose an Improved Model Structure (IMS) with a novel re-ID loss function called the Cosine Margin Online Instance Matching (CM-OIM) loss. Firstly, we design a model structure more suitable for person search, which alleviates the adverse interaction between the detection and re-ID parts by reasonably decreasing the number of network layers shared by them.
Then, we conduct a full investigation of the weight of the re-ID loss, which we argue plays an important role in end-to-end person search models. Finally, we improve the Online Instance Matching (OIM) loss by adopting a more robust online update strategy and importing a cosine margin into it to increase the intra-class compactness of the learned features. Extensive experiments on two challenging datasets, CUHK-SYSU and PRW, demonstrate that our approach outperforms the state-of-the-art. Haoran Chen, Minghua Zhu, Xuesong Cai, Jufeng Luo, Yunzhou Qiu ### Effective Barcode Hunter via Semantic Segmentation in the Wild Barcodes are popularly used for product identification in many scenarios. However, locating them on product images is challenging. Half-occlusion, distortion, darkness or targets being too small to recognize often add to the difficulty of conventional methods. In this paper, we introduce a large-scale diverse barcode dataset and adopt a deep learning-based semantic segmentation approach to address these problems. Specifically, we use an efficient method to synthesize 30,000 well-annotated images containing diverse barcode labels, obtaining Barcode-30k, a large-scale dataset with accurate pixel-level barcode annotations in the wild. Moreover, to locate barcodes more precisely, we further propose an effective barcode hunter, BarcodeNet. It is a semantic segmentation model based on a CNN (Convolutional Neural Network) and is mainly formed by two novel modules: the Prior Pyramid Pooling Module (P3M) and the Pyramid Refine Module (PRM). Additional ablation studies further demonstrate the effectiveness of BarcodeNet, and it yields a high mIoU result of 95.36% on the proposed synthetic Barcode-30k validation set.
To prove the practical value of the whole system, we test BarcodeNet, trained on the training set of Barcode-30k, on a manually annotated testing set collected solely from cameras; it achieves an mIoU of 90.3%, which is a very accurate result for practical applications. Feng Ni, Xixin Cao ### Wonderful Clips of Playing Basketball: A Database for Localizing Wonderful Actions Video highlight detection, or wonderful clip localization, aims at automatically discovering interesting clips in untrimmed videos, and can be applied to a variety of scenarios in the real world. To support its study, a video dataset of Wonderful Clips of Playing Basketball (WCPB) is developed in this work. The Segment-Convolutional Neural Network (S-CNN), a state-of-the-art model for temporal action localization, is adopted to localize wonderful clips, and a two-stream S-CNN is designed which outperforms the original on WCPB. The WCPB dataset presents the specific meaning of wonderful clips and annotations in playing basketball and enables the measurement of performance and progress in other realistic scenarios. Qinyu Li, Lijun Chen, Hanli Wang, Xianhui Liu ### Structural Pyramid Network for Cascaded Optical Flow Estimation To achieve a better balance of accuracy and computational complexity for optical flow estimation, a Structural Pyramid Network (StruPyNet) is designed to combine structural pyramid processing and feature pyramid processing. In order to efficiently distribute parameters and computations among all structural pyramid levels, the proposed StruPyNet flexibly cascades more small flow estimators at lower structural pyramid levels and fewer small flow estimators at higher structural pyramid levels. The greater focus on low-resolution feature matching facilitates large-motion flow estimation and background flow estimation.
In addition, residual flow learning, feature warping and cost volume construction are repeatedly performed by the cascaded small flow estimators, which benefits training and estimation. Compared with state-of-the-art convolutional neural network-based methods, StruPyNet achieves better performance in most cases. The model size of StruPyNet is 95.9% smaller than FlowNet2, and StruPyNet runs 3 more frames per second than FlowNet2. Moreover, the experimental results on several benchmark datasets demonstrate the effectiveness of StruPyNet; in particular, StruPyNet performs quite well for large-motion estimation. Zefeng Sun, Hanli Wang, Yun Yi, Qinyu Li ### Real-Time Multiple Pedestrians Tracking in Multi-camera System This paper proposes a novel 3D real-time multi-view multi-target tracking framework featuring a global-to-local tracking-by-detection model and a 3D tracklet-to-tracklet data association scheme. To enable real-time performance, the former maximizes the utilization of temporal differences between consecutive frames, resulting in a great reduction of the average frame-wise detection time. Meanwhile, the latter accurately performs multi-target data association across multiple views to calculate the 3D trajectories of tracked objects. Comprehensive experiments on the PETS-09 dataset demonstrate the outstanding performance of the proposed method in terms of efficiency and accuracy in multi-target 3D trajectory tracking tasks, compared to state-of-the-art methods. Muchun Chen, Yugang Chen, Truong Tan Loc, Bingbing Ni ### Learning Multi-feature Based Spatially Regularized and Scale Adaptive Correlation Filters for Visual Tracking Visual object tracking is a challenging problem in computer vision. Although correlation filter-based trackers have achieved competitive results in both accuracy and robustness in recent years, there is still a need to improve the overall tracking performance.
In this paper, to tackle the problems caused by the Spatially Regularized Discriminative Correlation Filter (SRDCF), we suggest a single-sample-integrated training scheme which utilizes information from the previous frames and the current frame to improve the robustness of training samples. Moreover, manually designed features and deep convolutional features are integrated to further boost the overall tracking capacity. To optimize the translation filter, we develop an alternating direction method of multipliers (ADMM) algorithm. Besides, we introduce a scale-adaptive filter to directly learn the appearance changes induced by variations in the target scale. Extensive empirical evaluations on the TB-50, TB-100 and OTB-2013 datasets demonstrate that the proposed tracker is very promising for various challenging scenarios. Ying She, Yang Yi ### Unsupervised Video Summarization via Attention-Driven Adversarial Learning This paper presents a new video summarization approach that integrates an attention mechanism to identify the significant parts of the video, and is trained in an unsupervised manner via generative adversarial learning. Starting from the SUM-GAN model, we first develop an improved version of it (called SUM-GAN-sl) that has a significantly reduced number of learned parameters, performs incremental training of the model’s components, and applies a stepwise label-based strategy for updating the adversarial part. Subsequently, we introduce an attention mechanism to SUM-GAN-sl in two ways: (i) by integrating an attention layer within the variational auto-encoder (VAE) of the architecture (SUM-GAN-VAAE), and (ii) by replacing the VAE with a deterministic attention auto-encoder (SUM-GAN-AAE).
Experimental evaluation on two datasets (SumMe and TVSum) documents the contribution of the attention auto-encoder to faster and more stable training of the model, resulting in a significant performance improvement with respect to the original model and demonstrating the competitiveness of the proposed SUM-GAN-AAE against the state of the art (Software publicly available at: https://github.com/e-apostolidis/SUM-GAN-AAE ). Evlampios Apostolidis, Eleni Adamantidou, Alexandros I. Metsai, Vasileios Mezaris, Ioannis Patras ### Efficient HEVC Downscale Transcoding Based on Coding Unit Information Mapping In this paper, a novel method is proposed that utilizes coding unit (CU) information from the source video to accelerate the downscale transcoding process for High Efficiency Video Coding (HEVC). Specifically, the CU depth and prediction mode information are first extracted into block-level matrices from the decoded source video. Then we use the matrices to predict the CU depth and prediction mode at CU level in the target video. Finally, some effective rules are introduced to determine the CU partition and prediction mode based on the prediction. Our approach supports spatial downscale transcoding with any spatial resolution downscaling ratio. Experiments show that the proposed method achieves an average time reduction of 59.3% compared to the reference HEVC encoder, with a relatively small Bjøntegaard Delta Bit rate (BDBR) increment on average. Moreover, our approach is also competitive with state-of-the-art spatial resolution downscale transcoding methods for HEVC. Zhijie Huang, Yunchang Li, Jun Sun ### Fine-Grain Level Sports Video Search Engine How to help people find relevant video content of interest in massive sports videos has become an urgent demand. We have designed and developed a sports video search engine based on a distributed architecture, which aims to provide users with content-based video analysis and retrieval services.
In the sports video search engine, we focus on event detection, highlights analysis and image retrieval. Our work has several advantages: (I) CNNs and RNNs are used to extract features and integrate dynamic information, and a new sliding window model is used for multi-length event detection. (II) For highlights analysis, an improved method based on a self-adapting dual threshold and dominant color percentage is used to detect shot boundaries, and an affect-arousal method is used for highlights extraction. (III) For image indexing and retrieval, a hyper-spherical soft assignment method is proposed to generate image descriptors, enhanced residual vector quantization is presented to construct a multi-inverted index, and two adaptive retrieval methods based on hyper-spherical filtration are used to improve time efficiency. (IV) All of the above algorithms are implemented on the distributed platform we developed for massive video data processing.

Zikai Song, Junqing Yu, Hengyou Cai, Yangliu Hu, Yi-Ping Phoebe Chen

### The Korean Sign Language Dataset for Action Recognition

Recently, the development of computer vision technologies has shown excellent performance in complex tasks such as behavior recognition. Therefore, several studies propose datasets for behavior recognition, including sign language recognition. In many countries, researchers are carrying out studies to automatically recognize and interpret sign language to facilitate communication with deaf people. However, there is not yet a dataset for recognizing the sign language used in Korea, and research on this is insufficient. Since sign language varies from country to country, it is valuable to build a dataset for Korean sign language. Therefore, this paper proposes a dataset of videos of isolated signs from Korean sign language that can also be used for behavior recognition using deep learning. We present the Korean Sign Language (KSL) dataset.
The dataset is composed of video clips of 77 Korean sign language words performed by 20 deaf people. We train and evaluate this dataset on deep learning networks that have recently achieved excellent performance in the behavior recognition task. We have also confirmed, through a deconvolution-based visualization method, that the deep learning network fully understands the characteristics of the dataset.

Seunghan Yang, Seungjun Jung, Heekwang Kang, Changick Kim

### SEE-LPR: A Semantic Segmentation Based End-to-End System for Unconstrained License Plate Detection and Recognition

Most previous works regard License Plate detection and Recognition (LPR) as two or more separate tasks, which often leads to error accumulation and low efficiency. Recently, several new studies use end-to-end training to overcome these problems and achieve better results. However, challenges like misalignment and variable-length or multi-language LPs still exist. In this paper, we propose a novel semantic segmentation based end-to-end multilingual LPR system, SEE-LPR, to solve these challenges. Our system has four components: a convolution backbone, LP capture, LP alignment, and LP recognition. Specifically, LP alignment connects LP capture and LP recognition, allowing the gradient to back-propagate through the whole network and enabling the system to handle oblique LPs. The Connectionist Temporal Classification (CTC) module used in LP recognition makes our system able to handle LPs with variable length or multiple languages. Comparative studies on several challenging benchmark datasets show that the proposed SEE-LPR system significantly outperforms state-of-the-art systems in both accuracy and efficiency.

Dongqi Tang, Hao Kong, Xi Meng, Ruo-Ze Liu, Tong Lu

### Action Co-localization in an Untrimmed Video by Graph Neural Networks

We present an efficient approach for action co-localization in an untrimmed video by exploiting contextual and temporal features from multiple action proposals.
Most existing action localization methods focus on each individual action instance without accounting for the correlations among them. To exploit such correlations, we propose the Graph-based Temporal Action Co-Localization (G-TACL) method, which aggregates contextual features from multiple action proposals to assist temporal localization. This aggregation procedure is achieved with Graph Neural Networks whose nodes are initialized by the action proposal representations. In addition, a multi-level consistency evaluator is proposed to measure the similarity between any two proposals, which summarizes low-level temporal coincidences, feature vector dot products and high-level contextual feature similarities. Subsequently, these nodes are iteratively updated with Gated Recurrent Units (GRUs), and the obtained node features are used to regress the temporal boundaries of the action proposals and finally to localize the action instances. Experiments on the THUMOS’14 and MEXaction2 datasets have demonstrated the efficacy of our proposed method.

Changbo Zhai, Le Wang, Qilin Zhang, Zhanning Gao, Zhenxing Niu, Nanning Zheng, Gang Hua

### A Novel Attention Enhanced Dense Network for Image Super-Resolution

Deep convolutional neural networks (CNNs) have recently achieved impressive performance in image super-resolution (SR). However, they usually treat spatial features and channel-wise features indiscriminately and fail to take full advantage of hierarchical features, restricting their adaptive ability. To address these issues, we propose a novel attention enhanced dense network (AEDN) to adaptively recalibrate each kernel and feature for different inputs, by integrating both spatial attention (SA) and channel attention (CA) modules in the proposed network.
In experiments, we explore the effect of the attention mechanism and present quantitative and qualitative evaluations, where the results show that the proposed AEDN outperforms state-of-the-art methods by effectively suppressing artifacts and faithfully recovering more high-frequency image details.

Zhong-Han Niu, Yang-Hao Zhou, Yu-Bin Yang, Jian-Cong Fan

### Marine Biometric Recognition Algorithm Based on YOLOv3-GAN Network

With the rise of the marine ranching field, object recognition applications on underwater catching robots have become more and more popular. However, due to the influence of uneven underwater light, underwater images often suffer from problems such as color distortion and underexposure, which seriously affects the accuracy of underwater object recognition. In this work, we propose a marine biometric recognition algorithm based on a YOLOv3-GAN network, which jointly optimizes the training of the image enhancement loss (LossGAN) and the classification and location loss (LossYOLO); this differs from traditional underwater object recognition approaches, which usually consider image enhancement and object detection separately. Consequently, our proposed algorithm is more powerful in marine biological identification. Moreover, the anchor boxes are further refined by the k-means method to cluster the object box sizes in the network detection part, which makes the anchor boxes more in line with the object sizes. The experimental results demonstrate that the mean Average Precision (mAP) and the Recall of the YOLOv3-GAN network are 6.4% and 4.8% higher, respectively, than those of the YOLOv3 network. In addition, the image enhancement part of the YOLOv3-GAN network can provide high quality images which benefit other applications in the marine surveillance field.
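The k-means refinement of anchor boxes mentioned in the YOLOv3-GAN abstract above follows the standard YOLO recipe: ground-truth box sizes are clustered with a 1 − IoU distance so that the resulting centroids match the typical object shapes. A minimal sketch of that recipe (illustrative code, not the authors' implementation; function names are our own):

```python
import random

def iou_wh(box, centroid):
    # IoU between two boxes aligned at a common corner, using (w, h) only
    w = min(box[0], centroid[0])
    h = min(box[1], centroid[1])
    inter = w * h
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    # boxes: list of (w, h) ground-truth box sizes; returns k anchor sizes
    random.seed(seed)
    centroids = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # distance = 1 - IoU, i.e. assign to the centroid of highest IoU
            i = max(range(k), key=lambda j: iou_wh(b, centroids[j]))
            clusters[i].append(b)
        new_centroids = []
        for i, cl in enumerate(clusters):
            if not cl:  # keep an empty cluster's old centroid
                new_centroids.append(centroids[i])
                continue
            new_centroids.append((sum(w for w, _ in cl) / len(cl),
                                  sum(h for _, h in cl) / len(cl)))
        if new_centroids == centroids:  # assignments stabilized
            break
        centroids = new_centroids
    return sorted(centroids)
```

With two well-separated groups of box sizes, the two returned anchors land near the small-box and large-box means respectively.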
Ping Liu, Hongbo Yang, Jingnan Fu

### Multi-scale Spatial Location Preference for Semantic Segmentation

This paper proposes a semantic segmentation network that can address the problem of adaptive segmentation for objects of different sizes. In this work, ResNetV2-50 is first exploited to extract object features, and then these features are fed into a reconstructed feature pyramid network (FPN), which includes a multi-scale preference (MSP) module and a multi-location preference (MLP) module. For objects of different sizes, the receptive fields of kernels need to be adjusted. The MSP module concatenates feature maps of different receptive fields, and then combines them with the SE block of SE-Net to obtain scale-wise dependencies. In this way, not only can multi-scale information be encoded in feature maps with adaptively different levels of preference, but multi-scale spatial information can also be provided to the MLP module. The MLP module combines the channels containing more accurate spatial location information with preference, replacing the traditional nearest-neighbor interpolation upsampling in FPN. Finally, the weighted channels, equipped with scale-wise information as well as more accurate spatial location information, yield precise semantic predictions for objects of different sizes. We demonstrate the effectiveness of the proposed solutions on the Cityscapes and PASCAL VOC 2012 semantic image segmentation datasets, and our methods achieve comparable or higher performance.

Qiuyuan Han, Jin Zheng

### HRTF Representation with Convolutional Auto-encoder

The head-related transfer function (HRTF) can be considered a filter that describes how a sound from an arbitrary spatial direction transfers to the listener’s eardrums. HRTFs can be used to synthesize vivid virtual 3D sound that seems to come from any spatial location, which gives them an important role in 3D audio technology.
However, the complexity and variation of the auditory cues inherent in HRTFs make it difficult to set up an accurate mathematical model with conventional methods. In this paper, we put forward an HRTF representation model based on a convolutional auto-encoder (CAE), a type of auto-encoder that contains convolutional layers in the encoder part and deconvolution layers in the decoder part. The experimental evaluation on the ARI HRTF database shows that the proposed model provides very good results on dimensionality reduction of HRTFs.

Wei Chen, Ruimin Hu, Xiaochen Wang, Dengshi Li

### Unsupervised Feature Propagation for Fast Video Object Detection Using Generative Adversarial Networks

We propose an unsupervised Feature Propagation Generative Adversarial Network (denoted as FPGAN) for fast video object detection in this paper. In our video object detector, we detect objects on sparse key frames using the pre-trained state-of-the-art object detector R-FCN, and propagate CNN features to adjacent frames for fast detection via a light-weight transformation network. To learn the feature propagation network, we make full use of unlabeled video data and employ generative adversarial networks in model training. Specifically, in FPGAN, the generator is the feature propagation network, and the discriminator employs second-order temporal coherence and 3D ConvNets to distinguish between predicted and “ground truth” CNN features. In addition, a Euclidean distance loss provided by the pre-trained image object detector is also adopted to jointly supervise the learning. Our method doesn’t need any human labelling in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of our method.

Xuan Zhang, Guangxing Han, Wenduo He

### OmniEyes: Analysis and Synthesis of Artistically Painted Eyes

Faces in artistic paintings most often contain the same elements (eyes, nose, mouth...)
as faces in the real world; however, they are not a photo-realistic transfer of physical visual content. The creative nuances artists introduce in their work act as interference when facial detection models are used in the artistic domain. In this work we introduce models that can accurately detect, classify and conditionally generate artistically painted eyes in portrait paintings. In addition, we introduce the OmniEyes Dataset, which captures the essence of painted eyes with annotated patches from 250 K artistic paintings and their metadata. We evaluate our approach on inpainting, out-of-context eye generation and classification on portrait paintings from the OmniArt dataset. We conduct a user case study to further examine the quality of our generated samples, assess their aesthetic aspects, and provide quantitative and qualitative results for our model’s performance.

Gjorgji Strezoski, Rogier Knoester, Nanne van Noord, Marcel Worring

### LDSNE: Learning Structural Network Embeddings by Encoding Local Distances

Network embedding algorithms learn low-dimensional features from the relationships and attributes of networks. The basic principle of these algorithms is to preserve the similarities in the original networks as much as possible. However, existing algorithms are not expressive enough for structural identity similarities. Therefore, we propose LDSNE, a novel algorithm for learning structural representations in both directed and undirected networks. Networks are first mapped into a proximity-based low-dimensional space. Then, structural embeddings are extracted by encoding local space distances. Empirical results demonstrate that our algorithm can obtain multiple types of representations and outperforms other state-of-the-art methods.
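The LDSNE abstract above does not spell out how local space distances are encoded; a simple way to picture the idea is a structural descriptor built from each node's sorted distances to its nearest neighbours in the low-dimensional proximity space, so that nodes with similar local geometry get similar descriptors. A hedged sketch of that intuition (our own illustration, not the paper's algorithm):

```python
import math

def local_distance_signature(coords, node, k=3):
    # coords: node -> embedding coordinates in the proximity space.
    # Returns the sorted distances from `node` to its k nearest
    # neighbours, used here as a toy structural descriptor.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    d = sorted(dist(coords[node], coords[other])
               for other in coords if other != node)
    return d[:k]
```

For example, with `coords = {"a": (0, 0), "b": (3, 4), "c": (0, 1), "d": (6, 8)}`, node "a" has neighbour distances 1, 5 and 10, so its 2-nearest signature is `[1.0, 5.0]`.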
Xiyue Gao, Jun Chen, Jing Yao, Wenqian Zhu

### FurcaNeXt: End-to-End Monaural Speech Separation with Dynamic Gated Dilated Temporal Convolutional Networks

Deep dilated temporal convolutional networks (TCNs) have proven to be very effective in sequence modeling. In this paper we propose several improvements of TCNs for an end-to-end approach to monaural speech separation, consisting of (1) a multi-scale dynamic weighted gated TCN with a pyramidal structure (FurcaPy), (2) a gated TCN with intra-parallel convolutional components (FurcaPa), (3) a weight-shared multi-scale gated TCN (FurcaSh), and (4) a dilated TCN with a gated subtractive-convolutional component (FurcaSu). All these networks take the mixed utterance of two speakers and map it to two separated utterances, where each utterance contains only one speaker’s voice. For the objective, we propose to train the networks by directly optimizing the utterance-level signal-to-distortion ratio (SDR) in a permutation invariant training (PIT) style. Our experiments on the public WSJ0-2mix data corpus result in an 18.4 dB SDR improvement, which shows that our proposed networks can lead to performance improvement on the speaker separation task.

Liwen Zhang, Ziqiang Shi, Jiqing Han, Anyan Shi, Ding Ma

### Multi-step Coding Structure of Spatial Audio Object Coding

Spatial audio object coding (SAOC) is an effective method which compresses multiple audio objects and provides flexibility for personalized rendering in interactive services. It divides each frame signal into 28 sub-bands and extracts one set of object spatial parameters for each sub-band. Objects can thus be coded into a downmix signal and a few parameters. However, using the same parameters in one sub-band causes frequency aliasing distortion, which seriously impacts the listening experience. Existing studies to improve SAOC cannot guarantee that all audio objects can be decoded well.
This paper describes a new multi-step object coding structure to efficiently calculate the residual of each object as additional side information to compensate for the aliasing distortion of each object. In this multi-step structure, a sorting strategy based on the sub-band energy of each object is proposed to determine which audio object should be encoded in each step, because the object encoding order affects the final decoded quality. Singular value decomposition (SVD) is used to reduce the bit-rate increase due to the added side information. The experimental results show that the proposed method performs better than SAOC and SAOC-TSC, and each object can be decoded well with respect to both bit-rate and sound quality.

Chenhao Hu, Ruimin Hu, Xiaochen Wang, Tingzhao Wu, Dengshi Li

### Thermal Face Recognition Based on Transformation by Residual U-Net and Pixel Shuffle Upsampling

We present a thermal face recognition system that first transforms the given face in the thermal spectrum into the visible spectrum, and then recognizes the transformed face by matching it with the face gallery. To achieve high-fidelity transformation, a U-Net structure with a residual network backbone is developed for generating visible face images from thermal face images. Our work mainly improves upon previous works on the Nagoya University thermal face dataset. In the evaluation, we show that the rank-1 recognition accuracy can be improved by more than 10%. The improvement in the visual quality of transformed faces is also measured in terms of PSNR (with a 0.36 dB improvement) and SSIM (with a 0.07 improvement).
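The PSNR figure quoted in the thermal face recognition abstract above is the standard peak signal-to-noise ratio, computed from the mean squared error between the transformed face and its reference. A minimal reference implementation (generic metric code, not tied to the paper):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    # Peak signal-to-noise ratio between two equal-sized images given
    # as nested lists of pixel values; higher means closer images.
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For instance, a uniform pixel error of 16 on an 8-bit image gives an MSE of 256 and a PSNR of about 24.05 dB.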
Soumya Chatterjee, Wei-Ta Chu

### K-SVD Based Point Cloud Coding for RGB-D Video Compression Using 3D Super-Point Clustering

In this paper, we present a novel 3D structure-aware RGB-D video compression scheme, which applies the proposed 3D super-point clustering to partition the super-points in a colored point cloud, generated from an RGB-D image, into centroid and non-centroid super-point datasets. A super-point is a set of 3D points characterized by similar feature vectors. Given an input RGB-D frame, the camera parameters are first used to generate a colored point cloud, which is segmented into multiple super-points using our multiple principal plane analysis (MPPA). These super-points are then grouped into multiple clusters, each characterized by a centroid super-point. Next, the median feature vectors of the super-points are represented by K-singular value decomposition (K-SVD) based sparse codes. Within a super-point cluster, the sparse codes of the median feature vectors are very similar, so the redundant information among them is easy to remove by the subsequent entropy coding. For each super-point, the residual super-point is computed by subtracting the feature vectors inside it from the reconstructed median feature vector. These residual feature vectors are also collected and coded using K-SVD based sparse coding to enhance the quality of the compressed point cloud. This process results in a multiple description coding scheme for 3D point cloud compression. Finally, the compressed point cloud is projected into the 2D image space to obtain the compressed RGB-D image. Experiments demonstrate the effectiveness of our approach, which attains better performance than the current state-of-the-art point cloud compression methods.
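K-SVD, used twice in the abstract above, alternates between a sparse-coding step and a dictionary-update step; the sparse-coding step is commonly solved with orthogonal matching pursuit (OMP), which greedily picks the dictionary atom most correlated with the current residual and re-fits the coefficients by least squares. A minimal OMP sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def omp(D, y, k):
    # Orthogonal matching pursuit: approximate y as a k-sparse
    # combination of the columns (atoms) of dictionary D.
    support = []
    residual = y.astype(float).copy()
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the selected atoms by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x
```

With an orthonormal dictionary (e.g. the identity), OMP exactly recovers a k-sparse signal such as `y = 2*e0 + 3*e2` in two iterations.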
Shyi-Chyi Cheng, Ting-Lan Lin, Ping-Yuan Tseng

### Resolution Booster: Global Structure Preserving Stitching Method for Ultra-High Resolution Image Translation

Current image translation networks (for instance, image-to-image translation, style transfer, etc.) have strict limitations on input image resolution due to high spatial complexity, which leaves a wide gap to their usage in practice. In this paper we propose a novel patch-based auxiliary architecture, called Resolution Booster, to endow a trained image translation network with the ability to process ultra-high resolution images. Different from previous methods, which compute the results from the entire image, our network processes a resized global image at low resolution as well as high-resolution local patches to save memory. To increase the quality of the generated image, we exploit rough global information with a global branch and high-resolution information with a local branch, then combine the results with a designed reconstruction network. A joint global/local stitching result is then produced. Our network can be flexibly deployed on any existing image translation method to endow the new network with the ability to process larger images. Experimental results show both the capability of processing much higher resolution images without decreasing generation quality compared with baseline methods, and the generality of our model for flexible deployment.

Siying Zhai, Xiwei Hu, Xuanhong Chen, Bingbing Ni, Wenjun Zhang

### Cross Fusion for Egocentric Interactive Action Recognition

The characteristics of egocentric interactive videos, which include heavy ego-motion, frequent viewpoint changes and multiple types of activities, hinder the action recognition methods of third-person vision from obtaining satisfactory results. In this paper, we introduce an effective architecture with two branches and a cross fusion method for action recognition in egocentric interactive vision.
The two branches are responsible for modeling the information from observers and inter-actors respectively, and each branch is designed based on multimodal multi-stream C3D networks. We leverage cross fusion to establish effective linkages between the two branches, which aims to reduce redundant information and fuse complementary features. Besides, we propose variable sampling to obtain discriminative snippets for training. Experimental results demonstrate that the proposed architecture achieves superior performance over several state-of-the-art methods on two benchmarks.

Haiyu Jiang, Yan Song, Jiang He, Xiangbo Shu

### Improving Brain Tumor Segmentation with Dilated Pseudo-3D Convolution and Multi-direction Fusion

Convolutional neural networks have shown their dominance in many computer vision tasks and have been broadly used for medical image analysis. Unlike traditional image-based tasks, medical data is often in 3D form. 2D networks designed for images show poor performance and efficiency on these tasks. Although 3D networks work better, their computation and memory costs are rather high. To solve this problem, we decompose 3D convolution to decouple volumetric information, in the same way human experts treat volume data. Inspired by the concept of the three medically-defined planes, we further propose a Multi-direction Fusion (MF) module, using three branches of this factorized 3D convolution in parallel to simultaneously extract features from three different directions and assemble them together. Moreover, we suggest introducing dilated convolution to preserve resolution and enlarge the receptive field for segmentation. The network with the proposed modules (MFNet) achieves competitive performance with other state-of-the-art methods on the BraTS 2018 brain tumor segmentation task and is much more light-weight. We believe this is an effective and efficient way to perform volume-based medical segmentation.
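The dilated convolution mentioned in the MFNet abstract above enlarges the receptive field without adding parameters by spacing the kernel taps `dilation` samples apart: a kernel of size k with dilation d covers (k − 1)·d + 1 input samples. A minimal 1D sketch of the operation (generic illustration, not the paper's 3D layers):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    # 'valid' 1D convolution (correlation form) with dilated taps:
    # output[i] = sum_j signal[i + j*dilation] * kernel[j]
    span = (len(kernel) - 1) * dilation + 1  # samples covered per output
    return [sum(signal[i + j * dilation] * kernel[j]
                for j in range(len(kernel)))
            for i in range(len(signal) - span + 1)]

def receptive_field(kernel_size, dilation):
    # effective receptive field of a single dilated layer
    return (kernel_size - 1) * dilation + 1
```

With a size-3 kernel, dilation 2 doubles the coverage from 3 to 5 samples while the number of multiplications per output stays at 3.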
Sun’ao Liu, Hai Xu, Yizhi Liu, Hongtao Xie

### Texture-Based Fast CU Size Decision and Intra Mode Decision Algorithm for VVC

Versatile Video Coding (VVC) is the next-generation video coding standard. Compared with HEVC/H.265, the complexity of its intra coding increases significantly in order to improve coding efficiency. Excessive encoding time makes real-time coding and hardware implementation difficult. To tackle this urgent problem, a texture-based fast CU size decision and intra mode decision algorithm is proposed in this paper. The contributions of the proposed algorithm include two aspects. (1) Aiming at the recently adopted QTMT block partitioning scheme in VVC, some redundant splitting trees and directions are skipped according to the texture characteristics and the differences between sub-parts of a CU. (2) A fast intra mode decision scheme is proposed which considers complexity and texture characteristics. Hierarchical modifications are applied, including reducing the number of checked Intra Prediction Modes (IPM) and candidate modes in the Rough Mode Decision (RMD) and RD check stages respectively. Compared with the latest VVC reference software VTM-4.2, the proposed algorithm can achieve approximately 46% encoding time saving on average, with only a 0.91% BD-RATE increase or a 0.046 BD-PSNR decrease.

Jian Cao, Na Tang, Jun Wang, Fan Liang

### An Efficient Hierarchical Near-Duplicate Video Detection Algorithm Based on Deep Semantic Features

With the rapid development of the Internet and multimedia technology, the amount of multimedia data on the Internet is escalating exponentially, which has attracted much research attention in the field of Near-Duplicate Video Detection (NDVD). Motivated by the excellent performance of Convolutional Neural Networks (CNNs) in image classification, we bring the powerful discrimination ability of the CNN model to the NDVD system and propose a hierarchical detection method based on deep semantic features derived from CNN models.
The original CNN features are first extracted from the video frames, and then a semantic descriptor and a labels descriptor are obtained respectively based on the basic content unit. Finally, a hierarchical matching scheme is proposed to enable fast near-duplicate video detection. The proposed approach has been tested on the widely used CC_WEB_VIDEO dataset, and has achieved state-of-the-art results with a mean Average Precision (mAP) of 0.977.

Siying Liang, Ping Wang

### Meta Transfer Learning for Adaptive Vehicle Tracking in UAV Videos

Vehicle tracking in UAV videos is still under-explored with deep learning methods due to the lack of well-labeled datasets. The challenges mainly come from the fact that the UAV view has much wider and more changeable landscapes, which hinders the labeling task. In this paper, we propose a meta transfer learning method for adaptive vehicle tracking in UAV videos (MTAVT), which transfers common features across landscapes so that it can avoid over-fitting on a dataset of limited scale. Our MTAVT consists of two critical components: a meta learner and a transfer learner. Specifically, the meta learner is employed to adaptively learn models that extract the features shared between ground and drone views. The transfer learner is used to learn the domain-shifted features from ground-view to drone-view datasets by optimizing the ground-view models. We further seamlessly incorporate an exemplar-memory curriculum into meta learning by leveraging the memorized models, which serve as training guidance for sequential sampling. Hence, this curriculum can enforce the meta learner to adapt to the new sequences in the drone-view datasets without losing the previously learned knowledge. Meanwhile, we simplify and stabilize the higher-order gradient training criteria for meta learning by exploring curriculum learning in multiple stages with various domains.
We conduct extensive experiments and ablation studies on four public benchmarks and an evaluation dataset from YouTube (to be released soon). All the experiments demonstrate that our MTAVT has superior advantages over state-of-the-art methods in terms of accuracy, robustness, and versatility.

Wenfeng Song, Shuai Li, Yuting Guo, Shaoqi Li, Aimin Hao, Hong Qin, Qinping Zhao

### Adversarial Query-by-Image Video Retrieval Based on Attention Mechanism

Query-by-image video retrieval (QBIVR) is a difficult feature matching task across different modalities. More and more retrieval tasks require indexing the videos containing the activities shown in a query image, which makes extracting meaningful spatio-temporal video features crucial. In this paper, we propose an approach based on adversarial learning, termed the Adversarial Image-to-Video (AIV) approach. To capture the temporal pattern of videos, we localize temporal regions likely to contain activities via fully-convolutional 3D ConvNet features, and then obtain video bag features by 3D RoI pooling. To solve the mismatch issue with image vector features and identify the importance of information for videos, we add a Multiple Instance Learning (MIL) module to assign different weights to each piece of temporal information in the video bags. Moreover, we utilize the triplet loss to distinguish different semantic categories and support intra-class variability of images and videos. Specifically, our AIV proposes a modality loss as an adversary to the triplet loss in adversarial learning. The interplay between the two losses jointly bridges the domain gap across different modalities. Extensive experiments on two widely used datasets verify the effectiveness of our proposed methods compared with other methods.

Ruicong Xu, Li Niu, Liqing Zhang

### Joint Sketch-Attribute Learning for Fine-Grained Face Synthesis

The photorealism of synthetic face images has been significantly improved by generative adversarial networks (GANs).
Beyond realism, more accurate control over the properties of face images is also desired: while sketches convey the desired shapes, attributes describe appearance. However, it remains challenging to jointly exploit sketches and attributes, which are in different modalities, to generate high-resolution photorealistic face images. In this paper, we propose a novel joint sketch-attribute learning approach to synthesize photo-realistic face images with conditional GANs. A hybrid generator is proposed to learn a unified embedding of shape from sketches and appearance from attributes for synthesizing images. We propose an attribute modulation module, which transfers user-preferred attributes to reinforce the sketch representation with appearance details. Using the proposed approach, users can flexibly manipulate the desired shape and appearance of synthesized face images with fine-grained control. We conducted extensive experiments on the CelebA-HQ dataset [16]. The experimental results have demonstrated the effectiveness of the proposed approach.

Binxin Yang, Xuejin Chen, Richang Hong, Zihan Chen, Yuhang Li, Zheng-Jun Zha

### High Accuracy Perceptual Video Hashing via Low-Rank Decomposition and DWT

In this work, we propose a novel robust video hashing algorithm with high accuracy. The proposed algorithm generates a fixed-length hash via low-rank and sparse decomposition and the discrete wavelet transform (DWT). Specifically, the input video is converted into a randomly normalized video with a logistic map, and then content-based feature matrices are extracted from the randomly normalized video with low-rank and sparse decomposition. Finally, data compression with the 2D-DWT LL sub-band is applied to the feature matrices, and the statistical properties of the DWT coefficients are quantized to derive a compact video hash. Experiments with 4760 videos are carried out to validate the efficiency of the proposed video hashing.
The results show that the proposed video hashing is robust to many digital operations and achieves good discrimination. Receiver operating characteristic (ROC) curve comparisons indicate that the proposed video hashing achieves a more desirable trade-off between robustness and discrimination than some existing algorithms.

Lv Chen, Dengpan Ye, Shunzhi Jiang

### HMM-Based Person Re-identification in Large-Scale Open Scenario

This paper aims to tackle person re-identification (person re-ID) in the large-scale open scenario, which differs from conventional person re-ID tasks but is significant for some real suspect investigation cases. In the large-scale open scenario, the image background and person appearance may change immensely. A large number of irrelevant pedestrians appear in urban surveillance systems, some of whom may have a very similar appearance to the target person. Existing methods utilize only surveillance video information, which cannot solve the problem well due to the above challenges. In this paper, we observe that pedestrians’ paths from multiple spaces (such as the surveillance space and the geospatial space) are matched due to temporal-spatial consistency. Moreover, people have their own unique behavior paths due to individual behavioral differences. Inspired by these two observations, we propose to use the association relationship of paths from the surveillance space and the geospatial space to solve person re-ID in the large-scale open scenario. A Hidden Markov Model based Path Association (HMM-PA) framework is presented to jointly analyze image paths and geospatial paths. In addition, according to our research scenario, we manually annotate path descriptions on two large-scale public re-ID datasets, termed Duke-PDD and Market-PDD. Comprehensive experiments on these two datasets show that the proposed HMM-PA outperforms the state-of-the-art methods.
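The HMM-PA framework described above builds on Hidden Markov Model inference; the paper's exact model is not given here, but the core operation of recovering the most likely hidden path for an observation sequence is the classic Viterbi algorithm. A generic sketch with a toy two-state example (illustrative probabilities, not the paper's):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Most likely hidden-state path for an observation sequence under
    # an HMM given start, transition and emission probabilities.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            # best predecessor for state s at this time step
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                for p in states)
            row[s] = (prob, prev)
        V.append(row)
    # backtrack from the best final state
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(V) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return path[::-1]

# Toy example: two hidden states, three observations.
states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever": {"Healthy": 0.4, "Fever": 0.6}}
emit_p = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
best_path = viterbi(("normal", "cold", "dizzy"),
                    states, start_p, trans_p, emit_p)
```

In the toy example the decoded path is `["Healthy", "Healthy", "Fever"]`: the dizzy observation at the end is best explained by a late transition to the Fever state.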
Dongyang Li, Ruimin Hu, Wenxin Huang, Xiaochen Wang, Dengshi Li, Fei Zheng ### No Reference Image Quality Assessment by Information Decomposition No reference (NR) image quality assessment (IQA) aims to automatically assess image quality as it would be perceived by humans, without reference images. Currently, almost all state-of-the-art NR IQA approaches are trained and tested on databases of synthetically distorted images. Synthetically distorted images are usually produced by superimposing one or several common distortions on a clean image, whereas authentically distorted images are often contaminated by several unknown distortions simultaneously. Therefore, the performance of most IQA methods drops greatly on authentically distorted images. Recent research on the human brain demonstrates that the human visual system (HVS) perceives image scenes by predicting the primary information and avoiding residual uncertainty. Following this theory, a new and robust NR IQA approach is proposed in this paper. In the proposed approach, the distorted image is decomposed into an orderly part and a disorderly part, which are separately processed as its primary information and its uncertainty information. Global features of the distorted image are also calculated to describe the overall image contents. Experimental results on both synthetically and authentically distorted image databases demonstrate that the proposed approach makes great progress in IQA performance. Junchen Deng, Ci Wang, Shiqi Liu
2021-08-04 19:25:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3265027105808258, "perplexity": 2807.0884059989085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154897.82/warc/CC-MAIN-20210804174229-20210804204229-00590.warc.gz"}
https://www.codingame.com/playgrounds/349/introduction-to-mpi/custom-types---exercise
# Introduction to MPI Aveuh ### Custom types - exercise In the previous lesson, we have seen how to create very basic contiguous datatypes. This way of creating datatypes does not help us when we want to create datatypes that mix different basic types. For instance, in the previous lesson's example, we have seen a custom structure used to store the data: struct CustomData { int n_values; double dbl_values[10]; }; To represent this using the type/displacement formalism, our datatype would look something like: [(int, 0), (double, 4), (double, 12), (double, 20), (double, 28), (double, 36), (double, 44), (double, 52), (double, 60), (double, 68), (double, 76)] To simplify everything, we can convert each of these couples into a triplet: (type, start, number of elements). Thus our list simplifies to: [(int, 0, 1), (double, 4, 10)] MPI provides us with a special function to actually convert such a list into a datatype: int MPI_Type_create_struct(int count, const int block_length[], const MPI_Aint displacement[], const MPI_Datatype types[], MPI_Datatype *new_type); Let's see these arguments one by one. count is the number of elements in your list; in our case we have two entries, so count will be 2. block_length is an array of integers indicating, for entry i, the number of contiguous elements of that type. That is the third value of our triplet: 1 in the int case, 10 in the double case. displacement is an array of MPI_Aint. We have never seen this type before, but for now you can simply consider it an array of integers: MPI_Aint stands for Address integer, a specific MPI integer type wide enough to hold an address. In our case, that's the second element of each triplet. types is an array of MPI_Datatype. This should be pretty obvious by now: it's an array of all the different sub-types we are going to use in the custom type. Finally, we store the resulting datatype in new_type.
Knowing this, you are ready to optimise the example code we gave in the last lesson, especially removing all the copies in memory and transferring all the data using only one gather communication. ### Displacements Now there is a catch with the displacements. Computing the offsets manually can be tricky. Although it tends to be less and less the case, some types have sizes that can vary on a system/OS basis, so hardcoding the values might lead to trouble. A cleaner way of doing things is to use the offsetof macro from the standard library (you will have to include stddef.h in C or cstddef in C++). offsetof takes two parameters: a struct type and the name of one attribute of the struct. It returns a size_t (implicitly castable to an MPI_Aint) corresponding to the displacement of this attribute. For example, if we had the following structure: struct MyStruct { int a; double b; char c; float d; }; Then we could define our displacement table as: MPI_Aint displacements[4] = {offsetof(MyStruct, a), offsetof(MyStruct, b), offsetof(MyStruct, c), offsetof(MyStruct, d)}; ### Exercise It's your turn to optimise the program we made in the previous lesson. Use MPI_Type_create_struct to define a derived datatype and commit it so the data can be gathered on process 0. Custom struct datatype
2021-01-19 21:41:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2918412387371063, "perplexity": 2612.8572636246195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00478.warc.gz"}
https://www.thejournal.club/c/paper/122199/
#### Some complexity and approximation results for coupled-tasks scheduling problem according to topology ##### Benoit Darties, Rodolphe Giroudeau, Jean-Claude König, Gilles Simonin We consider the makespan minimization coupled-tasks problem in the presence of compatibility constraints with a specified topology. In particular, we focus on stretched coupled-tasks, i.e. coupled-tasks having the same sub-task execution time and idle-time duration. We study several problems, in the framework of classical complexity and approximation, for which the compatibility graph is bipartite (star, chain, ...). In such a context, we design efficient polynomial-time approximation algorithms for an intractable scheduling problem according to some parameters.
2021-04-15 04:11:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8052845001220703, "perplexity": 3333.178791840084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00392.warc.gz"}
https://math.stackexchange.com/questions/2919783/show-that-xry-if-and-only-if-x-y-are-in-the-same-part-of-the-partition-defi?noredirect=1
# Show that $xRy$ if and only if $x, y$ are in the same part of the partition defines an equivalence relation. Let $S$ be a set and $\{X_i\}$ be a partition of $S$. Show that $xRy$ if and only if $x, y$ are in the same part of the partition, defines an equivalence relation. What are the equivalence classes of this relation? • What is your question? – AnotherJohnDoe Sep 17 '18 at 3:56 • I am uncertain how to prove this iff statement – david D Sep 17 '18 at 3:59 • What have you tried as of yet? Can you guess what an equivalence relation on the partition might be? – AnotherJohnDoe Sep 17 '18 at 4:00 • I know the equivalence classes of an equivalence relation form a partition. But I am not sure how an equivalence relation can be defined on a partition. – david D Sep 17 '18 at 4:04 • Let's try to guess - for the partition formed by an equivalence relation, what are the equivalence classes? – AnotherJohnDoe Sep 17 '18 at 4:05
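For reference, a sketch of the standard argument (an outline of the usual three checks, not taken from the thread itself):

```latex
\emph{Reflexivity.} Every $x \in S$ lies in exactly one part $X_i$, since
the parts cover $S$; so $x$ shares a part with itself and $xRx$ holds.

\emph{Symmetry.} If $x, y \in X_i$ then trivially $y, x \in X_i$, so
$xRy \implies yRx$.

\emph{Transitivity.} Suppose $xRy$ and $yRz$, say $x, y \in X_i$ and
$y, z \in X_j$. The parts of a partition are pairwise disjoint, and
$y \in X_i \cap X_j$, so $X_i = X_j$; hence $x, z \in X_i$ and $xRz$.

\emph{Classes.} The class $[x] = \{\, y \in S : xRy \,\}$ is exactly the
part $X_i$ containing $x$, so the equivalence classes of $R$ are precisely
the parts of the partition.
```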
2019-08-24 11:11:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7565507292747498, "perplexity": 113.29930669952766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00416.warc.gz"}
https://alice-publications.web.cern.ch/node/7169
# Figure 7 Blast-wave fits to the \vtwo(\pt) of pions, kaons, and protons, and predictions of the deuteron \vtwo(\pt) for the centrality intervals 10--20$\%$ (left) and 40--50$\%$ (right). In the lower panels, the data-to-fit ratios are shown for pions, kaons, and protons, as well as the ratio of the deuteron \vtwo to the \mbox{blast-wave} predictions. Vertical bars and boxes represent the statistical and systematic uncertainties, respectively. The dashed line at one is to guide the eye.
2021-09-28 15:44:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6906067132949829, "perplexity": 2660.581394035897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060877.21/warc/CC-MAIN-20210928153533-20210928183533-00364.warc.gz"}
http://math.stackexchange.com/questions/313832/a-combinatorial-proof-that-the-alternating-sum-of-binomial-coefficients-is-zero
# A combinatorial proof that the alternating sum of binomial coefficients is zero I came across the following problem in a book: Give a combinatorial proof of $${n \choose 0} + {n \choose 2} + {n \choose 4} + \, \, ... \, = {n \choose 1} + {n \choose 3} + {n \choose 5} + \, \, ...$$ using the "weirdo" method (i.e., where one of the elements is chosen as special and included-excluded -- I'm sure you get the idea). After days of repeated effort, the proof has failed to strike me. Because every time one of the elements is excluded, the term would be ${n-1 \choose k}$ and not ${n \choose k}$, which is not the case in either of the sides of the equation. - HINT: Let $A$ be a set of $n$ marbles. Paint one of the marbles red; call the red marble $m$. If $S$ is a subset of $A$ that does not contain $m$, let $S'=S\cup\{m\}$, and if $m\in S\subseteq A$, let $S'=S\setminus\{m\}$. Show that the map $S\mapsto S'$ yields a bijection between the subsets of $A$ with even cardinality and those with odd cardinality. - Is it necessary to prefix hints with HINT? I like this answer but I always think the practice of shouting HINT before you give one is a bit strange. – Ben Millwood Feb 25 '13 at 11:42 @Ben: I prefer to give an explicit signal that I am not providing a complete answer. This is partly for the benefit of the querent, and partly because on a few occasions when I’ve inadvertently failed to do so, someone (other than the querent) has complained that I didn’t answer the question. – Brian M. Scott Feb 25 '13 at 11:45 Well, fair enough. I guess I just haven't been bitten by not signposting my hints yet. – Ben Millwood Feb 25 '13 at 16:13 This is not a direct combinatorial proof but one can make the argument combinatorial. There is an easy combinatorial proof of the following: $${n\choose {k}}={{n-1}\choose {k}}+{{n-1}\choose {k-1}}$$ Now take $k=2r$ and $k=2r+1$ and sum over all integers $r$. 
In the first case you will have the expression on the LHS and in the second you will get the expression on the RHS, and both are equal by the above identity. - To show this you can use the binomial theorem, which is $(x+y)^n=\sum_{k=0}^{n}\dbinom{n}{k}x^{n-k}y^{k}$. Set $x=1$, $y=-1$ and you get $\dbinom{n}{0}-\dbinom{n}{1}+\dbinom{n}{2}-\cdots+(-1)^{n}\dbinom{n}{n}=0$, hence $\dbinom{n}{0}+\dbinom{n}{2}+\cdots=\dbinom{n}{1}+\dbinom{n}{3}+\cdots$; thus in a set the number of subsets of even cardinality equals the number of subsets of odd cardinality. -
2016-07-27 15:26:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9155858159065247, "perplexity": 224.4482112999024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826908.63/warc/CC-MAIN-20160723071026-00176-ip-10-185-27-174.ec2.internal.warc.gz"}
https://amsi.org.au/ESA_middle_years/Year6/Year6_1aT/Year6_1aT_R2_pg3.html
### Multiplication is the inverse of division When working with multiplication and division it is useful to understand that they are the inverse of each other. That means that multiplication can be used to undo division and vice versa. For example, 25 ÷ 5 = 5 and 5 × 5 = 25.
2021-12-02 02:59:58
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9015247225761414, "perplexity": 119.11379638176477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.69/warc/CC-MAIN-20211202024322-20211202054322-00306.warc.gz"}
https://imathworks.com/tex/tex-latex-understanding-headers-in-scrartcl/
# [Tex/LaTex] Understanding headers in scrartcl I would like to get a better understanding of why using fancyhdr is not recommended with scrartcl. So far I haven't had any problems using them together, but the output is always complaining that I shouldn't use fancyhdr with scrartcl and that I should use scrlayer-scrpage instead. Can somebody please explain the reason why I shouldn't use fancyhdr? \documentclass[a4paper,pagesize ,landscape, fontsize=6pt]{scrartcl} \usepackage[left=0.75cm,right=0.75cm, top=1cm, bottom=0.75cm]{geometry} \usepackage{multicol} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{fancyhdr} %\usepackage{scrlayer-scrpage} \setlength{\columnsep}{25pt} \setlength{\columnseprule}{0.4pt} \pagestyle{fancy} \fancyhf{} \begin{document} \begin{multicols*}{3} Some entries here \columnbreak and some entries there \columnbreak and the list goes on \end{multicols*} \end{document} You can only use one of the packages scrlayer-scrpage, scrpage2 (the predecessor of scrlayer-scrpage), fancyhdr, ... The recommended package for use with a KOMA-Script class is scrlayer-scrpage because it is part of the KOMA-Script bundle, and you can set and change options in the same way as for the class; see the KOMA-Script documentation. I really suggest using scrlayer-scrpage.
Your header with scrlayer-scrpage: \documentclass[ %a4paper,% default %pagesize,% default since version 3.17 landscape, fontsize=6pt, ]{scrartcl} \usepackage[left=0.75cm,right=0.75cm, top=1cm, bottom=0.75cm]{geometry} \usepackage{multicol} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{scrlayer-scrpage} \clearpairofpagestyles \renewcommand*\pagemark{{\usekomafont{pagenumber}Page\nobreakspace\thepage}} \setlength{\columnsep}{25pt} \setlength{\columnseprule}{0.4pt} \usepackage{blindtext}% dummy text \begin{document} \begin{multicols*}{3} \Blinddocument \end{multicols*} \end{document} But using fancyhdr is also possible if you really want to use this package: \documentclass[ %a4paper,% default %pagesize,% default since version 3.17 landscape, fontsize=6pt ]{scrartcl} \usepackage[left=0.75cm,right=0.75cm, top=1cm, bottom=0.75cm]{geometry} \usepackage{multicol} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} \renewcommand*\pagemark{{\usekomafont{pagenumber}Page\nobreakspace\thepage}} \setlength{\columnsep}{25pt} \setlength{\columnseprule}{0.4pt} \usepackage{blindtext} \begin{document} \begin{multicols*}{3} \Blinddocument \end{multicols*} \end{document} Note that with fancyhdr a small number of options regarding the page header and footer will not work. An example is the KOMA option footsepline. So if you really want to use fancyhdr and there is only the warning that the usage of fancyhdr together with a KOMA-Script class is not recommended, you can ignore it. But do not ignore any additional warning regarding the old font commands like \rm, \sl. Note that fancyhdr uses these commands in its default header and footer definitions. Starting with the current prerelease of the next KOMA-Script version (3.20), KOMA-Script does not define these old commands. So \documentclass{scrartcl}[2015/11/06] \usepackage{fancyhdr} \pagestyle{fancy} \begin{document} Text \end{document} will result in errors.
You can avoid the errors if you either use \fancyhf{} and define your own header and footer using \fancyhead and \fancyfoot without using the old font commands, or you use a compatibility option.
2023-01-28 14:09:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8641823530197144, "perplexity": 4182.3261385798305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499634.11/warc/CC-MAIN-20230128121809-20230128151809-00289.warc.gz"}
https://www.savevsplayeragency.net/
## The Grand Campaign Chivalry & Sorcery 1st edition opened with an introduction to the concept of the Grand Campaign. There was no succinct single-sentence definition which I can reproduce here, but rather a description of the Grand Campaign as encompassing an entire fantasy world and permitting play at any and all levels of detail within that world – from kings and commanders of armies to individual characters of any social rank. This is fundamentally different from most modern roleplaying game campaigns, which are often described in terms of “stories” in which the player characters are protagonists. Rather, in the Grand Campaign as described by Simbalist and Backhaus in the 1st edition of Chivalry & Sorcery, it is really the history of a simulated fantasy world which is being “played out” rather than an individual story within that world. Individual players may control many different characters in that world; indeed, they may play powerful kings, chivalrous knights, or humble peasants, and anything between. The Grand Campaign encapsulates this idea of play happening (or at least, being able to happen) at multiple levels of play, with groups of players choosing to play at all the levels or just those which interest them. Building and expanding on the Grand Campaign as described in Chivalry & Sorcery 1st edition with my thoughts after some time spent with the 5th edition of that august gaming lineage, the levels of play in the Grand Campaign could be described as follows, listed in order, with “1” at the “highest” level of play and “4” at the “lowest” or most narrowly focussed level: 1. Grand Strategy – This level of play is primarily military and geopolitical in nature. It will focus on the struggles between baronies and kingdoms and on the clash of armies, and deal with questions of economics and resources. 2. The Nobility – This level of play complements Grand Strategy by fleshing out the important political and military characters – princes, generals, knights.
C&S 1e notes that “the military miniatures enthusiast will probably elect only to deal with Knights and Nobles”. 3. The Fantasy Campaign – This level of play is probably the more typical fantasy roleplaying game’s fare (albeit with more C&S-style realism). It adds magicians to the mix, expanding the game beyond the knights and nobles of the historical ruling class. 4. Village Life – This level of play includes ordinary life in the game world, down to the lowest serf. I have called this “Village Life” to emphasise the micro-nature of this level of play within the Grand Campaign, but it could just as easily focus on the day-to-day in towns or cities. This level of play is not typically a feature of fantasy roleplaying games – this sort of activity is generally performed by NPCs in the background of play. Chivalry & Sorcery 1st edition was intended as a single-volume rulebook to address the whole of the Grand Campaign (or at least levels 1 through 3 in my improvised hierarchy): “The essential feature of Chivalry & Sorcery is the flexibility built into all of the campaign types. Players may choose the type of campaign that they desire and may ignore all elements that are not relevant to their needs and aims.” Ed Simbalist & Wilf Backhaus (1977), Chivalry & Sorcery, 1st edition My own Chivalry & Sorcery campaign is at level 3. Indeed, probably every fantasy roleplaying campaign I have ever run or played in has been at level 3. There is certainly nothing wrong with level 3, but there’s a lot of play which can happen at other levels which isn’t a feature of my own games or my own experience. The Grand Campaign would involve play at multiple levels, possibly with many more players and even multiple GMs, not necessarily all playing together at the same time, but all playing in the same world, with the potential to interact and affect each other.
This explains why, as previously discussed, the concept of time and a relationship between real world time and campaign time is also strangely (to modern eyes) so important in Chivalry & Sorcery 1st edition. Indeed, C&S 1st edition is not alone in this – it seems to have been a feature of the Greyhawk and Blackmoor D&D campaigns as well. In this context, the infamous Gygax quote in the AD&D DMG that “YOU CAN NOT HAVE A MEANINGFUL CAMPAIGN IF STRICT TIME RECORDS ARE NOT KEPT” makes a lot more sense than it might from a purely “modern” (really, 1980s onwards) roleplaying game perspective. This sort of defined relationship between real-world and game time would have been essential for games which spanned levels 1 to 3 of the Grand Campaign. I have been drafting and re-drafting my own rules-lite game which spans levels 1 to 3 of the Grand Campaign (with play focussed at level 2) for a year and a half. I am about to re-embark on yet another re-write. But since I am presently running (and enjoying) a level 3 “Fantasy Campaign” using Chivalry & Sorcery 5th edition, and since Chivalry & Sorcery introduced the concept, I have started to think about the practicalities of using Chivalry & Sorcery‘s latest incarnation for the Grand Campaign. The first thing I would note is that the wargaming clubs which predominated in the hobby in the 1970s are now, effectively, a separate world from the roleplaying game one. Campaigns as run by Arneson and Gygax, and indeed, Simbalist and Backhaus, with 20+ players drifting in and out between sessions are generally impractical – unless maybe you were playing Dungeons & Dragons 5th edition (and even then, that would not be a very Critical Role-like experience so maybe not). 
I suspect, though I am not sure, that this was true of early C&S campaigns; it was certainly a feature of early D&D campaigns that players had multiple characters, and would choose which of their characters to play at the start of each adventuring session, alternating between them over the course of many sessions. I think a similar approach would be necessary in the Grand Campaign in 2020, and this would also help facilitate running such an ambitious exercise with a more realistically sized play group. I believe the Grand Campaign calls for players to take more responsibility and direct control of many aspects of the game world, albeit under the Gamemaster’s supervision. The Player-Referee (as the GM is occasionally called in Chivalry & Sorcery 1st edition) of course has the most characters at their disposal, but the game actually rails against the injustice of this somewhat, and explains how player characters may marshal armies and enlist the support of allies and vassals, and how such characters are, in effect, under their control rather than the Player-Referee's. In the Grand Campaign, players can be expected to manage the resources and incomes available to their estates and feudal holdings at levels 1 and 2 of play. A good way to invest the players in these additional responsibilities, and in play at level 1 especially, is to involve them in the world building process. Chivalry & Sorcery 5th edition has a solid introduction to world building, starting with map making (just as 1st edition notes), a description of feudal society, and a detailed system for building feudal holdings from the level of kingdom down. There is no reason why this work should only be done by the GM – and indeed, there will be more investment in a collaborative world building effort.
After the initial preparation is complete, of course, world building will continue to be ongoing during play in order to satisfy the demands of play, and the GM will need to do this “just in time” worldbuilding largely by themselves, but they will be building upon a shared foundation. Unlike in 1st edition Chivalry & Sorcery, which started with the miniature wargaming rules first and eventually moved on to character generation around the halfway mark, level 1 play is not entirely facilitated by the 5th edition core rulebook. The Kickstarter included the stretch goal to deliver the Ars Bellica supplement, a mass-combat system which can also serve as a miniature wargame in its own right. Subscribers to the Brittannia Game Designs Patreon can get access to the latest draft of this supplement. The current draft is certainly usable, and as an alternative to the full miniature rules most of the supplement describes, it also includes an “Art of Maths” option for mass combat to be resolved with some maths based on troops and resources and a few rolls. Either one of these should bridge the gap and satisfy the requirements of level 1 play nicely. The core rulebook does provide the economic details which would be necessary for Grand Strategy level play. Play at levels 2, 3 and even 4 is well provided for by the Chivalry & Sorcery 5th edition core rulebook. No other game in my collection so admirably and comprehensively caters to characters drawn from every class of feudal society. Non-human options are already dealt with in the core rulebook, but are greatly expanded upon by the previous edition’s Elves Companion and Dwarves Companion (which are still compatible with 5th edition), and by the recently released Nightwalkers supplement (that’s an affiliate link) which greatly expands on lycanthropes and vampires in Chivalry & Sorcery (remember: monsters are people too!). A supplement is on its way for Goblins, Orcs, and Trolls as well.
The non-human races have their own societies, social classes, vocations, etc, sufficient to support play at levels 2, 3, and 4. As I discussed in my reviews of the Elves Companion and Dwarves Companion, it is a mistake to think of Chivalry & Sorcery‘s demihuman and monstrous races as “vanilla D&D” versions – but they could still be incorporated into a newly built world with just the geographic features adapted to your own setting (as indeed, these books do for Marakush). I am not sure whether I ever will run the Grand Campaign, but I’d certainly like to try, and Chivalry & Sorcery 5th edition is the first game in my collection I’d use for it. In the meantime, I will press on with my own Fantasy Campaign, level 3 game! ## Chivalry & Sorcery 5e: 5+1 Sessions In Some time ago, I reviewed the Chivalry & Sorcery 5e PDF, on the basis of reading it through and rolling up a character myself, but not after actual play. Since that time, the printed books have been shipped, and I’ve started a Chivalry & Sorcery campaign. Now we’re a few sessions in (four “regular play” sessions and one “session zero” character generation session), I feel like I am better placed to comment on the game in play. As a group we are still learning the system, and given C&S 5e is a 600+ page book, this is going to take a little longer. We are still looking things up in play just to make sure we are doing it right, but as of the fourth session of regular play, we are now, as a group, more looking things up to be sure we’re doing them right than we are to resolve differences of interpretation or different recollections of the rules. In my review of the PDF, I described the game as complete not complicated. After playing for a while, I think this is generally true, there are definitely some “crunchy” parts – especially in the magick system. 
Generally, the “crunch” feels like it adds both realism and tactical thinking, which is welcome, but it can also slow us down in play when we use a new subsystem for the first time. As with all such subsystems, the more familiar we become with using it in play, the faster we get with it. By the second or third time we do something in C&S we do it quite quickly and smoothly. Expressed in GNS terms, without intending to affirm any kind of validity to that hot mess of a model, C&S is heavily simulationist. The core mechanics are not complex, but there’s a lot of detail in procedures for particular situations, and a lot of granularity in the skills list. There are also some rules which have needed clarification/expansion after the book was printed (link to publisher’s webpage here), which helps with the more complicated aspects of magick in particular. In general, these detailed procedures are smooth in play, but there are some stress points when some players try to play the game like D&D. For example, the game has an action point pool-based approach to determining who can act first in combat and how much they can do. This is excellent, and feels realistic to me. However, some players try to “hold actions” until certain conditions take place, as if they were playing 3rd edition D&D; in mechanical terms this means they build up action points, since they don’t spend them on an action, which leads to a glut of action points. This doesn’t actually break the C&S rules, mind you – in fact, you are intended to be able to bank action points, but typically you’d be banking “left over” action points in a round, not your entire allocation of action points for a round. So we are making good progress on the learning curve with respect to the C&S rules themselves – but as a group we still need to make some progress on learning how to play C&S as C&S.
As I discussed in my initial review of the PDF, the PDF is hyper-functional, filled with links and particularly useful electronic tabs to skip between major sections. The physical book does not have such conveniences, but it is a distinctly satisfying 600+ page full-colour book, with a ribbon. The pages are nicely laid out, and visually appealing. The artwork sets the tone for the game beautifully, much more “low fantasy” and medieval than modern Dungeons & Dragons or Pathfinder. Given the size of the book, I find that some extra ribbons are useful during play as I deal with procedures for magick with some characters, certain skills with others, NPC stats, etc, at the same time. Occasionally, I think some information could be better co-located in the book. For example, if some of the information about magickal vocations and modes of magick from the Vocations chapter could be repeated in the Magick chapter, it would save some flicking around in play. This is less of an issue with the PDF than the physical book, and illustrative of why I have started to use multiple ribbons in play. The information is laid out logically in the book – just the size of the book naturally means one has to move around quite a bit. I enjoy the book a lot and commend Brittannia for running an outstanding Kickstarter, and getting the physical book out promptly. Many other Kickstarters I backed, with long, drawn-out deliveries, are now frozen, with physical rewards only partially fulfilled, but my C&S book arrived early, and is all the more cherished for it. More importantly, I am enjoying the game, even though we have moved play online, and I don’t hesitate to recommend Chivalry & Sorcery for anyone who wants a more “realistic” medieval fantasy roleplaying game.

## Chivalry & Sorcery on Roll20

I recently started a new Chivalry & Sorcery campaign, and COVID-19 came to rain on our parade!
So we moved our most recent session online, and that’s where the campaign will stay for the foreseeable future. Since I already have a Roll20 Pro subscription and since I am familiar with it from my Lamentations of the Flame Princess campaign (still going strong), I went to Roll20 as my default. Unfortunately Roll20 doesn’t have a character sheet built for Chivalry & Sorcery – I am pretty sure that none of the VTT platforms out there do! But that’s OK, I don’t need all of the VTT bells and whistles, since at this stage I am still assuming that the campaign will be able to move back to a face-to-face campaign in the medium term. There are some Roll20 features I can make good use of though.

## Dice Macros

Chivalry & Sorcery has an elegant dice mechanic. Once you have calculated the Total Success Chance (TSC) for all your skills during character creation, the dice mechanic is quick and easy to use in play. Face-to-face we roll 1d100 and 1d10 – the 1d100 is the roll to determine success or failure and the 1d10 is the “Crit die”. If the 1d100 result is equal to or below the TSC, the roll is a success, and the Crit die shows how successful the success was (a 10 = a critical success). If the 1d100 result is above the TSC, the roll is a failure, and the Crit die shows how much of a failure the failure was (a 10 = a critical failure). There is nothing to stop you just rolling your dice in Roll20 the same way, or at least in two steps (/roll 1d100 then /roll 1d10), but we are not familiar enough as a group with the Crit die tables that all of us know how much of a success or failure a given Crit die roll is. Complicating this, the Crit die roll is modified by very high and very low TSCs. So I thought I would set up some simple macros to semi-automate the resolution mechanic.
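Before looking at the macros themselves, it can help to see the whole mechanic end to end in code. Here is a minimal JavaScript sketch of a single resolution roll. This is my own illustration, not official rules text: the function names are mine, the Crit die modifier is omitted, and the quality band labels are the simplified ones I use in my macros.

```javascript
// Chivalry & Sorcery resolution sketch: 1d100 vs the Total Success
// Chance (TSC) decides success, a 1d10 "Crit die" decides quality.
function rollD(sides) {
  return 1 + Math.floor(Math.random() * sides);
}

function resolve(tsc) {
  const roll = rollD(100);
  const crit = rollD(10);
  const success = roll <= tsc; // at or below TSC = success
  // Simplified quality bands (the real tables also apply a modifier
  // for very high and very low TSCs, which this sketch skips).
  let quality;
  if (crit <= 1) quality = "mediocre";
  else if (crit <= 5) quality = "middling";
  else if (crit <= 9) quality = "competent";
  else quality = "critical";
  return { roll, crit, success, quality };
}
```

With a mental model like this, it is clearer what the macros need to automate: the d100 comparison is trivial, and it is the Crit die interpretation where the conditional logic has to live.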
Unfortunately, this was much more difficult than I anticipated, because the functions Roll20 exposes for dice mechanics are limited unless you can use APIs in your game (which requires a Pro subscription). I am going to keep refining these macros, but for now, this is what I have.

### Percentile Pair Roll

Percentile Pair Roll is the simplest macro and is one which can be implemented without API support:

```
/roll 1d100<?{Total Success Chance|01}
```

This macro rolls 1d100 and compares it to a number provided by the player called “Total Success Chance”. If we had an electronic character sheet, this could point at the sheet, but we don’t, so instead we need a pop-up box to ask the player to type in their TSC manually. This is OK for now as it helps the players learn the system.

### Crit Die Macros

My Crit Die macros include a conditional statement and format results nicely, so I used the Power Card API. To use this API, I needed a Pro account (which I already had), and I needed to go into the Game Settings and add the API to the game. Once Power Cards was added, the macros I use are coded like this:

```
!power {{
--name|Success with Crit Die
--Crit Die Roll:| [[ [$crit] 1d10 + ?{Crit Die Modifier (pp37-38)|0} ]]
--?? $crit.total <= 1 ?? Result:|Mediocre Success
--?? $crit.total >= 2 AND $crit.total <= 5 ?? Result:|Middling Success
--?? $crit.total >= 6 AND $crit.total <= 9 ?? Result:|Competent Success
}}
```

## Review: Elves Companion for Chivalry & Sorcery

The Elves Companion (affiliate link) for Chivalry & Sorcery was originally published for Chivalry & Sorcery 3rd Edition in 2000. Note that I am reviewing it with a view to its use with Chivalry & Sorcery 5th Edition. The Elves Companion is available from DriveThruRPG as a scanned PDF from the printed original. Although the scan is very clear and the book is easily readable, it has not been OCR’d. This means you can’t keyword search the PDF, copy and paste from it, or use other text manipulation tools.
That said, the 58-page PDF is only $4, less than scans of old D&D modules (which are, generally, OCR’d). So this is far from a modern, hyper-functional PDF like the C&S 5th Edition PDF (previously reviewed here). As you’d expect given its age, the interior is all black and white, laid out much like Chivalry & Sorcery: The Rebirth – which was very much how most non-WoD RPGs looked before Dungeons & Dragons 3rd Edition. It does print out nicely and with much less ink than the lavish 5th Edition PDF though! If you can get past these cosmetic and format issues, the content is well worth it. These elves are not your typical D&D elves. The Elves Companion combines Celtic mythology with a sprinkling of other sources, and grounds the Elven Nation in the sort of concrete terms you expect to find in a roleplaying game sourcebook. The “assumed setting” of the Elves Companion is the same assumed setting as Chivalry & Sorcery itself – our own world during the medieval period. Marakush-specific material is also included – but as an appendix (and a sidebar on page 20). Elven society is presumed to exist on the fringes of human society in the medieval world – generally confined to deep forests, where the elves may actually be the source of legends like Robin Hood. Elves deal more easily with pagan human societies than with the humans of the Abrahamic religions. It can be challenging, at times, to imagine how elves could dwell even in the forests of medieval Western Europe, but I think this is caused by our modern perspective on the world as intrinsically observable and thus knowable. In the medieval world, “civilisation” was not ubiquitous – it was bounded by city walls, erected against a wild and hostile world, and the deep, dark forests were impenetrable and alien. If the peasants tell stories of elves in the forest, perhaps their tales are true?
The elves themselves are related to the faeries of Celtic mythology, but they are doomed by the Blight, a product of human progress at the expense of the Earth. Their bloodlines have become impure, leaving only a handful of increasingly inbred “True Elves” as their ultimate ruling class, who sit on top of a hierarchical society, rarely if ever seen by the “Half Blood” (as distinct from half-elven) majority, the great unwashed of the Elven Nation. The Half Blood proletariat are supervised by the Noble Elves, who serve as the gentry and bourgeoisie of the Elven Nation, underneath the Pure Blood True Elves. Even the Half Bloods are higher than the Lost Bloods, the contemptible excommunicates who have committed such unthinkable crimes as breeding with humans (creating half-elves), or working for humans. This hierarchical elven society is profoundly different from the societies of the increasingly generic elves of modern D&D, and most other fantasy roleplaying games. Unlike the elves of many modern “high fantasy” settings, the elves of the Elves Companion feel genuinely non-human. Although largely a sourcebook, and written for another edition at that, the rules in the Elves Companion seem generally compatible with Chivalry & Sorcery 5th Edition. Many of the character creation modifications in the Elves Companion, and elf-specific tables for social class, build, father’s vocation etc appear in the 5th edition book itself. The expanded rules and tables for elven longbows appear to be 5th edition compatible too. It is a little unclear to me whether the rules points for half-elves (p31) which appear in the Elves Companion still apply in 5th edition though – it seems that creating a half-elf in 5th edition is more or less the same as creating a human character, but the player selects the “Fey Blood” special ability/talent. The other “rules points” which appear throughout the Elves Companion seem largely 5th edition compatible.
Most significantly, the Elves Companion adds the Wardens Mage Mode and Elven Mage Mode, which are referred to in the 5th edition PDF for elven characters but do not appear there. Appendix A contains new vocations, and unless I have overlooked a skill or two which has been revamped, these appear to be completely compatible with 5th edition. Appendix B contains a bestiary of creatures associated with the elves, and though the stats for these creatures are presented differently from the Bestiary chapter in the 5th edition PDF, the necessary stats seem present and I am fairly confident these creatures could easily be used in a 5th edition game. This concludes the new 5th-edition-compatible rules content in this sourcebook, although there is also Appendix C, which adds rules for creating elven characters in Chivalry & Sorcery Light. The Elves Companion concludes with two scenarios, one set in Marakush, and the other set in medieval Europe (in northern Scotland specifically). These are both brief scenarios (1.5 pages long each), intended for elven player characters, which makes sense given the book they appear in. Both present a hook in the form of a mission bestowed upon the party by a noble, and then barebones details of the journey and adventure which follows. Both seem serviceable and potentially good starts for a longer elven campaign. In the scenario set in Scotland, I could see the potential to have human (or even dwarf) player characters join the elven party during the course of the adventure, although I am not sure I really see “mixed” parties working in the historical setting as well as they might in Marakush or other fantasy worlds. In fact, after reading the Elves Companion, and seeing how “alien” elven characters are from a human perspective, I do wonder how well they would fit into a predominantly human adventuring party.
Modern D&D simply assumes a mix of player character races is normal in an adventuring party, and if we are honest we must concede that modern D&D doesn’t trouble itself terribly with helping us suspend disbelief as to how an extremely heterogeneous adventuring party could be formed or function. Few D&D adventuring parties are forged in response to apocalyptic circumstances like the Fellowship of the Ring – most typically, the characters are presumed to know each other beforehand and/or meet in a tavern. This hardly seems satisfying or consistent with the verisimilitude Chivalry & Sorcery strives for. Having read the source material in the Elves Companion, I find it difficult to believe that most elves would mix with humans to any considerable extent – only the Lost Bloods would seem to me to make appropriate members of an adventuring party in the D&D mould. This isn’t a drawback or a problem from my perspective – frankly, one of the main appeals to me of Chivalry & Sorcery is a more grounded, believable fantasy world, and insofar as “adventuring parties” should even exist in such a world, they would be largely culturally and socially homogeneous. However, I know that many of my gaming friends have a strong aversion to homogeneity amongst player characters – they want to play a variety of different races in particular, and I am still not sure how I will facilitate that as a Chivalry & Sorcery Gamemaster. At $4, the Elves Companion is an outstanding value purchase, even though it is a “flat” PDF constructed from a scan. If you are planning on running a Chivalry & Sorcery game in medieval Europe or Marakush, the Elves Companion is an essential purchase.
If you are building your own world for Chivalry & Sorcery with your own “bespoke” elves, then it is still worth reading the chapters about character creation from the Elves Companion so that you can better understand the elven character generation rules in the main book, which will help you see how to customise them to achieve your design objectives. It has certainly given me a lot to think about in terms of my own world-building. Get the Elves Companion via my affiliate link here.

## Review: Chivalry & Sorcery 5th Edition (PDF)

Chivalry & Sorcery was one of the earliest fantasy roleplaying games, created by Ed Simbalist and Wilf Backhaus. Apparently the game evolved out of the desire of their playing groups to play what their characters did “between dungeons”, so to speak. Questions like these, and many more, were answered by Chivalry & Sorcery, which developed a reputation for complexity when it was first published in 1977, perhaps because it attempted to provide gameable rules and material for virtually every aspect of “medieval fantasy” life, not just adventuring. The reputation for complexity was probably assisted by the first edition making use of photo reduction to fit four pages of text onto each of its 128 physical pages – an extraordinarily dense rule book even today!

## Old-School Essentials

Late last month the Old-School Essentials Kickstarter came through and I received my goodies. I backed for both the Rules Tome and the boxed set, along with the Advanced Fantasy expansion books, and I am really pleased I did. These are really gorgeous physical books. Obviously, I was already familiar with Gavin Norman’s outstanding work with B/X Essentials and so in many ways this was a “sure bet” Kickstarter. B/X Essentials was already a really nicely done retroclone, with a modular design which makes it unique among “Basic D&D” retroclones.
I knew Old-School Essentials would be at least that good – really I backed the Kickstarter to see how much of an improvement an offset print run could be compared to the B/X Essentials print-on-demand books. The answer is: a huge improvement. These books are physically beautiful. They’re sturdy and attractive, and are so well presented that they could be placed on the same shelf as the latest books from WotC and Paizo and look as if they deserved to be there, despite the fact that their presentation is of an entirely different style to the “big colour hardback book”. This is an OSR product which is truly worthy of mainstream distribution, and if we are honest, there are only a few OSR games which we can say that about, in terms of their physical presentation. I hope Old-School Essentials finds that mainstream distribution and enjoys mainstream success as a result, beyond the usual OSR audience which is familiar with the online distribution channels. Its modular nature also lends itself to a line of supplements – genre rules for different fantasy genres, alternate classes and races, and so on. While I think experienced players don’t mind tinkering with the rules from an all-in-one book like the Rules Tome, the smaller, modular books in the boxed set invite newer players to see the system as consisting of pieces which can be swapped in and swapped out. I don’t think I appreciated how clever this was when Gavin first did it with B/X Essentials but I certainly see it now. As fond as I am of Labyrinth Lord, with this printing I think Old-School Essentials has superseded it and now stands as the superior “reference retroclone” for Basic D&D. I think its succinct, modern presentation makes it an easier sell to younger players familiar with 5e or Pathfinder, as well as a superior reference text for Grognards.
## More Great Maps by Great Mappers

I am not much of a mapper, but while trying to work out why there is an undescribed secret door on the eastern edge of the map of level 1 of the Palace of the Silver Princess in B1-9: In Search of Adventure (which I am running for my kids and their friends using my retroclone, First Five Fantasy Roleplaying), I came across some great fan-made maps of the dungeon by “Bogie-DJ” on DeviantArt. The best part about these maps is not only are they gorgeous, but if you download the hi-res version via the link in the bottom right hand corner, they’re already the perfect size for use in Roll20!
# Tag Archives: xml

One of the projects I have wanted to develop for a long time is a browser based particle physics experiment simulator. Such a project would generate events using Monte Carlo methods and simulate their interactions with the detector. This was made partly as an educational aid, partly as a challenge to myself, and partly because at the time I was feeling some frustration with the lack of real analysis in my job. As expected for a Javascript based CPU intensive application, this reaches the limits of what is possible with current technology quite rapidly.

### Overview

The model for the detector is saved in an xml file which is loaded using the same methods developed in the Marble Hornets project. Particle data are currently stored in Javascript source files, but will eventually use xml as well. The particle interactions are simulated first by choosing a process (e.g. $$e^+e^-\to q\bar{q}$$) and then decaying the particles. Jets are formed by popping $$q\bar{q}$$ pairs out of the vacuum while phase space allows and then arranging the resulting pairs in hadrons. Bound states are then decayed according to their Particle Data Group (PDG) branching fractions, with phase space decays used. The paths of the particles are propagated through the detector using a stepwise helix propagation. Energy depositions in the detector are estimated, according to the characteristic properties of the detector components. A list of particles is then compiled based on the particles produced and these can be used to reconstruct parent particles. The user has access to several controls to interact with the application. They can choose how to view the detector, using Cartesian coordinates and two Euler angles (with the roll axis suppressed). The most expensive parts of the process are the generation of the event displays and the generation of the particle table.
By default these are only updated after a certain interval, to allow the user to accumulate a significant number of events without being slowed down by the graphics. To save time the detector itself is rendered once in a cutaway view, and the particle tracks are overlaid on the saved image. Eventually the user will be able to get a full event display, including the detector response to the particle with glowing detector components etc. The user has access to collections of particles, including electrons, muons, pions, kaons, photons, and protons. From these they can construct other particles, making selections as they do so. Once they have made parent particles they can then plot kinematic variables including mass, momentum, transverse momentum, and helicity angle. This should, in principle, allow students to learn how to recreate particles and how to separate signal from background effectively. Given the large amount of information available, the user has access to a number of tabs which can be collapsed out of view. This allows the user to run the application without the expensive canvas and DOM updates, and thus collect many more events. This is still a work in progress, with reconstruction of particles being the next main priority. Eventually the user would be able to load their favourite detector geometry and beam conditions, then perform their analysis, saving the output in xml files and possibly being able to upload these to a server. This would allow users to act as “players” with “physics campaigns”, including the SPS experiments, HERA experiments, B factories, LEP experiments, and LHC experiments. This is, of course, a very ambitious goal, and one which has been ongoing for over a year at this point. See other posts tagged with aDetector.

### Challenges

Challenge: A sophisticated model for the detector was needed.

Solution: The detector is split up by subdetector, with each subdetector having its own characteristic responses to different particles.
The detector is split up in cylindrical coordinates, $$(\rho,\eta,\phi)$$, with each subdetector also being split into modules. Individual modules then react to the particles for reconstruction purposes. Thus with a few key parameters even a sophisticated model can be stored in a few variables that can be tuned quickly and easily. (Resolved.)

Challenge: The detector should have a three dimensional view that the user can control.

Solution: The detector is drawn using a wireframe with transparent panels. This is a method I developed in 2009 for a now defunct PHP-generated, SVG-based visualisation of the BaBar electromagnetic calorimeter, which I used to show the absorbed dose as a function of detector region and time. The drawing algorithm is not perfect, as panels are drawn in order from furthest from the user to closest. This is sufficient for most purposes, but occasionally panels will intersect causing strange artefacts. Eventually this should be replaced with a much more stable, robust, and fast implementation, such as in three.js. (Resolved, to be revisited.)

Challenge: Particles should be modelled realistically and physically correctly.

Solution: Particles are modelled with the most important parameters (mass, charge, decay modes etc) taken from the PDG. Their kinematic properties are modelled using special four-vector classes, and decay products “inherit” from their parents in a consistent manner. Particles at the highest layer of the tree are assigned their four momenta, and then their decay products are decayed, inheriting the boost and production vertex from their parents. This is repeated recursively until all unstable particles are decayed. So far this does not take spin into account, as everything is decayed using a phase space model. Particles with large widths have their lineshape modelled using a Breit-Wigner shape. As a result, particles have realistic-looking jets and four-momentum is conserved.
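As an aside on the lineshape modelling: sampling a mass from a non-relativistic Breit-Wigner is cheap, because the Breit-Wigner is a Cauchy distribution whose CDF inverts in closed form. Here is a JavaScript sketch of that idea (my own illustration, not the actual project code; the function name and the example numbers are mine):

```javascript
// Sample a resonance mass (in GeV) from a non-relativistic
// Breit-Wigner lineshape by inverting the Cauchy CDF.
// m0 = pole mass, gamma = full width. Resamples unphysical masses.
function sampleBreitWigner(m0, gamma) {
  let m;
  do {
    const u = Math.random(); // uniform in [0, 1)
    m = m0 + 0.5 * gamma * Math.tan(Math.PI * (u - 0.5));
  } while (m <= 0); // Cauchy tails are heavy; reject m <= 0
  return m;
}
```

Because the tails of the Cauchy distribution are so heavy, it is worth truncating unphysical masses (or clamping to the phase space actually available) before handing the sampled mass to a phase space decay.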
This required the development of special libraries to handle these decays and kinematic constraints. (Resolved, to be revisited.)

Challenge: Particle decay trees must be traversed consistently.

Solution: This is harder than it sounds! Every time an event is generated, particles are recursively decayed for as long as phase space allows. The particles must then be traversed and displayed in the table, in a consistent manner. Ensuring that all particles are listed hierarchically without missing anything out takes some care. (Resolved.)

Challenge: Particle lists had to be prepared for the user.

Solution: The user has access to a handful of “building blocks” to play with. These are taken from the (generated) list of particles per event and filtered by the particle type. Further lists can be created or filtered from these lists, and parent particles can be reconstructed by combining lists. This means having to provide special classes to handle the particles and ensure that no particles are reconstructed recursively (leading to infinite loops). Apart from using reconstructed particles instead of generated particles, this has been resolved. (Resolved, to be revisited.)

Challenge: The user needs to be able to make histograms.

Solution: I had made histograms for other projects, including the Reflections and Box plotter projects, so this was mostly fairly easy to do. Even so, being able to create new histograms of arbitrary scales and variables on the fly meant that this histogram class had to be more robust than previous projects. (Resolved.)

# Marble Hornets

Marble Hornets is a web series that currently spans the course of about five years. The premise of the series is that a student film maker started filming an amateur movie in 2006, but abandoned the film due to “unworkable conditions on set”. It quickly turns into a horror series that brings with it some novel techniques.
However, what really inspired me to make this project was the asynchronous storytelling narrative, which requires the viewer to piece together the true chronology based on the context of the videos. I’m an active participant on one of the busiest discussion boards for this series and regularly update this project as each new video is released.

### Overview

So far, the Marble Hornets project has two main aspects to it. The initial work began with the Marble Hornets player, a collection of youtube videos that are manipulated by the youtube JS API. The user can autoplay all the videos in the series, filter based on many parameters, and even create their own playlists. The player is made so that the user can autoplay everything in chronological order in full screen mode. The data was originally stored in Javascript files, but this has since been moved to XML files to make maintenance and data entry easier and more portable. After creating the player I added a lot of further information including links to the wikia pages, youtube videos and forum threads for each video, as well as the twitter feed, a real world map of filming locations and other interesting links, turning the player into a hub of information. A previous version of the player was adapted to make the my_dads_tapes player, although I lost interest in that series and stopped updating that player. At some point I intend to automate some of the player manipulation so that a user can create their own player for any youtube account, which would automatically source the relevant information and download the thumbnails. The second aspect of the project is more interesting and challenging, and it is the automated creation of infographic images to help clarify the information known about the series. These files are shared with the community on the forums, and help users discuss the series. Videos are split into scenes, which contain characters and items.
The scenes are sorted chronologically (although in many cases the chronological ordering is ambiguous, and the consensus opinion is usually taken in these cases) and then the characters and items are represented by arrows which flow from scene to scene. The scenes are arranged in columns to give the location of the characters, or the owners of the items. Users can create and enter their own XML files to make their own infographics, and automatically dump XML snippets by clicking on the scenes they wish to select. The users can filter by characters, camerapersons, items, and seasons. The scenes are colour coded to represent the sources of the footage.

### Challenges

Challenge: The web series is told out of order, so one of the biggest problems to solve was sorting the scenes in order, when the order was sometimes ambiguous.

Solution: This was solved by following the fan-made “comprehensive timeline” when in doubt, and sorting the scenes by date, and in the case of multiple scenes per day, by time. The scenes are assigned timestamps, and the videos are assigned release dates. With this in place, the scenes and videos can then be sorted quickly and easily. (Resolved)

Challenge: The data has to be stored in an organised manner.

Solution: Initially this was solved by declaring objects directly. In order to make the data more portable and for other users to contribute, I wrote and parsed xml files, so that nearly all the data is stored semantically. One of the infographics keeps track of the exchange of possessions between characters, and this data is not yet accounted for in the xml files. (Resolved, to be revisited)

Challenge: This project required extensive use of the youtube js API.

Solution: Thanks to this project I can now queue up videos, make them autoplay, and use a timer to move between sections of video. (Resolved)

Challenge: The video stills had to respond to the user’s actions.
Solution: The video player in this project allows the user to mouseover the video stills, with dynamic changes of style as the user does so, making it more aesthetically pleasing. This had to be responsive and with little overhead, and the result is rather pleasing. (Resolved)

Challenge: The timeline has to be laid out properly on the page.

Solution: This means that each element must be given enough space to display all the text and images, and then arranged to give the character arrows sufficient space. This has been achieved using a text wrapping function, and parsing the lists of objects multiple times to ensure sufficient spacing. Editing this code is not a pleasant experience though; it could certainly benefit from some refactoring. (Resolved, to be revisited)

Challenge: The player should allow the user to create a custom playlist.

Solution: The user can create a playlist, filtering by characters etc and choosing to play in release order or chronological order. The player also allows the user to watch all scenes in order. (Resolved)

Challenge: There have been many, many challenges with this project, so I may add more as they occur to me!

Solution: (Resolved)
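The chronological sort described in the challenges above is conceptually simple once every scene carries a date and a time-of-day. A sketch in JavaScript (the `{date, time}` scene shape and function name are hypothetical, not the project's actual data model; ISO-format strings are used so that lexicographic comparison matches chronological order):

```javascript
// Sort scenes chronologically: primarily by date, and for multiple
// scenes on the same day, by time-of-day. Returns a new array.
function sortScenes(scenes) {
  return [...scenes].sort((a, b) =>
    a.date === b.date
      ? a.time.localeCompare(b.time)   // same day: order by time
      : a.date.localeCompare(b.date)); // otherwise: order by date
}
```

Using ISO-style strings ("YYYY-MM-DD" and "HH:MM") is the key design choice here: string comparison and chronological comparison coincide, so no date parsing is needed at sort time.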
http://15462.courses.cs.cmu.edu/fall2017/lecture/drawingatriangle/slide_051
Slide 51 of 73

Tee-Dawg: Could someone explain why our signals are not always band-limited?

joel: The triangle in the hint stands out because of its sharp and clearly defined edges. That is, the edges of the triangle are formed by a discontinuous change in the image intensity from white to red. This discontinuous, abrupt change in intensity corresponds to high frequencies (since a high frequency is, after all, a high rate of change). In particular, signals with such discontinuities have an infinite frequency spectrum. Hence, they are not band-limited (or constrained to a band of low frequencies).
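joel's point about the infinite spectrum can be checked numerically. Here's a rough sketch (my own example, not from the course) comparing the discrete Fourier coefficients of a hard edge against a genuinely band-limited sinusoid, using a plain-Python DFT to stay dependency-free:

```python
import cmath
import math

N = 256

def dft_mag(x):
    # Magnitudes of the discrete Fourier coefficients of a length-N signal.
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N
            for k in range(N)]

# A hard edge, like the triangle's boundary: intensity jumps from 0 to 1.
step = [0.0] * (N // 2) + [1.0] * (N // 2)
# A smooth, genuinely band-limited signal: one sinusoid at frequency 3.
wave = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]

step_mag = dft_mag(step)
wave_mag = dft_mag(wave)

# The sinusoid's spectrum is (numerically) zero away from its one frequency,
# but the edge's spectrum decays only like 1/k and never cuts off.
print(max(wave_mag[10:N // 2]))
print(step_mag[1], step_mag[31], step_mag[101])
```

The sinusoid's coefficients vanish past its single frequency, while the step's coefficients shrink slowly and are still nonzero at arbitrarily high frequencies — which is exactly why no finite sampling rate captures the edge perfectly.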
https://nebusresearch.wordpress.com/tag/ginger-meggs/
## Reading the Comics, November 5, 2018: November 5, 2018 Edition This past week included one of those odd days that’s so busy I get a column’s worth of topics from a single day’s reading. And there was another strip (the Cow and Boy rerun) which I might have roped in had the rest of the week been dead. The Motley rerun might have made the cut too, for a reference to $E = mc^2$. Jason Chatfield’s Ginger Meggs for the 5th is a joke about resisting the story problem. I’m surprised by the particulars of this question. Turning an arithmetic problem into counts of some number of particular things is common enough and has a respectable history. But slices of broccoli quiche? I’m distracted by the choice, and I like quiche. It’s a weird thing for a kid to have, and a weird amount for anybody to have. JC Duffy’s Lug Nuts for the 5th uses mathematics as a shorthand for intelligence. And it particularly uses π as shorthand for mathematics. There’s a lot of compressed concepts put into this. I shouldn’t be surprised if it’s rerun come mid-March. Tom Toles’s Randolph Itch, 2 am for the 5th I’ve highlighted before. It’s the pie chart joke. It will never stop amusing me, but I suppose I should take Randolph Itch, 2 am out of my rotation of comics I read to include here. Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 5th is a logic puzzle joke. And a set theory joke. Dad is trying to argue he can’t be surprised by his gift because it’ll belong to one of two sets of things. And he receives nothing. This ought to defy his expectations, if we think of “nothing” as being “the empty set”. The empty set is an indispensable part of set theory. It’s a set that has no elements, has nothing in it. Then suppose we talk about what it means for one set to be contained in another. Take what seems like an uncontroversial definition: set A is contained in set B if there’s nothing in A which is not also in B. Then the empty set is contained inside every set. 
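That vacuous-containment argument is easy to poke at in code. A quick sketch (my illustration, not anything from the strip), using Python's built-in sets:

```python
# "Set A is contained in set B if there's nothing in A which is not also in B."
def contained_in(a, b):
    return all(x in b for x in a)

socks = {"wool socks", "argyle socks"}
not_socks = {"tie", "mug"}
empty = set()

# The empty set has no elements, so the condition holds vacuously:
print(contained_in(empty, socks))       # True
print(contained_in(empty, not_socks))   # True
# Python's built-in subset test agrees:
print(empty <= socks and empty <= not_socks)  # True
```

So "nothing" is a subset of the socks and of the not-socks alike, which is the set-theoretic reading the joke leans on.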
So Dad, having supposed that he can’t be surprised, since he’d receive either something that is “socks” or something that is “not-socks”, does get surprised. He gets the one thing that is both “socks” and “not-socks” simultaneously. I hate to pull this move a third time in one week (see here and here), but the logic of the joke doesn’t work for me. I’ll go along with “nothing” as being “the empty set” for these purposes. And I’ll accept that “nothing” is definitely “not-socks”. But to say that “nothing” is also “socks” is … weird, unless you are putting it in the language of set theory. I think the joke would be saved if it were more clearly established that Dad should be expecting some definite thing, so that no-thing would defy all expectations. “Nothing” is a difficult subject to treat logically. I have been exposed a bit to the thinking of professional philosophers on the subject. Not enough that I feel I could say something non-stupid about the subject. But enough to say that yeah, they’re right, we have a really hard time describing “nothing”. The null set is better behaved. I suppose that’s because logicians have been able to tame it and give it some clearly defined properties. Mike Shiell’s The Wandering Melon for the 5th felt like a rerun to me. It wasn’t. But Shiell did do a variation on this joke in August. Both are built on the same whimsy of probability. It’s unlikely one will win a lottery. It’s unlikely one will die in a particular and bizarre way. What are the odds someone would have both things happen to them? This and every Reading the Comics post should be at this link. Essays that include Ginger Meggs are at this link. Essays in which I discuss Lug Nuts are at this link. Essays mentioning Randolph Itch, 2 am, should be at this link. The many essays with a mention of Saturday Morning Breakfast Cereal are at this link. And essays where I’m inspired by something in The Wandering Melon should be at this link. 
And, what the heck, when I really discuss Cow and Boy it’s at this link. Real discussions of Motley are at this link. And my Fall 2018 Mathematics A-To-Z averages two new posts a week, now and through December. Thanks again for reading.

## Reading the Comics, August 3, 2018: Negative Temperatures Edition

So I’m going to have a third Reading the Comics essay for last week’s strips. This happens sometimes. Two of the four strips for this essay mention percentages. But one of the others is so important to me that it gets naming rights for the essay. You’ll understand when I’m done. I hope.

Angie Bailey’s Texts From Mittens for the 2nd talks about percentages. That’s a corner of arithmetic that many people find frightening and unwelcoming. I’m tickled that Mittens doesn’t understand how easy it is to work out a percentage of 100. It’s a good, reasonable bit of characterization for a cat.

John Graziano’s Ripley’s Believe It Or Not for the 2nd is about a subject close to my heart. At least a third of it is. The mention of negative Kelvin temperatures set off a … heated … debate on the comments thread at GoComics.com. Quite a few people remember learning in school about the Kelvin temperature scale: it starts with the coldest possible temperature, which is zero, and that’s that. They have taken this to denounce Graziano as writing obvious nonsense.

Well. Something you should know about anything you learned in school: the reality is more complicated than that. This is true for thermodynamics. This is true for mathematics. This is true for anything interesting enough for humans to study. This also applies to stuff you learned as an undergraduate. Also to grad school.

So what are negative temperatures? At least on an absolute temperature scale, where the answer isn’t an obvious and boring “cold”? One clue is in the word “absolute” there. It means a way of measuring temperature that’s in some way independent of how we do the measurement.
In ordinary life we measure temperatures with physical phenomena. Fluids that expand or contract as their temperature changes. Metals that expand or contract as their temperatures change. For special cases like blast furnaces, sample slugs of clays that harden or don’t at temperature. Observing the radiation of light off a thing. And these are all fine, useful in their domains. They’re also bound in particular physical experiments, though. Is there a definition of temperature that … you know … we can do mathematically? Of course, or I wouldn’t be writing this. There are two mathematical-physics components to give us temperature. One is the internal energy of your system. This is the energy of whatever your thing is, less the gravitational or potential energy that reflects where it happens to be sitting. Also minus the kinetic energy that comes of the whole system moving in whatever way you like. That is, the energy you’d see if that thing were in an otherwise empty universe. The second part is — OK, this will confuse people. It’s the entropy. Which is not a word for “stuff gets broken”. Not in this context. The entropy of a system describes how many distinct ways there are for a system to arrange its energy. Low-entropy systems have only a few ways to put things. High-entropy systems have a lot of ways to put things. This does harmonize with the pop-culture idea of entropy. There are many ways for a room to be messy. There are few ways for it to be clean. And it’s so easy to make a room messier and hard to make it tidier. We say entropy tends to increase. So. A mathematical physicist bases “temperature” on the internal energy and the entropy. Imagine giving a system a tiny bit more energy. How many more ways would the system be able to arrange itself with that extra energy? That gives us the temperature. (To be precise, it gives us the reciprocal of the temperature. 
We could set this up as how a small change in entropy affects the internal energy, and get temperature right away. But I have an easier time thinking of going from change-in-energy to change-in-entropy than the other way around. And this is my blog so I get to choose how I set things up.) This definition sounds bizarre. But it works brilliantly. It’s all nice clean mathematics. It matches perfectly nice easy-to-work-out cases, too. Like, you may kind of remember from high school physics how the temperature of a gas is something something average kinetic energy something. Work out the entropy and the internal energy of an ideal gas. Guess what this change-in-entropy/change-in-internal-energy thing gives you? Exactly something something average kinetic energy something. It’s brilliant. In ordinary stuff, adding a little more internal energy to a system opens up new ways to arrange that energy. It always increases the entropy. So the absolute temperature, from this definition, is always positive. Good stuff. Matches our intuition well. So in 1956 Dr Norman Ramsey and Dr Martin Klein published some interesting papers in the Physical Review. (Here’s a link to Ramsey’s paper and here’s Klein’s, if you can get someone else to pay for your access.) Their insightful question: what happens if a physical system has a maximum internal energy? If there’s some way of arranging the things in your system so that no more energy can come in? What if you’re close to but not at that maximum? It depends on details, yes. But consider this setup: there’s one, or only a handful, of ways to arrange the maximum possible internal energy. There’s some more ways to arrange nearly-the-maximum-possible internal energy. There’s even more ways to arrange not-quite-nearly-the-maximum-possible internal energy. Look at what that implies, though. If you’re near the maximum-possible internal energy, then adding a tiny bit of energy reduces the entropy. 
There’s fewer ways to arrange that greater bit of energy. Greater internal energy, reduced entropy. This implies the temperature is negative. So we have to allow the idea of negative temperatures. Or we have to throw out this statistical-mechanics-based definition of temperature. And the definition works so well otherwise. Nobody’s got an idea nearly as good for it. So mathematical physicists shrugged, and noted this as a possibility, but mostly ignored it for decades. If it got mentioned, it was because the instructor was showing off a neat weird thing. This is how I encountered it, as a young physics major full of confidence and not at all good on wedge products. But it was sitting right there, in my textbook, Kittel and Kroemer’s Thermal Physics. Appendix E, four brisk pages before the index. Still, it was an enchanting piece. And a useful one, possibly the most useful four-page aside I encountered as an undergraduate. My thesis research simulated a fluid-equilibrium problem run at different temperatures. There was a natural way that this fluid would have a maximum possible internal energy. So, a good part — the most fascinating part — of my research was in the world of negative temperatures. It’s a strange one, one where entropy seems to work in reverse. Things build, spontaneously. More heat, more energy, makes them build faster. In simulation, a shell of viscosity-free gas turned into what looked for all the world like a solid shell. All right, but you can simulate anything on a computer, or in equations, as I did. Would this ever happen in reality? … And yes, in some ways. Internal energy and entropy are ideas that have natural, irresistible fits in information theory. This is the study of … information. I mean, how you send a signal and how you receive a signal. It turns out a lot of laser physics has, in information theory terms, behavior that’s negative-temperature. And, all right, but that’s not what anybody thinks of as temperature. 
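The entropy-versus-energy argument can be made concrete with a toy model. This sketch (my own illustration, with made-up numbers) uses N two-state spins, where entropy is the logarithm of the number of arrangements and 1/T is the change in entropy per unit of added energy:

```python
import math

N = 50  # two-state spins; each excited spin adds one unit of internal energy

def entropy(n_excited):
    # S = log of the number of ways to pick which spins are excited.
    return math.log(math.comb(N, n_excited))

def inv_temperature(n_excited):
    # 1/T is the change in entropy per unit of added energy,
    # here approximated by a finite difference.
    return entropy(n_excited + 1) - entropy(n_excited)

# Far from the maximum energy, more energy means more arrangements: T > 0.
print(inv_temperature(5))
# Near the maximum energy, more energy means fewer arrangements: T < 0.
print(inv_temperature(45))
```

Past the halfway point, where most spins are already excited, adding energy shrinks the count of arrangements, and the sign of 1/T flips — the toy version of the negative-temperature regime.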
Well, these ideas happen still. They usually need some kind of special constraint on the things. Atoms held in a magnetic field so that their motions are constrained. Vortices locked into place on a two-dimensional surface (a prerequisite to my little fluids problems). Atoms bound into a lattice that keeps them from being able to fly free. All weird stuff, yes. But all exactly as the statistical-mechanics temperature idea calls on. And notice. These negative temperatures happen only when the energy is extremely high. This is the grounds for saying that they’re hotter than positive temperatures. And good reason, too. Getting into what heat is, as opposed to temperature, is an even longer discussion. But it seems fair to say something with a huge internal energy has more heat than something with slight internal energy. So Graziano’s Ripley’s claim is right. (GoComics.com commenters, struggling valiantly, have tried to talk about quantum mechanics stuff and made a hash of it. As a general rule, skip any pop-physics explanation of something being quantum mechanics.) If you’re interested in more about this, I recommend Stephen J Blundell and Katherine M Blundell’s Concepts in Thermal Physics. Even if you’re not comfortable enough in calculus to follow the derivations, the textbook prose is insightful. John Hambrock’s The Brilliant Mind of Edison Lee for the 3rd is a probability joke. And it’s built on how impossible putting together a particular huge complicated structure can be. I admit I’m not sure how I’d go about calculating the chance of a heap of Legos producing a giraffe shape. Imagine working out the number of ways Legos might fall together. Imagine working out how many of those could be called giraffe shapes. It seems too great a workload. And figuring it by experiment, shuffling Legos until a giraffe pops out, doesn’t seem much better. This approaches an argument sometimes raised about the origins of life. 
Grant there’s no chance that a pile of Legos could be dropped together to make a giraffe shape. How can the much bigger pile of chemical elements have been stirred together to make an actual giraffe? Or, the same problem in another guise. If a monkey could go at a typewriter forever without typing any of Shakespeare’s plays, how did a chain of monkeys get to writing all of them? And there’s a couple of explanations. At least partial explanations. There is much we don’t understand about the origins of life. But one is that the universe is huge. There’s lots of stars. It looks like most stars have planets. There’s lots of chances for chemicals to mix together and form a biochemistry. Even an impossibly unlikely thing will happen, given enough chances. And another part is selection. A pile of Legos thrown into a pile can do pretty much anything. Any piece will fit into any other piece in a variety of ways. A pile of chemicals are more constrained in what they can do. Hydrogen, oxygen, and a bit of activation energy can make hydrogen-plus-hydroxide ions, water, or hydrogen peroxide, and that’s it. There can be a lot of ways to arrange things. Proteins are chains of amino acids. These chains can be about as long as you like. (It seems.) (I suppose there must be some limit.) And they curl over and fold up in some of the most complicated mathematical problems anyone can even imagine doing. How hard is it to find a set of chemicals that are a biochemistry? … That’s hard to say. There are about twenty amino acids used for proteins in our life. It seems like there could be a plausible life with eighteen amino acids, or 24, including a couple we don’t use here. It seems plausible, though, that my father could have had two brothers growing up; if there were, would I exist? Jason Chatfield’s Ginger Meggs for the 3rd is a story-problem joke. Familiar old form to one. The question seems to be a bit mangled in the asking, though. 
Thirty percent of Jonson’s twelve apples is a nasty fractional number of apples. Surely the question should have given Jonson ten and Fitzclown twelve apples. Then thirty percent of Jonson’s apples would be a nice whole number.

I talk about mathematics themes in comic strips often, and those essays are gathered at this link. You might enjoy more of them. If Texts From Mittens gets on-topic for me again I’ll have an essay about it at this link. (It’s a new tag, and a new comic, at least at GoComics.com.) Other discussions of Ripley’s Believe It Or Not strips are at this link and probably aren’t all mentions of Rubik’s Cubes. The Brilliant Mind of Edison Lee appears in essays at this link. And other appearances of Ginger Meggs are at this link.

And so yeah, that one Star Trek: The Next Generation episode where they say the surface temperature is like negative 300 degrees Celsius, and therefore below absolute zero? I’m willing to write that off as it’s an incredibly high-energy atmosphere that’s fallen into negative (absolute) temperatures. Makes the place more exotic and weird. They need more of that.

## Reading the Comics, May 18, 2018: Quincy Doesn’t Make The Cut Edition

I hate to disillusion anyone but I lack hard rules about what qualifies as a mathematically-themed comic strip. During a slow week, more marginal stuff makes it. This past week was going slow enough that I tagged Wednesday’s Quincy rerun, from March of 1979, for possible inclusion. And all it does is mention that Quincy’s got a mathematics test due. Fortunately for me the week picked up a little. It cheats me of an excuse to point out Ted Shearer’s art style to people, but that’s not really my blog’s business.

Also it may not surprise you but since I’ve decided I need to include GoComics images I’ve gotten more restrictive. Somehow the bit of work it takes to think of a caption and to describe the text and images of a comic strip feels like that much extra work.
Roy Schneider’s The Humble Stumble for the 13th of May is a logic/geometry puzzle. Is it relevant enough for here? Well, I spent some time working it out. And some time wondering about implicit instructions. Like, if the challenge is to have exactly four equally-sized boxes after two toothpicks are moved, can we have extra stuff? Can we put a toothpick where it’s just a stray edge, part of no particular shape? I can’t speak to how long you stay interested in this sort of puzzle. But you can have some good fun rules-lawyering it. Jeff Harris’s Shortcuts for the 13th is a children’s informational feature about Aristotle. Aristotle is renowned for his mathematical accomplishments by many people who’ve got him mixed up with Archimedes. Aristotle it’s harder to say much about. He did write great texts that pop-science writers credit as giving us the great ideas about nature and physics and chemistry that the Enlightenment was able to correct in only about 175 years of trying. His mathematics is harder to summarize though. We can say certainly that he knew some mathematics. And that he encouraged thinking of subjects as built on logical deductions from axioms and definitions. So there is that influence. Dan Thompson’s Brevity for the 15th is a pun, built on the bell curve. This is also known as the Gaussian distribution or the normal distribution. It turns up everywhere. If you plot how likely a particular value is to turn up, you get a shape that looks like a slightly melted bell. In principle the bell curve stretches out infinitely far. In practice, the curve turns into a horizontal line so close to zero you can’t see the difference once you’re not-too-far away from the peak. Jason Chatfield’s Ginger Meggs for the 16th I assume takes place in a mathematics class. I’m assuming the question is adding together four two-digit numbers. But “what are 26, 24, 33, and 32” seems like it should be open to other interpretations. 
Perhaps Mr Canehard was asking for some class of numbers those all fit into. Integers, obviously. Counting numbers. Composite numbers rather than primes. I keep wanting to say there’s something deeper, like they’re all multiples of three (or something), but they aren’t. They haven’t got any factors other than 1 in common. I mention this because I’d love to figure out what interesting commonality those numbers have and which I’m overlooking.

Ed Stein’s Freshly Squeezed for the 17th is a story problem strip. Bit of a passive-aggressive one, in-universe. But I understand why it would be formed like that. The problem’s incomplete, as stated. There could be some fun in figuring out what extra bits of information one would need to give an answer. This is another new-tagged comic.

Henry Scarpelli and Craig Boldman’s Archie for the 19th name-drops calculus, credibly, as something high schoolers would be amazed to see one of their own do in their heads. There’s not anything on the blackboard that’s iconically calculus, it happens. Dilton’s writing out a polynomial, more or less, and that’s a fit subject for high school calculus. They’re good examples on which to learn differentiation and integration. They’re a little more complicated than straight lines, but not too weird or abstract. And they follow nice, easy-to-summarize rules. But they turn up in high school algebra too, and can fit into geometry easily. Or any subject, really, as remember, everything is polynomials.

Mark Anderson’s Andertoons for the 19th is Mark Anderson’s Andertoons for the week. Glad that it’s there. Let me explain why it is proper construction of a joke that a Fibonacci Division might be represented with a spiral. Fibonacci’s the name we give to Leonardo of Pisa, who lived in the first half of the 13th century. He’s most important for explaining to the western world why these Hindu-Arabic numerals were worth learning.
But his pop-cultural presence owes to the Fibonacci Sequence, the sequence of numbers 1, 1, 2, 3, 5, 8, and so on. Each number’s the sum of the two before it. And this connects to the Golden Ratio, one of pop mathematics’ most popular humbugs. As the terms get bigger and bigger, the ratio between a term and the one before it gets really close to the Golden Ratio, a bit over 1.618.

So. Draw a quarter-circle that connects the opposite corners of a 1×1 square. Connect that to a quarter-circle that connects opposite corners of a 2×2 square. Connect that to a quarter-circle connecting opposite corners of a 3×3 square. And a 5×5 square, and an 8×8 square, and a 13×13 square, and a 21×21 square, and so on. Yes, there are ambiguities in the way I’ve described this. I’ve tried explaining how to do things just right. It makes a heap of boring words and I’m trying to reduce how many of those I write. But if you do it the way I want, guess what shape you have? And that is why this is a correctly-formed joke about the Fibonacci Division.

## Reading the Comics, February 3, 2018: Overworked Edition

And this should clear out last week’s mathematically-themed comic strips. I didn’t realize just how busy last week had been until I looked at what I thought was a backlog of just two days’ worth of strips and it turned out to be about two thousand comics. I exaggerate, but as ever, not by much. This current week seems to be at a more relaxed pace. So I’ll have to think of something to write for the Tuesday and Thursday slots. Hm. (I’ll be all right. I’ve got one thing I need to stop bluffing about and write, and there’s usually a fair roundup of interesting tweets or articles I’ve seen that I can write. Those are often the most popular articles around here.)

Hilary Price and Rina Piccolo’s Rhymes with Orange for the 1st of February, 2018 gives us an anthropomorphic geometric figures joke for the week.
Also a side of these figures that I don’t think I’ve seen in the newspaper comics before. It kind of raises further questions. Jason Chatfield’s Ginger Meggs for the 1st just mentions that it’s a mathematics test. Ginger isn’t ready for it. Mark Tatulli’s Heart of the City rerun for the 1st finally has some specific mathematics mentioned in Heart’s efforts to avoid a mathematics tutor. The bit about the sum of adjacent angles forming a right line being 180 degrees is an important one. A great number of proofs rely on it. I can’t deny the bare fact seems dull, though. I know offhand, for example, that this bit about adjacent angles comes in handy in proving that the interior angles of a triangle add up to 180 degrees. At least for Euclidean geometry. And there are non-Euclidean geometries that are interesting and important and for which that’s not true. Which inspires the question: on a non-Euclidean surface, like say the surface of the Earth, is it that adjacent angles don’t add up to 180 degrees? Or does something else in the proof of a triangle’s interior angles adding up to 180 degrees go wrong? The Eric the Circle rerun for the 2nd, by JohnG, is one of the occasional Erics that talk about π and so get to be considered on-topic here. Bill Whitehead’s Free Range for the 2nd features the classic page full of equations to demonstrate some hard mathematical work. And it is the sort of subject that is done mathematically. The equations don’t look to me anything like what you’d use for asteroid orbit projections. I’d expect forecasting just where an asteroid might hit the Earth to be done partly by analytic formulas that could be done on a blackboard. And then made precise by a numerical estimate. The advantage of the numerical estimate is that stuff like how air resistance affects the path of something in flight is hard to deal with analytically. Numerically, it’s tedious, but we can let the computer deal with the tedium. 
So there’d be just a boring old computer screen to show on-panel.

Bud Fisher’s Mutt and Jeff reprint for the 2nd is a little baffling. And not really mathematical. It’s just got a bizarre arithmetic error in it. Mutt’s fiancee Encee wants earrings that cost ten dollars (each?) and Mutt takes this to be fifty dollars in earring costs, and I have no idea what happened there. Thomas K Dye, the web cartoonist who’s done artwork for various article series, has pointed out that the lettering on these strips has been redone with a computer font. (Look at the letters ‘S’; once you see it, you’ll also notice it in the slightly lumpy ‘O’ and the curly-arrow ‘G’ shapes.) So maybe in the transcription the earring cost got garbled? And then not a single person reading the finished product read it over and thought about what they were doing? I don’t know.

Zach Weinersmith’s Saturday Morning Breakfast Cereal reprint for the 2nd is based, as his efforts to get my attention often are, on a real mathematical physics postulate. As the woman postulates: given a deterministic universe, with known positions and momentums of every particle, and known forces for how all these interact, it seems like it should be possible to predict the future perfectly. It would also be possible to “retrodict” the past. All the laws of physics that we know are symmetric in time; there’s no reason you can’t predict the motion of something one second into the past just as well as you can one second into the future.

This fascinating observation took a lot of battering in the 19th century. Many physical phenomena are better described by statistical laws, particularly in thermodynamics, the flow of heat. In these it’s often possible to predict the future well but to retrodict the past not at all. But that looks as though it’s a matter of computing power. We resort to a statistical understanding of, say, the rings of Saturn because it’s too hard to track the billions of positions and momentums we’d need to otherwise.
A sufficiently powerful mathematician, for example God, would be able to do that. Fair enough. Then came the 1890s. Henri Poincaré discovered something terrifying about deterministic systems. It’s possible to have chaos. A mathematical representation of a system is a bit different from the original system. There’s some unavoidable error. That’s bound to make some, larger, error in any prediction of its future. For simple enough systems, this is okay. We can make a projection with an error as small as we need, at the cost of knowing the current state of affairs with enough detail. Poincaré found that some systems can be chaotic, though, ones in which any error between the current system and its representation will grow to make the projection useless. (At least for some starting conditions.) And so many interesting systems are chaotic. Incredibly simplified models of the weather are chaotic; surely the actual thing is. This implies that God’s projection of the universe would be an amusing but almost instantly meaningless toy. At least unless it were a duplicate of the universe. In which case we have to start asking our philosopher friends about the nature of identity and what a universe is, exactly. Ruben Bolling’s Super-Fun-Pak Comix for the 2nd is an installment of Guy Walks Into A Bar featuring what looks like an arithmetic problem to start. It takes a turn into base-ten jokes. There are times I suspect Ruben Bolling to be a bit of a nerd. Nate Fakes’s Break of Day for the 3rd looks like it’s trying to be an anthropomorphic-numerals joke. At least it’s an anthropomorphic something joke. Percy Crosby’s Skippy for the 3rd originally ran the 8th of December, 1930. It alludes to one of those classic probability questions: what’s the chance that in your lungs is one of the molecules exhaled by Julius Caesar in his dying gasp? 
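That classic question is quick to estimate. A rough sketch with order-of-magnitude figures (the molecule counts are round numbers I'm assuming, not precise values):

```python
import math

# Order-of-magnitude figures only (assumptions, not measured values):
breath = 2.2e22        # molecules in roughly a liter of exhaled air
atmosphere = 1.1e44    # molecules of air in the whole atmosphere

# Chance that none of the ~breath molecules now in your lungs was among
# the ~breath molecules of the historic exhalation, assuming perfect mixing:
p_none = math.exp(breath * math.log1p(-breath / atmosphere))
print(1 - p_none)      # chance of at least one shared molecule
```

With these figures the chance of sharing at least one molecule comes out close to certain, which is the "surprisingly high" answer the puzzle is famous for.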
Or whatever other event you want: the first breath you ever took, or something exhaled by Jesus during the Sermon on the Mount, or exhaled by Sue the T-Rex as she died. Whatever. The chance is always surprisingly high, which reflects the fact there’s a lot of molecules out there. This also reflects a confidence that we can say one molecule of air is “the same” as some molecule of air in a much earlier time. We have to make that supposition to have a problem we can treat mathematically. My understanding is chemists laugh at us if we try to suggest this seriously. Fair enough. But whether the air pumped out of a bicycle tire is ever the same as what’s pumped back in? That’s the same kind of problem. At least some of the molecules of air will be the same ones. Pretend “the same ones” makes sense. Please.

## Reading the Comics, January 9, 2018: Be Squared Edition

It wasn’t just another busy week from Comic Strip Master Command. And a week busy enough for me to split the mathematics comics into two essays. It was one where I recognized one of the panels as one I’d featured before. Multiple times. Some of the comics I feature are in perpetual reruns and don’t have your classic, deep, Peanuts-style decades of archives to draw from. I don’t usually go checking my archives to see if I’ve mentioned a comic before, not unless something about it stands out. So for me to notice I’ve seen this strip repeatedly can mean only one thing: there was something a little bit annoying about it. Recognize it yet? You will.

Hy Eisman’s Popeye for the 7th of January, 2018 is an odd place for mathematics to come in. J Wellington Wimpy regales Popeye with all the intellectual topics he tried to impress his first love with, and “Euclidean postulates in the original Greek” made the cut. And, fair enough. Euclid’s books are that rare thing that’s of important mathematical (or scientific) merit and that a lay person can just pick up and read, even for pleasure.
These days we’re more likely to see a division between mathematics writing that’s accessible but unimportant (you know, like, me) or that’s important but takes years of training to understand. Doing it in the original Greek is some arrogant showing-off, though. Can’t blame Carolyn for bailing on someone pulling that stunt.

Mark O’Hare’s Citizen Dog rerun for the 7th continues last essay’s storyline about Fergus taking Maggie’s place at school. He’s having trouble understanding the story within a story problem. I sympathize.

John Hambrock’s The Brilliant Mind of Edison Lee for the 8th is set in mathematics class. And Edison tries to use a pile of mathematically-tinged words to explain why it’s okay to read a Star Wars book instead of paying attention. Or at least to provide a response the teacher won’t answer. Maybe we can make something out of this by allowing the monetary value of something to be related to its relevance. But if we allow that then Edison’s messed up. I don’t know what quantity is measured by multiplying “every Star Wars book ever written” by “all the movies and merchandise”. But dividing that by the value of the franchise gets … some modest number in peculiar units divided by a large number of dollars. The number value is going to be small. And the dimensions are obviously crazy. Edison needs to pay better attention to the mathematics.

Johnny Hart’s B.C. for the 14th of July, 1960 shows off the famous equation of the 20th century. All part of the comic’s anachronism-comedy chic. The strip reran the 9th of January. “E = mc²” is, correctly, associated with Albert Einstein and some of his important publications of 1905. But the expression does have some curious precursors, people who had worked out the relationship (or something close to it) before Einstein and who didn’t quite know what they had.
A short piece from Scientific American a couple years back describes pre-Einstein expressions of the equation from Oliver Heaviside, Henri Poincaré, and Fritz Hasenöhrl. I’m not surprised Poincaré had something close to this; it seems like he spent twenty years almost discovering Relativity. That’s all right; he did enough in dynamical systems that mathematicians aren’t going to forget him.

Tim Lachowski’s Get A Life for the 9th is at least the fourth time I’ve seen this panel since I started doing Reading the Comics posts regularly. (Previous times: the 5th of November, 2012 and the 10th of March, 2015 and the 14th of July, 2016.) I’m like this close to concluding the strip’s in perpetual rerun and I can drop it from my daily reading.

Jason Chatfield’s Ginger Meggs for the 9th draws my eye just because the blackboard lists “Prime Numbers”. Fair enough place setting, although what’s listed are 1, 3, 5, and 7. These days mathematicians don’t tend to list 1 as a prime number; it’s inconvenient. (A lot of proofs depend on there being exactly one way to factorize a number. But you can always multiply a number by ‘1’ a couple more times without changing its value. So ‘6’ is 3 times 2, but it’s also 3 times 2 times 1, or 3 times 2 times 1 times 1, or 3 times 2 times 1 to the 145,388,434,247th power. You can write around that, but it’s easier to define ‘1’ as not a prime.) But it could be defended. I can’t think any reason to leave ‘2’ off a list of prime numbers, though. I think Chatfield conflated odd and prime numbers. If he’d had a bit more blackboard space we could’ve seen whether the next item was 9 or 11 and that would prove the matter.

Paul Trap’s Thatababy for the 9th uses arithmetic — square roots — as the kind of thing to test whether a computer’s working. Everyone has their little tests like this. My love’s father likes to test whether the computer knows of the band Walk The Moon or of Christine Korsgaard (a prominent philosopher in my love’s specialty).
I’ve got a couple words I like to check dictionaries for. Of course the test is only any good if you know what the answer should be, and what’s the actual square root of 3,278? Goodness knows. It’s got to be between 50 (50 squared is 25 hundred) and 60 (60 squared is 36 hundred). Since 3,278 is so much closer to 3,600 than to 2,500 its square root should be closer to 60 than to 50. So 57-point-something is plausible. Unfortunately square roots don’t lend themselves to the same sorts of tricks from reading the last digit that cube roots do. And 3,278 isn’t a perfect square anyway. Alexa is right on this one. Also about the specific gravity of cobalt, at least if Wikipedia is right and not conspiring with the artificial intelligences on this one. Catch you in 2021.

Charles Schulz’s Peanuts for the 8th of October, 1953, is about practical uses of mathematics. It got rerun on the 9th of January.

## Reading the Comics, December 2, 2017: Showing Intelligence Edition

November closed out with another of those weeks not quite busy enough to justify splitting into two. I blame Friday and Saturday. Nothing mathematically-themed was happening then. Suppose some days are just like that.

Johnny Hart’s Back To BC for the 26th is an example of using mathematical truths as profound statements. I’m not sure that I’d agree with just stating the Pythagorean Theorem as profound, though. It seems like a profound statement has to have some additional surprising, revelatory elements to it. Like, knowing the Pythagorean theorem is true means we can prove there’s exactly one line parallel to a given line and passing through some point. Who’d see that coming? I don’t blame Hart for not trying to fit all that into one panel, though. Too slow a joke. The strip originally ran the 4th of September, 1960.

Tom Toles’s Randolph Itch, 2 am rerun for the 26th is a cute little arithmetic-in-real-life panel. I suppose arithmetic-in-real-life. Well, I’m amused and stick around for the footer joke.
The strip originally ran the 24th of February, 2002.

Zach Weinersmith’s Saturday Morning Breakfast Cereal makes its first appearance for the week on the 26th. It’s an anthropomorphic-numerals joke and some wordplay. Interesting trivia about the whole numbers that never actually impresses people: a whole number is either a perfect square, like 1 or 4 or 9 or 16 are, or else its square root is irrational. There’s no whole number with a square root that’s, like, 7.745 or something. Maybe I just discuss it with people who’re too old. It seems like the sort of thing to reveal to a budding mathematician when she’s eight.

Saturday Morning Breakfast Cereal makes another appearance the 29th. The joke’s about using the Greek ε, which has a long heritage of use for “a small, positive number”. We use this all the time in analysis. A lot of proofs in analysis are done by using ε in a sort of trick. We want to show something is this value, but it’s too hard to do. Fine. Pick any ε, a positive number of unknown size. So then we’ll find something we can calculate, and show that the difference between the thing we want and the thing we can calculate is smaller than ε. Since ε was any positive number at all, the difference between what we want and what we can calculate is smaller than every positive number. And so the difference between them must be zero, and voila! We’ve proved what we wanted to prove.

I have always assumed that we use ε for this for the association with “error”, ideally “a tiny error”. If we need another tiny quantity we usually go to δ, probably because it’s close to ε and ‘d’ is still a letter close to ‘e’. (The next letter after ε is ζ, which carries other connotations with it and is harder to write than δ is.) Anyway, Weinersmith is just doing a ha-ha, your penis is small joke.

Samson’s Dark Side of the Horse for the 28th is a counting-sheep joke.
It maybe doesn’t belong here but I really, really like the art of the final panel and I want people to see it. Bud Grace’s Piranha Club for the 29th is, as with Back to BC, an attempt at showing intelligence through mathematics. There are some flaws in the system. Fun fact: since one million is a perfect square, Arnold could have answered within a single panel. (Also fun fact: I am completely unqualified to judge whether something is a “fun” fact.) Jason Chatfield’s Ginger Meggs for the 29th is Ginger subverting the teacher’s questions, like so many teacher-and-student jokes will do. Dan Thompson’s Brevity for the 30th is the anthropomorphic geometric figures joke for the week. There seems to be no Mark Anderson’s Andertoons for this week. There’ve been some great ones (like on the 26th or the 28th and the 29th) but they’re not at all mathematical. I apologize for the inconvenience and am launching an investigation into this problem. ## Reading the Comics, October 7, 2017: Rerun Comics Edition The most interesting mathematically-themed comic strips from last week were also reruns. So be it; at least I have an excuse to show a 1931-vintage comic. Also, after discovering my old theme didn’t show the category of essay I was posting, I did literally minutes of search for a new theme that did. And that showed tags. And that didn’t put a weird color behind LaTeX inline equations. So I’m using the same theme as my humor blog does, albeit with a different typeface, and we’ll hope that means I don’t post stuff to the wrong blog. As it is I start posting something to the wrong place about once every twenty times. All I want is a WordPress theme with all the good traits of the themes I look at and none of the drawbacks; why is that so hard to get? Elzie Segar’s Thimble Theatre rerun for the 5th originally ran the 25th of April, 1931. It’s just a joke about Popeye not being good at bookkeeping. 
In the story, Popeye took the $50,000 reward from his last adventure and opened a One-Way Bank, giving people whatever money they say they need. And now you understand how the first panel of the last row has several jokes in it. The strip is partly a joke about Popeye being better with stuff he can hit than anything else, of course. I wonder if there’s an old stereotype of sailors being bad at arithmetic. I remember reading about pirate crews in which, for example, not-as-canny-as-they-think sailors would demand a fortieth or a fiftieth of the prizes as their pay, instead of a mere thirtieth. But it’s so hard to tell what really happened and what’s just a story about the stupidity of people. Marginal? Maybe, but I’m a Popeye fan and this is my blog, so there.

Bill Rechin’s Crock rerun(?) from the 6th must have come before. I don’t know when. Anyway it’s a joke about mathematics being way above everybody’s head.

Norm Feuti’s Gil rerun for the 6th is a subverted word problem joke. And it’s a reminder of how hard story problems can be. You need something that has a mathematics question on point. And the question has to be framed as asking something someone would actually care to learn. Plus the story has to make sense. Much easier when you’re teaching calculus, I think.

Jason Chatfield’s Ginger Meggs for the 6th is a playing-stupid joke built in percentages. Cute enough for the time it takes to read.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 6th is a parent-can’t-help-with-homework joke, done with arithmetic since it’s hard to figure another subject that would make the joke possible. I suppose a spelling assignment could be made to work. But that would be hard to write so it didn’t seem contrived.

Thaves’ Frank and Ernest for the 7th feels like it’s a riff on the old saw about Plato’s Academy.
(The young royal sent home with a coin because he asked what the use of this instruction was, and since he must get something from everything, here’s his drachma.) Maybe. Or it’s just the joke that you make if you have “division” and “royals” in mind. Mark Tatulli’s Lio for the 7th is not quite the anthropomorphic symbols joke for this past week. It’s circling that territory, though. ## Reading the Comics, July 8, 2017: Mostly Just Pointing Edition Won’t lie: I was hoping for a busy week. While Comic Strip Master Command did send a healthy number of mathematically-themed comic strips, I can’t say they were a particularly deep set. Most of what I have to say is that here’s a comic strip that mentions mathematics. Well, you’re reading me for that, aren’t you? Maybe. Tell me if you’re not. I’m curious. Richard Thompson’s Cul de Sac rerun for the 2nd of July is the anthropomorphic numerals joke for the week. And a great one, as I’d expect of Thompson, since it also turns into a little bit about how to create characters. Ralph Dunagin and Dana Summers’s Middletons for the 2nd uses mathematics as the example of the course a kid might do lousy in. You never see this for Social Studies classes, do you? Mark Tatulli’s Heart of the City for the 3rd made the most overtly mathematical joke for most of the week at Math Camp. The strip hasn’t got to anything really annoying yet; it’s mostly been average summer-camp jokes. I admit I’ve been distracted trying to figure out if the minor characters are Tatulli redrawing Peanuts characters in his style. I mean, doesn’t Dana (the freckled girl in the third panel, here) look at least a bit like Peppermint Patty? I’ve also seen a Possible Marcie and a Possible Shermy, who’s the Peanuts character people draw when they want an obscure Peanuts character who isn’t 5. (5 is the Boba Fett of the Peanuts character set: an extremely minor one-joke character used for a week in 1963 but who appeared very occasionally in the background until 1983. 
You can identify him by the ‘5’ on his shirt. He and his sisters 3 and 4 are the ones doing the weird head-sideways dance in A Charlie Brown Christmas.) Mark Pett’s Lucky Cow rerun for the 4th is another use of mathematics, here algebra, as a default sort of homework assignment. Brant Parker and Johnny Hart’s Wizard of Id Classics for the 4th reruns the Wizard of Id for the 7th of July, 1967. It’s your typical calculation-error problem, this about the forecasting of eclipses. I admit the forecasting of eclipses is one of those bits of mathematics I’ve never understood, but I’ve never tried to understand either. I’ve just taken for granted that the Moon’s movements are too much tedious work to really enlighten me and maybe I should reevaluate that. Understanding when the Moon or the Sun could be expected to disappear was a major concern for people doing mathematics for centuries. Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 5th is a Special Relativity joke, which is plenty of mathematical content for me. I warned you it was a week of not particularly deep discussions. Ashleigh Brilliant’s Pot-Shots rerun for the 5th is a cute little metric system joke. And I’m going to go ahead and pretend that’s enough mathematical content. I’ve come to quite like Brilliant’s cheerfully despairing tone. Jason Chatfield’s Ginger Meggs for the 7th mentions fractions, so you can see how loose the standards get around here when the week is slow enough. John Rose’s Barney Google and Snuffy Smith for the 8th finally gives me a graphic to include this week. It’s about the joke you would expect from the topic of probability being mentioned. And, as might be expected, the comic strip doesn’t precisely accurately describe the state of the law. Any human endeavour has to deal with probabilities. They give us the ability to have reasonable certainty about the confusing and ambiguous information the world presents. 
Vic Lee’s Pardon My Planet for the 8th is another Albert Einstein mention. The bundle of symbols don’t mean much of anything, at least not as they’re presented, but of course superstar equation E = mc² turns up. It could hardly not.

## Reading the Comics, July 28, 2012

I intend to be back to regular mathematics-based posts soon. I had a fine idea for a couple posts based on Sunday’s closing of the Disaster Transport roller coaster ride at Cedar Point, actually, although I have to technically write them first. (My bride and I made a trip to the park to get a last ride in before its closing, and that led to inspiration.) But reviews of math-touching comic strips are always good for my readership, if I’m reading the statistics page here right, so let’s see what’s come up since the last recap, going up to the 14th of July.
2018-12-10 16:36:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5298181772232056, "perplexity": 1215.9835319447604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823348.23/warc/CC-MAIN-20181210144632-20181210170132-00455.warc.gz"}
https://www.maplesoft.com/support/help/errors/view.aspx?path=type/radnumext&L=E
check for a radical number extension

Parameters

expr - expression

Description

• The type(expr, radnumext) calling sequence checks to see if expr is a radical number extension.

• A radical number extension is an algebraic number extension specified in terms of radicals. This is a root of a combination of rational numbers and roots of rational numbers specified in terms of radicals.

Examples

> type((-1)^(1/2), radnumext);
true                                                      (1)

> type((sqrt(5) - 3/2)^(4/3), radnumext);
true                                                      (2)

> type((4 - 5^(1/3))^(1/4), radnumext);
true                                                      (3)

> type(x^(1/4), radnumext);
false                                                     (4)

> type(I, radnumext);
true                                                      (5)
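The shape of the check can be sketched outside Maple too. This toy Python version is my own construction, not Maple's algorithm: it represents expressions as nested tuples and accepts exactly the combinations the description names, namely rationals and rational powers of such combinations, while a free symbol disqualifies the expression.

```python
from fractions import Fraction

# Toy expression trees: a Fraction, or ("pow", base, Fraction exponent),
# or ("add"/"mul", arg1, arg2, ...), or a string for a free symbol like "x".
def is_radnumext(e):
    if isinstance(e, Fraction):
        return True
    if isinstance(e, tuple):
        op, *args = e
        if op == "pow":
            base, exponent = args
            return isinstance(exponent, Fraction) and is_radnumext(base)
        if op in ("add", "mul"):
            return all(is_radnumext(a) for a in args)
    return False  # free symbols such as "x" make it a non-number expression

# (-1)^(1/2), as in example (1):
i_unit = ("pow", Fraction(-1), Fraction(1, 2))
# (sqrt(5) - 3/2)^(4/3), as in example (2):
ex2 = ("pow",
       ("add", ("pow", Fraction(5), Fraction(1, 2)), Fraction(-3, 2)),
       Fraction(4, 3))
# x^(1/4), as in example (4): contains a symbol, so not a *number* extension.
ex4 = ("pow", "x", Fraction(1, 4))

print(is_radnumext(i_unit), is_radnumext(ex2), is_radnumext(ex4))  # True True False
```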
2021-12-07 21:17:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8919240832328796, "perplexity": 1206.159874702417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363418.83/warc/CC-MAIN-20211207201422-20211207231422-00491.warc.gz"}
https://assignmentutor.com/%E7%BB%9F%E8%AE%A1%E4%BB%A3%E5%86%99%E8%B4%9D%E5%8F%B6%E6%96%AF%E5%88%86%E6%9E%90%E4%BB%A3%E5%86%99bayesian-analysis%E4%BB%A3%E8%80%83stat-7613/
## Are You More Likely to Die in an Automobile Crash When the Weather Is Good Compared to Bad?

We saw in Chapter 2, Section 2.3 some data on fatal automobile accidents. The fewest fatal crashes occur when the weather is at its worst and the highways are at their most dangerous. Using the data alone and applying the standard statistical regression techniques to that data, we ended up with the simple regression model shown in Figure 3.1. But there is a grave danger of confusing prediction with risk assessment. For risk assessment and management the regression model is useless, because it provides no explanatory power at all. In fact, from a risk perspective this model would provide irrational, and potentially dangerous, information. It would suggest that if you want to minimize your chances of dying in an automobile crash you should do your driving when the highways are at their most dangerous – in the winter months.

Instead of using just the total number of fatal crashes to determine when it is most risky to drive, it is better to factor in the number of miles traveled so that we can compute the crash rate instead, which is defined as the number of fatal crashes divided by miles traveled. Fortunately we have ready access to this data for the northeastern states of the United States in 2008 on a monthly basis, as shown in Table 3.1. As explained in the sidebar, the crash rate seems to be a more sensible way of estimating when it is most risky to drive.
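The crash-rate idea can be made concrete in a few lines. The monthly figures below are invented for the sketch (they are not the Table 3.1 data, which is not reproduced here); they are only shaped to match the chapter's point that raw crash counts and per-mile crash rates can rank months differently.

```python
# Hypothetical monthly figures, invented for illustration (NOT the Table 3.1 data).
# month: (number of fatal crashes, millions of vehicle-miles traveled)
months = {
    "February": (980, 38000),   # bad weather, but far fewer miles driven
    "July":     (1200, 52000),  # good weather, many more miles driven
}

# Crash rate = fatal crashes divided by miles traveled.
rates = {m: crashes / miles for m, (crashes, miles) in months.items()}
for month, rate in rates.items():
    print(f"{month}: rate = {rate:.4f} fatal crashes per million miles")

# July has more fatal crashes in total, yet the per-mile risk is higher in
# February (about 0.0258 versus 0.0231), which is the point of using a rate.
```

Which month actually comes out riskier depends, of course, on the real data.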
## When Ideology and Causation Collide

One of the first things taught in an introductory statistics course is that correlation is not causation. As we have seen, a significant correlation between two factors A and B (where, for example, A is yellow teeth and B is cancer) could be due to pure coincidence or to a causal mechanism such that:

a. A causes B
b. B causes A
c. Both A and B are caused by C (where in our example C might be smoking) or some other set of factors.

The difference between these possible mechanisms is crucial in interpreting the data, assessing the risks to the individual and society, and setting policy based on the analysis of these risks. However, in practice causal interpretation can collide with our personal view of the world and the prevailing ideology of the organization and social group of which we will be a part. Explanations consistent with the ideological viewpoint of the group may be deemed more worthy and valid than others, irrespective of the evidence.

Discriminating between possible causal mechanisms A, B, and C can only formally be done if we can intervene to test the effects of our actions (normally by experimentation). But we can apply commonsense tests of causal interaction to, at least, reveal alternative explanations for correlations. Box 3.1 provides an example of these issues at play in the area of social policy, specifically regarding the provision of prenatal care.
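Mechanism (c) is easy to see in a simulation. The sketch below uses invented probabilities: a common cause C (smoking) raises the chances of both A (yellow teeth) and B (cancer), and A and B come out correlated even though neither influences the other.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

n = 100_000
count_a = count_b = count_ab = 0
for _ in range(n):
    smokes = random.random() < 0.3                   # C: the common cause
    a = random.random() < (0.8 if smokes else 0.1)   # A: yellow teeth, driven by C
    b = random.random() < (0.2 if smokes else 0.02)  # B: cancer, driven by C only
    count_a += int(a)
    count_b += int(b)
    count_ab += int(a and b)

p_a = count_a / n
p_b = count_b / n
p_ab = count_ab / n
# p_ab comes out near 0.049, well above p_a * p_b near 0.023:
# A and B are correlated with no direct causal link between them.
print(p_ab, p_a * p_b)
```

An intervention, such as staining nonsmokers' teeth yellow and watching the cancer rate not move, would break the association, which is why the text says only intervening can formally discriminate among the mechanisms.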
2023-03-26 08:37:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5203064680099487, "perplexity": 1459.573584613339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945440.67/warc/CC-MAIN-20230326075911-20230326105911-00608.warc.gz"}
https://nforum.ncatlab.org/discussion/2512/
• CommentRowNumber1. • CommentAuthorUrs • CommentTimeFeb 23rd 2011 have created an entry Khovanov homology, so far containing only some references and a little paragraph on the recent advances in identifying the corresponding TQFT. I have also posted this to the $n$Café here, hoping that others feel inspired to work on expanding this entry • CommentRowNumber2. • CommentAuthorzskoda • CommentTimeFeb 23rd 2011 • (edited Feb 23rd 2011) I added lots of links to the bibliography (doi, MR…). There was an earlier $n$Café post about a conference on categorification of the type which includes Khovanov’s work – categorification related to quantum groups, knots and links – the one in Glasgow http://golem.ph.utexas.edu/category/2008/11/categorification_in_glasgow.html, the report on Kamnitzer’s work http://golem.ph.utexas.edu/category/2009/04/kamnitzer_on_categorifying_tan.html, as well as link homology post http://golem.ph.utexas.edu/category/2009/02/link_homology_in_paris.html which you may want to link in your intro text on the $n$Café. Edit: I added the links to $n$Café to Khovanov homology. • CommentRowNumber3. • CommentAuthorUrs • CommentTimeFeb 23rd 2011 • (edited Feb 23rd 2011) thanks, Zoran!! Important further aspects that are missing: • at least a rough account of the definition and some properties • at least a rough mention of the work by Bar-Natan and Lauda on categorified quantum groups and their relation to Khovanov homology. Myself, unfortunately I don’t have time right now. But maybe somebody reading this here does. • CommentRowNumber4. • CommentAuthorzskoda • CommentTimeFeb 23rd 2011 Why do you think that newer work on categorification of quantum groups is closer to Khovanov’s homology than original work from 1990s ?
• CommentRowNumber5. • CommentAuthorUrs • CommentTimeFeb 23rd 2011 Why do you think that I think that? • CommentRowNumber6. • CommentAuthorzskoda • CommentTimeFeb 23rd 2011 Well, Lauda’s work on the subject is very recent. While recent work of many authors is gradually clarifying the true picture, the categorification of quantum groups was in bloom already 10+ years ago, pushed by e.g. Kazhdan, Igor Frenkel, Bernstein and others. Some of it is documented, some is not much published (like work of Kazhdan); by 2003 when I was leaving the US it was a rather frequent topic in the quantum group/representation theory community; in 2004 I heard a strong talk by Stroppel about it in Warwick, she was pushing the program very enthusiastically (which continues now). I anticipate that you do have some insight into why you consider also the young work of Lauda “important” and you want to share the motivation with us before we invest time into it unmotivated… • CommentRowNumber7. • CommentAuthorUrs • CommentTimeFeb 23rd 2011 • (edited Feb 23rd 2011) Gee, Zoran, all I said is that I think we should list it. Don’t try to fight with me over things I didn’t say. If you have insights on Khovanov homology that you want to share, just add them to the entry. • CommentRowNumber8. • CommentAuthorzskoda • CommentTimeFeb 23rd 2011 • (edited Feb 23rd 2011) I have no insights on Khovanov homology (frankly not much interest either), I was working on the entry just to help you. I do have some insights on the categorification of quantum groups, but do not know which of that work is directly relevant to Khovanov homology. Don’t try to fight with me over things I didn’t say. I do not understand why asking you for your insight for “Lauda’s work” (in 4 and 6) is “fight”. It was just a minor curiosity (which I mainly lost after 7).
Edit: I really (and naturally) expect that if somebody labels something as an important further aspect (as in 3) then that person will be happy to explain why (s)he thinks so. • CommentRowNumber9. • CommentAuthorzskoda • CommentTimeFeb 24th 2011 I added a few random related references from Khovanov and from Bar-Natan. • CommentRowNumber10. • CommentAuthorUrs • CommentTimeFeb 24th 2011 • (edited Feb 24th 2011) Hey Zoran, Lauda is a collaborator of the very Khovanov and from what I hear people say in the hallways, his work is being well recognized. That’s why I thought he needs to be cited in the entry on Khovanov homology. But that’s about all I know about it. If you have further insight suggesting that Lauda’s work should not be cited because, as you suggest in 6, it is far too little “important”, then please share your knowledge. • CommentRowNumber11. • CommentAuthorzskoda • CommentTimeFeb 24th 2011 • (edited Feb 24th 2011) Thanks for clarifications. As I explained in detail in 6, as far as categorification of quantum groups is concerned Lauda’s work is indeed very late. But I expected in 4 that there are arguments “that newer work on categorification of quantum groups is closer to Khovanov’s homology than original work from 1990s”. This is not in contradiction with what you are saying and is not suggesting that it is not important – enhancing an existing subject by finding deep connections to other areas is important; I am suggesting that one of its merits is probably in getting closer to Khovanov’s work and was asking for insight in this direction; there may be more (e.g. in making the constructions more explicit). I daresay that quantum groups live per se and the depth of research there is not necessarily measured only by connections to fancier and more popular subjects like TQFT; in an entry on TQFT one should of course emphasise such aspects, with a reference that the subject was largely developed before. • CommentRowNumber12.
• CommentAuthorUrs • CommentTimeFeb 24th 2011 All right, I guess we have clarified then whatever needed clarification. Phew, the comment #3 was posted when i was in a rush as a reminder because I didn’t (and don’t) have the time to look into all this more closely. My plan that it would save me time to post this reminder did not quite work out ,-) Okay, need to get back to work. • CommentRowNumber13. • CommentAuthorzskoda • CommentTimeFeb 24th 2011 • (edited Feb 24th 2011) Sorry, it came that way. On the other hand, when writing to do list one can just say (neutral and short) “to do list” :) as a precaution :) By the way, in 2002 many experts in Lie theory somewhat expected Nakajima to get a Fields medal for his work on quantum groups and ALE spaces, extending earlier Lusztig’s work. For example, Armand Borel expected that. On the other hand I also heard the opinion, that it is still incomparable to the bulk of deep Lusztig’s work in Lie and representation theory, not only quantum groups, canonical basis and Kazhdan-Lusztig-Deodhar theory; but Lusztig was unfortunately never granted. • CommentRowNumber14. • CommentAuthorUrs • CommentTimeFeb 24th 2011 Okay, I see, my “important” caused all the toruble. No problem. Thanks for all your help! Maybe I shouldn’t open new entries when I don’t have enough time. On the other hand, precisely if I don’t have enough time do I feel the need to quickly note down some things in some entry. • CommentRowNumber15. • CommentAuthorzskoda • CommentTimeFeb 24th 2011 You know, you are so nice, literate, willing to explain to people. So even in a hurry you write long sentences full of niceties like “This was nicely explained in ref 2 which took some ideas from ref 4 and build on ref 5”. But there may be soon another big step in quantum groups from topological point of view. 
I mean there is some unavailable work in progress from Gaitsgory and Lurie explaining the origin of quantum groups from the point of view of homotopy/higher category theory…I read some enticing abstract on this… • CommentRowNumber16. • CommentAuthorUrs • CommentTimeFeb 24th 2011 there is some unavailable work in progress from Gaitsgory and Lurie explaining the origin of quantum groups from the point of view of homotopy/higher category theory. This I would like to see! Do you have any further information? • CommentRowNumber17. • CommentAuthorzskoda • CommentTimeFeb 24th 2011 I know that it is about corollaries of their unfinished work on from this talk by Lurie: Defining Algebraic Groups over the Sphere Spectrum I Let G be a compact Lie group. Then the complexification of G can be realized as the set of points of an algebraic group defined over the complex numbers. Better still, this algebraic group can be defined over the ring Z of integers. In the setting of derived algebraic geometry, one can ask whether this group is defined “over the sphere spectrum”. In these talks, I’ll explain the meaning of this question and describe how it can be addressed using recent joint work with Dennis Gaitsgory. and in • Jacob Lurie, Moduli spaces, chapter 10: “Deformations of Representation Categories”, moduli.pdf. I should note that Lurie already did some work with Gaitsgory on quantum groups, which appeared in (Lurie did not sign as an official coauthor at the end as explained inside) • Dennis Gaitsgory, Twisted Whittaker model and factorizable sheaves, Selecta Math. (N.S.) 13 (2008), no. 
4, 617–659, arxiv/0705.4571 • Dennis Gaitsgory, a talk on quantum geometric Langlands, with Lurie, handwrittem notes pdf where a version of the Kazhdan-Lusztig correspondence involving quantum group representations and certain categories of sheaves related to conformal field theory (and affine Lie algebras) is raised to a certain derived picture (eventually equivalence of certain stable (infinity,1)-categories). This is a spectacular further step after earlier work of Finkelberg, Bezrukavnikov and others. David-Ben Zvi commented on MO: One awesome idea is the use of the $E_2$ perspective to explain WHY quantum groups relate to local systems on configuration spaces of points (Drinfeld-Kohno theorem and its generalizations) and in fact use it to prove the Kazhdan-Lusztig equivalence between quantum groups and affine Lie algebras. He also quotes that the Lurie’s talk on Koszul duality is relevant (pdf). I guess part of these last things is now in Higher Algebra. Cf. also Carnahan’s comment at MO (and this one) relating the Drinfeld double in the light of Lurie-Gaitsgory work. • CommentRowNumber18. • CommentAuthorUrs • CommentTimeFeb 24th 2011 That’s great, thanks!
2021-05-13 19:12:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.608913004398346, "perplexity": 1776.6973586774009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00611.warc.gz"}
https://arxiver.wordpress.com/2015/05/20/double-pendulum-a-second-ultra-faint-milky-way-satellite-in-the-horologium-constellation-ga/
# Double Pendulum: a Second Ultra-faint Milky Way Satellite in the Horologium Constellation [GA]

We report the discovery of a new ultra-faint Milky Way satellite candidate, Horologium II, detected in the Dark Energy Survey Y1A1 public data. Horologium II features a half-light radius of $r_{h}=47\pm10$ pc and a total luminosity of $M_{V}=-2.6^{+0.2}_{-0.3}$ that place it in the realm of ultra-faint dwarf galaxies on the size-luminosity plane. The stellar population of the new satellite is consistent in the color-magnitude diagram with an old ($\sim13.5$ Gyr), metal-poor ([Fe/H]$\sim-2.1$) isochrone at a distance modulus of $(m-M)=19.46$, or a heliocentric distance of 78 kpc. Horologium II has a distance similar to the Sculptor dwarf spheroidal galaxy (79 kpc) and the recently reported ultra-faint satellites Eridanus III (87 kpc) and Horologium I (79 kpc). All four satellites are well aligned on the sky, which suggests a possible common origin. As Sculptor is moving on a retrograde orbit within the Vast Polar Structure compared to the other classical MW satellite galaxies, including the Magellanic Clouds, this hypothesis can be tested once proper motion measurements become available.

D. Kim and H. Jerjen
Wed, 20 May 15
Comments: 5 pages, 3 figures, 1 table. Submitted to ApJL
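The quoted heliocentric distance follows directly from the distance modulus via the standard relation d [pc] = 10^((m−M+5)/5); a quick check of the abstract's numbers (no assumptions beyond that textbook formula):

```python
# Distance modulus -> distance: d [pc] = 10 ** ((m - M + 5) / 5)
mu = 19.46  # (m - M) for Horologium II, from the abstract

d_pc = 10 ** ((mu + 5) / 5)
d_kpc = d_pc / 1000
print(f"{d_kpc:.1f} kpc")  # ~78 kpc, matching the quoted heliocentric distance
```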
2019-04-21 21:13:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6261864900588989, "perplexity": 4481.756525045544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532882.36/warc/CC-MAIN-20190421195929-20190421221929-00456.warc.gz"}
https://stats.stackexchange.com/questions/454969/find-confidence-level-given-a-confidence-interval
# Find confidence level given a confidence interval

I have a normally distributed dataset and an associated systematic error. I want to know the probability that a measured value falls within this error range. So I think I want to find the confidence level given a confidence interval on a normally distributed curve. I understand you would normally find a confidence interval given a confidence level, but I cannot seem to find any pieces of code which work in the opposite direction.

• Something like $P(a<X<b)$, where $X$ is your distribution? – Dave Mar 20 '20 at 17:42
• Yes, I believe so. The probability that X falls within the error. Mar 20 '20 at 17:53
• Do you see why that's not a confidence interval? How would you evaluate $P(a<X<b)$? – Dave Mar 20 '20 at 18:05

If my comment is correct, then you don't want to find a confidence interval. However, I think it would still be valuable to show how to find the confidence level, given the confidence interval. Here is the formula for the usual confidence interval for the mean when the variance is unknown:

$$\bar{x}\pm t_{df, 1-\alpha/2} \dfrac{s}{\sqrt{n}}$$

Let's call the confidence interval $$(a, b)$$. First, notice that there is symmetry about $$\bar{x}$$, which means we can focus on one side. Half of the width of the confidence interval is $$b-\bar{x}$$, so:

$$b-\bar{x} = t_{df, 1-\alpha/2} \dfrac{s}{\sqrt{n}}$$

We now do the algebra to solve for $$t$$: $$t = \sqrt{n}(b-\bar{x})/s$$. We know that $$df=n-1$$, so we look this value up in a reference table; software will do that for us. Here is the R code: pt(t, df). This gives us $$1-\alpha/2$$, from which we solve for $$\alpha$$; $$1-\alpha$$ is the confidence level.

• Thank you very much, this makes sense now! Mar 20 '20 at 20:13
• @argot Do you see why this is not what you want to do, though? (Perhaps this would be what you want to do, but the way you've phrased your question right now still makes me think you want $P(a<X<b)$, which is not a confidence interval.)
– Dave Mar 20 '20 at 20:40

I don't believe the confidence interval is giving you what you think. It doesn't say that there is an XX% probability that any particular measurement falls within the CI. It says that, if you repeated the entire process of forming the CI a large number of times, the true mean would lie within XX% of the intervals formed. To be able to say a single (future) measurement is within a particular range, you want to form a Prediction Interval. Or, if you want to say that a certain proportion of all future measurements fall within a certain range, then you want a Tolerance Interval. Or, if you want to become a Bayesian, you can form a Posterior Predictive Interval.
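The inversion described in the answer above is easy to script. The sketch below uses the normal approximation (stdlib-only `statistics.NormalDist`), which is fine for large n; for small samples you would swap the normal CDF for the t CDF with df = n−1 (e.g. `scipy.stats.t.cdf`). The numbers in the example are made up for illustration.

```python
import math
from statistics import NormalDist

def confidence_level(xbar, b, s, n):
    """Recover the confidence level from the upper CI bound b.

    Inverts  b - xbar = z * s / sqrt(n)  and returns 1 - alpha.
    Normal approximation: valid for large n (use the t CDF otherwise).
    """
    z = (b - xbar) * math.sqrt(n) / s
    # CDF(z) = 1 - alpha/2, so the confidence level 1 - alpha = 2*CDF(z) - 1
    return 2 * NormalDist().cdf(z) - 1

# Example (made-up data): xbar = 10, s = 2, n = 100, upper CI bound 10.392
level = confidence_level(10, 10.392, 2, 100)
print(round(level, 3))  # 0.95, since 10.392 = 10 + 1.96 * 2 / 10
```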
2021-09-17 15:20:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7224352359771729, "perplexity": 235.37576572601725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00224.warc.gz"}
https://mathematica.stackexchange.com/questions/173200/remove-a-specific-color-black-within-an-image
# Remove a specific color (black) within an image [closed]

I need the image to be totally blank, so the black must disappear.

closed as off-topic by Alexey Popkov, Henrik Schumacher, Coolwater, Lukas Lang, corey979 May 15 '18 at 19:09

Remove the white lines in the S with a vertical opening:

Opening[i, {{1}, {1}, {1}}]

Remove the small black lines with a closing:

Closing[%, 4]

Create a mask by inverting the colors and shrinking inside the boundary of the S with a 1-pixel erosion:

mask = Erosion[1 - %, 1]

i + mask

• +1, that is what I thought OP meant, but when I read the title again, OP said clearly to blank the whole image :) I have a feeling one needs to be extra psychic these days to answer questions at Stack Exchange. – Nasser May 15 '18 at 19:11
• @juanmuñoz: MMA 10 didn't support arithmetic operators on images yet. You can convert the images to matrices, perform arithmetic on those, then convert back to image. So e.g. instead of img1 + img2 use Image[ImageData[img1]+ImageData[img2]] - or upgrade to MMA 11 – Niki Estner May 16 '18 at 6:11
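For readers outside Mathematica, the same morphological idea (invert, grow a mask over the dark region, add the mask back) can be sketched with plain NumPy. Everything here is illustrative: the image is a synthetic white canvas with a black blob, and I use dilation rather than the answer's 1-pixel erosion so that the blob's outline is covered too and the result is fully blank.

```python
import numpy as np

def dilate(img, k=1):
    # Binary dilation with a (2k+1)x(2k+1) square structuring element,
    # implemented as a pixelwise max over all shifts of the image.
    out = img.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = np.maximum(out, np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return out

# Synthetic image: 1.0 = white, 0.0 = black, with a black square in the middle
img = np.ones((10, 10))
img[3:7, 3:7] = 0.0

mask = dilate(1 - img, 1)            # white over the blob plus a 1-px border
result = np.clip(img + mask, 0, 1)   # adding the mask whites out the blob
print(result.min())  # 1.0 -> the image is now totally blank (all white)
```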
2019-11-14 11:14:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46994709968566895, "perplexity": 2912.5116554771184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668416.11/warc/CC-MAIN-20191114104329-20191114132329-00002.warc.gz"}
https://datascience.stackexchange.com/questions/20271/do-numerical-inaccuracies-play-any-role-in-training-neural-networks
# Do numerical inaccuracies play any role in training neural networks?

Are there publications which mention numerical problems in neural network optimization? (Blog posts, articles, workshop notes, lecture notes, books - anything?)

## Background of the question

I recently observed a strange phenomenon: when I trained a convolutional network on the GTSRB dataset with a given script on my machine, it got state-of-the-art results (99.9% test accuracy). 10 times. No outlier. When I used the same scripts on another machine, I got much worse results (~80% test accuracy or so, 10 times, no outliers). I thought that I probably didn't use the same scripts, and as it was not important for my publication I just removed all results for that dataset. I thought I had probably made a mistake on one of the machines (e.g. using differently pre-processed data), but I couldn't find out where the mistake happened.

Now a friend wrote me that he has a network, a training script, and a dataset which converges on machine A but does not converge on machine B. Exactly the same setup (a fully connected network trained as an autoencoder).

I have only one guess about what might be happening: the machines have different hardware. It is possible that Tensorflow uses different algorithms for matrix multiplication / gradient calculation, whose numeric properties might differ. Those differences might allow one machine to optimize the network while the other can't. Of course, this needs further investigation.

But no matter what is happening in those two cases, I think this question is interesting. Intuitively, I would say that numeric issues should not be important, as sharp minima are not desired anyway, and differences in one multiplication are less important than the update of the next step.

• I am assuming that you use the same versions of all (Python) packages, use seeds for shuffling the data set, and use the same initial weights.
Then, if your results still diverge, the differences could lie in the low-level CUDA or C libraries used. Actually, what you described is the reason why you should ALWAYS perform your calculations inside a virtual machine. I have seen it many times. Always use VMs and an orchestration tool for your simulations (e.g. Vagrant with Ansible). This will also make sure that you can reproduce your results years later. Edit: If you want to use GPU inside the VM, Jul 9, 2017 at 12:10
• But VMs make the training much slower, don't they? Jul 9, 2017 at 15:12
• askubuntu.com/a/598335/10425 suggests that you can't use the GPU while in a VM. This would basically mean I can't get any results. Hence a VM is not an option. Jul 9, 2017 at 15:13
• anyway: it seems it takes some work to get PCI Express passthrough, see arrayfire.com/using-gpus-kvm-virutal-machines. But Docker seems to be an option: github.com/floydhub/dl-docker Jul 9, 2017 at 15:26
• a VM does not guarantee reproducibility. Even with the same software you can easily get different results on different hardware if you have numerical instabilities. Jul 11, 2017 at 18:06

## 1 Answer

Are there publications which mention numerical problems in neural network optimization?

Of course: there has been a lot of research on vanishing gradients, which is entirely a numerical problem. There is also a fair amount of research on training with low-precision operations, but the result is surprising: reduced floating-point precision doesn't seem to affect neural network training. This means that precision loss is pretty unlikely to be the cause of this phenomenon. Still, the environment can affect the computation (as suggested in the comments):

• Most obviously, the random-number generator. Use a seed in your script and try to produce a reproducible result at least on a single machine. After that you can compute summaries of activations and gradients (e.g. via tf.summary in tensorflow) and compare the tensors across the machines.
Clearly, basic operations such as matrix multiplication or a piecewise exponent should give very close, if not identical, results no matter what hardware is used. You should be able to see whether the tensors diverge immediately (which means there is another source of randomness) or gradually.

• The python interpreter, cuda and cudnn driver versions, and key library versions (numpy, tensorflow, etc). You can go as far as the same linux kernel and libc version, but I think you should expect reproducibility even without that. The cudnn version is important because the convolution is likely to be natively optimized; tensorflow is also very important, because Google rewrites the core all the time.
• Environment variables, e.g. PATH, LD_LIBRARY_PATH, etc., and linux configuration parameters, e.g. limits.conf, permissions.

Extra precautions:

• Explicitly specify the type of each variable; don't rely on defaults.
• Double-check that the training / test data is identical and is read in the same order.
• Does your computation use any pre-trained models? Any networking involved? Check that as well.

I would suspect hardware differences last: it would be an extraordinary case if a high-level computation without explicit concurrency led to different results (beyond floating-point precision differences) depending on the number of cores or cache size.
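The suggestion above, to seed everything and then compare tensors across machines, can be automated with a small fingerprinting helper. This is a generic sketch (not tied to TensorFlow; the function name and rounding tolerance are my own choices): hash a rounded copy of each array so that last-ulp differences between BLAS builds are absorbed, and diff the printed digests between the two machines.

```python
import hashlib

import numpy as np

def tensor_fingerprint(arr, decimals=6):
    """Deterministic short digest of a float array.

    Rounding before hashing absorbs tiny last-ulp differences between
    BLAS/hardware implementations; genuine divergence still changes the hash.
    """
    rounded = np.round(np.asarray(arr, dtype=np.float64), decimals)
    return hashlib.sha256(rounded.tobytes()).hexdigest()[:16]

# Seed everything, run the same computation, and log the fingerprints.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
fp = tensor_fingerprint(a @ b)
# Run the same script on machine A and machine B: the printed digests should
# match; the first operation whose digest differs is where divergence starts.
print(fp)
```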
2022-08-08 21:59:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5180855989456177, "perplexity": 1100.773540627655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00377.warc.gz"}
https://testdocs.zerynth.com/latest/official/core.zerynth.stdlib/docs/official_core.zerynth.stdlib_adc.html
Analog to Digital Conversion¶

This module loads the Analog to Digital Converter (adc) driver of the embedded device. When imported, it automatically sets the system adc driver to the default one.

init(drvname, samples_per_second=800000)

Loads the adc driver identified by drvname and sets it up to read samples_per_second samples per second. The default is a sampling frequency of 0.8 MHz; valid values depend on the board. Returns the previous driver without disabling it.

done(drvname)

Unloads the adc driver identified by drvname.

read(pin, samples=1)

Reads analog values from pin, which must be one of the Ax pins. If samples is 1 or not given, returns the integer value read from pin. If samples is greater than 1, returns a tuple of integers of size samples. The maximum value returned by analogRead depends on the analog resolution of the board. read also accepts lists or tuples of pins and returns the corresponding tuple of tuples of samples:

import adc
x = adc.read([A0, A1, A2], 6)   # any three Ax pins; the names here are illustrative

This piece of code sets x to ((...),(...),(...)) where each inner tuple contains 6 samples taken from the corresponding channel. To use less memory, the inner tuples can be bytes, shorts, or normal tuples, depending on the hardware resolution of the adc unit. The number of sequential pins that can be read in a single call depends on the specific board.
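The return-shape conventions of read can be exercised off-device with a minimal stand-in for the adc module. The stub below is purely illustrative (invented class, invented pin identifiers, random readings in a hypothetical 12-bit range); on a real board you would simply import adc and use the board's Ax pins.

```python
import random

class FakeADC:
    """Host-side stand-in for the Zerynth adc module, for illustration only."""

    def read(self, pin, samples=1):
        if isinstance(pin, (list, tuple)):
            # One inner tuple of samples per requested channel.
            return tuple(self.read(p, samples) for p in pin)
        if samples == 1:
            return random.randrange(4096)  # e.g. a 12-bit conversion result
        return tuple(random.randrange(4096) for _ in range(samples))

adc = FakeADC()
A0, A1, A2 = 0, 1, 2  # illustrative pin identifiers

x = adc.read([A0, A1, A2], 6)
# x is ((...), (...), (...)): three inner tuples of 6 samples each,
# mirroring the multi-pin behaviour documented above.
```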
2019-12-15 10:56:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2939508855342865, "perplexity": 1965.1177973579727}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541307813.73/warc/CC-MAIN-20191215094447-20191215122447-00178.warc.gz"}
http://dizziness.academickids.com/encyclopedia/index.php/Volterra-Lotka_equations
# Lotka-Volterra equation

(Redirected from Volterra-Lotka equations)

The Lotka-Volterra equations, also known as the predator-prey equations, are a pair of first-order, non-linear differential equations frequently used to describe the dynamics of biological systems in which two species interact, one a predator and one its prey. They were proposed independently by Vito Volterra and Alfred J. Lotka in the 1920s. A classic model using the equations is of the population dynamics of the lynx and the snowshoe hare, popularised due to the extensive data collected on the relative populations of the two species by the Hudson Bay company during the 19th century.

## The equations

The usual form of the equations is:

$$\frac{dx}{dt} = x(\alpha - \beta y)$$

$$\frac{dy}{dt} = -y(\gamma - \delta x)$$

where

• y is the number of some predator (for example, foxes);
• x is the number of its prey (for example, rabbits);
• t represents the development of the two populations against time; and
• α, β, γ and δ are parameters representing the interaction of the two species.

## Physical meanings of the equations

When multiplied out, the equations take a form useful for physical interpretation.

### Prey

The prey equation becomes:

$$\frac{dx}{dt} = \alpha x - \beta xy$$

The prey are assumed to have an unlimited food supply, and to reproduce exponentially unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy. If either x or y is zero then there can be no predation. With these two terms the equation above can be interpreted as: the change in the prey's numbers is given by its own growth minus the rate at which it is preyed upon.
### Predators

The predator equation becomes:

$$\frac{dy}{dt} = \delta xy - \gamma y$$

In this equation, δxy represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used, as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey.) γy represents the natural death of the predators; it is an exponential decay. Hence the equation represents the change in the predator population as the growth of the predator population, minus natural death.

## Solutions to the equations

The equations have periodic solutions which do not have a simple expression in terms of the usual trigonometric functions. However, an approximate linearised solution yields a simple harmonic motion with the population of predators leading that of prey by 90°.

(Figure: Volterra-Lotka dynamics; image not available.)

## Dynamics of the system

In the model system, the predators thrive when there are plentiful prey but, ultimately, outstrip their food supply and decline. While the predator population is low, the prey population will increase again. These dynamics continue in a cycle of growth and decline.

### Population equilibrium

Population equilibrium occurs in the model when neither of the population levels is changing, i.e. when both of the differential equations are equal to 0:

$$x(\alpha - \beta y) = 0$$

$$-y(\gamma - \delta x) = 0$$

When solved for x and y, the above system of equations yields

$$\{y = 0,\; x = 0\} \quad \text{and} \quad \{y = \frac{\alpha}{\beta},\; x = \frac{\gamma}{\delta}\},$$

hence there are two equilibria. The first solution effectively represents the extinction of both species. If both populations are at 0, then they will continue to be so indefinitely. The second solution represents a fixed point at which both populations sustain their current, non-zero numbers and, in the simplified model, do so indefinitely.
The levels of population at which this equilibrium is achieved depend on the chosen values of the parameters α, β, γ, and δ.

### Stability of the fixed points

The stability of the fixed points can be determined by performing a linearization using partial derivatives. The Jacobian matrix of the predator-prey model is

$$J(x,y) = \begin{bmatrix} \alpha - \beta y & -\beta x \\ \delta y & \delta x - \gamma \end{bmatrix}$$

#### First fixed point

When evaluated at the steady state (0,0), the Jacobian matrix J becomes

$$J(0,0) = \begin{bmatrix} \alpha & 0 \\ 0 & -\gamma \end{bmatrix}$$

The eigenvalues of this matrix are

$$\lambda_1 = \alpha, \quad \lambda_2 = -\gamma$$

In the model, α and γ are always greater than zero, and as such the signs of the eigenvalues above will always differ. Hence the fixed point at the origin is a saddle point.

The stability of this fixed point is of importance. If it were stable, non-zero populations might be attracted towards it, and as such the dynamics of the system might lead towards the extinction of both species for many cases of initial population levels. However, as the fixed point at the origin is a saddle point, and hence unstable, the extinction of both species is difficult in the model.

#### Second fixed point

Evaluating J at the second fixed point, we get

$$J\left(\frac{\gamma}{\delta},\frac{\alpha}{\beta}\right) = \begin{bmatrix} 0 & -\frac{\beta \gamma}{\delta} \\ \frac{\alpha \delta}{\beta} & 0 \end{bmatrix}$$

The eigenvalues of this matrix are

$$\lambda_1 = i \sqrt{\alpha \gamma}, \quad \lambda_2 = -i \sqrt{\alpha \gamma}$$

As the eigenvalues are both complex, this fixed point is a focus. The real part is zero in both cases, so more specifically it is a centre. This means that the levels of the predator and prey populations cycle, and oscillate around this fixed point.
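The closed orbits around the centre can be seen directly by integrating the system numerically. The sketch below (plain Python, classical Runge-Kutta; the parameter values α = β = γ = δ = 1 and the starting point are arbitrary choices for illustration) also checks the quantity V = δx − γ ln x + βy − α ln y, which is conserved along every trajectory of the model and forces the orbits to stay on closed level curves.

```python
import math

ALPHA, BETA, GAMMA, DELTA = 1.0, 1.0, 1.0, 1.0  # illustrative parameters

def deriv(x, y):
    # The Lotka-Volterra right-hand side: (dx/dt, dy/dt)
    return x * (ALPHA - BETA * y), -y * (GAMMA - DELTA * x)

def rk4_step(x, y, dt):
    # One classical 4th-order Runge-Kutta step for the pair (x, y)
    k1x, k1y = deriv(x, y)
    k2x, k2y = deriv(x + dt / 2 * k1x, y + dt / 2 * k1y)
    k3x, k3y = deriv(x + dt / 2 * k2x, y + dt / 2 * k2y)
    k4x, k4y = deriv(x + dt * k3x, y + dt * k3y)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

def invariant(x, y):
    # V = delta*x - gamma*ln(x) + beta*y - alpha*ln(y), constant along orbits
    return DELTA * x - GAMMA * math.log(x) + BETA * y - ALPHA * math.log(y)

x, y = 2.0, 1.0            # start away from the fixed point (1, 1)
v0 = invariant(x, y)
for _ in range(2000):      # integrate to t = 20 with dt = 0.01
    x, y = rk4_step(x, y, 0.01)

# Only tiny numerical drift remains; the flow itself conserves V exactly.
print(abs(invariant(x, y) - v0))
```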
2020-08-12 18:49:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9668540954589844, "perplexity": 883.4783960439233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738913.60/warc/CC-MAIN-20200812171125-20200812201125-00005.warc.gz"}
https://gmatclub.com/forum/mike-tom-and-walt-are-working-as-sales-agents-for-an-insurance-compa-72910.html
# Mike, Tom, and Walt are working as sales agents for an insurance company

Posted by a Retired Moderator (joined 18 Jul 2008) on 14 Nov 2008, 15:41.
Difficulty: 45% (medium). Question Stats: 72% (02:21) correct, 28% (03:04) wrong, based on 342 sessions.

Mike, Tom, and Walt are working as sales agents for an insurance company. The previous month, the relationship between their commissions was $$\frac{MT}{6}=W$$, where $$M$$, $$T$$, and $$W$$ are the commissions received by Mike, Tom, and Walt respectively. If this month Mike's commission is 60% more than the previous month and Tom's commission is 50% less than the previous month, then how much should Walt's commission change compared to the previous month to ensure that the relationship between their commissions remains the same?

A. Decrease by 12.5%
B. Decrease by 20%
C. Decrease by 22.5%
D. Increase by 12.5%
E. Increase by 15%

M06-36

Math Expert (joined 02 Sep 2009), 08 Dec 2014, 04:05:

Mike, Tom, and Walt are working as sales agents for an insurance company.
Last month, the relationship between their commissions was $$\frac{MT}{6}=W$$, where $$M$$, $$T$$, and $$W$$ are the commissions received by Mike, Tom, and Walt respectively. If this month Mike's commission is 60% more than last month's and Tom's commission is 50% less, how much should Walt's commission change, compared to last month, to ensure that the relationship between their commissions remains the same?

A. Decrease by 12.5%
B. Decrease by 20%
C. Decrease by 22.5%
D. Increase by 12.5%
E. Increase by 15%

M06-36

Last month: $$\frac{MT}{6}=W$$. This month: $$1.6M*0.5T=0.8MT$$. So $$MT$$ decreases by 20%, and $$W$$ must decrease by the same percentage for the relationship to remain the same.

Manager | Joined: 23 Jul 2008 | Posts: 155 | 14 Nov 2008, 15:44

$$1.6M \times 0.5T = 0.8MT = 0.8 \times \frac{W}{6}$$, hence Walt should decrease his sales by 20% so that the relation remains intact.

Senior Manager | Status: Do and Die!! | Joined: 15 Sep 2010 | Posts: 253 | 26 Sep 2011, 05:05

Mike, Tom, and Walt are working as sales agents for an insurance company, and the commissions they receive can be expressed by the formula $$MT=\frac{W}{6}$$. If Mike sells 60% more this month and Tom decreases his sales by 50%, how should Walt's performance change to ensure that the above relationship remains true?

Manager | Joined: 12 May 2011 | Posts: 93 | Location: United States | GMAT 1: 730 Q47 V44 | 27 Sep 2011, 08:42

I believe this is the same question that has already been answered.
But here's my take:

MT = W/6, so 6MT = W ... (1)

This month: 1.6M × 0.5T = Wx/6, i.e. 0.8MT = Wx/6, so 4.8MT = Wx ... (2)

Comparing (1) and (2): since 4.8 is 4/5 of 6, x must be 4/5 for the equation to hold. Therefore W needs to decrease by 20%, answer choice B.

Intern | Joined: 10 Mar 2013 | Posts: 11 | 07 Dec 2014, 20:08

It is given that MT = W/6 ... (1)

M's sales increased by 60 percent and T's decreased by 50 percent, so M_new = 1.6M and T_new = 0.5T. Now assume W changes by X percent. Since the equation must still hold:

1.6M × 0.5T = W(1 + X/100)/6, i.e. 0.8MT = W(1 + X/100)/6

From equation (1), W = 6MT, so 0.8MT = 6MT(1 + X/100)/6 = MT(1 + X/100).

Hence 1 + X/100 = 0.8, so X = -20, answer choice B: we need to reduce W by 20% to maintain the equation. I have listed all the steps for clarity; the problem can be solved much faster.

Intern | Joined: 13 Dec 2013 | Posts: 38 | GPA: 2.71 | 11 Dec 2014, 03:26

Bunuel wrote:
Mike, Tom, and Walt are working as sales agents for an insurance company. Last month, the relationship between their commissions was $$\frac{MT}{6}=W$$, where $$M$$, $$T$$, and $$W$$ are the commissions received by Mike, Tom, and Walt respectively.
If this month Mike's commission is 60% more than last month's and Tom's commission is 50% less, how much should Walt's commission change, compared to last month, to ensure that the relationship between their commissions remains the same?

A. Decrease by 12.5%
B. Decrease by 20%
C. Decrease by 22.5%
D. Increase by 12.5%
E. Increase by 15%

M06-36

Last month: $$\frac{MT}{6}=W$$. This month: $$1.6M*0.5T=0.8MT$$. So $$MT$$ decreases by 20%, and $$W$$ must decrease by the same percentage for the relationship to remain the same.

I'm lost. Shouldn't any change in MT be 6 times that of W? $$MT/6=W$$, so $$0.8MT=6W$$?

SVP | Status: The Best Or Nothing | Joined: 27 Dec 2012 | Posts: 1787 | Location: India | 12 Dec 2014, 00:45

Simplifying the condition: mt = 6w. The constant 6 does not affect the percentage calculation and can be ignored, giving

mt = w ............... (1)

With a 60% increase in m and a 50% decrease in t:

$$\frac{160}{100} m \times \frac{50}{100} t = w_{new}$$, i.e. $$mt = \frac{100}{80} w_{new}$$ .................. (2)

Comparing (1) and (2): $$w_{new} = \frac{80}{100} w$$, so the required decrease is 20%.
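All of these solutions reduce to the same observation: if $$\frac{MT}{6}=W$$ must keep holding, then W has to scale by exactly the same factor as the product MT. A quick numeric check in Python (an illustrative sketch, not part of the original thread; the names and sample numbers are ours):

```python
# If MT/6 = W must keep holding, W must scale by the same factor as M*T.

def new_w_factor(m_factor: float, t_factor: float) -> float:
    """Factor by which W must change so that M*T/6 = W still holds."""
    return m_factor * t_factor

m, t = 300.0, 120.0                  # any sample commissions
w = m * t / 6                        # last month's relationship

factor = new_w_factor(1.60, 0.50)    # Mike +60%, Tom -50%
assert abs((1.60 * m) * (0.50 * t) / 6 - factor * w) < 1e-9
print(f"W must change by {(factor - 1) * 100:+.1f}%")   # prints: W must change by -20.0%
```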
https://blogvidasaudavel.net/nomura-greentech-uitmkce/faa610-2mg-%2B-o2
# 2Mg + O2 = 2MgO

## Balancing the equation

Step 1, data given: magnesium = Mg, oxygen = O2, magnesium oxide = MgO.
Step 2, the unbalanced equation: Mg + O2 → MgO.
Step 3, balancing: on the left there are 2 O atoms, so multiply MgO by 2 to get 2 O on each side; that leaves 2 Mg on the right, so multiply Mg by 2:

2Mg + O2 → 2MgO

It's quite simple, really: just add up the total number of atoms of each element on both sides. If there is a difference in the number of atoms on either side, the equation isn't balanced. With state symbols, the equation is 2Mg(s) + O2(g) → 2MgO(s).

## What type of reaction is this?

Magnesium and oxygen combine to form a single compound, magnesium oxide, so this is a synthesis (combination) reaction. Because magnesium burns in oxygen and electrons are transferred, it can equally be classified as a combustion and an oxidation-reduction reaction.

## Is 2Mg + O2 = 2MgO a redox reaction?

Yes. Remember OIL RIG: Oxidation Is Loss of electrons, Reduction Is Gain. Writing the ionic half-reactions for 2Mg + O2 → 2Mg²⁺ + 2O²⁻:

2Mg − 4e⁻ → 2Mg²⁺ (Mg loses electrons: it is oxidised, and is therefore the reducing agent)
O2 + 4e⁻ → 2O²⁻ (O gains electrons: it is reduced, and O2 is therefore the oxidising agent)

Note that O2 bonds don't come apart easily: breaking the oxygen-oxygen double bond takes a great deal of energy.

## Mole ratios

From the balanced equation, the mole ratio Mg : O2 : MgO is 2 : 1 : 2. Some examples:

- 4 mol of Mg requires 4 × (1 O2 / 2 Mg) = 2 mol of O2, i.e. 2 × 32 = 64 g of O2.
- 3 mol of Mg requires (3 Mg)(1 O2 / 2 Mg) = 1.5 mol of O2, and 3 mol of O2 could produce 6 mol of MgO.
- 16 g of O2 is 16/32 = 0.5 mol, so exactly 1 mol of Mg is required to react with 16 g of O2 (Mg + ½O2 → MgO).
- 0.200 mol of O2 reacting completely produces 2 × 0.200 = 0.400 mol of MgO.

## Mass-to-mass calculations

Relative masses: Mg = 24.3, O2 = 32, MgO = 40.3.

How many grams of MgO are produced from 40 g of O2? Moles of O2 = 40/32 = 1.25. From the stoichiometric equation, each mole of O2 makes 2 moles of MgO, so with x the mass of MgO: 1.25 = x/(40.3 × 2) = x/80.6, giving x = 1.25 × 80.6 = 100.75 g.

Similarly, 0.03 g of Mg is 0.03/24.3 = 0.00123 mol, which produces 0.00123 mol of MgO (0.00123 × 40.3 ≈ 0.05 g).

Related practice questions: calculate the mass of magnesium needed to produce 12.1 g of magnesium oxide; if 0.486 g of magnesium and 6.022 × 10²⁰ molecules of oxygen react, what mass of magnesium oxide will be formed; if the mass of the magnesium increases by 0.335 g on burning, how many grams of magnesium reacted?

## Thermochemistry

Consider the reaction 2Mg(s) + O2(g) → 2MgO(s), ΔH = −1204 kJ.

Part A: Is this reaction exothermic or endothermic? ΔH is negative, so heat is released and the reaction is exothermic.
Part B: Calculate the amount of heat transferred when 3.55 g of Mg(s) reacts at constant pressure.
Part C: How many grams of MgO are produced during an enthalpy change of −234 kJ?
Related: how many kilojoules of heat are absorbed when 40.1 g of MgO(s) is decomposed into Mg(s) and O2(g) at constant pressure?
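The mole-ratio and enthalpy arithmetic above can be wrapped in a few lines of Python. This is an illustrative sketch using the approximate molar masses quoted on this page (Mg = 24.3, O2 = 32.0, MgO = 40.3 g/mol); the function names are ours, not from the source:

```python
# Stoichiometry helpers for 2Mg + O2 -> 2MgO (mole ratio Mg : O2 : MgO = 2 : 1 : 2).

M_MG, M_O2, M_MGO = 24.3, 32.0, 40.3   # g/mol, as quoted above
DH_RXN = -1204.0                       # kJ per 2 mol of MgO formed

def mgo_mass_from_o2(grams_o2: float) -> float:
    """Grams of MgO formed when grams_o2 of oxygen react completely."""
    return (grams_o2 / M_O2) * 2 * M_MGO      # 1 mol O2 -> 2 mol MgO

def heat_from_mg(grams_mg: float) -> float:
    """Heat transferred in kJ (negative = released) when grams_mg of Mg react."""
    return (grams_mg / M_MG) * (DH_RXN / 2)   # DH/2 per mole of Mg

print(round(mgo_mass_from_o2(40.0), 2))  # 100.75 (1.25 mol O2 -> 2.5 mol MgO)
print(round(heat_from_mg(3.55), 1))      # -87.9, i.e. about 87.9 kJ released (Part B)
```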
https://cracku.in/blog/profit-and-loss-questions-for-cmat/
# Profit and Loss Questions for CMAT

Download important Profit and Loss Questions for CMAT PDF based on previously asked questions in the CMAT exam. Practice Profit and Loss Questions PDF for the CMAT exam.

Question 1: Rohit bought 20 soaps and 12 toothpastes. He marked up the soaps by 15% on the cost price of each, and the toothpastes by Rs.20 on the cost price of each. He sold 75% of the soaps and 8 toothpastes and made a profit of Rs.385. If the cost of a toothpaste is 60% of the cost of a soap and he got no return on unsold items, what was his overall profit or loss?

a) Loss of Rs.355
b) Loss of Rs.210
c) Loss of Rs.250
d) None of the above

Question 2: Three years ago, your close friend won a lottery of Rs. 1 crore. He purchased a flat for Rs. 40 lakhs, a car for Rs. 20 lakhs and shares worth Rs. 10 lakhs. He put the remaining money in a bank deposit that pays compound interest @ 12 percent per annum. If today, he sells off the flat, the car and the shares at a certain percentage of their original value and withdraws his entire money from the bank, the total gain in his assets is 5%. The closest approximate percentage of the original value at which he sold off the three items is

a) 60 percent
b) 75 percent
c) 90 percent
d) 105 percent

Question 3: Two men X and Y started working for a certain company at similar jobs on January 1, 1950. X asked for an initial monthly salary of Rs. 300 with an annual increment of Rs. 30. Y asked for an initial monthly salary of Rs. 200 with a rise of Rs. 15 every 6 months. Assume that the arrangements remained unaltered till December 31, 1959. Salary is paid on the last day of the month. What is the total amount paid to them as salary during the period?

a) Rs. 93,300
b) Rs. 93,200
c) Rs. 93,100
d) None of these

Question 4: A sum of money compounded annually becomes Rs.625 in two years and Rs.675 in three years.
The rate of interest per annum is

a) 7%
b) 8%
c) 6%
d) 5%

**Question 5:** If Fatima sells 60 identical toys at a 40% discount on the printed price, then she makes 20% profit. Ten of these toys are destroyed in fire. While selling the rest, how much discount should be given on the printed price so that she can make the same amount of profit?

a) 30%
b) 25%
c) 24%
d) 28%

**Question 6:** A dealer buys dry fruits at Rs. 100, Rs. 80 and Rs. 60 per kilogram. He mixes them in the ratio 3 : 4 : 5 by weight, and sells at a profit of 50%. At what price per kilogram does he sell the dry fruit?

a) Rs. 80
b) Rs. 100
c) Rs. 95
d) None of these

Instructions: Answer the question based on the following information. A watch dealer incurs an expense of Rs. 150 for producing every watch. He also incurs an additional expenditure of Rs. 30,000, which is independent of the number of watches produced. If he is able to sell a watch during the season, he sells it for Rs. 250. If he fails to do so, he has to sell each watch for Rs. 100.

**Question 7:** If he is able to sell only 1,200 out of the 1,500 watches he has made in the season (and the remaining 300 are sold out of season), then he has made a profit of

a) Rs. 90,000
b) Rs. 75,000
c) Rs. 45,000
d) Rs. 60,000

**Question 8:** The price of a Maruti car rises by 30% while the sales of the car come down by 20%. What is the percentage change in the total revenue?

a) -4%
b) -2%
c) +4%
d) +2%

**Question 9:** A man borrows Rs. 6000 at 5% interest, on reducing balance, at the start of the year. If he repays Rs. 1200 at the end of each year, find the amount of loan outstanding, in Rs., at the beginning of the third year.

a) 3162.75
b) 4125.00
c) 4155.00
d) 5100.00
e) 5355.00

**Question 10:** A shopkeeper labelled the price of his articles so as to earn a profit of 30% on the cost price. He then sold the articles by offering a discount of 10% on the labelled price. What is the actual percent profit earned in the deal?
a) 18%
b) 15%
c) 20%
d) None of these

**Solution 1:** Let the CP of 1 soap = S. Then the CP of 1 toothpaste = 0.6S.
SP of 1 soap = 1.15S and SP of 1 toothpaste = 0.6S + 20.
From the profit on the items sold: 15(1.15S) + 8(0.6S + 20) − 15S − 8(0.6S) = 385.
Solving, S = 100.
Total CP of 20 soaps and 12 toothpastes = 20(100) + 12(60) = 2720.
SP of 15 soaps and 8 toothpastes = 15(115) + 8(80) = 2365.
Overall loss = 2720 − 2365 = Rs. 355. Hence, option A is the correct answer.

**Solution 2:** His total gain is 5%, so the value of his assets today is Rs. 105 lakh.
The amount he gets from the bank after three years = $30(1.12)^3 = 42.14784$ lakh.
Let x be the fraction of the original value at which he sells the assets worth Rs. 70 lakh, so he gets 70x lakh for them.
Then 70x + 42.1478 = 105, so 70x = 62.8522 and x ≈ 0.90 = 90%.
Hence, option C is the correct answer.

**Solution 3:** January 1, 1950 to December 31, 1959 is a period of 10 years, or 20 half-years.
X earns Rs. 300 per month in the first year, Rs. 330 per month in the second, and so on, an AP of 10 annual terms: 300 + 330 + 360 + …
Y's half-yearly monthly salaries form an AP of 20 terms: 200 + 215 + 230 + 245 + …
Total earnings of X = 12 × (300 + 330 + … 10 terms) = 12 × 4350 = Rs. 52,200.
Total earnings of Y = 6 × (200 + 215 + 230 + … 20 terms) = 6 × 6850 = Rs. 41,100.
Total paid to both = 52,200 + 41,100 = Rs. 93,300.

**Solution 4:** By the compound interest formula, after 2 years: $P(1+\frac{r}{100})^{2}$ = 625 (where r is the rate and P is the principal), and after 3 years: $P(1+\frac{r}{100})^{3}$ = 675.
Dividing the two equations gives $1 + \frac{r}{100} = \frac{675}{625} = 1.08$, so r = 8%.

**Solution 5:** Let the cost price be C and the marked price be M.
Given, 0.6M = 1.2C, so M = 2C. CP of 60 toys = 60C, and the profit made is 12C.
With only 50 toys remaining, M(1 − d) × 50 = 72C, so 1 − d = 0.72 and d = 0.28, i.e. 28%.

**Solution 6:** Say he buys 3 kg, 4 kg and 5 kg of the fruits, so the cost price per kg
= $\frac{300+320+300}{12} = \frac{920}{12}$.
Selling price = $\frac{920}{12} \times \frac{3}{2}$ = Rs. 115 per kg. Hence, option D (none of these).

**Solution 7:** Cost price per watch = Rs. 150, so the cost for 1500 watches = $1500 \times 150$ = 225000.
Total expense = 225000 + 30000 = 255000.
In-season sales = $1200 \times 250$ = 300000; out-of-season sales = $300 \times 100$ = 30000.
Total sales = 330000, so profit = 330000 − 255000 = Rs. 75,000.

**Solution 8:** Say the price of the Maruti car is x and sales are y, so revenue = xy.
New price = 1.3x and new sales = 0.8y, so new revenue = 1.04xy, a 4% increase.

**Solution 9:** Amount outstanding after 1 year = $6000 + \frac{6000 \times 5}{100} - 1200 = 6000 + 300 - 1200 = 5100$.
$\therefore$ Amount at the beginning of the third year, i.e. after 2 years = $5100 + \frac{5100 \times 5}{100} - 1200 = 5100 + 255 - 1200 = 4155$.

**Solution 10:** Let the cost price of the article be Rs. 100x. Marked price = Rs. 130x.
SP of the article = $\frac{130x \times 90}{100}$ = Rs. 117x.
Actual percent profit = $\frac{117x - 100x}{100x} \times 100 = 17\%$. Hence, option D (none of these).
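Solution 9's reducing-balance arithmetic can be double-checked with a short script. This is a sketch for illustration only; the helper function and its name are ours, not part of the original solution.

```python
def outstanding_after_years(principal, rate_pct, repayment, years):
    """Loan balance at the start of year `years + 1`, with interest
    charged on the outstanding balance and a fixed year-end repayment."""
    balance = principal
    for _ in range(years):
        balance = balance + balance * rate_pct / 100 - repayment
    return balance

print(outstanding_after_years(6000, 5, 1200, 1))   # 5100.0
print(outstanding_after_years(6000, 5, 1200, 2))   # 4155.0
```

The second call reproduces the answer to Question 9: Rs. 4155, option C.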
https://codereview.stackexchange.com/questions/2291/checking-user-mouse-movement
# Checking user mouse movement

I am trying to update my server to let it know that the user had some activity in the past 15 minutes. In order to do that, I made this implementation:

```javascript
var page = {
    updated: false,
    change_state: function(e) {
        var self = this;
        if (!this.updated) {
            debugConsole.log('Moved mouse');
            setTimeout(function() {
                self.updated = false;
            },
            // 10000);  // 10 seconds
            120000);    // 2 minutes
            this.updated = true;
            $.ajax({
                url: 'comet/update.php',
                success: function(data) {
                    debugConsole.log('Moved mouse at: ' + data);
                }
            });
        }
    },
    mouse_click: function(e) {
        this.updated = false;
    }
};

$("*").live('mousemove', function(e) {
    e.stopPropagation();
    page.change_state(e);
});

$("*").live('click', function(e) {
    page.mouse_click(e);
});
```

I made it so that it only takes effect after 2 minutes from the last action, unless the last action was a click. Is this the best way of doing it?

## 3 Answers

I wouldn't recommend using `$("*").live`; you only need the events on the document body, so long as they are allowed to properly bubble.

- How do I make them properly bubble? This page is live, so the DOM is changing all of the time. – Neal May 6 '11 at 18:40
- @Neal — Events bubble unless they are specifically prevented from doing so. You should be able to use `$(document.body).bind` instead of `$("*").live` (and you will see much better performance if you do). – Ben Blank May 6 '11 at 18:55
- @BenBlank, how do I use document.bind? – Neal May 6 '11 at 18:58
I wouldn't alert the server every time the mouse moves. Instead I'd just make a note of the time. For letting the server know, I'd set up an interval that tells the server if the user has been active recently.

```javascript
var page = {
    lastUpdated: (new Date).getTime(),
    mouse_move: function(e) {
        this.lastUpdated = (new Date).getTime();
    },
    mouse_click: function(e) {
        this.lastUpdated = (new Date).getTime();
    },
    checkForLife: function() {
        if (page.lastUpdated + 15 * 60 * 1000 > (new Date).getTime()) {
            $.post('comet/update.php', {status: 'alive'});
        } else {
            $.post('comet/update.php', {status: 'dead'});
        }
    }
};

$("body").mousemove(page.mouse_move);
$("body").click(page.mouse_click);

setInterval(page.checkForLife, 2 * 60 * 1000);
```

- How is this different than what I have? – Neal May 8 '11 at 5:11
- 1: The event bindings are only on the body. 2: The server interaction is handled in a separate setInterval, not an event. – generalhenry May 8 '11 at 7:34
- Yes, but it happens every x seconds as opposed to when they do the action; why would I want that? – Neal May 8 '11 at 18:58
- 1: The posting loop also lets the server know the user is idle. 2: The posting loop is evenly rate-limited to balance server load. – generalhenry May 8 '11 at 19:11
- Yes, but I don't want to send a message if no action was taken. This app may be for many users, and I don't want a high server hit. – Neal May 8 '11 at 23:57

The `.live()` method has been deprecated since jQuery 1.7; we use the `.on()` method now:

```javascript
$("*").on('mousemove', function(e) {
    e.stopPropagation();
    page.change_state(e);
});

$("*").on('click', function(e) {
    page.mouse_click(e);
});
```

Also, I wouldn't recommend attaching an event handler to that global element.
https://indico.cern.ch/event/466934/contributions/2595778/
# EPS-HEP 2017

5-12 July 2017, Venice, Italy

## Unidentified and identified hadron production in Pb-Pb collisions at the LHC with ALICE

6 Jul 2017, 17:00 (15m), Room Mangano (Palazzo del Casinò)
Parallel Talk, Heavy Ion Physics

### Speaker

Jacek Tomasz Otwinowski (Polish Academy of Sciences (PL))

### Description

In this talk, the centrality dependence of the $p_{\rm T}$ spectra of unidentified charged hadrons, as well as of charged pions, kaons, (anti)protons and resonances, in Pb-Pb collisions at the unprecedented energy of $\sqrt{s_{\rm{NN}}} = 5.02$ TeV is presented. The $p_{\rm T}$-integrated particle yields are compared to predictions from thermal-statistical models, and the evolution of the proton-to-pion, kaon-to-pion and resonance-to-non-resonance particle ratios as a function of collision energy and centrality is discussed. Hydrodynamic and recombination models are tested against the measured spectral shapes at low and intermediate transverse momenta. The measurement of a comprehensive set of resonances with lifetimes in a wide range of 1-46 fm/$c$ is suitable for a systematic study of the role of re-scattering and regeneration in the hadronic phase. The study of the energy dependence of the resonance-to-non-resonance particle ratio addresses the question whether the picture of the dominance of re-scattering effects over regeneration still holds at the higher energy, where the density and the volume of the system are expected to be larger. Finally, the nuclear modification factors for the different particle species, which are found to be identical within the respective systematic uncertainties for transverse momenta above 8 GeV/$c$, will be shown.

Experimental Collaboration: ALICE
http://docs.rigetti.com/en/v2.4.0/intro.html
# Introduction to Quantum Computing

With every breakthrough in science there is the potential for new technology. For over twenty years, researchers have done inspiring work in quantum mechanics, transforming it from a theory for understanding nature into a fundamentally new way to engineer computing technology. This field, quantum computing, is beautifully interdisciplinary, and impactful in two major ways:

1. It reorients the relationship between physics and computer science. Physics does not just place restrictions on what computers we can design, it also grants new power and inspiration.
2. It can simulate nature at its most fundamental level, allowing us to solve deep problems in quantum chemistry, materials discovery, and more.

Quantum computing has come a long way, and in the next few years there will be significant breakthroughs in the field. To get here, however, we have needed to change our intuition for computation in many ways. As with other paradigms — such as object-oriented programming, functional programming, distributed programming, or any of the other marvelous ways of thinking that have been expressed in code over the years — even the basic tenets of quantum computing open up vast new potential for computation.

However, unlike other paradigms, quantum computing goes further. It requires an extension of classical probability theory. This extension, and the core of quantum computing, can be formulated in terms of linear algebra. Therefore, we begin our investigation into quantum computing with linear algebra and probability.

## From Bit to Qubit

### Probabilistic Bits as Vector Spaces

From an operational perspective, a bit is described by the results of measurements performed on it. Let the possible results of measuring a bit (0 or 1) be represented by orthonormal basis vectors $$\vec{0}$$ and $$\vec{1}$$. We will call these vectors outcomes. These outcomes span a two-dimensional vector space that represents a probabilistic bit.
A probabilistic bit can be represented as a vector

$\vec{v} = a\,\vec{0} + b\,\vec{1},$

where $$a$$ represents the probability of the bit being 0 and $$b$$ represents the probability of the bit being 1. This clearly also requires that $$a+b=1$$. In this picture the system (the probabilistic bit) is a two-dimensional real vector space and a state of a system is a particular vector in that vector space.

```python
import numpy as np
import matplotlib.pyplot as plt

outcome_0 = np.array([1.0, 0.0])
outcome_1 = np.array([0.0, 1.0])
a = 0.75
b = 0.25

prob_bit = a * outcome_0 + b * outcome_1

X, Y = prob_bit
plt.figure()
ax = plt.gca()
ax.quiver(X, Y, angles='xy', scale_units='xy', scale=1)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
plt.draw()
plt.show()
```

Given some state vector, like the one plotted above, we can find the probabilities associated with each outcome by projecting the vector onto the basis outcomes. This gives us the following rule:

$\begin{split}\operatorname{Pr}(0) = \vec{v}^T\cdot\vec{0} = a \\ \operatorname{Pr}(1) = \vec{v}^T\cdot\vec{1} = b,\end{split}$

where Pr(0) and Pr(1) are the probabilities of the 0 and 1 outcomes respectively.

### Dirac Notation

Physicists have introduced a convenient notation for the vector transposes and dot products we used in the previous example. This notation, called Dirac notation in honor of the great theoretical physicist Paul Dirac, allows us to define

$\begin{split}\vec{v} = \vert v\rangle \\ \vec{v}^T = \langle v \vert \\ \vec{u}^T \cdot \vec{v} = \langle u \vert v \rangle\end{split}$

Thus, we can rewrite our "measurement rule" in this notation as

$\begin{split}\operatorname{Pr}(0) = \langle v \vert 0 \rangle = a \\ \operatorname{Pr}(1) = \langle v \vert 1 \rangle = b\end{split}$

We will use this notation throughout the rest of this introduction.

### Multiple Probabilistic Bits

This vector space interpretation of a single probabilistic bit can be straightforwardly extended to multiple bits.
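Before doing so, the projection rule from earlier in this section can be checked numerically with a couple of dot products. This is a NumPy sketch reusing the vectors from the plotting example above:

```python
import numpy as np

# Basis outcomes and the probabilistic-bit state from the example above.
outcome_0 = np.array([1.0, 0.0])
outcome_1 = np.array([0.0, 1.0])
v = 0.75 * outcome_0 + 0.25 * outcome_1

# Projecting the state onto each outcome recovers the probabilities a and b.
pr_0 = v @ outcome_0
pr_1 = v @ outcome_1
print(pr_0, pr_1)   # 0.75 0.25, and they sum to 1 as required
```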
Let us take two coins as an example (labelled 0 and 1 instead of H and T, since we are programmers). Their states can be represented as

$\begin{split}|\,u\rangle = \frac{1}{2}|\,0_u\rangle + \frac{1}{2}|\,1_u\rangle \\ |\,v\rangle = \frac{1}{2}|\,0_v\rangle + \frac{1}{2}|\,1_v\rangle,\end{split}$

where $$1_u$$ represents the outcome 1 on coin $$u$$. The combined system of the two coins has four possible outcomes $$\{ 0_u0_v,\;0_u1_v,\;1_u0_v,\;1_u1_v \}$$ that are the basis states of a larger four-dimensional vector space. The rule for constructing a combined state is to take the tensor product of individual states, e.g.

$|\,u\rangle\otimes|\,v\rangle = \frac{1}{4}|\,0_u0_v\rangle+\frac{1}{4}|\,0_u1_v\rangle+\frac{1}{4}|\,1_u0_v\rangle+\frac{1}{4}|\,1_u1_v\rangle.$

Then, the combined space is simply the space spanned by the tensor products of all pairs of basis vectors of the two smaller spaces. Similarly, the combined state for $$n$$ such probabilistic bits is a vector of size $$2^n$$ and is given by $$\bigotimes_{i=0}^{n-1}|\,v_i\rangle$$.

We will talk more about these larger spaces in the quantum case, but it is important to note that not all composite states can be written as tensor products of sub-states (e.g. consider the state $$\frac{1}{2}|\,0_u0_v\rangle + \frac{1}{2}|\,1_u1_v\rangle$$). The most general composite state of $$n$$ probabilistic bits can be written as $$\sum_{j=0}^{2^n - 1} a_{j} \bigotimes_{i=0}^{n-1}|\,b_{ij}\rangle$$, where each $$b_{ij} \in \{0, 1\}$$ and $$a_j \in \mathbb{R}$$, i.e. as a linear combination (with real coefficients) of tensor products of basis states. Note that this still gives us $$2^n$$ possible states.

### Qubits

Quantum mechanics rewrites these rules to some extent. A quantum bit, called a qubit, is the quantum analog of a bit in that it has two outcomes when it is measured. Similar to the previous section, a qubit can also be represented in a vector space, but with complex coefficients instead of real ones.
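Before working out the complex case, the two-coin tensor product from the previous subsection can be sketched with NumPy, whose `np.kron` implements exactly this construction:

```python
import numpy as np

# Each fair coin is a two-outcome probabilistic bit.
u = np.array([0.5, 0.5])
v = np.array([0.5, 0.5])

# The combined state lives in the 4-dimensional space spanned by
# the basis outcomes 0u0v, 0u1v, 1u0v, 1u1v.
combined = np.kron(u, v)
print(combined)           # [0.25 0.25 0.25 0.25]

# A correlated state like (1/2)|0u0v> + (1/2)|1u1v> exists in the combined
# space but is not the tensor product of any two single-coin states.
correlated = np.array([0.5, 0.0, 0.0, 0.5])
print(correlated.sum())   # still a valid probability distribution
```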
A qubit system is a two-dimensional complex vector space, and the state of a qubit is a complex vector in that space. Again we will define a basis of outcomes $$\{|\,0\rangle, |\,1\rangle\}$$ and let a generic qubit state be written as

$\alpha |\,0\rangle + \beta |\,1\rangle.$

Since these coefficients can be imaginary, they cannot be simply interpreted as probabilities of their associated outcomes. Instead we rewrite the rule for outcomes in the following manner:

$\begin{split}\operatorname{Pr}(0) = |\langle v\,|\,0 \rangle|^2 = |\alpha|^2 \\ \operatorname{Pr}(1) = |\langle v\,|\,1 \rangle|^2 = |\beta|^2,\end{split}$

and as long as $$|\alpha|^2 + |\beta|^2 = 1$$ we are able to recover acceptable probabilities for outcomes based on our new complex vector.

This switch to complex vectors means that rather than representing a state vector in a plane, we instead represent the vector on a sphere (called the Bloch sphere in quantum mechanics literature). From this perspective, the quantum state corresponding to an outcome of 0 is represented by the unit vector pointing along the positive vertical axis.

[Figure: Bloch sphere with the state vector at its north pole]

Notice that the two axes in the horizontal plane have been labeled $$x$$ and $$y$$, implying that $$z$$ is the vertical axis (not labeled). Physicists use the convention that a qubit's $$\{|\,0\rangle, |\,1\rangle\}$$ states are the positive and negative unit vectors along the z axis, respectively. These axes will be useful later in this document.

Multiple qubits are represented in precisely the same way, by taking linear combinations (with complex coefficients, now) of tensor products of basis states. Thus $$n$$ qubits have $$2^n$$ possible states.

### An Important Distinction

An important distinction between the probabilistic case described above and the quantum case is that probabilistic states may just mask out ignorance. For example, a coin is physically only 0 or 1 and the probabilistic view merely represents our ignorance about which it actually is. This is not the case in quantum mechanics.
Assuming events occurring at a distance from one another cannot instantaneously influence each other, the quantum states — as far as we know — cannot mask any underlying state. This is what people mean when they say that there is no local hidden variable theory for quantum mechanics. These probabilistic quantum states are as real as it gets: they don't just describe our knowledge of the quantum system, they describe the physical reality of the system.

### Some Code

Let us take a look at some code in pyQuil to see how these quantum states play out. We will dive deeper into quantum operations and pyQuil in the following sections. Note that in order to run these examples you will need to install pyQuil and download the QVM and Compiler. Each of the code snippets below will be immediately followed by its output.

```python
# Imports for pyQuil (ignore for now)
import numpy as np
from pyquil.quil import Program
from pyquil.api import WavefunctionSimulator

# create a WavefunctionSimulator object
wavefunction_simulator = WavefunctionSimulator()

# pyQuil is based around operations (or gates) so we will start with the most
# basic one: the identity operation, called I. I takes one argument, the index
# of the qubit that it should be applied to.
from pyquil.gates import I

# Make a quantum program that allocates one qubit (qubit #0) and does nothing to it
p = Program(I(0))

# Quantum states are called wavefunctions for historical reasons.
# We can run this basic program on our connection to the simulator.
# This call will return the state of our qubits after we run program p.
wavefunction = wavefunction_simulator.wavefunction(p)

# wavefunction is a Wavefunction object that stores a quantum state as a list of amplitudes
alpha, beta = wavefunction

print("Our qubit is in the state alpha={} and beta={}".format(alpha, beta))
print("The probability of measuring the qubit in outcome 0 is {}".format(abs(alpha)**2))
print("The probability of measuring the qubit in outcome 1 is {}".format(abs(beta)**2))
```

```
Our qubit is in the state alpha=(1+0j) and beta=0j
The probability of measuring the qubit in outcome 0 is 1.0
The probability of measuring the qubit in outcome 1 is 0.0
```

Applying an operation to our qubit affects the probability of each outcome.

```python
# We can import the qubit "flip" operation, called X, and see what it does.
from pyquil.gates import X

p = Program(X(0))
wavefunc = wavefunction_simulator.wavefunction(p)
alpha, beta = wavefunc

print("Our qubit is in the state alpha={} and beta={}".format(alpha, beta))
print("The probability of measuring the qubit in outcome 0 is {}".format(abs(alpha)**2))
print("The probability of measuring the qubit in outcome 1 is {}".format(abs(beta)**2))
```

```
Our qubit is in the state alpha=0j and beta=(1+0j)
The probability of measuring the qubit in outcome 0 is 0.0
The probability of measuring the qubit in outcome 1 is 1.0
```

In this case we have flipped the probability of outcome 0 into the probability of outcome 1 for our qubit. We can also investigate what happens to the state of multiple qubits. We'd expect the state of multiple qubits to grow exponentially in size, as their vectors are tensored together.

```python
# Multiple qubits also produce the expected scaling of the state.
p = Program(I(0), I(1))
wavefunction = wavefunction_simulator.wavefunction(p)
print("The quantum state is of dimension:", len(wavefunction.amplitudes))

p = Program(I(0), I(1), I(2), I(3))
wavefunction = wavefunction_simulator.wavefunction(p)
print("The quantum state is of dimension:", len(wavefunction.amplitudes))

p = Program()
for x in range(10):
    p += I(x)
wavefunction = wavefunction_simulator.wavefunction(p)
print("The quantum state is of dimension:", len(wavefunction.amplitudes))
```

```
The quantum state is of dimension: 4
The quantum state is of dimension: 16
The quantum state is of dimension: 1024
```

Let's look at the actual value for the state of two qubits combined. The resulting dictionary of this method contains outcomes as keys and the probabilities of those outcomes as values.

```python
# wavefunction(Program) returns a coefficient array that corresponds to outcomes in the following order
wavefunction = wavefunction_simulator.wavefunction(Program(I(0), I(1)))
print(wavefunction.get_outcome_probs())
```

```
{'00': 1.0, '01': 0.0, '10': 0.0, '11': 0.0}
```

## Qubit Operations

In the previous section we introduced our first two operations: the I (or Identity) operation and the X (or NOT) operation. In this section we will get into some more details on what these operations are.

Quantum states are complex vectors on the Bloch sphere, and quantum operations are matrices with two properties:

1. They are reversible.
2. When applied to a state vector on the Bloch sphere, the resulting vector is also on the Bloch sphere.

Matrices that satisfy these two properties are called unitary matrices. Such matrices have the characteristic property that their complex conjugate transpose is equal to their inverse, a property directly linked to the requirement that the probabilities of measuring qubits in any of the allowed states must sum to 1. Applying an operation to a quantum state is the same as multiplying a vector by one of these matrices. Such an operation is called a gate.
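The unitarity property is easy to check numerically. Here is a sketch in plain NumPy (independent of pyQuil), using the bit-flip X gate from earlier as the example matrix:

```python
import numpy as np

# The X (NOT) gate as a matrix.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Unitary: the conjugate transpose is the inverse, so X†X is the identity.
print(np.allclose(X.conj().T @ X, np.eye(2)))   # True

# Applying the gate is matrix-vector multiplication: X|0> = |1>.
ket0 = np.array([1, 0], dtype=complex)
print(X @ ket0)
```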
Since individual qubits are two-dimensional vectors, operations on individual qubits are 2x2 matrices. The identity matrix leaves the state vector unchanged:

$\begin{split}I = \left(\begin{matrix} 1 & 0\\ 0 & 1 \end{matrix}\right)\end{split}$

so the program that applies this operation to the zero state is just

$\begin{split}I\,|\,0\rangle = \left(\begin{matrix} 1 & 0\\ 0 & 1 \end{matrix}\right)\left(\begin{matrix} 1 \\ 0 \end{matrix}\right) = \left(\begin{matrix} 1 \\ 0 \end{matrix}\right) = |\,0\rangle\end{split}$

```python
p = Program(I(0))
print(wavefunction_simulator.wavefunction(p))
```

```
(1+0j)|0>
```

### Pauli Operators

Let's revisit the X gate introduced above. It is one of three important single-qubit gates, called the Pauli operators:

$\begin{split}X = \left(\begin{matrix} 0 & 1\\ 1 & 0 \end{matrix}\right) \qquad Y = \left(\begin{matrix} 0 & -i\\ i & 0 \end{matrix}\right) \qquad Z = \left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right)\end{split}$

```python
from pyquil.gates import X, Y, Z

p = Program(X(0))
wavefunction = wavefunction_simulator.wavefunction(p)
print("X|0> = ", wavefunction)
print("The outcome probabilities are", wavefunction.get_outcome_probs())
print("This looks like a bit flip.\n")

p = Program(Y(0))
wavefunction = wavefunction_simulator.wavefunction(p)
print("Y|0> = ", wavefunction)
print("The outcome probabilities are", wavefunction.get_outcome_probs())
print("This also looks like a bit flip.\n")

p = Program(Z(0))
wavefunction = wavefunction_simulator.wavefunction(p)
print("Z|0> = ", wavefunction)
print("The outcome probabilities are", wavefunction.get_outcome_probs())
print("This state looks unchanged.")
```

```
X|0> =  (1+0j)|1>
The outcome probabilities are {'0': 0.0, '1': 1.0}
This looks like a bit flip.

Y|0> =  1j|1>
The outcome probabilities are {'0': 0.0, '1': 1.0}
This also looks like a bit flip.

Z|0> =  (1+0j)|0>
The outcome probabilities are {'0': 1.0, '1': 0.0}
This state looks unchanged.
```
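These pyQuil outputs can be reproduced with plain matrix arithmetic. A NumPy sketch, independent of the simulator:

```python
import numpy as np

# The three Pauli matrices from above.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)

print(X @ ket0)   # amplitude 1 on |1>, i.e. (1+0j)|1>
print(Y @ ket0)   # amplitude 1j on |1>, i.e. 1j|1>
print(Z @ ket0)   # |0> unchanged

# Each Pauli is its own inverse: a 180-degree rotation applied twice.
for P in (X, Y, Z):
    print(np.allclose(P @ P, np.eye(2)))   # True for all three
```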
The Pauli matrices have a visual interpretation: they perform 180-degree rotations of qubit state vectors on the Bloch sphere. They operate about their respective axes as shown in the Bloch sphere depicted above. For example, the X gate performs a 180-degree rotation about the $$x$$ axis. This explains the results of our code above: for a state vector initially in the +$$z$$ direction, both X and Y gates will rotate it to -$$z$$, and the Z gate will leave it unchanged.

However, notice that while the X and Y gates produce the same outcome probabilities, they actually produce different states. These states are not distinguished if they are measured immediately, but they produce different results in larger programs.

Quantum programs are built by applying successive gate operations:

```python
# Composing qubit operations is the same as multiplying matrices sequentially
p = Program(X(0), Y(0), Z(0))
wavefunction = wavefunction_simulator.wavefunction(p)

print("ZYX|0> = ", wavefunction)
print("With outcome probabilities\n", wavefunction.get_outcome_probs())
```

```
ZYX|0> =  [ 0.-1.j  0.+0.j]
With outcome probabilities
 {'0': 1.0, '1': 0.0}
```

### Multi-Qubit Operations

Operations can also be applied to composite states of multiple qubits. One common example is the controlled-NOT or CNOT gate that works on two qubits. Its matrix form is:

$\begin{split}CNOT = \left(\begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{matrix}\right)\end{split}$

Let's take a look at how we could use a CNOT gate in pyQuil.
```python
from pyquil.gates import CNOT

p = Program(CNOT(0, 1))
wavefunction = wavefunction_simulator.wavefunction(p)
print("CNOT|00> = ", wavefunction)
print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n")

p = Program(X(0), CNOT(0, 1))
wavefunction = wavefunction_simulator.wavefunction(p)
print("CNOT|01> = ", wavefunction)
print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n")

p = Program(X(1), CNOT(0, 1))
wavefunction = wavefunction_simulator.wavefunction(p)
print("CNOT|10> = ", wavefunction)
print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n")

p = Program(X(0), X(1), CNOT(0, 1))
wavefunction = wavefunction_simulator.wavefunction(p)
print("CNOT|11> = ", wavefunction)
print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n")
```

```
CNOT|00> =  (1+0j)|00>
With outcome probabilities
 {'00': 1.0, '01': 0.0, '10': 0.0, '11': 0.0}

CNOT|01> =  (1+0j)|11>
With outcome probabilities
 {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0}

CNOT|10> =  (1+0j)|10>
With outcome probabilities
 {'00': 0.0, '01': 0.0, '10': 1.0, '11': 0.0}

CNOT|11> =  (1+0j)|01>
With outcome probabilities
 {'00': 0.0, '01': 1.0, '10': 0.0, '11': 0.0}
```

The CNOT gate does what its name implies: the state of the second qubit is flipped (negated) if and only if the state of the first qubit is 1 (true).
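The same truth table can be read off directly from the CNOT matrix. A NumPy sketch, with the basis ordered |00>, |01>, |10>, |11> and the control as the first bit, matching the matrix shown above (pyQuil's printed kets place qubit 0 on the right, so its labels read in the opposite order):

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# |10>: control set, target clear. CNOT flips the target, giving |11>.
ket_10 = np.zeros(4, dtype=complex)
ket_10[2] = 1
print(CNOT @ ket_10)   # all the amplitude moves to the |11> component

# CNOT is its own inverse, as required of a reversible gate.
print(np.allclose(CNOT @ CNOT, np.eye(4)))   # True
```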
Another two-qubit gate example is the SWAP gate, which swaps the $$|01\rangle$$ and $$|10\rangle$$ states:

$\begin{split}SWAP = \left(\begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{matrix}\right)\end{split}$

```python
from pyquil.gates import SWAP

p = Program(X(0), SWAP(0, 1))
wavefunction = wavefunction_simulator.wavefunction(p)

print("SWAP|01> = ", wavefunction)
print("With outcome probabilities\n", wavefunction.get_outcome_probs())
```

```
SWAP|01> =  (1+0j)|10>
With outcome probabilities
 {'00': 0.0, '01': 0.0, '10': 1.0, '11': 0.0}
```

In summary, quantum computing operations are composed of a series of complex matrices applied to complex vectors. These matrices must be unitary (meaning that their complex conjugate transpose is equal to their inverse) because the overall probability of all outcomes must always sum to one.

## The Quantum Abstract Machine

We now have enough background to introduce the programming model that underlies Quil. This is a hybrid quantum-classical model in which $$N$$ qubits interact with $$M$$ classical bits.

These qubits and classical bits come with a defined gate set, e.g. which gate operations can be applied to which qubits. Different kinds of quantum computing hardware place different limitations on what gates can be applied, and the fixed gate set represents these limitations. Full details on the Quantum Abstract Machine and Quil can be found in the Quil whitepaper. The next section on measurements will describe the interaction between the classical and quantum parts of a Quantum Abstract Machine (QAM).

### Qubit Measurements

Measurements have two effects:

1. They project the state vector onto one of the basic outcomes
2. (optional) They store the outcome of the measurement in a classical bit.
Here's a simple example:

```python
# Create a program that stores the outcome of measuring qubit #0 into classical register [0]
p = Program()
classical_register = p.declare('ro', 'BIT', 1)
p += Program(I(0)).measure(0, classical_register[0])
```

Up until this point we have used the quantum simulator to cheat a little bit — we have actually looked at the wavefunction that comes back. However, on real quantum hardware, we are unable to directly look at the wavefunction. Instead we only have access to the classical bits that are affected by measurements. This functionality is emulated by `QuantumComputer.run()`. Note that the `run` command is to be applied on the compiled version of the program.

```python
from pyquil import get_qc

qc = get_qc('9q-square-qvm')
print(qc.run(qc.compile(p)))
```

```
[[0]]
```

We see that the classical register reports a value of zero. However, if we had flipped the qubit before measurement then we obtain:

```python
p = Program()
classical_register = p.declare('ro', 'BIT', 1)
p += Program(X(0))                   # Flip the qubit
p.measure(0, classical_register[0])  # Measure the qubit
print(qc.run(qc.compile(p)))
```

```
[[1]]
```

These measurements are deterministic, e.g. if we make them multiple times then we always get the same outcome:

```python
p = Program()
classical_register = p.declare('ro', 'BIT', 1)
p += Program(X(0))                   # Flip the qubit
p.measure(0, classical_register[0])  # Measure the qubit

trials = 10
p.wrap_in_numshots_loop(shots=trials)
print(qc.run(qc.compile(p)))
```

```
[[1], [1], [1], [1], [1], [1], [1], [1], [1], [1]]
```

### Classical/Quantum Interaction

However this is not the case in general — measurements can affect the quantum state as well. In fact, measurements act like projections onto the outcome basis states. To show how this works, we first introduce a new single-qubit gate, the Hadamard gate.
The matrix form of the Hadamard gate is:

$$H = \frac{1}{\sqrt{2}}\left(\begin{matrix} 1 & 1 \\ 1 & -1 \end{matrix}\right)$$

The following pyQuil code shows how we can use the Hadamard gate:

```python
from pyquil.gates import H

# The Hadamard produces what is called a superposition state
coin_program = Program(H(0))
wavefunction = wavefunction_simulator.wavefunction(coin_program)
print("H|0> = ", wavefunction)
print("With outcome probabilities\n", wavefunction.get_outcome_probs())
```

```
H|0> =  (0.7071067812+0j)|0> + (0.7071067812+0j)|1>
With outcome probabilities
 {'0': 0.49999999999999989, '1': 0.49999999999999989}
```

A qubit in this state will be measured half of the time in the $|0\rangle$ state, and half of the time in the $|1\rangle$ state. In a sense, this qubit truly is a random variable representing a coin. In fact, there are many wavefunctions that will give this same operational outcome. There is a continuous family of states of the form

$$\frac{1}{\sqrt{2}}\left(|\,0\rangle + e^{i\theta}|\,1\rangle\right)$$

that represent the outcomes of an unbiased coin. Being able to work with all of these different new states is part of what gives quantum computing extra power over regular bits.
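A quick numerical check (a sketch, not part of the original guide) confirms both claims: H|0> has 50/50 outcome probabilities, and so does every member of the phase family above, whatever the value of theta:

```python
import numpy as np

# H|0> is the equal superposition (|0> + |1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
state = H @ ket0
assert np.allclose(np.abs(state) ** 2, [0.5, 0.5])

# Every state (|0> + e^{i*theta}|1>)/sqrt(2) gives the same 50/50
# outcome probabilities; the relative phase theta is invisible to
# a single measurement in this basis.
for theta in np.linspace(0, 2 * np.pi, 9):
    psi = np.array([1, np.exp(1j * theta)]) / np.sqrt(2)
    assert np.allclose(np.abs(psi) ** 2, [0.5, 0.5])
```

The phase only becomes observable when such states are combined with further gates, which is where the extra power mentioned above comes from.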
```python
p = Program()
ro = p.declare('ro', 'BIT', 1)
p += Program(H(0)).measure(0, ro[0])

# Measure qubit #0 a number of times
p.wrap_in_numshots_loop(shots=10)

# We see probabilistic results of about half 1's and half 0's
print(qc.run(qc.compile(p)))
```

```
[[0], [1], [1], [0], [1], [0], [0], [1], [0], [0]]
```

pyQuil allows us to look at the wavefunction after a measurement as well:

```python
coin_program = Program(H(0))
print("Before measurement: H|0> = ", wavefunction_simulator.wavefunction(coin_program), "\n")

ro = coin_program.declare('ro', 'BIT', 1)
coin_program.measure(0, ro[0])
for _ in range(5):
    print("After measurement: ", wavefunction_simulator.wavefunction(coin_program))
```

```
Before measurement: H|0> =  (0.7071067812+0j)|0> + (0.7071067812+0j)|1>

After measurement:  (1+0j)|1>
After measurement:  (1+0j)|1>
After measurement:  (1+0j)|1>
After measurement:  (1+0j)|1>
After measurement:  (1+0j)|1>
```

We can clearly see that measurement has an effect on the quantum state independent of what is stored classically. We begin in a state that has a 50-50 probability of being $|0\rangle$ or $|1\rangle$. After measurement, the state changes into being entirely in $|0\rangle$ or entirely in $|1\rangle$ according to which outcome was obtained. This is the phenomenon referred to as the collapse of the wavefunction. Mathematically, the wavefunction is being projected onto the vector of the obtained outcome and subsequently rescaled to unit norm.
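The projection-and-rescaling step can be written out explicitly. This is a sketch, not from the original text, using the projector onto the |1> outcome:

```python
import numpy as np

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # H|0>

# Suppose the measurement outcome was 1: project onto |1> ...
P1 = np.array([[0, 0], [0, 1]], dtype=complex)
projected = P1 @ psi

# ... and rescale to unit norm. The probability of this outcome is
# the squared norm of the projected vector (0.5 here).
prob = np.linalg.norm(projected) ** 2
collapsed = projected / np.linalg.norm(projected)

assert np.isclose(prob, 0.5)
assert np.allclose(collapsed, [0, 1])  # now entirely |1>
```

Repeating the projection changes nothing (`P1 @ collapsed` is already `collapsed`), which is why the five "After measurement" lines above all print the same state.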
```python
# This happens with bigger systems too, as can be seen with this program,
# which prepares something called a Bell state (a special kind of "entangled state")
bell_program = Program(H(0), CNOT(0, 1))
wavefunction = wavefunction_simulator.wavefunction(bell_program)
print("Before measurement: Bell state = ", wavefunction, "\n")

classical_regs = bell_program.declare('ro', 'BIT', 2)
bell_program.measure(0, classical_regs[0]).measure(1, classical_regs[1])

for _ in range(5):
    wavefunction = wavefunction_simulator.wavefunction(bell_program)
    print("After measurement: ", wavefunction.get_outcome_probs())
```

```
Before measurement: Bell state =  (0.7071067812+0j)|00> + (0.7071067812+0j)|11>

After measurement:  {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0}
After measurement:  {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0}
After measurement:  {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0}
After measurement:  {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0}
After measurement:  {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0}
```

The above program prepares entanglement because, even though there are random outcomes, after every measurement both qubits are in the same state. They are either both $|0\rangle$ or both $|1\rangle$. This special kind of correlation is part of what makes quantum mechanics so unique and powerful.

### Classical Control

There are also ways of introducing classical control of quantum programs. For example, we can use the state of classical bits to determine what quantum operations to run.

```python
true_branch = Program(X(7))   # if branch
false_branch = Program(I(7))  # else branch

# Branch on ro[1]
p = Program()
ro = p.declare('ro', 'BIT', 8)
p += Program(X(0)).measure(0, ro[1]).if_then(ro[1], true_branch, false_branch)

# Measure qubit #7 into ro[7]
p.measure(7, ro[7])

# Run and check register [7]
print(qc.run(qc.compile(p)))
```

```
[[1 1]]
```

The second [1] here means that qubit 7 was indeed flipped.
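The branching logic can be mirrored in plain Python. This is a sketch of the semantics only (NumPy, not pyQuil): apply X when the controlling classical bit is 1, and the identity otherwise.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)  # NOT gate
I = np.eye(2, dtype=complex)

def if_then(classical_bit, state):
    """Apply X when the classical bit is 1, I otherwise, like if_then above."""
    return (X if classical_bit else I) @ state

ket0 = np.array([1, 0], dtype=complex)

# ro[1] was set to 1 by measuring the flipped qubit #0, so qubit #7 gets X:
assert np.allclose(if_then(1, ket0), [0, 1])  # flipped to |1>
assert np.allclose(if_then(0, ket0), [1, 0])  # left in |0>
```

The real `if_then` compiles down to conditional jumps in Quil, but the effect on the target qubit is exactly this choice between two unitaries.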
### Example: The Probabilistic Halting Problem

A fun example is to create a program that has an exponentially increasing chance of halting, but that may run forever!

```python
p = Program()
ro = p.declare('ro', 'BIT', 1)

inside_loop = Program(H(0)).measure(0, ro[0])
p.inst(X(0)).while_do(ro[0], inside_loop)

qc = get_qc('9q-square-qvm')
print(qc.run(qc.compile(p)))
```

```
[[0]]
```

## Next Steps

We hope that you have enjoyed your whirlwind tour of quantum computing. You are now ready to check out the Installation and Getting Started guide!

If you would like to learn more, Nielsen and Chuang's *Quantum Computation and Quantum Information* is a particularly excellent resource for newcomers to the field. If you're interested in learning about the software behind quantum computing, take a look at our blog posts on The Quantum Software Challenge.
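The loop above halts after a geometrically distributed number of fair coin flips, so the expected run length is just two iterations even though any single run could in principle go on forever. A classical simulation sketch (not from the original guide):

```python
import numpy as np

rng = np.random.default_rng(7)

def iterations_until_halt():
    """Flip a fair coin (H then measure) until it comes up 0."""
    count = 1
    while rng.integers(2) == 1:  # ro[0] == 1 keeps looping
        count += 1
    return count

runs = [iterations_until_halt() for _ in range(10_000)]

# P(halting within n iterations) = 1 - 2**-n, so the expected
# number of iterations is 2.
assert 1.8 < sum(runs) / len(runs) < 2.2
```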
https://questioncove.com/updates/52f027c9e4b065d88bb1a5d2
OpenStudy (anonymous): find all solutions for 2sinx-1=0 between [0,2pi] 4 years ago OpenStudy (mathmale): Hello, Ken/Kena, There are several ways in which you could attempt to solve this equation for x. One would be to graph y=2sin x - 1 on the interval [0, 2pi]; another would be to solve that equation for sin x and then solve the resulting equation for x. Are you familiar with the inverse sine function? 4 years ago OpenStudy (anonymous): yes 4 years ago OpenStudy (mathmale): Well, then, if y = 2 sin x - 1, we could re-write this as y + 1 = 2 sin x, then divide both sides by 2, and finally, take the inverse sine of both sides of the resulting equation. This would isolate x; thus, you will have "solved for x". This is a roundabout way of finding the solutions. What about letting 2 sin x = 1 and solving for sin x, and then for x? One of the most common and most important trig values is "sin x = 1/2 when x = pi/6 radians or 30 degrees." 4 years ago OpenStudy (anonymous):
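Following mathmale's hint (sin x = 1/2 at x = pi/6), a quick numerical check, not part of the original thread, confirms that the two solutions on [0, 2pi] are x = pi/6 and x = 5*pi/6:

```python
import math

# Candidate solutions of 2*sin(x) - 1 = 0 on [0, 2*pi]:
# sin equals 1/2 at pi/6 and at its supplement pi - pi/6 = 5*pi/6.
solutions = [math.pi / 6, 5 * math.pi / 6]

for x in solutions:
    assert abs(2 * math.sin(x) - 1) < 1e-12

# A coarse scan of the interval finds no other roots: every point where
# 2*sin(x) - 1 is nearly zero clusters around one of the two solutions.
roots = [i for i in range(6283)
         if abs(2 * math.sin(i / 1000) - 1) < 1e-3]
assert all(min(abs(i / 1000 - s) for s in solutions) < 0.01 for i in roots)
```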
http://math.stackexchange.com/questions/333822/modular-homework-problem
# Modular homework problem

Show that: $$[6]_{21}X=[15]_{21}$$ I'm stuck on this problem and I have no clue how to solve it at all. - What is making you stuck? Also, are you asked to find an $X$ that solves it? – Tobias Kildetoft Mar 18 '13 at 14:56 Solving it is not the same as showing it. If you were asked to show it, it would mean it is true for all $x$ or a given set of numbers. – Bananarama Mar 18 '13 at 14:57

Well, we know that $\gcd(6,21)=3$, which divides $15$. So there will be solutions: $$6x \equiv 15 \pmod{21}$$ $$2x \equiv 5 \pmod{7},$$ and because $2\times 4\equiv 1 \pmod{7}$, we get: $$x \equiv 4\times 5 \equiv 6 \pmod{7}$$ - I downvoted because I disagree with giving a full solution to a homework question where the OP has not clarified where he is stuck. – Tobias Kildetoft Mar 18 '13 at 15:07 I too downvoted for the same reason as Tobias. – Jayesh Badwaik Mar 18 '13 at 15:08 Well, I'm upvoting to partially balance those two downvotes. Certainly a full solution for a homework question is not advisable, yet the solver is a brand new participant and I feel like we veterans should be more lenient and not rush to downvote. A simple, well-intentioned comment to the poster can equally do the job. – DonAntonio Mar 18 '13 at 19:17 @DonAntonio I see what you mean. However, as there is no consensus on this (as can be seen from the number of upvotes), I would not feel right in writing a comment along the lines of "please don't do this". Herpderp: Please don't take this as discouraging you from answering questions in general, just homework questions where the OP has not had time to clarify the precise problem. – Tobias Kildetoft Mar 19 '13 at 0:48 Just trying to be helpful, but ya sure, got it! – user67258 Mar 19 '13 at 2:13

Hint $\rm\,\ 21\mid 6x\!-\!15 = 6x\!+\!6\!-\!21 \iff 21\mid 6(x\!+\!1)\iff 7\mid 2(x\!+\!1)\iff 7\mid x\!+\!1$ - One way is just to try all the choices for $X$, the integers from $0$ to $20$.
That isn't very many. Another is to look for representatives in the class of $15$, which are numbers of the form $15+21k$, for one that is a multiple of $6$. - You write "look for one...", but there is more than one solution mod $21$. –  Math Gems Mar 18 '13 at 15:11
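As the last answer suggests, brute force over the 21 residue classes settles it directly. A sketch, not from the original thread:

```python
# Solve 6x = 15 (mod 21) by checking every residue class.
solutions = [x for x in range(21) if (6 * x - 15) % 21 == 0]

# gcd(6, 21) = 3 divides 15, so there are exactly 3 solutions mod 21,
# all congruent to 6 mod 7, matching the answers in the thread.
assert solutions == [6, 13, 20]
assert all(x % 7 == 6 for x in solutions)
```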
https://www.physicsforums.com/threads/a-beginners-question-may-be-too-easy.304657/
# A beginner's question /may be too easy

1. Apr 3, 2009

### electrostatic

a beginner's question /may be too easy :)

Hi, I recently started learning electronics from several books and tutorials online. I ran into this circuit on the madlab.org website. I have two things troubling me in this circuit that I haven't figured out:

1. Why does it say that the capacitor (C1) is charged through the resistor (R1) when there is a diode there? How does the capacitor charge? Doesn't current have to flow through it in order for it to begin charging?
2. Why don't the diodes turn on and off both at the same time, and if individually, what decides which one turns on first?

I hope someone can make things clearer for me. Thank you. Here is the original circuit page with an explanation:

Last edited by a moderator: May 4, 2017

2. Apr 3, 2009

### Bob S

Re: a beginner's question /may be too easy :)

1) The diode in the collector circuit is conducting whenever the transistor is ON.
2) Note the two cross-coupled capacitors from one collector to the other transistor's base, plus the base-biasing resistors. Only one diode is conducting at any given time.

3. Apr 3, 2009

### electrostatic

Re: a beginner's question /may be too easy :)

So, the left side of the capacitor (C1) is being charged through the flow coming from the emitter-base of TR2 (0.6 V?)? Being more specific: http://img141.imageshack.us/img141/7609/circuit.jpg [Broken] Will the capacitor in this circuit charge? And if it does, will the diode light while charging?

Last edited by a moderator: May 4, 2017

4. Apr 3, 2009

### Staff: Mentor

Re: a beginner's question /may be too easy :)

If the 47uF cap starts out discharged, and you connect the 9V battery to the circuit, there will be a current briefly (on the order of 2ms). The flash will likely be too short for you to see. Quiz Question -- where did I get the 2ms number from?

Last edited by a moderator: May 4, 2017

5. Apr 3, 2009

### Bob S

Re: a beginner's question /may be too easy :)

No..
One of the two npn transistors is ON and the other is OFF. For the ON transistor, the collector voltage is about 0.2 volts above ground, so there is about 8.8 volts across the 470 ohm resistor plus the LED diode.

Last edited by a moderator: May 4, 2017

6. Apr 4, 2009

### bitrex

Re: a beginner's question /may be too easy :)

When transistor TR2 is on, TR1 is off and capacitor C1 will charge up through the LED and R1 because the LED is forward-biased. The capacitor will charge up to around 7 volts because there is about a 1.2 volt drop across the LED (type dependent) when it's in conduction and about 0.6 volts from transistor TR2's base to emitter when it's turned on. When it's charging up, the LED shouldn't glow, at least not much, because over most of the charging time the current is small (as the capacitor voltage increases exponentially, the current decreases in an exponential curve, and it's current that determines LED light output).

7. Apr 4, 2009

### bitrex

Re: a beginner's question /may be too easy :)

I'm not sure why Bob S above is saying that the capacitor in the second circuit diagram won't charge - if the capacitor starts out uncharged with positive at zero volts and you put an LED and resistor across it, the cap is going to charge until it hits the supply minus the LED drop, just like a power supply capacitor charging through a rectifier.

8. Apr 4, 2009

### Staff: Mentor

Re: a beginner's question /may be too easy :)

I believe Bob was referring to the first circuit, even though he quoted the 2nd circuit.

9. Apr 4, 2009

### Bob S

Re: a beginner's question /may be too easy :)

The multivibrator circuit as pictured in the first post is conditionally stable because it is AC coupled. Here is an LTSpice modified version that starts when the power supply is switched on, because of added capacitor C3. Also added is an emitter follower output.

[Attached: multivibrator.JPG]

Last edited: Apr 5, 2009

10.
Apr 5, 2009

### Bob S

Re: a beginner's question /may be too easy :)

Here is the waveform for the multivibrator in the previous post.

[Attached: Multivibrator_wvfm.jpg]

11. Apr 6, 2009

### electrostatic

Re: a beginner's question /may be too easy :)

Apparently I got confused by the definitions of a capacitor. I saw an animation which showed electrons flowing through a capacitor from - to + and thus charging the capacitor, but then I read the definition on the forum here that no current flows through a capacitor. I understand from all of your replies (which I very much appreciate, thank you all) that in the 2nd circuit the capacitor will charge through the diode. My question is, how is it possible, if there is only one direction allowed, from - to +? How will the capacitor charge? I may not fully understand the electron physics, but it seems to trouble me enough to prevent me from understanding. Any referrals to reading material will be appreciated as well. Thank you all.

12. Apr 6, 2009

### Bob S

Re: a beginner's question /may be too easy :)

The capacitors will charge through the 470 ohm resistors (and the LEDs) when the transistor collectors are open, or discharge through the transistor collectors when they are saturated, but only when the capacitor is charging through the base pullup resistors, or being discharged by the transistor base currents. This sounds more complicated than it really is. Capacitors can charge only when there is a voltage across them. There are two sources of current: the 470 ohm (and LEDs) or 4.7 kohm resistors, and only two sinks of current: the saturated collectors or the base current. The two capacitors actually change polarity during each cycle, because the saturated collector voltage of one transistor is less than the base voltage of the other transistor, even when it is off. [Edit] I have added the waveforms of the voltage on both ends of C1.
See my previous post on the LTSpice circuit model. You will need to click on the magnified image to see the two traces clearly. The capacitor actually does change polarity during the cycle.

[Attached: Multivibrator C1.jpg]

Last edited: Apr 6, 2009

13. Apr 6, 2009

### bitrex

Re: a beginner's question /may be too easy :)

It's possible to get quite a good intuitive sense for the behavior of a capacitor by studying its basic characteristic equation, $$I = C\frac{dv}{dt}$$. This shows that current can flow through a capacitor when the rate of change of voltage with respect to time, $$\frac{dv}{dt}$$, is not zero. That's one reason capacitors come in so handy: they can block a DC current and still let AC pass, for example when one needs to couple signals between two stages in an amplifier that are operating at different bias voltages. Again, from the characteristic equation of the capacitor, $$I = C\frac{dv}{dt}$$, we see that the voltage across a capacitor cannot change instantaneously. The faster you try to change the voltage across the capacitor (the larger $$\frac{dv}{dt}$$), the larger the current becomes. The capacitor could only charge in zero time if the current were infinite, which is of course impossible.

So in the second circuit, let's say you have the LED and resistor hooked up to the capacitor and apply the power. Boom, at that instant the positive end of the capacitor is at 9 - LedVdrop = 7.8 volts. But because the voltage across the capacitor can't change instantaneously, that means the negative end of the capacitor at that instant is also at 7.8 volts! So if you stuck a voltmeter across the capacitor and read the voltage across the capacitor at that instant, 7.8 volts - 7.8 volts = 0 volts: the capacitor is uncharged. However, now we have the negative end of the capacitor at 7.8 volts connected to ground at zero volts.
Things can't stay like that forever, two objects at different potentials interacting with each other for any length of time would violate the conservation of energy! Electrons are attracted to the positive electric field applied to the positive plate of the capacitor and move up from ground to the negative plate of the capacitor. The voltage across the capacitor begins to increase, until the capacitor is fully charged at 7.8 volts. In electronics we talk about current flowing from positive to negative, of course that's not really what's happening, but it makes it so you don't have to constantly take the square root of negative numbers when doing simple circuit analysis. The fact that a capacitor cannot change the voltage across it instantaneously is used to good effect in the multivibrator circuit above. When the capacitor C1 is charged to 7 volts or so, and transistor TR1 turns on, its positive plate is immediately connected to 0 volts. Since the negative side was at 0 volts to begin with, that side has no choice but to drop down to -7 volts. That large negative voltage across TR2's base-emitter junction immediately shuts TR2 down, allowing the cycle to repeat. Here are two videos from one of MIT's undergraduate Electromagnetics courses that might help your understanding. The first discusses the way in which capacitors store charge in relation to the work-energy theorem. The second goes into the concept of capacitor dielectrics and dielectric polarization and explains why for example capacitors with certain dielectrics (like electrolytic caps) can store much more energy than one would expect based on their size. The whole lecture series itself is excellent and definitely worth watching in its entirety if you're able. Lecture 7: Lecture 8: Last edited by a moderator: Sep 25, 2014
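As a footnote to bitrex's explanation, his characteristic equation I = C * dv/dt can be integrated numerically to see the exponential charging curve he describes. This sketch is not from the thread; the 9 V supply and 47 uF capacitor match the circuit under discussion, but the 1 kOhm series resistance is an illustrative assumption:

```python
import math

# Simple Euler integration of an RC charging circuit:
# the resistor sets I = (V_supply - v) / R, and I = C * dv/dt.
V, R, C = 9.0, 1_000.0, 47e-6   # volts, ohms, farads (R is illustrative)
tau = R * C                      # time constant, 47 ms here

v, dt = 0.0, 1e-5
for _ in range(int(tau / dt)):   # integrate for one time constant
    i = (V - v) / R              # current decreases as v rises
    v += (i / C) * dt

# After one time constant the capacitor reaches ~63% of the supply:
assert abs(v - V * (1 - math.exp(-1))) < 0.05
```

The current starts large and decays toward zero as the capacitor voltage approaches the supply, which is exactly why the LED in the earlier posts flashes only briefly while the capacitor charges.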
https://projecteuclid.org/euclid.aop/1068646382
## The Annals of Probability

### Self-normalized Cramér-type large deviations for independent random variables

#### Abstract

Let $X_1, X_2, \ldots$ be independent random variables with zero means and finite variances. It is well known that a finite exponential moment assumption is necessary for a Cramér-type large deviation result for the standardized partial sums. In this paper, we show that a Cramér-type large deviation theorem holds for self-normalized sums only under a finite $(2+\delta)$th moment, $0< \delta \leq 1$. In particular, we show $P(S_n /V_n \geq x) = (1-\Phi(x))\bigl(1+O(1)(1+x)^{2+\delta}/d_{n,\delta}^{2+\delta}\bigr)$ for $0 \leq x \leq d_{n,\delta}$, where $d_{n,\delta} = (\sum_{i=1}^n EX_i^2)^{1/2}/(\sum_{i=1}^n E|X_i|^{2+\delta})^{1/(2+\delta)}$ and $V_n= (\sum_{i=1}^n X_i^2)^{1/2}$. Applications to the Studentized bootstrap and to the self-normalized law of the iterated logarithm are discussed.

#### Article information

Source: Ann. Probab., Volume 31, Number 4 (2003), 2167-2215.

Dates: First available in Project Euclid: 12 November 2003

Permanent link to this document: https://projecteuclid.org/euclid.aop/1068646382

Digital Object Identifier: doi:10.1214/aop/1068646382

Mathematical Reviews number (MathSciNet): MR2016616

Zentralblatt MATH identifier: 1051.60031

#### Citation

Jing, Bing-Yi; Shao, Qi-Man; Wang, Qiying. Self-normalized Cramér-type large deviations for independent random variables. Ann. Probab. 31 (2003), no. 4, 2167--2215. doi:10.1214/aop/1068646382. https://projecteuclid.org/euclid.aop/1068646382
2019-10-20 23:34:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7971455454826355, "perplexity": 4547.43861903956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987750110.78/warc/CC-MAIN-20191020233245-20191021020745-00465.warc.gz"}
https://www.physicsforums.com/threads/can-i-get-clarification-on-the-constant-speed-of-light.878697/
# Can I get clarification on the constant speed of light

1. Jul 13, 2016

### Quandry

I do not have a problem with the concept of the constant speed of light, as it has no mass and therefore no inertia and therefore no relationship to any IFR. However, it seems to be expressed as constant in all IFRs, which I do not understand. This seems to say that if I am traveling at 1/2c and I shine a torch forward, the light moves away from me at c, and if I shine the light backwards it also travels away from me at c. This seems to say that, in the first case, the light is travelling away from the point in space where it was created (independent of my IFR) at 1.5c. To clarify this for me, my question is: if I am travelling towards a light source at .5c and I have two light detectors with a set space between them to measure the time that the light takes to transition from one to the other, will the time taken correspond to a speed of c? And if I decrease my speed to .25c and do the same measurement, will the measurement also indicate a speed of c?

2. Jul 13, 2016

### A.T.

No, it's traveling at c in all frames.

3. Jul 13, 2016

### Orodruin

Staff Emeritus

This is an experimental fact and so is not something you should set out to understand using your regular Galilei transformation. In fact, as you have discovered, it directly violates Galilei addition of velocities. The conclusion you should draw from this is that Galilei addition of velocities does not work in this extreme. Other conclusions such as the relativity of simultaneity follow in a relatively straightforward fashion.

4. Jul 13, 2016

### Quandry

So you are saying that regardless of my speed, the measurement between the two detectors is always the same?

5. Jul 13, 2016

### Ibix

The point in space is a frame-dependent concept. Say your flashlights are on a table on a train. To someone on the train, the point in space where the light was emitted is a point just above the table. To someone standing beside the track, the point in space where the light was emitted is some point above the track where the flashlights happened to be when they emitted. The emission event is frame-independent, but you can't measure speed with respect to that.

6. Jul 13, 2016

### Quandry

I do not set out to understand using the Galilei transformation. In fact, if you refer to my post, I am clear that this does not happen (although it seems a logical, but incorrect, conclusion to the speed of light being constant in all frames of reference, which by logic says that it is different between two frames of reference). But if I am approaching a source of light which is travelling at c and I am travelling at .5c, the relationship between my detector and the source of light is changing at 1.5c, even though the light is travelling at c. There are many instances where relationships change faster than the speed of light, e.g. phase relationships between travelling waves can move down a waveguide faster than the speed of light. There is no implication of adding velocities in this process.

7. Jul 13, 2016

### Quandry

It is not the point in space that is frame dependent. It is the observer's definition of that point in space relative to other factors. Which I guess is what I am saying - but if you are in the same frame of reference as the photons you can measure speed - but since you are not.....

8. Jul 13, 2016

### A.T.

How did you calculate that 1.5c then?

9. Jul 13, 2016

### Ibix

How do you define a point without reference to other factors? An event is a point in spacetime. It doesn't have a velocity for you to compare an object's velocity to. Light doesn't have a reference frame for you to be in, so measuring speed if you were in it doesn't make sense.

10. Jul 13, 2016

### Quandry

I am saying that that is the implication of the light travelling away from me at c, when it is travelling away from the point in space where it was created at c. These two things are incompatible and would require a Galilei transformation to make them work.

11. Jul 13, 2016

### Quandry

The event happens at a point whether or not you can define it. That is correct, but if the event is defined as the emission of photons you can assume a velocity = c. I guess that's still what I am saying.

12. Jul 13, 2016

### Quandry

Just going back to my question, it seems that y'all are saying the answer is yes?

13. Jul 13, 2016

### Ibix

An event is a point in spacetime. There will come a time when you say that that event is in the past - it is not in the slice of spacetime you call "now". So you cannot measure a spatial distance to the event "now". You can only decide that some point in "now" is the same as the (spatial part of) the event. That choice is frame dependent - in fact, it's one definition of a choice of frame (up to spatial rotation).

This is pure nonsense. An event is a point in spacetime. Velocity is the slope of a line in spacetime. A point does not have a slope and there is no way to define one for it. Simply "assuming" an undefined quantity has a particular value is meaningless.

Then you are talking nonsense. There is no frame of reference for light - it's a contradiction in terms.

Last edited: Jul 13, 2016

14. Jul 13, 2016

### Ibix

Everyone will always measure a time between the reception events consistent with the motion of the detectors (as measured in their frame) and the constant speed of light. Between length contraction, time dilation, the relativity of simultaneity, and the different motion of the detectors in any other frame, everyone will always be able to come up with a consistent explanation for why everyone else also comes up with the same invariant speed of light.

15. Jul 13, 2016

### A.T.

It's a numerical value. How did you calculate it exactly?

16. Jul 13, 2016

### Quandry

Your response is not relevant. If the event is the result of turning on a torch (as defined), it is reasonable to assume that the event results in photon emission. I'm sorry that you think this is nonsense. I was agreeing with you. 'nuff said!

17. Jul 13, 2016

### Quandry

Exactly? I added 1 and .5 and I got 1.5. Again, 'nuff said.

18. Jul 13, 2016

### Mister T

That's Galilean relativity! It works only as an approximation in the limit of low speeds. The correct way is $\frac{1+0.5}{1+(1)(0.5)}=\frac{1.5}{1.5}=1$.

19. Jul 14, 2016

### Orodruin

Staff Emeritus

In addition to what has been said already, one thing that confuses many people is the difference between separation speed and relative speed. If you are moving to the left at 0.5c relative to an observer A and a light signal travels to the right, then A will see the distance between you and the light growing at 1.5c. This is separation speed.

The above does not mean that you will see the light moving at 1.5c! To draw that conclusion you must assume absolute space and time, which directly contradict SR. In fact, they are basic assumptions behind the Galilei transformation! The basic assumption in SR is that the light will have speed c in your inertial frame too. This is the relative speed.

Also note that there is no way you can objectively say "I am moving at 0.5c", as velocities are relative and change between inertial frames. You need to specify relative to what you move at 0.5c.

20. Jul 14, 2016

### Ibix

In that case your writing is imprecise, because I'm having trouble reading you as agreeing with me even when you say you are. I think I shall duck out of this conversation for now.
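The formula quoted in post 18 is the relativistic velocity-addition rule. A small numerical sketch of it (not part of the thread), with speeds given as fractions of c:

```python
def add_velocities(u, v):
    """Relativistic velocity addition; u and v are fractions of c."""
    return (u + v) / (1 + u * v)

# Light (u = 1) emitted from a source moving at v = 0.5c still travels at c:
print(add_velocities(1.0, 0.5))    # 1.0

# Two sublight speeds never combine to exceed c:
print(add_velocities(0.5, 0.5))    # 0.8
```

At everyday speeds the denominator is nearly 1, which is why the Galilean sum u + v works as an approximation.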
https://socratic.org/questions/how-do-you-simplify-sqrt-27-rsqrt-75
# How do you simplify $\sqrt{0.27} + r\sqrt{0.75}$? $\sqrt{0.27} + r\sqrt{0.75} =$ $= \sqrt{\frac{27}{100}} + r \sqrt{\frac{75}{100}} =$ $= \sqrt{\frac{3 \cdot 9}{100}} + r \sqrt{\frac{3 \cdot 25}{100}} =$ $= \frac{3}{10} \sqrt{3} + \frac{5}{10} r \sqrt{3} =$ $= \frac{\sqrt{3}}{10} \left(3 + 5 r\right)$
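A quick numerical check of this simplification (r = 2 is an arbitrary test value):

```python
import math

r = 2.0
lhs = math.sqrt(0.27) + r * math.sqrt(0.75)      # original expression
rhs = math.sqrt(3) / 10 * (3 + 5 * r)            # simplified form
print(abs(lhs - rhs) < 1e-12)  # True
```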
http://qa4code.blogspot.com.es/2015/05/put-your-bll-monster-in-chains.html
# Put your BLL monster in Chains

Friday, May 15, 2015

How to use FubuMVC's behavior chains pattern (BMVC) to improve BLL maintainability

# Introduction

A very popular architecture for enterprise applications is the triplet Application, Business Logic Layer (BLL), Data Access Layer (DAL). For some reason, as time goes by, the Business Layer starts getting fatter and fatter, losing its health in the process. Perhaps I was doing it wrong. Somehow very well designed code gets old and turns into a headless monster. I have run into a couple of these monsters that I have been able to tame using FubuMVC's behaviour chains, a pattern designed for web applications that I have found useful for breaking down complex BLL objects into nice maintainable pink ponies.

I need an example to make this work, so let's go to the beach. Spain has some of the best beaches in Europe. Let's build a web service to search for the dream beach. I want the clients to enter some criteria: province, type of sand, nudist, surf, and some weather conditions, as some people might like sun, others shade, and surfers will certainly want some wind. The service will return the whole matching list. There will be two entry points:

• Minimal. Results will contain only beach Ids. Clients must have downloaded the JSON beach list.
• Detailed. Results will contain all the information I have about the beaches. The weather report will be downloaded from a free on-line weather service like OpenWeatherMap.

All dependencies will be abstract and constructor injected.
```csharp
public IEnumerable<BeachMin> SearchMin(SearchRequest request)
{
    var candidates = beachDal.GetBeachesMatching(
        request.Location, request.TypeOfSand, request.Features);

    var beachesWithWeatherReport = candidates
        .Select(x => new { Beach = x, Weather = weather.Get(x.Locality) });

    var requestSky = request.Weather.Sky;
    var filteredBeaches = beachesWithWeatherReport
        .Where(x => x.Weather.Wind == request.Weather.Wind)
        .Where(x => (x.Weather.Sky & requestSky) == requestSky)
        .Select(x => x.Beach);

    var orderedByPopularity = filteredBeaches
        .OrderBy(x => x.Popularity);

    return orderedByPopularity
        .Select(x => x.TranslateTo<BeachMin>());
}
```

This is very simple and might look like good code. But hidden in these few lines is a single responsibility principle violation. Here I'm fetching data from a DAL and from an external service, filtering, ordering and finally transforming data. There are five reasons for this code to change. This might look OK today, but problems will come later, as code ages.

# Let's feed it some junk food

In any actual production scenario, this service will need some additions: logging, to see what is going on and get some nice looking graphs; caching, to make it more efficient; and some debug information to help us exterminate infestations. Where do all these behaviors go? To the business, of course. Nobody likes to put anything that is not database specific into the DAL. The web service itself does not have access to what is really going on. So… everything else goes to the BLL. This might look a little exaggerated, but believe me… it's not.

```csharp
public IEnumerable<BeachMin> SearchMin(SearchRequest request)
{
    Debug.WriteLine("Entering SearchMin");
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    Logger.Log("SearchMin.Request", request);

    Debug.WriteLine("Before calling DAL: {0}", stopwatch.Elapsed);
    var cacheKey = CreateCacheKey(request);
    var candidates = Cache.Contains(cacheKey)
        ? Cache.Get(cacheKey)
        : beachDal.GetBeachesMatching(
            request.Location, request.TypeOfSand, request.Features);
    Cache.Set(cacheKey, candidates);
    Debug.WriteLine("After calling DAL: {0}", stopwatch.Elapsed);
    Logger.Log("SearchMin.Candidates", candidates);

    Debug.WriteLine("Before calling weather service: {0}", stopwatch.Elapsed);
    var beachesWithWeatherReport = candidates
        .Select(x => new { Beach = x, Weather = weather.Get(x.Locality) });
    Debug.WriteLine("After calling weather service: {0}", stopwatch.Elapsed);
    Logger.Log("SearchMin.Weather", beachesWithWeatherReport);

    Debug.WriteLine("Before filtering: {0}", stopwatch.Elapsed);
    var requestSky = request.Weather.Sky;
    var filteredBeaches = beachesWithWeatherReport
        .Where(x => x.Weather.Wind == request.Weather.Wind)
        .Where(x => (x.Weather.Sky & requestSky) == requestSky)
        .Select(x => x.Beach);
    Debug.WriteLine("After filtering: {0}", stopwatch.Elapsed);
    Logger.Log("SearchMin.Filtered", filteredBeaches);

    Debug.WriteLine("Before ordering by popularity: {0}", stopwatch.Elapsed);
    var orderedByPopularity = filteredBeaches
        .OrderBy(x => x.Popularity);
    Debug.WriteLine("After ordering by popularity: {0}", stopwatch.Elapsed);

    Debug.WriteLine("Exiting SearchMin");
    return orderedByPopularity;
}
```

If you don't own any code like the previous: bravo!! Lucky you. I have written way too many BLLs that look just like this one. Now, ask yourself: what, exactly, does a "paradise beach service" have to do with logging, caching and debugging? Easy answer: absolutely nothing. Normally there wouldn't be anything wrong with this code. But every application needs maintenance. With time, business requirements will change and I will need to touch it. Then a bug will be found: touch it again. At some point the monster will wake up and there will be no more good news from that point forward.

Let's see what I'm actually doing:

1. Find candidate beaches: those in the specified province with the wanted features and type of sand.
2. Get a weather report about each of the candidates.
3. Filter out those beaches not matching the desired weather.
4. Order by popularity.
5. Transform the data into the expected output.

This is how you would do it manually with a map and maybe a telephone and a patient operator to get the weather reports. This is exactly what a BLL must do, and nothing else. I will implement a BLL for each of the previous steps; each will have just one Execute method with one argument and a return value. Each step will have a meaningful, intention-revealing name and will receive an argument with the same name ending in Input and return a type with the same name ending in Output. Conventions rock!!

```csharp
public class FindCandidates : IFindCandidates
{
    public FindCandidates(IBeachesDal beachesDal)
    {
        this.beachesDal = beachesDal;
    }

    public FindCandidatesOutput Execute(FindCandidatesInput input)
    {
        var beaches = beachesDal.GetCandidateBeaches(
            input.Province, input.TypeOfSand, input.Features);
        return new FindCandidatesOutput { Beaches = beaches };
    }
}

public class GetWeatherReport : IGetWeatherReport
{
    public GetWeatherReport(IWeatherService weather)
    {
        this.weather = weather;
    }

    public GetWeatherReportOutput Execute(GetWeatherReportInput input)
    {
        var beachesWithWeather = input.Beaches
            .Select(NewCandidateBeachWithWeather);
        return new GetWeatherReportOutput
        {
            BeachesWithWeather = beachesWithWeather
        };
    }

    private CandidateBeachWithWeather NewCandidateBeachWithWeather(
        CandidateBeach x)
    {
        var result = x.TranslateTo<CandidateBeachWithWeather>();
        result.Weather = weather.Get(x.Locality);
        return result;
    }
}

public class FilterByWeather : IFilterByWeather
{
    public FilterByWeatherOutput Execute(FilterByWeatherInput input)
    {
        var filtered = input.BeachesWithWeather
            .Where(x => x.Weather.Sky == input.Sky)
            .Where(x => input.MinTemperature <= x.Weather.Temperature
                && x.Weather.Temperature <= input.MaxTemperature)
            .Where(x => input.MinWindSpeed <= x.Weather.WindSpeed
                && x.Weather.WindSpeed <= input.MaxWindSpeed);
        return new FilterByWeatherOutput { Beaches = filtered };
    }
}

public class OrderByPopularity : IOrderByPopularity
{
    public OrderByPopularityOutput Execute(OrderByPopularityInput input)
    {
        var orderedByPopularity = input.Beaches.OrderBy(x => x.Popularity);
        return new OrderByPopularityOutput { Beaches = orderedByPopularity };
    }
}

public class TranslateToBeachMin : IConvertToMinResult
{
    public TranslateToBeachMinOutput Execute(
        TranslateToBeachMinInput input)
    {
        return new TranslateToBeachMinOutput
        {
            Beaches = input.Beaches
                .Select(x => x.TranslateTo<BeachMin>())
        };
    }
}
```

I know what you're thinking: I took a 15-lines-of-code (LOC) program and transformed it into one of 100 or more… You are right. But let's see what I have: five clean and small BLLs, each representing a part of our previous single BLL. Their dependencies are abstract, which will also make it easy to test them thoroughly. Because they are so small, they will be very easy to maintain, substitute and even reuse. For instance, you don't really need to have performed a live weather search to get a list of beaches and weather conditions to be filtered; you just need to create the input for each of the steps and voilà, you can execute that particular step. At the end I added a step to translate CandidateBeach into BeachMin, which is the response I really need for our original service. I also extracted interfaces for each of the steps; it helps with abstractions and some other things I'll do later.
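Before wiring the steps back together, note that the pattern itself is language-agnostic. As an aside, here is a minimal Python sketch (with made-up beach data, not from the post) of composing such single-input steps into one function:

```python
def chain(*behaviors):
    """Compose single-argument steps into one function: a behavior chain."""
    def chained(value):
        for behavior in behaviors:
            value = behavior(value)
        return value
    return chained

# Hypothetical stand-ins for FindCandidates -> FilterByWeather ->
# OrderByPopularity -> TranslateToBeachMin.
beaches = [
    {"id": 1, "sand": "white", "wind": 20, "popularity": 3},
    {"id": 2, "sand": "white", "wind": 5,  "popularity": 1},
    {"id": 3, "sand": "dark",  "wind": 25, "popularity": 2},
]
find_candidates     = lambda sand: [b for b in beaches if b["sand"] == sand]
filter_by_weather   = lambda bs: [b for b in bs if b["wind"] >= 10]
order_by_popularity = lambda bs: sorted(bs, key=lambda b: b["popularity"])
translate_to_min    = lambda bs: [b["id"] for b in bs]

search_min = chain(find_candidates, filter_by_weather,
                   order_by_popularity, translate_to_min)
print(search_min("white"))  # [1]
```

Each step only knows its own input and output shape; the chain is just their composition, which is exactly what the C# version below builds with Linq.Expressions.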
# Chain'em up

```csharp
public IEnumerable<BeachMin> SearchMin(SearchRequest request)
{
    var candidates = findCandidates.Execute(
        new FindCandidatesInput
        {
            Province = request.Province,
            TypeOfSand = request.TypeOfSand,
            Features = request.Features
        });
    var candidatesWithWeather = getWeatherReport.Execute(
        new GetWeatherReportInput { Beaches = candidates.Beaches });
    var filtered = filterByWeather.Execute(
        new FilterByWeatherInput
        {
            BeachesWithWeather = candidatesWithWeather.BeachesWithWeather,
            Sky = request.Sky,
            MinTemperature = request.MinTemperature,
            MaxTemperature = request.MaxTemperature,
            MinWindSpeed = request.MinWindSpeed,
            MaxWindSpeed = request.MaxWindSpeed
        });
    var orderedByPopularity = orderByPopularity.Execute(
        new OrderByPopularityInput { Beaches = filtered.Beaches });
    var result = translateToBeachMin.Execute(
        new TranslateToBeachMinInput { Beaches = orderedByPopularity.Beaches });
    return result.Beaches;
}
```

What do you know? I'm back to 15 LOC, maybe less. I think this code doesn't even need explaining. I took our steps and chained them into a behavior chain. From now on we will refer to steps as behaviors. I'm kind of where I started, but now our service depends on external, extensible, reusable and abstract behaviors. Still, it must know them all, which makes it difficult to add a new behavior. Another thing: I will have almost identical code for the other entry point. I must do something to improve these two.

# Mechanize it

I know… Sarah Connor wouldn't agree. I have this tool which takes some objects and automatically chains them together into a function, but first let's see what a service depending on functions would look like.
```csharp
public class SearchService : Service
{
    public SearchService(
        Func<SearchMinRequest, SearchMinResponse> searchMin,
        Func<SearchDetailsRequest, SearchDetailsResponse> searchDetails)
    {
        this.searchMin = searchMin;
        this.searchDetails = searchDetails;
    }

    public object Any(SearchMinRequest request)
    {
        return searchMin(request);
    }

    public object Any(SearchDetailsRequest request)
    {
        return searchDetails(request);
    }
}
```

I'm using ServiceStack as the web framework. Basically, both Any methods in the example are web service entry points. As you can see, they delegate the actual work to functions injected through the constructor. At some point, which for ServiceStack is the application configuration, I need to create these functions and register them in the IoC container.

```csharp
public override void Configure(Container container)
{
    //...
    var searchMin = Chain<SearchMinRequest, SearchMinResponse>(
        findCandidates,
        getWeatherReport,
        filterByWeather,
        orderByPopularity,
        translateToBeachMin);
    var searchDetails = Chain<SearchDetailsRequest, SearchDetailsResponse>(
        findCandidates,
        getWeatherReport,
        filterByWeather,
        orderByPopularity,
        addDetails);
    container.Register(searchMin);
    container.Register(searchDetails);
    //...
}

private static Func<TInput, TOutput> Chain<TInput, TOutput>(
    params object[] behaviors)
    where TInput : new()
    where TOutput : new()
{
    return behaviors
        .ExtractBehaviorFunctions()
        .Chain<TInput, TOutput>();
}
```

There are some points here worth mentioning:

• Each behavior kind of depends on the previous one, but it doesn't really know it.
• The chain is created from functions, which could be instance or static methods, lambda expressions, or even functions defined in another language like F#.
• The ExtractBehaviorFunctions method takes in objects and extracts their Execute method, or throws an exception if there is none. This is my convention; you could define your own.
• The Chain method takes in delegates and creates a function by chaining them together.
It will throw exceptions if incompatible delegates are used.

I will enrich our BLLs by means of transparent decorators. Using Castle.DynamicProxy I will generate types which will intercept the calls to our behaviors and add some features. Then I will register the decorated instances instead of the originals. I will start with cache and debugging. The cache is a trivial in-memory, 10-minute one; more complicated solutions can easily be implemented.

```csharp
container.Register(new ProxyGenerator());
container.Register<ICache>(new InMemory10MinCache());
container.RegisterAutoWired<CacheDecoratorGenerator>();
CacheDecoratorGenerator = container.Resolve<CacheDecoratorGenerator>();
container.RegisterAutoWired<DebugDecoratorGenerator>();
DebugDecoratorGenerator = container.Resolve<DebugDecoratorGenerator>();
```

With this code our decorator generators are ready; let's look now at how to decorate the behaviors.

```csharp
var findCandidates = DebugDecoratorGenerator.Generate(
    CacheDecoratorGenerator.Generate(
        container.Resolve<IFindCandidates>()));
var getWeatherReport = DebugDecoratorGenerator.Generate(
    container.Resolve<IGetWeatherReport>());
var filterByWeather = DebugDecoratorGenerator.Generate(
    container.Resolve<IFilterByWeather>());
var orderByPopularity = DebugDecoratorGenerator.Generate(
    container.Resolve<IOrderByPopularity>());
var convertToMinResult = DebugDecoratorGenerator.Generate(
    container.Resolve<IConvertToMinResult>());
```

Here I decorated every behavior with debugging and only findCandidates with caching too. It might be interesting to add some cache to the weather report as well, but since the input might be a very big list of beaches, caching it won't be correct. Instead I will add caching to both the DAL and the weather service.
```csharp
container.Register(c =>
    DebugDecoratorGenerator.Generate(
        CacheDecoratorGenerator.Generate(
            (IBeachesDal) new BeachesDal(
                c.Resolve<Func<IDbConnection>>()))));
container.Register(c =>
    DebugDecoratorGenerator.Generate(
        CacheDecoratorGenerator.Generate(
            (IWeatherService) new WeatherService())));
```

# Manual Decorators

Generated decorators are not enough for some tasks, and if you are a friend of IDE debugging they will certainly give you some headaches. There is always the manual choice.

```csharp
public class FindCandidatesLogDecorator : IFindCandidates
{
    public FindCandidatesLogDecorator(ILog log, IFindCandidates inner)
    {
        this.log = log;
        this.inner = inner;
    }

    public FindCandidatesOutput Execute(FindCandidatesInput input)
    {
        var result = inner.Execute(input);
        log.InfoFormat(
            "Execute({0}) returned {1}", input.ToJson(), result.ToJson());
        return result;
    }
}
```

By using more powerful IoC containers, like Autofac, you would be able to create more powerful decorators, both automatically generated and manual. You won't ever have to touch your BLL unless there are business requirement changes or bugs.

# When to use

Use this pattern when your BLL is a set of steps that are:

• Well defined. The responsibilities are clear and have clear boundaries.
• Independent. The steps don't know each other.
• Sequential. The order cannot be changed based on input. All steps must always be executed.

The behavior chain functions are kind of static; they are not meant to be altered at execution time. You can, though, create a new function to replace an existing one based on any logic of your specific problem.

# How it works

The generation code isn't really that interesting: just a lot of hairy statements generating lambda expressions using the wonderful Linq.Expressions. You can still look at it in the source code. Let's see instead how the generated code works. This is how the generated function looks, or kind of.
```csharp
var generatedFunction = new Func<SearchDetailsRequest, SearchDetailsResponse>(req =>
{
    // Input
    var valuesSoFar = new Dictionary<string, object>();
    valuesSoFar["Province"] = req.Province;
    valuesSoFar["TypeOfSand"] = req.TypeOfSand;
    valuesSoFar["Features"] = req.Features;
    valuesSoFar["Sky"] = req.Sky;
    valuesSoFar["MinTemperature"] = req.MinTemperature;
    valuesSoFar["MaxTemperature"] = req.MaxTemperature;
    valuesSoFar["MinWindSpeed"] = req.MinWindSpeed;
    valuesSoFar["MaxWindSpeed"] = req.MaxWindSpeed;

    // Behavior0: Find candidates
    var input0 = new FindCandidatesInput
    {
        Province = (string)valuesSoFar["Province"],
        TypeOfSand = (TypeOfSand)valuesSoFar["TypeOfSand"],
        Features = (Features)valuesSoFar["Features"]
    };
    var output0 = behavior0(input0);
    valuesSoFar["Beaches"] = output0.Beaches;

    // Behavior1: Get weather report
    var input1 = new GetWeatherReportInput
    {
        Beaches = (IEnumerable<CandidateBeach>)valuesSoFar["Beaches"]
    };
    var output1 = behavior1(input1);
    valuesSoFar["Beaches"] = output1.Beaches;

    // Behavior2: Filter by weather
    var behavior2Beaches = valuesSoFar["Beaches"];
    var input2 = new FilterByWeatherInput
    {
        Beaches = (IEnumerable<CandidateBeachWithWeather>)behavior2Beaches,
        Sky = (Sky)valuesSoFar["Sky"],
        MinTemperature = (float)valuesSoFar["MinTemperature"],
        MaxTemperature = (float)valuesSoFar["MaxTemperature"],
        MinWindSpeed = (float)valuesSoFar["MinWindSpeed"],
        MaxWindSpeed = (float)valuesSoFar["MaxWindSpeed"]
    };
    var output2 = behavior2(input2);
    valuesSoFar["Beaches"] = output2.Beaches;

    // Behavior3: Order by popularity
    var input3 = new OrderByPopularityInput
    {
        Beaches = (IEnumerable<CandidateBeach>)valuesSoFar["Beaches"]
    };
    var output3 = behavior3(input3);
    valuesSoFar["Beaches"] = output3.Beaches;

    // Behavior4: Add details
    var input4 = new AddDetailsInput
    {
        Beaches = (IEnumerable<CandidateBeach>)valuesSoFar["Beaches"]
    };
    var output4 = behavior4(input4);
    valuesSoFar["Beaches"] = output4.Beaches;

    // Output
    return new SearchDetailsResponse
    {
        Beaches = (IEnumerable<BeachDetails>)valuesSoFar["Beaches"]
    };
});
```

# Using the code

What is it good for if you cannot see it working? Running the project will start the server on the configured port, 52451 by default. Now you need to create a client program. You can manually create a client project using ServiceStack, or any other web framework. You can also use the included LINQPad file at <project_root>\linqpad\search_for_beaches.linq, which basically does as follows:

```csharp
var client = new JsonServiceClient("http://localhost:52451/");
var response = client.Post(new SearchDetailsRequest
{
    Province = "Huelva",
    Features = Features.Surf,
    MinTemperature = 0f,
    MaxTemperature = 90f,
    MinWindSpeed = 0f,
    MaxWindSpeed = 90f,
    TypeOfSand = TypeOfSand.White,
    Sky = Sky.Clear
});
```

# Conclusions

High code quality is very important if you want a maintainable application with a long lifespan. By choosing the right design patterns and applying some techniques and best practices, any tool will work for us and produce really elegant solutions to our problems. If, on the other hand, you learn just how to use the tools, you are going to end up programming for the tools and not for the ones who sign your pay-checks.
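As a closing aside, the manual log decorator shown above translates readily to other languages. A minimal Python sketch (names are illustrative, not from the post) of the same transparent wrapping idea:

```python
import functools

def log_calls(func, log=print):
    """Transparent decorator: log input and output without touching the step."""
    @functools.wraps(func)
    def wrapper(inp):
        result = func(inp)
        log(f"{func.__name__}({inp!r}) returned {result!r}")
        return result
    return wrapper

# A hypothetical step, standing in for FindCandidates:
def find_candidates(province):
    return ["El Palmar", "Bolonia"] if province == "Cadiz" else []

logged = []
find_candidates = log_calls(find_candidates, log=logged.append)
print(find_candidates("Cadiz"))  # ['El Palmar', 'Bolonia']
```

The caller sees the same signature and result; only the registration line changes, which is exactly what keeps the BLL untouched.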
http://www.physicsforums.com/showthread.php?s=1ee60181b07f2ac84fdb2681bd292591&p=4658640
# Temperature of dilution

by sout528. Tags: dilution, temperature

Admin: That's just a heat balance, but as you start close to the critical point the main problem is that the heat capacity changes. Instead of using q = mcΔT you need to use $$q = m\int c \, dT$$ and integrate from $T_{start}$ to $T_{final}$.
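Numerically, the integral form is straightforward to evaluate; a quick sketch (with a constant heat capacity used only to check against q = mcΔT, not real data):

```python
# Heat needed to warm mass m from t_start to t_final when the specific heat
# capacity c may vary with temperature: q = m * integral of c(T) dT.
def heat(m, c, t_start, t_final, steps=10000):
    dt = (t_final - t_start) / steps
    # Midpoint rule for the integral of c(T) over [t_start, t_final].
    integral = sum(c(t_start + (i + 0.5) * dt) for i in range(steps)) * dt
    return m * integral

# With a constant c this reduces to q = m c dT: 2 kg of water, 20 -> 30 C.
q = heat(2.0, lambda t: 4186.0, 20.0, 30.0)
print(round(q))  # 83720
```

Replacing the lambda with a fitted c(T) curve near the critical point gives the heat balance directly.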
http://www-nature-com-s.caas.cn/articles/s41598-022-10373-y
# Muography for a dense tide monitoring network

## Abstract

Sub-hourly to seasonal and interannual oceanographic phenomena can be better understood with high spatial resolution and high frequency tidal observations. However, while current tidal measurements can provide sufficiently high observational density in terms of time, the observational density in terms of space is low, mainly due to the high expense of constructing tide gauge stations. In this work, we designed a novel tide monitoring technique with muography that could be operated in near-shore basements (or similar structures on land below sea level) and found that more practical, stable, robust and cost-effective high-spatiotemporal-density tide measurements are possible. Although the time resolution, sensitivity, and the distance between the detectors and the shorelines are tradeoffs, hourly and annual sensitivities (ability to detect tide height variations) of less than 10 cm and 1 mm, respectively, can be statistically attained. It is anticipated that the current muographic technique could be applied as an alternative, cost-effective and convenient dense tidal monitoring network strategy in coastal areas worldwide.

## Introduction

Recently flood hazards have increased, exacerbated by a rising global mean sea level due to land ice melting and ocean warming1,2,3. One of the most important factors to quantify the extent of this increase is the flood hazard magnification rate4. Utilizing data from several years of tide gauge observations of various extreme weather and flooding events has been the most common way to determine this amplification5,6,7.
Tidal levels have been measured and monitored in order to obtain reliable sea-level information including tides, surges, waves, and relative sea-level rise. Such information is essential for coastal communities since coastal flooding is increasingly occurring in many areas8,9. Understanding of the regional tide streams in the inner bay is also important for the safety of navigation and environmental assessments, as well as for improving assessments of regional seawater circulation types and pollution distribution, and studies of tidal flow fields, which have been numerically modeled in various regions10,11,12,13,14. In order to create accurate models both for forecasting storm surges and for estimating tidal fields spatiotemporally, high density tide level information is required as a boundary condition. However, since tide gauge stations (TGSs) are expensive and usually have to be deployed in wave-sheltered harbors, they are only sparsely distributed even within large metropolitan bay areas15. For this reason, tide level data have been interpolated to reproduce the continuity and smoothness of the tide level distribution16. Moreover, TGSs measure only Still Water Levels (SWLs) on the spot, and as a consequence, we tend to underestimate or exclude waves that include wave setup and wave run-up17,18,19 (Fig. 1). These tide gauge-based estimates of the amplification of extreme water levels therefore do not always accurately assess actual shoreline conditions outside the wave shelter. Caires et al.20 analyzed and extrapolated a long-term time series of still water level data measured by the Dutch Ministry of Transport, finding that extreme SWLs with heights of 354 cm, 411 cm, and 466 cm occur at 100-, 1000- and 10,000-year intervals, respectively.
Marsooli and Lin21 investigated interactions between storm tides, surges, and waves for historical tropical cyclones (1988–2015) in the western North Atlantic Ocean, and found that the maximum wave setup was relatively large (tens of cm) in most coastal regions, but it did not always coincide with the peak storm tide. Abdalazeez analyzed the dataset provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), and estimated that run-ups higher than 1.5 m are generated by wind waves with heights between 4.0 and 6.0 m22. Seenipandi et al.23 noted that the beach profile shape of coastal zones is highly susceptible to change under wave run-ups greater than 6.0 m. Satellite-based radar altimetry provides a more global solution for analyzing tidal changes in wider areas. However, space–time resolutions depend on the satellite orbit and, in particular, on its repeat period. The intertrack distance at the equator depends on the mission repeat cycle. Some examples include the following: an intertrack distance of 315 km for 10 days for the Topex/Poseidon and Jason-1/2/3 missions, an intertrack distance of 104 km for 27 days for the Sentinel-3A/3B missions with one satellite and 53 km with Sentinel-3A and -3B working in tandem, and an intertrack distance of 80 km for 35 days for ERS-1/2, Envisat, and SARAL/AltiKa24. Also, the validity of measurements close to the coast is limited and may not accurately represent coastal processes25. Global-navigation-satellite-system (GNSS) buoys using satellite positioning technology may solve this problem with faster time resolutions (30 s–1 day)26. However, installing buoys in high maritime traffic areas is not practical. Moreover, the deployment and data transfer costs tend to be high. Ocean bottom sensors such as pressure and ultrasonic gauges provide tidal information in real time27.
However, since these sensors have to be directly located on the seafloor, preparing infrastructure for electricity and data transfer is expensive. Moreover, pressure gauges have an intrinsic drift error, and the propagation time of the ultrasonic signals depends on solar radiation, seasonal cycles, mixing of the water due to sea currents, and the presence of rivers or waste waters28. Recently, muography conducted from an underwater tunnel has shown the potential to offer a practical real-time tide monitor without intrinsic drift errors, and without the requirement to provide infrastructure for electricity and data transfer15. Muons are produced during the interaction between primary cosmic rays and the nuclei in the Earth's atmosphere. These muons are called cosmic-ray muons. On Earth, the muon flux reaches its maximum (~ 200 m−2 s−1 sr−1) at an atmospheric depth of 300 g cm−2, and then slowly decreases as the muons pass through the additional atmospheric depth. The muon flux is ~ 90 m−2 s−1 sr−1 at sea level29. The distance muons can traverse in materials is a function of the incident muon energy and the average density along the muon path, which can be inferred from known topographic data and the open-sky muon spectrum. Once both the muon path length and the average density along the path are known, the densimetric thickness (x)30 can be calculated by multiplying them, and thus the minimum energy (Ec) of muons that can penetrate through a material with this thickness can be determined. By integrating the open-sky spectrum from Ec to infinity, we obtain the expected flux of muons after passing through the target object. Conversely, if we place a detector underneath the target of interest and measure the muon flux, we can map out the densimetric thickness distribution as a function of azimuth and zenith angles31.
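The flux estimate described above — derive the minimum penetrating energy Ec from the densimetric thickness, then integrate the open-sky spectrum from Ec upward — can be sketched numerically. The energy–range relation below uses the textbook continuous-slowing-down approximation dE/dX ≈ a + bE with illustrative constants for standard rock, not the tabulated ranges of ref. 49, and Gaisser's standard sea-level parameterisation stands in for the open-sky spectrum of refs. 50–52; all values are for illustration only.

```python
import math

# Continuous-slowing-down approximation for muon energy loss,
# dE/dX ~ a + b*E (illustrative textbook constants, not ref. 49).
A = 2.0e-3   # ionisation loss, GeV / (g cm^-2)
B = 4.0e-6   # radiative loss, 1 / (g cm^-2)

def min_energy(x):
    """Minimum muon energy (GeV) needed to cross a densimetric thickness x (g/cm^2)."""
    return (A / B) * (math.exp(B * x) - 1.0)

def gaisser_flux(e, cos_theta):
    """Differential sea-level muon flux (m^-2 s^-1 sr^-1 GeV^-1), Gaisser's parameterisation."""
    return 0.14 * e ** -2.7 * (
        1.0 / (1.0 + 1.1 * e * cos_theta / 115.0)
        + 0.054 / (1.0 + 1.1 * e * cos_theta / 850.0))

def integrated_flux(e_min, cos_theta, e_max=1.0e5, n=10000):
    """Trapezoidal integral of the spectrum from e_min to e_max on a log grid."""
    total = 0.0
    log_step = (math.log(e_max) - math.log(e_min)) / n
    for i in range(n):
        e0 = math.exp(math.log(e_min) + i * log_step)
        e1 = math.exp(math.log(e_min) + (i + 1) * log_step)
        total += 0.5 * (gaisser_flux(e0, cos_theta) + gaisser_flux(e1, cos_theta)) * (e1 - e0)
    return total

# Example: 100 m of seawater -> x = 1.0e4 g/cm^2
ec = min_energy(1.0e4)   # ~20 GeV in this approximation
print(f"E_c ~ {ec:.1f} GeV")
print(f"flux above E_c ~ {integrated_flux(ec, 0.1):.4f} m^-2 s^-1 sr^-1")
```

The same two steps, with the proper tabulated ranges and measured spectra, are what Figs. 3 and 10 are built from.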
Due to the penetrative and ubiquitous nature of cosmic-ray muons, they have been utilized as probes for muography and have been widely applied to visualizing the internal structure of gigantic objects. Muographs (muographic observation and recording devices) have been deployed to observe targets such as volcanoes32,33,34,35,36,37,38,39,40,41, rock overburdens42,43,44, and cultural heritage sites45,46,47,48 (typically being positioned below the targeted region) to record muographic images of these land objects. More recently, as mentioned above, the Hyper Kilometric Submarine Deep Detector (HKMSDD) realized the goal of underwater muography imaging at reasonable cost by installing muographs inside an underwater tunnel. However, underwater tunnels are not always available in the coastal areas that would benefit the most from muography monitoring. In this work, an alternative to using underwater tunnels is proposed: HKMSDD muon sensor modules (HKMSDD-MSMs) could be deployed in coastal regions globally within available near-shore basements (or similar structures on land sufficiently below sea level), and could be applied as a convenient, dense and standalone tidal monitoring network or in tandem with another network.

## Results

### Principle

Galactic cosmic rays (GCRs) are accelerated by high energy events in our galaxy; before they arrive at Earth, they are deflected multiple times during their propagation and lose their initial directional information. Muons are produced in the Earth's atmosphere via the collision between these GCRs and the Earth's atmospheric nuclei. Due to the different atmospheric thicknesses and density gradients for different GCR arrival angles, the muon energy spectrum varies with the zenith angle. As a consequence, the vertical muon flux is higher than the horizontal flux, but the average energy of vertical muons is lower than that of horizontal ones. The HKMSDD-MSM tide monitor utilizes these near horizontal muons.
Figure 2 shows the principle of HKMSDD-MSM tide monitoring. In this scheme, HKMSDD-MSMs are placed at near-shore locations below sea level such as basements of commercial buildings, subway stations, underground parking lots, etc. As shown in Fig. 2, near horizontal muons would pass through seawater and land soil before arriving at the MSM. The total thickness of the materials that muons will traverse before arriving at the MSM is:

$$L = D/\cos \theta + (d - H)/\sin \theta \;(\text{m})$$ (1)

where D (m) is the distance between the MSM and the shoreline, H (m) is the land altitude measured from the lowest tide level, d (m) is the depth of the MSM measured from ground level, and θ is the elevation angle. Here the average densities of the land soil (ρearth) and seawater (ρwater) were assumed to be 2.0 g cm−3 and 1.0 g cm−3, respectively. Since the tide level variations ∆h (m) only change the second term of Eq. (1), as D increases, the MSM's sensitivity to the tide variations is degraded. The muon flux observed at the MSM (N) can be calculated as follows. Once L is determined, the minimum muon energy (Ec) that arrives at the MSM can be derived from the muon's energy–range relationship in H2O and SiO249. By integrating the open-sky muon energy spectrum50,51,52 over the energy range between Ec and infinity, we obtain the angular dependent integrated muon flux I(θ), where θ is the elevation angle. By integrating I(θ) over the angular range between 0 and Θ, N is derived, where Θ satisfies the following relationship:

$$\tan \Theta = (d - H)/D$$ (2)

Figure 3 shows I as a function of the elevation angle (θ < Θ) for different D. As long as θ < Θ, the soil portion of L depends only on D and θ. Therefore, as the distance between the MSM and the shoreline increases, the number of muons that arrive at the MSM will decrease.
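Equations (1) and (2) translate directly into a few lines of code; the sketch below also converts the geometric path into a densimetric thickness using the densities assumed in the text (soil 2.0 g cm−3, seawater 1.0 g cm−3). The example dimensions D = 20 m, H = 2 m, d = 7 m are illustrative, not a specific site.

```python
import math

RHO_SOIL = 2.0    # g/cm^3, assumed average land-soil density
RHO_WATER = 1.0   # g/cm^3, seawater

def path_length(D, H, d, theta):
    """Total geometric path length L (m) through soil and seawater, Eq. (1)."""
    return D / math.cos(theta) + (d - H) / math.sin(theta)

def densimetric_thickness(D, H, d, theta):
    """Densimetric thickness (g/cm^2): each leg of the path weighted by its density."""
    soil = (D / math.cos(theta)) * RHO_SOIL          # m * g/cm^3
    water = ((d - H) / math.sin(theta)) * RHO_WATER
    return (soil + water) * 100.0                    # convert m * g/cm^3 -> g/cm^2

def limiting_angle(D, H, d):
    """Largest elevation angle Theta (rad) whose path still crosses seawater, Eq. (2)."""
    return math.atan((d - H) / D)

# Illustrative site: MSM 20 m from the shoreline, land 2 m above the
# lowest tide level, detector 7 m below ground level.
Theta = limiting_angle(20.0, 2.0, 7.0)
print(f"Theta = {math.degrees(Theta):.1f} deg")
print(f"L at Theta/2 = {path_length(20.0, 2.0, 7.0, Theta / 2):.1f} m")
print(f"x at Theta/2 = {densimetric_thickness(20.0, 2.0, 7.0, Theta / 2):.0f} g/cm^2")
```

Only the seawater leg of `densimetric_thickness` changes with the tide, which is why sensitivity to ∆h falls as D grows.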
As a consequence, the time resolution of the HKMSDD-MSM tide monitor will be degraded as the length of D increases.

### Case studies in Tokyo Bay

Urban underground spaces (UUSs) have various functions: storage, industry, transport, utilities and communications, and public use. In Tokyo, most of the underground facilities in the city areas are for public use. Tokyo uses more than 50% of its UUSs for transportation including subways, highway tunnels, and stations, and almost 40% of its UUSs for public spaces, shopping areas, parking lots, storage and industrial use53. Throughout their historical development, UUSs in Tokyo have progressed from shallow to deep soil layers. Therefore, inside UUSs, a stable utility supply (electricity, gas and water) is one of the most important factors. In Japan, UUSs for public use are equipped with a three-step power failure prevention system. In particular, there is a regulation that a UUS with a floor area exceeding 1000 m2 must be equipped with an independent emergency power generator by the UUS managing body. If the emergency power generator is shut down for some reason, it will be immediately replaced with a battery-operated system. Such a robust pre-installed infrastructure particularly designed for UUSs also offers an ideal space for stable and safe operation of muographic tide monitors even under extreme conditions such as severe storms and earthquakes. In order to install the HKMSDD-MSM tide monitor in a UUS, we need (A) a commercial electricity supply, (B) a network environment, and (C) one or more rooms with dimensions of at least a few m2. In most cities, a commercial electricity supply is available in UUSs, and a network infrastructure is also well organized in UUSs. For example, in the case of UUSs in Tokyo, it is relatively easy in this kind of location to arrange a new contract with a carrier to add one more internet line inside the building without additional costs.
Regarding space requirements, if a large room (e.g., 100 m2) is available, installation of one HKMSDD-MSM would be sufficient; if such a large room is not available, the size of each MSM would have to be smaller, and hence the number of smaller MSMs could be increased to attain a detector area large enough to collect a sufficient number of muons; consequently the device costs per station would increase. The south-central Tokyo map in Fig. 4 shows the distribution of Tokyo deep and large-scale UUSs (DLUUSs) located in the regions within 200 m of the shorelines. Here, the DLUUSs are defined as those having basement floors located below sea level with a floor size exceeding 1000 m2. The south-central part of Tokyo consists of the mainland and more than 15 islands in the north part of Tokyo Bay. Most of these islands are connected by bridges, but the lines on the ocean in Fig. 4 represent railway/motor underwater tunnels which connect these islands. A number of commercial skyscrapers were built on these islands, and some of them have UUSs reaching depths greater than 10 m from the ground surface. Since the elevation of these islands ranges between 1 and 7 m, these floors are located below sea level.

### HKMSDD-MSM

The currently proposed muographic tide monitor is based on the successful, stable and maintenance-free long-term observation conducted by an array of MSMs of the Tokyo-bay Seafloor HKMSDD (TS-HKMSDD) at one of the UUSs called Aquatunnel (underwater highway tunnel) in Tokyo Bay15. As shown in Fig. 5A, since we started the stable operation mode of TS-HKMSDD in April 2021, TS-HKMSDD has successfully recorded the tide level variations without any interruption, intermittency or measurement drift. On the other hand, TGS observations frequently give erroneous data54.
For instance, during TGS measurements, data can become suddenly corrupted with noise, or (due to mechanical problems) the moving parts of the gauge may lock up or malfunction54. Unlike tide gauges or buoys, HKMSDD doesn't have to be exposed to harsh environments and there aren't any mechanically moving parts in HKMSDD (Fig. 5B). This would also be the case for the proposed HKMSDD-MSM setup; additionally, since UUSs such as commercial buildings or highway tunnels are already equipped with a robust electric and internet environment and each has a private power generation backup system, it is anticipated that muographic tide monitors would have the highest level of stability compared with other legacy tide level monitors. Figures 6 and 7 show close and vertical cross-sectional views of some representative UUSs indicated in Fig. 4. As can be seen in these figures, other islands are located along the muon trajectories. These islands may degrade the quality of tide monitoring, and this effect will be discussed later. Figure 8A shows the proposed HKMSDD-MSM muograph design for the tide monitoring network. The HKMSDD-MSM consists of two sets of scintillation detectors made of plastic scintillators and photodetectors. The length of the scintillators is 8 m. There are a couple of options for photodetectors: photomultiplier tubes (PMTs) or Silicon Photomultipliers (SiPMs). In the former case, the PMTs are attached to the scintillators via acrylic light guides. In the latter case, scintillation light is transported to the SiPM via wavelength-shifting (WLS) fibers. Since Eljen's scintillators (EJ-208) and Kuraray's WLS fibers (Y-11(200)) have long attenuation lengths of 4 m55 and 3.5 m56, respectively, one photodetector is sufficient for readout of each scintillator strip. Coincident signals verified to correspond to the same angle in these two detectors are recorded as muon signals. The current HKMSDD-MSM has a wide angular acceptance for the azimuthal angle (Fig.
8B) and a narrow angular acceptance for the elevation angle (Fig. 8C). As shown in Fig. 8A, the scintillator strips are placed so that the HKMSDD-MSM does not receive the muons arriving from the direction opposite to the sea. However, some scattered upward-going muons could generate fake tracks. As shown in Fig. 8C, the HKMSDD-MSM does not have an acceptance for the angular region beyond ± w/x. This design helps to avoid recording muons that did not pass through seawater, which would eventually degrade the sensitivity to ∆h. As was suggested in reference15, a 2-cm thick lead plate is inserted between these detectors to remove background radiation that could be emitted from the concrete wall of the UUS. Figure 9 shows the fraction of the fake tracks generated by the scattered upward-going muons as a function of the elevation angle. The number of events was normalized to the value observed at θ = 0 ± 16.5 mrad (42 k events in 26 months), where θ is the elevation angle. The observation conditions used to produce this plot are summarized in the Method section. Since these data were taken in an open-sky environment, it is expected that this fraction will be somewhat lower in an underground environment. In conclusion, with the currently proposed setup, contamination by the near horizontal backward directed muons will be suppressed to a rate below 1%, and thus this effect will be neglected in the following discussions. Figure 10 shows the muon flux and detectable tide level variations ∆h as a function of the distance between the MSM and the shoreline (D) for different depths (5 m and 15 m) from the mean sea level. Here the detectable ∆h was derived from the standard deviation of the number of muons (∆N) recorded with the proposed detector configuration (Fig. 8) with the fixed parameters w = 15 cm and x = 2.4 m. If the deviation ∆h is sufficiently smaller than (d − H), the relative penetrating muon flux ∆N is a linear function of ∆h and can be expressed as ΔN = kΔh + C.
By utilizing this flux-thickness relationship, k and C can be calibrated with the astronomical tide height variations15. For longer D, since the flux loss caused by the soil between the muograph and the seawater is smaller than that caused by the water, the flux gained from higher elevation angles and shorter path lengths in the seawater compensates for the flux lost in the soil. Consequently, muographs located at deeper locations would record more muons per unit time and hence would produce better resolutions for determining ∆h. As the MSM depth (d) increases, the total solid angle over which muons are accepted at the MSM increases; however, since the ratio ∆h/(d − H) decreases, the sensitivity of the ∆h detection or the time resolution is degraded. These two factors are tradeoffs. The conclusion for dealing with this situation is as follows. For the purpose of real-time monitoring (< 1 h), the distance between the MSM and the shoreline (D) has to be less than 50 m and 20 m to attain ∆h < 50 cm and ∆h < 10 cm, respectively. For the purpose of monitoring on a longer time scale, if D is less than 20 m, sub-millimeter accuracy can be obtained per year. From this plot, we can also see that if the MSM depth is shallower, better ∆h resolution is achievable for shorter D, whereas better ∆h resolution is achievable for longer D if the MSM depth is deeper. This is because, although the ground soil along the muon path is thicker for longer D (degrading the sensitivity to ∆h), the MSM's acceptance solid angle is larger if the MSM depth is deeper. The current muographic tide monitor measures the tide height averaged from the shoreline to offshore; however, since the muon's path length in seawater increases, and hence the number of muons decreases, as the elevation angle of the incoming muons decreases, the observed tide levels mainly represent those in the near-shore regions.
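The linear calibration ΔN = kΔh + C described above can be illustrated with an ordinary least-squares fit against a known (astronomical) tide. The hourly counts below are synthetic — an idealised semidiurnal tide plus Gaussian counting noise — standing in for real MSM rates; k, C and every other number here are illustrative assumptions, not values from ref. 15.

```python
import math
import random

random.seed(1)
K_TRUE = -40.0    # counts/hour per metre of tide (negative: more water, fewer muons)
C_TRUE = 3000.0   # baseline hourly count rate
# Ten days of hourly samples of an idealised semidiurnal (M2, 12.42 h) tide.
tide = [0.8 * math.sin(2.0 * math.pi * t / 12.42) for t in range(240)]  # metres
counts = [random.gauss(K_TRUE * h + C_TRUE, math.sqrt(C_TRUE)) for h in tide]

def fit_line(x, y):
    """Ordinary least squares for y = k*x + c."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    k = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return k, my - k * mx

k, c = fit_line(tide, counts)
print(f"fitted k = {k:.1f} counts/h/m, C = {c:.1f} counts/h")

def tide_from_counts(n_obs):
    """Invert the calibration: read a tide height (m) from an hourly count."""
    return (n_obs - c) / k
```

Once k and C are fixed against the astronomical tide, `tide_from_counts` turns each subsequent hourly muon count into a tide-height reading.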
Figure 11 shows the fraction of the number of muons out of the total number of muons (integrated over the entire angular region) as a function of the distance from the shoreline for different distances between the MSM and the shoreline (D) for (d − H) = 5 m. As can be seen in this figure, the effect of the islands located further than 200 m from the MSM is negligible. In Cases (A) and (B) (Figs. 6 and 7), combining the calculation results shown in Fig. 11 with the fact that the distance from the MSM to the island is more than 180 m and the depth of the MSM is 5 m below sea level, it is theoretically derived that more than 95% of muons pass through the seawater located between the islands.

## Discussion

Seasonal muon flux variations could affect the sea level measurements. We investigate this possibility and its effect here. The seasonal muon flux variations mainly come from (A) barometric variations57 and (B) stratospheric variations57,58,59. Variations in the atmospheric pressure are compensated by the column density of seawater. This inverse barometer effect (IBE) compensation will mostly remove this effect from the sea level measurements. However, the energy loss per unit mass slightly differs between air and water. For example, while the CSDA range is 1.845 × 104 g cm−2 for 4 GeV muons in air, it is 1.810 × 104 g cm−2 for muons of the same energy in water49. This 2% difference will cause an uncertainty of 5 cm in sea level measurements. The stratospheric seasonal variations generally affect high-energy muons with energies above tens of GeV57,58,59,60, and may affect the seasonal flux of muons with the energies (~ 10 GeV) discussed here. This effect will be further studied with longer period measurements at the Tokyo-bay Seafloor HKMSDD. In urban areas, space on the ground is usually already occupied by buildings and structures, typically leaving only underground space to utilize as locations for new facilities.
Although extra costs are required for ventilation and emergency prevention and response systems, no maintenance of outer walls is needed and, in many cases, underground facilities require less temperature adjustment in comparison to above-ground spaces. Moreover, deep underground structures suffer significantly less damage from earthquakes than structures above ground. Consequently, cities with high population densities tend to develop more UUSs. The cost of producing a muographic tide monitor will be less than 6000 US dollars (USD): 3000 USD for a 1-m2 plastic scintillator sheet, 2000 USD for 2 PMTs, and 500 USD for a readout electronics unit. Additionally, there would be power and network connection fees that respectively cost 10–20 dollars/month and 50 dollars/month, since a stable power supply with a power generator backup system and gigabit Ethernet are equipped in the typical UUS in Tokyo. The most costly part is probably the rent of a space, which is on the order of 10 k dollars/year for a 100-m2 space. However, since the MSM itself doesn't require a 100-m2 space, it might be possible to share this cost with other tenants. Therefore, the total operational cost for 10 years will be 14 k dollars without rent, or at most 114 k dollars with rent. This operational cost is still much lower than the costs (200,000 USD61) required for constructing tide gauge stations. For a conventional tide gauge station, the sensors themselves are cheap, but they usually require expensive infrastructure. For example, conventional tide gauge stations require land at a harbor, a robust building usually made of reinforced concrete for long stable operation under harsh environments, and reliable electricity and network environments including a backup power generator, besides drilling a water well for tide gauging. Tide gauge stations also cost a lot to operate; for example, it costs more than 30 k dollars just to repair a tide gauge station62.
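The 10-year cost arithmetic above can be laid out explicitly. All figures are the estimates quoted in the text (the rounded 6000-USD device cost, 10–20 USD/month power taken near its midpoint, 50 USD/month network, ~10 k USD/year rent):

```python
# Back-of-the-envelope 10-year cost model using the paper's estimates (USD).
YEARS = 10
device = 6000                 # < 6000 USD: scintillator sheet + 2 PMTs + readout, rounded up
power_per_month = 15          # text quotes 10-20 USD/month
network_per_month = 50
rent_per_year = 10_000        # order-of-magnitude rent for a 100-m^2 space

running = (power_per_month + network_per_month) * 12 * YEARS
total_without_rent = device + running
total_with_rent = total_without_rent + rent_per_year * YEARS

print(f"10-year cost without rent: {total_without_rent} USD")  # 13800 -> ~14 k in the text
print(f"10-year cost with rent:    {total_with_rent} USD")     # 113800 -> ~114 k in the text
print("one conventional tide gauge station build: 200000 USD")
```

Even the with-rent total stays well below the quoted 200 k USD construction cost of a single tide gauge station.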
Alternative techniques, for example ocean bottom pressure measurements, can use cheap pressure sensors that cost less than 100 dollars63. However, in addition to the problems arising from the intrinsic drift of the sensors, the maintenance of underwater sensors is costly. Moreover, for the purpose of monitoring, enormous capital investments need to be made to create underwater network environments for reliable data transfer. More than 30 nodes of a muographic tide monitor network would be possible within this budget. Moreover, a stainless welded proportional counter (SWPC) currently under development will further reduce the deployment costs. Since the SWPC structure is simple (consisting of a tungsten wire and a stainless steel tube), the price of one node of the tide monitor network could be reduced to less than 1000 USD in the future. This simple structure also makes it environmentally robust. The HKMSDD-MSM technique is directly applicable to any coastal city in the world, such as New York, Boston, Miami, San Francisco, Hawaii, Tokyo, Amsterdam, Lisbon, Valencia, Venice, Naples, Marseille, Copenhagen, etc., and it is expected that as cities continue to grow more populated, potential muograph locations will also increase. The currently proposed technique is applicable not only to near-shore UUSs, but also to land lying below sea level, such as many regions of the Netherlands. In such countries, spatiotemporally dense tidal measurements are particularly important for accurate forecasting of storm surges. In the Netherlands, a warning system has been developed and operated by the Dutch storm surge warning service (SVSD) in cooperation with the Royal Netherlands Meteorological Institute (KNMI). The system is based on a numerical hydrodynamic model called the Dutch continental shelf model (DCSM)49. Since the 1990s, the accuracy of the system has been improved by incorporating observations of tide gauges.
KNMI's automatic production line (APL) has been developed to produce numerical forecasts. However, during the course of this automatic production, the assimilation of even a small number of erroneous observation data will harm the forecast. Figure 12 shows an example of locations in the Wadden Sea that would be appropriate for installation of HKMSDD-MSMs. The areas marked along the shorelines are significantly lower than sea level (< − 2 m), and thus the proposed muographic monitors could simply be placed on the ground floor of any building near the shorelines to start tide measurements. By incorporating a low-cost, dense, robust muographic tide monitor network to work in conjunction with the existing system, the quality and accuracy of forecasts would be improved. The quality of tidal observations depends on observational density in space and time. With current tidal measurements, high observational density in time is possible, but spatial observational density is low. In order to address processes ranging from sub-hourly phenomena such as meteotsunamis to seasonal and interannual tidal variations, the short spatial scales associated with the high-frequency variables have to be covered21. Continuity and smoothness of the spatiotemporal series of total water level, surge and their deviation would be desired, and this is achievable with the HKMSDD-MSM.
The benefits of a muography-based tide monitoring system in comparison with preexisting systems can be summarized as follows: (A) tide variations can be remotely measured, and thus the list of potential locations for safe and stable deployment increases; (B) since the sensors do not have to directly touch water (the tide can be monitored from underground structures and through land/rock obstacles), maintenance costs are more reasonable; and (C) since muographs have no mechanically moving components, the possibility of malfunctions is greatly reduced in comparison to the legacy tide gauge stations, and thus it is easier to realize long and stable operating conditions. On the other hand, the caveat of this muographic technique is that (due to the limited muon flux) compromises in either time resolution or sensitivity to tide levels must be made. For this reason, when a smaller detector is chosen for the muographic technique, either time resolution or sensitivity may not be as high as would be possible with conventional tide gauge stations. In conclusion, due to the high cost and fewer available deployable locations, reliable tide gauge stations can only be constructed in small numbers and spatially dispersed. On the other hand, although the temporal resolution and sensitivity of the muographic systems are likely to be inferior to those of conventional tide gauge stations, the muographic systems can be more easily deployed and their operational costs are low; thus, a more spatially dense network can be constructed. It is anticipated that by creating a denser network consisting of both the legacy tide gauge stations and MSMs, dramatic improvements to the quality of the surge warning service could be implemented in the near future.

## Method

### Scattered upward-going muon measurements

Figure 13 shows the experimental setup for measuring scattered upward-going muons. More detailed explanations can be found elsewhere37.
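As a numerical cross-check, the viewing angles and angular resolution quoted for the tracker described in this section follow directly from its dimensions (1500-mm strip length, 100-mm strip width, 3000 mm between the outermost layers); a minimal sketch:

```python
import math

STRIP_LEN = 1500.0    # mm, scintillator strip length
STRIP_WIDTH = 100.0   # mm, strip width (sets the angular resolution)
LEVER_ARM = 3000.0    # mm, distance between the outermost detector layers

half_view = math.atan(STRIP_LEN / LEVER_ARM)   # maximum track angle seen by both layers
resolution = STRIP_WIDTH / LEVER_ARM           # small-angle approximation

print(f"viewing angle: +/-{half_view * 1000:.0f} mrad (+/-{math.degrees(half_view):.1f} deg)")
print(f"angular resolution: {resolution * 1000:.0f} mrad")
# atan(0.5) = 464 mrad (26.6 deg); the text rounds this to +/-460 mrad (+/-26 deg).
```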
The experimental setup consists of 6 MSM layers and 5 lead blocks for radiation shielding. The thickness of each lead block is 100 mm. Each of the MSM layers consists of 15 horizontally and 15 vertically aligned MSMs. Each MSM measures 1500 mm in length, 100 mm in width, and 20 mm in thickness. The distance between the uppermost-stream detector and the lowermost-stream detector was 3000 mm. Therefore, the azimuthal (Φ) and elevation (Θ) viewing angles are respectively ± 460 mrad (± 26°) and 460 mrad (26°), the angular resolution was 33 mrad, and only linear trajectories were recorded as muon events. There is a mountain right in front of one side of this setup, but there are no obstacles on the other side. Here, we define the open-sky direction as the forward direction and the mountain-side direction as the backward direction. The rock thicknesses tend to gradually decrease from direction (1) to direction (3) in Fig. 13, but they are thicker than 2000 m for muons arriving at the detector at elevation angles less than 100 mrad. Since the flux of the 100-mrad muons after passing through 2,000 m of rock is 4.2 × 10−4 m−2 sr−1 s−1, which is equivalent to 1/2,500 of the open-sky flux of muons arriving at the same elevation angle, the muonic component that directly arrived from the backward direction, out of all events recorded as backward-directed muons, was assumed to be zero for elevation angles less than 100 mrad in this experiment. The plot shown in Fig. 9 was obtained by integrating the number of muons (N) over the azimuthal angle range between −Φ and Φ for different elevation angles (θ). The geometrical acceptance of the current setup was applied to correct the elevation-angle distribution.

## References

1. Hunter, J. A simple technique for estimating an allowance for uncertain sea-level rise. Clim. Change 113, 239–252 (2012). 2. Tebaldi, C., Strauss, B. H. & Zervas, C. E.
Modelling sea level rise impacts on storm surges along US coasts. Environ. Res. Lett. 7, 014032 (2012).
3. Gregory, J. M. et al. Concepts and terminology for sea level: Mean, variability and change, both local and global. Surv. Geophys. 40, 1251–1289 (2019).
4. Haasnoot, M., Kwakkel, J. H., Walker, W. E. & TerMaat, J. Dynamic adaptive policy pathways: A method for crafting robust decisions for a deeply uncertain world. Glob. Environ. Change 23, 485–498 (2013).
5. Slangen, A. et al. The impact of uncertainties in ice sheet dynamics on sea-level allowances at tide gauge locations. J. Mar. Sci. Eng. 5, 21 (2017).
6. Rasmussen, D. J. et al. Extreme sea level implications of 1.5 °C, 2.0 °C and 2.5 °C temperature stabilization targets in the 21st and 22nd centuries. Environ. Res. Lett. 13, 034040 (2018).
7. Frederikse, T. et al. Antarctic ice sheet and emission scenario controls on 21st-century extreme sea-level changes. Nat. Commun. 11, 390 (2020).
8. Nicholls, R. J. et al. Sea-level rise and its possible impacts given a ‘beyond 4 °C world’ in the twenty-first century. Philos. Trans. R. Soc. A 369, 161–181 (2011).
9. Dahl, K. A., Fitzpatrick, M. F. & Spanger-Siegfried, E. Sea level rise drives increased tidal flooding frequency at tide gauges along the US East and Gulf Coasts: Projections for 2030 and 2045. PLoS ONE 12, e0170949 (2017).
10. Guo, X. & Yanagi, T. Three-dimensional structure of tidal current in the East China Sea and the Yellow Sea. J. Oceanogr. 54, 651–668 (1998).
11. Ji, Z. G. et al. Three-dimensional modeling of hydrodynamic processes in the St. Lucie Estuary. Estuar. Coast. Shelf Sci. 73, 188–200 (2007).
12. Stanev, E. V. Understanding Black Sea dynamics: Overview of recent numerical modelling. Oceanography 18, 56–75 (2005).
13. Lermusiaux, P. F. J. Evolving the subspace of the three-dimensional multiscale ocean variability: Massachusetts Bay. J. Mar. Syst. 29, 385–422 (2001).
14. Gao, X. & Yanagi, T.
Three dimensional structure of tidal currents in Tokyo Bay, Japan. Lamer 32, 173–185 (1994).
15. Tanaka, H. K. M. et al. First results of undersea muography with the Tokyo-Bay Seafloor Hyper-Kilometric Submarine Deep Detector. Sci. Rep. 11, 19485. https://doi.org/10.1038/s41598-021-98559-8 (2021).
16. Matsumoto, K. et al. GOTIC2: A program for computation of oceanic tidal loading effect. J. Geod. Soc. Jpn. 47, 243–248 (2001).
17. Melet, A. et al. Under-estimated wave contribution to coastal sea-level rise. Nat. Clim. Change 8, 234–239 (2018).
18. Woodworth, P. L. et al. Forcing factors affecting sea level changes at the coast. Surv. Geophys. 40, 1351–1397 (2019).
19. Dodet, G. et al. Wave runup over steep rocky cliffs. J. Geophys. Res. 123, 7185–7205 (2018).
20. Caires et al. Extreme Still Water Levels. http://www.waveworkshop.org/10thWaves/Papers/10thWW_CDDG_article_final.pdf (2007).
21. Marsooli, R. & Lin, N. Numerical modeling of historical storm tides and waves and their interactions along the U.S. East and Gulf coasts. J. Geophys. Res. 123, 3844–3874 (2018).
22. Abdalazeez, A. A. A. Wave runup estimates at gentle beaches in the northern Indian Ocean, Master thesis, University of Bergen. https://aquadocs.org/bitstream/handle/1834/4557/Wave%20runup.pdf?sequence=1&isAllowed=y (2012).
23. Seenipandi, K. et al. In Modeling of Coastal Vulnerability to Sea-Level Rise and Shoreline Erosion Using Modified CVI Model (eds Rani, M. et al.) 315–340 (Elsevier, 2021).
24. Tarpanelli, A. & Benveniste, J. In On the Potential of Altimetry and Optical Sensors for Monitoring and Forecasting River Discharge and Extreme Flood Events (eds Viviana Maggioni, V. & Massari, C.) 267–287 (Elsevier, 2019).
25. Frank Comas, A., et al. Implementation of a Low-Cost Ultra-Dense Tide Gauge Network in the Balearic Islands. https://upcommons.upc.edu/handle/2117/356134 (2021).
26. Liu, N. et al.
High spatio-temporal resolution deformation time series with the fusion of InSAR and GNSS data using spatio-temporal random effect model. IEEE Trans. Geosci. Remote Sens. 57, 1–17. https://doi.org/10.1109/TGRS.2018.2854736 (2018).
27. Testut, L. The sea level at Port-aux-Français, Kerguelen Island, from 1949 to the present. Ocean. Dyn. 56, 464–472 (2006).
28. Lourey, M. J., Dunn, J. R. & Waring, J. A mixed-layer nutrient climatology of Leeuwin Current and Western Australian shelf waters: Seasonal nutrient dynamics and biomass. J. Mar. Syst. 59, 25–51 (2006).
29. Zyla, P. A. et al. The review of particle physics. Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
30. Tanaka, H. K. M. Development of the muographic tephra deposit monitoring system. Sci. Rep. 10, 14820. https://doi.org/10.1038/s41598-020-71902-1 (2020).
31. Tanaka, H. K. M. In Principles of Muography and Pioneering Works (eds Laszlo, O. et al.) 1–17 (Wiley, 2019).
32. Jourde, K. et al. Muon dynamic radiography of density changes induced by hydrothermal activity at the La Soufrière of Guadeloupe volcano. Sci. Rep. 6, 33406 (2016).
33. Rosas-Carbajal, M. et al. Three-dimensional density structure of La Soufrière de Guadeloupe lava dome from simultaneous muon radiographies and gravity data. Geophys. Res. Lett. 44, 6743–6751 (2017).
34. Tanaka, H. K. M. et al. High resolution imaging in the inhomogeneous crust with cosmic-ray muon radiography: The density structure below the volcanic crater floor of Mt. Asama, Japan. Earth Planet. Sci. Lett. 263, 104113 (2007).
35. Tanaka, H. K. M. et al. Imaging the conduit size of the dome with cosmic-ray muons: The structure beneath Showa-Shinzan Lava Dome, Japan. Geophys. Res. Lett. 34, 053007 (2007).
36. Tanaka, H. K. M., Uchida, T., Tanaka, M., Shinohara, H. & Taira, H. Cosmic-ray muon imaging of magma in a conduit: Degassing process of Satsuma-Iwojima Volcano, Japan. Geophys. Res. Lett. 36, L01304 (2009).
37. Tanaka, H. K. M., Kusagaya, T. & Shinohara, H.
Radiographic visualization of magma dynamics in an erupting volcano. Nat. Commun. 5, 3381 (2014).
38. Tanaka, H. K. M. Instant snapshot of the internal structure of Unzen lava dome, Japan with airborne muography. Sci. Rep. 6, 39741 (2016).
39. Olah, L., Tanaka, H. K. M., Ohminato, T. & Varga, D. High-definition and low-noise muography of the Sakurajima volcano with gaseous tracking detectors. Sci. Rep. 8, 3207 (2018).
40. Lo Presti, D. et al. Muographic monitoring of the volcano-tectonic evolution of Mount Etna. Sci. Rep. 10, 11351. https://doi.org/10.1038/s41598-020-68435-y (2020).
41. Tioukov, V. et al. First muography of Stromboli volcano. Sci. Rep. 9, 6695 (2019).
42. Tanaka, H. K. M. Muographic mapping of the subsurface density structures in Miura, Boso and Izu peninsulas, Japan. Sci. Rep. 5, 8305 (2015).
43. Thompson, L. F. et al. Muon tomography for railway tunnel imaging. Phys. Rev. Res. 2, 023017. https://doi.org/10.1103/PhysRevResearch.2.023017 (2020).
44. Oláh, L. et al. CCC-based muon telescope for examination of natural caves. Geosci. Instrum. Method Data Syst. 1, 229–234 (2012).
45. Cimmino, L. et al. 3D muography for the search of hidden cavities. Sci. Rep. 9, 2974. https://doi.org/10.1038/s41598-019-39682-5 (2019).
46. Saracino, G. et al. Imaging of underground cavities with cosmic-ray muons from observations at Mt. Echia (Naples). Sci. Rep. 7, 1181 (2017).
47. Tanaka, H. K. M., Sumiya, K. & Oláh, L. Muography as a new tool to study the historic earthquakes recorded in ancient burial mounds. Geosci. Instrum. Method Data Syst. 9, 357–364. https://doi.org/10.5194/gi-9-357-2020 (2020).
48. Morishima, K. et al. Discovery of a big void in Khufu’s Pyramid by observation of cosmic-ray muons. Nature 552, 386–390 (2017).
49. Groom, D. E. et al. Muon stopping-power and range tables: 10 MeV–100 TeV. At. Data Nucl. Data Tables 78, 183–356 (2001).
50. Allkofer, O. C. et al. Cosmic ray muon spectra at sea-level up to 10 TeV. Nucl. Phys. B 259, 1–18 (1985).
51.
Jokisch, H. et al. Cosmic-ray muon spectrum up to 1 TeV at 75° zenith angle. Phys. Rev. D 19, 1368 (1979).
52. Achard, P. et al. Measurement of the atmospheric muon spectrum from 20 to 3000 GeV. Phys. Lett. B 598, 15–32 (2004).
53. Bobylev, N. Mainstreaming sustainable development into a city’s Master plan: A case of Urban Underground Space use. Land Use Policy 26, 1128–1137 (2009).
54. Verlaan, M. et al. Operational storm surge forecasting in the Netherlands: Developments in the last decade. Philos. Trans. R. Soc. A 363, 1253–1253441 (2005).
55. Eljen Technology. General Purpose of EJ-200, EJ-204, EJ-208, EJ-212. https://eljentechnology.com/products/plastic-scintillators/ej-200-ej-204-ej-208-ej-212 (2022).
56. Kuraray. Wavelength Shifting Fibers. http://kuraraypsf.jp/psf/ws.html (2022).
57. Tilav, S. et al. Atmospheric Variation as Observed by IceCube. arXiv:1001.0776 (2010).
58. Daya Bay Collaboration. Seasonal variation of the underground cosmic muon flux observed at Daya Bay. J. Cosmol. Astropart. Phys. 2018, 001. https://doi.org/10.1088/1475-7516/2018/01/001 (2018).
59. Borexino Collaboration. Modulations of the cosmic muon signal in ten years of Borexino data. J. Cosmol. Astropart. Phys. 2019, 046. https://doi.org/10.1088/1475-7516/2019/02/046 (2019).
60. Abrahao, T. et al. Cosmic-muon characterization and annual modulation measurement with Double Chooz detectors. J. Cosmol. Astropart. Phys. 2017, 017. https://doi.org/10.1088/1475-7516/2017/02/017 (2017).
61. The Ministry of Land, Infrastructure, Transport and Tourism. 2011 Administrational Enterprise Review Sheet. https://www.mlit.go.jp/common/000169227.pdf (2011).
62. Ministry of Land, Infrastructure, Transport and Tourism. Repairment of the Hirado-Seto Ocean Line Tide Gauge Station in 2020. https://www.pa.qsr.mlit.go.jp/nagasaki/keiyaku_kekka/keiyaku_kekka_img/kozi_sekkei_pdf/R2_hiradoseto_kentyoujo_hosyuu.pdf (2020).
63. Giardina, M. F. et al. Development of a low-cost tide gauge. J. Atmos. Ocean. Technol.
17, 575–583 (2000).

## Author information

### Contributions

H.K.M.T. wrote the text. H.K.M.T. prepared the figures. H.K.M.T. reviewed the manuscript.

### Corresponding author

Correspondence to Hiroyuki K. M. Tanaka.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Tanaka, H.K.M. Muography for a dense tide monitoring network. Sci Rep 12, 6725 (2022). https://doi.org/10.1038/s41598-022-10373-y
https://jarskiewczasy.pl/stone/Nov_2007_Thu/
# Ball Conical Inside

### Wide-ish Conical Holes in Wooden Ball - Woodworking Talk ...

Jul 14, 2018· There would be a hole for the screw on the back side of the ball. Or, a standard twist drill bit could be used if you have a way to hold the ball while drilling. You'll get about a 60° taper, not conical, but a bit can be ground to a different profile or you can follow it with a rotary wood rasp to make the hole conical.

### Round Balls vs. Conicals - YouTube

Nov 17, 2018· This video is an accuracy comparison between round balls and conical bullets. Both will be fired, using the 1847 Colt Walker. Be sure to subscribe to my chan...

### HOW BALL MILL WORKS? - energosteel.com

Oct 10, 2016· The ball mill is a hollow drum closed with loading and unloading end caps, filled with grinding media and rotated around its axis. The drum of the ball mill (Pic. 1) is a hollow cylinder of steel, lined inside with armor lining plates which protect it from impact and friction effects of the balls and the grinding material.

### Minie Ball - HistoryNet

Although the Minié ball was conical in shape, it was commonly referred to as a "ball," due to the round shape of the ammunition that had been used for centuries. Made of soft lead, it was slightly smaller than the intended gun bore, making it easy to load in combat. ... Because the ammunition had to fit inside the barrel tightly in order to ...

### BALL -vs- CONICAL BULLITS 1851 NAVY | Graybeard Outdoors

Feb 14, 2015· Round ball VS traditional conical, with max. loads the round ball is superior in close range knock down. In modern made percussion revolvers conicals are usually more accurate. If you want hunting or self defence bullets for modern percussion revolvers the LBT ball bullet or cap&ball wad cutter is the best. The twist of modern percussion revolvers is better suited to bullets, but antique revolvers ...

### kinematics - Circular motion in a cone - Physics Stack ...
A FBD (sorry I didn't have time to upload one) shows that the net force acting on the ball in the $\hat{r}$ direction, which is the only one we care about, is $$F_r^{NET} = m\frac{v_0^2}{r(t)} - mg\sin\theta.$$ However, for the ball to be going in a circle, the total forces along the $\hat{r}$ direction must be zero, as I …

### MLconicals - NORTH AMERICAN MUZZLELOADER HUNTING

Nov 11, 2015· The 10-grain heavier Lee REAL bullet shot extremely well out of the 1-in-48 twist Hawken bore. Even with 80 grains of GOEX FFFg black powder, shooting the open factory sights on the rifle, I found that I could consistently keep hits inside of 3 inches at 100 yards. The best I had gotten with the 240-grain Maxi-Ball had been more like 4 1/2-inches at a hundred yards.

### 9 Best Conical Fermenters for Serious Homebrewers (2019 ...

Nov 19, 2019· Best Plastic Conical Fermentors. From online research, here are the best plastic conical fermentors in 2019: 1. FastFerment Conical Fermenter. The beginner-friendly 7.9-gallon FastFerment conical fermenter provides all the benefits of conical fermenters and comes with the added value of an attached yeast collection ball, a bottle filling attachment and an included ThermoWell.
### Bal-tec - Ball Check Valves

This device has a spherical or conical cup that fits over the ball on one side and an extended boss on the opposite side that locates on the inside diameter of the spring. Weight of Large Balls. The mass of a specific diameter ball is fixed. As the design parameters of the valve are changed, to accommodate a larger flow rate, the diameter of ...

### Ball Mill - an overview | ScienceDirect Topics

The ball mill is a tumbling mill that uses steel balls as the grinding media. The length of the cylindrical shell is usually 1–1.5 times the shell diameter (Figure 8.11). The feed can be dry, with less than 3% moisture to minimize ball coating, or slurry containing 20–40% water by weight.

### Ball mill - Wikipedia

A ball mill is a type of grinder used to grind or blend materials for use in mineral dressing processes, paints, pyrotechnics, ceramics, and selective laser sintering. It works on the principle of impact and attrition: size reduction is done by impact as the balls drop from near the top of the shell. A ball mill consists of a hollow cylindrical shell rotating about its axis.

### Killing Power of Patched Round Ball (PRB) | The High Road

Apr 24, 2008· This is because it tends to dump all of its energy inside the target when it hits. A roundball in a caplock revolver actually has better stopping power, within a reasonable range, than a conical ball, although the conical will generally be more accurate and retain its energy farther downrange.

### Ethical deer range for a 50 cal round ball?
| The High Road

Nov 03, 2018· There are a few guys who hunt elk with a .50 cal patched ball, but they keep their ranges much shorter. It is effective on them so there isn’t a need to use a bullet or conical for a mule deer. But you may just find your rifle shoots much better with one, and there certainly isn’t a problem using one over a ball.

### Foraging Puffball Mushrooms - Practical Self Reliance

Oct 20, 2018· If a mushroom is a pure white on the inside, with no sign of gills at all, then it’s a puffball. Still, there are a few puffballs that are toxic, so a lack of gills isn’t a sure sign that you have an edible puffball mushroom. A lack of gills and a pure white interior are both required to identify edible species.

### Co Mill, Conical mill, Comill, Cone Mill: Pharmaceutical ...

A co mill (or conical mill) is a machine used to reduce the size of pharmaceutical material into a uniform measure. It is widely used in Pharma. ... Inside the cylinder, there are small metal balls. The powder is fed inside the ball mill, and as it rotates the balls impact the bulky raw material. An illustration of a ball …

### The Bullet That Changed History - The New York Times

Aug 31, 2012· The Minié ball (properly pronounced “min-YAY” after its developer, the French Army officer Claude-Étienne Minié, but pronounced “minnie ball” by the Americans) wasn’t a ball but a conical-shaped bullet. Popularized during the Crimean War, it was perfected in early 1850s America.

### Conical Fermentors/Unitanks from Brewers Hardware

Brewers Hardware offers a variety of unique and hard to find items to the home and craft brewing markets. Specializing in Tri Clover compatible sanitary fittings and other stainless steel parts and accessories, our selection is constantly expanding.

### Ball Mills - an overview | ScienceDirect Topics

The ball mill is a cylindrical drum (or cylindrical conical) turning around its horizontal axis.
It is partially filled with grinding bodies: cast iron or steel balls, or even flint (silica) or porcelain bearings. Spaces between balls or bearings are occupied by the load to be milled.

### HARDINGE CONICAL BALL MILL | C.W. Wood Machinery Inc.

One Hardinge 4-1/2' Diameter x 16" Cylinder length wet grinding Conical Ball Mill with 1/2' steel plate shell with access man hole. Shell Speed: 29 RPM Counter-Clockwise Rotation (viewed from feed end) Mill Drive: Falk Motoreducer With 20 HP Motor 1760 RPM, 230/460/60/3 Steel Balls For Inside Grinding. Machine: Erection and Operation ...
### Datasheets | associatedpile.com

Inside Flange, Conical Points; Conical Points are the preferred end closure for pipe piles. The conical shape pushes the earth aside and preserves friction. This model is the heavy duty conical which generally is used for breaking through thick strata and difficult driving conditions.
### Ball Joint Sleeves - Free Shipping on Orders Over $99 at ...

Sleeve, Racing Ball Joint, Steel, 2.250 in. Outside Diameter, 1.830 in. Inside Diameter, Each. Part Number: AFC-20043 Not Yet Reviewed

### 7.9G FastFerment FAQs | FastBrewing & WineMaking

The valve is placed on the conical with the union fitting side on the bottom. It screws into the conical vessel and the union fitting is on the bottom to screw onto either the collection ball or the filling hose attachment (see picture below to show the valve positions).

### .44 Cal Cap & Ball Pistol (Pietta) - What cast bullet do I ...

Oct 02, 2016· This is a .451 size ball intended for cap and ball .44 caliber revolvers, etc. A .454 bullet might fit too. You don't really want to fire conical bullets out of a brass frame revolver that doesn't have the top strap on it like they do with the Remington replicas. It may cause the brass frame to be damaged.

### A Beginner's Guide to Shells | Southern Living

Jun 30, 2020· Nearly all of the 600 species of Cones around the world have a similar distinct design: a conical shape, flat top, and a slit-like lip running along its length. This shell’s body can be smooth or angled with rounded or pointed knobs. Cones can range in height from one inch to 8 inches high.
https://maurobringolf.ch/2018/03/learning-haskell-implementing-map-with-foldr/
First things first: I am a beginner with Haskell, so if you find anything odd please ask or correct me. Along with an introductory lecture in functional programming I am currently reading learnyouahaskell.com. One of the things the author likes to do is implement copies of standard library functions to explain how they work. I think this is a good and fun way to learn. In fact I wrote a post last year taking exactly this approach about reduce, map, filter and find in JavaScript. I used reduce to implement the other three functions1. Since these are very core functional programming concepts, they play a central role in Haskell and appear quite early in the book. The functions map, filter and find have the same name in JS and Haskell, but JavaScript’s reduce is called foldr in Haskell. Technically I should be able to take my JavaScript functions and convert them one-to-one to Haskell. After I did this for map, I realized that Haskell lets me express the relation between the functions in a really pure form. This pure form is called pointfree style. So instead of porting filter and find from JavaScript I decided to try and simplify the map implementation as much as possible. This is what I started with:

    -- Ported from JavaScript in previous post
    _map f xs = foldr (\x zs -> f x : zs) [] xs

If you look closely you can spot one difference. In JavaScript I used the spread operator to append new elements at the end of the list, while in Haskell I use : to prepend to the list. The reason is that I don’t know of an “append” function, but do know the prepend function :. Since map treats each list element in isolation, it does not matter whether we build the resulting list from left or right. You can already see that Haskell has a lot less “syntax overhead” for this code. But we can simplify things way more using two ideas from functional programming. The first one is called currying, which allows us to omit the xs argument.
    -- Use currying for second argument xs of _map
    _map f = foldr (\x zs -> f x : zs) []

If you know about currying, you should be able to understand why we can do this. In case you do not, I will try to explain it in simple terms: Haskell knows that foldr is a function which takes three arguments: a function, an initial value and a list. Here we call foldr with just two arguments and omit the list to be folded (previously xs). In this case, some programming languages will throw an error (Java, C) and some will initialize the third argument to be undefined (JavaScript). Functional programming languages like Haskell do something else. The result of the call with two arguments is a new function which only takes one argument. This last argument will become the third one to the original function call. Still confused? I highly recommend the Funfunfunction episode on the topic2 then. Not confused anymore? Let’s do more currying then:

    -- Use currying for second argument zs of lambda
    _map f = foldr (\x -> (:) (f x)) []

In order to use currying inside the lambda we have to write its return value in the form function argument argument. The additional parentheses are needed to “convert” the : “operator” into a function and to make sure f x is applied before :. Other than that, this is the exact same simplification as the first step. I figured out one more step that allows us to curry the lambda completely and omit its x argument too:

    -- v4
    _map f = foldr ((:) . f) []

Here I introduced the dot operator . which denotes function composition. Basically, fun1 . fun2 is a function that feeds its argument into fun2 and the resulting value into fun1. The result from the fun1 call is then returned as the result of fun1 . fun2. Therefore the result is the same as nesting function calls. The problem with nested function calls is that we can only do it when we actually call the function.
In our example (:) (f x) is a pair of nested function calls, the result of one being the argument to the other. The dot operator simply returns a function which behaves exactly like that. So the intermediate step of the lambda is \x -> ((:) . f) x. This form now lets us use currying and arrive at the last form above. I am sure that there is a way to get rid of the last f argument too and express _map completely pointfree. I will try to do this too, but finish this post here. The takeaway is that pointfree style lets us express how functions are composed of others by using operations on functions such as composition rather than saying how individual values (arguments) pass through the different functions.
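For the curious, here is one candidate for that fully pointfree form — a sketch of my own, not something verified against the original post. It builds on the v4 version by using flip to reorder foldr's arguments and an operator section of the dot operator itself:

```haskell
-- Sketch: fully pointfree _map, building on v4.
-- `flip foldr []` is a function that takes the combining function
-- (now the last argument) and folds a list starting from [].
-- `((:) .)` is the section \g -> (:) . g from the previous step,
-- so composing the two recovers: _map f = foldr ((:) . f) [].
_map :: (a -> b) -> [a] -> [b]
_map = flip foldr [] . ((:) .)
```

Loading this in GHCi, `_map (+1) [1,2,3]` gives `[2,3,4]`, matching the built-in map.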
https://www.nature.com/articles/s41598-017-08572-z?error=cookies_not_supported&code=a25df005-7c3a-4372-8c07-dac3cde9d3f5
Article | Open | Published:

# Mussel Inspired Polynorepinephrine Functionalized Electrospun Polycaprolactone Microfibers for Muscle Regeneration

## Abstract

Electrospun scaffolds with excellent mechanical properties, high specific surface area and a commendable porous network are widely used in tissue engineering. Improving the hydrophilicity and cell adhesion of hydrophobic substrates is the key to enhancing the effectiveness of electrospun scaffolds. In this study, polycaprolactone (PCL) fibrous membranes with appropriate fiber diameters were selected and coated with mussel-inspired polynorepinephrine (pNE); norepinephrine is a catecholamine that functions as a hormone and neurotransmitter in the human brain. The membrane with smaller-diameter fibers, a relatively large specific surface area and suitable pNE functionalization provided a more suitable microenvironment for cell adhesion and proliferation both in vitro and in vivo. The regenerated muscle layer integrated well with the fibrous membranes and the surrounding tissues at the impaired site, and thus the mechanical strength reached that of the native tissue. The underlying molecular mechanism is mediated via inhibition of myostatin expression through the PI3K/AKT/mTOR hypertrophy pathway. The properly functionalized fibrous membranes hold potential for repairing muscle injuries. Our current work also provides insight for the rational design and development of better tissue engineering materials for skeletal muscle regeneration.

## Introduction

Tissue engineering provides a potential treatment for a wide range of diseases. The musculoskeletal system, including bone1, 2, tendon/ligament3 and muscles4, 5, has become a major target for tissue engineering. The efficacy of musculoskeletal tissue engineering relies highly on the scaffolds, which should essentially support cells to form neotissues at the target sites.
The scaffold used for musculoskeletal tissue engineering should be able to withstand a certain tensile force, reduce cicatrix formation and provide a suitable microenvironment for cell growth and proliferation, and the degradation cycle of the polymer scaffold should match the muscular remodeling period. Different fabrication methods have been used to prepare biomaterials with various properties for rebuilding the musculoskeletal system according to the specific properties of the target tissues. Electrospinning has attracted wide attention as an excellent way to prepare continuous non-woven fibers6 at the nano- and micro-scale with high porosity, large surface area and controllable mechanical properties. Electrospun membranes closely imitate the natural extracellular matrix (ECM) and have thus been widely used for various biomedical applications, such as tissue engineering, drug delivery and regenerative medicine7, 8. Soft electrospun polymer membranes are recognized as ideal materials for muscle injury recovery owing to their good cytocompatibility, biodegradability and excellent mechanical properties9. Moreover, their ECM-like structure improves cell attachment on the membrane and provides a better microenvironment for cell growth. Biodegradable polycaprolactone (PCL)10, poly(L-lactic acid) (PLA), polyglycolic acid (PGA) and poly(L-lactic acid)/polyglycolic acid (PLGA) copolymers have been reported as musculoskeletal tissue engineering scaffold materials11,12,13,14,15,16. PCL has good biocompatibility and low immunogenicity in vivo and slowly degrades in the body into non-toxic metabolic products. The degradation time frame of PCL fibers can even exceed 2 years17, which suits the metabolism and regeneration time of muscles. Muscle stem cells are attracted to the injured area and become muscle cells during muscle regeneration18.
In addition to biocompatibility, the physiological compatibility between cells and the biomaterial surface is critical to support tissue reconstruction of skeletal muscle in the injured region. To date, various biological macromolecules including chitosan, gelatin, collagen19, laminin and growth factors have been used to modify scaffold surfaces via EDAC coupling, electrostatic layer-by-layer deposition, self-assembly grafting and so on. Since its discovery in 200720, polydopamine (pDA) as a mussel-inspired, substrate-independent coating method has been applied to a range of biomaterials (microfluidics, catalysts, bio-mineralization21, surface modifications22, sensitive detection23 and neural interfaces). Dopamine (DA) and norepinephrine (NE), as catechol molecules, can enhance surface affinity via self-polymerization on virtually any surface in an alkaline environment. Compared with pDA, the polymerization process of pNE is much more controllable, generating a more uniform and thinner coating, so that the pNE coating can be applied to nano/submicro-featured scaffold surfaces24. Although pDA has been applied to biomaterials in many studies, pNE coating has only recently been applied for neural interfacing, where the introduced catechol groups from pNE could not only anchor collagen to enhance cell adhesion but also localize nerve growth factor to promote neuron-like differentiation25. pNE-functionalized biomaterials have not yet been investigated as tissue engineering scaffolds in vivo. In addition, previous researchers have explored the effect of fiber diameter on cell adhesion and proliferation, showing that fibrous membranes with smaller diameters generally result in better cell migration, spreading, signaling and replication26. Therefore, the fiber diameter of electrospun membranes and their surface affinity play important roles in tissue engineering.
Herein, in order to make the scaffold withstand tensile force, reduce cicatrix formation and provide a suitable microenvironment for cell growth and proliferation, we designed and fabricated PCL fibrous membranes with different fiber diameters (2 μm and 10 μm) via electrospinning, named the 2 P and 10 P membranes, respectively. The membranes were then coated with pNE via self-polymerization. The collective impacts of surface affinity and fiber diameter on the attachment and proliferation of muscle cells were investigated. These PCL fibrous membranes with or without pNE coating were used as scaffolds for further skeletal muscle regeneration studies. Figure 1 shows the experimental design, including 1) fabrication of pNE-coated PCL fibrous membranes, 2) pNE-coated PCL fibrous membranes regulating the proliferation and morphology of muscle cells in vitro, and 3) pNE-coated PCL fibrous membranes promoting muscle regeneration in vivo.

## Results

### Thinner electrospun PCL fibers were coated by pNE more effectively

PCL fibrous membranes composed of randomly aligned fibers with diameters of 2 μm (2 P) and 10 μm (10 P) were fabricated via electrospinning. The membranes were then coated with pNE and named the 2 PP and 10 PP membranes, respectively (Figure S1). As shown in Fig. 2, the surface roughness of the fibers before and after pNE coating was observed using scanning electron microscopy (SEM) and atomic force microscopy (AFM), and we found that pNE resulted in a rougher fiber surface (Fig. 2h and p). For short pNE coating times, a smooth fiber surface was observed, whereas longer coating times resulted in a rougher surface (Figure S2). The elemental composition of the fiber surfaces was characterized using XPS analysis (a, e, i, m). In the XPS spectra, the nitrogen peak (398 eV) can be used to identify the pNE coating on the PCL fibers.
Oxygen, carbon and nitrogen peaks were all apparent for the PCL fibrous membranes with pNE coating (2 PP and 10 PP), while only oxygen and carbon peaks were found for those without pNE (2 P and 10 P). The intensity of the nitrogen peak of the thinner pNE-coated PCL fibrous membrane (2 PP) was nearly twice as high as that of 10 PP (e, m) (Table 1). Similar results were observed when SEM and AFM were used to characterize the surface morphology changes of the pNE-coated PCL fibrous membranes. Thinner PCL fibers (2 μm; f, g, h) were coated by pNE significantly more readily than thicker ones (10 μm; n, o, p), and the amount of pNE on the thinner PCL fibers was significantly greater than that on the thicker ones. A possible reason is that the thinner fibers have a larger surface area because of the decreased diameter. The mechanical properties of the fibrous membranes are shown in the stress-strain curves (Fig. 2E). Membranes with smaller fibers have higher strength than those with larger fibers, and pNE coating further increased the strength, especially for 2 PP. The wettability of these membranes was investigated by measuring water contact angles, providing intuitive information about the influence of the pNE coating on the fiber surface hydrophilicity. The water contact angles of the 2 PP and 10 PP membranes (with pNE coating) were 37 ± 2° and 61 ± 1°, while those of the 2 P and 10 P membranes (without pNE coating) were 121 ± 1° and 118 ± 1° (Fig. 2F), respectively. Hence, the pNE coating changed the inherent hydrophobicity of the fibrous membranes and increased their wettability significantly. Moreover, the water contact angle of the 2 PP membrane was lower than that of the 10 PP membrane, indicating that the fibrous membrane with the smaller diameter was more hydrophilic after pNE coating. The difference in wettability may also be related to the amount of pNE coated on the fibers.
Degradation of the fibrous membranes in vitro was monitored for 99 days, during which no obvious mass loss or morphology changes were observed (Fig. 2G,H).

### Thinner pNE-functionalized membranes were effective for repairing muscle injury

A rat muscle injury model was used for implanting the PCL fibrous membranes to repair the impaired musculus rectus abdominis (Fig. 3A). At day 40 after implantation, the PCL fibrous membranes together with the integrated tissue were removed and analyzed. As shown in Fig. 3B, the tensile strength of the 2 PP group was much higher than that of the other groups and reached a value similar to that of normal muscle tissue. Consistently, the regenerated muscle layer integrated well with the fibrous membranes and the surrounding tissues at the impaired site (Fig. 3C), as shown clearly by histological observations (Fig. 3D). No membrane was implanted on the muscle defect in the negative group, whereas a membrane composed of PLGA, which degrades rapidly in vivo, was used for comparison. Obvious muscle defects covered by a thin layer of fibrous connective tissue and inflammatory cells remained in the negative and PLGA groups. All functionalized PCL fibrous membranes were effective for repairing muscle injury, though serious inflammation was found in the 10 PP group. In addition, more cells were found to migrate into the thinner fibrous membranes (2 P and 2 PP) than into the thicker ones. The fibrous membranes with a small diameter were more suitable for the formation of muscle. Since the pNE coating converted the hydrophobic fibers into hydrophilic ones, it can provide a better microenvironment for cell adhesion and proliferation.

### Thinner pNE-functionalized PCL microfibers (2 PP) were more effective for muscle cell proliferation and adhesion

To assess cell proliferation and adhesion on the PCL fibrous membranes in vitro, L6 skeletal muscle cells were cultured on the membranes.
The in vitro proliferation of cells on the PCL fibrous membranes was measured with a CCK-8 kit. pNE improved the proliferation of cells on the PCL fibrous membranes, and cell viability on the membranes with a fiber diameter of about 2 μm (2 P and 2 PP) was better than on those of 10 μm (10 P and 10 PP) (Fig. 4A). More cells grew on the thinner pNE-coated PCL fibrous membrane (2 PP) than on the other membranes. The extension, proliferation and adhesion of skeletal muscle cells on the fibrous membranes of different diameters were also observed on days 1, 3, 5 and 7. Cells on the 2 PP and 10 PP membranes were more elongated than those on 2 P and 10 P, and cells on 2 P and 2 PP spread much better than those on 10 P and 10 PP (Fig. 4B). This is attributed to the higher surface area of the thinner fibers and the pNE coating, which collectively enhanced ECM protein deposition on the fibers and thereby promoted cell adhesion and proliferation. Various proteins are closely related to the reduction of muscle mass and to hypertrophy. Among them, myostatin is a well-known negative regulator of skeletal muscle and plays a key role in the development and maintenance of muscles. Myostatin, an inhibitor of skeletal muscle proliferation, increases with muscle atrophy or injury27. As shown in Fig. 5, myostatin mRNA and protein expression was used to evaluate the proliferation of myoblasts. Myostatin mRNA expression was significantly lower in 2 P and 2 PP than in 10 P and 10 PP both in vivo and in vitro, and expression in 2 PP and 10 PP was lower than that in 2 P and 10 P, respectively (p < 0.05). The reduced myostatin expression in the 2 PP group indicates that pNE further suppressed myostatin expression, and this reduction led to increased myocyte proliferation in the muscle. That is, thinner PCL microfibers coated with pNE (2 PP) induced an increase in myocyte proliferation by suppressing myostatin expression.
Previous research found that myostatin signaling reverses the insulin-like growth factor 1/phosphatidylinositol 3-kinase/protein kinase B (IGF-1/PI3K/Akt) hypertrophy pathway by inhibiting AKT, allowing increased expression of atrophy-related genes27, 28. The mammalian target of rapamycin (mTOR) is an important downstream mediator in the PI3K/Akt signaling pathway and plays a critical role in regulating cell proliferation, survival, mobility and angiogenesis29. In the present study, we found that the mRNA expression of IGF-1, PI3K, AKT and mTOR was significantly higher in 2 PP than in 10 PP both in vivo and in vitro (p < 0.05). This result indicates that thinner PCL microfibers coated with pNE (2 PP) inhibit myostatin expression through the PI3K/AKT/mTOR hypertrophy pathway, leading to increased myocyte proliferation in the muscle.

### Thinner pNE-functionalized membranes degraded faster in vivo with no toxicity identified

In the design and development of an implant material, in vivo degradation behavior is very important. For a solid implant material, a slow degradation rate consistent with the time span for muscular reconstruction would be ideal. All PCL fibers still retained their fibrous structure at day 40 after implantation. Degradation of the pNE-coated PCL fibers (2 PP and 10 PP) in vivo was quicker than that of the uncoated fibers (2 P and 10 P) (Fig. 3E), indicating that the pNE coating accelerated the degradation of the PCL fibers, especially the thinner ones. The in vitro degradation process is based on the hydrolytic reaction and mostly follows a heterogeneous degradation mechanism. No obvious morphology change (Fig. 2H, Figure S3) or weight loss (Fig. 2G) was observed even after the PCL fibrous membranes were cultured for 99 days in PBS at 37 °C. Even though pNE has been used for functionalizing scaffold surfaces, its in vivo toxicology was still unknown.
Here, comprehensive and meticulous toxicology investigations were carried out, which imply that all the fibrous membranes were quite safe and biocompatible. In all defect groups, there were no differences in body weight compared with the normal control group. There was no change in the heart, liver, spleen, lung or kidney coefficients between the normal group and any of the fibrous membrane groups (Table S1), which was further confirmed by histological examination (Figure S2). No differences were found in the complete blood counts (Tables S3 and S4) or blood coagulation factor (Table S2) analyses. In the blood biochemical analyses, serum blood urea nitrogen (BUN) and creatinine (Cr) were used to analyze acute renal injury or dehydration, and alanine transaminase (ALT) and aspartate transaminase (AST) were used to evaluate hepatic damage (Table S5). Similarly, there were no significant differences between the normal control and fibrous-membrane-treated animals, suggesting that the fibrous membranes had no toxicity to the liver or kidney after 40 days of implantation (p > 0.05).

## Discussion

Muscle tissue engineering aims to reconstruct skeletal muscle loss resulting from injury, congenital defects or tumor ablations31. Muscle cells possess a relatively limited ability to regenerate under in vivo conditions32. The success of muscle tissue engineering depends highly on the ability to modulate cell adhesion and proliferation and the induction of repair and immune processes33, 34. Inspired by mussel adhesiveness to surfaces even in wet conditions, scientists discovered that dopamine (DA) undergoes self-polymerization under alkaline conditions. This reaction, comprising a relatively facile procedure, provides a universal coating method regardless of the chemical and physical properties of the substrates. Furthermore, the polymerized layer is enriched with catechol groups that enable immobilization of primary amine- or thiol-based biomolecules via a simple dipping process.
Although pDA has in many instances been shown to facilitate the adhesion of cells and biomolecules, and has therefore been extensively explored for tissue engineering applications35, pDA coating of nano/microscale topography is hampered by its quick aggregation. Recently, biocompatibility analyses and cell proliferation results have suggested that the pNE coating, with slower polymerization kinetics compared to pDA, is more biocompatible with both hiPS-MSCs and hMSCs than the pDA coating36. Here we have, for the first time, applied pNE to coat PCL fibers for muscle tissue engineering and found that muscle cell adhesion and proliferation were enhanced, with promising tissue restoration on pNE/PCL fibers compared to the unmodified counterparts. This is consistent with a recent study by Ku et al., which demonstrated that myogenic protein expression and myoblast fusion were upregulated on pDA/PCL nanofibers37. Mussel-inspired coating is relatively nascent in muscle tissue engineering, and the cell-material interaction mechanism is unclear. As the first study using pNE functionalization in muscle tissue engineering, our research suggests that muscle cells interact strongly with pNE-modified surfaces with no evidence of in vivo cytotoxicity. With further validation in polymeric biomaterial design, pNE may be a promising tool for altering the surface of various materials for muscle tissue engineering. Our current work provides insight for the rational design and development of better tissue engineering materials for skeletal muscle regeneration.

## Methods

### Preparation of PCL electrospun fibers with different diameters

Polycaprolactone (PCL) fibrous membranes were prepared by electrospinning. PCL (Mw = 80,000; Sigma, St. Louis, MO, USA) was dissolved in trichloromethane (Sigma, St. Louis, MO, USA) at 17.5% (W/V) and stirred overnight at RT.
The polymer solution was electrospun onto a collector covered with single-layer tin foil (15 kV, 20 G cut-ended needle, 10 mL syringe, 3.6 mL/h flow rate, 15 cm distance) to obtain 10 μm fibers. PCL was also dissolved in trichloromethane/ethanol (7:3, V/V) at a concentration of 15% (W/V) by stirring overnight at RT. This solution was electrospun onto the collector (20 kV, 20 G needle, 10 mL syringe, 10 mL/h flow rate, 20 cm distance) to obtain 2 μm fibers. Both kinds of membranes were lyophilized overnight.

### Polynorepinephrine (pNE) coating

To coat pNE on the fibers, the PCL fibrous membranes were immersed in 2 mg/mL norepinephrine (Sigma, St. Louis, MO, USA) solution in pH 8.5 Tris-HCl buffer at RT for 15 h. This was followed by three washing cycles with water to fully remove the residual liquid; the samples were then dried at RT and stored at 4 °C.

### Characterization of engineered fibrous membranes

#### X-ray photoelectron spectroscopy (XPS)

The surface chemical composition of the developed scaffolds was analyzed by XPS, and XPS valence band spectra were obtained on an ESCALAB 250Xi instrument (Thermo Scientific). The depth of analysis was 50–100 Å, and the surface area analyzed was 2 mm × 3 mm for each sample.

#### Scanning electron microscopy (SEM)

The surface morphologies of the 2 P, 2 PP, 10 P and 10 PP fibrous membranes were examined with a high-resolution SEM (Hitachi S4800, Japan). The fibrous membranes were placed directly into the SEM chamber; all images were captured using a secondary electron detector with an acceleration voltage of 5 kV.

#### Atomic force microscopy (AFM)

AFM was applied to study the adsorption. The force-volume mode was used to record the force-deformation curves of the adsorbed molecules on the fiber surface.

#### Mechanical properties

The mechanical stability of the scaffolds was examined using a universal tensile tester (Instron-336, USA) with a load cell capacity of 10 N.
Six parallel samples were set for each group (2 P, 2 PP, 10 P, 10 PP), and the thickness of each film was measured. For testing, long strips with dimensions of 2 cm × 8 cm were fixed in the grips and elongated at an extension speed of 30 mm/min. Each film was stretched until failure.

#### Contact angle (CA)

To determine the hydrophilicity of the pNE-functionalized PCL fibrous membranes, wettability was tested by means of a Krüss Drop Shape Analysis System (DSA100, Germany). A water droplet of 1 μL was dropped on the surface of the fibrous membranes, and the CA values were recorded in continuous shooting mode at 5 frames s−1 for 15 s. Each sample was measured three times at different positions.

#### Transmission electron microscopy (TEM)

The interior structures of the fibrous membranes were examined using TEM (Hitachi, HT7700) at 200 kV. All samples were prepared by electrospinning membranes directly onto carbon-coated copper grids.

### Mass loss test

The mass loss of the electrospun membranes was studied under physiological conditions (PBS, pH 7.4, 37 °C). After being weighed (W0), the fibrous membranes were cultured in 20 mL PBS for more than three months. On days 0, 11, 22, 33, 44, 55, 66, 77, 88 and 99, the fibrous membranes were taken out, freeze-dried overnight and weighed (W1). Mass loss was calculated as: Mass loss = (W0 − W1)/W0 × 100%.

### Cell viability

The skeletal muscle cell line L6 was purchased from the American Type Culture Collection (Rockville, Maryland, USA) and cultured in DMEM-L medium (WISENT Inc., Quebec, Canada) supplemented with 10% fetal bovine serum (FBS; Gibco, Langley, Oklahoma, USA), 2 mM L-glutamine, 20 mM HEPES, 100 U/mL penicillin and 1 mg/mL streptomycin (WISENT Inc., Quebec, Canada). Cells were maintained under standard cell culture conditions (37 °C in a humidified atmosphere of 5% CO2).
For the cell proliferation and adhesion experiments, 4 groups of fibrous membranes of 12 mm Ø (about 1.1 cm2) were punched out using micro-punches (Caronno Portusella, Milan, Italy) and placed on the bottom of a sterile standard 48-well plate. For the cell proliferation study, L6 cells were seeded onto each membrane at densities of 3 × 105, 2 × 105, 1.5 × 105 and 1 × 105 cells for 1, 3, 5 and 7 d, respectively. In parallel, cells seeded on a 48-well cell culture plate served as the control. Cell viability under and on the membranes was measured using the Cell Counting Kit-8 (CCK-8) (Dojindo Laboratories, Japan). The CCK-8 contains WST-8 [2-(2-methoxy-4-nitrophenyl)-3-(4-nitrophenyl)-5-(2,4-disulfophenyl)-2H-tetrazolium, monosodium salt], which becomes a water-soluble formazan dye upon reduction by the dehydrogenases in the mitochondria. Briefly, cells were incubated with 100 μL of culture medium in 96-well plates. After treatment, the culture medium was discarded and 110 μL of fresh culture medium containing 10% CCK-8 was added to each well. After incubation for 2 h at 37 °C, the absorbance at 450 nm was measured using a microplate reader (TECAN, Durham, USA). The cell viability was calculated by the equation: $${\rm{\eta }}\,=\,({{\rm{OD}}}_{{\rm{test}}}-{{\rm{OD}}}_{{\rm{blank}}})/({{\rm{OD}}}_{{\rm{control}}}-{{\rm{OD}}}_{{\rm{blank}}})\,\times \,100 \% .$$

### Cytoskeleton of skeletal muscle cells

1 × 105 L6 cells were cultured on the fibrous membranes for 7 d and fixed with 4% paraformaldehyde in PBS buffer (pH 7.4). After being washed three times with PBS buffer (pH 7.4), the fibrous membranes were incubated with permeabilization buffer (0.5% Triton X-100) for 5 min. After another three washes, the fibrous membranes were incubated with 100 nM rhodamine-phalloidin (Invitrogen, USA) for 30 min and 100 nM DAPI dihydrochloride for 20 min. Images of the cytoskeleton of the skeletal muscle cells were obtained with a single-photon laser confocal microscope (Zeiss, Germany).
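As an illustration, the two explicit formulas above — the mass-loss percentage from the Mass loss test section and the CCK-8 viability equation — can be sketched as plain Python helpers. The function and variable names are our own for illustration, not part of the original protocol, and the example input values are arbitrary:

```python
def mass_loss_percent(w0: float, w1: float) -> float:
    """Mass loss (%) of a membrane: (W0 - W1) / W0 * 100,
    where W0 is the initial dry weight and W1 the dry weight after degradation."""
    return (w0 - w1) / w0 * 100.0


def cck8_viability_percent(od_test: float, od_control: float, od_blank: float) -> float:
    """Cell viability (%) from CCK-8 absorbance readings at 450 nm:
    eta = (OD_test - OD_blank) / (OD_control - OD_blank) * 100."""
    return (od_test - od_blank) / (od_control - od_blank) * 100.0


# Hypothetical readings: a membrane weighing 12.0 mg before and 11.9 mg after
# incubation, and absorbances for a treated well, a control well and a blank.
print(mass_loss_percent(12.0, 11.9))
print(cck8_viability_percent(0.90, 1.10, 0.10))
```

A near-zero mass loss over the incubation period corresponds to the negligible in vitro degradation reported for the 99-day PBS study.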
### Environmental scanning electron microscopy (ESEM)

Environmental scanning electron microscopy (ESEM) can work at room conditions and allows wet and oily materials to be observed without dehydration or making the sample electro-conductive. The fibrous membranes were fixed in 2.5% glutaraldehyde overnight and dehydrated with a graded alcohol series (30%, 50%, 70%, 85%, 95% and 100%). After drying at RT for 3 days, images were obtained using an ESEM (Quanta200, FEI, USA).

### In vivo experiments

All animals were provided by the Laboratory of Experimental Animals of the Chinese Academy of Medical Sciences and housed in a temperature-controlled, ventilated and standardized disinfected animal room. Animals were allowed to acclimatize, without handling, for a minimum of 1 week before the start of the experiments. All animal experiments were conducted using protocols approved by the Institutional Animal Care and Use Committee at the Institute of Tumors of the Chinese Academy of Medical Sciences. Six-week-old female Sprague-Dawley rats (160–170 g) were used as recipients in the in vivo biocompatibility evaluation of the electrospun membranes. A total of 35 rats was divided into 7 groups: normal group (n = 5), negative group (n = 5), PLGA group (n = 5), 2 P group (n = 5), 2 PP group (n = 5), 10 P group (n = 5) and 10 PP group (n = 5). All rats were sacrificed 40 days after the operation. For treatment, a transperitoneal approach was performed to expose the rat's musculus rectus abdominis after anesthesia. A 2 cm2 area of the musculus rectus abdominis was removed without penetrating the peritoneum. The impaired musculus rectus abdominis was covered and stitched interruptedly with an electrospun fibrous membrane. The area of each membrane used was about 6 cm2 (3 cm long and 2 cm wide). Before implantation, the membranes were sterilized using ultraviolet germicidal irradiation and sealed. Rats in the negative control group were not covered with any membrane.
The abdominal cavity was then closed using a 1–0 silk suture.

### Hematoxylin and eosin staining analysis

The hearts, livers, spleens, lungs and kidneys were fixed overnight in 10% neutral buffered formalin, dehydrated using a series of graded ethanol solutions and embedded in paraffin. Baseline histological slides with 4–5 μm sections were stained with hematoxylin/eosin (HE) and dehydrated through a graded series of ethanol solutions ranging from 75 to 100%. A well-trained pathologist then examined the slides blindly. Histological observations and photomicrography were performed using a light microscope (Life TechnologyTM).

### Quantitative real-time polymerase chain reaction (qRT-PCR)

Frozen muscle tissues were immersed in liquid nitrogen and then ground to a fine powder with a pre-chilled mortar and pestle. Total RNA was extracted from the muscle tissues and from L6 cells cultured on the four kinds of fibrous membranes for 5 days using the TRIZOL reagent method (Invitrogen). Samples were suspended in 1 mL TRIZOL and disrupted using a Dounce homogenizer with a tight-fitting pestle. The entire solution was subjected to 50 passes before being spun down at 9,000 × g for 5 min at 4 °C. The supernatant was saved, and total RNA was extracted according to the manufacturer's protocol and treated with RNase-free DNase (Promega) to eliminate genomic DNA. Relative gene expression was investigated using qRT-PCR and the 2^(−ΔCt) method. PCR amplification was performed using the following conditions: 95 °C for 2 min, then 40 cycles of 95 °C for 15 s and 60 °C for 1 min. Target gene expression was normalized to the reference gene (GAPDH). The reactions were repeated three times.
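The 2^(−ΔCt) relative quantification described above can be sketched in a few lines of Python. This is a minimal illustration of the arithmetic only (function names are ours; real analyses would also average technical replicates and propagate error):

```python
def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Relative gene expression by the 2^(-dCt) method:
    dCt = Ct(target) - Ct(reference); expression = 2 ** (-dCt).
    Here the reference (housekeeping) gene is GAPDH."""
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-delta_ct)


# A target whose threshold cycle is 3 cycles later than GAPDH's
# has a relative expression of 2^-3 = 0.125.
print(relative_expression(23.0, 20.0))  # -> 0.125
```

Because Ct is logarithmic in template abundance, each extra cycle relative to the reference halves the computed expression value.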
The primer sequences were as follows:

### Western blotting

The muscle tissues were homogenized with lysis buffer (50 mM Tris-HCl (pH 8.0), 150 mM NaCl, 10% glycerol, 1% Triton X-100, 1.5 mM MgCl2, 1 mM ethylene glycol tetraacetic acid, 1 mM phenylmethylsulfonyl fluoride, 1 mM Na3VO4 and 100 mM NaF) and then centrifuged at 12,000 rpm for 10 minutes. The proteins were separated on a sodium dodecyl sulfate-polyacrylamide gel and transferred to a nitrocellulose membrane. GAPDH antibody (1:3,000; Cell Signaling Technology, Beverly, MA, USA), rabbit anti-myostatin antibody (1:1,000; Millipore, Billerica, MA, USA), rabbit anti-PI3K (1:1,000; Cell Signaling, USA) and rabbit anti-AKT (1:1,000; Cell Signaling, USA) were used as primary antibodies. Band detection was performed using an enhanced chemiluminescence detection kit (Santa Cruz Biotechnology, Dallas, CA, USA).

### Statistical analysis

Values are shown as the mean ± standard deviation (SD) of at least three independent experiments. Differences between groups were determined by Student's t-test, with values of *p < 0.05 and **p < 0.01 considered significantly different.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Bao, M. et al. Electrospun biomimetic fibrous scaffold from shape memory polymer of PDLLA-co-TMC for bone tissue engineering. ACS Appl. Mater. Inter. 6(4), 2611–2621 (2014).
2. Brun, P. et al. Electrospun scaffolds of self-assembling peptides with poly(ethylene oxide) for bone tissue engineering. Acta Biomater. 7(6), 2526–2532 (2011).
3. Shi, X., Zhao, Y., Zhou, J., Chen, S. & Wu, H. One-Step Generation of Engineered Drug-Laden Poly(lactic-co-glycolic acid) Micropatterned with Teflon Chips for Potential Application in Tendon Restoration. ACS Appl. Mater. Inter. 5(21), 10583–10590 (2013).
4. Chen, B. et al.
In vivo tendon engineering with skeletal muscle derived cells in a mouse model. Biomaterials 33(26), 6086–6097 (2012).
5. Cittadella Vigodarzere, G. & Mantero, S. Skeletal muscle tissue engineering: strategies for volumetric constructs. Front. Physiol. 5, 362 (2014).
6. Dzenis, Y. Spinning continuous fibers for nanotechnology. Science 304(5679), 1917–1919 (2004).
7. Agarwal, S., Wendorff, J. H. & Greiner, A. Progress in the field of electrospinning for tissue engineering applications. Adv. Mater. 21(32–33), 3343–3351 (2009).
8. Pham, Q. P., Sharma, U. & Mikos, A. G. Electrospinning of polymeric nanofibers for tissue engineering applications: a review. Tissue Eng. 12(5), 1197–1211 (2006).
9. Chan, B. P. & Leong, K. W. Scaffolding in tissue engineering: general approaches and tissue-specific considerations. Eur. Spine. J. 17(Suppl 4), 467–479 (2008).
10. Ku, S. H. & Park, C. B. Human endothelial cell growth on mussel-inspired nanofiber scaffold for vascular tissue engineering. Biomaterials 31(36), 9431–9437 (2010).
11. Wu, X. & Wang, S. Regulating MC3T3-E1 Cells on Deformable Poly(epsilon-caprolactone) Honeycomb Films Prepared Using a Surfactant-Free Breath Figure Method in a Water-Miscible Solvent. ACS Appl. Mater. Inter. 4(9), 4966–4975 (2012).
12. Declercq, H. A., Desmet, T., Berneel, E. E. M., Dubruel, P. & Cornelissen, M. J. Synergistic effect of surface modification and scaffold design of bioplotted 3-D poly-epsilon-caprolactone scaffolds in osteogenic tissue engineering. Acta Biomater. 9(8), 7699–7708 (2013).
13. Elomaa, L. et al. Preparation of poly(epsilon-caprolactone)-based tissue engineering scaffolds by stereolithography. Acta Biomater. 7(11), 3850–3856 (2011).
14. Ahn, S. H., Lee, H. J. & Kim, G. H. Polycaprolactone Scaffolds Fabricated with an Advanced Electrohydrodynamic Direct-Printing Method for Bone Tissue Regeneration. Biomacromolecules 12(12), 4256–4263 (2011).
15.
Cipitria, A., Skelton, A., Dargaville, T., Dalton, P. & Hutmacher, D. Design, fabrication and characterization of PCL electrospun scaffolds-a review. J. Mater. Chem. 21(26), 9419–9453 (2011).
16. Clark, A., Milbrandt, T. A., Hilt, J. Z. & Puleo, D. A. Mechanical properties and dual drug delivery application of poly(lactic-co-glycolic acid) scaffolds fabricated with a poly(beta-amino ester) porogen. Acta Biomater. 10(5), 2125–2132 (2014).
17. Sun, H., Mei, L., Song, C., Cui, X. & Wang, P. The in vivo degradation, absorption and excretion of PCL-based implant. Biomaterials 27(9), 1735–1740 (2005).
18. Gurevich, D. B. et al. Asymmetric division of clonal muscle stem cells coordinates muscle regeneration in vivo. Science 353(6295), aad9969 (2016).
19. Lee, B.-K. et al. End-to-side neurorrhaphy using an electrospun PCL/collagen nerve conduit for complex peripheral motor nerve regeneration. Biomaterials 33(35), 9027–9036 (2012).
20. Lee, H., Dellatore, S. M., Miller, W. M. & Messersmith, P. B. Mussel-inspired surface chemistry for multifunctional coatings. Science 318(5849), 426–430 (2007).
21. Liu, P., Domingue, E., Ayers, D. C. & Song, J. Modification of Ti6Al4V Substrates with Well-defined Zwitterionic Polysulfobetaine Brushes for Improved Surface Mineralization. ACS Appl. Mater. Inter. 6(10), 7141–7152 (2014).
22. Mahjoubi, H., Kinsella, J. M., Murshed, M. & Cerruti, M. Surface Modification of Poly(D,L-Lactic Acid) Scaffolds for Orthopedic Applications: A Biocompatible, Nondestructive Route via Diazonium Chemistry. ACS Appl. Mater. Inter. 6(13), 9975–9987 (2014).
23. Ren, X. L. et al. Sensitive detection of dopamine and quinone drugs based on the quenching of the fluorescence of carbon dots. Sci. Bull. 61(20), 1615–1623 (2016).
24. Hong, S. et al. Poly(norepinephrine): ultrasmooth material-independent surface chemistry and nanodepot for nitric oxide. Angew Chem. Int. Ed. Engl. 52(35), 9187–9191 (2013).
25. Taskin, M. B. et al.
Poly(norepinephrine) as a functional bio-interface for neuronal differentiation on electrospun fibers. Phys. Chem. Chem. Phys. 17(14), 9446–9453 (2015). 26. 26. Lowery, J. L., Datta, N. & Rutledge, G. C. Effect of fiber diameter, pore size and seeding method on growth of human dermal fibroblasts in electrospun poly(epsilon-caprolactone) fibrous mats. Biomaterials 31(3), 491–504 (2010). 27. 27. Lee, S. J. & McPherron, A. C. Regulation of myostatin activity and muscle growth. Proc. Natl. Acad. Sci. USA 98(16), 9306–9311 (2001). 28. 28. Hitachi, K., Nakatani, M. & Tsuchida, K. Myostatin signaling regulates Akt activity via the regulation of miR-486 expression. Int. J. Biochem. Cell Biol. 47, 93–103 (2014). 29. 29. Zhan, Z. Y., Xu, X. & Fu, Y. V. Single cell tells the developmental story. Sci. Bull. 61(17), 1355–1357 (2016). 30. 30. Pedde, R. D. et al. Emerging biofabrication strategies for engineering complex tissue constructs. Adv. Mater. 29(19), doi:10.1002/adma.201606061 (2017). 31. 31. Juhas, M. & Bursac, N. Engineering skeletal muscle repair. Curr. opin. biotech. 24, 880–886 (2013). 32. 32. Fishman, J. M. et al. Skeletal muscle tissue engineering: which cell to use? Tissue Eng. 19, 503–515 (2013). 33. 33. Qazi, T. H., Mooney, D. J., Pumberger, M., Geissler, S. & Duda, G. N. Biomaterials based strategies for skeletal muscle tissue engineering: existing technologies and future trends. Biomaterials 53, 502–521 (2015). 34. 34. Jana, S., Levengood, S. K. & Zhang, M. Anisotropic materials for skeletal-muscle-tissue engineering. Adv. Mater. 28(48), 10588–10612 (2016). 35. 35. Madhurakkat Perikamana, S. K. et al. Materials from mussel-inspired chemistry for cell and tissue engineering applications. Biomacromolecules. 16(9), 2541–2555 (2015). 36. 36. Jiang, X. M., Li, Y. F., Liu, Y., Chen, C. Y. & Chen, M. L. Selective enhancement of human stem cell proliferation by mussel inspired surface coating. RSC Advances. 6, 60206–60214 (2016). 37. 37. Ku, S. H. & Park, C. B. 
Combined effect of mussel-inspired surface modification and topographical cues on the behavior of skeletal myoblasts. Adv. Healthc. Mater. 2(11), 1445–1450 (2013). ## Acknowledgements This work was financially supported by the project ElectroMed (11-115313) from the Danish Council for Strategic Research, the National Science Fund for Excellent Young Scholars (31622026), the National Natural Science Foundation of China (U1532122, 21320102003, 21471044), the national key research and development plan (2016YFA0201600), the National Science Fund for Distinguished Young Scholars (11425520) and Youth Innovation Promotion Association CAS (2014031). ## Author information Y.L. and G.Z. performed all the experiments, data analysis and wrote the paper. Z.L., M.G. and X.J. performed in vitro experiments. J.L., J.T. and R.B. performed in vivo experiments. M.B.T. and Z.Z. contributed to write the paper. C.C., M.C. and F.B. designed and supervised experiments. ### Competing Interests The authors declare that they have no competing interests. Correspondence to Menglin Chen or Chunying Chen.
2019-06-19 19:06:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4283188283443451, "perplexity": 13535.857144993272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999040.56/warc/CC-MAIN-20190619184037-20190619210037-00478.warc.gz"}
https://math.stackexchange.com/questions/267795/holomorphic-function-on-the-unit-disk-f-show-the-set-z-w-in-mathbbc-such
# Holomorphic function on the unit disk $f$, show the set $z,w\in \mathbb{C}$ such that $f(z)=f(w)$ is not countable Here is the problem statement: Suppose $f$ is a holomorphic function on the unit disk. Show that the set $A=\lbrace (z,w) \in \mathbb{C}^2\;|\; |z|,|w| \leq \frac{1}{2}, z\neq w, f(z)=f(w)\rbrace$ is either finite or uncountably infinite. I'm pretty stuck, but here are some thoughts: 1. The bound on the $z,w$ is a little odd. The only thing I take away from it is that $|z+w| \leq |z| + |w| \leq 1$ so the sum stays in the closure of the disk. But this doesn't seem relevant. 2. I tried to find a clever way to use the identity principle but I couldn't. For example, suppose to the contrary the set $A$ is countably infinite, then it is pointless to consider the function $g(z) = f(z) - f(w_0)$ for some $w_0 \in A$ since there doesn't need to be countably many $z_0$ matching up with $w_0$ in the sense $f(z_n) = f(w_0)$, only countably many pairs $z,w$ with the same images. 3. I tried representing $f(z)$ as a power series centered at $z_0=0$: $$f(z) = \sum_{n=0}^\infty a_nz^n$$ and then considering expressions like $$f(z_0) - f(w_0) = \sum_{n=0}^\infty a_n(z_0^n - w_0^n)$$ to learn something about the coefficients but this didn't lead anywhere. I think there must be some slick solution but I can't see it, so I'd prefer hints rather than full solutions right now. • I suppose you also want $z \neq w$? – WimC Dec 30 '12 at 19:28 • Yeah, that should probably be in there, thanks. – Derek Allums Dec 30 '12 at 19:35 • It's easy to use the Open Mapping Theorem to reduce to the case $|z| = |w| = 1/2$. The problem then becomes one of how many times an analytic image of the circle can intersect itself without crossing. I'm pretty sure $1/2$ could be replaced by anything in $(0,1)$.
– Robert Israel Dec 30 '12 at 20:59 • My intuition says that if $A$ is infinite, then it has an accumulation point somewhere, and you can somehow prove (using holomorphicness of $f$ presumably) that there is a whole neighborhood of that accumulation point contained in $A$, thus un-countably many points. I don't know what to do if the accumulation point happens to be on the diagonal. – Arthur Dec 30 '12 at 21:20 • Kudos on the "show your work" part. – Pedro Tamaroff Dec 31 '12 at 3:15 I believe I may have a proof that uses well-known results about holomorphic functions of several complex variables and the Baire Category Theorem. First, recall that zeros of holomorphic functions of more than one variable are not isolated. This is a consequence of a well known theorem due to Hartogs. Now, suppose that $A$ is countable, say $A=\{(z_1,w_1),(z_2,w_2),\dots\}$. The set $$S:=\{(z,w) : |z|,|w| \leq 1/2 : f(z)=f(w)\}$$ is closed, and we have $$S= \{(z,z):|z| \leq 1/2\} \cup \{(z_1,w_1)\} \cup \{(z_2,w_2)\} \cup \dots.$$ By the Baire Category Theorem, one of the sets in the union has non-empty interior. Clearly it cannot be the diagonal, and so we have that one of the points $(z_j,w_j)$ is isolated in $S$. But this point would be an isolated zero of the holomorphic function $g(z,w):=f(z)-f(w)$, which is a contradiction. • Seems correct, but it also seems rather overkill to invoke the Hartogs theorem for a proof of a result about a function of one complex variable. – user53153 Dec 31 '12 at 2:49 • +1 Nice answer! But I should have mentioned this in the question: this is from a qualifying exam given after a one semester course in real analysis and analysis of functions of one complex variable. Can you think of a similar proof using more elementary means? I had a feeling it had something to do with what you call $g(z,w)$. – Derek Allums Dec 31 '12 at 17:20 • @unit3000-21 : I've thought about a more elementary proof but I can't find one. I'll keep thinking about it though... 
Interesting problem! – Malik Younsi Dec 31 '12 at 19:23 • @MalikYounsi Thanks very much. I'll continue to think about it too. – Derek Allums Jan 1 '13 at 19:22 Here's a solution that was pointed out to me by a fellow student: The claim is that the set is either empty or uncountable, in fact. Assume there is one such pair $(z,w) \in A$ such that $f(z) = f(w)$. But since $\mathbb{C}$ is Hausdorff, we can separate each point by an open set such that the open sets are disjoint: $z\in U, w\in V, U\cap V = \varnothing$. But by the open mapping theorem, the image contains a ball around $f(z)=f(w)$ which is contained in $f(U) \cap f(V)$. That is, $f(U)$ is open, $f(V)$ is open and they have a nontrivial intersection which must too be open and thus contains a ball $B$ which contains $f(z)=f(w)$. Now we pick some other point $f(z) \neq \alpha \in B$. Then $f^{-1}(\alpha) \supset \lbrace z_0 \neq z\in U, w_0 \neq w \in V \rbrace$ where $f(z_0) = f(w_0)$. Thus, $f^{-1}(B) \subset A$ where $f^{-1}(B)$ is uncountable. Thus, if there is one point in $A$, there are uncountably many points in $A$. • I had thought about this too, but I don't think it works. The problem is that it may happen that your $z,w$ satisfy $|z|=|w|=1/2$. In this case, $f^{-1}(\alpha)$ indeed contains points different from $z,w$, but these points may be outside the closed disk centered at $0$ of radius $1/2$, and so these points do not belong to $A$. See Robert Israel's comment. – Malik Younsi Jan 16 '13 at 13:55
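As a sanity check (not part of any proof in the thread), one can see numerically why $A$ is uncountable once it is nonempty for a concrete function: for $f(z)=z^2$, every pair $(z,-z)$ with $0<|z|\leq 1/2$ lies in $A$. A short NumPy sketch of this illustration:

```python
import numpy as np

# Illustration for f(z) = z^2: each point z of a punctured disk of radius
# 1/2 gives the pair (z, -z) with f(z) = f(-z) and z != -z, so A contains
# (at least) one pair per point of an uncountable set.
rng = np.random.default_rng(0)
r = 0.05 + 0.45 * rng.random(1000)     # radii in (0.05, 0.5)
theta = 2 * np.pi * rng.random(1000)
z = r * np.exp(1j * theta)
w = -z

assert np.allclose(z**2, w**2)         # f(z) = f(w) for every sampled pair
assert np.all(z != w)                  # the pairs are genuinely distinct
assert np.all(np.abs(z) <= 0.5) and np.all(np.abs(w) <= 0.5)
```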
https://physics.stackexchange.com/questions/264942/lorentz-transformations-in-minkowski-space
# Lorentz Transformations in Minkowski space If $\Lambda$ represents the Lorentz transformation matrix, then the transformation of contravariant components $x^\mu$ is given by $$x'^\mu=\Lambda^{\mu}{}_{\nu} x^\nu$$ and that of the covariant components is given by $$x^\prime_\mu =\Lambda_{\mu}{}^{\nu}x_\nu$$ where we used $\eta_{\mu\nu}$ to raise and lower the indices of $\Lambda$. In this notation, we remember that the first index is always the row index and the second index is the column index irrespective of its position upstairs or downstairs. Now using the relation $$\eta_{\rho\sigma}=\eta_{\mu\nu}\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma},$$ it can be shown that $$(\Lambda^{-1})^{\sigma}{}_{\rho}=\Lambda_{\rho}{}^\sigma$$ Now, here are the questions. 1. If we define $\Lambda^{\mu}{}_{\nu}$ to be the $\mu\nu$-th component of the Lorentz transformation matrix, then $\Lambda_{\mu}{}^{\nu}$ is the $\mu\nu$-th component of $\Lambda^{-1}$. Then what do the objects $\Lambda_{\mu\nu}$ or $\Lambda^{\mu\nu}$ represent? 2. If we want to take the matrix element of $\eta^{-1}\eta=I$, what should one write? Should it be: $(\eta^{-1})^{\mu}{}_{\sigma}(\eta)^{\sigma}{}_{\nu}$ or $(\eta^{-1})^{\mu\sigma}(\eta)^{\sigma\nu}=\delta^{\mu\nu}$, or $(\eta^{-1})_{\mu\sigma}(\eta)_{\sigma\nu}=\delta_{\mu\nu}$ or something else? 3. What is the relation between $\eta^{\mu\nu}$ and $\eta_{\mu\nu}$ and how to establish that relation? EDIT: This is an additional question related to the manipulation of indices.
Since $\eta=\Lambda^T\eta\Lambda$, we have $\Lambda^{-1} =\eta^{-1}\Lambda^T \eta$. Taking matrix elements on both sides we get $$(\Lambda^{-1})^{\sigma}{}_{\rho}=(\eta^{-1}\Lambda^T \eta)^{\sigma}{}_{\rho}$$ $$=(\eta^{-1})^{\sigma\alpha}\Lambda^{\beta}{}_{\alpha}\eta_{\beta\rho}$$ $$=?$$ Starting from this how can we arrive at the relation $$(\Lambda^{-1})^{\sigma}{}_{\rho}=\Lambda_{\rho}{}^\sigma$$ Since I don't know the action of $(\eta^{-1})^{\sigma\alpha}$, I'm unable to proceed further (given $\Lambda^{-1}=\eta^{-1}\Lambda^T\eta$ as the starting point). 1. $\Lambda_{\mu\nu} = {\Lambda_\mu}^\sigma\eta_{\sigma\nu}$. It doesn't "do" anything. 2. $\delta_{\mu\nu}$ and $\delta^{\mu\nu}$ are not tensors, as I explain at length in this answer of mine. The matrix elements of the identity are $\delta_\mu^\nu$, which you could have determined by thinking about the fact that the identity must send vectors $v^\mu$ to other vectors, so it needs a lower index that can be contracted with the upper vector index, and it needs an upper index so that the result can still be a vector. Writing $\eta^{-1}\eta$ is sort of nonsensical because the metric is a (0,2)-tensor, not a matrix that has an inverse in the linear algebra sense. However: 3. $\eta^{\mu\nu}$ and $\eta_{\mu\nu}$ are "inverses" of each other in the following sense: $\eta^{\mu\sigma}\eta_{\sigma\nu} = \delta^\mu_\nu$. This follows from the very definition of $\eta^{\mu\nu}$ - it is the object that raises indices, while $\eta_{\mu\nu}$ defines lowering indices. First lowering and then raising an index should be the identity, which is exactly what the equation $\eta^{\mu\sigma}\eta_{\sigma\nu} = \delta^\mu_\nu$ means. • @ACuriousMind-When you write $\delta^\nu_\mu$, you don't seem to distinguish between first or second index. Why is that?
Question 3 was asked because I was having a problem in writing the equation $\Lambda^{-1}=\eta^{-1}\Lambda^T\eta$ (which is obtained from $\Lambda^T \eta\Lambda=\eta$) in a component form. – SRS Jun 27 '16 at 13:25 • @SRS: I don't distinguish because I'm lazy and because it doesn't matter - what counts is which index is upper and which is lower, the $\delta$ doesn't care about the order. – ACuriousMind Jun 27 '16 at 13:28 • Thank you. Now it has started to make sense. You said that $\delta^{\mu\nu}$ is not a tensor, but we know $T^{\mu\nu}$ or $F^{\mu\nu}$ are tensors. If I understood it correctly, then $\delta^{\mu\nu}$ does not transform the same way as $F^{\mu\nu}$ or $T^{\mu\nu}$ do? – SRS Jun 27 '16 at 13:32 • @SRS: I linked a post in my answer where I explain in what sense $\delta^{\mu\nu}$ is not a tensor but $\delta_\mu^\nu$ is - please read it. – ACuriousMind Jun 27 '16 at 13:38
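The index identity asked about in the thread can be checked numerically. The sketch below uses conventions of my own choosing (mostly-plus metric, first index = row, $c=1$): it builds a boost along $x$ and verifies both the defining relation $\Lambda^T\eta\Lambda=\eta$ and the claim $(\Lambda^{-1})^{\sigma}{}_{\rho}=\Lambda_{\rho}{}^{\sigma}$, where the matrix whose $(\rho,\sigma)$ entry is $\Lambda_{\rho}{}^{\sigma}$ is $\eta\,\Lambda\,\eta^{-1}$ (lower the first index with $\eta$, raise the second with $\eta^{-1}$):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, mostly-plus convention

# Boost along x with velocity v; entries are Lambda^mu_nu (row mu, column nu)
v = 0.6
g = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = g
Lam[0, 1] = Lam[1, 0] = -g * v

# Defining property of a Lorentz transformation: Lambda^T eta Lambda = eta
assert np.allclose(Lam.T @ eta @ Lam, eta)

# Matrix of Lambda_rho^sigma (row rho, column sigma): first index lowered
# with eta, second index raised with eta^{-1}
Lam_lowered_raised = eta @ Lam @ np.linalg.inv(eta)

# (Lambda^{-1})^sigma_rho = Lambda_rho^sigma, i.e. inv(Lam) is the transpose
assert np.allclose(np.linalg.inv(Lam), Lam_lowered_raised.T)
```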
https://www.simscale.com/docs/validation-cases/hertzian-contact-between-two-spheres/
Required field Required field Required field Required field # Validation Case: Hertzian Contact Between Two Spheres This validation case belongs to structural dynamics. The aim of this test case is to validate the following parameters at the point of the Hertzian contact between two spheres: • $$σ_{zz}$$ stress using a frictionless penalty contact. • $$σ_{zz}$$ stress using a frictionless augmented Lagrange contact. The simulation results of SimScale were compared to the results presented in [1]. ## Geometry Only one-eighth of each of the two spheres (with a radius of 50 $$mm$$ ) is used for the analysis due to the symmetry of the case study. ## Analysis Type and Domain Tool type: Code_Aster Analysis type: Static nonlinear Type of contact: Physical Mesh and element types: The meshes were created with the standard meshing algorithm on the SimScale platform. While a single region refinement is used in the meshes (A) and (C), the meshes in (B) and (D) were created with an additional region refinement around the contact region. 
Below the 1st order standard mesh for case A is visualized: And the mesh (case B), which has a region refinement around the contact region, is presented below: ## Simulation Setup Material/Solid: • Isotropic: • $$E$$ = 20 $$GPa$$, • $$ν$$  = 0.3 Constraints: • Faces ACD and A’C’D: zero x-displacement • Faces ABD and A’B’D: zero y-displacement • Face ABC: displacement of 2 $$mm$$ in the z-direction • Face A’B’C’: displacement of -2 $$mm$$ in the z-direction Physical Contacts: • Augmented Lagrange: • Contact smoothing enabled for linear elements and disabled for quadratic elements • Frictionless • Augmentation coefficient = 100 • Penalty: • Contact smoothing enabled for linear elements and disabled for quadratic elements • Frictionless • Penalty coefficient = 10$$^{15}$$ ## Reference Solution $$\sigma_{zz} = \frac{-E}{\pi}\frac{1}{1-{\nu}^2}\sqrt{\frac{2h}{R}} \tag{1}$$ $$h = 2\ mm - (-2\ mm) = 4\ mm \tag{2}$$ With equation (1) and equation (2) the stress at point D results in: $$\sigma_{zz} = −2798.3\ MPa$$ ## Results Comparison of the stress $$σ_{zz}$$ at point D of the Hertzian contact obtained with SimScale with the analytical result of the reference solution [SSNV104_A]$$^1$$: It is obvious from the table above that the best results were obtained with SimScale’s 1st order mesh (case B), using the Augmented Lagrange contact. Last updated: September 24th, 2021
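The reference value follows directly from equations (1) and (2). A quick check, working in MPa and mm so that $E = 20\ GPa = 20000\ MPa$:

```python
import math

E = 20e3    # Young's modulus, MPa (= 20 GPa)
nu = 0.3    # Poisson's ratio
R = 50.0    # sphere radius, mm
h = 4.0     # total imposed approach, mm (2 mm from each side), equation (2)

# Equation (1): sigma_zz = -(E / pi) * (1 / (1 - nu^2)) * sqrt(2h / R)
sigma_zz = -(E / math.pi) / (1.0 - nu**2) * math.sqrt(2.0 * h / R)
print(f"sigma_zz = {sigma_zz:.1f} MPa")   # sigma_zz = -2798.3 MPa
```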
https://labs.tib.eu/arxiv/?author=Nadia%20Zakamska
• The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since July 2014. This paper describes the second data release from this phase, and the fourteenth from SDSS overall (making this, Data Release Fourteen or DR14). This release makes public data taken by SDSS-IV in its first two years of operation (July 2014-2016). Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey (eBOSS); the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data driven machine learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS website (www.sdss.org) has been updated for this release, and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020, and will be followed by SDSS-V. • ### The Size-Luminosity Relationship of Quasar Narrow-Line Regions(1804.05848) April 16, 2018 astro-ph.GA The presence of an active galactic nucleus (AGN) can strongly affect its host. Due to the copious radiative power of the nucleus, the effects of radiative feedback can be detected over the entire host galaxy and sometimes well into the intergalactic space. 
In this paper we model the observed size-luminosity relationship of the narrow-line regions (NLRs) of AGN. We model the NLR as a collection of clouds in pressure equilibrium with the ionizing radiation, with each cloud producing line emission calculated by Cloudy. The sizes of the NLRs of powerful quasars are reproduced without any free parameters, as long as they contain massive ($10^5 - 10^7 M_\odot$) ionization-bounded clouds. At lower AGN luminosities the observed sizes are larger than the model sizes, likely due to additional unmodeled sources of ionization (e.g., star formation). We find that the saturation of sizes at $\sim 10$ kpc which is observed at high AGN luminosities ($L_\text{ion} \simeq 10^{46}$ erg/s) is naturally explained by optically thick clouds absorbing the ionizing radiation and preventing illumination beyond a critical distance. Using our models in combination with observations of the [O III]/IR ratio and the [O III] size -- IR luminosity relationship, we calculate the covering factor of the obscuring torus (and therefore the type 2 fraction within the quasar population) to be $f=0.5$, though this is likely an upper bound. Finally, because the gas behind the ionization front is invisible in ionized gas transitions, emission-based NLR mass calculations underestimate the mass of the NLR and therefore the energetics of ionized-gas winds. • ### Spectrophotometric Redshifts In The Faint Infrared Grism Survey: Finding Overdensities Of Faint Galaxies(1802.02239) March 5, 2018 astro-ph.GA We improve the accuracy of photometric redshifts by including low-resolution spectral data from the G102 grism on the Hubble Space Telescope, which assists in redshift determination by further constraining the shape of the broadband Spectral Energy Distribution (SED) and identifying spectral features.
The photometry used in the redshift fits includes near-IR photometry from FIGS+CANDELS, as well as optical data from ground-based surveys and HST ACS, and mid-IR data from Spitzer. We calculated the redshifts through the comparison of measured photometry with template galaxy models, using the EAZY photometric redshift code. For objects with F105W $< 26.5$ AB mag with a redshift range of $0 < z < 6$, we find a typical error of $\Delta z = 0.03 * (1+z)$ for the purely photometric redshifts; with the addition of FIGS spectra, these become $\Delta z = 0.02 * (1+z)$, an improvement of 50\%. Addition of grism data also reduces the outlier rate from 8\% to 7\% across all fields. With the more-accurate spectrophotometric redshifts (SPZs), we searched the FIGS fields for galaxy overdensities. We identified 24 overdensities across the 4 fields. The strongest overdensity, matching a spectroscopically identified cluster at $z=0.85$, has 28 potential member galaxies, of which 8 have previous spectroscopic confirmation, and features a corresponding X-ray signal. Another corresponding to a cluster at $z=1.84$ has 22 members, 18 of which are spectroscopically confirmed. Additionally, we find 4 overdensities that are detected at an equal or higher significance in at least one metric to the two confirmed clusters. • The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) began observations in July 2014. It pursues three core programs: APOGEE-2, MaNGA, and eBOSS. In addition, eBOSS contains two major subprograms: TDSS and SPIDERS. This paper describes the first data release from SDSS-IV, Data Release 13 (DR13), which contains new data, reanalysis of existing data sets and, like all SDSS data releases, is inclusive of previously released data. DR13 makes publicly available 1390 spatially resolved integral field unit observations of nearby galaxies from MaNGA, the first data released from this survey. It includes new observations from eBOSS, completing SEQUELS. 
In addition to targeting galaxies and quasars, SEQUELS also targeted variability-selected objects from TDSS and X-ray selected objects from SPIDERS. DR13 includes new reductions of the SDSS-III BOSS data, improving the spectrophotometric calibration and redshift classification. DR13 releases new reductions of the APOGEE-1 data from SDSS-III, with abundances of elements not previously included and improved stellar parameters for dwarf stars and cooler stars. For the SDSS imaging data, DR13 provides new, more robust and precise photometric calibrations. Several value-added catalogs are being released in tandem with DR13, in particular target catalogs relevant for eBOSS, TDSS, and SPIDERS, and an updated red-clump catalog for APOGEE. This paper describes the location and format of the data now publicly available, as well as providing references to the important technical papers that describe the targeting, observing, and data reduction. The SDSS website, http://www.sdss.org, provides links to the data, tutorials and examples of data access, and extensive documentation of the reduction and analysis procedures. DR13 is the first of a scheduled set that will contain new data and analyses from the planned ~6-year operations of SDSS-IV. • We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratio in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially-resolved spectroscopy for thousands of nearby galaxies (median redshift of z = 0.03). 
The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between redshifts z = 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGN and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5-meter Sloan Foundation Telescope at Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5-meter du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in July 2016. • ### An Archival Chandra and XMM-Newton Survey of Type 2 Quasars(1205.0033) Aug. 29, 2013 astro-ph.CO, astro-ph.HE (Abridged) We analyzed the {\it Chandra} and {\it XMM-Newton} archival observations for 72 type 2 quasars at $z<1$. These objects were selected based on the [O III]$\lambda$5007 optical emission line which we assume to be an approximate indicator of the intrinsic AGN luminosity. We find that the means of the column density and photon index of our sample are $\log N_{\rm H}=23.0$ cm$^{-2}$ and $\Gamma=1.87$ respectively, which are consistent with results from deep X-ray surveys. The observed ratios of hard X-ray and [O III] line luminosities imply that the majority of our sample suffer significant amounts of obscuration in the hard X-ray band.
A more physically realistic model which accounts for both Compton scattering and a potential partial covering of the central X-ray source was used to estimate the true absorbing column density. We find that the absorbing column density estimates based on simple power-law models significantly underestimate the actual absorption in approximately half of the sources. Eleven sources show a prominent Fe K$\alpha$ emission line, and we detect this line in the other sources through a joint fit (spectral stacking). The correlation between the Fe K$\alpha$ and [O III] fluxes and the inverse correlation of the equivalent width of Fe K$\alpha$ line with the ratio of hard X-ray and [O III] fluxes is consistent with previous results for lower luminosity Seyfert 2 galaxies. We conclude that obscuration is the cause of the weak hard X-ray emission rather than intrinsically low X-ray luminosities. We find that about half of the population of optically-selected type 2 quasars are likely to be Compton-thick. We also find no evidence that the amount of X-ray obscuration depends on the AGN luminosity (over a range of more than three orders-of-magnitude in luminosity).
https://math.stackexchange.com/questions/459229/why-does-the-boundary-of-a-cube-in-mathbbrn-have-measure-zero
# Why does the boundary of a cube in $\mathbb{R}^n$ have measure zero? I'm trying to develop the fact that the measure of a box is equal to the volume of a box for any measure $m$, i.e., a function satisfying the standard properties of monotonicity, positivity, countable sub-additivity, translation invariance, etc. I'm trying to show that the sets $(0,\frac{1}{q})^n$ and $[0,\frac{1}{q}]^n$ in $\mathbb{R}^n$ both have measure $\frac{1}{q^n}$, for $q>1$ an integer. First, by translating each coordinate by $k/q$ where $0\leq k\leq q-1$, one sees that $q^n$ disjoint translates of $(0,1/q)^n$ are contained in $[0,1]^n$. By normalization, monotonicity, and translation invariance, it follows that $$q^n m((0,1/q)^n)\leq 1$$ so $m((0,1/q)^n)\leq q^{-n}$. Similarly, the $q^n$ translates of $[0,1/q]^n$ cover $[0,1]^n$, so $q^nm([0,1/q]^n)\geq 1$, which implies $m([0,1/q]^n)\geq q^{-n}$. To finish, I'd like to show that $m([0,1/q]^n\setminus(0,1/q)^n)=0$. So given $\epsilon>0$, I'm trying to cover the boundary of the cube by a family of boxes whose total measure is less than $\epsilon$. But without knowing at this point what the measure of a box is, since the measure is not necessarily the Lebesgue measure, I'm stuck. How can I show the measure of the boundary of the cube is $0$? Thanks. The axioms for $m$ are as follows: $\mathbb{R}^n$ is a Euclidean space. • Every open and closed set is measurable. • The complement of every measurable set is measurable. • Finite/countable unions and intersections of measurable sets are measurable. • $m(\emptyset)=0$. • $0\leq m(A)\leq\infty$ for every measurable set. • If $A\subseteq B$, then $m(A)\leq m(B)$. • If $(A_j)_{j\in J}$ is a family of measurable sets, then $m(\bigcup A_j)\leq\sum m(A_j)$. • If $(A_j)_{j\in J}$ is a family of disjoint measurable sets, then $m(\bigcup A_j)=\sum m(A_j)$. • $m([0,1]^n)=1$. • If $A$ is measurable, $m(x+A)=m(A)$ for all $x\in\mathbb{R}^n$. • Could you please list the axioms you're requiring for $m$?
We need to rule out the $(n-1)$-dimensional Hausdorff measure. I guess it's ruled out by normalization, which you used to get $m([0,1]^n)=1$? – Chris Culter Aug 4 '13 at 3:27 • @ChrisCulter Sure, I've added the axioms now. – Nastassja Aug 4 '13 at 4:55 Oh, okay! From the above axioms, you can show that $$m\left((-\epsilon,\epsilon)\times[0,1]^{n-1}\right)\leq2\epsilon,$$ i.e. $(n-1)$-dimensional faces can be covered by thickenings with arbitrarily small measure. This follows from stacking a bunch of translates of such a slice inside of $[0,1]^n$, which has known measure. It's the same argument you're already using for cubes, except it focuses on one dimension at a time, instead of shrinking all $n$ dimensions at once. • I'm beginning to see it now, thanks. – Nastassja Aug 4 '13 at 7:19 I assume that the sigma-algebra you are working with is the usual (Borel) one. Then the translation-invariance property of your measure implies that it is the Haar measure on the Lie group $\mathbb{R}^n$, which is therefore unique up to scaling, see http://en.wikipedia.org/wiki/Haar_measure. Thus, your measure is a scalar multiple of the Lebesgue measure.
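The stacking step in the accepted answer can be written out explicitly; a short derivation (my notation, taking $\epsilon = 1/(2K)$ for an integer $K$ so the translates tile exactly):

```latex
% K disjoint translates of the open slab fit inside the unit cube:
\bigsqcup_{j=0}^{K-1} \bigl(2j\epsilon,\,(2j+2)\epsilon\bigr)\times[0,1]^{n-1}
  \subseteq [0,1]^{n}, \qquad \epsilon = \tfrac{1}{2K},
% so disjoint additivity, monotonicity and translation invariance give
K \, m\bigl((-\epsilon,\epsilon)\times[0,1]^{n-1}\bigr) \le m\bigl([0,1]^{n}\bigr) = 1
\quad\Longrightarrow\quad
m\bigl((-\epsilon,\epsilon)\times[0,1]^{n-1}\bigr) \le \tfrac{1}{K} = 2\epsilon.
```

Each of the $2n$ faces of $[0,1/q]^n$ is contained in a translate of such a slab (in the appropriate coordinate direction, since $1/q\le 1$), so $m(\partial[0,1/q]^n)\le 2n\cdot 2\epsilon$, and letting $\epsilon\to 0$ gives the boundary measure zero.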
2021-04-12 07:44:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9865668416023254, "perplexity": 161.93183899729115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00072.warc.gz"}
http://physics.stackexchange.com/tags/higgs/new
# Tag Info 1 The masses of W-bosons and Z-bosons are known – and the mass of W-bosons multiplied by some coupling constants was known indirectly through the force they mediate, as Fermi's constant. But the mass of e.g. the W-boson comes from the gauge-invariant kinetic term of the Higgs boson, $$\frac 12 D_\mu H \cdot D^\mu H$$ The covariant derivative includes ... 1 The scale where some symmetry gets broken can be computed using the renormalization group equations for the gauge couplings. It's the other way around. Once you know the scales of the model (masses of the fields) you can compute the RGE. Since what ultimately matters is the mass of the representation, not its vev, if you accept some tuning (and have ... 0 @SAS answered most of the questions, however I believe there's a crucial point which still needs to be addressed: the chirality. Indeed, it is not obvious a priori why $$\Psi^T C \Psi\,\Phi\,,$$ (where $\Phi$ is some Higgs representation) leads to Dirac-type masses instead of Majorana masses. Why not the common $\bar\Psi \Psi$? It turns out to be the ... 1 The general problem that the Higgs mechanism solves is giving mass to spin-one particles. It turns out that finding relativistic, unitary theories of spin-one massive particles is non-trivial. There are a few known ways of doing it (this paper has a pretty good list of sources), but the oldest and easiest is probably the Higgs mechanism. In contrast, there ... 2 Of course the SM Higgs gives mass to both fermions as well as the gauge bosons. However, the latter is much more fundamental and predictive than the former. Point is, it is enough for a scalar to transform non-trivially under a gauge symmetry to contribute to its associated gauge boson masses (after taking vev). This contribution is constrained by the ... 1 I think the question was already answered here, look at either my answer or the one by Adam.
The punch line is that the expectation value of a field (such as the field $\phi$ at the bottom of the mexican hat potential) is fixed by the way the source $j$ that couples to $\phi$ in the path-integral for the functional generator is sent to zero. As there ... 1 Many answers discuss the transition from the unstable state to the stable one. Let me thus discuss the issue of choosing the ground state itself. I will suppose a two-dimensional Mexican hat potential. As Numrock realised, it has degeneracy. There is nothing which lifts this degeneracy in principle. Then you can change the ground state without any energy cost. This ... 2 In relativistic QFT, this cannot be a process in time. The unstable initial state does not exist at all: An unstable ground state is impossible in relativistic QFT at temperature T=0 (i.e., the textbook theory in which scattering calculations are done) since it would be a tachyonic state with imaginary mass, while the Kallen-Lehmann formulas require $m^2\ge$ ... 1 The fermion masses result from Yukawa interactions after EWSB: $$m_f = \frac1{\sqrt 2} y_f v$$ Thus the Yukawa couplings govern lepton and quark masses. Of course the masses should be diagonalized. For the leptons, as there are no neutrino masses in the SM, the lepton interactions and masses can be simultaneously diagonalized, whereas differences in up- and ... 1 In fact in some sense the full state of the quantum system shouldn't break any symmetry. It's just that the overlap of the different vacuum states (in field theory) vanishes. So in other words the full quantum state is a superposition of all the different vacua, but when we make observations we "collapse the wave function" (feel free to insert your ... 0 Physically, you look for the ground state of your theory in order to make a correct predictive calculation around the ground state.
The "random" choice of the ground state depends on the dimension of the theory; in fact, you can obtain a "hat" shape or a simple 2-dimensional shape with just 2 possibilities for the ground state (look at the scalar quantum ... 2 I'm not 100% sure of your level so just as a heads up, I put some comments in parentheses that are meant to give technical caveats. If they don't make sense just ignore everything in parentheses. The zeroth-order answer: You can look at it that way, but actually it turns out to be much more complicated to understand what's going on in detail. (Basically you ...
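One concrete number behind the Yukawa relation $m_f = \frac{1}{\sqrt 2}\, y_f v$ quoted above: with the standard electroweak vev $v \approx 246$ GeV, a Yukawa coupling of order one lands at the top-quark mass scale. A quick sketch (the vev value is standard; the couplings passed in are illustrative):

```python
import math

V_EW = 246.0  # electroweak vacuum expectation value in GeV (standard value)

def fermion_mass(yukawa):
    """Tree-level mass from the Yukawa interaction: m_f = y_f * v / sqrt(2)."""
    return yukawa * V_EW / math.sqrt(2.0)

# A Yukawa coupling of order one corresponds to a top-quark-scale mass:
print(round(fermion_mass(1.0), 1))  # 173.9 (GeV)
```

This is why the top quark is often described as having a Yukawa coupling close to unity, while the lighter fermions correspond to much smaller couplings.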
2016-05-07 00:49:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8483872413635254, "perplexity": 385.97156550960864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461864953696.93/warc/CC-MAIN-20160428173553-00094-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.wptricks.com/question/ajax-action-through-direct-link/
Question I have an existing AJAX action that's already defined by the theme I'm using. The action deletes a search item request, and I'd like to use it as a direct link – like an Unsubscribe link – emailed to the user along with the search results… Something like the following URL: https://domain.tld/wp-admin/admin-ajax.php?action=delete_saved_search_item=&search_item_id=17 but I get a 0 on the screen and a 400 Bad Request in the Headers. Basically in the JS it's defined like this: $.post(ajaxurl, { action: 'delete_saved_search_item', search_item_id: search_item_id, }, function (response) { response = JSON.parse(response); if (response.success) { search_item.remove(); } } ); so I suppose it expects a POST request, not a GET one… Is there any workaround to make this work? Like creating a page with a custom cURL that would imitate the POST form submission? Anyone done something like this? The good thing is I didn't notice any use of nonce, so I suppose this alone makes it easier to handle, doesn't it?
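Since admin-ajax.php is being called here with a form-encoded POST, a plain GET link won't match; one workaround is a small page or script that replays the POST. A sketch using Python's standard library (the action and parameter names come from the question's JS snippet; the domain is a placeholder, and a real unsubscribe link should also carry a nonce or token so the endpoint can't be triggered by anyone):

```python
from urllib.parse import urlencode
from urllib.request import Request

AJAX_URL = "https://domain.tld/wp-admin/admin-ajax.php"

# Build the same form-encoded body that $.post() would send.
payload = urlencode({
    "action": "delete_saved_search_item",
    "search_item_id": "17",
}).encode("ascii")

req = Request(AJAX_URL, data=payload, method="POST")
req.add_header("Content-Type", "application/x-www-form-urlencoded")

print(payload.decode())  # action=delete_saved_search_item&search_item_id=17
# urllib.request.urlopen(req) would then perform the request; a "0" body or a
# 400 response usually means the action name or the request method didn't
# match what WordPress expected.
```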
2022-08-19 05:05:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4585663378238678, "perplexity": 1963.289800284616}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00400.warc.gz"}
http://stackoverflow.com/questions/13095276/how-to-initialize-letter-guess-prompt-error-in-hangman-python
# How to Initialize Letter Guess Prompt Error in Hangman (Python) I'm writing a hangman code, and am having trouble with setting an error for an invalid letter guess (ie. the guess should be between a-z, no numbers or multiple letter strings). This may be a fairly simple solution, but I am having trouble formalizing the code. Here is how I implemented the code:

while ((count < 6) and win):
    guess = input("Please enter the letter you guess: ")
    if guess:
        try:
            guess = guess.lower()
            lettersCorrect += 1
            if (guess not in letterList): #SYNTAX ERROR
                print ("You need to input a single alphabetic character!")
                continue

I noted where I am getting the syntax error. letterList is a list I created containing all acceptable letters (a-z). Is my coding off, or is there an easier way to say, "if not in letterList". Thanks. - You have it correct, you just forgot an except block. But it is really superfluous to include a try. – squiguy Oct 26 '12 at 22:29 Your try block is pointless. Also, instead of letterList, you can use string.ascii_lowercase

import string
while count < 6 and win:
    guess = input("Please enter the letter you guess: ")
    if guess:
        guess = guess.lower()
        if guess not in string.ascii_lowercase:
            print("You need to input a single alphabetic character!")
            continue
        lettersCorrect += 1

- or maybe just use if guess.isalpha(). – Ashwini Chaudhary Oct 26 '12 at 22:40
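One pitfall with membership tests against a string of letters: `"ab" in "abcdefghijklmnopqrstuvwxyz"` is True, because `in` on strings checks substrings, so a two-letter guess like "ab" would slip through. A check that avoids both the letter list and the substring issue, along the lines of the last comment, is to require exactly one alphabetic character; a sketch:

```python
def is_valid_guess(guess):
    """True only for a single alphabetic character (any case)."""
    return len(guess) == 1 and guess.isalpha()

print(is_valid_guess("a"))    # True
print(is_valid_guess("ab"))   # False: more than one character
print(is_valid_guess("7"))    # False: not alphabetic
```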
2015-01-27 12:33:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.456024169921875, "perplexity": 4274.284489439359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422120842874.46/warc/CC-MAIN-20150124173402-00101-ip-10-180-212-252.ec2.internal.warc.gz"}
https://mersenneforum.org/showthread.php?s=6c62fb8b0d129844bee5208e5ed36750&p=456204
mersenneforum.org CUDALucas (a.k.a. MaclucasFFTW/CUDA 2.3/CUFFTW) 2017-04-02, 07:49 #2575 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 111708 Posts most builds are missing the bad-intermediate-results patch Code: 2487: 2016/03/02 correction made. it now aborts the test when a premature 0 residue occurs. It wil take a few days to get sourceforge updated. (Extemely unreliable and slow internet connection.) Thank you owftheevil, let us know when the new build is ready for download :-) This change apparently only made it into the Windows x64 CUDA8 version on sourceforge, (no other Windows versions, no linux versions), judging by file dates. See below. Code: Directory of W:\sources\mersennes\cudalucas\CUDALucas.2.05.1-CUDA4.2-CUDA8.0-linux 02/11/2015 05:27 PM 425,256 CUDALucas-2.05.1-CUDA4.2-linux-x86_64 02/11/2015 05:28 PM 478,576 CUDALucas-2.05.1-CUDA5.0-linux-x86_64 02/11/2015 05:38 PM 609,904 CUDALucas-2.05.1-CUDA5.5-linux-x86_64 02/11/2015 05:34 PM 749,040 CUDALucas-2.05.1-CUDA6.0-linux-x86_64 02/11/2015 05:36 PM 753,136 CUDALucas-2.05.1-CUDA6.5-linux-x86_64 Directory of W:\sources\mersennes\cudalucas\CUDALucas.2.05.1-CUDA4.2-CUDA8.0-Windows-32.64 10/01/2016 10:30 PM 76 cuda8.0win32notpossible.txt 02/10/2015 09:25 AM 491,520 CUDALucas2.05.1-CUDA4.2-Windows-WIN32.exe 02/10/2015 09:30 AM 553,984 CUDALucas2.05.1-CUDA4.2-Windows-x64.exe 02/10/2015 09:26 AM 549,888 CUDALucas2.05.1-CUDA5.0-Windows-WIN32.exe 02/10/2015 09:29 AM 606,208 CUDALucas2.05.1-CUDA5.0-Windows-x64.exe 02/10/2015 09:33 AM 730,112 CUDALucas2.05.1-CUDA5.5-Windows-WIN32.exe 02/10/2015 09:38 AM 756,224 CUDALucas2.05.1-CUDA5.5-Windows-x64.exe 02/10/2015 09:35 AM 963,584 CUDALucas2.05.1-CUDA6.0-Windows-WIN32.exe 02/10/2015 09:40 AM 1,014,784 CUDALucas2.05.1-CUDA6.0-Windows-x64.exe 02/10/2015 09:37 AM 1,090,560 CUDALucas2.05.1-CUDA6.5-Windows-WIN32.exe 02/10/2015 09:44 AM 1,159,168 CUDALucas2.05.1-CUDA6.5-Windows-x64.exe 10/01/2016 10:20 PM
1,178,624 CUDALucas2.05.1-CUDA8.0-Windows-x64.exe There does not appear to be a CUDA 8 version available for linux. Would someone please update some or all of these to include the bad-intermediate-residue check? Thanks! 2017-04-02, 17:30   #2576 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 10010011110002 Posts Hi, It began as editing it for my own use as I figured some things out, one thing led to another, and so I thought I'd share, as well as ask for review for correctness and other input.It's a work in progress. Every time I think I've eliminated misspellings I find another, for example. I'd appreciate it if those who have modified the code would look it over for accuracy and provide feedback. Line-numbered output of the FC file compare utility is provided for convenience. Thanks! Attached Files README-edited.txt (41.4 KB, 377 views) readme-fc.txt (43.1 KB, 190 views) Doc Changes.txt (1,021 Bytes, 52 views) 2017-04-02, 17:36 #2577 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 23×3×197 Posts Easiest way to get cufft*.dll, cudart*.dll What's the easiest way? I downloaded the developer's toolkit which contains them, but each separate version level seems to be a separate mammoth download, which is painfully slow on low speed DSL. (Some are over 1GB.) Thx! Last fiddled with by kriesel on 2017-04-02 at 17:37 2017-04-02, 17:57 #2578 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 111708 Posts benchmarking cards versus CUDA level Hi, Has anyone run comparative fft benchmarks for a variety of CUDA levels, holding other things constant, on the same graphics card, for older cards, such as the GeForce GTX480, or the Quadro 2000? Thanks- 2017-04-02, 18:07 #2579 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 111708 Posts CUDALucas INI file questions Hi, Please note, all these are requests for clarification, not requests for features or changes. (Many values that might be legal may not be advisable, for performance reasons. 
That's not what I'm asking about. I'm trying to understand the program limits that would make values be not accepted, cause errors, or cause the program to crash.) 1. Are all numerical values in the ini file restricted to being integers? (No floats?) 2. Can workfile= include absolute or relative path? 3. Can resultsfile= include absolute or relative path? 4. SaveFolder= appears to specify a subfolder of the current folder. Can it specify an absolute path? Other forms of relative path? 5. Devicenumber: is there a program limit to the maximum number supported, and if so what is it? 6. Erroriterations I assume has a minimum value of 1. Is there a maximum? Is any arbitrary positive integer allowed? 7. Reportiterations I assume has a minimum value of 1. Is there a maximum? Is any arbitrary positive integer allowed? 8. Checkpointiterations I assume has a minimum value of 1. Is there a maximum? Is any arbitrary positive integer allowed? 9. PoliteValue I assume has a minimum value of 1. Is there a maximum? Is any arbitrary positive integer allowed? 10. Will entering floats, eg 83.33 instead of integers such as 85 for ErrorReset cause trouble? Get truncated to 83 and work? 11. Are there other parameters for which clarification of bounds or format would be useful? (end) 2017-04-02, 19:13   #2580 flashjh "Jerry" Nov 2011 Vancouver, WA 1,123 Posts Quote: Originally Posted by kriesel Would someone please update some or all of these to include the bad-intermediate-residue check? Are you asking for updated Linux binaries? Where is the post with the bad-intermediate discussion? When was the code updated? Last fiddled with by flashjh on 2017-04-02 at 19:13 2017-04-02, 19:15   #2581 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 23·3·197 Posts relative timings vs cuda version At message 2535: Quote: Originally Posted by ATH I did a -cufftbench 2592 8192 20 on the different versions (only the 64 bit versions) so it does 20x 50iterations on each FFT and takes the average. 
In most cases 6.5 is fastest but a few of them has 8.0 as the fastest (on a Titan Black, but this is probably GPU dependent). CUDA 4.2 was quite a bit slower on all of them, so I left it out. Code: 8.0 6.5 6.0 5.5 5.0 2592 48471289 1.6135 1.6683 1.6897 1.6145 1.6166 2744 51250889 2.0056 1.8606 1.8682 1.9980 1.8714 3136 58404433 2.0937 2.0480 2.0710 2.0201 2.0337 3200 59570449 2.4195 2.3907 2.4056 2.4150 2.4175 3240 60298969 2.4266 2.4388 2.4404 2.4435 3375 62756279 2.5147 2.5301 3888 72075517 2.5348 2.5584 2.4558 4000 74106457 2.4631 2.5498 2.5821 2.4590 2.4639 4096 75846319 2.5115 2.5800 2.5976 2.5200 2.5375 4320 79902611 3.2614 3.2573 3.2760 3.2685 4374 80879779 3.3753 3.2784 3.2924 3.2946 3.3003 4500 83158811 3.3535 3.3845 4536 83809729 3.3810 3.4006 5184 95507747 3.4279 3.3836 3.4208 3.2994 3.3273 5292 97454309 3.9568 5488 100984691 3.8843 3.8345 3.8336 3.9979 3.7484 5600 103000823 4.1630 4.3082 4.3430 4.1882 4.1943 5832 107174381 4.5283 4.3644 4.3842 4.3847 4.3903 6000 110194363 4.5328 6048 111056879 4.5338 4.5138 4.5275 6075 111541967 4.5586 6125 112440191 4.5530 4.5645 4.5780 4.5867 6144 112781477 4.6858 6250 114685037 4.8079 4.6418 4.6620 4.6714 4.6787 6272 115080019 4.6678 4.6820 4.6835 4.6908 6400 117377567 4.9088 4.7531 4.7721 4.7719 4.7809 6480 118813021 4.9438 4.8401 4.8670 4.8669 6561 120266023 5.1164 4.8818 4.9012 4.8968 6750 123654943 5.1792 5.0356 5.0432 6912 126558077 5.1957 7776 142017539 5.2343 5.0001 5.0441 4.8219 8000 146019329 5.2537 5.1762 5.2350 5.0527 5.0997 8192 149447533 5.3838 5.2219 5.2593 5.1617 5.2106 If ATH would forward the V4.2 timings, I'd add them. Attached Thumbnails Last fiddled with by kriesel on 2017-04-02 at 19:28 2017-04-02, 20:15   #2582 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 127816 Posts references for and timing of 2.05, 2.05.1, bad-intermediate-residues patch, and latest builds Quote: Originally Posted by flashjh Are you asking for updated Linux binaries? 
Where is the post with the bad-intermediate discussion? When was the code updated? Hi, Thanks for a quick reply. I am more interested in updated Windows binaries but would also like the Linux made current. I'll have some use for them later. (Of course I realize this is an all-volunteer operation and no one owes me anything. I think it's been an amazing impressive amount of work by many code authors.) The relative timing of the patch and updates surfaced for me because of binge-reading the thread in sequence and taking notes along the way. It took days to get through. The following is a summary of a subset of the posts involved, selected to illustrate adequately what I think the timeline was without flooding you with volume. post 2286: (page 208) V2.05 released Feb 8 2015 post 2289: (page 209) feb 11 2015 CUDALucas 2.05.1 is posted to sourceforge. An error was discovered in the display output if ReportIterations=100 or 50 or 10. The error only caused the display 'Error' to stay at .25000. Actual results were not affected. I uploaded all windows versions as one file this time. You still need the .ini file. If anything else is found, let us know. 2441: (prime95) (page 222) Please add code to exit with an error message if when you write a save file (or any other convenient time) you find that the LL iteration is zero or two. 2447: (page 223) (msft:) Please teach me good err message. (prime95:) "Illegal residue: 0x0000000000000000. See mersenneforum.org for help." or "Illegal residue: 0x0000000000000002. See mersenneforum.org for help." (which I'd be inclined to change to explain in the case of 0, it's too early an iteration to indicate a prime. But hey, George is a Jedi master of code and organization, so maybe ignore this parenthetical) 2454: (page 224) LaurV's good post, omitted here for length 2464: (page 224) msft wrote and shared bad-intermediate-residue check code for review (Feb 9 2016) 2487: Mar 2 2016 (page 227) Owftheevil quoted: correction made. 
it now aborts the test when a premature 0 residue occurs. It wil take a few days to get sourceforge updated. (Extemely unreliable and slow internet connection.) (I found no indication of a version number change applied for the intermediate-residue checking change. Files contain 2.05.1 in their names.) (end) Last fiddled with by kriesel on 2017-04-02 at 20:22 2017-04-03, 00:44 #2583 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 23×3×197 Posts Found a link to the library files cufft*.dll, cudart*.dll on another thread. https://sourceforge.net/projects/cud...s/CUDA%20Libs/ 2017-04-05, 01:20 #2584 flashjh     "Jerry" Nov 2011 Vancouver, WA 21438 Posts Hello all, From what I can see the updates from post 2464 were not incorporated on sourceforge, so they're not in any of the code now. It's been over a year since all that discussion went on, are there still issues with residues? Either way, I have the code updated, along with some miscellaneous changes. The biggest change is that CUDA 8 does not support 32 bit for CUDALucas, nor compute < 2.0. What versions is everyone using now? I don't mind getting setup to compile versions <8, but I don't want to do it if no one is using them anymore. So, let me know what architecture everyone needs and I'll make it happen 2017-04-05, 13:37 #2585 LaurV Romulan Interpreter     Jun 2011 Thailand 22·7·11·29 Posts does it get eny faster? Otherwise we are happy with the current version we use, and we don't want to fix it (i.e. upgrade) as long as it works...
2020-11-29 08:06:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25062882900238037, "perplexity": 5005.802004501479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141197278.54/warc/CC-MAIN-20201129063812-20201129093812-00129.warc.gz"}
http://fricas.github.io/api/IntegralBasisTools.html
# IntegralBasisTools(R, UP, F)¶

This package contains functions used in the packages FunctionFieldIntegralBasis and NumberFieldIntegralBasis.

diagonalProduct: Matrix R -> R
diagonalProduct(m) returns the product of the elements on the diagonal of the matrix m

divideIfCan!: (Matrix R, Matrix R, R, Integer) -> R
divideIfCan!(matrix, matrixOut, prime, n) attempts to divide the entries of matrix by prime and store the result in matrixOut. If it is successful, 1 is returned and if not, prime is returned. Here both matrix and matrixOut are n-by-n upper triangular matrices.

idealiser: (Matrix R, Matrix R) -> Matrix R
idealiser(m1, m2) computes the order of an ideal defined by m1 and m2

idealiser: (Matrix R, Matrix R, R) -> Matrix R
idealiser(m1, m2, d) computes the order of an ideal defined by m1 and m2 where d is the known part of the denominator

idealiserMatrix: (Matrix R, Matrix R) -> Matrix R
idealiserMatrix(m1, m2) returns the matrix representing the linear conditions on the Ring associated with an ideal defined by m1 and m2.

leastPower: (NonNegativeInteger, NonNegativeInteger) -> NonNegativeInteger
leastPower(p, n) returns e, where e is the smallest integer such that p^e >= n

matrixGcd: (Matrix R, R, NonNegativeInteger) -> R
matrixGcd(mat, sing, n) is gcd(sing, g) where g is the gcd of the entries of the n-by-n upper-triangular matrix mat.

moduleSum: (Record(basis: Matrix R, basisDen: R, basisInv: Matrix R), Record(basis: Matrix R, basisDen: R, basisInv: Matrix R)) -> Record(basis: Matrix R, basisDen: R, basisInv: Matrix R)
moduleSum(m1, m2) returns the sum of two modules in the framed algebra F. Each module mi is represented as follows: F is a framed algebra with R-module basis w1, w2, ..., wn and mi is a record [basis, basisDen, basisInv]. If basis is the matrix (aij, i = 1..n, j = 1..n), then a basis v1, ..., vn for mi is given by vi = (1/basisDen) * sum(aij * wj, j = 1..n), i.e. the ith row of 'basis' contains the coordinates of the ith basis vector.
Similarly, the ith row of the matrix basisInv contains the coordinates of wi with respect to the basis v1, ..., vn: if basisInv is the matrix (bij, i = 1..n, j = 1..n), then wi = sum(bij * vj, j = 1..n).
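As a cross-language illustration, the simpler numeric helpers in this package translate directly from their signatures; a sketch in Python (my transcription of the documented behavior, not the library's actual code):

```python
def least_power(p, n):
    """Smallest e >= 0 such that p**e >= n (mirrors leastPower(p, n))."""
    e, power = 0, 1
    while power < n:
        power *= p
        e += 1
    return e

def diagonal_product(m):
    """Product of the diagonal entries of a square matrix given as a list of rows
    (mirrors diagonalProduct(m))."""
    result = 1
    for i, row in enumerate(m):
        result *= row[i]
    return result

print(least_power(2, 10))                  # 4, since 2**4 = 16 >= 10 > 8 = 2**3
print(diagonal_product([[2, 0], [0, 3]]))  # 6
```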
2017-04-26 15:48:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.90259850025177, "perplexity": 4314.428871897927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121453.27/warc/CC-MAIN-20170423031201-00402-ip-10-145-167-34.ec2.internal.warc.gz"}
http://renormalization.com/tag/field-redefinitions/
### Course 19R1

D. Anselmi Theories of gravitation Last update: October 5th 2018 PhD course – 54 hours – Videos of lectures and PDF files of slides To be held in the first part of 2019 – Stay tuned Program

## Field redefinitions

Consider an action $S$ depending on fields $\phi_{i}$, where the index $i$ labels the field type, the component and the spacetime point. Add a term quadratically proportional to the field equations $S_{i}\equiv \delta S/\delta \phi _{i}$ and define the modified action

$$S'(\phi)=S(\phi)+S_{i}F_{ij}S_{j},$$

where $F_{ij}$ is symmetric and can contain derivatives acting to its left and to its right. Summation over repeated indices (including the integration over spacetime points) is understood. Then there exists a field redefinition

$$\phi_{i}'=\phi_{i}+\Delta_{ij}S_{j},$$

with $\Delta _{ij}$ symmetric, such that, perturbatively in $F$ and to all orders in powers of $F$,

$$S'(\phi')=S(\phi).$$

### Book 14B1

D. Anselmi Renormalization PDF Last update: May 9th 2015, 230 pages Contents: Preface 1. Functional integral 2. Renormalization 3. Renormalization group 4. Gauge symmetry 5. Canonical formalism 6. Quantum electrodynamics 7. Non-Abelian gauge field theories Notation and useful formulas References Course on renormalization, taught in Pisa in 2015. (More chapters will be added later.)
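The statement — that a term quadratic in the field equations can be compensated by a field redefinition — can be sanity-checked at first order in $F$; the explicit expansion below is my own derivation, not taken from the source:

```latex
% Writing S'(\phi) = S(\phi) + S_i F_{ij} S_j and \phi_i' = \phi_i + \Delta_{ij} S_j,
% with \Delta of order F, a Taylor expansion of S' around \phi gives
S'(\phi') = S(\phi) + S_i \Delta_{ij} S_j + S_i F_{ij} S_j + O(F^2),
% so the leading choice
\Delta_{ij} = -F_{ij} + O(F^2)
% yields S'(\phi') = S(\phi) + O(F^2); the higher orders of \Delta are then
% fixed recursively, order by order in F.
```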
2019-01-22 03:53:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7966129183769226, "perplexity": 3375.757855245466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583826240.93/warc/CC-MAIN-20190122034213-20190122060213-00461.warc.gz"}
https://optimization-online.org/2020/06/7833/
# Manifold Identification for Ultimately Communication-Efficient Distributed Optimization

This work proposes a progressive manifold identification approach for distributed optimization with sound theoretical justifications to greatly reduce both the rounds of communication and the bytes communicated per round for partly-smooth regularized problems such as the $\ell_1$- and group-LASSO-regularized ones. Our two-stage method first uses an inexact proximal quasi-Newton method to iteratively identify a sequence of low-dimensional manifolds in which the final solution would lie, and restricts the model update within the current manifold to gradually lower the order of the per-round communication cost from the problem dimension to the dimension of the manifold that contains a solution and makes the problem within it smooth. After identifying this manifold, we take superlinear-convergent truncated semismooth Newton steps computed by preconditioned conjugate gradient to largely reduce the communication rounds by improving the convergence rate from the existing linear or sublinear ones to a superlinear rate. Experiments show that our method can be orders of magnitude lower in the communication cost and an order of magnitude faster in the running time than the state of the art.

ICML 2020
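For the $\ell_1$-regularized case, the manifold in question is the set of points sharing the solution's support, and identification happens through the proximal (soft-thresholding) step: coordinates it zeroes can be dropped from subsequent communication rounds. A minimal sketch of that identification step (illustrative only — not the paper's algorithm):

```python
def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, applied elementwise."""
    return [x - t if x > t else (x + t if x < -t else 0.0) for x in v]

def identified_support(v):
    """Indices of the low-dimensional manifold the iterate lies on."""
    return [i for i, x in enumerate(v) if x != 0.0]

step = soft_threshold([3.0, -0.5, 2.0], 1.0)
print(step)                      # [2.0, 0.0, 1.0]
print(identified_support(step))  # [0, 2]
```

Once the support stabilizes across iterations, only the surviving coordinates (here indices 0 and 2) need to be exchanged between machines, which is the source of the per-round savings the abstract describes.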
2023-02-01 23:02:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.528678834438324, "perplexity": 533.1489526320952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00802.warc.gz"}
http://www.fabulousfibonacci.com/portal/index.php?option=com_content&view=article&id=20&Itemid=20
# PROPERTIES OF LIMITS

First we will assume that $\lim_{x \to a} f(x)$ and $\lim_{x \to a} g(x)$ exist and that $c$ is any constant. Then,

1. $\lim_{x \to a}\left[c f(x)\right] = c \lim_{x \to a} f(x)$

In other words we can “factor” a multiplicative constant out of a limit.

2. $\lim_{x \to a}\left[f(x) \pm g(x)\right] = \lim_{x \to a} f(x) \pm \lim_{x \to a} g(x)$

So to take the limit of a sum or difference all we need to do is take the limit of the individual parts and then put them back together with the appropriate sign. This is also not limited to two functions. This fact will work no matter how many functions we’ve got separated by “+” or “-”.

3. $\lim_{x \to a}\left[f(x)\, g(x)\right] = \lim_{x \to a} f(x) \cdot \lim_{x \to a} g(x)$

We take the limits of products in the same way that we can take the limit of sums or differences. Just take the limit of the pieces and then put them back together. Also, as with sums or differences, this fact is not limited to just two functions.

4. $\lim_{x \to a}\dfrac{f(x)}{g(x)} = \dfrac{\lim_{x \to a} f(x)}{\lim_{x \to a} g(x)}, \quad \text{provided } \lim_{x \to a} g(x) \ne 0$

As noted in the statement we only need to worry about the limit in the denominator being zero when we do the limit of a quotient. If it were zero we would end up with a division by zero error and we need to avoid that.

5. $\lim_{x \to a}\left[f(x)\right]^n = \left[\lim_{x \to a} f(x)\right]^n$

In this property n can be any real number (positive, negative, integer, fraction, irrational, zero, etc.). In the case that n is an integer this rule can be thought of as an extended case of 3. For example consider the case of n = 2:

$\lim_{x \to a}\left[f(x)\right]^2 = \lim_{x \to a}\left[f(x)\, f(x)\right] = \lim_{x \to a} f(x) \cdot \lim_{x \to a} f(x) = \left[\lim_{x \to a} f(x)\right]^2$

The same can be done for any integer n.

6. $\lim_{x \to a}\left[\sqrt[n]{f(x)}\right] = \sqrt[n]{\lim_{x \to a} f(x)}$

This is just a special case of the previous example (apply property 5 with the exponent $1/n$).

7. $\lim_{x \to a} c = c$

In other words, the limit of a constant is just the constant. You should be able to convince yourself of this by drawing the graph of $f(x) = c$.

8. $\lim_{x \to a} x = a$

As with the last one you should be able to convince yourself of this by drawing the graph of $f(x) = x$.

9. $\lim_{x \to a} x^n = a^n$

This is really just a special case of property 5 using $f(x) = x$.

Note that this was not written by www.fabulousfibonacci.com; the page's content is taken directly from tutorial.math.lamar.edu/Classes/CalcI/LimitsProperties.aspx. All credit is given to them for this article's content.
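These properties can be spot-checked symbolically. A small SymPy sketch (SymPy is just one convenient choice, and the example functions and limit point are arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
a, c = 2, 5                 # limit point and an arbitrary constant
f = x**2 + 1                # example functions whose limits at x = a exist
g = sp.sin(x)

def lim(expr):
    return sp.limit(expr, x, a)

def same(u, v):
    # compare symbolically, so equivalent forms count as equal
    return sp.simplify(u - v) == 0

assert same(lim(c * f), c * lim(f))                  # constant multiple rule
assert same(lim(f + g), lim(f) + lim(g))             # sum rule
assert same(lim(f - g), lim(f) - lim(g))             # difference rule
assert same(lim(f * g), lim(f) * lim(g))             # product rule
assert same(lim(f / g), lim(f) / lim(g))             # quotient rule (sin(2) != 0)
assert same(lim(f**3), lim(f)**3)                    # power rule
assert same(lim(sp.root(f, 3)), sp.root(lim(f), 3))  # root rule
assert lim(sp.Integer(c)) == c                       # limit of a constant
assert lim(x) == a                                   # limit of x
assert lim(x**4) == a**4                             # power of x
print("all limit laws verified at x ->", a)
```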
2013-05-22 02:05:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850452721118927, "perplexity": 220.22738606580288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701063060/warc/CC-MAIN-20130516104423-00049-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.actucation.com/grade-6-maths/divisibility-rules-for-8-9-10-and-11
# Divisibility Rule of 8

A number is divisible by 8 if the number formed by its last three digits is divisible by 8, i.e. is a multiple of 8 such as 008, 016, 024, 808, 816, ...

For example:

• Consider the number 20240. The last three digits, i.e. 240, are divisible by 8: 240 $$\div$$ 8 = 30. So, the number 20240 is divisible by 8.
• Now consider another number, 80324. The last three digits, i.e. 324, are not divisible by 8. So, the number 80324 is not divisible by 8.

#### Which one of the following numbers is divisible by 8?

A 7032
B 3270
C 6025
D 8891

In the number 7032, the last three digits, i.e. 032, are divisible by 8: 032 $$\div$$ 8 = 4. So, the number 7032 is divisible by 8.
In the number 3270, the last three digits, i.e. 270, are not divisible by 8. So, the number 3270 is not divisible by 8.
In the number 6025, the last three digits, i.e. 025, are not divisible by 8. So, the number 6025 is not divisible by 8.
In the number 8891, the last three digits, i.e. 891, are not divisible by 8. So, the number 8891 is not divisible by 8.

Hence, option (A) is correct.

# Divisibility Rule of 9

A number is divisible by 9 when the sum of all the digits of the number is a multiple of 9, like 9, 18, 27, 36, ...

For example:

• Consider the number 43506. The sum of all the digits = 4 + 3 + 5 + 0 + 6 = 18. 18 is a multiple of 9, so 43506 is divisible by 9.
• Now consider another number, 32505. The sum of all the digits = 3 + 2 + 5 + 0 + 5 = 15. 15 is not a multiple of 9, so 32505 is not divisible by 9.

#### Which one of the following numbers is divisible by 9?

A 3012
B 3636
C 4531
D 4435

For the number 3012: the sum of all the digits = 3 + 0 + 1 + 2 = 6. 6 is not a multiple of 9, so 3012 is not divisible by 9.
For the number 3636: the sum of all the digits = 3 + 6 + 3 + 6 = 18. 18 is a multiple of 9, so 3636 is divisible by 9.
For the number 4531: the sum of all the digits = 4 + 5 + 3 + 1 = 13. 13 is not a multiple of 9, so 4531 is not divisible by 9.
For the number 4435: the sum of all the digits = 4 + 4 + 3 + 5 = 16. 16 is not a multiple of 9, so 4435 is not divisible by 9.

Hence, option (B) is correct.

# Divisibility Rule of 10

A number is divisible by 10 if the last digit of the number is zero.

For example:

• Consider the number 2920. The last digit of this number is zero, so 2920 is divisible by 10.
• Now consider another number, 1525. The last digit of this number is 5, so 1525 is not divisible by 10.

#### Which one of the following numbers is divisible by 10?

A 1998
B 1990
C 1997
D 1999

Only in the number 1990 is the last digit zero, so 1990 is divisible by 10; the last digits of 1998, 1997 and 1999 are not zero, so none of them is divisible by 10.

Hence, option (B) is correct.

# Divisibility Rule of 11

A number is divisible by 11 if the difference between the sum of the digits at odd positions and the sum of the digits at even positions is zero or a multiple of 11, like 11, 22, 33, ...

For example:

1. Consider the number 29271. The sum of the digits at odd positions: 2 + 2 + 1 = 5. The sum of the digits at even positions: 9 + 7 = 16. The difference of the sums = 16 – 5 = 11. 11 is a multiple of 11, so 29271 is divisible by 11.
2. Now consider another number, 32312. The sum of the digits at odd positions: 3 + 3 + 2 = 8. The sum of the digits at even positions: 2 + 1 = 3. The difference of the sums = 8 – 3 = 5. 5 is not a multiple of 11, so 32312 is not divisible by 11.
3. Consider another number, 4664. The sum of the digits at odd positions: 4 + 6 = 10. The sum of the digits at even positions: 6 + 4 = 10. The difference of the sums = 10 – 10 = 0. So, 4664 is divisible by 11.

#### Which one of the following numbers is divisible by 11?

A 511
B 671
C 777
D 111

For the number 511: the sum of the digits at odd positions = 5 + 1 = 6; the sum at even positions = 1; the difference = 6 – 1 = 5. 5 is not zero or a multiple of 11, so 511 is not divisible by 11.
For the number 671: the sum of the digits at odd positions = 6 + 1 = 7; the sum at even positions = 7; the difference = 7 – 7 = 0. So, 671 is divisible by 11.
For the number 777: the sum of the digits at odd positions = 7 + 7 = 14; the sum at even positions = 7; the difference = 14 – 7 = 7. 7 is not zero or a multiple of 11, so 777 is not divisible by 11.
For the number 111: the sum of the digits at odd positions = 1 + 1 = 2; the sum at even positions = 1; the difference = 2 – 1 = 1. 1 is not zero or a multiple of 11, so 111 is not divisible by 11.

Hence, option (B) is correct.

# Relation among Divisor, Quotient, Remainder and Dividend

$$\text{Quotient × Divisor + Remainder = Dividend}$$

This relation is used to find the unknown term if the other three terms are given.

For example: when $$28$$ is divided by a number, it gives the quotient $$9$$ and the remainder $$1$$. What is the number?

$$9×$$ Divisor $$+1=28$$

Divisor $$=\dfrac{28-1}{9}=\dfrac{27}{9}=3$$

#### Alex divided a number by $$20$$ and got the quotient of $$28$$ with remainder as $$14$$. What is the number?

A $$560$$
B $$570$$
C $$574$$
D $$564$$

Given: Quotient $$=28$$, Remainder $$=14$$, Divisor $$=20$$.

Using $$\text{Quotient × Divisor + Remainder = Dividend}$$ and putting in the given values:

Dividend $$=28×20+14=560+14=574$$

Hence, option (C) is correct.
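The four rules above, together with the quotient–divisor–remainder relation, are easy to sketch in code and cross-check against plain remainder arithmetic (an illustrative script, not part of the original lesson):

```python
def div_by_8(n):
    # the number formed by the last three digits must be a multiple of 8
    return int(str(n)[-3:]) % 8 == 0

def div_by_9(n):
    # the digit sum must be a multiple of 9
    return sum(int(d) for d in str(n)) % 9 == 0

def div_by_10(n):
    # the last digit must be zero
    return str(n)[-1] == '0'

def div_by_11(n):
    # difference of alternating digit sums must be 0 or a multiple of 11
    digits = [int(d) for d in str(n)]
    odd = sum(digits[0::2])    # 1st, 3rd, 5th, ... digit from the left
    even = sum(digits[1::2])   # 2nd, 4th, 6th, ... digit from the left
    return (odd - even) % 11 == 0

# cross-check every rule against the % operator
for n in range(100, 5000):
    assert div_by_8(n) == (n % 8 == 0)
    assert div_by_9(n) == (n % 9 == 0)
    assert div_by_10(n) == (n % 10 == 0)
    assert div_by_11(n) == (n % 11 == 0)

# Quotient * Divisor + Remainder = Dividend, as in the Alex question
q, d, r = 28, 20, 14
print(q * d + r)  # 574
```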
2021-04-14 02:50:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4650936722755432, "perplexity": 543.9943750693084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076454.41/warc/CC-MAIN-20210414004149-20210414034149-00188.warc.gz"}
https://webdesign.tutsplus.com/courses/understanding-responsive-images/lessons/other-solutions
FREE | Lessons: 15 | Length: 1.3 hours

# 3.6 Other Solutions

The solution we used in the last couple of lessons is probably the best one, since it's semantic and the elements are part of the official specification. However, there are a few solutions that address the problem of responsive images from a different angle. Let's see what those are.
2022-08-16 01:51:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22588975727558136, "perplexity": 876.7983098521531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00565.warc.gz"}
https://warwick.ac.uk/fac/sci/masdoc/people/studentpages/students2013/bowditch/
# MASDOC - Mathematics and Statistics Doctoral Training Centre

### Mathematical Interests

I have finished my Ph.D., supervised by Dr David Croydon as part of the MASDOC programme at the University of Warwick. I graduated from the University of Warwick in 2013 with a first class honours degree in MMORSE. My main academic interests revolve around probability theory; in particular, random walks in random environments. Most of my work to date has been on biased random walks on Galton-Watson trees conditioned to survive; however, I am also researching more general randomly trapped random walks. I am also interested in stochastic particle systems, having done my masters dissertation on coupling results for interacting particle systems with Dr Jon Warren. This page focuses on work done during my Ph.D.; I have another webpage which I keep up to date with my current work.

### Research

I have submitted various articles in the field of random walk in random environment. More information can be found here. Below is a list of papers submitted for publication:

• A quenched central limit theorem for biased random walks on supercritical Galton-Watson trees (arXiv)
• Central limit theorems for biased randomly trapped random walks on $\mathbb{Z}$ (arXiv)
• Escape regimes of biased random walks on Galton-Watson trees. Probab. Theory Relat. Fields (2017)

In my MSc thesis I investigated the asymptotic speed of a biased random walk on a subcritical Galton-Watson tree conditioned to survive, as part of a summer research project supervised by Dr David Croydon.

Title: Biased random walks on subcritical Galton-Watson trees conditioned to survive

Abstract: In this thesis we survey known results concerning the limiting behaviour of biased random walks on supercritical and critical Galton-Watson trees conditioned to survive and extend them to the subcritical case. We start by giving a proof that there exists a well-defined probability measure over such non-extinct subcritical trees and that these exhibit a unique backbone with subcritical trees as leaves. We show that the speed exists a.s. for any bias and is positive if and only if the bias belongs to some determined region depending only on the mean and variance of the offspring distribution of the tree. We then consider this as a directed trap model to determine the speed along the backbone in terms of the bias of the walk and moments of the offspring distribution up to second order.

I have done a research group project on stochastic growth models with John Sylvester and Qiaochu Chen, supervised by Dr Nikos Zygouras and Dr Partha Dey. In this project we showed a range of regimes for limiting shapes for a long-range first passage percolation model in $\mathbb{Z}^2$.

Title: Stochastic growth models

Abstract: We consider a long-range first-passage percolation model on the two-dimensional lattice $\mathbb{Z}^2$ under a specific class of distributions supported away from 0. We show that in the critical and supercritical cases the limiting shape is a suitably scaled $l^1$ ball. Moreover, we show that in the subcritical case a limiting shape exists and that under some assumptions this deterministic shape has a flat piece which coincides with that of the nearest-neighbour model.

### Invited Talks

Random walks on Galton-Watson trees, February 2018, Markov processes on metric measure spaces and Gaussian fields workshop, University of Duisburg-Essen.
Regeneration structures for random walk models, November 2017, Statistical mechanics seminar, University of Warwick.
Limit Processes for Random Walks in Random Environments, June 2017, MASDOC Retreat, Brecon Beacons, Powys.
Random Walks on Galton-Watson Trees, September 2016, Workshop on Random Processes in Discrete Structures, University of Warwick.
Transition from annealed to quenched asymptotics for random walk in random environment, May 2016, MASDOC Retreat, Wilderhope Manor, Shropshire.
Random Walks on Galton-Watson Trees, April 2016, UK Easter Probability Meeting, Lancaster University.
Biased Random Walks on Subcritical Galton-Watson Trees, October 2015, Probability Seminar, RIMS, Kyoto University.
Stable Limit Laws in RWRE, April 2015, MASDOC-CCA Joint Workshop, University of Warwick.
Random Walks on Galton-Watson Trees Conditioned to Survive, February 2015, Postgraduate Seminar, University of Warwick.

### Events Attended

15-16 February 2018, Markov processes on metric measure spaces and Gaussian fields workshop, University of Duisburg-Essen.
25 May 2017, Recent Progress on the Geometry of Random Walks, University of Cambridge.
30 August - 02 September 2016, Workshop on Random Processes in Discrete Structures, University of Warwick.
19-24 June 2016, School and Workshop on Random Interacting Systems, University of Bath.
04-08 April 2016, UK Easter Probability Meeting, Lancaster University.
29 March - 02 April 2016, Probabilistic models - from discrete to continuous, University of Warwick.
26-29 October 2015, Stochastic Analysis on Large Scale Interacting Systems, RIMS, Kyoto University.
18-22 May 2015, Random walks on graphs and potential theory, University of Warwick.
11-15 May 2015, MASDOC Summer School: Topics in Renormalisation Group Theory and Regularity Structures, University of Warwick.
15-17 April 2015, MASDOC-CCA Joint Workshop, University of Warwick.
23-27 March 2015, YEP XII: Random walk in random environment, EURANDOM Eindhoven.
16-20 March 2015, Random Graphs, Random Trees and Applications, Isaac Newton Institute for Mathematical Sciences, Cambridge.

### Teaching Assistant Roles

2017/18 - ST202: Stochastic Processes.
2017/18 - MA258: Mathematical Analysis III.
2016/17 - ST333: Applied Stochastic Processes.
2015/16 - ST115: Introduction to Probability.
2014/15 - ST115: Introduction to Probability.

### Background

I am originally from a small village on the edge of Cambridgeshire called Fowlmere; however, I currently reside in Coventry. I am a keen sportsman, having participated in tennis, rugby union, football and dodgeball competitions. In the latter, I have competed internationally for the Scotland Highlanders and won many tournaments with the Warwick Warriors. I enjoy reading classic novels, in particular nineteenth century French and Russian literature. I have also been a representative on both the postgraduate mathematics SGSLC and, as secretary, the MASDOC SSLC.

I have been an avid member of the Warwick University Warriors dodgeball team since 2009, have played in the first team since 2011, and held the roles of club coach, men's captain and vice captain. Many tournaments are recorded and videos uploaded to the Warwick Dodge youtube channel. Some notable matches are the final of the Yorkshire open in 2011 against Jammy Dodgers, where I made my first double catch and won my first gold medal, and the match against Reepham Raiders in meet 4 of DPL4, where I made double catches in consecutive games to salvage an important win.

Since 2013 I have been playing internationally with the Scotland Highlanders first team. We have entered nine tournaments in this time, winning the 2016 Six Nations, coming third in the 2014 Six Nations, and second in the 2013 Six Nations, 2015 Six Nations, 2013 Euros, 2014 Euros, 2015 Euros and 2016 Euros. We also came fifth in the first ever world cup in Manchester.

### Contact

I can be contacted via email through a dot bowditch at warwick dot ac dot uk or found in the office D0.05 of the Zeeman Building most days.
2019-01-20 09:42:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3297240436077118, "perplexity": 1998.0966350919916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705091.62/warc/CC-MAIN-20190120082608-20190120104608-00612.warc.gz"}
https://ask.openstack.org/en/answers/1747/revisions/
# Revision history

I found the problem in the end. In fact there is a bug in the boot procedure: you have to specify the image that your bootable volume uses in order to boot correctly from your volume. So the correct command is:

nova boot --flavor 2 --image <id of your image> --key_name mykey --block_device_mapping vda=13:::0 boot-from-vol-test

I think it will be useful for some people...
2019-06-26 11:14:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6103125810623169, "perplexity": 6968.82343022414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000266.39/warc/CC-MAIN-20190626094111-20190626120111-00045.warc.gz"}
https://www.bionicturtle.com/forum/threads/errors-found-in-study-materials-p2-t6-credit-risk.8759/page-2
Errors Found in Study Materials P2.T6. Credit Risk

Carlos Madrid (New Member): Hi, I do not know if this has been reported, but the default correlation formula on page 19 is confusing: the square root of pi2(1 - pi2) should not appear under a square root twice. Regards

New Member: Not an error, but I am not able to find the [CR-13] Counterparty Risk Intermediation study notes. Are they available, or can I just not find them? Thank you in advance.

David Harper CFA FRM (Staff member): Hi @Carlos Madrid Apologies for the typo on page 19; indeed an extra square root is incorrectly shown (this has been tagged for a fix and revision). In regard to CR-13 (aka, R45 Gregory), Deepa is actually currently working on the updated (2018) version, including Gregory's XVA Chapter 9 (it's an update to each of Chapters 4, 5, 6, 7, 9, 12, 14 and 17), so it's not currently available but will be fairly soon. Thank you!

Karim_B (Active Member): Hi @David Harper CFA FRM I think there's an error in the R43-P2-T6 Stulz Ch 18 Study Notes. On page 4 you have an excerpt saying debt holders receive Max(F - V(t), 0), but, for example, if the firm value is 80 and the face value of debt is 100, don't the debt holders get 80 back rather than (100 - 80) = 20? Thanks Karim

David Harper CFA FRM (Staff member): Hi @Karim_B Yes, absolutely, you are correct; we mangled this (although I can see why, the source Stulz is confusing as usual). We will replace it with this (cc: @Nicole Seaman):

"If the debt is risky, there is no guarantee the principal amount will be repaid in full. Specifically, if the value of the firm falls below the principal amount, if V(T) < F, then the firm is insolvent and the debt holders can only recover the firm's value. Therefore, the payoff to debt holders is Min[V(T), F]; i.e., they receive whichever is smaller, the firm's value or the principal amount.
Further, Min[V(T), F] = F - Max[F - V(T), 0], which illustrates that the debt holders' payoff is economically equivalent to the debt principal minus the payoff of a put option on the firm's assets, V(T), with an exercise price equal to F. Consider the same example where F = $100. If V(T) = $120, then debt holders receive Min(100, 120) = 100 - Max(100 - 120, 0) = $100. But if V(T) = $80, then debt holders receive Min(100, 80) = 100 - Max(100 - 80, 0) = $80."

Updated in Notes.

Karim_B (Active Member): Hi @David Harper CFA FRM R44.P2.T6 Malz Study Notes, page 42: I think the Default01 formula should have 1/20 multiplied by the difference of the next two terms, rather than being just 1/20th of the first term. What I think it should be:

Default01 = 1/20 * [(mean value/loss for Pi + 0.0010) − (mean value/loss for Pi − 0.0010)]

Thanks Karim

Nicole Seaman (Staff member): @David Harper CFA FRM Can you confirm that this is an error in the notes? Thank you! Nicole

David Harper CFA FRM (Staff member): Thank you (again) @Karim_B! @Nicole Seaman yes, as can be confirmed on page 333 of Malz (Chapter 9), Karim is correct. Thank you.

silver7 (New Member): I am looking at the formulas for the BCVA stress test. However, the notation is really confusing. Could anyone clarify which belongs to which (counterparty or the firm itself)? For example, in the notes, it says S(I) is the counterparty's survival rate.
(Topic 32, page 13) However, in practice question 708.3, according to the solution, C is correct, but there it says S(I) is the survival rate of the firm itself. So which notation is correct? Also, could someone clarify the notation for the other variables as well? Thank you.

David Harper CFA FRM (Staff member): @silver7 I apologize, but the Study Note appears to have a typo (it has been a struggle to cope with all of the different CVA assignments). cc @Nicole Seaman. The Study Note, page 13, should read "... so the probability that the counterparty has survived must enter into the calculation as S(n)." Question 708.3 is correct: S(I) is the survival probability of the institution and S(n) is the survival probability of the counterparty. The first term is unilateral CVA, such that we've basically got LGD*EE*PD for the counterparty; i.e., LGD(counterparty)*EE(counterparty)*PD(counterparty). In BCVA the institution's own credit risk is added, such that the two terms are (without respect to the +/-):

• CVA --> LGD(counterparty)*EE(counterparty)*PD(counterparty)
• DVA --> LGD(institution)*EE(institution)*PD(institution)

But these terms presume survival by the non-defaulting firm, such that survivals enter as the following, where PS = probability of survival:

• CVA --> LGD(counterparty)*EE(counterparty)*PD(counterparty)*PS(institution)
• DVA --> LGD(institution)*EE(institution)*PD(institution)*PS(counterparty)

See Gregory below.

rnavarro (New Member): From P2.T6. Credit Risk Measurement & Management, Giacomo De Laurentis, Renato Maino, and Luca Molteni: Developing, Validating and Using Internal Ratings (p. 21). N = the cumulative normal distribution operator is missing in the probability of default. The text has this operator.

rnavarro (New Member): I have the 2018 GARP copy of De Laurentis and the normal cdf operator is present.
David Harper CFA FRM David Harper CFA FRM Staff member Subscriber @rnavarro yes understood, but (per the link I shared to you) does it still also contain numerator ... µT + 1/2 σ(a)^2; i.e., missing the final "T"? I don't know how many of the many errors we've contributed have been corrected Last edited: JulioFRM Member Hi @David Harper CFA FRM , In the study notes of this topic, in page 4, it says that "If the debt is risky, that is, when the value of the firm falls below the principal amount to be paid back (V(t) < F) then the debt holders receive the maximum of F − V(t) or zero." However, I believe that when the value V falls below F, debt holders won't receive the maximum of F - V(t) and 0; debt holders will receive less than the loan amount F BY AN AMOUNT equal to F - V(t). But it is not that debt holders receive f-v. As a result, if the value of the firm is greater than f, then max is = to 0 and debt holders receive the whole loan amount back. If f is greater than v, then they receive less than the loan amount f by an amount f-v. Please let me know what you think. Thanks. David Harper CFA FRM David Harper CFA FRM Staff member Subscriber Hi @JulioFRM Yes, I absolutely agree, it is our mistake in the text. Thank you, and apologies for any confusion. It can be difficult to follow Stulz's confusing language, but it should read: "If the debt is risky, the debt holders are not guaranteed full repayment of the principal amount, F. Specifically, if the value of the firm falls below the debt's principal amount, then the debt holders can only be repaid the (reduced) value of the firm; i.e., if V(t) < F then debt repayment equals V(t). This can be restated in option terms: if the value of the firm falls below the principal amount to be repaid, then the debt holders receive the face value minus the difference, F - V(t). That is, if V(t) < F then debt repayment F - [F-V(t)] = V(t). 
In this way, we can express the repayment in option terms that accommodate any future V(t): D(T) = F - Max[F - V(t), 0] = F - [put option on firm's assets, V(t), with exercise price of debt principal, F]"

Thank you! @Nicole Seaman, after I replied, I noticed this had already been captured (and it's already in Wrike). Moved to this thread.

jaivipin (Active Member): I believe the default correlation should be as in my attached formula (attachment not preserved); in the doc it is shown as in the second attachment (also not preserved).

Flashback (Active Member): The first one is correct, and the same is used to answer correctly in GARP's mocks. In the denominator, both calculations are the standard deviations of the default probabilities of each variable. In the numerator, the calculation is the covariance of the PDs of both variables.

Nicole Seaman (Staff member): Hello @jaivipin Can you post the specific reading that you are referring to and, if possible, the page that this is located on? This thread is for all readings in Topic 6, so it is helpful to know which reading you are referring to so I don't have to search through everything. I want to make sure this is fixed in the document. Thank you, Nicole
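Several formulas discussed in this thread can be checked or made concrete in a few lines. All numbers below are illustrative only (not from the study notes), and the BCVA function collapses the time-bucketed, discounted sums to a single period purely for intuition:

```python
import math

# 1) Stulz/Merton debt payoff identity: Min[V, F] = F - Max[F - V, 0]
F = 100.0  # face value of the debt, as in the example above

def payoff_min(V):
    # debt holders recover the smaller of firm value and principal
    return min(V, F)

def payoff_put(V):
    # principal minus a put on firm value struck at F
    return F - max(F - V, 0.0)

for V in (0.0, 50.0, 80.0, 100.0, 120.0, 250.0):
    assert payoff_min(V) == payoff_put(V)

# 2) Default correlation: covariance of the two default indicators
#    divided by the product of their standard deviations sqrt(p_i(1 - p_i))
def default_correlation(p1, p2, p12):
    return (p12 - p1 * p2) / math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))

# 3) One-period BCVA intuition: CVA term minus DVA term, each weighted
#    by the survival probability (PS) of the non-defaulting party
def bcva_one_period(lgd_c, ee_c, pd_c, ps_i, lgd_i, ee_i, pd_i, ps_c):
    return lgd_c * ee_c * pd_c * ps_i - lgd_i * ee_i * pd_i * ps_c

print(payoff_put(80.0), payoff_put(120.0))            # 80.0 100.0
print(round(default_correlation(0.1, 0.1, 0.03), 4))  # 0.2222
```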
http://mathhelpforum.com/algebra/69936-logarithmic-functions-help.html
# Math Help - Logarithmic Functions....help =)

1. ## Logarithmic Functions....help =)

Hey guys, having a little trouble with functions. Here are the problems:

1) 4^x+2-4^x=15
2) 7^x^2=10
3) 2^x=3^x

and i have no idea how to post other q's with my keyboard. thanks for the help though everyone

2. Originally Posted by Baginoman

1. I assume that you wanted to solve these equations(?)
2. I assume that you meant: $4^{x+2}-4^x=15~\implies~16 \cdot 4^x-4^x=15~\implies~ 15\cdot 4^x=15$ I leave the rest for you.

$7^{x^2} = 10~\implies~x^2=\log_7(10)~\implies~x=\sqrt{\log_7 (10)}$

$2^x=3^x~\implies~1=\dfrac{3^x}{2^x} = \left(\dfrac32\right)^x$ Now use the definition $a^0=1\ ,\ a > 0$ to get $x= 0$

3. awesome, those confused me lol

1) log3^x=log3(1/x)+4 (keep in mind both 3's are log of 3)
2) log(7y + 1)= 2 log(y + 3) - log 2
3) log(3x - 4)/log x =2

thank you again guys

4. second question: just use logarithm rules, the 2 goes back to the exponent, then you have a difference of logarithms
third question: change of base rule

5. Originally Posted by Baginoman

1. Do us and do yourself a favour and start a new thread if you have new questions. Otherwise you risk that nobody will notice your need for help.
2. I don't understand the equation in 1). Do you mean: $(\log(3))^x$ or $(\log_3)^x$ or $\log(3^x)$ ...
3. If you have some difficulties using becker89's hint, here is a starter: $\log(7y+1)=2\log(y+3)-\log(2)~\implies~$ $\log(7y+1)+\log(2)=2\log(y+3)~\implies~\log(14y+2) =\log((y+3)^2)$ Now de-logarithmize. You'll get a quadratic equation in y. My result is: y = 7 or y = 1

4.
Multiply the equation through by log(x): $\log(3x-4)=2\log(x)~\implies~\log(3x-4)=\log(x^2)$ De-logarithmize and solve the quadratic equation. This equation doesn't have a real solution. 6. Originally Posted by earboth 1. Do us and do yourself a favour and start a new thread if you have new questions. Otherwise you risk that nobody will notice your need for help. 2. I don't understand the equation in 1). Do you mean: $(\log(3))^x$ or $(\log_3)^x$ or $\log(3^x)$ ... 3. If you have some difficulties to use becker89 hint, here is a starter: $\log(7y+1)=2\log(y+3)-\log(2)~\implies~$ $\log(7y+1)+\log(2)=2\log(y+3)~\implies~\log(14y+2) =\log((y+3)^2)$ Now de-logarithmize. You'll get a quadratic equation in y. My result is: y = 7 or y = 1 4. Multiply the equation through by log(x): $\log(3x-4)=2\log(x)~\implies~\log(3x-4)=\log(x^2)$ De-logarithmize and solve the quadratic equation. This equation doesn't have a real solution. sorry about that, and yes the first one is $(\log_3)^x$ also if its not a hassle, how does logX^2=2 come out as (10, -10)?? P.s. do you mind if i PM you with some q's? 7. Originally Posted by Baginoman sorry about that, and yes the first one is $(\log_3)^x$ also if its not a hassle, how does logX^2=2 come out as (10, -10)?? P.s. do you mind if i PM you with some q's? 1. to $(\log_3)^x$: Now we know the base of the logarithmic function but the argument is still missing ... 2. If $\log$ means $\log_{10}$ the the last equation can be solved: $\log(x^2)=2~\implies~10^{\log(x^2)} = 10^2~\implies~x^2=100$ P.s. do you mind if i PM you with some q's? Don't do that: Give other members of MHF a chance ! 8. Originally Posted by earboth 1. to $(\log_3)^x$: Now we know the base of the logarithmic function but the argument is still missing ... 2. If $\log$ means $\log_{10}$ the the last equation can be solved: $\log(x^2)=2~\implies~10^{\log(x^2)} = 10^2~\implies~x^2=100$ Don't do that: Give other members of MHF a chance ! 
hahah thankyou very much, your talent makes me want to learn math to a greater extent! Plus i would post another thread for more help, but i kinda need an answer within the next half hour or so 9. Originally Posted by Baginoman ... 1) log3^x=log3(1/x)+4 (keep in mind both 3's are log of 3) ... I believe that I found the solution to this cryptic equation: Probably you mean: $\log_3(x)=\log_3\left(\frac1x \right) + 4~\implies~\log_3(x)=-\log_3\left(x \right) + 4 ~\implies~$ $2\log_3(x)=4~\implies~x^2=81$ Since x > 0 there is only one solution. (x = 9) 10. Originally Posted by earboth I believe that I found the solution to this cryptic equation: Probably you mean: $\log_3(x)=\log_3\left(\frac1x \right) + 4~\implies~\log_3(x)=-\log_3\left(x \right) + 4 ~\implies~$ $2\log_3(x)=4~\implies~x^2=81$ Since x > 0 there is only one solution. (x = 9) yep thats its lol. hmmm where did the 2log come from? 11. Originally Posted by Baginoman yep thats its lol. hmmm where did the 2log come from? $ \log_3(x)=\log_3\left(\frac1x \right) + 4~\implies~\log_3(x)=-\log_3\left(x \right) + 4 ~\implies~ $ To get rid of the log at the RHS you have to add $\log_3(x)$ on both sides of the equation. And then you'll get at the LHS: $\log_3(x) + \log_3(x) = 2\log_3(x)$
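For anyone who wants to double-check the two solved equations from this thread numerically, a short verification (taking the unadorned "log" to be base 10, as the thread does):

```python
import math

def log_base(b, x):
    """Logarithm of x in base b via the change-of-base rule."""
    return math.log(x) / math.log(b)

# log_3(x) = log_3(1/x) + 4 has the solution x = 9:
x = 9
assert math.isclose(log_base(3, x), log_base(3, 1 / x) + 4)

# log(7y + 1) = 2 log(y + 3) - log 2 has the solutions y = 7 and y = 1:
for y in (7, 1):
    assert math.isclose(math.log10(7 * y + 1), 2 * math.log10(y + 3) - math.log10(2))
```

Both sides agree to floating-point precision, confirming x = 9 and y ∈ {1, 7}.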
https://www.physicsforums.com/threads/finding-specific-heat-of-an-object-graphically.542561/
# Finding specific heat of an object graphically

1. Oct 21, 2011

### armolinasf

1. The problem statement, all variables and given/known data

Samples A and B are at different initial temperatures when they are placed in a thermally isolated container and allowed to come to thermal equilibrium. Figure a gives their temperatures T versus time t. Sample A has a mass of 5.2 kg; sample B has a mass of 1.6 kg. Figure b is a general plot about the material of sample B. It shows the temperature change ΔT that the material undergoes when energy is transferred to it as heat Q. The change ΔT is plotted versus the energy Q per unit mass of the material.

3. The attempt at a solution

The heat gained by object B must equal the total heat of object A. So Qb = Qa = (Q/mb)*mb, mb = mass of object B. But Q = mcΔT ==> c = Q/(mΔT). Would ΔT then just be the total change from 100 to 60 degrees? This would give me 25600/(5.2*60) = 82.05, which is incorrect. Where am I going wrong? Thanks for the help

Attached: W0403-N.jpg

2. Oct 21, 2011

### grzz

I think it is better to replace the underlined by 'heat lost by'.

3. Oct 21, 2011

### grzz

I think that you mean that $\Delta$T is the change from 100 to 40, i.e. 60 deg, because we are considering A.

4. Oct 21, 2011

### grzz

Can the poster explain what the above means?
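To make the arithmetic in the attempt concrete, here is a small sketch of the c = Q/(mΔT) computation. The numbers (Q = 25600 J, ΔT = 60 K) are the ones quoted in the post, not values read from the actual figure:

```python
def specific_heat(Q, m, dT):
    """Specific heat from Q = m * c * dT, i.e. c = Q / (m * dT)."""
    return Q / (m * dT)

# The original poster's computation for sample A (5.2 kg, temperature change 60 K):
c_attempt = specific_heat(Q=25600.0, m=5.2, dT=60.0)
print(round(c_attempt, 2))  # 82.05
```

Whether this is the right answer depends on reading ΔT and Q correctly off figures (a) and (b), which is the point the replies raise.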
http://gmatclub.com/forum/the-total-price-of-a-basic-computer-and-printer-are-129380.html?fl=similar
# The total price of a basic computer and printer are $2,500

The total price of a basic computer and printer are $2,500. If the same printer had been purchased with an enhanced computer whose price was $500 more than the price of the basic computer, then the price of the printer would have been 1/5 of that total. What was the price of the basic computer?

A. 1500
B. 1600
C. 1750
D. 1900
E. 2000

I started doing this question, but got stuck. Let x be the price of the basic computer and y be the price of the printer.

x + y = 2500 ------(1)
Price of the enhanced computer = x + 500 ------(2)
Price of the printer i.e.
y = 1/5 (x+500) ------(3)

Solving 1 and 2 will give me x = 2000. But the answer is not correct, it's D?

Intern (20 Mar 2012, 09:50):

My try: Here you have: c = computer, p = printer.
Total price of combo 1: c + p = 2500
Total price of combo 2: c + p + 500 = 3000 (here you are not changing any price, only adding 500)
p = 3000/5 = 600
Then: c + 600 = 2500, c = 1900

Math Expert (20 Mar 2012, 12:18):

The total price of a basic computer and printer are $2,500. If the same printer had been purchased with an enhanced computer whose price was $500 more than the price of the basic computer, then the price of the printer would have been 1/5 of that total. What was the price of the basic computer? A.
1500
B. 1600
C. 1750
D. 1900
E. 2000

Let the price of the basic computer be C and the price of the printer be P: C+P=$2,500. The price of the enhanced computer will be C+500, and the total price for that computer and the printer will be 2,500+500=$3,000. Now, we are told that the price of the printer is 1/5 of that new total price: P=1/5*$3,000=$600. Plug this value into the first equation: C+600=$2,500 --> C=$1,900.

Manager (07 Dec 2011): Let B and P be the prices of the basic computer and printer respectively. Given B+P=2500; also, advanced computer price = B+500 and P = 1/5(2500 + 500) = 600. Substituting this value in the first equation we get B = 2500 - 600 = 1900, hence D.

Senior Manager (12 Mar 2012): p + c = 2500; p + c + 500 = total (T). Hence T = 3000, p = 1/5*3000 = 600, hence c = 1900.

Practice, practice and practice...!! If there's a loophole in my analysis --> suggest measures to make it airtight.
SVP (24 Mar 2014): Did it using one variable:

Price of basic computer = x
Price of printer = 2500 - x
Price of enhanced computer = x + 500

Setting up the equation $$2500 - x = \frac{1}{5} * (x+500+2500-x)$$ gives x = 1900. Answer = D.

(03 May 2014): total price = Bc + Printer = 2500 ------(1)
$$1/5(Enhanced Comp + P) = P$$
E.c + P = 5P, so 4P = E.c, i.e. 4P = B.c + 500
4P = 2500 - P + 500 (from 1)
P = 600, B.c = 1900
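All of the solutions in this thread boil down to the same two-line computation, which can be checked directly:

```python
# C + P = 2500; with the enhanced computer the total becomes 2500 + 500 = 3000,
# and the printer is 1/5 of that new total.
total = 2500
enhanced_total = total + 500
P = enhanced_total // 5   # printer price
C = total - P             # basic computer price
print(C, P)  # 1900 600
```

The common trap (equation (3) in the original attempt) is applying the 1/5 to the enhanced computer's price alone instead of to the new combined total.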
https://www.aimsciences.org/article/doi/10.3934/dcds.2007.18.199
# American Institute of Mathematical Sciences

January 2007, 18(1): 199-217. doi: 10.3934/dcds.2007.18.199

## Random β-expansions with deleted digits

1 Department of Mathematics, Utrecht University, Postbus 80.000, 3508 TA Utrecht, Netherlands

Received June 2006. Revised November 2006. Published February 2007.

In this paper we define random $\beta$-expansions with digits taken from a given set of real numbers $A= \{ a_1 , \ldots , a_m \}$. We study a generalization of the greedy and lazy expansion and define a function $K$ that generates essentially all $\beta$-expansions with digits belonging to the set $A$. We show that $K$ admits an invariant measure $\nu$ under which $K$ is isomorphic to the uniform Bernoulli shift on $A$.

Citation: Karma Dajani, Charlene Kalle. Random β-expansions with deleted digits. Discrete & Continuous Dynamical Systems - A, 2007, 18 (1) : 199-217. doi: 10.3934/dcds.2007.18.199
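As background for the generalization studied in the paper, the classical (non-random) greedy β-expansion can be sketched in a few lines. This is an illustrative implementation for the standard digit set {0, …, ⌈β⌉ − 1}, not the random map K defined in the paper:

```python
import math

def greedy_beta_expansion(x, beta, n_digits):
    """Greedy beta-expansion of x in [0, 1): iterate x -> beta*x - d with d = floor(beta*x)."""
    digits = []
    for _ in range(n_digits):
        d = math.floor(beta * x)
        digits.append(d)
        x = beta * x - d
    return digits

def evaluate(digits, beta):
    """Reconstruct the number as sum of d_k * beta**(-k), k = 1, 2, ..."""
    return sum(d * beta ** (-(k + 1)) for k, d in enumerate(digits))
```

With β the golden ratio, for example, the digits lie in {0, 1} and the partial sums converge back to x at the geometric rate β^(-n).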
https://byjus.com/question-answer/a-given-wire-of-residence-1-omega-is-stretched-to-double-its-length-what-will/
Question # A given wire of resistance $$1\Omega$$ is stretched to double its length. What will be its new resistance?

Solution ## Given resistance $$=1\Omega$$, initial length L, and final length 2L. Let the initial area be $$A$$ and the final area $${A}_{f}=\dfrac{A}{2}$$. Though the length and cross-sectional area of the wire change, its volume remains constant: $${V}_{i} ={V}_{f}$$.

Thus initial resistance $${R}_{i}=\rho \dfrac{L}{A}$$ --------(1)
Final resistance $${R}_{f}=\rho\dfrac{2L}{\dfrac{A}{2}}$$ --------(2)

Comparing (1) and (2) we get $${R}_{f}=4{R}_{i}$$, so $${R}_{f}=4 \Omega$$.
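The constant-volume scaling in this solution generalizes: stretching a wire to k times its length multiplies its resistance by k². A minimal check:

```python
def stretched_resistance(R, k):
    """New resistance of a wire stretched to k times its length at constant volume.

    L -> k*L forces A -> A/k, so R = rho*L/A becomes rho*(k*L)/(A/k) = k**2 * R.
    """
    return k ** 2 * R

print(stretched_resistance(R=1.0, k=2))  # 4.0
```

For the problem above (k = 2, R = 1 Ω) this reproduces the 4 Ω answer.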
https://mathoverflow.net/questions/351124/limited-sum-for-whole-sum-approximation
# Limited sum for whole sum approximation

Let $$d_n, n\in\{1,2,\cdots,N\}$$ be $$N$$ realizations drawn independently and identically from the uniform distribution on $$(0,L)$$, where $$L=\gamma\sqrt{N}$$ with constant $$\gamma$$. Suppose that we need to approximate the sum $$\alpha=\sum_{n=1}^{N}d_n^{-3},$$ with the restricted sum $$\hat{\alpha}=\sum_{n\in\mathcal I}d_n^{-3},$$ where $$\mathcal I\subset \{1,2,\cdots,N\}$$. We define the set $$\mathcal I$$ as the indices of the $$|\mathcal I|$$ largest of the random variables $$d_1^{-3},d_2^{-3},\cdots,d_N^{-3}$$. Now, the question is to find the order of the size of the subset $$\mathcal{I}$$ which produces a good approximation ($$\hat{\alpha}\simeq\alpha$$), i.e., $$\mathbb{P}[|\alpha-\hat{\alpha}|\leq\epsilon]\geq 1 -\beta.$$ Is $$|\mathcal I|$$ sub-linear in $$N$$?

• If you miss even a single realization you may have an error as large as $1/\epsilon^3$ with probability $\epsilon/L$ that vanishes only linearly in $\epsilon$, so this will never provide a good approximation. – Carlo Beenakker Jan 25 at 15:14
• But this occurs with a very small probability. By choosing the $|\mathcal{I}|$ largest values of $d_n^{-3}$, it seems that by properly choosing $|\mathcal{I}|$ we have a good approximation. – Math_Y Jan 25 at 15:21
• But the probability that all of the $N$ points lie in $(0,\epsilon)$ is $(\epsilon/L)^N$, which is very small. – Math_Y Jan 25 at 15:39
• For example, assume that the $N$ points are $L/N,2L/N,\cdots,L$. Then the question is: is it possible to truncate the series $\sum_{i=1}^{N}i^{-3}$ to have a good approximation? – Math_Y Jan 25 at 15:43
• It is computationally efficient for me to choose only a small fraction of them. – Math_Y Jan 25 at 15:44

(Edited after noticing an error which completely changes the answer.) This is not a complete solution; however, it strongly suggests that the answer is positive.

Let $$U$$ be a random variable with uniform distribution on $$[0, 1]$$.
The random variable $$U^{-3}$$ has tail $$\mathbb{P}[U^{-3} > x] = x^{-1/3}$$, and hence it is in the domain of attraction of a stable law with index $$\alpha = \tfrac{1}{3}$$. Observe that $$d_n = L U_n = \gamma \sqrt{N} U_n$$ for an i.i.d. sequence $$U_n$$ with uniform distribution on $$[0, 1]$$. By the invariance principle, the processes $$X^N_t = \frac{1}{N^3} \sum_{n = 1}^{\lfloor N t \rfloor} U_n^{-3}$$ converge (in the appropriate Skorokhod topology) to the increasing $$\tfrac{1}{3}$$-stable Lévy process (i.e. the $$\tfrac{1}{3}$$-stable subordinator) $$X_t$$. Clearly, $$\sum_{n = 1}^{\lfloor N t \rfloor} d_n^{-3} = \frac{N^{3/2} X^N_t}{\gamma^3} \, .$$

Denote $$J = |\mathcal{I}|$$, and let $$\hat\alpha_N$$ be the sum of the $$J$$ largest variables among $$d_n^{-3}$$, $$n = 1, 2, \ldots, N$$. Then $$\hat\alpha_N \approx \frac{N^{3/2}}{\gamma^3} \times (\text{sum of } J \text{ largest jumps of } X_t,\ t \in [0, 1]) .$$ (I am not rigorous here. However, I am rather convinced one can turn this into a completely rigorous argument.)

The question thus turns into the following problem, with $$\alpha = \tfrac{1}{3}$$ and $$\delta = \tfrac{3}{2}$$: How many largest jumps of the $$\alpha$$-stable subordinator $$X_t$$, $$t \in [0, 1]$$, does one have to add in order to get an approximation which is within $$\pm\gamma^3 N^{-\delta} \times \epsilon$$ of $$X_1$$ with probability $$1 - \beta$$?

This has been studied a lot: it is the question of how fast the Ferguson–Klass–LePage series of a stable subordinator converges. In particular, since the $$n$$-th largest jump of $$X_t$$ is comparable with $$n^{-1/\alpha}$$, the error should be of the order $$J^{1 - 1/\alpha}$$ when the $$J$$ largest jumps are taken into account. This suggests that $$J \approx N^{\delta / (1/\alpha - 1)}$$ should do the job. In our case, this gives $$J \approx N^{(3/2) / (3 - 1)} = N^{3/4}$$, and thus it suggests that it is sufficient to take roughly $$|\mathcal{I}| = N^{3/4}$$ largest values of $$d_n^{-3}$$.

I bet one can find a reference for what is written above (and in fact at least some authors do work with Pareto-distributed jumps $$U_n^{-1/\alpha}$$ rather than the ordered jumps of $$X_t$$). I am not an expert in this area, though, and I failed to find a reference in a (very) quick Internet search. The closest one that I encountered is:

Bentkus, V., Juozulynas, A. & Paulauskas, V. Lévy–LePage Series Representation of Stable Vectors: Convergence in Variation. Journal of Theoretical Probability 14, 949–978 (2001)

One may also search in the standard reference for simulation of stable random variables:

Janicki, A., and Weron, A. (1994). Simulation and Chaotic Behaviour of $$\alpha$$-stable Stochastic Processes. Marcel Dekker, New York
I bet one can find a reference for what is written above (and in fact at least some authors do work with Pareto-distributed jumps $$U_n^{-1/\alpha}$$ rather than the ordered jumps of $$X_t$$). I am not an expert in this area, though, and I failed to find a reference in a (very) quick Internet search. The closest one that I encountered is: Bentkus, V., Juozulynas, A. & Paulauskas, V. Lévy–LePage Series Representation of Stable Vectors: Convergence in Variation. Journal of Theoretical Probability 14, 949–978 (2001) One may also search in the standard reference for simulation of stable random variables: Janicki, A., and Weron, A. (1994). Simulation and Chaotic Behaviour of $$\alpha$$-stable Stochastic Processes. Marcel Dekker, New York If we first choose $$d_n=\frac{L}{nN}$$ and include the $$M$$ smallest $$d_n$$'s in the restricted sum, then the relative error is $$E=\frac{\sum_{n=M+1}^N 1/n^3}{\sum_{n=1}^N 1/n^3}=\frac{\psi ^{(2)}(M+1)}{\psi ^{(2)}(1)}+{\cal O}(N^{-2}).$$ This ratio of polygamma functions amounts to less than a 1% error for $$M=6$$, independent of $$N$$. For a statistical test I compared $$N=50$$ and $$N=500$$ at a fixed $$M=10$$: shown below are the two histograms of the cumulative distribution of the error $$E$$, obtained from $$10^3$$ realizations of the set of random variables $$d_1,d_2,\ldots d_N$$. As you can see, the two histograms ($$N=50$$ on the left, $$N=500$$ on the right) are nearly the same, with $$E<1\%$$ happening with probability 0.9, so you can keep $$M$$ fixed as you scale up $$N$$. • Thank you so much. I only said $nL/N$ as an example. Maybe, it is better to write the problem in probabilistic way. I mean that $\mathbb{P}(|\alpha-\bar{\alpha}|\leq\beta)$. – Math_Y Jan 25 at 16:22
https://www.physicsforums.com/threads/differential-geometry-coordinate-patches.406751/
# Differential geometry: coordinate patches

1. May 30, 2010

### SNOOTCHIEBOOCHEE

1. The problem statement, all variables and given/known data

For a coordinate patch x: U ---> $$\Re^{3}$$, show that $$u^{1}$$ is arc length on the $$u^{1}$$ curves iff $$g_{11} \equiv 1$$.

3. The attempt at a solution

So I know the arc length of a curve $$\alpha (t)$$ satisfies $$\left(\frac{ds}{dt}\right)^2 = \sum g_{ij} \frac {d\alpha^{i}}{dt} \frac {d\alpha^{j}}{dt}$$ (well that's actually arc length squared, but whatever). But I'm not sure how to write this for just a $$u^{1}$$ curve. A $$u^{1}$$ curve through the point P = x(a,b) is $$\alpha(u^{1})= x(u^{1},b)$$. But I have no idea how this arc length applies to u^1 curves.

Furthermore I know some stuff about our metric: $$g_{ij}(u^{1}, u^{2})= \langle x_{i}(u^{1}, u^{2}), x_{j}(u^{1}, u^{2})\rangle$$. But I do not know how to use that to show that u^1 must be arc length. Here is what I have so far:

$$g_{11}(u^{1}, b)= \langle x_{1}(u^{1}, u^{2}), x_{1}(u^{1}, u^{2})\rangle$$

We know that $$x_{1}= (1,0)$$ and that is as far as I got :/ Any help appreciated.

Last edited: May 30, 2010

2. May 31, 2010

### SNOOTCHIEBOOCHEE

bump, i still need help on this

3. May 31, 2010

### SNOOTCHIEBOOCHEE

one last bump, can anybody help me on this?
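The statement can be checked numerically on concrete patches: a cylinder parametrized at unit speed in u¹ has g₁₁ ≡ 1, while a patch such as (u, v, u²) does not. A sketch using central finite differences (the two parametrizations below are just illustrative choices, not from the problem):

```python
import math

def g11(x, u1, u2, h=1e-6):
    """Numerical g_11 = <x_{u1}, x_{u1}> via a central difference in the u1 direction."""
    p, q = x(u1 + h, u2), x(u1 - h, u2)
    xu = [(a - b) / (2 * h) for a, b in zip(p, q)]
    return sum(c * c for c in xu)

cylinder = lambda u, v: (math.cos(u), math.sin(u), v)  # u1-curves traversed at unit speed
tilted = lambda u, v: (u, v, u * u)                    # here g_11 = 1 + 4u^2, not constant
```

For the cylinder, g11 evaluates to 1 at every (u, v), so the parameter u¹ advances by exactly the arc length along each u¹-curve; for the second patch g₁₁ grows with u, and u¹ is not arc length.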
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-3rd-edition/chapter-12-rotation-of-a-rigid-body-exercises-and-problems-page-351/63
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (3rd Edition)

Let $m_b$ be the mass of the beam and let $m_y$ be the mass of the boy. When the boy reaches the limit just before the beam starts to tip, the upward force exerted by the left post will be zero. We can consider the net torque about an axis of rotation located at the position of the right post, and solve for $r_y$, the boy's distance from the right post. Note that $r_b$, the distance from the right post to the beam's center of mass, is 0.5 meters.

$\sum \tau = 0$

$r_b~m_b~g - r_y~m_y~g = 0$

$r_b~m_b~g = r_y~m_y~g$

$r_y = \frac{r_b~m_b}{m_y}$

$r_y = \frac{(0.5~m)(40~kg)}{20~kg}$

$r_y = 1.0~m$

The boy can go up to 1.0 meter past the right post; that is, he can walk to the point that is 1.0 meter from the end of the beam without the beam tipping.
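The torque balance above is easy to check numerically; this is a small illustrative script (variable names are mine, not the textbook's):

```python
# Torque balance about the right post, just as the beam begins to tip:
# the left post exerts no upward force, so the beam's weight (acting
# 0.5 m from the right post) must balance the boy's weight.
g = 9.8          # m/s^2 (cancels out of the balance, kept for clarity)
m_beam = 40.0    # kg, mass of the beam
m_boy = 20.0     # kg, mass of the boy
r_beam = 0.5     # m, beam's center of mass to the right post

# r_beam * m_beam * g = r_boy * m_boy * g  =>  solve for r_boy
r_boy = r_beam * m_beam / m_boy
print(r_boy)  # 1.0 m past the right post
```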
http://aas.org/archives/BAAS/v25n4/aas183/abs/S418.html
Physical Processes In Soft Gamma-Ray Repeaters

Session 4 -- Gamma Ray Astrophysics
Display presentation, Wednesday, January 12, 9:30-6:45, Salons I/II Room (Crystal Gateway)

## [4.18] Physical Processes In Soft Gamma-Ray Repeaters

M. Fatuzzo (U Michigan) and F. Melia (U Arizona)

There is some evidence that Soft Gamma-Ray Repeaters (SGR) may be neutron stars undergoing structural adjustments that produce the observed transient $\gamma$-ray events. We here consider the physics of Alfvén wave propagation and dissipation in the closed field-line region of the disturbed magnetosphere surrounding such an object, and show that the charged particle gas is unbounded along those field lines with the largest radius of curvature, leading to a highly dynamic (locally super-Eddington) outflow. Within this medium, the synchrotron emission is strongly self-absorbed and the plasma is optically thick to scattering. The radiation distribution in the fluid rest frame therefore approximates that of a blackbody in LTE with the expanding gas. However, the observed spectrum is modified by several factors, including (1) the frequency-dependent location of the photosphere and (2) the angle-dependent boosting for emitting elements at different inclination angles relative to the observer. We show that the spectrum calculated in this way is a reasonable fit to the observed SGR burst spectra, with a characteristic temperature correlated to the burst luminosity. The inferred source distance to SGR 0526-66 and SGR 1806-20 is consistent with their apparent positioning within the LMC (the former) and the Galactic plane (the latter).
https://pandax.sjtu.edu.cn/pub-sci
# Scientific Publications

## Neutron-induced nuclear recoil background in the PandaX-4T experiment

Chinese Physics C (2022)

Neutron-induced nuclear recoil background is critical to the dark matter searches in the PandaX-4T liquid xenon experiment. This paper studies the features of the neutron background in liquid xenon and evaluates its contribution to the single scattering nuclear recoil events through three methods. The first method is fully Monte Carlo simulation based. The last two are data-driven methods that also use the multiple scattering signals and high energy signals in the data, respectively. In the PandaX-4T commissioning data with an exposure of 0.63 tonne⋅year, all these methods give a consistent result that there are 1.15±0.57 neutron-induced background events in the dark matter signal region, within an approximate nuclear recoil energy window between 5 and 100 keV.

## A search for two-component Majorana dark matter in a simplified model using the full exposure data of PandaX-II experiment

Physics Letters B (2022)

In the two-component Majorana dark matter model, one dark matter particle can scatter off the target nuclei and turn into a slightly heavier component. In the framework of a simplified model with a vector boson mediator, both the tree-level and loop-level processes contribute to the signal in a direct detection experiment. In this paper, we report the search results for such dark matter from the PandaX-II experiment, using the total data of the full 100.7 tonne⋅day exposure. No significant excess is observed, so strong constraints on the combined parameter space of mediator mass and dark matter mass are derived. With the complementary search results from collider experiments, a large range of parameter space can be excluded.

## Search for Cosmic-Ray Boosted Sub-GeV Dark Matter at the PandaX-II Experiment

Phys. Rev. Lett., Vol. 128, 171801 (2022)

We report a novel search for cosmic-ray boosted dark matter using the 100 tonne⋅day full dataset of the PandaX-II detector located at the China Jinping Underground Laboratory. With the extra energy gained from the cosmic rays, sub-GeV dark matter particles can produce visible recoil signals in the detector. The diurnal modulations in rate and energy spectrum are utilized to further enhance the signal sensitivity. Our result excludes the dark matter-nucleon elastic scattering cross section between $10^{-31}$ and $10^{-28}\ cm^2$ for dark matter masses from 0.1 $MeV/c^2$ to 0.1 $GeV/c^2$, with a large parameter space previously unexplored by experimental collaborations.

## Dark Matter Search Results from the PandaX-4T Commissioning Run

Physical Review Letters, Vol. 127 (2021)

We report the first dark matter search results using the commissioning data from PandaX-4T. Using a time projection chamber with a 3.7-tonne liquid xenon target and an exposure of 0.63 tonne⋅year, 1058 candidate events are identified within an approximate electron equivalent energy window between 1 and 30 keV. No significant excess over background is observed. Our data set a stringent limit on the dark matter-nucleon spin-independent interactions, with a lowest excluded cross section (90% C.L.) of $3.3 \times 10^{-47}\ cm^2$ at a dark matter mass of 30 $GeV/c^2$.

## ${}^{83}Rb$/${}^{83m}Kr$ production and cross-section measurement with 3.4 MeV and 20 MeV proton beams

Physical Review C (2021)

${}^{83m}Kr$, with its short lifetime, is an ideal calibration source for a liquid xenon or liquid argon detector. The ${}^{83m}Kr$ isomer can be generated through the decay of the ${}^{83}Rb$ isotope, and ${}^{83}Rb$ is usually produced by proton beams bombarding natural krypton atoms. In this paper, we report a successful production of ${}^{83}Rb$/${}^{83m}Kr$ with 3.4 MeV proton beam energy and measure the production rate at such low proton energy for the first time.
Another production attempt was performed with the newly available 20 MeV proton beam in China; the production rate is consistent with our expectation. The produced ${}^{83m}Kr$ source has been successfully injected into the PandaX-II liquid xenon detector and yielded enough statistics for detector calibration.

## Light yield and field dependence measurement in PandaX-II dual-phase xenon detector

Journal of Instrumentation (2021)

The dual-phase xenon detector is one of the most sensitive detectors for dark matter direct detection, where the energy deposition of incoming particles can be converted into light and electrons through xenon excitation and ionization. The detector response to signal energy deposition varies significantly with the electric field in liquid xenon. We study the detector light yield and its dependence on the electric field in the PandaX-II dual-phase detector containing 580 kg of liquid xenon in the sensitive volume. From measurement, the light yield at electric fields from 0 V/cm to 317 V/cm is obtained for energy depositions up to 236 keV.

## Constraining self-interacting dark matter with the full dataset of PandaX-II

Science China Physics, Mechanics & Astronomy, Vol. 64, 111062 (2021)

Self-interacting dark matter (SIDM) is a leading candidate proposed to solve discrepancies between predictions of the prevailing cold dark matter theory and observations of galaxies. Many SIDM models predict the existence of a light force carrier that mediates strong dark matter self-interactions. If the mediator couples to the standard model particles, it could produce characteristic signals in dark matter direct detection experiments. We report searches for signals of SIDM models with a light mediator using the full dataset of the PandaX-II experiment, based on a total exposure of 132 tonne-days. No significant excess over background is found, and our likelihood analysis leads to a strong upper limit on the dark matter-nucleon coupling strength.
We further combine the PandaX-II constraints and those from observations of the light element abundances in the early universe, and show that direct detection and cosmological probes can provide complementary constraints on dark matter models with a light mediator.

## Search for Light Dark Matter-Electron Scatterings in the PandaX-II Experiment

Physical Review Letters, Vol. 126, 211803 (2021)

We report constraints on light dark matter through its interactions with shell electrons in the PandaX-II liquid xenon detector, with a total 46.9 tonne⋅day exposure. To effectively search for these very low energy electron recoils, ionization-only signals are selected from the data. 1821 candidates are identified within an ionization signal range between 50 and 75 photoelectrons, corresponding to a mean electronic recoil energy from 0.08 to 0.15 keV. The 90% C.L. exclusion limit on the scattering cross section between the dark matter and electron is calculated with systematic uncertainties properly taken into account. Under the assumption of point interaction, we provide the world's most stringent limit within the dark matter mass range from 15 to 30 $MeV/c^2$, with the corresponding cross section from $2.5 \times 10^{-37}$ to $3.1 \times 10^{-38}\ cm^2$.

## Determination of responses of liquid xenon to low energy electron and nuclear recoils using the PandaX-II detector

Chinese Physics C (2021)

We report a systematic determination of the responses of PandaX-II, a dual-phase xenon time projection chamber detector, to low energy recoils. The electron recoil (ER) and nuclear recoil (NR) responses are calibrated, respectively, with an injected tritiated methane or ${}^{220}Rn$ source, and with an ${}^{241}Am$-Be neutron source, within an energy range from 1-25 keV (ER) and 4-80 keV (NR), under the two drift fields of 400 and 317 V/cm. An empirical model is used to fit the light yield and charge yield for both types of recoils. The best fit models can well describe the calibration data.
The systematic uncertainties of the fitted models are obtained via statistical comparison against the data.

## Results of Dark Matter Search using the Full PandaX-II Exposure

Chinese Physics C, Vol. 44, 125001 (2020)

We report the dark matter search results using the full 132 ton⋅day exposure of the PandaX-II experiment, including all data from March 2016 to August 2018. No significant excess of events was identified above the expected background. Upper limits are set on the spin-independent dark matter-nucleon interactions. The lowest 90% confidence level exclusion on the spin-independent cross section is $2.0 \times 10^{-46}\ cm^{2}$ at a WIMP mass of 15 $GeV/c^{2}$.

## A search for solar axions and anomalous neutrino magnetic moment with the complete PandaX-II data

Chinese Physics Letters, Vol. 38, 011301 (2020)

We report a search for new physics signals using the low energy electron recoil events in the complete data set from PandaX-II, in light of the recent event excess reported by XENON1T. The data correspond to a total exposure of 100.7 ton⋅day with liquid xenon. With robust estimates of the dominant background spectra, we perform sensitive searches for solar axions and neutrinos with an enhanced magnetic moment. We find that the axion-electron coupling $g_{Ae} < 4.6\times 10^{-12}$ for an axion mass less than $0.1~keV/c^2$, and the neutrino magnetic moment $\mu_\nu < 3.2\times 10^{-11}\mu_{B}$, at 90% confidence level. The observed excess from XENON1T is within our experimental constraints.

## An Improved Evaluation of the Neutron Background in the PandaX-II Experiment

Science China Physics, Mechanics & Astronomy (2019)

In dark matter direct detection experiments, neutrons are a serious source of background, as they can mimic the dark matter-nucleus scattering signals. In this paper, we present an improved evaluation of the neutron background in the PandaX-II dark matter experiment by a novel approach.
Instead of fully relying on the Monte Carlo simulation, the overall neutron background is determined from the neutron-induced high energy signals in the data. In addition, the probability of producing a dark-matter-like background per neutron is evaluated with a complete Monte Carlo generator, where the correlated emission of neutron(s) and $\gamma$(s) in the ($\alpha$, n) reactions and spontaneous fissions is taken into consideration. With this method, the neutron backgrounds in the Run 9 (26-ton-day) and Run 10 (28-ton-day) data sets of PandaX-II are estimated to be 0.66±0.24 and 0.47±0.25 events, respectively.

## Searching for neutrino-less double beta decay of ${}^{136}Xe$ with PandaX-II liquid xenon detector

Chinese Physics C, Vol. 43, 113001 (2019)

We report the neutrino-less double beta decay (NLDBD) search results from the PandaX-II dual-phase liquid xenon time projection chamber. The total live time used in this analysis is 403.1 days, from June 2016 to August 2018. With NLDBD-optimized event selection criteria, we obtain a fiducial mass of 219 kg of natural xenon. The accumulated xenon exposure is 242 kg⋅yr, or equivalently 22.2 kg⋅yr of ${}^{136}Xe$ exposure. At the region around the ${}^{136}Xe$ decay Q-value of 2458 keV, the energy resolution of PandaX-II is 4.2%. We find no evidence of NLDBD in PandaX-II and establish a lower limit on the decay half-life of $2.1\times 10^{23}$ yr at the 90% confidence level, which corresponds to an effective Majorana neutrino mass $m_{\beta\beta} <$ (1.4 - 3.7) eV. This is the first NLDBD result reported from a dual-phase xenon experiment.

## Dark matter direct search sensitivity of the PandaX-4T experiment

Science China Physics, Mechanics & Astronomy, Vol. 62, 31011 (2018)

The PandaX-4T experiment, a 4-ton scale dark matter direct detection experiment, is being planned at the China Jinping Underground Laboratory. In this paper we present a simulation study of the expected background in this experiment.
In a 2.8-ton fiducial mass and the signal region between 1-10 keV electron equivalent energy, the total electron recoil background is found to be $4.9 \times 10^{-5}\ kg^{-1}d^{-1}keV^{-1}$. The nuclear recoil background in the same region is $2.8 \times 10^{-7}\ kg^{-1}d^{-1}keV^{-1}$. With an exposure of 5.6 ton-years, the sensitivity of PandaX-4T could reach a minimum spin-independent dark matter-nucleon cross section of $6 \times 10^{-48}\ cm^{2}$ at a dark matter mass of 40 $GeV/c^{2}$.

## Constraining Dark Matter Models with a Light Mediator at the PandaX-II Experiment

Physical Review Letters, Vol. 121, 021304 (2018)

We search for nuclear recoil signals of dark matter models with a light mediator in PandaX-II, a direct detection experiment in the China Jinping underground laboratory. Using data collected in the 2016 and 2017 runs, corresponding to a total exposure of 54 ton⋅day, we set upper limits on the zero-momentum dark matter-nucleon cross section. These limits have a strong dependence on the mediator mass when it is comparable to or below the typical momentum transfer. We apply our results to constrain self-interacting dark matter models with a light mediator mixing with standard model particles, and set strong limits on the model parameter space for dark matter masses ranging from 5 GeV to 10 TeV.

## PandaX-II Constraints on Spin-Dependent WIMP-Nucleon Effective Interactions

Physics Letters B, Vol. 792, 193-198 (2018)

We present PandaX-II constraints on candidate WIMP-nucleon effective interactions involving the nucleon or WIMP spin, including, in addition to standard axial spin-dependent (SD) scattering, various couplings among vector and axial currents, magnetic and electric dipole moments, and tensor interactions. The data set corresponding to a total exposure of 54 ton⋅days is reanalyzed to determine constraints as a function of the WIMP mass and isospin coupling.
We obtain WIMP-nucleon cross section bounds of $1.6\times 10^{-41}\ cm^2$ and $9.0\times10^{-42}\ cm^2$ (90% C.L.) for neutron-only SD and tensor coupling, respectively, for a mass $M_{WIMP} \sim 40\ GeV/c^2$. The SD limits are the best currently available for $M_{WIMP} > 40\ GeV/c^2$. We show that PandaX-II has reached a sensitivity sufficient to probe a variety of other candidate spin-dependent interactions at the weak scale.

## Explore the Inelastic Frontier with 79.6-day of PandaX-II Data

Physical Review D, Vol. 96, 102007 (2017)

We report here the results of searching for inelastic scattering of dark matter (initial and final state dark matter particles differ by a small mass splitting) with a nucleon for the first 79.6 days of PandaX-II data (Run 9). We set the upper limits for the spin-independent weakly interacting massive particle-nucleon scattering cross section up to a mass splitting of $300\ keV/c^{2}$ at two benchmark dark matter masses of 1 and $10\ TeV/c^{2}$.

## Limits on Axion Couplings from the First 80 Days of Data of the PandaX-II Experiment

Physical Review Letters, Vol. 119, 181806 (2017)

We report new searches for solar axions and galactic axionlike dark matter particles, using the first low-background data from the PandaX-II experiment at the China Jinping Underground Laboratory, corresponding to a total exposure of about $2.7\times10^{4}$ kg⋅day. No solar axion or galactic axionlike dark matter particle candidate has been identified. The upper limit on the axion-electron coupling ($g_{Ae}$) from the solar flux is found to be about $4.35\times10^{-12}$ in the mass range from $10^{-5}$ to $1\ keV/c^{2}$ with 90% confidence level, similar to the recent LUX result. We also report a new best limit from the ${}^{57}Fe$ deexcitation. On the other hand, the upper limit from the galactic axions is on the order of $10^{-13}$ in the mass range from 1 to $10\ keV/c^{2}$ with 90% confidence level, slightly improved compared with LUX.
## Dark Matter Results From 54-Ton-Day Exposure of PandaX-II Experiment

Physical Review Letters, Vol. 119, 181302 (2017)

We report a new search for weakly interacting massive particles (WIMPs) using the combined low background data sets acquired in 2016 and 2017 from the PandaX-II experiment in China. The latest data set contains a new exposure of 77.1 live days, with the background reduced to a level of $0.8\times10^{-3}$ evt/kg/day, improved by a factor of 2.5 in comparison to the previous run in 2016. No excess events are found above the expected background. With a total exposure of $5.4\times10^{4}$ kg⋅day, the most stringent upper limit on the spin-independent WIMP-nucleon cross section is set for a WIMP with mass larger than $100\ GeV/c^{2}$, with the lowest 90% C.L. exclusion at $8.6\times10^{-47}\ cm^{2}$ at $40\ GeV/c^{2}$.

## First dark matter search results from the PandaX-I experiment

Science China Physics, Mechanics & Astronomy, Vol. 57, 2024 (2017)

We report on the first dark-matter (DM) search results from PandaX-I, a low threshold dual-phase xenon experiment operating at the China JinPing Underground Laboratory. In the 37-kg liquid xenon target with 17.4 live-days of exposure, no DM particle candidate event was found. This result sets a stringent limit for low-mass DM particles and disfavors the interpretation of previously-reported positive experimental results. The minimum upper limit, $3.7\times10^{-44}\ cm^{2}$, for the spin-independent isoscalar DM-particle-nucleon scattering cross section is obtained at a DM-particle mass of $49\ GeV/c^{2}$ at 90% confidence level.

## Spin-Dependent Weakly-Interacting-Massive-Particle-Nucleon Cross Section Limits from First Data of PandaX-II Experiment

Physical Review Letters, Vol. 118, 071301 (2017)

New constraints are presented on the spin-dependent weakly-interacting-massive-particle-(WIMP-)nucleon interaction from the PandaX-II experiment, using a data set corresponding to a total exposure of $3.3\times10^{4}$ kg⋅day.
Assuming a standard axial-vector spin-dependent WIMP interaction with ${}^{129}Xe$ and ${}^{131}Xe$ nuclei, the most stringent upper limits on WIMP-neutron cross sections for WIMPs with masses above $10\ GeV/c^{2}$ are set among all dark matter direct detection experiments. The minimum upper limit of $4.1\times10^{-41}\ cm^{2}$ at 90% confidence level is obtained for a WIMP mass of $40\ GeV/c^{2}$. This represents more than a factor of 2 improvement on the best available limits at this and higher masses. These improved cross-section limits provide more stringent constraints on the effective WIMP-proton and WIMP-neutron couplings.

## Krypton and radon background in the PandaX-I dark matter experiment

Journal of Instrumentation, Vol. 12, T02002 (2017)

We discuss an in-situ evaluation of the ${}^{85}Kr$, ${}^{222}Rn$, and ${}^{220}Rn$ background in PandaX-I, a 120-kg liquid xenon dark matter direct detection experiment. Combining with a simulation, their contributions to the low energy electron-recoil background in the dark matter search region are obtained.

## Dark Matter Results from First 98.7 Days of Data from the PandaX-II Experiment

Physical Review Letters, Vol. 117, 21303 (2016)

We report the weakly interacting massive particle (WIMP) dark matter search results using the first physics-run data of the PandaX-II 500 kg liquid xenon dual-phase time-projection chamber, operating at the China JinPing underground laboratory. No dark matter candidate is identified above background. In combination with the data set from the commissioning run, with a total exposure of $3.3\times10^{4}$ kg⋅day, the most stringent limit on the spin-independent interaction between ordinary matter and WIMP dark matter is set for a range of dark matter masses between 5 and $1000\ GeV/c^{2}$. The best upper limit on the scattering cross section is found to be $2.5\times10^{-46}\ cm^{2}$ for a WIMP mass of $40\ GeV/c^{2}$ at 90% confidence level.
## Low-mass dark matter search results from full exposure of the PandaX-I experiment

Physical Review D, Vol. 92, 052004 (2015)

We report the results of a weakly interacting massive particle (WIMP) dark matter search using the full 80.1 live-day exposure of the first stage of the PandaX experiment (PandaX-I), located in the China Jin-Ping Underground Laboratory. The PandaX-I detector has been optimized for detecting low-mass WIMPs, achieving a photon detection efficiency of 9.6%. With a fiducial liquid xenon target mass of 54.0 kg, no significant excess events were found above the expected background. A profile likelihood ratio analysis confirms our earlier finding that the PandaX-I data disfavor all positive low-mass WIMP signals reported in the literature under standard assumptions. A stringent bound on low-mass WIMPs is set at WIMP masses below $10\ GeV/c^{2}$, demonstrating that liquid xenon detectors can be competitive for low-mass WIMP searches.
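As a rough illustration of where such "90% C.L. upper limit" numbers come from (the collaboration itself uses full profile-likelihood analyses, so this is only a toy background-free counting-experiment sketch, and the conversion factor below is invented purely for illustration): if zero events are observed, the 90% C.L. Poisson upper limit on the expected signal count is -ln(0.10) ≈ 2.30 events, which is then divided by the effective exposure to give a cross-section limit.

```python
import math

# Toy counting experiment: observing N = 0 events, the 90% C.L. upper
# limit mu_up on the Poisson mean satisfies P(0 | mu_up) = 0.10,
# i.e. exp(-mu_up) = 0.10, so mu_up = -ln(0.10).
cl = 0.90
mu_up = -math.log(1.0 - cl)   # ≈ 2.303 expected signal events

# Hypothetical conversion to a cross-section limit: if the detector
# would register R events per unit cross section (exposure, efficiency,
# and target-density factors folded together), then
#     sigma_limit = mu_up / R.
# R here is a made-up placeholder, NOT a PandaX number.
R = 1.0e46                    # events per cm^2 (illustrative only)
sigma_limit = mu_up / R       # cm^2
print(mu_up, sigma_limit)
```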
https://www.thenakedscientists.com/forum/index.php?topic=65524.25
# Debunking of Time Dilation due to Relative Velocity

A.J.Hodgson • First timers • 7
« on: 20/01/2016 17:33:07 »

Consider a simple clock consisting of two mirrors A and B, between which a light pulse is bouncing. The speed of light is constant and not additive, so when moving parallel relative to the setup, the light pulse is observed as tracing out a longer, angled path. The effect cannot be a change in the constant speed of light and therefore, according to relativity, must be accredited to effects on time (dilation). However, when the relativists returned to earth and the stationary me was unexpectedly younger, it was found that I hadn't been stationary but travelling at the speed of earth's rotation + sun's rotation + galaxy's rotation etc., while they had just been travelling in a straight line. Or in other words: the speed of light is not additive - including sideways - once it is emitted from its source it travels in a straight line, but is dispersed; a constant by which the movement of the rest of the universe can be measured.

Colin2B • Global Moderator • Neilep Level Member • 2029
« Reply #1 on: 20/01/2016 19:21:44 »

You haven't debunked anything, just added extra calculation. The complicated path of earth can be calculated relative to the travelling twin, but the distance travelled is minor compared to that of the traveller who journeys light years. Relativity has nothing to do with light travelling in a straight line; that is a red herring.

and the misguided shall lead the gullible, the feebleminded have inherited the earth.

jeffreyH • Global Moderator • Neilep Level Member • 4062 • The graviton sucks
« Reply #2 on: 20/01/2016 19:36:19 »

Time dilation gets too much attention. Consider S as representing the distance traveled by light in 1 second.
However S is not a constant but an upper limit. Then we can define

$$\Delta\,=\,\frac{f(s-ds) + f(s)}{ds}$$

where the result is a value between 1 and 2. At 1 we are at the speed of light and at 2 we are stationary with respect to the background. Both excluded by relativity. Clear and simple.

Bored chemist • Neilep Level Member • 8739
« Reply #3 on: 20/01/2016 21:03:38 »

> Consider a simple clock consisting of two mirrors A and B, between which a light pulse is bouncing. Speed of light constant & not additive, so when moving parallel relative to setup light pulse observed as tracing out a longer, angled path, the effect cannot be change in the consistent speed of light, therefore, according to relativity, must be accredited to effects on time (dilation). However when the relativists returned to earth & the stationary me was unexpectedly younger, it was found that I hadn't been stationary but travelling at speed of earths rotation + suns rotation + galaxies rotation etc. while they had just been travelling in a straight line. Or in other words; speed of light is not additive - including sideways - once it is emitted from its source it travels in a straight line, but is dispersed, a constant by which the movement of rest of universe can be measured.

You have typed some words. Experiments have shown that time dilation is real and has the value predicted by relativity. What should I believe?

alysdexia • Sr. Member • 121
« Reply #4 on: 23/01/2016 04:33:29 »

> I disagree that the complicated path travelled by earth can be calculated by people who do not know what dark matter, dark energy & dark flow are, let alone find gravitational waves.

Prove it.

> Regarding straight lines - Light is energy liberated in one direction, 1 dimensional space.

If you mean the radial direction, the wave also undergoes polarization.
> My point was to even talk of stationary frame of reference is worthless, you only know of one constant, the speed of light in a vacuum, I am simply using it to also ascertain all other movement. I specifically mentioned Light as separate from the space that warps it & hence a constant that can be used to ascertain said warping. All experiments have shown is space can be warped, this takes energy, so for example, when a beam of light sent from a satellite to earth & the path is curved, the signal must either be sent earlier to account for longer curved path, or more energy used to fill the extra space of a curved path compared to a straight one.

velocity ∈ special relativity. acceleration ∈ general relativity.

Colin2B • Global Moderator • Neilep Level Member • 2029
« Reply #5 on: 24/01/2016 00:49:11 »

> I disagree that the complicated path travelled by earth can be calculated by people who do not know what dark matter, dark energy & dark flow are, let alone find gravitational waves.

When the geocentric view was commonly held, the motions of the planets relative to earth appeared extremely complex - some planets appeared to reverse direction. However, Arab mathematicians were close to solving the equations when Copernicus and Galileo changed the game with the heliocentric theory. By the time Kepler and Newton had finished with it we had a pretty accurate view of the motions of the universe, all without the use of today's supercomputers.

> Regarding straight lines - Light is energy liberated in one direction, 1 dimensional space.

Light has speed hence requires at least 2 dimensions, distance and time.

> My point was to even talk of stationary frame of reference is worthless,

That's why we don't. We talk of being at rest within an inertial, ie non-accelerating, frame of reference, and consider other motions relative to that frame. However, that frame can be moving.
Consider 2 cars travelling towards each other at a constant 30 mph. We can define car A as being at rest in an inertial frame surrounding it and moving with it. Relative to car A, car B is moving towards it at 60 mph. This concept was developed by Galileo; in fact it is named Galilean relativity in his honour. All he was lacking was knowledge of the constancy of the speed of light, and he would have discovered special relativity.

Quote
you only know of one constant, the speed of light in a vacuum, I am simply using it to also ascertain all other movement.

There are many constants, but as with the speed of light they cannot be used to ascertain all other movements.

Quote
I specifically mentioned Light as separate from the space that warps it & hence a constant that can be used to ascertain said warping.

This is accepted by relativity, but your original post was regarding time dilation, which has nothing to do with warping of spacetime.

Quote
when a beam of light sent from a satellite to earth & the path is curved, the signal must either be sent earlier to account for longer curved path,

If the satellite is geostationary then no; if it is moving relative to the earth's surface the beam will need to be aimed off, as with a rifle, in order to hit the target. Although the path is curved relative to the earth's surface, the light is still travelling in a straight line.

Quote
Prove it.

This is good advice if you want to debunk time dilation or any other part of relativity. However, you are working against a very large number of experiments which show the theories hold true, so you will have to devise and perform a series of experiments which consistently demonstrate they do not, and these experiments will need to be repeatable by anyone else. Don't shirk from this task; if you succeed, fame and fortune await you.

and the misguided shall lead the gullible, the feebleminded have inherited the earth.
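[Editor's aside] Colin2B's two-car example can be pushed one step further. Under Galilean relativity closing speeds simply add; special relativity replaces this with the velocity-addition formula $(u + v)/(1 + uv/c^2)$, which agrees with simple addition at everyday speeds but never exceeds c. A quick numerical sketch (my own illustration, not from the thread):

```python
C = 299_792_458.0  # speed of light in m/s

def galilean_add(u, v):
    """Closing speed under Galilean relativity: velocities simply add."""
    return u + v

def relativistic_add(u, v, c=C):
    """Relativistic velocity addition: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / c**2)

mph = 0.44704  # metres per second in one mile per hour
u = v = 30 * mph  # the two cars from Colin2B's example

# At everyday speeds the two formulas agree to many decimal places...
print(galilean_add(u, v) / mph)      # 60.0 mph
print(relativistic_add(u, v) / mph)  # indistinguishable from 60.0 mph

# ...but near light speed the relativistic sum stays below c.
print(relativistic_add(0.9 * C, 0.9 * C) / C)  # ~0.9945, not 1.8
```

At 30 mph the correction is of order $v^2/c^2 \approx 10^{-15}$, which is exactly why Galileo could not have noticed it.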
A.J.Hodgson • First timers • 7
Re: Debunking of Time Dilation due to Relative Velocity « Reply #6 on: 25/01/2016 17:33:08 »

Where have I posted anything that disagrees with experiment? This is about whether time or space is warped. I propose space is warped, never time/the electromagnetic wave. I have tried to demonstrate this by showing that an electromagnetic wave sent down a curved path must be either sent earlier or require more energy to fill it when compared to a straight path. Please tell me what experiment has confirmed the electromagnetic wave is given sideways additive momentum, as is proposed in the analogy of a spaceship flying off at nearly light speed and returning with a younger crew due to additive sideways momentum requiring the electromagnetic wave to have travelled a further distance than without the addition?

Colin2B • Global Moderator • Neilep Level Member • 2029
Re: Debunking of Time Dilation due to Relative Velocity « Reply #7 on: 25/01/2016 18:01:05 »

Quote from: A.J.Hodgson on 25/01/2016 17:33:08

I think you must have misunderstood what relativity is all about. There is no additive sideways momentum requiring the electromagnetic wave to have travelled a further distance. No extra energy is required. That has never been a part of special or general relativity.
« Last Edit: 25/01/2016 18:08:13 by Colin2B »
and the misguided shall lead the gullible, the feebleminded have inherited the earth.

GoC • Sr. Member • 447
Re: Debunking of Time Dilation due to Relative Velocity « Reply #8 on: 25/01/2016 21:04:27 »

Quote from: A.J.Hodgson on 25/01/2016 17:33:08

Atomic clocks, of course. You are misunderstanding the basic premise of time dilation. It is the time of reaction in different frames. Life, being a biological clock, is affected by that frame's chemical-reaction clock. Let's say the electron cycles. We will not argue about the path of the cycle, just that there is a cycle (distance). At the speed of light there is no energy left for cycling, because all of the energy is used in inertial travel. Also, this is not a linear relationship. At half the speed of light the reaction time is 86.6% of the reaction speed at relative rest: a 13.4% slower chemical reaction, relatively. At 86.6% of the speed of light the chemical reaction vs. being at rest is one half, or 50%, of the reaction speed relative to rest. Why is the reaction speed less? Because of the increased distance the electron has to travel per cycle with inertial speed. The amount of motion possible is c. Inertial speed reduces the potential cycle time at rest for the atom by increasing the relative distance through space the electron travels. The electron travel distance and the light travel distance are always relative, to measure the speed of light the same in every frame. In GR space dilates by 13.4% to have an attraction of half the speed of light. When space dilates to 100% of the speed of light there is no energy left to keep atoms apart, and a black hole forms, void of the energy of time. In SR inertial distance is relative to dilation of space caused by mass in GR.
There is an equivalence, but not equal in cause, while both are related to c. This is just what I suspect. Reality may or may not be different than this subjective interpretation.

jeffreyH • Global Moderator • Neilep Level Member • 4062 • The graviton sucks
Re: Debunking of Time Dilation due to Relative Velocity « Reply #9 on: 26/01/2016 09:22:54 »

Quote from: GoC on 25/01/2016 21:04:27

I will be getting back to you on this point. Very interesting.

puppypower • Hero Member • 573
Re: Debunking of Time Dilation due to Relative Velocity « Reply #10 on: 26/01/2016 14:02:01 »

One problem, I see, is connected to the definition of time versus how we measure time. We measure time with clocks, which use cyclic action: the second hand on the clock reaches 12 once a minute. On the other hand, time does not move in cycles; time moves to the future, never repeating, never cycling, and never going back to the beginning. The action of the clock does not parallel the nature of time, since it cycles but time does not. The clock is manmade and does not parallel time. This will create a disconnect.

One well-known phenomenon that cycles is energy: sine waves. We are using the cyclic nature of energy to describe and measure time, even though energy has wavelength (distance) and frequency (time), while time only has time. There is a conceptual disconnect. The numbers may work out, but the disconnect between the two phenomena will have an impact on how the mind equates the clock and time.

As an analogy, I can measure temperature with a hygrometer (relative humidity) if I keep pressure constant. At any given temperature and pressure, so much water vapor will be in equilibrium with its liquid. In this detached thermometer I am using the evaporation of water, at constant pressure, to measure temperature. This method will give good results, based on humidity charts.
However, conceptually, the brain will begin to theorize how temperature is somehow connected to evaporation. The cause and effect can become backwards or upside down if we use the tool's action to describe temperature. In the case of time, since clocks measure time with a cycle, similar to energy, we will attempt to explain time in terms of (distance-time): space-time, the speed of light, inertial speed, the wavelength/frequency of energy. We are explaining what happens to the clock, not time. Time does not use distance. The idea of integrated space-time stems from the cyclic clock, since time is never fully separated from distance.

Let us infer a better tool that more closely parallels time. The flow of time is far closer to the concept of entropy than to the cyclic nature of energy. Both time and entropy naturally increase (2nd law) as we move to the future. Neither time nor entropy will spontaneously cycle backwards. In chemistry, entropy is a state function, meaning a given state of matter will have a given level of entropy. The original definition of entropy was not randomness, but was based on states. "Random" appears to help the disconnect. For example, water at 25 C has an entropy value of 6.6177 J mol⁻¹ K⁻¹. The magnitude of entropy is not random, but has an exact value; the state is an average for all the units.

An entropy-based clock might be a human life, such as in the twin paradox. Entropy increase implies forward-progressing changes of state, with each state having more entropy than the last. A slowing of time will simply slow the rate of entropy. A slowing of entropy will mean the human clock takes longer to reach each successive state: it ages slower. If we use cyclic clocks instead of entropy clocks, we will explain this with distance and time (curved space-time, the movement of light, wavelength changes, etc.), since these are all connected to the units of energy-based clocks, which do not accurately portray the nature of time.
Relative to GR, when gravity acts upon matter it causes the entropy of the matter to decrease; gravitational pressure is being applied. For example, liquid water has lower entropy than water vapor. The pressure increase created by gravity will turn water vapor into liquid, lowering the entropy of the water. The result is a lowered entropy platform. This lower entropy floor sets a potential that lowers the rate of entropy gain. Our human entropy clock ages slower. Entropy and time will still increase, but they will do so slower.

Special relativity only works, in reality, if we have mass in motion. This is why Einstein defined three parameters in SR: mass, distance and time. If you use just distance and time, such as in the imagination (fly to the moon near c), nothing tangible will happen to reality. It may happen on paper, via math, but this mental exercise will not impact your human entropy clock. Your mass needs to move, since entropy is connected to states of matter. The change in relativistic mass alters the platform for entropy. Velocity will lower the platform for entropy, placing a drag on entropy. We don't go back in time, but will move slower to the future. The twin ages slower.

The two different approaches simply depend on whether you use an energy clock or an entropy clock to describe time. It is much simpler with the entropy clock, since you don't require a complicated translation process that leads to confusion, such as how pressure can alter temperature using a hygrometer thermometer.
« Last Edit: 26/01/2016 14:04:25 by puppypower »

jeffreyH • Global Moderator • Neilep Level Member • 4062 • The graviton sucks
Re: Debunking of Time Dilation due to Relative Velocity « Reply #11 on: 26/01/2016 14:55:29 »

I think the idea of using an entropy-based clock is interesting. Your point about gravity lowering entropy is very astute. What does this imply about an event horizon and the Bekenstein bound?
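[Editor's aside] For context on jeffreyH's question: the Bekenstein bound caps the entropy of a bounded physical system, and at a black-hole horizon it is saturated by the Bekenstein-Hawking entropy $S = k_B A c^3 / (4 G \hbar)$, where $A$ is the horizon area. A rough numerical sketch for a solar-mass black hole (constants rounded to four figures; the example is mine, not part of the thread):

```python
import math

# Physical constants (SI units, rounded)
G = 6.674e-11       # gravitational constant
c = 2.998e8         # speed of light
hbar = 1.055e-34    # reduced Planck constant
k_B = 1.381e-23     # Boltzmann constant
M_sun = 1.989e30    # solar mass in kg

def schwarzschild_radius(m):
    """Horizon radius r_s = 2GM/c^2 for a mass m."""
    return 2 * G * m / c**2

def bh_entropy(m):
    """Bekenstein-Hawking entropy S = k_B * A * c^3 / (4 * G * hbar),
    with A = 4*pi*r_s^2 the horizon area."""
    r_s = schwarzschild_radius(m)
    area = 4 * math.pi * r_s**2
    return k_B * area * c**3 / (4 * G * hbar)

print(schwarzschild_radius(M_sun))  # ~2.95e3 m
print(bh_entropy(M_sun))            # ~1.5e54 J/K
```

The striking point for the thread: far from decreasing at the horizon, this entropy is enormous, vastly exceeding that of the star that collapsed to form the hole.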
A.J.Hodgson • First timers • 7
Re: Debunking of Time Dilation due to Relative Velocity « Reply #12 on: 26/01/2016 18:36:02 »

vV=xE
EV=xv
Ev=xV
...

A.J.Hodgson • First timers • 7
Re: Debunking of Time Dilation due to Relative Velocity « Reply #13 on: 26/01/2016 18:47:19 »

v = frequency of electromagnetic wave
V = shortest reproducible wavelength
x = amount
E = energy

That is, frequency multiplied by distance (the shortest wavelength) gives an energy value; when distance changes the effect must be in the changing of frequency, and when the frequency changes the effect must be in distance.

GoC • Sr. Member • 447
Re: Debunking of Time Dilation due to Relative Velocity « Reply #14 on: 27/01/2016 01:21:23 »

Time is just distance of motion at c. The Planck distance at c. The shortest measurement of time is distance. Energy causes motion c.

Colin2B • Global Moderator • Neilep Level Member • 2029
Re: Debunking of Time Dilation due to Relative Velocity « Reply #15 on: 27/01/2016 12:59:56 »

Quote
That is frequency multiplied by distance-shortest wavelength

What do you mean by distance-shortest wavelength? The frequency and wavelength of an electromagnetic wave are linked by a constant, c, the speed of light.
and the misguided shall lead the gullible, the feebleminded have inherited the earth.

GoC • Sr. Member • 447
Re: Debunking of Time Dilation due to Relative Velocity « Reply #16 on: 27/01/2016 15:43:59 »

Wavelengths are affected by dilation in GR (redshift) and by inertial speed in SR (redshift). They are equivalent, but not for the same reason. The beginning and end of the wave's creation is in a greater volume of space: dilation of space in GR, distance of space in SR. The redshift for light is the end result, being equal.

A.J.Hodgson • First timers • 7
Re: Debunking of Time Dilation due to Relative Velocity « Reply #17 on: 29/01/2016 04:25:57 »

v=s <c> i=d
When distance is added to c, frequency proves the better constant for measurement of speed.
When the frequency of c remains constant, intensity proves the better constant for measurement of distance.

Colin2B • Global Moderator • Neilep Level Member • 2029
Re: Debunking of Time Dilation due to Relative Velocity « Reply #18 on: 29/01/2016 11:52:42 »

Quote
When distance added to c, frequency proves better constant for measurement of speed.

Also, what do you mean by "distance-shortest wavelength"? I have never come across this term.
and the misguided shall lead the gullible, the feebleminded have inherited the earth.

GoC • Sr. Member • 447
Re: Debunking of Time Dilation due to Relative Velocity « Reply #19 on: 29/01/2016 16:06:47 »

I would like to expand on Colin2B's question. Are you discussing SR or GR? I am also curious.

jeffreyH • Global Moderator • Neilep Level Member • 4062 • The graviton sucks
Re: Debunking of Time Dilation due to Relative Velocity « Reply #20 on: 05/02/2016 18:22:42 »

Getting back to GoC and puppypower. If gravity lowers entropy (locally only) and increasing gravitational field strength slows down time, then there looks to be an indirect relationship between time dilation and entropy. If an increase in time dilation can be tied to lower entropy, then at the event horizon of a black hole the entropy would have to decrease. Yet tidal forces would have exactly the opposite effect. Both can't be true. To resolve this issue there would need to be a velocity dependence on the state of entropy when moving through an increasing gravitational field.

puppypower • Hero Member • 573
Re: Debunking of Time Dilation due to Relative Velocity « Reply #21 on: 02/03/2016 23:20:45 »

Quote from: jeffreyH on 05/02/2016 18:22:42

Gravity not only impacts space-time; gravity also generates pressure. This pressure is separate from space-time as inferred by time. For example, at the bottom of the space-time well time runs slowest. Yet at the center of gravity, which is the same spot, this place of highest pressure defines phases of matter with the fastest frequencies: matter gets faster. There are two separate layers of time effects.

The simplest way to explain this is that gravity is an acceleration, d/t/t. Using simple dimensional analysis, acceleration is one part distance and two parts time. Space-time is one part distance and one part time. We need to account for the extra time. Pressure makes use of the second unit of time, which, in the case of the core of a star, allows the fastest material phase frequencies. This aspect of time (faster) goes the other way compared to the time in space-time (slower). In special relativity, since we deal with velocity, d/t, which is one part time and one part distance, we don't have to deal with the extra time of gravity. A relativistic spaceship will not implode due to the extreme pressure one would see if this reference were based on gravity.

As you go from space to the event horizon, space-time contracts. However, we also have that extra time due to acceleration. The pressure causes matter to assume phases of higher and higher frequency. The entropy change will depend on what these phases are. For example, if we add pressure to water vapor we can get liquid water. This change of phase from vapor to liquid, by gravitational pressure, causes a drop in entropy. As we increase the pressure even more, liquid water molecules will get closer, disrupting the low-entropy water structures that form in water.
These structures depend on the hydrogen bonds being somewhat stretched. If we push these bonds closer, this disrupts the order and causes the entropy of the liquid water to increase. Theoretically, pressure can make entropy go both ways, based on the nature of the new phases and what happens in those phases. Fusion, by binding two mobile atoms into one heavier and slower atom, lowers entropy. If we keep adding pressure, we can make the sub-particles inside the nuclei get too close, which may result in larger sub-particle composites with even higher entropy. This may occur when we exceed neutron density.

In the case of the event horizon, it all depends on the phases due to pressure and materials. One possible scenario is, say, the pressure causes phases with extreme frequency and increasing entropy. What does this spectrum look like? If we red-shift this extreme spectrum so it is larger, it still may be too small to see or detect. The percent difference in the space-time well of the sun is much tighter than the material frequency difference from the surface to the core. We may need an extreme-pressure sub-particle film to see it. What is smaller than gamma?
« Last Edit: 02/03/2016 23:23:56 by puppypower »

jeffreyH • Global Moderator • Neilep Level Member • 4062 • The graviton sucks
Re: Debunking of Time Dilation due to Relative Velocity « Reply #22 on: 03/03/2016 09:53:57 »

That is very interesting. There are two points of significance at the event horizon: the light-like orbital and the horizon itself. The radial distance between the two will vary with the size of the black hole. However, the change in acceleration between these two points will not vary; otherwise light speed will be violated. This is the volume of space-time that is most important when considering entropy.
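[Editor's aside] As a reference point for the whole thread: the two experimentally confirmed dilation factors under dispute have simple closed forms. In SR a moving clock runs at $\sqrt{1 - v^2/c^2}$ of the rest rate (these are the 86.6% and 50% figures GoC quotes), and in the Schwarzschild geometry a static clock at radius $r$ runs at $\sqrt{1 - r_s/r}$, going to zero at the horizon. A small sketch (my own, illustrative only):

```python
import math

def sr_time_factor(v, c=1.0):
    """Rate of a clock moving at speed v relative to one at rest:
    sqrt(1 - v^2/c^2), the reciprocal of the Lorentz factor."""
    return math.sqrt(1 - (v / c)**2)

def gr_time_factor(r, r_s):
    """Rate of a static clock at Schwarzschild coordinate r outside a
    mass with horizon radius r_s: sqrt(1 - r_s/r)."""
    return math.sqrt(1 - r_s / r)

# GoC's numbers from reply #8: at 0.5c a clock runs at 86.6% rate,
# and at 0.866c it runs at 50%.
print(sr_time_factor(0.5))    # 0.8660...
print(sr_time_factor(0.866))  # ~0.500

# Gravitational dilation is mild far from the mass and total at the horizon.
print(gr_time_factor(10.0, 1.0))      # ~0.9487
print(gr_time_factor(1.000001, 1.0))  # ~0.001, nearly frozen
```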
# How do I use the quadratic formula to solve 125x^3 - 1 = 0?

Jul 25, 2014

Hopefully you know what "difference of cubes" is, since this problem involves that method before you can use the quadratic formula. You'll see what I mean.

To begin, because you are subtracting, you know the difference of cubes should be in the form $\left({x}^{3} - {y}^{3}\right) = \left(x - y\right) \left({x}^{2} + x y + {y}^{2}\right)$.

To make this problem simpler, get your given equation in the form $\left({x}^{3} - {y}^{3}\right)$. You should know that ${5}^{3}$ is $125$ and ${1}^{3}$ is $1$, so rewrite the problem as follows:

${\left(5 x\right)}^{3} - {\left(1\right)}^{3}$

Now you know that $5 x$ is the $x$ value of the difference of cubes formula, and that $1$ is the $y$ value. So sub these values into the formula $\left(x - y\right) \left({x}^{2} + x y + {y}^{2}\right)$ like so:

$= \left(5 x - 1\right) \left[{\left(5 x\right)}^{2} + \left(5 x\right) \left(1\right) + {\left(1\right)}^{2}\right]$

Simplify it all:

$\left(5 x - 1\right) \left(25 {x}^{2} + 5 x + 1\right)$

Now, because the bracket on the left is linear (just $x$ to the power of 1), you can solve it:

$5 x - 1 = 0$
$5 x = 1$
$x = \frac{1}{5}$

That's your first solution. Now you use the quadratic formula, $x = \frac{- b \pm \sqrt{{b}^{2} - 4 a c}}{2 a}$, to solve the other bracket, since it is quadratic. Sub in your values from $25 {x}^{2} + 5 x + 1$: $a = 25$, $b = 5$, and $c = 1$. Now solve:

WAIT! If you check the discriminant (under the root sign), you'll notice that it comes out negative: ${b}^{2} - 4 a c = 25 - 100 = - 75$. This means the quadratic contributes no real roots! If the discriminant was NOT NEGATIVE, you would solve for the roots as usual and you would have 3 real roots! As such, the one root you got before ($x = \frac{1}{5}$) is the only real root!

Hopefully you understood all this and hopefully I was of some help!
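The answer can be checked numerically: the factorization gives the real root $x = 1/5$, and applying the quadratic formula to $25 {x}^{2} + 5 x + 1$ with its negative discriminant yields the two remaining complex roots. A quick illustrative check (not part of the original answer):

```python
import cmath

# 125x^3 - 1 = 0 factors as (5x - 1)(25x^2 + 5x + 1) = 0
a, b, c = 25, 5, 1

disc = b**2 - 4 * a * c
print(disc)  # -75: negative, so the quadratic has no real roots

# The quadratic formula still gives the two complex roots.
sqrt_disc = cmath.sqrt(disc)
r1 = (-b + sqrt_disc) / (2 * a)
r2 = (-b - sqrt_disc) / (2 * a)

# All three roots satisfy the original cubic (up to floating-point error).
for x in (1 / 5, r1, r2):
    assert abs(125 * x**3 - 1) < 1e-9
```

The two complex roots come out to $-0.1 \pm 0.1732 i$, and cubing either of them and multiplying by 125 lands back on 1, as the loop confirms.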