http://drorbn.net/index.php?title=Notes_for_wClips-120201/0:43:15
# Notes for wClips-120201/0:43:15 Peter is failing to tell us that $P\!v\!B_n$ is indeed quadratic, and that this is his theorem - see his paper, The Pure Virtual Braid Group Is Quadratic, arXiv:1110.2356. However our lecture series goes in a different direction... --Drorbn 19:35, 1 February 2012 (EST)
2022-11-29 10:46:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8568445444107056, "perplexity": 1560.2042766797208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00013.warc.gz"}
https://chemistry.stackexchange.com/revisions/31931/2
1. In general, acid + $$\ce{H_2O} \rightleftharpoons$$ base + $$\ce{H_3O^+}$$ 2. It is called the Henderson-Hasselbalch equation $$pH = pK_a + \log\frac{[base]}{[acid]}$$ 3. I believe the equilibrium constant for such a neutralisation would look like $$K_n = K_aK_b\frac{1}{K_w}$$
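As a quick numerical check of item 2, here is a short script (the acetate pKa of 4.76 and the concentrations are example values chosen for illustration, not taken from the answer above):

```python
import math

def henderson_hasselbalch(pka, base_conc, acid_conc):
    """pH of a buffer from pKa and the base/acid concentration ratio."""
    return pka + math.log10(base_conc / acid_conc)

# Equal concentrations give pH = pKa, and a 10:1 base:acid ratio
# raises the pH by exactly one unit.
print(henderson_hasselbalch(4.76, 0.1, 0.1))  # 4.76
print(henderson_hasselbalch(4.76, 1.0, 0.1))  # 5.76
```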
2019-10-15 02:44:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039212465286255, "perplexity": 994.4844855131241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655735.13/warc/CC-MAIN-20191015005905-20191015033405-00074.warc.gz"}
https://elibm.org/article/10012134
## Reprint: Zur Approximation algebraischer Zahlen. II: Über die Anzahl der Darstellungen ganzer Zahlen durch Binärformen (1933) ##### Doc. Math. Extra Vol. Mahler Selecta, 367-386 (2019) DOI: 10.25537/dm.2019.SB-367-386 ### Summary Extending his work in Part I, Mahler now shows that the number of representations of a rational integer $g$ by a binary form $F(x,y)$ is at most $O(|g|^{\varepsilon})$, where $\varepsilon$ is any arbitrarily small positive constant. Reprint of the author's paper [Math. Ann. 108, 37--55 (1933; Zbl 0006.15604; JFM 39.0269.01)]. For Part I see [Zbl 1465.11012]. MSC: 11-03, 11J68
2023-02-01 16:47:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8789296746253967, "perplexity": 3476.164138888702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00677.warc.gz"}
https://www.groundai.com/project/large-cuts-with-local-algorithms-on-triangle-free-graphs/
1 Introduction Large Cuts with Local Algorithms on Triangle-Free Graphs Juho Hirvonen Helsinki Institute for Information Technology HIIT, Department of Information and Computer Science, Aalto University, Finland juho.hirvonen@aalto.fi Joel Rybicki Helsinki Institute for Information Technology HIIT, Department of Information and Computer Science, Aalto University, Finland joel.rybicki@aalto.fi Stefan Schmid TU Berlin & T-Labs, Germany stefan@net.t-labs.tu-berlin.de Jukka Suomela Helsinki Institute for Information Technology HIIT, Department of Information and Computer Science, Aalto University, Finland jukka.suomela@aalto.fi Abstract. We study the problem of finding large cuts in $d$-regular triangle-free graphs. In prior work, Shearer (1992) gives a randomised algorithm that finds a cut of expected size $(1/2 + 0.177/\sqrt{d})m$, where $m$ is the number of edges. We give a simpler algorithm that does much better: it finds a cut of expected size $(1/2 + 0.28125/\sqrt{d})m$. As a corollary, this shows that in any $d$-regular triangle-free graph there exists a cut of at least this size. Our algorithm can be interpreted as a very efficient randomised distributed algorithm: each node needs to produce only one random bit, and the algorithm runs in one synchronous communication round. This work is also a case study of applying computational techniques in the design of distributed algorithms: our algorithm was designed by a computer program that searched for optimal algorithms for small values of $d$. ## 1 Introduction We study the problem of finding large cuts in triangle-free graphs. In particular, we are interested in the design of fast and simple randomised distributed algorithms. ### 1.1 Random Cuts Let $G = (V, E)$ be a simple undirected graph. A cut is a function $c\colon V \to \{a, b\}$ that labels the nodes with symbols $a$ and $b$. An edge $\{u, v\} \in E$ is a cut edge if $c(u) \ne c(v)$. We use the convention that the weight $w(c)$ of a cut is the fraction of edges that are cut edges; that is, the weight of the cut is normalised so that it is in the range $[0, 1]$. See Figure 1 for an illustration. 
While the problem of finding a maximum cut (or a good approximation of one) is NP-hard [4, 12, 5, 16, 7], there is a very simple randomised algorithm that finds a relatively large cut: for each node $v$, pick $c(v) \in \{a, b\}$ independently and uniformly at random. We say that $c$ is a uniform random cut. In a uniform random cut, each edge is a cut edge with probability $1/2$. It follows that the expected weight of a uniform random cut is also $1/2$. ### 1.2 Regular Triangle-Free Graphs In general graphs, we cannot expect to find cuts that are much better than uniform random cuts. For example, in a complete graph on $n$ nodes, the weight of any cut is at most $1/2 + O(1/n)$. However, there is a family of graphs that makes for a much more interesting case from the perspective of the max-cut problem: regular triangle-free graphs. Erdős [2] raised the problem of estimating the minimum possible size of a maximum cut in a high-girth graph, and especially the case of triangle-free graphs attracted much interest from the research community [15, 13, 1]. Accordingly, from now on, we assume that $G$ is a $d$-regular graph for some constant $d$, and that there are no triangles (cycles of length three) in $G$. While focusing on regular triangle-free graphs may seem overly restrictive, our algorithm can be applied in a much more general setting; we will briefly discuss extensions in Section 3. ### 1.3 Shearer’s Algorithm In triangle-free graphs, it is easy to find cuts that are (in expectation) larger than uniform random cuts. Nevertheless, a uniform random cut is a good starting point. Shearer’s [15] algorithm proceeds as follows. Pick three uniform random cuts $c_1$, $c_2$, and $c_3$. For each node $v$, let $$\ell(v) = \bigl|\{u : \{v,u\} \in E,\ c_1(v) = c_1(u)\}\bigr|$$ be the number of like-minded neighbours in $c_1$. Then the output of a node is $$c(v) = \begin{cases} c_1(v) & \text{if } \ell(v) < d/2, \\ c_3(v) & \text{if } \ell(v) = d/2, \\ c_2(v) & \text{if } \ell(v) > d/2. \end{cases} \quad (1)$$ Put otherwise, a node follows $c_1$ if it seems that there are many cut edges w.r.t. $c_1$ in its immediate neighbourhood, and it falls back to another cut $c_2$ otherwise. The value $c_3$ is just used as a random tie-breaker. 
Shearer [15] shows that the expected weight of cut (1) is at least $$\frac{1}{2} + \frac{\sqrt{2}}{8\sqrt{d}} \approx \frac{1}{2} + \frac{0.177}{\sqrt{d}} \quad (2)$$ in $d$-regular triangle-free graphs. ### 1.4 Our Algorithm Shearer’s algorithm can be characterised as follows: take a uniform random cut and then improve it with the help of a randomised rule described in (1). In this work, we show that we can do much better with the help of a simple deterministic rule. In our algorithm we pick one uniform random cut $c_1$. Again, each node counts the number of like-minded neighbours $$\ell(v) = \bigl|\{u : \{v,u\} \in E,\ c_1(v) = c_1(u)\}\bigr|.$$ We define the threshold $$\tau = \left\lceil \frac{d + \sqrt{d}}{2} \right\rceil. \quad (3)$$ Now the output of a node is simply $$c(v) = \begin{cases} c_1(v) & \text{if } \ell(v) < \tau, \\ -c_1(v) & \text{if } \ell(v) \ge \tau. \end{cases} \quad (4)$$ Here $-c$ is the complement of $c$, that is, $-a = b$ and $-b = a$. In the algorithm each node simply changes its mind if it seems that there are too many like-minded neighbours. It is not obvious that such a rule makes sense, or that this particular choice of $\tau$ is good. Nevertheless, we show in this work that the expected weight of cut (4) is at least $$\frac{1}{2} + \frac{9}{32\sqrt{d}} = \frac{1}{2} + \frac{0.28125}{\sqrt{d}}, \quad (5)$$ which is much larger than Shearer’s bound (2), at least in low-degree graphs. As a corollary, any $d$-regular triangle-free graph admits a cut of at least this size. Our algorithm can be implemented very efficiently in a distributed setting: each node only needs to produce one random bit, and the algorithm only requires one communication round. In Shearer’s algorithm each node has to produce up to three random bits. Perhaps the most interesting feature of the algorithm is that it was not designed by a human being; it was discovered by a computer program. Indeed, cuts in triangle-free graphs serve as an example of a computational problem in which computer-aided methods can be used to partially automate algorithm design and analysis (this process is also known as “algorithm synthesis” or “protocol synthesis”). There is a wide range of other graph problems in which a similar approach has a lot of potential as a shortcut to the discovery of new distributed algorithms. 
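The one-round rule (3)–(4) is easy to simulate. The sketch below runs it on the complete bipartite graph $K_{4,4}$, which is $4$-regular and triangle-free; the choice of test graph, trial count, and seed are ours, not the paper's:

```python
import math
import random

def one_round_cut(adj, d, rng):
    """One round of the algorithm: flip iff at least tau neighbours are like-minded."""
    tau = math.ceil((d + math.sqrt(d)) / 2)      # threshold (3)
    c1 = {v: rng.randrange(2) for v in adj}      # uniform random cut
    return {v: 1 - c1[v]                          # rule (4): complement the bit
            if sum(c1[u] == c1[v] for u in adj[v]) >= tau
            else c1[v]
            for v in adj}

def expected_cut_weight(adj, d, trials=2000, seed=0):
    """Monte Carlo estimate of the expected fraction of cut edges."""
    rng = random.Random(seed)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    total = 0.0
    for _ in range(trials):
        c = one_round_cut(adj, d, rng)
        total += sum(c[u] != c[v] for u, v in edges) / len(edges)
    return total / trials

# K_{4,4}: left nodes 0..3, right nodes 4..7.
k44 = {v: list(range(4, 8)) if v < 4 else list(range(4)) for v in range(8)}
w = expected_cut_weight(k44, d=4)
print(w)  # should be close to 1/2 + 9/64 = 0.640625, per bound (5) and Lemma 2
```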
In Section 2, we outline the procedure that we used to design the algorithm, and then present an analysis of its performance. In Section 3 we discuss how to apply the algorithm in a more general setting beyond regular triangle-free graphs. ## 2 Algorithm Design and Analysis We begin this section with an informal overview of so-called neighbourhood graphs. The formal definitions that we use in this work are given after that. ### 2.1 Neighbourhood Graphs in Prior Work In the context of distributed systems, the radius-$r$ neighbourhood of a node $v$ refers to all information that node $v$ may gather in $r$ communication rounds. Depending on the model of computation that we use, this may include all nodes that are within distance $r$ from $v$, the edges incident to these nodes, their local inputs, and the random bits that these nodes have generated. The idea is that whatever decision node $v$ takes, it can only depend on its radius-$r$ neighbourhood: any distributed algorithm that runs in $r$ communication rounds can be interpreted as a mapping from local neighbourhoods to local outputs. A neighbourhood graph $\mathcal{N}_r$ is a graph representation of all possible radius-$r$ neighbourhoods that a distributed algorithm may encounter. Each node $N$ of the neighbourhood graph corresponds to a possible local neighbourhood: there is at least one communication network in which some node has a local neighbourhood isomorphic to $N$. We have an edge $\{N_1, N_2\}$ in the neighbourhood graph if there is some communication network in which nodes with local neighbourhoods $N_1$ and $N_2$ are adjacent; see Figure 2 for an example. Neighbourhood graphs are a convenient concept in the study of graph colouring algorithms, both from the perspective of traditional algorithm design [10, 11, 6, 9, 3] and from the perspective of computational algorithm design [14]. The key observation is that the following two statements are equivalent: • $g$ is a proper $k$-colouring of the neighbourhood graph $\mathcal{N}_r$, • $g$ is a distributed algorithm that finds a proper $k$-colouring in $r$ rounds. 
To see this, consider any graph $G$. If nodes $u$ and $v$ are adjacent in $G$, then their local views $N_u$ and $N_v$ are adjacent in $\mathcal{N}_r$, and by assumption $g$ assigns a different colour to $N_u$ and $N_v$. Hence distributed algorithm $g$ finds a proper $k$-colouring of $G$. Conversely, if algorithm $g$ finds a proper colouring in any communication network, it defines a proper colouring of $\mathcal{N}_r$. In summary, colourings of the neighbourhood graph correspond to distributed algorithms for graph colouring, and vice versa. In general, a similar property does not hold for arbitrary graph problems. For example, there is no one-to-one correspondence between maximal independent sets of $\mathcal{N}_r$ and distributed algorithms that find maximal independent sets [14, Section 8.5]. However, as we will see in this work, we can use neighbourhood graphs also in the context of the maximum cut problem. It turns out that we can define a weighted version of neighbourhood graphs, so that there is a one-to-one correspondence between heavy cuts in the weighted neighbourhood graph, and randomised distributed algorithms that find large cuts in expectation. ### 2.2 Model of Distributed Computing Next, we formalise the model of distributed computing that is sufficient for the purposes of our algorithm. Fix the parameter $d$; recall that we are interested in $d$-regular triangle-free graphs. Let $G$ be such a graph, and let $c$ be a uniform random cut in $G$. The local neighbourhood of a node $v$ is $N_c(v) = (c(v), \ell_c(v))$, where $$\ell_c(v) = \bigl|\{u : \{v,u\} \in E,\ c(v) = c(u)\}\bigr|$$ is the number of neighbours with the same random bit. Note that there are only $2(d+1)$ possible local neighbourhoods. A distributed algorithm is a function $A$ that associates an output $A(N)$ with each local neighbourhood $N$. For any $d$-regular triangle-free graph $G$, function $A$ defines a randomised process that produces a random cut $c'$ as follows: 1. Pick a uniform random cut $c$. 2. For each node $v$, let $c'(v) = A(N_c(v))$. We use the notation $A(G)$ for the random cut produced by algorithm $A$ in graph $G$. In particular, we are interested in the quantity $\mathbb{E}[w(A(G))]$, the expected weight of cut $A(G)$. 
A priori, we might expect that $\mathbb{E}[w(A(G))]$ would depend on $G$. However, as we will soon see, this is not the case; it only depends on parameter $d$ and algorithm $A$. ### 2.3 Weighted Neighbourhood Graph A weighted digraph is a pair $(V, w)$ with $w\colon V \times V \to [0, \infty)$. Here $V$ is the set of nodes, and $w$ associates a non-negative weight $w(u,v)$ with each directed edge $(u,v)$. Let $c$ be a cut in weighted digraph $(V, w)$. The weight of cut $c$ is $$w(c) = \sum_{(u,v) \in V \times V,\ c(u) \ne c(v)} w(u,v),$$ the total weight of all cut edges. The weighted neighbourhood graph $\mathcal{N} = (V_N, w_N)$ is a weighted digraph defined as follows (see Figure 3 for an illustration). The set of nodes $$V_N = \{(k, i) : k \in \{a, b\},\ i \in \{0, 1, \ldots, d\}\}$$ consists of all possible neighbourhoods that we may encounter in $d$-regular triangle-free graphs. We define the edge weights as follows: $$w_N((k_1, i_1), (k_2, i_2)) = \begin{cases} \dfrac{1}{4^d} \dbinom{d-1}{i_1} \dbinom{d-1}{i_2} & \text{if } k_1 \ne k_2, \\[1ex] \dfrac{1}{4^d} \dbinom{d-1}{i_1-1} \dbinom{d-1}{i_2-1} & \text{if } k_1 = k_2. \end{cases}$$ We follow the convention that $\binom{n}{k} = 0$ for $k < 0$ and $k > n$. Note that the weights are symmetric, and the total weight of all edges is $1$. The following lemma shows that the weight of the edge $(N_1, N_2)$ in the neighbourhood graph equals the probability of “observing” adjacent neighbourhoods of types $N_1$ and $N_2$; see Figure 4. Note that the probability does not depend on the choice of graph $G$ or edge $\{u,v\}$. ###### Lemma 1. Let $G$ be a $d$-regular triangle-free graph, and let $\{u,v\}$ be an edge of $G$. Consider a uniform random cut $c$ of $G$. Then for any given neighbourhoods $N_1, N_2 \in V_N$ we have $$\Pr[N_c(u) = N_1 \text{ and } N_c(v) = N_2] = w_N(N_1, N_2).$$ ###### Proof. In what follows, we will denote the neighbours of $u$ by $S_u \cup \{v\}$ where $|S_u| = d - 1$. Similarly, the neighbours of $v$ are $S_v \cup \{u\}$ where $|S_v| = d - 1$. As $G$ is triangle-free, sets $S_u$ and $S_v$ are disjoint. In particular, the random variables $c(x)$ for $x \in \{u, v\} \cup S_u \cup S_v$ are independent. Let $N_1 = (k_1, i_1)$ and $N_2 = (k_2, i_2)$. There are two cases. First assume that $k_1 = k_2$. Then $$\begin{aligned} \Pr[N_c(u) = N_1 \text{ and } N_c(v) = N_2] &= \Pr[c(u) = k_1 \text{ and } c(v) = k_2] \cdot \Pr[|\{y \in S_u : c(y) = k_1\}| = i_1 - 1] \cdot \Pr[|\{y \in S_v : c(y) = k_2\}| = i_2 - 1] \\ &= \frac{1}{4} \cdot \frac{1}{2^{d-1}} \binom{d-1}{i_1-1} \cdot \frac{1}{2^{d-1}} \binom{d-1}{i_2-1} = w_N(N_1, N_2). \end{aligned}$$ Second, assume that $k_1 \ne k_2$. 
Then $$\begin{aligned} \Pr[N_c(u) = N_1 \text{ and } N_c(v) = N_2] &= \Pr[c(u) = k_1 \text{ and } c(v) = k_2] \cdot \Pr[|\{y \in S_u : c(y) = k_1\}| = i_1] \cdot \Pr[|\{y \in S_v : c(y) = k_2\}| = i_2] \\ &= \frac{1}{4} \cdot \frac{1}{2^{d-1}} \binom{d-1}{i_1} \cdot \frac{1}{2^{d-1}} \binom{d-1}{i_2} = w_N(N_1, N_2). \qquad\blacksquare \end{aligned}$$ ### 2.4 Cuts in Neighbourhood Graphs Any function $A\colon V_N \to \{a, b\}$ can be interpreted in two ways: 1. A cut of weight $w_N(A)$ in the weighted neighbourhood graph $\mathcal{N}$. 2. A distributed algorithm that finds a cut in any $d$-regular triangle-free graph: the algorithm picks a uniform random cut $c$, and then node $v$ outputs $A(N_c(v))$. The following lemma shows that the two interpretations are closely related: if $A$ is a cut of weight $w_N(A)$ in neighbourhood graph $\mathcal{N}$, then it immediately gives us a distributed algorithm that finds a cut of expected weight $w_N(A)$ in any $d$-regular triangle-free graph. ###### Lemma 2. If $A$ is a cut in neighbourhood graph $\mathcal{N}$, and $G$ is a $d$-regular triangle-free graph, then $\mathbb{E}[w(A(G))] = w_N(A)$. ###### Proof. Fix a graph $G$ and an edge $\{u,v\}$ of $G$. By Lemma 1, the probability that $\{u,v\}$ is a cut edge in $A(G)$ equals $w_N(A)$. The claim follows by summing over all edges of $G$. ∎ ### 2.5 Computational Algorithm Design Now we have all the tools that we need. Lemma 2 gives a one-to-one correspondence between large cuts of the neighbourhood graph and distributed algorithms that find large cuts. For any fixed value of $d$, the task of designing a distributed algorithm is now straightforward: 1. Construct the weighted neighbourhood graph $\mathcal{N}$. 2. Find a heavy cut $A$ in $\mathcal{N}$. See Figure 5 for an example. For $d = 4$, the heaviest cut of $\mathcal{N}$ is $$A_{\mathrm{opt}}((k, i)) = \begin{cases} k & \text{if } i < 3, \\ -k & \text{if } i \ge 3. \end{cases} \quad (6)$$ This is also the best possible algorithm for this value of $d$, for the model of computing that we defined in Section 2.2. ###### Remark 1. The reader may want to compare (6) with Section 1.4. For $d = 4$, the algorithms are identical, albeit with a slightly different notation. Note that $\lceil (4 + \sqrt{4})/2 \rceil = 3$. Of course finding a maximum-weight cut is hard in the general case. However, in this particular case neighbourhood graphs are relatively small (only $2(d+1)$ nodes). While the smallest cases could be easily solved with brute force, slightly more refined approaches are helpful for moderate values of $d$. We took the following approach. 
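The smallest cases can indeed be brute-forced in a few lines. The sketch below (our own illustration, fixing $d = 4$) constructs the weighted neighbourhood graph with the weights from Section 2.3 and tries all $2^{2(d+1)} = 1024$ labelings; the heaviest cut should have weight $41/64 = 1/2 + 9/64$, attained by the threshold rule (6):

```python
from fractions import Fraction
from itertools import product
from math import comb

def binom(n, k):
    """Binomial coefficient with the convention C(n, k) = 0 for k < 0 or k > n."""
    return comb(n, k) if 0 <= k <= n else 0

def neighbourhood_graph(d):
    """Weighted neighbourhood graph: nodes (k, i), k in {a, b}, i in 0..d."""
    nodes = [(k, i) for k in "ab" for i in range(d + 1)]
    scale = Fraction(1, 4 ** d)
    w = {}
    for (k1, i1) in nodes:
        for (k2, i2) in nodes:
            if k1 != k2:
                w[(k1, i1), (k2, i2)] = scale * binom(d - 1, i1) * binom(d - 1, i2)
            else:
                w[(k1, i1), (k2, i2)] = scale * binom(d - 1, i1 - 1) * binom(d - 1, i2 - 1)
    return nodes, w

def max_cut(d):
    """Exhaustive search for the maximum-weight cut of the neighbourhood graph."""
    nodes, w = neighbourhood_graph(d)
    assert sum(w.values()) == 1  # the weights form a probability distribution
    best = Fraction(0)
    for labels in product([0, 1], repeat=len(nodes)):
        lab = dict(zip(nodes, labels))
        best = max(best, sum(wt for (u, v), wt in w.items() if lab[u] != lab[v]))
    return best

print(max_cut(4))  # 41/64, i.e. 1/2 + 9/64 = 0.640625
```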
First, we reduced the max-weight-cut instance to a max-weight-SAT instance in a straightforward manner: • For each node $u \in V_N$ we have a Boolean variable $x_u$ in formula $\varphi$. • For each edge $(u,v)$ of weight $w(u,v)$ we have two clauses in formula $\varphi$, both of weight $w(u,v)$: $$x_u \lor x_v \quad\text{and}\quad \lnot x_u \lor \lnot x_v.$$ Note that at least one of these clauses is always satisfied, while both of them are satisfied if and only if $x_u$ and $x_v$ have different values. Now it is easy to see that a variable assignment of $\varphi$ that maximises the total weight of satisfied clauses also gives a maximum-weight cut $c$ in $\mathcal{N}$: let $c(u) = a$ iff $x_u$ is true. More precisely, the total weight of the clauses satisfied by the assignment is $W + w(c)$, where $W$ is the total weight of all edges. With this reduction, we can then resort to off-the-shelf max-weight-SAT solvers. In our experiments we used the akmaxsat solver [8]; with it we can solve the relevant cases very quickly (e.g., a single case takes less than 5 seconds on a low-end laptop). Surprisingly, in all cases the max-weight cut has the following simple structure: $$A_\tau((k, i)) = \begin{cases} k & \text{if } i < \tau, \\ -k & \text{if } i \ge \tau. \end{cases} \quad (7)$$ The exact values of $\tau$ for the heaviest cuts are given in Table 1; note that all values are slightly larger than $d/2$. ### 2.6 Generalisation Now it is easy to generalise the findings: we can make the educated guess that algorithms of form (7) are good also in the case of a general $d$. All we need to do is to find a general expression for the threshold $\tau$, and prove that algorithm $A_\tau$ indeed works well in the general case. To facilitate algorithm analysis, let us define the shorthand notation $\alpha(\tau, d) = w_N(A_\tau)$ for the performance of algorithm $A_\tau$. It is easy to see that $\alpha(d+1, d) = \alpha(0, d) = 1/2$, as the threshold value $\tau = d + 1$ simply means that algorithm $A_\tau$ outputs a uniform random cut, while $\tau = 0$ means that $A_\tau$ outputs the complement of the uniform random cut. The general shape of $\alpha$ is illustrated in Figure 6. We are interested in the region $d/2 < \tau \le d$, where $\alpha(\tau, d) > 1/2$. In the following, we derive a relatively simple expression for $\alpha(\tau, d)$ in this region; the proof strategy is inspired by Shearer [15]. ###### Lemma 3. For all $d$ and $d/2 < \tau \le d$ we have $$\alpha(\tau, d) = \frac{1}{2} + \frac{1}{4^{d-1}} \binom{d-1}{\tau-1} \sum_{i=d-\tau+1}^{\tau-1} \binom{d-1}{i}.$$ 
###### Proof. Fix a triangle-free $d$-regular graph $G$. Recall that $c$ is a uniform random cut, $N_c(v)$ is the local neighbourhood of node $v$, and $A_\tau(N_c(v))$ is the output of algorithm $A_\tau$ at node $v$. Consider an edge $\{u,v\}$ of $G$. We will calculate the probability that $\{u,v\}$ is a cut edge. To this end, define $$\begin{aligned} p &= \Pr[c(u) \ne c(v) \text{ and } \ell_c(u), \ell_c(v) \ge \tau], \\ q &= \Pr[c(u) \ne c(v) \text{ and } \ell_c(u), \ell_c(v) < \tau], \\ r &= \Pr[c(u) = c(v) \text{ and either } \ell_c(u) < \tau \le \ell_c(v) \text{ or } \ell_c(v) < \tau \le \ell_c(u)]. \end{aligned}$$ These are precisely the cases in which $A_\tau(N_c(u)) \ne A_\tau(N_c(v))$; hence $\{u,v\}$ is a cut edge with probability $p + q + r$. For each $x \in \{u, v\}$, let $$\begin{aligned} p_x &= \Pr[\ell_c(x) \ge \tau \mid c(u) \ne c(v)], \\ q_x &= \Pr[\ell_c(x) < \tau \mid c(u) \ne c(v)], \\ r_x &= \Pr[\ell_c(x) \ge \tau \mid c(u) = c(v)]. \end{aligned}$$ Now we have the following identities: $$p = \tfrac{1}{2} p_u p_v, \qquad q = \tfrac{1}{2} q_u q_v, \qquad r = \tfrac{1}{2} \bigl( r_v (1 - r_u) + r_u (1 - r_v) \bigr).$$ By definition, $p_x + q_x = 1$, and by symmetry, $p_u = p_v$, $q_u = q_v$, and $r_u = r_v$. Hence the probability that $\{u,v\}$ is a cut edge is $$p + q + r = \tfrac{1}{2} p_u^2 + \tfrac{1}{2} q_u^2 + r_u(1 - r_u) = \tfrac{1}{2} + p_u(p_u - 1) + r_u(1 - r_u) = \tfrac{1}{2} - p_u q_u + r_u(p_u + q_u - r_u) = \tfrac{1}{2} + (r_u - p_u)(q_u - r_u). \quad (8)$$ An argument similar to what we used in Lemma 1 gives $$p_u = \frac{1}{2^{d-1}} \sum_{i=\tau}^{d-1} \binom{d-1}{i}, \qquad q_u = \frac{1}{2^{d-1}} \sum_{i=0}^{\tau-1} \binom{d-1}{i}, \qquad r_u = \frac{1}{2^{d-1}} \sum_{i=\tau-1}^{d-1} \binom{d-1}{i}.$$ Recall that we assumed that $d/2 < \tau \le d$; hence, using the symmetry of the binomial coefficients, $$2^{d-1}(r_u - p_u) = \binom{d-1}{\tau-1}, \qquad 2^{d-1}(q_u - r_u) = \sum_{i=0}^{\tau-1} \binom{d-1}{i} - \sum_{i=0}^{d-\tau} \binom{d-1}{i} = \sum_{i=d-\tau+1}^{\tau-1} \binom{d-1}{i}.$$ From (8) we therefore obtain $$p + q + r = \frac{1}{2} + \frac{1}{4^{d-1}} \binom{d-1}{\tau-1} \sum_{i=d-\tau+1}^{\tau-1} \binom{d-1}{i}. \qquad\blacksquare$$ Now we can easily find an optimal threshold $\tau$ for any given $d$: simply try all possible values and apply Lemma 3. Figure 7 is a plot of the optimal $\tau$ for small $d$. At least for small values of $d$, it appears that $$\tau \approx \frac{d+1}{2} + 0.439\sqrt{d}$$ is close to the optimum. For notational convenience, we pick a slightly larger value $$\tau = \left\lceil \frac{d + \sqrt{d}}{2} \right\rceil.$$ Now we have arrived at the algorithm that we already described in Section 1.4. What remains is a proof of the performance guarantee (5). Figure 8 gives some intuition on how good the bounds are. ###### Theorem 4. Let $d \ge 2$ and $\tau = \lceil (d + \sqrt{d})/2 \rceil$. Then $$\alpha(\tau, d) \ge \frac{1}{2} + \frac{9}{32\sqrt{d}}.$$ ###### Proof. See Appendix A. ∎ ## 3 Conclusions In this work, we have presented a new randomised distributed algorithm for finding large cuts. 
The key observation was that the task of designing randomised distributed algorithms for finding large cuts can be reduced to the problem of finding a max-weight cut in a weighted neighbourhood graph. This way we were able to use computers to find optimal algorithms for small values of $d$. The general form of the optimal algorithms was apparent, and hence the results were easy to generalise. Our algorithm was designed for $d$-regular triangle-free graphs. However, it can be easily applied in a much more general setting as well. To see this, recall that $\alpha(\tau, d)$ is not only the expected weight of the cut, but it is also the probability that any individual edge is a cut edge. The analysis only assumes that $u$ and $v$ are of degree $d$ and they do not have a common neighbour. Hence we have the following immediate generalisations. 1. Our algorithm can be applied in triangle-free graphs of maximum degree $d$ as follows: a node of degree $d' < d$ simulates the behaviour of $d - d'$ missing neighbours. We still have the same guarantee that each original edge is a cut edge with probability $1/2 + 9/(32\sqrt{d})$. The running time of the algorithm is still one communication round; however, some nodes need to produce more random bits. 2. Our algorithm can also be applied in any graph, even in those that contain triangles. Now our analysis shows that each edge that is not part of a triangle will be a cut edge with probability $1/2 + 9/(32\sqrt{d})$. This observation already gives a simple bound: if at most a fraction $\epsilon$ of all edges are part of a triangle, we will find a cut of expected size at least $(1 - \epsilon)\bigl(1/2 + 9/(32\sqrt{d})\bigr)$. ## Acknowledgements Computer resources were provided by the Aalto University School of Science “Science-IT” project (Triton cluster), and by the Department of Computer Science at the University of Helsinki (Ukko cluster). ## References • Alon [1996] Noga Alon. Bipartite subgraphs. Combinatorica, 16(3):301–311, 1996. • Erdős [1979] Paul Erdős. Problems and results in graph theory and combinatorial analysis. In John Adrian Bondy and U. S. R. Murty, editors, Proc. 
Graph Theory and Related Topics (University of Waterloo, July 1977), pages 153–163. Academic Press, 1979. • Fraigniaud et al. [2007] Pierre Fraigniaud, Cyril Gavoille, David Ilcinkas, and Andrzej Pelc. Distributed computing with advice: information sensitivity of graph coloring. In Proc. 34th International Colloquium on Automata, Languages and Programming (ICALP 2007), volume 4596 of Lecture Notes in Computer Science, pages 231–242. Springer, 2007. • Garey and Johnson [1979] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979. • Håstad [2001] Johan Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001. • Kelsen [1996] Pierre Kelsen. Neighborhood graphs and distributed -coloring. In Proc. 5th Scandinavian Workshop on Algorithm Theory (SWAT 1996), volume 1097 of Lecture Notes in Computer Science, pages 223–233. Springer, 1996. • Khot et al. [2007] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O’Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing, 37(1):319–357, 2007. • Kügel [2012] Adrian Kügel. Improved exact solver for the weighted Max-SAT problem. In Daniel Le Berre, editor, Proc. Pragmatics of SAT Workshop (POS 2010), volume 8 of EasyChair Proceedings in Computing, pages 15–27, 2012. • Kuhn and Wattenhofer [2006] Fabian Kuhn and Roger Wattenhofer. On the complexity of distributed graph coloring. In Proc. 25th Annual ACM Symposium on Principles of Distributed Computing (PODC 2006), pages 7–15. ACM Press, 2006. • Linial [1992] Nathan Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992. • Naor [1991] Moni Naor. A lower bound on probabilistic algorithms for distributive ring coloring. SIAM Journal on Discrete Mathematics, 4(3):409–412, 1991. • Papadimitriou and Yannakakis [1991] Christos H. Papadimitriou and Mihalis Yannakakis. 
Optimization, approximation, and complexity classes. Journal of Computer and System Sciences, 43(3):425–440, 1991. • Poljak and Tuza [1995] Svatopluk Poljak and Zsolt Tuza. Maximum cuts and largest bipartite subgraphs. In William Cook, László Lovász, and Paul Seymour, editors, Combinatorial Optimization, volume 20 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 181–244. AMS, 1995. • Rybicki [2011] Joel Rybicki. Exact bounds for distributed graph colouring. Master’s thesis, Department of Computer Science, University of Helsinki, May 2011. • Shearer [1992] James B. Shearer. A note on bipartite subgraphs of triangle-free graphs. Random Structures & Algorithms, 3(2):223–226, 1992. • Trevisan et al. [2000] Luca Trevisan, Gregory B. Sorkin, Madhu Sudan, and David P. Williamson. Gadgets, approximation, and linear programming. SIAM Journal on Computing, 29(6):2074–2097, 2000. ## Appendix A Proof of Theorem 4 We need to prove a lower bound on $$\alpha(\tau, d) = \frac{1}{2} + \frac{1}{4^{d-1}} \binom{d-1}{\tau-1} \sum_{i=d-\tau+1}^{\tau-1} \binom{d-1}{i}$$ for $\tau = \lceil (d + \sqrt{d})/2 \rceil$. Our general strategy is as follows: 1. Verify the small cases with a computer. 2. Prove a closed-form lower bound for all larger $d$. The first part is easily solved with a simple Python script or with a short calculation in Mathematica (see Figure 8 for examples of the results for small $d$). We will now focus on the second part; for that we will need various estimates of binomial coefficients. The proof given here is certainly not the most elegant way to derive the bound, but it is self-contained and gets the job done. Proving the claim for a “sufficiently large” $d$ would be straightforward. However, we need to show that already a concrete, relatively small cutoff is enough. We will first approximate binomial coefficients with the normal distribution. Let $n \ge 1$, and define $$\delta_j(n) = \left\lfloor j \sqrt{n/32} \right\rfloor, \qquad g_j = e^{-j^2/32}$$ for each $j$. ###### Fact 5. For all sufficiently large $n$ we have $$\frac{0.999}{\sqrt{\pi n}} < \frac{1}{4^n} \binom{2n}{n} < \frac{1}{\sqrt{\pi n}}.$$ ###### Lemma 6. For any $n$, $j$, and $\delta \le \delta_j(n)$ we have $$\binom{2n}{n+\delta} > 0.995 \cdot g_j \cdot \binom{2n}{n}.$$ ###### Proof. 
We can estimate $$\binom{2n}{n+\delta} \Big/ \binom{2n}{n} = \frac{n!}{(n+\delta)!} \cdot \frac{n!}{(n-\delta)!} = \frac{n-\delta+1}{n+1} \cdot \frac{n-\delta+2}{n+2} \cdots \frac{n}{n+\delta} > \left( 1 - \frac{\delta}{n} \right)^{\delta} \ge h_j(\delta),$$ where $$h_j(\delta) = \left( 1 - \frac{j^2}{32\delta} \right)^{\delta}.$$ Now $h_j(\delta) \to e^{-j^2/32} = g_j$ as $\delta \to \infty$. For each $j$ we can verify that $h_j(\delta) > 0.995 \cdot g_j$ for all relevant $\delta$. ∎ ###### Lemma 7. For all sufficiently large $n$ and $\delta = \delta_4(n)$ we have $$\frac{1}{4^n} \sum_{i=-\delta+1}^{\delta} \binom{2n}{n+i} > 0.6088, \qquad \frac{1}{4^n} \sum_{i=-\delta+1}^{\delta-1} \binom{2n}{n+i} > 0.5975.$$ ###### Proof. Here we could apply the Berry–Esseen theorem, but the following simple piecewise estimate is sufficient for our purposes. As $\delta_j(n) > j\sqrt{n/32} - 1$, each band $\delta_{j-1}(n) < i \le \delta_j(n)$, $j = 1, \ldots, 4$, contains more than $\sqrt{n/32} - 1$ values of $i$, and by Fact 5 and Lemma 6 every term in band $j$ satisfies $\binom{2n}{n+i} \ge \binom{2n}{n+\delta_j(n)} > 0.995 \cdot g_j \cdot \binom{2n}{n}$. Summing over the four bands we obtain $$\frac{1}{4^n} \sum_{i=1}^{\delta} \binom{2n}{n+i} > 0.3044.$$ The claim follows from the observations $$\frac{1}{4^n} \sum_{i=-\delta+1}^{\delta} \binom{2n}{n+i} > \frac{2}{4^n} \sum_{i=1}^{\delta} \binom{2n}{n+i} > 2 \cdot 0.3044 = 0.6088,$$ $$\frac{1}{4^n} \sum_{i=-\delta+1}^{\delta-1} \binom{2n}{n+i} > \left( 2 - \frac{1}{\delta} \right) \frac{1}{4^n} \sum_{i=1}^{\delta} \binom{2n}{n+i} > 1.9629 \cdot 0.3044 > 0.5975. \qquad\blacksquare$$ Now we have the estimates that we will use in the proof of Theorem 4. We will consider the odd and even values of $d$ separately. #### Odd d. Assume that $d = 2n + 1$ with $n$ large enough. Let $$\delta = \tau - n = \left\lceil \sqrt{n/2 + 1/4} + 1/2 \right\rceil, \qquad \delta' = \delta_4(n),$$ and observe that $$\sqrt{n/2} < \sqrt{n/2 + 1/4} + 1/2 < \sqrt{n/2} + 1.$$ It follows that $\delta' + 1 \le \delta \le \delta' + 2$. Therefore $$\begin{aligned} \alpha(\tau, d) &= \frac{1}{2} + \frac{1}{4^{d-1}} \binom{d-1}{\tau-1} \sum_{i=d-\tau+1}^{\tau-1} \binom{d-1}{i} = \frac{1}{2} + \frac{1}{4^{2n}} \binom{2n}{n+\delta-1} \sum_{i=-\delta+2}^{\delta-1} \binom{2n}{n+i} \\ &\ge \frac{1}{2} + \frac{1}{4^{2n}} \binom{2n}{n+\delta'+1} \sum_{i=-\delta'+1}^{\delta'} \binom{2n}{n+i} = \frac{1}{2} + \frac{n-\delta'}{n+\delta'+1} \cdot \frac{1}{4^n} \binom{2n}{n+\delta'} \cdot \frac{1}{4^n} \sum_{i=-\delta'+1}^{\delta'} \binom{2n}{n+i} \\ &> \frac{1}{2} + 0.964 \cdot 0.995 \cdot g_4 \cdot \frac{0.999}{\sqrt{\pi n}} \cdot 0.6088 > \frac{1}{2} + \frac{0.2823}{\sqrt{d-1}} > \frac{1}{2} + \frac{9}{32\sqrt{d}}. \end{aligned}$$ #### Even d. Assume that $d = 2n$ with $n$ large enough. Let $$\delta = \tau - n = \left\lceil \sqrt{n/2} \right\rceil, \qquad \delta' = \delta_4(n).$$ Now we have $\delta' \le \delta \le \delta' + 1$. For any $k$ we have the identity $$\sum_{i=-k}^{k} \binom{2n}{n+i} = \sum_{i=-k}^{k} \left[ \binom{2n-1}{n+i-1} + \binom{2n-1}{n+i} \right] = \sum_{i=-k}^{k} \left[ \binom{2n-1}{n-i} + \binom{2n-1}{n+i} \right] = 2 \sum_{i=-k}^{k} \binom{2n-1}{n+i}.$$ We can use it to derive $$\begin{aligned} \alpha(\tau, d) &= \frac{1}{2} + \frac{1}{4^{d-1}} \binom{d-1}{\tau-1} \sum_{i=d-\tau+1}^{\tau-1} \binom{d-1}{i} = \frac{1}{2} + \frac{1}{4^{2n-1}} \binom{2n-1}{n+\delta-1} \sum_{i=-\delta+1}^{\delta-1} \binom{2n-1}{n+i} \\ &\ge \frac{1}{2} + \frac{1}{4^{2n-1}} \binom{2n-1}{n+\delta'} \sum_{i=-\delta'+1}^{\delta'-1} \binom{2n-1}{n+i} = \frac{1}{2} + \frac{1}{4^{2n-1}} \cdot \frac{n-\delta'}{2n} \binom{2n}{n+\delta'} \cdot \frac{1}{2} \sum_{i=-\delta'+1}^{\delta'-1} \binom{2n}{n+i} \\ &= \frac{1}{2} + \frac{n-\delta'}{n} \cdot \frac{1}{4^n} \binom{2n}{n+\delta'} \cdot \frac{1}{4^n} \sum_{i=-\delta'+1}^{\delta'-1} \binom{2n}{n+i} \\ &> \frac{1}{2} + 0.982 \cdot 0.995 \cdot g_4 \cdot \frac{0.999}{\sqrt{\pi n}} \cdot 0.5975 > \frac{1}{2} + \frac{0.2822}{\sqrt{d}} > \frac{1}{2} + \frac{9}{32\sqrt{d}}. \end{aligned}$$ This completes the proof of Theorem 4.
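The small-$d$ computer verification mentioned in Appendix A can be reproduced along these lines (a sketch; the exact range checked by the authors is not stated, so the range $2 \le d \le 80$ below is our choice):

```python
from math import ceil, comb, sqrt

def binom(n, k):
    """Binomial coefficient with the convention C(n, k) = 0 for k < 0 or k > n."""
    return comb(n, k) if 0 <= k <= n else 0

def alpha(tau, d):
    """Expected cut weight of the threshold algorithm (Lemma 3, valid for d/2 < tau <= d)."""
    s = sum(binom(d - 1, i) for i in range(d - tau + 1, tau))
    return 0.5 + binom(d - 1, tau - 1) * s / 4 ** (d - 1)

# Check the bound of Theorem 4 degree by degree.
for d in range(2, 81):
    tau = ceil((d + sqrt(d)) / 2)
    assert alpha(tau, d) >= 0.5 + 9 / (32 * sqrt(d)), d
print("bound verified for 2 <= d <= 80")
```

Note that the bound is tight at $d = 4$, where both sides equal $1/2 + 9/64$.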
2021-03-07 12:32:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8247355818748474, "perplexity": 589.3373659904552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00494.warc.gz"}
https://docs.mosek.com/latest/rmosek/tutorial-sdo-shared.html
# 6.7 Semidefinite Optimization¶ Semidefinite optimization is a generalization of conic optimization, allowing the use of matrix variables belonging to the convex cone of positive semidefinite matrices $\PSD^r = \left\lbrace X \in \Symm^r: z^T X z \geq 0, \quad \forall z \in \real^r \right\rbrace,$ where $$\Symm^r$$ is the set of $$r \times r$$ real-valued symmetric matrices. MOSEK can solve semidefinite optimization problems stated in the primal form, (6.19)$\begin{split}\begin{array}{lccccll} \mbox{minimize} & & & \sum_{j=0}^{p-1} \left\langle \barC_j, \barX_j \right\rangle + \sum_{j=0}^{n-1} c_j x_j + c^f & & &\\ \mbox{subject to} & l_i^c & \leq & \sum_{j=0}^{p-1} \left\langle \barA_{ij}, \barX_j \right\rangle + \sum_{j=0}^{n-1} a_{ij} x_j & \leq & u_i^c, & i = 0, \ldots, m-1,\\ & & & \sum_{j=0}^{p-1} \left\langle \barF_{ij}, \barX_j \right\rangle + \sum_{j=0}^{n-1} f_{ij} x_j + g_i & \in & \K_{i}, & i = 0, \ldots, q-1,\\ & l_j^x & \leq & x_j & \leq & u_j^x, & j = 0, \ldots, n-1,\\ & & & x \in \K, \barX_j \in \PSD^{r_j}, & & & j = 0, \ldots, p-1 \end{array}\end{split}$ where the problem has $$p$$ symmetric positive semidefinite variables $$\barX_j\in \PSD^{r_j}$$ of dimension $$r_j$$. The symmetric coefficient matrices $$\barC_j\in \Symm^{r_j}$$ and $$\barA_{i,j}\in \Symm^{r_j}$$ are used to specify PSD terms in the linear objective and the linear constraints, respectively. The symmetric coefficient matrices $$\barF_{i,j}\in \Symm^{r_j}$$ are used to specify PSD terms in the affine conic constraints. Note that $$q$$ ((6.19)) is the total dimension of all the cones, i.e. $$q=\text{dim}(\K_1 \times \ldots \times \K_k)$$, given there are $$k$$ ACCs. We use standard notation for the matrix inner product, i.e., for $$A,B\in \real^{m\times n}$$ we have $\left\langle A,B \right\rangle := \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} A_{ij} B_{ij}.$ In addition to the primal form presented above, semidefinite problems can be expressed in their dual form. 
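The matrix inner product defined above is just an entrywise sum, and equals $\mathrm{tr}(A^T B)$. A quick plain-Python illustration (the matrices are made up for the example):

```python
def inner(A, B):
    """<A, B> = sum_ij A_ij * B_ij for real m x n matrices (lists of rows)."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def trace_at_b(A, B):
    """trace(A^T B): an equivalent way to write the same inner product."""
    m, n = len(A), len(A[0])
    return sum(sum(A[i][j] * B[i][j] for i in range(m)) for j in range(n))

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
B = [[6.0, 5.0], [4.0, 3.0], [2.0, 1.0]]
print(inner(A, B), trace_at_b(A, B))  # both 56.0
```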
Constraints in this form are usually called linear matrix inequalities (LMIs). LMIs can be easily specified in MOSEK using the vectorized positive semidefinite cone which is defined as:

- Vectorized semidefinite domain:

  $$
  \PSD^{d,\mathrm{vec}} = \left\{(x_1,\ldots,x_{d(d+1)/2})\in \real^n~:~ \mathrm{sMat}(x)\in\PSD^d\right\},
  $$

  where $n=d(d+1)/2$ and

  $$
  \begin{split}\mathrm{sMat}(x) = \left[\begin{array}{cccc}x_1 & x_2/\sqrt{2} & \cdots & x_{d}/\sqrt{2} \\ x_2/\sqrt{2} & x_{d+1} & \cdots & x_{2d-1}/\sqrt{2} \\ \cdots & \cdots & \cdots & \cdots \\ x_{d}/\sqrt{2} & x_{2d-1}/\sqrt{2} & \cdots & x_{d(d+1)/2}\end{array}\right],\end{split}
  $$

  or equivalently

  $$
  \PSD^{d,\mathrm{vec}} = \left\{\mathrm{sVec}(X)~:~X\in\PSD^d\right\},
  $$

  where

  $$
  \mathrm{sVec}(X) = (X_{11},\sqrt{2}X_{21},\ldots,\sqrt{2}X_{d1},X_{22},\sqrt{2}X_{32},\ldots,X_{dd}).
  $$

In other words, the domain consists of vectorizations of the lower-triangular part of a positive semidefinite matrix, with the non-diagonal elements additionally rescaled. LMIs can be expressed by restricting appropriate affine expressions to this cone type.

For other types of cones supported by MOSEK, see Sec. 13.6 (Supported domains) and the other tutorials in this chapter. Different cone types can appear together in one optimization problem.
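The $\mathrm{sVec}$/$\mathrm{sMat}$ pair can be sketched directly from the definitions above. This is an illustrative Python sketch, not part of MOSEK; the $\sqrt{2}$ rescaling of off-diagonal entries makes the two maps inverse to each other:

```python
import math

def svec(X):
    """Stack the lower-triangular part of symmetric X column by column,
    scaling off-diagonal entries by sqrt(2), as in the sVec definition."""
    d = len(X)
    return [X[i][j] if i == j else math.sqrt(2) * X[i][j]
            for j in range(d) for i in range(j, d)]

def smat(x):
    """Inverse map: rebuild the symmetric d x d matrix from its svec."""
    # len(x) = d(d+1)/2  =>  d = (sqrt(1 + 8 len(x)) - 1) / 2
    d = (math.isqrt(1 + 8 * len(x)) - 1) // 2
    X = [[0.0] * d for _ in range(d)]
    k = 0
    for j in range(d):
        for i in range(j, d):
            v = x[k] if i == j else x[k] / math.sqrt(2)
            X[i][j] = X[j][i] = v
            k += 1
    return X

X = [[1.0, 0.5],
     [0.5, 2.0]]
Y = smat(svec(X))
assert all(abs(Y[i][j] - X[i][j]) < 1e-12 for i in range(2) for j in range(2))
```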
We demonstrate the setup of semidefinite variables and their coefficient matrices in the following examples:

## 6.7.1 Example SDO1

We consider the simple optimization problem with semidefinite and conic quadratic constraints:

(6.20)
$$
\begin{split}\begin{array} {llcc} \mbox{minimize} & \left\langle \left[ \begin{array} {ccc} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{array} \right], \barX \right\rangle + x_0 & & \\ \mbox{subject to} & \left\langle \left[ \begin{array} {ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right], \barX \right\rangle + x_0 & = & 1, \\ & \left\langle \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right], \barX \right\rangle + x_1 + x_2 & = & 1/2, \\ & x_0 \geq \sqrt{{x_1}^2 + {x_2}^2}, & \barX \succeq 0. & \end{array}\end{split}
$$

The problem description contains a 3-dimensional symmetric semidefinite variable which can be written explicitly as:

$$
\begin{split}\barX = \left[ \begin{array} {ccc} \barX_{00} & \barX_{10} & \barX_{20} \\ \barX_{10} & \barX_{11} & \barX_{21} \\ \barX_{20} & \barX_{21} & \barX_{22} \end{array} \right] \in \PSD^3,\end{split}
$$

and an affine conic constraint (ACC) $(x_0, x_1, x_2) \in \Q^3$. The objective is to minimize

$$
2(\barX_{00} + \barX_{10} + \barX_{11} + \barX_{21} + \barX_{22}) + x_0,
$$

subject to the two linear constraints

$$
\begin{split}\begin{array}{ccc} \barX_{00} + \barX_{11} + \barX_{22} + x_0 & = & 1, \\ \barX_{00} + \barX_{11} + \barX_{22} + 2(\barX_{10} + \barX_{20} + \barX_{21}) + x_1 + x_2 & = & 1/2. \end{array}\end{split}
$$

**Setting up the linear and conic part**

The linear and conic parts (constraints, variables, objective, ACC) are set up using the methods described in the relevant tutorials: Sec. 6.1 (Linear Optimization) and Sec. 6.2 (From Linear to Conic Optimization). Here we only discuss the aspects directly involving semidefinite variables.

**Appending semidefinite variables**

The dimensions of semidefinite variables are passed in `prob$bardim`.
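The lower-triangular block-triplet format used below for `prob$barc` and `prob$barA` is easy to generate mechanically. An illustrative Python sketch (not the Rmosek API) that extracts 1-based triplets $(k,l,v)$ with $k \geq l$, reproducing the `prob$barc` entries of Listing 6.8 for the objective matrix of (6.20):

```python
def lower_triplets(C):
    """Lower-triangular (k, l, v) triplets of a symmetric matrix,
    1-based as in the Rmosek block-triplet format (k >= l)."""
    return [(k + 1, l + 1, C[k][l])
            for k in range(len(C))
            for l in range(k + 1)
            if C[k][l] != 0]

C = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
trip = lower_triplets(C)
# Same entries as prob$barc in Listing 6.8 (order may differ):
assert set(trip) == {(1, 1, 2), (2, 2, 2), (3, 3, 2), (2, 1, 1), (3, 2, 1)}
```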
**Coefficients of semidefinite terms**

Every term of the form $(\barA_{i,j})_{k,l}(\barX_j)_{k,l}$ is determined by four indices $(i,j,k,l)$ and a coefficient value $v=(\barA_{i,j})_{k,l}$. Here $i$ is the number of the constraint in which the term appears, $j$ is the index of the semidefinite variable it involves and $(k,l)$ is the position in that variable. This data is passed in the structure `prob$barA`. Note that only the lower triangular part should be specified explicitly, that is one always has $k\geq l$. Semidefinite terms $(\barC_j)_{k,l}(\barX_j)_{k,l}$ of the objective are specified in the same way in `prob$barc` but only include $(j,k,l)$ and $v$.

**Source code**

Listing 6.8 R implementation of model (6.20).

```r
library("Rmosek")

getbarvarMatrix <- function(barvar, bardim)
{
    N <- as.integer(bardim)
    new("dspMatrix", x=barvar, uplo="L", Dim=c(N,N))
}

sdo1 <- function()
{
    # Specify the non-matrix variable part of the problem.
    prob <- list(sense="min")
    prob$c  <- c(1, 0, 0)
    prob$A  <- sparseMatrix(i=c(1, 2, 2),
                            j=c(1, 2, 3),
                            x=c(1, 1, 1), dims=c(2, 3))
    prob$bc <- rbind(blc=c(1, 0.5),
                     buc=c(1, 0.5))
    prob$bx <- rbind(blx=rep(-Inf, 3),
                     bux=rep( Inf, 3))

    # NOTE: The F matrix is internally stored in the sparse
    #       triplet form. Use 'giveCsparse' or 'repr' option
    #       in the sparseMatrix() call to construct the F
    #       matrix directly in the sparse triplet form.
    prob$F <- sparseMatrix(i=c(1, 2, 3), j=c(1, 2, 3), x=c(1, 1, 1), dims=c(3, 3))
    prob$g <- c(1:3)*0
    prob$cones <- cbind(list("QUAD", 3, NULL))

    # Specify semidefinite matrix variables (one 3x3 block)
    prob$bardim <- c(3)

    # Block triplet format specifying the lower triangular part
    # of the symmetric coefficient matrix 'barc':
    prob$barc$j <- c(1, 1, 1, 1, 1)
    prob$barc$k <- c(1, 2, 3, 2, 3)
    prob$barc$l <- c(1, 2, 3, 1, 2)
    prob$barc$v <- c(2, 2, 2, 1, 1)

    # Block triplet format specifying the lower triangular part
    # of the symmetric coefficient matrix 'barA':
    prob$barA$i <- c(1, 1, 1, 2, 2, 2, 2, 2, 2)
    prob$barA$j <- c(1, 1, 1, 1, 1, 1, 1, 1, 1)
    prob$barA$k <- c(1, 2, 3, 1, 2, 3, 2, 3, 3)
    prob$barA$l <- c(1, 2, 3, 1, 2, 3, 1, 1, 2)
    prob$barA$v <- c(1, 1, 1, 1, 1, 1, 1, 1, 1)

    # Solve the problem
    r <- mosek(prob)

    # Print matrix variable and return the solution
    stopifnot(identical(r$response$code, 0))
    print( list(barx=getbarvarMatrix(r$sol$itr$barx[[1]], prob$bardim[1])) )
    r$sol
}

sdo1()
```

The numerical values of $\barX_j$ are returned in the list `r$sol$itr$barx`; the $j$-th element of the list is the lower triangular part of each $\barX_j$ stacked column-by-column into a numeric vector. Similarly, the dual semidefinite variables $\barS_j$ are recovered through `r$sol$itr$bars`.

## 6.7.2 Example SDO2

We now demonstrate how to define more than one semidefinite variable using the following problem with two matrix variables and two types of constraints:

(6.21)
$$
\begin{split}\begin{array}{lrll} \mbox{minimize} & \langle C_1,\barX_1\rangle + \langle C_2,\barX_2\rangle & & \\ \mbox{subject to} & \langle A_1,\barX_1\rangle + \langle A_2,\barX_2\rangle & = & b, \\ & (\barX_2)_{01} & \leq & k, \\ & \barX_1, \barX_2 & \succeq & 0. \end{array}\end{split}
$$

In our example $\dim(\barX_1)=3$, $\dim(\barX_2)=4$, $b=23$, $k=-3$ and

$$
\begin{split}C_1= \left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 6 \end{array}\right], \quad A_1= \left[\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 2 \end{array}\right],\end{split}
$$

$$
\begin{split}C_2= \left[\begin{array}{cccc} 1 & -3 & 0 & 0\\ -3 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array}\right], \quad A_2= \left[\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \\ \end{array}\right],\end{split}
$$

are constant symmetric matrices. Note that this problem does not contain any scalar variables, but they could be added in the same fashion as in Sec. 6.7.1 (Example SDO1). For explanations of the data structures used in the example see Sec. 6.7.1 (Example SDO1). Note that the field `bardim` is used to specify that we have two semidefinite variables of dimensions 3 and 4. The code representing the above problem is shown below.

Listing 6.9 Implementation of model (6.21).
```r
library("Rmosek")

getbarvarMatrix <- function(barvar, bardim)
{
    N <- as.integer(bardim)
    new("dspMatrix", x=barvar, uplo="L", Dim=c(N,N))
}

sdo2 <- function()
{
    # Sample data in sparse lower-triangular triplet form
    C1_k <- c(1, 3);        C1_l <- c(1, 3);        C1_v <- c(1, 6);
    A1_k <- c(1, 3, 3);     A1_l <- c(1, 1, 3);     A1_v <- c(1, 1, 2);
    C2_k <- c(1, 2, 2, 3);  C2_l <- c(1, 1, 2, 3);  C2_v <- c(1, -3, 2, 1);
    A2_k <- c(2, 2, 4);     A2_l <- c(1, 2, 4);     A2_v <- c(1, -1, -3);
    b <- 23.0;
    k <- -3.0;

    # Specify the dimensions of the problem
    prob <- list(sense="min");
    prob$A  <- Matrix(nrow=2, ncol=0);   # 2 constraints
    prob$c  <- numeric(0);
    prob$bx <- rbind(blc=numeric(0), buc=numeric(0));

    # Dimensions of semidefinite matrix variables
    prob$bardim <- c(3, 4);
    # Constraint bounds
    prob$bc <- rbind(blc=c(b, -Inf), buc=c(b, k));

    # Block triplet format specifying the lower triangular part
    # of the symmetric coefficient matrix 'barc':
    prob$barc$j <- c(rep(1, length(C1_v)), rep(2, length(C2_v)));  # Which PSD variable (j)
    prob$barc$k <- c(C1_k, C2_k);                                  # Entries: (k,l)->v
    prob$barc$l <- c(C1_l, C2_l);
    prob$barc$v <- c(C1_v, C2_v);

    # Block triplet format specifying the lower triangular part
    # of the symmetric coefficient matrix 'barA':
    prob$barA$i <- c(rep(1, length(A1_v)+length(A2_v)), 2);          # Which constraint (i)
    prob$barA$j <- c(rep(1, length(A1_v)), rep(2, length(A2_v)), 2); # Which PSD variable (j)
    prob$barA$k <- c(A1_k, A2_k, 2);                                 # Entries: (k,l)->v
    prob$barA$l <- c(A1_l, A2_l, 1);
    prob$barA$v <- c(A1_v, A2_v, 0.5);

    # Solve the problem
    r <- mosek(prob);

    # Print matrix variables
    stopifnot(identical(r$response$code, 0));
    print( list(X1=getbarvarMatrix(r$sol$itr$barx[[1]], prob$bardim[1])) );
    print( list(X2=getbarvarMatrix(r$sol$itr$barx[[2]], prob$bardim[2])) );
}

sdo2();
```

The numerical values of $\barX_j$ are returned in the list `r$sol$itr$barx`; the $j$-th element of the list is the lower triangular part of each $\barX_j$ stacked column-by-column into a numeric vector. Similarly, the dual semidefinite variables $\barS_j$ are recovered through `r$sol$itr$bars`.

## 6.7.3 Example SDO_LMI: Linear matrix inequalities and the vectorized semidefinite domain

The standard form of a semidefinite problem is usually either based on semidefinite variables (primal form) or on linear matrix inequalities (dual form). However, MOSEK allows mixing of these two forms, as shown in (6.22):

(6.22)
$$
\begin{split}\begin{array} {llcc} \mbox{minimize} & \left\langle \left[ \begin{array} {cc} 1 & 0 \\ 0 & 1 \end{array} \right], \barX \right\rangle + x_0 + x_1 + 1 & & \\ \mbox{subject to} & \left\langle \left[ \begin{array} {cc} 0 & 1 \\ 1 & 0 \end{array} \right], \barX \right\rangle - x_0 - x_1 & \in & \real_{\geq 0}^1, \\ & x_0 \left[ \begin{array}{cc} 0 & 1 \\ 1 & 3 \end{array} \right] + x_1 \left[ \begin{array}{cc} 3 & 1 \\ 1 & 0 \end{array} \right] - \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] & \succeq & 0, \\ & \barX \succeq 0. & & \end{array}\end{split}
$$

The first affine expression is restricted to a linear domain and could also be modelled as a linear constraint (instead of an ACC). The lower triangular part of the linear matrix inequality (second constraint) can be vectorized and restricted to the `MSK_DOMAIN_SVEC_PSD_CONE` domain. This allows us to express the constraints in (6.22) as the affine conic constraints shown in (6.23).

(6.23)
$$
\begin{split}\begin{array}{ccccccc} \left\langle\left[\begin{array}{cc}0&1\\1&0\end{array}\right],\barX \right\rangle & + & \left[\begin{array}{cc}-1&-1\end{array}\right] x & + & \left[\begin{array}{c}0\end{array}\right] & \in & \real_{\geq 0}^1, \\ & & \left[\begin{array}{cc}0&3\\ \sqrt{2}&\sqrt{2}\\3&0\end{array}\right] x & + & \left[\begin{array}{c}-1\\0\\-1\end{array}\right] & \in & \PSD^{3,\mathrm{vec}} \end{array}\end{split}
$$

Vectorization of the LMI is performed as explained in Sec. 13.6 (Supported domains).
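To see where the numbers in (6.23) come from, here is an illustrative Python check (not the Rmosek API) that vectorizing the lower triangles of the LMI coefficient matrices in (6.22) produces exactly the columns of the matrix and the constant vector shown in (6.23):

```python
import math

def svec_lower(M):
    """sVec of a symmetric 2x2 matrix: (M11, sqrt(2)*M21, M22)."""
    return [M[0][0], math.sqrt(2) * M[1][0], M[1][1]]

A0 = [[0, 1], [1, 3]]   # coefficient of x0 in the LMI
A1 = [[3, 1], [1, 0]]   # coefficient of x1
B  = [[1, 0], [0, 1]]   # constant term (subtracted in the LMI)

F_cols = [svec_lower(A0), svec_lower(A1)]   # columns of the ACC matrix
g = [-b for b in svec_lower(B)]             # constant vector

s = math.sqrt(2)
assert F_cols[0] == [0, s, 3]
assert F_cols[1] == [3, s, 0]
assert g == [-1, 0, -1]
```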
**Setting up the linear part**

The linear parts (objective, constraints, variables) and the semidefinite terms in the linear expressions are defined exactly as shown in the previous examples.

**Setting up the affine conic constraints with semidefinite terms**

To define affine conic constraints, we set `prob$F` and `prob$g` to the values that are shown in (6.23). The coefficient for the semidefinite variable is defined by setting `prob$barF` equal to the symmetric matrix shown in (6.23).

```r
prob$barF$i <- c(1, 1)
prob$barF$j <- c(1, 1)
prob$barF$k <- c(1, 2)
prob$barF$l <- c(1, 1)
prob$barF$v <- c(0, 1)
```

The domains are specified as columns in `prob$cones`, where the first row selects the "type" of cone, such as `MSK_DOMAIN_RPLUS` or `MSK_DOMAIN_SVEC_PSD_CONE` (note that `RPLUS` and `SVEC_PSD_CONE` are valid aliases for each domain, respectively). The second row specifies the dimension ("dim"), here set to 1 and 3, respectively. The third row sets the parameters ("conepar") for parametric domains, but here it is set to NULL.

```r
prob$F <- sparseMatrix(i=c(1, 1, 2, 3, 3, 4),
                       j=c(1, 2, 2, 1, 2, 1),
                       x=c(-1, -1, 3, sqrt(2), sqrt(2), 3),
                       dims=c(4, 2))
prob$g <- c(0, -1, 0, -1)
prob$cones <- matrix(list(), nrow=3, ncol=2)
prob$cones[,1] <- list("RPLUS", 1, NULL)
prob$cones[,2] <- list("SVEC_PSD_CONE", 3, NULL)
```

**Source code**

Listing 6.10 Source code solving problem (6.22).

```r
library("Rmosek")

getbarvarMatrix <- function(barvar, bardim)
{
    N <- as.integer(bardim)
    new("dspMatrix", x=barvar, uplo="L", Dim=c(N,N))
}

sdo_lmi <- function()
{
    # Specify the non-matrix variable part of the problem.
    prob <- list(sense="min")
    prob$c  <- c(1, 1)
    prob$c0 <- 1

    # Specify variable bounds
    prob$bx <- rbind(blx=rep(-Inf, 2), bux=rep( Inf, 2))

    # The following two entries must always be defined, even if set to zero.
    prob$A  <- Matrix(c(0, 0), nrow=1, sparse=TRUE)
    prob$bc <- rbind(blc=rep(-Inf, 1), buc=rep(Inf, 1))

    prob$F <- sparseMatrix(i=c(1, 1, 2, 3, 3, 4),
                           j=c(1, 2, 2, 1, 2, 1),
                           x=c(-1, -1, 3, sqrt(2), sqrt(2), 3),
                           dims=c(4, 2))
    prob$g <- c(0, -1, 0, -1)
    prob$cones <- matrix(list(), nrow=3, ncol=2)
    prob$cones[,1] <- list("RPLUS", 1, NULL)
    prob$cones[,2] <- list("SVEC_PSD_CONE", 3, NULL)

    # Specify semidefinite matrix variables (one 2x2 block)
    prob$bardim <- c(2)

    # Block triplet format specifying the lower triangular part
    # of the symmetric coefficient matrix 'barc':
    prob$barc$j <- c(1, 1, 1)
    prob$barc$k <- c(1, 2, 2)
    prob$barc$l <- c(1, 1, 2)
    prob$barc$v <- c(1, 0, 1)

    # Block triplet format specifying the lower triangular part
    # of the symmetric coefficient matrix 'barF' for the ACC:
    prob$barF$i <- c(1, 1)
    prob$barF$j <- c(1, 1)
    prob$barF$k <- c(1, 2)
    prob$barF$l <- c(1, 1)
    prob$barF$v <- c(0, 1)

    # Solve the problem
    r <- mosek(prob)

    # Print matrix variable and return the solution
    stopifnot(identical(r$response$code, 0))
    print( list(barx=getbarvarMatrix(r$sol$itr$barx[[1]], prob$bardim[1]),
                xx=r$sol$itr$xx) )
    r$sol
}

sdo_lmi()
```
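For readers following along in another language: the layout of `r$sol$itr$barx` (lower triangle stacked column by column) can be unpacked as in this illustrative Python sketch, mirroring what the R helper `getbarvarMatrix` does via `dspMatrix`:

```python
def unpack_lower(stacked, d):
    """Rebuild a symmetric d x d matrix from its lower-triangular part
    stacked column by column (the layout of r$sol$itr$barx)."""
    X = [[0.0] * d for _ in range(d)]
    it = iter(stacked)
    for j in range(d):
        for i in range(j, d):
            X[i][j] = X[j][i] = next(it)
    return X

# Lower triangle of [[1, 2, 4], [2, 3, 5], [4, 5, 6]], column by column:
assert unpack_lower([1, 2, 4, 3, 5, 6], 3) == [[1, 2, 4], [2, 3, 5], [4, 5, 6]]
```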
http://math.stackexchange.com/questions/95808/proving-5n-equiv-1-pmod-2r-when-n-2r-2
# Proving $5^n \equiv 1 \pmod{2^r}$ when $n=2^{r-2}$

How can I prove that $5^n \equiv 1 \pmod{2^r}$ when $n=2^{r-2}$? Actually, what I am trying to prove is that the cyclic group generated by the residue class of $5 \pmod{2^r}$ is of order $2^{r-2}$.

**Comments:**

- I think induction on $r$ does it. – lhf Jan 2 '12 at 13:59
- If you want to use a cannonball to swat flies, $(\mathbb{Z}/2^r\mathbb{Z})^*\cong \mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2^{r-2}\mathbb{Z}$ for $r\geq 2$; hence, the group of units has exponent $2^{r-2}$ when $r\geq 3$. – Arturo Magidin Jan 2 '12 at 18:21
- @ArturoMagidin: This was the question, actually. Which I couldn't prove. – HbCwiRoJDp Jan 2 '12 at 18:26
- @Karatug: Well, no; the statement you have is weaker (you are only asking how to show the order of $5$ divides $2^{r-2}$), and even if you show the order is exactly $2^{r-2}$, then you don't get the structure of the unit group from that (you only get that it has a cyclic factor of order at least $2^{r-2}$, but it could be cyclic of order $2^{r-1}$ if all you know is the order of $5$). – Arturo Magidin Jan 2 '12 at 18:52
- @Karatug: Here is a previous answer that shows the order of $5$ is exactly $2^{r-2}$ modulo $2^r$. – Arturo Magidin Jan 2 '12 at 18:56

---

**Answer.** Hint: If $a\equiv 1\pmod{2b}$ then $a^2\equiv 1\pmod{4b}$. (Proof: if $a=2bk+1$ then $a^2=4bk(bk+1)+1$. End of the proof.) Use this for $a=5^n$, $n=2^{r-2}$ and $b=2^{r-1}$, to build a proof by induction on $r$, starting from the $r=2$ case $5^1\equiv 1\pmod{4}$.

- Works for the first question. To reach OP's extra goal you need to make the extra observation that when $b$ is even and $k$ is odd, then $a^2\not\equiv 1\pmod{8b}$. – Jyrki Lahtonen Jan 2 '12 at 18:46

---

**Answer.** We divide your original problem into two parts. The specific question you asked has nothing to do with $5$ and is answered for all odd $a$ in Part 1. The fact that the order of $5$ is actually $2^{r-2}$, and not something smaller, is proved in Part 2.

**Part 1:** Suppose that $a$ is odd, and $r \ge 3$.
Then $a^{2^{r-2}}\equiv 1 \pmod{2^r}$.

Since $2^r$ does not have a primitive root, the order of $a$ modulo $2^r$ is less than $\varphi(2^r)$. But the order of $a$ divides $\varphi(2^r)$. Since $\varphi(2^r)=2^{r-1}$, it follows that the order of $a$ is a divisor of $2^{r-1}$ which is less than $2^{r-1}$. Thus the order of $a$ divides $2^{r-2}$. It follows that $a^{2^{r-2}}\equiv 1 \pmod{2^r}$.

**Part 2:** We show that if $r\ge 3$, then $5^{2^{r-3}}\not\equiv 1 \pmod{2^r}$. This will show that the order of $5$ modulo $2^r$ is actually $2^{r-2}$, and not something smaller.

We show by induction that $5^{2^{r-3}}\equiv 1+2^{r-1} \pmod{2^r}$, and so in particular $5^{2^{r-3}}\not\equiv 1 \pmod{2^r}$. It is easy to check that the result holds at $r=3$. Suppose now that $5^{2^{k-3}}\equiv 1+2^{k-1} \pmod{2^k}$. We show that $5^{2^{k-2}}\equiv 1+2^{k} \pmod{2^{k+1}}$. By the induction assumption, $5^{2^{k-3}}= 1+2^{k-1} +s2^k$ for some $s$. Square both sides, and simplify modulo $2^{k+1}$. We get that

$$5^{2^{k-2}}=(1+2^{k-1} +s2^k)^2 \equiv 1+2^k +2^{2k-2}\pmod{2^{k+1}}.$$

But $2^{2k-2}$ is divisible by $2^{k+1}$, since $k \ge 3$. The result follows.

---

**Answer.** Hint:

$$5^{2^{k+1}}-1=(5^{2^k}-1)(5^{2^k}+1)$$

The latter factor is always congruent to $2\pmod 4$. I recommend that you then prove a result giving the exact power of two that divides $5^{2^n}-1$, by induction on $n$. This gives you the order of the residue class of $5$ modulo any power of two.

---

**Answer.** Let $a$ be an odd integer, say $a=2b+1$. Then

$$a^2=4b^2+4b+1=8\cdot\tfrac{b(b+1)}{2}+1=8c+1,$$

where $c=b(b+1)/2$ is an integer (see also: if $n$ is an odd natural number, then $8$ divides $n^{2}-1$). Next,

$$a^4=(1+8c)^2=1+16c+64c^2=1+16d$$

for some integer $d=c+4c^2$, and

$$a^8=(1+16d)^2=1+32d+256d^2=1+32e.$$

Using induction, for $n\geq 3$ we get

$$a^{2^{n-2}}\equiv 1\pmod{2^n}.$$

See http://www.amazon.com/Introduction-Theory-Numbers-Ivan-Niven/dp/0471625469
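The claims in these answers are easy to check numerically. A small illustrative Python sketch, verifying both the order of $5$ and the concrete form from Part 2:

```python
def mult_order(a, m):
    """Multiplicative order of a modulo m (assumes gcd(a, m) == 1)."""
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

# The order of 5 modulo 2^r is exactly 2^(r-2) for r >= 3:
for r in range(3, 12):
    assert mult_order(5, 2**r) == 2**(r - 2)

# Part 2 in concrete form: 5^(2^(r-3)) = 1 + 2^(r-1) (mod 2^r):
for r in range(3, 12):
    assert pow(5, 2**(r - 3), 2**r) == 1 + 2**(r - 1)

print(mult_order(5, 2**10))  # 256, i.e. 2^8
```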
https://kerodon.net/tag/01FA
Kerodon

Proposition 4.5.3.5 (Transitivity). Suppose we are given a commutative diagram of simplicial sets

$\xymatrix@R =50pt@C=50pt{ A \ar [r] \ar [d] & B \ar [r] \ar [d] & C \ar [d] \\ A' \ar [r] & B' \ar [r] & C', }$

where the left square is a categorical pushout diagram. Then the right square is a categorical pushout diagram if and only if the outer rectangle is a categorical pushout diagram.

Proof. Apply Proposition 3.4.1.9. $\square$
https://ctftime.org/writeup/24543
Rating:

So... we were given a shell through ssh:

    ssh [email protected]

and there were some restrictions: we couldn't use the cat command to read the flag. There are a lot of ways to bypass that, depending on how it's implemented. I have no idea why I chose this way among others, but I tried to raise an error with

    exec < ./flag.txt

and it worked: while trying to say there is no such command/file, it showed the flag.txt content, RaziCTF{th3r3_!s_4_c4t_c4ll3d_fl4g}. Though of course, before it reached the part about there being no such command, bash hit the exclamation mark character and raised an error about bash events (history expansion).
https://zbmath.org/?q=an%3A1106.34010
## Existence of solution for a boundary value problem of fractional order. (English) Zbl 1106.34010

The paper is concerned with fractional differential equations of order $\alpha \in (1,2)$, viewed as a boundary value problem with boundary conditions given at $0$ and $1$. Both linear and nonlinear equations are considered and existence results are derived.

### MSC:

- 34B15 Nonlinear boundary value problems for ordinary differential equations
- 26A33 Fractional derivatives and integrals
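The review does not reproduce the problem itself. As a hedged illustration only (the specific fractional derivative and boundary conditions used in the paper may differ), a typical two-point boundary value problem of this kind can be written as:

```latex
% Representative fractional BVP of order \alpha \in (1,2);
% D^\alpha denotes a fractional derivative (e.g. Riemann-Liouville or Caputo).
\begin{aligned}
  D^{\alpha} u(t) &= f\bigl(t, u(t)\bigr), \qquad t \in (0,1),\ 1 < \alpha < 2, \\
  u(0) &= 0, \qquad u(1) = 0.
\end{aligned}
```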
http://freqnbytes.com/standard-error/compute-standard-error-of-mean.php

# Compute Standard Error Of Mean

The standard error of the mean (SEM), i.e., the error of using the sample mean as a method of estimating the population mean, is the standard deviation of the sample means over all possible samples of a given size drawn from the population. Here $n$ is the size (number of observations) of the sample; the standard deviation of all possible sample means of size $n$ is the standard error. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or $s_{\bar x}$.

## Formula

The true standard deviation of the sample mean is

$$\text{SD}_{\bar x} = \frac{\sigma}{\sqrt{n}},$$

where $\sigma$ is the population standard deviation. It is rare that the true population standard deviation is known; however, the sample standard deviation $s$ is an estimate of $\sigma$, and if $\sigma$ is not known the standard error is estimated using the formula

$$s_{\bar x} = \frac{s}{\sqrt{n}},$$

where $s$ is the sample standard deviation. Notice that $s_{\bar x}$ is only an estimate of the true standard error $\sigma_{\bar x} = \sigma/\sqrt{n}$; see unbiased estimation of standard deviation for further discussion. Gurland and Tripathi (1971) provide a correction and equation for this effect; the unbiased standard error plots as the $\rho=0$ diagonal line with log-log slope $-\tfrac{1}{2}$.

## Example

Consider the ages of 9,732 women who completed the 2012 Cherry Blossom run, a 10-mile race. Because the 9,732 runners are the entire population, 33.88 years is the population mean, $\mu$, and 9.27 years is the population standard deviation, $\sigma$. The mean age for the 16 runners in one particular sample is 37.25; the sample mean will very rarely be exactly equal to the population mean. For samples of size 16 the standard error of the mean is $9.27/\sqrt{16} = 2.32$. Repeating the sampling procedure and taking 20,000 samples of size $n=16$, the distribution of these 20,000 sample means indicates how far the mean of a sample may be from the true population mean; the mean of all possible sample means is equal to the population mean. Standard errors may also be used to calculate confidence intervals, for example upper and lower 95% confidence limits around the sample mean $\bar x$.

## Tips

1. Calculations of the mean, standard deviation, and standard error are most useful for analysis of normally distributed data.
2. It is very easy to make mistakes or enter numbers incorrectly. To find the mean, add all the numbers together and divide by how many numbers there are.
3. The standard error represents how well the sample mean approximates the population mean.
4. Decreasing the standard error by a factor of ten requires a hundred times as many observations.
5. If the effect of random changes is significant, the standard error of the mean will be higher.

References: Barde, M. (2012), "What to use to express the variability of data: Standard deviation or standard error of mean?"; Kenney, J.F. and Keeping, E.S. (1963), Mathematics of Statistics, van Nostrand, p. 187; Zwillinger, D. (1995), Standard Mathematical Tables and Formulae, Chapman & Hall/CRC; Sokal and Rohlf (1981), Biometry: Principles and Practice of Statistics in Biological Research, 2nd ed.; Gurland and Tripathi (1971).
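(Illustrative addition, not from the original article.) A short Python sketch of the estimated standard error $s/\sqrt{n}$, also reproducing the running example's value $9.27/\sqrt{16} = 2.32$:

```python
import math

def sem(sample):
    """Estimated standard error of the mean: s / sqrt(n),
    using the (n-1)-denominator sample standard deviation."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return s / math.sqrt(n)

# True SEM from the running example: sigma = 9.27, n = 16
print(round(9.27 / math.sqrt(16), 2))  # 2.32
```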
https://ai.stackexchange.com/questions/23208/wind-speed-forecasting-using-arima-model-in-python3
# Wind speed forecasting using ARIMA model in Python3

Recently, I started working on time-series models, and I should mention that I am very new to Python and ML as a whole. I tried to implement a time-series model on wind speed data. Being a newbie, I followed the steps given in this article: https://kanoki.org/2020/04/30/time-series-analysis-and-forecasting-with-arima-python/ and it's a great one to start with, but somehow I am unable to forecast my data (or I would say the forecast is constant, around the value 5.88!)

```python
# FORECAST
n = 39
forecast, err, ci = results.forecast(steps=n, alpha=0.05)
windspeed_forecast = pd.DataFrame(
    {'forecast': forecast},
    index=pd.date_range(start='22/8/2020 01:00:00', periods=n, freq='MS'))
windspeed_forecast

# plot for forecast
ax = windspeed[19:].SPEED.plot(label='observed', figsize=(20, 15))
windspeed_forecast.plot(ax=ax, label='Forecast', color='r')
ax.fill_between(windspeed_forecast.index, ci[:, 0], ci[:, 1], color='b', alpha=.005)
ax.set_xlabel('DATE')
ax.set_ylabel('SPEED')
plt.legend()
plt.show()
```

- What I think is that the high AIC value of 700 might be the problem!
- Also, I am unable to figure out how I can create the date-time column for the forecasted values in the same format as the original data (i.e. hourly data starting from a specific date) [as shown in screenshot number 1 and the screenshot below: I need a column starting from 22/8/2020 with hourly gaps, and so on].

Also, please find attached the screenshot of my data in a Jupyter notebook (out of 191 data points in total, 152 were used as training data and the rest as test data).

Any suggestion/help regarding the same will be appreciated :)

• I don't really understand what your main specific question is here. Can you clarify that (although this is an old question)? – nbro Jan 20 at 19:57
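For reference, the hourly date-time index described in the question can be built with `pandas.date_range`; a minimal sketch (note that `freq='MS'` in the snippet above gives month-start timestamps, whereas hourly data needs `freq='H'`):

```python
import pandas as pd

# Hourly timestamps starting 22 August 2020, 01:00.
# freq='H' means hourly; freq='MS' would instead give month-start dates.
n = 39
index = pd.date_range(start='2020-08-22 01:00:00', periods=n, freq='H')
print(index[0], index[-1])
```

This index can then be passed to the `DataFrame` constructor exactly as in the snippet above.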
2021-08-04 19:09:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36434075236320496, "perplexity": 1640.0016081472288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154897.82/warc/CC-MAIN-20210804174229-20210804204229-00687.warc.gz"}
https://www.betterment.com/press/newsroom/smartest-steps-take-win-700-million-powerball-jackpot/
# The smartest steps to take if you win the $700 million Powerball jackpot By Kathleen Elkins The jackpot for Wednesday’s Powerball lottery is now at$700 million, the second-largest prize in the game’s history. On the off chance you hold the winning numbers, you’ll want to be smart about how you handle the windfall. After all, the CFP Board of Standards says nearly one-third of lottery winners eventually declare bankruptcy. CNBC Make It spoke with Nick Holeman, certified financial planner at Betterment, who outlined five smart steps to take if you walk away with the jackpot.
2017-09-20 09:14:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2839193344116211, "perplexity": 3926.850427314351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686983.8/warc/CC-MAIN-20170920085844-20170920105844-00392.warc.gz"}
https://ai.stackexchange.com/questions/17188/why-does-the-adversarial-search-minimax-algorithm-use-depth-first-search-dfs-i
# Why does the adversarial search minimax algorithm use Depth-First Search (DFS) instead of Breadth-First Search (BFS)?

I understand that the actual algorithm calls for using Depth-First Search, but is there a functionality reason for using it over another search algorithm like Breadth-First Search?

The primary reason is that Breadth-First Search requires much more memory (and this probably also makes it a little bit slower in practice, due to time required to allocate memory, jumping around in memory rather than working with what's still in the CPU's caches, etc.).

Breadth-First Search needs memory to remember "where it was" in all the different branches, whereas Depth-First Search completes an entire path first before recursing back -- which doesn't really require any memory other than the stack trace. This is assuming we're using a recursive implementation for DFS -- which we normally do in the case of minimax.

You can clearly see this if you look at pseudocode for the two approaches (ignoring the minimax details here, just presenting pseudocode for straightforward searches):

```
BreadthFirstSearch(start):
    Q = new queue()
    Q.append(start)
    while Q is not empty:
        node = Q.pop()
        if node is leaf:
            do something with leaf
        else:
            for each child of node:
                Q.append(child)

DepthFirstSearch(start):
    if start is leaf:
        do something with leaf
    for each child of start:
        DepthFirstSearch(child)
        // probably do something with return value from the recursive DFS call
```

You see that the BFS requires a queue object that explicitly stores a bunch of stuff in memory, whereas DFS doesn't.

There's more to the story once you get to extensions of Minimax, like Alpha-Beta pruning and Iterative Deepening... but since the question is just about Minimax, I'll leave it at that for now.
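As a concrete illustration of that recursive DFS structure, here is a minimal minimax in Python (a sketch, not part of the answer above; leaves are plain numbers and internal nodes are lists of children):

```python
def minimax(node, maximizing=True):
    # A leaf simply returns its value.
    if not isinstance(node, list):
        return node
    # An internal node evaluates each child fully (depth-first) before
    # moving on to the next; the only "memory" used is the call stack.
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply game tree.
tree = [[3, 5], [2, [9, 1]]]
print(minimax(tree))  # 3
```

The maximizing player picks between min(3, 5) = 3 and min(2, max(9, 1)) = 2, so the root value is 3.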
2021-04-17 12:24:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4587631821632385, "perplexity": 1454.8354050475828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038119532.50/warc/CC-MAIN-20210417102129-20210417132129-00221.warc.gz"}
https://www.futurelearn.com/info/futurelearn-international/canada-carbon-tax
# What is Canada’s National Carbon Tax and how does it affect us?

Here’s what you need to know about Canada’s carbon tax, the Greenhouse Gas Pollution Pricing Act, why we have it and how it affects you.

Climate change is well on its way towards climate disaster, and Canadians experiencing the extreme highs in temperatures coupled with the rampant wildfires tearing apart towns and forests across Canada this summer have had a glimpse of the bleak future if urgent action isn’t taken.

In an effort to curb climate change, and not a second too soon, Canada’s Supreme Court upheld the Trudeau government’s 2019 implementation of the Greenhouse Gas Pollution Pricing Act on March 25th of this year to help the country reach its sustainability goals.

This act implements a carbon tax that puts a price on carbon in an effort to reduce emissions by at least 30% below 2005 levels by 2030, and it’s one of many ways we’re aiming to undo the environmental damage done by human hands. Here’s what you need to know about Canada’s carbon tax.

## What is a carbon tax?

A carbon tax is a tax imposed on the amount of carbon emitted into the atmosphere as a result of human activity. The carbon emitted is usually in the form of carbon dioxide (CO2) from burning fossil fuels. The goal of a carbon tax is to create incentives for individuals and businesses to reduce their carbon emissions in order to help curb climate change.

A carbon tax may also be referred to as carbon pricing, a price on carbon, a greenhouse gas tax (GHG tax) or a fuel charge, although each of these has a slightly varying definition. The term “carbon pricing” or “price of carbon” can be misunderstood: rather than the traditional sense of price, where the price relates to the amount of something consumed, carbon pricing relates to the amount of carbon emitted. Carbon pricing is applied to the amount of carbon in the form of CO2 emitted into the atmosphere from burning fossil fuels.
It is used in both carbon taxes and the cap-and-trade system.

A greenhouse gas tax, or GHG tax, is what most carbon taxes actually refer to. Carbon dioxide is one of many gases that contribute to the greenhouse effect which warms the atmosphere, so a tax on such gases is inclusive of the various types of emissions. If it weren’t, a tax that purely targets carbon dioxide could mean people and industry shift towards using another harmful type of emission-producing fuel rather than renewable energies.

A fuel charge refers to the part of a carbon tax that affects individuals, whereby the fuel consumed for operating motor vehicles or heating homes carries an additional tax separate from HST. To learn about how you can save on fuel tax with the innovations and technologies behind low emission vehicles, you can enrol in the Introduction to Low Emission Road Transport course.

## Why do countries tax carbon emissions?

Carbon taxes take into account activities such as fuelling cars and homes and the operation of factories that burn greenhouse gases. The reason why carbon emissions need to be reduced is that carbon dioxide emitted into the atmosphere directly contributes to climate change.

### The science that backs the carbon tax

When fossil fuels such as coal, petroleum and natural gas are combusted in order to generate power for various machinery and uses, carbon dioxide is emitted into the atmosphere. This contributes to the greenhouse effect of the Earth’s atmosphere, where heat gets trapped inside the atmosphere, absorbed by its immense amount of ocean surface, heating up the average temperature of the planet.

Through many scientific models of climate, it is unanimously understood that a shift in average temperature will have devastating effects on the livability of the planet, including its oceans and land.
The Earth’s climate has had several drastic shifts before, and carbon in the atmosphere does not even comprise a majority of the greenhouse gases. So why do we care about carbon emissions? The reason is that the rapid accumulation of carbon in the atmosphere is due to human activity on an industrial scale.

The amount of carbon in the atmosphere has significantly increased since the beginning of the industrial revolution in the late 18th century. This rapid accumulation of carbon has contributed to the average temperature on Earth increasing by more than 1° Celsius since 1880. This shift in temperature is leading to devastating effects, as explained in Tipping Points: Climate Change and Society from the University of Exeter.

### Carbon taxes help tackle the scale of the issue

It is undeniable that human activity is what has contributed to global warming. There are individual things we can do to live more sustainably, such as opting for a plant-based diet or avoiding fast fashion, but because of the scale on which any solution must come about, it is up to governments to implement policies that will reduce the carbon in the atmosphere. This is where carbon taxes come in.

Carbon taxes are used as an incentive to reduce these emissions in a direct way. In order to pay less in carbon taxes, individuals and businesses will reduce their use of excessive carbon emitters, while renewable energy producers will be given a competitive edge. For a uniquely innovative way in which another country has combated the climate crisis, see the course Tackling the Climate Crisis: Innovation from Cuba.

## Carbon tax pros and cons

Although popular amongst many economists and governments, the carbon tax is not without its critics. There are many pros as well as cons to using a carbon tax as a way to reduce the carbon in the atmosphere. Here, we offer insight into some of the advantages as well as disadvantages of using a carbon tax to combat climate change.
### Pros of carbon taxes:

• Carbon taxes are a direct step in saving the planet. It is generally agreed upon by economists that carbon taxes are an effective and efficient way to lower carbon emissions, because they allow the market to determine the most efficient way to reduce emissions and give renewable energy sources a more competitive edge.
• Carbon taxes support innovation that helps the environment. These taxes bolster renewable energy industries by giving facilities and individuals who use energy incentives to opt for renewable sources, offering them a bigger market share of buyers.
• Putting a price on carbon is something that can be implemented fairly quickly without overhauling an entire system. Taxes are something that individuals and industry are used to, and something that is perennially accounted for. Adding a carbon tax is not a strenuous reinvention of the system, which means it can be applied quickly and efficiently.
• Carbon taxes encourage both industry and individuals to reduce their carbon footprint in order to save money in addition to saving the planet.
• The efficacy of carbon taxes is backed by evidence. Most countries that implemented carbon taxes before Canada have done so successfully, with a reduction in their carbon output. The one exception is Australia, whose then centre-right government repealed its carbon tax in 2014, two years after its introduction.

### Cons of carbon taxes:

• Implemented on their own, carbon taxes can be harmful to lower-income families. These households tend to use a higher percentage of their income on high-emission activities, such as heating homes and transportation, than those with higher income. This makes a carbon tax a regressive tax.
• Money might not be a big enough incentive for those who have money. It has been proposed that those with more money can afford to pay more and thus may not effectively reduce their burning of fossil fuels.
One would hope that the effects of burning these fuels on the environment would be enough to deter certain practices, but there is no guarantee that understanding of the climate crisis is universal.
• Carbon taxes have been criticised as a scheme that continues to operate under a for-profit system. This system is seen by many as the underlying cause of climate change, whereby the system itself must be fixed. Carbon taxes do not alter the system but rather operate within it, which leaves some sceptical of the long-term change that they may effect.

As an example of the tax being harmful to low-income families: if an individual’s annual income is 35,000 dollars and they spend 3,000 dollars commuting to and from work plus an additional 4,000 dollars on their home, then 20%, or a fifth, of their income is used on activities that contribute to carbon emissions. Contrast this with someone whose income is in the six-figure range, and there is a stark difference in disposable income after the fact.

But governments are well aware of this. To counter the regressive property of carbon taxes, these taxes are usually implemented along with other measures to ensure that those with lower income are not hit the hardest. These measures may be either a redistribution of the revenue from carbon taxes or carbon tax rebates. For the Canadian carbon tax, the federal government offers a rebate to individuals in order to offset increases in bills. If you’re in Ontario, Alberta, Saskatchewan or Manitoba, you may be eligible for a carbon tax rebate from the federal government.

### Striving towards Canada’s climate goals

Regardless of its criticisms, the ability to implement this tax without systemic overhaul and reorganisation means carbon taxes continue to be an attractive measure for realising Canada’s time-limited climate goals. The hope is to eventually leave behind non-renewable energy production completely.
In the meantime, as clean energy solutions are innovated and help us achieve clean growth and clean cities, the carbon tax in Canada will aid us in striving towards our climate objectives. This is one of several ways taxes serve to better the societies we live in. Find out more about the different ways taxes serve public policy objectives with our open step with Professor Tony Allen of SOAS University of London.

## A brief history of Canada’s carbon tax

Carbon taxes have existed in Canada since 2007, when the province of Quebec first implemented a carbon tax on the energy sector. But while other provinces had followed suit, there remain stragglers when it comes to enforcing policy to reduce carbon emissions.

As is the case with governments that have a turnover rate of every few years, policies are constantly rebuked and reinstated. For example, while a cap-and-trade system was implemented by Ontario premier Kathleen Wynne in 2016, this was overturned by Conservative premier Doug Ford after he was voted in.

To eradicate this type of flip-flopping of policy over something as crucial as the fight against climate change, and to guarantee that every province will continue to do its part, the Canadian parliament passed the Greenhouse Gas Pollution Pricing Act (GHGPPA) in 2018, colloquially known as the carbon tax. This was then implemented by the Trudeau government in 2019.

### Controversies over Canada’s carbon tax

But it wasn’t simply smooth sailing from there. The governments of Ontario, Alberta and Saskatchewan, along with a few other political bodies, disputed the tax. They filed separate lawsuits that challenged the federal government’s authority on carbon pricing. Their arguments were all variations on the claim that this amount of federal jurisdiction over provincial matters was unconstitutional, and that a carbon tax should be enforced at the discretion of each province because natural resources are a provincial issue.
On March 25th, 2021, the Canadian Supreme Court determined in a 6-3 ruling that the Greenhouse Gas Pollution Pricing Act was constitutional. This was based on the Peace, Order, and Good Government of Canada clause (POGG), and it overruled the provinces opposing the act. This win for the battle against climate change means that provinces that didn’t previously meet the federal standard must abide by the carbon tax imposed by Trudeau’s government.

With Canada’s carbon tax confirmed as constitutional by the Supreme Court in March 2021, there is hope that Canada will be able to reach its goal of reducing carbon emissions by 30% below 2005 levels by 2030, as per its commitment to the Paris Agreement.

## How does Canada’s carbon tax work?

Industry has a different pricing structure than individuals. The two-part system imposes a pollution price on fuel, known as the fuel charge, and a pollution price for industry, known as the Output-Based Pricing System (OBPS).

The carbon tax on fuel set a minimum price of 20 dollars per tonne of CO2 in 2019, rising by 10 dollars every year to 50 dollars in 2022, after which it will increase by 15 dollars every year until it reaches 170 dollars in 2030. As of April 2021, the carbon tax per tonne of CO2 is 40 dollars.

Each province and territory can set its own taxes that either meet or exceed this minimum. Exemptions include provinces that instead have a cap-and-trade system, which achieves a similar result. Currently, the four provinces that have not set their own carbon taxes or cap-and-trade system are Ontario, Alberta, Saskatchewan and Manitoba, which means the federal government imposes the carbon tax system in their stead.

### Canada’s Output-Based Pricing System (OBPS)

Facilities that emit 50,000 tonnes of carbon or more per year will be required to pay for their emissions. The amount they’ll pay depends on industry standards and the amount of carbon emitted by facilities that output similar products.
For example, a facility working within the aerospace industry would be charged a different amount per tonne of carbon than one used for agriculture, based on the average output of their peers. This is set in hopes of keeping Canadian industries competitive with their international colleagues.

### How are different types of greenhouse gas emissions calculated?

Although it’s often referred to as a carbon tax, the Greenhouse Gas Pollution Pricing Act, as it’s officially called, applies to all greenhouse gases that contribute to global warming. But not all greenhouse gases are created equal. In order to determine how much each gas will cost when emitted, each gas is set against a standard framework that considers how much the substance increases the greenhouse effect that causes global warming. You can calculate greenhouse gas equivalencies using a calculator available on the Government of Canada website.

The carbon rebate, officially known as the Climate Action Incentive (CAI), ensures that the lower-income members of society are not the hardest hit by the carbon tax. It applies to residents of Ontario, Alberta, Saskatchewan and Manitoba, all of which are provinces that do not have their own carbon reduction policy. This rebate varies depending on the province you’re a resident of, because each province emits a different amount of carbon, and can be claimed per family.

The following chart breaks down the amount you can claim back, depending on which province you’re a resident of and what your family situation is.

| Province | Basic amount | Spouse or common-law partner amount | Qualified dependant amount | Single parent’s qualified dependant amount |
| --- | --- | --- | --- | --- |
| Alberta | $490 | $245 | $123 | $245 |
| Saskatchewan | $500 | $250 | $125 | $250 |
| Manitoba | $360 | $180 | $90 | $180 |
| Ontario | $300 | $150 | $75 | $150 |

For those in small or rural communities, there is also an additional supplement that is not included in the above chart. You can find out more about the Climate Action Incentive here.
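The fuel-charge schedule described earlier (20 dollars per tonne in 2019, rising by 10 dollars a year to 50 dollars in 2022, then by 15 dollars a year to 170 dollars in 2030) can be written as a small function. This is a sketch for illustration only, not an official calculator:

```python
def carbon_price(year):
    """Federal minimum carbon price (CAD per tonne of CO2) for a given year."""
    if 2019 <= year <= 2022:
        return 20 + 10 * (year - 2019)   # $20 in 2019, rising $10/year
    if 2023 <= year <= 2030:
        return 50 + 15 * (year - 2022)   # rising $15/year after 2022
    raise ValueError("schedule is defined for 2019-2030 only")

print(carbon_price(2021), carbon_price(2022), carbon_price(2030))  # 40 50 170
```

The 2021 value matches the 40-dollars-per-tonne figure quoted in the article.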
The cap-and-trade policy, which currently applies to Quebec and Nova Scotia, implements a cap on the amount of emissions allowed each year. Companies can then purchase emissions credits quarterly within that amount. If they would like to exceed that amount, they can buy credits from companies that have credit to spare because they emit less. Rather than raising the tax each year as with the carbon tax, here the government lowers the cap each year.

With Trudeau’s commitment to the Paris Agreement, an international treaty on climate change, Canada’s lofty goal of lowering carbon emissions by at least 30% below its 2005 levels isn’t a task taken on lightly. But with global warming well on its way and more natural disasters looming, urgent climate action is required.

Canada’s carbon tax will increase each year until it reaches 170 dollars per tonne in 2030, and this will continue to be applied to any province that does not have its own alternative that meets the standards set in order to reach the national goal. There is the risk of the carbon tax being repealed if another government with conflicting interests is voted in, as has occurred in Australia, but it is in the interest of Canadians and all inhabitants of the planet that measures continue to be taken in order to curb climate change.

## Final thoughts

The carbon tax is an important step in the fight against climate change, but it shouldn’t and won’t exist in a vacuum. It will operate alongside tax rebates and continued innovation in clean energy so that we can transition into a biobased economy.

Eventually, if Canada and the rest of the world continue to work towards reducing carbon emissions from burning fuel, the desired outcome is that the amount of carbon in the atmosphere will be cut down to pre-industrial levels. Along with building sustainable cities and innovating new technologies for the sake of saving the environment, there is hope that a long, sustainable future may be ahead of us.
2021-10-23 15:23:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22909224033355713, "perplexity": 2129.4495807411026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585696.21/warc/CC-MAIN-20211023130922-20211023160922-00014.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-sqrt-16-sqrt-4-sqrt-2
# How do you simplify sqrt (16) / (sqrt (4) + sqrt (2))?

It is

$\frac{\sqrt{16}}{\sqrt{4} + \sqrt{2}} = \frac{4}{\sqrt{2} \left(\sqrt{2} + 1\right)} = \frac{2 \sqrt{2}}{\sqrt{2} + 1} = \frac{2 \sqrt{2} \left(\sqrt{2} - 1\right)}{\left(\sqrt{2} + 1\right) \left(\sqrt{2} - 1\right)} = 2 \sqrt{2} \left(\sqrt{2} - 1\right)$

Jun 26, 2016

$4 - 2 \sqrt{2}$

#### Explanation:

Try to rationalize the denominator. Multiply numerator and denominator by $\left(\sqrt{4} - \sqrt{2}\right)$:

$\sqrt{16} \cdot \frac{\sqrt{4} - \sqrt{2}}{\left(\sqrt{4} + \sqrt{2}\right) \cdot \left(\sqrt{4} - \sqrt{2}\right)}$

$4 \cdot \frac{2 - \sqrt{2}}{4 - 2}$

$4 \cdot \frac{2 - \sqrt{2}}{2}$

$2 \cdot \left(2 - \sqrt{2}\right)$

$4 - 2 \sqrt{2}$

Multiply through by $\frac{2 - \sqrt{2}}{2 - \sqrt{2}}$ and work through to get $4 - 2 \sqrt{2} = 2 \left(2 - \sqrt{2}\right)$

#### Explanation:

$\frac{\sqrt{16}}{\sqrt{4} + \sqrt{2}}$

Let's first take the square roots of the perfect squares:

$\frac{4}{2 + \sqrt{2}}$

In order to simplify, we need the square root out from the denominator. The way to do this is to ensure that when we FOIL (the process of multiplying two quantities within brackets), we don't end up with more square roots. To do that, we'll multiply by $\left(2 - \sqrt{2}\right)$, which will eliminate that possibility (like this):

$\frac{4}{2 + \sqrt{2}} \cdot \left(\frac{2 - \sqrt{2}}{2 - \sqrt{2}}\right)$

$\frac{4 \cdot 2 - 4 \sqrt{2}}{2 \cdot 2 - 2 \sqrt{2} + 2 \sqrt{2} - \sqrt{2} \sqrt{2}}$

$\frac{8 - 4 \sqrt{2}}{4 - 2} = \frac{8 - 4 \sqrt{2}}{2} = 4 - 2 \sqrt{2} = 2 \left(2 - \sqrt{2}\right)$
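As a quick numeric sanity check of the simplification (a sketch using Python's math module):

```python
from math import isclose, sqrt

original = sqrt(16) / (sqrt(4) + sqrt(2))
simplified = 4 - 2 * sqrt(2)

# Both evaluate to about 1.17157, confirming the algebra above.
assert isclose(original, simplified)
print(original)
```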
2019-11-13 21:53:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9725041389465332, "perplexity": 1551.4316385425193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667442.36/warc/CC-MAIN-20191113215021-20191114003021-00236.warc.gz"}
https://mnt.io/tag/hoa/
# sabre/katana

## What is it?

sabre/katana is a contact, calendar, task list and file server. What does that mean? Suppose that nowadays you have multiple devices (PC, phones, tablets, TVs…). If you would like to get your address books, calendars, task lists and files synced between all these devices from everywhere, you need a server. All your devices are then considered as clients.

But there is an issue with the server. Most of the time, you might choose Google or maybe Apple, but one may wonder: can we trust these servers? Can we give them our private data, like all our contacts, our calendars, all our photos…? What if you are a company or an association and you have sensitive data that are really private or strategic? So, can you still trust them? Where are the data stored? Who can look at these data? More and more, there is a huge need for “personal” servers.

Moreover, servers like Google or Apple are often closed: you reach your data with specific clients, and they are not available on all platforms. This is for strategic reasons of course. But with sabre/katana, you are not limited. See the above schema: Firefox OS can talk to iOS or Android at the same time.

sabre/katana is this kind of server. You can install it on your machine and manage users in a minute. Each user will have a collection of address books, calendars, task lists and files.
This server can talk to a loong list of devices, mainly thanks to a scrupulous respect of industry standards:

• Mac OS X:
  • OS X 10.10 (Yosemite),
  • OS X 10.9 (Mavericks),
  • OS X 10.8 (Mountain Lion),
  • OS X 10.7 (Lion),
  • OS X 10.6 (Snow Leopard),
  • OS X 10.5 (Leopard),
  • BusyCal,
  • BusyContacts,
  • Fantastical,
  • Rainlendar,
  • ReminderFox,
  • SoHo Organizer,
  • Spotlife,
  • Thunderbird,
• Windows:
  • eM Client,
  • Microsoft Outlook 2013,
  • Microsoft Outlook 2010,
  • Microsoft Outlook 2007,
  • Microsoft Outlook with Bynari WebDAV Collaborator,
  • Microsoft Outlook with iCal4OL,
  • Rainlendar,
  • ReminderFox,
  • Thunderbird,
• Linux:
  • Evolution,
  • Rainlendar,
  • ReminderFox,
  • Thunderbird,
• Mobile:
  • Android,
  • BlackBerry 10,
  • BlackBerry PlayBook,
  • Firefox OS,
  • iOS 8,
  • iOS 7,
  • iOS 6,
  • iOS 5,
  • iOS 4,
  • iOS 3,
  • Nokia N9,
  • Sailfish.

Did you find your device in this list? Probably yes 😉. sabre/katana sits in the middle of all your devices and syncs all your data. Of course, it is free and open source. Go check the source!

## List of features

Here is a non-exhaustive list of features supported by sabre/katana. Depending on whether you are a user or a developer, the features that might interest you are radically different. I decided to show you a list from the user's point of view. If you would like a list from the developer's point of view, please see this exhaustive list of supported RFCs for more details.

### Contacts

All usual fields are supported, like phone numbers, email addresses, URLs, birthday, ringtone, texttone, related names, postal addresses, notes, HD photos etc. Of course, groups of cards are also supported.

My photo is not in HD, I really have to update it!

Cards can be encoded into several formats. The most usual format is VCF. sabre/katana allows you to download the whole address book of a user as a single VCF file. You can also create, update and delete address books.

### Calendars

A calendar is just a set of events.
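For illustration, here is what a minimal card in the VCF (vCard) format looks like — the data are hypothetical and not taken from sabre/katana itself:

```
BEGIN:VCARD
VERSION:3.0
FN:Ada Lovelace
N:Lovelace;Ada;;;
EMAIL;TYPE=INTERNET:ada@example.org
TEL;TYPE=CELL:+1-555-0100
END:VCARD
```

A whole address book is simply a sequence of such cards in one file.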
Each event has several properties, such as a title, a location, a date start, a date end, some notes, URLs, alarms etc. sabre/katana also supports recurring events (“each last Monday of the month, at 11am…”), in addition to scheduling (see below).

A few words about calendar scheduling. Let’s say you are organizing an event, like New release (we always enjoy release day!). You would like to invite several people but you don’t know if they could be present or not. In your event, all you have to do is to add attendees. How are they going to be notified about this event? Two situations:

1. Either attendees are registered on your sabre/katana server and they will receive an invite inside their calendar application (we call this iTIP),
2. Or they are not registered on your server and they will receive an email with the event as an attached file (we call this iMIP). All they have to do is to open this event in their calendar application. Notice the gorgeous map embedded inside the email!

Once they have received the event, they can accept, decline or tentatively accept (“will try to be present at”) the event. Of course, attendees will be notified too if the event has been moved, canceled, refreshed etc.

Calendars can be encoded into several formats. The most usual format is ICS. sabre/katana allows you to download the whole calendar of a user as a single ICS file. You can also create, update and delete calendars.

### Task lists

A task list is exactly like a calendar (from a programming point of view). Instead of containing event objects, it contains todo objects. sabre/katana supports groups of tasks, reminders, progression etc.

Just like calendars, task lists can be encoded into several formats, including ICS. sabre/katana allows you to download the whole task list of a user as a single ICS file. You can also create, update and delete task lists.
### Files

Finally, sabre/katana creates a home collection per user: A personal directory that can contain files and directories and… be synced between all your devices (as usual 😄).

sabre/katana also creates a special directory called public/ which is a public directory. Every file and directory stored inside this directory is accessible to anyone that has the correct link. No listing is prompted to protect your public data.

Just like contact, calendar and task list applications, you need a client application to connect to your home collection on sabre/katana. Then, your public directory on sabre/katana will be a regular directory as every other.

sabre/katana is able to store any kind of files. Yes, any kind. It’s just files. However, it white-lists the kinds of files that can be shown in the browser. Only images, audio, videos, texts, PDF and some vendor formats (like Microsoft Office) are considered as safe (for the server). This way, associations can share music, videos or images, companies can share PDF or Microsoft Word documents etc. Maybe in the future sabre/katana might white-list more formats. If a format is not white-listed, the file will be forced to download.

## How is sabre/katana built?

sabre/katana is based on two big and solid projects:

sabre/dav is one of the most powerful CardDAV, CalDAV and WebDAV frameworks on the planet. Trusted by the likes of Atmail, Box, fruux and ownCloud, it powers millions of users world-wide! It is written in PHP and is open source.

Hoa is a modular, extensible and structured set of PHP libraries. Fun fact: Also open source, this project is also trusted by ownCloud, in addition to Mozilla, joliCode etc. Recently, this project has recorded more than 600,000 downloads and the community is about to reach 1000 people.

sabre/katana is then a program based on sabre/dav for the DAV part and Hoa for everything else, like the logic code inside the sabre/dav plugins.
The result is a ready-to-use server with a nice interface for the administration. To ensure code quality, we use atoum, a popular and modern test framework for PHP. So far, sabre/katana has more than 1000 assertions.

## Conclusion

sabre/katana is a server for contacts, calendars, task lists and files. Everything is synced, all the time and everywhere. It perfectly connects to a lot of devices on the market. Several features we need and use daily have been presented. This is the easiest and a secure way to host your own private data.

# Control the terminal, the right way

Nowadays, there are plenty of terminal emulators in the wild. Each one has a specific way to handle controls. How many colours does it support? How to control the style of a character? How to control more than style, like the cursor or the window? In this article, we are going to explain and show in action the right ways to control your terminal with a portable and easy to maintain API. We are going to talk about stat, tput, terminfo, Hoa\Console… but do not be afraid, it’s easy and fun!

## Introduction

Terminals. They are the ancient interfaces, still not old-fashioned yet. They are fast, efficient, work remotely with a low bandwidth, secure and very simple to use.

A terminal is a canvas composed of columns and lines. Only one character fits at a position. According to the terminal, some features are enabled; for instance, a character might be stylized with a colour, a decoration, a weight etc. Let’s consider the colours. A colour belongs to a palette, which contains either 2, 8, 256 or more colours. One may wonder:

* How many colours does a terminal support?
* How to control the style of a character?
* How to control more than style, like the cursor or the window?

Well, this article is going to explain how a terminal works and how we interact with it.
We are going to talk about terminal capabilities, terminal information (stored in databases) and Hoa\Console, a PHP library that provides advanced terminal controls.

## The basis of a terminal

A terminal, or a console, is an interface that allows to interact with the computer. This interface is textual. Like a graphical interface, there are inputs: The keyboard and the mouse, and outputs: The screen or a file (a real file, a socket, a FIFO, something else…).

There is a ton of terminals. Whatever the terminal you use, inputs are handled by programs (or processes) and outputs are produced by them. We said outputs can be the screen or a file. Actually, everything is a file, so the screen is also a file. However, the user is able to use redirections to choose where the outputs must go.

Let’s consider the echo program that prints all its options/arguments on its output. Thus, in the following example, foobar is printed on the screen:

```
$ echo 'foobar'
```

And in the following example, foobar is redirected to a file called log:

```
$ echo 'foobar' > log
```

We are also able to redirect the output to another program, like wc that counts stuff:

```
$ echo 'foobar' | wc -c
7
```

Now we know there are 7 characters in foobar… no! echo automatically adds a new-line (\n) after each line; so:

```
$ echo -n 'foobar' | wc -c
6
```

This is more correct!

## Detecting type of pipes

Inputs and outputs are called pipes. Yes, trivial, this is nothing more than basic pipes! There are 3 standard pipes:

* STDIN, standing for the standard input pipe,
* STDOUT, standing for the standard output pipe and
* STDERR, standing for the standard error pipe (also an output one).

If the output is attached to the screen, we say this is a “direct output”. Why is it important? Because if we stylize a text, this is only for the screen, not for a file. A file should receive regular text, not all the decorations and styles.
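As a side note, recent versions of PHP also ship a one-call check for this: stream_isatty (available since PHP 7.2). It only tells you whether the stream is attached to a terminal, so it is less fine-grained than the fstat-based detection this article describes, but it is handy as a first test:

```php
<?php
// stream_isatty (PHP ≥ 7.2) reports whether a stream is attached to a
// terminal, i.e. whether we have a "direct output".
if (stream_isatty(STDOUT)) {
    echo 'direct output: styles are welcome', "\n";
} else {
    echo 'pipe or redirection: plain text only', "\n";
}
```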
Hopefully, the Hoa\Console\Console class provides the isDirect, isPipe and isRedirection static methods to know whether the pipe is respectively direct, a pipe or a redirection (damn naming…!). Thus, let Type.php be the following program:

```php
echo 'is direct: ';
var_dump(Hoa\Console\Console::isDirect(STDOUT));

echo 'is pipe: ';
var_dump(Hoa\Console\Console::isPipe(STDOUT));

echo 'is redirection: ';
var_dump(Hoa\Console\Console::isRedirection(STDOUT));
```

Now, let’s test our program:

```
$ php Type.php
is direct: bool(true)
is pipe: bool(false)
is redirection: bool(false)

$ php Type.php | xargs -I@ echo @
is direct: bool(false)
is pipe: bool(true)
is redirection: bool(false)

$ php Type.php > /tmp/foo; cat !!$
is direct: bool(false)
is pipe: bool(false)
is redirection: bool(true)
```

The first execution is very classic. STDOUT, the standard output, is direct. The second execution redirects the output to another program, then STDOUT is of kind pipe. Finally, the last execution redirects the output to a file called /tmp/foo, so STDOUT is a redirection.

How does it work? We use fstat to read the mode of the file. The underlying fstat implementation is defined in C, so let’s take a look at the documentation of fstat(2).
stat is a C structure that looks like:

```c
struct stat {
    dev_t           st_dev;       /* device inode resides on */
    ino_t           st_ino;       /* inode's number */
    mode_t          st_mode;      /* inode protection mode */
    uid_t           st_uid;       /* user-id of owner */
    gid_t           st_gid;       /* group-id of owner */
    dev_t           st_rdev;      /* device type, for special file inode */
    struct timespec st_atimespec; /* time of last access */
    struct timespec st_mtimespec; /* time of last data modification */
    struct timespec st_ctimespec; /* time of last file status change */
    off_t           st_size;      /* file size, in bytes */
    quad_t          st_blocks;    /* blocks allocated for file */
    u_long          st_blksize;   /* optimal file sys I/O ops blocksize */
    u_long          st_flags;     /* user defined flags for file */
    u_long          st_gen;       /* file generation number */
};
```

The value of mode returned by the PHP fstat function is equal to st_mode in this structure. And st_mode has the following bits:

```c
#define S_IFMT   0170000 /* type of file mask */
#define S_IFIFO  0010000 /* named pipe (fifo) */
#define S_IFCHR  0020000 /* character special */
#define S_IFDIR  0040000 /* directory */
#define S_IFBLK  0060000 /* block special */
#define S_IFREG  0100000 /* regular */
#define S_IFLNK  0120000 /* symbolic link */
#define S_IFSOCK 0140000 /* socket */
#define S_IFWHT  0160000 /* whiteout */
#define S_ISUID  0004000 /* set user id on execution */
#define S_ISGID  0002000 /* set group id on execution */
#define S_ISVTX  0001000 /* save swapped text even after use */
#define S_IRWXU  0000700 /* RWX mask for owner */
#define S_IRUSR  0000400 /* read permission, owner */
#define S_IWUSR  0000200 /* write permission, owner */
#define S_IXUSR  0000100 /* execute/search permission, owner */
#define S_IRWXG  0000070 /* RWX mask for group */
#define S_IRGRP  0000040 /* read permission, group */
#define S_IWGRP  0000020 /* write permission, group */
#define S_IXGRP  0000010 /* execute/search permission, group */
#define S_IRWXO  0000007 /* RWX mask for other */
#define S_IROTH  0000004 /* read permission, other */
#define S_IWOTH  0000002 /* write permission, other */
#define S_IXOTH  0000001 /* execute/search permission, other */
```

Awesome, we have everything we need! We mask mode with S_IFMT to get the file data. Then we just have to check whether it is a named pipe S_IFIFO, a character special S_IFCHR etc. Concretely:

* isDirect checks that the mode is equal to S_IFCHR, it means it is attached to the screen (in our case),
* isPipe checks that the mode is equal to S_IFIFO: This is a special file that behaves like a FIFO stack (see the documentation of mkfifo(1)), everything which is written is directly read just after and the reading order is defined by the writing order (first-in, first-out!),
* isRedirection checks that the mode is equal to S_IFREG, S_IFDIR, S_IFLNK, S_IFSOCK or S_IFBLK, in other words: All kinds of files on which we can apply a redirection. Why? Because the STDOUT (or another STD* pipe) of the current process is defined as a file pointer to the redirection destination and it can only be a file, a directory, a link, a socket or a block file.

I encourage you to read the implementation of the Hoa\Console\Console::getMode method.

So yes, this is useful to enable styles on text but also to define the default verbosity level. For instance, if a program outputs the result of a computation with some explanations around, the highest verbosity level would output everything (the result and the explanations) while the lowest level would output only the result. Let’s try with the toUpperCase.php program:

```php
$verbose = Hoa\Console\Console::isDirect(STDOUT);
$string  = $argv[1];
$result  = (new Hoa\String\String($string))->toUpperCase();

if (true === $verbose)
    echo $string, ' becomes ', $result, ' in upper case!', "\n";
else
    echo $result, "\n";
```

Then, let’s execute this program:

```
$ php toUpperCase.php 'Hello world!'
Hello world! becomes HELLO WORLD! in upper case!
```

And now, let’s execute this program with a pipe:

```
$ php toUpperCase.php 'Hello world!' | xargs -I@ echo @
HELLO WORLD!
```
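To recap the mode detection in a standalone form, here is a minimal sketch using PHP's fstat and the octal masks from the C header above (an illustration only, not the actual Hoa\Console\Console implementation):

```php
<?php
// Minimal sketch of pipe-type detection with fstat. The masks come from
// the C header quoted above; this is an illustration, not Hoa's code.
const S_IFMT  = 0170000; // type of file mask
const S_IFIFO = 0010000; // named pipe (fifo)
const S_IFCHR = 0020000; // character special

function pipeType($stream): string
{
    // fstat() exposes st_mode under the 'mode' key; keep the file type bits.
    $mode = fstat($stream)['mode'] & S_IFMT;

    if (S_IFCHR === $mode) {
        return 'direct';      // attached to the screen
    }

    if (S_IFIFO === $mode) {
        return 'pipe';        // e.g. `php Type.php | xargs …`
    }

    return 'redirection';     // regular file, directory, link, socket…
}

echo pipeType(STDOUT), "\n";
```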
Useful and very simple, isn’t it?

## Terminal capabilities

We can control the terminal with the inputs, like the keyboard, but we can also control the outputs. How? With the text itself. Actually, an output does not contain only the text but it includes control functions. It’s like HTML: Around a text, you can have an element, specifying that the text is a link. It’s exactly the same for terminals! To specify that a text must be in red, we must add a control function around it.

Hopefully, these control functions have been standardized in the ECMA-48 document: Control Functions for Coded Character Sets. However, not all terminals implement all this standard, and for historical reasons, some terminals use slightly different control functions. Moreover, some information does not belong to this standard (because this is out of its scope), like: How many colours does the terminal support? or does the terminal support the meta key?

Consequently, each terminal has a list of capabilities. This list is split into 3 categories:

* boolean capabilities,
* number capabilities,
* string capabilities.

For instance:

* the “does the terminal support the meta key” is a boolean capability called meta_key where its value is true or false,
* the “number of colours supported by the terminal” is a… number capability called max_colors where its value can be 2, 8, 256 or more,
* the “clear screen control function” is a string capability called clear_screen where its value might be \e[H\e[2J,
* the “move the cursor one column to the right” is also a string capability called cursor_right where its value might be \e[C.

All the capabilities can be found in the documentation of terminfo(5) or in the documentation of xcurses. I encourage you to follow these links and see how rich the terminal capabilities are!

## Terminal information

Terminal capabilities are stored as information in databases. Where are these databases located? In files with a binary format.
Favorite locations are:

* /usr/share/terminfo,
* /usr/share/lib/terminfo,
* /lib/terminfo,
* /usr/lib/terminfo,
* /usr/local/share/terminfo,
* /usr/local/share/lib/terminfo,
* etc.
* or the TERMINFO or TERMINFO_DIRS environment variables.

Inside these directories, we have a tree of the form: xx/name, where xx is the ASCII value in hexadecimal of the first letter of the terminal name, or n/name where n is the first letter of the terminal name. The terminal name is stored in the TERM environment variable. For instance, on my computer:

```
$ echo $TERM
xterm-256color
$ file /usr/share/terminfo/78/xterm-256color
/usr/share/terminfo/78/xterm-256color: Compiled terminfo entry
```

We can use the Hoa\Console\Tput class to retrieve this information. The getTerminfo static method allows to get the path of the terminal information file. The getTerm static method allows to get the terminal name. Finally, the whole class allows to parse a terminal information database (it will use the file returned by getTerminfo by default). For instance:

```php
$tput = new Hoa\Console\Tput();
var_dump($tput->count('max_colors'));

/**
 * Will output:
 *     int(256)
 */
```

On my computer, with xterm-256color, I have 256 colours, as expected. If we parse the information of xterm and not xterm-256color, we will have:

```php
$tput = new Hoa\Console\Tput(Hoa\Console\Tput::getTerminfo('xterm'));
var_dump($tput->count('max_colors'));

/**
 * Will output:
 *     int(8)
 */
```

## The power in your hand: Control the cursor

Let’s summarize. We are able to parse and know all the terminal capabilities of a specific terminal (including the one of the current user). If we would like a powerful terminal API, we need to control the basis, like the cursor.

Remember. We said that the terminal is a canvas of columns and lines. The cursor is like a pen. We can move it and write something. We are going to (partly) see how the Hoa\Console\Cursor class works.

### I like to move it!
The moveTo static method allows to move the cursor to an absolute position. For example:

```php
Hoa\Console\Cursor::moveTo($x, $y);
```

The control function we use is cursor_address. So all we need to do is to use the Hoa\Console\Tput class and call the get method on it to get the value of this string capability. This is a parameterized one: On xterm-256color, its value is \e[%i%p1%d;%p2%dH. We replace the parameters by $x and $y and we output the result. That’s all! We are able to move the cursor to an absolute position on all terminals! This is the right way to do it.

We use the same strategy for the move static method that moves the cursor relatively to its current position. For example:

```php
Hoa\Console\Cursor::move('right up');
```

We split the steps and for each step we read the appropriate string capability using the Hoa\Console\Tput class. For right, we read the parm_right_cursor string capability, for up, we read parm_up_cursor etc. Note that parm_right_cursor is different from cursor_right: The first one is used to move the cursor a certain number of times while the second one is used to move the cursor only once. With performances in mind, we should use the first one if we have to move the cursor several times.

The getPosition static method returns the position of the cursor. This way to interact is a little bit different. We must write a control function on the output, and then, the terminal replies on the input. See the implementation by yourself.

```php
print_r(Hoa\Console\Cursor::getPosition());

/**
 * Will output:
 *     Array
 *     (
 *         [x] => 7
 *         [y] => 42
 *     )
 */
```

In the same way, we have the save and restore static methods that save the current position of the cursor and restore it. This is very useful. We use the save_cursor and restore_cursor string capabilities.

Also, the clear static method splits some parts to clear.
For each part (direction or way), we read from Hoa\Console\Tput the appropriate string capability: clear_screen to clear all the screen, clr_eol to clear everything on the right of the cursor, clr_eos to clear everything below the cursor etc. For example:

```php
Hoa\Console\Cursor::clear('left');
```

See what we learnt in action:

```php
echo 'Foobar', "\n",
     'Foobar', "\n",
     'Foobar', "\n",
     'Foobar', "\n",
     'Foobar', "\n";

Hoa\Console\Cursor::save();
sleep(1);
Hoa\Console\Cursor::move('LEFT');
sleep(1);
Hoa\Console\Cursor::move('↑');
sleep(1);
Hoa\Console\Cursor::move('↑');
sleep(1);
Hoa\Console\Cursor::move('↑');
sleep(1);
Hoa\Console\Cursor::clear('↔');
sleep(1);
echo 'Hahaha!';
sleep(1);
Hoa\Console\Cursor::restore();

echo "\n", 'Bye!', "\n";
```

The result is presented in the following figure. The resulting API is portable, clean, simple to read and very easy to maintain! This is the right way to do it.

### Colours and decorations

Now: Colours. This is mainly the reason why I decided to write this article. We see the same libraries, again and again, doing only colours in the terminal, but unfortunately not in the right way 😞.

A terminal has a palette of colours. Each colour is indexed by an integer, from 0 to potentially +∞. The size of the palette is described by the max_colors number capability. Usually, a palette contains 1, 2, 8, 256 or 16 million colours. So the first thing to do is to check whether we have more than 1 colour. If not, we must not colorize the given text. Next, if we have less than 256 colours, we have to convert the style into a palette containing 8 colours. Same with less than 16 million colours, we have to convert into 256 colours.

Moreover, we can define the style of the foreground or of the background with respectively the set_a_foreground and set_a_background string capabilities. Finally, in addition to colours, we can define other decorations like bold, underline, blink or even inverse the foreground and the background colours.
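The palette downgrade mentioned above (mapping a rich colour onto a poorer palette) can be sketched with a naive nearest-colour search. Here is a hypothetical helper using plain Euclidean distance in RGB; real implementations, including Hoa's, may weight channels differently or use smarter colour spaces:

```php
<?php
// Hypothetical helper: map an RGB colour to the nearest of the 8 basic
// ANSI colours by Euclidean distance. Illustration only.
function nearestAnsi8(int $r, int $g, int $b): int
{
    // The 8 basic ANSI colours: black, red, green, yellow,
    // blue, magenta, cyan, white.
    $palette = [
        [0, 0, 0], [255, 0, 0], [0, 255, 0], [255, 255, 0],
        [0, 0, 255], [255, 0, 255], [0, 255, 255], [255, 255, 255],
    ];

    $best         = 0;
    $bestDistance = PHP_INT_MAX;

    foreach ($palette as $index => [$pr, $pg, $pb]) {
        $distance = ($r - $pr) ** 2 + ($g - $pg) ** 2 + ($b - $pb) ** 2;

        if ($distance < $bestDistance) {
            $bestDistance = $distance;
            $best         = $index;
        }
    }

    return $best;
}

// #932e2e (a dark red) falls back to red (index 1) in an 8-colour palette.
echo nearestAnsi8(0x93, 0x2e, 0x2e), "\n";
```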
One thing to remember is: With this capability, we only define the style at a given “pixel” and it will apply on the following text. In this case, it is not exactly like HTML where we have a beginning and an end. Here we only have a beginning. Let’s try!

```php
Hoa\Console\Cursor::colorize(
    'underlined foreground(yellow) background(#932e2e)'
);
echo 'foo';
Hoa\Console\Cursor::colorize('!underlined background(normal)');
echo 'bar', "\n";
```

The API is pretty simple: We start to underline the text, we set the foreground to yellow and we set the background to #932e2e. Then we output something. We continue with cancelling the underline decoration in addition to resetting the background. Finally we output something else.

Here is the result: What do we observe? My terminal does not support more than 256 colours. Thus, #932e2e is automatically converted into the closest colour in my actual palette! This is the right way to do it.

For fun, you can change the colours in the palette with the Hoa\Console\Cursor::changeColor static method. You can also change the style of the cursor, like ▋, _ or |.

A more complete usage of Hoa\Console\Cursor and even Hoa\Console\Window is the Hoa\Console\Readline class that is a powerful readline. More than autocompleters, history, key bindings etc., it has an advanced use of cursors. See this in action: We use Hoa\Console\Cursor to move the cursor or change the colours and Hoa\Console\Window to get the dimensions of the window, scroll some text in it etc. I encourage you to read the implementation.

## The power in your hand: Sound 🎵

Yes, even sound is defined by terminal capabilities. The famous bip is given by the bell string capability. You would like to make a bip? Easy:

```php
$tput = new Hoa\Console\Tput();
echo $tput->get('bell');
```

That’s it!

## Bonus: Window

As a bonus, a quick demo of Hoa\Console\Window because it’s fun.
The video shows the execution of the following code:

```php
Hoa\Console\Window::setSize(80, 35);
var_dump(Hoa\Console\Window::getPosition());

foreach ([[100, 100], [150, 150], [200, 100],
          [200,  80], [200,  60], [200, 100]]
         as list($x, $y)) {
    sleep(1);
    Hoa\Console\Window::moveTo($x, $y);
}

sleep(2);
Hoa\Console\Window::minimize();
sleep(2);
Hoa\Console\Window::restore();
sleep(2);
Hoa\Console\Window::lower();
sleep(2);
Hoa\Console\Window::raise();
```

We resize the window, we get its position, we move the window on the screen, we minimize and restore it, and finally we put it behind all other windows just before raising it.

## Conclusion

In this article, we saw how to control the terminal by: Firstly, detecting the type of pipes, and secondly, reading and using the terminal capabilities. We know where these capabilities are stored and we saw a few of them in action.

This approach ensures your code will be portable, easy to maintain and easy to use. The portability is very important because, like browsers and user devices, we have a lot of terminal emulators released in the wild. We have to care about them.

I encourage you to take a look at the Hoa\Console library and to contribute to make it even more awesome 😄.

# Generate strings based on regular expressions

During my PhD thesis, I have partly worked on the problem of automatic accurate test data generation. In order to be complete and self-contained, I have addressed all kinds of data types, including strings. This article is the first one of a little series that aims at showing how to generate accurate and relevant strings under several constraints.

## What is a regular expression?

We are talking about formal language theory here. In the known world, there are four kinds of languages. More formally, in 1956, the Chomsky hierarchy has been formulated, classifying grammars (which define languages) in four levels:

1. unrestricted grammars, matching languages known as Turing languages, no restriction,
2. context-sensitive grammars, matching contextual languages,
3. context-free grammars, matching algebraic languages, based on stack automata,
4. regular grammars, matching regular languages.

Each level includes the next level. The last level is the “weakest”, which must not sound negative here. Regular expressions are used often because of their simplicity and also because they solve most problems we encounter daily.

A regular expression is a small language with very few operators and, most of the time, a simple semantics. For instance ab(c|d) means: a word (a data) starting with ab and followed by c or d. We also have quantification operators (also known as repetition operators), such as ?, * and +. We also have {x,y} to define a repetition between x and y. Thus, ? is equivalent to {0,1}, * to {0,} and + to {1,}. When y is missing, it means +∞, so unbounded (or more exactly, bounded by the limits of the machine).

So, for instance ab(c|d){2,4}e? means: a word starting with ab, followed 2, 3 or 4 times by c or d (so cc, cd, dc, ccc, ccd, cdc and so on) and potentially followed by e. The goal here is not to teach you regular expressions but this is kind of a tiny reminder. There are plenty of regular languages. You might know POSIX regular expressions or Perl Compatible Regular Expressions (PCRE). Forget the first one, please. The syntax and the semantics are too limited. PCRE is the regular language I recommend all the time.

Behind every formal language there is a graph. A regular expression is compiled into a Finite State Machine (FSM). I am not going to draw and explain them, but it is interesting to know that behind a regular expression there is a basic automaton. No magic.

### Why focus on regular expressions?

This article focuses on regular languages instead of other kinds of languages because we use them very often (even daily). I am going to address context-free languages in another article, be patient young padawan.
The needs and constraints with other kinds of languages are not the same and more complex algorithms must be involved. So we are going easy for the first step.

## Understanding PCRE: lex and parse them

The Hoa\Compiler library provides both LL(1) and LL(k) compiler-compilers. The documentation describes how to use it. We discover that the LL(k) compiler comes with a grammar description language called PP. What does it mean? It means for instance that the grammar of PCRE can be written with the PP language and that Hoa\Compiler\Llk will transform this grammar into a compiler. That’s why we call them “compilers of compilers”.

Fortunately, the Hoa\Regex library provides the grammar of the PCRE language in the hoa://Library/Regex/Grammar.pp file. Consequently, we are able to analyze regular expressions written in the PCRE language! Let’s try in a shell at first with the hoa compiler:pp tool:

```
$ echo 'ab(c|d){2,4}e?' | hoa compiler:pp hoa://Library/Regex/Grammar.pp 0 --visitor dump
>  #expression
>  >  #concatenation
>  >  >  token(literal, a)
>  >  >  token(literal, b)
>  >  >  #quantification
>  >  >  >  #alternation
>  >  >  >  >  token(literal, c)
>  >  >  >  >  token(literal, d)
>  >  >  >  token(n_to_m, {2,4})
>  >  >  #quantification
>  >  >  >  token(literal, e)
>  >  >  >  token(zero_or_one, ?)
```

We read that the whole expression is composed of a single concatenation of two tokens: a and b, followed by a quantification, followed by another quantification. The first quantification is an alternation of (a choice between) two tokens: c and d, between 2 and 4 times. The second quantification is the e token that can appear zero or one time. Pretty simple.

The final output of the Hoa\Compiler\Llk\Parser class is an Abstract Syntax Tree (AST). The documentation of Hoa\Compiler explains all that stuff, you should read it. The LL(k) compiler is cut out into very distinct layers in order to improve hackability.
Again, the documentation teaches us we have four levels in the compilation process: lexical analyzer, syntactic analyzer, trace and AST. The lexical analyzer (also known as lexer) transforms the textual data being analyzed into a sequence of tokens (formally known as lexemes). It checks whether the data is composed of the right pieces. Then, the syntactic analyzer (also known as parser) checks that the order of tokens in this sequence is correct (formally we say that it derives the sequence, see the Matching words section to learn more).

Still in the shell, we can get the result of the lexical analyzer by using the --token-sequence option; thus:

```
$ echo 'ab(c|d){2,4}e?' | hoa compiler:pp hoa://Library/Regex/Grammar.pp 0 --token-sequence
#   …  token name    token value   offset
-----------------------------------------
0   …  literal       a             0
1   …  literal       b             1
2   …  capturing_    (             2
3   …  literal       c             3
4   …  alternation   |             4
5   …  literal       d             5
6   …  _capturing    )             6
7   …  n_to_m        {2,4}         7
8   …  literal       e             12
9   …  zero_or_one   ?             13
10  …  EOF                         15
```

This is the sequence of tokens produced by the lexical analyzer. The tree is not yet built because this is the first step of the compilation process. However it is always interesting to understand these different steps and see how they work. Now we are able to analyze any regular expression in the PCRE format! The result of this analysis is a tree. You know what is fun with trees? Visiting them.

## Visiting the AST

Unsurprisingly, each node of the AST can be visited thanks to the Hoa\Visitor library. Here is an example with the “dump” visitor:

```php
use Hoa\Compiler;
use Hoa\File;

// 1. Load the grammar.
$compiler = Compiler\Llk\Llk::load(
    new File\Read('hoa://Library/Regex/Grammar.pp')
);

// 2. Parse a data.
$ast = $compiler->parse('ab(c|d){2,4}e?');

// 3. Dump the AST.
$dump = new Compiler\Visitor\Dump();
echo $dump->visit($ast);
```

This program will print the same AST dump we have previously seen in the shell.

How to write our own visitor? A visitor is a class with a single visit method.
Let’s try a visitor that pretty-prints a regular expression, i.e. transforms ab(c|d){2,4}e? into:

```
a
b
(
    c
    |
    d
){2,4}
e?
```

Why a pretty printer? First, it shows how to visit a tree. Second, it shows the structure of the visitor: we filter by node ID (#expression, #quantification, token etc.) and we apply respective computations. A pretty printer is often a good way to become familiar with the structure of an AST. Here is the class. It catches only useful constructions for the given example:

```php
use Hoa\Visitor;

class PrettyPrinter implements Visitor\Visit
{
    public function visit (
        Visitor\Element $element,
        &$handle = null,
        $eldnah  = null
    ) {
        static $_indent = 0;

        $out    = null;
        $nodeId = $element->getId();

        switch ($nodeId) {

            // Reset indentation and…
            case '#expression':
                $_indent = 0;

            // … visit all the children.
            case '#quantification':
                foreach ($element->getChildren() as $child)
                    $out .= $child->accept($this, $handle, $eldnah);

                break;

            // One new line between each child of the concatenation.
            case '#concatenation':
                foreach ($element->getChildren() as $child)
                    $out .= $child->accept($this, $handle, $eldnah) . "\n";

                break;

            // Add parentheses and increase indentation.
            case '#alternation':
                $oout    = [];
                $pIndent = str_repeat('    ', $_indent);
                ++$_indent;
                $cIndent = str_repeat('    ', $_indent);

                foreach ($element->getChildren() as $child)
                    $oout[] = $cIndent . $child->accept($this, $handle, $eldnah);

                --$_indent;
                $out .= $pIndent . '(' . "\n" .
                        implode("\n" . $cIndent . '|' . "\n", $oout) . "\n" .
                        $pIndent . ')';

                break;

            // Print token value verbatim.
            case 'token':
                $tokenId    = $element->getValueToken();
                $tokenValue = $element->getValueValue();

                switch ($tokenId) {

                    case 'literal':
                    case 'n_to_m':
                    case 'zero_or_one':
                        $out .= $tokenValue;

                        break;

                    default:
                        throw new RuntimeException(
                            'Token ID ' . $tokenId . ' is not well-handled.'
                        );
                }

                break;

            default:
                throw new RuntimeException(
                    'Node ID ' . $nodeId . ' is not well-handled.'
                );
        }

        return $out;
    }
}
```

And finally, we apply the pretty printer on the AST like previously seen:

```php
$compiler = Compiler\Llk\Llk::load(
    new File\Read('hoa://Library/Regex/Grammar.pp')
);
$ast         = $compiler->parse('ab(c|d){2,4}e?');
$prettyprint = new PrettyPrinter();

echo $prettyprint->visit($ast);
```

Et voilà !

Now, put all that stuff together!

## Isotropic generation

We can use Hoa\Regex and Hoa\Compiler to get the AST of any regular expression written in the PCRE format. We can use Hoa\Visitor to traverse the AST and apply computations according to the type of nodes. Our goal is to generate strings based on regular expressions. What kind of generation are we going to use? There are plenty of them: uniform random, smallest, coverage based… The simplest is isotropic generation, also known as random generation. But “random” says nothing: what is the distribution, do we have any uniformity? Isotropic means each choice will be solved randomly and uniformly.

Uniformity has to be defined: does it include the whole set of nodes or just the immediate children of the node? Isotropic means we consider only immediate children. For instance, a node #alternation has n immediate children; the probability to choose one child C is:

    P(C) = 1 / n

Yes, simple as that! We can use the Hoa\Math library that provides the Hoa\Math\Sampler\Random class to sample uniform random integers and floats. Ready?

### Structure of the visitor

The structure of the visitor is the following:

```php
use Hoa\Visitor;
use Hoa\Math;

class IsotropicSampler implements Visitor\Visit
{
    protected $_sampler = null;

    public function __construct ( Math\Sampler $sampler )
    {
        $this->_sampler = $sampler;

        return;
    }

    public function visit (
        Visitor\Element $element,
        &$handle = null,
        $eldnah  = null
    ) {
        switch ($element->getId()) {
            // …
        }
    }
}
```

We set a sampler and we start visiting and filtering nodes by their node ID.
The following code will generate a string based on the regular expression contained in the $expression variable:

```php
$expression = '…';
$ast        = $compiler->parse($expression);
$generator  = new IsotropicSampler(new Math\Sampler\Random());
echo $generator->visit($ast);
```

We are going to change the value of $expression step by step until we reach ab(c|d){2,4}e?.

### Case of #expression

A node of type #expression has only one child. Thus, we simply return the computation of this child:

```php
case '#expression':
    return $element->getChild(0)->accept($this, $handle, $eldnah);
    break;
```

### Case of token

We consider only one type of token for now: literal. A literal can contain an escaped character, can be a single character, or can be . (which means everything). We consider only a single character for this example (spoiler: the whole visitor already exists). Thus:

```php
case 'token':
    return $element->getValueValue();
    break;
```

Here, with $expression = 'a'; we get the string a.

### Case of #concatenation

A concatenation is just the computation of all children joined into a single string. Thus:

```php
case '#concatenation':
    $out = null;

    foreach ($element->getChildren() as $child)
        $out .= $child->accept($this, $handle, $eldnah);

    return $out;
    break;
```

At this step, with $expression = 'ab'; we get the string ab. Totally crazy.

### Case of #alternation

An alternation is a choice between several children. All we have to do is to select a child based on the probability given above. The number of children of the current node can be obtained with the getChildrenNumber method. We are also going to use the integer sampler. Thus:

```php
case '#alternation':
    $childIndex = $this->_sampler->getInteger(
        0,
        $element->getChildrenNumber() - 1
    );

    return $element->getChild($childIndex)
                   ->accept($this, $handle, $eldnah);
    break;
```

Now, with $expression = 'ab(c|d)'; we get the strings abc or abd at random. Try several times to see for yourself.

### Case of #quantification

A quantification is an alternation of concatenations.
Indeed, e{2,4} is strictly equivalent to ee|eee|eeee. We have only two quantifications in our example: ? and {x,y}. We are going to find the values of x and y and then choose a number of repetitions at random between these bounds. Let’s go:

```php
case '#quantification':
    $out = null;
    $x   = 0;
    $y   = 0;

    // Filter the type of quantification.
    switch ($element->getChild(1)->getValueToken()) {

        // ?
        case 'zero_or_one':
            $y = 1;
            break;

        // {x,y}
        case 'n_to_m':
            $xy = explode(
                ',',
                trim($element->getChild(1)->getValueValue(), '{}')
            );
            $x  = (int) trim($xy[0]);
            $y  = (int) trim($xy[1]);
            break;
    }

    // Choose the number of repetitions.
    $max = $this->_sampler->getInteger($x, $y);

    // Concatenate.
    for ($i = 0; $i < $max; ++$i)
        $out .= $element->getChild(0)->accept($this, $handle, $eldnah);

    return $out;
    break;
```

Finally, with $expression = 'ab(c|d){2,4}e?'; we can get the following strings: abdcce, abdc, abddcd, abcde, etc. Nice, isn’t it? Want more?

```php
for ($i = 0; $i < 42; ++$i)
    echo $generator->visit($ast), "\n";

/**
 * Could output:
 *     abdce
 *     abdcc
 *     abcdde
 *     abcdcd
 *     abcde
 *     abcc
 *     abddcde
 *     abddcce
 *     abcde
 *     abcc
 *     abdcce
 *     abcde
 *     abdce
 *     abdd
 *     abcdce
 *     abccd
 *     abdcdd
 *     abcdcce
 *     abcce
 *     abddc
 */
```

## Performance

It is difficult to give numbers because performance depends on a lot of parameters: your machine configuration, the PHP VM, whether other programs are running, etc. But I have generated 1 million (10^6) strings in less than 25 seconds on my machine (an old MacBook Pro), which is pretty reasonable.

## Conclusion and surprise

So, yes, now we know how to generate strings based on regular expressions! Supporting the whole PCRE format is difficult. That’s why the Hoa\Regex library provides the Hoa\Regex\Visitor\Isotropic class, which is a more advanced visitor. The latter supports classes, negative classes, ranges, all quantifications, and all kinds of literals (characters, escaped characters, types of characters such as \w, \d, \h…), etc.
Consequently, all you have to do is:

```php
use Hoa\Regex;

// …
$generator = new Regex\Visitor\Isotropic(new Math\Sampler\Random());
echo $generator->visit($ast);
```

This algorithm is used in Praspel, a specification language I designed during my PhD thesis. More specifically, this algorithm is used inside realistic domains. I am not going to explain them today, but they allow me to introduce the “surprise”.

### Generate strings based on regular expressions in atoum

atoum is an awesome unit test framework. You can use the Atoum\PraspelExtension extension to use Praspel, and therefore realistic domains, inside atoum. You can use realistic domains to validate and to generate data; they are designed for that. Obviously, we can use the Regex realistic domain. This extension provides several features, including sample, sampleMany and predicate, to respectively generate one datum, generate many data, and validate a datum based on a realistic domain. To declare a regular expression, we write:

```php
$regex = $this->realdom->regex('/ab(c|d){2,4}e?/');
```

And to generate a datum, all we have to do is:

```php
$datum = $this->sample($regex);
```

For instance, imagine you are writing a test called test_mail and you need an email address:

```php
public function test_mail ( )
{
    $this
        ->given(
            $regex   = $this->realdom->regex('/[\w\-_]+(\.[\w\-\_]+)*@\w\.(net|org)/'),
            $address = $this->sample($regex),
            $mailer  = new \Mock\Mailer(…)
        )
        ->when($mailer->sendTo($address))
        ->then
            ->…
}
```

Easy to read, fast to execute, and it helps to focus on the logic of the test instead of on the test data (also known as fixtures). Note that most of the time the regular expressions are already in the code (maybe as constants). It is therefore easier to write and maintain the tests.

I hope you enjoyed this first part of the series :-)! This work has been published at the International Conference on Software Testing, Verification and Validation: Grammar-Based Testing using Realistic Domains in PHP.
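To close the loop, the isotropic algorithm can also be sketched outside of Hoa. Below is a minimal Python version that samples strings from a hand-built AST for ab(c|d){2,4}e? (the node layout is an invented simplification for illustration, not Hoa\Compiler's actual structure); every generated string matches the original expression:

```python
import random
import re

# Hand-built AST for ab(c|d){2,4}e?, loosely mirroring the node types
# seen in the dump: concatenation, alternation, quantification, token.
AST = ('concatenation', [
    ('token', 'a'),
    ('token', 'b'),
    ('quantification', 2, 4,
        ('alternation', [('token', 'c'), ('token', 'd')])),
    ('quantification', 0, 1, ('token', 'e')),
])

def sample(node, rng=random):
    """Isotropic sampling: each choice is resolved uniformly among
    the immediate children of the current node."""
    kind = node[0]
    if kind == 'token':
        return node[1]
    if kind == 'concatenation':
        return ''.join(sample(child, rng) for child in node[1])
    if kind == 'alternation':
        return sample(rng.choice(node[1]), rng)
    if kind == 'quantification':
        _, low, high, child = node
        return ''.join(sample(child, rng) for _ in range(rng.randint(low, high)))
    raise ValueError('unhandled node kind: ' + kind)

# Every generated string must match the original expression.
pattern = re.compile(r'^ab(c|d){2,4}e?$')
assert all(pattern.match(sample(AST)) for _ in range(100))
```

Each alternation child and each repetition count is drawn uniformly, which is exactly the isotropic policy described above.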
https://bookstore.ams.org/view?ProductCode=MEMO/186/873
Exponential Genus Problems in One-Relator Products of Groups

A. J. Duncan, University of Newcastle, Newcastle upon Tyne, England

Available Formats: Electronic ISBN: 978-1-4704-0477-2; Product Code: MEMO/186/873.E; List Price: $72.00; MAA Member Price: $64.80; AMS Member Price: $43.20

Book Details: Memoirs of the American Mathematical Society, Volume 186, 2007; 156 pp; MSC: Primary 20; Secondary 57

Exponential equations in free groups were studied initially by Lyndon and Schützenberger and then by Comerford and Edmunds. Comerford and Edmunds showed that the problem of determining whether or not a quadratic exponential equation has a solution is decidable in finitely generated free groups. In this paper the author shows that for finite systems of quadratic exponential equations decidability passes, under certain hypotheses, from the factor groups to free products and one-relator products.

• Chapters
• 1. Introduction
• 3. Quadratic exponential equations and $\mathcal {L}$-genus
• 4. Resolutions of quadratic equations
• 5. Decision problems
• 6. Pictures
• 7. Corridors
• 8. Angle assignment
• 9. Curvature
• 10. Configurations $C$
• 11. Configurations $D$
• 13. Isoperimetry
• 14. Proof of Theorem 5.9
http://mathoverflow.net/questions/85809/cutting-and-pasting-in-galois-theory
# Cutting and pasting in Galois theory

I want to ask who was the first to use a cut-and-paste construction in Galois theory. This question is motivated by the trend in contemporary Galois theory of using patching methods to construct Galois extensions with a given group. For example, Harbater used formal patching to solve the inverse Galois problem over $\mathbb{Q}_p(x)$. Those patching methods are in analogy with the "classical" cutting and pasting constructions of differential geometry. In particular, the proof of the Riemann Existence Theorem (saying that there are covers of Riemann surfaces with given groups and ramification) invokes such a construction. Was Riemann the first?
http://www.divms.uiowa.edu/~luke/classes/STAT7400/coding.html
Except for binary machine code, all computer code is intended to be read by humans. Well-written code makes this easy, and coding standards or guidelines help you create well-written code. Here are some guidelines to follow in code written for this course. [Adapted from coding standards for Roger Peng’s Biostat 776 at Johns Hopkins University.]

• Program files should always be ASCII text files. Program files should always be immediately source-able into R or readable by a C compiler in the case of C programs. If you cannot source your file directly into R, then the file format is not acceptable. Word processing programs like Microsoft Word, by default, do not save files as text files.

• Always use a monospace font to write or display code. Variable-space fonts like Times New Roman are not appropriate and can alter the apparent structure of a program (and hence its readability).

• Always indent your code. If you use an editor like GNU Emacs, then there is support for automatic indentation of code. I prefer 4-space indentation, as recommended in the R coding standards. Comments should be indented to the same level of indentation as the code to which the comment pertains. Comments can also appear at the end of a code line, if space permits (but see below).

• Put spaces around operators and after punctuation marks like commas and semicolons. This makes the code easier to read.

• Your code should not extend past 80 columns. This is because standard Unix terminal windows are 80 columns wide, and if your code wraps around the end of the line it becomes very difficult to read. Break long lines if you have to. Exceptions should be made only for hard-coded constants (such as path names or URLs) which cannot easily be wrapped or shortened.

• As a rule, no function or subroutine should be longer than about 30 lines. In particular, it should be fully visible, without the need to scroll, in an editor using a reasonable font size.
Being able to see the full code helps in understanding the logic of the code and helps limit the complexity of individual functions. With lower-level languages like C this rule occasionally needs to be broken, but exceptions should be thought through very carefully.

• Don’t repeat yourself. In particular, don’t cut and paste. If you find yourself writing the same bit of code, or very similar bits of code, multiple times, then it is time to think about abstracting the core idea out into a function of its own.

• Use a consistent scheme for naming variables. I happen to prefer so-called camel case (as in fileLength) to snake case (as in file_length), but either is fine as long as you are consistent.

• Ideally code should be sufficiently well factored into functions and subroutines, with well-chosen function and variable names, to be easy to read and understand without comments. Comments should be used only to explain non-obvious steps in tricky computations, or to provide background or attribution.

Good programming editors will help immensely in following good programming practices.

Some other coding style guides:

Some useful tools:

• A good programming editor that is aware of R and C syntax.
• The indent command for formatting C code.
• The formatR package for formatting R code.
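As a small illustration of the “don’t repeat yourself” guideline above, here is a sketch of factoring duplicated logic into a single function (Python for brevity; the same idea applies in R or C, and the names follow the camel-case convention mentioned earlier):

```python
# Before refactoring, the same normalization arithmetic would be
# cut-and-pasted once per column. After: one well-named function.
def rescaleToUnitRange(values):
    """Linearly map a list of numbers onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

heights = [1.2, 3.4, 2.2]
weights = [60.0, 72.5, 55.1]

scaledHeights = rescaleToUnitRange(heights)
scaledWeights = rescaleToUnitRange(weights)

# The invariant now lives in one place instead of two.
assert min(scaledHeights) == 0.0 and max(scaledHeights) == 1.0
```

If the scaling rule ever changes, it is edited once rather than in every pasted copy.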
http://math.stackexchange.com/questions/23670/how-is-the-schnorr-signature-insecure-in-the-following-2-scenarios
# How is the Schnorr Signature insecure in the following 2 scenarios: 1. When there is no hash function used, $s = k-x \cdot m \mod {q}$ instead of $s = k-x \cdot H(m||r) \mod{q}$? 2. When a hash function is defined as $H(m)$ instead of $H(m||r)$? - If you're going to post homework, it's polite to show that you have made some effort. So how far have you got, and what specific problems do you have? – Peter Taylor Feb 25 '11 at 9:46 My understanding is as follows: When m is used as a plain text, Then it can be divided across the equation: s.m^-1 = k.m^-1 - x mod q. Two signatures to the same msg then can solve x from this equation which is total break. I am not able think differently for the second case because attacker can get the hash from the plain text. Apology, its my first time here and I am a beginner in this field. – bala maverick Feb 25 '11 at 10:07 In case 1, is $e$ still the same $H(m || r)$? – Henno Brandsma Feb 25 '11 at 21:46 In case 2, what is the full signature? How do you verify it? – Henno Brandsma Feb 25 '11 at 21:52 As to 2: if $e= H(m)$ and $s = k - x \cdot H(m)$, the $k$ serves no purpose any more. Anyone can create $H(m)$ and there is no way to verify that $s$ has been created by the one possessing the private key $x$. In the original scheme, $g^s \cdot y^e$ yields $g^k = r$, so that we can verify that $e = H(m \| r)$ is correct. In variation 2, the $r$ has been eliminated, and cannot be computed by a verifier. This is not even a signature scheme, really, as there is no verifying beyond checking $e = H(m)$. To address variation 1, as in the comment: if we use $(r,s)$ where $s = k - x \cdot m$, then the verifying equation becomes $g^s \cdot y^m = r$, given $(r,s)$ and the message $m$ and public key $y$; if it holds it's a valid signature (no need for hashing). But then from a valid signature $(r,s)$ for message $m$ we generate one for $m+1$ by multiplying $r$ by $y$, and keeping $s$. 
The check then becomes $$g^s \cdot y^{m+1} = g^s \cdot y^m \cdot y = r \cdot y$$ which is valid too. So then also the hashing stays necessary. And if we use $e = H(m)$ and so $(r,s) = (g^k, k - x \cdot H(m))$ as the signature, we can do a similar thing: the verifying equation becomes $g^s \cdot y^{H(m)} = r$ (valid iff true) and then $(g \cdot r, s+1)$ is also a valid signature for $m$: verifying gives $$g^{s+1} \cdot y^{H(m)} = g \cdot (g^s \cdot y^{H(m)}) = g \cdot r$$ as $(r,s)$ was valid, and so the new one is valid too. Note also that if we re-use the same $k$ (and thus $r = g^k$) for 2 signatures for different messages (in the secure, normal scheme), and the opponent knows or guesses this, then we can compute $x$, the secret key: we then have $(e_1, s_1)$ and $(e_2, s_2)$ where $e_1 = H(m_1 \| r)$, $e_2 = H(m_2 \| r)$, $s_1 = k - x \cdot e_1$, $s_2 = k - x \cdot e_2$; the opponent computes $$s_1 - s_2 = x \cdot (e_2 - e_1)$$ and $s_1, s_2, e_1, e_2$ are known, allowing us to solve for $x$, in most cases. Of course, for a large enough group, and good random choices, this is very unlikely to happen. As you see, it's quite intricate to see why we need certain stuff. - Thanks! I am almost getting it.. It is that there is no binding between the randomness and the signature components. Am i right? – bala maverick Feb 26 '11 at 11:05 ++ in this case 2: when we have e=H(m) then why cant we pass the signature as (r,s). Will it be of any use? – bala maverick Feb 26 '11 at 11:11 Thanks for the effort. A brilliant explanation, I am getting a clearer picture now of why we use certain primitives. !! #clear – bala maverick Feb 28 '11 at 3:24
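Both attacks described in the answer (the hash-free forgery and key recovery from a reused nonce) can be checked numerically. A quick sketch with toy parameters (all numbers are invented and far too small for real cryptography):

```python
# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup of Z_p^*.
p, q, g = 23, 11, 4
x = 7                  # secret key
y = pow(g, x, p)       # public key, y = g^x mod p

# --- Variation 1: no hash, s = k - x*m (mod q), signature (r, s). ---
k = 5
r = pow(g, k, p)
m = 3
s = (k - x * m) % q
assert pow(g, s, p) * pow(y, m, p) % p == r        # valid signature for m

# Existential forgery from the answer: (r*y, s) is valid for m + 1.
assert pow(g, s, p) * pow(y, m + 1, p) % p == r * y % p

# --- Nonce reuse: two signatures with the same k leak the key. ---
e1, e2 = 3, 9                                      # H(m1||r), H(m2||r)
s1 = (k - x * e1) % q
s2 = (k - x * e2) % q
x_recovered = (s1 - s2) * pow(e2 - e1, -1, q) % q  # s1-s2 = x*(e2-e1) mod q
assert x_recovered == x                            # total break
```

The last three lines are exactly the computation $s_1 - s_2 = x \cdot (e_2 - e_1)$ from the answer, solved for $x$ with a modular inverse.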
https://handwiki.org/wiki/Physics:Composite_Higgs_models
# Physics:Composite Higgs models

In particle physics, composite Higgs models (CHM) are speculative extensions of the Standard Model (SM) where the Higgs boson is a bound state of new strong interactions. These scenarios are models for physics beyond the SM presently tested at the Large Hadron Collider (LHC) in Geneva. In all composite Higgs models the recently discovered Higgs boson is not an elementary particle (or point-like) but has finite size, perhaps around 10⁻¹⁸ meters. This dimension may be related to the Fermi scale (100 GeV) that determines the strength of the weak interactions such as in β-decay, but it could be significantly smaller. Microscopically the composite Higgs will be made of smaller constituents in the same way as nuclei are made of protons and neutrons.

## History

Often referred to as "natural" composite Higgs models, CHMs are constructions that attempt to alleviate the fine-tuning or "naturalness" problem of the Standard Model.[1] These typically engineer the Higgs boson as a naturally light pseudo-Goldstone boson or Nambu-Goldstone field, in analogy to the pion (or more precisely, like the K-mesons) in QCD. These ideas were introduced by Georgi and Kaplan[2] as a clever variation on technicolor theories to allow for the presence of a physical low-mass Higgs boson. These are forerunners of Little Higgs theories. In parallel, early composite Higgs models arose from the heavy top quark and its renormalization group infrared fixed point, which implies a strong coupling of the Higgs to top quarks at high energies. This formed the basis of top quark condensation theories of electroweak symmetry breaking in which the Higgs boson is composite at extremely short distance scales, composed of a pair of top and anti-top quarks.
This was described by Yoichiro Nambu and subsequently developed by Miransky, Tanabashi, and Yamawaki[3][4] and Bardeen, Hill, and Lindner,[5] who connected the theory to the renormalization group and improved its predictions. While these ideas are still compelling, they suffer from a "naturalness problem": a large degree of fine-tuning. To remedy the fine-tuning problem, Chivukula, Dobrescu, Georgi and Hill[6] introduced the "Top See-Saw" model in which the composite scale is reduced to a few TeV (trillion electron volts, the energy scale of the LHC). A more recent version of the Top Seesaw model of Dobrescu and Cheng has an acceptable light composite Higgs boson.[7] Top Seesaw models have a nice geometric interpretation in theories of extra dimensions, which is most easily seen via dimensional deconstruction (the latter approach does away with the technical details of the geometry of the extra spatial dimension and gives a renormalizable D = 4 field theory). These schemes also anticipate "partial compositeness". These models are discussed in the extensive review of strong dynamical theories by Hill and Simmons.[8] CHMs typically predict new particles with mass around a TeV (or tens of TeV as in the Little Higgs schemes) that are excitations or ingredients of the composite Higgs, analogous to the resonances in nuclear physics. The new particles could be produced and detected in collider experiments if the energy of the collision exceeds their mass, or could produce deviations from the SM predictions in "low energy observables" (results of experiments at lower energies). Within the most compelling scenarios each Standard Model particle has a partner with equal quantum numbers but heavier mass. For example, the photon, W and Z bosons have heavy replicas with mass determined by the compositeness scale, expected around 1 TeV.
Though naturalness requires that new particles exist with mass around a TeV which could be discovered at the LHC or future experiments, as of 2018 no direct or indirect signs that the Higgs or other SM particles are composite have been detected. From the LHC discovery of 2012, it is known that there exists a physical Higgs boson (a weak iso-doublet) that condenses to break the electro-weak symmetry. This differs from the prediction of ordinary technicolor theories, where new strong dynamics directly breaks the electro-weak symmetry without the need of a physical Higgs boson. The CHM proposed by Georgi and Kaplan was based on known gauge theory dynamics that produces the Higgs doublet as a Goldstone boson. It was later realized, as with the case of Top Seesaw models described above, that this can naturally arise in five-dimensional theories, such as the Randall–Sundrum scenario, or by dimensional deconstruction. These scenarios can also be realized in hypothetical strongly coupled conformal field theories (CFT) and the AdS-CFT correspondence. This spurred activity in the field. At first the Higgs was a generic scalar bound state. In the influential work[9] the Higgs as a Goldstone boson was realized in CFTs. Detailed phenomenological studies showed that within this framework agreement with experimental data can be obtained with a mild tuning of parameters. The more recent work on the holographic realization of CHM, which is based on the AdS/QCD correspondence, provided an explicit realization of the strongly coupled sector of CHM and the computation of meson masses, decay constants and the top-partner mass.[10]

## CHM models

CHM can be characterized by the mass (m) of the lightest new particles and their coupling (g). The latter is expected to be larger than the SM couplings for consistency. Various realizations of CHM exist that differ in the mechanism that generates the Higgs doublet. Broadly they can be divided into two categories: 1.
Higgs is a generic bound state of strong dynamics; 2. Higgs is a Goldstone boson of spontaneous symmetry breaking.[11][12] In both cases the electro-weak symmetry is broken by the condensation of a Higgs scalar doublet. In the first type of scenario there is no a priori reason why the Higgs boson should be lighter than the other composite states, and moreover larger deviations from the SM are expected.

### Higgs as Goldstone boson

These are essentially Little Higgs theories. In this scenario the existence of the Higgs boson follows from the symmetries of the theory. This makes it possible to explain why this particle is lighter than the rest of the composite particles, whose mass is expected from direct and indirect tests to be around a TeV or higher. It is assumed that the composite sector has a global symmetry G spontaneously broken to a subgroup H, where G and H are compact Lie groups. Contrary to technicolor models, the unbroken symmetry must contain the SM electro-weak group SU(2)×U(1). According to Goldstone's theorem, the spontaneous breaking of a global symmetry produces massless scalar particles known as Goldstone bosons. By appropriately choosing the global symmetries it is possible to have Goldstone bosons that correspond to the Higgs doublet of the SM. This can be done in a variety of ways[13] and is completely determined by the symmetries. In particular, group theory determines the quantum numbers of the Goldstone bosons. From the decomposition of the adjoint representation one finds $\displaystyle{ \mathrm{Adj}[G]=\mathrm{Adj}[H]+\mathrm{R}[\Pi] }$, where R[Π] is the representation of the Goldstone bosons under H. The phenomenological requirement that a Higgs doublet exists selects the possible symmetries. A typical example is the pattern $\displaystyle{ \frac {SO(5)}{SU(2)_L\times SU(2)_R}\rightarrow GB=(2,2) }$ that contains a single Higgs doublet as a Goldstone boson.
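The Goldstone counting behind this example can be checked with a short sketch, using dim SO(n) = n(n-1)/2 and the fact that SU(2)×SU(2) has the same dimension as SO(4):

```python
# Number of Goldstone bosons = dim G - dim H (one per broken generator).
def dim_so(n):
    """Dimension of the SO(n) Lie algebra: n(n-1)/2 generators."""
    return n * (n - 1) // 2

broken = dim_so(5) - dim_so(4)  # 10 - 6 broken generators
assert broken == 4  # four real scalars: exactly one complex Higgs doublet
```

The four real Goldstone fields are precisely the four real components of one complex SU(2) doublet, which is why this coset is the minimal composite Higgs pattern.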
The physics of the Higgs as a Goldstone boson is strongly constrained by the symmetries and determined by the symmetry breaking scale f that controls their interactions. An approximate relation exists between mass and coupling of the composite states, $\displaystyle{ M=g f }$ In CHM one finds that deviations from the SM are proportional to $\displaystyle{ \xi=\frac {v^2}{f^2} }$, where v=246 GeV is the electro-weak vacuum expectation value. By construction these models approximate the SM to arbitrary precision if ξ is sufficiently small. For example, for the model above with SO(5) global symmetry the coupling of the Higgs to W and Z bosons is modified as $\displaystyle{ \frac{h_{VV}}{h_{VV}^{SM}}\approx 1-\frac {\xi} 2 }$. Phenomenological studies suggest f > 1 TeV and thus at least a factor of a few larger than v. However the tuning of parameters required to achieve v < f is inversely proportional to ξ so that viable scenarios require some degree of tuning. Goldstone bosons generated from the spontaneous breaking of an exact global symmetry are exactly massless. Therefore if the Higgs boson is a Goldstone boson the global symmetry cannot be exact. In CHM the Higgs potential is generated by effects that explicitly break the global symmetry G. Minimally these are the SM Yukawa and gauge couplings that cannot respect the global symmetry but other effects can also exist. The top coupling is expected to give a dominant contribution to the Higgs potential as this is the largest coupling in the SM. In the simplest models one finds a correlation between the Higgs mass and the mass M of the top partners,[14] $\displaystyle{ m_h^2\sim \frac {3}{2\pi^2} \frac {M^2}{f^2} v^2 }$ In models with f~TeV as suggested by naturalness this indicates fermionic resonances with mass around 1 TeV. Spin-1 resonances are expected to be somewhat heavier. This is within the reach of future collider experiments. 
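To get a feel for the numbers in the correlation above, one can plug in representative values; a sketch (the choices f = M = 1 TeV are illustrative, not fits to data):

```python
import math

v = 246.0   # GeV, electro-weak scale
f = 1000.0  # GeV, compositeness scale (illustrative choice)
M = 1000.0  # GeV, top-partner mass (illustrative choice)

# m_h^2 ~ (3 / (2 pi^2)) * (M/f)^2 * v^2, an order-of-magnitude estimate
m_h = math.sqrt(3.0 / (2.0 * math.pi ** 2)) * (M / f) * v
# With these inputs m_h comes out just below 100 GeV, so the observed
# 125 GeV Higgs is indeed compatible with top partners around a TeV.
```

Since m_h scales linearly with M at fixed f, pushing the top partners much above a TeV drives the predicted Higgs mass up, which is the quantitative sense in which light top partners are tied to naturalness.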
### Partial compositeness

A key ingredient of modern CHM is the hypothesis of partial compositeness proposed by D. B. Kaplan.[15] This is similar to a (deconstructed) extra dimension in which every SM particle has one or more heavy partners that can mix with it. In practice the SM particles are linear combinations of elementary and composite states: $\displaystyle{ |SM\rangle=\cos{\alpha} |El\rangle+\sin{\alpha}|Co\rangle }$, where α denotes the mixing angle.

Partial compositeness is naturally realized in the gauge sector, where an analogous phenomenon occurs in quantum chromodynamics and is known as photon–ρ mixing. For fermions it is an assumption, which in particular requires the existence of heavy fermions with the same quantum numbers as the SM quarks and leptons. These interact with the Higgs through the mixing. One schematically finds the formula for the SM fermion masses, $\displaystyle{ \frac{m_f}v\approx \sin \alpha_L \cdot Y \cdot \sin \alpha_R }$, where L and R refer to the left and right mixings, and Y is a composite-sector coupling.

The composite particles are multiplets of the unbroken symmetry H. For phenomenological reasons this should contain the custodial symmetry SU(2)xSU(2), which extends the electro-weak symmetry SU(2)xU(1). Composite fermions often belong to representations larger than those of the SM particles. For example, a strongly motivated representation for left-handed fermions is the (2,2), which contains particles with exotic electric charges 5/3 or –4/3 that have distinctive experimental signatures.

Partial compositeness improves the phenomenology of CHM, providing a rationale for why no deviations from the SM have been measured so far. In the so-called anarchic scenarios the hierarchies of SM fermion masses are generated through the hierarchies of the mixings, with anarchic composite-sector couplings. The light fermions are almost elementary while the third generation is strongly or entirely composite.
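The mass formula above implies that heavier fermions need larger mixings. A rough numerical sketch (the composite coupling Y = 3 and the quark mass values used here are illustrative assumptions, not fitted values):

```python
# Partial compositeness: m_f / v ~ sin(a_L) * Y * sin(a_R), so for a fixed
# composite coupling Y the observed mass fixes the product of the mixings.
v = 246.0  # GeV, electroweak vev
Y = 3.0    # assumed composite-sector coupling (illustrative)

quark_masses = {"top": 173.0, "charm": 1.3, "up": 0.002}  # GeV, rough values

for name, m in quark_masses.items():
    mixing_product = m / (v * Y)  # sin(a_L) * sin(a_R)
    print(f"{name:5s}: sin(a_L)*sin(a_R) ~ {mixing_product:.2e}")
```

The top needs order-one mixings (strongly composite), while the up quark needs a product of order 10⁻⁶ (almost elementary): exactly the anarchic pattern described above.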
This leads to a structural suppression of all effects that involve the first two generations, which are the most precisely measured. In particular, flavor transitions and corrections to electro-weak observables are suppressed. Other scenarios are also possible,[16] with different phenomenology.

## Experiments

The main experimental signatures of CHM are:

1. New heavy partners of Standard Model particles, with SM quantum numbers and masses around a TeV
2. Modified SM couplings
3. New contributions to flavor observables

Supersymmetric models also predict that every Standard Model particle will have a heavier partner. However, in supersymmetry the partners have a different spin: they are bosons if the SM particle is a fermion, and vice versa. In composite Higgs models the partners have the same spin as the SM particles.

All the deviations from the SM are controlled by the tuning parameter ξ. The mixing of each SM particle determines its coupling to the new states. The detailed phenomenology depends strongly on the flavor assumptions and is in general model-dependent. The Higgs and the top quark typically have the largest couplings to the new particles. For this reason third-generation partners are the easiest to produce and top physics shows the largest deviations from the SM. Top partners are also of special importance given their role in the naturalness of the theory. After the first run of the LHC, direct experimental searches exclude third-generation fermionic resonances up to 800 GeV.[17][18] Bounds on gluon resonances are in the multi-TeV range,[19][20] and somewhat weaker bounds exist for electro-weak resonances.

Deviations from the SM couplings are proportional to the degree of compositeness of the particles. For this reason the largest departures from the SM predictions are expected for the third-generation quark and Higgs couplings. The former have been measured with per-mille precision at LEP.
After the first run of the LHC, the couplings of the Higgs to fermions and gauge bosons agree with the SM with a precision of around 20%. These results pose some tension for CHM but are compatible with a compositeness scale f ~ TeV.

The hypothesis of partial compositeness makes it possible to suppress flavor violation beyond the SM, which is severely constrained experimentally. Nevertheless, within anarchic scenarios sizable deviations from the SM predictions exist in several observables. Particularly constrained are CP violation in the kaon system and lepton flavor violation, for example the rare decay μ → eγ. Overall, flavor physics provides the strongest indirect bounds on anarchic scenarios. This tension can be avoided with different flavor assumptions.

## Summary

The nature of the Higgs boson remains a conundrum. Philosophically, the Higgs boson is either a composite state, built of more fundamental constituents, or it is connected to other states in nature by a symmetry such as supersymmetry (or some blend of these concepts). So far there is no evidence of either compositeness or supersymmetry. That nature provides a single (weak isodoublet) scalar field to generate mass is seemingly incongruent with common sense. We have no idea at what mass/energy scale additional information about the Higgs boson, which may shed light on these issues, will be revealed. While theorists will remain busy concocting explanations, this poses a major challenge to particle physics, as we have no clear idea whether accelerators will ever provide new useful information beyond the standard model. It is important that the LHC move forward with upgrades in luminosity and energy in search of new clues.

## References

1. G. F. Giudice, Naturalness after LHC8, PoS EPS HEP2013, 163 (2013)
2. M. J. Dugan, H. Georgi and D. B. Kaplan, Anatomy of a Composite Higgs Model, Nucl. Phys. B254, 299 (1985).
3. Miransky, Vladimir A.; Tanabashi, Masaharu; Yamawaki, Koichi (1989).
"Dynamical electroweak symmetry breaking with large anomalous dimension and t quark condensate". Phys. Lett. B 221 (177): 177. doi:10.1016/0370-2693(89)91494-9. Bibcode1989PhLB..221..177M. 4. Miransky, Vladimir A.; Tanabashi, Masaharu; Yamawaki, Koichi (1989). "Is the t quark responsible for the mass of W and Z bosons?". Modern Physics Letters A 4 (11): 1043. doi:10.1142/S0217732389001210. Bibcode1989MPLA....4.1043M. 5. Bardeen, William A.; Hill, Christopher T.; Lindner, Manfred (1990). "Minimal dynamical symmetry breaking of the standard model". Physical Review D 41 (5): 1647–1660. doi:10.1103/PhysRevD.41.1647. PMID 10012522. Bibcode1990PhRvD..41.1647B. 6. Chivukula, R. Sekhar; Dobrescu, Bogdan; Georgi, Howard; Hill, Christopher T. (1999). "Top Quark Seesaw Theory of Electroweak Symmetry Breaking". Physical Review D 59 (5): 075003. doi:10.1103/PhysRevD.59.075003. Bibcode1999PhRvD..59g5003C. 7. Cheng, Hsin-Chia; Dobrescu, Bogdan A.; Gu, Jiayin (2014). "Higgs Mass from Compositeness at a Multi-TeV Scale". JHEP 2014 (8): 095. doi:10.1007/JHEP08(2014)095. Bibcode2014JHEP...08..000C. 8. Hill, Christopher T.; Simmons, Elizabeth H. (2003). "Strong dynamics and electroweak symmetry breaking.". Phys. Rep. 381 (4–6): 235. doi:10.1016/S0370-1573(03)00140-6. Bibcode2003PhR...381..235H. 9. K. Agashe, R. Contino and A. Pomarol, "The Minimal composite Higgs model", Nucl. Phys. B719, 165 (2005) 10. Erdmenger, Johanna; Evans, Nick; Porod, Werner; Rigatos, Konstantinos S. (2021-02-19). "Gauge/Gravity Dynamics for Composite Higgs Models and the Top Mass" (in en). Physical Review Letters 126 (7): 071602. doi:10.1103/PhysRevLett.126.071602. ISSN 0031-9007. 11. R. Contino, The Higgs as a Composite Nambu-Goldstone Boson 12. M. Redi 13. J. Mrazek, A. Pomarol, R. Rattazzi, M. Redi, J. Serra and A. Wulzer, The Other Natural Two Higgs Doublet Model, Nucl. Phys. B853, 1 (2011) https://arxiv.org/abs/1105.5403. 14. M. Redi and A. 
Tesi, Implications of a Light Higgs in Composite Models, JHEP 1210, 166 (2012) https://arxiv.org/abs/1205.0232.
15. D. B. Kaplan, Flavor at SSC energies: A New mechanism for dynamically generated fermion masses, Nucl. Phys. B 365, 259 (1991).
https://www.physicsforums.com/threads/derivation-of-splitting-function-from-cross-section.892788/
# I Derivation of splitting function from cross section

1. Nov 9, 2016

### CAF123

Consider the real emission correction to the tree level process $e^+ e^- \rightarrow q \bar q$ involving a final state gluon emitted from either the outgoing quark or antiquark line. The differential cross section for producing a quark with fractional energy $x_1$ in the $q \bar q g$ final state is
$$\frac{1}{\sigma_0} \frac{d \sigma}{d x_1} = \frac{2 \alpha_s}{3 \pi} \int dx_2 \frac{x_1^2 + x_2^2}{(1-x_1)(1-x_2)},$$
where $\sigma_0$ is the tree level cross section for $e^+ e^- \rightarrow q \bar q$. The singularities present for $x_i \rightarrow 1$ are associated with soft and collinear divergences which are removed upon consideration of virtual corrections to the tree level process for $q \bar q$ production. Thus,
$$\int_0^1 dx_1 \frac{d \sigma_R^{(1)}}{d x_1} + \sigma_V^{(1)} = \frac{\alpha_s}{\pi} \sigma_0 = \text{finite}$$
This may be rewritten as
$$\int_0^1 dx_1 \left( \frac{d \sigma_R^{(1)}}{d x_1} + \left( \sigma_V^{(1)} -\frac{\alpha_s}{\pi} \sigma_0 \right) \delta(1-x) \right) = \text{finite}$$
Now, define
$$F(x)_+ = \lim_{\beta \rightarrow 0} \left( F(x) \theta(1-x-\beta) - \delta(1-x-\beta) \int_0^{1-\beta} F(x')dx' \right),$$
where $\beta$ acts as a regulator. We can then write
$$\frac{1}{\sigma_0} \frac{d \sigma^{(1)}}{dx_1} = \frac{1}{\sigma_0} \left(\frac{d \sigma^{(1)}}{dx_1}\right)_+ +\alpha_s R \delta(1-x)\,\,\,\,\,(1)$$
where $R = \sigma_0(1+\alpha_s/\pi)\,\,\,\,(2)$ and
$$\int_0^1 dx_1 \left(\frac{d \sigma^{(1)}}{dx_1}\right)_+ = 0\,\,\,\,\,(3)$$
and so
$$\left(\frac{d \sigma^{(1)}}{dx_1}\right)_+ = \frac{\alpha_s}{2\pi} P_{q \rightarrow qg}(x_1) \cdot L + \alpha_s f(L),\,\,\,\,\,(4)$$
with $L$ the logarithmically divergent piece and $f(L)$ a left-over function depending on how the z integration was regularised. Most of this was cited from https://www.ippp.dur.ac.uk/~krauss/Lectures/QuarksLeptons/QCD/DGLAP_2.html.
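Regarding the plus-prescription defined above: its action on a smooth test function $g$ is $\int_0^1 dx\, [F(x)]_+ \, g(x) = \int_0^1 dx\, F(x)\,(g(x)-g(1))$, which stays finite even when $F$ diverges at $x=1$. A small numerical sketch (the test functions are chosen for illustration):

```python
# Numerical check of the plus-distribution [1/(1-x)]_+ acting on g(x) = x^2.
# The subtraction gives the integrand (x^2 - 1)/(1 - x) = -(1 + x), so the
# exact answer is -3/2; acting on g(x) = 1 the result is exactly 0, which is
# the property used in eq. (3) above.

def plus_integral(f, g, n=100_000):
    """Midpoint-rule estimate of int_0^1 dx f(x) * (g(x) - g(1))."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * (g((i + 0.5) * h) - g(1.0)) * h
               for i in range(n))

f = lambda x: 1.0 / (1.0 - x)

print(plus_integral(f, lambda x: x * x))  # close to -1.5
print(plus_integral(f, lambda x: 1.0))    # exactly 0.0
```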
I'm just wondering if someone could explain where equations (1)-(4) come from? I think (3) is clear from the definition of the plus distribution, but I am not sure how (4) comes from the earlier equations and how (1) and (2) are initially obtained. Thanks!

2. Nov 10, 2016

### RGevo

Firstly, I think eq. (1) is wrong, since the second part on the RHS (which is meant to depict the born+virtual alpha_s correction) would actually lead to two powers of alpha_s here (which can't be, since it's a single gluon emission). In addition, the dimensions don't seem to match up for each term.

The splitting functions are functions which can be derived from squaring matrix elements (and only some diverge when integrating over them from 0-1). The divergences in the real emission appear when performing the phase space integration (integrating over x, z), which at the end of the day cancel against those present in the virtual corrections (this is the KLN theorem).

It can be clearer to perform the virtual correction independently (with its divergence isolated in some way) and combine this with the real emission piece integrated entirely over the phase space (with its divergence isolated). Then you see specifically that they cancel. I can suggest trying to do the virtual calculation yourself. I think there are clearer examples - I like the discussions in https://cds.cern.ch/record/454171/files/p53.pdf.

If you do the calculation differentially, then you have to perform some tricks to combine the two (regulate each piece separately with a beta-regulator in this language). This would allow you to:

1) Evaluate the contributions from Leading Order + Virtual (+ divergent part) + the real emission part in the divergent limit (which depends on beta). (The divergent parts cancel here.)
2) Combine this with the real emission part not in the divergent limit (which would also depend on beta).

In total you get an answer which is not divergent, and not dependent on beta. Confusing, right?

3.
Nov 11, 2016

### CAF123

Hi RGevo, thanks for the reply, I hope we can continue this discussion. As far as I understand, the KLN theorem states that provided we make an inclusive summation over all (degenerate) initial and final states, our end result will be free of divergences (it will be 'infrared safe'). In the case of $e^+ e^- \rightarrow q \bar q$, the order $\alpha_s$ correction amounts to considering final state gluons as well as the interference of the born level result with the self-energy gluonic corrections and the QCD vertex correction (the former we set to zero assuming we have a consistent UV regularisation and renormalisation scheme in place). Their sum is supposed to be finite, as is expressed in the equation
$$\int_0^1 dx_1 \frac{d \sigma_R^{(1)}}{dx_1} + \sigma_V^{(1)} = \text{finite}$$
I am interested in doing a proper analysis to see this is true, but why should this be finite? The KLN theorem states that we have to sum over all degenerate initial and final states. So I have two questions:

1) We did not sum over all degenerate initial states, so why is the KLN theorem not violated? That is, why is the result finite even though we didn't consider initial state degeneracy (e.g. the degeneracy associated with an electron state and an electron emitting soft/collinear photon(s))?

2) What does degenerate mean exactly? I understand that a quark together with a cloud of collinear gluons is degenerate with a quark, but in the computation of the $e^+ e^- \rightarrow q \bar q g$ cross section we make a phase space integral over all momenta of the gluon, so wouldn't this result in summing over configurations where the gluon contribution is no longer degenerate with just a quark (i.e. contributions where the emitted gluon is no longer collinear with the quark)?

Thanks!

Last edited: Nov 11, 2016

4. Nov 11, 2016

### RGevo

Hi CAF,

These are important questions.

1) For the first.
The KLN theorem in this case is telling us that we should not consider the real photon corrections without the corresponding virtual corrections. In this case, one would consider photon emissions from the electron lines, and also from the final state quark lines. This real emission contribution (as you say) has infrared divergences associated with the photon becoming soft and/or collinear with either the electron or quark lines. One would then also have to consider the virtual photon corrections to both the initial and final state (a photon correction to the eeGamma/Z-vertex or the qqbarGamma/Z-vertex). In addition, one should also include the photon correction between the initial electron and final state quark lines (a box diagram in this case).

Assuming we had done the UV renormalisation/regularisation first, this would only be finite after I included all real emission + virtual corrections to this fixed order in perturbation theory, which would be: alpha^3. The case in the previous example (the QCD correction) was: alpha^2 alpha_s. So practically, the KLN theorem is telling you to simultaneously compute real emission diagrams and virtual diagrams to a particular order in perturbation theory.

2) If I understand correctly, the contribution which is not degenerate corresponds to the contribution when the q qbar and gluon partons are all distinguishable. Experimentally, this would be when you reconstruct three distinct jets. When the partons are not soft/collinear, there are no infrared divergences. So this phase space contribution gives you the cross section for the three-jet rate (while the soft/collinear pieces + the virtual together give you the two-jet rate).

Does this make more sense?

5. Nov 12, 2016

### CAF123

Ok, thanks, so if I understood correctly: if I just consider the emission of a photon off an electron line but do not also consider the virtual correction to the eeGamma/Z vertex, then I would end up with a divergence in my result?
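The real/virtual cancellation being discussed in this thread can be mimicked numerically with a toy slicing regulator (the integration kernel and the constant in the "virtual" part below are invented for illustration, not the actual QCD result): the real part diverges logarithmically as the cutoff β → 0, the virtual part is chosen to cancel the log, and the sum is β-independent.

```python
# Toy sketch of the real+virtual cancellation with a beta (slicing) regulator.
# "Real": R(beta) = int_0^{1-beta} dx (1+x^2)/(1-x), which behaves like
# -2 ln(beta) - 3/2 + O(beta).  "Virtual": V(beta) = 2 ln(beta) + 1, chosen
# by hand so that R + V -> -1/2, independent of beta.
import math

def real_part(beta, n=200_000):
    """Midpoint-rule integral of (1+x^2)/(1-x) over [0, 1-beta]."""
    h = (1.0 - beta) / n
    return sum((1.0 + ((i + 0.5) * h) ** 2) / (1.0 - (i + 0.5) * h) * h
               for i in range(n))

def virtual_part(beta):
    return 2.0 * math.log(beta) + 1.0

for beta in (1e-2, 1e-3, 1e-4):
    print(f"beta = {beta:.0e}: R + V = {real_part(beta) + virtual_part(beta):+.4f}")
```

Each piece separately blows up as β → 0, but the printed sums settle toward a constant, which is the structure of the "combine with a beta-regulator" procedure described above.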
In $e^+e^- \rightarrow q \bar q$ annihilation at $\mathcal O(\alpha^2, \alpha_s)$, we don't consider any photon emission off the electron line / vertex correction - is that because at this order we simply don't have such contributions? So at higher orders in perturbation theory, i.e. at $\mathcal O(\alpha^3, \alpha_s)$, we would need to consider these photon emissions off the quark and electron lines and the respective vertex corrections to get a finite result? I think that is basically what you wrote, but I just wanted to rewrite it to see if I understood you correctly. If I have two soft emissions off the electron line, what is it that cancels the divergence in this case?

In the phase space integral over the momentum k of the emitted gluon, the regions of k around zero (i.e. those below the factorisation scale or below the experimental resolution parameter etc.) are those which give the soft/collinear divergences, which show up as poles in epsilon, for example in dim reg. These poles then cancel those in the virtual eeGamma/Z g vertex. The finite pieces of this integral (k above the factorisation scale) contribute to the 3-jet event? I'm just wondering why we even perform a full phase space integral over k in the first place when all we want to describe is a degenerate gluon+quark state and quark state? Because even performing the full phase space integral, the finite pieces are not just neglected in the $e^+ e^- \rightarrow q \bar q g$ correction, so we seem to be including a sector of the integral in the correction which does not contribute to a degenerate state.

Thanks again!

6. Nov 13, 2016

### RGevo

Hi again,

It is correct that you should consider all diagrams which occur at a given fixed order in the expansion.
So, as you say, at alpha^3 alpha_s you would consider:

- real-real (so a QED and QCD emission of a photon and gluon respectively)
- virtual-real (either a virtual QED + real QCD or vice versa)
- virtual-virtual (so both QCD and QED)

Typically the order is defined at the level of the squared amplitude. So in the virtual-virtual I would have to consider, for example:

- 1-loop virtual QCD amplitude * 1-loop virtual QED amplitude
- also 2-loop virtual (QCD*QED amplitude) * born level amplitude
- ...etc.

Regarding the phase space integration, let's consider an observable relevant for the ee>qqg final state. We do not observe partons (but jets), and so we must also do the same for our fixed-order prediction. Let us now define the observable delta_phi between the two leading final state jets (say leading in transverse momentum). For the simple ee>qq prediction, all configurations are back-to-back in phi (since there are only two final state partons). If we compute the cross section as dsigma/d deltaphi, then the cross section all falls in the back-to-back bin.

Now we consider our QCD correction. It is true that a large part of the QCD correction (the virtual, the soft+collinear+also some finite contribution) ends up in the first bin - corresponding to two back-to-back jets. However, there is also a contribution in the other bins corresponding to configurations of three well separated jets and dphi of the two leading jets != pi. To get the inclusive cross section, you must also consider these configurations. These contribute to the total cross section, and also give you more differential information about the process - which you can measure to test the underlying theoretical predictions.
If I wanted to do this as a small project, I would do the following:

1) Compute the born, real, virtual amplitudes
2) Renormalise the virtual amplitudes
3) Square up my renormalised amplitudes to order alpha^2 alpha_s
4) Perform a regularisation procedure (typical approaches are dipole subtraction or the old-school phase-space slicing method)
5) Perform a numerical integration of my squared matrix elements with a numerical algorithm such as Vegas, applying whatever cuts I was interested in.

This would allow me to have a fully differential description of ee>qqbar to NLO in QCD.

I could instead, at step (3), perform the phase space integrals of the 2-body (qqbar) and 3-body (qqg) final states in d dimensions. This would allow me to analytically isolate the infrared divergences. The infrared divergences from the three-body integration (corresponding to the soft and collinear divergences) are cancelled against equivalent 'soft and collinear' divergences in the one-loop integrals. The one-loop integrals are also typically performed in d dimensions and the (UV and infrared) poles isolated as 1/eps poles. Different one-loop integrals (depending on whether the internal fermion lines of the integral are massless, for example) have different infrared/UV structure.

Again, a lot of information to digest/understand for yourself.

7. Nov 25, 2016

### CAF123

Thanks, it makes sense to me. Upon looking back at the total cross section for $e^+ e^- \rightarrow q \bar q$ annihilation I came up with another question - the total cross section for this process comes about from considering all QCD-permissible final states, e.g. $q \bar q g, q \bar q q \bar q, q \bar q gg$ and so on. We have more particles in the final state than just a $q \bar q$ to reflect the fact that soft or collinear emissions of these real particles would be undetectable in the experiment, with the result of not being distinguishable from a virtual correction.
This mathematically shows up as divergences in the results. (I think what I wrote here is correct.)

It is commonly said in the literature and books that the $\mathcal O (\alpha_s)$ correction receives contributions from the square of the two single real emission diagrams and the interference of a virtual correction with the born level result. This makes sense. The divergences arising in each of these contributions are then seen to cancel. What I was wondering was why don't we also consider the interference between a graph with two gluon real emissions and the born level result. This would also be $\mathcal O(\alpha_s)$. Thanks again!

8. Nov 25, 2016

### RGevo

Hi Caf,

Glad I was of help. In this case, the 'double real' (since two emissions) would contribute to the matrix element at level $\mathcal{O}(\alpha_s)$ as you say. However, it won't contribute to the cross section until $\mathcal{O}(\alpha_s^2)$, since the specific distinct final state e-e+ > q qbar parton_i parton_j only interferes with other diagrams with this specific initial/final state configuration. In other words, the born and the virtual interfere since they are both ee > qqbar. Same for the 2 > 3 (it interferes with itself and so on).

9. Nov 26, 2016

### CAF123

Thanks! So if I let A be the diagram where two gluons are emitted from a quark line and B the diagram where they are emitted from the antiquark line, then $|A+B|^2$ will contribute at $\mathcal O (\alpha_s^2)$ to the cross section, and I suppose the divergences present in this result will cancel against those present in the interference of the born level result and a diagram where we have a double gluon vertex correction? Does the fact that only diagrams with the same initial/final state interfere come from elementary quantum mechanics?
Like, for example, in the double slit experiment: we put light in a state |i> through a diffraction grating and observe the position |f> of the light quanta hitting the detector on the screen, but not which slit it went through. So we have quantum interference between the two possible paths from |i> to |f>. Or, e.g., interference between 2 -> 2 t- and u-channel processes, where we have the same initial and final state but different attachments of the final state external legs to the vertices. Similarly, here we have quantum interference at each order in the strong coupling between what happens between the incoming e^-/e^+ pair and the outgoing q qbar pair. Thanks!

Last edited: Nov 26, 2016

10. Feb 13, 2017

### CAF123

Hi RGevo, I see that in going from the second to the third equation in my OP (which is also written in the link I posted at the bottom of the OP), there is an insertion of a delta function. It seems to me that the replacement $\int_0^1 dz\, \delta(1-z) =1$ is made. I'm not sure why this step is justified, because the zero of the delta here occurs at the end point of the integration range and so is not well defined. Yet I have seen this step made in many physics books/resources in discussions of virtual and real cancellations. Do you have any comments regarding this? Thanks!
https://isabelle.in.tum.de/repos/isabelle/rev/2b17622e1929
author: paulson
date: Wed, 11 Jul 2001 13:57:01 +0200
changeset: 11406 2b17622e1929
parent: 11405 b6e3ac38397d
child: 11407 138919f1a135
description: indexing and tweaks

--- a/doc-src/TutorialI/Rules/rules.tex Wed Jul 11 13:56:15 2001 +0200
+++ b/doc-src/TutorialI/Rules/rules.tex Wed Jul 11 13:57:01 2001 +0200
@@ -18,33 +18,15 @@
\index{natural deduction|(}% In Isabelle, proofs are constructed using inference rules. The -most familiar inference rule is probably \emph{modus ponens}: +most familiar inference rule is probably \emph{modus ponens}:% +\index{modus ponens@\emph{modus ponens}} $\infer{Q}{P\imp Q & P}$ -This rule says that from $P\imp Q$ and $P$ -we may infer~$Q$. +This rule says that from $P\imp Q$ and $P$ we may infer~$Q$. -%rule and at most one or two others, along with many complicated -%axioms. Any desired theorem could be obtained by applying \emph{modus -%ponens} or other rules to the axioms, but proofs were -%hard to find. For example, a standard inference system has -%these two axioms (amongst others): -%\begin{gather*} -% P\imp(Q\imp P) \tag{K}\\ -% (P\imp(Q\imp R))\imp ((P\imp Q)\imp(P\imp R)) \tag{S} -%\end{gather*} -%Try proving the trivial fact $P\imp P$ using these axioms and \emph{modus -%ponens}! - -\emph{Natural deduction} is an attempt to formalize logic in a way +\textbf{Natural deduction} is an attempt to formalize logic in a way that mirrors human reasoning patterns. -% -%inference rules and many axioms, it has many inference rules -%and few axioms. -% For each logical symbol (say, $\conj$), there -are two kinds of rules: \emph{introduction} and \emph{elimination} rules. +are two kinds of rules: \textbf{introduction} and \textbf{elimination} rules. The introduction rules allow us to infer this symbol (say, to infer conjunctions). The elimination rules allow us to deduce consequences from this symbol. Ideally each rule should mention
@@ -61,7 +43,7 @@
properties, which might otherwise be obscured by the technicalities of its definition.
Natural deduction rules also lend themselves to automation. Isabelle's -\emph{classical reasoner} accepts any suitable collection of natural deduction +\textbf{classical reasoner} accepts any suitable collection of natural deduction rules and uses them to search for proofs automatically. Isabelle is designed around natural deduction and many of its tools use the terminology of introduction and elimination rules.% @@ -90,7 +72,8 @@ Carefully examine the syntax. The premises appear to the left of the arrow and the conclusion to the right. The premises (if more than one) are grouped using the fat brackets. The question marks -indicate \textbf{schematic variables} (also called \textbf{unknowns}): they may +indicate \textbf{schematic variables} (also called +\textbf{unknowns}):\index{unknowns|bold} they may be replaced by arbitrary formulas. If we use the rule backwards, Isabelle tries to unify the current subgoal with the conclusion of the rule, which has the form \isa{?P\ \isasymand\ ?Q}. (Unification is discussed below, @@ -115,7 +98,7 @@ (Q\ \isasymand\ P)}. We are working backwards, so when we apply conjunction introduction, the rule removes the outermost occurrence of the \isa{\isasymand} symbol. To apply a rule to a subgoal, we apply -the proof method \isa{rule} --- here with {\isa{conjI}}, the conjunction +the proof method \isa{rule} --- here with \isa{conjI}, the conjunction introduction rule. \begin{isabelle} %\isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ P\ \isasymand\ Q\ @@ -125,7 +108,7 @@ \end{isabelle} Isabelle leaves two new subgoals: the two halves of the original conjunction. The first is simply \isa{P}, which is trivial, since \isa{P} is among -the assumptions. We can apply the \isa{assumption} +the assumptions. We can apply the \methdx{assumption} method, which proves a subgoal by finding a matching assumption. 
\begin{isabelle} \ 1.\ \isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ @@ -164,7 +147,7 @@ something else, say $R$, and we know that $P\disj Q$ holds, then we have to consider two cases. We can assume that $P$ is true and prove $R$ and then assume that $Q$ is true and prove $R$ a second time. Here we see a fundamental concept used in natural -deduction: that of the \emph{assumptions}. We have to prove $R$ twice, under +deduction: that of the \textbf{assumptions}. We have to prove $R$ twice, under different assumptions. The assumptions are local to these subproofs and are visible nowhere else. @@ -192,8 +175,8 @@ \end{isabelle} We assume \isa{P\ \isasymor\ Q} and must prove \isa{Q\ \isasymor\ P}\@. Our first step uses the disjunction -elimination rule, \isa{disjE}. We invoke it using \isa{erule}, a method -designed to work with elimination rules. It looks for an assumption that +elimination rule, \isa{disjE}\@. We invoke it using \isaindex{erule}, a +method designed to work with elimination rules. It looks for an assumption that matches the rule's first premise. It deletes the matching assumption, regards the first premise as proved and returns subgoals corresponding to the remaining premises. When we apply \isa{erule} to \isa{disjE}, only two @@ -201,9 +184,9 @@ to get three subgoals, then proving the first by assumption: the other subgoals would have the redundant assumption \hbox{\isa{P\ \isasymor\ Q}}. -Most of the -time, \isa{erule} is the best way to use elimination rules. Only rarely -can an assumption be used more than once. +Most of the time, \isa{erule} is the best way to use elimination rules, since it +replaces an assumption by its subformulas; only rarely does the original +assumption remain useful. 
\begin{isabelle} %P\ \isasymor\ Q\ \isasymLongrightarrow\ Q\ \isasymor\ P\isanewline @@ -251,11 +234,12 @@ Recall that the conjunction elimination rules --- whose Isabelle names are \isa{conjunct1} and \isa{conjunct2} --- simply return the first or second half of a conjunction. Rules of this sort (where the conclusion is a subformula of a -premise) are called \emph{destruction} rules because they take apart and destroy +premise) are called \textbf{destruction} rules because they take apart and destroy a premise.% \footnote{This Isabelle terminology has no counterpart in standard logic texts, although the distinction between the two forms of elimination rule is well known. -Girard \cite[page 74]{girard89}, for example, writes The elimination rules +Girard \cite[page 74]{girard89},\index{Girard, Jean-Yves|fnote} +for example, writes The elimination rules [for $\disj$ and $\exists$] are very bad. What is catastrophic about them is the parasitic presence of a formula [$R$] which has no structural link with the formula which is eliminated.''} @@ -269,7 +253,7 @@ \end{isabelle} To invoke the elimination rule, we apply a new method, \isa{drule}. -Think of the \isa{d} as standing for \emph{destruction} (or \emph{direct}, if +Think of the \isa{d} as standing for \textbf{destruction} (or \textbf{direct}, if you prefer). Applying the second conjunction rule using \isa{drule} replaces the assumption \isa{P\ \isasymand\ Q} by \isa{Q}. @@ -309,14 +293,14 @@ \section{Implication} \index{implication|(}% -At the start of this chapter, we saw the rule \textit{modus ponens}. It is, in fact, +At the start of this chapter, we saw the rule \emph{modus ponens}. It is, in fact, a destruction rule. 
The matching introduction rule looks like this in Isabelle: \begin{isabelle} (?P\ \isasymLongrightarrow\ ?Q)\ \isasymLongrightarrow\ ?P\ \isasymlongrightarrow\ ?Q\rulename{impI} \end{isabelle} -And this is \textit{modus ponens}: +And this is \emph{modus ponens}\index{modus ponens@\emph{modus ponens}} : \begin{isabelle} \isasymlbrakk?P\ \isasymlongrightarrow\ ?Q;\ ?P\isasymrbrakk\ \isasymLongrightarrow\ ?Q @@ -389,6 +373,7 @@ \index{implication|)} \medskip +\index{by@\isacommand{by} (command)|(}% The \isacommand{by} command is useful for proofs like these that use \isa{assumption} heavily. It executes an \isacommand{apply} command, then tries to prove all remaining subgoals using @@ -409,7 +394,9 @@ We could use \isacommand{by} to replace the final \isacommand{apply} and \isacommand{done} in any proof, but typically we use it to eliminate calls to \isa{assumption}. It is also a nice way of expressing a -one-line proof. +one-line proof.% +\index{by@\isacommand{by} (command)|)} + \section{Negation} @@ -438,14 +425,18 @@ \rulename{classical} \end{isabelle} +\index{contrapositives|(}% The implications $P\imp Q$ and $\neg Q\imp\neg P$ are logically equivalent, and each is called the -\bfindex{contrapositive} of the other. Three further rules support +\textbf{contrapositive} of the other. Four further rules support reasoning about contrapositives. 
They differ in the placement of the negation symbols: \begin{isabelle} \isasymlbrakk?Q;\ \isasymnot\ ?P\ \isasymLongrightarrow\ \isasymnot\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ ?P% \rulename{contrapos_pp}\isanewline +\isasymlbrakk?Q;\ ?P\ \isasymLongrightarrow\ \isasymnot\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ +\isasymnot\ ?P% +\rulename{contrapos_pn}\isanewline \isasymlbrakk{\isasymnot}\ ?Q;\ \isasymnot\ ?P\ \isasymLongrightarrow\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ ?P% \rulename{contrapos_np}\isanewline \isasymlbrakk{\isasymnot}\ ?Q;\ ?P\ \isasymLongrightarrow\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ \isasymnot\ ?P% @@ -454,7 +445,8 @@ % These rules are typically applied using the \isa{erule} method, where their effect is to form a contrapositive from an -assumption and the goal's conclusion. +assumption and the goal's conclusion.% +\index{contrapositives|)} The most important of these is \isa{contrapos_np}. It is useful for applying introduction rules to negated assumptions. For instance, @@ -485,7 +477,7 @@ while the negated formula \isa{R\ \isasymlongrightarrow\ Q} becomes the new conclusion. -We can now apply introduction rules. We use the {\isa{intro}} method, which +We can now apply introduction rules. We use the \methdx{intro} method, which repeatedly applies built-in introduction rules. Here its effect is equivalent to \isa{rule impI}. \begin{isabelle} @@ -512,9 +504,8 @@ \end{isabelle} This rule combines the effects of \isa{disjI1} and \isa{disjI2}. Its great advantage is that we can remove the disjunction symbol without deciding -which disjunction to prove.% -\footnote{This type of reasoning is standard in sequent and tableau -calculi.} +which disjunction to prove. This treatment of disjunction is standard in sequent +and tableau calculi. \begin{isabelle} \isacommand{lemma}\ "(P\ \isasymor\ Q)\ \isasymand\ R\ @@ -567,18 +558,17 @@ We have seen examples of many tactics that operate on individual rules. 
It may be helpful to review how they work given an arbitrary rule such as this: $\infer{Q}{P@1 & \ldots & P@n}$ -Below, we refer to $P@1$ as the \textbf{major premise}. This concept +Below, we refer to $P@1$ as the \bfindex{major premise}. This concept applies only to elimination and destruction rules. These rules act upon an -instance of their major premise, typically to replace it by other -assumptions. +instance of their major premise, typically to replace it by subformulas of itself. Suppose that the rule above is called~\isa{R}\@. Here are the basic rule methods, most of which we have already seen: \begin{itemize} \item -Method \isa{rule\ R} unifies~$Q$ with the current subgoal, which it -replaces by $n$ new subgoals, to prove instances of $P@1$, \ldots,~$P@n$. -This is backward reasoning and is appropriate for introduction rules. +Method \isa{rule\ R} unifies~$Q$ with the current subgoal, replacing it +by $n$ new subgoals: instances of $P@1$, \ldots,~$P@n$. +This is backward reasoning and is appropriate for introduction rules. \item Method \isa{erule\ R} unifies~$Q$ with the current subgoal and simultaneously unifies $P@1$ with some assumption. The subgoal is @@ -589,8 +579,8 @@ \isa{(rule\ R,\ assumption)} is similar, but it does not delete an assumption. \item -Method \isa{drule\ R} unifies $P@1$ with some assumption, which is -then deleted. The subgoal is +Method \isa{drule\ R} unifies $P@1$ with some assumption, which it +then deletes. The subgoal is replaced by the $n-1$ new subgoals of proving $P@2$, \ldots,~$P@n$; an $n$th subgoal is like the original one but has an additional assumption: an instance of~$Q$. It is appropriate for destruction rules. @@ -599,17 +589,17 @@ assumption is not deleted. (See \S\ref{sec:frule} below.) \end{itemize} -When applying a rule, we can constrain some of its -variables: +Other methods apply a rule while constraining some of its +variables. 
The typical form is \begin{isabelle} -\ \ \ \ \ rule_tac\ $v@1$ = $t@1$ \isakeyword{and} \ldots \isakeyword{and} +\ \ \ \ \ \methdx{rule_tac}\ $v@1$ = $t@1$ \isakeyword{and} \ldots \isakeyword{and} $v@k$ = $t@k$ \isakeyword{in} R \end{isabelle} This method behaves like \isa{rule R}, while instantiating the variables $v@1$, \ldots, -$v@k$ as specified. We similarly have \isa{erule_tac}, \isa{drule_tac} and -\isa{frule_tac}. These methods also let us specify which subgoal to +$v@k$ as specified. We similarly have \methdx{erule_tac}, \methdx{drule_tac} and +\methdx{frule_tac}. These methods also let us specify which subgoal to operate on. By default it is the first subgoal, as with nearly all methods, but we can specify that rule \isa{R} should be applied to subgoal number~$i$: @@ -619,17 +609,16 @@ - \section{Unification and Substitution}\label{sec:unification} \index{unification|(}% -As we have seen, Isabelle rules involve schematic variables that begin with +As we have seen, Isabelle rules involve schematic variables, which begin with a question mark and act as -placeholders for terms. \emph{Unification} refers to the process of +placeholders for terms. \textbf{Unification} refers to the process of making two terms identical, possibly by replacing their schematic variables by -terms. The simplest case is when the two terms are already the same. Next -simplest is when the variables in only one of the term - are replaced; this is called pattern-matching. The +terms. The simplest case is when the two terms are already the same. Next +simplest is \textbf{pattern-matching}, which replaces variables in only one of the +terms. The \isa{rule} method typically matches the rule's conclusion against the current subgoal. In the most complex case, variables in both terms are replaced; the \isa{rule} method can do this if the goal @@ -641,7 +630,7 @@ filled in later, sometimes in stages and often automatically. Unification is well known to Prolog programmers. 
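Since unification is doing real work behind every `rule` step, it may help to see the first-order core of the process spelled out. The sketch below is illustrative Python only, not Isabelle's implementation (which is higher-order and written in ML); the term encoding and all function names are invented for this example:

```python
# First-order unification sketch. Terms are nested tuples such as
# ("conj", "?P", "?Q"); schematic variables are strings starting with "?".
def walk(t, subst):
    # Follow variable bindings until reaching a term not bound in subst.
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t)

def unify(s, t, subst=None):
    # Return a substitution making s and t identical, or None on failure.
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str) and s.startswith("?"):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str) and t.startswith("?"):
        return unify(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        for a, b in zip(s, t):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None
```

Matching the conclusion of `conjI` against a goal corresponds to a call like `unify(("conj", "?P", "?Q"), ("conj", "a", ("conj", "b", "c")))`, which binds `?P` and `?Q`; the occurs check makes `?x` fail against `f(?x)`. Higher-order unification must handle these cases too, plus the function-variable cases discussed next.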
Isabelle uses -\emph{higher-order} unification, which works in the +\textbf{higher-order} unification, which works in the typed $\lambda$-calculus. The general case is undecidable, but for our purposes, the differences from ordinary unification are straightforward. It handles bound variables @@ -652,6 +641,7 @@ \isa{{\isasymlambda}x.\ f(x,z)} and \isa{{\isasymlambda}y.\ f(y,z)} are trivially unifiable because they differ only by a bound variable renaming. + \begin{warn} Higher-order unification sometimes must invent $\lambda$-terms to replace function variables, @@ -670,11 +660,12 @@ \end{warn} + \subsection{Substitution and the {\tt\slshape subst} Method} \label{sec:subst} \index{substitution|(}% -Isabelle also uses function variables to express \emph{substitution}. +Isabelle also uses function variables to express \textbf{substitution}. A typical substitution rule allows us to replace one term by another if we know that two terms are equal. $\infer{P[t/x]}{s=t & P[s/x]}$ @@ -692,13 +683,13 @@ \rulename{ssubst} \end{isabelle} Crucially, \isa{?P} is a function -variable: it can be replaced by a $\lambda$-expression -involving one bound variable whose occurrences identify the places +variable. It can be replaced by a $\lambda$-term +with one bound variable, whose occurrences identify the places in which $s$ will be replaced by~$t$. The proof above requires -\isa{{\isasymlambda}x.~x=s}. +the term \isa{{\isasymlambda}x.~x=s}. The \isa{simp} method replaces equals by equals, but the substitution -rule gives us more control. The \isa{subst} method is the easiest way to +rule gives us more control. The \methdx{subst} method is the easiest way to use the substitution rule. Suppose a proof has reached this point: \begin{isabelle} @@ -779,40 +770,39 @@ \ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ (f\ x)\ (f\ x)\ (f\ x) \end{isabelle} The substitution should have been done in the first two occurrences -of~\isa{x} only. Isabelle has gone too far. 
The \isa{back} -method allows us to reject this possibility and get a new one: +of~\isa{x} only. Isabelle has gone too far. The \commdx{back} +command allows us to reject this possibility and demand a new one: \begin{isabelle} \ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ x\ (f\ x)\ (f\ x) \end{isabelle} % Now Isabelle has left the first occurrence of~\isa{x} alone. That is -promising but it is not the desired combination. So we use \isa{back} +promising but it is not the desired combination. So we use \isacommand{back} again: \begin{isabelle} \ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ (f\ x)\ x\ (f\ x) \end{isabelle} % -This also is wrong, so we use \isa{back} again: +This also is wrong, so we use \isacommand{back} again: \begin{isabelle} \ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ x\ x\ (f\ x) \end{isabelle} % And this one is wrong too. Looking carefully at the series of alternatives, we see a binary countdown with reversed bits: 111, -011, 101, 001. Invoke \isa{back} again: +011, 101, 001. Invoke \isacommand{back} again: \begin{isabelle} \ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ (f\ x)\ (f\ x)\ x% \end{isabelle} At last, we have the right combination! This goal follows by assumption.% \index{unification|)} -\subsection{Keeping Unification under Control} - -The previous example showed that unification can do strange things with +\medskip +This example shows that unification can do strange things with function variables. We were forced to select the right unifier using the -\isa{back} command. That is all right during exploration, but \isa{back} +\isacommand{back} command. That is all right during exploration, but \isacommand{back} should never appear in the final version of a proof. You can eliminate the -need for \isa{back} by giving Isabelle less freedom when you apply a rule. +need for \isacommand{back} by giving Isabelle less freedom when you apply a rule. 
One way to constrain the inference is by joining two methods in an
\isacommand{apply} command. Isabelle applies the first method and then the
@@ -855,9 +845,10 @@
the third unchanged. With this instantiation, backtracking is neither
necessary nor possible.

-An alternative to \isa{rule_tac} is to use \isa{rule} with the
-\isa{of} directive, described in \S\ref{sec:forward} below. An
-advantage of \isa{rule_tac} is that the instantiations may refer to
+An alternative to \isa{rule_tac} is to use \isa{rule} with a theorem
+modified using~\isa{of}, described in
+\S\ref{sec:forward} below. An advantage of \isa{rule_tac} is that the
+instantiations may refer to
\isasymAnd-bound variables in the current subgoal.%
\index{substitution|)}


@@ -866,16 +857,16 @@

\index{quantifiers|(}\index{quantifiers!universal|(}%
Quantifiers require formalizing syntactic substitution and the notion of
-\emph{arbitrary value}. Consider the universal quantifier. In a logic
+arbitrary value. Consider the universal quantifier. In a logic
book, its introduction rule looks like this:
$\infer{\forall x.\,P}{P}$
Typically, a proviso written in English says that $x$ must not
occur in the assumptions. This proviso guarantees that $x$ can be regarded as
arbitrary, since it has not been assumed to satisfy any special conditions.
Isabelle's underlying formalism, called the
-\emph{meta-logic}, eliminates the need for English. It provides its own universal
-quantifier (\isasymAnd) to express the notion of an arbitrary value. We have
-already seen another symbol of the meta-logic, namely
+\bfindex{meta-logic}, eliminates the need for English. It provides its own
+universal quantifier (\isasymAnd) to express the notion of an arbitrary value. We
+have already seen another symbol of the meta-logic, namely
\isa\isasymLongrightarrow, which expresses inference rules and the treatment of
assumptions. The only other symbol in the meta-logic is
\isa\isasymequiv, which can be used to define constants.
@@ -963,7 +954,7 @@ quantifier by a meta-level quantifier, producing a subgoal that binds the variable~\isa{x}. The leading bound variables (here \isa{x}) and the assumptions (here \isa{{\isasymforall}x.\ P\ -\isasymlongrightarrow\ Q\ x} and \isa{P}) form the \emph{context} for the +\isasymlongrightarrow\ Q\ x} and \isa{P}) form the \textbf{context} for the conclusion, here \isa{Q\ x}. Subgoals inherit the context, although assumptions can be added or deleted (as we saw earlier), while rules such as \isa{allI} add bound variables. @@ -978,14 +969,13 @@ x)\isasymrbrakk\ \isasymLongrightarrow\ Q\ x \end{isabelle} Observe how the context has changed. The quantified formula is gone, -replaced by a new assumption derived from its body. Informally, we have -removed the quantifier. The quantified variable -has been replaced by the curious term -\isa{?x2~x}; it acts as a placeholder that may be replaced -by any term that can be built from~\isa{x}. (Formally, \isa{?x2} is an -unknown of function type, applied to the argument~\isa{x}.) This new assumption is -an implication, so we can use \emph{modus ponens} on it, which concludes -the proof. +replaced by a new assumption derived from its body. We have +removed the quantifier and replaced the bound variable +by the curious term +\isa{?x2~x}. This term is a placeholder: it may become any term that can be +built from~\isa{x}. (Formally, \isa{?x2} is an unknown of function type, applied +to the argument~\isa{x}.) This new assumption is an implication, so we can use +\emph{modus ponens} on it, which concludes the proof. \begin{isabelle} \isacommand{by}\ (drule\ mp) \end{isabelle} @@ -1034,7 +1024,7 @@ Given an existentially quantified theorem and some formula $Q$ to prove, it creates a new assumption by removing the quantifier. As with the universal introduction rule, the textbook version imposes a proviso on the -quantified variable, which Isabelle expresses using its meta-logic. 
Note that it is +quantified variable, which Isabelle expresses using its meta-logic. It is enough to have a universal quantifier in the meta-logic; we do not need an existential quantifier to be built in as well. @@ -1050,7 +1040,7 @@ \subsection{Renaming an Assumption: {\tt\slshape rename_tac}} -\index{assumptions!renaming|(}\index{*rename_tac|(}% +\index{assumptions!renaming|(}\index{*rename_tac (method)|(}% When you apply a rule such as \isa{allI}, the quantified variable becomes a new bound variable of the new subgoal. Isabelle tries to avoid changing its name, but sometimes it has to choose a new name in order to @@ -1069,7 +1059,8 @@ \isacommand{apply}\ (rename_tac\ v\ w)\isanewline \ 1.\ \isasymAnd v\ w.\ x\ <\ y\ \isasymLongrightarrow \ P\ v\ (f\ w) \end{isabelle} -Recall that \isa{rule_tac}\index{*rule_tac!and renaming} instantiates a +Recall that \isa{rule_tac}\index{*rule_tac (method)!and renaming} +instantiates a theorem with specified terms. These terms may involve the goal's bound variables, but beware of referring to variables like~\isa{xa}. A future change to your theories could change the set of names @@ -1078,13 +1069,13 @@ If the subgoal has more bound variables than there are names given to \isa{rename_tac}, the rightmost ones are renamed.% -\index{assumptions!renaming|)}\index{*rename_tac|)} +\index{assumptions!renaming|)}\index{*rename_tac (method)|)} \subsection{Reusing an Assumption: {\tt\slshape frule}} \label{sec:frule} -\index{assumptions!reusing|(}\index{*frule|(}% +\index{assumptions!reusing|(}\index{*frule (method)|(}% Note that \isa{drule spec} removes the universal quantifier and --- as usual with elimination rules --- discards the original formula. Sometimes, a universal formula has to be kept so that it can be used again. Then we use a new @@ -1092,8 +1083,8 @@ the selected assumption. The \isa{f} is for \emph{forward}. 
In this example, going from \isa{P\ a} to \isa{P(h(h~a))} -requires two uses of the quantified assumption, one for each~\isa{h} being -affixed to the term~\isa{a}. +requires two uses of the quantified assumption, one for each~\isa{h} +in~\isa{h(h~a)}. \begin{isabelle} \isacommand{lemma}\ "\isasymlbrakk{\isasymforall}x.\ P\ x\ \isasymlongrightarrow\ P\ (h\ x); \ P\ a\isasymrbrakk\ \isasymLongrightarrow\ P(h\ (h\ a))" @@ -1139,7 +1130,7 @@ Alternatively, we could have used the \isacommand{apply} command and bundled the \isa{drule mp} with \emph{two} calls of \isa{assumption}. Or, of course, we could have given the entire proof to \isa{auto}.% -\index{assumptions!reusing|)}\index{*frule|)} +\index{assumptions!reusing|)}\index{*frule (method)|)} @@ -1148,15 +1139,14 @@ We can prove a theorem of the form $\exists x.\,P\, x$ by exhibiting a suitable term~$t$ such that $P\,t$ is true. Dually, we can use an -assumption of the form $\forall x.\,P\, x$ by exhibiting a -suitable term~$t$ such that $P\,t$ is false, or (more generally) -that contributes in some way to the proof at hand. In many cases, +assumption of the form $\forall x.\,P\, x$ to generate a new assumption $P\,t$ for +a suitable term~$t$. In many cases, Isabelle makes the correct choice automatically, constructing the term by unification. In other cases, the required term is not obvious and we must specify it ourselves. Suitable methods are \isa{rule_tac}, \isa{drule_tac} and \isa{erule_tac}. -We have just seen a proof of this lemma: +We have seen (just above, \S\ref{sec:frule}) a proof of this lemma: \begin{isabelle} \isacommand{lemma}\ "\isasymlbrakk \isasymforall x.\ P\ x\ \isasymlongrightarrow \ P\ (h\ x);\ P\ a\isasymrbrakk \ @@ -1180,7 +1170,8 @@ \medskip Existential formulas can be instantiated too. 
The next example uses the
-\emph{divides} relation of number theory:
+\textbf{divides} relation\indexbold{divides relation}
+of number theory:
\begin{isabelle}
?m\ dvd\ ?n\ \isasymequiv\ {\isasymexists}k.\ ?n\ =\ ?m\ *\ k
\rulename{dvd_def}
\end{isabelle}
%
-Opening the definition of divides leaves this subgoal:
+Unfolding the definition of divides has left this subgoal:
\begin{isabelle}
\ 1.\ \isasymlbrakk \isasymexists k.\ m\ =\ i\ *\ k;\ \isasymexists k.\ n\ =\ j\ *\ k\isasymrbrakk \ \isasymLongrightarrow \ \isasymexists k.\ m\ *\
-n\ =\ i\ *\ j\ *\ k%
-\isanewline
+n\ =\ i\ *\ j\ *\ k
+\end{isabelle}
+%
+Next, we eliminate the two existential quantifiers in the assumptions:
+\begin{isabelle}
\isacommand{apply}\ (erule\ exE)\isanewline
\ 1.\ \isasymAnd k.\ \isasymlbrakk \isasymexists k.\ n\ =\ j\ *\ k;\ m\ =\ i\ *\ k\isasymrbrakk \ \isasymLongrightarrow \ \isasymexists k.\ m\ *\ n\ =\ i\ *\ j\ *\ k%
\isanewline
\isacommand{apply}\ (erule\ exE)
-\end{isabelle}
-%
-Eliminating the two existential quantifiers in the assumptions leaves this
-subgoal:
-\begin{isabelle}
+\isanewline
\ 1.\ \isasymAnd k\ ka.\ \isasymlbrakk m\ =\ i\ *\ k;\ n\ =\ j\ *\ ka\isasymrbrakk \
\isasymLongrightarrow \ \isasymexists k.\ m\ *\ n\ =\ i\ *\ j\ *\ k
\end{isabelle}
%
-The term needed to instantiate the remaining quantifier is~\isa{k*ka}:
+The term needed to instantiate the remaining quantifier is~\isa{k*ka}. But
+\isa{ka} is an automatically-generated name. As noted above, references to
+such variable names make a proof less resilient to future changes. 
So,
+first we rename the most recent variable to~\isa{l}:
\begin{isabelle}
-\isacommand{apply}\ (rule_tac\ x="k*ka"\ \isakeyword{in}\ exI)\ \isanewline
+\isacommand{apply}\ (rename_tac\ l)\isanewline
+\ 1.\ \isasymAnd k\ l.\ \isasymlbrakk m\ =\ i\ *\ k;\ n\ =\ j\ *\ l\isasymrbrakk \
+\isasymLongrightarrow \ \isasymexists k.\ m\ *\ n\ =\ i\ *\ j\ *\ k%
+\end{isabelle}
+
+We instantiate the quantifier with~\isa{k*l}:
+\begin{isabelle}
+\isacommand{apply}\ (rule_tac\ x="k*l"\ \isakeyword{in}\ exI)\ \isanewline
-\ 1.\ \isasymAnd k\ ka.\ \isasymlbrakk m\ =\ i\ *\ k;\ n\ =\ j\ *\ ka\isasymrbrakk \
-\isasymLongrightarrow \ m\ *\ n\ =\ i\ *\ j\ *\ (k\ *\ ka)
+\ 1.\ \isasymAnd k\ l.\ \isasymlbrakk m\ =\ i\ *\ k;\ n\ =\ j\ *\ l\isasymrbrakk \
+\isasymLongrightarrow \ m\ *\ n\ =\ i\ *\ j\ *\ (k\ *\ l)
@@ -1230,23 +1230,15 @@
\isacommand{done}\isanewline
\end{isabelle}

-\begin{warn}
-References to automatically-generated names like~\isa{ka} can make a proof
-brittle, especially if the proof is long. Small changes to your theory can
-cause these names to change. Robust proofs replace
-automatically-generated names by ones chosen using
-\isa{rename_tac} before giving them to \isa{rule_tac}.
-\end{warn}
-

-\section{Hilbert's Epsilon-Operator}
+\section{Hilbert's $\varepsilon$-Operator}
\label{sec:SOME}

-\index{Hilbert's epsilon-operator|(}%
+\index{Hilbert's $\varepsilon$-operator|(}%
HOL provides Hilbert's
$\varepsilon$-operator. The term $\varepsilon x. P(x)$ denotes some $x$ such that $P(x)$ is
true, provided such a value
-exists. In \textsc{ascii}, we write \isa{SOME} for the Greek
+exists. In \textsc{ascii}, we write \sdx{SOME} for the Greek
letter~$\varepsilon$.

\begin{warn}
@@ -1259,7 +1251,7 @@
\index{descriptions!definite}%
The main use of \hbox{\isa{SOME\ x.\ P\ x}} is as a \textbf{definite
description}: when \isa{P} is satisfied by a unique value,~\isa{a}. 
-We reason using this rule: +We reason using this rule:\REMARK{update if we add iota} \begin{isabelle} \isasymlbrakk P\ a;\ \isasymAnd x.\ P\ x\ \isasymLongrightarrow \ x\ =\ a\isasymrbrakk \ \isasymLongrightarrow \ (SOME\ x.\ P\ x)\ =\ a% @@ -1271,8 +1263,9 @@ prove that the cardinality of the empty set is zero (since $n=0$ satisfies the description) and proceed to prove other facts. -A more challenging example illustrates how Isabelle/HOL defines the least-number -operator, which denotes the least \isa{x} satisfying~\isa{P}: +A more challenging example illustrates how Isabelle/HOL defines the least number +operator, which denotes the least \isa{x} satisfying~\isa{P}:% +\index{least number operator} \begin{isabelle} (LEAST\ x.\ P\ x)\ = (SOME\ x.\ P\ x\ \isasymand \ (\isasymforall y.\ P\ y\ \isasymlongrightarrow \ x\ \isasymle \ y)) @@ -1313,7 +1306,7 @@ \subsection{Indefinite Descriptions} \index{descriptions!indefinite}% -Occasionally, \hbox{\isa{SOME\ x.\ P\ x}} serves as an \emph{indefinite +Occasionally, \hbox{\isa{SOME\ x.\ P\ x}} serves as an \textbf{indefinite description}, when it makes an arbitrary selection from the values satisfying~\isa{P}\@. Here is the definition of~\isa{inv},\index{*inv (constant)} which expresses inverses of functions: @@ -1321,7 +1314,7 @@ inv\ f\ \isasymequiv \ \isasymlambda y.\ SOME\ x.\ f\ x\ =\ y% \rulename{inv_def} \end{isabelle} -The inverse of \isa{f}, when applied to \isa{y}, returns some {x} such that +The inverse of \isa{f}, when applied to \isa{y}, returns some~\isa{x} such that \isa{f~x~=~y}. For example, we can prove \isa{inv~Suc} really is the inverse of the \isa{Suc} function \begin{isabelle} @@ -1347,12 +1340,13 @@ x\isasymrbrakk \ \isasymLongrightarrow \ Q\ (SOME\ x.\ P\ x) \rulename{someI2} \end{isabelle} -Rule \isa{someI} is basic (if anything satisfies \isa{P} then so does -\hbox{\isa{SOME\ x.\ P\ x}}). Rule \isa{someI2} is easier to apply in a backward -proof. 
+Rule \isa{someI} is basic: if anything satisfies \isa{P} then so does +\hbox{\isa{SOME\ x.\ P\ x}}. The repetition of~\isa{P} in the conclusion makes it +difficult to apply in a backward proof, so the derived rule \isa{someI2} is +also provided. \medskip -For example, let us prove the Axiom of Choice: +For example, let us prove the \rmindex{axiom of choice}: \begin{isabelle} \isacommand{theorem}\ axiom_of_choice: \ "(\isasymforall x.\ \isasymexists y.\ P\ x\ y)\ \isasymLongrightarrow \ @@ -1386,11 +1380,10 @@ $\infer{P[(\varepsilon x. P) / \, x]}{\exists x.\,P}$ This rule is seldom used for that purpose --- it can cause exponential blow-up --- but it is occasionally used as an introduction rule -for~$\varepsilon$-operator. Its name in HOL is \isa{someI_ex}. +for~$\varepsilon$-operator. Its name in HOL is \tdxbold{someI_ex}.%% +\index{Hilbert's $\varepsilon$-operator|)} -\index{Hilbert's epsilon-operator|)} - \section{Some Proofs That Fail} \index{proofs!examples of failing|(}% @@ -1458,8 +1451,8 @@ is there an $x$ such that $R\,x\,y$ holds for all $y$? Let us see what happens when we attempt to prove it. \begin{isabelle} -\isacommand{lemma}\ "\isasymforall \ y.\ R\ y\ y\ \isasymLongrightarrow -\ \isasymexists x.\ \isasymforall \ y.\ R\ x\ y" +\isacommand{lemma}\ "\isasymforall y.\ R\ y\ y\ \isasymLongrightarrow +\ \isasymexists x.\ \isasymforall y.\ R\ x\ y" \end{isabelle} First, we remove the existential quantifier. The new proof state has an unknown, namely~\isa{?x}. @@ -1495,13 +1488,13 @@ \section{Proving Theorems Using the {\tt\slshape blast} Method} \index{*blast (method)|(}% -It is hard to prove substantial theorems using the methods -described above. A proof may be dozens or hundreds of steps long. You +It is hard to prove many theorems using the methods +described above. A proof may be hundreds of steps long. You may need to search among different ways of proving certain subgoals. Often a choice that proves one subgoal renders another impossible to prove. 
There are further complications that we have not discussed, concerning negation and disjunction. Isabelle's -\emph{classical reasoner} is a family of tools that perform such +\textbf{classical reasoner} is a family of tools that perform such proofs automatically. The most important of these is the \isa{blast} method. @@ -1511,12 +1504,11 @@ We begin with examples from pure predicate logic. The following example is known as Andrew's challenge. Peter Andrews designed -it to be hard to prove by automatic means.% -\footnote{It is particularly hard for a resolution prover. The -nested biconditionals cause a combinatorial explosion in the conversion to -clause form. Pelletier~\cite{pelletier86} describes it and many other -problems for automatic theorem provers.} -However, the +it to be hard to prove by automatic means. +It is particularly hard for a resolution prover, where +converting the nested biconditionals to +clause form produces a combinatorial +explosion~\cite{pelletier86}. However, the \isa{blast} method proves it in a fraction of a second. \begin{isabelle} \isacommand{lemma}\ @@ -1536,7 +1528,7 @@ \end{isabelle} The next example is a logic problem composed by Lewis Carroll. The \isa{blast} method finds it trivial. Moreover, it turns out -that not all of the assumptions are necessary. We can easily +that not all of the assumptions are necessary. We can experiment with variations of this formula and see which ones can be proved. \begin{isabelle} @@ -1571,8 +1563,8 @@ \isacommand{by}\ blast \end{isabelle} The \isa{blast} method is also effective for set theory, which is -described in the next chapter. This formula below may look horrible, but -the \isa{blast} method proves it easily. +described in the next chapter. The formula below may look horrible, but +the \isa{blast} method proves it in milliseconds. 
\begin{isabelle} \isacommand{lemma}\ "({\isasymUnion}i{\isasymin}I.\ A(i))\ \isasyminter\ ({\isasymUnion}j{\isasymin}J.\ B(j))\ =\isanewline \ \ \ \ \ \ \ \ ({\isasymUnion}i{\isasymin}I.\ {\isasymUnion}j{\isasymin}J.\ A(i)\ \isasyminter\ B(j))"\isanewline @@ -1595,7 +1587,7 @@ An important special case avoids all these complications. A logical equivalence, which in higher-order logic is an equality between formulas, can be given to the classical -reasoner and simplifier by using the attribute \isa{iff}. You +reasoner and simplifier by using the attribute \attrdx{iff}. You should do so if the right hand side of the equivalence is simpler than the left-hand side. @@ -1603,7 +1595,7 @@ The result of appending two lists is empty if and only if both of the lists are themselves empty. Obviously, applying this equivalence will result in a simpler goal. When stating this lemma, we include -the \isa{iff} attribute. Once we have proved the lemma, Isabelle +the \attrdx{iff} attribute. Once we have proved the lemma, Isabelle will make it known to the classical reasoner (and to the simplifier). \begin{isabelle} \isacommand{lemma}\ @@ -1615,7 +1607,7 @@ \end{isabelle} % This fact about multiplication is also appropriate for -the \isa{iff} attribute: +the \attrdx{iff} attribute: \begin{isabelle} (\mbox{?m}\ *\ \mbox{?n}\ =\ 0)\ =\ (\mbox{?m}\ =\ 0\ \isasymor\ \mbox{?n}\ =\ 0) \end{isabelle} @@ -1624,16 +1616,18 @@ disjunctive reasoning is hard, but translating to an actual disjunction works: the classical reasoner handles disjunction properly. -In more detail, this is how the \isa{iff} attribute works. It converts +In more detail, this is how the \attrdx{iff} attribute works. It converts the equivalence $P=Q$ to a pair of rules: the introduction rule $Q\Imp P$ and the destruction rule $P\Imp Q$. It gives both to the classical reasoner as safe rules, ensuring that all occurrences of $P$ in a subgoal are replaced by~$Q$. 
The simplifier performs the same replacement, since \isa{iff} gives $P=Q$ to the -simplifier. But classical reasoning is different from -simplification. Simplification is deterministic: it applies rewrite rules -repeatedly, as long as possible, in order to \emph{transform} a goal. Classical -reasoning uses search and backtracking in order to \emph{prove} a goal.% +simplifier. + +Classical reasoning is different from +simplification. Simplification is deterministic. It applies rewrite rules +repeatedly, as long as possible, transforming a goal into another goal. Classical +reasoning uses search and backtracking in order to prove a goal outright.% \index{*blast (method)|)}% @@ -1645,7 +1639,7 @@ to a limited extent, giving the user fine control over the proof. Of the latter methods, the most useful is -\isa{clarify}.\indexbold{*clarify (method)} +\methdx{clarify}. It performs all obvious reasoning steps without splitting the goal into multiple parts. It does not apply unsafe rules that could render the @@ -1672,15 +1666,12 @@ and the elimination rule for ~\isa{\isasymand}. It did not apply the introduction rule for \isa{\isasymand} because of its policy never to split goals. -Also available is \isa{clarsimp},\indexbold{*clarsimp (method)} -a method -that interleaves \isa{clarify} and \isa{simp}. Also there is -\isa{safe},\indexbold{*safe (method)} -which like \isa{clarify} performs obvious steps and even applies those that +Also available is \methdx{clarsimp}, a method +that interleaves \isa{clarify} and \isa{simp}. Also there is \methdx{safe}, +which like \isa{clarify} performs obvious steps but even applies those that split goals. -\indexbold{*force (method)}% -The \isa{force} method applies the classical +The \methdx{force} method applies the classical reasoner and simplifier to one goal. Unless it can prove the goal, it fails. 
Contrast that with the \isa{auto} method, which also combines classical reasoning @@ -1724,41 +1715,41 @@ The proof from this point is trivial. Could we have proved the theorem with a single command? Not using \isa{blast}: it cannot perform the higher-order unification needed here. The -\isa{fast}\indexbold{*fast (method)} method succeeds: +\methdx{fast} method succeeds: \begin{isabelle} \isacommand{apply}\ (fast\ intro!:\ someI) \end{isabelle} -The \isa{best}\indexbold{*best (method)} method is similar to +The \methdx{best} method is similar to \isa{fast} but it uses a best-first search instead of depth-first search. Accordingly, it is slower but is less susceptible to divergence. -Transitivity rules usually cause \isa{fast} to loop where often \isa{best} -can manage. +Transitivity rules usually cause \isa{fast} to loop where \isa{best} +can often manage. Here is a summary of the classical reasoning methods: \begin{itemize} -\item \isa{blast} works automatically and is the fastest -\item \isa{clarify}\indexbold{*clarify (method)} and -\isa{clarsimp}\indexbold{*clarsimp (method)} -perform obvious steps without splitting the goal; -\isa{safe}\indexbold{*safe (method)} even splits goals -\item \isa{force}\indexbold{*force (method)} uses classical reasoning -and simplification to prove a goal; - \isa{auto} is similar but leaves what it cannot prove -\item \isa{fast} and \isa{best} are legacy methods that work well with rules involving -unusual features +\item \methdx{blast} works automatically and is the fastest + +\item \methdx{clarify} and \methdx{clarsimp} perform obvious steps without +splitting the goal; \methdx{safe} even splits goals + +\item \methdx{force} uses classical reasoning and simplification to prove a goal; + \methdx{auto} is similar but leaves what it cannot prove + +\item \methdx{fast} and \methdx{best} are legacy methods that work well with rules +involving unusual features \end{itemize} A table illustrates the relationships among four of these 
methods. \begin{center} \begin{tabular}{r|l|l|} & no split & split \\ \hline - no simp & \isa{clarify} & \isa{safe} \\ \hline - simp & \isa{clarsimp} & \isa{auto} \\ \hline + no simp & \methdx{clarify} & \methdx{safe} \\ \hline + simp & \methdx{clarsimp} & \methdx{auto} \\ \hline \end{tabular} \end{center} -\section{Directives for Forward Proof}\label{sec:forward} +\section{Forward Proof: Transforming Theorems}\label{sec:forward} \index{forward proof|(}% Forward proof means deriving new facts from old ones. It is the @@ -1766,7 +1757,7 @@ subgoals, can help us find a difficult proof. But it is not always the best way of presenting the proof so found. Forward proof is particularly good for reasoning from the general -to the specific. For example, consider the following distributive law for +to the specific. For example, consider this distributive law for the greatest common divisor: $k\times\gcd(m,n) = \gcd(k\times m,k\times n)$ @@ -1780,11 +1771,11 @@ Re-orientation works by applying the symmetry of equality to an equation, so it too is a forward step. -\subsection{The {\tt\slshape of} and {\tt\slshape THEN} Directives} +\subsection{Modifying a Theorem using {\tt\slshape of} and {\tt\slshape THEN}} Let us reproduce our examples in Isabelle. Recall that in \S\ref{sec:recdef-simplification} we declared the recursive function -\isa{gcd}: +\isa{gcd}:\index{*gcd (constant)|(} \begin{isabelle} \isacommand{consts}\ gcd\ ::\ "nat*nat\ \isasymRightarrow\ nat"\isanewline \isacommand{recdef}\ gcd\ "measure\ ((\isasymlambda(m,n).n))"\isanewline @@ -1797,21 +1788,18 @@ ?k\ *\ gcd\ (?m,\ ?n)\ =\ gcd\ (?k\ *\ ?m,\ ?k\ *\ ?n) \rulename{gcd_mult_distrib2} \end{isabelle} -Now we can carry out the derivation shown above. -The first step is to replace \isa{?m} by~1. -The \isa{of}\indexbold{*of (directive)} -directive -refers to variables not by name but by their order of occurrence in the theorem. -In this case, the variables are \isa{?k}, \isa{?m} and~\isa{?n}. 
So, the -expression +% +The first step in our derivation is to replace \isa{?m} by~1. We instantiate the +theorem using~\attrdx{of}, which identifies variables in order of their +appearance from left to right. In this case, the variables are \isa{?k}, \isa{?m} +and~\isa{?n}. So, the expression \hbox{\texttt{[of k 1]}} replaces \isa{?k} by~\isa{k} and \isa{?m} by~\isa{1}. \begin{isabelle} \isacommand{lemmas}\ gcd_mult_0\ =\ gcd_mult_distrib2\ [of\ k\ 1] \end{isabelle} % -The keyword \isacommand{lemmas}\index{lemmas@\isacommand{lemmas}|bold} -declares a new theorem, which can be derived +The keyword \commdx{lemmas} declares a new theorem, which can be derived from an existing one using attributes such as \isa{[of~k~1]}. The command \isa{thm gcd_mult_0} @@ -1831,7 +1819,7 @@ The next step is to put the theorem \isa{gcd_mult_0} into a simplified form, performing the steps -$\gcd(1,n)=1$ and $k\times1=k$. The \isaindexbold{simplified} +$\gcd(1,n)=1$ and $k\times1=k$. The \attrdx{simplified} attribute takes a theorem and returns the result of simplifying it, with respect to the default simplification rules: @@ -1861,7 +1849,7 @@ \begin{isabelle} \ \ \ \ \ gcd\ (k,\ k\ *\ ?n)\ =\ k% \end{isabelle} -\isa{THEN~sym}\indexbold{*THEN (directive)} gives the current theorem to the +\isa{THEN~sym}\indexbold{*THEN (attribute)} gives the current theorem to the rule \isa{sym} and returns the resulting conclusion. The effect is to exchange the two operands of the equality. Typically \isa{THEN} is used with destruction rules. Also useful is \isa{THEN~spec}, which removes the @@ -1903,28 +1891,34 @@ resulting theorem will have {\isa{?k}} instead of {\isa{k}}. At the start of this section, we also saw a proof of $\gcd(k,k)=k$. 
Here -is the Isabelle version: +is the Isabelle version:\index{*gcd (constant)|)} \begin{isabelle} \isacommand{lemma}\ gcd_self\ [simp]:\ "gcd(k,k)\ =\ k"\isanewline \isacommand{by}\ (rule\ gcd_mult\ [of\ k\ 1,\ simplified]) \end{isabelle} +\begin{warn} +To give~\isa{of} a nonvariable term, enclose it in quotation marks, as in +\isa{[of "k*m"]}. The term must not contain unknowns: an +attribute such as +\isa{[of "?k*m"]} will be rejected. +\end{warn} + \begin{exercise} In \S\ref{sec:subst} the method \isa{subst\ mult_commute} was applied. How can we achieve the same effect using \isa{THEN} with the rule \isa{ssubst}? % answer rule (mult_commute [THEN ssubst]) - \end{exercise} -\subsection{The {\tt\slshape OF} Directive} +\subsection{Modifying a Theorem using {\tt\slshape OF}} -\index{*OF (directive)|(}% +\index{*OF (attribute)|(}% Recall that \isa{of} generates an instance of a rule by specifying values for its variables. Analogous is \isa{OF}, which generates an instance of a rule by specifying facts for its premises. -We again need the -\emph{divides} relation of number theory, which as we recall is defined by +We again need the divides relation\index{divides relation} of number theory, which +as we recall is defined by \begin{isabelle} ?m\ dvd\ ?n\ \isasymequiv\ {\isasymexists}k.\ ?n\ =\ ?m\ *\ k \rulename{dvd_def} @@ -1993,23 +1987,23 @@ typically with a destruction rule to extract a subformula of the current theorem. 
We use \isa{OF} with a list of facts to generate an instance of the current theorem.% -\index{*OF (directive)|)} +\index{*OF (attribute)|)} Here is a summary of some primitives for forward reasoning: \begin{itemize} -\item \isa{of} instantiates the variables of a rule to a list of terms -\item \isa{OF} applies a rule to a list of theorems -\item \isa{THEN} gives a theorem to a named rule and returns the +\item \attrdx{of} instantiates the variables of a rule to a list of terms +\item \attrdx{OF} applies a rule to a list of theorems +\item \attrdx{THEN} gives a theorem to a named rule and returns the conclusion -%\item \isa{rule_format} puts a theorem into standard form +%\item \attrdx{rule_format} puts a theorem into standard form % by removing \isa{\isasymlongrightarrow} and~\isa{\isasymforall} -\item \isa{simplified} applies the simplifier to a theorem +\item \attrdx{simplified} applies the simplifier to a theorem \item \isacommand{lemmas} assigns a name to the theorem produced by the attributes above \end{itemize} -\section{Methods for Forward Proof} +\section{Forward Reasoning in a Backward Proof} We have seen that the forward proof directives work well within a backward proof. There are many ways to achieve a forward style using our existing @@ -2062,7 +2056,7 @@ \subsection{The Method {\tt\slshape insert}} -\index{*insert(method)|(}% +\index{*insert (method)|(}% The \isa{insert} method inserts a given theorem as a new assumption of the current subgoal. This already is a forward step; moreover, we may (as always when using a @@ -2133,21 +2127,27 @@ \end{isabelle} Simplification reduces \isa{(m\ *\ n)\ mod\ n} to zero. Then it cancels the factor~\isa{n} on both -sides of the equation, proving the theorem.% -\index{*insert(method)|)} +sides of the equation \isa{(m\ *\ n)\ div\ n\ *\ n\ =\ m\ *\ n}, proving the +theorem. + +\begin{warn} +Any unknowns in the theorem given to \methdx{insert} will be universally +quantified in the new assumption. 
+\end{warn}% +\index{*insert (method)|)} \subsection{The Method {\tt\slshape subgoal_tac}} \index{*subgoal_tac (method)}% -A similar method is \isa{subgoal_tac}. +A related method is \isa{subgoal_tac}, but instead of inserting a theorem as an assumption, it inserts an arbitrary formula. This formula must be proved later as a separate subgoal. The idea is to claim that the formula holds on the basis of the current assumptions, to use this claim to complete the proof, and finally -to justify the claim. It is a valuable means of giving the proof -some structure. The explicit formula will be more readable than -proof commands that yield that formula indirectly. +to justify the claim. It gives the proof +some structure. If you find yourself generating a complex assumption by a +long series of forward steps, consider using \isa{subgoal_tac} instead: you can +state the formula you are aiming for, and perhaps prove it automatically. Look at the following example. \begin{isabelle} @@ -2162,10 +2162,10 @@ \isacommand{apply}\ force\isanewline \isacommand{done} \end{isabelle} -Let us prove it informally. The first assumption tells us -that \isa{z} is no greater than 36. The second tells us that \isa{z} -is at least 34. The third assumption tells us that \isa{z} cannot be 35, since -$35\times35=1225$. So \isa{z} is either 34 or 36, and since \isa{Q} holds for +The first assumption tells us +that \isa{z} is no greater than~36. The second tells us that \isa{z} +is at least~34. The third assumption tells us that \isa{z} cannot be 35, since +$35\times35=1225$. So \isa{z} is either 34 or~36, and since \isa{Q} holds for both of those values, we have the conclusion. The Isabelle proof closely follows this reasoning. 
The first @@ -2201,8 +2201,8 @@ \medskip Summary of these methods: \begin{itemize} -\item {\isa{insert}} adds a theorem as a new assumption -\item {\isa{subgoal_tac}} adds a formula as a new assumption and leaves the +\item \methdx{insert} adds a theorem as a new assumption +\item \methdx{subgoal_tac} adds a formula as a new assumption and leaves the subgoal of proving that formula \end{itemize} \index{forward proof|)} @@ -2217,8 +2217,9 @@ \subsection{Tacticals, or Control Structures} +\index{tacticals|(}% If the proof is long, perhaps it at least has some regularity. Then you can -express it more concisely using \bfindex{tacticals}, which provide control +express it more concisely using \textbf{tacticals}, which provide control structures. Here is a proof (it would be a one-liner using \isa{blast}, but forget that) that contains a series of repeated commands: @@ -2240,7 +2241,7 @@ concludes~\isa{S}. The final step matches the assumption \isa{S} with the goal to be proved. -Suffixing a method with a plus sign~(\isa+) +Suffixing a method with a plus sign~(\isa+)\index{*"+ (tactical)} expresses one or more repetitions: \begin{isabelle} \isacommand{lemma}\ "\isasymlbrakk P\isasymlongrightarrow Q;\ Q\isasymlongrightarrow R;\ R\isasymlongrightarrow S;\ P\isasymrbrakk \ \isasymLongrightarrow \ S"\isanewline @@ -2252,10 +2253,14 @@ for a chain of implications having any length, not just three. Choice is another control structure. Separating two methods by a vertical -bar~(\isa|) gives the effect of applying the first method, and if that fails, -trying the second. It can be combined with repetition, when the choice must be -made over and over again. Here is a chain of implications in which most of the -antecedents are proved by assumption, but one is proved by arithmetic: +% we must use ?? rather than "| as the sorting item because somehow the presence +% of | (even quoted) stops hyperref from putting |hyperpage at the end of the index +% entry. 
+bar~(\isa|)\index{??@\texttt{"|} (tactical)} gives the effect of applying the +first method, and if that fails, trying the second. It can be combined with +repetition, when the choice must be made over and over again. Here is a chain of +implications in which most of the antecedents are proved by assumption, but one is +proved by arithmetic: \begin{isabelle} \isacommand{lemma}\ "\isasymlbrakk Q\isasymlongrightarrow R;\ P\isasymlongrightarrow Q;\ x<\#5\isasymlongrightarrow P;\ Suc\ x\ <\ \#5\isasymrbrakk \ \isasymLongrightarrow \ R"\ \isanewline @@ -2266,9 +2271,11 @@ \isa{assumption}. Therefore, we combine these methods using the choice operator. -A postfixed question mark~(\isa?) expresses zero or one repetitions of a method. -It can also be viewed as the choice between executing a method and doing nothing. -It is useless at top level but may be valuable within other control structures. +A postfixed question mark~(\isa?)\index{*"? (tactical)} expresses zero or one +repetitions of a method. It can also be viewed as the choice between executing a +method and doing nothing. It is useless at top level but may be valuable within +other control structures.% +\index{tacticals|)} \subsection{Subgoal Numbering} @@ -2286,7 +2293,7 @@ \end{isabelle} If each \isa{bigsubgoal} is 15 lines or so, the proof state will be too big to scroll through. By default, Isabelle displays at most 10 subgoals. The -\isacommand{pr} command lets you change this limit: +\commdx{pr} command lets you change this limit: \begin{isabelle} \isacommand{pr}\ 2\isanewline \ 1.\ bigsubgoal1\isanewline @@ -2311,7 +2318,7 @@ \ 3.\ Q\ \isasymLongrightarrow \ Q% \end{isabelle} % -The \isacommand{defer} command moves the first subgoal into the last position. +The \commdx{defer} command moves the first subgoal into the last position. 
\begin{isabelle} \isacommand{defer}\ 1\isanewline \ 1.\ \isasymnot \ \isasymnot \ P\ \isasymLongrightarrow \ P\isanewline @@ -2328,7 +2335,7 @@ that we can devote attention to the difficult part. \medskip -The \isacommand{prefer} command moves the specified subgoal into the +The \commdx{prefer} command moves the specified subgoal into the first position. For example, if you suspect that one of your subgoals is invalid (not a theorem), then you should investigate that subgoal first. If it cannot be proved, then there is no point in proving the other subgoals. @@ -2376,7 +2383,7 @@ \section{Proving the Correctness of Euclid's Algorithm} \label{sec:proving-euclid} -\index{Euclid's algorithm|(}\index{*gcd (function)|(}% +\index{Euclid's algorithm|(}\index{*gcd (constant)|(}\index{divides relation|(}% A brief development will demonstrate the techniques of this chapter, including \isa{blast} applied with additional rules. We shall also see \isa{case_tac} used to perform a Boolean case split. @@ -2404,7 +2411,7 @@ \end{isabelle} The conditional induction hypothesis suggests doing a case -analysis on \isa{n=0}. We apply \isa{case_tac} with type +analysis on \isa{n=0}. We apply \methdx{case_tac} with type \isa{bool} --- and not with a datatype, as we have done until now. Since \isa{nat} is a datatype, we could have written \isa{case_tac~"n"} instead of \isa{case_tac~"n=0"}. However, the definition @@ -2457,8 +2464,8 @@ \isacommand{apply}\ (blast\ dest:\ dvd_mod_imp_dvd)\isanewline \isacommand{done} \end{isabelle} -Attaching the {\isa{dest}} attribute to \isa{dvd_mod_imp_dvd} tells -\isa{blast} to use it as destruction rule: in the forward direction. +Attaching the \attrdx{dest} attribute to \isa{dvd_mod_imp_dvd} tells +\isa{blast} to use it as destruction rule; that is, in the forward direction. \medskip We have proved a conjunction. 
Now, let us give names to each of the @@ -2467,9 +2474,9 @@ \isacommand{lemmas}\ gcd_dvd1\ [iff]\ =\ gcd_dvd_both\ [THEN\ conjunct1]\isanewline \isacommand{lemmas}\ gcd_dvd2\ [iff]\ =\ gcd_dvd_both\ [THEN\ conjunct2]% \end{isabelle} -Here we see \isacommand{lemmas}\index{lemmas@\isacommand{lemmas}} -used with the \isa{iff} attribute, which supplies the new theorems to the -classical reasoner and the simplifier. Recall that \isa{THEN} is +Here we see \commdx{lemmas} +used with the \attrdx{iff} attribute, which supplies the new theorems to the +classical reasoner and the simplifier. Recall that \attrdx{THEN} is frequently used with destruction rules; \isa{THEN conjunct1} extracts the first half of a conjunctive theorem. Given \isa{gcd_dvd_both} it yields \begin{isabelle} @@ -2576,4 +2583,4 @@ aggressively because it yields simpler subgoals. The proof implicitly uses \isa{gcd_dvd1} and \isa{gcd_dvd2} as safe rules, because they were declared using \isa{iff}.% -\index{Euclid's algorithm|)}\index{*gcd (function)|)} +\index{Euclid's algorithm|)}\index{*gcd (constant)|)}\index{divides relation|)}
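Standing outside the patch itself, the forward-proof attributes renamed in these hunks (`of`, `simplified`, `THEN`) appear in the tutorial's own `gcd` development; as a consolidated sketch (the final lemma name `gcd_mult_sym` is mine, not the tutorial's):

```isabelle
(* instantiate ?k := k and ?m := 1 in the distributive law *)
lemmas gcd_mult_0 = gcd_mult_distrib2 [of k 1]

(* rewrite with the default simp rules: gcd(1,n) = 1 and k*1 = k *)
lemmas gcd_mult = gcd_mult_0 [simplified]

(* pass the result to sym, exchanging the two sides of the equality *)
lemmas gcd_mult_sym = gcd_mult [THEN sym]
```

Each step derives a new named theorem from the previous one, which is exactly the forward style these sections describe.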
http://mathoverflow.net/questions/148636/how-to-check-whether-a-positive-integer-can-be-written-as-linear-combination-of/148666
# How to check whether a positive integer can be written as a linear combination of given others, where all coefficients are positive?

Let $n$, $k$ and $m_1, \dots, m_k$ be positive integers. What is the most efficient algorithm to find out whether there are positive integers $a_1, \dots, a_k$ such that $n = \sum_{i=1}^k a_i m_i$? To make things nontrivial, think of $k$ as being in the hundreds, and of $n$ and the $m_i$ as having hundreds of decimal digits each.

Clearly, if we removed the requirement that the $a_i$ are positive, Bézout's identity (via the extended Euclidean algorithm) would tell us the answer; but we do require them to be positive.

- MO is intended for topics at the graduate-school level and above. – Andy Putman Nov 12 '13 at 4:10
- I don't see how this is not at the graduate-school level or above; nevertheless, the problem seems to be equivalent to the unbounded subset-sum problem, whose NP-completeness is mentioned in mathoverflow.net/a/144983/12705. – Emil Jeřábek Nov 12 '13 at 12:39
- The question sounds perfectly legitimate to me (and Emil Jeřábek provided an answer to it; maybe he wants to post it as an actual answer as opposed to a comment, now that the question is reopened). – André Henriques Nov 12 '13 at 12:47
- Andy, this is an important question related to the works of Sylvester and Frobenius, where much was discovered in recent decades. – Gil Kalai Nov 12 '13 at 13:58

The problem can be thought of as a coin problem: there are $k$ coins with denominations $m_1,\dots,m_k$ and you want to express an amount $n$ with these coins. As stated, the problem is an integer programming question, which is NP-complete when $k$ is part of the input. When $k$ is fixed, it is in P (with running time exponential in $k$) by an algorithm of Lenstra. The problem is closely related to the Frobenius/Sylvester coin problem: to find the minimum $n$ such that every larger integer has such a representation. See here and here. A polynomial algorithm when $k$ is bounded was achieved by Ravi Kannan.
(The dependence on $k$ is double-exponential.) These two problems (finding a representation for a fixed $n$, and finding the value of $n$ above which a representation always exists) represent the first two levels in the Presburger hierarchy. An important open problem here is to find a P-algorithm for higher-order problems in the Presburger hierarchy. Of course, another important question is how to solve such questions in practice. I suppose other people can answer that better than me. One method that certainly comes to mind is to consider the linear programming relaxation (i.e. to allow rational $a_i$s) and then apply some rounding and "local" improvement. The range proposed by the OP, where $k$ (the number of coins) is in the hundreds, is interesting. I don't know if current algorithms can reach this range.
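For intuition only (certainly not for the hundred-digit instances in the question), the positivity requirement reduces to the standard unbounded coin problem after spending one of each coin, and a pseudo-polynomial dynamic program then decides the question. The function name and example values below are my own illustration, not from the answer:

```python
def representable(n, coins):
    """Can n be written as sum(a_i * coins[i]) with every a_i >= 1?"""
    # Positivity means each coin is used at least once: spend one of each,
    # then the remainder must be reachable with *nonnegative* coefficients.
    base = sum(coins)
    if base > n:
        return False
    rest = n - base
    # Classic unbounded coin-problem DP: reachable[t] is True iff some
    # nonnegative combination of the coins sums to t.
    reachable = [False] * (rest + 1)
    reachable[0] = True
    for t in range(1, rest + 1):
        reachable[t] = any(t >= c and reachable[t - c] for c in coins)
    return reachable[rest]

# For example, representable(11, [3, 5]) is True, since 11 = 2*3 + 1*5,
# while representable(7, [3, 5]) is False (the minimum with both coins is 8).
```

The table has about $n$ entries, so the running time is polynomial in the value of $n$ but exponential in its number of digits, which is consistent with the NP-completeness mentioned in the comments.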
https://www.physicsforums.com/threads/prove-itex-lim_-a-to-0-frac-1-a-infty-itex.644175/
# Prove $\lim_{a\to 0^+}\frac{1}{a} = +\infty$

## Homework Statement

Prove $\lim_{a\to 0^+}\frac{1}{a} = +\infty$ under the $\epsilon$ definition of a limit.

## The Attempt at a Solution

Well, I can't write $\frac{1}{a} - \infty < \epsilon$, can I? Otherwise it's just obvious that it's infinity.

STEMucator (Homework Helper):

For this particular problem you need to alter your definition a bit, since $|f(x) - \infty| < \epsilon$ translates into a useless statement. You want to use this definition:

$$\forall M>0,\ \exists \delta>0 \ \text{such that}\ 0<|x-c|<\delta \Rightarrow f(x) > M$$

What this definition essentially means is that we can find a delta such that the function grows without bound. Start by massaging the expression $f(x) > M$ into a suitable form $|x-c| < \delta$, which will give you a $\delta$ which MIGHT work. Then take that $\delta$ and show that it implies $f(x) > M$.
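Following that recipe for $f(a) = 1/a$ with $c = 0$ and the one-sided condition $0 < a < \delta$, one possible choice of $\delta$ works out as follows (a sketch of the final argument, not quoted from the thread):

```latex
\text{Given } M > 0,\ \text{choose } \delta = \frac{1}{M}.
\text{ Then } 0 < a < \delta \;\Longrightarrow\; \frac{1}{a} > \frac{1}{\delta} = M,
\text{ so } \lim_{a\to 0^{+}} \frac{1}{a} = +\infty .
```

The "massaging" step is visible here: solving $1/a > M$ for $a$ gives $a < 1/M$, which is exactly the $\delta$ used.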
https://www.aimsciences.org/article/doi/10.3934/era.2021034
# American Institute of Mathematical Sciences

doi: 10.3934/era.2021034 (Online First)

## Firing patterns and bifurcation analysis of neurons under electromagnetic induction

1 School of Mathematics, South China University of Technology, Guangzhou 510640, China
2 School of Science and Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, China

* Corresponding author: Shenquan Liu

Received November 2020. Revised March 2021. Early access April 2021.

Fund Project: The first author is supported by NSF of China under Grant Nos. 11872183 and 11572127.

Based on the three-dimensional endocrine neuron model, a four-dimensional endocrine neuron model was constructed by introducing a magnetic flux variable and an induced current according to the law of electromagnetic induction. Firstly, codimension-one bifurcation and interspike-interval (ISI) analysis were applied to study the bifurcation structure with respect to the external stimulus and the parameter $k_0$, and two dynamical behaviors were found: period-adding bifurcation, and period-doubling bifurcation leading to chaos. In addition, the Hopf bifurcation was discussed in particular, corresponding to the transition between firing states.
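The abstract does not reproduce the model equations. For orientation only: in this literature (e.g. the Lv–Ma line of electromagnetic-induction neuron models), the magnetic flux and the induced current are commonly coupled through a memristive conductance, along the lines of

```latex
\frac{d\varphi}{dt} = k_1 V - k_2 \varphi, \qquad
I_{\mathrm{ind}} = -\,k_0\,\rho(\varphi)\,V, \qquad
\rho(\varphi) = \alpha + 3\beta\varphi^{2},
```

where $V$ is the membrane potential, $\varphi$ the magnetic flux, $\rho(\varphi)$ the memductance, and $k_0$ the feedback gain. These functional forms and the constants $k_1, k_2, \alpha, \beta$ are standard placeholders from that literature, not taken from this paper.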
Secondly, different firing patterns such as regular bursting, subthreshold oscillations, fast spiking, and mixed-mode oscillations (MMOs) can be observed by changing the external stimulus and the induced current; the neuron model exhibited a richer range of firing activities under strong coupling. Finally, codimension-two bifurcation analysis revealed more details of the bifurcation structure. The Bogdanov-Takens bifurcation point was also analyzed, and three bifurcation curves were derived.

Citation: Qixiang Wen, Shenquan Liu, Bo Lu. Firing patterns and bifurcation analysis of neurons under electromagnetic induction. Electronic Research Archive, doi: 10.3934/era.2021034
doi: 10.1016/j.physleta.2017.05.020.  Google Scholar [17] C. S. Nunemaker, R. Bertram, A. Sherman, K. Tsaneva-Atanasova, C. R. Daniel and L. S. Satin, Glucose modulates ${[Ca^{2+}]}_i$ oscillations in pancreatic islets via ionic and glycolytic mechanisms, Biophys. J, 91 (2006), 2082-2096.  doi: 10.1529/biophysj.106.087296.  Google Scholar [18] J. Rinzel, Bursting oscillations in an excitable membrane model, Ordinary Partial Differ. Equations, Springer Berlin Heidelberg, Berlin, Heidelberg 1151 (1985), 304-316. doi: 10.1007/BFb0074739.  Google Scholar [19] J. E. Rubin, J. Signerska-Rynkowska, J. D. Touboul and A. Vidal, Wild oscillations in a nonlinear neuron model with resets: (I) bursting, spike-adding and chaos, Discrete. Contin. Dyn. Syst. Ser. B, 22 (2017), 3967-4002.  doi: 10.3934/dcdsb.2017204.  Google Scholar [20] A. Saito, T. Terai, K. Makino, M. Takahashi and al. et, Real-time detection of stimulus response in cultured neurons by high-intensity intermediate-frequency magnetic field exposure, Integr. Biol, 10 (2018), 442-449.  doi: 10.1039/C8IB00097B.  Google Scholar [21] A. Saito, K. Wada, Y. Suzuki and S. Nakasono, The response of the neuronal activity in the somatosensory cortex after high-intensity intermediate-frequency magnetic field exposure to the spinal cord in rats under anesthesia and waking states, Brain Res., 1747 (2020), 147063. doi: 10.1016/j. brainres. 2020.147063.  Google Scholar [22] W. Teka, K. Tsaneva-Atanasova, R. Bertram and J. Tabak, From plateau to pseudo-plateau bursting: Making the transition, Bull. Math. Biol., 73 (2011), 1292-1311.  doi: 10.1007/s11538-010-9559-7.  Google Scholar [23] K. Tsaneva-Atanasova, H. M. Osinga, T. Rie$\beta$ and A. Sherman, Full system bifurcation analysis of endocrine bursting models, J. Theor. Biol., 264 (2010), 1133-1146.  doi: 10.1016/j.jtbi.2010.03.030.  Google Scholar [24] X. J. Wang, Genesis of bursting oscillations in the Hindmarsh-Rose model and homoclinicity to a chaotic saddle, Phys. 
Figure captions:
- (a) The one-parameter bifurcation diagram versus $I_{ext}$ in the improved endocrine model. H is the Hopf bifurcation point; ${\rm LP}_i\ \left(i=1,\ 2\right)$ are fold bifurcation points. (b) The bifurcation diagram in $k_0$. LP is a fold bifurcation point, ${\rm H}_1$ is a Hopf bifurcation point, ${\rm H}_2$ is a neutral saddle.
- (a) The bifurcation diagram of ISIs with respect to $I_{ext}$. (b) The bifurcation diagram of ISIs with respect to $k_0$.
- (a) The inverse period-doubling bifurcation of ISIs over the range $\left[0.234,0.265\right]$ of $I_{ext}$. (b) The period-doubling bifurcation of ISIs over the range $\left[0.49,0.53\right]$ of $I_{ext}$. (c) The period-doubling bifurcation of ISIs over the range $\left[0.006, 0.007\right]$ of $k_0$. (d), (e), and (f) The first (red) and second (blue) Lyapunov exponents $\lambda_{1,2}$ corresponding to (a), (b), and (c), respectively.
- The diagrams on the left are the firing patterns generated by $I_{ext}$; the diagrams on the right are the firing patterns produced only by the induced current. (a) $I_{ext}=-0.1$, (c) $I_{ext}=-0.03$, (e) $I_{ext}=0.21$, (g) $I_{ext}=0.2406$; (b) $k_0=0.0116$, (d) $k_0=0.0104$, (f) $k_0=0.0079$, (h) $k_0=0.00666$.
- The fast-slow analysis of the fast subsystem under different external forcing currents. The dotted green curve is the slow nullcline $\dot{c}=0$; the black and blue lines are stable and unstable equilibria. The trajectory of system (2) (the blue curve) is superimposed. (a) $I_{ext}=-0.1$, (b) $I_{ext}=-0.03$, (c) $I_{ext}=0.21$, (d) $I_{ext}=0.2406$.
- (a) The membrane potential sequence for $I_{ext}=-0.5$, $V_{ml}=-27.5$. (b) Fast-slow dynamics of "subHopf/homoclinic" bursting via the "fold/homoclinic" hysteresis loop.
The point ${\rm subH}_1$ represents a subcritical Hopf bifurcation; ${\rm H}_i$ are neutral saddles; ${\rm LP}_i$ are fold bifurcations; HC is the saddle homoclinic bifurcation; LPC is the limit point of cycles.
- Firing patterns of the neuron model with variations of $k_0$: (a) $k_0=0.0001$, (b) $k_0=0.007$, (c) $k_0=0.007408$, (d) $k_0=0.00742$, (e) $k_0=0.00747$, (f) $k_0=0.0085$, (g) $k_0=0.00866$, (h) $k_0=0.009$.
- ISIs bifurcation diagram with respect to $k_0$, $I_{ext}=0$.
- Firing patterns of the neuron model with variations of $I_{ext}$, $k_0=0.01$: (a) $I_{ext}=0.075$, (b) $I_{ext}=0.75$, (c) $I_{ext}=0.85$, (d) $I_{ext}=0.895$, (e) $I_{ext}=0.9075$, (f) $I_{ext}=0.95$.
- ISIs bifurcation diagram with respect to $I_{ext}$, $k_0=0.01$.
- Firing patterns of the neuron model with variations of $I_{ext}$, $k_0=0.02$: (a) $I_{ext}=1$, (b) $I_{ext}=1.04$, (c) $I_{ext}=1.08$, (d) $I_{ext}=1.14$, (e) $I_{ext}=1.2$, (f) $I_{ext}=1.6$.
- ISIs bifurcation diagram with respect to $I_{ext}$, $k_0=0.02$.
- Codimension-two bifurcation analysis of the neuron model. (a) The two-parameter bifurcation diagram in the $\left(I_{ext},k_0\right)$-plane. (b)-(d) Partial enlargements of diagram (a).
$f_1$ and $f_2$ are the fold bifurcation curves; $h_1$ and $h_2$ are the Hopf bifurcation curves.

Parameter values used in this paper:
$f_c$ = 0.0001; $g_{Ca}$ = 0.81 nS; $k_{PMCA}$ = 20 $s^{-1}$
$d_{cell}$ = $10\,\mu m$; $g_{K\left(Ca\right)}$ = 0.2 nS; $\tau_n$ = 0.03 $s^{-1}$
$V_{ml}$ = -22.5 mV; $g_K$ = 2.25 nS; $\alpha$ = 1
$V_K$ = -65 mV; $k_0$ = 0.01; $k_1$ = 1
$V_{Ca}$ = 0 mV; $\beta$ = 0.0001; $k_2$ = 3

Data related to special points (point; parameter values $(I_{ext}, k_0)$; eigenvalues $(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$; normal form parameters):
$CP_1$: (-64.649, -12.888); $\lambda_1=0$, $\lambda_2=-19.1961$, $\lambda_3=-3.0007$, $\lambda_4=3447$; $c=-4.21\times10^{-4}$
$CP_2$: (-66.356, -1.1266); $\lambda_1=0$, $\lambda_{2,3}=-88.818\pm206.79i$, $\lambda_4=-2.999$; $c=-3.22\times{10}^{-4}$
$CP_3$: (-78.42135, -4.6267); $\lambda_1=0$, $\lambda_2=-2.9873$, $\lambda_3=4.4263$, $\lambda_4=1.003963$; $c=1.9\times{10}^{-4}$
$CP_4$: (-23.9617, -0.5826); $\lambda_1=0$, $\lambda_2=-29.6947$, $\lambda_3=193.632$, $\lambda_4=-3.0969$; $c=-2.33\times{10}^{-5}$
$CP_5$: (-16.1428, 0.3545); $\lambda_1=0$, $\lambda_2=-14.6374$, $\lambda_3=-3.1904$, $\lambda_4=90.4468$; $c=3.66\times{10}^{-5}$
$CP_6$: (1.599659, 0.025901); $\lambda_1=0$, $\lambda_2=-30.6756$, $\lambda_3=-2.71$, $\lambda_4=2.0821$; $c=9.29\times{10}^{-5}$
$GH_1$: (-5.514, -0.07359); $\lambda_{1,2}=\pm1.0119i$, $\lambda_{3,4}=-3.73\pm4.19i$; $l_1=36.03$
$GH_2$: (-13.993, -0.22718); $\lambda_{1,2}=\pm70.688i$, $\lambda_3=-2.98418654$, $\lambda_4=-0.00208283$; $l_1=-1.7\times{10}^{-3}$
$GH_3$: (1.98293, 0.33597); $\lambda_{1,2}=\pm0.09056i$, $\lambda_3=-30.013089$, $\lambda_4=-2.371837$; $l_1=-0.117$
$GH_4$: (-7.88796, -0.10746); $\lambda_{1,2}=\pm20.5112i$, $\lambda_3=-2.8259$, $\lambda_4=-0.016958$; $l_1=-7.8\times{10}^{-3}$
$GH_5$: (-7.7194, -0.1145); $\lambda_{1,2}=\pm2.35556i$, $\lambda_{3,4}=0.715\pm1.936i$; $l_1=-2.5\times{10}^5$
$GH_6$: (-7.68801, -0.11345); $\lambda_{1,2}=\pm1.577914i$, $\lambda_{3,4}=0.582\pm3.039i$; $l_1=-4.4\times{10}^6$
$GH_7$: (-67.9198, -1.7171); $\lambda_{1,2}=\pm214.4301i$, $\lambda_3=-2.999953$, $\lambda_4=-0.000808023$; $l_1=-0.03122$
$GH_8$: (-67.1444, -1.6382); $\lambda_{1,2}=\pm206.6605i$, $\lambda_3=-2.9997$, $\lambda_4=-0.00073251$; $l_1=0.03811$
$GH_{10}$: (-64.649, -12.888); $\lambda_{1,2}=\pm1.268\times{10}^{-7}i$, $\lambda_3=2.999109$, $\lambda_4=1394.19249$; $l_1=8.63\times{10}^{-3}$
$ZH_1$: (-73.3066, -6.1271); $\lambda_{1,2}=\pm209.65718i$, $\lambda_3=0$, $\lambda_4=-2.99981307$; $(s,\theta,E_0)=(1,-5866.3,-1)$
$ZH_2$: (-67.4984, -1.6958); $\lambda_{1,2}=\pm212.9485i$, $\lambda_3=0$, $\lambda_4=2.999916$; $(s,\theta,E_0)=(-1, 4043.5, -1)$
$BT_1$: (-73.124, -6.294215); $\lambda_1=\lambda_2=0$, $\lambda_3=-3$, $\lambda_4=1420.03$; $a=1.19\times{10}^{-3}$, $b=1.097$
$BT_2$: (0.64939, 0.00913); $\lambda_1=\lambda_2=0$, $\lambda_3=-33.0915$, $\lambda_4=-2.7655$; $a=4.54\times{10}^{-4}$, $b=0.455$
$BT_3$: (-47.737, 1.87268); $\lambda_1=\lambda_2=0$, $\lambda_3=-2.7509$, $\lambda_4=416.481$; $a=6.1\times{10}^{-3}$, $b=6.147$
$HH_1$: (-7.4232, -0.10856); $\lambda_{1,2}=\pm3.700106i$, $\lambda_{3,4}=\pm1.3485054i$; $(p_{11}p_{22},\vartheta,\delta)=(1,-2,-2)$, $(\Theta,\Delta)=(-50.2, 285)$
$NS_1$: (-72.904, -5.822); $\lambda_1=0$, $\lambda_2=1285.27$, $\lambda_3=-3$, $\lambda_4=3$; none
$NS_2$: (-1.3217, -0.03665); $\lambda_1=0$, $\lambda_2=-3.054$, $\lambda_3=-29.495$, $\lambda_4=29.495$; none
$NS_3$: (1.5887, 0.02568); $\lambda_1=0$, $\lambda_2=-30.46293$, $\lambda_3=-2.74899$, $\lambda_4=2.74899$; none
https://zbmath.org/?q=an%3A1248.35038
## Approximating travelling waves by equilibria of non-local equations. (English) Zbl 1248.35038

Summary: We consider an evolution equation of parabolic type in $$\mathbb{R}$$ having a travelling wave solution. We study the effects on the dynamics of an appropriate change of variables which transforms the equation into a non-local evolution equation having a travelling wave solution with zero propagation speed and exactly the same profile as the original one. This procedure allows us to compute the travelling wave profile and its propagation speed simultaneously, avoiding moving meshes, as we illustrate with several numerical examples. We analyze the relation of the new equation to the original one on the entire real line. We also analyze the behavior of the non-local problem on a bounded interval with appropriate boundary conditions. We show that it has a unique stationary solution which approaches the travelling wave as the interval gets larger, and which is asymptotically stable for large enough intervals.

### MSC:

35C07 Traveling wave solutions
35K58 Semilinear parabolic equations
35R09 Integro-partial differential equations
35K57 Reaction-diffusion equations
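For orientation, the classical mechanism behind this kind of construction is the passage to a co-moving frame; the paper's actual change of variables may differ in detail, so the following is only a standard sketch for a scalar reaction-diffusion equation:

```latex
% Original equation with a travelling wave u(x,t) = \phi(x - ct):
%   u_t = u_{xx} + f(u).
% In the co-moving coordinate \xi = x - ct the equation becomes
%   u_t = u_{\xi\xi} + c\, u_\xi + f(u),
% and the profile \phi is now an equilibrium:
%   0 = \phi'' + c\,\phi' + f(\phi).
\begin{aligned}
u_t &= u_{xx} + f(u), \qquad u(x,t) = \phi(x - ct),\\
\xi &= x - ct \;\Longrightarrow\; u_t = u_{\xi\xi} + c\,u_\xi + f(u),\\
0 &= \phi'' + c\,\phi' + f(\phi).
\end{aligned}
```

Since the speed $c$ is unknown a priori, one replaces it by a functional $c = c[u]$ of the solution (fixed, e.g., by a phase or normalization condition). This removes the translation degeneracy, makes the evolution equation non-local, and lets a fixed mesh capture the profile and the speed simultaneously.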
https://paulkernfeld.com/2018/08/06/rust-needs-bfgs.html
# Rust needs BFGS. What is BFGS?

06 Aug 2018 · Paul Kernfeld dot com

Rust has the potential to be a language for building world-class machine learning tools, but it's currently missing some important pieces of mathematical infrastructure. BFGS is one of these missing pieces.

In many popular machine learning algorithms, the goal is to find parameters that minimize the error between the algorithm's predictions and the training data. This can be formulated as finding an input that minimizes a loss function. Linear regression, logistic regression, neural networks, and a couple of different Bayesian techniques such as maximum likelihood estimation and maximum a posteriori estimation can all be formulated as minimization problems.

There is no general "best" way to minimize a function; different kinds of functions require different strategies. However, Python's scipy and R's optim both prominently feature an algorithm called BFGS. I'll explain what BFGS stands for, the problem that it solves, and how it solves it.

## Introduction

BFGS stands for Broyden-Fletcher-Goldfarb-Shanno, the names of four researchers who each independently published the algorithm in 1970. I have never heard of so many people independently discovering the same thing at once!

First, a couple of definitions:

- $P$: the number of dimensions in our problem. In a machine learning context, this is the number of parameters, not the number of data points.
- $x$: a possible value for the parameters.
- $f(x)$, the objective function: we are trying to find a value of $x$ that minimizes $f(x)$. The input is a vector of length $P$ and the output is a scalar. In Rust: `Fn(Array1<f64>) -> f64`.
- $g(x)$, the gradient of $f$: the multidimensional derivative of $f$. The input and output are both vectors of length $P$. In Rust: `Fn(Array1<f64>) -> Array1<f64>`.
- $H(x)$, the Hessian of $f$: the multidimensional derivative of $g$. The input is a vector of length $P$ and the output is a matrix of size $P \times P$.
In Rust: `Fn(Array1<f64>) -> Array2<f64>`.

BFGS is useful when $f$:

- Has no closed-form solution (otherwise, we would just solve it!)
- Is convex. Intuitively, it must be roughly bowl-shaped, with no tricky nooks or crannies.
- Is twice-differentiable. Intuitively, it doesn't have sharp corners or edges.
- May be high-dimensional. State-of-the-art neural networks can now have billions of parameters.

In high dimensions, it is important to make use of $g$. Gradient descent is the simplest way to do this, taking tiny steps in the direction of the gradient. Newton's method improves on gradient descent by using a second-degree Taylor series to approximate the objective function with a quadratic bowl. Here is what the Newton update looks like:

$x_{k+1} = x_k - H(x_k)^{-1}g(x_k)$

Here's how we might write this in Rust (the Newton step here solves a linear system via the `Solve` trait from the ndarray-linalg crate rather than forming $H^{-1}$ explicitly; `stop` is a small convergence-check helper):

```rust
use ndarray::{Array1, Array2};
use ndarray_linalg::Solve;

pub fn newton<F, G, H>(x0: Array1<f64>, f: F, g: G, h: H) -> Array1<f64>
where
    F: Fn(&Array1<f64>) -> f64,
    G: Fn(&Array1<f64>) -> Array1<f64>,
    H: Fn(&Array1<f64>) -> Array2<f64>,
{
    let mut x = x0;
    let mut f_x = f(&x);
    loop {
        // Newton step: solve H(x) * step = g(x) instead of inverting H(x)
        let step = h(&x).solve_into(g(&x)).unwrap();
        x = x - step;

        let f_x_old = f_x;
        f_x = f(&x);

        // If f only improved by a small amount, call it a day
        if stop(f_x_old, f_x) {
            return x;
        }
    }
}
```

This can still be improved in a couple of ways, though. Notably:

- Inverting a matrix is expensive, taking between $O(P^2)$ and $O(P^3)$ time
- Computing the Hessian can be expensive in some cases
- The need to pass in the Hessian could be inconvenient for users

## Time for math

Feel free to skim over this section if you'd like to get to the code.

### The Hessian

The Hessian is the matrix of second derivatives of $f$, where element $H_{i,j}$ is ${\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}$. A Hessian is always square; in our case, it's $P \times P$. In our case, the Hessian is also symmetric because ${\frac {\partial }{\partial x_{i}}}\left({\frac {\partial f}{\partial x_{j}}}\right)={\frac {\partial }{\partial x_{j}}}\left({\frac {\partial f}{\partial x_{i}}}\right)$, i.e. the order of differentiation doesn't matter.
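This symmetry (and any hand-written gradient code) can be checked numerically by central-differencing the gradient. A dependency-free sketch, using plain `Vec<f64>` instead of ndarray, with the Rosenbrock function (which also comes up later in the post) as the objective:

```rust
// Rosenbrock (a = 1, b = 100): f(x, y) = (1 - x)^2 + 100 (y - x^2)^2
fn rosenbrock(p: &[f64]) -> f64 {
    (1.0 - p[0]).powi(2) + 100.0 * (p[1] - p[0] * p[0]).powi(2)
}

fn grad(p: &[f64]) -> Vec<f64> {
    let (x, y) = (p[0], p[1]);
    vec![
        -2.0 * (1.0 - x) - 400.0 * x * (y - x * x), // df/dx
        200.0 * (y - x * x),                        // df/dy
    ]
}

// Approximate the Hessian by central differences of the gradient:
// H[i][j] ~ (g(p + h e_j)[i] - g(p - h e_j)[i]) / (2h)
fn fd_hessian(p: &[f64]) -> Vec<Vec<f64>> {
    let h = 1e-5;
    let n = p.len();
    let mut hess = vec![vec![0.0; n]; n];
    for j in 0..n {
        let (mut plus, mut minus) = (p.to_vec(), p.to_vec());
        plus[j] += h;
        minus[j] -= h;
        let (gp, gm) = (grad(&plus), grad(&minus));
        for i in 0..n {
            hess[i][j] = (gp[i] - gm[i]) / (2.0 * h);
        }
    }
    hess
}

fn main() {
    assert!(rosenbrock(&[1.0, 1.0]) == 0.0); // (1, 1) is the minimum
    let hess = fd_hessian(&[0.5, 0.5]);
    // Mixed partials agree: the Hessian is symmetric
    assert!((hess[0][1] - hess[1][0]).abs() < 1e-6);
    // And matches the analytic value d2f/dxdy = -400x = -200 at x = 0.5
    assert!((hess[0][1] + 200.0).abs() < 1e-6);
    println!("H = {:?}", hess);
}
```

The same finite-difference trick is a handy sanity check on gradient code in general: if `fd_hessian` comes out badly asymmetric, the gradient is probably wrong.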
This opens up the possibility of storing only half of $H$, although I haven't tried to do that.

### Approximating the Hessian

Since we're only approximating the objective function anyway, we might as well approximate the Hessian as well. Methods that do this are called quasi-Newton methods. I'll call the approximate Hessian $B$.

When minimizing, it's important that $B$ is positive definite. This means that our quadratic approximation has a single point that is a global minimum. There are no saddle points, the minimum can't be a line or plane, and there can't be any maxima. If $B$ stops being positive definite, we can reset it to its initial state.

### The secant method

In order to approximate the Hessian, we can use the secant method, which is like Newton's method except that instead of computing the derivative it uses finite differences to approximate it. This lets us approximate $H$ from successive evaluations of $g$. Thus, the user never needs to compute the Hessian. Note that we are not using the secant method to approximate $g$ from evaluations of $f$; we are actually calculating $g$.

### The Sherman-Morrison formula

Inverting the Hessian for each step of the optimization process would be expensive. It would be much nicer if we could work directly with the inverse Hessian, avoiding inversions altogether. The Sherman-Morrison formula provides a way to apply a rank-one update to the inverse of a matrix:

$(A+uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^T A^{-1}}{1 + v^T A^{-1}u}$

## BFGS

BFGS is a quasi-Newton method that uses the secant method to approximate $H$ from successive evaluations of $g$, along with the Sherman-Morrison formula to perform the updates efficiently, without any matrix inversions. The derivation of the update is kind of complicated, but you can find it on Wikipedia. Here is the main loop. I've left out some details, but the full code is here.
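Before the main loop, the Sherman-Morrison identity itself can be sanity-checked numerically. A dependency-free sketch with hand-rolled $2 \times 2$ arrays (not part of the post's code), comparing the formula against a direct inverse of $A + uv^T$:

```rust
type M = [[f64; 2]; 2];

fn matvec(a: &M, x: [f64; 2]) -> [f64; 2] {
    [a[0][0] * x[0] + a[0][1] * x[1], a[1][0] * x[0] + a[1][1] * x[1]]
}

// Direct 2x2 inverse via the adjugate formula
fn inv2(a: &M) -> M {
    let det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    [[a[1][1] / det, -a[0][1] / det], [-a[1][0] / det, a[0][0] / det]]
}

// (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
fn sherman_morrison(a_inv: &M, u: [f64; 2], v: [f64; 2]) -> M {
    let ainv_u = matvec(a_inv, u);
    let vt_ainv = [
        v[0] * a_inv[0][0] + v[1] * a_inv[1][0],
        v[0] * a_inv[0][1] + v[1] * a_inv[1][1],
    ];
    let denom = 1.0 + v[0] * ainv_u[0] + v[1] * ainv_u[1];
    let mut out = *a_inv;
    for i in 0..2 {
        for j in 0..2 {
            out[i][j] -= ainv_u[i] * vt_ainv[j] / denom;
        }
    }
    out
}

fn main() {
    let a: M = [[2.0, 0.0], [0.0, 3.0]];
    let (u, v) = ([1.0, 0.5], [0.5, 1.0]);
    // Direct inverse of A + u v^T for comparison
    let a_plus: M = [
        [a[0][0] + u[0] * v[0], a[0][1] + u[0] * v[1]],
        [a[1][0] + u[1] * v[0], a[1][1] + u[1] * v[1]],
    ];
    let direct = inv2(&a_plus);
    let via_sm = sherman_morrison(&inv2(&a), u, v);
    for i in 0..2 {
        for j in 0..2 {
            assert!((direct[i][j] - via_sm[i][j]).abs() < 1e-12);
        }
    }
    println!("Sherman-Morrison matches the direct inverse: {:?}", via_sm);
}
```

In BFGS the update to $B^{-1}$ is rank-two, so the identity is applied twice per iteration (or one uses the equivalent Woodbury form), but the mechanics are the same.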
```rust
pub fn bfgs<F, G>(x0: Array1<f64>, f: F, g: G) -> Array1<f64>
where
    F: Fn(&Array1<f64>) -> f64,
    G: Fn(&Array1<f64>) -> Array1<f64>,
{
    let mut x = x0;
    let p = x.len(); // number of parameters
    let mut f_x = f(&x);
    let mut g_x = g(&x);

    // Initialize the inverse approximate Hessian to the identity matrix
    let mut b_inv = new_identity_matrix(p);

    loop {
        // Find the search direction
        let search_dir = -1.0 * b_inv.dot(&g_x);

        // Find a good step size
        let epsilon = line_search(|epsilon| f(&(&search_dir * epsilon + &x)));

        // Save the old state
        let f_x_old = f_x;
        let g_x_old = g_x;

        // Take a step in the search direction
        x = &x + &(&search_dir * epsilon);
        f_x = f(&x);
        g_x = g(&x);

        // Compute deltas between old and new
        let y = (&g_x - &g_x_old).into_shape((p, 1)).unwrap();
        let s = (epsilon * search_dir).into_shape((p, 1)).unwrap();
        let sy = s.t().dot(&y).into_shape(()).unwrap()[()];
        let ss = s.dot(&s.t());

        if stop(f_x_old, f_x) {
            return x;
        }

        // Update the inverse Hessian approximation
        let to_add = ss * (sy + &y.t().dot(&b_inv.dot(&y))) / sy.powi(2);
        let to_sub = (b_inv.dot(&y).dot(&s.t()) + s.dot(&y.t().dot(&b_inv))) / sy;
        b_inv = b_inv + to_add - to_sub;
    }
}
```

I have tested this on a few simple functions, including the Rosenbrock function.

## Limitations

BFGS is not trivially parallelizable because it involves matrix multiplication. However, particular parts of the computation might be parallelizable, e.g. using ndarray-parallel.

The memory usage of BFGS can be high if there are a lot of parameters.

## Future

The line search that I have implemented could be improved quite a bit. In fact, it's terrible!

There are also a few algorithms derived from BFGS that provide various benefits:

- Limited-memory BFGS (L-BFGS): Instead of using a square matrix to represent $B$, we can save memory by representing part of it with a low-rank matrix.
- Bounded L-BFGS (L-BFGS-B): This improvement to limited-memory BFGS allows it to support bounded variables. There is already a Rust crate called lbfgsb-sys that includes bindings to the canonical Fortran implementation of L-BFGS-B.
• Online L-BFGS: This variant of L-BFGS doesn't need to look at all of the data at once.

## Want to help?

If you want to use this from Rust, it's published as the crate bfgs.

Rustaceans: I think that my code is doing some unnecessary copying and allocation, and I'd love help optimizing it.

Mathematicians: any suggestions for functions that I should use to test this code? In particular, I don't feel confident that everything is as numerically stable as it should be.

## Thanks

To my brother Eric for walking me through most of this math. I did not think that I could ever understand BFGS, but now I mostly understand BFGS. Also thanks to Max Livingston for reviewing.
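As a quick sanity check on the `to_add`/`to_sub` inverse-Hessian update used in the code above, here is a small NumPy (Python, not part of the crate) sketch verifying that the updated matrix satisfies the secant condition — the defining property of the BFGS update, namely that the new inverse Hessian maps the gradient difference `y` back onto the step `s`:

```python
import numpy as np

# Mirror of the to_add / to_sub update from the Rust code,
# checked against the secant condition: b_inv_new @ y == s.
rng = np.random.default_rng(0)
p = 5
b_inv = np.eye(p)                   # current inverse-Hessian approximation
s = rng.standard_normal((p, 1))     # step taken: x_new - x_old
y = rng.standard_normal((p, 1))     # gradient difference: g_new - g_old

sy = float(s.T @ y)
to_add = (s @ s.T) * (sy + float(y.T @ b_inv @ y)) / sy**2
to_sub = (b_inv @ y @ s.T + s @ y.T @ b_inv) / sy
b_inv_new = b_inv + to_add - to_sub

print(np.allclose(b_inv_new @ y, s))  # True
```

The identity holds algebraically for any starting `b_inv` and any `s`, `y` with nonzero `sy`, which is a useful unit test when porting the formula.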
https://mathematica.stackexchange.com/questions/64861/general-strategy-to-use-nintegrate-for-multidimension-integrals
# “General” strategy to use NIntegrate for multidimension integrals?

I don't have much experience with numerical methods for multidimensional integrals. Currently, the particular function I want to integrate is:

$$f(x,y,z,p_x,p_y,p_z) = \frac{p_x^2(2 p_x x(p_y y + 4 p_z z)-2 p_x^2(y^2+4 z^2)+x^2 \sqrt{x^2+y^2+4 z^2})}{2 (x^2+y^2+4 z^2)^{\frac{3}{2}}}$$

I want to integrate $f(x,y,z,p_x,p_y,p_z)$ over all of $\mathbb{R}^6$:

$$\int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz\int_{-\infty}^{\infty}dp_x\int_{-\infty}^{\infty}dp_y\int_{-\infty}^{\infty}dp_z\, f(x,y,z,p_x,p_y,p_z)$$

```
NIntegrate[
  (px^2 (2 px x (py y + 4 pz z) - 2 px^2 (y^2 + 4 z^2) +
      x^2 Sqrt[x^2 + y^2 + 4 z^2]))/(2 (x^2 + y^2 + 4 z^2)^(3/2)),
  {x, -∞, ∞}, {y, -∞, ∞}, {z, -∞, ∞},
  {px, -∞, ∞}, {py, -∞, ∞}, {pz, -∞, ∞}
]
```

The above code couldn't evaluate a value and returns:

> NIntegrate::slwcon: Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small.

Is there some general strategy to look at the function and see which numerical method to use?

**Edit**

If I break down the function as follows:

$$f(x,y,z,p_x,p_y,p_z) = \frac{2 p_x^3 x(p_y y + 4 p_z z)}{2 (x^2+y^2+4 z^2)^{\frac{3}{2}}} - \frac{p_x^4 (y^2 + 4 z^2)}{(x^2+y^2+4 z^2)^{\frac{3}{2}}} + \frac{x^2 p_x^2}{2(x^2+y^2+4 z^2)}$$

The first term when integrated will be 0 because it is an odd function in $p_x$ (NIntegrate also returns the same result):

$$\int_{-\infty}^{\infty} d p_x \frac{2 p_x^3 x(p_y y + 4 p_z z)}{2 (x^2+y^2+4 z^2)^{\frac{3}{2}}} = 0$$

Naively, I would expect the second and third terms when integrated to give $-\infty$ and $\infty$ respectively. Is it possible for those two terms to cancel to 0 or something finite?

• Change NIntegrate to Integrate and you'll get a non-convergence warning. It seems plausible, but I don't have time to track it down just now... (sorry).
– Michael E2 Nov 4 '14 at 15:28
• Interesting, I just tested that; it says my integral does not converge in the range of $-\infty$ to $\infty$. – user29165 Nov 4 '14 at 15:40
• Looking at the integrand more closely, I see it is a polynomial in {px, py, pz}. The integral most certainly diverges. – Michael E2 Nov 4 '14 at 17:56
• The integral of the odd term is still divergent and equals zero only in the sense of the Cauchy principal value. You can ask Mathematica to compute this with the option PrincipalValue -> True (for Integrate). – Michael E2 Nov 5 '14 at 21:06
• There are many ways to define values for divergent integrals (and series), but they are tricky. You need to have some external reason for thinking one way will give reliable results that are applicable to the problem at hand. For instance, the Cauchy principal value is not translation invariant: change x to x - 1 and you change the value (for some integrals). Intuitively that means shifting the region changes the area, which is un-geometric. (Of course, there's no real violation because the region is infinite.) But blindly choosing a method might give unreliable results. – Michael E2 Nov 5 '14 at 21:13
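Michael E2's observation that the integrand is a polynomial in the momenta can be sanity-checked without Mathematica: as a polynomial in $p_x$ it has degree 4 with leading coefficient $-(y^2+4z^2)/(x^2+y^2+4z^2)^{3/2} \le 0$, so the inner $p_x$ integral already diverges for almost every fixed point. A quick plain-Python check (the evaluation point $(1,1,1,\cdot,1,1)$ is chosen arbitrarily):

```python
import math

def f(x, y, z, px, py, pz):
    r2 = x*x + y*y + 4*z*z
    num = px*px * (2*px*x*(py*y + 4*pz*z)
                   - 2*px*px*(y*y + 4*z*z)
                   + x*x*math.sqrt(r2))
    return num / (2 * r2**1.5)

# Expected leading p_x^4 coefficient at (x, y, z) = (1, 1, 1):
c4 = -(1 + 4) / (1 + 1 + 4)**1.5          # -(y^2+4z^2)/(x^2+y^2+4z^2)^(3/2)
for px in (10.0, 100.0, 1000.0):
    print(f(1, 1, 1, px, 1, 1) / px**4)   # tends to c4 < 0, so f ~ c4*px^4
```

Since the ratio settles on a negative constant, the integrand grows like $c_4\,p_x^4 \to -\infty$, confirming divergence rather than a fortuitous cancellation.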
https://math.stackexchange.com/questions/433842/test-for-the-convergence-of-the-series
# test for the convergence of the series

Test the convergence of the summation $$\sum_{n=1}^\infty x_n$$ where $$x_{2n-1}=\frac{n}{n+1}\\ x_{2n}=-\frac{n}{n+1}$$

That is, the series $$\frac 1 2-\frac 12+\frac 23-\frac 23 +-\cdots$$

What I did was let $S_n$ be the partial sums of the series. Then $$S_n=\begin{cases} 0 & \text{when } n \text{ is even} \\ \frac{n}{n+1} &\text{when } n \text{ is odd}\end{cases}$$ thus $$\lim\limits_{ n\to \infty} S_n= \begin{cases} 0 & \text{when } n \text{ is even}\\ 1 & \text{when } n \text{ is odd}\end{cases}$$

Thus $\lim\limits_{ n\to \infty} S_n$ doesn't converge to a particular value. Hence $\lim\limits_{ n\to \infty} S_n$ doesn't exist. Therefore the series diverges.

• Please see here for how to typeset common math expressions with MathJax, and see here for how to use Markdown formatting. – Zev Chonoles Jul 1 '13 at 17:54
• Your formula for $S_n$ when $n$ is odd is wrong. The correct formula is $S_{2n-1} =\frac{n}{n+1}$. – Thomas Andrews Jul 1 '13 at 18:09

Your answer is correct, but I would say it differently. What you mean is that $$\lim_{n\to\infty}S_{2n}=0$$ $$\lim_{n\to\infty}S_{2n+1}=1$$

Note that if we "introduced parentheses", and set $$a_n=x_{2n-1}+x_{2n}$$ the series $$\sum a_n$$ would converge to $0$.

• By "introduced parentheses", do you mean a rearrangement of the series? I want to know if my statement "Thus $\lim_{n\to\infty}S_n$ doesn't converge to a particular value. Hence $\lim_{n\to\infty}S_n$ doesn't exist. Therefore the series diverges." is correct. – clarkson Jul 2 '13 at 18:49

A much easier way, imo, to show the series doesn't converge: $$|x_n|=\frac n{n+1}\xrightarrow[n\to\infty]{}1\neq 0$$ and thus the series $\,\sum x_n\,$ cannot converge.

• Is it okay to consider only the limit of $\frac{n}{n+1}$? Because clearly this is not $a_n$ of the series. This was done by the divergence test, right?
(Since the limit of $a_n$ is not 0, the series diverges.) Also, if we have a series which is made up of two series where one converges and the other diverges, does the complete series diverge? – clarkson Jul 2 '13 at 18:53
• Yes, it is o.k., since even if it is only a subsequence, the fact that its limit isn't zero shows that the general term sequence doesn't converge to zero. – DonAntonio Jul 2 '13 at 21:19
• This is a series, not a sequence. For a sequence I know that every subsequence should converge. But does the same thing apply for a series? – clarkson Jul 3 '13 at 2:34
• A series is just a sequence of partial sums. – Paul Malinowski May 30 '14 at 16:10
• No, Paul: a series is not a sequence, neither of partial sums nor anything else. In order to find out whether a series converges or not we take its partial sums' sequence, but those are two rather different things. – DonAntonio May 30 '14 at 16:25
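The two subsequence limits of the partial sums are easy to confirm numerically, consistent with Thomas Andrews's comment that $S_{2n-1}=\frac{n}{n+1}$. A short Python check, using exact rational arithmetic:

```python
from fractions import Fraction

def x(k):
    # x_{2n-1} = n/(n+1), x_{2n} = -n/(n+1), indexed k = 1, 2, 3, ...
    n = (k + 1) // 2
    return Fraction(n, n + 1) if k % 2 == 1 else -Fraction(n, n + 1)

partial, s = [], Fraction(0)
for k in range(1, 11):
    s += x(k)
    partial.append(s)

print([str(v) for v in partial])
# ['1/2', '0', '2/3', '0', '3/4', '0', '4/5', '0', '5/6', '0']
```

The even-indexed partial sums are all 0 while the odd-indexed ones are $\frac{n}{n+1}\to 1$: two different subsequence limits, so $(S_n)$ has no limit and the series diverges.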
https://tex.stackexchange.com/questions/450453/conditional-based-on-displaystyle
# Conditional based on displaystyle

I'm trying to write a macro that returns different things depending on whether it is used in a displayed equation or in an inline equation. Something like this, I assume:

```
\newcommand{\foo}{%
  \ifdisplaystyle
    This equation is displayed!
  \else
    This equation is inline!
  \fi}
```

I assume there's a way to do this, as $\sum$ and $$\sum$$ look different. How is it done?

• Possible duplicate: What is \mathchoice? Sep 11 '18 at 18:24
• I think the questions are certainly similar. If your answer had simply been "go read this question", that would have been enough. I'm not very familiar with StackExchange's opinion on how similar things should be to be marked as duplicate, so maybe it's correct to do so. I think my question has the advantage of being asked in a way that someone might be able to find it without knowing the solution (figure out how \mathchoice works and then use it). Sep 11 '18 at 19:28

You can use \mathchoice in the following way to condition on the current math style:

```
\mathchoice
  {<display style>}
  {<text style>}
  {<script style>}
  {<script-script style>}
```

to discern what is displayed within the respective styles. Here's an example:

```
\documentclass{article}

\newcommand{\foo}{{%
  \mathchoice
    {D} % \displaystyle
    {T} % \textstyle
    {S} % \scriptstyle
    {s} % \scriptscriptstyle
}}

\begin{document}

See $\foo^{\foo^{\foo}}$.

Also see
$\foo^{\foo^{\foo}} \quad \mbox{and} \quad \textstyle \foo_{\foo_{\foo}}$

\end{document}
```
https://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isnumber=6016553
# IEEE Transactions on Circuits and Systems I: Regular Papers ## Filter Results Displaying Results 1 - 25 of 36 Publication Year: 2011, Page(s):C1 - C4 | PDF (51 KB) • ### IEEE Transactions on Circuits and Systems—I: Regular Papers publication information Publication Year: 2011, Page(s): C2 | PDF (39 KB) • ### Guest Editorial Special Section on 2010 IEEE Custom Integrated Circuits Conference (CICC 2010) Publication Year: 2011, Page(s):1993 - 1995 | PDF (524 KB) | HTML • ### Technology Variability From a Design Perspective Publication Year: 2011, Page(s):1996 - 2009 Cited by:  Papers (11) | | PDF (2757 KB) | HTML Increased variability in semiconductor process technology and devices requires added margins in the design to guarantee the desired yield. Variability is characterized with respect to the distribution of its components, its spatial and temporal characteristics and its impact on specific circuit topologies. Approaches to variability characterization and modeling for digital logic and SRAM are analy... View full abstract» • ### The Design and Characterization of a Half-Volt 32 nm Dual-Read 6T SRAM Publication Year: 2011, Page(s):2010 - 2016 Cited by:  Papers (1) | | PDF (1012 KB) | HTML Dual read port 6-transistor (6T) SRAMs play a critical role in high-performance cache designs thanks to doubling of access bandwidth, but stability and sensing challenges typically limit the low-voltage operation. We report a high-performance dual read port 8-way set associative 6T SRAM with a one clock cycle access latency, in a 32 nm metal-gate partially depleted SOI process technology, for low-... 
View full abstract» • ### All-Digital Circuit-Level Dynamic Variation Monitor for Silicon Debug and Adaptive Clock Control Publication Year: 2011, Page(s):2017 - 2025 Cited by:  Papers (22) | | PDF (1483 KB) | HTML A 45 nm microprocessor integrates an all-digital dynamic variation monitor (DVM) to continuously measure the impact of dynamic parameter variations on circuit-level performance to enhance silicon debug and adaptive clock control. The DVM consists of a tunable replica circuit, a time-to-digital converter, and multiplexers to measure circuit delay or frequency changes with less than a 1% measured re... View full abstract» • ### Dynamic NBTI Management Using a 45 nm Multi-Degradation Sensor Publication Year: 2011, Page(s):2026 - 2037 Cited by:  Papers (24) | | PDF (1646 KB) | HTML This paper proposes a low power unified oxide and negative bias temperature instability (NBTI) degradation sensor designed in 45 nm process node. The cell power consumption is 105 lower than a previously proposed sensor. The unified nature enables efficient reliability monitoring with reduced sensor deployment effort and area overhead. Using the sensor dynamic NBTI management (DNM) has ... View full abstract» • ### Highly Integrated and Tunable RF Front Ends for Reconfigurable Multiband Transceivers: A Tutorial Publication Year: 2011, Page(s):2038 - 2050 Cited by:  Papers (38)  |  Patents (9) | | PDF (1441 KB) | HTML Architectural and circuit techniques to integrate the RF front end passive components, namely the SAW filters and duplexers that are traditionally implemented off chip, are presented. Intended for software-defined and cognitive radio platforms, tunable high-Q filters realized by CMOS switches and linear or MOS capacitors allow the integration of highly reconfigurable transceiver front ends that ar... 
View full abstract» • ### Spurious-Free Time-to-Digital Conversion in an ADPLL Using Short Dithering Sequences Publication Year: 2011, Page(s):2051 - 2060 Cited by:  Papers (12)  |  Patents (1) | | PDF (1601 KB) | HTML We propose an enhancement to the digital phase detection mechanism in an all-digital phase-locked loop (ADPLL) by randomization of the frequency reference using carefully chosen dither sequences. This dithering renders the digital phase detector, realized as a time-to-digital converter (TDC), free from any phase domain spurious tones generated as a consequence of an ill-conditioned sampling of the... View full abstract» • ### A 25 Gb/s 65-nm CMOS Low-Power Laser Diode Driver With Mutually Coupled Peaking Inductors for Optical Interconnects Publication Year: 2011, Page(s):2061 - 2068 Cited by:  Papers (11) | | PDF (1830 KB) | HTML A 25 Gb/s laser diode (LD) driver has been developed on the basis of standard 65 nm CMOS technology for optical interconnects. The LD driver consists of a main driver capable of providing an average current of 30 mA and a predriver providing a gain of 20 dB. The main driver uses mutually coupled inductors to adjust the inductive peaking to improve eye patterns under various packaging conditions. T... View full abstract» • ### A Low-Power ECoG/EEG Processing IC With Integrated Multiband Energy Extractor Publication Year: 2011, Page(s):2069 - 2082 Cited by:  Papers (35)  |  Patents (1) | | PDF (1531 KB) | HTML Electrocorticography (ECoG) implants have recently demonstrated promising results towards potential use in brain-computer interfaces (BCIs). Spectral changes in ECoG signals can provide insight on functional mapping of sensorimotor cortex. 
We present a 6.4 μ W electrocorticography (ECoG)/electroencephalography (EEG) processing integrated circuit (EPIC) with 0.46 μVrms View full abstract» • ### A Multibit Dual-Feedback CT $\Delta\Sigma$ Modulator With Lowpass Signal Transfer Function Publication Year: 2011, Page(s):2083 - 2095 Cited by:  Papers (13)  |  Patents (1) | | PDF (2572 KB) | HTML This paper presents a dual-feedback continuous-time delta-sigma modulator that features a signal transfer function with low sensitivity to coefficient variations. The anti-aliasing of this topology is similar to that of the feedback architecture while using only two feedback paths for modulators of any order. The proposed architecture is a good candidate for low-power applications as it shows rela... View full abstract» Publication Year: 2011, Page(s):2096 - 2107 Cited by:  Papers (23)  |  Patents (3) | | PDF (2320 KB) | HTML This paper investigates the performance benefit of using nonuniformly quantized ADCs for implementing high-speed serial receivers with decision-feedback equalization (DFE). A way of determining an optimal set of ADC thresholds to achieve the minimum bit-error rate (BER) is described, which can yield a very different set from the one that minimizes signal quantization errors. By recognizing that bo... View full abstract» • ### Avoiding the Gain-Bandwidth Trade Off in Feedback Amplifiers Publication Year: 2011, Page(s):2108 - 2113 Cited by:  Papers (10) | | PDF (317 KB) | HTML The gain-bandwidth conflict is one the most important limitations of high gain feedback amplifiers. In this tutorial paper we will discuss in a unified manner the most important approaches aimed to design amplifiers with a constant closed-loop bandwidth. Advantages and drawbacks are evidenced and new potential solutions are also formulated. 
View full abstract» • ### Comments on “Avoiding the Gain-Bandwidth Trade Off in Feedback Amplifiers” Publication Year: 2011, Page(s):2114 - 2116 Cited by:  Papers (5) | | PDF (110 KB) | HTML In this comment, the previous work on bandwidth enhancement of finite gain amplifiers using two or more opamps is brought to the attention of the reader. An analogy between finite gain amplifiers using opamps and current feedback operational amplifiers is also presented. View full abstract» Publication Year: 2011, Page(s): 2117 Cited by:  Papers (1) | | PDF (27 KB) | HTML This paper discussed the comments on avoiding the gain-bandwidth tradeoff in feedback amplifiers. View full abstract» • ### On the Excess Noise Factor $\Gamma$ of a FET Driven by a Capacitive Source Publication Year: 2011, Page(s):2118 - 2126 Cited by:  Papers (4) | | PDF (378 KB) | HTML The excess noise factor Γ , also known as Ogawa's noise factor, is frequently used in the literature on optical receivers to calculate the noise and sensitivity of FET front-ends. After revisiting its definition and clarifying its applications and limitations, we derive an analytical expression for Γ in terms of the channel noise factor γ, the gate noise factor δ , and ... View full abstract» • ### A 12b 50 MS/s 21.6 mW 0.18 $\mu$m CMOS ADC Maximally Sharing Capacitors and Op-Amps Publication Year: 2011, Page(s):2127 - 2136 Cited by:  Papers (21) | | PDF (2551 KB) | HTML A 12b 50 MS/s 0.18 μ m CMOS pipeline ADC is described. The proposed capacitor and operational amplifier (op-amp) sharing techniques merge the front-end sample-and-hold amplifier (SHA) and the first multiplying digital-to-analog converter (MDAC1) to achieve low power without an additional reset timing and a memory effect. The second and third MDACs share a single op-amp to reduce power consu... 
View full abstract» • ### Hardware Reduction in Digital Delta-Sigma Modulators Via Bus-Splitting and Error Masking—Part I: Constant Input Publication Year: 2011, Page(s):2137 - 2148 Cited by:  Papers (13) | | PDF (1730 KB) | HTML In this two-part paper, a design methodology for bus-splitting digital delta-sigma modulators (DDSMs) is presented. The design methodology is based on error masking and is applied to both ditherless and dithered DDSMs with constant and sinusoidal inputs. Rules for selecting the appropriate wordlengths of the constituent DDSMs are derived which ensure that the spectral performance of the bus-splitt... View full abstract» • ### A Continuously Tunable Hybrid LC-VCO PLL With Mixed-Mode Dual-Path Control and Bi-level $\Delta-\Sigma$ Modulated Coarse Tuning Publication Year: 2011, Page(s):2149 - 2158 Cited by:  Papers (4)  |  Patents (2) | | PDF (2139 KB) | HTML This paper presents a dual-path PLL using a hybrid VCO to perform digital based frequency acquisition and analog based bandwidth control. With the mixed-mode dual-path control, the proposed PLL significantly alleviates noise coupling and area problems in the coarse-tuning path while minimizing open-loop gain variation in the fine-tuning path. In the hybrid VCO design, the nonlinearity issue of the... View full abstract» • ### A Two-Dimensional Configurable Active Silicon Dendritic Neuron Array Publication Year: 2011, Page(s):2159 - 2171 Cited by:  Papers (6) | | PDF (1843 KB) | HTML This paper presents a 2-D programmable dendritic neuron array consisting of a 3× 32 dendritic compartment array and a 1 × 32 somatic compartment array. Each dendritic compartment contains two types of regenerative nonlinearities: a NMDA synaptic nonlinearity and a dendritic spike nonlinearity. The chip supports the programmability of local synaptic weights and the configuration of de... 
View full abstract» • ### Design and Modeling of a Neuro-Inspired Learning Circuit Using Nanotube-Based Memory Devices Publication Year: 2011, Page(s):2172 - 2181 Cited by:  Papers (9) | | PDF (1783 KB) | HTML We present an original method to implement neuro-inspired supervised learning for a synaptic array based on carbon nanotube devices. The device characteristics required to implement on chip learning within a crossbar of carbon nanotube field effect transistors (CNTFETs) as synaptic arrays were experimentally demonstrated and accurately modeled through a specific electrical compact model. We perfor... View full abstract» • ### Determining the Range of the Power Consumption in Linear DC Interval Parameter Circuits Publication Year: 2011, Page(s):2182 - 2188 Cited by:  Papers (4) | | PDF (271 KB) | HTML The paper considers the following worst-case steady-state tolerance analysis problem: given a linear dc circuit whose resistors and sources have preset tolerances, determine the range of the electrical power consumed in the circuit. It is shown that the power range sought can be computed as the range of an associated interval linear programming problem. A method for solving the latter problem is s... View full abstract» • ### Design Trade-offs in Ultra-Low-Power Digital Nanoscale CMOS Publication Year: 2011, Page(s):2189 - 2200 Cited by:  Papers (30) | | PDF (1205 KB) | HTML While the general trend in CMOS technology scaling is mostly focused on high-performance and high-speed circuits, the potential use of advanced nanoscale technologies for ultra-low power (ULP) applications with lower operating frequencies is still debated. In these types of applications, the supply voltage is generally reduced well below threshold voltage of MOS devices in order to limit dissipati... 
View full abstract» • ### Carry Chains for Ultra High-Speed SiGe HBT Adders Publication Year: 2011, Page(s):2201 - 2210 Cited by:  Papers (7) | | PDF (2296 KB) | HTML Adder structures utilizing SiGe Hetero-junction Bipolar Transistor (HBT) digital circuits are examined for use in high clock rate digital applications requiring high-speed integer arithmetic. A 4-gate deep test structure for 32-bit addition using a 210 GHz fT process has been experimentally verified to operate with 37.5 ps delay or 26.7 GHz speed. The paper documents a unique ble... View full abstract» ## Aims & Scope The theory, analysis, design, and practical implementations of circuits, and the application of circuit techniques to systems and to signal processing. Full Aims & Scope ## Meet Our Editors Editor-in-Chief Andreas Demosthenous Dept. Electronic & Electrical Engineering University College London London WC1E 7JE, UK
https://brilliant.org/problems/cool-inequality-4-only-holders/
# Use only Holder's

If $a$ and $b$ are positive real numbers such that $a^{2015}+b^{2015}=2015,$ find the maximum value of $a+b$.
https://socratic.org/questions/how-do-you-factor-the-trinomial-x-2-8x-24-0
# How do you factor the trinomial x^2+8x+24=0?

${x}^{2} + 8 x + 24 = \left(x + 4 + i 2 \sqrt{2}\right) \left(x + 4 - i 2 \sqrt{2}\right)$

As the discriminant of the equation ${x}^{2} + 8 x + 24 = 0$ is ${8}^{2} - 4 \times 1 \times 24 = 64 - 96 = - 32$, the factors will be complex.

Using the quadratic formula, the zeros of ${x}^{2} + 8 x + 24$ are $\frac{- 8 \pm \sqrt{- 32}}{2}$ or $- 4 \pm \frac{4}{2} \sqrt{- 2}$ or $- 4 \pm i 2 \sqrt{2}$.

Hence the factors of ${x}^{2} + 8 x + 24$ are $\left(x + 4 + i 2 \sqrt{2}\right) \left(x + 4 - i 2 \sqrt{2}\right)$.
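As a quick numerical cross-check of these roots, here is a short Python sketch using the standard-library `cmath` module:

```python
import cmath

# Roots of x^2 + 8x + 24 via the quadratic formula
a, b, c = 1, 8, 24
d = b*b - 4*a*c                       # discriminant: 64 - 96 = -32
r1 = (-b + cmath.sqrt(d)) / (2*a)     # -4 + 2*sqrt(2)*i
r2 = (-b - cmath.sqrt(d)) / (2*a)     # -4 - 2*sqrt(2)*i

# The factors multiply back to the original trinomial:
# (x - r1)(x - r2) = x^2 - (r1 + r2)x + r1*r2 = x^2 + 8x + 24
print(r1 + r2)   # (-8+0j), i.e. -b/a
print(r1 * r2)   # ≈ 24, i.e. c/a
```

The sum and product of the roots matching $-b/a$ and $c/a$ (Vieta's formulas) confirms the factorisation.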
http://bootmath.com/race-problem-counting.html
# Race Problem counting

In a race there are $n$ horses. In the race, more than one horse may get the same position. For example, 2 horses can finish in 3 ways:

1. Both first
2. Horse 1 first and horse 2 second
3. Horse 2 first and horse 1 second

In how many ways can the race finish so that a particular horse is never first? If $f(n)$ is the number of ways, is there any recurrence relation to get $f(n)$?

#### Solutions

Let $g(h,p)$ be the number of ways that $h$ horses can be ordered in $p$ positions. Then consider introducing an extra horse: it can either tie with one of the $p$ existing positions of the $h-1$ horses, or it can take a new position on its own, inserted into one of the $p$ slots before, between, or after the $p-1$ positions of the $h-1$ horses. This leads to the recurrence $$g(h,p)= p\,g(h-1,p)+p\,g(h-1,p-1)$$ starting at $g(1,p)=0$ except $g(1,1)=1$. This gives a table of numbers which are in fact factorials multiplied by Stirling numbers of the second kind. So for $h$ horses, the number of possibilities is the sum of these, e.g. $\displaystyle G(h)=\sum_{p=1}^h g(h,p)$. These are called Fubini numbers or ordered Bell numbers.

But you want to exclude a particular horse from being first alone or equal first. The number of cases where it would be first alone among $h$ horses is equal to the total number of cases with $h-1$ horses, and (assuming there is more than one horse) the number of cases where it would be equal first among $h$ horses is also equal to the total number of cases with $h-1$ horses. So subtract these from the total with $h$ horses to get $$f(h)=G(h)-2G(h-1)$$ and for the special case of $h=1$ note that obviously $f(1)=0$, since the single horse must come first.

Possible outline: Let $N(n)$ be the number of possible position assignments for $n$ horses (not worrying about making sure that one doesn't come in first). Define $N(0)=1$. Let's start counting: $N(1) = 1$ clearly. $N(2) = 3$, since both could tie for first, horse A could win and B lose, or B wins and A loses.
$N(3) = {{3}\choose{0}}N(0) + {{3}\choose{1}}N(1) + {{3}\choose{2}}N(2)$ because, of the $n=3$ horses, we can rearrange the $k = 0, 1, 2$ horses that don't win in $N(k)$ ways. Continuing you will find that $$N(n) = \sum\limits_{k=0}^{n-1} {{n}\choose{k}} N(k).$$ Now, if you ask how many of those $N(n)$ arrangements have it so that horse A doesn't win, it seems you will find $$f(n) = N(n) - 2N(n-1)$$ where $2N(n-1)$ is the number of ways where horse A does win. Indeed, there are $N(n-1)$ ways to arrange the $n-1$ horses that aren't horse A if horse A is the unique winner, and there are $N(n-1)$ ways to arrange the horses that aren't horse A if horse A wins but is tied for first.

Let's say there are $n$ horses and we don't want a particular horse, Pony, to be first. Let's first define $g(k,n)$, the number of ways for the $n$ horses to take positions such that the ranks used are exactly $1$ to $k$. By the inclusion-exclusion principle, $g(k,n)$ is exactly equal to $k^n - k\times (k-1)^n + \binom k 2 \times (k-2)^n - \dots = \sum_{i=0}^{k}{(-1)^i \binom k i (k-i)^n}$. Now the problem is easy. First order the other $n-1$ horses, and place Pony last. If the other $n-1$ horses take ranks from $1$ to $k$, there are $2k-1$ possible positions for Pony that are not first: tying any of ranks $2$ to $k$ ($k-1$ ways), or taking a new position in one of the $k$ gaps strictly after rank $1$. Therefore, $f(n)=\sum_{k=1}^{n-1}{(2k-1)g(k,n-1)} = \sum_{k=1}^{n-1}{\Big( (2k-1)\sum_{i=0}^{k}{(-1)^i \binom {k} i (k-i)^{n-1}}\Big)}$
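The formulas above are easy to sanity-check with a short script. Here is a sketch in Python (the function names `surj`, `G`, `f`, and `f_alt` are my own): it computes $g(k,n)$ by inclusion-exclusion, the ordered Bell numbers $G(n)$, and checks that the two expressions for $f(n)$ agree.

```python
from math import comb

def surj(k, n):
    # Number of ways n labeled horses can finish using every rank 1..k
    # (surjections onto k ranks), by inclusion-exclusion.
    return sum((-1) ** i * comb(k, i) * (k - i) ** n for i in range(k + 1))

def G(n):
    # Ordered Bell (Fubini) number: all finishes of n horses, ties allowed.
    return sum(surj(k, n) for k in range(1, n + 1))

def f(n):
    # Finishes in which one fixed horse is never first, alone or tied.
    return G(n) - 2 * G(n - 1)

def f_alt(n):
    # Same count via the (2k-1) * g(k, n-1) summation.
    return sum((2 * k - 1) * surj(k, n - 1) for k in range(1, n))

print([f(n) for n in range(2, 7)])  # [1, 7, 49, 391, 3601]
```

For instance $f(2)=1$: with two horses, the only allowed finish is the other horse first and ours second, which matches the count above.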
2018-07-21 15:44:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5749439597129822, "perplexity": 391.0652863248006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00333.warc.gz"}
https://calculator.academy/degree-to-percent-slope-calculator/
Enter the slope (degrees) into the Percent Slope From Degrees Calculator. The calculator will evaluate and display the Percent Slope From Degrees.

## Percent Slope From Degrees Formula

The following formula is used to calculate the Percent Slope From Degrees.

PS = tan(s) * 100

• Where PS is the Percent Slope From Degrees (%)
• s is the slope (degrees)

To calculate the percent slope, multiply the tangent of the slope angle by 100.

## How to Calculate Percent Slope From Degrees?

The following example problems outline how to calculate Percent Slope From Degrees.

Example Problem #1:

1. First, determine the slope (degrees). The slope (degrees) is given as: 6.
2. Finally, calculate the Percent Slope From Degrees using the equation above: PS = tan(s) * 100

The values given above are inserted into the equation below and the solution is calculated:

PS = tan(6 deg) * 100 = 10.51 (%)

Example Problem #2:

For this problem, the variables needed are provided below:

slope (degrees) = 15

This example problem is a test of your knowledge on the subject. Use the calculator above to check your answer.

PS = tan(s) * 100 = ?
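As a quick sketch, the conversion (plus its inverse, which the page does not show) can be written in a few lines of Python; note that `math.tan` expects radians, so the degree value must be converted first.

```python
import math

def percent_slope(degrees):
    # PS = tan(s) * 100, with the angle s converted from degrees to radians.
    return math.tan(math.radians(degrees)) * 100

def slope_degrees(percent):
    # Inverse conversion: slope angle in degrees from a percent slope.
    return math.degrees(math.atan(percent / 100))

print(round(percent_slope(6), 2))   # 10.51, matching Example Problem #1
print(round(percent_slope(15), 2))  # 26.79, the answer to Example Problem #2
```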
2023-02-05 08:42:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5593204498291016, "perplexity": 3283.8054261251523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00179.warc.gz"}
http://www.ck12.org/trigonometry/Simplifying-Trigonometric-Expressions-using-Sum-and-Difference-Formulas/prepostread/Simplifying-Trig-Expressions-using-Sum-and-Difference-Formulas-KWL-Chart/r1/
Simplifying Trigonometric Expressions using Sum and Difference Formulas (Activities) | Trigonometry | CK-12 Foundation

# Simplifying Trigonometric Expressions using Sum and Difference Formulas

Simplifying Trig Expressions using Sum and Difference Formulas KWL Chart

To activate prior knowledge, to generate questions about a given topic, and to organize knowledge using a KWL Chart.
2014-12-18 23:17:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8605555891990662, "perplexity": 8039.459076250914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768044.102/warc/CC-MAIN-20141217075248-00114-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/derivative-y-x-x.470450/
# Derivative y=√x+√x

## Homework Statement

derivative y=√x+√x

## The Attempt at a Solution

$$\sqrt{x} = x^{\frac{1}{2}}$$

yes but y=sqrt x+sqrtx

Char. Limit
Gold Member

I assume you mean $$y=\sqrt{x+\sqrt{x}}$$ If that's the case, use the chain rule. But be sure to mark clearly with parentheses. What you wrote could just as easily be: $$y=\sqrt{x}+\sqrt{x}$$

yes that is what I mean but I'm stuck so please would you help me?

Char. Limit
Gold Member

Well, do you know the chain rule?

no we haven't studied that yet

Char. Limit
Gold Member

Well, that's not good, because you need the chain rule to solve this. Basically, it states that if we have two functions, f(x) and g(x), then the derivative of f(g(x)) is f'(g(x))g'(x). Or, in Leibniz notation... $$\frac{df}{dx} = \frac{df}{dg} \frac{dg}{dx}$$ Now, by setting $f(g(x)) = \sqrt{g(x)}$ and $g(x)=x+\sqrt{x}$, you can use the chain rule to get the derivative.

thank you very much
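To make the chain-rule answer concrete, here is a small numerical check (my own sketch, not from the thread): working out f'(g(x))g'(x) with f(g) = √g and g(x) = x + √x gives dy/dx = (1 + 1/(2√x)) / (2√(x+√x)), which is compared below against a central finite difference.

```python
import math

def y(x):
    return math.sqrt(x + math.sqrt(x))

def dy_dx(x):
    # Chain rule with f(g) = sqrt(g) and g(x) = x + sqrt(x):
    # dy/dx = f'(g(x)) * g'(x) = g'(x) / (2*sqrt(g(x)))
    g = x + math.sqrt(x)
    g_prime = 1 + 1 / (2 * math.sqrt(x))
    return g_prime / (2 * math.sqrt(g))

def dy_dx_numeric(x, h=1e-6):
    # Central finite-difference approximation for comparison.
    return (y(x + h) - y(x - h)) / (2 * h)
```

At x = 4, for example, both functions agree to well within the finite-difference error.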
2021-06-18 12:16:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509100317955017, "perplexity": 543.3638240815944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487636559.57/warc/CC-MAIN-20210618104405-20210618134405-00632.warc.gz"}
https://en.wikipedia.org/wiki/Abel_test
# Abel's test

In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel. There are two slightly different versions of Abel's test – one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters.

## Abel's test in real analysis

Suppose the following statements are true: 1. ${\displaystyle \sum a_{n}}$ is a convergent series, 2. {bn} is a monotone sequence, and 3. {bn} is bounded. Then ${\displaystyle \sum a_{n}b_{n}}$ is also convergent. It is important to understand that this test is mainly pertinent and useful in the context of non-absolutely convergent series ${\displaystyle \sum a_{n}}$. For absolutely convergent series, this theorem, albeit true, is almost self-evident.

## Abel's test in complex analysis

A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence of positive real numbers ${\displaystyle (a_{n})}$ is decreasing monotonically (or at least that for all n greater than some natural number m, we have ${\displaystyle a_{n}\geq a_{n+1}}$) with ${\displaystyle \lim _{n\rightarrow \infty }a_{n}=0\,}$ then the power series ${\displaystyle f(z)=\sum _{n=0}^{\infty }a_{n}z^{n}\,}$ converges everywhere on the closed unit circle, except when z = 1. Abel's test cannot be applied when z = 1, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1.
It can also be applied to a power series with radius of convergence R ≠ 1 by a simple change of variables ζ = z/R.[1] Notice that Abel's test is a generalization of the Leibniz Criterion by taking z = −1. Proof of Abel's test: Suppose that z is a point on the unit circle, z ≠ 1. For each ${\displaystyle n\geq 1}$, we define ${\displaystyle f_{n}(z):=\sum _{k=0}^{n}a_{k}z^{k}.}$ By multiplying this function by (1 − z), we obtain {\displaystyle {\begin{aligned}(1-z)f_{n}(z)&=\sum _{k=0}^{n}a_{k}(1-z)z^{k}=\sum _{k=0}^{n}a_{k}z^{k}-\sum _{k=0}^{n}a_{k}z^{k+1}=a_{0}+\sum _{k=1}^{n}a_{k}z^{k}-\sum _{k=1}^{n+1}a_{k-1}z^{k}\\&=a_{0}-a_{n}z^{n+1}+\sum _{k=1}^{n}(a_{k}-a_{k-1})z^{k}.\end{aligned}}} The first summand is constant, the second converges uniformly to zero (since by assumption the sequence ${\displaystyle (a_{n})}$ converges to zero). It only remains to show that the series converges. We will show this by showing that it even converges absolutely: ${\displaystyle \sum _{k=1}^{\infty }\left|(a_{k}-a_{k-1})z^{k}\right|=\sum _{k=1}^{\infty }|a_{k}-a_{k-1}|\cdot |z|^{k}\leq \sum _{k=1}^{\infty }(a_{k-1}-a_{k})}$ where the last sum is a converging telescoping sum. The absolute value could be dropped because the sequence ${\displaystyle (a_{n})}$ is decreasing by assumption. Hence, the sequence ${\displaystyle (1-z)f_{n}(z)}$ converges (even uniformly) on the closed unit disc. If ${\displaystyle z\not =1}$, we may divide by (1 − z) and obtain the result.
Let {gn} be a uniformly bounded sequence of real-valued continuous functions on a set E such that gn+1(x) ≤ gn(x) for all x ∈ E and positive integers n, and let {ƒn} be a sequence of real-valued functions such that the series Σƒn(x) converges uniformly on E. Then Σƒn(x)gn(x) converges uniformly on E. ## Notes 1. ^ (Moretti, 1964, p. 91)
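A quick numerical illustration of the complex-analysis version (my own sketch): take a_n = 1/n, which decreases to 0, so Σ z^n/n should converge everywhere on |z| = 1 except at z = 1. The closed form Σ_{n≥1} z^n/n = −log(1 − z) gives something to compare against at z = i, while at z = 1 the partial sums are the divergent harmonic numbers.

```python
import cmath

def partial_sum(z, N):
    # S_N(z) = sum over n = 1..N of z**n / n, with a_n = 1/n decreasing to 0.
    s, term = 0j, 1 + 0j
    for n in range(1, N + 1):
        term *= z
        s += term / n
    return s

# On the unit circle away from z = 1 the series converges to -log(1 - z):
print(abs(partial_sum(1j, 100000) - (-cmath.log(1 - 1j))))  # small
# At z = 1 the partial sums are harmonic numbers and diverge:
print(partial_sum(1 + 0j, 10000).real)  # about 9.79, growing like log N
```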
2017-05-23 17:22:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477293491363525, "perplexity": 172.89359688083664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607648.39/warc/CC-MAIN-20170523163634-20170523183634-00224.warc.gz"}
https://pos.sissa.it/380/358/
Volume 380 - Particles and Nuclei International Conference 2021 (PANIC2021) - QCD, spin physics and chiral dynamics QCD physics measurements at LHCb D. Zuliani* on behalf of the LHCb collaboration Full text: pdf Pre-published on: March 02, 2022 Published on: May 24, 2022 Abstract The LHCb experiment is a general purpose forward detector which studies a phase space region complementary to ATLAS and CMS. Its excellent vertex and track reconstruction system allows it to perform several measurements of perturbative QCD physics in a region unexplored by other experiments. In the following, the latest QCD physics analyses performed at LHCb studying $pp$ collisions with Run 1 and Run 2 data are presented. DOI: https://doi.org/10.22323/1.380.0358 Open Access Copyright owned by the author(s) under the term of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
2023-03-24 19:51:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40008988976478577, "perplexity": 5164.212952490755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945288.47/warc/CC-MAIN-20230324180032-20230324210032-00404.warc.gz"}
http://math.stackexchange.com/questions/116964/the-set-of-functions-which-map-convergent-series-to-convergent-series
# The set of functions which map convergent series to convergent series

Suppose $f$ is some real function with the above property, i.e. if $\sum\limits_{n = 0}^\infty {x_n}$ converges, then $\sum\limits_{n = 0}^\infty {f(x_n)}$ also converges. My question is: can anything interesting be said regarding the behavior of such a function close to $0$, other than the fact that $f(0)=0$?

- A simple example: If $\sum x_n$ converges absolutely, then any $f(x) = O(x)$ has the desired property. – Antonio Vargas Mar 6 '12 at 5:19
- All I can think is that, if the space you're in is complete, uniformly-continuous functions take Cauchy sequences to Cauchy sequences. This allows a unique continuous extension for functions defined in dense subsets. And continuous alone is not enough for this last, as f(x)=1/(x-2^{1/2}) is an example. – AQP Mar 6 '12 at 5:21
- Let me ask this: must $f$ be continuous at $0$? – the_fox Mar 6 '12 at 5:25
- I misread your question so deleted the comment I posted. – Patrick Mar 6 '12 at 5:30
- Are your spaces--initial and target--complete, or do you want a more general result for any spaces? From your f(0)=0, I assume your maps are from R to R? – AQP Mar 6 '12 at 5:39

I'm quite late on this one, but I think the result is nice enough to be included here.

Definition: A function $f : \mathbb R \to \mathbb R$ is said to be convergence-preserving (hereafter CP) if $\sum f(a_n)$ converges for every convergent series $\sum a_n$.

Theorem (Wildenberg): The CP functions are exactly the ones which are linear on some neighbourhood of 0.

Proof (Smith): Clearly, whether $f$ is CP only depends on the restriction of $f$ to an arbitrarily small neighbourhood of $0$. Since the linear functions are CP, the condition is clearly sufficient. Let's prove that it is also necessary. We will prove two preliminary results.

Lemma 1: $f$ CP $\Rightarrow$ $f$ continuous at $0$.

Proof: Let's suppose that $f$ isn't continuous at 0.
This implies that there exists a sequence $\epsilon_n \to 0$ and a positive real $\eta > 0$ such that $\forall n, |f(\epsilon_n)| \geq \eta$. But it is easy to extract a subsequence $\epsilon_{\phi(n)}$ such that $\sum \epsilon_{\phi(n)}$ converges (take $\phi$ s.t. $\epsilon_{\phi(n)} \leq 2^{-n}$, for instance). For such a subsequence, we still have that $|f(\epsilon_{\phi(n)})| \geq \eta$. This prevents $\sum f(\epsilon_{\phi(n)})$ from converging and, thus, $f$ from being CP, a contradiction.

Lemma 2: The function $(x, y) \mapsto f(x+y) + f(-x) + f(-y)$ vanishes on some neighbourhood of $0$.

Proof: If it didn't, one would be able to find sequences $x_n \to 0$ and $y_n \to 0$ s.t. $\forall n, f(x_n + y_n) + f(-x_n) + f(-y_n) \neq 0$. Up to some extraction, we can assume that $\delta_n = f(x_n + y_n) + f(-x_n) + f(-y_n)$ always has the same sign (let's say $\delta_n > 0$, for the sake of simplicity). Consider now the series $$\begin{array}{l@{}l} (x_0 + y_0) &+ (-x_0) + (-y_0) + \cdots + (x_0 + y_0) + (-x_0) + (-y_0)\\ & +(x_1 + y_1) + (-x_1) + (-y_1) + \cdots + (x_1 + y_1) + (-x_1) + (-y_1)\\ &+\cdots\\ &+(x_n + y_n) + (-x_n) + (-y_n) + \cdots + (x_n + y_n) + (-x_n) + (-y_n)+\cdots, \end{array}$$ where every triplet of terms $(x_i+y_i) + (-x_i) + (-y_i)$ is repeated $M_i$ times, for some positive integer $M_i$. Because $x_n \to 0$ and $y_n \to 0$ and the three terms $x_i + y_i, -x_i, -y_i$ add to 0, it is easy to see that this series is convergent, regardless of the choice of the $M_i$'s. On the other hand, if we choose $M_i \geq \delta_i^{-1}$, the image of our series by $f$ is $$\begin{array}{l@{}l} f(x_0 + y_0) &+ f(-x_0) + f(-y_0) + \cdots + f(x_0 + y_0) + f(-x_0) + f(-y_0)\\ & +f(x_1 + y_1) + f(-x_1) + f(-y_1) + \cdots + f(x_1 + y_1) + f(-x_1) + f(-y_1)\\ &+\cdots\\ &+f(x_n + y_n) + f(-x_n) + f(-y_n) + \cdots + f(x_n + y_n) + f(-x_n) + f(-y_n)+\cdots, \end{array}$$ which diverges, for every line adds to $M_i \delta_i \geq 1$.
Again, this is in direct contradiction with the CPness of $f$. If we apply the result of lemma 2 with $y = 0$, we get that $f(-x) = -f(x)$. So we can rewrite lemma 2 in the following way: $\exists \eta > 0 : \forall x, y \in (-\eta, \eta), f(x+y) = f(x) + f(y)$. This property and the continuity at 0 imply first the continuity on the whole of $(-\eta, \eta)$, and it is then not hard to adapt the classical proof to show that $f$ is linear on $(-\eta, \eta)$. Q.E.D.

- Great answer! Thank you. – the_fox Sep 30 '14 at 16:07
- Do you mean, in applying lemma 2, that $f(-x)=-f(x)$? Hard to see how you'd get $f(x)=-x$. – Thomas Andrews Nov 2 '14 at 14:06
- You're absolutely right, of course. I fixed it. – PseudoNeo Nov 3 '14 at 15:48

If $f$ is not continuous at $0$, then we can find a sequence $x_n$ that converges to $0$ but $f(x_n)$ doesn't converge to $0$. First get a subsequence $y_n$ of $x_n$ with $|f(y_n)| > r$ for some $r>0$. Next choose some subsequence $z_n$ of $y_n$ so that $\sum z_n$ converges. However, the series $\sum f(z_n)$ diverges, contradicting the hypothesis, and it follows that $f$ is continuous at $0$.

- Ok, next question: must $f$ be differentiable at $0$? – the_fox Mar 6 '12 at 5:54

Answer to the next question: no. Let $f\colon\mathbb{R}\to\mathbb{R}$ be defined by $$f(x)=\begin{cases} n\,x & \text{if } x=2^{-n}, n\in\mathbb{N},\\ x & \text{otherwise.} \end{cases}$$ Then $\lim_{x\to0}f(x)=f(0)=0$, $f$ transforms convergent series into convergent series, but $f(x)/x$ is not bounded in any open set containing $0$. In particular $f$ is not differentiable at $x=0$. This example can be modified to make $f$ continuous. Proof. Let $\sum_{k=1}^\infty x_k$ be a convergent series. Let $I=\{k\in\mathbb{N}:x_k=2^{-n}\text{ for some }n\in\mathbb{N}\}$. For each $k\in I$ let $n_k\in\mathbb{N}$ be such that $x_k=2^{-n_k}$. Then $$\sum_{k=1}^\infty f(x_k)=\sum_{k\in I} n_k\,2^{-n_k}+\sum_{k\not\in I} x_k.$$ The series $\sum_{k\in I} n_k\,2^{-n_k}$ is convergent.
It is enough to show that $\sum_{n\not\in I} x_n$ is also convergent. This follows from the equality $$\sum_{n=1}^\infty x_n=\sum_{n\in I} x_n+\sum_{n\not\in I} x_n$$ and the fact that $\sum_{n=1}^\infty x_n$ is convergent and $\sum_{n\in I} x_n$ absolutely convergent.

The proof is wrong. $\sum_{n\in I} x_n$ may be divergent. Consider the series $$\frac12-\frac12+\frac14-\frac14+\frac14-\frac14+\frac18-\frac18+\frac18-\frac18+\frac18-\frac18+\frac18-\frac18+\dots$$ It is convergent, since its partial sums are $$\frac12,0,\frac14,0,\frac14,0,\frac18,0,\frac18,0,\frac18,0,\frac18,0,\dots$$ The transformed series is $$\frac12-\frac12+\frac24-\frac14+\frac24-\frac14+\frac38-\frac18+\frac38-\frac18+\frac38-\frac18+\frac38-\frac18+\dots$$ whose partial sums are $$\frac12,0,\frac12,\frac14,\frac34,\frac12,\frac78,\frac34,\frac98,1,\frac{11}8,\frac54,\dots$$ which grow without bound.

On the other hand, $f(x)=O(x)$, the condition in Antonio Vargas' comment, is not enough when one considers series of arbitrary sign. Let $$f(x)=\begin{cases} x\cos\dfrac{\pi}{x} & \text{if } x\ne0,\\ 0 & \text{if } x=0, \end{cases} \quad\text{so that }|f(x)|\le|x|.$$ Let $x_n=\dfrac{(-1)^n}{n}$. Then $\sum_{n=1}^\infty x_n$ converges, but $$\sum_{n=1}^\infty f(x_n)=\sum_{n=1}^\infty\frac1n$$ diverges.

- Can you please provide a short proof why $f$ sends convergent series to convergent series? – the_fox Mar 6 '12 at 20:16
- I have written the proof. – Julián Aguirre Mar 6 '12 at 21:04
- Not sure $\sum\limits_{k\in I}n_k/2^{n_k}$ always converges. Consider $(x_i)_{i\geqslant1}$ such that every $x_i$ is a negative power of $2$ and assume that $x_i=1/2^k$ roughly $2^k/k^2$ times. Then $\sum\limits_{i\geqslant1}x_i\approx\sum\limits_{k\geqslant1}1/k^2$ converges but $\sum\limits_{i\geqslant1}f(x_i)\approx\sum\limits_{k\geqslant1}1/k$ diverges. – Did Mar 6 '12 at 21:35
- Aside from Didier's point above, are you certain you can split the series? – the_fox Mar 6 '12 at 21:42
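The last counterexample is easy to verify numerically. A small sketch in Python (names mine): with x_n = (−1)^n/n we have cos(π/x_n) = cos(nπ) = (−1)^n, so f(x_n) = 1/n; while Σ x_n converges to −ln 2, the image series is the divergent harmonic series.

```python
import math

def f(x):
    # f(x) = x * cos(pi / x) for x != 0, f(0) = 0; note |f(x)| <= |x|.
    return x * math.cos(math.pi / x) if x != 0 else 0.0

def partial_sums(N):
    x = [(-1) ** n / n for n in range(1, N + 1)]
    return sum(x), sum(f(t) for t in x)

sx, sf = partial_sums(10000)
print(sx)  # close to -ln 2 = -0.6931...
print(sf)  # harmonic partial sum, about 9.79; grows without bound in N
```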
2015-08-01 16:47:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851975440979004, "perplexity": 201.5244878692984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988840.31/warc/CC-MAIN-20150728002308-00026-ip-10-236-191-2.ec2.internal.warc.gz"}
https://calendar.math.illinois.edu/?year=2021&month=03&day=29&interval=day
Department of Mathematics

Seminar Calendar for events the day of Monday, March 29, 2021.

Questions regarding events or the calendar should be directed to Tori Corkery.

Monday, March 29, 2021

3:00 pm in Zoom, Monday, March 29, 2021

#### The Telescope Conjecture

###### Liz Tatum (UIUC)

Abstract: In his 1984 paper "Localization with Respect to Certain Periodic Homotopy Theories", Ravenel made seven major conjectures about homotopy theory. While the rest of these conjectures were quickly proven and are an important part of the framework for chromatic homotopy theory, the telescope conjecture remains open. Roughly, the telescope conjecture claims that: "finite localization and smashing localization in the stable homotopy category are the same". In this talk, we'll discuss localization in the stable homotopy category and various ways to state the telescope conjecture. Time permitting, we'll briefly discuss a generalization of this conjecture to other categories. Please email vb8 at illinois dot edu for the zoom details.

5:00 pm in Altgeld Hall, Monday, March 29, 2021

#### Seemingly injective von Neumann algebras

###### Timur Oikhberg

Abstract: Part 3 https://illinois.zoom.us/j/87333719853?pwd=bFdWSkdqeEt1M3djNmVEU2Jvb3JzUT09
2021-09-28 02:28:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39901047945022583, "perplexity": 524.9912189987875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058589.72/warc/CC-MAIN-20210928002254-20210928032254-00222.warc.gz"}
http://saltotech.it/ruqj/curly-bracket-latex-multiple-lines.html
ò Click on the Typeset button. It is easy if there are only few lines in the file. they are used to surround mandatory arguments of a command. > curly bracket spanning three lines: > > Here is a list: > > (A) the first item } > (B) the second item } these are all items > (C) the third item } > > Those three curly brackets should be one enormous bracket, though, > covering all three lines, with the middle pointy bit aiming at the words > on the right. gaps adds empty rows (or, more generally, additional vertical space) between coefficients to increase readability (empty rows are also inserted between the table's header, body, and footer, unless lines is activated). On 32-bit builds and in earlier versions, a string can be as. edu September 12, 2005 1 Introduction LATEX is the standard mathematical typesetting program. Lindsay Tustison Jul 15, 2014. Citations – Classes – Ctan – Deprecated – Figures – Footnotes – Hyphenation – Index – Labels – Latex – Latex3 – Layout – Lists – Luatex – Macros – Math – Metafont – Pdftex – References – Structure – Tables – Toc – Unicode – Xetex. Angle Brackets <…> are typically used to enclose and illustrate highlighted information. In the second set of brackets contained in the hyperlink command, one types in the citation as usual, in any format. When I was a student I used to put curly braces on the same line, so that there are fewer lines, and the code gets printed on fewer pages. When writing a math definition of a function, for example, the function may have different results depending on the value of the inputs. Use dictation to convert spoken words into text anywhere on your PC with Windows 10. , (3 + 2) x (10-3) = 35) but not for equations that have greater vertical size, such as those using fractions. For documentation describing the tree-dvips macros, just type latex tree-manual and then print tree-manual. In this case, I simply got lucky because on each line before the + sign, there is only a 1 and an equals sign. 
A comment line begins with a percent symbol (%) and continues to the end of the line. curly bracket spanning three lines: Here is a list: (A) the first item } (B) the second item } these are all items (C) the third item } Those three curly brackets should be one enormous bracket, though, covering all three lines, with the middle pointy bit aiming at the words on the right. It supports files like JavaScript, HTML, JSON, XML, CSS, PNG, and JPG. The second algorithm example has nested ForEach loop with If/ElseIf/Else condition inside it. , indicate a matrix. This article has also been viewed 170,369 times. Also the \text {} macro for setting text inside math mode. Using curly arrows to show the movement of single electrons. This should get you started, and you can use on-line guides and books in the library to help you make more sophisticated ones. concerning multiple columns, amsmath provides several align-like environments, have a look at amsldoc. Square brackets are common enough that the word "square" can be omitted. Subscripts and superscripts are created using _{} and ^{}. Best Way to Type/Insert Parenthesis, Brackets. I do not understand the order of. You supply all three elements of the display in a single statement: the lower limit first, the upper limit second, and the expression third. Hence the 'dirty' way. Import LaTeX tables. Last updated: 2019-07-01. Consider the problem of typesetting the formula It is necessary to ensure that the = signs are aligned with one another. The visitor's LaTeX, entered or copied into the editing window below, will be quickly rendered by up to three renderers (in different ways). If three or more periods occur before the end of a line, then MATLAB ignores the rest of the line and continues to the next line. matrix without brackets. New lines: What you type: What you see: Line break here. @murray $\endgroup$ - TransferOrbit Mar 15 '13 at 8:07. 
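For reference, the multi-line brace described above can be produced with a short LaTeX sketch (assuming the amsmath package for \text):

```latex
% One large right brace covering three lines, labelled on the right.
% Assumes \usepackage{amsmath} in the preamble (for \text).
\[
\left.
\begin{array}{l}
\text{(A) the first item}\\
\text{(B) the second item}\\
\text{(C) the third item}
\end{array}
\right\}
\quad \text{these are all items}
\]
```

The empty \left. delimiter pairs with \right\}, so the brace stretches to the height of the array.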
The only other place you may encounter curly brackets is on forums or when instant messaging, where they carry their own informal meaning. In a BibTeX entry, the citation key follows immediately after the opening brace. In Markdown, to create a link you place the text of the link in square brackets followed by the URL in parentheses. Equation editors offer a bracket palette as well: type the body of the expression first, such as F = (a+b+c+d+e+f+g+h+i+j+k)^2, then select it and apply a stretched pair of brackets from the math panel. Code formatters also deal in braces: the phpfmt extension for Visual Studio Code, for instance, has a RemoveSemicolonAfterCurly pass that removes a stray semicolon after a closing curly brace.
Explicit\" provides a way to write command line parsers for both single mode programs (most programs) and multiple mode programs (e. Here is a tutorial demonstrating how to draw curly hair. 2, “How to import multiple members in Scala (wildcard and curly braces syntax). Use search box to filter the result. Please do as follows. For documentation describing the tree-dvips macros, just type latex tree-manual and then print tree-manual. >> That's because it's a bad practice to separate the body of the text by >> blanks. And today, I want to show that by using same logic we can create a COUNT OR formula. 6 is required. I gave that a try, but it won't work for me because I can't get a long enough bracket unless it's such a large font size that it overwhelms the image. For example, typing $\sqrt {x} = 5$ gives us. Note the use of curly brackets, these will be very useful in Latex. For this to work, you must have \usepackage {amsmath} in the preamble. Much of the text before the \begin{document} line are Latex statements used to create margin and text sizes needed for most generic scientific documents. Executable Statements, which initiate actions. In large equations or derivations which span multiple lines, we can use the \begin{align} and \end{align} commands to correctly display the aligned mathematics. Using Visual Studio, I just started coding a little fill in the blank type story. Introduction Kate’s VI mode is a project to bring Vim-like, modal editing to the Kate text editor and by extension to other KDE programs who share the same editor component. More help. I have continued to maintain and further develop this code and the number of missing features from Vim are. 40 7 Emacs and IEEEeqnarray42 8 Some Useful De nitions43 9 Some Final Remarks and Acknowledgments45 Index 45 Over the years this manual has grown to quite an extended size. 
In programming style guides, brace placement is a long-running debate; the "one true brace style" is one of the most common conventions. (Square brackets are common enough that the word "square" can be omitted; "brackets" alone usually means them.) You may omit curly braces when a block consists of a single statement; however, you must then consistently either use or not use braces for single-statement blocks. To run the phpfmt formatter in Visual Studio Code, use F1 -> "phpfmt: Format This File", or the keyboard shortcut Ctrl + Shift + I, which is the editor's default formatter shortcut. In mathematics, delimiter choice around a matrix is largely taste: some authors use square brackets for matrices of a linear nature and reserve parentheses for others. In Word, you can draw a left or right brace shape to bracket over multiple lines of text in a document, as described below.
In LaTeX, curly braces have two functions: they surround the mandatory arguments of a command, and they group tokens so that a declaration (a font switch, say) applies only to the material inside the group. In editors, braces are first-class citizens too. In Vim, you can select the text inside a pair of delimiters by typing v i followed by the quote, parenthesis, or brace character, and the % motion jumps between a bracket and its match, which answers the common question of how to get from an opening { to the trailing } in a long piece of code.
In a few editors, double-clicking a { takes you to the trailing }; in others you need an explicit go-to-matching-bracket command, and extensions exist for selecting everything between brackets or quotes in one keystroke. On the typesetting side, inline math is delimited by dollar signs: typing $\sqrt{x} = 5$ sets the formula in the running text. In beamer, at the end of the \begin{block} command we simply enter the block's title in curly brackets. A frequent LaTeX question, phrased as "two statements next to a curly brace in an equation", is exactly what the cases environment is for: one left brace covering several alternatives. Be aware that some renderers mishandle literal curly brackets in display formulas while showing them correctly in inline formulas, so if braces vanish from your output, try escaping them or switching math modes.
Curly braces (also referred to as just "braces" or as "curly brackets") are a major part of the C++ programming language, delimiting function bodies, classes, and blocks. BibTeX uses them too: it allows you to automatically generate and format a bibliography in a LaTeX document, with braces delimiting each entry. In Word, field braces cannot simply be typed: you can easily create fields by typing everything except the curly brackets, then selecting the text enclosed by each pair and pressing Ctrl+F9. In JavaScript, template literals are enclosed by the back-tick (grave accent) character instead of double or single quotes, and interpolate expressions written inside ${...}. Back in LaTeX, the \nolinebreak command prevents the current line from being broken at the point of the command.
Note that math mode ignores whitespace: a formula can be spread over several source lines, or put on one line, and it will compile to the same output. Delimiter size can be set manually or resized dynamically by prefixing the opening and closing delimiters with \left and \right, so that they stretch to the height of their contents. To print a literal curly brace in LaTeX, put a backslash in front of it (\{ and \}) so that LaTeX realizes it is not a grouping symbol. For bracketing lines of ordinary text rather than math, word processors offer drawn shapes: in Word, click Insert > Shapes, select a left or right brace shape from the drop-down list, and drag a rectangle as tall as the lines of text you want to cover; OpenOffice users can do the equivalent with its Math component (see the Math User Guide at OpenOffice.org).
The natbib package is a reimplementation of the LaTeX \cite command that works with both author-year and numerical citations; if a key is unknown or the bibliography has not been run, LaTeX reports something like: LaTeX Warning: Citation 'lamport94' on page 1 undefined on input line 21. One BibTeX gotcha: in some entry formats, everything between the first URL and the end of the line is treated as a comment, and of course that can swallow the closing brace of the entry. In the shell, curly braces perform brace expansion: echo {0..10} prints out the numbers from 0 to 10, and combined with touch this creates multiple empty files in one shot. For humans reading code, indentation is often a better visual cue to block structure than the brackets themselves, which is the argument behind indentation-based languages.
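A small bash sketch of that brace-expansion trick (the demo directory and file names are invented for illustration): the shell expands {1..3} before touch runs, so touch receives three separate file names.

```shell
# Brace expansion happens before the command executes, so
# touch sees: brace_demo/note1.txt brace_demo/note2.txt brace_demo/note3.txt
mkdir -p brace_demo
touch brace_demo/note{1..3}.txt
ls brace_demo
```

Running echo brace_demo/note{1..3}.txt first is a safe way to preview the expanded list without creating anything.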
To type the curly brackets {} you use the pinky, stretching a distance of one row up and one column to the right while holding the Shift key. Spreadsheets give braces yet another meaning: in Excel, curly brackets around a formula show that it was entered as an array formula, and in Google Sheets you can type curly brackets yourself to build arrays inline. Cell arrays in MATLAB use the curly bracket {} notation instead of the normal parentheses (). In mathematics, curly brackets denote sets: {3, 4, 5, 6} means a set including the numbers 3, 4, 5 and 6. Two LaTeX notes: prefer align to the older eqnarray for multi-line displays, and remember that new lines (\\) do not work in plain equation environments. Finally, for the Word question of putting a square or curly bracket over two or more words within a line of text, the drawn-shape approach above is the practical answer.
In C-family languages, braces ("curly braces") group statements and declarations; method bodies and constructor bodies are enclosed in braces. In LaTeX, the size of brackets and parentheses can be manually set, or resized dynamically: writing \left( and \right) around a tall expression, as in F = G \left( \frac{m_1 m_2}{r^2} \right), makes the parentheses stretch to fit the fraction. A command-line caveat: sed operates on a single line at a time, so matching an opening { against a closing } on a later line requires hold-buffer tricks rather than a simple substitution. On English keyboards, the open and close curly brackets share keys with the square brackets, close to the Enter key.
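A compilable sketch of that dynamic sizing, contrasting fixed and stretched parentheses around the gravitation formula from the text (the side-by-side comparison is added for illustration):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Fixed-size parentheses: too small for the fraction.
\[ F = G ( \frac{m_1 m_2}{r^2} ) \]
% \left( and \right) stretch to the height of their contents.
\[ F = G \left( \frac{m_1 m_2}{r^2} \right) \]
\end{document}
```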
Remember that an if without braces governs only the single statement that follows it, even when that statement (a loop, say) spans multiple source lines; the curly braces are what turn several statements into one block. Editors can complete braces for you: a bracket-completion feature automatically adds a closing bracket when you insert the opening one. If you need exotic delimiters, there are complete lists of Unicode brackets and quotation marks, grouped by style, covering Unicode 11 (released 2018-06).
Large brackets around an array of numbers, e.g., indicate a matrix. In CSS, an open curly brace { marks the start of a declaration block, and between the braces lie semicolon-separated style declarations. A known quirk of LaTeX's eqnarray is its right, center, left column alignment, which shifts where the second and third lines of a split equation begin; this is another reason to prefer align.
This should get you started, and you can use on-line guides and books to build more sophisticated constructions. A horizontal brace under part of a formula is added with the command \underbrace{text}, or, with an annotation set beneath the brace, \underbrace{text}_{text}. In display math, lines are separated by the \\ command. When a \left...\right pair has to span a line break, end the first line with an invisible \right. and begin the second with an invisible \left., so that each line's delimiters are balanced and the bracket size for the second line is set correctly.
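Two compilable sketches of those commands; the annotated sum and the split polynomial are invented examples, not from the text above.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% \underbrace with a label set beneath the brace.
\[
  \underbrace{1 + 1 + \cdots + 1}_{n\ \text{terms}} = n
\]
% A \left( ... \right) pair split over two lines of a multline:
% \right. and \left. are the invisible partners that keep each
% line balanced and size the delimiters correctly.
\begin{multline}
  f(x) = \left( a_0 + a_1 x + a_2 x^2 \right. \\
  \left. {} + a_3 x^3 + a_4 x^4 \right)
\end{multline}
\end{document}
```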
In in-line math mode, we enclose the code in dollar signs and it displays in line with our text; display math gets its own centered paragraph. The LaTeX array environment has an argument, in this case ccc, that determines how the entries in each column are aligned (c for centered, with l and r for left and right). In Excel, the curly brackets around an array formula show up when you enter it by hitting Ctrl+Shift+Enter at the same time, instead of just Enter; typing the braces yourself does not create an array formula.
For placing numerators over denominators, with or without dividing lines, LaTeX has constructions corresponding to the \frac, atop and choose commands. (For matrices, see "How to create matrices in LaTeX", the 16th video in a series of 21 by Dr Vincent Knight of Cardiff University.) Since Vim 6, searches can span multiple lines: /hello\_sworld finds "hello world" in a single line, and also finds "hello" ending one line with "world" starting the next, because \_s matches whitespace including a newline. In JSON, an array begins with an opening bracket [ and ends with a closing one ]. Enclosing LaTeX code in double dollar signs displays the expression in a centered paragraph. Two editing conventions worth adopting: in the one-true-brace style an opening curly brace never goes on its own line while a closing curly brace always does, and the most reliable way to type balanced delimiters is a key macro that inserts both the left and right brackets and places the cursor between them.
Another thing to notice is the effect of the \displaystyle command: it forces LaTeX to give an equation the full height it needs to display as if it were on its own line. To add single-sided brackets, pair the visible delimiter with an invisible partner: \left. opens nothing and \right. closes nothing, so \left. ... \right\} produces a lone right brace. In editors, braces also drive code folding: in CSS, Less, Sass, SCSS, and JS files, you can collapse code based on curly brackets. Shared preamble material can be pulled into each document with the \input command.
Alternately, use (\citealtp{ref1}; \citealtp{ref2}; \citealtp{ref3}) wherever you are citing multiple references at once. The matrix environments come from amsmath: after \usepackage{amsmath} you have \begin{matrix} (no delimiters), \begin{pmatrix} (parentheses), \begin{bmatrix} (square brackets), \begin{Bmatrix} (curly braces), \begin{vmatrix} (single vertical bars, used to denote a determinant), and \begin{Vmatrix} (double bars). In the matrices video mentioned above, Vince shows how to quickly write out matrices using pmatrix, matrix, and vmatrix; note that pmatrix gives round brackets (parentheses), not curly ones. On the formatter side, phpfmt's ReindentSwitchBlocks reindents the content of switch blocks one level deeper, and PSR2MultilineFunctionParams breaks function parameters into multiple lines.
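A compilable sketch of the delimiter variants (the 2x2 entries are arbitrary):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \quad % parentheses
  \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad % square brackets
  \begin{Bmatrix} 1 & 2 \\ 3 & 4 \end{Bmatrix} \quad % curly braces
  \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix}        % determinant bars
\]
\end{document}
```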
To type the curly brackets {} you use pinky, stretching a distance of 1 row above and 1 column to right, and hold down Shift key. , indicate a matrix. bst, as well as with those for harvard, apalike. For placing numerators over denominators, with or without dividing lines, corresponding to LaTeX's frac, atop and choose commands. In this video, Vince shows how to quickly write out matrices in LaTeX, using the amsmath package and the \pmatrix (for a matrix with curly brackets), \matrix (for a matrix with no brackets), and \vmatrix (used to denote the determinant of a matrix) commands. To get the same sized brakets again. Warning SA1500 : CSharp. bst, as well as with those for harvard, apalike, chicago, astron, authordate. Some Tips and Tricks for Using LaTeX in Math Theses by Rob Benedetto How to Use the les samplethesis. Angle Brackets: Angle brackets in LaTeX are not the same as the inequality symbols: the angle bracket characters are '〈' and '〉', not '' and '>'. Always indent the code inside curly braces. I like this approach, as it allows to extend redmine in a very flexible way. Microsoft calls "C#" "C. Basic Code Special keys. "Curly bracket" is very uncommon in programming books. For documentation describing the tree-dvips macros, just type latex tree-manual and then print tree-manual. harveynorman. I've also tried \bigg\{but I'm sure there's a proper way of doing it. edited Nov 3 '16 at 17:27. Both headers and footers can contain more than one line. On English keyboards, the open bracket and close bracket are on the same key as the square bracket keys close to the Enter key. Please note that when generating titlepage template stylesheets you have to pass FO or XHTML namespace inside ns parameter. Segletes Nov 3 '16 at 17:28. It supports files like JavaScript, HTML, JSON, XML, CSS, PNG, and JPG. Expressions are grouped together using curly brackets. Reorganize output file descriptors for the above. 
The way the curly bracket is facing indicates the direction of the hug. @Alexander I agree: inline for inline, curly braces for indenting/multiple lines - ke4ukz May 2 '18 at 16:42. xsl Allow selection by role for multiple imageobject elements within an imageobjectco, which since Docbook 5 allows multiple imageobjects. Release Notes for the DocBook XSL Stylesheets Revision: 9401 Date: 2012-06-04 21:47:26 +0000 (Mon, 04 Jun 2012) 2012-12-18 This release-notes document is. [ ] Brackets ("square brackets"). Microsoft call this character "sharp" as with C#, J# (but it is not the musical SHARP ♯ which has vertical lines and oblique horizontal lines. Brackets and Norms. pdf with brace expansion alone, you would have to use this "hack":. The markup between the dollar signs is standard LaTeX math typesetting. Cell arrays in Matlab use the curly bracket {} notation instead of the normal parentheses (). In 2003 choose View Toolbars and Drawing and use the same bracket. You can easily do this by typing everything except the curly brackets and then selecting the text enclosed by each pair of curly brackets and pressing Ctrl+ F9. Please note that when generating titlepage template stylesheets you have to pass FO or XHTML namespace inside ns parameter. =MIN (IF (A1:A10,A1:A10)) which, when entered by hitting CONTROL+SHIFT+ENTER at the same time, will appear on the Formula Bar as. On English keyboards, the open bracket and close bracket are on the same key as the square bracket keys close to the Enter key. It's easiest to describe this with an example. It could be in long form (with full bibliographical information) or it could appear as given by the ‘citet’ or ‘ci-tep’ commands. Always indent the code inside curly braces. Export (png, jpg, gif, svg, pdf) and save & share. Mathematics printing–Computer programs. Swiping horizontally again dismisses it. 
I recall some months ago being able to add a curly bracket symbol and being able to span the symbol over multiple cells for purposes of adding an inclusive notation. ChkTeX - LaTeX semantic checker. Using Substitution. I'm running this command in a bash shell on Ubuntu 12. LaTeX Warning: Citation lamport94' on page 1 undefined on input line 21. Learn more. The way the curly bracket is facing indicates the direction of the hug. The equation should look like:. Easy and consistent with LaTeX syntax. Here are few examples to write quickly matrices. This means that PHP only supports a 256-character set, and hence does not offer native Unicode support. A matching pair of brackets is not balanced if the set of. You get the angle brackets using the following commands in math mode: \langle and \rangle. The following list is largely limited to non-alphanumeric characters. Square brackets are used in the next higher level grouping, and braces are used in the most outer groupings (see " Nested expressions " for an example). or right mouse context menu Format Document. There are two types of this "math mode": In-line Math Mode. Used inside brackets to end rows. 5 silver badges. ò Click on the Typeset button. Then draw a brace shape to bracket over the lines you need. Notice that the second and third line have an extra space provided by the empty curly bracket pair. I don't know if that was on the receiving end or just the replying end, but If it didn't come through for you, be sure to go into the Community directly to get it. Microsoft calls "C#" "C. curly bracket spanning three lines: Here is a list: (A) the first item } (B) the second item } these are all items (C) the third item } Those three curly brackets should be one enormous bracket, though, covering all three lines, with the middle pointy bit aiming at the words on the right. Visual Studio Code is free and available on your favorite platform - Linux, macOS, and Windows. eqnarray vs. Matrices with side curly brackets. 
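Several fragments above mention matrix bracket styles ("bmatrix Latex matrix pmatrix vmatrix", "Matrices with side curly brackets"). As a minimal sketch, not taken from any of the quoted sources, the amsmath environments produce each bracket style:

```latex
\documentclass{article}
\usepackage{amsmath} % provides pmatrix, bmatrix, Bmatrix, vmatrix
\begin{document}
% Parentheses, square brackets, curly braces, and determinant bars:
\[
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \quad
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad
\begin{Bmatrix} 1 & 2 \\ 3 & 4 \end{Bmatrix} \quad
\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix}
\]
\end{document}
```

The capital-B `Bmatrix` form gives the side curly brackets, and `vmatrix` gives the vertical bars conventionally used for determinants.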
In a search, \\s finds space or tab, while \\_s finds newline or space or tab: an underscore. Stephanie Geer,. How to write multi lined equations using the align environment This is the 17th video in a series of 21 by Dr Vincent Knight of Cardiff University. This can be usefull for showing "combining like terms," because a number can also be placed under the underbrace. Between these braces lie semi-colon separated style declarations. Value: This is the value of the property. The resulting brace looks more appealing, but it may also not scale up in the way you wish. You may have seen the expression [ sic] used in a. Mathematics printing–Computer programs. We will see how to create multiple files using this command in one shot. \left and \right can dynamically adjust the size, as shown by the next example:. This is one the shorter recipes, Recipe 7. You can add hover text by placing the text in quotes after the link. Below is an overview of a computer keyboard with the open curly bracket and close curly bracket keys highlighted in blue. Tip: You can jump to the matching bracket with ⇧⌘\ (Windows, Linux Ctrl+Shift+\) Reference information. (For curly braces, you need to put a backslash in front of the braces so that LaTeX realizes they are not LaTeX grouping symbols. ) spaces_around_operators: true, false, hybrid: Denotes whether spaces should be present around arithmetic and boolean operators: with infix operators and optional. It can contain keywords, operators, variables, constants, and expressions. This template is under construction. This is accomplished by placing the relevance expression in curly braces. A bracket is considered to be any one of the following characters: (, ), {, }, [, or ]. The {{curly bracket}} template ({} for short) makes large left or right curly brackets in HTML+CSS or L a T e X and is used by the {{(||)}} template to [curly] bracket expressions in HTML+CSS or L a T e X. 
You can graph a single line by entering an expression like y = 2x + 3. Dictation uses speech recognition, which is built into Windows 10, so there's nothing you need to download and install to use it. 6 posts • Page 1 of 1. Some of these symbols have multiple meanings, depending on the context. For multi-statement lines, the comma can be replaced by a semicolon to suppress printing. For example, typing \sqrt {x} = 5 gives us. Typically deployed in symmetric pairs, an individual bracket may be identified as a left or right bracket or, alternatively, an opening paired bracket or closing paired bracket, respectively, depending on the directionality of the context. Line specific spacing is possible, just state the space in square brackets after the linebreak, for instance:. Parenthesis can be used to split commands across multiple lines. The curly brackets (in this case) tell us that you've told Excel to treat the function as an arrayed function. LaTeX Code-Snippet for math-mode,line-breaking,align,multline. xyz (Exception e) { } note: there can be one or more newlines between the curly braces. LyX's native IPA support via the dedicated IPA inset cures these issues. Curly braces. to me the left curly brace after an if block seems visually redundant. I have tried to insert a symbol and rasterize it and just drag handles, but small and large brackets end up looking drastically different. Curly numbers are structured according to the following rules: Adding a semicolon adds one to the number. Verb I wouldn't exactly bracket your paintings with those of Michelangelo and Leonardo da Vinci. This package also supports subdivided bibliographies, multiple bibliographies within one document, and separate lists of bibliographic information such as abbreviations of various fields. All the versions of this article: < français > Here are few examples to write quickly matrices. 
Braces Bracket {Curly} Square Bracket [] Summary of PowerShell Brackets ♣ 1) Parenthesis Bracket When a PowerShell construction requires multiple sets of brackets, the parenthesis (style) usually comes first. Some Tips and Tricks for Using LaTeX in Math Theses by Rob Benedetto How to Use the les samplethesis. The open brace { signals the beginning of the escape sequence, and the closed brace } indicates the end of the sequence. For example if you forget to close a curly brace which encloses, say, italics, LaTeX won't report this until something else occurs which can't happen until the curly brace is encountered (e. Use dictation to convert spoken words into text anywhere on your PC with Windows 10. If no next part of a multi-block statement present, brace must be alone on line. Question Categories. Likewise, subscripts are denoted by an underline (_). vue/html-closing-bracket-newline: require or disallow a line break before tag's closing brackets 🔧 vue/html-closing-bracket-spacing: require or disallow a space before tag's closing brackets 🔧 vue/html-end-tags: enforce end tag style 🔧 vue/html-indent: enforce consistent indentation in 🔧 vue/html-quotes: enforce quotes. Input your desired values and click the expression or the graph to complete the adjustment. To set the bracket size for the second line correctly, the first line is ended with \right. Collabtive 3. And today, I want to show that by using same logic we can create a COUNT OR formula. Flag letters may appear within the brackets also, to indicate the status of the attribute; if the flag letters are in parentheses, they indicate flags for the global definition. Braces are special characters and so the opening brace needs to be escaped with a backslash. Three important—and related—symbols you'll see often in math are parentheses, brackets, and braces, which you'll encounter frequently in prealgebra and algebra. 
Some of the most useful for scientists are: _{} the text in between the brackets is a subscript, e. Description: The @ symbol forms a handle to either the named function that follows the @ sign, or to the anonymous function that follows the @ sign. Citations – Classes – Ctan – Deprecated – Figures – Footnotes – Hyphenation – Index – Labels – Latex – Latex3 – Layout – Lists – Luatex – Macros – Math – Metafont – Pdftex – References – Structure – Tables – Toc – Unicode – Xetex. That's why you don't need curly braces for your first if. pathreplacing,calc} \begin{document. Installation. , (3 + 2) x (10-3) = 35) but not for equations that have greater vertical size, such as those using fractions. The way the curly bracket is facing indicates the direction of the hug. Note that brackets are the default for confidence intervals. Each value determines the line style of the corresponding partition line (0 for none, 1 for solid, 2 for dashed, or 3 for dotted). (The symbol is also commonly used. % %===== % % Typeset using LaTeX with the AMS-LaTeX 1. 3: SA1500: If a statement spans multiple lines, the closing curly bracket must be placed on its own line. Here's how to document command-line commands and their arguments. Press one of the alt or option keys and type 29FD to produce right pointing curved angle bracket like ⧽. The mathematics is done using a version of $$\LaTeX$$, the premiere mathematics typesetting program. Code Comment: Comment or uncomment blocks of code. More help. The terminology used by Artistic Style, followed by other common names, is: braces or curly braces { } ‑ also called brackets, or curly brackets. The most familiar of these unusual symbols is probably the ( ), called parentheses. One such problem stems from strict comparisons (the comparison of booleans as integers). 
The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example: \[ F = G \ left ( \ frac { m_1 m_2}{r^2 } \ right ) \ ] Notice that to insert the parentheses or brackets, the \left and \right commands are used. For example, your foreach loop is considered as one statement, even if the code inside your loop is on multiple lines. I have problems understanding when to use curly braces on If Else statements. Draw border. Then draw a brace shape to bracket over the lines you need. Similarly do companies typically adopt their own, company-wide conventions for style. In the beginning of your career when learning ReactJS and ES6 Javascript syntax, it can be confusing when to use curly braces { } and when to use parenthesis ( ). The visitor's LaTeX, entered or copied into the editing window below, will be quickly rendered by up to three renderers (in different ways). This can make code more readable. For example, this example runs a program without knowing where it is located. Two brackets are considered to be a matched pair if the an opening bracket (i. R Markdown allows you to mix text, R code, R output, R graphics, and mathematics in a single document. Using curly brackets in mathematical expressions. Double line scalable: left ldline d right rdline: Brace scalable: left lbrace e right rbrace: Angle bracket scalable: left langle f right rangle: Operator brackets scalable: left langle g mline h right rangle: Over brace scalable {The brace is above} overbrace a: Under brace scalable {the brace is below}underbrace {f} Floor Brackets: lfloor a. In this case, I simply got lucky because on each line before the + sign, there is only a 1 and an equals sign. He earned enough to put him in a higher tax bracket. Displaying a summation formula complete with sigma and the upper and lower limit involves using LaTeX with the sum function. 
The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example: \ [ F = G \left( \frac{m_1 m_2} {r^2} \right) Notice that to insert the parentheses or brackets, the \left and \right commands are used. Overview The natbib package is a reimplementation of the LATEX \cite command, to work with both author{year and numerical citations. Line break after the opening brace. These curly brackets are entered manually. Here's a complete list of Unicode brackets and quotation marks, grouped by style, covering Unicode 11 (released in 2018-06). Back to Basics: JavaScript Object Syntax. Some symbols have a different meaning depending on the context and appear accordingly several times in the list. The brace expression itself may contain either a comma-separated list of strings, or a range of integers or single characters. Hello, Plese, can anybody help me with entering this kind of definition in equation editor in Microsoft Word 2013: I don't know how to enter this curly bracket and to allign it with the text after it. This command forces LaTeX to give an equation the full height it needs to display as if it were on its own line. A physical line is a sequence of characters terminated by an end-of-line sequence. Info: If you want create images with up to 800 dpi, you need to be a member of the L4t-community. More help. ie\/connected-health\/?cachetomax=true&layout=products_multicolumns","notifications":[],"text":". Click on it and then follows File > Import table > LaTeX. either of two symbols put around a word, phrase, or sentence in a piece of writing to show that…. Export (png, jpg, gif, svg, pdf) and save & share. Here we use the ampersand (&) command to ensure the equations always line up as desired. The word "curly" is rare enough (in programming) that "curly bracket" can be shortened to "curly". If multiple citations are given, ## then split them using the comma. 
Commands can use wildcards to perform actions on more than one file at a time, or to find part of a phrase in a text file. bmatrix Latex matrix pmatrix vmatrix. To adjust the limits and interval of your slider, click either of the values at the ends of the slider bar. {=MIN (IF (A1:A10,A1:A10))} The only time you enter these curly brackets yourself is when. Documents like “ Obsolete packages and commands ” (“l2tabu”) address the need of up-to-date information. bst, as well as with those for harvard, apalike. This should get you started, and you can use on-line guides and books in the library to help you make more sophisticated ones. Another thing to notice is the effect of the \displaystyle command. xyz (Exception e) { } note: there can be one or more newlines between the curly braces. \classes\com\example\graphics\Rectangle. If you have an object that is within big brackets and it needs to span two lines, e.
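A recurring question in the snippets above is how a single curly bracket can span several lines (for example, to brace a three-item list). A minimal LaTeX sketch of the two standard approaches, my own example assuming the amsmath package:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A single left brace spanning multiple lines; \right. closes the
% \left\{ invisibly, so only the left brace is printed.
\[
f(x) = \left\{
  \begin{array}{ll}
    x^2 & x \ge 0 \\
    -x  & x < 0
  \end{array}
\right.
\]
% The cases environment does the same with less markup:
\[
f(x) =
\begin{cases}
  x^2 & x \ge 0 \\
  -x  & x < 0
\end{cases}
\]
\end{document}
```

Both forms size the brace automatically to cover however many rows the body contains.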
http://www.ck12.org/book/Probability-and-Statistics---Advanced-%2528Second-Edition%2529/r1/section/3.1/
CK-12 Probability and Statistics - Advanced (Second Edition)

# 3.1: Events, Sample Spaces, and Probability

Difficulty Level: At Grade. Created by: CK-12.

## Learning Objectives

- Know basic statistical terminology.
- List simple events and sample spaces.
- Know the basic rules of probability.

## Introduction

The concept of probability plays an important role in our daily lives. Assume you have an opportunity to invest some money in a software company. Suppose you know that the company’s records indicate that in the past five years, its profits have been consistently decreasing. Would you still invest your money in it? Do you think the chances are good for the company in the future?

Here is another illustration. Suppose that you are playing a game that involves tossing a single die. Assume that you have already tossed it 10 times, and every time the outcome was the same, a 2. What is your prediction of the eleventh toss? Would you be willing to bet $100 that you will not get a 2 on the next toss? Do you think the die is loaded?

Notice that the decision concerning a successful investment in the software company and the decision of whether or not to bet $100 on the next outcome of the die are both based on probabilities of certain sample results. Namely, the software company’s profits have been declining for the past five years, and the outcome of rolling a 2 ten times in a row seems strange. From these sample results, we might conclude that we are not going to invest our money in the software company or bet on this die. In this lesson, you will learn mathematical ideas and tools that can help you understand such situations.

### Events, Sample Spaces, and Probability

An event is something that occurs, or happens. For example, flipping a coin is an event, and so is walking in the park and passing by a bench.
Anything that could possibly happen is an event. Every event has one or more possible outcomes. While tossing a coin is an event, getting tails is the outcome of that event. Likewise, while walking in the park is an event, finding your friend sitting on the bench is an outcome of that event.

Suppose a coin is tossed once. There are two possible outcomes, either heads, $H$, or tails, $T$. Notice that if the experiment is conducted only once, you will observe only one of the two possible outcomes. An experiment is the process of taking a measurement or making an observation. These individual outcomes for an experiment are each called simple events.

Example: A die has six possible outcomes: 1, 2, 3, 4, 5, or 6. When we toss it once, only one of the six outcomes of this experiment will occur. The one that does occur is called a simple event.

Example: Suppose that two pennies are tossed simultaneously. We could have both pennies land heads up (which we write as $HH$), or the first penny could land heads up and the second one tails up (which we write as $HT$), etc. We will see that there are four possible outcomes for each toss, which are $HH, HT, TH$, and $TT$. The table below shows all the possible outcomes.

|     | $H$  | $T$  |
|-----|------|------|
| $H$ | $HH$ | $HT$ |
| $T$ | $TH$ | $TT$ |

Figure: The possible outcomes of flipping two coins.

What we have accomplished so far is a listing of all the possible simple events of an experiment. This collection is called the sample space of the experiment. The sample space is the set of all possible outcomes of an experiment, or the collection of all the possible simple events of an experiment. We will denote a sample space by $S$.

Example: We want to determine the sample space of throwing a die and the sample space of tossing a coin.

Solution: As we know, there are 6 possible outcomes for throwing a die.
We may get 1, 2, 3, 4, 5, or 6, so we write the sample space as the set of all possible outcomes:

$S = \left \{1, 2, 3, 4, 5, 6 \right \}$

Similarly, the sample space of tossing a coin is either heads, $H$, or tails, $T$, so we write $S=\left \{H,T\right \}$.

Example: Suppose a box contains three balls, one red, one blue, and one white. One ball is selected, its color is observed, and then the ball is placed back in the box. The balls are scrambled, and again, a ball is selected and its color is observed. What is the sample space of the experiment?

It is probably best if we draw a tree diagram to illustrate all the possible selections. As you can see from the tree diagram, it is possible that you will get the red ball, $R$, on the first drawing and then another red one on the second, $RR$. You can also get a red one on the first and a blue on the second, and so on. From the tree diagram above, we can see that the sample space is as follows:

$S = \left \{RR, RB, RW, BR, BB, BW, WR, WB, WW \right \}$

Each pair in the set above gives the first and second drawings, respectively. That is, $RW$ is different from $WR$. We can also represent all the possible drawings by a table or a matrix:

|     | $R$  | $B$  | $W$  |
|-----|------|------|------|
| $R$ | $RR$ | $RB$ | $RW$ |
| $B$ | $BR$ | $BB$ | $BW$ |
| $W$ | $WR$ | $WB$ | $WW$ |

Figure: Table representing the possible outcomes diagrammed in the previous figure. The first column represents the first drawing, and the first row represents the second drawing.

Example: Consider the same experiment as in the last example. This time we will draw one ball and record its color, but we will not place it back into the box. We will then select another ball from the box and record its color. What is the sample space in this case?

Solution: The tree diagram below illustrates this case: You can clearly see that when we draw, say, a red ball, the blue and white balls will remain. So on the second selection, we will either get a blue or a white ball.
The sample space in this case is as shown:

$S= \left \{RB, RW, BR, BW, WR, WB\right \}$

Now let us return to the concept of probability and relate it to the concepts of sample space and simple events. If you toss a fair coin, the chance of getting tails, $T$, is the same as the chance of getting heads, $H$. Thus, we say that the probability of observing heads is 0.5, and the probability of observing tails is also 0.5. The probability, $P$, of an outcome, $A$, always falls somewhere between two extremes: 0, which means the outcome is an impossible event, and 1, which means the outcome is guaranteed to happen. Most outcomes have probabilities somewhere in-between.

Property 1: $0 \le P(A) \le 1$, for any event, $A$. The probability of an event, $A$, ranges from 0 (impossible) to 1 (certain).

In addition, the probabilities of all possible simple outcomes of an event must add up to 1. This 1 represents certainty that one of the outcomes must happen. For example, tossing a coin will produce either heads or tails. Each of these two outcomes has a probability of 0.5. This means that the total probability of the coin landing either heads or tails is $0.5 + 0.5 = 1$. That is, we know that if we toss a coin, we are certain to get heads or tails.

Property 2: $\sum P(A)=1$ when summed over all possible simple outcomes. The sum of the probabilities of all possible outcomes must add up to 1.

Notice that tossing a coin or throwing a die results in outcomes that are all equally probable. That is, each outcome has the same probability as all the other outcomes in the same sample space. Getting heads or tails when tossing a coin produces an equal probability for each outcome, 0.5. Throwing a die has 6 possible outcomes, each also having the same probability, $\frac{1}{6}$. We refer to this kind of probability as classical probability.
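The small sample spaces above can be enumerated programmatically, which also makes Property 2 easy to check. A minimal Python sketch (my own illustration, not part of the CK-12 text):

```python
from itertools import product, permutations

# Sample space for tossing two coins: order matters and repeats are allowed.
two_coins = ["".join(p) for p in product("HT", repeat=2)]
print(two_coins)  # ['HH', 'HT', 'TH', 'TT']

# Sample space for drawing two of the three balls (R, B, W) without
# replacement, as in the second tree-diagram example: no repeats,
# but RB is still different from BR.
two_balls = ["".join(p) for p in permutations("RBW", 2)]
print(two_balls)  # ['RB', 'RW', 'BR', 'BW', 'WR', 'WB']

# Property 2: with equally likely outcomes, each gets probability
# 1/len(S), and the probabilities sum to 1.
probabilities = {outcome: 1 / len(two_coins) for outcome in two_coins}
print(sum(probabilities.values()))  # 1.0
```

Swapping `product` for `permutations` is exactly the with-replacement versus without-replacement distinction drawn in the two ball-drawing examples.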
Classical probability is defined to be the ratio of the number of cases favorable to an event to the number of all outcomes possible, where each of the outcomes is equally likely. Probability is usually denoted by $P$, and the respective elements of the sample space (the outcomes) are denoted by $A, B, C,$ etc. The mathematical notation that indicates the probability that an outcome, $A$, happens is $P(A)$. We use the following formula to calculate the probability of an outcome occurring:

$P(A)=\frac{\text{The number of outcomes for} \ A \ \text{to occur}}{\text{The size of the sample space}}$

Example: When tossing two coins, what is the probability of getting a head on both coins, $HH$? Is the probability classical?

Since there are 4 elements (outcomes) in the sample space set, $\left \{HH, HT, TH, TT\right \}$, its size is 4. Furthermore, there is only 1 $HH$ outcome that can occur. Therefore, using the formula above, we can calculate the probability as shown:

$P(A)=\frac{\text{The number of outcomes for} \ HH \ \text{to occur}}{\text{The size of the sample space}}=\frac{1}{4}=25\%$

Notice that each of the 4 possible outcomes is equally likely. The probability of each is 0.25. Also notice that the total probability of all possible outcomes in the sample space is 1.

Example: What is the probability of throwing a die and getting $A = 2, 3, \ \text{or} \ 4$?

There are 6 possible outcomes when you toss a die. Thus, the total number of outcomes in the sample space is 6. The event we are interested in is getting a 2, 3, or 4, and there are three ways for this event to occur.

$P(A)=\frac{\text{The number of outcomes for 2, 3, or 4 to occur}}{\text{The size of the sample space}}=\frac{3}{6}=\frac{1}{2}=50\%$

Therefore, there is a probability of 0.5 that we will get 2, 3, or 4.

Example: Consider tossing two coins. Assume the coins are not balanced.
The design of the coins is such that they produce the probabilities shown in the table below:

| Outcome | Probability   |
|---------|---------------|
| $HH$    | $\frac{4}{9}$ |
| $HT$    | $\frac{2}{9}$ |
| $TH$    | $\frac{2}{9}$ |
| $TT$    | $\frac{1}{9}$ |

Figure: Probability table for flipping two weighted coins.

What is the probability of observing exactly one head, and what is the probability of observing at least one head? Notice that the simple events $HT$ and $TH$ each contain only one head. Thus, we can easily calculate the probability of observing exactly one head by simply adding the probabilities of the two simple events:

$P = P(HT) + P(TH) = \frac{2}{9} + \frac{2}{9} = \frac{4}{9}$

Similarly, the probability of observing at least one head is:

$P = P(HH) + P(HT) + P(TH) = \frac{4}{9} + \frac{2}{9} + \frac{2}{9} = \frac{8}{9}$

## Lesson Summary

- An event is something that occurs, or happens, with one or more possible outcomes.
- An experiment is the process of taking a measurement or making an observation.
- A simple event is the simplest outcome of an experiment.
- The sample space is the set of all possible outcomes of an experiment, typically denoted by $S$.

For a description of how to find an event given a sample space (1.0), see teachertubemath, Probability Events (2:23).

## Review Questions

1. Consider an experiment composed of throwing a die followed by throwing a coin.
    1. List the simple events and assign a probability for each simple event.
    2. What are the probabilities of observing the following events? (i) A $2$ on the die and $H$ on the coin; (ii) an even number on the die and $T$ on the coin; (iii) an even number on the die; (iv) $T$ on the coin.
2. The Venn diagram below shows an experiment with six simple events. Events $A$ and $B$ are also shown. The probabilities of the simple events are $P(1) = P(2) = P(4) = \frac{2}{9}$ and $P(3) = P(5) = P(6) = \frac{1}{9}$.
    1. Find $P(A)$.
    2. Find $P(B)$.
3. A box contains two blue marbles and three red ones. Two marbles are drawn randomly without replacement.
    Refer to the blue marbles as $B1$ and $B2$ and the red ones as $R1$, $R2$, and $R3$.
    1. List the outcomes in the sample space.
    2. Determine the probability of observing each of the following events: (i) drawing 2 blue marbles; (ii) drawing 1 red marble and 1 blue marble; (iii) drawing 2 red marbles.

Feb 23, 2012 · Dec 15, 2014
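The classical-probability formula used throughout this lesson (favorable outcomes divided by the size of an equally likely sample space) can be turned into a short function. A minimal Python sketch; the function name is my own, not CK-12's:

```python
from fractions import Fraction

def classical_probability(event, sample_space):
    # P(A) = (number of outcomes favorable to A) / (size of the sample
    # space); valid only when all outcomes are equally likely.
    favorable = [outcome for outcome in sample_space if outcome in event]
    return Fraction(len(favorable), len(sample_space))

die = [1, 2, 3, 4, 5, 6]
# The worked example from the lesson: probability of throwing a 2, 3, or 4.
print(classical_probability({2, 3, 4}, die))  # 1/2
```

Using `Fraction` keeps the answers exact, matching the $\frac{3}{6}=\frac{1}{2}$ arithmetic in the text instead of a rounded decimal.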
https://www.techwhiff.com/issue/all-rocks-are-made-of-a-minerals-b-something-that-was--171459
# All rocks are made of: a. minerals b. something that was once living c. sand

### Who is the central character introduced in the beginning of the myth? What major conflict is introduced in the beginning? How is the major conflict developed in the middle? How is the conflict resolved in the end?

### If a compound sentence is joined using a conjunction, what punctuation mark is used with the conjunction to join the clauses? A period B semicolon C comma D dash

### The following dot plot shows the scores of 25 people who played in an online trivia game. Which of the following statements is the best description of the distribution of scores?

### GIVING OUT BRAINLIEST

### Is -1/3 - 4/5 positive, negative, or zero?

### If a figure is rotated around a center point and it still appears exactly as it did before the rotation, it is said to have A. reflection B. Translation C. Rotation D. None of these

### The order of slides can be changed here.

### Han spent 75 minutes practicing the piano over the weekend. Tyler practiced the clarinet for 64% as much time as Han practiced the piano. How long did he practice?

### A good example of central planning at work in the U.S. would be: Select one: a. Burger King's value meal price control. b. McDonald's fries being the same everywhere in the U.S. c. union wages. d. New York City's rent control.

### 5 = w/2.2 (This means w is divided by 2.2.)

### Spain came to the new world for all the following reasons except

### E. Fill in the blank with the appropriate vocabulary term. (5pts) 28. Nosotros vivíamos en un ___. A. pared B. espejo C. apartamento D. estufa 29. ¿Sacabas la ___ cuando eras niño? A. sótano B. tocador C. cortina D. basura 30. Los niños durmieron en la ___ anoche. A. mesa B. techo C. cama D. jardín

### Write the expression as a single natural logarithm. 2 ln 8 + 2 ln y

### Y ³ ——— 10y + 4 i got 6/24 or just 4 is this correct
https://paperswithcode.com/paper/lymph-node-graph-neural-networks-for-cancer
# Lymph Node Graph Neural Networks for Cancer Metastasis Prediction

3 Jun 2021

Predicting outcomes, such as survival or metastasis, for individual cancer patients is a crucial component of precision oncology. Machine learning (ML) offers a promising way to exploit rich multi-modal data, including clinical information and imaging, to learn predictors of disease trajectory and help inform clinical decision making. In this paper, we present a novel graph-based approach to incorporate imaging characteristics of existing cancer spread to local lymph nodes (LNs) as well as their connectivity patterns in a prognostic ML model. We trained an edge-gated Graph Convolutional Network (Gated-GCN) to accurately predict the risk of distant metastasis (DM) by propagating information across the LN graph with the aid of a soft edge attention mechanism. In a cohort of 1570 head and neck cancer patients, the Gated-GCN achieves AUROC of 0.757 for 2-year DM classification and $C$-index of 0.725 for lifetime DM risk prediction, outperforming current prognostic factors as well as previous approaches based on aggregated LN features. We also explored the importance of graph structure and individual lymph nodes through ablation experiments and interpretability studies, highlighting the importance of considering individual LN characteristics as well as the relationships between regions of cancer spread.
https://mathematica.stackexchange.com/questions/135691/how-to-improve-resolution-of-plot-with-underflow-and-overflow
# How to improve resolution of plot with underflow and overflow?

I am working with a function that has the form

```mathematica
Exp[-a^2] (Erfi[a] - Log[-1/a] - Log[a])/a
```

where a is a complex number. When I tried to plot it, my expression yields underflow and overflow. Here, you can see that my blue plot is oscillating because of this problem. I want to make a nice and clean plot! PlotPoints will make it a little better, but there still is a missing region. Any help? Here is a sample function (it might take a few seconds):

```mathematica
f[d_] := (1/(2 Sqrt[(-2562890621 - 112500 I) π])) (-((2 - 28125 I) +
      Sqrt[-2562890621 - 112500 I] + 4500 I Sqrt[70]) E^(-(1/16) ((-2 + 50629 I) +
        Sqrt[-2562890621 - 112500 I] - 4 d)^2) (π Erfi[1/4 ((-2 + 50629 I) +
          Sqrt[-2562890621 - 112500 I] - 4 d)] -
      Log[(-2 + 50629 I) + Sqrt[-2562890621 - 112500 I] - 4 d] -
      Log[1/((2 - 50629 I) - Sqrt[-2562890621 - 112500 I] + 4 d)]) +
    ((-2 + 28125 I) + Sqrt[-2562890621 - 112500 I] - 4500 I Sqrt[70]) E^(-(1/16) ((2 - 50629 I) +
        Sqrt[-2562890621 - 112500 I] + 4 d)^2) (π Erfi[1/4 ((2 - 50629 I) +
          Sqrt[-2562890621 - 112500 I] + 4 d)] +
      Log[(-2 + 50629 I) - Sqrt[-2562890621 - 112500 I] - 4 d] +
      Log[1/((2 - 50629 I) + Sqrt[-2562890621 - 112500 I] + 4 d)]))

Plot[Im[f[d]], {d, -10, 10}]
```

or (same thing, I just make it look nicer):

```mathematica
k = -2562890621 - 112500 I;
l = 2 - 28125 I;
o = 2 - 50629 I;
kk[d_] := 1/(2 Sqrt[k Pi])*
  (-(l + Sqrt[k] + 4500 I Sqrt[70]) E^(-(1/16) (-o + Sqrt[k] - 4 d)^2) (Pi Erfi[1/4 (-o + Sqrt[k] - 4 d)] -
       Log[-o + Sqrt[k] - 4 d] - Log[1/(o - Sqrt[k] + 4 d)]) +
   (l + Sqrt[k] - 4500 I Sqrt[70]) E^(-(1/16) (o + Sqrt[k] + 4 d)^2) (Pi Erfi[1/4 (o + Sqrt[k] + 4 d)] +
       Log[-o - Sqrt[k] - 4 d] + Log[1/(o + Sqrt[k] + 4 d)]))

Plot[Im[kk[d]], {d, -10, 10}]
```

• Evaluate these two values, {Im[f[3.]], Im[f[2.]]}, to see why some places have no line – Jason B. Jan 18 '17 at 16:47
• @JasonB. I think those results are due to numerical errors; he is aware of that problem (hence his reference to underflow / overflow), and he is asking for a solution for it.
Even increasing WorkingPrecision dramatically, however, does not seem to solve the problem. Saesun, I wonder if you can rewrite / Simplify / refactor your expression to a form that isn't affected by such huge errors. – MarcoB Jan 18 '17 at 16:51
• Thanks, I am trying to refactor my expression, I will let you know if it works! – Saesun Kim Jan 18 '17 at 17:08

```mathematica
With[{k = -2562890621 - 112500 I, l = 2 - 28125 I, o = 2 - 50629 I},
 Plot[With[{c = -l - 4500 Sqrt[-70], f = 2 Sqrt[π],
    sm = -o + Sqrt[k] - 4 d, sp = o + Sqrt[k] + 4 d},
   Im[((c - Sqrt[k]) (f DawsonF[sm/4] - Exp[-(sm/4)^2] (Log[sm] + Log[-1/sm])) +
       (c + Sqrt[k]) (f DawsonF[sp/4] + Exp[-(sp/4)^2] (Log[-sp] + Log[1/sp])))/(f Sqrt[k])]],
  {d, -10, 10}]]
```

# Notes

1. Be kind to yourself; identify common subexpressions and isolate them. You'll thank yourself later when you're debugging.
2. In applications, the imaginary error function $\operatorname{erfi}(z)$ is not the quantity of interest, but rather its exponentially-scaled version, Dawson's integral (built-in as DawsonF[]). This has the advantage of being bounded even for large values.

• Thank you for introducing me to Dawson's integral! – Saesun Kim Jan 18 '17 at 20:44

If all you need is a continuous Plot, you can interpolate the available Plot data.

```mathematica
plot1 = Plot[Im[f[d]], {d, -10, 10}];
points = Join @@ Cases[plot1, Line[pts__] :> pts, Infinity];
f2 = Interpolation[points];
{min, max} = MinMax[points[[All, 1]]];
Plot[f2[d], {d, min, max}]
```
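The boundedness claim behind the DawsonF rewrite can also be illustrated outside Mathematica. This stdlib-only Python sketch evaluates Dawson's integral $D(x)=e^{-x^2}\int_0^x e^{t^2}\,dt$ by naive midpoint quadrature (the step count is an arbitrary choice here) and shows that $D(x)$ stays small even where $\operatorname{erfi}(x)=\frac{2}{\sqrt{\pi}}e^{x^2}D(x)$ explodes:

```python
import math

def dawson(x, n=20000):
    # Dawson's integral D(x) = exp(-x^2) * integral_0^x exp(t^2) dt,
    # evaluated by simple midpoint quadrature (stdlib only, for illustration).
    h = x / n
    s = sum(math.exp(((i + 0.5) * h) ** 2) for i in range(n)) * h
    return math.exp(-x * x) * s

# erfi(x) = (2/sqrt(pi)) * exp(x^2) * D(x) grows like exp(x^2),
# while D(x) itself stays bounded (its maximum is ~0.541 near x ~ 0.92):
for x in (0.5, 0.92, 2.0, 4.0):
    d = dawson(x)
    erfi = 2.0 / math.sqrt(math.pi) * math.exp(x * x) * d
    print(f"x={x}: D(x)={d:.4f}, erfi(x)={erfi:.3e}")

assert all(dawson(x) < 0.55 for x in (0.5, 0.92, 2.0, 4.0))
```

This is exactly why working with the exponentially scaled function keeps intermediate values in a representable range.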
https://www.jobilize.com/physics12/section/the-alkynes-hydrocarbons-by-openstax?qcr=www.quizover.com
# 1.1 Hydrocarbons (Page 6/6)

## Naming the alkenes

Give the IUPAC name for each of the following alkenes:

1. CH${}_{2}$CHCH${}_{2}$CH${}_{2}$CH${}_{3}$
2. CH${}_{3}$CHCHCH${}_{3}$

## The properties of the alkenes

The properties of the alkenes are very similar to those of the alkanes, except that the alkenes are more reactive because they are unsaturated. As with the alkanes, compounds that have four or fewer carbon atoms are gases at room temperature, while those with five or more carbon atoms are liquids.

## Reactions of the alkenes

Alkenes can undergo addition reactions because they are unsaturated. They readily react with hydrogen, water and the halogens. The double bond is broken and a single, saturated bond is formed. A new group is then added to one or both of the carbon atoms that previously made up the double bond. The following are some examples:

1. ${H}_{2}C=C{H}_{2}+{H}_{2}\to {H}_{3}C-C{H}_{3}$ (a catalyst such as platinum is normally needed for these reactions) ( [link] )
2. $C{H}_{2}=C{H}_{2}+HBr\to C{H}_{3}-C{H}_{2}-Br$ ( [link] )
3. $C{H}_{2}=C{H}_{2}+{H}_{2}O\to C{H}_{3}-C{H}_{2}-OH$ ( [link] )

## The alkenes

1. Give the IUPAC name for each of the following organic compounds:
   1. CH${}_{3}$CHCH${}_{2}$
2. Refer to the data table below, which shows the melting point and boiling point for a number of different organic compounds.

   | Formula | Name | Melting point (${}^{0}$C) | Boiling point (${}^{0}$C) |
   |---|---|---|---|
   | C${}_{4}$H${}_{10}$ | Butane | -138 | -0.5 |
   | C${}_{5}$H${}_{12}$ | Pentane | -130 | 36 |
   | C${}_{6}$H${}_{14}$ | Hexane | -95 | 69 |
   | C${}_{4}$H${}_{8}$ | Butene | -185 | -6 |
   | C${}_{5}$H${}_{10}$ | Pentene | -138 | 30 |
   | C${}_{6}$H${}_{12}$ | Hexene | -140 | 63 |

   1. At room temperature (approx. 25${}^{0}$C), which of the organic compounds in the table are:
      1. gases
      2. liquids
   2. In the alkanes...
      1. Describe what happens to the melting point and boiling point as the number of carbon atoms in the compound increases.
      2. Explain why this is the case.
   3. If you look at an alkane and an alkene that have the same number of carbon atoms...
      1. How do their melting points and boiling points compare?
      2. Can you explain why their melting points and boiling points are different?
   4. Which of the compounds, hexane or hexene, is more reactive? Explain your answer.
3. The following reaction takes place: $C{H}_{3}CHC{H}_{2}+{H}_{2}\to C{H}_{3}C{H}_{2}C{H}_{3}$
   1. Give the name of the organic compound in the reactants.
   2. What is the name of the product?
   3. What type of reaction is this?
   4. Which compound in the reaction is a saturated hydrocarbon?

## The alkynes

In the alkynes, there is at least one triple bond between two of the carbon atoms. They are unsaturated compounds and are therefore highly reactive. Their general formula is C${}_{n}$H${}_{2n-2}$. The simplest alkyne is ethyne ( [link] ), also known as acetylene. Many of the alkynes are used to synthesise other chemical products.

## Interesting fact

The raw materials that are needed to make acetylene are calcium carbonate and coal. Acetylene can be produced through the following reactions:

$CaC{O}_{3}\to CaO+C{O}_{2}$

$CaO+3C\to Ca{C}_{2}+CO$

$Ca{C}_{2}+2{H}_{2}O\to Ca{\left(OH\right)}_{2}+{C}_{2}{H}_{2}$

An important use of acetylene is in oxyacetylene gas welding. The fuel gas burns with oxygen in a torch. An incredibly high heat is produced, and this is enough to melt metal.

## Naming the alkynes

The same rules will apply as for the alkanes and alkenes, except that the suffix of the name will now be -yne.

Give the IUPAC name for the following compound:

1. There is a triple bond between two of the carbon atoms, so this compound is an alkyne. The suffix will be -yne. The triple bond is at the second carbon, so the suffix will in fact be 2-yne.
2. If we count the carbons in a straight line, there are six. The prefix of the compound's name will be 'hex'.
3. In this example, you will need to number the carbons from right to left so that the triple bond is between carbon atoms with the lowest numbers.
4. There is a methyl (CH${}_{3}$) group attached to the fifth carbon (remember we have numbered the carbon atoms from right to left).
5. If we follow this order, the name of the compound is 5-methyl-hex-2-yne.

## The alkynes

Give the IUPAC name for each of the following organic compounds.

1. C${}_{2}$H${}_{2}$
2. CH${}_{3}$CH${}_{2}$CCH
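As a quick cross-check of the general formulas, a short script can generate members of each homologous series. Only the alkyne formula C${}_{n}$H${}_{2n-2}$ is stated explicitly above; the alkane (C${}_{n}$H${}_{2n+2}$) and alkene (C${}_{n}$H${}_{2n}$) formulas are standard and are what the melting/boiling-point table follows:

```python
def formula(n, family):
    """Molecular formula for carbon number n in the named homologous series.
    Alkyne C_n H_{2n-2} is from the text; alkane C_n H_{2n+2} and
    alkene C_n H_{2n} are the standard general formulas."""
    h = {"alkane": 2 * n + 2, "alkene": 2 * n, "alkyne": 2 * n - 2}[family]
    return f"C{n}H{h}"

# Cross-check against the table entries:
assert formula(4, "alkane") == "C4H10"   # butane
assert formula(6, "alkane") == "C6H14"   # hexane
assert formula(5, "alkene") == "C5H10"   # pentene
print(formula(2, "alkyne"))  # C2H2
```

The last line reproduces ethyne (acetylene), the simplest alkyne mentioned in the text.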
https://tex.stackexchange.com/questions/458699/automatic-subequations-for-amsmath-environments
# Automatic "subequations" for amsmath environments?

Is it possible to have amsmath automatically number equations in "subequations" style, when a multiline construct like

```latex
\begin{align}
E &= m c^2 \\
c^2 &= a^2 + c^2
\end{align}
```

is encountered? Using the subequations environment explicitly is useful when it is necessary to refer to a group of equations. When the only reason for using it is a style requirement that successive equations should be numbered as (13a), (13b), etc., it would however be nice to have this automated away. Additionally, this would be extremely useful in LyX, where using the Subequations module breaks the layout of the live preview.

Not a good idea, in my opinion. If LyX is not able to cope with subequations, then don't use it or ask its developers to fix it. Anyway, you can do it:

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{etoolbox}
\makeatletter
\let\endalignnosub\endalign
\renewenvironment{align*}{\start@align\@ne\st@rredtrue\m@ne}{\endalignnosub}
\renewenvironment{alignat*}{\start@align\z@\st@rredtrue}{\endalignnosub}
\renewenvironment{xalignat*}{\start@align\@ne\st@rredtrue}{\endalignnosub}
\renewenvironment{flalign*}{\start@align\tw@\st@rredtrue\m@ne}{\endalignnosub}
\appto\endalign{\endsubequations}
\preto\align{\subequations}
\preto\alignat{\subequations}
\preto\xalignat{\subequations}
\preto\flalign{\subequations}
\let\endgathernosub\endgather
\renewenvironment{gather*}{\start@gather\st@rredtrue}{\endgathernosub}
\preto\gather{\subequations}
\appto\endgather{\endsubequations}
\makeatother
\begin{document}
\begin{align}
a &= b \\
c &= d
\end{align}
\begin{align*}
a &= b \\
c &= d
\end{align*}
\end{document}
```

It's necessary to change all environments, because they depend on \endalign, so one cannot just change it. But don't. Really.

• This solution works pretty well (except that I add a \appto\subequations{\let\subequations\relax} to allow \begin{subequations}\label{eq:equation-group}\begin{align}...
without getting labels (1aa), (1ab), (1ac)...). But is there maybe some solution that still allows using \begin{align} \notag hello \\ & world \end{align} without getting (1a) instead of (1) as the label? – kdb Nov 7 '18 at 15:31
• @kdb Aha! Now you know why it's not good to do that. Really. ;-) – egreg Nov 7 '18 at 15:33
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-r-review-page-r-33/5
## Algebra: A Combined Approach (4th Edition)

Published by Pearson

# Chapter R - Review - Page R-33: 5

#### Answer

$60$

#### Work Step by Step

Step 1. Write the prime factorization of each number.

Step 2. Write the product containing each different prime factor (from Step 1) the greatest number of times that it appears in any one factorization. This product is the LCM.

$4=2*2$

$6=2*3$

$10=2*5$

$LCM=2*2*3*5=60$
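The two-step method above translates directly into code. This sketch (function names are my own) factors each number into primes, then takes each prime to the greatest exponent found in any one factorization:

```python
def prime_factors(n):
    """Step 1: return the prime factorization of n as {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def lcm(*nums):
    """Step 2: multiply each prime raised to its greatest exponent."""
    best = {}
    for n in nums:
        for p, e in prime_factors(n).items():
            best[p] = max(best.get(p, 0), e)
    out = 1
    for p, e in best.items():
        out *= p ** e
    return out

print(lcm(4, 6, 10))  # 60, since 4=2*2, 6=2*3, 10=2*5 and 2*2*3*5=60
```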
https://socratic.org/questions/an-object-with-a-mass-of-12-kg-is-revolving-around-a-point-at-a-distance-of-16-m
# An object with a mass of 12 kg is revolving around a point at a distance of 16 m. If the object is making revolutions at a frequency of 15 Hz, what is the centripetal force acting on the object? May 26, 2016 There are (at least) two ways to answer this: using linear velocity and angular velocity. I have completed both below to increase your knowledge and options. The answer, using both methods, is $F = 1.7 \times {10}^{6}$ $N$ #### Explanation: Linear velocity method: We have the expression $a = {v}^{2} / r$ for the centripetal acceleration. Combining this with Newton's Second Law, $F = m a$, we end up with: $F = \frac{m {v}^{2}}{r}$ We know the mass and radius, but need to calculate the linear velocity (a vector that is at a tangent to the circular motion. The speed is constant but the velocity is continually changing as the direction of the motion changes). The object is rotating 15 times each second, and the circumference of a circle of radius $r$ is $2 \pi r$. One circumference is $2 \pi \times 16 = 100.5$ $m$. The object is traveling 15 rotations each second, or $1508$ $m {s}^{-} 1$. Then $F = \frac{m {v}^{2}}{r} = \frac{12 \times {1508}^{2}}{16} = 1 , 705 , 548$ $N$ Angular velocity method: Similarly, for the centripetal acceleration using angular velocity, we have $a = {\omega}^{2} r$, and therefore $F = m {\omega}^{2} r$. Again, we know the mass and the radius. The object is rotating at $15$ $H z$, and each rotation is $2 \pi$ radians, so the angular velocity is $30 \pi = 94.25$ $r a {\mathrm{ds}}^{-} 1$. Then $F = m {\omega}^{2} r = 12 \times {94.25}^{2} \times 16 = 1 , 705 , 468$ $N$ This is slightly different from the other answer, but only at the 5th significant digit. Remember that we had the data only to 2 significant digits, so this is within the rounding error. For all practical purposes, both methods yield the same answer.
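The two methods above can be checked numerically. Since $v = 2\pi r f$ and $\omega = 2\pi f$, the formulas $F = mv^2/r$ and $F = m\omega^2 r$ are algebraically identical, which a short script confirms:

```python
import math

m, r, f = 12.0, 16.0, 15.0  # kg, m, Hz (values from the problem)

# Linear-velocity method: v = 2*pi*r*f, then F = m*v^2/r
v = 2 * math.pi * r * f
F_linear = m * v**2 / r

# Angular-velocity method: omega = 2*pi*f, then F = m*omega^2*r
omega = 2 * math.pi * f
F_angular = m * omega**2 * r

print(F_linear, F_angular)  # both ~1.705e6 N
assert math.isclose(F_linear, F_angular)
```

Computed without intermediate rounding, the two methods agree exactly; the small discrepancy in the worked solution comes only from rounding the speed to 1508 m/s.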
https://manual.q-chem.com/5.3/subsec_CASE.html
# 4.6.8 CASE Approximation

The Coulomb Attenuated Schrödinger Equation (CASE) approximation [Adamson:1996] follows from the KWIK algorithm [Dombroski:1996] in which the Coulomb operator is separated into two pieces using the error function, Eq. (5.12). Whereas in Section 5.6 this partition of the Coulomb operator was used to incorporate long-range Hartree-Fock exchange into DFT, within the CASE approximation it is used to attenuate all occurrences of the Coulomb operator in Eq. (4.2), by neglecting the long-range portion of the identity in Eq. (5.12). The parameter $\omega$ in Eq. (5.12) is used to tune the level of attenuation. Although the total energies from Coulomb attenuated calculations are significantly different from non-attenuated energies, it is found that relative energies, correlation energies and, in particular, wave functions, are not, provided a reasonable value of $\omega$ is chosen. By virtue of the exponential decay of the attenuated operator, ERIs can be neglected on a proximity basis, yielding a rigorous ${\cal{O}}({N})$ algorithm for single point energies. CASE may also be applied in geometry optimizations and frequency calculations.

OMEGA
Controls the degree of attenuation of the Coulomb operator.
TYPE: INTEGER
DEFAULT: No default
OPTIONS: $n$ Corresponding to $\omega=n/1000$, in units of bohr${}^{-1}$
RECOMMENDATION: None

INTEGRAL_2E_OPR
Determines the two-electron operator.
TYPE: INTEGER
DEFAULT: -2 (Coulomb operator)
OPTIONS:
-1 Apply the CASE approximation.
-2 Coulomb operator.
RECOMMENDATION: Use the default unless the CASE operator is desired.
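The error-function partition can be illustrated numerically. The sketch below assumes the standard KWIK split $1/r = \operatorname{erf}(\omega r)/r + \operatorname{erfc}(\omega r)/r$ (the $\omega$ value is purely illustrative) and shows how quickly the retained short-range erfc piece decays relative to the full operator:

```python
import math

# Assumed standard KWIK/CASE partition of the Coulomb operator:
#   1/r = erf(w*r)/r  (long range, neglected in CASE)
#       + erfc(w*r)/r (short range, retained; decays exponentially)
w = 0.2  # attenuation parameter omega, illustrative value in bohr^-1

for r in (1.0, 5.0, 10.0, 20.0):
    full = 1.0 / r
    short = math.erfc(w * r) / r
    long_ = math.erf(w * r) / r
    assert math.isclose(short + long_, full)  # the split is exact
    print(f"r={r:5.1f}  1/r={full:.3e}  erfc(wr)/r={short:.3e}")
```

The rapid decay of erfc(ωr)/r is what allows ERIs to be neglected on a proximity basis in the ${\cal{O}}(N)$ algorithm described above.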
http://downloadmunkey.net/zero-error/zero-error-capacity-under-list-decoding.php
# Zero Error Capacity Under List Decoding

Our result implies that for the so-called q/(q–1) channel, the capacity is exponentially small in q, even if the list size is allowed to be as big as 1.58q. Algorithms for Reed–Solomon codes that can decode up to the Johnson radius, which is $1-\sqrt{1-\delta}$, exist, where $\delta$ is the relative distance. If so, include such a polynomial $p(X)$ in the output list. The quantity $1-R$ is referred to in the literature as the list-decoding capacity. Because of their ubiquity and the nice algebraic properties they possess, list-decoding algorithms for Reed–Solomon codes were a main focus of researchers. Blinovsky, Bounds for codes in the case of list decoding of finite volume, Problems Inform. As a result, half the minimum distance acts as a combinatorial barrier beyond which unambiguous error-correction is impossible, if we only insist on unique decoding. The quantity $q^{H_{q}(p)}$ gives a very good estimate on the volume of a Hamming ball of radius $p$ centered on any word. Namely, the zero-error capacity of a DMC with feedback is equal to the list code zero-error capacity. ABSTRACT: We define here a new kind of quantum channel. This result involves a far-reaching extension of the "conclusive exclusion" of quantum states [Pusey/Barrett/Rudolph, Nat Phys 8:475, 2012].
R. Ahlswede, "Elimination of correlation in random codes for arbitrarily varying channels". This finally results in an operational interpretation of the celebrated Lovász $\vartheta$ function of a graph as the zero-error classical capacity of the graph assisted by quantum no-signalling correlations. Looking at the case of classical channels and bipartite graphs [10], in the light of a classical result of Elias [42], shows that only a very specific resource is needed there. The corresponding capacities $C_{OF}(L)$, $C_O(L)$ are nondecreasing in $L$. In other words, this is error correction with optimal redundancy. The list-decoding problem for Reed–Solomon codes can be formulated as follows. Input: for an $[n, k+1]_q$ Reed–Solomon code, we are given the pairs $(\alpha_i, y_i)$, $1 \leq i \leq n$. (Chapter: "Zero Error List-Decoding Capacity of the q/(q–1) Channel", in FSTTCS 2006: Foundations of Software Technology and Theoretical Computer Science.) The codes that they are given are called folded Reed–Solomon codes, which are nothing but plain Reed–Solomon codes viewed as a code over a larger alphabet by careful bundling of codeword symbols. The necessary requirement under which a quantum channel has zero-error capacity greater than zero is given. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming).
List decoding promises to meet this upper bound. For each of these polynomials, check if $p(\alpha_i) = y_i$ for at least $t$ values of $i \in [n]$; if so, include such a polynomial $p(X)$ in the output list. The unique-decoding model in coding theory, which is constrained to output a single valid codeword from the received word, could not tolerate a greater fraction of errors. Let $q \geqslant 2$, $0 \leqslant p \leqslant 1 - \tfrac{1}{q}$ and $\epsilon \geqslant 0$. The following two statements hold: for rates $R$ below $1 - H_q(p) - \epsilon$ there exist $(p, O(1/\epsilon))$-list-decodable codes, while for rates above $1 - H_q(p) + \epsilon$ every $(p, L)$-list-decodable code requires lists of size exponential in the block length. The proof for list-decoding capacity is a significant one in that it exactly matches the capacity of a $q$-ary symmetric channel $qSC_p$.
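The brute-force step just described (check every candidate polynomial against the received word and keep those agreeing in at least t positions) can be sketched for a toy code. This is an illustrative sketch only, not the efficient Guruswami–Sudan algorithm; the field size, evaluation points, and function name are invented for the example.

```python
# Brute-force list decoding of a tiny Reed-Solomon code over the prime field
# GF(q): for every polynomial p of degree < k, check whether p(alpha_i) = y_i
# for at least t positions i, and include p in the output list if so.
from itertools import product

def rs_list_decode(q, k, alphas, y, t):
    """Return all coefficient tuples (a_0, ..., a_{k-1}) over GF(q) whose
    polynomial agrees with the received word y in at least t positions."""
    out = []
    for coeffs in product(range(q), repeat=k):
        agree = sum(
            1
            for a, yi in zip(alphas, y)
            if sum(c * pow(a, j, q) for j, c in enumerate(coeffs)) % q == yi
        )
        if agree >= t:
            out.append(coeffs)
    return out

# The codeword of p(x) = 1 + 2x over GF(5) at points 0,1,2,3 is (1, 3, 0, 2);
# we corrupt one symbol and still recover p by demanding agreement t = 3.
print(rs_list_decode(5, 2, [0, 1, 2, 3], [1, 3, 0, 4], 3))
```

The exponential enumeration over all q^k polynomials is exactly why this is only a toy; the algorithms discussed in the text reach the Johnson radius in polynomial time.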
Interestingly, however, for the class of classical-quantum channels, we show that the asymptotic capacity is given by a much simpler SDP, which coincides with a semidefinite generalization of the fractional packing number. Their codes are variants of Reed–Solomon codes obtained by evaluating $m \geqslant 1$ correlated polynomials instead of just one, as in the case of plain Reed–Solomon codes. Hence, in a sense, this is like improving the error-correction performance to that possible in the case of a weaker, stochastic noise model. Motivation for list decoding: given a received word $y$, which is a noisy version of some transmitted codeword $c$, the decoder tries to output the transmitted codeword. (School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India.) See also: Venkatesan Guruswami's PhD thesis, *Algorithmic Results in List Decoding*; Folded Reed–Solomon code.
The notion was proposed by Elias in the 1950s. A family $\mathcal{H}$ of functions from $[m]$ to $[q]$ is said to be an $(m, q, \ell)$-family if for every subset $S$ of $[m]$ with $\ell$ elements, there is an $h \in \mathcal{H}$ that is one-to-one on $S$. This was understood better in the work of Elias [24], who showed that the capacity of zero-error list decoding of $N$ (with arbitrary but constant list size) is exactly $\log \alpha$. One application is efficient traitor tracing.

External links: "A Survey on List Decoding" by Madhu Sudan; lecture notes from courses taught by Madhu Sudan and by Luca Trevisan.
http://en.m.wikipedia.org/wiki/Burstsort
# Burstsort

Class: sorting algorithm. Data structure: array. Performance: $O(n\log(n))$.

Burstsort and its variants are cache-efficient algorithms for sorting strings and are faster than quicksort for large data sets. Burstsort algorithms use a trie to store prefixes of strings, with growable arrays of pointers as end nodes containing sorted, unique, suffixes (referred to as buckets). Some variants copy the string tails into the buckets. As the buckets grow beyond a predetermined threshold, the buckets are "burst", giving the sort its name. A more recent variant uses a bucket index with smaller sub-buckets to reduce memory usage. Most implementations delegate to multikey quicksort, an extension of three-way radix quicksort, to sort the contents of the buckets. By dividing the input into buckets with common prefixes, the sorting can be done in a cache-efficient manner.
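A minimal, illustrative rendition of the scheme described above: a dict-based trie, a burst threshold, and plain `sorted()` standing in for multikey quicksort. The names and the threshold value are invented for this sketch, and none of the cache tuning of the real algorithm is attempted.

```python
# Simplified burstsort sketch: strings are distributed into buckets keyed by
# their leading characters; a bucket that grows past THRESHOLD is "burst"
# into a child trie node, and a final in-order traversal emits each small
# bucket after sorting it.
THRESHOLD = 4  # burst a bucket once it holds more than this many strings

def _insert(node, s, depth):
    c = s[depth] if depth < len(s) else ""   # "" marks exhausted strings
    slot = node.setdefault(c, [])
    if isinstance(slot, dict):               # already burst: descend
        _insert(slot, s, depth + 1)
    else:
        slot.append(s)
        if c != "" and len(slot) > THRESHOLD:  # burst: re-insert one level deeper
            node[c] = {}
            for t in slot:
                _insert(node[c], t, depth + 1)

def _traverse(node, out):
    for c in sorted(node):                   # "" sorts before any character
        slot = node[c]
        if isinstance(slot, dict):
            _traverse(slot, out)
        else:
            out.extend(sorted(slot))         # stand-in for multikey quicksort

def burstsort(strings):
    root = {}
    for s in strings:
        _insert(root, s, 0)
    out = []
    _traverse(root, out)
    return out

print(burstsort(["banana", "apple", "band", "ban", "apricot", "cherry"]))
```

Because all strings in a bucket share the prefix traced by the trie, each bucket can be sorted independently and emitted in trie order, which is where the cache efficiency of the real algorithm comes from.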
https://docs.mdanalysis.org/2.0.0/documentation_pages/coordinates/GRO.html
# 6.8. GRO file format — MDAnalysis.coordinates.GRO

Classes to read and write Gromacs GRO coordinate files; see the notes on the GRO format, which include a conversion routine for the box.

## 6.8.1. Writing GRO files

By default any written GRO files will renumber the atom ids to move sequentially from 1. This can be disabled, and instead the original atom ids kept, by using the reindex=False keyword argument. This is useful when writing a subsection of a larger Universe while wanting to preserve the original identities of atoms. For example:

>>> u = mda.Universe()
>>> u.atoms.write('out.gro', reindex=False)
# OR
>>> with mda.Writer('out.gro', reindex=False) as w:
...     w.write(u.atoms)

## 6.8.2. Classes

class MDAnalysis.coordinates.GRO.Timestep(n_atoms, **kwargs)[source] Timestep data for one frame. Methods: ts = Timestep(n_atoms) creates a timestep object with space for n_atoms. Changed in version 0.11.0: Added from_timestep() and from_coordinates() constructor methods. Timestep init now only accepts integer creation. n_atoms now a read-only property. frame now 0-based instead of 1-based. Attributes status and step removed. Changed in version 2.0.0: Timestep now can be (un)pickled. Weakref for Reader will be dropped. Timestep now stores numpy array memory in 'C' order rather than 'F' (Fortran). Create a Timestep, representing a frame of a trajectory. Parameters: n_atoms (int) – The total number of atoms this Timestep describes; positions (bool, optional) – Whether this Timestep has position information [True]; velocities (bool, optional) – Whether this Timestep has velocity information [False]; forces (bool, optional) – Whether this Timestep has force information [False]; reader (Reader, optional) – A weak reference to the owning Reader, used when attributes require trajectory manipulation (e.g. dt); dt (float, optional) – The time difference between frames (ps). If time is set, then dt will be ignored.
time_offset (float (optional)) – The starting time from which to calculate time (in ps) Changed in version 0.11.0: Added keywords for positions, velocities and forces. Can add and remove position/velocity/force information by using the has_* attribute. copy()[source] Make an independent (“deep”) copy of the whole Timestep. copy_slice(sel)[source] Make a new Timestep containing a subset of the original Timestep. Parameters: sel (array_like or slice) – The underlying position, velocity, and force arrays are sliced using a list, slice, or any array-like. A Timestep object of the same type containing all header information and all atom information relevant to the selection. Timestep Note The selection must be a 0 based slice or array of the atom indices in this Timestep Example Using a Python slice object: new_ts = ts.copy_slice(slice(start, stop, step)) Using a list of indices: new_ts = ts.copy_slice([0, 2, 10, 20, 23]) New in version 0.8. Changed in version 0.11.0: Reworked to follow new Timestep API. Now will strictly only copy official attributes of the Timestep. dimensions View of unitcell dimensions (A, B, C, alpha, beta, gamma) lengths a, b, c are in the MDAnalysis length unit (Å), and angles are in degrees. dt The time difference in ps between timesteps Note This defaults to 1.0 ps in the absence of time data New in version 0.11.0. forces A record of the forces of all atoms in this Timestep Setting this attribute will add forces to the Timestep if they weren’t originally present. Returns: forces – force data of shape (n_atoms, 3) for all atoms numpy.ndarray with dtype numpy.float32 MDAnalysis.exceptions.NoDataError – if the Timestep has no force data New in version 0.11.0. classmethod from_coordinates(positions=None, velocities=None, forces=None, **kwargs)[source] Create an instance of this Timestep, from coordinate data Can pass position, velocity and force data to form a Timestep. New in version 0.11.0. 
classmethod from_timestep(other, **kwargs)[source] Create a copy of another Timestep, in the format of this Timestep New in version 0.11.0. has_forces A boolean of whether this Timestep has force data This can be changed to True or False to allocate space for or remove the data. New in version 0.11.0. has_positions A boolean of whether this Timestep has position data This can be changed to True or False to allocate space for or remove the data. New in version 0.11.0. has_velocities A boolean of whether this Timestep has velocity data This can be changed to True or False to allocate space for or remove the data. New in version 0.11.0. n_atoms A read only view of the number of atoms this Timestep has Changed in version 0.11.0: Changed to read only property positions A record of the positions of all atoms in this Timestep Setting this attribute will add positions to the Timestep if they weren’t originally present. Returns: positions – position data of shape (n_atoms, 3) for all atoms numpy.ndarray with dtype numpy.float32 MDAnalysis.exceptions.NoDataError – if the Timestep has no position data Changed in version 0.11.0: Now can raise NoDataError when no position data present time The time in ps of this timestep This is calculated as: time = ts.data['time_offset'] + ts.time Or, if the trajectory doesn’t provide time information: time = ts.data['time_offset'] + ts.frame * ts.dt New in version 0.11.0. triclinic_dimensions The unitcell dimensions represented as triclinic vectors Returns: A (3, 3) numpy.ndarray of unit cell vectors numpy.ndarray Examples The unitcell for a given system can be queried as either three vectors lengths followed by their respective angle, or as three triclinic vectors. 
>>> ts.dimensions
array([ 13., 14., 15., 90., 90., 90.], dtype=float32)
>>> ts.triclinic_dimensions
array([[ 13., 0., 0.],
       [ 0., 14., 0.],
       [ 0., 0., 15.]], dtype=float32)

Setting the attribute also works:

>>> ts.triclinic_dimensions = [[15, 0, 0], [5, 15, 0], [5, 5, 15]]
>>> ts.dimensions
array([ 15. , 15.81138802, 16.58312416, 67.58049774, 72.45159912, 71.56504822], dtype=float32)

New in version 0.11.0.

velocities A record of the velocities of all atoms in this Timestep. Setting this attribute will add velocities to the Timestep if they weren't originally present. Returns: velocities – velocity data of shape (n_atoms, 3) for all atoms, numpy.ndarray with dtype numpy.float32. Raises: MDAnalysis.exceptions.NoDataError – if the Timestep has no velocity data. New in version 0.11.0.

volume Volume of the unitcell.

class MDAnalysis.coordinates.GRO.GROReader(filename, convert_units=True, n_atoms=None, **kwargs)[source] Reader for the Gromacs GRO structure format. Note: this Reader will only read the first frame present in a file. Note: GRO files with zeroed 3-entry unit cells (i.e. 0.0   0.0   0.0) are read as missing unit cell information. In these cases dimensions will be set to None. Changed in version 0.11.0: Frames now 0-based instead of 1-based. Changed in version 2.0.0: Reader now only parses boxes defined with 3 or 9 fields. Reader now reads a 3-entry zero unit cell (i.e. [0, 0, 0]) as being without dimension information (i.e. will set the timestep dimensions to None).

Writer(filename, n_atoms=None, **kwargs)[source] Returns a GROWriter for filename. Parameters: filename (str) – filename of the output GRO file. Returns: GROWriter.

class MDAnalysis.coordinates.GRO.GROWriter(filename, convert_units=True, n_atoms=None, **kwargs)[source] GRO Writer that conforms to the Trajectory API. Will attempt to write the following information from the topology: • atom name (defaults to 'X') • resnames (defaults to 'UNK') • resids (defaults to '1') Note: the precision is hard coded to three decimal places.
Note: when dimensions are missing (i.e. set to None), a zero-width unit cell box will be written (i.e. [0.0, 0.0, 0.0]). Changed in version 0.11.0: Frames now 0-based instead of 1-based. Changed in version 0.13.0: Now strictly writes positions with 3dp precision and velocities with 4dp. Removed the convert_dimensions_to_unitcell method, use Timestep.triclinic_dimensions instead. Now writes velocities where possible. Changed in version 0.18.0: Added reindex keyword argument to allow original atom ids to be kept. Changed in version 2.0.0: Raises a warning when writing a timestep with missing dimension information (i.e. set to None). Set up a GROWriter with a precision of 3 decimal places. Parameters: filename (str) – output filename; n_atoms (int, optional) – number of atoms; convert_units (bool, optional) – units are converted to the MDAnalysis base format [True]; reindex (bool, optional) – By default, all the atoms are reindexed to have atom ids starting from 1 [True]. This behaviour can be turned off by specifying reindex=False. Note: to use the reindex keyword, users can follow the two examples given below:

u = mda.Universe()

Usage 1:

u.atoms.write('out.gro', reindex=False)

Usage 2:

with mda.Writer('out.gro', reindex=False) as w:
    w.write(u.atoms)

fmt = {'box_orthorhombic': '{box[0]:10.5f} {box[1]:9.5f} {box[2]:9.5f}\n', 'box_triclinic': '{box[0]:10.5f} {box[4]:9.5f} {box[8]:9.5f} {box[1]:9.5f} {box[2]:9.5f} {box[3]:9.5f} {box[5]:9.5f} {box[6]:9.5f} {box[7]:9.5f}\n', 'n_atoms': '{0:5d}\n', 'xyz': '{resid:>5d}{resname:<5.5s}{name:>5.5s}{index:>5d}{pos[0]:8.3f}{pos[1]:8.3f}{pos[2]:8.3f}\n', 'xyz_v': '{resid:>5d}{resname:<5.5s}{name:>5.5s}{index:>5d}{pos[0]:8.3f}{pos[1]:8.3f}{pos[2]:8.3f}{vel[0]:8.4f}{vel[1]:8.4f}{vel[2]:8.4f}\n'}

format strings for the GRO file (all include newline); precision of 3 decimal places is hard-coded here.

write(obj)[source] Write selection at current trajectory frame to file.
Parameters: obj (AtomGroup or Universe). Note: the GRO format only allows 5 digits for resid and atom number. If these numbers become larger than 99,999 then this routine will chop off the leading digits. Changed in version 0.7.6: resName and atomName are truncated to a maximum of 5 characters. Changed in version 0.16.0: frame kwarg has been removed. Changed in version 2.0.0: Deprecated support for calling with a Timestep has now been removed. Use AtomGroup or Universe as an input instead.

## 6.8.3. Developer notes: GROWriter format strings

The GROWriter class has a GROWriter.fmt attribute, which is a dictionary of different strings for writing lines in .gro files. These are as follows:

n_atoms For the first line of the gro file, supply the number of atoms in the system. E.g.: fmt['n_atoms'].format(42)

xyz An atom line without velocities. Requires that the 'resid', 'resname', 'name', 'index' and 'pos' keys be supplied. E.g.: fmt['xyz'].format(resid=1, resname='SOL', name='OW2', index=2, pos=(0.0, 1.0, 2.0))

xyz_v As above, but with velocities. Needs an additional keyword 'vel'.

box_orthorhombic The final line of the gro file which gives box dimensions. Requires the box keyword to be given, which should be the three cartesian dimensions. E.g.: fmt['box_orthorhombic'].format(box=(10.0, 10.0, 10.0))

box_triclinic As above, but for a non-orthorhombic box. Requires the box keyword, but this time as a length-9 vector. This is a flattened version of the (3,3) triclinic vector representation of the unit cell. The rearrangement into the odd gromacs order is done automatically.
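The format strings documented on this page can be exercised outside MDAnalysis with plain `str.format`. This standalone sketch only assumes the `fmt` entries quoted above:

```python
# Standalone sketch: exercising the documented GROWriter.fmt format strings
# with str.format, without importing MDAnalysis itself.
fmt_xyz = ('{resid:>5d}{resname:<5.5s}{name:>5.5s}{index:>5d}'
           '{pos[0]:8.3f}{pos[1]:8.3f}{pos[2]:8.3f}\n')
fmt_box = '{box[0]:10.5f} {box[1]:9.5f} {box[2]:9.5f}\n'

# A fixed-width GRO atom line: 5+5+5+5 + 3*8 = 44 characters plus newline.
line = fmt_xyz.format(resid=1, resname='SOL', name='OW2', index=2,
                      pos=(0.0, 1.0, 2.0))
print(repr(line))
# The final box line for an orthorhombic cell.
print(repr(fmt_box.format(box=(10.0, 10.0, 10.0))))
```

The `5.5s` specifiers are what enforce the truncation of resName and atomName to 5 characters noted in the changelog above.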
http://suhb.raovatvb.de/importance-of-stiffness-matrix.html
Importance Of Stiffness Matrix

In all CBE equations, the terms AKm(j), BKm(j), CKm(j), etc. appear. There is the length, the general overall flex which fits best for your swing speed, the weight of the club, and the feel that gives you confidence when it is swung. It is hoped that this paper can enhance understanding and proper usage of the program and increase the awareness of highway engineers of the importance of the effects of temperature, joint, and edge. In vibration, stiffness means the product of the modulus of elasticity and the moment of inertia of the beam. The DSM (direct stiffness method) is the method used in the computer analysis of structures and is the precursor to the more general finite element method. Note: it is known from elementary linear algebra that the inverse of a symmetric matrix is also a symmetric matrix. [7], where triangular and rectangular elements were used for the analysis of structures under plane stress conditions. Take a slice orthogonal to a given direction and define a small area on this slice. Each column of the stiffness matrix is an equilibrium set of nodal forces required to produce a unit displacement at the respective degree of freedom; a symmetric stiffness matrix shows that force is directly proportional to displacement; the diagonal terms of the matrix are always positive. This basic theory will then be used to calculate the frequency response function between two points on a structure, using an accelerometer to measure the response and a force-gauge hammer to measure the excitation. The material properties of the base state will be used. They measure how "hard" this solid is. It is a specific case of the more general finite element method. Introduction: the systematic development of the slope-deflection method in this matrix form is called the stiffness method. So let's have a look into the step-by-step procedure of how a stiffness matrix is assembled.
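As a minimal sketch of that assembly procedure, consider a chain of linear springs (illustrative NumPy code; the function name, node numbering, and stiffness values are invented for the example):

```python
# Illustrative sketch: assembling the global stiffness matrix for a chain of
# linear springs. Each element contributes k * [[1, -1], [-1, 1]] at the rows
# and columns of its two node numbers; summing these contributions gives the
# symmetric global matrix whose columns are equilibrium sets of nodal forces.
import numpy as np

def assemble(n_nodes, elements):
    """elements: list of (node_i, node_j, stiffness k)."""
    K = np.zeros((n_nodes, n_nodes))
    for i, j, k in elements:
        ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
        for a, p in enumerate((i, j)):                  # scatter ke into K
            for b, q in enumerate((i, j)):
                K[p, q] += ke[a, b]
    return K

# Two springs in series: node 0 --k1=100-- node 1 --k2=200-- node 2
K = assemble(3, [(0, 1, 100.0), (1, 2, 200.0)])
print(K)
```

The assembled matrix is symmetric with positive diagonal terms, and its rows sum to zero, so it stays singular until supports are imposed, reflecting free rigid-body motion of the unsupported chain.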
It is shown that the minimum stiffness of the bracings required by a multi-column system depends on: 1) the blueprint layout and the layout of the columns; 2) the variation in height and cross-sectional properties among the columns; 3) the flexural and shear stiffness of each column; 4) the applied axial load pattern on the columns; and 5) the lack of symmetry in the loading pattern, column layout, column sizes, and heights, which causes combined torsion-sway buckling and reduces the buckling load. The reason is that for such a constraint the sum in the tensorial equation for Hooke's law collapses into a single term containing only $C_{1111}$. In general, the off-diagonal (cross-coupling) terms can be neglected, for two reasons: 1) the values of the off-diagonal terms are small, especially for shallow footings, and 2) they are difficult to compute. The rotation of the material matrix is done by implementing Euler angles; Bunge (ZXZ) notation is the method selected for the rotation-matrix transformation of the stiffness matrix, stress, and strain components. This type of damping is called proportional damping. I know the definition of a symmetric positive definite (SPD) matrix, but want to understand more. Stiffness (or rigidity) is a property of polymers that is described by the flexural modulus, or bending modulus of elasticity. The stiffness matrix of a material intersected by a system of parallel, continuously distributed cracks is obtained in the limit, in direct tensor notation. Imagine an object oriented in the Cartesian coordinate system with a number of forces acting on it, such that the vector sum of all the forces is zero. Keywords: bus, oscillatory behaviour, spring, shock absorber, simulation. In addition, it is quite convenient and not uncommon to describe the damping of the structural system by a damping matrix that is proportional to the mass matrix and the stiffness matrix, $C = \alpha M + \beta K$, where the constants $\alpha$ and $\beta$ are proportionality constants.
They relate to different phases of matter: a solid, in the case of the $6\times 6$ stiffness tensor, and a nematic liquid crystal in the case of the Frank elastic constants. The $B$ matrix is a coupling matrix: it relates the bending strains with normal stresses and the normal strains with bending stresses. Another application of stiffness finds itself in skin biology. The power of the finite element method comes after all the nodal displacements are calculated, because the interpolating polynomial is then completely determined and can be evaluated at any point along the beam, not just at its end nodes. It is a leading cause of death in adults. The stiffness matrix [K] depends on the. Subsequently, the method is extended to study the mean and variance of the stationary response of built-up structures when excited by stationary stochastic forces. • Flexibility method: the flexibility method is based upon the solution of equilibrium equations and compatibility equations. The matrix displacement method first appeared in the aircraft industry in the 1940s [7], where it was used to improve the strength-to-weight ratio of aircraft structures. (Suvranu De, MANE 4240 & CIVL 4240, Introduction to Finite Elements.) The moment of inertia for a rectangular cross section about its neutral axis is $\frac{b \cdot d^3}{12}$. General analysis: relations between stresses and strains in a laminated plate are stated by the [A], [B] and [D] matrices, as we know from the mechanics of composite laminated plates. A drawback of the stiffness matrix approach results from rearrangement, which affects the stiffness matrix itself. To overcome this. What are the types of structures that can be solved using the stiffness matrix method? Structures such as simply supported beams, fixed beams, and portal frames can be solved using the stiffness matrix method.
(The element stiffness relation is important because it can be used as a building block for more complex systems.) Although matrix stiffness is an important determinant of stem cell differentiation, its effect may not be specific for only one lineage, and biochemical factors such as TGF-β are required, together with matrix stiffness, to define a unique differentiation pathway. (Short communication: "Vascular Smooth Muscle Cell Stiffness As a Mechanism for Increased Aortic Stiffness With Aging", Hongyu Qiu, Yi Zhu, Zhe Sun, Jerome P.) If the material of the spring is linearly elastic, the load P and elongation δ are proportional, or P = kδ. Select Solu -> Analysis Options, give your substructure a name (defaults to the jobname), and select the matrix to be generated to be the stiffness matrix. The procedure for writing; obtaining the matrices from Ansys; stiffness and mass matrices of the model. When using this approach, iteration may not be required and the resulting analysis can be less computationally demanding. For a given path of force [F], the corresponding path of. To derive the dynamic stiffness matrix of a rotating Bernoulli–Euler beam, analytical and computational efforts are required. (Laboratory of Applied Energetics and Mechanics (LEMA), University of Abomey-Calavi, Benin. Abstract: In this paper, geometric nonlinear analysis of plane frames was performed by the stiffness matrix method using stability functions.) This calculator is designed to calculate $2\times 2$, $3\times3$ and $4\times 4$ matrix determinant values. For these reasons it is always important to make sure the length, clamp, and weights are the same when comparing frequency measurements of shafts. This not only implies A11 = A22, A16 = A26, and A66 = (A11 − A12)/2, but also that these stiffnesses are independent of the angle of rotation of the laminate. Thus, a single value can be used to represent stiffness. Otherwise, the structure is free to move or deflect without deforming.
("Matrix metalloproteinase-14 is a mechanically regulated activator of secreted MMPs and invasion" — Haage, Amanda; Nam, Dong Hyun; Ge, Xin; Schneider, Ian C.) "The Role of Matrix Stiffness in Regulating Cell Behavior", Rebecca G. In equation (1), M is still a mass matrix and L is a stiffness matrix, in spite of the fact that we put an eigenvalue on an unusual side. The stiffness of the myogenic stem cell microenvironment markedly influences the ability to regenerate tissue. Mass matrix components, internal forces and the stiffness matrix all require integration over the element domain, which is most commonly obtained with the help of numerical integration schemes, e.g. Gaussian quadrature. This matrix is becoming increasingly important in the design of modern mechanical systems, such as compliant mechanisms. Local stiffness matrix k12; global stiffness matrix. By performing a linear analysis under seismic actions, it is important that the distribution of member forces is based on realistic stiffness values (including cracking) applying at close to member yield forces. For continuous fiber-reinforced laminates, the following is assumed:
A new method has been proposed. Al-Gahtani (1996) derived the stiffness matrix by using differential equations and determined fixed-end forces for distributed loads. A Gauss elimination solver which works on banded matrices is implemented and given here. The advantages and disadvantages of the matrix stiffness method are compared and discussed in the flexibility method article. ("A New Approach to Identify the Stiffness Matrix of ….") Cost is the main driver, and strength and stiffness are less important. The membrane cracked-section factor is applied to the membrane stiffness matrix and affects in-plane translational (horizontal and vertical) and in-plane rotational stiffness. Endpoint stiffness represents the stiffness of the arm at the hand. Large-artery (aortic) stiffening, which occurs with aging and various pathologic states, impairs this cushioning function and has important consequences on cardiovascular health, including isolated systolic hypertension and excessive penetration of pulsatile energy into the microvasculature of target organs that operate at low vascular resistance. Basic mechanics of laminated composite plates: the stiffness matrix [A] behaves like that of an isotropic material. This parameter is used along with PARAM, G. But it is the same basic idea. The equation solver also must be appropriately modified to handle the type of storage scheme adopted. A simple method to construct the stiffness matrix of a beam and of a beam-column element of constant cross-section, with bending in one principal plane, including shear deflections.
Equation for linear static analysis is [F]=[K][D]. A1 Flexibility method and the stiffness method Statically indeterminate structures can be analyzed by using the flexibility method or the stiffness method. Intent and Scope This report is intended only to be used as a quick reference guide on the mechanics of continuous fiber-reinforced laminates. Let’s derive the spring element equations and stiffness matrix using the principal of minimum potential energy. All lateral forces are distributed to each element on the basis of relative rigidities and resisting element locations. In direct tensor notation. This may be as a consequence of other adaptations which provide more physiologically important specialisation of mechanical properties. These systems, which are of the form Mx + (C + G)x + Kx = 0, are of fundamental importance in the study of vibrational phenomena, where the matrices M, C, G and K represent mass, damping, gyroscopic coupling. In this study, we examined the effects of matrix stiffness on adult cardiac side population (CSP) progenitor cell behavior. Where Κ (e) is the element stiffness matrix, u (e) the nodal displacement vector and F (e) the nodal force vector. 4 Importance of fluid inertia in thin film flows Importance of fluid inertia effects on several fluid film bearing applications. In the method of displacement are used as the basic unknowns. cost is the main driver and strength and stiffness are less important. In essence, the matrix transfers some of the applied stress to the particles, which bear a fraction of the load. To explore the relative importance of these issues we created an OCCA based kernel that exploits optimized SIMD cramming and loop unrolling to evaluate the throughput of the pre-built matrix approach compared to the on-the-fly approach we highlighted in a previous blog entry. 
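The relation [F] = [K][D] and the element relation K(e) u(e) = F(e) can be made concrete with a minimal direct-stiffness sketch. This is illustrative only — the node numbering, spring constants and loads are assumed, not taken from the text:

```python
import numpy as np

# Illustrative direct-stiffness sketch: each 1D spring element contributes
# the 2x2 block k * [[1, -1], [-1, 1]], scattered into the global K
# according to its node numbers.
def assemble(n_nodes, elements):
    K = np.zeros((n_nodes, n_nodes))
    for i, j, k in elements:                  # element between nodes i, j
        ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for a, p in enumerate((i, j)):
            for b, q in enumerate((i, j)):
                K[p, q] += ke[a, b]
    return K

K = assemble(3, [(0, 1, 1000.0), (1, 2, 2000.0)])  # two springs in series

# Solve [F] = [K][D] with node 0 fixed: delete its row/column, then solve.
f = np.array([0.0, 5.0])                      # 5 N at the free end
d_free = np.linalg.solve(K[1:, 1:], f)
print(d_free)                                 # [0.005, 0.0075] m
```

Note that the row/column deletion step is essential: without at least one constrained degree of freedom the assembled K is singular (rigid-body translation) and cannot be solved.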
The lateral shear force is applied to the rigid diaphragm, and that force is distributed to all elements after the rotational stiffness analysis has been completed. [Stiffness matrix][Displacement matrix] = force vector. Torsion stiffness is an important characteristic in chassis design with an impact on the ride and comfort as well as the performance of the vehicle [5],[6],[10]. The rotational stiffness is the change in torque required to achieve a change in angle. The first approach has proved to be robust and stable in reinforced concrete structures with extensive cracking. It is hoped that this paper can enhance understanding and proper usage of the program and increase the awareness of highway engineers of the importance of the effects of temperature, joint, edge, and. The rotation of the material matrix is done by implementing Euler angles; Bunge (ZXZ) notation is the method selected as the rotation matrix transformation for the stiffness matrix, stress, and strain components. Chapter 10 – Isoparametric Elements Learning Objectives • To illustrate by example how to evaluate the stresses at a given point in a plane quadrilateral element using Gaussian quadrature • To evaluate the stiffness matrix of the three-noded bar using Gaussian quadrature and compare the result to that found by explicit evaluation. The full files containing the mass and stiffness matrices (Harwell-Boeing format) are detailed in Cedric Rouault's report, available in French. To solve vibration problems, we always write the equations of motion in matrix form.
The most important matrix generated is the overall joint stiffness matrix [S J ]. The 'element' stiffness relation is: 𝐾(𝑒) 𝑢(𝑒) = 𝐹(𝑒) (11) Where 𝐾 (𝑒) is the element stiffness matrix, 𝑢(𝑒) the nodal displacement vector and 𝐹 the nodal force vector. Once this linear relationship is established, it is easy to answer the following three questions: 1. One notes that for a specified value of , one can count the number of negative terms in the diagonal matrix and it is always equal to the number of frequencies below that value. configuration). I think someone noted some time ago that the stiffness of an object increases/decreases by the cube of the increase/decrease in thickness. , when the stiffness matrix is diagonal). Richardson. Thus in a frame having j joints, the number of members is: H = 2(y-3) +3 = 2/-3 (3. The problem with this method is that the result is often highly sensitive to differences in relative errors in static displacements. Relation (2) is in general feasible only through the actuation and/or kinematic redundancy [4]. matrix 192 k22 Daedalus—the aircraft (continued from page 5) Mark Drela stressed the importance of the seats for so long a flight. Design analysis of beams, circular plates and cylindrical tanks on elastic foundations plate 3. Incorporating the details of finite deformation, the analysis may also be applied to a buckling analysis of the structural system. Rotational Stiffness. I think someone noted some time ago that the stiffness of an object increases/decreases by the cube of the increase/decrease in thickness. However, whereas the inertia matrix is fairly tightly constrained by mechanics, the stiffness matrix can be any 6 6 symmetric matrix, depending on the potential. KEYWORDS: Furniture \ furniture making \ Joinery. We are taking our K matrix with blanks everywhere, and we're adding into it this one element, k1. The goal of thischapteristo analyse the stackingsequence. The composite. 
There is increasing evidence that this reaction plays a central role in ageing and disease of connective tissues. A NEW APPROACH TO IDENTIFY THE STIFFNESS MATRIX OF … 191 3. Thus, a single value can be used to represent stiffness. The eigenvalues of element stiffness matrices K and the eigenvalues of the generalized problem Kx = λMx, where M is the element's mass matrix, are of fundamental importance in finite element analysis. The modal mass, stiffness, and damping. Now this is a K matrix, this stiffness matrix governing--and this is very important--governing this system. The stiffness and mass matrices of the structure are then projected onto the subspace. The material flexibility is the inverse of this. Specifically, the relative importance of feedforward and feedback mechanisms has. Combine these factors and there you have the shaft and club for you. The Euler transformation matrix is applied to include the orientation angle. Design of a Composite Drive Shaft and its Coupling for Automotive Application, M. Statically determinate and indeterminate problems can be solved in the same way. • Shear wall stiffness • Shear walls with openings • Diaphragm types • Types of Masonry Shear Walls • Maximum Reinforcement Requirements • Shear Strength • Example: simple building Shear Walls 2 Shear Walls: Stiffness _____ stiffness predominates _____ stiffness predominates Both shear and bending stiffness are important d h. One notes that for a specified value of , one can count the number of negative terms in the diagonal matrix and it is always equal to the number of frequencies below that value. To derive the dynamic stiffness matrix of a rotating Bernoulli-Euler beam, analytical and computational efforts are required. As I mentioned before, I used some commands in the input file in order to write out the stiffness matrix, and ran the input file with this command at the command window: abaqus cae nogui=pythoncode1.
In the paper, the axial stiffness and bending stiffness of single-layer reticulated shell’s joint are considering together, non-linear beam-column element with rigid springs and rigid ends is taken as the analysis model of members of single-layer reticulated shell, a tangent stiffness matrix of members of single-layer reticulated shell considering joint’s stiffness is derived on the basis. Quantifying these behaviors is important because they significantly alter computed force, moment, curvature, strain, and stress. K sc is the diagonal matrix of tendon stiffness and K pc is the parallel compliance at the joints. Give the formula for. The stiffness matrix extends this to large number of elements (global stiffness matrix). The stack is defined by the fiber directions of each ply like this:. It is important to shift your hips with respect to the blue axis lines indicated in the images above. Computational algorithms and sensitivity to perturbations are both discussed. The external force applied on a specified area is known as stress, while the amount of deformation is called the strain. Careful inspection and intuition of the FEA results is very important. Effective Damping Value of Piezoelectric Transducer Determined by Experimental Techniques and Numerical Analysis Gilder Nader Department of Mechatronic and Mechanical Systems Engineering Escola Politécnica da Universidade de São Paulo Rua Prof. 1 Introduction to the Stiffness (Displacement) Method: Analysis of a system of springs Prof. 163 930 9,296. Example Breakdown. The current key to understanding shaft fitting is experience. Introduction The systematic development of slope deflection method in this matrix is called as a stiffness method. We propose a fast stiffness matrix calculation technique for nonlinear finite element method (FEM). 
These studies suggest that the matrix stiffness that optimizes matrix stretch and subsequent recoil (and thus the frequency of SSM events) scales directly with contractility-generated traction forces. The finite element method began as a matrix method of. So let's have a look into the step by step procedure of how a stiffness matrix is assembled. Stiffness is neither hard to understand, nor of only theoretical interest. An overall structural damping coefficient can be applied to the entire system stiffness matrix using PARAM, W3, r where r is the circular frequency at which damping is made equivalent. 1 Theory of Elasticity The property of solid materials to deform under the application of an external force and to regain their original shape after the force is removed is referred to as its elasticity. However, the mass matrix as well as the geometric stiffness matrix can also be derived by employing simpler shape functions related only to translation. We will present a more general computational approach in Part 2 of this blog series. , the 6 × 6 stiffness matrix pertaining to a rigid body mounted on a linearly elastic suspension. Metal matrix composite and thermoplastic matrix composite are some of the possibilities. The DSM is appealing in free vibration and buckling analyses because unlike the. ON MESH GEOMETRY AND STIFFNESS MATRIX CONDITIONING FOR GENERAL FINITE ELEMENT SPACES∗ QIANG DU†, DESHENG WANG‡, AND LIYONG ZHU§ Abstract. The following article will attempt to explain the basic theory of the frequency response function. Vibrant Technology, Inc. 2, 2007 Stiffness Matrix for Haunched Members with Including Effect of Transverse Shear Deformations 243 considering the exact variations of the geometry. Take a slice orthogonal to the -direction and define a small area on this slice as. for free vibration analysis of metallic [20] and composite [21] beams. However the effect of orientation angle on damping and dynamic stiffness is not included. 
Passing to the limit he obtained what is now. I know the definition of symmetric positive definite (SPD) matrix, but want to understand more. When standing, remain near a support to keep yourself from falling, until your balance improves. For this reason properties such as the elasticity and thermal expansivity cannot be expressed as scalars. Mathematical Properties of Stiffness Matrices 5 which is called the characteristic polynomial of [K]. FEA results of moduli for full cross sections Void VF Tow VF Matrix VF E1 E3. , the 6 × 6 stiffness matrix pertaining to a rigid body mounted on a linearly elastic suspension. The latter has shown superiority in analysis where localized cracking and crack propagation are the most important. When expressed as a FORTRAN subroutine and compared with the classical method of forming the stiffness matrix using Gaussian integration, the approach gives a CPU time speed-up of the order of 2–3 on a vector machine and of the order of 4–5 on a scalar machine. BASIC MECHANICS OF LAMINATED COMPOSITE PLATES I. It is convenient to assess the contributions for one typical member i and repeat the process for members. edu Mechanical Engineering Department, Univer sity of South Carolina, Columbia SC, 29208 ABSTRACT. The described. When standing, remain near a support to keep yourself from falling, until your balance improves. Beyond power and control, racquet stiffness also has an impact on comfort. • Flexibility Method The flexibility method is based upon the solution of equilibrium equations and compatibility equations. 1Element Stiffness Matrix The stiffness matrix of a structural system can be derived by various methods like variationalprinciple, Galerkin method etc. 
In this paper, first, the simplified mass matrix for the beam element is constructed employing shape functions of in-plane displacements for deflection, and then the same approach is used for construction of the simplified geometric stiffness matrix for beam, triangular and rectangular plate elements. Stiffness and lateral flexural stiffness of the arch rib on the structural stability are determined by the mode of buckling, and the lateral flexural stiffness has nearly no impact on the structural stability for an in-plane buckling arch. The program handling the structural simulation requires a 6x6 stiffness matrix (M) for the beam elements. The element stiffness matrix 'k' is the inverse of the element flexibility matrix 'f' and is given by f = 1/k or k = 1/f. A series of nine-story, five-bay, elastic frames were analyzed to verify the concept of apparent lateral stiffness of a story. One important feature of the linear approach is that the stiffness matrix of the system is constant and numerically well-conditioned, yielding a fast and stable simulation. Based on how the structure elements are connected through their nodes, it is possible to define a connectivity matrix. The B matrix is a coupling matrix: it relates the bending strains with normal stresses and the normal strains with bending stresses. The method is then known as the direct stiffness method.
FEM basis is in the stiffness matrix method for structural analysis, where each element has a stiffness associated with it. Since the stiffness matrix, which is the inverse of the compliance matrix, is symmetric, the compliance matrix has to be symmetric. Note: it is known from elementary linear algebra that the inverse of a symmetric matrix is also a symmetric matrix. Both the reinforcement type and the matrix affect processing. This matrix is becoming increasingly important in the design of modern mechanical systems, such as compliant mechanisms. 1 Element Stiffness Matrix The stiffness matrix of a structural system can be derived by various methods like the variational principle, the Galerkin method, etc. The two matrices must be the same size. The stiffness method of analysis of structures is also called the displacement method. The objectives of the present paper are to present 1. For instance, they may indicate the presence of 'zero energy modes', or control the critical timestep applicable in temporal integration. The mutual interactions of the frame and infill panel play an important part in controlling the stiffness and strength of the infilled frame. In the example, the matrix A is not a full matrix, but MATLAB's inverse routine will still return a matrix. There will always be as many compatibility equations as redundants. Note that, from symmetry of the stiffness matrix, ν23 E3 = ν32 E2, ν13 E3 = ν31 E1, and ν12 E2 = ν21 E1. Appendix O: THE ORIGINS OF THE FINITE ELEMENT METHOD • In his studies leading to the creation of variational calculus, Euler divided the interval of definition of a one-dimensional functional into finite intervals and assumed a linear variation over each, defined by end values [434].
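The note above — that the inverse of a symmetric matrix is itself symmetric, so stiffness and compliance matrices come in symmetric pairs — is easy to check numerically (the matrix entries below are arbitrary):

```python
import numpy as np

# Numerical check with an arbitrary symmetric, positive definite matrix:
# the compliance matrix, i.e. the inverse of a symmetric stiffness
# matrix, is itself symmetric.
K = np.array([[ 4.0, -1.0,  0.5],
              [-1.0,  3.0, -0.7],
              [ 0.5, -0.7,  2.0]])
S = np.linalg.inv(K)                 # compliance matrix
print(np.allclose(S, S.T))           # True
print(np.allclose(K @ S, np.eye(3))) # True: K and S are mutual inverses
```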
Today, stiffness usually refers to the finite element stiffness matrix, which can include all of the above stiffness terms plus general solid or shell stiffness contributions. In classical eigenvalue buckling the response in the base state is also linear. (D = pile shaft diameter, EsD = soil Young's modulus at a depth of one pile diameter.) The 'element' stiffness relation is K(e) u(e) = F(e) (11), where K(e) is the element stiffness matrix, u(e) the nodal displacement vector and F(e) the nodal force vector. 2 Stiffness Method for One-Dimensional Truss Elements We will look at the development of the matrix structural analysis method for the simple case of a structure made only out of truss elements that can only deform in one direction. Element stiffness matrix integration is carried out numerically using Gauss-Legendre quadrature: the value of the integrand is calculated at specific Gauss points and summed with weights, and the number of Gauss points needed depends on the order of the integrand (full versus reduced integration). In the solution processor choose the Analysis Type to be substructuring (Solu -> New Analysis -> Substructuring). This means that if the mooring is connected to a point other than the CG of the body, you will have to convert it to the corresponding mooring stiffness at CG. Bischofa, M.
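The Gauss-Legendre procedure just described can be sketched for the simplest case, a two-node bar element, where the strain-displacement matrix is constant and a two-point rule reproduces the closed-form stiffness EA/L · [[1, −1], [−1, 1]] exactly (the material and geometry values below are assumed):

```python
import numpy as np

# Two-point Gauss-Legendre integration of a 2-node bar element stiffness
# (E, A, L are assumed values). B is constant for this linear element,
# so quadrature recovers the exact closed-form answer.
E, A, L = 200e9, 1e-4, 2.0
gauss = [(-1/np.sqrt(3), 1.0), (1/np.sqrt(3), 1.0)]  # (point, weight)

K = np.zeros((2, 2))
for xi, w in gauss:
    B = np.array([[-1.0 / L, 1.0 / L]])  # constant here; xi-dependent in general
    detJ = L / 2.0                       # Jacobian of the map [-1, 1] -> [0, L]
    K += w * E * A * (B.T @ B) * detJ    # accumulate weighted integrand

print(K / (E * A / L))                   # [[1, -1], [-1, 1]]
```

For higher-order elements B varies with xi, and the number of Gauss points must match the polynomial order of the integrand — using too few is the "reduced integration" mentioned above.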
The modal mass, stiffness, and damping definitions are derived in a previous paper [1], and are restated here for convenience. The above figure illustrates the 6x6 symmetric matrix of spring stiffness coefficients. Who is Craig Bampton? "Coupling of Substructures for Dynamic Analysis" by Roy R. Craig Jr. and M. C. C. Bampton, AIAA Journal, Vol. 6, 1968. A finite element formulation for problems of large strain and large displacement. A parallel development in a current frame of reference has been made and will be presented separately. For classically damped structures, modal mass, stiffness and damping can be defined directly from formulas that relate the full mass, stiffness and damping matrices to the transfer function matrix. So you're not just blindly doing some-- matrix-matrix products can be pretty tedious, but now you know what they're for. It is expressed as the ratio of load to deflection and depends on the bearing type, design and size. See section 10. The 6 x 6 stiffness matrix can be incorporated in most structural engineering programs for dynamic response analysis to account for the foundation stiffness in evaluating the dynamic response of the structural system. Force as a function of the displacement at point 1 when varying the spring stiffness. One of the characteristics of the eigenvalue solution (Ax = λx) is that the initial vector and the acceleration are in the same direction, but are just of a different magnitude (λ). The joint stiffness matrix consists of contributions from the beam stiffness matrix [S M ]. Keywords: bus, oscillatory behaviour, spring, shock absorber, simulation.
If the determinant of the matrix is zero, then the inverse does not exist and the matrix is singular. The stiffness of the joint is compromised and relatively large slip displacements are possible. The matrix is the component that holds the filler together to form the bulk of the material. 4 Member Stiffness Matrix The structure stiffness matrix [K] is assembled on the basis of the equilibrium and compatibility conditions between the members. Networks of pipes, circuits, traffic streets, and the like may be represented by a connectivity matrix which indicates which pairs of nodes in the matrix are directly joined to each other. An axial member will have a local stiffness matrix of size 4×4. The area under the stress–strain curve, multiplied by the width of the crack band (fracture process zone), represents the fracture energy. Finally, the total solid stiffness matrix is obtained by adding the anisotropic fibrillar stiffness matrix to the isotropic non-fibrillar one. Stiffness is important in designing products which can only be allowed to deflect by a certain amount. When using this approach, iteration may not be required and the resulting analysis can be less computationally demanding. It should be clear that the element stiffness matrix is of crucial importance – it links nodal forces to nodal displacements; it encapsulates how the element behaves under load. Moreover, for some applications the importance of the tangent stiffness matrix is based on physical grounds, in particular when deformations become large. That is all. Careful inspection and intuition of the FEA results is very important.
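The determinant remark above applies directly to stiffness matrices: an unconstrained element admits a rigid-body translation, so its stiffness matrix is singular until boundary conditions remove that mode. A minimal sketch (the spring constant is assumed):

```python
import numpy as np

# Hypothetical single spring with both nodes free: its stiffness matrix
# has a rigid-body translation mode, so det = 0 and no inverse exists.
k = 500.0
K_free = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
print(np.linalg.det(K_free))       # 0 -> singular, cannot be inverted

# Fixing node 0 removes the rigid-body mode; the reduced matrix inverts.
K_fixed = K_free[1:, 1:]
print(np.linalg.det(K_fixed))      # 500.0 -> non-singular
```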
The matrix depends on the joint stiffness matrix. This is always the case when the displacements are directly proportional to the applied loads. The eigenvalues of element stiffness matrices K and the eigenvalues of the generalized problem Kx = λMx, where M is the element's mass matrix, are of fundamental importance in finite element analysis. Abstract: When a mesh of simplicial elements (triangles or tetrahedra) is used to form a piecewise linear approximation of a function, the. But it could not be added to a matrix with 3 rows and 4 columns (the columns don't match in size). Overall, the benchmarking indicated varying the bending and torsional stiffness distributions and the camber would appear to be the key approach to altering the feel and performance of a snowboard across the major riding styles. I'd expect it to be singular, since a stiffness matrix would not see simple translations of an elastic body. The passive joint stiffness can mathematically be represented as K_j,passive = R^T K_sc R + K_pc (1), where R is the transformation matrix from joint space to tendon space, also known as the moment arm matrix defining the tendon routing strategy. What are the types of structures that can be solved using the stiffness matrix method? Structures such as simply supported beams, fixed beams and portal frames can be solved using the stiffness matrix method. Coordinates Transformation 5. It is a specific case of the more general finite element method. It will be assembled from the material properties and geometry of all the finite elements in the model • So let us look at the matrix method MATRIX METHOD F1 F2 1 K 2 u1 u2. The way that you create a matrix can have an important impact on the efficiency of your programs. In other words, the solid is "hard".
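The generalized problem Kx = λMx mentioned above can be solved for a small assumed system with scipy.linalg.eigh, which also returns mass-normalised mode shapes (φᵀMφ = I) — the normalisation with respect to the mass matrix referred to earlier in connection with modal participation factors:

```python
import numpy as np
from scipy.linalg import eigh

# Assumed 2-DOF spring-mass chain: solve K x = lambda M x, lambda = omega^2.
m, k = 2.0, 800.0
M = m * np.eye(2)
K = k * np.array([[2.0, -1.0], [-1.0, 1.0]])

lam, phi = eigh(K, M)          # generalized symmetric eigenproblem
omega = np.sqrt(lam)           # natural frequencies in rad/s

# eigh mass-normalises the modes: phi.T M phi = I, phi.T K phi = diag(lam)
print(np.allclose(phi.T @ M @ phi, np.eye(2)))    # True
print(np.allclose(phi.T @ K @ phi, np.diag(lam))) # True
```

Because both K and M are symmetric and M is positive definite, all eigenvalues λ = ω² are real, which is what makes the frequency count-by-sign-changes argument quoted above well defined.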
Stiffness can be a transient phenomenon, so detecting nonstiffness is equally important. • Called quasi-isotropic and not isotropic because [B] and [D] may not behave like an isotropic material. If a prescribed force is used instead, all solutions will fail at the first peak load. Axial Stiffness of Geosynthetics Geosynthetics are tensile reinforcing elements (geotextiles, geogrids) defined by their starting and end points and by the axial (normal) stiffness J z [kN/m]. Vladimir, N. Generalized stiffness model of industrial humanoid of anthropomorphic configuration. In the case of a linear static structural analysis, the assembled equation is of the form Kd = r, where K is the system stiffness matrix, d is the nodal degree of freedom (dof) displacement vector, and r is the applied nodal load vector. Method of Finite Elements I Direct Stiffness Method (DSM) • Computational method for structural analysis • Matrix method for computing the member forces and displacements in structures • DSM implementation is the basis of most commercial and open-source finite element software • Based on the displacement method. The slope-deflection and moment distribution methods were extensively used for many years before the computer era. Freedom codes of a member in a global coordinate system. This matrix is becoming increasingly important in the design of modern mechanical systems, such as compliant mechanisms.
The problem for club makers and fitters who recognized the importance of shaft profiling was that there was not an affordable EI instrument until I designed and manufactured one. The performance of finite element computation depends strongly on the quality of the geometric mesh and the efficiency of the numerical solution of the linear systems resulting. Vladimir, N. The other advantage is that a single 4 x 4 (2 x 2) complex stiffness matrix provides a theoretically exact description of a single pavement layer (or a half-space). Arterial stiffness results from a degenerative process affecting the extracellular matrix of elastic arteries under the effect of age and cardiovascular risk factors (such as diabetes, hypertension, smoking and sedentary lifestyle). Cell Behavior. The subject of the paper is the Cartesian stiffness matrix in multibody system dynamics, i. The material properties of the base state will be used. To overcome this. Force as a function of the displacement at point 1 when varying the spring stiffness. 6: Analysisof Laminated Composites Thetransverse properties of unidirectionalcomposites Stackingof plies withdifferent angles for tailoring (stiffness, thermal stability) are unsatisfactory for most practicalapplications. Structural Analysis IV Chapter 4 - Matrix Stiffness Method 3 Dr. The method is then known as the direct stiffness method. 4 Member Stiffness MatrixThe structure stiffness matrix ½K is assembled on the basis of theequilibrium and compatibility conditions between the members. Appendix O: THE ORIGINS OF THE FINITE ELEMENT METHOD • In his studies leading to the creation of variational calculus, Euler divided the interval of definition of a one-dimensional functional intofinite intervals and assumed a linear variation over each, defined by end values [434, p. When you multiply a matrix by it's inverse, the result is the 'identity matrix' - another matrix of the same size as the first two. 
The advantages of using the dynamic stiffness matrix approach in conjunction with discretization schemes based on frequency dependent shape functions, are discussed.
https://www.nature.com/articles/s41598-020-67633-y
# Quantification of muco-obstructive lung disease variability in mice via laboratory X-ray velocimetry ## Abstract To effectively diagnose, monitor and treat respiratory disease clinicians should be able to accurately assess the spatial distribution of airflow across the fine structure of lung. This capability would enable any decline or improvement in health to be located and measured, allowing improved treatment options to be designed. Current lung function assessment methods have many limitations, including the inability to accurately localise the origin of global changes within the lung. However, X-ray velocimetry (XV) has recently been demonstrated to be a sophisticated and non-invasive lung function measurement tool that is able to display the full dynamics of airflow throughout the lung over the natural breathing cycle. In this study we present two developments in XV analysis. Firstly, we show the ability of laboratory-based XV to detect the patchy nature of cystic fibrosis (CF)-like disease in β-ENaC mice. Secondly, we present a technique for numerical quantification of CF-like disease in mice that can delineate between two major modes of disease symptoms. We propose this analytical model as a simple, easy-to-interpret approach, and one capable of being readily applied to large quantities of data generated in XV imaging. Together these advances show the power of XV for assessing local airflow changes. We propose that XV should be considered as a novel lung function measurement tool for lung therapeutics development in small animal models, for CF and for other muco-obstructive diseases.
## Introduction

Cystic fibrosis (CF) is a progressive, chronic and debilitating genetic disease caused by mutations in the CF Transmembrane-conductance Regulator (CFTR) gene. Unrelenting CF airway disease begins early in infancy and produces a steady deterioration in quality of life, ultimately leading to premature death. Effective lung health assessment tools must capture the patchy nature of muco-obstructive lung diseases such as cystic fibrosis, and this is particularly important during the early stages of the disease when local treatments could be applied to prevent disease progression. Assessments of overall lung health in humans and animals are typically made using lung function tests that screen for abnormalities by measuring the flow of gas in the airways. Spirometry is the most common tool for assessing lung function, however it measures global airflow at the mouth. This means that parameters such as FEV1 are a single, global measure of the health of the entire lung, are effort-dependent, and provide no information about the location of muco-obstructive disease. Multiple breath wash-in or wash-out techniques that measure the lung-clearance index (LCI) are sensitive to some changes in CF disease as they reflect the health of both the small and large airways1, 2, but can only provide limited regional airflow information. LCI is rarely reported in laboratory animal studies. The forced oscillation technique (FOT) measures the resistive properties of the respiratory system and has been used in obstructive lung disease assessment, although the results can be difficult to interpret. Furthermore it suffers from the same problems as FEV1 with respect to lumping regional information into single whole-lung parameters3. These kinds of techniques that are used clinically can also be applied in animals using current gold-standard tools such as the flexiVent (Scireq, Canada), to measure some of the effects of pharmaceutical and genetic therapeutics in animal models4.
Regardless of the measurements that can be provided by these common measures of lung function, they cannot accurately localise the changes in airflow that are caused by structural abnormalities within the lung. To identify structural lung disease, image-based assessments such as CT and MRI are commonly used in humans and animal models. CT provides excellent structural information, and can therefore be used to monitor for structural abnormalities from disease progression, or morphological changes produced by new drug therapies. However, preventative treatment programs should ideally be able to intervene before disease establishes and progresses to the point at which it produces structural changes. Similarly, therapeutics assessments should be able to detect early functional changes4. MRI has the advantage of being able to simultaneously observe both lung function and structure without exposing the subject to ionising radiation. However, progress in MRI research has lagged behind x-ray-based methods, most likely because spatial resolution is poor and the properties of the lung—particularly the low proton density, present since the lung is comprised primarily of air—make it less appropriate for MRI5. Nonetheless, recent innovations such as hyperpolarised gas and ultrashort echo time imaging continue to advance human chest MRI research6, 7. Methods that attempt to infer lung function from CT and MRI images have previously been reported in humans and animals. Spirometry-guided CT for volume control during imaging has potential benefits but has not been implemented on a large scale and requires extensive patient training to administer5. Scoring systems that use software analyses to validate outcome measures from chest CTs have been developed, including the PRAGMA-CF assessment protocol for monitoring early-stage CF in children8. 
These methods must still address the repeatability and standardization of results, as well as the restrictions associated with radiation dose from repeated chest CTs9. Other developments that show promise include registration-based techniques for measuring lung aeration10,11,12, and the use of contrast agents to render lung content directly visible13, 14. Tracking X-ray microscopy (TrXM) takes advantage of a sophisticated synchrotron-based X-ray imaging system to directly image mouse alveoli during respiration15. Despite the availability of techniques that assess either lung function or lung structure, none of these are able to simultaneously identify the origin of changes in function, and evaluate their heterogeneity. Abnormal lung motion during breathing has been demonstrated to be an indicator of disease16. Our previous research has developed a method that can rapidly capture the motion of the natural breathing cycle at a high spatial and temporal resolution, without the use of a contrast agent. To do this, propagation-based phase-contrast X-ray imaging (PCXI) was utilised. Since PCXI does not rely solely on absorption of X-rays by matter—but rather the diffraction of rays at material interfaces enhanced by the propagation of X-rays through free space—it can reduce the radiation dose and health risk associated with conventional chest CT scans17,18,19. PCXI can be combined with tomography to create detailed three-dimensional reconstructions of the fine structures in the lungs20,21,22,23. Using multiple PCXI images acquired throughout the respiratory cycle, Fouras et al.16 applied particle-image velocimetry to determine the speed and direction of lung motion in three dimensions throughout the respiratory cycle. The resulting regional maps of lung tissue motion can be used to detect subtle and non-uniform lung disease24,25,26. This high-speed PCXI acquisition and post-processing analysis is termed X-ray velocimetry (XV).
The key difference between standard structural imaging modalities and XV is that XV assesses the dynamics of the lung tissue movement throughout the breath in order to extract measures of tissue expansion. The result is a detailed ventilation map of the lung, which non-invasively enables the volume of air that flows through each branch of the lung tree to be calculated21. The potential value of XV for inferring lung function is shown by the characterisation of CF-like lung disease in small laboratory animals at high resolution using a synchrotron-based X-ray source24, where the spatial and temporal variability in airflow throughout the lung was assessed. Recently, Murrie et al. reported the proof-of-principle translation of XV to a laboratory-based source25. They showed that despite a loss of spatial and temporal coherence—which is inevitable when moving from a high-brightness synchrotron to a compact light source—XV data can still be extracted from the resulting images. In the present study, the same laboratory-based X-ray source was used to perform XV on a cohort of β-ENaC mice, a model of CF-like lung disease26. This data was then used to develop novel numerical methods that delineate symptoms of this disease. The success of XV lies in its ability to draw reliable and meaningful quantitative measures, and this study shows how this can be accomplished. In the future these techniques can be expected to be applied to the numerical characterization of CF lung disease in larger cohorts and other CF animal models. These methods allow analyses to be applied in a straightforward fashion and with minimal manual processing, to enable ongoing study and development of the treatment of CF and other respiratory diseases.

## Results and analysis

Here methods for extracting quantitative measures for lung health from the XV data are presented. These can be readily applied to large datasets with minimal manual intervention.
### Regional distribution of lung function

Figure 1 maps the regional expansion of the lungs at the peak of the breath for both a β-ENaC mouse and its healthy littermate, as measured by XV. The expansion of each region of interest (ROI), defined by the XV voxel, is given as a fractional increase over the course of the breath, i.e. (change in volume of ROI)/(volume of ROI). The resulting measurement, fractional expansion, is a unitless quantity. The XV expansion data shown in Fig. 1 clearly allows the location of the airflow deficits to be determined within this plane inside the lung (see red arrow in panel 1a). In order to quantify differences throughout the entire lung volume, methods of calculating the distribution of tissue expansion have been developed here. An example of this approach is shown in Fig. 2a, which shows a histogram calculated from the fractional tissue displacement across the volume of the lung for each mouse shown in Fig. 1. The measurements for the β-ENaC mouse (from Fig. 1a) are shown in red and its healthy littermate (from Fig. 1b) in blue. The interquartile ranges (IQR) of each histogram are indicated in the graph. To adjust for variations in lung size across the cohort, the area under each histogram has been normalised to 1. In a homogeneously ventilated lung, the range of values for fractional tissue displacement should be narrow. In circumstances where there is heterogeneity due to a ‘patchy’ disease such as cystic fibrosis, a wider range of values (and therefore a higher IQR) is expected due to varied areas of poor ventilation and air trapping from mucus obstruction, as shown in Fig. 2a. Also apparent in the histogram from this particular β-ENaC mouse is the bimodal peak, where the lower peak represents regions of poor lung health (airflow), as indicated by the red arrow in Fig. 1a. This is likely to be caused by the presence of mucus obstruction which is a feature of this animal model27, 28.
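As an illustration, the area-normalised expansion histogram and its IQR described above can be computed along the following lines (a minimal NumPy sketch on synthetic data; the function names are ours, not part of the XV software):

```python
import numpy as np

def expansion_histogram(expansion, bins=50):
    """Area-normalised histogram of per-voxel fractional expansion."""
    density, edges = np.histogram(expansion, bins=bins, density=True)
    return density, edges  # with density=True the histogram integrates to 1

def interquartile_range(expansion):
    """IQR of the fractional-expansion distribution (75th - 25th percentile)."""
    q1, q3 = np.percentile(expansion, [25, 75])
    return q3 - q1

# Synthetic example: a homogeneously ventilated "healthy" lung has a narrow
# spread of expansion values; a heterogeneous "diseased" lung has a wider one.
rng = np.random.default_rng(0)
healthy = rng.normal(0.20, 0.02, 10_000)
diseased = rng.normal(0.20, 0.05, 10_000)
assert interquartile_range(diseased) > interquartile_range(healthy)
```

Using `density=True` makes the area under each histogram equal to 1, matching the normalisation applied in Fig. 2a to adjust for lung-size differences across the cohort.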
Our previous synchrotron-based study24 used histological sections to confirm that areas of reduced ventilation as measured by XV analysis corresponded to mucus blockages in the bronchial tree24. Concurrently, the global (total) expansion of the lung at each point in the breath was calculated in order to demonstrate the superiority of XV in its ability to produce information about the spatial distribution of lung function at any point in the breath, rather than a single global measure of lung function averaged across the entire breath. The global expiratory time constant (τ), defined previously24, is calculated as the time taken for 67% ($$\sim 1/\sqrt{2}$$) of the air to be expired from the lungs. The volume in Fig. 2b is calculated from the total magnitude of the 3D tissue displacement vectors over the entire lung for each point in time, and is normalised according to the total lung volume, which was calculated by evaluating the volume of the mask used for tissue segmentation (see “Image processing”). The purpose of this normalisation is to be able to express the volume of air breathed as a fraction of the total lung volume. In this example, the β-ENaC mouse has a fractional tidal volume that is lower than its healthy counterpart due to poorer expansion in some parts of the lung. Note that the measurements in Fig. 2b are expressions of the average health across the whole lung, without reference to the local distribution across the lung, and are analogous to measures such as FEV1. In contrast, the fractional expansion histogram (Fig. 2a) contains many spatially-separated measurements for the lung at the peak of the breath and is designed to visualise the airflow heterogeneity in the presence of this muco-obstructive disease.

### Quantifying distribution of disease

As with human CF lung disease, there can be substantial variability in the severity and location of muco-obstructive disease between individual β-ENaC mice.
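For instance, given a normalised expired-volume curve over the expiration phase, the 67% crossing that defines τ can be located by interpolation. The sketch below is illustrative only (synthetic single-exponential emptying, not the authors' implementation):

```python
import numpy as np

def expiratory_time_constant(t, expired_fraction, threshold=0.67):
    """Time at which `threshold` of the tidal volume has been expired,
    found by linear interpolation of the expired-volume curve."""
    return float(np.interp(threshold, expired_fraction, t))

# Synthetic exponential emptying: V_expired(t)/V_T = 1 - exp(-t / tau_lung)
tau_lung = 0.1                      # illustrative value, in seconds
t = np.linspace(0.0, 0.35, 200)     # a 0.35 s expiration window
expired = 1.0 - np.exp(-t / tau_lung)
tau = expiratory_time_constant(t, expired)
# For a single exponential the 67% crossing sits at -tau_lung * ln(0.33)
assert abs(tau - (-tau_lung * np.log(1.0 - 0.67))) < 1e-3
```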
Various presentations of CF-like disease have been categorised using the expansion histograms generated from the XV tissue-displacement calculations, by assigning numerical quantities to characterise their shape. In the implementation presented here a simple least-squares fit for a Gaussian distribution was adopted, and the statistical moments of the fitted curve have been used to numerically characterise the profile of the histogram and relate them to symptoms of disease. Figure 3 shows the histogram of nine different animals each with a score for two properties that describe how lung airflow is distributed throughout the lung. The measured local expansion across the lung, which represents functional changes, may collect around certain values (clustered) or vary widely (heterogeneous), thus each mouse is scored for the presence of heterogeneous disease (HD) and clustered disease (CD), as defined below. All of the plots show the histogram of the raw data in a grey unbroken line, with the black broken line showing the nonlinear regression line-of-best-fit calculated by applying a least squares approximation to a double-Gaussian curve. The goodness-of-fit is shown on each figure as R2. For this experiment, three categories of histogram profiles are seen:

• Healthy: The healthy animal shown in Fig. 2a (blue histogram) shows a typical tall and narrow expansion histogram from a healthy mouse lung, showing expansion data that falls into a narrow range and which represents homogeneous expansion. Figure 3a–c show the expansion histograms from three healthy mice, showing the characteristic tall and narrow peak.

• Heterogeneous disease (HD): When CF-like disease is established across the lung, we expect XV to demonstrate characteristic heterogeneous lung function, with the disease presenting across the volume of the lung.
In lieu of a tall and narrow peak where most of the lung expands evenly (by the same percentage), we expect a low and wide peak, with a larger range of values as some parts of the lung expand less than other parts. Each sample receives a score for patchiness, calculated using the term $$IQR/\overline{IQR_{L}}$$, where $$IQR$$ is the interquartile range of the histogram and $$\overline{IQR_{L}}$$ is the average interquartile range for the littermate population. A healthy lung will have a value of close to 1 (see Fig. 3a–c), with the score increasing as lung health becomes more heterogeneous. Figure 3d–f show profiles for heterogeneous disease, each with a low and wide peak and higher HD values. Figure 3f has a heterogeneity level of 1.78, or rather 178% of healthy lung function variation.

• Clustered disease (CD): If airways are partially obstructed with mucus, poor ventilation and air trapping in the regions of the lung that receive air via those obstructed airways may result. In the end stages of this disease this may result in permanent damage to those regions due to atelectasis or bacterial infections that have been brought on by the presence of mucus. Where there is mucus preventing ventilation to a region of the lung, this region is less healthy than the rest of the lung, resulting in a clustered disease presentation. In the histogram of the expansion data, this is typically expressed as bimodality, or a split peak (as there could be two or more distinct regions). The diseased mouse in Fig. 2a shows such a peak. In order to determine whether or not the histogram of some expansion data possessed a second peak and how distinct the split between the peaks was, in this implementation a double-Gaussian distribution was consistently fit to each histogram.
It was then possible to provide a score for bimodality using the term (μ2 − μ1)/μ2, where μ1 and μ2 are the values of the two modes, or rather the mean values of each peak in the double Gaussian distribution. Note that a double-Gaussian function was fitted to the data not because we assumed the data followed a normal distribution, but because the characteristics of a Gaussian distribution (smooth, tends to zero at $$\pm \infty$$) made it a suitable basis function, conveniently applied to our data to readily extract functional measures. Low levels of mucus plugging of the large airways are indicated by low CD quantities in Fig. 3a–c. Figure 3f shows levels of clustered disease approaching 0.34, where a second peak is beginning to separate itself from the central mode. Figure 3g–i show expansion histograms with distinct CD presentation. In Fig. 3h, the algorithm has failed to pick up on the peak that is indicated by the red arrow. This is likely because there are three different regions, not two. The middle region (blue arrow) is not well-separated from the central mode. While the label ‘clustered’ strictly refers to a grouping of expansion values within the histogram, it is typical that this is associated with a spatial grouping of low-expansion pixels within the lung image. Figure 4 shows coronal slices through the 3D expansion volumes that correspond to the histograms in Fig. 3. In Fig. 5 the HD and CD values at the peak of the breath are plotted against the tissue hysteresivity (measured by FOT). In Fig. 5a, XV measurements that suffer from excessive “heart blur” (shown in black—this imaging artefact is described in the section below) are separated from the rest of the data points, which are themselves divided into β-ENaC (red) mice and their healthy littermates (blue).
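The HD and CD scoring just described can be sketched numerically as follows (an illustrative NumPy/SciPy sketch on synthetic data; the bin count, initial-guess heuristics and function names are our assumptions, not the published implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian basis functions."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

def hd_score(expansion, littermate_iqrs):
    """Heterogeneous disease: sample IQR over the mean littermate IQR."""
    q1, q3 = np.percentile(expansion, [25, 75])
    return (q3 - q1) / np.mean(littermate_iqrs)

def cd_score(expansion, bins=60):
    """Clustered disease: (mu2 - mu1) / mu2 from a double-Gaussian fit."""
    density, edges = np.histogram(expansion, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Seed the two modes from low/high percentiles of the data
    p0 = [density.max(), np.percentile(expansion, 10), 0.02,
          density.max(), np.percentile(expansion, 90), 0.02]
    params, _ = curve_fit(double_gaussian, centres, density, p0=p0,
                          maxfev=10_000)
    mu1, mu2 = sorted([params[1], params[4]])
    return (mu2 - mu1) / mu2

rng = np.random.default_rng(1)
# Bimodal "clustered" lung: a poorly expanding cluster plus a healthy mode
clustered = np.concatenate([rng.normal(0.08, 0.015, 3_000),
                            rng.normal(0.20, 0.020, 7_000)])
cd = cd_score(clustered)   # the true modes give (0.20 - 0.08) / 0.20 = 0.6
assert 0.4 < cd < 0.8
```

In this sketch a healthy sample scored against its own littermate group returns an HD value near 1, mirroring the behaviour described for Fig. 3a–c.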
The blue data points consistently present with an HD index ~ 1, a homogeneous lung function presentation, while the red data points show greater variation, which is consistent with the range of disease presentation as exemplified earlier in the “Results and analysis” section. It is not expected that the full complexity of respiratory disease can be captured with a single measurement; nonetheless this simple plot shows a correlation with FOT measurements. Figure 5b further shows the large range of disease presentations that are seen within the group of β-ENaC mice, in this case characterised using the cluster disease index (CD). More variation is expected amongst samples with CF-like disease, due, for example, to the variability in symptoms discussed in “Quantifying distribution of disease”.

### Heart blur

As described in “Methods” the image acquisition was coordinated with ventilation. The heart, however, beats independently of image acquisition, resulting in some lung motion blur. Due to the spatial and temporal resolution of the laboratory-based source, along with the small exposure times required to acquire XV images, this blurring could cause local failure of the XV algorithm when it attempts to faithfully capture speckle motion. Figure 6 shows examples from four separate animals. Dubsky et al.29 used XV technology to show the manner in which cardiogenic oscillations affect airflow around the lung, by calculating the resulting tissue displacement. Their work showed that significant lung movement due to cardiac excursions is seen in the lower left regions of the lungs. This was also seen in our data, with the lower left region of the lungs (red arrows, Fig. 6) blurred in the CT image, preventing accurate measurement of tissue displacement using XV analysis. This resulted in unrealistic HD and CD values. A single CT slice from the first time point of the breath for mouse M11 above reveals this heart-motion associated blurriness (Fig. 7, red arrows).
For comparison, the yellow arrows show a well-defined lung edge, situated away from the heart region. The methods for detecting heart blur issues numerically are presented below in the “Discussion”.

## Discussion

This study successfully shows that XV—when performed on a laboratory source25—can capture and differentiate a range of lung disease presentations seen in β-ENaC mice via the novel quantification methods described here. Currently, clinically feasible methods for the quantification of regional lung airflows and heterogeneity are sparse, but this study shows that it is possible to measure the dynamics of muco-obstructive disease presentations in large groups of animals using XV without the need for a synchrotron X-ray source. A significant outcome of this work is to provide the proof-of-principle for a straightforward means of numerically evaluating small animal lung health in the presence of symptoms of CF-like disease. It has been shown that the histogram for regional tissue displacement enabled by XV analysis provides information about the variability of airflow within the lung. Using the model described here, consisting of a score for both clustered disease (CD) and patchy/heterogeneous disease (HD), an overall presentation of CF-like disease can be quantified numerically. Combinations of these symptoms can account for a more rigorous approach to lung function evaluation than other techniques (e.g. a traditional lung function breath measurement such as spirometry, which measures breathing at the mouth and so averages airflow effects over the whole lung). When compared to FOT measurements for tissue hysteresivity, the HD score shows similarly low values for healthy animals. For diseased animals, there is greater variation—a complexity of presentation that cannot be captured by a single measurement.
Future studies should assess the accuracy of this scoring system in separating disease states in larger cohorts, by using Machine Learning analysis of clustering and association of data points (as with30). Since the HD score is normalised to the mean variance of the healthy population, a new group of animals would have different baseline characteristics. Larger sample sizes would also enable us to use a Deep Learning model (see31) to determine the thresholds for the HD and CD which correspond to muco-obstructive disease at its various stages. Future investigations can also determine the optimal combination of Gaussian curves which provides the best measure of CD. The use of more powerful statistical techniques such as functional data analysis for describing the shape of the histogram32 should also be investigated, along with an examination of how other respiratory conditions present using these numerical XV characterisations. The challenge of processing the volume of data required to develop the model presented here will be eased by the inclusion of algorithms that can independently search data to detect symptoms of disease, without the need for user input. A number of image analysis algorithms are in development in order to accomplish this. For example, to orient each sample identically, we have implemented a mirrored symmetry approach on a CT image dataset that has been projected in the cranial/caudal direction to find the position of the spine, as shown in Fig. 8. The symmetry of the image is evaluated by comparing the features of the image with those of its reflection33, 34. Ultimately, the image is rotated to a position whereby the spine lies at the bottom of the image, to facilitate automated cropping.
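A minimal version of such a mirror-symmetry search might look like the following (an illustrative SciPy sketch; the angle grid, the mean-squared-difference similarity metric and the synthetic phantom are our assumptions, not the implementation behind Fig. 8):

```python
import numpy as np
from scipy.ndimage import rotate

def symmetry_score(image):
    """Similarity between an image and its left-right reflection
    (higher is more symmetric; negative mean-squared difference)."""
    mirrored = image[:, ::-1]
    return -np.mean((image - mirrored) ** 2)

def best_symmetry_angle(image, angles=range(-20, 21)):
    """Rotation angle (degrees) that maximises mirror symmetry."""
    scores = {a: symmetry_score(rotate(image, a, reshape=False, order=1))
              for a in angles}
    return max(scores, key=scores.get)

# Synthetic projection: a left-right symmetric 'torso' tilted off-axis
# by 8 degrees; the search should recover the correcting rotation.
y, x = np.mgrid[-64:64, -64:64]
torso = np.exp(-(x**2 / 900 + y**2 / 400))
tilted = rotate(torso, -8, reshape=False, order=1)
assert abs(best_symmetry_angle(tilted) - 8) <= 3
```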
Correcting orientation provides the means to associate data points with their physical location across the lung, allowing, for example, automatic identification of heart blur due to characteristic proximity to the heart29. Manual segmentation of the conducting airways or lung lobes, a preliminary step to XV, is time-consuming. Although providing impressive visualisation of lung function and highly accurate localisation of obstruction results, the inclusion of a manual step is not practical for large datasets. As a result, elements in image processing have been established that are designed to automatically segment the lungs from surrounding tissue, using thresholding, 3D morphological filters and continuity checks between slices. This approach will allow for the analysis of XV datasets to move from a new technique requiring some analysis training and effort into a highly accessible technique that can routinely evaluate large sets of data. To address the challenge of heart blur, Lovric et al.35 have implemented a heartbeat-triggering gating technique into their image acquisition, although this is likely to be particularly challenging at the very high frame rates required for acquiring XV images at high ventilation rates. While experimental parameters have been optimised with the set-up for this experiment36, using a smaller spot size and higher power for the X-ray source, or a more sensitive detector, can increase the phase signal and reduce the amount of noise and blurriness in the X-ray images to improve our capabilities. Translating XV to a laboratory-based X-ray source creates challenges associated with lower spatial resolution than what is available with a synchrotron-based source. However, the use of magnification at the laboratory X-ray source enables the use of large, highly-efficient pixels that can reduce the associated radiation dose, which is a key step on the translational path to the clinic. 
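The threshold-plus-morphology segmentation idea mentioned above can be sketched on a single synthetic slice as follows (illustrative intensity scale and threshold only; the real pipeline also uses 3D morphological filters and slice-to-slice continuity checks):

```python
import numpy as np
from scipy import ndimage

def segment_lungs(ct_slice, air_threshold=-320):
    """Sketch of threshold-based lung masking on one CT-like slice."""
    air = ct_slice < air_threshold                 # air-like voxels
    # Remove air connected to the image border (outside the body)
    labels, _ = ndimage.label(air)
    border_labels = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                              labels[:, 0], labels[:, -1]]))
    lungs = air & ~np.isin(labels, border_labels)
    # Morphological closing fills small vessel/noise gaps in the lung field
    return ndimage.binary_closing(lungs, structure=np.ones((3, 3)))

# Synthetic slice: soft-tissue body (0) surrounded by air (-1000),
# with two air-filled lung fields (-800) inside it.
slice_ = np.zeros((64, 64)) - 1000
slice_[8:56, 8:56] = 0
slice_[20:44, 14:28] = -800     # left lung field
slice_[20:44, 36:50] = -800     # right lung field
mask = segment_lungs(slice_)
assert mask[30, 20] and mask[30, 42] and not mask[4, 4]
```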
Rather than imaging the fine structures of the lungs at the very high resolution (and hence high dose) required to directly observe the presence of disease, disease can be inferred by analysing the expansion maps produced by XV. The subject can consequently be exposed to lower amounts of ionising radiation than would be required if these blockages were to be resolved directly. Techniques such as FOT and FEV1 provide single measures of lung function that are difficult to interpret alone, such that lung CT is often required to identify the structural abnormalities that might be the source of the change in lung function. The XV analysis technique presented here provides a more regional analysis of function across the lung and throughout the breath, and uniquely provides the researcher with the locations of airflow dysfunction. Thus, XV has the potential to become a routine diagnostic tool to measure and monitor animal models, and ultimately humans, for improvements or declines in lung health. To translate XV to human use we have tomosynthesis experiments, large animal studies, and human clinical trials underway. Finally, XV will have applications beyond CF lung disease, and have value in other respiratory diseases such as asthma, COPD, emphysema and lung cancer, and in the development and assessment of respiratory therapeutics.

## Conclusions

Here, laboratory-based XV has been applied to the evaluation of lung disease heterogeneity in a group of β-ENaC mice and their healthy littermates. We also present a novel, straightforward and intuitive method for quantifying the distribution of their muco-obstructive disease. Future automated approaches will allow the application of this model to large sets of data in order to observe lung function changes during treatment, to develop a robust numerical model for CF lung disease.
The combination of X-ray velocimetry and progressive automation of the data analysis is an important step in the development of more sophisticated methods of lung function testing, and should assist research internationally to improve the health and lives of people with cystic fibrosis, and a range of other lung diseases.

## Methods

### Image acquisition

All images were acquired at the Laboratory for Dynamic Imaging at Monash University on a propagation-based PCXI set-up shown in Fig. 9, with the X-ray beam (Excillum D2+, Excillum AB, Kista, Sweden) produced by an electron beam striking a liquid–metal anode. A high power (265 W) was used over a small source (spot size: 60 μm × 15 μm) to generate the flux needed to achieve an imaging rate of 30 frames per second, and coherence sufficient to generate the phase contrast necessary across the lung volume37. With a conventional solid-metal anode, one would have to take care not to overheat the target while attempting to generate a higher flux. However, by using a liquid–metal-jet anode—pumped under high pressure to maintain a laminar flow—there was no concern of approaching the limits of heating the metal target in order to extract sufficiently high flux. The source-to-detector distance was fixed at 3,363 mm, with a maximum source-to-sample distance of 467 mm. The translation stage enabled the mice to be moved toward and away from the source to alter the zoom factor as required. To produce phase contrast in the resulting images, a minimum propagation (sample to detector) distance of 2,896 mm (through the ~ 30 cm diameter vacuum tube) was used. To minimise scattering and avoid an associated reduction in image contrast, the x-rays were propagated through a vacuum tube before reaching the detector.
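For a cone-beam set-up like this, the geometric magnification follows directly from the quoted distances (M = source-to-detector distance / source-to-sample distance; at the maximum source-to-sample distance of 467 mm this gives the minimum zoom):

```python
def geometric_magnification(source_to_detector_mm, source_to_sample_mm):
    """Cone-beam geometric magnification M = R_sd / R_so."""
    return source_to_detector_mm / source_to_sample_mm

# Distances quoted above: 3,363 mm source-to-detector and up to 467 mm
# source-to-sample, giving a minimum magnification of roughly 7.2x.
M = geometric_magnification(3363, 467)
assert 7.1 < M < 7.3
```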
### Animal experiments

All experiments were approved by the Monash University Animal Ethics Committee and conformed to the guidelines set out in the NHMRC Australian Code of Practice for the Care and Use of Animals for Scientific Purposes. β-ENaC mice (n = 15), aged 45–84 days (median = 62 days) at the time of imaging, were used for all experiments26. Mice were bred on a C57Bl/6N background, and supplied from our specific pathogen-free breeding colony (Monash Animal Research Platform). Littermate controls (n = 10) were used to minimise the effects of strain on the lung phenotype. Offspring were genotyped at 3 weeks of age via PCR of genomic DNA as previously described26. Mice were anaesthetised with an intraperitoneal (i.p.) injection of a 10 μl/g body weight mixture of medetomidine (0.1 mg/ml, Orion Corporation, Finland) and ketamine (7.6 mg/ml, Parnell Laboratories, Australia), and surgically intubated. The endotracheal tube was attached to a custom-built small animal pressure-controlled ventilator (AcuVent, Notting Hill Devices, Australia) at 12 cmH2O PIP and 2 cmH2O PEEP with a respiratory rate of 120 breaths per minute (inspiration time of 0.15 s and an expiration time of 0.35 s). To maintain the normal dynamics of lung function, a paralytic was not used. Mice were mounted in a vertical position in front of the source on a custom high-precision rotation stage (Zaber Technologies, Vancouver, Canada). Mice were rotated through 360 degrees at 1.5 degrees per second while a flat-panel detector (PaxScan, Varian Medical Systems, Palo Alto, CA, USA) captured images at the rate of 30 Hz to acquire a total of 7,200 images per mouse. Image acquisition was triggered by the ventilator, and was gated to collect 15 images throughout the breathing cycle. Airway pressure and flow were monitored throughout the experiments. At the completion of the imaging experiments, global lung mechanics were measured using a modification of the forced oscillation technique (FOT).
Mice were hyperventilated at 400 breaths per minute for 60 s to induce brief (6 s) periods of apnea. During apnea, an oscillatory signal, generated by a loudspeaker, containing 9 frequencies ranging from 4–38 Hz was introduced into the tracheal cannula via a wavetube of known impedance. The impedance of the respiratory system (Zrs) was calculated. A four-parameter model with constant phase tissue impedance38 was fit to the Zrs spectrum, allowing determination of tissue hysteresivity, which is calculated as the ratio of the tissue damping to tissue elastance39.

### Image processing

To complete the XV cross-correlation analysis, the 7,200 projections were organised, or binned, into their time points resulting in 400 projections per time point, with a total of 15 time points across the 500 ms breathing period. Computed tomographic reconstruction was performed for each set of projections giving 15 separate CT reconstructions, one for each of the 15 stages of the breath. Each CT consisted of 1,024 slices, each 1,024 pixels by 1,024 pixels. The effective voxel size varied from mouse-to-mouse. Using Avizo software (ThermoFisher Scientific), the CT volume representing the beginning of the breath was used to create a mask for isolating the lung tissue from the rest of the animal. The volume of the mask was also used to determine the total voxel size of the lung, which, after accounting for variation in effective voxel size, was used to normalise the volumetric results. For XV, an interrogation region size of 64 × 64 × 64 pixels with an overlap of 50% between successive interrogation windows was used, producing an XV voxel size of 32 × 32 × 32 pixels. The XV output showed the magnitude and direction of the lung tissue motion vectors between each time point. This displacement of tissue denotes lung expansion, and was expressed in voxels per frame. The mainstem bronchi were removed from the expansion map images to improve clarity.
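The binning of ventilator-gated projections into breath phases can be illustrated schematically. The sketch below models the gating as a simple modulo mapping of frame index to breath phase; this mapping is our assumption for illustration, not necessarily how the acquisition software labels frames:

```python
import numpy as np

def phase_bin(projection_indices, points_per_breath=15):
    """Map each ventilator-gated projection index to its phase bin
    within the breath, assuming frames step through the breath phases
    in order (illustrative mapping only)."""
    return np.asarray(projection_indices) % points_per_breath

bins = phase_bin(np.arange(7_200))
counts = np.bincount(bins, minlength=15)
# Each phase bin receives an equal share of projections, so one CT can
# be reconstructed per stage of the breath.
assert len(counts) == 15 and bool(np.all(counts == counts[0]))
```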
## Data availability

The data that support the findings of this study are available on reasonable request from the corresponding authors. The XV analysis code that supports the findings in this study is not publicly available due to patent restrictions. Code may, however, be available from the authors upon reasonable request and with permission of Monash University and 4Dx Limited.

## References

1. O’Neill, K. et al. Lung clearance index in adults and children with Cystic Fibrosis. Chest 150, 1323–1332. https://doi.org/10.1016/j.chest.2016.06.029 (2016).
2. Mulligan, M., Collins, L., Dunne, C. P., Keane, L. & Linnane, B. Determination of the lung clearance index (LCI) in a paediatric Cystic Fibrosis cohort. Ir. Med. J. 110, 629 (2017).
3. Shirai, T. & Kurosawa, H. Clinical application of the forced oscillation technique. Intern. Med. 55, 559–566. https://doi.org/10.2169/internalmedicine.55.5876 (2016).
4. Donnelley, M. & Parsons, D. W. Gene therapy for Cystic Fibrosis lung disease: Overcoming the barriers to translation to the clinic. Front. Pharmacol. 9, 1381. https://doi.org/10.3389/fphar.2018.01381 (2018).
5. Tiddens, H. A. W. M., Kuo, W., van Straten, M. & Ciet, P. Pediatric lung imaging: the times they are a-changin’. Eur. Respir. Rev. 27, 170097. https://doi.org/10.1183/16000617.0097-2017 (2018).
6. Santyr, G. et al. Hyperpolarized gas magnetic resonance imaging of pediatric cystic fibrosis lung disease. Acad. Radiol. 26, 344–354. https://doi.org/10.1016/j.acra.2018.04.024 (2018).
7. Torres, L. et al. Structure-function imaging of lung disease using ultrashort echo time MRI. Acad. Radiol. 26, 431–441. https://doi.org/10.1016/j.acra.2018.12.007 (2019).
8. Rosenow, T. et al. PRAGMA-CF: a quantitative structural lung disease computed tomography outcome in young children with Cystic Fibrosis. Am. J. Respir. Crit. Care Med. 191, 1158–1165. https://doi.org/10.1164/rccm.201501-0061OC (2015).
9. Calder, A. D., Bush, A., Brody, A. S.
& Owens, C. M. Scoring of chest CT in children with cystic fibrosis. Pediatr. Radiol. 44, 1496–1506. https://doi.org/10.1007/s00247-013-2867-y (2014).
10. Kumar, H. et al. Multiscale imaging and registration-driven model for pulmonary acinar mechanics in the mouse. J. Appl. Physiol. 114, 971–978. https://doi.org/10.1152/japplphysiol.01136.2012 (2013).
11. Perchiazzi, G. et al. Regional distribution of lung compliance by image analysis of computed tomograms. Respir. Physiol. Neurobiol. 201, 60–70. https://doi.org/10.1016/j.resp.2014.07.001 (2014).
12. Broche, L. et al. Dynamic mechanical interactions between neighboring airspaces determine cyclic opening and closure in injured lung. Crit. Care Med. 45, 687–694. https://doi.org/10.1097/CCM.0000000000002234 (2017).
13. Deman, P. et al. Respiratory-gated KES imaging of a rat model of acute lung injury at the Canadian Light Source. J. Synchrotron Radiat. 24, 679–685. https://doi.org/10.1107/S160057751700193X (2017).
14. Monfraix, S. et al. Quantitative measurement of regional lung gas volume by synchrotron radiation computed tomography. Phys. Med. Biol. 50, 1–11 (2005).
15. Chang, S. et al. Synchrotron X-ray imaging of pulmonary alveoli in respiration in live intact mice. Sci. Rep. 5, 8760. https://doi.org/10.1038/srep08760 (2015).
16. Fouras, A. et al. Altered lung motion is a sensitive indicator of regional lung disease. Ann. Biomed. Eng. 40, 1160–1169. https://doi.org/10.1007/s10439-011-0493-0 (2012).
17. Wilkins, S. W., Gureyev, T. E., Pogany, A. & Stevenson, A. W. Phase-contrast imaging using polychromatic hard X-rays. Nature 384, 335–338 (1996).
18. Lewis, R. A. et al. Dynamic imaging of the lungs using X-ray phase contrast. Phys. Med. Biol. 50, 5031. https://doi.org/10.1088/0031-9155/50/21/006 (2005).
19. Kitchen, M. J. et al. CT dose reduction factors in the thousands using X-ray phase contrast. Sci. Rep. 7, 15953. https://doi.org/10.1038/s41598-017-16264-x (2017).
20. Kitchen, M.
J. et al. Dynamic measures of regional lung air volume using phase contrast X-ray imaging. Phys. Med. Biol. 53, 6065–6077 (2008).
21. Dubsky, S., Hooper, S. B., Siu, K. K. W. & Fouras, A. Synchrotron-based dynamic computed tomography of tissue motion for regional lung function measurement. J. R. Soc. Interface 9, 2213–2224. https://doi.org/10.1098/rsif.2012.0116 (2012).
22. Beltran, M. et al. Interface-specific X-ray phase retrieval tomography of complex biological organs. Phys. Med. Biol. 56, 7353–7369. https://doi.org/10.1088/0031-9155/56/23/002 (2011).
23. Dullin, C., Larsson, E., Tromba, G., Markus, A. M. & Alves, F. Phase-contrast computed tomography for quantification of structural changes in lungs of asthma mouse models of different severity. J. Synchrotron Radiat. 22, 1106–1111. https://doi.org/10.1107/S1600577515006177 (2015).
24. Stahr, C. S. et al. Quantification of heterogeneity in lung disease with image-based pulmonary function testing. Sci. Rep. 6, 29438. https://doi.org/10.1038/srep29438 (2016).
25. Murrie, R. P. et al. Real-time in vivo imaging of regional lung function in a mouse model of cystic fibrosis on a laboratory X-ray source. Sci. Rep. 10, 447. https://doi.org/10.1038/s41598-019-57376-w (2020).
26. Mall, M., Grubb, B. R., Harkema, J. R., O’Neal, W. K. & Boucher, R. C. Increased airway epithelial Na+ absorption produces cystic fibrosis-like lung disease in mice. Nat. Med. 10, 487–493. https://doi.org/10.1038/nm1028 (2004).
27. Montgomery, S. T., Mall, M. A., Kicic, A. & Stick, S. M. Hypoxia and sterile inflammation in cystic fibrosis airways: mechanisms and potential therapies. Eur. Respir. J. 49, 1600903. https://doi.org/10.1183/13993003.00903-2016 (2017).
28. Salamone, I. et al. Bronchial tree-shaped mucous plug in cystic fibrosis: imaging-guided management. Respirol. Case Rep. 5, e00214. https://doi.org/10.1002/rcr2.214 (2017).
29. Dubsky, S. et al.
Cardiogenic airflow in the lung revealed using synchrotron-based dynamic lung imaging. Sci. Rep. 8, 4930. https://doi.org/10.1038/s41598-018-23193-w (2018).
30. Ünlü, R. & Xanthopoulos, P. Estimating the number of clusters in a dataset via consensus clustering. Expert Syst. Appl. 125, 33–39. https://doi.org/10.1016/j.eswa.2019.01.074 (2019).
31. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
32. Ramsay, J. O. & Silverman, B. W. Functional Data Analysis 2nd edn. (Springer, New York, 2005).
33. Lowe, D. G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110. https://doi.org/10.1023/B:VISI.0000029664.99615.94 (2004).
34. Loy, G. & Eklundh, J. O. Detecting symmetry and symmetric constellations of features. In European Conference on Computer Vision, 508–521 (Springer, Berlin, 2006). https://doi.org/10.1007/11744047_39.
35. Lovric, G. et al. Tomographic in vivo microscopy for the study of lung physiology at the alveolar level. Sci. Rep. 7, 12545. https://doi.org/10.1038/s41598-017-12886-3 (2017).
36. Murrie, R. P. et al. Feasibility study of propagation-based phase-contrast X-ray lung imaging on the Imaging and Medical beamline at the Australian Synchrotron. J. Synchrotron Radiat. 21, 430–445. https://doi.org/10.1107/S1600577513034681 (2014).
37. Murrie, R. P. et al. Phase contrast X-ray velocimetry of small animal lungs: optimising imaging rates. Biomed. Opt. Express 7, 79–92. https://doi.org/10.1364/BOE.7.000079 (2016).
38. Hantos, Z. et al. Input impedance and peripheral inhomogeneity of dog lungs. J. Appl. Physiol. 72, 168–178. https://doi.org/10.1152/jappl.1992.72.1.168 (1992).
39. Fredberg, J. J. & Stamenovic, D. On the imperfect elasticity of lung tissue. J. Appl. Physiol. 67, 2408–2419. https://doi.org/10.1152/jappl.1989.67.6.2408 (1989).
## Acknowledgements

This project was supported in part by the NHMRC (Projects GNT1079712 and GNT1055116), the Cystic Fibrosis Foundation (FOURAS1510), the Fay Fuller Foundation, and Gandel Philanthropy. KM was supported by a Future Fellowship (FT180100374) and a Veski VPRF, and completed this work with the support of the TUM Institute for Advanced Study, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under grant agreement no. 291763 and co-funded by the European Union. MD was supported by a Robinson Research Institute Career Development Fellowship. We thank Laura Clark for suggesting that we exploit the symmetry of the ribcage to determine dataset orientation. We also thank Dr Emma Knight for discussions about the Gaussian basis functions and the potential of functional data analysis. All authors discussed the results and commented on the manuscript.

## Author information

### Contributions

A.F. and S.D. designed the techniques. A.F., D.P., and M.D. designed the experiments. A.F., S.D., C.S., and R.C. wrote the analysis code. R.C., K.M., M.D., G.Z., and D.P. performed the experiments. R.M., C.S., F.W., and R.C. analysed the data. Y.H. wrote the automated cropping and alignment algorithm. F.W. wrote the paper. A.F., S.D., K.M., M.D., and D.P. provided overall guidance and supervision of the project.

### Corresponding author

Correspondence to Freda Werdiger.

## Ethics declarations

### Competing interests

AF, CS, RC, RM, SD, DP, and MD have beneficial interests in 4Dx Limited, a company commercialising respiratory diagnostics technology. AF, SD, and CS are listed on patents filed by Monash University and 4Dx Limited describing the lung imaging technology.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Cite this article

Werdiger, F., Donnelley, M., Dubsky, S. et al.
Quantification of muco-obstructive lung disease variability in mice via laboratory X-ray velocimetry. Sci Rep 10, 10859 (2020). https://doi.org/10.1038/s41598-020-67633-y
https://math.libretexts.org/Bookshelves/Algebra/Intermediate_Algebra_(OpenStax)/07%3A_Rational_Expressions_and_Functions/7.0E%3A_7.E%3A_Chapter_Exercise
# 7.E: Chapter 7 Review Exercises ## Simplify, Multiply, and Divide Rational Expressions ### Determine the Values for Which a Rational Expression is Undefined In the following exercises, determine the values for which the rational expression is undefined. 1. $$\dfrac{5 a+3}{3 a-2}$$ $$a \neq \dfrac{2}{3}$$ 2. $$\dfrac{b-7}{b^{2}-25}$$ 3. $$\dfrac{5 x^{2} y^{2}}{8 y}$$ $$y \neq 0$$ 4. $$\dfrac{x-3}{x^{2}-x-30}$$ ### Simplify Rational Expressions In the following exercises, simplify. 5. $$\dfrac{18}{24}$$ $$\dfrac{3}{4}$$ 6. $$\dfrac{9 m^{4}}{18 m n^{3}}$$ 7. $$\dfrac{x^{2}+7 x+12}{x^{2}+8 x+16}$$ $$\dfrac{x+3}{x+4}$$ 8. $$\dfrac{7 v-35}{25-v^{2}}$$ ### Multiply Rational Expressions In the following exercises, multiply. 9. $$\dfrac{5}{8} \cdot \dfrac{4}{15}$$ $$\dfrac{1}{6}$$ 10. $$\dfrac{3 x y^{2}}{8 y^{3}} \cdot \dfrac{16 y^{2}}{24 x}$$ 11. $$\dfrac{72 x-12 x^{2}}{8 x+32} \cdot \dfrac{x^{2}+10 x+24}{x^{2}-36}$$ $$\dfrac{-3 x}{2}$$ 12. $$\dfrac{6 y^{2}-2 y-10}{9-y^{2}} \cdot \dfrac{y^{2}-6 y+9}{6 y^{2}+29 y-20}$$ ### Divide Rational Expressions In the following exercises, divide. 13. $$\dfrac{x^{2}-4 x-12}{x^{2}+8 x+12} \div \dfrac{x^{2}-36}{3 x}$$ $$\dfrac{3 x}{(x+6)(x+6)}$$ 14. $$\dfrac{y^{2}-16}{4} \div \dfrac{y^{3}-64}{2 y^{2}+8 y+32}$$ 15. $$\dfrac{11+w}{w-9} \div \dfrac{121-w^{2}}{9-w}$$ $$\dfrac{1}{11+w}$$ 16. $$\dfrac{3 y^{2}-12 y-63}{4 y+3} \div\left(6 y^{2}-42 y\right)$$ 17. $$\dfrac{\dfrac{c^{2}-64}{3 c^{2}+26 c+16}}{\dfrac{c^{2}-4 c-32}{15 c+10}}$$ $$\dfrac{5}{c+4}$$ 18. $$\dfrac{8 a^{2}+16 a}{a-4} \cdot \dfrac{a^{2}+2 a-24}{a^{2}+7 a+10} \div \dfrac{2 a^{2}-6 a}{a+5}$$ ### Multiply and Divide Rational Functions 19. Find $$R(x)=f(x) \cdot g(x)$$ where $$f(x)=\dfrac{9 x^{2}+9 x}{x^{2}-3 x-4}$$ and $$g(x)=\dfrac{x^{2}-16}{3 x^{2}+12 x}$$. $$R(x)=3$$ 20. Find $$R(x)=\dfrac{f(x)}{g(x)}$$ where $$f(x)=\dfrac{27 x^{2}}{3 x-21}$$ and $$g(x)=\dfrac{9 x^{2}+54 x}{x^{2}-x-42}$$. 
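Simplified forms such as the \(R(x)=3\) of exercise 19 can be spot-checked numerically: the unsimplified product must agree with the simplified form at every admissible \(x\). A quick check (ours, not part of the original exercise set):

```python
# Exercise 19: f(x)*g(x) should simplify to the constant 3.
def f(x):
    return (9 * x**2 + 9 * x) / (x**2 - 3 * x - 4)

def g(x):
    return (x**2 - 16) / (3 * x**2 + 12 * x)

# Evaluate at a few points away from the excluded values (-4, -1, 0, 4).
for x in (2, 3, 10, -7):
    print(round(f(x) * g(x), 10))  # 3.0 each time
```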
## Add and Subtract Rational Expressions

### Add and Subtract Rational Expressions with a Common Denominator

In the following exercises, perform the indicated operations. 21. $$\dfrac{7}{15}+\dfrac{8}{15}$$ $$1$$ 22. $$\dfrac{4 a^{2}}{2 a-1}-\dfrac{1}{2 a-1}$$ 23. $$\dfrac{y^{2}+10 y}{y+5}+\dfrac{25}{y+5}$$ $$y+5$$ 24. $$\dfrac{7 x^{2}}{x^{2}-9}+\dfrac{21 x}{x^{2}-9}$$ 25. $$\dfrac{x^{2}}{x-7}-\dfrac{3 x+28}{x-7}$$ $$x+4$$ 26. $$\dfrac{y^{2}}{y+11}-\dfrac{121}{y+11}$$ 27. $$\dfrac{4 q^{2}-q+3}{q^{2}+6 q+5}-\dfrac{3 q^{2}-q-6}{q^{2}+6 q+5}$$ $$\dfrac{q-3}{q+5}$$ 28. $$\dfrac{5 t^{2}+4 t+3}{t^{2}-25}-\dfrac{4 t^{2}-8 t-32}{t^{2}-25}$$

### Add and Subtract Rational Expressions Whose Denominators Are Opposites

In the following exercises, add and subtract. 29. $$\dfrac{18 w}{6 w-1}+\dfrac{3 w-2}{1-6 w}$$ $$\dfrac{15 w+2}{6 w-1}$$ 30. $$\dfrac{a^{2}+3 a}{a^{2}-4}-\dfrac{3 a-8}{4-a^{2}}$$ 31. $$\dfrac{2 b^{2}+3 b-15}{b^{2}-49}-\dfrac{b^{2}+16 b-1}{49-b^{2}}$$ $$\dfrac{3 b-2}{b+7}$$ 32. $$\dfrac{8 y^{2}-10 y+7}{2 y-5}+\dfrac{2 y^{2}+7 y+2}{5-2 y}$$

### Find the Least Common Denominator of Rational Expressions

In the following exercises, find the LCD. 33. $$\dfrac{7}{a^{2}-3 a-10}, \dfrac{3 a}{a^{2}-a-20}$$ $$(a+2)(a-5)(a+4)$$ 34. $$\dfrac{6}{n^{2}-4}, \dfrac{2 n}{n^{2}-4 n+4}$$ 35. $$\dfrac{5}{3 p^{2}+17 p-6}, \dfrac{2 p}{3 p^{2}-23 p-8}$$ $$(3 p+1)(p+6)(p+8)$$

### Add and Subtract Rational Expressions with Unlike Denominators

In the following exercises, perform the indicated operations. 36. $$\dfrac{7}{5 a}+\dfrac{3}{2 b}$$ 37. $$\dfrac{2}{c-2}+\dfrac{9}{c+3}$$ $$\dfrac{11 c-12}{(c-2)(c+3)}$$ 38. $$\dfrac{3 x}{x^{2}-9}+\dfrac{5}{x^{2}+6 x+9}$$ 39. $$\dfrac{2 x}{x^{2}+10 x+24}+\dfrac{3 x}{x^{2}+8 x+16}$$ $$\dfrac{5 x^{2}+26 x}{(x+4)(x+4)(x+6)}$$ 40. $$\dfrac{5 q}{p^{2} q-p^{2}}+\dfrac{4 q}{q^{2}-1}$$ 41. $$\dfrac{3 y}{y+2}-\dfrac{y+2}{y+8}$$ $$\dfrac{2\left(y^{2}+10 y-2\right)}{(y+2)(y+8)}$$ 42. $$\dfrac{-3 w-15}{w^{2}+w-20}-\dfrac{w+2}{4-w}$$ 43.
$$\dfrac{7 m+3}{m+2}-5$$ $$\dfrac{2 m-7}{m+2}$$ 44. $$\dfrac{n}{n+3}+\dfrac{2}{n-3}-\dfrac{n-9}{n^{2}-9}$$ 45. $$\dfrac{8 a}{a^{2}-64}-\dfrac{4}{a+8}$$ $$\dfrac{4}{a-8}$$ 46. $$\dfrac{5}{12 x^{2} y}+\dfrac{7}{20 x y^{3}}$$

### Add and Subtract Rational Functions

In the following exercises, find $$R(x)=f(x)+g(x)$$ where $$f(x)$$ and $$g(x)$$ are given. 47. $$f(x)=\dfrac{2 x^{2}+12 x-11}{x^{2}+3 x-10}, g(x)=\dfrac{x+1}{2-x}$$ $$R(x)=\dfrac{x+8}{x+5}$$ 48. $$f(x)=\dfrac{-4 x+31}{x^{2}+x-30}, g(x)=\dfrac{5}{x+6}$$ In the following exercises, find $$R(x)=f(x)-g(x)$$ where $$f(x)$$ and $$g(x)$$ are given. 49. $$f(x)=\dfrac{4 x}{x^{2}-121}, g(x)=\dfrac{2}{x-11}$$ $$R(x)=\dfrac{2}{x+11}$$ 50. $$f(x)=\dfrac{7}{x+6}, g(x)=\dfrac{14 x}{x^{2}-36}$$

## Simplify Complex Rational Expressions

### Simplify a Complex Rational Expression by Writing It as Division

In the following exercises, simplify. 51. $$\dfrac{\dfrac{7 x}{x+2}}{\dfrac{14 x^{2}}{x^{2}-4}}$$ $$\dfrac{x-2}{2 x}$$ 52. $$\dfrac{\dfrac{2}{5}+\dfrac{5}{6}}{\dfrac{1}{3}+\dfrac{1}{4}}$$ 53. $$\dfrac{x-\dfrac{13 x}{x+5}}{\dfrac{1}{x+5}+\dfrac{1}{x-5}}$$ $$\dfrac{(x-8)(x-5)}{2}$$ 54. $$\dfrac{\dfrac{2}{m}+\dfrac{m}{n}}{\dfrac{n}{m}-\dfrac{1}{n}}$$

### Simplify a Complex Rational Expression by Using the LCD

In the following exercises, simplify. 55. $$\dfrac{\dfrac{1}{3}+\dfrac{1}{8}}{\dfrac{1}{4}+\dfrac{1}{12}}$$ $$\dfrac{11}{8}$$ 56. $$\dfrac{\dfrac{3}{a^{2}}-\dfrac{1}{b}}{\dfrac{1}{a}+\dfrac{1}{b^{2}}}$$ 57. $$\dfrac{\dfrac{2}{z^{2}-49}+\dfrac{1}{z+7}}{\dfrac{9}{z+7}+\dfrac{12}{z-7}}$$ $$\dfrac{z-5}{21 z+21}$$ 58. $$\dfrac{\dfrac{3}{y^{2}-4 y-32}}{\dfrac{2}{y-8}+\dfrac{1}{y+4}}$$

## Solve Rational Equations

### Solve Rational Equations

In the following exercises, solve. 59. $$\dfrac{1}{2}+\dfrac{2}{3}=\dfrac{1}{x}$$ $$x=\dfrac{6}{7}$$ 60. $$1-\dfrac{2}{m}=\dfrac{8}{m^{2}}$$ 61. $$\dfrac{1}{b-2}+\dfrac{1}{b+2}=\dfrac{3}{b^{2}-4}$$ $$b=\dfrac{3}{2}$$ 62. $$\dfrac{3}{q+8}-\dfrac{2}{q-2}=1$$ 63.
$$\dfrac{v-15}{v^{2}-9 v+18}=\dfrac{4}{v-3}+\dfrac{2}{v-6}$$ no solution 64. $$\dfrac{z}{12}+\dfrac{z+3}{3 z}=\dfrac{1}{z}$$

### Solve Rational Equations that Involve Functions

65. For the rational function $$f(x)=\dfrac{x+2}{x^{2}-6 x+8}$$, 1. Find the domain of the function 2. Solve $$f(x)=1$$ 3. Find the points on the graph at this function value. 1. The domain is all real numbers except $$x=2$$ and $$x=4$$ 2. $$x=1, x=6$$ 3. $$(1,1),(6,1)$$ 66. For the rational function $$f(x)=\dfrac{2-x}{x^{2}+7 x+10}$$, 1. Solve $$f(x)=2$$ 2. Find the points on the graph at this function value.

### Solve a Rational Equation for a Specific Variable

In the following exercises, solve for the indicated variable. 67. $$\dfrac{V}{l}=h w$$ for $$l$$ $$l=\dfrac{V}{h w}$$ 68. $$\dfrac{1}{x}-\dfrac{2}{y}=5$$ for $$y$$ 69. $$x=\dfrac{y+5}{z-7}$$ for $$z$$ $$z=\dfrac{y+5+7 x}{x}$$ 70. $$P=\dfrac{k}{V}$$ for $$V$$

## Solve Applications with Rational Equations

### Solve Proportions

In the following exercises, solve. 71. $$\dfrac{x}{4}=\dfrac{3}{5}$$ $$x = \dfrac{12}{5}$$ 72. $$\dfrac{3}{y}=\dfrac{9}{5}$$ 73. $$\dfrac{s}{s+20}=\dfrac{3}{7}$$ $$s = 15$$ 74. $$\dfrac{t-3}{5}=\dfrac{t+2}{9}$$

### Solve Applications Using Proportions

In the following exercises, solve. 75. Rachael had a 21-ounce strawberry shake that has 739 calories. How many calories are there in a 33-ounce shake? 1161 calories 76. Leo went to Mexico over Christmas break and changed $525 into Mexican pesos. At that time, the exchange rate was $1 US to 16.25 Mexican pesos. How many Mexican pesos did he get for his trip?

### Solve Similar Figure Applications

In the following exercises, solve. 77. $$\Delta ABC$$ is similar to $$\Delta XYZ$$. The lengths of two sides of each triangle are given in the figure. Find the lengths of the third sides. $$b=9 ; \; x=2 \dfrac{1}{3}$$ 78. On a map of Europe, Paris, Rome, and Vienna form a triangle whose sides are shown in the figure below.
If the actual distance from Rome to Vienna is 700 miles, find the distance from 1. Paris to Rome 2. Paris to Vienna 79. Francesca is 5.75 feet tall. Late one afternoon, her shadow was 8 feet long. At the same time, the shadow of a nearby tree was 32 feet long. Find the height of the tree. 23 feet 80. The height of a lighthouse in Pensacola, Florida is 150 feet. Standing next to the lighthouse, 5.5-foot-tall Natasha cast a 1.1-foot shadow. How long would the shadow of the lighthouse be?

### Solve Uniform Motion Applications

In the following exercises, solve. 81. When making the 5-hour drive home from visiting her parents, Lolo ran into bad weather. She was able to drive 176 miles while the weather was good, but then, driving 10 mph slower, went 81 miles when it turned bad. How fast did she drive when the weather was bad? 45 mph 82. Mark is riding on a plane that can fly 490 miles with a tailwind of 20 mph in the same time that it can fly 350 miles against a headwind of 20 mph. What is the speed of the plane? 83. Josue can ride his bicycle 8 mph faster than Arjun can ride his bike. It takes Arjun 3 hours longer than Josue to ride 48 miles. How fast can Josue ride his bike? 16 mph 84. Curtis was training for a triathlon. He ran 8 kilometers and biked 32 kilometers in a total of 3 hours. His running speed was 8 kilometers per hour less than his biking speed. What was his running speed?

### Solve Work Applications

In the following exercises, solve. 85. Brandy can frame a room in 1 hour, while Jake takes 4 hours. How long could they frame a room working together? $$\dfrac{4}{5}$$ hour 86. Prem takes 3 hours to mow the lawn while her cousin, Barb, takes 2 hours. How long will it take them working together? 87. Jeffrey can paint a house in 6 days, but if he gets a helper he can do it in 4 days. How long would it take the helper to paint the house alone? 12 days 88. Marta and Deb work together writing a book that takes them 90 days. If Marta worked alone it would take her 120 days.
How long would it take Deb to write the book alone?

### Solve Direct Variation Problems

In the following exercises, solve. 89. If $$y$$ varies directly as $$x$$ when $$y=9$$ and $$x=3$$, find $$x$$ when $$y=21$$. $$7$$ 90. If $$y$$ varies inversely as $$x$$ when $$y=20$$ and $$x=2$$, find $$y$$ when $$x=4$$. 91. Vanessa is traveling to see her fiancé. The distance, $$d$$, varies directly with the speed, $$v$$, she drives. If she travels 258 miles driving 60 mph, how far would she travel going 70 mph? 301 miles 92. If the cost of a pizza varies directly with its diameter, and if an 8” diameter pizza costs $12, how much would a 6” diameter pizza cost? 93. The distance to stop a car varies directly with the square of its speed. It takes 200 feet to stop a car going 50 mph. How many feet would it take to stop a car going 60 mph? 288 feet

### Solve Inverse Variation Problems

In the following exercises, solve. 94. If $$m$$ varies inversely with the square of $$n$$, when $$m=4$$ and $$n=6$$, find $$m$$ when $$n=2$$. 95. The number of tickets for a music fundraiser varies inversely with the price of the tickets. If Madelyn has just enough money to purchase 12 tickets for $6, how many tickets can Madelyn afford to buy if the price increased to $8? 9 tickets 96. On a string instrument, the length of a string varies inversely with the frequency of its vibrations. If an 11-inch string on a violin has a frequency of 360 cycles per second, what frequency does a 12-inch string have?

## Solve Rational Inequalities

### Solve Rational Inequalities

In the following exercises, solve each rational inequality and write the solution in interval notation. 97. $$\dfrac{x-3}{x+4} \leq 0$$ $$(-4,3]$$ 98. $$\dfrac{5 x}{x-2}>1$$ 99. $$\dfrac{3 x-2}{x-4} \leq 2$$ $$[-6,4)$$ 100. $$\dfrac{1}{x^{2}-4 x-12}<0$$ 101. $$\dfrac{1}{2}-\dfrac{4}{x^{2}} \geq \dfrac{1}{x}$$ $$(-\infty,-2] \cup[4, \infty)$$ 102.
$$\dfrac{4}{x-2}<\dfrac{3}{x+1}$$

### Solve an Inequality with Rational Functions

In the following exercises, solve each rational function inequality and write the solution in interval notation. 103. Given the function, $$R(x)=\dfrac{x-5}{x-2}$$, find the values of $$x$$ that make the function greater than or equal to 0. $$(-\infty, 2) \cup[5, \infty)$$ 104. Given the function, $$R(x)=\dfrac{x+1}{x+3}$$, find the values of $$x$$ that make the function greater than or equal to 0. 105. The function $$C(x)=150 x+100,000$$ represents the cost to produce $$x$$ items. Find 1. The average cost function, $$c(x)$$ 2. How many items should be produced so that the average cost is less than $160. 1. $$c(x)=\dfrac{150 x+100000}{x}$$ 2. More than 10,000 items must be produced to keep the average cost below $160 per item. 106. Tillman is starting his own business by selling tacos at the beach. Accounting for the cost of his food truck and ingredients for the tacos, the function $$C(x)=2 x+6,000$$ represents the cost for Tillman to produce $$x$$ tacos. Find 1. The average cost function, $$c(x)$$ for Tillman’s Tacos 2. How many tacos should Tillman produce so that the average cost is less than $4.

## Practice Test

In the following exercises, simplify. 1. $$\dfrac{4 a^{2} b}{12 a b^{2}}$$ $$\dfrac{a}{3 b}$$ 2. $$\dfrac{6 x-18}{x^{2}-9}$$ In the following exercises, perform the indicated operation and simplify. 3. $$\dfrac{4 x}{x+2} \cdot \dfrac{x^{2}+5 x+6}{12 x^{2}}$$ $$\dfrac{x+3}{3 x}$$ 4. $$\dfrac{2 y^{2}}{y^{2}-1} \div \dfrac{y^{3}-y^{2}+y}{y^{3}-1}$$ 5. $$\dfrac{6 x^{2}-x+20}{x^{2}-81}-\dfrac{5 x^{2}+11 x-7}{x^{2}-81}$$ $$\dfrac{x-3}{x+9}$$ 6. $$\dfrac{-3 a}{3 a-3}+\dfrac{5 a}{a^{2}+3 a-4}$$ 7. $$\dfrac{2 n^{2}+8 n-1}{n^{2}-1}-\dfrac{n^{2}-7 n-1}{1-n^{2}}$$ $$\dfrac{3 n-2}{n-1}$$ 8. $$\dfrac{10 x^{2}+16 x-7}{8 x-3}+\dfrac{2 x^{2}+3 x-1}{3-8 x}$$ 9.
$$\dfrac{\dfrac{1}{m}-\dfrac{1}{n}}{\dfrac{1}{n}+\dfrac{1}{m}}$$ $$\dfrac{n-m}{m+n}$$ In the following exercises, solve each equation. 10. $$\dfrac{1}{x}+\dfrac{3}{4}=\dfrac{5}{8}$$ 11. $$\dfrac{1}{z-5}+\dfrac{1}{z+5}=\dfrac{1}{z^{2}-25}$$ $$z=\dfrac{1}{2}$$ 12. $$\dfrac{z}{2 z+8}-\dfrac{3}{4 z-8}=\dfrac{3 z^{2}-16 z-16}{8 z^{2}+2 z-64}$$ In the following exercises, solve each rational inequality and write the solution in interval notation. 13. $$\dfrac{6 x}{x-6} \leq 2$$ $$[-3,6)$$ 14. $$\dfrac{2 x+3}{x-6}>1$$ 15. $$\dfrac{1}{2}+\dfrac{12}{x^{2}} \geq \dfrac{5}{x}$$ $$(-\infty, 0) \cup(0,4] \cup[6, \infty)$$ In the following exercises, find $$R(x)$$ given $$f(x)=\dfrac{x-4}{x^{2}-3 x-10}$$ and $$g(x)=\dfrac{x-5}{x^{2}-2 x-8}$$. 16. $$R(x)=f(x)-g(x)$$ 17. $$R(x)=f(x) \cdot g(x)$$ $$R(x)=\dfrac{1}{(x+2)(x+2)}$$ 18. $$R(x)=f(x) \div g(x)$$ 19. Given the function, $$R(x)=\dfrac{2}{2 x^{2}+x-15}$$, find the values of $$x$$ that make the function less than or equal to 0. $$(2,5]$$ In the following exercises, solve. 20. If $$y$$ varies directly with $$x$$, and $$x=5$$ when $$y=30$$, find $$x$$ when $$y=42$$. 21. If $$y$$ varies inversely with the square of $$x$$ and $$x=3$$ when $$y=9$$, find $$y$$ when $$x=4$$. $$y=\dfrac{81}{16}$$ 22. Matheus can ride his bike for 30 miles with the wind in the same amount of time that he can go 21 miles against the wind. If the wind’s speed is 6 mph, what is Matheus’ speed on his bike? 23. Oliver can split a truckload of logs in 8 hours, but working with his dad they can get it done in 3 hours. How long would it take Oliver’s dad working alone to split the logs? Oliver’s dad would take $$4 \dfrac{4}{5}$$ hours to split the logs himself. 24. The volume of a gas in a container varies inversely with the pressure on the gas. If a container of nitrogen has a volume of 29.5 liters with 2000 psi, what is the volume if the tank has a 14.7 psi rating? Round to the nearest whole number. 25. 
The cities of Dayton, Columbus, and Cincinnati form a triangle in southern Ohio. The diagram gives the map distances between these cities in inches. The actual distance from Dayton to Cincinnati is 48 miles. What is the actual distance between Dayton and Columbus?
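Many of the proportion and variation answers above reduce to a single line of arithmetic, so they are easy to verify. A quick Python check of exercises 79 and 93 (ours, not part of the original exercise set):

```python
# Exercise 79: heights are proportional to shadow lengths at the same time of day.
tree_height = 5.75 / 8 * 32
print(tree_height)  # 23.0 feet

# Exercise 93: stopping distance varies directly with the square of speed,
# d = k*v**2, with k fixed by the 50 mph / 200 ft pair.
stop_60 = 200 * 60**2 / 50**2
print(stop_60)      # 288.0 feet
```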
https://rdrr.io/cran/EnvStats/man/evNormOrdStats.html
# Expected Value of Order Statistics from Random Sample from Standard Normal Distribution

### Description

Compute the expected value of order statistics from a random sample from a standard normal distribution.

### Usage

evNormOrdStats(n = 1, approximate = FALSE)
evNormOrdStatsScalar(r = 1, n = 1, approximate = FALSE)

### Arguments

n: positive integer indicating the sample size.

r: positive integer between 1 and n specifying the order statistic for which to compute the expected value.

approximate: logical scalar indicating whether to use the Blom score approximation (Blom, 1958). The default value is FALSE.

### Details

Let \underline{z} = z_1, z_2, …, z_n denote a vector of n observations from a normal distribution with parameters mean=0 and sd=1. That is, \underline{z} denotes a vector of n observations from a standard normal distribution. Let z_{(r)} denote the r'th order statistic of \underline{z}, for r = 1, 2, …, n. The probability density function of z_{(r)} is given by:

f_{r,n}(t) = \frac{n!}{(r-1)!(n-r)!} [Φ(t)]^{r-1} [1 - Φ(t)]^{n-r} φ(t) \;\;\;\;\;\; (1)

where Φ and φ denote the cumulative distribution function and probability density function of the standard normal distribution, respectively (Johnson et al., 1994, p.93). Thus, the expected value of z_{(r)} is given by:

E(r, n) = E[z_{(r)}] = \int_{-∞}^{∞} t f_{r,n}(t) dt \;\;\;\;\;\; (2)

It can be shown that if n is odd, then

E[(n+1)/2, n] = 0 \;\;\;\;\;\; (3)

Also, for all values of n,

E(r, n) = -E(n-r+1, n) \;\;\;\;\;\; (4)

The function evNormOrdStatsScalar computes the value of E(r,n) for user-specified values of r and n. The function evNormOrdStats computes the values of E(r,n) for all values of r for a user-specified value of n. For large values of n, the function evNormOrdStats with approximate=FALSE may take a long time to execute. When approximate=TRUE, evNormOrdStats and evNormOrdStatsScalar use the following approximation to E(r,n), which was proposed by Blom (1958, pp.
68-75):

E(r, n) \approx Φ^{-1}(\frac{r - 3/8}{n + 1/4}) \;\;\;\;\;\; (5)

This approximation is quite accurate. For example, for n ≥ 2, the approximation is accurate to the first decimal place, and for n ≥ 9 it is accurate to the second decimal place.

### Value

For evNormOrdStats: a numeric vector of length n containing the expected values of all the order statistics for a random sample of n standard normal deviates.

For evNormOrdStatsScalar: a numeric scalar containing the expected value of the r'th order statistic from a random sample of n standard normal deviates.

### Note

The expected values of normal order statistics are used to construct normal quantile-quantile (Q-Q) plots (see qqPlot) and to compute goodness-of-fit statistics (see gofTest). Usually, however, approximations are used instead of exact values. The functions evNormOrdStats and evNormOrdStatsScalar have been included mainly because evNormOrdStatsScalar is called by elnorm3 and predIntNparSimultaneousTestPower.

### Author(s)

Steven P. Millard (EnvStats@ProbStatInfo.com)

### References

Blom, G. (1958). Statistical Estimates and Transformed Beta Variables. John Wiley and Sons, New York, pp. 68–75.

Johnson, N. L., S. Kotz, and N. Balakrishnan. (1994). Continuous Univariate Distributions, Volume 1. Second Edition. John Wiley and Sons, New York, pp. 93–99.

Royston, J.P. (1982). Algorithm AS 177. Expected Normal Order Statistics (Exact and Approximate). Applied Statistics 31, 161–165.

### See Also

Normal, elnorm3, predIntNparSimultaneousTestPower, gofTest, qqPlot.
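Equation (2) can also be evaluated directly by numerical quadrature, which is essentially what approximate=FALSE does. A short Python sketch (a cross-check written by us, not the EnvStats implementation):

```python
import math

def norm_pdf(t):
    # Standard normal density phi(t).
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def norm_cdf(t):
    # Standard normal distribution function Phi(t).
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def ev_norm_order_stat(r, n, lo=-8.0, hi=8.0, steps=4000):
    """Equation (2): E(r, n) = integral of t * f_{r,n}(t) dt, with f_{r,n}
    the order-statistic density of equation (1), via composite Simpson's rule.
    The tails beyond +/-8 contribute negligibly for moderate n."""
    c = math.factorial(n) / (math.factorial(r - 1) * math.factorial(n - r))

    def integrand(t):
        return t * c * norm_cdf(t) ** (r - 1) * (1 - norm_cdf(t)) ** (n - r) * norm_pdf(t)

    h = (hi - lo) / steps
    total = integrand(lo) + integrand(hi)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * integrand(lo + i * h)
    return total * h / 3

print(round(ev_norm_order_stat(1, 10), 6))  # close to -1.538753
```

The result agrees with the exact value for r = 1, n = 10 quoted in the Examples, and the quadrature also respects the symmetry of equation (4).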
### Examples

# Compute the expected value of the minimum for a random sample of size 10
# from a standard normal distribution:

evNormOrdStatsScalar(r = 1, n = 10)
#[1] -1.538753

#----------

# Compute the expected values of all of the order statistics for a random sample
# of size 10 from a standard normal distribution:

evNormOrdStats(10)
#[1] -1.5387527 -1.0013570 -0.6560591 -0.3757647 -0.1226888
#[6]  0.1226888  0.3757647  0.6560591  1.0013570  1.5387527

# Compare the above with Blom (1958) scores:

evNormOrdStats(10, approx = TRUE)
#[1] -1.5466353 -1.0004905 -0.6554235 -0.3754618 -0.1225808
#[6]  0.1225808  0.3754618  0.6554235  1.0004905  1.5466353
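The Blom scores printed above depend only on equation (5), so they can be reproduced outside R. A stdlib Python cross-check (ours, not part of EnvStats):

```python
from statistics import NormalDist

def blom_score(r, n):
    # Equation (5): Blom's (1958) approximation to E(r, n).
    return NormalDist().inv_cdf((r - 3 / 8) / (n + 1 / 4))

scores = [round(blom_score(r, 10), 7) for r in range(1, 11)]
print(scores)  # matches the Blom scores above, e.g. -1.5466353 for r = 1
```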
2017-01-16 19:33:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8893908262252808, "perplexity": 1254.5872259312246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00542-ip-10-171-10-70.ec2.internal.warc.gz"}
http://gmatclub.com/forum/gmat-data-sufficiency-ds-141/index-11500.html?sk=v&sd=d
Find all School-related info fast with the new School-Specific MBA Forum It is currently 04 May 2015, 01:44 # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # GMAT Data Sufficiency (DS) Question banks Downloads My Bookmarks Reviews Important topics Go to page Previous    1  ...  229   230   231   232   233  ...  270    Next Search for: Topics Author Replies   Views Last post If x and y are non zero integers, is x^y ninomoi 2 539 22 Nov 2004, 13:22 Three integers, the median is M, the mean value is A. Is   Tags: boksana 3 539 11 Jul 2004, 19:27 Is b greater than 1? 1) b^2 is greater than b. 2) b is   Tags: netcaesar 2 539 18 Oct 2006, 18:20 if xy > 0, does (x - 1)(y - 1) = 1 ? (1) x + y = xy (2) x   Tags: lan583 3 539 25 Oct 2006, 11:44 Does the slope of the line equal to 1? (1) x = y (2) x = -y   Tags: TeHCM 6 538 19 Dec 2005, 20:39 a,b,x,and y are positive integers. If a^x*b^y=200, what is   Tags: freetheking 6 538 11 Aug 2006, 18:43 The average of 3a+4 and another number is 2a. what is the   Tags: Rupstar 1 538 08 Apr 2005, 10:54 is tn an integer? 1. n^2 is an integer 2. sq. root of n is   Tags: el1981 2 538 17 Feb 2008, 19:17 10 children have ordered a total of 17 hamburgers, and each   Tags: kevincan 0 538 09 Jul 2006, 09:43 Need to put many boxes into warehouse. All boxes are the   Tags: az780 5 538 09 Mar 2008, 08:45 Is the integer n odd? (1) n is divisible by 3. (2) 2n is   Tags: gluon 7 538 24 Sep 2007, 21:35 Tom is going to move, he has three kinds of boxes: large,   Tags: rkatl 1 538 12 Sep 2006, 00:56 IF X is the integer, is X/3 an integer? 1).72/X is the   Tags: sperumba 6 537 27 Jan 2006, 07:26 Is n an integer? (1) n^2 is an integer (2) Root of n is an   Tags: Blue agave 1 537 12 Nov 2005, 13:42 If a>b>c>0, is c <3? (1) 1/a>1/3 (2)   Tags: freetheking 3 537 01 Aug 2006, 12:56 If y is an integer and y = |x | + x, is y = 0 ? 
(1) x < 0   Tags: sondenso 9 537 24 Jun 2008, 21:10 if zy young_gun 5 537 21 Nov 2007, 09:21 If x and y are integers , is the value 0f x(y+1) even? (1) x   Tags: ishtmeet 4 537 22 Oct 2006, 07:47 What is the value of y? (1) 3|x^2 4| = y 2 (2) |3 y| = 11   Tags: GMAT TIGER 5 537 12 Dec 2008, 22:20 If x is a positive integer, is the remainder 0 when (3x +   Tags: ricokevin 2 537 17 Feb 2007, 18:27 Does the decimal equivalent of P/Q, where P and Q are   Tags: sidbidus 4 537 27 Oct 2007, 09:57 In a single row of yellow, green and red colored tiles,   Tags: sujayb 2 537 22 Nov 2006, 18:32 GMATPrep : Inequality DS   Tags: techieGuy 3 537 14 Aug 2006, 00:29 An airline company pays $10,000 less for each additional Tags: sudhagar 3 536 07 Nov 2005, 19:48 Necromonger visits GMATClub everyday, and he is a Tags: necromonger 2 536 19 Jun 2006, 23:19 6 The interest paid by Red State Reserve Bank is three times guerrero25 3 536 04 Aug 2013, 03:09 2 The average score of x number of exams is y. When an additional exam Bunuel 1 536 08 Oct 2014, 22:00 DS - Powerprep Tags: ashkrs 2 536 01 Oct 2007, 20:24 What is the standard deviation of Q, a set of consecutive Tags: ArvGMAT 2 536 24 Jun 2007, 20:39 If X is an integer, Is (x^2+1)(x+5) an even number? 1. x is Tags: mbadownunder 3 536 27 Jan 2006, 01:18 What is the sum of 5 evenly spaced integers? 1. The middle Tags: buckkitty 4 536 06 Jan 2006, 05:58 In the rectangular coordinate system, are the points (a, b) Tags: jimjohn 2 536 17 Dec 2007, 17:59 What is the value of the integer n? (1) n(n + 2) = 15 (2) (n Tags: gluon 6 536 03 Oct 2007, 10:29 If x,y and z are integers. Is x is an integer? 1. Y is a Tags: sdanquah 3 536 13 Sep 2004, 08:13 4 When the positive integer x is divided by the positive integ chetan86 3 536 09 Jul 2014, 09:36 If a, b, c, d and e are integers and p = 2^a.3^b and q = Tags: trahul4 4 536 09 Jul 2007, 08:29 Is the remainder of the following expression even? 
(x^2 + 2x Tags: Futuristic 7 536 16 Sep 2006, 06:05 Starting with$100 in her purse, Selma plays a game in which   Tags: kevincan 2 536 21 Aug 2006, 11:31 Is |x| > |y| ? 1) x^2 > y^2 2) x > y   Tags: jimmyjamesdonkey 6 536 27 Apr 2008, 21:24 gmatprep_ds   Tags: lali 2 536 14 Feb 2007, 09:20 If K is a positive integer, is K the square of an integer?   Tags: TOUGH GUY 3 535 28 Jan 2007, 08:39 IS SQUARE ROOT of (x-5)= 5-x st1 -x absolute value of x>0 mand-y 1 535 05 Dec 2005, 19:21 If n is a positive integer, is (1/10)^n less than 0.01? (1)   Tags: above720 1 535 23 Aug 2007, 18:09 If a, b, and c are positive distinct integers, is   Tags: neelesh 3 535 18 Feb 2008, 18:16 A number of apples and oranges are to be distributed evenly   Tags: ArvGMAT 1 535 26 Jun 2007, 17:12 Is x^4 + y^4 > z^4 ? (1) x^2 + y^2 > z^2 (2) x + y   Tags: chineseburned 10 535 14 Apr 2008, 20:12 If x, y, z are integers are greater than 1, whats the value   Tags: suntaurian 1 535 21 Mar 2008, 17:45 Is a nonzero integer Z prime? (1) |Z|^|Z|=4 (2) |Z^Z|=Z^2   Tags: yezz 3 535 11 Sep 2006, 23:43 If sqrt(x) is a positive integer, is sqrt(x) a prime number?   Tags: bmwhype2 1 535 25 Nov 2007, 05:31 1 Is Z > 10 - Z ? Asifpirlo 8 535 10 Aug 2013, 23:15
2015-05-04 09:44:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3777261972427368, "perplexity": 5762.368779840499}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453938418.92/warc/CC-MAIN-20150501041858-00030-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.edmundoptics.cn/knowledge-center/application-notes/imaging/6-fundamental-parameters-of-an-imaging-system/
# The 6 Fundamental Parameters of an Imaging System ##### Figure 1: Illustration of the fundamental parameters of an imaging system. (1)$$\text{Primary Magnification (PMAG)} = \frac{\text{sensor size} \left[ \text{mm} \right]}{\text{field of view} \left[ \text{mm} \right]}$$ ##### Figure 2: Diagram of a fixed focal length lens. Telecentric, fixed focal length, micro-video, fixed magnification, variable magnification, or zoom lenses available. High resolution or large format designs to cover your sensor. Edmund Optics can not only help you learn how to specify the right imaging optics, but can also provide you with multiple resources and products to surpass your imaging needs.
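Equation (1) is a single ratio and can be sketched in a few lines of Python (the function name and the sensor/field-of-view numbers below are illustrative, not taken from the page):

```python
def primary_magnification(sensor_size_mm, field_of_view_mm):
    """PMAG = sensor size [mm] / field of view [mm], equation (1)."""
    return sensor_size_mm / field_of_view_mm

# Example values (mine): a 6.4 mm wide sensor imaging a 64 mm wide
# field of view gives a primary magnification of 0.1X.
pmag = primary_magnification(6.4, 64.0)
```
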
2020-07-09 18:31:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25707748532295227, "perplexity": 8751.158200063257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900614.47/warc/CC-MAIN-20200709162634-20200709192634-00074.warc.gz"}
https://brilliant.org/problems/bernoulli-numbers/
# Bernoulli Numbers Calculus Level 4 Find the value of $\large \displaystyle \lim _{ n\rightarrow \infty }{ \left[ \frac {(2n) ! }{ \left| B_{ 2n } \right| } \right] ^{ \frac { 1 }{ 2n } } }$ where $$B_n$$ is the $$n$$th Bernoulli number. ×
2017-09-26 14:31:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938629150390625, "perplexity": 3556.079322950123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696182.97/warc/CC-MAIN-20170926141625-20170926161625-00151.warc.gz"}
https://www.physicsforums.com/threads/how-to-show-that-electric-and-magnetic-fields-are-transverse.815001/
# How to show that Electric and Magnetic fields are transverse Tags: 1. May 21, 2015 ### leonardthecow 1. The problem statement, all variables and given/known data This isn't necessarily a problem, but a question I have about a certain step taken in showing that the electric and magnetic fields are transverse. In Jackson, Griffiths, and my professor's written notes, each claims the following. Considering plane wave solutions of the form $$\textbf{E}(\vec{x}, t) = Re[\vec{E_0}e^{-i(\vec{k} \cdot \vec{x} - \omega t)}] \\ \textbf{B}(\vec{x}, t) = Re[\vec{B_0}e^{-i(\vec{k} \cdot \vec{x} - \omega t)}]$$ since the Maxwell equations demand that the divergences of both E and B are zero, this in turn demands that $$\vec{k} \cdot \textbf{E} = 0 \\ \vec{k} \cdot \textbf{B} = 0.$$ 2. Relevant equations See above, plus the fact that $\vec{E_0}$ and $\vec{B_0}$ are complex functions. 3. The attempt at a solution This has to just be my missing something stupid; I just don't see how the plane wave solutions and the Maxwell equations imply that condition (where the wave vector dotted into the E and B fields is zero). Even doing the divergence out for, say, the x component of the E field, you would have something like $$(\nabla \cdot \textbf{E})_x = \partial_x ({E_0}_xe^{-i(k_x x - \omega t)}) = \partial_x {E_0}_x - ik_x {E_0}_xe^{-i(k_x x - \omega t)}$$ which, combined with the other components would give you $$\nabla \cdot \vec{E_0} - i\vec{k} \cdot \textbf{E} = 0$$ which clearly isn't what any of the textbooks are saying is the case. Is it just that the divergence of the complex function $\vec{E_0}$ is zero? If so, why is that the case? Where am I going wrong here? Thanks! 2. May 21, 2015 ### BvU $\vec E_0$ is an amplitude, a constant for the plane waves you describe. 3. May 21, 2015 ### leonardthecow Ah okay, I buy that, thanks! 
Related question though; $\vec{E_0}$ is defined as $$\vec{E_0}=\textbf{A}_1 + i\textbf{A}_2,$$ where $\textbf{A}_1$ and $\textbf{A}_2$ are in $\mathbb{R}^3$. In a later proof, my professor makes the claim that $$\vec{k} \cdot \textbf{A}_1 = \vec{k} \cdot \textbf{A}_2 = 0.$$ Now, just by simple substitution into $\vec{k} \cdot \vec{E_0} = 0$ would this not imply only that $\vec{k} \cdot \textbf{A}_1 = - \vec{k} \cdot \textbf{A}_2$? I don't see why we would assume that both dot products are individually zero. 4. May 21, 2015 ### BvU Well, if $\vec k$ is real, then $\vec{k} \cdot \vec{E_0} = \vec{k} \cdot ( \textbf{A}_1 + i\textbf{A}_2) = 0 + i 0$ implies $\vec{k} \cdot \textbf{A}_1 = \vec{k} \cdot \textbf{A}_2 = 0$ and you are in business. Is $\vec k_0$ real ? why (or: why not) ?
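BvU's point, that for a constant amplitude the divergence of the plane wave is proportional to $\vec{k}\cdot\vec{E_0}$, can be checked numerically. Here is a Python sketch (my own construction, not from the thread) that estimates the divergence with central finite differences for a real amplitude, once perpendicular and once parallel to $\vec{k}$:

```python
import math

k = (0.0, 0.0, 2.0)          # real wave vector
omega, t = 1.0, 0.0

def E(x, E0):
    """Real plane wave with constant real amplitude E0:
    E_j(x, t) = E0_j * cos(k.x - omega*t)."""
    phase = sum(ki * xi for ki, xi in zip(k, x)) - omega * t
    return [e * math.cos(phase) for e in E0]

def divergence(x, E0, h=1e-5):
    """Central-difference estimate of div E at the point x."""
    div = 0.0
    for j in range(3):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        div += (E(xp, E0)[j] - E(xm, E0)[j]) / (2 * h)
    return div

x0 = (0.3, 0.2, 0.1)
div_perp = divergence(x0, (1.0, 0.0, 0.0))  # E0 perpendicular to k
div_par  = divergence(x0, (0.0, 0.0, 1.0))  # E0 parallel to k
# Only the perpendicular amplitude satisfies Gauss's law, div E = 0;
# the parallel one gives div E = -(E0.k) sin(k.x - omega t).
```
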
2017-12-17 14:48:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8627467155456543, "perplexity": 259.96117640778283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596051.82/warc/CC-MAIN-20171217132751-20171217154751-00456.warc.gz"}
http://mymathforum.com/algebra/18186-mortgage-payment.html
My Math Forum Mortgage Payment Algebra Pre-Algebra and Basic Algebra Math Forum March 19th, 2011, 09:17 AM #1 Newbie   Joined: Mar 2011 Posts: 2 Thanks: 0 Mortgage Payment Mortgage Amount: 180,000 15-yr mortgage at 6.5% 30-yr mortgage at 7.0% Using the following formula, determine the payment for the 15 year mortgage and the 30 year mortgage: $\text{PMT}=\frac{P\left(\frac{\text{apr}}{n}\right)}{1-\left(1+\frac{\text{apr}}{n}\right)^{-nY}}$ I am really struggling with the formula and how to use it. Any suggestions? March 19th, 2011, 09:41 AM #2 Senior Member     Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: Mortgage Payment For the 15-yr mortgage, we have: $\text{PMT}=\frac{(180000)\left(\frac{0.065}{12}\right)}{1-\left(1+\frac{0.065}{12}\right)^{-12\cdot15}}$ Now you can use your calculator to get: $\text{PMT}\approx1567.99$ For the 30-yr mortgage, we have: $\text{PMT}=\frac{(180000)\left(\frac{0.07}{12}\right)}{1-\left(1+\frac{0.07}{12}\right)^{-12\cdot30}}$ $\text{PMT}\approx1197.54$
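The payment formula is straightforward to evaluate programmatically. A short Python sketch (the function name is mine) that reproduces the two figures quoted in the reply:

```python
def pmt(P, apr, n, Y):
    """Level payment for principal P at annual rate apr, with n
    payments per year over Y years (the formula quoted above)."""
    r = apr / n
    return P * r / (1 - (1 + r) ** (-n * Y))

p15 = pmt(180_000, 0.065, 12, 15)   # roughly 1567.99
p30 = pmt(180_000, 0.07, 12, 30)    # roughly 1197.54
```

Note that the shorter 15-year term carries the larger monthly payment despite its lower rate.
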
2019-09-18 17:47:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 5, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3079366087913513, "perplexity": 14852.470759963653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573323.60/warc/CC-MAIN-20190918172932-20190918194932-00168.warc.gz"}
https://stat.ethz.ch/R-manual/R-devel/library/base/html/ifelse.html
ifelse {base} R Documentation ## Conditional Element Selection ### Description ifelse returns a value with the same shape as test which is filled with elements selected from either yes or no depending on whether the element of test is TRUE or FALSE. ### Usage ifelse(test, yes, no) ### Arguments test an object which can be coerced to logical mode. yes return values for true elements of test. no return values for false elements of test. ### Details If yes or no are too short, their elements are recycled. yes will be evaluated if and only if any element of test is true, and analogously for no. Missing values in test give missing values in the result. ### Value A vector of the same length and attributes (including dimensions and "class") as test and data values from the values of yes or no. The mode of the answer will be coerced from logical to accommodate first any values taken from yes and then any values taken from no. ### Warning The mode of the result may depend on the value of test (see the examples), and the class attribute (see oldClass) of the result is taken from test and may be inappropriate for the values selected from yes and no. Sometimes it is better to use a construction such as (tmp <- yes; tmp[!test] <- no[!test]; tmp) , possibly extended to handle missing values in test. Further note that if(test) yes else no is much more efficient and often much preferable to ifelse(test, yes, no) whenever test is a simple true/false result, i.e., when length(test) == 1. The srcref attribute of functions is handled specially: if test is a simple true result and yes evaluates to a function with srcref attribute, ifelse returns yes including its attribute (the same applies to a false test and no argument). This functionality is only for backwards compatibility, the form if(test) yes else no should be used whenever yes and no are functions. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole. if. 
### Examples x <- c(6:-4) sqrt(x) #- gives warning sqrt(ifelse(x >= 0, x, NA)) # no warning ## Note: the following also gives the warning ! ifelse(x >= 0, sqrt(x), NA) ## ifelse() strips attributes ## This is important when working with Dates and factors x <- seq(as.Date("2000-02-29"), as.Date("2004-10-04"), by = "1 month") ## has many "yyyy-mm-29", but a few "yyyy-03-01" in the non-leap years y <- ifelse(as.POSIXlt(x)$mday == 29, x, NA) head(y) # not what you expected ... ==> need restore the class attribute: class(y) <- class(x) y ## This is a (not atypical) case where it is better *not* to use ifelse(), ## but rather the more efficient and still clear: y2 <- x y2[as.POSIXlt(x)$mday != 29] <- NA ## which gives the same as ifelse()+class() hack: stopifnot(identical(y2, y)) ## example of different return modes (and 'test' alone determining length): yes <- 1:3 no <- pi^(1:4) utils::str( ifelse(NA, yes, no) ) # logical, length 1 utils::str( ifelse(TRUE, yes, no) ) # integer, length 1 utils::str( ifelse(FALSE, yes, no) ) # double, length 1 [Package base version 4.3.0 Index]
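For readers outside R, the element-selection and recycling semantics described above can be sketched in Python (an illustrative analogue of my own, not part of base R; it ignores NA and attribute handling):

```python
from itertools import cycle, islice

def ifelse(test, yes, no):
    """Vectorised conditional in the spirit of R's ifelse(): the result
    has the same length as `test`, and short `yes`/`no` are recycled."""
    yes_r = islice(cycle(yes), len(test))
    no_r = islice(cycle(no), len(test))
    return [y if t else n for t, y, n in zip(test, yes_r, no_r)]

x = [6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4]
# None plays the role of NA for the negative entries:
safe = ifelse([v >= 0 for v in x], x, [None])
```
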
2022-07-05 07:02:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46945980191230774, "perplexity": 5285.505728433042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00084.warc.gz"}
http://openstudy.com/updates/50932713e4b0bd31c89da9fd
## SugarRainbow Sin x - Cos x/ Sinx Cosx Help one year ago 1. satellite73 what are you supposed to do with it? 2. satellite73 you could write $\frac{\sin(x)}{\sin(x)\cos(x)}-\frac{\cos(x)}{\sin(x)\cos(x)}$ $=\frac{1}{\cos(x)}-\frac{1}{\sin(x)}$ $=\sec(x)-\csc(x)$ if you like 3. SugarRainbow I'm supposed to simplify it
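satellite73's simplification can be spot-checked numerically. A quick Python check (my own, not from the thread) that the two sides agree at angles where neither sine nor cosine vanishes:

```python
import math

def lhs(x):
    # (sin x - cos x) / (sin x cos x)
    return (math.sin(x) - math.cos(x)) / (math.sin(x) * math.cos(x))

def rhs(x):
    # sec(x) - csc(x)
    return 1 / math.cos(x) - 1 / math.sin(x)

# Differences should be zero up to floating-point error.
checks = [lhs(x) - rhs(x) for x in (0.3, 0.7, 1.0, 2.0, 4.0)]
```
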
2014-03-10 17:39:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652277588844299, "perplexity": 11553.062613179083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010916587/warc/CC-MAIN-20140305091516-00082-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/surjective-proof-finding-inverse.721963/
# Surjective proof & finding inverse 1. Nov 10, 2013 ### synkk prove the function $g: \mathbb{N} \rightarrow \mathbb{N}$ $g(x) = \left[\dfrac{3x+1}{3} \right]$ where $[y]$ denotes the greatest integer $r$ such that $r \le y$ (the floor function), is surjective and find its inverse I know this function is bijective, but how do I prove it's surjective? Could I just say g(x) = y $\left[\dfrac{3x+1}{3} \right] = y$ so $x = \left[\dfrac{3y-1}{3} \right ]$ and say that $g^{-1}(x) = \left[\dfrac{3y-1}{3} \right ]$ 2. Nov 10, 2013
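One way to see what is going on (my observation, not from the thread): for an integer $x$, $(3x+1)/3 = x + 1/3$, so the floor is $x$ itself and $g$ is the identity map on $\mathbb{N}$, which makes surjectivity immediate. A quick Python check, which also shows that the proposed formula $\left[(3y-1)/3\right]$ comes out one less than $y$ rather than $y$:

```python
def g(x):
    # floor((3x + 1)/3); integer floor division avoids float rounding
    return (3 * x + 1) // 3

def proposed_inverse(y):
    # floor((3y - 1)/3), the formula suggested in the post
    return (3 * y - 1) // 3

# g is the identity on the naturals ...
identity = all(g(x) == x for x in range(1, 1000))
# ... while the proposed inverse shifts every integer down by one.
shift = all(proposed_inverse(y) == y - 1 for y in range(1, 1000))
```
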
2017-10-17 12:24:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6823982000350952, "perplexity": 521.6549625749401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821088.8/warc/CC-MAIN-20171017110249-20171017130249-00811.warc.gz"}
https://phys.libretexts.org/Courses/Merrimack_College/Conservation_Laws_Newton's_Laws_and_Kinematics_version_2.0/19%3A_N6)_Statics_and_Springs
# Chapter 19: N6) Statics and Springs • 19.1: Conditions for Static Equilibrium A body is in equilibrium when it remains either in uniform motion (both translational and rotational) or at rest. Conditions for equilibrium require that the sum of all external forces acting on the body is zero, and the sum of all external torques from external forces is zero. The free-body diagram for a body is a useful tool that allows us to count correctly all contributions from all external forces and torques acting on the body. • 19.2: Springs • 19.3: Examples In applications of equilibrium conditions for rigid bodies, identify all forces that act on a rigid body and note their lever arms in rotation about a chosen rotation axis. Net external forces and torques can be clearly identified from a correctly constructed free-body diagram. In setting up equilibrium conditions, we are free to adopt any inertial frame of reference and any position of the pivot point. We reach the same answer no matter what choices we make. Chapter 19: N6) Statics and Springs is shared under a CC BY-SA license and was authored, remixed, and/or curated by LibreTexts.
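The two equilibrium conditions summarized above can be illustrated with a minimal worked example (the numbers are mine, not from the chapter): two weights on a massless beam resting on a pivot.

```python
# Two weights on a massless beam pivoted at x = 0 (illustrative values).
g = 9.8                     # m/s^2
m1, x1 = 30.0, -1.2         # 30 kg placed 1.2 m left of the pivot
m2 = 45.0                   # 45 kg at unknown position x2

# Rotational condition: the net torque about the pivot is zero,
#   m1*g*x1 + m2*g*x2 = 0   ->   x2 = -m1*x1/m2
x2 = -m1 * x1 / m2

# Translational condition: the net force is zero, so the pivot
# supplies a normal force equal to the total weight.
N = (m1 + m2) * g
```

Any pivot location would do for the torque balance; taking torques about the support simply eliminates the unknown normal force from that equation.
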
2023-02-01 02:20:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9409862160682678, "perplexity": 383.4830134915067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499899.9/warc/CC-MAIN-20230201013650-20230201043650-00385.warc.gz"}
http://mathhelpforum.com/pre-calculus/141198-exponential-equation-pictures.html
# Thread: exponential equation (With pictures!) 1. ## exponential equation (With pictures!) This should just about sum it up. Thank you internet people! 2. Originally Posted by Vamz This should just about sum it up. Thank you internet people! $f(x) = -2^{x - 1} + \frac{7}{2}$. To find the $x$ intercept, let $f(x) = 0$. So $-2^{x - 1} + \frac{7}{2} = 0$ $2^{x - 1} = \frac{7}{2}$ $2\cdot 2^{x - 1} = 7$ $2^{x} = 7$ $\ln{(2^x)} = \ln{7}$ $x\ln{2} = \ln{7}$ $x = \frac{\ln{7}}{\ln{2}}$. 3. Originally Posted by Prove It $f(x) = -2^{x - 1} + \frac{7}{2}$. To find the $x$ intercept, let $f(x) = 0$. So $-2^{x - 1} + \frac{7}{2} = 0$ $2^{x - 1} = \frac{7}{2}$ $2\cdot 2^{x - 1} = 7$ $2^{x} = 7$ what happened to ^(x-1) .. and that other 2? $\ln{(2^x)} = \ln{7}$ $x\ln{2} = \ln{7}$ $x = \frac{\ln{7}}{\ln{2}}$. what I posted was what my teacher had in the answer key. I am assuming he's right... unless he was wrong!? 4. You should know that $a^m \cdot a^n = a^{m + n}$. Here you have $2^1 \cdot 2^{x - 1} = 2^{1 + x - 1} = 2^x$. Either s/he meant $\frac{\ln{7}}{\ln{2}}$ or s/he meant $\log_2{7}$. 5. Thank you. You sir, are a smart cookie. And I'd give you one, but I wouldn't know where to begin contacting you. case closed. 6. Originally Posted by Vamz Thank you. You sir, are a smart cookie. And I'd give you one, but ... Hallo, a personal remark: Before you start throwing cookies around you could instead press the $\boxed{\text{Thanks}}$-button. That's the kind of food we are living on here . EB
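The closing step, $2^x = 7$ giving $x = \ln 7 / \ln 2$, is easy to verify numerically (a quick Python check of my own):

```python
import math

x = math.log(7) / math.log(2)        # x = ln 7 / ln 2 = log2(7)

# Substituting back into f(x) = -2^(x-1) + 7/2 should recover the
# x-intercept, i.e. a residual of zero up to floating-point error.
residual = -(2 ** (x - 1)) + 7 / 2
```
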
2016-10-23 12:41:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9722318649291992, "perplexity": 7262.935446915814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719273.37/warc/CC-MAIN-20161020183839-00127-ip-10-171-6-4.ec2.internal.warc.gz"}
https://paperswithcode.com/paper/robust-topological-inference-in-the-presence
# Robust Topological Inference in the Presence of Outliers The distance function to a compact set plays a crucial role in the paradigm of topological data analysis. In particular, the sublevel sets of the distance function are used in the computation of persistent homology -- a backbone of the topological data analysis pipeline. Despite its stability to perturbations in the Hausdorff distance, persistent homology is highly sensitive to outliers. In this work, we develop a framework of statistical inference for persistent homology in the presence of outliers. Drawing inspiration from recent developments in robust statistics, we propose a $\textit{median-of-means}$ variant of the distance function ($\textsf{MoM Dist}$), and establish its statistical properties. In particular, we show that, even in the presence of outliers, the sublevel filtrations and weighted filtrations induced by $\textsf{MoM Dist}$ are both consistent estimators of the true underlying population counterpart, and their rates of convergence in the bottleneck metric are controlled by the fraction of outliers in the data. Finally, we demonstrate the advantages of the proposed methodology through simulations and applications.
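The median-of-means idea described in the abstract can be caricatured in a few lines of Python. This is a toy sketch based only on my reading of the abstract: the blocking scheme, names, and data are illustrative, not the paper's actual estimator.

```python
import math
from statistics import median

def dist_to_set(q, pts):
    """Ordinary distance function: distance from q to the point set."""
    return min(math.dist(q, p) for p in pts)

def mom_dist(q, pts, K):
    """Median over K blocks of the distance from q to each block,
    a toy version of the median-of-means distance idea."""
    blocks = [pts[i::K] for i in range(K)]
    return median(dist_to_set(q, b) for b in blocks)

# 30 points on the unit circle plus a single gross outlier.
circle = [(math.cos(2 * math.pi * i / 30), math.sin(2 * math.pi * i / 30))
          for i in range(30)]
data = circle + [(10.0, 10.0)]

q = (10.0, 10.0)                 # query right on top of the outlier
naive = dist_to_set(q, data)     # the outlier drags this to zero
robust = mom_dist(q, data, K=3)  # the median over blocks ignores it
```

With three blocks the lone outlier corrupts only one block distance, so the median still reflects the distance to the circle.
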
2022-08-17 22:41:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5407302379608154, "perplexity": 207.00826264207265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00774.warc.gz"}
https://2021.help.altair.com/2021.1/hwsolvers/ms/topics/solvers/ms/bistop_function.htm
# BISTOP

The BISTOP function models a gap element.

## Format

$\text{BISTOP}(x, \dot{x}, x_1, x_2, k, e, c_{\max}, d)$

## Description

It can be used to model forces acting on a body while moving in the gap between two boundary surfaces, which act as elastic bumpers. The properties of the two boundary surfaces can be tuned as desired.

## Arguments

$x$: The expression used for the independent variable. For example, to use the z-displacement of the I marker with respect to the J marker, as resolved in the reference frame of the RM marker, as the independent variable, specify $x$ as DZ({marker_i.idstring}, {marker_j.idstring}, {marker_rm.idstring}).

$\dot{x}$: The time derivative of the independent variable. For example, if $x$ is specified as above, then $\dot{x}$ will be VZ({marker_i.idstring}, {marker_j.idstring}, {marker_rm.idstring}).

$x_1$: The lower bound of $x$. If $x$ is less than $x_1$, the BISTOP function returns a positive value. The value of $x_1$ must be less than the value of $x_2$.

$x_2$: The upper bound of $x$. If $x$ is greater than $x_2$, the BISTOP function returns a negative value. The value of $x_2$ must be greater than the value of $x_1$.

$k$: The stiffness of the boundary surface interaction. It must be non-negative.

$e$: The exponent of the force-deformation characteristic. For a stiffening spring characteristic, $e$ must be greater than 1.0; for a softening spring characteristic, $e$ must be less than 1.0. It must always be positive.

$c_{\max}$: The maximum damping coefficient. It must be non-negative.

$d$: The penetration at which the full damping coefficient is applied. It must be positive.

## Example

```xml
<Force_Vector_TwoBody
    id                   = "30101"
    type                 = "ForceOnly"
    i_marker_id          = "30102031"
    j_floating_marker_id = "30101031"
    ref_marker_id        = "30101010"
    fx_expression        = "BISTOP(DX(30102030,30101010,30101010),VX(30102030,30101010,30101010),0.5,9.5,10000000,2.1,1,0.001)"
    fy_expression        = "0"
    fz_expression        = "0"
/>
```
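A gap-element force consistent with the argument descriptions above (zero force inside the gap, a $k\,p^e$ stiffness term, and damping that ramps up to $c_{\max}$ over the first $d$ units of penetration $p$) can be sketched as follows. This is an illustration only; the exact MotionSolve formula may differ, in particular in how the damping ramp is smoothed.

```python
def bistop(x, xdot, x1, x2, k, e, c_max, d):
    """Illustrative gap force with elastic bumpers at x1 (lower) and x2 (upper).

    Inside the gap the force is zero. On penetration p the stiffness term
    is k * p**e, and the damping coefficient grows linearly from 0 to c_max
    over the first d units of penetration (solvers typically use a smooth
    step here rather than a linear ramp).
    """
    def damping(p):                   # ramp to full c_max at depth d
        return c_max * min(p / d, 1.0)

    if x < x1:                        # hit the lower bumper: force is positive
        p = x1 - x
        return k * p**e - damping(p) * xdot
    if x > x2:                        # hit the upper bumper: force is negative
        p = x - x2
        return -k * p**e - damping(p) * xdot
    return 0.0                        # inside the gap: no contact

# same parameters as the XML example above: bounds 0.5 and 9.5,
# k = 1e7, e = 2.1, c_max = 1, d = 0.001
print(bistop(0.4, 0.0, 0.5, 9.5, 1e7, 2.1, 1.0, 0.001))  # below lower bound
print(bistop(5.0, 1.0, 0.5, 9.5, 1e7, 2.1, 1.0, 0.001))  # inside the gap
```

The sign convention matches the argument descriptions: a positive restoring force below $x_1$, a negative one above $x_2$, and zero in between.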
https://lhcfitnikhef.github.io/SMEFT/results/smefit20.html
# SMEFiT2.0 Here we present the results of the SMEFiT global analysis described in the following publication • Combined SMEFT interpretation of Higgs, diboson, and top quark data from the LHC, J. Ethier, G.Magni, F. Maltoni, L. Mantani, E. R. Nocera, J. Rojo, E. Slade, E. Vryonidou, C. Zhang [EMM+21] This work develops a global interpretation of Higgs, diboson, and top quark production and decay measurements from the LHC in the framework of the SMEFT at dimension six. We constrain simultaneously 36 independent directions in its parameter space, and compare the outcome of the global analysis with that from individual and two-parameter fits. Our results are obtained by means of state-of-the-art theoretical calculations for the SM and the EFT cross-sections, and account for both linear and quadratic corrections in the EFT expansion. We assess the interplay and complementarity between the top quark, Higgs, and diboson measurements, deploy a variety of statistical estimators to quantify the impact of each dataset in the parameter space, and carry out fits in BSM-inspired scenarios such as the top-philic model. In this page you find information about the input and results from this global EFT analysis: ## Operator basis The EFT analysis presented in this work is based on the following operator basis, see Sect. 2 in the paper for the explicit definition of the various degrees of freedom and the flavour assumptions adopted. We categorize these EFT coefficients into four disjoint classes: four-quark (two-light-two-heavy), four-quark (four-heavy), two-fermion, and purely bosonic operators. Note that some of these coefficients are not independent but rather related among them via the EWPOs In the table below we list the EFT coefficients considered in this work together with the notation used in the plots and in the output files associated to the analysis results. 
| Class | EFT coefficients | Notation |
|---|---|---|
| Two-light-two-heavy | $c_{Qq}^{1,8},~c_{Qq}^{1,1},~c_{Qq}^{3,8},~c_{Qq}^{3,1},~c_{tq}^{8},~c_{tq}^{1}$ | c81qq, c11qq, c83qq, c13qq, c8qt, c1qt |
| Two-light-two-heavy | $c_{tu}^{8},~c_{tu}^{1},~c_{Qu}^{8},~c_{Qu}^{1},~c_{td}^{8},~c_{td}^{1},~c_{Qd}^{8},~c_{Qd}^{1}$ | c8ut, c1ut, c8qu, c1qu, c8dt, c1dt, c8qd, c1qd |
| Four-heavy | $c_{QQ}^1,~c_{QQ}^8,~c_{Qt}^1,~c_{Qt}^8,~c_{tt}^1$ | cQQ1, cQQ8, cQt1, cQt8, ctt1 |
| Two-fermion | $c_{t\varphi},~c_{tG},~c_{b\varphi},~c_{c\varphi},~c_{\tau\varphi},~c_{tW}$ | ctp, ctG, cbp, ccp, ctap, ctW |
| Two-fermion | $c_{tZ},~c_{\varphi Q}^{(3)},~c_{\varphi Q}^{(-)},~c_{\varphi t},~c_{\varphi l_1}^{(1)},~c_{\varphi l_1}^{(3)},~c_{\varphi l_2}^{(1)},~c_{\varphi l_2}^{(3)},~c_{\varphi l_3}^{(1)}$ | ctZ, c3pQ3, cpQM, cpt, cpl1, c3pl1, cpl2, c3pl2, cpl3 |
| Two-fermion | $c_{\varphi l_3}^{(3)},~c_{\varphi e},~c_{\varphi \mu},~c_{\varphi \tau},~c_{\varphi q}^{(3)},~c_{\varphi q}^{(-)},~c_{\varphi u},~c_{\varphi d}$ | c3pl3, cpe, cpmu, cpta, c3pq, cpqMi, cpui, cpdi |
| Purely bosonic | $c_{\varphi G},~c_{\varphi B},~c_{\varphi W},~c_{\varphi d},~c_{\varphi W B},~c_{\varphi D},~c_{WWW}$ | cpG, cpB, cpW, cpd, cpWB, cpD, cWWW |

## 95% CL intervals

Here we provide the 95% CL intervals derived for the EFT coefficients that enter our global analysis. We provide these results both when fitting all coefficients simultaneously (and then marginalising) as well as at the level of individual fits, where only one coefficient is varied and the rest are set to their SM values. Furthermore, we also provide results for linear fits, based on $\mathcal{O}(\Lambda^{-2})$ EFT calculations, and for quadratic fits, where the previous calculations also include the $\mathcal{O}(\Lambda^{-4})$ corrections. These 95% CL intervals have been determined assuming $\Lambda=1~{\rm TeV}$; results for other values of $\Lambda$ can be obtained by rescaling.
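For linear fits the rescaling just mentioned is exact: only the ratio $c/\Lambda^2$ enters the observables, so a bound quoted at $\Lambda = 1~{\rm TeV}$ is multiplied by $(\Lambda'/1~{\rm TeV})^2$ at a new scale $\Lambda'$. A minimal sketch, using one of the ctG linear intervals from the table as an example (quadratic fits mix $\Lambda^{-2}$ and $\Lambda^{-4}$ terms and do not rescale this simply):

```python
def rescale_interval(lo, hi, new_lambda_tev):
    """Rescale a 95% CL interval on a coefficient quoted at Lambda = 1 TeV.

    Valid for linear (O(Lambda^-2)) fits only: observables depend on
    c / Lambda^2, so the bound on c scales with (Lambda' / 1 TeV)^2.
    """
    s = new_lambda_tev ** 2
    return lo * s, hi * s

# ctG linear bound at 1 TeV, translated to Lambda = 2 TeV
print(rescale_interval(-0.127, 0.403, 2.0))
```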
| Coefficient | Individual, linear | Marginalised, linear | Individual, quadratic | Marginalised, quadratic |
|---|---|---|---|---|
| cQQ1 | [-6.132,23.281] | [-190,189] | [-2.229,2.019] | [-2.995,3.706] |
| cQQ8 | [-26.471,57.778] | [-190,170] | [-6.812,5.834] | [-11.177,8.170] |
| cQt1 | [-195,159] | [-190,189] | [-1.830,1.862] | [-1.391,1.251] |
| cQt8 | [-5.722,20.105] | [-190,162] | [-4.213,3.346] | [-3.040,2.202] |
| ctt1 | [-2.782,12.114] | [-115,153] | [-1.151,1.025] | [-0.791,0.714] |
| c81qq | [-0.273,0.509] | [-2.258,4.822] | [-0.373,0.309] | [-0.555,0.236] |
| c11qq | [-3.603,0.307] | [-8.047,9.400] | [-0.303,0.225] | [-0.354,0.249] |
| c83qq | [-1.813,0.625] | [-3.014,7.365] | [-0.470,0.439] | [-0.462,0.497] |
| c13qq | [-0.099,0.155] | [-0.163,0.296] | [-0.088,0.166] | [-0.167,0.197] |
| c8qt | [-0.396,0.612] | [-4.035,4.394] | [-0.483,0.393] | [-0.687,0.186] |
| c1qt | [-0.784,2.771] | [-12.382,6.626] | [-0.205,0.271] | [-0.222,0.226] |
| c8ut | [-0.774,0.607] | [-16.952,0.368] | [-0.911,0.347] | [-1.118,0.260] |
| c1ut | [-6.046,0.424] | [-15.565,15.379] | [-0.380,0.293] | [-0.383,0.331] |
| c8qu | [-1.508,1.022] | [-12.745,13.758] | [-1.007,0.521] | [-1.002,0.312] |
| c1qu | [-0.938,2.462] | [-16.996,1.072] | [-0.281,0.371] | [-0.207,0.339] |
| c8dt | [-1.458,1.365] | [-5.494,25.358] | [-1.308,0.638] | [-1.329,0.643] |
| c1dt | [-9.504,-0.086] | [-27.673,11.356] | [-0.449,0.371] | [-0.474,0.347] |
| c8qd | [-2.393,2.042] | [-24.479,11.233] | [-1.615,0.888] | [-1.256,0.715] |
| c1qd | [-0.889,6.459] | [-3.239,34.632] | [-0.332,0.436] | [-0.370,0.384] |
| ctp | [-1.331,0.355] | [-5.739,3.435] | [-1.286,0.348] | [-2.319,2.797] |
| ctG | [0.007,0.111] | [-0.127,0.403] | [0.006,0.107] | [0.062,0.243] |
| cbp | [-0.006,0.040] | [-0.033,0.105] | [-0.007,0.035], [-0.403,-0.360] | [-0.035,0.047], [-0.430,-0.338] |
| ccp | [-0.025,0.117] | [-0.316,0.134] | [-0.004,0.370] | [-0.096,0.484] |
| ctap | [-0.026,0.035] | [-0.027,0.044] | [-0.027,0.040], [0.395,0.462] | [-0.019,0.037], [0.389,0.480] |
| ctW | [-0.093,0.026] | [-0.313,0.123] | [-0.084,0.029] | [-0.241,0.086] |
| ctZ | [-0.039,0.099] | [-15.869,5.636] | [-0.044,0.094] | [-1.129,0.856] |
| cpl1 | [-0.664,1.016] | [-0.244,0.375] | [-0.281,0.343] | [-0.106,0.129] |
| c3pl1 | [-0.472,0.080] | [-0.098,0.120] | [-0.432,0.062] | [-0.209,0.046] |
| cpl2 | [-0.664,1.016] | [-0.244,0.375] | [-0.281,0.343] | [-0.106,0.129] |
| c3pl2 | [-0.472,0.080] | [-0.098,0.120] | [-0.432,0.062] | [-0.209,0.046] |
| cpl3 | [-0.664,1.016] | [-0.244,0.375] | [-0.281,0.343] | [-0.106,0.129] |
| c3pl3 | [-0.472,0.080] | [-0.098,0.120] | [-0.432,0.062] | [-0.209,0.046] |
| cpe | [-1.329,2.033] | [-0.487,0.749] | [-0.562,0.687] | [-0.213,0.258] |
| cpmu | [-1.329,2.033] | [-0.487,0.749] | [-0.562,0.687] | [-0.213,0.258] |
| cpta | [-1.329,2.033] | [-0.487,0.749] | [-0.562,0.687] | [-0.213,0.258] |
| c3pq | [-0.472,0.080] | [-0.098,0.120] | [-0.432,0.062] | [-0.209,0.046] |
| c3pQ3 | [-0.350,0.353] | [-1.145,0.740] | [-0.375,0.344] | [-0.615,0.481] |
| cpqMi | [-2.905,0.490] | [-0.171,0.106] | [-2.659,0.381] | [-0.060,0.216] |
| cpQM | [-0.998,1.441] | [-1.690,11.569] | [-1.147,1.585] | [-2.250,2.855] |
| cpui | [-1.355,0.886] | [-0.499,0.325] | [-0.458,0.375] | [-0.172,0.142] |
| cpdi | [-0.443,0.678] | [-0.162,0.250] | [-0.187,0.229] | [-0.071,0.086] |
| cpt | [-2.087,2.463] | [-3.270,18.267] | [-3.028,2.195] | [-13.260,3.955] |
| cpG | [-0.002,0.005] | [-0.043,0.012] | [-0.002,0.005] | [-0.019,0.003] |
| cpB | [-0.005,0.002] | [-0.739,0.289] | [-0.005,0.002], [0.085,0.092] | [-0.114,0.108] |
| cpW | [-0.018,0.007] | [-0.592,0.677] | [-0.016,0.007], [0.281,0.305] | [-0.145,0.303] |
| cpWB | [-2.905,0.490] | [-0.462,0.694] | [-2.659,0.381] | [-0.170,0.273] |
| cpd | [-0.428,1.214] | [-2.002,3.693] | [-0.404,1.199], [-34.04,-32.61] | [-1.523,1.482] |
| cpD | [-4.066,2.657] | [-1.498,0.974] | [-1.374,1.124] | [-0.516,0.425] |
| cWWW | [-1.057,1.318] | [-1.049,1.459] | [-0.208,0.236] | [-0.182,0.222] |

## Analysis code

The results collected in the table above have been obtained for the global input dataset and for the baseline theory settings, which in particular include NLO QCD corrections to the EFT cross-sections whenever available. However, many users might be interested in evaluating statistical estimators for fits based on different input datasets, different settings of the EFT calculations, or for fits where the parameter space has been restricted as motivated by specific UV-complete models.
For example, one may wish to compare with the outcome of the global fit based on LO EFT calculations, or to use the bounds derived in the top-philic scenario. The posterior probability distributions corresponding to all the fits presented in this work are made available in the SMEFiT GitHub public repository, together with Python analysis code to process them and produce a range of statistical estimators. For instance, upon choosing a given fit (see Released fits), whose output is constituted by $N_{\rm spl}$ posterior samples for the $n_{\rm op}=49$ EFT coefficients considered in this analysis, the analysis code can evaluate statistical estimators such as means, standard deviations, and correlations,

$\left\langle c_i\right\rangle = \frac{1}{N_{\rm spl}} \sum_{k=1}^{N_{\rm spl}}c_i^{(k)} \, ,\quad i=1,\ldots,n_{\rm op} \, ,$

$\sigma_{c_i} = \left( \frac{1}{N_{\rm spl}-1} \sum_{k=1}^{N_{\rm spl}} \left( c_i^{(k)}- \left\langle c_i\right\rangle \right)^2 \right)^{1/2} \, ,\quad i=1,\ldots,n_{\rm op} \, ,$

$\rho(c_i,c_j) = \left( \frac{1}{N_{\rm spl}} \sum_{k=1}^{N_{\rm spl}}c_i^{(k)}c_j^{(k)} - \left\langle c_i\right\rangle \left\langle c_j\right\rangle \right) \Bigg/ \left( \sigma_{c_i} \sigma_{c_j} \right) \, ,\quad i,j=1,\ldots,n_{\rm op} \, ,$

as well as other estimators such as confidence level intervals and higher moments beyond the parabolic approximation. It then produces a range of plots such as the ones displayed below.

## Released fits

The list of fits that are made available with this release of the SMEFiT global analysis is summarised in the table below. For each configuration we provide a runcard and the corresponding posterior distributions stored in json format.
| Descriptor | Dataset | Theory settings | Methodology |
|---|---|---|---|
| Baseline | Global | NLO QCD in EFT + $\mathcal{O}(\Lambda^{-4})$ | Default |
| Baseline (linear) | Global | NLO QCD in EFT + $\mathcal{O}(\Lambda^{-2})$ | Default |
| LO EFT | Global | LO QCD in EFT + $\mathcal{O}(\Lambda^{-4})$ | Default |
| LO EFT (linear) | Global | LO QCD in EFT + $\mathcal{O}(\Lambda^{-2})$ | Default |
| Higgs-only | Higgs-only | NLO QCD in EFT + $\mathcal{O}(\Lambda^{-4})$ | Default |
| Top-only | Top-only | NLO QCD in EFT + $\mathcal{O}(\Lambda^{-4})$ | Default |
| No Diboson | Diboson cross-sections removed | NLO QCD in EFT + $\mathcal{O}(\Lambda^{-4})$ | Default |
| No high-E | Bins with $E\ge 1~{\rm TeV}$ removed | NLO QCD in EFT + $\mathcal{O}(\Lambda^{-4})$ | Default |
| Top-philic | Global | NLO QCD in EFT + $\mathcal{O}(\Lambda^{-4})$ | Constraints from top-philic scenario |

Requests for variants of these fits should be addressed to the SMEFiT authors.
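The estimators written out in the Analysis code section above can be computed from an array of posterior samples with a few lines of NumPy. The sketch below uses synthetic samples rather than a released posterior, so the file layout is not the SMEFiT json schema; note also that `np.corrcoef` uses the sample-covariance normalisation rather than the exact $1/N_{\rm spl}$ convention of the formulas above.

```python
import numpy as np

def posterior_estimators(samples):
    """samples: (N_spl, n_op) array of posterior samples for the coefficients.

    Returns means, standard deviations (1/(N_spl - 1) normalisation, as in
    the sigma_{c_i} formula), the correlation matrix, and 95% CL intervals
    taken here as the 2.5/97.5 percentiles of the samples.
    """
    mean = samples.mean(axis=0)
    std = samples.std(axis=0, ddof=1)
    corr = np.corrcoef(samples, rowvar=False)
    lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
    return mean, std, corr, np.column_stack([lo, hi])

# synthetic "posterior": three correlated coefficients, 10000 samples
rng = np.random.default_rng(0)
mixing = np.array([[1.0, 0.5, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
samples = rng.normal(size=(10000, 3)) @ mixing

mean, std, corr, ci = posterior_estimators(samples)
print(corr[0, 1])   # the first two coefficients are positively correlated
```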
https://www.transtutors.com/questions/14-36-lo-3-in-evaluating-the-reasonableness-of-significant-accounting-estimates-the--1357832.htm
# 14-36 (LO 3)

In evaluating the reasonableness of significant accounting estimates, the auditor should consider which of the following?

a. The significance of the estimate.
b. The sensitivity of the estimate to variations.
c. The sensitivity of the estimate to misstatement and bias.
d. All of the above.
https://www.physicsforums.com/threads/quick-deprotonation-question.78876/
# Quick deprotonation question

## Main Question or Discussion Point

Why is it that when methylamine ($$CH_3 NH_2$$) loses a proton, you get $$CH_3 NH$$ (with a negative charge on nitrogen) rather than $$CH_2 = NH_2$$ (with a positive charge on nitrogen)?
https://www.aimsciences.org/article/doi/10.3934/ipi.2020012?viewType=html
# Enhanced image approximation using shifted rank-1 reconstruction

Low rank approximation has been extensively studied in the past. It is most suitable to reproduce rectangular-like structures in the data. In this work we introduce a generalization using "shifted" rank-$1$ matrices to approximate $\boldsymbol{A}\in \mathbb{C}^{M\times N}$. These matrices are of the form $S_{\boldsymbol{\lambda}}(\boldsymbol{u}\boldsymbol{v}^*)$ where $\boldsymbol{u}\in \mathbb{C}^M$, $\boldsymbol{v}\in \mathbb{C}^N$ and $\boldsymbol{\lambda}\in \mathbb{Z}^N$. The operator $S_{\boldsymbol{\lambda}}$ circularly shifts the $k$-th column of $\boldsymbol{u}\boldsymbol{v}^*$ by $\lambda_k$. These kinds of shifts naturally appear in applications, where an object $\boldsymbol{u}$ is observed in $N$ measurements at different positions indicated by the shift $\boldsymbol{\lambda}$. The vector $\boldsymbol{v}$ gives the observation intensity. This model holds for seismic waves that are recorded at $N$ sensors at different times $\boldsymbol{\lambda}$. Other examples are a car that moves through a video, changing its position $\boldsymbol{\lambda}$ in each of the $N$ frames, or non-destructive testing based on ultrasonic waves that are reflected by defects inside the material. The main difficulty of the above stated problem lies in finding a suitable shift vector $\boldsymbol{\lambda}$. Once the shift is known, a simple singular value decomposition can be applied to reconstruct $\boldsymbol{u}$ and $\boldsymbol{v}$. We propose a greedy method to reconstruct $\boldsymbol{\lambda}$. By using the formulation of the problem in the Fourier domain, a shifted rank-$1$ approximation can be calculated in $O(NM\log M)$.
Convergence to a locally optimal solution is guaranteed. Furthermore, we give a heuristic initial guess strategy that shows good results in the numerical experiments. We validate our approach in several numerical experiments on different kinds of data. We compare the technique to shift-invariant dictionary learning algorithms. Furthermore, we provide examples from applications including object segmentation in non-destructive testing and seismic exploration as well as object tracking in video processing.

Mathematics Subject Classification: Primary: 65F18, 65T99; Secondary: 86A22.

- Figure 1. Input data: Cartoon-like, natural, ultrasonic and seismic images
- Figure 2. (a) Singular value ratio to $\||\hat{\boldsymbol{A}}|\|_2$ after the individual steps of Algorithm 3. (b) Average approximation error over all kinds of input data
- Figure 3. Reconstruction of Lena image using 1, 5 and 10 shifted rank-$1$ matrices
- Figure 4. Approximation error of all algorithms for different kinds of input data plotted against the storage costs
- Figure 5. Sparse approximation of image "phantom" using Wavelets (left), SR1 (middle) and UC-DLA (right)
- Figure 6. (a) Average runtime of data approximation against the number of rows. (b) Average runtime of matrix-vector multiplication using different numbers of shifted rank-$1$ matrices
- Figure 7. Separation of an ultrasonic image in two signals (top and bottom) using SR1 (left), MoTIF (middle) and UC-DLA (right)
- Figure 8. Identified earth layer reflection in noisy seismic image
- Figure 9. Tracked route in original video (top) and reconstructed singular vectors $\boldsymbol{u}^1$, $\boldsymbol{u}^2$ (middle); as comparison the reconstructed background and person using MAMR is shown (bottom)
- Figure 10. First and last frame of the soccer video clip
- Figure 11. Tracked objects in soccer clip (top): advertising banners, referee and time stamp. As comparison the reconstructed background and foreground using MAMR is shown (bottom)

Table 1. Mean number of iterations for different kinds of input data:

| data | global | local | total |
|---|---|---|---|
| orthogonal | 139 | 21 | 160 |
| natural | 137 | 24 | 161 |
| cartoon | 91 | 15 | 106 |
| seismic | 95 | 16 | 111 |
| ultrasound | 95 | 18 | 113 |

[1] P. Amestoy, C. Ashcraft, O. Boiteau, A. Buttari, J.-Y. L'Excellent and C. Weisbecker, Improving multifrontal methods by means of block low-rank representations, SIAM J. Sci. Comput., 37 (2015), A1451–A1474. doi: 10.1137/120903476. [2] S. Bhojanapalli, B. Neyshabur and N. Srebro, Global optimality of local search for low rank matrix recovery, Advances in Neural Information Processing Systems, 29 (2016), 3873-3881. [3] M. Brand, Fast low-rank modifications of the thin singular value decomposition, Linear Algebra Appl., 415 (2006), 20-30.  doi: 10.1016/j.laa.2005.07.021. [4] J.-F. Cai, E. J. Candes and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956-1982.  doi: 10.1137/080738970. [5] E. J. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, Journal of the ACM (JACM), 58 (2011), 11. doi: 10.1145/1970392.1970395. [6] M. Chu, R. E. Funderlic and R. J. Plemmons, Structured low rank approximation, Linear Algebra Appl., 366 (2003), 157-172.  doi: 10.1016/S0024-3795(02)00505-0. [7] M. A. Davenport and J. Romberg, An overview of low-rank matrix recovery from incomplete observations, IEEE J. Selected Topics Signal Process., 10 (2016), 608-622.  doi: 10.1109/JSTSP.2016.2539100. [8] L. Fang, J. Xu, H. Hu, Y. Chen, P. Shi, L. Wang and H. Liu, Noninvasive imaging of epicardial and endocardial potentials with low rank and sparsity constraints, IEEE Trans. Biomedical Engineering. doi: 10.1109/TBME.2019.2894286. [9] A. Frieze, R. Kannan and S. Vempala, Fast monte-carlo algorithms for finding low-rank approximations, J. ACM, 51 (2004), 1025-1041.  doi: 10.1145/1039488.1039494. [10] D. Goldfarb and Z. T.
Qin, Robust low-rank tensor recovery: Models and algorithms, SIAM J. Matrix Anal. & Appl., 35 (2014), 225-253.  doi: 10.1137/130905010. [11] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th edition, The Johns Hopkins University Press, Baltimore, 2013. [12] L. Grasedyck, D. Kressner and C. Tobler, A literature survey of low-rank tensor approximation techniques, GAMM-Mitt., 36 (2013), 53-78.  doi: 10.1002/gamm.201310004. [13] D. Gross, Recovering low-rank matrices from few coefficients in any basis, Trans. Inform. Theory, 57 (2011), 1548-1566.  doi: 10.1109/TIT.2011.2104999. [14] C. J. Hillar and L.-H. Lim, Most tensor problems are np-hard, J. ACM, 60 (2013), 45. doi: 10.1145/2512329. [15] J. Hui, C. Liu, Z. Shen and Y. Xu, Robust video denoising using low rank matrix completion, IEE Conf. on Computer Vision and Pattern Regonition, 1791–1798. [16] P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, Proc. of the 45th ACM symposium on Theory of computing, 665–674. doi: 10.1145/2488608.2488693. [17] Y. Jia, S. Yu, L. Liu and J. Ma, Orthogonal rank-one matrix pursuit for 3d seismic data interpolation, Journal of Applied Geophysics, 132 (2016), 137-145. [18] P. Jost, P. Vandergheynst, S. Lesage and R. Gribonval, Motif: An efficient algorithm for learning translation invariant dictionaries, IEEE ICASSP 2006 Proceedings, 5. doi: 10.1109/ICASSP.2006.1661411. [19] E. Liberty, F. Woolfe, P. G. Martinsson, V. Rokhlin and M. Tygert, Randomized algorithms for the low-rank approximation of matrices, Proc. Nat. Acad. Sci. USA, 104 (2007), 20167-20172.  doi: 10.1073/pnas.0709640104. [20] G. Liu, Z. Lin and Y. Yu, Robust subspace segmentation by low-rank representation, Proc. of the 27th International Conference on Machine Learning, ICML-10 (2010), 663-670. [21] X. Liu, G. Zhao, J. Yao and C. Qi, Background subtraction based on low-rank and structured sparse decomposition, IEEE Trans. Image Process., 24 (2015), 2502-2514.  
doi: 10.1109/TIP.2015.2419084. [22] C. Lu, J. Tang, S. Yan and Z. Lin, Generalized nonconvex nonsmooth low-rank minimization, IEE Conf. on Computer Vision and Pattern Regonition, 4130–4137. doi: 10.1109/CVPR.2014.526. [23] J. Ma, Three-dimensional irregular seismic data reconstruction via low-rank matrix completion, Geophysics, 78 (2013), V181–V192. doi: 10.1190/geo2012-0465.1. [24] B. Mailhé, S. Lesage, R. Gribonval, F. Bimbot and P. Vandergheynst, Shift-invariant dictionary learning for sparse representations: extending k-svd, 16th IEEE European Signal Processing Conf., 1–5. [25] R. Otazo, E. J. Candes and D. K. Sodickson, Low-rank plus sparse matrix decomposition for accelerated dynamic mri with separation of background and dynamic components, Magnetic Resonance in Medicine, 73 (2015), 1125-1136.  doi: 10.1002/mrm.25240. [26] S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar and B. Hassibi, Simultaneously structured models with application to sparse and low-rank matrices, IEEE Trans. Inform. Theory, 61 (2015), 2886-2908.  doi: 10.1109/TIT.2015.2401574. [27] H. Park, L. Zhang and J. B. Rosen, Low rank approximation of a hankel matrix by structured total least norm, BIT, 39 (1999), 757-779.  doi: 10.1023/A:1022347425533. [28] M. Rahmani and G. K. Atia, High dimensional low rank plus sparse matrix decomposition, IEEE Trans. Signal Process., 65 (2017), 2004-2019.  doi: 10.1109/TSP.2017.2649482. [29] B. Recht, M. Fazel and P. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), 471-501.  doi: 10.1137/070697835. [30] M. D. Rodriguez, J. Ahmed and M. Shah, Action mach: A spatio-temporal maximum average correlation height filter for action recognition, IEEE 2008 Conf. Computer Vision and Pattern Recognition, 1–8. doi: 10.1109/CVPR.2008.4587727. [31] C. Rusu, B. Dumitrescu and S. Tsaftaris, Explicit shift-invariant dictionary learning, IEEE Signal Process. Letters, 21 (2014), 6-9.  
doi: 10.1109/LSP.2013.2288788. [32] X. Shen and Y. Wu, A unified approach to salient object detection via low rank matrix recovery, IEE Conf. on Computer Vision and Pattern Regonition, 853–860. [33] X. Shen and Y. Wu, A unified approach to salient object detection via low rank matrix recovery, IEEE Conf. on Computer Vision and Pattern Recognition, 853–860. [34] F. Shi, J. Cheng, L. Wang, P. T. Yap and D. Shen, Lrtv: Mr image super-resolution with low-rank and total variation regularizations, IEE Trans. Medical Imaging, 34 (2015), 2459-2466.  doi: 10.1109/TMI.2015.2437894. [35] P. J. Shin, P. E. Larson, M. A. Ohliger, M. Elad, J. M. Pauly, D. B. Vigneron and M. Lustig, Calibrationless parallel imaging reconstruction based on structured low–rank matrix completion, Magnetic resonance in medicine, 72 (2014), 959-970.  doi: 10.1002/mrm.24997. [36] A. Sobral, T. Bouwmans and E. Zahzah, Lrslibrary: Low-rank and sparse tools for background modeling and subtraction in videos, in Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing, CRC Press, Taylor and Francis Group., 2015. doi: 10.1201/b20190-24. [37] K. Soomro and A. R. Zamir, Action recognition in realistic sports videos, computer vision in sports, Springer Computer vision in sports, 181–208. [38] P. Sprechmann, A. M. Bronstein and G. Sapiro, Learning efficient sparse and low rank models, IEEE Trans. Pattern Anal. and Machine Intell., 37 (2015), 1821-1833.  doi: 10.1109/TPAMI.2015.2392779. [39] N. Srebro and T. Jaakkola, Weighted low-rank approximations, Proc. of the 20th International Conference on Machine Learning, ICML-03 (2003), 720-727. [40] M. Tao and X. Yuan, Recovering low-rank and sparse components of matrices from incomplete and noisy observations, SIAM Journal on Optimization, 21 (2011), 57-81.  doi: 10.1137/100781894. [41] J. J. Thiagarajan, K. N. Ramamurthy and A. 
Spanias, Shift-invariant sparse representation of images using learned dictionaries, IEEE Workshop on Machine Learning for Signal Processing, 145–150. [42] I. Tosic and P. Frossard, Dictionary learning, IEEE Signal Process. Magazine, 28 (2011), 27-38. [43] S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi and B. Recht, Low-rank solutions of linear matrix equations via procrustes flow, Proc. of the 33rd International Conference on Machine Learning, ICML-48 (2016), 964-973. [44] M. Udell, C. Horn, R. Zadeh and S. Boyd, Generalized low rank models, Found. Trends Machine Learning, 9 (2016), 1-118. [45] A. E. Waters, A. C. Sankaranarayanan and R. Baraniuk, Sparcs: Recovering low-rank and sparse matrices from compressive measurements, Advances in neural information processing systems, 1089–1097. [46] J. Wright, A. Ganesh, S. Rao, Y. Peng and Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization, Advances in neural information processing systems, 2080–2088. [47] J. Ye, Generalized low rank approximations of matrices, Mach. Learn., 61 (2005), 167-191. [48] X. Ye, J. Yang, X. Sun, K. Li, C. Hou and Y. Wang, Foreground-background separation from video clips via motion-assisted matrix restoration, IEE Trans. Circuits and Systems for Video Technology, 25 (2015), 1721-1734. [49] C. Zhang, J. Liu, C. Liang, Z. Xue, J. Pang and Q. Huang, Image classification by non-negative sparse coding, correlation constrained low-rank and sparse decomposition, Computer Vision and Image Understanding, 123 (2014), 14-22. [50] T. Zhang, J. M. Pauly and I. R. Levesque, Accelerating parameter mapping with a locally low rank constraint, Magnetic resonance in medicine, 73 (2015), 655-661. [51] Z. Zhang and H. Liu, Nonlocal total variation based dynamic pet image reconstruction with low-rank constraints, Physica Scripta, 94 (2019), 065202. [52] G. Zheng, Y. Yang and J. Carbonell, Efficient shift-invariant dictionary learning, Proc. 
22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, 2095–2104. [53] G. Zhou, A. Cichocki and S. Xie, Fast nonnegative matrix/tensor factorization based on low-rank approximation, IEEE Trans. Signal Process., 60 (2012), 2928-2940.  doi: 10.1109/TSP.2012.2190410. [54] T. Zhou and D. Tao, Godec: Randomized low-rank & sparse matrix decomposition in noisy case, Proc. of the 20th International Conference on Machine Learning, 33–40.
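Returning to the method described in this paper's abstract: the shift-then-rank-1 idea can be illustrated by alternating FFT-based shift estimation with a rank-1 SVD refit. This sketch makes its own choices for initialization, template and update order, so it is an illustration of the principle rather than the authors' Algorithm 3.

```python
import numpy as np

def shifted_rank1(A, n_iter=5):
    """Greedy shifted rank-1 approximation of A (M x N), illustration only.

    Alternates (i) estimating a circular shift per column by FFT
    cross-correlation against the current template u, and (ii) re-fitting
    u, v via a rank-1 SVD of the de-shifted matrix.
    """
    M, N = A.shape
    u = A[:, 0].astype(float)            # initial template: first column
    lam = np.zeros(N, dtype=int)
    for _ in range(n_iter):
        Fu = np.fft.fft(u)
        for k in range(N):
            # circular cross-correlation of column k with the template
            xcorr = np.fft.ifft(np.fft.fft(A[:, k]) * np.conj(Fu)).real
            lam[k] = int(np.argmax(xcorr))           # best circular shift
        # de-shift all columns, then take the leading singular pair
        B = np.column_stack([np.roll(A[:, k], -lam[k]) for k in range(N)])
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        u, v = U[:, 0] * s[0], Vt[0]
        if v.sum() < 0:                  # fix the SVD sign ambiguity
            u, v = -u, -v
    approx = np.column_stack([np.roll(u * v[k], lam[k]) for k in range(N)])
    return approx, u, v, lam

# synthetic data: one pulse, circularly shifted differently in each column
M, N = 64, 8
pulse = np.exp(-0.5 * ((np.arange(M) - 10) / 2.0) ** 2)
shifts = [0, 3, 7, 12, 20, 33, 41, 50]
A = np.column_stack([np.roll(pulse, s) for s in shifts])

approx, u, v, lam = shifted_rank1(A)
print(np.linalg.norm(A - approx) / np.linalg.norm(A))  # small residual
```

On this noise-free example the shifts are recovered exactly and the residual drops to machine precision; each FFT-based shift search costs $O(M\log M)$ per column, matching the $O(NM\log M)$ complexity quoted in the abstract.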
http://koreascience.or.kr/article/JAKO199601919936742.page?lang=en
# Determination of $^{89}\mathrm{Sr}$ and $^{90}\mathrm{Sr}$ in Soil Samples Using Crown Ether/Chloroform Solvent Extraction

• Published : 1996.03.30

#### Abstract

For the determination of the radiostrontium isotopes $^{89}\mathrm{Sr}$ and $^{90}\mathrm{Sr}$ in environmental soil samples, a solvent extraction method using crown ether was investigated for separating Ca and Sr from the sample matrix. Compared with the existing fuming nitric acid method, the extraction method showed a high chemical yield of strontium and provided simpler and more rapid analytical steps. The new method was applied to the determination of radiostrontium in soil samples collected around a nuclear power station, showing that the analytical procedure is readily applicable to practical radioactivity monitoring.
https://optimization-online.org/tag/metric-subregularity/
## Calmness of a perturbed Cournot Oligopoly Game with nonsmooth cost functions

This article deals with the calmness of a solution map of a Cournot Oligopoly Game with nonsmooth cost functions. The fact that the cost functions are not assumed to be differentiable allows for considering cases where some firms have different units of production, which have different marginal costs. In order to obtain results about the …

## Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems

We understand linear convergence of some first-order methods such as the proximal gradient method (PGM), the proximal alternating linearized minimization (PALM) algorithm and the randomized block coordinate proximal gradient method (R-BCPGM) for minimizing the sum of a smooth convex function and a nonsmooth convex function from a variational analysis perspective. We introduce a new analytic …

## Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis

Despite the rich literature, the linear convergence of the alternating direction method of multipliers (ADMM) has not been fully understood even for the convex case. For example, the linear convergence of ADMM can be empirically observed in a wide range of applications, while existing theoretical results seem to be too stringent to be satisfied or too …

## Inner Conditions for Error Bounds and Metric Subregularity of Multifunctions

We introduce a new class of sets, functions and multifunctions which is shown to be large and to enjoy some nice common properties with the convex setting.
Error bounds for objects attached to this class are characterized in terms of inner conditions of Abadie's type, that is, conditions bearing on normal cones and coderivatives at …

## A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems

We develop a fast and robust algorithm for solving large scale convex composite optimization models with an emphasis on the $\ell_1$-regularized least squares regression (Lasso) problems. Despite the fact that there exist a large number of solvers in the literature for the Lasso problems, we found that no solver can efficiently handle difficult large scale …

## Perturbation of error bounds

Our aim in the current article is to extend the developments in Kruger, Ngai & Théra, SIAM J. Optim. 20(6), 3280–3296 (2010) and, more precisely, to characterize, in the Banach space setting, the stability of the local and global error bound property of inequalities determined by proper lower semicontinuous functions under data perturbations. We propose new …

## Directional Hölder metric subregularity and application to tangent cones

In this work, we study directional versions of the Hölderian/Lipschitzian metric subregularity of multifunctions. Firstly, we establish variational characterizations of the Hölderian/Lipschitzian directional metric subregularity by means of the strong slopes and next of mixed tangency-coderivative objects.
As a by-product, we give second-order conditions for the directional Lipschitzian metric subregularity and for the directional metric …

## Some criteria for error bounds in set optimization

We obtain sufficient and/or necessary conditions for global/local error bounds for the distances to some sets appearing in set optimization, studied with both the set approach and the vector approach (sublevel sets, constraint sets, sets of *all* Pareto efficient/Henig proper efficient/super efficient solutions, sets of solutions *corresponding to one* Pareto efficient/Henig proper …

## Hölder Metric Subregularity with Applications to Proximal Point Method

This paper is mainly devoted to the study and applications of Hölder metric subregularity (or metric $q$-subregularity of order $q\in(0,1]$) for general set-valued mappings between infinite-dimensional spaces. Employing advanced techniques of variational analysis and generalized differentiation, we derive neighborhood and point-based sufficient conditions as well as necessary conditions for $q$-metric subregularity with evaluating the exact …
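Several of the abstracts above refer to the proximal gradient method for the Lasso problem. As an illustration only (a minimal sketch, not code from any of the papers listed), PGM for Lasso amounts to a gradient step on the smooth part followed by the $\ell_1$ prox, i.e. soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t * ||.||_1: shrink each coordinate toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_pgm(A, b, lam, step, iters=500):
    # Proximal gradient method for (1/2)||Ax - b||^2 + lam * ||x||_1.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

With `A` equal to the identity, the iteration reduces to a single soft-thresholding of `b`, which makes the fixed point easy to check by hand.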
https://zbmath.org/?q=an%3A1057.20009
# zbMATH — the first resource for mathematics

A note on decomposition numbers of $$G_2(2^n)$$. (English) Zbl 1057.20009

Summary: The decomposition numbers of the finite Chevalley group $$G_2(p^n)$$ of type $$(G_2)$$ defined over a finite field of characteristic $$r$$ which divides $$p^n+1$$ were almost determined by G. Hiss [J. Algebra 120, No. 2, 339-360 (1989; Zbl 0667.20009)]. The author proves that the decomposition numbers are bounded independently of $$p^n$$ by using the same argument as T. Okuyama and K. Waki [J. Algebra 199, No. 2, 544-555 (1998; Zbl 0891.20009)] in the case of $$p=2$$.

##### MSC:

20C33 Representations of finite groups of Lie type
20G05 Representation theory for linear algebraic groups
20G40 Linear algebraic groups over finite fields

##### Keywords:

decomposition numbers; finite Chevalley groups

##### References:

[1] Carter, R.W., Finite groups of Lie type, Pure and Appl. Math., (1985) · Zbl 0458.20005
[2] Enomoto, H.; Yamada, H., The characters of $$G_2(2^n)$$, Japan. J. Math., 12, 2, (1986)
[3] Hiss, G., On the decomposition numbers of $$G_2(q)$$, J. Algebra, 120, 339-360, (1989) · Zbl 0667.20009
[4] Okuyama, T.; Waki, K., Decomposition numbers of $$Sp(4,q)$$, J. Algebra, 199, 544-555, (1998) · Zbl 0891.20009
http://mathhelpforum.com/algebra/32385-tricky-exponent-problem-number-sqeuence-problem.html
# Math Help - Tricky Exponent Problem & Number Sequence Problem

1. ## Tricky Exponent Problem & Number Sequence Problem

I've been at these problems for HOURS!!! In the expression below, each letter represents a different digit.

A^5 + B^5 + C^5 + D^5 + E^5 = ABCDE

and In the array of numbers below, what number should go where the question mark is?

3 11 21 41 91
6 14 15 23 53
3 5 6 8 ?

I would really appreciate any help I can get, thanks very much! (:

2. Originally Posted by Chinchilla Babii
I've been at these problems for HOURS!!! In the expression below, each letter represents a different digit. A^5 + B^5 + C^5 + D^5 + E^5 = ABCDE
What have you done so far? Note: the remainder when d^5 is divided by 10 is d for d = 0, 1, ..., 9, so you have: A + B + C + D = 0 modulo 10. RonL

3. Originally Posted by Chinchilla Babii
I've been at these problems for HOURS!!! ... In the array of numbers below, what number should go where the question mark is? 3 11 21 41 91 / 6 14 15 23 53 / 3 5 6 8 12 I would really appreciate any help I can get, thanks very much! (:
$ \begin{array}{cccccr} 3 & 11&21&41&91& \\ 6 & 14 & 15 & 23 & 53 & \\ 9 & 25 & 36 & 64 & 144 &\text{sum of the columns}\\ 3 & 5 & 6 & 8 & 12 &\text{square-root of the sums}\end{array}$

4. Originally Posted by CaptainBlack
What have you done so far? Note: the remainder when d^5 is divided by 10 is d for d = 0, 1, ..., 9, so you have: A + B + C + D = 0 modulo 10. RonL
I wasn't sure if you could use the number ten because it says DIGIT and ten is a two-digit number. I've tried 1,2,3,4,5 and 9,8,7,6,5 and others like that but still can't get the answer. And thank you Earboth, that answer really makes sense. (:

5. 00000
00001
04150
04151
54748
92727
93084
those are all the possible answers (and only the last has all different digits), but i used a simple program to find them. are you looking for a purely analytical solution?

6.
Originally Posted by xifentoozlerix
00000, 00001, 04150, 04151, 54748, 92727, 93084: those are all the possible answers (and only the last has all different digits), but i used a simple program to find them. are you looking for a purely analytical solution?
Yes, just a purely analytical solution, thanks.

7. oops. I was thinking that when it said ABCDE, it meant to multiply A times B times C... etc. Thanks so much!
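xifentoozlerix's "simple program" isn't shown in the thread; a brute-force search of that kind might look like this (my sketch, not the original code):

```python
# Search all 5-digit strings ABCDE (leading zeros allowed) whose digits
# satisfy A^5 + B^5 + C^5 + D^5 + E^5 = ABCDE.
def fifth_power_matches():
    hits = []
    for n in range(100000):
        if sum(int(d) ** 5 for d in f"{n:05d}") == n:
            hits.append(n)
    return hits

print(fifth_power_matches())  # [0, 1, 4150, 4151, 54748, 92727, 93084]
```

Only 93084 uses five distinct digits, matching the answer quoted above.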
https://byjus.com/free-ias-prep/economy-this-week-04may-to-17may-2020/
# Economy This Week (4th May to 17th May 2020)

Economy is an important part of the UPSC prelims and mains exams; this series titled ‘Economy This Week’ has been initiated to address the need to read and analyse economic articles in various business-related newspapers. The round-up of the Economy/Business section news for 4th May to 17th May 2020 is given below. Business news is essential for IAS exam preparation.

ETW 4 May – 17 May 2020:- Watch the video lecture of the business news roundup (4th May to 17th May 2020) below:

1. Remittance economy on a slippery slope (LM 4/5/20)
2. Stimulus and rating agencies (LM 4/5/20)
3. 99% of CKP Co-op bank depositors will get their deposits (BL 4/5/20)
4. SC says Coop banks on par with other lenders under Sarfaesi Act (LM 6/5/20)
5. Reduce the preference for cash (LM 12/5/20)
6. Monetising the deficit is not a good idea (BL 13/5/20)

1. Remittance economy on a slippery slope (LM 4/5/20)

• There are 10 million Indian workers (skilled and semi-skilled) working in the wider Gulf region.
• India received remittances worth $82 bn in 2019, sent back by these workers to support their families.
• With global oil prices plunging to their lowest, these economies are under stress, as their earnings mainly come from oil exports.
• As per the IMF, the Middle East and Central Asia would record a negative 2.8% growth rate for 2020.
• Some of the people registered to return are dependents of the workers, some have finished their work and some have been laid off. Post the 2008 Global Financial Crisis, around 2 lakh of the 6 lakh plus Indians returned.
• The migrants from 5 southern states support approximately 40 million Indians back home.
• Because of the pandemic, which has resulted in the free fall of crude, remittances have declined by 23% in 2020 to $64 billion (the sharpest fall in recent history as per the World Bank report).
• Kerala state:
• It accounts for every fourth migrant in the Gulf region.
• Its remittances crossed ₹ 1 trillion in 2019 and made up 36% of Kerala’s GDP and 60% of the state’s debt.

2. Stimulus and rating agencies (LM 4/5/20)

• With the rising economic disruption caused by the pandemic, there are warning signals of a deteriorating financial situation and a possible credit rating downgrade.
• All three rating agencies have slashed their GDP forecasts for India.
• Fitch has already warned that further deterioration in the financial sector would pressure the sovereign credit rating, in the light of the limited fiscal headroom India had when it entered the crisis.
• In this situation, it does not help that India has the lowest rating (BBB-) in the investment grade in both S&P and Fitch.
• Credit ratings have an influence on FDI and FPI. Foreign sovereign wealth funds, pension funds, etc. are bound by their rules not to invest in non-investment grade papers (junk bonds).
• However, many experts have pointed to the outflow of foreign investors as a result of the pandemic. Hence, the credit rating should not be a priority right now.
• Having said so, the Indian bond market is already under pressure, and another meltdown in this market could dent the country's image. On the other hand, the economy needs a stimulus, and depriving it of that would lead to a growth slowdown, which would again lead to a credit rating downgrade.
• One way to take care of this is for policy makers to become transparent over the fiscal numbers and make the case for a stimulus along with a fiscal glide path.

3. 99% of CKP Co-op bank depositors will get their deposits (BL 4/5/20)

• 99.2% of the depositors will receive full payment of their deposits from the Deposit Insurance and Credit Guarantee Corporation (DICGC).
• RBI has cancelled the license of the bank as there has been no scope of revival.
• The net worth of the bank has entered negative territory. As on March 31st, it stood at -₹ 247 Cr.

4.
SC says Coop banks on par with other lenders under Sarfaesi Act (LM 6/5/20)

• A 5-judge bench of the SC has unanimously ruled that cooperative banks are covered by the Sarfaesi Act 2002 (Securitisation and Reconstruction of Financial Assets and Enforcement of Security Interest Act).
• This is a stringent law which allows secured creditors to take possession of the assets of a borrower who fails to pay dues within 60 days of demanding repayment.
• The bench has stated that these banks come under the category of banks as defined under Section 2(1)(c).
• As per an RBI report, there are 1551 UCBs and 96612 rural cooperative banks as of March 2017. The latter group accounts for 65.8% of the total assets of all cooperative banks.
• With this ruling, cooperative banks will have better control over handling defaults and over defaulters in negotiations.

5. Reduce the preference for cash (LM 12/5/20)

• The cash held by the public has surged post the pandemic. By 24th April, there were currency notes worth about ₹ 24.2 trillion in the hands of the people, despite the apprehension of the RBI that the notes could end up as a carrier for the virus. By the end of April, there was more cash in the hands of the public than there was before demonetisation (₹ 18 trillion).
• Money (in the form of cash) is kept for three reasons:
  • To spend on consumption
  • To invest and get returns
  • For contingency needs
• With the onset of the pandemic, supply has been disrupted and sellers of commodities such as vegetables have been demanding payment in cash.
• People, at the hint of a problem, withdraw money from the banks and keep it in the form of cash, as nothing beats the liquidity of cash during a crisis.
• Currency demand has to come down, and for this there is a need to ensure that digital money is also liquid.

6.
Monetising the deficit is not a good idea (BL 13/5/20)

• For some time now, experts have been saying that the deficit of the government will overshoot the budgeted numbers by a very large margin, and recently the government disclosed that the estimates have shot up by 54% compared to the budgeted estimates.
• The gross borrowings are pegged at ₹ 12 lakh Cr.
• The borrowings of the government last year were as large as the savings of the households. Now if the borrowings of the government increase, it means that the demand for such securities is lower than the supply, and the central banker has no option but to purchase these securities (otherwise the rates on G-secs will soar).
• But such monetisation of the deficit is not desirable.
• The argument that with rising deficits there would be rising interest rates is not correct. In fact, it has been seen that during crises the interest rates on G-secs have fallen, as private investment declines.
  • During the 1999-2000 downturn, the rates on G-secs crashed from 12% to 5%.
  • During the Global Financial Crisis, the rates again declined from 9% to 5%.
  • During this cycle itself, the rates have come down from 8% (in September) to 6% now.
• With monetisation done by the RBI, the money supply in the market increases and the inflationary trend also increases.
• On the one hand, the RBI has to follow inflation targeting, and on the other, it is monetising the deficit. This would dent the credibility of the central banker, and if the market senses that, there would be problems.

ETW 4 May – 17 May 2020:- For more business news videos and PDFs, keep visiting the ‘Economy This Week’ segment regularly.
http://math.stackexchange.com/questions/231463/varepsilon-balls-and-closed-convex-sets/231477
# $\varepsilon$-balls and closed convex sets

I arrived at the following problem during the day. For $\iota\in I$, let $A_\iota\subseteq\mathbb R^d$ be non-empty and closed. For $\varepsilon>0$ let $B_\varepsilon(A_\iota)=\bigcup_{x\in A_\iota}B_\varepsilon(x).$ Do we have $$\bigcap_{\varepsilon>0}\bigcup_{\iota\in I}\overline{B_\varepsilon(A_\iota)}=\bigcup_{\iota\in I}A_\iota$$ and if not, is there a chance that we have $$\bigcap_{\varepsilon>0}\overline{conv}\Bigl(\bigcup_{\iota\in I}\overline{B_\varepsilon(A_\iota)}\Bigr)=\overline{conv}\bigl(\bigcup_{\iota\in I}A_\iota\bigr)?$$

The first statement doesn't hold when $d=1$, $I=\Bbb N^*$, $A_{i}=\{i^{-1}\}$: $0$ is in the LHS but not in the RHS. For the second statement, fix $\varepsilon>0$ and $x$ in the LHS. Then we can find an integer $N$ and $c_j\in [0,1],x_j\in\Bbb R^d,j\in [N]$ such that $\lVert x-\sum_{j=1}^Nc_jx_j\rVert<\varepsilon$, where $x_j\in\bigcup_{\iota\in I}\overline{B_{\varepsilon}(A_{\iota})}$ and $\sum_jc_j=1$. Now, let $i_j\in I$ and $y_j\in A_{i_j}$ such that $\lVert x_j-y_j\rVert<\varepsilon$. We have $$\lVert x-\sum_{j=1}^Nc_jy_j\rVert\leqslant 2\varepsilon,$$ and $\sum_{j=1}^Nc_jy_j\in \operatorname{conv}\bigcup_{\iota\in I}A_{\iota}$. Note that we didn't use the fact that we work in $\Bbb R^d$ (we can replace it by any normed space), nor that the $A_i$ are closed.

$A_\iota = \{\frac{1}{\iota}\}$ is a counterexample to the first statement ($I = \mathbb{N}$). For all $\epsilon > 0$, $0 \in \bigcup_{i\in I} \overline{B_\epsilon(A_\iota)}$, hence $0$ is a member of the left-hand side. Yet $0$ lies in none of the $A_\iota$, hence $0$ isn't a member of the right-hand side.

Here is a more geometric argument for the second part: Let $L = \cap_{\epsilon>0} \overline{\mathbb{co}}( \cup_{i \in I} \overline{B_\epsilon(A_i)})$, $R = \overline{\mathbb{co}} (\cup_{i \in I} A_i)$. We want to show $L=R$.
For all $\epsilon >0$, we have $\cup_{i \in I} A_i \subset \cup_{i \in I} \overline{B_\epsilon(A_i)}$, hence $\overline{\mathbb{co}} (\cup_{i \in I} A_i)\subset \overline{\mathbb{co}}( \cup_{i \in I} \overline{B_\epsilon(A_i)})$, and since this is true for all $\epsilon>0$, we have $R \subset L$. Now suppose $x_0\notin R$. Then $R$ and $\{x_0\}$ are closed, convex sets, so by the Hahn-Banach theorem there is a linear functional $\phi$ that separates the sets, that is, there exist $\gamma_1, \gamma_2$ such that $\phi(x_0) < \gamma_1 < \gamma_2 < \phi(y)$ for all $y \in R$. Now suppose $y \in B_\epsilon(A_i)$, then for some $a \in A_i$, we have $\|y-a\| < \epsilon$. Since $|\phi(y)-\phi(a)| \leq \|\phi\| \|y-a\| \leq \epsilon \|\phi\|$, we have $\phi(y) \geq \phi(a) - \epsilon \|\phi\|$. We can choose $\epsilon$ small enough so that $\phi(x_0) < \gamma_1 < \phi(y)$, for all $y \in B_\epsilon(A_i)$. Note that the $\epsilon$ only depends on $\|\phi\|, \gamma_1$ and not on $i \in I$. Consequently, $B_\epsilon(A_i) \subset C = \phi^{-1}[\gamma_1,\infty)$ for all $i\in I$. Note that $C$ is a closed convex set, hence we have $\overline{B_\epsilon(A_i)} \subset C$ for all $i\in I$, and hence $\cup_{i \in I} \overline{B_\epsilon(A_i)} \subset C$, and again, since $C$ is closed and convex, we have $\overline{\mathbb{co}}( \cup_{i \in I} \overline{B_\epsilon(A_i)}) \subset C$, from which it follows that $L \subset C$. Since $\phi(x_0) < \gamma_1$, it follows that $x_0 \notin L$. Hence $L \subset R$.
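The counterexample $A_i = \{1/i\}$ from the first part can be sanity-checked numerically; a small Python sketch (my addition, not from the thread):

```python
# For A_i = {1/i}, the point 0 lies in the eps-neighbourhood of the
# union of the A_i for every eps > 0 (pick i with 1/i < eps), yet 0
# belongs to no A_i -- so 0 is in the left-hand side but not the right.
def in_eps_neighbourhood(x, eps, max_i=10**6):
    return any(abs(x - 1.0 / i) < eps for i in range(1, max_i + 1))

for eps in [0.5, 0.1, 1e-3, 1e-5]:
    assert in_eps_neighbourhood(0.0, eps)
```

A point such as 2.0 stays outside the 0.5-neighbourhood, since every 1/i is at most 1.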
https://collegephysicsanswers.com/openstax-solutions/show-flat-mirror-htextrmi-htextrmo-knowing-image-distance-behind-mirror-equal
Question

Show that for a flat mirror $h_\textrm{i} = h_\textrm{o}$, knowing that the image is a distance behind the mirror equal in magnitude to the distance of the object from the mirror. Question by OpenStax is licensed under CC BY 4.0.

# OpenStax College Physics Solution, Chapter 25, Problem 61 (Problems & Exercises) (1:17)

Video Transcript

This is College Physics Answers with Shaun Dychko. We're going to show that the image height is the same as the object height, given a flat mirror and knowing that the object distance is of equal size to the image distance. Now, given that these are equal sizes, and knowing that this image is on the other side of the mirror compared to the object, this must be a virtual image, because the light rays do not pass through this image; instead, they're blocked by the mirror and then reflected. And so, we can say that this image distance is going to be the negative of the object distance. So, take the object distance to be positive, and the image distance will have the opposite sign; it will be negative since it's a virtual image. And, this is what we can substitute into our magnification formula: magnification is image height divided by object height, and that's the negative of image distance over object distance, and then we substitute negative $d_\textrm{o}$ in place of $d_\textrm{i}$. This negative and a negative make positive one, the $d_\textrm{o}$'s cancel, and we have image height over object height equal to one, in which case, after you multiply both sides by object height, you find that the image height equals the object height.
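The argument in the transcript condenses to a two-line derivation (using the sign convention that a virtual image has a negative image distance):

```latex
m = \frac{h_\textrm{i}}{h_\textrm{o}} = -\frac{d_\textrm{i}}{d_\textrm{o}},
\qquad d_\textrm{i} = -d_\textrm{o}
\;\Longrightarrow\;
m = -\frac{-d_\textrm{o}}{d_\textrm{o}} = 1
\;\Longrightarrow\;
h_\textrm{i} = h_\textrm{o}.
```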
https://socratic.org/questions/what-is-the-sum-of-all-the-oxidation-numbers-in-any-compound#343010
# What is the sum of all the oxidation numbers in any compound?

##### 1 Answer

Nov 28, 2016

In any compound the sum of the oxidation numbers is equal to the charge on the compound.

#### Explanation:

And of course if the compound is neutral, then the sum of the oxidation numbers is zero. If the compound is an ion, e.g. $CrO_4^{2-}, NO_3^{-}, SO_4^{2-}$, the sum of the oxidation numbers is the charge on the ion. See here for a past answer. You are certainly free to ask further questions.
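As a worked example (my addition, not from the answer), for the sulfate ion the oxidation numbers must sum to the ion's $2-$ charge:

```latex
\underbrace{(+6)}_{\text{S}} \;+\; 4\times\underbrace{(-2)}_{\text{O}} \;=\; -2
\quad \text{for } SO_4^{2-}.
```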
http://mathhelpforum.com/advanced-algebra/72785-maximal-ideals-c-x-y.html
# Thread: Maximal ideals of C[x,y]

1. ## Maximal ideals of C[x,y]

   Why are the maximal ideals in a ring of two-variable polynomials with complex coefficients those generated by x - c and y - d, for some complex c and d? Thanks

2. Originally Posted by Different
   > Why are the maximal ideals in a ring of two-variable polynomials with complex coefficients those generated by x - c and y - d, for some complex c and d?

   This is a special case of the weak Nullstellensatz*. One direction is easy because $\frac{\mathbb{C}[x,y]}{<x-c,y-d>} \simeq \mathbb{C}.$ The other direction is much deeper: let $\mathfrak{m}$ be a maximal ideal of $R=\mathbb{C}[x,y].$ Then $R/\mathfrak{m}$ is clearly a finitely generated $\mathbb{C}$-algebra, which is also a field. A well-known result in commutative algebra says that a finitely generated domain over a field $F$ is a field iff it is algebraic over $F$**. So $R/\mathfrak{m}$ must be an algebraic extension of $\mathbb{C},$ which is possible only if $R/\mathfrak{m}=\mathbb{C}$ because $\mathbb{C}$ is algebraically closed. Now let $c,d \in \mathbb{C}$ be the images of $x,y$ under the natural projection $R \longrightarrow R/\mathfrak{m}=\mathbb{C}.$ Then clearly $x-c \in \mathfrak{m}$ and $y-d \in \mathfrak{m},$ thus $<x-c,y-d> \subseteq \mathfrak{m}.$ But we already proved that $<x-c,y-d>$ is a maximal ideal of $R.$ Therefore $\mathfrak{m}=<x-c,y-d>. \ \Box$

   \* If $F$ is any algebraically closed field, then the maximal ideals of the polynomial ring $F[x_1,x_2, \cdots, x_n]$ are exactly the ideals $<x_1-a_1, x_2-a_2, \cdots , x_n - a_n>, \ \ a_j \in F.$

   \** See for example page 162 of the book "Graduate Algebra: Commutative View" by Louis Halle Rowen.

3. Thanks a lot. Funny enough, I understand the other direction, but could you explain why $\mathbb{C}[x,y]/<x-c, y-d>$ is isomorphic to $\mathbb{C}$? I assume the isomorphism sends $x$ to $c$ and $y$ to $d$... so why does a polynomial $f(x,y)$ such that $f(c,d)=0$ belong to the ideal $<x-c, y-d>$?

4. Originally Posted by Different
   > Could you explain why $\mathbb{C}[x,y]/<x-c, y-d>$ is isomorphic to $\mathbb{C}$?

   Right! You define $f:\mathbb{C}[x,y] \longrightarrow \mathbb{C}$ by $f(g(x,y))=g(c,d),$ which is obviously a surjective homomorphism. Let $I=<x-c,y-d>.$ It is clear that $I \subseteq \ker f.$ Now let $g \in \ker f.$ Then:

   $g(x,y)=\sum a_{ij}x^iy^j=\sum a_{ij}(x-c+c)^i(y-d+d)^j \equiv \sum a_{ij}c^id^j = g(c,d)=0 \mod I.$

   Thus $g \in I,$ i.e. $\ker f \subseteq I.$

5. All clear now. Thanks!
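As a concrete check of the argument in post 4 (this example is not part of the original thread): a polynomial vanishing at $(c,d)$ leaves remainder $0$ when divided by the generators $x-c$ and $y-d$, so it lies in the ideal. A sketch using SymPy's multivariate division; the polynomial $g$ and the point $(1,2)$ are arbitrary choices for illustration.

```python
from sympy import symbols, reduced, expand

x, y = symbols('x y')
c, d = 1, 2  # the point (c, d); any complex numbers would do

# g vanishes at (c, d) = (1, 2): g(1, 2) = 2 + 2 - 4 = 0
g = x**2 * y + x * y - 4 * x

# Divide g by the generators; 'reduced' returns (quotients, remainder)
quotients, remainder = reduced(g, [x - c, y - d], x, y)

assert remainder == 0  # so g lies in <x - c, y - d>
# Sanity check: g = q1*(x - c) + q2*(y - d) + remainder
assert expand(quotients[0]*(x - c) + quotients[1]*(y - d) + remainder - g) == 0
```

Since $\{x-c,\,y-d\}$ is a Gröbner basis (the leading terms $x$ and $y$ are coprime), the division remainder of any $g$ is exactly the constant $g(c,d)$, which makes the computational check equivalent to the argument in the post.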
http://www.caam.rice.edu/~lc55/SFEMaNS/html/doc_debug_test_24.html
SFEMaNS version 4.1 (work in progress)
Reference documentation for SFEMaNS

Test 24: Navier-Stokes with penalty for a non-axisymmetric domain

### Introduction

In this example, we check the correct behavior of SFEMaNS for a hydrodynamic problem with a solid obstacle involving Dirichlet boundary conditions. We solve the Navier-Stokes equations:
\begin{align*} \partial_t\bu+\left(\ROT\bu\right)\CROSS\bu - \frac{1}{\Re}\LAP \bu +\GRAD p &=\bef \text{ in } \Omega_1, \\ \bu & = 0 \text{ in } \Omega_2, \\ \DIV \bu &= 0, \\ \bu_{|\Gamma} &= \bu_{\text{bdy}} , \\ \bu_{|t=0} &= \bu_0, \\ p_{|t=0} &= p_0, \end{align*}
in the domain $$\Omega= \Omega_1 \cup \Omega_2$$, with the fluid region $$\Omega_1=\{ (r,\theta,z) \in {R}^3 : (r,\theta,z) \in [1/2,1] \times [0,2\pi) \times [0,1]\}$$ and the solid region $$\Omega_2=\{ (r,\theta,z) \in {R}^3 : (r,\theta,z) \in [0.1,1/2] \times [0,2\pi) \times [0,1]\}$$. We also define $$\Gamma= \partial \Omega$$. We note that the condition $$\bu=0$$ in $$\Omega_2$$ is imposed via a penalty method that involves a penalty function $$\chi$$ equal to 1 in $$\Omega_1$$ and zero elsewhere. The data are the source term $$\bef$$, the penalty function $$\chi$$, the boundary data $$\bu_{\text{bdy}}$$, and the initial data $$\bu_0$$ and $$p_0$$. The parameter $$\Re$$ is the kinetic Reynolds number.

### Manufactured solutions

We approximate the following analytical solutions:
\begin{align*} u_r(r,\theta,z,t) &= (2r-1)^2 \sin(z+t) \mathbb{1}_{r\geq0.5}, \\ u_{\theta}(r,\theta,z,t) &= 0, \\ u_z(r,\theta,z,t) &= \left( (2-1/r) (6r-1) \cos(z+t) + (r-0.5) \sin(2\theta) \right) \mathbb{1}_{r\geq0.5}, \\ p(r,\theta,z,t) &= r^2 z^3\cos(t) + r \cos(\theta) , \end{align*}
with $$\mathbb{1}_{r\geq0.5}$$ the function equal to $$1$$ if $$r\geq0.5$$ and $$0$$ elsewhere. The source term $$\bef$$ and the boundary data $$\bu_{\text{bdy}}$$ are computed accordingly.

The finite element mesh used for this test is named cylinder_0.05.FEM and has a mesh size of $$0.05$$ for the P1 approximation.
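As a consistency check (not part of the original documentation): the manufactured velocity field on the fluid side, taken as implemented in the function vv_exact described below (with argument $$z+t$$), is divergence-free. A symbolic sketch, assuming SymPy is available:

```python
from sympy import symbols, sin, cos, diff, simplify, Rational

r, theta, z, t = symbols('r theta z t', positive=True)

# Velocity components on the fluid side (r > 0.5), matching vv_exact
u_r = (2*r - 1)**2 * sin(z + t)
u_theta = 0 * r
u_z = (2 - 1/r)*(6*r - 1)*cos(z + t) + (r - Rational(1, 2))*sin(2*theta)

# Divergence in cylindrical coordinates:
# div u = (1/r) d(r u_r)/dr + (1/r) d(u_theta)/dtheta + d(u_z)/dz
div_u = diff(r*u_r, r)/r + diff(u_theta, theta)/r + diff(u_z, z)

assert simplify(div_u) == 0  # the manufactured field is divergence-free
```

The $$\sin(2\theta)$$ term of $$u_z$$ does not depend on $$z$$, so it contributes nothing to the divergence; the two remaining terms cancel exactly.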
You can generate this mesh with the files in the following directory: ($SFEMaNS_MESH_GEN_DIR)/EXAMPLES/EXAMPLES_MANUFACTURED_SOLUTIONS/cylinder_0.05. The following image shows the mesh for P1 finite elements.

Finite element mesh (P1).

### Information on the file condlim.f90

The initial conditions, boundary conditions, the forcing term and the penalty function are set in the file condlim_test_24.f90. Here is a description of the subroutines and functions of interest.

1. The subroutine init_velocity_pressure initializes the velocity field and the pressure at the times $$-dt$$ and $$0$$, with $$dt$$ being the time step. It is done by using the functions vv_exact and pp_exact as follows:

        time = 0.d0
        DO i= 1, SIZE(list_mode)
           mode = list_mode(i)
           DO j = 1, 6
              !===velocity
              un_m1(:,j,i) = vv_exact(j,mesh_f%rr,mode,time-dt)
              un   (:,j,i) = vv_exact(j,mesh_f%rr,mode,time)
           END DO
           DO j = 1, 2
              !===pressure
              pn_m2(:)       = pp_exact(j,mesh_c%rr,mode,time-2*dt)
              pn_m1 (:,j,i)  = pp_exact(j,mesh_c%rr,mode,time-dt)
              pn    (:,j,i)  = pp_exact(j,mesh_c%rr,mode,time)
              phin_m1(:,j,i) = pn_m1(:,j,i) - pn_m2(:)
              phin   (:,j,i) = Pn   (:,j,i) - pn_m1(:,j,i)
           ENDDO
        ENDDO

2. The function vv_exact contains the analytical velocity field. It is used to initialize the velocity field and to impose Dirichlet boundary conditions on the velocity field.
   1. First we define the radial and vertical coordinates r, z.

          r = rr(1,:)
          z = rr(2,:)

   2. We define the velocity field depending on the Fourier mode and its TYPE (1 and 2 for the cosine and sine of the radial component, 3 and 4 for the cosine and sine of the azimuthal component, 5 and 6 for the cosine and sine of the vertical component) as follows:

          IF (TYPE==1.AND.m==0) THEN
             DO n = 1, SIZE(rr,2)
                IF (rr(1,n)>0.5d0) THEN
                   vv(n) = (2*rr(1,n)-1)**2*SIN(rr(2,n)+t)
                ELSE
                   vv(n) = 0.d0
                END IF
             END DO
          ELSE IF (TYPE==5.AND.m==0) THEN
             DO n = 1, SIZE(rr,2)
                IF (rr(1,n)>0.5d0) THEN
                   vv(n) = (2-1.d0/rr(1,n))*(6*rr(1,n)-1)*COS(rr(2,n)+t)
                ELSE
                   vv(n) = 0.d0
                END IF
             END DO
          ELSE IF (TYPE==6.AND.m==2) THEN
             DO n = 1, SIZE(rr,2)
                IF (rr(1,n)>0.5d0) THEN
                   vv(n) = rr(1,n)-0.5d0
                ELSE
                   vv(n) = 0.d0
                END IF
             END DO
          ELSE
             vv = 0.d0
          END IF
          RETURN

      where $$t$$ is the time.

3. The function pp_exact contains the analytical pressure. It is used to initialize the pressure.
   1. First we define the radial and vertical coordinates r, z.

          r = rr(1,:)
          z = rr(2,:)

   2. We define the pressure depending on the Fourier mode and its TYPE (1 for cosine and 2 for sine) as follows:

          IF (TYPE==1.AND.m==0) THEN
             vv(:) = r**2*z**3*COS(t)
          ELSE IF (TYPE==1.AND.m==1) THEN
             vv(:) = r
          ELSE
             vv = 0.d0
          END IF
          RETURN

      where $$t$$ is the time.

4. The function source_in_NS_momentum computes the source term $$\bef$$ of the Navier-Stokes equations.

5. The function penal_in_real_space defines the penalty function $$\chi$$ in the real space (depending on the node in the meridian plane and its angle n). It is done as follows:

        DO n = nb, ne
           n_loc = n - nb + 1
           IF (rr_gauss(1,n_loc).LE.0.5d0) THEN
              vv(:,n_loc) = 0.d0
           ELSE
              vv(:,n_loc) = 1.d0
           END IF
        END DO
        RETURN

   As defined earlier, this function is equal to zero when the cylindrical coordinate r is smaller than or equal to 0.5, and is equal to one otherwise.

6. The function imposed_velocity_by_penalty defines the velocity in the solid domain $$\Omega_2$$. It is set to zero as follows:

        vv=0.d0
        RETURN

All the other subroutines present in the file condlim_test_24.f90 are not used in this test.
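For illustration only (this is not SFEMaNS code), the loop in penal_in_real_space amounts to a vectorized indicator of the fluid region:

```python
import numpy as np

def penalty(r_gauss):
    """chi = 0 in the solid region (r <= 0.5), 1 in the fluid region (r > 0.5)."""
    return np.where(r_gauss > 0.5, 1.0, 0.0)

r_gauss = np.array([0.2, 0.5, 0.7, 1.0])
print(penalty(r_gauss))  # [0. 0. 1. 1.]
```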
We refer to the section Fortran file condlim.f90 for a description of all the subroutines of the condlim file.

### Setting in the data file

We describe the data file of this test. It is called debug_data_test_24 and can be found in the directory ($SFEMaNS_DIR)/MHD_DATA_TEST_CONV_PETSC.

1. We use a formatted mesh by setting:

       ===Is mesh file formatted (true/false)?
       .t.

2. The path and the name of the mesh are specified with the two following lines:

       ===Directory and name of mesh file
       '.' 'cylinder_0.05.FEM'

   where '.' refers to the directory where the data file is, meaning ($SFEMaNS_DIR)/MHD_DATA_TEST_CONV_PETSC.

3. We use two processors in the meridian section. It means the finite element mesh is subdivided into two.

       ===Number of processors in meridian section
       2

4. We solve the problem for $$6$$ Fourier modes.

       ===Number of Fourier modes
       6

5. We use $$6$$ processors in Fourier space.

       ===Number of processors in Fourier space
       6

   It means that each processor solves the problem for $$6/6=1$$ Fourier mode.

6. We do not select specific Fourier modes to solve.

       ===Select Fourier modes? (true/false)
       .f.

   As a consequence, the code approximates the problem on the first $$6$$ Fourier modes.

7. We approximate the Navier-Stokes equations by setting:

       ===Problem type: (nst, mxw, mhd, fhd)
       'nst'

8. We approximate the Navier-Stokes equations with the velocity field as dependent variable.

       ===Solve Navier-Stokes with u (true) or m (false)?
       .t.

   We note this data is set to true by default. The momentum $$m$$ is only used for multiphase flow problems.

9. We do not restart the computations from previous results.

       ===Restart on velocity (true/false)
       .f.

   It means the computation starts from the time $$t=0$$.

10. We use a time step of $$0.0005$$ and solve the problem over $$100$$ time iterations.

        ===Time step and number of time iterations
        0.0005d0 100

11. We use a penalty function to take into account the presence of a solid obstacle.

        ===Use penalty in NS domain (true/false)?
        .t.

12. We set the number of domains, and their labels, where the code approximates the Navier-Stokes equations (see the files associated to the generation of the mesh).

        ===Number of subdomains in Navier-Stokes mesh
        1
        ===List of subdomains for Navier-Stokes mesh
        1

13. We set the number of boundaries with Dirichlet conditions on the velocity field and give their respective labels.

        ===How many boundary pieces for full Dirichlet BCs on velocity?
        2
        ===List of boundary pieces for full Dirichlet BCs on velocity
        1 2

14. We set the kinetic Reynolds number $$\Re$$.

        ===Reynolds number
        100.d0

15. We give information on how to solve the matrix associated to the time marching of the velocity.

        ===Maximum number of iterations for velocity solver
        100
        ===Relative tolerance for velocity solver
        1.d-6
        ===Absolute tolerance for velocity solver
        1.d-10
        ===Solver type for velocity (FGMRES, CG, ...)
        GMRES
        ===Preconditionner type for velocity solver (HYPRE, JACOBI, MUMPS...)
        MUMPS

16. We give information on how to solve the matrix associated to the time marching of the pressure.

        ===Maximum number of iterations for pressure solver
        100
        ===Relative tolerance for pressure solver
        1.d-6
        ===Absolute tolerance for pressure solver
        1.d-10
        ===Solver type for pressure (FGMRES, CG, ...)
        GMRES
        ===Preconditionner type for pressure solver (HYPRE, JACOBI, MUMPS...)
        MUMPS

17. We give information on how to solve the mass matrix.

        ===Maximum number of iterations for mass matrix solver
        100
        ===Relative tolerance for mass matrix solver
        1.d-6
        ===Absolute tolerance for mass matrix solver
        1.d-10
        ===Solver type for mass matrix (FGMRES, CG, ...)
        CG
        ===Preconditionner type for mass matrix solver (HYPRE, JACOBI, MUMPS...)
        MUMPS

18. To get the total elapsed time and the average time in the loop minus initialization, we write:

        ===Verbose timing? (true/false)
        .t.

    This information is written in the file lis when you run the shell debug_SFEMaNS_template.
### Outputs and value of reference

The outputs of this test are computed with the file post_processing_debug.f90 that can be found in the following directory: ($SFEMaNS_DIR)/MHD_DATA_TEST_CONV_PETSC. To check the correct behavior of the code, we compute four quantities:

1. The L2 norm of the error on the velocity field.
2. The H1 norm of the error on the velocity field.
3. The L2 norm of the divergence of the velocity field.
4. The L2 norm of the error on the pressure outside the obstacle.

These quantities are computed at the final time $$t=0.05$$. They are compared to reference values to attest to the correctness of the code. These values of reference are in the last lines of the file debug_data_test_24 in the directory ($SFEMaNS_DIR)/MHD_DATA_TEST_CONV_PETSC. They are equal to:

    ============================================ (cylinder_0.05.FEM)
    ===Reference results
    5.35272511967831415E-003  L2 error on velocity
    0.41315380860605949       H1 error on velocity
    0.19511906134562279       L2 norm of divergence
    5.07028095271459204E-003  L2 error on pressure outter obstacle

To conclude this test, we show the profile of the approximated pressure and velocity magnitude at the final time. These figures are done in the plane $$y=0$$, which is the union of the half planes $$\theta=0$$ and $$\theta=\pi$$.

Pressure in the plane y=0.

Velocity magnitude in the plane y=0.
https://physics.stackexchange.com/questions/241144/schrodingers-cat-and-consistent-histories
# Schrodinger's cat and consistent histories

I was reading Wikipedia's article on Schrodinger's cat: https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat#Many-worlds_interpretation_and_consistent_histories

Quote: "When opening the box, the observer becomes entangled with the cat, so "observer states" corresponding to the cat's being alive and dead are formed; each observer state is entangled or linked with the cat so that the "observation of the cat's state" and the "cat's state" correspond with each other. Quantum decoherence ensures that the different outcomes have no interaction with each other."

My question is... what it concretely means for different outcomes to interact with each other. So suppose decoherence was not happening. Then could we have:

1) The cat is observed dead, but the cat is alive (so the observer from one outcome is interacting with the cat from the other outcome).

2) At one point in time "the cat is dead and observed dead", and at a later point in time "the cat is alive, and observed alive" (i.e.: in reality there's a wave equation which is a superposition of both possibilities... at different points in time we may observe different outcomes, but the world is internally consistent at any point in time and the wave equation continues to evolve as it does regardless).

Is decoherence needed to prevent 1), 2), or both? Thanks.

• Schroedinger's cat died in 1935, when Schroedinger came up with this nonsense. A cat is a living being and living beings can't exist in a perfectly closed box. They die a few minutes after you put them in there for thermodynamic reasons. If Schroedinger had spent even a few minutes thinking about this before he wrote it down, he would have noticed that the irreversible radioactive process happens in the interaction between the nucleus and the electromagnetic quantum field; everything after that is classical. – CuriousOne Mar 2 '16 at 23:29
• 1) If the cat is in state $\alpha$(alive)+$\beta$(dead) then the observer is in state $\alpha$(sees cat alive)+$\beta$(sees cat dead). You are positing that $\alpha=1$ and $\alpha=0$ simultaneously. Obviously this can't happen. 2) Presumably the time evolution of the state (dead) involves at most a phase shift. Dead cats do not come alive. – WillO Mar 2 '16 at 23:30
• @WillO, so decoherence has no effect here? – Ameet Sharma Mar 2 '16 at 23:33
• Of course it does... on the first $10^{-18}-10^{-15}m$ where the actual quantum process happens. – CuriousOne Mar 2 '16 at 23:48
• @CuriousOne, so what if decoherence did not happen... what would be the state of things? – Ameet Sharma Mar 3 '16 at 0:27

## 1 Answer

Think about it like this: Say Schrodinger's Cat is in his box in a windowless, soundproof room. An observer enters the room, closes the door, checks on the cat, closes the box back up. If the cat's alive, he stands there. If the cat's dead, he shoots himself. Now, before another observer opened the door, the first observer would be, like the cat, in quantum superposition and alive and dead at the same time until observed.

• Take the cat and everything else out of the equation. Enter a completely isolated room just barely large enough to contain you. How long are you going to be alive? – CuriousOne Apr 20 '16 at 16:22
• Can we get past the cat? It's an intentional absurdity. It can be Schrodinger's LED for all it matters. – Franklin Kuzenski Apr 22 '16 at 12:23
• That's what I just said. Take the cat out and analyze the problem correctly. When does decoherence happen? Schroedinger had it all wrong and so does everybody who believes in the cat being a useful teaching tool for quantum mechanics. – CuriousOne Apr 22 '16 at 18:21
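As a numerical aside (not from the original thread), WillO's description of the joint state, $\alpha$(alive, sees alive) + $\beta$(dead, sees dead), can be made concrete: tracing the observer out of the entangled state leaves a diagonal density matrix for the cat, i.e. the two outcomes carry no interference terms. A sketch with NumPy; the amplitudes are arbitrary.

```python
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)  # arbitrary amplitudes, |a|^2 + |b|^2 = 1

alive = np.array([1.0, 0.0])
dead = np.array([0.0, 1.0])

# Entangled cat-observer state: a|alive, sees alive> + b|dead, sees dead>
psi = alpha * np.kron(alive, alive) + beta * np.kron(dead, dead)
rho = np.outer(psi, psi.conj())  # joint density matrix (4x4)

# Reduced density matrix of the cat: trace out the observer
rho_cat = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

# Diagonal: no off-diagonal (interference) terms between the two outcomes
assert np.allclose(rho_cat, np.diag([0.3, 0.7]))
```

In this toy picture the cross terms between "alive" and "dead" vanish exactly because the observer states are orthogonal; decoherence by a large environment plays the same role for realistic systems.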
https://codingtheory.wordpress.com/2007/11/06/lectures-29-30-achieving-capacity-for-the-bsc/
Posted by: atri | November 6, 2007

## Lectures 29 & 30: Achieving Capacity for the BSC

We started today by completing the description of the deterministic GMD algorithm that can correct up to half the design distance of certain concatenated codes. We then saw how to use certain explicit concatenated codes to achieve the capacity of the $BSC_p$ for every $0\le p<\frac{1}{2}$ with polynomial time encoding and decoding algorithms. This answers in the affirmative the main question left open by Shannon's work. Next lecture, we will look at another interesting property of certain concatenated codes.
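For context (this note is not part of the original post), Shannon's capacity of the $BSC_p$ is $C(p) = 1 - H(p)$ bits per channel use, where $H$ is the binary entropy function; a quick numerical sketch:

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p log2(p) - (1-p) log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))  # 1.0 (noiseless channel)
print(bsc_capacity(0.5))  # 0.0 (pure noise)
print(bsc_capacity(0.25))
```

The capacity decreases from 1 at $p=0$ to 0 at $p=\frac{1}{2}$, which is why the result above is stated for every $0\le p<\frac{1}{2}$.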
https://www.physicsforums.com/threads/limit-as-x-y-z-0-i-missed-it-on-the-exam-but-why.96780/
Limit as (x,y,z)->0, I missed it on the exam but why?

1. Oct 26, 2005, mr_coffee

   Hello everyone, this problem is bugging me because the professor showed me this is how you do it the day before the exam. So what do I do? Well, I do it the way he told me, and I totally miss it. Here is the problem. I'm going to let f(x) be the function to make it easier to read:

   f(x) = [xy+yz+zx]/[x^2+y^2+z^2]; Lim f(x); (x,y,z)->(0,0,0)

   So I let Lim f(x); (x,y,z)->(t,3t,0) and I got:
   (xy)/(x^2+y^2) = (t)(3t)/(t^2+(3t)^2) = 3t^2/(10t^2) = 3/10;
   Lim f(x); (x,y,z)->(0,3t,t): yz/(y^2+z^2) = (3t)(t)/((3t)^2+t^2) = 3/10;
   Lim f(x); (x,y,z)->(t,0,3t): zx/(x^2+z^2) = 3/10;

   So I said the limit exists because they all go to 3/10, and yet he marked it wrong. Any ideas why this is wrong? Thanks. He's saying the limit doesn't exist.

2. Oct 26, 2005, TD

   Unfortunately, it's not sufficient to find 3 (or "n") different paths letting (x,y,z) go to (0,0,0) with the same limit to conclude that the limit is correct. In reverse though, it is sufficient to find two paths which yield different results to conclude that the limit doesn't exist. As long as you're calculating it by approaching it differently and you keep finding the same values, the limit either exists (and is equal to that value), but then you have to prove that, or you haven't found a 'good' path yet to show it doesn't exist. Have you tried approaching from one of the axes alone? E.g. (t,0,0).

3. Oct 26, 2005, HallsofIvy (Staff Emeritus)

   I guarantee that was not the way he told you! He may very well have shown how, by getting different answers along different lines, you can show that a limit does not exist, but I feel certain he never said that getting the same limit proves that the limit does exist! There are, by the way, examples in just about any Calculus book in which you get the same limit approaching the origin along any straight line, but a different limit approaching along a parabola - so even showing that you get the same thing along any straight line does not guarantee you will have a limit.

   It would have been interesting if your professor had given you a problem where the limit did exist and was, say, 3/10, and you argued that, since taking the limit along 3 different lines all gave 3/10, the limit was 3/10. A good professor would have marked that wrong: right answer, wrong reasoning - and it's the reasoning that is important!

   The best way to prove that a limit does exist, in a problem with the limit at (0,0,0), is to convert to spherical coordinates. That way a single variable, $\rho$, measures the distance from the origin. If the limit as $\rho \to 0$ exists and is independent of the other variables, then the limit of the function exists and is equal to that limit.

4. Oct 26, 2005, 1800bigk

   I have a question on that limit question too: is it legal to convert it to polar and let z=0? It still doesn't exist that way, but are you allowed to convert and approach on z=0?

5. Oct 26, 2005, HallsofIvy (Staff Emeritus)

   Approach "on z=0"? Do you mean choose only paths in the xy-plane? Yes, that's "legal" but still doesn't prove that approaching along other paths won't give you a different answer. Or did you mean approach along the z-axis (letting x and y = 0)? Same answer!

6. Oct 26, 2005, 1800bigk

   I mean take f(x) = [xy+yz+zx]/[x^2+y^2+z^2], then say consider z=0, so f(x) becomes f(x) = [xy]/[x^2+y^2]. Now convert that to polar and it is easy to see the limit does not exist. Is this legal?

7. Oct 26, 2005, Tide

   No. That will not work. Your particular function has the interesting property that approaching the origin from any direction in the z = 0 plane gives the same result. But the limit does not exist! Follow Halls' advice and use spherical coordinates - all will become crystal clear! :)
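For completeness (not part of the original thread): for this particular function, two straight-line paths already give different limits, which by TD's criterion proves the limit does not exist. A sketch with SymPy:

```python
from sympy import symbols, limit, Rational

x, y, z, t = symbols('x y z t')
f = (x*y + y*z + z*x) / (x**2 + y**2 + z**2)

# Path (t, 3t, 0): the value mr_coffee computed
along_line = limit(f.subs({x: t, y: 3*t, z: 0}), t, 0)
assert along_line == Rational(3, 10)

# Path (t, t, t): numerator 3t^2, denominator 3t^2
along_diag = limit(f.subs({x: t, y: t, z: t}), t, 0)
assert along_diag == 1

# Different limits along different paths => the limit does not exist
assert along_line != along_diag
```

In spherical coordinates the same fact is immediate: f depends only on the angles and not on $\rho$, so the value approached depends on the direction of approach.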
http://www.churchinwales.org.uk/misc/address_cleric/
# How To Address a Cleric

Many people get confused with how they are supposed to refer to a cleric, and this guide aims to help. It should be noted, however, that a great deal depends on the circumstance or setting, and also on the personal preference of the cleric concerned. If in doubt, it is best to ask the cleric concerned.

## In conversation

Whilst the standard title for a cleric is “the Reverend”, it is not normal to refer to them like this in conversation. Some people will refer to “Vicar” or “Rector”, but usually only when the person they are referring to really is the vicar or rector of the parish where they live. Otherwise, Mr/Mrs/Miss/Ms Smith is used. When referring to a cleric in the third person (as in “x was saying to me the other day”), then “the Reverend AB Smith” might be used in a formal context – but only for the first reference to that person, after which Mr/Mrs/Miss/Ms Smith is used.

## Writing a letter

If writing a letter to a cleric, it should be addressed to “the Reverend AB Smith”, but should start “Dear Mr/Mrs/Miss/Ms Smith”. “The Reverend AB Smith” is sometimes shortened to “the Rev’d AB Smith”. If a cleric also holds a doctorate, then in addition to being referred to as “Dr AB Smith” rather than Mr/Mrs/Miss/Ms AB Smith, a letter should be addressed to “the Reverend Dr AB Smith”.

## Exceptions

The exceptions to these rules come when a cleric holds a further post. There are four main cases to be aware of: Bishops and Archbishops, Archdeacons, Cathedral Deans, and Canons.

### Bishops

This is possibly the best-known exception. When addressing a letter or creating a formal listing, Bishops should be referred to as “the Right Reverend”. Letters should start “Dear Bishop”. In conversation, Bishops are usually referred to as “Bishop”, though in formal situations “My Lord” is sometimes used. When referred to in the third person, “the Bishop of X” may be used for the first reference and “the Bishop” from then on. If the Bishop in question is retired or is an Assistant Bishop, “My Lord” is not used, and they are referred to as “the Bishop” in the third person.

### Archbishops

When addressing a letter or creating a formal listing, the Archbishop should be referred to as “the Most Reverend”. Letters should start “Dear Archbishop”. In conversation, “Archbishop” is often used, though in more formal situations “Your Grace” is also used. If being referred to in the third person, “the Archbishop of Wales” might be used for the first reference, and “the Archbishop” for subsequent mentions.

### Archdeacons

When addressing a letter or creating a formal listing, an Archdeacon is referred to as “the Venerable”. A letter should start “Dear Archdeacon”. In conversation, an Archdeacon is usually referred to as “Archdeacon”, with a more formal alternative of “Mr Archdeacon”. In the third person, an Archdeacon may be referred to as “the Archdeacon of X” the first time, and “the Archdeacon” thereafter.

### Cathedral Deans

When addressing a letter or creating a formal listing, a Cathedral Dean is referred to as “the Very Reverend”. A letter should start “Dear Dean”. In conversation, a Cathedral Dean is usually referred to as “Dean”, with a more formal alternative of “Mr Dean”. In the third person, a Dean may be referred to as “the Dean of X” the first time, and “the Dean” thereafter.

### Canons

When addressing a letter or creating a formal listing, a Canon is referred to as “the Reverend Canon AB Smith”. A letter should start “Dear Canon…” In conversation, a Canon is usually referred to as “Canon”.

## Further exceptions

### Titled clerics

If a cleric holds a title, the title is usually placed after their religious title, e.g. the Reverend Sir Alan Smith Bt, if it is mentioned at all. There are a number of exceptions to this rule – for example, a priest would not normally receive the accolade or title of a knighthood unless they received it before they were ordained. Check Crockford’s Clerical Directory (available at most reference libraries) for further details.

### Clerics who are also members of Religious Orders

When addressing a letter or creating a formal listing, clerics who are members of religious orders may be addressed as “the Reverend AB Smith XYZ”, or possibly “the Reverend Brother Alan/Sister Alice XYZ”. In conversation, they may be addressed as “Brother Alan” or “Sister Alice” or “Father”, “Father Alan”, or “Father Smith”. The term “Father” is also often used by clerics who have no formal affiliation to a religious order.
http://dave.thehorners.com/index.php?option=com_user&view=login&return=aHR0cDovL2RhdmUudGhlaG9ybmVycy5jb20vc3VibWl0LW5ldy1xdW90ZXMtdXNlcm1lbnUtMTIy
Dave Horner's Website - Yet another perspective on things...

"Software people tend to favor the joy of complexity, yet we should strive for the joy of simplicity." - Alan Kay

$$\cos x = \sum\limits_{n = 0}^\infty {\frac{{\left( { - 1} \right)^n x^{2n} }}{{\left( {2n} \right)!}}}$$
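A quick sketch evaluating the displayed series by partial sums:

```python
import math

def cos_series(x, n_terms=20):
    """Partial sum of cos x = sum_{n>=0} (-1)^n x^(2n) / (2n)!"""
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(n_terms))

print(abs(cos_series(1.0) - math.cos(1.0)) < 1e-12)  # True
```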
https://zbmath.org/?q=an%3A0613.53049
## A mathematical theory of gravitational collapse.(English)Zbl 0613.53049 This paper supplements three previous papers of the author [ibid. 105, 337-362 (1986; Zbl 0608.35039), ibid. 106, 587-622 (1986), and ibid. 109, 591-611 (1987); see the preceding reviews]. In this paper the author investigates the asymptotic behavior of the generalized solutions as the retarded time u tends to infinity. It is shown that, when the final Bondi mass $$M_1\neq 0$$ as $$u\to \infty$$, a black hole of mass $$M_1$$ forms, surrounded by vacuum. Further it is shown that in the region exterior to the Schwarzschild sphere, $$r=2M_1$$, the solution tends to a stationary state as $$u\to \infty$$ and the mass remaining outside this sphere tends to zero as $$u\to \infty$$. Finally, the paper asserts the formation of an event horizon as $$u\to \infty$$, which is the part of the limiting hypersurface $$u=\infty$$ interior to this sphere. The rate of decay of the metric function and the asymptotic behaviour of the incoming light rays are obtained. Reviewer: N.Sengupta ### MSC: 53C80 Applications of global differential geometry to the sciences 35L70 Second-order nonlinear hyperbolic equations 83C05 Einstein’s equations (general structure, canonical formalism, Cauchy problems) 83C30 Asymptotic procedures (radiation, news functions, $$\mathcal{H}$$-spaces, etc.) in general relativity and gravitational theory 83C40 Gravitational energy and conservation laws; groups of motions ### Citations: Zbl 0613.53048; Zbl 0613.53047; Zbl 0608.35039 Full Text: ### References: [1] Christodoulou, D.: The problem of a self-gravitating scalar field. Commun. Math. Phys. 105, 337 (1986) · Zbl 0608.35039 [2] Christodoulou, D.: Global existence of generalized solutions of the spherically symmetric Einstein-scalar equations in the large. Commun. Math. Phys. 106, 587 (1986) · Zbl 0613.53047 [3] Christodoulou, D.: The structure and uniqueness of generalized solutions of the spherically symmetric Einstein-scalar equations. Commun. Math.
Phys. 109, 591 (1987) · Zbl 0613.53048
2022-08-12 00:03:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.645281970500946, "perplexity": 1142.3130194154503}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00709.warc.gz"}
https://mathoverflow.net/questions/348835/existence-of-weak-solutions-of-a-parabolic-pde
# Existence of weak solutions of a parabolic PDE Assume that $$\Omega\subset\mathbb{R}^n(n\geq3)$$ is a bounded open set with smooth boundary, $$\varphi\in H^1_0(\Omega)$$, $$F(t)$$ is differentiable in $$\mathbb{R}$$ and $$F'$$ is bounded. Given a PDE $$\begin{cases} u_t-\displaystyle{\sum}_{i,j=1}^na^{ij}(x)u_{x_ix_j}=F(u)&\text{a.e. }(x,t)\in\Omega\times(0,T],\\u|_{t=0}=\varphi&\text{in }L^2(\Omega) \end{cases}$$ where the matrix function $$[a^{ij}(x)]_{n\times n}\in C^{\infty}(\bar{\Omega})$$ is positive definite and symmetric. I want to know whether there exists a weak solution $$u\in L^{\infty}(0,T;H^1_0(\Omega))\cap L^2(0,T;H^2(\Omega))$$ such that $$u_t\in L^2(\Omega\times(0,T])$$. If it exists, how to prove it? Thanks! Start by showing that there exists a weak solution with lower regularity (standard $$L^2$$ energy), i.e. $$u\in C([0,T];L^2)\cap L^2(0,T;H^1_0) \qquad\mbox{with}\qquad u_t\in L^2(0,T;H^{-1}).$$ (This can be achieved by opening any textbook about quasilinear parabolic equations.) Then improve the regularity: since $$F$$ is Lipschitz it is easy to see that $$f:=F(u)\in L^2(0,T;L^2)$$ hence $$u$$ is a weak solution of the frozen Initial-Boundary-Value problem $$\left\{ \begin{array}{ll} u_t+Lu=f & \mbox{in }Q_T\\ u=0 & \mbox{on } [0,T]\times \partial\Omega\\ u|_{t=0}=\varphi & \mbox{in }\Omega \end{array} \right.$$ with $$\varphi\in H^1_0$$ and $$f\in L^2(0,T;L^2)$$, as well as $$L$$ a good, smooth, uniformly elliptic operator. Standard results (improved regularity) for this linear problem give the desired regularity for $$u$$ (see e.g. Evans' book "PDEs" section 7.1.3).
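The step "since $$F$$ is Lipschitz ... $$f:=F(u)\in L^2(0,T;L^2)$$" follows from the bound on $$F'$$ assumed in the question; a one-line sketch:

$$|F(u)| \le |F(0)| + \|F'\|_{\infty}\,|u| \quad\Longrightarrow\quad \|F(u)\|_{L^2(0,T;L^2)}^2 \le 2T|\Omega|\,|F(0)|^2 + 2\|F'\|_{\infty}^2\,\|u\|_{L^2(0,T;L^2)}^2 < \infty,$$

using $$(a+b)^2 \le 2a^2+2b^2$$ and the fact that the weak solution constructed in the first step already lies in $$L^2(0,T;L^2)$$.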
2020-10-24 23:47:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947360634803772, "perplexity": 90.19111017528215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885059.50/warc/CC-MAIN-20201024223210-20201025013210-00156.warc.gz"}
http://www.time4education.com/local/articlecms/page.php?id=1243
RBI Announces 526 "Office Attendants" posts

RBI has announced 526 posts of "Office Attendants" in various offices of the Bank. Selection for the post will be through a country-wide competitive Test (Online Test) followed by a Language Proficiency Test (in the Regional Language).

Important Dates

| Event | Date |
|---|---|
| Payment of Test Fees (Online) | 17.11.2017 to 07.12.2017 |
| Schedule of Online Test (Tentative) | In the month of December 2017/January 2018 |

The details of vacancies are given below. VI, HI and OH fall under PWD #; EX-1 and EX-2 fall under EXS #.

| Office | SC | ST | OBC | GEN | Total | VI | HI | OH | EX-1 | EX-2 |
|---|---|---|---|---|---|---|---|---|---|---|
| Ahmedabad | 0 | 6 | 6 | 27 | 39 | 1 | 0 | 0 | 2 | 8 |
| Bengaluru | 0 | 7 | 19 | 32 | 58 | 1 | 0 | 1 | 3 | 12 |
| Bhopal | 0 | 10 | 3 | 32 | 45 | 1 | 1 | 0 | 2 | 9 |
| Chandigarh & Shimla $ | 0 | 0 | 14 | 33 | 47 | 0 | 1 | 1 | 2 | 9 |
| Chennai | 0 | 0 | 5 | 5 | 10 | 1 | 0 | 0 | 0 | 2 |
| Guwahati | 0 | 3 | 2 | 5 | 10 | 0 | 0 | 0 | 0 | 2 |
| Hyderabad | 4 | 2 | 7 | 14 | 27 | 0 | 0 | 1 | 1 | 5 |
| Jammu | 0 | 0 | 9 | 10 | 19 | 0 | 0 | 1 | 1 | 4 |
| Lucknow | 0 | 0 | 6 | 7 | 13 | 1 | 0 | 0 | 1 | 3 |
| Kolkata | 3 | 0 | 2 | 5 | 10 | 0 | 1 | 0 | 0 | 2 |
| Mumbai, Navi Mumbai and Panaji & | 0 | 23 | 0 | 142 | 165 | 2 | 2 | 3 | 7 | 33 |
| Nagpur | 0 | 2 | 2 | 5 | 9 | 0 | 0 | 0 | 0 | 2 |
| New Delhi | 0 | 0 | 13 | 14 | 27 | 0 | 1 | 1 | 1 | 5 |
| Thiruvananthapuram | 0 | 0 | 12 | 35 | 47 | 1 | 0 | 0 | 2 | 9 |
| Total | 7 | 53 | 100 | 366 | 526 | 8 | 6 | 8 | 22 | 105 |

$ Chandigarh – 42 and Shimla – 5; & Mumbai – 144, Navi Mumbai (Belapur) – 15 and Panaji – 6

Eligibility Criteria: (a) Age (as on 01/11/2017): Between 18 and 25 years. (b) Educational Qualifications (as on 01/11/2017): A candidate should have passed 10th Standard (S.S.C./Matriculation) from the concerned State/UT to which he is applying.

Scheme of Selection: Selection would be done on the basis of an Online Test (as given below) and a Language Proficiency Test (LPT).

Online Test:

| Sr. No. | Name of Test (Objective) | No. of Questions | Maximum Marks | Composite Time |
|---|---|---|---|---|
| 1 | Reasoning | 30 | 30 | 90 minutes |
| 2 | General English | 30 | 30 | |
| 3 | General Awareness | 30 | 30 | |
| 4 | Numerical ability | 30 | 30 | |
| | Total | 120 | 120 | |

1. There will be negative marking for wrong answers in the Online Test: 1/4th mark will be deducted for each wrong answer.
2. LPT is mandatory. LPT will be of Qualifying Type.
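The negative-marking rule in the notice (1 mark per question, 1/4 mark deducted per wrong answer) is simple arithmetic; a sketch (the function name is illustrative, not from the notice):

```python
def online_test_score(correct, wrong):
    """Score under the notice's rule: 1 mark per correct answer,
    1/4 mark deducted per wrong answer."""
    return correct * 1.0 - wrong * 0.25

# e.g. 90 correct and 20 wrong out of 120 questions:
print(online_test_score(90, 20))  # 85.0
```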
2017-12-14 08:31:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22043061256408691, "perplexity": 9474.189582154468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948542031.37/warc/CC-MAIN-20171214074533-20171214094533-00705.warc.gz"}
https://ncatlab.org/homotopytypetheory/show/diff/differential+cohesive+homotopy+type+theory+%3E+history
Homotopy Type Theory differential cohesive homotopy type theory > history Contents Overview Differential cohesive homotopy type theory is a three-sorted dependent type theory of spaces, infinitesimal neighborhoods, and homotopy types, where there exist judgments • for spaces $\frac{\Gamma}{\Gamma \vdash S\ space}$ • for infinitesimal neighborhoods $\frac{\Gamma}{\Gamma \vdash I\ infinitesimal\ neighborhood}$ • for homotopy types $\frac{\Gamma}{\Gamma \vdash T\ homotopy\ type}$ • for points $\frac{\Gamma \vdash S\ space}{\Gamma \vdash s:S}$ • for infinitesimals $\frac{\Gamma \vdash I\ infinitesimal\ neighborhood}{\Gamma \vdash i:I}$ • for terms $\frac{\Gamma \vdash T\ homotopy\ type}{\Gamma \vdash t:T}$ • for fibrations $\frac{\Gamma \vdash S\ space}{\Gamma, s:S \vdash A(s)\ space}$ • for infinitesimal fibrations $\frac{\Gamma \vdash I\ infinitesimal\ neighborhood}{\Gamma, i:I\vdash A(i)\ infinitesimal\ neighborhood}$ • for dependent types $\frac{\Gamma \vdash T\ homotopy\ type}{\Gamma, t:T \vdash B(t)\ homotopy\ type}$ • for sections $\frac{\Gamma \vdash S\ space}{\Gamma, s:S \vdash a(s):A(s)}$ • for infinitesimal sections $\frac{\Gamma \vdash I\ infinitesimal\ neighborhood}{\Gamma, i:I\vdash a(i):A(i)}$ • for dependent terms $\frac{\Gamma \vdash T\ homotopy\ type}{\Gamma, t:T \vdash b(t):B(t)}$ Differential cohesive homotopy type theory has the following additional judgments, two for turning spaces into homotopy types, two for turning homotopy types into spaces, two for turning infinitesimal neighborhoods into homotopy types, two for turning homotopy types into infinitesimal neighborhoods, two for turning infinitesimal neighborhoods into spaces, and two for turning spaces into infinitesimal neighborhoods: • Every space has an underlying homotopy type $\frac{\Gamma \vdash S\ space}{\Gamma \vdash p_*(S)\ homotopy\ type}$ • Every space has a fundamental homotopy type $\frac{\Gamma \vdash S\ space}{\Gamma \vdash p_!(S)\ homotopy\ type}$ • Every homotopy type has a discrete space $\frac{\Gamma \vdash T\ homotopy\ type}{\Gamma \vdash p^*(T)\ space}$ • Every homotopy type has an indiscrete space $\frac{\Gamma \vdash T\ homotopy\ type}{\Gamma \vdash p^!(T)\ space}$ • Every infinitesimal neighborhood has an underlying homotopy type $\frac{\Gamma \vdash I\ infinitesimal\ neighborhood}{\Gamma \vdash q_*(I)\ homotopy\ type}$ • Every infinitesimal neighborhood has a fundamental homotopy type $\frac{\Gamma \vdash I\ infinitesimal\ neighborhood}{\Gamma \vdash q_!(I)\ homotopy\ type}$ • Every homotopy type has a discrete infinitesimal neighborhood $\frac{\Gamma \vdash T\ homotopy\ type}{\Gamma \vdash q^*(T)\ infinitesimal\ neighborhood}$ • Every homotopy type has an indiscrete infinitesimal neighborhood $\frac{\Gamma \vdash T\ homotopy\ type}{\Gamma \vdash q^!(T)\ infinitesimal\ neighborhood}$ I am not sure what the official names of these functors are: • Every space has an infinitesimal neighborhood $\frac{\Gamma \vdash S\ space}{\Gamma \vdash i_*(S)\ infinitesimal\ neighborhood}$ • Every space has an infinitesimal neighborhood $\frac{\Gamma \vdash S\ space}{\Gamma \vdash i_!(S)\ infinitesimal\ neighborhood}$ • Every infinitesimal neighborhood has a space whereby that infinitesimal neighborhood is contracted away. $\frac{\Gamma \vdash I\ infinitesimal\ neighborhood}{\Gamma \vdash i^*(I)\ space}$ • Every infinitesimal neighborhood has a space $\frac{\Gamma \vdash I\ infinitesimal\ neighborhood}{\Gamma \vdash i^!(I)\ space}$ Modalities From these judgments one could construct the reduction modality as $\mathfrak{R}(S) \coloneqq i_!(i^*(S))$ the infinitesimal shape modality as $\mathfrak{J}(S) \coloneqq i_*(i^*(S))$ and the infinitesimal flat modality as $\&(S) \coloneqq i_*(i^!(S))$ for a space $S$.
2022-12-05 07:23:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 28, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9455040097236633, "perplexity": 4353.8776510406415}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00817.warc.gz"}
http://physics.stackexchange.com/questions/112854/does-the-conversion-of-crude-oil-to-greenhouse-gases-have-any-measurable-effect
# Does the conversion of crude oil to greenhouse gases have any measurable effect on earth's gravitational pull? Oil underground is much denser than greenhouse gas in the atmosphere. Does the conversion in any way affect the gravitational force from earth? - Comment to the question (v1): In such type of questions, OP is encouraged to perform a crude back of an envelope calculation as a sanity check, and possibly include it in the post. –  Qmechanic May 15 '14 at 12:46 As Hoytman points out, fossil fuel combustion does not change the mass of the earth, and so its gravitation with celestial bodies is unchanged. But moving mass from the subsurface to the atmosphere would slightly decrease surface gravity. Let's see whether that decrease would be measurable. Let's suppose that, in some dystopian future, fossil fuel combustion has removed so much material from underground that the surface collapses and the radius of the entire earth $R_\oplus \approx 6\times10^6$ m shrinks by a meter. The volume beforehand was $$V = \frac{4\pi}3 R_\oplus^3$$ and the change in the volume is $$dV = 4\pi R_\oplus^2 dr$$ We'll pretend that the earth was a uniformly dense sphere before and after. In that case the fractional change in local gravity $dg/g$ is equal to $dV/V = 3dr/R_\oplus \approx \frac12\times10^{-6}$. This is about ten times smaller than the uncertainties on $G$ and $M_\oplus$, so it's safe to conclude that even in this extreme case the gravitational shift you're asking about wouldn't be measurable. A fastidious reader might note that if the earth actually shrank as the mass moved to the atmosphere, the effect would be even smaller: also changing the denominator of $g=GM_\oplus/R_\oplus^2$ reduces the size of the effect to $1\times dr/R_\oplus$.
- To be super pedantic, the mass of the Earth IS changed by an amount equal to (chemical binding energy used + excess thermal energy radiated to space)/$c^{2}$ –  Jerry Schirmer May 15 '14 at 14:53 Whether the change in chemical binding energy is larger or smaller than $10^{-6}$ of earth's mass is left as an exercise to the reader :-) –  rob May 15 '14 at 15:00 you could almost say his pedantry sucked, just by an infinitesimal amount –  John Nicholas May 15 '14 at 15:55 @JerrySchirmer The chemical binding energy isn't destroyed, it just lingers as heat, retaining its mass (unless of course, you are including it in the energy radiated to space, but then adding it would be redundant.) –  PyRulez May 15 '14 at 22:05 Because gravity is a property of mass and mass is neither created nor destroyed when fuel is burned, the only measurable difference would be caused by the change in the location of the mass. When fuel is burned, mass is taken from the surface of the planet and added to the atmosphere. This would cause a (slight) reduction in the downward gravity felt by objects on the planet's surface. As you move away from the surface of the planet (both up and down) the difference would shrink. - This is a pretty academic exercise. The calculation ignores the many side effects of the conversion. Depending on the composition of the oil, you will need to consider atmospheric pollution leading to a greenhouse effect. The atmosphere would contain more water vapour, which affects atmospheric pressure. While the total mass of earth plus atmosphere is the same, surface gravity as well as surface pressure will change. The change in surface pressure would be more noticeable than the change in surface gravity. -
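The back-of-the-envelope estimate in the accepted answer ($dg/g = 3\,dr/R_\oplus$) can be reproduced in a few lines (a sketch; the variable names are mine):

```python
R_earth = 6e6   # radius of the earth in metres, as used in the answer
dr = 1.0        # hypothetical 1 m shrinkage of the radius

# For a uniform sphere, dg/g = dV/V = 3 dr/R.
dg_over_g = 3 * dr / R_earth
print(dg_over_g)  # about 5e-07, i.e. roughly half a part per million
```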
2015-03-28 22:22:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7732668519020081, "perplexity": 560.255155493603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297831.4/warc/CC-MAIN-20150323172137-00080-ip-10-168-14-71.ec2.internal.warc.gz"}
http://kemandi.com/tag/algebra
# Recent questions tagged algebra

Let a, b, c be the roots of $x^3-2x^2+x+5=0$. Find the value of $a^4+b^4+c^4$.

Solve in integers the equation $x+y = x^2-xy+y^2$.

Find the ordered pair of nonzero real numbers (p, q) if the roots of the equation $x^2-qx+p=0$ are the squares of the roots of the equation $x^2-px+q=0$.

Write an equation expressing m explicitly in terms of n, if m, n > 1 and for all x > 0, $\log_n x = 3 \log_m x$.

Find the expression that is always greater: $\frac{a^4+2a^2+4}{3}$ or $\frac{a^4+a^2+1}{4}$.

Find the values of a and b if $(x-a)(x+2) = (x+6)(x-b)$ for all real numbers x.

Given $a-b=1$, what is the value of $a^3-3ab-b^3$?

What is the sum of the roots of the equation $(x-1)(x+9)(x-5)=0$?

For how many different positive integers n does $\sqrt{n}$ differ from $\sqrt{100}$ by less than 1?

If a, b and c are the zeros, possibly complex, of the polynomial $5x^3+1440x^2-120x+8$, what is the absolute value of $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}$?

Two imaginary solutions of $(x-1)(x-2)(x-3) = (6-1)(6-2)(6-3)$ satisfy the equation $x^2+k=0$. What is the value of k?

For x > 0, find the minimum value of $\sqrt{\frac{(4+x)(1+x)}{x}}$.

$\log_x 4 + \log_4 x = 17/4$

$x + \frac{5}{x}$
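The first question on the page (power sums of the roots of $x^3-2x^2+x+5=0$) can be answered without finding the roots, via Newton's identities; a sketch (the helper name is mine, not from the page):

```python
def power_sums_cubic(e1, e2, e3, k):
    """Power sum p_k = a^k + b^k + c^k of the roots of
    x^3 - e1 x^2 + e2 x - e3 = 0, via Newton's identities."""
    p = [None, e1, e1 * e1 - 2 * e2]          # p_1, p_2
    p.append(e1 * p[2] - e2 * p[1] + 3 * e3)  # p_3
    for n in range(4, k + 1):                 # p_n = e1 p_{n-1} - e2 p_{n-2} + e3 p_{n-3}
        p.append(e1 * p[n - 1] - e2 * p[n - 2] + e3 * p[n - 3])
    return p[k]

# x^3 - 2x^2 + x + 5 = 0  has  e1 = 2, e2 = 1, e3 = -5:
print(power_sums_cubic(2, 1, -5, 4))  # -38, so a^4 + b^4 + c^4 = -38
```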
2017-08-22 01:40:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5171379446983337, "perplexity": 724.1777659278475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109803.8/warc/CC-MAIN-20170822011838-20170822031838-00299.warc.gz"}
http://m-phi.blogspot.nl/2011/08/origins-of-modern-algebra-and-its.html
## Monday, 22 August 2011 ### The origins of modern algebra and its notations I am now working on the chapter of my book (on formal languages) which focuses on the historical development of formal languages and mathematical formalisms more generally. I wish I could go into these developments much more thoroughly than I will be able to in the book (given word limit and time constraints), as they are truly fascinating. (It is no secret to anyone having read my previous posts that I am obsessed with the topic of notations and symbolic systems!) So for now, let me share some bibliographical suggestions and a digest of my main findings. Some of you may be thinking that this topic is not of obvious interest in the context of a blog on mathematical philosophy. But one of the hallmarks of mathematical philosophy is certainly the use of mathematical formalisms and special notational systems, and thus a better understanding of this methodology (including its history) does seem to fall within the remit of the blog. (At any rate, feel free to stop reading if you don't care much about history!) For those who are still with me, here we go. A question that had puzzled me for years is: where does the explosive progress in mathematical notation of the 16th and 17th centuries come from? I knew enough about Latin medieval *academic* mathematics to know that nothing in the quadrivium curriculum seemed to anticipate the birth of modern algebra with Viete and Descartes. So it had to come from somewhere else, but where? Well, last year I attended a conference in Nancy, and the mystery started to be dispelled with a talk by Albrecht Heeffer on the medieval abbaco tradition. Here is a passage from the paper corresponding to that talk: By the end of the fifteenth century there existed two independent traditions of mathematical practice. On the one hand there was the Latin tradition as taught at the early universities and monastery schools in the quadrivium.
Of these four disciplines arithmetic was the dominant one with De Institutione Arithmetica of Boethius as the authoritative text. Arithmetic developed into a theory of proportions as a kind of qualitative arithmetic rather than being of any practical use, which appealed to esthetic and intellectual aspirations. On the other hand, the south of Europe also knew a flourishing tradition of what Jens Hoyrup (1994) calls “sub-scientific mathematical practice”. Sons of merchants and artisans, including well-known names such as Dante Alighieri and Leonardo da Vinci, were taught the basics of reckoning and arithmetic in the so-called abbaco schools in the cities of North Italy, the Provence, and Catalonia. The teachers or maestri d’abbaco produced between 1300 and 1500 about 250 extant treatises on arithmetic, algebra, practical geometry and business problems in the vernacular. The mathematical practice of these abbaco schools had clear practical use and supported the growing commercialization of European cities. These two traditions, with their own methodological and epistemic principles, existed completely separately. (Heeffer, forthcoming) Basically, the abbaco tradition was the missing link between Arabic algebra as consolidated in al-Khwārizmī’s “Book on restoration and opposition” (which in turn was inspired by other mathematical traditions, such as the Indian tradition) and the algebra of Viete and Descartes. The interesting thing is that Viete himself emphasizes his indebtedness to three Greek mathematicians (Pappus, Diophantus and Eudoxus -- see this paper by Danielle Macbeth), but makes no reference to either Arabic algebra or to the sub-scientific abbaco schools (this fits well the Renaissance ethos of the time, going back to the Classics!).
But besides the 'canonical' Arabic tradition, the abbaco tradition was much inspired by the introduction of special symbols and techniques to operate with the symbolism emerging in the mathematical tradition of the Maghreb in the later Middle Ages (again, it makes sense, as the abbaco people were merchants and thus traveled a lot!). The Maghrebian tradition appears to be the historical place of birth for many of the notational conventions still widely used, such as the notation for fractions. Thus, one lesson to be learned here is that focusing only on the official, 'academic' story is simply not enough to understand the emergence of modern mathematical symbolism; the sub-scientific tradition of the abbaco schools is a crucial piece in the puzzle. - A whole volume with the title Philosophical aspects of symbolic reasoning in Early Modern mathematics (and freely available!). In particular, the paper by Hoyrup tells the story of the development towards algebraic symbolization from circa 1300 to 1550, and the paper by Heeffer covers the ground immediately preceding Viete. - Another paper by Hoyrup, a concise survey of proto-algebra and pre-modern algebra. - Chapter 5 of Bellos' Alex's Adventures in Numberland, which I also mentioned in my previous post. In it (p.181), we discover for example that the reason why we use x as the main symbol for the unknown is because it is one of the least used letters in French! Descartes introduced the convention that letters towards the end of the alphabet would be used for unknown quantities (while those at the beginning would be used for known quantities), but as his La Geometrie was being printed, the printer was running out of letters (think of the old-fashioned printing method of using small lead letters to imprint the paper). He asked if it mattered whether x, y or z was used; since it did not matter, he opted for x simply because it is used less frequently in French. And here we are, still stuck with x's! 
UPDATE: Kai von Fintel makes the excellent suggestion of adding some links to the (still) definitive account of the history of mathematical notations: Florian Cajori's A History of Mathematical Notations (1928).The first volume can be downloaded (for free) here, the second seems not to have been scanned yet. And here is a compilation (from Cajori's book) of earliest uses of various mathematical notations. 1. Catarina, many thanks. Look forward to this book. I'm interested in various notational matters, and I'm pleased you've gathered some interesting material from the history of algebra. An example that bothers me is the best way to write sequences, and I've finally settled on $(a_i: i \in I)$, where $I$ is the index set. Although we see things like: $(a_0,\dots, a_n)$ $\langle a_0, \dots, a_n \rangle$ $\overline{a}$ $\underline{a}$ $\vec{a}$ $(a_i)_{i \in I}$ $\{a_i\}_{i \in I}$ $(a_i)$ $\{a_i\}$ and no doubt others ... And sometimes a sequence starts with $a_0$ and sometimes with $a_1$. 2. I had a question about your comments on why "x" is most frequently used as a variable, and the assertion that it is because it is the least frequently used letter in the French alphabet. I first came across this idea in a book by Art Johnson. I contacted him about it and asked if he could provide the original source of this bit of information. He said that he didn't have the reference. So far, every mention of this that I have ever seen in any book has listed Art Johnson as a source. (not that I have done an exhaustive search.) The earliest mention of anything connected with this idea that I was able to find was in an 1885 magazine article in Biblioteca Mathematica, but it still did not substantiate the claim or give other references. I was wondering if you have any references on this bit of information that you'd be willing to share. Citations are fine... I'll go find the originals. 
I'm not trying to discredit the claim, but rather, I'm interested in researching it and I'm not sure where to continue. Thanks so much, Dave 3. Hi Dave, legitimate question. My source for this piece of information was Alex Bellos' book, but right now I don't have it handy to check whether he gives any further references. But you are right, it may well be one of those 'urban legends' that everybody keeps repeating... It would be great to find a more definitive source for the story. 4. Looks like urban legend to me. the 1637 text is online at http://fr.wikisource.org/wiki/La_Géométrie_(éd._1637) and a cursory inspection shows not only that x, y, and z are all used as variables in the text, but also that 'z' occurs far less frequently in the body text than either 'x' (with uses similar to modern french) or 'y' (used in many cases where modern orthography would have an 'i'). Not to mention that a french printer, especially in that time, would want to have many instances of 'z' available for setting dialog. (« frère Jacques, dormez-vous ? »)
2017-11-24 20:27:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5851899981498718, "perplexity": 1165.9240003930643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808935.79/warc/CC-MAIN-20171124195442-20171124215442-00399.warc.gz"}
http://mathhelpforum.com/pre-calculus/91916-logs-question.html
1. logs question $\frac{ log_2{16} + log_2{32} }{log_2{x}} = log_2{x}$ $\frac{log_2{512}}{log_2{x}} = log_2{x}$ $log_2{512} = log_2{x^2}$ $512 = x^2$ $\sqrt{512} = x$ so x = 22.6 , is this correct? appreciate any input. thanks 2. Originally Posted by Tweety $\frac{ log_2{16} + log_2{32} }{log_2{x}} = log_2{x}$ $\frac{log_2{512}}{log_2{x}} = log_2{x}$ $log_2{512} = log_2{x^2}$ $512 = x^2$ $\sqrt{512} = x$ so x = 22.6 , is this correct? appreciate any input. thanks $16 = 2^4$ $32 = 2^5$ $9 = (log_2{x})^2$ $log_2{x} = \pm 3$ $x = 2^{\pm 3}$ Therefore $x = 8 \: , \: x = \frac{1}{8}$ 3. Originally Posted by Tweety $\frac{ log_2{16} + log_2{32} }{log_2{x}} = log_2{x}$ $\frac{log_2{512}}{log_2{x}} = log_2{x}$ $log_2{512} = log_2{x^2}$ PROBLEM is here, as $(log_2{x})*(log_2{x})\not=log_2{x^2}$ You want $log_2{512} = (log_2{x})^2$ , which is where the $9 = (log_2{x})^2$ written in the above post, comes from.
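The corrected answers in the thread, $x = 8$ and $x = \frac{1}{8}$, can be verified numerically (a quick sketch; the helper name is mine):

```python
import math

def lhs(x):
    # (log2 16 + log2 32) / log2 x  =  9 / log2 x
    return (math.log2(16) + math.log2(32)) / math.log2(x)

# Both solutions make the left side equal log2 x (3 and -3 respectively):
for x in (8, 1/8):
    print(x, lhs(x), math.log2(x))
```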
2016-08-25 00:09:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9733161330223083, "perplexity": 9183.063830909263}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292675.36/warc/CC-MAIN-20160823195812-00252-ip-10-153-172-175.ec2.internal.warc.gz"}
https://jympa.com/p4rc9fd7/input-z-must-be-a-2d-array-f63486
input z must be a 2d array

Array is a group of homogeneous data items which share a common name and are stored in contiguous memory locations. There are two types of arrays in Java: single-dimensional and multidimensional. An array of arrays is known as a 2D array: it stores data as a list of 1-D arrays, and a 2-dimensional array has two pairs of square brackets. A multidimensional array is mostly used to store a table-like structure, since a matrix can be represented as a table of rows and columns. Each element in a 2D array has two index values, a row index and a column index, and both start at 0.

Before going forward we have to know why we need an array at all. Consider storing one data item, like an employee name (of String type), for 100 employees: instead of 100 separate variables, we create only one variable of type array and give it a size of 100.

There are some steps involved while creating two-dimensional arrays: declaring the array, creating the array object, and initializing it. Syntax: there are two forms of declaring an array. You can declare and create in one statement:

    int[][] twodArray = new int[3][2]; // declared and created array object

or declare and initialize in one statement:

    String[][] twoDimentional = {{"1","1"},{"2","2"},{"3","3"},{"4","4"}};

For taking user input we take the help of the Scanner class in Java; here we create an object of that class called s1. For inserting data in 2D arrays we need two for loops: the first nested loop takes input from the user, and the second nested loop displays that input on the screen in a matrix format.

    import java.util.Scanner;

    public class TwoDArray {
        public static void main(String[] args) {
            Scanner s1 = new Scanner(System.in);
            int[][] twodArray = new int[3][2]; // declared and created array object
            for (int i = 0; i < 3; i++) {
                for (int j = 0; j < 2; j++) {
                    twodArray[i][j] = s1.nextInt();
                }
            }
            System.out.println("Your output would be as below:");
            for (int i = 0; i < 3; i++) {
                for (int j = 0; j < 2; j++) {
                    System.out.print(twodArray[i][j] + " ");
                }
                System.out.println();
            }
        }
    }

Updating an element works by assigning to its row and column index. Here we update one element and reprint the array through a helper method:

    public class TwoDArray {
        public static void main(String[] args) {
            String[][] twoDimentional = {{"1","1"},{"2","2"},{"3","3"},{"4","4"}};
            twoDimentional[3][0] = "5";
            System.out.println("After updating an array element: ");
            printArray(twoDimentional);
        }

        private static void printArray(String[][] twoDimentional) {
            for (int i = 0; i < 4; i++) {
                for (int j = 0; j < 2; j++) {
                    System.out.print(twoDimentional[i][j] + " ");
                }
                System.out.println("");
            }
        }
    }

Now, it's time to see what happens if we need to remove some particular elements in a 2D array. We need to understand that in Java we can't delete an item in a 2D array, because the array size is fixed; for that, the collection framework provides the ArrayList class.

A three-dimensional array can be seen as tables of arrays with 'x' rows and 'y' columns, where the row number ranges from 0 to (x-1) and the column number ranges from 0 to (y-1).

Here we discuss the introduction to 2D arrays in Java along with how to create, insert, update and remove elements. If you are clear about basic concepts like the for loop, you can easily understand the programs above; and if you really want to be good in Java, you should work on arrays.
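For comparison with the Java examples in this tutorial, here is the same create / update / remove workflow sketched in Python using a list of lists (an added illustration; Python lists, unlike fixed-size Java arrays, are resizable, which is why removal works directly here):

```python
# A 2-D "array" as a list of lists: 4 rows, 2 columns, mirroring the
# Java example String[][] twoDimentional = {{"1","1"},{"2","2"},{"3","3"},{"4","4"}}.
two_dimensional = [["1", "1"], ["2", "2"], ["3", "3"], ["4", "4"]]

# Update an element by its row and column index (both start at 0),
# just like twoDimentional[3][0] = "5" in the Java version.
two_dimensional[3][0] = "5"
assert two_dimensional[3] == ["5", "4"]

# Remove a whole row. A fixed-size Java array cannot do this in place
# (you would switch to an ArrayList); a Python list can.
del two_dimensional[1]
assert two_dimensional == [["1", "1"], ["3", "3"], ["5", "4"]]
```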
2021-06-24 09:15:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20452633500099182, "perplexity": 2438.3831000818595}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488552937.93/warc/CC-MAIN-20210624075940-20210624105940-00153.warc.gz"}
https://unapologetic.wordpress.com/2008/09/04/the-problem-with-pointwise-convergence/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician

## The Problem With Pointwise Convergence

I wrote this up yesterday between my sections of college algebra, but forgot to post it afterwards. Oops.

We’ve got a problem with the topology of pointwise convergence. The subspace of continuous functions isn’t closed. What does that mean? It means that if we take a sequence of continuous functions, their pointwise limit may not be continuous.

Here’s an example in the real numbers. Let $f_n(x)=\frac{x^{2n}}{1+x^{2n}}$, which is a sequence of well-defined continuous functions on the entire real line. But if we take the pointwise limit $f(x)=\lim\limits_{n\rightarrow\infty}f_n(x)$ we find that $f(x)=0$ for $|x|<1$, that $f(x)=1$ for $|x|>1$, and that $f(x)=\frac{1}{2}$ for $x=\pm1$. So the functions in the sequence are continuous at $x=\pm1$, but the limiting function isn’t.

It would be one thing if the sequence just failed to converge at some points — closedness doesn’t require all sequences to converge — but the pointwise limit clearly exists, and it fails to be continuous.

What we need is a stronger sense of convergence: one in which fewer sequences converge in the first place, and hopefully one in which the continuous functions turn out to be closed. But it should also obey the same definition as that of the pointwise limit when it does exist. And to find it we’ll need to recast the question of continuity in the limit.

Remember that a function is continuous at a point $x_0$ if it agrees with its limit there. That is, if $\lim\limits_{x\rightarrow x_0}f(x)=f(x_0)$. But the function $f$ should be the pointwise limit of the sequence $f_n$: $f(x)=\lim\limits_{n\rightarrow\infty}f_n(x)$. And each of these functions is continuous: $\lim\limits_{x\rightarrow x_0}f_n(x)=f_n(x_0)$. Putting these together, the condition for continuity in the limit is $\lim\limits_{x\rightarrow x_0}\lim\limits_{n\rightarrow\infty}f_n(x)=\lim\limits_{n\rightarrow\infty}\lim\limits_{x\rightarrow x_0}f_n(x)$.
So our question is really about when we can exchange limits. For which sequences of functions do the dependence on $x$ and that on $n$ play well enough together to allow these limits to be exchanged? We’ll answer that question tomorrow.

September 4, 2008 - Posted by | Analysis, Functional Analysis

## 3 Comments »

1. When I saw the title of this post, my immediate reaction was, “How is he going to fit all the problems into one post?!” :)

Comment by hilbertthm90 | September 4, 2008 | Reply

2. Well, sure there’s a lot of problems. But this is the one that motivates the next step!

Comment by John Armstrong | September 4, 2008 | Reply

3. […] Today we’ll give the answer to the problem of pointwise convergence. It’s analogous to the notion of uniform continuity in a metric space. In that case we noted […]

Pingback by Uniform Convergence « The Unapologetic Mathematician | September 5, 2008 | Reply
2015-05-22 14:36:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.956173300743103, "perplexity": 227.34317985381978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925274.34/warc/CC-MAIN-20150521113205-00127-ip-10-180-206-219.ec2.internal.warc.gz"}