https://eccc.weizmann.ac.il/eccc-reports/2001/TR01-080/
Under the auspices of the Computational Complexity Foundation (CCF)

### Revision(s)

Revision #3 to TR01-080 | 17th March 2002

#### Lower Bounds for Linear Locally Decodable Codes and Private Information Retrieval

Revision of: TR01-080
Authors: Oded Goldreich, Howard Karloff, Leonard J. Schulman, Luca Trevisan
Accepted on: 17th March 2002

Abstract: We prove that if a linear error-correcting code $C:\{0,1\}^n\to\{0,1\}^m$ is such that a bit of the message can be probabilistically reconstructed by looking at two entries of a corrupted codeword, then $m = 2^{\Omega(n)}$. We also present several extensions of this result. We show a reduction from the complexity of one-round, information-theoretic Private Information Retrieval systems (with two servers) to Locally Decodable Codes, and conclude that if all the servers' answers are linear combinations of the database content, then $t = \Omega(n/2^a)$, where $t$ is the length of the user's query and $a$ is the length of the servers' answers. Actually, $2^a$ can be replaced by $O(a^k)$, where $k$ is the number of bit locations in the answer that are actually inspected in the reconstruction.

(Revisions #1 and #2, both accepted 17th March 2002, carry no separate abstracts.)

### Paper: TR01-080 | 14th November 2001

Authors: Oded Goldreich, Howard Karloff, Leonard J. Schulman, Luca Trevisan
Publication: 14th November 2001 09:44

Abstract: identical to Revision #3 above.

ISSN 1433-8092
https://cs.stackexchange.com/questions/3027/maximum-independent-set-of-a-bipartite-graph?rq=1
# Maximum Independent Set of a Bipartite Graph

I'm trying to find the Maximum Independent Set of a bipartite graph. I found the following in some notes "May 13, 1998 - University of Washington - CSE 521 - Applications of network flow":

Problem: Given a bipartite graph $$G = (U,V,E)$$, find an independent set $$U' \cup V'$$ which is as large as possible, where $$U' \subseteq U$$ and $$V' \subseteq V$$. A set is independent if there are no edges of $$E$$ between elements of the set.

Solution: Construct a flow graph on the vertices $$U \cup V \cup \{s,t\}$$. For each edge $$(u,v) \in E$$ there is an infinite capacity edge from $$u$$ to $$v$$. For each $$u \in U$$, there is a unit capacity edge from $$s$$ to $$u$$, and for each $$v \in V$$, there is a unit capacity edge from $$v$$ to $$t$$. Find a finite capacity cut $$(S,T)$$, with $$s \in S$$ and $$t \in T$$. Let $$U' = U \cap S$$ and $$V' = V \cap T$$. The set $$U' \cup V'$$ is independent since there are no infinite capacity edges crossing the cut. The size of the cut is $$|U - U'| + |V - V'| = |U| + |V| - |U' \cup V'|$$. Thus, in order to make the independent set as large as possible, we make the cut as small as possible.

So let's take this as the graph:

    A - B - C
        |
    D - E - F

We can split this into a bipartite graph as follows: $$(U,V)=(\{A,C,E\},\{B,D,F\})$$. We can see by brute force search that the sole Maximum Independent Set is $$A,C,D,F$$.
Let's try to work through the solution above. The constructed flow network adjacency matrix would be: $$\begin{matrix} & s & t & A & B & C & D & E & F \\ s & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ t & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ A & 1 & 0 & 0 & \infty & 0 & 0 & 0 & 0 \\ B & 0 & 1 & \infty & 0 & \infty & 0 & \infty & 0 \\ C & 1 & 0 & 0 & \infty & 0 & 0 & 0 & 0 \\ D & 0 & 1 & 0 & 0 & 0 & 0 & \infty & 0 \\ E & 1 & 0 & 0 & \infty & 0 & \infty & 0 & \infty \\ F & 0 & 1 & 0 & 0 & 0 & 0 & \infty & 0 \\ \end{matrix}$$

Here is where I am stuck: the smallest finite capacity cut I see is a trivial one: $$(S,T) =(\{s\},\{t,A,B,C,D,E,F\})$$ with a capacity of 3. Using this cut leads to an incorrect solution of: $$U' = U \cap S = \{\}$$ $$V' = V \cap T = \{B,D,F\}$$ $$U' \cup V' = \{B,D,F\}$$ Whereas we expected $$U' \cup V' = \{A,C,D,F\}$$? Can anyone spot where I have gone wrong in my reasoning/working?

• (S,T) = ( {s,A,B,C}, {t,D,E,F} ) has capacity 2 – user2424 Aug 9 '12 at 0:58 • @Brian there is an infinite capacity edge from B to E across your cut, so it is infinite capacity. – Andrew Tomazos Aug 9 '12 at 4:26 • if i understand this correctly, based on the brute force solution, you need a cut where S contains A and C and T contains D and F, which makes your cut be {s, A, C}, {t, D, F}. Now, how do you construct the cut ? – njzk2 Jan 9 '13 at 17:27 • also, this looks like the Ford-Fulkerson, in which edges have a capacity of one. – njzk2 Jan 9 '13 at 20:18 • Look up the Hungarian algorithm. – Patrik Vörös Dec 4 '17 at 16:48

The complement of a maximum independent set is a minimum vertex cover. To find a minimum vertex cover in a bipartite graph, see König's theorem.

• This (maybe) solves the problem but does not answer the question. – Raphael Aug 4 '12 at 9:45 • @Raphael: I agree if you remove the word "maybe". :) – Jukka Suomela Aug 4 '12 at 10:52 • Oh, I am sure it solves the problem, but I am not sure whether it helps Andrew solve his problem.
– Raphael Aug 4 '12 at 11:35 • I solved it as you suggest: Hopcroft-Karp -> maximum matching -> König's Theorem -> Minimum Vertex Cover -> Complement -> Maximum Independent Set. I'd still like to know why the flow method described in my question doesn't seem to work. – Andrew Tomazos Aug 8 '12 at 1:58

The solution given is clearly incorrect, as you demonstrate with the counterexample. Note that the graph U+V is made a single connected component by the infinite-capacity edges. Therefore every valid cut will have to contain all of A, B, C, D, E, F on the same side. Trying to trace back where the solution came from: http://www.cs.washington.edu/education/courses/cse521/01sp/flownotes.pdf cites Network Flows, by Ahuja, Magnanti, and Orlin for some of the problems. This book is out of copyright and downloadable from http://archive.org/details/networkflows00ahuj but it doesn't seem to contain this problem and solution (searching for every occurrence of "bipartite"). Note that the explanation paragraph of the solution does not show that the smallest cut of the graph it constructs corresponds to the maximum independent set. It only shows a way to get an independent set. And yet, you can see what the algorithm is trying to do. Here is what the actual maximum independent set corresponds to in terms of its s,t cut: The infinite-capacity edge that breaks the algorithm is emphasised. I'm not sure how to fix the algorithm to what was intended. Maybe the cost of an infinite edge should be zero if it goes backwards (i.e. where it goes from S to T, but crosses from t-side to s-side)? But is it still easy to find the min-cut/max-flow with this nonlinearity?
Also, thinking of a way to bridge from @Jukka Suomela's solution to the algorithm from the question, there is a difficulty where we go from the maximum matching to the minimum vertex cover: while finding the maximum matching can be done by a max-flow-like algorithm, how do you recover the minimum vertex cover from it using a flow-like algorithm? As described here, after the maximum matching is found, the edges between U and V become directed to find the minimum vertex cover. So, again, this doesn't show that a simple application of min-cut/max-flow is all it takes to solve this problem.

The given algorithm is correct. The flow network constructed needs to be directed, and the value of an $$S$$-$$T$$ cut only considers edges going out of the vertex set $$S$$.

• I agree with you, but could you please add more details, for example, a complete correctness proof of the flow algorithm, and how the algorithm applies on the OP's example? – xskxzr Jun 10 '19 at 17:48 • The notes do have a short proof of correctness: cs.washington.edu/education/courses/cse521/01sp/flownotes.pdf For the example, if you look at the figure by Evgeni Sergeev above, the edges should all be directed downwards. Then the only two edges out of S are (s,e) and (b,t); the bolded red edge is going into S and should not be counted in the cut value. – yu25x Jun 15 '19 at 18:03

The cut should be on the actual flow, not on the capacities. Since the flow from s is finite, any {S,T} cut will be finite. The rest is as explained above.

• Are you sure? Cuts are usually on capacities and, in any case, we already know that the minimum cut is finite so cuts being infinite doesn't seem to be the problem. – David Richerby Apr 26 '16 at 15:23

I think that you don't need to connect the edges both ways even if the original graph was undirected. Because the flow network requires a directed graph, you can consider only the edges from $$U$$ to $$V$$.
Then in this new graph $$G'$$, you will have a min-cut of 2, which gives you the answer $$\{A,C,D,F\}$$.
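As a concrete check of the matching route mentioned above (maximum matching, then König's theorem, then complement the vertex cover), here is a minimal Python sketch. The function names are ours, and the matching is a plain augmenting-path search rather than Hopcroft-Karp, which is fine for small graphs:

```python
# Maximum independent set of a bipartite graph via König's theorem:
# |maximum matching| = |minimum vertex cover|, and the complement of a
# minimum vertex cover is a maximum independent set.

def max_matching(U, adj):
    """Augmenting-path maximum matching; `match` maps u->v and v->u."""
    match = {}

    def augment(u, seen):
        for v in adj.get(u, []):
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match or augment(match[v], seen):
                match[v] = u
                match[u] = v
                return True
        return False

    for u in U:
        if u not in match:
            augment(u, set())
    return match

def max_independent_set(U, V, adj):
    match = max_matching(U, adj)
    # König: Z = vertices reachable by alternating paths from unmatched U vertices
    Z = {u for u in U if u not in match}
    stack = list(Z)
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):           # follow an (unmatched) U -> V edge
            if v not in Z:
                Z.add(v)
                if v in match and match[v] not in Z:
                    Z.add(match[v])        # follow the matched V -> U edge back
                    stack.append(match[v])
    cover = (set(U) - Z) | (set(V) & Z)    # minimum vertex cover
    return (set(U) | set(V)) - cover       # its complement: the maximum IS
```

On the question's graph (edges A-B, B-C, B-E, D-E, E-F), `max_independent_set(['A', 'C', 'E'], ['B', 'D', 'F'], {'A': ['B'], 'C': ['B'], 'E': ['B', 'D', 'F']})` returns `{'A', 'C', 'D', 'F'}`, matching the brute-force answer.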
https://www.aimsciences.org/article/doi/10.3934/mfc.2021014?viewType=html
# American Institute of Mathematical Sciences

November 2021, 4(4): 253-269. doi: 10.3934/mfc.2021014

## Convex combination of data matrices: PCA perturbation bounds for multi-objective optimal design of mechanical metafilters

1 IMT School for Advanced Studies, AXES Research Unit, Piazza S. Francesco, 19, 55100 Lucca, Italy
2 University of Genoa, Department of Civil, Chemical and Environmental Engineering, Via Montallegro, 1, 16145 Genova, Italy
* Corresponding author: Giorgio Gnecco

Received April 2021. Revised July 2021. Published November 2021. Early access August 2021.

Fund Project: A. Bacigalupo and G. Gnecco are members of INdAM. The authors acknowledge financial support from INdAM-GNAMPA, from INdAM-GNFM (project Trade-off between Number of Examples and Precision in Variations of the Fixed-Effects Panel Data Model), from the Università Italo Francese (projects GALILEO 2019 no. G19-48 and GALILEO 2021 no. G21 89), from the Compagnia di San Paolo (project MINIERA no. I34I20000380007), and from the University of Trento (project UNMASKED 2020).

In the present study, matrix perturbation bounds on the eigenvalues and on the invariant subspaces found by principal component analysis are investigated, for the case in which the data matrix on which principal component analysis is performed is a convex combination of two data matrices. The application of the theoretical analysis to multi-objective optimization problems, e.g., those arising in the design of mechanical metamaterial filters, is also discussed, together with possible extensions.

Citation: Giorgio Gnecco, Andrea Bacigalupo. Convex combination of data matrices: PCA perturbation bounds for multi-objective optimal design of mechanical metafilters. Mathematical Foundations of Computing, 2021, 4 (4) : 253-269.
doi: 10.3934/mfc.2021014

Figure captions:

- (a) Positive eigenvalues $\lambda_i({\bf{G}}(\alpha))$ (green curves, $i = 1,\ldots,5$), their best lower bounds derived from the first inequalities in Eqs. (1a) and (1b) in Proposition 1 (blue curves) with $K = 50$, and their best upper bounds derived from the same inequalities, still with $K = 50$ (red curves); (b) for $K = 1$, $i = 1$, and each $\alpha \in [0,1]$: $\sin(\theta_{1,{\rm min}}(\alpha))$ (green curve), and the smallest upper bound on it, based on the second-to-last inequalities in Eqs. (11a) and (11b) in Proposition 2 (blue curve)
- Figure 2. Beam lattice metamaterials with viscoelastic resonators and their reference periodic cell [19]
- Floquet-Bloch spectrum maximizing a low-frequency band gap of a mechanical metamaterial filter: (a) $3$-dimensional representation; (b) projection of the spectrum onto a vertical plane
- Floquet-Bloch spectrum maximizing a high-frequency pass band of a mechanical metamaterial filter: (a) $3$-dimensional representation; (b) projection of the spectrum onto a vertical plane
- Floquet-Bloch spectrum maximizing a trade-off between a low-frequency band gap and a high-frequency pass band of a mechanical metamaterial filter: (a) $3$-dimensional representation; (b) projection of the spectrum onto a vertical plane
http://learnaboutstructures.com/Practical-Uses-of-Influence-Lines
# 6.5 Practical Uses of Influence Lines Now that we know how to construct influence lines, how do we use them, and what are they good for? The biggest benefit is that once you construct an influence line for a reaction, shear or moment at a critical location in a structure, you can easily check how multiple different load patterns on the structure affect that load effect (the reaction, shear or moment). For example, if we are designing the reaction base for a beam, we can first construct the influence line for that reaction. Then, we can use that influence line to check multiple different load patterns and load cases on the beam without having to re-analyse the beam for every different set of loads. This is particularly useful if you have to check multiple different load patterns to find the worst possible combination, or if you have a moving load caused by a vehicle. In this section, we will look at three main applications for influence lines. The first is the use of an influence line to determine the influence of a single point load. The second is the use of an influence line to determine the effect of a distributed load or patterned distributed load. The last is the use of an influence line to determine the effect of a moving pattern of loads. ## The Influence of a Point Load Up to now, we have only seen how influence lines show us the effect of a unit point load moving along a beam (with a magnitude of 1.0); however, an influence line may be used to determine the effect of any magnitude point load on a beam. All we have to do is multiply the magnitude of the applied point load by the value of the influence line at the location where the point load is applied. For example, a sample influence line for a vertical support reaction $C_y$ is shown at the top of Figure 6.16 (IL $C_y$). 
If we apply a point load of $\SI{13}{kN}$ at point E, as shown, then the reaction force $C_y$ will be equal to: \begin{align*} C_y &= \SI{13}{kN}(-0.4)= - \SI{5.2}{kN} \\ &= \SI{5.2}{kN} \downarrow \end{align*} where $0.4$ is the value of the influence line at point E as shown. Figure 6.16: Use of Influence Lines with a Single Point Load Likewise, if you were to put that point load anywhere else along the beam, the effect on $C_y$ would be equal to the value of the point load multiplied by the value of the influence line at that location. Another example is shown in the middle diagram in Figure 6.16. Here an in-between influence line value at a point between A and B is found using similar triangles to be equal to 0.6. If the $\SI{13}{kN}$ point load is applied at this location, then the reaction $C_y = 13(0.6) = \SI{7.8}{kN} \uparrow$. In addition to telling us what the effect of a certain load will be on a parameter such as the reaction at a certain point, the influence line also gives us a visual indication of where a load should be placed to create the maximum positive or negative effect. For example, the bottom diagram in Figure 6.16 shows an influence diagram for some internal moment in a beam $M_B$. From this diagram, it is clear that if a load may be placed anywhere along the beam, then placing the load at point B would cause the greatest positive internal moment at point B ($M_B$) and placing the load at point D would cause the maximum negative moment at point B ($M_B$). ## The Influence of a Distributed Load Influence lines also allow us to easily find the effect of distributed loads on individual response parameters (e.g. reactions, shear at a point). To do this, we simply find the area underneath the influence diagram for the parts of the diagram where a uniform distributed load is applied and then multiply that area by the magnitude of the uniform load. An example of this is shown in Figure 6.17. 
At the top of this figure, an influence diagram is shown for the vertical reaction at a point C (IL $C_y$). If a $\SI{16}{kN/m}$ distributed load is applied between points B and C, then we can find the effect of the distributed load on the reaction $C_y$ by multiplying the area under the influence diagram under the applied load (the trapezoidal area shaded in the figure) by the value of the distributed load ($\SI{16}{kN/m}$): \begin{align*} C_y &= \frac{1.4+1.0}{2}(\SI{4}{m})(\SI{16}{kN/m}) \\ C_y &= \SI{76.8}{kN} \uparrow \end{align*} Figure 6.17: Use of Influence Lines with Distributed Loads If we then add a point load to the beam simultaneously, as shown in the second diagram in Figure 6.17, the effects of the distributed load and the point load are additive: \begin{align*} C_y &= \frac{1.4+1.0}{2}(\SI{4}{m})(\SI{16}{kN/m}) + \SI{13}{kN}(-0.4) \\ C_y &= \SI{71.6}{kN} \uparrow \end{align*} Like the point load case discussed in the previous section, influence diagrams can also help us to determine where we should load the beam to cause the worst effect on a parameter. This is illustrated by the bottom two diagrams in Figure 6.17. If we would like to find the distributed loading pattern that will cause the greatest positive moment at B ($M_B$), then we should only load between points A and C and between points D and E as shown in the figure. If we were to also load the rest of the beam between C and D, then our moment would actually decrease. Patterned loading like this is a common design case in structural engineering, and this example makes it clear that sometimes considering a distributed load to act along the entire length of a beam can actually be unconservative! We could actually get a worse moment by leaving some of the load off. Likewise, if we want to find the distributed loading pattern that will cause the greatest negative moment at B, then we should only load the beam between points C and D as shown in Figure 6.17.
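Both rules so far (point load: load times the influence ordinate; distributed load: load intensity times the area under the influence line over the loaded length) can be sketched in a few lines of code. The influence line here is represented as hypothetical (position, ordinate) breakpoints with linear interpolation between them; the positions are made up, but the ordinates echo Figures 6.16 and 6.17:

```python
def il_value(ordinates, x):
    """Influence ordinate at x, interpolating a piecewise-linear influence
    line given as (position, ordinate) breakpoints."""
    pts = sorted(ordinates)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("position outside the influence line")

def point_load_effect(ordinates, P, x):
    """Effect of a point load P at x: P times the influence ordinate there."""
    return P * il_value(ordinates, x)

def distributed_load_effect(ordinates, w, a, b):
    """Effect of a uniform load w over [a, b]: w times the area under the
    influence line there (exact for a piecewise-linear IL: trapezoids
    between breakpoints)."""
    xs = sorted({a, b} | {x for x, _ in ordinates if a < x < b})
    area = sum((x1 - x0) * (il_value(ordinates, x0) + il_value(ordinates, x1)) / 2
               for x0, x1 in zip(xs, xs[1:]))
    return w * area
```

With a made-up IL for $C_y$ such as `[(0.0, 0.0), (2.0, 1.4), (6.0, 1.0), (10.0, -0.4)]` (positions in m), `distributed_load_effect(il, 16.0, 2.0, 6.0)` reproduces the 76.8 kN above, and adding `point_load_effect(il, 13.0, 10.0)` (the -0.4 ordinate) gives the combined 71.6 kN.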
## The Influence of a Series of Moving Loads The effect of a series of point loads on a certain response parameter (such as a reaction or internal moment at a point) may be found by simply adding up the effect of each individual point load. But what happens if we have a set of point loads in a fixed arrangement, the arrangement may be placed anywhere along the beam, and we need to know the worst case? This is a common design problem in bridge engineering, since one of the design loads is a set of loads caused by a "standard" truck, which can move and be placed anywhere along the length of the bridge. One example of a standard set of truck loads for Ontario is shown in Figure 6.18. Since the truck has multiple wheels/axles, the total load of the truck is not spread evenly over its length, but is concentrated at the locations of the wheel axles. The standard truck shown in the figure is designed to be a worst-case heavy truck. Figure 6.18: Standard Truck Axle Loads If we want to find the worst effect of such a set of moving loads, caused by a truck or otherwise, then we can use the influence line to easily find it. It may seem like there are an infinite number of possibilities for the location of a moving truck on a bridge, but, luckily, it can be shown that the worst case will always be when one of the loads is placed on a peak of the influence diagram. So, we only need to check the possibilities where each load is placed on a peak. An example of this method is shown in Figure 6.19. A sample influence diagram for the shear in a beam at point B is shown at the top of the figure (IL $V_B$), and we must find the maximum positive shear that may be caused by the set of moving loads shown on the top right of the figure. This moving set of loads consists of three different loads with different magnitudes $P_1$, $P_2$, and $P_3$. The loads are spaced apart as shown and that spacing is constant.
Figure 6.19: Use of Influence Lines with a Series of Point Loads There is only one peak for the positive shear in the influence diagram, and that peak is located just immediately to the left of point B. Therefore, the worst case will occur when one of the loads $P_1$, $P_2$, or $P_3$ is located right at point B (actually just immediately to the left of point B). All of the possibilities for this are shown in Figure 6.19. Note that, since the series could potentially travel in either direction across the beam, we need to check both the cases where it moves from left to right (where the front load $P_1$ is on the right), and the cases where it moves from right to left (where the front load $P_1$ is on the left). To find the total shear $V_B$ caused by each possibility shown in Figure 6.19, all we have to do is add up the effect of each point load (the value of the point load multiplied by the value of the influence line at the point load's location). Sometimes, one of the loads may fall off of the beam altogether, as shown in the figure. In that case, that load is ignored and not added to the others. Of course, loads that cause a negative shear effect would be subtracted from the total. At the end, whichever possibility creates the greatest possible shear at point B constitutes the worst case load series location.
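The "place each load on a peak, try both travel directions, ignore loads that fall off the beam" procedure can be written as a small search. Everything below is hypothetical: the load magnitudes, spacings, influence ordinates, and function names are made up for illustration, not taken from Figure 6.19:

```python
def il_value(ordinates, x):
    """Influence ordinate at x for a piecewise-linear influence line given
    as (position, ordinate) breakpoints; a load off the beam contributes 0."""
    pts = sorted(ordinates)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return 0.0  # load has run off the beam: ignored, as the text prescribes

def worst_case_series(ordinates, loads, offsets, peaks):
    """Place each load of the series on each influence-line peak, in both
    travel directions, and return the largest total effect found.
    loads[i] sits at (anchor + sign * offsets[i]) for sign = +1 or -1."""
    best = float("-inf")
    for peak in peaks:
        for i in range(len(loads)):
            for sign in (+1, -1):               # series travelling either way
                anchor = peak - sign * offsets[i]  # puts load i on the peak
                total = sum(P * il_value(ordinates, anchor + sign * d)
                            for P, d in zip(loads, offsets))
                best = max(best, total)
    return best
```

For example, a triangular IL `[(0, 0), (6, 1.0), (10, 0)]` with loads 50, 100, 60 at spacings 0, 2, 4 m has its worst case when the 100 unit load sits on the peak with the series reversed; the search simply enumerates all six placements and keeps the maximum.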
https://www.physicsforums.com/threads/very-complicated-formula.183860/
# Very complicated formula

1. Sep 10, 2007

### bcyang

1. The problem statement, all variables and given/known data

The problem occurred when solving $$x'' - \frac{1}{x^2} = 0$$. You can think of this as if there is a mass at the origin (M) and a small particle (m << M) being pulled by this mass. Daniel helped me to solve this diff. eq. and we are at

2. Relevant equations

$$\frac{1}{2} (x')^2 + \frac{1}{x} = C$$ where C is a constant.

3. The attempt at a solution

Separating variables gives $$x' = \sqrt{2(C - 1/x)}$$, so I asked Mathematica to solve $$\int \frac{dx}{\sqrt{2(C-1/x)}}$$. It gives me some very complicated formula which isn't too handy. At first, this problem seemed to me a trivial exercise, but now I realize that this may not be an easy one. I hope somebody can help. Thank you very much in advance!!!

Last edited: Sep 10, 2007

2. Sep 10, 2007

### bob1182006

I know there's a formula to solve integrals of the form 1/sqrt(a^2-x^2) but I'm not sure if it holds for complex numbers. If integration is about the same for complex numbers, then you can try getting it in the form of the derivative of arcsin x.
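Even without a handy closed form, the conservation law in the thread can be checked numerically: with the sign convention as written, $x'' = 1/x^2$, the quantity $\tfrac12 (x')^2 + 1/x$ should stay constant along any trajectory. A small sketch (not from the thread; a classic fixed-step RK4 integrator with our own function names):

```python
def simulate(x0, v0, dt, steps):
    """Integrate x'' = 1/x^2 (the thread's sign convention) as the
    first-order system (x, v) with classic fixed-step RK4."""
    def acc(x):
        return 1.0 / (x * x)
    x, v = x0, v0
    for _ in range(steps):
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x += dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v

def energy(x, v):
    """The conserved quantity from the thread: (1/2) x'^2 + 1/x."""
    return 0.5 * v * v + 1.0 / x
```

Starting from rest at x = 1, the particle moves outward and the "energy" holds at its initial value to within the integrator's tiny truncation error, which is a useful sanity check on the separated form of the equation.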
https://byjus.com/question-answer/shifting-of-electrons-of-multiple-bonds-under-the-influence-of-a-reagent-is-called-electromeric/
Question

# Shifting of electrons of multiple bonds under the influence of a reagent is called __________.

A inductive effect
B mesomeric effect
C electromeric effect
D none of the above

Solution

## The correct option is C, electromeric effect.

The electromeric effect refers to a molecular polarizability effect occurring by an intramolecular electron displacement (sometimes called the conjugative mechanism and, previously, the tautomeric mechanism), characterized by the substitution of one electron pair for another within the same atomic octet of electrons. This effect is shown by compounds containing multiple bonds. When a double or a triple bond is attacked by a reagent, the pair of bonding electrons involved in the $$\pi$$ bond is transferred completely from one atom to another. This effect lasts only as long as the attacking reagent is present; as soon as the reagent is removed, the polarized molecule returns to its original state. Option C is correct.
https://www.physicsforums.com/threads/does-it-exist-and-is-it-continuous-exact.435643/
# Does it exist and is it continuous? (exact)

Hello,

Let $$M$$ be a function from R^2 to R with image M(x,y). Given is that $$M$$ has continuous partial derivatives $$\frac{\partial M(x,y)}{\partial y}$$ & $$\frac{\partial M(x,y)}{\partial x}$$.

Question: Does $$\frac{\partial^2}{\partial x \partial y} \int M(x,y) \mathrm d x$$ exist, and is it continuous?

It is used in my DE course, but I don't find it self-evident. The professor's argument was that since M is already differentiable with respect to x and y, and integrating "makes it more continuous", it will definitely be differentiable. Okay, with this reasoning I find it somewhat plausible that $$\frac{\partial}{\partial y}\int M(x,y) \mathrm d x$$ exists, but how does one convince himself that this last expression is differentiable with respect to x? And even if it is differentiable with respect to x, the derivative might have an essential discontinuity. (Of course I'm not saying it does in this case: I believe my professor, but I don't believe his hand-waving reasoning and I'm looking for a more rigorous/insightful argument.)

Are you talking about the first set of lecture notes that the professor posted? If so, then $$\frac{\partial^2}{\partial x \partial y} \int M(x,y) \mathrm d x = \frac{\partial}{\partial y} M(x,y)$$ and it was said in the notes to assume that this function is continuous.

EDIT: I'm assuming you are in MAT267
EDIT 2: You do go to U of T, right?

Last edited:

Hello.
Nope, I'm from Belgium, but I suppose exact differential equations are popular in all DE courses :p

As for your comment: that equality you give is only true if you can mix/switch partial derivatives, which you can only do if you already know $$\frac{\partial^2}{\partial x \partial y} \int M(x,y) \mathrm d x$$ is continuous (and existing, of course), the latter being exactly my question.

I didn't switch anything. $$\frac{\partial^2}{\partial x \partial y} \int M(x,y) dx$$ means that you first differentiate $$\int M(x,y) dx$$ with respect to x, then differentiate the resulting expression with respect to y. By the fundamental theorem of calculus, differentiating $$\int M(x,y) dx$$ with respect to x gives $$M(x,y)$$, and differentiation with respect to y can be written $$\frac{\partial}{\partial y} M(x,y)$$. But I guess your question is whether $$\frac{\partial}{\partial y} M(x,y)$$ is continuous. In my set of lecture notes, I am to assume it is continuous, so I can't help you there. Sorry!

As a counterexample, what about the function $$M(x,y) = y^2\sin(\frac{1}{y}) + 0x$$:

$$\frac{\partial^2}{\partial x \partial y} \int M(x,y) dx = \frac{\partial^2}{\partial x \partial y} \int y^2\sin(\frac{1}{y}) + 0x \, dx = \frac{\partial}{\partial y} y^2 \sin(\frac{1}{y})$$

But the partial derivative with respect to y of $$y^2 \sin(\frac{1}{y})$$ is not continuous when y = 0.

JG89: with regard to your first post, you seem to be a bit confused: the expression means you first differentiate with respect to y and then with respect to x, not the other way around. As for your second post: your M(x,y) does not have continuous partial derivatives (because dM/dy is not continuous at zero), which I stated as a given.
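The discontinuity claimed in the counterexample can be checked numerically (a quick sketch of my own, not from the thread): for y != 0, the derivative of y^2 sin(1/y) is 2y sin(1/y) - cos(1/y), which oscillates between values near -1 and +1 arbitrarily close to y = 0, so it has no limit there.

```python
import math

# For y != 0,  d/dy [ y^2 sin(1/y) ] = 2 y sin(1/y) - cos(1/y).
def dMdy(y):
    return 2 * y * math.sin(1 / y) - math.cos(1 / y)

# Along y = 1/(2*pi*n) the cosine term is +1, so dMdy is near -1;
# along y = 1/((2n+1)*pi) it is -1, so dMdy is near +1.  The values
# oscillate all the way down to y = 0, so no limit exists there and the
# derivative cannot be extended continuously.
for n in range(1, 6):
    print(n,
          round(dMdy(1 / (2 * math.pi * n)), 6),       # near -1
          round(dMdy(1 / ((2 * n + 1) * math.pi)), 6)) # near +1
```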
https://leanprover-community.github.io/archive/stream/116395-maths/topic/polynomial.html
## Stream: maths

### Topic: polynomial

#### Patrick Massot (Jul 19 2018 at 09:16):

What's happening with polynomials? I see ce990c59d authored by Chris and merged by Johannes but PR171 is still open and active

#### Mario Carneiro (Jul 19 2018 at 09:17):

I think Johannes is currently working on merging the Mason Stothers work with Chris's

#### Patrick Massot (Jul 19 2018 at 09:24):

That's really nice

#### Patrick Massot (Jul 19 2018 at 09:26):

It's a really important part of elementary maths that was missing.

#### Patrick Massot (Jul 19 2018 at 09:27):

It makes me think of my normed space work again. Do you think I should PR https://github.com/PatrickMassot/lean-differential-topology/blob/master/src/norms.lean (after removing the type class inference nightmare at the end)? Would it help in getting more motivation to fix the issues?

#### Mario Carneiro (Jul 19 2018 at 09:28):

I think that's a good idea. I know Johannes has his own plans for this stuff, but I think a mathlib PR is the best place to coordinate

#### Patrick Massot (Jul 19 2018 at 09:31):

Ok, I'll try to do that today

#### Patrick Massot (Jul 19 2018 at 15:47):

https://github.com/leanprover/mathlib/pull/208

#### Nicholas Scheel (Jul 19 2018 at 16:22):

@Patrick Massot I'm curious if you could just define norm as dist 0? It seems like you spend a lot of time converting between the two ...

#### Patrick Massot (Jul 19 2018 at 16:24):

I don't think you would get the expected properties

#### Nicholas Scheel (Jul 19 2018 at 16:26):

how so? you have lemma norm_dist { g : G} : dist g 0 = ∥g∥ already ...
(plus commutativity gets you my definition)

#### Patrick Massot (Jul 19 2018 at 16:27):

I mean: there are distances on groups such that dist 0 is not a norm

#### Patrick Massot (Jul 19 2018 at 16:28):

Think of the trivial distance for instance

#### Patrick Massot (Jul 19 2018 at 16:28):

maybe this is not a good example actually

#### Nicholas Scheel (Jul 19 2018 at 16:29):

I think you need dist x y = dist 0 (x - y), as the equivalent of the property you already have, but I think norm adds nothing to the definition – it must be equal to dist 0

#### Patrick Massot (Jul 19 2018 at 16:29):

yes, something like this is needed

Last updated: May 11 2021 at 17:39 UTC
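The alternative Nicholas sketches in the thread above (make `dist` the primitive, define the norm as distance to zero, and take translation invariance as the extra axiom) could look roughly like this in Lean 3; the class and definition names here are hypothetical, not actual mathlib declarations:

```lean
import topology.metric_space.basic
import data.real.basic

-- Hypothetical sketch: `dist` is primitive, translation invariance is
-- the extra axiom, and the norm becomes a definition rather than data.
class dist_normed_group (G : Type*) extends add_comm_group G, metric_space G :=
(dist_eq_sub : ∀ x y : G, dist x y = dist (x - y) 0)

-- the "norm" is then just distance to the origin
def norm_of_dist {G : Type*} [dist_normed_group G] (g : G) : ℝ := dist g 0
```

Patrick's objection then becomes precise: not every metric on a group satisfies `dist_eq_sub`, so that axiom, rather than a separate norm field, is what carries the content.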
https://space.stackexchange.com/questions/32811/apply-forces-such-that-a-satellite-reaches-a-given-point-in-space
# Apply forces such that a satellite reaches a given point in space

I'd like to find out a strategy for solving the following simplified problem.

• A small spherical craft at $$t=0$$ has position and velocity vectors $$\mathbf{x_0}, \mathbf{v_0}$$ in a zero-gravity environment.
• Each second the craft can emit one pulse, resulting in a standard, small $$\Delta v$$, via one outward-pointing thruster that can vector in any direction by the craft rotating its spherical shape between pulses using internal attitude control.
• The craft must be moving with a speed (absolute value, not directional velocity) less than $$v_m$$ at the position $$\mathbf{x_1}$$.
• Ideally a least-time solution is sought, but a procedure that yields many solutions is still helpful because they can be compared and the optimal solution selected.

How would I go about trying to solve this problem? Are there existing solutions or methodologies?

• Are you making any considerations for fuel usage or mass, or do you just assume that each pulse changes the velocity? – Dragongeek Dec 12 '18 at 20:40
• I'm sure I read a question like this recently – JCRM Dec 13 '18 at 15:20
• What is its rotational rate and acceleration: can we assume it can rotate to any position in the time between thrusts? Can we assume the thrusts are of sufficiently short duration that they can be treated as impulses; otherwise, can the acceleration be treated as constant during the thrust? Is $v_m$ known to be larger than $\Delta v$? – JCRM Dec 13 '18 at 15:36
• I'm voting to close this question as off-topic because it isn't really about space exploration: it's a geometry question. – JCRM Dec 13 '18 at 15:46
• The simplest way of doing this is to zero the velocity (isosceles triangle with the current velocity as the base and $\Delta v$ as the sides), then point at the target and thrust. If $v_m$ is smaller than $\Delta v$ then another isosceles triangle is needed, and you need to adjust your aim by the distance travelled during the first thrust. – JCRM Dec 13 '18 at 15:55
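JCRM's velocity-zeroing recipe can be sketched numerically (my own illustration; function names and numbers are made up): thrust retrograde with fixed-magnitude pulses until the remaining speed is at most $2\Delta v$, then cancel it exactly with two symmetric pulses, the "isosceles triangle" whose two sides of length $\Delta v$ sum to $-\mathbf{v}$.

```python
import math

# Velocity-nulling with fixed-magnitude pulses of size dv, direction free.
# Retrograde pulses first; then two pulses at angles +/-theta around the
# retrograde direction, with cos(theta) = |v| / (2*dv), sum to -v exactly.

def norm(v):
    return math.hypot(v[0], v[1])

def zero_velocity(v, dv):
    pulses = 0
    while norm(v) > 2 * dv:          # retrograde pulses
        s = norm(v)
        v = (v[0] - dv * v[0] / s, v[1] - dv * v[1] / s)
        pulses += 1
    s = norm(v)
    if s > 0:
        theta = math.acos(s / (2 * dv))
        ux, uy = -v[0] / s, -v[1] / s      # unit retrograde direction
        for sign in (+1, -1):
            c, t = math.cos(sign * theta), math.sin(sign * theta)
            # rotate the retrograde direction by +/-theta, scale to dv
            v = (v[0] + dv * (c * ux - t * uy),
                 v[1] + dv * (t * ux + c * uy))
            pulses += 1
    return v, pulses

v_final, n = zero_velocity((0.23, -0.11), dv=0.05)
print(norm(v_final), n)              # residual speed is ~0
```

This only handles the velocity phase; aiming at $\mathbf{x_1}$ and braking below $v_m$ on arrival would repeat the same construction with the drift during each pulse accounted for.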
http://logic.dorais.org/archives/38
The (strong) Choquet game on a topological space $X$ is played as follows. There are two players, Empty and Nonempty, who alternate turns for infinitely many rounds. On round $i$, Empty moves first, choosing a point $x_i$ and an open neighborhood $U_i$ of $x_i$ and, if $i \geq 1$, such that $U_i \subseteq V_{i-1}$ (the open set that Nonempty played on the previous round). Then, Nonempty responds with an open neighborhood $V_i$ of the same point $x_i$ such that $V_i \subseteq U_i$. After all the rounds have been played, we obtain a descending sequence of open sets $$U_0 \supseteq V_0 \supseteq U_1 \supseteq V_1 \supseteq U_2 \supseteq V_2 \supseteq \cdots$$ together with a sequence of points $x_0,x_1,x_2,\dots$ Empty wins this play if $\bigcap_{i=0}^\infty V_i = \emptyset$; Nonempty wins if $\bigcap_{i=0}^\infty V_i \neq \emptyset$. A Choquet space is a topological space $X$ such that Nonempty has a winning strategy in the Choquet game played on the topological space $X$. The Choquet game was originally designed by Choquet to give a topological characterization of which metrizable spaces admit a complete metric. However, not all Choquet spaces are metrizable. In general, the Choquet game turns out to be a good measure of completeness for topological spaces. In the case of complete metric spaces $X$, Nonempty has a relatively simple winning strategy in the Choquet game on $X$. Once Empty has played the point-neighborhood pair $x_i \in U_i$, Nonempty responds by picking an open ball $V_i$ around $x_i$ that fits inside $U_i$ and has radius no larger than $1/2^i$. This forces Empty to play a Cauchy sequence of points $x_0,x_1,x_2,\dots$ whose limit witnesses that $\bigcap_{i=0}^\infty V_i \neq \emptyset$. Note that to carry out this strategy, Nonempty only needs to know the last move played by Empty and to remember which round is currently being played. 
In fact, with just a small change in strategy, Nonempty doesn’t even need to remember which round is being played: Nonempty simply needs to pick an open ball $V_i$ around $x_i$ whose radius is no larger than a quarter of the diameter of $U_i$, since that ensures that the radius of each open ball played by Nonempty decreases by at least one half at each step. A strategy for Nonempty that only uses the last move played by Empty to decide what to play next is called a stationary strategy. Thus, we see that for a metrizable space $X$, the following are equivalent:

1. $X$ admits a complete metric.
2. Nonempty has a winning strategy in the Choquet game played on $X$.
3. Nonempty has a stationary winning strategy in the Choquet game played on $X$.

Since the Choquet game makes sense for arbitrary topological spaces, it makes sense to ask whether items 2 and 3 are equivalent in the general case. They are not, but it is known that the equivalence holds for classes of spaces much broader than metrizable spaces. In our paper [1], Carl Mummert and I show that the equivalence between the existence of general and stationary strategies for Nonempty in the Choquet game holds for an interesting class of spaces, which includes all second-countable T1 spaces. To state our main result, I must introduce an unusual property of topological bases. A base $\mathcal{B}$ for a topology is said to be open-finite if every open set has only finitely many supersets in $\mathcal{B}$. While it is unusual for a base to have this property, it turns out that many spaces happen to have such a base. For example, all second-countable T1 spaces have such a base. The main result of our paper is the following.

Theorem (Dorais–Mummert). Let $X$ be a topological space with an open-finite base. If Nonempty has a winning strategy in the Choquet game on $X$, then Nonempty has a stationary winning strategy in the Choquet game on $X$.
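To make the quarter-diameter stationary strategy concrete, here is a small simulation of the Choquet game on the complete metric space $\mathbb{R}$ (an illustrative sketch of my own, not from the paper), with balls stored as (center, radius) pairs:

```python
# Stationary strategy for Nonempty on R.  Empty here is one particular
# adversary that keeps drifting toward the right edge; any legal Empty
# would do.

def nonempty_move(x, U):
    cU, rU = U
    fit = rU - abs(x - cU)       # largest radius that still fits inside U
    # radius at most a quarter of diam(U) = 2*rU, i.e. at most rU/2
    return (x, min(fit, rU / 2))

def empty_move(V):
    cV, rV = V
    x = cV + rV / 2              # a point of V, pushed toward the edge
    return x, (x, rV / 4)        # a neighborhood of x contained in V

V = (0.0, 1.0)                   # the opening open set
Vs = []
for _ in range(40):
    x, U = empty_move(V)
    V = nonempty_move(x, U)
    Vs.append(V)

# Radii shrink geometrically, so the centers converge; the last center
# lies in every V_i, witnessing a nonempty intersection.
limit = Vs[-1][0]
print(all(abs(limit - c) < r for c, r in Vs), Vs[-1][1])
```

Note that `nonempty_move` looks only at Empty's last move, which is exactly what makes the strategy stationary.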
The method for proving this is new and interesting, but you will have to read our paper to find out… The Choquet game appears to be tied to certain types of representability of topological spaces. Representability issues are very important in the context of reverse mathematics since second-order arithmetic offers very limited resources to talk about large multi-layered objects like topological spaces. In [2], Carl Mummert introduced a broad class of topological spaces that can be represented in second-order arithmetic: countably based maximal filter (MF) spaces. The basic datum for these spaces consists of a countable partial order $(\mathcal{P},{\leq})$; the points of the space are the class $MF(\mathcal{P},{\leq})$ of maximal filters on $(\mathcal{P},{\leq})$, and the basic open sets consist of all classes $U_p = \set{F \in MF(\mathcal{P},{\leq}) : p \in F}$. It is not hard to see that these second-countable spaces are all T1 and Choquet. A topological characterization of countably based MF spaces was obtained by Carl Mummert and Frank Stephan [3], who established that the countably based MF spaces are precisely the second-countable T1 Choquet spaces. The original proof of this result is long and intricate. The existence of stationary winning strategies for Nonempty in such spaces leads to a much easier proof of this representation theorem. This proof was not included in our paper since it was too far from the main topic and not short enough to include in passing. Therefore, I am recording this proof here for posterity.

Theorem (Mummert–Stephan). Every second-countable T1 Choquet space is homeomorphic to a countably based MF space.

Proof. Suppose that $X$ is a second-countable T1 Choquet space. Let $\mathcal{B}$ be a countable open-finite base for $X$, and let $\mathfrak{S}$ be a stationary winning strategy for Nonempty in the Choquet game on $X$.
We will define a transitive relation ${\prec}$ on $\mathcal{B}$ such that $X$ is homeomorphic to $MF(\mathcal{B},{\preceq})$. (Although the notation suggests otherwise, the relation ${\prec}$ is not necessarily irreflexive.) A natural choice for ${\prec}$ would be to define $V \prec U$ to hold if and only if $V \subseteq \mathfrak{S}(x,U)$ for some $x \in V$. However, this relation is not necessarily transitive. To remedy this, we define $V \prec U$ to hold if and only if there is a point $x \in V$ such that $V \subseteq \mathfrak{S}(x,W)$ for every $W \in \mathcal{B}$ such that $U \subseteq W$. This relation is clearly transitive. Moreover, since $\mathcal{B}$ is open-finite, there are only finitely many such $W$, so the intersection of all corresponding $\mathfrak{S}(x,W)$ is an open neighborhood of $x$. This guarantees that for every $x \in U$, there is a $V \in \mathcal{B}$ such that $x \in V$ and $V \prec U$. We begin by recording a lemma that will be used repeatedly in this proof.

Lemma. For every maximal filter $F$ on $(\mathcal{B},{\preceq})$ there is a descending sequence $$\cdots \prec U_2 \prec U_1 \prec U_0$$ such that $F = \set{W \in \mathcal{B} : (\exists i)(U_i \preceq W)}$.

Proof. Since $F$ is countable and downward directed in $(\mathcal{B},{\preceq})$, it is easy to get a sequence $$\cdots \preceq V_2 \preceq V_1 \preceq V_0$$ such that $F = \set{W \in \mathcal{B} : (\exists i)(V_i \preceq W)}$. If this sequence is not eventually constant, we can eliminate repeated elements to obtain a sequence as required by the lemma. Otherwise, we may assume that $V_0 = V_i$ for every $i$. As observed above, there must be some $U \in \mathcal{B}$ such that $U \prec V_0$. Since $F$ is a maximal filter in $(\mathcal{B},{\preceq})$, we must have $U = V_0$, which means that the constant sequence $U_i = U$ is as required by the lemma.
QED

The first step of the proof is to define the map $h:MF(\mathcal{B},{\preceq}) \to X$ that will witness that the two spaces are homeomorphic. Fix $F \in MF(\mathcal{B},{\preceq})$; we will show that $\bigcap F$ is always a singleton, so that we may define $h(F)$ to be the unique point of $X$ that belongs to every element of $F$. First find a descending sequence $$\cdots \prec U_2 \prec U_1 \prec U_0$$ that generates $F$ as in the above lemma. By definition of $\prec$, we can find corresponding points $x_0,x_1,x_2,\dots$ such that $x_i \in U_{i+1}$ and $U_{i+1} \subseteq \mathfrak{S}(x_i,U_i)$. This defines a valid sequence of moves for Empty against Nonempty’s stationary strategy $\mathfrak{S}$ in the Choquet game on $X$. Since $\mathfrak{S}$ is a winning strategy for Nonempty, it follows that $\bigcap_{i=0}^\infty U_i = \bigcap F$ is nonempty. To see that $\bigcap F$ has only one point, suppose for the sake of contradiction that $\bigcap_{i=0}^\infty U_i$ contains two distinct points $x$ and $y$. Because $X$ is T1, we can find a neighborhood $V_0$ of $x$ in $\mathcal{B}$ that does not contain $y$. Define the descending sequence $$\cdots \prec V_2 \prec V_1 \prec V_0$$ so that $x \in V_{i+1}$ and $V_{i+1} \subseteq \mathfrak{S}(x,W)$ for every $W \in \mathcal{B}$ such that $V_i \cap U_i \subseteq W$. The filter $$G = \set{W \in \mathcal{B} : (\exists i)(V_i \preceq W)}$$ extends $F$ since $V_i \preceq U_i$ for each $i$. Since $V_0 \in G$ but $V_0 \notin F$, this contradicts the maximality of $F$. Now that $h:MF(\mathcal{B},{\preceq}) \to X$ is properly defined, it remains to show that it is a homeomorphism. We first show that $h$ is a bijection, which we break into two facts:

• $h$ is injective. Suppose that $F_0$ and $F_1$ are maximal filters that map to the same point $x$. By the lemma, we can find two sequences $$\cdots \prec U_2^d \prec U_1^d \prec U_0^d \qquad (d \in \set{0,1})$$ that generate these two filters.
Since $x \in U_i^0 \cap U_i^1$ for each $i$, we can find another sequence $$\cdots \prec V_2 \prec V_1 \prec V_0$$ of neighborhoods of $x$ in $\mathcal{B}$ such that $V_{i+1} \subseteq \mathfrak{S}(x,W)$ for every $W \in \mathcal{B}$ such that $U_i^0 \cap U_i^1 \cap V_i \subseteq W$. Then the filter $$G = \set{W \in \mathcal{B} : (\exists i)(V_i \preceq W)}$$ extends both $F_0$ and $F_1$, which means that $F_0 = G = F_1$.

• $h$ is surjective. Let $U_0,U_1,U_2,\dots$ be an enumeration of $\mathcal{B}$ (possibly with repetitions). Given $x \in X$, define the descending sequence $$\cdots \preceq V_2 \preceq V_1 \preceq V_0$$ of neighborhoods of $x$ in $\mathcal{B}$ as follows. Pick $V_0 \in \mathcal{B}$ so that $x \in V_0$. If $x \in U_i$ then pick $V_{i+1}$ in such a way that $V_{i+1} \subseteq \mathfrak{S}(x,W)$ for every $W \in \mathcal{B}$ such that $V_i \cap U_i \subseteq W$; if $x \notin U_i$ then simply set $V_{i+1} = V_i$. Since $X$ is T1, we immediately see that $\bigcap_{i=0}^\infty V_i = \set{x}$. Therefore, any maximal filter extending $$F = \set{W \in \mathcal{B} : (\exists i)(V_i \preceq W)}$$ will map to $x$. (In fact, $F$ is already maximal.)

Since the choice of $V_0$ was essentially arbitrary in the process we just used to show that $h$ is surjective, for every $V_0 \in \mathcal{B}$ and every $x \in V_0$ we can find some $F \in MF(\mathcal{B},{\preceq})$ such that $V_0 \in F$ and $h(F) = x$. It follows that $$h(F) \in V_0 \iff V_0 \in F,$$ which shows that $h$ is a homeomorphism. QED

### References

[1] F. G. Dorais and C. Mummert, “Stationary and convergent strategies in Choquet games,” Fund. Math., vol. 209, no. 1, pp. 59–79, 2010. doi:10.4064/fm209-1-5

[2] C. Mummert, “Reverse mathematics of MF spaces,” J. Math. Log., vol. 6, no. 2, pp. 203–232, 2006. doi:10.1142/S0219061306000578

[3] C. Mummert and F. Stephan, “Topological aspects of poset spaces,” Michigan Math. J., vol. 59, no. 1, pp. 3–24, 2010. doi:10.1307/mmj/1272376025
https://cob.silverchair.com/jeb/article/208/14/2653/15634/Sucking-while-swimming-evaluating-the-effects-of?searchresult=1
It is well established that suction feeding fish use a variable amount of swimming (ram) during prey capture. However, the fluid mechanical effects of ram on suction feeding are not well understood. In this study we quantified the effects of ram on the maximum fluid speed of the water entering the mouth during feeding as well as the spatial patterns of flow entering the mouth of suction-feeding bluegill sunfish Lepomis macrochirus. Using Digital Particle Image Velocimetry (DPIV) and high-speed video, we observed the flow in front of the mouth of three fish using a vertical laser sheet positioned on the mid-sagittal plane of the fish. From this we quantified the maximum fluid speed (measured at a distance in front of the mouth equal to one half of the maximum mouth diameter), the degree of focusing of water flow entering the mouth, and the shape of the ingested volume of water. Ram speed in 41 feeding sequences, measured at the time of maximum gape, ranged between 0 and 25 cm s^-1, and the ratio of ram speed to fluid speed ranged from 0.1% to 19.1%. In a regression, ram speed did not significantly affect peak fluid speed, but with an increase in ram speed the degree of focusing of water entering the mouth increased significantly, and the shape of the ingested volume of water became more elongate and narrow. The implications of these findings are that (1) suction feeders that employ ram of between 0% and 20% of fluid speed sacrifice little in terms of the fluid speeds they generate and (2) ram speed enhances the total body closing speed of the predator.

Many aquatic feeding vertebrates swim toward their prey while using suction to draw the prey into their mouth.
This combination of 'ram' and suction allows the predator to rapidly close the distance to the prey item, and because the volume of water that is influenced by a suction feeder is restricted to a very small distance in front of the mouth, ram also allows the predator to position the mouth aperture close enough to the prey that suction can be effective. The relative use of ram and suction has been recognized as a major axis of behavioral diversity in aquatic feeders, and several attempts have been made to quantify their relative contribution in predator–prey interactions (e.g. Norton and Brainerd, 1993; Svanbäck et al., 2002; Sass and Motta, 2002). One important issue about the combined use of ram and suction concerns how the two behaviors may combine hydrodynamically (Weihs, 1980; Muller et al., 1982; Muller and Osse, 1984). How does the attack velocity of the predator influence the spatial pattern of fluid flow entering the mouth? The influence of ram speed on the water ingested during suction feeding was estimated by Weihs (1980) using a hydrodynamic sink model. In this model, the ingested volume of water became focused in front of the mouth as ram speed increased. The shape of the ingested volume of water appears to be related to the ratio of ram speed to fluid speed, such that higher values will result in the capture of narrower and more elongated parcels of water (Weihs, 1980). One metric of suction performance is the maximum fluid speed moving towards the mouth of the fish. While it may be possible that ram and suction work in concert to increase overall prey capture performance (Wainwright et al., 2001), it has also been suggested that swimming can decrease suction performance (Nyberg, 1971).
The idea is that fluid flow is determined by the rate of buccal expansion, and if swimming speed approaches that of the suction-induced flow, net flow in the absolute reference frame could be negligible because most of the water flow into the mouth will be passive (Nyberg, 1971; van Leeuwen, 1984). Additionally, a swimming fish produces water movement in front of it, termed a bow wave, and it is possible that this could negatively influence the suction flow (Nyberg, 1971; Lauder and Clark, 1984; Muller and Osse, 1984; Van Damme and Aerts, 1997; Summers et al., 1998; Ferry-Graham et al., 2003). In the present study we visualized the flows generated by suction-feeding bluegill sunfish using digital particle image velocimetry (DPIV; Fig. 1; e.g. Drucker and Lauder, 2002, 2003), and we measured the effect of bluegill swimming speed on aspects of the induced suction flows. Depending on the question, we measured fluid speed FS in either the earth-bound, or absolute, frame of reference (AFS) or the fish's frame of reference (FFS). We focused on the following three questions: First, does ram speed affect the maximum fluid speed entering the mouth during suction feeding, as measured in the absolute frame of reference? We hypothesize that, if the fish is stationary, fluid speed in the absolute frame of reference (AFSstationary) will result exclusively from buccal cavity expansion. However, if the fish is swimming at a ram speed RS, then fluid speed at the mouth aperture in the absolute frame of reference (AFSswimming) will equal the predicted fluid speed if the fish were not moving (AFSstationary) minus the magnitude of RS. This is because when the buccal cavity expands, water will enter the mouth passively at a speed equal to the swimming speed of the fish. We therefore expected that increases in ram speed would result in decreasing fluid speed as long as buccal expansion rate is identical.
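The frame-of-reference hypothesis above reduces to simple arithmetic; the following is a minimal sketch, with all numbers and variable names hypothetical rather than measurements from this study:

```python
def predicted_afs_swimming(afs_stationary_cm_s, ram_speed_cm_s):
    """Predicted absolute-frame fluid speed at the mouth aperture of a
    swimming fish: the stationary-fish suction speed minus ram speed,
    because when the buccal cavity expands, water at the mouth is already
    moving at the swimming speed relative to the ground."""
    return afs_stationary_cm_s - ram_speed_cm_s

# Hypothetical example: a fish that would generate 100 cm/s at the aperture
# when stationary, swimming at 20 cm/s, is predicted to produce 80 cm/s
# in the absolute frame of reference.
print(predicted_afs_swimming(100.0, 20.0))  # 80.0
```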
Second, does ram speed affect the degree of 'focusing' of the water that has potential to enter the mouth, as measured in the fish's frame of reference? The degree of focusing (Fig. 2) characterizes the directionality of flow towards the mouth. A low degree of focusing indicates water is being drawn from every direction, whereas a high degree of focusing indicates water is being drawn predominantly from in front of the mouth. We expected that, with increasing ram, the degree of focusing would increase (Drost et al., 1988; Weihs, 1980).

Fig. 1. The experimental setup used in this study. In order to elicit varying ram speeds at the time of capture the prey was introduced at one of three distances from the sunfish: 0 cm (A), 30 cm (B), and 50 cm (C). Note that mirrors were positioned below and above the tank to reflect the laser sheet up and then down in order to illuminate both above and below the head of the fish during feeding.

Lastly, does ram speed affect the shape of the ingested volume of water, as measured in the absolute frame of reference? If water flow into the mouth becomes more focused with increasing ram speed, this should influence the dimensions of the parcel of water that is captured during a suction feeding event. Modeling studies (Drost et al., 1988; Weihs, 1980) and limited empirical work (van Leeuwen, 1984) have indicated that this will be the case, with higher ram speeds resulting in the ingested parcel of water becoming elongate in the direction of swimming and reaching farther away from the mouth aperture.
Experimental subjects

We studied the bluegill sunfish Lepomis macrochirus Rafinesque, a member of the freshwater family Centrarchidae. Bluegill have been the focus of considerable work on the functional morphology and biomechanics of suction feeding (for example Lauder, 1980; Lauder and Clark, 1984; Ferry-Graham et al., 2003) and have been shown to be one of the highest performing suction feeders among centrarchid species (Carroll et al., 2004). The fish were collected in Yolo County, California, USA, brought back to the University of California, Davis and housed individually in 100-liter aquaria at 22°C. Fish were fed daily with cut squid (Loligo sp.) and/or small annelid 'tubifex' worms. All maintenance and experimental procedures used in this research followed a protocol that was reviewed by the University of California, Davis Institutional Animal Care and Use Committee. We analyzed data from three fish with standard lengths of 15.3 cm, 15.0 cm and 15.4 cm.

Fig. 2. Representative images, with streamlines and contours of fluid speed in the fish's frame of reference at the time of maximum gape for a low ram case (A; RS/AFSaperture=0%), a medium ram case (B; RS/AFSaperture=6%), and a high ram case (C; RS/AFSaperture=14%). Note that the streamlines do not indicate the area of water ingested, but rather the instantaneous direction of movement of water at each location in space.

Experimental protocol

Each bluegill was placed in the experimental tank and trained to feed in the laser sheet (see below).
At the onset of experiments, the individual was kept at one end of the tank and restrained behind a door (Fig. 1). A tubifex worm (∼1.0 cm) or a ghost shrimp (Palaemonetes sp., about 2 cm) was then dropped through a 0.3 cm diameter plastic tubing or attached to a thin wire, held within the laser light sheet and within the camera field of view, and the door was lifted. Varying locomotor speeds were elicited by introducing the prey items at one of three distances from the fish (Fig. 1A–C). Previous work indicates that bluegill will capture prey with relatively high ram speeds when traversing distances within the range used in this set-up (T. E. Higham, B. Malas, B. C. Jayne and G. V. Lauder, manuscript submitted for publication). Each individual was fed at every location and the order of locations for each fish was arbitrarily chosen.

Digital Particle Image Velocimetry (DPIV)

We used DPIV to quantify a number of parameters describing the flow of water into the mouth during suction feeding. Willert and Gharib (1991) provide a detailed description of this technique for measuring fluid flow. An Innova-90 5 W argon-ion continuous wave laser (Coherent, Inc., Santa Clara, CA, USA) was used in combination with a set of focusing lenses and mirrors to produce a vertical laser sheet that was approximately 10 cm wide and 1 mm thick in the aquarium (Fig. 1). The aquarium was seeded with silver coated, neutrally buoyant glass spheres (12 μm) in order to visualize the flow of water. Mirrors above and below the tank were used to illuminate both above and below the head of the fish during feeding (Fig. 1). Lateral-view video sequences were recorded using a NAC Memrecam ci digital system (Tokyo, Japan) operating at 500 images s–1 (Fig. 1) with a field of view of 5.1 × 6.7 cm.
Additionally, a Sony CCD camcorder (Tokyo, Japan), operating at 30 images s–1, was used to capture anterior view images for each sequence in order to determine the orientation and position of the fish relative to the laser sheet. While we only analyzed sequences recorded in lateral view in this study, we have found that the flow pattern generated by bluegill is radially symmetrical about the long axis of the fish (Day et al., 2005). An adaptive mesh cross correlation algorithm created by Scarano and Riethmuller (1999) was used to calculate velocities from image pairs. The distance that particles traveled between image pairs (2 ms interval) was determined within interrogation windows with dimensions of 0.9 × 0.9 mm, with 50% overlap between interrogation windows. The algorithm then returned a two-dimensional grid of two components of measured velocity for each image pair that was processed. Two-dimensional (x and y) velocity vector profiles were visualized using Tecplot version 10 (Amtec Engineering, Inc., Bellevue, Washington, USA). In order to determine the validity of the vector measurements, a two-step validation scheme was implemented. Only vectors with a signal-to-noise ratio (SNR) of 2 or greater were included in the analyses, and no smoothing was applied to the final velocity field. Some spurious measurements passed the SNR validation criterion, and the second part of the validation scheme accounted for these measurements. Measurements both directly on the transect (i,j) and at two grid points above (i,j+2) and two grid points below (i,j–2) were considered at each horizontal position along the transect. Measurements located two grid points away from the primary measurement location were used, because these do not overlap the primary measurement region.
If at least two of the three measurements considered had not been removed, based on the SNR criterion (step one of the validation scheme), then the mean of the remaining measurements was used as the value of speed for that position along the transect. Finally, for several sequences we confirmed that measurements with an SNR of 2 were accurate by tracking particles manually using IMAGE J version 1.33 (NIH, Washington, DC, USA). A transect extending forward from the center of the fish's mouth was studied to measure the speed of the fluid as a function of distance from the mouth. The closest position to the mouth where accurate measurements of velocity vectors were made in 100% of the sequences was at a distance equal to one half of the peak gape diameter (PG) of the fish for the feeding sequence. The accuracy at this position was validated in every trial. All vector velocities reported in this paper are at this distance and the term 'AFS1/2 PG' refers to the speed of the fluid at this position.

Data analysis

The statistical analyses were performed only on those feedings that met the following criteria: (1) successful prey capture occurred, (2) the laser sheet intersected the mid-sagittal plane of the fish (verified with the anterior view camera), (3) the fish was centered on the filming screen in lateral view, and (4) maximum gape followed prey capture. The last point is important because the prey item can interfere with the DPIV measurements, which were made at maximum gape. Using IMAGE J, the x and y coordinates of the tip of the upper and lower jaw were digitized for each image (2 ms intervals) starting before the onset of mouth opening and ending after the mouth was closed. These points were used to quantify changes in gape and to calculate maximum gape for every feeding sequence. Time to peak gape (TTPG) was measured as the time from 20% to 95% of maximum gape.
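The two-step validation scheme described above can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the grid layout and variable names are assumptions (only the SNR >= 2 threshold and the two-of-three averaging rule come from the text):

```python
def validated_speed(speed, snr, i, j, snr_min=2.0):
    """Validate the speed measurement at transect point (i, j).

    Considers the point itself plus the points two grid positions above
    (i, j+2) and below (i, j-2), which come from non-overlapping
    interrogation windows. Measurements with SNR below snr_min are
    discarded (step one); if at least two of the three survive, the mean
    of the survivors is returned (step two), otherwise None.
    """
    passing = []
    for dj in (-2, 0, 2):
        jj = j + dj
        if 0 <= jj < len(speed[i]) and snr[i][jj] >= snr_min:
            passing.append(speed[i][jj])
    if len(passing) >= 2:
        return sum(passing) / len(passing)
    return None

# Hypothetical example: speeds along one transect row with their SNR values.
speed = [[10.0, 11.0, 12.0, 13.0, 14.0]]
snr = [[2.5, 1.0, 2.0, 1.0, 3.0]]
print(validated_speed(speed, snr, 0, 2))  # mean of 10, 12, 14 -> 12.0
```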
This method for measuring TTPG reduced errors that are related to a variable rate of early mouth opening and the difficulty in clearly identifying the point where the peak value is achieved in an asymptotic relationship. TTPG was measured as an indication of the rate of buccal expansion that is used by the fish to generate suction (Sanford and Wainwright, 2002). The x and y coordinates of the anterior margin of the eye were digitized and used to quantify ram speed throughout the strikes. Although ram speed usually varied during the course of the strike, 'ram speeds' reported in this study were measured at the time of 95% of maximum gape, the same time that flow speed was measured.

Fig. 3. Sample images illustrating a large height-to-length ratio of the ingested volume (A) and a trial with a small height/length ratio (B). Both images are taken at the time of 20% of peak gape and the white outlines indicate the volume of water that was captured during the feeding event. The fish in (A) was moving at 0 cm s–1 and the fish in (B) was moving at 17.5 cm s–1 at the time of peak gape.

Depending on the question being addressed, we either measured variables in the absolute frame of reference (AF; maximum suction speed and the shape of the ingested volume of water) or in the fish's frame of reference (FF; the degree of focusing). For the latter, we subtracted the ram speed of the fish from each speed vector in order to visualize the flow relative to the fish's mouth and body (Fig. 2).
To determine the degree of focusing (DF) of water flow that was directed towards the mouth, the streamlines in the fish's frame of reference were visualized using Tecplot, and we determined the most dorsal and ventral streamlines that entered the fish's mouth. At a distance anterior to the fish equal to the fish's maximum gape, we measured the maximum vertical distance between these outermost streamlines (Fig. 2) and then scaled this value by the maximum gape of the fish. The reciprocal of this value is defined as DF such that larger values of DF indicate a smaller vertical distance between streamlines and a flow pattern that is more focused in front of the fish. To determine the shape of the ingested volume of water, we visually tracked particles going into the mouth using IMAGE J and drew a boundary around the outer limit of particles that entered the mouth (Fig. 3). We measured the maximum height and the length of this boundary and converted the measurements to a ratio that described the aspect ratio of the ingested volume in lateral view. We used SYSTAT version 9 (SPSS Inc., Chicago, IL, USA) for all statistical analyses. All variables were first log10 transformed to normalize variances, and in each case this allowed the variables to meet the assumptions of the parametric procedures. We performed mixed-model multiple regressions with individual (categorical, random), TTPG (continuous), and ram speed (continuous) as the independent variables and all two-way and the three-way interaction terms, with the following dependent variables: (1) maximum fluid speed (AFS1/2 PG), (2) the degree of focusing (DF) of the water moving towards the mouth of the fish, and (3) the height-to-length ratio of the ingested volume of water. TTPG was included as a variable in the analyses because it strongly affects the suction speed in bluegill sunfish (Day et al., 2005).
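The DF metric defined above (scaled streamline separation, then the reciprocal) amounts to a single division; a minimal sketch with invented numbers:

```python
def degree_of_focusing(streamline_separation_cm, max_gape_cm):
    """DF as defined in the text: the vertical separation of the outermost
    streamlines entering the mouth, measured one maximum-gape distance in
    front of the fish, is scaled by maximum gape, and DF is the reciprocal
    of that scaled value. Larger DF = flow drawn from a narrower region."""
    return 1.0 / (streamline_separation_cm / max_gape_cm)

# Hypothetical example: outermost streamlines 6 cm apart for a 2 cm gape.
print(degree_of_focusing(6.0, 2.0))  # ~0.33
```

Note that when the streamline separation equals the mouth diameter at maximum gape, DF is exactly 1, consistent with the upper limit discussed later in the text.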
Each complete multiple regression model was first run and all variables with P>0.5 were removed from the model, and the reduced models were re-run in a final analysis. All P values from this second analysis are presented in Table 1. Unless stated otherwise, all results are presented as mean ± s.e.m.

Table 1. P values from multiple regressions performed separately on each variable

Variable                        Peak AFS1/2 PG   DF       Height/length ratio
Individual (2)                  -                -        -
TTPG (1)                        0.00009          -        0.001
Ram (1)                         0.10             0.0004   0.001
Individual × TTPG (2)           -                -        -
Individual × Ram (2)            -                -        -
TTPG × Ram (2)                  0.077            -        0.00002
Individual × TTPG × Ram (2)     -                -        -

TTPG, Time to peak gape; AFS1/2 PG, absolute fluid speed at a distance equal to ½ peak gape; DF, degree of focusing. Degrees of freedom for each factor in the model are noted in parentheses. Variables with P>0.5 in the initial regression were subsequently eliminated during the running of a reduced model.

A detailed description of the spatial and temporal patterns of suction flow in bluegill is presented elsewhere (Day et al., 2005). In almost all feeding events, regardless of ram speed, the bluegill decelerated during prey capture and stopped shortly after the time of peak gape (Fig. 4). As the mouth started opening, fluid movement was initiated and continued as long as the mouth of the fish was open (Fig. 4). The average time to peak gape was 32.0±2.1 ms, with a range of 12.0 to 58.0 ms, and the average ram speed at the time of maximum gape was 8.4±0.8 cm s–1 with a range of 0 to 24.6 cm s–1. Higher values of TTPG resulted in significantly lower values of AFS1/2 PG (Table 1; Fig. 5A). Even when as high as 25 cm s–1, ram speed did not significantly affect AFS1/2 PG (Table 1, Figs 4, 5B). Maximum fluid speed typically slightly preceded maximum gape or coincided with it (Fig. 4).
The average AFS1/2 PG for all trials (N=41) was 30.0±2.2 cm s–1 with a range of 14.1 to 67.7 cm s–1. Water was drawn into the mouth from every direction for bluegill feeding without ram (Fig. 2A). As ram speed increased, the water being drawn into the mouth was more focused in front of the mouth (higher DF values) at the time of maximum gape (Table 1; Figs 2B,C and 6). This effect was considerable; for example, there was more than a twofold increase in the degree of focusing with an increase in ram speed from 2 cm s–1 to 10 cm s–1 (Figs 2, 6).

Fig. 4. Representative sequences from a high ram case (A; RS/AFSaperture=19%) and a low ram case (B; RS/AFSaperture=3%) showing the similarity in timing of events. Note that maximum suction speed coincided with peak gape or slightly preceded it. RS, ram speed; AFS, fluid speed.

As ram speed increased, the height-to-length ratio of the ingested volume of water decreased significantly, indicating that a more elongated volume of fluid was captured (Table 1, Figs 3, 7). While we only quantified the dimensions of the water parcel that entered the mouth during the strike, water outside this boundary was also moved by the suction. Our study empirically quantified the effects of ram speed on the flow speed (AFS1/2 PG), the degree of focusing (DF), and the shape of the ingested volume of water during suction feeding in bluegill sunfish. An increase in ram speed did not affect AFS1/2 PG, but substantially increased DF and altered the shape of the ingested volume of water.

Fig. 5. (A) Time to peak gape (TTPG) vs AFS1/2 PG for all trials separated into ram speeds below and above 10 cm s–1.
(B) Residuals from the regression of maximum suction speed and TTPG vs ram speed. There was a significant effect of TTPG, but not ram speed, on maximum flow speed.

Swimming and suction performance

In contrast to our expectation, bluegill sunfish did not forfeit suction fluid speed when using forward swimming during feeding. Overall body closing speed was therefore enhanced by incorporating both suction and ram. For a bluegill using relatively high amounts of suction, the effect of moderate increases in ram speed is additive and results in increasing closing speed of the predator. This insensitivity of suction speed to moderate amounts of ram has not previously been recognized or predicted (Muller and Osse, 1984; van Leeuwen, 1984), and we suggest that it may be biologically significant for suction feeding predators, like bluegill and many other species, that feed on prey that have some capacity to escape suction flows. Thus, there was no apparent hydrodynamic trade-off between ram and peak fluid speed over the range of values observed in this study.

Fig. 6. Ram speed vs degree of focusing (DF) of water entering the mouth (log–log plot). Note that values of DF for ram speeds equal to 0 cm s–1 are not shown because they are equal to infinity. r2=0.81, P<0.05.

Fig. 7. The relationship between ram speed and the shape of the ingested volume of water. Only feedings using worm as prey are shown.
Points with lower values of height/length ratio have a more elongated, narrow shape of ingested water. r2=0.44, P<0.05.

In feeding bluegill the fluid speed generated during suction decays rapidly with distance from the mouth aperture such that AFS1/2 PG is approximately 25% of that at the mouth aperture (AFSaperture) (Fig. 8; Day et al., 2005). Using this proportionality, we can estimate AFSaperture using our measurements of AFS1/2 PG. We find that the ram speeds were approximately 0–20% of maximum AFSaperture. For a hypothetical AFSaperture of 100 cm s–1, a ram speed of 20 cm s–1 would reduce AFSaperture to 80 cm s–1 in the absolute frame of reference. This 20% decrement in fluid speed should then apply to all positions from the fish's mouth because the scaled shape of the relationship between flow speed and distance is uniform across the range of fluid speeds and ram speeds observed in our study (Day et al., 2005). If AFS1/2 PG is 25% of AFSaperture, a ram speed of 20 cm s–1 would reduce AFS1/2 PG by 5 cm s–1. However, body closing speed, or the speed that the predator and prey are moving towards each other, would actually be predicted to increase from 25 cm s–1 (only suction) to 40 cm s–1 (20 cm s–1 of ram + AFS1/2 PG of 20 cm s–1). Since the ratio RS/AFSaperture was commonly less than 10% (27 of 41), the expected decrease (<3 cm s–1) in AFS1/2 PG is not as great as subtracting the complete ram speed. Thus, like fluid speed, the effect of ram speed decays rapidly with distance away from the mouth. Nevertheless, we did not see any tendency for ram speed to reduce suction speed (Fig. 5).
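The numerical example above (a hypothetical AFSaperture of 100 cm s–1 with 20 cm s–1 of ram) can be worked through in a few lines; the 25% decay factor is taken from the text (Day et al., 2005), and everything else is illustrative:

```python
# Ratio AFS_1/2PG / AFS_aperture reported for bluegill (Day et al., 2005).
DECAY_AT_HALF_GAPE = 0.25

def body_closing_speed(afs_half_pg_cm_s, ram_cm_s):
    """Speed at which predator and prey close on each other: suction flow
    speed at one half peak gape plus the predator's ram speed."""
    return afs_half_pg_cm_s + ram_cm_s

afs_aperture, ram = 100.0, 20.0                   # hypothetical stationary case
afs_aperture_swimming = afs_aperture - ram        # 80 cm/s in the absolute frame
afs_half_pg_stationary = afs_aperture * DECAY_AT_HALF_GAPE          # 25 cm/s
afs_half_pg_swimming = afs_half_pg_stationary - ram * DECAY_AT_HALF_GAPE  # 20 cm/s

# Closing speed rises from 25 cm/s (suction only) to 40 cm/s with ram.
print(body_closing_speed(afs_half_pg_swimming, ram))  # 40.0
```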
Since mechanical arguments all suggest that ram speed should negatively affect suction speed, one possible explanation for the lack of an effect in our study is that bluegill modulated some unmeasured aspect of buccal expansion to compensate for the effects of ram. Given that time to peak gape statistically explains 87% of the variation in fluid speed among feeding events in bluegill sunfish (Day et al., 2005), other forms of kinematic modulation likely exist. For example, the timing and magnitude of opercular expansion is thought to be decoupled from buccal expansion in some species (Norton and Brainerd, 1993). Future studies that measure the movements of several anatomical features (e.g. operculum, hyoid, suspensorium) at the same time as fluid flow will provide further insight into the mechanism for modulating the speed of fluid entering the buccal cavity.

Fig. 8. The relationship between fluid speed AFS and distance from the mouth aperture (scaled to maximum mouth diameter) and the predicted effects of ram speed RS (RS/AFSaperture=20% in this case) on this relationship. The blue line represents AFS for a stationary fish and the red line AFS for a fish with a RS of 20 cm s–1. Note that RS has a much greater effect on AFSaperture than AFS1/2 PG. The length of the green arrow represents the magnitude of body closing speed of a stationary fish (25 cm s–1) and the length of the black arrow represents the magnitude of body closing speed of a fish with a RS of 20 cm s–1 (40 cm s–1). Note that overall body closing speed is increased with moderate levels of RS. The relationship between fluid speed and distance from the mouth is from Day et al. (2005).

Bluegill sunfish exhibit fine temporal control of their velocity prior to, and during, prey capture (T. E. Higham, B. Malas, B. C. Jayne and G. V. Lauder, manuscript submitted for publication). For example, in a laboratory setting bluegill decelerate to approximately 30% of their maximum approach speed at the time of prey capture, and then maximally decelerate until stopping (T. E. Higham, B. Malas, B. C. Jayne and G. V. Lauder, manuscript submitted for publication). One benefit of decelerating to 30% of the maximum approach speed could be to lower the ram speed-to-fluid speed ratio (RS/AFSaperture) to between 0–20%, since it is possible that larger ratios have a negative effect on suction performance. Two potential strategies that high performance suction feeders can employ to achieve a low RS/AFSaperture include decelerating prior to prey capture or maintaining a low ram speed throughout the predator–prey interaction. The question of whether predators that rely predominantly on suction always exhibit a low RS/AFSaperture ratio requires further investigation.

Degree of focusing

In our study, the degree of focusing (DF) during suction feeding increased with an increase in ram speed (Figs 2, 6). Focusing the flow of water enables the predator to draw water from in front of its mouth where the prey is positioned rather than drawing water from a wider space around the fish's head.
The largest gain in DF seems to occur when values of RS/AFSaperture are between 2 and 10%, whereas there is less of an increase in DF for values between 10 and 20%. At very high values of RS/AFSaperture, the degree of focusing would approach 1, where the distance between streamlines is equal to the diameter of the mouth at maximum gape. Thus, increases in DF might become increasingly subtle as ram speed approaches suction speed. By increasing DF, the accuracy required to capture a prey item also increases. Thus, swimming slowly, or slowing down prior to feeding, might enable the fish to maintain accuracy (more time for steering and positioning) and not forfeit suction performance. Decelerating prior to prey capture has been suggested as a way to increase accuracy during feeding (T. E. Higham, B. Malas, B. C. Jayne and G. V. Lauder, manuscript submitted for publication; Lauder and Drucker, 2004), but our observations provide a hydrodynamic basis for how braking can increase accuracy.

Shape of ingested water volume

When bluegill sunfish attacked the prey item at a high velocity, they ingested a more elongated volume of water (Fig. 3). In the example shown in Fig. 3, the length of the ingested water along the x-axis is 50% greater in the case with high ram (17.5 cm s–1) than in the case with no ram (0 cm s–1). The shape of the ingested fluid volume is likely to be an important factor in determining whether a prey item is captured. For example, by extending the distance from the mouth that water is ingested, a predator might be able to limit the ability of the prey to escape. With an increase in ram speed, the more elongate parcel of ingested water enables the bluegill to capture more fluid from the space in front of the mouth, corroborating several modeling studies (Weihs, 1980; van Leeuwen, 1984; de Jong et al., 1987; Drost et al., 1988). Drost et al.
(1988) used a model to predict the shape of the ingested volume of water by suction-feeding carp larvae swimming at 3.5 cm s–1. Although all of the water drawn into the mouth originated from in front of the carp larvae, the shape of the ingested volume of water is notably different from that of bluegill. For example, the maximum vertical height of the ingested volume in bluegill is typically centered (Fig. 3) while the volume ingested by the carp was predicted to be 'trumpet' shaped, with the maximum vertical height occurring distal to the central axis (fig. 5 in Drost et al., 1988). We never observed this shape in bluegill feeding events. In another modeling study, de Jong et al. (1987) determined that the shape of the ingested volume of water would become more elongated as swimming speed increased, and our results confirm this. Additionally, the overall shape of the ingested volume of water predicted in the study by de Jong et al. (1987) more closely resembled the shapes that we observed. Weihs (1980) developed a term that was the ratio of ingestion distance directly forward to that in the orthogonal direction, and predicted that it would increase with greater ram speed, and our results support this. Weihs (1980) also suggests that with increased swimming speed, a fish can minimize the amount of wasted ingested volume and thus maximize its efficiency. Another implication of narrowing and elongating the ingested volume of water is that the predator must increase attack accuracy as the region of influence in front of the predator will become more focused (Drost et al., 1988). Thus, it is likely that a trade-off exists between accuracy and efficiency in high-performance suction-feeding fish. The hydrodynamic interactions between suction and ram are complex and it seems that, depending on their morphology and ecology, fish can modulate their ram speed in order to achieve a balance between the several interrelated factors that result from changes in ram speed.
For example, bluegill sunfish have relatively small mouths and thus accuracy may be a relatively important factor. Moderate to low ram speeds increase their closing speed without forfeiting peak fluid speed, but a relatively low ram speed allows them to maintain accuracy (lower degree of focusing). Additionally, their efficiency increases with a moderate amount of ram speed by ingesting a narrower volume of water where the prey is located.

Andrew Carroll, Melissa Higham, and two anonymous reviewers provided valuable comments on earlier drafts of the manuscript. This research was supported by NSF grants IBN-0326968 and IOB-0444554 to Peter Wainwright and Angela Cheer.

Carroll, A. M., Wainwright, P. C., Huskey, S. H., Collar, D. C. and Turingan, R. G. (2004). Morphology predicts suction feeding performance in centrarchid fishes. J. Exp. Biol. 207, 3873-3881.
Day, S. W., Higham, T. E., Cheer, A. Y. and Wainwright, P. C. (2005). Spatial and temporal patterns of water flow generated by suction feeding bluegill sunfish (Lepomis macrochirus) resolved by Particle Image Velocimetry. J. Exp. Biol. 208, 2661-2671.
de Jong, M. C., Sparenberg, J. A. and de Vries, J. (1987). Some aspects of the hydrodynamics of suction feeding of fish. Fluid Dynam. Res. 2, 87-112.
Drost, M. R., Osse, J. W. M. and Muller, M. (1988). Prey capture by fish larvae, water flow patterns and the effect of escape movements of prey. Neth. J. Zool. 38, 23-45.
Drucker, E. G. and Lauder, G. V. (2002). Wake dynamics and locomotor function in fishes: interpreting evolutionary patterns in pectoral fin design. Integr. Comp. Biol. 42, 997-1008.
Drucker, E. G. and Lauder, G. V. (2003). Function of pectoral fins in rainbow trout: behavioral repertoire and hydrodynamic forces. J. Exp. Biol. 206, 813-826.
Ferry-Graham, L. A., Wainwright, P. C. and Lauder, G. V. (2003). Quantification of flow during suction feeding in bluegill sunfish. Zoology 106, 159-168.
Lauder, G. V. (1980).
The suction feeding mechanism in sunfishes (Lepomis): an experimental analysis. J. Exp. Biol. 88, 49-72.
Lauder, G. V. and Clark, B. D. (1984). Water flow patterns during prey capture by teleost fishes. J. Exp. Biol. 113, 143-150.
Lauder, G. V. and Drucker, E. G. (2004). Morphology and experimental hydrodynamics of fish fin control surfaces. IEEE J. Oceanic Eng. 29, 556-571.
Muller, M. and Osse, J. W. M. (1984). Hydrodynamics of suction feeding in fish. Trans. Zool. Soc. Lond. 37, 51-135.
Muller, M., Osse, J. W. M. and Verhagen, J. H. G. (1982). A quantitative hydrodynamical model of suction feeding in fish. J. Theor. Biol. 95, 49-79.
Norton, S. F. and Brainerd, E. L. (1993). Convergence in the feeding mechanics of ecomorphologically similar species in the Centrarchidae and Cichlidae. J. Exp. Biol. 176, 11-29.
Nyberg, D. W. (1971). Prey capture in the largemouth bass. Am. Mid. Nat. 86, 128-144.
Sanford, C. P. J. and Wainwright, P. C. (2002). Use of sonomicrometry demonstrates the link between prey capture kinematics and suction pressure in largemouth bass. J. Exp. Biol. 205, 3445-3457.
Sass, G. G. and Motta, P. J. (2002). The effects of satiation on strike mode and prey capture kinematics in the largemouth bass, Micropterus salmoides. Env. Biol. Fish. 65, 441-454.
Scarano, F. and Riethmuller, M. L. (1999). Iterative multigrid approach in PIV image processing with discrete window offset. Exp. Fluids 26, 513-523.
Summers, A. P., Darouian, K. F., Richmond, A. M. and Brainerd, E. L. (1998). Kinematics of aquatic and terrestrial prey capture in Terrapene carolina, with implications for the evolution of feeding in cryptodire turtles. J. Exp. Zool. 281, 280-287.
Svanback, R., Wainwright, P. C. and Ferry-Graham, L. A. (2002). Linking cranial kinematics, buccal pressure, and suction feeding performance in largemouth bass. Physiol. Biochem. Zool. 75, 532-543.
Van Damme, J. and Aerts, P. (1997).
Kinematics and functional morphology of aquatic feeding in Australian snake-necked turtles (Pleurodira; Chelodina). J. Morphol. 233, 113-125.
Van Leeuwen, J. L. (1984). A quantitative study of flow in prey capture by rainbow trout, with general consideration of the actinopterygian feeding mechanism. Trans. Zool. Soc. Lond. 37, 21-77.
Wainwright, P. C., Ferry-Graham, L. A., Waltzek, T. B., Carroll, A. M., Hulsey, C. D. and Grubich, J. R. (2001). Evaluating the use of ram and suction during prey capture by cichlid fishes. J. Exp. Biol. 204, 3039-3051.
Weihs, D. (1980). Hydrodynamics of suction feeding of fish in motion. J. Fish. Biol. 16, 425-433.
Willert, C. E. and Gharib, M. (1991). Digital particle image velocimetry. Exp. Fluids 10, 181-193.
https://simple.m.wikipedia.org/wiki/Electromagnetic_waves
Electromagnetic radiation is a form of energy emitted and absorbed by charged particles which shows wave-like behavior as it travels through space. Electromagnetic waves are waves that contain an electric field and a magnetic field and carry energy. They travel at the speed of light.[1] "UHF" means "ultra high frequency" and "VHF" means "very high frequency"; both were formerly used for television in the USA. Quantum mechanics developed from the study of electromagnetic waves. This field includes the study of both visible and invisible light. Visible light is the light one can see with normal eyesight, in the colours of the rainbow. Invisible light is light one cannot see with normal eyesight; it includes more energetic, higher-frequency waves such as ultraviolet, X-rays and gamma rays. Waves with longer wavelengths, such as infrared, microwaves and radio waves, are also studied in quantum mechanics. Some types of electromagnetic radiation, such as X-rays, are ionizing radiation and can be harmful to your body. Ultraviolet rays are near the violet end of the light spectrum and infrared rays are near the red end. Infrared rays are heat rays and ultraviolet rays cause sunburn. The various parts of the electromagnetic spectrum differ in wavelength, frequency and quantum energy. Sound waves are not electromagnetic waves but waves of pressure in air, water or another substance.

## Mathematical formulation

In physics, it is well known that the wave equation for a typical wave is

${\displaystyle \nabla ^{2}f={\frac {1}{c^{2}}}{\frac {\partial ^{2}f}{\partial t^{2}}}}$

The task now is to show that Maxwell's equations imply that the electric and magnetic fields satisfy this wave equation, i.e. that they propagate as electromagnetic radiation.
Recall that two of Maxwell's equations are given by

${\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}}$

${\displaystyle \nabla \times \mathbf {B} =\mu _{o}\mathbf {j} +\mu _{o}\epsilon _{o}{\frac {\partial \mathbf {E} }{\partial t}}}$

By taking the curl of the above equations and applying vector-calculus identities (in a region free of charges and currents, so that $\mathbf {j} =0$), one can prove the following equations

${\displaystyle \nabla ^{2}\mathbf {E} ={\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}}$

${\displaystyle \nabla ^{2}\mathbf {B} ={\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}}$

Note: the proof involves making the substitution

${\displaystyle c={\frac {1}{\sqrt {\mu _{o}\epsilon _{o}}}}}$

The equations above are exactly the wave equation with f replaced by E and B. This means that disturbances of the electric (E) and magnetic (B) fields propagate as waves.

## References

1. This is always defined as the speed of propagation in a vacuum. Speeds through various material substances vary.

• Hecht, Eugene (2001). Optics (4th ed.). Pearson Education. ISBN 0-8053-8566-5.
• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Electricity, Magnetism, Light, and Elementary Modern Physics (5th ed.). W. H. Freeman. ISBN 0-7167-0810-8.
• Reitz, John; Milford, Frederick and Christy, Robert (1992). Foundations of Electromagnetic Theory (4th ed.). Addison Wesley. ISBN 0-201-52624-7.
• Jackson, John David (1975). Classical Electrodynamics (2nd ed.). John Wiley & Sons. ISBN 0-471-43132-X.
• Allen Taflove and Susan C. Hagness (2005). Computational Electrodynamics: The Finite-Difference Time-Domain Method (3rd ed.).
Artech House Publishers. ISBN 1-58053-832-0.
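As a quick sanity check on the derivation above, one can verify symbolically that a plane wave with angular frequency ω = ck satisfies the (one-dimensional) wave equation. This sketch uses Python's sympy; the choice of library and the plane-wave ansatz are illustrative assumptions, not part of the article.

```python
import sympy as sp

# Symbols: position x, time t, wave speed c, wavenumber k (all positive reals)
x, t, c, k = sp.symbols('x t c k', positive=True)

# A plane-wave component of the field, with angular frequency omega = c*k
E = sp.sin(k * x - c * k * t)

# Wave equation in one dimension: d^2 E / dx^2 == (1/c^2) d^2 E / dt^2
lhs = sp.diff(E, x, 2)
rhs = sp.diff(E, t, 2) / c**2

assert sp.simplify(lhs - rhs) == 0  # the plane wave is a solution
```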
https://suredbits.com/settlement-of-dlcfd/
On December 17th, 2020, Roman and I (Nadav) entered into a special kind of Discreet Log Contract (DLC) called a Contract for Difference (CFD), or Discreet Log Contract for Difference (DLCFD) for short. Roman entered into the CFD with $22.80 worth of BTC, and exited with that same amount of USD even though the price of BTC moved. That is to say, if the price were to go down, then I would have to pay Roman to make up the difference so that he would exit with $22.80. If the price were to go up, on the other hand, as it did, then Roman would have to pay me to make up for the difference and exit with that same fixed USD amount.

From Roman's point of view, the CFD allows him to hold BTC without being exposed to bitcoin price volatility (assuming he is denominating his assets in USD). Whether the price moves up or down, the Contract for Difference covers the difference for him, leaving his USD value fixed. Roman's payout curve is shown above (with oracle outcomes denominated in BTC/USD). He entered with 100,000 satoshis when the price was $22,800/BTC, and you will notice that if the price does not move, then 100,000 satoshis will again be his payout. If, on the other hand, the price moves down, he is compensated in BTC terms, and if the price moves up, then he loses satoshis to stay at a fixed USD value. Another way to see this: if we were to re-plot the above payout curve, denominating payout in USD instead of sats, it would just be a flat line at $22.80 for any outcome.

From my point of view, however (the upside-down version of the payout curve above), this is a type of long position on BTC relative to USD, meaning that I am speculating that the BTC/USD price will rise, resulting in Roman paying me the difference. In some ways, the existence of DLCFDs allows for the decoupling of holding/using BTC from the price exposure that is usually associated with such activity.
So long as there are those interested in taking long positions on BTC/(Other Asset), then entities such as businesses can match with them to create CFDs which leave their funds fixed in (Other Asset) terms for any period of time. And when this entity wishes to pay someone in BTC, they can do so atomically with exiting the CFD so that from their perspective, they were hardly ever exposed to BTC price volatility while still being able to use BTC as a payment infrastructure. Now that we know what a Contract for Difference is, and some of its uses, let’s dive into some of the details as to how we can execute DLCFDs on Bitcoin. A DLC consists of a single on-chain funding transaction, and a set of off-chain transactions called Contract Execution Transactions (CETs). There is a single CET for every possible outcome and the outputs on the CET reflect each party’s payout for that outcome. Each CET spends the on-chain funding transaction’s 2-of-2 multisignature output and the oracle contract is enforced by making this spending contingent on an oracle signature of a specific message unique to that CET. To learn more about how DLCs work, check out our previous blog post series or the DLC work-in-progress specification which has additional resources. As we discussed in the blog post about our previous (volatility) DLC, this scheme supports an arbitrary number of outcomes (and hence arbitrary oracle contracts) in theory. However, in practice we need to compress our set of outcomes to a reasonably small number to accommodate communication and computational constraints. We discussed how continuous intervals of constant-valued outcomes can be compressed into negligible sizes by having oracles sign each binary digit (aka bit) of the outcome individually. 
This allows us to ignore the least significant digits to construct transactions that cover many outcomes. For example, if the last 10 bits can be ignored (as no matter their value the payout is the same), then we can construct a single transaction which covers 2^10 = 1024 outcomes at once. This means we only need to create, send, and store a single adaptor signature in place of 1024!

Our oracle, Skrik, committed to signing the BTC/USD price as 17 binary digits (supporting all values between $0/BTC and $131,071/BTC). In our volatility DLC, we used this compression trick to cover all cases that were not in our expected price range of $17k-$21k, all the way up to $131,071, with fewer than 20 CETs! However, you'll notice that in the case of a CFD, only one side has a constant-valued collar, and there are far more cases that must be covered. This is where our first new tool comes into play: rounding intervals.

During contract negotiation, Roman and I agreed that we were comfortable rounding our payouts to the nearest 100 satoshis for all outcomes, and to the nearest 1,000 satoshis for any outcome beyond $30,000 (noting that even if BTC is worth $130k, 1,000 satoshis is still only worth $1.30, a small portion of the locked-up assets). Our willingness to do this rounding enables us to use our CET compression algorithm everywhere, instead of only for collars. Specifically, it allows us to take advantage of any flatness (non-steepness) along the payout curve, because if the curve is relatively flat, then we can round large sections to the same nearest 1,000 satoshis and compress each such interval.

How effective is this scheme? Well, here are some numbers computed for our DLCFD:

• Without rounding: 86,718 CETs, which requires significant computing time as well as ~65MB of data over the wire.
• Rounding everything beyond $30k to the nearest 1,000 satoshis: 20,547 CETs, which requires ~15MB of data over the wire.
• Note that this is roughly the number of price points between $10k (where the collar is) and $30k. Essentially, the mostly flat part of the curve after $30k has been compressed into a negligible number of CETs!
• Rounding to the nearest 100 satoshis on $0 to $30k and the nearest 1,000 satoshis afterwards: 5,511 CETs, which requires only ~3.5MB of data over the wire!
• Note that even rounding to the nearest 100 satoshis (1-3 cents) on the steepest part of the payout curve resulted in a ~4x improvement in the number of CETs!

If you are interested in more details, rounding intervals are included in the numeric outcome proposal and implemented on an experimental branch of bitcoin-s.

The second novel tool that allowed us to execute this CFD was Antoine Riard's Non-Interactive Protocol proposal. When Roman and I initially broadcasted our funding transaction, I immediately realized that we had made a mistake. We had agreed on a fee rate of 50 sats/vbyte without checking to see if this was reasonable, and average fees at the time were well over 100 sats/vbyte! Rather than requiring that we reconnect to each other and set up a new contract, forcing us to re-sign thousands of CETs, I instead unilaterally bumped our fee using Child Pays for Parent (CPFP), which is simply the act of spending an output of a too-low-fee transaction with a high-fee child transaction so that the average fee rate becomes reasonable and both transactions become desirable to miners (they cannot mine the child alone, as it is only valid if the parent is confirmed as well). Thus, I broadcasted a CPFP transaction which caused our funding transaction to be confirmed, all without requiring any interaction between me and Roman. If I had been foolish again and my CPFP transaction did not pay enough in fees to cover its parent, then I could have used Replace by Fee (RBF), which I enabled on my unilateral transaction, to replace the child with a higher-fee child.
We also used the Non-Interactive Protocol's CPFP mechanism to confirm our CET which, again, used 50 sats/vbyte, so Roman broadcasted a child transaction to cover the fees. In the end, our oracle broadcasted signatures corresponding to the outcome $23,427/BTC, and you will notice that in our CET, Roman's payout is 0.000974 BTC, corresponding to (23,427 * 0.000974) = $22.82, which is within our rounding agreement (of 100 sats = $0.023) of his initial dollar amount of $22.80!

As was mentioned in our last post, in the near future there will be support for threshold oracle schemes, such as using 2 of 3 oracles to execute a CFD. I am currently working on a proposal for how this should be done, which will be released on the specification repository very soon! A more long-term, but still very important, additional improvement is support for DLCFDs on the Lightning Network. On-chain CFDs are fully supported by our current code, but they only really make economic sense today for larger amounts due to high fees. Not only will Lightning DLCFDs allow contract execution to be virtually fee-less with near-instant confirmation, but Lightning DLCFDs can even be used to enable trustless versions of Rainbow-esque synthetic assets in channels which are liquid (i.e. spendable)! Stay tuned for more updates and progress on Discreet Log Contract development and other cool stuff we're working on at Suredbits!
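The digit-decomposition trick described above — covering a contiguous run of outcomes with a handful of binary prefixes, each needing only one CET — can be sketched as follows. This is an illustrative Python version under my own naming (`prefix_decompose`), not the bitcoin-s implementation:

```python
def prefix_decompose(start, end, n_bits):
    """Cover the integer interval [start, end] (inclusive) of an n_bits-digit
    outcome space with binary prefixes. Each (value, length) pair stands for
    all outcomes whose top `length` bits equal `value`, so a single adaptor
    signature can cover that whole block of outcomes."""
    prefixes = []
    while start <= end:
        # Grow the block while `start` stays aligned and the block still fits.
        k = 0
        while k < n_bits:
            size = 1 << (k + 1)
            if start % size != 0 or start + size - 1 > end:
                break
            k += 1
        prefixes.append((start >> k, n_bits - k))  # (prefix value, prefix bit-length)
        start += 1 << k
    return prefixes

# Ignoring the last 10 of 17 digits covers 2^10 = 1024 outcomes with one CET:
assert prefix_decompose(0, 1023, 17) == [(0, 7)]
```

Flat (rounded) stretches of the payout curve become long constant-payout intervals, which this decomposition collapses into very few prefixes — hence the CET counts quoted above.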
https://proofwiki.org/wiki/Symbols:Greek/Omega
# Symbols:Greek/Omega

## Omega

The $24$th and final letter of the Greek alphabet.

Minuscule: $\omega$
Majuscule: $\Omega$

The $\LaTeX$ code for $$\omega$$ is \omega .
The $\LaTeX$ code for $$\Omega$$ is \Omega .

### Sample Space $\Omega$

Let $\mathcal E$ be an experiment. The sample space of $\mathcal E$ is usually denoted $\Omega$ (Greek capital omega), and is defined as the set of all possible outcomes of $\mathcal E$.

### Elementary Event $\omega$

Let $\mathcal E$ be an experiment. An elementary event of $\mathcal E$, often denoted $\omega$ (Greek lowercase omega), is one of the elements of the sample space $\Omega$ (Greek capital omega) of $\mathcal E$.

### Order Type of Natural Numbers $\omega$

The order type of $\left({\N, \le}\right)$ is denoted $\omega$ (omega).

### Relation $\omega$

Used in some sources, for example 1982: P.M. Cohn: Algebra Volume 1 (2nd ed.), to denote a general relation.
http://physics.stackexchange.com/questions/24827/if-space-is-being-doubled-how-fast-is-it-doubling
# If space is being doubled, how fast is it doubling? [duplicate]

Possible Duplicate: How long does it take for expanding space to double in size

According to the standard concordance model, I heard that it's likely that space is doubled after 11.4 billion years. Am I right? Then, how fast is this happening? (By speed, I mean distance/time.) Also, is the doubling of space based on observational data? Thanks.

If the doubling time is constant — and it will only be constant once the Universe is totally dominated by the cosmological constant (today the CC is just 73% of the energy density, so we're close but not yet there) — then it means that the distance between a pair of galaxies goes like $L=L_0 \cdot 2^{t/t_0}$, where $t$ is the time elapsed since the moment when the distance was $L_0$ and $t_0$ is those 11.4 billion years. The speed between them goes like $v_0\cdot 2^{t/t_0}$, exponentially growing, too.

You can't quote any "universal value of the speed" here. According to Hubble's law, the speed is proportional to the distance between the galaxies. This distance will keep on exponentially increasing with time, which is why the speed will be doing the same thing. The coefficient $H$ of the Hubble law $v=Hd$ is variable, but as the Universe becomes increasingly dominated by the cosmological constant, $H$ will approach a constant (proportional to the square root of the cosmological constant), too.

Normal people would write the exponents as powers of $e$, not $2$, by rewriting $2^x$ as $\exp(x\cdot \ln 2)$, where $\ln 2=0.693$. This converts the doubling time of $11.4$ billion years to the $e$-folding time of $11.4/0.693 \approx 16.4$ billion years, after which the distances grow by the factor of $e=2.718$. So the power of two would be replaced by $\exp(t/t_1)$, where $t_1$ is $16.4$ billion years.
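The conversion between the doubling time and the $e$-folding time quoted in the answer is just a factor of $\ln 2$, which is easy to check numerically:

```python
import math

t_double = 11.4  # doubling time in billion years, from the question
t_efold = t_double / math.log(2)  # e-folding time = doubling time / ln 2
print(f"{t_efold:.1f}")  # prints "16.4"
```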
https://qiskit.org/documentation/locale/ta_IN/stubs/qiskit.transpiler.passes.CheckGateDirection.html
# CheckGateDirection

class CheckGateDirection(*args, **kwargs)[source]

Bases: qiskit.transpiler.basepasses.AnalysisPass

Check if the two-qubit gates follow the right direction with respect to the coupling map.

CheckGateDirection initializer.

Parameters

coupling_map (CouplingMap) -- Directed graph representing a coupling map.

Methods

name -- Return the name of the pass.
run -- Run the CheckGateDirection pass on dag.

Attributes

is_analysis_pass -- Check if the pass is an analysis pass. If the pass is an AnalysisPass, that means that the pass can analyze the DAG and write the results of that analysis in the property set. Modifications on the DAG are not allowed by this kind of pass.

is_transformation_pass -- Check if the pass is a transformation pass. If the pass is a TransformationPass, that means that the pass can manipulate the DAG, but cannot modify the property set (but it can be read).
https://gmatclub.com/forum/done-710-94-45q-78-42v-33050.html?fl=similar
Done! 710 (94%), 45Q (78%), 42V (95%) : Share GMAT Experience

Intern
Joined: 24 Jul 2006

Done! 710 (94%), 45Q (78%), 42V (95%) [#permalink] 06 Aug 2006, 23:49

Thought I would take the time to share my GMAT experience because I learned a lot from reading similar posts from others... So I finally took the test Friday afternoon, and as mentioned above, scored a 710, 45Q, 42V. Very content with the score, though the Quant was a little bit of a mystery (to be explained).

My experience: I am taking a new job that won't afford me a lot of time to prepare for the GMAT at a later date, so even though I am not sure that I even want to go to business school, I decided that I needed to take this thing now because I would never really have another chance. So I started studying about 3 weeks ago, and originally scheduled my exam for Monday the 24th, which gave me about 10 days of study time. I wasn't working, so I had a lot of time to focus on only the GMAT. Started out just going through every quiz in the Kaplan online classroom to learn / refresh everything. Took about 3 days to get through all the quizzes, so I then started to take CATs.
During this phase I had some problems come up with my move from NY to the west coast that needed immediate attention. So much stuff came up that I changed my test date, pushing it back to the 27th, a Thursday, because I didn't feel that I had adequate time, and like I mentioned, this was a one-shot deal for me. So after I take care of the moving issues, I re-focus and start taking CATs. First CAT was online Kaplan 4; got a 750 (3 days before test date). I was feeling great. Thought the studying was going great, etc. So I loosened up a bit on the intensity. Next day, take online Kaplan 3, get a 670. Felt very defeated and worried. So I re-focus my efforts, study hard, take another CAT, Kaplan CAT 2, get a 630!! So at this point, I am really worried, evaluating all my options, etc. That is when I discovered this forum as I was searching for answers to all of my annoying questions about the indicative value of CATs, advice, etc. (Thanks, BTW, the forum is REALLY helpful.)

So then I realize that I am in trouble because the "HARD" Kaplans are on the CD, and I was getting 600's on the online tests, which are pretty accurate (I guess). Anyway, take another CAT, and get 630 again!!! So at this point I call Pearson to inquire about re-scheduling for the following week and learn that I will forfeit my $250 if I do so. Really bummed, but thought that I might have to do it. So after reading the forum's advice, I take GMATPrep 2 to really get an idea of where I stand. This is the night before the exam after about 7 days of all-day, all-night studying. I decide that if I get 700+ on the GMATPrep, I will go; if not, I would reschedule for next week and re-double my efforts. So I get a 670 (after missing the last 8 verbals due to time pressure!!!), so I suck it up, pay the $250 and reschedule for the following Friday in California. Only kicker is that I had to drive to California from Colorado on Saturday (an 18 hour drive).
So I spend Friday relaxing, trying to ease my mind, then all day Saturday and Sunday driving to CA. Finally get here and try to start studying Sunday night, but realize that I don't have internet access, so I just read some of the Quick Reference guide. So Monday, feeling a little behind, I have to meet my movers, then hire some local guy off Craigslist to help me move, which took basically all day. But then after that, I tried to really hit it hard. However, I had the worst time ever finding quiet internet access. So I try to find a good study spot, and it just never materialized. Have a couple buddies, but their places are all really loud and they all stay up late. Tried Starbucks, but not a member of T-Mobile. Public libraries have horrible hours, and at the two universities in the city, both of their libraries were closed. So I then waste Monday night and all day Tuesday trying to find a spot.

So finally, I just go knock on a neighbor's door and ask if I can squat off their internet for a few days while I study. He says no problem, gives me the password, but there is one problem: it won't reach into my apartment. Distraught, thinking I am all out of options, I set up two of my moving boxes in the hallway outside his room and start taking CATs on my laptop, thinking it would be temporary. My new neighbors would come and go and give me the weirdest looks until I gave them my 5-minute explanation of why I was sitting on moving boxes in the hallway. Anyway, the temporary boxes became a permanent fixture for the next 3 days. I bought some MGMATs and started taking them. Got a 620, 670, then 730 the night before the test. I also bought the Challenges, but just couldn't get used to the notation used (like exponents and square roots) and didn't want to throw myself off this late in the game, so I only took that first one.

So in short, on test day, felt great about everything. AWA was fine, then comes the Quant.
So I had gotten into a pretty good routine on the Quant sections of the CATs. Felt very comfortable with the tricks and the very tough math problems. However, I NEVER saw a tough math problem. In fact, question 36 out of 37 was "here are 5 numbers, what is the median?" My head just hit the table. I was sure I had flunked. So I take my break, sure that it was all for naught at this point, but decided, you know what, I am going to kick ass on the verbal just so I am more confident when I have to re-take it in a few months. That was how sure I was that I had bombed the math. So, anyway, finish the test, briefly thought about canceling the score, decided that would be weak, and got my result. Overall, very happy, but the math is still a mystery.

Lessons learned:

Be careful taking tests around other big events in your life. You may think you will have nothing but time, but it rarely works out.

Kaplan reading comps are the worst. Stupidly tough, and almost every question has two answers that I could defend in court. The real test was NOTHING like that. Towards the end of my test prep, those reading comps on the Kaplan CATs really scared me. But even scoring as well as I did on the real test (verbal), I noticed the reading comps getting tougher, but they still made sense and there was always a clear answer.

Very, very helpful to use at least two different sets of materials from test prep companies. I gained so much incremental knowledge during those last 3 days by taking and really studying the answer keys to all the MGMAT CATs. Each company has a different approach to solving problems, and each has a different focus.

If you made it all the way to this point, I commend you, and I hope you enjoyed my story. Good luck to everyone, and thank you to all of those that helped along the way. Very helpful forum!
Replies:

- Congratulations.....
- congrats... GMAT and moving... ouch... definite recipe for disaster for me, but I am glad you could handle it!!! good luck with a new job!!!
- CONGRATS!!! I hope you are still celebrating!
- Nicely written test brief! and congrats for getting into the 700 club. Guess Rhyma hasn't posted his victory trophy yet ;)
- Wow, that's some story. It's amazing that you were able to come away with such a great score. Congratulations.
- Great Stuff and Wonderful score.. Can u please tell more about how you managed the Verbal Section...
- good job and congrats! I second that... Kaplan's reading comp is the worst!
https://rdrr.io/cran/homtest/man/HOMTESTS.html
# HOMTESTS: Homogeneity tests

In homtest: Homogeneity tests for Regional Frequency Analysis

## Description

Homogeneity tests for Regional Frequency Analysis.

## Usage

```r
ADbootstrap.test (x, cod, Nsim=500, index=2)
HW.tests (x, cod, Nsim=500)
DK.test (x, cod)
```

## Arguments

- `x`: vector representing data from many samples defined with `cod`
- `cod`: array that defines the data subdivision among sites
- `Nsim`: number of regions simulated with the bootstrap of the original region
- `index`: if index=1, samples are divided by their average value; if index=2 (default), samples are divided by their median value

## Details

The Hosking and Wallis heterogeneity measures

The idea underlying the Hosking and Wallis (1993) heterogeneity statistics is to measure the sample variability of the L-moment ratios and compare it to the variation that would be expected in a homogeneous region. The latter is estimated through repeated simulations of homogeneous regions with samples drawn from a four-parameter kappa distribution (see e.g., Hosking and Wallis, 1997, pp. 202-204). In more detail, the steps are the following: for each of the k samples belonging to the region under analysis, find the sample L-moment ratios (see Hosking and Wallis, 1997) pertaining to the i-th site. These are the L-coefficient of variation (L-CV),

t^(i) = (1/ni ∑[j from 1 to ni](2(j-1)/(ni-1) - 1) Y(i,j)) / (1/ni ∑[j from 1 to ni] Y(i,j))

the coefficient of L-skewness,

t3^(i) = (1/ni ∑[j from 1 to ni](6(j-1)(j-2)/(ni-1)/(ni-2) - 6(j-1)/(ni-1) + 1) Y(i,j)) / (1/ni ∑[j from 1 to ni](2(j-1)/(ni-1) - 1) Y(i,j))

and the coefficient of L-kurtosis,

t4^(i) = (1/ni ∑[j from 1 to ni](20(j-1)(j-2)(j-3)/(ni-1)/(ni-2)/(ni-3) - 30(j-1)(j-2)/(ni-1)/(ni-2) + 12(j-1)/(ni-1) - 1) Y(i,j)) / (1/ni ∑[j from 1 to ni](2(j-1)/(ni-1) - 1) Y(i,j))

Note that the L-moment ratios are not affected by the normalization by the index value, i.e., it is the same to use X(i,j) or Y(i,j) in these equations.
Define the regional averaged L-CV, L-skewness and L-kurtosis coefficients,

t^R = (∑[i from 1 to k] ni t^(i)) / (∑[i from 1 to k] ni)

t3^R = (∑[i from 1 to k] ni t3^(i)) / (∑[i from 1 to k] ni)

t4^R = (∑[i from 1 to k] ni t4^(i)) / (∑[i from 1 to k] ni)

and compute the statistic

V = {∑[i from 1 to k] ni (t^(i) - t^R)^2 / ∑[i from 1 to k] ni}^(1/2)

Fit the parameters of a four-parameter kappa distribution to the regional averaged L-moment ratios t^R, t3^R and t4^R, and then generate a large number Nsim of realizations of sets of k samples. The i-th site sample in each set has a kappa distribution as its parent and record length equal to ni. For each simulated homogeneous set, calculate the statistic V, obtaining Nsim values. From this vector of V values, determine the mean μV and standard deviation σV that relate to the hypothesis of homogeneity (actually, under the composite hypothesis of homogeneity and kappa parent distribution). A heterogeneity measure, which is called here HW1, is finally found as

θ(HW1) = (V - μV)/σV

θ(HW1) can be approximated by a normal distribution with zero mean and unit variance: following Hosking and Wallis (1997), the region under analysis can therefore be regarded as 'acceptably homogeneous' if θ(HW1) < 1, 'possibly heterogeneous' if 1 ≤ θ(HW1) < 2, and 'definitely heterogeneous' if θ(HW1) ≥ 2. Hosking and Wallis (1997) suggest that these limits should be treated as useful guidelines. Even if the θ(HW1) statistic is constructed like a significance test, significance levels obtained from such a test would in fact be accurate only under special assumptions: independence of the data both serially and between sites, and a true regional distribution that is kappa.
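The L-CV ratio and the V statistic above translate directly into code. The following is a minimal, dependency-free Python sketch (the function names are mine, not the package's — the reference implementation, HW.tests, is in R; the weighted order-statistics form assumes the sample is sorted ascending):

```python
import math

def l_cv(sample):
    """Sample L-CV t^(i): ratio of the weighted sum over the plain sum.
    The order-statistics weights require ascending order, so we sort first."""
    y = sorted(sample)
    n = len(y)
    num = sum((2.0 * (j - 1) / (n - 1) - 1.0) * y[j - 1] for j in range(1, n + 1))
    den = sum(y)
    return num / den  # the 1/ni factors cancel in the ratio

def hw_v(samples):
    """V = sqrt( sum_i n_i (t^(i) - t^R)^2 / sum_i n_i ), the weighted
    dispersion of the site L-CVs around the regional average t^R."""
    sizes = [len(s) for s in samples]
    t = [l_cv(s) for s in samples]
    t_r = sum(n * ti for n, ti in zip(sizes, t)) / sum(sizes)
    return math.sqrt(sum(n * (ti - t_r) ** 2 for n, ti in zip(sizes, t)) / sum(sizes))

print(l_cv([1, 2, 3]))               # 1/3 for this tiny sample
print(hw_v([[1, 2, 3], [2, 4, 6]]))  # 0.0: both sites have the same L-CV
```

Note that V is scale-free: multiplying a site's data by a constant (as the index-value normalization does) leaves its L-CV unchanged, which is why the second call above returns zero.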
Hosking and Wallis (1993) also give an alternative heterogeneity measure (which we call HW2), in which V is replaced by

V2 = ∑[i from 1 to k] ni {(t^(i) - t^R)^2 + (t3^(i) - t3^R)^2}^(1/2) / ∑[i from 1 to k] ni

The test statistic in this case becomes

θ(HW2) = (V2 - μ(V2)) / σ(V2)

with acceptability limits similar to those of the HW1 statistic. Hosking and Wallis (1997) judge θ(HW2) to be inferior to θ(HW1) and say that it rarely yields values larger than 2 even for grossly heterogeneous regions.

The bootstrap Anderson-Darling test

A test that does not make any assumption about the parent distribution is the Anderson-Darling (AD) rank test (Scholz and Stephens, 1987). The AD test is the generalization of the classical Anderson-Darling goodness-of-fit test (e.g., D'Agostino and Stephens, 1986), and it is used to test the hypothesis that k independent samples belong to the same population without specifying their common distribution function. The test is based on the comparison between local and regional empirical distribution functions. The empirical distribution function, or sample distribution function, is defined by F(x) = j/η, x(j) ≤ x < x(j+1), where η is the size of the sample and x(j) are the order statistics, i.e., the observations arranged in ascending order. Denote the empirical distribution function of the i-th sample (local) by \hatFi(x), and that of the pooled sample of all N = n1 + ... + nk observations (regional) by HN(x). The k-sample Anderson-Darling test statistic is then defined as

θ(AD) = ∑[i from 1 to k] ni ∫[all x] ((\hatFi(x) - HN(x))^2) / (HN(x) (1 - HN(x))) dHN(x)

If the pooled ordered sample is Z1 < ... < ZN, the computational formula to evaluate θ(AD) is

θ(AD) = 1/N ∑[i from 1 to k] 1/ni ∑[j from 1 to N-1] ((N M(ij) - j ni)^2) / (j(N-j))

where M(ij) is the number of observations in the i-th sample that are not greater than Zj.
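The computational formula for θ(AD) maps almost line-for-line onto code. Below is an illustrative Python sketch (my own naming; it assumes continuous data with no ties in the pooled sample, the setting in which the formula above is stated):

```python
def ad_k_sample(samples):
    """k-sample Anderson-Darling statistic via the computational formula:
    theta_AD = (1/N) sum_i (1/n_i) sum_{j=1}^{N-1} (N*M_ij - j*n_i)^2 / (j*(N-j)),
    where M_ij is the number of observations of sample i not greater than Z_j."""
    pooled = sorted(x for s in samples for x in s)
    big_n = len(pooled)
    theta = 0.0
    for s in samples:
        n_i = len(s)
        inner = 0.0
        for j in range(1, big_n):          # j = 1, ..., N-1
            z_j = pooled[j - 1]
            m_ij = sum(1 for x in s if x <= z_j)
            inner += (big_n * m_ij - j * n_i) ** 2 / (j * (big_n - j))
        theta += inner / n_i
    return theta / big_n

# Tiny hand-checkable case: samples [1] and [2] give theta_AD = 1.
print(ad_k_sample([[1], [2]]))   # 1.0
```

For real applications, `scipy.stats.anderson_ksamp` implements a (midrank) variant of the same Scholz-Stephens statistic; the sketch above is only meant to make the formula concrete.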
The homogeneity test can be carried out by comparing the obtained θ(AD) value to the tabulated percentage points reported by Scholz and Stephens (1987) for different significance levels. The statistic θ(AD) depends on the sample values only through their ranks. This guarantees that the test statistic remains unchanged when the samples undergo monotonic transformations, an important stability property not possessed by the HW heterogeneity measures. However, problems arise in applying this test in a common index-value procedure. In fact, the index-value procedure corresponds to dividing each site sample by a different value, thus modifying the ranks in the pooled sample. In particular, this has the effect of making the local empirical distribution functions much more similar to one another, giving an impression of homogeneity even when the samples are highly heterogeneous. The effect is analogous to that encountered when applying goodness-of-fit tests to distributions whose parameters are estimated from the same sample used for the test (e.g., D'Agostino and Stephens, 1986; Laio, 2004). In both cases, the percentage points for the test should be suitably redetermined. This can be done with a nonparametric bootstrap approach with the following steps: build up the pooled sample S of the observed non-dimensional data; sample with replacement from S and generate k artificial local samples of sizes n1, ..., nk; divide each sample by its index value, and calculate θ^(1)(AD); repeat the procedure Nsim times, obtaining a sample of values θ^(j)(AD), j = 1, ..., Nsim, whose empirical distribution function can be used as an approximation of G(H0)(θ(AD)), the distribution of θ(AD) under the null hypothesis of homogeneity. The acceptance limits for the test, corresponding to any significance level α, are then easily determined as the quantiles of G(H0)(θ(AD)) corresponding to a probability (1-α).
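The bootstrap steps above can be sketched generically. The Python fragment below is a simplified illustration (it resamples the pooled data but omits the re-division of each artificial sample by its own index value; the full procedure is what homtest's ADbootstrap.test does in R). The `statistic` argument is any function of a list of site samples, for instance an implementation of θ(AD):

```python
import random

def bootstrap_pvalue(samples, statistic, nsim=500, seed=42):
    """Approximate P(stat >= observed) under the null of homogeneity by
    redrawing each site sample, with replacement, from the pooled data."""
    rng = random.Random(seed)
    pooled = [x for s in samples for x in s]
    sizes = [len(s) for s in samples]
    observed = statistic(samples)
    exceed = 0
    for _ in range(nsim):
        fake = [[rng.choice(pooled) for _ in range(n)] for n in sizes]
        if statistic(fake) >= observed:
            exceed += 1
    return exceed / nsim
```

The empirical distribution of the simulated statistic values plays the role of G(H0)(θ(AD)); acceptance limits at level α are its (1-α) quantiles.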
We will call the test obtained with the above procedure the bootstrap Anderson-Darling test, hereafter referred to as AD.

Durbin and Knott test

The last considered homogeneity test derives from a goodness-of-fit statistic originally proposed by Durbin and Knott (1971). The test is formulated to measure discrepancies in the dispersion of the samples, without accounting for the possible presence of discrepancies in the mean or skewness of the data. In this respect, the test is similar to the HW1 test, while it is analogous to the AD test in that it is a rank test. The original goodness-of-fit test is very simple: suppose we have a sample Xi, i = 1, ..., n, with hypothetical distribution F(x); under the null hypothesis the random variable F(Xi) has a uniform distribution on the (0,1) interval, and the statistic

D = √(2/n) ∑[i from 1 to n] cos(2π F(Xi))

is approximately normally distributed with mean 0 and variance 1 (Durbin and Knott, 1971). (The √(2/n) factor normalizes the sum, since each term cos(2π F(Xi)) has mean 0 and variance 1/2.) D serves the purpose of detecting discrepancy in data dispersion: if the variance of Xi is greater than that of the hypothetical distribution F(x), D is significantly greater than 0, while D is significantly below 0 in the reverse case. Differences between the mean (or the median) of Xi and F(x) are instead not detected by D, which guarantees that the normalization by the index value does not affect the test. The extension of the Durbin and Knott (DK) statistic to homogeneity testing is straightforward: we substitute the empirical distribution function obtained with the pooled observed data, HN(x), for F(x) in D, obtaining at each site a statistic

Di = √(2/ni) ∑[j from 1 to ni] cos(2π HN(X(i,j)))

which is approximately standard normal under the hypothesis of homogeneity. The statistic

θ(DK) = ∑[i from 1 to k] Di^2

then has a chi-squared distribution with k-1 degrees of freedom, which allows one to determine the acceptability limits for the test, corresponding to any significance level α.
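A compact Python sketch of the DK statistic (illustrative only — DK.test in the package is the reference R implementation; as with the other rank statistics, ties-free continuous data are assumed):

```python
import math

def dk_statistic(samples):
    """theta_DK = sum_i D_i^2, with D_i = sqrt(2/n_i) * sum_j cos(2*pi*H_N(x)),
    where H_N is the empirical CDF of the pooled sample. Under homogeneity,
    theta_DK is approximately chi-squared with k-1 degrees of freedom."""
    pooled = sorted(x for s in samples for x in s)
    big_n = len(pooled)

    def h_n(x):  # empirical CDF of the pooled sample
        return sum(1 for z in pooled if z <= x) / big_n

    theta = 0.0
    for s in samples:
        d_i = math.sqrt(2.0 / len(s)) * sum(math.cos(2.0 * math.pi * h_n(x)) for x in s)
        theta += d_i * d_i
    return theta

# Hand check: samples [1,2] and [3,4] give pooled CDF values 1/4, 1/2, 3/4, 1,
# whose cosines are 0, -1, 0, 1, so D_1 = -1, D_2 = 1 and theta_DK = 2.
print(dk_statistic([[1, 2], [3, 4]]))   # ≈ 2.0
```

The obtained value would then be compared against a chi-squared quantile with k-1 degrees of freedom (here k = 2).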
Comparison among tests

The comparison in Viglione et al. (2007) shows that the Hosking and Wallis heterogeneity measure HW1 (based only on L-CV) is preferable when skewness is low, while the bootstrap Anderson-Darling test should be used for more skewed regions. As for HW2, the Hosking and Wallis heterogeneity measure based on L-CV and L-CA, it is shown once more how much it lacks power. Our suggestion is to guide the choice of test by a compromise between the power and the Type I error of the HW1 and AD tests. The L-moment space is divided into two regions: if the t3^R coefficient for the region under analysis is lower than 0.23, we propose to use the Hosking and Wallis heterogeneity measure HW1; if t3^R > 0.23, the bootstrap Anderson-Darling test is preferable.

## Value

ADbootstrap.test and DK.test each give the test statistic and its distribution value P. If P is, for example, 0.92, the samples should not be considered heterogeneous at significance levels smaller than 8%. HW.tests gives the two Hosking and Wallis heterogeneity measures HW1 and HW2; following Hosking and Wallis (1997), the region under analysis can therefore be regarded as 'acceptably homogeneous' if HW < 1, 'possibly heterogeneous' if 1 ≤ HW < 2, and 'definitely heterogeneous' if HW ≥ 2.

## Author(s)

Alberto Viglione, e-mail: alviglio@tiscali.it.

## References

D'Agostino R., Stephens M. (1986) Goodness-of-Fit Techniques, chapter Tests based on EDF statistics. Marcel Dekker, New York.

Durbin J., Knott M. (1971) Components of Cramer-von Mises statistics. London School of Economics and Political Science, pp. 290-307.

Hosking J., Wallis J. (1993) Some statistics useful in regional frequency analysis. Water Resources Research, 29 (2), pp. 271-281.

Hosking J.R.M., Wallis J.R. (1997) Regional Frequency Analysis: an approach based on L-moments. Cambridge University Press, Cambridge, UK.
Laio F. (2004) Cramer-von Mises and Anderson-Darling goodness of fit tests for extreme value distributions with unknown parameters. Water Resources Research, 40, W09308, doi:10.1029/2004WR003204.

Scholz F., Stephens M. (1987) K-sample Anderson-Darling tests. Journal of the American Statistical Association, 82 (399), pp. 918-924.

Viglione A., Laio F., Claps P. (2007) A comparison of homogeneity tests for regional frequency analysis. Water Resources Research, 43, W03428, doi:10.1029/2006WR005095.

Viglione A. (2007) Metodi statistici non-supervised per la stima di grandezze idrologiche in siti non strumentati. PhD thesis, Politecnico di Torino.

## See Also

KAPPA, Lmoments.

## Examples

```r
data(annualflows)
annualflows[1:10,]
summary(annualflows)
x <- annualflows["dato"][,]
cod <- annualflows["cod"][,]
split(x,cod)
#ADbootstrap.test(x,cod,Nsim=100)   # it takes some time
#HW.tests(x,cod)                    # it takes some time
DK.test(x,cod)

fac <- factor(annualflows["cod"][,],levels=c(34:38))
x2 <- annualflows[!is.na(fac),"dato"]
cod2 <- annualflows[!is.na(fac),"cod"]
split(x2,cod2)
sapply(split(x2,cod2),Lmoments)
regionalLmoments(x2,cod2)
ADbootstrap.test(x2,cod2)
ADbootstrap.test(x2,cod2,index=1)
HW.tests(x2,cod2)
DK.test(x2,cod2)
```

### Example output

```
   cod anno dato
1    1 1956 1494
2    1 1957 1309
3    1 1958 1699
4    1 1959 1467
5    1 1960 1918
6    1 1961 1469
7    1 1962 1267
8    1 1963 1523
9    1 1964 1338
10   1 1965 1438

      cod            anno           dato
 Min.   : 1.0   Min.   :1921   Min.   : 172.0
 1st Qu.:13.0   1st Qu.:1940   1st Qu.: 725.2
 Median :22.0   Median :1951   Median : 981.0
 Mean   :23.7   Mean   :1951   Mean   :1041.4
 3rd Qu.:34.0   3rd Qu.:1960   3rd Qu.:1308.8
 Max.   :49.0   Max.   :1985   Max.   :3045.0

[split(x,cod): the per-site listings of all 49 annual-flow samples are omitted here]

        Ak         P
  307.7723    1.0000

[split(x2,cod2): the listings of the five samples for sites 34-38 are omitted here]

               34          35          36           37          38
l1   1065.7241379 827.7600000 965.4545455 881.68750000 512.0833333
l2    191.9729064 151.6000000 206.5800866 174.29583333 126.0681818
lcv     0.1801338   0.1831449   0.2139718   0.19768436   0.2461868
lca     0.1246570   0.1913101   0.3252284  -0.01093174   0.1775494
lkur    0.2105167   0.1536444   0.2173088   0.01341899   0.3616169

        l1R         l2R        lcvR        lcaR       lkurR
895.1153846 175.0339202   0.1983372   0.1683511   0.1853942

    A2kN        P
2.641827 0.658000

    A2kN        P
1.933665 0.258000

        H1         H2
-0.7677048 -0.4166196
Warning messages:
1: In fn(par, ...) : value out of range in 'gammafn'
2: In fn(par, ...) : value out of range in 'gammafn'

        Ak         P
14.1152348  0.9930638
```

homtest documentation built on May 2, 2019, 1:45 p.m.
http://mathhelpforum.com/algebra/98207-problem-solving-using-algebra.html
# Math Help - Problem solving using algebra

1. ## Problem solving using algebra

Hi, I've got a problem here. I've tried using LCM and HCF, but it isn't working for me.

There are 3 positive whole numbers. Person 1 found the HCF of two of them and got 1,000,004. Persons 2 and 3 did the same thing with different pairs of those 3 numbers and got 1,000,006 and 1,000,008 respectively. Their teacher is sure one of them has made a mistake, despite the fact that they calculated the HCFs of different pairs. Is the teacher correct?

BG
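The reply in this extract was cut off, but the divisibility structure behind the question can be explored with a short script. The mod-4 observation and the brute-force search below are my own illustration, not the thread's answer:

```python
from math import gcd

# 1,000,004 and 1,000,008 are divisible by 4, but 1,000,006 is not:
print(1000004 % 4, 1000006 % 4, 1000008 % 4)   # 0 2 0
# If hcf(a,b) = 1,000,004 then 4 | a and 4 | b; if hcf(b,c) = 1,000,008
# then 4 | c as well. Then 4 would have to divide hcf(a,c) too -- yet
# hcf(a,c) is supposed to be 1,000,006, which leaves remainder 2 mod 4.

def triple_exists(h1, h2, h3, limit):
    """Brute-force search for a <= b <= c < limit whose three pairwise
    HCFs are exactly {h1, h2, h3} (in some order)."""
    target = sorted((h1, h2, h3))
    for a in range(1, limit):
        for b in range(a, limit):
            for c in range(b, limit):
                if sorted((gcd(a, b), gcd(a, c), gcd(b, c))) == target:
                    return (a, b, c)
    return None

# Scaled-down analogue of (1000004, 1000006, 1000008): same mod-4 pattern.
print(triple_exists(4, 6, 8, 80))    # None -- no such triple
print(triple_exists(2, 3, 5, 40))    # (6, 10, 15) -- a consistent set does exist
```

The search limits (80 and 40) are arbitrary small bounds chosen to keep the brute force fast; the mod-4 argument is what rules out the large triple in general.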
https://www.bloombergprep.com/gmat/practice-question/1/1088/quantitative-section-quant-fundamentals-quant-fundamentals-signs-and-the-number-line/
# Quant Fundamentals: Signs and the Number Line

Which of the following is NOT always true?

Incorrect. [[Snippet]] Eliminate this answer choice because the square of any number is always non-negative: it's either positive or zero (if $$x=0$$).

Incorrect. [[Snippet]] Eliminate this answer choice because a positive times 0 is 0 (non-negative), 0 times 0 is 0 (non-negative), and a positive times a positive is positive (non-negative).

Incorrect. [[Snippet]] Keep in mind that we're looking for an answer choice that is NOT always true. According to answer choice C, the sum of two distinct non-negative numbers is positive. In other words, the sum of any two different numbers which are either positive or zero is always positive. When adding two positive numbers, the result is always positive. Even if one of the two numbers is zero, the result is still positive: the two numbers must be distinct, so they can't both be 0, and adding a positive number and 0 gives a positive number. Since the statement is always true, this answer choice is incorrect.

Correct. [[Snippet]] This is the correct answer because a negative times 0 is 0. For example: $$-3 \times 0 = 0$$. Therefore, the product of two distinct non-positive numbers is NOT always positive.

Incorrect.
[[Snippet]] Eliminate this answer choice because a negative plus a negative is negative and a negative plus 0 is negative. For example:

> $$(-1) + (-4) = -5$$
> $$(-3) + 0 = -3$$

The answer choices were:

- The sum of two *distinct* non-positive numbers is negative.
- The product of two *distinct* non-positive numbers is positive.
- The sum of two *distinct* non-negative numbers is positive.
- The product of two non-negative numbers is non-negative.
- $$x^2$$ is non-negative for any value of $$x$$.
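The case analysis above can be checked mechanically. This small Python sketch (my own illustration, using a handful of sample values) confirms which statements hold and which one fails:

```python
from itertools import product

non_negative = [0, 1, 2.5, 7]     # "non-negative" = positive or zero
non_positive = [0, -1, -2.5, -7]  # "non-positive" = negative or zero

# The sum of two distinct non-negative numbers is positive -- always true:
assert all(x + y > 0 for x, y in product(non_negative, repeat=2) if x != y)

# The sum of two distinct non-positive numbers is negative -- always true:
assert all(x + y < 0 for x, y in product(non_positive, repeat=2) if x != y)

# The product of two non-negative numbers is non-negative -- always true:
assert all(x * y >= 0 for x, y in product(non_negative, repeat=2))

# x^2 is non-negative for any x -- always true:
assert all(x * x >= 0 for x in non_negative + non_positive)

# The product of two distinct non-positive numbers is positive -- FAILS
# whenever one factor is 0 (e.g. -3 * 0 = 0):
assert not all(x * y > 0 for x, y in product(non_positive, repeat=2) if x != y)

print("only the 'product of two distinct non-positive numbers' claim fails")
```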
https://www.esaral.com/q/if-the-product-of-zeros-of-the-quadratic-polynomial-f-x-x2-4x-k-is-3-89093/
If the product of zeros of the quadratic polynomial $f(x) = x^{2} - 4x + k$ is 3

Question: If the product of zeros of the quadratic polynomial $f(x)=x^{2}-4 x+k$ is 3, find the value of $k$.

Solution: We have to find the value of $k$.

Given: the product of the zeros of the quadratic polynomial $f(x)=x^{2}-4 x+k$ is 3.

Product of the zeros $=3$

$\frac{\text { Constant term }}{\text { Coefficient of } x^{2}}=3$

$\frac{k}{1}=3$

$k=3 \times 1$

$k=3$

Hence, the value of $k$ is $k=3$.
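For a quadratic $ax^{2}+bx+c$, Vieta's formulas give the product of the zeros as $c/a$ and the sum as $-b/a$. A quick numeric check of the result (illustrative only, not part of the original solution):

```python
import math

# f(x) = x^2 - 4x + k with the value k = 3 found above.
a, b, k = 1, -4, 3
disc = b * b - 4 * a * k
r1 = (-b + math.sqrt(disc)) / (2 * a)
r2 = (-b - math.sqrt(disc)) / (2 * a)

# Vieta's formulas: product of zeros = k/a, sum of zeros = -b/a.
assert math.isclose(r1 * r2, k / a)   # product is 3
assert math.isclose(r1 + r2, -b / a)  # sum is 4
```

The zeros come out as 3 and 1, whose product is indeed 3.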
https://www.physicsforums.com/threads/asymptotic-expansions-and-wkb-solution.124207/
# Asymptotic expansions and WKB solution

1. Jun 20, 2006

### eljose

Let $e$ be a small parameter, $e \ll 1$. If we want to find a solution to the equation

$$e\ddot x + f(t)x=0$$

then we could write a solution to it in the form

$$x(t)=\exp(i \int dt\, f(t)^{1/2}/e)\,[a_{0}(t)+ea_{1}(t)+e^{2}a_{2}(t)+\ldots]$$

My question is whether we could apply Borel resummation (or another technique) to assign a "sum" to the divergent series, in the form

$$a_{0}(t)+ea_{1}(t)+e^{2}a_{2}(t)+\ldots \rightarrow \int_{0}^{\infty}dx\, e^{-x}B(t,x,e)$$

with

$$B(t,x,e)= \sum_{n=0}^{\infty} \frac{a_{n}(t)e^{n} x^{n}}{n!}$$

the generating function of the coefficients, so we can extend the domain of validity of the solution not only to the case $e \to 0$ but to every value of $e$, or at least to $e \to 1$.
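As a concrete illustration of the kind of resummation being asked about (my own sketch, not from the thread): take the model coefficients $a_n = (-1)^n n!$, independent of $t$. The series $\sum_n (-1)^n n!\, e^n$ diverges for every $e > 0$, but its Borel transform is $B(x) = \sum_n (-ex)^n = 1/(1+ex)$, and the integral $\int_0^\infty e^{-x} B(x)\,dx$ converges and matches the optimally truncated partial sums:

```python
from math import exp, factorial

eps = 0.1  # the small parameter e

def partial_sum(eps, n_terms):
    # Truncation of the divergent model series sum_n (-1)^n n! eps^n.
    return sum((-1) ** n * factorial(n) * eps ** n for n in range(n_terms))

def borel_sum(eps, upper=60.0, steps=300_000):
    # Midpoint-rule approximation of  int_0^inf exp(-x) / (1 + eps x) dx,
    # the Borel sum of the series (B(x) = 1/(1 + eps x) is its Borel transform).
    h = upper / steps
    return h * sum(exp(-(i + 0.5) * h) / (1 + eps * (i + 0.5) * h)
                   for i in range(steps))

s = borel_sum(eps)
# Near the optimal truncation (~1/eps terms) the partial sums agree with the
# Borel sum; taking many more terms makes the truncation error blow up again.
assert abs(s - partial_sum(eps, 8)) < 1e-2
assert abs(partial_sum(eps, 40)) > 1e6  # the raw series really does diverge
```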
https://astronomy.stackexchange.com/questions/50124/why-does-electron-degeneracy-pressure-not-stop-massive-star-collapse
# Why does electron degeneracy pressure not stop massive star collapse?

I was thinking a little bit, and never asked myself the following. If white dwarfs do not collapse, because electron degeneracy pressure stops the star from collapsing under its own gravity, and this is due to the Pauli exclusion principle, then why aren't neutron stars held up by the same pressure? How does a star become a more compact object if the electrons "cannot be more together"? What is going on when the inverse beta decay process "violates" the Pauli exclusion principle (for electrons)? And why is neutron degeneracy pressure now the one that doesn't allow gravity to do its thing?

I think I have a wrong picture: that stars are first in a "white dwarf state" and then move to a "neutron star state", when in reality a white dwarf is the result of a very specific final evolutionary phase of a low-mass star, and a neutron star is the core remnant of a massive star. I don't think this is the answer, because then why would the core go from a solid iron state to a degenerate gas of neutrons (and other stuff) without passing through an electron-degenerate gas? But that is what came to my mind.

Another thing that maybe answers my question is quantum field theory. In this model there are particles that have some energy, properties, and quantum numbers, and with the proper ones you can have any physically possible state. So at some point the question is not why something is violating the Pauli exclusion principle; instead, the question would be which state is more probable to achieve given the conditions.

• +1. As a brief thought on this: a degenerate electron gas, or whatever state describes matter in a neutron star, also has a finite pressure. If gravity is stronger... Aug 7 at 23:48

You have the wrong idea about degeneracy pressure.
There is no limit in principle on how closely together you can squeeze electrons (or other fermions), and at no point is the Pauli Exclusion Principle violated. All that happens is that their RMS momentum must increase, in accordance with the Heisenberg uncertainty principle. Thus, the more densely you pack the electrons, the higher their average momentum becomes, and it is this momentum that leads to "degeneracy pressure".

Why then are "neutron stars" not held up by electron degeneracy pressure? The answer is because they are (mostly) made of neutrons!

The composition of a compact object will itself be density dependent. The reason for this is that the matter will attempt to convert itself into whatever form has the minimum energy density, where energy in this case includes the rest mass of the constituents and their kinetic energy.

In the cases of white dwarfs and the cores of massive stars that are supported by electron degeneracy pressure, once the density reaches a threshold, the most energetic electrons are capable of "neutronising" the protons in the gas (there must be an equal number of protons for charge neutrality). The electrons combine with the protons to form neutrons. In both cases, the removal of free electrons means that electron degeneracy pressure does not increase as the star becomes denser, an unstable situation that can lead to collapse or possibly to a thermonuclear explosion (in the case of a white dwarf, when the density becomes high enough to trigger pycnonuclear reactions). Note that it is possible for a white dwarf to collapse to a neutron star (see https://astronomy.stackexchange.com/a/25907/2531); the fate of massive white dwarfs and whether they collapse or blow up is a complex problem (see https://astronomy.stackexchange.com/a/14747/2531).

During the collapse, temperatures become so hot that photons have enough energy to break up any nuclei and produce a gas that is mostly neutrons with about 1% protons and electrons.
The collapse may be arrested by the combined effects of neutron degeneracy pressure (this occurs at higher densities than for electrons, because neutrons are much more massive) and strong nuclear force repulsion once the neutrons are separated by $$\sim 10^{-15}$$ m. The electrons (and protons) are also degenerate, but because of their low densities (still much denser than in a white dwarf!) compared with the neutrons, they are a minor contributor to the pressure.

• Neutronising is always possible. The question is whether the reaction is endothermic or exothermic; collapse occurs when the pressure becomes high enough that it's exothermic. We see it in the chemical world with ordinary water ice under pressure. Aug 9 at 2:34

• Neutronisation becomes possible when electrons have sufficient energy. (Fortunately) It does not occur for electrons with energy below the neutronisation threshold: about 10 MeV for carbon, but lower for iron nuclei and only 1.3 MeV for protons. @LorenPechtel Aug 9 at 7:32

• And neutronisation is always "endothermic" in the sense that it takes kinetic energy from an electron and turns it into (mostly) rest mass. That's why it destabilises the star. Aug 9 at 15:06

• But not all electrons will be at the same energy level. Even when most aren't energetic enough, the reaction would happen, just not at a meaningful rate. And I'm looking at the big picture in saying it's exothermic. Of course it takes energy to do it, but it yields energy in letting the star shrink a tiny bit. When the yield from the shrink exceeds the energy to drive the reaction, the star goes down the rabbit hole. Aug 10 at 15:24

• The uncertainty principle implies an increase in the standard deviation of the momentum, not its mean. It's more realistic to say each Cartesian component of momentum has mean zero, but increased root mean square. This also translates into a greater mean modulus of each component, and mean modulus of the momentum vector.
And there's also an increase in the mean KE. – J.G. Aug 10 at 18:31

A neutron star contains relatively few electrons. In a white dwarf, electron degeneracy pressure does prevent further collapse. But if you add more matter to a white dwarf, it will shrink in volume, increasing the pressure. Now if you keep adding matter, at some point some kind of nuclear reaction will start. In a regular carbon-oxygen white dwarf, it is nuclear fusion, and the white dwarf explodes as a type Ia supernova. But in the iron core of a massive star a different reaction occurs: protons combine with electrons to produce neutrons. This removes electrons and allows further collapse. This happens extremely rapidly and releases an enormous pulse of neutrinos, which results in the outer parts of the star exploding in a type II supernova.

In a very large star the core passes through a state of electron degeneracy very rapidly. There will be an electron-degenerate iron core, but it won't last long! Less than a day (?). So much iron is produced in such a short time that electron degeneracy is rapidly overcome and neutronisation occurs. There is no visible "iron white dwarf".

An iron white dwarf couldn't easily form. Stars comparable to the sun will produce carbon-oxygen white dwarfs. Larger stars will explode in supernovae. There is no sweet spot at which a star will produce just enough iron to form an iron white dwarf supported by electron degeneracy.

When an old star begins to collapse under its own gravity, one can in principle numerically solve the Schrödinger equation for the potential its electrons collectively experience. Although this is a linear equation, we are interested not in arbitrary solutions to it, but in those which are antisymmetric under exchanging any two electrons.
If the gravity-induced classical pressure is moderate, the quantum numbers of legal states are the same as with no gravity at all; we can work at, say, first order in perturbation theory to compute how the solutions shift, but the same discrete parameters describe them. But if the pressure is too high, the solutions are so unlike the zero-gravity case that they need to be differently labelled. Ultimately, the new low-energy states are numerous enough for the electrons' marginal wavefunctions to overlap more than they used to, so electron degeneracy pressure no longer resists the collapse.

In a neutron star, after electron degeneracy pressure fails the collapse continues to an extent, but we can repeat the above logic with neutrons. This time, it is their degeneracy pressure which wins, because the quantum numbers labelling multi-neutron states are the same as in the zero-gravity case. If we go to even more extreme gravity, of course, even this fails, so neutron degeneracy pressure no longer prevents collapse.

You don't need the quantum instrumentarium for a basic understanding of compact star matters:

1. The inverse beta decay does not "violate" the exclusion principle. It simply removes electrons from the whole system, reducing the pressure. The inverse beta decay consumes energy, and this energy is usually delivered by the gravitational collapse of the star. If the star is not heavy enough, it can't deliver the needed energy for the inverse beta decay, so it stays in the "electron degeneracy state" indefinitely.

2. There still is an electron degeneracy pressure in a neutron star. It is way higher than the pressure in a white dwarf. There is a proton degeneracy pressure in there as well. Both are just not as important, compared to the neutron degeneracy pressure.

3. In order to get a neutron star, you start with a bigger star in the first place. Its core is hot and dense enough to burn everything all the way to iron before collapsing. Then the collapse and the inverse beta decay happen.
In a Fermi gas of electrons, the electron energy reflects the pressure. If the highest energy electrons are relativistic, the equation of state softens: a density increase doesn't produce as much pressure as it does in the nonrelativistic case. The softer equation of state is less able to resist gravitational collapse: thus we get the Chandrasekhar Limit for the mass of a white dwarf star. It's more complicated for a neutron star: although the Fermi pressure of the neutrons is important, the nuclear force between them modifies the equation of state, making it stiffer.
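The softening mentioned above can be made concrete with the standard zero-temperature ideal-Fermi-gas limits (a sketch using textbook formulas, not taken from the answers): nonrelativistic $P = \frac{\hbar^2}{5 m_e}(3\pi^2)^{2/3} n^{5/3}$ versus ultrarelativistic $P = \frac{\hbar c}{4}(3\pi^2)^{1/3} n^{4/3}$.

```python
import math

# Zero-temperature ideal electron gas, limiting equations of state (standard results):
#   nonrelativistic:   P = (hbar^2 / 5 m_e) (3 pi^2)^(2/3) n^(5/3)
#   ultrarelativistic: P = (hbar c / 4)     (3 pi^2)^(1/3) n^(4/3)
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
m_e = 9.1093837015e-31  # kg

def p_nonrel(n):
    return hbar**2 / (5 * m_e) * (3 * math.pi**2) ** (2 / 3) * n ** (5 / 3)

def p_ultrarel(n):
    return hbar * c / 4 * (3 * math.pi**2) ** (1 / 3) * n ** (4 / 3)

# Doubling the density multiplies the pressure by 2^(5/3) ~ 3.17 in the
# nonrelativistic gas, but only by 2^(4/3) ~ 2.52 in the relativistic one:
# the softer relativistic gas gains less pressure per compression, which is
# why a very dense core is less able to fight further collapse.
n = 1e36  # electrons per m^3, white-dwarf-interior order of magnitude
ratio_stiff = p_nonrel(2 * n) / p_nonrel(n)
ratio_soft = p_ultrarel(2 * n) / p_ultrarel(n)
assert ratio_soft < ratio_stiff
```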
http://www.ck12.org/geometry/Applications-of-the-Pythagorean-Theorem/lesson/Applications-of-the-Pythagorean-Theorem-Intermediate/r7/
Applications of the Pythagorean Theorem

Height, distance, and angles or types of triangles.

What if you had a 52" High Definition Television (52" being the length of the diagonal of the rectangular viewing area)? High Definition Televisions (HDTVs) have sides in the ratio of 16:9. What is the length and width of a 52” HDTV? What is the length and width of an HDTV with a \begin{align*}y''\end{align*} long diagonal?

Guidance

There are many different applications of the Pythagorean Theorem. Three applications are explored below.

Find the Height of an Isosceles Triangle

One way to use the Pythagorean Theorem is to find the heights of isosceles triangles so you can calculate their areas. The area of a triangle is \begin{align*}\frac{1}{2} \ bh\end{align*}, where \begin{align*}b\end{align*} is the base and \begin{align*}h\end{align*} is the height (or altitude). If you are given the base and the sides of an isosceles triangle, you can use the Pythagorean Theorem to calculate the height.

Prove the Distance Formula

Another application of the Pythagorean Theorem is the Distance Formula. First, draw the vertical and horizontal lengths to make a right triangle. Then, use the differences to find these distances. Now that we have a right triangle, we can use the Pythagorean Theorem to find \begin{align*}d\end{align*}.

Distance Formula: The distance between \begin{align*}A(x_1, y_1)\end{align*} and \begin{align*}B(x_2, y_2)\end{align*} is \begin{align*}d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}\end{align*}.

Determine if a Triangle is Acute, Obtuse, or Right

We can extend the converse of the Pythagorean Theorem to determine if a triangle has an obtuse angle or is acute.
We know that if the sum of the squares of the two smaller sides equals the square of the larger side, then the triangle is right. We can also interpret the outcome when the sum of the squares of the smaller sides does not equal the square of the third.

Theorem: (1) If the sum of the squares of the two shorter sides of a triangle is greater than the square of the longest side, then the triangle is acute. (2) If the sum of the squares of the two shorter sides of a triangle is less than the square of the longest side, then the triangle is obtuse.

In other words: the sides of a triangle are \begin{align*}a, b\end{align*}, and \begin{align*}c\end{align*}, with \begin{align*}c > b\end{align*} and \begin{align*}c > a\end{align*}.

If \begin{align*}a^2 + b^2 > c^2\end{align*}, then the triangle is acute.

If \begin{align*}a^2 + b^2 = c^2\end{align*}, then the triangle is right.

If \begin{align*}a^2 + b^2 < c^2\end{align*}, then the triangle is obtuse.

Proof of Part 1:

Given: In \begin{align*}\triangle ABC, a^2 + b^2 > c^2\end{align*}, where \begin{align*}c\end{align*} is the longest side. In \begin{align*}\triangle LMN, \angle N\end{align*} is a right angle.

Prove: \begin{align*}\triangle ABC\end{align*} is an acute triangle (all angles are less than \begin{align*}90^\circ\end{align*}).

1. In \begin{align*}\triangle ABC, a^2 + b^2 > c^2\end{align*}, and \begin{align*}c\end{align*} is the longest side. In \begin{align*}\triangle LMN, \angle N\end{align*} is a right angle. (Given)
2. \begin{align*}a^2 + b^2 = h^2\end{align*} (Pythagorean Theorem)
3. \begin{align*}c^2 < h^2\end{align*} (Transitive PoE)
4. \begin{align*}c < h\end{align*} (Take the square root of both sides)
5. \begin{align*}\angle C\end{align*} is the largest angle in \begin{align*}\triangle ABC\end{align*}. (The largest angle is opposite the longest side.)
6. \begin{align*}m \angle N = 90^\circ\end{align*} (Definition of a right angle)
7. \begin{align*}m \angle C < m \angle N\end{align*} (SSS Inequality Theorem)
8. \begin{align*}m \angle C < 90^\circ\end{align*} (Transitive PoE)
9. \begin{align*}\angle C\end{align*} is an acute angle. (Definition of an acute angle)
10. \begin{align*}\triangle ABC\end{align*} is an acute triangle. (If the largest angle is less than \begin{align*}90^\circ\end{align*}, then all the angles are less than \begin{align*}90^\circ\end{align*}.)

Example A

What is the area of the isosceles triangle? First, draw the altitude from the vertex between the congruent sides, which will bisect the base (Isosceles Triangle Theorem). Then, find the length of the altitude using the Pythagorean Theorem. Now, use \begin{align*}h\end{align*} and \begin{align*}b\end{align*} in the formula for the area of a triangle.

Example B

Find the distance between (1, 5) and (5, 2). Make \begin{align*}A(1, 5)\end{align*} and \begin{align*}B(5, 2)\end{align*} and plug into the distance formula. You might recall that the distance formula was presented as \begin{align*}d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}\end{align*}, with the first and second points switched. It does not matter which point is first as long as \begin{align*}x\end{align*} and \begin{align*}y\end{align*} are both first in each parenthesis. In Example B, we could have switched \begin{align*}A\end{align*} and \begin{align*}B\end{align*} and would still get the same answer. Also, just like the lengths of the sides of a triangle, distances are always positive.

Example C

Determine if the following triangles are acute, right or obtuse. a) b) Set the shorter sides in each triangle equal to \begin{align*} a \end{align*} and \begin{align*} b \end{align*} and the longest side equal to \begin{align*}c\end{align*}.

a) The triangle is acute.

b) The triangle is obtuse.

Concept Problem Revisited

To find the length and width of a 52” HDTV, plug in the ratios and 52 into the Pythagorean Theorem.
We know that the sides are going to be a multiple of 16 and 9, which we will call \begin{align*}n\end{align*}. Then \begin{align*}(16n)^2 + (9n)^2 = 52^2\end{align*}, so \begin{align*}337n^2 = 2704\end{align*} and \begin{align*}n \approx 2.83\end{align*}. Therefore, the dimensions of the TV are \begin{align*}16(2.83'')\end{align*} by \begin{align*}9(2.83'')\end{align*}, or \begin{align*}45.3''\end{align*} by \begin{align*}25.5''\end{align*}. If the diagonal is \begin{align*}y''\end{align*} long, then \begin{align*}y = n\sqrt{337}\end{align*}; the extended ratio is \begin{align*}9 : 16 : \sqrt{337}\end{align*}.

Vocabulary

The two shorter sides of a right triangle (the sides that form the right angle) are the legs, and the longer side (the side opposite the right angle) is the hypotenuse. The Pythagorean Theorem states that \begin{align*}a^2+b^2=c^2\end{align*}, where the legs are “\begin{align*}a\end{align*}” and “\begin{align*}b\end{align*}” and the hypotenuse is “\begin{align*}c\end{align*}”. Acute triangles are triangles where all angles are less than \begin{align*}90^\circ\end{align*}. Right triangles are triangles with one \begin{align*}90^\circ\end{align*} angle. Obtuse triangles are triangles with one angle that is greater than \begin{align*}90^\circ\end{align*}.

Guided Practice

1. Graph \begin{align*}A(-4, 1), B(3, 8)\end{align*}, and \begin{align*}C(9, 6)\end{align*}. Determine if \begin{align*}\triangle ABC\end{align*} is acute, obtuse, or right.

2. Do the lengths 7, 8, 9 make a triangle that is right, acute, or obtuse?

3. Do the lengths 14, 48, 50 make a triangle that is right, acute, or obtuse?

Answers:

1. This looks like an obtuse triangle, but we need proof to draw the correct conclusion. Use the distance formula to find the length of each side. Now, let's plug these lengths into the Pythagorean Theorem. \begin{align*}\triangle ABC\end{align*} is an obtuse triangle.

2. Acute, because \begin{align*}7^2 + 8^2>9^2\end{align*}.

3. Right, because \begin{align*}14^2+48^2=50^2\end{align*}.

Practice

Find the area of each triangle below. Round your answers to the nearest tenth.
Find the length between each pair of points.

1. (-1, 6) and (7, 2)
2. (10, -3) and (-12, -6)
3. (1, 3) and (-8, 16)
4. What are the length and width of a \begin{align*}42''\end{align*} HDTV? Round your answer to the nearest tenth.
5. Standard definition TVs have a length and width ratio of 4:3. What are the length and width of a \begin{align*}42''\end{align*} Standard definition TV? Round your answer to the nearest tenth.
6. Challenge An equilateral triangle is an isosceles triangle. If all the sides of an equilateral triangle are \begin{align*}s\end{align*}, find the area, using the technique learned in this section. Leave your answer in simplest radical form.
7. Find the area of an equilateral triangle with sides of length 8.
8. The two shorter sides of a triangle are 9 and 12.
    1. What would be the length of the third side to make the triangle a right triangle?
    2. What is a possible length of the third side to make the triangle acute?
    3. What is a possible length of the third side to make the triangle obtuse?
9. The two longer sides of a triangle are 24 and 25.
    1. What would be the length of the third side to make the triangle a right triangle?
    2. What is a possible length of the third side to make the triangle acute?
    3. What is a possible length of the third side to make the triangle obtuse?
10. The lengths of the sides of a triangle are \begin{align*}8x, 15x,\end{align*} and \begin{align*}17x\end{align*}. Determine if the triangle is acute, right, or obtuse.

Determine if the following triangles are acute, right or obtuse.

11. 5, 12, 15
12. 13, 84, 85
13. 20, 20, 24
14. 35, 40, 51
15. 39, 80, 89
16. 20, 21, 38
17. 48, 55, 76

Graph each set of points and determine if \begin{align*}\triangle ABC\end{align*} is acute, right, or obtuse.

18. \begin{align*}A(3, -5), B(-5, -8), C(-2, 7)\end{align*}
19. \begin{align*}A(5, 3), B(2, -7), C(-1, 5)\end{align*}
20. Writing Explain the two different ways you can show that a triangle in the coordinate plane is a right triangle.
Vocabulary Language: English

Distance Formula: The distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ can be defined as $d= \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$.

Obtuse Triangle: An obtuse triangle is a triangle with one angle that is greater than 90 degrees.

Pythagorean Theorem: The Pythagorean Theorem is a mathematical relationship between the sides of a right triangle, given by $a^2 + b^2 = c^2$, where $a$ and $b$ are legs of the triangle and $c$ is the hypotenuse of the triangle.

Vertex: A vertex is a point of intersection of the lines or rays that form an angle.
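The acute/right/obtuse test and the Distance Formula translate directly into code. A small sketch (not part of the original lesson) that reproduces the Guided Practice answers:

```python
import math

def distance(p, q):
    # Distance Formula: d = sqrt((x1 - x2)^2 + (y1 - y2)^2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify(a, b, c):
    """Label a triangle acute, right, or obtuse from its three side lengths."""
    a, b, c = sorted((a, b, c))           # make c the longest side
    lhs, rhs = a * a + b * b, c * c
    if math.isclose(lhs, rhs):
        return "right"
    return "acute" if lhs > rhs else "obtuse"

# Guided Practice 2 and 3:
assert classify(7, 8, 9) == "acute"       # 49 + 64 > 81
assert classify(14, 48, 50) == "right"    # 196 + 2304 = 2500

# Guided Practice 1: A(-4, 1), B(3, 8), C(9, 6), via the Distance Formula.
A, B, C = (-4, 1), (3, 8), (9, 6)
sides = (distance(A, B), distance(B, C), distance(A, C))
assert classify(*sides) == "obtuse"
```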
http://icpc.njust.edu.cn/Problem/Hdu/4804/
# Campus Design

Time Limit: 15000/8000 MS (Java/Others) Memory Limit: 32768/32768 K (Java/Others)

## Description

Nanjing University of Science and Technology is celebrating its 60th anniversary. In order to make room for student activities, to make the university a more pleasant place for learning, and to beautify the campus, the college administrator decided to start construction on an open space. The designers measured the open space and came to the conclusion that it is a rectangle with a length of n meters and a width of m meters. They then split the open space into n x m squares. To make it more beautiful, the designer decides to cover the open space with 1 x 1 bricks and 1 x 2 bricks, according to the following rules:

1. All the bricks can be placed horizontally or vertically.
2. The vertexes of the bricks should be placed on integer lattice points.
3. The number of 1 x 1 bricks shouldn't be less than C or more than D. The number of 1 x 2 bricks is unlimited.
4. Some squares have a flowerbed on them, so they should not be covered by any brick. (We use '0' to represent a square with a flowerbed and '1' to represent the other squares.)

Now the designers want to know how many ways there are to cover the open space, meeting the above requirements.

## Input

There are several test cases; please process till EOF. Each test case starts with a line containing four integers N (1 <= N <= 100), M (1 <= M <= 10), C, D (1 <= C <= D <= 20). Then follow N lines, each being a string with the length of M. The string consists of '0' and '1' only, where '0' means the square should not be covered by any brick, and '1' otherwise.

## Output

Please print one line per test case. Each line should contain an integer representing the answer to the problem (mod 10^9 + 7).
## Sample Input

1 1 0 0
1
1 1 1 2
0
1 1 1 2
1
1 2 1 2
11
1 2 0 2
01
1 2 0 2
11
2 2 0 0
10
10
2 2 0 0
01
10
2 2 0 0
11
11
4 5 3 5
11111
11011
10101
11111

## Sample Output

0
0
1
1
1
2
1
0
2
954

## Author

liuyiding

## Source

2013 ACM/ICPC Asia Regional Nanjing Onsite Contest (problem reproduction)
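For the small sample cases, a brute-force counter is enough to reproduce the expected outputs, including the 954 of the last case (a reference sketch only; the full constraints, with N up to 100, call for a profile DP, which this does not attempt):

```python
MOD = 10**9 + 7

def count_tilings(grid, C, D):
    """Brute-force count of coverings with 1x1 and 1x2 bricks, where the
    number of 1x1 bricks must lie in [C, D].  '0' cells are flowerbeds."""
    n, m = len(grid), len(grid[0])
    covered = [[ch == '0' for ch in row] for row in grid]

    def rec(ones):
        # Always fill the first uncovered cell; this canonical order ensures
        # every covering is counted exactly once.
        cell = next(((i, j) for i in range(n) for j in range(m)
                     if not covered[i][j]), None)
        if cell is None:
            return 1 if C <= ones <= D else 0
        i, j = cell
        total = 0
        covered[i][j] = True
        total += rec(ones + 1)                      # place a 1x1 brick
        if j + 1 < m and not covered[i][j + 1]:     # horizontal 1x2 brick
            covered[i][j + 1] = True
            total += rec(ones)
            covered[i][j + 1] = False
        if i + 1 < n and not covered[i + 1][j]:     # vertical 1x2 brick
            covered[i + 1][j] = True
            total += rec(ones)
            covered[i + 1][j] = False
        covered[i][j] = False
        return total

    return rec(0) % MOD
```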
http://mathoverflow.net/feeds/question/7004
# Intuitive explanation to Probability question

From MathOverflow: http://mathoverflow.net/questions/7004/intuitive-explanation-to-probability-question

**Question** (Claudiu, 2009-11-28):

I have \$3. I flip a coin. If I get heads, I get \$1. If I get tails, I lose \$1. The game stops when I have \$0 or \$7. What is the probability I get \$7?

I solved this by creating a system of linear equations, where $P_0 = 0$, $P_7 = 1$, and $P_x = 0.5 \cdot P_{x-1} + 0.5 \cdot P_{x+1}$. Solving them, I got $P_3 = 3/7$. Moreover, $P_x = x/7$. Why does it work out to such a simple fraction?

More generally, it seems that $P_{x,y} = x/y$, where $P_{x,y}$ is the probability that, starting from $x$ dollars, I end up with $y$ dollars. I haven't proven this, but why is this the case?

Finally, what is $P_{x,y,p}$, where I gain 1 dollar with probability $p$ instead of probability 0.5?

**Answer** (Jonas Meyer):

If your equation $P_{x,y} = \frac{P_{x-1,y}+P_{x+1,y}}{2}$ is intuitive, then the result should be intuitive. If the probability starting with $x$ dollars is the arithmetic mean of the probabilities starting with $x-1$ and $x+1$ dollars, then $P_{0,y}, P_{1,y}, \ldots, P_{y,y}$ is an arithmetic sequence. The simple form then follows immediately from the equal spacing and the facts that $P_{0,y}=0$ and $P_{y,y}=1$. I don't know how to make the intuition precise, but because you are equally likely to go up or down at any given time, it makes sense that the probability of reaching $y$ is proportional to the distance from $0$.

**Answer** (Vigleik Angeltveit):

You don't need much math at all to answer this question. Your expected value at the end of the game is \$3, because the expected earnings at each turn are 0. Your expected value at the end of the game is also $7 P_7$ (you finish with \$7 with probability $P_7$ and \$0 otherwise), so $7 P_7 = 3$, and there you have it: $P_7 = 3/7$.

**Answer** (Kevin Carde):

I really like Vigleik's answer, but I'll throw in yet another way to look at your original problem. $P_x = (P_{x-1}+P_{x+1})/2$ is an example of a (discrete) harmonic function, i.e., a function whose value is the average of the adjacent values. In this case, $P_x$ is a harmonic function on a chain graph. For purposes of intuition, we can move from a discrete to a continuous line and think about the criterion for a function of one (real) variable to be harmonic: it is harmonic if and only if its second derivative vanishes, i.e., it's linear. This provides some intuition for why your solution just linearly interpolates between 0 and 1.

Your general problem of $P_{x,y,p}$ is no longer harmonic, so it will not have as easy a solution, as you may be discovering. For notational simplicity, I'll write $P_n$ for $P_{n,y,p}$ (preferring $n$ as the index of a sequence to $x$). If you write down your new recurrence, you will get the equations

$$P_n = (1-p)P_{n-1} + pP_{n+1},$$

subject to $P_0 = 0$, $P_y = 1$. We can work with this, or we can use a trick. Let $k = (1-p)/p$ (so $p = 1/(1+k)$). Then you can verify that

$$P_n = kP_{n-1} + 1$$

satisfies the original equation, with the additional freedom to scale all $P_n$ by a constant factor (we've broken the homogeneity of our original recurrence). [It actually takes some doing to verify this: consider using this new recurrence to write down $P_n - P_{n+1}$. When you solve that out for $P_n$, you retrieve the original recurrence.]

This is much easier to handle, with solution

$$P_n = \frac{k^n-1}{k-1}.$$

This gives $P_0 = 0$ as desired, but you'll need to scale down all solutions so that $P_y = 1$.

**Answer** (gowers):

An amusing observation in connection with the first proof: if you start with $m$ dollars and can choose at each stage the amount you will win/lose if the coin is heads/tails (the two amounts being equal, of course), subject to the condition that you are not allowed to bet an amount that would take you over $n$ or below zero, then your probability of getting to $n$ before you get to $0$ is still $m/n$, as long as you bet a positive integer amount each time. In other words, if you try to do better by developing a strategy that involves betting different amounts at each stage, you won't. (But at least you won't do worse either.)

On the other hand, if you are playing roulette and your probability of winning goes very slightly down because of the 37, then your best hope is to bet the maximal amount every time, so as to minimize the chance that a 37 ever occurs during the process. (That's not a proof, but the conclusion is sound: if you take very small steps, then the slight bias towards the bank means you will almost certainly lose.)

**Answer** (Douglas Zare):

Ori Gurel-Gurevich's comment suggests a very simple way to use a martingale (an example of a Wald martingale) to evaluate the final question, in which the probability of gaining a dollar is $p \ne \frac12$.

If $m(t)$ is how much money you have at time $t$, then $m(t)$ is not a martingale for $p \ne \frac12$. However, for the right base $C$, the process $C^{m(t)}$ is a martingale: $E(C^{m(t+1)}) = E(C^{m(t)})$. That means we can use the same argument, that the starting value of a martingale is the average of the stopping values, to compute the probabilities of ending at $0$ or $y$, or even of escaping to $\infty$ (with a bit more technical work).

The right value of $C$ is $(1-p)/p$. You start at $C^x$ and end at $C^y$ or $C^0 = 1$, so if you finish with probability 1 (easy to prove with another martingale), you end up at $C^y$ with probability $(1-C^x)/(1-C^y)$, and you end up at $C^0$ with the complementary probability $(C^x-C^y)/(1-C^y)$.

When $p \sim \frac12$, we have $C^x \sim 1 - x\epsilon$ and $C^y \sim 1 - y\epsilon$, which is continuous with the case $p = \frac12$.

If you don't stop at $y$, the probability that you escape to $\infty$ is $0$ if $p \le 1/2$ and $1-C^x$ if $p \gt 1/2$, which makes the probability of ruin $C^x$.
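The recurrences above can be checked mechanically. Below is a short Python sketch (mine, not from the thread) that solves the boundary-value recurrence exactly with rational arithmetic and compares the answer against Claudiu's $P_x = x/7$ and against the martingale closed form $(1-C^x)/(1-C^y)$ with $C = (1-p)/p$:

```python
from fractions import Fraction

def ruin_prob(x, y, p):
    """Probability of reaching y dollars before 0, starting from x dollars,
    when each step gains $1 with probability p and loses $1 otherwise.
    Solves P_n = (1-p)*P_{n-1} + p*P_{n+1} exactly with Fractions."""
    p = Fraction(p)
    # Build an unnormalised solution Q with Q_0 = 0, Q_1 = 1; rearranging
    # the recurrence gives Q_{n+1} = (Q_n - (1-p)*Q_{n-1}) / p.
    qs = [Fraction(0), Fraction(1)]
    for _ in range(2, y + 1):
        qs.append((qs[-1] - (1 - p) * qs[-2]) / p)
    # Rescale so that the boundary condition P_y = 1 holds.
    return qs[x] / qs[y]

def closed_form(x, y, p):
    """Martingale answer from Douglas Zare's post: (1 - C^x) / (1 - C^y)."""
    c = (1 - Fraction(p)) / Fraction(p)
    return (1 - c**x) / (1 - c**y)

# Fair coin: starting with $3, playing to $7, matches Claudiu's 3/7 ...
assert ruin_prob(3, 7, Fraction(1, 2)) == Fraction(3, 7)
# ... and P_x = x/7 holds for every starting stake.
assert all(ruin_prob(x, 7, Fraction(1, 2)) == Fraction(x, 7) for x in range(8))
# Biased coin: the exact recurrence solution matches the closed form.
assert ruin_prob(3, 7, Fraction(3, 5)) == closed_form(3, 7, Fraction(3, 5))
```

Because everything is done with `Fraction`, the comparison with the closed form is exact rather than floating-point approximate.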
https://ibex.readthedocs.io/en/latest/api_ibex_sklearn_manifold_mds.html
# MDS

class ibex.sklearn.manifold.MDS(n_components=2, metric=True, n_init=4, max_iter=300, verbose=0, eps=0.001, n_jobs=1, random_state=None, dissimilarity='euclidean')

Bases: sklearn.manifold.mds.MDS, ibex._base.FrameMixin

Note: the documentation following is of the class wrapped by this class. There are some changes, in particular:

Multidimensional scaling. Read more in the scikit-learn User Guide.

Parameters:

- n_components : int, optional, default: 2. Number of dimensions in which to immerse the dissimilarities.
- metric : boolean, optional, default: True. If True, perform metric MDS; otherwise, perform nonmetric MDS.
- n_init : int, optional, default: 4. Number of times the SMACOF algorithm will be run with different initializations. The final result will be the best output of the runs, determined by the run with the smallest final stress.
- max_iter : int, optional, default: 300. Maximum number of iterations of the SMACOF algorithm for a single run.
- verbose : int, optional, default: 0. Level of verbosity.
- eps : float, optional, default: 1e-3. Relative tolerance with respect to stress at which to declare convergence.
- n_jobs : int, optional, default: 1. The number of jobs to use for the computation. If multiple initializations are used (n_init), each run of the algorithm is computed in parallel. If -1, all CPUs are used; if 1, no parallel computing code is used at all, which is useful for debugging. For n_jobs below -1, (n_cpus + 1 + n_jobs) CPUs are used; thus for n_jobs = -2, all CPUs but one are used.
- random_state : int, RandomState instance or None, optional, default: None. The generator used to initialize the centers. If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- dissimilarity : 'euclidean' | 'precomputed', optional, default: 'euclidean'. Dissimilarity measure to use:
  - 'euclidean': pairwise Euclidean distances between points in the dataset.
  - 'precomputed': pre-computed dissimilarities are passed directly to fit and fit_transform.

Attributes:

- embedding_ : array-like, shape (n_samples, n_components). Stores the position of the dataset in the embedding space.
- stress_ : float. The final value of the stress (sum of squared distances of the disparities and the distances for all constrained points).

References:

- "Modern Multidimensional Scaling: Theory and Applications", Borg, I.; Groenen, P. Springer Series in Statistics (1997)
- "Nonmetric multidimensional scaling: a numerical method", Kruskal, J. Psychometrika, 29 (1964)
- "Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis", Kruskal, J. Psychometrika, 29 (1964)

## fit(X, y=None, init=None)

Note: the documentation following is of the method wrapped by this class. There are some changes, in particular:

Computes the position of the points in the embedding space.

- X : array, shape (n_samples, n_features) or (n_samples, n_samples). Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix.
- y : ignored.
- init : ndarray, shape (n_samples, n_components), optional, default: None. Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array.

## fit_transform(X, y=None, init=None)

Note: the documentation following is of the method wrapped by this class. There are some changes, in particular:

Fits the data from X and returns the embedded coordinates.

- X : array, shape (n_samples, n_features) or (n_samples, n_samples). Input data. If dissimilarity=='precomputed', the input should be the dissimilarity matrix.
- y : ignored.
- init : ndarray, shape (n_samples, n_components), optional, default: None. Starting configuration of the embedding to initialize the SMACOF algorithm. By default, the algorithm is initialized with a randomly chosen array.
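Since the reference above is terse, here is a minimal usage sketch of the underlying scikit-learn estimator (the ibex class wraps it to work with pandas structures). The toy data is made up for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Toy data: four points in 3-D, to be embedded into 2-D.
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [5.0, 5.0, 5.0]])

# n_components=2 and dissimilarity='euclidean' are the defaults shown above;
# random_state pins the SMACOF initializations so runs are repeatable.
mds = MDS(n_components=2, dissimilarity='euclidean', random_state=0)
X2 = mds.fit_transform(X)

assert X2.shape == (4, 2)   # one 2-D coordinate per input sample
assert mds.stress_ >= 0.0   # final stress of the best SMACOF run
```

With `dissimilarity='precomputed'` you would instead pass an (n_samples, n_samples) dissimilarity matrix to `fit_transform`.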
https://math.stackexchange.com/questions/1838800/is-there-an-accepted-notation-for-the-monoid-of-linear-polynomials
# Is there an accepted notation for the monoid of linear polynomials?

Is there an accepted notation for the monoid of linear polynomials (with addition as the operation) with coefficients from some ring $R$? Like $2p+3$, where $p$ and the identity generate the monoid over the integers? It's not $Z[p]$, since that would imply a ring where higher-order monomials can be present.

And what about multiple indeterminates? Like $2a+3b$, where I don't allow $ab$ in the monoid, as multiplication is not defined.

The examples I've given would be groups if I allowed any integer coefficient, but the case I'm interested in only allows non-negative integer coefficients on the indeterminates, while any integer is allowed in the constant term. So $p-50$ would be in the monoid, but $-p$ would not.

The application I have in mind is a monoid ring that allows linear polynomials in exponents, so I'm working with things like $x^{p-50}$, but I don't do $x^{p^2}$.

Comments:

- What's the identity? Is it $0$? That wouldn't be a linear polynomial. (MCT, Jun 25 '16)
- @Soke, yes, the identity is $0$. So, no, it's not exactly a linear polynomial. Also, the monoid is ordered and I don't allow anything less than 0, so $-1$ is out. There's probably no notation that exactly fits my monoid, but I was hoping for something close. (Jun 25 '16)
- Do you need the multiplication of the ring structure? It looks like your monoid is the set of polynomials of the form $aX + b$, under addition defined by $(aX + b) + (a'X+b') = (a+a')X + (b+b')$. If this is the case, then your monoid is simply $(R,+) \times (R,+)$. (Jun 27 '16)
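The structure being asked about, pairs $(a, b)$ standing for $ap + b$ with $a$ a non-negative integer and $b$ any integer under componentwise addition, is easy to sketch in code. The class name and representation below are mine, not standard notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LinExp:
    """Formal expression a*p + b, a monoid under addition.

    Models the exponent monoid from the question: p - 50 is allowed
    (a=1, b=-50), but -p (which would need a=-1) is rejected.
    """
    a: int  # coefficient of the indeterminate p; must be non-negative
    b: int  # integer constant term

    def __post_init__(self):
        if self.a < 0:
            raise ValueError("coefficient of p must be non-negative")

    def __add__(self, other):
        # Componentwise: (a*p + b) + (a'*p + b') = (a+a')*p + (b+b')
        return LinExp(self.a + other.a, self.b + other.b)

ZERO = LinExp(0, 0)  # the identity element

e = LinExp(1, -50)             # p - 50, allowed
assert e + ZERO == e           # identity law
assert e + e == LinExp(2, -100)
```

Note there is no multiplication defined, matching the question: the structure is closed under addition only, and the sign restriction on `a` is exactly what prevents it from being a group.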
http://www.stretchandsquash.co.uk/?p=80
## Equations...

Turns out that my web provider now provides a SQL and PHP server with my base package, and this was all the incentive I needed to give a self-hosted WordPress blog a whirl. To get a basic blog system up and running literally took me 5 minutes with the free WordPress software, and using the export function I was quickly able to copy the few posts over from my free WordPress blog.

Now that I have my own hosted service, this presents me with several advantages over the free WordPress accounts, but there are two in particular that are attractive to me. The first one is that I can now embed proper equations into a blog post using LaTeX and MathML by linking it into my equation editor, MathType. By adding the 'LaTeX for WordPress' plugin to the hosted WordPress, I can now copy equations straight from MathType and paste them directly into my blog by following this procedure:

1. Open MathType and prepare your equation.
2. Go to MathType -> Preferences -> Cut and copy preferences; then select MathML or TeX, then LaTeX 2.09 and later.
3. Highlight the equation in MathType 6.7d, then right click and select copy, or press ⌘ + C.
4. Find the position in your blog post where you want the equation to appear, then paste (⌘ + V).

Following this means that I can embed equations pretty easily. The only downside I've found is that if you've colour-coded your equation in MathType, none of this formatting will carry over when pasting, but the equations should work and be visible in any browser (certainly the main three, and on the iPhones and Android devices that I've worked on so far). With minimal tweaking and a little trial and error with the LaTeX code, I was able to apply some colour tags to get the equation to look the same as it does in my lecture notes.

This might initially appear to be quite a minor thing, but I've found that colour coding my notes like this really helps the students follow the equations when I'm talking them through various parts, so I was keen to keep the high-quality formatting on my blog. One thing I have noticed, though, is that the equations appear much sharper on mobile devices and Apple machines, whereas on Windows machines they appear slightly pixelated.

As for the second advantage: I can now embed Wolfram Mathematica CDF files into my blog directly, which will help me share some of my examples with anyone interested in my research. I'll write another blog post on this over the next few days...
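For readers wanting to reproduce the colour-tagging trick by hand, standard LaTeX colour commands (via the `xcolor` package, or the colour support built into most WordPress LaTeX plugins) look like the following. The particular equation and colours here are illustrative, not taken from the post:

```latex
% Requires \usepackage{xcolor} in a full LaTeX document; WordPress LaTeX
% plugins typically accept \color inline without a preamble.
{\color{red} a}\,x^{2} + {\color{blue} b}\,x + {\color{green} c} = 0
```

Each `{\color{...} ...}` group colours only its own term, which is what makes it possible to highlight individual parts of an equation when walking students through it.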
https://khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VkPipelineFragmentShadingRateStateCreateInfoKHR.html
## C Specification

The VkPipelineFragmentShadingRateStateCreateInfoKHR structure is defined as:

```c
// Provided by VK_KHR_fragment_shading_rate
typedef struct VkPipelineFragmentShadingRateStateCreateInfoKHR {
    VkStructureType                       sType;
    const void*                           pNext;
    VkExtent2D                            fragmentSize;
    VkFragmentShadingRateCombinerOpKHR    combinerOps[2];
} VkPipelineFragmentShadingRateStateCreateInfoKHR;
```

## Members

- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- fragmentSize specifies a VkExtent2D structure containing the fragment size used to define the pipeline fragment shading rate for drawing commands using this pipeline.
- combinerOps specifies VkFragmentShadingRateCombinerOpKHR values determining how the pipeline, primitive, and attachment shading rates are combined for fragments generated by drawing commands using the created pipeline.

## Description

If the pNext chain of VkGraphicsPipelineCreateInfo includes a VkPipelineFragmentShadingRateStateCreateInfoKHR structure, then that structure includes parameters controlling the pipeline fragment shading rate.

If this structure is not present, fragmentSize is considered to be equal to (1,1), and both elements of combinerOps are considered to be equal to VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR.

Valid Usage (Implicit)

- sType must be VK_STRUCTURE_TYPE_PIPELINE_FRAGMENT_SHADING_RATE_STATE_CREATE_INFO_KHR
- Any given element of combinerOps must be a valid VkFragmentShadingRateCombinerOpKHR value
https://www.catalyzex.com/paper/arxiv:1803.06561
## AutoML from Service Provider's Perspective: Multi-device, Multi-tenant Model Selection with GP-EI

Oct 28, 2018

AutoML has become a popular service that is provided by most leading cloud service providers today. In this paper, we focus on the AutoML problem from the *service provider's perspective*, motivated by the following practical consideration: When an AutoML service needs to serve *multiple users* with *multiple devices* at the same time, how can we allocate these devices to users in an efficient way? We focus on GP-EI, one of the most popular algorithms for automatic model selection and hyperparameter tuning, used by systems such as Google Vizier. The technical contribution of this paper is the first multi-device, multi-tenant algorithm for GP-EI that is aware of *multiple* computation devices and multiple users sharing the same set of computation devices. Theoretically, given $N$ users and $M$ devices, we obtain a regret bound of $O((\mathbf{MIU}(T,K) + M)\frac{N^2}{M})$, where $\mathbf{MIU}(T,K)$ refers to the maximal incremental uncertainty up to time $T$ for the covariance matrix $K$. Empirically, we evaluate our algorithm on two applications of automatic model selection, and show that our algorithm significantly outperforms the strategy of serving users independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices.
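For context, the Expected Improvement (EI) acquisition at the heart of GP-EI has a well-known closed form under a Gaussian posterior. The stdlib-only sketch below (mine, not from the paper) computes $\mathrm{EI} = (\mu - f^*)\,\Phi(z) + \sigma\,\phi(z)$ with $z = (\mu - f^*)/\sigma$ for a maximization problem, where $\mu, \sigma$ are the GP posterior mean and standard deviation at a candidate point and $f^*$ is the best value observed so far:

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form Expected Improvement for maximization, given the GP
    posterior mean mu and standard deviation sigma at a point, and the
    incumbent value best."""
    if sigma == 0.0:
        return max(mu - best, 0.0)  # degenerate posterior: no uncertainty
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # Phi(z)
    return (mu - best) * cdf + sigma * pdf

# EI is non-negative and grows with posterior uncertainty, which is what
# drives the exploration/exploitation trade-off in GP-EI.
assert expected_improvement(0.0, 1.0, 0.0) > 0.0
assert expected_improvement(0.0, 2.0, 0.0) > expected_improvement(0.0, 1.0, 0.0)
assert expected_improvement(1.0, 0.0, 2.0) == 0.0
```

A GP-EI loop evaluates this acquisition over candidate points and runs the maximizer next; the paper's contribution is how to schedule such evaluations across many users and devices, which this sketch does not attempt to show.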
http://tuttosulromanzo.it/aodo/circumference-of-a-circle-questions-and-answers-pdf.html
On the following pages are multiple-choice questions for the Grade 8 Practice Test, a practice opportunity for the Nebraska State Accountability-Mathematics (NeSA-M). Note point A is labeled on both sides, because when the shape is refolded into a pipe, these two points will form the. 4 in 20) 73. Work out the circumference of this circle. , C ÷ d = π. (1 Mark) 2. cans and string, dynamic geometry software) and strategies. Below are our grade 5 geometry worksheets on determining the circumference of circles. The diameter is the length of any straight line cutting a circle in half (and passing through the center point). What area of the garden can this sprinkler irriguate? round. Displacement indicates distance and direction. What is the area of a circle with a circumference of. 42,2π meters. 1) Radius = Diameter = Area = Circumference = 2) Radius = Diameter = Area =. Tricky C Questions and Answers PDF Download. You can only make ONE correction per question. Work out its circumference, using. (i) Line segment joining the centre to any point on the circle is a radius of the circle. Solution: We have the coordinates of the center of the circle. Lengths of the trains are 200 and 300m. Showing top 8 worksheets in the category - Circumference And Area Of Circles. Answer 8: (a) Violin Spider (because of a violin shaped mark on its head) Answer 9: (a) Quartz (A battery causes a quartz crystal to vibrate at exactly 32,768 times per second. you can get here Deloitte placement papers pdf with answers download, Deloitte question paper solutions, Deloitte placement papers, we hope this will be your search if you are. p ) Circles 2 5. Arc — a portion of the circumference of a circle. Students will also use their expressions skills to write numerical expressions that can be used to find surface area and volume of three-dimensional figures. The radius (plural: radii) is the length from the middle of a circle to any point on the edge of a circle. 
It is cut and made into the shape of a circle. ms red pe 0801 indd 317 2 2 15 8 51 20 AM,8 1 Lesson. Angle OPT = 32° Work out the size of the angle marked x. Essential Questions Vocabulary Resources Assessment MGSE7. Also solutions and explanations are included. 2cherries are 4 = 1cherry = 4/2 = 2. Stop searching. radius of a circle with a circumference of 28π 6. 4 ft c i Circles c u m fe r e n e C Diameter Radius r â 12 22. Choose the correct answer below. 3: Exploring Circumference and Diameter ; Lesson 5. Find the Circumference of the circle. For instance, the length of the shorter missing side is 6 because if you add it to the 3 on the left, the result should be the 9 on the right. 8 millimeters, then what is the radius?. Lesson 2: Area of the circle. radians (= 2. It is measured in cm, m. Lesson 1: Circumference of the circle and Pi. The radius of the circle is 1·5cm. The following terms are regularly used when referring to circles: Arc — a portion of the circumference of a circle. One side of the equilateral triangle is a diameter of the circle. Geometry worksheets: Area and circumference of a circle. Bar Graph Worksheet #1 Library Visits 0 100 200 300 400 Monday Tuesday Wednesday Thursday Friday Saturday Days of the week Number of visitors 1. Writing reinforces Maths learnt. Parallelogram whose adjacent sides measure 20 units and 10 units. Exercises 9 11 on page 321,Section 8 1 Circles and Circumference 317. Question 1 Find the circumference and Area of the circle whose radius are gives below (a) 21 cm (b) 6. You must circle the letter of the correct answer. predict the amount of force needed to move a resistance 5. Circumference Of Circles Easy Worksheet Ks3"> Full Template. SOP is a straight line. Example: (A) is your first answer and (D) is your final answer. Which of the following dos NOT have a circumference? a. “What should be the minimum marks a candidate should get to qualify Civil Services Aptitude Test…”. 
Math Busters Word Problems reproducible worksheets are designed to help teachers, parents, and tutors use the books from the Math Busters Word Problems series. The measure of surface area is always squared or cubed (select one). Area and Circumference of Circles Pi Day Coloring ActivityThis is a fun way for students to practice finding both area and circumference of circles. Using the given information, they will find the other two measures. Get help with your Circle homework. Preview and details. mathswatch answers area of a triangle / mathswatch answers area and perimeter / mathswatch answers area of a circle / mathswatch answers area and volume / mathswatch area answers / mathswatch answers volume and surface area / vocab test level d / respuestas correctas del test de raven escala coloreada / o que significa exame de sangue cea / wileyplus accounting homework answers chapter 3. • calculate the area of a square using the circumference of the circle as the perimeter of the square • use scales to find the actual distance between two points, on a grid • perform conversions using a given scale • measure or calculate the bearing of given positions. Given the radius of circle A is 4 cm and the radius of circle Z is 14 cm and the distance between the two circles is 8 cm. For a given circle, think ofa radius and a diameter as segments andthe radius andthe diameter as lengths. Questions labelled with an asterisk (*) are ones where the quality of your. --> 1 - 4 * (1/2)^3 = 1/2 anyone see a problem with this?. Essential Geometry Practice: Todd Orelli, Teacher Leader, NYSED Office of Adult Career and Continuing Education Services 7 9. The results of the one way ANOVA indicate that there is a difference in the perceived stress levels amongst the age groups [F(4, 428)=3. Mathematics (Non-calculator Paper) 10 Practice Paper Style Questions Topic: Circle Theorems (Higher Tier) Answer all questions. You can choose to include answers and step-by-step solutions. 
The answer I believe can be stated as follows: Mathematics is bodybuilding for your mind. units Answer: (i) 2πr units. 42,2π meters. You may use your calculator to convert into decimal if necessary to answer the questions. Read problem and answer following questions 1. Some of the questions are a little bit easier where they just have to find the diameter or radius. There are several cases for the. Give a reason for your answer. Trigonometry Review with the Unit Circle: All the trig. Give your answer to 3. The value of $$\large \pi$$ is 3. THEOREM 2 The angle subtended by an arc at the centre of a circle is double the size of the angle subtended by the same arc at the circumference (on the same side of the chord as the centre). A swimming pool is 24' long, 20' wide, 3' deep at the shallow end, and 10' deep at the deep end. Spatial Puzzles Puzzles Questions and Answers with explanation for placement, interview preparations, entrance test. This question paper consists of 10 questions. Download the Maths 2020 Syllabus and Course Policies PDF. SR is a tangent to the circle at S. geometrical reason for each of your answers. Ae honk qscugl blgce. Practice: Area of parts of circles. A circle has a radius of 6 cm. Be sure to write your team name, list the members of your team, and write out the Fermi question you are investigating. Answer: 500 17. Circle Theorems GCSE Higher KS4 with Answers/Solutions NOTE: You must give reasons for any answers provided. 5cm2 (to 1 d. It is the equivalent of 'perimeter' for a circle. It is equally easy to see that the answers will be the same in elliptic geometry. 5K subscribers. clocks -> important formulae The face or dial of a watch is a circle whose circumference is divided into 60 equal parts, called minute spaces. Pages 2–11. How to find displacement In physics, you find displacement by calculating the distance between an object’s initial …. Angle QRS = 40° and angle SOQ = 80° Prove that triangle QSR is isosceles. 
Circumference of a circle questions: Click once in an ANSWER BOX and type in your answer; then click ENTER. Then, 2 π r = 176. Answer: The circumference of a circle is the edge or rim of a circle itself. Students are provided the radius or the diameter in customary units (worksheets 1-3) or metric units (worksheets 4-6). Recent questions and answers Most popular tags ssc maths gk cgl general-knowldge general-knowledge profit-loss railways history ibps english biology geography current-affairs cgl. 6 2 +6 2 = a 2 where a is the base of the triangle (and the side of the square). What is the radius of a circle having a diameter of 1. To be used after both area and Circumference has been taught as a consolidation lesson or as a revision lesson towards exam time. The similarity of any two circles is the basis of the definition of π, the ratio of the circumference and the diameter of any circle. The pond has a radius of 6 m. Knowing how to calculate the area and circumference of circles is an important aspect of maths, that is why we have also provided the formulas so each child will soon be an ace at all things circles!. The points P and Q are on the circumference of the circle. Circumference Of A Circle Worksheets Free Printable"> Basic Circle Geometry Practice Questions And Answers"> Geom H Worksheets 5 12 11 10r08 Pdf">. You must explain why your answer is to an appropriate degree of accuracy. Area of a circle is A = r2 where r is the radius. 8 millimeters, then what is the radius?. Answers will vary. Answer: The circumference of a circle is the edge or rim of a circle itself. ! C=2"r) 3. Stop searching. Try to pass 2 skills a day, and it is good to try earlier years. Finding the Area and Circumference of a Circle Math www. • Measure parts and describe features of a circle. Inscribed angle theorem: ∠ABC=∠APC \angle ABC = \angle APC ∠ABC=∠APC. Questions About Area & Perimeter. Maths Course Syllabi. Cross through any work you do not want to be marked. 
May 5, 2014 - Circumference of a Circle worksheets | 7th Grade Standard Met: Circumference Stay safe and healthy. Circumference • Through investigation determine the relationship for calculating the 8m35 circumference of a circle (i. 1) Radius = Diameter = Area = Circumference = 2) Radius = Diameter = Area =. r d C C = ππd = 2 r hhs_geo_pe_1101. The speed of the frisbee is 10 m/s. Find the length of the common tangent. Find the area of the shaded region. Work out the area of the circle. in terms of ! ") and BOX this answer 4. 3 A, B and C are points on the circumference of a circle with centre O AC is a diameter of the circle. SR is a tangent to the circle at S. Tell students that this string represents the circumference of the circle. Circumference = 2 × π × radius. answer you choose. Read "The Circle's Measure" to page 13. 4cm The radius of a circle is 5. Perimeter is the distance around a polygon with straight sides. Give the answer correct to 1 decimal point (DP. Area of a circle is A = r2 where r is the radius. 156π meters. What is the approximate length of the radius? A. Nursing exam essay questions. Pi multiplied by the diameter b. The radius of the circle is 1·5cm. Now just as you don’t walk into a gym and start throwing all the weights onto. Crack the code: [042] is correct. com Name: Answers 1. rim on a basketball hoop c. Times Table Multiplication and Division Challange Integrals: Questions and Solutions Integration by Parts Analytic Geometry - Circle Analytic Geometry - Parabola. There are 24 pages in this question. Given the radius of circle A is 4 cm and the radius of circle Z is 14 cm and the distance between the two circles is 8 cm. Circle arc length. This worksheet also includes compound shapes made with circles. Essay on seaside walk. All items are to be completed by all students. Learning the ins and outs of circles is an important part of math for your child or student. ) Angles in a triangle 180 - 75 - 75 = 30o. 
Practice: circumference and angle questions

- Find the circumference of a circle with a radius of 5 inches: C = 2πr = 10π ≈ 31.4 inches.
- A water sprinkler can spray water at a maximum distance of 12 m in all directions; the boundary of the watered region is a circle of circumference 2π(12) = 24π ≈ 75.4 m.
- The angle between the 3 and the 6 on a clock face is one quarter of a full turn, i.e. 90°.
- AT and BT are tangents from T to a circle with centre O. Each tangent meets the radius drawn to its point of contact at 90°, so in quadrilateral OATB, ∠ATB = 360° − 90° − 90° − ∠AOB. If ∠AOB = 140°, then ∠ATB = 40°.
- OB is a radius and meets the tangent at 90°. If the angle between the chord and the radius accounts for 15°, then angle ABC = 90° − 15° = 75°; by the isosceles-triangle property the other base angle is also 75°.
- A circle inscribed in a dodecagon with an apothem 13 m long has radius 13 m, so its area is π(13)² = 169π m².
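The angle relationships used in these questions can be sanity-checked numerically on the unit circle; the point placements below are arbitrary illustrative choices, not from the text:

```python
import math

def angle_deg(vertex, p, q):
    """Angle at `vertex` between the rays to p and q, in degrees."""
    ax, ay = p[0] - vertex[0], p[1] - vertex[1]
    bx, by = q[0] - vertex[0], q[1] - vertex[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def on_unit_circle(deg):
    t = math.radians(deg)
    return (math.cos(t), math.sin(t))

O = (0.0, 0.0)                                   # centre
A, C = on_unit_circle(20), on_unit_circle(100)   # ends of an arc
B = on_unit_circle(200)                          # a point on the major arc

central = angle_deg(O, A, C)     # angle subtended at the centre
inscribed = angle_deg(B, A, C)   # angle subtended at the circumference
print(round(central, 6), round(inscribed, 6))
```

The printed values are 80.0 and 40.0 (up to rounding), confirming that the angle at the centre is twice the angle at the circumference subtended by the same arc.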
Area of a Circle

Area of a circle: A = πr², where r is the radius. Dividing the circumference by the diameter gives π: C ÷ d = π.

Worked examples:
- The area of a circle is 616 cm². Taking π ≈ 22/7: r² = 616 × 7 ÷ 22 = 196, so r = 14 cm.
- Find the radius of a circle whose circumference is 22 cm: 2πr = 22, so r = 22 × 7 ÷ 44 = 3.5 cm.
- Find the circumference of a circle with a diameter of 8 cm: C = πd = 8π ≈ 25.1 cm.
- Estimate: Circumference = 2 × π × radius ≈ 2 × 3 × 21 = 126, using 3 as an estimated value for π.
- The factors multiplied to form perfect squares are called square roots; 25 has two square roots, 5 and −5.

The area of the largest triangle inscribed in a semicircle of radius r is r²: take the diameter (length 2r) as base and the highest point of the arc (height r), giving area ½ × 2r × r = r².
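The two worked radius calculations (area 616 cm² and circumference 22 cm, both with π ≈ 22/7) can be reproduced exactly with rational arithmetic:

```python
import math
from fractions import Fraction

PI_APPROX = Fraction(22, 7)   # the approximation the worksheet uses

# Area 616 cm^2:  r^2 = A / pi = 616 * 7 / 22 = 196, so r = 14 cm.
r_sq = Fraction(616) / PI_APPROX
r_from_area = math.isqrt(r_sq.numerator)   # r_sq reduces to the integer 196

# Circumference 22 cm:  r = C / (2*pi) = 22 * 7 / 44 = 7/2 cm.
r_from_circ = Fraction(22) / (2 * PI_APPROX)

print(r_from_area, float(r_from_circ))  # 14 3.5
```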
Equations of Circles and Related Questions

The words radius and diameter are used for lengths as well as for segments; the diameter of a circle is twice as long as the radius.

A sector of a circle is an area bounded by two radii and an arc; a segment of a circle is the region between a chord (or arc) and the circle.

Worked example: write the equation of the circle whose centre is (6, −6) and whose circumference is 2π√62. Since C = 2πr, the radius is √62, so the equation is (x − 6)² + (y + 6)² = 62.

Worked example: the equation of a circle with radius length 6 is x² + y² − 2kx + 4y − 7 = 0. (i) Completing the square gives (x − k)² + (y + 2)² = k² + 11, so the centre is (k, −2) and the radius length in terms of k is √(k² + 11); setting √(k² + 11) = 6 gives k² = 25, so k = ±5.

The surface area of a sphere is 4πr²; to find a hemisphere's curved surface area, divide this formula by 2, which gives 4πr² ÷ 2 = 2πr².

The side of a square inscribed in a circle of radius r is √2·r.

Programming exercise: write an application that inputs from the user the radius of a circle as an integer and prints the circle's diameter, circumference and area using the floating-point value 3.14159 for π.
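The programming exercise above (read an integer radius; print diameter, circumference and area with π = 3.14159) might be sketched in Python as follows; a fixed radius replaces interactive input so the snippet is self-contained:

```python
PI = 3.14159   # the approximation specified by the exercise

def circle_stats(radius):
    """Return (diameter, circumference, area) for an integer radius."""
    diameter = 2 * radius
    circumference = PI * diameter
    area = PI * radius ** 2
    return diameter, circumference, area

d, c, a = circle_stats(5)   # a fixed radius instead of user input
print(f"diameter={d} circumference={c:.5f} area={a:.5f}")
```

For radius 5 this prints a diameter of 10, a circumference of about 31.41590, and an area of about 78.53975.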
Circle Theorems

- The angle subtended by an arc at the centre of a circle is double the angle subtended by the same arc at the circumference (on the same side of the chord as the centre): ∠PRQ = ½∠POQ.
- Angles in the same segment of a circle are equal: since ∠AOD = 2∠ABD and ∠AOD = 2∠ACD (angle at centre twice angle at circumference), ∠ABD = ∠ACD.
- A tangent to a circle is perpendicular to the radius drawn from the point of contact; hence the tangents drawn at the ends of a diameter are both perpendicular to that diameter and therefore parallel.
- A chord divides a circle into two segments.
- Angles can be measured in two ways, in degrees or in radians. Pi is represented by the symbol π and is a number approximately equal to 3.14.

Worked examples:
- The circumference of a circle of radius 6 is C = 2π(6) = 12π.
- A rectangle with a height of 6 and a base of 14 has perimeter 2(6 + 14) = 40 and area 6 × 14 = 84.
- A sphere of radius 6 cm is poured into a cylinder of radius 4 cm containing water. The sphere's volume, (4/3)π(6)³ = 288π cm³, equals the volume of displaced water, π(4)²h, so the rise in height is h = 288π ÷ 16π = 18 cm.
- A sprinkler that waters up to 12 m in all directions covers a region whose circumference is 2π(12) = 24π m.
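The completing-the-square step for a circle given in general form x² + y² + Dx + Ey + F = 0 (as in the worksheet's x² + y² − 2kx + 4y − 7 = 0 example) yields centre (−D/2, −E/2) and radius √(D²/4 + E²/4 − F); this can be checked numerically:

```python
import math

def centre_radius(D, E, F):
    """Centre and radius of x^2 + y^2 + D*x + E*y + F = 0."""
    cx, cy = -D / 2, -E / 2
    return (cx, cy), math.sqrt(cx * cx + cy * cy - F)

# k = 5 is one of the two values (k = +-5) giving radius length 6
# in x^2 + y^2 - 2k*x + 4y - 7 = 0.
k = 5
(cx, cy), r = centre_radius(-2 * k, 4, -7)
print((cx, cy), r)   # (5.0, -2.0) 6.0
```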
Summary notes:
- Circumference — the perimeter or boundary line of a circle. A circle forms a curve with a definite length, called the circumference, and it encloses a definite area.
- A tangent to a circle intersects it in exactly one point.
- The circumference C of a circle is equal to its diameter d times π, or 2 times its radius r times π: C = πd = 2πr.
- A cube has 12 edges.
- Note: some questions will ask you to "leave your answer in terms of π"; this means the answer should be in the form "something × π". For example, the circumference of a circle of radius 21 would simply be written as 42π.
https://www.nag.com/numeric/cl/nagdoc_fl26.2/adhtml/e01/e01eb_ad_f.html
# NAG AD Library Routine Document

## e01eb_a1w_f (dim2_triang_bary_eval_a1w)

Note: _a1w_ denotes that first order adjoints are computed in working precision; this has the corresponding argument type nagad_a1w_w_rtype. Further implementations, for example for higher order differentiation or using the tangent linear approach, may become available at later marks of the NAG AD Library. The method of codifying AD implementations in routine name and corresponding argument types is described in the NAG AD Library Introduction.

## 1Purpose

e01eb_a1w_f is the adjoint version of the primal routine e01ebf.

## 2Specification

Fortran Interface

Subroutine e01eb_a1w_f (ad_handle, m, n, x, y, f, triang, px, py, pf, ifail)

Integer, Intent (In) :: m, n, triang(7*n)
Integer, Intent (Inout) :: ifail
Type (nagad_a1w_w_rtype), Intent (In) :: x(n), y(n), f(n), px(m), py(m)
Type (nagad_a1w_w_rtype), Intent (Out) :: pf(m)
Type (c_ptr), Intent (In) :: ad_handle

## 3Description

e01ebf performs barycentric interpolation, at a given set of points, using a set of function values on a scattered grid and a triangulation of that grid computed by e01eaf. For further information see Section 3 in the documentation for e01ebf.

## 4References

None.

## 5Arguments

e01eb_a1w_f provides access to all the arguments available in the primal routine. There are also additional arguments specific to AD. A tooltip popup for each argument can be found by hovering over the argument name in Section 2, and a summary of the arguments is provided below:

• ad_handle – a handle to the AD configuration data object, as created by x10aa_a1w_f.
• m – $m$, the number of points to interpolate.
• n – $n$, the number of data points.
• x – the $x$ coordinates of the $r$th data point, $\left(x_r, y_r\right)$, for $r = 1, 2, \dots, n$.
• y – the $y$ coordinates of the $r$th data point, $\left(x_r, y_r\right)$, for $r = 1, 2, \dots, n$.
• f – the function values ${f}_{\mathit{r}}$ at $\left({x}_{\mathit{r}},{y}_{\mathit{r}}\right)$, for $\mathit{r}=\mathrm{1},2, \dots ,n$.
• triang – the triangulation computed by a previous call of e01eaf.
• px – the coordinates $\left({\mathit{px}}_{\mathit{i}},{\mathit{py}}_{\mathit{i}}\right)$, for $\mathit{i}=\mathrm{1},2, \dots ,m$, at which interpolated function values are sought.
• py – the coordinates $\left({\mathit{px}}_{\mathit{i}},{\mathit{py}}_{\mathit{i}}\right)$, for $\mathit{i}=\mathrm{1},2, \dots ,m$, at which interpolated function values are sought.
• pf – on exit: the interpolated values $F\left({\mathit{px}}_{\mathit{i}},{\mathit{py}}_{\mathit{i}}\right)$, for $\mathit{i}=\mathrm{1},2, \dots ,m$.
• ifail – on entry: ifail must be set to $\mathrm{0}$, $-\mathrm{1}\text{ or }\mathrm{1}$. on exit: ifail = 0 unless the routine detects an error or a warning has been flagged (see Section 6).

## 6Error Indicators and Warnings

e01eb_a1w_f preserves all error codes from e01ebf and in addition can return:

$\mathbf{ifail}=-89$ See Section 5.2 in the NAG AD Library Introduction for further information.

$\mathbf{ifail}=-899$ Dynamic memory allocation failed for AD. See Section 5.1 in the NAG AD Library Introduction for further information.

## 7Accuracy

Not applicable.

## 8Parallelism and Performance

e01eb_a1w_f is not threaded in any implementation.
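As a rough illustration of what barycentric interpolation computes on one triangle of such a triangulation, here is a plain-Python sketch (this is not NAG code, and the function name is invented for the example): the barycentric weights of a query point are obtained from a small linear system, and the interpolated value is the weighted sum of the vertex values.

```python
import numpy as np

def barycentric_interpolate(tri, f, p):
    """Linear barycentric interpolation of vertex values f over a single
    triangle tri (3x2 array of vertex coordinates), evaluated at point p."""
    # The barycentric weights w satisfy: w @ tri == p and w.sum() == 1.
    A = np.vstack([tri.T, np.ones(3)])        # 3x3 system for the weights
    w = np.linalg.solve(A, np.append(p, 1.0))
    return w @ f

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
f = np.array([1.0, 3.0, 5.0])                 # function values at the vertices
# At (0.25, 0.25) the weights are (0.5, 0.25, 0.25):
print(barycentric_interpolate(tri, f, np.array([0.25, 0.25])))  # 2.5
```

The library routine presumably performs this evaluation over the whole triangulation produced by e01eaf (after locating the triangle containing each query point), with first order adjoints propagated through the computation in the _a1w_ variant.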
https://quantumcomputing.stackexchange.com/questions/21561/swap-test-and-density-matrix-distinguishability
# SWAP test and density matrix distinguishability

Let us either be given the density matrix $$$$|\psi\rangle\langle \psi| \otimes |\psi\rangle\langle \psi| ,$$$$ for an $$n$$-qubit pure state $$|\psi \rangle$$, or the maximally mixed density matrix $$$$\frac{\mathbb{I}}{{2^{2n}}}.$$$$ I am trying to analyze the following algorithm to distinguish between these two cases. We plug the $$2n$$-qubit state we are given into the circuit of a SWAP test. Then, following the recipe given in the link provided, if the first qubit is $$0$$, we say that we were given two copies of $$|\psi \rangle$$, and if it is $$1$$, we say we were given the maximally mixed state over $$2n$$ qubits. What is the success probability of this algorithm? Is it the optimal distinguisher for these two states? The optimal measurement ought to be an orthogonal one (as the optimal Helstrom measurement is an orthogonal measurement). How do I see that the SWAP test implements an orthogonal measurement?

First of all, let us compute the probability of success of this algorithm. If you are given the state $$|\psi\rangle\langle\psi|\otimes|\psi\rangle\langle\psi|$$, the SWAP test will return the state $$|0\rangle$$ with probability $$1$$, which is the probability of success of the algorithm in this case. Let us now consider the second case.
The initial state is: $$\rho_0=\frac{1}{2^{2n}}\sum_{i,j}|0,i,j\rangle\langle0,i,j|$$

The first gate to be applied is: $$\mathbf{H}\otimes \mathbf{I}\otimes\mathbf{I}=\frac{1}{\sqrt{2}}\sum_{a,b,x,y}(-1)^{a\cdot b}|a,x,y\rangle\langle b,x,y|.$$

The resulting state is thus given by: $$\rho_1=\frac{1}{2}\frac{1}{2^{2n}}\sum_{a,i,j,b}|a,i,j\rangle\langle b,i,j|$$

We now apply the $$\mathbf{CSWAP}$$ gate, whose expression is: $$\mathbf{CSWAP}=\sum_{x,y}|0,x,y\rangle\langle0,x,y|+\sum_{x,y}|1,x,y\rangle\langle1,y,x|$$

The resulting state is: $$\rho_2=\frac{1}{2}\frac{1}{2^{2n}}\sum_{i,j}\left(|0,i,j\rangle\langle0,i,j|+|0,i,j\rangle\langle1,j,i|+|1,j,i\rangle\langle0,i,j|+|1,j,i\rangle\langle1,j,i|\right)$$

Finally, we apply the Hadamard gate on the first qubit once again, which results in the state: $$\rho_3=\frac{1}{4}\frac{1}{2^{2n}}\sum_{i,j}\left(\sum_{a,b}|a,i,j\rangle\langle b,i,j|+\sum_{a,b}(-1)^b|a,i,j\rangle\langle b,j,i|+\sum_{a,b}(-1)^a|a,j,i\rangle\langle b,i,j|+\sum_{a,b}(-1)^{a\oplus b}|a,j,i\rangle\langle b,j,i|\right)$$

We're interested in the diagonal coefficients of $$\rho_3$$ that can be written as $$|0,i,j\rangle\langle0,i,j|$$. Summing them would give us the probability of measuring $$|0\rangle$$. This probability is thus given by: $$\mathbb{P}[|0\rangle]=\frac{1}{4}\frac{1}{2^{2n}}\left(\sum_{i,j}1+\sum_{i}1+\sum_{i}1+\sum_{i,j}1\right)=\frac12+\frac{1}{2^{n+1}}.$$

All in all, this algorithm distinguishes these two states with probability $$\frac34-\frac{1}{2^{n+2}}$$.

Now, let $$T$$ denote the trace distance between these two states. We know that the optimal probability of distinguishing these states is given by $$\frac12(1+T)$$. Let $$U$$ be a quantum gate such that $$U|0\rangle=|\psi\rangle$$.
$$T$$ is then also equal to the trace distance between $$\left(U^\dagger\otimes U^\dagger\right)\left(|\psi\rangle\langle\psi|\otimes|\psi\rangle\langle\psi|\right)\left(U\otimes U\right)=|0\rangle\langle0|\otimes|0\rangle\langle0|$$ and $$\frac{1}{2^{2n}}\left(U^\dagger\otimes U^\dagger\right)\mathbf{I}\left(U\otimes U\right)=\frac{1}{2^{2n}}\mathbf{I}$$. $$T$$ is then easily seen to be: $$T=\frac12\sum_i\left|\lambda_i\right|=\frac12\left(1-\frac{1}{2^{2n}}+\sum_{i=1}^{2^{2n}-1}\frac{1}{2^{2n}}\right)=1-\frac{1}{2^{2n}}$$ which means that the maximal probability of distinguishing these states is $$1-\frac{1}{2^{2n+1}}$$. Thus, the SWAP test has a sub-optimal probability of success. Intuitively, this is due to the fact that the probability of measuring $$|0\rangle$$ is always larger than or equal to $$\frac12$$, which upper-bounds the probability of success by $$\frac34$$.

Note however that this reasoning works assuming you know what $$|\psi\rangle$$ is. Otherwise, the initial density matrix in the first case is $$\frac{1}{2^{n-1}\left(2^n+1\right)}P_{\text{Sym}^2}\left(\mathbb{C}^{2^n}\right)$$ as explained in this answer, with $$P_{\text{Sym}^2}\left(\mathbb{C}^{2^n}\right)$$ being the projector on the symmetric subspace of $$\mathbb{C}^{2^n}$$ with $$2$$ copies.
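Both probabilities computed above can be checked numerically, using the standard identity that the SWAP test accepts with probability $$\mathbb{P}[|0\rangle]=\frac12+\frac12\mathrm{Tr}(S\rho)$$, where $$S$$ swaps the two $$n$$-qubit registers. A small NumPy sketch (the helper names are mine):

```python
import numpy as np

def swap_operator(d):
    """The operator S on C^d x C^d defined by S|i,j> = |j,i>."""
    S = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            S[j * d + i, i * d + j] = 1.0
    return S

def p_zero(rho, d):
    """P[ancilla = 0] when the SWAP test is applied to rho on C^d x C^d."""
    return 0.5 + 0.5 * np.trace(swap_operator(d) @ rho).real

n = 3
d = 2 ** n

# Two copies of a random pure state: the test accepts with probability 1.
psi = np.random.randn(d) + 1j * np.random.randn(d)
psi /= np.linalg.norm(psi)
rho_pure = np.outer(psi, psi.conj())
print(p_zero(np.kron(rho_pure, rho_pure), d))   # ~1.0

# Maximally mixed state on 2n qubits: P[0] = 1/2 + 2^-(n+1).
rho_mixed = np.eye(d * d) / d ** 2
print(p_zero(rho_mixed, d))                     # 0.5625 for n = 3
```

For the mixed case this reproduces $$\frac12+\frac{1}{2^{n+1}}$$, since $$\mathrm{Tr}(S)=2^n$$.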
https://byjus.com/jee/standard-determinants/
Standard Determinant

A determinant is a quantity obtained as a signed sum of products of the elements of a square matrix, computed according to a particular rule. In this lesson, the concept of determinants is explained in detail along with solved examples, formulas, determinant types, and practice questions. There are certain standard determinants whose results are given by direct formulas. The standard results of a few types of determinants are given below, which will help to solve questions more efficiently.

Expressions for Standard Determinants

(i) $$\begin{array}{l}\left| \begin{matrix} 1 & a & {{a}^{2}} \\ 1 & b & {{b}^{2}} \\ 1 & c & {{c}^{2}} \\ \end{matrix} \right|=\left( a-b \right)\left( b-c \right)\left( c-a \right)\end{array}$$

(ii) $$\begin{array}{l}\left| \begin{matrix} a & b & c \\ {{a}^{2}} & {{b}^{2}} & {{c}^{2}} \\ bc & ca & ab \\ \end{matrix} \right|=\left| \begin{matrix} 1 & 1 & 1 \\ {{a}^{2}} & {{b}^{2}} & {{c}^{2}} \\ {{a}^{3}} & {{b}^{3}} & {{c}^{3}} \\ \end{matrix} \right|=\left( a-b \right)\left( b-c \right)\left( c-a \right)\left( ab+bc+ca \right)\end{array}$$

(iii) $$\begin{array}{l}\left| \begin{matrix} a & bc & abc \\ b & ca & abc \\ c & ab & abc \\ \end{matrix} \right|=\left| \begin{matrix} a & {{a}^{2}} & {{a}^{3}} \\ b & {{b}^{2}} & {{b}^{3}} \\ c & {{c}^{2}} & {{c}^{3}} \\ \end{matrix} \right|=abc\left( a-b \right)\left( b-c \right)\left( c-a \right)\end{array}$$

(iv) $$\begin{array}{l}\left| \begin{matrix} 1 & 1 & 1 \\ a & b & c \\ {{a}^{3}} & {{b}^{3}} & {{c}^{3}} \\ \end{matrix} \right|=\left( a-b \right)\left( b-c \right)\left( c-a \right)\left( a+b+c \right)\end{array}$$

(v) $$\begin{array}{l}\left| \begin{matrix} a & b & c \\ b & c & a \\ c & a & b \\ \end{matrix} \right|=-{{a}^{3}}-{{b}^{3}}-{{c}^{3}}+3abc\end{array}$$

(vi) Determinant of order $$\begin{array}{l}3\times
3=\left| \begin{matrix} {{a}_{1}} & {{b}_{1}} & {{c}_{1}} \\ {{a}_{2}} & {{b}_{2}} & {{c}_{2}} \\ {{a}_{3}} & {{b}_{3}} & {{c}_{3}} \\ \end{matrix} \right|={{a}_{1}}\left| \begin{matrix} {{b}_{2}} & {{c}_{2}} \\ {{b}_{3}} & {{c}_{3}} \\ \end{matrix} \right|-{{b}_{1}}\left| \begin{matrix} {{a}_{2}} & {{c}_{2}} \\ {{a}_{3}} & {{c}_{3}} \\ \end{matrix} \right|+{{c}_{1}}\left| \begin{matrix} {{a}_{2}} & {{b}_{2}} \\ {{a}_{3}} & {{b}_{3}} \\ \end{matrix} \right|\end{array}$$

(vii) In the determinant D = $$\begin{array}{l}\left| \begin{matrix} {{a}_{11}} & {{a}_{12}} & {{a}_{13}} \\ {{a}_{21}} & {{a}_{22}} & {{a}_{23}} \\ {{a}_{31}} & {{a}_{32}} & {{a}_{33}} \\ \end{matrix} \right|,\end{array}$$ the minor of a12 is denoted as $$\begin{array}{l}{{M}_{12}}=\left| \begin{matrix} {{a}_{21}} & {{a}_{23}} \\ {{a}_{31}} & {{a}_{33}} \\ \end{matrix} \right|\end{array}$$ and so on.

(viii) Cofactor of an element $$\begin{array}{l}{{a}_{i\,j}}={{C}_{i\,j}}={{\left( -1 \right)}^{i+j}}{{M}_{i\,j}}\end{array}$$

Evaluation of the Determinant using SARRUS Diagram

If $$\begin{array}{l}A=\left[ \begin{matrix} {{a}_{11}} & {{a}_{12}} & {{a}_{13}} \\ {{a}_{21}} & {{a}_{22}} & {{a}_{23}} \\ {{a}_{31}} & {{a}_{32}} & {{a}_{33}} \\ \end{matrix} \right]\end{array}$$ is a square matrix of order 3, a Sarrus diagram is obtained by adjoining the first two columns on the right and drawing dark and dotted lines as shown.
The value of the determinant is $$\begin{array}{l}\left( {{a}_{11}}{{a}_{22}}{{a}_{33}}+{{a}_{12}}{{a}_{23}}{{a}_{31}}+{{a}_{13}}{{a}_{21}}{{a}_{32}} \right)-\left( {{a}_{13}}{{a}_{22}}{{a}_{31}}+{{a}_{11}}{{a}_{23}}{{a}_{32}}+{{a}_{12}}{{a}_{21}}{{a}_{33}} \right).\end{array}$$

Solved Problems on Determinants

Illustration 1: Evaluate the determinant $$\begin{array}{l}\Delta =\left| \begin{matrix} \sqrt{p}+\sqrt{q} & 2\sqrt{r} & \sqrt{r} \\ \sqrt{qr}+\sqrt{2p} & r & \sqrt{2r} \\ q+\sqrt{pr} & \sqrt{qr} & r \\ \end{matrix} \right|,\end{array}$$ where p, q and r are positive real numbers.

Solution: Taking $$\begin{array}{l}\sqrt{r}\end{array}$$ common from C2 and C3 of the given determinant using the scalar multiple property, and then expanding using the invariance property, we can evaluate the given problem.

We get $$\begin{array}{l}\Delta =r\left| \begin{matrix} \sqrt{p}+\sqrt{q} & 2 & 1 \\ \sqrt{qr}+\sqrt{2p} & \sqrt{r} & \sqrt{2} \\ q+\sqrt{pr} & \sqrt{q} & \sqrt{r} \\ \end{matrix} \right|\end{array}$$

Applying $$\begin{array}{l}{{C}_{1}}\to {{C}_{1}}-\sqrt{q}{{C}_{2}}-\sqrt{p}{{C}_{3}}\end{array}$$ we get $$\begin{array}{l}\Delta =r\left| \begin{matrix} -\sqrt{q} & 2 & 1 \\ 0 & \sqrt{r} & \sqrt{2} \\ 0 & \sqrt{q} & \sqrt{r} \\ \end{matrix} \right|=-r\sqrt{q}\left( r-\sqrt{2q} \right).\end{array}$$

Illustration 2: Let a, b, c be positive and not equal. Show that the value of the determinant $$\begin{array}{l}\left| \begin{matrix} a & b & c \\ b & c & a \\ c & a & b \\ \end{matrix} \right|\end{array}$$ is negative.

Solution: By applying the invariance and scalar multiple properties to the given determinant, we can get the required result.
$$\begin{array}{l}D=\left| \begin{matrix} a & b & c \\ b & c & a \\ c & a & b \\ \end{matrix} \right|;\;\;then\;\;D=\left| \begin{matrix} a+b+c & b & c \\ a+b+c & c & a \\ a+b+c & a & b \\ \end{matrix} \right| \;\;\;\left[ {{C}_{1}}\to {{C}_{1}}+{{C}_{2}}+{{C}_{3}} \right]\end{array}$$

$$\begin{array}{l}=\left( a+b+c \right)\left| \begin{matrix} 1 & b & c \\ 1 & c & a \\ 1 & a & b \\ \end{matrix} \right|\end{array}$$ [Taking (a+b+c) common from the first column]

$$\begin{array}{l}=\left( a+b+c \right)\left| \begin{matrix} 1 & b & c \\ 0 & c-b & a-c \\ 0 & a-b & b-c \\ \end{matrix} \right|\;\; \left[ {{R}_{2}}\to {{R}_{2}}-{{R}_{1}}\,\,and\,{{R}_{3}}\to {{R}_{3}}-{{R}_{1}} \right]\end{array}$$

$$\begin{array}{l}=\left( a+b+c \right)\left[ \left( c-b \right)\left( b-c \right)-\left( a-b \right)\left( a-c \right) \right]=\left( a+b+c \right)\left[ bc+ca+ab-{{a}^{2}}-{{b}^{2}}-{{c}^{2}} \right]\end{array}$$

$$\begin{array}{l}=-\left( a+b+c \right)\left( {{a}^{2}}+{{b}^{2}}+{{c}^{2}}-bc-ca-ab \right)=-\frac{1}{2}\left( a+b+c \right)\left( 2{{a}^{2}}+2{{b}^{2}}+2{{c}^{2}}-2bc-2ca-2ab \right)\end{array}$$

$$\begin{array}{l}=-\frac{1}{2}\left( a+b+c \right)\left[ \left( {{a}^{2}}+{{b}^{2}}-2ab \right)+\left( {{b}^{2}}+{{c}^{2}}-2bc \right)+\left( {{c}^{2}}+{{a}^{2}}-2ac \right) \right]\end{array}$$

$$\begin{array}{l}=-\frac{1}{2}\left( a+b+c \right)\left[ {{\left( a-b \right)}^{2}}+{{\left( b-c \right)}^{2}}+{{\left( c-a \right)}^{2}} \right]\end{array}$$ … (i)

a, b, c are positive $$\begin{array}{l}\Rightarrow a+b+c>0\end{array}$$

a, b, c are unequal $$\begin{array}{l}\Rightarrow {{\left( a-b \right)}^{2}}+{{\left( b-c \right)}^{2}}+{{\left( c-a \right)}^{2}}>0\end{array}$$ … (ii)

From (i) and (ii), Δ < 0.
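The standard results above, and the sign conclusion of Illustration 2, are easy to sanity-check numerically. A quick NumPy sketch (the sample values are arbitrary):

```python
import numpy as np

a, b, c = 2.0, 3.0, 5.0   # arbitrary distinct positive values

# Identity (i): the Vandermonde-type determinant
V = np.array([[1, a, a**2],
              [1, b, b**2],
              [1, c, c**2]])
assert np.isclose(np.linalg.det(V), (a - b) * (b - c) * (c - a))

# Identity (v): the circulant determinant equals 3abc - a^3 - b^3 - c^3 ...
C = np.array([[a, b, c],
              [b, c, a],
              [c, a, b]])
assert np.isclose(np.linalg.det(C), 3*a*b*c - a**3 - b**3 - c**3)

# ... and, as Illustration 2 shows, it is negative for positive unequal a, b, c.
assert np.linalg.det(C) < 0
print("all checks passed")
```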
Illustration 3: Show that $$\begin{array}{l}\Delta =\left| \begin{matrix} 1 & {{\cos }^{2}}\left( \alpha -\beta \right) & {{\cos }^{2}}\left( \alpha -\gamma \right) \\ {{\cos }^{2}}\left( \beta -\alpha \right) & 1 & {{\cos }^{2}}\left( \beta -\gamma \right) \\ {{\cos }^{2}}\left( \gamma -\alpha \right) & {{\cos }^{2}}\left( \gamma -\beta \right) & 1 \\ \end{matrix} \right|\end{array}$$ $$\begin{array}{l}=2{{\sin }^{2}}\left( \beta -\gamma \right){{\sin }^{2}}\left( \gamma -\alpha \right){{\sin }^{2}}\left( \alpha -\beta \right)\end{array}$$

Solution: By putting $$\begin{array}{l}\beta -\gamma =A,\gamma -\alpha =B,\alpha -\beta =C\end{array}$$ and then using the switching and invariance properties, we can prove the above problem.

We can write Δ as, $$\begin{array}{l}\Delta =\left| \begin{matrix} 1 & {{\cos }^{2}}C & {{\cos }^{2}}B \\ {{\cos }^{2}}C & 1 & {{\cos }^{2}}A \\ {{\cos }^{2}}B & {{\cos }^{2}}A & 1 \\ \end{matrix} \right|\end{array}$$ (Note that A + B + C = 0).

Using $$\begin{array}{l}{{C}_{2}}\to {{C}_{2}}-{{C}_{1}},{{C}_{3}}\to {{C}_{3}}-{{C}_{1}}\end{array}$$ we get $$\begin{array}{l}\Delta =\left| \begin{matrix} 1 & -{{\sin }^{2}}C & -{{\sin }^{2}}B \\ {{\cos }^{2}}C & {{\sin }^{2}}C & {{\cos }^{2}}A-{{\cos }^{2}}C \\ {{\cos }^{2}}B & {{\cos }^{2}}A-{{\cos }^{2}}B & {{\sin }^{2}}B \\ \end{matrix} \right|\;\;\left| \begin{matrix} 1 & -{{\sin }^{2}}C & -{{\sin }^{2}}B \\ {{\cos }^{2}}C & {{\sin }^{2}}C & \sin B\sin \left( C-A \right) \\ {{\cos }^{2}}B & \sin C\sin \left( B-A \right) & {{\sin }^{2}}B \\ \end{matrix} \right|\end{array}$$ $$\begin{array}{l}={{\left( -1 \right)}^{2}}\left| \begin{matrix} 1 & {{\sin }^{2}}C & {{\sin }^{2}}B \\ {{\cos }^{2}}C & -{{\sin }^{2}}C & \sin B\sin \left( -A \right) \\ {{\cos }^{2}}B & \sin C\sin \left( B-A \right) & -{{\sin }^{2}}B \\ \end{matrix} \right|\end{array}$$ Since, $$\begin{array}{l}\left[ \,{{\cos }^{2}}A-{{\cos }^{2}}B=\sin \left( A+B \right)\sin \left( B-A \right),A+B=-C,C+A=-B \right]; \;\;=\sin C\sin
B\left[ {{\Delta }_{1}} \right]\end{array}$$ where $$\begin{array}{l}{{\Delta }_{1}}=\left| \begin{matrix} 1 & {{\sin }^{2}}C & \sin B \\ {{\cos }^{2}}C & -{{\sin }^{2}}C & \sin (C-A) \\ {{\cos }^{2}}B & \sin (B-A) & -\sin B \\ \end{matrix} \right|\; Using\; {{R}_{2}}\to {{R}_{2}}-{{R}_{1}}\; and \;{{R}_{3}}\to {{R}_{3}}-{{R}_{1}}\end{array}$$ we get $$\begin{array}{l}{{\Delta }_{1}}=\left| \begin{matrix} 1 & \sin C & \sin B \\ -{{\sin }^{2}}C & -2{{\sin }^{2}}C & \sin (C-A)-\sin B \\ -{{\sin }^{2}}B & \sin (B-A)-\sin C & -2{{\sin }^{2}}B \\ \end{matrix} \right|\end{array}$$

But sin (C – A) – sin B = sin (C – A) + sin (C + A) = 2 sin C cos A and sin (B – A) – sin C = 2 sin B cos A. Therefore, $$\begin{array}{l}{{\Delta }_{1}}=\sin C\;\sin B\;{{\Delta }_{2}}\; where \;{{\Delta }_{2}}=\left| \begin{matrix} 1 & \sin C & \sin B \\ \sin C & 2 & -2\cos A \\ \sin B & -2\cos A & 2 \\ \end{matrix} \right|\end{array}$$

Applying $$\begin{array}{l}{{R}_{2}}\to {{R}_{2}}-\sin C\,{{R}_{1}} \;and \;{{R}_{3}}\to {{R}_{3}}-\sin \,B\,{{R}_{1}}\end{array}$$ we get $$\begin{array}{l}{{\Delta }_{2}}=\left| \begin{matrix} 1 & \sin C & \sin B \\ 0 & 2-{{\sin }^{2}}C & -2\cos A-\sin B\sin C \\ 0 & -2\cos A-\sin B\sin C & 2-{{\sin }^{2}}B \\ \end{matrix} \right|=\left( 2-{{\sin }^{2}}B \right)\left( 2-{{\sin }^{2}}C \right)-{{\left( 2\cos A+\sin B\sin C \right)}^{2}}\end{array}$$

$$\begin{array}{l}=4-2{{\sin }^{2}}B-2{{\sin }^{2}}C+{{\sin }^{2}}B{{\sin }^{2}}C-\left[ 4{{\cos }^{2}}A+4\cos A\sin B\sin C+{{\sin }^{2}}B{{\sin }^{2}}C \right]\end{array}$$

$$\begin{array}{l}=4{{\sin }^{2}}A-2{{\sin }^{2}}B-2{{\sin }^{2}}C-4\cos A\sin B\sin C\end{array}$$

$$\begin{array}{l}=2{{\sin }^{2}}A-2\left[ {{\sin }^{2}}B+{{\sin }^{2}}C-{{\sin }^{2}}A+2\cos A\sin B\sin C \right]\end{array}$$

But A + B + C = 0 implies: $$\begin{array}{l}{{\sin }^{2}}B+{{\sin }^{2}}C-{{\sin }^{2}}A=-2\cos A\sin B\sin C\end{array}$$

$$\begin{array}{l}{{\Delta }_{2}}=2{{\sin }^{2}}A;\end{array}$$ Hence, $$\begin{array}{l}D=\sin C\sin B{{\Delta
}_{1}}={{\sin }^{2}}C{{\sin }^{2}}B{{\Delta }_{2}}\end{array}$$ $$\begin{array}{l}=2{{\sin }^{2}}A\;{{\sin }^{2}}B\;{{\sin }^{2}}C=2{{\sin }^{2}}\left( \alpha -\beta \right)\;{{\sin }^{2}}\left( \beta -\gamma \right)\;{{\sin }^{2}}\left( \gamma -\alpha \right).\end{array}$$ Illustration 4: Prove that the following determinant vanishes if any two of x; y; z are equal $$\begin{array}{l}\Delta =\left| \begin{matrix} \sin x & \sin y & \sin z \\ \cos x & \cos y & \cos z \\ {{\cos }^{3}}x & {{\cos }^{3}}y & {{\cos }^{3}}z \\ \end{matrix} \right|\end{array}$$ Solution:Taking cos x, cos y, and cos z common from first, second and third column using scalar multiple and then using the invariance property we can prove the given statement. Here, $$\begin{array}{l}\Delta =\cos x\cos y\cos z\left| \begin{matrix} \tan x & \tan y & \tan z \\ 1 & 1 & 1 \\ {{\cos }^{2}}x & {{\cos }^{2}}y & {{\cos }^{2}}z \\ \end{matrix} \right|\end{array}$$ $$\begin{array}{l}=\cos x\cos y\cos z\left| \begin{matrix} \tan x & \tan y-\tan x & \tan z-\tan y \\ 1 & 0 & 0 \\ {{\cos }^{2}}x & {{\cos }^{2}}y-{{\cos }^{2}}x & {{\cos }^{2}}z-{{\cos }^{2}}y \\ \end{matrix} \right|\left( {{C}_{3}}\to {{C}_{3}}-{{C}_{2}},{{C}_{2}}\to {{C}_{2}}-{{C}_{1}} \right)\end{array}$$ Expanding along R2, $$\begin{array}{l}\Delta =-\cos x\cos y\cos z\left| \begin{matrix} \tan y-\tan x & \tan z-\tan y \\ {{\cos }^{2}}y-{{\cos }^{2}}x & {{\cos }^{2}}z-{{\cos }^{2}}y \\ \end{matrix} \right|\end{array}$$ $$\begin{array}{l}=-\cos x\cos y\cos z\left| \begin{matrix} \frac{\sin \left( y-x \right)}{\cos x\cos y} & \frac{\sin \left( z-y \right)}{\cos y.\cos z} \\ {{\sin }^{2}}x-{{\sin }^{2}}y & {{\sin }^{2}}y-{{\sin }^{2}}z \\ \end{matrix} \right|=\left| \begin{matrix} \cos z.\sin \left( x-y \right) & \cos x.\sin \left( y-z \right) \\ \sin \left( x+y \right).\sin \left( x-y \right) & \sin \left( y+z \right).\sin \left( y-z \right) \\ \end{matrix} \right|\end{array}$$ … (i) $$\begin{array}{l}=\sin \left( x-y \right)\sin \left( y-z 
\right)\left| \begin{matrix} \cos z & \cos x \\ \sin \left( x+y \right) & \sin \left( y+z \right) \\ \end{matrix} \right|=\sin \left( x-y \right)\sin \left( y-z \right)\left[ \sin \left( y+z \right)\cos z-\sin \left( x+y \right)\cos x \right]\end{array}$$

$$\begin{array}{l}=\frac{1}{2}\sin \left( x-y \right)\sin \left( y-z \right)\left[ \left\{ \sin \left( y+2z \right)+\sin y \right\}-\left\{ \sin \left( y+2x \right)+\sin y \right\} \right]\end{array}$$

$$\begin{array}{l}=\frac{1}{2}\sin \left( x-y \right)\sin \left( y-z \right)\left[ \sin \left( y+2z \right)-\sin \left( y+2x \right) \right]=\frac{1}{2}\sin \left( x-y \right)\sin \left( y-z \right)2\cos \left( x+y+z \right)\sin \left( z-x \right)\end{array}$$

$$\begin{array}{l}=\sin \left( x-y \right)\sin \left( y-z \right)\sin \left( z-x \right)\cos \left( x+y+z \right)\end{array}$$

Clearly, Δ is zero when any two of x, y, z are equal or $$\begin{array}{l}x+y+z=\frac{\pi }{2}.\end{array}$$ Hence proved.

Important Questions for JEE Matrices and Determinants

When 2 rows or columns are interchanged, what happens to a determinant? When 2 rows or columns are interchanged, the determinant changes its sign.

What is the value of the determinant, if all the elements of a row or column are zero? If all the elements of a row or column are zero, then the determinant is equal to zero.

What are determinants used for? Determinants are used to give formulas for the area or volume of certain geometric figures and also to find the inverse of a matrix.

Are determinants always positive? No. Determinants can be positive, negative or zero.
http://tex.stackexchange.com/questions/25594/why-is-there-no-mudimendef-primitive
# Why is there no \mudimendef primitive?

The TeXbook defines in chapter 24 (page 270 in my edition) the notion of <dimen>, <skip>, <mudimen> and <muskip>, the first two being used in horizontal and vertical mode, while the other ones are used in math mode. However, this parallel between math mode and the other modes is broken by registers: only \dimendef, \skipdef and \muskipdef exist, and not \mudimendef. Similarly, \dimen 0, \skip 0 and \muskip 0 exist, but not \mudimen 0. The only primitive that manipulates mudimens seems to be \mkern. Why are <mudimen>s so special?

EDIT: extra question. I'm asking this because I am trying to be able to grab the argument of any primitive. To grab a <dimen>, for instance, one can use \newdimen\MyDimen and \afterassignment\DoSomethingWithMyDimen \MyDimen=. Given that TeX does not have <mudimen> parameters, I cannot use the same trick for that. I cannot use a muskip either, because it may mistakenly grab a stretch or shrink part which \mkern would stop at. It may be that I can use a \count register in some cases, but not all: how could I distinguish between cases like \mkern 1mu and \mkern \MyMuskip (with automatic coercion)?

- There is only one unit mu for \mkern. Thus a \count would suffice. I guess this is why there isn't a \mudimen register. –  Leo Liu Aug 13 '11 at 4:57

- @Leo Liu: that's a great reason. How could I then grab a <mudimen> argument? Can you think of a way to grab exactly what \mkern would grab as its argument? See edit. –  Bruno Le Floch Aug 13 '11 at 11:59

- I've no idea. But I guess you can change your desired syntax and use a count register without problems. –  Leo Liu Aug 13 '11 at 15:19

- @Leo Liu: I'm reading arbitrary TeX code, and expanding fully all macros, then looking at whatever primitive is in front of me and doing what TeX would do with it. So I can't really change my syntax since it's that of TeX. I think I'll settle for a muskip, despite the fact that it can in some cases grab too much.
–  Bruno Le Floch Aug 13 '11 at 19:56
https://proxies-free.com/tag/recovering/
I want to send bitcoins, but cannot unlock my wallet. The rpc password located in AppData\Roaming does not work. How can I unlock my wallet? Deleting the wallet.dat erases all my bitcoins, which I would like to keep.

## When a feature prevents its target from regaining hit points, it does not prevent receiving healing.

No rule explicitly talks about a healing minimum, so we should assume that healing that features reduce to 0 still counts as receiving healing, similar to how receiving damage that features reduce to 0 counts as taking damage. See Healing: When a creature receives healing of any kind, hit points regained are added to its current hit points. Here we can explicitly see the logical distinction of receiving healing as a cause of regaining hit points. The order is explicitly not such that you need to regain hit points to receive healing. Bearded Devil: the target can’t regain hit points Here we see that the beard attack doesn’t prevent healing. It only prevents the regaining of hit points. The glaive attack also has a specific inbuilt mechanic that doesn’t restore hit points and staunches the wound: Any creature can take an action to stanch the wound with a successful DC 12 Wisdom (Medicine) check.

## mnemonic seed – Recovering a Hierarchical Deterministic wallet

Does the use of an HD wallet (with BIP 32 and BIP 39) guarantee the possibility of its recovery in an HD wallet of any manufacturer, whether it’s software or hardware? Or are there sometimes cases of incompatibility of HD wallets due to different principles of their creation?

## PostgreSQL startup recovering – Stack Overflow

I have a three-node cluster: node1 (primary), node2 (hot standby), node3 (hot standby). node1 is down. I promoted node2 as the new primary. After that I ran `pg_rewind` on node3 to point to the new primary (node2); `pg_rewind` worked without any error.
After this, on node3, getting below: Main PID: 11039 (postmaster) CGroup: /system.slice/postgresql-12.service `````` ├─11039 /usr/pgsql-12/bin/postmaster -D /var/lib/pgsql/12/data/ └─11041 postgres: startup recovering 000000020000000A000000F3 `````` on node3, getting below debug logs: ``````Aug 24 07:54:02 fsrstandby postgres(11041): (705-1) 2021-08-24 07:54:02 UTC DEBUG: switched WAL source from stream to archive after failure Aug 24 07:54:02 fsrstandby postgres(11041): (705-1) 2021-08-24 07:54:02 UTC DEBUG: switched WAL source from stream to archive after failure Aug 24 07:54:02 fsrstandby postgres(11041): (706-1) 2021-08-24 07:54:02 UTC DEBUG: invalid resource manager ID 110 at A/F39FD7C0 Aug 24 07:54:02 fsrstandby postgres(11041): (706-1) 2021-08-24 07:54:02 UTC DEBUG: invalid resource manager ID 110 at A/F39FD7C0 Aug 24 07:54:02 fsrstandby postgres(11041): (707-1) 2021-08-24 07:54:02 UTC DEBUG: switched WAL source from archive to stream after failure Aug 24 07:54:02 fsrstandby postgres(11041): (707-1) 2021-08-24 07:54:02 UTC DEBUG: switched WAL source from archive to stream after failure Aug 24 07:54:02 fsrstandby postgres(11041): (708-1) 2021-08-24 07:54:02 UTC LOG: invalid resource manager ID 110 at A/F39FD7C0 Aug 24 07:54:02 fsrstandby postgres(11041): (708-1) 2021-08-24 07:54:02 UTC LOG: invalid resource manager ID 110 at A/F39FD7C0 Aug 24 07:54:02 fsrstandby postgres(12095): (202-1) 2021-08-24 07:54:02 UTC DEBUG: shmem_exit(1): 1 before_shmem_exit callbacks to make Aug 24 07:54:02 fsrstandby postgres(12095): (203-1) 2021-08-24 07:54:02 UTC DEBUG: shmem_exit(1): 5 on_shmem_exit callbacks to make Aug 24 07:54:02 fsrstandby postgres(12095): (204-1) 2021-08-24 07:54:02 UTC DEBUG: proc_exit(1): 2 callbacks to make Aug 24 07:54:02 fsrstandby postgres(12095): (205-1) 2021-08-24 07:54:02 UTC DEBUG: exit(1) Aug 24 07:54:02 fsrstandby postgres(12095): (206-1) 2021-08-24 07:54:02 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make Aug 24 07:54:02 
fsrstandby postgres(12095): (207-1) 2021-08-24 07:54:02 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make Aug 24 07:54:02 fsrstandby postgres(12095): (208-1) 2021-08-24 07:54:02 UTC DEBUG: proc_exit(-1): 0 callbacks to make Aug 24 07:54:02 fsrstandby postgres(12095): (208-1) 2021-08-24 07:54:02 UTC DEBUG: proc_exit(-1): 0 callbacks to make Aug 24 07:54:02 fsrstandby postgres(11039): (202-1) 2021-08-24 07:54:02 UTC DEBUG: reaping dead processes Aug 24 07:54:02 fsrstandby postgres(11039): (202-1) 2021-08-24 07:54:02 UTC DEBUG: reaping dead processes ``````

## nt.number theory – Recovering basic information about perfect numbers from a Dirichlet series

The following question is inspired mostly by this question, answer and the comment by Wojowu there. A naive approach to understanding odd perfect numbers is to make a Dirichlet series whose $$n$$th term, for odd $$n$$, is zero if and only if $$n$$ is an odd perfect number, namely $$A(s)= \zeta(s)\zeta(s-1) - 2\zeta(s-1).$$ One might hope that one could then use analysis to get some sort of non-trivial statement about when a term can be zero. One could then look at something like $$\lim_{T \rightarrow \infty} \frac{1}{2T} \int_{-T}^{T} A(s+it)x^{s+it} \, dt .
(1)$$ As long as s is sufficiently large (and in fact , we may take $$s>2$$), when $$x>0$$, the limit in (1) is equal to $$sigma(n)-2n$$ exactly when $$x=n$$ and and 0 otherwise. However, there’s no content in this statement involving the series $$A(s)$$ other than that that the Dirichlet series converges if $$s>2$$. But one might hope that this framework or something like it could get non-trivial statements about when the above integral can be zero with odd $$n$$. The most naive thing to start off with here would be to see if one can recover very basic properties of perfect numbers in this analytic context. Here are three statements which are very easy to prove and none of which even require unique prime factorization: 1. No perfect number is a power of a prime. 2. No perfect number is congruent to 3 (mod 4). 3. No perfect number is congruent to 2 (mod 3). So the question is, can we use this analytic approach to recover any of these statements or any similar statements? This would seem to be a reasonable test that this sort of framework has even a small chance of being productive. ## depth of field – Recovering physical distance from perceived distance in telephoto lens I am trying to find a way to estimate the DOF from the perceived distance through a telephoto lens. The problem is, most DOF calculators use the physical distance from the subject to the lens to calculate the DOF. But for the telephoto range, it is much easier and practical to estimate the perceived distance of the object as it appears in the EVF rather than the actual physical distance due to how far the subject is from the camera. So suppose the focal length is `f` and the subject seems to be `x` meters away from me through the lens, is there a way to recover the physical distance `s` of the subject? Also, due to perspective distortion, the scene will look compressed through a telephoto lens. 
Suppose, looking through the lens, I estimate the distance between two objects to be `y` meters (along the depth axis), and I want both objects to look sharp. Should the physical depth of field be `y`, or should I adjust that as well?

## New member seeking help with recovering a Simple Machines forum

If you do not know what you are doing, hire a professional! But from what I'm seeing:

Table './wtfu_sw452/smf_sessions' is marked as crashed and should be repaired

You just have to log in to your control panel or phpMyAdmin, choose the wtfu_sw452 database, select the smf_sessions table, and repair it.

P.S. It is not good that your host does not respond to something so simple; also, their SSL certificate is expired and they do not offer new products for purchase, which are red flags…

## sql server – Recovering SQL database when OS crashed

I was working on a DB design; it's almost finished. But I missed taking a backup. While working, for some reason my OS (Win 10) crashed and nothing helped to recover it. But I am able to access the HDD contents when connecting it to another PC. So is there any way to recover the DB using a SQL Server Express installation on that PC? On googling I found there is an attach option using MDF files, but I am not able to find the MDF files in the Program Files\MS SQL Server folder. Where does SQL Server Express 2019 store MDF files by default?

## riemann surfaces – Recovering a family of rational functions from branch points

Let $$Y$$ be a compact Riemann surface and $$B$$ a finite subset of $$Y$$. It is a standard fact that isomorphism classes of holomorphic ramified covers $$f:X\rightarrow Y$$ of degree $$d$$ with branch points in $$B$$ are in correspondence with homomorphisms $$\rho:\pi_1(Y-B)\rightarrow S_d$$ with transitive image, modulo conjugation by elements of the permutation group $$S_d$$. Writing a formula for $$f$$ from the knowledge of $$B\subset Y$$ and $$\rho$$ is often hard, e.g. the task of recovering a Belyi map from its dessin where $$|B|=3$$.
I am interested in the case of $$X=Y=\Bbb{CP}^1$$, and some points from $$B$$ moving in the Riemann sphere. Here is an example:

• Consider rational functions $$f:\Bbb{CP}^1\rightarrow\Bbb{CP}^1$$ of degree $$3$$ with four simple critical points that have $$1,\omega,\bar{\omega}$$ among their critical values $$\left(\omega={\rm{e}}^{\frac{2\pi{\rm{i}}}{3}}\right)$$, thus $$B=\{1,\omega,\bar{\omega},\beta\}$$ with $$\beta$$ varying in a punctured sphere. To fix an element in the isomorphism class, we can pre-compose $$f$$ with a suitable Möbius transformation so that $$1$$, $$\bar{\omega}$$ and $$\omega$$ are the critical points lying above $$1$$, $$\omega$$ and $$\bar{\omega}$$ respectively: $$f(1)=1, f(\bar{\omega})=\omega, f(\omega)=\bar{\omega}$$. A normal form for such functions is $$\left\{f_\alpha(z):=\frac{\alpha z^3+3z^2+2\alpha}{2z^3+3\alpha z+1}\right\}_\alpha.$$ A simple computation shows that the fourth critical point is $$\alpha^2$$, and hence $$\beta=\beta_\alpha:=f_\alpha(\alpha^2)=\frac{\alpha^4+2\alpha}{2\alpha^3+1}$$.

Here is my question: Why is $$\beta$$ not a degree-one function of $$\alpha$$? Shouldn't the knowledge of the branch locus and the monodromy determine $$f_\alpha(z)$$ in the normalized form above? I presume the monodromy does not change, because there are only finitely many possibilities for it and this is a continuous family. The monodromy of $$f_\alpha$$ is a homomorphism $$\rho_\alpha:\pi_1\left(\Bbb{CP}^1-\{1,\omega,\bar{\omega},\beta_\alpha\}\right)\rightarrow S_3$$ where small loops around $$1,\omega,\bar{\omega},\beta_\alpha$$ generate the fundamental group, and are mapped to transpositions in $$S_3$$ whose product is the identity and which are not all distinct. So I guess my question is how such a discrete object can vary with $$\alpha$$; and if it doesn't, why the assignment $$\alpha\mapsto\beta(\alpha)$$ is not injective.
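The two computational claims in the example above (fourth critical point at $$\alpha^2$$, and the formula for $$\beta_\alpha$$) are easy to spot-check numerically. A small sketch; the test value $$\alpha=0.7$$ and the finite-difference step are arbitrary choices, not part of the question:

```python
# Spot-check that for f_a(z) = (a z^3 + 3 z^2 + 2a) / (2 z^3 + 3a z + 1)
# the fourth critical point is z = a^2 and f_a(a^2) = (a^4 + 2a)/(2a^3 + 1).
def f(a, z):
    return (a * z**3 + 3 * z**2 + 2 * a) / (2 * z**3 + 3 * a * z + 1)

def df(a, z, h=1e-6):
    # central-difference approximation of f'(z)
    return (f(a, z + h) - f(a, z - h)) / (2 * h)

a = 0.7
crit = a**2
beta = (a**4 + 2 * a) / (2 * a**3 + 1)

print(abs(df(a, crit)))        # ~0: a^2 is indeed a critical point
print(abs(f(a, crit) - beta))  # ~0: the closed form for beta matches
```

The same check passes for any generic real or complex `a`, which is consistent with the exact cancellation in the derivative numerator at $$z=\alpha^2$$.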
The degree of this assignment is four, and there are also four conjugacy classes of homomorphisms $$\rho:\langle\sigma_1,\sigma_2,\sigma_3,\sigma_4\mid\sigma_1\sigma_2\sigma_3\sigma_4=\mathbf{1}\rangle\rightarrow S_3$$ with $${\rm{Im}}(\rho)$$ being a transitive subgroup of $$S_3$$ generated by the transpositions $$\rho(\sigma_i)$$:

$$\sigma_1\mapsto (1,2),\sigma_2\mapsto (1,2),\sigma_3\mapsto (1,3), \sigma_4\mapsto (1,3);\\ \sigma_1\mapsto (1,2),\sigma_2\mapsto (1,3),\sigma_3\mapsto (1,2), \sigma_4\mapsto (2,3);\\ \sigma_1\mapsto (1,2),\sigma_2\mapsto (1,3),\sigma_3\mapsto (1,3), \sigma_4\mapsto (1,2);\\ \sigma_1\mapsto (1,2),\sigma_2\mapsto (1,3),\sigma_3\mapsto (2,3), \sigma_4\mapsto (1,3).$$

Is it accidental that the degree of $$\alpha\mapsto\beta(\alpha)$$ is the same as the number of possibilities for the monodromy representations compatible with our ramification structure?

## linux – Need help recovering data using TestDisk

This is what my partitions look like now:

Earlier the unallocated 20GB space was just after the OS(C:) partition. I booted into a Linux Mint live disk and used GParted to move the unallocated space after the 150GB Pop OS partition and expand it. It crashed midway, and after I tried rebooting into Pop OS, I was unable to boot into it; I get `initramfs`. So I booted into Windows 10 (which works fine) and used TestDisk to recover the lost data. I selected my drive, selected the EFI GPT partition type, and selected Analyse, and I got this:

I couldn't understand anything, so I proceeded to Quick Search, and after 5-10 minutes I got this:

``````
The harddisk (512 GB / 476 GiB) seems too small! (< 1023 GB / 953 GiB)
Check the harddisk size: HD jumper settings, BIOS detection...
The following partition can't be recovered:
     Partition                   Start        End  Size in sectors
>  MS Data                  1000215182 1999863068        999647887
``````

I proceeded and got this:

``````
Disk /dev/sda - 512 GB / 476 GiB - CHS 62260 255 63
     Partition                   Start        End  Size in sectors
>P EFI System                     2048     534527           532480 (EFI System Partition) (SYSTEM)
 D MS Data                      567296  623962929        623395634 (OS)
 D MS Data                      567296 1000215182        999647887 (OS)
 D EFI System                623964160  625192959          1228800 (EFI System Partition) (NO NAME)
 D Linux Swap                625192960  641970159         16777200
 D EFI System                665907200  667135999          1228800 (EFI System Partition) (NO NAME)
 D Linux Swap                667136000  683913199         16777200
 D Linux filesys. data       683913216  998469631        314556416
 D MS Data                   996732929  998473728          1740800
 D MS Data                   998473728 1000214527          1740800 (RECOVERY)
``````

Now, what should I do?
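The "seems too small" warning is consistent with the listing above: the rejected candidate ends at sector 1,999,863,068, far beyond the end of a 512 GB disk. A sketch of that sanity check; the exact disk sector count is an assumption based on a typical 512 GB drive:

```python
SECTOR = 512                   # bytes per logical sector
DISK_SECTORS = 1_000_215_216   # assumed total sectors of a typical 512 GB drive

# (description, start sector, end sector) taken from the TestDisk output
candidates = [
    ("EFI System",         2048,       534527),
    ("MS Data (OS)",       567296,     623962929),
    ("MS Data (OS)",       567296,     1000215182),
    ("MS Data (rejected)", 1000215182, 1999863068),
]

def fits_on_disk(start, end, disk_sectors=DISK_SECTORS):
    """A partition candidate is only recoverable if it lies entirely on the disk."""
    return 0 <= start <= end < disk_sectors

for name, start, end in candidates:
    size_gib = (end - start + 1) * SECTOR / 2**30
    print(f"{name:20s} {size_gib:8.1f} GiB  fits: {fits_on_disk(start, end)}")
```

Only the last candidate fails the check, which is exactly the one TestDisk refuses to recover.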
https://newbedev.com/can-mass-energy-equivalence-be-used-to-measure-absolute-internal-energy
# Can mass-energy equivalence be used to measure absolute internal energy?

$$E = mc^2$$, where $$m$$ is the relativistic mass; $$m_0$$ is the classical or rest mass.

Consider a closed system at rest with no heat added and no work done, but with an internal chemical or nuclear reaction. At rest means no change in the kinetic or gravitational potential energy of the overall system. $$\Delta U = 0$$, where $$U$$ is the total internal energy.

Classical thermodynamics considers the energy of the reaction as the "internal energy of formation", and defines $$\Delta U$$ as $$U_{classical \enspace products} - U_{classical \enspace reactants} - U_{formation}$$; $$\Delta U = 0$$ for this example. $$U_{classical}$$ is the heat capacity at constant volume for all the moles or nuclei in the system. (See one of the thermodynamics textbooks by Sonntag and van Wylen.) In an engineering thermodynamics context the (classical) internal energy is just the heat capacity. The energy balance for classical thermodynamics was developed before $$E=mc^2$$ was understood, hence the need for considering the energy of formation.

Using $$E=mc^2$$ we can express this energy of formation as a change in rest mass; that is, we can express $$U_{formation}$$ in terms of the rest masses of the reacting constituents. Consider the constituents in the system: atoms/molecules for a chemical reaction, nuclei for a nuclear reaction. For each constituent, $$U = U_{classical} + nm_0c^2$$, where $$m_0c^2$$ is the rest mass energy of an atom/molecule or nucleus and $$n$$ is the total number of moles or nuclei of the constituent. (See note (a) below.)

So for the reaction $$a + X \rightarrow b + Y$$, we have

$$U_{classical \enspace a} + U_{classical \enspace X} + n_am_{0a}c^2 + n_Xm_{0X}c^2 = U_{classical \enspace b} + U_{classical \enspace Y} + n_bm_{0b}c^2 + n_Ym_{0Y}c^2$$

So $$U_{formation} = n_am_{0a}c^2 + n_Xm_{0X}c^2 - (n_bm_{0b}c^2 + n_Ym_{0Y}c^2)$$; the internal energy of formation is equal to the change in the rest masses.
If the product ($$b$$ and $$Y$$) rest masses are less than the reactant ($$a$$ and $$X$$) rest masses, $$U_{formation}$$ is positive and $$U_{classical \enspace products}$$ is greater than $$U_{classical \enspace reactants}$$, due to the reduction in rest mass causing an increase in $$U_{classical}$$. The change in rest mass is very small for a chemical reaction as contrasted with a nuclear reaction, but the same concept holds. In a chemical reaction the change in rest mass is dictated by the binding energy of the electrons in an atom/molecule. In a nuclear reaction the change in rest mass is dictated by the binding energy of the nucleons in a nucleus.

So far, we have considered the energies of all the reactant and product atoms/molecules in the system. We can also view this entire system, which is at rest, externally as having constant total internal energy $$U_{total} = m_{0 \enspace system} c^2$$, where $$m_{0 \enspace system}$$ accounts for the overall classical internal energies and rest masses of the constituents in the system:

$$m_{0 \enspace system} c^2 = U_{classical \enspace a} + U_{classical \enspace X} + n_a m_{0a}c^2 + n_Xm_{0X}c^2 = U_{classical \enspace b} + U_{classical \enspace Y} + n_bm_{0b}c^2 + n_Ym_{0Y}c^2$$

So, viewing the system externally, the absolute internal energy is the total rest mass energy for an isolated system (no heat, work, or mass transfer) that is at rest (no change in overall kinetic or potential energy).

Note (a). For a discussion of the reaction energetics on an atom-to-atom or nucleus-to-nucleus basis, see my answer at Why is mass defect calculated by the rest mass (energy)? on this exchange.
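To make the rest-mass bookkeeping concrete, here is a numeric sketch for assembling a helium-4 nucleus from two hydrogen-1 atoms and two neutrons; the mass values are rounded figures from standard tables, assumed for illustration:

```python
# Rest-mass bookkeeping for 2 H-1 + 2 n -> He-4, in unified mass units (u).
MEV_PER_U = 931.494           # conversion factor: MeV of rest energy per u

m_H1  = 1.007825              # u, hydrogen-1 atom (assumed table value)
m_n   = 1.008665              # u, free neutron (assumed table value)
m_He4 = 4.002602              # u, helium-4 atom (assumed table value)

# U_formation = (reactant rest masses - product rest masses) * c^2
delta_m = 2 * m_H1 + 2 * m_n - m_He4      # mass defect, in u
U_formation = delta_m * MEV_PER_U         # in MeV

print(f"mass defect = {delta_m:.6f} u -> U_formation = {U_formation:.1f} MeV")
```

The result, roughly 28.3 MeV, is the familiar helium-4 binding energy: the positive $$U_{formation}$$ corresponds to the reduction in rest mass described above.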
https://too.simply-logical.space/src/text/3_part_iii/8.3.html
# 8.3. Abduction and diagnostic reasoning

Abduction extends default reasoning by not only making assumptions about what is false, but also about what is true. For instance, in the light bulb example given earlier, we know that if the light bulb is broken, the light doesn't switch on. If we observe that the light doesn't switch on, a possible explanation is that the light bulb is broken. Since this is only one of the possible explanations, it cannot be guaranteed to be true. For instance, there might be a problem with the power supply instead, or the switch might be broken.

The general problem of abduction can be stated as follows. Given a $$Theory$$ and an $$Observation$$, find an $$Explanation$$ such that

$$Theory \cup Explanation \models Observation$$

i.e. the $$Observation$$ follows logically from the $$Theory$$ extended with the $$Explanation$$. For instance, if $$Theory$$ consists of the following clauses

```
likes(peter,S):-student_of(S,peter).
likes(X,Y):-friend(Y,X).
```

and we have the $$Observation$$ likes(peter,paul), then possible $$Explanations$$ are { student_of(paul,peter) } and { friend(paul,peter) }. Other $$Explanations$$ which satisfy the problem specification are { likes(X,paul) } and { likes(X,Y):-friendly(Y), friendly(paul) }. However, abductive explanations are usually restricted to ground literals with predicates that are undefined in $$Theory$$ (such literals are called abducibles). Inferring general rules from specific observations is called induction, and is discussed in the next chapter.

Procedurally, we can construct an abductive explanation by trying to prove the $$Observation$$ from the initial $$Theory$$ alone: whenever we encounter a literal for which there is no clause to resolve with, we add the literal to the $$Explanation$$. This leads to the following abductive meta-interpreter.
```
% abduce(O,E) <- observation O follows by SLD-resolution
%                from the theory defined by cl/2,
%                extended with a list of unit clauses E
abduce(O,E):-
    abduce(O,[],E).        % with accumulator for explanations

abduce(true,E,E):-!.
abduce((A,B),E0,E):-!,
    abduce(A,E0,E1),
    abduce(B,E1,E).
abduce(A,E0,E):-
    cl(A,B),               % query clauses enumerated by cl/2
    abduce(B,E0,E).
abduce(A,E,E):-
    element(A,E).
abduce(A,E,[A|E]):-
    not element(A,E),
    abducible(A).

abducible(A):-
    not cl(A,_B).

/** <examples>
?- abduce(likes(peter,paul),Explanation).
?- abduce(flies(tweety),Explanation).
*/
```

The last two clauses of abduce/3 extend the original depth-first meta-interpreter. The program uses an accumulator containing the partial explanation found so far, such that literals are not unnecessarily duplicated in the final explanation. The query

```
?-abduce(likes(peter,paul),Explanation).
```

has the answers

```
Explanation = [student_of(paul,peter)];
Explanation = [friend(paul,peter)]
```

Interestingly, this abductive meta-interpreter also works for general clauses, but it does not always produce correct explanations. For instance, suppose the initial $$Theory$$ contains a general clause:

```
flies(X):-bird(X),not abnormal(X).
abnormal(X):-penguin(X).
bird(X):-penguin(X).
bird(X):-sparrow(X).
```

If asked to explain flies(tweety), the above program will try to find a clause explaining not(abnormal(tweety)); since there is no such clause, this negated literal will be added to the explanation. As a result, the program will give the following explanations:

```
Explanation = [not abnormal(tweety),penguin(tweety)];
Explanation = [not abnormal(tweety),sparrow(tweety)]
```

There are two problems with these explanations. First of all, the first explanation is inconsistent with the theory. Secondly, abnormal/1 is not an abducible predicate, and should not appear in an abductive explanation. For these reasons, we have to deal explicitly with negated literals in our abduction program.
As a first try, we can extend our abductive meta-interpreter with negation as failure, by adding the following clause (see also Section 3.8):

```
abduce(not(A),E,E):-       % E explains not(A)
    not abduce(A,E,E).     % if E doesn't explain A
```

In order to prevent the query abducible(not(A)) from succeeding, we change the definition of abducible/1 to

```
abducible(A):-
    A \= not(B),
    not cl(A,B).

/** <examples>
?- abduce(flies(tweety),Explanation).
?- abduce(not(abnormal(tweety)),[penguin(tweety)],[penguin(tweety)]).
?- abduce(not(abnormal(tweety)),[],[]).
?- abduce(flies1(tweety),Explanation).
*/
```

With this extended abductive meta-interpreter, the query

```
?-abduce(flies(tweety),Explanation).
```

now results in the following, correct answer:

```
Explanation = [sparrow(tweety)]
```

The explanation [penguin(tweety)] is found to be inconsistent, since

```
?-abduce(not(abnormal(tweety)),[penguin(tweety)],[penguin(tweety)]).
```

will fail, as it should. However, this approach relies on the fact that negated literals are checked after the abductive explanation has been constructed. To illustrate this, suppose that $$Theory$$ is extended with the following clause:

```
flies1(X):-not abnormal(X),bird(X).
```

Since ?-abduce(not(abnormal(tweety)),[],[]). succeeds, any explanation of bird(tweety) will also be an explanation of flies1(tweety), which is of course wrong. The problem here is that the fact that abnormal(tweety) is considered to be false is not reflected in the explanation. Thus, we need a separate predicate abduce_not/3 for building explanations for literals assumed to be false. The full program is given below. There are two changes in abduce/3: in the fifth clause, an abducible A is only added to the explanation E if it is consistent with it, i.e. if E does not explain not(A). In the sixth clause, an explicit explanation for not(A) is constructed.

```
% abduce(O,E0,E) <- E is abductive explanation of O, given
%                   E0 (works also for general programs)
abduce(true,E,E):-!.
abduce((A,B),E0,E):-!,
    abduce(A,E0,E1),
    abduce(B,E1,E).
abduce(A,E0,E):-
    clause(A,B),
    abduce(B,E0,E).
abduce(A,E,E):-
    element(A,E).              % already assumed
abduce(A,E,[A|E]):-            % A can be added to E
    not element(A,E),          % if it's not already there,
    abducible(A),              % if it's abducible,
    not abduce_not(A,E,E).     % and E doesn't explain not(A)
abduce(not(A),E0,E):-          % find explanation for not(A)
    not element(A,E0),         % should be consistent
    abduce_not(A,E0,E).
```

The definition of abduce_not/3 closely mirrors the clauses for abduce/3:

1. a negated conjunction not((A,B)) is explained by either explaining not(A) or not(B);
2. if there are clauses for A, then not(A) is explained by constructing an explanation for not(B), for every body B;
3. not(A) is explained if it is already part of the explanation;
4. otherwise, not(A) is explained by itself, if A is abducible and not explained;
5. not(not(A)) is explained by explaining A.

There is no clause for true, since not(true) cannot be explained.

```
% abduce_not(O,E0,E) <- E is abductive expl. of not(O)
abduce_not((A,B),E0,E):-!,
    abduce_not(A,E0,E);        % disjunction
    abduce_not(B,E0,E).
abduce_not(A,E0,E):-
    setof(B,clause(A,B),L),
    abduce_not_l(L,E0,E).
abduce_not(A,E,E):-
    element(not(A),E).         % already assumed
abduce_not(A,E,[not(A)|E]):-   % not(A) can be added to E
    not element(not(A),E),     % if it's not already there,
    abducible(A),              % if A is abducible
    not abduce(A,E,E).         % and E doesn't explain A
abduce_not(not(A),E0,E):-      % find explanation for A
    not element(not(A),E0),    % should be consistent
    abduce(A,E0,E).

abduce_not_l([],E,E).
abduce_not_l([B|Bs],E0,E):-
    abduce_not(B,E0,E1),
    abduce_not_l(Bs,E1,E).
```

We illustrate the program on the following set of clauses. Notice that there are several explanations for abnormal(tweety).

```
cl(flies1(X),(not(abnormal(X)),bird(X))).
cl(flies(X),(bird(X),not(abnormal(X)))).
cl(abnormal(X),penguin(X)).
cl(bird(X),penguin(X)).
cl(bird(X),sparrow(X)).

/** <examples>
?- abduce(flies(tweety),Explanation).
?- abduce(flies1(tweety),Explanation).
*/
```

The following queries show that the order of unnegated and negated literals in a clause only influences the order in which abducibles are added to the explanation, but not the explanation itself:

```
?-abduce(flies(tweety),Explanation).
  Explanation = [not abnormal(tweety),sparrow(tweety)]

?-abduce(flies1(tweety),Explanation).
  Explanation = [sparrow(tweety),not abnormal(tweety)]
```

Exercise 8.4 The abductive meta-interpreter will loop on the program

```
wise(X):-not teacher(X).
teacher(peter):-wise(peter).
```

with the query ?-abduce(teacher(peter),E) (see Section 8.2). Change the interpreter such that this query is handled correctly, by adding all literals collected in the proof to the abductive explanation.

Abduction can be used for formulating hypotheses about faulty components in a malfunctioning system. Here, the $$Theory$$ is a description of the operation of the system, an $$Observation$$ is a combination of input values and the observed output values, and $$Explanation$$ is a diagnosis, telling us which components are malfunctioning. As an example we consider a logical circuit for adding three binary digits. Such a circuit can be built from two XOR-gates, two AND-gates, and an OR-gate (Figure 8.3). Its behaviour can be described logically as follows:

```
adder(X,Y,Z,Sum,Carry):-
    xor(X,Y,S),
    xor(Z,S,Sum),
    and(X,Y,C1),
    and(Z,S,C2),
    or(C1,C2,Carry).

xor(0,0,0).    and(0,0,0).    or(0,0,0).
xor(0,1,1).    and(0,1,0).    or(0,1,1).
xor(1,0,1).    and(1,0,0).    or(1,0,1).
xor(1,1,0).    and(1,1,1).    or(1,1,1).
```

These clauses describe the normal operation of the system. However, since diagnosis deals with faulty operation of components, we have to extend the system description with a so-called fault model. Such a fault model describes the behaviour of each component when it is in a faulty state. We distinguish two faulty states: the output of a component can be stuck at 0, or it can be stuck at 1. Faulty states are expressed by literals of the form fault(Name=State), where State is either s0 (stuck at 0) or s1 (stuck at 1).
The Name of a component is given by the system that contains it. Since components might be nested (e.g. the adder might itself be part of a circuit that adds two 8-bit binary numbers), the names of the components of a sub-system are prefixed by the name of that sub-system. This results in the following system description:

```
adder(N,X,Y,Z,Sum,Carry):-
    xorg(N-xor1,X,Y,S),
    xorg(N-xor2,Z,S,Sum),
    andg(N-and1,X,Y,C1),
    andg(N-and2,Z,S,C2),
    org(N-or1,C1,C2,Carry).

xorg(N,X,Y,Z):-xor(X,Y,Z).
xorg(N,0,0,1):-fault(N=s1).
xorg(N,0,1,0):-fault(N=s0).
xorg(N,1,0,0):-fault(N=s0).
xorg(N,1,1,1):-fault(N=s1).

andg(N,X,Y,Z):-and(X,Y,Z).
andg(N,0,0,1):-fault(N=s1).
andg(N,0,1,1):-fault(N=s1).
andg(N,1,0,1):-fault(N=s1).
andg(N,1,1,0):-fault(N=s0).

org(N,X,Y,Z):-or(X,Y,Z).
org(N,0,0,1):-fault(N=s1).
org(N,0,1,0):-fault(N=s0).
org(N,1,0,0):-fault(N=s0).
org(N,1,1,0):-fault(N=s0).
```

Such a fault model, which includes all possible faulty behaviours, is called a strong fault model. In order to diagnose the system, we declare fault/1 as the (only) abducible predicate, and we make a call to abduce/2:

```
diagnosis(Observation,Diagnosis):-
    abduce(Observation,Diagnosis).

abducible(fault(_X)).

/** <examples>
*/
```

For instance, suppose the inputs X=0, Y=0 and Z=1 result in the outputs Sum=0 and Carry=1 (a double fault). In order to diagnose this behaviour, we formulate the following query:

```
?-diagnosis(adder(a,0,0,1,0,1),D).
  D = [fault(a-or1=s1),fault(a-xor2=s0)];
  D = [fault(a-and2=s1),fault(a-xor2=s0)];
  D = [fault(a-and1=s1),fault(a-xor2=s0)];
  D = [fault(a-and2=s1),fault(a-and1=s1),fault(a-xor2=s0)];
  D = [fault(a-xor1=s1)];
  D = [fault(a-or1=s1),fault(a-and2=s0),fault(a-xor1=s1)];
  D = [fault(a-and1=s1),fault(a-xor1=s1)];
  D = [fault(a-and2=s0),fault(a-and1=s1),fault(a-xor1=s1)];
  No more solutions
```

The first diagnosis is very obvious: it states that or1 (which calculates Carry) is stuck at 1, and xor2 (which calculates Sum) is stuck at 0.
But the fault in the output of or1 might also be caused by and2 or and1, and even by both! The fifth diagnosis is an interesting one: if xor1 is stuck at 1, this accounts for both faults in the outputs of the adder. The remaining three diagnoses are considerably less interesting, since each of them makes unnecessary assumptions about additional faulty components.

The predicate diagnosis/2 generates every possible diagnosis; it does not make any assumptions about the relative plausibility of each of them. Several such assumptions can be made. For instance, we might be interested in the diagnoses with the least number of faulty components (there is only one smallest diagnosis in the example, but there may be several in general). Alternatively, we might want to consider only non-redundant or minimal diagnoses: those of which no proper subset is also a diagnosis. This is readily expressed in Prolog:

```
min_diagnosis(O,D):-
    diagnosis(O,D),
    not((diagnosis(O,D1),proper_subset(D1,D))).

/** <examples>
?- min_diagnosis(adder(a,0,0,1,0,1),D).
*/
```

It should be noted that the predicate min_diagnosis/2 is quite inefficient, since it needs time quadratic in the number of diagnoses (for each possible diagnosis, it generates in the worst case each possible diagnosis to see if the second is a proper subset of the first). In turn, the number of diagnoses is exponential in the number of components. More efficient ways of generating minimal diagnoses can be found in the literature; they fall outside the scope of this book.
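The quadratic subset test performed by min_diagnosis/2 can be sketched outside Prolog as well; here in Python, with the eight diagnoses for adder(a,0,0,1,0,1) written as sets and the fault terms abbreviated:

```python
# Each set is one diagnosis returned by diagnosis/2 above.
diagnoses = [
    {"or1=s1", "xor2=s0"},
    {"and2=s1", "xor2=s0"},
    {"and1=s1", "xor2=s0"},
    {"and2=s1", "and1=s1", "xor2=s0"},
    {"xor1=s1"},
    {"or1=s1", "and2=s0", "xor1=s1"},
    {"and1=s1", "xor1=s1"},
    {"and2=s0", "and1=s1", "xor1=s1"},
]

def minimal(ds):
    """Keep only diagnoses of which no proper subset is also a diagnosis."""
    return [d for d in ds
            if not any(other < d for other in ds)]  # other < d: proper subset

for d in minimal(diagnoses):
    print(sorted(d))
```

Like the Prolog version, this compares every diagnosis against every other, so it is quadratic in the number of diagnoses; it keeps the first three diagnoses and the single-fault diagnosis {xor1=s1}, discarding the four redundant supersets.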
https://en.wikipedia.org/wiki/Displaced_Poisson
# Displaced Poisson distribution

In statistics, the displaced Poisson distribution is a generalization of the Poisson distribution, with probability mass function

$$P(X=n)=\begin{cases}e^{-\lambda}\dfrac{\lambda^{n+r}}{\left(n+r\right)!}\cdot\dfrac{1}{I\left(r,\lambda\right)},\quad n=0,1,2,\ldots & \text{if } r\geq 0\\[10pt] e^{-\lambda}\dfrac{\lambda^{n+r}}{\left(n+r\right)!}\cdot\dfrac{1}{I\left(r+s,\lambda\right)},\quad n=s,s+1,s+2,\ldots & \text{otherwise}\end{cases}$$

where $$\lambda>0$$ and $$r$$ is a new parameter; the Poisson distribution is recovered at $$r=0$$. Here $$I\left(\cdot,\cdot\right)$$ is the incomplete gamma function and $$s$$ is the integral part of $$r$$. The motivation given by Staff[1] is that the ratio of successive probabilities in the Poisson distribution (that is, $$P(X=n)/P(X=n-1)$$) is given by $$\lambda/n$$ for $$n>0$$, and the displaced Poisson generalizes this ratio to $$\lambda/\left(n+r\right)$$.
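For integer $$r \geq 0$$ the normalizer $$I(r,\lambda)$$ can be computed by summing the series directly, which makes the two properties in the text easy to check numerically: the Poisson case is recovered at $$r=0$$, and successive probabilities have ratio $$\lambda/(n+r)$$. A sketch; the truncation at a fixed number of terms is an assumption:

```python
import math

def displaced_poisson_pmf(lam, r, nmax=500):
    """Displaced Poisson probabilities p(0..nmax-1) for integer r >= 0.

    The normalizer I(r, lam) is taken as the sum of the unnormalized
    terms exp(-lam) * lam**(n+r) / (n+r)!, so no incomplete-gamma
    routine is needed (truncated at nmax terms).
    """
    terms = []
    t = math.exp(-lam) * lam**r / math.factorial(r)  # n = 0 term
    for n in range(nmax):
        terms.append(t)
        t *= lam / (n + 1 + r)   # successive-term ratio lam/(n+r)
    norm = sum(terms)            # plays the role of I(r, lam)
    return [x / norm for x in terms]

# r = 0 recovers the ordinary Poisson distribution
p = displaced_poisson_pmf(4.0, 0)
print(p[3], math.exp(-4) * 4**3 / math.factorial(3))

# the successive-probability ratio generalizes lam/n to lam/(n+r)
q = displaced_poisson_pmf(4.0, 2)
print(q[5] / q[4], 4.0 / (5 + 2))
```

Negative `r` (the second case of the pmf, shifted to start at `n = s`) is not handled by this sketch.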
http://openstudy.com/updates/5134af91e4b093a1d9495e26
anonymous 3 years ago PreCalc & Trig Help, Picture Below!

1. anonymous
2. ajprincess: Using Pythagoras' theorem u can find the length of the hypotenuse. a^2+b^2=c^2. Here a=2, b=3, c=? [drawing] Does that help? @Atkinsoha
3. anonymous: I did all of that.. I'm stuck on maybe how to solve it, since I can't seem to come up with any of the answers that are given.
4. ajprincess: What is the value of the hypotenuse?
5. anonymous: (√13)? I hope..
6. ajprincess: yup right:)
7. ajprincess: nw what is $$\cos\theta$$
8. anonymous: 3/√13
9. ajprincess: yup:) what is $$\cos^2\theta$$?
10. anonymous: That's where I'm lost apparently. 3^2 = 9 and (√13)^2 = 13?
11. anonymous: WAIT. Is it A? 5/13? Because then it would be 2(9/13)-1, which would be (18/13)-1, and one would be 13/13, so it would be 5/13? A?
12. ajprincess: yup:)
13. anonymous: HOLY BEJEBOUS! Thank you so much! I guess I just needed someone to break it down!
14. anonymous: Can you help me with this one? I'm sure it's really easy, but I've been gone from school.. so haven't learned it :P
15. ajprincess: Let us say arcsinx = theta and arccosx = beta, so sin theta = x and cos beta = x. Then sin(arcsinx+arccosx) = sin(theta+beta) = sin theta*cos beta + sin beta*cos theta
16. ajprincess: cos theta = sqrt(1-sin^2 theta) and sin beta = sqrt(1-cos^2 beta). Plug in the values and u will get the algebraic expression:)
17. anonymous: Yeahh.. I don't understand at all. haha.
18. ajprincess: Really sorry, hav to go nw. Please post this as a new question. Hope someone will help u.:) Really sorry
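Both identities worked out in the thread above can be verified numerically; a quick sketch, not part of the original exchange:

```python
import math

# Double-angle step from the thread: with cos(theta) = 3/sqrt(13),
# cos(2*theta) = 2*cos(theta)**2 - 1 should come out to 5/13 (answer A).
cos_theta = 3 / math.sqrt(13)
cos_2theta = 2 * cos_theta**2 - 1
print(cos_2theta, 5 / 13)

# Second question: expanding sin(arcsin x + arccos x) as suggested gives
# x*x + sqrt(1-x^2)*sqrt(1-x^2) = x^2 + (1 - x^2) = 1 for any x in [-1, 1].
for x in (-0.9, 0.0, 0.3, 1.0):
    assert abs(math.sin(math.asin(x) + math.acos(x)) - 1.0) < 1e-12
```

The second result also follows directly from arcsin x + arccos x = π/2.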
https://electronics.stackexchange.com/questions/461525/rc-filter-with-voltage-divider
# RC filter with voltage divider

I know I have a trivial question, but I can't find a proper explanation on the internet. In the image below, I know that the formula for the cutoff frequency is f = 1/(2*PI*(R1//R2)*C).

One way to get this formula is to find the parallel impedance of R2 and C1, then apply the voltage divider formula to R1 and R2//C1, and from that derive the cutoff frequency. But I was wondering: is there a better way to get this formula by finding the equivalent resistance of R1 and R2 first, and then substituting this equivalent resistance into the simple RC filter formula? Thanks in advance.

• Both resistors in parallel, then T = RC. – Marko Buršič Oct 4 at 19:40
• Yes, Thevenin's theorem seems to work, but I am a bit confused about Vth, since it is the equivalent Thevenin voltage. We don't really need that voltage for the formula, so I am a bit baffled whether we can use only the Thevenin equivalent resistance. – SGS333 Oct 4 at 19:47
• Because you are only interested in finding the pole frequency and not the "voltage level", you only need to care about the resistance seen by the capacitor. – G36 Oct 4 at 20:05
• Thanks very much – SGS333 Oct 4 at 20:06

You do calculate the divider first: $$\dfrac{V_{out}}{V_{in}}=\dfrac{R_2}{R_1+R_2}$$ Then, according to Thevenin, you calculate the equivalent resistance \(R = R_1 \parallel R_2\). Finally, from \(\tau = RC\), you calculate C.
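The shortcut discussed in the comments (only the Thevenin resistance seen by the capacitor matters for the pole) can be sketched directly. The component values below are illustrative, not from the question:

```python
import math

def cutoff_frequency(r1, r2, c):
    """Pole frequency of the R1 -> (R2 || C1) low-pass divider.

    The capacitor sees the Thevenin resistance R1 || R2,
    so f = 1 / (2 * pi * (R1 || R2) * C).
    """
    r_th = r1 * r2 / (r1 + r2)        # Thevenin equivalent resistance
    return 1 / (2 * math.pi * r_th * c)

# Illustrative values: R1 = R2 = 10 kΩ, C = 100 nF
#   -> R1 || R2 = 5 kΩ, f ≈ 318.3 Hz
print(round(cutoff_frequency(10e3, 10e3, 100e-9), 1))
```

Note that Vth never appears: it only scales the pass-band gain of the divider, not the pole location.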
2019-11-19 13:28:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7359927296638489, "perplexity": 511.34089711969284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670151.97/warc/CC-MAIN-20191119121339-20191119145339-00488.warc.gz"}
https://infoscience.epfl.ch/record/114957
Sobolev inequalities for differential forms and $L_{q,p}$-cohomology

We study the relation between Sobolev inequalities for differential forms on a Riemannian manifold $(M,g)$ and the $L_{q,p}$-cohomology of that manifold. The $L_{q,p}$-cohomology of $(M,g)$ is defined to be the quotient of the space of closed differential forms in $L^p(M)$ modulo the exact forms which are exterior differentials of forms in $L^q(M)$.

Published in: J. Geom. Anal., 16, 4, 597-631. Year: 2006.
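The verbal definition in the abstract can be written as a displayed formula. A sketch in standard notation (the symbol $H^{k}_{q,p}$ and the explicit intersection with $L^p$ are common conventions, not spelled out in this record):

```latex
% L_{q,p}-cohomology of (M,g): closed L^p k-forms, modulo exact forms
% that are exterior differentials of L^q (k-1)-forms (kept inside L^p).
H^{k}_{q,p}(M) \;=\;
\frac{\{\omega \in L^{p}(M,\Lambda^{k}) : d\omega = 0\}}
     {d\bigl(L^{q}(M,\Lambda^{k-1})\bigr) \cap L^{p}(M,\Lambda^{k})}
```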
2018-10-21 00:44:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7877166867256165, "perplexity": 345.3858095856697}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513508.42/warc/CC-MAIN-20181020225938-20181021011438-00032.warc.gz"}
http://vxwj.fitnessnutritionshop.it/invert-grayscale-image-python.html
## Invert Grayscale Image Python

The task: load a color image, convert it to grayscale, and then invert the data. Several Python libraries make this easy. The classic Python Imaging Library (PIL) still works, but its development has stagnated — its last release was in 2009 and it does not support Python 3. Luckily there is an actively developed fork of PIL called Pillow: it is easier to install, runs on all major operating systems, supports Python 3, and builds on PIL by adding more features. scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications, and OpenCV is a popular computer vision library written in C/C++ with bindings for Python that provides easy ways of manipulating color spaces.

A grayscale image has only one channel: each pixel is a single brightness value, and an 8-bit image can have 256 different shades, where 0 represents black and 255 denotes white. This is the reason grayscale images take much less space when stored on disk. For an 8-bit grayscale image, K = 2^8 = 256, and each histogram entry is defined as h(i) = number of pixels with intensity i, for all 0 ≤ i < K.

Converting a color pixel to grayscale by hand takes 3 simple steps: get the RGB value of the pixel, find the average of R, G and B, and replace the R, G and B values of the pixel with the average (Avg) calculated in step 2. A better-looking alternative computes each grayscale pixel as the weighted sum of the corresponding red, green and blue pixels using the Rec. 601 luma (Y') coefficients: Y = 0.299·R + 0.587·G + 0.114·B. (The conversion to grayscale is not unique; see Wikipedia's article on grayscale for other weightings.) A related classic exercise: given an image in PGM format — a "lowest common denominator" grayscale file format — invert the image content to produce the negative.
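The averaging/luma conversion can be sketched with NumPy alone. The random array is a stand-in for a real photo; with Pillow, the one-liner `Image.open('photo.jpg').convert('L')` does the same job:

```python
import numpy as np

def to_grayscale(rgb, weights=(0.299, 0.587, 0.114)):
    """Collapse an H x W x 3 uint8 RGB array to one grayscale channel
    using a weighted sum (Rec. 601 luma by default)."""
    y = (rgb[..., 0] * weights[0]
         + rgb[..., 1] * weights[1]
         + rgb[..., 2] * weights[2])
    return np.rint(y).astype(np.uint8)   # round, then back to 8-bit

# Stand-in image: 4 x 4 random RGB pixels.
rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
gray = to_grayscale(rgb)
print(gray.shape)   # one channel instead of three: (4, 4)
```

Passing `weights=(1/3, 1/3, 1/3)` gives the plain average from the 3-step recipe above.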
Inverting (producing a negative) is just as simple conceptually. For grayscale images, pixel values are replaced by the difference between the maximum value of the data type and the actual value — for 8-bit data, 255 − v. For binary images, True values become False and conversely. Each library exposes this directly:

- Pillow: `PIL.ImageOps.invert(image)` inverts (negates) the image; the same module provides `grayscale(image)` to convert the image to grayscale and `mirror(image)` to flip it horizontally (left to right).
- OpenCV: `cv2.bitwise_not(img)`.
- NumPy: simply `255 - img` for a uint8 array.

Why invert at all? A common reason is matching what a model expects: an MNIST digit classifier, for example, accepts images as 28×28 pixels drawn as white on black background, so a black-on-white scan has to be inverted first. Grayscale data is also handy for comparison tasks — one project note (originally in Japanese) describes building a Python feature that compares one image against several others using their grayscale histograms.

Thresholding simplifies the visual data further for analysis: if a pixel value is greater than a threshold value, it is assigned one value (may be white), else it is assigned another value (may be black). The most common way is to use the OpenCV inbuilt function `cv2.threshold`, whose first argument is the source image — which should be a grayscale image.
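Both operations can be sketched in plain NumPy. The small array stands in for a loaded image; `255 - img` computes the same result as `cv2.bitwise_not` / `ImageOps.invert` on 8-bit data, and the comparison mirrors binary thresholding à la `cv2.threshold`:

```python
import numpy as np

img = np.array([[0, 100, 200],
                [50, 150, 255]], dtype=np.uint8)   # toy grayscale image

# Negative: replace each value v with (max of dtype) - v.
negative = 255 - img

# Binary threshold: pixels above the threshold become white (255),
# everything else becomes black (0).
threshold = 128
binary = np.where(img > threshold, 255, 0).astype(np.uint8)

print(negative[0, 0], binary[0, 0])   # 255 0
```

Inverting twice is the identity: `255 - (255 - img)` gives back the original image.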
A popular application that combines these operations is turning a photo into a pencil sketch:

1. Convert the image to grayscale.
2. Invert the grayscale image to get a negative.
3. Apply a Gaussian blur to the negative from step 2.
4. Blend the grayscale image from step 1 with the blurred negative from step 3 using a color dodge.

One practical detail when blurring: for the pixels on the border of the image matrix, some elements of the kernel stick out of the image and therefore have no corresponding element from the image matrix, which is why libraries offer explicit padding controls (e.g. a `Pad(padding, fill=0, padding_mode='constant')` transform that pads an image on all sides with a given value).

This kind of pre-processing pays off beyond aesthetics. Grayscaling reduces dimensionality (one channel instead of three), and pre-processed images can help a basic model achieve high accuracy compared to a more complex model trained on images that were not pre-processed; OCR benefits in the same way, and Python provides the tool pytesseract for OCR. On the web, no Python is needed at all: the CSS filter property can convert an image to black and white (e.g. `filter: grayscale(100%)` — a value of 100% makes the image completely gray), and related filters such as `invert()` and `hue-rotate()` exist as well. And outside of code entirely, you can convert a color image to grayscale in Word or MS Paint — sometimes simply because a litigation support department has to satisfy the odd attorney who prefers to read grayscale documents.
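The four sketch steps can be prototyped in pure NumPy. A box blur stands in for the Gaussian blur, and the random array stands in for a grayscale photo; with OpenCV you would use `cv2.GaussianBlur` and a divide-based dodge instead:

```python
import numpy as np

def box_blur(img, k=3):
    """Cheap stand-in for a Gaussian blur: average over a k x k window
    (zero padding at the borders, so edges darken slightly)."""
    padded = np.pad(img.astype(np.float64), k // 2)
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dodge_blend(base, blend):
    """Color dodge: brighten `base` by the `blend` layer."""
    result = base.astype(np.float64) * 255.0 / (255.0 - blend + 1e-6)
    return np.clip(result, 0, 255).astype(np.uint8)

gray = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # step 1 (toy)
negative = 255 - gray                                          # step 2
blurred = box_blur(negative)                                   # step 3
sketch = dodge_blend(gray, blurred)                            # step 4
print(sketch.shape)
```

The dodge pushes flat regions toward white while keeping edges dark, which is what gives the pencil look.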
A binary image is the simplest case of all: the pixel value is a 1-bit number indicating either foreground or background. To convert an RGB image into a binary (black and white) image, first convert it to grayscale and then threshold it. To skip the intermediate step when loading from disk, pass the flag 0 to OpenCV's reader — `img = cv2.imread('<image path>', 0)` loads the image in grayscale directly.

Two cautions when converting by hand. First, processing the R, G and B channels separately, without considering their relevance to one another, will distort the image — which is why the weighted Rec. 601 luma conversion is preferred; PHP's GD library, for instance, uses the same coefficients in its IMG_FILTER_GRAYSCALE filter. Second, keep the normalization convention in mind: the combination of the primary colors is normalized with R + G + B = 1, which gives the neutral white color.

Inversion also plays a role in mathematical morphology, the theory behind dilation (dilate) and erosion (erode) — two basic algorithms commonly used in digital image processing that can extract meaningful regions of an image, such as boundaries. A typical pipeline: start with a binary image and apply an opening operation with some structuring element; then invert the image (to change the foreground to background and vice versa) and apply a closing operation with the same structuring element to obtain another output image. Relatedly, a blob is a group of connected pixels in an image that share some common property, and OpenCV provides a convenient way to detect blobs.

The same ideas are available outside the Python ecosystem. ImageJ converts 16-bit and 32-bit images and stacks to 8-bit by linearly scaling from min–max to 0–255, and Jython — an implementation of the Python programming language designed to run on the Java platform — is one of several scripting languages ImageJ supports for accessing its API. In Qt, QImage is designed and optimized for I/O and for direct pixel access and manipulation, while QPixmap is designed and optimized for showing images on screen (QBitmap is only a convenience class that inherits QPixmap, ensuring a depth of 1).
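The opening/inversion/closing pipeline can be sketched from scratch on a boolean mask. This toy version uses a fixed 3 x 3 square structuring element and zero padding at the borders; real code would call `cv2.morphologyEx` or scikit-image instead:

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation: a pixel turns on if any pixel in its
    k x k neighborhood is on (zero padding at the borders)."""
    p = np.pad(mask, k // 2)
    windows = [p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
               for dy in range(k) for dx in range(k)]
    return np.any(windows, axis=0)

def erode(mask, k=3):
    """Binary erosion, via duality with dilation on the complement."""
    return ~dilate(~mask, k)

def opening(mask, k=3):
    return dilate(erode(mask, k), k)   # erosion then dilation

def closing(mask, k=3):
    return erode(dilate(mask, k), k)   # dilation then erosion

# Toy mask: a solid 5 x 5 foreground square plus an isolated noise pixel.
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
mask[0, 8] = True                      # speckle

opened = opening(mask)                 # speckle removed, square preserved
result = closing(~opened)              # pipeline: invert, then close
print(bool(opened[0, 8]), bool(opened[4, 4]))   # False True
```

Opening removes small foreground specks; closing on the inverted image does the dual job of filling small background holes.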
It's a powerful library, but hasn't been updated since 2011 and doesn't support Python 3. x and also will be showing you an example on how to reverse a list. All the properties of video devices, such as video formats, exposure times and many more can be set. jpg') These few lines of Python code will resize an image ( fullsized_image. scikit-image is a Python package dedicated to image processing, and using natively NumPy arrays as image objects. We will flip the image across the x-axis, the y-axis and then across both axes. OpenCV Python – Rotate Image In this tutorial, we shall learn how to rotate an image to 90, 180 and 270 degrees in OpenCV Python with an example. Two important functions in image processing are blurring and grayscale. Converting an image into grayscale. Computer Vision with Python 3. Replace the R, G and B value of the pixel with average (Avg) calculated in step 2. Export PDF for offset printing. Converting these PDFs to grayscale or black can reduce the size of the file and speed printing. Free online image to grayscale converter. width: height: offset: flip h: flip v: invert: zoom: Predefined format: BGR8 RGB32 RGB24 RGB565 YUV 420 SP (NV12) YUV 420 SP (NV21) Grayscale 8bit YV12 YV16 IYUV RGB555 RGB4_byte RGB8 YUV444p YUV440p YUV422p YUV420p YUV411p YUV410p Y444 YUV555 YUV565 UYVY YUY2. It is a great fit for those times when you may want something to be less noticeable and only get "focus" when the user shows interest. convert('L') # convert the RGB color image to a grayscale image. Armoured with Google and Python, I decided to spend an evening doing something useful and figuring out the subject a little. GitHub Gist: instantly share code, notes, and snippets. Sometimes it can be useful to convert a color PDF to grayscale. Run the code below with the Python Idle application on either the Raspberry Pi or the Windows desktop. grayscale {-webkit-filter: grayscale(80%);} Hue-rotate. Add two additional channels to a grayscale! 
There are a variety of ways to do this, so my way is below: copy the first layer into new layers of a new 3D array, thus generating a color image (of a black-and-white, so it’ll still be B&W). Start with a binary image and apply opening operation with some structuring element (e. Jython is an implementation of the Python programming language designed to run on the Java platform. Invert the image (to change the foreground to background and vice versa) and apply closing operation on it with the same structuring element to obtain another output image. COLOR_BGR2GRAY. A simple tutorial on how to get started with them can be found here. 9 Pyenv anaconda-2. All scripting language supported by ImageJ can be used to access the ImageJ API. Get the RGB value of the pixel. If a sender sends a gray-scale image over a low communication channel bandwidth then the. In Matplotlib, a colorbar is a separate axes that can provide a key for the meaning of colors in a plot. Here is a Python script to load an image in grayscale instead of full color, and then create and display the corresponding histogram. Programming Computer Vision with Python: Tools and algorithms for analyzing images Jan Erik Solem If you want a basic understanding of computer vision’s underlying theory and algorithms, this hands-on introduction is the ideal place to start. Just drag and drop your image and it will be automatically grayscaled. You'll get a dialog box like the one at right. If an image is grayscale, the tuple returned contains only the number of rows and columns, so it is a good method to check whether the loaded image is grayscale or color. Low prices across earth's biggest selection of books, music, DVDs, electronics, computers, software, apparel & accessories, shoes, jewelry, tools & hardware, housewares, furniture, sporting goods, beauty & personal care, groceries & just about anything else. 
mask: optional operation mask, an 8-bit single-channel array that specifies which elements of the output array are to be changed. RGB to grayscale: this example converts an image with RGB channels into an image with a single grayscale channel, after which you can modify the data of the image at a pixel level by updating the array values. A pencil-sketch effect follows the same pattern: convert to grayscale, invert to obtain a negative, then apply a Gaussian blur to the negative. In Matplotlib, plt.subplot() is used to obtain the current axes, and the axes' invert functions reverse the axis order. In OpenCV the grayscale image can likewise be reversed for a smaller sum of pixels: img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) followed by img_gray_reversed = cv2.bitwise_not(img_gray). Demo experiment: visit an RGB explorer and figure out how to make a shade of gray, e.g. with equal R, G and B values. IC Capture is an end-user application to acquire images from any video device manufactured by The Imaging Source, including industrial cameras, frame grabbers and video converters; all the properties of the video devices, such as video formats, exposure times and many more, can be set. PIL documentation is available online. In thresholding, if a pixel in the input image passes the threshold test, its value is set to 255. The grayscale conversion gives the brightness information of the image. You can also crop a meaningful part of an image, for example the python circle in the logo. Image processing and computer vision have gained a significant interest in the last two decades. After adding a color image to a Microsoft Word 2003 document, whether the image came from a file, a scanner, a digital camera or other means, you may decide later to convert the color image to grayscale.
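The grayscale histogram mentioned above can be computed with NumPy alone; the tiny synthetic array is a stand-in for an image that would normally come from cv2.imread(path, 0) or Image.open(path).convert('L'):

```python
import numpy as np

# Stand-in 8-bit grayscale image
gray = np.array([[0, 0, 128],
                 [255, 128, 128]], dtype=np.uint8)

# 256-bin histogram over the full 0..255 range;
# plt.bar(range(256), hist) or cv2.calcHist would plot/compute the same data
hist, _ = np.histogram(gray, bins=256, range=(0, 256))
```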
For binary images, inversion means True values become False and conversely. Skew detection and correction can be done in Python with OpenCV. A sepia effect is applied with the putpalette() method of the image object. Every pixel of a grayscale image has a brightness value ranging from 0 (black) to 255 (white). To rotate an image using OpenCV Python, first calculate the affine matrix that does the affine transformation (a linear mapping of pixels), then warp the input image with the affine matrix. Good examples of domains that rely on grayscale data are medical imaging and biological imaging. Interpolation lets us easily find the coordinates and values of unknown pixels from their known neighbours. To get a hover-to-color effect working in the old days, you needed two images: one in color and one in grayscale. I don't want to change every pixel to the same color; I plan on creating a simple algorithm to change each pixel's RGB values based upon its current RGB value. The line image = cv2.imread('<image path>', 0) loads the image in grayscale. np.fft.fft2() provides us the frequency transform, which will be a complex array. We will also show a way to define a custom colormap if you would rather use your own.
Pre-processed images can help a basic model achieve high accuracy when compared to a more complex model trained on images that were not pre-processed. To average the channels, find the mean of R, G and B, i.e. Avg = (R + G + B) / 3. The classic Lena gray-scale image is a convenient test input. NumPy is suited very well for this type of application due to its inherent multidimensional nature. Grayscaling is the process of converting an image from another color space, e.g. RGB, to shades of gray. While grayscale images are rarely saved with a color map, MATLAB uses a color map to display them. In a facial-expression dataset, the faces have been automatically registered so that each face is more or less centered and occupies about the same amount of space in every image. Equal R, G and B values produce dark gray, medium gray or light gray; these grays lack "hue". Under Windows (XP, Vista or Seven) the installation of PIL is rather simple: just launch the PIL Windows installer. OpenCV, a popular computer vision library written in C/C++ with bindings for Python, provides easy ways of manipulating color spaces. Next we convert our image to black and white (or gray scale, depending on how you look at it).
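The averaging recipe — get each pixel's RGB value, compute Avg = (R + G + B) / 3, use Avg as the gray value — is one vectorized line in NumPy; the 1x3-pixel array is a hypothetical stand-in for a loaded image:

```python
import numpy as np

# Hypothetical 1x3-pixel RGB image: pure red, a dark mixed color, black
rgb = np.array([[[255, 0, 0],
                 [10, 20, 30],
                 [0, 0, 0]]], dtype=np.uint8)

# Average R, G and B for every pixel and use the result as the gray value
avg = rgb.mean(axis=2).astype(np.uint8)
```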
In this section, we'll look at the structure of grayscale vs. color images, and some code to play with that difference. A simple invert helper just wraps ImageOps.invert(image). Python is a high-level programming language famous for its clear syntax and code readability. GIMP exposes many of the same color operations as filters: decompose/recompose, convert to gray scale (desaturate), equalize, exchange colors, invert, gradient map, palette map, max RGB, normalize, rotate colors, sample colorize and smooth palette. A large part of PIL's code is written in C for performance reasons. To know an image's pixel resolution, the TiffDecoder class is useful, as it does not load the image data. A small menu-driven script might print options such as "Convert the image to grayscale" and "Invert the colors of the image". Here I will show how to implement OpenCV functions and apply them in various aspects using some great examples. It's done in this way: save the converted image with cv2.imwrite(), passing the name of the converted image and the variable gray_image to which the converted image was stored: cv2.imwrite(filename, gray_image).
This is a grayscale 16-bit image, presumably created in Photoshop. To change the color of all images on a page to black and white (100% gray) in CSS, use -webkit-filter: grayscale(100%) (Safari 6.0+). Third, I can put my laserless lasercutter to work to do the heavy lifting. In thresholding, the second argument is the threshold value itself. For grayscale images, inversion replaces each pixel value by the difference between the maximum value of the data type and the actual value. Recall that negative numbers are stored in two's complement, so the number -5 is treated by bitwise operators as if it were written "…1111111111111111111011". The QPicture class is a paint device that records and replays QPainter commands. HSL stands for hue-saturation-lightness, and is a more intuitive way to create colors than RGB. One fun project is creating a sound that represents an image. Download the sample photo (penguin_parade.jpg), and choose a target image size and image format when exporting. The rgb2gray() function converts RGB images to grayscale by eliminating the hue and saturation information while retaining the luminance. All scripting languages supported by ImageJ work the same way; there are only differences in how the imports are handled and in the syntax of the selected language. PIL supports a variety of formats, including the most widely used ones such as GIF, JPEG and PNG. If you don't know the number of colors in your grayscale image, you can easily find out by counting the unique pixel values.
One can download the facial expression recognition (FER) data-set from the Kaggle challenge. Thresholding converts everything to white or black based on the threshold value. Many image-processing algorithms in OpenCV have converting the image to grayscale as a prerequisite. ImageOps.posterize(image, bits) reduces the number of bits used for each colour channel. Given an image in PGM format, the task of making a negative is to invert the image content. Alternately, the transpose() method can also be used with one of the constants such as Image.ROTATE_180. Finally we convert the image back to 'RGB' since, according to Lundh, this allows us to save the image as a JPEG. Adding an alpha value to the RGB channels gives the RGBA color space: Red, Green, Blue and Alpha respectively. PIL's ImageChops module offers a similar invert function; for palette images the result may differ due to palette limitations.
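Posterizing works by keeping only the top bits of each 8-bit channel — the same bit-masking idea ImageOps.posterize uses. A NumPy sketch (posterize here is a local helper, and the sample values are arbitrary):

```python
import numpy as np

def posterize(arr, bits):
    # Keep only the highest `bits` bits of each 8-bit channel, zeroing the rest
    mask = 0xFF & ~(2 ** (8 - bits) - 1)
    return arr & mask

img = np.array([[200, 37, 255]], dtype=np.uint8)
out = posterize(img, 2)   # with 2 bits, only the levels 0, 64, 128, 192 remain
```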
Converting OpenCV images for wxPython: as we have seen earlier, OpenCV treats images as NumPy arrays — typically 3D arrays in BGR format or 2D arrays in grayscale format — while wxPython has its own classes for representing images, typically in RGB (the reverse of BGR). Apply the bitwise_not() function from OpenCV to separate the background from the foreground. Note: negative values are not allowed. The definition of an image is very simple: it is a two-dimensional view of a 3D world, and a pixel is its smallest addressable element. One thing that can be done is rotating images any number of degrees from their original position. Exercise (Python, nested for loops): write a function grayscale() using nested for loops and this helper: def invert(filename) loads a PNG image from the file with the specified filename and creates a new image in which the colors of the pixels are inverted. All we have to do is repeat three simple steps for each pixel of the image. In ImageJ, ImageProcessor is the abstract superclass for the classes that process the four supported data types (byte, short, float and RGB). In MATLAB, im2bw and graythresh are two functions for this job that have been in the product for a long time. Grayscale images can be the result of measuring the intensity of light at each pixel.
When you call numpy.reshape, it returns a new array object with the new shape specified by the parameters (given that, with the new shape, the number of elements in the array remains unchanged), without changing the shape of the original object. For Python ndarray objects, DIM is set equal to the shape attribute in reverse order (since Python is "row major"). ImageOps.solarize(image, threshold=128) inverts all pixel values above the threshold. In this tutorial, you will learn how you can process images in Python using the OpenCV library. For floating-point data the maximum pixel value is 1.0 rather than 255 (for double-precision images). The standard luma conversion obtains the grayscale image from the RGB image by combining roughly 30% of RED, 59% of GREEN and 11% of BLUE (the ITU-R 601 weights 0.299, 0.587 and 0.114). The combination of these primary colors can also be normalized with R + G + B = 1, which gives the neutral white color.
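The weighted luma conversion mentioned above is a dot product over the channel axis; this sketch uses the ITU-R 601 weights that PIL's convert('L') documents (L = 299R/1000 + 587G/1000 + 114B/1000), on a two-pixel stand-in image:

```python
import numpy as np

# ITU-R 601 luma weights, as used by PIL's convert('L')
weights = np.array([0.299, 0.587, 0.114])

rgb = np.array([[[255, 255, 255],    # white
                 [0, 255, 0]]],      # pure green
               dtype=np.float64)

luma = rgb @ weights   # weighted sum over the channel axis -> (H, W) gray image
```

Note how green dominates the gray value, matching the eye's higher sensitivity to green.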
As noted earlier, one way is to copy the gray layer into the channels of a new 3D array, producing a color-shaped image that still looks black-and-white. A more elaborate darkness effect works like this: invert the result (so dark regions become less transparent instead of more), snap the grayscale to the discrete values "off, medium, on", scale it down to "off, faint, dark", then composite with the original image so as to restore the original alpha mask (otherwise the previously transparent background counts as solid darkness). You can correct uneven illumination or dirt/dust on lenses by acquiring a "flat-field" reference image with the same intensity of illumination as the experiment. Every pixel of a grayscale image has a brightness value ranging from 0 (black) to 255 (white). We've provided some basic examples to help you discover possible uses for your Raspberry Pi and to get started with software available in Raspbian. Python does not come with cv2, so we need to install it separately. Many OpenCV functions will only accept colors of the same type, and a mask has to be an 8-bit unsigned grayscale image. We can use the convert() function with the 'L' parameter to change an RGB color image into a gray-level image, as shown in the following code: im_g = im.convert('L'). TensorFlow also ships pre-trained models and datasets built by Google and the community.
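Copying the gray layer into three channels, as described above, is a one-line stack in NumPy (the 2x2 array is a stand-in for a real grayscale image):

```python
import numpy as np

gray = np.array([[0, 128],
                 [255, 64]], dtype=np.uint8)

# Copy the single gray channel into three identical channels: the array gains
# a color shape (H, W, 3) but still looks black-and-white
rgb = np.stack([gray, gray, gray], axis=-1)
```

np.repeat(gray[..., None], 3, axis=-1) is an equivalent spelling.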
Theory: the concept behind the negative of a grayscale image is very simple. Setting a grayscale filter on an image in CSS is done by percentages, as shown in the side-by-side comparison earlier. PIL-based code of this kind runs on both Python 2.7 and Python 3. In the previous tutorial, we used the Luminosity blend mode to blend images. Splitting very long images this way was useful to me in preparing them for LaTeX docs. Conversely, wxPython has its own classes for representing images, typically in RGB format (the reverse of BGR). Python supports very powerful tools when it comes to image processing. Hey guys, I've been reading OpenCV for Python and thought of posting a tutorial on programming a grayscale image convertor. ImageOps.grayscale(image) converts the image to grayscale. Colorizing a grayscale image is the reverse problem: the grayscale image is the input X and the colors are the feature Y to predict. The CSS grayscale filter can be used both on HTML elements and on images, but it is more interesting on images, for example when building a gallery. Converting a PIL image to an OpenCV array amounts to calling np.array() on the image and reversing the channel order; the alpha components are retained. PPM image files can be viewed and converted to other image file formats using many applications, GIMP or IrfanView for example. In the end you have a binary image that is just two colors, which in most cases is black and white.
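The PIL-to-OpenCV channel shuffle mentioned above is just a reversal of the channel axis; a one-pixel stand-in (normally the array would come from np.array(pil_image)) makes the swap visible:

```python
import numpy as np

# One pixel with channels in PIL's RGB order: R=10, G=20, B=30
rgb = np.array([[[10, 20, 30]]], dtype=np.uint8)

# OpenCV expects BGR, so converting between the two is reversing the channel axis
bgr = rgb[:, :, ::-1]
```

Applying the same slice again round-trips back to RGB.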
Grayscale images have many shades of gray in between pure black and pure white. Here is the result for the colored shape image above with a sigma value of 2. In the FER dataset, the task is to categorize each face. In the previous blog, we discussed image interpolation, its types and why we need interpolation. To convert inside MS Paint, open Paint by clicking the Desktop icon or locating it in the Start menu. Grayscale mode uses different shades of gray in an image. To test the code, simply run the previous program in the Python environment of your choice. Convert the grey scale image to binary with a threshold of your choice: for each pixel of the gray scale image, if its value is less than the threshold, we assign to it the value 0 (black). For example, the image below shows a grayscale image represented in the form of an array.
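Binarizing with a threshold of your choice can be sketched without OpenCV; the rule below matches cv2.threshold's THRESH_BINARY mode (pixels above the threshold become 255, the rest 0), applied to a synthetic 1x4 stand-in image:

```python
import numpy as np

gray = np.array([[12, 130, 200, 128]], dtype=np.uint8)
threshold = 128

# Same rule as cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY):
# strictly-greater pixels become white (255), everything else black (0)
binary = np.where(gray > threshold, 255, 0).astype(np.uint8)
```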
What I need: to iterate over the array in OpenCV and change every single value with this formula, img[x, y] = abs(img[x, y] - 255). It seems reasonable, but it doesn't work as written: with uint8 data the subtraction wraps around before abs() is applied, so the safe form is img[x, y] = 255 - img[x, y] (or cv2.bitwise_not on the whole array). The wave transform can be expressed with sinusoidal displacement equations; we shall use the mandrill image to implement it. Note that the edge output shown in an skimage window may look significantly worse than the image would look if it were saved to a file, due to resampling artefacts in the interactive image viewer. NumPy arrays can likewise be saved directly as image files. Finally, color palettes can be converted into Matplotlib colormaps.
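The wraparound bug above is easy to demonstrate: on a uint8 array, 0 - 255 wraps to 1, so abs(img - 255) produces garbage, while 255 - img gives the true negative and agrees with a bitwise NOT (what cv2.bitwise_not computes):

```python
import numpy as np

img = np.array([[0, 100, 255]], dtype=np.uint8)

# 255 - img is the safe per-pixel negative for 8-bit data,
# and it equals the bitwise NOT of the array
inverted = 255 - img
assert np.array_equal(inverted, np.invert(img))
```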
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=co&paperid=672&option_lang=eng
Computer Optics, 2019, Volume 43, Issue 4, Pages 517–527 (Mi co672)

Shaping and processing the vortex spectra of singular beams with anomalous orbital angular momentum
A. V. Volyar, M. V. Bretsko, Ya. E. Akimova, Yu. A. Egorov
Physics and Technology Institute of V.I. Vernadsky Crimean Federal University, Academician Vernadsky 4, 295007, Republic of Crimea, Simferopol, Russia

Abstract: The article examines physical mechanisms responsible for shaping the vortex avalanche induced by a weak perturbation of the holographic lattice of a combined vortex beam. For this, we have developed a new technique for measuring the degenerate spectra of optical vortices and the orbital angular momentum of combined singular beams. The technique is based on measuring the higher-order intensity moments of a beam containing vortices with both positive and negative topological charges. An appropriate choice of the mode amplitudes in the combined beam enables us to form anomalous spectral regions of the orbital angular momentum in the form of resonance dips and bursts. Since the intensity moments of a vortex mode with positive and negative topological charges are the same (the moments are degenerate) for an axially symmetric beam, the measurements are carried out in the plane of the double focus of a cylindrical lens. The calibration measurements show that the experimental error is not higher than 4.5%. We also reveal that the dips and bursts in the orbital angular momentum spectrum are caused by the vortex avalanche induced by weak perturbations of the holographic grating relief responsible for the beam shaping.
The appearance of the orbital angular momentum dips or bursts is controlled by the relation between the energy fluxes in the vortex avalanche with positive or negative topological charges.

Keywords: diffractive optics, image processing, optical vortices, orbital angular momentum, moments of intensity.

Funding: The work was funded by the Russian Foundation for Basic Research under RFBR grant No. 19-29-01233 and under grant ÂÃ24/2018, "V.I. Vernadsky Crimean Federal University Development Program in 2015–2024".

DOI: https://doi.org/10.18287/2412-6179-2019-43-4-517-527
Full text: http://www.computeroptics.smr.ru/.../430401.html
Accepted: 11.06.2019

Citation: A. V. Volyar, M. V. Bretsko, Ya. E. Akimova, Yu. A. Egorov, "Shaping and processing the vortex spectra of singular beams with anomalous orbital angular momentum", Computer Optics, 43:4 (2019), 517–527
https://www.semanticscholar.org/paper/Core-coupled-states-and-split-proton-neutron-in-S.Lalkovski-A.M.Bruce/fdf50b4b5675418418e2e9c0801ad87705bacf47
# Core-coupled states and split proton-neutron quasiparticle multiplets in 122-126Ag

S. Lalkovski, A. M. Bruce, A. Jungclaus, M. Gorska, M. Pfutzner, L. Caceres, et al.
Physical Review C, 87 (2013), 034308

Neutron-rich silver isotopes were populated in the fragmentation of a 136Xe beam and the relativistic fission of 238U. The fragments were mass analyzed with the GSI Fragment Separator and subsequently implanted into a passive stopper. Isomeric transitions were detected by 105 high-purity germanium detectors. Eight isomeric states were observed in 122-126Ag nuclei.
The level schemes of Ag-122,Ag-123,Ag-125 were revised and extended with isomeric transitions being observed for the first time… 14 Citations ## Figures and Tables from this paper Monopole-driven shell evolution below the doubly magic nucleus Sn 132 explored with the long-lived isomer in Pd 126 A new isomer with a half-life of 23.0(8) ms has been identified at 2406 keV in (126)Pd and is proposed to have a spin and parity of 10(+) with a maximally aligned configuration comprising two neutron Isomeric states in neutron-rich 129In and the πg−19/2 ⊗ vh−111/2 multiplet • Physics • 2014 Within the RISING stopped beam campaign the neutron-rich indium isotopes with masses A=125-130 have been studied using the method of isomer spectroscopy. The decays of several isomeric states have Shell evolution and isomers below 132Sn: Spectroscopy of neutron-rich 46Pd and 47Ag isotopes Neutron-rich isotopes of Pd (Z = 46) and Ag (Z = 47) have attracted considerable interest in terms of the evolution of the N = 82 shell closure and its influence on the r -process nucleosynthesis. Proton Shell Evolution below ^{132}Sn: First Measurement of Low-Lying β-Emitting Isomers in ^{123,125}Ag. Shell-model calculations with the state-of-the-art V_{MU} plus M3Y spin-orbit interaction give a satisfactory description of the low-lying states in ^{123,125}Ag, and the tensor force is found to play a crucial role in the evolution of the size of the Z=40 subshell gap. 
Nuclear Data Sheets for A=123 • Jun Chen • Physics • 2021 E2 collectivity in shell-model calculations for odd-mass nuclei near 132Sn • Physics • 2020 Shell-model calculations for 127,129 In and 129,131 Sb are presented, and interpreted in the context of the particle-core coupling scheme, wherein proton g 9/2 holes or g 7/2 particles are added to First-forbidden transitions in the reactor anomaly • Physics • 2019 We describe here microscopic calculations performed on the dominant forbidden transitions in reactor antineutrino spectra above 4 MeV using the nuclear shell model. By taking into account Coulomb ## References SHOWING 1-10 OF 107 REFERENCES Neutron-rich In and Cd isotopes close to the doubly magic {sup 132}Sn Microsecond isomers in the In and Cd isotopes, in the mass range A=123 to 130, were investigated at the ILL reactor, Grenoble, using the LOHENGRIN mass spectrometer, through thermal-neutron induced Spherical proton-neutron structure of isomeric states in 128Cd The gamma-ray decay of isomeric states in the even-even nucleus Cd-128 has been observed. The nucleus of interest was produced both by the fragmentation of Xe-136 and the fission of U-238 primary Spectroscopy of exotic 121, 123, 125Ag produced in fragmentation reactions Excited states in the neutron-rich 121, 123, 125Ag were studied via the fragmentation of a 136Xe beam at 120MeV/nucleon in a thick 9Be target. The levels in 121Ag were populated in the $\beta$ decay Observation of isomeric decays in the r-process waiting-point nucleus 130Cd82. The gamma decay of excited states in the waiting-point nucleus (130)Cd(82) has been observed for the first time and state-of-the-art nuclear shell-model calculations allow us to follow nuclear isomerism throughout a full major neutron shell. First observation of the decay of a 15- seniority v=4 isomer in 128Sn Isomeric states in the semimagic Sn128-130 isotopes were populated in the fragmentation of a Xe-136 beam on a Be-9 target at an energy of 750 A.MeV. 
The decay of an isomeric state in ^{128}Sn at an…

**Single-particle isomeric states in ^{121}Pd and ^{117}Ru.** Neutron-rich nuclei were populated in relativistic fission of ^{238}U. Gamma rays with energies of 135 keV and 184 keV were associated with two isomeric states in ^{121}Pd and ^{117}Ru. Half-lives of…

**First results with the RISING active stopper.** This paper outlines some of the physics opportunities available with the GSI RISING active stopper and presents preliminary results from an experiment aimed at performing beta-delayed gamma-ray spectroscopy…

**Structural evolution in the neutron-rich nuclei ^{106}Zr and ^{108}Zr.** The low-lying states in ^{106}Zr and ^{108}Zr have been investigated by means of β-γ and isomer spectroscopy at the radioactive isotope beam factory (RIBF), respectively. A new isomer with a half-life of…
https://indico.cern.ch/event/355942/contributions/841693/
# 2015 CAP Congress / Congrès de l'ACP 2015

13-19 June 2015, University of Alberta, America/Edmonton timezone. Welcome to the 2015 CAP Congress! / Bienvenue au congrès de l'ACP 2015!

## Restricted Weyl Invariance in Four-Dimensional Curved Spacetime

17 Jun 2015, 09:15, 15m, CAB 235 (University of Alberta). Oral (Non-Student) / orale (non-étudiant). Theoretical Physics / Physique théorique (DTP-DPT)

### Speaker

Prof. Ariel Edery (Bishop's)

### Description

We discuss the physics of *restricted Weyl invariance*, a symmetry of dimensionless actions in four-dimensional curved spacetime. When we study a scalar field nonminimally coupled to gravity with Weyl (conformal) weight $-1$ (i.e. a scalar field with the usual two-derivative kinetic term), we find that dimensionless terms are either fully Weyl invariant or are Weyl invariant if the conformal factor $\Omega(x)$ obeys the condition $g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\Omega=0$. We refer to the latter as *restricted Weyl invariance*. We show that all the dimensionless geometric terms, such as $R^2$, $R_{\mu\nu}R^{\mu\nu}$ and $R_{\mu\nu\sigma\tau}R^{\mu\nu\sigma\tau}$, are restricted Weyl invariant. Restricted Weyl transformations possess nice mathematical properties, such as the existence of a composition and an inverse, in four-dimensional spacetime. We exemplify the distinction among rigid Weyl invariance, restricted Weyl invariance and full Weyl invariance in dimensionless actions constructed out of scalar fields and vector fields with Weyl weight zero.

### Primary authors

Prof. Ariel Edery (Bishop's), Dr Yu Nakayama (Caltech)
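Spelled out, the symmetry in question acts as follows (a compact restatement assembled only from the definitions in the abstract — the metric transformation, the weight $-1$ scalar, and the stated condition on $\Omega$):

```latex
% Weyl transformation of the metric and of a scalar of Weyl weight -1:
g_{\mu\nu} \;\to\; \Omega^{2}(x)\, g_{\mu\nu}, \qquad
\phi \;\to\; \Omega^{-1}(x)\, \phi .
% "Restricted" Weyl invariance: the dimensionless action is invariant
% only for conformal factors obeying
g^{\mu\nu} \nabla_{\mu} \nabla_{\nu} \Omega = 0 .
```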
http://assert.pub/arxiv/q-bio/q-bio.bm/
### Top 10 Arxiv Papers Today in Biomolecules

##### #1. Self-assembly of model proteins into virus capsids

###### Karol Wolek, Marek Cieplak

We consider self-assembly of proteins into a virus capsid by the methods of molecular dynamics. The capsid corresponds either to SPMV or CCMV and is studied with and without the RNA molecule inside. The proteins are flexible and described by the structure-based coarse-grained model augmented by electrostatic interactions. Previous studies of the capsid self-assembly involved solid objects of a supramolecular scale, e.g. corresponding to capsomeres, with engineered couplings and stochastic movements. In our approach, a single capsid is dissociated by an application of a high temperature for a variable period and then the system is cooled down to allow for self-assembly. The restoration of the capsid proceeds to various extent, depending on the nature of the dissociated state, but is rarely complete because some proteins depart too far unless the process takes place in a confined space.

##### #2. Small-world networks and RNA secondary structures

###### Defne Surujon, Yann Ponty, Peter Clote

Let Sn denote the network of all RNA secondary structures of length n, in which undirected edges exist between structures s, t such that t is obtained from s by the addition, removal or shift of a single base pair. Using context-free grammars, generating functions and complex analysis, we show that the asymptotic average degree is O(n) and that the asymptotic clustering coefficient is O(1/n), from which it follows that the family Sn, n = 1, 2, 3, ... of secondary structure networks is not small-world.

##### #3.
Delineating elastic properties of kinesin linker and their sensitivity to point mutations

###### Michał Świątek, Ewa Gudowska-Nowak

We analyze free energy estimators from simulation trials mimicking single-molecule pulling experiments on a neck linker of a kinesin motor. For that purpose, we have performed a version of steered molecular dynamics (SMD) calculations. The sample trajectories have been analyzed to derive the distribution of work done on the system. In order to induce unfolding of the linker, we have stretched the molecule at a constant pulling force and allowed for a subsequent relaxation of its structure. The use of fluctuation relations (FR) relevant to non-equilibrium systems subject to thermal fluctuations allows us to assess the difference in free energy between stretched and relaxed conformations. To further understand effects of potential mutations on elastic properties of the linker, we have performed similar in silico studies on a structure formed of a polyalanine sequence (Ala-only) and on three other structures, created by substituting selected types of amino acid residues in the linker's sequence with alanine (Ala) ones. The results of SMD...

##### #4. Frictional effects on RNA folding: Speed limit and Kramers turnover

###### Naoto Hori, Natalia A. Denesyuk, D. Thirumalai

We investigated frictional effects on the folding rates of a human Telomerase hairpin (hTR HP) and an H-type pseudoknot from the Beet Western Yellow Virus (BWYV PK) using simulations of the Three Interaction Site (TIS) model for RNA. The heat capacity from TIS model simulations, calculated using temperature replica exchange simulations, reproduces nearly quantitatively the available experimental data for the hTR HP. The corresponding results for BWYV PK serve as predictions.
We calculated the folding rates ($k_\mathrm{F}$s) from more than 100 folding trajectories for each value of the solvent viscosity ($\eta$) at a fixed salt concentration of 200 mM. Using the theoretical estimate ($\propto\sqrt{N}$, where $N$ is the number of nucleotides) for the folding free energy barrier, the $k_\mathrm{F}$ data for both RNAs are quantitatively fit using one-dimensional Kramers' theory with two parameters specifying the curvatures in the unfolded basin and at the barrier top. In the high-friction regime ($\eta\gtrsim10^{-5}\,\textrm{Pa s}$), for both HP and...

##### #5. Generalizable Protein Interface Prediction with End-to-End Learning

###### Raphael J. L. Townshend, Rishi Bedi, Ron O. Dror

Predicting how proteins interact with one another - that is, which surfaces of one protein bind to which surfaces of another protein - is a central problem in biology. Here we present Siamese Atomic Surfacelet Network (SASNet), the first end-to-end learning method for protein interface prediction. Despite using only spatial coordinates and identities of atoms as inputs, SASNet outperforms state-of-the-art methods that rely on complex, hand-selected features. These results are particularly striking because we train the method entirely on a significantly biased data set that does not account for the fact that proteins deform when binding to one another. Nonetheless, our network maintains high performance, without retraining, when tested on real cases in which proteins do deform. This suggests that it has learned fundamental properties of protein structure and dynamics, which has important implications for a variety of key problems related to biomolecular structure.
##### #6. Protein token: a dynamic unit in protein interactions

###### Si-Wei Luo, Yi-Hua Jiang, Zhi Liang, Jia-Rui Wu

In this study, we introduced a new unit, named "protein token", as a dynamic protein structural unit for protein-protein interactions. Unlike conventional structural units, the protein token is not based on the sequential or spatial arrangement of residues, but comprises remote residues involved in cooperative conformational changes during protein interactions. Application of protein tokens to Ras GTPases revealed various tokens present in the superfamily. Distinct token combinations were found in H-Ras interacting with its various regulators and effectors, pointing to a possible clue for the multiplexer property of the Ras superfamily. Thus, this protein token theory may provide a new approach to study protein-protein interactions in broad applications.

##### #7.
New indicators for assessing the quality of in silico produced biomolecules: the case study of the aptamer-Angiopoietin-2 complex

###### Rosella Cataldo, Livia Giotta, Maria Rachele Guascito, Eleonora Alfinito

Computational procedures to foresee the 3D structure of aptamers are in continuous progress. They constitute a crucial input to research, mainly when the crystallographic counterpart of the structures produced in silico is not available. At present, many codes are able to perform structure and binding prediction, although their ability to score the results remains rather weak. In this paper, we propose a novel procedure to complement the ranking outcomes of a free docking code, by applying it to a set of anti-angiopoietin aptamers whose performances are known. We rank the configurations produced in silico, adopting a maximum likelihood estimate based on their topological and electrical properties. From the analysis, two principal kinds of conformers are identified, whose ability to mimic the binding features of the natural receptor is discussed. The procedure is easily generalizable to many biomolecules, useful for increasing the chances of success in designing high-specificity biosensors (aptasensors).

##### #8. Guessing the upper bound free-energy difference between native-like structures

###### Jorge A. Vila

Use of a combination of statistical thermodynamics and the Gershgorin theorem enables us to guess, in the thermodynamic limit, a plausible value for the upper bound free-energy difference between native-like structures of monomeric globular proteins. Support for our result, in light of both the observed free-energy change between the native and denatured states and the microstability free-energy values obtained from the observed micro-unfolding tendency of nine globular proteins, will be discussed here.
##### #9. Disordered peptide chains in an α-C-based coarse-grained model

###### Łukasz Mioduszewski, Marek Cieplak

We construct a one-bead-per-residue coarse-grained dynamical model to describe intrinsically disordered proteins at significantly longer timescales than in the all-atom models. In this model, inter-residue contacts form and disappear during the course of the time evolution. The contacts may arise between the sidechains, the backbones or the sidechains and backbones of the interacting residues. The model yields results that are consistent with many all-atom and experimental data on these systems. We demonstrate that the geometrical properties of various homopeptides differ substantially in this model. In particular, the average radius of gyration scales with the sequence length in a residue-dependent manner.

##### #10. DeepAffinity: Interpretable Deep Learning of Compound-Protein Affinity through Unified Recurrent and Convolutional Neural Networks

###### Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen

Motivation: Drug discovery demands rapid quantification of compound-protein interaction (CPI). However, there is a lack of methods that can predict compound-protein affinity from sequences alone with high applicability, accuracy, and interpretability. Results: We present a seamless integration of domain knowledge and learning-based approaches.
Under novel representations of structurally-annotated protein sequences, a semi-supervised deep learning model that unifies recurrent and convolutional neural networks has been proposed to exploit both unlabeled and labeled data, for jointly encoding molecular representations and predicting affinities. Our representations and models outperform conventional options in achieving relative error in IC50 within 5-fold for test cases and 10-fold for protein classes not included for training. Performances for new protein classes with few labeled data are further improved by transfer learning. Furthermore, an attention mechanism is embedded in our model to add to its interpretability, as...

###### Github

Protein-compound affinity prediction through unified RNN-CNN. Repository: DeepAffinity (User: Shen-Lab, Language: Python)
http://rgrig.blogspot.com/2007/10/optimal-alphabetic-binary-trees.html
## 30 October 2007

### Optimal alphabetic binary trees

An astute reader pointed out that my "proof" below cannot be correct. See the comment.

I was thinking about how I would solve the two exercises that I proposed in the previous post. (I should stop this habit of proposing exercises for which I have only a hunch that they are doable.) Anyway, I bumped into this interesting problem related to Huffman coding.

Problem. We are given the weights w_1, …, w_n and we must construct a binary tree that has them as leaves and minimizes ∑_k w_k l_k, where l_k is the length of the path from the root to the leaf with weight w_k. The order of the leaves cannot be changed. Prove the correctness of the greedy algorithm that always glues the lightest leftmost pair of trees.

Comment. If the weights are 1,2,3,4,5,6 then the algorithm proceeds as follows: (1,2),3,4,5,6 ↝ ((1,2),3),4,5,6 ↝ ((1,2),3),(4,5),6 ↝ (((1,2),3),(4,5)),6 ↝ ((((1,2),3),(4,5)),6). At each step a pair of adjacent trees is glued. The cost of the result is 54. There are two differences between this and Huffman's algorithm. The important difference is that we are not allowed to shuffle the numbers. The trivial difference is that in Huffman's algorithm the numbers are usually probabilities that add up to 1. If you thought Huffman coding is about compressing data then you were right. But it's also about doing binary search well. Read on.

Proof. The cost ∑_k w_k l_k can be written as ∑_x W(x), where x ranges over all the internal nodes and W(x) is the weight of the tree rooted in x (which you get by adding the leaf weights). The children of an optimal binary tree must be optimal binary trees themselves. (Otherwise the overall cost could be improved by replacing them with subtrees that are optimal.) So far, we know that a dynamic programming solution would work. The tree ((x,y),z) has cost 2W(x) + 2W(y) + W(z) and the tree (x,(y,z)) has cost W(x) + 2W(y) + 2W(z).
(Here x, y, and z are roots of arbitrary trees.) This shows that if W(x)+W(y) > W(y)+W(z) then the second tree is better than the first. In other words, an optimal solution of the form ((x,y),z) always has W(x)+W(y) ≤ W(y)+W(z). At this point we know that if the lightest pair of adjacent trees is unique then that's what we should glue. What remains to be proved is that, in case of a tie, choosing the leftmost lightest pair is enough. If the lightest pairs are disjoint then they should all be glued and the order does not matter. Otherwise the weights of the trees must have one of the following configurations:

1. l a b a b ⋯ a b r, where l>b and r>a
2. l a b a b ⋯ a b a r, where l>b and r>b, possibly with a=b

(Also, l and r might be missing completely. That is, we may be at the edge of the forest.) The "leftmost" strategy leads to

1. l(a+b)(a+b)⋯(a+b)r
2. l(a+b)(a+b)⋯(a+b)ar

Now I have a good hunch that this is close to the end of the proof, using an argument of the type "for a given number of leaves the optimal cost is a monotonic function of the sum of their weights," but I'm a little tired. End of partial proof.

The algorithm described above can be implemented in O(n²) time, but I believe an implementation in O(n lg n) time is possible. In fact Wikipedia mentions that Hu-Tucker gave an algorithm with this upper bound, and that algorithm might very well be close to what I have in mind (you put pairs of adjacent trees in a linked list and in a priority queue). For now I'm resistant to looking up what others have done because I want to think about it myself a bit more.

How does this apply to solve the second exercise, you ask? It's pretty simple. This tree corresponds to the optimal binary search strategy, if you set the weights to be the probabilities that the answer takes various values.
So the next step is to analyze what happens when the weights are a geometric progression 1, q, q², … (For the curious, all this suggests that if n is not huge then the best strategy on average is to check ψ_n and then ψ_1, ψ_2, … in order. This seems to give an average case lg n times better than the code I gave, but at the cost of messing up the worst case. So it's not really a better solution. Note that I only expect people who tried to solve that exercise to understand this parenthetic paragraph.)

PS: This post is dedicated to my wife on her 28th birthday. Oh, and my grandmother had her 81st birthday four days ago. Happy birthday!

#### 1 comment:

rgrig said...

Guo Xu points out that the algorithm above fails on weights 5,3,4,5. The "proof" is wrong.
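Both the greedy procedure and the dynamic program mentioned in the proof are short enough to write down. The sketch below is ours, not from the post (function names included); the DP is the straightforward O(n³) interval recurrence. Running it confirms Guo Xu's counterexample, and also shows that the greedy's cost of 54 on the 1,…,6 example can in fact be beaten:

```python
# Our own sketch (not from the post): the "glue the leftmost lightest
# adjacent pair" greedy versus an exact O(n^3) interval DP for optimal
# alphabetic binary trees.
from itertools import accumulate

def greedy_cost(weights):
    """Repeatedly merge the leftmost adjacent pair of trees of minimal total weight."""
    forest = list(weights)
    cost = 0
    while len(forest) > 1:
        # min() breaks ties toward the smallest index, i.e. the leftmost pair
        i = min(range(len(forest) - 1), key=lambda k: forest[k] + forest[k + 1])
        merged = forest[i] + forest[i + 1]
        cost += merged  # each internal node contributes the weight of its subtree
        forest[i : i + 2] = [merged]
    return cost

def optimal_cost(weights):
    """best[i][j] = cheapest alphabetic tree over weights[i..j] (inclusive)."""
    n = len(weights)
    prefix = [0] + list(accumulate(weights))
    best = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best[i][j] = (prefix[j + 1] - prefix[i]
                          + min(best[i][k] + best[k + 1][j] for k in range(i, j)))
    return best[0][n - 1]

print(greedy_cost([1, 2, 3, 4, 5, 6]))   # 54, the tree built in the post
print(optimal_cost([1, 2, 3, 4, 5, 6]))  # 51, e.g. ((((1,2),3),4),(5,6))
print(greedy_cost([5, 3, 4, 5]))         # 36
print(optimal_cost([5, 3, 4, 5]))        # 34, e.g. ((5,3),(4,5)) -- Guo Xu's counterexample
```

The Hu-Tucker and Garsia-Wachs algorithms solve the same problem correctly in O(n lg n); the former is presumably the one the Wikipedia mention above refers to.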
https://www.usgs.gov/news/national-news-release/hot-water-climate-change-affecting-north-american-fish
In Hot Water: Climate Change is Affecting North American Fish

June 29, 2016

Climate change is already affecting inland fish across North America -- including some fish that are popular with anglers. Scientists are seeing a variety of changes in how inland fish reproduce, grow and where they can live, according to four new studies published today in a special issue of Fisheries magazine. Fish that have the most documented risk include those living in arid environments and coldwater species such as sockeye salmon, lake trout, walleye, and prey fish that larger species depend on for food. Climate change can cause suboptimal habitat for some fish; warmer water, for example, can stress coldwater fish. When stressed, fish tend to eat less and grow less. For other fish, climate change is creating more suitable habitat; smallmouth bass populations, for example, are expanding. These changes will have direct implications – some good, some bad – for recreational fishers, who, in the United States alone, contributed nearly $700 million in revenue to state agencies through license, tag, stamp, and permit purchases in 2015. Annually, anglers spend about $25 billion on trips, gear, and equipment related to recreational fishing in U.S. freshwaters. “The U.S. Geological Survey and partners are working to provide a fuller and more comprehensive picture of climate change impacts on North American fish for managers, scientists, and the public alike,” said Abigail Lynch, a lead author and fisheries biologist with the USGS National Climate Change and Wildlife Science Center. The authors reviewed 31 studies across North America and Canada (see map) that document fish responses to climate change.
The manuscripts describe the impacts of climate change on individual fish, populations, recreational fishers, and fisheries managers. One of the takeaway messages is that climate change effects on fish are rarely straightforward, and they affect warmwater and coldwater fish differently. “Thanks to this synthesis, we can see the effects of climate change on inland fish are no longer just future speculation, but today’s facts, with real economic, social, and ecological impacts,” said Doug Austen, Executive Director of the American Fisheries Society and publisher of Fisheries magazine. “Now that trends are being revealed, we can start to tease apart the various stressors on inland fish and invest in conservation and research where these programs will really make a difference in both the short and long term.” The authors emphasize that resource managers can take many actions to help sustain resilient fish communities and fisheries. “Even though climate change can seem overwhelming, fisheries managers have the tools to develop adaptation strategies to conserve and maintain fish populations,” said Craig Paukert, a lead author and fisheries scientist at the USGS Missouri Cooperative Fish and Wildlife Research Unit at the University of Missouri. Smallmouth bass (see photo) provide one example of how climate change presents fisheries managers with both challenges and opportunities. Smallmouth bass, a popular recreational species, are expanding their range northward with climate change. This expansion can disrupt existing food webs but it also creates new prospects for recreational fishing. Because many recreational fishers want to catch smallmouth bass, managers may need new management techniques to accommodate the increased fishing demand while still maintaining the native coldwater fish communities. Consequently, said Paukert, the management process will likely become an exercise in managing stakeholders’ expectations as fisheries change with climate change.
Other major findings: • Climate change may be altering abundance and growth of some North American inland fishes, particularly coldwater fish such as sockeye salmon, a species experiencing well-documented shifts in range, abundance, migration, growth, and reproduction. • Climate change may be causing earlier migration timing and allowing species that never occurred together previously to hybridize. For example, native westslope cutthroat trout in the Rocky Mountains are now hybridizing with rainbow trout, a non-native species. • Shifts in species’ ranges are already changing the kinds of fish in a specific water body, resulting in new species interactions and altered predator-prey dynamics. For example, in Canada, smallmouth bass have expanded their range, altering existing food chains because the species compete against other top predators for habitat and prey fish. • Droughts are forecasted to increase in frequency and severity in many parts of North America, especially in arid rivers. Such droughts exacerbate the impacts of water flow regulation in ways that affect people, fish, and aquatic systems. “The current state of the science shows that climate change is impacting fish in lakes, rivers, and streams, but knowing that is just the first step in effectively addressing the changes to these important natural resources and the communities which depend upon them,” Lynch said. The following papers are available in Fisheries magazine, published by the American Fisheries Society: • Physiological basis of climate change impacts on North American inland fishes, authored by James E. Whitney (Pittsburg State University), Robert Al-Chokhachy (USGS), Bo Bunnell (USGS) and others. • Climate change effects on North American inland fish populations and assemblages, authored by Abigail J. Lynch (USGS), Bonnie J. E. Myers (USGS), Cindy Chu (Ontario Ministry of Natural Resources and Forestry; OMNRF) and others. 
• Identifying alternate pathways for climate change to impact inland recreational fishers, authored by Len M. Hunt (OMNRF), Eli P. Fenichel (Yale University), David C. Fulton (USGS) and others. • Adapting inland fisheries management to a changing climate, authored by Craig P. Paukert (USGS), Bob A. Glazer (Florida Fish and Wildlife Conservation Commission), Gretchen J. A. Hansen (Minnesota Department of Natural Resources) and others. This research was supported by the USGS NCCWSC, which collaborates with universities, resource management organizations, tribes and other partners to provide unbiased scientific data and tools that contribute to an understanding of the widespread impacts of climate change on fish, wildlife, ecosystems, and people. Their science directly addresses on-the-ground, real-time needs of natural and cultural resource managers and human communities, enhancing their capacity for adaptive management in a changing climate. The eight Climate Science Centers, managed by NCCWSC, form a national network and are regionally distributed to best address the local needs of resource managers and decision makers. CSC research projects cover a wide array of climate change-related impacts, including sea-level rise, extreme storms, increased wildfire patterns, invasive species, glacier loss, and drought.
https://11011110.github.io/blog/2011/09/29/colored-squares.html
While working on a new Wikipedia entry for angular resolution, I ran across a curious connection between coloring, degeneracy, degree, and graph squaring. The square of a graph $$G$$ is the graph on the same vertex set, with two vertices connected by an edge in the square whenever they have distance at most two in $$G.$$ The original paper on angular resolution by Formann et al showed that the chromatic number of the square is closely related to the angular resolution: if the square can be colored with $$k$$ colors, then place the vertices of $$G$$ in a circular layout, with each vertex near the corner of a regular $$k$$-gon that corresponds to its color. The resulting layout will have angular resolution around $$\pi/k,$$ although it will likely be bad in other ways — for instance, pairs of edges that connect vertices of the same color will be drawn very close to each other. Square-coloring has also been used for frequency assignment in wireless networks: one wants nodes two hops from each other to have different frequencies so that their common neighbor can hear both of them without interference. In an unpublished paper from 1977, cited in a more recent survey by Kramer and Kramer, Wegner conjectured that the squares of planar graphs have chromatic number at most $$\max(\Delta+5, 3\Delta/2+1)$$ where $$\Delta$$ is the maximum degree of the graph. Formann et al rediscovered the problem independently and showed that the chromatic number of the square is at most $$13\Delta/7+O(\Delta^{2/3});$$ a better bound of $$5\Delta/3+O(1)$$ proven by Molloy and Salavatipour is mentioned by Kramer and Kramer. Because of these bounds, planar graphs may be drawn (nonplanarly) with angular resolution inversely proportional to $$\Delta,$$ whereas nonplanar graphs may need angular resolution closer to $$1/\Delta^2.$$ The argument of Formann et al uses careful analysis of the numbers of vertices of degrees near the maximum, based on the Euler characteristic of the graph.
But if you just want a bound that's linear in $$\Delta,$$ it can be done much more easily based on the degeneracy of the graph, a number $$d$$ such that every subgraph has a vertex of degree at most $$d.$$ Planar graphs have degeneracy at most five, and the chromatic number of any graph is at most one more than its degeneracy. It turns out that, for any graph $$G$$ with degeneracy $$d$$ and maximum degree $$\Delta,$$ the degeneracy of $$G^2$$ is at most $$2d\Delta.$$ For, if the vertices of $$G$$ are placed into a sequence so that each vertex has at most $$d$$ later neighbors in the sequence (an equivalent definition of degeneracy) then each path of length at most two from a vertex $$v$$ to a later vertex must have at least one forward-going edge. There are at most $$d\Delta$$ paths that start with a forward edge and at most $$\Delta d$$ paths that end with a forward edge, and adding them together gives the bound on the number of later neighbors in $$G^2.$$ This bound shows only that the chromatic number of the square of a planar graph is at most $$10\Delta,$$ very far from Wegner's conjecture, but it generalizes to any family of graphs in which all subgraphs are sparse and shows that they all have drawings with angular resolution linear in their degree. Unrelatedly, I was inspired by the visit of Mike Goodrich's academic sibling Marina Blanton (who is giving a colloquium here tomorrow on privacy-preserving computations for biometrics) to add another article on their common advisor, Mike Atallah. And while I'm at it, even more unrelatedly, an earlier algorithms researcher: Selmer M. Johnson. ext_812567: 2011-09-30T14:44:23Z Thanks for posting this proof... it is of a nice size to go with a morning coffee! PS: I couldn't post a reply with openid (my preference), even though LJ could see what mine was; did you disable openid comments perchance? 
- daveagp 11011110: 2011-09-30T14:55:16Z I didn't disable it intentionally, and I thought I'd received openid comments within the last few days, but when I checked I realized they were actually from LJ users. Sometimes on other sites I've also had difficulty with using openid to comment. I don't think it's entirely reliable. Or perhaps LJ stopped allowing openid to count as registered users? I'll try enabling anonymous comments again but the last time I did that I had to stop quickly because of the spam. ext_73227: 2011-09-30T19:31:42Z Who knows, perhaps I also should have tried a different browser and perhaps in French. Merci! 11011110: 2011-10-03T05:31:46Z I'm still getting too much spam (eight different posts were hit in the short time since I tried turning anonymous comments on) so I'm setting it back to registered users only. Sorry if that makes it difficult to comment.
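The degeneracy argument in the post translates directly into a greedy algorithm: compute a degeneracy ordering, then color the square along the reverse of that ordering, so each vertex sees at most $$2d\Delta$$ previously colored square-neighbors. A hypothetical Python sketch (graphs as adjacency dicts; this is an illustration, not code from the post):

```python
def degeneracy_order(graph):
    """Repeatedly remove a minimum-degree vertex; each vertex then has at
    most d neighbors later in the returned order (d = degeneracy)."""
    g = {v: set(nbrs) for v, nbrs in graph.items()}
    order = []
    while g:
        v = min(g, key=lambda u: len(g[u]))
        order.append(v)
        for u in g[v]:
            g[u].discard(v)
        del g[v]
    return order

def square_neighbors(graph, v):
    """All vertices within distance two of v (v itself excluded)."""
    near = set(graph[v])
    for u in graph[v]:
        near |= graph[u]
    near.discard(v)
    return near

def color_square(graph):
    """Greedy proper coloring of the square, later vertices first."""
    color = {}
    for v in reversed(degeneracy_order(graph)):
        used = {color[u] for u in square_neighbors(graph, v) if u in color}
        color[v] = next(c for c in range(len(graph)) if c not in used)
    return color

# A wheel on six vertices: hub 0 plus a 5-cycle.  Every pair of vertices
# is within distance two via the hub, so the square is complete and the
# coloring must use all six colors.
wheel = {0: {1, 2, 3, 4, 5}, 1: {0, 2, 5}, 2: {0, 1, 3},
         3: {0, 2, 4}, 4: {0, 3, 5}, 5: {0, 1, 4}}
coloring = color_square(wheel)
```

In general this uses at most $$2d\Delta+1$$ colors, since each vertex is colored with the smallest color unused among its at-most-$$2d\Delta$$ already-colored square-neighbors.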
http://myriverside.sd43.bc.ca/jessicap2015/2018/05/23/
# Pre-Calc 11: Week 14 Multiplying and Dividing Rational Expressions Last week in Pre-Calculus 11 we learned about multiplying and dividing rational expressions. A thing to remember about expressions is that they do not have an equal sign, so you do not need to solve; you only need to simplify. I will be teaching you what non-permissible values are and how to determine what they are. Non-permissible value: a value that causes the fraction to have a denominator with a value of zero. (In math, we cannot divide by zero). Here is an example of multiplying rational expressions Steps: Step 1:  Simplify $\frac{3x}{2(x-3)}\cdot\frac{8(x-3)}{9x^2}$ Non-permissible values: $x\neq 3$ and $x\neq 0$ (the values that make the denominators $2(x-3)$ and $9x^2$ zero) • $\frac{3x}{2(x-3)}\cdot\frac{8(x-3)}{9x^2}$ • remove (x-3) from top and bottom because they cancel each other out • then it becomes… $\frac {3x}{2}\cdot\frac{8}{9x^2}$ Step 2: Simplify by multiplying across (Just do it) • $\frac {3x}{2}\cdot\frac{8}{9x^2}$ • $\frac{24x}{18x^2}$ Step 3: Take the highest common factor from both the numerator and the denominator. In this case 6 is the highest number that goes into both. • $\frac{24x}{18x^2}$ • $\frac{4x}{3x^2}$ Step 4: Notice that x is on the bottom and the top; if it has a pair it can cancel out. There are two x’s on the bottom and one on the top. When they cancel each other out you will be left with only one on the bottom. • $\frac{4x}{3x^2}$ • $\frac{4}{3x}$ Final Answer: $\frac{4}{3x}$ Dividing Rational Expressions There are many steps when dividing rational expressions 1. Simplify the fraction: you can factor or take out the common denominator. 2. State the non-permissible values. 3. Reciprocate the second fraction and it will become a multiplication expression. 4. State the restrictions again because there are new values in the denominator and they could be non-permissible. 5. Simplify (cancel out like terms that have a pair on the numerator and denominator). 6. Multiply across (Just do it) 7. Simplify again if possible.
Step 1: Simplify $\frac{x+5}{x-4}\div\frac{x^2 - 25}{3x-12}$ FACTOR: • $\frac{x+5}{x-4}\div\frac{x^2 - 25}{3x-12}$ • $\frac{x+5}{x-4}\div\frac{(x+5)(x-5)}{3(x-4)}$ Step 2: Non-permissible values • $x\neq 4$ Step 3: Reciprocate • $\frac{x+5}{x-4}\div\frac{(x+5)(x-5)}{3(x-4)}$ • $\frac{x+5}{x-4}\cdot\frac{3(x-4)}{(x+5)(x-5)}$ Step 4: Non-permissible values • $x\neq 4$ • $x\neq 5$ • $x\neq -5$ Step 5: Cross out like terms • $\frac{x+5}{x-4}\cdot\frac{3(x-4)}{(x+5)(x-5)}$ • $\frac{3}{(x-5)}$ Step 6: Multiply across if possible • In this example it is not Step 7: Simplify further if possible • In this example it is not Final Answer:  $\frac{3}{(x-5)}$ This is how you Multiply and Divide Rational Expressions
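Both answers can be spot-checked numerically. This is a hypothetical check (not part of the original lesson) using Python's exact `Fraction` arithmetic, evaluating each side at a few permissible values of x:

```python
from fractions import Fraction

def multiply_example(x):
    """Left side of the multiplication example, evaluated exactly."""
    x = Fraction(x)
    return (3*x / (2*(x - 3))) * (8*(x - 3) / (9*x**2))

def divide_example(x):
    """Left side of the division example, evaluated exactly."""
    x = Fraction(x)
    return ((x + 5) / (x - 4)) / ((x**2 - 25) / (3*x - 12))
```

Evaluating at values that avoid every non-permissible x (0, 3, 4, 5, and -5) confirms that the first expression equals $\frac{4}{3x}$ and the second equals $\frac{3}{x-5}$.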
https://ajayshahblog.blogspot.com/2008/09/critical-appointments-watch.html
## Monday, September 01, 2008

### Critical appointments watch
https://exploringnumbertheory.wordpress.com/2013/07/12/fermats-little-theorem-and-mental-poker/
# Fermat’s Little Theorem and Mental Poker In this post we demonstrate another use of Fermat’s Little Theorem. How can two people play poker when they are not sitting face to face with each other? If the game of poker is played over long distance (e.g. via a telephone line or some electronic communication channel), there will be a need to ensure a fair game. For example, the two players must use the same deck of cards (ensuring that there will be no duplicates). The deck of cards will need to be well shuffled. Each player cannot see the cards of the other player. One solution is to use a trusted third party to do the shuffling and selecting of cards. If a third party cannot be found or it is felt that the third party cannot be trusted to be fair, then one should consider the cryptographic solution described in this post. This solution was proposed by Rivest, Shamir and Adleman in 1982 (the creators of the RSA algorithm). The term mental poker refers to the game of poker played over long distance that has a mechanism for ensuring a fair game without the need for a trusted third party. Mental poker can also refer to other cryptographic games played over long distance without the need for a trusted third party (e.g. tossing a coin over long distance). ___________________________________________________________________________________________________________________ Setting Up the Deck of Cards Let’s say that the players are Andy and Becky. Since they are not using a physical deck of cards, they need to represent the cards by numbers.
Let’s say that they agree to number the cards as follows: $\displaystyle \heartsuit 2=1020 \ \ \ \ \ \ \ \ \ \ \ \diamondsuit 2=2020 \ \ \ \ \ \ \ \ \ \ \ \spadesuit 2=3020 \ \ \ \ \ \ \ \ \ \ \ \clubsuit 2=4020$ $\displaystyle \heartsuit 3=1030 \ \ \ \ \ \ \ \ \ \ \ \diamondsuit 3=2030 \ \ \ \ \ \ \ \ \ \ \ \spadesuit 3=3030 \ \ \ \ \ \ \ \ \ \ \ \clubsuit 3=4030$ $\displaystyle \heartsuit 4=1040 \ \ \ \ \ \ \ \ \ \ \ \diamondsuit 4=2040 \ \ \ \ \ \ \ \ \ \ \ \spadesuit 4=3040 \ \ \ \ \ \ \ \ \ \ \ \clubsuit 4=4040$ $\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots$ $\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots$ $\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots$ $\displaystyle \heartsuit Q=1120 \ \ \ \ \ \ \ \ \ \ \ \diamondsuit Q=2120 \ \ \ \ \ \ \ \ \ \ \spadesuit Q=3120 \ \ \ \ \ \ \ \ \clubsuit Q=4120$ $\displaystyle \heartsuit K=1130 \ \ \ \ \ \ \ \ \ \ \ \diamondsuit K=2130 \ \ \ \ \ \ \ \ \ \spadesuit K=3130 \ \ \ \ \ \ \ \ \clubsuit K=4130$ $\displaystyle \heartsuit A=1140 \ \ \ \ \ \ \ \ \ \ \ \diamondsuit A=2140 \ \ \ \ \ \ \ \ \ \ \spadesuit A=3140 \ \ \ \ \ \ \ \ \ \clubsuit A=4140$ ___________________________________________________________________________________________________________________ The card numbers need to be encrypted before they can be passed between the two players. Here’s how it works. Both players agree to choose a large prime number $p$. This number $p$ needs to be larger than all the card numbers and the encrypted card numbers. The larger $p$ is, the harder it will be for any one of the players to cheat. Now each of the players needs to choose an encryption-decryption key (a padlock) that the player keeps secret. Let’s start with Andy. 
He chooses a pair of positive numbers $a_0$ and $a_1$ such that the following holds: $a_0 \cdot a_1 \equiv 1 \ (\text{mod} \ p-1)$ Equivalently the pair $a_0$ and $a_1$ satisfies the equation $a_0 \cdot a_1=1+(p-1) \cdot k$ for some integer $k$. The number $a_0$ will be used for locking (encryption) and the number $a_1$ will be used for unlocking (decryption). Andy will also keep this pair of numbers away from Becky. How will Andy use this padlock? Suppose that $m$ is a number to be encrypted. To encrypt the number, Andy raises $m$ to the power of $a_0$ and then finds the remainder upon division by $p$. He will call the remainder $f_a(m)$. Using congruence notation, the following is the encryption function: $f_a(m) \equiv m^{a_0} \ (\text{mod} \ p)$ If Andy needs to recover $m$ from the encrypted card number $c=f_a(m)$, all he has to do is to raise $c$ to the power of $a_1$ and then find the remainder upon division by $p$. Call the remainder $g_a(c)$, which will be the original card number $m$. Using congruence notation, the following is the decryption function: $g_a(c) \equiv c^{a_1} \ (\text{mod} \ p)$ The decrypted number is the original number. Thus we have $g_a(c)=m$. A proof of this relies on the Fermat’s Little Theorem (see proof). Because the numbers involved are usually large, no one will try to raise $m$ to the power of $a_0$ and then divides by $p$ to find the remainder. Instead, Andy should use special software. If software is not available, Andy can rely on congruence modulo arithmetic, which should also be done by a computer. See below for a demonstration of the congruence modulo arithmetic. The other player Becky also needs a padlock. Specifically, she chooses a pair of numbers $b_0$ and $b_1$ that satisfy the following: $b_0 \cdot b_1 \equiv 1 \ (\text{mod} \ p-1)$ This pair of number serves the same purpose as the pair that belongs to Andy. Of course, $b_0$ and $b_1$ need to be kept secret from Andy. 
The following shows the encryption and decryption functions for Becky’s padlock. $f_b(m) \equiv m^{b_0} \ (\text{mod} \ p)$ $g_b(c) \equiv c^{b_1} \ (\text{mod} \ p)$ ___________________________________________________________________________________________________________________ How to Play the Game Suppose the card numbers are $m_1, m_2, m_3, \cdots, m_{52}$ (the above is one example of card number assignment). Andy then encrypts the card numbers using his encryption function $f_a(m)$. The following lists the encrypted card numbers. $\displaystyle f_a(m_1),\ f_a(m_2), \ f_a(m_3),\cdots,f_a(m_{52})$ Andy then passes these encrypted card numbers to Becky. She shuffles the encrypted deck thoroughly. She then chooses a 5-card hand for Andy. Becky then chooses another 5-card hand for herself. Becky uses her key to encrypt her 5-card hand. Becky passes both 5-card hands to Andy. The following shows what Becky passes to Andy. Andy’s 5-card hand: $f_a(m_i) \equiv m_i^{a_0} \ (\text{mod} \ p)$ for 5 distinct values of $i$. Becky’s 5-card hand: $f_b(f_a(m_j)) \equiv f_a(m_j)^{b_0} \ (\text{mod} \ p)$ for 5 distinct values of $j$. Once Andy gets the two 5-card hands, he decrypts his own 5-card hand and gets back the original card numbers. He also decrypts Becky’s 5-card hand and passes that back to Becky. Andy’s 5-card hand: $g_a(f_a(m_i)) \equiv (m_i^{a_0})^{a_1} \equiv m_i \ (\text{mod} \ p)$ Becky’s 5-card hand: $g_a(f_b(f_a(m_j))) \equiv (f_a(m_j)^{b_0})^{a_1}=(f_a(m_j)^{a_1})^{b_0} \equiv m_j^{b_0} \ (\text{mod} \ p)$, which Andy passes back to Becky. Once Becky receives her 5-card hand back from Andy, she decrypts the cards immediately and gets back the original card numbers. Becky’s 5-card hand: $g_b(m_j^{b_0}) \equiv (m_j^{b_0})^{b_1} \equiv m_j \ (\text{mod} \ p)$ Now each of the players has a 5-card hand that is only known to himself or herself.
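The whole locking-and-unlocking exchange can be simulated with modular exponentiation. Here is a minimal Python sketch, using the prime and Andy's keys from this post's numerical example; Becky's locking key $b_0=5$ is a hypothetical choice (any value coprime to $p-1$ works):

```python
p = 55049                 # prime agreed on by both players
a0, a1 = 2657, 79081      # Andy's padlock: a0 * a1 = 1 (mod p-1)
b0 = 5                    # Becky's locking key (hypothetical choice)
b1 = pow(b0, -1, p - 1)   # Becky's unlocking key: b0 * b1 = 1 (mod p-1)

m = 1020                  # card number for the two of hearts
locked_by_andy = pow(m, a0, p)                 # f_a(m)
locked_by_both = pow(locked_by_andy, b0, p)    # f_b(f_a(m))
beckys_lock_only = pow(locked_by_both, a1, p)  # Andy removes his lock first
card = pow(beckys_lock_only, b1, p)            # Becky recovers the card
```

The key point the simulation exercises is that the two locks commute: even though Andy locked first, he can remove his lock first, leaving exactly Becky's lock on the card.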
If they need to select new cards from the deck, they can follow the same back-and-forth procedures of encrypting and decrypting. How fair is the poker game played in this manner? How secure is the game? It is very fair and secure if the players follow the rules and do not cheat. It is obviously possible to cheat. When Andy passes the 52 encrypted card numbers to Becky, Becky certainly can try to break Andy’s lock by figuring out Andy’s $a_0$. When Becky passes her encrypted cards to Andy, he can try to figure out Becky’s $b_0$. For that to happen, the player who wants to cheat will need to have an enormous amount of computational resources at the ready. Thus the prime number $p$ should be large enough to make cheating an intractable problem. On the other hand, even when the prime number is of a moderate size, there has to be a fair amount of computational resources in order to play the game efficiently, with all the encrypting and decrypting that have to be done. ___________________________________________________________________________________________________________________ Fermat’s Little Theorem We now use Fermat’s Little Theorem to show that the encryption-decryption key works correctly and accurately. We show the following: $(m^{a_0})^{a_1} \equiv m \ (\text{mod} \ p) \ \ \ \ \ \ \ \ \ \ (1)$ For the descriptions of the numbers $m$, $p$, $a_0$ and $a_1$, see the above section Setting Up the Padlocks. First we state Fermat’s Little Theorem. Fermat’s Little Theorem Let $q$ be a prime number. Then for any integer $a$, $a^q-a$ is an integer multiple of $q$ (or $q$ divides $a^q-a$).
Using congruence notation, the theorem can be expressed as: $a^q \equiv a \ (\text{mod} \ q)$ If the integer $a$ is not divisible by $q$, then we can divide out $a$ and the theorem can be expressed as: $a^{q-1} \equiv 1 \ (\text{mod} \ q)$ For a proof and a fuller discussion of Fermat’s little theorem, see this post. We now prove the property $(1)$. Recall that the pair of positive integers $a_0$ and $a_1$ are keys to lock and unlock a number $m$. They are chosen such that $a_0 \cdot a_1 \equiv 1 \ (\text{mod} \ p-1)$, or equivalently $a_0 \cdot a_1=1+(p-1) \cdot k$ for some integer $k$. This integer $k$ must be positive since $a_0$ and $a_1$ are both positive. In the derivation below, we repeatedly use the fact that $m^p \equiv m \ (\text{mod} \ p)$ (applying Fermat’s Little Theorem). \displaystyle \begin{aligned} m&\equiv m^p \ (\text{mod} \ p)=m \cdot m^{p-1} \\&\equiv m^p \cdot m^{p-1} \ (\text{mod} \ p)=m \cdot m^{2(p-1)} \\&\equiv m^p \cdot m^{2(p-1)} \ (\text{mod} \ p)=m \cdot m^{3(p-1)} \\&\cdots \\&\cdots \\&\cdots \\&\equiv m^p \cdot m^{(k-1)(p-1)} \ (\text{mod} \ p)=m \cdot m^{k(p-1)} \\&=m \cdot m^{k(p-1)}=m^{1+(p-1) \cdot k}=m^{a_0 \cdot a_1} \end{aligned} ___________________________________________________________________________________________________________________ Example of Congruence Calculation For a numerical example, we use a small prime number $p=55,049$. Though a small prime number, it is large enough to make the illustration meaningful. Andy chooses $a_0 \cdot a_1$ such that $a_0 \cdot a_1=1+(p-1) \cdot k$ for some integer $k$. Andy decides to use $k=3817$, leading to $a_0=2,657$ and $a_1=79,081$. As an illustration of how the calculation is done, let $m=1020$ (the number for $\heartsuit 2$ as indicated above). To encrypt this card, Andy needs to raise $1020$ to the 2657th power and then find the remainder upon division by $p=55,049$. This is the definition for $1020^{2657}$ modulo $p$.
But the calculation is not easy to do directly without special software. We present here a “divide and conquer” approach that use the division algorithm in each step to reduce the exponent by half. To start, note that $1020^2 \equiv 49518 \ (\text{mod} \ 55049)$, meaning that the remainder is $49518$ when $1020^2$ is divided by $55049$. In the following series of steps, a congruence calculation is performed in each step (using the division algorithm) to reduce the exponent by half. \displaystyle \begin{aligned} 1020^{2657}&\equiv (1020^2)^{1328} \cdot 1020 \ (\text{mod} \ 55049) \\&\text{ } \\&\equiv 49518^{1328} \cdot 1020 \equiv (49518^2)^{664} \cdot 1020 \\&\text{ } \\& \equiv 39766^{664} \cdot 1020 \equiv (39766^2)^{332} \cdot 1020 \\&\text{ } \\& \equiv 52231^{332} \cdot 1020 \equiv (52231^2)^{166} \cdot 1020 \\&\text{ } \\&\equiv 14068^{166} \cdot 1020 \equiv (14068^2)^{83} \cdot 1020 \\&\text{ } \\& \equiv 7469^{83} \cdot 1020 \equiv (7469^2)^{41} \cdot 7469 \cdot 1020 \\&\text{ } \\& \equiv 21324^{41} \cdot 7469 \cdot 1020 \\&\text{ } \\&\equiv 21324^{41} \cdot 21618 \equiv (21324^2)^{20} \cdot 21324 \cdot 21618 \\&\text{ } \\& \equiv 8236^{20} \cdot 21324 \cdot 21618 \\&\text{ } \\& \equiv 8236^{20} \cdot 1906 \equiv (8236^2)^{10} \cdot 1906 \\&\text{ } \\& \equiv 11328^{10} \cdot 1906 \equiv (11328^2)^{5} \cdot 1906 \\&\text{ } \\& \equiv 4365^5 \cdot 1906 \equiv (4365^2)^2 \cdot 4365 \cdot 1906 \\&\text{ } \\& \equiv 6271^2 \cdot 4365 \cdot 1906 \\&\text{ } \\&\equiv 6271^2 \cdot 7291 \\&\text{ } \\&\equiv 20455 \cdot 7291 \\&\text{ } \\&\equiv 9664 \ (\text{mod} \ 55049) \end{aligned} Thus the card number $1020$ is encrypted as $9664$. To recover the original card number from this encrypted number, Andy needs to raise $9664$ to the power of $a_1=79081$. Here, we get an assist from Fermat’s Little Theorem in addition to the ‘divide and conquer” congruence arithmetic that is used above. 
According to Fermat’s Little Theorem, $9664^{55048} \equiv 1 \ (\text{mod} \ 55049)$. Thus we have $9664^{79081} \equiv 9664^{55048} \cdot 9664^{24033} \equiv 9664^{24033} \ (\text{mod} \ 55049)$ With the help of Fermat’s Little Theorem, the exponent $79081$ has come down to $24033$. In the rest of the way, the “divide and conquer” approach is used. \displaystyle \begin{aligned} 9664^{24033}&\equiv (9664^2)^{12016} \cdot 9664 \ (\text{mod} \ 55049) \\&\text{ } \\&\equiv 29782^{12016} \cdot 9664 \equiv (29782^2)^{6008} \cdot 9664 \\&\text{ } \\&\equiv 8237^{6008} \cdot 9664 \equiv (8237^2)^{3004} \cdot 9664 \\&\text{ } \\&\equiv 27801^{3004} \cdot 9664 \equiv (27801^2)^{1502} \cdot 9664 \\&\text{ } \\&\equiv 7641^{1502} \cdot 9664 \equiv (7641^2)^{751} \cdot 9664 \\&\text{ } \\&\equiv 32941^{751} \cdot 9664 \equiv (32941^2)^{375} \cdot 32941 \cdot 9664 \\&\text{ } \\&\equiv 38642^{375} \cdot 32941 \cdot 9664 \\&\text{ } \\&\equiv 38642^{375} \cdot 48506 \equiv (38642^2)^{187} \cdot 38642 \cdot 48506 \\&\text{ } \\&\equiv 39^{187} \cdot 38642 \cdot 48506 \\&\text{ } \\&\equiv 39^{187} \cdot 5451 \equiv (39^2)^{93} \cdot 39 \cdot 5451 \\&\text{ } \\&\equiv 1521^{93} \cdot 39 \cdot 5451 \\&\text{ } \\&\equiv 1521^{93} \cdot 47442 \equiv (1521^2)^{46} \cdot 1521 \cdot 47442 \\&\text{ } \\&\equiv 1383^{46} \cdot 1521 \cdot 47442 \\&\text{ } \\&\equiv 1383^{46} \cdot 45092 \equiv (1383^2)^{23} \cdot 45092 \\&\text{ } \\&\equiv 41023^{23} \cdot 45092 \equiv (41023^2)^{11} \cdot 41023 \cdot 45092 \\&\text{ } \\&\equiv 38599^{11} \cdot 41023 \cdot 45092 \\&\text{ } \\&\equiv 38599^{11} \cdot 52618 \equiv (38599^2)^{5} \cdot 38599 \cdot 52618 \\&\text{ } \\&\equiv 36665^5 \cdot 38599 \cdot 52618 \\&\text{ } \\&\equiv 36665^5 \cdot 24376 \equiv (36665^2)^{2} \cdot 36665 \cdot 24376 \\&\text{ } \\&\equiv 25645^2 \cdot 36665 \cdot 24376 \\&\text{ } \\&\equiv 25645^2 \cdot 25525 \\&\text{ } \\&\equiv 50671 \cdot 25525 \\&\text{ } \\&\equiv 1020 \ (\text{mod} \ 55049) 
\end{aligned} In each step of the above calculation, the division algorithm is applied to reduce the exponent by half. For example, to go from the first line to the second line, $9664^2$ is divided by $55049$ to obtain the remainder $29782$, i.e. $9664^2 \equiv 29782 \ (\text{mod} \ 55049)$. The number $1020$ in the last line is the remainder when $50671 \cdot 25525$ is divided by $55049$. ___________________________________________________________________________________________________________________ $\copyright \ 2013 \text{ by Dan Ma}$
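The repeated halving used in these calculations is binary ("square-and-multiply") exponentiation, which Python exposes directly as the three-argument pow. A minimal version of the same idea:

```python
def power_mod(base, exp, mod):
    """Square-and-multiply: the exponent is halved each round, and every
    intermediate product is reduced mod `mod` so the numbers stay small."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                  # odd exponent: peel off one factor
            result = result * base % mod
        base = base * base % mod     # square the base, halve the exponent
        exp >>= 1
    return result

# Round trip with the example keys: encrypting and then decrypting the
# card number 1020 must return 1020, since 2657 * 79081 = 1 (mod 55048).
encrypted = power_mod(1020, 2657, 55049)
decrypted = power_mod(encrypted, 79081, 55049)
```

Replacing the hand calculation with this routine (or with Python's built-in `pow(base, exp, mod)`) makes the encrypt/decrypt round trip a one-line check.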
http://www.casite-721864.cloudaccess.net/library/operator-theory-on-noncommutative-domains
Theory By Gelu Popescu ISBN-10: 0821847104 ISBN-13: 9780821847107 In this volume the author studies noncommutative domains $\mathcal{D}_f\subset B(\mathcal{H})^n$ generated by positive regular free holomorphic functions $f$ on $B(\mathcal{H})^n$, where $B(\mathcal{H})$ is the algebra of all bounded linear operators on a Hilbert space $\mathcal{H}$. Table of Contents: Introduction; Operator algebras associated with noncommutative domains; Free holomorphic functions on noncommutative domains; Model theory and unitary invariants on noncommutative domains; Commutant lifting and applications; Bibliography. (MEMO/205/964) Similar theory books Download PDF by Prof. Dr. Bernhard Hofmann-Wellenhof, Dr. Herbert: Global Positioning System: Theory and Practice This book shows in a comprehensive manner how the Global Positioning System (GPS) works. The use of GPS for precise measurements (i.e., surveying) is treated, as well as navigation and attitude determination. The basic mathematical models for various modes of GPS operation, and a detailed explanation of the practical use of GPS, are developed in this book. Get SOFSEM 2010: Theory and Practice of Computer Science: 36th PDF This book constitutes the refereed proceedings of the 36th Conference on Current Trends in Theory and Practice of Computer Science, SOFSEM 2010, held in Špindlerův Mlýn, Czech Republic. The 53 revised full papers, presented together with 11 invited contributions, were carefully reviewed and selected from 134 submissions. Download e-book for kindle: Linguistic Theory and Complex Words: Nuuchahnulth Word by J. Stonham Nuuchahnulth is known for its remarkable use of word-formation and complex inflection.
This is the first book to provide a detailed description of the complex morphology of the language, based on material gathered when the language was more viable than it is now. The description is embedded within a broad-ranging theoretical discussion of interest to all morphologists. Extra info for Operator theory on noncommutative domains Example text Now, we can prove the following. Let $\varphi(W_1, \ldots, W_n) = \sum_{\beta\in\mathbb{F}_n^+} c_\beta W_\beta$ be in the Hardy algebra $F_n^\infty(\mathcal{D}_f)$ and such that $\|\varphi(W_1, \ldots, W_n)\| \leq 1$. If $(T_1, \ldots, T_n) \in \mathcal{D}_{f,r}(\mathcal{H})$, where $0 \leq r < 1$, then $\sum_{\beta\in\mathbb{F}_n^+} |c_\beta| \|T_\beta\| \leq |c_0| + (1 - |c_0|^2)\frac{r}{1-r}$. Proof. Since $\varphi(W_1, \ldots, W_n) \in F_n^\infty(\mathcal{D}_f)$, the series $\sum_{\beta\in\mathbb{F}_n^+} |c_\beta| r^{|\beta|} W_\beta$ converges in the operator norm for $0 \leq r < 1$ and it is in the algebra $A_n(\mathcal{D}_f)$. The operators $\{W_\beta\}_{|\beta|=k}$ have orthogonal ranges, and consequently $\sum_{|\beta|=k} b_\beta W_\beta W_\beta^* \leq 1$ for any $k = 0, 1, \ldots$ (ii) The only normal elements in $F_n^\infty(\mathcal{D}_f)$ are the scalars. (iii) Every element $A \in F_n^\infty(\mathcal{D}_f)$ has its spectrum $\sigma(A) = \{0\}$ and it is injective. (iv) The algebra $F_n^\infty(\mathcal{D}_f)$ contains no non-trivial idempotents and no non-zero quasinilpotent elements. (v) The algebra $F_n^\infty(\mathcal{D}_f)$ is semisimple. (vi) If $A \in F_n^\infty(\mathcal{D}_f)$, $n \geq 2$, then $\sigma(A) = \sigma_e(A)$.
2018-11-16 23:17:46
https://blog.gnagy.info/2016/01/12/quadratic-confusion/
I have a horrible memory. I don't mean just that I misplace things or forget names; it takes a lot of effort to commit arbitrary facts, figures, dates, etc., to my long-term memory. So throughout my school years, most of my studying was for things like History, trying hard to remember dates and statistics that I would quickly eject from my mind after my next exam. I seldom had to study for Math or Science though, because I figured out something that worked for me there: learning how and why things work rather than just memorizing formulas.

This worked well for those subjects, but I do remember stumbling in algebra when I was not able to factor quadratic functions. There was a handy Swiss army knife of sorts for this, of course, in the Quadratic Formula. I avoided this formula as much as possible, usually by spending way too much time trying to guess the factors myself, by converting from "standard form" to "vertex form", or by skipping that question. This was almost entirely because I could not bring myself to memorize the formula. Call it laziness, or foolishness, or whatever you'd like.

Well, recently I decided to brush up on my math skills. After yet again encountering the need for this formula, I decided enough is enough. Since I can't memorize the formula, I will instead learn where it comes from by deriving it from the standard form of a quadratic function. This is my attempt to do so.

To start, I'll actually show both the beginning (the standard form):

$ax^2 + bx + c = 0$

and the end (the Quadratic Formula):

$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

The goal is to isolate $x$ from everything else, though doing so will get a bit tricky. First, I can subtract $c$ from both sides:

$ax^2 + bx + c - c = 0 - c$

$ax^2 + bx = -c$

Now I'll divide both sides by $a$:

$\frac{ax^2 + bx}{a} = -\frac{c}{a}$

Cleaning this up a bit, I end up with:

$x^2 + \frac{bx}{a} = -\frac{c}{a}$

Now it looks like I'm getting somewhere, but where do I go from here?
It sure would be nice if I could just convert the whole left side of the equation to a squared form… which I can, with some simple (though not intuitive) addition on both sides of the equation. The key lies in understanding what to add to the left side of the equation to reach something in the form of $(x + n)^2$. Here is a generic way of describing how this works: to go from $x^2 + nx$ to a squared quantity, I can take the linear coefficient (the $n$ in this example), divide it by $2$, then square the result:

$x^2 + nx = 0$

$x^2 + nx + (\frac{n}{2})^2 = 0 + (\frac{n}{2})^2$

$x^2 + nx + \frac{n^2}{4} = \frac{n^2}{4}$

$(x + \frac{n}{2})^2 = \frac{n^2}{4}$

As a better example, I'll use an equation with real numbers:

$x^2 + 8x = -16$

$x^2 + 8x + (\frac{8}{2})^2 = -16 + (\frac{8}{2})^2$

$x^2 + 8x + (4)^2 = -16 + (4)^2$

$x^2 + 8x + 16 = 0$

$(x + \frac{8}{2})^2 = 0$

$(x + 4)^2 = 0$

So… bringing it back to the original goal, I can apply this same trick to my original equation to help make my square quantity:

$x^2 + \frac{bx}{a} = -\frac{c}{a}$

My $n$ here is $\frac{b}{a}$, so I can first divide it by $2$:

$\frac{b}{a} * \frac{1}{2} = \frac{b}{2a}$

Now I can square that:

$(\frac{b}{2a})^2 = \frac{b^2}{4a^2}$

and I have my value to add to both sides of my equation:

$x^2 + \frac{bx}{a} + \frac{b^2}{4a^2} = -\frac{c}{a} + \frac{b^2}{4a^2}$

Now I can build a square quantity on the left:

$(x + \frac{b}{2a})^2 = -\frac{c}{a} + \frac{b^2}{4a^2}$

Okay, I'm getting closer, but I don't really like the right side. Let me combine the right side by reaching a common denominator.
To do that, I'll multiply by $\frac{4a}{4a}$, which is really just $1$:

$(x + \frac{b}{2a})^2 = (-\frac{c}{a} * \frac{4a}{4a}) + \frac{b^2}{4a^2}$

$(x + \frac{b}{2a})^2 = -\frac{4ac}{4a^2} + \frac{b^2}{4a^2}$

$(x + \frac{b}{2a})^2 = \frac{b^2 - 4ac}{4a^2}$

Now, I can take the square root of both sides:

$\sqrt{(x + \frac{b}{2a})^2} = \pm \sqrt{\frac{b^2 - 4ac}{4a^2}}$

$x + \frac{b}{2a} = \pm \frac{\sqrt{b^2 - 4ac}}{\sqrt{4a^2}}$

$x + \frac{b}{2a} = \pm \frac{\sqrt{b^2 - 4ac}}{2a}$

And the final step of isolating $x$:

$x + \frac{b}{2a} - \frac{b}{2a} = \pm \frac{\sqrt{b^2 - 4ac}}{2a} - \frac{b}{2a}$

$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

And there you have it. The real question is, why is this useful? Well, for one, it is a good exercise to revive some old and atrophying math skills. It also helps me understand where the Quadratic Formula came from, and therefore helps me remember it. I hope this walk-through stimulates others out there as well.
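To double-check the derived formula, here is a quick Python sketch (my own illustration, not part of the original derivation) that applies it to a factorable example:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of ax^2 + bx + c = 0, using the Quadratic Formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 5x + 6 = (x - 2)(x - 3), so the roots should be 3 and 2
print(quadratic_roots(1, -5, 6))  # (3.0, 2.0)
```

The same function also confirms the completing-the-square example above: $x^2 + 8x + 16 = 0$ has the single (repeated) root $-4$.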
2019-01-20 22:11:56
https://hpi.de/research-schools/hpi-dse/mitglieder/research-pages/stefan-neubert/shortest-distance-enumeration.html
# Shortest Distance Enumeration

## Introduction

We investigate the single source shortest distance (SSSD) and all pairs shortest distance (APSD) problems as enumeration problems (on unweighted and integer-weighted graphs), meaning that the elements $$(u, v, d(u, v))$$ – where $$u$$ and $$v$$ are vertices with shortest distance $$d(u, v)$$ – are produced and listed one by one without repetition. Computing distances and shortest paths in (weighted or unweighted) graphs is one of the most fundamental problems in both theoretical and applied computer science, and thus many algorithms are known for different variants of the problems. On a graph with $$n$$ vertices and $$m$$ edges, the most famous algorithms have the following (total time) runtimes:

• SSSD in unweighted graphs can be computed with a breadth first search (BFS) in $$O(n + m)$$ steps, which is optimal.
• With an efficient priority queue, Dijkstra’s algorithm [1] solves SSSD in graphs with non-negative edge weights with a runtime in $$O(m + n \log (n))$$.
• APSD in unweighted graphs can be computed with $$n$$ runs of BFS in $$O(n\cdot(n+m))$$.
• The algorithm of Floyd-Warshall [2,3] solves APSD in weighted graphs in $$O(n^3)$$.

## Single Source Shortest Distances

In a way, both BFS and Dijkstra’s algorithm already fix shortest distances one after the other and, in effect, enumerate partial solutions. With small changes, we get algorithms that enumerate shortest distances with a delay in the order of the maximum degree $$\Delta$$ of the graph on unweighted graphs and $$O(\Delta + \log(n))$$ on graphs with non-negative edge weights: roughly speaking, whenever BFS (or Dijkstra) has fully processed a vertex and iterated over all its neighbors, it can emit the shortest distance to this vertex.

(Figure legend: active vertex, fully processed vertex, discovered vertex.)

Shortest Distances: $$d(3,3) = 0$$, $$d(3,2) = 1$$, $$d(3,7) = 1$$, $$d(3,8) = 1$$, $$d(3,1) = 2$$, $$d(3,4) = 2$$, $$d(3,5) = 3$$, $$d(3,6) = 3$$.
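The modified BFS can be sketched as a Python generator (over a small hypothetical graph of my own, not the one from the figure); each distance is emitted right after the vertex's adjacency list has been scanned, so the delay between two consecutive outputs is bounded by the maximum degree:

```python
from collections import deque

def enumerate_sssd(adj, source):
    """Yield (source, v, d(source, v)) one by one, in BFS order.

    Each distance is emitted right after all neighbors of the vertex
    have been scanned, so the delay between outputs is O(max degree).
    """
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:  # O(deg(u)) work before emitting d(source, u)
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
        yield (source, u, dist[u])

# Hypothetical example graph as adjacency lists: a path 1 - 2 - 3 - 4
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(list(enumerate_sssd(adj, 2)))  # [(2, 2, 0), (2, 1, 1), (2, 3, 1), (2, 4, 2)]
```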
Note that a homogeneous enumeration of the partial solutions would require a delay in the order of the average degree of the graph (not the maximum degree). However, this is not possible, even if the average degree is constant: consider a graph that roughly consists of a clique of $$k$$ vertices and a path of $$k^2$$ vertices, as shown in the following picture:

No algorithm can reliably distinguish the three different ways the clique and the path could be connected without inspecting at least $$(k-1)^2$$ vertices and/or edges first. Thus, if the source vertex of the SSSD instance lies in the clique, any algorithm needs $$\Omega(k^2)$$ steps before being able to correctly determine a distance to any vertex in the path. As there are only $$k$$ distances the algorithm can emit before that, this results in an enumeration delay of at least $$\Omega(k)$$. This graph has maximum degree $$\Delta = k-1$$, so the delay is in $$\Omega(\Delta)$$.

## Overview and Open Problems

For APSD problems, we can achieve a better delay than the maximum degree: as every vertex has a distance of 0 to itself and every edge corresponds to a distance of 1, there are plenty of easy-to-compute outputs with which an algorithm can gain a head start on the computation of more complicated shortest distances. We refer to [5] for more results on several variants of both SSSD and APSD, including formal proofs of the above statements. While we were able to show a series of upper and lower bounds on the enumeration complexity of shortest distance problems on unweighted graphs and graphs with non-negative edge weights, it remains open whether efficient enumeration is possible at all for graphs with negative edge weights.

## References

1. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math., vol. 1, no. 1, pp. 269–271, Dec. 1959, doi: 10.1007/BF01386390.
2. S. Warshall, “A Theorem on Boolean Matrices,” J. ACM, vol. 9, no. 1, pp. 11–12, Jan. 1962, doi: 10.1145/321105.321107.
3. R. W. Floyd, “Algorithm 97: Shortest Path,” Commun. ACM, vol. 5, no. 6, p. 345, Jun. 1962, doi: 10.1145/367766.368168.
4. U. Zwick, “Exact and Approximate Distances in Graphs — A Survey,” in Algorithms — ESA 2001, Berlin, Heidelberg, 2001, pp. 33–48, doi: 10.1007/3-540-44676-1_3.
5. K. Casel, T. Friedrich, S. Neubert, and M. L. Schmid, “Shortest Distances as Enumeration Problem,” arXiv:2005.06827 [cs], Feb. 2021, http://arxiv.org/abs/2005.06827
2022-01-21 10:17:28
http://madlib.apache.org/docs/latest/group__grp__svec.html
1.14 User Documentation for Apache MADlib Sparse Vectors This module implements a sparse vector data type, named "svec", which provides compressed storage of vectors that have many duplicate elements. Arrays of floating point numbers for various calculations sometimes have long runs of zeros (or some other default value). This is common in applications like scientific computing, retail optimization, and text processing. Each floating point number takes 8 bytes of storage in memory and/or disk, so saving those zeros is often worthwhile. There are also many computations that can benefit from skipping over the zeros. Consider, for example, the following array of doubles stored as a Postgres/Greenplum "float8[]" data type: '{0, 33,...40,000 zeros..., 12, 22 }'::float8[] This array would occupy slightly more than 320KB of memory or disk, most of it zeros. Even if we were to exploit the null bitmap and store the zeros as nulls, we would still end up with a 5KB null bitmap, which is still not nearly as memory efficient as we'd like. Also, as we perform various operations on the array, we do work on 40,000 fields that turn out to be unimportant. To solve the problems associated with the processing of vectors discussed above, the svec type employs a simple Run Length Encoding (RLE) scheme to represent sparse vectors as pairs of count-value arrays. For example, the array above would be represented as '{1,1,40000,1,1}:{0,33,0,12,22}'::madlib.svec which says there is 1 occurrence of 0, followed by 1 occurrence of 33, followed by 40,000 occurrences of 0, etc. This uses just 5 integers and 5 floating point numbers to store the array. Further, it is easy to implement vector operations that can take advantage of the RLE representation to make computations faster. The SVEC module provides a library of such functions. The current version only supports sparse vectors of float8 values. Future versions will support other base types. 
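The count-value pairing is plain run-length encoding; a small Python sketch (an illustration only, not MADlib's actual C implementation) shows how the 40,002-element array above collapses to five count-value pairs:

```python
def rle_encode(values):
    """Encode a list as parallel (counts, values) arrays, svec-style."""
    counts, vals = [], []
    for v in values:
        if vals and vals[-1] == v:
            counts[-1] += 1  # extend the current run
        else:
            counts.append(1)  # start a new run
            vals.append(v)
    return counts, vals

dense = [0.0, 33.0] + [0.0] * 40000 + [12.0, 22.0]
counts, vals = rle_encode(dense)
print(counts, vals)  # [1, 1, 40000, 1, 1] [0.0, 33.0, 0.0, 12.0, 22.0]
```

Ten numbers instead of 40,002, which is exactly the `{1,1,40000,1,1}:{0,33,0,12,22}` representation shown above.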
Using Sparse Vectors

An SVEC can be constructed directly with a constant expression, as follows:

SELECT '{n1,n2,...,nk}:{v1,v2,...vk}'::madlib.svec;

where n1,n2,...,nk specifies the counts for the values v1,v2,...vk.

A float array can be cast to an SVEC:

SELECT ('{v1,v2,...vk}'::float[])::madlib.svec;

An SVEC can be created with an aggregation:

SELECT madlib.svec_agg(v1) FROM generate_series(1,k);

An SVEC can be created using the madlib.svec_cast_positions_float8arr() function by supplying an array of positions and an array of values at those positions:

SELECT madlib.svec_cast_positions_float8arr(
    array[n1,n2,...nk],  -- positions of values in vector
    array[v1,v2,...vk],  -- values at each position
    length,              -- length of vector
    base)                -- value at unspecified positions

For example, the following expression:

SELECT madlib.svec_cast_positions_float8arr(
    array[1,3,5],
    array[2,4,6],
    10,
    0.0)

produces this SVEC:

svec_cast_positions_float8arr
------------------------------
{1,1,1,1,1,5}:{2,0,4,0,6,0}

Add madlib to the search_path to use the svec operators defined in the module.

Document Vectorization into Sparse Vectors

This module implements an efficient way for document vectorization, converting text documents into the sparse vector representation (MADlib.svec) required by various machine learning algorithms in MADlib. The function accepts two tables as input, a dictionary table and a documents table, and produces the specified output table containing sparse vectors for the documents represented in the documents table.

madlib.gen_doc_svecs(output_tbl, dictionary_tbl, dict_id_col, dict_term_col,
                     documents_tbl, doc_id_col, doc_term_col, doc_term_info_col)

Arguments

output_tbl
TEXT. Name of the output table to be created containing the sparse vector representation of the documents. It has the following columns:

doc_id
__TYPE_DOC__. Document id. (__TYPE_DOC__: Column type depends on the type of doc_id_col in documents_tbl.)

sparse_vector
MADlib.svec. Corresponding sparse vector representation.
dictionary_tbl
TEXT. Name of the dictionary table containing features.

dict_id_col
TEXT. Name of the id column in the dictionary_tbl. Expected Type: INTEGER or BIGINT. NOTE: Values must be continuous, ranging from 0 to the total number of elements in the dictionary - 1.

dict_term_col
TEXT. Name of the column containing the terms (features) in dictionary_tbl.

documents_tbl
TEXT. Name of the documents table representing documents.

doc_id_col
TEXT. Name of the id column in the documents_tbl.

doc_term_col
TEXT. Name of the term column in the documents_tbl.

doc_term_info_col
TEXT. Name of the term info column in documents_tbl. The expected type of this column should be:
- INTEGER, BIGINT or DOUBLE PRECISION: Values directly used to populate the vector.
- ARRAY: Length of the array used to populate the vector.
** For an example use case for these column types, please refer to the example below.

Example: Consider a corpus consisting of a set of documents with features (terms) along with doc ids:

1, {this,is,one,document,in,the,corpus}
2, {i,am,the,second,document,in,the,corpus}
3, {being,third,never,really,bothered,me,until,now}
4, {the,document,before,me,is,the,third,document}

1. Prepare the documents table in the appropriate format. The corpus specified above can be represented by either of the following documents tables:

SELECT * FROM documents_table ORDER BY id;

Result:

id | term     | count        id | term     | positions
---+----------+-------       ---+----------+-----------
 1 | is       | 1             1 | is       | {1}
 1 | in       | 1             1 | in       | {4}
 1 | one      | 1             1 | one      | {2}
 1 | this     | 1             1 | this     | {0}
 1 | the      | 1             1 | the      | {5}
 1 | document | 1             1 | document | {3}
 1 | corpus   | 1             1 | corpus   | {6}
 2 | second   | 1             2 | second   | {3}
 2 | document | 1             2 | document | {4}
 2 | corpus   | 1             2 | corpus   | {7}
 . | ...      | ..            . | ...      | ...
 4 | document | 2             4 | document | {1,7}
...

2. Prepare the dictionary table in the appropriate format.
SELECT * FROM dictionary_table ORDER BY id; Result: id | term ----+---------- 0 | am 1 | before 2 | being 3 | bothered 4 | corpus 5 | document 6 | i 7 | in 8 | is 9 | me ... 3. Generate sparse vector for the documents using dictionary_table and documents_table. doc_term_info_col (count) of type INTEGER: SELECT * FROM madlib.gen_doc_svecs('svec_output', 'dictionary_table', 'id', 'term', 'documents_table', 'id', 'term', 'count'); doc_term_info_col (positions) of type ARRAY: SELECT * FROM madlib.gen_doc_svecs('svec_output', 'dictionary_table', 'id', 'term', 'documents_table', 'id', 'term', 'positions'); Result: gen_doc_svecs -------------------------------------------------------------------------------------- Created table svec_output (doc_id, sparse_vector) containing sparse vectors (1 row) 4. Analyze the sparse vectors created. SELECT * FROM svec_output ORDER by doc_id; Result: doc_id | sparse_vector --------+------------------------------------------------- 1 | {4,2,1,2,3,1,2,1,1,1,1}:{0,1,0,1,0,1,0,1,0,1,0} 2 | {1,3,4,6,1,1,3}:{1,0,1,0,1,2,0} 3 | {2,2,5,3,1,1,2,1,1,1}:{0,1,0,1,0,1,0,1,0,1} 4 | {1,1,3,1,2,2,5,1,1,2}:{0,1,0,2,0,1,0,2,1,0} (4 rows) See the file svec.sql_in for complete syntax. Examples We can use operations with svec type like <, >, *, **, /, =, +, SUM, etc, and they have meanings associated with typical vector operations. For example, the plus (+) operator adds each of the terms of two vectors having the same dimension together. SELECT ('{0,1,5}'::float8[]::madlib.svec + '{4,3,2}'::float8[]::madlib.svec)::float8[]; Result: float8 -------- {4,4,7} Without the casting into float8[] at the end, we get: SELECT '{0,1,5}'::float8[]::madlib.svec + '{4,3,2}'::float8[]::madlib.svec; Result: ?column? --------- {2,1}:{4,7} A dot product (%*%) between the two vectors will result in a scalar result of type float8. The dot product should be (0*4 + 1*3 + 5*2) = 13, like this: SELECT '{0,1,5}'::float8[]::madlib.svec %*% '{4,3,2}'::float8[]::madlib.svec; ?column? 
--------- 13 Special vector aggregate functions are also available. SUM is self explanatory. SVEC_COUNT_NONZERO evaluates the count of non-zero terms in each column found in a set of n-dimensional svecs and returns an svec with the counts. For instance, if we have the vectors {0,1,5}, {10,0,3},{0,0,3},{0,1,0}, then executing the SVEC_COUNT_NONZERO() aggregate function would result in {1,2,3}: CREATE TABLE list (a madlib.svec); INSERT INTO list VALUES ('{0,1,5}'::float8[]), ('{10,0,3}'::float8[]), ('{0,0,3}'::float8[]),('{0,1,0}'::float8[]); SELECT madlib.svec_count_nonzero(a)::float8[] FROM list; Result: svec_count_nonzero ---------------- {1,2,3} We do not use null bitmaps in the svec data type. A null value in an svec is represented explicitly as an NVP (No Value Present) value. For example, we have: SELECT '{1,2,3}:{4,null,5}'::madlib.svec; Result: svec ------------------ {1,2,3}:{4,NVP,5} Adding svecs with null values results in NVPs in the sum: SELECT '{1,2,3}:{4,null,5}'::madlib.svec + '{2,2,2}:{8,9,10}'::madlib.svec; Result: ?column? ------------------------- {1,2,1,2}:{12,NVP,14,15} An element of an svec can be accessed using the svec_proj() function, which takes an svec and the index of the element desired. SELECT madlib.svec_proj('{1,2,3}:{4,5,6}'::madlib.svec, 1) + madlib.svec_proj('{4,5,6}:{1,2,3}'::madlib.svec, 15); Result: ?column? --------- 7 A subvector of an svec can be accessed using the svec_subvec() function, which takes an svec and the start and end index of the subvector desired. SELECT madlib.svec_subvec('{2,4,6}:{1,3,5}'::madlib.svec, 2, 11); Result: svec_subvec ---------------- {1,4,5}:{1,3,5} The elements/subvector of an svec can be changed using the function svec_change(). It takes three arguments: an m-dimensional svec sv1, a start index j, and an n-dimensional svec sv2 such that j + n - 1 <= m, and returns an svec like sv1 but with the subvector sv1[j:j+n-1] replaced by sv2. 
An example follows: SELECT madlib.svec_change('{1,2,3}:{4,5,6}'::madlib.svec,3,'{2}:{3}'::madlib.svec); Result: svec_change -------------------- {1,1,2,2}:{4,5,3,6} There are also higher-order functions for processing svecs. For example, the following is the corresponding function for lapply() in R. SELECT madlib.svec_lapply('sqrt', '{1,2,3}:{4,5,6}'::madlib.svec); Result: svec_lapply ---------------------------------------------- {1,2,3}:{2,2.23606797749979,2.44948974278318} The full list of functions available for operating on svecs are available in svec.sql-in. A More Extensive Example For a text classification example, let's assume we have a dictionary composed of words in a sorted text array: CREATE TABLE features (a text[]); INSERT INTO features VALUES ('{am,before,being,bothered,corpus,document,i,in,is,me, never,now,one,really,second,the,third,this,until}'); We have a set of documents, each represented as an array of words: CREATE TABLE documents(a int,b text[]); INSERT INTO documents VALUES (1,'{this,is,one,document,in,the,corpus}'), (2,'{i,am,the,second,document,in,the,corpus}'), (3,'{being,third,never,really,bothered,me,until,now}'), (4,'{the,document,before,me,is,the,third,document}'); Now we have a dictionary and some documents, we would like to do some document categorization using vector arithmetic on word counts and proportions of dictionary words in each document. To start this process, we'll need to find the dictionary words in each document. We'll prepare what is called a Sparse Feature Vector or SFV for each document. An SFV is a vector of dimension N, where N is the number of dictionary words, and in each cell of an SFV is a count of each dictionary word in the document. 
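Conceptually, an SFV is just a per-document count of each dictionary word, taken in dictionary order. A rough Python equivalent of this step (an illustration, not the parallel madlib.svec_sfv implementation) is:

```python
from collections import Counter

def sfv(dictionary, doc_words):
    """Count each dictionary word's occurrences, in dictionary order."""
    counts = Counter(doc_words)
    return [counts[w] for w in dictionary]

# The 19-word dictionary and one document from the example below
dictionary = ("am before being bothered corpus document i in is me "
              "never now one really second the third this until").split()
doc = "i am the second document in the corpus".split()
print(sfv(dictionary, doc))  # [1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0]
```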
Inside the sparse vector library, we have a function that will create an SFV from a document, so we can just do this (for a more efficient way of converting documents into sparse vectors, especially for larger datasets, please refer to Document Vectorization into Sparse Vectors):

SELECT madlib.svec_sfv((SELECT a FROM features LIMIT 1),b)::float8[] FROM documents;

Result:

svec_sfv
----------------------------------------
{0,0,0,0,1,1,0,1,1,0,0,0,1,0,0,1,0,1,0}
{0,0,1,1,0,0,0,0,0,1,1,1,0,1,0,0,1,0,1}
{1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,2,0,0,0}
{0,1,0,0,0,2,0,0,1,1,0,0,0,0,0,2,1,0,0}

Note that the output of madlib.svec_sfv() is an svec for each document containing the count of each of the dictionary words in the ordinal positions of the dictionary. This can more easily be understood by lining up the feature vector and text like this:

SELECT madlib.svec_sfv((SELECT a FROM features LIMIT 1),b)::float8[], b FROM documents;

Result:

svec_sfv                                 | b
-----------------------------------------+--------------------------------------------------
{1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,2,0,0,0} | {i,am,the,second,document,in,the,corpus}
{0,1,0,0,0,2,0,0,1,1,0,0,0,0,0,2,1,0,0} | {the,document,before,me,is,the,third,document}
{0,0,0,0,1,1,0,1,1,0,0,0,1,0,0,1,0,1,0} | {this,is,one,document,in,the,corpus}
{0,0,1,1,0,0,0,0,0,1,1,1,0,1,0,0,1,0,1} | {being,third,never,really,bothered,me,until,now}

SELECT * FROM features;

a
-------------------------------------------------------------------------------------------------------
{am,before,being,bothered,corpus,document,i,in,is,me,never,now,one,really,second,the,third,this,until}

Now when we look at the document "i am the second document in the corpus", its SFV is {1,3*0,1,1,1,1,6*0,1,2,3*0}. The word "am" is the first ordinate in the dictionary and there is 1 instance of it in the SFV. The word "before" has no instances in the document, so its value is "0", and so on.
The function madlib.svec_sfv() can process large numbers of documents into their SFVs in parallel at high speed. The rest of the categorization process is all vector math. The actual count is hardly ever used. Instead, it's turned into a weight. The most common weight is called tf/idf, for Term Frequency / Inverse Document Frequency. The calculation for a given term in a given document is {#Times in document} * log {#Documents / #Documents the term appears in}. For instance, the term "document" in document 1 would have weight 1 * log (4/3). In document 4, it would have weight 2 * log (4/3). Terms that appear in every document would have tf/idf weight 0, since log (4/4) = log(1) = 0. (Our example has no term like that.) That usually sends a lot of values to 0. For this part of the processing, we'll need to have a sparse vector of the dictionary dimension (19) with the values log(#documents/#documents each term appears in). There will be one such vector for the whole list of documents (aka the "corpus"). The #documents is just a count of all of the documents, in this case 4, but there is one divisor for each dictionary word, and its value is the count of the documents in which that word appears. This single vector for the whole corpus can then be element-wise multiplied by each document SFV to produce the Term Frequency/Inverse Document Frequency weights.
This can be done as follows:

CREATE TABLE corpus AS
    (SELECT a, madlib.svec_sfv((SELECT a FROM features LIMIT 1),b) sfv FROM documents);
CREATE TABLE weights AS
    (SELECT a docnum, madlib.svec_mult(sfv, logidf) tf_idf
     FROM (SELECT madlib.svec_log(madlib.svec_div(count(sfv)::madlib.svec,
                                                  madlib.svec_count_nonzero(sfv))) logidf
           FROM corpus) foo, corpus
     ORDER BY docnum);
SELECT * FROM weights;

Result:

docnum | tf_idf
-------+----------------------------------------------------------------------
1 | {4,1,1,1,2,3,1,2,1,1,1,1}:{0,0.69,0.28,0,0.69,0,1.38,0,0.28,0,1.38,0}
2 | {1,3,1,1,1,1,6,1,1,3}:{1.38,0,0.69,0.28,1.38,0.69,0,1.38,0.57,0}
3 | {2,2,5,1,2,1,1,2,1,1,1}:{0,1.38,0,0.69,1.38,0,1.38,0,0.69,0,1.38}
4 | {1,1,3,1,2,2,5,1,1,2}:{0,1.38,0,0.57,0,0.69,0,0.57,0.69,0}

We can now get the "angular distance" between one document and the rest of the documents using the ACOS of the dot product of the document vectors. The following calculates the angular distance between the first document and each of the other documents:

SELECT docnum,
       180. * ( ACOS( madlib.svec_dmin( 1., madlib.svec_dot(tf_idf, testdoc)
                  / (madlib.svec_l2norm(tf_idf) * madlib.svec_l2norm(testdoc))))
              / 3.141592654) angular_distance
FROM weights, (SELECT tf_idf testdoc FROM weights WHERE docnum = 1 LIMIT 1) foo
ORDER BY 1;

Result:

docnum | angular_distance
-------+------------------
1 | 0
2 | 78.8235846096986
3 | 89.9999999882484
4 | 80.0232034288617

We can see that the angular distance between document 1 and itself is 0 degrees, and between documents 1 and 3 it is 90 degrees because they share no features at all. The angular distance can now be plugged into machine learning algorithms that rely on a distance measure between data points.

SVEC also provides functionality for declaring an array given an array of positions and an array of values; intermediate values between those positions are set to a base value that the user provides in the same function call. In the example below, the first array of integers represents the positions for the second array (the array of floats). Positions do not need to come in sorted order. The third value represents the desired maximum size of the array.
This ensures that the array is of that size even if the last position is smaller. If the max size is < 1, that value is ignored and the array will end at the last position in the position vector. The final value is a float representing the base value to be used between the declared positions (0 would be a common candidate):

SELECT madlib.svec_cast_positions_float8arr(ARRAY[1,2,7,5,87],ARRAY[.1,.2,.7,.5,.87],90,0.0);

Result:

svec_cast_positions_float8arr
----------------------------------------------------
{1,1,2,1,1,1,79,1,3}:{0.1,0.2,0,0.5,0,0.7,0,0.87,0}
(1 row)

Related Topics

Other examples of svec usage can be found in the k-means module, k-Means Clustering.

File svec.sql_in documenting the SQL functions.
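The tf/idf and angular-distance pipeline above can also be sketched outside the database in plain Python (using the natural log, as madlib.svec_log does; this is an illustrative re-computation of the example, not MADlib code):

```python
import math
from collections import Counter

docs = {
    1: "this is one document in the corpus".split(),
    2: "i am the second document in the corpus".split(),
    3: "being third never really bothered me until now".split(),
    4: "the document before me is the third document".split(),
}
vocab = sorted({w for d in docs.values() for w in d})
df = Counter(w for d in docs.values() for w in set(d))  # document frequency

def tf_idf(doc):
    """tf/idf weight vector over the dictionary: tf * ln(#docs / df)."""
    tf = Counter(doc)
    return [tf[w] * math.log(len(docs) / df[w]) for w in vocab]

def angular_distance(a, b):
    """Angle in degrees between two weight vectors (clamped like svec_dmin)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    cos = min(1.0, dot / (norm(a) * norm(b)))
    return math.degrees(math.acos(cos))

weights = {k: tf_idf(d) for k, d in docs.items()}
for k in sorted(docs):
    print(k, round(angular_distance(weights[1], weights[k]), 1))
```

Rounded to one decimal, this reproduces the SQL result above: 0.0, 78.8, 90.0, and 80.0 degrees for documents 1 through 4.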
2018-07-19 03:20:38
https://codegolf.stackexchange.com/posts/115095/revisions
Especially in challenges where you always know the size of the input, you can take advantage of the 'Stack-Height' nilad [] to create integers.

Let's work through this with a hypothetical challenge: output CAT.

The non-golfy way is to use the online integer golfer to push 67, 65, and 84. This gives:

(((((()()()()){}){}){}()){}())
(((((()()()()){}){}){}){}())
((((((()()()){}()){}){})){}{})

(Newlines for clarity.) This is 88 bytes, and not that great. If we instead push the consecutive differences between values, we can save a lot. So we wrap the first number in a push call, and subtract 2:

( (((((()()()()){}){}){}()){}()) [()()] )

Then, we take this code, wrap it in a push call, and add 19 to the end:

( ((((((()()()()){}){}){}()){}())[()()]) ((((()()()){})){}{}()) )

This is 62 bytes, for a whopping 26-byte golf!

Now here is where we get to take advantage of the stack-height nilad. By the time we start pushing 19, we know that there are already 2 items on the stack, so [] will evaluate to 2. We can use this to create a 19 in fewer bytes. The obvious way is to change the inner ()()() to ()[], but this only saves two bytes. With some more tinkering, it turns out we can push 19 with

((([][]){})[]{})

This saves us 6 bytes. Now we are down to 56. You can see this tip being used very effectively in existing answers.
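For reference, the consecutive differences used above are easy to check with a few lines of Python (my own sketch to verify the arithmetic; it is not Brain-Flak):

```python
# Compute the character codes of "CAT" and the consecutive
# differences that the tip pushes instead of absolute values.
codes = [ord(c) for c in "CAT"]
diffs = [codes[0]] + [b - a for a, b in zip(codes, codes[1:])]

print(codes)  # [67, 65, 84]
print(diffs)  # [67, -2, 19]
```

Only the first value is encoded absolutely; each later character is reconstructed by adding its (usually much smaller) difference to the number already on the stack, which is why only 2 and 19 need to be encoded after the initial 67.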
http://www.physicsforums.com/showthread.php?t=222232
# How do I calculate the power x

by chemart | Tags: power

chemart: $$2^x=128$$ $$x=?$$ How do I calculate the power $x$ from 128 and 2?

Reply: Take the log of both sides: $\log(2^x)=\log(128)$. Then use the property that $\log(a^b)=b\log(a)$. The remaining steps should be apparent.

mgb_phys: Or quicker, just calculate 2*2*2... until you get 128!

ice109: that's not quicker...

Reply: Depends how far away your calculator is.

Reply: $2^x = 128$. The first thing you have to do is get both of the bases the same. In $2^x$, $x$ is the exponent (or power, if you like) and 2 is the base, so you have to write 128 with a base of 2: $$128 = 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 = 2^7.$$ Therefore $2^x = 2^7$, so $x = 7$. This way may seem long, but take the example $2^{2(x+1)} = 1024$: since $1024 = 2^{10}$, we get $2(x+1) = 10$, hence $2x + 2 = 10$, $2x = 8$, $x = 4$.

Reply: There are also other cases where you have to take logs of both parts, as was initially suggested. Say you have $$2^{x+1}=35.$$ You definitely cannot express 35 as a power of 2, so in these cases you need to take the log of both parts, choosing a base that is easy to work with: $$\log_a 2^{x+1}=\log_a 35 \Rightarrow (x+1)\log_a 2=\log_a 35 \Rightarrow x+1=\frac{\log_a 35}{\log_a 2}.$$

Reply: mgb_phys's way is definitely quicker. Note that when we take logs of both sides, all we achieve is $x=\log_2(128)$, and to actually evaluate that we must work out $2^7 = 128$ anyway.
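Both approaches from the thread are easy to check mechanically; here is a short Python sketch (my own illustration, not part of the original posts):

```python
import math

# Method 1: take logs of both sides, so x = log(128) / log(2).
x_log = math.log(128) / math.log(2)

# Method 2: repeatedly multiply by 2 and count the factors,
# i.e. express 128 as a power of 2 directly.
x_count, value = 0, 1
while value < 128:
    value *= 2
    x_count += 1

print(round(x_log))  # 7
print(x_count)       # 7
```

The rounding in method 1 guards against floating-point noise; method 2 only works because 128 happens to be an exact power of 2, which is exactly the point made in the thread.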
http://math.stackexchange.com/questions/164739/non-equivalent-bundles/164741
# non-equivalent bundles

Is it possible to find a specific example of two fiber bundles with the same base, group, and fiber, and with homeomorphic total spaces, such that the bundles are not equivalent/isomorphic? If so, should I find a bundle map F between the two bundles, inducing the identity on the common base, with F not preserving fibers? (I don't know what that means, got mixed up.) Or should I define the action of the group on the fibers differently? - for each (h,j) in Z+Z there corresponds an S^3 bundle over S^4, and when h+j=1 the total spaces are all homeomorphic to S^7 and the group is SO(4). I try to understand where the difference comes from. –  wqr Jun 29 '12 at 23:33 Why do you ask whether such bundles exist if you already know examples of this behavior? –  Olivier Bégassat Jun 29 '12 at 23:38 EDIT: This answer is wrong: see Dan Ramras's comment. The circle (thought of as the set of norm one elements in the complex plane) is a fiber bundle over itself in many ways, via the maps $z \mapsto z^k$ where $k \neq 0$ is an integer. These are all non-isomorphic as fiber bundles (which follows from the computation that the fundamental group of $S^1$ is $\mathbb{Z}$), but the fiber has $|k|$ points, so the bundles corresponding to $\pm k$ have the same fiber, total space, and base. - (The group here is just $\mathbb{Z}/k$.) –  user29743 Jun 29 '12 at 23:44 These bundles are of different group, though. –  Mariano Suárez-Alvarez Jun 30 '12 at 3:20 I don't understand - I am fixing $k$ and looking at the two bundles corresponding to $\pm k$. Surely both these bundles have structure group $\mathbb{Z}/k$, right? –  user29743 Jun 30 '12 at 17:23 But the map $z\mapsto z^{-1}$ is a bundle isomorphism between the bundles for $k$ and $-k$. Covering spaces of $S^1$, as you say, are classified by subgroups of $\pi_1 (S^1) = \mathbb{Z}$, by sending a covering space to the image of its fundamental group under the projection map.
Both of these covering spaces correspond to the subgroup $k\mathbb{Z} < \mathbb{Z}$. –  Dan Ramras Jul 3 '12 at 3:58 hard to argue with that... –  user29743 Jul 4 '12 at 18:31 In the paper "K-theory doesn't exist," (J. Pure Appl. Alg., 1978 vol 12) Akin explains that if $p: P\rightarrow B$ is a non-trivial principal $G$-bundle, the map $P\times G\rightarrow B$ (project to the first factor and then apply $p$) can be made into a principal $G\times G$-bundle in two different ways, by specifying two different actions of $G\times G$ on $P\times G$. In general, these are not isomorphic as principal bundles (Akin shows that if they were always isomorphic, the $K$-theory of $B$ would be trivial, which is the origin of the paper's title). -
https://math.stackexchange.com/questions/609364/why-is-ring-addition-commutative?noredirect=1&lq=1
# Why is ring addition commutative? What is the motivation behind axiomatically forcing the underpinning group of a ring to be abelian? Noncommutative rings are vastly more complex than commutative ones, so I am assuming that allowing the additive operation to be noncommuting would just make matters worse. Is there something deeper here, or is this a restriction for the sake of convenience and simplicity? • I believe I've heard that in any ring with unity, commutativity follows from the other axioms. Fact-checking now... :) Here's a source: math.stackexchange.com/questions/346375/… . Though since it's on MSE maybe this is a duplicate :( Dec 16, 2013 at 17:54 • Possibly useful: near-rings. Dec 16, 2013 at 17:56 • If ring addition weren't commutative, then the distributive property wouldn't hold in general. Dec 16, 2013 at 17:58 • A definition may come about by induction from examples. Rings of polynomials, rings of matrices, etc. Dec 16, 2013 at 17:58 • All useful definitions . Dec 16, 2013 at 18:01 Yes, there is a deeper reason, at least in my opinion. Most abstract axiomatic constructions in mathematics are inspired by or even equivalent to concrete examples. One of the most basic algebraic objects one can think of is the set $S^S$ of mappings from a set $S$ into itself, which has a natural associative multiplication (composition) and a unit (the identity map). Taking these properties as abstract axioms, one obtains the definition of a unital monoid. However, an element $m\in M$ of an abstractly defined unital monoid $M$ can be viewed as the map $[l_m:M\rightarrow M]\in M^M$ given by $l_m(x)=mx$. Mapping $m\in M$ to $l_m\in M^M$ induces an injective homomorphism of monoids so the abstract definition is really just language (albeit quite helpful) since $M$ is isomorphic to a submonoid of a mapping monoid. One can also consider submonoids of $S^S$ which contain only bijective maps. Again, taking this additional property (i.e. 
existence of inverses) as another axiom, one obtains the definition of a group - but every group can be viewed as a subgroup of a group of bijective maps on a set, so the abstract definition is really just language since it does not provide you with anything that truly generalizes the guiding example. Now for unital rings the story is essentially the same, but whereas for unital monoids the basic model is $S^S$ and its submonoids, the basic model for rings is $\operatorname{End}(G)$ and its subrings, where $G$ is an abelian group. $\operatorname{End}(G)$ is the submonoid of the mapping monoid $G^G$ which contains only those maps which are group homomorphisms. The restriction to abelian groups is necessary since the pointwise product $(\varphi,\psi)\mapsto [x\mapsto \varphi(x)\psi(x)]$ (which is the addition in the ring $\operatorname{End}(G)$) of endomorphisms is another endomorphism only if $G$ is assumed to be abelian, in general. As with unital monoids and groups, an abstract unital ring $R$ embeds homomorphically and injectively into its own endomorphism ring $\operatorname{End}(R)$ by multiplication operators, and this is a homomorphism of unital rings (not just of unital monoids). Thus, generalizing the definition of a ring in a manner which is suggested by the question is tantamount to generalizing the example $\operatorname{End}(G)$ to nonabelian $G$. As for how this could be done, there are two possibilities (actually more... see below in the edit). First, one could attempt to define a group structure on $\operatorname{End}(G)$ other than the pointwise product. This seems somewhat unnatural since the whole point of using $\operatorname{End}(G)$ is to exploit the presence of the group product in the first place.
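The claim that the pointwise product of endomorphisms need not be an endomorphism when $G$ is nonabelian can be checked concretely. Here is a small Python sketch (my own illustration, using $G = S_3$ and the identity endomorphism for both factors, so the pointwise product is the squaring map $x \mapsto x^2$):

```python
from itertools import permutations, product

# G = S3, represented as tuples; the group law is composition.
G = list(permutations(range(3)))

def op(p, q):
    """(p * q)(i) = p(q(i))"""
    return tuple(p[q[i]] for i in range(3))

def is_endomorphism(f):
    return all(f[op(p, q)] == op(f[p], f[q]) for p, q in product(G, G))

identity = {p: p for p in G}          # an endomorphism of G
squaring = {p: op(p, p) for p in G}   # pointwise product identity * identity

assert is_endomorphism(identity)
assert not is_endomorphism(squaring)  # fails: (xy)^2 != x^2 y^2 in S3
```

For an abelian $G$ the same squaring map would pass the check, which is exactly why $\operatorname{End}(G)$ is closed under pointwise products only in the abelian case.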
Still, one might speculate that for certain nonabelian groups there are structures on $\operatorname{End}(G)$ which are somehow derived from the group product and interact well with the composition - such a structure would be quite bizarre since at the very least, the distributive property will be lost, as explained in the other answers. The other avenue of generalization is to continue using the pointwise product, which as you may have noticed makes the entire mapping monoid $G^G$ into a group. Thus, it is possible to consider subsets of $G^G$ which are closed under both operations: pointwise product and composition. This is the guiding example of a "near ring" as described on Wikipedia. In general, a submonoid $M\subset G^G$ such that $M\subset \operatorname{End}(G)\subset G^G$ can be one of these only when $G$ is abelian, and these examples are the traditionally defined rings. EDIT 12/23/2013 Earlier today I realized that there is an easy way to create an example of a "ring with nonabelian underlying group": just take your favorite nonabelian group $G$ written with $+$ (which is horrible, I know) and impose a second law of composition $\ast$ given by $(x,y)\mapsto x\ast y=e_G$. It is easily verified that $\ast$ is associative and distributes over $+$ so $(G,+,\ast)$ satisfies all ring axioms with the exception of the commutativity of $(G,+)$. Now there is no conflict with the other answers here because the weird ring I've just described has no multiplicative identity. Thus, I have tried to insert the word "unital" = "possessing a two-sided identity" in various appropriate places in the above text in order to emphasize the assumption that an identity exists. Now if one does not assume the existence of an identity things can get complicated rather quickly. For instance, an arbitrary set $M$ can be made into a non-unital monoid by choosing a point $x\in M$ and imposing the law of composition $(m,n)\mapsto x$ for every pair $(m,n)\in M\times M$. 
This is clearly an associative law of composition, but there is no identity and, perhaps more importantly, the canonical homomorphism $m\in M\mapsto l_m\in M^M$ as defined above has only one point in its image (the point map onto $x$). The point is that if $M$ does not contain an identity then $m\mapsto l_m$ is not necessarily injective so one does not necessarily see an exact copy of $M$ inside of $M^M$ - only a quotient. This means that the abstract definition of a (not necessarily unital) monoid can produce examples which are not canonically equivalent to the guiding example of $S^S$ and its submonoids, so in this case the definition is not "just language", as I wrote above. There is a sufficient condition for the injectivity of $m\mapsto l_m$ which is more general than the existence of an identity element: if the right anti-representation $x\mapsto r_x\in M^M$ of $M$ is defined as usual ($r_x(m)=mx$) then $m\mapsto l_m$ will be injective provided that there is at least one $x\in M$ such that $r_x\in M^M$ is an injective map, for then $l_m=l_n$ implies $$r_x(m)=l_m(x)=l_n(x)=r_x(n)$$ and therefore $m=n$ by the injectivity of $r_x$. I have a feeling that this is a manifestation of some more general phenomenon involving monomorphisms in the category of sets but I don't know enough about category theory to discuss this in any detail (maybe @Martin Brandenburg would like to leave a comment). In particular, if $M$ contains a two-sided identity $1$ then $r_1\in M^M$ is injective so $m\mapsto l_m$ injective. At the level of rings, this means that in an abstractly defined ring, not necessarily with identity, the presence of at least one $x\in R$ such that $r_x\in \operatorname{End}(R)$ is injective forces the canonical representation $m\mapsto l_m$ to be injective and therefore to faithfully reproduce $R$ inside of $\operatorname{End}(R)$. 
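The constant-law example from the preceding paragraph is easy to exhibit in code. In this Python sketch (my own illustration), the law of composition sends every pair to a fixed point $x$, and the canonical representation $m \mapsto l_m$ collapses to a single map:

```python
M = [0, 1, 2]   # an arbitrary set
x = 0           # the fixed target point

def law(m, n):
    """Constant law of composition: every product equals x."""
    return x

# Associativity holds trivially: both sides always equal x.
assert all(law(law(a, b), c) == law(a, law(b, c))
           for a in M for b in M for c in M)

# Canonical representation l_m(z) = law(m, z): every l_m is the same map,
# so m -> l_m has a one-point image and is far from injective.
l = {m: tuple(law(m, z) for z in M) for m in M}
assert len(set(l.values())) == 1
```

This makes concrete why, without an identity (or some injective right translation $r_x$), the abstract definition can produce monoids that are not faithfully represented inside $M^M$.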
Now a lot has been said on this thread about how the distributive property in a ring forces the underlying group to be abelian and the computation presented by Bill Dubuque and drhab uses the existence of a ring identity to show this. In fact, this can be proved assuming only that the canonical representation is injective: Proposition. If $(R,+,\ast)$ satisfies all the ring axioms except the commutativity of the group $(R,+)$, then the canonical representation $x\mapsto [z\mapsto l_x(z)=x\ast z]$ is a homomorphism of both products into $R^R$ which takes values in $\operatorname{End}(R)$ and if this homomorphism is injective then $(R,+)$ is abelian. Remark. Since the proposition requires two-sided distributivity of $\ast$ over $+$, the hypotheses are somewhat stronger than simply stating that $(R,+,\ast)$ is a near ring. Proof. 1. "$x\mapsto l_x$ takes values in $\operatorname{End}(R)$" uses distributivity from the left: $$l_x(y+z)=x\ast (y+z)=(x\ast y) + (x\ast z)=l_x(y)+ l_x(z).$$ 2. "$x\mapsto l_x$ is a homomorphism $\ast\rightarrow \circ$" uses associativity of $\ast$: $$l_{x\ast y}(z)=(x\ast y)\ast z=x\ast (y\ast z)=l_x\circ l_y(z).$$ 3. "$x\mapsto l_x$ is a homomorphism $+\rightarrow +$" uses distributivity from the right: $$l_{x+y}(z)=(x+y)\ast z=(x\ast z)+ (y\ast z)=l_x(z)+ l_y(z)=(l_x+l_y)(z).$$ 4. $(R,+)$ is abelian if $x\mapsto l_x$ is injective: $$l_{x+y}(z+z)=l_x(z+z)+l_y(z+z)=l_x(z)+l_x(z)+l_y(z)+l_y(z) =l_x(z)+l_{x+y}(z)+l_y(z)$$ but also $$l_{x+y}(z+z)=l_{x+y}(z)+l_{x+y}(z)=l_x(z)+l_y(z)+l_x(z)+l_y(z)=l_x(z)+l_{y+x}(z)+l_y(z).$$ Canceling the outer terms (which is valid since $(R,+)$ is assumed to be a group), we have $l_{x+y}(z)=l_{y+x}(z)$. This being true for all $z$, we conclude that $x+y=y+x$ provided that $x\mapsto l_x$ is injective. The proposition is proved. What if we drop the assumption that the canonical representation is injective?
Then we can produce examples of "rings with nonabelian underlying groups" as I did at the beginning of the edit - to reiterate: just take your favorite nonabelian group $G$ written with $+$ (which is atrocious, I know) and impose a second law of composition $\ast$ given by $(x,y)\mapsto x\ast y=e_G$. It is easily verified that $\ast$ is associative and distributes over $+$ so $(G,+,\ast)$ satisfies all ring axioms with the exception of the commutativity of $(G,+)$. The proposition shows that the canonical representation cannot be injective and sure enough, it takes values only on a single point: the trivial endomorphism $[x\mapsto e_G]\in \operatorname{End}(G)$. To sum things up I would say that 1. Yes, the ring axioms can be relaxed to produce "rings with nonabelian underlying groups", however some nice property will have to be sacrificed: either • multiplication will not distribute from the left, which can give you a near-ring but then the image of the canonical representation will not lie in $\operatorname{End}(R)$, in general; or • the canonical representation will not be injective in which case you cannot adjoin an identity without severely disrupting the given algebraic structure (I assume that you will have to pass to some abelian quotient of the underlying group). 2. In either case (and especially in the second case) these objects will not interact well with $\mathbb{Z}$, and in my estimation that is why they are mostly curiosities rather than the object of intense study. • +1 Welcome to MSE. Dec 17, 2013 at 2:31 • The basic point of this answer is not to generalize purely for the sake of generalization. All new math is an extension of slightly older math, which is to say, all generalizations are guided by example lest they be vacuous. As an example: the legend of the PhD candidate and the anti-metric space (singular!) Dec 23, 2013 at 15:55 • @Ryan Reich: Could you elaborate which "legend" you mean? 
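The "ring with nonabelian underlying group" described above can be verified mechanically. The following Python sketch (my own check, not from the thread) takes $G = S_3$ with composition as the additive law and the constant product $x \ast y = e_G$:

```python
from itertools import permutations, product

# G = S3 written "additively": add = composition, zero = identity permutation.
G = list(permutations(range(3)))
e = (0, 1, 2)

def add(p, q):
    return tuple(p[q[i]] for i in range(3))

def mul(p, q):
    """The trivial product: x * y = e_G for all x, y."""
    return e

# Addition is a group law but NOT commutative on S3.
assert any(add(p, q) != add(q, p) for p, q in product(G, G))

# Multiplication is associative and distributes over addition on both
# sides, because every product is e and e + e = e.
for a, b, c in product(G, G, G):
    assert mul(mul(a, b), c) == mul(a, mul(b, c))
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
    assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))
```

As the answer notes, this structure has no multiplicative identity, which is consistent with the proposition that a ring with an injective canonical representation must have abelian addition.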
Dec 23, 2013 at 16:03 • Found it by a quick googling, e.g., here and here (page 50). An anti-metric space satisfies the axioms of a metric space, except the triangle inequality is replaced with $d(x, y)+d(y,z) \le d(x,z)$. In fact, there aren't very many anti-metric spaces. Dec 23, 2013 at 16:18 • @Martin It's in the urban legends question on MO, with slightly different details. Basically, student defines said spaces without giving examples and is hoist on his own petard at defense when famous mathematician observes there is only the one-point space. Dec 23, 2013 at 16:44 In order to generalize rings to structures with noncommutative addition, one cannot simply delete the axiom that addition is commutative, since, in fact, other (standard) ring axioms force addition to be commutative (Hankel, 1867 [1]). The proof is simple: apply both the left and right distributive law in different order to the term $$\rm\:(1\!+\!1)(x\!+\!y),\:$$ viz. $$\rm (1\!+\!1)(x\!+\!y) = \bigg\lbrace \begin{eqnarray}\rm (1\!+\!1)x\!+\!(1\!+\!1)y\, =\, x \,+\, \color{#C00}{x\!+\!y} \,+\, y\\ \rm 1(x\!+\!y)\!+1(x\!+\!y)\, =\, x\, +\, \color{#0A0}{y\!+\!x}\, +\, y\end{eqnarray}\bigg\rbrace\:\Rightarrow\: \color{#C00}{x\!+\!y}\,=\,\color{#0A0}{y\!+\!x}\ \ by\ \ cancel\ \ x,y$$ Thus commutativity of addition, $$\rm\:x+y = y+x,\:$$ is implied by these axioms: $$(1)\ \ *\,$$ distributes over $$\rm\,+\!:\ \ x(y+z)\, =\, xy+xz,\ \ (y+z)x\, =\, yx+zx$$ $$(2)\ \, +\,$$ is cancellative: $$\rm\ \ x+y\, =\, x+z\:\Rightarrow\: y=z,\ \ y+x\, =\, z+x\:\Rightarrow\: y=z$$ $$(3)\ \, +\,$$ is associative: $$\rm\ \ (x+y)+z\, =\, x+(y+z)$$ $$(4)\ \ *\,$$ has a neutral element $$\rm\,1\!:\ \ 1x = x$$ Said more structurally, recall that a SemiRing is that generalization of a Ring whose additive structure is relaxed from a commutative Group to merely a SemiGroup, i.e. 
here the only hypothesis on addition is that it be associative (so in SemiRings, unlike Rings, addition need not be commutative, nor need every element $$\rm\,x\,$$ have an additive inverse $$\rm\,-x).\,$$ Now the above result may be stated as follows: a semiring with $$\,1\,$$ and cancellative addition has commutative addition. Such semirings are simply subsemirings of rings (as is $$\rm\:\Bbb N \subset \Bbb Z)\,$$ because any commutative cancellative semigroup embeds canonically into a commutative group, its group of differences (in precisely the same way $$\rm\,\Bbb Z\,$$ is constructed from $$\rm\,\Bbb N,\,$$ i.e. the additive version of the fraction field construction). Examples of SemiRings include: $$\rm\,\Bbb N;\,$$ initial segments of cardinals; distributive lattices (e.g. subsets of a powerset with operations $$\cup$$ and $$\cap$$; $$\rm\,\Bbb R\,$$ with + being min or max, and $$*$$ being addition; semigroup semirings (e.g. formal power series); formal languages with union, concat; etc. For a nice survey of SemiRings and SemiFields see [2]. See also Near-Rings. [1] Gerhard Betsch. On the beginnings and development of near-ring theory. pp. 1-11 in: Near-rings and near-fields. Proceedings of the conference held in Fredericton, New Brunswick, July 18-24, 1993. Edited by Yuen Fong, Howard E. Bell, Wen-Fong Ke, Gordon Mason and Gunter Pilz. Mathematics and its Applications, 336. Kluwer Academic Publishers Group, Dordrecht, 1995. x+278 pp. ISBN: 0-7923-3635-6 Zbl review [2] Hebisch, Udo; Weinert, Hanns Joachim. Semirings and semifields. $$\$$ pp. 425-462 in: Handbook of algebra. Vol. 1. Edited by M. Hazewinkel. North-Holland Publishing Co., Amsterdam, 1996. xx+915 pp. ISBN: 0-444-82212-7 Zbl review, AMS review • In fact, (4) can even be weakened to the existence of a non-zero-divisor (either of left or right). Jun 16, 2016 at 1:45 • This answer is weirdly similar (including spelling errors) to this one: math.stackexchange.com/a/346682/101420. 
Are the authors the same person under different names? Jun 16, 2016 at 8:12
• @Vincent They are both excerpted from our lecture notes and/or sci.math posts, etc. Jun 16, 2016 at 12:35
• This means that a ring without a commutative identity doesn't necessarily have to have commutative addition, right? Feb 23, 2019 at 21:23
• @grenmester It means you have to drop at least one of the listed $4$ laws. Feb 23, 2019 at 21:38

$\left(1+1\right)\left(a+b\right)=1\left(a+b\right)+1\left(a+b\right)=a+b+a+b$

$\left(1+1\right)\left(a+b\right)=\left(1+1\right)a+\left(1+1\right)b=a+a+b+b$

So distributivity (on both sides) demands that $a+b=b+a$, after cancelling the leading $a$ and the trailing $b$ in $a+b+a+b=a+a+b+b$.

• (This is precisely the argument given in the link I mentioned; there a detailed explanation of what axioms are used is also given) Dec 16, 2013 at 18:02

Most rings known to man have their addition operation commutative. The definition of ring tries to capture that.

• (The fact that this follows from the other axioms is very minor and unimportant) Dec 16, 2013 at 17:55
• (See here) Dec 16, 2013 at 17:57
• Dear Mariano, Or here! Cheers, Dec 17, 2013 at 2:23
• @Mariano: This is really the best possible answer. It seems to me that nowadays abstract notions are not introduced via a couple of important and motivating examples, which is very regrettable. Instead, students learn them and play around with them, instead of really DOING something with them. Dec 23, 2013 at 15:48
• @MartinBrandenburg, I could not agree more. The periodic downvotes on this answer reflect that situation :-) Mar 9, 2014 at 20:55

There is another answer from a category-theoretic perspective. Recall the notion of a monoid object in a monoidal category. It is natural to study them, they appear in many situations, and many "theorems" about monoids, rings, topological rings or algebra in general etc. are actually special cases of abstract nonsense with monoid objects in (nice) monoidal categories.
• Monoids = monoid objects in $(\mathsf{Set},\times)$
• Monoids with zero = monoid objects in $(\mathsf{Set}_*,\wedge)$
• $H$-spaces = monoid objects in $(\mathsf{hTop}_*,\wedge)$
• Semirings = monoid objects in $(\mathsf{CMon},\otimes)$
• Rings = monoid objects in $(\mathsf{Ab},\otimes)$
• Rings with noncommutative addition = monoid objects in ... ???

Well, it is tempting to take $(\mathsf{Grp},\otimes)$ here, but what should $\otimes$ be? Although there are various variants of tensor products of groups, none of them makes $\mathsf{Grp}$ a monoidal category. Therefore, rings with a noncommutative addition fall out of the general picture. This doesn't necessarily imply that they are uninteresting, but rather that their theory is more exotic.

As already mentioned in the other answers, a near-ring is a "ring" with a noncommutative addition and only the one-sided distributive law $(x+y)z=xz+yz$. These may be interpreted as groups $G$ equipped with an associative map $G \otimes G \to G$, where $G \otimes G$ is defined to be the free group generated by symbols $x \otimes y$ subject to the (additively written) relations $(x+y) \otimes z = x \otimes z + y \otimes z$. This seems to be just the coproduct (aka free product) of $|G|$ copies of $G$, where $x \otimes z$ belongs to the copy indexed by $z$. A similar definition should work for arbitrary algebraic categories.

It's not just for convenience and simplicity; it's actually totally fundamental. The problem with replacing abelian groups with general groups (or general monoids, for that matter) is that neither of these is the category of models of a commutative algebraic theory. Compared to this, the fact that commutativity follows from the other ring axioms pales to insignificance. Let me explain. First, a problem.

Exercise 1. Let $X$ and $Y$ denote commutative semigroups written additively, and let $f,g : X \rightarrow Y$ denote homomorphisms. Show that $f+g$ is a homomorphism.
I highly recommend you solve the above (fairly easy) exercise before going on. In doing so, you'll have noticed that both associativity and commutativity were used in the proof. However, with a bit of thought, we see that they weren't really fundamental to the proof. Basically, we have the expression $$(f(x)+f(x'))+(g(x)+g(x'))$$ and we use associativity to drop the brackets, obtaining $$f(x)+f(x')+g(x)+g(x')$$ then we switch the two inner terms, obtaining $$f(x)+g(x)+f(x')+g(x')$$ and then we put the brackets back in: $$(f(x)+g(x))+(f(x')+g(x'))$$ Thus, our proof ultimately comes down to the fact that $Y$ satisfies the following identity. $$(a+b)+(c+d) \equiv (a+c)+(b+d)$$

So we get an idea: call magmas satisfying the above identity medial, prove the result for medial magmas rather than commutative semigroups, and then simply observe that every commutative semigroup is in fact a medial magma. Of course there might be a niggling doubt in the back of our minds; is the mediality condition actually a "natural" thing to consider? It looks just like a warped form of commutativity. The answer is: "Yes, it is a natural property to consider," but the justification for this claim will have to wait for later. So, let's temporarily adopt the position that mediality is interesting and natural, and let's take some time to consider its consequences.

Exercise 2. Verify that every commutative semigroup is indeed a medial magma (I suggest using additive notation). Conversely, prove that every medial magma with a two-sided identity element is both associative and commutative.

The above exercise raises the question: do there exist magmas that are medial, but which aren't commutative semigroups? The answer is a resounding "yes!"

Example 1. If $(X,+)$ is an abelian group, then $(X,-)$ is a medial magma that is neither commutative, nor associative.

Example 2.
If $I \subseteq \mathbb{R}$ and $f : I^2 \rightarrow I$ is defined by $f(x,y) = (x+y)/2$, then $(I,f)$ is a medial magma that is commutative, but not associative.

The best way to verify Example 1 is with the result of the following exercise; Example 1 is then obtained as the special case in which $*$ is taken as addition, $f$ denotes the identity function on $X,$ and $g$ represents unary negation.

Exercise 3. Let $(X,*)$ denote a medial magma, and suppose $f,g : X \rightarrow X$ are commuting endomorphisms, in the sense that $f \circ g = g \circ f$. Defining $x \diamond y = f(x) * g(y)$, show that $(X,\diamond)$ is also medial.

Finally, since we've come this far, let's check for a moment that we're not fooling ourselves, by verifying the result that motivated our interest in mediality in the first place:

Exercise 4. Let $X$ and $Y$ denote medial magmas written with an asterisk, and let $f,g : X \rightarrow Y$ denote homomorphisms. Verify that $f*g$ is a homomorphism.

So to recap, if $f$ and $g$ are parallel homomorphisms in the category of medial magmas, then so too is $f*g$. But, is this really a big deal? I think this is a huge deal; here's one reason why. Let $(X,*)$ denote a medial magma and suppose $A$ is the set of all endomorphisms of $X$. Then we observe that $A$ has the following properties.

1. $(A,*)$ is a medial magma, where $*$ on $A$ is defined pointwise from $*$ on $X$.
2. $(A,\circ)$ is a monoid
3. $\circ$ distributes on both sides over $*$

Thus, $A$ is kind of like a ring. This motivates the following definition: a ringlike-entity appropriate for medial magmas is a triple $(A,*,\circ)$ satisfying the aforementioned axioms. Now suppose $A$ is a ringlike-entity appropriate for medial magmas and that $X$ is a medial magma.
I leave it as an exercise for the reader to define the notion of "multiplication of a scalar in $A$ with a vector in $X$"; that is, a function $f : A \times X \rightarrow X$ that satisfies all the reasonable compatibility conditions. As inspiration: let $X$ denote an arbitrary medial magma, suppose $A$ is the set of all endomorphisms of $X$, and think of $f : A \times X \rightarrow X$ as the evaluation function. Therefore, since we have a notion of scalar multiplication, for any ringlike-entity appropriate for medial magmas $A$, we have a notion of $A$-module.

Let's look at the big picture for a moment. Commutative algebra (think: abelian groups, rings, modules and algebras) is built on the theory of "abelian groups." However, what I'm trying to say here is that we can replace this theory with others (e.g. the theory of commutative semigroups, or medial magmas) in order to obtain different versions of commutative algebra, each with their own version of "ring", "module" etc. Most generally, I think, we can replace the theory of "abelian groups" with any commutative algebraic theory. Examples of such theories include:

• The theory of abelian groups, with addition and unary negation.
• As above, except with addition and binary subtraction this time.
• The theory of medial magmas.
• The theory of commutative semigroups.

Note that a theory with a single binary operation $*$ is a commutative algebraic theory if and only if it proves that $*$ is medial; that is why I argued earlier that mediality is a natural condition to consider. Furthermore, not every group is medial, and not every monoid is medial, and not every semigroup is medial, so it wouldn't make sense to try to build commutative algebra over these structures.

Anyway, I have much, MUCH more to say about this stuff, some of it quite abstract, but I don't want to give away too much of what I've been toying with, since it might be publishable someday. In conclusion though, it's not just for convenience and simplicity.
For everything to work, we have to build commutative algebra over some base commutative algebraic theory. • Hey why the downvote? Dec 23, 2013 at 17:20
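The medial identity $(a*b)*(c*d)=(a*c)*(b*d)$ and the two examples above (subtraction on an abelian group, and averaging on an interval) are easy to sanity-check numerically. A small Python sketch (the helper names are mine, purely illustrative):

```python
from itertools import product

def is_medial(op, elems):
    """Check (a*b)*(c*d) == (a*c)*(b*d) on all 4-tuples of sample elements."""
    return all(op(op(a, b), op(c, d)) == op(op(a, c), op(b, d))
               for a, b, c, d in product(elems, repeat=4))

sub = lambda x, y: x - y          # Example 1: (Z, -)
avg = lambda x, y: (x + y) / 2    # Example 2: averaging on an interval

ints = range(-3, 4)
print(is_medial(sub, ints))                    # True: subtraction is medial
print(sub(1, 2) == sub(2, 1))                  # False: not commutative
print(sub(sub(1, 2), 3) == sub(1, sub(2, 3)))  # False: not associative

vals = [0.0, 0.5, 1.0, 2.0]                    # dyadic values, so / 2 is exact
print(is_medial(avg, vals))                    # True: averaging is medial
print(avg(avg(1, 2), 3) == avg(1, avg(2, 3)))  # False: not associative
```

Of course a finite check over sample points is no proof, but it catches typos in the identity instantly; the actual proofs are Exercises 2 and 3.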
http://www.jku.at/content/e263/e16099/e16086/e173791/?view=LECD&v_id=8235
# Research units

Talk at a conference (not refereed)

## A remark on the composition of polynomial functions over algebraically closed fields

### Details

Abstract: In 1969, M. D. Fried and R. E. MacRae proved that for univariate polynomials $p, q, f, g \in \mathbb{K}[t]$ ($\mathbb{K}$ a field) with $p, q$ nonconstant, $p(x)-q(y)$ divides $f(x)-g(y)$ in $\mathbb{K}[x,y]$ if and only if there is $h \in \mathbb{K}[t]$ such that $f=h(p(t))$ and $g=h(q(t))$. In 1995, F. Binder and the author provided short algebraic proofs of this theorem, and J. Schicho gave a proof from the viewpoint of category theory, thereby providing several generalizations to multivariate polynomials. In this talk, we give an algebraic proof of one of these generalizations.

The theorem by Fried and MacRae yields a way to prove the following fact for nonconstant functions $f, g$ from $\mathbb{C}$ to $\mathbb{C}$: if both the composition $f \circ g$ and $g$ are polynomial functions, then $f$ has to be a polynomial function as well. We give an algebraic proof of this fact and present a generalization to multivariate polynomials over algebraically closed fields. As an application, one obtains a generalization of a result by L. Carlitz from 1963 that describes those univariate polynomials over finite fields that induce injective functions on all of their extensions. Part of this research is joint work with S. Steinerberger (Bonn, Germany).

Conference title: AAA81 - 81. Arbeitstagung Allgemeine Algebra
Date of talk: 05.02.2011
Web: http://dmg.tuwien.ac.at/aaa81/ (conference homepage)
Country: Austria
Location: University of Salzburg

### Participants

Speakers: Assoz.Univprof. DI Dr.
Erhard Aichinger

Research units of the JKU:
Research fields: 1102 Algebra | 1107 Geometry | 1119 Number theory | 1131 Computer algebra
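The divisibility in the Fried–MacRae theorem can be illustrated on a tiny instance. Taking $p = q = t^2$ and $h = t^2$, so that $f = h(p(t)) = t^4$ and $g = h(q(t)) = t^4$, the theorem predicts $x^2 - y^2 \mid x^4 - y^4$ in $\mathbb{K}[x,y]$. The Python sketch below (my own helper, representing bivariate polynomials as coefficient dicts) multiplies out the claimed factorization $x^4 - y^4 = (x^2 - y^2)(x^2 + y^2)$:

```python
from collections import defaultdict

def poly_mul(a, b):
    """Multiply bivariate polynomials given as {(deg_x, deg_y): coeff} dicts."""
    out = defaultdict(int)
    for (i, j), ca in a.items():
        for (k, l), cb in b.items():
            out[(i + k, j + l)] += ca * cb
    return {m: c for m, c in out.items() if c != 0}

divisor    = {(2, 0): 1, (0, 2): -1}  # p(x) - q(y) = x^2 - y^2
cofactor   = {(2, 0): 1, (0, 2): 1}   # x^2 + y^2
difference = {(4, 0): 1, (0, 4): -1}  # f(x) - g(y) = x^4 - y^4

print(poly_mul(divisor, cofactor) == difference)  # True
```

The mixed terms $x^2 y^2$ cancel in the product, leaving exactly $x^4 - y^4$, as the theorem requires.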
http://opendsa.readthedocs.io/en/latest/QBankUserManual.html
# 18. QBank - Users Manual

## 18.1. Introduction

QBank is a web application that assists Problem Authoring and Publishing. At the present time, this is an experimental system, and is not actively being used to create OpenDSA exercise content. But its goal is to replace a lot of the current programming required to develop our OpenDSA exercises. The user interface is meant to be intuitive and easy to understand. This document will give you a feel for the overall capabilities and functionality of the QBank tool.

The key features of the QBank tool are:

- Easy interfaces for Problem authoring based on the main Problem types:
  - Static question – Multiple Choice Question
  - Dynamic question – Parameterised Question
  - Summative question – Multi-Part Question
  - Tool Specific question – Khan Academy Exercise Question
- Problem publishing - Parsing options to convert authored questions to different formats:
  - Comma Separated Format - csv
  - Khan Academy Exercise Format
- Standard Authoring Interfaces based on the Formal Problem Definition for different Problem Types.

## 18.2. Key Terms

### 18.2.1. Problem Definition

The following lists the essential components of a Problem.

- Problem Statement: Includes a function that generates a Problem Instance.
- User Interface: A mechanism that a user interacts with to create a Student Answer.
- Model Answer Generator: A function that takes a Problem Instance and generates a Model Answer.
- Answer Evaluator: A function that compares the Student Answer to the Model Answer to determine whether the Student Answer is correct or not.
- Variables: These carry information from the Problem Statement to the Model Answer Generator.

### 18.2.2. Problem Types

1. Multiple Choice Question
2. Parameterized Question
   - Variable that takes a List of values
   - Variable with values that vary over a Range
3. Multi-Part Question
4. Tool Specific Problem Authoring – Khan Academy Exercise Format

## 18.3. Write a Problem

### 18.3.1. Overview

The QBank editing interface consists of text boxes and buttons that are self explanatory. The text boxes also accept HTML and JavaScript when appropriate. The What's this? button gives helpful indicators on the purpose of different text boxes and what different parameters can be added to make the Problem powerful. It also tells you some functions that can be used to make more effective questions.

### 18.3.2. Problem

#### 18.3.2.1. File Name

A unique identifier that is used to:

1. Store a problem in the database.
2. Refer to a problem in different parsed formats.
3. Refer to while creating a summative problem written from previously authored problems.

#### 18.3.2.2. Difficulty

It just classifies problems as easy, medium or hard. This information can be used by a Smart tutor which bases the next question posed to the user on the correctness of the previous question. If correct, a question with a higher difficulty is posed, or vice-versa.

### 18.3.3. Variables

This allows for generation of different problem instances based on a static Problem Template with variables that take on different specified values. Variables are used by specifying the variable name within `<var>...</var>` delimiters.

Variable Name is an ID for the var, as that'll be the name that you'll refer to it by in the future.

Variable Value: the values that the variable can take are specified here. This can be as simple as comma separated values, or functions that can be accepted by the publishing tool / parsed into a compatible format.
For example:

    <!-- Numbers from 1-5 -->
    Variable Name : A
    Variable Value : "1", "2", "3", "4", "5"

Another example: to make a variable named SPEED1 that is a number from 11 to 20 you would do:

    Variable Name : SPEED1
    Variable Value : randRange(11,20)

The content of a `<var>...</var>` block is executed as JavaScript, with access to all the properties and methods provided by the JavaScript Math object, as well as those defined in the modules/scripts you included:

    <!-- Random number -10 to -1, 1 to 10. -->
    Variable Name : A
    Variable Value : (random() > 0.5 ? -1 : 1)*(rand(9) + 1)

Most mathematical problems that you generate will have some bit of randomness to them (in order to make an interesting, not-identical, problem).

#### 18.3.3.1. Variable Reference

Use `<var>...</var>` delimiters to refer to predefined variables while defining other variables, or within other components of the problem. For example, in the following we define two variables (AVG and TIME) and then multiply them together and store them in a third variable (DIST).

    <!-- Defining a variable using predefined variables. -->
    Variable Name : AVG
    Variable Value : 31 + rand(9)
    Variable Name : TIME
    Variable Value : 1 + rand(9)
    Variable Name : DIST
    Variable Value : AVG * TIME

### 18.3.4. Solution

The solution consists of the answer.

#### 18.3.4.1. Answer

The answer can be any of the following:

1. A valid choice
2. A function that is the calculation of a question (with specified values for variables)
3. A previously defined variable.

For example:

    Answer : <var>round(DIST1)</var>

### 18.3.5. Choices

This can include text which acts as distractors for the user. The choices can also use the previously defined variables.

### 18.3.6. Hints

These are textual suggestions to help the user figure out the correct answer. The hints can also use the previously defined variables. The hints are optional.

### 18.3.7. Scripts

The author can add scripts written in JavaScript to add different functionality to the question. This can add extra interactivity to the exercise. This is optional as well.

### 18.3.8. Common Introduction

The problem overview/introduction is defined for a Summative Problem. This is useful since the problems combined together can have some information that isn't explicitly part of the statement of the question. For example, a Physics problem may describe the situation and the various objects in the world before asking about a certain quality of a certain object.

### 18.3.9. Problem Name

This part of the Problem Template is defined for Summative Problems. The Problem Name is the file name of previously authored questions that can be grouped together. The "question" in a summative problem is just a file name.

### 18.3.10. Available functions and Tips

#### 18.3.10.1. Generating Random Numbers

You can use random(), or one of the following methods defined in the math.js module (which should be included in all exercises):

1. randRange( min, max ) - Get a random integer in [min, max].
2. randRange( min, max, count ) - Get a random integer between min and max, inclusive. If count is specified, will return an array of random integers in the range.
3. randRangeUnique( min, max, count ) - Get an array of unique random numbers between min and max, inclusive.
4. randRangeExclude( min, max, excludes ) - Get a random integer between min and max, inclusive, that is never any of the values in the excludes array.
5. randRangeNonZero( min, max ) - Get a random integer between min and max that is never zero.
6. randFromArray( arr ) - Get a random member of arr.

## 18.4. Problem Type Specifics

### 18.4.1. Parameterized Question – List

Used to write a question with variables that take values that need to be specified explicitly.

Variables: The values are specified within double-quotes and are comma-separated.
Show/Hide Variable Combination: This shows the various combinations of values that the variables can take, and indicates at which position in the answer array the answer for a particular combination has to be stored.

Solution: The answer takes a one-dimensional array, where each index corresponds to a row of the table that shows the Variable Combinations.

### 18.4.2. Parameterized Question – Range

This type of question is for math problems where there are calculations involved as the solution.

Variables: The values the variables take are between a range of values that can be specified using the JavaScript randRange(min,max) function.

Solution: The answer is the exact calculation that is specified in the Problem Template. For example, A + B - C is enough to specify the answer. The author doesn't need to explicitly make the calculation and write the correct answer. This ensures the validity of the answer and removes the risk of specifying an incorrect answer for the solution. Another important feature of this type of problem is that the author doesn't need to explicitly provide choices, since the user is expected to fill the answer in the blank provided. This is very effective for math problems.

### 18.4.3. Multi-Part Question

It is comprised of different previously authored questions from the Repository. You can combine different types of questions (free-form and multiple-choice simple questions, and matching questions) in your multipart question.

Common Introduction: The problems share a common introduction which generally contains information that is common to the questions.

Problem Name: Specify the exact identifier for a Problem. You can browse the Problems in the repository by clicking the Show button. You can then click Add and the Problem Name gets added.

### 18.4.4. Tool Specific Question – Khan Academy Exercise

This type of problem supports inputs that can be handled by Khan Academy.
Also, the What's this button gives a lot of direction in assisting an author while authoring a problem.

## 18.5. Publishing a Problem

The current version of the QBank supports two main publishing formats.

### 18.5.1. CSV format

Every authored exercise can be exported as a comma-separated file.

### 18.5.2. Khan Academy Exercise Format

The exercises can also be exported in a format fully compatible with Khan Academy.

## 18.6. Search for a Problem

This allows the author to browse previously written problems. The author can:

1. Edit the problem.
2. Download the problem in CSV.
3. Parse the problem into Khan Academy Exercise Format.
4. View the problem in Khan Academy.
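The math.js random helpers described in section 18.3.10.1 behave roughly like the following JavaScript sketch. This is an illustrative reimplementation written for this manual, not the actual Khan Academy math.js source:

```javascript
// Illustrative reimplementations of the math.js random helpers.
function randRange(min, max, count) {
  const one = () => min + Math.floor(Math.random() * (max - min + 1));
  if (count === undefined) return one();        // single integer in [min, max]
  return Array.from({ length: count }, one);    // array of `count` integers
}

function randRangeExclude(min, max, excludes) {
  let n;
  do { n = randRange(min, max); } while (excludes.includes(n));
  return n;
}

function randRangeNonZero(min, max) {
  return randRangeExclude(min, max, [0]);
}

function randFromArray(arr) {
  return arr[randRange(0, arr.length - 1)];
}

// e.g. a SPEED1 variable defined as randRange(11, 20) always lands in [11, 20]
console.log(randRange(11, 20));
```

For instance, this makes it clear why `randRange(11, 20)` in a Variable Value box is inclusive on both ends.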
https://codeahoy.com/learn/golangsecurity/ch13/
# File Management

The first precaution to take when handling files is to make sure the users are not allowed to directly supply data to any dynamic functions. In languages like PHP, passing user data to dynamic include functions is a serious security risk. Go is a compiled language, which means there are no include functions, and libraries aren't usually loaded dynamically [1].

File uploads should only be permitted from authenticated users. After guaranteeing that file uploads are only made by authenticated users, another important aspect of security is to make sure that only acceptable file types can be uploaded to the server (whitelisting). This check can be made using the following Go function that detects MIME types:

```go
func DetectContentType(data []byte) string
```

Below you find the relevant parts of a simple program that reads a file and computes its file type:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	file, err := os.Open("./img.png")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer file.Close()

	// DetectContentType only considers the first 512 bytes, see
	// http://golang.org/pkg/net/http/#DetectContentType
	buff := make([]byte, 512)
	if _, err := file.Read(buff); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	filetype := http.DetectContentType(buff)
	switch filetype {
	case "image/jpeg", "image/jpg", "image/gif", "image/png":
		fmt.Println("accepted:", filetype)
	default:
		fmt.Println("rejected:", filetype)
	}
}
```

Files uploaded by users should not be stored in the web context of the application. Instead, files should be stored in a content server or in a database. It is important that the selected file upload destination does not have execution privileges. If the file server that hosts user uploads is *NIX based, make sure to implement safety mechanisms like a chrooted environment, or mounting the target file directory as a logical drive. Again, since Go is a compiled language, the usual risk of uploading files that contain malicious code that can be interpreted on the server-side is non-existent.
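When storing uploads, it also helps to make sure a user-supplied file name cannot escape the upload directory via `../` segments. A minimal sketch of such a check — the helper name, base path, and layout here are my own, not from a particular library:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin resolves name inside base and rejects any path that would
// end up outside of base (e.g. via "../" segments).
func safeJoin(base, name string) (string, error) {
	p := filepath.Join(base, name) // Join also runs filepath.Clean
	if !strings.HasPrefix(p, filepath.Clean(base)+string(filepath.Separator)) {
		return "", fmt.Errorf("unsafe upload path: %q", name)
	}
	return p, nil
}

func main() {
	ok, _ := safeJoin("/srv/uploads", "img.png")
	fmt.Println(ok) // /srv/uploads/img.png

	_, err := safeJoin("/srv/uploads", "../etc/passwd")
	fmt.Println(err) // unsafe upload path: "../etc/passwd"
}
```

The prefix comparison runs after `filepath.Join` has cleaned the path, so `"../etc/passwd"` resolves to `/srv/etc/passwd` and fails the check rather than silently escaping the upload directory.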
In the case of dynamic redirects, user data should not be passed. If it is required by your application, additional steps must be taken to keep the application safe. These checks include accepting only properly validated data and relative path URLs. Additionally, when passing data into dynamic redirects, it is important to make sure that directory and file paths are mapped to indexes of pre-defined lists of paths, and to use these indexes. Never send the absolute file path to the user; always use relative paths. Set the server permissions regarding the application files and resources to read-only, and when a file is uploaded, scan the file for viruses and malware.

1. Go 1.8 does allow dynamic loading now, via the new plugin mechanism. If your application uses this mechanism, you should take precautions against user-supplied input.
https://mrchasemath.com/2012/09/27/its-all-fun-and-games-until-someone-loses-an-i/
# It’s all fun and games until someone loses an i How’s your Thursday going? Keeping it real? And by real, of course, I mean somewhere in the $0$ or $\pi$ direction in the complex plane. I’ve been teaching about complex numbers in my Algebra 2 class, so I thought I’d share this groaner with you (HT: Doug McDonald).
https://cracku.in/rq-rrb-ntpc-quant-test-47
## RRB NTPC Quant Test 47 Instructions For the following questions answer them individually Q 1 If $$+$$ means $$\div$$, $$-$$ means $$+$$, $$\times$$ means $$-$$ and $$\div$$ means $$\times$$ then find the value of $$27 \div 15 - 36 + 6$$? Q 2 An assertion (A) and a reason (R) are given below. Assertion (A): Forest cover in the country has gradually decreased. Reason (R): Encroachment by humans is one of the concerns for the forest department. Choose the correct option. Q 3 Select the alternative that shows a similar relationship as the given pair Mandatory : Compulsory Q 4 Given below is a statement followed by two assumptions numbered I and II. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement. Statement: "Mobile addiction is a psychological disorder," a psychologist tells his client. Assumptions: I. A psychologist deals with mental disorders. II. Surgery is the best option for any disorder. Q 5 The pattern in which of the following options most closely resembles the pattern in the given image?
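For Q1, the operator substitution can be carried out mechanically. Assuming the usual operator precedence applies after substitution, a small Python sketch (my own, not part of the test material) applies the given mapping and evaluates the result:

```python
# Q1 mapping: '+' means '/', '-' means '+', 'x' means '-', '/' means 'x'
expr = "27 / 15 - 36 + 6"
mapping = {"+": "/", "-": "+", "*": "-", "/": "*"}

translated = "".join(mapping.get(ch, ch) for ch in expr)
print(translated)        # 27 * 15 + 36 / 6
print(eval(translated))  # 411.0
```

With standard precedence, 27 × 15 = 405 and 36 ÷ 6 = 6, giving 411.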
https://www.amdainternational.com/3vv8wv/elasticity-physics-notes-c26795
Those bodies which regain their original configuration immediately and completely after the removal of the deforming force are called perfectly elastic bodies, e.g. quartz and phosphor bronze. The temporary delay in regaining the original configuration after the removal of the deforming force is called the elastic after-effect. Elasticity is the branch of physics which studies the properties of elastic materials: how a deforming force applied to an elastic object produces stress and strain. Potential energy of a stretched wire: U = average force * increase in length = 1/2 * stress * strain * volume of the wire. Elastic potential energy of a stretched spring = 1/2 kx^2.
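The two energy expressions above agree: (1/2)(stress)(strain)(volume) is the same number as (1/2)(force)(extension). A quick numerical cross-check, using made-up wire dimensions and load purely for illustration:

```python
import math

# Hypothetical steel wire: length 2 m, radius 1 mm, loaded with 100 N.
Y = 2.0e11                    # Young's modulus of steel, Pa (typical textbook value)
L, r, F = 2.0, 1.0e-3, 100.0
A = math.pi * r**2            # cross-sectional area
stress = F / A
strain = stress / Y           # Hooke's law, within the elastic limit
dL = strain * L               # extension

U_formula = 0.5 * stress * strain * (A * L)   # 1/2 * stress * strain * volume
U_work = 0.5 * F * dL                          # 1/2 * force * extension
assert math.isclose(U_formula, U_work)
print(f"extension = {dL*1e3:.3f} mm, stored energy = {U_formula:.4f} J")
```

Substituting stress = F/A shows the two forms are algebraically identical, which the assertion confirms numerically.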
Note: The fourth state of matter, in which the medium is in the form of positive and negative ions, is known as plasma. Plasma occurs in the atmospheres of stars (including the Sun) and in discharge tubes.

When a deforming force is applied at the free end of a suspended wire of length l and radius R, its length increases by dl but its radius decreases by dR, so two types of strain are produced by a single force. If there is an increase in length, the stress is called tensile stress.

(iii) Shearing strain = angular displacement of the plane perpendicular to the fixed surface.

(iv) 9 / Y = 1 / K + 3 / η, i.e. Y = 9Kη / (η + 3K).

The materials for which the strain produced is much larger than the stress applied, within the limit of elasticity, are called elastomers, e.g. rubber and the elastic tissue of the aorta, the large vessel carrying blood from the heart.

In economics, elasticity of demand is the degree of responsiveness of demand to a change in price. In physics and materials science, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed.
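The relation in (iv) lets Y be computed from K and η. A quick numerical check — the moduli below are rough textbook values for steel, used only as an illustration:

```python
def young_from_bulk_shear(K, eta):
    """Y from 9/Y = 1/K + 3/eta, i.e. Y = 9*K*eta / (3*K + eta)."""
    return 9.0 * K * eta / (3.0 * K + eta)

# Rough values for steel (Pa): bulk modulus K ~ 1.6e11, rigidity eta ~ 8.0e10
Y = young_from_bulk_shear(1.6e11, 8.0e10)
print(f"Y = {Y:.3e} Pa")   # on the order of 2e11 Pa, as expected for steel
```

The output lands near the familiar 2 × 10¹¹ Pa for steel, a useful sanity check on the formula.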
Those bodies which do not regain their original configuration at all on the removal of the deforming force are called perfectly plastic bodies, e.g. putty, paraffin and wax.

A beam clamped at one end and loaded at the free end is called a cantilever. The depression at the free end of a cantilever depends on Y, the Young's modulus of elasticity of the material, and IG, the geometrical moment of inertia of the beam's cross-section.

Strain = change in configuration / original configuration.

Perfectly elastic demand (economics): a situation in which demand is infinite at the prevailing price. Economics also asks how elasticity affects the incidence of a tax, and who bears its burden.

The coefficient of elasticity depends upon the material, its temperature and purity, but not on stress or strain. The elastic modulus, or Young's modulus, is defined as the ratio of stress to strain.

The materials which show a very small plastic range beyond the elastic limit are called brittle materials, e.g. glass and cast iron. The maximum value of the deforming force for which elasticity is present in the body is called its limit of elasticity. Elasticity is a physical property of a material whereby the material returns to its original shape after having been stretched out or otherwise altered by force. An example question: a van fails to stop, hits a stationary car, and the two move forwards together — is this an elastic or an inelastic collision?
When the temperature of a rod fixed at both ends is changed, the stress produced is called thermal stress.

Elastic limit: the upper limit of the deforming force up to which, if the deforming force is removed, the body regains its original form completely, and beyond which, if the deforming force is increased, the body loses its property of elasticity and becomes permanently deformed. On the stress–strain curve, from J to K the material flowed like a fluid; such behaviour is called plastic flow.

Stress has the unit N/m² (pascal) and the dimensional formula [ML⁻¹T⁻²]; the elastic modulus has the same physical unit as stress. The modulus of rigidity is defined as the ratio of tangential stress to shearing strain within the elastic limit. Compressibility of a material is the reciprocal of its bulk modulus of elasticity. Young's modulus (Y) and the modulus of rigidity (η) are possessed by solid materials only.

Plastic bodies: bodies which do not show a tendency to recover their original configuration on the removal of the deforming force. Ductile materials are used for making springs and sheets. Other topics covered in these notes: practical applications of elasticity, the interatomic force constant, and elastic hysteresis.
There are three types of modulus of elasticity: Young's modulus, shear modulus and bulk modulus. Elasticity is the property by virtue of which a body tends to recover its original configuration (shape and size) on the removal of the deforming forces; a body with this ability is said to behave (or respond) elastically. Young's modulus is defined as the ratio of normal stress to longitudinal strain within the elastic limit, so the modulus of elasticity is simply the ratio of stress to strain. Compressibility has SI unit N⁻¹m² and CGS unit dyne⁻¹cm². Steel is more elastic than rubber. The change in the shape or size of a body when external forces act on it is determined by the forces between its atoms or molecules. The minimum value of stress required to break a wire is called the breaking stress, and Safety factor = breaking stress / working stress.

In economics, elasticity is a measure of a variable's sensitivity to a change in another variable, most commonly the change in quantity demanded relative to a change in price.

Reference reproduced in these notes: Theory of Elasticity — Exam Problems and Answers, Lecture CT5141 (previously B16), Delft University of Technology, Faculty of Civil Engineering and Geosciences, Structural Mechanics Section, Dr.ir. P.C.J. Hoogenboom.
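Young's modulus as (normal stress)/(longitudinal strain) turns directly into a small calculation. The measurement values below are invented purely for illustration:

```python
import math

def youngs_modulus(force, radius, length, extension):
    """Y = (F/A) / (dL/L) for a wire of circular cross-section."""
    area = math.pi * radius**2
    stress = force / area          # N/m^2 (Pa)
    strain = extension / length    # dimensionless
    return stress / strain

# Hypothetical experiment: a 50 N load on a 1.5 m wire of radius 0.4 mm
# produces a 0.75 mm extension.
Y = youngs_modulus(50.0, 0.4e-3, 1.5, 0.75e-3)
print(f"Y = {Y:.3e} Pa")
```

With these numbers the result comes out near 2 × 10¹¹ Pa, the textbook order of magnitude for a steel wire.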
A perfectly elastic demand is a situation where the slightest rise in price causes the quantity demanded of the commodity to fall to zero; in this case the elasticity of demand is infinite. Defining and measuring elasticity: the price elasticity of demand is the ratio of the percent change in the quantity demanded to the percent change in the price as we move along the demand curve. When demand is very elastic the demand curve is almost flat: if the price changes from $0.75 to $1, the quantity demanded decreases by a lot.

If there is a decrease in length, the stress is called compression stress. When the temperature of a gas enclosed in a vessel is changed, the thermal stress produced is equal to the change in pressure (Δp) of the gas. The ratio of the adiabatic to the isothermal elasticity of a gas is γ = Cp/Cv.

The fractional change in configuration is called strain. According to the change in configuration, the strain is of three types: (1) longitudinal strain = change in length / original length; (2) volumetric strain = change in volume / original volume. Bulk modulus is defined as the ratio of normal stress to the volumetric strain within the elastic limit.

Torsion of a cylinder: with η the modulus of rigidity of the material of the cylinder, work is done in twisting the cylinder through an angle θ, and the angle of twist (θ) and the angle of shear (φ) are related by rθ = lφ, i.e. φ = rθ / l. For a spring, k is the force constant and x the change in length.

The time delay in restoring the original configuration after removal of the deforming force is called the elastic relaxation time.
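The $0.75 → $1 example can be quantified with the midpoint (arc) method, in which each percent change is taken relative to the average of the two endpoints. The quantity numbers below are invented for illustration:

```python
def midpoint_elasticity(q1, q2, p1, p2):
    """Price elasticity of demand by the midpoint (arc) method:
    percent changes are measured relative to the endpoint averages."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

# Hypothetical: price rises from $0.75 to $1.00, quantity falls 1000 -> 400.
e = midpoint_elasticity(1000, 400, 0.75, 1.00)
print(f"elasticity = {e:.2f}")   # magnitude well above 1: demand is elastic
```

The midpoint form has the advantage of giving the same magnitude whichever direction the price moves.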
For liquids, the modulus of rigidity is zero. After a region K to L of partial elastic behaviour on the stress–strain curve, plastic flow continued from L to M.

Recall Hooke's law, first stated formally by Robert Hooke in The True Theory of Elasticity or Springiness (1676). Replacing the word "extension" with the symbol Δx, "force" with the symbol F, and "is directly proportional to" with an equals sign and a constant of proportionality k turns the verbal law into F = kΔx.

Elasticity is the property of solid materials to return to their original shape and size after the forces deforming them have been removed; equivalently, it is the ability of a deformed material body to return to its original shape and size when the forces causing the deformation are removed. The internal restoring force acting per unit area of a deformed body is called stress.
For a beam of rectangular cross-section having breadth b and thickness d, the geometrical moment of inertia is given by the standard expression IG = bd³/12; for a beam of circular cross-section of radius r, IG = πr⁴/4. A beam may also be supported at two ends and loaded at the middle.

The materials which show a large plastic range beyond the elastic limit are called ductile materials, e.g. copper, silver, iron and aluminium. For quartz and phosphor bronze the elastic relaxation time is negligible.

(ii) Tangential stress: if the deforming force is applied tangentially, the stress is called tangential stress.

Elasticity defines a property of an object that has the ability to regain its original shape after being stretched or compressed. The practical value of Poisson's ratio lies between 0 and 0.5. Bungee jumping utilizes a long elastic strap which stretches until it reaches a maximum length proportional to the weight of the jumper; the elasticity of the strap determines the amplitude of the resulting vibrations. Those bodies which regain their original configuration immediately and completely after the removal of the deforming force are called perfectly elastic bodies.

The preface of the Delft lecture book states that it contains the problems and answers of the elasticity-theory exams from June 1997 until January 2003.
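The geometrical moment of inertia above combines with Y to give the end depression of the cantilever described earlier in these notes. The formula image was lost in extraction, so the standard small-deflection result δ = W l³ / (3 Y IG) is assumed here, with illustrative numbers:

```python
def cantilever_depression(W, L, Y, b, d):
    """End deflection of a rectangular cantilever under end load W,
    using the standard result delta = W*L**3 / (3*Y*I), I = b*d**3/12."""
    I = b * d**3 / 12.0            # geometrical moment of inertia
    return W * L**3 / (3.0 * Y * I)

# Hypothetical steel strip: 0.5 m long, 20 mm wide, 3 mm thick, 5 N end load.
delta = cantilever_depression(5.0, 0.5, 2.0e11, 20e-3, 3e-3)
print(f"depression = {delta*1e3:.2f} mm")
```

Note how strongly the thickness matters: I scales with d³, so halving d multiplies the depression by eight.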
The property of an elastic body by virtue of which its behaviour becomes less elastic under the action of a repeated alternating deforming force is called elastic fatigue. In these formulas, E denotes the modulus of elasticity of the material of the body. For the same material, the three coefficients of elasticity Y, η and K have different magnitudes. In Fig. 4, DD is the perfectly elastic demand curve, which is parallel to the OX-axis. Solids are more elastic and gases are least elastic.

Poisson's ratio (σ) = lateral strain / longitudinal strain = (−ΔR/R) / (Δl/l). The theoretical value of Poisson's ratio lies between −1 and 0.5. A material is said to be elastic if it deforms under stress (e.g. external forces) but then returns to its original shape when the stress is removed. A force which produces a change in the configuration of an object on applying it is called a deforming force. Young's modulus (Y) and the modulus of rigidity (η) are possessed by solid materials only. The thermal stress in a rod fixed at both ends is Y α Δθ, where α is the coefficient of linear expansion of the material of the rod and Δθ the temperature change.
The maximum value of the deforming force for which elasticity is present in the body is called its limit of elasticity. Breaking stress is fixed for a material, but the breaking force varies with the area of cross-section of the wire. The work done in stretching a wire is stored in the form of potential energy of the wire.

For a gas, γ = Cp/Cv is the ratio of the specific heats at constant pressure and at constant volume (the coefficient of cubical expansion of the gas is also denoted γ). Solids are more elastic and gases are least elastic.

Elasticity: the property of the body to regain its original configuration (length, volume or shape) when the deforming forces are removed.

(i) Normal stress: if the deforming force is applied normal to the area, the stress is called normal stress.

As the stress was further increased, a point Y, known as the yield point, at which the stress rapidly dropped, was reached.
https://artofproblemsolving.com/wiki/index.php?title=2017_AMC_8_Problems/Problem_19&diff=prev&oldid=98119
# 2017 AMC 8 Problems/Problem 19

## Problem 19

For any positive integer $M$, the notation $M!$ denotes the product of the integers $1$ through $M$. What is the largest integer $n$ for which $5^n$ is a factor of the sum $98!+99!+100!$ ?

$\textbf{(A) }23\qquad\textbf{(B) }24\qquad\textbf{(C) }25\qquad\textbf{(D) }26\qquad\textbf{(E) }27$

## Solution 1

Factoring $98!$ out of $98!+99!+100!$ gives $98!(1+99+99\cdot100)=98!(10{,}000)$. Next, $98!$ has $\left\lfloor\frac{98}{5}\right\rfloor + \left\lfloor\frac{98}{25}\right\rfloor = 19 + 3 = 22$ factors of $5$. Since $10{,}000 = 10^4$ has $4$ factors of $5$, there are a total of $22 + 4 = \boxed{\textbf{(D)}\ 26}$ factors of $5$.

## Solution 2

The number of $5$'s in the factorization of $98! + 99! + 100!$ is the same as the number of trailing zeroes, since factors of $2$ outnumber factors of $5$. The count of factors of $5$ is found by repeatedly dividing by $5$, taking the floor each time, and summing the quotients until they reach zero. Factorizing $98! + 99! + 100!$, we get $98!(1+99+9900)=98!(10000)$. To count the factors of $5$ in $98!$, we compute $\left\lfloor\frac{98}{5}\right\rfloor + \left\lfloor\frac{19}{5}\right\rfloor= 19 + 3=22$. Since $10000$ contributes $4$ more factors of $5$, we add to get $22 + 4 = \boxed{\textbf{(D)}\ 26}$.

-Rekt4
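Both solutions amount to Legendre's formula for the exponent of a prime in a factorial; a short sketch verifies answer (D) directly:

```python
def count_factors_of_5(n):
    """Exponent of 5 in n!, by Legendre's formula: sum of floor(n / 5**k)."""
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

# 98! + 99! + 100! = 98! * (1 + 99 + 99*100) = 98! * 10000
exp_in_98_fact = count_factors_of_5(98)   # 19 + 3 = 22
exp_in_10000 = 4                          # 10000 = 2**4 * 5**4
print(exp_in_98_fact + exp_in_10000)      # 26
```

The same function answers the general "how many trailing zeroes does n! have" question, since zeroes are limited by factors of 5.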
http://supercellshelters.com/plain-food-yya/red-phosphorus-hybridization-7d2298
# red phosphorus hybridization

Posted on Jan 10, 2021 in Uncategorized

In chemistry, hybridisation (or hybridization) is the concept of mixing atomic orbitals into new hybrid orbitals suitable for the pairing of electrons to form chemical bonds in valence bond theory. Hybrid orbitals are very useful in the explanation of molecular geometry and atomic bonding properties. Each hybrid orbital is oriented primarily in just one direction. Hybridization of an s orbital and a p orbital of the same atom produces two sp hybrid orbitals; the set of two sp orbitals is oriented at 180°, which is consistent with the geometry for two domains. Note that each sp orbital contains one lobe that is significantly larger than the other. The beryllium atom in a gaseous BeCl2 molecule is an example of a central atom with no lone pairs of electrons in a linear arrangement of three atoms: there are two regions of valence electron density in the BeCl2 molecule, corresponding to the two covalent Be–Cl bonds. More generally, if 2, 3, or 4 sigma bonds are attached to an atom, the hybridization state of that atom is considered sp, sp2, or sp3 respectively. Sigma bonds are the first bonds to be made between two atoms and are made from hybridized orbitals; pi bonds are the second and third bonds to be made. If the four hydrogen atoms in a methane molecule (CH4) were bound to the three 2p orbitals and the 2s orbital of the carbon atom, the H–C–H bond angles would be 90° for three of the hydrogen atoms, which is not what is observed.

What is the hybridization of phosphine? Phosphorus forms three bond pairs and one lone pair; the lone pair orbital is mainly the s orbital. If we look at the chemical compound phosphine, during its formation the pure p orbitals take part in bonding and avoid getting hybridized. Let's move down to phosphorus trifluoride. In this case, there are four electron regions around the central atom, so the hybridization must be sp3; the same counting applies to the phosphonium ion (PH4+). In oxoacids of phosphorus, we see that the phosphorus is tetrahedrally surrounded by other atoms. Phosphorus is one such element that forms a number of oxoacids; a few common oxyacids include H3PO4, H3PO3, etc. Phosphorus has 5 valence electrons: 2 electrons are required for a double bond, and another 3 electrons can form 3 bonds, so the total number of bonds formed = 4 (3 single bonds + 1 double bond).

In a molecule of phosphorus pentachloride, PCl5, there are five P–Cl bonds (thus five pairs of valence electrons around the phosphorus atom) directed toward the corners of a trigonal bipyramid, conventionally labelled sp3d hybridization. The conventional answer is the “expanded octet because d-orbitals” explanation. That explanation is wrong. This past semester I was teaching a general chemistry course and got into an interesting discussion with the instructor who taught the associated lab: she said that she was very uncomfortable teaching students that elements with greater atomic numbers than carbon would form hybrid orbitals. The reason that phosphorus can form “five bonds” and nitrogen only three or four has to do with the size of the two atoms. The size of a phosphorus atom also interferes with its ability to form double bonds to other elements, such as oxygen, nitrogen, and sulfur. The electronegativity of the fluorine atom is 3.98 and that of phosphorus is 2.19; in this article, we will check out the answer to whether PF5 is a polar or nonpolar compound.

Phosphorus has many allotropes, including white phosphorus, red phosphorus and black phosphorus, which have entirely different structures. For a start, white phosphorus is molecular, and red phosphorus is non-molecular. White phosphorus consists of discrete P4 molecules; phosphorus exists as tetrahedral P4 molecules in the liquid and gas phases. The bond angles, necessarily 60°, are highly constrained in the tetrahedron, which results in a low melting point, a low boiling point, and high reactivity. At very high temperatures, P4 dissociates into P2; at approximately 1800 °C, this dissociation reaches 50 per cent. Red phosphorus is more dense (2.16 g/cm3) than white phosphorus (1.82 g/cm3) and is much less reactive at normal temperatures. Red phosphorus may be formed by heating white phosphorus to 250 °C (482 °F) or by exposing white phosphorus to sunlight:

White phosphorus + I2 (or other inert gas), 240 °C → Red phosphorus + 4.22 kcal

For the industrial manufacture of red phosphorus, the white phosphorus is mixed with a small amount of iodine and heated to about 280 °C; thus, the majority of white phosphorus is converted to red phosphorus. Phosphorus after this treatment exists as an amorphous network of atoms, which reduces strain and gives greater stability; further heating results in the red phosphorus becoming crystalline. Red phosphorus has a distinct monoclinic structure (space group P2/c) made of phosphorus clusters, known as the P8 and P9 building units. The blocks are linked together by phosphorus pairs (P2) in an alternating P8/P9 fashion to form tubes with a pentagonal cross section. To build bonds with other atoms, each phosphorus atom goes through hybridization and forms sp3 hybridized orbitals. While phosphorus is burning, a white smoke is produced that is actually a finely divided solid that is collected; by-products include red phosphorus suboxide.

Phosphorus tribromide is a colourless liquid with the formula PBr3. The liquid fumes in moist air due to hydrolysis and has a penetrating odour. Phosphorus trioxide reacts with water to form phosphorous acid, reflecting the fact that it is the anhydride of that acid:

P4O6 + 6 H2O → 4 H3PO3

It also reacts with hydrogen chloride to form H3PO3 and phosphorus trichloride.

Phosphorus (P) is one of the most limiting macronutrients for crop productivity, and P deficiency is a common phenomenon in agricultural soils worldwide. (Urea, NH2C(O)NH2, is sometimes used as a source of nitrogen in fertilizers.) The successful isolation of phosphorene (atomic-layer-thick black phosphorus) in 2014 has aroused the interest of 2D-material researchers, and researchers began to select red phosphorus as a raw material for black phosphorus fabrication recently; red phosphorus (RP) is considered a promising candidate as it is a cost-effective and earth-abundant element [23,24]. Earth-abundant red phosphorus was found to exhibit remarkable efficiency to inactivate Escherichia coli K-12 under the full spectrum of visible light and even sunlight. Herein, we report a facile ball-milling method to prepare red phosphorus (RP)-black phosphorus (BP)/expanded graphite (EG) composites toward OER electrocatalysis. Recently, black P/red P heterojunctions have been synthesized by an in-situ mechanical milling method, which show enhanced photocatalytic activity for RhB dye degradation.
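The electron-domain bookkeeping the article keeps circling around can be captured in a few lines (a sketch of the standard VSEPR counting only; the example molecules are textbook values, not taken from this post):

```python
# Steric number (sigma bonds + lone pairs on the central atom)
# mapped to the conventional hybridization label.
HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3", 5: "sp3d", 6: "sp3d2"}

def hybridization(sigma_bonds, lone_pairs):
    """Return the conventional hybridization label for a central atom."""
    return HYBRIDIZATION[sigma_bonds + lone_pairs]

print(hybridization(3, 1))  # PH3 / PF3: four electron regions -> sp3
print(hybridization(5, 0))  # PF5: five regions -> sp3d
print(hybridization(2, 0))  # BeCl2: two regions -> sp
```

Note that this is only the conventional counting; as the article itself stresses, the d-orbital story behind labels like "sp3d" is contested for main-group elements.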
https://leanprover-community.github.io/archive/stream/267928-condensed-mathematics/topic/thm95.2Edouble_complex.html
## Stream: condensed mathematics ### Topic: thm95.double_complex #### Johan Commelin (Mar 18 2021 at 14:44): I just pushed a definition of the double complex that is the main protagonist of the proof of 9.5 #### Johan Commelin (Mar 18 2021 at 14:44): There are still a couple of sorrys left #### Johan Commelin (Mar 18 2021 at 14:44): But the big skeleton is there. #### Johan Commelin (Mar 18 2021 at 14:45): All of this is taking place in the subdirectory thm95/. At some points we might want to give some of these directories/files a bit more canonical names. #### Johan Commelin (Mar 18 2021 at 14:46): The proof strategy is to show inductively that this double complex satisfies normed_spectral_conditions. Once that is done, we can apply normed_spectral (=9.6) and we are done. Sounds easy, right? #### Johan Commelin (Mar 18 2021 at 14:48): There are basically three conditions to check: • row_exact: this one is easy. It comes from the induction hypothesis. I hope it will be a 1-liner, even in Lean. • col_exact: this one is a lot trickier. This is where we need 8.17, 8.19, and 9.2. It relies on a hypercover argument. • a homotopy argument, scattered over 4 or 5 fields of normed_spectral_conditions. Here we need 9.13. #### Johan Commelin (Mar 18 2021 at 15:51): Ooh, I just realized that I didn't produce a system_of_double_complexes but only a cochain_complex ℕ system_of_complexes. 
:see_no_evil: #### Johan Commelin (Mar 18 2021 at 18:29): lemma double_complex.row (i : ℕ) : (double_complex BD c' r r' V Λ M N).row (i+1) = (BD.system c' r V r' (Hom (polyhedral_lattice.conerve.obj (PolyhedralLattice.diagonal_embedding Λ N) (i+1)) M)) := rfl #### Johan Commelin (Mar 18 2021 at 18:29): This lemma confirms that (apart from row 0) all the rows in the double complex are of the form that appears in the statement of theorem 9.5 #### Johan Commelin (Mar 18 2021 at 18:29): So we can apply induction to prove that they are bounded exact #### Johan Commelin (Mar 18 2021 at 18:30): row 0 is also of that form, but for a different lattice. The rows are defined by case distinction #### Johan Commelin (Mar 19 2021 at 20:12): I just pushed an update that contains the global outline of the induction argument for the proof of 9.5. #### Johan Commelin (Mar 19 2021 at 20:13): We need to fill in these sorries: https://github.com/leanprover-community/lean-liquid/blob/master/src/thm95/default.lean#L60 And also the other 37 sorries in the repo (-; #### Johan Commelin (Mar 19 2021 at 20:13): Note that this NSC definition that I link to is still missing some hypotheses... we'll figure those out along the way. #### Johan Commelin (Mar 19 2021 at 20:14): @Peter Scholze I'm gathering all the constants in one file: https://github.com/leanprover-community/lean-liquid/blob/master/src/thm95/constants.lean #### Johan Commelin (Mar 19 2021 at 20:15): Currently a lot of these constants are still missing their definitions. Again, we'll figure them out along the way. 
#### Johan Commelin (Mar 20 2021 at 13:21):

I tried to flag all the deceptive sorrys that currently aren't provable, and added some remarks close to others that should be provable

#### Johan Commelin (Mar 20 2021 at 13:34):

If anyone is looking for new targets for this weekend (but feel free to enjoy the sun, or Milano-San Remo):

• the sorry at the bottom of polyhedral_lattice/cech.lean
• the sorrys in system_of_complexes/rescale.lean
• the 3 sorrys at the bottom of thm95/double_complex.lean

#### Kevin Buzzard (Mar 20 2021 at 17:52):

I'm going to have a go at the rescale.lean stuff in about ten minutes. I'm just mentioning that here in case anyone else has started.

#### Kevin Buzzard (Mar 20 2021 at 17:58):

PS is this GPT AI thing just generally up and running in the repo? What do I type to see what the AI thinks the next line of the proof should be? Has anyone actually found this useful in practice yet?

#### Johan Commelin (Mar 20 2021 at 17:59):

Yes, Patrick (I think) set it up

#### Alex J. Best (Mar 20 2021 at 18:06):

gptf or neuro_eblast are the two main tactics implemented there, I believe

#### Kevin Buzzard (Mar 20 2021 at 18:11):

It's hard to think of a theorem which could withstand a neuro_eblast! I'll give it a go!

#### Kevin Buzzard (Mar 20 2021 at 18:40):

Hmm, it says I don't have an API key. I've just signed up for one following the instructions I found in the lean-gptf stream. I'm assuming that's the right thing to do!

#### Alex J. Best (Mar 20 2021 at 19:16):

Yes, and you can either set it as a path variable or you should be able to pass it in as an argument to the tactic like gptf {api_key := "MY_KEY"}

#### Kevin Buzzard (Mar 20 2021 at 20:13):

OK so I've just pushed a sorry-free system_of_complexes/rescale.lean, and I didn't even use neuro_eblast. These things take me forever, but with the infoview and constant use of the delta and dsimp tactics I can usually get to the bottom of what's going on.
I tried to add some helpful refl lemmas but rewriting is full of dangers here. One thing I discovered was that if v : A and lemma h : A = B := rfl then if the goal depends on v a rw h at v might give you two v's and motive trouble later, but change B at v works fine. To avoid change I just did rintro (v : rescale r (C c i)), -- this does a very nice definitional rewrite behind the scenes and doesn't leave any chaos in its wake. #### Johan Commelin (Mar 20 2021 at 21:12): Thanks a lot! Last updated: May 09 2021 at 16:20 UTC
https://marcofrasca.wordpress.com/tag/higgs/
## Is Higgs alone?

14/03/2015

I am back after the announcement by CERN of the restart of the LHC. In May this year we will also have the first collisions. This is great news and we hope for the best, and the best here is just the breaking of the Standard Model. The Higgs in the title is not Professor Higgs but rather the particle carrying his name. The question is a recurring one since the first hints of its existence made their appearance at the LHC. The point I would like to make is that the equations of the theory are always solved perturbatively, even if exact solutions exist that provide a mass also when the theory is massless or has a mass term with the wrong sign (Higgs model). All you need is a finite self-interaction term in the equation. So, you will have a hard time recovering such exact solutions with perturbation techniques, and one keeps on living in ignorance. If you would like to see the technicalities involved, just take a cursory look at Dispersive Wiki.

What is the point? The matter is rather simple. The classical theory has exact massive solutions for a potential of the form $V(\phi)=a\phi^2+b\phi^4$, and this is a general result implying that a self-interacting scalar field always gets a mass (see here and here). Are we entitled to ignore this? Of course not. But today exact solutions have lost their charm and we get along without them. On the quantum field theory side, what could we say? The theory can be quantized starting with these solutions, and I have shown that in this way these massive particles have higher excited states. These are not bound states (maybe they could be correctly interpreted in string theory or in a proper technicolor formulation after bosonization) but rather internal degrees of freedom. It is always the same Higgs particle, but with the capability to live in higher excited states. These states are very difficult to observe because higher excited states are also highly depressed and even harder to see.
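To be concrete, here is the kind of exact solution at stake (a sketch following the first paper cited at the end of this post; see it for the derivation and conventions). The massless equation $\Box\phi+\lambda\phi^3=0$ admits the exact solution

$\phi(x)=\mu\left(\frac{2}{\lambda}\right)^{\frac{1}{4}}\mathrm{sn}(p\cdot x+\theta,i)$

with $\mathrm{sn}$ a Jacobi elliptic function and $\mu$, $\theta$ integration constants, provided the dispersion relation

$p^2=\mu^2\sqrt{\frac{\lambda}{2}}$

holds. That is, the field propagates like a massive wave even though no mass term was put in: the mass arises entirely from the finite self-interaction.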
In the first LHC run they could not be seen for sure. In a sense, it is as if the Higgs is alone but with the capability to get fatter and present himself in an infinite number of different ways. This is exactly the same for the formulation of the scalar field as originally proposed by Higgs, Englert, Brout, Kibble, Guralnik and Hagen. We just note that this formulation has the advantage of being exactly what one knows from the second-order phase transitions used by Anderson in his non-relativistic proposal of this same mechanism. The existence of these states appears inescapable, whatever your best choice for the quartic potential of the scalar field is.

It is interesting to note that this is also true for Yang-Mills field theory. The classical equations of this theory display similar solutions that are massive (see here), and however you develop your quantum field theory with such solutions, the mass gap is there. The theory entails the existence of massive excitations exactly as the scalar field does. This has been seen in lattice computations (see here). Can we ignore them? Of course not, but exact solutions are not our best choice, as said above, even if we will have a hard time recovering them with perturbation theory. Better to wait.

Marco Frasca (2009). "Exact solutions of classical scalar field equations." J. Nonlin. Math. Phys. 18:291-297, 2011. arXiv:0907.4053v2

Marco Frasca (2013). "Scalar field theory in the strong self-interaction limit." Eur. Phys. J. C (2014) 74:2929. arXiv:1306.6530v5

Marco Frasca (2014). "Exact solutions for classical Yang-Mills fields." arXiv:1409.2351v2

Biagio Lucini & Marco Panero (2012). "SU(N) gauge theories at large N." Physics Reports 526 (2013) 93-163. arXiv:1210.4997v2

## A light Higgs indeed!

02/08/2008

Tommaso Dorigo is shocking us these days with one striking post after another. Today he posted this one, where there is evidence that the Higgs is light indeed, being between 115-135 GeV, and there are reasons to regret.
The most severe of these is the shutdown of LEP, which Luciano Maiani was forced to order to start LHC construction. Had more time been given to these people, surely we would not now be standing still, waiting. But this was not Maiani's fault. Luciano Maiani is a great physicist and was my professor at "La Sapienza", where he tried to teach me quantum mechanics. Today I cannot say if he succeeded, but I can hide behind Feynman's view to be safe... Maiani was just forced to close LEP to respect the schedule and, I can guess, the budget allocated at that time. This was the only logical choice. Now a great window is surely open for Fermilab to anticipate the discovery. We are eager to see. Meanwhile we can say that Lubos Motl is half right; we hope for the other half...

Update: For some guesses about what to expect at the LHC, Sean Carroll has posted this. We are all eager to see. Bets are on...
http://mathhelpforum.com/discrete-math/161939-paths-coloured-graphs-print.html
# Paths in Coloured Graphs • November 3rd 2010, 07:43 AM Newtonian Paths in Coloured Graphs Suppose $\chi(G)=k$ and $c:V(G) \rightarrow \{1,\ldots,k\}$ is a proper $k$-colouring of $G$. Must there be a path $x_1 \ldots x_k$ in $G$ with $c(x_i)=i$ for each $i$? I've been trying to find a counter-example without any luck (but on the other hand can't come up with a proof either...) • November 7th 2010, 09:37 AM nimon Take a look at the Gallai-Roy-Vitaver theorem and its corollaries. Apparently, a proof can be found in this article but you might be better to find the original somewhere.
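As a quick sanity check on the question above (not a proof either way), one can brute-force small graphs: given an adjacency structure and a proper colouring, search for a path whose colours are exactly 1, 2, ..., k in order. This is my own illustrative sketch, with a 5-cycle as example; note the path's vertices are automatically distinct because their colours are.

```python
def rainbow_path_exists(adj, colour, k):
    # DFS for a path x_1 ... x_k with colour(x_i) = i: from each vertex of
    # colour 1, repeatedly step to any neighbour carrying the next colour.
    def extend(v, i):
        if i == k:
            return True
        return any(colour[w] == i + 1 and extend(w, i + 1) for w in adj[v])
    return any(colour[v] == 1 and extend(v, 1) for v in adj)

# Example: the 5-cycle has chromatic number 3; under the proper colouring
# below, the path 2-3-4 carries the colours 1, 2, 3 in order.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colour = {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}
print(rainbow_path_exists(adj, colour, 3))  # True
```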
http://crypto.stackexchange.com/tags/one-time-password/hot
# Tag Info 17 One of the advantages is purely on the human side of security. From RFC 6238's abstract: The HOTP algorithm specifies an event-based OTP algorithm, where the moving factor is an event counter. The present work bases the moving factor on a time value. A time-based variant of the OTP algorithm provides short-lived OTP values, which ... 10 The HOTP standard describes the resynchronization algorithm (section 7.4). Basically, the server remembers the last value $C$ of the counter for which a correct password was presented. When a new password is to be verified, the server tries $C+1$, $C+2$... until one matches, or $C+w$ is reached for some $w$ called the "window size". The intended scenario is ... 7 It looks to me that the original intent was to make sure that all bits of the hash digest have an equal chance to contribute to the truncated portion. But one of the properties of a secure hash function is to ensure that a single bit change results in a cascade that yields changing bits across the entire digest. If you don't trust this property in the hash ... 4 The usual resynchronization method involves getting several consecutive codes from the token and then running the algorithm once with a very large look-ahead window until the set of consecutive codes are found. The number of consecutive codes needed depends on how far off the token is. With a typical token, two codes would suffice to handle a desynch of ... 4 Why stop at 8 digits? 10 digits will be even more secure. Or 12. The output of the HOTP algorithm is 160 bits so you could go all the way to about 48 digits. Bottom line: 6 digits is secure enough for most applications and that is all that counts. Any more is inconvenient for the user and slightly more expensive when used in a hardware token (8 digit ... 4 As hunter notes, the only people who can really say what LastPass actually does are those who work there. However, as long as we only consider what they can and should do... 
They don't really need to store a separate copy of your data for each one-time password. Instead, all they need to store for each password is an encrypted copy of the key used to ... 3 From RFC 4226: 7.4. Resynchronization of the Counter Although the server's counter value is only incremented after a successful HOTP authentication, the counter on the token is incremented every time a new HOTP is requested by the user. Because of this, the counter values on the server and on the token might be out of synchronization. ... 3 So I was finally able to work this out. The pin code isn't important and is simply used to decrypt the activation code locally (not sure why our server asks for it in that case). The activation code is a base32 encoding of a seed where every fifth character acts as a checksum for the previous four. The seed is then passed through KDF1 to generate the ... 3 Just using random strings is simple and often reasonable solution. However, if you have significant customer base, it can become too expensive, because all of those random strings need to be stored. But in short, these more complex schemes are mainly for reducing storage requirements and amount of entropy needed. Note: I am partially building this reply ... 3 Different modes of operation have different requirements. For example, the IV for CBC mode should be generated with a CSPRNG, where as the IV for CTR mode just needs to be unique for each encryption. In terms of cryptography, the 'random' functions found in many languages are more predictable than you might imagine. That being said, there's absolutely no ... 3 It sounds like you're trying to improve the security of OTP schemes by adding extra "random-ish" data. My answer will address that, please update your question if that is a wrong interpretation. These schemes don't literally have multiple inputs that you could feed this extra data into, but you don't need them to. For example, with HOTP the security of the ... 
3 It is for user experience reasons, as you surmise, but the security is not compromised as much as you may think. Most implementations use 6 digit HOTP/TOTP schemes and design their implementation of the scheme to give them a security level they are comfortable with. For HOTP, the key parameter that allows 6 digits to be secure enough is the throttling ... 3 There is no "fresh client" with HOTP. The whole counter business is based on the idea that there is a single client, who maintains his counter which is more-or-less synchronized with the server counter. The synchronization window is just a way to cope with small unsynchronization events which come from realistic situations (e.g. your 3-year-old played with ... 2 It looks like unnecessary window dressing to me. As far as I can see, there is absolutely no reason to use this scheme instead of just choosing the first four bytes of the hash. It looks like unnecessary complexity -- or, as fgrieu put it, over-engineering. If the hash function is any good, then all this should be unnecessary. And if the hash function ... 2 I agree that Gilles' interpretation in the comments is the only one that makes sense; the RFC clearly contains an editorial error, and should read either (emphasis indicates corrections): "If the value calculated by the authentication server matches the value calculated by the client, then the HOTP value is validated." or: "If the value received by ... 2 Yes, HOTP can include a PIN/Password also. If you check RFC 4226, it says Composite Shared Secrets It may be desirable to include additional authentication factors in the shared secret K. These additional factors can consist of any data known at the token but not easily obtained by others. Examples of such data include: PIN or ... 2 My opinion on “Random vs. TOTP (Time-based OTP)”. With a random token, you need to keep track of what was generated for whom, when it expires, and you need to purge the expired tokens. 
A (TOTP) has the inherent feature of being useful for a defined period of time. If the server receives an OTP for an account, the server can generate OTPs for that ... 2 There is OpenBSD, where you can use S/KEY for login purposes. Check the OpenBSD – Frequently Asked Questions: 8.10 - S/Key S/Key is a "one-time password" authentication system. It can be useful for people who don't have the ability to use an encrypted channel which protects their authentication credentials in transit, as can be established using ... 1 The best way to encrypt a seed in a db would be to use multiparty computation and heterogeneous computing (multiple core systems). The only other way would be to store them in plaintext, e.g. in a shadow password (non-public file). 1 In theory, it is not: TOTP, like HOTP, is based on a Hash-based message authentication code (HMAC), which in turn relies on a cryptographic hash. Both the key and the HMAC message (a time counter in TOTP) are hashed using the specified cryptographic hash. One key goal of cryptographic hashes is: it should be infeasible to generate a message that has a ... 1 Think about it this way: with AES, you only generate one key and use that for all encryptions (of the day)... however with OTP a single key won't work and you'll need to have (say) $n$ keys to send $n$ packets securely. Now, again, you might suggest using a seed with a PRNG to create a $128*n$ bit key for all $n$ packets... but then here's the problem: ... 1 Yes, it is possible. You described how in your question. I'm not sure what your remaining problem is. You can think of this as two totally different, independent logical "users" -- or as two different OTP sequences -- with both of them assigned to the same person, using a different one for each channel. If the OTP scheme is secure, then your approach ... 1 Applying the hash function assures that only the next password in sequence will be valid.
There is no reason to store the previous state, or the client's original secret. Once a valid key is used, the server stores the hash of it (which is the same algorithm used to generate the list of hashes in the first place.) This puts the burden of storage on the ... 1 The only people who can answer your question definitively are the programmers at LastPass, however, I'll try. I assume you're referring to this. If LastPass really does encrypt your data with your password/username, then logically it could only be decrypted with the same key. Their 'one-time' password feature is an interesting idea, but I'm dubious about ... 1 RFC 4226, section 7.5 defines two shared key generation schemes: deterministic and random. I would suggest that you use the deterministic scheme, which only requires the server to store a single "master key": "Deterministic Generation A possible strategy is to derive the shared secrets from a master secret. The master secret will be stored at ... 1 As I understand, the user's token normally can't be reset (without destroying it). So, the assistance would consist in either giving a new token to the user (and declaring the old one invalid), or in stepping the server ahead until it matches again (i.e. running the algorithm once with a really large window size).
https://web2.0calc.com/questions/complex-real-analysis-calculus-algebra-sequence
# Complex/Real analysis, Calculus, Algebra, Sequence

Define a real sequence $${a}_{n}$$ by $${a}_{1}=\sqrt{2}$$ and $${a}_{n+1}=\sqrt{2+\sqrt{{a}_{n}}}$$ for $$n \ge 1$$.

(1) Show the sequence $${a}_{n}$$ is bounded.

(2) Explain why the sequence $${a}_{n}$$ converges. You can use the following definition to prove it. Suppose ($${a}_{n}$$) is a sequence and L ∈ C such that for all $$\varepsilon > 0$$ there is an integer N such that for all n ≥ N, we have |$${a}_{n}$$ − L| < $$\varepsilon$$. Then the sequence ($${a}_{n}$$) is convergent and L is its limit; in symbols we write $$\lim_{n\rightarrow\infty}{a}_{n} = L$$. If no such L exists then the sequence ($${a}_{n}$$) is divergent.

(3) Find the limit of the sequence $${a}_{n}$$.

I can use algebra to solve this, but since this is a complex analysis problem, you might use the previous definition to solve it. I am also seeking a general formula for $${a}_{n}$$ in terms of n.

fiora  Nov 20, 2018

#1

What do you mean by "L is a member of C"? I think it's supposed to be "L is a real number" if the sequence is a sequence of real numbers. Also I don't understand how this is a complex analysis problem.

Guest Nov 20, 2018

#2

Because this question showed up on my complex analysis test, and the test covered the topic of complex sequences and series. But why is this on my test? That is also my question for my weird professor. Complex numbers include the real numbers (a complex number z = x + i*y, for real x and y, is real if y = 0), so the definition can also apply to (2). The definition is indeed in my complex analysis textbook, and normally I can use this definition to prove that a sequence converges or diverges. But in this case, I guess I can show the sequence is bounded and then use some theorem/corollary to show that the sequence converges. If you don't know how to solve this with any kind of math, just leave it alone and don't bother with the definition stated in my original question.
Talk is cheap, show me the math (code). -- Linus Torvalds

fiora  Nov 20, 2018
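A numerical check of the thread's sequence (my own sketch, not from the posters): iterate the recurrence, confirm along the way that it stays below 2 (the boundedness asked for in part (1) follows by induction, since $a_n < 2$ gives $a_{n+1} = \sqrt{2+\sqrt{a_n}} < \sqrt{2+\sqrt{2}} < 2$), and verify that the limit satisfies the fixed-point equation $L = \sqrt{2+\sqrt{L}}$.

```python
import math

def sequence_limit(iterations=100):
    # a_1 = sqrt(2), a_{n+1} = sqrt(2 + sqrt(a_n))
    a = math.sqrt(2.0)
    for _ in range(iterations):
        a = math.sqrt(2.0 + math.sqrt(a))
        assert a < 2.0  # boundedness, as in part (1)
    return a

L = sequence_limit()
print(L)  # about 1.831
# the limit satisfies L^2 = 2 + sqrt(L)
print(abs(L * L - 2.0 - math.sqrt(L)))
```

The map $f(x)=\sqrt{2+\sqrt{x}}$ is increasing and a contraction near the fixed point, which is why the iteration settles so quickly.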
http://www2.math.binghamton.edu/p/seminars/alge/spring2015
## Spring 2015 • January 27 Organizational Meeting • February 3 Luise-Charlotte Kappe (Binghamton University) Finite coverings: a journey through groups, loops, rings and semigroups Abstract: A group is said to be covered by a collection of subsets if each element of the group belongs to at least one subset in the collection: the collection of subsets is called a covering of the group. On the bottom of page 105 of Derek Robinson's “Finiteness Conditions and Generalized Soluble Groups I”, there are two theorems which served as my roadmap for exploring finite coverings of groups, loops, rings and semigroups. The first one, an unpublished result by Reinhold Baer, is stated as follows. Baer's Theorem: A group is central-by-finite if and only if it has a finite covering by abelian subgroups. The second one, due to Bernhard Neumann, is stated as follows. Neumann's Lemma: Let G be a group having a covering by finitely many cosets of not necessarily distinct subgroups. If we omit any cosets of subgroups of infinite index, the remaining cosets will still cover the group. In my talk I will report on my journeys through groups, loops, rings and semigroups, on what I discovered there about finite coverings together with several fellow travelers and on some discoveries which might still lie ahead. • February 10 Marcin Mazur (Binghamton University) Algebra and Number Theory on the 2014 Putnam Competition Abstract: I will discuss some of the problems from the 2014 Putnam which have an algebraic flavor. • February 17 Eran Crockett (Binghamton University) Varieties generated by finite algebras Abstract: Varieties generated by finite algebras are an important example in universal algebra. This talk will focus on the questions you can ask about these varieties, including when are they locally finite, residually finite, or finitely based.
• February 17 Marcelo Aguiar (Cornell) (In the Combinatorics Seminar, 1:15 - 2:15 PM, WH-100E): The Steinberg Torus and the Coxeter Complex of a Weyl Group Abstract: Associated to a root system Φ, there is a torus equipped with a particular triangulation. This was introduced by Steinberg and further studied by Dilks, Petersen, and Stembridge. In joint work with Kyle Petersen, we exhibit a module structure for this complex over the Coxeter complex of Φ. The structure is obtained from geometric considerations involving affine hyperplane arrangements. As a consequence, we obtain a module structure on the space spanned by affine descent classes of a Weyl group, over the classical descent algebra of Solomon. We provide combinatorial models (picture) when Φ is of type A or C. The talk will not assume any background in root systems or hyperplane arrangements. • February 24 Nick Devin (Binghamton University) Solvable subgroups of PLo(I) Abstract: I will present conditions that ensure a subgroup of PLo(I), the group of piecewise-linear homeomorphisms of the unit interval, will be solvable–in particular, conditions that ensure a subgroup of PLo(I) has derived length n. I will give a geometric classification of the solvable subgroups of PLo(I), and talk about a minimal non-solvable subgroup in PLo(I). • February 26 Eric Swartz (Western Australia) (In the Geometry and Topology Seminar, 2:50–3:50, WH-100E) Abstract: A generalized quadrangle is a point-line incidence geometry Q such that (1) any two points lie on at most one line, and (2) given a line l and a point P not incident with l, P is collinear with a unique point of l. Generalized quadrangles are a specific type of generalized polygon, which were first introduced by Tits in 1959 as geometries associated to classical groups. It is natural, then, to ask the question: if one starts with the abstract definition of a generalized quadrangle, which ones are highly symmetric?
I will discuss the background of this question, leading to the following recent work: An antiflag of a generalized quadrangle is a non-incident point-line pair (P, l), and we say that the generalized quadrangle Q is antiflag-transitive if the group of collineations (automorphisms that send points to points and lines to lines) is transitive on the set of all antiflags. We prove that if a finite, thick generalized quadrangle Q is antiflag-transitive, then Q is one of the following: the unique generalized quadrangle of order (3,5), a classical generalized quadrangle, or a dual of one of these. This is joint work with John Bamberg and Cai-Heng Li, and this talk will assume no prior knowledge of finite geometry. • March 3 No talk this week • March 10 Joseph Mennuti (Binghamton University) Classifications of Simple Jordan Algebras Abstract: Zelmanov’s theorem classifies a simple Jordan algebra as an algebra of a bilinear form, an algebra of Hermitian type, or an Albert algebra. I will try to concentrate on the specific examples of Hermitian matrices and spin-factor and the original motivation of Jordan algebras coming from quantum mechanics. • March 17 John Brown (Binghamton University) A Proof of Brauer's Theorem on Induced Characters, Part Two: The Symmetric Groups Abstract: The Brauer Theorem on Induced Characters states that every virtual character of a finite group can be written as a linear combination of induced characters coming from linear characters of Brauer elementary subgroups. Last Fall we showed that the result holds for nilpotent groups. We also showed that the result holds for all finite groups if it holds for the symmetric groups. In this talk, we will complete the proof by showing that the result does hold for the symmetric groups. The talk will be pretty much self-contained. • March 18 (Wednesday), 3:30 PM, WH-100E Craig Dodge (Allegheny College) Searching for simple modules of the centralizer algebra Abstract: Let G be a finite group with subgroup H. 
We define the centralizer algebra $kG^H$ to be $$kG^H = \{a \in kG \mid ah = ha,\ \forall h \in H\}$$ for a field $k$. As part of a larger project, Ellers and Murray have been working to uncover information about the blocks and the simple modules of these centralizer algebras. In this talk we will be addressing the problem of classifying the simple modules of the centralizer algebra $k\Sigma_n^{\Sigma_l}$, where $\Sigma_n$ is the symmetric group on $n$ letters and $l < n$. We will examine a potential solution to the problem proposed by Ellers and Murray, which was inspired by a classification of James for the simple $k\Sigma_n$-modules. • March 24 Matt Evans (Binghamton University) On a theorem of Birkhoff Abstract: This will be an expository talk on a theorem of Birkhoff's from universal algebra: every algebra is (isomorphic to) a subdirect product of subdirectly irreducible algebras. I will give all definitions and theorems relevant for the proof and give some examples from group theory and lattice theory. • March 31 Frobenius Algebras Abstract: Frobenius introduced a concept for algebras known as the 'Frobenius condition' that began to be studied by Brauer, Nesbitt, and Nakayama extensively in the 1930's and that has since that time played a crucial role in modular representation theory and the theory of arbitrary associative algebras. In this talk we will survey the history of this and related conditions, the importance that these conditions play in understanding group algebras and more general finite dimensional associative algebras, and some areas of current research. • April 7 Spring Break • April 14 Nick Devin (Binghamton University) Classes of Elementary Amenable Groups (first part of the admission to candidacy exam). Abstract: Elementary amenable groups are groups that can be assembled, via extensions and direct unions, from finite groups and abelian groups.
Associated to an elementary amenable group is an elementary class: an ordinal number which tells how many “steps” are needed to assemble the group. Using a recent embedding theorem for countable groups, due to Osin and Olshanskii, I will provide a complete classification of the ordinal numbers that can occur as the elementary class of a countable group and, more specifically, of a finitely generated group. • April 14, 4.15 PM Nick Devin (Binghamton University) Constructing Osin and Olshanskii's Embedding for Countable Groups (second part of the admission to candidacy exam). Abstract: This is a continuation of my first talk on a recent embedding theorem for groups due to Osin and Olshanskii. I will discuss how the embedding used in the first talk is constructed. Building the embedding relies heavily on wreath products. Lemmas needed for the theorem will be proved, and definitions, including parallelogram-free subsets of a group, exponential growth of a subset of a finitely-generated group, and metabelian groups, will be explained. • April 21 Josh Wiscons (Hamilton College) Recognizing $\text{PGL}_3$ via generic $4$-transitivity Abstract: The groups of finite Morley rank are a class of groups equipped with a (finite) model-theoretic notion of dimension. The most important examples of these groups are the linear algebraic groups over algebraically closed fields, and in this case, Morley rank corresponds to the usual Zariski dimension. Recently, Borovik and Cherlin initiated a broad study of permutation groups of finite Morley rank with a key topic being high degrees of *generic* transitivity; this is a very natural notion of transitivity that has previously been studied in various forms for actions of algebraic groups. One of the main problems is to show that there is a natural upper bound on the degree of generic transitivity that depends only upon the rank of the set being acted on.
Such a bound has been known for a few decades when the set being acted on has rank $1$, and in this talk, I will present recent work, joint with Tuna Altınel, addressing the case of rank $2$. Various side problems, including some from the algebraic category, will also be discussed. The talk will require no prior knowledge of Morley rank; an intuition for the way in which dimension (and degree) behave for affine varieties will suffice. • April 28 Rachel Skipper (Binghamton University) Just infinite groups (first part of the admission to candidacy exam). Abstract: A just infinite group is an infinite group whose every proper quotient is finite. Since every finitely generated infinite group surjects onto a just infinite group, if we wish to find an infinite group with a particular property that survives in quotients, the class of just infinite groups can provide one. In the first talk, I will present some of the structure theory of just infinite groups and also discuss just infinite branch groups which have geometric representations allowing for concrete manipulations. In particular, I will spend some time discussing the first Grigorchuk group. • April 28, 4.15 PM Rachel Skipper (Binghamton University) Just infinite groups, part 2 (second part of the admission to candidacy exam). Abstract: In the second talk, I will prove a trichotomy that divides the class of just infinite groups and then show how to build a branch structure when G has an infinite structure lattice. I will end with a discussion of (abstract) free subgroups sitting inside just infinite profinite groups. • May 5 Andrew Kelley (Binghamton University) Counting subgroups according to their index (first part of the admission to candidacy exam). Abstract: How many subgroups of a given index does a finitely generated group have? As the index increases, this number may grow rapidly. How fast (asymptotically) is this so-called “subgroup growth”?
And how does it relate to the algebraic, structural properties of the group? A few results in this area of subgroup growth will be stated; some groups have superexponential subgroup growth, others–exponential, and still others–polynomial. The subgroup counting technique emphasized will be to count homomorphisms into finite groups. • May 5, 4:15 PM Andrew Kelley (Binghamton University) Polynomial subgroup growth: the pro-p case and more (second part of the admission to candidacy exam). Abstract: Which finitely generated pro-p groups have their number of subgroups bounded above by a polynomial function of the index? The answer is very nice, and a (somewhat) complete proof will be given. Beyond the pro-p case, it turns out that finitely generated groups which have polynomial subgroup growth can also be described succinctly. In proving part of this theorem, a fundamental technique for counting subgroups will be illustrated: counting complements in extensions.
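As a concrete toy example of counting subgroups by index (my own illustration, not taken from the abstracts above): the subgroups of $\mathbb{Z}^2$ of index $n$ are in bijection with the Hermite normal forms $\begin{pmatrix}a&b\\0&d\end{pmatrix}$ with $ad=n$ and $0\le b<d$, so their number is $\sigma(n)$, the sum of the divisors of $n$, a textbook case of polynomial subgroup growth.

```python
def index_n_subgroups_of_Z2(n):
    # For each divisor d of n there are d choices of b (0 <= b < d) with
    # a = n // d, so the count is the divisor sum sigma(n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

print([index_n_subgroups_of_Z2(n) for n in range(1, 7)])  # [1, 3, 4, 7, 6, 12]
```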
https://www.mathgraph32.org/spip.php?article471&lang=fr
Free multiplatform geometry, analysis and simulation software, by Yves Biton

# New theorem: Use of an inversion

Wednesday 14 August 2014.

## II. Case of n cocyclic points: use of an inversion.

In this paragraph, we suppose that the points $A_i$ lie on a circle $\Gamma$ of center O and radius R. $i$ will stand for the inversion of center O and ratio $R^2$.

Proposition 2.1: We have $T \circ i = T$.

Proof: Let $M$ be a point of the plane $E$ and let $N = i(M)$. The circle $\Gamma$ being invariant under $i$, Proposition 1.4 gives us: $NA_i = \frac{R^2 \, MA_i}{OM \times OA_i} = \frac{R}{OM} \times MA_i$. The coefficients $NA_i$ being proportional to the coefficients $MA_i$, we deduce that the barycenters of the weighted points $(A_i, MA_i)$ and $(A_i, NA_i)$ are identical, so we get $T(M) = T(N) = (T \circ i)(M)$.

Corollary 2.2: The image of the plane $E$ under $T$ is equal to the image under $T$ of the inside of the circle of center O and radius R (circle included). Indeed, if a point $M$ is not inside the circle, its image under $i$ is.

Continuation of the demonstration
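The distance formula for an inversion used in the proof above (Proposition 1.4) can be checked numerically. The sketch below, with arbitrarily chosen coordinates of my own, verifies that for a point $A$ on the circle and $N = i(M)$ the distances satisfy $NA = (R/OM) \cdot MA$, i.e. the coefficients are proportional with ratio $R/OM$.

```python
import math

def invert(o, m, r):
    # inversion with center O and ratio R^2: i(M) = O + R^2 (M - O) / |OM|^2
    dx, dy = m[0] - o[0], m[1] - o[1]
    s = r * r / (dx * dx + dy * dy)
    return (o[0] + s * dx, o[1] + s * dy)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

O, R = (0.0, 0.0), 2.0
M = (0.7, -0.4)                              # any point other than O
N = invert(O, M, R)
A = (R * math.cos(1.1), R * math.sin(1.1))   # a point on the circle
print(dist(N, A), (R / dist(O, M)) * dist(M, A))  # the two values agree
```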
https://www.scientificlib.com/en/Mathematics/LX/LangleysAdventitiousAngles.html
# Langley's Adventitious Angles

Langley's Adventitious Angles is a mathematical problem posed by Edward Mann Langley in the Mathematical Gazette in 1922.[1][2]

## The problem

In its original form the problem was as follows: ABC is an isosceles triangle. B = C = 80 degrees. CF at 30 degrees to AC cuts AB in F. BE at 20 degrees to AB cuts AC in E. Prove angle BEF = 30 degrees.[2][3]

## Solution

A solution was developed by James Mercer in 1923: Draw BG at 20 degrees to BC cutting CA in G. Then angle GBF = 60 degrees and angles BGC and BCG are 80 degrees. So BC = BG. Also angle BCF = angle BFC = 50 degrees, so BF = BG and triangle BFG is equilateral. But angle GBE = 40 degrees = angle BEG, so BG = GE = GF. And angle FGE = 40 degrees, hence GEF = 70 degrees and BEF = 30 degrees.[2]

## Generalization

A quadrilateral such as BCEF in which the angles formed by all triples of vertices are rational multiples of π is called an adventitious quadrangle. Several constructions for other adventitious quadrangles, beyond the one appearing in Langley's puzzle, are known. They form several infinite families and an additional set of sporadic examples.[4]

## References

1. Langley, E. M. (1922), "Problem 644", The Mathematical Gazette 11: 173.
2. Darling, David (2004), The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes, John Wiley & Sons, p. 180.
3. Tripp, Colin (1975), "Adventitious angles", The Mathematical Gazette 59: 98–106, JSTOR 3616644.
4. Rigby, J. F. (1978), "Adventitious quadrangles: a geometrical approach", The Mathematical Gazette 62 (421): 183–191, doi:10.2307/3616687, MR 513855.
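Mercer's angle chase can also be verified numerically by placing B and C on the x-axis and intersecting the relevant rays. The coordinates and helper functions below are our own illustrative setup, not part of the original puzzle:

```python
import math

def intersect(p, ap, q, aq):
    """Intersection of the line through p with direction angle ap and the
    line through q with direction angle aq (angles in radians)."""
    c1, s1 = math.cos(ap), math.sin(ap)
    c2, s2 = math.cos(aq), math.sin(aq)
    det = s1 * c2 - c1 * s2  # sin(ap - aq); nonzero for non-parallel lines
    t = ((q[0] - p[0]) * (-s2) + (q[1] - p[1]) * c2) / det
    return (p[0] + t * c1, p[1] + t * s1)

def angle_deg(v, w):
    """Angle between vectors v and w, in degrees."""
    dot = v[0] * w[0] + v[1] * w[1]
    return math.degrees(math.acos(dot / (math.hypot(*v) * math.hypot(*w))))

rad = math.radians
B, C = (0.0, 0.0), (1.0, 0.0)
# F = AB ∩ CF: AB leaves B at 80°; since angle FCA = 30°, angle FCB = 50°,
# so CF leaves C at 180° − 50° = 130°.
F = intersect(B, rad(80), C, rad(130))
# E = CA ∩ BE: since angle EBA = 20°, angle EBC = 60°, so BE leaves B at 60°;
# CA leaves C at 180° − 80° = 100°.
E = intersect(B, rad(60), C, rad(100))

EB = (B[0] - E[0], B[1] - E[1])
EF = (F[0] - E[0], F[1] - E[1])
print(round(angle_deg(EB, EF), 6))  # 30.0
```

The computed angle agrees with the claimed 30 degrees to floating-point precision.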
http://answers.gazebosim.org/answers/21820/revisions/
The worldCreated event is triggered before ModelPlugins are loaded, so I wouldn't expect such plugins to ever hear the event. That is meant to be used by SystemPlugins. If you're dealing with Visuals, I'd suggest you use a VisualPlugin instead of a ModelPlugin. That plugin's Load method should be called when the visual is ready.
https://labs.tib.eu/arxiv/?author=A.%20Hees
• ### Local tests of gravitation with Gaia observations of Solar System Objects(1709.05329) Sept. 15, 2017 gr-qc, astro-ph.EP In this proceeding, we show how observations of Solar System Objects with Gaia can be used to test General Relativity and to constrain modified gravitational theories. The high number of Solar System objects observed and the variety of their orbital parameters associated with the impressive astrometric accuracy will allow us to perform local tests of General Relativity. In this communication, we present a preliminary sensitivity study of the Gaia observations on dynamical parameters such as the Sun quadrupolar moment and on various extensions to general relativity such as the parametrized post-Newtonian parameters, the fifth force formalism and a violation of Lorentz symmetry parametrized by the Standard-Model extension framework. We take into account the time sequences and the geometry of the observations that are particular to Gaia for its nominal mission (5 years) and for an extended mission (10 years). • ### Testing the gravitational theory with short-period stars around our Galactic Center(1705.10792) May 30, 2017 gr-qc, astro-ph.GA Motion of short-period stars orbiting the supermassive black hole in our Galactic Center has been monitored for more than 20 years. These observations are currently offering a new way to test the gravitational theory in an unexplored regime: in a strong gravitational field, around a supermassive black hole. In this proceeding, we present three results: (i) a constraint on a hypothetical fifth force obtained by using 19 years of observations of the two best measured short-period stars S0-2 and S0-38 ; (ii) an upper limit on the secular advance of the argument of the periastron for the star S0-2 ; (iii) a sensitivity analysis showing that the relativistic redshift of S0-2 will be measured after its closest approach to the black hole in 2018. 
• ### Testing General Relativity with stellar orbits around the supermassive black hole in our Galactic center(1705.07902) May 22, 2017 gr-qc, astro-ph.GA In this Letter, we demonstrate that short-period stars orbiting around the supermassive black hole in our Galactic Center can successfully be used to probe the gravitational theory in a strong regime. We use 19 years of observations of the two best measured short-period stars orbiting our Galactic Center to constrain a hypothetical fifth force that arises in various scenarios motivated by the development of a unification theory or in some models of dark matter and dark energy. No deviation from General Relativity is reported and the fifth force strength is restricted to an upper 95% confidence limit of $\left|\alpha\right| < 0.016$ at a length scale of $\lambda=$ 150 astronomical units. We also derive a 95% confidence upper limit on a linear drift of the argument of periastron of the short-period star S0-2 of $\left|\dot \omega_\textrm{S0-2} \right|< 1.6 \times 10^{-3}$ rad/yr, which can be used to constrain various gravitational and astrophysical theories. This analysis provides the first fully self-consistent test of the gravitational theory using orbital dynamics in a strong gravitational regime, that of a supermassive black hole. A sensitivity analysis for future measurements is also presented. • ### Lorentz symmetry and Very Long Baseline Interferometry(1604.01663) Dec. 1, 2016 hep-ph, gr-qc, astro-ph.EP Lorentz symmetry violations can be described by an effective field theory framework that contains both General Relativity and the Standard Model of particle physics called the Standard-Model extension (SME). Recently, post-fit analysis of Gravity Probe B and binary pulsars led to an upper limit at the $10^{-4}$ level on the time-time coefficient $\bar s^{TT}$ of the pure-gravity sector of the minimal SME.
In this work, we derive the observable of Very Long Baseline Interferometry (VLBI) in SME and then we implement it into a real data analysis code of geodetic VLBI observations. Analyzing all available observations recorded since 1979, we compare estimates of $\bar s^{TT}$ and errors obtained with various analysis schemes, including global estimations over several time spans and with various Sun elongation cut-off angles, and with analysis of radio source coordinate time series. We obtain a constraint on $\bar s^{TT}=(-5\pm 8)\times 10^{-5}$, directly fitted to the observations and improving by a factor 5 previous post-fit analysis estimates. • ### Testing Lorentz symmetry with Lunar Laser Ranging(1607.00294) Oct. 21, 2016 gr-qc Lorentz symmetry violations can be parametrized by an effective field theory framework that contains both general relativity and the standard model of particle physics called the standard-model extension (SME). We present new constraints on pure gravity SME coefficients obtained by analyzing lunar laser ranging (LLR) observations. We use a new numerical lunar ephemeris computed in the SME framework and we perform a LLR data analysis using a set of 20721 normal points covering the period of August, 1969 to December, 2013. We emphasize that the linear combinations of SME coefficients to which LLR data are sensitive are not the same as those fitted in previous postfit residuals analysis using LLR observations and based on theoretical grounds. We found no evidence for Lorentz violation at the level of $10^{-8}$ for $\bar{s}^{TX}$, $10^{-12}$ for $\bar{s}^{XY}$ and $\bar{s}^{XZ}$, $10^{-11}$ for $\bar{s}^{XX}-\bar{s}^{YY}$ and $\bar{s}^{XX}+\bar{s}^{YY}-2\bar{s}^{ZZ}-4.5\bar{s}^{YZ}$ and $10^{-9}$ for $\bar{s}^{TY}+0.43\bar{s}^{TZ}$. We improve previous constraints on SME coefficients by a factor of up to 5 and 800 compared to postfit residuals analyses of respectively binary pulsars and LLR observations.
• ### Searching for an oscillating massive scalar field as a dark matter candidate using atomic hyperfine frequency comparisons(1604.08514) July 28, 2016 hep-ph, gr-qc, physics.atom-ph We use six years of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant, and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed. • ### Constraints on SME Coefficients from Lunar Laser Ranging, Very Long Baseline Interferometry, and Asteroid Orbital Dynamics(1607.07394) July 26, 2016 hep-ph, gr-qc, astro-ph.EP Lorentz symmetry violations can be parametrized by an effective field theory framework that contains both General Relativity and the Standard Model of particle physics, called the Standard-Model Extension or SME. We consider in this work only the pure gravitational sector of the minimal SME. We present new constraints on the SME coefficients obtained from lunar laser ranging, very long baseline interferometry, and planetary motions. • ### Test of the Equivalence Principle in the Dark Sector on Galactic Scales(1510.06198) March 1, 2016 gr-qc, astro-ph.CO, astro-ph.GA The Einstein Equivalence Principle is a fundamental principle of the theory of General Relativity. While this principle has been thoroughly tested with standard matter, the question of its validity in the Dark sector remains open. 
In this paper, we consider a general tensor-scalar theory that allows one to test the equivalence principle in the Dark sector by introducing two different conformal couplings to standard matter and to Dark matter. We constrain these couplings by considering galactic observations of strong lensing and of velocity dispersion. Our analysis shows that, in the case of a violation of the Einstein Equivalence Principle, data favour violations through coupling strengths that are of opposite signs for ordinary and Dark matter. At the same time, our analysis does not show any significant deviations from General Relativity. • ### Post-Newtonian phenomenology of a massless dilaton(1512.05233) Dec. 16, 2015 gr-qc, astro-ph.EP In this paper, we present extensively the observational consequences of massless dilaton theories at the post-Newtonian level. We extend previous work by considering a general model including a dilaton-Ricci coupling as well as a general dilaton kinetic term while using the microphysical dilaton-matter coupling model proposed in [Damour and Donoghue, PRD 2010]. We derive all the expressions needed to analyze local gravitational observations in a dilaton framework, which is useful to derive constraints on the dilaton theories. In particular, we present the equations of motion of celestial bodies (in barycentric and planetocentric reference frames), the equation of propagation of light and the evolution of proper time as measured by specific clocks. Particular care is taken in order to derive properly the observables. The resulting equations can be used to analyse a large number of observations: universality of free fall tests, planetary ephemerides analysis, analysis of satellite motion, Very Long Baseline Interferometry, tracking of spacecraft, gravitational redshift tests, ... • ### Tests of gravitation with Gaia observations of Solar System Objects(1509.06868) Nov.
17, 2015 gr-qc, astro-ph.EP In this communication, we show how asteroids observations from the Gaia mission can be used to perform local tests of General Relativity (GR). This ESA mission, launched in December 2013, will observe --in addition to the stars-- a large number of small Solar System Objects (SSOs) with unprecedented astrometric precision. Indeed, it is expected that about 360,000 asteroids will be observed with a nominal sub-mas precision. Here, we show how these observations can be used to constrain some extensions to General Relativity. We present results of SSOs simulations that take into account the time sequences over 5 years and geometry of the observations that are particular to Gaia. We present a sensitivity study on various GR extensions and dynamical parameters including: the Sun quadrupolar moment $J_2$, the parametrized post-Newtonian parameter $\beta$, the Nordtvedt parameter $\eta$, the fifth force formalism, the Lense-Thirring effect, a temporal variation of the gravitational parameter $GM_\textrm{sun}$ (a linear variation as well as a periodic variation), the Standard Model Extension formalism,... Some implications for planetary ephemerides analysis are also briefly discussed. • ### Test of the gravitational redshift with stable clocks in eccentric orbits: application to Galileo satellites 5 and 6(1508.06159) Oct. 21, 2015 gr-qc, astro-ph.IM The Einstein Equivalence Principle (EEP) is one of the foundations of the theory of General Relativity and several alternative theories of gravitation predict violations of the EEP. Experimental constraints on this fundamental principle of nature are therefore of paramount importance. The EEP can be split in three sub-principles: the Universality of Free Fall (UFF), the Local Lorentz Invariance (LLI) and the Local Position Invariance (LPI). In this paper we propose to use stable clocks in eccentric orbits to perform a test of the gravitational redshift, a consequence of the LPI. 
The best test to date was performed with the Gravity Probe A (GP-A) experiment in 1976 with an uncertainty of $1.4\times10^{-4}$. Our proposal considers the opportunity of using Galileo satellites 5 and 6 to improve on the GP-A test uncertainty. We show that considering realistic noise and systematic effects, and thanks to a highly eccentric orbit, it is possible to improve on the GP-A limit to an uncertainty around $(3-4)\times 10^{-5}$ after one year of integration of Galileo 5 and 6 data. • ### Combined Solar System and rotation curve constraints on MOND(1510.01369) Oct. 5, 2015 gr-qc, astro-ph.GA The Modified Newtonian Dynamics (MOND) paradigm generically predicts that the external gravitational field in which a system is embedded can produce effects on its internal dynamics. In this communication, we first show that this External Field Effect can significantly improve some galactic rotation curves fits by decreasing the predicted velocities of the external part of the rotation curves. In modified gravity versions of MOND, this External Field Effect also appears in the Solar System and leads to a very good way to constrain the transition function of the theory. A combined analysis of the galactic rotation curves and Solar System constraints (provided by the Cassini spacecraft) rules out several classes of popular MOND transition functions, but leaves others viable. Moreover, we show that LISA Pathfinder will not be able to improve the current constraints on these still viable transition functions. • ### Some cosmological consequences of a breaking of the Einstein equivalence principle(1504.02676) May 5, 2015 gr-qc, astro-ph.CO In this communication, we consider a wide class of extensions to General Relativity that break explicitly the Einstein Equivalence Principle by introducing a multiplicative coupling between a scalar field and the electromagnetic Lagrangian. 
In these theories, we show that 4 cosmological observables are intimately related to each other: a temporal variation of the fine structure constant, a violation of the distance-duality relation, the evolution of the cosmic microwave background (CMB) temperature and CMB spectral distortions. This enables one to put very stringent constraints on possible violations of the distance-duality relation, on the evolution of the CMB temperature and on admissible CMB spectral distortions using current constraints on the fine structure constant. Alternatively, this offers interesting possibilities to test a wide range of theories of gravity by analyzing several data sets concurrently. • ### Observables in theories with a varying fine structure constant(1409.7273) Jan. 24, 2015 gr-qc, astro-ph.CO We show how two seemingly different theories with a scalar multiplicative coupling to electrodynamics are actually two equivalent parametrisations of the same theory: despite some differences in the interpretation of some phenomenological aspects of the parametrisations, they lead to the same physical observables. This is illustrated by the interpretation of observations of the Cosmic Microwave Background. • ### Breaking of the equivalence principle in the electromagnetic sector and its cosmological signatures(1406.6187) Jan. 24, 2015 gr-qc, astro-ph.CO This paper proposes a systematic study of cosmological signatures of modifications of gravity via the presence of a scalar field with a multiplicative coupling to the electromagnetic Lagrangian. We show that, in this framework, variations of the fine structure constant, violations of the distance duality relation, evolution of the cosmic microwave background (CMB) temperature and CMB distortions are intimately and unequivocally linked.
This enables one to put very stringent constraints on possible violations of the distance duality relation, on the evolution of the CMB temperature and on admissible CMB distortions using current constraints on the fine structure constant. Alternatively, this offers interesting possibilities to test a wide range of theories of gravity by analysing several datasets concurrently. We discuss results obtained using current data as well as some forecasts for future data sets such as those coming from EUCLID or the SKA. • ### Range, Doppler and astrometric observables computed from Time Transfer Functions: a survey(1412.3360) Dec. 10, 2014 gr-qc, astro-ph.IM Determining range, Doppler and astrometric observables is of crucial interest for modelling and analyzing space observations. We recall how these observables can be computed when the travel time of a light ray is known as a function of the positions of the emitter and the receiver for a given instant of reception (or emission). For a long time, such a function--called a reception (or emission) time transfer function--has been almost exclusively calculated by integrating the null geodesic equations describing the light rays. However, other methods avoiding such an integration have been considerably developed in the last twelve years. We give a survey of the analytical results obtained with these new methods up to the third order in the gravitational constant $G$ for a mass monopole. We briefly discuss the case of quasi-conjunctions, where higher-order enhanced terms must be taken into account for correctly calculating the effects. We summarize the results obtained at the first order in $G$ when the multipole structure and the motion of an axisymmetric body are taken into account. We present some applications to on-going or future missions like Gaia and Juno. We give a short review of the recent works devoted to the numerical estimates of the time transfer functions and their derivatives.
• ### Light propagation in the field of a moving axisymmetric body: theory and application to JUNO(1406.6600) Sept. 16, 2014 gr-qc, astro-ph.IM Given the extreme accuracy of modern space science, a precise relativistic modeling of observations is required. We use the Time Transfer Functions formalism to study light propagation in the field of uniformly moving axisymmetric bodies, which extends the field of application of previous works. We first present a space-time metric adapted to describe the geometry of an ensemble of uniformly moving bodies. Then, we show that the expression of the Time Transfer Functions in the field of a uniformly moving body can be easily derived from its well-known expression in a stationary field by using a change of variables. We also give a general expression of the Time Transfer Function in the case of an ensemble of arbitrarily moving point masses. This result is given in the form of an integral easily computable numerically. We also provide the derivatives of the Time Transfer Function in this case, which are mandatory to compute Doppler and astrometric observables. We particularize our results in the case of moving axisymmetric bodies. Finally, we apply our results to study the different relativistic contributions to the range and Doppler tracking for the JUNO mission in the Jovian system. • ### Constraints on MOND theory from radio tracking data of the Cassini spacecraft(1402.6950) April 29, 2014 gr-qc, astro-ph.GA, astro-ph.EP The MOdified Newtonian Dynamics (MOND) is an attempt to modify the gravitation theory to solve the Dark Matter problem. This phenomenology is very successful at the galactic level. The main effect produced by MOND in the Solar System is called the External Field Effect parametrized by the parameter $Q_2$. We have used 9 years of Cassini range and Doppler measurements to constrain $Q_2$. 
Our estimate of this parameter based on Cassini data is given by $Q_2=(3 \pm 3)\times 10^{-27} \ \rm{s^{-2}}$ which shows no deviation from General Relativity and excludes a large part of the relativistic MOND theories. This limit can also be interpreted as a limit on an external tidal potential acting on the Solar System coming from the internal mass of our galaxy (including Dark Matter) or from a new hypothetical body. • ### Tests of Gravitation at Solar System scales beyond the PPN formalism(1403.1365) March 6, 2014 gr-qc, astro-ph.EP In this communication, the current tests of gravitation available at Solar System scales are recalled. These tests rely mainly on two frameworks: the PPN framework and the search for a fifth force. Some motivations are given to look for deviations from General Relativity in other frameworks than the two extensively considered. A recent analysis of Cassini data in a MOND framework is presented. Furthermore, possibilities to constrain Standard Model Extension parameters using Solar System data are developed. • ### Relativistic formulation of coordinate light time, Doppler and astrometric observables up to the second post-Minkowskian order(1401.7622) Jan. 29, 2014 gr-qc, astro-ph.IM Given the extreme accuracy of modern space science, a precise relativistic modeling of observations is required. In particular, it is important to describe properly light propagation through the Solar System. For two decades, several modeling efforts based on the solution of the null geodesic equations have been proposed but they are mainly valid only for the first order Post-Newtonian approximation. However, with the increasing precision of ongoing space missions such as Gaia, GAME, BepiColombo, JUNO or JUICE, we know that some corrections up to the second order have to be taken into account for future experiments.
We present a procedure to compute the relativistic coordinate time delay, Doppler and astrometric observables avoiding the integration of the null geodesic equation. This is possible using the Time Transfer Function formalism, a powerful tool providing key quantities such as the time of flight of a light signal between two point-events and the tangent vector to its null-geodesic. Indeed we show how to compute the Time Transfer Functions and their derivatives (and thus range, Doppler and astrometric observables) up to the second post-Minkowskian order. We express these quantities as quadratures of some functions that depend only on the metric and its derivatives evaluated along a Minkowskian straight line. This method is particularly well adapted for numerical estimations. As an illustration, we provide explicit expressions in static and spherically symmetric space-time up to second post-Minkowskian order. Then we give the order of magnitude of these corrections for the range/Doppler on the BepiColombo mission and for astrometry in a GAME-like observation. • ### How to test SME with space missions ?(1308.0373) Aug. 1, 2013 hep-ph, gr-qc, astro-ph.EP In this communication, we focus on possibilities to constrain SME coefficients using Cassini and Messenger data. We present simulations of radioscience observables within the framework of the SME, identify the linear combinations of SME coefficients the observations depend on and determine the sensitivity of these measurements to the SME coefficients. We show that these datasets are very powerful for constraining SME coefficients. • ### Simulations of Solar System observations in alternative theories of gravity(1301.1658) Feb. 27, 2013 gr-qc In this communication, we focus on the possibility to test General Relativity (GR) with radioscience experiments. 
We present simulations of observables performed in alternative theories of gravity using a software that simulates Range/Doppler signals directly from the space time metric. This software allows one to get the order of magnitude and the signature of the modifications induced by an alternative theory of gravity on radioscience signals. As examples, we present some simulations for the Cassini mission in Post-Einsteinian gravity (PEG) and with Standard Model Extension (SME). • ### Can the Chameleon mechanism explain cosmic acceleration while satisfying Solar System constraints ?(1302.6527) Feb. 26, 2013 gr-qc, astro-ph.CO The chameleon mechanism appearing in massive tensor-scalar theory of gravity can effectively reduce the nonminimal coupling between the scalar field and matter. This mechanism is invoked to reconcile cosmological data requiring introduction of Dark Energy with small-scale stringent constraints on General Relativity. In this communication, we present constraints on this mechanism obtained by a cosmological analysis (based on Supernovae Ia data) and by a Solar System analysis (based on PPN formalism). • ### Radioscience simulations in General Relativity and in alternative theories of gravity(1201.5041) In this paper, we focus on the possibility to test General Relativity in the Solar System with radioscience measurements. To this aim, we present a new software that simulates Range and Doppler signals directly from the space-time metric. This flexible approach allows one to perform simulations in General Relativity and in alternative metric theories of gravity. In a second step, a least-squares fit of the different initial conditions involved in the situation is performed in order to compare anomalous signals produced by a given alternative theory with the ones obtained in General Relativity. This software provides orders of magnitude and signatures stemming from hypothetical alternative theories of gravity on radioscience signals. 
As an application, we present some simulations done for the Cassini mission in Post-Einsteinian Gravity and in the context of MOND External Field Effect. We deduce constraints on the Post-Einsteinian parameters but find that the considered arc of the Cassini mission is not useful to constrain the MOND External Field Effect. • ### Frequency shift up to the 2-PM approximation(1210.2577) A lot of fundamental tests of gravitational theories rely on highly precise measurements of the travel time and/or the frequency shift of electromagnetic signals propagating through the gravitational field of the Solar System. In practically all of the previous studies, the explicit expressions of such travel times and frequency shifts as predicted by various metric theories of gravity are derived from an integration of the null geodesic differential equations. However, the solution of the geodesic equations requires heavy calculations when one has to take into account the presence of mass multipoles in the gravitational field or the tidal effects due to the planetary motions, and the calculations become quite complicated in the post-post-Minkowskian approximation. This difficult task can be avoided using the time transfer function's formalism. We present here our last advances in the formulation of the one-way frequency shift using this formalism up to the post-post-Minkowskian approximation.
https://docs.acquia.com/acquia-cloud/manage/servers/storage/cli/
About disk storage in Acquia Cloud

Each Acquia Cloud application has a set amount of disk storage allocated to it. This page explains how Acquia Cloud allocates and uses that storage, and how you can determine how much of that storage is in use by your application.

Examining your disk storage usage

You can review your disk storage from the command line. To do this, connect to your Acquia Cloud environment using SSH, and then run the following command:

    df -h

The output will appear similar to the following:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1      9.9G  4.4G  5.4G  45% /
    udev            3.7G   12K  3.7G   1% /dev
    tmpfs           723M  328K  723M   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            3.6G     0  3.6G   0% /run/shm
    /dev/xvdb        30G  823M   28G   3% /mnt
    /dev/xvdn        25G  2.6G   23G  11% /vol/backup-ebs
    /dev/xvdm        25G  2.4G   23G  10% /vol/ebs1
    /dev/xvdo        25G  2.6G   23G  11% /mnt/brick1144138

From this you can determine how large each region is and how much of it is used or available.

Filesystem regions

Acquia Cloud Enterprise and Acquia Cloud Professional use different filesystem regions, as described in the following table:

    Partition   Mount            Acquia Cloud Enterprise function        Acquia Cloud Professional function
    /dev/xvdm   /vol/ebs1        database                                database and static files
    /dev/xvdn   /vol/backup-ebs  backups                                 backups
    /dev/xvdo   /mnt/gfs         distributed file system (static files)  not used

Local storage

From the disk storage usage example above, two devices are of primary interest:

• /dev/xvda1
• /dev/xvdb

These are the storage devices provided with your Acquia Cloud instance itself; they are both local to the instance and have the quickest access. These devices are essentially disposable (also known as ephemeral storage). The root directory is mounted on /dev/xvda1, is 10 GB in size, and houses the operating system and basic functions.
Everything here can be recreated from Acquia Cloud files in the event that the instance needs to be discarded or relaunched. The /mnt directory is mounted on /dev/xvdb and is the instance's remaining ephemeral disk space. The content in this location is either something Acquia can install using its puppet master (for example, a copy of your Drupal code that actually runs under Apache) or is considered disposable (such as logs or other temporary files).

Distributed file system

The distributed file system uses a physical disk, where the files are actually stored. In the disk storage usage example, that partition is /dev/xvdo. These partitions are called bricks, and the file system uses one or more bricks to ensure that when changes occur, those changes propagate to all defined bricks. This provides the high availability functionality: a pair of EBS volumes stays in sync. For example, even though the example application has one brick on web-12345 and another on web-67890, all read and write operations use the file server itself, which is based in /mnt/gfs.

Database

The MySQL database has a separate volume in the disk storage usage example: /dev/xvdm. MySQL normally runs from /var/lib/mysql. In the example application, the following symlink points to that volume:

    lrwxrwxrwx 1 mysql mysql 15 May  9 23:09 mysql -> /vol/ebs1/mysql

Each server in the pair of web-* servers for Production has this design; however, Drupal only knows about one server, which is the location for reading and writing to the database. MySQL replicates the data from the active server to the passive one (for example, from web-12345 to web-67890) using normal binlog processes, which keep the databases synchronized. If web-12345 becomes unavailable, Acquia Cloud fails over (by changing a settings file) to the other server and informs Drupal to use web-67890 instead.
This provides high availability for the database, ensuring website uptime while the secondary server is returned to service.

Backups

Because Acquia Cloud uses synchronized copies with MySQL server, your application is well protected from adverse situations. However, catastrophic failures or other emergencies can occur, which is why backing up your applications regularly is highly recommended. Acquia Cloud takes file system snapshots every hour and saves them to /dev/xvdn. These snapshots are generally needed only if both file system bricks fail. Database dumps are also stored here. To learn about how often backups occur on Acquia Cloud and how to create your own, see Backing up your application.

Storage space

The database in the disk storage usage example has 25 GB to store the data, binlogs, and any other files associated with the usual /var/lib/mysql directory. If necessary, during a maintenance window, that volume can be replaced with any size up to the maximum of 1 TB. The file system volume is separate and is set to 25 GB; its maximum size can also be raised to 1 TB. Acquia's infrastructure monitoring warns when the database, file, or ephemeral disks reach 95% of capacity, and again at 100%. When this event occurs, Acquia creates proactive critical-priority tickets. Acquia will attempt to safely reduce disk space usage to prevent a system outage by moving the oldest database backups to the /mnt/tmp directory.

Important: These backups are not safe from deletion, and you must download and save the files if you still need them.
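The 95% warning check described above can be automated. The following is a minimal Python sketch (my own illustration, not an Acquia tool) that parses `df -h`-style output and flags partitions at or above a warning threshold. The sample is trimmed from the output shown on this page, with the backup volume raised to 96% so that the threshold actually fires:

```python
# Minimal sketch (not Acquia tooling): parse `df -h`-style output and flag
# partitions whose Use% meets or exceeds a warning threshold.

SAMPLE = """\
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      9.9G  4.4G  5.4G  45% /
/dev/xvdm        25G  2.4G   23G  10% /vol/ebs1
/dev/xvdn        25G  2.6G   23G  96% /vol/backup-ebs
"""

def partitions_over(df_output: str, threshold: int = 95):
    """Return (mount point, use%) pairs at or above `threshold` percent."""
    flagged = []
    for line in df_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        use_pct = int(fields[4].rstrip("%"))  # e.g. "96%" -> 96
        if use_pct >= threshold:
            flagged.append((fields[5], use_pct))
    return flagged

print(partitions_over(SAMPLE))  # only the backup volume exceeds 95%
```

In practice you would feed this the live output of `df -h` (for example via `subprocess.run`) rather than a hardcoded sample.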
http://math.stackexchange.com/questions/69442/an-estimate-of-a-series
# An estimate of a series

Suppose $s$ is not an integer, and let $\lambda(s)=\min_{n\ge 0}|s+n|$. Show that $\sum\limits_{n=1}^{\infty}\left(\frac{1}{n+s}-\frac{1}{n}\right)\ll\frac{1}{\lambda(s)}+\log(|s|+2)$.

- What is $\ll$ here? I think of it as "much less", but if $s=0.1$, for example, it doesn't seem too much less to me ($-0.1$ vs $0.8$). – Ross Millikan Oct 3 '11 at 4:30
- $\ll$ is an equivalent symbol for the big $O$. – ksj03 Oct 3 '11 at 4:33
- You probably meant $\sum_{n=1}^\infty \left( \frac{1}{n} - \frac{1}{n+s} \right)$ on the left-hand side, to agree with the asymptotic expansion for large positive values of $s$. – Sasha Oct 3 '11 at 5:20
- I think it can be reduced to showing that $\sum\limits_{n=1}^{|s|+1}\frac{1}{n+s}\ll\frac{1}{\lambda(s)}+\log(|s|+2)$. – ksj03 Oct 3 '11 at 5:35
- Also under discussion at mathoverflow. – Gerry Myerson Oct 3 '11 at 6:30

It is worth noting that this is the main term of the digamma function; namely, we have
$$\frac{\Gamma'}{\Gamma}(s)=\frac{-1}{s}-\gamma+\sum_{n=1}^\infty \left(\frac{1}{n}-\frac{1}{s+n}\right).$$
Here is a proof of the asymptotic. It is Theorem C.1 of the appendix in Montgomery and Vaughan's Multiplicative Number Theory. First,
$$\sum_{n=1}^M \left(\frac{1}{n}-\frac{1}{s+n}\right)=\log M +\gamma+\frac{1}{s}-\sum_{n=0}^M \frac{1}{n+s}+O(M^{-1}),$$
since $\sum_{n=1}^M \frac{1}{n}=\log M+\gamma+O(M^{-1})$ and the $n=0$ term contributes the $\frac{1}{s}$. By Euler–Maclaurin summation applied to $\frac{1}{x+s}$ we have
$$\sum_{n=0}^M \frac{1}{n+s}=\log(M+s)-\log s +\frac{1}{2s}+\frac{1}{2(s+M)}+O(|s|^{-2}).$$
Combining these and taking $M\rightarrow \infty$ we have
$$\sum_{n=1}^\infty \left(\frac{1}{n}-\frac{1}{s+n}\right)=\log s +\gamma +\frac{1}{2s}+O\left(\frac{1}{|s|^{2}}\right),$$
which is stronger than your desired result.

Remark: From here, with the fact that $\frac{\Gamma'}{\Gamma}(s)=\frac{d}{ds}\log (\Gamma(s))$, we can deduce Stirling's approximation.

Remark 2: The $\frac{1}{\lambda(s)}$ you have above comes from the $\frac{1}{2s}$. I believe that adding $2$ in the logarithm means we no longer need the constant $\gamma$.
- Thanks for your detailed explanation, Eric. –  ksj03 Oct 3 '11 at 12:09
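The asymptotic in the answer is easy to check numerically. The following is my own sketch (stdlib only, not from the thread): truncate the series at a large $N$ and compare against $\log s + \gamma + \frac{1}{2s}$ for $s = 10$.

```python
import math

def series(s: float, terms: int = 10**6) -> float:
    """Partial sum of sum_{n>=1} (1/n - 1/(n+s)); truncation error ~ s/terms."""
    return sum(1.0 / n - 1.0 / (n + s) for n in range(1, terms + 1))

s = 10.0
gamma = 0.5772156649015329  # Euler-Mascheroni constant
asymptotic = math.log(s) + gamma + 1.0 / (2.0 * s)
# For integer s the series telescopes to the harmonic number H_s, so both
# values should agree to within the O(1/s^2) error of the asymptotic.
print(series(s), asymptotic)
```

For $s=10$ the series equals $H_{10}=\frac{7381}{2520}\approx 2.92897$, and the asymptotic gives $\approx 2.92980$, consistent with an $O(1/s^2)$ error term.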
https://codereview.stackexchange.com/questions/243846/extracting-min-and-max-salary-from-string
# Extracting min and max salary from string

What I want to do is extract the min/max range of a salary from text that contains either an hourly or an annual salary.

    import re

    # either of the following inputs should work
    input1 = "$80,000 - $90,000 per annum"
    input2 = "$20 - $24.99 per hour"

    salary_text = re.findall("[\$0-9,\. ]*-[\$0-9,\. ]*", input1)
    if salary_text and salary_text[0]:
        range_list = re.split("-", salary_text[0])
        if range_list and len(range_list) == 2:
            low = range_list[0].strip(' $').replace(',', '')
            high = range_list[1].strip(' $').replace(',', '')

- You could create a list with the inputs, all_examples = [input1, input2], and run for input in all_examples: ... so that you can test the code with all inputs without changing it. Eventually you could create a list of inputs with expected outputs, all_examples = [(input1, "80000", "90000"), (input2, "20", "24.99")], and automatically check whether the results are correct: for input, expected_low, expected_high in all_examples: ... low == expected_low ... high == expected_high. – furas Jun 15 '20 at 1:47

The , commas are a nice twist. I feel you are stripping them a bit late, as they don't really contribute to the desired solution. Better to lose them from the get-go. Calling .findall seems to be overkill for your problem specification -- likely .search would suffice.

    salary_text = re.findall("[\$0-9,\. ]*-[\$0-9,\. ]*", input1)

The dollar signs, similarly, do not contribute to the solution; your regex could probably just ignore them if your inputs are fairly sane. Or even scan for lines starting with a $ dollar sign, and then have the regex ignore them.

    range_list = re.split("-", salary_text[0])

There is no need for this .split -- the regex could have done this for you already.
Here is what I recommend:

### Code

    import re

    def find_range(text: str) -> list:
        expression = r'^\s*\$([0-9]{1,3}(?:,[0-9]{1,3})*(?:\.[0-9]{1,2})?)\s*-\s*\$([0-9]{1,3}(?:,[0-9]{1,3})*(?:\.[0-9]{1,2})?)\s*per\s+(?:annum|hour)\s*$'
        return re.findall(expression, text)

    input_a = '$80,000 - $90,000 per annum'
    input_b = '$20 - $24.99 per hour'

    print(find_range(input_a))
    print(find_range(input_b))

If you wish to simplify/update/explore the expression, it's explained in the top right panel of regex101.com. You can watch the matching steps or modify them in the debugger link, if you're interested. The debugger demonstrates how a RegEx engine might step by step consume a sample input string and perform the matching process.

### RegEx Circuit

jex.im visualizes regular expressions.

- Thanks for this, especially those corner cases are very good points. – Hooman Bahreini Jun 16 '20 at 11:10
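Pulling the review's suggestions together, here is a self-contained sketch (my own consolidation, not the reviewer's final code). The pattern follows the answer above; the function name, the numeric conversion, and the captured "period" group are illustrative choices of mine:

```python
import re

# One anchored pattern captures both figures and the pay period.
# (The extra capturing group on annum/hour is my addition for illustration.)
SALARY_RE = re.compile(
    r'^\s*\$([0-9]{1,3}(?:,[0-9]{1,3})*(?:\.[0-9]{1,2})?)'
    r'\s*-\s*'
    r'\$([0-9]{1,3}(?:,[0-9]{1,3})*(?:\.[0-9]{1,2})?)'
    r'\s*per\s+(annum|hour)\s*$'
)

def find_range(text: str):
    """Return (low, high, period) with floats, or None if no match."""
    m = SALARY_RE.match(text)
    if m is None:
        return None
    low, high, period = m.groups()
    return float(low.replace(",", "")), float(high.replace(",", "")), period

print(find_range("$80,000 - $90,000 per annum"))  # (80000.0, 90000.0, 'annum')
print(find_range("$20 - $24.99 per hour"))        # (20.0, 24.99, 'hour')
```

Returning `None` for non-matching text keeps the caller's `if` logic simple, and stripping the commas only after a successful match avoids mutating the input up front.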
http://mathoverflow.net/questions/25092/can-all-induced-maps-be-described-categorically-or-at-least-as-generally-as/25099
# Can all induced maps be described categorically? (or at least as generally as possible)

Hi, I am new here. I went over the FAQs; still, sorry if I break protocol. I am pretty confused about induced maps in different areas of algebraic topology. I do know how these induced maps are defined in many cases, but I definitely do not understand well enough the rules governing when a map between two topological spaces X, Y induces a map in homology or homotopy. AFAIK, if we have a map f: X --> Y, and this map takes cycles to cycles and boundaries to boundaries, then this map "passes to homology" (it is not clear to me what that means). The problem (at least to me) is that the word "induced" seems to be overused, in the sense that its meaning does not always seem clear: induced quotients, induced homomorphisms, induced bundles, etc. So: does anyone know whether induced maps can be described categorically, or at least, could someone please explain when a given map between topological spaces induces a map on homology or cohomology? I think there is some underlying algebraic result dealing with normal subgroups (which extends to any subgroup in homology, since chain groups are abelian), but I am not too sure of this. Thanks for any help.

The key word in this context is functor. The point is that homology, homotopy etc. are functors. For example, consider homology $H_n$. This is a functor from the category of topological spaces to the category of abelian groups. In categories, although the usual notation obscures it, morphisms are more important than objects. It is crucial to define the action of the functor on morphisms. Returning to our example, to define the functor $H_n$ we need to define an abelian group $H_n(X)$ for each topological space $X$, and for each continuous map $f:X\to Y$ a group homomorphism $H_n(f):H_n(X)\to H_n(Y)$. These maps $H_n(f)$ must preserve composition and identities.
In practice, we don't use the notation $H_n(f)$ for these maps but typically use alternatives like $f_*$, which is quicker to write but less informative. Similarly, homotopy groups and cohomology groups form functors on suitable categories. There is a whole algebra of categories, functors and more, which is dealt with in texts on category theory. For instance, the composition of two functors is a functor. In our example, some texts see the homology group functor as a composite: a functor from topological spaces to chain complexes, followed by the functor taking a chain complex to its homology groups. Also, texts on topology vary in the detail in which they explain the construction of the maps I've denoted $H_n(f)$; some go into lots of detail, while others wave their hands more. In general, once they have done some examples in detail, they tend not to go into so much detail in subsequent examples, as they assume that the reader can now fill in more details.

- Thanks: I have a chicken-and-egg confusion here when going beyond (co)homology, though. I know that we define (Eilenberg–Steenrod) (co)homology to be a functor; there are other cases, though, in which we may not know in advance whether we have a functor. Given any linear map between vector spaces V, W, we get a map W* --> V* (I think V -> V* is a functor). Given a map between manifolds M, N, we get an induced map between the respective tangent spaces T_pM, T_pN (where is the functor?). Do these (and all other) induced maps also follow from functoriality? Also, once we have functoriality, how do we define f*? Thanks. – confused May 23 '10 at 4:55

You are raising a lot of questions here. The Eilenberg–Steenrod axioms do not define (co)homology functors; they axiomatize them. Dualization of vector spaces is a contravariant functor. You did not mention manifolds originally, but a map $f$ between two smooth manifolds $M$ and $N$ induces linear maps from $T_p(M)$ to $T_{f(p)}(N)$, which can be regarded as a functor.
The domain category is the category of manifolds with a base point, with base-point-preserving maps, and the codomain category is that of vector spaces. – Robin Chapman May 23 '10 at 10:32

In fact (say for homology), if you have a continuous $f:X\to Y$, the induced homomorphism of chain complexes $C_\bullet(f):C_\bullet(X)\to C_\bullet(Y)$ automatically sends cycles to cycles and boundaries to boundaries (simply because it is compatible with the differentials of the two complexes, in the sense that $f\circ d_X=d_Y\circ f$). The fact that it "passes" to homology is now an algebraic fact, namely: if you have four abelian groups $A,B,C,D$ and three homomorphisms $f:A\to B, g:A\to C, h:B\to D$ with $g$ and $h$ surjective (so $C$ is a quotient of $A$ and $D$ is a quotient of $B$), then you can find $f':C\to D$ such that $f'\circ g=h\circ f$ if and only if $f(\ker(g))\subseteq \ker(h)$. (You might want to draw a diagram here :D) Take $A=Z_n(X), B=Z_n(Y), C=H_n(X)=Z_n(X)/B_n(X), D=H_n(Y)=Z_n(Y)/B_n(Y)$, where $Z_\bullet=$ cycles and $B_\bullet=$ boundaries, as usual, and you get your induced map in homology.
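The quotient criterion in the last answer is exactly a commutative-square condition. Here is a LaTeX rendering of the diagram (my own sketch, using only the maps already named in the answer):

```latex
% Requires amsmath for \xrightarrow.
% f : A -> B descends to f' : C -> D on the quotients
% precisely when f(ker g) \subseteq ker h.
\[
\begin{array}{ccc}
A & \xrightarrow{\ f\ } & B \\
\big\downarrow{\scriptstyle g} & & \big\downarrow{\scriptstyle h} \\
C & \xrightarrow{\ f'\ } & D
\end{array}
\qquad \text{commutes: } f' \circ g = h \circ f .
\]
```

Instantiating $A,B,C,D$ with cycle groups and homology groups as in the answer recovers the induced map $f_*$ on homology.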
http://www.physicsforums.com/showthread.php?t=59774
## Arithmetical teaser

This one is actually quite simple once you see it. Consider the following conversation between man 1 and man 2:

Man 1: 'I have a puzzle for you. First, pick an integer from 100 to 999 in your head.'
Man 2: 'Uhhm. Well... okay. (thinks: I'll take 501, 'cuz that's me lucky number.)'
Man 1: 'Okay, consider the number that you get by repeating the number in your head once. So if you had 123, you should have 123123.'
Man 2: 'Okay. (That'll be 501501.)'
Man 1: 'Now divide that number by 13.'
Man 2: '...wait a sec... okay. Fortunately it's an integer. (That's 38577.)'
Man 1: 'Divide the number you got by 7.'
Man 2: '...(that'll be 5511)... okay. Luckily another integer!'
Man 1: 'Now divide it by 11.'
Man 2: '(5511/11 is... eh, 501.) Hey, I get the number with which I started!'
Man 1: 'Really? I suppose you made a good choice at the start. My question is: how many integers from 100 to 999 have this property?'

So, how many integers are there from 100 to 999 which have this property?

Surely I'm screwing up somewhere, but 1001 = 13*11*7, so they all do.

Hmmm, $$\frac{1001}{7*11*13}=1$$ ... no idea at all. Guess I'll just have to try them one by one.

Quote by Galileo: So, how many integers are there from 100 to 999 which have this property?

The answer is all of them (900). This works because taking any number like 501 and making it 501501 is the same thing as this: assume the number you chose is "X", so X = 501. Then (X*1000)+X is the formula that gives you 501501. 1000X + X = 1001X, and 1001X/1001 = X. Works every time, regardless of X, up to 999.
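The algebra above is easy to verify by brute force; here is a quick Python check (my own, not from the thread):

```python
# Check the claim: for every three-digit X, repeating its digits gives
# 1001 * X, and 1001 = 7 * 11 * 13, so the division chain always returns X.
assert 7 * 11 * 13 == 1001

count = 0
for x in range(100, 1000):
    repeated = int(str(x) * 2)        # e.g. 501 -> 501501
    assert repeated == 1001 * x       # repeating == multiplying by 1001
    assert repeated // 13 // 7 // 11 == x  # the puzzle's division chain
    count += 1
print(count)  # 900: every three-digit choice works
```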
Mwa. I knew it was too easy. Oh well...

Well, yes, it is quite easy; in fact I had read this in a book quite a few years back. I think I should look for that book; maybe I can post some good questions from it too.
https://www.nat-hazards-earth-syst-sci.net/18/2143/2018/
Natural Hazards and Earth System Sciences, an interactive open-access journal of the European Geosciences Union

Nat. Hazards Earth Syst. Sci., 18, 2143-2160, 2018
https://doi.org/10.5194/nhess-18-2143-2018

Research article | 10 Aug 2018

# Development and application of a tsunami fragility curve of the 2015 tsunami in Coquimbo, Chile

Rafael Aránguiz (1,2), Luisa Urra (3), Ryo Okuwaki (4), and Yuji Yagi (5)

• 1 Department of Civil Engineering, Universidad Católica de la Santísima Concepción, Concepción, Chile
• 2 National Research Center for Integrated Natural Disaster Management CONICYT/FONDAP/1511007 (CIGIDEN), Santiago, Chile
• 3 Laboratory of Remote Sensing and Geoinformatics for Disaster Management, International Research Institute of Disaster Science, Tohoku University, Tohoku, Japan
• 4 Graduate School of Life and Environmental Sciences, University of Tsukuba, Tsukuba, Japan
• 5 Faculty of Life and Environmental Sciences, University of Tsukuba, Tsukuba, Japan

Abstract

The last earthquake that affected the city of Coquimbo took place in September 2015 and had a magnitude of Mw = 8.3, resulting in localized damage in low-lying areas of the city. In addition, another seismic gap north of the 2015 earthquake rupture area has been identified; therefore, a significant earthquake (Mw = 8.2 to 8.5) and tsunami could occur in the near future. The present paper develops a tsunami fragility curve for the city of Coquimbo based on field survey data and tsunami numerical simulations. The inundation depth of the 2015 Chile tsunami in Coquimbo was estimated by means of numerical simulation with the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model and five nested grids with a maximum grid resolution of 10 m.
The fragility curve exhibited behavior similar to that of other curves in flat areas in Japan, where little damage was observed at relatively high inundation depths. In addition, it was observed that Coquimbo experienced less damage than Dichato (Chile); in fact, at an inundation depth of 2 m, Dichato had a ∼75 % probability of damage, while Coquimbo proved to have only a 20 % probability. The new fragility curve was used to estimate the damage by possible future tsunamis in the area. The damage assessment showed that ∼50 % of the structures in the low-lying area of Coquimbo have a high probability of damage in the case of a tsunami generated off the coast of the study area if the city is rebuilt with the same types of structures.

1 Introduction

On 16 September 2015 a Mw = 8.3 earthquake took place off the coast of the Coquimbo Region (USGS: http://earthquake.usgs.gov/earthquakes/eventpage/us20003k7a#executive, last access: 10 July 2018). The earthquake generated a tsunami that inundated low-lying areas of the city of Coquimbo, with run-up reaching up to 6.4 m and a penetration distance of up to 700 m (Aránguiz et al., 2016; Contreras-López et al., 2016), resulting in reports of significant damage to houses and public infrastructure (Contreras-López et al., 2016). This earthquake filled the seismic gap that had existed since at least the last significant earthquake along the Coquimbo–Illapel seismic region in 1943 (Melgar et al., 2016; Ye et al., 2016). However, the region just north of the 2015 rupture area has not experienced significant seismic activity since the 1922 Mw = 8.3 event (Melgar et al., 2016; Ye et al., 2016). Thus, it is recommended that reconstruction plans and new tsunami mitigation measures consider potential impacts due to possible future tsunamis generated north of the 2015 Illapel earthquake rupture zone.
With regard to the assessment of structural damage within the exposed area against a potential tsunami hazard, two different approaches were identified. Damage can be estimated deterministically based on the forces acting on a single structure (Nandasena et al., 2012; Nistor et al., 2009; Shimozono and Sato, 2016; Wei et al., 2015); however, such an analysis could be extremely time-consuming and impractical for an entire city due to the high-resolution numerical simulations (∼2 m) that are required. Alternatively, the assessment of structural damage could be performed probabilistically by means of fragility curves (Koshimura et al., 2009a, b; Suppasri et al., 2011). Tsunami fragility curves represent the probability of damage to structures in relation to a tsunami intensity measure, such as the inundation depth, current velocity or hydrodynamic force (Koshimura et al., 2009a), although a fully probabilistic approach may use a wide range of possible scenarios, in which case both the hazard assessment and the damage assessment are probabilistic (Park et al., 2017). A classical approach uses linear models with ordinary least-squares methods and aggregated data. This methodology has been applied to obtain empirical tsunami fragility curves for Banda Aceh in Indonesia (Koshimura et al., 2009b) and Thailand (Suppasri et al., 2011) after the 2004 Indian Ocean tsunami. The same methodology was applied to areas affected by the 2009 Samoa tsunami (Gokon et al., 2014). In a similar manner, this method was applied in Japan after the 2011 Tohoku tsunami, allowing several fragility curves that considered several damage levels and different building materials to be obtained (Suppasri et al., 2013). After the 2010 Chile tsunami, Mas et al. (2012) developed the first tsunami fragility curve in Chile for masonry and mixed structures in Dichato.
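The classical aggregated least-squares approach described above can be sketched in a few lines: model the damage probability as a standard normal CDF of the log inundation depth, probit-transform the observed damage ratios, and fit a straight line. The following is an illustrative sketch on synthetic, noise-free bins, not the authors' code; the parameter values and bin depths are made up:

```python
import math
from statistics import NormalDist

nd = NormalDist()

def fit_fragility(depths, damage_ratios):
    """Least-squares fit of (mu, sigma) for P(x) = Phi((ln x - mu) / sigma),
    from aggregated (inundation depth, damage ratio) bins."""
    xs = [math.log(d) for d in depths]
    ys = [nd.inv_cdf(p) for p in damage_ratios]   # probit transform
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    sigma = 1.0 / slope                            # probit(P) = (ln x - mu)/sigma
    mu = -intercept * sigma
    return mu, sigma

# Synthetic bins generated from mu = ln(2), sigma = 0.5 (noise-free),
# i.e. a 50% damage probability at 2 m inundation depth.
true_mu, true_sigma = math.log(2.0), 0.5
depths = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
ratios = [nd.cdf((math.log(d) - true_mu) / true_sigma) for d in depths]
print(fit_fragility(depths, ratios))  # recovers approximately (0.693, 0.5)
```

With real survey data the ratios are noisy and the fit is only approximate, which is one motivation for the generalized-linear-model methodologies discussed next.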
In recent years, new methodologies have been proposed for the development of tsunami fragility curves that use disaggregated data and different classes of models, such as the generalized linear model, the generalized additive model and non-parametric models (Charvet et al., 2015, 2017; Macabuag et al., 2016). These new methodologies propose a more comprehensive analysis in order to select appropriate statistical models and identify which tsunami intensity measure gives the best representation of the observed damage data (Macabuag et al., 2016). Even though the use of different classes of models could offer an improvement over the ordinary least-squares method, there is no quantifiable assessment of the effect of data aggregation and linear-model assumption violation on the predictive power of a model (Macabuag et al., 2016). Fragility curves developed for one region have also been applied elsewhere: for example, the fragility curves developed by Suppasri et al. (2013) have been applied to building damage estimation in Napier, New Zealand (Fraser et al., 2014), and to both building damage and economic loss estimation in Seaside, Oregon (Wiebe and Cox, 2014). The former study also applied the fragility curves of Dichato, Chile (Mas et al., 2012), and American Samoa (Gokon et al., 2014). Tsunami fragility curves are obtained for a given area under a given scenario; therefore, they may not be applicable to other areas of interest, since the tsunami characteristics and building materials may differ (Koshimura et al., 2009a; Suppasri et al., 2011). For example, buildings along the Sanriku ria coast in Japan experienced greater damage than structures located on the plains of Sendai (Suppasri et al., 2012b, 2013); thus, De Risi et al. (2017) analyzed the influence of tsunami velocity on structural damage on ria-type and plain-type coasts. They found that while flow velocity improves the fragility models, the two coastal typologies should be considered separately when velocity is included in the analysis. Moreover, Song et al.
(2017) used a bivariate intensity measure to evaluate tsunami losses, such that both flow velocity and inundation depth are analyzed. They found that flow velocity is important for buildings located less than 1 km from the coastline. In addition, they found that reinforced concrete buildings are the most sensitive to the incorporation of velocity, while wood structures exhibit no sensitivity to this variable. The Coquimbo area provides a good opportunity to develop a fragility curve and assess potential tsunami impact, since the tsunami in 2015 did not damage all structures and some of the damaged structures have been repaired or rebuilt on their original sites. This study develops an empirical fragility curve for the Coquimbo area using field survey data and numerical simulation of the 2015 Chile tsunami. In addition, we estimated the probability of structural damage for a deterministic tsunami scenario using the Coquimbo fragility curve. Section 2 gives a description of the study area, with a short review of the local seismicity. Section 3 presents the methodology of the fragility curve development, which includes a comparison with existing tsunami fragility curves. Section 4 presents an application of the fragility curves. Finally, Sect. 5 gives the main conclusions of the present research.

2 Study area

The city of Coquimbo is located on the southern shore of Coquimbo Bay (29.96° S). The Coquimbo area was mentioned by the conquistadors as a good place for a port, and the location became important in the 19th century due to the natural protection it offered against southwest swell waves. Coquimbo Bay is open to the northwest and characterized by lowland topography with a long, flat, sandy beach (Aránguiz et al., 2016), similar to the coastal plains of Sendai. Like all coastal cities in Chile, Coquimbo is located over the subduction zone of the Nazca plate beneath the South American plate (18–44° S).
The convergence rate of the plates is 68 mm yr−1 along the Chile subduction zone and large seismic events take place every 10 years on average (Métois et al., 2016). In fact, three events larger than magnitude 8.0 have taken place in the last 6 years, namely, the 2010 Maule (34–38° S), 2014 Iquique (19–20° S) and 2015 Illapel (30–32° S) earthquakes.

Figure 1. Seismicity of central Chile. (a) Space–time plot of large earthquakes along central Chile. Red bars are the events along the Copiapó–Coquimbo region and the red stars represent smaller seismic events. The blue bars are events along the Coquimbo–Illapel seismic region, while the black lines represent events along the Los Vilos–Constitución segment. The dashed line is the large event of 1730, which ruptured both the Los Vilos–Constitución and Coquimbo–Illapel segments (Beck et al., 1998; Lomnitz, 2004; Métois et al., 2016; Nishenko, 1985). (b) Map showing the cities and towns mentioned in the text. The yellow star represents the epicenter of the 2015 Illapel earthquake. The thin black lines are isobaths at water depths of 200, 1000 and 3000 m. The thick black line is the Peru–Chile trench.

Figure 1 shows the seismic events recorded in the Coquimbo area. The oldest record of a tsunami is that of the 1730 event. This earthquake generated a destructive tsunami that destroyed Valparaiso and Concepción and flooded low-lying areas in Japan (Cisternas et al., 2011). The tsunami destroyed several ranches on the shore of Coquimbo (Soloviev and Go, 1975). Although the 1880 and 1943 earthquakes are considered to be similar in size (Nishenko, 1985), the behaviors of the tsunamis generated by these events seem to be different.
While the former generated large columns of water that resulted in the anchor chain of a ship snapping in Coquimbo (Soloviev and Go, 1975) and a deep submarine cable breaking off the coast near the mouth of the Limarí River (Lomnitz, 2004), the latter generated a minor tsunami that damaged fishing boats in Los Vilos and raised the water level by 80 cm in Valparaiso (Soloviev and Go, 1975), while no tsunami was reported in Coquimbo. Conversely, the 2015 tsunami reached up to 4.75 m at the Coquimbo tide gauge, with a run-up of 6.4 m (Aránguiz et al., 2016; Contreras-López et al., 2016). Moreover, a maximum tsunami amplitude of 2 m was observed at the Valparaiso tide gauge (Aránguiz et al., 2016). The main reason behind this is that the 1943 event broke the deepest portion of the subduction interface, while the 2015 Illapel earthquake had a shallower rupture area and a larger magnitude (Fuentes et al., 2016; Okuwaki et al., 2016), resulting in a larger initial tsunami amplitude (Aránguiz et al., 2016). The largest tsunami ever recorded in Coquimbo took place in 1922. It arrived in Coquimbo 2 h after the earthquake, with three large waves observed, the third of which was the largest, with a maximum inundation height of 7 m and an inland penetration of 2 km. The part of the city located on the southern shore of Coquimbo Bay was totally destroyed by both the water and tsunami debris (Soloviev and Go, 1975). In a similar manner, the tsunami reached inundation heights of up to 9 m at Chañaral and 6–7 m at Caldera. The tsunami was also observed in Japan, with maximum amplitudes ranging from 60 to 70 cm (Carvajal et al., 2017; Soloviev and Go, 1975), which is similar to the amplitudes of the 2015 event (80 cm), but larger than those of the 1943 event, which were 10–25 cm (Beck et al., 1998). Another significant event was the 1849 earthquake, which generated a localized tsunami that mainly affected Coquimbo. 
The tsunami arrived 10 to 30 min after the earthquake, penetrated 300 m horizontally and rose 5 m above the high tide mark (Soloviev and Go, 1975).

3 Development of the fragility curve

The development of the fragility functions in the present work required three main steps: first, data collection regarding building damage levels in the Coquimbo area and tsunami inundation heights for numerical modeling validation; second, selection of a rupture model of the 2015 Illapel earthquake and validation of the tsunami inundation heights for estimation of tsunami inundation depth; and third, GIS analysis and statistical analysis for correlation between damage level and simulated tsunami inundation depth.

Figure 2. Photographs of undamaged structures and of masonry houses damaged by the 16 September 2015 tsunami in the Coquimbo area. The red letter d indicates the observed tsunami inundation depth. All photos were taken on 22 September 2015.

## 3.1 Building damage and tsunami inundation data

Only 5 to 7 days after the 2015 event, a team surveyed the affected area and collected more than 40 inundation height, inundation depth and tsunami run-up measurements in the Coquimbo inundation area. The field measurements followed established post-tsunami survey procedures (Dengler et al., 2003; Dominey-Howes et al., 2012; Synolakis and Okal, 2005) and were corrected for tide level at the time of maximum inundation. At the same time, 585 structures within the inundation area were identified and classified as mixed structures made of wood and masonry (568), reinforced concrete buildings of eight or more stories (4) and very light structures that did not meet minimal building standards (13). The present analysis considered the mixed structures only; therefore, the reinforced concrete and light structures were removed from the fragility curve analysis.
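As a minimal illustration of this survey bookkeeping, the filtering step can be sketched in Python. The counts are those quoted above; the class labels (`"mixed"`, `"rc"`, `"light"`) are hypothetical names introduced here for the example:

```python
from collections import Counter

# Hypothetical per-structure class labels, as one might record them in a
# survey table: "mixed" (wood and masonry), "rc" (reinforced concrete,
# eight or more stories), "light" (below minimal building standards).
survey = ["mixed"] * 568 + ["rc"] * 4 + ["light"] * 13

counts = Counter(survey)
print(counts)  # Counter({'mixed': 568, 'light': 13, 'rc': 4})

# Keep only the mixed wood-and-masonry structures for the fragility analysis.
mixed_only = [s for s in survey if s == "mixed"]
print(len(mixed_only), "of", len(survey), "structures retained")
```
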
Typical structures within the inundated area of Coquimbo have one story and are made of masonry, though there are some two-story buildings made of both masonry (the first floor) and wood (the second floor). In order to facilitate the comparison with existing fragility curves (e.g., Dichato), all data were combined in a single category: mixed structures. Figure 2 shows typical mixed structures and inundation depth marks surveyed in Coquimbo immediately after the 2015 tsunami. Figure 2a and b show masonry houses that were not damaged by the tsunami despite inundation depths that ranged from 1.5 to 2 m.

Figure 3. (a) Surveyed damage to structures due to the 2015 tsunami; R.C.: reinforced concrete structures; L.S.: light structures. (b) Coquimbo inundated area (Aránguiz et al., 2016) and survey data. Red circles represent inundation measurements and yellow triangles tsunami run-up.

Meanwhile, Fig. 2c and d show houses with moderate to major damage, ready for habitation again only after major repairs. In fact, the house in Fig. 2c was being repaired at the time of the field survey and the gray wall in the corner had been built a few days earlier. In turn, the house in Fig. 2d was abandoned, since all interior walls, windows, doors and the roof were destroyed and major repairs and retrofitting would be needed. Figure 2e shows a destroyed structure with its interior walls and roof completely removed, while Fig. 2f shows the remaining foundation of a washed-away structure. Even though the damage to the structures could be due to both the earthquake and tsunami, it was observed that damage due to the earthquake was limited (Candia et al., 2017) and the structures most affected by the earthquake were made of adobe (Fernández et al., 2017). In addition, the authors had the opportunity to compare damage to inundated and non-inundated houses in Coquimbo in order to verify that the structural damage to inundated houses was due to the tsunami.
In order to avoid categorizing light damage (due to the earthquake) as tsunami damage, a two-level damage scale was used. Thus, the present work assumed that the damage to flooded structures was due only to the tsunami. In addition, the two-level damage scale was used due to the small number of inundated structures (568) and for comparison with the existing fragility curve of Dichato (Mas et al., 2012), which has only two damage levels. The first level, called “not destroyed,” included structures with no damage or minor to major damage, corresponding to levels 1–3 given by Suppasri et al. (2013). These damage levels indicate that there is slight to severe damage to nonstructural components; therefore, it would be possible to use the structures after moderate to major repairs (Fig. 2a–c). The other damage level, called “destroyed”, included damage levels 4 to 6 according to Suppasri et al. (2013), i.e., structures that underwent severe damage to walls or columns or that had completely collapsed (Fig. 2d–f). Previous works carried out damage inspections using satellite images and field surveys (Koshimura et al., 2009b; Mas et al., 2012; Suppasri et al., 2011); however, the satellite image method assumes that buildings with intact roofs are not destroyed (Suppasri et al., 2011), and severe damage to columns or interior walls may not be observed (Mas et al., 2012), as in the case of the houses shown in Fig. 2c and d. Therefore, the present work employed damage detection based on field surveys only. Figure 3a shows the surveyed buildings and the damage levels assigned to the 568 mixed structures. The four reinforced concrete buildings (R.C.) and the 13 light structures (L.S.) that did not meet minimal building standards are also included in the figure. Figure 3b shows the inundation height and run-up measurements recorded during the field survey. It is observed that the maximum inundation height was reached in the corner, where the coastal road and the railway converge. 
Most of the damaged structures were identified in that location as well.

## 3.2 Tsunami inundation depth

Tsunami inundation depth was estimated as the difference between tsunami inundation height and ground elevation. Since the inundation heights were measured at only a few locations across the inundation area and there is a lack of tsunami traces in the wetland, interpolation of tsunami height may not be suitable; therefore, the tsunami heights were obtained from tsunami numerical simulation of the 2015 event. We tested four available finite-fault models, namely those of Li et al. (2016), Ruiz et al. (2016), Okuwaki et al. (2016) and Shrivastava et al. (2016), and the best fit was selected according to tide gauges in Coquimbo and Valparaiso and DART buoy 32402. Once the best slip model was selected, we used the field measurements of inundation height and run-up to select an appropriate dry-land roughness coefficient. The model proposed by Li et al. (2016) is obtained from iterative modeling of teleseismic body waves as well as tsunami records at DART buoys. Since the magnitude of the proposed model is Mw=8.21, the slip distribution was multiplied by a factor of 1.38; thus, all events have the same magnitude: Mw=8.3.

Figure 4. Model setting and nested computational grids for Coquimbo.

The tsunami initial condition was estimated to be equal to the seafloor displacement. In addition, the vertical displacement from each subfault was computed using a kinematic solution of the planar fault model of Okada (1985). The numerical simulations were carried out with the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model (Yamazaki et al., 2009, 2011). This model is a staggered finite-difference model that solves the nonlinear shallow water equations and uses a vertical velocity term to account for weakly dispersive waves. The model simulates the tsunami initial condition, propagation and inundation by means of several nested grids of different resolutions.
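The magnitude rescaling mentioned above follows directly from the proportionality between seismic moment and slip: multiplying every subfault slip by a factor f raises the moment magnitude by (2/3)·log10(f). A minimal sketch (the helper name is ours; the factor and magnitudes are those quoted in the text):

```python
import math

def rescaled_mw(mw: float, slip_factor: float) -> float:
    """Moment magnitude after scaling all subfault slips by slip_factor.

    Seismic moment M0 is proportional to slip, and Mw = (2/3)*(log10(M0) - 9.1),
    so scaling slip by f adds (2/3)*log10(f) to Mw.
    """
    return mw + (2.0 / 3.0) * math.log10(slip_factor)

# Scaling the Mw 8.21 Li et al. (2016) model by 1.38 brings it to Mw 8.3.
print(round(rescaled_mw(8.21, 1.38), 2))  # 8.3
```
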
The present research used five nested grids, as shown in Fig. 4. The level 1 grid describes tsunami propagation from generation to the continental shelf and to the Pacific Ocean at a resolution of 2 arcmin (∼3600 m). This grid was generated from 30 arcsec General Bathymetric Chart of the Oceans (GEBCO) data. The level 2 and level 3 grids were built from nautical charts 4100, 4112 and 4113 and had resolutions of 30 and 6 arcsec, respectively. The level 4 grid covered Coquimbo Bay, was built from nautical chart 4111 and had a resolution of 1 arcsec (∼30 m). Finally, the level 5 grid had a resolution of 1/3 arcsec (∼10 m) and was built from bathymetry from nautical chart 4111 and topography from a digital terrain model (DTM) with contour lines at a resolution of 2 m provided by the Coquimbo office of the Ministry of Housing (MINVU). The topography used high-resolution data; thus, the most important features, such as the coastal road embankment, railway, river and wetland, are well represented (see Fig. 4, grid 5). Numerical simulations in Valparaiso involved four nested grids with a maximum grid resolution of 1 arcsec (∼30 m). The roughness coefficient was defined as n=0.025 on the seabed, as recommended for tsunamis (Bricker et al., 2015; Kotani et al., 1998); however, we tested several roughness coefficient values in coastal, wetland and urban areas in order to obtain the best fit of tsunami inundation height. The validation of the numerical simulation was performed using the root mean square error and the parameters K and κ given by Eqs. (1) and (2) (Aida, 1978). The variable Ki is defined as $K_i = x_i / y_i$, where xi and yi are the recorded and computed tsunami heights, respectively. The Japan Society of Civil Engineers provides guidelines, which recommend that $0.95 < K < 1.05$ and $\kappa < 1.45$ for there to be good agreement (Aida, 1978; Gokon et al., 2014).
$$\log K=\frac{1}{n}\sum_{i=1}^{n}\log K_i\qquad\text{(1)}$$

$$\log \kappa=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log K_i\right)^{2}-\left(\log K\right)^{2}}\qquad\text{(2)}$$

Figure 5 shows the tsunami initial conditions of the four slip models and the tsunami waveforms over an elapsed time of 4 h at three selected gauges, namely Coquimbo, Valparaiso and DART buoy 32402. Even though the modified Li et al. (2016) model overestimates the maximum amplitude at the DART buoy, the simulation exhibits a very good agreement with the tsunami record in Coquimbo. When the Mw=8.3 models proposed by Ruiz et al. (2016) and Shrivastava et al. (2016) were analyzed, it was possible to observe a good agreement at the DART buoy and Valparaiso tide gauge, although the amplitude in Coquimbo is underestimated by more than a meter. The Okuwaki et al. (2016) model overestimates the amplitudes at both the DART buoy and the Valparaiso tide gauge, and even though its second tsunami wave reaches a similar amplitude in Coquimbo, the maximum tsunami amplitude there is underestimated. Therefore, the modified Li et al. (2016) model was selected to assess the suitable Manning roughness coefficient.

Figure 5. Tsunami initial conditions of four source models and comparison of tsunami records with simulated tsunami waveforms at DART 32402, Coquimbo and Valparaiso.

Figure 6. Tsunami inundation heights obtained with the modified Li et al. (2016) source model and four different Manning coefficients, n=0.025, 0.04, 0.05 and 0.06. The root mean square error and the parameters K and κ are also shown.

Figure 6 shows the inundation area and tsunami inundation height results obtained from the numerical simulations of the Li et al. (2016) model with four different roughness coefficients.
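The agreement parameters of Aida (1978) in Eqs. (1) and (2) are straightforward to compute. A self-contained sketch, with purely illustrative recorded/simulated height pairs (not survey data):

```python
import math

def aida_parameters(recorded, computed):
    """Aida (1978) geometric mean K and scatter kappa for K_i = x_i / y_i."""
    logs = [math.log10(x / y) for x, y in zip(recorded, computed)]
    mean_log = sum(logs) / len(logs)                      # log K, Eq. (1)
    var_log = sum(v * v for v in logs) / len(logs) - mean_log ** 2
    K = 10.0 ** mean_log
    kappa = 10.0 ** math.sqrt(var_log)                    # Eq. (2)
    return K, kappa

# Illustrative recorded (x_i) and simulated (y_i) inundation heights in meters.
recorded = [2.0, 2.2, 1.8, 2.1]
computed = [2.0, 2.0, 2.0, 2.0]
K, kappa = aida_parameters(recorded, computed)
good = 0.95 < K < 1.05 and kappa < 1.45   # JSCE guideline check
print(round(K, 3), round(kappa, 3), good)
```
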
The tested coefficients are n=0.025 for coastal and riverine areas, 0.04 and 0.05 for low-density urban areas, and 0.06 for medium-density urban areas (Bricker et al., 2015; Kotani et al., 1998). From the figure, it is possible to observe that the best fit is obtained for n=0.025, which resulted in K=1.05 and κ<1.45, corresponding to good agreement. For higher roughness coefficients, the tsunami inundation heights are underestimated. In addition, the larger the coefficient, the smaller the inundation area. This behavior could be explained by the fact that a significant part of the flooded area is a wetland and the developed area is rather small, with a low-density residential distribution. Thus, the inundation depth is computed from the inundation area given by the modified Li et al. (2016) slip model, with a roughness coefficient of n=0.025.

Figure 7. Results of tsunami numerical simulations for each intensity measure. (a) Inundation depth, (b) flow velocity and (c) hydrodynamic force.

## 3.3 Fragility curve

The construction of a fragility curve requires a correlation between the structural damage level and a tsunami intensity measure, such as the inundation depth, current velocity or hydrodynamic force. To this end, we used the classical approach with aggregated data and a least-square fit (Koshimura et al., 2009a), in which a sample size is defined such that each range of the tsunami intensity measure includes the defined number of structures. Then the damage probability is calculated by counting the number of destroyed or not-destroyed structures for each range of the intensity measure. Finally, the fragility function is developed through regression analysis of the discrete set of damage probabilities and the tsunami intensity measure. Therefore, it is assumed that the cumulative probability P of damage follows the standardized normal or lognormal distribution function given in Eq. (3).
Φ is the distribution function, x is the hydrodynamic feature of the tsunami, and μ and σ are the mean and standard deviation of x, respectively. The values of μ and σ are calculated by means of least-square fitting of x against the inverse of Φ ($\Phi^{-1}$) on normal probability paper, as given by Eq. (4).

$$P(x)=\Phi\left[\frac{x-\mu}{\sigma}\right]\qquad\text{(3)}$$

$$x=\sigma\,\Phi^{-1}+\mu\qquad\text{(4)}$$

The hydrodynamic force per unit width (kN m−1) acting on a structure is computed as the drag force given by Eq. (5), where the drag coefficient is assumed to be CD=1.0 for simplicity, ρ is the density of sea water (1025 kg m−3), U is the flow velocity (m s−1) and h is the inundation depth (m).

$$F=\frac{1}{2}C_{\mathrm{D}}\,\rho\,h\,U^{2}\qquad\text{(5)}$$

Table 1. Statistical parameters for the developed fragility curves obtained from a normal distribution.

Figure 7 shows the results of the simulated tsunami intensity measures. It can be observed that the topography plays an important role in tsunami inundation, as the maximum inundation depth values (Fig. 7a) occur at the beach and wetland, while developed areas behind the railway and areas distant from the shore present low inundation depths. In a similar manner, high velocities occur close to the sites of rapid topographic changes (Fig. 7b), such as the lee side of the coastal road, while low velocities (<3 m s−1) are observed within the developed area under analysis. Since the hydrodynamic force is a combination of both inundation depth and flow velocity (Fig. 7c), the developed area behind the railway presents low force as well. Figure 8 shows the results of the tsunami fragility curves of Coquimbo for inundation depth, flow velocity and hydrodynamic force. The sample size was defined to be 40 structures; thus, 15 ranges were used. Figure 8a shows the histogram, while Fig.
8c shows the relationship between damage probability and inundation depth (upper panel), flow velocity (central panel) and hydrodynamic force (lower panel), with the solid line representing the best-fit curve of the plot. The fragility curves were estimated by means of regression analysis, as shown in Fig. 8b. The statistical parameters of the developed fragility functions are shown in Table 1. In Fig. 8 it is possible to observe that inundation depths lower than 1.5 m did not generate damage to the surveyed structures and the damage probability of the curve is less than 10 %. Moreover, the fragility curve shows that inundation depths higher than 4 m could result in a 100 % probability of severe damage to mixed structures in Coquimbo. With regard to the flow velocity, it is observed that most of the simulated data are in the range of 0 to 2.5 m s−1, with a damage probability of less than 40 %. In a similar manner, a hydrodynamic force lower than 2.5 kN m−1 results in a damage probability of less than 20 %. Since the 2015 tsunami had a moderate impact, with low inundation depths and flow velocities in developed areas, it becomes very important to assess the tsunami damage due to possible events taking place in the same rupture area as that of the 1922 earthquake, since large inundation depths were reported there (see Sect. 2).

Figure 8. Developing the tsunami fragility curve. (a) Histogram of the number of destroyed and not-destroyed structures in terms of the tsunami intensity measures within the inundation area. (b) Data plotted on normal probability paper and least-square fit. (c) Fragility function for building damage in terms of the tsunami intensity measures; the solid line is the best-fit curve of the plot (circles show the distribution of damage probability).

## 3.4 Comparison with existing fragility curves

This section compares the fragility curve obtained in Coquimbo with curves obtained in other places after recent events.
The statistical parameters of existing fragility curves are shown in Table 2. One curve is that of Okushiri, Japan, which was obtained for wooden structures after the 1993 tsunami event. The analysis included 523 houses and a range of approximately 50 structures (Suppasri et al., 2012a). In a similar manner, the fragility curve of Dichato, Chile, involved 915 mixed-material structures and a range of 50 structures after the 2010 Chile tsunami (Mas et al., 2012). A more comprehensive analysis was conducted in Banda Aceh, Indonesia, after the 2004 Indian Ocean tsunami (Koshimura et al., 2009b). This case involved 48 910 structures made of wood, timber and lightly reinforced concrete constructions, with a range of 1000 structures. The proposed curves were constructed for inundation depth, flow velocity and hydrodynamic force. After the 2009 Samoa event, Gokon et al. (2014) developed a fragility curve for mixed structures, which included wood, masonry and reinforced concrete, for the same three tsunami intensity measures as in the previously mentioned study. Similarly, the fragility curves of Thailand were developed for two provinces, namely, Phang Nga and Phuket, with 2508 and 1033 structures, respectively. In addition, all data were combined in order to develop a fragility curve for mixed-material structures and inundation depth (Suppasri et al., 2011). Figure 9a shows a comparison of the Coquimbo fragility curve with two-level damage curves of Dichato, Okushiri, Banda Aceh, American Samoa and Thailand. It is seen that Coquimbo experienced less damage than Dichato and Okushiri at inundation depths lower than 3 m. In fact, at an inundation depth of 2 m, Dichato and Okushiri have a 68 %–75 % probability of damage, while in Coquimbo the probability is only 20 %. The high probability of damage in Dichato and Okushiri could be due to the large number of structures made of wood and lightweight materials with little ability to withstand tsunami flows (Mas et al., 2012). 
Even though the building materials in Coquimbo are similar, it is observed in Fig. 7b that distance from the shore and the railway embankment decrease flow velocity and thus tsunami energy; therefore, the same inundation depth generates less damage to structures. In a similar manner, the fragility curve for mixed-material structures in Thailand shows a high probability of damage at an inundation depth of 2 m (∼50 %), but a 100 % probability of damage is reached at inundation depths higher than 8 m. In the case of Banda Aceh, the curve shows a low probability of damage (<20 %) at an inundation depth of 2 m, which is comparable to Coquimbo; however, the damage probability in Coquimbo increases rapidly as the inundation depth increases, reaching 100 % at an inundation depth of only 4 m, which could be a result of most of the houses having only one or two stories (see Fig. 2).

Figure 9. Tsunami fragility curves for damage probability developed for other locations and different damage levels. (a) Two levels of damage obtained for three different cities in Chile, Japan and Indonesia. (b) Six damage levels for wooden structures given by Suppasri et al. (2013). (c) Six damage levels for mixed-material structures by Suppasri et al. (2013). (d) Four damage levels for wooden houses given by Suppasri et al. (2012b). (e) Four damage levels for mixed-material structures given by Suppasri et al. (2012b).

In addition, it was observed in Banda Aceh that structures were quite vulnerable when flow velocity exceeded 2.5 m s−1, with a damage probability of 60 % and a 100 % probability of damage at velocities larger than 4 m s−1 (Koshimura et al., 2009b). These results are in good agreement with the Coquimbo fragility curve. Moreover, the topography of Banda Aceh is characterized by low land with an elevation of around 3 m, which is also similar to Coquimbo.
With regard to American Samoa, the curve shows a low probability of damage at inundation depths lower than 2 m and increases to up to 80 % when the inundation depth reaches 6 m. It is important to mention that the Samoa fragility curves were developed considering different types of structures, including wood, brick and reinforced concrete. In addition, the fragility curve as a function of flow velocity shows significant damage (∼50 %) at velocities of 2 m s−1, and only an 80 % probability of damage at velocities as high as 8 m s−1 (Gokon et al., 2014). Since all types of structures are analyzed in a single curve, it is believed that low velocities would easily cause damage to wooden structures, while damage to reinforced concrete structures would require higher inundation depths and flow velocities. The relatively high damage probability at low inundation depths could also be due to the ria-type coast of American Samoa (Gokon et al., 2014).

Table 2. Summary of statistical parameters and damage levels for empirical fragility curves (Mas et al., 2012; Suppasri et al., 2012b, 2013), including the current case of Coquimbo. μ and σ are statistical parameters for the normal distribution, while μ′ and σ′ are the same parameters for the lognormal distribution. R.C. indicates reinforced concrete structures.

Figure 9b and c show the comparison of the Coquimbo fragility curve with the curves given by Suppasri et al. (2013) for wooden and mixed-material structures in Japan, respectively. The study considered more than 250 000 damaged buildings surveyed after the 2011 Tohoku tsunami and made it possible to analyze different damage levels and building materials. In general, it is seen that wooden and mixed structures in Japan have similar behavior. If damage level 4 (complete destruction) is analyzed, the damage probability is higher than in Coquimbo at an inundation depth lower than 2 m.
Wooden and mixed structures in Japan present a relatively high probability of complete destruction (level 4), ranging from 50 % to 60 %, while in Coquimbo it is only 20 %. Another group of fragility curves for wooden and mixed structures (shown in Fig. 9d and e, respectively) was obtained from survey data of the 2011 Japan tsunami in the Sendai and Ishinomaki plains (Suppasri et al., 2012b). The curves show that structures located in flat areas were less impacted by the tsunami despite significant inundation depths, in contrast to what happened in areas with ria topography, such as the Sanriku coast (Suppasri et al., 2012a, 2013), and semi-closed bays such as Dichato (Mas et al., 2012). This behavior is in good agreement with the damage observed in the Coquimbo area, where the flat nature of the area and distance from the shore could decrease tsunami impact. Thus, based on the influence of inundation depth and flow velocity on tsunami damage, De Risi et al. (2017) proposed the development of vulnerability models related to specific topographic contexts, such as plain-type or ria-type coasts. They found that ria-type coasts experience a greater damage probability than plain-type coasts at the same inundation depth. It is noteworthy that the Coquimbo fragility curve for destruction or complete damage overlaps with the minor-damage-level curve for wood and mixed-material houses in flat areas in Japan (Fig. 9d and e). A possible explanation is that houses in Japan are relatively new and built according to strict construction standards (Suppasri et al., 2012b), in contrast to what was observed in Coquimbo, where old houses are found (see Fig. 2), although it could also be due to the local topographic features of Coquimbo. This finding suggests that both topography and structure quality should be considered in tsunami damage estimation.
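The probability-paper fitting used throughout Sect. 3 (Eqs. 3 and 4) reduces to a linear regression of the intensity measure against Φ⁻¹ of the binned damage probability. A self-contained sketch with synthetic bin data (the μ and σ values below are illustrative, not those of Table 1):

```python
from statistics import NormalDist

std = NormalDist()  # standard normal, provides Phi (cdf) and Phi^{-1} (inv_cdf)

# Synthetic binned data: median inundation depth per bin (m) and the
# fraction of destroyed structures in that bin, generated here from a
# hypothetical normal fragility model so the fit can be checked exactly.
depths = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
true_mu, true_sigma = 2.6, 0.9  # hypothetical parameters
probs = [NormalDist(true_mu, true_sigma).cdf(d) for d in depths]

# Probability-paper least squares (Eq. 4): x = sigma * Phi^{-1}(P) + mu,
# so sigma is the slope and mu the intercept of x regressed on z.
z = [std.inv_cdf(p) for p in probs]
n = len(z)
zbar, xbar = sum(z) / n, sum(depths) / n
sigma = (sum((zi - zbar) * (xi - xbar) for zi, xi in zip(z, depths))
         / sum((zi - zbar) ** 2 for zi in z))
mu = xbar - sigma * zbar

# Fragility function (Eq. 3): damage probability at a 3 m inundation depth.
p3 = std.cdf((3.0 - mu) / sigma)
print(round(mu, 2), round(sigma, 2), round(p3, 3))
```

Because the synthetic probabilities come from an exact normal CDF, the regression recovers μ and σ exactly; with real binned survey data the fit would only approximate them.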
4 Application of fragility curve to tsunami damage estimation

This section presents an example of the use of fragility curves to estimate tsunami damage through a deterministic tsunami scenario in Coquimbo. We first define a tsunami scenario, then we run the numerical simulation to obtain the inundation depth and, finally, we estimate the tsunami damage in Coquimbo. Since earthquake damage in the Coquimbo Region was limited in 2015 (Candia et al., 2017; Fernández et al., 2017), it is assumed that the damage to structures is due exclusively to the tsunami.

Figure 10. Upper panels show slip distributions along scenario source models. The gray rectangles outline each scenario source segment. The moment magnitude for each scenario source model is denoted in the top left of the corresponding panel. Lower panels show the inter-seismic coupling (ISC) model from Métois et al. (2016) (left panel), Global Centroid Moment Tensor (GCMT) solutions (center panel) and the inverted slip model from Okuwaki et al. (2016) (right panel), which were used to construct the scenario source models. The star denotes the epicenter of the 2015 Illapel earthquake determined by the National Seismological Center (CSN, for its initials in Spanish). The blue contours delimit the inverted slip distribution every 2.08 m for the 2015 Illapel earthquake (Okuwaki et al., 2016).

## 4.1 Tsunami source model

Based on Fig. 1, three possible segments can be defined, namely, the Copiapó–Coquimbo, Coquimbo–Illapel and Illapel–Constitución regions. However, events in the Illapel–Constitución region, including those of 1822 and 1906, have never generated a tsunami in Coquimbo (Soloviev and Go, 1975), and only the 1730 event, which ruptured the Coquimbo–Illapel segment, generated a tsunami in the area of interest (Cisternas et al., 2011); therefore, possible tsunamis generated in the Valparaiso segment were not considered in the present analysis.
In a similar manner, earthquakes on the Coquimbo–Illapel segment were not considered because the 2015 Illapel earthquake filled the seismic gap that had existed since the last major earthquake in 1943 or earlier events (Ye et al., 2016); thus, no major tsunamigenic earthquakes are expected there in the near future. Conversely, the northern segment has presented no relevant seismic activity since 1922, i.e., 95 years before 2017 (see Fig. 1); moreover, the previous significant event took place in 1819 (103 years before the 1922 event). Therefore, the Copiapó–Coquimbo segment is of particular interest regarding possible future earthquakes and tsunamis in Coquimbo. It is important to note that the small event in 1849 (magnitude 7.5, according to Lomnitz, 2004) generated a 5 m tsunami in Coquimbo. Despite the small earthquake magnitude and large tsunami run-up of the event, there is no scientific evidence that a tsunami earthquake occurred. In addition, the 1922 Atacama event had a complex source of three time-clustered shocks (Beck et al., 1998). Therefore, it seemed reasonable to separate the northern segment into two different seismic regions, with one segment covering Copiapó to Punta Choros (Fig. 10b) and the second segment from Punta Choros to Ovalle (Fig. 10a), which also coincides with the estimated rupture length of the 1849 event (see Fig. 1).

Figure 11. Results of tsunami numerical simulations for case 1 and the three scenarios, S1, S2 and S1+S2. Left column panels show vertical seafloor displacement. Central column panels show maximum inundation depth; the asterisk indicates the location of the tide gauge and the thin black lines represent the contour lines every 2 m. Right column panels show the tsunami waveform over an elapsed time of 4 h at the Coquimbo tide gauge G.

Either a probabilistic or deterministic approach could be used for the tsunami hazard assessment and damage estimation.
While the former takes into account many uncertainties related to generation, propagation and inundation (Cheung et al., 2011; Geist and Parsons, 2006; Heidarzadeh and Kijko, 2011; Horspool et al., 2014; Park and Cox, 2016), the latter uses credible worst-case scenarios based on historical events (Aránguiz et al., 2014; Mitsoudis et al., 2012; Wijetunge, 2012). However, the coupling coefficient could be used to assess the shape of possible future deterministic earthquakes (Métois et al., 2016; Pulido et al., 2015), since reasonable heterogeneous slip models can be predicted from the degree of interseismic locking (Calisto et al., 2016; Gonzalez-Carrasco et al., 2015). Thus, the slip distribution S at an arbitrary location ξ on the fault is given by Eq. (6):

$$S(\xi) = \int_{t_0}^{t_1} C(\xi, t)\, V(\xi)\, \mathrm{d}t \;-\; \sum_j \big( s_j(\xi) + p_j(\xi) \big), \qquad (6)$$

where C is the interseismic coupling, ranging from 0 to 1. The interseismic coupling model adopted in this study is from Métois et al. (2016); it is derived by inverting Global Positioning System (GPS) measurements along the Chilean margin (18–38° S) made by international teams since the early 1990s (see Métois et al., 2016, and references therein). It provides a reasonable estimate of the degree of locking between the Nazca and South American plates, indicating strong coupling along the scenario source regions (see Fig. 10d to f). V is the plate convergence rate at ξ, derived from the NNR-NUVEL-1A model (DeMets et al., 1994), and t0 and t1 delimit the interseismic period for integration.
sj is the slip of the jth small event (4.8 ≤ Mw ≤ 7.9) listed in the Global Centroid Moment Tensor (GCMT) catalog (http://www.globalcmt.org/CMTsearch.html, last access: 10 July 2018; see Fig. 10e), and pj is the post-seismic slip following sj. Each slip sj is calculated from the seismic moment reported by the GCMT and the empirical relationship between rupture area and moment magnitude introduced by Wells and Coppersmith (1994). The rigidity modulus for the calculation of the moment magnitude of each sj is computed with the layered, near-source structure adopted in the source study by Okuwaki et al. (2016). We eliminated the Mw=8.3 2015 Illapel earthquake from the GCMT list and instead considered its contribution to the scenario source models in Eq. (6) through the inverted slip model developed by Okuwaki et al. (2016) (Fig. 10). The slip motion of S is assumed to be pure thrust against the subducting plate motion. Note that C is constant in time and the post-seismic slip pj is not considered in the present analysis; thus, the scenario source models may slightly overestimate S. The variable slip distribution was obtained from the heterogeneous interseismic coupling C. The time interval for the integral in Eq. (6) is taken as 94 years (1922 to 2016). Each segment is subdivided into 10 km × 10 km subfault knots, giving 150 km × 160 km and 180 km × 160 km source areas for S1 and S2, respectively. While the magnitude of the event related to segment S1 is Mw=8.2, the magnitude of the S2 event is Mw=8.4. If both segments are considered together (S3 = S1 + S2), the total magnitude is Mw=8.5. The strike and dip angles for the scenario source geometry are assumed to be constant based on the subducting slab geometry of the Slab 1.0 model (Hayes et al., 2012): (strike, dip) = (2.7°, 15.0°) for S1 and (strike, dip) = (16.0°, 15.0°) for S2.
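The slip-deficit construction of Eq. (6) can be sketched numerically. The following is a minimal illustration only, not the authors' code: the coupling field, convergence rate, rigidity and grid values are placeholder assumptions that merely mimic the 10 km × 10 km discretization and 94-year integration window described above.

```python
import numpy as np

# Illustrative 15 x 16 grid of 10 km x 10 km subfaults (segment-S1-sized).
# All numbers below are placeholders, not the paper's data.
rng = np.random.default_rng(0)
C = rng.random((15, 16))            # interseismic coupling in [0, 1]
V = np.full((15, 16), 0.068)        # convergence rate, m/yr (~68 mm/yr order)
years = 94.0                        # interseismic window, 1922-2016
s_events = np.zeros((15, 16))       # catalogued co-seismic slip s_j (none here)

# Eq. (6) with C constant in time and post-seismic slip p_j neglected:
S = C * V * years - s_events        # slip deficit per subfault, m

# Scenario magnitude from the accumulated moment: M0 = mu * area * slip
mu = 40e9                           # rigidity, Pa (placeholder)
cell_area = 10e3 * 10e3             # m^2 per subfault
M0 = mu * cell_area * S.sum()       # total seismic moment, N m
Mw = 2.0 / 3.0 * (np.log10(M0) - 9.1)
print(round(Mw, 1))
```

With roughly half-coupled subfaults, the accumulated deficit corresponds to a low-Mw-8 scenario, i.e., the same order of magnitude as the S1 and S2 events above.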
The fault geometry and characteristic source parameters, as well as complete model parameters for each scenario source model, are available from the authors upon request.

Figure 12. Results of tsunami numerical simulation of the S1 event (Mw=8.2). (a) Inundation depth, (b) flow velocity, (c) hydrodynamic force and (d) increase in inundation height compared to the 2015 Coquimbo tsunami. (e) Increase in flow velocity compared to the 2015 Coquimbo tsunami. (f) Increase in hydrodynamic force compared to the 2015 Coquimbo tsunami.

## 4.2 Numerical simulation of proposed tsunami scenario

The computation covered an elapsed time of 6 h with output intervals of 1 min. Figure 11 shows the main results for the three different tsunami scenario combinations. The upper row shows the results for segment S1 (Mw=8.2), the middle row shows the results for segment S2 (Mw=8.4) and the lower row shows the results for the combined scenario of S1 and S2 (Mw=8.5). The left column shows the vertical displacement of the seafloor, the middle column shows the maximum inundation depth and the right column shows the tsunami waveform at the Coquimbo tide gauge over an elapsed time of 4 h (240 min). Segment S2 (Mw=8.4) generated lower inundation depths than segment S1 (Mw=8.2), which can be explained by the fact that the strike angle and the coastal morphology cause the tsunami to propagate toward the north rather than directly toward Coquimbo Bay. Meanwhile, the tsunami generated by segment S1, whose second wave is the largest, propagates directly toward Coquimbo Bay. The maximum inundation depths reached up to 5 m in developed areas and along the coastline. Moreover, it is interesting that the Mw=8.5 event, as a combination of S1 and S2 (lower row in Fig. 11), generated lower inundation depths than segment S1 alone.
This can be explained by the fact that the maximum tsunami amplitude of each individual event does not occur at the same time; thus, the segment S2 tsunami decreases the maximum amplitude of the segment S1 tsunami. Larger tsunami amplitudes could result from a time gap between the segment S1 and S2 events such that the maximum tsunami waves coincide. Nevertheless, this analysis is beyond the scope of the present paper.

## 4.3 Damage to structures

The previous section demonstrated that the combination of S1 and S2 rupturing at the same time generated lower inundation heights than the S1 event alone; therefore, the damage to structures is assessed for segment S1 only, i.e., a tsunami generated by a Mw=8.2 earthquake off the coast of Coquimbo that generates inundation heights lower than 5 m. Figure 12 shows the results for each tsunami intensity measure, namely inundation depth, flow velocity and hydrodynamic force (upper row panels). In addition, the lower row in Fig. 12 shows the difference between the maximum tsunami intensity measures given by the S1 scenario and those of the simulated 2015 tsunami event (Fig. 7). This figure allows areas with a greater increase in tsunami intensity measure, and therefore higher damage probability, to be identified. In order to determine a high or low probability of damage to a given structure, first, latitude and longitude coordinates are assigned to each structure within the inundation area, and the maximum inundation depths given by the tsunami numerical simulation at the location of each structure are exported to GIS. Second, the inundation depth database is divided into several ranges, with 40 samples in each range, and the mean value of each range is intersected with the fragility curve given in Fig. 8c in order to define the damage probability for each range.
For simplicity, and similar to previous studies (Fraser et al., 2014; Wiebe and Cox, 2014), we used only the fragility curve generated as a function of the inundation depth. Third, the damage probability given in the previous step is assumed to be equal to the percentage of structures with a high probability of damage within each range. To make this determination, the inundation depths for each range are arranged in descending order, and the structures outside of that percentage (those with the lowest inundation depths within the range) are assumed to have a low probability of damage.

Figure 13. (a) Tsunami inundation map and inundation depth on structures. (b) Tsunami inundation map and low and high probabilities of damage to the flooded structures.

Figure 13a shows the low-lying area of the city of Coquimbo and the computed inundation depth given by the numerical simulation of scenario S1. A total of 646 mixed-material structures were identified within the inundation area, and they are colored according to inundation depth level. Figure 13b shows the result of the damage estimation. It was found that 321 structures, i.e., 49.6 % of the flooded structures, have a high probability of damage, a figure much higher than the 20 % surveyed right after the 2015 tsunami. As expected, the structures behind the railway embankment and wetland would experience less damage than those located close to the shore. Due to the high probability of damage to houses located near the shore, it is recommended that any reconstruction plan or future tsunami mitigation measures consider the fact that high tsunami inundation depths (5–8 m) could be generated in this area.
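The range-based classification described above can be sketched as follows. This is a hedged illustration, not the study's implementation: the lognormal fragility parameters are hypothetical (tuned so that the damage probability at a 2 m depth is roughly 0.2, in the spirit of the Coquimbo curve described in the text), and the synthetic depth values stand in for the GIS export.

```python
import math
import random

def fragility(depth, mu=1.2, sigma=0.6):
    # Lognormal damage-probability curve; mu and sigma are hypothetical,
    # not the fitted Coquimbo parameters (chosen so P(2 m) ~ 0.2).
    if depth <= 0:
        return 0.0
    z = (math.log(depth) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def classify(depths, group=40):
    """Label each structure 'high' or 'low' damage probability."""
    order = sorted(range(len(depths)), key=lambda i: depths[i], reverse=True)
    labels = [None] * len(depths)
    for start in range(0, len(order), group):
        idx = order[start:start + group]          # one range of ~40 samples
        mean_depth = sum(depths[i] for i in idx) / len(idx)
        p = fragility(mean_depth)                 # intersect mean with curve
        n_high = round(p * len(idx))              # share labelled 'high'
        for rank, i in enumerate(idx):            # deepest first within range
            labels[i] = 'high' if rank < n_high else 'low'
    return labels

random.seed(1)
depths = [random.uniform(0.1, 5.0) for _ in range(646)]  # synthetic depths, m
labels = classify(depths)
print(labels.count('high'), 'of', len(labels), 'structures flagged high')
```

Within each range, the deepest structures receive the 'high' label and the remainder 'low', mirroring the descending-order assignment in the text.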
Experience after the 2011 Japan tsunami has demonstrated that comprehensive urban planning is key to avoiding future disasters: the best approach to decreasing tsunami risk is to integrate structural and nonstructural means of coastal protection with land-use management in a strategy with multiple lines of defense (Strusińska-Correia, 2017). In addition, the most important lessons from the 2011 Japan tsunami include methods to strengthen coastal defense structures, evacuation buildings and coastal forests (Suppasri et al., 2016). Thus, Coquimbo is an interesting case study, since the coastal road, wetland and railway partly fulfill the structural requirements of a multilayer tsunami countermeasure, and it would be necessary to implement more comprehensive nonstructural countermeasures in the future. In a local context, Khew et al. (2015) found that the tsunami countermeasures implemented in the Greater Concepción area after the 2010 Chile tsunami, such as hard infrastructure, contributed positively to the recovery of economic and social resilience, although new elevated housing decreased social resilience. Moreover, it is recommended that governmental and business structures be effectively decentralized so that local conditions are successfully incorporated into the design of hard infrastructure for tsunami mitigation (Khew et al., 2015). Finally, it was also found that tsunami mitigation measures implemented in Dichato after the 2010 Chile tsunami did not decrease tsunami risk, as some vulnerability variables (housing conditions, low household incomes and limited knowledge of tsunami events) remain at the same level (Martínez et al., 2017). Therefore, nonstructural mitigation measures should play an important role in effectively decreasing tsunami risk in the future.

# 5 Conclusions

Numerical simulations of the 2015 Chile tsunami proved to be in good agreement with field survey data in Coquimbo.
A Coquimbo fragility curve was developed with a two-level classification of structural damage, namely, not destroyed and destroyed. The Coquimbo fragility curve shows a low probability of damage, 20 %, at a relatively high inundation depth (2 m), in contrast to what was observed in another Chilean town, Dichato, where a 68 % probability of damage resulted from the same inundation depth. This result is in good agreement with fragility curves for the Sendai and Ishinomaki plains in Japan, where less damage was observed in areas of decreased tsunami energy. The fragility curve may be used to estimate possible future tsunami damage in the Coquimbo area and other places with similar topography and building materials. In Coquimbo, it was found that a magnitude Mw=8.2 earthquake off the coast of the city could generate a destructive tsunami with inundation depths of up to 5 m. The assessment of tsunami damage with the fragility curve demonstrated that ∼50 % of the assessed structures have a high probability of damage if reconstruction is carried out with the same types of structures, which is greater than the damage caused by the 2015 tsunami (20 %). Therefore, tsunami mitigation measures and the reconstruction plan should consider potential tsunami damage due to a future earthquake off the coast of Coquimbo. It is recommended that new land-use policies be implemented in order to regulate the types of structures being built in the inundation area. In addition, based on previous experience in Japan and Chile, new tsunami mitigation measures must consider a combination of both structural and nonstructural tsunami countermeasures in order to effectively decrease tsunami risk in Coquimbo in the future.

Data availability. Data sets are available upon request by contacting the corresponding author.

Author contributions. The idea was conceived by LU and RA.
The field survey of damaged structures was carried out by LU, while the field survey of tsunami inundation heights and run-ups was carried out by RA and LU. All numerical simulations were performed by RA. Tsunami fragility curves were developed by RA and LU, while RO and YY proposed the tsunami source model for the application of fragility curves. LU assessed damage to structures and RA prepared the first manuscript; all authors contributed to editing the final version of the article.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors would like to thank CONICYT (Chile) for its FONDAP 15110017 and FONDECYT 11140424 grants, as well as the Research and Innovation Department (Dirección de Investigación e Innovación) of the Universidad Católica Ssma. Concepción. Special thanks to those who contributed to the collection of field data: Enrique Muñoz, Evelyn Pedrero, Evans Aravena, and Diego Espinoza. Thanks to the Ministry of Housing for providing us with topography data. Finally, thanks to the two anonymous reviewers, who significantly helped us improve the paper.

Reviewed by: two anonymous referees

References

Aida, I.: Reliability of a tsunami source model derived from fault parameters, J. Phys. Earth, 26, 57–73, https://doi.org/10.4294/jpe1952.26.57, 1978. Aránguiz, R., Shibayama, T., and Yamazaki, Y.: Tsunamis from the Arica-Tocopilla source region and their effects on ports of Central Chile, Nat. Hazards, 71, 175–202, https://doi.org/10.1007/s11069-013-0906-5, 2014. Aránguiz, R., González, G., González, J., Catalán, P. A., Cienfuegos, R., Yagi, Y., Okuwaki, R., Urra, L., Contreras, K., Del Rio, I., and Rojas, C.: The 16 September 2015 Chile Tsunami from the Post-Tsunami Survey and Numerical Modeling Perspectives, Pure Appl. Geophys., 173, 333–348, https://doi.org/10.1007/s00024-015-1225-4, 2016.
Beck, S., Barrientos, S., Kausel, E., and Reyes, S.: Source characteristics of historic earthquakes along the central Chile subduction zone, J. S. Am. Earth Sci., 11, 115–129, 1998. Bricker, J. D., Gibson, S., Takagi, H., and Imamura, F.: On the Need for Larger Manning's Roughness Coefficients in Depth-Integrated Tsunami Inundation Models, Coast. Eng. J., 57, 1550005, https://doi.org/10.1142/S0578563415500059, 2015. Calisto, I., Miller, M., and Constanzo, I.: Comparison Between Tsunami Signals Generated by Different Source Models and the Observed Data of the Illapel 2015 Earthquake, Pure Appl. Geophys., 173, 1051–1061, https://doi.org/10.1007/s00024-016-1253-8, 2016. Candia, G., de Pascale, G. P., Montalva, G., and Ledezma, C.: Geotechnical Aspects of the 2015 Mw 8.3 Illapel Megathrust Earthquake Sequence in Chile, Earthq. Spectra, 33, 709–728, https://doi.org/10.1193/031716EQS043M, 2017. Carvajal, M., Cisternas, M., Gubler, A., Catalán, P. A., Winckler, P., and Wesson, R. L.: Reexamination of the magnitudes for the 1906 and 1922 Chilean earthquakes using Japanese tsunami amplitudes: Implications for source depth constraints, J. Geophys. Res.-Solid Ea., 122, 4–17, https://doi.org/10.1002/2016JB013269, 2017. Charvet, I., Suppasri, A., Kimura, H., Sugawara, D., and Imamura, F.: A multivariate generalized linear tsunami fragility model for Kesennuma City based on maximum flow depths, velocities and debris impact, with evaluation of predictive accuracy, Nat. Hazards, 79, 2073–2099, https://doi.org/10.1007/s11069-015-1947-8, 2015. Charvet, I., Macabuag, J., and Rossetto, T.: Estimating Tsunami-Induced Building Damage through Fragility Functions: Critical Review and Research Needs, Front. Built Environ., 3, 36, https://doi.org/10.3389/fbuil.2017.00036, 2017. Cheung, K. F., Wei, Y., Yamazaki, Y., and Yim, S. C. S.: Modeling of 500-year tsunamis for probabilistic design of coastal infrastructure in the Pacific Northwest, Coast.
Eng., 58, 970–985, https://doi.org/10.1016/j.coastaleng.2011.05.003, 2011. Cisternas, M., Gorigoitía, N., Torrejón, F., and Urbina, X.: Terremoto y tsunami de Chile central de 1730: ¿Un gigante o una serie de eventos menores?, in: XXXI Congreso de Ciencias del Mar, 16–19 August 2011, Viña del Mar, Chile, 2011. Contreras-López, M., Winckler, P., Sepúlveda, I., Andaur-Álvarez, A., Cortés-Molina, F., Guerrero, C. J., Mizobe, C. E., Igualt, F., Breuer, W., Beyá, J. F., Vergara, H., and Figueroa-Sterquel, R.: Field Survey of the 2015 Chile Tsunami with Emphasis on Coastal Wetland and Conservation Areas, Pure Appl. Geophys., 173, 349–367, https://doi.org/10.1007/s00024-015-1235-2, 2016. DeMets, C., Gordon, R. G., Argus, D. F., and Stein, S.: Effect of recent revisions to the geomagnetic reversal time scale on estimates of current plate motions, Geophys. Res. Lett., 21, 2191–2194, https://doi.org/10.1029/94GL02118, 1994. Dengler, L., Borrero, J., Gelfenbaum, G., Jaffe, B., Okal, E., Ortiz, M., Titov, V., Anima, R., Anticona, L. B., Araya, S., Gomer, B., Gómez, J., Koshimura, S., Laos, G., Ocala, L., Olcese, D., Peters, R., Riega, P. C., Rubin, D., Swensson, M., and Vegas, F.: Tsunami, Earthq. Spectra, 19, 115–144, https://doi.org/10.1193/1.1737247, 2003. De Risi, R., Goda, K., Yasuda, T., and Mori, N.: Is flow velocity important in tsunami empirical fragility modeling?, Earth-Sci. Rev., 166, 64–82, https://doi.org/10.1016/J.EARSCIREV.2016.12.015, 2017. Dominey-Howes, D., Dengler, L., Dunbar, P., Kong, L., Fritz, H., Imamura, F., and Borrero, J.: International tsunami survey team (ITST) post-tsunami survey field guide, UNESCO-IOC, Paris, 2012. Fernández, J., Pastén, C., Ruiz, S., and Leyton, F.: Estudio de efectos de sitio en la Región de Coquimbo durante el terremoto de Illapel Mw 8.3 de 2015, Obras y Proyectos, 21, 20–28, 2017. Fraser, S. A., Power, W. L., Wang, X., Wallace, L. M., Mueller, C., and Johnston, D.
M.: Tsunami inundation in Napier, New Zealand, due to local earthquake sources, Nat. Hazards, 70, 415–445, https://doi.org/10.1007/s11069-013-0820-x, 2014. Fuentes, M. A., Riquelme, S., Hayes, G. P., Medina, M., Melgar, D., Vargas, G., González, J., and Villalobos, A.: A Study of the 2015 Mw 8.3 Illapel Earthquake and Tsunami: Numerical and Analytical Approaches, Pure Appl. Geophys., 173, 1847–1858, https://doi.org/10.1007/s00024-016-1305-0, 2016. Geist, E. L. and Parsons, T.: Probabilistic analysis of tsunami hazards, Nat. Hazards, 37, 277–314, 2006. Gokon, H., Koshimura, S., Imai, K., Matsuoka, M., Namegaya, Y., and Nishimura, Y.: Developing fragility functions for the areas affected by the 2009 Samoa earthquake and tsunami, Nat. Hazards Earth Syst. Sci., 14, 3231–3241, https://doi.org/10.5194/nhess-14-3231-2014, 2014. Gonzalez-Carrasco, J., Aránguiz, R., Dominguez, J. C., and Urra, L.: Assessment of interseismic coupling models to estimate inundation and runup, Seismol. Res. Lett., 86, 665–666, 2015. Hayes, G. P., Wald, D. J., and Johnson, R. L.: Slab1.0: A three-dimensional model of global subduction zone geometries, J. Geophys. Res.-Solid Ea., 117, B01302, https://doi.org/10.1029/2011JB008524, 2012. Heidarzadeh, M. and Kijko, A.: A probabilistic tsunami hazard assessment for the Makran subduction zone at the northwestern Indian Ocean, Nat. Hazards, 56, 577–593, https://doi.org/10.1007/s11069-010-9574-x, 2011. Horspool, N., Pranantyo, I., Griffin, J., Latief, H., Natawidjaja, D. H., Kongko, W., Cipta, A., Bustaman, B., Anugrah, S. D., and Thio, H. K.: A probabilistic tsunami hazard assessment for Indonesia, Nat. Hazards Earth Syst. Sci., 14, 3105–3122, https://doi.org/10.5194/nhess-14-3105-2014, 2014. Khew, Y. T. J., Jarzebski, M.
P., Dyah, F., San Carlos, R., Gu, J., Esteban, M., Aránguiz, R., and Akiyama, T.: Assessment of social perception on the contribution of hard-infrastructure for tsunami mitigation to coastal community resilience after the 2010 tsunami: Greater Concepcion area, Chile, Int. J. Disast. Risk Reduc., 13, 324–333, https://doi.org/10.1016/j.ijdrr.2015.07.013, 2015. Koshimura, S., Namegaya, Y., and Yanagisawa, H.: Tsunami fragility: A new measure to identify tsunami damage, J. Disast. Res., 4, 479–488, 2009a. Koshimura, S., Oie, T., Yanagisawa, H., and Imamura, F.: Developing Fragility Functions for Tsunami Damage Estimation Using Numerical Model and Post-Tsunami Data From Banda Aceh, Indonesia, Coast. Eng. J., 51, 243–273, https://doi.org/10.1142/S0578563409002004, 2009b. Kotani, M., Imamura, F., and Shuto, N.: Tsunami runup simulation and damage estimation by using geographical information system, Proc. Coast. Eng. JSCE, 45, 356–360, 1998. Li, L., Lay, T., Cheung, K. F., and Ye, L.: Joint modeling of teleseismic and tsunami wave observations to constrain the 16 September 2015 Illapel, Chile Mw 8.3 earthquake rupture process, Geophys. Res. Lett., 43, 4303–4312, https://doi.org/10.1002/2016GL068674, 2016. Lomnitz, C.: Major Earthquakes of Chile: A Historical Survey, 1535–1960, Seismol. Res. Lett., 75, 368–378, https://doi.org/10.1785/gssrl.75.3.368, 2004. Macabuag, J., Rossetto, T., Ioannou, I., Suppasri, A., Sugawara, D., Adriano, B., Imamura, F., Eames, I., and Koshimura, S.: A proposed methodology for deriving tsunami fragility functions for buildings using optimum intensity measures, Nat. Hazards, 84, 1257–1285, https://doi.org/10.1007/s11069-016-2485-8, 2016. Martínez, C., Rojas, O., Villagra, P., Aránguiz, R., and Sáez-Carrillo, K.: Risk factors and perceived restoration in a town destroyed by the 2010 Chile tsunami, Nat. Hazards Earth Syst. Sci., 17, 721–734, https://doi.org/10.5194/nhess-17-721-2017, 2017.
Mas, E., Koshimura, S., Suppasri, A., Matsuoka, M., Matsuyama, M., Yoshii, T., Jimenez, C., Yamazaki, F., and Imamura, F.: Developing Tsunami fragility curves using remote sensing and survey data of the 2010 Chilean Tsunami in Dichato, Nat. Hazards Earth Syst. Sci., 12, 2689–2697, https://doi.org/10.5194/nhess-12-2689-2012, 2012. Melgar, D., Fan, W., Riquelme, S., Geng, J., Liang, C., Fuentes, M., Vargas, G., Allen, R. M., Shearer P. M., and Fielding, E. J.: Slip segmentation and slow rupture to the trench during the 2015, Mw8.3 Illapel, Chile earthquake, Geophys. Res. Lett., 43, 961–966, https://doi.org/10.1002/2015GL067369, 2016. Métois, M., Vigny, C., and Socquet, A.: Interseismic Coupling, Megathrust Earthquakes and Seismic Swarms Along the Chilean Subduction Zone (38–18 S), Pure Appl. Geophys., 173, 1431–1449, https://doi.org/10.1007/s00024-016-1280-5, 2016. Mitsoudis, D. A., Flouri, E. T., Chrysoulakis, N., Kamarianakis, Y., Okal, E. A., and Synolakis, C. E.: Tsunami hazard in the southeast Aegean Sea, Coast. Eng., 60, 136–148, https://doi.org/10.1016/j.coastaleng.2011.09.004, 2012. Nandasena, N. A. K., Sasaki, Y., and Tanaka, N.: Modeling field observations of the 2011 Great East Japan tsunami: Efficacy of artificial and natural structures on tsunami mitigation, Coast. Eng., 67, 1–13, https://doi.org/10.1016/j.coastaleng.2012.03.009, 2012. Nishenko, S. P.: Seismic potential for large and great interplate earthquakes along the Chilean and Southern Peruvian Margins of South America: A quantitative reappraisal, J. Geophys. Res., 90, 3589–3615, https://doi.org/10.1029/JB090iB05p03589, 1985. Nistor, I., Palermo, D., Nouri, Y., Murty, T., and Saatcioglu, M.: Tsunami-Induced Forces on Structures, in: Handbook of Coastal and Ocean Engineering, edited by: Kim, C. Y., World Scientific, Singapore, 261–286, https://doi.org/10.1142/9789812819307_0011, 2009. Okada, Y.: Surface deformation due to shear and tensile faults in a half space, Bull. Seismol. Soc. 
Am., 75, 1135–1154, 1985. Okuwaki, R., Yagi, Y., Aránguiz, R., González, J., and González, G.: Rupture Process During the 2015 Illapel, Chile Earthquake: Zigzag-Along-Dip Rupture Episodes, Pure Appl. Geophys., 173, 1011–1020, https://doi.org/10.1007/s00024-016-1271-6, 2016. Park, H. and Cox, D. T.: Probabilistic assessment of near-field tsunami hazards: Inundation depth, velocity, momentum flux, arrival time, and duration applied to Seaside, Oregon, Coast. Eng., 117, 79–96, https://doi.org/10.1016/j.coastaleng.2016.07.011, 2016. Park, H., Cox, D. T., and Barbosa, A. R.: Comparison of inundation depth and momentum flux based fragilities for probabilistic tsunami damage assessment and uncertainty analysis, Coast. Eng., 122, 10–26, https://doi.org/10.1016/j.coastaleng.2017.01.008, 2017. Pulido, N., Aguilar, Z., Tavera, H., Chlieh, M., Calderón, D., Sekiguchi, T., Nakai, S., and Yamazaki, F.: Scenario Source Models and Strong Ground Motion for Future Mega-earthquakes: Application to Lima, Central Peru, Bull. Seismol. Soc. Am., 105, 368–386, https://doi.org/10.1785/0120140098, 2015. Ruiz, S., Klein, E., DelCampo, F., Rivera, E., Poli, P., Metois, M., Vigny, C., Baez, J. C., Vargas, G., Leyton, F., Madariaga, R., and Fleitout, L.: The Seismic Sequence of the 16 September 2015 Mw 8.3 Illapel, Chile, Earthquake, Seismol. Res. Lett., 87, 789–799, https://doi.org/10.1785/0220150281, 2016. Shimozono, T. and Sato, S.: Coastal vulnerability analysis during tsunami-induced levee overflow and breaching by a high-resolution flood model, Coast. Eng., 107, 116–126, https://doi.org/10.1016/j.coastaleng.2015.10.007, 2016. Shrivastava, M. N., González, G., Moreno, M., Chlieh, M., Salazar, P., Reddy, C. D., Báez, J. C., Yañez, G., González, J., and de la Llera, J. C.: Coseismic slip and afterslip of the 2015 Mw 8.3 Illapel (Chile) earthquake determined from continuous GPS data, Geophys. Res. Lett., 43, 10710–10719, https://doi.org/10.1002/2016GL070684, 2016. Soloviev, S. L. and Go, C.
N.: A Catalogue of Tsunamis on the Eastern Shore of the Pacific Ocean, Nauka Publishing House, Moscow, 1975. Song, J., De Risi, R., and Goda, K.: Influence of Flow Velocity on Tsunami Loss Estimation, Geosciences, 7, 114, https://doi.org/10.3390/geosciences7040114, 2017. Strusińska-Correia, A.: Tsunami mitigation in Japan after the 2011 Tōhoku Tsunami, Int. J. Disast. Risk Reduc., 22, 397–411, https://doi.org/10.1016/J.IJDRR.2017.02.001, 2017. Suppasri, A., Koshimura, S., and Imamura, F.: Developing tsunami fragility curves based on the satellite remote sensing and the numerical modeling of the 2004 Indian Ocean tsunami in Thailand, Nat. Hazards Earth Syst. Sci., 11, 173–189, https://doi.org/10.5194/nhess-11-173-2011, 2011. Suppasri, A., Koshimura, S., Matsuoka, M., Gokon, H., and Kamthonkiat, D.: Application of Remote Sensing for Tsunami Disaster, in: Remote Sensing of Planet Earth, InTech, https://doi.org/10.5772/2291, 2012a. Suppasri, A., Mas, E., Koshimura, S., Imai, K., Harada, K., and Imamura, F.: Developing Tsunami Fragility Curves From the Surveyed Data of the 2011 Great East Japan Tsunami in Sendai and Ishinomaki Plains, Coast. Eng. J., 54, 1–16, https://doi.org/10.1142/S0578563412500088, 2012b. Suppasri, A., Mas, E., Charvet, I., Gunasekera, R., Imai, K., Fukutani, Y., Abe, Y., and Imamura, F.: Building damage characteristics based on surveyed data and fragility curves of the 2011 Great East Japan tsunami, Nat. Hazards, 66, 319–341, https://doi.org/10.1007/s11069-012-0487-8, 2013. Suppasri, A., Latcharote, P., Bricker, J. D., Leelawat, N., Hayashi, A., Yamashita, K., Makinoshima, F., Roeber, V., and Imamura, F.: Improvement of Tsunami Countermeasures Based on Lessons from The 2011 Great East Japan Earthquake and Tsunami – Situation After Five Years, Coast. Eng. J., 58, 1640011, https://doi.org/10.1142/S0578563416400118, 2016. Synolakis, C. E. and Okal, E. 
A.: 1992–2002: Perspective on a Decade of Post-Tsunami Surveys, in: Tsunamis: Case Studies and Recent Developments, edited by: Satake, K., Springer Netherlands, Dordrecht, 1–29, https://doi.org/10.1007/1-4020-3331-1_1, 2005. Wei, Z., Dalrymple, R. A., Hérault, A., Bilotta, G., Rustico, E., and Yeh, H.: SPH modeling of dynamic impact of tsunami bore on bridge piers, Coast. Eng., 104, 26–42, https://doi.org/10.1016/j.coastaleng.2015.06.008, 2015. Wells, D. L. and Coppersmith, K. J.: New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement, Bull. Seismol. Soc. Am., 84, 974–1002, 1994. Wiebe, D. M. and Cox, D. T.: Application of fragility curves to estimate building damage and economic loss at a community scale: a case study of Seaside, Oregon, Nat. Hazards, 71, 2043–2061, https://doi.org/10.1007/s11069-013-0995-1, 2014. Wijetunge, J. J.: Nearshore tsunami amplitudes off Sri Lanka due to probable worst-case seismic scenarios in the Indian Ocean, Coast. Eng., 64, 47–56, https://doi.org/10.1016/j.coastaleng.2012.02.005, 2012. Yamazaki, Y., Kowalik, Z., and Cheung, K. F.: Depth-integrated, non-hydrostatic model for wave breaking and run-up, Int. J. Numer. Meth. Fluids, 61, 473–497, https://doi.org/10.1002/fld.1952, 2009. Yamazaki, Y., Cheung, K. F., and Kowalik, Z.: Depth-integrated, non-hydrostatic model with grid nesting for tsunami generation, propagation, and run-up, Int. J. Numer. Meth. Fluids, 67, 2081–2107, https://doi.org/10.1002/fld.2485, 2011. Ye, L., Lay, T., Kanamori, H., and Koper, K. D.: Rapidly Estimated Seismic Source Parameters for the 16 September 2015 Illapel, Chile Mw 8.3 Earthquake, Pure Appl. Geophys., 173, 321–332, https://doi.org/10.1007/s00024-015-1202-y, 2016.
## Variation of Parameters

For a second-order Ordinary Differential Equation,

$$y'' + p(x)\,y' + q(x)\,y = g(x). \qquad (1)$$

Assume that linearly independent solutions $y_1(x)$ and $y_2(x)$ of the homogeneous equation are known. Find $v_1(x)$ and $v_2(x)$ such that

$$y^* = v_1 y_1 + v_2 y_2 \qquad (2)$$

$$(y^*)' = (v_1' y_1 + v_2' y_2) + (v_1 y_1' + v_2 y_2'). \qquad (3)$$

Now, impose the additional condition that

$$v_1' y_1 + v_2' y_2 = 0 \qquad (4)$$

so that

$$(y^*)' = v_1 y_1' + v_2 y_2' \qquad (5)$$

$$(y^*)'' = v_1' y_1' + v_2' y_2' + v_1 y_1'' + v_2 y_2''. \qquad (6)$$

Plug $y^*$, $(y^*)'$, and $(y^*)''$ back into the original equation to obtain

$$v_1 (y_1'' + p y_1' + q y_1) + v_2 (y_2'' + p y_2' + q y_2) + v_1' y_1' + v_2' y_2' = g, \qquad (7)$$

which, since $y_1$ and $y_2$ solve the homogeneous equation, reduces to

$$v_1' y_1' + v_2' y_2' = g. \qquad (8)$$

Therefore, solving the linear system (4) and (8) with Wronskian $W = y_1 y_2' - y_2 y_1'$,

$$v_1 = -\int \frac{y_2\, g}{W}\, dx \qquad (9)$$

$$v_2 = \int \frac{y_1\, g}{W}\, dx. \qquad (10)$$

Generalizing to an $n$th degree ODE, let $y_1$, ..., $y_n$ be the solutions to the homogeneous ODE and let $v_1$, ..., $v_n$ be chosen such that

$$\sum_{i=1}^n y_i^{(k)} v_i' = 0 \quad (k = 0, \ldots, n-2), \qquad \sum_{i=1}^n y_i^{(n-1)} v_i' = g. \qquad (11)$$

Then the particular solution is

$$y^* = \sum_{i=1}^n v_i(x)\, y_i(x). \qquad (12)$$

© 1996-9 Eric W. Weisstein 1999-05-26
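The variation-of-parameters integrals (9) and (10) can be checked numerically. The sketch below (standard-library Python; the Simpson integrator and the test equation $y'' + y = \sec x$ are illustrative choices, not part of the original page) compares the quadrature-based particular solution with the closed form $y^* = \cos x \ln(\cos x) + x \sin x$ that these integrals yield.

```python
import math

# Homogeneous solutions of y'' + y = 0 and forcing g(x) = sec(x)
y1, y2 = math.cos, math.sin
dy1, dy2 = lambda x: -math.sin(x), math.cos
g = lambda x: 1.0 / math.cos(x)

def wronskian(x):
    return y1(x) * dy2(x) - y2(x) * dy1(x)   # identically 1 here

def integral(f, a, b, n=2000):
    # Composite Simpson rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def particular(x, a=0.0):
    v1 = integral(lambda t: -y2(t) * g(t) / wronskian(t), a, x)   # Eq. (9)
    v2 = integral(lambda t:  y1(t) * g(t) / wronskian(t), a, x)   # Eq. (10)
    return v1 * y1(x) + v2 * y2(x)

# Closed form from (9)-(10): v1 = ln(cos x), v2 = x
x = 0.7
exact = math.cos(x) * math.log(math.cos(x)) + x * math.sin(x)
print(abs(particular(x) - exact) < 1e-8)   # True
```

The two agree to quadrature accuracy, confirming that conditions (4) and (8) produce a genuine particular solution.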
# Diameter of finite rational matrix groups

Suppose $G$ is a finite subgroup of $\mathrm{GL}(n,\mathbb{Q})$. For a set $\mathcal{M} \subseteq G$ that generates $G$, define the $\mathcal{M}$-diameter $\mathit{diam}(G, \mathcal{M})$ of $G$ to be the smallest $k \in \mathbb{N}$ such that $\bigcup_{i=0}^k \mathcal{M}^i = G$, i.e., every matrix in $G$ can be written as a product of matrices from $\mathcal{M}$, with product length at most $k$. Further, define the diameter $\mathit{diam}(G)$ of $G$ as the maximum $\mathcal{M}$-diameter of $G$, over all sets $\mathcal{M}$ generating $G$. Define the function $d : \mathbb{N} \to \mathbb{N}$ such that $d(n)$ is the maximum $\mathit{diam}(G)$, over all finite subgroups $G$ of $\mathrm{GL}(n,\mathbb{Q})$. Equivalently, $d(n)$ can be formally defined on one line by $$d(n) \ := \ \max \left\{\min\left\{k \in \mathbb{N} : \bigcup_{i=0}^k \mathcal{M}^i = \langle\mathcal{M}\rangle\right\} : \langle\mathcal{M}\rangle \le \mathrm{GL}(n,\mathbb{Q}) \text{ is finite}\right\}\,.$$ I am interested in lower and upper bounds on $d(n)$. It is known from [S. Friedland. The maximal orders of finite subgroups in GL$_n$($\mathbb{Q}$). Proceedings of the American Mathematical Society, 125(12):3519-3526, 1997] that for large $n$ the maximal order of a finite subgroup of $\mathrm{GL}(n,\mathbb{Q})$ is $2^n n!$. Hence, $d(n) \le 2^{C n \log n}$ for some $C>0$. On the other hand, denoting by $p(n)$ the maximal order of a permutation on $n$ elements, we have $\lim_{n \to \infty} (\ln p(n))/\sqrt{n \ln n} = 1$; see, e.g., [W. Miller. The maximum order of an element of a finite symmetric group. American Mathematical Monthly, 94(6):497-506, 1987]. Hence, $d(n) \ge 2^{c \sqrt{n \log n}}$ for some $c>0$. Are there better bounds on $d(n)$?
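For intuition at toy scale, $\mathit{diam}(G)$ can be brute-forced by a BFS over every generating set. A sketch (exponential in $|G|$, so only for tiny groups) for the dihedral group of order 8 inside $\mathrm{GL}(2,\mathbb{Q})$, where the maximum diameter 4 is attained by a pair of reflections:

```python
from itertools import combinations

def matmul(A, B):
    """2x2 integer matrix product, matrices stored as tuples of tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
R = ((0, -1), (1, 0))    # rotation by 90 degrees
S = ((1, 0), (0, -1))    # reflection

# Close {R, S} under multiplication: the dihedral group of order 8.
G, frontier = {I}, {R, S}
while frontier:
    new = {matmul(a, b) for a in G | frontier for b in G | frontier} - G - frontier
    G |= frontier
    frontier = new

def gen_diameter(gens):
    """Smallest k with every element a product of <= k generators,
    or None if gens do not generate all of G."""
    reached, layer, k = {I}, {I}, 0
    while layer:
        layer = {matmul(a, g) for a in layer for g in gens} - reached
        if layer:
            reached |= layer
            k += 1
    return k if reached == G else None

elems = sorted(G)
diam = max(d for r in range(1, len(elems) + 1)
             for c in combinations(elems, r)
             if (d := gen_diameter(set(c))) is not None)
```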
http://physics.stackexchange.com/questions/1801/why-space-expansion-affects-matter
# Why does space expansion affect matter?

If space itself is expanding, then why would it have any effect on matter (separating distant galaxies)?

• Space is "nothing", and if "nothing" becomes bigger "nothing" it's still a "nothing" that shouldn't interact with matter in any way (it doesn't have mass, energy, etc).
• Gravity doesn't have a cutoff distance afaik, so even the most distant galaxies should be attracted to each other. The gravitational force would be very, very tiny, but it would still dominate the "nothing" of space expansion.
• Let's take inertia from the Big Bang into account. Inertia would be the primary force that moves galaxies away, compared to their tiny gravity and the even tinier, if any, force of our "nothing" that is still expanding in between. Wouldn't expansion decelerate if driven mostly by inertia?

- Inertia would be the primary force. I can tell you know what you're talking about, and you don't mean Newtonian Force in that sentence, but I suggest you change force to something like factor because new students really tend to think inertia is actually a force. –  Malabarba Dec 21 '10 at 13:58

## 3 Answers

• Space actually has energy, vacuum energy. It has been shown by various experiments and can be explained by Quantum Mechanics. So more space means more energy. Although it probably can't be used to do work, it does act to increase expansion. Check out the wikipedia page for more info.
• Although galaxies are massive, they are far away and thus the resulting acceleration towards each other is weak. If the space between galaxies is expanding at a faster rate than their mutual attraction can overcome, galaxies will move apart. Imagine a football player running from one endzone toward the other as fast as he can. However, the distance between the endzones doubles every 4 seconds. The endzones are not moving; the space between them is growing. There is no way the player could hope to reach his goal. In a short time he would not be able to see the other end.
The scary thing is this not only goes for the field but for everything. The player himself would be ripped apart as his own body parts moved farther and farther apart. If space were expanding fast enough, the attractive forces keeping black holes or even atoms together would not be enough. This is what is commonly referred to as the Big Rip. Luckily the expansion is slow enough that only "distant" galaxies are moving away from each other. Gravitational forces in a solar system or a galaxy are more than enough to withstand expansion. The larger the scale of the system, the bigger the role expansion plays.

• When we talk about expansion we are not talking about objects moving away because of their great velocities against a static backdrop. What you're describing is like marbles on a grid. The marbles get farther away due to their velocities relative to the grid pointing in opposite directions. Instead, expansion is like the grid growing in scale. This affects the distances between the marbles independently of their velocities, which is why we observe all far away objects as moving away from us. If it weren't for expansion we would expect a more random distribution. It is only at smaller scales, where expansion is less of a factor, that we can see objects moving toward us, such as the Andromeda Galaxy. We are not in any sense moving away from the center of the universe. The idea of the Big Bang and Inflation is that not only everything (matter and energy) but everywhere was contained in the Big Bang.

- In your last bullet point, I think you should specify that by "velocity" (of the marbles) you are referring to movement relative to the grid. (One could also define velocity as being the change in relative position, and in that sense the marbles would have a velocity apart from each other even if they are not moving relative to the grid) –  David Z Dec 21 '10 at 4:23

@Zaslavsky How would you suggest I rephrase the statement to make it more apparent?
I would like to keep it as short as possible, but I see how it can be misleading. I tried to improve it. I think it could be phrased better. –  David Dec 21 '10 at 15:21

Well, the edit you made seems fine. I guess you could also have said "...velocities through space itself" or something like that. I can't think of a particularly great way to phrase it off the top of my head. –  David Z Dec 21 '10 at 18:22

## Space is "nothing"

Not really. General relativity tells us that space-time is an actual living entity that responds to what matter is doing, and matter responds to the way space-time is curved. Or as John Wheeler put it: "matter tells Spacetime how to curve, and Spacetime tells matter how to move." For an introduction to general relativity, you can start at this wikipedia article.

## Gravity doesn't have a cutoff distance

You are talking about just classical Newtonian gravity. But this isn't how gravity works on big scales. On big scales gravitation is just an appearance caused by the fact that space-time is curved. If it is curved in a certain way then the appearance is that the objects attract each other. Like this:

But on huge scales these local, attractive deformations are completely negligible. When you take the whole space-time into account, it is curved differently (more on this in the last section). And the important fact is that the space-time expands (like when you blow into a balloon). Because it expands, the distances between any two points are getting bigger and it seems like everything is getting away from you. So gravitation is actually a repulsive force on big scales (or at least, it can be; again see the last section for a clarification).

## Wouldn't the expansion decelerate?

There are many solutions of Einstein's equations. Some of them are stationary, some of them are expanding (either at an accelerated rate or a decelerated rate), some of them revert from expansion to retraction.
Now, which of these solutions is correct depends on the precise conditions at (or more precisely shortly after) the big bang. An important part of the present theoretical understanding of the space-time expansion is that the cosmological constant is non-zero, and this is what accelerates the expansion. The complete picture of the matter content of our universe and its relation to the expansion of the universe is called Lambda-CDM (Lambda is an alias for the cosmological constant and CDM is cold dark matter, which makes up most of the matter in the universe). Now, the cosmological constant is also called dark energy. But currently not much is known about these matters, and major conceptual problems in modern theoretical physics and cosmology have to do with the nature of the cosmological constant.

- J. Wheeler's phrase: 'matter tells spacetime how to curve, and spacetime tells matter how to move' implies that matter tells itself how to move - now what should we make thereof? –  Gerard Dec 11 '10 at 23:31

@Gerard: what about it? It should be completely natural. Just take Newtonian gravity: one piece of matter (Earth) tells another piece of matter (Moon) how to move. GR just adds a mediator of that action. –  Marek Dec 11 '10 at 23:36

Of course, nothing new. When GR introduces a mediator, maybe it can be compared to the 'ether' concept. That was a mediator too, and after discarding it in SR a new mediator was introduced in GR? –  Gerard Dec 13 '10 at 14:05

@Gerard: yes, in a sense you could think of the space-time as a kind of 'ether'. Although it's much more sophisticated than the naive concept of ether people proposed before SR. –  Marek Dec 13 '10 at 15:19

I actually think you've been misled.
Usually (in my experience) when people say that space is expanding, what they mean is that the universe is expanding - but that just means that the objects in the universe are getting further apart (or, in the case of a continuous mass/energy distribution, that the density is decreasing with time). This is what the cosmological scale factor $a(\tau)$ is for: it describes the change in the distance between two objects (like galaxies) whose motion is subject to large-scale interactions only. $$r(t_1) = \frac{a(t_1)}{a(t_2)}r(t_2)$$ Now, because $a(\tau)$ is part of the metric, which specifies the distortion (or "curvature") of spacetime, it's easy to think that this scale factor would characterize the expansion of space itself. But I think that's the wrong interpretation, or at least a confusing interpretation. The scale factor really just relates to measurable distances between objects. So instead of trying to figure out what it means for space to expand, just think about things getting further apart. Moving on, you're right to say that, if things behave the way we would intuitively expect them to, the expansion of the universe should be slowing down due to gravitational attraction. But the best experimental observations that I am aware of show that the opposite is true: the expansion is speeding up. Unless you're prepared to argue that the experiments were performed or interpreted incorrectly, that means that something nonintuitive must be going on. There's still a lot of debate about what exactly could be making the expansion of the universe speed up. Whatever it is, cosmologists are calling it dark energy (back in Einstein's day they called it the "cosmological constant"), but nobody has a satisfactory explanation of why this dark energy exists, or what its properties might be, other than the fact that it makes the universe's expansion speed up. This is one of the biggest open questions in modern cosmology. 
- So you mean objects are just flying apart under an unknown force in a static space, not because space itself is expanding and pulling objects apart? I was always under the impression that the fabric of space is being created, so objects move apart because more "emptiness" is being created between objects, while the objects don't actually move. –  serg Dec 10 '10 at 23:20

@serg: It's just as you say. "Space is being created", or rather, "space-time expands", and the fact that things move apart is just an illusion. So I don't agree with David that it's wrong to think about expansion of space-time in terms of $a(\tau)$. According to GR that is precisely what happens. E.g. if our universe were a 3-sphere then you could (in principle) actually measure its diameter along a main circle and this would indeed be increasing with time. –  Marek Dec 10 '10 at 23:40

@Marek: I think you misinterpreted my answer. I wasn't saying that it's wrong to think about expansion of spacetime in terms of $a(\tau)$, but rather that it's misleading to think about $a(\tau)$ as characterizing space itself. I simply meant that it's better to associate it with measurable distances, not some abstract notion of "space." If the universe were a 3-sphere, its circumference would be one of those measurable distances; there's certainly no reason you couldn't associate it with $a(\tau)$. –  David Z Dec 11 '10 at 0:23

Describing these things in terms of words is tricky. Take the flat Robertson-Walker metric, and make the coordinate transformation $R=a(\tau)r$. Then the metric becomes $ds^{2}=-\left(1-\left(\frac{\dot a R}{a}\right)^{2}\right)d\tau^{2} + 2\,d\tau\, dR \left(\frac{\dot a R}{a}\right)+dR^{2}+R^{2}d\Omega^{2}$. Now, it literally appears that there is a frame dragging effect (the $g_{\tau R}$ term) describing the expansion of space, and that distances are fixed. Which formulation is right? Both of them. It's a matter of interpretation.
–  Jerry Schirmer Dec 11 '10 at 0:53

@David @Marek: That's why my attitude about these things is generally "calculate the relevant physical property using the rules of differential geometry and GR" and only then provide a plain English set of words to interpret the result. Otherwise, you just end up talking in circles about things that might be equivalent in different coordinates. Proper distances between galaxies increase in an FLRW spacetime. *Why* this happens depends on what coordinate system/interpretation scheme you're using, and different schemes clarify some things and obscure others. –  Jerry Schirmer Dec 12 '10 at 19:59
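As a sanity check on the scale-factor relation $r(t_1) = \frac{a(t_1)}{a(t_2)}r(t_2)$ in David Z's answer, here is a toy computation. The matter-dominated form $a(t) \propto t^{2/3}$ and all the numbers are purely illustrative assumptions:

```python
# Proper distance between comoving objects scales with a(t):
#   r(t1) = a(t1) / a(t2) * r(t2)
# Toy matter-dominated scale factor (an assumption for illustration only).
def a(t):
    return t ** (2.0 / 3.0)

t2, t1 = 1.0, 8.0     # "now" and a later epoch, arbitrary time units
r_now = 100.0         # separation measured at t2 (say, in Mpc)
r_later = a(t1) / a(t2) * r_now   # a grows by 8^(2/3) = 4, so about 400

# The comoving ("grid") separation never changed; only the scale factor did.
```

The marbles stayed put on the grid; the grid itself grew by a factor of four.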
https://www.numerade.com/questions/perform-the-indicated-operations-leftfrac25-yfrac18-zrightleftfrac35-yfrac12-zright/
### Video Transcript

We're being asked to FOIL the quantity $(2y^5 + 5z)$ with itself, i.e. to expand $(2y^5 + 5z)(2y^5 + 5z)$. (If you notice that this fits the special-product formula for the square of a binomial, you could use that instead; I'm just going to FOIL it out.) First terms: $2y^5 \cdot 2y^5$. Two times two is four, and when we multiply like bases we add the exponents, so five plus five gives $y^{10}$; that's $4y^{10}$. Outer terms: $2y^5 \cdot 5z$. Two times five is ten, and $y^5 z$ is just rewritten, because those aren't like variables, so we can't combine them into one thing: $10y^5z$. Inner terms: $5z \cdot 2y^5$. Five times two is ten again, and again we have $y^5 z$ (notice those are still in alphabetical order): another $10y^5z$. Last terms: $5z \cdot 5z$. Five times five is 25, and $z$ times $z$ is $z^2$: $25z^2$. The last step is simply to combine the like terms in the middle: $10y^5z + 10y^5z$ gives $20y^5z$, since ten plus ten is twenty. Rewriting everything, the final answer is $4y^{10} + 20y^5z + 25z^2$.
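A quick numeric spot-check of the expansion worked out in the transcript, $(2y^5 + 5z)^2 = 4y^{10} + 20y^5z + 25z^2$; the sample points are arbitrary:

```python
def expanded(y, z):
    """Right-hand side of the expansion from the transcript."""
    return 4*y**10 + 20*y**5*z + 25*z**2

# Compare against direct evaluation of (2y^5 + 5z)^2 at a few points.
for y, z in [(1.0, 2.0), (0.5, -3.0), (2.0, 0.25)]:
    assert abs((2*y**5 + 5*z)**2 - expanded(y, z)) < 1e-9
```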
https://www.physicsforums.com/threads/faradays-law.279354/
1. Dec 13, 2008

### FourierX

1. The problem statement, all variables and given/known data

Actually this is not exactly homework; I am just trying to understand the following situation. Consider a ring-shaped coil of N turns and area A. Connect it to an external circuit with a twisted pair of leads (this info is trivial). The resistance of the circuit, along with the coil itself, is R. Now place the coil in a magnetic field. Suppose the flux through the coil is somehow altered from its initial steady-state value (A) to a final value (B). The author claims that the total charge Q that flows through the circuit as a result is independent of the rate of change of the flux. I am having a hard time understanding this. Can anyone help me understand it?

2. Relevant equations

$$\oint \vec{E} \cdot d\vec{l} = -\frac{d\Phi}{dt}$$

3. The attempt at a solution

Faraday's law is the most relevant law here, according to the book. But I am just not getting what the author is saying.

2. Dec 13, 2008

### LowlyPion

3. Dec 13, 2008

### FourierX

Thanks, I followed the video. It was helpful. However, I am still not sure about the independence of the charge from the rate of change of flux. On applying Faraday's law, $\mathrm{EMF} = -N\,d\Phi/dt$. In the situation mentioned in the question above, B is the final flux and A the initial flux. We are trying to derive Q such that it is independent of $d\Phi/dt$. I am confused by the initial and final magnetic flux. On just using $d\Phi/dt$, here is what I got: $I = \frac{N\,(d\Phi/dt)\cos\theta}{R}$ and $I = dQ/dt$. But still Q is dependent on $d\Phi/dt$. Any clue?

Last edited: Dec 14, 2008

4. Dec 15, 2008

### Defennder

What is $\cos\theta$ here? And try equating the expression for I with V/R, where V is as given by Faraday's law.

5. Dec 15, 2008

### FourierX

Cosine theta is a mistake here. It has to be omitted. Yeah, I did use Ohm's law there. But my confusion at this point is: since the final and initial fluxes are given, in Faraday's formula should the emf be $-N\,d(B-A)/dt$ or just $-N\,d\Phi/dt$?
The final expression is supposed to show that Q is independent of the rate of change of flux.

Last edited: Dec 15, 2008

6. Dec 15, 2008

### Defennder

It should be $$\mathrm{emf} = -\frac{B-A}{\Delta t}$$.

7. Dec 15, 2008

### FourierX

Did you forget N?

8. Dec 15, 2008

### Defennder

No I didn't. N was already included in both B and A. Remember that B, A are themselves the flux through the coil. Anyway it should make no difference in the solution.
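The independence that puzzles FourierX in post 1 falls out in one line: integrate the current over the whole transition, whatever its time profile. A sketch, using magnitudes and writing the per-turn initial and final fluxes as $\Phi_A$ and $\Phi_B$:

```latex
Q = \int I\,dt
  = \int \frac{|\mathrm{emf}|}{R}\,dt
  = \frac{N}{R}\int \frac{d\Phi}{dt}\,dt
  = \frac{N}{R}\int_{\Phi_A}^{\Phi_B} d\Phi
  = \frac{N\,(\Phi_B - \Phi_A)}{R}.
```

Only the endpoints of the flux survive the integration, so the same total charge flows no matter how quickly or slowly the flux changes.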
https://tug.org/pipermail/xetex/2011-September/021117.html
[XeTeX] fontspec's \setmathrm seems to have no effect mskala at ansuz.sooke.bc.ca mskala at ansuz.sooke.bc.ca Thu Sep 1 16:57:58 CEST 2011 On Thu, 1 Sep 2011, Peter Dyballa wrote: > > So instead of the \setmathrm giving me the font I requested, I seem to > > be getting a computer modern font. Why would this be??? > > Because you don't set your maths in \mathrm! The default for maths is a sans-serif font, because the text usually consists of a serif font. It depends which symbols you're talking about, but variables in math (at least in English-language documents) are normally set in italic by default. Roman is typically used for function names like "sin" and "log". Sans-serif is rarely used in math; when it is, it's often for special kinds of variables, such as vectors (though other conventions are more popular for indicating vectors). -- Matthew Skala mskala at ansuz.sooke.bc.ca People before principles. http://ansuz.sooke.bc.ca/
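For reference, the behaviour Matthew describes can be seen in a minimal XeLaTeX file. This is a sketch assuming the classic fontspec setup without unicode-math, and the font name is only a placeholder:

```latex
% Compile with XeLaTeX.  \setmathrm changes the font used for \mathrm{...}
% in math mode; math italic letters such as x still come from the default
% Computer Modern maths fonts.
\documentclass{article}
\usepackage{fontspec}
\setmainfont{TeX Gyre Pagella}  % placeholder font name
\setmathrm{TeX Gyre Pagella}
\begin{document}
$ y = \sin x + \mathrm{const} $
\end{document}
```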
https://forum.snap.berkeley.edu/t/ai-for-a-shooter-game-reacting-with-walls/12694
AI for a shooter game reacting with walls

I am designing an AI for a top-down 2D shooter game, and I need a proper way for each AI to detect whether it has a clear path/sight to a target. Since pictures are worth a thousand words, let me show you two methods I've thought of, the last one of which is in the project.

Assuming you can see this, each of the pinkish dots represents a check against all of the wall points, so the number of checks grows quickly as you add more AIs. You can't just check a line between two AIs, because this is what is required.

For this, areas behind walls are cut off. The diagonal shows how sections are cut off in the top, right, bottom and left quadrants around the AI. Here the wall is above the AI; if it were much farther left or right it wouldn't block the AI from shooting upward, but since it's in the top quadrant it is treated as if it does.

So this second method is what I'm using, except that it doesn't quite cut it (better than nothing though). If a target, a wall, and the AI are all on a diagonal line in that order, the AI will still try to shoot/follow the target. It gets worse when there are two walls, trapping the AI in a corner.

Basically I am still looking for a solution, so please suggest one if you can. I hope I have kindled your interest.

I'd be inclined to have a hidden sprite, have it draw an (invisible) line to the target, and see if that line crosses anything else.

Good idea! Another idea to check out is the ray length option in the sensing category's _ to_ reporter. Comparing the ray length to the wall sprite against the distance to the target should let you determine whether the sight is clear. If the distance to the target is less than the ray length to the wall, the target is in clear sight; otherwise it's behind the wall.

I like your AI concept. It's much better than the enemy AI I made a year ago.
Basically what it looks like depending on what direction the wall is from the AI (red lines mean targets above that line are disqualified, blue lines mean below; only two lines apply per wall).

Wow, I had no idea about that. @bh's idea is basically my first picture: I have a list of points on the wall, and I thought that would take up too much processing power. I have never used ray length, but I would think it only works for "if touching", right? Right now I'm only using pen and variables, so I would think that wouldn't work, but I'll have to try it.

Whoops, I forgot to share the project. I've implemented some more code, and you can see that at least part of it works.
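bh's hidden-sprite idea boils down to a segment-intersection test: the sight line is clear iff the segment from the AI to the target crosses no wall segment. A minimal sketch of that geometry outside Snap! (walls stored as pairs of endpoints, a hypothetical representation rather than the project's actual data):

```python
def ccw(a, b, c):
    """Twice the signed area of triangle abc; the sign gives orientation."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1p2 properly crosses segment q1q2."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def clear_sight(ai, target, walls):
    """walls: list of (endpoint, endpoint) wall segments."""
    return not any(segments_cross(ai, target, w0, w1) for w0, w1 in walls)

walls = [((0.0, 1.0), (2.0, 1.0))]                     # one horizontal wall
assert not clear_sight((1.0, 0.0), (1.0, 2.0), walls)  # wall blocks this shot
assert clear_sight((3.0, 0.0), (3.0, 2.0), walls)      # off to the side: clear
```

Unlike the quadrant heuristic, this never disqualifies a target the wall doesn't actually block, and the cost is one test per wall segment rather than per wall point.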
https://scicomp.stackexchange.com/questions/30048/what-is-the-best-numerical-method-for-a-six-dimensional-spherical-integral
# What is the best numerical method for a six dimensional spherical integral? I am trying to do integrals of the type $$\int d^3\vec{p} \int d^3\vec{p}' e^{-p^2} e^{-{p'}^2}f(\vec{p}, \vec{p}')$$ where $\vec{p}$ and $\vec{p}'$ are three dimensional vectors represented using spherical coordinates, $\vec{p} = \{p,\theta,\phi\}$, and $f$ is a non-trivial, potentially complex, function. The integrals over $\phi$ and $\phi'$ can be done analytically even though the answers are rather complicated. However that is not true for the other integrals. So I was wondering what would be the best method to approach this problem or if there are any packages (preferably for python) that do this kind of integrals. I plan to try SciPy's nquad but I hear that it is not suggested for integrals weighted by $e^{-p^2}$. • How cheap (relatively) is the evaluation of $f$? – Anton Menshov Aug 14 '18 at 18:54 • Not quite sure, but $f$ is a product of rational functions of the $p$'s, Laguerre polynomials involving $p$'s and trig functions involving the angles. So I would think that evaluation is not too costly. However the function is quite oscillatory. – e-eight Aug 14 '18 at 19:03 • I assume that your integral extend to infinity, am I right? – nicoguaro Aug 14 '18 at 19:48 • Like I said the integrals over $\phi$ and $\phi'$ can be done analytically but the integrals over $\theta$ and $\theta'$ cannot be. The function is such that the variables cannot be seperated, for example $f$ can be $F(p,p')/(p^2 + p'^2 + 2pp'\cos\theta\cos\theta' + 2pp'\sin\theta\sin\theta'\cos(\phi-\phi') + a^2)$. – e-eight Aug 14 '18 at 23:10 • You might find the answers to this question useful too. – Daniel Shapero Aug 15 '18 at 1:22 • @Zythos In most cases of interest you can convert the integrals to hypercubes by a change of variables. 
For example in the integral that I posted here, using the transformations $p = q/(1-q)$, $\theta = \pi t$, $\phi = 2\pi u$, and likewise for their primed counterparts will convert the integration region to a hypercube. Does not mean that the Genz-Malik will always work (as I am finding out), but it can still be applied. – e-eight Aug 20 '18 at 16:16
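The change of variables in the last comment is easy to exercise in one dimension: map $[0,\infty)$ to $[0,1)$ via $p = q/(1-q)$ (Jacobian $dp = dq/(1-q)^2$) and hand the result to an ordinary quadrature rule. A sketch on the radial factor $\int_0^\infty p^2 e^{-p^2}\,dp = \sqrt{\pi}/4$; the rule and the panel count are arbitrary choices:

```python
import math

def to_unit_interval(f):
    """Transform an integrand on [0, inf) to one on [0, 1] via p = q/(1-q)."""
    def g(q):
        if q >= 1.0:
            return 0.0  # assumes f decays fast enough at infinity
        p = q / (1.0 - q)
        return f(p) / (1.0 - q) ** 2   # Jacobian dp/dq = 1/(1-q)^2
    return g

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

g = to_unit_interval(lambda p: p * p * math.exp(-p * p))
approx = simpson(g, 0.0, 1.0)
exact = math.sqrt(math.pi) / 4.0   # about 0.4431
```

The same substitution applied to each radial coordinate (plus linear rescaling of the angles) turns the full six-dimensional region into a hypercube, which is what cubature rules such as Genz-Malik expect.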
http://mathhelpforum.com/algebra/56373-algebra-simplifcation.html
1. Algebra/Simplification

Hi, I was wondering how one can get from $2x = \lambda(2x-2)$ to $x = \frac{\lambda}{\lambda - 1}$, and from $2y = \lambda(2y-4)$ to $y = \frac{2\lambda}{\lambda - 1}$. Thanks.

2. Hi FalconPUNCH!, We have $2x=\lambda(2x-2)$. Divide throughout by $2$, hence $x=\lambda x-\lambda$. Take the $x$ and the $\lambda$ over, therefore $\lambda = x(\lambda-1)$, and thus dividing throughout by $(\lambda -1)$ gives the required result. The second is similar. Hope this helps.

3. Originally Posted by FalconPUNCH!
Hi, I was wondering how one can get from $2x = \lambda(2x-2)$ to $x = \frac{\lambda}{\lambda - 1}$, and from $2y = \lambda(2y-4)$ to $y = \frac{2\lambda}{\lambda - 1}$. Thanks.

$2x = \lambda(2x-2)$
$2x=2x\lambda-2\lambda$
$x-\lambda x=-\lambda$
$x(1-\lambda)=-\lambda$
$x=\frac{-\lambda}{1-\lambda}$
$x=\frac{\lambda}{\lambda-1}$

The other one is done in a similar manner.

4. Thank you guys
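The two rearrangements in the thread can be checked symbolically. A minimal sketch using SymPy (assumed available) to solve each equation and compare with the claimed closed forms:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')

# Solve each equation for its variable, as the thread does by hand.
sol_x = sp.solve(sp.Eq(2*x, lam*(2*x - 2)), x)[0]
sol_y = sp.solve(sp.Eq(2*y, lam*(2*y - 4)), y)[0]

# Both differences simplify to zero, confirming the thread's answers.
print(sp.simplify(sol_x - lam/(lam - 1)))      # 0
print(sp.simplify(sol_y - 2*lam/(lam - 1)))    # 0
```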
https://www.nature.com/articles/s41467-017-00683-5?error=cookies_not_supported&code=e43b2000-6925-41e3-8804-bc1aa671cc82
## Introduction Structurally well-defined biopolymers are often found in nature. Proteins and nucleic acids achieve unique biological functions that are associated with their precisely defined backbone sequences. This exceptional sequence precision is not offered by synthetic polymers produced by conventional step-growth and chain-growth polymerizations. Therefore, scientific interest in establishing primary sequences of synthetic polymers has gained momentum for generating polymer materials1,2,3,4. Notably, a stepwise iterative synthesis5 on a solid- or soluble-polymer support was developed, which ensures sequence-defined and monodisperse polymers with high batch-to-batch reproducibility6,7,8,9. Recent advances in sequence-regulated polymers have employed sequence specificities in straightforward polymerization procedures. These specificities are determined by controlled polymerization conditions10,11,12,13,14,15, monomer reactivities16,17,18,19,20,21, and templates22,23,24. However, these strategies have difficulty generating polymers with perfectly sequence-specific microstructures. Step-growth polymerizations via acyclic diene metathesis polymerization25, ring-opening metathesis polymerization26, 27, or metal-catalyzed radical polymerization28 are advantageous for employing elaborately designed monomers that possess tailored sequences for polymerization but lack control over the molecular weight and dispersity of the polymers. Although these methods produce repetitive sequences of monomers on the backbones of the resulting polymers, they can be technically considered homopolymerization processes. Supramolecular polymerization can offer another approach for developing sequence-controlled polymer chains29. A self-sorting strategy was developed to regulate the sequence of monomer arrays in alternating copolymerization30, 31. To construct ABC sequences in supramolecular polymerization, high specificity is required for each binding event. 
Therefore, creating an ABC sequence-controlled supramolecular terpolymer is challenging via a self-sorting strategy (Fig. 1a). We developed unique host–guest motifs (i.e., a biscalix[5]arene-C60 complex32 and a bisporphyrin-trinitrofluorenone (TNF) complex33) that display high specificities in guest binding (Fig. 1b)34,35,36. Regulating the ABC sequence requires another host–guest interaction that displays high specificity to these host–guest complexes. A hydrogen-bonding complex between a Hamilton’s host and a barbiturate fulfills this requirement37. We envisaged applying the biscalix[5]arene-C60 complex, bisporphyrin-TNF complex, and Hamilton’s hydrogen-bonding complex to develop a sequence-controlled supramolecular terpolymer in a self-sorting manner. Three heteroditopic monomers 1, 2, and 3 possessing mismatched pairs of host and guest moieties were designed (Fig. 1c). These monomers exhibit high specificity for each binding event. Therefore, the intermolecular associations can selectively result in specific dimers 12, 23, and 31 (Fig. 1d). This self-sorting behavior should determine the molecular array of [1–2–3] n in the sequence-controlled supramolecular terpolymer. Herein, we report the development of sequence-controlled supramolecular terpolymerization via a self-sorting behavior among three sets of heteroditopic monomers possessing mismatched host–guest pairs. The polymeric nature of the supramolecular terpolymer is confirmed in both the solution and solid states.

## Results

### Self-association

The self-associations of monomers 1, 2, and 3 were investigated using 1H nuclear magnetic resonance (NMR) spectroscopy (Fig. 2a–c). The 1H NMR spectra of the monomers were slightly dependent on concentration, which gave association constants of 40 ± 10, 290 ± 20, and 3 ± 2 l mol–1 for 1, 2, and 3, respectively (Supplementary Figs. 1–6). The complexation-induced changes in chemical shift (CIS) for 1 and 3 were less than 0.1 p.p.m.
and nonspecific, demonstrating that 1 and 3 formed random aggregates in chloroform (Supplementary Figs. 1, 5). In contrast, the self-assembly of 2 resulted in significant upfield shifts for the aromatic protons Ha, Hb, and Hc as well as the Ar-CH 3 protons (Δδ = 0.66, 2.76, 1.56, and 1.44 p.p.m.) (Fig. 2b and Supplementary Figs. 3, 7–9). These large CIS values place the electron-deficient barbiturate tail within the shielding region of the bisporphyrin cleft, resulting in head-to-tail oligomeric complexes (Supplementary Fig. 4).

### Self-sorting behavior

Individual intermolecular associations between monomers 1, 2, and 3 were evaluated using 1H NMR spectroscopy. The intermolecular association between 1 and 2 resulted in the characteristic upfield shifts of the porphyrin NH protons and the TNF COO-CH 2- methylene protons (Δδ = –0.57 and –0.22 p.p.m.) (Figs. 2a, b, d and Supplementary Figs. 10–13), which indicates that the TNF moiety was selectively located within the bisporphyrin cleft. This host–guest complexation led to the simultaneous dissociation of self-assembled 2 with the downfield shift of the Ar-CH 3, Hb, and Hc protons (Δδ = 0.44, 0.82, and 0.50 p.p.m.). Therefore, heterodimer 12 evidently formed in solution. Adding monomer 3 to 2 gave rise to significant downfield shifts for the Ar-CH 3, Ha, Hb, and Hc protons (Δδ = 0.61, 0.20, 0.95, and 0.63 p.p.m.) (Figs. 2b, c, e and Supplementary Figs. 14–16) due to the dissociation of self-assembled 2. The intermolecular nuclear Overhauser effect correlation between the Ha proton of 2 and the -CH 2CONH- methylene protons of 3 indicated the close contact between the barbiturate tail of 2 and the Hamilton’s host moiety of 3, thus confirming the formation of the heterodimeric complex 23 (Supplementary Fig. 17). A selective interaction between biscalix[5]arene and the C60 moiety was clearly observed in the mixture of 1 and 3.
The broad Ar-CH 2-Ar resonance of 1 was due to the rapid ring flipping process of the calix[5]arene moieties (Fig. 2a). Upon encapsulating the C60 moiety within the calix[5]arene cavities, the Ar-CH 2-Ar resonances split into two broad resonances, which clearly indicates that the ring-flipping process slows due to the attractive face-to-face contacts of the calix[5]arene interiors and the C60 exterior (Fig. 2f and Supplementary Figs. 18–21). Therefore, the biscalix[5]arene-C60 complexation directs the formation of heterodimeric 13. The stoichiometries and binding constants for the complementary host–guest pairs 12, 23, and 13 were determined using ultraviolet/visible (UV/vis) absorption spectroscopy (Fig. 2h). The 12, 23, and 13 host–guest complexes were each formed at a 1:1 ratio with high binding constants (K 1–2 : 31,000 ± 1,000, K 2–3 : 730,000 ± 5,000, and K 1–3 : 15,000 ± 3,000 l mol–1) (Supplementary Figs. 22–24). Consequently, the complementary host–guest pairs self-sort during the selective formation of heterodimeric pairs 12, 23, and 13 over the homodimers. Finally, monomers 1, 2, and 3 were mixed in a 1:1:1 ratio, and the 1H NMR spectrum of the mixture was recorded. The same characteristics were observed for the assignable protons of Ar-CH 2-Ar, pyrrole NH, TNF COO-CH 2-, Ar-CH 3, Ha, Hb, and Hc (Fig. 2g and Supplementary Figs. 25–28, 31). In particular, the large downfield shifts of the Ar-CH 3, Ha, Hb, and Hc protons indicate the complete dissociation of self-assembled 2. These results explicitly confirm that the biscalix[5]arene-C60, bisporphyrin-TNF, and Hamilton’s host–guest complexes exhibit self-sorting in their intermolecular associations, which most likely results in the head-to-tail polymeric host–guest complex [1–2–3] n in sequence.
### Determination of the monomer sequence in the gas phase

Electrospray ionization orbitrap mass spectrometry (ESI-MS) determines the constitutional repeating structures of the sequence-controlled supramolecular polymeric assemblies (Fig. 2i). In the ESI-MS spectra of the 1:1:1 mixture of 1, 2, and 3, the heterodimeric pairs [12+3 H]3+, [23+3 H]3+, and [31+3 H]3+ were primarily observed (Supplementary Figs. 32–36), suggesting that the supramolecular assembly among monomers 1, 2, and 3 exhibits self-sorting. Abundant peaks were observed at m/z = 2,926.6, 2,988.5, 3,088.1, 3,171.5, and 3,536.2, corresponding to [12312 + 4 H]4+ + [23123 + 4 H]4+, [31231 + 4 H]4+, [2312 + 3 H]3+, [1231 + 3 H]3+ + [3123 + 3 H]3+, and [123 + 2 H]2+, respectively (Fig. 2i and Supplementary Figs. 37–41). These peaks were isotopically resolved and in good agreement with their calculated isotope distributions (Supplementary Tables 1–5). Therefore, the constitutional repeating structure 1–2–3 of the supramolecular polymer was confirmed in the gas phase. To exclude crossover repeating structures of 1–1–2–3, 1–2–2–3, and 1–2–3–3, the supramolecular polymer was end-capped with competitive guests. C60, 2,4,7-TNF, and 5-(p-methoxybenzylidene) barbituric acid (BA) completely dissociated the supramolecular polymeric assemblies, thus forming [C6012 + 2 H]2+, [C60123 + 3 H]3+, [2,4,7-TNF•2 + 2 H]2+, and [BA•3 + 3 H]3+ (Supplementary Figs. 42–48). Therefore, the repeating structure 1–2–3 in sequence was clearly established.

### Properties of the supramolecular terpolymer

The formation of supramolecular polymers in solution was investigated using diffusion-ordered 1H NMR spectroscopy (DOSY). According to the Stokes–Einstein relationship, the diffusion coefficient (D) of a molecular species is inversely proportional to its hydrodynamic radius. The average size of a supramolecular polymer (DP) is calculated based on the Ds of existing molecular species.
The Ds of 1, 2, and 3 were independent of the concentration up to 10 mmol l–1 (Supplementary Table 7). Monomers 1, 2, and 3 exist in their monomeric forms in solution (Fig. 3a). Although 2 exhibited self-assembly behavior in chloroform, the self-association was too weak to participate in the supramolecular homopolymerization. Upon concentrating the solutions from 1 to 10 mmol l–1, the Ds of 1 with 2, 2 with 3, and 3 with 1 decreased by 35%, suggesting that these mixtures selectively formed heterodimeric complexes 12, 23, and 13. In contrast, the Ds of a 1:1:1 mixture of 1, 2, and 3 strongly depended on the concentrations. At 0.10 mmol l–1, 1, 2, and 3 existed in their monomeric forms. Upon concentrating the solution to 10 mmol l–1, the Ds of the mixture gradually decreased by 85%, which suggests that large polymeric aggregates were present. Assuming that the aggregates were spherical, an approximate DP of 200 was estimated at a concentration of 10 mmol l–1. Viscometry provides valuable information on the macroscopic size and structure of polymeric assemblies in solution. The solution viscosities of 1, 2, and 3 and their 1:1 mixtures were directly determined in chloroform (Fig. 3b, Supplementary Fig. 49, and Supplementary Table 8). The solution viscosities of 1, 2, and 3 were not significantly influenced by their solution concentrations. When the solutions were concentrated, the viscosities of the 1:1 mixtures of 1 and 2, 2 and 3, and 1 and 3 did not meaningfully increase. Therefore, no polymeric aggregates formed. In contrast, a significant difference in the viscosity was observed for the 1:1:1 mixture of 1, 2, and 3. As the solution concentrations increased, the solution became viscous, which suggests that well-developed polymer chains were formed at a concentration of 10 mmol l–1. Therefore, only the 1:1:1 mixture of 1, 2, and 3 resulted in the supramolecular polymeric chains that contribute to viscous drag.
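The scaling behind the DP estimate can be illustrated with a back-of-envelope calculation. This is not the paper's actual fit, only a sketch of the Stokes–Einstein argument: D ∝ 1/R_h, and for a compact spherical aggregate the degree of polymerization scales as DP ∝ R_h³, so DP ≈ (D_monomer/D_polymer)³.

```python
# Illustrative estimate from the 85% drop in D reported in the text,
# under the (crude) compact-sphere assumption DP ≈ (D_mono / D_poly)^3.
d_ratio = 1.0 / (1.0 - 0.85)   # D fell by 85% between 0.1 and 10 mmol/L
dp_estimate = d_ratio ** 3
print(round(dp_estimate))       # ≈ 296
```

This lands at the same order of magnitude as the DP of roughly 200 quoted in the text; the residual gap reflects the crudeness of the compact-sphere assumption.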
Scanning electron microscopy (SEM) provided morphological insights into the supramolecular polymers in the solid state. The SEM images of cast films of 1, 2, and 3 and their 1:1 mixtures exhibited particle-like agglomerates (Supplementary Fig. 50). A 1:1:1 mixture of 1, 2, and 3 gave rise to polymeric fibers with diameters of 290 ± 50 nm, which partially formed sheet-like bundles (Fig. 3c)38. More detailed insight into the polymer formation was obtained using atomic force microscopy (AFM). Fig. 3d shows the uniform fibrous morphologies that were formed on the highly oriented pyrolytic graphite (HOPG) surface. The reasonably oriented fibrous morphologies possessed a uniform interchain distance of 3.9 ± 0.4 nm (Supplementary Figs. 51, 52 and Supplementary Table 9), which is consistent with the diameter of 3.6 nm calculated for the oligomeric structure (Supplementary Fig. 53).

## Discussion

In conclusion, we developed an ABC sequence-controlled supramolecular terpolymer whose sequence is directed by employing the ball-and-socket, donor-acceptor, and hydrogen-bonding interactions that individually occur in the calix[5]arene-C60, bisporphyrin-TNF, and Hamilton’s complexes, respectively. The difference in the structural and electronic nature of these specific binding interactions evidently results in high-fidelity self-sorting, which provides control over the directionality and specificity in the sequence of the supramolecular terpolymer. Supramolecular chemistry offers various choices of host–guest motifs that have been previously developed with controllable structural and electronic properties. Therefore, our synthetic methodology may be extensively applied to the construction of tailored polymer sequences with structural variations and greater complexity by taking full advantage of host–guest motifs.
Sequence-controlled supramolecular polymers developed using self-sorting are expected to provide possibilities for controlling advanced functions associated with polymer sequences, such as self-healing, stimuli responsiveness, and shape memory.

## Methods

### Characterization

The characterization and synthesis of all compounds are described in full detail in the Supplementary Information. For the 1H NMR, 13C NMR, double-quantum filter correlation spectroscopy, heteronuclear single-quantum correlation spectroscopy, nuclear Overhauser spectroscopy, and ESI-Orbitrap mass spectra of the compounds in this article, see Supplementary Figs. 58–93.

### Determination of self-association constants for monomers

The 1H NMR spectra of monomers 1, 2, and 3 were recorded at various millimolar concentrations in chloroform-d 1. Hyperbolic curves were obtained by plotting the compound concentrations as a function of the 1H NMR chemical shifts (δ) of the aromatic protons of the monomers. The plots were fitted based on the isodesmic association model. The fitting function is given by eq. (1), where K, C, δ m, and δ a denote the association constant, total concentration of the compound, chemical shift of the monomer, and chemical shift of the self-assembled species, respectively (see Supplementary Figs. 1–6).

$$\delta(C) = \delta_{\mathrm{m}} + \left( \delta_{\mathrm{a}} - \delta_{\mathrm{m}} \right)\left( 1 + \frac{1 - \sqrt{4KC + 1}}{2KC} \right)$$ (1)

### Determination of host–guest stoichiometry

A Job plot was used to determine the host–guest ratios for complexes 12, 23, and 31 in 1,2-dichloroethane at 25 °C. A series of solutions containing two of the monomers were prepared such that the sum of the total concentrations of the monomers remained constant (1 × 10–5 mol l–1). The mole fraction (X) was varied from 0.0 to 1.0. The absorbance changes (ΔA) collected at 429 nm for 12, at 450 nm for 23, and at 470 nm for 31 were plotted as a function of the molar fraction.
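The isodesmic fit of eq. (1) is a standard nonlinear least-squares problem. A minimal sketch with SciPy's `curve_fit` is shown below; the titration data here are synthetic (hypothetical values chosen near the K ≈ 290 l mol–1 reported for monomer 2), not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def isodesmic(C, K, d_m, d_a):
    # Eq. (1): observed chemical shift as a function of total concentration C
    return d_m + (d_a - d_m) * (1.0 + (1.0 - np.sqrt(4.0*K*C + 1.0)) / (2.0*K*C))

# Synthetic titration data (hypothetical: C in mmol/L, so K comes out in L/mmol)
C = np.linspace(0.5, 20.0, 15)
delta_obs = isodesmic(C, 0.29, 8.50, 5.74)   # "true" K = 0.29 L/mmol ≈ 290 L/mol

# Fit with positivity bounds so the optimizer never evaluates sqrt of a negative.
(K_fit, dm_fit, da_fit), _ = curve_fit(isodesmic, C, delta_obs,
                                       p0=(0.1, 8.0, 6.0), bounds=(0, np.inf))
print(round(K_fit, 3))   # recovers 0.29 on this noiseless data
```

With real titration data one would also propagate the parameter covariance (the second return value of `curve_fit`) into an uncertainty on K, matching the ± values quoted in the text.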
### Determination of association constants

A standard titration technique was applied for the determination of the association constants for the 12, 23, and 31 host–guest complexes in 1,2-dichloroethane at 25 °C. A titration was performed wherein the concentration of a host solution (1 × 10–5 mol l–1) was fixed while varying the concentration of its complementary guest. During the course of the titration, UV/vis absorption changes were measured from 250 to 900 nm. The experimental spectra were elaborated with the HypSpec program and subjected to a nonlinear global analysis by applying a 1:1 host–guest model of binding to determine the association constants (see Supplementary Figs. 22–24)39.

### DOSY

Monomers 1, 2, and 3 and their mixtures were dissolved in chloroform-d 1, and the sample solutions were placed in a 3 mm NMR sample tube. The pulse-field gradient diffusion NMR spectra were collected using a bipolar pulse pair stimulated echo pulse sequence on a JEOL Delta 500 spectrometer with a 3 mm inverse H3X/FG probe40. The pulsed-field gradient strength was arrayed from ~0.003 to ~0.653 T m–1 with a pulse gradient time of 1 ms and a diffusion time of 100 ms. The data were processed using the MestReNova program. The signal intensity as a function of the pulse-field gradient strength was fitted to the Stejskal–Tanner equation41 to determine the diffusion coefficients.

### Solution viscosity measurements

The solution viscosity of 1, 2, and 3 and their mixtures was measured at 25 °C with a rectangular slit m-VROC viscometer (RheoSense Inc.). Samples with different concentrations in chloroform were injected at flow rates of 0.11–0.28 ml min–1 using a 0.5 ml syringe.

### SEM measurements

Stock solutions of 1, 2, 3, 1 with 2, 2 with 3, 3 with 1, and 1 with 2 and 3 were prepared in 1,2-dichloroethane at concentrations of 2.0 × 10–5 mol l–1 with respect to each monomer. The stock solutions were drop-cast on a glass plate.
The films were dried under reduced pressure for 9 h. A platinum coating was sputtered onto the films using a Hitachi Ion Sputter MC1000. The SEM images were recorded using a Hitachi S-5200 system.

### AFM measurements

Stock solutions of 1, 2, 3, 1 with 2, 2 with 3, 3 with 1, and 1 with 2 and 3 were prepared in 1,2-dichloroethane at concentrations of 2.0 × 10–5 mol l–1 with respect to each monomer. The stock solutions were spin-coated onto freshly cleaved HOPG. The films were dried under reduced pressure for 9 h. The AFM measurements were performed using an Agilent 5100 microscope in air at ambient temperature with standard silicon cantilevers (NCH, NanoWorld, Neuchâtel, Switzerland), a resonance frequency of 320 kHz, and a force constant of 42 N m–1 in tapping mode. The images were analyzed using the Pico Image processing program.

### ESI-MS measurements

Stock solutions of 1, 2, and 3 were prepared in chloroform at concentrations of 2.8 × 10–4 mol l–1. Samples were prepared for ESI-MS by diluting a mixture of 50 µl of each stock solution 15-fold with a 2:1 mixture of chloroform and methanol; the diluted sample was infused into the ESI source using a syringe pump at a flow rate of 5 μl min–1 and analyzed using a spray voltage of 8 kV in positive ion mode. The ESI-MS measurements were performed using a Thermo Fisher Scientific LTQ Orbitrap XL hybrid FTMS system.

### Data availability

The authors declare that the data supporting the findings of this study are available within the paper and its Supplementary Information File. All data are available from the authors on reasonable request.
https://socratic.org/questions/what-is-lim-oo-f-x-x-2-x-2-7-2x-2-x-3-5
# What is lim_(x->-oo) f(x) = x^2/(x^2-7) - (2x^2)/(x^3-5)?

May 18, 2018

${\lim}_{x \to - \infty} f \left(x\right) = 1$

#### Explanation:

We'll take each of the two rational terms and divide through by the highest power of $x$ in its denominator. This gives:

$f(x) = \frac{x^2}{x^2 - 7} - \frac{2x^2}{x^3 - 5} = \frac{1}{1 - 7/x^2} - \frac{2/x}{1 - 5/x^3}$

Now recall the fact that ${\lim}_{x \to \pm \infty} \frac{a}{x^n} = 0$ if $a$ is a real number and $n \ge 1$. This tells us that $\frac{7}{x^2}$, $\frac{2}{x}$, and $\frac{5}{x^3}$ will all approach $0$ as $x \to - \infty$. Thus, we can directly replace these terms with zero, which yields our limit:

${\lim}_{x \to - \infty} \left( \frac{1}{1 - 7/x^2} - \frac{2/x}{1 - 5/x^3} \right) = \frac{1}{1 - 0} - \frac{0}{1 - 0} = 1$
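The limit above can be confirmed symbolically. A quick check with SymPy (assumed available):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2/(x**2 - 7) - 2*x**2/(x**3 - 5)

# Symbolic limit as x -> -infinity agrees with the hand derivation.
print(sp.limit(f, x, -sp.oo))   # 1
```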
https://zbmath.org/authors/?q=ai%3Acuevas.claudio
## Cuevas, Claudio

Author ID: cuevas.claudio Published as: Cuevas, Claudio; Cuevas, C. Documents Indexed: 105 Publications since 1998, including 1 Book 1 Contribution as Editor Reviewing Activity: 32 Reviews Co-Authors: 43 Co-Authors with 103 Joint Publications 1,263 Co-Co-Authors

### Co-Authors

3 single-authored 21 Soto, Herme 16 Henríquez, Hernán R. 13 Lizama, Carlos 12 de Andrade, Bruno 11 Cardoso, Fernando 11 Vodev, Georgi 10 Agarwal, Ravi P. 7 Caicedo, Alejandro 6 Vidal, Claudio 5 Dantas, Filipe 4 Andrade, Filipe 4 Castro, Airton 4 del Campo, Luis 4 Dos Santos, José Paulo Carvalho 4 Henríquez, Erwin 4 N’Guérékata, Gaston Mandata 4 Pinto, Manuel 4 Rabelo, Marcos Napoleão 3 De Souza, Julio César 3 Sepulveda, Alex 3 Silva, Clessius 2 Azevedo, Joelma 2 Choquehuanca, Mario 2 Frasson, Miguel V. S. 2 Liang, Jin 2 Ubilla, Pedro 1 Andrade, Bruno 1 Aparcana, Aldryn 1 Ashyralyev, Allaberen 1 Bernardo, Felix 1 Cecílio, D. L. 1 Diagana, Toka 1 El-Gebeily, Mohamed A. 1 Hernández Morales, Eduardo 1 Hernández, Eduardo M. 1 Mateus, Eder 1 Mesquita, Jaqueline Godoy 1 Nguyen, M. Van 1 Pierri, Michelle 1 Piskarëv, S. I. 1 Pozo, Juan Carlos 1 Siracusa, Giovana 1 Viana, Arlúcio

### Serials

10 Mathematical Methods in the Applied Sciences 9 Advances in Difference Equations 8 Journal of Difference Equations and Applications 5 Applied Mathematics and Computation 4 Computers & Mathematics with Applications 4 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 4 Applied Mathematics Letters 4 Asymptotic Analysis 2 Journal of Mathematical Analysis and Applications 2 Journal of Computational and Applied Mathematics 2 Numerical Functional Analysis and Optimization 2 Semigroup Forum 2 Serdica Mathematical Journal 2 Journal of Nonlinear and Convex Analysis 2 Dynamics of Continuous, Discrete & Impulsive Systems. Series A.
Mathematical Analysis 2 Journal of Applied Mathematics and Computing 2 Proyecciones 1 Applicable Analysis 1 Journal of the Franklin Institute 1 Journal of Mathematical Physics 1 Chaos, Solitons and Fractals 1 Anais da Academia Brasileira de Ciências 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Journal of Differential Equations 1 Mathematische Nachrichten 1 Mathematische Zeitschrift 1 Portugaliae Mathematica 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 Southeast Asian Bulletin of Mathematics 1 Systems & Control Letters 1 Acta Applicandae Mathematicae 1 Mathematical and Computer Modelling 1 Communications in Partial Differential Equations 1 Journal of Dynamics and Differential Equations 1 Advances in Mathematical Sciences and Applications 1 Electronic Journal of Differential Equations (EJDE) 1 Discrete and Continuous Dynamical Systems 1 Communications on Applied Nonlinear Analysis 1 Matemática Contemporânea 1 Mathematical Problems in Engineering 1 Communications in Applied Analysis 1 Functional Differential Equations 1 Abstract and Applied Analysis 1 Journal of Inequalities and Applications 1 Annales Henri Poincaré 1 Nonlinear Analysis. 
Real World Applications 1 Communications on Pure and Applied Analysis 1 Cubo 1 Banach Journal of Mathematical Analysis 1 Asian-European Journal of Mathematics 1 Journal of Abstract Differential Equations and Applications (JADEA) 1 ISRN Mathematical Analysis 1 Nonautonomous Dynamical Systems 1 Journal of Function Spaces

### Fields

40 Ordinary differential equations (34-XX) 36 Partial differential equations (35-XX) 32 Operator theory (47-XX) 29 Difference and functional equations (39-XX) 18 Integral equations (45-XX) 10 Abstract harmonic analysis (43-XX) 6 Global analysis, analysis on manifolds (58-XX) 6 Systems theory; control (93-XX) 4 Real functions (26-XX) 4 Biology and other natural sciences (92-XX) 1 General and overarching topics; collections (00-XX) 1 Special functions (33-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Functional analysis (46-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Numerical analysis (65-XX) 1 Mechanics of deformable solids (74-XX) 1 Quantum theory (81-XX)

### Citations contained in zbMATH Open

91 Publications have been cited 1,133 times in 510 Documents Cited by Year Weighted pseudo-almost periodic solutions of a class of semilinear fractional differential equations. Zbl 1248.34004 Agarwal, Ravi P.; de Andrade, Bruno; Cuevas, Claudio 2010 $$S$$-asymptotically $$\omega$$-periodic solutions of semilinear fractional integro-differential equations. Zbl 1176.47035 Cuevas, Claudio; De Souza, Julio César 2009 Almost automorphic solutions to a class of semilinear fractional differential equations. Zbl 1192.34006 Cuevas, Claudio; Lizama, Carlos 2008 Existence and uniqueness of pseudo almost periodic solutions of semilinear Cauchy problems with non dense domain. Zbl 0985.34052 Cuevas, Claudio; Pinto, Manuel 2001 Existence of $$S$$-asymptotically $$\omega$$-periodic solutions for fractional order functional integro-differential equations with infinite delay.
Zbl 1197.47063 Cuevas, Claudio; de Souza, Julio César 2010 Existence results for fractional neutral integro-differential equations with state-dependent delay. Zbl 1228.45014 Dos Santos, José Paulo Carvalho; Arjunan, M. Mallika; Cuevas, Claudio 2011 On type of periodicity and ergodicity to a class of fractional order differential equations. Zbl 1194.34007 Agarwal, Ravi P.; De Andrade, Bruno; Cuevas, Claudio 2010 Analytic resolvent operator and existence results for fractional integro-differential equations. Zbl 1330.45008 Agarwal, Ravi P.; Carvalho dos Santos, José Paulo; Cuevas, Claudio 2012 $$S$$-asymptotically $$\omega$$-periodic and asymptotically $$\omega$$-periodic solutions to semi-linear Cauchy problems with non-dense domain. Zbl 1205.34074 2010 Asymptotically almost automorphic solutions of abstract fractional integro-differential neutral equations. Zbl 1198.45014 Dos Santos, José Paulo Carvalho; Cuevas, Claudio 2010 Asymptotically periodic solutions of fractional differential equations. Zbl 1334.34173 Cuevas, Claudio; Henríquez, Hernán R.; Soto, Herme 2014 The existence of solutions for impulsive neutral functional differential equations. Zbl 1189.34155 Cuevas, Claudio; Hernández, Eduardo; Rabelo, Marcos 2009 Asymptotic behavior of solutions of some semilinear functional differential and integro-differential equations with infinite delay in Banach spaces. Zbl 1260.34142 Caicedo, A.; Cuevas, C.; Mophou, G. M.; N’guérékata, G. M. 2012 On well-posedness of difference schemes for abstract elliptic problems in $$L^{p}([0, T];E)$$ spaces. Zbl 1140.65073 Ashyralyev, Allaberen; Cuevas, Claudio; Piskarev, Sergey 2008 Asymptotic periodicity for some evolution equations in Banach spaces. Zbl 1221.34159 Agarwal, Ravi P.; Cuevas, Claudio; Soto, Herme; El-Gebeily, Mohamed 2011 $$S$$-asymptotically $$\omega$$-periodic solutions for semilinear Volterra equations. 
Zbl 1251.45007 Cuevas, Claudio; Lizama, Carlos 2010 Existence results for a fractional equation with state-dependent delay. Zbl 1216.45003 dos Santos, José Paulo Carvalho; Cuevas, Claudio; de Andrade, Bruno 2011 Almost periodic and pseudo-almost periodic solutions to fractional differential and integro-differential equations. Zbl 1246.45012 Cuevas, Claudio; Sepúlveda, Alex; Soto, Herme 2011 Convergent solutions of linear functional difference equations in phase space. Zbl 1022.39001 Cuevas, Claudio; Pinto, Manuel 2003 Regularity of difference equations on Banach spaces. Zbl 1306.39001 Agarwal, Ravi P.; Cuevas, Claudio; Lizama, Carlos 2014 Almost automorphic solutions to integral equations on the line. Zbl 1187.45005 Cuevas, Claudio; Lizama, Carlos 2009 Pseudo-almost periodic solutions for abstract partial functional differential equations. Zbl 1170.35551 Cuevas, Claudio; Hernández M., Eduardo 2009 Asymptotic behavior in Volterra difference systems with unbounded delay. Zbl 0940.39005 Cuevas, Claudio; Pinto, Manuel 2000 Pseudo-almost periodic solutions of a class of semilinear fractional differential equations. Zbl 1368.34008 Agarwal, Ravi P.; Cuevas, Claudio; Soto, Herme 2011 Asymptotically periodic solutions of neutral partial differential equations with infinite delay. Zbl 1307.34122 Henríquez, Hernán R.; Cuevas, Claudio; Caicedo, Alejandro 2013 Semilinear functional difference equations with infinite delay. Zbl 1255.39012 Agarwal, Ravi P.; Cuevas, Claudio; Frasson, Miguel V. S. 2012 Mild solutions for impulsive neutral functional differential equations with state-dependent delay. Zbl 1197.34153 Cuevas, Claudio; N’Guérékata, Gaston M.; Rabelo, Marcos 2010 Weighted $$S$$-asymptotically $$\omega$$-periodic solutions of a class of fractional differential equations. Zbl 1210.34006 Cuevas, Claudio; Pierri, Michelle; Sepulveda, Alex 2011 Discrete dichotomies and asymptotic behavior for abstract retarded functional difference equations in phase space. 
Zbl 1019.39008 Cuevas, Claudio; Vidal, Claudio 2002 Exponential dichotomy and boundedness for retarded functional difference equations. Zbl 1162.39002 Cardoso, Fernando; Cuevas, Claudio 2009 $$S$$-asymptotically $$\omega$$-periodic solutions of abstract partial neutral integro-differential equations. Zbl 1241.47038 Caicedo, A.; Cuevas, C. 2010 Asymptotic periodicity and almost automorphy for a class of Volterra integro-differential equations. Zbl 1243.45015 de Andrade, Bruno; Cuevas, Claudio; Henríquez, Erwin 2012 Asymptotic properties of solutions to nonautonomous Volterra difference systems with infinite delay. Zbl 1002.39007 Cuevas, C.; Pinto, M. 2001 A note on discrete maximal regularity for functional difference equations with infinite delay. Zbl 1133.39001 Cuevas, Claudio; Vidal, Claudio 2006 Semilinear evolution equations of second order via maximal regularity. Zbl 1155.47056 Cuevas, Claudio; Lizama, Carlos 2008 On the existence of almost automorphic solutions of Volterra difference equations. Zbl 1261.39007 Cuevas, Claudio; Henríquez, Hernán R.; Lizama, Carlos 2012 Almost automorphy profile of solutions for difference equations of Volterra type. Zbl 1302.39008 Agarwal, Ravi P.; Cuevas, Claudio; Dantas, Filipe 2013 An asymptotic theory for retarded functional difference equations. Zbl 1080.39007 Cuevas, C.; Del Campo, L. 2005 Asymptotic periodicity for a class of partial integrodifferential equations. Zbl 1227.45007 Caicedo, Alejandro; Cuevas, Claudio; Henríquez, Hernán R. 2011 On type of periodicity and ergodicity to a class of integral equations with infinite delay. Zbl 1203.43006 Agarwal, Ravi P.; de Andrade, Bruno; Cuevas, Claudio 2010 Well posedness for a class of flexible structure in Hölder spaces. Zbl 1331.35337 Cuevas, Claudio; Lizama, Carlos 2009 Weighted exponential trichotomy of linear difference equations. 
Zbl 1155.39004 Cuevas, Claudio; Vidal, Claudio 2008 Solutions of second order abstract retarded functional differential equations on the line. Zbl 1231.34135 Cuevas, Claudio; Henríquez, Hernán R. 2011 Pseudo-almost automorphic solutions to a class of semilinear fractional differential equations. Zbl 1208.47080 Cuevas, Claudio; Rabelo, Marcos; Soto, Herme 2010 Maximal regularity of discrete second order Cauchy problems in Banach spaces. Zbl 1133.39014 Cuevas, Claudio; Lizama, Carlos 2007 Existence of $$S$$-asymptotically $$\omega$$-periodic solutions for two-times fractional order differential equations. Zbl 1299.34012 Cuevas, Claudio; Lizama, Carlos 2013 $$l_p$$-boundedness properties for Volterra difference equations. Zbl 1320.39012 Cuevas, Claudio; Dantas, Filipe; Choquehuanca, Mario; Soto, Herme 2013 Weighted convergent and bounded solutions of Volterra difference systems with infinite delay. Zbl 0965.39007 Cuevas, C. 2000 Semilinear evolution equations on discrete time and maximal regularity. Zbl 1182.47054 Cuevas, Claudio; Lizama, Carlos 2010 Almost automorphic and pseudo-almost automorphic solutions to semilinear evolution equations with nondense domain. Zbl 1203.34095 2009 Compact almost automorphic solutions to semilinear Cauchy problems with non-dense domain. Zbl 1189.34117 2009 Almost automorphy for abstract neutral differential equations via control theory. Zbl 1271.34078 Henríquez, Hernán R.; Cuevas, Claudio 2013 High frequency resolvent estimates for perturbations by large long-range magnetic potentials and applications to dispersive estimates. Zbl 1260.81067 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2013 Asymptotic periodicity for strongly damped wave equations. Zbl 1293.35175 Cuevas, Claudio; Lizama, Carlos; Soto, Herme 2013 Dispersive estimates for the Schrödinger equation in dimensions four and five.
Zbl 1163.35482 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2009 Periodic solutions of abstract functional differential equations with state-dependent delay. Zbl 1355.34112 Andrade, Filipe; Cuevas, Claudio; Henríquez, Hernán R. 2016 Asymptotic periodicity for flexible structural systems and applications. Zbl 1381.35107 de Andrade, Bruno; Cuevas, Claudio; Silva, Clessius; Soto, Herme 2016 About the behavior of solutions for Volterra difference equations with infinite delay. Zbl 1291.39020 Castro, Airton; Cuevas, Claudio; Dantas, Filipe; Soto, Herme 2014 On fractional heat equations with non-local initial conditions. Zbl 1332.35378 De Andrade, Bruno; Cuevas, Claudio; Soto, Herme 2016 Almost automorphic solutions of hyperbolic evolution equations. Zbl 1253.34054 Cuevas, Claudio; Henriquez, Erwin; De Andrade, Bruno 2012 Stabilization of distributed control systems with delay. Zbl 1226.93112 Henríquez, Hernán R.; Cuevas, Claudio; Rabelo, Marcos; Caicedo, Alejandro 2011 Weighted exponential trichotomy of difference equations. Zbl 1192.39009 Cuevas, Claudio; del Campo, Luis; Vidal, Claudio 2010 Asymptotic periodicity for some classes of integro-differential equations and applications. Zbl 1233.45003 Agarwal, Ravi P.; de Andrade, Bruno; Cuevas, Claudio; Henríquez, Erwin 2011 Pseudo almost automorphic solutions to fractional differential and integro-differential equations. Zbl 1264.43005 Cuevas, Claudio; N’Guérékata, G. M.; Sepulveda, A. 2012 Sharp bounds on the number of resonances for conformally compact manifolds with constant negative curvature near infinity. Zbl 1046.58011 Cuevas, Claudio; Vodev, Georgi 2003 Qualitative theory for Volterra difference equations. Zbl 1397.39003 Bernardo, Felix; Cuevas, Claudio; Soto, Herme 2018 Asymptotic periodicity for hyperbolic evolution equations and applications. 
Zbl 1410.34177 Andrade, Filipe; Cuevas, Claudio; Silva, Clessius; Soto, Herme 2015 Existence and asymptotic behaviour for the time-fractional Keller-Segel model for chemotaxis. Zbl 1419.35200 Azevedo, Joelma; Cuevas, Claudio; Henriquez, Erwin 2019 Qualitative theory for strongly damped wave equations. Zbl 1394.35038 Azevedo, Joelma; Cuevas, Claudio; Soto, Herme 2017 Almost periodicity for a nonautonomous discrete dispersive population model. Zbl 1381.39018 Cuevas, Claudio; Dantas, Filipe; Soto, Herme 2016 Resolvent estimates for perturbations by large magnetic potentials. Zbl 1301.35061 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2014 High frequency dispersive estimates for the Schrödinger equation in high dimensions. Zbl 1225.35214 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2011 Asymptotic properties of solutions to linear nonautonomous delay differential equations through generalized characteristic equations. Zbl 1194.34143 Cuevas, Claudio; Frasson, Miguel V. S. 2010 Well-posedness of second order evolution equation on discrete time. Zbl 1209.39001 Castro, Airton; Cuevas, Claudio; Lizama, Carlos 2010 Asymptotic analysis for Volterra difference equations. Zbl 1304.39013 Cuevas, Claudio; Choquehuanca, Mario; Soto, Herme 2014 Dispersive estimates of solutions to the wave equation with a potential in dimensions two and three. Zbl 1164.35433 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2005 Maximal regularity of the discrete harmonic oscillator equation. Zbl 1167.39010 Castro, Airton; Cuevas, Claudio; Lizama, Carlos 2009 Perturbation theory, stability, boundedness and asymptotic behaviour for second order evolution equation in discrete time. Zbl 1220.39005 Castro, Airton; Cuevas, Claudio 2011 Second order abstract neutral functional differential equations. Zbl 1382.34075 Henríquez, Hernán R.; Cuevas, Claudio 2017 Existence results for fractional integro-differential inclusions with state-dependent delay. 
Zbl 1377.34099 Siracusa, Giovana; Henríquez, Hernán R.; Cuevas, Claudio 2017 Periodicity and ergodicity for abstract evolution equations with critical nonlinearities. Zbl 1355.35118 Andrade, Bruno; Cuevas, Claudio; Liang, Jin; Soto, Herme 2015 Almost periodic solutions of partial differential equations with delay. Zbl 1346.35204 Henríquez, Hernán; Cuevas, Claudio; Caicedo, Alejandro 2015 Approximate controllability of second-order distributed systems. Zbl 1300.93038 Henríquez, Hernán R.; Cuevas, Claudio 2014 Approximate controllability of abstract discrete-time systems. Zbl 1203.93024 Henríquez, Hernán R.; Cuevas, Claudio 2010 A perturbation theory for the discrete harmonic oscillator equation. Zbl 1213.39017 Cuevas, Claudio; de Souza, Julio César 2010 Weighted exponential trichotomy of difference equations. Zbl 1203.39003 Vidal, Claudio; Cuevas, Claudio; del Campo, Luis 2008 $$L^{p'}$$-$$L^p$$ decay estimates of solutions to the wave equation with a short-range potential. Zbl 1098.35037 Cuevas, Claudio; Vodev, Georgi 2006 Dispersive estimates for the Schrödinger equation with potentials of critical regularity. Zbl 1184.35084 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2009 Weighted dispersive estimates for solutions of the Schrödinger equation. Zbl 1199.35063 Cardoso, F.; Cuevas, C.; Vodev, G. 2008 Semi-classical dispersive estimates. Zbl 1304.35147 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2014 On the time-fractional Keller-Segel model for chemotaxis. Zbl 1445.35301 Cuevas, Claudio; Silva, Clessius; Soto, Herme 2020 On the time-fractional Keller-Segel model for chemotaxis. Zbl 1445.35301 Cuevas, Claudio; Silva, Clessius; Soto, Herme 2020 Existence and asymptotic behaviour for the time-fractional Keller-Segel model for chemotaxis. Zbl 1419.35200 Azevedo, Joelma; Cuevas, Claudio; Henriquez, Erwin 2019 Qualitative theory for Volterra difference equations. 
Zbl 1397.39003 Bernardo, Felix; Cuevas, Claudio; Soto, Herme 2018 Qualitative theory for strongly damped wave equations. Zbl 1394.35038 Azevedo, Joelma; Cuevas, Claudio; Soto, Herme 2017 Second order abstract neutral functional differential equations. Zbl 1382.34075 Henríquez, Hernán R.; Cuevas, Claudio 2017 Existence results for fractional integro-differential inclusions with state-dependent delay. Zbl 1377.34099 Siracusa, Giovana; Henríquez, Hernán R.; Cuevas, Claudio 2017 Periodic solutions of abstract functional differential equations with state-dependent delay. Zbl 1355.34112 Andrade, Filipe; Cuevas, Claudio; Henríquez, Hernán R. 2016 Asymptotic periodicity for flexible structural systems and applications. Zbl 1381.35107 de Andrade, Bruno; Cuevas, Claudio; Silva, Clessius; Soto, Herme 2016 On fractional heat equations with non-local initial conditions. Zbl 1332.35378 De Andrade, Bruno; Cuevas, Claudio; Soto, Herme 2016 Almost periodicity for a nonautonomous discrete dispersive population model. Zbl 1381.39018 Cuevas, Claudio; Dantas, Filipe; Soto, Herme 2016 Asymptotic periodicity for hyperbolic evolution equations and applications. Zbl 1410.34177 Andrade, Filipe; Cuevas, Claudio; Silva, Clessius; Soto, Herme 2015 Periodicity and ergodicity for abstract evolution equations with critical nonlinearities. Zbl 1355.35118 Andrade, Bruno; Cuevas, Claudio; Liang, Jin; Soto, Herme 2015 Almost periodic solutions of partial differential equations with delay. Zbl 1346.35204 Henríquez, Hernán; Cuevas, Claudio; Caicedo, Alejandro 2015 Asymptotically periodic solutions of fractional differential equations. Zbl 1334.34173 Cuevas, Claudio; Henríquez, Hernán R.; Soto, Herme 2014 Regularity of difference equations on Banach spaces. Zbl 1306.39001 Agarwal, Ravi P.; Cuevas, Claudio; Lizama, Carlos 2014 About the behavior of solutions for Volterra difference equations with infinite delay. 
Zbl 1291.39020 Castro, Airton; Cuevas, Claudio; Dantas, Filipe; Soto, Herme 2014 Resolvent estimates for perturbations by large magnetic potentials. Zbl 1301.35061 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2014 Asymptotic analysis for Volterra difference equations. Zbl 1304.39013 Cuevas, Claudio; Choquehuanca, Mario; Soto, Herme 2014 Approximate controllability of second-order distributed systems. Zbl 1300.93038 Henríquez, Hernán R.; Cuevas, Claudio 2014 Semi-classical dispersive estimates. Zbl 1304.35147 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2014 Asymptotically periodic solutions of neutral partial differential equations with infinite delay. Zbl 1307.34122 Henríquez, Hernán R.; Cuevas, Claudio; Caicedo, Alejandro 2013 Almost automorphy profile of solutions for difference equations of Volterra type. Zbl 1302.39008 Agarwal, Ravi P.; Cuevas, Claudio; Dantas, Filipe 2013 Existence of $$S$$-asymptotically $$\omega$$-periodic solutions for two-times fractional order differential equations. Zbl 1299.34012 Cuevas, Claudio; Lizama, Carlos 2013 $$l_p$$-boundedness properties for Volterra difference equations. Zbl 1320.39012 Cuevas, Claudio; Dantas, Filipe; Choquehuanca, Mario; Soto, Herme 2013 Almost automorphy for abstract neutral differential equations via control theory. Zbl 1271.34078 Henríquez, Hernán R.; Cuevas, Claudio 2013 High frequency resolvent estimates for perturbations by large long-range magnetic potentials and applications to dispersive estimates. Zbl 1260.81067 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2013 Asymptotic periodicity for strongly damped wave equations. Zbl 1293.35175 Cuevas, Claudio; Lizama, Carlos; Soto, Herme 2013 Analytic resolvent operator and existence results for fractional integro-differential equations.
Zbl 1330.45008 Agarwal, Ravi P.; Carvalho dos Santos, José Paulo; Cuevas, Claudio 2012 Asymptotic behavior of solutions of some semilinear functional differential and integro-differential equations with infinite delay in Banach spaces. Zbl 1260.34142 Caicedo, A.; Cuevas, C.; Mophou, G. M.; N’guérékata, G. M. 2012 Semilinear functional difference equations with infinite delay. Zbl 1255.39012 Agarwal, Ravi P.; Cuevas, Claudio; Frasson, Miguel V. S. 2012 Asymptotic periodicity and almost automorphy for a class of Volterra integro-differential equations. Zbl 1243.45015 de Andrade, Bruno; Cuevas, Claudio; Henríquez, Erwin 2012 On the existence of almost automorphic solutions of Volterra difference equations. Zbl 1261.39007 Cuevas, Claudio; Henríquez, Hernán R.; Lizama, Carlos 2012 Almost automorphic solutions of hyperbolic evolution equations. Zbl 1253.34054 Cuevas, Claudio; Henriquez, Erwin; De Andrade, Bruno 2012 Pseudo almost automorphic solutions to fractional differential and integro-differential equations. Zbl 1264.43005 Cuevas, Claudio; N’Guérékata, G. M.; Sepulveda, A. 2012 Existence results for fractional neutral integro-differential equations with state-dependent delay. Zbl 1228.45014 Dos Santos, José Paulo Carvalho; Arjunan, M. Mallika; Cuevas, Claudio 2011 Asymptotic periodicity for some evolution equations in Banach spaces. Zbl 1221.34159 Agarwal, Ravi P.; Cuevas, Claudio; Soto, Herme; El-Gebeily, Mohamed 2011 Existence results for a fractional equation with state-dependent delay. Zbl 1216.45003 dos Santos, José Paulo Carvalho; Cuevas, Claudio; de Andrade, Bruno 2011 Almost periodic and pseudo-almost periodic solutions to fractional differential and integro-differential equations. Zbl 1246.45012 Cuevas, Claudio; Sepúlveda, Alex; Soto, Herme 2011 Pseudo-almost periodic solutions of a class of semilinear fractional differential equations.
Zbl 1368.34008 Agarwal, Ravi P.; Cuevas, Claudio; Soto, Herme 2011 Weighted $$S$$-asymptotically $$\omega$$-periodic solutions of a class of fractional differential equations. Zbl 1210.34006 Cuevas, Claudio; Pierri, Michelle; Sepulveda, Alex 2011 Asymptotic periodicity for a class of partial integrodifferential equations. Zbl 1227.45007 Caicedo, Alejandro; Cuevas, Claudio; Henríquez, Hernán R. 2011 Solutions of second order abstract retarded functional differential equations on the line. Zbl 1231.34135 Cuevas, Claudio; Henríquez, Hernán R. 2011 Stabilization of distributed control systems with delay. Zbl 1226.93112 Henríquez, Hernán R.; Cuevas, Claudio; Rabelo, Marcos; Caicedo, Alejandro 2011 Asymptotic periodicity for some classes of integro-differential equations and applications. Zbl 1233.45003 Agarwal, Ravi P.; de Andrade, Bruno; Cuevas, Claudio; Henríquez, Erwin 2011 High frequency dispersive estimates for the Schrödinger equation in high dimensions. Zbl 1225.35214 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2011 Perturbation theory, stability, boundedness and asymptotic behaviour for second order evolution equation in discrete time. Zbl 1220.39005 Castro, Airton; Cuevas, Claudio 2011 Weighted pseudo-almost periodic solutions of a class of semilinear fractional differential equations. Zbl 1248.34004 Agarwal, Ravi P.; de Andrade, Bruno; Cuevas, Claudio 2010 Existence of $$S$$-asymptotically $$\omega$$-periodic solutions for fractional order functional integro-differential equations with infinite delay. Zbl 1197.47063 Cuevas, Claudio; de Souza, Julio César 2010 On type of periodicity and ergodicity to a class of fractional order differential equations. Zbl 1194.34007 Agarwal, Ravi P.; De Andrade, Bruno; Cuevas, Claudio 2010 $$S$$-asymptotically $$\omega$$-periodic and asymptotically $$\omega$$-periodic solutions to semi-linear Cauchy problems with non-dense domain. 
Zbl 1205.34074 2010 Asymptotically almost automorphic solutions of abstract fractional integro-differential neutral equations. Zbl 1198.45014 Dos Santos, José Paulo Carvalho; Cuevas, Claudio 2010 $$S$$-asymptotically $$\omega$$-periodic solutions for semilinear Volterra equations. Zbl 1251.45007 Cuevas, Claudio; Lizama, Carlos 2010 Mild solutions for impulsive neutral functional differential equations with state-dependent delay. Zbl 1197.34153 Cuevas, Claudio; N’Guérékata, Gaston M.; Rabelo, Marcos 2010 $$S$$-asymptotically $$\omega$$-periodic solutions of abstract partial neutral integro-differential equations. Zbl 1241.47038 Caicedo, A.; Cuevas, C. 2010 On type of periodicity and ergodicity to a class of integral equations with infinite delay. Zbl 1203.43006 Agarwal, Ravi P.; de Andrade, Bruno; Cuevas, Claudio 2010 Pseudo-almost automorphic solutions to a class of semilinear fractional differential equations. Zbl 1208.47080 Cuevas, Claudio; Rabelo, Marcos; Soto, Herme 2010 Semilinear evolution equations on discrete time and maximal regularity. Zbl 1182.47054 Cuevas, Claudio; Lizama, Carlos 2010 Weighted exponential trichotomy of difference equations. Zbl 1192.39009 Cuevas, Claudio; del Campo, Luis; Vidal, Claudio 2010 Asymptotic properties of solutions to linear nonautonomous delay differential equations through generalized characteristic equations. Zbl 1194.34143 Cuevas, Claudio; Frasson, Miguel V. S. 2010 Well-posedness of second order evolution equation on discrete time. Zbl 1209.39001 Castro, Airton; Cuevas, Claudio; Lizama, Carlos 2010 Approximate controllability of abstract discrete-time systems. Zbl 1203.93024 Henríquez, Hernán R.; Cuevas, Claudio 2010 A perturbation theory for the discrete harmonic oscillator equation. Zbl 1213.39017 Cuevas, Claudio; de Souza, Julio César 2010 $$S$$-asymptotically $$\omega$$-periodic solutions of semilinear fractional integro-differential equations.
Zbl 1176.47035 Cuevas, Claudio; De Souza, Julio César 2009 The existence of solutions for impulsive neutral functional differential equations. Zbl 1189.34155 Cuevas, Claudio; Hernández, Eduardo; Rabelo, Marcos 2009 Almost automorphic solutions to integral equations on the line. Zbl 1187.45005 Cuevas, Claudio; Lizama, Carlos 2009 Pseudo-almost periodic solutions for abstract partial functional differential equations. Zbl 1170.35551 Cuevas, Claudio; Hernández M., Eduardo 2009 Exponential dichotomy and boundedness for retarded functional difference equations. Zbl 1162.39002 Cardoso, Fernando; Cuevas, Claudio 2009 Well posedness for a class of flexible structure in Hölder spaces. Zbl 1331.35337 Cuevas, Claudio; Lizama, Carlos 2009 Almost automorphic and pseudo-almost automorphic solutions to semilinear evolution equations with nondense domain. Zbl 1203.34095 2009 Compact almost automorphic solutions to semilinear Cauchy problems with non-dense domain. Zbl 1189.34117 2009 Dispersive estimates for the Schrödinger equation in dimensions four and five. Zbl 1163.35482 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2009 Maximal regularity of the discrete harmonic oscillator equation. Zbl 1167.39010 Castro, Airton; Cuevas, Claudio; Lizama, Carlos 2009 Dispersive estimates for the Schrödinger equation with potentials of critical regularity. Zbl 1184.35084 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2009 Almost automorphic solutions to a class of semilinear fractional differential equations. Zbl 1192.34006 Cuevas, Claudio; Lizama, Carlos 2008 On well-posedness of difference schemes for abstract elliptic problems in $$L^{p}([0, T];E)$$ spaces. Zbl 1140.65073 Ashyralyev, Allaberen; Cuevas, Claudio; Piskarev, Sergey 2008 Semilinear evolution equations of second order via maximal regularity. Zbl 1155.47056 Cuevas, Claudio; Lizama, Carlos 2008 Weighted exponential trichotomy of linear difference equations. 
Zbl 1155.39004 Cuevas, Claudio; Vidal, Claudio 2008 Weighted exponential trichotomy of difference equations. Zbl 1203.39003 Vidal, Claudio; Cuevas, Claudio; del Campo, Luis 2008 Weighted dispersive estimates for solutions of the Schrödinger equation. Zbl 1199.35063 Cardoso, F.; Cuevas, C.; Vodev, G. 2008 Maximal regularity of discrete second order Cauchy problems in Banach spaces. Zbl 1133.39014 Cuevas, Claudio; Lizama, Carlos 2007 A note on discrete maximal regularity for functional difference equations with infinite delay. Zbl 1133.39001 Cuevas, Claudio; Vidal, Claudio 2006 $$L^{p'}$$-$$L^p$$ decay estimates of solutions to the wave equation with a short-range potential. Zbl 1098.35037 Cuevas, Claudio; Vodev, Georgi 2006 An asymptotic theory for retarded functional difference equations. Zbl 1080.39007 Cuevas, C.; Del Campo, L. 2005 Dispersive estimates of solutions to the wave equation with a potential in dimensions two and three. Zbl 1164.35433 Cardoso, Fernando; Cuevas, Claudio; Vodev, Georgi 2005 Convergent solutions of linear functional difference equations in phase space. Zbl 1022.39001 Cuevas, Claudio; Pinto, Manuel 2003 Sharp bounds on the number of resonances for conformally compact manifolds with constant negative curvature near infinity. Zbl 1046.58011 Cuevas, Claudio; Vodev, Georgi 2003 Discrete dichotomies and asymptotic behavior for abstract retarded functional difference equations in phase space. Zbl 1019.39008 Cuevas, Claudio; Vidal, Claudio 2002 Existence and uniqueness of pseudo almost periodic solutions of semilinear Cauchy problems with non dense domain. Zbl 0985.34052 Cuevas, Claudio; Pinto, Manuel 2001 Asymptotic properties of solutions to nonautonomous Volterra difference systems with infinite delay. Zbl 1002.39007 Cuevas, C.; Pinto, M. 2001 Asymptotic behavior in Volterra difference systems with unbounded delay. 
Zbl 0940.39005 Cuevas, Claudio; Pinto, Manuel 2000 Weighted convergent and bounded solutions of Volterra difference systems with infinite delay. Zbl 0965.39007 Cuevas, C. 2000

### Cited by 528 Authors

52 Cuevas, Claudio 31 Lizama, Carlos 23 N’Guérékata, Gaston Mandata 18 Liang, Jin 15 Agarwal, Ravi P. 15 Ahmad, Bashir 15 Ashyralyev, Allaberen 15 Diagana, Toka 14 Chang, Yong-Kui 12 Pinto, Manuel 12 Soto, Herme 12 Xia, Zhinan 12 Yan, Zuomao 11 de Andrade, Bruno 11 Henríquez, Hernán R. 11 Xiao, Ti-Jun 10 Cao, Junfei 10 Ntouyas, Sotiris K. 10 Ponce, Rodrigo F. 8 Băleanu, Dumitru I. 8 Benchohra, Mouffak 8 Ding, Huisheng 8 Kostić, Marko 8 Sasu, Bogdan 8 Shakhmurov, Veli B. 8 Zhang, Chuanyi 7 Al-saedi, Ahmed Eid Salem 7 Nieto Roig, Juan Jose 7 Wang, Rongnian 6 Hernández, Eduardo M. 6 Huang, Zaitang 6 Lu, Fangxia 6 Murugesu, R. 6 Pandey, Dwijendra Narain 6 Vijayakumar, Velusamy 5 Abbas, Syed 5 Dos Santos, José Paulo Carvalho 5 Li, Fang 5 Li, Gang 5 Luca, Rodica 5 Murillo-Arcila, Marina 5 Pierri, Michelle 5 Ravichandran, Chokkalingam 5 Sasu, Adina Luminiţa 5 Wang, Jinrong 4 Alvarez, Edgardo 4 Arjunan, Mani Mallika 4 Caicedo, Alejandro 4 Cardoso, Fernando 4 Castro, Airton 4 Chadha, Alka 4 Chen, Pengyu 4 Dabas, Jaydev 4 Dantas, Filipe 4 Ezzinbi, Khalil 4 Fan, Zhenbin 4 Gautam, Ganga Ram 4 Green, William R. 4 Henderson, Johnny Lee 4 Jia, Xiumei 4 Litimein, Sara 4 O’Regan, Donal 4 Shu, Xiaobao 4 Suganya, Selvaraj 4 Vidal, Claudio 4 Vodev, Georgi 4 Zhou, Yong 3 Abadias, Luciano 3 Ahmadian, Ali 3 Amin, Rohul 3 Andrade, Filipe 3 Ardjouni, Abdelouaheb 3 Cakir, Zafer 3 De Souza, Julio César 3 del Campo, Luis 3 Dimbour, William 3 Djoudi, Ahcene 3 Fečkan, Michal 3 Fu, Xianlong 3 Gyori, Istvan 3 Li, Hong-Xu 3 Liu, Junwei 3 Mu, Yunyi 3 Ozturk, Elif 3 Piskarëv, S. I.
3 Rabelo, Marcos Napoleão 3 Robledo, Gonzalo 3 Sepúlveda, Daniel 3 Siracusa, Giovana 3 Stamova, Ivanka Milkova 3 Tatar, Nasser-eddine 3 Tetikoglu, Fatma Songul Ozesenli 3 Wang, Huiwen 3 Wei, Mei 3 Xu, Fei 3 Yang, Fenglin 3 Yang, He 3 Zhang, Lili 3 Zhao, Zhihan 2 Abdeljawad, Thabet ...and 428 more Authors

### Cited in 157 Serials

62 Advances in Difference Equations 34 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 28 Applied Mathematics and Computation 17 Abstract and Applied Analysis 15 Journal of Difference Equations and Applications 13 Computers & Mathematics with Applications 12 Journal of Mathematical Analysis and Applications 11 Mathematical Methods in the Applied Sciences 10 Fractional Calculus & Applied Analysis 9 Journal of Integral Equations and Applications 9 Boundary Value Problems 8 Discrete Dynamics in Nature and Society 8 Mediterranean Journal of Mathematics 7 Applicable Analysis 7 Journal of Applied Mathematics and Computing 6 Applied Mathematics Letters 6 Fractional Differential Calculus 5 Chaos, Solitons and Fractals 5 Nonlinear Analysis. Real World Applications 5 Banach Journal of Mathematical Analysis 4 Bulletin of the Australian Mathematical Society 4 International Journal of Control 4 Journal of the Franklin Institute 4 Mathematische Nachrichten 4 Results in Mathematics 4 Stochastic Analysis and Applications 4 Acta Applicandae Mathematicae 4 Mathematical and Computer Modelling 4 Computational and Applied Mathematics 4 Journal of Inequalities and Applications 4 Communications in Nonlinear Science and Numerical Simulation 4 Discrete and Continuous Dynamical Systems.
Series B 4 Journal of Function Spaces and Applications 4 African Diaspora Journal of Mathematics 4 Journal of Fixed Point Theory and Applications 4 Evolution Equations and Control Theory 4 Journal of Function Spaces 3 Applied Mathematics and Optimization 3 Journal of Computational and Applied Mathematics 3 Journal of Differential Equations 3 Numerical Functional Analysis and Optimization 3 Semigroup Forum 3 Filomat 3 Discrete and Continuous Dynamical Systems 3 Differential Equations and Dynamical Systems 3 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 3 Cubo 3 Journal of Applied Analysis and Computation 3 AIMS Mathematics 2 Collectanea Mathematica 2 Integral Equations and Operator Theory 2 Journal of Functional Analysis 2 Zeitschrift für Analysis und ihre Anwendungen 2 Neural Networks 2 Communications in Partial Differential Equations 2 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 2 Topological Methods in Nonlinear Analysis 2 Journal of Mathematical Sciences (New York) 2 Fractals 2 Mathematical Problems in Engineering 2 Revista Matemática Complutense 2 Nonlinear Analysis. Modelling and Control 2 Journal of Evolution Equations 2 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis 2 Journal of Applied Mathematics 2 ISRN Mathematical Analysis 2 Journal of Mathematics 2 Nonautonomous Dynamical Systems 2 Open Mathematics 1 Communications in Mathematical Physics 1 Israel Journal of Mathematics 1 Journal of Mathematical Physics 1 Physica A 1 Reports on Mathematical Physics 1 Studia Mathematica 1 Bulletin. Classe des Sciences Mathématiques et Naturelles. Sciences Mathématiques 1 Advances in Mathematics 1 Annali di Matematica Pura ed Applicata. 
Serie Quarta 1 Archiv der Mathematik 1 Automatica 1 Duke Mathematical Journal 1 Glasgow Mathematical Journal 1 Illinois Journal of Mathematics 1 Journal of Optimization Theory and Applications 1 Kyungpook Mathematical Journal 1 Mathematics and Computers in Simulation 1 Mathematica Slovaca 1 Mathematische Zeitschrift 1 Memoirs of the American Mathematical Society 1 Proceedings of the American Mathematical Society 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 SIAM Journal on Numerical Analysis 1 Chinese Annals of Mathematics. Series B 1 Bulletin of the Iranian Mathematical Society 1 Applied Numerical Mathematics 1 Acta Mathematicae Applicatae Sinica. English Series 1 Journal of Theoretical Probability 1 Science in China. Series A 1 Applied Mathematical Modelling 1 Automation and Remote Control ...and 57 more Serials

### Cited in 32 Fields

324 Ordinary differential equations (34-XX) 163 Operator theory (47-XX) 117 Partial differential equations (35-XX) 97 Integral equations (45-XX) 62 Difference and functional equations (39-XX) 50 Abstract harmonic analysis (43-XX) 45 Real functions (26-XX) 42 Systems theory; control (93-XX) 32 Numerical analysis (65-XX) 24 Probability theory and stochastic processes (60-XX) 20 Harmonic analysis on Euclidean spaces (42-XX) 17 Biology and other natural sciences (92-XX) 9 Dynamical systems and ergodic theory (37-XX) 8 Integral transforms, operational calculus (44-XX) 8 Global analysis, analysis on manifolds (58-XX) 5 Calculus of variations and optimal control; optimization (49-XX) 2 Differential geometry (53-XX) 2 Mechanics of deformable solids (74-XX) 2 Quantum theory (81-XX) 1 Number theory (11-XX) 1 Measure and integration (28-XX) 1 Functions of a complex variable (30-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Special functions (33-XX) 1 Sequences, series, summability (40-XX) 1 Functional analysis (46-XX) 1 General topology (54-XX) 1 Manifolds and cell complexes (57-XX) 1 Optics,
electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Relativity and gravitational theory (83-XX)
http://math.stackexchange.com/questions/265426/range-of-operator-mfx-fx-2-x-in0-1
# Range of operator $Mf(x) = f(x/2), \;\; x\in[0,1]$

Let $M: C([0,1]) \rightarrow C([0,1])$ be defined by $$Mf(x) = f(x/2), \;\; x\in[0,1].$$ Prove that the range of $I-M$ does not contain nonzero constant functions, but that it contains every $g \in C([0,1])$ that is differentiable (from the right) at $0$ and satisfies $g(0) = 0$.

My try: if $(I - M)f = \mathrm{const}$, then the derivative $f'(0)$ would have to be infinite, and hence $f$ would not be continuous. This also suggests that differentiability at $0$ and $g(0) = 0$ are necessary conditions for $g \in R(I-M)$, where $R$ denotes the range. But how can I prove that every such function is attained?

- I was aware of this, but I did not know that this contradicted that $(I-M)f(x) = c$ directly. – Johan Dec 26 '12 at 17:43

It is easy to see that $g(0)=0$ for every $g$ in the range of $I-M$. Suppose a nonzero constant $g$ is in the range; then there exists $f$ such that $f(x)-f(x/2)=g$ for all $x$. This implies $f(x)=g+f(x/2)=2g+f(x/4)=\cdots=ng+f(x/2^n)$ for every $n$. Since $f$ is continuous, $f(x/2^n)\to f(0)$ as $n\to\infty$, so $ng = f(x)-f(x/2^n)$ would have to stay bounded, which is absurd.

If $g$ is differentiable at $0$ and $g(0)=0$, then there exists $C$ such that $|g(x)|\leq Cx$ for all $x\in[0,1]$. Now define $f(x)=\sum_{n\geq 0} g\left(\frac{x}{2^n}\right)$. The series converges pointwise (and uniformly), and $(I-M)f=g$ by telescoping.

- You should put absolute values: $|g(x)| \leq Cx$ instead of just $g(x) \leq Cx$. – Ewan Delanoy Dec 26 '12 at 17:38
- Great! Can you please expand how we get $|g(x)| \leq Cx$ and why the convergence is uniform? – Johan Dec 26 '12 at 18:04
- For small $x$ we have $|\frac{g(x)}{x}-g'(0)|\leq 1$, so $|g(x)|\leq (|g'(0)|+1)\,x$. For $x$ bounded away from $0$, the function $g(x)/x$ is continuous on a compact set and hence bounded there. The uniform convergence follows from the Weierstrass M-test (en.wikipedia.org/wiki/Weierstrass_M-test), since $|g(x/2^n)|\leq C/2^n$ on $[0,1]$. – Quimey Dec 26 '12 at 18:10
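The telescoping construction in the answer is easy to sanity-check numerically. The sketch below is ours, not from the post: the function names and the concrete choice $g(x)=\sin x$ (which is continuous, vanishes at $0$, and is differentiable there) are illustrative assumptions. It truncates the series $f(x)=\sum_{n\geq 0} g(x/2^n)$ and verifies $f(x)-f(x/2)=g(x)$ up to the truncation error:

```python
import math

def g(x):
    # any continuous g with g(0) = 0 that is differentiable at 0;
    # sin is just a convenient concrete choice
    return math.sin(x)

def f(x, terms=60):
    # partial sum of the series f(x) = sum_{n>=0} g(x / 2^n)
    return sum(g(x / 2**n) for n in range(terms))

# (I - M)f = g means f(x) - f(x/2) = g(x); the two truncated sums
# telescope to g(x) - g(x / 2^60), an error far below 1e-12 on [0, 1]
for x in [0.0, 0.1, 0.5, 1.0]:
    assert abs((f(x) - f(x / 2)) - g(x)) < 1e-12
```

The truncation error is exactly $g(x/2^{N})$ for $N$ terms, and the bound $|g(x)|\leq Cx$ makes it smaller than $C/2^{N}$ — the same estimate that drives the M-test argument in the comments.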
https://aimsciences.org/article/doi/10.3934/dcdss.2014.7.761
# American Institute of Mathematical Sciences

Discrete & Continuous Dynamical Systems - S, August 2014, 7(4): 761-766. doi: 10.3934/dcdss.2014.7.761

## A clamped plate with a uniform weight may change sign

Hans-Christoph Grunau (Fakultät für Mathematik, Otto-von-Guericke Universität, Postfach 4120, 39016 Magdeburg, Germany) and Guido Sweers (Mathematisches Institut, Universität zu Köln, Weyertal 86-90, 50931 Köln, Germany)

Received July 2013; Revised September 2013; Published February 2014

It is known that the Dirichlet bilaplace boundary value problem, which is used as a model for a clamped plate, is not sign preserving on general domains. It is also known that the corresponding first eigenfunction may change sign. In this note we will show that even a constant right hand side may result in a sign-changing solution.

Citation: Hans-Christoph Grunau, Guido Sweers. A clamped plate with a uniform weight may change sign. Discrete & Continuous Dynamical Systems - S, 2014, 7 (4) : 761-766. doi: 10.3934/dcdss.2014.7.761
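For readers unfamiliar with the model, the Dirichlet bilaplace ("clamped plate") boundary value problem referred to in the abstract has the standard formulation below (the domain $\Omega$ and the load $f$ are generic symbols, not notation taken from the paper itself):

```latex
\[
\begin{cases}
\Delta^{2} u = f & \text{in } \Omega \subset \mathbb{R}^{n}, \\[2pt]
u = \dfrac{\partial u}{\partial \nu} = 0 & \text{on } \partial \Omega.
\end{cases}
\]
```

Sign preservation would mean that a load $f \geq 0$ always produces a deflection $u \geq 0$; the note's result is that this can fail even for the uniform load $f \equiv 1$.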