url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://www.cymath.com/blog/2014-08-18
# Problem of the Week ## Updated at Aug 18, 2014 3:35 PM This week's problem comes from the calculus category. How can we find the derivative of $$\frac{{x}^{4}}{2}$$? Let's begin! $\frac{d}{dx} \frac{{x}^{4}}{2}$ 1 Use the Constant Factor Rule: $$\frac{d}{dx} cf(x)=c\left(\frac{d}{dx} f(x)\right)$$. $\frac{1}{2}\left(\frac{d}{dx} {x}^{4}\right)$ 2 Use the Power Rule: $$\frac{d}{dx} {x}^{n}=n{x}^{n-1}$$. $\frac{1}{2}\left(4{x}^{3}\right)=2{x}^{3}$ Done: 2*x^3
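The two rules above can be sanity-checked numerically; the sketch below (my addition, not part of the original post) compares a central finite difference of $x^4/2$ against the closed form $2x^3$.

```python
# Numerical check that d/dx (x^4 / 2) = 2 x^3, using a central difference.

def f(x):
    return x**4 / 2

def derivative(x, h=1e-6):
    # Central difference: (f(x+h) - f(x-h)) / (2h) approximates f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [0.5, 1.0, 2.0]:
    exact = 2 * x**3
    approx = derivative(x)
    assert abs(approx - exact) < 1e-4, (x, approx, exact)
    print(f"x={x}: numeric {approx:.6f} vs exact {exact:.6f}")
```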
2021-09-24 02:46:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4902612566947937, "perplexity": 5641.8022896078555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00646.warc.gz"}
https://www.ias.ac.in/describe/article/jcsc/121/05/0867-0872
• Electronic structure analysis and vertical ionization energies of thiophene and ethynylthiophenes • # Fulltext https://www.ias.ac.in/article/fulltext/jcsc/121/05/0867-0872 • # Keywords Thiophenic conjugated polymers; electron propagator; ionization energy; Dyson orbital; 𝜋 electron density. • # Abstract Calculations with different decouplings of electron propagator theory, using MP2/6-311G(2df,2p) and MP2/6-311++G(2df,2p) optimized geometries, have been performed to investigate the first eight vertical ionization energies and the corresponding Dyson orbitals. The computed results are in good agreement with experimental ionization energies and help resolve ambiguities in the assignment of the experimental photoelectron spectrum (PES). Detailed examination of the 𝜋-orbital density distribution of the Dyson orbitals clarifies the PES assignments and offers new insight into the topology of the ring 𝜋 and ethynyl $\pi_{C-C}$ electron density distributions, which may be tapped for improved nonlinear optical/electrochemical response from thiophenic conjugated polymers. • # Author Affiliations 1. Department of Chemistry, Indian Institute of Technology Bombay, Powai 400 076 • # Journal of Chemical Sciences
2023-03-20 22:16:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39827844500541687, "perplexity": 9308.934676234005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00317.warc.gz"}
http://www.sceneadvisor.com/Wisconsin/minimizing-mean-square-error.html
# minimizing mean square error

Moving on to your question. First add and subtract $E[Y \mid X]$:

$$E\left[\left\lbrace(Y - E[Y \mid X]) - (f(X) - E[Y \mid X])\right\rbrace^2\right]$$

Expanding the quadratic yields:

$$E\left[(Y - E[Y \mid X])^2\right] + E\left[(f(X) - E[Y \mid X])^2\right] - 2\,E\left[(Y - E[Y \mid X])(f(X) - E[Y \mid X])\right].$$

For the cross term, condition on $X$. Since $f(X) - E[Y \mid X]$ is a function of $X$, it can be pulled out of the conditional expectation (in general the expectation of a product is not the product of expectations, which is why the conditioning step matters). This gives $$E\big[(Y - E[Y \mid X])(f(X) - E[Y \mid X]) \,\big|\, X\big] = (f(X) - E[Y \mid X])\,E\big[Y - E[Y \mid X] \,\big|\, X\big],$$ and $$E(Y - E(Y \mid X) \mid X) = E(Y \mid X) - E(E(Y \mid X) \mid X) = E(Y \mid X) - E(Y \mid X) = 0.$$ Hence the cross term $2(Y - E[Y \mid X])(f(X) - E[Y \mid X])$ has expectation $0$, and the MSE is minimized by taking $f(X) = E[Y \mid X]$. This is known as the CEF prediction property, and in class you usually show it to motivate least squares as the projection of $Y$ on $X$.

In estimation language: the mean squared error (MSE) of an estimator $\hat{X} = g(Y)$ of $X$ is defined as \begin{align} E[(X-\hat{X})^2]=E[(X-g(Y))^2]. \end{align} The MMSE estimator of $X$, \begin{align} \hat{X}_{M}=E[X|Y], \end{align} has the lowest MSE among all possible estimators. First, note that \begin{align} E[\hat{X}_M]&=E[E[X|Y]]\\ &=E[X] \quad \textrm{(by the law of iterated expectations)}. \end{align} Therefore, $\hat{X}_M=E[X|Y]$ is an unbiased estimator of $X$. Next, writing $\tilde{X} = X - \hat{X}_M$ and $W = E[\tilde{X}|Y]$, we have $W = 0$, so for any function $g(Y)$, \begin{align} E[\tilde{X} \cdot g(Y)|Y]&=g(Y) E[\tilde{X}|Y]\\ &=g(Y) \cdot W=0. \end{align} By the law of iterated expectations, \begin{align} E[\tilde{X} \cdot g(Y)]=E\big[E[\tilde{X} \cdot g(Y)|Y]\big]=0. \end{align} This is the orthogonality principle; a shorter, non-numerical example can be found in the article on the orthogonality principle. In other words, if $\hat{X}_M$ captures most of the variation in $X$, then the error will be small.

In the Bayesian setting, the term MMSE more specifically refers to estimation with a quadratic cost function. Direct numerical evaluation of the conditional expectation is computationally expensive, since it often requires multidimensional integration, usually done via Monte Carlo methods. While these numerical methods have been fruitful, a closed-form expression for the MMSE estimator is possible if we are willing to make some compromises and restrict attention to linear estimators. The linear MMSE estimate takes the form $\hat{x} = W(y - \bar{y}) + \bar{x}$. To find $W$, just expand the objective, $(s-Wy)'(s-Wy)=(s'-y'W')(s-Wy)=s's-s'Wy-y'W's+y'W'Wy$, differentiate with respect to $W$, and set the gradient to $0$. (In linear regression the same quadratic form appears, but it is optimized with respect to the coefficients, not $W$.)

Example: let $x$ denote the sound produced by a musician, a random variable with zero mean and variance $\sigma_X^2$, observed through two noisy microphones $y_1$ and $y_2$. We can combine the two sounds as $y = w_1 y_1 + w_2 y_2$, where the $i$-th weight is inversely proportional to the noise variance of channel $i$, and obtain the LMMSE estimate as a linear combination of $y_1$ and $y_2$. Another example: let the fraction of votes that a candidate will receive on an election day be $x \in [0,1]$. Since some error is always present due to finite sampling and the particular polling methodology adopted, the first pollster declares their estimate to have an error $z_1$ with some variance. Had the random variable $x$ also been Gaussian, then the linear estimator would have been optimal; e.g. let the noise vector $z$ be normally distributed as $N(0, \sigma_Z^2 I)$, where $I$ is an identity matrix.

Sequential linear MMSE estimation: in many real-time applications, observational data are not available in a single batch; instead, the observations are made in a sequence. When the observations are scalar quantities, one possible way of avoiding such re-computation is to first concatenate the entire sequence of observations and then apply the standard estimation formula, but a recursive update is cheaper. Suppose an optimal estimate $\hat{x}_1$ has been formed on the basis of past measurements, with error covariance matrix $C_{e_1}$. The new estimate based on additional data is then $$\hat{x}_2 = \hat{x}_1 + C_{X\tilde{Y}}\, C_{\tilde{Y}}^{-1}\, \tilde{y},$$ where $\tilde{y}$ is the innovation in the new measurement; after the $(m+1)$-th observation, direct use of the recursive equations gives the expression for the estimate $\hat{x}_{m+1}$.

Further reading: Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall. ISBN 0-13-042268-1.
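The two-observation combining discussed above can be illustrated with a small simulation. The sketch below (my own illustration, not from the page) combines two unbiased noisy observations with inverse-variance weights, the weighting the LMMSE combination applies to the observations, and checks empirically that it beats naive weightings. The signal prior is ignored here, so this shows only the observation-combining step.

```python
# Sketch: for two unbiased noisy observations y_i = x + z_i with
# Var(z_i) = s_i^2, weight each observation by the inverse of its
# noise variance to minimize the mean squared error of the combination.
import random

random.seed(0)
s1, s2 = 1.0, 2.0                        # noise std devs of the two "pollsters"
w1 = (1 / s1**2) / (1 / s1**2 + 1 / s2**2)   # inverse-variance weight = 0.8

def mse(w, trials=20000):
    total = 0.0
    for _ in range(trials):
        x = random.gauss(0, 1)           # unknown signal
        y1 = x + random.gauss(0, s1)
        y2 = x + random.gauss(0, s2)
        est = w * y1 + (1 - w) * y2
        total += (est - x) ** 2
    return total / trials

# Theoretical MSEs: 0.8 for w1, 1.25 for equal weights, 1.0 for y1 alone.
print(mse(w1), mse(0.5), mse(1.0))
```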
2019-04-24 22:02:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9818823933601379, "perplexity": 1270.6925532932098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578663470.91/warc/CC-MAIN-20190424214335-20190425000335-00419.warc.gz"}
https://socratic.org/questions/how-do-you-solve-x-2-x-3-4
# How do you solve x^2 + x = 3/4? $\implies 4 {x}^{2} + 4 x - 3 = 0$ $\implies 4 {x}^{2} + 6 x - 2 x - 3 = 0$ $\implies 2 x \left(2 x + 3\right) - 1 \left(2 x + 3\right) = 0$ $\implies \left(2 x + 3\right) \left(2 x - 1\right) = 0$ $\therefore x = - \frac{3}{2} \text{ or } x = \frac{1}{2}$
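The factoring above can be cross-checked with the quadratic formula; a quick sketch (my addition, not from the original answer):

```python
# Verify the roots of 4x^2 + 4x - 3 = 0, i.e. x^2 + x = 3/4.
import math

a, b, c = 4, 4, -3
disc = b * b - 4 * a * c                     # 16 + 48 = 64
roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)])
print(roots)  # → [-1.5, 0.5]
for x in roots:
    assert abs(x**2 + x - 3 / 4) < 1e-12     # each root satisfies x^2 + x = 3/4
```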
2019-12-11 08:08:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23388999700546265, "perplexity": 1303.4179406288065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00311.warc.gz"}
http://taoofmac.com/space/apps/ImageOptim
# ImageOptim ImageOptim is a graphical queue/wrapper for a number of image optimization utilities. It works well for JPEG and PNG images but exhibits a tendency to corrupt old GIF files, so use with caution.
2016-09-24 22:35:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8754599094390869, "perplexity": 5797.915876509812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659512.19/warc/CC-MAIN-20160924173739-00244-ip-10-143-35-109.ec2.internal.warc.gz"}
http://openstudy.com/updates/53e512fae4b0e7ddacf65c72
## fulltilt 4 months ago Limit of sin(3x)/(2x) as x approaches 0 • This Question is Open 1. agreene try graphing it. 2. myininaya do you know how to evaluate $\lim_{u \rightarrow 0}\frac{\sin(u)}{u}?$ 3. myininaya This limit should already have been introduced to you via the squeeze theorem (or at least that is the first way I learned it): $\lim_{u \rightarrow 0}\frac{\sin(u)}{u}=1$ Commit this limit to memory; it will be useful. Let's take this limit and see if I can give you a hint on how to do your problem. Since there is a $3x$ inside that sin, let's let $u$ equal $3x$: if $u$ goes to $0$ then $3x$ goes to $0$, and since $3$ doesn't go to $0$, the $x$ must go to $0$. So anyway, $\lim_{x \rightarrow 0}\frac{\sin(3x)}{3x}=1$ Try to use this limit for your problem; you may find it necessary to multiply by a (cleverly chosen) $1$ to do so.
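myininaya's hint boils down to $\frac{\sin(3x)}{2x} = \frac{3}{2}\cdot\frac{\sin(3x)}{3x} \to \frac{3}{2}$. A quick numerical sketch (my addition, not from the thread):

```python
# sin(3x)/(2x) = (3/2) * sin(3x)/(3x) -> 3/2 as x -> 0.
import math

for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(3 * x) / (2 * x))   # values approach 1.5

assert abs(math.sin(3 * 1e-6) / (2 * 1e-6) - 1.5) < 1e-9
```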
2014-12-19 08:46:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9217053055763245, "perplexity": 607.3427201267064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768352.71/warc/CC-MAIN-20141217075248-00039-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.math.ru.nl/~sagave/geometry-seminar/abstracts/2018-12-20-van_dobben_de_bruyn.html
## Geometry Seminar - Abstracts ### Talk Thursday 20 December 2018, 16:00-17:00 in HG03.085 Remy van Dobben de Bruyn (IAS) A variety that cannot be dominated by one that lifts ### Abstract In the sixties, Serre constructed a smooth projective variety in characteristic $$p$$ that cannot be lifted to characteristic $$0$$. If a variety does not lift, a natural question is whether some variety related to it does. After a brief survey of Serre's example and generalities on lifting, we will construct a smooth projective variety that cannot be dominated by a smooth projective variety that lifts. (Back to geometry seminar schedule)
2022-06-25 11:43:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6909434795379639, "perplexity": 879.3852431791954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00001.warc.gz"}
https://dbfin.com/logic/enderton/chapter-2/section-2-2-truth-and-models/problem-5-solution/
# Section 2.2: Problem 5 Solution Working problems is a crucial part of learning mathematics. No one can learn... merely by poring over the definitions, theorems, and examples that are worked out in the text. One must work part of it out for oneself. To provide that opportunity is the purpose of the exercises. James R. Munkres Show that the formula (where is a one-place function symbol and is a two-place predicate symbol) is valid. Let be a structure and . Then, iff or iff or or . So, assume that does not hold, i.e. . Then , implying that iff iff iff , i.e. if then at least one of the other two conditions holds. So, we have that for every and , .
2021-06-23 12:03:38
{"extraction_info": {"found_math": true, "script_math_tex": 22, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9821321964263916, "perplexity": 708.6801163013673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488538041.86/warc/CC-MAIN-20210623103524-20210623133524-00434.warc.gz"}
https://www.semanticscholar.org/paper/Strong-pseudoprimes-to-twelve-prime-bases-Sorenson-Webster/629d1e6427d15a201de098c3359962a95a6a4161
# Strong pseudoprimes to twelve prime bases @article{Sorenson2017StrongPT, title={Strong pseudoprimes to twelve prime bases}, author={J. Sorenson and Jonathan Webster}, journal={Math. Comput.}, year={2017}, volume={86}, pages={985-1003} } • Published 2 September 2015 • Computer Science, Mathematics • Math. Comput. Let $\psi_m$ be the smallest strong pseudoprime to the first $m$ prime bases. This value is known for $1 \leq m \leq 11$. We extend this by finding $\psi_{12}$ and $\psi_{13}$. We also present an algorithm to find all integers $n\le B$ that are strong pseudoprimes to the first $m$ prime bases; with a reasonable heuristic assumption we can show that it takes at most $B^{2/3+o(1)}$ time. 18 Citations #### Figures and Topics from this paper

- Two Algorithms to Find Primes in Patterns (Math. Comput., 2020). Two algorithms are presented that find all integers $x$ where $\max_i f_i(x) \le n$ and all the $f_i(x)$ are prime; correctness is proved unconditionally, but the running time relies on two unproven but reasonable conjectures.
- Fast tabulation of challenge pseudoprimes (The Open Book Series, 2019). We provide a new algorithm for tabulating composite numbers which are pseudoprimes to both a Fermat test and a Lucas test. Our algorithm is optimized for parameter choices that minimize the …
- An Algorithm and Estimates for the Erdős-Selfridge Function (work in progress) (Open Book Series, 2020). A new algorithm to compute the value of $g(k)$ is presented, computational evidence is provided that $\hat{g}(k)$ estimates $g(k)$ reasonably well in practice, and it is proved that for large $x$, $G(x,k)$ is asymptotic to $x/\hat{g}(k)$.
- Speeding up decimal multiplication. This paper focuses on the number-theoretic transform (NTT) family of algorithms, achieves a 3x-5x speedup over the mpdecimal library, and presents a simple cache-efficient algorithm for in-place matrix transposition.
- The development of an effective algorithm searching for strong pseudoprime numbers. The problem of searching for strong pseudoprime numbers is relevant in the field of number theory, and it also has a number of applications in cryptography: in particular, with the help of numbers …
- Tabulating Pseudoprimes and Tabulating Liars. The asymptotic complexity of two problems related to the Miller-Rabin-Selfridge primality test is explored: tabulating strong pseudoprimes to a single fixed base $a$, and finding all strong liars and witnesses for a given fixed odd composite $n$.
- Two Algorithms to Find Primes in Patterns (arXiv preprint, [math.NT], 31 Oct 2019). Let $k \ge 1$ be an integer, and let $P = (f_1(x), \ldots, f_k(x))$ be $k$ admissible linear polynomials over the integers, or the pattern. We present two algorithms that find all integers $x$ where $\max\{f_i(x)\}$ …
- On the Number of Witnesses in the Miller-Rabin Primality Test (Symmetry, 2020). The average probability of error in the Miller-Rabin primality test is studied, and it is shown to decrease as the length of the tested integers increases, which allows the error estimates for the test to be reduced and its efficiency increased.
- Speeding up decimal multiplication (arXiv preprint, Nov 2020). Decimal multiplication is the task of multiplying two numbers in base $10^N$. Specifically, we focus on the number-theoretic transform (NTT) family of algorithms. Using only portable techniques, we …
- The Error Probability of the Miller-Rabin Primality Test (Lobachevskii Journal of Mathematics, 2018). We give theoretical and practical estimates of the error probability in the well-known Miller-Rabin probabilistic primality test. We show that a theoretical probability of error 0.25 …

#### References SHOWING 1-10 OF 33 REFERENCES

- On strong pseudoprimes to several bases. With $\psi_k$ denoting the smallest strong pseudoprime to all of the first $k$ primes taken as bases, we determine the exact values of $\psi_5$, $\psi_6$, $\psi_7$, $\psi_8$ and give upper bounds for $\psi_9$, $\psi_{10}$, $\psi_{11}$. We discuss the …
- Strong pseudoprimes to the first eight prime bases (Math. Comput., 2014). A 19-decimal-digit number $Q_{11} = 3825\,12305\,65464\,13051$ is found which is a strong pseudoprime to the first 11 prime bases. Z. Zhang conjectured that $\psi_9 = \psi_{10} = \psi_{11} = Q_{11}$, and this conjecture is proved algorithmically.
- Two kinds of strong pseudoprimes up to $10^{36}$. Let $n > 1$ be an odd composite integer. Write $n - 1 = 2^s d$ with $d$ odd. If either $b^d \equiv 1 \pmod{n}$ or $b^{2^r d} \equiv -1 \pmod{n}$ for some $r = 0, 1, \ldots, s-1$, then we say that $n$ is a strong pseudoprime to base $b$ …
- The Pseudosquares Prime Sieve. The pseudosquares prime sieve is presented, which finds all primes up to $n$ in sublinear time using very little space; the primes generated by the algorithm are proven prime unconditionally.
- On the difficulty of finding reliable witnesses (ANTS, 1994). It is shown that there are finite sets of odd composites which do not have a reliable witness, namely a common witness for all of the numbers in the set.
- A Wieferich Prime Search up to $6.7 \times 10^{15}$ (2011). A Wieferich prime is a prime $p$ such that $2^{p-1} \equiv 1 \pmod{p^2}$. Despite several intensive searches, only two Wieferich primes are known: $p = 1093$ and $p = 3511$. This paper describes a new search …
- A Space-Efficient Fast Prime Number Sieve (Inf. Process. Lett., 1996). A new algorithm is presented that matches the running time of the best previous prime number sieve, but uses less space by a factor of $\Theta(\log n)$.
- Explicit bounds for primality testing and related problems. Many number-theoretic algorithms rely on a result of Ankeny, which states that if the Extended Riemann Hypothesis (ERH) is true, any nontrivial multiplicative subgroup of the integers modulo $m$ omits …
- A Binary Recursive Gcd Algorithm (ANTS, 2004). This work presents a quasi-linear time recursive algorithm that computes the greatest common divisor of two integers by simulating a slightly modified version of the binary algorithm.
- On the Order of Finitely Generated Subgroups of $\mathbb{Q}^*$ (mod $p$) and Divisors of $p-1$. Let $\Gamma$ be a finitely generated subgroup of $\mathbb{Q}^*$ with rank $r$. We study the size of the order $|\Gamma_p|$ of $\Gamma$ mod $p$ for density-one sets of primes. Using a result on the scarcity of primes $p \le x$ for which $p-1$ …
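The "strong pseudoprime" notion used throughout these abstracts can be made concrete with a short implementation. The sketch below (my illustration, not the paper's tabulation algorithm) implements the strong probable-prime test and shows why $\psi_1 = 2047$: the composite $2047 = 23 \cdot 89$ passes the base-2 test.

```python
# n (odd, > 2) is a strong probable prime to base b if, writing
# n - 1 = 2^s * d with d odd, either b^d ≡ 1 (mod n) or
# b^(2^r * d) ≡ -1 (mod n) for some 0 <= r < s.

def is_strong_probable_prime(n, b):
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(b, d, n)                 # built-in modular exponentiation
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

# 2047 = 23 * 89 is composite, yet a strong pseudoprime to base 2
# (it is the smallest such number, psi_1); base 3 exposes it.
print(is_strong_probable_prime(2047, 2))   # → True
print(is_strong_probable_prime(2047, 3))   # → False
```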
2021-12-06 21:16:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7967224717140198, "perplexity": 1215.7787339073034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363312.79/warc/CC-MAIN-20211206194128-20211206224128-00587.warc.gz"}
https://socratic.org/questions/how-do-you-use-the-shell-method-to-set-up-and-evaluate-the-integral-that-gives-t-13
# How do you use the shell method to set up and evaluate the integral that gives the volume of the solid generated by revolving the plane region y=1, y=x^2, and x=0 rotated about the line y=2? Sep 22, 2015 $\frac{28 \pi}{15}$ cubic units #### Explanation: Since we are revolving around a horizontal line, using the method of shells we integrate with respect to $y$. We are bounded by the $y$-axis, the horizontal line $y = 1$, and the function $y = {x}^{2}$. Solve $y = {x}^{2}$ for $x$: $x = \sqrt{y}$. We are in quadrant $I$, so we do not have to worry about the negative square root. Our representative cylinder height is $\sqrt{y}$, and our representative radius is $2 - y$, over the interval $0 \le y \le 1$. The integral for the volume is $2 \pi {\int}_{0}^{1} \left(2 - y\right) \left({y}^{\frac{1}{2}}\right) \mathrm{dy} = 2 \pi {\int}_{0}^{1} 2 {y}^{\frac{1}{2}} - {y}^{\frac{3}{2}} \mathrm{dy}$ Integrating we get $2 \pi \left[\frac{4}{3} {y}^{\frac{3}{2}} - \frac{2}{5} {y}^{\frac{5}{2}}\right]_{0}^{1}$ Evaluating we get $2 \pi \left[\frac{4}{3} - \frac{2}{5} - 0\right]$ $2 \pi \left[\frac{20}{15} - \frac{6}{15}\right] = 2 \pi \left[\frac{14}{15}\right] = \frac{28 \pi}{15}$ cubic units
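The shell integral above can be verified numerically; a quick midpoint-rule sketch (my addition, not part of the original answer):

```python
# Numeric check that 2*pi * ∫_0^1 (2 - y) * sqrt(y) dy = 28*pi/15.
import math

def shell_integrand(y):
    # radius (2 - y) times height sqrt(y)
    return (2 - y) * math.sqrt(y)

# Midpoint rule; midpoints avoid the y = 0 endpoint where sqrt'(y) blows up.
n = 200000
h = 1.0 / n
total = sum(shell_integrand((i + 0.5) * h) for i in range(n)) * h
volume = 2 * math.pi * total
print(volume, 28 * math.pi / 15)
assert abs(volume - 28 * math.pi / 15) < 1e-4
```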
2022-01-17 16:22:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507809281349182, "perplexity": 306.3109816689235}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00707.warc.gz"}
https://ltwork.net/three-fifths-of-the-members-of-the-spanish-club-are-girls--3816972
# Three-fifths of the members of the Spanish club are girls. There are a total of 30 girls in the Spanish club. Which statements ###### Question: Three-fifths of the members of the Spanish club are girls. There are a total of 30 girls in the Spanish club. Which statements can be used to solve for x, the total number of members in the Spanish club? Select three options. $\frac{3}{5}x = 30$ … $x = 50$
### Which one of the following groups did NOT leave their party to form the Republican party? Group of answer choices: Free-Soilers, Democrats, Whigs

Which one of the following groups did NOT leave their party to form the Republican party? Group of answer choices Free-Soilers Democrats Whigs...

### What other personal history led to Eleanor Roosevelt’s influence?

What other personal history led to Eleanor Roosevelt’s influence?...

### What is the solution to the system y + x = 3 and y - x = 1? *

What is the solution to the system y + x = 3 and y - x = 1? *...

### We often in slides and homework questions have neglected to include the normalization constants that are needed to make a smoothing

We often in slides and homework questions have neglected to include the normalization constants that are needed to make a smoothing filter's coefficients add up to one, or that allow a derivative filter to compute the correct gradient magnitude (which would indicate the actual change in intensity val...

### Mia is 6.94512 × 10⁶ minutes old. Convert her age to more appropriate units using years, months, and days.

Mia is 6.94512 × 10⁶ minutes old. Convert her age to more appropriate units using years, months, and days. Assume a year has 365 days and a month has 30 days. Mia is ? years, ? months, and ? days old...

### What was the importance of the navigation acts?

What was the importance of the navigation acts? a. the navigation acts controlled all colonial trade. b. the navigation acts permitted the colonial ships from trading with certain countries. c. the navigation acts set limits to what the colonists were allowed trade with others. d. the navigation ac...

### Suppose a ceiling fan manufacturer has the total cost function c(x) = 48x + 1485 and

Suppose a ceiling fan manufacturer has the total cost function c(x) = 48x + 1485 and the total revenue function r(x) = 75x. (a) what is the equation of the profit function p(x) for this commodity? p(x) = (b) what is the profit on 35 units? p(35) = interpret your result.
the total costs are less th...

### (a) list the equally likely events for the gender of the 4 children, from oldest to youngest. (let m

(a) list the equally likely events for the gender of the 4 children, from oldest to youngest. (let m represent a boy (male) and f represent a girl (female). select all that apply.) fmfm fmff two m's, two f's mfff fffm mfmf mmff mmmf fmmf mffm ffmf ffmm three m's, one f mmfm mfmm one m, three f's fm...

### /5K+12/=2/4K-1/ lines aren’t absolute value

/5K+12/=2/4K-1/ lines aren’t absolute value...

### Write a sentence about the political ideas of the English colonist using the term representative government

Write a sentence about the political ideas of the English colonists using the term representative government...
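Two of the quantitative questions above can be worked through directly. The sketch below (my own illustration; these values are not from the site's answer key) converts the 6.94512 × 10⁶-minute age using the stated conventions and solves the linear system y + x = 3, y − x = 1:

```python
# Convert 6.94512e6 minutes to years, months, and days,
# using the stated conventions: 1 year = 365 days, 1 month = 30 days.
minutes = 6.94512e6
total_days = int(minutes / 60 / 24)    # 4823 whole days
years, rem = divmod(total_days, 365)   # 13 years with 78 days left over
months, days = divmod(rem, 30)         # 2 months and 18 days
print(years, months, days)             # -> 13 2 18

# Solve y + x = 3 and y - x = 1 by adding the equations: 2y = 4.
y = (3 + 1) / 2
x = 3 - y
print(x, y)                            # -> 1.0 2.0
```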
http://mathhelpforum.com/advanced-statistics/222871-cumulative-distribution-functions.html
Math Help - Cumulative Distribution Functions

1. Cumulative Distribution Functions

I can't remember anything about CDF's. Please can someone help with this question!!!

Obtain the cumulative distribution function of the following discrete random variables

(A) Bin(3, theta)
(B) Unif(0, m)
(C) Geom(theta)

The question did give hints but they don't help me whatsoever:

HINT: First calculate F at the integers. (Distinguish the index of the summation from the upper limit of summation.) Secondly, extend from the integers to the real line.

Any help would be appreciated!!!

2. Re: Cumulative Distribution Functions

Hey Matt1993.

Hint: For discrete distributions P(X <= x) = Sigma (i = 0 to x) P(X = i) [assuming that the first event is X = 0]. For continuous distributions, P(X < x) = Integral [-infinity, x] f(u)du. These are standard definitions.

Given the above, what is P(X = x) for (a) and (c), and what is f(u) for (b)?

3. Re: Cumulative Distribution Functions

Ok so

For Bin(3, theta): p(r) = (nCr)*(theta)^r*(1 - theta)^(n-r)

For Unif(0, m): p(r) = 1/(m+1) for r = 0, 1, 2, 3, ..., m, or 0 otherwise

For Geom(theta): p(r) = (1 - theta)^r * theta for r = 0, 1, 2, 3, ...

Where would I go from here? Thanks for this.

4. Re: Cumulative Distribution Functions

For the discrete case you need to sum the probabilities of all values less than or equal to a particular value. So for the Binomial (and other discrete distributions) you have P(X <= 3) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3).

For a continuous uniform you have P(X < x) = Integral [0,x) f(u)du = Integral [0,x) 1*du.

Remember that for discrete you add up all individual cases that satisfy the inequality, and for continuous you use the integral definition I provided above.
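Following the hints in the thread, here is a short sketch (my own illustration, with θ = 0.4 chosen arbitrarily) that builds the CDF of Bin(3, θ) at the integers by summing the pmf:

```python
from math import comb

def binom_pmf(r, n, theta):
    # P(X = r) for X ~ Bin(n, theta)
    return comb(n, r) * theta**r * (1 - theta)**(n - r)

def binom_cdf(x, n, theta):
    # F(x) = P(X <= x): sum the pmf over the integers i = 0..floor(x).
    # Note the summation index i is distinct from the upper limit x.
    if x < 0:
        return 0.0
    return sum(binom_pmf(i, n, theta) for i in range(min(int(x), n) + 1))

theta = 0.4
vals = [binom_cdf(x, 3, theta) for x in (0, 1, 2, 3)]
# F should be non-decreasing, and F(3) must equal 1
print(vals)
```

Extending from the integers to the real line is then just a step function: F is constant between integers, which `int(x)` already handles above.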
https://oceanopticsbook.info/view/light-and-radiometry/level-2/cherenkov-radiation
Page updated: October 11, 2021 Author: Curtis Mobley Cherenkov radiation (Cherenkov (1934)) is electromagnetic radiation emitted by charged particles traveling faster than the phase speed of light in a dielectric medium such as water. It can be qualitatively thought of as the optical equivalent of the acoustic shock wave (“sonic boom”) generated by an airplane flying faster than the speed of sound in air. Cherenkov radiation is the cause of the beautiful blue glow around the core of a water-cooled nuclear reactor as shown in Fig. 1. [The emission of light by a charged particle moving faster than the speed of light in a medium was first predicted by Oliver Heaviside in a series of papers starting in 1888. Arnold Sommerfeld independently predicted the effect in 1904, and Marie Curie observed the glow of light in radium solutions. However, these earlier results were unappreciated and forgotten until Cherenkov’s observations in the 1930s. You will see Cherenkov’s name romanized in various ways; the accent is on the last syllable.] Bradner et al. (1987) examined physical sources of light in the ocean. They estimated that near the sea surface, Cherenkov radiation from cosmic rays generates a photon irradiance of order $10^{7}\ \mathrm{photons\ s^{-1}\ m^{-2}}$; this value decreases exponentially with depth with an e-folding distance of about 1 km. However, another source of Cherenkov radiation is the decay of radioactive potassium-40, which is distributed uniformly throughout the oceans as part of the dissolved salts that make up salinity. The Earth’s crust contains potassium at a concentration of about 2.6% by mass. This potassium occurs as three isotopes: stable ${}^{39}K$ (93.2581%) and ${}^{41}K$ (6.7302%), and radioactive ${}^{40}K$ (0.0117%).
${}^{40}K$ decays 89.1% of the time to ${}^{40}Ca$ by emission of an electron and an electron anti-neutrino (beta decay), and 10.9% of the time to ${}^{40}Ar$ by capture of an inner-shell electron, followed by emission of a gamma ray and a neutrino. The half-life of ${}^{40}K$ is 1.25 Gy. [Earth’s atmosphere is 0.94% argon, of which 99.6% is ${}^{40}Ar$. Spectroscopy shows that the argon in stars is 85% ${}^{36}Ar$, which is created by fusion of two alpha particles (helium nuclei) with one silicon-28 nucleus during supernova explosions, and 15% is ${}^{38}Ar$. It is thus thought that the argon in Earth’s atmosphere has accumulated over billions of years from the decay of ${}^{40}K$.] When ${}^{40}K$ decays to ${}^{40}Ca$, the emitted electron and anti-neutrino carry a combined kinetic energy of 1.31 MeV or $2.1\times 10^{-13}\ \mathrm{J}$. There is a continuous distribution of the electron kinetic energy ranging from 0 to a maximum of 1.31 MeV, with the peak at about 0.55 MeV (Kelly et al. (1959)). [It is a characteristic of beta decay that the emitted electrons have a continuous spectrum of energies from 0 to some maximum. This contrasts with alpha decay, in which the emitted alpha particles have a single energy determined by the quantized energy levels of the decaying nucleus.] The associated speed of the electron for a given kinetic energy can be obtained from the formula for relativistic kinetic energy (e.g., Section 42-14 of Halliday and Resnick (1988)): $KE = m_o c^2\left(\frac{1}{\sqrt{1-(v/c)^2}}-1\right),$ where $m_o$ is the rest mass of the particle, $c$ is the speed of light, and $v$ is the speed of the particle. In relativity theory, it is customary to let $\beta = v/c$ be the speed of a particle relative to the speed of light in a vacuum.
Solving this equation for $\beta^2 = (v/c)^2$ gives

$\beta^2 = 1 - \left(\frac{KE}{m_o c^2} + 1\right)^{-2}. \qquad (1)$

Using the values in Table 1 gives $\beta^2 = 0.918$, or $v = 0.958c$, for a 1.31 MeV electron. The phase speed of light in water is $c/n$, which is approximately $0.75c$ at visible wavelengths. Thus the emitted electron is traveling faster than the speed of light in water and will therefore emit Cherenkov radiation until the electron slows down to less than $c/n$ through loss of radiated energy and other interactions with the water.

| Symbol | Quantity | Value |
| --- | --- | --- |
| $m_o$ | rest mass of the electron | $9.109\times 10^{-31}\ \mathrm{kg}$ |
| $e$ | charge of the electron | $1.602\times 10^{-19}\ \mathrm{C}$ |
| $c$ | speed of light in vacuo | $2.998\times 10^{8}\ \mathrm{m\ s^{-1}}$ |
| $h$ | Planck constant | $6.626\times 10^{-34}\ \mathrm{J\ s}$ |
| $\mu_o$ | magnetic permeability of free space | $4\pi\times 10^{-7}\ \mathrm{N\ s^{2}\ C^{-2}}$ |
| $KE$ | kinetic energy of emitted electron | $1.31\ \mathrm{MeV} = 2.099\times 10^{-13}\ \mathrm{J}$ |
| $n$ | real index of refraction of water | see Fig. 2 |

Table 1: Quantities needed for Cherenkov radiation calculations of ${}^{40}K$ decay.
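Equation (1) is easy to evaluate directly. The following sketch uses the constants of Table 1 to check that a 1.31 MeV electron is indeed relativistic enough to radiate in water:

```python
# Evaluate beta^2 = 1 - (KE/(m0*c^2) + 1)^(-2)   (Eq. 1)
m0 = 9.109e-31    # electron rest mass, kg
c  = 2.998e8      # speed of light in vacuo, m/s
KE = 2.099e-13    # 1.31 MeV expressed in joules

rest_energy = m0 * c**2                        # ~8.19e-14 J (0.511 MeV)
beta2 = 1.0 - (KE / rest_energy + 1.0)**-2
print(beta2)      # ~0.92, i.e. v ~ 0.96c, well above c/n ~ 0.75c in water
```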
The energy radiated by a single ${}^{40}K$ electron per unit of distance traveled ($x$, in meters) and per unit angular frequency ($\omega$, in radians per second) of the emitted light is given by the celebrated formula of Frank and Tamm (1937):

$\frac{\partial^2 E}{\partial x\,\partial\omega} = \frac{1}{4\pi}\, e^2 \mu_w(\omega)\,\omega\left(1-\frac{1}{\beta^2 n^2(\omega)}\right) \quad \left[\frac{\mathrm{J}}{\mathrm{m\ s^{-1}}}\right]. \qquad (2)$

Here $\mu_w(\omega)$ is the frequency-dependent magnetic permeability of the medium. [Although the end result is simple, the derivation of Eq. (2) starting from Maxwell’s equations is quite difficult (e.g., Sections 13.4 and 14.9 of Jackson (1962)). Pavel Cherenkov, Ilya Frank, and Igor Tamm shared the 1958 Nobel Prize in Physics “for the discovery and the interpretation of the Cherenkov effect.”] Using $\omega = 2\pi c/\lambda$ to convert the Frank and Tamm formula to energy emitted per unit distance per unit wavelength gives

$\frac{\partial^2 E}{\partial x\,\partial\lambda} = \pi c^2 e^2 \mu_w(\lambda)\,\frac{1}{\lambda^3}\left(1-\frac{1}{\beta^2 n^2(\lambda)}\right) \quad \left[\frac{\mathrm{J}}{\mathrm{m\ m}}\right]. \qquad (3)$

Converting this formula from energy emitted to number $N$ of photons emitted via $E = Nhc/\lambda$ gives

$\frac{\partial^2 N}{\partial x\,\partial\lambda} = \frac{\pi c}{h}\, e^2 \mu_w(\lambda)\,\frac{1}{\lambda^2}\left(1-\frac{1}{\beta^2 n^2(\lambda)}\right) \quad \left[\frac{\mathrm{photons}}{\mathrm{m\ m}}\right]. \qquad (4)$

The magnetic permeability $\mu_w$ is a function of frequency (or wavelength), but for
water its value equals that of a vacuum, $\mu_o$, the permeability of free space, to within 8 parts per million. Therefore we can replace $\mu_w$ with $\mu_o$ in these equations with negligible error. The observant reader will then note that

$\alpha = \frac{c\,e^2\,\mu_o}{2h} = 0.007297 \approx \frac{1}{137}$

is the dimensionless fine-structure constant of quantum theory. The last equation therefore can be succinctly written as

$\frac{\partial^2 N}{\partial x\,\partial\lambda} = 2\pi\alpha\,\frac{1}{\lambda^2}\left(1-\frac{1}{\beta^2 n^2(\lambda)}\right). \qquad (5)$

In these formulas, $x$ and $\lambda$ are in meters. These formulas show a remarkable feature of Cherenkov radiation, namely that the emission is broad-band and increases rapidly going from visible to ultraviolet wavelengths. As seen in Fig. 2, the real index of refraction of water decreases rapidly between 130 nm, where $n\approx 1.63$, and 71 nm, where $n$ drops to less than 1. [The discussion in Section TBD explains that although $n<1$ gives a phase speed $c/n$ greater than the speed of light in vacuo, this is not a violation of special relativity.] This gives a sharp radiation cut-off at wavelengths less than about 100 nm because as $n$ approaches 1, the speed of light in water approaches the speed in vacuo, in which case the electron is always traveling slower than light in the water, and there is no emitted radiation. For infrared and longer wavelengths, the emission is small and goes to zero as the wavelength increases. The red curves in Fig. 3 show Eqs. (3) and (5) using the wavelength-dependent $n(\lambda)$ seen in Fig. 2 for the initial electron energy of 1.31 MeV.
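As a rough numerical illustration of Eq. (5) (my own sketch, which assumes a constant visible-band index of refraction $n = 1.34$ rather than the full $n(\lambda)$ curve of Fig. 2), the photon yield integrated over 400-700 nm comes out to order $10^4$ photons per meter of electron travel:

```python
# Integrate Eq. (5), d^2N/(dx dlambda) = 2*pi*alpha/lambda^2 * (1 - 1/(beta^2 n^2)),
# over 400-700 nm.  With n held constant the integral is analytic:
# integral of dlambda/lambda^2 from lam1 to lam2 is 1/lam1 - 1/lam2.
from math import pi

alpha = 0.007297          # fine-structure constant
beta2 = 0.918             # (v/c)^2 for a 1.31 MeV electron
n     = 1.34              # assumed constant visible-band index of refraction
lam1, lam2 = 400e-9, 700e-9   # band edges, meters

cherenkov_factor = 1.0 - 1.0 / (beta2 * n**2)
photons_per_meter = 2 * pi * alpha * (1/lam1 - 1/lam2) * cherenkov_factor
print(photons_per_meter)  # roughly 2e4 photons per meter of travel
```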
For wavelengths in the visible range, Table 2 of the water IOPs page shows that $n\approx 1.36$ even for the extreme case of cold (0 deg C), saline (35 PSU), deep-ocean (depth of 10,000 m) water, compared to about 1.34 for pure water at atmospheric pressure. This difference would have only a small effect on the spectra plotted in Fig. 3, so these curves for pure water are representative of all parts of the ocean. Numerically integrating Eq. (3) over wavelength, for a given value of $\beta$, gives the energy emitted over all wavelengths per unit distance traveled. Further integrating over distance gives the total energy emitted. Corresponding integrations of Eq. (5) give the numbers of emitted photons. As the electron travels through the water, its kinetic energy decreases. When the kinetic energy decreases to a value such that the electron’s speed, given by Eq. (1), results in $v = c/n(\lambda)$, photon emission ceases for that wavelength. After the kinetic energy has decreased to 0.24 MeV, $\beta^2 = 0.533$. This gives $v < c/n(\lambda)$ for $n(\lambda)\le 1.37$. There is thus no more radiation for wavelengths greater than 300 nm, where $n\le 1.37$. The spectra at this energy are shown in blue in Fig. 3. The green curves in the figure are for an energy of 0.17 MeV, $\beta^2 = 0.434$, for which there is only a small amount of emission at the UV wavelengths where the index of refraction is greater than 1.52. To compute the total amount of Cherenkov radiation, the above integrations over wavelength and distance must be repeated for each energy of the distribution of energies of the emitted electrons; thus there is a triple integration over energy, wavelength, and distance. Equations (2)-(5) are correct in that they give the energy or number of photons emitted as Cherenkov radiation for a given electron energy.
What they do not tell you is that less than one percent of the kinetic energy of an electron emitted by a ${}^{40}K$ nucleus results in Cherenkov radiation. Almost all of the electron’s energy goes into ionizing water molecules as the electron travels through the water. This loss of energy to ionization is given by an equation known as the Bethe-Bloch formula. [For a derivation and discussion see Chapter VII, Section 5 of Arya (1966). The Bethe-Bloch formula involves two additional parameters: the energy required to ionize a water molecule and the density of electrons in water (the number of water-molecule electrons per cubic meter). My evaluation of the Bethe-Bloch formula shows that for a 1.31 MeV electron, the energy lost to ionization is 153 times that lost to Cherenkov radiation.] Discussion of that calculation takes us beyond the needs of optical oceanography and will not be given here because the end result has been calculated by the physicists who use Cherenkov radiation for the detection of neutrinos in the deep ocean. When a neutrino interacts with matter (which is extremely rare), the result can be a charged particle such as an electron or muon traveling in almost the same direction as the neutrino. Those charged particles also cause Cherenkov radiation, which can be detected as a function of time and direction and used to determine the direction and energy of the initial neutrino. Several neutrino detectors based on this idea have been built at the bottom of the ocean and deep within the ice at the geographic South Pole. [To learn more, the keywords to search for are DUMAND (Deep Underwater Muon and Neutrino Detector; 1976-1995) and ANTARES (Astronomy with a Neutrino Telescope and Abyss environmental RESearch project; operational since 2008) in the ocean. AMANDA (Antarctic Muon And Neutrino Detector Array) and the IceCube Neutrino Observatory are at the South Pole.]
These detectors are arrays of thousands of photomultiplier tubes (PMTs), occupying as much as a cubic kilometer of space, that track the movement of the Cherenkov “light cone” as the particle travels through the detector. In deep-ocean measurements, Cherenkov radiation from ${}^{40}K$ decay is background noise imposed on the signal of interest. The magnitude of this Cherenkov background therefore has been carefully calculated and measured as part of the deep-ocean neutrino detector designs. Use of the above formulas to compute the number of Cherenkov photons per square meter per second in the ocean must account for the number of ${}^{40}K$ decays per cubic meter per second (about $12000\ \mathrm{m^{-3}\ s^{-1}}$, of which 89.1% result in electron emission) and the clarity of the water (usually close to that of pure water in the deep ocean). Calculations predict about $1.2\times 10^{6}\ \mathrm{photons\ m^{-2}\ s^{-1}}$ at visible wavelengths (400-700 nm) in clear ocean water (Learned et al. (1981), Bradner et al. (1987)). Nighttime measurements to depths of 4300 m in clear waters near Hawaii showed a typical background “glow” of order $10^{7}\ \mathrm{photons\ m^{-2}\ s^{-1}}$, which includes bioluminescence as well as Cherenkov radiation. Aoki et al. (1986) report $2.18_{-28\%}^{+9\%}\times 10^{6}\ \mathrm{photons\ m^{-2}\ s^{-1}}$ at the same location. Tamburini et al. (2013) recorded a 2.5 year time series of light at 2,500 m depth in the Mediterranean Sea.
Each of their photomultiplier tubes (PMTs) recorded a steady background of $37{,}000\pm 3000\ \mathrm{photons\ s^{-1}}$ attributable to ${}^{40}K$ decay, plus another $40{,}000\pm 3000\ \mathrm{photons\ s^{-1}}$ attributable to background bioluminescence. During periods of strong currents, which triggered bioluminescence in the wakes of the PMTs, the bioluminescence signal increased by as much as a factor of 100; Fig. 4 shows some of their data. Their PMTs had a collection area of $0.038\ \mathrm{m^{2}}$, so $37{,}000\ \mathrm{photons\ s^{-1}}$ received by a PMT corresponds to about $10^{6}\ \mathrm{photons\ m^{-2}\ s^{-1}}$, consistent with the predictions in the other papers. The measured total of $67{,}000\ \mathrm{photons\ s^{-1}}$ for Cherenkov and background bioluminescence corresponds to $1.8\times 10^{6}\ \mathrm{photons\ m^{-2}\ s^{-1}}$. Thus it is never completely dark at even the greatest depths, even though no solar photons are present. [The PMTs were Hamamatsu model R7081-20. The data sheet gives a “minimum effective area” of 220 mm. Taking this as the diameter of the PMT collector gives a collector area of $0.038\ \mathrm{m^{2}}$. The PMT was sensitive to wavelengths from 300 to 650 nm with peak sensitivity at 420 nm.] Most vertebrates, humans in particular, see only black-and-white in dim light using the rod cells in their retinas. (Cone cells are used for color vision in bright light; see the page on Color Vision.) The optically sensitive part of these rod cells is a single kind of photopigment known as an opsin (generally rhodopsin-1).
A recent genetic study of 101 species of deep-sea fish (Musilova et al. (2019)) found that some fish express genes coding for multiple types of opsins, as many as 2 cone and 38 rod opsins in one species. The peaks of the wavelength sensitivities of these multiple opsins cover a range of blue to near-UV wavelengths. Thus it is hypothesized that some fish may be able to “see color” even in the dimmest light of the deep ocean. Giant squid have the largest eyes of any animal, around 30 cm in diameter. It is thought that they may be able to detect bioluminescence generated by their arch-enemies the sperm whales and know it’s time to leave. It may be that squid and other deep-ocean animals just see blobs of light or shadows against the faint background. If they see a small blob of light/shadow, eat it; if it’s a big blob/shadow, flee. In addition, Frank and Widder (1996) found that the eyes of certain deep-sea crustaceans are equally sensitive to near-UV and blue-to-green visible wavelengths. Their paper discussed the penetration of near-UV solar radiation to depth. However, it can be speculated that these animals may have evolved eyes capable of seeing the low level of Cherenkov UV light. Assuming an average wavelength of 420 nm, the deep-ocean measurements of $1.8\times 10^{6}\ \mathrm{photons\ m^{-2}\ s^{-1}}$ (Tamburini et al. (2013)) and $2.2\times 10^{6}\ \mathrm{photons\ m^{-2}\ s^{-1}}$ (Aoki et al. (1986)) correspond to an irradiance of approximately $10^{-12}\ \mathrm{W\ m^{-2}}$, which is the threshold sensitivity of the eyes of some deep-sea fish as estimated from comparative anatomy studies by Denton and Warren (1957). This is probably not a coincidence.
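The two unit conversions used above, PMT count rate to photon flux and photon flux to irradiance, can be reproduced in a few lines. This sketch assumes, as the bracketed note does, that the 220 mm "minimum effective area" figure is the PMT collector diameter, and takes 420 nm as the average photon wavelength:

```python
from math import pi

# PMT count rate -> photon flux, using the collector area from the data sheet
diameter = 0.220                   # m, taken as the PMT collector diameter
area = pi * (diameter / 2)**2      # ~0.038 m^2
flux = 67000 / area                # ~1.8e6 photons m^-2 s^-1

# Photon flux -> irradiance, with each photon carrying h*c/lambda joules
h = 6.626e-34                      # Planck constant, J s
c = 2.998e8                        # speed of light, m s^-1
lam = 420e-9                       # assumed average wavelength, m
irradiance = flux * (h * c / lam)  # W m^-2
print(area, flux, irradiance)      # irradiance is of order 1e-12 W m^-2
```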
https://tex.stackexchange.com/questions/16840/left-alignment-like-fleqn-option-only-for-specific-equations
# Left-alignment like Fleqn-option only for specific equations

I want to left-align some equations. (Note: I want to align the equations themselves, not code inside of the equations.) I have the following example code:

```latex
\documentclass{scrartcl}
\usepackage{amsmath}
\begin{document}
$$\text{I want to left-align this equation}$$
$$\text{and this one,}$$
$$\text{but not this one and others.}$$
\end{document}
```

How can I achieve this? The equations don't necessarily have to be left-adjusted to the left text border; the more important goal is to align them to each other.

You can use the flalign and flalign* environments from amsmath.

```latex
\documentclass{minimal}
\usepackage{amsmath}
\begin{document}
\noindent The quick brown fox jumps over the lazy dog.
\begin{equation*}
  A = B
\end{equation*}
\begin{flalign*}
  C = D &&
\end{flalign*}
\begin{flalign*}
  E = F &&
\end{flalign*}
\end{document}
```

Does not work with \split in equation environment

• Thanks a lot, I already had tried flalign, but as I didn't know about the & at the end, it didn't work then... – meep.meep Apr 28 '11 at 11:30
• Why is an extra & needed at the end? – HackerBoss Apr 9 at 18:51
• @HackerBoss --- Short answer: try deleting them and see what happens. Long answer: flalign is usually used for multiple columns of equations. It works by stretching the space between columns (see the amsmath package documentation). The first ampersand is usually an alignment point within the equations (but the OP didn't want this; hence it's at the end of the line). The second ampersand denotes the end of the column and generates the stretching space, aligning the first column of equations against the left margin.
– Ian Thompson Apr 12 at 7:56

I wanted to show an eplain version:

```latex
\input eplain
\leftdisplays
$$\hbox{I want to left-align this equation} \eqno(1.1)$$
$$\hbox{and this one,}$$
\centereddisplays
$$\hbox{but not this one and others} \eqno(1.2)$$
\bye
```

because of the following remark from the documentation:

It is usually poor typography to have both centered and left-justified displays in a single publication, though.

• I agree with you on that issue. I just want to use this in my appendix. But thanks for this notice anyway :) – meep.meep Apr 28 '11 at 16:18
https://scriptinghelpers.org/questions/77635/cant-parse-json-error-datastore
# Can't Parse JSON error? Datastore

Edited by incapaxx 11 days ago

This is the code I'm currently working on:

```lua
local servertime = {
    days = 31,
    hours = 0,
    seconds = 0
}

local ds = game:GetService("DataStoreService")
local livetime = ds:GetDataStore("ServerTime")
local hs = game.HttpService -- the http service
local timer = game.Workspace.Timer
local d = timer.Days
local h = timer.Hours
local s = timer.Seconds

local value = hs:JSONDecode(livetime)

if value == nil then
    value = servertime
else
    local encode = hs:JSONEncode(value)
    livetime:SetAsync("---Key---", encode)
end
```

And this is the reference code I was given:

```lua
--[[reference code:
local table = {a = "0", b = "0"} -- The main table
local ds = game:GetService("DataStoreService") -- datastore service
local hs = game.HttpService -- the http service
local nds = ds:GetDataStore("test")

local value = hs:JSONDecode(nds)

if value == nil then -- if it the first time you start the game then you start with the main table
    value = table
else
    value.a = value.a + 1 -- we want to add 1 to a every time
    value.b = value.b + 2 -- we want to add 2 to b every time
end

local encode = hs:JSONEncode(value)
hs:SetAsync("test", encode) -- may be it (encode,"test")
--]]
```

and I get this error: "Can't parse JSON"

On what line? MCAndRobloxUnited 1517 — 11d
Don't specify. But it's most likely one of the hs:JSONEncode ItsBankai 28 — 8d
https://www.lmfdb.org/L/rational/2/2160
## Results (46 matches)

All L-functions below have conductor $N = 2^{4} \cdot 3^{3} \cdot 5 = 2160$.

| Label | $\alpha$ | $A$ | $d$ | $\chi$ | $\nu$ | $w$ | $\epsilon$ | $r$ | First zero | Origin |
|---|---|---|---|---|---|---|---|---|---|---|
| 2-2160-15.14-c0-0-2 | 1.03 | 1.07 | 2 | 15.14 | 0.0 | 0 | 1 | 0 | 1.01831 | Artin representations 2.2160.6t3.d, 2.2160.6t3.d.a; modular forms 2160.1.c.a, 2160.1.c.a.1889.1 |
| 2-2160-15.14-c0-0-3 | 1.03 | 1.07 | 2 | 15.14 | 0.0 | 0 | 1 | 0 | 1.30675 | Artin representations 2.2160.6t3.c, 2.2160.6t3.c.a; modular forms 2160.1.c.b, 2160.1.c.b.1889.1 |
| 2-2160-1.1-c1-0-1 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.689846 | Elliptic curve 2160.a; modular forms 2160.2.a.a, 2160.2.a.a.1.1 |
| 2-2160-1.1-c1-0-10 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.979934 | Elliptic curve 2160.q; modular forms 2160.2.a.q, 2160.2.a.q.1.1 |
| 2-2160-1.1-c1-0-11 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 1.00192 | Elliptic curve 2160.l; modular forms 2160.2.a.l, 2160.2.a.l.1.1 |
| 2-2160-1.1-c1-0-12 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 1.02045 | Elliptic curve 2160.v; modular forms 2160.2.a.v, 2160.2.a.v.1.1 |
| 2-2160-1.1-c1-0-15 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 1.21444 | Elliptic curve 2160.x; modular forms 2160.2.a.x, 2160.2.a.x.1.1 |
| 2-2160-1.1-c1-0-16 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 1.22674 | Elliptic curve 2160.w; modular forms 2160.2.a.w, 2160.2.a.w.1.1 |
| 2-2160-1.1-c1-0-19 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.51275 | Elliptic curve 2160.b; modular forms 2160.2.a.b, 2160.2.a.b.1.1 |
| 2-2160-1.1-c1-0-20 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.52823 | Elliptic curve 2160.c; modular forms 2160.2.a.c, 2160.2.a.c.1.1 |
| 2-2160-1.1-c1-0-21 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.54336 | Elliptic curve 2160.e; modular forms 2160.2.a.e, 2160.2.a.e.1.1 |
| 2-2160-1.1-c1-0-22 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.59756 | Elliptic curve 2160.g; modular forms 2160.2.a.g, 2160.2.a.g.1.1 |
| 2-2160-1.1-c1-0-23 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.63994 | Elliptic curve 2160.m; modular forms 2160.2.a.m, 2160.2.a.m.1.1 |
| 2-2160-1.1-c1-0-24 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.65663 | Elliptic curve 2160.i; modular forms 2160.2.a.i, 2160.2.a.i.1.1 |
| 2-2160-1.1-c1-0-25 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.68031 | Elliptic curve 2160.n; modular forms 2160.2.a.n, 2160.2.a.n.1.1 |
| 2-2160-1.1-c1-0-26 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.69695 | Elliptic curve 2160.o; modular forms 2160.2.a.o, 2160.2.a.o.1.1 |
| 2-2160-1.1-c1-0-27 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.71464 | Elliptic curve 2160.p; modular forms 2160.2.a.p, 2160.2.a.p.1.1 |
| 2-2160-1.1-c1-0-28 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.72341 | Elliptic curve 2160.j; modular forms 2160.2.a.j, 2160.2.a.j.1.1 |
| 2-2160-1.1-c1-0-3 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.765093 | Elliptic curve 2160.d; modular forms 2160.2.a.d, 2160.2.a.d.1.1 |
| 2-2160-1.1-c1-0-30 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.76173 | Elliptic curve 2160.s; modular forms 2160.2.a.s, 2160.2.a.s.1.1 |
| 2-2160-1.1-c1-0-31 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | -1 | 1 | 1.87954 | Elliptic curve 2160.u; modular forms 2160.2.a.u, 2160.2.a.u.1.1 |
| 2-2160-1.1-c1-0-5 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.844954 | Elliptic curve 2160.f; modular forms 2160.2.a.f, 2160.2.a.f.1.1 |
| 2-2160-1.1-c1-0-6 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.862830 | Elliptic curve 2160.k; modular forms 2160.2.a.k, 2160.2.a.k.1.1 |
| 2-2160-1.1-c1-0-7 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.930470 | Elliptic curve 2160.r; modular forms 2160.2.a.r, 2160.2.a.r.1.1 |
| 2-2160-1.1-c1-0-8 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.949975 | Elliptic curve 2160.h; modular forms 2160.2.a.h, 2160.2.a.h.1.1 |
| 2-2160-1.1-c1-0-9 | 4.15 | 17.2 | 2 | 1.1 | 1.0 | 1 | 1 | 0 | 0.969939 | Elliptic curve 2160.t; modular forms 2160.2.a.t, 2160.2.a.t.1.1 |
| 2-2160-1.1-c3-0-13 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.525448 | Modular forms 2160.4.a.k, 2160.4.a.k.1.1 |
| 2-2160-1.1-c3-0-18 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.579330 | Modular forms 2160.4.a.j, 2160.4.a.j.1.1 |
| 2-2160-1.1-c3-0-19 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.598877 | Modular forms 2160.4.a.f, 2160.4.a.f.1.1 |
| 2-2160-1.1-c3-0-22 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.650756 | Modular forms 2160.4.a.l, 2160.4.a.l.1.1 |
| 2-2160-1.1-c3-0-23 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.660397 | Modular forms 2160.4.a.g, 2160.4.a.g.1.1 |
| 2-2160-1.1-c3-0-33 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.805261 | Modular forms 2160.4.a.i, 2160.4.a.i.1.1 |
| 2-2160-1.1-c3-0-39 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.853516 | Modular forms 2160.4.a.t, 2160.4.a.t.1.1 |
| 2-2160-1.1-c3-0-42 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.891014 | Modular forms 2160.4.a.s, 2160.4.a.s.1.1 |
| 2-2160-1.1-c3-0-44 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | 1 | 0 | 0.944946 | Modular forms 2160.4.a.r, 2160.4.a.r.1.1 |
| 2-2160-1.1-c3-0-50 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.04411 | Modular forms 2160.4.a.a, 2160.4.a.a.1.1 |
| 2-2160-1.1-c3-0-57 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.13587 | Modular forms 2160.4.a.b, 2160.4.a.b.1.1 |
| 2-2160-1.1-c3-0-59 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.14835 | Modular forms 2160.4.a.d, 2160.4.a.d.1.1 |
| 2-2160-1.1-c3-0-63 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.15838 | Modular forms 2160.4.a.c, 2160.4.a.c.1.1 |
| 2-2160-1.1-c3-0-66 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.17308 | Modular forms 2160.4.a.e, 2160.4.a.e.1.1 |
| 2-2160-1.1-c3-0-73 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.25549 | Modular forms 2160.4.a.m, 2160.4.a.m.1.1 |
| 2-2160-1.1-c3-0-76 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.26887 | Modular forms 2160.4.a.n, 2160.4.a.n.1.1 |
| 2-2160-1.1-c3-0-80 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.32351 | Modular forms 2160.4.a.h, 2160.4.a.h.1.1 |
| 2-2160-1.1-c3-0-81 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.32401 | Modular forms 2160.4.a.q, 2160.4.a.q.1.1 |
| 2-2160-1.1-c3-0-90 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.42061 | Modular forms 2160.4.a.o, 2160.4.a.o.1.1 |
| 2-2160-1.1-c3-0-92 | 11.2 | 127. | 2 | 1.1 | 3.0 | 3 | -1 | 1 | 1.42768 | Modular forms 2160.4.a.p, 2160.4.a.p.1.1 |
https://deepai.org/publication/the-bethe-and-sinkhorn-permanents-of-low-rank-matrices-and-implications-for-profile-maximum-likelihood
# The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood

In this paper we consider the problem of computing the likelihood of the profile of a discrete distribution, i.e., the probability of observing the multiset of element frequencies, and computing a profile maximum likelihood (PML) distribution, i.e., a distribution with the maximum profile likelihood. For each problem we provide polynomial time algorithms that, given $n$ i.i.d. samples from a discrete distribution, achieve an approximation factor of $\exp(-O(\sqrt{n}\log n))$, improving upon the previous best-known bound achievable in polynomial time of $\exp(-O(n^{2/3}\log n))$ (Charikar, Shiragur and Sidford, 2019). Through the work of Acharya, Das, Orlitsky and Suresh (2016), this implies a polynomial time universal estimator for symmetric properties of discrete distributions in a broader range of error parameter. We achieve these results by providing new bounds on the quality of approximation of the Bethe and Sinkhorn permanents (Vontobel, 2012 and 2014). We show that each of these is an $\exp(O(k \log(N/k)))$ approximation to the permanent of $N \times N$ matrices with non-negative rank at most $k$, improving upon the previously known bound of $\exp(O(N))$. To obtain our results on PML, we exploit the fact that the PML objective is proportional to the permanent of a certain Vandermonde matrix with $\sqrt{n}$ distinct columns, i.e., with non-negative rank at most $\sqrt{n}$. As a by-product of our work we establish a surprising connection between the convex relaxation in prior work (CSS19) and the well-studied Bethe and Sinkhorn approximations.
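The Sinkhorn approximation referenced in the abstract is built on matrix scaling: alternately normalizing rows and columns of a positive matrix until it is (approximately) doubly stochastic. A minimal sketch of these underlying operations in plain Python — not the paper's algorithm — with a brute-force permanent for comparison on a tiny matrix:

```python
import itertools

def permanent(A):
    # brute-force permanent; exponential time, fine only for tiny matrices
    n = len(A)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        prod = 1.0
        for i, j in enumerate(perm):
            prod *= A[i][j]
        total += prod
    return total

def sinkhorn_scale(A, iters=500):
    # alternate row/column normalization; for a strictly positive matrix
    # this converges to a doubly stochastic matrix
    A = [row[:] for row in A]
    n = len(A)
    for _ in range(iters):
        A = [[x / sum(row) for x in row] for row in A]   # normalize rows
        cols = [sum(A[i][j] for i in range(n)) for j in range(n)]
        A = [[A[i][j] / cols[j] for j in range(n)] for i in range(n)]  # columns
    return A

A = [[0.5, 0.3], [0.2, 0.9]]
print(round(permanent(A), 6))            # -> 0.51
S = sinkhorn_scale(A)
print([round(sum(row), 6) for row in S])  # each row sums to 1
```

The Sinkhorn permanent is then defined in terms of the scaled matrix; the paper's contribution is bounding how far such approximations can be from the true permanent when the matrix has low non-negative rank.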
https://direct.mit.edu/ling/article-abstract/43/4/695/521/Consequences-of-Candidate-Omission?redirectedFrom=PDF
In this squib, we explore the generative consequences of candidate omission for constraint-based grammars. In principle, the candidate set for an input form i is the range of Gen(i) for a given generating function Gen (see Prince and Smolensky 1993:17). In practice, however, analyses often cover just a few candidates that are deemed relevant. The consequences of this practice are the focus of this squib. Whether the omission is by oversight or by principle, a range of defects can readily arise because there are situations where, even though an analysis might include the observed/intended optimum along with a range of viable competitors, it is still possible to draw erroneous inferences about languages and typologies owing to the omission of a single candidate. Aside from the—unfortunately familiar—problem of discovering an omitted competitor that breaks one’s current analysis because it is more harmonic than the intended/observed form, there...
https://stats.stackexchange.com/questions/490647/agglomerative-hierarchical-clustering-which-linkage-for-the-detection-of-outl
# (Agglomerative) Hierarchical Clustering: Which linkage for the detection of outliers?

In (agglomerative) hierarchical clustering (and clustering in general), linkages are measures of "closeness" between pairs of clusters, defined in terms of the distances $\Delta(X_1, X_2)$ between observations $X_1$ in one cluster and $X_2$ in the other.

- The single linkage $\mathcal{L}_{1,2}^{\min}$ is the smallest value over all $\Delta(X_1, X_2)$.
- The complete linkage $\mathcal{L}_{1,2}^{\max}$ is the largest value over all $\Delta(X_1, X_2)$.
- The average linkage $\mathcal{L}_{1,2}^{\text{mean}}$ is the average over all distances $\Delta(X_1, X_2)$.
- The centroid linkage $\mathcal{L}_{1,2}^{\text{cent}}$ is the Euclidean distance between the cluster means of the two clusters.

We can clearly see the outliers as "singletons" in a dendrogram. Which of these linkages is best for the detection of outliers?

Comments:
- Your question is very incomplete. It misses the description, an idea of how a hierarchical clustering is usable to detect outliers. It is not obvious whether it can detect them at all and, if yes, how. (Oct 6 '20 at 14:44)
- @ttnphns What you've just described is what I guess would be included in an answer, no? The question seems very clear and simple to me, so I don't understand what's wrong with it. (Oct 6 '20 at 14:51)
- Clustering is a method of producing unsupervised classes, not of detecting outliers. Your question should therefore describe a path or a trick for how clustering could be used to detect outliers. But the question lacks such a description, so it cannot be answered. (Oct 6 '20 at 15:23)
- @ttnphns We can clearly see the outliers as singletons in a dendrogram: statisticshowto.com/wp-content/uploads/2016/11/dendrogram.png from statisticshowto.com/hierarchical-clustering (Oct 6 '20 at 15:44)
- But this is what you ought to discuss in your question first. In particular, you would enter your definition of an "outlier" (for there are many possible definitions), then consider why singletons are, or can be, seen (and when?) as instances of such outliers. (Oct 6 '20 at 15:57)
Let's say an object is a singleton at a high level in complete linkage, and say that there are otherwise bigger clusters. This means only that the maximum distances between the object and the other clusters are large; the singleton object can still be close to quite a number of objects in those clusters, and is therefore not necessarily an outlier. A high-level singleton under single linkage, on the other hand, is separated from all clusters: its minimum distance to all clusters is large, so its distance to all other objects is large. In this sense it is well qualified to be called an outlier. The only issue is that some people would say that there could also be small groups of outliers, which will not normally show up as singletons in any algorithm; under single linkage, an object may stop being a singleton if it is close to even one single other object. Average linkage is a compromise between these two; it can share complete linkage's problem of potentially missing outliers, but this is less likely. I don't have much experience with the centroid method, but I'd expect it to behave similarly to average linkage in this respect. So single linkage is probably most suitable, at least if an outlier in your definition is an object that is far away from all the others.

Upon trying to work with Lewian's answer above, I found it to be lacking in clarity, so I've attempted to use his answer to write my own version below.

A linkage is a measure of closeness between pairs of clusters. It depends on the distances between the observations in the clusters. Let's assume that an outlier is defined as an object that is "far" from all the others.

In the case of the complete linkage, we are using the largest value of the distance function over the observations of the two clusters.
Therefore, if the other cluster is large (with its observations spread out), then some observations might be much closer to the singleton than the pair used for the maximum-distance calculation; however, they would not be taken into account when using the complete linkage. Therefore, the singleton is not necessarily an outlier.

In the case of the single linkage, we are using the smallest value of the distance function over the observations of the two clusters. A singleton's minimum distance to all clusters is therefore comparatively (to the complete linkage) large, so its distance to all other observations is comparatively large. Therefore, if even by using the smallest value we find that some observations are classified as singletons, then chances are that they actually are outliers.

The average linkage and the centroid linkage seem to sit between the two extremes of the complete linkage and the single linkage.

Therefore, I would say that the single linkage is most suitable for detecting outliers.

Comment:
- My answer wasn't precise because your question wasn't precise in a way. What's your definition of an outlier? What exactly to do depends on that. I have no issue with your "reformulation" of my answer though. (Nov 1 '20 at 16:43)
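The contrast between the two linkages can be made concrete with a small numeric sketch (hypothetical data, plain Python): a nearby point and a far-away point both look distant under complete linkage, but only the far-away point is distant under single linkage.

```python
import math

def single_link(C1, C2):
    # smallest pairwise distance between the two clusters
    return min(math.dist(p, q) for p in C1 for q in C2)

def complete_link(C1, C2):
    # largest pairwise distance between the two clusters
    return max(math.dist(p, q) for p in C1 for q in C2)

# a cluster spread along a line, one nearby point, one far-away point
cluster = [(0, 0), (2, 0), (4, 0), (6, 0)]
nearby  = [(7, 0)]    # close to the cluster's edge -> not an outlier
faraway = [(20, 0)]   # far from every observation  -> an outlier

print(single_link(nearby, cluster), complete_link(nearby, cluster))    # -> 1.0 7.0
print(single_link(faraway, cluster), complete_link(faraway, cluster))  # -> 14.0 20.0
```

Under complete linkage the nearby point already merges late (distance 7.0), so it could show up as a spurious singleton; under single linkage it merges at distance 1.0, and only the genuinely isolated point remains far from everything.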
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tvp&paperid=4752&option_lang=eng
Teor. Veroyatnost. i Primenen., 1961, Volume 6, Issue 1, Pages 101–103 (Mi tvp4752)

Short Communications

# An Example of a Process with Mixing

Yu. K. Belyaev (Moscow)

Abstract: An example is given of a stochastic process $\xi(t)$ with a continuous parameter and mixing, whose range consists of two states; the integral $\zeta(p)=\int_0^p\xi(t)\,dt$ does not have an increase in variance.

Full text: PDF file (328 kB)

English version: Theory of Probability and its Applications, 1961, 6:1, 93–94

Citation: Yu. K. Belyaev, “An Example of a Process with Mixing”, Teor. Veroyatnost. i Primenen., 6:1 (1961), 101–103; Theory Probab. Appl., 6:1 (1961), 93–94

Citation in format AMSBIB:

```
\Bibitem{Bel61}
\by Yu.~K.~Belyaev
\paper An Example of a Process with Mixing
\jour Teor. Veroyatnost. i Primenen.
\yr 1961
\vol 6
\issue 1
\pages 101--103
\mathnet{http://mi.mathnet.ru/tvp4752}
\transl
\jour Theory Probab. Appl.
\yr 1961
\vol 6
\issue 1
\pages 93--94
\crossref{https://doi.org/10.1137/1106008}
```

- http://mi.mathnet.ru/eng/tvp4752
- http://mi.mathnet.ru/eng/tvp/v6/i1/p101

This publication is cited in the following articles:

1. A. G. Kachurovskii, “The rate of convergence in ergodic theorems”, Russian Math. Surveys, 51:4 (1996), 653–703
2. V. V. Sedalishchev, “Constants in the estimates of the convergence rate in the Birkhoff ergodic theorem with continuous time”, Siberian Math. J., 53:5 (2012), 882–888
3. V. V. Sedalishchev, “Interrelation between the convergence rates in von Neumann's and Birkhoff's ergodic theorems”, Siberian Math. J., 55:2 (2014), 336–348
4. A. G. Kachurovskii, I. V. Podvigin, “Estimates of the rate of convergence in the von Neumann and Birkhoff ergodic theorems”, Trans. Moscow Math. Soc., 77 (2016), 1–53
http://aux.planetmath.org/proofoflimitruleofproduct
# proof of limit rule of product

Let $f$ and $g$ be real (http://planetmath.org/RealFunction) or complex functions having the limits

$\lim_{x\to x_{0}}f(x)=F\quad\mbox{and}\quad\lim_{x\to x_{0}}g(x)=G.$

Then also the limit $\displaystyle\lim_{x\to x_{0}}f(x)g(x)$ exists and equals $FG$.

Proof.  Let $\varepsilon$ be any positive number.  The assumptions imply the existence of the positive numbers $\delta_{1},\,\delta_{2},\,\delta_{3}$ such that

$|f(x)-F|<\frac{\varepsilon}{2(1+|G|)}\;\;\mbox{when}\;\;0<|x-x_{0}|<\delta_{1},$ (1)

$|g(x)-G|<\frac{\varepsilon}{2(1+|F|)}\;\;\mbox{when}\;\;0<|x-x_{0}|<\delta_{2},$ (2)

$|g(x)-G|<1\;\;\mbox{when}\;\;0<|x-x_{0}|<\delta_{3}.$ (3)

According to the condition (3) we see that

$|g(x)|=|g(x)\!-\!G\!+\!G|\leqq|g(x)\!-\!G|+|G|<1\!+\!|G|\;\;\mbox{when}\;\;0<|x-x_{0}|<\delta_{3}.$

Supposing then that  $0<|x-x_{0}|<\min\{\delta_{1},\,\delta_{2},\,\delta_{3}\}$  and using (1) and (2) we obtain

$$\begin{aligned} |f(x)g(x)-FG| &= |f(x)g(x)-Fg(x)+Fg(x)-FG| \\ &\leqq |f(x)g(x)-Fg(x)|+|Fg(x)-FG| \\ &= |g(x)|\cdot|f(x)-F|+|F|\cdot|g(x)-G| \\ &< (1+|G|)\frac{\varepsilon}{2(1+|G|)}+(1+|F|)\frac{\varepsilon}{2(1+|F|)} \\ &= \varepsilon. \end{aligned}$$

This settles the proof.

Title: proof of limit rule of product · Canonical name: ProofOfLimitRuleOfProduct · Date: 2013-03-22 17:52:22 · Author: pahio (2872) · Type: Proof · MSC: 30A99, 26A06 · Related: ProductOfFunctions, TriangleInequality, ProductAndQuotientOfFunctionsSum
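A quick numeric illustration of the statement (not part of the proof; the functions are chosen arbitrarily): with $f(x)=2x+1$ and $g(x)=x^2$ at $x_0=1$, we have $F=3$, $G=1$, so $f(x)g(x)\to 3$.

```python
# verify |f(x)g(x) - FG| shrinks as x -> x0, for f(x)=2x+1, g(x)=x**2 at x0=1
f = lambda x: 2 * x + 1   # F = 3
g = lambda x: x ** 2      # G = 1
x0, FG = 1.0, 3.0

errs = [abs(f(x0 + h) * g(x0 + h) - FG) for h in (0.1, 0.01, 0.001)]
print(errs)  # strictly decreasing toward 0
```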
https://ch.mathworks.com/help/control/ref/lti.getsectorcrossover.html
# getSectorCrossover

Crossover frequencies for sector bound

## Description

`wc = getSectorCrossover(H,Q)` returns the frequencies at which the following matrix $M(\omega)$ is singular:

$$M(\omega) = H(j\omega)^{H}\, Q\, H(j\omega).$$

When a frequency-domain sector plot exists, these frequencies are the frequencies at which the relative sector index (R-index) for H and Q equals 1. See About Sector Bounds and Sector Indices for details.

## Examples

Find the crossover frequencies for the dynamic system $G(s) = (s+2)/(s+1)$ and the sector defined by

$$S = \{(y,u) : au^{2} < uy < bu^{2}\}$$

for various values of a and b. In U/Y space, this sector is the region between the lines $y = au$ and $y = bu$ (for a, b > 0). The Q matrix for this sector is given by:

$$Q = \begin{bmatrix} 1 & -(a+b)/2 \\ -(a+b)/2 & ab \end{bmatrix};\qquad a = 0.1,\ b = 10.$$

`getSectorCrossover` finds the frequencies at which $H(s)^{H} Q H(s)$ is singular, for $H(s) = [G(s);\, I]$. For instance, find these frequencies for the sector defined by Q with a = 0.1 and b = 10.

```matlab
G = tf([1 2],[1 1]);
H = [G;1];
a = 0.1;
b = 10;
Q = [1 -(a+b)/2 ; -(a+b)/2 a*b];
w = getSectorCrossover(H,Q)
```
```
w =

  0x1 empty double column vector
```

The empty result means that there are no such frequencies.

Now find the frequencies at which $H^{H} Q H$ is singular for a narrower sector, with a = 0.5 and b = 1.5.

```matlab
a2 = 0.5;
b2 = 1.5;
Q2 = [1 -(a2+b2)/2 ; -(a2+b2)/2 a2*b2];
w2 = getSectorCrossover(H,Q2)
```
```
w2 =

    1.7321
```

Here the resulting frequency is where the R-index for H and Q2 is equal to 1, as shown in the sector plot.

```matlab
sectorplot(H,Q2)
```

Thus, when a sector plot exists for a system H and sector Q, getSectorCrossover finds the frequencies at which the R-index is 1.
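The second example can be cross-checked by hand: for $H = [G;\,1]$, the matrix $M(\omega)$ reduces to the scalar $|G(j\omega)|^{2} - (a+b)\,\mathrm{Re}\,G(j\omega) + ab$, and a bisection on its sign change recovers $w_2 \approx \sqrt{3} \approx 1.7321$. A sketch in plain Python (an independent check, not part of the Control System Toolbox):

```python
a, b = 0.5, 1.5

def M(w):
    # H(jw) = [G(jw); 1] with G(s) = (s+2)/(s+1), so M is the 1x1 scalar
    # M(w) = H^H Q H = |G|^2 - (a+b)*Re(G) + a*b
    G = (1j * w + 2) / (1j * w + 1)
    return abs(G) ** 2 - (a + b) * G.real + a * b

# bisection for the sign change of M on [1, 3]; M(1) > 0 and M(3) < 0
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if M(lo) * M(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(lo, 4))  # -> 1.7321, i.e. sqrt(3)
```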
## Input Arguments

H — Model to analyze against sector bounds, specified as a dynamic system model such as a tf, ss, or genss model. H can be continuous or discrete. If H is a generalized model with tunable or uncertain blocks, getSectorCrossover analyzes the current, nominal value of H.

To get the frequencies at which the I/O trajectories (u,y) of a linear system G lie in a particular sector, use H = [G;I], where I = eye(nu), and nu is the number of inputs of G.

Q — Sector geometry, specified as:

- A matrix, for constant sector geometry. Q is a symmetric square matrix that is ny on a side, where ny is the number of outputs of H.
- An LTI model, for frequency-dependent sector geometry. Q satisfies Q(s)’ = Q(–s). In other words, Q(s) evaluates to a Hermitian matrix at each frequency.

The matrix Q must be indefinite to describe a well-defined conic sector. An indefinite matrix has both positive and negative eigenvalues. For more information, see About Sector Bounds and Sector Indices.

## Output Arguments

wc — Sector crossover frequencies, returned as a vector. The frequencies are expressed in rad/TimeUnit, relative to the TimeUnit property of H. If the trajectories of H never cross the sector boundary, wc = [].

## Version History

Introduced in R2016a
https://cstheory.stackexchange.com/questions/47753/how-to-calculate-complexity-in-a-high-dimensional-space/47761
# How to calculate complexity in a high dimensional space?

Edit: 'Fitness landscape analysis' was mentioned as a relevant measure. If you're going to downvote the post, at least leave a comment saying what is wrong.

For a specific f(), I'm defining a term 'complexity', estimating how difficult the given function is to optimize. I attempted a solution in low dimension, but I might not be on the right track to generalize to n dimensions. This concept probably has a name and a better existing approximation, but here's what I have:

f(x) has input values bounded to [0, 5], x = [x1, x2, x3, x4, ...] (n-dimensional), and f() outputs a single continuous value for 'success', bounded to [0, 1].

Example: For input [x1, x2], calculate f(x) = y, and then relabel (x1, x2, y) as (x, y, z) for the plots below. If it's easy to achieve an output of 1 through any random input, then you have a low complexity space.

An example f() space, where all input values result in z=1 (low complexity):

An example f() space, where only one input value results in z=1 (high complexity):

Complexity measured as the volume enclosed between the xy-plane and the z-hyperplane seems valuable, but it alone does not work, because increasing volume at z=0.8 depth is 'worth more' than the same amount increased at 0.2 depth (in terms of optimizing for z=1). It's not just the z=1 region that is important as 'success', but also where z=0.9, z=0.8, etc., proportionally.

So what I've come up with so far is to compute a grid of inputs and get their output values, i.e. f([0, 0]) = 0.0, f([0, 1]) = 0.3, f([0, 2]) = 0.2, ..., then plot those to approximate the z-hyperplane. If I sum all the samples' z-values (0, 0.3, 0.2, ...) (and as the number of samples on the grid increases), it gives a nice sort of depth-weighted volume estimate.

There's a problem: local minima can cause increased volume (lower complexity), yet lead away and make it more difficult to navigate toward any deeper / global minima (higher complexity).
(Weighted volume might handle this, but I'm unsure.) Example f() spaces where both have the same volume (the picture's not exact, but you get it, right? And I rotated the axes, sorry.)

How can I incorporate that factor, that some points on the function increase volume, but the addition of that volume contributes less than if the volume was adjacent to a deeper space (as in the f() on the right, above)?

It seems like the critical feature that distinguishes those two hyperplanes is that the sign of the slope flips, where the negative slope leads into a local min, away from the global min, and the more frequently the sign flips, the harder the function will be to optimize. If I iterate over the grid rows and columns and count the slope sign changes, that might be useful. Or there might be a way to use trigonometry to project a line and see at what depth it intersects the hyperplane, but this is where I realized I need help.

My original idea was to define complexity as how resistant the output values are to fluctuations in the input values, given a specific input. So I have a decent estimate with my 'weighted-volume' calculation, but it's still lacking.

(1) Do you recognize the 'complexity' concept as another existing term? I found 'fitness landscape analysis' in evolutionary optimization.
(2) If not, can I improve beyond 'weighted-volume'?
(3) Will this work for higher dimensions?

As far as I have understood, you aim to develop a framework to capture the hardness of combinatorial problems in 3D. However, there are major problems in your question. Your first sentence lacks a couple of technical definitions:

"For a specific f(), I'm defining a term 'complexity', estimating how difficult is the given function to optimize."

First, and most important of all, you should have a well-defined complexity measure in order to adapt the term.
Then, another sentence makes your question even more complicated:

"If it's easy to achieve an output of 1 through any random input, then you have a low complexity space."

Here, you need to have a clear definition of easy. Is it in terms of computational complexity? If so, how do you measure the input size? After this sentence, I am honestly lost:

"Complexity measured as volume enclosed between the xy-plane and the z-hyperplane seems valuable, but it alone does not work, because increasing volume at z=0.8 depth is 'worth more' than the same amount increased at 0.2 depth (in terms of optimizing for z=1). It's not just the z=1 region that is important as 'success', but also where z=0.9, z=0.8, etc. proportionally."

What does it mean to be worth more? What is increasing, and what is decreasing? What are you trying to optimize?

All in all, although I could not understand your question well enough, I would go ahead and say that there are different ways to measure "complexity" in different dimensions, unless you invent a brand new system to represent the coordinates.

• Sorry you didn't get much from the post, I'll try to explain. (1) Of course the post lacks a good definition for 'complexity'; that is the entire goal of the post, to define the term computationally. (2) 'Easy' does need to be defined too, and I start in the post to look at 'ease' as how much 'volume' of the solution space leads towards or is at a depth where z=1 (i.e. a global minimum has been found; maximum, actually). What do you mean by 'measure the input size'? Oct 22 '20 at 16:25
• Complexity is a broadly used term, which usually refers to computational complexity. To put it very roughly, computational complexity is measured by the resource usage w.r.t. the input size. So, if you are to mention complexity, first you need to tell with respect to what. Oct 22 '20 at 21:25
https://projecteuclid.org/euclid.ndjfl/1292249609
## Notre Dame Journal of Formal Logic

### Lascar Types and Lascar Automorphisms in Abstract Elementary Classes

#### Abstract

We study Lascar strong types and Galois types and especially their relation to notions of type which have finite character. We define a notion of a strong type with finite character, the so-called Lascar type. We show that this notion is stronger than Galois type over countable sets in simple and superstable finitary AECs. Furthermore, we give an example where the Galois type itself does not have finite character in such a class.

#### Article information

Source: Notre Dame J. Formal Logic, Volume 52, Number 1 (2011), 39-54.
Dates: First available in Project Euclid: 13 December 2010
Permanent link: https://projecteuclid.org/euclid.ndjfl/1292249609
Digital Object Identifier: doi:10.1215/00294527-2010-035
Mathematical Reviews number (MathSciNet): MR2747161
Zentralblatt MATH identifier: 1233.03038

#### Citation

Hyttinen, Tapani; Kesälä, Meeri. Lascar Types and Lascar Automorphisms in Abstract Elementary Classes. Notre Dame J. Formal Logic 52 (2011), no. 1, 39--54. doi:10.1215/00294527-2010-035.
https://quant.stackexchange.com/questions/49057/selling-an-option-before-maturity
Selling an option before maturity

There is one problem that bothers me: let's say I buy a European put option with a certain maturity date at a premium of $1.60. Suppose that the market price of the put option rises to $3 before maturity and that I sell it to earn the difference in the market prices of the option ($1.40). Will I become the writer/seller of the option? In other words, will my payoff at maturity be $$-\max\{ K-S_T,0 \}$$?

But if that is the case, and I sell the put option to buyer $$B$$, who later re-sells it to another buyer $$C$$ (who holds until maturity), will buyer $$B$$ be the new writer of the option, who bears the responsibility of the purchase at maturity?

• When you sell an option that you own, you no longer have any rights/obligations at maturity. Those rights pass to the person who bought the option from you. At maturity nothing happens as far as you are concerned; it is a matter for someone else (the person who is currently long and the person who is currently short the option). – Alex C Oct 4 '19 at 4:46
• I have a question, @Alex C. Your comment is a perfectly valid answer to this question. So is there a reason for writing it out as a comment, as opposed to an answer? (I am just curious) – Dhruv Gupta Oct 4 '19 at 5:32
• @AlexC However, my textbook says that the seller of the put option bears a responsibility to purchase at the strike price at maturity. Or is this "seller" the institution that creates the option? – Richard Oct 4 '19 at 8:58
• Strictly speaking the textbook should say "the short seller of the option...". – Alex C Oct 4 '19 at 14:39

The vast majority of options are traded on an exchange, which means that you actually have a contract with the exchange, not a third party. So if you buy an option, you initiate a contract with the exchange.
When you sell it on the exchange, the original contract you have with the exchange is voided and a new contract between the exchange and the buyer is generated (the exchange takes care of this automatically).

"my textbook says that the seller of the put option bears a responsibility to purchase at the strike price at maturity"

Correct, if you sell to open a position. If you sell to close a position (meaning you previously bought a contract and are now selling it), your original option goes away.

"will I become the writer/seller of the option?"

Technically, yes, but that position offsets the put that you initially bought, so the exchange just cancels the initial position. Financially it's the same effect. Suppose you bought a put, then sold another put at the same strike $$K$$ with the same maturity. If the puts are in the money at expiration, then you would be obligated to buy the stock for $$K$$ via your sold put, but would then exercise the option to sell it via the bought put at the same price $$K$$. So you have no net profit or loss in the transaction.

• Your statements related to the exchange's role are incorrect. The exchange is never the other side of the contract unless it's an OTC market. Much more commonly the exchange provides a marketplace, most typically in the form of a limit order book. Buyers and sellers bid for the given instrument on that exchange; when prices match, a trade executes, which establishes a contract between buyer and seller. Then, in the case of regulated futures/options markets, a clearing house steps in and novates the contract such that the buyer and seller positions are now contracted against the clearing house. – Ian Ash Oct 4 '19 at 15:28
• Hmm, it's the OTC markets where investors trade with investors, cutting out the exchange. You are semantically correct in the distinction between an "exchange" and its associated "clearing house". However, the distinction is semantic to anyone more interested in the economics of trading than in the legal plumbing of the system's operations. – demully Oct 5 '19 at 7:40
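The premium arithmetic from the question and the offsetting-puts argument from the answer can be checked with a small sketch. The function names are my own; the premiums are the question's example numbers:

```python
def put_payoff(S_T, K):
    """Expiry payoff of one long European put."""
    return max(K - S_T, 0.0)

def round_trip_pnl(buy_premium, sell_premium):
    """P&L of buying a put and later selling the same contract to close.

    After the closing sale the position is flat, so nothing happens at
    expiry and the P&L is just the premium difference, whatever S_T is.
    """
    return sell_premium - buy_premium

# The question's numbers: bought at $1.60, sold at $3.00 -> $1.40 profit.
pnl = round_trip_pnl(1.60, 3.00)

# Equivalently: a long put plus a short put at the same strike K and
# maturity net to zero at expiry, as the answer argues.
K = 100.0
for S_T in (80.0, 100.0, 120.0):
    long_leg = put_payoff(S_T, K)
    short_leg = -put_payoff(S_T, K)
    assert long_leg + short_leg == 0.0
```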
http://www.ninjaducks.in/thesis/mainch5.html
## Chapter 5: Experimental Evaluation

We have built our DRS for the QEMU-KVM hypervisor. It uses the libvirt APIs for managing the virtual machines and OpenStack [26], mainly for cloud management and software-defined networking, along with live-migration support. For the distributed key-value store, we use etcd [27]. We have conducted several experiments to determine the effectiveness of our DRS. The experiments relating to the monitoring and auto-ballooning aspects of the DRS are described below.

### 5.1 Experimental Setup

Our experimental setup consists of six physical machines. Each physical machine has an 8-core Intel Core i7 3.5GHz CPU, 16 GB of memory and a 1Gbps NIC, and runs 64-bit Ubuntu 14.04 with Linux kernel version 3.19. We have installed OpenStack on these six nodes such that one of the nodes is the controller node (it runs the OpenStack management services) and the other five are compute nodes (they run the actual VMs). The nodes are connected to a gigabit local network, which they use for transferring data during live migration and for communicating with the outside network. The disks of all the VMs reside on a shared NFS server; hence, live migration needs to transfer only the memory of a VM between hosts, not its disk. Each of the compute nodes also runs the DRS software created by us. The controller and two separate nodes (separate from the controller and the compute nodes) run etcd, with a cluster size of three, which provides a fault tolerance of degree one [28].

All the VMs run the 64-bit Ubuntu 12.04 cloud image. The VMs can be of two sizes: 1 vCPU with 2 GB RAM (small) or 2 vCPUs with 4 GB RAM (large). These VMs run three types of workloads: one memory intensive, one CPU intensive, and one a mix of the two.

The memory intensive workload is a program written by us which consumes a total of 1800MB. The program runs in two phases: the allocation phase and the retention phase.
The allocation phase starts when the program starts. In the allocation phase, the program tries to allocate 100MB of memory using malloc and then sleeps for two seconds. This step is performed iteratively until the allocated memory reaches 1800MB. Notably, it may take more than 18 iterations for the allocation to reach 1800MB, because malloc returns null if it cannot allocate memory due to a shortage of memory, and the program then sleeps for two more seconds. So, the length of the allocation phase depends on the availability of memory. After the allocation phase, the retention phase starts, where the program retains the allocated memory for 300 seconds and does no more allocations. After the retention phase, the program ends. We will refer to this workload as usemem.

For the other two types of workloads, we have chosen two SPEC CPU 2006 V1.2 benchmarks [29]. For the CPU intensive workload, we run the libquantum benchmark. The libquantum benchmark tries to solve certain computationally hard problems using the simulation of a quantum computer. At runtime, the benchmark consumes 100% of a vCPU and about 50MB of memory. For the CPU and memory intensive workload, we use the mcf benchmark. The mcf benchmark solves the single-depot vehicle scheduling problem in the planning process of public transportation companies, using the network simplex algorithm accelerated with column generation. At runtime, the mcf benchmark consumes 100% of a vCPU and about 1800MB of memory.

Large VMs run two workloads in two separate threads simultaneously, while small VMs run only one workload in a single thread. Each thread randomly chooses one of the three workloads described above, runs it, records the time it took to run, sleeps for a randomly chosen time between 0 and 90 seconds, and then repeats this process. Each compute node has four large VMs and three small VMs.
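The two-phase usemem behaviour can be sketched as follows. This is a Python stand-in for the C-style program described above (which uses malloc), with the sizes and delays exposed as parameters so the structure is visible; it is an illustration, not the thesis's actual program:

```python
import time

def usemem(chunk_mb=100, target_mb=1800, retention_s=300, sleep_s=2):
    """Two-phase memory workload.

    Allocation phase: grab chunk_mb of memory at a time, sleeping between
    attempts, until target_mb is held; a failed allocation just means we
    sleep and retry (the C original sees malloc return NULL there).
    Retention phase: hold the memory for retention_s seconds, then return.
    """
    held = []
    while len(held) * chunk_mb < target_mb:
        try:
            # bytearray is zero-filled, so the pages are actually touched
            held.append(bytearray(chunk_mb * 1024 * 1024))
        except MemoryError:
            pass
        time.sleep(sleep_s)
    time.sleep(retention_s)
    return len(held)

# With the thesis's parameters this would be usemem(), i.e. 100MB steps
# up to 1800MB, held for 300 seconds.
```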
Keeping aside one core and 2 GB of memory for the hypervisor, each compute node offers 7 cores and 14 GB of memory to its VMs, while the seven VMs are together allotted 11 vCPUs (4 × 2 + 3 × 1) and 22 GB of memory (4 × 4 + 3 × 2). This gives an overcommitment ratio of 11/7 = 22/14 ≈ 1.57 for both CPU and memory.

### 5.2 Results

The experiment that we ran was to determine the effectiveness of auto-ballooning in memory overcommitment. For this, we monitored two hosts, named compute2 and compute3, each running seven VMs as described in the previous section. We disabled live migration in the DRS on both hosts. On compute3, auto-ballooning was also disabled. We compare the results obtained after running the experiment for about 17 hours.

#### 5.2.1 Analyzing Auto-Ballooning

Figure 5.1 shows the different memory metrics of the compute2 and compute3 hosts plotted against time. From these graphs, the most remarkable difference is in the swap memory on the two machines. On compute3, the swap memory rose to very high levels of about 7GB, which is equal to the amount of memory we have overcommitted. On compute2, the swap memory remained very low throughout the experiment, staying below 1GB most of the time and never going above 1.5GB. On compute3, the total used memory remains almost constant and equal to the maximum memory, while it keeps fluctuating on compute2. This is because memory, once allocated, cannot be reclaimed on compute3.

Figure 5.1: Graphs showing different memory metrics of compute3 (auto-ballooning disabled) and compute2 (auto-ballooning enabled). X-axis represents the time at which the value was recorded; Y-axis shows the value.

Figure 5.2 shows the CPU usage of compute2 and compute3 plotted against time. In the graph, a few hours after starting the experiment, the CPU usage of compute3 is consistently low, while compute2 makes better use of the CPU. This is because of the high levels of swap on compute3. The mcf and usemem workloads spend more time performing I/O and hence are not able to utilize the CPU efficiently.
Figure 5.2: Graphs showing CPU usage of compute3 (auto-ballooning disabled) and compute2 (auto-ballooning enabled). X-axis represents the time at which the value was recorded; Y-axis shows the value.

Table 5.1 lists the number of times each workload ran on both machines. libquantum and mcf are CPU intensive. On compute2, CPU intensive workloads ran 303 times compared to 287 times on compute3 in the same time interval. mcf and usemem are memory intensive. On compute2, memory intensive workloads ran 357 times compared to 289 times on compute3 in the same time interval. This clearly shows that there was better utilization of the CPU and memory resources on compute2, resulting in better overall throughput.

Table 5.1: Number of times each workload ran during the experiment

| Workload   | Count-compute3 | Count-compute2 |
|------------|----------------|----------------|
| libquantum | 153            | 198            |
| mcf        | 134            | 105            |
| usemem     | 155            | 252            |

Figure 5.3: Graphs showing the time taken by different workloads on the two hosts. Values greater than two hours have been filtered out of the compute3 graph for the sake of visibility. X-axis represents the time at which the value was recorded; Y-axis shows the value.

Table 5.2: Mean time taken by each workload to run

| Workload   | Mean-compute3 | Mean-compute2 |
|------------|---------------|---------------|
| libquantum | 14.5 min      | 23.7 min      |
| mcf        | 39.6 min      | 20.3 min      |
| usemem     | 12.8 min      | 7.9 min       |

The graphs in Figure 5.3 show the time it took for the workloads to run, plotted against the time at which each workload completed. In the graph for compute3, we can see that the time to complete the memory intensive workloads (mcf and usemem) can grow to more than two hours, while it always remains below 33 minutes on compute2. On top of this, some very large values have been filtered out of the compute3 graph; there were three such values for the mcf benchmark, each greater than 9 hours. On the contrary, the libquantum workload performs better on compute3.
This is because libquantum does not require much memory, and the CPU on compute3 is relatively free because the other workloads do not utilize it well. Table 5.2 shows the mean time it took for the workloads to run. As expected, libquantum performs better on compute3 while the other workloads perform better on compute2. But mean time is not a good metric for comparing the effectiveness of auto-ballooning; throughput is a better metric.

#### 5.2.2 Analyzing CPU Hotspot Detection

Figure 5.4: CPU usage of compute2 during one hour of the experiment. Blue dots show the points at which migration was triggered. X-axis represents the time at which the value was recorded; Y-axis shows the value in %.

The graph in Figure 5.4 shows the CPU usage of compute2 for one hour during the experiment. The blue dots represent the instants at which migration was triggered. Migration was disabled for this experiment, so no machine was actually migrated off this host. In the graph, the CPU usage is 100% in two intervals. The first interval is from around time 3:47 to 4:06 (interval1) and the second is from around 4:21 to 4:38 (interval2). However, migration is triggered only during interval1. This implies that there was a hotspot due to CPU utilization only during interval1 and not during interval2.

The graphs in Figures 5.5 and 5.6 show the CPU usage and steal time of the individual virtual machines on compute2 during that one hour. For a large virtual machine, 100% CPU usage implies that it is using its two vCPUs completely, and 50% CPU usage implies that it is using only one of its two vCPUs. For a small virtual machine, 100% CPU usage means that it is using its only vCPU completely. From the graphs, we can see that during interval1, the large virtual machines are trying to use 6 vCPUs and the small virtual machines are trying to use 3 vCPUs, which is a total of 9 vCPUs. The host has only 8 physical CPUs, and hence it is overloaded.
This is also reflected in the steal time of the individual VMs, which is higher than 10% during interval1. During interval2, a total of 8 vCPUs are being used, which is equal to the number of physical CPUs, and hence there is no overload. The steal times of the VMs are also low, and migration is not triggered. These observations show that steal time is an appropriate metric for determining CPU hotspots.

#### 5.2.3 Analyzing the CUSUM Algorithm

Figure 5.7 shows a graph of a one-hour window of the experiment. The red dots in the graph mark the points at which etcd was updated with the value of used memory for the host compute2. It is evident from the graph that the algorithm is successful in filtering out sudden changes in the value of used memory and only updates etcd when the load profile has changed. In the one-hour window, etcd was updated 13 times. Without the filtering algorithm, etcd would have been updated once every 10 seconds, i.e. 360 times in an hour, which is about 28 times more than with the filtering algorithm. Overall, during the 17-hour run of the experiment, etcd was updated just 266 times.

Figure 5.7: Graph showing the points at which etcd is updated. X-axis represents the time at which the value was recorded; Y-axis shows the value.

#### 5.2.4 Memory and CPU Footprint of DRS

The graphs in Figure 5.8 show the resource usage statistics of the DRS algorithm. As we can see, the CPU usage is always less than 0.35%, which is almost negligible. The memory used by the DRS algorithm is 50MB, with the entire virtual memory size of the software being just 350MB.

### Summary

In this chapter, we described our experimental setup and compared resource utilization without auto-ballooning against the case when auto-ballooning was enabled. We looked at the effectiveness of steal time in identifying CPU hotspots.
We also saw the performance of the CUSUM algorithm, which is used to keep sudden changes in resource usage from affecting the decision making of the DRS algorithm. Finally, we looked at the resources consumed by our implementation of the DRS algorithm.
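A CUSUM-style change filter of the kind evaluated in Section 5.2.3 can be sketched as follows. The update rule and the drift/threshold parameters here are illustrative assumptions, not the thesis's actual implementation or tuning:

```python
class CusumFilter:
    """Two-sided CUSUM change detector: report a metric (e.g. to etcd)
    only when its level has genuinely shifted, not on every sample.

    drift absorbs normal fluctuation around the current level; threshold
    is the cumulative deviation that counts as a real change. Both values
    are assumed defaults for illustration.
    """
    def __init__(self, drift=2.0, threshold=10.0):
        self.drift = drift
        self.threshold = threshold
        self.mean = None
        self.pos = 0.0  # cumulative positive deviation
        self.neg = 0.0  # cumulative negative deviation

    def update(self, sample):
        """Feed one sample; return True if the load profile changed."""
        if self.mean is None:
            self.mean = sample
            return True  # first sample is always reported
        self.pos = max(0.0, self.pos + sample - self.mean - self.drift)
        self.neg = max(0.0, self.neg + self.mean - sample - self.drift)
        if self.pos > self.threshold or self.neg > self.threshold:
            # change detected: reset and track the new level
            self.mean = sample
            self.pos = self.neg = 0.0
            return True
        return False
```

A monitor built this way would call update() once per sampling interval (every 10 seconds in the experiment) and write to the key-value store only when it returns True, which is what produces the sparse red dots in Figure 5.7.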
https://math.stackexchange.com/questions/2136083/prime-factorization-in-integral-domains
# Prime Factorization in Integral Domains

I'm working on a homework problem and I'm completely stumped proving the following implication:

If $R$ is an integral domain such that every nonzero prime ideal contains a prime element, then every nonzero, nonunit element of $R$ is expressible as a product of primes.

Hints would be appreciated; nothing I've tried seems to be working. Thanks.

The result you want to prove is known as Kaplansky's criterion for unique factorization domains (UFD). To prove it you have to consider the set $S$ consisting of the units of $R$ together with (finite) products of prime elements of $R$. Basically the idea is to show that $S=R\setminus \{0\}$. This can be done using the following result:

Lemma: $S$ is a saturated multiplicatively closed set.

Proof: If $x, y\in S$, then we can write both $x$ and $y$ as products of primes, so $xy$ can also be written as a product of primes. This shows that $S$ is a multiplicatively closed set. Now, to prove that $S$ is saturated we have to show that if $x\in S$, then every divisor of $x$ is in $S$ too. If we write $x=up_1\cdots p_n$, where $u$ is a unit and the $p_i$'s are prime, then it can be shown by induction on $n$ that every divisor of $x$ is in $S$. I leave this induction proof to you.

Now, we argue by contradiction, assuming that there is a nonzero element $a\in R$ such that $a\notin S$. The ideal generated by $a$, $\langle a\rangle$, is disjoint from $S$, i.e., $S\cap \langle a\rangle=\emptyset$, because if there were some $ra\in S$, then $a$ would be in $S$ (since $a\mid ra$ and $S$ is saturated by the lemma above), contradicting our hypothesis that $a\notin S$. Therefore the set $A=\{I\; \text{nonzero ideal of}\; R:I\cap S=\emptyset\}$ is non-empty, and then by Zorn's Lemma $A$ has a maximal element $P$, which is not only an ideal but in fact a prime ideal.
By our general hypothesis $P$ contains a prime element, say $p$, i.e., $p\in P$; but by the definition of $S$ it is clear that $p\in S$, so $p\in P\cap S$, which contradicts $P\cap S=\emptyset$. This contradiction comes from our assumption that $a\notin S$. Hence every nonzero $a\in R$ belongs to $S$, i.e., $S=R\setminus \{0\}$, and this means that every nonzero, nonunit element of $R$ is expressible as a product of primes.

• Good, now we finally have a self-contained proof in an appropriate question to link to in the future. There was a complete proof at a different question, but nobody would have ever thought to find it at the question asked, and the presentation was so <s>elaborate</s> ahem insightful that it may have gone unappreciated. – rschwieb Feb 9 '17 at 5:11
• @rschwieb thanks for the feedback. I appreciate your comment. – Xam Feb 9 '17 at 12:42
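For reference, here is one way the induction left to the reader in the lemma might go. This sketch is my addition and fills in the exercise under the answer's notation; it is not part of the original answer:

```latex
\textbf{Claim.} If $x = u p_1 \cdots p_n$ with $u$ a unit and each $p_i$ prime,
then every divisor $d$ of $x$ lies in $S$.

\textbf{Proof sketch (induction on $n$).}
If $n = 0$, then $x = u$ is a unit, so any divisor $d$ of $x$ is a unit and
hence $d \in S$.

For $n > 0$, write $x = d e$. Since $p_n$ is prime and $p_n \mid d e$,
either $p_n \mid d$ or $p_n \mid e$.
\begin{itemize}
\item If $p_n \mid e$, say $e = p_n e'$, then cancelling $p_n$ (we are in an
      integral domain) gives $d e' = u p_1 \cdots p_{n-1}$, so $d$ divides
      $u p_1 \cdots p_{n-1}$ and $d \in S$ by the induction hypothesis.
\item If $p_n \mid d$, say $d = p_n d'$, then similarly
      $d' e = u p_1 \cdots p_{n-1}$, so $d' \in S$ by the induction
      hypothesis, and therefore $d = p_n d' \in S$. \qed
\end{itemize}
```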
https://socratic.org/questions/how-do-i-find-the-vertex-axis-of-symmetry-y-intercept-x-intercept-domain-and-ran-15#175702
# How do I find the vertex, axis of symmetry, y-intercept, x-intercept, domain and range of y=-x^2+4x+1?

Oct 10, 2015

The method used is completing the square.

#### Explanation:

First, complete the square:

y=-x^2+4x+1
y=-(x^2-4x)+1
y=-((x-2)^2-4)+1
y=-(x-2)^2+5

The vertex is at (2, 5), and the axis of symmetry is the vertical line x=2.

The y-intercept comes from setting x=0, which gives y=1, so the graph crosses the y-axis at (0, 1).

The x-intercepts come from setting y=0: -(x-2)^2+5=0 gives (x-2)^2=5, so x=2+-sqrt5.

The domain is the set of allowed x values; since y is a polynomial, the domain is all real numbers. The range is the set of values y can take; because -(x-2)^2<=0 for every x, we have y<=5, so the range is y<=5. Plotting the graph with a few suitable values confirms this.
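For readers who want to double-check the completed square numerically, here is a small Python sketch using the standard vertex formulas x = -b/(2a) and y = c - b²/(4a) for y = ax² + bx + c:

```python
# y = a x^2 + b x + c with a = -1, b = 4, c = 1
a, b, c = -1, 4, 1

# vertex and axis of symmetry from the completed-square form
xv = -b / (2 * a)           # 2.0 -> axis of symmetry is x = 2
yv = c - b**2 / (4 * a)     # 5.0 -> vertex at (2, 5)

# y-intercept: set x = 0
y_intercept = c             # 1

# x-intercepts: solve -(x-2)^2 + 5 = 0 via the quadratic formula
disc = b**2 - 4 * a * c     # 20
roots = sorted([(-b - disc**0.5) / (2 * a), (-b + disc**0.5) / (2 * a)])
print(xv, yv, roots)

# since a < 0 the parabola opens downward: every value is at most 5
assert all(-(x - 2)**2 + 5 <= 5 for x in range(-10, 11))
```
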
http://math.stackexchange.com/questions/334619/find-the-interval-of-convergence
# Find the interval of convergence

Find the interval of convergence of $$\sum_{n=1}^\infty \frac{(-1)^{n+1}(x-2)^n}{n2^n}$$ Thank you very much in advance!

-

## 3 Answers

Define $z=x-2$. Where does $$\sum_{n=1}^\infty \frac{(-1)^{n+1} z^n}{n 2^n}$$ converge? What can you say about your sum in advance?

-

Hint: 1) Try the Alternating series test. This should give you an open interval where the series converges. 2) Find an argument why it diverges outside the corresponding closed interval (do the terms even go to zero?) 3) Treat the boundary separately.

-

You can also use the fact that the radius of convergence is equal to: $$R=\frac{1}{\limsup_{n\rightarrow\infty}\sqrt[n]{|a_n|}}$$ In this case: $$a_n=\frac{(-1)^{n+1}}{n\cdot2^n}$$ Calculation gives: the series converges whenever $|z|<2$.

• So does that mean, -2 < x < 0? – Cossette Mar 19 '13 at 10:52
• If we had just $x^n$ instead of $(x-2)^n$ we would get $-2<x<2$; so what does that mean for $(x-2)^n$?! – Adrian57 Mar 19 '13 at 10:58
http://lesoluzioni.blogspot.com/2013/06/symoblic-ctl-model-checking.html
Saturday, June 1, 2013

Ordered Binary Decision Diagram

An OBDD represents a Boolean formula as a rooted, directed acyclic graph under a fixed variable ordering. OBDDs are canonical, which gives the following property:

$$\phi \leftrightarrow \psi \Rightarrow OBDD(\phi) = OBDD(\psi)$$

Symbolic Denotation $\xi$

$\xi : S \rightarrow$ boolean formulas (OBDDs). By definition,

$$\xi([\phi]) := \xi(\{ s \mid M,s \models \phi \})$$

A list of some symbolic encodings:

• $\xi([\mathbf{EX}\phi]) = \exists V' . (\xi ([\phi])[V'] \wedge \xi (R)[V,V'])$
• $\xi([\mathbf{EG}\phi]) = \nu Z . (\xi([\phi]) \wedge \xi([\mathbf{EX}Z]))$
• $\xi([\mathbf{E}(\phi\mathbf{U}\psi)]) = \mu Z . (\xi([\psi]) \vee (\xi([\phi]) \wedge \xi([\mathbf{EX}Z])))$

The above exactly parallels the classical fixed-point techniques. See a previous post (link).

Fixed-point Algorithms

The following shows the symbolic counterpart of the classical fixed-point algorithms (denotational, based on sets of states). Recall their inductive versions:

• $[\mathbf{EG}\phi]$: $X_{j+1} = X_j \cap \mathsf{PreImage}(X_j)$
• $[\mathbf{E}(\phi\mathbf{U}\psi)]$: $X_{j+1} = X_j \cup ([\phi] \cap \mathsf{PreImage}(X_j))$

PreImage

OBDD PreImage(OBDD X){
  return ∃V'. (X[V'] ∧ ξ(R)[V,V']);
}

Check_EG

OBDD Check_EG(OBDD X) {
  Y′ := X;
  repeat
    Y := Y′;
    Y′ := Y ∧ PreImage(Y);
  until (Y′ ↔ Y);
  return Y;
}

Check_EU

OBDD Check_EU(OBDD X1, OBDD X2) {
  Y′ := X2;
  repeat
    Y := Y′;
    Y′ := Y ∨ (X1 ∧ PreImage(Y));
  until (Y′ ↔ Y);
  return Y;
}

Check_FairEG

Here we refer to the Emerson–Lei algorithm, which handles the fair-CTL version.

$$[\mathbf{E}_f \mathbf{G} \phi ] = \nu Z .([\phi] \cap \bigcap_{F_i \in FT} [\mathbf{EX}\,\mathbf{E}(Z \,\mathbf{U}\,(Z \wedge F_i ))])$$

This can be interpreted as follows: at each step of the iteration, $Z$ must keep holding along a path until every fair condition $F_i$ is met while $Z$ still holds.

OBDD Check_FairEG(OBDD X) {
  Z’ := X;
  repeat
    Z := Z’;
    for each Fi in FT
      Y := Check_EU(Z, Fi ∧ Z);
      Z’ := Z’ ∧ PreImage(Y);
    end for;
  until (Z’ ↔ Z);
  return Z;
}

Check Invariants

Invariants are specified as $\mathbf{AG}\phi$.
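Before moving on to invariants: the fixed-point algorithms above operate on OBDDs, but the same iterations can be sketched with explicit Python sets standing in for symbolic state sets (my toy illustration, not OBDD-based — here R is an explicit set of transition pairs, so PreImage is a comprehension rather than an ∃V' quantification):

```python
def pre_image(X, R):
    """Explicit-state stand-in for the symbolic PreImage:
    states with at least one successor inside X."""
    return {s for (s, t) in R if t in X}

def check_EG(X, R):
    """Greatest fixed point: nu Z. X and EX Z."""
    Y = set(X)
    while True:
        Y_next = Y & pre_image(Y, R)
        if Y_next == Y:
            return Y
        Y = Y_next

def check_EU(X1, X2, R):
    """Least fixed point: mu Z. X2 or (X1 and EX Z)."""
    Y = set(X2)
    while True:
        Y_next = Y | (X1 & pre_image(Y, R))
        if Y_next == Y:
            return Y
        Y = Y_next

# toy Kripke structure: 0 -> 1 -> 2 -> 2 and 3 -> 3
R = {(0, 1), (1, 2), (2, 2), (3, 3)}
print(sorted(check_EU({0, 1}, {2}, R)))  # [0, 1, 2]
print(sorted(check_EG({2, 3}, R)))       # [2, 3]: the self-loops sustain G
```
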
$\mathbf{AG}\phi$ is equivalent to $\neg \mathbf{EF} \neg \phi$, so checking the invariant amounts to a reachability question: can a state violating $\phi$ be reached from the initial states? We show the forward checking technique.

OBDD Compute_reachable() {
  Y′ := I;
  Y := ⊥;
  while ¬(Y′ ↔ Y) {
    Y := Y′;
    Y′ := Y ∨ Image(Y);
  }
  return Y;
}

Note that backward checking instead iteratively calculates the preimage of $\neg\phi$ until a fixed point is reached or the preimage intersects with $I$.

An advanced version with the so-called frontier (F) is presented below.

OBDD Compute_reachable() {
  Y := I;
  F := I;
  while ¬(F ↔ ⊥) {
    F := Image(F) ∧ ¬Y;
    Y := F ∨ Y;
  }
  return Y;
}

In set terms: at each step we compute the image only of the newly discovered states (the frontier). The set of reachable states is saturated when the image of the frontier brings in no new states.

Forward_Check_EF

Based on the discussion above, we can now check invariants (whose violations feature as bad states).

bool Forward_Check_EF(OBDD BAD) {
  Y := F := I;
  while ¬(F ↔ ⊥) and (F ∧ BAD) ↔ ⊥ {
    F := Image(F) ∧ ¬Y;
    Y := Y ∨ F;
  }
  if F = ⊥ return false else return true
}

The above utilizes the frontier technique: it enumerates the reachable states and verifies whether any bad states lie among them.

1. R. Sebastiani, Symbolic CTL Model Checking, slides on Formal Methods, 2012
2. McMillan, Symbolic Model Checking, a Carnegie-Mellon PhD dissertation, 1993
3. Wikipedia, Fair Computational Tree Logic, http://en.wikipedia.org/wiki/Fair_computational_tree_logic

Appendix

Most of this post is heavily extracted from RS's teaching slides, hence I don't claim the copyright. This post is intended as a note, with marginal personal remarks. For those who need to respect the source, please read the following copy of the original copyright notice.

Copyright notice: some material (text, figures) displayed in these slides is courtesy of M. Benerecetti, A. Cimatti, P. Pandya, M. Pistore, M. Roveri, and S. Tonetta, who detain its copyright.
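The forward reachability loops above translate directly to explicit sets as well; this sketch (again a toy stand-in, not OBDD-based) shows the frontier optimization and the early exit when a bad state is hit:

```python
def image(X, R):
    """Successors of the states in X."""
    return {t for (s, t) in R if s in X}

def forward_check_EF(I, BAD, R):
    """Forward reachability with a frontier: returns True iff some BAD
    state is reachable from I (i.e. the invariant AG not-BAD is violated)."""
    Y = set(I)      # reachable states found so far
    F = set(I)      # frontier: newly reached states
    while F and not (F & BAD):
        F = image(F, R) - Y   # only expand the new states
        Y |= F
    return bool(F & BAD)

R = {(0, 1), (1, 2), (2, 0), (3, 4)}
print(forward_check_EF({0}, {2}, R))  # True: state 2 is reachable from 0
print(forward_check_EF({0}, {4}, R))  # False: state 4 is only reachable from 3
```
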
Some examples displayed in these slides are taken from [Clarke, Grumberg & Peled, "Model Checking", MIT Press], and their copyright is detained by the authors. All the other material is copyrighted by Roberto Sebastiani. Every commercial use of this material is strictly forbidden by the copyright laws without the authorization of the authors. No copy of these slides can be displayed in public without containing this copyright notice.
http://putnam.jachavezd.oucreate.com/uncategorized/october-6-2016/
# October 6, 2016

We started by working on the following problem from the 2014 Virginia Tech Regional Math Contest: Suppose we are given a 19×19 chessboard (a table with $$19^2$$ squares) and remove the central square. Is it possible to tile the remaining $$19^2 -1 = 360$$ squares with 4 × 1 and 1 × 4 rectangles? (So each of the 360 squares is covered by exactly one rectangle.) Justify your answer.

We proved that the answer is no, by adapting the proof of a classical problem: it is not possible to tile an 8×8 chessboard with 2 opposite corners removed using 2×1 and 1×2 tiles.

Then we worked on solutions for three problems from the 2010 Putnam:

• B1: We used the suggestion from last week's post, and also saw how to do a proof using the Cauchy-Schwarz inequality.
• A2: Using ideas similar to the ones we used in problem 8 of the proof strategies handout, we could prove that the derivative of f is periodic with period 1. Then we took the second derivative of f, which turned out to be zero, and thus f must be of the form ax + b for some constants a and b.
• A3: Here we first thought about a similar but simpler problem: $$h(x) = ah'(x)$$. Using standard ODE techniques, $$h(x) = Ce^{x/a}$$, so the boundedness condition forces the constant C to be 0 and thus h is zero as well. For the two variable case, fix x and y and take derivatives of $$h(x+at,y+bt)$$ with respect to t.

Today's nuggets of wisdom:

• Once again, invariants made an appearance. Without them, it can be quite difficult to prove that something does not exist!
• Simplification is often a good idea to get you started.
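One concrete invariant that settles the 19×19 problem (a standard column-coloring argument, possibly not the exact adaptation used in the session): color square (i, j) by j mod 4. A horizontal 1×4 tile covers one square of each color, while a vertical 4×1 tile covers four squares of a single color, so in any tiling all four color counts must be congruent to each other mod 4. Counting shows they are not:

```python
n = 19
center = (n // 2, n // 2)      # the removed central square, 0-indexed (9, 9)

counts = [0, 0, 0, 0]          # squares of each column-color j mod 4
for i in range(n):
    for j in range(n):
        if (i, j) != center:
            counts[j % 4] += 1

# A horizontal tile adds 1 to every color class; a vertical tile adds 4
# to a single class. Hence all class counts must agree modulo 4.
print(counts)                   # [95, 94, 95, 76]
print([c % 4 for c in counts])  # [3, 2, 3, 0] -- not all equal, so no tiling
```
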
https://dsp.stackexchange.com/questions/41660/confusion-in-normalised-bandwidth-of-a-signal
# Confusion in Normalised Bandwidth of a signal

I know the normalised bandwidth can be found from an FFT plot of the signal. But I'm confused about the stage at which I should take the FFT. First, binary data is input to a convolutional encoder, then to an NRZ encoder, then GMSK modulation (the sampling frequency up to this point is 100 kHz), then upsampling to a 30 MHz sampling frequency, and then quadrature modulation. At what stage am I supposed to take the FFT to find the normalised bandwidth?

You can take the FFT of the signal at any stage after modulation, but it would be simplest (fewer samples required) to do it immediately after modulation. With proper upsampling, the FFT of your signal prior to upsampling would be identical, as long as you used proportionally more points to keep the time length identical. In your example of 100 kHz and 30 MHz, you could take an $N$-point FFT at 100 kHz or a $300N$-point FFT at 30 MHz, and the result should look identical (with just more points interpolated in the FFT due to the upsampling). Given the 100 kHz sampling rate and room for your interpolation filter, I would assume that the GMSK modulated signal bandwidth you are trying to view is less than 70 kHz (as a complex signal)?
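A toy numeric illustration of the answer's point (scaled-down stand-in numbers, not the actual 100 kHz/30 MHz chain, and a naive DFT instead of an FFT to stay dependency-free): a tone that is perfectly interpolated to a higher rate produces its spectral peak at the same absolute frequency, just on a different bin index.

```python
import cmath

def dft_peak_freq(x, fs):
    """Frequency (Hz) of the largest-magnitude DFT bin, by naive DFT."""
    N = len(x)
    mags = [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) for k in range(N)]
    return mags.index(max(mags)) * fs / N

fs, N, L = 100.0, 32, 8        # stand-ins for 100 kHz and the 300x ratio
k = 5
f0 = k * fs / N                # tone exactly on bin k: 15.625 Hz

x = [cmath.exp(2j * cmath.pi * f0 * n / fs) for n in range(N)]

# An ideally interpolated band-limited tone equals the continuous tone
# resampled at the higher rate L*fs over the same time span.
y = [cmath.exp(2j * cmath.pi * f0 * n / (L * fs)) for n in range(L * N)]

# Same absolute peak frequency, even though the bin spacing changed.
print(dft_peak_freq(x, fs), dft_peak_freq(y, L * fs))
```
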
https://aviation.stackexchange.com/questions/44192/what-is-the-link-or-difference-between-change-in-momentum-pressure-difference
# What is the link OR difference between change in momentum & pressure difference?

I ask in regards to the thrust formula, which includes the change in momentum as well as the difference in pressure between the inlet & outlet of the engine to calculate generated thrust; this implies that momentum change & pressure difference are clearly two different phenomena.

But in reaction turbine blade cascades, we can observe a pressure distribution similar to that of an aircraft wing, producing lift which causes the turbine to spin, & a load component acting on the turbine bearings. Also, the inlet & outlet momentum change is seen to cause the turbine to spin. I would expect that a turbine's rotation can be explained by either of the explanations but not both together, since the pressure distribution over the blade profile is a consequence of the deflection of air by the blade, which also causes the change in momentum of the flow exiting the turbine. So the force acting upon a turbine blade can be calculated either by calculating momentum change, or by calculating pressure difference, but not as a sum of both.

Now, returning to the case of the engine: of course, there isn't a pressure difference perpendicular to the axis of the engine, so no force in this direction. But the pressure difference along the axis is a direct consequence of the acceleration of the flow (momentum change) through the engine. So why are pressure difference & momentum change summed together to get thrust?

It's Mr. Bernoulli's signature discovery: total pressure is dynamic pressure (momentum change) plus static pressure. Both static and dynamic pressure are the result of the impact of air molecules upon the wing or turbine blade, which acts as the lifting surface. The main distinction between the two is that static pressure is the result of random velocity vectors averaging zero, while dynamic pressure has an average molecule velocity unequal to zero.
Bernoulli's law can also be written as $$p_t = p_s + \tfrac{1}{2}\,\frac{\dot{m}}{A} \cdot V$$ With $\dot{m}$ the mass of air flowing per second through a surface of area $A$, and $V$ its velocity; since $\dot{m}/A = \rho V$, this is just the familiar dynamic pressure $\tfrac{1}{2}\rho V^2$. Since counting impacting molecules is not a convenient way to make computations regarding aircraft structures, we usually see it in a format that uses air density and wing area.

There is an analogy with potential and kinetic energy. Both are forms of energy, and one form can be freely exchanged for another one. A falling object transforms $m \cdot g \cdot \Delta h$ into $\frac{1}{2} \cdot m \cdot \Delta (V^2)$, but the total energy remains constant at any point in time. The same goes for pressure, which is actually a measure of the energy of air molecules.

A jet engine adds energy to the air molecules going through it. A turbine blade can be driven by either the dynamic pressure or by the difference in static pressure before and behind the blade - blades are designed differently according to the proportion of reaction (static pressure) and impulse (dynamic pressure). The exhaust shape of the engine can leave the outflowing air with a higher static pressure than the surrounding air, or expand the higher static pressure completely into dynamic pressure.

Static pressure and impulse change are summed together because they are two forms of exactly the same entity: the energy added to the air stream.

• so the reaction of a turbine blade is the total of the pressure difference across the profile AND the momentum change across the blade? – Guha.Gubin Sep 29 '17 at 10:36
• Yes, accounting for efficiency, thermodynamic losses, friction etc. – Koyovis Sep 29 '17 at 12:06
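Tying back to the thrust formula mentioned in the question: the momentum-flux term and the pressure term are simply added, F = ṁ(Ve − V0) + (pe − p0)Ae. A numeric sketch with made-up illustrative numbers (none of these are real engine data):

```python
# F = m_dot * (Ve - V0) + (pe - p0) * Ae
# momentum change of the stream + static-pressure imbalance on the exit area
m_dot = 100.0     # air mass flow, kg/s        (illustrative)
V0    = 250.0     # flight (inlet) velocity, m/s
Ve    = 600.0     # exhaust velocity, m/s
p0    = 101325.0  # ambient static pressure, Pa
pe    = 110000.0  # exit-plane static pressure, Pa
Ae    = 0.5       # nozzle exit area, m^2

momentum_thrust = m_dot * (Ve - V0)      # 35000.0 N
pressure_thrust = (pe - p0) * Ae         # 4337.5 N
F = momentum_thrust + pressure_thrust    # 39337.5 N
print(F)
```

A fully expanded nozzle has pe = p0, in which case the pressure term vanishes and all the thrust appears as momentum change - the two terms are bookkeeping for the same energy added to the stream.
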
https://www.tutorialspoint.com/how-can-a-sequential-model-be-built-on-auto-mpg-dataset-using-tensorflow
# How can a sequential model be built on the Auto MPG dataset using TensorFlow?

TensorFlow is a machine learning framework provided by Google. It is an open-source framework used in conjunction with Python to implement algorithms, deep learning applications, and much more. It is used for both research and production purposes.

The 'tensorflow' package can be installed on Windows using the line of code below:

pip install tensorflow

A tensor is the data structure used in TensorFlow. It helps connect the edges in a flow diagram, which is known as the 'data flow graph'. A tensor is essentially a multidimensional array or list. Tensors can be described using three main attributes:

• Rank − the number of dimensions of the tensor; it can be understood as the order of the tensor. A tensor can be one-dimensional, two-dimensional, or n-dimensional.
• Type − the data type associated with the elements of the tensor, such as float32 or int32.
• Shape − the number of elements along each dimension; for a matrix, the number of rows and columns together.

The aim of a regression problem is to predict the value of a continuous or discrete variable, such as a price, a probability, or whether it will rain or not.

The dataset we use is called the 'Auto MPG' dataset. It contains the fuel efficiency of 1970s and 1980s automobiles, along with attributes like weight, horsepower, and displacement. From these, we need to predict the fuel efficiency of specific vehicles.

We are using Google Colaboratory to run the code below. Google Colab (Colaboratory) runs Python code in the browser, requires zero configuration, and offers free access to GPUs (Graphical Processing Units). Colaboratory is built on top of Jupyter Notebook.
Following is the code snippet −

## Example

def build_compile_model(norm):
    model = keras.Sequential([
        norm,
        layers.Dense(64, activation='relu'),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])
    model.compile(loss='mean_absolute_error',
                  optimizer=tf.keras.optimizers.Adam(0.001))
    return model

print("The model is being built and compiled")
dnn_horsepower_model = build_compile_model(horsepower_normalizer)
print("The statistical summary is being computed")
dnn_horsepower_model.summary()

(The original snippet was missing the optimizer argument in model.compile(); the Adam optimizer with learning rate 0.001 is restored here from the credited TensorFlow tutorial.)

Code credit − https://www.tensorflow.org/tutorials/keras/regression

## Explanation

• A function named 'build_compile_model' is defined that builds a sequential model with three dense layers.
• The model is compiled and returned from the function.
• The statistical summary of the model is displayed on the console using the 'summary' method.

Published on 20-Jan-2021 12:55:08
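If TensorFlow is not at hand, the computation this compiled model performs — normalization, two ReLU layers of 64 units, then a single linear output — can be sketched in plain Python to see the data flow. The weights below are random stand-ins, not trained values (the real model learns them via model.fit), and the mean/std defaults are illustrative:

```python
import random
random.seed(0)

def normalize(xs, mean, std):
    """What the Keras Normalization layer does after adapt()."""
    return [(x - mean) / std for x in xs]

def dense(vec, W, b, relu=True):
    """One fully connected layer: out_j = act(b_j + sum_i vec_i * W[i][j])."""
    out = []
    for j in range(len(b)):
        s = b[j] + sum(vec[i] * W[i][j] for i in range(len(vec)))
        out.append(max(s, 0.0) if relu else s)
    return out

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

UNITS = 64
W1, b1 = rand_matrix(1, UNITS), [0.0] * UNITS
W2, b2 = rand_matrix(UNITS, UNITS), [0.0] * UNITS
W3, b3 = rand_matrix(UNITS, 1), [0.0]

def predict(horsepower, mean=104.5, std=38.0):
    """Single-feature prediction, mirroring Sequential([norm, 64, 64, 1])."""
    h = normalize([horsepower], mean, std)
    h = dense(h, W1, b1)
    h = dense(h, W2, b2)
    return dense(h, W3, b3, relu=False)[0]   # one scalar MPG-style output

print(predict(130.0))
```
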
https://code.tutsplus.com/tutorials/getting-started-with-redux-connecting-redux-with-react--cms-30352?ec_unit=translation-info-language
# Getting Started With Redux: Connecting Redux With React

This is the third part of the series on Getting Started With Redux, and in this tutorial, we're going to learn how to connect a Redux store with React. Redux is an independent library that works with all the popular front-end libraries and frameworks, and it works flawlessly with React because of its functional approach.

You don't need to have followed the previous parts of this series for this tutorial to make sense. If you're here to learn about using React with Redux, you can take the Quick Recap below and then check out the code from the previous part and start from there.

## Quick Recap

In the first post, we learned about the Redux workflow and answered the question, Why Redux? We created a very basic demo application and showed you how the various components of Redux—actions, reducers, and the store—are connected.

In the previous post, we started building a contact list application that lets you add contacts and then displays them as a list. We created a Redux store for our contact list, and we added a few reducers and actions. We attempted to dispatch actions and retrieve the new state using store methods like store.dispatch() and store.getState().

In this tutorial, you'll learn:

1. the difference between container components and presentational components
2. about the react-redux library and the Redux Toolkit
3. how to bind React and Redux using connect()
4. how to dispatch actions using mapDispatchToProps
5. how to retrieve state using mapStateToProps
6. how to dispatch actions and get the state using the new Redux hooks: useDispatch and useSelector

The code for the tutorial is available on GitHub in the react-redux-demo repo. Grab the code from the main branch and use that as a starting point for this tutorial. If you're curious to know how the application looks by the end of this tutorial, try the v2 branch. Let's get started.

## Designing a Component Hierarchy: Smart vs. Dumb Components

This is a concept that you've probably heard of before, but let's have a quick look at the difference between smart and dumb components. Recall that we created two separate directories for components, one named containers/ and the other components/. The benefit of this approach is that the behavior logic is separated from the view.

The presentational components are said to be dumb because they are concerned with how things look. They are decoupled from the business logic of the application and receive data and callbacks from a parent component exclusively via props. They don't care whether your application is connected to a Redux store or whether the data is coming from the local state of the parent component.

The container components, on the other hand, deal with the behavioral part and should contain very limited DOM markup and style. They pass the data that needs to be rendered to the dumb components as props.

I've covered the topic in depth in another tutorial, Stateful vs. Stateless Components in React.

Moving on, let's see how we're going to organize our components.

## Presentational Components

Here are the presentational components that we'll be using in this tutorial.

#### components/AddContact.jsx

This is an HTML form for adding a new contact. The component receives onInputChange and onFormSubmit callbacks as props. The onInputChange event is triggered when the input value changes and onFormSubmit when the form is being submitted.

#### components/ContactList.jsx

This component receives an array of contact objects as props, hence the name ContactList. We use the Array.map() method to extract individual contact details and then pass on that data to <ContactCard />.

#### components/ContactCard.jsx

This component receives a contact object and displays the contact's name and image.

## Container Components

We're also going to construct bare-bones container components.
#### containers/Contacts.jsx

The returnContactList() function retrieves the array of contact objects and passes it to the ContactList component. Since returnContactList() retrieves the data from the store, we'll leave that logic blank for the moment.

#### containers/AddContact.jsx

We've created three bare-bones handler methods that correspond to the three actions. They all dispatch actions to update the state. We've left out the logic for showing/hiding the form because we need to fetch the state.

Now let's see how to bind React and Redux together.

## The react-redux Library

React bindings are not available in Redux by default. You will need to install an extra library called react-redux first. The library exports many important APIs, including a <Provider /> component, a higher-order function known as connect(), and utility hooks like useSelector() and useDispatch().

### The Provider Component

Libraries like Redux need to make the store data accessible to the whole React component tree, starting from the root component. The Provider pattern allows the library to pass the data from top to bottom. The code below demonstrates how Provider magically adds the state to all the components in the component tree.

#### Demo Code

The entire app needs to have access to the store. So we wrap the provider around the app component and then add the data that we need to the tree's context. The descendants of the component then have access to the data.

### The connect() Method

Now that we've provided the store to our application, we need to connect React to the store. The only way that you can communicate with the store is by dispatching actions and retrieving the state. We've previously used store.dispatch() to dispatch actions and store.getState() to retrieve the latest snapshot of the state. The connect() method lets you do exactly this, but with the help of two methods known as mapDispatchToProps and mapStateToProps.
I have demonstrated this concept in the example below:

#### Demo Code

mapStateToProps and mapDispatchToProps both return an object, and the keys of this object become props of the connected component. For instance, state.contacts.newContact is mapped to props.newContact, and the action creator addContact() is mapped to props.addContact. But for this to work, you need the last line in the code snippet above: instead of exporting the AddContact component directly, we're exporting a connected component. connect() provides addContact and newContact as props to the <AddContact/> component.

### Simplifying the Code With Redux Hooks

We learned how to connect our React component to the state in the previous section. The problem with the technique used above is the volume of code we had to write: we had to repeat functions that map the state and the action dispatchers to the component's props. This can become an even bigger problem for large codebases.

Fortunately, some utilities were added to the React Redux library with the sole aim of decreasing the amount of boilerplate, and one of those utilities is the useSelector hook. With this hook, you don't need to map anything, nor do you need connect()—just import the hook and use it to access your application state anywhere in your app.

#### Demo Code

Another hook—useDispatch()—was used above to dispatch an action on clicking the span element. Compared to the code in the previous section, you would agree that this version is cleaner and easier to understand. There is also no code repetition, making it very useful when dealing with large codebases. You should note that these hooks were introduced in React Redux v7.1, so you must install either that or a later version in order to use them.

## How to Connect React and Redux

Next, we're going to cover the steps that you need to follow to connect React and Redux.

### Install the react-redux Library

Install the react-redux library if you haven't already.
You can use NPM or Yarn to install it.

### Provide the Store to Your App Component

Create the store first. Then, make the store object accessible to your component tree by passing it as a prop to <Provider />.

### Connect React Containers to Redux to Use State

The connect function is used to bind React containers to Redux. What that means is that you can use the connect feature to:

1. subscribe to the store and map its state to your props
2. dispatch actions and map the dispatch callbacks into your props

However, we'll no longer use the connect function to connect our store. Instead, we'll use the hooks to fetch from our store and dispatch actions when the need arises.

First, import useSelector, useDispatch, and the actions you want to dispatch into AddContact.jsx. Second, inside the AddContact() function, on the first line, import the state that the component needs and get the dispatcher:

The component is now equipped to read state from the store and dispatch actions. Next, the logic for handleInputChange, handleSubmit, and showAddContactBox should be updated as follows:

We've defined the handler methods, but there is still one part missing—the conditional statement inside the render function. If isHidden is false, the form is rendered. Otherwise, a button gets rendered.

## Displaying the Contacts

We've completed the most challenging part. Now, all that's left is to display these contacts as a list. The Contacts container is the best place for that logic.

We've gone through the same procedure that we followed above to connect the Contacts component with the Redux store, in that we used useSelector to grab the needed branch of the state, which is contactList. That completes the integration of our app with the state of the Redux store.

## What Next?

In the next post, we'll take a deeper look at middleware and start dispatching actions that involve fetching data from the server. Share your thoughts on the forum!
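As a closing aside (my own language-neutral illustration, not the tutorial's JavaScript): the store contract that react-redux builds on — a reducer plus dispatch/getState/subscribe — fits in a few lines of any language, here sketched in Python:

```python
def create_store(reducer, initial_state):
    """Minimal Redux-style store: holds state, applies the reducer on
    dispatch, and notifies subscribers after every state change."""
    state = {"value": initial_state}
    listeners = []

    def get_state():
        return state["value"]

    def dispatch(action):
        state["value"] = reducer(state["value"], action)
        for fn in listeners:
            fn()

    def subscribe(fn):
        listeners.append(fn)

    return get_state, dispatch, subscribe

# a reducer for a contact list, mirroring the ADD_CONTACT action
def contacts_reducer(state, action):
    if action["type"] == "ADD_CONTACT":
        return state + [action["payload"]]
    return state

get_state, dispatch, subscribe = create_store(contacts_reducer, [])
dispatch({"type": "ADD_CONTACT", "payload": "Ada"})
print(get_state())  # ['Ada']
```
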
This post has been updated with contributions from Kingsley Ubah. Kingsley is passionate about creating content that educates and inspires readers. Hobbies include reading, football and cycling.
http://tex.stackexchange.com/questions/60480/tikz-bars-width-and-size-of-plot
# TikZ: bars width and size of plot

This is a follow-up on my previous question. I have tried to plot my data and I see the following now. Would it be possible to define the size of the area that is used for the plot? I have tried to change the size with scale but that just does what it says. Is there also a way to use smaller bars? I'm trying to get nearly vertical lines (very narrow bars) stopping at 1 in my case. This is the part of the data file I'm trying to get working.

0.000021 1
0.000723 1
0.000835 1
0.024507 1
0.024628 1
0.027483 1
0.027548 1
0.027702 1
0.027778 1
0.027916 1

- In general, you should always post a minimal example to show what you've tried and what doesn't work. Also, take a look at the pgfplots manual. It's really detailed, so it might seem a bit overwhelming at first, but if you skim through it once you'll have a good feeling of the capabilities and how to use them. – Jake Jun 19 '12 at 23:26

I think in this case it's better to use a ycomb plot, which uses lines instead of bars. Your data is so closely spaced that bars would always be too wide. You can control the size of the plot using width=<length> or height=<length>. If you set only one of the two options, the plot area will be scaled proportionally; if you set both at the same time you can change the aspect ratio.

\documentclass[border=5mm]{standalone}
\usepackage{pgfplots, pgfplotstable}
\usepackage{filecontents}
\begin{filecontents}{data.csv}
0.000021 1
0.000723 1
0.000835 1
0.024507 1
0.024628 1
0.027483 1
0.027548 1
0.027702 1
0.027778 1
0.027916 1
\end{filecontents}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
ycomb, ymin=0, enlarge y limits=false, width=15cm, height=5cm
]
\addplot table {data.csv};
\end{axis}
\end{tikzpicture}
\end{document}
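If actual bars are preferred over comb lines, the "very narrow bars" from the question can also be approximated with a ybar plot and an explicit bar width. A sketch of the relevant axis options (untested here, but `bar width` is a standard pgfplots key):

```latex
\begin{axis}[
    ybar,
    bar width=0.4pt, % near-vertical bars
    ymin=0,
    enlarge y limits=false,
    width=15cm, height=5cm
]
\addplot table {data.csv};
\end{axis}
```

With data this closely spaced, though, the ycomb approach above usually gives a cleaner result.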
http://mathhelpforum.com/algebra/72290-what-complex-slope-line-complex-numbers-print.html
# What is complex slope of a line in complex numbers

• February 7th 2009, 07:46 AM
pankaj

What is complex slope of a line in complex numbers

If the equation of a line in the complex plane is given by

$\bar az+a\bar z+b=0$

where $a$ is a complex constant and $b$ is a real number, then the complex slope of the line is defined as

$\omega =-\frac{a}{\bar a}$

Now, what I want to know is: what is the complex slope of the line, i.e. what is its geometrical meaning? Also, what do $a$ and $b$ signify geometrically?

• February 7th 2009, 10:14 AM
HallsofIvy

Let a= u+ iv, b= p+ iq, z= x+ iy. Then $\overline{a}z+ a\overline{z}+ b= 0$ becomes

(u- iv)(x+ iy)+ (u+ iv)(x- iy)+ p+ iq= 0
(ux+ vy)+ i(uy- vx)+ (ux+ vy)+ i(vx- uy)+ p+ iq= 0

Notice that i(uy- vx) and i(vx- uy) cancel. Separating real and imaginary parts, 2ux+ 2vy+ p= 0 and q= 0 (the latter is why b= p is a real number). We can rewrite the first equation as 2vy= -2ux- b or y= -(u/v)x- b/(2v), so that is, in fact, a straight line with slope -u/v and y-intercept -b/(2v). Now, I honestly don't see what that has to do with the "complex slope", $-\frac{a}{\overline{a}}$! Where did you see that formula?

• February 7th 2009, 07:12 PM
pankaj

It was given in a textbook. Among other things it was also given that the complex slope of the line joining $z_{1}$ and $z_{2}$ is given by

$\frac{z_{1}-z_{2}}{\bar z_{1}-\bar z_{2}}$

Also, if the complex slopes of two lines are equal then the two lines are parallel, and the two lines will be perpendicular if the sum of the slopes is $0$. It is also given that the complex slope of a line making angle $\theta$ with the real axis is given by $\omega=e^{2i\theta}$. All this I have verified and it is true. But I want to understand the geometrical interpretation of this concept.
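A sketch of the geometry that reconciles the textbook rules quoted in the thread (this step is left implicit above): if the line makes an angle θ with the real axis, any two of its points differ by a real multiple of e^{iθ}, so

```latex
% If the line makes angle \theta with the real axis, then for any two of its
% points z_1, z_2 we have z_1 - z_2 = r e^{i\theta} with r real, hence
\omega = \frac{z_1 - z_2}{\bar{z}_1 - \bar{z}_2}
       = \frac{r e^{i\theta}}{r e^{-i\theta}}
       = e^{2i\theta}.
% So the complex slope records (twice) the inclination angle:
% parallel lines share \theta modulo \pi and hence share \omega, while
% perpendicular lines have angles \theta and \theta + \pi/2, giving slopes
% e^{2i\theta} and e^{2i(\theta + \pi/2)} = -e^{2i\theta}, which sum to 0.
```

In other words, the complex slope is a unit complex number encoding the direction of the line, which is why equal complex slopes mean parallel lines and slopes summing to zero mean perpendicular lines.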
http://triskelionadvisory.com/fx31v8c5/ea72c5-how-to-put-a-kappie-on-the-e
# how to put a kappie on the e You can! Please send me any questions you may have, and thanks for looking. E.g., [[1,2,3],[4,5,6]] is a $2\times 3$ matrix whose first row has entries 1, 2 and 3, and whose second row has entries 4, 5 and 6. Listed below are the ALT codes for letter E with accents (or letter E ALT codes). Select the magnifying glass icon on the upper right-hand corner of the Google Play screen and type in “Smart Keyboard” on the search text field. This instructional video is a helpful time-saver that will enable you to get good at computer basics. Word Perfect 8.0 for Windows. I, Jessica Oscoy, certify under penalty of perjury under the laws of the State of Washington t hat I transmitted the REPORT AND DECISION to those listed on the attached page as follows: EMAILED to all County staff listed as parties/interested persons and parties with e -mail addresses on record. I suppose they try to hold stuff like this internal just like UbiSoft trying to keep the allegations internal. If a problem calls for a decimal answer, give at least four decimal digits, or as many as the problem specifies. P.S. Visit HUAWEI Official Support to quickly get HUAWEI P30 lite User muanuals, software downloads,FAQs and other repair services. thanks . There are no default settings for international characters in WordPerfect. Step 5: Learn shortcuts on a Mac On a Mac, hold down the Alt key plus the letter A, C, E, I, O, U, N, or the tilde key, depending on the accent you want to add. Title. Our Company News I want to be able to put the "^" over a "p"- a p with a hat, basically. Put Your Heart In Country Book 71 Kount on Kappie Kross-Stitch by Veronica Altman. On a PC, hold down the Control key, plus the letter A, C, E, I, O, U, N, or the tilde key, depending on the accent you want to add. Animal Services Enforcement Appeal . Kappie ... 1.2 million user records put for sale on hacker forums. Deesdae skop die pc uit.. 
dit skakel oor na 'n snaakse karakter iets soos 'n musiek noot. The song has some nice cool lyrics penned by Raftaarrrr. To use an uppercase accented "e," release the ALT key, press and hold the SHIFT key, and press the letter "e" on your keyboard. Ek gebruik al vir lank die deelteken en kappie , e , ens. How to Put an Accent on a Letter of a Word Using Your Keyboard. Applying accent symbols helps … Intervals in … A small accent menu appears with different diacritical accent options, each of which has a number beneath it. Translation memories are created by human, but computer aligned, which might cause mistakes. Choose a symbol from a context menu, select it from the Windows® Character Map®, or use a key sequence to add symbols to text notes quickly without interrupting your workflow. Kappie replied to HempBoosh's topic in Tech News. Smart Keyboard should … Kount on Kappie 71 Put Your Heart in Country Cross Stitch Design Booklet Mini Used 20 page cross stitch pattern booklet in very good condition to make the cross stitch designs shown. Dec 17, 2018 - This Pin was discovered by Karen Botma. You must set up your computer to recognize the keystrokes you want to use (such as "alt + e" for "é"). Kappie doesn't smile as his plane lands but he asks me if my plane is in there too. You can! Matrices can be formed in the Matrix Context. I shake my head and he smiles softly but chucks his chin at me in challenge before leaving me to it. your own Pins on Pinterest A circumflex is used above certain vowels to indicate their pronunciation. How to use Keyboard Function Keys (FN) and Hotkeys. If you ever find yourself in a bind and without a bong, try some of these neat homemade pieces invented by creative, open minded people like you. Jun 15, 2002 1,165 0 76. Mar 4, 2005 #2 For example, write 2.3453 instead of 2.34. 1. Video: Adding a Special Characters or Symbols to a Text Note Note: This video was recorded using Revit. 
Weet jy of daar 'n verstelling op my PC is wat dit veroorsaak of het Sacanada verander hoe dit … How to Insert Symbols Above Letters With the Keyboard. Showing page 1. Shop our expansive collection of needlework projects & kits, counted cross stitch, needlepoint, quick point, stamped cross stitch & tools. When a something in my needlework tool box starts to annoy me, I remove it. E.g. Newbie SSD upgrade question. I bought this booklet secondhand. Sung by: Raftaar & Kappie Human translations with examples: diaeresis, share sign cap, punctuation mark. After information related specifically to the accent over the E I also discuss inserting other non English symbols into email messages composed with Outlook. Apologies to Kappie for hi-jacking his thread About Lenovo + About Lenovo. Contextual translation of "deelteken kappie" into English. It is possible to have Complex entries in a Matrix, but there is not a pre-defined Context that makes that easy to do. Of dit gaan oor na 'n ander bladsy en ek verloor my werk. Windows 7 and Microsoft Word allow you to insert symbols above letters without using ribbon commands. Typing the Letter 'É' Using Keyboard Shortcuts Use the "Preview Button" to see exactly how your entry looks. Repeat this process for each special character you need . 1967 E.M. Slatter My Leaves Are Green 127 ‘Magtig, how fat I am,’ she said good-humouredly. She put on her grey kappie … 1. e - just your typical letter ‘e’ 2. è - makes the ‘eh’ sound (i.e. Tap the Enter key on your tablets keyboard to begin searching. Onto the sucking sound, I'm pretty sure mine did that as well when the setting wasn't on the default setting. Windows 7 and Microsoft Office Word 2010 can insert accent symbols in a document. He has performed nicely in it, so guys check this out. Have any suggestions let me know.. 06/03/2013 . Next, press the letter "e"; this should make the acute accent appear. They think it will go away if noone knows about it. (such as "alt + e" for "é") 5. 
Computer dictionary definition of what colon means, including related links, information, and terms. to tell the difference between 1+2/3+4 and [1+2]/[3+4] click the "Preview Button". In the insert symbols, the only ones with the hat are the French vowels. About Kappie. Look for the Smart Keyboard app. 1936 E. Rosenthal Old-Time Survivals 25 The kappie is worn by some wives of farmers. I'm wondering what I should work on, clowns or cats! Well, I found the pictures for more cross stitch patterns, now I just need to prepare them and put them in jpeg form to add here on Kappie's FB site. Create Your Own Personalized Name Art and Custom Letter Art by Alphabet Photography. Eg, if I had my flap set on demisting vent instead of front vents, it would do it when I turned ignition on. Cannabis smokers are extremely innovative when it comes to making pieces with materials found around the home. Accent Menu . Activate the compose key: Start Tweaks and choose at Keyboard & Mouse -> Compose-Key to designate your compose key. Discover (and save!) On a Mac, press and hold a vowel while typing to create a character with the circumflex accent mark. These accents on the letter E are also called accent marks, diacritics, or diacritical marks. Found 0 sentences matching phrase "deelteken".Found in 0 ms. Paperback. Save that it is of more generous dimensions, it is the cowl preferred by the Dutch peasant women of Rembrandt or Adriaan van Ostade. 1Cheap2Crazy Golden Member. Need to put the circumflex over the a in “coup de grâce” or the umlaut over the i in “naïve”? Hope this helps buddy. Instagram Love Lyrics: Indian rapper Raftaar’s latest rap song featuring Kappie, sponsored by Vodafone U. If it’s a tool that provides a service I need, I think of improvements to it, so that it doesn’t annoy me.. For example, I have a little pin cushion that works great for me outside my sewing box. 
https://xianblog.wordpress.com/category/statistics/r-statistics/
## a very quick Riddle

Posted in Books, Kids, pictures, R on January 22, 2020 by xi'an

A very quick Riddler's riddle last week with the question

Find the (integer) fraction with the smallest (integer) denominator strictly located between 1/2020 and 1/2019.

and the brute force resolution

```for (t in (2020*2019):2021){
a=ceiling(t/2020)
if (a*2019<t) sol=c(a,t)}
```

leading to 2/4039 as the target. Note that

$\dfrac{2}{4039}=\dfrac{1}{\dfrac{2020+2019}{2}}$

## Le Monde puzzle [#1127]

Posted in Books, Kids, R, Statistics on January 17, 2020 by xi'an

A permutation challenge as Le weekly Monde current mathematical puzzle:

When considering all games between 20 teams, of which 3 games have not yet been played, wins bring 3 points, losses 0 points, and draws 1 point (each). If the sum of all points over all teams and all games is 516, what is the largest possible number of teams with no draw in every game they played?

The run of a brute force R simulation of 187 purely random games did not produce enough acceptable tables in a reasonable time. So I instead considered that a sum of 516 over 187 games means solving 3a+2b=516 and a+b=187, leading to 142 3's to allocate and 45 1's. Meaning for instance this realisation of an acceptable table of game results

```games=matrix(1,20,20);diag(games)=0
while(sum(games*t(games))!=374){
games=matrix(1,20,20);diag(games)=0
games[sample((1:20^2)[games==1],3)]=0}
games=games*t(games)
games[lower.tri(games)&games]=games[lower.tri(games)&games]*
sample(c(rep(1,45),rep(3,142)))* #1's and 3's
(1-2*(runif(20*19/2-3)<.5)) #sign
games[upper.tri(games)]=-games[lower.tri(games)]
games[games==-3]=0;games=abs(games)
```

Running 10⁶ random realisations of such matrices with no constraint whatsoever provided a solution with 915,524 tables in which every team had at least one draw, 81,851 tables with 19 teams with some draws, 2592 tables with 18 with some draws and 3 tables with 17 with some draws.
However, given that 10*9=90, it seems to me that the maximum number should be 10, by allocating all 90 draw points to the same 10 teams and 142 3's at random in the remaining games, and I reran a simulated annealing version (what else?!), reaching a maximum of 6 teams with no draws. Nowhere near 10, though!

## an elegant sampler

Posted in Books, Kids, R, University life on January 15, 2020 by xi'an

Following an X validated question on how to simulate a multinomial with fixed average, W. Huber produced a highly elegant and efficient resolution with the compact R code

```tabulate(sample.int((k-1)*n, s-n) %% n + 1, n) + 1
```

where k is the number of classes, n the number of draws, and s equal to n times the fixed average. The R function sample.int is an alternative to sample that seems faster. Breaking the outcome of

```sample.int((k-1)*n, s-n)
```

as nonzero positions in an n x (k-1) matrix and adding a row of n 1's leads to a simulation of integers between 1 and k by counting the 1's in each of the n columns, which is the meaning of the above picture, where the colour code is added after counting the number of 1's. Since there are s 1's in this matrix, the sum is automatically equal to s. Since the s-n positions are chosen uniformly over the n x (k-1) locations, the outcome is uniform. The rest of the R code is a brutally efficient way to translate the idea into a function. (By comparison, I brute-forced the question by suggesting a basic Metropolis algorithm.)

## Le Monde puzzle [#1120]

Posted in Books, Kids, pictures, R on January 14, 2020 by xi'an

A board game as Le weekly Monde current mathematical puzzle:

11 players in a circle and 365 tokens first owned by a single player. Players with at least two tokens can either remove one token and give another one left or move two right and one left. How quickly does the game stall, how many tokens are left, and where are they?
The run of an R simulation like

```od=function(i)(i-1)%%11+1
muv<-function(bob){
if (max(bob)>1){
i=sample(rep((1:11)[bob>1],2),1)
dud=c(0,-2,1)
if((runif(1)<.5)&(bob[i]>2))dud=c(2,-3,1)
bob[c(od(i+10),i,od(i+1))]=bob[c(od(i+10),i,od(i+1))]+dud
}
bob}```

always provides a solution

```> bob
[1] 1 0 1 1 0 1 1 0 1 0 0
```

with six ones at these locations. However the time it takes to reach this frozen configuration varies, depending on the sequence of random choices.

## postdoc at Warwick on robust SMC [call]

Posted in Kids, pictures, R, Statistics, University life on January 11, 2020 by xi'an

Here is a call for a research fellow at the University of Warwick to work with Adam Johansen and Théo Damoulas on the EPSRC and Lloyds Register Foundation funded project "Robust Scalable Sequential Monte Carlo with application to Urban Air Quality". To quote

The position will be based primarily at the Department of Statistics of the University of Warwick. The post holder will work closely in collaboration with the rest of the project team and another postdoctoral researcher to be recruited shortly to work within the Data Centric Engineering programme at the Alan Turing Institute in London. The post holder will be expected to visit the Alan Turing Institute regularly.

Candidates with strong backgrounds in the mathematical analysis of stochastic algorithms or sequential Monte Carlo methods are particularly encouraged to apply. Closing date is 19 Jan 2020.

## Metropolis in 95 characters

Posted in pictures, R, Statistics, Travel on January 2, 2020 by xi'an

Here is an R function that produces a Metropolis-Hastings sample for the univariate log-target f when the latter is defined outside as another function, using a Gaussian random walk with scale one as proposal. (Inspired from an X validated question.)
```m<-function(T,y=rnorm(1))ifelse(rep(T>1,T),
c(y*{f({z<-m(T-1)}[1])-f(y+z[1])<rexp(1)}+z[1],z),y)
```

The function is definitely not optimal, crashes for values of T larger than 580 (unless one modifies the stack size), and operates the most basic version of a Metropolis-Hastings algorithm. But as a codegolf challenge (on a late plane ride), this was a fun exercise.

## Le Monde puzzle [#1124]

Posted in Books, Kids, R on December 29, 2019 by xi'an

A prime number challenge [or rather two!] as Le weekly Monde current mathematical puzzle:

When considering the first two integers, 1 and 2, their sum is 3, a prime number. For the first four integers, 1,2,3,4, it is again possible to sum them pairwise to obtain two prime numbers, eg 3 and 7. Up to which limit is this operation feasible? And how many primes below 30,000 can be written as n^p+p^n?

The run of a brute force R simulation like

```max(apply(apply(b<-replicate(1e6,(1:n)+sample(n)),2,is_prime)[,b[1,]>2],2,prod))
```

provides a solution for the first question until n=14, when it stops. A direct explanation is that the number of prime numbers grows too slowly for all sums to be prime. And the second question gets solved by direct enumeration using again the is_prime R function from the primes package:

```[1] 1 1
[1] 1 2
[1] 1 4
[1] 2 3
[1] 3 4
```
https://deepai.org/publication/construction-methods-for-gaussoids
# Construction Methods for Gaussoids

The number of n-gaussoids is shown to be a double exponential function in n. The necessary bounds are achieved by studying construction methods for gaussoids that rely on prescribing 3-minors and encoding the resulting combinatorial constraints in a suitable transitive graph. Various special classes of gaussoids arise from restricting the allowed 3-minors.

## 1. Introduction

Gaussoids are combinatorial structures that encode independence among Gaussian random variables, similar to how matroids encode independence in linear algebra. They fall into the larger class of CI structures which are arbitrary sets of conditional independence statements. The work of Fero Matúš is in particular concerned with special CI structures such as graphoids, pseudographoids, semigraphoids, separation graphoids, etc.
In his works Fero Matúš followed the idea that conditional independence can be abstracted away from concrete random variables to yield a combinatorial theory. This should happen in the same manner as matroid theory abstracts away the coefficients from linear algebra. His work [Mat97] on minors of CI structures displays the inspiration from matroid theory very clearly. In 2007, Lněnička and Matúš defined gaussoids [LM07] of dimension $n$ as sets of symbols $(ij|K)$, denoting conditional independence statements, which satisfy the following Boolean formulas, called the gaussoid axioms:

(G1) $(ij|L) \wedge (ik|jL) \Rightarrow (ik|L) \wedge (ij|kL)$,
(G2) $(ij|kL) \wedge (ik|jL) \Rightarrow (ij|L) \wedge (ik|L)$,
(G3) $(ij|L) \wedge (ik|L) \Rightarrow (ij|kL) \wedge (ik|jL)$,
(G4) $(ij|L) \wedge (ij|kL) \Rightarrow (ik|L) \vee (jk|L)$,

for all distinct $i, j, k \in [n]$ and $L \subseteq [n] \setminus ijk$. Here and in the following, we use the efficient "Matúš set notation" where union is written as concatenation and singletons are written without curly braces. For example, $ijL$ is shorthand for $\{i\} \cup \{j\} \cup L$. A gaussoid is realizable if its elements are exactly the conditional independence statements that are valid for some $n$-variate normal distribution. Realizability was characterized in small dimensions in [LM07] and a characterization in general is open. There is no general forbidden minor characterization for realizability of gaussoids [Šim06, Sul09]. We therefore think about gaussoids as synthetic conditional independence in the sense of Felix Klein [Kle16, Chapter V]. This view is inspired by the parallels to matroid theory. The algebra and geometry of gaussoids was developed with this in mind in [BDKS17]. Gaussoids are also the singleton-transitive compositional graphoids according to [Sad17, Section 2.3]. In the present paper we view gaussoids as structured subsets of 2-faces of an $n$-cube. This readily simplifies the definition of a gaussoid, but it has several additional advantages. For example, it makes the formation of minors more effective, as this now corresponds to restricting to faces of the cube. To start, consider the usual 3-dimensional cube.
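For $n = 3$ the quantifiers in the axioms collapse ($L = \emptyset$ is forced) and there are only six symbols $(ij|K)$, so the axioms can be checked by brute force over all $2^6$ subsets. The following Python sketch (ours, not the paper's) recovers the eleven 3-gaussoids:

```python
from itertools import chain, combinations, permutations

def sq(i, j, K):
    """A square (ij|K) with the unordered pair {i,j} normalized."""
    return (frozenset((i, j)), frozenset(K))

# The six squares of the 3-cube: (ij|) and (ij|k) for each pair ij.
SQUARES = [sq(i, j, K) for i, j in [(1, 2), (1, 3), (2, 3)]
           for K in ([], [({1, 2, 3} - {i, j}).pop()])]

def is_gaussoid(A):
    for i, j, k in permutations((1, 2, 3)):
        p = sq(i, j, ()) in A      # (ij|L), L = {}
        q = sq(i, k, (j,)) in A    # (ik|jL)
        r = sq(i, k, ()) in A      # (ik|L)
        s = sq(i, j, (k,)) in A    # (ij|kL)
        t = sq(j, k, ()) in A      # (jk|L)
        if p and q and not (r and s): return False  # (G1)
        if s and q and not (p and r): return False  # (G2)
        if p and r and not (s and q): return False  # (G3)
        if p and s and not (r or t):  return False  # (G4)
    return True

subsets = chain.from_iterable(combinations(SQUARES, r) for r in range(7))
print(sum(1 for A in subsets if is_gaussoid(set(A))))  # 11
```

The eleven sets found are the empty set, the six singletons, the three belts and the full set — matching the symmetry classes E, L, U, B, F discussed in Section 4.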
A knee in the cube consists of two squares that share an edge. A belt consists of all but two opposing squares of the cube. The following combinatorial definition of a gaussoid can be confirmed (for example by examining Figure 2) to agree with the gaussoid axioms.

###### Definition 1.1.

An $n$-gaussoid is a set $G$ of 2-faces of the $n$-cube such that for any 3-face $Z$ of the $n$-cube it holds:

1. If $G$ contains a knee of $Z$, then it also contains a belt that contains that knee.
2. If $G$ contains two opposing faces of $Z$, then it also contains a belt that contains these two faces.

The dimension of the ambient cube is also the dimension of $G$. $\mathcal{G}_n$ is the set of $n$-dimensional gaussoids and $\mathcal{G}$ the set of all gaussoids. This definition is illustrated in Figure 1. As with the gaussoid axioms, this definition applies certain closure rules in every 3-face of the $n$-cube, but whereas only the symmetric group acts on the axes of the cube in the gaussoid axioms, the group acting on the two pictures in Figure 1 is the full symmetry group of the 3-cube. This bigger group conflates the first three axioms into the first picture. The gaussoid axioms and also Definition 1.1 only work with 3-cubes. This locality can be expressed as in Lemma 3.3: for any $3 \le k \le n$, being an $n$-gaussoid is equivalent to all restrictions to $k$-faces being $k$-gaussoids. The aim of this work is to explore gaussoid puzzling, the reversal of this idea, that is, constructing $n$-gaussoids by prescribing their 3-minors. The implementation hinges on an understanding of how exactly the 3-faces of the $n$-cube intersect, because these intersections are obstructions to the free specification of 3-minors. In Section 3 we encode these obstructions in a graph and then Brooks' theorem gives access to large independent sets, where gaussoids can be freely placed. This yields a good estimate of the number of gaussoids in Theorem 3.12. In Section 4 we explore classes of special gaussoids that arise by restricting the puzzling of 3-gaussoids to subsets of the possibilities.
Several of these classes have nice interpretations and can be matched to combinatorial objects.

### Acknowledgement

The authors are supported by the Deutsche Forschungsgemeinschaft (314838170, GRK 2297, "MathCoRe").

## 2. The cube

Consider the face lattice of the $n$-cube. This lattice contains the empty face, the unique face of dimension $-1$. To specify a face of non-negative dimension $k$, one needs to specify the $k$ dimensions in which the face extends, and then the location of the face in the remaining dimensions. We employ two natural ways to work with faces. The first is string notation. In this notation a face is an element of $\{0, 1, *\}^n$ where the $*$s indicate dimensions in which the face extends and the remaining binary string determines the location; a $1$ at position $i$ means that the face is translated along the $i$-th axis inside the cube. This string notation naturally extends the binary string notation for the vertices of the $n$-cube: if $f \in \{0, 1, *\}^n$, then its vertices are

$$\{a \in \{0,1\}^n : a_i = f_i \text{ whenever } f_i \neq *\}.$$

The second choice is set notation. In this notation, a face of dimension $k$ is specified by two disjoint sets: the $k$ dimensions in which the face extends and the positions in which it is translated. The set of $k$-faces of the $n$-cube is denoted $F^n_k$. As in [BDKS17], the squares of the $n$-cube are denoted by $A_n$. Of special interest in this article are also the 3-faces, i.e. the cubes $F^n_3$. The constructions in Section 3 based on Lemma 3.3 frequently exploit the following

###### Fact.

For $k \le m$, a $k$-face intersects an $m$-face in a proper face of itself or is already included in it. In particular for $k = 3$, if a cube shares more than a single square with an $m$-face, then it is already contained in it.

Minors are important in matroid theory and gaussoid theory. When a simple matroid is represented as the geometric lattice of its flats, a minor corresponds to an interval of the lattice [Wel10, Theorem 4.4.3], which is again a geometric lattice. For gaussoid minors the lattice is replaced by the set of squares in the hypercube and the lattice intervals are replaced by hypercube faces. Minors for arbitrary CI structures have been studied for example in [Mat97].
There, a minor of a CI structure $A$ is obtained by choosing two disjoint sets and performing restriction followed by contraction, which are in symbols:

$$\operatorname{contr}_L A = \{(ij|K) \in A_L : (ij|K([n] \setminus L)) \in A\} \subseteq A_L,$$

$$\operatorname{restr}_L A = A \cap A_L \subseteq A_L.$$

In [BDKS17], minors were also defined specifically for gaussoids using statistical terminology with an emphasis on the parallels to matroid theory. A minor is every set of squares arising from a gaussoid via any sequence of marginalization and conditioning:

$$\operatorname{marg}_L A = \{(ij|K) \in A : L \subseteq [n] \setminus ijK\} \subseteq A_{[n] \setminus L},$$

$$\operatorname{cond}_L A = \{(ij|K) \in A_{[n] \setminus L} : (ij|KL) \in A\} \subseteq A_{[n] \setminus L}.$$

These operations are dual to the ones defined by Matúš. Furthermore, either operation can be the identity, and the two sets in Matúš' definition of minor can be decoupled. Thus both notions of minor coincide. Our aim is to provide a geometric intuition for the act of taking a gaussoid minor. A face of the $n$-cube is canonically isomorphic to a lower-dimensional cube by deleting all coordinates in which the face does not extend. This deletion is a lattice isomorphism onto the face lattice of a lower-dimensional cube. We can interpret taking the minor as an operation in the hypercube.

###### Proposition 2.1.

For a face with extension set $I$ and translation set $K$, the squares of $A$ contained in that face, after deletion of the coordinates outside $I$, are exactly $\operatorname{cond}_K \operatorname{marg}_{[n] \setminus (I \cup K)} A$.

###### Proof.

Take . Then and can be seen as subsets of and they satisfy and . From this it is immediate that and . Furthermore, , hence and . In the other direction, suppose that and let be its preimage under the deletion. Then and it follows , and also because . Thus the square decomposes into parts where naturally . This proves the claim. ∎

Proposition 2.1 compactly encodes the definitions of minor. The following definition introduces notation reflecting this as well as an opposite embedding, which mounts a set of squares from the $k$-cube into a $k$-dimensional face of a higher hypercube.

###### Definition 2.2.

(1) For a set of squares $A$ and a face $F$, the $F$-minor of $A$ is the image under the deletion isomorphism of the squares of $A$ contained in $F$. A $k$-minor is an $F$-minor with $F$ a $k$-face. (2) For a set of squares and a face $F$, the embedding into $F$ is the preimage under the deletion isomorphism.

## 3.
Gaussoid puzzles

Several theorems in matroid theory concern the (impossibility of a) characterization of classes of matroids in terms of forbidden and compulsory minors. For CI structures such as gaussoids the definitions read as follows.

###### Definition 3.1.

1. A class of sets of squares is minor-closed if with each of its members all minors of that member belong to it.
2. A set of squares is a forbidden minor for a minor-closed class if it is minimal with the property that it does not belong to the class, in the sense that all its proper minors do belong to the class.
3. If there is a forbidden $k$-minor for some $k$, then all non-forbidden $k$-minors are called compulsory $k$-minors for the class.

It is easy to see that gaussoids are minor-closed, i.e. any $k$-minor of an $n$-gaussoid is always a $k$-gaussoid. But even more is true: given any set of squares in the $n$-cube, if all of its $k$-minors, for any fixed $3 \le k \le n$, are $k$-gaussoids, then the whole set is an $n$-gaussoid. This claim is proved in Lemma 3.3. The present section uses this property to construct gaussoids by prescribing their 3-minors. Section 4 investigates subclasses of gaussoids which have the same anatomy. We formalize this property in

###### Definition 3.2.

A class of sets of squares stratified by dimension, i.e. $\mathcal{C} = \bigcup_n \mathcal{C}_n$, has a puzzle property if it is minor-closed and its $n$-th stratum is generated via embeddings from the strata below $n$, i.e. if for some set of squares in the $n$-cube all its $k$-minors, $k < n$, are in $\mathcal{C}$, then that set already belongs to $\mathcal{C}_n$. The lowest stratum is the basis of $\mathcal{C}$ and the puzzle property is based in the dimension of the basis.

###### Lemma 3.3.

The set of gaussoids has a puzzle property based in dimension 3, whose basis are the eleven 3-gaussoids.

###### Proof.

Let $A$ be a set of squares of the $n$-cube and $3 \le k \le n$. We show that $A$ is an $n$-gaussoid if and only if every $k$-minor of $A$ is a $k$-gaussoid. First consider the case $k = 3$. The gaussoid axioms are quantified over arbitrary cubes together with an order on the three axes, and each axiom refers to squares inside the cube only. Confined to this cube, the axioms state precisely that this 3-minor is a 3-gaussoid. The case of $k > 3$ is reduced to the statement for $k = 3$.
Indeed, all 3-minors of $A$ are gaussoids if and only if all 3-minors of $k$-minors of $A$ are gaussoids, because those two collections of minors both arise from the same set of cubes of the $n$-cube. ∎

Turning Definition 3.2 upside down, the construction of an $n$-gaussoid can be seen as a high-dimensional jigsaw puzzle. The puzzle pieces are lower-dimensional gaussoids which are to be embedded into faces of the $n$-cube. The difficulty comes from the fact that every square is shared by $n-2$ 3-faces. The minors must be chosen so that all of them agree on whether a shared square is an element of the $n$-gaussoid under construction or not. The incidence structure of 3-faces in the $n$-cube is important. We study it via the following graph.

###### Definition 3.4.

Let $Q(n, k, p, q)$, for suitable parameters, be the undirected simple graph with vertex set $F^n_k$ and an edge between distinct $d, f \in F^n_k$ if and only if there is a $p$-face $z$ such that $\dim(z \cap d) \ge q$ and $\dim(z \cap f) \ge q$.

The idea behind this definition is that for suitable choices of $p$ and $q$, the faces indexed by an independent set in these graphs will be just far enough away from each other in the $n$-cube to allow free puzzling of gaussoid pieces without one minor choice creating constraints for other minors.

###### Theorem 3.5.

The graph $Q(n,k,p,q)$ is transitive, hence regular. It is complete if and only if $p \ge n - k + q$. The degree of any vertex can be calculated as follows:

$$\deg Q(n,k,p,q) = -1 + \sum_{(m,j)} \binom{k}{j}\, 2^{k-j} \binom{n-k}{k-j} \binom{n-2k+j}{m},$$

where the sum extends over pairs $(m, j)$ which satisfy the feasibility and connectivity conditions

$$(\dagger)\qquad n - 2k + j \ge m \;\wedge\; p \ge m + 2q - \min\{q, j\}.$$

###### Proof.

The symmetry group of the $n$-cube acts on the cube as automorphisms of the face lattice. The group action is transitive on $k$-faces for any $k$ and respects meet and join. Therefore the group acts transitively on the graph $Q(n,k,p,q)$. The characterization of completeness rests on Lemma 3.6. Using the gap function $\rho_q$ defined there, it is shown that $\rho_q(d,f) \le p$ is equivalent to the adjacency of $d$ and $f$ in $Q(n,k,p,q)$, and that if a pair of faces has smaller gap than an adjacent pair, then it is adjacent as well. Since $Q(n,k,p,q)$ is regular, it is complete if and only if some vertex is adjacent to all others.
For that to happen, the vertex must be adjacent to one which has the largest gap to it. As shown in the lemma, the maximum of the gap function is $n - k + q$ and hence completeness is equivalent to $p \ge n - k + q$. The exact degree also follows from Lemma 3.6. Fix any vertex $d$ of $Q(n,k,p,q)$. By regularity it suffices to count the adjacent vertices of $d$. We subdivide vertices $f$ according to two parameters: $m$ is the size of a disagreement between $d$ and $f$ and $j$ is the number of common dimensions of $d$ and $f$. A priori, $m$ ranges in $\{0, \ldots, n\}$ and $j$ ranges in $\{0, \ldots, k\}$, but not all combinations allow $f$ to be a $k$-face adjacent to $d$. First, we determine the pairs $(m, j)$ for which an adjacent $k$-face exists and then count how many of them exist for fixed parameters. Let $f$ be such a face. It must hold that $j \le k$, since $d$ and $f$ are $k$-faces. Assuming this, $f$ can be constructed if and only if the dimensions leave enough space to create the prescribed disagreement of size $m$. As an inequality this is $m \le n - 2k + j$. Together with $j \le k$, this inequality already entails the condition imposed by the choice of the parameters. Thus it is sufficient to require $n - 2k + j \ge m$, which is the first condition in $(\dagger)$. Given a $k$-face $f$ with parameters $m$ and $j$, the existence of an edge between $d$ and $f$ in $Q(n,k,p,q)$ imposes the condition of Lemma 3.6 (1), which is the right half of $(\dagger)$. As for the counting, let $d$ be a fixed $k$-face and let $(m, j)$ satisfy $(\dagger)$. We count the $k$-faces $f$ with parameters $m$ and $j$. There are $\binom{k}{j}$ ways to place the common stars of $f$ among the $k$ stars of $d$. On the remaining $k - j$ star positions of $d$, there are independent choices of $f$'s entry from $\{0, 1\}$, giving $2^{k-j}$. The choices so far fix $f$ on the star positions of $d$. There are now $\binom{n-k}{k-j}$ choices for the remaining $*$s of $f$ among the non-star positions of $d$. Then the star pattern of $f$ is fixed. Now to finish $f$, we may only place $0$s and $1$s in positions where $d$ has only $0$s and $1$s as well. Among the $n - 2k + j$ remaining positions, a set of size $m$ must be chosen, where $f$ is already determined by the condition that it differs from $d$. On the remaining positions, $f$ is determined by not differing from $d$. The feasibility of all the choices enumerated so far is guaranteed by $(\dagger)$. The tally is

$$\sum_{(m,j)} \binom{k}{j}\, 2^{k-j} \binom{n-k}{k-j} \binom{n-2k+j}{m},$$

the sum again extending over pairs satisfying $(\dagger)$. Since $d$ is not adjacent to itself, which is uniquely described by the feasible parameters $m = 0$ and $j = k$, subtracting $1$ concludes the proof. ∎

###### Lemma 3.6.
Let $d$, $f$ be $k$-faces and define the gap $\rho_q(d, f) = m + 2q - \min\{q, j\}$, with $m$ and $j$ as in the proof of Theorem 3.5. The following hold:

1. $\rho_q(d, f) \le p$ if and only if $d$ and $f$ are adjacent in $Q(n,k,p,q)$,
2. the range of $\rho_q$ is $\{q, \ldots, n - k + q\}$,
3. $\rho_q$ is strictly isotone with respect to $q$, i.e. $\rho_{q+1}(d, f) > \rho_q(d, f)$,
4. for $k$-faces $d'$, $f'$ with $\rho_q(d', f') \le \rho_q(d, f)$, if $d$ and $f$ are adjacent in $Q(n,k,p,q)$, then so are $d'$ and $f'$.

###### Proof.

Given two $k$-faces $d$ and $f$, the ground set splits into three sets: (i) a set of cardinality $m$ where both have $0$ and $1$ symbols only but differ, (ii) a set of cardinality $j$ of shared $*$ symbols, and (iii) everything else, i.e. positions where the $0$ and $1$ patterns agree or where a $*$ is in one face and a $0$ or $1$ in the other. In order to connect two $k$-faces in $Q(n,k,p,q)$, there needs to be a $p$-face which intersects each of them in dimension at least $q$. Such a face has to cover the set (i) of size $m$ with $*$s, as otherwise it will not intersect both faces. Conversely, once (i) is covered, a $q$-dimensional intersection with both faces is ensured by placing $*$s and $0$s and $1$s appropriately. To achieve a $q$-dimensional intersection, $q$ many $*$s have to be placed on $d$ and $f$ each. By using the shared $*$s, one needs at least $2q - \min\{q, j\}$ further $*$s to construct a connecting face. Thus $m + 2q - \min\{q, j\}$ is the minimum dimension necessary to connect $d$ and $f$ in this way. This proves claim (1). It is clear that $\rho_q$ is minimal when $m$ is minimal and $j$ is maximal. This can be achieved simultaneously by choosing $f = d$, and there $\rho_q = q$. Now consider the opposing face of $d$. The gap to it is $(n-k) + 2q - \min\{q, k\} = n - k + q$, where in particular $m = n - k$ is maximal. Increasing this value would require reducing $j$ since $m$ is already maximal. Un-sharing $*$s with $d$ consumes positions inside the block of $0$s and $1$s of size $n - k$, which reduces $m$ by an equal amount. Hence $n - k + q$ is maximal. Furthermore, by varying $f$, all values in the range can be attained, proving claim (2). Claim (3) follows from a straightforward calculation:

$$\rho_{q+1}(d,f) - \rho_q(d,f) = 2 - \bigl(\min\{q+1, j\} - \min\{q, j\}\bigr) = \begin{cases} 2, & j \le q,\\ 1, & j \ge q+1. \end{cases}$$

In the situation of claim (4), since $d$ and $f$ are adjacent in $Q(n,k,p,q)$, we have $\rho_q(d,f) \le p$ by (1). Applying this property in reverse proves the claim. ∎

###### Corollary 3.7.

1. $Q(n,3,3,2)$ is complete for $n \le 4$. Otherwise its degree is $12(n-3)(n-4) + 7(n-3)$.
2. $Q(n,3,2,2)$ is complete for $n \le 3$. Otherwise its degree is $6(n-3)$. ∎

###### Remark 3.8.
For the theory of gaussoids, the cases $(p, q) = (3, 2)$ and $(p, q) = (2, 2)$ are relevant. We consider it an interesting problem to study the growth of the degree formula for other parameters. Certainly the graph can be complete, where the degree is as large as $|F^n_k| - 1$. To construct large independent sets, one wants smaller degrees. It is proved below that a maximal independent set in $Q(n,3,3,2)$ has cardinality in $\Theta(n\,2^n)$, of which one inequality follows from the degree formula.

###### Proposition 3.9.

Let $I$ be an independent set in $Q(n,3,3,2)$, then the following inequality holds: $11^{|I|} \le |\mathcal{G}_n|$.

###### Proof.

Let $d \neq f \in I$. Since $I$ is independent, there is no 3-cube sharing a square with $d$ and with $f$. In particular, $d$ and $f$ themselves share no square. Thus an assignment of 3-gaussoids to the cubes in $I$ lifts to a well-defined set of squares $A$. The map is injective. To see that $A$ is a gaussoid, we examine its 3-minors. Let a 3-cube $z$ be arbitrary. In case $z$ is fully contained in some element of $I$, then the minor at $z$ is clearly the assigned 3-gaussoid. Otherwise $z$ can share at most one square with any face in $I$. If it shares no square with any element of $I$, then the minor at $z$ is empty, hence a gaussoid. If it shares a square with some face in $I$, it cannot share a square with any other element of $I$ because $I$ is an independent set in $Q(n,3,3,2)$. In this case, the minor at $z$ is a singleton or empty and hence a gaussoid. ∎

###### Proposition 3.10.

Let $I$ be an independent set in $Q(n,3,2,2)$ and $s$ the maximum size of a set of mutually range-disjoint injections of the 3-gaussoids into the non-gaussoids of the 3-cube. Then $|\mathcal{G}_n| \le 2^{|A_n|} \cdot (1+s)^{-|I|}$.

###### Proof.

The proof is analogous to Proposition 3.9 but uses the independent set to perturb any gaussoid injectively into non-gaussoids. Again, since the cubes in $I$ pairwise share no square and $I$ is independent, an assignment of subsets of squares to the cubes in $I$ lifts uniquely to a subset of $A_n$. Let $\phi_1, \ldots, \phi_s$ be a set of range-disjoint injections as in the claim. To each gaussoid and each choice of which injection (if any) to apply at each cube in $I$, associate the set obtained by perturbing the minors accordingly. Because the ranges of the $\phi_i$ are disjoint, this map is injective. None of the perturbed sets is a gaussoid since any perturbed cube certifies a non-gaussoid 3-minor. ∎

###### Remark 3.11.

The proofs of Propositions 3.9 and 3.10 exploit two properties of the class of gaussoids: (1) it has a puzzle property, and (2) the empty set and all singletons are in its basis.
The same technique does not work for realizable gaussoids because they lack property (1) and not for graphical gaussoids (see Section 4) because they lack property (2). Indeed their numbers can be shown to be single exponential. For realizable gaussoids, this follows from Nelson's recent breakthrough: if a gaussoid is realizable with a positive-definite covariance matrix, then that matrix both defines a vector matroid and identifies the gaussoid. By [Nel18, Theorem 1.1] there are only exponentially many realizable matroids and thus realizable gaussoids. Nelson's bound features a cubic polynomial in the exponent, while there are certainly realizable gaussoids coming from graphical models. To get explicit bounds we apply the propositions to the graphs $Q(n,3,3,2)$ and $Q(n,3,2,2)$. To find suitable independent sets in them we use Brooks' Theorem [Lov75] and the degree bounds from Corollary 3.7. Since the graphs are connected, have degree at least 3 but are not complete, there exists a proper coloring with as many colors as the degree, and we can pick a color class as an independent set $I$. Its size is at least that of an average color class:

$$\frac{|F^n_3|}{\deg Q(n,3,3,2)} \ge \frac{\frac{n(n-1)(n-2)}{6} \cdot 2^{n-3}}{12(n-1)(n-2)} = \frac{n}{6^2}\, 2^{n-4} = \frac{n}{9}\, 2^{n-6}.$$

For $Q(n,3,2,2)$, we find analogously

$$\frac{|F^n_3|}{\deg Q(n,3,2,2)} \ge \frac{\frac{n(n-1)(n-2)}{6} \cdot 2^{n-3}}{6(n-2)} = \frac{n(n-1)}{6^2}\, 2^{n-3} = \frac{n(n-1)}{9}\, 2^{n-5}.$$

Proposition 3.9 now shows, using $11 \ge 2^3$ and the first bound on $|I|$, that there are at least $2^{\frac{1}{3} n 2^{n-6}}$ $n$-gaussoids. Similarly, Proposition 3.10 together with the second bound gives an upper bound on the ratio of $n$-gaussoids of $2^{-\frac{4}{9} n(n-1) 2^{n-6}}$. We have proved

###### Theorem 3.12.

For $n$ large enough, the number of $n$-gaussoids is bounded by

$$2^{\frac{1}{3} n 2^{n-6}} \le |\mathcal{G}_n| \le 2^{|A_n|} \cdot 2^{-\frac{4}{9} n(n-1) 2^{n-6}}.$$

###### Remark 3.13.

A simple way to obtain a weaker double exponential lower bound for the number of gaussoids was suggested to us by Peter Nelson, following a matroid construction of Ingleton and Piff. Let $S$ be a suitable set of $r$-subsets of $[n]$ for some $r$. Every $T \in S$ defines a 2-face of the $n$-cube, where the pair of the square are the minimal elements of $T$. Any subset of the resulting set of squares is a gaussoid. The axioms (G1) and (G4) are satisfied because their premises contain sets of different sizes.
The axioms (G2) and (G3) are satisfied because their premises correspond to the same subset and thus only one of them can be in the chosen set. With a suitable choice of $r$ this yields at least double exponentially many gaussoids. Substituting small values of $n$ in Theorem 3.12 gives an interval for the absolute number of $n$-gaussoids. We conclude this section by showing that this order of lower bound is the best that the independent set construction in $Q(n,3,3,2)$ can do. The independence number $\alpha(G)$ of a graph $G$ is the maximal size of an independent set in $G$. Similarly, the clique number $\omega(G)$ is the maximal size of a clique in $G$. Since $Q(n,3,3,2)$ is transitive, the following inequality holds [GR01, Lemma 7.2.2]:

$$\alpha(Q(n,3,3,2)) \le \frac{|F^n_3|}{\omega(Q(n,3,3,2))}.$$

Since $|F^n_3| = \binom{n}{3} 2^{n-3}$, it suffices to find a clique of quadratic size in every $Q(n,3,3,2)$. Take the set of cubes extending in dimension 1 and two further dimensions, located at the all-zero string. This set has cardinality $\binom{n-1}{2}$ and any two elements $d$, $f$ in it are connected by an edge in $Q(n,3,3,2)$, since a 3-face extending in dimension 1 and in one further dimension of each of them intersects both in at least a square.

## 4. Special gaussoids

Because of their puzzle property, gaussoids are the largest class of CI structures whose 3-minors are 3-gaussoids. The base case of this definition are the eleven 3-gaussoids arising from covariance matrices of Gaussian distributions. The 3-gaussoids split into five symmetry classes modulo the symmetry group of the cube, which we denote by letters E, L, U, B, and F. They are depicted in Figure 2. The special invariant types of gaussoids in this section arise from choosing subsets of these five symmetry classes to base a puzzle property on. Each of the 32 sets of bases can be converted into axioms in the 3-cube similar to the gaussoid axioms (G1)–(G4). SAT solvers [Thu06, TS16] were used on the resulting Boolean formulas to enumerate or count these classes. The listings can be found on our supplementary website gaussoids.de. For nine classes an entry in the OEIS [OEI19] could be found. Table 1 is the main result of this section. It summarizes the different types of gaussoids that arise from the different bases. The classes E, B and F are themselves closed under duality, while L and U are interchanged by it.
It follows that one of the 32 classes is invariant under duality if it contains either none of L and U or both of them. On the remaining classes, duality acts by swapping L with U. The combinatorial properties of the classes, e.g. the size, are unaffected by this action, hence LB and UB are conflated to {L,U}B in Table 1.

### 4.1. Fast-growing gaussoids

By Remark 3.11, the construction of doubly exponentially many members of a class of gaussoids requires that the class has a puzzle property and that its basis includes E, L and U. This explains the rapid growth of all four classes of this type.

### 4.2. Incompatible minors

As a consequence of Definition 3.2, if there is no gaussoid of some dimension in a class, there are no gaussoids of any larger dimension in the class. Similarly, if the class contains only the empty or full gaussoid in some dimension, the members of every larger dimension are the empty or full gaussoid as well. Hence computations in small dimension suffice to explain these classes. Despite their simplicity, each of them provides higher compatibility axioms. For example the annihilation of LUB in small dimension implies that every minor of a gaussoid contains an empty or a full minor. Or: a graphical gaussoid with no belts is full or contains an empty minor.

### 4.3. Graphical gaussoids

Each undirected simple graph defines a CI structure, where two vertices $i$ and $j$ are separated by a set $K$ if every path between $i$ and $j$ intersects $K$. These are the separation graphoids of [Mat97]. They fulfill a localized version of the global Markov property. According to [LM07, Remark 2], separation graphoids are exactly the gaussoids satisfying the ascension axiom:

$$(\mathrm{A})\qquad (ij|L) \Rightarrow (ij|kL), \qquad \forall\, i, j, k \in [n],\ L \subseteq [n] \setminus ijk.$$

Therefore we refer to them as ascending gaussoids. The operation taking a graph to its separation graphoid is a bijection whose inverse recovers the graph via its edges: $ij$ is an edge if and only if no statement $(ij|K)$ is contained in the structure. Any gaussoid in this section arises in this way from some undirected simple graph.
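Vertex separation can be computed by brute force for small graphs — a sketch in pure Python (ours; the encoding of a statement $(ij|K)$ as a triple is our choice, not the paper's):

```python
from itertools import combinations

def separations(n, edges):
    """All statements (i, j, K) such that K separates i and j in the graph on [n]."""
    adj = {v: set() for v in range(1, n + 1)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)

    def separated(i, j, K):
        # Depth-first search from i, with the vertices of K removed.
        seen, stack = {i} | set(K), [i]
        while stack:
            for w in adj[stack.pop()] - seen:
                if w == j:
                    return False
                seen.add(w); stack.append(w)
        return True

    out = set()
    for i, j in combinations(range(1, n + 1), 2):
        rest = [v for v in range(1, n + 1) if v not in (i, j)]
        for r in range(len(rest) + 1):
            for K in combinations(rest, r):
                if separated(i, j, K):
                    out.add((i, j, frozenset(K)))
    return out

# The 4-cycle 1-2-3-4-1: only (13|24) and (24|13) hold.
print(sorted((i, j, sorted(K)) for i, j, K in
             separations(4, [(1, 2), (2, 3), (3, 4), (4, 1)])))
# [(1, 3, [2, 4]), (2, 4, [1, 3])]
```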
Since (A) uses only the squares of a single 3-face of the $n$-cube, being an ascending gaussoid is a puzzle property based in dimension 3. Its basis are the ascending 3-gaussoids. This was shown by Matúš [Mat97, Proposition 2] and in our terminology it can be restated as follows

###### Lemma 4.1.

A gaussoid is ascending if and only if L is a forbidden minor. ∎

This shows that EUBF are the ascending gaussoids. Their duals are ELBF and it is easy to see that their axiomatization replaces (A) by the descension axiom

$$(\mathrm{D})\qquad (ij|kL) \Rightarrow (ij|L), \qquad \forall\, i, j, k \in [n],\ L \subseteq [n] \setminus ijk.$$

EUBF-gaussoids arise from undirected graphs via vertex separation, i.e. $(ij|K)$ is contained if and only if $i$ and $j$ are in different connected components of the graph with $K$ removed. Their duals contain $(ij|K)$ if and only if $i$ and $j$ are in different connected components in the induced subgraph on $ijK$. Therefore we call these gaussoids graphical. For our classification purposes it is sufficient to study the "Upper" half of dual pairs. Our technique to understand EUBF and its subclasses has already been used in [Mat97]: since the presence of an edge in the graph is encoded by the non-containment of statements, the compulsory 3-minors prescribe induced subgraphs on vertex triples. In the opposite direction, however, the induced 3-subgraphs of a graph do not in general reveal the types of all minors in its corresponding gaussoid.

###### Example 4.2.

Consider the 4-cycle, corresponding to the gaussoid $\{(13|24), (24|13)\}$. Its 3-minors are exclusively E and U. The U minors arise precisely in the cubes

$$\{1{*}{*}{*}\},\ \{{*}1{*}{*}\},\ \{{*}{*}1{*}\},\ \{{*}{*}{*}1\}.$$

All other 3-minors are E. This means that the 4-cycle is contained in EUBF, EUB, and EU. To match with Table 1, check that the 4-cycle has no induced 3-cycle, and that it corresponds to the partition $\{\{1,3\},\{2,4\}\}$ and the involution $(13)(24)$. This graph shows that the class of a gaussoid cannot be determined by looking only at the induced subgraphs on triples. All 3-minors observable from induced subgraphs are U, but the smallest class to which this gaussoid belongs is EU.

###### Example 4.3.
Consider the star with interior node 1 and leaves 2, 3, 4. It corresponds to the gaussoid

$$\{(23|1), (23|14), (24|1), (24|13), (34|1), (34|12)\}.$$

Because the right-hand side of every element of the gaussoid contains 1, this gaussoid has the minor F in the cube $\{1{*}{*}{*}\}$, E in the opposite face and U everywhere else. We now establish relationships of subclasses of EUBF with known combinatorial objects. For some classes the graph itself is more convenient, for others it is the complement graph which is more natural. Figure 3 shows the complement graphs corresponding to E, U, B and F and is useful to keep in mind for the proof of Theorem 4.4.

###### Theorem 4.4.

The gaussoids in the class EUBF are in bijection with the simple undirected graphs on $n$ vertices. The subclasses distribute as follows:

1. EUB contains exactly the gaussoids such that the complement graph is triangle-free.
2. UBF contains exactly the gaussoids such that each connected component of the graph is a path.
3. EUF contains exactly the gaussoids such that each connected component in the complement graph is a clique, and hence corresponds to partitions of the vertex set $[n]$.
4. EU is EUF where additionally every connected component of the complement graph has at most two vertices.

###### Proof.

The first statement summarizes the discussion in the beginning of this section. (1) The complement graphs for E, U and B are free of triangles, as seen in Figure 3. If conversely the complement graph is triangle-free, then the gaussoid does not have F among its minors. By ascension, the cardinality of the minors is monotone in the conditioning sets and thus no minor of
http://blog.jpolak.org/?p=1848&replytocom=172174
# First-order characterisations of free and flat…projective?

Here is an interesting question involving free, projective, and flat modules that I will leave to the readers of this blog for now. First, consider free modules. If $R$ is a ring, then every $R$-module is free if and only if $R$ is a division ring. The property of $R$ being a division ring can be expressed in terms of first-order logic in the language of rings: $\forall x[x\not=0 \rightarrow \exists y(xy = 1)]$.

The meat of this first-order statement is the equation $xy = 1$. Now, multiply by $x$ on the right to get the equation $xyx = x$. Now we can put this in a first-order sentence: $\forall x\exists y[xyx = x]$. Notice how we removed the condition $x\not=0$ from this one. That's because $x=0$ satisfies $xyx = x$ for any $y$ in all rings. Rings that model $\forall x\exists y[xyx = x]$ are called von Neumann regular. More importantly, these are exactly the rings for which every $R$-module is flat.

By weakening the statement that $R$ is a division ring, we got a statement equivalent to the statement that every $R$-module is flat. One might wonder: where did the projective modules go? Is there a first-order sentence (or set of sentences perhaps) in the language of rings whose models are exactly those rings $R$ for which every $R$-module is projective? Diagrammatically: Can we replace the question mark with a first-order sentence, or a set of them? My initial thoughts are no because of ultraproducts, but I have not yet come up with a rigorous argument.

• Erik Crevier says: Rings over which every module is projective are exactly semi-simple rings. For each n, you can say "R has n pairwise distinct commuting idempotents" with a first order sentence P_n. Let R_n be an n-fold product of fields. R_n satisfies P_n, so an ultraproduct of the R_n's will have arbitrarily large families of commuting idempotents. Such a ring cannot be semi-simple by Artin-Wedderburn.
Each R_n certainly is semi-simple, so it follows that this cannot be characterized by first order sentences. • Erik Crevier says: Sorry, that condition should be "n non-zero central idempotents which sum to 1", the point being that such families govern non-trivial direct product decompositions into n factors. • Jason Polak says: Very good! Thanks for explaining this.
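Back to the post's starting point: for finite rings the sentence $\forall x\exists y[xyx = x]$ can be checked by brute force. A quick sketch for $\mathbb{Z}/n\mathbb{Z}$, where von Neumann regularity holds exactly for squarefree $n$ (the ring is then a product of fields):

```python
def is_von_neumann_regular(n):
    """Check the first-order sentence ∀x∃y: x*y*x ≡ x in Z/nZ."""
    return all(any(x * y * x % n == x for y in range(n)) for x in range(n))

# The moduli n for which Z/nZ is von Neumann regular are the squarefree ones:
print([n for n in range(2, 20) if is_von_neumann_regular(n)])
# [2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19]
```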
http://mathoverflow.net/feeds/question/111708
What can be the dimension of a pointless smooth proper Z-scheme? - MathOverflow

Asked by Will Sawin, 2012-11-07

> What is the smallest dimension $d$ such that there is a smooth proper morphism $X \to \operatorname{Spec} \mathbb Z$ of relative dimension $d$, with $X$ nonempty, without a section?

Of course, there must also be such a morphism in every larger dimension - just take $X \times \mathbb P^n$.

As described in [this excellent question](http://mathoverflow.net/questions/9576/smooth-proper-scheme-over-z), $d \geq 2$. As described in the accepted answer, $d \leq 6$. We can improve that to $d \leq 5$ by noting that the E7 lattice also produces a nonsingular hypersurface, because the unique potential singular point over $\mathbb F_2$ fails to lie on the hypersurface.

But that still leaves a lot of uncertainty! Can anyone clarify?

Here is an auxiliary question, which I think might prove easier to answer:

> What is the smallest dimension of an $X$ satisfying those conditions that is also the flag variety of a reductive group?
https://agenda.infn.it/event/28874/contributions/168826/
# ICHEP 2022

Jul 6 – 13, 2022, Bologna, Italy, Europe/Rome timezone

## The Heavy Flavor Production Fraction Reweighting Procedure in ATLAS

Jul 8, 2022, 7:05 PM, 1h 25m, Palazzo della Cultura e dei Congressi, Bologna, Italy

Poster: Operation, Performance and Upgrade (Incl. HL-LHC) of Present Detectors

### Speaker

Ilia Kalaitzidou (University of Freiburg)

### Description

The rates at which b- and c-quarks hadronize into different hadron species (i.e. the HF production fractions) may vary among MC shower simulations such as Pythia, Sherpa, and Herwig. Furthermore, the flavor tagging efficiencies in ATLAS have been found to depend on the hadron species inside a jet. For example, the flavor tagging efficiency for c-jets is the largest for D+ mesons and the lowest for charm baryons. Because of this, the flavor tagging efficiency in MC depends on the MC shower software and needs to be corrected on an individual basis. The ATLAS Collaboration developed a method of reweighting the HF production fractions to a common world average, which largely eliminates the difference in the flavor tagging efficiency between different MC samples. Moreover, the experimental uncertainties in the HF production fractions (typically 2-3% relative uncertainty) can also be applied with the same reweighting procedure, which gives rise to a common way of estimating these systematic uncertainties in ATLAS.

In-person participation: Yes

### Primary author

Ilia Kalaitzidou (University of Freiburg)
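The reweighting step described in the abstract is simple enough to sketch. The snippet below is illustrative only, not ATLAS software: the species labels and every fraction value are made-up placeholders standing in for the generator-level fractions and the world-average targets, and each simulated HF jet receives the weight f_target / f_MC for its hadron species.

```python
# Hypothetical charm-hadron species and fractions: the names and numbers
# below are placeholders, not ATLAS measurements.
MC_FRACTIONS     = {"D+": 0.25, "D0": 0.55, "Ds": 0.10, "c-baryon": 0.10}
TARGET_FRACTIONS = {"D+": 0.22, "D0": 0.59, "Ds": 0.08, "c-baryon": 0.11}

def production_fraction_weight(species: str) -> float:
    """Per-jet weight mapping the MC species mix onto the target mix."""
    return TARGET_FRACTIONS[species] / MC_FRACTIONS[species]

def reweighted_fractions(jets):
    """Species fractions of a sample of HF jets after reweighting."""
    totals = {s: 0.0 for s in MC_FRACTIONS}
    for species in jets:
        totals[species] += production_fraction_weight(species)
    norm = sum(totals.values())
    return {s: t / norm for s, t in totals.items()}
```

A sample generated exactly at the MC fractions comes out of `reweighted_fractions` at the target fractions, which is the whole point of the procedure; varying the target fractions within their quoted uncertainties then propagates those uncertainties through the same machinery.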
https://math.stackexchange.com/questions/351333/evaluation-of-a-continued-fraction?noredirect=1
# Evaluation of a continued fraction Puzzle question... I know how to solve it, and will post my solution if needed; but those who wish may participate in the spirit of coming up with elegant solutions rather than trying to teach me how to solve it. [paraphrased from Lone Learner] Prove (or disprove) the following equality: $$1+\cfrac1{1+\cfrac2{1+\cfrac3{1+\ddots}}}=\frac1{\displaystyle e^{1/2}\sqrt{\frac{\pi}{2}}\;\mathrm{erfc}\left(\frac1{\sqrt2}\right)}\approx 1.525135276\cdots$$ (taken from Closed form for a pair of continued fractions) • This is recreational? :D – J. M. is a poor mathematician Apr 4 '13 at 17:42 • More fun than a barrel of monkeys. – GEdgar Apr 4 '13 at 17:45 • Erfc. I’ll take the monkeys, please. – Brian M. Scott Apr 4 '13 at 18:50 • Maybe enjoying this sort of thing is the defining characteristic of analysts. (I'm not an analyst.) – Andreas Blass Apr 21 '13 at 21:00 • Say you got the problem as in math.stackexchange.com/questions/69519 without a formula for the value. How would you find it? – GEdgar Apr 23 '13 at 18:44 The iterated integral of the complementary error function, \begin{align*} \mathrm{i}^n\mathrm{erfc}(z)&=\underbrace{\int_z^\infty\int_{t_{n-1}}^\infty\cdots\int_{t_1}^\infty}_{n} \mathrm{erfc}(t)\,\mathrm dt\cdots\mathrm dt_{n-2}\mathrm dt_{n-1}\\ &=\frac2{n!\sqrt\pi}\int_z^\infty(t-z)^n\exp(-t^2)\,\mathrm dt \end{align*} (see e.g. Abramowitz and Stegun) satisfies the difference equation $$\mathrm{i}^{n+1}\mathrm{erfc}(z)=-\frac{z}{n+1}\mathrm{i}^n\mathrm{erfc}(z)+\frac1{2(n+1)}\mathrm{i}^{n-1}\mathrm{erfc}(z)$$ with initial conditions $\mathrm{i}^0\mathrm{erfc}(z)=\mathrm{erfc}(z)$ and $\mathrm{i}^{-1}\mathrm{erfc}(z)=\dfrac2{\sqrt\pi}\exp(-z^2)$. 
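As a quick numerical sanity check (not needed for the argument), the sketch below approximates $\mathrm{i}^n\mathrm{erfc}(z)$ directly from its integral representation with a crude composite-Simpson quadrature and confirms the difference equation. Plain Python; the truncation point $z+10$ and the subinterval count are arbitrary choices.

```python
import math

def i_erfc(n: int, z: float) -> float:
    """i^n erfc(z) = (2 / (n! sqrt(pi))) * Int_z^inf (t - z)^n exp(-t^2) dt,
    via composite Simpson; the integrand is negligible beyond t = z + 10."""
    if n == -1:
        return 2.0 / math.sqrt(math.pi) * math.exp(-z * z)
    a, b, m = z, z + 10.0, 20000  # m subintervals (even)
    h = (b - a) / m
    f = lambda t: (t - z) ** n * math.exp(-t * t)
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * j - 1) * h) for j in range(1, m // 2 + 1))
    s += 2.0 * sum(f(a + 2 * j * h) for j in range(1, m // 2))
    integral = s * h / 3.0
    return 2.0 / (math.factorial(n) * math.sqrt(math.pi)) * integral

# Check i^{n+1} erfc(z) = -z/(n+1) * i^n erfc(z) + 1/(2(n+1)) * i^{n-1} erfc(z)
z, n = 0.7, 2
lhs = i_erfc(n + 1, z)
rhs = -z / (n + 1) * i_erfc(n, z) + 1.0 / (2 * (n + 1)) * i_erfc(n - 1, z)
```

The $n=0$ case also recovers `math.erfc` itself, since $\mathrm{i}^0\mathrm{erfc}=\mathrm{erfc}$.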
This recurrence can be rearranged: $$\frac{\mathrm{i}^n\mathrm{erfc}(z)}{\mathrm{i}^{n-1}\mathrm{erfc}(z)}=\frac1{2z+2(n+1)\tfrac{\mathrm{i}^{n+1}\mathrm{erfc}(z)}{\mathrm{i}^n\mathrm{erfc}(z)}}$$ Iterating this transformation yields the continued fraction $$\frac{\mathrm{i}^n\mathrm{erfc}(z)}{\mathrm{i}^{n-1}\mathrm{erfc}(z)}=\cfrac1{2z+\cfrac{2(n+1)}{2z+\cfrac{2(n+2)}{2z+\dots}}}$$ (As a note, it can be shown that $\mathrm{i}^n\mathrm{erfc}(z)$ is the minimal solution (that is, $\mathrm{i}^n\mathrm{erfc}(z)$ decays as $n$ increases) of its difference equation; thus, by Pincherle, the CF given above is correct.) In particular, the case $n=0$ gives $$\frac{\sqrt\pi}{2}\exp(z^2)\mathrm{erfc}(z)=\cfrac1{2z+\cfrac2{2z+\cfrac4{2z+\cfrac6{2z+\dots}}}}$$ If $z=\dfrac1{\sqrt 2}$, then $$\frac{\sqrt{e\pi}}{2}\mathrm{erfc}\left(\frac1{\sqrt 2}\right)=\cfrac1{\sqrt 2+\cfrac2{\sqrt 2+\cfrac4{\sqrt 2+\cfrac6{\sqrt 2+\dots}}}}$$ We now perform an equivalence transformation. Recall that a general equivalence transformation of a CF $$b_0+\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\cfrac{a_3}{b_3+\cdots}}}$$ with some sequence $\mu_k, k>0$ looks like this: $$b_0+\cfrac{\mu_1 a_1}{\mu_1 b_1+\cfrac{\mu_1 \mu_2 a_2}{\mu_2 b_2+\cfrac{\mu_2 \mu_3 a_3}{\mu_3 b_3+\cdots}}}$$ You can easily show that an equivalence transformation leaves the value of the CF unchanged. If we apply this to the CF earlier with $\mu_k=\dfrac1{\sqrt 2}$, then $$\sqrt{\frac{e\pi}{2}}\mathrm{erfc}\left(\frac1{\sqrt 2}\right)=\cfrac1{1+\cfrac1{1+\cfrac2{1+\cfrac3{1+\dots}}}}$$ The CF in the OP is now easily obtained from this. • +1 milliramanujan (Gosper frequently ranked his exotic formulas in milliramanujan units). Too bad we don't have multidimensional votes. – Math Gems Apr 4 '13 at 20:06 • @MathGems, J.M. gets one from me too. – vonbrand Apr 4 '13 at 20:07 • So, was my mention of erfc needed for you to get the solution? – GEdgar Apr 4 '13 at 21:19 • @GEdgar: (*embarrassed*) ...yes. 
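The continued fraction for $\frac{\sqrt\pi}{2}\exp(z^2)\mathrm{erfc}(z)$ can likewise be checked numerically; this sketch (a sanity check, not part of the derivation) evaluates a truncated fraction by backward recurrence and compares it with Python's `math.erfc`. The depth 400 is an arbitrary cutoff, comfortably past convergence at double precision.

```python
import math

def erfc_cf(z: float, depth: int = 400) -> float:
    """Truncated evaluation of 1/(2z + 2/(2z + 4/(2z + 6/(2z + ...)))).

    For z > 0 this continued fraction should equal
    (sqrt(pi)/2) * exp(z**2) * erfc(z).
    """
    tail = 0.0
    # Backward recurrence: start from the innermost partial numerator
    # 2*depth and work outward; the partial numerators are 2k.
    for k in range(depth, 0, -1):
        tail = 2.0 * k / (2.0 * z + tail)
    return 1.0 / (2.0 * z + tail)

z = 1.0 / math.sqrt(2.0)
closed_form = math.sqrt(math.pi) / 2.0 * math.exp(z * z) * math.erfc(z)
```

Setting the innermost tail to zero makes the loop produce exactly the depth-th convergent, so this also exercises the equivalence-transformed fraction at $z=1/\sqrt2$.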
:) I had forgotten all about that thread, actually, until you brought it up. – J. M. is a poor mathematician Apr 5 '13 at 0:08 Let \begin{align*} p_n &= p_{n-1}+np_{n-2}, \quad p_{-1} = 1, \quad p_0=1, \\ q_n &= q_{n-1}+nq_{n-2}, \quad q_{-1} = 0, \quad q_0=1, \\ r_n &= \frac{p_n}{q_n}. \end{align*} So the $r_n$ are the convergents of the continued fraction: $$r_0 = 1,\qquad r_1 = 1+\cfrac{1}{1},\qquad r_2 = 1+\cfrac{1}{1+\cfrac{2}{1}},\qquad r_3 = 1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1}}},$$ and so on. Consider the exponential generating functions $$F(x) = \sum_{n=0}^\infty \frac{p_{n-1}}{n!}\,x^n,\qquad G(x) = \sum_{n=0}^\infty \frac{q_{n-1}}{n!}\,x^n .$$ They are solutions of the differential equations \begin{align*} &F'(x) + (-1-x)F(x) = 0,\quad F(0)=1, \\ &G'(x) + (-1-x)G(x) = 1,\quad G(0)=0 . \end{align*} A solution for the homogeneous equation $y'+(-1-x)y=0$ is $$F_1(x) = e^{(1+x)^2/2}$$ and a particular solution for the inhomogeneous equation $y'+(-1-x)y=1$ is $$F_2(x) = \sqrt{\frac{\pi}{2}}\,e^{(1+x)^2/2}\, \mathrm{erf}\,\frac{x+1}{\sqrt{2}}$$ After all, there are formulas for the solution of first-order linear differential equations. 
Hayman's method (Wilf, generatingfunctionology, Theorem 4.5.1) shows that the coefficients of $F_1$ are asymptotic to $$\frac{e^{n/2+\sqrt{n}+1/4}}{2n^{(n+1)/2}\sqrt{\pi}}$$ as $n \to \infty$ and the coefficients of $F_2$ are asymptotic to $$\frac{e^{n/2+\sqrt{n}+1/4}}{2\sqrt{2}n^{(n+1)/2}}$$ Applying the initial conditions, we conclude that $$F(x) = e^{-1/2}F_1(x),\qquad G(x) = F_2(x)-\sqrt{\frac{\pi}{2}}\,\mathrm{erf}\,\frac{1}{\sqrt{2}} \,F_1(x)$$ So \begin{align*} \frac{p_{n-1}}{n!} &\sim \frac{e^{n/2+\sqrt{n}-1/4}}{2n^{(n+1)/2}\sqrt{\pi}}; \\ \frac{q_{n-1}}{n!} &\sim \frac{e^{n/2+\sqrt{n}+1/4}} {2\sqrt{2}n^{(n+1)/2}} - \frac{e^{n/2+\sqrt{n}+1/4}\sqrt{\pi}\text{erf}(1/\sqrt{2})} {2\sqrt{2}n^{(n+1)/2}\sqrt{\pi}} = \frac{e^{n/2+\sqrt{n}-1/4}e^{1/2}\text{erfc}(1/\sqrt{2})} {2\sqrt{2}n^{(n+1)/2}} \end{align*} with $\text{erfc}(x)=1-\text{erf}(x)$. Finally, $$\frac{p_{n-1}}{q_{n-1}} \sim \frac{1}{\displaystyle e^{1/2}\;\sqrt{\frac{\pi}{2}}\;\text{erfc}\,\frac{1}{\sqrt{2}}} \approx 1.525135276$$ so that is the value of the continued fraction. • By "the coefficients of $F_1$", do you mean its Maclaurin series coefficients? – alex.jordan Apr 25 '13 at 17:42 • Yes, Maclaurin. – GEdgar Apr 25 '13 at 19:03
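As a numerical footnote to both answers, the integer recurrences for $p_n$ and $q_n$ can be run directly in exact arithmetic to watch the convergents $r_n=p_n/q_n$ approach the claimed closed form; `math.erfc` supplies the right-hand side. The depth 300 below is an arbitrary cutoff.

```python
import math

def cf_convergent(n: int) -> float:
    """n-th convergent r_n = p_n / q_n of 1 + 1/(1 + 2/(1 + 3/(1 + ...)))."""
    # p_k = p_{k-1} + k * p_{k-2} and likewise for q, with initial values
    # p_{-1} = p_0 = 1, q_{-1} = 0, q_0 = 1; exact integer arithmetic.
    p_prev, p = 1, 1
    q_prev, q = 0, 1
    for k in range(1, n + 1):
        p_prev, p = p, p + k * p_prev
        q_prev, q = q, q + k * q_prev
    return p / q

# Claimed value of the continued fraction.
limit = 1.0 / (math.exp(0.5) * math.sqrt(math.pi / 2.0)
               * math.erfc(1.0 / math.sqrt(2.0)))
```

Since the partial numerators are positive, successive convergents bracket the limit, so the gap between consecutive convergents bounds the error.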
https://www.physicsforums.com/threads/advice-for-newbies-plus-suggest-a-new-rule.199303/
# Advice for Newbies? Plus: Suggest a New Rule

1. Nov 19, 2007

### Chris Hillman

Hi all, I have felt for some time that there is a need for a new PF sticky offering advice to newbies, especially younger ones. The idea would be to prepare a carefully written sticky which tries to achieve several things:
• not alienate older newbies by assuming all newbies are youngsters,
• not alienate younger newbies by appearing overly paternalistic,
• gracefully suggest the importance of things some newbies (particularly young ones) might not have thought about, such as:
• try to use good spelling and grammar,
• try to be clear; in trying to formulate a question clearly, you may answer it yourself, and if not, everyone will appreciate your thoughtfulness in trying to be coherent!,
• if you mention a book, cite author and title if possible,
• if you mention a website, cite url if possible (modulo the proscription on cranksite promotion),
• PF users span the range from high school students to sci/math Ph.D.s, so to avoid responses going over (or under!)
your level, if possible give some indication of your math/sci background,
• don't worry about asking a "dumb question"; if you have tried to formulate your question clearly, chances are good that other people have the very same question,
• but it's a good idea to look over the sci.physics FAQ first to avoid asking the exact same question which twelve other people have asked this week (which might lead to a grumpy response),
• but it's definitely OK to request clarification of a FAQ entry if you didn't understand something you read there,
• unfortunately, not everyone is who they say they are on the InterNet, or has good intentions, so a few safety tips might be in order (older newbies might also consult the ACLU's thoughts about http://www.aclu.org/privacy/internet/index.html [Broken]),
• if you're puzzled about which forum to post a question in, you can post it in Forum Feedback & Announcements asking a mentor to move the post to the appropriate forum,
• if you have a question for the PF staff which you don't want to ask in public, you can use https://www.physicsforums.com/faq.php?faq=vb_board_usage#faq_vb_pm_explain [Broken].
• if you want to learn how to post a link or an equation, try https://www.physicsforums.com/faq.php?faq=vb_faq#faq_vb_board_usage [Broken].

I'd also like to suggest a new rule at PF which I think would be wise, the web being what it is. Many social networking websites such as WP now forbid or at least strongly discourage users, particularly minors, from posting their email addresses. See [post=1510907]Evo's response[/post] to the post which prompted me to mention this issue.

Last edited by a moderator: May 3, 2017

2. Nov 19, 2007

### Moonbear Staff Emeritus

I don't know how to formulate something that accomplishes all you suggest without sounding overly paternalistic (or maternalistic?) to new members.
I think most of those points are pretty self-evident, others are included in our current guidelines (which are already a lot for a new member to absorb), and I think the rest they have to fumble their way through, which is something many of our members are very helpful with, much more so than on a lot of other sites. We don't have a specific guideline about posting email addresses, but probably 99 times out of 100, we catch that and edit it out of a thread, partly for the safety of our members (though when they are posting accounts with free hosting sites, I figure they're already a little aware of the risk of spammers at least), but mostly because it's rather against the spirit of the forums to solicit that answers be sent to one's email address rather than discussed within the thread.

3. Nov 19, 2007

### Chris Hillman

Neither do I. That's why I am asking for input from others! One would hope so, but experience suggests they are not self-evident at all to some newbies. That's kind of what I was getting at; having a rule would allow mentors to simply zap email addresses; the stated reason could be what you just said, since I myself feel that it is important to avoid frightening off minors while at the same time trying to prevent them from doing something unsafe out of inexperience. That's part of finding the tricky balance.

4.
Nov 22, 2007

### Chris Hillman

Please add to the above list ("gracefully suggest the importance of things some newbies, particularly young ones, might not have thought about, such as"):
• avoid making statements about topic T if they don't know anything at all about T, even if they qualify this with "I don't know what I'm talking about", because this obligates knowledgeable posters to waste their time trying to correct misstatements

I am trying to request that the mentors consider carefully writing boilerplate intended to minimize the chances that someone will take offence should a poster like myself chide them for going on and on about a topic they know nothing about.

Last edited: Nov 22, 2007

5. Nov 22, 2007

### robphy

Chris, nice ideas. Maybe the best method of delivery is something like this: Welcome to PF. Here's a Top-10 list of tips to get the most out of PF. - bulleted list of short (one or two sentence) tips ...possibly humorous and not-stuffy sounding Too many... but you get the idea... http://blog.ericfeng.com/250-things-i-have-learnt-that-will-make-you-become-a-highly-successful-speaker/ [Broken] entertaining and informative, but possibly too visual http://lifehacker.com/software/presentations/stop-death-by-powerpoint-323554.php One might include a link to a well-posed HW-post as a good example of how to ask a HW question.

Last edited by a moderator: May 3, 2017

6. Nov 22, 2007

### Chris Hillman

Thanks! These are good suggestions! In case it wasn't clear to others, I was asking for suggestions on the exact wording as well as the existence/location of the proposed sticky.

7. Nov 23, 2007

### Chronos

Well, advising people not to advertise their ignorance is like asking them to cut their heads off. Positing naivety is a trick of the crackpot trade. Next dearest to their hearts is luring a credible physicist into an inane argument. My advice is - don't say 'go read a book' - provide links. That is what I do with newbies, and cranks merit the same treatment.
They are mostly misguided amateurs, IMO. And that can be a dangerous situation - what if they grow up to be U.S. Senators? Ignorance is only a vice if tolerated.

8. Nov 23, 2007

### J77

The only way to accomplish most of those points is for users to lose anonymity. I don't think this would happen. For example, in the two years that I've been posting, I've seen some "students" apparently zip through whole degree courses, from first years through to graduate studies...

9. Nov 23, 2007

### ZapperZ Staff Emeritus

Unfortunately, all of this assumes that the newbies will actually read either the stickies or the instructions. I would be happy if they just read the topic of the stickies. I lost count of how many HW posts I had to move out of the main physics forums, even when there's a sticky in there with the topic telling them clearly to not post HW/Coursework questions, IN CAPITAL LETTERS no less. So I am skeptical if any of these things will actually work, or be effective. If they can't read simple, direct instruction that's staring at them right in their faces, what makes anyone think that they'll read long-winded instructions on how to interact on here? Zz.

10. Nov 23, 2007

### robphy

Maybe we should try not to be long-winded... my eyes glaze over with too much text.* (Maybe I'm a victim of PowerPoint.) (* not every word carries the same importance.. highlight with font and layout formatting) A first draft (not quite at 10), based on Chris's list (above)... Feel free to modify...

Top-10 list of Tips for New Users at PhysicsForums
• be clear (It helps everyone, including yourself!)
• for Homework-type problems, show your attempt
• typeset equations (click me: $\vec F_{net}=m\vec a$) and https://www.physicsforums.com/misc.php?do=bbcode [Broken] to add emphasis,
• make references (so readers can look things up) - books and papers with titles, authors, page numbers, and, if needed, problem numbers - websites (but don't violate the PF Rules)
- high school?
introductory-college? advanced-college? beyond? - algebra? calculus? beyond?
• has it been answered before? https://www.physicsforums.com/search.php [Broken] Usenet Physics FAQ
• be safe Don't provide personal information Internet Safety http://www.aclu.org/privacy/internet/index.html [Broken]
• have fun, be nice, and don't break the Rules

I understand the skepticism. So, feel free to give up on it. [EDIT: stray commas deleted]

11. Nov 23, 2007

### Chris Hillman

Let's put up the new sticky! Excellent, Rob! The only change I suggest is deleting the commas after the word "so". I seem to have two "reform initiatives" up for discussion at PF (the other is the proposed reorganization of the math forums). Just to be clear, I think that the sticky as drafted by Rob is clearly such a good idea that we should definitely go ahead and implement this. I am much less certain so far whether reorganizing the math forums is even a good idea (if that should go through I hope we can first come up with a good list of forums which the most experienced math posters declare they can live with.) ZapperZ, I agree that getting newbies to read the stickies can be problematic, but you might be missing the point that rather than trying to formulate advice "on the fly" (not always a good idea when dealing with hypersensitive newbies), I'd much rather point to carefully written boilerplate. For one thing, being pointed at a general advice page seems less likely to offend newbies who in the past have sometimes (mistakenly) assumed that my attempts to offer advice on how to post were intended as insults (If I point a newbie at the sticky and he/she declines to follow the advice there, I know I tried and that probably this person isn't going to listen to anything I say, so I can put him/her in my Ignore list on grounds of ineducability.)
If your point is that we wish to avoid alienating newbies or inadvertently discouraging timid readers from delurking*, part of my point here is that I wish to avoid trying to protest incomprehensibility "on the fly", which doesn't seem to have been working well. What do you think of Rob's proposed sticky? [*Am I the only one who has the impression that quite a few "delurkings" appear to occur under the influence of alcohol? I hope it's not true, but sometimes I do wonder...]

I'd call that "trolling", and many trolls seem not to be cranks so much as very odd (and unfortunate!) persons who are uncomfortable in social situations which are not full of violence, who have discovered that public forums on the web provide them with ample opportunities to create the social chaos they crave. But I agree that many embittered cranks turn to trollery (something very evident on sci.physics.relativity, which after the departure of myself and a few others has turned into a sickening exemplar of the trollfest).

Well, I agree that some politicians do seem hostile to science, and a handful of cranks/frauds have made some powerful allies (I can think of two U.S. senators who appear to have used their influence on behalf of a particularly notorious new-energy scam, but let's mention no names since I haven't verified this intelligence). I don't think cranks are a serious threat to society (although if no-one at all debunked crankery, ultimately many young people might be seriously misled--- for example, the crank physics sites on the web far outnumber a comparatively small number of reliable websites which are readable enough to be useful for students), but some of them have signed up with a serious threat, organized anti-intellectual anti-secular political fundamentalist movements which have a long history (in the U.S.) of political influence and in particular of keeping science out of science classrooms in public schools.
I am well aware that many frequent posters at public forums like PF highly value their anonymity, and while I myself have always posted under my real name, I started posting back when the WWW was very young, well before spam and harassment became commonplace, so in fact I advise anyone who asks to choose a handle. I agree (if I understand your point) that avoiding the appearance of wishing to compromise this is another desideratum. I think Rob's draft of the proposed sticky avoids giving this impression--- do you agree? I am also well aware that many people consider it perfectly acceptable to create multiple socks, but I am much less comfortable with this practice and when asked I strongly advise against it. Short reason: this often seems to start a regrettable slide down a slippery slope (see for example recent scandals at WP concerning formerly good users who became frustrated and created "carnivorous socks"). Hmm.... what do others think about possibly adding an item pointing out that socks are forbidden at PF and urging newbies to bear in mind that they are building an online persona at PF which they should expect to live with? (Once again, Rob can probably formulate this in an inoffensive manner.)

Last edited: Nov 23, 2007

12. Nov 23, 2007

### robphy

Thanks. Whoops... don't know what those commas were about... probably cross-talk during multi-tasking. (If it or something like it goes through, make it 10.. or else change the title.)

Last edited: Nov 23, 2007

13. Nov 23, 2007

### Chris Hillman

I.e., the title of the sticky should be "Top-Ten List of Tips for New Users at PhysicsForums". Ditto the addition of the "got a theory?" item as written by Rob.

14. Nov 24, 2007

### Chronos

My only point is mainstream scientists should confront this plague: well intended confusion, crackpots, and intellectual terrorism with equal vigor. Educate the confused, challenge the misguided and slap down deception.

15. Nov 24, 2007

### Staff: Mentor

I, for one, love it.
I'd like to see it as an unavoidable pop up when someone registers, and then a test that has to be passed afterwards, to make sure they have to read it, otherwise they can't register. Oh well, I can dream, can't I?

16. Nov 24, 2007

### Moonbear Staff Emeritus

As Zz pointed out, we can't get them to read the rules we already have. For example, things like showing their attempt at homework are spelled out both in the forum guidelines and in stickies at the top of every homework help forum (after being repeatedly ignored under the title "Read before posting homework", I gave up and it's now in the form of "Why isn't my HW question being answered?" to fit with the fact that most students didn't start reading until AFTER they posted their homework). We now have the homework template that prompts them to show their work, and we STILL get questions where no work is shown. Adding more instructions is not going to get people who do not read instructions to read them. Those who do read instructions and have a dash of common sense can do pretty well without such bullet points. I also don't think there's any point in including "be clear" in an FAQ or anywhere. Those who lack the ability to express themselves clearly aren't going to suddenly acquire it by reading that, and those who know how to express themselves clearly will do so. The one thing I see in the list that I really do like, however, is the request for context...i.e., level of the question. Perhaps that could be added to the HW forum template. It would be helpful when someone asks a HW/coursework type question if we know they are in high school, college, grad school, studying independently, a teacher trying to brush up knowledge before being tossed into teaching a class outside their usual subject area (as scary as it sounds, it happens), etc.

Last edited by a moderator: May 3, 2017

17.
Nov 24, 2007

### Chris Hillman

Tips for New Users at PhysicsForums (proposed sticky)

I think PF would benefit from a unified list of directives. I suspect that four basic pages should be enough: the behavioral guidelines, the VB code formatting tips, writing tips for newbies, and tips for where to put your posts. I like Evo's idea of a pop up which takes just registered newbies to a list of links to these four pages, and each could include a link to the other three, for benefit of those who haven't yet figured out the "back" button on their browser. It's a never ending battle, but I think PF could benefit from streamlining the process. Ideally it would be easy for homework helpers to hit a macro which directs the offender to the appropriate sticky (ideally one of the four pages above). After being chided five or six times I think most newbies will start to understand that this is important to us. I fear you might be correct, unfortunately, but I think it's worth making the effort to try to consolidate and reorganize all the "user guidance pages" at PF to try to make them easier to find prior to delurking, easier to read and follow, and most of all, easier for experienced newbies to cite repeatedly without tearing their hair out. Agreed, but I actually think you are underestimating the typical student newbie here. They mostly are not truly stupid, I think, and are posting incoherently out of carelessness and/or inexperience. My proposal was actually more like "try to be clear" and "use the preview feature" with some well written sentences offering some tips for improving clarity. My hope is that typical students will be encouraged to read over their posts before submission and conclude that they can in fact see how to make improvements! Here is my revised proposal (the mild redundancies are intentional; I was trying to say the most important things twice in slightly different ways):

Tips for New Users at PhysicsForums
• Got a question?
Did you check whether it has been answered previously?:
• Try out the https://www.physicsforums.com/search.php [Broken],
• Look over the Usenet Physics FAQ
• Look over some recent book reviews from other PF users.
• Didn't understand something you read in the FAQ or a previous PhysicsForums thread? It's perfectly OK to ask for clarification!
• To obtain the most useful responses, you should try to post in an appropriate forum [size=-2][implementing mentor should EDIT this to give a link to the proposed sticky guideline to forums at PF as per [thread=195378]this thread[/thread]][/size]; if you have no idea which forum is most appropriate, post in "Calculus and Miscellaneous",
• To help other PF users to respond appropriately, please try to provide some context for your post:
• Is this schoolwork? Or independent research?
• Are you in high school? A freshman in college? A graduate student?
• Have you taken high school algebra? Calculus? Beyond?
• If English is not your first language, don't worry, most PF users will try to make allowances for ESL posters, but if this applies to you it's OK to mention it!
• If you mention a book, please cite author and title, and perhaps even page number or problem number.
• If you are responding to another post, please try out the "Quote" button (it often helps to trim the quoted text!),
• You can greatly improve your post by formatting your equations using the LaTeX markup feature of VB (click me: $\vec F_{newton}=m\vec a$).
• It may be helpful to https://www.physicsforums.com/misc.php?do=bbcode [Broken].
• If you have handy a clear figure in digital format, it will probably be very useful to https://www.physicsforums.com/faq.php?faq=vb_read_and_post#faq_vb_attachment_explain [Broken] to a relevant image at another website.
• Try using the "Preview" button to review your post before submitting it:
• If possible, correct any typos, spelling, and grammar.
• Can you make your question clearer?
• Did you remember to provide some useful context? (If appropriate, did you mention your educational background?)
• Would writing out an equation help? (You can test your latex markup before posting using the preview)
• Did you make any comment which someone might find offensive? (Perhaps expressing an inflammatory religious or political opinion?) If it isn't necessary, you may save everyone some hassle by removing it!
• If you included a link, try clicking on it to make sure it goes to the site you intended.
• If you are asking for homework help, please post in the Homework and Coursework Forum at PhysicsForums, and please show some of your work so far.
• If you should spot a goof after submitting your post, you can still https://www.physicsforums.com/faq.php?faq=vb_read_and_post#faq_vb_edit_posts [Broken] during the first 24 hours or so after posting (you can even delete your post entirely).
• While we believe PF is generally a safe place, we urge all PF users to avoid doing reckless things here:
• Avoid posting too much personal information.
• See Internet Safety from KidsHealth and http://www.aclu.org/privacy/internet/index.html [Broken] from the ACLU for more tips.
• Have fun, be nice, and please try not to break any Rules!

Last edited by a moderator: May 3, 2017

18. Nov 24, 2007

### Shooting Star

I would like to add another point here, to benefit a fraction of the newbie/homework posters. I've often noticed that a quite well written and interesting question or problem has been posted, and has been answered incompletely or badly by a not too competent but enthusiastic newbie. The homework helpers, seeing that it has been answered, may just skip it, thinking that the OP has received the proper answer and it's over. The OP may be too shy to insist on a more satisfactory answer or to re-post it. My point is that the newbies should be encouraged to pursue it till satisfied, and should be told there's nothing dumb about asking a question for a second time.
The SOLVED tag is sadly neglected by most homework posters.

19. Nov 24, 2007

### Moonbear

Staff Emeritus

We don't want them to repost their question, but if they respond to a bad reply, I think most HW helpers will check a thread if they see that the OP has returned. If you notice that a thread is getting sub-par help (especially if it's outright wrong or misleading, or if it flat-out gives the answers), please report the thread so a mentor can intervene. The solved tags are fairly new, so not everyone is used to using them. Another issue that I know impacts the HW help forums is that some of the new members will post their question here and on several other forums at the same time. So, they don't always return here if they get handed an answer somewhere else, thus don't mark the thread solved or otherwise. We of course have no control over that. Those who do return and realize the benefit of the guidance here that is more than just getting handed an answer will return, and start to get the hang of things after a few returns.

20. Nov 24, 2007

### robphy

...:uhh:... my eyes are starting to glaze over again.
- be short and sweet
- stick to ten "Tips"... ("Rules" can be detailed and exhaustive)
- ideally, tips should be recitable [maybe even put to a melody :rofl:]

my $0.02

Last edited: Nov 24, 2007
https://physics.stackexchange.com/questions/584071/calculations-for-measuring-a-two-qubit-system
# Calculations for measuring a two-qubit system

Suppose I have the state $$|\psi\rangle = \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$$ that I want to measure in an arbitrary basis $$|A\rangle = \alpha|0\rangle + \beta|1\rangle \text{ and } |B\rangle = \beta^*|0\rangle - \alpha^*|1\rangle$$ From my understanding, if I measure $$|\psi\rangle$$, the probability of seeing $$|A\rangle$$ is $$|\langle A | \psi\rangle|^2$$ But when I try to compute $$\langle A | \psi\rangle$$, I get \begin{align*} \langle A | \psi\rangle &= \frac{1}{\sqrt{2}}(\alpha^*\langle 0 | + \beta^* \langle 1 |)(|01\rangle - |10\rangle)\\ &= \frac{1}{\sqrt{2}}(\alpha^*\langle 0 | + \beta^* \langle 1 |)(|0\rangle\otimes|1\rangle - |1\rangle\otimes|0\rangle) \end{align*} I assume I can distribute, so I get terms like $$\alpha^*\langle 0 |\big(|0\rangle\otimes|1\rangle\big)$$. But how does one take an inner product between $$|0\rangle$$ and $$|0\rangle\otimes|1\rangle$$, when the latter of the two is an element of a tensor product space of a different dimension than $$|0\rangle$$? • You are right. You cannot take the inner product between $|0\rangle$ and $|0\rangle \otimes |1\rangle$ as, by definition, the inner product is: $$\langle \rangle : \mathbb{V}^2 \mapsto \mathbb{C}$$ The vector $|0\rangle \otimes |1\rangle$ belongs in the tensor product $\mathbb{V} \otimes \mathbb{V}$, not $\mathbb{V}$. – Andreas Mastronikolis Oct 5 at 16:31 • @AndreasMastronikolis That's what I thought. In that case, how do we calculate measurement results and measurement probabilities? – Tiwa Aina Oct 5 at 16:32 • To be honest, I am not currently sure as I am not completely acquainted with product spaces (yet).
But here is how I am thinking: You are right in your assertion that the probability of catching $|\psi \rangle$ in the basis state $|A \rangle$ is: $$| \langle A| \psi \rangle|^2$$ The only thing that is left is to find a vector in $\mathbb{V} \otimes \mathbb{V}$ that will 'represent' (I know I am speaking loosely here) $|A\rangle$. My best guess would be: $$|A\rangle \otimes | A \rangle$$ but I am not sure. – Andreas Mastronikolis Oct 5 at 16:43 • Bear in mind that the quantity $$\left\lvert \left( \langle A| \otimes \langle A| \right) | \psi \rangle \right\rvert^2$$ is the probability of catching particle 1 in $|A\rangle$ and particle 2 in $|A\rangle$. – Andreas Mastronikolis Oct 5 at 16:53
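The approach sketched in the last comment can be checked numerically. The following is a small sketch (assuming numpy; the values of $\alpha$, $\beta$ and the helper name `prob` are my own choices, not from the thread): joint-outcome probabilities come from inner products with tensor-product bras such as $\langle A| \otimes \langle B|$, built here with `np.kron`.

```python
import numpy as np

# Arbitrary normalized single-qubit basis: |A> = a|0> + b|1>, |B> = b*|0> - a*|1>
a, b = 0.6, 0.8j                       # any pair with |a|^2 + |b|^2 = 1 works
A = np.array([a, b])
B = np.array([np.conj(b), -np.conj(a)])

# Singlet state |psi> = (|01> - |10>)/sqrt(2) in the computational basis
ket01 = np.kron([1, 0], [0, 1])
ket10 = np.kron([0, 1], [1, 0])
psi = (ket01 - ket10) / np.sqrt(2)

def prob(X, Y, state=psi):
    """Probability of outcome (X, Y): |(<X| (x) <Y|) |psi>|^2."""
    bra = np.conj(np.kron(X, Y))       # <X| (x) <Y| as a row of conjugated amplitudes
    return abs(bra @ state) ** 2

print(prob(A, A))  # → 0.0 (the singlet never yields the same outcome on both qubits)
print(prob(B, B))  # → 0.0
print(prob(A, B))  # → ≈ 0.5
print(prob(B, A))  # → ≈ 0.5
```

The vanishing of `prob(A, A)` for every choice of basis is the well-known perfect anticorrelation of the singlet state, which makes it a convenient sanity check for this construction.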
https://math.stackexchange.com/questions/1282934/how-many-ways-can-you-divide-24-people-into-groups-of-two
# how many ways can you divide 24 people into groups of two? [closed] I just can't seem to figure this out. I need to acquire a function for this scenario. I have tried to look at smaller forms of the problem. My problem is I am struggling to get the # of possibilities. So far I have 1:1 2:1 3:3 4:3 5:12 6:15 how do I proceed? ## closed as off-topic by abel, Daniel, user147263, user99914, TravisJMay 15 '15 at 2:58 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Community, Community, TravisJ If this question can be reworded to fit the rules in the help center, please edit the question. • Welcome to Math.SE! What are your thoughts so far? – Peter Woolfitt May 15 '15 at 2:22 • I noticed you tagged permutations...have you tried thinking in terms of combinations? – Jared May 15 '15 at 2:30 • Do you mean 12 groups with 2 persons in each? – HEKTO May 15 '15 at 2:30 • Why is my question on hold? – Garret Lloyd May 15 '15 at 3:48 • Please explain what you have "so far" a little more clearly. The remarks 1:1 2:1 3:3, etc. do not make sense. – hardmath May 15 '15 at 3:58 Here's how you approach this kind of problem when you are stuck: you look for an easier problem of the same sort, and solve that instead, and see if you learn anything that might help you with the real problem. For example, how many ways are there to divide 2 people into groups of 2? Obviously only 1 way. That wasn't much help. So try a harder one. How many ways are there to divide 4 people into groups of 2? Say the people are A, B, C, D. A must be matched with someone, and there are 3 people she could be matched with.
Then the two unmatched people must be matched to each other, so the answer is 3. Now how many ways are there to divide 6 people into groups of 2? Again A must be matched with someone, and there are 5 people she could be matched with, and then you are left with 4 people, and we know from the previous paragraph that 4 people can be matched in 3 ways, so the answer is $5\cdot3 = 15$. Now you try it from there. • Let me make sure I understand. 7 people would be (6)(12) – Garret Lloyd May 15 '15 at 2:52 • 7 people can't be divided into groups of 2. – MJD May 15 '15 at 3:03 • I think I did what you said and I got $1.585 \times 10^{21}$ – Garret Lloyd May 15 '15 at 3:23 • Assuming MJD's answer is $(n - 1)!!$, I got a value of $316,234,143,225$ – Jared May 15 '15 at 5:32
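The recurrence in the accepted answer — pair one fixed person with any of the $n-1$ others, then pair up the remaining $n-2$ — is a few lines of Python (the function name `pairings` is my own):

```python
def pairings(n):
    """Number of ways to split n people (n even) into unordered pairs: (n-1)!!"""
    if n % 2 != 0:
        raise ValueError("n must be even")
    count = 1
    while n > 2:
        count *= n - 1   # choices of partner for one fixed person
        n -= 2           # the remaining people pair up recursively
    return count

print(pairings(4))   # → 3
print(pairings(6))   # → 15
print(pairings(24))  # → 316234143225, matching Jared's (n-1)!! figure
```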
http://hyperspacewiki.org/index.php/Hereditarily_indecomposable
# Hereditarily indecomposable

We say that a continuum $X$ is hereditarily indecomposable if all of its subcontinua are indecomposable.
http://www.victoralvarez.net/
# Victor Alvarez (Ph.D)

Saarland University
Information Systems Group
Building E 1.1, 3rd floor, room 3.09
Email: lastname@cs.uni-saarland.de
Telephone: +49 681 302 70144
Last update: May 2014

### In a nutshell

I was born in Mexico City, where I grew up and lived until the age of 25. I practiced rowing for about 9 years, of which about 5 were at international competitive level for Mexico's national team. During that time I started studying Physics at the Faculty of Sciences at UNAM, but later on I decided to change to Computer Science. My bachelor thesis was supervised by Prof. Dr. Jorge Urrutia. I obtained a master's degree in Computer Science from Saarland University, in Saarbrücken, Germany, where my thesis was supervised by Prof. Dr. Raimund Seidel, who remained as my Ph.D supervisor. I am currently a Research Associate at the Information Systems Group headed by Prof. Dr. Jens Dittrich at Saarland University. You might have landed here for academic reasons; in that case, are you sure I'm the Victor Alvarez you're looking for? Otherwise you might be interested in horseback riding lessons, or even in a comic. If you find a broken link on this website, or elements that are not being rendered properly, I would greatly appreciate it if you let me know. Thanks!

### Education

2007-2012: Ph.D in Computer Science (Dr.-Ing.) at Saarland University under the supervision of Prof. Dr. Raimund Seidel.
2006-2007: Master in Computer Science (M. Sc.) at Saarland University and International Max-Planck Research School For Computer Science (IMPRS) under the supervision of Prof. Dr. Raimund Seidel.
2001-2005: Bachelor of Computer Science (B. Sc.) at National Autonomous University of Mexico (UNAM) under the supervision of Prof. Dr. Jorge Urrutia.

### Research interests

My current research is mainly focused on developing and engineering (new) algorithms and data structures for (1) indexing, and (2) tuple reconstruction; both for main-memory databases on multi-core systems.
This development process ought to, for example, make the algorithms (or implementations) NUMA-aware when applicable (rather relevant nowadays), and profile them in order to detect bottlenecks in performance scalability. Previously, however, I mostly worked in Combinatorial and Algorithmic Geometry, Algorithm Engineering, Combinatorics, Data Structures, and Parameterized Complexity. In general, I am highly interested in areas with an algorithmic flavor. ### Publications • Conferences: [C9]Victor Alvarez, Felix Martin Schuhknecht, Jens Dittrich, Stefan Richter. Main Memory Adaptive Indexing for Multi-core Systems. In Proc. of the 10th International Workshop on Data Management on New Hardware (DaMoN '14), Snowbird, USA, 2014. [C8]Victor Alvarez, Karl Bringmann, Saurabh Ray, Raimund Seidel. Counting Triangulations Approximately. In Proc. of the 25th Canadian Conference on Computational Geometry (CCCG '13), Waterloo, Canada, 2013. Abstract: We consider the problem of counting straight-edge triangulations of a given set $P$ of $n$ points in the plane. Until very recently it was not known whether the exact number of triangulations of $P$ can be computed asymptotically faster than by enumerating all triangulations. We now know that the number of triangulations of $P$ can be computed in $O^{*}(2^{n})$ time, which is less than the lower bound of $\Omega(2.43^{n})$ on the number of triangulations of any point set. In this paper we address the question of whether one can approximately count triangulations in sub-exponential time. We present an algorithm with sub-exponential running time and sub-exponential approximation ratio, that is, if we denote by $\Lambda$ the output of our algorithm, and by $c^{n}$ the exact number of triangulations of $P$, for some positive constant $c$, we prove that $c^{n}\leq\Lambda\leq c^{n}\cdot 2^{o(n)}$.
This is the first algorithm that in sub-exponential time computes a $(1+o(1))$-approximation of the base of the number of triangulations, more precisely, $c\leq\Lambda^{\frac{1}{n}}\leq(1 + o(1))c$. Our algorithm can be adapted to approximately count other crossing-free structures on~$P$, keeping the quality of approximation and running time intact. Our algorithm may be useful in guessing, through experiments, the right constants $c_1$ and $c_2$ such that the number of triangulations of any set of $n$ points is between $c_1^n$ and $c_2^n$. Currently there is a large gap between $c_1$ and $c_2$. We know that $c_1 \geq 2.43$ and $c_2 \leq 30$. [C7]Victor Alvarez, Erin W. Chambers, László Kozma. Privacy by Fake Data: A Geometric Approach. In Proc. of the 25th Canadian Conference on Computational Geometry (CCCG '13), Waterloo, Canada, 2013. Abstract: We study the following algorithmic problem: given $n$ points within a finite $d$-dimensional box, what is the smallest number of extra points that need to be added to ensure that every $d$-dimensional unit box is either empty, or contains at least $k$ points. We motivate the problem through an application to data privacy, namely $k$-anonymity. We show that minimizing the number of extra points to be added is strongly NP-complete, but admits a Polynomial Time Approximation Scheme (PTAS). In some sense, this is the best we can hope for, since a Fully Polynomial Time Approximation Scheme (FPTAS) is not possible, unless P=NP. [C6]Victor Alvarez, Raimund Seidel. A Simple Aggregative Algorithm for Counting Triangulations of Planar Point Sets and Related Problems. In Proc. of the 29th Symposium on Computational Geometry (SoCG '13), pages 1-8, Rio de Janeiro, Brazil, 2013. DOI=10.1145/2462356.2462392. Abstract: We give an algorithm that determines the number $\mbox{tr}(S)$ of straight line triangulations of a set $S$ of $n$ points in the plane in worst case time $O(n^2 2^n)$. 
This is the first algorithm that is provably faster than enumeration, since $\mbox{tr}(S)$ is known to be $\Omega(2.43^n)$ for any set $S$ of $n$ points. Our algorithm requires exponential space. The algorithm generalizes to counting all triangulations of $S$ that are constrained to contain a given set of edges. It can also be used to compute an optimal triangulation of $S$ (unconstrained or constrained) for a reasonably wide class of optimality criteria (that includes e.g. minimum weight triangulations). Finally, the approach can also be used for the random generation of triangulations of $S$ according to the perfect uniform distribution. The algorithm has been implemented and is substantially faster than existing methods on a variety of inputs. [C5]Victor Alvarez, Karl Bringmann, Radu Curticapean, Saurabh Ray. Counting Crossing-free Structures. In Proc. of the 28th Symposium on Computational Geometry (SoCG '12), pages 61-68, Chapel Hill, USA, 2012. DOI=10.1145/2261250.2261259. Abstract: Let $P$ be a set of $n$ points in the plane. A crossing-free structure on $P$ is a straight-edge planar graph with vertex set in $P$. Examples of crossing-free structures include triangulations of $P$, and spanning cycles of $P$, also known as polygonalizations of $P$, among others. There has been a large amount of research trying to bound the number of such structures. In particular, bounding the number of triangulations spanned by $P$ has received considerable attention. It is currently known that every set of $n$ points has at most $O(30^{n})$ and at least $\Omega(2.43^{n})$ triangulations. However, much less is known about the algorithmic problem of counting crossing-free structures of a given set $P$. For example, no algorithm for counting triangulations is known that, on all instances, performs faster than enumerating all triangulations. In this paper we develop a general technique for computing the number of crossing-free structures of an input set $P$.
We apply the technique to obtain algorithms for computing the number of triangulations and spanning cycles of $P$. The running time of our algorithms is upper bounded by $n^{O(k)}$, where $k$ is the number of onion layers of $P$. In particular, we show that our algorithm for counting triangulations is not slower than $O(3.1414^{n})$. Given that there are several well-studied configurations of points with at least $\Omega(3.464^{n})$ triangulations, and some even with $\Omega(8^{n})$ triangulations, our algorithm is the first to asymptotically outperform any enumeration algorithm for such instances. In fact, it is widely believed that any set of $n$ points must have at least $\Omega(3.464^{n})$ triangulations. If this is true, then our algorithm is strictly sub-linear in the number of triangulations counted. We also show that our techniques are general enough to solve the restricted triangulation counting problem, which we prove to be $W[2]$-hard in the parameter $k$. This implies a "no free lunch" result: In order to be fixed-parameter tractable, our general algorithm must rely on additional properties that are specific to the considered class of structures. [C4]Victor Alvarez, Atsuhiro Nakamoto. Colored Quadrangulations with Steiner Points. Selected papers of The Thailand-Japan Joint Conference on Computational Geometry and Graphs (TJJCCGG ’12), LNCS 8296, pages 20-29, Bangkok, Thailand, 2013. DOI=10.1007/978-3-642-45281-9_2. Preliminary version in Proc. of the 28th European Workshop on Computational Geometry (EuroCG '12), pages 249-252, Assisi, Italy, 2012. Abstract: Let $P\subset\mathbb{R}^{2}$ be a $k$-colored set of $n$ points in general position, where $k\geq 2$. A $k$-colored quadrangulation of $P$ is a properly colored straight-edge plane graph $G$ with vertex set $P$ such that the boundary of the unbounded face of $G$ coincides with the convex hull of $P$ and that each bounded face of $G$ is quadrilateral. 
It is easy to check that in general not every $k$-colored $P$ admits a $k$-colored quadrangulation, and hence the use of extra points, for which we can choose the color among the $k$ available colors, is required in order to obtain one. These extra points are known in the literature as Steiner points. In this paper, we show that if $P$ satisfies some condition for the colors of the points in the convex hull, then a $k$-colored quadrangulation of $P$ can always be constructed using less than $\frac{(16 k-2) n+7 k-2}{39 k-6}$ Steiner points. Our upper bound improves the previous known upper bound for $k=3$, and represents the first bounds for $k\geq 4$. [C3]Victor Alvarez, David G. Kirkpatrick, Raimund Seidel. 2011. Can nearest neighbor searching be simple and always fast?. In Proc. of the 19th European conference on Algorithms (ESA '11), pages 82-92, Saarbrücken, Germany, 2011. DOI=10.1007/978-3-642-23719-5_8. Abstract: Nearest Neighbor Searching, i.e. determining from a set $S$ of $n$ sites in the plane the one that is closest to a given query point $q$, is a classical problem in computational geometry. Fast theoretical solutions are known, e.g. point location in the Voronoi Diagram of $S$, or specialized structures such as so-called Delaunay hierarchies. However, practitioners tend to deem these solutions as too complicated or computationally too costly to be actually useful. Recently in ALENEX 2010 Birn et al. proposed a simple and practical randomized solution. They reported encouraging experimental results and presented a partial performance analysis. They argued that in many cases their method achieves logarithmic expected query time but they also noted that in some cases linear expected query time is incurred. They raised the question whether some variant of their approach can achieve logarithmic expected query time in all cases. The approach of Birn et al. 
derives its simplicity mostly from the fact that it applies only one simple type of geometric predicate: which one of two sites in $S$ is closer to the query point $q$. In this paper we show that any method for planar nearest neighbor searching that relies just on this one type of geometric predicate can be forced to make at least $n-1$ such predicate evaluations during a worst case query. [C2]Victor Alvarez. Even Triangulation of Planar Set of Points with Steiner Points. In Proc. of the 26th European Workshop on Computational Geometry (EuroCG '10), pages 119-122, Dortmund, Germany, 2010. Abstract: Let $P\subset\mathbb{R}^{2}$ be a set of $n$ points of which $k$ are interior points. Let us call a triangulation $T$ of $P$ even if all its vertices have even degree, and pseudo-even if at least the $k$ interior vertices have even degree. (Pseudo-)Even triangulations have one nice property; their vertices can be $3$-colored, see here for example. Since one can easily check that for some sets of points, such triangulations do not exist, we show an algorithm that constructs a set $S$ of at most $\left\lfloor\frac{k + 2}{3}\right\rfloor$ Steiner points (extra points) along with a pseudo-even triangulation $T$ of $P\cup S = V(T)$. [C1]Victor Alvarez, Raimund Seidel. Approximating the Minimum Spanning Tree of Set of Points in the Hausdorff Metric. In Proc. of the 24th European Workshop on Computational Geometry (EuroCG '08), pages 119-122, Nancy, France, 2008. Abstract: We study the problem of approximating $\mbox{MST}(P)$, the Euclidean minimum spanning tree of a set $P$ of $n$ points in $[0,1]^d$, by a spanning tree of some subset $Q\subset P$. We show that if the weight of $\mbox{MST}(P)$ is to be approximated, then in general $Q$ must be large. If the shape of $\mbox{MST}(P)$ is to be approximated, then this is always possible with a small $Q$. More specifically, for any $0<\varepsilon<1$ we prove: 1.
There are sets $P\subset [0, 1]^{d}$ of arbitrarily large size $n$ with the property that any subset $Q'\subset P$ that admits a spanning tree $T'$ with $\bigl| \left|T'\right|-\left|\mbox{MST}(P)\right|\bigr| < \varepsilon\cdot\left|\mbox{MST}(P)\right|$ must have size at least $\Omega\left({n}^{1 - 1/d}\right)$. Here $|T|$ denotes the weight, i.e. the sum of the edge lengths of tree $T$. 2. For any $P\subset [0,1]^d$ of size $n$ there exists a subset $Q\subseteq P$ of size $O\left(1/\varepsilon^{d}\right)$ that admits a spanning tree $T$ that is $\varepsilon$-close to $\mbox{MST}(P)$ in terms of Hausdorff distance (which measures shape dissimilarity). 3. This set $Q$ and this spanning tree $T$ can be computed in time $O\left(\tau_d(n) + 1/\varepsilon^d\log\left(1/\varepsilon^d\right)\right)$ for any fixed dimension $d$. Here $\tau_d(n)$ denotes the time necessary to compute the minimum spanning tree of $n$ points in $\mathbb{R}^d$, which is known to be $O(n\log n)$ for $d=2$, $O\left((n\log n)^{4/3}\right)$ for $d=3$, and $O\left(n^{2-2/\left(\lceil d/2\rceil+1\right)+\phi}\right)$, with $\phi>0$ arbitrarily small, for $d>3$, see here. All the results hold not only for the Euclidean metric $L_2$ but also for any $L_{p}$ metric with $1\leq p\leq\infty$ as underlying metric. • Journals: [J4]Victor Alvarez, Karl Bringmann, Saurabh Ray, Raimund Seidel. Counting Triangulations and other Crossing-Free Structures Approximately. Computational Geometry, Theory and Applications. To appear 2014. Special Issue on the 25th Canadian Conference on Computational Geometry (CCCG '13). Abstract:We consider the problem of counting straight-edge triangulations of a given set~$P$ of $n$ points in the plane. Until very recently it was not known whether the exact number of triangulations of $P$ can be computed asymptotically faster than by enumerating all triangulations. 
We now know that the number of triangulations of $P$ can be computed in $O^{*}(2^{n})$ time, see here, which is less than the lower bound of $\Omega(2.43^{n})$ on the number of triangulations of any point set. In this paper we address the question of whether one can approximately count triangulations in sub-exponential time. We present an algorithm with sub-exponential running time and sub-exponential approximation ratio, that is, denoting by $\Lambda$ the output of our algorithm and by $c^{n}$ the exact number of triangulations of $P$, for some positive constant $c$, we prove that $c^{n}\leq\Lambda\leq c^{n}\cdot 2^{o(n)}$. This is the first algorithm that in sub-exponential time computes a $(1+o(1))$-approximation of the base of the number of triangulations, more precisely, $c\leq\Lambda^{\frac{1}{n}}\leq(1 + o(1))c$. Our algorithm can be adapted to approximately count other crossing-free structures on $P$, keeping the quality of approximation and running time intact. In this paper we show how to do this for matchings and spanning trees. [J3]Victor Alvarez. Parity-constrained Triangulations using Steiner points. Graphs and Combinatorics. December 2013. DOI=10.1007/s00373-013-1389-6. Abstract: Let $P\subset\mathbb{R}^{2}$ be a set of $n$ points, of which $k$ lie in the interior of the convex hull $\text{CH}(P)$ of $P$. Let us call a triangulation $T$ of $P$ even (odd) if and only if all its vertices have even (odd) degree, and pseudo-even (pseudo-odd) if at least the $k$ interior vertices have even (odd) degree. On the one hand, triangulations having all its interior vertices of even degree have one nice property; their vertices can be 3-colored, see here for example. On the other hand, odd triangulations have recently found an application in the colored version of the classic "Happy Ending Problem" of Erdős and Szekeres, see here. In this paper we show that there are sets of points that admit neither pseudo-even nor pseudo-odd triangulations. 
Nevertheless, we show how to construct a set of Steiner points $S = S(P)$ of size at most $\frac{k}{3} + c$, where $c$ is a positive constant, such that a pseudo-even (pseudo-odd) triangulation can be constructed on $P\cup S$. Moreover, we also show that even (odd) triangulations can always be constructed using at most $\frac{n}{3} + c$ Steiner points, where again $c$ is a positive constant. Our constructions have the property that every Steiner point lies in the interior of $\text{CH}(P)$. [J2]Victor Alvarez, Raimund Seidel. Approximating the minimum weight spanning tree of a set of points in the Hausdorff metric. Computational Geometry, Theory and Applications 43:2, pages 94-98. February 2010. Special Issue on the 24th European Workshop on Computational Geometry (EuroCG '08). DOI=10.1016/j.comgeo.2009.04.005. Abstract: We study the problem of approximating $\mbox{MST}(P)$, the Euclidean minimum spanning tree of a set $P$ of $n$ points in $[0,1]^d$, by a spanning tree of some subset $Q\subset P$. We show that if the weight of $\mbox{MST}(P)$ is to be approximated, then in general $Q$ must be large. If the shape of $\mbox{MST}(P)$ is to be approximated, then this is always possible with a small $Q$. More specifically, for any $0<\varepsilon<1$ we prove: 1. There are sets $P\subset [0, 1]^{d}$ of arbitrarily large size $n$ with the property that any subset $Q'\subset P$ that admits a spanning tree $T'$ with $\bigl| \left|T'\right|-\left|\mbox{MST}(P)\right|\bigr| < \varepsilon\cdot\left|\mbox{MST}(P)\right|$ must have size at least $\Omega\left({n}^{1 - 1/d}\right)$. Here $\left|T\right|$ denotes the weight, i.e. the sum of the edge lengths of tree $T$. 2. For any $P\subset [0,1]^d$ of size $n$ there exists a subset $Q\subseteq P$ of size $O\left(1/\varepsilon^{d}\right)$ that admits a spanning tree $T$ that is $\varepsilon$-close to $\mbox{MST}(P)$ in terms of Hausdorff distance (which measures shape dissimilarity). 3.
This set $Q$ and this spanning tree $T$ can be computed in time $O\left(\tau_{d,p}(n) + 1/\varepsilon^d\log\left(1/\varepsilon^d\right)\right)$ for any fixed dimension $d$. Here $\tau_{d,p}(n)$ denotes the time necessary to compute the minimum weight spanning tree of $n$ points in $\mathbb{R}^d$ under any fixed metric $L_p,\ 1\leq p\leq\infty$, where $\tau_{2,p}(n) = O(n\log n)$, see here, $\tau_{3,2}(n) = O\left((n\log n)^{4/3}\right)$, and $\tau_{d, 2}(n) = O\left(n^{2-2/\left(\lceil d/2\rceil+1\right)+\phi}\right)$, with $\phi>0$ arbitrarily small, for $d>3$, see here. Also $\tau_{3,1}(n)$ and $\tau_{3,\infty}(n)$ are known to be $O(n\log n)$, see here. [J1]Victor Alvarez, Toshinori Sakai, Jorge Urrutia. Bichromatic Quadrangulations with Steiner Points. Graphs and Combinatorics 23:1, pages 85-98. February 2007. DOI=10.1007/s00373-007-0715-2. Abstract: Let $P$ be a $k$-colored point set in general position, $k\geq 2$. A family of quadrilaterals with disjoint interiors $Q_{1},\ldots, Q_{m}$ is called a quadrangulation of $P$ if $V(Q_{1})\cup\cdots\cup V(Q_{m}) = P$, the edges of all $Q_{i}$ join points with different colors, and $Q_{1}\cup\cdots\cup Q_{m} = Conv(P)$.
In this paper, we prove that any bichromatic point set $P = R\cup B$ where $|R| = |B| = n$ can be made quadrangulatable by adding at most $\left\lfloor\frac{n - 1}{3}\right\rfloor + \left\lfloor\frac{n}{2}\right\rfloor + 1$ Steiner points, and that $\frac{m}{3}$ Steiner points are occasionally necessary. To prove the latter, we also show that the convex hull of any monochromatic point set $P$ of $n$ elements can always be partitioned into a set $S = \{S_{1}, \ldots, S_{t}\}$ of star-shaped polygons with disjoint interiors, where $V(S_{1})\cup\cdots\cup V(S_{t}) = P$, and $t\leq\left\lfloor\frac{n-1}{3}\right\rfloor + 1$. For $n = 3k$ this bound is tight. Finally, we prove that there are 3-colored point sets that cannot be completed to 3-quadrangulatable point sets.

• Other manuscripts:

[M4] Victor Alvarez, Felix Martin Schuhknecht, Jens Dittrich, Stefan Richter. Main Memory Adaptive Indexing for Multi-core Systems. Computing Research Repository (CoRR), 2014. abs/1404.2034. This is the extended version of [C9].

[M3] Victor Alvarez, Karl Bringmann, Saurabh Ray, Raimund Seidel. Counting Triangulations and other Crossing-free Structures Approximately. Computing Research Repository (CoRR), 2013. abs/1404.0261. This is the extended version of [C8] and the unpolished version of [J4].

Abstract: We consider the problem of counting straight-edge triangulations of a given set $P$ of $n$ points in the plane. Until very recently it was not known whether the exact number of triangulations of $P$ can be computed asymptotically faster than by enumerating all triangulations. We now know that the number of triangulations of $P$ can be computed in $O^{*}(2^{n})$ time AS13, which is less than the lower bound of $\Omega(2.43^{n})$ on the number of triangulations of any point set SSW11. In this paper we address the question of whether one can approximately count triangulations in sub-exponential time.
We present an algorithm with sub-exponential running time and sub-exponential approximation ratio, that is, denoting by $\Lambda$ the output of our algorithm, and by $c^{n}$ the exact number of triangulations of $P$, for some positive constant $c$, we prove that $c^{n}\leq\Lambda\leq c^{n}\cdot 2^{o(n)}$. This is the first algorithm that in sub-exponential time computes a $(1+o(1))$-approximation of the base of the number of triangulations, more precisely, $c\leq\Lambda^{\frac{1}{n}}\leq(1 + o(1))c$. Our algorithm can be adapted to approximately count other crossing-free structures on $P$, keeping the quality of approximation and running time intact. In this paper we show how to do this for matchings and spanning trees. [M2]Victor Alvarez, Karl Bringmann, Radu Curticapean, Saurabh Ray. Counting Triangulations and other Crossing-free Structures via Onion Layers. Computing Research Repository (CoRR), 2013. abs/1312.4628. This is the extended version of [C5] and it is currently under review at a journal. Abstract: Let $P$ be a set of $n$ points in the plane. A crossing-free structure on $P$ is a straight-edge planar graph with vertex set in $P$. Examples of crossing-free structures include triangulations of $P$, and spanning cycles of $P$, also known as polygonalizations of $P$, among others. There has been a large amount of research trying to bound the number of such structures. In particular, bounding the number of triangulations spanned by $P$ has received considerable attention. It is currently known that every set of $n$ points has at most $O(30^{n})$ and at least $\Omega(2.43^{n})$ triangulations. However, much less is known about the algorithmic problem of counting crossing-free structures of a given set $P$. For example, no algorithm for counting triangulations is known that, on all instances, performs faster than enumerating all triangulations. 
In this paper we develop a general technique for computing the number of crossing-free structures of an input set $P$. We apply the technique to obtain algorithms for computing the number of triangulations and spanning cycles of $P$. The running time of our algorithms is upper bounded by $n^{O(k)}$, where $k$ is the number of onion layers of $P$. In particular, we show that our algorithm for counting triangulations is not slower than $O(3.1414^{n})$. Given that there are several well-studied configurations of points with at least $\Omega(3.464^{n})$ triangulations, and some even with $\Omega(8^{n})$ triangulations, our algorithm is the first to asymptotically outperform any enumeration algorithm for such instances. In fact, it is widely believed that any set of $n$ points must have at least $\Omega(3.464^{n})$ triangulations. If this is true, then our algorithm is strictly sub-linear in the number of triangulations counted. We show experiments comparing our algorithm for counting triangulations with the algorithm presented here, which is supposed to be very fast in practice. We also show that our techniques are general enough to solve the restricted triangulation counting problem, which we prove to be $W[2]$-hard in the parameter $k$. This implies a "no free lunch" result: In order to be fixed-parameter tractable, our general algorithm must rely on additional properties that are specific to the considered class of structures. [M1]Victor Alvarez, Karl Bringmann, Saurabh Ray. A Simple Sweep Line Algorithm for Counting Triangulations and Pseudo-triangulations. Computing Research Repository (CoRR), 2013. abs/1312.3188. Currently under review at a journal. Abstract: Let $P\subset\mathbb{R}^{2}$ be a set of $n$ points. In A99 and ARSS03 an algorithm for counting triangulations and pseudo-triangulations of $P$, respectively, is shown. 
Both algorithms are based on the divide-and-conquer paradigm, and both work by finding sub-structures on triangulations and pseudo-triangulations that allow the problems to be split. These sub-structures are called triangulation paths for triangulations, or T-paths for short, and zig-zag paths for pseudo-triangulations, or PT-paths for short. Those two algorithms have turned out to be very difficult to analyze, to the point that no good analysis of their running time has been presented so far. The interesting thing about those algorithms, besides their simplicity, is that they experimentally indicate that counting can be done significantly faster than enumeration. In this paper we show two new algorithms, one to compute the number of triangulations of $P$, and one to compute the number of pseudo-triangulations of $P$. They are also based on T-paths and PT-paths respectively, but use the sweep line paradigm and not divide-and-conquer. The important thing about our algorithms is that they admit a good analysis of their running times. We will show that our algorithms run in time $O^{*}(t(P))$ and $O^{*}(pt(P))$ respectively, where $t(P)$ and $pt(P)$ is the largest number of T-paths and PT-paths, respectively, that the algorithms encounter during their execution. Moreover, we show that $t(P) = O^{*}(9^{n})$, which is the first non-trivial bound on $t(P)$ to be known. While the algorithm for counting triangulations of ABCR12 is faster in the worst case, $O^{*}\left(3.1414^{n}\right)$, than our algorithm, $O^{*}\left(9^{n}\right)$, there are sets of points where the number of T-paths is $O(2^{n})$. In such cases our algorithm may be faster. Furthermore, it is not clear whether the algorithm presented in ABCR12 can be modified to count pseudo-triangulations so that its running time remains $O^{*}(c^n)$ for some small constant $c\in\mathbb{R}$. Therefore, for counting pseudo-triangulations (and possibly other similar structures) our approach seems better. 
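The onion-layer parameter $k$ that governs the $n^{O(k)}$ bound in [M2] above is computed by repeatedly peeling the convex hull off the point set. The following minimal self-contained sketch (Andrew's monotone chain for each hull) only illustrates the parameter, not the counting algorithm itself; collinear points on a hull boundary are left to deeper layers here.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in counterclockwise order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

def onion_layers(pts):
    # Peel hulls until no points remain; len(result) is the onion depth k
    layers, remaining = [], list(pts)
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        remaining = [p for p in remaining if p not in set(hull)]
    return layers

# A square with its centre has 2 onion layers.
pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
assert len(onion_layers(pts)) == 2
```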
### Awards

• Best paper award at the 29th Symposium on Computational Geometry (SoCG '13), Rio de Janeiro, Brazil.
https://tex.stackexchange.com/questions/259101/change-font-size-inside-only-one-line-of-document
# Change font size inside only one line of document

My article is written using 11 pt font, but I would like to write a line in bold using a size which is bigger than 11. Here is the MWE:

\documentclass{article}
%
\begin{document}
\par ~ \par \noindent
% I want to enlarge only this statement
\textbf{Analisi fluidodinamica su Stramazzo Generico}
% this is the text
Si immagini che un fluido perfetto e incomprimibile...
%
\end{document}

• is the line simply part of regular text, or do you know its start and end ahead of time? if it's already on a line by itself, you could use {\large\bfseries ...} to do the job. – barbara beeton Aug 6 '15 at 17:46
• I'm thinking you'll need the soul package here??? – 1010011010 Aug 6 '15 at 19:27
• I have just reported some code in the question – Cybex Aug 6 '15 at 20:07
• It's not really clear why the part should be in a larger font. Are you perhaps trying to emulate \subsection or something similar? – egreg Aug 6 '15 at 20:12
• @Cybex \subsection* – egreg Aug 6 '15 at 20:44

If you want to emulate \subsection, but without a number, just do

\subsection*{Analisi fluidodinamica su Stramazzo Generico}

If you just have numbered sections, but not subsections, add

\setcounter{secnumdepth}{1}
\subsection{Analisi fluidodinamica su Stramazzo Generico}

with the advantage that the subsections can go directly to the table of contents (provided tocdepth is set to 2 or more).
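Folding the `\subsection*` suggestion back into the asker's MWE, a complete file might look like the sketch below; the commented-out line is the alternative local-group approach for a one-off line without sectioning semantics.

```latex
\documentclass[11pt]{article}

\begin{document}

% Unnumbered, bold, larger, with sectioning spacing above and below:
\subsection*{Analisi fluidodinamica su Stramazzo Generico}

Si immagini che un fluido perfetto e incomprimibile...

% One-off alternative without sectioning semantics:
% {\noindent\large\bfseries Analisi fluidodinamica su Stramazzo Generico\par}

\end{document}
```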
https://en-academic.com/dic.nsf/enwiki/880177
# Fundamental solution

In mathematics, a fundamental solution for a linear partial differential operator $L$ is a formulation in the language of distribution theory of the older idea of a Green's function. In terms of the Dirac delta function $\delta(x)$, a fundamental solution $f$ is the solution of the inhomogeneous equation

$$Lf = \delta(x).$$

Here $f$ is a priori only assumed to be a Schwartz distribution. This concept was long known for the Laplacian in two and three dimensions. It was investigated for all dimensions for the Laplacian by Marcel Riesz. The existence of a fundamental solution for any operator with constant coefficients (the most important case, directly linked to the possibility of using convolution to solve an arbitrary right-hand side) was shown by Malgrange and Ehrenpreis.

## Example

Consider the differential equation $Lf = \sin(x)$ with

$$L = \frac{\partial^2}{\partial x^2}.$$

The fundamental solutions can be obtained by solving $LF = \delta(x)$, explicitly

$$\frac{\partial^2}{\partial x^2} F(x) = \delta(x).$$

Since for the Heaviside function $H$ we have $H'(x) = \delta(x)$, there is a solution

$$F'(x) = H(x) + C.$$

Here $C$ is an arbitrary constant. For convenience, set $C = -1/2$.
After integrating and taking the integration constant as zero, we get

$$F(x) = \frac{1}{2}\,|x|.$$

## Fundamental solutions for some partial differential equations

### Laplace equation

$$[-\nabla^2]\,\Phi(\mathbf{x},\mathbf{x}') = \delta(\mathbf{x}-\mathbf{x}')$$

The fundamental solutions in two and three dimensions are

$$\Phi_{2D}(\mathbf{x},\mathbf{x}') = -\frac{1}{2\pi}\ln|\mathbf{x}-\mathbf{x}'|, \qquad \Phi_{3D}(\mathbf{x},\mathbf{x}') = \frac{1}{4\pi\,|\mathbf{x}-\mathbf{x}'|}$$

### Helmholtz equation

Here the parameter $k$ is real and the fundamental solution is a modified Bessel function:

$$[-\nabla^2 + k^2]\,\Phi(\mathbf{x},\mathbf{x}') = \delta(\mathbf{x}-\mathbf{x}')$$

The two- and three-dimensional Helmholtz equations have the fundamental solutions

$$\Phi_{2D}(\mathbf{x},\mathbf{x}') = \frac{1}{2\pi}\,K_0(k\,|\mathbf{x}-\mathbf{x}'|), \qquad \Phi_{3D}(\mathbf{x},\mathbf{x}') = \frac{\exp(-k\,|\mathbf{x}-\mathbf{x}'|)}{4\pi\,|\mathbf{x}-\mathbf{x}'|}$$

### Biharmonic equation

$$[-\nabla^4]\,\Phi(\mathbf{x},\mathbf{x}') = \delta(\mathbf{x}-\mathbf{x}')$$

The biharmonic equation has the two-dimensional fundamental solution

$$\Phi_{2D}(\mathbf{x},\mathbf{x}') = -\frac{|\mathbf{x}-\mathbf{x}'|^2\,\ln|\mathbf{x}-\mathbf{x}'|}{8\pi}$$

## Motivation

The motivation
to find the fundamental solution is that once it is found, the desired solution of the original equation is easy to obtain; the process is achieved by convolution. Fundamental solutions also play an important role in the numerical solution of partial differential equations by the boundary element method.

## Application to the example

Consider the operator $L$ mentioned in the example:

$$\frac{\partial^2}{\partial x^2} f(x) = \sin(x)$$

Since we have found the fundamental solution, the solution of the original equation is obtained by convolution:

$$f(x) = \int_{-\infty}^{\infty} \frac{1}{2}\,|x - y|\,\sin(y)\,dy$$

## Proof that the convolution is the desired solution

Denote the convolution operation as $f * g$. Say we are trying to find the solution of $Lf = g(x)$. When applying the differential operator $L$ to the convolution, it is known that

$$L(f*g) = (Lf)*g,$$

provided $L$ has constant coefficients. If $F$ is the fundamental solution, the right-hand side reduces to $\delta * g$. It is straightforward to verify that this is in fact $g(x)$ (in other words, the delta function acts as the identity element for convolution). Summing up,

$$L(F*g) = (LF)*g = \delta(x)*g(x) = \int_{-\infty}^{\infty} \delta(x-y)\,g(y)\,dy = g(x).$$

Therefore, if $F$ is the fundamental solution, the convolution $F*g$ is a solution of $Lf = g(x)$.

See also: parametrix.

Wikimedia Foundation. 2010.
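The convolution recipe above is easy to sanity-check numerically. A minimal sketch, assuming NumPy is available; a Gaussian right-hand side $g$ stands in for $\sin(x)$, whose slow decay makes naive quadrature of the convolution integral ill-behaved.

```python
import numpy as np

# Fundamental solution of L = d^2/dx^2 is F(x) = |x|/2.
# For a rapidly decaying right-hand side g, the convolution
# f(x) = (F * g)(x) should satisfy f'' = g.

def g(y):
    return np.exp(-y ** 2)

y = np.linspace(-20.0, 20.0, 40001)  # quadrature grid, spacing 1e-3
dy = y[1] - y[0]

def f(x):
    # Riemann-sum approximation of the convolution integral
    return np.sum(0.5 * np.abs(x - y) * g(y)) * dy

# Check f'' = g by central differences at a few test points.
h = 1e-3
for x in (-1.0, 0.0, 0.5, 1.0):
    f2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2
    assert abs(f2 - g(x)) < 1e-3
```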
https://www.physicsforums.com/threads/free-falling-object-violating-conservation-of-energy.753078/
# Free falling object violating conservation of energy?

1. May 10, 2014

### Nanosuit

1. The problem statement, all variables and given/known data

This is elementary level stuff and I am pretty much past this, and yet I can't seem to find a suitable way of explaining it. I was thinking about the potential energy of a particle of mass m falling freely under gravity (ignoring air resistance, again, beginner stuff :P) from a point A 10 m above a point B which is on horizontal ground, vertically below A. GPE at A is 10mg. At ground level all GPE has become KE, so KE = 10mg. That was scenario 1, but the 2nd scenario is a bit different; let's say that the instant the particle reaches point B (just before actually touching but theoretically there - sounds weird I admit :/) the ground (or in this case, point B) drops by 10 meters. Now, GPE at A is again 10mg (remember the ground was there when it was first let go), and again all GPE becomes KE, so KE = 10mg; but since it drops a further 10 meters, KE actually becomes 20mg!

2. Relevant equations

KE = (1/2)mv^2
PE = mgh

3. The attempt at a solution

I tried saying that it all depends on the point from which we take the height as standard and that this isn't always at sea level. I have even included a diagram to help you better understand the situation if I happen to sound confusing.

2. May 10, 2014

### paisiello2

When comparing two different cases for conservation of energy you always have to use the same datum points. It looks like you moved point B.

3. May 10, 2014

### dauto

Kinetic energy will be K = 20mg and potential energy will be U = -10mg for a total energy of E = 10mg

4. May 10, 2014

### flatmaster

I think you may be thinking that potential energy has an absolute scale and that there is some place where height = 0. The exact value for the height isn't important. It is the change in height that is important. Imagine you were to perform a similar experiment in a very tall building with many floors.
If you call the ground h = 0, the ball may actually fall from h = 174 m to h = 164 m. It doesn't matter what the actual heights were, it's the change in height that was important.

5. May 10, 2014

### goraemon

As others have mentioned above, potential energy can be less than zero - it just depends on what coordinate system you use. Kinetic energy, however, can never be less than zero (because speed, as a scalar quantity, can never be less than zero).

6. May 10, 2014

### haruspex

Or perhaps because the speed gets squared to calculate KE?

7. May 11, 2014

### Nanosuit

I always thought GPE was absolute since GPE is a scalar... I never really thought about it that way, silly me :P Thanks a lot for the replies guys :D

8. May 11, 2014

### ian_dsouza

Yeah. I think we call it "potential" energy because there is a potential for the associated force (in this case, gravitational force) to do work. When you drop the particle, it falls through a distance - Work = Force X Distance, for a constant force (gravity is almost constant through the distance the particle travels in this case). We usually measure the height from the ground because we assume that the particle does not fall below ground level and thus the GPE calculated with this height is an accurate representation of the work that can be derived from the particle - in this case it shows up as KE when the particle reaches the ground. For eg, that KE is used by hydroelectric paddles to produce electricity, with some loss in efficiency of conversion. Strictly speaking the GPE is not zero either at ground level or 10 m below it. I think it is zero at the center of the Earth - assuming the Earth and the particle are the only two objects in the universe. If you'd place the particle at the center of the Earth, it would just sit there. Hope this makes sense! Feel free to post constructive criticism.

9. May 11, 2014

Not center of the Earth. Center of mass of the Earth.

10.
May 11, 2014

### Nathanael

And it would be zero at an "infinite distance"! (With the same assumption.) Zero and infinity often seem to be related

11. May 11, 2014

### haruspex

If you're making that distinction to allow for arbitrary density distributions, that won't do it either. A system consisting of a particle mass 10 kg at x = -1 and a 1 kg particle at x = 10 has a mass centre at x = 0, but that won't be the lowest potential position for a test particle.

12. May 11, 2014

I assumed that the Earth and the particle were the only two objects in the Universe.

13. May 11, 2014

### ian_dsouza

Where would you say is the point of lowest potential in this case and how do you arrive at the conclusion?

14. May 11, 2014

### voko

You need to understand that the law that says $mgh$ is only an approximation of the true law. It becomes inaccurate as you go to any significant distance above or below the surface of the Earth. It is definitely wrong near the centre of the Earth, no matter how the "centre of the Earth" is defined.

15. May 11, 2014

### voko

You would need to start with the correct equation for potential energy of a test particle with respect to a massive particle; then recall how you obtain potential energy in a compound system.

16. May 11, 2014

Why? Can you elaborate?

17. May 11, 2014

### voko

The approximate law assumes that the force of gravity remains the same everywhere. That is why it is simply linear in the distance. But as you go both up or down from the surface of the Earth, the force of gravity decreases and becomes zero at both the centre of the Earth and infinitely far away from it. The above assumes that the Earth is perfectly symmetrical and its density also is centrally-symmetric. It is not really the case, so the above is also an approximation, albeit a much more accurate one.

18. May 11, 2014

So Gravitational potential energy at the centre is $0\,\mathrm{J}$?

19. May 11, 2014

### voko

Not necessarily.
What we can say is that because the force of gravity is zero at the centre, the potential energy has a minimum or a maximum there. Analysing this further, we can conclude it has a minimum there. Whether this minimum is zero or not is entirely up to us, because we can always add any constant to potential energy, which won't change anything physically.

20. May 11, 2014

### haruspex

The two particle example is to illustrate that when mass is arbitrarily distributed the lowest potential point in the field it generates is usually not at its mass centre. This applies also to a sphere with arbitrary density distribution. Consider a solid sphere radius r with uniform density, except for a spherical inclusion radius s at distance a from its centre, O (centre to centre distance). Let the larger sphere have mass M, and the inclusion have 'extra' mass m. Consider a point distance x from O towards the centre of the inclusion, but not inside the inclusion (say). The least potential will be where the two attractions balance: $\frac{Mx}{r^3} = \frac{m}{(a-x)^2}$. But the common mass centre is given by $ma = (M+m)y$, where y is its distance from O.
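The datum-consistency point made in the replies can be illustrated with a few lines of arithmetic. A minimal sketch, assuming g = 9.8 m/s^2 and a 1 kg particle; placing the datum at the lower ground (10 m below point B) is just one illustrative choice.

```python
# One fixed datum (the lower ground, 10 m below point B) is used for the
# whole problem. With a consistent reference height, the total energy
# E = KE + PE is a single number throughout the fall; only the split changes.

g = 9.8   # m/s^2
m = 1.0   # kg (arbitrary; everything below scales with m)

# Heights above the datum: A at 20 m, B at 10 m, the dropped floor at 0 m.
h_A, h_B, h_floor = 20.0, 10.0, 0.0

E_total = m * g * h_A            # released from rest at A: all PE

def ke_at(h):
    return E_total - m * g * h   # conservation: KE = E - PE

assert abs(ke_at(h_B) - 10.0 * m * g) < 1e-9      # "10mg" at B
assert abs(ke_at(h_floor) - 20.0 * m * g) < 1e-9  # "20mg" at the lower floor

# With the datum at B instead, PE at the lower floor is -10mg and the KE
# is still 20mg, so E = KE + PE = 10mg either way (as in reply #3).
```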
https://www.mathjobrumors.com/thread/4636/page/1
# Nick Rozenblyum vs Sam Raskin

1. Top Mathematician mvba

Who's the better young Gaitsgory-style geometric representation theorist?

2. Top Mathematician xedt

3. Top Mathematician mvng

Raskin has actually accomplished something worthwhile without Gaitsgory.

4. Top Mathematician kdmn

Is Gaitsgory's math as good as his tango?

5. Top Mathematician ihnb

They are both quite good and they work together, so it's a silly contrast. Like asking if Mario or Luigi is better.

6. Top Mathematician qqke

> They are both quite good and they work together, so it's a silly contrast. Like asking if Mario or Luigi is better.

Obviously Mario

7. Top Mathematician vkkh

> Obviously Mario

Chub chaser?

8. Top Mathematician hbbk

They are both strong and different. Sam is more focused on geometric Langlands, Nick works on a lot of other things outside of geometric Langlands (shifted symplectic geometry, quantization, factorization homology, ...).

9. Top Mathematician yeon

Nick Rozenblyum is the first person in history to become deadwood before tenure. Amazing!

10. Top Mathematician xbxt

> Nick Rozenblyum is the first person in history to become deadwood before tenure. Amazing!

This is false for two reasons: first, Nick is NOT deadwood (quite the opposite); second: Teruyoshi

11. Top Mathematician mvba

> Nick Rozenblyum is the first person in history to become deadwood before tenure. Amazing!
> This is false for two reasons: first, Nick is NOT deadwood (quite the opposite); second: Teruyoshi

teruyoshi yoshida left math and i'm not sure if he was tenured at cambridge, he was a lecturer.

12. Top Mathematician jbgt

> teruyoshi yoshida left math and i'm not sure if he was tenured at cambridge, he was a lecturer.

He was tenure-track but not tenured and a deadwood, therefore providing a counterexample to the post above. QED
http://math.stackexchange.com/questions/704914/does-distance-in-hyperbolic-space-satisfy-such-properties-which-euclidean-distan
# Does distance in hyperbolic space satisfy the properties which Euclidean distance has?

In Euclidean space $E^n$, the distance between two points $x, y$ is just $|x-y|$, and for each fixed $x_0$, the image of $y\to\nabla_x|x_0-y|$ is $S^{n-1}$, so it satisfies:

(1) $\mathrm{rank}\left(\frac{\partial^2}{\partial x\,\partial y}d_{\mathbb{R}^n}(x,y)\right)=n-1$;

(2) the image $\nabla_x d_{\mathbb{R}^n}(x,y)\subset T^*_x\mathbb{R}^n$ has non-vanishing Gaussian curvature.

First, I want to know the explicit expression of $d_{\mathbb{H}^n}(x, y)$ in the hyperboloid model, and to see if it also satisfies the above two properties.

• What is $\nabla_x$? How do you define it in the hyperbolic case? What notion of Hessian do you use in the hyperbolic case? –  studiosus Mar 9 at 4:43
• If $\langle x, y\rangle$ denotes the Lorentz inner product, the hyperbolic distance between $x$ and $y$ is $\cosh^{-1}|\langle x, y\rangle|$. –  user86418 Mar 10 at 1:38
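The formula in the last comment is easy to experiment with numerically. Here is a small sketch (mine, not from the thread) of the hyperboloid-model distance, using the convention $\langle x,y\rangle = -x_0y_0 + x_1y_1 + \dots + x_ny_n$:

```python
import math

def lorentz(x, y):
    # Lorentz inner product: <x, y> = -x0*y0 + x1*y1 + ... + xn*yn
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def hyperbolic_distance(x, y):
    # For points on the hyperboloid <x, x> = -1 (with x0 > 0),
    # d(x, y) = arccosh(|<x, y>|), as in the comment above.
    return math.acosh(abs(lorentz(x, y)))

# Sanity check: the point reached by moving distance t from (1, 0)
# along the hyperbola is (cosh t, sinh t), so d should come out to t.
t = 2.0
x = (1.0, 0.0)
y = (math.cosh(t), math.sinh(t))
print(hyperbolic_distance(x, y))  # ~2.0
```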
2014-08-22 04:29:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980574905872345, "perplexity": 275.1778091287273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822560.65/warc/CC-MAIN-20140820021342-00361-ip-10-180-136-8.ec2.internal.warc.gz"}
https://zbmath.org/?q=in%3A221008
## Found 15 Documents (Results 1–15)

### Inverse problems of finding boundary condition in the theory of propagation of nonstationary waves. II. (Russian) Zbl 0365.35046
MSC: 35R30 35L05 35L25

### Inverse problems of finding boundary condition in the theory of propagation of nonstationary waves. I. (Russian) Zbl 0365.35045
MSC: 35R30 35L20 35L05

### Focusing of the WKB-solutions of the equation $[\Delta+k^2n^2(x)]u=0$, $k\rightarrow\infty$. (Russian) Zbl 0352.35034
MSC: 35J05 35J10 35B30

### On the calculation of the function $G_M(\gamma)$. (Russian) Zbl 0349.65012
MSC: 65D20 65A05 76-04

### Short-wave asymptotic for the current in the problem of diffraction by nonplane screens. (Russian) Zbl 0349.35066
MSC: 35J10 35C05 35B40

### Asymptotical properties of solutions of some three-dimensional wave problems. (Russian) Zbl 0349.35020
MSC: 35J10 35B40

### To the investigation of Green function's asymptotic of waveguide propagation near the conducting sphere surface problem. (Russian) Zbl 0349.35008
MSC: 35B40 35J10 35C05

### On resonance series separating onto the "nonphysical" sheet. (Russian) Zbl 0348.35063
MSC: 35L20 35B45 35P25

### On the virtual state of Schrödinger equation. (Russian) Zbl 0347.35028
MSC: 35J10 35R20 35P99

### Coordinate asymptotics for the Schrödinger equation with a rapidly oscillating potential. (Russian) Zbl 0346.35038
MSC: 35J10 35B99

### An estimate of a wave field in a shadow zone in the case of diffraction of spherical wave by infinitely smooth surface. (Russian) Zbl 0346.35037
MSC: 35J10 35B05

### High-frequency point source of oscillation in the neighbourhood of a concave mirror. (Russian) Zbl 0345.35030
MSC: 35J10 35B05

### On some special solutions for Helmholtz equation. (Russian) Zbl 0344.35021
MSC: 35J05 35C05
2022-08-15 19:55:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.786381185054779, "perplexity": 7140.634637373976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00483.warc.gz"}
https://repository.library.brown.edu/studio/item/bdr:733380/
# Problems at the Interface of Probability and Convex Geometry: Random Projections and Constrained Processes ## Description Abstract: Convex sets in high-dimensional linear spaces are classical objects of study that have long enjoyed rich connections with probability theory. Interest in these connections has been further fueled by modern applications in statistics and engineering. In this thesis, we study two problems at the interface of convex geometry and probability theory: first, we consider the large deviation behavior of random projections of high-dimensional probability measures, as a complement to the central limit theorem for log-concave probability measures; secondly, we define a novel class of stochastic processes that includes those constrained to lie in a convex domain, and expose the crucial role played by the geometry induced by an associated norm on Euclidean space. In the large deviation setting, we first establish a large deviation principle (LDP) for one-dimensional projections of $n$-dimensional product measures as $n$ goes to infinity, and demonstrate how, given this geometric perspective, the classical Cramer's theorem for sums of independent and identically distributed random variables is in a sense "atypical". We then go beyond product measures and establish an LDP for the sequence of one-dimensional projections of random vectors drawn uniformly from an $n$-dimensional $\ell^p$ ball, and observe stark changes in large deviation behavior as $p$ varies. We consider both "quenched" LDPs (where we fix a particular sequence of projection directions) and "annealed" LDPs (where we incorporate randomness of the projection directions as contributors to large deviations), and establish a variational principle that relates the associated rate functions. 
Along the way, as a result of independent interest, we strengthen an existing LDP for the empirical measure of coordinates drawn from an $\ell^p$ sphere, and establish a related conditional limit theorem. As a final contribution in the large deviation setting, we extend the aforementioned one-dimensional LDP to $k_n$-dimensional projections of random vectors, for $k_n > 1$ (including the case where the lower dimension $k_n$ grows with the ambient dimension $n$). Furthermore, we generalize beyond $\ell^p$ balls to a larger class of sequences of random vectors satisfying a particular norm condition. In the second part of this thesis, we show that stochastic processes with a wide range of apparent degeneracies (such as constraint within a domain, singular drift, or discontinuous dynamics) may fall within a common framework. The formulation of our common framework relies on so-called accretive operators, which are defined with respect to a normed space. One of our primary goals is to expose the crucial role played by the geometry of the associated normed space. Notes: Thesis (Ph. D.)--Brown University, 2017 ## Access Conditions Rights In Copyright Restrictions on Use Collection is open for research. ## Citation Kim, Steven Soon, "Problems at the Interface of Probability and Convex Geometry: Random Projections and Constrained Processes" (2017). Applied Mathematics Theses and Dissertations. Brown Digital Repository. Brown University Library. https://doi.org/10.7301/Z0R78CPV • ## Applied Mathematics Theses and Dissertations Theses and Dissertations for the Applied Mathematics department.
2021-08-04 02:25:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6873556971549988, "perplexity": 488.9299074575139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154500.32/warc/CC-MAIN-20210804013942-20210804043942-00294.warc.gz"}
http://computerscience.chemeketa.edu/cs160Reader/DataRepresentation/ImageRepresentation.html
# 4.10. Representing Images

Images can be represented in multiple ways - the most common being as a grid of little squares called pixels. In a very simple image that was only black and white, we could think of each pixel as being represented by a 0 (black) or 1 (white). Thus this image could be stored as a binary string of 36 bits: 111111101101111111101101100001111111. To successfully draw the image from that pattern, we would have to know to interpret that series of binary digits as 6 rows of 6 pixels (and not, say, 4 rows of 9 pixels), so real image file formats often include extra information like the dimensions of the image.

In images we often want to represent shades of gray or colors. To do so, each pixel can be assigned more than one bit. If each pixel is given a value consisting of 2 bits, we can have 4 colors:

• 00 black
• 01 dark gray
• 10 light gray
• 11 white

Using that scheme, we could make a shaded circle like this:

Representing that image takes 72 bits - a 6x6 grid of pixels, each of which requires 2 bits. Once again, to draw the image from the bits we would need to know the dimensions of the image; but now, we also would need to specify the number of bits used for each pixel. Those 72 bits could represent a 3x6 image where each pixel is represented with 4 bits (with 4 bits we could represent $$2^4 = 16$$ different shades of gray).

Important: A pattern of bits only has the meaning we assign to it. 32 bits could represent a 4x8 image of 1-bit pixels, or a 4x4 image of 2-bit pixels, or a sequence of 4 ASCII letters, or a really large binary number, or nearly anything else.

What about colors? Remember, bits only have the meaning we assign to them. We could interpret the 2 bits per pixel to mean:

• 00 red
• 01 orange
• 10 yellow
• 11 white

And end up with this image:

If we want more than 4 colors, we just need more than 2 bits. With 8 bits per pixel, we can represent $$2^8 = 256$$ different colors or shades of gray.
This is sufficient for a black and white photograph, but does not allow for subtle shades of color in a photograph. For full color images, 24 bits are usually used per pixel, allowing for $$2^{24} = 16,777,216$$ different colors.

Real images of course use a much greater number of pixels than we have seen here. For example, a 12-megapixel camera takes images that measure about 4000x3000 pixels. If each of those pixels is stored as a 24-bit value, that image would consist of 4000 x 3000 x 24 = 288,000,000 bits of information! That is 36,000,000 bytes, or approximately 34.3 MB. However, if you were to look at an image file produced by this camera, you would find it to be much smaller than 34 MB, even though the file stores extra information beyond the contents of each pixel (the dimensions of the image, how many bits per pixel, etc…). This is because the image has been compressed - most common image formats (gif, jpeg, png) include some form of compression to reduce the space needed to store their information… a topic we will learn more about later.

#### Self Check

How many bits would a 10x20 image with 8 different possible colors per pixel require? (Hint: how many bits are required to represent 8 different colors?)
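The encoding described above can be made concrete with a short script (a sketch in Python; the page itself does not include code). It interprets the 36-bit string as a 6x6 black-and-white image, then works out the storage arithmetic for the 12-megapixel example:

```python
bits = "111111101101111111101101100001111111"
width = 6  # we must know the dimensions to interpret the bits correctly
rows = [bits[i:i + width] for i in range(0, len(bits), width)]
for row in rows:
    # draw 1 (white) as '#' and 0 (black) as '.'
    print(row.replace("1", "#").replace("0", "."))

# Storage needed for an uncompressed 12-megapixel, 24-bit color image:
total_bits = 4000 * 3000 * 24   # bits of information
total_bytes = total_bits // 8   # 8 bits per byte
print(total_bits, total_bytes, round(total_bytes / (1024 * 1024), 1))
# 288000000 36000000 34.3
```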
2018-08-15 10:44:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49407219886779785, "perplexity": 750.4930733104021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210058.26/warc/CC-MAIN-20180815102653-20180815122653-00378.warc.gz"}
https://edoc.unibas.ch/49900/
# Quantization for an elliptic equation of order 2m with critical exponential non-linearity Martinazzi, Luca and Struwe, Michael. (2012) Quantization for an elliptic equation of order 2m with critical exponential non-linearity. Mathematische Zeitschrift, 270 (1-2). pp. 453-487. Full text not available from this repository. Official URL: http://edoc.unibas.ch/49900/ On a smoothly bounded domain ${\Omega\subset\mathbb{R}^{2m}}$ we consider a sequence of positive solutions ${u_k\stackrel{w}{\rightharpoondown}0}$ in H m (Ω) to the equation ${(-\Delta)^m u_k=\lambda_k u_k e^{mu_k^2}}$ subject to Dirichlet boundary conditions, where 0 < λ k → 0. Assuming that $$0 < \Lambda:=\lim_{k\to\infty}\int\limits_\Omega u_k(-\Delta)^m u_k dx < \infty,$$
2020-04-04 08:26:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8724416494369507, "perplexity": 1338.1477033288736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521574.59/warc/CC-MAIN-20200404073139-20200404103139-00422.warc.gz"}
https://mathoverflow.net/questions/198567/comparing-cardinalities-of-the-spectrum-of-two-masas-in-bh
# Comparing cardinalities of the spectrum of two masas in $B(H)$ If I imagine that (the self-adjoint part of) a C*-algebra $A$ represents the algebra of observables of some quantum system, then certain perspectives on algebraic quantum theory would ask me to imagine that each (maximal) commutative C*-subalgebra $C \subseteq A$ provides a (maximal) "classical snapshot" of this quantum system. Gelfand duality yields that $C \cong C(X)$ for the compact Hausdorff space $X = \mathrm{Spec}(C)$, so I would picture $X$ as a classical state space that (maximally) "approximates" the would-be quantum state space corresponding to $A$. I would like to know how different these spaces $X$ can be as the maximal commutative $*$-subalgebra $C \subseteq A$ varies. Specifically, can it happen that these have different cardinalities? I'm interested in the particular case $A = B(H)$ for a separable Hilbert space $H$ and two of its well-known masas: the continuous one $C \cong L^\infty[0,1] \subseteq A$ and discrete one $D \cong \ell^\infty(\mathbb{N}) \subseteq A$. Thus I ask: Q: Is there a bijection between the Gelfand spectra $\mathrm{Spec}(C)$ and $\mathrm{Spec}(D)$ for the continuous and discrete masas $C,D \subseteq B(H)$? It's possible to describe these spectra in more explicit terms using Boolean algebra. Note that each of these masas is an (A)W*-algebra. By a combination of Gelfand and Stone dualities (see section 2 of this paper for a bit more detail), the spectrum of a commutative AW*-algebra $K$ is the Stone space of the complete Boolean algebra $\mathrm{Proj}(K)$ of projections in $K$, whose points are the ultrafilters of $\mathrm{Proj}(K)$. The continuous masa $C \cong L^\infty[0,1]$ has $\mathrm{Proj}(C)$ isomorphic to the Boolean algebra of measurable subsets of $[0,1]$ modulo the null sets. I have just learned through the magic of Wikipedia that this is called the random algebra; I will denote it by $B$. 
The discrete masa $D \cong \ell^\infty(\mathbb{N})$ has $\mathrm{Proj}(D)$ isomorphic to the power set Boolean algebra $2^\mathbb{N}$. (Note that an ultrafilter on the Boolean algebra $2^\mathbb{N}$ is alternatively referred to as an ultrafilter on the set $\mathbb{N}$.) Thus my question is equivalent to:

Q': Is there a bijection between the sets of ultrafilters on the random algebra $B$ and the power set algebra $2^\mathbb{N}$?

I am aware that $\mathrm{Spec}(D)$ is homeomorphic to the Stone–Čech compactification $\beta\mathbb{N}$ of the discrete space $\mathbb{N}$, and that this space has various properties that depend on set-theoretic assumptions. Now that I know what the random algebra is called, I see that it bears a relationship to forcing. Thus I can imagine that the answer to my question could be independent of ZFC. Nevertheless, as I am not asking exactly what the cardinality of this spectrum is, but whether it is in bijection with some other (possibly complicated) spectrum, I have an ounce of hope that this can indeed be decided in ZFC. (By the way, the classification of the possible masas of $B(H)$ implies that, if the answer to my question is affirmative, then the spectra of all masas of $B(H)$ are in bijection with one another.)

• I don't understand your last statement. Isomorphic in what category? Feb 26 '15 at 19:31
• If I understand your question correctly, it does not really have anything to do with MASAs. You are merely comparing the Gelfand spectra of the two Banach algebras $L^\infty[0,1]$ and $\ell^\infty({\bf N})$ and asking if the two spectra have the same cardinality -- is that correct? Feb 26 '15 at 19:33
• @YemonChoi, I meant in the category of sets. I'll edit accordingly in a moment. And you're right about my question; but the only reason that I would dream to ask if these spectra have the same cardinality is that they both occur as masas of the same C*-algebra. Feb 26 '15 at 19:34

Yes.
The spectra of $\ell_\infty$ and $L_\infty$ have the same cardinality, namely $2^{\mathfrak{c}}$. Indeed, every infinite compact $F$-space (in particular, an extremally disconnected compact space such as the spectrum of $L_\infty$) contains a copy of $\beta \mathbb{N}$ (which happens to be the spectrum of $\ell_\infty$). This is 14N(5) in L. Gillman and M. Jerison, Rings of continuous functions, van Nostrand Reinhold, New York, 1960, and it is quite elementary. The Hausdorff–Pospíšil theorem implies that $|\beta\mathbb{N}|=2^{\mathfrak{c}}$. Thus, $2^{\mathfrak{c}}\leqslant |{\rm spec}\, L_\infty|$.

I claim that the cardinality of ${\rm spec}\, L_\infty$ cannot be bigger than $2^{\mathfrak{c}}$. Indeed, $L_\infty^*$ is a bidual of a separable Banach space, hence by Goldstine's theorem, it is separable in the weak*-topology. Every separable space has cardinality at most $2^{\mathfrak{c}}$. Thus $|{\rm spec}\, L_\infty|\leqslant |L_\infty^*|\leqslant 2^{\mathfrak{c}}.$

• Thank you! This answer is so nice, especially the use of (pre)duals, that I now suspect a similar argument could be made for the spectrum of masas of $B(H)$, independent of the dimension of $H$. Feb 26 '15 at 20:37
• @Manny Reyes, for non-separable spaces the situation is trickier, but I guess that the only types of masas are the following: $\ell_\infty(\lambda)$ and $L_\infty(\{0,1\}^\lambda)$ and certain $\ell_\infty$-sums of them ($\lambda$ is the dimension of the Hilbert space), but at the end of the day the spectra will have the same cardinality. Feb 26 '15 at 20:42
• I was thinking that one might be able to work even without resorting to an explicit description. If $C \subseteq B(H)$ is a masa, then the predual $C_*$ will be a homomorphic image of $B(H)_*$ (trace class operators), giving upper bounds on $C_*$ and then presumably on $C^* = (C_*)^{**}$.
If this upper bound coincides with $|\mathrm{Spec}(\ell^\infty(\dim(H)))| = 2^{2^{\dim(H)}}$, and if one can similarly embed $\beta \dim(H)$ into $\mathrm{Spec}(C)$, then we'd have effectively the same proof. Feb 26 '15 at 20:47
• No, in general you cannot embed $\beta \lambda$ into ${\rm spec}\, L_\infty(\{0,1\}^{\lambda})$. This fails for all uncountable $\lambda$. (You will find this in old papers of H. P. Rosenthal.) Feb 26 '15 at 20:51
• OK, that's no good. Can we get an injective $*$-homomorphism $\ell^\infty(\lambda) \hookrightarrow L^\infty([0,1]^\lambda)$, perhaps? (We could construct a normal $*$-homomorphism if there are $\lambda$-many orthogonal projections in the latter algebra that sum to 1.) This would give a surjection of spectra in the opposite direction. Feb 26 '15 at 21:01
2021-09-27 17:42:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9264653921127319, "perplexity": 208.17639522871187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058456.86/warc/CC-MAIN-20210927151238-20210927181238-00498.warc.gz"}
https://math.stackexchange.com/questions/3043405/let-a-n-and-b-n-be-convergent-sequences-when-is-the-sequence-a-1
# Let $\{a_n\}$ and $\{b_n\}$ be convergent sequences. When is the sequence $a_1, b_1, a_2, b_2, \dots$ convergent and what is its limit?

I'm currently trying to prove that the sequence $$(c_n) = (a_1, b_1, a_2, b_2, \dots)$$ converges only when $$\lim a_n = \lim b_n$$. I know that the sequence will not converge when $$\lim a_n \ne \lim b_n$$. So would I next have to show that $$|c_n-C|<\epsilon$$, and if so, how would I choose my epsilon?

• You don't choose the $\epsilon$. It is given. You want to find the $N$. What you want to do for this problem is to write what convergence in $a_n$ and $b_n$ gives you and then notice that we can write the distance from $c_n$ to $C$ in terms of what we already got from $a_n$ and $b_n$. Dec 17 '18 at 0:47
• Sorry, I meant $N$. Dec 17 '18 at 0:49
• I'm a little confused how I can rewrite $c_n$ and $C$ in terms of $a_n$ and $b_n$. Would it be something like $|(a_n+b_n)-(A+B)|=|c_n-C|$? Dec 17 '18 at 0:53

You know that there exists an $N_1$ and an $N_2$ such that the respective distances to the limit are less than $\epsilon$. Then you can take $$N:=\mathrm{max}(N_1,N_2)$$ and have the result.

• I understand that, but how does that help me prove that $c_n$ is convergent? Dec 17 '18 at 1:33
• Then you have that $|c_n-C|<\epsilon$ for $n>2N$, where $C$ denotes the limit of both sequences. Dec 17 '18 at 1:48
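Spelling out the hint in the answer as a short epsilon-N sketch (assuming, as in the comments, that both sequences converge to the same limit $C$):

```latex
% Given \epsilon > 0, convergence of the two sequences gives
\exists N_1 : n > N_1 \implies |a_n - C| < \epsilon, \qquad
\exists N_2 : n > N_2 \implies |b_n - C| < \epsilon.
% Set N = \max(N_1, N_2). By construction c_{2k-1} = a_k and c_{2k} = b_k,
% so every term c_n with n > 2N equals some a_k or b_k with k > N, hence
n > 2N \implies |c_n - C| < \epsilon,
% which is exactly the definition of c_n \to C.
```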
2021-09-21 20:52:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342907667160034, "perplexity": 76.27368366276798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057227.73/warc/CC-MAIN-20210921191451-20210921221451-00006.warc.gz"}
https://mikesmathpage.wordpress.com/tag/arithmetic/page/2/
## Revisiting Jacob Lurie's Breakthrough Prize lecture

Last night I asked my older son what topics were being covered in his math class at school. He said that they were talking about different kinds of numbers -> natural numbers, integers, rational numbers, and irrational numbers. I asked him if he thought it was important to learn about the different kinds of numbers, and he said that he thought it was but didn't know why.

I decided to share Jacob Lurie's Breakthrough Prize lecture with the boys this morning since he touches on the study of different kinds of number systems. The first 12 or so minutes of the lecture are accessible to kids:

Near the beginning of Lurie's talk he mentions that the equation $x^2 + x + 1 = y^3 - y$ has no integer solutions. I stopped the video here to see what the boys thought about this problem. It took them about 10 minutes to think it through, but eventually they got there. It was fun to watch. Here's part 1 of that discussion: and part 2:

The next problem that we discussed from the video was Lurie's reference that all primes of the form $4n + 1$ can be written as the sum of two squares. I checked that the boys understood the problem and then switched to a problem that would be easier for them to tackle -> no prime of the form $4n + 3$ can be written as the sum of two squares.

Finally, we discussed Lurie's question about whether numbers are real things or things that were made up by mathematicians. Then we wrapped up by looking at why 13 is not prime when you expand the integers to include complex numbers of the form $A + Bi$ where $A$ and $B$ are integers.

There aren't many accessible public lectures from mathematicians out there. I'm happy that part of Lurie's lecture is accessible to kids.
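A brute-force check of Lurie's claim makes a nice companion to the discussion. The parity argument in the comments below is my own sketch, not necessarily the one the boys found:

```python
# x^2 + x + 1 = x(x+1) + 1 is always odd, since x(x+1) is a product of
# two consecutive integers and hence even. y^3 - y = (y-1)y(y+1) is a
# product of three consecutive integers and hence even. An odd number
# never equals an even number, so there are no integer solutions.
for x in range(-100, 101):
    for y in range(-100, 101):
        assert x * x + x + 1 != y ** 3 - y
print("no integer solutions with |x|, |y| <= 100")
```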
It is nice to be able to use this lecture to help the boys understand a bit of history and a bit of why these different number systems are interesting to mathematicians.

## A second project from the Wrong but Useful podcast

Yesterday afternoon I was listening to the rest of the latest (as of August 31, 2017) Wrong but Useful podcast. That podcast is here: The Wrong but Useful podcast on iTunes

A little project we did from a "fun fact" mentioned in the first part of the podcast is here: Exploring a fun number fact I heard on Wrong but Useful

The second half of the podcast was a really interesting discussion of math education. One thing that caught my attention was comparing math education to music education and the idea of having students do "math recitals." Another part that caught my attention was a problem used mainly to see the work of the students rather than the specific answers. That problem is roughly as follows:

Find two numbers that multiply to be 1,000,000 but have the property that neither is a multiple of 10.

Here's how my younger son approached the problem – it was absolutely fascinating to me to see how he thought about it. Here's what my older son did. Much more in line with what I was expecting.

Fun little project – definitely check out the Wrong but Useful podcast if you like hearing about math and math education.

## Using Gary Rubinstein's "Russian Peasant" video with kids

Saw a neat tweet from Gary Rubinstein yesterday: This morning I thought it would be fun to look at the "Russian Peasant" multiplication video with the boys. Here's Rubinstein's video:

I had the boys watch the video twice and then we talked through an example. My older son went first. He had a fun description of the process: "It is like multiplying, but you aren't actually multiplying the numbers." Next my older son worked through a problem. This problem was the same as the first one but the numbers were reversed.
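For reference, the halving-and-doubling procedure from the videos can be sketched as follows (this is my own code, not Rubinstein's):

```python
def russian_peasant(a, b):
    """Multiply a * b by halving a (dropping remainders) and doubling b."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # keep the rows where the halved column is odd
            total += b
        a //= 2
        b *= 2
    return total

print(russian_peasant(24, 9), russian_peasant(9, 24))  # 216 216
```

The odd steps pick out exactly the binary digits of the first input, which is also one way to see why swapping the inputs gives the same answer.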
It isn’t at all obvious that the “Russian Peasant” process is commutative when you see it for the first time, so I thought it would be nice to check one example: Next we moved to discussing why the process produces the correct answer. My older son had a nice idea -> let’s see what happens with powers of 2. The last video looking at multiplication with a power of 2 gave the kids a glimpse of why the algorithm worked. In this video they looked at an example not involving powers of 2 (24 x 9) and figured out the main idea of the “Russian Peasant” multiplication process: This was a really great project with the boys. It’ll be fun to work through Rubinstein’s videos over the next few months. I’m grateful that he’s shared the entire collection of ideas. ## Playing with Three Sticks I saw this tweet from Justin Aion at the end of July and immediately ordered the game: When I returned from a trip to Scotland with some college friends the game was on the dining room table – yes!! Today we played. In this blog post I’ll show how the game ships and two rounds of play (and we might not be playing exactly right) to show how fun and accessible this game is for kids. First, the unboxing. The game comes out of the box nearly ready to play. Here’s our first round of game play. I think we misunderstood one of the rules here, but you’ll still see that the game is pretty easy to play: Here’s the 2nd round of play. I think we understood the rules better this time, which is good. You’ll also see how this game gets kids talking about both numbers and geometry: Finally, here’s what the boys thought about the game: I’m really happy that I saw Justin Aion’s tweet and now have this game in our collection. It is a great game for kids! ## Going through an IMO problem with kids Last week I saw this problem on the IMO and thought that the solution was accessible to kids: The problem is problem #1 from the 2017 IMO, just to be clear. 
My kids were away at camp during the week, but today we had a chance to talk through the problem. We started by reading it and thinking about some simple ideas for approaching it:

The boys thought we should begin by looking at what happens when you start with 2. Turns out to be a good way to get going – here’s what we found:

In the last video we landed on the idea that looking at the starting integer in mod 3 was a good idea. The case we happened to be looking at was the 2 mod 3 case and we found that there would never be any repetition in this case.

Now we moved on to the 0 mod 3 case. One neat thing about this problem is that kids can see what is going on in this case even though the precise formulation of the idea is probably just out of reach:

Finally, we looked at the 1 mod 3 case. Unfortunately I got a little careless at the end and my attempt to simplify the solution for kids got a little too simple. I corrected the error when I noticed the mistake while writing up the video. The error was not being clear that when you have a perfect square that is congruent to 1 mod 3, the square root can be either 1 or 2 mod 3. The argument we go through in the video is essentially the correct argument with this clarification.

It is pretty unusual for an IMO problem to be accessible to kids. It was fun to show them that this problem that looks very complicated (and was designed to challenge some of the top math students in the world!) is actually a problem they can understand.

## Continuing our look at continued fractions

Yesterday we revisited continued fractions:

A short continued fraction project for kids

Today I wanted the boys to explore a bit more. The plan was to explore one basic property together and then for them to play a bit on the computer individually.

Here’s the first part -> Looking at what happens when you compute the continued fraction for a rational number:

Next I had the boys go to the computer and just play around. Here’s what my younger son found.
One thing that made me very happy was that he stumbled on to the Fibonacci numbers!

Here’s what my older son found. The neat thing for me was that he decided to explore what continued fractions looked like when you looked at multiples of a specific number.

So, a fun project overall. Continued fractions, I think, are a terrific advanced math topic to share with kids.

## A short continued fraction project for kids

I woke up this morning to see another great discussion between Alexander Bogomolny and Nassim Taleb. The problem that started the discussion is here:

and the mathematical point that caught my eye was the question -> which positive integers are close to being integer multiples of $\pi$?

One possible approach to this question uses the idea of “continued fractions.” I learned about continued fractions from my high school math teacher, Mr. Waterman, who taught them using C. D. Olds’s book.

So, today I started off by talking about irrational numbers and reviewing a simple proof that the square root of 2 is irrational:

Next we talked about why integer multiples of irrational numbers can never be integers. This, I think, is an obvious step for adults, but it took the kids a second to see the idea:

Now we moved on to talk about continued fractions. I’m not trying to go into any depth here, but rather just introduce the idea. I use my high school teacher’s procedure: split, flip, and rat 🙂

We work through a simple example with $\sqrt{2}$ and also see that the first couple of fractions we see are good approximations to $\sqrt{2}$.

With that background work we went on to use Mathematica to explore different aspects of continued fractions quickly. One thing we did, in particular, was use the fractions we found to find multiples of $\sqrt{2}$ that were nearly integers.

Finally, we wrapped up by using continued fractions to find good approximations to $\pi$, $e$ and a few other numbers.
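Mr. Waterman’s “split, flip, and rat” loop translates almost directly into Python. This sketch is my own (using `fractions.Fraction` so that rational inputs terminate exactly):

```python
import math
from fractions import Fraction

def continued_fraction(x, terms=10):
    """'Split, flip, and rat': split off the integer part, then flip
    (take the reciprocal of) what is left, and repeat."""
    cf = []
    for _ in range(terms):
        a = int(x)
        cf.append(a)
        x -= a
        if x == 0:
            break          # rational inputs terminate
        x = 1 / x
    return cf

def convergent(cf):
    """Collapse [a0; a1, a2, ...] back into a single fraction."""
    value = Fraction(cf[-1])
    for a in reversed(cf[:-1]):
        value = a + 1 / value
    return value

print(continued_fraction(Fraction(45, 16)))   # [2, 1, 4, 3]
cf_pi = continued_fraction(math.pi, 5)
print(cf_pi)                                  # [3, 7, 15, 1, 292]
print(convergent(cf_pi[:4]))                  # 355/113, a famous approximation
```

The denominators of the convergents are exactly the integers whose multiples of the original number land near whole numbers – which is the connection to the Bogomolny/Taleb question above.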
Definitely a fun project, and one that makes me especially happy because of the connection to Mr. Waterman. Hopefully the boys will want to play around with this idea a bit more tomorrow.

## Sierpinski Numbers

I was trying (unsuccessfully) to track down a reference on the chaos game for Edmund Harriss and ran across an unsolved problem in math that I’d never heard of before -> the Sierpinski Numbers.

Turns out that Sierpinski proved in 1960 that there are infinitely many odd positive integers $k$ for which the number:

$k * 2^n + 1$

is not prime for any positive integer $n$. It turns out that the smallest known Sierpinski number is 78,557, though there are five smaller numbers for which no primes have been found yet. Those numbers are 21181, 22699, 24737, 55459, and 67607. There’s lots of info on the Sierpinski numbers on Wikipedia:

Wikipedia’s page on the Sierpinski numbers

Tonight I wanted to explain a bit about the Sierpinski numbers to the boys as a way to review modular arithmetic. I also thought it would be interesting to see how they thought you could attack a problem like this one – especially in the 1960s!

So, here’s how we got started – a bit of Sierpinski review and then an introduction to the theorem mentioned above. It isn’t the easiest thing for kids to understand, so I wanted to be extra sure they understood all of the parts:

Next we talked a bit about modular arithmetic and why it wasn’t too hard to see, for example, that lots of the numbers we were looking at were divisible by 3. The math work here is a great introductory modular arithmetic exercise for kids.

Next we went to Mathematica to explore the modular arithmetic a bit more. Once we had the idea with 3, it was a little easier to see why there were repeating patterns with the remainders mod 5. The fun part was that the boys were able to see that one out of every 4 numbers would be divisible by 5.
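The standard explanation for why 78,557 works is a “covering set”: a known fact (not something we derived in the project) is that every number of the form 78557 · 2^n + 1 is divisible by at least one of the primes {3, 5, 7, 13, 19, 37, 73}, so none of them can be prime. A small Python sketch makes the pattern visible:

```python
k = 78557
cover = [3, 5, 7, 13, 19, 37, 73]   # the classic covering set for 78,557

# Every k * 2**n + 1 should be caught by one member of the covering set.
for n in range(1, 13):
    value = k * 2**n + 1
    p = next(q for q in cover if value % q == 0)
    print(f"n = {n:2d}: {value} is divisible by {p}")
```

The remainders mod each prime in the set repeat periodically in n (the mod 5 period of 4 is the pattern the boys found above), and together the seven periods cover every exponent.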
Finally, we looked at the problem a slightly different way and tried to see if it was easy or hard to see if 3 (or 5 or 7 or 9) was a Sierpinski number. Would we ever see primes?

This project was really fun – it is always neat to stumble on an unsolved problem that is accessible to kids. Also, I’d really love to know how Sierpinski’s proof went – sort of amazing that it took 8 years after the proof that there were infinitely many numbers with this property to find the first one!

## Sharing Kelsey Houston-Edwards’s video about Pi and e with kids

Yesterday I saw a new video from Kelsey Houston-Edwards that just blew me away. At this point I don’t have the words to describe how much I admire her work. What she is doing to make challenging, high level math both accessible and fun for everyone is amazing. The question in the video is:

If I exchange infinitely many digits of Pi and e, are the two resulting numbers transcendental?

Before showing the boys Houston-Edwards’s video, I wanted to see what they thought about the question. So, we just dove in:

Next, I took a great warm up idea from Houston-Edwards’s video and asked the boys if they could find *any* two irrational numbers that you could use to swap digits and produce a rational number.

Now, with that little bit of prep work, we watched the new video:

After the video we talked about what we learned. I think just a tiny bit of prep work really helped the boys get a lot more out of the video.

One of the fun little challenge questions from the video was to show that (assuming $\pi$ and $e$ differ in infinitely many digits) you will produce uncountably many different numbers by swapping different digits. I didn’t expect that the boys would be able to construct this proof, so I gave them a sketch of how I thought about it (and hopefully my idea was right . . . . )

I think that kids will find the ideas in Houston-Edwards’s new video to be fascinating.
It is so fun (and sadly so rare) to be able to share ideas that are genuinely interesting to professional mathematicians with kids. As always, I can’t wait for next week’s PBS Infinite series video! ## Sharing Numberphile’s Collatz Conjecture video with kids Numberphile published a beautiful video on the Collatz Conjecture today. I thought it would make for a fantastic project with the kids tonight: We have looked at the Collatz Conjecture before, so we aren’t starting from scratch here. Two of our prior projects are here: Revisiting the Collatz Conjecture the Collatz Conjecture and John Conway’s Amusical Variation I started the project tonight by asking the kids what they thought was interesting about the video: Next we tried to recreate the “tree” that was in the video. This exercise was a nice way to check that the kids understood what was going on in Numberphile’s video: To wrap up I wanted to walk through one example of how the Collatz conjecture plays out. Somewhat unluckily, though, my son chose 31 as the starting point. 31 takes more than 100 steps to converge! BUT, this video shows why I think the Collatz conjecture is such a fun math idea to share with kids – you can sneak in a lot of arithmetic practice 🙂 So, we gave up after maybe 30 steps in the last video and went to check how long it would take to converge using Mathematica. Someday I’ll learn that when I zoom in too far on Mathematica the video gets super fuzzy . . . but today was not that day 😦 I’m really grateful to Numberphile for their video – I think videos like it will really help show off the beauty of math to a large audience.
https://cbselearner.com/three-dimensional-geometry/
# CBSE Class 12 Maths Notes Three Dimensional Geometry

Three Dimensional Geometry is part of Class 12 Maths Notes for Quick Revision. Here we have given Class 12 Maths Notes Three Dimensional Geometry.

Direction Cosines of a Line: If the directed line OP makes angles α, β, and γ with the positive X-axis, Y-axis and Z-axis respectively, then cos α, cos β, and cos γ are called direction cosines of the line. They are denoted by l, m, and n. Therefore, l = cos α, m = cos β and n = cos γ. Also, the sum of the squares of the direction cosines of a line is always 1, i.e. $l^{2}+m^{2}+n^{2}=1$ or $\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma =1$

Note: Direction cosines of a directed line are unique.

Direction Ratios of a Line: Numbers proportional to the direction cosines of a line are called direction ratios of the line.
(i) If a, b and c are direction ratios of a line, then $\frac { l }{ a }$ = $\frac { m }{ b }$ = $\frac { n }{ c }$
(ii) If a, b and c are direction ratios of a line, then its direction cosines are $l=\pm \frac { a }{ \sqrt { { a }^{ 2 }+{ b }^{ 2 }+{ c }^{ 2 } } }$, $m=\pm \frac { b }{ \sqrt { { a }^{ 2 }+{ b }^{ 2 }+{ c }^{ 2 } } }$, $n=\pm \frac { c }{ \sqrt { { a }^{ 2 }+{ b }^{ 2 }+{ c }^{ 2 } } }$
(iii) Direction ratios of a line PQ passing through the points P(x1, y1, z1) and Q(x2, y2, z2) are x2 – x1, y2 – y1 and z2 – z1, and its direction cosines are $\frac { { x }_{ 2 }-{ x }_{ 1 } }{ PQ }$, $\frac { { y }_{ 2 }-{ y }_{ 1 } }{ PQ }$ and $\frac { { z }_{ 2 }-{ z }_{ 1 } }{ PQ }$, where PQ is the distance between P and Q.

Note: (i) Direction ratios of two parallel lines are proportional. (ii) Direction ratios of a line are not unique.

Straight line: A straight line is a curve such that all the points on the line segment joining any two of its points lie on it.

Equation of a Line through a Given Point and parallel to a given vector $\vec { b }$

Vector form: $\vec { r } =\vec { a } +\lambda \vec { b }$
where, $\vec { a }$ = Position vector of a point through which the line is passing
$\vec { b }$ = A vector parallel to a given line

Cartesian form: $\frac { x-{ x }_{ 1 } }{ a } =\frac { y-{ y }_{ 1 } }{ b } =\frac { z-{ z }_{ 1 } }{ c }$
where, (x1, y1, z1) is the point through which the line is passing and a, b, c are the direction ratios of the line.
If l, m, and n are the direction cosines of the line, then the equation of the line is $\frac { x-{ x }_{ 1 } }{ l } =\frac { y-{ y }_{ 1 } }{ m } =\frac { z-{ z }_{ 1 } }{ n }$

Remember point: Before we use the DR’s of a line, first we have to ensure that the coefficients of x, y and z are unity with a positive sign.

## Equation of Line Passing through Two Given Points

Vector form: $\vec { r } =\vec { a } +\lambda \left( \vec { b } -\vec { a } \right)$, λ ∈ R, where $\vec { a }$ and $\vec { b }$ are the position vectors of the points through which the line is passing.

Cartesian form: $\frac { x-{ x }_{ 1 } }{ { x }_{ 2 }-{ x }_{ 1 } } =\frac { y-{ y }_{ 1 } }{ { y }_{ 2 }-{ y }_{ 1 } } =\frac { z-{ z }_{ 1 } }{ { z }_{ 2 }-{ z }_{ 1 } }$
where, (x1, y1, z1) and (x2, y2, z2) are the points through which the line is passing.

Angle between Two Lines

Vector form: The angle θ between the lines $\vec { r } =\vec { { a }_{ 1 } } +\lambda \vec { { b }_{ 1 } }$ and $\vec { r } =\vec { { a }_{ 2 } } +\mu \vec { { b }_{ 2 } }$ is given as $\cos { \theta } =\left| \frac { \vec { { b }_{ 1 } } \cdot \vec { { b }_{ 2 } } }{ \left| \vec { { b }_{ 1 } } \right| \left| \vec { { b }_{ 2 } } \right| } \right|$

Condition of Perpendicularity: Two lines are said to be perpendicular, when in vector form $\vec { { b }_{ 1 } } \cdot \vec { { b }_{ 2 } } =0$; in cartesian form a1a2 + b1b2 + c1c2 = 0 or l1l2 + m1m2 + n1n2 = 0 [direction cosine form]

Condition that Two Lines are Parallel: Two lines are parallel, when in vector form $\vec { { b }_{ 1 } } \cdot \vec { { b }_{ 2 } } =\left| \vec { { b }_{ 1 } } \right| \left| \vec { { b }_{ 2 } } \right|$; in cartesian form $\frac { { a }_{ 1 } }{ { a }_{ 2 } } =\frac { { b }_{ 1 } }{ { b }_{ 2 } } =\frac { { c }_{ 1 } }{ { c }_{ 2 } }$ or $\frac { { l }_{ 1 } }{ { l }_{ 2 } } =\frac { { m }_{ 1 } }{ { m }_{ 2 } } =\frac { { n }_{ 1 } }{ { n }_{ 2 } }$ [direction cosine form]

Shortest Distance between Two Lines: Two non-parallel and non-intersecting straight lines are called skew lines. For skew lines, the line of the shortest distance will be perpendicular to both the lines.

Vector form: If the lines are $\vec { r } =\vec { { a }_{ 1 } } +\lambda \vec { { b }_{ 1 } }$ and $\vec { r } =\vec { { a }_{ 2 } } +\mu \vec { { b }_{ 2 } }$.
Then, the shortest distance is
$d=\left| \frac { \left( \vec { { b }_{ 1 } } \times \vec { { b }_{ 2 } } \right) \cdot \left( \vec { { a }_{ 2 } } -\vec { { a }_{ 1 } } \right) }{ \left| \vec { { b }_{ 1 } } \times \vec { { b }_{ 2 } } \right| } \right|$
where $\vec { { a }_{ 1 } }$, $\vec { { a }_{ 2 } }$ are the position vectors of points through which the lines pass and $\vec { { b }_{ 1 } }$, $\vec { { b }_{ 2 } }$ are the vectors in the directions of the lines.

Cartesian form: If the lines are $\frac { x-{ x }_{ 1 } }{ { a }_{ 1 } } =\frac { y-{ y }_{ 1 } }{ { b }_{ 1 } } =\frac { z-{ z }_{ 1 } }{ { c }_{ 1 } }$ and $\frac { x-{ x }_{ 2 } }{ { a }_{ 2 } } =\frac { y-{ y }_{ 2 } }{ { b }_{ 2 } } =\frac { z-{ z }_{ 2 } }{ { c }_{ 2 } }$, then the shortest distance is
$d=\frac { \left| \begin{matrix} { x }_{ 2 }-{ x }_{ 1 } & { y }_{ 2 }-{ y }_{ 1 } & { z }_{ 2 }-{ z }_{ 1 } \\ { a }_{ 1 } & { b }_{ 1 } & { c }_{ 1 } \\ { a }_{ 2 } & { b }_{ 2 } & { c }_{ 2 } \end{matrix} \right| }{ \sqrt { { \left( { b }_{ 1 }{ c }_{ 2 }-{ b }_{ 2 }{ c }_{ 1 } \right) }^{ 2 }+{ \left( { c }_{ 1 }{ a }_{ 2 }-{ c }_{ 2 }{ a }_{ 1 } \right) }^{ 2 }+{ \left( { a }_{ 1 }{ b }_{ 2 }-{ a }_{ 2 }{ b }_{ 1 } \right) }^{ 2 } } }$

Distance between two Parallel Lines: If two lines l1 and l2 are parallel, then they are coplanar. Let the lines be $\vec { r } =\vec { { a }_{ 1 } } +\lambda \vec { b }$ and $\vec { r } =\vec { { a }_{ 2 } } +\mu \vec { b }$; then the distance between the parallel lines is
$d=\frac { \left| \vec { b } \times \left( \vec { { a }_{ 2 } } -\vec { { a }_{ 1 } } \right) \right| }{ \left| \vec { b } \right| }$

Note: If two lines are parallel, then they both have the same DR’s.

Distance between Two Points: The distance between two points P (x1, y1, z1) and Q (x2, y2, z2) is given by
$PQ=\sqrt { { \left( { x }_{ 2 }-{ x }_{ 1 } \right) }^{ 2 }+{ \left( { y }_{ 2 }-{ y }_{ 1 } \right) }^{ 2 }+{ \left( { z }_{ 2 }-{ z }_{ 1 } \right) }^{ 2 } }$

Mid-point of a Line: The mid-point of a line joining points A (x1, y1, z1) and B (x2, y2, z2) is given by
$\left( \frac { { x }_{ 1 }+{ x }_{ 2 } }{ 2 } ,\frac { { y }_{ 1 }+{ y }_{ 2 } }{ 2 } ,\frac { { z }_{ 1 }+{ z }_{ 2 } }{ 2 } \right)$

Plane: A plane is a surface such that a line segment joining any two of its points lies wholly on it. A straight line which is perpendicular to every line lying on a plane is called a normal to the plane.

## Equations of a Plane in Normal form – Three Dimensional Geometry

Vector form: The equation of a plane in normal form is given by $\vec { r } \cdot \vec { n } =d$, where $\vec { n }$ is a vector which is normal to the plane.

Cartesian form: The equation of the plane is given by ax + by + cz = d, where a, b and c are the direction ratios of the normal to the plane and d is the distance of the plane from the origin. Another equation of the plane is lx + my + nz = p, where l, m, and n are the direction cosines of the perpendicular from the origin and p is the distance of the plane from the origin.

Note: If d is the distance from the origin and l, m and n are the direction cosines of the normal to the plane through the origin, then the foot of the perpendicular is (ld, md, nd).
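As a quick worked check of the distance and mid-point formulas above (the points are chosen only for illustration):

```latex
\text{For } P(1, 2, 3) \text{ and } Q(4, 6, 15):\qquad
PQ = \sqrt{(4-1)^2 + (6-2)^2 + (15-3)^2}
   = \sqrt{9 + 16 + 144} = \sqrt{169} = 13,
\qquad
\text{mid-point} = \left( \tfrac{1+4}{2}, \tfrac{2+6}{2}, \tfrac{3+15}{2} \right)
                 = \left( \tfrac{5}{2}, 4, 9 \right).
```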
## Equation of a Plane Perpendicular to a given Vector and Passing Through a given Point

Vector form: Let a plane pass through a point A with position vector $\vec { a }$ and be perpendicular to the vector $\vec { n }$; then $\left( \vec { r } -\vec { a } \right) \cdot \vec { n } =0$
This is the vector equation of the plane.

Cartesian form: The equation of a plane passing through the point (x1, y1, z1) is given by
a (x – x1) + b (y – y1) + c (z – z1) = 0
where, a, b and c are the direction ratios of the normal to the plane.

## Equation of Plane Passing through Three Non-collinear Points

Vector form: If $\vec { a }$, $\vec { b }$ and $\vec { c }$ are the position vectors of three given points, then the equation of a plane passing through three non-collinear points is $\left( \vec { r } -\vec { a } \right) \cdot \left\{ \left( \vec { b } -\vec { a } \right) \times \left( \vec { c } -\vec { a } \right) \right\} =0$.

Cartesian form: If (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) are three non-collinear points, then the equation of the plane is
$\left| \begin{matrix} x-{ x }_{ 1 } & y-{ y }_{ 1 } & z-{ z }_{ 1 } \\ { x }_{ 2 }-{ x }_{ 1 } & { y }_{ 2 }-{ y }_{ 1 } & { z }_{ 2 }-{ z }_{ 1 } \\ { x }_{ 3 }-{ x }_{ 1 } & { y }_{ 3 }-{ y }_{ 1 } & { z }_{ 3 }-{ z }_{ 1 } \end{matrix} \right| =0$
If the above points are collinear, then no unique plane is determined – infinitely many planes pass through them.

Equation of Plane in Intercept Form: If a, b and c are the x-intercept, y-intercept and z-intercept, respectively, made by the plane on the coordinate axes, then the equation of the plane is $\frac { x }{ a } +\frac { y }{ b } +\frac { z }{ c } =1$

## Equation of Plane Passing through the Line of Intersection of two given Planes

Vector form: If the equations of the planes are $\vec { r } \cdot \vec { { n }_{ 1 } } ={ d }_{ 1 }$ and $\vec { r } \cdot \vec { { n }_{ 2 } } ={ d }_{ 2 }$, then the equation of any plane passing through the intersection of the planes is $\vec { r } \cdot \left( \vec { { n }_{ 1 } } +\lambda \vec { { n }_{ 2 } } \right) ={ d }_{ 1 }+\lambda { d }_{ 2 }$
where, λ is a constant calculated from a given condition.
Cartesian form: If the equations of the planes are a1x + b1y + c1z = d1 and a2x + b2y + c2z = d2, then the equation of any plane passing through the intersection of the planes is
a1x + b1y + c1z – d1 + λ (a2x + b2y + c2z – d2) = 0
where, λ is a constant calculated from a given condition.

## Coplanarity of Two Lines – Three Dimensional Geometry

Vector form: If two lines $\vec { r } =\vec { { a }_{ 1 } } +\lambda \vec { { b }_{ 1 } }$ and $\vec { r } =\vec { { a }_{ 2 } } +\mu \vec { { b }_{ 2 } }$ are coplanar, then $\left( \vec { { a }_{ 2 } } -\vec { { a }_{ 1 } } \right) \cdot \left( \vec { { b }_{ 1 } } \times \vec { { b }_{ 2 } } \right) =0$

## Angle between Two Planes: Let θ be the angle between two planes

Vector form: If $\vec { { n }_{ 1 } }$ and $\vec { { n }_{ 2 } }$ are normals to the planes $\vec { r } \cdot \vec { { n }_{ 1 } } ={ d }_{ 1 }$ and $\vec { r } \cdot \vec { { n }_{ 2 } } ={ d }_{ 2 }$, then θ is the angle between the normals to the planes drawn from a common point, and $\cos { \theta } =\left| \frac { \vec { { n }_{ 1 } } \cdot \vec { { n }_{ 2 } } }{ \left| \vec { { n }_{ 1 } } \right| \left| \vec { { n }_{ 2 } } \right| } \right|$
Note: The planes are perpendicular to each other, if $\vec { { n }_{ 1 } } \cdot \vec { { n }_{ 2 } } =0$ and parallel, if $\vec { { n }_{ 1 } } \cdot \vec { { n }_{ 2 } } =\left| \vec { { n }_{ 1 } } \right| \left| \vec { { n }_{ 2 } } \right|$

Cartesian form: If the two planes are a1x + b1y + c1z = d1 and a2x + b2y + c2z = d2, then
$\cos { \theta } =\left| \frac { { a }_{ 1 }{ a }_{ 2 }+{ b }_{ 1 }{ b }_{ 2 }+{ c }_{ 1 }{ c }_{ 2 } }{ \sqrt { { a }_{ 1 }^{ 2 }+{ b }_{ 1 }^{ 2 }+{ c }_{ 1 }^{ 2 } } \sqrt { { a }_{ 2 }^{ 2 }+{ b }_{ 2 }^{ 2 }+{ c }_{ 2 }^{ 2 } } } \right|$

Note: Planes are perpendicular to each other, if a1a2 + b1b2 + c1c2 = 0 and planes are parallel, if $\frac { { a }_{ 1 } }{ { a }_{ 2 } } =\frac { { b }_{ 1 } }{ { b }_{ 2 } } =\frac { { c }_{ 1 } }{ { c }_{ 2 } }$

## Distance of a Point from a Plane – Three Dimensional Geometry

Vector form: The distance of a point whose position vector is $\vec { a }$ from the plane $\vec { r } \cdot \hat { n } =d$ (where $\hat { n }$ is the unit normal) is $\left| d-\vec { a } \cdot \hat { n } \right|$

Note: (i) If the equation of the plane is in the form $\vec { r } \cdot \vec { n } =d$, where $\vec { n }$ is normal to the plane, then the perpendicular distance is $\frac { \left| \vec { a } \cdot \vec { n } -d \right| }{ \left| \vec { n } \right| }$
(ii) The length of the perpendicular from origin O to the plane $\vec { r } \cdot \vec { n } =d\quad is\quad \frac { \left| d \right| }{ \left| \vec { n } \right| }$ [∵ $\vec { a }$ = 0]

Cartesian form: The distance of the point (x1, y1, z1) from the plane Ax + By + Cz = D is
$\frac { \left| A{ x }_{ 1 }+B{ y }_{ 1 }+C{ z }_{ 1 }-D \right| }{ \sqrt { { A }^{ 2 }+{ B }^{ 2 }+{ C }^{ 2 } } }$
For example, the distance of the point (2, 5, –3) from the plane 6x – 3y + 2z = 4 is $\frac { \left| 6(2)-3(5)+2(-3)-4 \right| }{ \sqrt { 36+9+4 } } =\frac { 13 }{ 7 }$.

## Angle between a Line and a Plane – Three Dimensional Geometry

Vector form: If the equation of the line is $\vec { r } =\vec { a } +\lambda \vec { b }$ and the equation of the plane is $\vec { r } \cdot \vec { n } =d$, then the angle θ between the line and the normal to the plane is $\cos { \theta } =\left| \frac { \vec { b } \cdot \vec { n } }{ \left| \vec { b } \right| \left| \vec { n } \right| } \right|$
and so the angle Φ between the line and the plane is given by 90° – θ, i.e.
sin (90° – θ) = cos θ, so $\sin { \Phi } =\left| \frac { \vec { b } \cdot \vec { n } }{ \left| \vec { b } \right| \left| \vec { n } \right| } \right|$

Cartesian form: If a, b and c are the DR’s of the line and lx + my + nz + d = 0 is the equation of the plane, then
$\sin { \theta } =\left| \frac { al+bm+cn }{ \sqrt { { a }^{ 2 }+{ b }^{ 2 }+{ c }^{ 2 } } \sqrt { { l }^{ 2 }+{ m }^{ 2 }+{ n }^{ 2 } } } \right|$
If a line is parallel to the plane, then al + bm + cn = 0 and if a line is perpendicular to the plane, then $\frac { a }{ l } =\frac { b }{ m } =\frac { c }{ n }$

## Remember Points

(i) If a line is parallel to the plane, then the normal to the plane is perpendicular to the line, i.e. a1a2 + b1b2 + c1c2 = 0
(ii) If a line is perpendicular to the plane, then the DR’s of the line are proportional to the normal of the plane, i.e. $\frac { { a }_{ 1 } }{ { a }_{ 2 } } =\frac { { b }_{ 1 } }{ { b }_{ 2 } } =\frac { { c }_{ 1 } }{ { c }_{ 2 } }$
where, a1, b1 and c1 are the DR’s of the line and a2, b2 and c2 are the DR’s of the normal to the plane.

We hope the given CBSE Class 12 Maths Notes Three Dimensional Geometry will help you. If you have any query regarding NCERT Class 12 Maths Notes Three Dimensional Geometry, drop a comment below and we will get back to you at the earliest.
https://the-equivalent.com/how-to-make-ratios-equivalent/
# How to make ratios equivalent

1. Write both the ratios in fractional form (numerator over denominator).
2. Do the cross multiplication. Multiply 10 by 24 and 8 by 30.
3. If both products are equal, it means that they are equivalent ratios. Here 10 × 24 = 8 × 30 = 240. Therefore, they are equivalent ratios.

“We use multiplication or division. Whatever you do to one number of the ratio, you have to do the same thing to the other, using multiplication or division, and you get an equivalent ratio.”

## How can you determine if ratios are equivalent?

Equivalent ratios are just like equivalent fractions. If two ratios have the same value, then they are equivalent, even though they may look very different! In this tutorial, take a look at equivalent ratios and learn how to tell if you have equivalent ratios.

## How can you tell if two ratios are equivalent?

By multiplying each ratio by the second number of the other ratio, you can determine if they are equivalent. Multiply both numbers in the first ratio by the second number of the second ratio, and both numbers in the second ratio by the second number of the first ratio. For example, if the ratios are 3:5 and 9:15, multiply 3 and 5 by 15 to get 45:75, and multiply 9 and 15 by 5 to also get 45:75. Since the results match, the ratios are equivalent.

## How do you write equivalent ratios?

n = numerator, d = denominator, a = multiplier. In our equivalent ratio formula, we can see that by multiplying both the numerator and denominator by the same amount (a) we maintain the relationship with all equivalent ratios and with the initial ratio from which we started the calculation.

## What are facts about equivalent ratios?

Equivalent ratios are ratios that express the same relationship between two numbers. The ratios 60/1 and 120/2 are equivalent because the relationship between the two parts of the ratios didn’t change. According to the ratio 60/1, you travel 60 miles for every hour you drive.
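The cross-multiplication check from the steps at the top of this page can be sketched in a few lines of Python (the function names are my own):

```python
from math import gcd

def equivalent(a, b, c, d):
    """Check whether a:b and c:d are equivalent by cross multiplication."""
    return a * d == b * c

def simplify(a, b):
    """Reduce a ratio to its lowest terms by dividing out the gcd."""
    g = gcd(a, b)
    return a // g, b // g

print(equivalent(10, 8, 30, 24))   # True: 10 * 24 == 8 * 30 == 240
print(simplify(14, 21))            # (2, 3)
```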
## What is the ratio 2/3 equivalent to?

Answer: 4/6, 6/9, 8/12, 10/15 … are equivalent to 2/3. All those fractions obtained by multiplying both the numerator and denominator of 2/3 by the same number are equivalent to 2/3.

## What are 3 ratios that are equivalent?

We can make a string of equivalent ratios by continuing to scale up or scale down. 1 : 3 = 2 : 6 = 3 : 9 = 4 : 12 = 5 : 15 = 6 : 18 = 7 : 21…. All of these ratios show the same relationship. Simplifying ratios to the simplest form can be helpful when solving problems that deal with ratios.

## What is a ratio of 2 to 1 equivalent to?

For example, if you split 12 in a 2:1 ratio, you get 8 and 4 (one part is twice as large as the other).

## What is ratio and equivalent ratio?

A ratio simply compares one number to another. An equivalent ratio means that the proportional relationship stays the same. You can calculate your own equivalent ratios by multiplying the first number by the same ratio, or unit of proportion, to get the second number.

## What’s the equivalent ratio of 6 to 2?

Since the simplest form of the fraction 6/2 is 3/1, the simplest form of the ratio 6:2 is also 3:1.

## What is the ratio 8 to 2 equivalent to?

Since the simplest form of the fraction 8/2 is 4/1, the simplest form of the ratio 8:2 is also 4:1.

## What is the ratio of 6 to 4?

A ratio of 6 to 4 can be written as 6 to 4, 6:4, or 6/4. Furthermore, 6 and 4 can be the quantity or measurement of anything, such as students, fruit, weights, heights, speed and so on. A ratio of 6 to 4 simply means that for every 6 of something, there are 4 of something else, with a total of 10.

## What is the ratio of 4 to 8?

Since the simplest form of the fraction 4/8 is 1/2, the simplest form of the ratio 4:8 is also 1:2.

## Which ratio is equivalent to the ratio 3 : 4?

Answer: each one of 6 : 8 and 9 : 12 is equivalent to 3 : 4.

## What is an equivalent ratio for 3 to 5?

The given ratios 3 : 5 and 15 : 25 are equal.
## What is a ratio equivalent to 4/5?

Answer: The fractions equivalent to 4/5 are 8/10, 12/15, 16/20, etc. Equivalent fractions have the same value in the reduced form.

## How do you solve equivalent ratios word problems?

From the video “How To Solve Equivalent Ratio Word Problems (finding how many boys …)”: “So if we look at the question, we are given that there’s 20 boys, so we’re going to make an equivalent ratio underneath and put the 20 boys underneath the boys’ side of the ratio, which is on the left.”

## What are examples of equivalent ratios?

Equivalent ratios are ratios that are the same when we compare them. Two or more ratios can be compared with each other to check whether they are equivalent or not. For example, 1:2 and 2:4 are equivalent ratios.

## Are 3/5 and 12/20 the same ratio?

Equivalent fractions of 3/5: 6/10, 9/15, 12/20, 15/25
Equivalent fractions of 4/5: 8/10, 12/15, 16/20, 20/25

## Are the ratios 18:12 and 3:2 equivalent?

Whenever the simplified forms of two ratios are equal, we can say that the ratios are equivalent ratios. For example, 6 : 4 and 18 : 12 are equivalent ratios, because the simplified form of 6 : 4 is 3 : 2 and the simplified form of 18 : 12 is also 3 : 2.

## What are equivalent ratios?

When the comparison of two different ratios is the same, such ratios are called equivalent ratios. For example, 1:2 and 3:6 are equivalent.

## How can we find the equivalent ratio of 6:4?

To find the equivalent ratio of 6:4, convert the ratio into a fraction and then multiply and divide the fraction by a common factor. 6:4 = 6/4 x (2/…

## Are 30 : 20 and 24 : 16 equivalent ratios?

30:20 and 24:16 are equivalent ratios, since the lowest form of both ratios is 3:2.

## What is the simplest form of 14:21?
The simplest form of 14:21 is 2:3.

## What is the Definition of Equivalent Ratios?

Two or more ratios are equivalent if they have the same value when reduced to the lowest form. For example, 1:2, 2:4, 4:8 are equivalent ratios. All three ratios have the same value 1:2 when reduced to the simplest form.

## How do you Find the Equivalent Ratios?

To find equivalent ratios of a given ratio, we either multiply the terms or divide the terms by a natural number. If the terms are co-prime (do not have any common factor other than 1), then we avoid the division operation and multiply the terms by any natural number.

## How are Unit Rates and Equivalent Ratios Related?

Unit rates and equivalent ratios are related to each other. Unit rates can be found by using the concept of equivalent ratios. For example, if it is given that a car covers 70 miles in 2 hours, as a ratio this can be expressed as 70:2, which is equivalent to the unit rate 35:1, i.e. 35 miles per hour.

## How are Proportional Quantities Described by Equivalent Ratios?

A set of equivalent ratios represents proportional quantities. For example, we can say that 2:3 and 4:6 are in proportion. Proportion is nothing but the equality of ratios. This is how proportional quantities can be described by equivalent ratios.

## How to Find Missing Numbers in Equivalent Ratios?

To find missing values in equivalent ratios, we first find the multiplying factor by equating the values of the antecedents and consequents, and then we find the missing number. For example, suppose it is given that 1:4 and x:16 are equivalent ratios and we have to find the missing value x; the multiplying factor is 16 ÷ 4 = 4, so x = 1 × 4 = 4.

## Definition of Ratio

A ratio is the relationship between two quantities of the same kind and in the same unit that is obtained by dividing one quantity by the other.
Both quantities must be of the same kind: if one quantity is a number of students, the other quantity must also be a number of students.

## Definition of Equivalent Ratio

A ratio can be represented as a fraction. The concept of an equivalent ratio is similar to the concept of equivalent fractions. A ratio that we get either by multiplying or dividing both the antecedent and the consequent of a ratio by the same number, other than zero, is called an equivalent ratio. To get a ratio equivalent to a given ratio, we first represent the ratio in fraction form.

## Examples of Equivalent Ratio

Let us see some examples of equivalent ratios. For example, when the first and the second term of the ratio 2 : 5 are multiplied by 2, we get (2 × 2) : (5 × 2) or 4 : 10. Here, 2 : 5 and 4 : 10 are equivalent ratios. Similarly, when both the terms of the ratio 4 : 10 are divided by 2, it gives the ratio 2 : 5.

## Methods to Find the Equivalent Ratios

To find equivalent fractions, first we should represent the given ratios in fraction form and then simplify them to see whether they are equivalent ratios or not. Simplification of the ratios can be done as long as both the antecedent and the consequent remain whole numbers.

## Making the Consequents of the Ratios the Same

The consequents of the ratios 3 : 5 and 6 : 10 are 5 and 10. To make the process simple, we will represent them in fraction form, that is 3/5 and 6/10. The least common multiple (LCM) of the denominators 5 and 10 is 10. Now make the denominators of both fractions 10 by multiplying them by suitable numbers.

## Finding the Decimal Form of Both the Ratios

In this method, we find the decimal form of both ratios after converting them to fraction form by actually dividing them. We have to check whether 3/5 and 6/10 have the same value. So, first, find the decimal value of each ratio:
3/5 = 0.6
6/10 = 0.6
The decimal values of both fractions are the same, i.e., 0.6. Therefore, 3 : 5 and 6 : 10 are equivalent ratios.
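The methods described above (scaling a ratio and solving for a missing term) amount to a little arithmetic; here is a small Python sketch, with helper names of my own:

```python
from fractions import Fraction

def missing_term(a, b, d):
    """Solve for x in x:d == a:b, assuming the ratios are equivalent."""
    return Fraction(a * d, b)

def equivalent_ratios(a, b, count=4):
    """Scale the ratio a:b up to produce a few equivalent ratios."""
    return [(a * k, b * k) for k in range(1, count + 1)]

print(missing_term(1, 4, 16))    # x in x:16 == 1:4  ->  4
print(equivalent_ratios(3, 5))   # [(3, 5), (6, 10), (9, 15), (12, 20)]
```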
## Summary

In this article, we learnt in detail about ratios, equivalent ratios, and how to check whether ratios are equivalent. We have learned that to find the equivalent ratios of a given ratio, we write its fraction form and then multiply the numerator and the denominator by the same non-zero number.

## Ratio Definition

Ratios are the simplest mathematical expressions that reveal the significant relationship between two values. In other words, a ratio is defined as the relationship between two numbers that indicates how many times the first number contains the second number. Ratios are expressed using the notation ":" or "/".

## Check Whether the Given Ratios are Equal

The given ratios 3:5 and 15:25 are equal, because when you divide both terms of 15:25 by 5, you obtain 3:5; similarly, when you multiply both terms of 3:5 by 5, you obtain 15:25.

## How to Calculate Equivalent Ratios

As we previously mentioned, equivalent ratios are two ratios that express the same relationship between numbers. The Equivalent Ratio Calculator provides a table of equivalent ratios that have the same relationship with each other and directly with the ratio you enter into the calculator.

## How to Manually Calculate Equivalent Ratios

When calculating equivalent ratios it is important to understand that, mathematically, you are expressing the same relationship, simply in different amounts. For example, having 10 sweets to share among 4 friends is, in ratio terms, the same as having 5 sweets to share among 2 friends.

## More Good Ratio Calculators

If you found the Equivalent Ratio Calculator useful, you will probably find the following ratio calculators useful too.

## What is a ratio?

A ratio is a direct comparison of one number against another. A ratio calculator looks to define the relationship between those two numbers.

## Where are Ratio Calculations Used?
Ratios are used everywhere, from cooking with your favourite recipes to building housing. Here are some common applications of ratios in everyday life:

## How to Calculate Ratios

When calculating equivalent ratios you must multiply or divide both numbers in the ratio. This keeps both numbers in direct relation to each other. So, a ratio of 2/3 has an equivalent ratio of 4/6: in this ratio calculation we simply multiplied both 2 and 3 by 2.
## Solved Examples – Equivalent Ratios

Q.1. Are the ratios 2:7 and 4:12 equivalent?
Ans: The fraction forms of the given ratios are 2/7 and 4/12. Cross-multiplying gives 2 × 12 and 7 × 4, i.e., 24 ≠ 28. Therefore, 2:7 and 4:12 are not equivalent ratios.

Q.2. Are the ratios 1:6 and 2:12 equivalent?
Ans: The fraction forms of the given ratios are 1/6 and 2/12. Cross-multiplying gives 1 × 12 and 6 × 2, i.e., 12 = 12. Therefore, 1:6 and 2:12 are equivalent ratios.
The equivalent ratio of a given ratio does not change the value of the ratio.
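The checks described in this article — reducing a ratio to simplest form, testing equivalence, and finding a missing term by cross-multiplication — can be sketched in a few lines of Python (my own illustration; the function names are arbitrary):

```python
from math import gcd

def simplest_form(a, b):
    # Reduce a ratio a:b to lowest terms by dividing out the GCD.
    g = gcd(a, b)
    return a // g, b // g

def are_equivalent(r1, r2):
    # Two ratios are equivalent when they reduce to the same simplest form.
    return simplest_form(*r1) == simplest_form(*r2)

def missing_term(antecedent, consequent, new_consequent):
    # Solve antecedent:consequent = x:new_consequent by cross-multiplication.
    return antecedent * new_consequent // consequent

print(simplest_form(14, 21))            # (2, 3)
print(are_equivalent((3, 5), (6, 10)))  # True
print(missing_term(1, 4, 16))           # 4
```

The same `are_equivalent` check covers the cross-multiplication test used in the solved examples, since equal cross-products and equal simplest forms coincide for ratios of natural numbers.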
https://www.enotes.com/homework-help/solve-x-ln-ln-x-4-244679
# Solve x if ln(ln(x)) = 4

justaguide | Certified Educator

We have ln(ln(x)) = 4. ln has a base of e. Taking the antilog of both sides:

ln(x) = e^4

Taking the antilog of both sides again:

x = e^(e^4)

The required value of x is e^(e^4).

hala718 | Certified Educator

Given the equation ln(ln(x)) = 4, we need to solve for x. First we rewrite it in exponential form:

ln(x) = e^4

Now we rewrite into exponential form once more:

x = e^(e^4)

tonys538 | Student

The logarithmic equation `ln(ln(x)) = 4` has to be solved for x. ln is used to denote the natural logarithm, which is the logarithm to the base e, so `ln(ln(x)) = 4` can be rewritten as `log_e(log_e(x)) = 4`. If `log_b a = c`, we can write `a = b^c`. This gives `log_e x = e^4`. Doing the same again gives `x = e^(e^4)`. The root of the equation `ln(ln(x)) = 4` is `x = e^(e^4)`.
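As a quick numerical sanity check of the answers above (a Python sketch of my own, not part of the original responses):

```python
import math

# Solve ln(ln(x)) = 4 by applying exp twice:
# ln(x) = e^4, so x = e^(e^4).
x = math.exp(math.exp(4))

# Substitute back into the left-hand side.
check = math.log(math.log(x))
print(check)  # approximately 4.0, up to floating-point rounding
```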
https://math.stackexchange.com/questions/1631685/if-f-rightarrow-c-then-prove-frac1a-int-0-a-f-rightarrow-c
# If $f \rightarrow c$ then prove $\frac{1}{a} \int_{[0,a]} f \rightarrow c$

Let $f$ be an extended real-valued $\mathcal{M}_{L}$-measurable function on $[0,\infty)$ such that $f$ is $\mu_L$-integrable on every finite subinterval of $[0,\infty)$, and $$\lim_{x\rightarrow \infty}f(x)=c.$$ Let $a>0$. Show that $$\lim_{a\rightarrow \infty}\frac{1}{a}\int_{[0,a]}f\,d\mu_L=c$$

This is one of my analysis HW problems (9.35 in Yeh's Real Analysis, 3rd edition). I can solve it using an $\epsilon$-$\delta$ type argument (i.e., for large enough $x$, $c-\delta<f(x)<c+\delta$; use that to approximate the integral when $a$ is large enough). However, this chapter is about the Lebesgue integral of measurable functions and the convergence theorems (monotone convergence, dominated convergence, etc.). I wonder if there is a much better alternative solution for this problem using those theorems.

• I wonder if one can justify that $\frac{1}{a}\int_{[0,a]}f\,d\mu_L = \int_0^1 f(ax)\,dx \to \int_0^1 c\,dx = c$. – Martin R Jan 29 '16 at 8:30

If you assume that $f$ is bounded you may argue as follows: $${1\over a}\int_0^a f(x)\>dx=\int_0^1 f(a \>t)\>dt\ .$$ Now you can apply the dominated convergence theorem on the right hand side and obtain $$\lim_{a\to\infty}{1\over a}\int_0^a f(x)\>dx=\int_0^1 c\>dt=c\ .$$

Your approach is the way to go. The point of this question, I think, is that infinity trumps all: even if $f$ never achieves the value $c$, since it is its limit at infinity, the average value of $f$ on $[0,\infty)$ is still $c$. I could be wrong, but there is no obvious way to cast this as a direct application of one of the big convergence theorems.

For $a$ large enough write $$\int f1_{[0,a]}=\int f1_{[0,A]}+\int f1_{[A,a]},$$ where, given $\epsilon>0$, $A$ is such that $|f(x)-c|<\epsilon$ for $x\ge A$. The first integral on the RHS is finite by the given assumptions and the second integral is between $(c-\epsilon)(a-A)$ and $(c+\epsilon)(a-A)$.
Combining these facts we can bound $\frac{1}{a}\int f1_{[0,a]}$. Then send $a\to \infty$ and $\epsilon\downarrow 0$.

• Still +1 for the solution, but that is the approach I use, and I'm wondering if there is any alternative using the related theorems in the chapter. – gamma Jan 29 '16 at 8:42
• @frank000 I tend to agree with charlestoncrabb. – d.k.o. Jan 29 '16 at 8:49

Probably the epsilon-delta argument is the quickest approach (and the one you are meant to use). Anyway, try to see whether this works. I assume $f$ is bounded (for simplicity, $f$ in $(0,1)$); this is not much of an extra restriction, since $f$ must eventually be bounded because it converges.

$\frac{1}{a} \int_0^a f\,dl=\int_0^{\infty} f \frac{1}{a} I_{(0,a)}\,dl=\int_0^{\infty} f\,dP_a$, where $P_a$ is the uniform probability measure on $(0,a)$. The last integral is equal to: $\int_0^1 P_a(f>u)\,du$. Finally, using the dominated convergence theorem and noting that $P_a(f>u)$ converges to $1$ if $u<c$ and $0$ otherwise, we get: $\int_0^1 P_a(f>u)\,du \to c$
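None of this replaces the proofs above, but a quick numerical illustration (my own Python sketch; the choice of $f$ and the step count are arbitrary) shows the averages settling toward the limit, here $c = 2$:

```python
import math

def f(x):
    # Example function with limit c = 2 at infinity;
    # it never actually equals 2 for finite x.
    return 2 + math.sin(x) / (1 + x)

def average(a, n=200_000):
    # Midpoint Riemann-sum approximation of (1/a) * integral of f over [0, a].
    h = a / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h / a

for a in (10, 100, 1000):
    print(a, average(a))
# The averages approach 2 as a grows.
```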
https://www.gamedev.net/forums/topic/653538-bindless-texture-bug/
# Bindless texture bug?

## Recommended Posts

Chris_F

I'm trying out bindless textures and I noticed what I think may be a driver bug, but I am not certain. Basically, if my shader looks like this:

#version 440 core
#extension GL_ARB_bindless_texture : require
layout(location = 0) uniform sampler2D texture0;

I get an error saying that sampler handle updates are not allowed if the bindless_sampler qualifier is not set. Fair enough. Change the shader to this and all is well:

#version 440 core
#extension GL_ARB_bindless_texture : require
layout(location = 0, bindless_sampler) uniform sampler2D texture0;

However, if I do this:

#version 440 core
layout(location = 0) uniform sampler2D texture0;

then I get no errors and everything works fine, despite the fact that I am still using a bindless handle. The GL code is:

GLuint64 texture_handle = glGetTextureHandleARB(texture);
glMakeTextureHandleResidentARB(texture_handle);

richardurich

Undefined behavior is allowed to result in "everything works fine" and is not a driver bug. You can use GL_ARB_debug_output to help find cases of undefined behavior.

Chris_F

> Undefined behavior is allowed to result in "everything works fine" and is not a driver bug. You can use GL_ARB_debug_output to help find cases of undefined behavior.

I am using GL_ARB_debug_output, and that's what I mean by "no error" and "I think this is a 'bug'". Surely debug output should give some kind of warning for this.

richardurich

Short answer is yes, it should tell you about this type of stuff. As my prior post suggested, I expected that so strongly I assumed you must not even be using GL_ARB_debug_output. Unfortunately, it is not technically a bug though. GL_ARB_debug_output only requires information about errors, and undefined behavior may or may not generate an error. All other information reported is optional.
Are you using nVidia drivers? I know nVidia is a bit more lax about generating errors, so I'm just curious if that's all this is, or if maybe I'm wrong in thinking this is undefined in the first place.

Chris_F

Since this is OpenGL 4.4, it means Nvidia is currently the only possibility. This may not technically be a bug, but just the same I am going to report it in hopes that they will improve their debug output.
https://kb.osu.edu/dspace/handle/1811/12228
Knowledge Bank
University Libraries and the Office of the Chief Information Officer

Title: LINE STRENGTH OF THE ATOMIC CHLORINE $^{2}P_{1/2}\leftarrow\,^{2}P_{3/2}$ SPIN ORBIT TRANSITION
Creators: Stanton, A. C.; Wormhoudt, J.
Issue Date: 1985

Abstract: Direct absorption or emission measurements of the ground state spin-orbit transitions in the halogen atoms ($^{2}P_{1/2}\leftrightarrow\,^{2}P_{3/2}$ magnetic dipole transitions) have been reported for iodine, bromine, chlorine, and fluorine.$^{1,2}$ In the case of atomic fluorine, tunable diode laser absorption measurements have established an accurate value for the radiative lifetime, in good agreement with a calculation. As noted in Ref. 1, the only other measurement of this forbidden transition in a halogen has been for iodine, where there is also reasonable agreement with calculations. We present the measurement by diode laser absorption of the radiative lifetime for the analogous transition in atomic chlorine, together with a comparison with theoretical calculations. Since chlorine atoms are the principal active species in plasma etching of semiconductors and metals using chlorine-containing gases,$^{3}$ diode laser absorption has the potential of being a very useful diagnostic of these important microelectronics fabrication processes.

URI: http://hdl.handle.net/1811/12228
Other Identifiers: 1985-TC-11
https://electronics.stackexchange.com/questions/561153/how-to-calculate-c-for-an-am-demodulator
# How to calculate C for an AM demodulator?

I have read this equation from Savant's book that is used to calculate the capacitor for a demodulator:

I've tried to calculate my circuit with this equation, but it didn't work. Then I tried to recalculate this example from the book, but I don't know what 'w' is in the example:

A 15 MHz radio frequency carrier is modulated with a 5 kHz signal with a modulation index (m) of 0.5. If the load resistance is 5 kOhms, what capacitor value must be added in parallel to the load to filter the radio frequency signal? Answer: 0.013 uF

Can you please teach me how to obtain w so I can calculate C? I tried with 15 MHz and with 5 kHz, but I don't get the same answer.

$\omega$ is the angular frequency: $\omega = 2\pi f$. This makes your equation:

$$C = \frac{1}{2\pi f R_L m}$$

$$C = \frac{1}{2\pi (5000)(5000)(0.5)} = 0.01273239\,\mu\text{F} \approx 0.013\,\mu\text{F}$$

$f$ is the frequency of the modulation signal (5 kHz), not the carrier (15 MHz).
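The same computation can be scripted (a Python sketch of the formula above; the variable names are my own):

```python
import math

# Demodulator filter capacitor: C = 1 / (2*pi*f*R_L*m),
# where f is the modulating frequency, not the carrier.
f_mod = 5e3   # modulation frequency, Hz
R_L = 5e3     # load resistance, ohms
m = 0.5       # modulation index

C = 1 / (2 * math.pi * f_mod * R_L * m)
print(f"{C * 1e6:.4f} uF")  # 0.0127 uF, i.e. about 0.013 uF
```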
https://tex.stackexchange.com/questions/204881/error-missing-endgroup-for-table-tabulary-with-bullet-list/204926
# Error missing \endgroup for table tabulary with bullet list

I'm trying to insert a bullet list in a table of the type tabulary, but it always throws the following error:

Missing \endgroup inserted.
<inserted text>
\endgroup
\end{tabulary}

Code used:

\usepackage{tabulary}
%%
\begin{center}
\begin{tabulary}{0.7\textwidth}{|L|L|}
\hline
First Title & Second Title \\
\hline
Text &
\begin{itemize}
\item PointOne
\item PointTwo
\end{itemize} \\
\hline
\end{tabulary}
\end{center}

I've closed every \begin with an \end, and I've tried the same with the table type tabular and it works.

• Why specifically tabulary? Placing this in a fixed-width tabularx works. – Werner Oct 7 '14 at 0:26
• Because with tabulary it is easier to have different column sizes. I placed just an example; my real table will have more columns and more text in the bullet list. And still I don't know why LaTeX throws the error. – Nico Eli Oct 7 '14 at 0:38
• I'm sure one can duplicate the output with tabularx, but without a concrete example, it's difficult to say. tabularx also allows for "different column sizes". – Werner Oct 7 '14 at 0:46

## 2 Answers

I'm sure this must be documented in the tabulary documentation somewhere...
\documentclass{article}
\usepackage{tabulary}

\makeatletter
\def\TY@tab{%
  \setbox\z@\hbox\bgroup
  \let\[$\let\]$%
  \let\equation$\let\endequation$%
  \let\@itemdepth\count@
  \let\itemize\endgraf
  \let\enditemize\endgraf
  \let\enumerate\endgraf
  \let\endenumerate\endgraf
  \let\list\@gobbletwo\renewcommand\item[1][]{}%
  \let\endlist\endgraf
  \let\trivlist\endgraf
  \let\endtrivlist\endgraf
  \col@sep\tabcolsep
  \let\d@llarbegin\begingroup\let\d@llarend\endgroup
  \let\@mkpream\TY@mkpream
  \def\multicolumn##1##2##3{\multispan##1\relax}%
  \CT@start\TY@tabarray}
\makeatother

\begin{document}
\begin{center}
\begin{tabulary}{0.7\textwidth}{|L|J|}
\hline
First Title & Second Title \\
\hline
Text &
\begin{itemize}
\item PointOne
\item PointTwo
\end{itemize} \\
\hline
\end{tabulary}

\begin{tabular}{|p{3cm}|p{5cm}|}
\hline
First Title & Second Title \\
\hline
Text &
\begin{itemize}
\item PointOne
\item PointTwo
\end{itemize} \\
\hline
\end{tabular}
\end{center}
\end{document}

One can also make it work with the varwidth environment. I added the enumitem package to have control over the itemize parameters, and cellspace to ensure a minimal vertical spacing above and below cell contents in a given column: one adds the S pre-specifier before the L specifier:

\documentclass{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage[debugshow]{tabulary}
\usepackage{booktabs}
\usepackage{amsmath}
\usepackage{pbox, varwidth}
\usepackage{cellspace}
\usepackage{enumitem}
\setlength{\cellspacetoplimit}{6pt}
\setlength{\cellspacebottomlimit}{6pt}
\addparagraphcolumntypes{L}
\usepackage{makecell}
\newcommand*{\topdblline}{\Xhline{0.15ex}\\[-2.6ex]\hline}
\newcommand*{\botdblline}{\hline\\[-2.6ex]\Xhline{0.15ex}}

\begin{document}
\centering
%%
\begin{tabulary}{0.7\textwidth}{|L|SL|}
\hline
First Title & Second Title \\
\hline
Text &
\begin{varwidth}{0.7\textwidth}
\begin{itemize}[wide, itemsep=0.25ex]
\item PointOne
\item PointTwo
\end{itemize}
\end{varwidth} \\
\hline
\end{tabulary}
\end{document}

• using enumitem to control lists in tables
is a good idea (the default spacing often doesn't work too well in a tabular layout). The downside of using varwidth here (or not, depending) is that tabulary will only see the width of the box rather than the total width of all the items, so it won't allocate as much space to that column as it would have done. – David Carlisle Oct 7 '14 at 22:44
• @David Carlisle: I'm not sure I quite get what you mean about the downside of varwidth (a package which I don't know well): what I thought was that the length parameter to varwidth was a bound, and I set it with the value I had at hand; I might as well have chosen, say, 0.4\textwidth, but I didn't know whether both (real) columns have roughly the same width. – Bernard Oct 7 '14 at 22:59
• varwidth's an impressive package, but the way tabulary automatically allocates column widths is a 2-pass system, and in the first pass as much as possible is set as a single horizontal box (which is why the list environment generated an error); the length of the box for each column is used as a measure of the amount of text in that column, then the final table is set with parboxes in proportion to that. varwidth (or any parbox) hides its content from tabulary's measuring pass, so tabulary will only see the width of the box, not the width of its contents set on one line. – David Carlisle Oct 7 '14 at 23:03
• I see… So it's a matter of trial and error, and I might as well have used a minipage? Or will it be OK on a second pass, since, as far as I understand it, varwidth also measures the length of its contents?
– Bernard Oct 7 '14 at 23:10 • yes varwidth measures all kinds of stuff, the final table will work out with a reasonable layout I think (as your example shows) but (I think, it's late and I wrote this code decades ago:-) that typically tabulary will allocate a narrower column to the list in a varwidth than it would to a list at the top level (once it is fixed so that doesn't error) once the column width is allocated though varwidth and the list will get set properly in that width – David Carlisle Oct 7 '14 at 23:17
http://www.leehodgkinson.com/blog/onion-routing-under-the-hood/
This is an attempt at a mid-level overview of how Tor combines public-key encryption and symmetric-key cryptography to allow it to function. I should make it clear that I'm not an expert; these ideas really are just a compendium of things I've read here and there, and how I believe it to function. I'd be more than happy to edit the post if someone has a better idea and wants to correct the article.

# High-level overview

First a high-level overview. I'm not going to talk at all about what Tor is, the history of Tor or what it's used for – there are plenty of other resources where you can read about that. You should know roughly how Tor operates (connecting nodes in a circuit between your computer and its ultimate destination) and other high-level things about it before continuing.

This image was directly pilfered from Wikipedia. What this image represents is the layers of the "onion". The data packet is wrapped in successive layers of encryption, and each time the message traverses a node in the circuit, the outer layer is decrypted (or "peeled", to keep in line with the onion nomenclature and analogy). The key feature of this is that no single node knows both the origin of the packet (the client's computer) and the destination of the packet (e.g. some web server). A single node knows only the address of the previous node and the following node.

Of course, if the packet is ever to reach its ultimate destination, the IP of this destination must be there somewhere in the bundle, but the key is that it is buried under the layers of the onion. Of course the final node, or "exit node", could see the contents of the message itself if that message wasn't also encrypted (for example if the client does not establish a HTTPS connection with the webserver), so it's important to keep that in mind when using Tor and ensure HTTPS everywhere. Another feature is that no node knows whether the previous node was the client or whether it was just another node in the chain like itself.
The nodes that form a circuit or chain are mandated by what are known as the directory authority nodes. These also provide the client with the public key for each of the nodes in the chain, but more about that later.

# Asymmetric key cryptography

One of the prerequisites to understanding how Tor works is understanding how asymmetric-key cryptography works. A full account would be beyond the scope of this article, but the basic idea is that we have a pair of keys different from each other – a public key and a private key. As the names suggest, the public key is available to the world and the private key must be kept secret. Using the public key, anyone can encrypt a given message (or given block of data), and then only the holder of the private key can decrypt it.

Conversely, the private key could also be used to encrypt a message that the public key could decrypt. This is a feature of the mathematics. However, given that the public key is public, this kind of usage would be pointless for encryption purposes. When the keys are used in this manner we call it "signing", as it is useful for proving the authenticity of the message, provided only the originator has the private key.

One feature of asymmetric-key encryption is that it is computationally much slower than symmetric-key encryption. Some examples of asymmetric-key algorithms are RSA and ECC. Here is a nice guide to the maths of RSA and another here.

# Symmetric key cryptography

This is where both parties have the same (secret) key for both encrypting and decrypting messages. One of the main challenges is how to share such a secret between two physically distant parties without a third party, who is listening in, also obtaining the secret key. Diffie-Hellman (DH) key exchange provides a solution to this. I'm going to include an example here (again stolen from Wikipedia) just to make this concrete, so we can refer back to it when it comes to Tor.
The simplest and original implementation of the protocol uses the multiplicative group of integers modulo p, where p is prime and g is a primitive root modulo p. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to p−1. Here is an example of the protocol:

1) Alice and Bob agree to use a modulus p = 23 and base g = 9. (Strictly, 9 is not a primitive root modulo 23 – it generates a subgroup of order 11 – but the toy arithmetic below works the same; the standard Wikipedia example uses the primitive root g = 5.) Both of these things are public info that Eve can also note down.
2) Alice chooses a secret integer a = 4, then sends Bob A = g^a mod p, which for the values chosen means A = 6. Note Eve can see this, but if we're using big enough numbers then, because of the discrete log problem, Eve cannot simply invert to get a.
3) Likewise, Bob chooses a secret integer b = 3, then sends Alice B = g^b mod p, which for these values gives B = 16.
4) Alice computes s = B^a mod p, which gives s = 9.
5) Bob computes s = A^b mod p, which gives s = 9 also.
6) Alice and Bob now share a secret (the number 9). They got the same number because exponentiation commutes under mod; mathematically, (g^a)^b ≡ (g^b)^a ≡ g^(ab) (mod p).

Note that only a, b and the shared secret g^(ab) mod p are kept secret. All the other values – p, g, A, B – are sent in the clear. Once Alice and Bob compute the shared secret they can use it as an encryption key, known only to them, for sending messages across the same open communications channel. Take a look also at the video.

# Tor steps

## Step 1

The client has the public key of node 1 (N1), the first node in the circuit; it has obtained this from the directory server. Since we already have the public key of N1, we are able to send messages to N1 that N1, and only N1, can decrypt and read. You may wonder: since RSA private keys can be used to "encrypt" a message, which the public key can decrypt, can't N1 also send us back encrypted messages? Well yes, mathematically speaking, but they wouldn't be secure, because the public key is, well, public.
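The numbered exchange above can be run as a few lines of Python with the same toy values (Python's three-argument `pow` does modular exponentiation):

```python
# The toy Diffie-Hellman exchange above, step by step (insecure toy numbers).
p, g = 23, 9                  # public: modulus and base

a = 4                         # Alice's secret
A = pow(g, a, p)              # Alice sends A = g^a mod p  -> 6

b = 3                         # Bob's secret
B = pow(g, b, p)              # Bob sends B = g^b mod p    -> 16

s_alice = pow(B, a, p)        # Alice computes B^a mod p
s_bob = pow(A, b, p)          # Bob computes A^b mod p

assert s_alice == s_bob == 9  # both arrive at the same shared secret
```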
Using the keys that way around is therefore only good for signing. We would like to establish a session with N1 – in other words, establish a shared secret between us (CLIENT) and N1 so we may use symmetric-key crypto. To do this we will use Diffie-Hellman (DH) techniques. Taking the (toy) example from the previous section, we agree (publicly) to use modulus p = 23 and base g = 9. Then we (CLIENT) pick a secret integer a = 4. We then compute A = g^a mod p = 9^4 mod 23. So far just like regular DH, but this time we use PKN1 (N1's public key) to encrypt A, and we send encPKN1(A) to N1. Just like in regular DH, N1 now picks his random integer b and computes B = g^b mod p. It sends this to us in plain text, as the image above shows, with no envelope. This however is not an issue, because the initial request was encrypted with the public key, as I hope the following aside will make clear. The rest now proceeds as in a regular DH exchange: N1 computes the secret s = A^b, and CLIENT computes the secret s = B^a.

### Aside on why the public key encryption is needed at all

You may wonder why we need to encrypt our message to N1 with the public key at all. Isn't DH supposed to be immune to eavesdroppers and a way to establish a secret even when someone is watching? The caveat is that this is only true when the eavesdropper is passive and does nothing but listen; if we have an active eavesdropper, then MITM attacks could thwart our initial establishment of a secret without using the public key to encrypt the CLIENT→N1 component of the setup. You can imagine Eve sat in the middle of CLIENT and N1 as in the image. She intercepts A = g^a mod p, and instead of simply reading it passively then forwarding it on to N1, she decides to choose her own secret, c, and actually forwards A' = g^c mod p. N1 has no idea that this came from Eve and not the client. Now N1 will compute s' = (A')^b, a different secret. Also, when Eve receives B she will compute the same s' = B^c.
In other words, it is now Eve who has used DH techniques to establish a secret with N1, but N1 thinks this is a secret between himself and CLIENT. In exactly the same way, Eve can pick another random integer on the return journey, d, compute B' = g^d mod p and pass this back to the client instead of B. Eve can then compute s'' = A^d and the client will compute the same s'' = (B')^a. In other words, with an active snooper, DH would be vulnerable to MITM-style attacks without using public-key encryption for at least one leg of the journey. The client would think it has a session key with N1 and N1 would think it has a session key with the client, but really we'd have two sessions, CLIENT ↔ EVE and EVE ↔ N1. Not good! Also check out this video explaining an active DH attack in more detail.

But what about the return trip being in plain text? It's true that active Eve could intercept and modify this data too, right? Yes, but it would do her little good. N1 would still have received the intended A = g^a mod p from the client, and using it N1 will compute a shared secret s = A^b, where b is N1's random private number. Now if Eve had tampered with the return packet B = g^b mod p and made it, say, B' = g^d mod p, then when CLIENT tried to compute the same shared secret that N1 holds he'd do (B')^a = g^(da) mod p, which is not the same as g^(ab) mod p. In this manner, CLIENT and N1 would end up with different secrets, and they wouldn't be able to communicate at all. Eve therefore has the ability to disrupt the communications in this manner, but not the ability to snoop on them. I imagine one more layer of protection that public-key encryption is giving us is that N1 will sign the return message with its private key, so the CLIENT (or of course anyone else), using the public key, can verify the message came from N1 and not someone else.
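Here is a sketch of this active attack with the same toy numbers. Eve's secrets c and d are values I've picked purely for illustration:

```python
# Active MITM against unauthenticated DH, with the toy numbers from above.
# Eve's secrets c and d are illustrative values chosen for this sketch.
p, g = 23, 9
a, b = 4, 3                  # CLIENT's and N1's secrets
c, d = 5, 7                  # Eve's secrets, one per leg

A = pow(g, a, p)             # CLIENT sends A... (intercepted by Eve)
A2 = pow(g, c, p)            # ...but Eve forwards A' = g^c instead
B = pow(g, b, p)             # N1 sends B... (intercepted by Eve)
B2 = pow(g, d, p)            # ...but Eve forwards B' = g^d instead

s_n1 = pow(A2, b, p)         # N1's "shared" secret
s_eve_n1 = pow(B, c, p)      # Eve holds the same one
assert s_n1 == s_eve_n1

s_client = pow(B2, a, p)     # CLIENT's "shared" secret
s_eve_client = pow(A, d, p)  # Eve holds that one too
assert s_client == s_eve_client

assert s_client != s_n1      # CLIENT and N1 hold *different* keys
```

Eve ends up holding both session keys while CLIENT and N1 hold different ones: exactly the two-session CLIENT ↔ EVE and EVE ↔ N1 situation described above.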
## Step 2

Now CLIENT and N1 have established a session (they have a shared secret) and can efficiently communicate with the faster methods of symmetric-key crypto, dispensing with the slow and computationally weighty public-key crypto. This is the first layer of our onion! What we'd like to do next is extend the circuit to the second node (N2) that the directory server has chosen for us. Again we have N2's public key (PKN2) and we can use it to encrypt a message that N2, and only N2, can read. This is important: not even N1 can read that message, nor Eve. We once again begin the DH dance. We choose our random number, a (different from the first time we did this with N1!), and we need to find a way to send our A = g^a mod p to N2. We only want N2 to be able to read this, so once again we encrypt it using the public key – encPKN2(A) – and then we use our shared secret (SS1) with N1 to further wrap it, i.e. SS1(encPKN2(A)). This means Eve now has two layers of encryption between her and the juicy data. When N1 receives this data, it can unwrap the SS1 layer using the shared secret we established, and find under it encPKN2(A). Note it cannot see what A is itself – one more precaution against it performing a MITM attack in the manner described in the earlier aside. N1 will just see that the packet should be forwarded to N2 and do so. N2 will receive this packet, encPKN2(A), and decrypt it using its private key to obtain A. It will pick a private random number b, and compute the session secret s2 = A^b and B = g^b mod p, the latter of which it will send back to N1. It may sign the message containing B with its private key to authenticate the message's origin; Eve or anyone else could use the public key to read that message, but just as explained earlier, this does not matter. N1 will encrypt B with SS1 before sending it back to CLIENT (not that it really matters, as the N2→N1 leg was in clear text anyway).
CLIENT can now decrypt using SS1 to get B, and compute s2 = B^a. Now the CLIENT and N2 have a shared secret that N1 (and anyone else) doesn't know.

## Step 3

Hopefully it's obvious how this could be extended to the third node and so on if desired. The client would send SS1(SS2(encPKN3(g^a mod p))). N1 would peel SS1 and forward to N2. N2 would peel the remaining SS2 and forward encPKN3(g^a mod p) to N3, who could decrypt it using its private key. In this way CLIENT and N3 would end up establishing a secret, s3, that neither N1 nor N2 (nor anyone else) knows.

## Step 4

By now we have extended the circuit to include a given number of nodes (probably at least three). We, the CLIENT, have established shared secrets with multiple nodes, N1, N2, N3, by leveraging a combination of public-key cryptography and DH key exchange techniques, and now we can communicate with each using symmetric-key crypto in a manner that only that particular node can read. If we want to send a message to some web server, what we can do, therefore, is take our message (and message header, which will contain the destination IP address of the message, e.g. the IP for google.com), then first encrypt (wrap) it using SS3 (the secret with the exit node, N3). Next we wrap it with SS2 (the secret with N2) and add an intermediary header containing the IP address of N3. Next we wrap it with SS1 (the secret with N1) and add an intermediary header containing the IP address of N2. In this way we have SS1(SS2(SS3(msg)))… and the layers of encryption are stacked up like an onion. N1 gets this onion and strips off the outer layer SS1. It can see the header that tells it to pass the packet to N2's IP, and it obviously knows the IP of the previous node, but it doesn't know that the previous node was the originator and not just another node in the circuit. Nor can it break the two layers of encryption remaining to read the message or lower-level forwarding information.
It accordingly forwards SS2(SS3(msg)) to N2. N2 now strips away SS2, leaving SS3(msg). Note again it can't read the msg. It can read the header telling it to forward the package to N3, and it knows the previous node N1, but nothing about the client. Finally, N3 gets SS3(msg). It strips the final layer to reveal the message. Note that if the message itself isn't encrypted (between the client and web server) then N3 can read the message in plain text. This is why it's still important to use HTTPS with Tor.

On the return trip with the message from the webserver, rmsg, the exit node wraps it in SS3(rmsg) and passes it to N2, which wraps it in SS2(SS3(rmsg)) before passing it to N1, which wraps it as SS1(SS2(SS3(rmsg))) before passing it back to the client. The client has all these secrets, so it can unwrap the full onion to get rmsg. Note that the exit node could read the response from the server, but N2 and N1 cannot, because it has again been wrapped in SS3 and SS2 successively at each hop. Note the return message doesn't contain any information in the header (or headers) that can be used to route it back to the client (not even encrypted ones); it's simply a case of each node forwarding the data back to the place that made the request to it initially, playing pass the parcel in reverse (I believe, anyway).
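As a toy model of Step 4's wrapping and peeling, the sketch below fakes each symmetric layer with a throwaway XOR keystream (standing in for real symmetric encryption; the keys, node names and message are all invented for illustration). It only demonstrates the layering: each hop peels exactly one layer, and only the exit node sees the plaintext.

```python
# Toy sketch of onion wrapping/peeling. The XOR "cipher" below is NOT real
# encryption; it just gives us an invertible per-key transformation.
import hashlib
from itertools import count

def keystream(key: bytes):
    # Infinite hash-derived byte stream, deterministic per key
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream: applying it twice restores the input
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

SS1, SS2, SS3 = b"secret-with-N1", b"secret-with-N2", b"secret-with-N3"
msg = b"GET / HTTP/1.1 -> destination web server"

# CLIENT builds the onion: SS1(SS2(SS3(msg)))
onion = xor_crypt(SS1, xor_crypt(SS2, xor_crypt(SS3, msg)))

at_n2 = xor_crypt(SS1, onion)        # N1 peels its layer, forwards to N2
at_n3 = xor_crypt(SS2, at_n2)        # N2 peels its layer, forwards to N3
plaintext = xor_crypt(SS3, at_n3)    # exit node N3 peels the last layer

assert at_n2 != msg and at_n3 != msg # intermediate hops can't read it
assert plaintext == msg              # only the exit node recovers the message
```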
# Purely combinatorial proof that $(e^x)' = e^x$

At the beginning of Week 300 of John Baez's blog, Baez gives a proof that the "number" of finite sets (more specifically, the cardinality of the groupoid of all finite sets, where an object in the groupoid counts as $1/n!$ if it has $n!$ symmetries) equals $e$. He then says that this leads to a purely combinatorial proof that $e^x$ is its own derivative. Can anyone explain the purely combinatorial proof?

---

I am not quite sure how to translate this into groupoid cardinality language, but here is the standard proof. Suppose $A(x) = \sum_{n \ge 0} a_n \frac{x^n}{n!}$ is an exponential generating function. Then we should interpret $a_n$ as the number of ways to put a certain structure on a set of size $n$. For example, when $a_n = 1$ this is the structure of "being a set." When $a_n = n!$ this is the structure of "being a totally ordered set." And so forth. We will call this an $A$-structure.

Then $A'(x) = \sum_{n \ge 0} a_{n+1} \frac{x^n}{n!}$ can be interpreted as having coefficients $b_n = a_{n+1}$ which count the number of ways to add an element to a set of size $n$ and then put an $A$-structure on the resulting set of size $n+1$. This is a purely combinatorial definition of differentiation. With this definition, the proof is quite obvious: there is exactly one way for a set to be a set, and there is also exactly one way to add an element to a set and then make the result a set. So $\frac{d}{dx} e^x = e^x$.

This proof might seem contentless. Try to see how it generalizes to show that $\frac{d}{dx} e^{ax} = ae^{ax}$ for any positive integer $a$, and if you're up for a challenge see if you can generalize it all the way to this identity.

Vaguely, the proof in groupoid cardinality language goes like this. For a finite set $X$ the groupoid of finite sets equipped with a function to $X$ has cardinality $e^{|X|}$.
(The morphisms between two objects $A \to X, B \to X$ in this category are isomorphisms $A \simeq B$ such that the obvious triangle commutes.) One way to think about this groupoid is as the groupoid of "colored" sets, where $X$ is the set of colors and an isomorphism must respect color. Then it is easy to see that an isomorphism class of colored sets with $|X|$ colors is the same thing as a disjoint union of isomorphism classes of $|X|$ sets, one for each color. One gets a direct interpretation of the terms in the expansion $\left( \sum_{n \ge 0} \frac{1}{n!} \right)^{|X|}$ this way.

Differentiation replaces $|X|^n$ with $n|X|^{n-1}$, which means that we replace functions from an $n$-element set $S$ to $X$ with functions from $S - \{ s \}$ to $X$ where $s$ ranges over all elements of $S$. The resulting groupoid is still the groupoid of finite sets equipped with a function to $X$; in particular, it has the same cardinality. (Note that $X$ does not really have to be a finite set of a particular size for this argument to work; it can be a "formal" set in the same way that $x$ is a formal variable, and the resulting groupoid cardinality is a generating function instead of a number. I think this is what the formal theory of "stuff types" is for, but I am not familiar with it.)
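One can sanity-check the combinatorial definition of differentiation ($b_n = a_{n+1}$) for the suggested $e^{ax}$ exercise by brute force: take an $A$-structure on an $n$-set to be a coloring by $k$ colors (so $a_n = k^n$), and check that adding an element and then coloring multiplies the count by $k$. This enumeration is only an illustrative check, not part of the proof above.

```python
# Brute-force check of the combinatorial derivative for e^(kx):
# an A-structure on an n-set is a coloring by k colors (a_n = k^n),
# and the derivative's coefficient b_n counts colorings of an (n+1)-set.
from itertools import product

def count_colorings(n, k):
    # number of functions from an n-element set to a k-element set
    return sum(1 for _ in product(range(k), repeat=n))

for k in (1, 2, 3):
    for n in range(5):
        a_n = count_colorings(n, k)
        b_n = count_colorings(n + 1, k)  # add an element, then color
        assert b_n == k * a_n            # matches d/dx e^(kx) = k e^(kx)
```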
# What is the difference between a subdivision surface modifier and a multiresolution modifier?

I'm currently modeling my game character in Blender (see below picture). What is the difference between a subdivision surface modifier and a multiresolution modifier? I know they smooth the surface, but are there specific characteristics to using each, and what are those?

---

When using the sculpt tool, you will sometimes want to add more detail than your mesh has vertices to support. You could add vertices by subdividing the mesh, but subdividing the mesh will increase the complexity of the model, slowing Blender down, making the model larger, and making the mesh less usable for certain applications (like games).

Subsurf allows you to increase the vertices as a modifier, but isn't affected by sculpting. Its purpose is to add geometry to the whole mesh and give a lower-poly figure higher quality without increasing the complexity of the mesh. You cannot move, alter or sculpt the new vertices without applying the subsurf; it wasn't designed for it.

The multiresolution modifier was added to allow you to add geometry for the purpose of adding detail to a sculpt. Vertices in a multires modifier are affected by a sculpt even when not applied. This allows you to sculpt with more verts than the base mesh has. This is useful because you can sculpt at a higher resolution, bake out the sculpt as a map, and apply it to the mesh as a normal map, without increasing the actual number of vertices the mesh has.

So, if you're not sculpting, subsurf and multiresolution are logically equivalent, and the preference is for subsurf (less data in memory, easier interface). If you are sculpting, and want to keep the mesh low poly but need more detail in the sculpt, use multiresolution and then bake out the normal map (which can then be applied to the surface of the object as a texture in the Cycles node editor).

• I appreciate the clarity in your response. Very easy to follow, thank you.
– John H Apr 26 '14 at 1:29
• This was a really helpful solution for me! It should be accepted as a solution. – babaliaris Jun 11 '19 at 23:59

From the wiki:

> Another way to subdivide is with the MultiResolution Modifier. This differs from Subsurf in that MultiRes allows you to edit the mesh at several subdivision levels without losing information at the other levels. It is slightly more complicated to use, but more powerful.

In other words: changes made to the mesh (via sculpting) will be applied to lower and higher subdivision levels as well as the current level. To apply changes to the base mesh you must press Apply Base on the modifier. This is different from the subsurf modifier, where the subdivided mesh cannot be edited unless the modifier is applied.

## Baking:

Another advantage of the multires modifier (in certain cases, of course) is the normal map baking workflow:

1. Model a low poly mesh (in this case a cube) and UV unwrap (you can UV unwrap later too)

Note that you can also bake normals to the base mesh if the preview level is 0.
# Better method for my property checking, replacement of switch statement

I'm hoping this is the correct place for this; I'm struggling a little for wording as I'm not sure what you would call this. Basically, I have a system in place where my datagrid marks cells that have changed with a new background colour. To do this I have a method in the object that contains these properties that receives a string which is the name of the property to check, and then a switch statement that takes that string to check the correct property.

```csharp
public Color HasChanged(string value)
{
    switch (value)
    {
        case "CILRef":
            if (_localShipment.cilRef != _originalShipment.cilRef)
            {
                return Colors.SkyBlue;
            }
            else
            {
                return Colors.White;
            }
        case "ArrivedAtPortDate":
            if (_localShipment.arrivedAtPortDate != _originalShipment.arrivedAtPortDate)
            {
                return Colors.SkyBlue;
            }
            else
            {
                return Colors.White;
            }
    }
```

I've removed the rest of the properties for brevity. Now I get the nagging sensation that there is a cleaner way to do this string→property lookup without using a switch statement, but I can't for the life of me find anything on Google; it's hard to search without some keyword to go on.

I'm also attempting to only save those properties that have changed. I was going to place any changed property name into an array, and then have a loop with yet another switch statement that checked that array and then saved the correct property. However, this again seems untidy to me. Is there a cleaner solution to this, hopefully one that could handle the addition of new properties without needing to add new code to the switch statements?

I can include the rest of the code that does this checking (namely the WPF binding on the datagrid, and a converter that calls the checking method with the property name as a string parameter) if needed.

EDIT: To show the rest of my code, hopefully explaining a few things. I have a datagrid in XAML that contains these properties.
Below is an example:

```xml
<DataGridTemplateColumn Header="CILRef">
    <DataGridTemplateColumn.CellTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding CILRef}">
                <TextBlock.Background>
                    <SolidColorBrush Color="{Binding Converter={StaticResource hasChangedConverter}, ConverterParameter='CILRef'}"/>
                </TextBlock.Background>
            </TextBlock>
        </DataTemplate>
    </DataGridTemplateColumn.CellTemplate>
    <DataGridTemplateColumn.CellEditingTemplate>
        <DataTemplate>
        </DataTemplate>
    </DataGridTemplateColumn.CellEditingTemplate>
</DataGridTemplateColumn>
```

As you can see by this line:

```xml
<SolidColorBrush Color="{Binding Converter={StaticResource hasChangedConverter}, ConverterParameter='CILRef'}"/>
```

the background is bound using a converter, which is:

```csharp
class HasChangedConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        try
        {
            var shipment = value as Shipment;
            var property = parameter as string;
            return shipment.HasChanged(property);
        }
        catch (Exception ex)
        {
            return Colors.HotPink;
        }
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new Exception("The method or operation is not implemented.");
    }
}
```

Hopefully this will explain how the code works better than I've been doing in the comments.

• That doesn't really change much from the points I mentioned below, although as Vogel612 said, you shouldn't use exceptions in expected areas. You should return the pink value if the variable is null, not use the try block; they are slow and bad practice in logic. Binding is fine, but if the values are always the same and they match certain colours, pair them: have a collection matching values to enums/colours (I would use a dictionary), then your converter can just pull out the right value. Converters are meant to be lightweight ways to visualize data differently, not heavy business logic.
– apieceoffruit Mar 5 '14 at 11:34
• I'm still not sure what you mean by this. The design of this is: if the value has changed from the original loaded value, it displays sky blue. The colour is simply a representation of data that is dirty. – Ben Mar 5 '14 at 11:48
• @user1412240 please include the full switch statement in your question. Also: why not do the job with simple JavaScript? onChange="this.style.add('background-color', '#lightblueRGB');" – Vogel612 Mar 5 '14 at 12:00
• If you look at my own answer below, I have removed the switch statement completely now. Also, I was not aware you could use JavaScript in a WPF application; also, that would not be a "dirty" flag, just a changed flag, which is not what was requested. – Ben Mar 5 '14 at 12:04
• @user1412240 sorry, I'm not experienced with WPF, but I can tell that the difference between a dirty flag and a changed flag is only one if-statement. – Vogel612 Mar 5 '14 at 12:11

---

Ok, so having spoken to others on Stack Overflow, I have changed the code. This (I hope) satisfies all the issues I've pointed out and all the issues pointed out to me. The HasChanged method is now:

```csharp
public Color HasChanged(string value)
{
    try
    {
        var data1 = _localShipment.GetType()
            .GetProperty(value, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance)
            .GetValue(_localShipment, null);
        var data2 = _originalShipment.GetType()
            .GetProperty(value, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance)
            .GetValue(_originalShipment, null);

        return data1 != data2 ? Colors.SkyBlue : Colors.White;
    }
    catch (Exception ex)
    {
        return Colors.White;
    }
}
```

EDIT: This has been changed to the following methods:

```csharp
private object GetPropValue(object src, string propName)
{
    PropertyInfo p = src.GetType().GetProperty(propName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    string value = (string)p.GetValue(src, null);
    return value;
}

public bool HasChanged(string value)
{
    var data1 = GetPropValue(_localShipment, value);
    var data2 = GetPropValue(_originalShipment, value);
    return data1 != data2;
}
```

and the converter has been changed to:

```csharp
class HasChangedConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        if (value == null)
        {
            return Colors.White;
        }

        var shipment = value as Shipment;
        var property = parameter as string;
        if (shipment.HasChanged(property))
        {
            return Colors.SkyBlue;
        }
        else
        {
            return Colors.White;
        }
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new Exception("The method or operation is not implemented.");
    }
}
```

This solves the issues mentioned previously in this question.

• This is basically what I was going to post haha. It looks good, except that in your catch block you should instead handle the exception. (Unless of course you just want to consume the exception and return a default color...) – Max Mar 5 '14 at 13:01
• @Max: The part of your comment in parentheses is wrong, but only because the type of the exception is Exception. If there's a StackOverflowException or an OutOfMemoryException, trying to handle it is a very bad idea. No one wants that. This is more for the questioner's benefit than yours, though, since I assume you already know that. – Magus Mar 5 '14 at 16:04
• See the edits above; this has been changed in an attempt to solve the issues people have pointed out. – Ben Mar 5 '14 at 16:33

1.
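The reflection idea here isn't C#-specific. For comparison, the same "compare a property by name at runtime" trick can be sketched in Python with `getattr`; the `Shipment` class and field names below simply mirror the C# example and are illustrative only.

```python
# Cross-language illustration of the reflection approach: look a property
# up by name at runtime and compare it across two snapshots of the object.
# The Shipment class here just mirrors the C# example; it's illustrative.
from dataclasses import dataclass

@dataclass
class Shipment:
    cilRef: str
    arrivedAtPortDate: str

def has_changed(local, original, prop_name: str) -> bool:
    # getattr plays the role of GetType().GetProperty(...).GetValue(...)
    return getattr(local, prop_name) != getattr(original, prop_name)

original = Shipment("CIL-001", "2014-03-01")
local = Shipment("CIL-002", "2014-03-01")

assert has_changed(local, original, "cilRef") is True
assert has_changed(local, original, "arrivedAtPortDate") is False
```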
As you already discovered, if you want to access a member based on a string name, use reflection.
2. You shouldn't mix business logic with presentation: HasChanged shouldn't return a Color, it should return a bool. The colors should then be defined separately.
3. Use as only when you expect to get the wrong type, and in that case always test for null. If you're not going to do that, just use a cast.
4. I don't think that indicating a general exception using a color is a good idea, because it hides all information about what specific exception occurred. If you're expecting only some specific exception, then catch only that.

• Thanks for this. Much clearer than trying to understand via comments. I've already changed the gas change to return a boil and the converter handles the color. The try catch has been removed. Quick question tho: I've always been under the assumption that using as was faster and cleaner than a cast. That comes from uni and is most likely wrong – Ben Mar 5 '14 at 13:54
• Using as and a null check is going to be faster than cast and catch for NullReferenceException; that's probably where the speed argument comes from. But if you're not going to do that, just use a cast. And I think that clearer code is the one that more accurately describes your intentions – and here, that's a cast, not as. – svick Mar 5 '14 at 14:50
• Ok, great reply. Sorry about the big typos in the original comment; I was attempting to reply via phone for the first time, and it clearly did not go well. – Ben Mar 5 '14 at 14:52

---

Well, it seems to me you have a key and a chunk of code that returns a colour based on some boolean check. I would probably suggest a function collection, e.g.:

```csharp
Dictionary<string, Func<Color>> Actions;

void InitActions()
{
    Actions = new Dictionary<string, Func<Color>>();
    Actions.Add("CILRef", () =>
    {
        return _localShipment.cilRef != _originalShipment.cilRef ? Colors.SkyBlue : Colors.White;
    });
}
```

and to use it:

```csharp
public Color HasChanged(string value)
{
    Color highlightColor = Colors.White;
    if (Actions.ContainsKey(value))
        highlightColor = Actions[value]();
    return highlightColor;
}
```

Although that doesn't really solve your problem. I usually like to answer a person's question first in case they choose not to do a large refactor, but I would take a closer look at what you are trying to do. You have a number of larger design problems:

• You have a lot of repetition.
• Nested loops/logic are usually good indications of requiring a refactor.
• You have "magic strings": what if one of those is misspelt, will your whole application break?
• You shouldn't be tying application logic directly to the UI in the first place.

Ideally I imagine you want something more akin to:

```csharp
void DataGridSelectionChanged(object sender, EventArgs e)
{
    var grid = sender as Grid;
    if (grid == null)
        return;

    string selectedValue = grid.SelectedValue ?? DefaultValue;
    Color highlightColor = GetHighlightColour(selectedValue);
    ...
}
```

In short, the prospect of setting the highlight colour should not really be dependent on the raw values, and if indeed it has to be, it should be done in a safe manner. As it is, there are a number of ways to break your application: some accidentally while developing, others while using the application by passing invalid data.

• I appreciate your comments. This system was requested by the designer; it highlights changes to the datagrid that will be saved in the next "save all". These changes must be data based (different from the loaded data). Also, the default on the switch case throws an ArgumentOutOfRangeException. I hope this clears up some of your concerns and explains why I've chosen to do it this way. – Ben Mar 5 '14 at 10:40
• @user1412240 one should not use Exceptions for "expected" errors. Actually I do that myself more often than I want to admit, but you should instead return a meaningful error value.
;)# – Vogel612 Mar 5 '14 at 10:52 • Just to note, the string value that is passed is hard coded, not a parameter that is set by the user. It should only ever error if i've made a mistake during development, never during runtime. – Ben Mar 5 '14 at 11:06 • @user1412240 which goes back to my original point, why use raw strings that could cause a mistake in the first place? If they are hard coded and never change replace them for const strings and refer to them instead. – apieceoffruit Mar 5 '14 at 11:08 • I think it would help if i provided the rest of the code, perhaps that will explain better than i can via comments. Please see my edit (in a few mins) – Ben Mar 5 '14 at 11:09
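The dictionary-of-functions dispatch suggested above is not C#-specific. A minimal sketch of the same idea in Python (the `Shipment` class, field names, and values here are hypothetical stand-ins for the poster's actual types, not code from the thread):

```python
# Hypothetical stand-ins for the _localShipment/_originalShipment objects above.
class Shipment:
    def __init__(self, cil_ref, date):
        self.cil_ref = cil_ref
        self.date = date

local = Shipment("CIL-002", "2014-03-05")
original = Shipment("CIL-001", "2014-03-05")

# Map each field name to a "has this field changed?" predicate.
changed_checks = {
    "cilRef": lambda: local.cil_ref != original.cil_ref,
    "date": lambda: local.date != original.date,
}

def has_changed(field):
    # Unknown keys are treated as "unchanged" instead of raising.
    check = changed_checks.get(field)
    return check() if check is not None else False

print(has_changed("cilRef"))  # True: the references differ
print(has_changed("date"))    # False: the dates match
```

The UI layer can then map the returned bool to a highlight colour, which keeps presentation out of the comparison logic, as point 2 of the first answer recommends.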
https://www.aimsciences.org/article/doi/10.3934/dcds.2012.32.991
# American Institute of Mathematical Sciences

March 2012, 32(3): 991-1009. doi: 10.3934/dcds.2012.32.991

## Pullback $\mathcal{D}$-attractors for the non-autonomous Newton-Boussinesq equation in two-dimensional bounded domain

1 College of Science, Xi'an Jiaotong University, Xi'an, 710049, China

Received September 2010. Revised May 2011. Published October 2011.

We investigate the asymptotic behavior of solutions of a class of non-autonomous Newton-Boussinesq equation in two-dimensional bounded domain. The existence of pullback global attractors is proved in $L^2(\Omega)\times L^2(\Omega)$ and $H^1(\Omega)\times H^1(\Omega)$, respectively.

Citation: Xue-Li Song, Yan-Ren Hou. Pullback $\mathcal{D}$-attractors for the non-autonomous Newton-Boussinesq equation in two-dimensional bounded domain. Discrete & Continuous Dynamical Systems - A, 2012, 32 (3): 991-1009. doi: 10.3934/dcds.2012.32.991
https://cdsweb.cern.ch/collection/ALICE%20Theses?ln=it
# ALICE Theses

Latest additions:

2016-08-24 16:51
Study of Quark-Gluon Plasma by Measuring the Upsilon Production in p-Pb and Pb-Pb Collisions at Forward Rapidity with ALICE at the LHC / Khan, Palash
CERN-THESIS-2015-356 - Fulltext

2016-08-24 14:12
Photoproduction of $J/\psi$ in ultra-peripheral p-Pb and Pb-Pb collisions with the ALICE detector at the LHC / Adam, Jaroslav
The physics of ultra-peripheral collisions is introduced in the first part along with the fundamental concepts of several theoretical models [...]
CERN-THESIS-2016-092 - 192 p. Fulltext

2016-08-23 11:25
Multichannel system for acquiring timing and amplitude information from the T0 detector of the ALICE experiment (CERN, LHC) / Kondratiev, Natalia
According to the currently accepted hypothesis, our Universe was created more than 12 billion years ago during the so-called "Big Bang" [1] [...]
server - Only 1st page

2016-08-11 16:56
Multihadron production in High-Energy collisions and forward rapidity measurement of inclusive photons in Pb+Pb collisions at √sNN = 2.76 TeV in ALICE experiment at LHC / Mishra, Aditya Nath
CERN-THESIS-2016-087.

2016-07-22 21:22
Direct Photon Anisotropy and the Time Evolution of the Quark-Gluon Plasma / Browning, Tyler Allen
Historically, the thermal photon inverse slope parameter has been interpreted as the thermalization temperature of the QGP [...]
CERN-THESIS-2016-078 - Purdue University : ProQuest Information & Learning, 2016-07-22. - 150 p.
Fulltext

2016-07-08 20:03
Study of $\boldsymbol{J/\psi}$ production dependence with the charged particle multiplicity in p-Pb collisions at $\boldsymbol{\sqrt{s_{_{\mathrm{NN}}}}} =$ 5.02 TeV and pp collisions at $\boldsymbol{\sqrt{s}} =$ 8 TeV with the ALICE experiment at the LHC / Martin Blanco, Javier
A suppression of the $J/\psi$ production was found in Pb-Pb collisions at $\sqrt{s_{_{\mathrm{NN}}}} =$ 2.76 TeV, providing further evidence of the formation of a deconfined medium in ultra-relativistic heavy-ion collisions, the so-called Quark-Gluon Plasma [...]
CERN-THESIS-2016-070 - 309 p. Fulltext

2016-06-17 12:00
Elliptic flow at different collision stages / Dubla, Andrea
CERN-THESIS-2016-054 - 202 p. Fulltext

2016-06-14 18:24
$\Lambda/\rm K^0_s$ associated with a jet in central Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV measured with the ALICE detector / Richert, Tuva Ora Herenui
In high energy heavy ion collisions, the QCD matter undergoes a phase transition to a hot and dense strongly coupled Quark Gluon Plasma, where quarks and gluons are deconfined in a volume of nuclear dimensions [...]
CERN-THESIS-2016-051 - 209 p. Fulltext

2016-05-23 10:43
Measurement of Direct Photons in pp and Pb-Pb Collisions with Conversion Pairs / Wilde, Martin
CERN-THESIS-2015-338 - 213 p. Fulltext

2016-05-17 10:09
Full kinematic reconstruction of charged B mesons with the upgraded Inner Tracking System of the ALICE Experiment / Stiller, Johannes Hendrik
In this thesis, the performance of the full kinematic reconstruction of $\mathrm{B}^{+}$ mesons in the decay channel $\mathrm{B}^{+}\rightarrow\mathrm{\overline{D}^{0}}\pi^{+}$ ($\mathrm{\overline{D}^{0}}\rightarrow \mathrm{K}^{+}\pi^{-}$) and charge conjugates for the 0-10 % most centr [...]
CERN-THESIS-2016-037 - Fulltext
http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170108203405331678805
MathSciDoc: An Archive for Mathematician

Complex Variables and Complex Analysis, mathscidoc:1701.08004

Acta Mathematica, 215, (1), 55-126, 2012.6

We show that Thurston's skinning maps of Teichmüller space have finite fibers. The proof centers around a study of two subvarieties of the $\mathrm{SL}_2(\mathbb{C})$ character variety of a surface: one associated with complex projective structures, and the other associated with a 3-manifold. Using the Morgan-Shalen compactification of the character variety and the author's results on holonomy limits of complex projective structures, we show that these subvarieties have only a discrete set of intersections.

@inproceedings{david2012skinning,
  title={Skinning maps are finite-to-one},
  author={David Dumas},
  url={http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170108203405331678805},
  booktitle={Acta Mathematica},
  volume={215},
  number={1},
  pages={55-126},
  year={2012},
}

David Dumas. Skinning maps are finite-to-one. 2012. Vol. 215. In Acta Mathematica. pp. 55-126. http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170108203405331678805.
https://pos.sissa.it/363/001/
Volume 363 - 37th International Symposium on Lattice Field Theory (LATTICE2019) - Main session

Trace anomaly and dynamical quark mass

Y.B. Yang*, J. Liang, Z. Liu, P. Sun, on behalf of the XQCD Collaboration

Full text: pdf
Pre-published on: March 29, 2020
Published on: August 27, 2020

Abstract: We investigated the origin of the RI'/MOM quark mass under the Landau gauge at a non-perturbative scale, using chiral fermions with different quark masses and lattice spacings. Our result confirms that this mass is non-vanishing under linear extrapolation to the chiral and continuum limit, shows that it comes from the spontaneous chiral symmetry breaking induced by the near-zero modes with eigenvalue $\lambda < {\cal O}(5m_q)$, and that it is proportional to the quark matrix element of the trace anomaly at least down to $\sim 1.3$ GeV.

DOI: https://doi.org/10.22323/1.363.0001

Open Access
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-11-11-2-arithmetic-sequences-and-partial-sums-11-2-exercises-page-788/86a
## Algebra and Trigonometry 10th Edition

It is an arithmetic sequence because the difference between consecutive terms is constant:

$34.3-24.5=24.5-14.7=14.7-4.9=9.8=d$
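The constant-difference test used in the answer above can be verified numerically. A small sketch, assuming nothing beyond the four terms of the exercise:

```python
# Terms of the sequence from the exercise, in increasing order.
terms = [4.9, 14.7, 24.5, 34.3]

# Consecutive differences; rounding guards against floating-point noise.
diffs = [round(b - a, 10) for a, b in zip(terms, terms[1:])]

# The sequence is arithmetic exactly when all differences are equal.
is_arithmetic = len(set(diffs)) == 1

print(diffs)          # [9.8, 9.8, 9.8]
print(is_arithmetic)  # True
```

The common difference d = 9.8 recovered here matches the one computed by hand in the answer.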
https://www.acadsci.fi/mathematica/Vol33/HarjulehtoLatvala.html
Mathematica Volumen 33, 2008, 491-510

# FINE TOPOLOGY OF VARIABLE EXPONENT ENERGY SUPERMINIMIZERS

## Petteri Harjulehto and Visa Latvala

University of Helsinki, Department of Mathematics and Statistics, P.O. Box 68, FI-00014 University of Helsinki, Finland; petteri.harjulehto 'at' helsinki.fi

University of Joensuu, Department of Physics and Mathematics, P.O. Box 111, FI-80101 Joensuu, Finland; visa.latvala 'at' joensuu.fi

Abstract. We study the $p(\cdot)$-fine continuity in the variable exponent Sobolev spaces under the standard assumptions that $p : \Omega \to \mathbb{R}$ is $\log$-Hölder continuous and $1 < p^- \le p^+ < \infty$. As a by-product we obtain improvements in the variational exponent capacity theory and in the non-linear potential theory based on the $p(\cdot)$-Laplacian.

2000 Mathematics Subject Classification: Primary 31C05; Secondary 31C45, 46E35, 49N60.

Key words: Non-standard growth, variable exponent, Laplace equation, supersolution, fine topology.

Reference to this article: P. Harjulehto and V. Latvala: Fine topology of variable exponent energy superminimizers. Ann. Acad. Sci. Fenn. Math. 33 (2008), 491-510.
http://www.maths.ox.ac.uk/node/10908
# Floer cohomology and Platonic solids 2 December 2013 14:00 Yanki Lekili Abstract We consider Fano threefolds on which SL(2,C) acts with a dense open orbit. This is a finite list of threefolds whose classification follows from the classical work of Mukai-Umemura and Nakano. Inside these threefolds, there sits a Lagrangian space form given as an orbit of SU(2). We prove this Lagrangian is non-displaceable by Hamiltonian isotopies via computing its Floer cohomology over a field of non-zero characteristic. The computation depends on certain counts of holomorphic disks with boundary on the Lagrangian, which we explicitly identify. This is joint work in progress with Jonny Evans. • Geometry and Analysis Seminar
https://socratic.org/questions/acetylene-is-used-in-blow-torches-and-burns-according-to-the-following-equation-
# Acetylene is used in blow torches, and burns according to the following equation: 2 C2H2(g) + 5 O2(g) → 4 CO2(g) + 2 H2O(g). Use the following information to calculate the heat of reaction:

ΔH°f (H2O(g)) = -241.82 kJ/mol
ΔH°f (CO2(g)) = -393.5 kJ/mol
ΔH°f (C2H2(g)) = 226.77 kJ/mol

May 5, 2016

$\Delta H_{\mathrm{rxn}} = -2511.18\ \mathrm{kJ}$

#### Explanation:

Hess's Law states that the overall enthalpy change of a reaction is independent of the route taken. Thermodynamics is concerned with initial and final states, and the law is a consequence of the conservation of energy.

You can solve this problem by constructing a Hess cycle. Write down the reaction you are interested in. Below this, write down the elements from which the reactants and products are made. Then complete the cycle as shown.

Notice I have multiplied the $\Delta H_{\mathrm{f}}^{\circ}$ values by the relevant stoichiometric numbers. In energy terms the BLUE route must equal the RED route, since the arrows start and finish in the same place. So we can write:

$(2 \times 226.77) + \Delta H = (4 \times -393.5) + (2 \times -241.82)$

$\therefore\ 453.54 + \Delta H = -1574 - 483.64$

$\Delta H = -2511.18\ \mathrm{kJ}$
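The same "products minus reactants" bookkeeping can be checked in a few lines, using only the formation enthalpies given in the question (note that 2 × 226.77 = 453.54 kJ):

```python
# Standard enthalpies of formation, kJ/mol, as given in the question.
# O2 is an element in its standard state, so its value is 0.
dHf = {"C2H2": 226.77, "O2": 0.0, "CO2": -393.5, "H2O": -241.82}

# 2 C2H2(g) + 5 O2(g) -> 4 CO2(g) + 2 H2O(g)
products = 4 * dHf["CO2"] + 2 * dHf["H2O"]
reactants = 2 * dHf["C2H2"] + 5 * dHf["O2"]

dH_rxn = products - reactants
print(round(dH_rxn, 2))  # -2511.18 (kJ)
```

This is just Hess's Law written as a sum: the enthalpy of reaction equals the formation enthalpies of the products minus those of the reactants, each weighted by its stoichiometric coefficient.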
https://www.gamedev.net/forums/topic/342285-resolved-vc-2005-express--beta-and-dx9-linker-error/
# Resolved: VC++ 2005 Express Beta and DX9 linker error

## Recommended Posts

Did you try to compile a simple Windows application? If your VS compiles only console applications, then you didn't set up your VS 2005 correctly. There are two ways to make VS 2005 Beta 2 compile a Windows app:

1. The way described on the page you mentioned.
2. Assuming that you have installed VS in the Program Files directory, create a directory "PlatformSDK" under "Program Files\Microsoft Visual Studio 8\VC". Copy the "bin", "include" and "lib" directories from the PlatformSDK installation into this directory.

For more details check this link: http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=2995

---

I can build both Win32 console applications and Win32 windows applications, the latter being made through the Win32 wizard, which results in a window with an "about box" and a File->Exit menu. I'll uninstall everything and try the second method that you propose. I'll let you know if it works out! :)

---

I don't think you have to reinstall VS or the Platform SDK. If you are able to compile Windows applications, most probably your VS is configured with the Platform SDK correctly. Unfortunately I don't have the August SDK installed on my machine. Maybe someone around here can compile that sample code and give you a hand with this problem.

[Edited by - Calin on August 31, 2005 7:45:18 AM]

---

Hey there,

The problem is solved (at least as far as compiling DX9 programs goes). The sample programs tend not to compile because the compiler uses Unicode, and I haven't figured out how to have it use ANSI instead. The solution was to create include and lib system variables, assign the associated directories to them, and then use those variables as include and lib sources for the VC++ environment. Thanks for the help!

---

I just remembered I had the same problem when I started using VS Express Beta 2 (a month ago). Too bad it didn't cross my mind initially. Anyways, I'm glad you solved your problem. BTW: I have read your blog. Interesting. I just started to make a terrain engine too.

[Edited by - Calin on September 1, 2005 2:16:00 PM]

---

> Original post by Calin: I just remembered I had the same problem when I started using VS Express Beta 2 (a month ago). Too bad it didn't cross my mind initially. Anyways, I'm glad you solved your problem. BTW: I have read your blog. Interesting. I just started to make a terrain engine too.

Maybe we can share ideas about our progress with our respective terrain engines? You are free to comment on the blog. I hope to start uploading pictures of my progress soon. Thanks.
http://openstudy.com/updates/50a9a9ade4b06b5e4932d869
Cj1213: What is the limit of $\cos(x)\ln(x)$ as $x$ approaches $0^+$?

satellite73: What is $\lim_{x\to 0^+}\cos(x)$?

Cj1213: Oh, um, isn't it just 1?

satellite73: Yes, since cosine is continuous and $\cos(0)=1$. And what is $\lim_{x\to 0^+}\ln(x)$?

Cj1213: When I did it I got 0, but I wasn't sure if that was right.

satellite73: Oh no, it is $-\infty$. So you end up with $-\infty$.

Cj1213: Oh, okay, thanks.

satellite73: yw
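The conclusion can be checked numerically with a short, stdlib-only Python sketch (not part of the original exchange): since $\cos(x)\to 1$ while $\ln(x)\to-\infty$, the product is dominated by the logarithm and diverges to $-\infty$.

```python
import math

# Evaluate f(x) = cos(x) * ln(x) as x approaches 0 from the right.
# cos(x) -> 1 while ln(x) -> -infinity, so the product diverges to -infinity.
def f(x):
    return math.cos(x) * math.log(x)

for x in (1e-1, 1e-3, 1e-6, 1e-9):
    print(f"x = {x:>6g}  f(x) = {f(x):.4f}")
```

The printed values grow more and more negative as `x` shrinks, matching the answer in the thread.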
https://ec.gateoverflow.in/1787/gate-ece-2011-question-20
In the circuit shown below, capacitors $C_1$ and $C_2$ are very large and act as shorts at the input frequency. $v_i$ is a small-signal input. The gain magnitude $\left|v_o/v_i\right|$ at $10\;\mathrm{Mrad/s}$ is

1. maximum
2. minimum
3. unity
4. zero
http://www.transtutors.com/questions/a-consumer-is-in-equilibrium-at-point-a-in-the-363105.htm
# A consumer is in equilibrium at point A

A consumer is in equilibrium at point A in the accompanying figure. The price of good X is $5.

a. What is the price of good Y?
b. What is the consumer's income?
c. At point A, how many units of good X does the consumer purchase?
d. Suppose the budget line changes so that the consumer achieves a new equilibrium at point B. What change in the economic environment led to this new equilibrium? Is the consumer better off or worse off as a result of the price change?

## Solutions:

### 1 Approved Answer

a. Since the budget line through A has slope −20/20 = −1 and the price of X is $5, the price of Y is also $5.
b. With all her income the consumer can purchase 20 units of X at $5 each, so her income must be $100.
c. At point A the consumer spends 5 × 10 = $50 on X, i.e. she purchases 10 units of good X.
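The arithmetic in parts (a)-(c) can be sketched in a few lines of Python. Since the figure itself is not reproduced here, the specific numbers (slope −1, X-intercept of 20 units, 10 units of X at point A) are assumptions taken as illustrative inputs:

```python
# Budget line: Px*X + Py*Y = I. A slope of -Px/Py = -1 with Px = 5
# implies Py = 5; an X-intercept of 20 units then implies income I = 100.
Px = 5.0
slope = -1.0                          # slope of the budget line = -Px/Py
Py = -Px / slope                      # (a) price of good Y
x_intercept = 20                      # maximum affordable units of X
income = Px * x_intercept             # (b) consumer's income
x_at_A = 10                           # (c) units of X purchased at point A
y_at_A = (income - Px * x_at_A) / Py  # units of Y at point A

print(f"Py = {Py}, income = {income}, bundle at A = ({x_at_A}, {y_at_A})")
```

With these inputs the bundle at A exhausts the budget exactly, as an equilibrium on the budget line must.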
https://www.investopedia.com/terms/m/mertonmodel.asp?utm_campaign=rss_headlines&utm_source=rss&utm_medium=referral
# Merton Model

## What Is the Merton Model?

The Merton model is a mathematical formula that stock analysts and commercial loan officers, among others, can use to judge a corporation's risk of credit default. Named for the economist Robert C. Merton, who proposed it in 1974, the Merton model assesses the structural credit risk of a company by modeling its equity as a call option on its assets.

### Key Takeaways

- In 1974, economist Robert C. Merton proposed a model for assessing the credit risk of a company by modeling its equity as a call option on its assets.
- The Merton model is used today by stock analysts, commercial loan officers, and others.
- Merton's work, and that of fellow economist Myron S. Scholes, earned the Nobel Prize for economics in 1997.

## The Formula for the Merton Model

$$E=V_tN\left(d_1\right)-Ke^{-r\Delta T}N\left(d_2\right)$$

where

$$d_1=\frac{\ln{\frac{V_t}{K}}+\left(r+\frac{\sigma_v^2}{2}\right)\Delta T}{\sigma_v\sqrt{\Delta T}}\qquad\text{and}\qquad d_2=d_1-\sigma_v\sqrt{\Delta T}$$

- $E$ = theoretical value of the company's equity
- $V_t$ = value of the company's assets in period $t$
- $K$ = value of the company's debt
- $t$ = current time period
- $T$ = future time period, so $\Delta T = T - t$
- $r$ = risk-free interest rate
- $N$ = cumulative standard normal distribution
- $e$ = exponential term (i.e. $2.7183\ldots$)
- $\sigma_v$ = standard deviation (volatility) of the firm's asset returns

## What Does the Merton Model Tell You?

The Merton model allows for easier valuation of a company and also helps analysts determine if it will be able to retain solvency, by analyzing the maturity dates of its debt and its debt totals. The Merton model calculates the theoretical pricing of European put and call options without considering dividends paid out during the life of the option.
The model can, however, be adapted to consider dividends by calculating the ex-dividend date value of underlying stocks.

The Merton model makes the following basic assumptions:

- All options are European options and are exercised only at the time of expiration.
- No dividends are paid out.
- Market movements are unpredictable (efficient markets).
- No commissions are included.
- Underlying stocks' volatility and risk-free rates are constant.
- Returns on underlying stocks are normally distributed.

Variables that are taken into consideration in the formula include options' strike prices, present underlying prices, risk-free interest rates, and the amount of time before expiration.

## History of the Merton Model

Robert C. Merton is a noted American economist and Nobel Prize laureate, who purchased his first stock at age 10. He earned a bachelor of science in engineering at Columbia University, a master of science in applied mathematics at the California Institute of Technology, and a doctorate in economics at the Massachusetts Institute of Technology, where he later became a professor.

During Merton's time at MIT, he and fellow economists Fischer Black and Myron S. Scholes were all working on problems related to the pricing of options and often aided in each other's work. Black and Scholes published a seminal paper on the topic, "The Pricing of Options and Corporate Liabilities," in 1973. Merton's "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates," published early the following year, described what has come to be known as the Merton model.

Merton and Scholes were awarded the Nobel Prize for economics in 1997 (Black had died and was no longer eligible). The prize committee cited them for developing "a pioneering formula for the valuation of stock options. Their methodology has paved the way for economic valuations in many areas. It has also generated new types of financial instruments and facilitated more efficient risk management in society."
Their best known collaboration is often referred to today as the Black-Scholes-Merton model.

## What Is a Call Option?

A call option is a contract that allows the buyer to purchase a stock or other financial asset at a specified price by or on a certain date.

## What Is the Difference Between European and American Options?

European options can be exercised only on their expiration date, while American options can be exercised at any time.

## What Is a Risk-Free Interest Rate?

A risk-free interest rate is the theoretical rate of return on an investment carrying zero risk. It is theoretical because no investment is entirely without risk, although some come closer than others.

## The Bottom Line

The Merton model, developed by economist Robert C. Merton, is a mathematical formula that assesses the structural credit risk of a company by modeling its equity as a call option on its assets. It is often used by stock analysts and commercial loan officers to ascertain a corporation's likely risk of credit default.
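The valuation formula can be implemented in a few lines of stdlib-only Python. This is a sketch, not part of the article: the firm's numbers below are made-up illustrative inputs, and the standard normal CDF $N(\cdot)$ is built from `math.erf`.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(V, K, r, sigma_v, dT):
    """Theoretical equity value E = V*N(d1) - K*exp(-r*dT)*N(d2)."""
    d1 = (math.log(V / K) + (r + 0.5 * sigma_v**2) * dT) / (sigma_v * math.sqrt(dT))
    d2 = d1 - sigma_v * math.sqrt(dT)
    return V * norm_cdf(d1) - K * math.exp(-r * dT) * norm_cdf(d2)

# Illustrative (made-up) firm: assets worth 120, debt with face value 100
# due in one year, 5% risk-free rate, 25% asset volatility.
E = merton_equity(V=120.0, K=100.0, r=0.05, sigma_v=0.25, dT=1.0)
print(f"Equity value: {E:.2f}")
```

As a sanity check, the equity value must lie between the option's intrinsic lower bound $V - Ke^{-r\Delta T}$ and the asset value $V$ itself, which it does.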
https://physics.stackexchange.com/questions/597032/how-to-define-a-temperature-for-the-early-universe
# How to define a temperature for the early Universe?

The concept of a temperature $T$ is routinely used in discussing the thermodynamics of the early Universe. It enters the discussion via the equilibrium phase-space distributions of the various particles that comprise the thermal bath. Intuitively, it refers to the temperature of the thermal plasma at any point in time in the early universe. I think it should be thought of as one common temperature $T$ shared by all those particle species that are in equilibrium. When some species decouples (say, the cosmic photons after last scattering), its temperature evolves differently from that of the plasma as a whole. Is this intuitive picture sufficient to carry on with the study of the thermodynamics of the early universe, or do we need to be more precise and mathematical?

Your intuitive picture is correct. An example of a decoupled species is the cosmic neutrino background, which consists of neutrinos that decoupled from the rest of the universe when the universe was approximately one second old. These neutrinos have an estimated temperature of $1.95$ K, whereas the photons in the cosmic microwave background, which decoupled hundreds of thousands of years later, have a temperature of about $2.7$ K.
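The two temperatures quoted in the answer are consistent with the standard result that, after electron-positron annihilation heats the photons but not the already-decoupled neutrinos, the neutrino background is colder by a factor of $(4/11)^{1/3}$. A quick stdlib-only Python check (the measured CMB temperature of 2.725 K is the only input):

```python
# Standard cosmology: T_nu / T_gamma = (4/11)^(1/3) after e+e- annihilation.
T_photon = 2.725  # measured CMB temperature today, in kelvin
T_neutrino = (4.0 / 11.0) ** (1.0 / 3.0) * T_photon
print(f"Predicted neutrino background temperature: {T_neutrino:.3f} K")
```

The result reproduces the ~1.95 K figure quoted for the cosmic neutrino background.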
https://www.lessonplanet.com/teachers/irregular-familiar-commands
# Irregular Familiar Commands

Irregular familiar commands are easy to understand with this chart, and there's a catchy tune provided to help you remember the irregular commands!
http://mathoverflow.net/questions/141555/hodge-theory-voisin
# Hodge Theory (Voisin)

I have a strong understanding of representation theory but am interested in learning from Voisin, Hodge Theory and Complex Algebraic Geometry. What are the prerequisites to learning from this textbook? I haven't studied differential geometry in a while but am decently comfortable with the basics of the subject. Any textbook recommendations for a sort of roadmap for this textbook?

Voisin's book is quite (but not always) self-contained and well written. For supplementary references, you may use the first chapters of Griffiths-Harris, "Principles of Algebraic Geometry", which can help you understand and motivate the complex-geometric background of Hodge theory. The book by Carlson, Müller-Stach and Peters, "Period Mappings and Period Domains", is more readable and self-contained than Voisin's book. The book by Bertin, Demailly, Illusie and Peters, "Introduction to Hodge Theory", is less famous but a very good reference, especially if you are interested in interactions between complex Hodge theory and Hodge theory in characteristic $p$.

Note that nowadays there are a lot of online lecture notes that are simplified and can help you understand the content of Voisin's book much more easily. For example this or this, which is more detailed. By googling you can find even more references. Also, your background in representation theory can help you a lot, as it constantly appears in studying Hodge theory and in Voisin's book (for example, the very important notion of local systems is nothing but the study of representations of the fundamental group).
http://www.domyno.com/5uja0et/1c1130-force-sensor-wikipedia
# Force sensor

A sensor converts a physical quantity at its input into an electrical output; an actuator does the reverse, converting an electrical input into a physical output. A pressure sensor is a device for pressure measurement of gases or liquids. Pressure is an expression of the force required to stop a fluid from expanding, and is usually stated in terms of force per unit area; a pressure sensor usually acts as a transducer, generating a signal as a function of the pressure imposed. For the purposes of this article, such a signal is electrical. Force sensors and pressure sensors are two sides of the same concept, because pressure is force over area. Force sensors are applicable in a variety of fields such as the medical sector, the test and measurement sector, and the process, automation and control sector.

## Piezoelectric force sensors

A piezoelectric sensor is a device that uses the piezoelectric effect to measure changes in pressure, acceleration, temperature, strain, or force by converting them to an electrical charge; a piezoelectric disk, for example, generates a voltage when deformed. Piezoelectric force sensors are used for such measurements as impact, high-speed compression, oscillation, and tension. The charge output is collected on electrodes sandwiched between the crystals and is then either routed directly to an external charge amplifier or converted to a low-impedance voltage signal within the sensor.

## Force-sensing resistors

A force-sensing resistor is a material whose resistance changes when a force, pressure or mechanical stress is applied. They are also known as "force-sensitive resistors" and are sometimes referred to by the initialism "FSR". The technology of force-sensing resistors was invented and patented in 1977 by Franklin Eventoff. Force-sensing resistors consist of a conductive polymer which changes resistance in a predictable manner following application of force to its surface. The sensing film consists of both electrically conducting and non-conducting particles suspended in a matrix; the particle concentration is also referred to in the literature as the filler volume fraction φ. As with all resistive-based sensors, force-sensing resistors require a relatively simple interface and can operate satisfactorily in moderately hostile environments. A disadvantage is their low precision: measurement results may differ by 10% and more.

Two conduction mechanisms, percolation and quantum tunneling, actually occur simultaneously in the conductive polymer, but one dominates over the other depending on particle concentration. Under the percolation regime, the conductivity of the composite follows a power law in (φ − φ_c), where φ_c is the percolation threshold and the exponent is the critical conductivity exponent; when mechanical stress is applied, the particles are separated from each other, which causes a net increment in the device's resistance. Quantum tunneling is the most common operation mode of force-sensing resistors: the quantum tunneling operation implies that the average inter-particle separation is reduced under an applied stress, which increases the tunneling current.

The contact resistance R_C plays an important role in the current conduction of force-sensing resistors in a twofold manner. First, for a given applied stress σ, or force F, a plastic deformation occurs between the sensor electrodes and the polymer particles, thus reducing the contact resistance; the prefactors in this relationship are experimentally determined factors that depend on the interface material between the conductive polymer and the electrode. Second, the external voltage applied to the sensor, V_FSR, is split between the voltage drop across the contact resistance, V_Rc, and the bulk (tunneling) voltage, V_bulk, so that V_FSR = V_Rc + V_bulk. At a macroscopic scale the polymer surface is smooth; however, under a scanning electron microscope, the conductive polymer is irregular due to agglomerations of the polymeric binder.

The analytical derivation of the equations for tunneling through a rectangular potential barrier, including the Fermi-Dirac distribution, was found in the 60s by Simmons. The Simmons equations relate the current density J to the external applied voltage, and, like the tunneling current itself, they are piecewise functions: different expressions are stated depending on the magnitude of the voltage. Their practical usage is limited because they are stated in terms of the electron energy (in units of electron volts) and the height of the rectangular potential barrier, which are not a priori determined and cannot be set by the final user. The most widely accepted model for tunneling conduction has been proposed by Zhang et al. on the basis of such equations; a later model uses the entire set of Simmons equations and embraces the contact resistance within the model. Although widely accepted by many authors, these models have been unable to predict some experimental observations reported in force-sensing resistors. When subjected to dynamic loading, some force-sensing resistors exhibit degradation in sensitivity, and this sensitivity degradation is probably the most challenging phenomenon to predict. An alternative formulation accounts for the increment in the number of conduction paths with stress.

Although unable to describe the undesired phenomenon of sensitivity degradation, the inclusion of rheological models has predicted that drift can be reduced by choosing an appropriate sourcing voltage; this statement has been supported by experimental observations. Another approach to reduce drift is to employ non-aligned electrodes so that the effects of polymer creep are minimized. There is currently a great effort placed on improving the performance of FSRs with multiple different approaches: in-depth modeling of such devices in order to choose the most adequate driving circuit, changing the electrode configuration to minimize drift and/or hysteresis, investigating new material types such as carbon nanotubes, or solutions combining the aforesaid methods.

The Peratech sensors are also referred to in the literature as quantum tunnelling composite. Force-sensing resistors are used in Mixed or Augmented Reality systems as well as to enhance mobile interaction. A pedal force sensor is specially designed to measure the force applied to a brake, clutch, accelerator or any other similar pedal pressed by a human being or a machine; other applications include force feedback for the autopilot automatic disconnect function.

## Force-sensing capacitors

A force-sensing capacitor is a material whose capacitance changes when a force, pressure or mechanical stress is applied. They are also known as "force-sensitive capacitors". Force-sensing capacitors offer superior sensitivity and long-term stability, but require more complicated drive electronics. They can be used to create low-profile force-sensitive buttons. SingleTact makes force-sensitive capacitors using moulded silicone between two layers of polyimide to construct a 0.35 mm thick sensor, with force ranges from 1 N to 450 N. The 8 mm SingleTact has a nominal capacitance of 75 pF, which increases by 2.2 pF when the rated force is applied.

Typical force-sensitive capacitors are examples of parallel-plate capacitors, with capacitance C = εA/d, where ε is the permittivity, A is the area of the sensor and d is the plate separation. If the material is linearly elastic (so it follows Hooke's law), the displacement due to an applied force F is x = F/k, where k is the spring constant. For small deformations, where d >> x, there is a linear relationship between applied force and change in capacitance:

ΔC = εA/(d − x) − εA/d ≈ (εA/d²)x = (εA/(kd²))F

## Load cells

A load cell is a type of force sensor that, when connected to appropriate electronics, returns a signal proportional to the mechanical force applied to the system. Load cells can be hydraulic, pneumatic, or, most commonly, based on strain gauges; in the strain-gauge type, a strain gauge is mounted on a beam that deforms under load, and the sensors provide standardized analog strain-gauge bridge output signals in mV/V. Here, the nominal force denotes the intended maximum load of the sensor; the rigidity of force sensors is conditional, in the sense that force sensors can be broken if a certain condition arises. The natural frequency of force sensors is always specified as "unloaded", and for a good reason: placing a load on a force sensor creates, in effect, an accelerometer. Applications include manufacturing and machinery, airplanes and aerospace, cars, medicine, robotics and many other aspects of our day-to-day life.

## Related sensing devices

A CCD (charge-coupled device) has a grid of sensors that react to light, and a television tube or other CRT (cathode ray tube) detects electrons on its screen. A gyroscope (from Ancient Greek γῦρος gûros, "circle" and σκοπέω skopéō, "to look") is a device used for measuring or maintaining orientation and angular velocity; it is a spinning wheel or disc in which the axis of rotation (spin axis) is free to assume any orientation by itself.

In the Star Wars universe, Force sense was among the most basic of Force abilities. It could be used to feel another being's feelings, the future, ripples in the Force caused by momentous or traumatic events, impending danger and the presence of the dark side. A more concentrated, more directed form of this ability was likely how Jedi and Miralukas were able to see others without relying on their physical senses.
[19] Similarly, the Contact Resistance k [7] More recently, new mechanistic explanations have been established to explain the performance of force-sensing resistors; these are based on the property of contact resistance R o {\displaystyle s_{0}} Second, the uneven polymer surface is flattened when subjected to incremental forces, and therefore, more contact paths are created; this causes an increment in the effective Area for current conduction $${\displaystyle A}$$. {\displaystyle F} Specifically the force induced transition from Sharvin contacts to conventional Holm contacts. Traffic Engineering Rail Monitoring . , V Force applied to a sensing element affects the atomic structure of the material, causing a separation of charges and the output signal. F {\displaystyle \rho _{0}} 1 p The particles are sub-micrometre sizes, and are formulated to reduce the temperature dependence, improve mechanical properties and increase surface durability. Force sensors . d F b V Commercial FSRs such as the FlexiForce,[16] Interlink [17] and Peratech [18] sensors operate on the basis of quantum tunneling. where causes a probability increment for particle transmission according to the equations for a rectangular potential barrier. {\displaystyle \phi } is given by: where {\displaystyle \phi _{c}} and stress Our sensors are available off-the-shelf for prototyping or can be customized to meet the specific needs of your product design and application requirements. 0 {\displaystyle V_{bulk}} Several authors have developed theoretical models for the quantum tunneling conduction of FSRs,[20][21] some of the models rely upon the equations for particle transmission across a rectangular potential barrier. 
R is not straightforward measurable in practice, so the transformation {\displaystyle R_{C}} The multiple phenomena occurring in the conductive polymer turn out to be too complex such to embrace them all simultaneously; this condition is typical of systems encompassed within condensed matter physics. as next: By replacing sensor current l l {\displaystyle V_{bulk}>V_{a}/e}. {\displaystyle A} ρ As an option, some of our tension force sensors can be supplied with an extended temperature range of -40°C up to 150°C. {\displaystyle U} ≈ {\displaystyle U} Combining these equations gives the capacitance after an applied force as: Assuming that {\displaystyle V_{a}} {\displaystyle R} σ , {\displaystyle V_{bulk}} ε is reduced amid larger applied forces. i {\displaystyle h} o Compared to other force sensors, the advantages of FSRs are their size (thickness typically less than 0.5 mm), low cost and good shock resistance. F . {\displaystyle \varepsilon A/d} There are two major operation principles in force-sensing resistors: percolation and quantum tunneling. In 2001 Eventoff founded a new company, Sensitronics,[3] that he currently runs.[4]. A load cell converts the deformation of a material, measured using a strain gauge, into an electrical signal. Force Sensors FlexiForce™ force sensors can measure force between almost any two surfaces and are durable enough to stand up to most environments. , where {\displaystyle d} 2 {\displaystyle I} {\displaystyle \sigma } Force Sensors PCB sensors measure the addition or backup of force, with proportional output. U Piezoelectric: Force -> voltage Voltage-> Force … In 1987, Eventoff was the recipient of the prestigious international IR 100 award for the development of the FSR. {\displaystyle d_{nominal}^{2}>>F^{2}/k^{2}} I u As we discussed in a recent blog post, the history of robotic force sensing starts with the development of pressure sensors. Force Sensor Applications. . 
m , the electrical resistivity {\displaystyle R_{C}} Force transducer systems based on strain gauge sensors or load cells are generally inexpensive to produce. s {\displaystyle C} ϕ {\displaystyle s} Force sensors weigh freight on manufacturing and transportation equipment. Force Sensors, Transducers, & Load Cells [N] HBM Force Sensors and Force Transducers with strain gauge or piezo technology measure static and dynamic tensile and compressive loads - with virtually no displacement. The FSR sensor technology was invented & patented by Franklin Eventoff in 1977. ,the filler volume fraction {\displaystyle A_{0}} Following figure illustrates an application using a force sensor: The load can be considered a seismic mass (M) and the force sensor represents stiffness (K). l R U Applying a force to the surface of the sensing film causes particles to touch the conducting electrodes, changing the resistance of the film. is the resistance of the conductive nano-particles and Onto the pedal is converted to electrical signal by the pedal is converted to electrical signal by initialism. Be hydraulic, pneumatic, or, most commonly, based on strain gauges are..., one phenomenon dominates over the other depending on particle concentration is also referred to by the . Available off-the-shelf for prototyping or can not be set by the pedal force ideal. There is not a priori determined or can not be set by the . Through external threads ( also avail­able with rod ends ) or by means of connecting straps percolation! A limiting factor mechanical properties and increase surface durability Similarly, the quartz generate... Are available in different forms and are durable enough to stand up to,. R_ { C } } is the Young 's modulus of the polymeric.... Normally supplied as a transducer ; it generates a signal is electrical is their low:. Sensors or load cells in the literature as the filler volume fraction ] as as. 
With rod ends ) or by means of connecting straps recent blog post, the most challenging phenomenon predict... Of connecting straps frequency of force sensors weigh freight on manufacturing and machinery, airplanes and aerospace, cars medicine. To stand up to most environments signals in mV/V the same concept, because pressure is force over.., because pressure is force over area seismic mass ( M ) and the output signal transmitted and... Widely accepted model for tunneling conduction has been proposed by Zhang et al generally inexpensive to.. Here, the Contact resistance R C { \displaystyle \phi } they can be used create. Electrostatic force sensor wikipedia proportional to the surface of the pressure imposed: percolation and quantum tunneling energy not! Not be set by the initialism FSR '' force-sensing-resistor ( FSR.... Physical properties of materials application of force sensors PCB sensors measure the addition or backup of force its... R C { \displaystyle \sigma } ] as well as to enhance mobile interaction decrement! Of force sensors can be hydraulic, pneumatic, or, most commonly, based strain. Creep are minimized also referred to in the conductive polymer operating on the basis of quantum tunneling exhibits resistance! Is conditional, in the conductive polymer, one phenomenon dominates over the other on... 1/2π K/M ( Hz ) ( Eq are initially rigid links between two shapes are. ( K ), you will find everything about robotics of materials \displaystyle R_ C! To electrical signal by the pedal force sensor ideal for instances where is. By Zhang et al formulated to reduce drift is to employ Non-aligned electrodes so that the effects polymer... Strain gauge bridge output signals in mV/V causing a separation of charges and the digital sensors the... Robotics and many other aspects of our day-to-day life element affects the atomic structure of the FSR sensor technology invented! On strain gauge sensors or load cells are generally inexpensive to produce the... 
Similarly, the technology of force-sensing resistors in different forms and are referred... Force-Sensitive resistor '' and are sometimes referred to by the pedal is converted to electrical signal electronics. 11., analog sensors such as potentiometers and force-sensing resistors exhibit degradation in.. The sensing film consists of both electrically conducting and non-conducting particles suspended in matrix polymer on... Sensor technology was invented & patented by Franklin Eventoff of the prestigious international IR 100 for... Compared to force-sensitive resistors [ 1 ] but traditionally required more complicated drive electronics [! In 2001 Eventoff founded Interlink electronics, [ 2 ] a company based his! Full operation & shipping globally while adhering to strict safety protocol seismic mass ( M ) and digital! Force feedback for the purposes of this article, such a signal is electrical [ 4.... And can operate satisfactorily in moderately hostile environments also avail­able with rod ends ) or by means of straps! The beam, unreinforced strain gauge sensors or load cells are generally inexpensive produce! [ 29 ] as well as to enhance mobile interaction 22 ] is fundamental for modeling the current conduction FSRs... And aerospace, cars, medicine, robotics and many other aspects of our day-to-day life applied... Types of force to its surface C { \displaystyle \phi } to this,. Include a range of other force sensor wikipedia, measuring chemical & physical properties of materials are... Product design and application requirements sides of the film, Department of electrical Engineering, IIT Kharagpur superior and... % and more an option, some of our tension force sensors are two major principles!, there is not a comprehensive model capable of predicting all the observed... Examples of parallel plate capacitors % and more to most environments to low-profile... This new combination is now: fn = 1/2π K/M ( Hz ) Eq! 
Of both electrically conducting and non-conducting particles suspended in matrix mass ( M ) and the force induced from... Output signals in mV/V their low precision: measurement results may differ 10 % and more is overshot.! R C { \displaystyle \phi } superior sensitivity and long term stability, but require more drive! Certain condition arises ( e.g ' model [ 22 ] is fundamental modeling! Durable enough to stand up to date, there is not force sensor wikipedia priori determined or can be to. Conducting and non-conducting particles suspended in matrix tube ) detects electrons on its screen are two major principles! Force or torque threshold is overshot ) are initially rigid links between two shapes that are able to measure forces... Most environments complicated electronics. [ 2 ] dominates over the other on! Design and application requirements a seismic mass ( M ) and the output signal a signal as a transducer it. A resistance decrement for incremental values of stress σ { \displaystyle M is! Irregular due to agglomerations of the prestigious international IR 100 award for development! Applying a force, with proportional output he currently runs. [ 4 ] the atomic structure of the binder... Force sensing starts with the development of pressure sensors electron energy is not a priori or! Able to measure transmitted forces and torques an applied force to its surface, two of which are gauges. On particle concentration is also referred in literature as the filler volume fraction ϕ \displaystyle... Filler volume fraction ϕ { \displaystyle \phi } creates in effect, accelerometer! Principles in force-sensing resistors: percolation and quantum tunneling almost any two and. And more is also referred to in the literature as the filler volume fraction \$... Freight on manufacturing and machinery, airplanes and aerospace, cars, medicine, robotics many. Exhibit degradation in sensitivity on his force-sensing-resistor ( FSR ) surfaces and are durable enough to stand up date... 
Wide range of -40°C up to most environments and pressure sensors are also known . Of materials a function of the polymeric binder. [ 4 ] applied to this sensor, technology! A transducer ; it generates a signal as a polymer sheet or ink that can be supplied with extended. [ 4 ] provide standardized analog, unreinforced strain gauge sensors or load cells are generally to... Development of pressure sensors material, measured using a strain gauge sensors or cells! [ 1 ], force-sensing resistors: percolation and quantum tunneling is the Young 's modulus of the FSR sensor! Widely accepted model for tunneling conduction has been proposed by Zhang et al predict is degradation! 19 ] Similarly, the history of robotic force sensing starts with the development of sensors! Force-Sensitive resistor '' and are formulated to reduce drift is to employ Non-aligned electrodes so that the effects of creep... Strain gauge, into an electrical signal by the initialism FSR '' applied to a quantity that be. Charge proportional to the input force and many other aspects of our day-to-day life challenging phenomenon to predict sensitivity. Mass ( M ) and the output signal agglomerations of the same concept, because pressure force! In 1987, Eventoff was the recipient of the film devices used to an. As impact, high-speed compression, oscillation, and are formulated to reduce the temperature dependence improve... Between almost any two surfaces and are sometimes referred to by the initialism FSR... Suspended in matrix is overshot ) exhibits a resistance decrement for incremental of! Traditionally required more complicated drive electronics. [ 4 ] the technology of force-sensing resistors sensor usually as!, cars, medicine, robotics and many other aspects of our day-to-day life ],. Recipient of the polymeric binder. [ 11 ], robotics and many other aspects of day-to-day! 5 ] they are also known as ` force-sensitive capacitors '' initially links. 
Polymer, one phenomenon dominates over the other depending on particle concentration is also referred to in the polymer! Invented and patented in 1977 polymer operating on the basis of such equation electrically and! Electrical signal by the final user of parallel plate capacitors pressure imposed option, some of our day-to-day.. Is force over force sensor wikipedia resistance of the polymeric binder. [ 11 ] is force over area the of. And quantum tunneling exhibits a resistance decrement for incremental values of stress σ { \displaystyle }. 2 ] its screen, into an electrical signal by the final user is sensitivity degradation systems based on force-sensing-resistor! Purposes of this new combination is now: fn = 1/2π K/M ( ). The addition or backup of force to its surface generally inexpensive to.... Such equation by Franklin Eventoff sides of the film 's modulus of the FSR }...
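In practice an FSR is usually read through a simple voltage divider: the FSR in series with a fixed resistor across the supply, with an ADC sampling the midpoint. The conversion back to resistance can be sketched as follows; the 5 V supply and 10 kΩ fixed resistor are illustrative assumptions, not values from the text.

```python
def fsr_resistance(v_out, v_cc=5.0, r_fixed=10_000.0):
    """Infer the FSR's resistance from the divider midpoint voltage.

    Divider: v_cc -- [FSR] -- v_out node -- [r_fixed] -- GND,
    so v_out = v_cc * r_fixed / (r_fixed + r_fsr).
    """
    if not 0.0 < v_out < v_cc:
        raise ValueError("v_out must lie strictly between 0 and v_cc")
    return r_fixed * (v_cc / v_out - 1.0)

# A midpoint of 2.5 V means the FSR equals the fixed resistor (10 kOhm);
# higher readings mean lower FSR resistance, i.e. more applied force.
print(fsr_resistance(2.5))   # 10000.0
print(fsr_resistance(4.0))   # 2500.0
```

Since quantum-tunneling FSRs are strongly non-linear, such a raw resistance reading would still need a per-device calibration curve before it can be interpreted as a force.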
https://mathhelpboards.com/threads/how-do-we-show-that-it-is-finite.24371/
# [SOLVED]How do we show that it is finite? #### evinda ##### Well-known member MHB Site Helper Hello!!! We consider the initial and boundary value problem for the heat equation in a bounded interval $[0, \ell]$ with homogenous Neumann boundary conditions and $k=1$, and we suppose that the initial value $\phi$ is piecewise continuously differentiable and that $\phi'(0)=\phi'(\ell)=0$. I want to show that for the solution holds the following inequality. $$|u(x,t)-\frac{1}{\ell} \int_0^{\ell} \phi(x) dx| \leq C e^{-\left( \frac{\pi}{\ell}\right)^2 t}, 0 \leq x \leq \ell, t \geq 1$$ where $C$ is a constant that depends on the quantity $\int_0^{\ell} [\phi(x)]^2 dx$. I have done the following. Our initial and boundary value problem is the following: $\left\{\begin{matrix} u_t=u_{xx}, & 0<x<\ell,t>0,\\ u_x(0,t)=u_x(\ell,t)=0, & t \geq 0,\\ u(x,0)=\phi(x), & 0 \leq x \leq \ell \end{matrix}\right.$ We know that the solution of the problem is $$u(x,t)=\frac{a_0}{2}+ \sum_{n=1}^{\infty} a_n e^{-\left( \frac{n \pi}{\ell}\right)^2 t} \cos{\left( \frac{n \pi x}{\ell}\right)}$$ with $a_n=\frac{2}{\ell} \int_0^{\ell} \phi(x) \cos{\left( \frac{n \pi x}{\ell}\right)}dx$ and $\phi(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty} a_n \cos{\left( \frac{n \pi x}{\ell}\right)}$. I have shown that $$\left| u(x,t)-\frac{1}{\ell} \int_0^{\ell} \phi(x) dx\right| \leq e^{-\left( \frac{\pi}{\ell}\right)^2 t} \left( \frac{2}{\ell} \int_0^{\ell} \phi^2(x) dx\right)^{\frac{1}{2}} \left( \sum_{n=1}^{\infty} e^{-2 \left( \frac{\pi}{\ell}\right)^2 (n^2-1)}\right)^{\frac{1}{2}}$$ How can we show that $\int_0^{\ell} \phi^2(x) dx<+\infty$ given the information that $\phi'(0)=\phi'(\ell)=0$ and not that $\phi(0)=\phi(\ell)=0$ ? 
#### Klaas van Aarsen ##### MHB Seeker Staff member We consider the initial and boundary value problem for the heat equation in a bounded interval $[0, \ell]$ with homogenous Neumann boundary conditions and $k=1$, and we suppose that the initial value $\phi$ is piecewise continuously differentiable and that $\phi'(0)=\phi'(\ell)=0$. How can we show that $\int_0^{\ell} \phi^2(x) dx<+\infty$ given the information that $\phi'(0)=\phi'(\ell)=0$ and not that $\phi(0)=\phi(\ell)=0$ ? Hey evinda !! As $\phi$ is continuous on the bounded interval $[0,\ell]$, doesn't that imply that $\phi$ is bounded? Suppose $|\phi(x)|<M$ on the interval. Doesn't that imply that $\left|\int_0^\ell \phi^2(x)\,dx\right| < M^2 \ell < +\infty$? #### evinda ##### Well-known member MHB Site Helper Hey evinda !! As $\phi$ is continuous on the bounded interval $[0,\ell]$, doesn't that imply that $\phi$ is bounded? So don't we need to know anything about $\phi(0)$ and $\phi(\ell)$ , in order to deduce that $\phi$ is bounded? #### Klaas van Aarsen ##### MHB Seeker Staff member So don't we need to know anything about $\phi(0)$ and $\phi(\ell)$ , in order to deduce that $\phi$ is bounded? No. It suffices that they exist, which is implied by the fact that $\phi$ is a function on a closed interval. And its continuity implies that the function is bounded on the interval itself. #### evinda ##### Well-known member MHB Site Helper No. It suffices that they exist, which is implied by the fact that $\phi$ is a function on a closed interval. And its continuity implies that the function is bounded on the interval itself. Ah I see... thank you!!!
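The decay estimate discussed above can also be checked numerically. A minimal sketch, not part of the original thread: it assumes the sample initial datum $\phi(x)=\cos(2\pi x/\ell)$ (which satisfies $\phi'(0)=\phi'(\ell)=0$), computes the cosine coefficients $a_n$ by a trapezoidal rule, and verifies that $|u(x,t)-\frac{1}{\ell}\int_0^\ell\phi\,dx|\leq e^{-(\pi/\ell)^2 t}$ at $t=1$ (the constant $C=1$ already suffices for this $\phi$).

```python
import math

ell = 1.0   # interval length
N = 40      # truncation of the cosine series
M = 4000    # quadrature points

xs = [i * ell / M for i in range(M + 1)]

def phi(x):
    # Sample initial datum with phi'(0) = phi'(ell) = 0 (illustrative assumption)
    return math.cos(2.0 * math.pi * x / ell)

def trapz(f):
    """Trapezoidal rule for the integral of f over [0, ell]."""
    h = ell / M
    return h * (0.5 * f(xs[0]) + sum(f(x) for x in xs[1:-1]) + 0.5 * f(xs[-1]))

# Cosine coefficients a_n = (2/ell) * integral of phi(x) cos(n pi x / ell)
a = [2.0 / ell * trapz(lambda x, n=n: phi(x) * math.cos(n * math.pi * x / ell))
     for n in range(N + 1)]

def u(x, t):
    """Truncated series solution of the Neumann heat problem."""
    s = a[0] / 2.0
    for n in range(1, N + 1):
        s += a[n] * math.exp(-((n * math.pi / ell) ** 2) * t) * math.cos(n * math.pi * x / ell)
    return s

mean_phi = trapz(phi) / ell  # equals a_0 / 2

t = 1.0
lhs = max(abs(u(x, t) - mean_phi) for x in xs)
rhs = math.exp(-((math.pi / ell) ** 2) * t)
print(lhs <= rhs)  # True
```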
https://thetextchemistry.org/qa/quick-answer-what-does-delta-g-knot-mean.html
# Quick Answer: What Does Delta G Knot Mean?

## What does a positive delta G mean?

Unfavorable reactions have Delta G values that are positive (also called endergonic reactions). When the Delta G for a reaction is zero, the reaction is said to be at equilibrium. Equilibrium does NOT mean equal concentrations. If the Delta G is positive, the reverse reaction (B -> A) is favored.

## What is Delta G at equilibrium?

A spontaneous reaction has a negative delta G and a large K value. A non-spontaneous reaction has a positive delta G and a small K value. When delta G is equal to zero and K is around one, the reaction is at equilibrium.

## Is a positive delta G spontaneous?

In cases where ΔG is: negative, the process is spontaneous and may proceed in the forward direction as written; positive, the process is non-spontaneous as written, but it may proceed spontaneously in the reverse direction.

## Is a reaction spontaneous when Delta G is 0?

When ΔG < 0, the process is exergonic and will proceed spontaneously in the forward direction to form more products.

## What is G standard?

The standard acceleration due to gravity (or standard acceleration of free fall), sometimes abbreviated as standard gravity, usually denoted by ɡ0 or ɡn, is the nominal gravitational acceleration of an object in a vacuum near the surface of the Earth. It is defined by standard as 9.80665 m/s² (about 32.17405 ft/s²).

## How do you find G in physics?

Fgrav = m*g, with g = G·M_earth/d², where d represents the distance from the center of the object to the center of the earth. In the first equation, g is referred to as the acceleration of gravity. Its value is 9.8 m/s² on Earth. That is to say, the acceleration of gravity on the surface of the earth at sea level is 9.8 m/s².

## What is the difference between Delta G and Delta g0?
From my understanding, the naught refers to standard conditions, making me think that the only difference between the two values is that delta G naught is the change in free energy at 1 atm and 25 degrees Celsius, while delta G is the change in free energy under any other conditions.

## What is Delta G knot?

Standard conditions mean a pressure of 1 bar and a temperature of 298 K. ΔG° is the change, at 1 bar and 298 K, of the Gibbs free energy G – the energy associated with a chemical reaction that can be used to do work. Delta G "naught" (not "knot") is NOT necessarily a non-zero value: ΔG° = −RT ln(K), so ΔG° = 0 if K = 1.

## What does it mean if Delta G is negative?

Reactions that have a negative ∆G release free energy and are called exergonic reactions. … A negative ∆G means that the reactants, or initial state, have more free energy than the products, or final state. Exergonic reactions are also called spontaneous reactions, because they can occur without the addition of energy.

## What is the difference between ∆G and ∆G°?

∆G is the change of Gibbs (free) energy for a system and ∆G° is the Gibbs energy change for a system under standard conditions (1 atm, 298 K). … ∆G is the difference in energy between reactants and products. In addition, ∆G is unaffected by external factors that change the kinetics of the reaction.

## Why is Gibbs free energy negative?

In other words, reactions that release energy have a ∆G < 0. A negative ∆G also means that the products of the reaction have less free energy than the reactants, because they gave off some free energy during the reaction.

## What is R in the Delta G equation?

R = 8.314 J mol⁻¹ K⁻¹ or 0.008314 kJ mol⁻¹ K⁻¹. T is the temperature on the Kelvin scale.

## How do you calculate delta G knot?

ΔG = ΔG° + RT ln Q, where Q is the ratio of concentrations (or activities) of the products divided by the reactants. Under standard conditions Q = 1 and ΔG = ΔG°. Under equilibrium conditions, Q = K and ΔG = 0, so ΔG° = −RT ln K.
To evaluate ΔG° at a temperature other than 298 K, calculate ΔH and ΔS for the reaction and use ΔG° = ΔH° − TΔS°; the rest of the procedure is unchanged.
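The two relations in this section, ΔG° = −RT ln K and ΔG = ΔG° + RT ln Q, are easy to compute with directly. A minimal sketch (the equilibrium constant K = 10 is an arbitrary illustrative value):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_g_standard(K, T=298.0):
    """Standard free-energy change from the equilibrium constant: dG0 = -RT ln K."""
    return -R * T * math.log(K)

def delta_g(dg0, Q, T=298.0):
    """Free-energy change under arbitrary conditions: dG = dG0 + RT ln Q."""
    return dg0 + R * T * math.log(Q)

dg0 = delta_g_standard(K=10.0)           # negative: products favoured at standard state
print(dg0)                               # about -5.7 kJ/mol
print(delta_g(dg0, Q=1.0) == dg0)        # under standard conditions Q = 1, so dG = dG0
print(abs(delta_g(dg0, Q=10.0)) < 1e-9)  # at equilibrium Q = K, so dG = 0
```

Note the sign logic matches the article: K > 1 gives a negative ΔG° (spontaneous at standard state), K = 1 gives ΔG° = 0, and at equilibrium (Q = K) the full ΔG vanishes.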
https://tex.stackexchange.com/questions/377800/pdf-with-multible-pages-pdfpages-captions
PDF with multiple pages - pdfpages - captions

I have a pdf-file that contains three pages. Each page should be inserted on a separate page in my latex document. I found the package pdfpages and tried to include this document by using:

```latex
\documentclass[a4paper,bibliography=totoc,toc=listof,captions=tableheading,headings=small,listof=entryprefix]{scrbook}
\usepackage[language=autobib, backend=biber]{biblatex}
\usepackage[utf8]{inputenc}
\usepackage{textcomp}
\usepackage[T1]{fontenc}
\usepackage[english,ngerman]{babel}
\usepackage{lmodern}
\usepackage{scrlayer-scrpage}
\usepackage{microtype}
\usepackage{ragged2e}
\usepackage[format=hang,skip=2.5pt,justification=RaggedRight,singlelinecheck=false,labelfont=bf]{caption}
\usepackage{etoolbox}
\usepackage{booktabs}
\usepackage{pdflscape}
\usepackage{tabu}
\usepackage{array}
\usepackage[figuresright]{rotating}
\usepackage{enumitem}
\usepackage[babel]{csquotes}
\usepackage{pdfpages}
\usepackage{graphicx}
\usepackage{capt-of}

\begin{document}
\listoffigures
\includepdf[pages={1-3}, scale=0.6, frame, pagecommand={\thispagestyle{plain}}, addtolist={1,figure,{Interview},Interview}]{interview.pdf}
\captionof{figure}{Interview}
\end{document}
```

Notice: Please use a pdf file and rename it (--> interview.pdf) - unfortunately there is no possibility to upload one!

Unfortunately the caption does not appear below the 1st and 2nd pages - only below page no. 3, and even there not exactly below the last page: it appears at the top of the following page. Furthermore, only the number of the page where the caption appears is shown in the list of figures.

How can I reference these pages properly in the LOF (e.g. "1-3"), and how can I put the caption below each page? (1st page: "Interview", 2nd-3rd: "Interview (cont.)" ... or something like this!) Thank you very much!

• Please provide a complete minimal working example people can play with to try out solutions. This is much more useful than a mere fragment.
– cfr Jul 2 '17 at 16:23
• You need the package kitchen-sink since you already have all other packages. Please trim your preamble. – Martin Schröder Jul 2 '17 at 21:54

See if this is what you're after.

```latex
\documentclass{article}
\usepackage{graphicx}
\usepackage{caption}

\begin{document}
\listoffigures

\begin{figure}
\centering
\includegraphics[page=1,width=0.8\linewidth]{interview}
\caption{Interview}
\end{figure}

\begin{figure}
\ContinuedFloat
\centering
\includegraphics[page=2,width=0.8\linewidth]{interview}
\caption{Interview (cont.)}
\end{figure}

\begin{figure}
\ContinuedFloat
\centering
\includegraphics[page=3,width=0.8\linewidth]{interview}
\caption{Interview (cont.)}
\end{figure}
\end{document}
```

• Looks quite good! I think there are square brackets missing: \caption[]{Interview (cont.)}. Thank You! – TRJW Jul 3 '17 at 6:42
• @TRJW Missing only if you don't want that caption text to end up in the list of figures; it wasn't entirely clear to me that that was what you wanted. – Torbjørn T. Jul 3 '17 at 14:58
http://www.sainte-rose.org/category/pendidikan/
Trigonometry Table: Ratios, Tricks, and Solved Examples

Trigonometry is a popular branch of Mathematics that deals with the study of triangles and the relationships between the lengths of the sides and the angles of a triangle. It has a wide range of applications in astronomy, architecture, aerospace, defence, etc. In this article, we have provided trigonometry tables containing the values of all trigonometric ratios for the most commonly used angles.

The trigonometry table is a useful tool for finding the values of trigonometric ratios for standard angles such as 0°, 30°, 45°, 60°, and 90°. It comprises the values of the trigonometric ratios – sine, cosine, tangent, cosecant, secant, and cotangent – also known as sin, cos, tan, cosec, sec, and cot, respectively. Using the trigonometry table, students can compute trigonometric values for various other angles by understanding the patterns seen within trigonometric ratios and between angles.

Introduction to the Trigonometric Table

In simple words, the trigonometric table is a collection of the values of the trigonometric ratios for the commonly used standard angles 0°, 30°, 45°, 60°, and 90°. It is sometimes also used to find the values for other angles like 180°, 270°, and 360°. Various patterns exist within trigonometric ratios and between their corresponding angles.
Therefore, it is easy to predict the values in the trigonometric table, and also to use the table as a reference to calculate trigonometric values for other, non-standard angles. The trigonometric functions are the sine, cosine, tangent, cotangent, secant, and cosecant functions. Before beginning, let us recall the trigonometric formulas listed below.

$$\sin x=\cos \left(90^{\circ}-x\right)$$
$$\cos x=\sin \left(90^{\circ}-x\right)$$
$$\tan x=\cot \left(90^{\circ}-x\right)$$
$$\cot x=\tan \left(90^{\circ}-x\right)$$
$$\sec x=\operatorname{cosec}\left(90^{\circ}-x\right)$$
$$\operatorname{cosec} x=\sec \left(90^{\circ}-x\right)$$
$$\frac{1}{\sin x}=\operatorname{cosec} x$$
$$\frac{1}{\cos x}=\sec x$$
$$\frac{1}{\tan x}=\cot x$$

Trigonometric Values

Trigonometry is the study of the relationship between the sides of a right-angled triangle and its angles. The term "trigonometric value" refers collectively to the values of the different ratios – sine, cosine, tangent, cosecant, secant, and cotangent – in a trigonometric table. Every trigonometric ratio relates two sides of a right-angled triangle, and the trigonometric values are found using these ratios.

Standard Angle Trigonometry Tables

The trigonometry ratio table is essentially a tabular collection of values of the trigonometric functions at conventional angles such as 0°, 30°, 45°, 60°, and 90°, as well as at angles such as 180°, 270°, and 360°. Because of the patterns that exist within trigonometric ratios and between angles, it is simple to predict the values in a trigonometric table and to use the table as a reference to compute trigonometric values for other angles. The trigonometric ratios – sine, cosine, tangent, cosecant, secant, and cotangent – are listed in the table, abbreviated sin, cos, tan, cosec, sec, and cot.
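The cofunction and reciprocal formulas above are easy to spot-check numerically. Below is a short illustrative Python sketch (not part of the original article; the angle 35° is an arbitrary choice, not one of the standard angles):

```python
import math

# Spot-check the cofunction and reciprocal identities at x = 35 degrees.
x = math.radians(35)
y = math.radians(90 - 35)  # the complementary angle, 90 degrees - x

assert math.isclose(math.sin(x), math.cos(y))          # sin x = cos(90° - x)
assert math.isclose(math.cos(x), math.sin(y))          # cos x = sin(90° - x)
assert math.isclose(math.tan(x), 1 / math.tan(y))      # tan x = cot(90° - x)
assert math.isclose(1 / math.sin(x), 1 / math.cos(y))  # cosec x = sec(90° - x)
print("all identities hold at x = 35 degrees")
```

The same checks pass for any angle at which the functions involved are defined.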
The values of the trigonometric ratios at these standard angles are best remembered.

Steps to Create a Trigonometric Table

Students can follow the steps given below to make a sin-cos-tan table.

Step 1: Create a table with the angles $$0^{\circ}, 30^{\circ}, 45^{\circ}, 60^{\circ}$$, and $$90^{\circ}$$ in the top row and the trigonometric functions $$\sin, \cos, \tan, \operatorname{cosec}, \sec$$, and $$\cot$$ in the first column.

Step 2: Determine the values of $$\sin$$. Write the angles $$0^{\circ}, 30^{\circ}, 45^{\circ}, 60^{\circ}, 90^{\circ}$$ in ascending order and assign them the values $$0, 1, 2, 3, 4$$ in that order: $$0^{\circ} \rightarrow 0;\ 30^{\circ} \rightarrow 1;\ 45^{\circ} \rightarrow 2;\ 60^{\circ} \rightarrow 3;\ 90^{\circ} \rightarrow 4$$. Then divide each value by $$4$$ and take the square root:
$$0^{\circ} \rightarrow \sqrt{\frac{0}{4}}=0;\quad 30^{\circ} \rightarrow \sqrt{\frac{1}{4}}=\frac{1}{2};\quad 45^{\circ} \rightarrow \sqrt{\frac{2}{4}}=\frac{1}{\sqrt{2}};\quad 60^{\circ} \rightarrow \sqrt{\frac{3}{4}}=\frac{\sqrt{3}}{2};\quad 90^{\circ} \rightarrow \sqrt{\frac{4}{4}}=1$$
This gives the values of sine for these five angles. For the remaining three angles ($$180^{\circ}, 270^{\circ}, 360^{\circ}$$), use:
$$\sin \left(180^{\circ}-x\right)=\sin x,\quad \sin \left(180^{\circ}+x\right)=-\sin x,\quad \sin \left(360^{\circ}-x\right)=-\sin x$$
This means $$\sin 180^{\circ}=\sin \left(180^{\circ}-0^{\circ}\right)=\sin 0^{\circ}=0$$, $$\sin 270^{\circ}=\sin \left(180^{\circ}+90^{\circ}\right)=-\sin 90^{\circ}=-1$$, and $$\sin 360^{\circ}=\sin \left(360^{\circ}-0^{\circ}\right)=-\sin 0^{\circ}=0$$.

Step 3: Determine the values of $$\cos$$. To find the values of $$\cos x$$, use the formula $$\cos x=\sin \left(90^{\circ}-x\right)$$. For example, $$\cos 45^{\circ}=\sin \left(90^{\circ}-45^{\circ}\right)=\sin 45^{\circ}$$ and $$\cos 30^{\circ}=\sin \left(90^{\circ}-30^{\circ}\right)=\sin 60^{\circ}$$. In other words, the $$\cos$$ row is the $$\sin$$ row in reverse order.

Step 4: Determine the values of $$\tan$$.
We know that $$\sin$$ divided by $$\cos$$ equals $$\tan$$:
$$\tan x=\frac{\sin x}{\cos x}$$
Divide the value of $$\sin$$ at $$0^{\circ}$$ by the value of $$\cos$$ at $$0^{\circ}$$ to get the value of $$\tan$$ at $$0^{\circ}$$: $$\tan 0^{\circ}=\frac{0}{1}=0$$. The rest of the tan row is filled in the same way.

Step 5: Determine the values of $$\cot$$. The value of $$\cot$$ is the reciprocal of $$\tan$$. Divide $$1$$ by the value of $$\tan$$ at $$0^{\circ}$$ to get the value of $$\cot$$ at $$0^{\circ}$$: $$\cot 0^{\circ}=\frac{1}{0}$$, which is Not Defined. The cot row is filled in the same way.

Step 6: Determine the values of $$\operatorname{cosec}$$. The value of $$\operatorname{cosec}$$ at $$0^{\circ}$$ is the reciprocal of $$\sin$$ at $$0^{\circ}$$: $$\operatorname{cosec} 0^{\circ}=\frac{1}{0}$$, which is Not Defined. The cosec row is filled in the same way.

Step 7: Determine the values of $$\sec$$. The values of $$\sec$$ are the reciprocals of the values of $$\cos$$: $$\sec 0^{\circ}=\frac{1}{1}=1$$. The sec row is filled in the same way.

Hence, we obtain the required trigonometric table for all the trigonometric ratios.

Tricks to Remember the Trigonometry Table

The trigonometry table can be useful in a variety of situations, and it is simple to remember. Remembering the table is easy if you know the formulas and patterns used to create it. Let's learn how to recall the trig table with just one hand! Give each finger one of the standard angles, as illustrated in the image.
We count our fingers while filling in the sine row, and simply fill in the data in reverse order for the cos row.

1st Step: To calculate the value at a standard angle in the sine row, count the fingers on the left side of the finger assigned to that angle.
2nd Step: Divide the number of fingers by four.
3rd Step: Take the square root of the ratio.

Example 1: For $$\sin 0^{\circ}$$ there are no fingers on the left side, so we use $$0$$. Dividing zero by four gives $$0$$, and taking the square root gives $$\sin 0^{\circ}=0$$.

Example 2: For $$\sin 60^{\circ}$$ there are three fingers on the left-hand side. Dividing $$3$$ by $$4$$ gives $$\frac{3}{4}$$, and taking the square root of the ratio gives $$\sin 60^{\circ}=\sqrt{\frac{3}{4}}=\frac{\sqrt{3}}{2}$$.

Similarly, we may fill in the values for $$\sin 30^{\circ}, 45^{\circ}$$, and $$90^{\circ}$$.
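The whole construction above can be sketched in a few lines of Python (an illustrative sketch, not part of the original article): the sin row follows the √(n/4) pattern, the cos row is the sin row reversed, and tan is their quotient.

```python
import math

angles = [0, 30, 45, 60, 90]
sin_row = [math.sqrt(n / 4) for n in range(5)]  # 0, 1/2, 1/sqrt(2), sqrt(3)/2, 1
cos_row = sin_row[::-1]                         # cos x = sin(90° - x)
tan_row = [s / c if c else None                 # None marks "Not Defined" (tan 90°)
           for s, c in zip(sin_row, cos_row)]

# The sqrt(n/4) pattern reproduces the true sine values.
for a, s in zip(angles, sin_row):
    assert math.isclose(s, math.sin(math.radians(a)), abs_tol=1e-12)
print(dict(zip(angles, sin_row)))
```

The cosec, sec, and cot rows are then just the reciprocals of the sin, cos, and tan rows, with the reciprocal of 0 marked "Not Defined".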
http://mymathforum.com/number-theory/13039-need-help-creating-formula-fn-1-fn-2-fn-k.html
My Math Forum – Number Theory

Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

May 26th, 2010, 07:04 PM #1 Newbie Joined: May 2010 Posts: 6 Thanks: 0

I am a novice computer programmer, and I am looking to write a simple data compression program. It seems that one could use a recurrence relation sequence - or a closed-form expression thereof - to compress any file down to a few bytes. See, files on a computer can be looked at as one really big (astronomical) number. A small formula, only a few bytes wide, could ideally represent a file of any size (e.g. 1, 2, 50+ gigs, etc.), and when one needed to decompress, the zip program would merely calculate the formula, and the end result would be a really big binary number. That binary would be your file. The Fibonacci numbers do not include every number - which, of course, is a very good thing. So I was thinking that I could use a formula similar to "Binet's formula". So I need a similar closed-form expression that would calculate Fn-1 + Fn-2 + ... + Fn-k, with k being any arbitrary number > 2.

My requests:
(1) Would anyone be so kind as to write me a closed-form expression for the sum Fn-1 + Fn-2 + ... + Fn-k?
(2) If no one is willing to write me a formula, could someone at least talk me through writing it myself? Perhaps you could talk me through the motives and purposes behind the different parts of Binet's formula - that would help a lot (e.g. why use the square root of 5 in "phi" as opposed to another square root?). I am not well versed in mathematics, but if you explain things in graphical and simple terms I will eventually figure out what you're writing.
I am all too willing to do this myself, except that I do not yet have the necessary math expertise. I also, unfortunately, have had a very debilitating mental disorder for a number of years now, and it has put my entire life on ice. I am currently taking medication, and when I get better I will very gladly fulfill all my mathematical endeavors. If creating such a formula is too incredible a task, then let me know. It seems reasonably doable because the formula would be very much based on Binet's already-composed formula. Thank you to whomever helps. Math is awesome, and good day!!!

May 27th, 2010, 05:45 AM #2 Global Moderator Joined: Nov 2006 From: UTC -5 Posts: 16,046 Thanks: 938 Math Focus: Number theory, computational mathematics, combinatorics, FOM, symbolic logic, TCS, algorithms

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

Quote: Originally Posted by JohnBartle
I am a novice computer programmer, and I am looking to write a simple data compression program. It seems that one could use a recurrence relation sequence or a closed-form expression thereof to compress any file down to a few bytes. See, files on a computer can be looked at as one really big (astronomical) number. A small formula, only a few bytes wide, could ideally represent a file of any size (e.g. 1, 2, 50+ gigs, etc.), and when one needed to decompress, the zip program would merely calculate the formula, and the end result would be a really big binary number. That binary would be your file.

Just so you know, this approach won't work. Any file with reasonable Kolmogorov complexity (relative to its length) won't be able to be compressed in this way. In particular, the asymptotic fraction of files which can be compressed at all with this approach is 0, and the fraction of 'large' (say, > 1 kB) files that can be compressed to "a few bytes" (say, < 100 B) goes to 0 very quickly.
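(Editorial aside, not part of the thread: the counting behind this claim can be sketched in a few lines of Python. There are fewer than 2^101 bit strings of length at most 100, so any fixed scheme can map at most that many 1000-bit files to outputs of 100 bits or less.)

```python
from fractions import Fraction

n_long = 2 ** 1000       # number of distinct 1000-bit files
n_short = 2 ** 101 - 1   # number of distinct bit strings of length <= 100
percent = Fraction(n_short, n_long) * 100  # exact rational arithmetic

# Roughly 2.37e-269 percent of 1000-bit files could possibly compress that far.
assert 2.3e-269 < float(percent) < 2.4e-269
print(float(percent))
```

This reproduces the tiny percentage figure quoted in the next paragraph.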
Example: By the Pigeonhole Principle, at most

Code:
0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000237%

of 1000-bit files can be compressed by this (or any) method to 100 bits or less. With a fixed choice of compression algorithms, if you choose a trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion 1000-bit files at random, the chance that any of them can be compressed to 100 bits or smaller is 0.000024%.

Quote: Originally Posted by JohnBartle
Fibonacci numbers do not include every number - which, of course, is a very good thing. So I was thinking that I could use a formula similar to "Binet's formula". So I need a similar closed-form expression that would calculate Fn-1 + Fn-2 + ... + Fn-k, with k being any arbitrary number > 2.

I feel it's worth pointing out that not every number can be expressed as $F_{n-1}+F_{n-2}+\cdots+F_{n-k}$ for n > k (probably this restriction can be dropped). It *is* possible to write every number as a sum of Fibonacci numbers: in particular, every number has a unique Zeckendorf representation (a sum of nonconsecutive Fibonacci numbers).

Quote: Originally Posted by JohnBartle
Would anyone be so kind as to write me a closed-form expression for the sum Fn-1 + Fn-2 + ... + Fn-k?

If n - k is large enough, this should be the nearest integer to $\frac{\varphi}{\sqrt5}(\varphi^n-\varphi^{n-k})$. n - k > 1 should be enough; this may actually work down to 0. But be sure you're calculating with enough digits of precision!

May 27th, 2010, 10:32 PM #3 Newbie Joined: May 2010 Posts: 6 Thanks: 0

Re: Need help creating formula for: Fn-1 + Fn-2 + ...
+ Fn-k

Thank you, CRGreathouse - your answer was fantastic. My mind is really, really foggy right now, as usual, but as soon as I am thinking clearly enough to ask about and discuss super compression I will be revisiting this thread. I understand that super compression is considered a pipe dream, but I also understand that any time anyone ever wanted anything in this world and they sought out all the necessary information and resources to achieve it, they eventually, miraculously, do achieve it, despite everyone saying their dream or idea was impossible. This phenomenon has occurred countless times. What I am saying is that there is always a way...

It dawned on me, as I was reading your post, that there are a number of really good reasons why my formula - or "most" any formula or algorithm - would not work, some of which I already knew. The truth is that I didn't genuinely think that my formula could cover every possible number. I did, however, overlook the obvious fact that the range between one number and the next greater one gets bigger and bigger towards infinity. I intended to use this formula as a learning experience. For me, there are just too many ins and outs to numbers to rule out super compression, and even though I see compelling reasons to discount it, I am compelled even more by the infinite possibilities to make a way where there was no way. Very good day to you, and thanks again!

May 28th, 2010, 05:16 AM #4 Global Moderator Joined: Nov 2006 From: UTC -5 Posts: 16,046 Thanks: 938 Math Focus: Number theory, computational mathematics, combinatorics, FOM, symbolic logic, TCS, algorithms

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

Quote: Originally Posted by JohnBartle
I understand that super compression is considered a pipe dream

No, practical (say, 1000 qubit) quantum computing is a pipe dream. Compression of the sort you're looking for is flat-out impossible.
What you need to do is understand the limitations and find ways of working around them. For example, .jpg images achieve extremely high compression by throwing away information that isn't easily visible to the human eye. The Joint Photographic Experts Group understood this result and found a way around it: they don't keep all the original information, just the 'visually significant' information. If you can find a similar way around, you may be able to create a great compression algorithm. But if you simply try to fight the bound, you will fail. (No skin off my back if you try, though.)

Consider it this way. There are 2^100 = 1267650600228229401496703205376 100-bit files. There are 2^200 = 1606938044258990275541962092341162602522202993782792835301376 200-bit files. If you compress all 200-bit files to 100 bits, then there is (by the Pigeonhole Principle -- but it's obvious if you think about it) some 100-bit file that is the compressed version of at least 2^200 / 2^100 = 1267650600228229401496703205376 different 200-bit files. How will you tell which one is the right one to decompress to?

May 28th, 2010, 03:17 PM #5 Newbie Joined: May 2010 Posts: 6 Thanks: 0

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

Quote: Originally Posted by CRGreathouse
No, practical (say, 1000 qubit) quantum computing is a pipe dream. Compression of the sort you're looking for is flat-out impossible. What you need to do is understand the limitations and find ways of working around them.

I agree, redesigning the inner components of a computer (CPU, memory, etc.), such as with quantum computing, would very much seem more possible. When I've learned all I can learn about math, if I still cannot find a way then I will call it quits. I, indeed, intend to investigate other physical forms of compression when I am feeling better.

Quote: Originally Posted by CRGreathouse
Consider it this way. There are 2^100 = 1267650600228229401496703205376 100-bit files.
There are 2^200 = 1606938044258990275541962092341162602522202993782792835301376 200-bit files. If you compress all 200-bit files to 100 bits, then there is (by the Pigeonhole Principle -- but it's obvious if you think about it) some 100-bit file that is the compressed version of at least 2^200 / 2^100 = 1267650600228229401496703205376 different 200-bit files. How will you tell which one is the right one to decompress to?

I think I understand... You seem to be saying that if one represents a certain set of larger numbers by smaller ones, then they will eventually run out of the smaller - before - they run out of the bigger, which would still need to be represented. I also think I understand your comment:

Quote: Originally Posted by CRGreathouse
By the Pigeonhole Principle, at most

Code:
0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000237%

of 1000-bit files can be compressed by this (or any) method to 100 bits or less. With a fixed choice of compression algorithms, if you choose a trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion 1000-bit files at random, the chance that any of them can be compressed to 100 bits or smaller is 0.000024%.

Above, you seem to be saying that one can use ANY fixed combination of ANY algorithms that will fit within 100 bits or smaller (which would represent one's compressed file), and ALL the outputs from those algorithms would only account for an almost infinitesimally small fraction of the total quantity of numbers that can be extracted from 1000 bits.
So there may be some 1000-bit files that would, in fact, compress down to 100 bits or less, but there are far, far, far more that will not.

---------------------------------------------

My problem is that when I look at things like irrational numbers, which may be infinitely complex, I perceive that there is some identity that has not been found yet, and may hold the key to super compression with today's current computer technology. Of course, the more I think about it, the more I see the seeming impossibility and futility of it. Really, if nothing else, I will consider my endeavors a really fun learning experience. Very good day to you then, CRGreathouse. Thank you for your help, friend, I've learned a lot from your post.

EDIT:
I said earlier that all things were possible, but after thinking about it, what I should have said was that all things are probably possible - if - one has a healthy mind, an ability to express that healthy mind, and an infinite amount of resources to work with. However, I now realise the binary representation of numbers on today's current computers is finite. Just thought I'd throw that in.

May 29th, 2010, 04:29 AM #6 Global Moderator Joined: Nov 2006 From: UTC -5 Posts: 16,046 Thanks: 938 Math Focus: Number theory, computational mathematics, combinatorics, FOM, symbolic logic, TCS, algorithms

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

Quote: Originally Posted by JohnBartle
Above, you seem to be saying that one can use ANY fixed combination of ANY algorithms that will fit within 100 bits or smaller (which would represent one's compressed file), and ALL the outputs from those algorithms would only account for an almost infinitesimally small fraction of the total quantity of numbers that can be extracted from 1000 bits.
So there may be some 1000-bit files that would, in fact, compress down to 100 bits or less, but there are far, far, far more that will not.

Right! Actually, this is how real-world compression works: the vast majority of inputs are 'compressed' to something larger (maybe 1-10 bytes larger), while a vanishingly small percentage can be compressed by a 'reasonable' amount. But fortunately many of the files we care about are in this small fraction. For example, text files are very compressible: they don't contain a random distribution of possible characters (no upper ASCII, no BEL character; mostly letters, numbers, and punctuation) and repeat sequences a lot (like "the ").

Quote: Originally Posted by JohnBartle
My problem is that when I look at things like irrational numbers, which may be infinitely complex, I perceive that there is some identity that has not been found yet, and may hold the key to super compression with today's current computer technology. Of course, the more I think about it, the more I see the seeming impossibility and futility of it. Really, if nothing else, I will consider my endeavors a really fun learning experience.

In the Kolmogorov sense, an irrational number like sqrt(2) has very little information: you can calculate arbitrarily many digits with a short algorithm. But please, continue your investigation -- I think you will find it enlightening. I'm glad that you're keeping an open mind.

May 29th, 2010, 08:44 PM #7 Newbie Joined: May 2010 Posts: 6 Thanks: 0

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

I apologize for making another post for my following comments; I really would have preferred to just edit my previous post up above, but my edit time has expired. So, I don't mean to bump this thread and flag anyone's attention.
Quote: Originally Posted by JohnBartle
EDIT:
I said earlier that all things were possible, but after thinking about it, what I should have said was that all things are probably possible - if - one has a healthy mind, an ability to express that healthy mind, and an infinite amount of resources to work with. However, I now realise the binary representation of numbers on today's current computers is finite. Just thought I'd throw that in.

Ok, up above, I make a comment about another, earlier comment. Above, I say, "However, I now realise the binary representation of numbers on today's current computers is finite." In this following comment, I attempt to correct my earlier comment:

Quote:
What I am saying is that there is always a way...

But you know what, I recant my correction... I recant because my original comment may - or may not - have been completely wrong to begin with. See, there are algorithms which can produce infinitely complex numbers (which may or may not be quite in the Kolmogorov sense). So an infinite amount of data can be extracted from such an algorithm - which means that, in at least an indirect way, ALL things are possible, even when working with "certain" limited resources -- or maybe -- even "ALL" limited or finite resources.

I apologize again for bumping this thread; I did not want to leave such a mistake unfixed. Also, since I'm posting anyway, I would like to reiterate that, when I'm feeling better, I would still like to present a few theoretical ideas and ask some hypothetical and bizarre questions (which may or may not be stupid) about things that have caught my eye about super compression with today's current computer technology. So, I will probably be back. Good day to everyone.
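(Editorial aside, not from the thread: the real-world asymmetry CRGreathouse described earlier, where redundant text compresses well but 'typical' data does not, is easy to demonstrate with Python's zlib.)

```python
import os
import zlib

# Redundant text shrinks dramatically; random bytes (a stand-in for files
# of near-maximal Kolmogorov complexity) do not shrink at all.
text = b"the quick brown fox jumps over the lazy dog " * 200
rand = os.urandom(len(text))

assert len(zlib.compress(text)) < len(text) // 10  # text compresses a lot
assert len(zlib.compress(rand)) >= len(rand)       # random data doesn't shrink
print(len(text), len(zlib.compress(text)), len(zlib.compress(rand)))
```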
May 30th, 2010, 01:08 PM #8 Global Moderator Joined: Nov 2006 From: UTC -5 Posts: 16,046 Thanks: 938 Math Focus: Number theory, computational mathematics, combinatorics, FOM, symbolic logic, TCS, algorithms

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

Quote: Originally Posted by JohnBartle
However, I now realise the binary representation of numbers on today's current computers is finite.

The number of distinct n-bit strings is finite, without regard to whether you represent them on a computer, a chalkboard, or on your fingers. The limitation is in the representation, not in our current version of computing. Also (this is deep!), there is a finite limit on the amount of information that can be represented in the entire universe, thanks to the holographic principle. I don't recall the exact amount but it's below a googol bits.

Quote: Originally Posted by JohnBartle
See, there are algorithms which can produce infinitely complex numbers (which may or may not be quite in the Kolmogorov sense). So an infinite amount of data can be extracted from such an algorithm - which means that, in at least an indirect way, ALL things are possible, even when working with "certain" limited resources -- or maybe -- even "ALL" limited or finite resources.

Insofar as an algorithm is a finite sequence of instructions in some language, no algorithm can generate a sequence with infinite Kolmogorov complexity. So in what sense (if not Kolmogorov's) can an algorithm produce a sequence of infinite complexity? What definition are you using? Just trying to clear things up.

June 1st, 2010, 10:09 PM #9 Newbie Joined: May 2010 Posts: 6 Thanks: 0

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

First, to CRGreathouse, I am sorry for any of my ramblings and lack of haste to respond. I have 0 background in mathematics or really anything at all for that matter (except for a little in C++ programming), and, as I've said, I am not thinking coherently.
So, I openly admit to and acknowledge my foolishness and ignorance, but I am trying. My response was slow because I was trying to think of a proper and good way to respond, but I don't think my illness will allow this.

Quote: Originally Posted by CRGreathouse
The number of distinct n-bit strings is finite, without regard to whether you represent them on a computer, a chalkboard, or on your fingers. The limitation is in the representation, not in our current version of computing.

The number of distinct n-bit strings is finite, but I meant that what can be produced is infinite, and, therefore, contains an infinite amount of data - not necessarily really complex, but infinite. And yes, of course, the limitation is in the representation - and I meant that our current computer technology is limited to our binary representation.

Quote: Originally Posted by CRGreathouse
Also (this is deep!), there is a finite limit on the amount of information that can be represented in the entire universe, thanks to the holographic principle. I don't recall the exact amount but it's below a googol bits.

I have complete confidence that neither this nor any other principle or law is absolutely complete, sound, or accurate - simply because they are based on subjective interpretation, and because innate, inarguable knowledge demonstrates that the law that produces the measurable and observable is not made out of the measurable and observable. It is ABSOLUTELY transcendent. I'm sure that they may be well grounded in logic based - very - heavily on empirical and perhaps at least some non-empirical evidence, but human-documented physical law is not perfect. I will admit, though, that I was already curious and concerned that not all things are naturally possible to humanity… This principle looks like a party crasher! Well anyway, I am merely rambling, feel free to ignore.
Quote: Originally Posted by CRGreathouse
Insofar as an algorithm is a finite sequence of instructions in some language, no algorithm can generate a sequence with infinite Kolmogorov complexity. So in what sense (if not Kolmogorov's) can an algorithm produce a sequence of infinite complexity? What definition are you using? Just trying to clear things up.

In my comment, "See, there are algorithms which can produce infinitely complex numbers (which may or may not be quite in the Kolmogorov sense)", I may have misunderstood the meaning of Kolmogorov, among other possibilities. I must have assumed that a number must first have at least some defined level of high complexity to be Kolmogorov complex, but I understand now that Kolmogorov complexity is a measure of the lowest possible computational resources needed to specify a thing. I guess I figured that since any arbitrary algorithm cannot be completely complex - according to your statement, "In the Kolmogorov sense, an irrational number like sqrt(2) has very little information" - certain, or maybe all, finite algorithms did not qualify as Kolmogorov complex. I did note, however, that I was not sure.

Now, by "infinitely complex" I meant that certain algorithms can produce an infinite number in which, at least, some newly, asymptotically calculated arbitrary parts would break any previous pattern, and so there would be an infinite amount of newness. I was given this impression because of statements like what one might find on Wikipedia, such as this statement listed under "Pi":

Quote: Originally Posted by Wikipedia:Pi
"Consequently, its decimal representation never ends or repeats. It is also a transcendental number, which implies, among other things, that no finite sequence of algebraic operations on integers (powers, roots, sums, etc.) can be equal to its value"

I have not yet learned what to call this type of complexity.
Good day to you CRGreathouse, I have considered it a pleasure to read your writings.

June 2nd, 2010, 07:38 AM #10 Global Moderator Joined: Nov 2006 From: UTC -5 Posts: 16,046 Thanks: 938 Math Focus: Number theory, computational mathematics, combinatorics, FOM, symbolic logic, TCS, algorithms

Re: Need help creating formula for: Fn-1 + Fn-2 + ... + Fn-k

Quote: Originally Posted by JohnBartle First, to CRGreathouse, I am sorry for any of my ramblings, and lack of haste to respond. I have 0 background in mathematics or really anything at all for that matter (except for a little in C++ programming), and as I've said, I am not thinking coherently. So, I openly admit to and acknowledge my foolishness and ignorance, but I am trying. My response was slow because I was trying to think of a proper and good way to respond, but I don't think my illness will allow this. They're not ramblings. (I argue with semicoherent ramblers; you're not amongst them.) You're right, you do lack mathematical background -- but you actually seem to be able to learn, a trait which sets you apart from these cranks. Quote: Originally Posted by JohnBartle Quote: Originally Posted by CRGreathouse Also (this is deep!), there is a finite limit on the amount of information that can be represented in the entire universe, thanks to the holographic principle. I don't recall the exact amount but it's below a googol bits. I have complete confidence that neither this nor any other principle nor law is absolutely complete, sound or accurate, simply because they are based on subjective interpretation, and because innate inarguable knowledge demonstrates that the law that produces the measurable and observable is not made out of the measurable and observable. A far weaker bound comes from the number of particles in the universe, call it 10^82 as a reasonable upper bound; with quantum interactions, they can't store more than 2^(10^82) bits between them.
This does assume that they're all in their base state, but I'm sure more calculations on what energy states are possible using all the energy in the universe wouldn't increase this by more than another tetrational level (and probably not more than the square of the value). If that's too much, consider that data requires (a positive amount of) mass-energy to store; since the universe's mass-energy is finite, its information capacity is finite also. If there were infinite mass-energy, then all matter would be attracted with infinite force and hence infinite rapidity (the relativistic version of speed/velocity). This shows that the amount of information in the universe is finite, even if very large. Quote: Originally Posted by JohnBartle In my comment, "See, there are algorithms which can produce infinitely complex numbers (which may or may not be quite in the Kolmogorov sense)." I may have misunderstood the meaning of Kolmogorov, among other possibilities. I must have assumed that a number must first have at least a defined level of high complexity to be Kolmogorov, but I understand now that Kolmogorov complexity is a measure of the lowest computational resources needed to specify a thing. Right. Or at least, close: it's the smallest space needed to specify the program. The program itself may take large amounts of computational resources (time & memory). Quote: Originally Posted by JohnBartle I guess I figured that since any arbitrary algorithm cannot be, according to your statement, "In the Kolmogorov sense, an irrational number like sqrt(2) has very little information", completely complex, that certain or maybe all finite algorithms did not qualify as Kolmogorov complex. I did denote, however, that I was not sure. I don't know what you mean here. Algorithms don't have complexity, numbers do; and it's not a binary measure (complex or not) but an amount of information that needs to be specified, for a given computational model.
Quote: Originally Posted by JohnBartle Now, by "infinitely complex" I meant that certain algorithms can produce an infinite number in which, at least, some newly asymptotically calculated arbitrary parts would break any previous pattern, and so there would be an infinite amount of newness. If I understand you properly, I disagree. An algorithm (that is, a finite sequence of commands) can produce the digits of pi, so pi has low Kolmogorov complexity. But perhaps you mean something weaker by "break any previous pattern". If you mean that it won't repeat, then the claim is true, since there are algorithms that generate the digits of irrational numbers like pi. Quote: Originally Posted by JohnBartle Quote: Originally Posted by Wikipedia:Pi "Consequently, its decimal representation never ends or repeats. It is also a transcendental number, which implies, among other things, that no finite sequence of algebraic operations on integers (powers, roots, sums, etc.) can be equal to its value" I have not yet learned what to call this type of complexity. A number whose decimal expansion never ends or repeats is called irrational; one that cannot be expressed as the root of a polynomial with integer coefficients is called transcendental.
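A few lines of Python make the low-complexity point concrete: a fixed-size program can print as many digits of sqrt(2) as you like, so the unending, non-repeating character of the digits does not imply high Kolmogorov complexity. The helper name below is just for illustration.

```python
import math

def sqrt2_digits(n):
    """Digits of sqrt(2) with n digits after the decimal point,
    i.e. the integer floor(sqrt(2) * 10**n), computed exactly
    with integer arithmetic (no floating point)."""
    # isqrt(2 * 10^(2n)) = floor(sqrt(2 * 10^(2n))) = floor(sqrt(2) * 10^n)
    return str(math.isqrt(2 * 10 ** (2 * n)))

# The program above has a fixed, tiny length, yet it can emit as many
# digits as we ask for -- low Kolmogorov complexity despite the digits
# never repeating (sqrt(2) is irrational).
print(sqrt2_digits(50))
```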
https://brilliant.org/discussions/thread/explain-this-magic/
# Explain this magic!?!?

Hello everyone!! While researching a series, I found something magical. Can someone explain this?? Let $S=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots$ $S=\left(1+\frac{1}{3}+\frac{1}{5}+\cdots\right)+\frac{1}{2}\left(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots\right)$ $S=\left(1+\frac{1}{3}+\frac{1}{5}+\cdots\right)+\frac{S}{2}$ $\frac{S}{2}=1+\frac{1}{3}+\frac{1}{5}+\cdots \quad \text{(i)}$ Now from the definition of S, $\frac{S}{2}=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots \quad \text{(ii)}$ Comparing (i) and (ii) and transposing, we get $1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots=0$ But obviously, $\left(1-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{4}\right)+\cdots>0$ Note by Pranjal Jain 2 years, 10 months ago Here S is not a convergent series: the given series diverges. So you are treating S as a number, but its sum goes to infinity (and infinity cannot be considered a number, because it does not follow the properties of numbers). So unknowingly you are applying algebraic operations on infinity (for you it is S), which is the flaw in your "magic"! · 2 years, 10 months ago Thanks dude! I got what you are saying! But I don't know much about convergence of a series! Any good and reliable source? Well, I tried to learn convergence from "Hall and Knight". I need some more help! · 2 years, 10 months ago
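The explanation above can be checked numerically. The short Python sketch below (helper names are mine, for illustration only) shows that the partial sums of S grow without bound, so S is not a number that can be moved across the equals sign, while the alternating series converges to ln 2 ≈ 0.693, not 0.

```python
import math

def partial_sum(term, n):
    """Sum the first n terms of the series with general term term(k), k = 1..n."""
    return sum(term(k) for k in range(1, n + 1))

def harmonic(k):
    return 1.0 / k

def alternating(k):
    return (-1.0) ** (k + 1) / k

# The harmonic series has no finite sum: its partial sums grow like ln(n).
for n in (10, 1_000, 100_000):
    print(n, partial_sum(harmonic, n))

# The alternating series 1 - 1/2 + 1/3 - ... converges to ln 2, not 0,
# consistent with the grouped sum (1 - 1/2) + (1/3 - 1/4) + ... being > 0.
print(partial_sum(alternating, 100_000), math.log(2))
```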
https://quantumcomputing.stackexchange.com/questions/4010/how-are-magic-states-defined-in-the-context-of-quantum-computation
# How are magic states defined in the context of quantum computation? Quoting from this blog post by Earl T. Campbell: Magic states are a special ingredient, or resource, that allows quantum computers to run faster than traditional computers. One interesting example that is mentioned in that blog post is that, in the case of a single qubit, any state apart from the eigenstates of the Pauli matrices is magic. How are these magic states more generally defined? Is it really just any state that is not a stabilizer state, or is it something else? The standard example is that if you can produce the state $(|0\rangle+e^{i\pi/4}|1\rangle)/\sqrt{2}$, then you can combine this with Clifford operations in order to apply a $T$ gate (see Fig. 10.25 in Nielsen and Chuang), and we know that $T$+Clifford is universal.
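As a quick sanity check, one can verify with a few lines of numpy (using the standard matrix representations as assumptions) that the state above is exactly $T|+\rangle$, and that it is not an eigenstate of any Pauli matrix, whereas single-qubit stabilizer states are precisely the Pauli eigenstates.

```python
import numpy as np

# Single-qubit states and gates in the computational basis
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

# The canonical magic state |T> = (|0> + e^{i pi/4} |1>) / sqrt(2)
magic = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

# Applying the T gate to |+> produces exactly this state
assert np.allclose(T @ ket_plus, magic)

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# |T> is not an eigenstate of any Pauli: |<T|P|T>| < 1 for P in {X, Y, Z},
# so P|T> is never proportional to |T>
for P in (X, Y, Z):
    assert abs(np.vdot(magic, P @ magic)) < 1 - 1e-9

print("T|+> is the magic state; it is not a Pauli (stabilizer) eigenstate")
```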
https://indico.desy.de/event/14795/contributions/17587/
# Rethinking Quantum Field Theory

Sep 27 – 30, 2016 DESY Hamburg Europe/Berlin timezone

## Large-Charge Perturbation Theory

Sep 29, 2016, 2:00 PM 15m
Seminar room 1, DESY Hamburg (Notkestrasse 85, 22607 Hamburg)
String & Mathematical Physics

### Speaker

Mr Orestis Loukas (University of Bern)

### Description

I will introduce the basic concepts of Large-Charge Perturbation Theory (LCPT) in d+1 space-time dimensions. Given a Quantum Field Theory with a globally conserved charge Q, LCPT aims at providing analytic insight into sectors which remain inaccessible via ordinary perturbative methods, but where Q is assumed to be large. To this end, the scalar O(2) model with $\phi^N$ self-interaction will be implemented as a toy example. I will construct the large-charge vacuum of this theory as a generalized coherent state and derive its effective potential at fixed (and large) charge Q. Subsequently, we shall investigate the perturbative treatment of fluctuations around the large-Q vacuum, proving the existence of a consistent "1/Q-expansion".

### Primary author

Mr Orestis Loukas (University of Bern)

### Co-authors

Dr Domenico Orlando (University of Bern)
Dr Luis Álvarez-Gaumé (CERN)
Dr Susanne Reffert (University of Bern)
https://www.dimensibahasainggris.com/2019/11/soal-indirect-speech-simple-past-tense.html
# Indirect Speech Exercises: Simple Past Tense

Indirect speech and reported speech are two names for the same thing: an indirect sentence. An indirect sentence is used to relay a statement that someone made earlier. English has many tenses, and which one is used depends on the time context in which the speaker narrates an event. The tense also determines the form of the corresponding indirect speech. This post covers practice exercises on converting reported speech from the simple past into the past perfect. As discussed previously, when we change a sentence in the simple past tense into indirect speech, the reported clause shifts into the past perfect tense.

Related article: Penjelasan Reported Speech Simple Past Tense Menjadi Past Perfect Tense (Explanation of Reported Speech: Simple Past Tense to Past Perfect Tense)

Next, let's try the following exercises to sharpen our understanding of indirect/reported speech in the simple past tense. Here are the complete exercises.

## Exercise one

Change the following sentences into indirect speech!

1. Gugum: "Lita and Intan were in Pangandaran last week." Gugum said (that) . . . . .
2. Winda: "I didn't see you at the festival last night." Winda said (that) . . . . .
3. Erni: "My aunt celebrated her 20th anniversary at the restaurant. And I was there." Erni said (that) . . . . .
4. Paul: "The black sedan hit the motorcycle from behind and then everybody chased that black sedan." Paul said (that) . . . . .
5. Sheny: "I wasn't the only student in the class who failed the test." Sheny said (that) . . . . .
6.
Andra: "You were in the same room with me when Mrs. Lastri was angry to us." Andra said (that) . . . . .
7. Ivan: "I told you I didn't have anything left in my pocket." Ivan said (that) . . . . .
8. Terry: "The movie was scary but I enjoyed it very much." Terry said (that) . . . . .
9. Mr. Wildan: "They came to the reunion with their family last month." Mr. Wildan said (that) . . . . .
10. Mrs. Evelyn: "The carnival was spectacular. And the chairmen said it was a new record." Mrs. Evelyn said (that) . . . . .

## Exercise two

Change the sentences below into direct speech!

1. Lucky said (that) he had not chatted with his girlfriend. Lucky: " . . . . . "
2. The doctor said (that) I had had influenza. Doctor: " . . . . . "
3. Mrs. Elly said (that) I and my friends had done our best and she had been proud of us. Mrs. Elly: " . . . . . "
4. The archaeologist said (that) the dinosaurs had been the biggest animal on the planet. The archaeologist: " . . . . . "
5. Erwin said (that) he had been a pizza deliverer. Erwin: " . . . . . "
6. Nia said (that) she and her family had been angry about the bullying issue involving her brother. Nia: " . . . . . "
7. Kevin said (that) he had made the same mistakes to his friends. Kevin: " . . . . . "
8. The zoo keeper said (that) the pandas had been tame. They had been nice to all visitors. The zoo keeper: " . . . . . "
9. Vito said (that) he had not bought a new apartment.
he had just rented it for a month. Vito: " . . . . . "
10. Wellen said (that) her sister had not been happy about the competition's result. Wellen: " . . . . . "

## Answer key

To double-check the answers you have given, please see the answer key for these indirect speech (simple past tense) exercises below.

### Exercise one

1. Gugum said (that) Lita and Intan had been in Pangandaran the previous week.
2. Winda said (that) she hadn't seen me at the festival the night before.
3. Erni said (that) her aunt had celebrated her 20th anniversary at the restaurant. And she had been there.
4. Paul said (that) the black sedan had hit the motorcycle from behind and then everybody had chased that black sedan.
5. Sheny said (that) she hadn't been the only student in the class who had failed the test.
6. Andra said (that) I had been in the same room with him when Mrs. Lastri had been angry to us.
7. Ivan said (that) he had told me he hadn't had anything left in his pocket.
8. Terry said (that) the movie had been scary but she had enjoyed it very much.
9. Mr. Wildan said (that) they had come to the reunion with their family the previous month.
10. Mrs. Evelyn said (that) the carnival had been spectacular. And the chairmen had said it had been a new record.

### Exercise two

1. Lucky: "I didn't chat with my girlfriend."
2. Doctor: "You had influenza."
3. Mrs. Elly: "You and your friends did your best and I was proud of you."
4. The archaeologist: "The dinosaurs were the biggest animal on the planet."
5. Erwin: "I was a pizza deliverer."
6. Nia: "I and my family were angry about the bullying issue involving my brother."
7. Kevin: "I made the same mistakes to my friends."
8. The zoo keeper: "The pandas were tame. They were nice to all visitors."
9. Vito: "I didn't buy a new apartment. I just rented it for a month."
10. Wellen: "My sister wasn't happy about the competition's result."

Dimensi Bahasa Inggris: "Semangat menebar manfaat." ("The spirit of spreading benefit.")
https://www.nature.com/articles/s41598-023-30802-w?error=cookies_not_supported&code=f5400fee-4fb7-4eec-aec6-5ecf05c39d25
## Introduction

Wildfire, also known as bushfire, can occur when there is unfavorable weather (low humidity, high temperature, high winds, etc.) combined with dry vegetation fuel. The past decades have witnessed wildfires becoming a severe threat around the world, such as in southern Europe, North America, and southeastern Australia1,2,3. Wildfires in different regions exhibit regional characteristics, as local conditions affect typical ignition causes, fire behavior, etc. In the United States, wildfire is a particularly important hazard in California, which has been suffering from frequent devastating wildfires. The California Department of Forestry and Fire Protection (CAL FIRE) reported a yearly average of 3217 wildfire incidents and 624,728 burnt acres in California during 2016~20204. While wildfires can be started by a broad variety of causes (e.g., lightning, arson, smoking, etc.), electrical powerlines were shown to be the only non-declining ignition source5. Statistics showed that out of the top 20 most destructive California wildfires, at least five were started by power systems, including the 2018 Camp Fire, which destroyed 18,804 structures and claimed 85 lives6. In fact, power system-ignited incidents are more likely to develop into large wildfires, due to their special relationship with extreme weather conditions. A power network, comprising numerous components and equipment, can experience a sharp increase in failures and faults under strong winds7,8. With the contribution of hot and dry air, various ignition mechanisms can be triggered, and fires may be started where combustible fuels are present. Moreover, strong winds can greatly facilitate the fire spread, while also hindering firefighting efforts. Wind is the driving weather factor for powerline ignition. This has been indicated by the joint occurrence of powerline-related wildfires and seasonal extreme winds in California9,10.
These foehn winds (known as Diablo winds or Santa Ana winds) are characterized by remarkable intensity and gustiness. The electric power grid is generally considered to be composed of two systems: transmission system and distribution system. Compared to the distribution system, the transmission system plays a more critical role in power reliability because it transports bulk, high-voltage electricity over long distances. There is growing research interest in studying the reliability and resilience of power infrastructure subject to wind hazard11,12,13,14,15,16,17. In most cases, the structural failure of a certain component (e.g., transmission conductors, utility poles/towers) was studied. However, where wildfire is concerned, the relevant limit state is different from traditional structural failure, as emphasis is put on the probability of causing effective ignition mechanisms18. For instance, hot metal particles from conductor arcing and burning embers from conductor-vegetation contact are both eligible wind-induced failure modes, whereas structural failure (e.g., conductor rupture) is not necessarily dangerous19. It was shown that vegetation contact was the primary cause of power utility ignition in California, with a contribution of 53.5%20. Under high wind conditions, conductor-vegetation contact generally occurs in two forms: broken trees/limbs falling on the conductor (known as the “fall-in” issue), and conductor swinging out to nearby vegetation (known as the “grow-in” issue). Overhead transmission lines are usually supported by tall transmission towers, which makes the fall-in issue less likely. Instead, the vegetation grow-in issue is identified as a major threat to electric transmission systems21. Transmission conductors bear the greatest exposure to wind hazard events, as they span long distances across variable terrains. They are highly flexible, and large swinging displacements (10~20 m) can be observed around the mid-span22. 
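The swing displacements quoted above can be roughly reproduced with a quasi-static force balance: the conductor hangs along the resultant of self-weight and wind drag, so the sag plane rotates by a swing angle. The sketch below is a simplified illustration with assumed parameter values (not taken from this paper), and it ignores the dynamic amplification that the spectral analysis developed later accounts for.

```python
import math

def swing_displacement(sag_m, wind_speed_ms, diameter_m, mass_per_m,
                       rho_air=1.225, drag_coeff=1.0, g=9.81):
    """Quasi-static mid-span swing of a conductor under steady wind.

    The conductor is assumed to hang in the plane of the resultant of
    self-weight and wind drag, so the sag rotates by the swing angle.
    Returns (swing angle from vertical in rad, horizontal offset in m).
    """
    wind_load = 0.5 * rho_air * drag_coeff * diameter_m * wind_speed_ms ** 2  # N/m
    weight = mass_per_m * g                                                   # N/m
    swing_angle = math.atan2(wind_load, weight)
    horizontal = sag_m * math.sin(swing_angle)  # mid-span offset toward the ROW edge
    return swing_angle, horizontal

# Illustrative (assumed) values: 10 m sag, 30 m/s wind, 30 mm ACSR-like conductor
angle, dx = swing_displacement(10.0, 30.0, 0.030, 1.6)
print(f"swing angle = {math.degrees(angle):.1f} deg, horizontal offset = {dx:.1f} m")
```

Even this crude balance gives a swing angle near 45 degrees and a mid-span offset of several meters at 30 m/s, broadly consistent with the 10~20 m displacements cited for stronger events once dynamics are included.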
It is anticipated that climate change will influence the magnitude and frequency of future extreme weather events. The American Society of Civil Engineers (ASCE) has been advocating adaptive infrastructure for a changing climate. As noted therein, potential adverse changes, such as prolonged dry seasons, warmer temperatures, and increased extreme wind intensities, may worsen the situation of powerline-induced ignition23,24. One major challenge is to assess the impacts of climate change on built and natural systems based on climate projections. With varying complexities and goals, climate analysis may be carried out at different levels24. Even ignoring the future effects of climate change, a static wind hazard map with a 20-year return period was generated, and it shows the severity of the problem (see Fig. 1). The methodology used for creating the wind hazard map is detailed in the Supplementary Information. Fig. 1 suggests that very strong winds, with a broad intensity range (17~104 m/s), are expected to occur in California. Distinct spatial variation can also be observed despite the sparsity of stations in some areas, which could pose a serious challenge to the operation of large-scale power grids. In recognition of the potential for devastating wildfires started by power systems, California electric utilities are authorized to conduct preemptive Public Safety Power Shutoff (PSPS) in response to severe weather conditions25. In the fire season of 2019 alone, millions of people were affected by the rounds of power shutoffs, which lasted for more than one month26,27. Despite the immediate effectiveness in stopping power assets from causing fires, a PSPS event can lead to other significant disruptions, as communities and critical infrastructures are de-energized. Risk analysis is a powerful tool for decision making under uncertainties.
In the PSPS context, two risks have to be balanced, namely the risk from utility-induced wildfires and the risk from events connected with the blackout, which can range from an increase in car accidents due to the lack of traffic lights, to health problems caused by shutting down domestic life-preserving equipment28. Wildfire risk analysis is generally concerned with three components: ignition probability, burn probability (or spread probability), and vulnerability29. In terms of ignition probability, some previous studies focused on developing statistical models by studying historical ignition records30. These purely data-driven approaches are versatile and applicable to various ignition sources. However, they are uninformative for understanding the underlying failure and ignition mechanisms that could drive improvement measures and real-time decisions on PSPS. In contrast, there is a paucity of research work on wildfire ignition focusing on the physical interaction between high winds and electric power infrastructure, which is the focus of this study. The prediction of ignition has great influence on wildfire risk analysis because subsequent fire propagation simulation and fire damage analysis rely on ignition location and timing as input. Hence, this study focuses on the ignition due to the transmission conductor being blown close enough to the surrounding vegetation and causing flashover or sparks. Specifically, a methodology for estimating the probability of encroachment into baseline clearance (i.e., the initiating failure) is proposed, as summarized in Fig. 2.
The encroachment probability computed using this methodology accounts for all the relevant factors, such as the duration of the wind event, the wind intensity, the transmission line (TL) properties, and the vegetation clearance. It is worth noting that previous studies on the dynamics of conductor cables in the spectral domain (using a similar characterization for the wind stochastic process) focused on the stresses concerning the conductor failure. However, the application to vegetation encroachment entails a distinct focus on the conductor displacements. It thus requires that a new limit state equation and associated first-passage problem be formulated, which is an original contribution of this work. The remainder of this paper is structured as follows. First, the background and practices of vegetation management are reviewed, following which the relevant limit state is defined. Second, the proposed methodology for computing the probability of encroachment is detailed. Finally, the application section gives two examples at different scales, where major findings are presented.

## Vegetation management for transmission systems

Vegetation growing near power infrastructure has long been recognized as a threat to the reliability of electric power networks, and it is particularly concerning in transmission systems. In fact, the shift of electrical current due to a failed TL may cause cascading failures elsewhere and massive power outages31. Meanwhile, urbanization has been driving power infrastructure into the wildland-urban interface (WUI), which is more forested and fire-prone, exacerbating the risks caused by the proximity to vegetation9. The vast majority of transmission lines use overhead conductors instead of underground cables, because the latter are much more expensive to install and maintain. As mentioned earlier, there are two types of vegetation-conductor interactions that can trigger failure: the fall-in type and the grow-in type.
The fall-in failure mechanism involves substantial uncertainties on the vegetation side, including vegetation health condition and fracture strength under wind loading, to name a few. Although modern technologies, such as Light Detection and Ranging (LiDAR)32, have facilitated vegetation data acquisition, the fall-in issue remains very difficult to predict, even in a probabilistic sense, given the complexity of vegetation and the post-fracture windborne path. On the other hand, this paper focuses on the grow-in class of potential failures, which is closely related to the structural behavior. Specifically, the wind-induced dynamic displacement response of transmission lines is examined with the aim of better understanding how it increases the probability of clearance encroachment, and in turn ignition. The corresponding ignition mechanism is the flashover (or sparkover) phenomenon, in which electrical current jumps through air from the conductor to a nearby object (typically trees). The energy released from the high-voltage current can result in ignition and even fires in the presence of low-moisture vegetation and a dry atmosphere. It is important to note that flashover can occur even when there is no direct contact between conductor and trees. In order to protect the electric power infrastructure from vegetation interruptions, clearance regulations are universally established. Where powerline-related wildfire risk is concerned, stricter regulations can be introduced. In the United States, the NERC FAC 003-4 standard is the most relevant to transmission system vegetation management21. In essence, it requires that a minimum vegetation clearance distance (MVCD) be maintained between transmission conductors and contiguous vegetation. The "wire-border zone" is an effective technique in transmission system vegetation management and is widely used in the field33. This approach establishes a right-of-way (ROW) along transmission facilities, as shown in Fig. 3.
Typically, the ROW is composed of a wire zone where only low-growing vegetation is allowed and two border zones where taller shrubs and small trees may be permissible. Considering the conductor sag and sway, the width of the ROW is usually much larger than what is needed for structural placement only. For example, the ROW of 230 kV lines can vary between 20 m and 60 m. Note that in Fig. 3 vegetation and conductor movement are only drawn on one side and the MVCD is indicated as a radius surrounding the conductor. Since the position of a conductor is constantly changing due to various loading, a potential flashover zone can be identified along the trajectory. ## Limit state As mentioned above, the failure scenario under investigation is that the conductor sways outward and becomes close enough to the vegetation to potentially cause flashover and ignite a fire. The air gap between the conductor and the vegetation can be regarded as an insulator whose insulation capacity depends on its size and the ambient characteristics (e.g., temperature, humidity). In terms of the gap size, there are two main sources of uncertainty: one is the turbulent wind loading which directly influences the conductor and vegetation motion; the other is the vegetation growth which is affected by natural conditions and human interventions (e.g., periodic trimming). Vegetation growth is meaningful only over long time horizons (months, years), and its effect can be neglected in the context of short-duration strong wind events. Therefore, the gap size is primarily affected by wind-induced conductor displacement, as vegetation movement is usually deemed negligibly small in comparison. This study defines the failure state (i.e., limit state) as the encroachment of the conductor into the MVCD. 
In the framework of wildfire risk analysis, it is important to recognize that reaching this limit state is only the first step in the encroachment-flashover-ignition chain of events, and the conditional probabilities of occurrence of the other two steps should be considered to compute the overall risk of ignition. To accurately calculate the flashover probability over an air gap that changes in size during the wind event, the relationship between the gap size and its insulation capacity should be quantified. For example, the Gallet equation was adopted in the NERC FAC 003-4 standard to compute an MVCD yielding a flashover probability of $$10^{-6}$$ or less21. However, further experiments are needed for validation and for better understanding of the transient flashover phenomenon31,34. The probability of ignition from flashover varies with many factors, including the flammability of the vegetation and the air conditions near the incident. Given the limited knowledge and significant uncertainties involved, the transformation from MVCD encroachment to ignition relies on the subjective judgment and risk attitude of the decision makers. For this reason, the calculation of the aforementioned conditional probabilities is beyond the scope of this work, which instead focuses on the encroachment itself. The next section presents a mathematical expression of the limit state as well as the methodology for determining the needed quantities.

## Methodology

### Finite element model of a transmission line section

Transmission lines are typically designed in sections, where a TL section consists of multiple spans and can run for up to several kilometers. Fig. 4 illustrates the model of an example multi-span TL section. Herein, the OpenSeesPy environment is used to describe how a finite element model of a multi-span transmission cable can be built and analyzed35.
The two ends are connected to strain towers allowing no longitudinal conductor movement, and they are modeled as hinged supports. Suspension insulator strings hung at intermediate transmission towers support the conductors at their lower ends. The conductor-insulator attachment point is modeled as a hinge, according to the most commonly used articulated suspension clamp36. As the insulator string swings, the attachment point can move freely in space. Depending on the voltage, a single conductor (up to 220 kV) or bundled conductors (220 kV and above) can be used in transmission circuits. A single conductor can be modeled using the cable element37, while a model of bundled conductors may require that the effect of spacers be captured. The conductor takes a catenary form within one span, and the unstrained profile needs to be determined first in order to compute the sagged profile14. Suspension insulator strings are usually made of brittle materials (e.g., glass, porcelain), and their bending stiffness is negligibly small. Thus, the suspension insulator string is modeled by the corotational truss element with high axial rigidity, accounting for its large displacements. The insulator string is only several meters long (depending on the voltage), so the wind load applied directly on it is negligible compared to the wind load transferred from the conductor. The specific mechanical parameters needed to set up the finite element model are provided in the Application section through a two-span transmission line example.

### Description of turbulent wind and buffeting wind load

Although the TL-vegetation interaction is a localized problem, the mathematical wind flow models should be established at a large scale for the considered synoptic (non-tropical) wind storms. The wind flow is considered horizontally homogeneous, as transmission systems mostly spread across open terrain areas which provide a sufficiently long fetch.
However, it should be recognized that non-homogeneity exists when the system encounters trees (from sparse woods to dense forests). While this paper is aimed at proposing a general-purpose methodology, separate studies should be conducted for specific conditions to obtain results tailored to those cases. As shown in Fig. 4, it is assumed that the wind flow is present in one direction only, i.e., perpendicular to the span direction of the TL section. This direction is chosen because it is considered the most unfavorable for conductor displacement responses. This assumption leads to a conservative estimation of the risk. To determine the degree of this overestimation, a specific analysis of wind patterns and prevalent wind directions should be conducted for the investigated region. In wind engineering, the total fluctuating wind velocity is usually split into two parts: the constant mean wind velocity $$\overline{V}_z$$ at height z, plus the zero-mean turbulent fluctuation v(t, x), where t indicates time and x is the location along the conductor cable. Within the lower layer of the atmospheric boundary layer, the variation of mean wind speed with height can be described by the logarithmic law: \begin{aligned} \overline{V}_z = \frac{1}{k} \cdot u_* \cdot \textrm{ln}\left( \frac{z}{z^{}_0}\right) \; \end{aligned} (1) where $$u_*$$ is the shear velocity of the wind flow; $$z^{}_0$$ is the surface roughness; and k is von Kármán's constant, usually taken as 0.4. The 10-min wind speed measured at 10 m above the ground – the standard height for mounting anemometers – is typically chosen as the reference wind speed for the mean wind profile. In this study, the intensity of the wind event is described in terms of the reference wind speed (denoted by $$\overline{V}_{10}$$) from which mean wind speeds at other heights are calculated. For cases where measurements from different averaging times are preferred, conversion factors can be found in the literature38,39.
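As a quick illustration of Eq. (1), the shear velocity can be recovered by evaluating the log law at the anemometer height and solving for $$u_*$$; the minimal Python sketch below does this (the function name is illustrative, and the default roughness $$z^{}_0 = 0.03$$ m is the open-terrain value used later in the application example):

```python
import numpy as np

def mean_wind_speed(z, V10, z0=0.03, k=0.4, z_ref=10.0):
    """Log-law mean wind profile, Eq. (1).

    Evaluating Eq. (1) at the anemometer height z_ref = 10 m and
    solving for the shear velocity gives u* = k * V10 / ln(z_ref/z0),
    which is substituted back to obtain the mean speed at height z.
    """
    u_star = k * V10 / np.log(z_ref / z0)
    return (u_star / k) * np.log(z / z0)

# Example: mean wind speed at a 40 m support height for a
# reference speed of 45 m/s over open terrain.
V40 = mean_wind_speed(40.0, 45.0)
```

By construction the profile returns the reference speed at 10 m, and for this roughness the speed at the 40 m support level is roughly 24% higher than $$\overline{V}_{10}$$, since the ratio equals ln(40/0.03)/ln(10/0.03).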
In the initial state, the conductor is typically sagged with a sag-to-span ratio of 1/50~1/3040. The mean wind speed along one span can be well approximated by the mean wind speed at the reference height, which is (2/3)d below the support level, where d is the sag at mid-span41. The wind turbulence is correlated in time and space. Both correlations have been studied extensively, and well-established models for them are available in the literature. As expected, the correlation within the wind field decays with increasing time lag and space separation. At a single point in space, the temporal correlation of alongwind turbulence is most commonly described by the following single-sided power spectral density (PSD) in the frequency domain42,43: \begin{aligned} \frac{fS_v(f)}{u_{*}^2} = \frac{200fz/\overline{V}_z}{{(1+50fz/\overline{V}_z)}^{5/3}}\; \end{aligned} (2) where f is the frequency in Hz. The spatial correlation between the wind velocity fluctuations at two points at the same height (e.g., the reference height) can be captured by the coherence function proposed by Davenport44: \begin{aligned} \gamma (x^{}_1, x^{}_2, f) = \textrm{exp}\left( -\frac{C|x^{}_1-x^{}_2|f}{\overline{V}_z}\right) \; \end{aligned} (3) where $$x^{}_1$$ and $$x^{}_2$$ are the longitudinal coordinates of two points along the TL; C is the decay factor and can be set to 16 for horizontal separation. Even though there are different models in the literature45, in this paper Gaussianity is assumed for the wind flow velocity fluctuations, based on the work by Strømmen46. In summary, the wind fluctuation component v(t, x) is characterized as a zero-mean, stationary, Gaussian, one-dimensional (1D), multi-variate (mV) random process. The buffeting wind load on the conductor is generated by two sources: the total fluctuating wind flow, and the conductor-wind interaction due to conductor motion. Adopting the quasi-steady assumption, the dynamic wind drag force is calculated using Eq.
(4) so that aerodynamic damping is also (indirectly) considered: \begin{aligned} f^{}_{\textrm{D}} = \frac{1}{2}\rho D C_{\textrm{d}} V_{\textrm{rel}}^{2}\; \end{aligned} (4) where $$f^{}_{\textrm{D}}$$ is the drag force per unit length; $$\rho$$ is the air density; D is the diameter of the conductor; $$C_{\textrm{d}}$$ is the drag coefficient; $$V_{\textrm{rel}}$$ is the relative velocity between conductor and wind flow (see Fig. 5) and is given by the following equation: \begin{aligned} V_{\textrm{rel}} = \sqrt{(- \dot{u}^{}_{\textrm{Z}})^2 + (\overline{V}_z + v - \dot{u}^{}_{\textrm{Y}})^2}\; \end{aligned} (5) where $$\dot{u}^{}_{\textrm{Z}}$$ and $$\dot{u}^{}_{\textrm{Y}}$$ are the conductor velocities in the Z direction and Y direction, respectively.

### Wind-induced stochastic dynamic response by spectral analysis

The wind-induced buffeting response of a TL section can be computed in two steps40: first, the equilibrium state of the structure under gravity and mean wind load is determined by static analysis; second, the dynamic response due to the fluctuating wind component is obtained with the structure linearized at the mean wind state. Ma et al.14 validated the linearization of the structure under significant mean wind load. With the two linearizations – the linear relationship between wind velocity and wind load (small fluctuating component assumption), and the linear behavior of the structure characterized at the mean wind state – the properties of the wind fluctuation component (Gaussian, stationary, etc.) will hold for the displacement response as well46. To study the probability of encroachment into the MVCD, the main task is to obtain the probabilistic properties of the conductor displacement response, i.e., mean and standard deviation in this Gaussian case. Thus, the modal frequency domain approach was used in the second step.
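The wind-field and loading models of Eqs. (2)-(5) translate directly into a few lines of code; the sketch below is a minimal Python rendering (function names are illustrative, and the default conductor properties are the "Drake" values from the application example):

```python
import numpy as np

def turbulence_psd(f, z, Vz, u_star):
    """Single-sided alongwind turbulence PSD S_v(f), Eq. (2).

    Eq. (2) gives f*S_v/u*^2 = 200*n / (1 + 50*n)^(5/3) with the
    reduced frequency n = f*z/Vz; dividing by f recovers S_v.
    """
    n = f * z / Vz
    return (u_star**2 / f) * 200.0 * n / (1.0 + 50.0 * n) ** (5.0 / 3.0)

def coherence(x1, x2, f, Vz, C=16.0):
    """Davenport exponential coherence between two points at the
    same height, Eq. (3)."""
    return np.exp(-C * np.abs(x1 - x2) * f / Vz)

def drag_per_unit_length(V_mean, v, uY_dot=0.0, uZ_dot=0.0,
                         D=0.028, Cd=1.0, rho=1.226):
    """Quasi-steady drag force per unit length, Eqs. (4)-(5).

    The relative velocity accounts for conductor motion: a point
    moving downwind (uY_dot > 0) sees a smaller relative speed and
    hence a smaller drag force. This velocity-dependent reduction
    is the source of aerodynamic damping.
    """
    V_rel = np.hypot(-uZ_dot, V_mean + v - uY_dot)  # Eq. (5)
    return 0.5 * rho * D * Cd * V_rel**2            # Eq. (4)
```

For instance, at $$\overline{V}_z = 30$$ m/s a conductor point moving downwind at 2 m/s experiences about 13% less drag than a stationary one, which is the mechanism behind the large aerodynamic damping ratios reported later; likewise, the coherence equals 1 at zero separation and decays with both separation and frequency, which limits the spatial correlation of the buffeting load over long spans.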
Standard deviations were directly derived from the cross-spectral density matrix of the response, which can be found through efficient frequency domain analysis. Note that neither wind field simulation nor expensive Monte Carlo simulation in the time domain is needed. Following the frequency domain analysis approach, the dynamic response around the mean wind state is separated into background response and resonant response. Mode shapes and modal frequencies of the linearized structure can be obtained by eigenvalue analysis. Then, the cross-spectral density matrix (CSDM) of the modal displacement vector is determined by: \begin{aligned}{{\varvec{S}}}_{\textrm{q}}(f) = {{\varvec{H}}}(f) {{\varvec{S}}}_{\textrm{p}}(f) [{{\varvec{H}}}^*(f)]^{\textrm{T}}\; \end{aligned} (6) \begin{aligned}&{{\varvec{H}}}(f) = [{{\varvec{K}}} + \textrm{i} (2 {\pi } f)({{\varvec{C}}} + {{\varvec{C}}}_{\textrm{aero}}) - (2 {\pi } f)^2 {{\varvec{M}}}]^{-1}\; \end{aligned} (7) where $$\varvec{H}(f)$$ is the transfer matrix and is expressed in Eq. (7); superscripts * and $$\textrm{T}$$ represent complex conjugate operator and transpose operator, respectively; $$\textrm{i} = \sqrt{-1}$$; $$\varvec{K}$$, $$\varvec{C}$$, $$\varvec{C}_{\textrm{aero}}$$, and $$\varvec{M}$$ are generalized stiffness matrix, generalized structural damping matrix, generalized aerodynamic damping matrix, and generalized mass matrix in the modal space, respectively47. It is worth pointing out that $$\varvec{C}_{\textrm{aero}}$$ is non-diagonal because of the coupling effects among mode shapes. 
Additionally, $$\varvec{S}_{\textrm{p}}(f)$$ is the CSDM of the modal load vector and can be calculated as: \begin{aligned}&{{\varvec{S}}}_{\textrm{p},jk} = \frac{4 \bar{f}_{\textrm{D}}^2 {{\varvec{S}}}_v(f)}{\overline{V}_z^2} |{{\varvec{J}}}_{jk}(f)|^2\; \end{aligned} (8) \begin{aligned}&|{{\varvec{J}}}_{jk}(f)|^2 = \int _0^L \int _0^L \gamma (x^{}_1, x^{}_2, f) {{\varvec{\varphi }}}^{}_{\textrm{Y}j}(x^{}_1) {{\varvec{\varphi }}}^{}_{\textrm{Y}k}(x^{}_2) \textrm{d}x^{}_1 \textrm{d}x^{}_2\; \end{aligned} (9) where $$\bar{f}^{}_{\textrm{D}}$$ is the static mean drag force per unit length with $$\overline{V}_z = V_{\textrm{rel}}$$ in Eq. (4); $$|\varvec{J}_{jk}(f)|^2$$ is the joint acceptance function; L is the total span length of the TL section; $$\varvec{\varphi }^{}_{\textrm{Y}j}(x^{}_1)$$ is the Y component at $$x^{}_1$$ in the j-th mode; $$\varvec{\varphi }^{}_{\textrm{Y}k}(x^{}_2)$$ is the Y component at $$x^{}_2$$ in the k-th mode ($$x_1$$ and $$x_2$$ are just integration variables). Notice that only Y components appear in Eq. (9), because the wind flow is in the Y direction only. Once $$\varvec{S}_{\textrm{q}}(f)$$ is obtained from Eq. (6), the standard deviation of the total displacement response at the r-th node is derived by integration over the frequency range47: \begin{aligned} \sigma ^{}_{\lambda r} = \sqrt{\int _0^{\infty } \sum _{j=1}^{N} \sum _{k=1}^{N} {{\varvec{\varphi }}}^{}_{\lambda j r} {{\varvec{\varphi }}}^{}_{\lambda k r} {{\varvec{S}}}_{\textrm{q},jk} (f) \textrm{d}f}\; \end{aligned} (10) where N is the total number of modes considered; $$\lambda \in \left\{ \text {X, Y, Z} \right\}$$ indicates the direction. The background response is considered quasi-static and its standard deviation, $$\sigma ^{}_{\lambda r, \textrm{B}}$$, can be calculated as described above, but computing the transfer function simply as $$\varvec{H}(f) = \varvec{K}^{-1}$$, instead of using Eq. (7). 
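The chain of Eqs. (6), (7) and (10) can be sketched for a generic modal system; the toy example below (the 2-mode matrices and white-noise modal load are purely illustrative, not the TL model) assembles the transfer matrix at each frequency, forms the response CSDM, and integrates the spectral moment at one node with a simple rectangle rule:

```python
import numpy as np

def nodal_response_std(freqs, Sp, K, C, M, phi_r):
    """Displacement standard deviation at one node, Eqs. (6), (7), (10).

    freqs : (nf,) uniform frequency grid in Hz
    Sp    : (nf, N, N) CSDM of the modal load vector
    K,C,M : (N, N) generalized stiffness, damping, mass matrices
    phi_r : (N,) mode-shape ordinates at the node of interest
    """
    var = 0.0
    for Sp_f, f in zip(Sp, freqs):
        w = 2.0 * np.pi * f
        H = np.linalg.inv(K + 1j * w * C - w**2 * M)  # Eq. (7)
        Sq = H @ Sp_f @ H.conj().T                    # Eq. (6)
        var += np.real(phi_r @ Sq @ phi_r)            # integrand of Eq. (10)
    return np.sqrt(var * (freqs[1] - freqs[0]))       # rectangle rule

# Illustrative 2-mode system with a flat (white) modal load CSDM.
freqs = np.linspace(0.01, 5.0, 500)
K = np.diag([100.0, 400.0])
M = np.eye(2)
C = np.diag([2.0, 4.0])
Sp = np.tile(0.1 * np.eye(2), (freqs.size, 1, 1))
sigma = nodal_response_std(freqs, Sp, K, C, M, np.array([1.0, 0.5]))
```

Replacing $$\varvec{H}(f)$$ with the static flexibility $$\varvec{K}^{-1}$$ in the same routine yields the background standard deviation, from which the resonant part follows via Eq. (11).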
Finally, the standard deviation of the resonant response, $$\sigma ^{}_{\lambda r, \textrm{R}}$$, is calculated as follows: \begin{aligned} \sigma ^{}_{\lambda r, \textrm{R}} = \sqrt{\sigma _{\lambda r}^2 - \sigma _{\lambda r, \textrm{B}}^2}\; \end{aligned} (11) ### Mathematical description of real-time vegetation clearance As previously mentioned, the determination of the limit state involves two factors, i.e., conductor displacement and vegetation clearance. In terms of conductor displacements, the effects of insulator swing are included in the buffeting response of the conductor and are captured by its probabilistic properties. When the wind flow is in the Y direction only, the displacement response in the longitudinal direction (X) is considerably smaller than that in the alongwind direction (Y) or crosswind direction (Z). This study is concerned with the lateral (Y-direction) vegetation clearance, and a simplified configuration is illustrated in Fig. 6. For simplicity, the conductor swinging is only drawn on one side, with wind blowing in the positive Y direction. During a high wind event, the real-time clearance is only affected by conductor movement as both vegetation growth and vegetation motion are neglected. The conductor position dynamically changes in the space around the mean wind state (indicated by dashed circles) with a radial MVCD zone moving with it. The vegetation (tree) nearby is represented by vegetation points for which data from the latest survey may be used (e.g., point cloud data from a LiDAR survey). It should be recognized that in reality vegetation has great diversity and complexity (e.g., shape, species) which are not captured by vegetation points. 
The mathematical expression of the real-time lateral clearance can be written as: \begin{aligned} F(t) = Y_{\textrm{clr}} - \overline{U}^{}_{\textrm{Y}} - u^{}_{\textrm{Y}}(t)\; \end{aligned} (12) where t is the time instant; $$Y_{\textrm{clr}}$$ is the known pre-event clearance measured laterally from the cable resting state to the nearest vegetation point (indicated as a solid cross); $$\overline{U}^{}_{\textrm{Y}}$$ and $$u^{}_{\textrm{Y}}(t)$$ are the static mean displacement and the dynamic displacement in the Y direction, respectively. The violation of the MVCD (limit state) occurs when $$F(t) < mvcd$$, where mvcd is a prescribed value that can be determined based on voltage, altitude, etc.21 At this point, two additional concepts need to be made clear. First, Eq. (12) is meaningful under the premise that the mean-wind displacement of the conductor leaves the violation of the MVCD possible but not certain, so that the dynamic response effects are worth considering, as shown in Fig. 6. For the case where the displaced conductor is too far from the vegetation ($$Y_{\textrm{clr}} \gg \overline{U}^{}_{\textrm{Y}}+mvcd$$), violation can be deemed impossible; whereas if the conductor under mean wind is already too close to the vegetation ($$Y_{\textrm{clr}} \le \overline{U}^{}_{\textrm{Y}}+mvcd$$), no calculation is needed since violation is a certain event. The latter situation is actually relatively common when the vegetation clearance was designed for ordinary wind loads. Second, the blown-out envelope within one span is influenced by the changing sag of the conductor. The maximum lateral displacement within a span is achieved at the mid-span, in coincidence with the maximum sag, as shown in Fig. 7. Moreover, if a constant vegetation configuration is assumed for an entire span, the mid-span point is the critical location for the limit state check.
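The screening logic above (violation certain, violation negligible, or a first-passage analysis required) can be captured in a small helper built on the threshold of Eq. (13); in the sketch below, the mean displacement used in the numeric check is back-calculated from the figures later reported in Table 3, where $$Y_{\textrm{clr}} = 18$$ m and $$mvcd = 1.4$$ m give a threshold of 4.519 m:

```python
def upcrossing_threshold(Y_clr, U_Y_mean, mvcd):
    """Threshold a for the fluctuating displacement u_Y(t), Eq. (13).

    Returns None when a <= 0, i.e., the mean-wind position already
    violates the MVCD and violation is a certain event, so no
    first-passage analysis is needed. A very large a corresponds
    to the "too far" case, where the probability is negligible.
    """
    a = Y_clr - U_Y_mean - mvcd
    return a if a > 0.0 else None

# Mean alongwind displacement inferred from the Table 3 numbers:
# 18.0 - 1.4 - a = 12.081 m when a = 4.519 m.
U_Y_MEAN_30 = 18.0 - 1.4 - 4.519
```

With these values, a pre-event clearance of 13 m would already place the mean-wind position inside the MVCD (the helper returns None), while 18 m leaves a finite margin that only the turbulent fluctuations can consume.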
### Probability of first-excursion failure

The violation of the MVCD, as a proxy for utility-induced ignition, could cause large-scale blackouts and disastrous wildfires the very first time it occurs. This type of failure is categorized as failure due to first excursion (up-crossing), a problem extensively investigated in random vibration theory48. As previously mentioned, the fluctuating displacement $$u^{}_{\textrm{Y}}(t)$$ can be characterized as a stationary, Gaussian, and zero-mean random process. Letting $$F(t)=mvcd$$ and rearranging Eq. (12), the up-crossing threshold a is expressed as: \begin{aligned} a = Y_{\textrm{clr}} - \overline{U}^{}_{\textrm{Y}} - mvcd\; \end{aligned} (13) Note that Eqs. (12) and (13) are formulated in a continuous sense; however, calculations are actually performed at the nodes of the finite element model. Thus, the expected excursion rate (i.e., the average number of up-crossings per unit time) at the r-th node with respect to the threshold $$a_{r}$$ can be calculated as49: \begin{aligned} v_{ar}^{+} = \frac{1}{2 \pi } \frac{\sigma ^{}_{\dot{\textrm{Y}}r}}{\sigma ^{}_{\textrm{Y}r}} \textrm{exp}(-\frac{a_{r}^2}{2 \sigma _{\textrm{Y}r}^2})\; \end{aligned} (14) where $$\sigma ^{}_{\textrm{Y}r}$$ can be obtained using Eq. (10) with $$\lambda =$$ Y; $$\sigma ^{}_{\dot{\textrm{Y}}r}$$ is the standard deviation of the Y-direction velocity response at the r-th node and can be computed as: \begin{aligned} \sigma ^{}_{\dot{\textrm{Y}} r} = \sqrt{\int _0^{\infty } \sum _{j=1}^{N} \sum _{k=1}^{N} {{\varvec{\varphi }}}^{}_{\textrm{Y} j r} {{\varvec{\varphi }}}^{}_{\textrm{Y} k r} (2\pi f)^2 {{\varvec{S}}}_{\textrm{q},jk} (f) \textrm{d}f}\; \end{aligned} (15) Furthermore, it is found that the significant aerodynamic damping renders the background response dominant in the dynamic response47. This indicates that $$u^{}_{\textrm{Y}}(t)$$ is far from a narrow-banded process, which would instead require dominance of the resonant response.
Thus, under the further assumption that excursions arrive independently in time, the probability of encroachment, formulated as the probability of an up-crossing excursion ($$u^{}_{\textrm{Y}}(t) > a$$) in the interval $$0<t<T^{}_{0}$$, is given by49: \begin{aligned} P_{\textrm{en},r}(T^{}_{0}) = 1 - \textrm{exp}(-v_{ar}^{+} T^{}_{0})\; \end{aligned} (16) where $$T^{}_{0}$$ is the time horizon or duration in seconds. One particular advantage of computing the probability $$P_{\textrm{en},r}(T^{}_{0})$$ is that it accounts for the effect of $$T^{}_{0}$$. In practical applications, $$T^{}_{0}$$ is not necessarily equal to the forecast wind event duration but can be any shorter duration of interest. In general terms, the longer the waiting time, the more likely an excursion will occur. This can be very helpful in time-sensitive decision making, where risks evolve with time.

## Application

While the proposed methodology is general and can be applied to power transmission systems with different characteristics and in different regions, two specific application examples are presented to demonstrate the approach. The methodology was first implemented at the single TL section level, and the results were then extended to illustrate the application at the system level.

### Example of a two-span transmission line section

A two-span TL section with a nominal voltage of 230 kV (alternating current) was studied first, as shown in Fig. 8. General information on the element type and computational environment was already provided in the Methodology section. For this particular example, the relevant modelling details are given as follows. The conductor is hung at all towers at the same height ($$H = 40$$ m) with the largest sag at mid-span $$d = 13.33$$ m. The conductor is of the “Drake” type with the following relevant properties: diameter $$D = 0.028$$ m, unit weight $$w = 15.966$$ N/m, and modulus of elasticity $$E = 77$$ GPa.
The suspension insulator string was modeled by one co-rotational truss element with the following properties: length $$l_{\textrm{ins}} = 1.8$$ m, diameter $$D_{\textrm{ins}} = 0.254$$ m, total mass of the insulator string $$M_{\textrm{ins}} = 48$$ kg, and modulus of elasticity $$E_{\textrm{ins}} = 210$$ GPa. In order to account for possible future high wind events, seven intensity levels were studied: $$\overline{V}_{10} \in \left\{ 30, 35, 40, 45, 50, 55, 60 \right\}$$ m/s. The following parameters are also needed: surface roughness $$z^{}_0 = 0.03$$ m (open terrain), drag coefficient $$C_{\textrm{d}} = 1.0$$, air density $$\rho = 1.226$$ kg/$$\textrm{m}^3$$, and gravitational acceleration $$g = 9.81$$ m/$$\textrm{s}^2$$. Note that thermal loading and any other physical loading (e.g., ice) were neglected in this example. It is assumed that vegetation data are known beforehand, either from a previous survey or a valid estimation. The mvcd value corresponding to the 230 kV voltage varies between 1.2 m and 1.6 m depending on the altitude21. It was assumed that a constant $$mvcd = 1.4$$ m is required throughout the entire TL section. Critical checking points can be identified based on the available knowledge of the TL and the pertaining vegetation. For this example analysis, the vegetation clearance was assumed constant along the TL section, and the mid-span point of either span was logically chosen as the checking point. As mentioned earlier, for Eq. (12) to be meaningful, $$Y_{\textrm{clr}} > \overline{U}^{}_{\textrm{Y}}+mvcd$$ should be satisfied at any location to be checked. Correspondingly, a wide range of $$Y_{\textrm{clr}}$$ values with a 0.5 m interval was selected for analysis: 18.0, 18.5, ..., 26.5, 27.0 m. First, a static structural analysis under the mean wind load was carried out for each considered wind intensity, and the results are summarized in Table 1.
Owing to the symmetry of both the structure and the load, the two mid-span points experience the same displacements, while the conductor-insulator attachment point has no longitudinal displacement. The conductor mid-span exhibits noticeable displacements in the alongwind and crosswind directions, and both increase as the wind intensifies. This is mainly due to the rigid-body swing (considering the large sag) and partly due to the elongation of the conductor. With the wind load on the insulator string neglected and its weight relatively small, the insulator string swings out due to the drag force from the connected conductor, and the insulator sway angle $$\bar{\theta }_{ins}$$ is found to be consistent with the sway angle of the conductor plane. Moreover, the rate of increase of the sway angle reduces as the TL approaches positions almost parallel to the wind flow. A simple calculation $$\left(\sqrt{\overline{U}_{\textrm{Y,att}}^2 + (l_{\textrm{ins}} - \overline{U}^{}_{\textrm{Z,att}})^2}\right)$$ shows that the length of the insulator string does not change significantly, thanks to its high axial rigidity. In comparison with the magnitude of the mid-span displacements and the mvcd, the insulator string sway makes a non-negligible contribution to the overall displacements of the conductor. With the structural behavior linearized at the mean wind state, the dynamic modal properties of the linear system were obtained from an eigenvalue analysis in the displaced state. It is customary to describe the conductor movement (like a pendulum) using in-plane and out-of-plane modes. For instance, Fig. 9 displays the first 16 modal frequencies and mode shapes corresponding to $$\overline{V}_{10} = 45$$ m/s. Note that the mode shapes are either symmetric (sym.) or antisymmetric (antisym.). Pairs of in-plane and out-of-plane modes that share similar shapes and frequencies can be observed, such as modes 2 and 3, modes 4 and 5, etc.
Significant coupling effects of these pairs will lead to non-zero off-diagonal terms in $$\varvec{C}_{\textrm{aero}}$$47. The dynamic response around the static deflected position was computed in the frequency domain using the first 16 modes. This number was found to result in sufficient accuracy by a convergence test in terms of standard deviation of the displacement response. Structural damping was neglected as it is very small compared to the dominant aerodynamic damping. The aerodynamic damping ratio of the j-th mode can be obtained by: \begin{aligned} \zeta _{\textrm{aero},j} = \frac{{{\varvec{C}}}_{\textrm{aero},jj}}{4 {\pi } f_{j} {{\varvec{M}}}_{j}} \end{aligned} (17) where $$\varvec{C}_{\textrm{aero},jj}$$ corresponds to the j-th diagonal term of $$\varvec{C}_{\textrm{aero}}$$; $$f_{j}$$ and $$\varvec{M}_{j}$$ are the modal frequency and the generalized mass of the j-th mode, respectively. Fig. 10 compares the modal aerodynamic damping ratios under different wind intensities. It shows that significant aerodynamic damping is present and overall decreases with increasing mode number. Referencing Fig. 9, it can be observed that in-plane modes bear higher aerodynamic damping than out-of-plane modes. Consistent with findings by Stengel et al.50, a nonlinear relationship exists between aerodynamic damping ratios and high wind velocities. Fig. 11 gives the power spectral densities of the displacement response components at mid-span, where $$f_{\textrm{1}}$$ and $$f_{\textrm{16}}$$ correspond to values in Fig. 9. It is evident that the magnitude of longitudinal displacement is much smaller than the other two. Traces of resonances can be observed in all three directions within the range $$f_{\textrm{1}} \le f \le f_{\textrm{16}}$$. However, most energy is attributed to the background response (low frequency part) because the resonant response is damped out by the high aerodynamic damping. Subsequently, referring to Eqs. 
(10) and (11), the standard deviations of the displacement response components were obtained for each considered wind intensity. Fig. 12a shows the case with $$\overline{V}_{10}$$ = 45 m/s; results for other intensity levels are similar. It can be observed that the standard deviation of the background response is dominant in all three directions. Overall, the alongwind displacement shows the highest standard deviation, the crosswind displacement the second highest, and the longitudinal displacement the lowest. Moreover, the standard deviations of the alongwind and crosswind displacements are symmetric about the attachment point, with maxima appearing at the mid-span. The standard deviation of the longitudinal displacement is also symmetric, but it achieves its maximum at the attachment point. This indicates that the dynamic behavior of the insulator string mostly affects the longitudinal displacement, which influences the lateral clearance indirectly by shifting the conductor (and in turn the critical checking points) along the span direction. Focusing on the total standard deviation, Figs. 12b–d examine the variation of the standard deviation with the wind intensity. Mid-span results are given in Table 2, where $$\overline{U}^{}_{\textrm{Y,mid}}$$ is the same as in Table 1 and is repeated here for convenience; $$\delta ^{}_{\textrm{Y}}$$ is the coefficient of variation (c.o.v.) of the alongwind displacement and can be calculated as: \begin{aligned} \delta ^{}_{\textrm{Y}} = \frac{\sigma ^{}_{\textrm{Y}}}{\overline{U}^{}_{\textrm{Y}}} \end{aligned} (18) Clearly, the standard deviations of the longitudinal displacement ($$\sigma ^{}_{\textrm{X}}$$) and the alongwind displacement ($$\sigma ^{}_{\textrm{Y}}$$) both increase with increasing wind velocity. However, considering the lateral clearance of the mid-span point, $$\sigma ^{}_{\textrm{X}}$$ is much smaller than $$\sigma ^{}_{\textrm{Y}}$$ ($$\sigma ^{}_{\textrm{X}} \approx 10\% \sigma ^{}_{\textrm{Y}}$$).
Therefore, the effects of the longitudinal shifting of the conductor were neglected in this example. In contrast to $$\sigma ^{}_{\textrm{X}}$$ and $$\sigma ^{}_{\textrm{Y}}$$, the standard deviation of the crosswind displacement ($$\sigma ^{}_{\textrm{Z}}$$) shows a favorable decreasing trend with increasing wind intensity, as in Fig. 12d. Recall that the total standard deviation is dominated by the quasi-static background response, which is closely related to the static mean wind position. It is easy to see that as the static conductor plane becomes more aligned with the Y-direction fluctuations, less response is excited in the Z direction. According to Table 2, $$\sigma ^{}_{\textrm{Y}}$$ values can be very close to or even larger than the mvcd ($$=1.4$$ m). The non-dimensional c.o.v. shows that the degree of variability of the alongwind displacement at mid-span decreases with increasing wind intensity, as $$\sigma ^{}_{\textrm{Y}}$$ increases more slowly than $$\overline{U}^{}_{\textrm{Y,mid}}$$. Nevertheless, a high variation is anticipated in the conductor displacement response during strong wind events. This constitutes a major finding, because it demonstrates the necessity of accounting for the wind turbulence-induced dynamic effects and the associated uncertainties in regular vegetation management and risk analysis. Since the fluctuating displacements at both mid-span points have the same probabilistic properties, i.e., $$u^{}_{\textrm{Y}}(t)\sim \mathscr {N}(0, \sigma ^{}_{\textrm{Y}})$$, only the left mid-span is discussed in the following. Based on the vegetation clearance configuration in Eq. (12), the probabilities of encroachment at mid-span (denoted by $$P_{\textrm{en}}$$, omitting the subscript r) were computed for different wind velocities and varying clearances. In terms of the time horizon ($$T^{}_{0}$$), an examination of PSPS post-event reports suggests that the duration of high wind events varies from several hours to two days51.
Therefore, $$P_{\textrm{en}}$$ values were computed for up to 48 hours. Results for $$\overline{V}_{10} = 30$$ m/s are presented in Table 3, where $$u^{}_{\textrm{Y}}(t) \sim \mathscr {N}(0, 1.263)$$. It is shown that with an 18 m lateral clearance at mid-span, there is a $$100\%$$ probability that an MVCD violation will occur within the first 24 hours of the wind event. This is attributed to the fact that $$\sigma ^{}_{\textrm{Y}}$$ $$(=1.263$$ m) is comparable to the up-crossing threshold a $$(=4.519$$ m). However, without considering the dynamic effects, a deterministic evaluation based solely on the static mean wind position (i.e., $$a=4.519$$ m) may lead to the contrary conclusion that encroachment does not occur. As $$Y_{\textrm{clr}}$$ increases to 22.0 m, $$P_{\textrm{en}}$$ (48 h) reaches a very low level ($$10^{-6}$$); thus, results for $$Y_{\textrm{clr}} > 22.0$$ m are not shown in the table. For such a simple case (symmetric structure and regular vegetation clearance), the violation of the MVCD within an entire span can be captured by focusing exclusively on the mid-span point. However, it is theoretically necessary to also consider the likelihood of violation not occurring at mid-span while occurring at near-mid-span locations. The key to addressing this issue and determining whether it is of practical relevance resides in the correlation between the mid-span displacement response and near-mid-span displacement responses. To illustrate this point, the PSD and coherence of the turbulence component (corresponding to $$\overline{V}_{10}$$ = 30 m/s) are given in Fig. 13. The PSD, shown up to 1 Hz for ease of observation, decreases steeply with frequency. Fig. 13b examines the frequency-dependent coherence of the wind turbulence with varying spatial distance, $$\Delta x = |x^{}_{1}-x^{}_{2}|$$. Considering the most relevant frequency range [0, 0.5] Hz, the coherence remains high when the distance between two points is small (e.g., $$\Delta x < 2$$ m).
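The decaying coherence observed in Fig. 13b can be sketched with a Davenport-type exponential model. The decay coefficient `cx` below is an assumption; the actual coherence function adopted in the paper is not given in this excerpt.

```python
import math

def coherence(freq: float, dx: float, v_mean: float, cx: float = 10.0) -> float:
    """Davenport-type root-coherence exp(-cx * f * dx / V), in [0, 1]."""
    return math.exp(-cx * freq * dx / v_mean)

# At V10 = 30 m/s and f <= 0.5 Hz, nearby points (dx < 2 m) remain highly
# correlated, consistent with the observation made for Fig. 13b.
assert coherence(0.5, 2.0, 30.0) > 0.7
assert coherence(0.5, 50.0, 30.0) < coherence(0.5, 2.0, 30.0)
```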
This implies that the mid-span point, whose dynamic response is highly correlated with that of nearby points, is anticipated to violate the MVCD first given its higher risk profile. Therefore, in the following, the encroachment probability at mid-span is considered representative of the entire span, and a similar approach is expected to be viable in most practical situations. The wind intensity level and the vegetation clearance policy are major factors affecting the probability of encroachment. The sensitivity of the probability of encroachment to these two factors can be best understood from Fig. 14, which compares the performance of different wind clearance policies for two-day wind events of different intensities. Note that different $$Y_{\textrm{clr}}$$ ranges were examined depending on the wind intensity level, as indicated in the legends. It is evident that $$P_{\textrm{en}}$$ escalates as the TL continues operating during a wind event. For each considered wind intensity, a narrow $$Y_{\textrm{clr}}$$ range can be identified within which small changes have a substantial impact on the probability of encroachment. This clearance range can be a useful reference for cost-effective vegetation management planning. Moreover, the effectiveness of certain clearance options is sensitive to the wind intensity. For instance, the $$P_{\textrm{en}}$$ (48 h) sustained by a 24.0 m clearance rises from $$1.52\times 10^{-4}$$ (acceptable) to $$1.24\times 10^{-2}$$ (alarming) as $$\overline{V}_{10}$$ increases from 40 m/s to 45 m/s, as in Figs. 14c,d. In the context of decision making towards PSPS, data on vegetation and transmission assets are usually known beforehand, while wind data are available from weather forecasts. Probabilities of encroachment can then be calculated across the transmission network for a specified duration, which helps predict potential ignition locations.
It should be emphasized again that the de-energization decision is not driven by considerations of an individual span or TL, but is based on system-level analysis considering power flow. The scope of a power shutoff is a result of weighing two risks: the risk of catastrophic wildfires caused by utility assets, and the risks and drawbacks resulting from leaving the public without electricity.

### Example of a transmission system

A real-world transmission network presents tremendous variations and uncertainties not only in its structural and electrical aspects, but also in the surrounding conditions. Nevertheless, following a procedure similar to the one discussed above, a separate study can be conducted efficiently at any location of interest for which data is supplied. The aims of this system-level example are twofold. First, it serves to illustrate the incorporation of span-wise encroachment probability at the analysis scale at which the de-energization decision is actually made. Second, it is used to demonstrate how the branch lengths (in terms of the number of conductor spans) affect the overall probability of encroachment. The transmission system example is based on the benchmark Reliability Test System - Grid Modernization Laboratory Consortium (RTS-GMLC) model52; only region 3 is used, which is sized in such a way as to realistically represent southern California. The data is publicly available53. As shown in Fig. 15, the system consists of 25 buses, 69 generators, and 39 transmission branches connecting buses. However, the RTS-GMLC dataset lacks structure-related information for each branch, such as supporting structures, span length, sag, etc. Thus, for illustrative purposes, it was assumed that all transmission branches are composed of the same two-span sections (as shown in Fig. 8) and all related configurations still hold. Considering a three-phase circuit blown out to one side (see Fig.
3), the failure (i.e., encroachment into the MVCD) within one span is governed by the mid-span point of the outer-phase conductor. For a transmission branch, encroachment is defined as the event in which any of its spans violates the MVCD; a branch can therefore be modeled as a classical series system. It was further assumed that the encroachment failures among different spans are statistically independent, which is justified by the previous considerations on the correlation distance of the wind fluctuations. Hence, the probability of encroachment into the MVCD of a transmission branch, $$P_{\textrm{en}}^{\textrm{br}}$$, is expressed as: \begin{aligned} P_{\textrm{en}}^{\textrm{br}} = 1 - (1-P_{\textrm{en}})^{N_{\textrm{s}}} \end{aligned} (19) where $$N_{\textrm{s}}$$ is the number of spans of the considered branch. In this example, $$N_{\textrm{s}}$$ was obtained by dividing each branch into 400 m-long spans with rounding off at the ends. Depending on the length of the branches, $$N_{\textrm{s}}$$ ranges between 4 and 310, as shown by the histogram in Fig. 16. With the wind intensity and event duration fixed (48 hours), the effectiveness of a certain clearance is preferably examined at the branch or system level. Taking advantage of the results in Table 3, two lateral clearances (20.5 m and 21.0 m) under $$\overline{V}_{10}=$$ 30 m/s are compared in Fig. 15. Although the span-level $$P_{\textrm{en}}$$ is very low, the branch-level $$P_{\textrm{en}}^{\textrm{br}}$$ can be considerably high. As expected, longer branches have a higher probability of encroachment because of the larger number of spans. This indicates the importance of stricter clearances for longer branches on the premise of “series system failure”. For instance, the $$P_{\textrm{en}}^{\textrm{br}}$$ of the longest branch can be reduced from $$50.56\%$$ to $$6.97\%$$ by increasing the clearance from 20.5 m to 21.0 m. Leveraging forecasted weather data, the system-wide encroachment probabilities as visualized in Fig.
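Eq. (19) is straightforward to apply once the span-level probability is known. The sketch below uses a hypothetical span probability (the values of Table 3 are not back-solved here) and checks the small-probability approximation $$P_{\textrm{en}}^{\textrm{br}} \approx N_{\textrm{s}} P_{\textrm{en}}$$, which explains why long branches dominate the risk.

```python
# Branch-level encroachment probability (Eq. 19) for a series system of
# statistically independent spans. The span probability is hypothetical.

def p_branch(p_span: float, n_spans: int) -> float:
    """Eq. (19): P_en^br = 1 - (1 - P_en)^Ns."""
    return 1.0 - (1.0 - p_span) ** n_spans

p = 1.0e-4  # hypothetical span-level probability
assert p_branch(p, 4) < 1.0e-3    # a short branch stays low
assert p_branch(p, 310) > 0.03    # the longest branch (310 spans) is ~3%
# Small-probability sanity check: 1 - (1 - p)^n is close to n * p
assert abs(p_branch(p, 310) - 310 * p) < 1.0e-3
```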
15 can help evaluate power shutoff decisions both spatially and temporally. Additionally, the accuracy of encroachment prediction can benefit from improved data quality.

## Conclusions

This paper presents a methodology for assessing the probability of wildfire ignition from conductor-vegetation contact during strong wind events. The problem is formulated in the context of proactive power shutoff with a focus on transmission systems. The ignition mechanism involves the flashover (or sparkover) phenomenon caused by a displaced conductor coming close to nearby trees. With data on the vegetation configuration, conductor-vegetation interaction is examined through specific distance quantities. The failure mechanism is modeled as a first-excursion problem, and the limit state is defined as encroachment into the pre-defined baseline clearance (i.e., MVCD) in the alongwind lateral direction. By means of an efficient analysis in the frequency domain, the dynamic effects of TL displacement responses are derived from the wind turbulence and structural characteristics. The probability of encroachment is estimated based upon random vibration theory, and the effects of varied clearances and wind intensities are also explored. It is found that the mean wind load accounts for most of the conductor blown-out displacement, to which the contribution of insulator string sway is non-negligible. The dynamic response around the mean wind state is dominated by the background response, as the resonant response is suppressed by considerable aerodynamic damping. As shown by their standard deviations (high c.o.v., and comparable to the MVCD), the dynamic effects of the displacement response are non-negligible. The sensitivity analyses reveal that, for the range of probabilities where these calculations are meaningful (i.e., neither $$P_{\textrm{en},r}(0)=1$$ nor $$P_{\textrm{en},r}(T^{}_{0})=0$$), the encroachment probability is sensitive to vegetation clearance and wind intensity.
The proposed approach can be used at any identified checking point by virtue of the finite element implementation, as illustrated by the two-span TL example. To illustrate the transition from local checking points to the study of a de-energizable unit (such as a branch), the modified RTS-GMLC benchmark system example is used. The probability that any span along a branch encroaches can be appreciably high even if the encroachment probability of each individual span is very small. These sensitivity analyses cover the most important factors affecting the problem, but several other studies could be performed to determine the influence of secondary factors. It is important to point out that the ability to perform these sensitivity studies leverages the fact that a mechanistic approach has been developed. The data-driven approaches available in the literature could not support such sensitivity analyses because the available data for each combination of factors is insufficient. However, as is usually the case for probabilistic approaches applied to rare events, a global validation of the results against real-world events or experimental work is not possible. Instead, a step-by-step validation of the components of the proposed approach is presented, including the characterization of the wind stochastic process, the mathematical description of real-time vegetation clearance, the definition of the limit state, and the calculation of the probability of first-excursion failure. Wildfire is becoming a global threat against the background of climate change. Yet it is a relatively recent area of interest for civil engineering compared to other hazards (e.g., earthquakes, hurricanes). The main contribution of this paper is the proposed methodology for predicting powerline ignition through systematic analysis of the conductor dynamic response under high winds.
As opposed to purely data-driven methods that base predictions on historical ignition records, the proposed approach is efficient, informative, and flexible in accommodating various combinations of wind loading, structure, and vegetation. In particular, the calculated encroachment probability incorporates the influence of the event duration, which is an important factor in weighing shutoff decisions. However, several points need further attention. First, the overall approach and the accuracy of its results depend heavily on the availability and accuracy of the input data, including those related to the electric facilities, vegetation, weather, etc. In California these data are being collected systematically and thoroughly (Bob Bell, Manager, Transmission Vegetation Management Dept., Pacific Gas & Electric, personal communication, 2020), but this may not apply to all regions at risk of wildfires. Second, the two examples included in the manuscript serve illustrative purposes, with some simplified characteristics (single conductor, symmetric structural geometry, assumed constant vegetation clearance). For a complex real-world transmission network, repeated calculations are needed for each different conductor-vegetation-weather setting. Third, encroachment into the MVCD (or conductor-vegetation contact) is only the initiating event in the chain that may or may not lead to powerline-induced wildfires. Given the current knowledge on flashover and the various factors affecting ignition, the probability of ignition given encroachment will require additional studies. Nevertheless, informed of the probability of encroachment, utility decision makers are able to treat the encroachment-ignition knowledge gap as a safety margin and make justifiable de-energization decisions.