Dataset columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
https://socratic.org/questions/58e26d29b72cff66f68fca42
# Does the set of irrational numbers form a group? Apr 3, 2017 No - Irrational numbers are not closed under addition or multiplication. #### Explanation: The set of irrational numbers does not form a group under addition or multiplication, since the sum or product of two irrational numbers can be a rational number and therefore not part of the set of irrational numbers. About the simplest examples might be: $\sqrt{2} + \left(- \sqrt{2}\right) = 0$ $\sqrt{2} \cdot \sqrt{2} = 2$ #### Footnote Some interesting sets of numbers that include irrational numbers are closed under addition, subtraction, multiplication and division by non-zero numbers. For example, the set of numbers of the form $a + b \sqrt{2}$ where $a, b$ are rational is closed under these arithmetical operations. If you try the same with cube roots, you find that you need to consider numbers like $a + b \sqrt[3]{2} + c \sqrt[3]{4}$, with $a, b, c$ rational. More generally, if $\alpha$ is a zero of an irreducible polynomial of degree $n$ with rational coefficients, then the set of numbers of the form ${a}_{0} + {a}_{1} \alpha + {a}_{2} {\alpha}^{2} + \ldots + {a}_{n - 1} {\alpha}^{n - 1}$ with ${a}_{i}$ rational is closed under these arithmetic operations. That is, the set of such numbers forms a field.
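The closure claim in the footnote can be checked mechanically. Below is a minimal Python sketch (illustrative, not part of the original answer) that represents $a + b\sqrt{2}$ as a pair of rationals and multiplies two such numbers; the coefficients of the product stay rational, which is the closure property in action:

```python
from fractions import Fraction

# (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2),
# so the product is again of the form "rational + rational * sqrt(2)".
def multiply(x, y):
    """x and y are (a, b) pairs representing a + b*sqrt(2)."""
    (a, b), (c, d) = x, y
    return (a * c + 2 * b * d, a * d + b * c)

x = (Fraction(1, 2), Fraction(3))   # 1/2 + 3*sqrt(2)
y = (Fraction(-2), Fraction(1, 5))  # -2 + (1/5)*sqrt(2)
print(multiply(x, y))               # (Fraction(1, 5), Fraction(-59, 10))
```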
2021-06-18 21:25:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9411346316337585, "perplexity": 97.37750803034517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00120.warc.gz"}
http://partiallyattended.com/2009/04/13/Directed-Graph-blog-setup/
# Directed Graph blog setup. Posted on: 13 April 2009 in latex, centrality, blog, equation I've decided to use SyntaxHighlighter 2 to do syntax highlighting and a call to Yourequations.com for rendering LaTeX on the blog. Bot Cyborg has a good description of how to get this to work on Blogger. The main advantage of this approach is that the pseudocode is written in the blog posts as plain text enclosed in a pre tag, and the equations are also just vanilla LaTeX in a pre tag. So if we wanted to think about the betweenness centrality of a graph we have the equation: $C_B(v)= \sum_{s \neq v \neq t \in V \atop s \neq t}\frac{\sigma_{st}(v)}{\sigma_{st}}$ and in Python, for a given graph object, we could use the excellent networkx package to calculate it like so:

import networkx as nx                 # import networkx
G = nx.read_adjlist("graph.adjlist")  # read in a graph
nodes_betweenness_centrality = nx.betweenness_centrality(G)  # returns a dictionary
for node_betweenness_centrality in nodes_betweenness_centrality:
    print(node_betweenness_centrality)
2014-09-02 21:14:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.189259871840477, "perplexity": 1642.0777470972855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535922871.14/warc/CC-MAIN-20140901014522-00011-ip-10-180-136-8.ec2.internal.warc.gz"}
http://physics.stackexchange.com/tags/aerodynamics/new
Tag Info New answers tagged aerodynamics 1 There are two forces acting on your piece of paper: 1) the force of gravity, pulling both down with the same force $F_g = m\cdot g$; 2) drag force due to the air. In general, drag force is proportional to the projected area of the object. For regularly shaped objects (like a sphere) the drag is usually expressed as $$F_{drag} = \frac12 \rho v^2 A C_D$$ ... 1 While falling, both the sheet of paper and the paper ball experience air resistance. But the surface area of the sheet is much more than that of the spherical ball. And air resistance varies directly with surface area. Hence the sheet experiences more air resistance than the ball and it falls more slowly than the paper ball. 1 Yep, air resistance is proportional to surface area and the velocity squared of the object moving through the air: https://en.wikipedia.org/wiki/Drag_(physics) 0 The short answer is that the hypotheses assumed for the Bernoulli equation are not met for airplanes. (I can't speak for birds since I haven't studied that in detail.) In particular: the air is not incompressible, and energy is not constant - the plane's engines are adding energy to the airflow. That said, it's "close enough for the engineers" - ... 3 The fastest point of sail depends on the boat (both its hull shape and its sail plan), the wind strength, and the sea state. In general, a beam reach is not the fastest point of sail. For instance, in very light wind some boats will go fastest on a close reach due to the increased apparent wind from going toward the wind. For boats that sail faster than ... 3 The main phenomena which limit land vehicle speed are aerodynamic drag, lack of stability, and lack of power. Drag increases (approximately) with the frontal area of the vehicle. Stability increases with the weight, length and width of the vehicle. Finally, power increases (approximately) with the volume of the vehicle. Let's say you keep the shape of the ... 0 I'm late to the party here and I think the top vote-getters (Skliwiz, niboz) have adequately answered it, but I'll give my two cents anyway: There are several ways to explain how an airplane flies. Some are more detailed than others, and unfortunately most popular explanations get it wrong. Here are some explanations that are useful, depending on the ... 4 Bernoulli's Principle is one very small piece of a large mathematical theory that explains lift. Unfortunately, most attempts to explain lift using Bernoulli's principle without the overall mathematical context (i.e. vector calculus, partial differential equations, boundary conditions, the Navier-Stokes equation, etc.) are either so confusing that few ... 4 According to Sighard Hoerner's Fluid Dynamic Drag, this would be the half-sphere with the open side exposed to the wind. Its drag coefficient is 1.42. A rod with a hemispherical cross section will even have a drag coefficient of 2.3 (right column in the graph below). If you restrict the competition to solid objects, the half sphere still wins with a drag ... Top 50 recent answers are included
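The drag equation quoted in the first answer can be evaluated directly. Below is a minimal Python sketch (illustrative only; the density, speed, area, and drag coefficient are assumed example values, not taken from the answers):

```python
def drag_force(rho, v, A, C_D):
    """F_drag = (1/2) * rho * v^2 * A * C_D, as in the equation above."""
    return 0.5 * rho * v**2 * A * C_D

# Assumed example: air at sea level, an A4-sized sheet falling at 1 m/s.
rho = 1.225        # air density in kg/m^3
v = 1.0            # speed in m/s
A = 0.210 * 0.297  # projected area of an A4 sheet in m^2
C_D = 1.28         # rough drag coefficient for a flat plate (assumed)
print(f"F_drag = {drag_force(rho, v, A, C_D):.4f} N")  # ~0.0489 N
```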
2015-09-03 19:31:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7282636761665344, "perplexity": 570.7806390228029}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645323734.78/warc/CC-MAIN-20150827031523-00241-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.cryptologie.net/home/all/25/
Hey! I'm David, a security consultant at Cryptography Services, the crypto team of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting. # Hacking Week ## posted February 2014 A teacher from my uni (who was teaching Programming last semester) is organizing a Hacking Week next week. Sign-ups are still possible here: http://hackingweek.fr/contestant/list/ It should be a Capture The Flag kind of contest. It should be interesting, although I'm going to ski with some friends so I won't be able to be really into it... # Bitpay launches Bitcore ## posted February 2014 Bitcore seems to be an open-source Node application that lets you deal with the Bitcoin protocol easily (they give as an example a function to validate a Bitcoin address). # Coinbase organizing a hackathon ## posted January 2014 So this guy owned @N on Twitter and had his account extorted through a phishing attack. The story is well written and you should read it here: https://medium.com/p/24eb09e026dd but for a tl;dr, the attacker called his PayPal account to ask them for his credit card's last 4 digits. Then he called GoDaddy to ask them to reset the password. They only asked him for the first 2 digits and the last 4. The attacker just had to guess the first 2 digits (and he did it on the first try; he could have kept calling and trying otherwise). Now that he had @N's domain name, he could see his emails. He took over @N's Facebook account and started mailing him "threats". It's pretty crazy how easy phishing is. # Initial Permutations in DES ## posted January 2014 I have to code a whitebox using DES encryption in a class. Which is pretty cool (I would have preferred doing it with AES but the other group got tails and we got heads). Here is where the Stanford course I passed on Coursera shines. The explanation of DES in it is brilliant. I was wondering about the initial and final permutations that occur in the algorithm though, and Dan Boneh doesn't really talk about it besides saying it's not for cryptographic purposes. I found a solution on a new sub-StackOverflow dedicated to Cryptography: http://crypto.stackexchange.com/questions/3/what-are-the-benefits-of-the-two-permutation-tables-in-des # VPN hacked and server used to mine bitcoins ## posted January 2014 That kind of stuff happens and it's always pretty hard to know it happened and how it happened. Here's an article about a guy who doesn't seem to know much about security but does a fine job finding out what happened to him and what he can do to avoid future hacks.
2017-11-24 18:24:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1815951019525528, "perplexity": 2907.5285474095026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808742.58/warc/CC-MAIN-20171124180349-20171124200349-00534.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-3-differentiation-3-2-the-derivative-as-a-function-exercises-page-116/86
## Calculus (3rd Edition) $$\frac{c}{n}$$ We find the derivative: $$f^{\prime}(x)=nx^{n-1}$$ Then at $x=c$, $m=f^{\prime}(c)=nc^{n-1}$. Hence, the tangent line is: \begin{aligned} \frac{y-y_{1}}{x-x_{1}} &=m \\ \frac{y-c^n}{x-c} &=nc^{n-1} \\ y &=nc^{n-1} (x-c)+c^n \end{aligned} Setting $y=0$, the tangent line intersects the $x$-axis at $x=c-\frac{c}{n}$, so $Q$ has the coordinates $(c-\frac{c}{n},0)$, $R$ has coordinates $(c,0)$, and the subtangent is $$c-\left(c-\frac{c}{n}\right)=\frac{c}{n}$$
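The closed form can also be checked symbolically. A small sympy sketch (illustrative, not part of the textbook solution):

```python
import sympy as sp

x, c, n = sp.symbols('x c n', positive=True)
f = x**n
m = sp.diff(f, x).subs(x, c)            # slope at x = c: n*c**(n-1)
tangent = m * (x - c) + c**n            # tangent line through (c, c**n)
x0 = sp.solve(sp.Eq(tangent, 0), x)[0]  # x-intercept of the tangent
print(sp.simplify(c - x0))              # subtangent: c/n
```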
2021-05-06 15:11:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1450.0628507099934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00535.warc.gz"}
https://math.stackexchange.com/questions/310349/real-analysis-cauchy-sequence-question
# Real Analysis Cauchy Sequence Question So I've been working on the question below, and I have some questions in regards to the validity of my answer. Let $(a_n)$ be a sequence such that $$\lim_{N \to \infty} \sum_{n=1}^{N} |a_n - a_{n+1}| < \infty .$$ Show that $(a_n)$ is Cauchy. I make the claim that the distance between the terms of $(a_n)$ must approach zero. As such, for every $\epsilon > 0$, there must be an integer $N$ such that $$|a_m - a_n| < \epsilon$$ for all $m,n \geq N$. That is, the sequence is Cauchy. To show this, assume that the distances between the terms of $(a_n)$ do not approach zero. Let $$a=\min \left \{ |a_1 - a_2|,|a_2 - a_3|,...,|a_N - a_{N+1}|,... \right \}.$$ Then we have $a \neq 0$. Observe that $$\lim_{N \to \infty} Na \leq \lim_{N \to \infty} \sum_{n=1}^{N} |a_{n}-a_{n+1}| < \infty,$$ which is a contradiction, as $$\lim_{N \to \infty} Na= \infty$$ for any $a \neq 0$. Thus, we must have $a=0$, and the distance between the terms of $(a_n)$ must approach zero, and as such the sequence is Cauchy. I am unsure about setting $a= \min \{...\}$. Any input or comments about my answer would be appreciated. • What if you change $a_2$ to be the same as $a_1$, keeping all other terms the same? The convergence properties will not change, but $a$ will be 0. – Arin Chaudhuri Feb 21 '13 at 17:20 Your $a$ could be $0$, so this does not work, even if the idea is great. Take $\epsilon>0$. Since the series converges, there exists $N$ such that $$\sum_{n=N}^{+\infty} |a_{n+1}-a_n|\leq \epsilon.$$ Now for all $N\leq k<l$, we have $$|a_k-a_l|=\left|\sum_{n=k}^{l-1}(a_{n+1}-a_n)\right|\leq \sum_{n=k}^{l-1}|a_{n+1}-a_n|\leq \sum_{n=N}^{+\infty}|a_{n+1}-a_n|\leq \epsilon.$$ So the sequence $(a_n)$ is indeed Cauchy.
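A quick numeric illustration of the accepted telescoping argument (an illustrative sketch, not part of the original thread): pick a sequence with summable increments and verify that the sum of the increments between two indices bounds the distance between the corresponding terms:

```python
# a_n built from summable increments: |a_{n+1} - a_n| = 1/2^{n+1}
a = [0.0]
for j in range(1, 40):
    a.append(a[-1] + (-1) ** j / 2 ** j)

k, l = 10, 30
# Triangle inequality over the telescoping sum a_l - a_k = sum (a_{n+1} - a_n)
bound = sum(abs(a[n + 1] - a[n]) for n in range(k, l))
print(abs(a[k] - a[l]) <= bound + 1e-15)  # True
```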
2019-10-20 18:38:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934827446937561, "perplexity": 109.97726524138221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00436.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/consider-following-drequency-response-os-system-hw-w-20-jw-10-2-jw-jw-100-find-plot-straig-q2721524
Consider the following frequency response of a system: H(w) = 20(jw + 10)^2 / [jw(jw + 100)]. a) Find and plot the straight-line approximations to the amplitude- and phase-response Bode plots. b) Sketch the approximate amplitude-response Bode plot of this system using the straight-line approximation and approximate amplitude values at the break and/or peak frequencies. c) Sketch the approximate phase-response Bode plot of this system using the straight-line approximation and a few calculated points.
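The straight-line approximations can be checked against an exact Bode computation. A minimal sketch using scipy (not part of the original question): with s = jw, the numerator 20(s + 10)^2 expands to 20s^2 + 400s + 2000 and the denominator s(s + 100) to s^2 + 100s.

```python
import numpy as np
from scipy import signal

# H(s) = 20*(s + 10)^2 / (s * (s + 100))
system = signal.TransferFunction([20, 400, 2000], [1, 100, 0])
w = np.logspace(-1, 4, 500)             # rad/s, spanning the breaks at 10 and 100
w, mag, phase = signal.bode(system, w)  # mag in dB, phase in degrees
print(f"{mag[0]:.1f} dB, {phase[0]:.1f} deg at w = {w[0]} rad/s")
```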
2015-07-30 07:53:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9753930568695068, "perplexity": 5010.903167006489}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987135.9/warc/CC-MAIN-20150728002307-00274-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/an-lr-circuit-battery-connected-t-0-which-following-quantities-not-zero-just-after-connection-induced-emf-current_69398
# An LR Circuit with a Battery is Connected at t = 0. Which of the Following Quantities is Not Zero Just After the Connection? - Physics MCQ An LR circuit with a battery is connected at t = 0. Which of the following quantities is not zero just after the connection? #### Options • Current in the circuit • Magnetic field energy in the inductor • Power delivered by the battery • Emf induced in the inductor #### Solution Emf induced in the inductor At time t = 0, the current in the L-R circuit is zero. The magnetic field energy is given by U = (1/2)Li^2; as the current is zero, the magnetic field energy will also be zero. Thus, the power delivered by the battery will also be zero. As the LR circuit is connected to the battery at t = 0, at this time the current is on the verge of starting to grow in the circuit. So, there will be an induced emf in the inductor at the same time to oppose this growing current. Concept: Induced Emf and Current Is there an error in this question or solution? #### APPEARS IN HC Verma Class 11, Class 12 Concepts of Physics Vol. 2 Chapter 16 Electromagnetic Induction MCQ | Q 7 | Page 305
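To make the answer quantitative (a standard LR-circuit result, not part of the original solution): for a battery of emf $\varepsilon$ in series with resistance $R$ and inductance $L$, the current grows as $i(t) = \frac{\varepsilon}{R}\left(1 - e^{-Rt/L}\right)$. At $t = 0$ this gives $i = 0$, stored energy $U = \frac{1}{2}Li^2 = 0$, and delivered power $P = \varepsilon i = 0$, while the induced emf $L\,\frac{di}{dt} = \varepsilon e^{-Rt/L}$ equals $\varepsilon \neq 0$, confirming the chosen option.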
2021-04-16 02:53:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5567767024040222, "perplexity": 522.1692334124273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088471.40/warc/CC-MAIN-20210416012946-20210416042946-00409.warc.gz"}
http://math.stackexchange.com/questions/17972/probability-of-never-reach-1-for-a-1d-random-walk-of-n-steps
# Probability of never reach -1 for a 1D random walk of n steps On the number line, starting from 0. There is a probability of $p$ of moving 1 unit to the positive direction, and $1-p$ of moving 1 unit to the negative direction. What is the probability the walk never reaches $-1$ after $n$ steps? - Let $P_n$ be the probability that the walk has still not visited $-1$ after $n$ steps. If $n$ is even, we can't be at $-1$, so $P_{n} = P_{n-1}$ for even $n$. Assume from now on that $n = 2m+1$ is odd. For $0 \le r \le m$, let $P_{n,r}$ denote the probability that after $n=2m+1$ steps, the walk is at position $2r+1$ and has still not visited $-1$. We can count the number $N_{n,r}$ of such walks, as follows: $N_{n,r} = T_{n,r} - V_{n,r}$ where $T_{n,r}$ is the total number of walks ending at $2r+1$ (including those that visit $-1$), and $V_{n,r}$ is the number of walks ending at $2r+1$ that do visit $-1$ on the way. A walk that ends at $2r+1$ consists of $m+r+1$ positive steps and $m-r$ negative steps, so $T_{n,r} = \binom{n}{m-r}$. To evaluate $V_{n,r}$, we need the following observation: There is a 1-1 correspondence between walks ending at $2r+1$ that visit $-1$ on the way, and walks ending at $-(2r+3)$. For, given a walk ending at $2r+1$ that visits $-1$, we can reflect the portion of it that occurs after its first visit to $-1$, to obtain a walk that ends at $-(2r+3)$. And vice versa. A walk that ends at $-(2r+3)$ consists of $m-r-1$ positive steps and $m+r+2$ negative steps, so $V_{n,r} = \binom{n}{m-r-1}$, giving $N_{n,r} = \binom{n}{m-r} - \binom{n}{m-r-1}$. (If $r = m$, then we understand the second binomial term as zero.) Thus: $P_{n,r} = p^{m+r+1}q^{m-r}\left(\binom{n}{m-r} - \binom{n}{m-r-1}\right)$ where $q = 1-p$. So the total probability of not visiting $-1$ in $n$ steps is $P_n = \sum_{r=0}^m P_{n,r}$ This is a modification of the binomial distribution. I don't think it can be simplified any further, unless you want to use regularised incomplete beta functions. - Some $P_{n,r}$ is larger than 1. For example, $P_{1,0}=1/p$. How should that be handled? – Chao Xu Jan 18 '11 at 22:16 @Chao Xu: Sorry, misprint. I think it's correct now. – TonyK Jan 19 '11 at 7:16 Furthermore, every $P_n$ is a nondecreasing function of $p$. For a given $p$, the sequence $(P_n)$ is nonincreasing and converging to $0$ if $p\le\frac12$ and to $1-(q/p)$ if $p>\frac12$. – Did Jan 24 '11 at 14:08
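As a sanity check, the closed form for odd n can be compared against simulation. A sketch (illustrative, not from the original thread; requires Python 3.8+ for math.comb):

```python
import random
from math import comb

def p_no_visit_exact(n, p):
    """P_n from the answer, n odd: sum over r of p^(m+r+1) q^(m-r) [C(n, m-r) - C(n, m-r-1)]."""
    m = (n - 1) // 2
    q = 1 - p
    total = 0.0
    for r in range(m + 1):
        paths = comb(n, m - r) - (comb(n, m - r - 1) if r < m else 0)
        total += p ** (m + r + 1) * q ** (m - r) * paths
    return total

def p_no_visit_mc(n, p, trials=200_000):
    hits = 0
    for _ in range(trials):
        pos = 0
        for _ in range(n):
            pos += 1 if random.random() < p else -1
            if pos == -1:
                break
        else:           # executed only if the walk never hit -1
            hits += 1
    return hits / trials

print(p_no_visit_exact(11, 0.5), p_no_visit_mc(11, 0.5))  # ~0.2256 for both
```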
2015-10-13 13:57:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285486340522766, "perplexity": 146.05257739892048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738006925.85/warc/CC-MAIN-20151001222006-00218-ip-10-137-6-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/377636/quick-question-does-sin24x-cos2-4x-equal-1
# Quick question, does $\sin^2(4x) + \cos^2 (4x)$ equal 1? Does $\sin^2(4x) + \cos^2 (4x)=1$? So even $\sin^2 (249023049x) + \cos^2 (249023049x) = 1$? - Yes, it does. In fact, for every $x$ you have this identity $$\cos(x)^2+\sin(x)^2=1$$
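A one-line symbolic confirmation (an illustrative sketch using sympy, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.simplify(sp.sin(4 * x) ** 2 + sp.cos(4 * x) ** 2))  # 1
```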
2014-11-26 00:37:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583175182342529, "perplexity": 680.5830021537873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931004885.2/warc/CC-MAIN-20141125155644-00040-ip-10-235-23-156.ec2.internal.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/30745
# HOT BANDS IN JET-COOLED AND AMBIENT TEMPERATURE SPECTRA OF CHLOROMETHYLENE Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/30745 Files Size Format View abstract.gif 25.54Kb GIF image cmh2006_HCCl.ppt 1.226Mb Microsoft PowerPoint View/Open Slide1.GIF 111.3Kb GIF image Slide2.GIF 99.89Kb GIF image Slide3.GIF 111.1Kb GIF image Slide4.GIF 111.3Kb GIF image Slide5.GIF 106.4Kb GIF image Slide6.GIF 111.8Kb GIF image Slide7.GIF 110.8Kb GIF image Slide8.GIF 110.4Kb GIF image Slide9.GIF 110.9Kb GIF image Slide10.GIF 112.1Kb GIF image Slide11.GIF 92.76Kb GIF image Slide12.GIF 110.9Kb GIF image Slide13.GIF 166.2Kb GIF image Slide14.GIF 110.8Kb GIF image Slide15.GIF 109.8Kb GIF image Slide16.GIF 111.2Kb GIF image Title: HOT BANDS IN JET-COOLED AND AMBIENT TEMPERATURE SPECTRA OF CHLOROMETHYLENE Creators: Wang, Zhong; Bird, Ryan G.; Yu, Hua-Gen; Sears, Trevor J. Issue Date: 2006 Publisher: Ohio State University Abstract: Rotationally resolved spectra of several bands lying to the red of the origin of the $\tilde{A} ^1\negthinspace A ^{\prime \prime} - \tilde{X}^1\negthinspace A ^{\prime}$ band system of chloromethylene, HCCl, were recorded by laser absorption spectroscopy in ambient temperature and jet-cooled samples. The radical was made by excimer laser photolysis of dibromochloromethane, diluted in inert gas, at 193 nm. The jet-cooled sample showed efficient rotational, but less vibrational, cooling. Analysis showed the observed bands originate in the $(v_1,v_2,v_3)$ = $(010)$, $(001)$ and $(011)$ vibrational levels of the ground electronic state of the radical, while the upper state levels involved were $(000)$, $(010)$, $(001)$, and $(011)$. Vibrational energies and rotational constants describing the rotational levels in the lower state vibrational levels were determined by fitting to combination differences. The analysis also resulted in a re-evaluation of the C$-$Cl stretching frequency in the excited state and we find $E^{\prime}_{001}$ = 13206.57 cm$^{-1}$, or 926.17 cm$^{-1}$ above the $\tilde{A} ^1\negthinspace A ^{\prime \prime} (000)$ rotationless level for HC$^{35}$Cl. Scaled ab initio potential energy surfaces for the $\tilde{A}$ and $\tilde{X}$ states were used to compute the transition moment surface and thereby the relative intensities of different vibronic transitions, providing additional support for the assignments and permitting the prediction of the shorter wavelength spectrum. All observed upper state levels showed some degree of perturbation in their rotational energy levels, particularly in $K_a$ = 1, presumably due to coupling with near-resonant vibrationally excited levels of the ground electronic state. Transitions originating in the low-lying $\tilde{a} ^3\negthinspace A^{\prime \prime}$ state were also predicted to occur in the same wavelength region, but could not be identified in the spectra. Description: Author Institution: Department of Chemistry, Brookhaven National Laboratory, Upton, NY 11973-5000 URI: http://hdl.handle.net/1811/30745 Other Identifiers: 2006-TJ-08
2015-03-02 01:11:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.847672164440155, "perplexity": 10133.389668690465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462577.56/warc/CC-MAIN-20150226074102-00072-ip-10-28-5-156.ec2.internal.warc.gz"}
https://helpingwithmath.com/math-calculators/slope-calculator/
Home » Math Calculators » Slope Calculator # Slope Calculator Enter Information Results Fill the calculator form and click on the Calculate button to get the result here Slope (m) 0 Angle (deg) 0 ΔX 0 ΔY 0 Distance 0 ## What is a straight line? A straight line is a curve such that every point on the line segment joining any two points on it lies on it. Let ax + by + c = 0 be a first degree equation in x, y, where a, b, c are constants. Let P(x1, y1) and Q(x2, y2) be any two points on the curve represented by ax + by + c = 0. Then, ax1 + by1 + c = 0 and ax2 + by2 + c = 0. When we say that the first degree equation in x, y, i.e. ax + by + c = 0, represents a line, it means that all points (x, y) satisfying ax + by + c = 0 lie along a line. Thus, a line is also defined as the locus of a point satisfying the condition ax + by + c = 0, where a, b, c are constants. It should be noted that there are only two unknowns in the equation of a straight line, because the equation of every straight line can be put in the form ax + by + c = 0, where a and b are two unknowns. It is important to note here that x and y are not unknowns. In fact, these are the coordinates of any point on the line and are known as current coordinates. Thus, to determine a line we will need two conditions to determine the two unknowns. ## Slope of a Line The trigonometrical tangent of the angle that a line makes with the positive direction of the x-axis in an anticlockwise sense is called the slope or the gradient of the line. The slope of a line is generally denoted by m. Thus m = tan θ. Since a line parallel to the x-axis makes an angle of 0° with the x-axis, its slope is tan 0° = 0. A line parallel to the y-axis, i.e. a line that is perpendicular to the x-axis, makes an angle of 90° with the x-axis, so its slope is tan $\frac{\pi}{2}$ = ∞. Also, the slope of a line equally inclined with the axes is 1 or -1, as it makes an angle of 45° or 135° with the x-axis. The angle of inclination of a line with the positive direction of the x-axis in an anticlockwise sense always lies between 0° and 180°. A line of positive slope makes an acute angle with the positive direction of the x-axis. A line of zero slope is parallel to the x-axis. A line of negative slope makes an obtuse angle with the positive direction of the x-axis in an anticlockwise direction. ## Slope of a line in Terms of Coordinates of any Two Points on it What would be the slope of a line in terms of the coordinates of any two points on it? Let us find out. Let P(x1, y1) and Q(x2, y2) be two points on a line making an angle θ with the positive direction of the x-axis. Draw PL and QM perpendicular to the x-axis, and PN perpendicular to QM. Then, PN = LM = OM – OL = x2 – x1 and QN = QM – NM = QM – PL = y2 – y1. In △PQN, we have tan θ = $\frac{QN}{PN} = \frac{y_2- y_1}{x_2- x_1}$ Thus, if (x1, y1) and (x2, y2) are the coordinates of any two points on a line, then its slope is given by m = $\frac{y_2- y_1}{x_2- x_1} = \frac{\text{difference in ordinates}}{\text{difference in abscissae}}$ Let us understand it through an example.
Example Find the slope of a line that passes through the points (3, 2) and (-1, 5). Solution We know that the slope of a line passing through two points (x1, y1) and (x2, y2) is given by m = $\frac{y_2- y_1}{x_2- x_1}$ Here, y1 = 2, y2 = 5, x1 = 3, x2 = -1. Substituting these values in the given equation, we have m = $\frac{5-2}{-1-3} = \frac{3}{-4} = -\frac{3}{4}$ Hence, the slope of a line that passes through the points (3, 2) and (-1, 5) is $-\frac{3}{4}$. ## How to use the slope calculator to find the slope of a line? It is quite simple to use the slope calculator to find the slope of a line. The following steps are required for this purpose. Step 1 – The first step is to know what the slope calculator looks like. The following is a snapshot of what the landing page looks like as soon as you click on the "Slope Calculator" link. Step 2 – As we can see above, the slope calculator asks for some information to be entered. This information consists of the coordinates of the line that we wish to find the slope of. Recall that these coordinates are in the form (x1, y1) and (x2, y2). Let us take two coordinates as an example. What would be the slope of a line if we have the coordinates (2, 1) and (-4, -5)? Let us first calculate it using the formula that we discussed above. We will have m = $\frac{y_2- y_1}{x_2- x_1}$ ⇒ m = $\frac{-5-1}{-4-2} = \frac{-6}{-6}$ = 1 Now let us check the same using the slope calculator. As required, we enter the values (2, 1) and (-4, -5) in the boxes provided for (x1, y1) and (x2, y2) respectively. Below is a snapshot of how the values (x1, y1) and (x2, y2) would be entered. Step 3 – Now that we have entered all the required information, our last step is to perform the calculation. For this purpose, we just need to click on the "Calculate" button. As soon as we click on this button, the result appears to the right of the values that we entered in the previous steps. Below is a snapshot of how the screen looks when we click on the "Calculate" button. We can see above that the slope calculator not only provides us with the values of the slope, y2 - y1 and x2 - x1, but also provides a graphical representation of the straight line that passes through these two points. Repeated use of this slope calculator will prove helpful not only in understanding the concept of the slope of a line but also in understanding how a slope is instrumental in defining a line on a graph.
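The same computation the calculator performs fits in a few lines of Python (an illustrative sketch, not the site's actual implementation):

```python
def slope(p1, p2):
    """Slope m = (y2 - y1) / (x2 - x1) of the line through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

print(slope((3, 2), (-1, 5)))   # -0.75, matching the worked example
print(slope((2, 1), (-4, -5)))  # 1.0, matching the calculator walkthrough
```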
2022-10-05 23:59:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7837918996810913, "perplexity": 273.8531750824045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00133.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Ascott.e-marian
# zbMATH — the first resource for mathematics ## Scott, E. Marian Compute Distance To: Author ID: scott.e-marian Published as: Scott, E. M.; Scott, E. Marian Documents Indexed: 7 Publications since 2000, including 2 Books all top 5 #### Co-Authors 1 single-authored 2 Chan, Karen 2 Cocchi, Daniela 2 Saltelli, Andrea 1 Altieri, Leonardo 1 Bowman, Adrian W. 1 Gazioğlu, Suzan 1 Greco, Fedele 1 Illian, Janine B. 1 McMullan, A. 1 Ventrucci, Massimo all top 5 #### Serials 1 Statistics & Probability Letters 1 Computational Statistics 1 Journal of Statistical Computation and Simulation 1 Journal of Applied Statistics 1 Biostatistics 1 Wiley Series in Probability and Statistics #### Fields 7 Statistics (62-XX) 2 General and overarching topics; collections (00-XX) 2 Systems theory; control (93-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Numerical analysis (65-XX) all top 5 #### Cited by 340 Authors 4 Oden, John Tinsley 4 Prieur, Clémentine 3 Banks, Harvey Thomas 3 Dimov, Ivan Todor 3 Georgieva, Raya 3 Gertner, George Zdzislaw 3 Gröhn, Yrjö T. 3 Marrel, Amandine 3 Oakley, Jeremy E. 3 Schukken, Ynte H. 3 Smith, Rebecca L. 3 Sudret, Bruno 3 Xu, Chonggang 2 Almeida, Regina C. 2 Bhunu, Claver Pedzisai 2 Bowong, Samuel 2 Castillo, Carmen 2 Castillo, Enrique F. 2 Chastaing, Gaelle 2 da Veiga, Sébastien 2 Faghihi, Danial 2 Fort, Jean-Claude 2 Gamboa, Fabrice 2 Günther, Michael 2 Hadi, Ali Saad 2 Janon, Alexandre 2 Klein, Thierry E. 2 Kucuk, Ilker 2 Kurths, Jürgen 2 Lima, Ernesto A. B. F. 2 Lu, Zhao 2 Nodet, Maëlle 2 Ostromsky, Tzvetan 2 Prieur, Christophe 2 Rockenfeller, Robert 2 Roustant, Olivier 2 Saltelli, Andrea 2 Sulieman, Hana 2 Tan, Matthias Hwai Yong 2 Zlatev, Zahari 1 Abdel-Khalik, Hany S. 1 Al-Mamun, Mohammad A. 1 Alexanderian, Alen 1 Alvarez, Diego A. 1 Alvarez, Isabelle 1 Andrews, Paul S. 1 Arias-Nicolás, José Pablo 1 Asserin, Olivier 1 Aste, Marco 1 Averill, Matthew G. 1 Azrar, Lahcen 1 Bang, Youngsuk 1 Barbillon, Pierre 1 Beck, James V. 1 Beck, Jan B. 1 Ben Said, Mohamed 1 Benoumechiara, Nazih 1 Bernhard, Sebastian 1 Bilionis, Ilias 1 Blackwell, Ben 1 Blatman, Géraud 1 Bokil, Vrushali A. 1 Boninsegna, Massimo 1 Borer, Elizabeth T. 1 Borgonovo, Emanuele 1 Bousquet, Nicolas 1 Brown, Lawrence David 1 Bui-Thanh, Tan 1 Camargos, Carla Cristina Souza 1 Carraro, Laurent 1 Chen, Chen 1 Cheng, Kai 1 Cho, Heyrim 1 Choi, Jung-Il 1 Choi, Yun Young 1 Christiansen, Marcus Christian 1 Christie, Mike 1 Cleaves, Helen L. 1 Cliff, D. 1 Cohen, Serge 1 Cornford, Dan 1 Costa, Marcos Heil 1 Coville, Jerome 1 Crook, Sharon M. 1 Damblin, Guillaume 1 Dancik, Garrett M. 1 Davidian, Marie 1 Davis, Anthony B. 1 De Lozzo, Matthias 1 De Pauw, Dirk J. W. 1 Degbelo, Auriol 1 DeWitt, Matthew 1 Dorman, Karin S. 1 Durand, Gérard 1 Durrande, Nicolas 1 Dyke, Shirley J. 1 Ehrgott, Matthias 1 Embrechts, Paul 1 Ernstberger, Stacey L. 
1 Fahs, Marwan ...and 240 more Authors all top 5 #### Cited in 67 Serials 10 Journal of Theoretical Biology 7 Journal of Computational Physics 7 Journal of Statistical Computation and Simulation 6 European Journal of Operational Research 5 SIAM/ASA Journal on Uncertainty Quantification 4 Computer Physics Communications 4 Mathematical Biosciences 4 Journal of Statistical Planning and Inference 4 Applied Mathematical Modelling 3 Bulletin of Mathematical Biology 3 Computational Statistics and Data Analysis 2 Artificial Intelligence 2 Mathematics and Computers in Simulation 2 Statistics & Probability Letters 2 Computational Mechanics 2 International Journal of Approximate Reasoning 2 Communications in Statistics. Theory and Methods 2 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 2 SIAM Journal on Scientific Computing 2 Nonlinear Dynamics 2 Mathematical and Computer Modelling of Dynamical Systems 2 Journal of Agricultural, Biological, and Environmental Statistics 1 Computers and Fluids 1 Computer Methods in Applied Mechanics and Engineering 1 International Journal of Control 1 International Journal of Heat and Mass Transfer 1 Journal of the Franklin Institute 1 Journal of Mathematical Analysis and Applications 1 Transport Theory and Statistical Physics 1 Journal of Computational and Applied Mathematics 1 Journal of Multivariate Analysis 1 Quarterly of Applied Mathematics 1 Insurance Mathematics & Economics 1 Numerical Methods for Partial Differential Equations 1 Mathematical and Computer Modelling 1 MCSS. Mathematics of Control, Signals, and Systems 1 Journal of Scientific Computing 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 SIAM Journal on Optimization 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Top 1 Reliable Computing 1 Journal of Nonparametric Statistics 1 Journal of Mathematical Chemistry 1 Differential Equations and Dynamical Systems 1 Soft Computing 1 Journal of Combinatorial Optimization 1 PAA. Pattern Analysis and Applications 1 Mechanism and Machine Theory 1 The Econometrics Journal 1 Journal of the Royal Statistical Society. Series A. Statistics in Society 1 Journal of the Royal Statistical Society. Series B. Statistical Methodology 1 Journal of the Royal Statistical Society. Series C. Applied Statistics 1 Communications in Nonlinear Science and Numerical Simulation 1 Nonlinear Analysis. Real World Applications 1 Nonlinear Analysis. Modelling and Control 1 Archives of Computational Methods in Engineering 1 Journal of Applied Mathematics 1 Central European Journal of Mathematics 1 Acta Numerica 1 Inverse Problems in Science and Engineering 1 AStA. 
Advances in Statistical Analysis 1 Blätter der DGVFM (Deutsche Gesellschaft für Versicherungs- und Finanzmathematik) 1 The Annals of Applied Statistics 1 International Journal of Quality, Statistics, and Reliability 1 Statistics and Computing 1 Dependence Modeling all top 5 #### Cited in 21 Fields 62 Statistics (62-XX) 40 Numerical analysis (65-XX) 34 Biology and other natural sciences (92-XX) 16 Systems theory; control (93-XX) 13 Probability theory and stochastic processes (60-XX) 13 Operations research, mathematical programming (90-XX) 8 Partial differential equations (35-XX) 8 Computer science (68-XX) 8 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 7 Mechanics of deformable solids (74-XX) 6 Ordinary differential equations (34-XX) 4 General and overarching topics; collections (00-XX) 4 Geophysics (86-XX) 3 Dynamical systems and ergodic theory (37-XX) 3 Fluid mechanics (76-XX) 2 Combinatorics (05-XX) 1 Functional analysis (46-XX) 1 Optics, electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Information and communication theory, circuits (94-XX)
2021-06-19 07:27:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26823708415031433, "perplexity": 13271.002823566869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643703.56/warc/CC-MAIN-20210619051239-20210619081239-00184.warc.gz"}
https://zbmath.org/?q=an:0722.62024&format=complete
# zbMATH — the first resource for mathematics Some asymptotic results for robust procedures for testing the constancy of regression models over time. (English) Zbl 0722.62024 For $$X_1,\ldots,X_n$$ independent random variables, where $$X_i$$ is distributed according to $$F(x-c_i'\theta_i)$$, with $$c_i$$ known regression p-vectors and $$\theta_i$$ unknown vector parameters, the problem of robustly testing $$H_0:\theta_1=\ldots=\theta_n$$ against $$H_1:\theta_1=\ldots=\theta_m\neq \theta_{m+1}=\ldots=\theta_n$$ is considered. A recursive M-procedure used for the testing problem is based on a normalization of $\max_{p<k\leq n}\left\{k^{-1/2}\left|\sum^{k}_{j=p+1}\Psi\left[X_j-c_j'\theta_{j-1}(\Psi)\right]\right|\right\},$ where $$\Psi$$ is a score function and $$\theta_{i-1}(\cdot)$$ is the M-estimator based on $$X_1,\ldots,X_{i-1}$$. The author studies the asymptotics of the above statistic under the null hypothesis. ##### MSC: 62F35 Robustness and adaptive procedures (parametric inference) 62F05 Asymptotic properties of parametric tests 62E20 Asymptotic distribution theory in statistics Full Text: ##### References: [1] J. Antoch, M. Hušková: Some M-tests for detection of a change in linear models. Proceedings of the Fourth Prague Symposium on Asymptotic Statistics (P. Mandl, M. Hušková, eds.), Charles University, Prague 1989, pp. 123-136. · Zbl 0705.62030 [2] R. L. Brown, J. Durbin, J. M. Evans: Techniques for testing the constancy of regression relationships over time (with discussion). J. Roy. Statist. Soc. Ser. B 37 (1975), 149-182. · Zbl 0321.62063 [3] M. Csörgö, L. Horváth: Nonparametric methods for changepoint problems. Handbook of Statistics, vol. 7 (P. R. Krishnaiah and C. R. Rao, eds.), North-Holland, Amsterdam 1988, pp. 403-425. [4] D. A. Darling, P. Erdös: A limit theorem for the maximum of normalized sums of independent random variables. Duke Math. J. 23 (1956), 143-155. · Zbl 0070.13806 · doi:10.1215/S0012-7094-56-02313-4 [5] P. Hackl: Testing the Constancy of Regression Models over Time. Vandenhoeck and Ruprecht, Göttingen 1980. · Zbl 0435.62089 [6] M. Hušková: Stochastic approximation type estimators in linear models. Submitted, 1989. [7] M. Hušková: Recursive M-tests for change point problem. Structural Change: Analysis and Forecasting (A. H. Westlund, ed.), School of Economics, Stockholm 1989. [8] M. Hušková, P. K. Sen: Nonparametric tests for shift and change in regression at an unknown time point. The Future of the World Economy: Economic Growth and Structural Changes (P. Hackl, ed.), Springer-Verlag, Berlin-Heidelberg-New York 1989, pp. 73-87. [9] P. R. Krishnaiah, B. Q. Miao: Review estimates about change point. Handbook of Statistics, vol. 7 (P. R. Krishnaiah and C. R. Rao, eds.), North-Holland, Amsterdam 1988, pp. 390-402. [10] V. V. Petrov: Sums of Independent Random Variables. Springer-Verlag, Berlin-Heidelberg-New York 1975. · Zbl 0336.60050 [11] P. K. Sen: Recursive M-tests for the constancy of multivariate regression relationships over time. Sequential Anal. 5 (1984), 191-211. · Zbl 0588.62139 [12] S. Zacks: Survey of classical and Bayesian approaches to the change point problem: fixed sample and sequential procedures of testing and estimation. Recent Advances in Statistics. Papers in Honour of Herman Chernoff's Sixtieth Birthday, Acad. Press, New York, 1983, pp. 245-269. · Zbl 0563.62062 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors.
It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-03-06 14:55:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5309139490127563, "perplexity": 2610.629065850758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00355.warc.gz"}
http://dictionnaire.sensagent.leparisien.fr/Dilatometer/en-en/
Merriam Webster: Dilatometer — Dil`a*tom"e*ter (?), n. [Dilate + -meter.] (Physiol.) An instrument for measuring the dilatation or expansion of a substance, especially of a fluid. # Dilatometer A simple structure of a dilatometer for the measurement of the thermal expansion of liquids and solids A dilatometer is a scientific instrument that measures volume changes caused by a physical or chemical process. A familiar application of a dilatometer is the mercury-in-glass thermometer, in which the change in volume of the liquid column is read from a graduated scale. Because mercury has a fairly constant rate of expansion over normal temperature ranges, the volume changes are directly related to temperature. ## Applications Dilatometers have been used in the fabrication of metallic alloys, compressed and sintered refractory compounds, glasses, ceramic products, composite materials, and plastics.[1] Dilatometry is also used to monitor the progress of chemical reactions, particularly those displaying a substantial molar volume change (e.g., polymerisation). A specific example is the rate of phase changes.[2] Another common application of a dilatometer is the measurement of thermal expansion. The thermal expansivity is defined as: $\alpha = \frac{1}{V} \left(\frac{\partial V}{\partial T} \right)_{P}$ and is an important engineering parameter. ## Types There are a number of dilatometer types: • Capacity dilatometers use a parallel-plate capacitor with one stationary plate and one moveable plate. When the sample length changes, it moves the moveable plate, which changes the gap between the plates. The capacitance is inversely proportional to the gap. Changes in length of 100 picometres can be detected.[3] • Connecting rod (push rod) dilatometers: the sample to be examined is placed in the furnace. A connecting rod transfers the thermal expansion to a strain gauge, which measures the shift. Since the measuring system (the connecting rod) is exposed to the same temperature as the sample and thereby likewise expands, one obtains a relative value, which must be converted afterwards. Matched low-expansion materials and differential constructions can be used to minimize the influence of connecting rod expansion.[4] • High resolution laser dilatometers: the highest resolution and absolute accuracy are possible with a Michelson-interferometer-type laser dilatometer. Resolution goes up to picometres. In addition, the interference measurement principle allows much higher accuracy, and it is an absolute measurement technique with no need for calibration.[5] • Optical dilatometers: an optical dilatometer measures dimension variations of a specimen heated at temperatures that generally range from 25 to 1400 °C. The optical dilatometer allows the monitoring of a material's expansions and contractions using a non-contact method: an optical group connected to a digital camera captures the images of the expanding/contracting specimen as a function of temperature with a resolution of about ±70 micrometres per pixel.[6] As the system heats the material and measures its longitudinal/vertical movements without any contact between instrument and specimen, it is possible to analyse the most ductile materials, such as polymers, as well as the most fragile, such as incoherent ceramic powders for sintering processes.
For simpler measurements in a temperature range from 0 to 100 °C, water is heated up and flows over or around the sample. If the linear coefficient of expansion of a metal is to be measured, hot water is run through a pipe made from the metal. The pipe warms up to the temperature of the water and the relative expansion can be determined as a function of the water temperature. For the measurement of the volumetric expansion of liquids, one takes a large glass container filled with water and an expansion tank (a glass container with an accurate volume scale) holding the sample liquid. If one heats the water up, the sample liquid expands and the volume change is read. However, the expansion of the sample container must also be taken into consideration. The expansion coefficient of gases cannot be measured with a dilatometer, since the pressure plays a role here. For such measurements a gas thermometer is more suitable. Dilatometers often include a mechanism for controlling temperature. This may be a furnace for measurements at elevated temperatures (temperatures to 2000 °C), or a cryostat for measurements at temperatures below room temperature. Metallurgical applications often involve sophisticated temperature controls capable of applying precise temperature-time profiles for heating and quenching the sample.[7][8] ## References 1. ^ Hans Lehmann, Gatzke: Dilatometrie and differential thermal analysis for the evaluation of processes, 1956. 2. ^ Kastle, J. H.; Kelley, W. P. (July 1904). "On the Rate of Crystallization of Plastic Sulphur". American Chemical Journal 32: 483-503. 3. ^ J. J. Neumeier, R. K. Bollinger, G. E. Timmins, C. R. Lane, R. D. Krogstad, and J. Macaluso, "Capacitive-based dilatometer cell constructed of fused quartz for measuring the thermal expansion of solids", Review of Scientific Instruments 79, 033903 (2008). 4. ^ Theta Industries http://www.theta-us.com/dil/dil1.html 5. ^ C. Linseis: The next step in dilatometry, invention and use of the Linseis Laser Dilatometer, Linseis Messgeraete GmbH, Selb (Germany). 6. ^ M. Paganelli: The Non-contact Optical Dilatometer designed for the behaviour of Ceramic Raw Materials, Expert System Solutions S.r.l., Modena (Italy). 7. ^ http://www.theta-us.com/quench/QuenchInst.html 8. ^ http://www.dilatometer.com/l78_rita_quenching_dilatometer.html
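For the common linear case used in the pipe example above, the expansion can be computed directly. A minimal Python sketch (illustrative; the aluminium coefficient is an assumed handbook value, not from the article):

```python
def delta_length(alpha, L0, dT):
    """Linear thermal expansion: dL = alpha * L0 * dT (small-dT approximation)."""
    return alpha * L0 * dT

alpha = 23e-6  # 1/K, typical value for aluminium (assumed)
L0 = 1.0       # initial length in m
dT = 50.0      # temperature rise in K
print(f"dL = {delta_length(alpha, L0, dT) * 1e3:.2f} mm")  # ~1.15 mm
```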
2020-09-30 06:51:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.523456335067749, "perplexity": 10047.862736888277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402118004.92/warc/CC-MAIN-20200930044533-20200930074533-00088.warc.gz"}
https://math.stackexchange.com/questions/1811349/arithmetic-mean-and-geometric-mean?noredirect=1
# Arithmetic Mean and Geometric Mean [duplicate]

If $A.M$ and $G.M$ are the Arithmetic Mean and Geometric Mean respectively, then prove that $A.M \ge G.M$. My Attempt: Let $a$ and $b$ be any two positive real numbers. Then: $$A.M=\frac{a+b}{2}$$ $$G.M =\sqrt{ab}$$ Now how can we show that $A.M.\ge G.M.$? Please help me.

## marked as duplicate by Yuriy S, colormegone, Ramiro, Daniel W. Farlow, MCT, Jun 3 '16 at 23:21

$$0\leq\left(\sqrt{a}-\sqrt{b}\right)^{2}=a+b-2\sqrt{ab}.$$

Let $AF=a, FB=b$. We construct the semicircle with diameter $AB$ (in the usual figure for this proof, $DE$ is a radius and $CF$ is the perpendicular from the semicircle to $AB$ at $F$). Then $$DE=\frac{a+b}2; \quad CF=\sqrt{ab}$$ $$DE\ge CF \Rightarrow \frac{a+b}2\ge\sqrt{ab}$$

Extending Marco Cantarini's answer: $$0\leq\left(\sqrt{a}-\sqrt{b}\right)^{2}=a+b-2\sqrt{ab} \;\Rightarrow\; 0\leq a+b-2\sqrt{ab} \;\Rightarrow\; 2\sqrt{ab}\leq a+b \;\Rightarrow\; \sqrt{ab}\leq\frac{a+b}{2} \;\Rightarrow\; G.M.\leq A.M.$$
2019-07-24 02:36:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8991050720214844, "perplexity": 574.842279331231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530250.98/warc/CC-MAIN-20190724020454-20190724042454-00285.warc.gz"}
https://msp.org/gt/2020/24-1/gt-v24-n1-p01-p.pdf
#### Volume 24, issue 1 (2020)

Geometry & Topology, ISSN (electronic): 1364-0380, ISSN (print): 1465-3060

Coalgebraic formal curve spectra and spectral jet spaces

### Eric Peterson

Geometry & Topology 24 (2020) 1–47

##### Abstract

We import into homotopy theory the algebrogeometric construction of the cotangent space of a geometric point on a scheme. Specializing to the category of spectra local to a Morava $K$–theory of height $d$, we show that this can be used to produce a choice-free model of the determinantal sphere as well as an efficient Picard-graded cellular decomposition of $K(\mathbb{Z}_p, d+1)$. Coupling these ideas to work of Westerland, we give a “Snaith's theorem” for the Iwasawa extension of the $K(d)$–local sphere.

##### Keywords

chromatic homotopy, formal group, Morava $E$–theory, determinantal sphere, inverse limit, comodule

##### Mathematical Subject Classification 2010

Primary: 55N22 Secondary: 55P20, 55P60

##### Publication

Revised: 8 June 2019 Accepted: 13 July 2019 Published: 25 March 2020 Proposed: Mark Behrens Seconded: Stefan Schwede, Haynes R Miller

##### Authors

Eric Peterson, Department of Mathematics, Harvard University, Cambridge, MA, United States, http://chromotopy.org
2023-02-05 07:55:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21274136006832123, "perplexity": 6075.372712738721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00042.warc.gz"}
https://engineer.john-whittington.co.uk/2012/09/macbook-core-duo-goes-solid-state/
# MacBook Core Duo Goes Solid State

September 5, 2012

My MacBook is the 2006 original; the white Core Duo 32-bit. I got it upon starting University, and that ended up taking six years. Amazingly, it is still going strong, and whilst I want a nice retina MBP, it would be truly frivolous given how well this one still runs. Over the years I have given it a number of upgrades: 70GB > 500GB HDD, 512MB > 2GB RAM (the max for a Core Duo), a new battery and a complimentary top deck from Apple (long story). Now I was turning to an SSD.

Computer technology has somewhat stagnated in my opinion, from the days when hardware was obsolete within a year and I constantly had the side off my PC. Unless you're looking to run the newest games at excessive FPSs (and many of those are using > 6 year old engines!), optimal day-to-day use was achieved back when I bought my MacBook. I think this is confirmed by the fact that the technology race is now for size and power efficiency, rather than GHz. The only thing holding my MacBook back now is that it is 32-bit, and so software is going to make it redundant: it can't run Apple's latest OS, Mountain Lion.

Anyway, the purpose of this post: my experience with solid state drives (SSD). With their prices tumbling, and a desire to get more space on the MacBook to transition over to Arch now that Apple had abandoned me, I went for a Samsung 830 256GB. I was indecisive about the upgrade, since I was shelling out £130 on old hardware and didn't really need it. The difference is incredible though, and well worth the money. I hadn't realised how much of a bottleneck the hard disk is, but I guess with everything else at its peak it makes sense. It's honestly like a brand new laptop, and it isn't even a fresh install! (I used Carbon Copy Cloner.)

## The benefits:

1. Power Consumption: Without a platter to spin up, power consumption has dropped and battery life has increased. In order to maximise this with the dual SSD/HDD setup, use this command to shut the disk down after 1 min of inactivity: pmset -a disksleep 1.
2. Silence: Again, no moving parts means my MacBook is silent when not doing anything intense (you can't hear the fan).
3. More Space: The SSD is smaller than my old disk, but I've now stuck that in the optical bay (below).

## Swapping the Optical Drive for a HDD

Since I needed more than the 256GB SSD and am not a fan of lugging external disks around, I wanted to swap my useless optical drive for my old HDD. I hoped the optical drive was SATA and I could just duct-tape the thing in; unfortunately, these MacBooks use PATA optical drives (PITA!). You can buy special MacBook HDD/SSD caddies for $50, but to my eyes the optical drive was an off-the-shelf laptop job. Sure enough, I was right, and a search on eBay found me a universal PATA-SATA caddy (the MacBook uses 9.5mm).

So if your computer is feeling sluggish, I can wholeheartedly recommend an SSD. I think it's taken over as the best upgrade now that RAM comes in copious quantities.
2021-07-28 22:36:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17447873950004578, "perplexity": 3476.496363580846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00084.warc.gz"}
https://litefoote.com/tag/gpo/
# Finding Orphaned GPO Folders with PowerShell

Over years and years of working in AD, the SYSVOL folders occasionally get out of sync with the actual GPOs. The following script will return all folders in sysvol\policies that no longer have a corresponding GPO. **Please be sure to back up folders before taking any action based on this**

```
#Initial Source: https://4sysops.com/archives/find-orphaned-active-directory-gpos-in-the-sysvol-share-with-powershell/
function Get-OrphanedGPOs {
    [CmdletBinding()]
    param (
        [parameter(Mandatory=$true,ValueFromPipelineByPropertyName)]
        [string]$Domain
    )
    begin {
        $orphaned = @()
    }
    process {
        Write-Verbose "Domain: $Domain"
        # Get all GPO GUIDs, formatted the same way as the SYSVOL folder names: {GUID}
        $gpos = @(Get-GPO -All -Domain $Domain |
            Select-Object @{ n = 'GUID'; e = { '{' + $_.Id.ToString().ToUpper() + '}' } } |
            Select-Object -ExpandProperty GUID)
        Write-Verbose "GPOs: $($gpos.Count)"
        # Path to the policies folder in SYSVOL
        $polPath = "\\$Domain\SYSVOL\$Domain\Policies\"
        Write-Verbose "Policy Path: $polPath"
        # Get all folders in the policy path
        $folders = Get-ChildItem $polPath -Exclude 'PolicyDefinitions'
        Write-Verbose "Folders: $($folders.Count)"
        # Compare and return only the folders that exist without a matching GPO
        foreach ($folder in $folders) {
            if (-not $gpos.Contains($folder.Name)) {
                $orphaned += $folder
            }
        }
        Write-Verbose "Orphaned: $($orphaned.Count)"
        return $orphaned
    }
    end { }
}
```
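Once the function is loaded (for example by dot-sourcing the script), usage might look like `Get-OrphanedGPOs -Domain 'corp.example.com' -Verbose`; the domain name here is a made-up example. The function only reports the orphaned folders; moving or deleting them is left to you, per the backup warning above.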
2019-06-24 23:35:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.943249523639679, "perplexity": 6462.687443183646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999779.19/warc/CC-MAIN-20190624231501-20190625013501-00513.warc.gz"}
https://paradigms.oregonstate.edu/activity/825/
## Activity: Chemical potential and Gibbs distribution

Thermal and Statistical Physics 2020

These notes from the fifth week of Thermal and Statistical Physics cover the grand canonical ensemble. They include several small group activities. Here is the instructor's guide, week five.

## Week 5: Chemical potential and Gibbs distribution

This week we'll be looking at scenarios where the number of particles in a system changes. We could technically always manage to solve problems without such a system, but allowing $N$ to change is often a lot easier, just as letting the energy change made things easier. In both cases, we enable ourselves to consider a smaller system, which tends to be both conceptually and mathematically simpler.

Small white boards (3 minutes) Talk with your neighbor for a moment about how you expect the density of the atmosphere to vary with altitude.

#### The atmosphere

Let's talk about the atmosphere for a moment. Each atom in the atmosphere has a potential energy. We can solve this problem using the canonical ensemble as we have learned. We will consider just one atom, but now with gravitational potential energy as well as kinetic energy. This time around we'll do this classically rather than quantum mechanically. We can work out the probability of this atom having any particular momentum and position. \begin{align} P_1(\vec p, \vec r) &= \frac{e^{-\beta \left(\frac{p^2}{2m} + mgz\right)}}{ Z_1 } \\ &= \frac{e^{-\beta \frac{p^2}{2m} -\beta mgz}}{ Z_1 } \end{align} This tells us that the probability of this atom being at any height drops exponentially with height. If we extend this to many atoms, clearly the density must drop exponentially with height. This week we'll be looking at easier approaches to explain this sort of phenomenon. You can see the obvious fact that potential energy will affect density, and hence pressure. We will be generalizing the idea of potential energy into what is called chemical potential.

### Chemical potential

Imagine for a moment what happens if you allow just two systems to exchange particles as well as energy. Clearly they will exchange particles for a while, and then things will settle down. If we hold them at fixed temperature, their combined Helmholtz free energy will be minimized. This means that the derivative of the Helmholtz free energy with respect to $N$ must be equal on both sides. This defines the chemical potential. \begin{align} \mu &= \left(\frac{\partial F}{\partial N}\right)_{T,V} \end{align} This expands our total differential of the free energy \begin{align} dF &= -SdT -pdV + \mu dN \end{align} which also expands our understanding of the thermodynamic identity \begin{align} dU &= TdS - pdV + \mu dN \end{align} which tells us that the chemical potential is also \begin{align} \mu &= \left(\frac{\partial U}{\partial N}\right)_{S,V} \end{align} The chemical potential expands our set of thermodynamic variables, and allows all sorts of nice excitement. Specifically, we now have three extensive variables that the internal energy depends on, as well as their derivatives, the temperature, pressure, and chemical potential.

Note In general, there is one chemical potential for each kind of particle, thus the word “chemical” in chemical potential. Thus the “three” I discuss is actually a bit flexible.

#### Internal and external

The chemical potential is in fact very much like potential energy.
We can distinguish between external chemical potential, which is basically ordinary potential energy, and internal chemical potential, which is the chemical potential that we compute as a property of a material. We'll do a fair amount of computing of the internal chemical potential this week, but keep in mind that the total chemical potential is what becomes equal in systems that are in equilibrium. The total chemical potential at the top of the atmosphere is equal to the chemical potential at the bottom. If it were not, then atoms would diffuse from one place to the other.

#### Ideal gas chemical potential

Recall the Helmholtz free energy of an ideal gas is given by \begin{align} F &= NF_1 + k_BT \ln N! \\ &= -Nk_BT \ln\left(V\left(\frac{mk_BT}{2\pi\hbar^2}\right)^{\frac32}\right) + k_BT N(\ln N-1) \\ &= -Nk_BT \ln\left(Vn_Q\right) + k_BT N(\ln N-1) \\ &= NkT \ln\left(\frac{N}{V}\frac1{n_Q}\right) - NkT \end{align}

Small groups Find the chemical potential of the ideal gas.

To find the chemical potential, we just need to take a derivative. \begin{align} \mu &= \left(\frac{\partial F}{\partial N}\right)_{V,T} \\ &= k_BT\ln\left(\frac{N}{V}\frac1{n_Q}\right) \\ &= k_BT\ln\left(\frac{n}{n_Q}\right) \end{align} where the number density $n$ is given by $n\equiv N/V$. This equation can be solved to find the density in terms of the chemical potential: \begin{align} n &= n_Q e^{\beta \mu} \end{align} This might remind you of the Boltzmann relation. In fact, it's very closely related to the Boltzmann relation. We do want to keep in mind that the $\mu$ above is the internal chemical potential. The total chemical potential is given by the sum of the internal chemical potential and the external chemical potential, and that total is what is equalized between systems that are in diffusive contact. \begin{align} \mu_{tot} &= \mu_{int} + mgz \\ &= k_BT\ln\left(\frac{n}{n_Q}\right) + mgz \end{align} We can solve for the density now, as a function of position. \begin{align} k_BT\ln\left(\frac{n}{n_Q}\right) &= \mu_{tot}-mgz \\ n &= n_Q e^{\beta (\mu_{tot}-mgz)} = n_Q e^{\beta\mu_{tot}}e^{-\beta mgz} \end{align} This is just telling us the same result we already knew, which is that the density must drop exponentially with height.

#### Interpreting the chemical potential

The chemical potential can be challenging to understand intuitively, for myself as well as for you. The ideal gas expression \begin{align} n &= n_Q e^{\beta\mu} \end{align} can help with this. This tells us that the density increases as we increase the chemical potential. Particles spontaneously flow from high chemical potential to low chemical potential, just like heat flows from high temperature to low. This fits with the idea that at high $\mu$ the density is high, since I expect particles to naturally flow from a high density region to a low density region. The distinction between internal and external chemical potential allows us to reason about systems like the atmosphere. Where the external chemical potential is high (at high altitude), the internal chemical potential must be lower, and there is lower density. This is because particles have already fled the region of high external chemical potential to happier locations closer to the Earth.

### Gibbs factor and sum

Let's consider how we maximize entropy when we allow not just microstates with different energy, but also microstates with different numbers of particles. The problem is the same as the one we dealt with in the first week. We want to maximize the entropy, but need to fix the total probability, the average energy, and now the average number.
\begin{align} \langle N\rangle &= N =\sum_i P_i N_i \\ \langle E\rangle &= U = \sum_i P_i E_i \\ 1 &= \sum_i P_i \end{align} To solve for the probability $P_i$ we will want to maximize the entropy $S=-k\sum_i P_i\ln P_i$ subject to the above constraints. Like what I did the first week of class, we will need to use Lagrange multipliers. The Lagrangian which we want to maximize will look like \begin{multline} \mathcal{L} = -k\sum_i P_i \ln P_i + k\alpha\left(1 - \sum_i P_i\right)\\ + k\beta\left(U - \sum_i P_i E_i\right)\\ + k\gamma\left(N - \sum_i P_i N_i\right) \end{multline} Small groups Solve for the probabilities $P_i$ that maximize this Lagrangian, subject to the above constraints. Eliminate $\alpha$ from the expression for probability, so you will end up with probabilities that depend on the other two Lagrange multipliers, one of which is our usual $\beta$, while the other one we will relate to chemical potential. We maximize $\mathcal{L}$ by setting its derivatives equal to zero. \begin{align} 0 &= -\frac1{k}\frac{\partial \mathcal{L}}{\partial P_i} \\ &= \ln P_i + 1 + \alpha + \beta E_i + \gamma N_i \\ P_i &= e^{-1-\alpha - \beta E_i - \gamma N_i} \end{align} Now as before we'll want to apply the normalization constraint. \begin{align} 1 &= \sum_i P_i \\ &= \sum_i e^{-1-\alpha - \beta E_i - \gamma N_i} \\ &= e^{-1-\alpha} \sum_i e^{-\beta E_i - \gamma N_i} \\ e^{-1-\alpha} &= \frac1{\sum_i e^{-\beta E_i - \gamma N_i}} \end{align} Thus we find that the probability of a given microstate is \begin{align} P_i &= \frac{e^{-\beta E_i - \gamma N_i}}{\mathcal{Z}} \\ \mathcal{Z} &\equiv \sum_i e^{-\beta E_i - \gamma N_i} \end{align} where we will call the new quantity $\mathcal{Z}$ the grand partition function or Gibbs sum. We have already identified $\beta$ as $\frac1{kT}$, but what is this $\gamma$? It is a dimensionless quantity. We expect that $\gamma$ will relate to a derivative of the entropy with respect to $N$ (since it is the Lagrange multiplier for the $N$ constraint). We can figure this out by examining the newly expanded total differential of entropy: \begin{align} dU &= TdS - pdV + \mu dN \\ dS &= \frac1T dU + \frac{p}T dV - \frac\mu{T} dN \end{align} Small groups I'd like you to repeat your first ever homework problem in this class, but now with the $N$-changing twist. Given the above set of probabilities, along with the Gibbs entropy $S = -k\sum P\ln P$, find the total differential of entropy in terms of $dU$ and $dN$, keeping in mind that $V$ is inherently held fixed by holding the energy eigenvalues fixed. Equate this total differential to the $dS$ above to identify $\beta$ and $\gamma$ with thermodynamic quantities. \begin{align} S &= -k\sum_i P_i\ln P_i \\ &= -k \sum_i P_i \ln\left(\frac{e^{-\beta E_i-\gamma N_i}}{\mathcal{Z}}\right) \\ &= -k\sum_i P_i(-\beta E_i - \gamma N_i - \ln\mathcal{Z}) \\ &= k\beta U + k\gamma N + k\ln\mathcal{Z} \end{align} Now we can zap this with $d$ to find its derivatives: \begin{align} dS &= k\beta dU + kUd\beta + k\gamma dN + kNd\gamma + k\frac{d\mathcal{Z}}{\mathcal{Z}} \end{align} Now we just need to find $d\mathcal{Z}$... 
\begin{align} d\mathcal{Z} &= \frac{\partial\mathcal{Z}}{\partial\beta}d\beta + \frac{\partial\mathcal{Z}}{\partial\gamma}d\gamma \\ &= -\sum_i E_ie^{-\beta E_i-\gamma N_i}d\beta - \sum_i N_ie^{-\beta E_i-\gamma N_i}d\gamma \\ &= -U\mathcal{Z}d\beta -N\mathcal{Z}d\gamma \end{align} Putting $dS$ together gives \begin{align} dS &= k\beta dU + k\gamma dN \\ &= \frac1TdU - \frac{\mu}{T}dN \end{align} Thus, we conclude that \begin{align} k\beta &= \frac1T & k\gamma &= -\frac{\mu}{T} \\ \beta &= \frac1{kT} & \gamma &= -\beta\mu \end{align}

#### Actual Gibbs sum (or grand sum)

Putting this interpretation for $\gamma$ into our probabilities we find the Gibbs factor and Gibbs sum (or grand sum or grand partition function) to be: \begin{align} P_j &= \frac{e^{-\beta \left(E_j - \mu N_j\right)}}{\mathcal{Z}} \\ \mathcal{Z} &\equiv \sum_i e^{-\beta(E_i - \mu N_i)} \end{align} where you must keep in mind that the sums are over all microstates (including states with different $N$). We can go back to our expressions for internal energy and number \begin{align} U &= \sum_i P_i E_i \\ &= \frac1{\mathcal{Z}}\sum_i E_i e^{-\beta (E_i -\mu N_i)} \\ N &= \sum_i P_i N_i \\ &= \frac1{\mathcal{Z}}\sum_i N_i e^{-\beta (E_i - \mu N_i)} \end{align} We can now use the derivative trick to relate $U$ and $N$ to the Gibbs sum $\mathcal{Z}$, should we so desire.

Small groups Work out the partial derivative tricks to compute $U$ and $N$ from the grand sum.

Let's start by exploring the derivative with respect to $\beta$, which worked so nicely with the partition function. \begin{align} \frac1{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \beta} &= -\frac1{\mathcal{Z}}\sum_i (E_i-\mu N_i) e^{-\beta(E_i - \mu N_i)} \\ &= -U + \mu N \end{align} Now let's examine a derivative with respect to $\mu$. \begin{align} \frac1{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \mu} &= \frac1{\mathcal{Z}}\sum_i (\beta N_i) e^{-\beta(E_i - \mu N_i)} \\ &= \beta N \end{align} Arranging these to find $N$ and $U$ is not hard.

Small groups Show that \begin{align} \left(\frac{\partial N}{\partial\mu}\right)_{T,V} > 0 \end{align}

\begin{align} N &= \sum_i N_i P_i \\ &= kT \frac1{\mathcal{Z}} \left(\frac{\partial\mathcal{Z}}{\partial \mu}\right)_{\beta} \end{align} So the derivative we seek will be \begin{align} \left(\frac{\partial N}{\partial\mu}\right)_{T,V} &= \sum_i N_i \left(\frac{\partial P_i}{\partial \mu}\right)_{\beta} \\ &= \sum_i N_i \left(\beta N_i P_i - \frac{P_i}{\mathcal{Z}}\left(\frac{\partial\mathcal{Z}}{\partial \mu}\right)_{\beta}\right) \\ &=\sum_i N_i \left(\beta N_i P_i - \beta \langle N\rangle P_i \right) \end{align} We can simplify the notation by expressing things in terms of averages, since we've got sums of $P_i$ times something. \begin{align} &= \beta\left<N(N - \langle N\rangle)\right> \\ &= \beta\left(\left<N^2\right> - \langle N\rangle^2\right) \\ &= \beta\left<\left( N - \langle N\rangle\right)^2\right> \end{align} This is positive, because it is an average of something squared. The last step is a common step when examining variances of distributions, and relies on the fact that $\left<N - \langle N\rangle\right>=0$.

#### Euler's homogeneous function theorem

There is a nice theorem we can use to better understand the chemical potential, and how it relates to the Gibbs free energy.
This involves reasoning about how internal energy changes when all the extensive variables are changed simultaneously, and connects with Euler's homogeneous function theorem. Suppose we have a glass that we will slowly pour water into. We will define our “system” to be all the water in the glass. The glass is open, so the pressure remains constant. Since the water is at room temperature (and let's just say the room humidity is 100%, to avoid thinking about evaporation), the temperature remains constant as well.

Small white boards What is the initial internal energy (and entropy and volume and $N$) of the system? i.e. when there is not yet any water in the glass...

Since these are extensive quantities, they must all be zero when there is no water in the glass.

Small white boards Suppose the water is added at a rate $\frac{dN}{dt}$. Suppose you know the values of $N$, $S$, $V$, and $U$ for a given amount of room temperature water (which we can call $N_0$, $S_0$, etc.). Find the rate of change of these quantities.

Because these are extensive quantities, they must all increase in equal proportion \begin{align} \frac{dV}{dt} &= \frac{V_0}{N_0}\frac{dN}{dt} \\ \frac{dS}{dt} &= \frac{S_0}{N_0}\frac{dN}{dt} \\ \frac{dU}{dt} &= \frac{U_0}{N_0}\frac{dN}{dt} \end{align} This tells us that differential changes to each of these quantities must be related in the same way, for this process of pouring in more identical water. And we can drop the 0 subscript, since the ratio of quantities is the same regardless of how much water we have. \begin{align} dV &= \frac{V_0}{N_0}dN = \frac{V}{N}dN \\ dS &= \frac{S}{N}dN \\ dU &= \frac{U}N dN \end{align} Thus given the thermodynamic identity \begin{align} dU &= TdS - pdV + \mu dN \\ \frac{U}{N} dN &= T\frac{S}{N}dN -p\frac{V}{N}dN +\mu dN \\ U &= TS -pV + \mu N \end{align} This is both crazy and awesome. It feels very counter-intuitive, and you might be wondering why we didn't tell you this way back in Energy and Entropy to save you all this trouble with derivatives. The answer is that it is usually not directly all that helpful, since we now have a closed-form expression for $U$ in terms of six mutually dependent variables! So you can't use this form in order to evaluate derivatives (much). This expression is however very helpful in terms of understanding the chemical potential. Consider the Gibbs free energy: \begin{align} G &\equiv U -TS +pV \\ &= \mu N \end{align} which tells us that the chemical potential is just the Gibbs free energy per particle. If we have several chemical species, this expression just becomes \begin{align} G &= \sum_i \mu_i N_i \end{align} so each chemical potential is a partial Gibbs free energy per molecule. This explains why the chemical potential is seldom discussed in chemistry courses: they spend all their time talking about the Gibbs free energy, which just turns out to be the same thing as the chemical potential.

Side note There is another interesting thing we can do with the relationship that \begin{align} U &= TS-pV+\mu N \end{align} and that involves zapping it with $d$. This tells us that \begin{align} dU &= TdS + SdT - pdV - Vdp + \mu dN + Nd\mu \end{align} which looks downright weird, since it's got twice as many terms as we normally see. This tells us that the extra terms must add to zero: \begin{align} 0 &= SdT - Vdp + Nd\mu \end{align} This relationship (called the Gibbs-Duhem equation) tells us just how $T$, $p$ and $\mu$ must change in order to keep our extensive quantities extensive and our intensive quantities intensive.
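Before moving on to chemistry, here is a numeric sanity check of the small-group result above that $\left(\partial N/\partial\mu\right)_{T,V}>0$. The model is an assumption for illustration only: a single orbital that holds zero or one particle of energy $\varepsilon$, in units where $kT=1$.

```
import numpy as np

def avg_N(mu, eps=1.0, kT=1.0):
    # Grand sum over the two microstates: empty (N=0, E=0) and filled (N=1, E=eps)
    boltz = np.exp(-(eps - mu) / kT)
    Z = 1.0 + boltz
    return boltz / Z              # <N> = sum_i N_i P_i

mus = np.linspace(-2.0, 4.0, 13)
Ns = avg_N(mus)
print(np.all(np.diff(Ns) > 0))    # True: <N> rises monotonically with mu
```

The grand sum here has only two terms, so $\langle N\rangle$ has a closed form, and its monotonic rise with $\mu$ is exactly the positive number-fluctuation result derived above.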
#### Chemistry

Chemical equilibrium is somewhat different from the diffusive equilibrium that we have considered so far. In diffusive equilibrium, two systems can exchange particles, and the two systems at equilibrium must have equal chemical potentials. In chemistry, particles can be turned into other particles, so we have a more complicated scenario, but it still involves changing the number of particles in a system. When a given reaction is in chemical equilibrium, the sum of the chemical potentials of the reactants (weighted by their stoichiometric coefficients) must be equal to the sum of the chemical potentials of the products. An example may help. Consider for instance making water from scratch: \begin{align} 2\text{H}_2 + \text{O}_2 \rightarrow 2\text{H}_2\text{O} \end{align} In this case in chemical equilibrium \begin{align} 2\mu_{\text{H}_2} + \mu_{\text{O}_2} &= 2\mu_{\text{H}_2\text{O}} \end{align} We can take this simple equation, and turn it into an equation involving activities, which is productive if you think of an activity as being something like a concentration (and if you care about equilibrium concentrations): \begin{align} e^{\beta(2\mu_{\text{H}_2\text{O}}-2\mu_{\text{H}_2} - \mu_{\text{O}_2})} &= 1 \\ \frac{\lambda_{\text{H}_2\text{O}}^2}{\lambda_{\text{O}_2}\lambda_{\text{H}_2}^2} &= 1 \end{align} Now this looks sort of like the law of mass action, except that our equilibrium constant is 1. To get to the more familiar law of mass action, we need to introduce (a caricature of) the chemistry version of activity, written with square brackets below. The thing in square brackets is actually a relative activity, not a concentration as is often taught in introductory classes (and was considered correct prior to the late nineteenth century). It is only proportional to concentration to the extent that the substance obeys the ideal gas relationship between chemical potential and concentration. Fortunately, this is satisfied for just about anything at low concentration. For solvents (and dense materials like a solid reactant or product) the chemical potential doesn't (appreciably) change as the reaction proceeds, so it is normally omitted from the mass action equation. When I was taught this in a chemistry class back in the nineties, I was taught that the “concentration” of such a substance was dimensionless and had value 1. Specifically, we define the thing in square brackets as \begin{align} [\text{H}_2\text{O}] &\equiv n^*_{\text{H}_2\text{O}} e^{\beta(\mu_{\text{H}_2\text{O}} - \mu^*_{\text{H}_2\text{O}})} \\ &= n^*_{\text{H}_2\text{O}} \frac{\lambda_{\text{H}_2\text{O}}}{\lambda_{\text{H}_2\text{O}}^*} \end{align} where $n^*$ is a reference concentration, and $\mu^*$ is the chemical potential of the fluid at that reference density.
Using this notation, we can solve for the activity \begin{align} \lambda_{\text{H}_2\text{O}} &= \lambda_{\text{H}_2\text{O}}^* \frac{[\text{H}_2\text{O}]}{n^*_{\text{H}_2\text{O}}} \end{align} So now we can rewrite our weird mass action equation from above \begin{align} \frac{ \left(\lambda_{\text{H}_2\text{O}}^* \frac{[\text{H}_2\text{O}]}{n^*_{\text{H}_2\text{O}}}\right)^2 }{ \left(\lambda_{\text{O}_2}^* \frac{[\text{O}_2]}{n^*_{\text{O}_2}}\right) \left(\lambda_{\text{H}_2}^* \frac{[\text{H}_2]}{n^*_{\text{H}_2}}\right)^2 } &= 1 \end{align} and then we can solve for the equilibrium constant for the reaction \begin{align} \frac{ [\text{H}_2\text{O}]^2 }{ [\text{O}_2][\text{H}_2]^2 } &= \frac{ (n^*_{\text{H}_2\text{O}})^2 }{ n^*_{\text{O}_2} (n^*_{\text{H}_2})^2 } \frac{ \lambda_{\text{O}_2}^* (\lambda_{\text{H}_2}^*)^2 }{ (\lambda_{\text{H}_2\text{O}}^*)^2 } \\ &= \frac{ (n^*_{\text{H}_2\text{O}})^2 }{ n^*_{\text{O}_2} (n^*_{\text{H}_2})^2 } e^{\beta( \mu_{\text{O}_2}^* + 2\mu_{\text{H}_2}^* - 2\mu_{\text{H}_2\text{O}}^* )} \\ &= \frac{ (n^*_{\text{H}_2\text{O}})^2 }{ n^*_{\text{O}_2} (n^*_{\text{H}_2})^2 } e^{-\beta\Delta G^*} \end{align} where at the last step I defined $\Delta G^*$ as the difference in Gibbs free energy between products and reactants, and used the fact that the chemical potential is the Gibbs free energy per particle. This expression for the chemical equilibrium constant is the origin of the intuition that a reaction will go forward if the Gibbs free energy of the products is lower than that of the reactants. I hope you found this little side expedition into chemistry interesting. I find it fascinating where these fundamental chemistry relations come from, and also that the relationship between concentrations arises from an ideal gas approximation! That is why it is only valid in the limit of low concentration, and why the solvent is typically omitted from the equilibrium constant, since its activity is essentially fixed.

Related activities and homework:

• Review of Thermal Physics (Lecture, 30 min.) Thermal and Statistical Physics 2020. These are notes, essentially the equation sheet, from the final review session for Thermal and Statistical Physics.

• Fluctuations in a Fermi gas (Homework; tags: Fermi gas, grand canonical ensemble, statistical mechanics) Thermal and Statistical Physics 2020. (K&K 7.11) Show for a single orbital of a fermion system that \begin{align} \left<(\Delta N)^2\right> = \left<N\right>\left(1-\left<N\right>\right) \end{align} if $\left<N\right>$ is the average number of fermions in that orbital. Notice that the fluctuation vanishes for orbitals with energies far enough from the chemical potential $\mu$ so that $\left<N\right>=1$ or $\left<N\right>=0$.

• Ideal Gas (Lecture, 120 min.) Thermal and Statistical Physics 2020. These notes from week 6 of Thermal and Statistical Physics cover the ideal gas from a grand canonical standpoint starting with the solutions to a particle in a three-dimensional box. They include a number of small group activities.

• Gibbs sum for a two level system (Homework; tags: Gibbs sum, Microstate, Thermal average energy) Thermal and Statistical Physics 2020. 1. Consider a system that may be unoccupied with energy zero, or occupied by one particle in either of two states, one of energy zero and one of energy $\varepsilon$. Find the Gibbs sum for this system in terms of the activity $\lambda\equiv e^{\beta\mu}$.
Note that the system can hold a maximum of one particle. 2. Solve for the thermal average occupancy of the system in terms of $\lambda$. 3. Show that the thermal average occupancy of the state at energy $\varepsilon$ is \begin{align} \langle N(\varepsilon)\rangle = \frac{\lambda e^{-\frac{\varepsilon}{kT}}}{\mathcal{Z}} \end{align} 4. Find an expression for the thermal average energy of the system. 5. Allow the possibility that the orbitals at $0$ and at $\varepsilon$ may each be occupied by one particle at the same time; show that \begin{align} \mathcal{Z} &= 1 + \lambda + \lambda e^{-\frac{\varepsilon}{kT}} + \lambda^2 e^{-\frac{\varepsilon}{kT}} \\ &= (1+\lambda)\left(1+\lambda e^{-\frac{\varepsilon}{kT}}\right) \end{align} Because $\mathcal{Z}$ can be factored as shown, we have in effect two independent systems.

• Active transport (Homework; tags: Active transport, Concentration, Chemical potential) Thermal and Statistical Physics 2020. The concentration of potassium $\text{K}^+$ ions in the internal sap of a plant cell (for example, a fresh water alga) may exceed by a factor of $10^4$ the concentration of $\text{K}^+$ ions in the pond water in which the cell is growing. The chemical potential of the $\text{K}^+$ ions is higher in the sap because their concentration $n$ is higher there. Estimate the difference in chemical potential at $300\text{K}$ and show that it is equivalent to a voltage of $0.24\text{V}$ across the cell wall. Take $\mu$ as for an ideal gas. Because the values of the chemical potential are different, the ions in the cell and in the pond are not in diffusive equilibrium. The plant cell membrane is highly impermeable to the passive leakage of ions through it. Important questions in cell physics include these: How is the high concentration of ions built up within the cell? How is metabolic energy applied to energize the active ion transport? You might wonder why it is even remotely plausible to consider the ions in solution as an ideal gas. The key idea here is that the ideal gas entropy incorporates the entropy due to position dependence, and thus due to concentration. Since concentration is what differs between the cell and the pond, the ideal gas entropy describes this pretty effectively. In contrast to the concentration dependence, the temperature dependence of the ideal gas chemical potential will not be so great.

• Centrifuge (Homework; tags: Centrifugal potential) Thermal and Statistical Physics 2020. A circular cylinder of radius $R$ rotates about the long axis with angular velocity $\omega$. The cylinder contains an ideal gas of atoms of mass $M$ at temperature $T$. Find an expression for the dependence of the concentration $n(r)$ on the radial distance $r$ from the axis, in terms of $n(0)$ on the axis. Take $\mu$ as for an ideal gas.

• Carbon monoxide poisoning (Homework; tags: Equilibrium, Adsorption) Thermal and Statistical Physics 2020. In carbon monoxide poisoning the CO replaces the $\textsf{O}_{2}$ adsorbed on hemoglobin ($\text{Hb}$) molecules in the blood. To show the effect, consider a model for which each adsorption site on a heme may be vacant or may be occupied either with energy $\varepsilon_A$ by one molecule $\textsf{O}_{2}$ or with energy $\varepsilon_B$ by one molecule CO.
Let $N$ fixed heme sites be in equilibrium with $\textsf{O}_{2}$ and CO in the gas phases at concentrations such that the activities are $\lambda(\text{O}_2) = 1\times 10^{-5}$ and $\lambda(\text{CO}) = 1\times 10^{-7}$, all at body temperature $37^\circ\text{C}$. Neglect any spin multiplicity factors. 1. First consider the system in the absence of CO. Evaluate $\varepsilon_A$ such that 90 percent of the $\text{Hb}$ sites are occupied by $\textsf{O}_{2}$. Express the answer in eV per $\textsf{O}_{2}$. 2. Now admit the CO under the specified conditions. Find $\varepsilon_B$ such that only 10% of the Hb sites are occupied by $\textsf{O}_{2}$.

• Phase transformations (Lecture, 120 min.) Thermal and Statistical Physics 2020. These lecture notes from the ninth week of Thermal and Statistical Physics cover phase transformations, the Clausius-Clapeyron relation, mean field theory and more. They include a number of small group activities.

• Ideal gas in two dimensions (Homework; tags: Ideal gas, Entropy, Chemical potential) Thermal and Statistical Physics 2020. 1. Find the chemical potential of an ideal monatomic gas in two dimensions, with $N$ atoms confined to a square of area $A=L^2$. The spin is zero. 2. Find an expression for the energy $U$ of the gas. 3. Find an expression for the entropy $\sigma$. The temperature is $kT$.

• Potential energy of gas in gravitational field (Homework; tags: Potential energy, Heat capacity) Thermal and Statistical Physics 2020. Consider a column of atoms each of mass $M$ at temperature $T$ in a uniform gravitational field $g$. Find the thermal average potential energy per atom. The thermal average kinetic energy is independent of height. Find the total heat capacity per atom. The total heat capacity is the sum of contributions from the kinetic energy and from the potential energy. Take the zero of the gravitational energy at the bottom $h=0$ of the column. Integrate from $h=0$ to $h=\infty$. You may assume the gas is ideal.
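As a final numeric sanity check of the barometric result these notes started from, here is a small sketch; the inputs are assumptions (an isothermal, nitrogen-dominated atmosphere near 300 K). The scale height $kT/mg$ sets how fast the density $n(z)=n(0)\,e^{-mgz/kT}$ falls.

```
k = 1.380649e-23       # J/K, Boltzmann constant
T = 300.0              # K, assumed uniform temperature
m = 28 * 1.6605e-27    # kg, mass of one N2 molecule
g = 9.8                # m/s^2
print(f"scale height: {k * T / (m * g) / 1000:.1f} km")  # ~9 km
```

Roughly 9 km, which is the right order of magnitude for Earth's atmosphere.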
2022-11-30 21:35:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 47, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999809265136719, "perplexity": 737.6867825228829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00676.warc.gz"}
https://library.kiwix.org/datascience.stackexchange.com_en_all_2021-04/A/question/16036.html
## Alternatives to TF-IDF and Cosine Similarity when comparing documents of differing formats

I've been working on a small, personal project which takes a user's job skills and suggests the most ideal career for them based on those skills. I use a database of job listings to achieve this. At the moment, the code works as follows:

1) Process the text of each job listing to extract skills that are mentioned in the listing

2) For each career (e.g. "Data Analyst"), combine the processed text of the job listings for that career into one document

3) Calculate the TF-IDF of each skill within the career documents

After this, I'm not sure which method I should use to rank careers based on a list of a user's skills. The most popular method that I've seen would be to treat the user's skills as a document as well, then to calculate the TF-IDF for the skill document, and use something like cosine similarity to calculate the similarity between the skill document and each career document.

This doesn't seem like the ideal solution to me, since cosine similarity is best used when comparing two documents of the same format. For that matter, TF-IDF doesn't seem like the appropriate metric to apply to the user's skill list at all. For instance, if a user adds additional skills to their list, the TF for each skill will drop. In reality, I don't care what the frequency of the skills is in the user's skill list; I just care that they have those skills (and maybe how well they know those skills).

It seems like a better metric would be to do the following:

1) For each skill that the user has, calculate the TF-IDF of that skill in the career documents

2) For each career, sum the TF-IDF results for all of the user's skills

3) Rank careers based on the above sum

Am I thinking along the right lines here? If so, are there any algorithms that work along these lines, but are more sophisticated than a simple sum? Thanks for the help!

Check out Doc2vec, Gensim has the implementation – Blue482 – 2017-01-03T11:55:15.797

1

Perhaps you could use word embeddings to better represent the distance between certain skills. For instance, "Python" and "R" should be closer together than "Python" and "Time management", since they are both programming languages. The whole idea is that words that appear in the same context should be closer. Once you have these embeddings, you would have a set of skills for the candidate, and sets of skills of various sizes for the jobs. You could then use Earth Mover's Distance to calculate the distance between the sets. This distance measure is rather slow (quadratic time), so it might not scale well if you have many jobs to go through. To deal with the scalability issue, you could perhaps rank the jobs based on how many skills the candidate has in common in the first place, and favor those jobs.

1

A common and simple method to match "documents" is to use TF-IDF weighting, as you have described. However, as I understand your question, you want to rank each career (-document) based on a set of user skills. If you create a "query vector" from the skills, you can multiply the vector with your term-career matrix (with all the tf-idf weights as values). The resulting vector would give you a ranking score per career-document which you can use to pick the top-k careers for the set of "query skills". E.g.
if your query vector $\bar{q}$ consists of zeros and ones, and is of size $1 \times |terms|$, and your term-document matrix $M$ is of size $|terms| \times |documents|$, then $\bar{q} M$ would result in a vector of size $1 \times |documents|$ with elements equal to the sum of every query term's TF-IDF weight per career document. This method of ranking is one of the simplest and many variations exist. The TF-IDF entry on Wikipedia also describes this ranking method briefly. I also found this Q&A on SO about matching documents.

Surprisingly, a simple average of word embeddings is often as good as a weighted average of embeddings done with Tf-Idf weights. – wacax – 2018-03-26T21:38:02.013

0

Use the Jaccard Index. This will very much serve your purpose.

0

You can try using "gensim". I did a similar project with unstructured data. Gensim gave better scores than standard TFIDF. It also ran faster.
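A minimal sketch of the query-vector ranking described in the answer above. Everything here (careers, skills) is made-up example data, and the matrix is laid out documents × terms as scikit-learn produces it, so the product is $M\bar{q}^{T}$ rather than $\bar{q}M$:

```
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

career_docs = {
    "Data Analyst":  "sql python statistics excel visualization sql",
    "Web Developer": "javascript html css python git",
}
vec = TfidfVectorizer()
M = vec.fit_transform(career_docs.values())   # careers x terms, tf-idf weights

user_skills = ["python", "sql"]
q = np.zeros(len(vec.vocabulary_))
for skill in user_skills:                      # 0/1 query vector: has the skill or not
    idx = vec.vocabulary_.get(skill)
    if idx is not None:
        q[idx] = 1.0

scores = M @ q                                 # per-career sum of the skills' tf-idf weights
for career, score in sorted(zip(career_docs, scores), key=lambda t: -t[1]):
    print(career, round(float(score), 3))
```

Note that the query vector is binary, matching the questioner's point: what matters is whether the user has a skill, not how often it appears in their list.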
2021-08-03 02:33:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6550013422966003, "perplexity": 917.5513762383753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154408.7/warc/CC-MAIN-20210802234539-20210803024539-00576.warc.gz"}
https://stats.stackexchange.com/questions/192115/kl-divergence-between-two-products
# KL-divergence between two products Given factorizations of two joint densities $p(x_1,...,x_n)=\prod_{i=1}^n p(x_i\mid \textrm{cond}(x_i))$ and $q(x_1,...,x_n)=\prod_{i=1}^n q(x_i\mid \textrm{cond}(x_i))$, where $\textrm{cond}(\bullet)$ denotes the set of conditioning variables, does the KL-divergence decompose, i.e., does $\textrm{KL}(p\Vert q)= \sum_{i=1}^n \textrm{KL}\left(p(x_i\mid \textrm{cond}(x_i))\ \Vert\ q(x_i\mid \textrm{cond}(x_i))\right)$ hold? • If it makes things easier, you can assume $\textrm{cond}(x_i) \in \{x_1,\ldots,x_n\}$ . More generally, the two factorisations are Bayesian networks, which means that $\textrm{cond}(x_i)$ can be any subset of $\{x_1,\ldots,x_n\}$ so that the induced graph structure is a directed acyclic graph. – ASML Jan 23, 2016 at 14:23 • How do you define $\textrm{KL}\left(p(x_i\mid \textrm{cond}(x_i))\ \Vert\ q(x_i\mid \textrm{cond}(x_i))\right)$ ? The ambiguous part is how you integrate out the $cond(x_i)$ variables in there Feb 3, 2016 at 13:42 Assuming the set of conditioning variables $$\text{cond}_i = \text{cond}(x_i)$$ depends only on the index $$i$$ and not on the value of $$x_i$$, we have \begin{align*} \textrm{KL}(p\Vert q) &= \sum_{i=1}^n \mathbb{E}_{p(x_1, \dots, x_n)} \log \left(\frac{p(x_i\mid \textrm{cond}_i)}{q(x_i\mid \textrm{cond}_i)} \right) \\ &= \sum_{i=1}^n \mathbb{E}_{p(x_i, \text{cond}_i)} \log \left(\frac{p(x_i\mid \textrm{cond}_i)}{q(x_i\mid \textrm{cond}_i)} \right) \\ &= \sum_{i=1}^n \mathbb{E}_{p(\text{cond}_i)} \mathbb{E}_{p(x_i \mid \text{cond}_i)} \log \left(\frac{p(x_i\mid \textrm{cond}_i)}{q(x_i\mid \textrm{cond}_i)} \right) \\ &= \sum_{i=1}^n \mathbb{E}_{p(\text{cond}_i)} \text{KL}\left(p(x_i\mid \textrm{cond}_i) \mid\mid q(x_i\mid \textrm{cond}_i) \right) \\ \end{align*}
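A quick numeric check of this decomposition, with made-up two-variable distributions factored as $p(x_1,x_2)=p(x_1)\,p(x_2\mid x_1)$ (so $\textrm{cond}(x_2)=\{x_1\}$ and $x_1$ has no conditioning variables):

```
import numpy as np

p1 = np.array([0.3, 0.7]); q1 = np.array([0.5, 0.5])       # p(x1), q(x1)
p2g = np.array([[0.2, 0.8], [0.6, 0.4]])                    # p(x2 | x1), rows sum to 1
q2g = np.array([[0.5, 0.5], [0.1, 0.9]])                    # q(x2 | x1)

P = p1[:, None] * p2g                                       # joint p(x1, x2)
Q = q1[:, None] * q2g                                       # joint q(x1, x2)
kl_joint = np.sum(P * np.log(P / Q))

kl1 = np.sum(p1 * np.log(p1 / q1))                          # KL(p(x1) || q(x1))
kl2 = np.sum(p1 * np.sum(p2g * np.log(p2g / q2g), axis=1))  # E_{p(x1)} KL(p(x2|x1) || q(x2|x1))
print(kl_joint, kl1 + kl2)                                  # equal up to floating point
```

The two printed numbers agree, matching the expectation-weighted decomposition derived above.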
2022-07-05 23:16:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1181.697205116656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00617.warc.gz"}
http://math.stackexchange.com/questions/259871/extension-of-differentiation-operator-to-l-20-1
# Extension of differentiation operator to $L_2[0,1]$. I'm studying for my functional analysis exam. We are required to know the proof of the following, but I cannot figure it out. Consider $L_2[0,1]$ with orthonormal basis $(e_n)_{n=-\infty}^\infty$ where $e_n(t) = e^{i 2 \pi nt}$. Let $D$ in $\mathcal{L}(\operatorname{span}\{ e_n \}_{n=-\infty}^\infty)$ denote the operator of differentiation. Let $\mathcal{D} = \{ f \in L_2[0,1] : \sum_{n=-\infty}^\infty |n^2 (f,e_n)|^2 < \infty \}$. Show that there is a unique extension of $D^2$ from $\mathcal{D}$ to $L_2[0,1]$. Moreover, $D^2 : \mathcal{D} \to L_2[0,1]$ has closed graph in $L_2[0,1] \oplus L_2[0,1]$. - Given $f=\sum_{n=-\infty}^{+\infty}(f,e_n)e_n\in\mathcal D$, do you see how to define $D^2$? –  Davide Giraudo Dec 16 '12 at 9:39
2014-03-12 09:15:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9828960299491882, "perplexity": 94.3162616816819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021585790/warc/CC-MAIN-20140305121305-00045-ip-10-183-142-35.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/27578/can-a-transcendental-number-like-e-or-pi-be-compressed-as-not-algorithmical
# Can a transcendental number like $e$ or $\pi$ be compressed as not algorithmically random?

The related and interesting fields of Information Theory, Turing Computability, Kolmogorov Complexity and Algorithmic Information Theory give definitions of algorithmically random numbers. An algorithmically random number is a number (in some encoding, usually binary) for which the shortest program (e.g. using a Turing Machine) to generate the number has the same length (number of bits) as the number itself. In this sense numbers like $\sqrt{e}$ or $\pi$ are not random, since well-known (mathematical) relations exist which in effect function as algorithms for these numbers. However, especially for $e$ and $\pi$ (which are transcendental numbers), it is known that they are defined by infinite power series. For example $e = \sum_{n=0}^\infty \frac{1}{n!}$ So even though the number which is the binary representation of $\sqrt{e}$ is not algorithmically random, a program would (still?) need the description of the (infinite) bits of the (transcendental) number $e$ itself. Can transcendental numbers (really) be compressed? Where is this argument wrong?

UPDATE: Also note the fact that for almost all transcendental numbers, and irrational numbers in general, the frequency of digits is uniform (much like a random sequence). So their Shannon entropy should equal that of a random string; however the Kolmogorov complexity, which is related to Shannon entropy, would be different (as they are not algorithmically random). Thank you

• Why do you think that a TM which generated the first $n$ bits of $\sqrt{e}$ would need all (or any, for that matter) of the bits of $e$? What would be wrong with a program that first generated enough bits of $e$ and then invoked the usual routine to take the square root of that argument? – Rick Decker Jun 10 '14 at 19:30
• @RickDecker hmm, i think i understand what you mean, but this would be approximate computation, no? As another example consider a periodic rational (like 1/3): although it has infinite bits, it can be compressed like this "print 0. and while(true) print 3" – Nikos M. Jun 10 '14 at 19:36
• @RickDecker, another point is if a TM can assume the (expansion) of the number $e$ as known (and thus not requiring it to be embedded in the program), then of course $\sqrt{e}$ is not alg. random. But is this so? – Nikos M. Jun 10 '14 at 19:40
• I'd argue that writing $e$ as a summation is compressing it, and greatly so: it represents with finite information an infinite amount of information. The problem isn't so much compressing $e$ as it is decompressing it; you cannot, by its very nature, get a decompressed representation of $e$ in finite time. Any attempt to do so will be an approximation since we don't have enough time or enough tape to write all the digits down. – Patrick87 Jun 10 '14 at 19:59
• In answer to your second question, how would a TM "assume" the expansion of $e$? Perhaps by having a second (infinite) tape with the bits of $e$? Sorry, but that's not allowed in the standard definition of a TM. Your third question is similar: you can't use an "infinite power series" to generate all of $e$ or $\sqrt{e}$: all you can do with a TM is generate $n$ digits of the number, for $n$ arbitrarily large, but finite. – Rick Decker Jun 11 '14 at 0:09

## 3 Answers

The problem is in your poor definition of "algorithmically random number" as applied to irrational numbers. In particular, "has the same length (number of bits) as the number itself" has no meaning if the number is of unbounded length.
Your Wikipedia link gives better definitions, which don't have this problem. For example (paraphrasing the formatting):

Kolmogorov complexity [...] can be thought of as a lower bound on the algorithmic compressibility of a finite sequence (of characters or binary digits). It assigns to each such sequence $w$ a natural number $K(w)$ that, intuitively, measures the minimum length of a computer program (written in some fixed programming language) that takes no input and will output $w$ when run. Given a natural number $c$ and a sequence $w$, we say that $w$ is $c$-incompressible if $K(w) \geq |w| - c$. An infinite sequence $S$ is Martin-Löf random if and only if there is a constant $c$ such that all of $S$'s finite prefixes are $c$-incompressible.

This is a test failed by $\sqrt{e}$: take the program that generates $\sqrt{e}$ and include in it the length to generate. The resulting program outputs any length-$n$ prefix $w$ using only a constant plus $O(\log n)$ bits, so for large $n$ the prefixes are compressible and no constant $c$ can work.

• yes i used a simpler definition (and provided the links), however even T. M. Cover in Elements of Information Theory, in the chapter on Kolmogorov complexity (available online), has an example of the Kolmogorov complexity of $\sqrt{e}$ specifically, using conditionals (on the number of first n bits), similarly to the 1st comment by Rick Decker. But you miss one point which the comment by Patrick87 touched: the relation of randomness in the algorithmic and probabilistic sense. But nice answer. – Nikos M. Jun 10 '14 at 21:01

Surprisingly, the Kolmogorov complexity of an arbitrary number is known to be uncomputable. So the general form of your question (is an arbitrary number algorithmically compressible?) is undecidable, i.e. the problem of computing the algorithmic complexity of a sequence. This can be proven by reduction from the Halting problem.

Problems that are known to be related to "computable analysis" (the theory of computable reals) (cf. K. Weihrauch, Computable Analysis - An Introduction) are the following:

1. To effectively enumerate all digits of a real number, one needs an infinite-time Turing machine

2. Distinguishing two Turing machines that compute two reals is uncomputable (i.e. the machine equivalence problem is undecidable)

A consequence of 1 and 2 is that the equality operator on Turing machines is not defined (at least in terms of a Choice Axiom). In terms of undecidability, we are typically referring to Omega-like numbers (cf. Chaitin's constant, for which C. Calude et al. surprisingly provided "a" computation - Exact approximations of omega numbers).

Now, $e$ and $\pi$ are computable reals. The Bailey-Borwein-Plouffe formula can even compute the $n$-th digit of $\pi$.

Computability, compressibility and decidability theories refer to metamathematics. Compressibility refers to the program description of a sequence. So for your example, $e$ would be described by 9 symbols ($\sum$, $n$, $=$, $0$, $\infty$, $\ldots$). We are assuming that the program is defined on some Universal Turing Machine, in this case, say, a symbolic algebra system. So in this case, $9 + O(1)$ symbols, since the Universal Turing Machine is considered to be $O(1)$.

The problem of enumerating all digits of $e$ is a problem distinct from the problem of representing $e$ algorithmically. Furthermore, in computer science we only have pseudo-randomness, since any random number generator that is fully described by an algorithm is by definition pseudo-random (in contrast to a hardware random number generator).
• +1, nice, indeed KC is in general not computable (although it can be approximated by Shannon entropy, to which it is related), and it is related in some ways to notions of randomness and tests for randomness in pseudo-random generators (another test of how random a sequence is, is provided by the work of P. Diaconis et al., by studying arbitrary permutations of the sequence and their distribution using a generalised convolution operator). Anyway I plan to post another question related specifically to algorithmic and probabilistic approaches to randomness – Nikos M. Jun 11 '14 at 18:27

• Since you mention a formula for the nth digit of $\pi$ (which I will check out, by the way), let me give a converse example: given a decimal expansion of a number (which is $\pi$), can one infer that it is in fact $\pi$? – Nikos M. Jun 11 '14 at 18:30

• Wolfram (of Mathematica fame) is (in-)famous for supporting a variety of cellular automata which, although extremely simple to describe algorithmically, can produce completely unpredictable (random?) behaviour (not periodic) – Nikos M. Jun 11 '14 at 18:41

• It's not at all surprising that the Kolmogorov complexity of anything is uncomputable. The definition of the Kolmogorov complexity of $X$ is essentially "the shortest program that outputs $X$". Since it's undecidable whether any given program even outputs $X$ (or anything at all!), why would you be surprised that it's undecidable whether any given program is the shortest program that outputs $X$? – David Richerby Oct 24 '14 at 21:28

Your statement

Also note the fact that for transcendental numbers, and irrational numbers in general, the frequency of digits is uniform

is wrong. Look at Liouville's number. It is a transcendental number containing almost all zeros, with a sparse amount of ones added (the nth 1 is at the n! decimal position). The distribution of digits of this number is completely one-sided, containing only 2 different digits, one with a dominating frequency.

• I read that claim differently to you. The statement is quite true "in general". Just like how most real numbers are uncomputable (the computable numbers are countably infinite, but the real numbers are not), most real numbers are normal in the Borel sense. – Pseudonym Jun 11 '14 at 23:57

• Makes sense; I read it as: since transcendentals are a subset of irrationals, the "in general" merely broadened the scope of the fact. The problem with irrationals is that you can't make real generalizations about them due to their lack of real structure. I think that when you look at real numbers, however, the majority will not have even digit frequencies, since that is a limiting property that the majority cannot possibly have. – lPlant Jun 12 '14 at 0:10

• Of course the uniform digit distribution for irrationals holds for almost all of them, so this is the sense in which it is stated in the question. However (as stated) it can lead to misunderstanding if you are not familiar with this property of numbers. Still, is this an answer or a comment? – Nikos M. Jun 12 '14 at 0:48
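A small illustration of the point made in the answers above: a fixed, finite program can emit arbitrarily many digits of $e$ from its series, which is exactly why long prefixes of $e$ are highly compressible. This is a minimal Python sketch (not from the original thread); because the partial sum is truncated, the last printed digit may be off by one.

```python
from fractions import Fraction

def e_digits(n):
    """First n decimal digits of e after the point (last digit may be off by one)."""
    s, k, fact = Fraction(0), 0, 1          # fact == k!
    while fact <= 10 ** (n + 1):            # then the tail of sum 1/k! is < 10**-n
        s += Fraction(1, fact)
        k += 1
        fact *= k
    # Scale to n digits after the decimal point and truncate.
    return str(s.numerator * 10 ** n // s.denominator)

print(e_digits(30))    # 2718281828459045235360287471352
```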
2019-11-15 23:34:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034388065338135, "perplexity": 553.4403759046176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.22/warc/CC-MAIN-20191115222436-20191116010436-00244.warc.gz"}
https://electronics.stackexchange.com/questions/314418/how-to-connect-i2c-peripherals-to-digispark-attiny85
# How to connect I2C peripherals to Digispark ATtiny85

I am new to microcontrollers and I have a problem with understanding and connecting I2C devices to the ATtiny85 Digispark microcontroller. According to this picture, SDA and SCL are on the same pin, and I am confused how I can connect any I2C device to this controller. Please can somebody help me to achieve this? To be more specific, I have an OLED display like this one and I want to connect it to the Digispark. If somebody can help me I would appreciate it! Thanks

• SCK is the name of the clock signal in the SPI protocol. SCL is the clock in I2C. This pin is just multiplexed. I2C uses only two signals, SCL/SDA (well, connecting GND would be a good idea too). – Eugene Sh. Jul 4 '17 at 14:01

• Sorry, mistyped. I just realized that I made a mistake with the pinouts. Can you please help me with documentation for connecting the Digispark and an OLED display? There are no concrete results when I am googling. – hogar Jul 4 '17 at 14:06

• So on the picture SCL and SDA are on different pins. – Eugene Sh. Jul 4 '17 at 14:07
2019-10-24 02:11:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3656577467918396, "perplexity": 2333.49252435814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987838289.72/warc/CC-MAIN-20191024012613-20191024040113-00520.warc.gz"}
https://eprint.iacr.org/2021/1554
### How to Claim a Computational Feat

Clémence Chevignard, Rémi Géraud-Stewart, Antoine Houssais, David Naccache, and Edmond de Roffignac

##### Abstract

Consider some user buying software or hardware from a provider. The provider claims to have subjected this product to a number of tests, ensuring that the system operates nominally. How can the user check this claim without running all the tests anew? The problem is similar to checking a mathematical conjecture. Many authors report having checked a conjecture $C(x)=\mbox{True}$ for all $x$ in some large set or interval $U$. How can mathematicians challenge this claim without performing all the expensive computations again? This article describes a non-interactive protocol in which the prover provides (a digest of) the computational trace resulting from processing $x$, for randomly chosen $x \in U$. With appropriate care, this information can be used by the verifier to determine how likely it is that the prover actually checked $C(x)$ over $U$. Unlike "traditional" interactive proof and probabilistically-checkable proof systems, the protocol is not limited to restricted complexity classes, nor does it require an expensive transformation of programs being executed into circuits or ad-hoc languages. The flip side is that it is restricted to checking assertions that we dub "refutation-precious": expected to always hold true, and such that the benefit resulting from reporting a counterexample far outweighs the cost of computing $C(x)$ over all of $U$.

Available format(s)
Category: Cryptographic protocols
Publication info: Preprint. MINOR revision.
Keywords: proof of work, hashing
Contact author(s): david naccache @ ens fr
History
Short URL: https://ia.cr/2021/1554
License: CC BY

BibTeX
@misc{cryptoeprint:2021/1554, author = {Clémence Chevignard and Rémi Géraud-Stewart and Antoine Houssais and David Naccache and Edmond de Roffignac}, title = {How to Claim a Computational Feat}, howpublished = {Cryptology ePrint Archive, Paper 2021/1554}, year = {2021}, note = {\url{https://eprint.iacr.org/2021/1554}}, url = {https://eprint.iacr.org/2021/1554} }

Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
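To fix intuitions, here is a toy sketch in Python of the general spot-checking idea the abstract describes. It is emphatically not the paper's construction, and every identifier in it is invented for illustration: the prover hashes per-$x$ traces over all of $U$ into one digest, and the verifier derives its sample points from that digest (Fiat-Shamir style), so a prover who skipped much of $U$ would likely fail its own spot checks.

```python
import hashlib

U = range(10_000)

def C_with_trace(x):
    # Stand-in "refutation-precious" claim: every odd x has x^2 = 1 (mod 8).
    trace = f"{x}:{(x * x) % 8}"
    ok = (x % 2 == 0) or ((x * x) % 8 == 1)
    return ok, trace

# Prover: check C(x) for every x in U, publish one combined digest.
h = hashlib.sha256()
traces = {}
for x in U:
    ok, tr = C_with_trace(x)
    assert ok, f"counterexample at x={x}: report it!"
    traces[x] = tr
    h.update(tr.encode())
claimed_digest = h.hexdigest()

# Verifier: derive sample points from the claimed digest, recompute C(x)
# there, and compare with the trace the prover hands over (the traces
# dict stands in for that message in this toy version).
seed = int(claimed_digest, 16)
for i in range(20):
    x = (seed + i * 2_654_435_761) % len(U)   # cheap pseudo-random indices
    ok, tr = C_with_trace(x)
    assert ok and tr == traces[x]
print("spot checks passed")
```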
2022-07-01 11:26:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3518851697444916, "perplexity": 3696.9378468022223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00665.warc.gz"}
https://mathoverflow.net/questions/261482/is-widehat-mathbbzt-cong-widehat-mathbbz-widehat-mathbbz
# Is $\widehat{\mathbb{Z}}[[t]]\cong\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]$?

Let $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]] := \varprojlim_{n,m}(\mathbb{Z}/n)[x]/(x^m-1)$ be the complete group algebra of the profinite free group of rank 1. In Corollary 5.9.2 of Ribes-Zalesski's Profinite Groups, they state that $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]\cong \widehat{\mathbb{Z}}[[t]]$, citing a paper of Lim which doesn't seem to prove exactly what they claim (though I'm sure it must follow, if one is sufficiently familiar with the theory).

Here, let's try giving $\widehat{\mathbb{Z}}[[t]]$ the topology corresponding to the product topology on $\prod_{n\ge 0}\widehat{\mathbb{Z}}$, indexed by the coefficients of $t^n, n\ge 0$. I would like to show directly that $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]\cong \widehat{\mathbb{Z}}[[t]]$. If this is true, then the map should be given by identifying a generator "$x$" of the group algebra with $1+t$. Relative to the (product) topology described above on $\widehat{\mathbb{Z}}[[t]]$, a neighborhood basis of 0 is given by the ideals $(n,t^m)$ for $n,m\ge 1$. For every such ideal, one can find some $N,M\ge 1$ such that the map "$x\mapsto 1+t$" defines a quotient map $$\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]\rightarrow(\mathbb{Z}/N)[x]/(x^M-1) \rightarrow \widehat{\mathbb{Z}}[t]/(n,t^m)$$ (this follows from divisibility properties of binomial coefficients). However, when I try to go in the other direction, it seems what I need is: for every $N,M\ge 1$, to find an $n,m$ such that $t\mapsto x-1$ induces a map $$\widehat{\mathbb{Z}}[t]/(n,t^m)\rightarrow(\mathbb{Z}/N)[x]/(x^M-1)$$ However, this is clearly impossible, since $t$ is always nilpotent on the left, and yet $x-1$ is rarely nilpotent on the right. Thus, either the "product" topology on $\widehat{\mathbb{Z}}[[t]]$ is not sufficient, or the map is more tricky than just sending "$x\mapsto 1+t$".

On the other hand, $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]$ is a commutative profinite ring, and hence a product of commutative profinite local rings. It certainly admits $\mathbb{Z}_p[[\mathbb{Z}_p]]$ as quotients for all primes $p$, which is known to be isomorphic to the local ring $\mathbb{Z}_p[[t]]$, so it seems hard to imagine any other possibility for $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]$. If it turns out the titular question has a negative answer, then the natural question is: $$\text{What are the local direct factors of the profinite ring } \widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]\,?$$

• Which edition of Ribes-Zalesskii do you have? I have the second edition, and in that, Cor 5.9.2 is not as you state it, so maybe they've corrected it since the first edition? Feb 7, 2017 at 8:57

• @JeremyRickard It seems I have both the first and the second (in the first edition it's called Cor 5.9.1b), but in both it states that the results (a),(b),(c) of 5.9.1 hold for $M(n) = \widehat{\mathbb{Z}}[[t]]$ and $F^{\text{nilp}}$ in place of $\mathbb{Z}_p[[t]]$ and $F$ the free pro-$p$ group of rank $n$. In particular, it seems that the analogue of (c) in the case $n = 1$ should be $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]] = \widehat{\mathbb{Z}}[[t]]$...
Feb 7, 2017 at 17:14

• @JeremyRickard The result of Lim cited by Cor 5.9.2 seems to say that the closed multiplicative subgroup generated by $1+t$ in $\widehat{\mathbb{Z}}[[t]]$ is isomorphic to $\widehat{\mathbb{Z}}$, which certainly induces a homomorphism $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]\rightarrow\widehat{\mathbb{Z}}[[t]]$ sending "$x\mapsto 1+t$" which is an isomorphism both on the coefficient ring and on the units coming from the "group" of the completed group ring, but I suppose Ribes/Zalesski never explicitly say what the analogous statements are, so perhaps they just meant that there is an epi? Feb 7, 2017 at 17:26

There may be some things to check here, but I think the following is correct and should answer your last question. I believe the final result is $$\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]] \cong \prod_q\mathbb{Z}_q[[t]]$$ as $q$ ranges over all prime powers $p^r$ with $r \ge 1$, each appearing $N_{p,r}$ times, where $N_{p,r}$ is the number of Frobenius orbits of elements of $\mathbb{F}_q^\times$ that generate $\mathbb{F}_q$ over $\mathbb{F}_p$ (equivalently, the number of monic irreducible polynomials of degree $r$ over $\mathbb{F}_p$ other than $x$).

Firstly, $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]\cong\prod_p\mathbb{Z}_p[[\widehat{\mathbb{Z}}]]$. To see this, note that in every $(\mathbb{Z}/n)[x]/(x^m-1)$, the distinct prime powers dividing $n$ generate comaximal ideals, and hence if $n = \prod_i p_i^{r_i}$ then $$(\mathbb{Z}/n)[x]/(x^m-1)\cong\prod_i(\mathbb{Z}/p_i^{r_i})[x]/(x^m-1)$$ This decomposition at every finite stage should extend to the limit, and so it suffices to analyze $\mathbb{Z}_p[[\widehat{\mathbb{Z}}]]$. Let $$\mathbb{Z}' :=\prod_{\substack{p'\text{ prime}\\ p'\ne p}}\mathbb{Z}_{p'}$$ then since $\widehat{\mathbb{Z}} = \mathbb{Z}_p\times\mathbb{Z}'$, and since $\mathbb{Z}_p[A\times B] = \mathbb{Z}_p[A]\hat{\otimes}\mathbb{Z}_p[B]$ (completed tensor product over $\mathbb{Z}_p$), we have: $$\mathbb{Z}_p[[\widehat{\mathbb{Z}}]] = \mathbb{Z}_p[[\mathbb{Z}_p]]\hat{\otimes}\mathbb{Z}_p[[\mathbb{Z}']] = \mathbb{Z}_p[[t]]\hat{\otimes}\mathbb{Z}_p[[\mathbb{Z}']]$$ By definition, $$\mathbb{Z}_p[[\mathbb{Z}']] = \varprojlim_m \mathbb{Z}_p[x]/(x^m-1)$$ where $m$ is coprime to $p$. Since such $x^m-1$ factors into distinct irreducibles mod $p$, by Hensel's lemma we find that $x^m-1 = \prod_i f_{m,i}$, where each $f_{m,i}\mid x^m-1$ and is irreducible in both $\mathbb{Z}_p[x]$ and $\mathbb{F}_p[x]$. These $f_{m,i}$ are actually pairwise comaximal (see this answer), and hence we have a decomposition $$\mathbb{Z}_p[x]/(x^m-1) = \prod_i \mathbb{Z}_p[x]/(f_{m,i})$$ where each term in the product is now isomorphic to $\mathbb{Z}_q$, where $q = p^{\deg f_{m,i}}$ (i.e., the ring of integers of the unique unramified extension of $\mathbb{Q}_p$ of degree $\deg f_{m,i}$).

Taking the limit over all $m$ coprime to $p$, we get: $$\mathbb{Z}_p[[\mathbb{Z}']] = \prod_f\mathbb{Z}_p[x]/(f)$$ where $f$ ranges over the set: $$\{f\in\mathbb{Z}_p[x] \text{ irreducible} : \exists m\text{ coprime to } p \text{ such that } f\mid x^m-1\}$$ Thus, we get $$\mathbb{Z}_p[[\widehat{\mathbb{Z}}]] = \mathbb{Z}_p[[t]]\hat{\otimes}\prod_f\mathbb{Z}_p[x]/(f)$$ By Proposition 7.7.5 in Wilson's book Profinite Groups, the completed tensor product commutes with arbitrary direct products, so we get $$\mathbb{Z}_p[[\widehat{\mathbb{Z}}]] = \prod_f(\mathbb{Z}_p[[t]]\hat{\otimes}\mathbb{Z}_p[x]/(f))$$ Since each $\mathbb{Z}_p[x]/(f)$ is a finite $\mathbb{Z}_p$-algebra, the completed tensor product coincides with the usual tensor product (cf. Ribes-Zalesski Prop. 5.5.3(d)), so we get $$\mathbb{Z}_p[[\widehat{\mathbb{Z}}]] = \prod_f(\mathbb{Z}_p[x]/(f))[[t]] = \prod\mathbb{Z}_q[[t]]$$ as $q$ ranges over all powers of $p$, each appearing multiple times in the product.
Thus, I believe we have an isomorphism $$\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]] \stackrel{\varphi}{\longrightarrow}\prod_q\mathbb{Z}_q[[t]] = \left(\prod_q\mathbb{Z}_q\right)[[t]]$$ For every $r \ge 1$, the number of times each $\mathbb{Z}_q = \mathbb{Z}_{p^r}$ appears in the product is precisely the number $N_r$ of irreducible degree $r$ factors of $x^{p^r-1}-1$ over $\mathbb{F}_p$, or equivalently the number of $p$-power Frobenius orbits of elements of $\mathbb{F}_q^\times$ of degree $r$ over $\mathbb{F}_p$.

Let $a\in\widehat{\mathbb{Z}}$ come from within the $[[\cdots]]$ of $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]$. Then $\varphi(a) = a'(t+1)^{a_p}$, where $a_p$ is the image of $a$ in $\mathbb{Z}_p$, and $a'\in\prod_{p'\ne p}\mathbb{Z}_{p'}$ is defined as follows. For every $q = p^r$, let $\overline{a}$ be the residue of $a$ mod $p^r-1$. Then the images of $a'$ in the $N_r$ copies of $\mathbb{Z}_q$ are in bijection with a complete set of representatives of the Frobenius orbits of $(q-1)$th roots of unity of degree $r$ in $\mathbb{Z}_q$, each raised to the $\overline{a}$-th power.

In particular, when $r = 1$, we find that the number of copies of $\mathbb{Z}_p[[t]]$ in the product is precisely the number of linear factors of $x^{p-1}-1$ over $\mathbb{F}_p$, which is precisely $p-1$. Thus, the projections onto each of these factors, followed by quotienting by $(t)$, yield the $p-1$ homomorphisms onto $\mathbb{Z}/p\mathbb{Z}$ referred to by Tom Goodwillie, and it seems like there can't be any others.

It seems to me that for $p$ prime $\hat{\mathbb Z}[[t]]$ has exactly one continuous ring homomorphism to $\mathbb Z/p\mathbb Z$, while $\hat{\mathbb Z}[[\hat{\mathbb Z}]]$ has exactly $p-1$ such homomorphisms.

• Ah, interesting... In fact it seems unlikely a change of topology would make your statement incorrect, though it makes me wonder what Ribes-Zalesski meant in their Corollary; or perhaps they were just mistaken? Do you have any idea what the direct factors of $\widehat{\mathbb{Z}}[[\widehat{\mathbb{Z}}]]$ might be? Feb 7, 2017 at 5:37

• No, I realized later that all the homomorphisms are continuous. Feb 7, 2017 at 12:32

There is no continuous surjection from $\hat{\mathbb Z}[[t]]$ to $(\mathbb Z/3\mathbb Z)[\mathbb Z/2\mathbb Z] = \mathbb Z/3\mathbb Z \oplus \mathbb Z/3\mathbb Z$.

• In fact, the only ring homomorphism $\varphi$ (continuous or not) is the obvious one, i.e. $\varphi(f(t))=(f(0)\bmod3,f(0)\bmod3)$. Indeed, we must have $\varphi(3)=0$, and also $\varphi(t)=0$ because $t$ is in the Jacobson radical. Feb 7, 2017 at 9:31
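A quick sanity check of the factor counts (a Python/SymPy sketch, not part of the original thread): for $p=3$, $r=2$ one finds three irreducible quadratic factors of $x^{8}-1$ over $\mathbb{F}_3$, i.e. three copies of $\mathbb{Z}_9[[t]]$, alongside the $p-1=2$ linear factors giving copies of $\mathbb{Z}_3[[t]]$.

```python
from sympy import symbols, factor_list, degree

x = symbols('x')
p, r = 3, 2
# Irreducible factors of x^(p^r - 1) - 1 over F_p; their degrees record
# which unramified rings Z_{p^d}[[t]] occur, and with what multiplicity.
_, factors = factor_list(x**(p**r - 1) - 1, x, modulus=p)
degrees = sorted(degree(f, x) for f, _ in factors)
print(degrees)   # [1, 1, 2, 2, 2] for p = 3, r = 2
```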
2022-06-28 20:50:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9805939197540283, "perplexity": 126.18583883691119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103617931.31/warc/CC-MAIN-20220628203615-20220628233615-00718.warc.gz"}
https://ask.sagemath.org/answers/57528/revisions/
Indeed, when f, g and x are all symbolic variables, f(x) and g(x) will both evaluate to x, so integral(f(x)*g(x), (x,0,1)) evaluates to integral(x^2, (x,0,1)), which is indeed 1/3.
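A minimal Sage session reproducing this behaviour (a sketch; calling a plain symbolic variable like a function is deprecated in recent Sage versions and may emit a warning):

```python
# In a Sage session:
var('x f g')                       # f and g are symbolic variables, not functions
f(x)                               # substitutes x for f's variable, giving x
integral(f(x) * g(x), (x, 0, 1))   # same as integral(x^2, (x, 0, 1)) == 1/3
```

The fix, if functions were intended, is to declare them as such with function('f') and function('g') instead of var.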
2021-09-21 02:44:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9742328524589539, "perplexity": 1374.5187812421204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057131.88/warc/CC-MAIN-20210921011047-20210921041047-00039.warc.gz"}
http://www.opuscula.agh.edu.pl/om-vol35iss3art3
Opuscula Math. 35, no. 3 (2015), 293-332 http://dx.doi.org/10.7494/OpMath.2015.35.3.293 Opuscula Mathematica

# Frames and factorization of graph Laplacians

Palle Jorgensen, Feng Tian

Abstract. Using functions from electrical networks (graphs with resistors assigned to edges), we prove existence (with explicit formulas) of a canonical Parseval frame in the energy Hilbert space $$\mathscr{H}_{E}$$ of a prescribed infinite (or finite) network. Outside degenerate cases, our Parseval frame is not an orthonormal basis. We apply our frame to prove a number of explicit results: with our Parseval frame and related closable operators in $$\mathscr{H}_{E}$$ we characterize the Friedrichs extension of the $$\mathscr{H}_{E}$$-graph Laplacian.

We consider infinite connected network-graphs $$G=\left(V,E\right)$$, $$V$$ for vertices, and $$E$$ for edges. To every conductance function $$c$$ on the edges $$E$$ of $$G$$, there is an associated pair $$\left(\mathscr{H}_{E},\Delta\right)$$ where $$\mathscr{H}_{E}$$ is an energy Hilbert space, and $$\Delta\left(=\Delta_{c}\right)$$ is the $$c$$-graph Laplacian; both depend on the choice of conductance function $$c$$. When a conductance function is given, there is a current-induced orientation on the set of edges and an associated natural Parseval frame in $$\mathscr{H}_{E}$$ consisting of dipoles. Now $$\Delta$$ is a well-defined semibounded Hermitian operator in both of the Hilbert spaces $$l^{2}\left(V\right)$$ and $$\mathscr{H}_{E}$$. It is known to automatically be essentially selfadjoint as an $$l^{2}\left(V\right)$$-operator, but generally not as an $$\mathscr{H}_{E}$$-operator. Hence as an $$\mathscr{H}_{E}$$-operator it has a Friedrichs extension. In this paper we offer two results for the Friedrichs extension: a characterization and a factorization. The latter is via $$l^{2}\left(V\right)$$.

Keywords: unbounded operators, deficiency indices, Hilbert space, boundary values, weighted graph, reproducing kernel, Dirichlet form, graph Laplacian, resistance network, harmonic analysis, frame, Parseval frame, Friedrichs extension, reversible random walk, resistance distance, energy Hilbert space.

Mathematics Subject Classification: 47L60, 46N30, 46N50, 42C15, 65R10, 05C50, 05C75, 31C20, 46N20, 22E70, 31A15, 58J65, 81S25.

Full text (pdf)
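The finite-network case is easy to make concrete: the graph Laplacian determined by a conductance function $c$ is the matrix $\Delta = D - C$, where $C_{uv} = c(u,v)$ and $D$ is diagonal with the row sums of $C$. The following NumPy sketch (with a made-up 4-vertex conductance matrix; it illustrates the standard weighted Laplacian, not code from the paper) checks the semiboundedness asserted in the abstract.

```python
import numpy as np

# Conductances on the edges of a 4-vertex network (symmetric, zero diagonal).
C = np.array([[0.0, 2.0, 0.0, 1.0],
              [2.0, 0.0, 3.0, 0.0],
              [0.0, 3.0, 0.0, 0.5],
              [1.0, 0.0, 0.5, 0.0]])

D = np.diag(C.sum(axis=1))       # total conductance at each vertex
L = D - C                        # the weighted graph Laplacian

print(np.linalg.eigvalsh(L))     # all >= 0 (up to rounding): semibounded
print(L @ np.ones(4))            # constants are harmonic: L.1 = 0
```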
2018-06-19 18:21:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.773960292339325, "perplexity": 655.6291571348925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863109.60/warc/CC-MAIN-20180619173519-20180619193519-00367.warc.gz"}
https://leanprover-community.github.io/archive/stream/208328-IMO-grand-challenge/topic/AI.html
## Stream: IMO-grand-challenge

### Topic: AI

#### Jason Rute (Sep 07 2019 at 12:00):

I think the term "AI" means a lot of things to a lot of people. For one, it could just mean "write a program to solve this". In this case that could be an imo_solver tactic written in Lean's meta language, or a script which generates Lean code. However, it could also mean using SAT/SMT solvers and other such tools. In other ITPs, these have been built into "hammers". Last, it could mean using machine learning, possibly neural networks and/or reinforcement learning.

#### Jason Rute (Sep 07 2019 at 12:00):

Does Lean currently have support, or is there going to be support, for hammer-like interfaces with powerful ATPs? Also, on the machine learning side, it seems that the most successful ML for ITP projects rely on having training data in the form of a dataset of proven theorems. This requires proof recording. Also, since Lean has so much syntactic sugar, it makes sense to try to strip that down to the bare bones in the training data, but to still keep the high-level tactic commands. Indeed the best AI for ITP projects, TacticToe, HOList, CoqGym/ASTactic, and ProverBot9001, all seem to rely on good tactic-level recorded proofs stored in a simple-to-parse format. Moreover, doing reinforcement learning requires an interface which allows rapid back-and-forth between the proof-checker and some AI interface (attached to TensorFlow, PyTorch, etc., utilizing a GPU or many-GPU machines). I think Google Research spent a lot of effort writing their own HOL Light kernel and interface to make it usable for reinforcement learning.

#### Jason Rute (Sep 07 2019 at 12:00):

I would love to see Lean with better tooling for AI, and this will better encourage users to participate. (I wrote some more rambling thoughts on AI and Lean in the AI and theorem proving stream.) I wonder if the organizers of this challenge have any plans to talk to Christian Szegedy, Josef Urban, Cezary Kaliszyk and others about how well Lean is set up for this challenge.

#### Daniel Selsam (Sep 07 2019 at 13:55):

I wonder if the organizers of this challenge have any plans to talk to Christian Szegedy, Josef Urban, Cezary Kaliszyk and others about how well Lean is set up for this challenge.

@Jason Rute Yes, definitely.

#### Daniel Selsam (Sep 07 2019 at 14:00):

I think the term "AI" means a lot of things to a lot of people. For one, it could just mean "write a program to solve this". In this case that could be an imo_solver tactic written in Lean's meta language, or a script which generates Lean code. However, it could also mean using SAT/SMT solvers and other such tools. In other ITPs, these have been built into "hammers". Last, it could mean using machine learning, possibly neural networks and/or reinforcement learning.

@Jason Rute I don't think it is at all clear which if any of the existing paradigms of AI can be leveraged to win. This is one reason why I think it is a worthy grand challenge.

Last updated: Aug 05 2021 at 03:12 UTC
2021-08-05 04:02:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3446968197822571, "perplexity": 2365.8746836905934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155322.12/warc/CC-MAIN-20210805032134-20210805062134-00082.warc.gz"}
https://www.lenimar.com/7qz7y/a973ef-survival-function-python
Survival analysis is a branch of statistics focused on the time until an event of interest occurs: the death or relapse of a patient, the termination or quitting of an employee, or the time from a salesperson's hire to their first sale. Time could be measured in years, months, weeks or days. Two variables drive everything: the duration, and an event indicator that records whether the event of interest was actually observed. Typical questions: (1) we can find the number of days until patients showed COVID-19 symptoms; (2) we can find for which age group a disease is deadlier; (3) we can ask whether survival differs between sexes (in the example dataset used here, status is coded 1 or 2, with around 139 males and around 90 females).

When the event may not be observed for a subject, the observation is censored. Right censoring is the most common case: at the end of the study the person under experiment is alive or still actively participating. Left censoring includes events that occurred before the experiment started, and interval censoring means we only have data for some intervals. It is very important not to simply drop censored rows: even though there is still a possibility that the event happens in the future, a censored subject carries information, and removing it will create biases in the model fit.

The Kaplan-Meier estimator, also called the product-limit estimator, is a non-parametric statistic used to estimate the survival function S(t), the probability that a subject survives beyond a given time t. In medical research it is often used to measure the fraction of patients living for a certain amount of time after treatment. The estimate is built from an event table whose columns include removed, observed (the number of patients that died during the experiment), censored, entrance (e.g. newly diagnosed patients joining the study), and at_risk, where for each time at_risk = previous at_risk + entrance - removed; the denominator at each event time is the number of subjects at risk just before that time. The underlying idea is conditional probability: just as the probability of drawing two red balls from a box of 15 balls containing 5 red ones is a product of conditional probabilities, the Kaplan-Meier estimate multiplies, over the event times, the conditional probabilities of surviving each interval given survival so far.

The survival_function_ attribute of a fitted Kaplan-Meier object gives the survival probability over the whole timeline, and the usual plot shades a confidence interval (the light blue band): the true survival probability at each point of the timeline lies within that interval. A useful summary statistic is the median survival time, the time at which 50% of the population has died; lifelines exposes it via median_survival_time_, with median_survival_times giving its confidence interval.

The Kaplan-Meier estimate cannot simply be transformed into a hazard estimate: if we are curious about the hazard function h(t) of a population, we instead use the Nelson-Aalen estimator of the cumulative hazard function. Note that, in contrast to the survivor function, which focuses on the event not having occurred, the hazard function focuses on the event occurring; for example, h(200) = 0.7 would mean a 0.7 probability of the event at t = 200 days. The cumulative hazard has a less obvious interpretation than the survival function, but hazard functions are the basis of more advanced techniques in survival analysis.

To compare the survival curves of two or more groups (say, two groups of patients diagnosed with cancer and given two different treatments), the log-rank test returns a p-value for the hypothesis that the groups do not have significantly different survival. To quantify the effect of covariates such as age, sex or weight on survival, Cox proportional hazards regression is used.

Tooling in Python: lifelines is a complete survival analysis library, written in pure Python, with a user-friendly interface providing Kaplan-Meier and Nelson-Aalen fitters, log-rank tests and Cox regression; its results can be piped into visualization or further data processing. scikit-survival is a Python module for survival analysis built on top of scikit-learn, so its models can be combined with scikit-learn tools (e.g. KFold cross-validation). For deep learning there is Deep Recurrent Survival Analysis, an auto-regressive deep model for time-to-event data with censorship handling, released as part of an AAAI 2019 paper together with a benchmark of several Python survival analysis implementations.
2022-05-20 18:00:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35249048471450806, "perplexity": 1464.0599627201054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00596.warc.gz"}
https://www.trustudies.com/question/791/abc-is-a-right-angled-triangle-in-whi/
# ABC is a right-angled triangle in which $$\angle{A}$$ = $$90^\circ$$ and AB = AC. Find $$\angle{B}$$ and $$\angle{C}$$.

In $$\triangle{ABC}$$, we have
AB = AC ...(given)
$$\Rightarrow$$ $$\angle{B}$$ = $$\angle{C}$$ ...(i) ($$\because$$ angles opposite to equal sides are equal)
Now, we know that
$$\angle{A}$$ + $$\angle{B}$$ + $$\angle{C}$$ = $$180^\circ$$
$$\Rightarrow$$ $$90^\circ$$ + $$\angle{B}$$ + $$\angle{C}$$ = $$180^\circ$$ ...(since $$\angle{A}$$ = $$90^\circ$$)
$$\Rightarrow$$ $$90^\circ$$ + $$\angle{B}$$ + $$\angle{B}$$ = $$180^\circ$$ ...(from (i))
$$\Rightarrow$$ 2$$\angle{B}$$ = $$180^\circ$$ - $$90^\circ$$ = $$90^\circ$$
$$\Rightarrow$$ $$\angle{B}$$ = $$45^\circ$$
$$\therefore$$ $$\angle{C}$$ = $$45^\circ$$, too.
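A one-line check of the arithmetic (a Python/SymPy sketch, not part of the original solution):

```python
from sympy import symbols, Eq, solve

B, C = symbols('B C')
# angle A = 90 degrees, and B = C by the equal-sides argument above
print(solve([Eq(B, C), Eq(90 + B + C, 180)], [B, C]))   # {B: 45, C: 45}
```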
2023-03-22 10:11:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.877395749092102, "perplexity": 382.7533648802201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00741.warc.gz"}
https://socratic.org/questions/how-do-you-divide-2x-3-6x-2-8-x-2-4
# How do you divide (2x^3 - 6x^2 + 8) / (x^2 - 4)?

Feb 2, 2018

#### Explanation:

Use polynomial long division on $\frac{2 {x}^{3} - 6 {x}^{2} + 8}{{x}^{2} - 4}$:

Step 1: $2x^3 \div x^2 = 2x$. Subtract $2x \cdot (x^2 - 4) = 2x^3 - 8x$ from the dividend, leaving $-6x^2 + 8x + 8$.

Step 2: $-6x^2 \div x^2 = -6$. Subtract $-6 \cdot (x^2 - 4) = -6x^2 + 24$, leaving $8x - 16$.

Since $\deg(8x - 16) < \deg(x^2 - 4)$, the division stops, so

$$\frac{2x^3 - 6x^2 + 8}{x^2 - 4} = 2x - 6 + \frac{8x - 16}{x^2 - 4}$$

i.e. the quotient is $2x - 6$ with a remainder of $\frac{8x - 16}{x^2 - 4}$.
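The result is easy to double-check in Python with SymPy (a quick sketch, not part of the original answer):

```python
from sympy import symbols, div

x = symbols('x')
quotient, remainder = div(2*x**3 - 6*x**2 + 8, x**2 - 4, x)
print(quotient, remainder)   # 2*x - 6, 8*x - 16
```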
2020-03-28 13:46:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2576111853122711, "perplexity": 3408.6524042512387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491998.11/warc/CC-MAIN-20200328134227-20200328164227-00438.warc.gz"}
https://math.stackexchange.com/questions/2033918/proof-with-lagranges-remainder-theorem
# Proof with Lagrange's Remainder Theorem

Consider the function $f(x) = \frac{1}{\sqrt{1-x}}$. Generate the Taylor series for f centered at zero, and use Lagrange's Remainder Theorem to show the series converges to f on $[0,1/2].$

I have generated the Taylor series, which is $$1+\frac{x}{2} + \frac{3x^2}{8}+\frac{5x^3}{16}+\frac{35x^4}{128}+\frac{63x^5}{256}+...$$

I'm still pretty shaky on using Lagrange's Remainder Theorem to prove convergence, and it doesn't seem very straightforward with this problem. For reference, LRT says that given a point $x$ within the radius of convergence of the series, there exists a $c$ satisfying $|c|<|x|$ where the error function $E_N(x)$ satisfies $$E_N(x)=\frac{f^{(N+1)}(c)}{(N+1)!}x^{N+1}$$

We have $f(x) = (1-x)^{-\alpha}$ where $\alpha = 1/2$, so $f^{(N+1)}(x) = \alpha(\alpha+1)\cdots(\alpha+N)(1-x)^{-\alpha-N-1}$. Note that $$|E_{N}| = \left| \frac{\alpha(\alpha+1) \cdots (\alpha + N)x^{N+1}}{(N+1)!}(1-c)^{-\alpha - N -1}\right|\\ = \alpha|x||1-c|^{-\alpha-1}\left| \frac{(\alpha+1) \cdots (\alpha + N)}{(N+1)!}\left(\frac{x}{1-c}\right)^N\right|.$$ Consider $$a_Nz^N = \frac{(\alpha+1) \cdots (\alpha + N)}{(N+1)!}\left(\frac{x}{1-c}\right)^N.$$ Since $0 < c < x \le 1/2$ we have $|z| < 1$ and $$\lim_{N\to \infty} \frac{|a_{N+1}z^{N+1}|}{|a_N z^N|} = \lim_{N\to \infty} \frac{\alpha + N + 1}{N+2}|z| = |z| < 1.$$ By the ratio test $\sum a_Nz^N$ converges and $$\lim_{N \to \infty}|a_N z^N| = 0.$$ Hence $|E_{N}| \to 0$ and the Taylor series for $f$ converges to $f$ on $[0,1/2]$.
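The stated Taylor coefficients are easy to verify in Python with SymPy (a quick sketch, not part of the original post):

```python
from sympy import symbols, series, sqrt

x = symbols('x')
print(series(1 / sqrt(1 - x), x, 0, 6))
# 1 + x/2 + 3*x**2/8 + 5*x**3/16 + 35*x**4/128 + 63*x**5/256 + O(x**6)
```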
2020-08-09 20:46:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.973910391330719, "perplexity": 83.17897425165836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738573.99/warc/CC-MAIN-20200809192123-20200809222123-00143.warc.gz"}
http://www.physicsforums.com/showthread.php?t=115715
Recognitions: Homework Help

## Senior thesis

I am writing my senior thesis (I am an undergrad math major at UCSB) on Dirichlet series, which are, in the classical sense, series of the form $$\xi (s)=\sum_{n=1}^{\infty}\frac{a_n}{n^s}$$ where $$a_n,s\in\mathbb{C}$$ and $$a_n$$ is completely multiplicative, hence $$\forall n,m\in\mathbb{N}, \, a_{nm}=a_{n}a_{m}$$

I have begun this bit on analytic continuation for such series; here it goes: $$\xi (s)+\sum_{n=1}^{\infty}(-1)^{n}\frac{a_n}{n^s}=\sum_{n=1}^{\infty}\frac{a_n}{n^s}+\sum_{n=1}^{\infty}(-1)^{n}\frac{a_n}{n^s}=2\sum_{n=1}^{\infty}\frac{a_{2n}}{(2n)^s}=2^{1-s}a_2\sum_{n=1}^{\infty}\frac{a_n}{n^s}$$ so that $$\xi (s)=(1-a_22^{1-s})^{-1}\sum_{n=1}^{\infty}(-1)^{n-1}\frac{a_n}{n^s}$$ which is the first stage of analytic continuation. Now, to the above series apply Euler's series transformation, which, if you don't recall, is $$\sum_{n=1}^{\infty}(-1)^{n-1}b_n=\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\sum_{m=0}^{n}(-1)^{m}\left( \begin{array}{c}n\\m\end{array}\right) b_{m+1}$$ to get the second stage, namely $$\boxed{\xi (s)=(1-a_22^{1-s})^{-1}\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\sum_{m=0}^{n}(-1)^{m}\left( \begin{array}{c}n\\m\end{array}\right)\frac{a_{m+1}}{(m+1)^s}}$$ When this same process of continuation is applied to the Riemann zeta it produces a series for the zeta function that converges for all s in the complex plane except s=1 (see prior thread for details). My trouble is proving convergence in the present, more general case. Any thoughts?

Recognitions: Homework Help Science Advisor

Not all Dirichlet series have an analytic continuation. In the case of zeta, the $1-2^{1-s}$ is cancelling the pole at s=1. If the coefficients of $$\xi(s)$$ are real and non-negative then you have a pole at the real point on the line of convergence. I don't see anything that will guarantee this pole to be cancelled in your case (it could be of higher order, or at a different location, or...); if this doesn't happen, that sum cannot possibly converge.

Recognitions: Homework Help

So I should then ask: for what sequences $$a_n$$ does this favorable condition occur?

Recognitions: Homework Help

Quote by shmoe: Not all Dirichlet series have an analytic continuation. [...] if this doesn't happen, that sum cannot possibly converge.

I see the cancellation by the factor of $1-2^{1-s}$ at s=1, since $$1-2^{1-s}=0 \Leftrightarrow s=1+i\frac{2k\pi}{\log 2}, k\in\mathbb{Z},$$ but we may assume principal values so that s=1 is the only point of interest. But for the factor of $1-a_2 2^{1-s}$, s=1 would not be of interest unless $a_2=1$.

Recognitions: Homework Help

Quote by benorin: So I should then ask: for what sequences $$a_n$$ does this favorable condition occur?

Recognitions: Homework Help

I wish to consider when exactly Euler's series transformation provides an analytic continuation of a function defined by an alternating series.
I will use a modified version of the transformation given above: for a known convergent alternating series $\sum (-1)^k b_k$, Euler's series transformation is given by

$$\sum_{k=0}^{\infty}(-1)^{k}b_k=\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}(-1)^{m}\left( \begin{array}{c}k\\m\end{array}\right) b_{m}$$

An example: the result is trivial, yet it illustrates the idea of continuation by the series transformation. Let z be complex, and consider the function f(z) defined by the alternating series

$$f(z) = \sum_{k=0}^{\infty}(-1)^kz^k,$$

which converges to $$\frac{1}{1+z}$$ on the unit disk $$|z|<1$$. Applying Euler's series transformation to f(z), we obtain

$$f(z) = \sum_{k=0}^{\infty}(-1)^kz^k = \sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}(-1)^{m}\left( \begin{array}{c}k\\m\end{array}\right) z^{m},$$

and since the binomial theorem gives

$$\sum_{m=0}^{k}(-1)^{m}\left( \begin{array}{c}k\\m\end{array}\right) z^{m} = (1-z)^{k},$$

we may simplify this to obtain

$$f(z) = \sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\cdot (1-z)^{k} = \frac{1}{2}\sum_{k=0}^{\infty}\left( \frac{1-z}{2} \right) ^{k},$$

where the last series is a geometric series which converges to $$\frac{1}{1+z}$$ on the disk $$\left| \frac{1-z}{2} \right| <1 \Rightarrow |z-1|<2$$. Notice that the series thus obtained converges everywhere the given series did, and on a disk twice as big! If one applies the transformation yet again, the series

$$\frac{1}{4}\sum_{k=0}^{\infty}\left( \frac{3-z}{4} \right) ^{k}$$

is obtained, which is a geometric series converging to $$\frac{1}{1+z}$$ on the disk $$\left| \frac{3-z}{4} \right| <1 \Rightarrow |z-3|<4$$. I suspect that successive applications of the transformation would produce series with circles of convergence having radii that grow as powers of 2 and whose leftmost point is z=-1. Can this process be carried out indefinitely to give a series which converges in the half-plane $$\Re z >-1$$?

But how often will it happen that an analytic continuation is obtained (what are necessary and sufficient conditions)? What is the maximal region of convergence thereby obtained? In "Theory and Application of Infinite Series," Knopp discusses sufficient conditions for greater rapidity of convergence to be obtained by an application of the series transformation, but I have yet to find a discussion of continuation. Any thoughts?
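As a quick numerical illustration of what the thread describes (a minimal Python sketch of my own, not from the thread): evaluating partial sums of the Euler-transformed series for f(z) at z = 1.5, a point outside the unit disk of the original series but inside the disk |z - 1| < 2 of the transformed one.

from math import comb

def euler_transform_partial(z, terms):
    """Partial sum of sum_k 2^-(k+1) * sum_m (-1)^m C(k,m) z^m."""
    total = 0.0
    for k in range(terms):
        inner = sum((-1) ** m * comb(k, m) * z ** m for m in range(k + 1))
        total += inner / 2 ** (k + 1)
    return total

z = 1.5  # the original series sum_k (-1)^k z^k diverges here
print(euler_transform_partial(z, 40))  # approaches 1/(1+z) = 0.4
print(1 / (1 + z))

The inner sum collapses to (1-z)^k, so the partial sums trace exactly the geometric series derived above and converge rapidly to 1/(1+z) = 0.4.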
2013-05-19 10:05:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9191904664039612, "perplexity": 460.73561893084377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00062-ip-10-60-113-184.ec2.internal.warc.gz"}
https://coderunner.org.nz/mod/book/view.php?id=184&chapterid=708
## CodeRunner Documentation (V3.1.0) ### 10.1 Column specifiers Each column specifier is itself a list, typically with just two or three elements. The first element is the column header, the second element is usually the field from the TestResult object being displayed in the column (one of those values listed above) and the optional third element is an sprintf format string used to display the field. Per-test template graders can add their own fields, which can also be selected for display. It is also possible to combine multiple fields into a column by adding extra fields to the specifier: these must precede the sprintf format specifier, which then becomes mandatory. For example, to display a Mark Fraction column in the form 0.74 out of 1.00, a column format specifier of ["Mark Fraction", "awarded", "mark", "%.2f out of %.2f"] could be used.
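For instance, reusing only the fields from the example above (and assuming default formatting applies when the optional format string is omitted), a plain two-element specifier such as ["Awarded", "awarded"] would display just the awarded mark, while ["Awarded", "awarded", "%.2f"] would format it to two decimal places.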
2019-09-22 12:47:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23004725575447083, "perplexity": 2322.784667614529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575513.97/warc/CC-MAIN-20190922114839-20190922140839-00318.warc.gz"}
https://phys.libretexts.org/TextMaps/Astronomy_and_Cosmology_TextMaps/Map%3A_Astronomy_(Impey)/6%3A_The_Terrestrial_Planets/6.17_Geology_of_Mars
6.17 Geology of Mars

Probes that went into orbit around Mars in the 1970s discovered many intriguing geological features. Photos showed not only ancient, heavily cratered areas, but also younger, sparsely cratered regions. People saw an alien landscape that included canyons and landslides, vast fields of sand dunes, and enormous young volcanoes. Many of the features were strikingly Earth-like, but Mars was missing the ingredient that makes the Earth special: water.

One of the most obvious features on the Martian globe is an enormous trench scarring an entire hemisphere. Valles Marineris, named after the Mariner spacecraft that discovered it, is 4500 km (2800 miles) long; it would stretch across the entire United States! The valley is 8 km deep in some places, five times as deep as Earth's Grand Canyon. Valles Marineris probably has a tectonic origin. Without plate tectonics to relieve the stress, convection in the mantle could stretch the crust, forming a giant crack. Weathering by wind, and perhaps water, would have widened this crack into the chasm we see today. The formation of Valles Marineris may also be related to the Tharsis bulge, a large upwelling in the Martian crust nearby.

Sitting on the Tharsis bulge are several of the solar system's largest volcanoes. These shield volcanoes are relatively young, only about 200 million years old. The largest of them is Olympus Mons, rising 24 kilometers (78,000 feet) above Mars. In comparison, Mount Everest on Earth rises only 9 kilometers above sea level, and 13 kilometers above the deepest ocean floor. Mountains on Earth don't grow as large as on Mars for several reasons. First, tectonic plates slide over volcanic "hot spots," so instead of one large volcanic mountain, we get a chain of smaller ones. Second, the thicker lithosphere of Mars is better able to support the mass of a large mountain; Earth's thinner lithosphere and hotter interior cause slumping under the weight of a large mountain.

The only samples we have of Martian rocks are meteorites, and we don't know where on Mars they originated. So scientists have to use craters to estimate the ages of different areas on the surface of Mars. The principle of crater counting is simple: we assume that a new surface forms with no craters and that craters are added at a constant rate after that. The method only works on planets or moons with thin or absent atmospheres, since an atmosphere would "rub out" many of the craters. We can calibrate crater counting using the Moon, since we have samples from Apollo with ages from radioactive decay.

The most obvious age difference is between the northern and southern hemispheres. The northern hemisphere is smoother, with fewer craters, and is also lower in elevation than the southern hemisphere. Perhaps this area was flooded by lava, or it may be the floor of an ancient ocean.

Mars is occasionally covered by planet-wide dust storms that can last for months. Billions of years of erosion by wind, meteorite impacts, and water have reduced surface rocks on Mars to a layer of fine particles. During global storms, the entire planet's surface is veiled by this red dust. The storms are so big, they can even be seen from Earth. Wind speeds at the surface during a dust storm can reach 30 meters per second (68 mph)! With this kind of speed, even very fine dust has enough power to sculpt the Martian surface. Changing dune fields and wind streaks are evidence of that influence.
Wind will slow down when it passes over a raised feature like a crater or a hill, and then it will deposit material in a streak behind the feature. The streaks can be dark or light, depending on the color of the material and the underlying rock.

Every Solar System body has craters, but one type of crater is found only on Mars. Rampart craters are surrounded by ejecta that looks like it flowed like mud. When an asteroid hits the surface of Mars, part of its kinetic energy is transformed into heat. This would melt any ice mixed in with the soil, or any permafrost beneath the surface. The resulting water would liquefy the rocks and soil thrown out of the crater, forming the characteristic rampart ejecta blanket. These distinctive craters are one of the indications that ice is present under the Martian surface.

(Figure: Tharsis bulge region on Mars, with Olympus Mons in the top left.)
2018-01-19 22:56:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5164538621902466, "perplexity": 2202.8646050727502}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888302.37/warc/CC-MAIN-20180119224212-20180120004212-00153.warc.gz"}
https://de.zxc.wiki/wiki/Potential_(Physik)
# Potential (physics)

The potential (from Latin potentia, "power, capacity") is, in physics, the ability of a conservative force field to perform work. It describes the effect of a conservative field on masses or charges independently of their size and sign. A back-reaction of the test body is thus initially excluded, but can also be taken into account separately. The capital Greek letter $\Phi$ is usually used as the symbol for the potential.

In mathematics, the term potential refers exclusively to a (scalar or vector) field, i.e. to a position function as a whole. In physical and technical contexts, however, it is used to denote both the field and its individual function values, such as the electric or gravitational potential at the point in question. The following deals mainly with the physical "potential" as a field. In many textbooks the potential energy is also referred to as the "potential", and the symbol $V$ is chosen for the potential energy. A potential (in the proper sense) is potential energy per coupling constant, e.g. electric charge or mass.

## Basis: the force field

According to Newton, the law $F = ma$ applies to a force, where $m$ is a mass and $a$ the acceleration this mass experiences. It is thus a question of a force exerted on a single object.

In the case of gravity, however, an acceleration (downwards) acts at every point in space, and a mass located anywhere in space always experiences a force in that same direction. Quantities of this kind, which are not located at a single place but are distributed over a region of space, are called fields, and depending on whether the quantities in question are directed or undirected, fields are divided into vector fields and scalar fields.

Quantities that have no direction, such as mass, charge, density or temperature, and that can be fully described by a single number, are also referred to as scalars, and all fields that assign such undirected quantities to points in space are accordingly called scalar fields. For example, one can assign to each point on the earth's surface its height above sea level and thus obtain a scalar height field, or one can assign to each point in space its density and thus obtain a density field.

Forces, on the other hand, are vectors, i.e. directed quantities, and if one assigns such a vector instead of a scalar to each point in space, one obtains a vector field instead of a scalar field. In the case of gravity, for example, all gravity vectors always point in the direction of the center of the earth. Vector fields whose elements are forces are called force fields, and so the above equation can also be written vectorially as

$$\vec{F} = m \vec{a},$$

where $\vec{F}$ is the force field and $\vec{a}$ the acceleration field. An acceleration field usually depends on the position in space, which means that both $\vec{F}$ and $\vec{a}$ are functions of $\vec{r}$; more precisely:

$$\vec{F}(\vec{r}) = m \vec{a}(\vec{r}).$$

## The potential using the example of the electric and the gravitational field
(Figure: potential energy $W_{\rm pot}(r)$ and gravitational potential $V(r)$ around a central mass; here $F_{\rm G}$ is the gravitational force, $a_{\rm G}$ the gravitational acceleration, and $\nabla V$ the potential gradient.)

If the force is a conservative force such as the Coulomb force or gravity, the force field $\vec{F}(\vec{r})$ can also be expressed using a scalar field $\Phi(\vec{r})$, for which the following equations then apply (in the case of the electric field, the charge $q$ takes on the role that the mass $m$ plays in the gravitational field):

$$\vec{F}_E(\vec{r}) = -q \vec{\nabla}\Phi(\vec{r}) \quad\text{or}\quad \vec{F}_G(\vec{r}) = -m \vec{\nabla}\Phi(\vec{r}).$$

A scalar field $\Phi(\vec{r})$ that fulfils this relationship is called the potential of the vector field $\vec{F}(\vec{r})$.

Here

- $\vec{\nabla}$ (mostly written simply as $\nabla$) is the nabla operator,
- $\vec{\nabla}\Phi(\vec{r})$ is the gradient of the field $\Phi(\vec{r})$ formed with its help.

Applying the nabla operator to the scalar field generates a vector field that, for each point in space, makes a statement about the rate of change of the scalar field in the direction of its steepest ascent. The potential can thus be nicely visualized as a hilly landscape, as in the case of the height field mentioned earlier: the height of a point is its potential value, and the force acting on a body at that point is the vector that points in the direction of the steepest potential descent, i.e. exactly opposite to the direction of the steepest potential ascent.

The force on a charge $q$ in the electric field, or on a mass $m$ in the gravitational field, is

$$\vec{F}_E(\vec{r}) = -q \vec{\nabla}\Phi(\vec{r}) = q \vec{E}(\vec{r}) \quad\text{or}\quad \vec{F}_G(\vec{r}) = -m \vec{\nabla}\Phi(\vec{r}) = m \vec{a}(\vec{r}).$$

The special importance of the potential lies in the fact that, as a scalar field, it has only one component (compared to the three components of a force field), which simplifies many calculations. In addition, its product with the charge or mass directly supplies the potential energy of the test body in question; thus in electrostatics the equation for the potential energy and the electric potential is

$$E_{\mathrm{pot}} = q \cdot \Phi(\vec{r}).$$

In a more general sense, other scalar fields from which vector fields can be derived according to the above equation are also referred to as potentials.

### Central potential

A central potential is understood to be a potential that depends only on the distance $|\vec{r}|$ to the center of force: $V(\vec{r}_1) = V(\vec{r}_2)$ whenever $|\vec{r}_1| = |\vec{r}_2|$. Motions in a central potential are subject to a conservative central force.

### On the signs
(Figure: potential energy $W_{\rm pot}(r)$ and Coulomb potential $V(r)$ in the vicinity of a negative (top) or positive (bottom) central charge; $F_C$ is the Coulomb force, $E$ the electric field strength, and $\nabla V$ the potential gradient.)

The minus signs in the equations

$$\vec{F}(\vec{r}) = -k \, \vec{\nabla}\Phi(\vec{r}) \quad\text{or}\quad \vec{a}(\vec{r}) = -\frac{k}{m} \vec{\nabla}\Phi(\vec{r})$$

express the fact that the conservative force on a positive charge (positive electric charge $q$ or mass $m$) always acts, following the principle of least constraint, in the direction of decreasing potential energy, i.e. opposite to the direction of the gradient $\vec{\nabla}\Phi(\vec{r})$, which points towards maximum energy increase. In the graphic picture of a potential mountain, gravitational acceleration and electric field strength (see the adjacent figure) therefore always act "downhill".

For the electric field, however, the situation can become even more complicated, because negative central and test charges are also conceivable. When a negative test charge $-q$ approaches a negative central charge $-Q$, the potential energy of the test charge increases, although it moves in the direction of the field line, i.e. in the direction of falling electric potential. The paradox disappears as soon as one takes into account that the product of two negative quantities is again a positive quantity. The adjacent figure summarizes the relationship between potential energy and electric potential for the four conceivable sign constellations of the electric field. As can be seen, the potential energy always depends on the signs of both charges, whereas the shape of the potential depends solely on the sign of the central charge.

A concrete application example illustrates the content of this relationship a little more clearly: since the positive direction of coordinate systems on the earth's surface always points vertically upwards, and lifting a body higher means that it gains more potential energy, the potential at height $h$ above the ground is approximately $\Phi(h) = g \cdot h$, with $g$ the magnitude of the acceleration due to gravity.

If one regards the gravitational potential of the earth's gravitational field as an approximately central potential (see above), i.e. as depending solely on the distance to the center of the earth, or on the height $h$, the gradient of $\Phi(h)$ reduces to the differential quotient $\mathrm{d}\Phi(h)/\mathrm{d}h$, and one obtains as the equivalent of the above equations

$$\vec{a}(h) = -\frac{\mathrm{d}}{\mathrm{d}h}\Phi(h) \cdot \vec{e}_r = -\frac{\mathrm{d}}{\mathrm{d}h}\,(g \cdot h) \cdot \vec{e}_r = \vec{g}, \qquad \vec{g} = -g\,\vec{e}_r.$$

As can be seen from the minus sign, the direction of the acceleration due to gravity is opposite to the positive direction of the coordinate system, i.e. pointing towards the center of the earth, as expected. In this case, the acceleration calculated from the gravitational potential is exactly equal to the acceleration due to gravity.
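A quick numerical check of the relation $a = -\mathrm{d}\Phi/\mathrm{d}h$ (a minimal Python sketch of my own, not part of the article): a central-difference approximation of the derivative of $\Phi(h) = g \cdot h$ recovers the constant, downward-pointing acceleration.

g = 9.81  # m/s^2, magnitude of the gravitational acceleration near the surface

def phi(h):
    """Near-surface gravitational potential per unit mass, Phi(h) = g*h."""
    return g * h

def acceleration(h, eps=1e-6):
    """Central-difference approximation of a = -dPhi/dh."""
    return -(phi(h + eps) - phi(h - eps)) / (2 * eps)

print(acceleration(10.0))  # ~ -9.81: the acceleration points downwards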
## Potential energy and potential

Potential energy and potential differ in that the potential energy (in the gravitational field) refers to a mass, and (in the electric field) to a charge, and depends on the size of this mass or charge, while the potential is a property of the force field itself, independent of the mass or charge of a test body. The potential is a field representation equivalent to the force field.

The relationship mentioned above makes it possible to represent a three-dimensional conservative force field by means of scalar fields, without losing any information about the field. This leads to the simplification of many calculations. However, the body causing the field can no longer be inferred unambiguously: for example, the external gravitational potential of a homogeneous solid sphere is equivalent to the potential of a point mass.

The two quantities are linked by the concept of work:

- Potential energy is the ability of a body to perform work, from a physical standpoint.
- The potential describes the ability of a field to make a body perform work.

The relationship between potential energy $V(\vec{r})$ and potential $\Phi(\vec{r})$ is

$$V(\vec{r}) = q \cdot \Phi(\vec{r}) \quad\text{or}\quad V(\vec{r}) = m \cdot \Phi(\vec{r}),$$

where the first form refers to an electric field (charge $q$), the second to a gravitational field (mass $m$).

## Potential difference

One speaks of a potential difference whenever two or more objects have potentials that differ from one another. A potential difference is a body-independent measure of the strength of a field and describes the work capacity of an object in it. Along equipotential surfaces (surfaces of equal potential) there is therefore no potential difference: objects (bodies, charges) can be moved along them without any effort.

In electrostatics, the potential difference is defined as the electric voltage between two isolated charge carriers (objects of different potential):

$$U = \Phi(\vec{r}_2) - \Phi(\vec{r}_1).$$

## Relationship with the charge distribution

Symbols used below:

- $\Delta$ : Laplace operator
- $\varepsilon$ : permittivity
- $\Phi(\vec{r})$ : potential
- $G$ : gravitational constant
- $\rho(\vec{r})$ : charge or mass density

The relationship between the potential and the charge or mass density is established, for the Coulomb and gravitational forces, by the Poisson equation, a second-order partial differential equation. In electrostatics it reads

$$\Delta \Phi(\vec{r}) = -\frac{\rho(\vec{r})}{\varepsilon},$$

whereas in the classical theory of gravity it takes the form

$$\Delta \Phi(\vec{r}) = 4 \pi G \rho(\vec{r}).$$

For the electrostatic equation above to apply, $\varepsilon$ must be constant. If this requirement is not met, the following expression must be used instead:

$$\operatorname{div}\left[\varepsilon \cdot \operatorname{grad} \Phi(\vec{r})\right] = -\rho(\vec{r}).$$

## Example: gravitational potential of a homogeneous sphere

Since solving the Poisson equation is already relatively complex even in simple cases, a detailed example is presented here.
To do this, we consider an idealized celestial body as a perfect sphere with homogeneous density $\rho$ and radius $R$.

### Outer solution

In the space outside the sphere, $r > R$ and $\rho = 0$, so the Poisson equation reduces to the Laplace equation

$$\Delta \Phi(\vec{r}) = 0.$$

Since the given problem has spherical symmetry, we can simplify it by considering it in spherical coordinates. All that is required is to insert the corresponding Laplace operator into the equation, which then takes the form

$$\Delta \Phi(\vec{r}) = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial \Phi(\vec{r})}{\partial r}\right) + \frac{1}{r^2 \sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\, \frac{\partial \Phi(\vec{r})}{\partial\theta}\right) + \frac{1}{r^2 \sin^2\theta}\frac{\partial^2 \Phi(\vec{r})}{\partial\varphi^2} = 0.$$

Obviously, the field cannot depend on the angles $\theta, \varphi$, since the sphere is symmetric. This means that the derivatives of $\Phi(\vec{r})$ with respect to the angular coordinates vanish and only the radial part remains:

$$\Delta \Phi(\vec{r}) = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial \Phi(\vec{r})}{\partial r}\right) = 0,$$

which is further simplified by multiplying both sides by $r^2$. Integration over $r$ yields

$$r^2 \frac{\partial \Phi(\vec{r})}{\partial r} = \alpha,$$

where $\alpha$ is a constant of integration. Further integration over $r$ yields

$$\Phi(\vec{r}) = -\frac{\alpha}{r} + \beta = \frac{\tilde{\alpha}}{r} + \beta,$$

where $\tilde{\alpha} = -\alpha$, so that the minus sign disappears, and $\beta$ is again a constant of integration. Because the potential should approach zero at an infinite distance, we must have $\beta = 0$. The following therefore applies to the outer solution:

$$\Phi(\vec{r}) = \frac{\tilde{\alpha}}{r}.$$

However, in order to compute the constant, we must first find the inner solution.

### Inner solution

Inside the sphere, $r < R$ and $\rho(\vec{r}) = \rho$, so that the Poisson equation holds:

$$\Delta \Phi(\vec{r}) = 4 \pi G \rho,$$

i.e.

$$\frac{\partial}{\partial r}\left(r^2 \frac{\partial \Phi(\vec{r})}{\partial r}\right) = 4 \pi G \rho r^2.$$

Integrating twice over $r$ yields, in the same way as before,

$$\Phi(\vec{r}) = \frac{2}{3}\pi G \rho r^2 - \frac{A}{r} + B,$$

where $A$ and $B$ are again constants of integration. Since the potential should have a finite value $\Phi_0$ at the center of the sphere ($r = 0$), we must have $A = 0$; otherwise the potential would be infinite.
So we have

$$\Phi(0) = \Phi_0 = B,$$

and thus

$$\Phi(\vec{r}) = \frac{2}{3}\pi G \rho r^2 + \Phi_0.$$

### Determination of the constants

We first differentiate

$$\Phi_A(\vec{r}) = \frac{\tilde{\alpha}}{r}$$

for the outer solution and

$$\Phi_I(\vec{r}) = \frac{2}{3}\pi G \rho r^2 + \Phi_0$$

for the inner solution. At the edge of the sphere, the inner potential must transition smoothly into the outer one. This means that the first derivatives at $r = R$ must match:

$$\left.\frac{\mathrm{d}\Phi_A(\vec{r})}{\mathrm{d}r}\right|_{r=R} = \left.\frac{\mathrm{d}\Phi_I(\vec{r})}{\mathrm{d}r}\right|_{r=R}$$

$$-\frac{\tilde{\alpha}}{R^2} = \frac{4}{3}\pi G \rho R = \frac{GM}{R^2},$$

where we use the fact that the mass is the product of volume and density,

$$M = V \rho = \frac{4}{3}\pi R^3 \rho.$$

From this it follows that $\tilde{\alpha} = -GM$, so that the familiar outer solution

$$\Phi_A(\vec{r}) = -\frac{GM}{r}$$

results. In order to determine the constant of the inner solution, we use the fact that the potential must be continuous, i.e. the two solutions at $r = R$ must be identical:

$$\Phi_A(\vec{R}) = \Phi_I(\vec{R})$$

$$-\frac{GM}{R} = \frac{2}{3}\pi G \rho R^2 + \Phi_0,$$

and thus

$$\Phi_0 = -\frac{3GM}{2R}.$$

(Figure: the gravitational potential of a homogeneous sphere.)

This finally results in the inner solution

$$\Phi_I(\vec{r}) = \frac{2}{3}\pi G \rho r^2 + \Phi_0 = \frac{GM}{2R^3}r^2 - \frac{3GM}{2R} = \frac{GM}{2R}\left(\frac{r^2}{R^2} - 3\right),$$

where the first summand was rewritten using the volume.

The inner solution corresponds to a harmonic oscillator potential. This means that if one bored a hole through a homogeneous celestial body (a moon or small planet) and dropped an object into it, it would swing (fall) back and forth through the center. Assuming a frictionless movement, the position of the body as a function of time is

$$r(t) = R \cdot \cos\left(\sqrt{\frac{GM}{R^3}} \cdot t\right).$$

### Gravity in a hollow sphere

What the situation looks like inside a hollow sphere can now also be read off directly from our solution for $\rho = 0$. In general we had

$$\Phi(\vec{r}) = \frac{\tilde{\alpha}}{r} + \beta;$$

since we are now inside the sphere, $r$ cannot go out to infinity, so $\beta$, which previously had to vanish, need not do so here. However, the potential at the center must again assume a finite value, so this time $\tilde{\alpha} = 0$. What remains is the potential

$$\Phi(\vec{r}) = \beta,$$

i.e. a constant. The derivative of the potential with respect to the radius gives the acceleration, and the derivative of a constant is zero. So one is weightless inside a hollow sphere. This can be understood from the fact that opposing particles in the walls cancel each other's gravitational attraction.
If it were not a perfect sphere, this would not be the case, and one would experience small accelerations.
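As a small numerical illustration of the inner harmonic-oscillator solution (a sketch of my own, not part of the article): the angular frequency $\sqrt{GM/R^3}$ implies an oscillation period $T = 2\pi\sqrt{R^3/(GM)}$ for an object dropped through a tunnel in a homogeneous planet. With Earth-like values this comes out near 84 minutes.

from math import pi, sqrt

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

# Period of the oscillation r(t) = R*cos(sqrt(GM/R^3) * t)
T = 2 * pi * sqrt(R**3 / (G * M))
print(f"period: {T:.0f} s = {T/60:.1f} min")  # roughly 84 minutes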
2022-10-04 10:43:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 123, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31163880228996277, "perplexity": 4077.996896811772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00212.warc.gz"}
https://yetanothermathprogrammingconsultant.blogspot.com/2020/05/
## Saturday, May 30, 2020

### Pictures don't match

The Financial Times had an interesting picture of per capita excess deaths. In many respects, this is the best measure to compare how severely different countries have been hit by the Coronavirus. Here is the post [1]. However, when we go to the corresponding FT page [2], we actually see a different graph. I suspect the first picture is wrong w.r.t. Spain. Indeed, in a PS of the FT page [2] we see:

"This article has been amended to take into account a one-off revision to Spanish data on Thursday. This meant the UK now has the second-highest death rate from coronavirus after Spain rather than the highest rate as originally reported. This article has been modified to replace a chart linking excess deaths to lockdown dates with one linking excess deaths per million."

Looking at the complete picture, I am wondering about the difference between the first chart (total excess deaths per capita) and the last (percentage difference with the average). I expected these to be more similar. Could it be that Peru has a low death rate (younger population)? Again, the text has a note on this:

"Peru has seen a large rise in deaths this year partly because it has had to battle other diseases, in addition to coronavirus, with its overstretched health system."

#### References

1. UK has Europe's highest excess death toll and second highest in the world, https://www.youtube.com/post/UgxDGaBKDom1wrc91fp4AaABCQ
2. UK suffers second-highest death rate from coronavirus, https://www.ft.com/content/6b4c784e-c259-4ca4-9a82-648ffde71bf0

### Installing Pyomo on colab by google

Google has a nice online environment (https://colab.research.google.com/) to run Python code. So let's try Pyomo... The installation instructions are simple, and the first step was easy. However, the second step was unsuccessful. This is of course a little bit disappointing. I am not aware of a workaround.

## Monday, May 25, 2020

### The Miracle Sudoku

#### Introduction

In [1] another strange Sudoku variant is presented. The givens are just two cells. Obviously, more than just the standard Sudoku constraints are needed to have just one unique feasible solution. So, for this problem we have the following rules:

1. Standard Sudoku rules apply. So, for each row, column, and sub-block (a.k.a. nonet [3]), we have uniqueness constraints (each cell in a region has a unique number between 1 and 9).
2. Borrowing from chess: any two cells corresponding to a knight's move must have different values.
3. Similarly, any two cells that form a king's move must also be different.
4. Orthogonally adjacent cells must be non-consecutive.

#### Standard Sudoku setup

When we want to solve Sudokus, the easiest approach is to define the following binary decision variables [4]: $x_{i,j,k} = \begin{cases} 1 & \text{if cell (i,j) has value k} \\ 0 & \text{otherwise}\end{cases}$ Here $$k \in \{1,\dots,9\}$$. We have 27 areas we need to check for unique values: rows, columns and nonets. We can organize this as a set: $u_{a,i,j}\>\text{exists if and only if area a contains cell (i,j)}$ This is data. We also have the two given cells, i.e. fixed variables for these cells.
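To make the set $u$ concrete, here is a small Python sketch of my own (not code from the post) that enumerates the same 27 areas and checks that each contains nine cells; the nonet index mirrors the formula used later in the GAMS implementation.

# Enumerate the 27 Sudoku areas (9 rows, 9 columns, 9 nonets) as sets of
# cells, i.e. the data behind u(a,i,j); a cross-check, not from the post.
areas = {}
for i in range(1, 10):
    for j in range(1, 10):
        s = 3 * ((i - 1) // 3) + (j - 1) // 3 + 1  # nonet index of cell (i,j)
        areas.setdefault(f"r{i}", set()).add((i, j))
        areas.setdefault(f"c{j}", set()).add((i, j))
        areas.setdefault(f"s{s}", set()).add((i, j))

assert len(areas) == 27                                  # 27 areas in total
assert all(len(cells) == 9 for cells in areas.values())  # 9 cells each
print("ok:", sorted(areas))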
The resulting MIP model can look like [4]:

Mixed-Integer Programming Model for standard Sudoku \begin{align}\min\>& 0 && && \text{Dummy objective}\\ & \sum_k \color{darkred}x_{i,j,k} = 1 &&\forall i,j && \text{One k in each cell}\\ & \sum_{i,j|\color{darkblue}u_{a,i,j}} \color{darkred}x_{i,j,k} = 1 && \forall a,k && \text{Unique values in each area}\\ & \color{darkred}x_{i,j,k} = 1 && \text{where } k=\color{darkblue}{\mathit{Given}}_{i,j} &&\text{Fix given values}\\ &\color{darkred}x_{i,j,k} \in \{0,1\} \end{align}

During post-processing, we can calculate the completed grid using the optimal values of $$x$$: $v_{i,j} = \sum_k k \cdot x^*_{i,j,k}$ or we can include them as "accounting rows" in the model. Accounting rows are constraints whose function is to calculate some quantities for reporting purposes.

The implementation in GAMS uses a set u(a,i,j) that is populated as:

set
  a 'areas' /r1*r9,c1*c9,s1*s9/
  i(a) 'rows' /r1*r9/
  j(a) 'columns' /c1*c9/
  s(a) 'squares' /s1*s9/
  u(a,i,j) 'areas with unique values'
;

* populate u(a,i,j): u(a,i,j)=YES if area a contains cell (i,j)
* see https://yetanothermathprogrammingconsultant.blogspot.com/2016/10/mip-modeling-from-sudoku-to-kenken.html
u(i,i,j) = yes;
u(j,i,j) = yes;
u(s,i,j)$(ord(s)=3*ceil(ord(i)/3)+ceil(ord(j)/3)-3) = yes;

The model equations look like:

binary variable x(i,j,k);

* v0(i,j) are the givens
x.fx(i,j,k)$(v0(i,j)=ord(k)) = 1;

variable z 'dummy objective';
integer variable v(i,j);
v.lo(i,j) = 1;
v.up(i,j) = 9;

equation
   dummy 'objective'
   unique(a,k) 'all-different'
   values(i,j) 'one value per cell'
   eqv(i,j) 'calculate values'
;

dummy.. z =e= 0;
unique(a,k).. sum(u(a,i,j), x(i,j,k)) =e= 1;
values(i,j).. sum(k, x(i,j,k)) =e= 1;
eqv(i,j).. v(i,j) =e= sum(k, ord(k)*x(i,j,k));

model sudoku /all/;
solve sudoku minimizing z using mip;

We can test this approach with the above data. We will actually find a feasible solution, but of course it will not be unique. I believe there are likely millions of different solutions (or more). Counting them is not that easy. I tried to give it a shot with a solution pool approach: after setting a limit of 1,000,000 solutions, the solver stopped after reaching this limit. So, there are more than 1 million solutions for this (partial) problem.

#### Knight's jumps

To model how knight's moves cover cells, we can have a look at [5]. To place knights we set up a set $$\mathit{jump}_{i,j,i',j'}$$ indicating if we can jump from cell $$(i,j)$$ to cell $$(i',j')$$. If we can jump from $$(i,j) \rightarrow (i',j')$$ we can also jump from $$(i',j') \rightarrow (i,j)$$. We don't want to check both cases. To prevent this double check, we only need to look forward. So for each $$(i,j)$$ we need to consider just four cases:

|         | $$j$$       | $$j+1$$         | $$j+2$$         |
|---------|-------------|-----------------|-----------------|
| $$i-2$$ |             | $$x_{i-2,j+1}$$ |                 |
| $$i-1$$ |             |                 | $$x_{i-1,j+2}$$ |
| $$i$$   | $$x_{i,j}$$ |                 |                 |
| $$i+1$$ |             |                 | $$x_{i+1,j+2}$$ |
| $$i+2$$ |             | $$x_{i+2,j+1}$$ |                 |

Note that near the border we may have fewer than four cases. In GAMS we can populate the set $$\mathit{jump}$$ in a straightforward manner:

alias (i,ii),(j,jj);
set jump(i,j,ii,jj);
jump(i,j,i-2,j+1) = yes;
jump(i,j,i-1,j+2) = yes;
jump(i,j,i+1,j+2) = yes;
jump(i,j,i+2,j+1) = yes;

There are 224 elements in the set $$\mathit{jump}$$. Now comes the more complicated part: how to make sure that the values of cell $$(i,j)$$ and $$(i',j')$$ are not the same. There are basically two approaches:

- Compare the values $$v_{i,j}$$ and $$v_{i',j'}$$.
The constraint becomes $|v_{i,j}-v_{i',j'}| \ge 1 \>\> \forall i,j,i',j'|\mathit{jump}(i,j,i',j')$ This can be linearized using $v_{i,j}-v_{i',j'} \ge 1 \textbf{ or } v_{i',j'}-v_{i,j} \ge 1$ or \begin{align}& v_{i,j}-v_{i',j'} \ge 1 - M\cdot \delta_{i,j,i',j'} \\ & v_{i',j'}-v_{i,j} \ge 1- M (1-\delta_{i,j,i',j'}) \\ &\delta_{i,j,i',j'} \in \{0,1\}\end{align} Here $$M$$ is a large enough constant ($$M=10$$ should suffice).

- Directly work with $$x_{i,j,k}$$ and $$x_{i',j',k}$$. The constraint should be $x_{i,j,k} x_{i',j',k} = 0 \>\> \forall k, \forall i,j,i',j'|\mathit{jump}(i,j,i',j')$ This non-linear constraint can be linearized as $x_{i,j,k}+x_{i',j',k}\le 1$

This clearly is the easier route. In GAMS this can look like:

equation Different(i,j,ii,jj,k) 'cells forming a knights move should be different';
Different(jump(i,j,ii,jj),k)..
   x(i,j,k) + x(ii,jj,k) =l= 1;

#### King's moves

Here we want to implement a similar restriction as for the knight's move. As the constraints are the same, we just have to augment the set $$\mathit{jump}$$ a bit.

alias (i,ii),(j,jj);
set jump(i,j,ii,jj) 'knights or kings move';

* knight
jump(i,j,i-2,j+1) = yes;
jump(i,j,i-1,j+2) = yes;
jump(i,j,i+1,j+2) = yes;
jump(i,j,i+2,j+1) = yes;

* king
jump(i,j,i+1,j+1) = yes;
jump(i,j,i-1,j+1) = yes;

After this, the set $$\mathit{jump}$$ has grown from 224 to 352 elements.

#### Orthogonal neighbors can't be consecutive

First we need to build a set that models "orthogonal neighbors". Similar to our set $$\mathit{jump}$$, we want to prevent comparing pairs twice. So given a cell $$(i,j)$$, we only need to consider two neighbors:

|         | $$j$$         | $$j+1$$       |
|---------|---------------|---------------|
| $$i-1$$ | $$x_{i-1,j}$$ |               |
| $$i$$   | $$x_{i,j}$$   | $$x_{i,j+1}$$ |
| $$i+1$$ |               |               |

This is coded in GAMS as:

set nb(i,j,ii,jj) 'orthogonal neighbors';
nb(i,j,i-1,j) = yes;
nb(i,j,i,j+1) = yes;

This set has 144 elements. The constraints are: \begin{align}&x_{i,j,k} + x_{i',j',k+1} \le 1\\&x_{i,j,k} + x_{i',j',k-1} \le 1\end{align}\>\> \forall i,j,i',j'|nb(i,j,i',j'), \forall k

The GAMS representation is:

equations
   NonConsecutive1(i,j,ii,jj,k) 'orthogonal neighbors constraint'
   NonConsecutive2(i,j,ii,jj,k) 'orthogonal neighbors constraint'
;

NonConsecutive1(nb(i,j,ii,jj),k+1)..
   x(i,j,k) + x(ii,jj,k+1) =l= 1;

NonConsecutive2(nb(i,j,ii,jj),k-1)..
   x(i,j,k) + x(ii,jj,k-1) =l= 1;

Detail: the last index of the equation definition has a $$k+1$$ or $$k-1$$. This tells GAMS not to generate all possible constraints for all $$k$$ but rather one less. This is not strictly needed: in GAMS, addressing outside the domain results in a zero. But this trick will prevent singleton constraints $$x_{i,j,k} + 0 \le 1$$ from being generated.

#### Results

The complete model has 811 variables (808 of them binary or integer). The number of constraints is 5,878. The results look like:

----  111 VARIABLE v.L
     c1  c2  c3  c4  c5  c6  c7  c8  c9
r1    4   8   3   7   2   6   1   5   9
r2    7   2   6   1   5   9   4   8   3
r3    1   5   9   4   8   3   7   2   6
r4    8   3   7   2   6   1   5   9   4
r5    2   6   1   5   9   4   8   3   7
r6    5   9   4   8   3   7   2   6   1
r7    3   7   2   6   1   5   9   4   8
r8    6   1   5   9   4   8   3   7   2
r9    9   4   8   3   7   2   6   1   5

The problem solves in the presolve phase, i.e. 0 iterations and 0 nodes.

#### Uniqueness of the solution

To prove the uniqueness of the solution, we add a cut that forbids the current solution $$x^*$$: $\sum_{i,j,k} x^*_{i,j,k} x_{i,j,k} \le 9^2-1$ The resulting model is infeasible. This means that the solution is indeed unique.

#### Conclusion

This Sudoku variant combines standard Sudoku rules with additional constraints, some of them borrowed from chess. This makes the modeling a bit trickier.
In the development of the model, I have placed much of the logic in sets. This has the advantage that the constraints are fairly simple. In general, sets are easier to debug than constraints: we can display and debug sets before we have a working model. Both the mathematical model and the GAMS representation are interesting. The mathematical formulation shows we can model this without extra (discrete) variables. The GAMS implementation demonstrates how to assemble sets that simplify the constraints. Sudoku models typically solve extremely fast: they are solved by the presolver. This model is no exception.

### Deep learning in practice

Or maybe, as someone suggested, an expression of modern Yemenite poetry. Presumably the result of some automatic translation tool.

## Thursday, May 21, 2020

### A small time arbitrage model

#### Introduction

In [1] the following question was posed: we have monthly prices for a product, and I have predictions for the next 12 months. The idea is to exploit this and buy when prices are low and sell when they are high. We have limits on how much we can buy and sell, and there are also inventory capacity constraints. What plan is the most profitable?

This is an interesting little problem. We will use R to experiment with this.

#### Data

The data set is as follows:

# data
price = c(12, 11, 12, 13, 16, 17, 18, 17, 18, 16, 17, 13)
capacity = 25
max_units_buy = 4
max_units_sell = 8

#### Model

The main vehicle we will use is the well-known inventory balance equation: $\mathit{inv}_t = \mathit{inv}_{t-1} + \mathit{buy}_t - \mathit{sell}_t$ I assume that the initial inventory is $\mathit{inv}_0 = 0$. The optimization model for this problem can look like:

Model 1 \begin{align}\max&\sum_t \color{darkblue}{\mathit{price}}_t \cdot(\color{darkred}{\mathit{sell}}_t-\color{darkred}{\mathit{buy}}_t)\\ &\color{darkred}{\mathit{inv}}_t = \color{darkred}{\mathit{inv}}_{t-1} + \color{darkred}{\mathit{buy}}_t - \color{darkred}{\mathit{sell}}_t\\ &\color{darkred}{\mathit{inv}}_0 = 0\\ &\color{darkred}{\mathit{inv}}_t \in \{0,1,\dots,\color{darkblue}{\mathit{capacity}}\}\\ &\color{darkred}{\mathit{buy}}_t \in \{0,1,\dots,\color{darkblue}{\mathit{maxbuy}}\}\\ &\color{darkred}{\mathit{sell}}_t \in \{0,1,\dots,\color{darkblue}{\mathit{maxsell}}\}\\ \end{align}

I used integer decision variables here: the assumption is we cannot deal with fractional values. Obviously, we can easily relax this and work with continuous variables.

#### Implementation

To try this out, I am using CVXR [2] with the Glpk solver [3]. CVXR uses a matrix-oriented notation. It is interesting to see how this can be applied to our inventory balance equation. The main trick is that we use a Lag operator implemented as a matrix-vector multiplication: $L \cdot \mathit{inv}$

(Figure: the Lag operator.)

We know that multiplication by an identity matrix gives us the same vector: $v = I \cdot v$ If we shift the diagonal down by one, we get what we want. Let's do a little experiment:

> n <- 5
> v <- 1:n
> v
[1] 1 2 3 4 5
> # matrix-vector multiplication with identity matrix
> diag(n) %*% v
     [,1]
[1,]    1
[2,]    2
[3,]    3
[4,]    4
[5,]    5
> # lag operator
> L = cbind(rbind(0,diag(n-1)),0)
> L
     [,1] [,2] [,3] [,4] [,5]
[1,]    0    0    0    0    0
[2,]    1    0    0    0    0
[3,]    0    1    0    0    0
[4,]    0    0    1    0    0
[5,]    0    0    0    1    0
> # matrix-vector multiplication with Lag operator
> L %*% v
     [,1]
[1,]    0
[2,]    1
[3,]    2
[4,]    3
[5,]    4

We see that $$L$$ is indeed the identity matrix but shifted one position down. When we multiply $$Lv$$ we see that the result vector is also shifted one down.
The newly inserted element at the first position is zero, which is exactly what we want. Note: if we want a non-zero initial inventory, we need to add a vector $$\mathit{initinv}$$ with the first element being the initial inventory and the other elements all zero. Then we can write the inventory balance equation in matrix notation as $\mathit{inv} = \mathit{initinv} + L \cdot \mathit{inv} + \mathit{buy} - \mathit{sell}$ In the R code below, we assumed this was not needed.

With this, we are ready to solve our problem (a few statements lost in extraction, such as the buy variable and the Problem(Maximize(...)) call, have been restored from context):

> library(CVXR)
>
> # data
> price = c(12, 11, 12, 13, 16, 17, 18, 17, 18, 16, 17, 13)
> capacity = 25
> max_units_buy = 4
> max_units_sell = 8
>
> # number of time periods
> NT <- length(price)
>
> # Decision variables
> inv = Variable(NT,integer=T)
> buy = Variable(NT,integer=T)
> sell = Variable(NT,integer=T)
>
> # Lag operator
> L = cbind(rbind(0,diag(NT-1)),0)
>
> # optimization model
> problem <- Problem(Maximize(sum(price*(sell-buy))),
+                    list(inv == L %*% inv + buy - sell,
+                         inv >= 0, inv <= capacity,
+                         buy >= 0, buy <= max_units_buy,
+                         sell >= 0, sell <= max_units_sell))
> result <- solve(problem,verbose=T)
GLPK Simplex Optimizer, v4.47
84 rows, 36 columns, 119 non-zeros
*     0: obj =  0.000000000e+000  infeas = 0.000e+000 (12)
*    35: obj = -1.040000000e+002  infeas = 0.000e+000 (0)
OPTIMAL SOLUTION FOUND
GLPK Integer Optimizer, v4.47
84 rows, 36 columns, 119 non-zeros
36 integer variables, none of which are binary
Integer optimization begins...
+    35: >>>>> -1.040000000e+002 >= -1.040000000e+002   0.0% (1; 0)
+    35: mip = -1.040000000e+002 >=     tree is empty   0.0% (0; 1)
INTEGER OPTIMAL SOLUTION FOUND
> cat("status:",result$status)
status: optimal
> cat("objective:",result$value)
objective: 104
> # print results
> data.frame(price,
+            buy=result$getValue(buy),
+            sell=result$getValue(sell),
+            inv=result$getValue(inv))
   price buy sell inv
1     12   4    0   4
2     11   4    0   8
3     12   4    0  12
4     13   4    0  16
5     16   4    0  20
6     17   0    8  12
7     18   0    8   4
8     17   4    0   8
9     18   0    8   0
10    16   4    0   4
11    17   0    4   0
12    13   0    0   0

We see indeed we are buying when things are cheap and selling when prices are high. This gives us a net profit of 104. The inventory capacity is never binding here.

#### A complication

In a subsequent post, a non-trivial wrinkle was added to the problem [4].

price = c(12, 11, 12, 13, 16, 17, 18, 17, 18, 16, 17, 13)
capacity = 25
max_units_buy_30 = 4   # when the inventory level is lower than 30%, it is possible to buy 0 to 4 units
max_units_buy_65 = 3   # when the inventory level is between 30% and 65%, it is possible to buy 0 to 3 units
max_units_buy_100 = 2  # when the inventory level is between 65% and 100%, it is possible to buy 0 to 2 units
max_units_sell_30 = 4  # when the inventory level is lower than 30%, it is possible to sell 0 to 4 units
max_units_sell_70 = 6  # when the inventory level is between 30% and 70%, it is possible to sell 0 to 6 units
max_units_sell_100 = 8 # when the inventory level is between 70% and 100%, it is possible to sell 0 to 8 units

This requires a little bit of work.

#### Time

The first thing to consider is how time is handled in the model. In Model 1, we just used a time index $$t$$ and did not really say much about it. Here, we need to be a bit more precise. The variables buy and sell are what we call flow variables: buying and selling take place during time period $$t$$. Inventory, however, is measured at a specific point in time; in our case, this is at the end of period $$t$$. This is called a stock variable. Looking at the picture, we should put limits on $$\mathit{buy}_t$$ and $$\mathit{sell}_t$$ based on the inventory level at the end of the previous period: $$\mathit{inv}_{t-1}$$.
#### Segmenting inventory levels

From the question, we see that we have different intervals:

- For buying, we have: 0%-30%, 30%-65% and 65%-100%
- For selling, we have: 0%-30%, 30%-70% and 70%-100%

I prefer to deal with one set of intervals, so we combine these into:

- 0%-30%, 30%-65%, 65%-70% and 70%-100%

The idea is to introduce a binary variable $$\delta_{k,t}$$ indicating in which segment $$k$$ we are. Obviously, we can only be in one segment, so: \begin{align} &\sum_k \delta_{k,t} = 1 && \forall t\\ & \delta_{k,t} \in \{0,1\}\end{align} So depending on which $$\delta_{k,t}$$ is turned on, we have different lower and upper bounds on the inventory levels. This means we can write: $\sum_k \mathit{invlb}_k \cdot \delta_{k,t}\le \mathit{inv}_{t-1} \le \sum_k \mathit{invub}_k \cdot \delta_{k,t}$ Notice that we put the bounds on the inventory level at the end of the previous period.

#### Limiting buying and selling

Here we need to link the values of $$\delta_{k,t}$$ to bounds on buying and selling. This is very similar to what we did with respect to inventory: \begin{align}\mathit{buy}_t \le \sum_k \mathit{buyub}_k \cdot \delta_{k,t} && \forall t \\\mathit{sell}_t \le \sum_k \mathit{sellub}_k \cdot \delta_{k,t} && \forall t\end{align}

#### Model

We now have all the pieces to write down the complete model.

Model 2 \begin{align}\max&\sum_t \color{darkblue}{\mathit{price}}_t \cdot(\color{darkred}{\mathit{sell}}_t-\color{darkred}{\mathit{buy}}_t)\\ &\color{darkred}{\mathit{inv}}_t = \color{darkred}{\mathit{inv}}_{t-1} + \color{darkred}{\mathit{buy}}_t - \color{darkred}{\mathit{sell}}_t&& \forall t\\ &\color{darkred}{\mathit{inv}}_0 = 0\\ &\color{darkred}{\mathit{inv}}_{t-1} \ge \sum_k \color{darkblue}{\mathit{invlb}}_k \cdot \color{darkred}\delta_{k,t} && \forall t \\ &\color{darkred}{\mathit{inv}}_{t-1} \le \sum_k \color{darkblue}{\mathit{invub}}_k \cdot \color{darkred}\delta_{k,t} && \forall t \\ &\color{darkred}{\mathit{buy}}_t \le \sum_k \color{darkblue}{\mathit{buyub}}_k \cdot \color{darkred}\delta_{k,t} && \forall t \\ &\color{darkred}{\mathit{sell}}_t \le \sum_k \color{darkblue}{\mathit{sellub}}_k \cdot \color{darkred}\delta_{k,t} && \forall t \\ &\sum_k \color{darkred}\delta_{k,t} = 1 && \forall t\\ &\color{darkred}{\mathit{inv}}_t \in \{0,1,\dots,\color{darkblue}{\mathit{capacity}}\}\\ &\color{darkred}{\mathit{buy}}_t \in \{0,1,\dots\}\\ &\color{darkred}{\mathit{sell}}_t \in \{0,1,\dots\}\\ &\color{darkred}\delta_{k,t} \in \{0,1\} \end{align}

#### Implementation

The R implementation can look like:

> library(CVXR)
>
> # data
> price = c(12, 11, 12, 13, 16, 17, 18, 17, 18, 16, 17, 13)
> capacity = 25
> max_units_buy = 4
> max_units_sell = 8
>
> # capacity segments
> s <- c(0,0.3,0.65,0.7,1)
>
> # corresponding lower and upper bounds
> invlb <- s[1:(length(s)-1)] * capacity
> invlb
[1]  0.00  7.50 16.25 17.50
> invub <- s[2:length(s)] * capacity
> invub
[1]  7.50 16.25 17.50 25.00
>
> buyub <- c(4,3,2,2)
> sellub <- c(4,6,6,8)
>
> # number of time periods
> NT <- length(price)
> NT
[1] 12
>
> # number of capacity segments
> NS <- length(s)-1
> NS
[1] 4
>
> # Decision variables
> inv = Variable(NT,integer=T)
> buy = Variable(NT,integer=T)
> sell = Variable(NT,integer=T)
> delta = Variable(NS,NT,boolean=T)
>
> # Lag operator
> L = cbind(rbind(0,diag(NT-1)),0)
>
> # optimization model
> problem <- Problem(Maximize(sum(price*(sell-buy))),
+                    list(inv == L %*% inv + buy - sell,
+                         sum_entries(delta,axis=2)==1,
+                         L %*% inv >= t(delta) %*% invlb,
+                         L %*% inv <= t(delta) %*% invub,
+                         buy <= t(delta) %*% buyub,
+                         sell <= t(delta) %*% sellub,
+                         inv >= 0, inv <= capacity,
+                         buy >= 0, sell >= 0))
> result <- solve(problem,verbose=T)
GLPK Simplex Optimizer, v4.47
120 rows, 84 columns, 369 non-zeros
      0: obj =  0.000000000e+000  infeas = 1.200e+001 (24)
*    23: obj =  0.000000000e+000  infeas = 0.000e+000 (24)
*    85: obj = -9.875986758e+001  infeas = 0.000e+000 (2)
OPTIMAL SOLUTION FOUND
GLPK Integer Optimizer, v4.47
120 rows, 84 columns, 369 non-zeros
84 integer variables, 48 of which are binary
Integer optimization begins...
+    85: mip =     not found yet >=              -inf        (1; 0)
+   123: >>>>> -8.800000000e+001 >= -9.100000000e+001   3.4% (17; 0)
+   126: >>>>> -9.000000000e+001 >= -9.100000000e+001   1.1% (9; 11)
+   142: mip = -9.000000000e+001 >=     tree is empty   0.0% (0; 35)
INTEGER OPTIMAL SOLUTION FOUND
> cat("status:",result$status)
status: optimal
> cat("objective:",result$value)
objective: 90
> # print results
> data.frame(price,
+            buy=result$getValue(buy),
+            sell=result$getValue(sell),
+            inv=result$getValue(inv),
+            maxbuy=t(result$getValue(delta)) %*% buyub,
+            maxsell=t(result$getValue(delta)) %*% sellub)
   price buy sell inv maxbuy maxsell
1     12   3    0   3      4       4
2     11   4    0   7      4       4
3     12   4    0  11      4       4
4     13   3    0  14      3       6
5     16   3    0  17      3       6
6     17   1    0  18      2       6
7     18   0    8  10      2       8
8     17   0    6   4      3       6
9     18   0    4   0      4       4
10    16   4    0   4      4       4
11    17   0    4   0      4       4
12    13   0    0   0      4       4
> # display the optimal values for delta
> print(result$getValue(delta))
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
[1,]    1    1    1    0    0    0    0    0    1     1     1     1
[2,]    0    0    0    1    1    0    0    1    0     0     0     0
[3,]    0    0    0    0    0    1    0    0    0     0     0     0
[4,]    0    0    0    0    0    0    1    0    0     0     0     0

The results are interesting. Compared to the first model, the profit has decreased from 104 to 90. One question that comes to mind is: why buy only 3 units in the first period? Well, if we buy 4 and 4 units in periods 1 and 2, we can no longer buy 4 units in period 3. These kinds of decisions are very difficult to make by hand: it is just too complex.

#### Conclusion

This is a nice little example, where some formal modeling really helps. It also shows both the strengths and weaknesses of a CVX-like modeling tool: it can make matrix- and vector-based models very easy to express. But when we stray away from this a bit and want to implement modeling concepts like lags, things become somewhat more complicated.

#### References

2. CVXR, Convex Optimization in R, https://cvxr.rbind.io/
3. Package Rglpk, https://cran.r-project.org/web/packages/Rglpk/Rglpk.pdf

## Friday, May 8, 2020

### A scheduling problem

#### Problem Statement

The problem we are trying to solve here is a simple scheduling model [1]. But as we shall see, it has some interesting performance issues. There are $$N$$ tasks (jobs) and $$M$$ rooms. We need to assign jobs to rooms where they are executed. We assume we execute one job at a time in each room (but jobs in different rooms can execute concurrently). Jobs require some resources (for example a water tap, or gas piping). Rooms provide certain resources. We need to make sure tasks are assigned to rooms that contain the resources that are needed. Finally, all jobs have a due date.

#### Data

Let's generate some data.

----  33 SET use  resource usage
resource1  resource2  resource3  resource4  resource5

----  33 SET avail  resource availability
resource1  resource2  resource3  resource4  resource5
room1  YES  YES  YES  YES
room2  YES  YES
room3  YES  YES
room4  YES  YES  YES  YES
room5  YES  YES  YES  YES

----  33 PARAMETER length  job length

----  33 PARAMETER due  job due dates

We have 30 tasks, 5 rooms, and 5 resources. Note that some tasks don't need special resources (e.g. task1). They can execute in any room.
Some jobs require resources that allow only one room. For instance, task9 needs resources 2 and 4. Only room1 provides this combination. In the model, we actually don't need to know about resource usage. The only thing we need to know is whether job $$i$$ can be assigned to room $$j$$. So I calculated a set Allowed:

----     37 SET allowed  task is allowed to be executed in room

           room1  room2  room3  room4  room5
task1       YES    YES    YES    YES    YES
task4       YES    YES    YES    YES    YES
task6       YES    YES    YES    YES    YES
task8       YES    YES    YES    YES    YES
task10      YES    YES    YES    YES    YES
task18      YES    YES    YES    YES    YES
task19      YES    YES    YES    YES    YES
task22      YES    YES    YES    YES    YES
task27      YES    YES    YES    YES    YES
task29      YES    YES    YES    YES    YES
task30      YES    YES    YES    YES    YES

#### Model 1

My first approach is to use no-overlap constraints for jobs that are assigned to the same room.

Mixed Integer Programming Model 1 \begin{align} \min\>&\color{darkred}{\mathit{Makespan}}\\ &\sum_{j|\color{darkblue}{\mathit{Allowed}}(i,j)} \color{darkred}{\mathit{Assign}}_{i,j} = 1 && \forall i \\ &\color{darkred}{\mathit{Finish}}_{i} = \color{darkred}{\mathit{Start}}_{i} + \color{darkblue}{\mathit{Length}}_{i} && \forall i \\ &\color{darkred}{\mathit{Start}}_{i} \ge \color{darkred}{\mathit{Finish}}_{i'} - \color{darkblue}M \cdot\color{darkred}\delta_{i,i',j} - \color{darkblue}M (1-\color{darkred}{\mathit{Assign}}_{i,j}) - \color{darkblue}M (1-\color{darkred}{\mathit{Assign}}_{i',j}) && \forall i \lt i',j | \color{darkblue}{\mathit{Allowed}}(i,j) \>\mathbf{and}\>\color{darkblue}{\mathit{Allowed}}(i',j)\\ &\color{darkred}{\mathit{Start}}_{i'} \ge \color{darkred}{\mathit{Finish}}_{i} - \color{darkblue}M (1-\color{darkred}\delta_{i,i',j}) - \color{darkblue}M (1-\color{darkred}{\mathit{Assign}}_{i,j}) - \color{darkblue}M (1-\color{darkred}{\mathit{Assign}}_{i',j}) &&\forall i \lt i',j | \color{darkblue}{\mathit{Allowed}}(i,j) \>\mathbf{and}\>\color{darkblue}{\mathit{Allowed}}(i',j) \\ &\color{darkred}{\mathit{Finish}}_i \le \color{darkblue}{\mathit{DueDate}}_{i} && \forall i\\ &\color{darkred}{\mathit{Makespan}} \ge \color{darkred}{\mathit{Finish}}_{i} && \forall i \\ & \color{darkred}{\mathit{Assign}}_{i,j} \in \{0,1\} \\ & \color{darkred}{\mathit{Start}}_{i},\color{darkred}{\mathit{Finish}}_{i} \ge 0\\ & \color{darkred}\delta_{i,i',j} \in \{0,1\} \end{align}

The no-overlap constraints say: if jobs $$i$$ and $$i'$$ execute in the same room $$j$$, then either $$i$$ has to execute before $$i'$$ or $$i'$$ has to execute before $$i$$. As usual, the big-M values are used to make constraints non-binding when they are not to be obeyed. For this problem, I simply used $$M=100$$. This model does not perform very well at all. After an hour, I still saw a gap of 27%. In addition, the number of constraints is large (given the small data set): 2,352.

We can improve on this formulation by observing that we don't need all variables $$\delta_{i,i',j}$$. Instead we can use $$\delta_{i,i'}$$. This improves the performance a bit, but it is still not very good. This version is called "Improved Model 1" in the results table further down.

#### Model 2

Let's try a different model. First, we make sure all jobs are ordered by the due date. This means that we can find the finish time of job $$i$$ placed in room $$j$$ by summing the processing times of all previous jobs assigned to room $$j$$: $\sum_{i'|i'\le i\>\mathbf{and}\>\mathit{Allowed}(i',j)} {\mathit{Length}}_{i'} \cdot {\mathit{Assign}}_{i',j}$ This means: we execute jobs assigned to a room back-to-back (no holes).
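To see what this back-to-back rule does, here is a tiny illustrative sketch (Python, added here; the data and names below are invented and not from the post): for a fixed assignment, with jobs visited in due-date order, the finish time of a job is just the running total of the lengths in its room.

```python
# Illustrative only: finish times under the back-to-back rule,
# for a fixed (hypothetical) assignment of jobs to rooms.
lengths = {"task1": 3, "task2": 5, "task3": 2}            # processing times
room_of = {"task1": "room1", "task2": "room1", "task3": "room2"}

finish = {}
clock = {}                                                # running total per room
for job in ["task1", "task2", "task3"]:                   # due-date order
    room = room_of[job]
    clock[room] = clock.get(room, 0) + lengths[job]       # append job to its room
    finish[job] = clock[room]                             # back-to-back finish time

print(finish)  # {'task1': 3, 'task2': 8, 'task3': 2}
```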
Using this approach we can write:

Mixed Integer Programming Model 2 \begin{align} \min\>&\color{darkred}{\mathit{Makespan}}\\ &\color{darkred}{\mathit{Finish}}_{i} \ge \sum_{i'|i'\le i\> \mathbf{and}\>\color{darkblue}{\mathit{Allowed}}(i',j)} \color{darkblue}{\mathit{Length}}_{i'} \cdot \color{darkred}{\mathit{Assign}}_{i',j} - \color{darkblue}M (1-\color{darkred}{\mathit{Assign}}_{i,j})&& \forall i,j|\color{darkblue}{\mathit{Allowed}}(i,j)\\ &\color{darkred}{\mathit{Finish}}_i \le \color{darkblue}{\mathit{DueDate}}_{i} && \forall i\\ &\sum_{j|\color{darkblue}{\mathit{Allowed}}(i,j)} \color{darkred}{\mathit{Assign}}_{i,j} = 1 && \forall i \\ &\color{darkred}{\mathit{Makespan}} \ge \color{darkred}{\mathit{Finish}}_{i} && \forall i \\ & \color{darkred}{\mathit{Assign}}_{i,j} \in \{0,1\} \\ & \color{darkred}{\mathit{Finish}}_{i} \ge 0\\ \end{align}

Again we rely heavily on the set Allowed. Note that in the finish constraint we repeat large parts of the summation in subsequent constraints. For large models, we may want to look into this (e.g. by adding extra variables and constraints to reduce the number of nonzero elements). In this case, I just left the constraint as specified in the mathematical model.

This model 2 turns out to be much faster:

|                | Model 1    | Improved Model 1 | Model 2 |
|----------------|------------|------------------|---------|
| Rows           | 2,352      | 2,352            | 286     |
| Columns        | 1,283      | 585              | 137     |
| Binary Columns | 1,222      | 524              | 106     |
| Time           | >3,600     | >3,600           | 24      |
| Status         | Time Limit | Time Limit       | Optimal |
| Objective      | 19.5142    | 19.4359          | 19.4322 |
| Gap            | 27%        | 22%              | 0%      |

The difference between models 1 and 2 is rather dramatic.

#### Solution

The solution values for the variables $$\mathit{Finish}_i$$ are not necessarily as small as possible. The objective does not push all job completion times down, only the ones involved in the total makespan (i.e. on the critical path). When reporting, it makes sense to just take the optimal assignments from the model and then execute jobs as early as possible. This is what I did here. Jobs on a single line are ordered by the due date. The recalculated solution is:

----    129 PARAMETER results  start  length  finish  duedate

The highlighted completion time corresponds to the makespan.

#### Conclusion

This model is a bit more interesting than I initially thought. The standard no-overlap constraints for a continuous-time model do not perform very well. In addition, they are a bit complicated due to several big-M terms (the constraints should only hold in certain specific cases: two jobs running in the same room). By using the assumption that jobs in a single room are executed in order of the due date (not a bad assumption), we can create a much simpler MIP model that also performs much, much better.

When we have an infeasible solution (i.e. we cannot meet all due dates), we may want to deliver a good schedule that minimizes the damage. This is easy if we just add a slack variable to each due date constraint, and add the slack with a cost (penalty) to the objective; a sketch of this variant is given after the references. This essentially minimizes the sum of the due-date violations. There may also be a reason to look at minimizing the number of tardy jobs. Note that because we fixed the order of jobs within a single room, we may not get the best possible schedule if we want to minimize the number of tardy jobs.

#### References

1. Allocating and scheduling tasks into rooms with conditions - optimization algorithm, https://stackoverflow.com/questions/61656492/allocating-and-scheduling-tasks-into-rooms-with-conditions-optimization-algori
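As a concrete sketch of the soft due-date variant mentioned in the conclusion (my notation; the slack variables $$\mathit{slack}_i \ge 0$$ and the penalty weight $$w$$ are additions not spelled out in the post): $\min\;\mathit{Makespan} + w \sum_i \mathit{slack}_i \quad \text{with} \quad \mathit{Finish}_i \le \mathit{DueDate}_i + \mathit{slack}_i \;\; \forall i$ so that an otherwise infeasible instance still returns a schedule, one that minimizes the total due-date violation.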
2021-08-05 00:47:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 9, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8861355185508728, "perplexity": 1816.8723304686803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155268.80/warc/CC-MAIN-20210805000836-20210805030836-00098.warc.gz"}
https://www.beatthegmat.com/what-is-the-value-of-product-abc-t304181.html
# What is the value of product abc?

What is the value of product abc?

(1) 2^a * 3^b * 5^c = 1728
(2) a, b, and c are nonnegative integers

OA C

Source: Veritas Prep

### GMAT/MBA Expert

BTGmoderatorDC wrote:
What is the value of product abc?
(1) 2^a * 3^b * 5^c = 1728
(2) a, b, and c are nonnegative integers
OA C
Source: Veritas Prep

We have to get the value of abc. Let's take each statement one by one.

(1) 2^a * 3^b * 5^c = 1728. Multiple solutions are possible. Note that we do not know that a, b and c are integers. They can be real numbers. For example, at a = b = 0, we have 5^c = 1728 => the value of c is unique (of course it will be a real number); thus abc = 0. In the same way, at a = b = 1, we have 2*3*5^c = 1728 => 5^c = 288 => the value of c is unique; thus abc is some positive value. No unique value of abc. Insufficient.

(2) a, b, and c are nonnegative integers. Certainly insufficient, since the relationship among a, b and c is not known.

(1) and (2) together: we have 2^a * 3^b * 5^c = 1728 = 2^6 * 3^3 * 5^0. Since a, b and c are nonnegative integers, we must have a = 6, b = 3 and c = 0; thus abc = 6*3*0 = 0. Sufficient.

Hope this helps!
-Jay

### GMAT/MBA Expert

BTGmoderatorDC wrote:
What is the value of product abc?
(1) 2^a * 3^b * 5^c = 1728
(2) a, b, and c are nonnegative integers
Source: Veritas Prep

$? = abc$

$\left( 1 \right)\,\,\,{2^a} \cdot {3^b} \cdot {5^c} = 1728\,\,\,\,\,\,\left\{ \begin{gathered} \,{\text{Take}}\,\,\left( {a,b,c} \right) = \left( {6,3,0} \right)\,\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,? = 0\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left[ {1728 = {2^6} \cdot {3^3}} \right] \hfill \\ \,{\text{Take}}\,\,\left( {a,b,c} \right) = \left( {{x_{\text{p}}},1,1} \right)\,\,\,\,\,\,\,\mathop \Rightarrow \limits^{\left( * \right)} \,\,\,\,\,\,\,? = {x_{\text{p}}} > 0\,\,\,\,\,\,\,\,\,\, \hfill \\ \end{gathered} \right.$

$\left( * \right)\,\,\,{2^x} = \frac{{1728}}{{15}}\,\,\,\,\, \Rightarrow \,\,\,\,x = {x_p} > 0\,\,\,{\text{unique}}\,\,\,\,\,\,\left( {{\text{see}}\,\,{\text{image}}\,\,{\text{attached}}} \right)$

$\left( 2 \right)\,\,a,b,c\,\,\, \geqslant 0\,\,\,\,{\text{ints}}\,\,\,\,\left\{ \begin{gathered} \,{\text{Take}}\,\,\left( {a,b,c} \right) = \left( {0,0,0} \right)\,\,\,\,\, \Rightarrow \,\,\,\,\,? = 0 \hfill \\ \,{\text{Take}}\,\,\left( {a,b,c} \right) = \left( {1,1,1} \right)\,\,\,\,\, \Rightarrow \,\,\,\,\,? = 1 \hfill \\ \end{gathered} \right.$

$\left( {1 + 2} \right)\,\,\,\left\{ \begin{gathered} {2^a} \cdot {3^b} \cdot {5^c} = 1728 = {2^6} \cdot {3^3} \cdot {5^0} \hfill \\ \,a,b,c\,\,\, \geqslant 0\,\,\,\,{\text{ints}} \hfill \\ \end{gathered} \right.\,\,\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\left( {a,b,c} \right) = \left( {6,3,0} \right)\,\,\,\,\,\,\, \Rightarrow \,\,\,\,? = 0\,\,\,\,\, \Rightarrow \,\,\,\,\,{\text{SUFF}}.\,\,\,\,\,\,\,$

This solution follows the notations and rationale taught in the GMATH method.

Regards,
Fabio.
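As a quick sanity check (added here; not part of the original thread), a brute-force enumeration over all feasible nonnegative integer exponents confirms that (a,b,c) = (6,3,0) is the only solution, so abc = 0:

```python
# Enumerate nonnegative integers with 2^a * 3^b * 5^c = 1728.
# Bounds: 2^11 > 1728, 3^7 > 1728, 5^5 > 1728, so small ranges suffice.
solutions = [(a, b, c)
             for a in range(11) for b in range(7) for c in range(5)
             if 2**a * 3**b * 5**c == 1728]
print(solutions)                              # [(6, 3, 0)]
print([a * b * c for a, b, c in solutions])   # [0]
```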
2018-09-23 22:31:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34993597865104675, "perplexity": 12627.437215859189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159820.67/warc/CC-MAIN-20180923212605-20180923233005-00288.warc.gz"}
http://www.koreascience.or.kr/article/JAKO198303041889054.page
# Investigation of starch assimilation characteristics during the cultivation of Sporobolomyces holsaticus

• 박완수 (Food Research Institute, Agriculture and Fishery Development Corporation) ;
• 구영조 (Food Research Institute, Agriculture and Fishery Development Corporation) ;
• 신동화 (Food Research Institute, Agriculture and Fishery Development Corporation) ;
• 민병용 (Food Research Institute, Agriculture and Fishery Development Corporation)
• Published : 1983.01.01

#### Abstract

Direct conversion of starchy materials to single cell protein by Sporobolomyces holsaticus FRI Y-5 was investigated. The effect of yeast extract concentration on cell growth showed that the strain could utilize more of the starch in a medium containing 2.5 g/l of yeast extract. In jar fermentor culture, the specific growth rate and cell yield of Sp. holsaticus on soluble starch were calculated to be $0.14\;hr^{-1}$ and 0.425, respectively, and its maximum cell concentration was 13.4 g/l. After 80 hr of incubation, 45.96% of the starch had been consumed and the relative blue value had decreased by 45.1%. Reducing sugars in the starch medium appeared to increase from 4.06 g/l to 6.08 g/l and then to decrease. During fermentor culture, the pH of the medium remained almost unchanged, within the range $pH\;7.0{\pm}0.5$. The optimal temperature and pH for Sp. holsaticus amylase activity were $40^{\circ}C$ and pH 7.5, respectively. The effect of tapioca starch concentration on cell growth showed that the optimal concentration of tapioca starch for Sp. holsaticus was lower than that of soluble starch. FRI Y-5 cells settled much more slowly than Sp. holsaticus IFO 1032 cells, and the viscosity vs. cell concentration relationship was found to be linear.
2020-01-28 17:38:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3383074402809143, "perplexity": 14993.807278670558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251779833.86/warc/CC-MAIN-20200128153713-20200128183713-00349.warc.gz"}
http://openstudy.com/updates/4ef0ee10e4b082f22c0b502c
## anonymous 4 years ago

Find the volume generated by revolving about the x-axis the area bounded by xy = 9, the x-axis, and the lines x = 3 and x = 9

1. amistre64 [drawing]
2. amistre64 something similar to this, right?
3. anonymous Yes
4. amistre64 then we can integrate, or sum up the areas of circles, generated by the function from 3 to 9, I believe
5. anonymous May I please see a fully worked solution?
6. anonymous @amistre64 Alright
7. amistre64 $\sum_{x=3}^{9}\pi[f(x)]^2\ \Delta x$ $\pi \int_{3}^{9}(\frac{9}{x})^2dx$
8. amistre64 [drawing]
9. amistre64 if we take any arbitrary circle created by spinning this about the x axis, we get an area of $\pi y^2 = \pi (9/x)^2$; the volume generated is then found by adding up all the circles made from x=3 to x=9
10. TuringTest are you still stuck? can you not integrate $\pi\int_{3}^{9}\frac{9^2}{x^2}dx$? or are you having another problem?
11. anonymous @amistre64 Alright
12. anonymous is it 18pi?
13. amistre64 http://www.wolframalpha.com/input/?i=integrate+pi*%289%2Fx%29%5E2+from+3+to+9 18pi looks good to me
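For the record (a worked step added for completeness, not part of the original thread), the integral evaluates as $\pi \int_{3}^{9}\frac{81}{x^{2}}\,dx=\pi\left[-\frac{81}{x}\right]_{3}^{9}=\pi\left(-9-(-27)\right)=18\pi,$ which confirms the answer reached in the thread.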
2016-05-24 17:42:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7033942341804504, "perplexity": 3687.6901025818674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049272823.52/warc/CC-MAIN-20160524002112-00147-ip-10-185-217-139.ec2.internal.warc.gz"}
https://web2.0calc.com/questions/complex-numbers_76
+0

# complex numbers

Determine the complex number z satisfying the equation $2z - 3i \overline{z} = -7 + 2i$. Note that $\overline{z}$ denotes the conjugate of z.

Apr 22, 2021

#1 +420 +2

Let a and b be real numbers, such that a is the real part of z, and b is the imaginary part. Therefore,

$$2(a+bi)-3i(a-bi)=-7+2i\\ 2a+2bi-3ai-3b=-7+2i\\ (2a-3b)+(2b-3a)i=-7+2i$$

Notice the real part of the left-hand side is 2a-3b, and the real part of the right-hand side is -7, so $$2a-3b=-7$$. Similarly, the imaginary part of the left-hand side is 2b-3a, and the imaginary part of the right-hand side is 2, so that means that $$-3a+2b=2$$.

Now it's just a system of 2 linear equations, and can be easily solved with elimination:

$$6a-9b=-21\\-6a+4b=4\\-5b=-17\\b=\frac{17}{5}\\-3a+2\cdot\frac{17}{5}=2\\-3a+\frac{34}{5}=2\\ -3a=-\frac{24}{5}\\a=\frac{8}{5}$$

Therefore, the complex number z is equal to $$\boxed{\frac{8}{5}+\frac{17}{5}i}$$

textot Apr 22, 2021
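A quick numerical check (added here; not part of the original thread) confirms that this value satisfies the stated equation:

```python
# Verify that z = 8/5 + 17/5 i satisfies 2z - 3i*conj(z) = -7 + 2i.
z = complex(8/5, 17/5)
lhs = 2*z - 3j*z.conjugate()
print(lhs)  # (-7+2j)
```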
2021-07-30 14:24:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9793839454650879, "perplexity": 215.81130150532758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00410.warc.gz"}
http://rosa.unipr.it/fsda/CorAnaplot.html
# CorAnaplot

CorAnaplot draws the Correspondence Analysis (CA) graphs with confidence ellipses.

## Syntax

• CorAnaplot(out) example
• CorAnaplot(out,Name,Value) example

## Description

CorAnaplot(out) CorAnaplot with all the default options.

CorAnaplot(out, Name, Value) CorAnaplot with personalized symbols.

## Examples

expand all

### CorAnaplot with all the default options.

Prepare the data.

N=[51 64 32 29 17 59 66 70; 53 90 78 75 22 115 117 86; 71 111 50 40 11 79 88 177; 1 7 5 5 4 9 8 5; 7 11 4 3 2 2 17 18; 7 13 12 11 11 18 19 17; 21 37 14 26 9 14 34 61; 12 35 19 6 7 21 30 28; 10 7 7 3 1 8 12 8; 4 7 7 6 2 7 6 13; 8 22 7 10 5 10 27 17; 25 45 38 38 13 48 59 52; 18 27 20 19 9 13 29 53; 35 61 29 14 12 30 63 58; 2 4 3 1 4 nan nan nan; 2 8 2 5 2 nan nan nan; 1 5 4 6 3 nan nan nan; 3 3 1 3 4 nan nan nan];
% rowslab = cell containing row labels
rowslab={'money','future','unemployment','circumstances',...
'hard','economic','egoism','employment','finances',...
'war','housing','fear','health','work','comfort','disagreement',...
'world','to_live'};
% colslab = cell containing column labels
colslab={'unqualified','cep','bepc','high_school_diploma','university',...
'thirty','fifty','more_fifty'};
if verLessThan('matlab','8.2.0')==0
tableN=array2table(N,'VariableNames',colslab,'RowNames',rowslab);
% Extract just active rows
Nactive=tableN(1:14,1:5);
Nsupr=tableN(15:18,1:5);
Nsupc=tableN(1:14,6:8);
Sup=struct;
Sup.r=Nsupr;
Sup.c=Nsupc;
% Compute Correspondence analysis
else
Nactive=N(1:14,1:5);
Lr=rowslab(1:14);
Lc=colslab(1:5);
Sup=struct;
Sup.r=N(15:end,1:5);
Sup.Lr=rowslab(15:end);
Sup.c=N(1:14,6:8);
Sup.Lc=colslab(6:8);
end
% Compute correspondence analysis
out=CorAna(Nactive,'Sup',Sup,'plots',0,'dispresults',false);
% Show the correspondence analysis plot.
% Rows and columns are shown in principal coordinates
CorAnaplot(out)

### CorAnaplot with personalized symbols.

close all
% Prepare the data.
N=[51 64 32 29 17 59 66 70; 53 90 78 75 22 115 117 86; 71 111 50 40 11 79 88 177; 1 7 5 5 4 9 8 5; 7 11 4 3 2 2 17 18; 7 13 12 11 11 18 19 17; 21 37 14 26 9 14 34 61; 12 35 19 6 7 21 30 28; 10 7 7 3 1 8 12 8; 4 7 7 6 2 7 6 13; 8 22 7 10 5 10 27 17; 25 45 38 38 13 48 59 52; 18 27 20 19 9 13 29 53; 35 61 29 14 12 30 63 58; 2 4 3 1 4 nan nan nan; 2 8 2 5 2 nan nan nan; 1 5 4 6 3 nan nan nan; 3 3 1 3 4 nan nan nan];
% rowslab = cell containing row labels
rowslab={'money','future','unemployment','circumstances',...
'hard','economic','egoism','employment','finances',...
'war','housing','fear','health','work','comfort','disagreement',...
'world','to_live'};
% colslab = cell containing column labels
colslab={'unqualified','cep','bepc','high_school_diploma','university',...
'thirty','fifty','more_fifty'};
if verLessThan('matlab','8.2.0')==0
tableN=array2table(N,'VariableNames',colslab,'RowNames',rowslab);
% Extract just active rows
Nactive=tableN(1:14,1:5);
Nsupr=tableN(15:18,1:5);
Nsupc=tableN(1:14,6:8);
Sup=struct;
Sup.r=Nsupr;
Sup.c=Nsupc;
% Compute Correspondence analysis
else
Nactive=N(1:14,1:5);
Lr=rowslab(1:14);
Lc=colslab(1:5);
Sup=struct;
Sup.r=N(15:end,1:5);
Sup.Lr=rowslab(15:end);
Sup.c=N(1:14,6:8);
Sup.Lc=colslab(6:8);
end
% Six-pointed star (hexagram) for active rows
SymbolRows='h';
% Five-pointed star (pentagram) for supplementary rows
SymbolRowsSup='p';
% Color for active rows
ColorRows='b';
% Color for supplementary rows (dark blue)
ColorRowsSup=[6 13 123]/255;
% Blue fill color for active rows
MarkerFaceColorRows='b';
% Upward-pointing triangle for active columns
Symbolcols='^';
% Six-pointed star (hexagram) for supplementary columns
SymbolcolsSup='h';
% Color for active columns
ColorCols='r';
% Red fill color for active columns
MarkerFaceColorCols='r';
% Color for supplementary columns (dark red)
ColorColsSup=[128 0 0]/255;
plots=struct;
plots.SymbolRows=SymbolRows;
plots.SymbolRowsSup=SymbolRowsSup;
plots.ColorRows=ColorRows;
plots.ColorRowsSup=ColorRowsSup;
plots.MarkerFaceColorRows=MarkerFaceColorRows;
plots.SymbolCols=Symbolcols;
plots.SymbolColsSup=SymbolcolsSup;
plots.ColorCols=ColorCols;
plots.ColorColsSup=ColorColsSup;
plots.MarkerFaceColorCols=MarkerFaceColorCols;
% change the sign of the second dimension
changedimsign=[false true];
out=CorAna(Nactive,'Sup',Sup,'plots',0,'dispresults',false);
CorAnaplot(out,'plots',plots,'changedimsign',changedimsign)

## Related Examples

expand all

### CorAnaplot with personalized displacement.

close all
% Prepare the data.
N=[51 64 32 29 17 59 66 70; 53 90 78 75 22 115 117 86; 71 111 50 40 11 79 88 177; 1 7 5 5 4 9 8 5; 7 11 4 3 2 2 17 18; 7 13 12 11 11 18 19 17; 21 37 14 26 9 14 34 61; 12 35 19 6 7 21 30 28; 10 7 7 3 1 8 12 8; 4 7 7 6 2 7 6 13; 8 22 7 10 5 10 27 17; 25 45 38 38 13 48 59 52; 18 27 20 19 9 13 29 53; 35 61 29 14 12 30 63 58; 2 4 3 1 4 nan nan nan; 2 8 2 5 2 nan nan nan; 1 5 4 6 3 nan nan nan; 3 3 1 3 4 nan nan nan];
% rowslab = cell containing row labels
rowslab={'money','future','unemployment','circumstances',...
'hard','economic','egoism','employment','finances',...
'war','housing','fear','health','work','comfort','disagreement',...
'world','to_live'};
% colslab = cell containing column labels
colslab={'unqualified','cep','bepc','high_school_diploma','university',...
'thirty','fifty','more_fifty'};
if verLessThan('matlab','8.2.0')==0
tableN=array2table(N,'VariableNames',colslab,'RowNames',rowslab);
% Extract just active rows
Nactive=tableN(1:14,1:5);
Nsupr=tableN(15:18,1:5);
Nsupc=tableN(1:14,6:8);
Sup=struct;
Sup.r=Nsupr;
Sup.c=Nsupc;
% Compute Correspondence analysis
else
Nactive=N(1:14,1:5);
Lr=rowslab(1:14);
Lc=colslab(1:5);
Sup=struct;
Sup.r=N(15:end,1:5);
Sup.Lr=rowslab(15:end);
Sup.c=N(1:14,6:8);
Sup.Lc=colslab(6:8);
end
% No horizontal displacement for the labels.
addx=0;
out=CorAna(Nactive,'Sup',Sup,'plots',0,'dispresults',false);
CorAnaplot(out,'addx',addx)

### Correspondence analysis plot with selected ellipses.
N=[51 64 32 29 17 59 66 70; 53 90 78 75 22 115 117 86; 71 111 50 40 11 79 88 177; 1 7 5 5 4 9 8 5; 7 11 4 3 2 2 17 18; 7 13 12 11 11 18 19 17; 21 37 14 26 9 14 34 61; 12 35 19 6 7 21 30 28; 10 7 7 3 1 8 12 8; 4 7 7 6 2 7 6 13; 8 22 7 10 5 10 27 17; 25 45 38 38 13 48 59 52; 18 27 20 19 9 13 29 53; 35 61 29 14 12 30 63 58; 2 4 3 1 4 nan nan nan; 2 8 2 5 2 nan nan nan; 1 5 4 6 3 nan nan nan; 3 3 1 3 4 nan nan nan];
% rowslab = cell containing row labels
rowslab={'money','future','unemployment','circumstances',...
'hard','economic','egoism','employment','finances',...
'war','housing','fear','health','work','comfort','disagreement',...
'world','to_live'};
% colslab = cell containing column labels
colslab={'unqualified','cep','bepc','high_school_diploma','university',...
'thirty','fifty','more_fifty'};
if verLessThan('matlab','8.2.0')==0
tableN=array2table(N,'VariableNames',colslab,'RowNames',rowslab);
% Extract just active rows
Nactive=tableN(1:14,1:5);
Nsupr=tableN(15:18,1:5);
Nsupc=tableN(1:14,6:8);
Sup=struct;
Sup.r=Nsupr;
Sup.c=Nsupc;
else
Nactive=N(1:14,1:5);
Lr=rowslab(1:14);
Lc=colslab(1:5);
Sup=struct;
Sup.r=N(15:end,1:5);
Sup.Lr=rowslab(15:end);
Sup.c=N(1:14,6:8);
Sup.Lc=colslab(6:8);
end
% Superimpose confidence ellipses for rows 2 and 4 and for column 3
confellipse=struct;
confellipse.selRows=[2 4];
% Ellipse for column 3 using an integer
confellipse.selCols=3;
% Ellipse for column 3 using a Boolean vector
confellipse.selCols=[false false true false false];
% confellipse.selCols={'c3'};
% Use the 3 methods below in order to compute the confidence ellipses for
% the selected rows and columns of the input contingency table
confellipse.method={'multinomial' 'bootRows' 'bootCols'};
% Set number of simulations
confellipse.nsimul=500;
% Set confidence interval
confellipse.conflev=0.50;
out=CorAna(Nactive,'Sup',Sup,'plots',0,'dispresults',false);
% Draw correspondence analysis plot with requested confidence ellipses
CorAnaplot(out,'plots',1,'confellipse',confellipse)

### Correspondence analysis plot with ellipses only on column points.

N=[51 64 32 29 17 59 66 70; 53 90 78 75 22 115 117 86; 71 111 50 40 11 79 88 177; 1 7 5 5 4 9 8 5; 7 11 4 3 2 2 17 18; 7 13 12 11 11 18 19 17; 21 37 14 26 9 14 34 61; 12 35 19 6 7 21 30 28; 10 7 7 3 1 8 12 8; 4 7 7 6 2 7 6 13; 8 22 7 10 5 10 27 17; 25 45 38 38 13 48 59 52; 18 27 20 19 9 13 29 53; 35 61 29 14 12 30 63 58; 2 4 3 1 4 nan nan nan; 2 8 2 5 2 nan nan nan; 1 5 4 6 3 nan nan nan; 3 3 1 3 4 nan nan nan];
% rowslab = cell containing row labels
rowslab={'money','future','unemployment','circumstances',...
'hard','economic','egoism','employment','finances',...
'war','housing','fear','health','work','comfort','disagreement',...
'world','to_live'};
% colslab = cell containing column labels
colslab={'unqualified','cep','bepc','high_school_diploma','university',...
'thirty','fifty','more_fifty'};
if verLessThan('matlab','8.2.0')==0
tableN=array2table(N,'VariableNames',colslab,'RowNames',rowslab);
% Extract just active rows
Nactive=tableN(1:14,1:5);
Nsupr=tableN(15:18,1:5);
Nsupc=tableN(1:14,6:8);
Sup=struct;
Sup.r=Nsupr;
Sup.c=Nsupc;
else
Nactive=N(1:14,1:5);
Lr=rowslab(1:14);
Lc=colslab(1:5);
Sup=struct;
Sup.r=N(15:end,1:5);
Sup.Lr=rowslab(15:end);
Sup.c=N(1:14,6:8);
Sup.Lc=colslab(6:8);
end
% Superimpose confidence ellipses
confellipse=struct;
% No confidence ellipse for row points
confellipse.selRows=[];
% Ellipse for all the column points using a Boolean vector
confellipse.selCols=[true true true true true];
% Compare methods 'multinomial' and 'bootCols'
confellipse.method={'multinomial' 'bootCols'};
% Set number of simulations
confellipse.nsimul=10000;
% Set confidence interval
confellipse.conflev=0.90;
out=CorAna(Nactive,'Sup',Sup,'plots',0,'dispresults',false);
% Draw correspondence analysis plot with requested confidence ellipses
CorAnaplot(out,'plots',1,'confellipse',confellipse)

### Correspondence analysis plot using latent dimensions 3 and 4.

N=[51 64 32 29 17 59 66 70; 53 90 78 75 22 115 117 86; 71 111 50 40 11 79 88 177; 1 7 5 5 4 9 8 5; 7 11 4 3 2 2 17 18; 7 13 12 11 11 18 19 17; 21 37 14 26 9 14 34 61; 12 35 19 6 7 21 30 28; 10 7 7 3 1 8 12 8; 4 7 7 6 2 7 6 13; 8 22 7 10 5 10 27 17; 25 45 38 38 13 48 59 52; 18 27 20 19 9 13 29 53; 35 61 29 14 12 30 63 58; 2 4 3 1 4 nan nan nan; 2 8 2 5 2 nan nan nan; 1 5 4 6 3 nan nan nan; 3 3 1 3 4 nan nan nan];
% rowslab = cell containing row labels
rowslab={'money','future','unemployment','circumstances',...
'hard','economic','egoism','employment','finances',...
'war','housing','fear','health','work','comfort','disagreement',...
'world','to_live'};
% colslab = cell containing column labels
colslab={'unqualified','cep','bepc','high_school_diploma','university',...
'thirty','fifty','more_fifty'};
if verLessThan('matlab','8.2.0')==0
tableN=array2table(N,'VariableNames',colslab,'RowNames',rowslab);
% Extract just active rows
Nactive=tableN(1:14,1:5);
Nsupr=tableN(15:18,1:5);
Nsupc=tableN(1:14,6:8);
Sup=struct;
Sup.r=Nsupr;
Sup.c=Nsupc;
else
Nactive=N(1:14,1:5);
Lr=rowslab(1:14);
Lc=colslab(1:5);
Sup=struct;
Sup.r=N(15:end,1:5);
Sup.Lr=rowslab(15:end);
Sup.c=N(1:14,6:8);
Sup.Lc=colslab(6:8);
end
% Superimpose ellipses
confellipse=struct;
% Ellipse for the first 3 row points
confellipse.selRows=1:3;
% Ellipse for selected column points using a Boolean vector
confellipse.selCols=[false false true true false];
% Compare methods 'multinomial' and 'bootCols'
confellipse.method={'multinomial' 'bootCols'};
% Set number of simulations
confellipse.nsimul=10000;
% Set confidence interval
confellipse.conflev=0.90;
d1=3;
d2=4;
out=CorAna(Nactive,'Sup',Sup,'plots',0,'dispresults',false,'d1',d1,'d2',d2);
% Draw correspondence analysis plot with requested confidence ellipses
CorAnaplot(out,'plots',1,'confellipse',confellipse,'d1',d1,'d2',d2)

### Correspondence analysis of the smoke data.

In this example we compare the results obtained using option plots.alpha='colprincipal' (which implicitly implies alpha=0) with those that come out when imposing plots.alpha=0 directly.
load smoke
[N,~,~,labels]=crosstab(smoke{:,1},smoke{:,2});
[I,J]=size(N);
labels_rows=labels(1:I,1);
labels_columns=labels(1:J,2);
out=CorAna(N,'Lr',labels_rows,'Lc',labels_columns,'plots',0,'dispresults',false);
plots=struct;
plots.alpha='rowgab';
plots.alpha='colgab';
plots.alpha='rowgreen';
plots.alpha='colgreen';
confellipse=1;
plots.alpha='bothprincipal';
plots.alpha='rowprincipal';
plots.alpha='colprincipal';
h1=subplot(1,2,1);
CorAnaplot(out,'plots',plots,'confellipse',confellipse,'h',h1)
h2=subplot(1,2,2);
plots.alpha=0;
CorAnaplot(out,'plots',plots,'confellipse',confellipse,'h',h2);

### Correspondence analysis of the car dataset.

load car
out=CorAna(car,'plots',0);
plots=struct;
% Color of row labels proportional to the amount of inertia which is
% explained by the two latent dimensions (communalities)
plots.ColorMapLabelRows='CntrbDim2In';
CorAnaplot(out,'plots',plots)

## Input Arguments

### out — Structure containing the output of function CorAna. Structure.

Structure containing the following fields.

Value Description

Lr cell of length $I$ containing the labels of active rows (i.e. the rows which participated in the fit).

Lc cell of length $J$ containing the labels of active columns (i.e. the columns which participated in the fit).

n Grand total. out.n is equal to sum(sum(out.N)). This is the number of observations.

Dr Square matrix of size $I$ containing on the diagonal the row masses. This is matrix $D_r$. $D_r=diag(r)$

Dc Square matrix of size $J$ containing on the diagonal the column masses. This is matrix $D_c$. $D_c=diag(c)$

InertiaExplained matrix with 4 columns.
- First column contains the singular values (the sum of the squared singular values is the total inertia).
- Second column contains the eigenvalues (the sum of the eigenvalues is the total inertia).
- Third column contains the variance explained by each latent dimension.
- Fourth column contains the cumulative variance explained by each dimension.

LrSup cell containing the labels of the supplementary rows (i.e. the rows which did not participate in the fit).

LcSup cell containing the labels of supplementary columns (i.e. the columns which did not participate in the fit).

SupRowsN matrix of size length(LrSup)-by-c referred to supplementary rows. If there are no supplementary rows this field is not present.

SupColsN matrix of size r-by-length(LcSup) referred to supplementary columns. If there are no supplementary columns this field is not present.

RowsPriSup Principal coordinates of supplementary rows. If there are no supplementary rows this field is not present.

RowsStaSup Standard coordinates of supplementary rows. If there are no supplementary rows this field is not present.

RowsSymSup Symmetrical coordinates of supplementary rows. If there are no supplementary rows this field is not present.

ColsPriSup Principal coordinates of supplementary columns. If there are no supplementary columns this field is not present.

ColsStaSup Standard coordinates of supplementary columns. If there are no supplementary columns this field is not present.

ColsSymSup Symmetrical coordinates of supplementary columns. If there are no supplementary columns this field is not present.

OverviewRows $I$-times-(k*3+2) table containing an overview of row points. More precisely, if we suppose that $k=2$:
First column contains the row masses (vector $r$).
Second column contains the scores of the first dimension.
Third column contains the scores of the second dimension.
Fourth column contains the inertia of each point, where the inertia of a point is the squared distance $d_i^2$ of the point to the centroid.
Fifth column contains the relative contribution of each point to the explanation of the inertia of the first dimension. The sum of the elements of this column is equal to 1.
Sixth column contains the relative contribution of each point to the explanation of the inertia of the second dimension. The sum of the elements of this column is equal to 1.
Seventh column contains the relative contribution of the first dimension to the explanation of the inertia of the point.
Eighth column contains the relative contribution of the second dimension to the explanation of the inertia of the point.

OverviewCols $J$-times-(k*3+2) table containing an overview of column points. More precisely, if we suppose that $k=2$:
First column contains the column masses (vector $c$).
Second column contains the scores of the first dimension.
Third column contains the scores of the second dimension.
Fourth column contains the inertia of each point, where the inertia of a point is the squared distance $d_i^2$ of the point to the centroid.
Fifth column contains the relative contribution of each point to the explanation of the inertia of the first dimension. The sum of the elements of this column is equal to 1.
Sixth column contains the relative contribution of each point to the explanation of the inertia of the second dimension. The sum of the elements of this column is equal to 1.
Seventh column contains the relative contribution of the first dimension to the explanation of the inertia of the point.
Eighth column contains the relative contribution of the second dimension to the explanation of the inertia of the point.

Data Types: struct

### Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'plots',plots=struct; plots.colorcols='k' , 'addx',0.01 , 'addy',0.01 , 'changedimsign', [true false] , 'confellipse', 0 , 'xlimx',[-1 1] , 'ylimy',[0 1] , 'd1',2 , 'd2',3 ,'h',h1 where h1=subplot(2,1,1)

### plots — Customize plot appearance. scalar | structure.

If plots is not a structure, a plot which shows the principal coordinates of rows and columns is shown on the screen. If plots is a structure it may contain the following fields:

Value Description

alpha type of plot, scalar in the interval [0 1] or a string identifying the type of coordinates to use in the plot.
If $plots.alpha='rowprincipal'$ the row points are in principal coordinates and the column coordinates are standard coordinates. Distances between row points are (approximated) chi-squared distances (row-metric-preserving). The position of the row points is at the weighted average of the column points. Note that 'rowprincipal' can also be specified setting plots.alpha=1.
If $plots.alpha='colprincipal'$, the column coordinates are referred to as principal coordinates and the row coordinates as standard coordinates. Distances between column points are (approximated) chi-squared distances (column-metric-preserving). The position of the column points is at the weighted average of the row points. Note that 'colprincipal' can also be specified setting plots.alpha=0.
If $plots.alpha='symbiplot'$, the row and column coordinates are scaled similarly. The sum of weighted squared coordinates for each dimension is equal to the corresponding singular values. These coordinates are often called symmetrical coordinates. This representation is particularly useful if one is primarily interested in the relationships between categories of row and column variables rather than in the distances among rows or among columns. 'symbiplot' can also be specified setting plots.alpha=0.5;
If $plots.alpha='bothprincipal'$, both the rows and columns are depicted in principal coordinates. Such a plot is often referred to as a symmetrical plot or French symmetrical model. Note that such a symmetrical plot does not provide a feasible solution in the sense that it does not approximate matrix $D_r^{-0.5}(P-rc')D_c^{-0.5}$.
If $plots.alpha='bothstandard'$, both the rows and columns are depicted in standard coordinates. The standard coordinates are the principal coordinates divided by the corresponding singular values.
If $plots.alpha='rowgab'$, rows are in principal coordinates and columns are in standard coordinates multiplied by the mass. This biplot has been suggested by Gabriel and Odoroff (1990).
If $plots.alpha='colgab'$, columns are in principal coordinates and rows are in standard coordinates multiplied by the mass. This biplot has been suggested by Gabriel and Odoroff (1990).
If $plots.alpha='rowgreen'$, rows are in principal coordinates and columns are in standard coordinates multiplied by the square root of the mass. This biplot has been suggested by Greenacre and incorporates the contribution of points. In this display, points that contribute very little to the solution are close to the center of the biplot and are relatively unimportant to the interpretation. This biplot is often referred to as the contribution biplot because it visually shows the most contributing points (Greenacre 2006b).
If $plots.alpha='colgreen'$, columns are in principal coordinates and rows are in standard coordinates multiplied by the square root of the mass. This biplot has been suggested by Greenacre and incorporates the contribution of points. In this display, points that contribute very little to the solution are close to the center of the biplot and are relatively unimportant to the interpretation. This biplot is often referred to as the contribution biplot because it visually shows the most contributing points (Greenacre 2006b).
If $plots.alpha=scalar$ in the interval [0 1], row coordinates are given by $D_r^{-1/2} U \Gamma^\alpha$ and column coordinates are given by $D_c^{-1/2} V \Gamma^{1-\alpha}$. Note that for any choice of $\alpha$ the matrix product $D_r^{-1/2} U \Gamma^\alpha (D_c^{-1/2} V \Gamma^{1-\alpha})^T$ optimally approximates matrix $D_r^{-0.5}(P-rc')D_c^{-0.5}$, in the sense that the sum of squared differences between $D_r^{1/2} D_r^{-1/2} U \Gamma^\alpha (D_c^{-1/2} V \Gamma^{1-\alpha})^T D_c^{1/2}$ and $D_r^{-0.5}(P-rc')D_c^{-0.5}$ is as small as possible.

FontSize scalar which specifies the font size of row (column) labels. The default value is 10.

FontSizeSup scalar which specifies the font size of row (column) labels of supplementary points. The default value is 10.

MarkerSize scalar which specifies the marker size of symbols associated with rows or columns. The default value is 10.

SymbolRows character which specifies the symbol to use for row points. If this field is not present the default symbol is 'o'.

SymbolCols character which specifies the symbol to use for column points. If this field is not present the default symbol is '^'.
SymbolRowsSup character which specifies the symbol to use for supplementary row points. If this field is not present the default symbol is 'o'.

SymbolColsSup character which specifies the symbol to use for supplementary column points. If this field is not present the default symbol is '^'.

ColorRows character which specifies the color to use for row points, or an RGB triplet. If this field is not present the default color is 'b'.

ColorMapLabelRows character or cell which specifies whether the color of row labels must depend on a colormap. ColorMapLabelRows can be any of the VariableNames of out.OverviewRows. For example, if ColorMapLabelRows=out.OverviewRows.Properties.VariableNames{1} it is possible to obtain a colormap where row label colors are proportional to the masses. If plots.ColorMapLabelRows='CntrbPnt2In' it is possible to have a colormap where row labels are proportional to the contribution of the row points to the inertia of the two latent dimensions. If plots.ColorMapLabelRows='CntrbDim2In' it is possible to have a colormap of the row labels proportional to the contribution of the two latent dimensions to the inertia of each point (communalities). The default is plots.ColorMapLabelRows='', that is, row labels all have the same color.

ColorCols character which specifies the color to use for column points, or an RGB triplet. If this field is not present the default color is 'r'.

ColorMapLabelCols character or cell which specifies whether the color of column labels must depend on a colormap. ColorMapLabelCols can be any of the VariableNames of out.OverviewCols. For example, if ColorMapLabelCols=out.OverviewCols.Properties.VariableNames{1} it is possible to obtain a colormap where column label colors are proportional to the masses. If plots.ColorMapLabelCols='CntrbPnt2In' it is possible to have a colormap where column labels are proportional to the contribution of the column points to the inertia of the two latent dimensions. If plots.ColorMapLabelCols='CntrbDim2In' it is possible to have a colormap of the column labels proportional to the contribution of the two latent dimensions to the inertia of each point (communalities). The default is plots.ColorMapLabelCols='', that is, column labels all have the same color.

ColorRowsSup character which specifies the color to use for supplementary row points, or an RGB triplet. If this field is not present the default color is 'b'.

ColorColsSup character which specifies the color to use for supplementary column points, or an RGB triplet. If this field is not present the default color is 'r'.

MarkerFaceColorRows character which specifies the marker fill color to use for active row points, or an RGB triplet. If this field is not present the default color is 'auto'.

MarkerFaceColorCols character which specifies the marker fill color to use for active column points, or an RGB triplet. If this field is not present the default color is 'auto'.

MarkerFaceColorRowsSup character which specifies the marker fill color to use for supplementary row points, or an RGB triplet. If this field is not present the default color is 'auto'.

MarkerFaceColorColsSup character which specifies the marker fill color to use for supplementary column points, or an RGB triplet. If this field is not present the default color is 'auto'.

Example: 'plots',plots=struct; plots.colorcols='k'

Data Types: double

### addx — horizontal displacement for labels. scalar.

Amount of horizontal displacement which has been put on the labels in the plot. The default value of addx is 0.04.

Example: 'addx',0.01

Data Types: double

### addy — vertical displacement for labels. scalar.

Amount of vertical displacement which has been put on the labels in the plot. The default value of addy is 0.

Example: 'addy',0.01

Data Types: double

### changedimsign — change chosen dimension sign. boolean vector of length 2.

Sometimes for better interpretability it is necessary to change the sign of the coordinates for the chosen dimension. If changedimsign(1) is true the sign of the coordinates for the first chosen dimension is changed. If changedimsign(2) is true the sign of the coordinates for the second chosen dimension is changed. By default the dimensions are the first and the second; however, they can be changed using option plots.dim. The default value of changedimsign is [false false], that is, the signs are not changed.

Example: 'changedimsign', [true false]

Data Types: boolean

### confellipse — confidence ellipses around row and/or column points. scalar | struct.

If confellipse is 1, 90 per cent confidence ellipses are drawn around each row and column point based on the multinomial method. If confellipse is a struct it may contain the following fields.

confellipse.conflev = number in the interval (0 1) which defines the confidence level of each ellipse. If this field is not present 90 per cent confidence ellipses are shown.

confellipse.method = cell which specifies the method(s) to use to compute confidence ellipses. Possible values are:
{'multinomial'} = in this case the original contingency table with the active elements is taken as a reference. Then new data tables are drawn in the following way: $r\times c$ values are drawn from a multinomial distribution with theoretical frequencies equal to $n_{ij}/n$ where $n$ is the sample size.
{'bootRows'} = the values are bootstrapped row by row: given row i, $n_{i.}$ values are extracted with repetition and a frequency distribution is computed using classes $[0, n_{i1}]$, $[n_{i1}, n_{i1}+n_{i2}]$, $\ldots$, $[\sum_{j=1}^{J-1} n_{ij}, \sum_{j=1}^{J} n_{ij}]$.
{'bootCols'} = the values are bootstrapped column by column.
If confellipse.method for example is {'bootRows' 'bootCols'} two ellipses are drawn for each point. In this case it is possible to appreciate the stability of both methods. If this field is not present the {'multinomial'} method is used.

confellipse.nsimul = scalar which defines the number of contingency tables which have to be generated. The default value of confellipse.nsimul is 1000. Thus nsimul new contingency tables are projected as supplementary rows and/or supplementary columns.

confellipse.selRows = vector which specifies for which row points it is necessary to draw the ellipses. confellipse.selRows can be either a boolean vector of length I containing a true in correspondence of the row elements for which the ellipse has to be drawn, or a numeric vector which contains the indexes of the units which have to be drawn, or a cell array containing the names of the rows for which the ellipse has to be drawn. For example, if I=3 and the second row is called 'row2', in order to show just the confidence ellipse for this row it is possible to use confellipse.selRows=[false true false], or confellipse.selRows=2, or confellipse.selRows={'row2'}.

confellipse.selCols = vector which specifies for which column points it is necessary to draw the ellipses. confellipse.selCols can be either a boolean vector of length J containing a true in correspondence of the column elements for which the ellipse has to be drawn, or a numeric vector which contains the indexes of the columns which have to be drawn, or a cell array containing the names of the columns for which the ellipse has to be drawn. For example, if J=3 and the third column is called 'Col3', in order to show just the confidence ellipse for this element it is possible to use confellipse.selCols=[false false true], or confellipse.selCols=3, or confellipse.selCols={'Col3'}.

confellipse.AxesEllipse = boolean which specifies whether it is necessary to show the major axes of the ellipse. The default value of confellipse.AxesEllipse is true, that is, the axes are shown.

Example: 'confellipse', 0

Data Types: scalar or struct

### xlimx — Min and Max of the x axis. vector.

Vector with two elements controlling the minimum and maximum of the x axis.

Example: 'xlimx',[-1 1]

Data Types: double

### ylimy — Min and Max of the y axis. vector.

Vector with two elements controlling the minimum and maximum of the y axis.

Example: 'ylimy',[0 1]

Data Types: double

### d1 — Dimension to show on the horizontal axis. positive integer.

Positive integer in the range 1, 2, ..., K which indicates the dimension to show on the x axis. The default value of d1 is 1.

Example: 'd1',2

Data Types: single | double

### d2 — Dimension to show on the vertical axis. positive integer.

Positive integer in the range 1, 2, ..., K which indicates the dimension to show on the y axis. The default value of d2 is 2.

Example: 'd2',3

Data Types: single | double

### h — the axis handle of the figure where the CorAnaplot is sent. This can be used to host the CorAnaplot in a subplot of a complex figure formed by different panels (for example a panel with CorAnaplot from plots.alpha=0.2 and another with CorAnaplot from plots.alpha=0.5).

Example: 'h',h1 where h1=subplot(2,1,1)

Data Types: Axes object (supplied as a scalar)

## References

Benzecri, J.-P. (1992), "Correspondence Analysis Handbook", New York, Dekker.

Benzecri, J.-P. (1980), "L'analyse des donnees tome 2: l'analyse des correspondances", Paris, Bordas.

Greenacre, M.J. (1993), "Correspondence Analysis in Practice", London, Academic Press.

Gabriel, K.R. and Odoroff, C. (1990), Biplots in biomedical research, "Statistics in Medicine", Vol. 9, pp. 469-485.

Greenacre, M.J. (1993), Biplots in Correspondence Analysis, "Journal of Applied Statistics", Vol. 20, pp. 251-269.

Lorenzo-Seva, U., van de Velden, M. and Kiers, H.A.L. (2009), CAR: A MATLAB Package to Compute Correspondence Analysis with Rotations, "Journal of Statistical Software", Vol. 31.
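As a numerical aside (added here; not part of the FSDA documentation), the coordinate scalings described under plots.alpha can be reproduced with a few lines of Python/numpy from the SVD of the standardized residuals $D_r^{-1/2}(P-rc')D_c^{-1/2}$. This is an illustrative sketch only, using a made-up contingency table:

```python
import numpy as np

# Sketch of correspondence-analysis coordinates (not FSDA code).
N = np.array([[51., 64., 32.],
              [53., 90., 78.],
              [71., 111., 50.]])          # any small contingency table
P = N / N.sum()                           # correspondence matrix
r = P.sum(axis=1)                         # row masses
c = P.sum(axis=0)                         # column masses
Dr_isqrt = np.diag(1 / np.sqrt(r))
Dc_isqrt = np.diag(1 / np.sqrt(c))
S = Dr_isqrt @ (P - np.outer(r, c)) @ Dc_isqrt   # standardized residuals
U, gam, Vt = np.linalg.svd(S, full_matrices=False)

alpha = 1.0                                       # plots.alpha=1 -> 'rowprincipal'
rows = Dr_isqrt @ U @ np.diag(gam**alpha)         # row (principal) coordinates
cols = Dc_isqrt @ Vt.T @ np.diag(gam**(1-alpha))  # column (standard) coordinates
print(rows[:, :2])                                # first two latent dimensions,
print(cols[:, :2])                                # as plotted by CorAnaplot
```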
2022-10-06 13:12:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7371862530708313, "perplexity": 2208.334881379485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00219.warc.gz"}
http://math.stackexchange.com/questions/54425/the-signed-curvature-of-the-catenary
# The signed curvature of the catenary

Now I want to show that the signed curvature of the catenary, with parameterization $$(t,\cosh(t))$$, is $k(t)=\frac{1}{\cosh^2(t)}$

Now what I have done (and where I presumably went astray) is first normalize the tangent vector to $\alpha (t)=(t,\cosh(t))$, to get: $$\gamma (t)=\frac{\alpha '(t)}{|\alpha ' (t)|} = \left(\frac{1}{\cosh(t)},\tanh(t)\right).$$ And using the fact that the normal to $\gamma$ is $n(t)= \left(-\tanh(t),\frac{1}{\cosh(t)}\right)$ and that $$k(t) = \gamma '(t) \cdot n(t) ,$$ I get, by inserting $$\gamma ' (t) = \left(-\frac{\sinh(t)}{\cosh^2(t)},\frac{1}{\cosh^2(t)}\right),$$ $$k(t)=\frac{1}{\cosh(t)}.$$ Where did I go wrong?

- If $s$ is arclength and $T(s)$ is the unit tangent, then the curvature is $T'(s)\cdot n(s)$. Since you did not parametrize with respect to $s$, you have to take into account that (excusing notational abuse) $\gamma(t)=T(t)$, so $\frac{dT}{ds}=\frac{dT}{dt}\cdot\frac{dt}{ds}=\gamma'(t)\frac{1}{|\alpha'(t)|}$. - Ok, thanks, that answers the question. –  MathematicalPhysicist Jul 29 '11 at 9:28 How do we solve this problem otherwise? I tried to reparametrize by arc length, but it didn't work: hard computations... thanks –  user16385 Sep 20 '11 at 8:23 @Rok: You do not need to reparametrize with respect to arc length. The point is that $k=\frac{dT}{ds}$ if $s$ is arclength, so to find $k$ starting from a curve that is not parametrized with respect to arclength, you can use the chain rule. You know that $\frac{ds}{dt}=|\alpha'(t)|$, and $\frac{dT}{dt}=\frac{dT}{ds}\frac{ds}{dt}$. Is that clearer? –  Jonas Meyer Sep 21 '11 at 1:07
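Spelling out the accepted answer for this particular curve (a worked step added for completeness): here $|\alpha'(t)|=\cosh(t)$ and, as computed in the question, $\gamma'(t)\cdot n(t)=\frac{1}{\cosh(t)}$, so $$k(t)=\frac{dT}{ds}\cdot n(t)=\frac{\gamma'(t)\cdot n(t)}{|\alpha'(t)|}=\frac{1/\cosh(t)}{\cosh(t)}=\frac{1}{\cosh^2(t)},$$ i.e. the missing factor in the question's computation is exactly $\frac{1}{|\alpha'(t)|}=\frac{1}{\cosh(t)}$.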
2015-04-26 11:33:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9425333738327026, "perplexity": 298.4737505613984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654292.99/warc/CC-MAIN-20150417045734-00240-ip-10-235-10-82.ec2.internal.warc.gz"}
https://figshare.com/articles/Appendix_B_Obtaining_an_expression_for_average_fitness_under_density_dependent_population_growth_A_mathematical_appendix_showing_how_the_integrals_in_the_full_expression_of_Eq_19_for_density_dependent_population_growth_can_be_replaced_by_closed_forms_/3568983
## Appendix B. Obtaining an expression for average fitness under density dependent population growth

2016-08-10T18:00:05Z (GMT)

A mathematical appendix showing how the integrals in the full expression of Eq. 19 (for density dependent population growth) can be replaced by closed forms.
2019-02-21 23:52:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101058006286621, "perplexity": 1115.7452809297906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247511573.67/warc/CC-MAIN-20190221233437-20190222015437-00294.warc.gz"}
https://landtransportguru.net/bus12e/
# Go-Ahead Express Bus Service 12e Go-Ahead Express Bus Service 12e is a Limited Stop route variant of Service 12, plying between Pasir Ris and Kampong Bahru at selected hours daily. It passes through Tampines, Simei, Bedok South, Bugis and Chinatown, skipping the East Coast, Mountbatten and Kallang areas. The bus service only calls at selected bus stops along its parent route. Along with Service 147e, both new services were introduced on Sunday, 28 January 2018. 12e 77009 Pasir Ris Int B6 EW1 Pasir Ris Dr 3 78109 Downtown East Pasir Ris Dr 3 78081 Blk 467 Pasir Ris Dr 6 78071 Blk 446 Pasir Ris Dr 6 78069 Blk 442 Pasir Ris Dr 1 78031 Blk 191 NEW Pasir Ris St 12 78181 Bef Blk 187 Pasir Ris St 11 78191 Aft Blk 182 Pasir Ris St 11 98099 Blk 269A Pasir Ris Dr 1 98019 Opp Blk 149A Loyang Ave 76249 Blk 370 Tampines Ave 7 76239 Blk 390 Tampines Ave 7 76039 Tampines East Stn Exit C DT33 Tampines Ave 2 76109 Blk 302 Tampines Ave 2 96129 Melville Pk Simei Rd 96189 Blk 166 Simei Rd 96199 Opp Blk 224 Simei Rd 96209 Opp Simei Green Condo Simei Rd 85099 Tanah Merah Stn Exit A EW4CG New Upp Changi Rd 84101 Opp Blk 49 Bedok Sth Ave 3 84071 Blk 71 Bedok Sth Rd 84081 Opp Blk 40 Bedok Sth Rd 84141 Opp Lion Hme for Elders Bedok Sth Rd 84111 Temasek JC Bedok Sth Rd 84151 Casafina Bedok Sth Ave 1 93131 Opp Eastern Lagoon II Bedok Sth Ave 1 01541 Bugis Stn Exit D EW12DT14 Rochor Rd 01119 Aft Bugis Stn Exit C EW12DT14 Victoria St 01019 Bras Basah Cplx Victoria St 04159 Aft CHIJMES Victoria St 04149 Grand Pk City Hall Hill St 04229 High St Ctr Hill St 04239 Opp Clarke Quay Stn NE5 New Bridge Rd 05059 Hong Lim Pk NE5 New Bridge Rd 05049 Chinatown Stn Exit E NE4DT19 New Bridge Rd 05039 New Bridge Ctr New Bridge Rd 05019 Aft Duxton Plain Pk EW16NE3TE17 New Bridge Rd 10011 Bef Neil Rd New Bridge Rd 10499 Kampong Bahru Ter Spooner Rd 10499 Kampong Bahru Ter Spooner Rd 10017 Aft Hosp Dr EW16NE3TE17 Eu Tong Sen St 05013 Chinatown Stn Exit C NE4DT19 Eu Tong Sen St 05023 Opp Hong Lim Pk Eu Tong Sen St 04222 Clarke Quay Stn Exit E NE5 Eu Tong Sen St 04142 Armenian Ch Hill St 01012 Hotel Grand Pacific Victoria St 01112 Opp Bugis Stn Exit C EW12DT14 Victoria St 01559 Landmark Village Hotel Ophir Rd 01549 Opp Duo Residences Ophir Rd 93139 Eastern Lagoon II Bedok Sth Ave 1 84159 Opp Casafina/Temasek JC Bedok Sth Ave 1 84119 Opp Temasek JC Bedok Sth Rd 84149 Lion Hme for Elders Bedok Sth Rd 84089 Blk 40 Bedok Sth Rd 84079 Siglap Cc/Opp Blk 71 Bedok Sth Rd 84109 Blk 50 Bedok Sth Ave 3 85091 Tanah Merah Stn Exit B EW4CG New Upp Changi Rd 96201 Simei Green Condo Simei Rd 96191 Blk 224 Simei Rd 96181 Blk 151 Simei Rd 96121 Metta Welfare Assn Simei Rd 76101 Tampines East CC Tampines Ave 2 76031 Tampines East Stn Exit B DT33 Tampines Ave 2 76231 Opp Blk 390 Tampines Ave 7 76241 Blk 497D Tampines Ave 7 98011 Blk 149A Loyang Ave 98091 Blk 156 Pasir Ris Dr 1 78199 Opp White Sands Pri Sch Pasir Ris St 11 78189 Opp Blk 187 Pasir Ris St 11 78039 Blk 104 NEW Pasir Ris St 12 78061 Blk 104 Pasir Ris Dr 1 78079 Blk 429 Pasir Ris Dr 6 78089 Blk 405 Pasir Ris Dr 6 78101 Opp Downtown East Pasir Ris Dr 3 77009 Pasir Ris Int EW1 Pasir Ris Dr 3 Route Overview Route Pasir Ris Bus Interchange ↔ Kampong Bahru Bus Terminal Passes Through Pasir Ris Dr 1, Tampines Ave 7, Simei Rd, Bedok Sth Rd, Victoria St, New Bridge Rd / Eu Tong Sen St Route Length Towards Kampong Bahru: 26.0 km Towards Pasir Ris: 25.6 km Travelling Time 80 mins Operator Information BCM Route Package Loyang Bus Package Current Operator Go Ahead Singapore Pte Ltd (Go-Ahead Singapore) 
Current Depot: Loyang Bus Depot
Current Fleet:
– Single Deck Bus: Mercedes-Benz Citaro
– Double Deck Bus: MAN A95, Volvo B9TL
Operating Hours:
– Departure Times from Pasir Ris: Weekdays 06:00 – 18:00; Saturdays & Sundays / Public Holidays 06:30 – 18:30
– Departure Times from Kampong Bahru: Daily 12:30 – 22:30
Operating Frequency:
– From Pasir Ris: 30 – 60 mins
– From Kampong Bahru: 30 – 60 mins
Fare Information: Express distance fares

Bus Service 12e is a Limited-stop Express Bus Service which offers quicker journeys for commuters at Tampines, Simei, and Bedok South, linking them with the Central areas of Bugis. Operating during selected hours of the day, it calls only at selected bus stops served by its parent Bus Service 12, and charges Express Fares as a result. Between Bedok South and Bugis, the bus plies an express sector along the East Coast Parkway (ECP) in both directions. The route operates daily at 30-minute intervals.

Service 12e is the second new route to be introduced under the Bus Contracting Model (BCM) after Service 110, as well as the first limited-stop service to operate daily (since 188R and 963R only operate on Weekends and PHs). The use of the lowercase e as the service suffix has its origins in Fast-Forward services introduced by SBS Transit in 2005, which are limited-stop services operating only during the peak hours.

Along with several other express services, operating hours for Express 12e were revised from 9 February 2020. Trips from Pasir Ris Bus Interchange operate only till 6:30pm, while trips from Kampong Bahru Bus Terminal start operations only at 12:30pm. The service was degraded further from 4 January 2021, with the frequency adjusted to 30 minutes throughout operating hours daily, instead of 20 minutes during weekday peak hours. After a suspension from Sep 2021 to Jan 2022, the service resumed with shortened operating hours and frequencies of up to 1 hour.

###### Departure Timings:

• From Pasir Ris Int:
Weekdays – 06:00, 06:30, 07:00, 07:30, 08:00, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 13:00, 14:00, 15:00, 16:00, 17:00 & 18:00
Weekends & Public Holidays – 06:30, 07:30, 08:00, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00 & 18:30
• From Kampong Bahru Ter:
Daily – 12:30, 13:30, 14:30, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:30, 20:00, 20:30, 21:00, 21:30, 22:00 & 22:30

###### Route Variants:

• Service 12: Pasir Ris Bus Interchange ↔ Kampong Bahru Bus Terminal (Daily)

###### Operator History

• 2018 – Present: Go-Ahead Singapore Pte Ltd

The Bus Service Operating License (BSOL) for this route will be renewed in 2026 under the Loyang Bus Package.
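The advertised "30 – 60 mins" frequency can be recovered mechanically from the weekday departure list above; a small illustrative script (mine, not part of the original page):

```python
# Derive the min/max headway from the weekday departures out of Pasir Ris.
from datetime import datetime

deps = ["06:00", "06:30", "07:00", "07:30", "08:00", "08:30", "09:00",
        "09:30", "10:00", "10:30", "11:00", "11:30", "12:00", "13:00",
        "14:00", "15:00", "16:00", "17:00", "18:00"]

times = [datetime.strptime(t, "%H:%M") for t in deps]
gaps = [(b - a).seconds // 60 for a, b in zip(times, times[1:])]
print(min(gaps), max(gaps))   # 30 60
```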
Past Routings ###### Changes in Departure Times: Day Type / Departures From Pasir Ris From Kampong Bahru From 28 Jan 2018 Weekdays 06:00, 06:30, 06:50, 07:10, 07:30, 07:50, 08:10, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 18:50, 19:10, 19:30, 19:50, 20:10, 20:30, 21:00, 21:30, 22:00 (From New Bridge Rd Ter) 07:35, 07:55, 08:15, 08:35, 08:55, 09:15, 09:35, 10:05, 10:35, 11:05, 11:35, 12:05, 12:35, 13:05, 13:35, 14:05, 14:35, 15:05, 15:35, 16:05, 16:35, 17:05, 17:35, 17:55, 18:15, 18:35, 18:55, 19:15, 19:35, 20:05, 20:35, 21:05, 21:35, 22:05, 22:35, 23:05, 23:35 Weekends / Public Holidays 06:00, 06:30, 07:00, 07:30, 08:00, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:30, 20:00, 20:30, 21:00, 21:30, 22:00 (From New Bridge Rd Ter) 07:35, 08:05, 08:35, 09:05, 09:35, 10:05, 10:35, 11:05, 11:35, 12:05, 12:35, 13:05, 13:35, 14:05, 14:35, 15:05, 15:35, 16:05, 16:35, 17:05, 17:35, 18:05, 18:35, 19:05, 19:35, 20:05, 20:35, 21:05, 21:35, 22:05, 22:35, 23:05, 23:35 From 10 Mar 2018 – Route Extension to Kampong Bahru From 9 Feb 2020 Weekdays 06:00, 06:30, 06:50, 07:10, 07:30, 07:50, 08:10, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 17:50, 18:10, 18:30, 18:50, 19:10, 19:30, 20:00, 20:30, 21:00, 21:30, 22:00, 22:30, 23:00, 23:30 Weekends / Public Holidays 06:00, 06:30, 07:00, 07:30, 08:00, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:30, 20:00, 20:30, 21:00, 21:30, 22:00, 22:30, 23:00, 23:30 From 4 Jan 2021 Daily 06:00, 06:30, 07:00, 07:30, 08:00, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:30, 20:00, 20:30, 21:00, 21:30, 22:00, 22:30, 23:00, 23:30 From 16 Jan 2022 Weekdays 06:00, 06:30, 07:00, 07:30, 08:00, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 13:00, 14:00, 15:00, 16:00, 17:00, 18:00 12:30, 13:30, 14:30, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:30, 20:00, 20:30, 21:00, 21:30, 22:00, 22:30 Weekends / Public Holidays 06:30, 07:30, 08:00, 08:30, 09:00, 09:30, 10:00, 10:30, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30 12:30, 13:30, 14:30, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:30, 20:00, 20:30, 21:00, 21:30, 22:00, 22:30 Back to Express Services Back to Bus Services ### 13 thoughts on “Go-Ahead Express Bus Service 12e” • 6 May 2020 at 5:55 AM Service 12e doesn’t warrant full double deckers,Maybe 50/50 distribution of single and double decker will do just fine. • 19 March 2020 at 8:34 PM
2022-12-05 11:40:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994062185287476, "perplexity": 9247.623093617369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00378.warc.gz"}
http://www.distributome.org/js/sim/BinomialSimulation.html
#### Description

This applet simulates a random variable with a binomial distribution with trial parameter $$n$$ and success probability $$p$$. The value is recorded on each update. The parameters $$n$$ and $$p$$ can be varied with the input controls.
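For readers without the applet at hand, a minimal re-creation of what it does (my own sketch; the parameter names $$n$$ and $$p$$ follow the description above):

```python
# Repeatedly draw a Binomial(n, p) value and record it, as the applet does.
import numpy as np

rng = np.random.default_rng(seed=0)
n, p = 10, 0.5          # the two applet parameters
runs = 10_000           # number of "updates"

values = rng.binomial(n, p, size=runs)   # one recorded value per update

# Empirical distribution of the recorded values, for comparison with the
# theoretical pmf shown in the applet's distribution graph.
counts = np.bincount(values, minlength=n + 1) / runs
print(dict(enumerate(np.round(counts, 3))))
```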
2021-05-17 03:55:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7691736221313477, "perplexity": 481.92134281403645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00563.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-x-5-3-10x-6-8-2#240941
# How do you simplify x(5-3)-10x+6-8÷2?

Mar 17, 2016

Break the problem into two stages:
- simplify the dividend (everything before the division)
- divide the simplified dividend by 2

#### Explanation:

Simplifying the dividend. Given $x(5-3)-10x+6-8$, using the order of operations (e.g. PEDMAS):

$= x(2)-10x+6-8$
$= 2x-10x+6-8$
$= -8x+6-8$
$= -8x-2$

Divide the simplified dividend by $2$:

$(-8x-2)\div 2 = -(8x\div 2)-(2\div 2) = -4x-1$

Alternate Interpretation: I assumed that the division (using the $\div$ symbol) was to be applied to the entire preceding expression. Technically this should not be true. Applying strict PEDMAS to the entire expression would give:

$2x-10x+6-8\div 2$
$= 2x-10x+6-4$
$= -8x+6-4$
$= -8x+2$

Mar 17, 2016

$-8x+2$

#### Explanation:

$5x-3x-10x+6-8\div 2$
$= -8x+6-4$
$= -8x+2$

Aug 6, 2017

$-8x+2$

#### Explanation:

Count the number of terms first and then simplify each term separately. Combine any like terms in the last step:

$x(5-3)\ -10x\ +6\ -8\div 2 \quad\leftarrow$ there are 4 terms
$= x(2)\ -10x\ +6\ -4$
$= 2x\ -10x\ +6\ -4 \quad\leftarrow$ collect like terms
$= -8x+2$
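A quick machine check of both readings (my addition; sympy is not used on the original page):

```python
# Verify the two interpretations of x(5-3)-10x+6-8/2 discussed above.
import sympy as sp

x = sp.symbols('x')

# Reading 1: the division applies to the whole preceding expression.
whole = (x*(5 - 3) - 10*x + 6 - 8) / 2
print(sp.expand(whole))        # -4*x - 1

# Reading 2: strict order of operations, only 8 is divided by 2.
strict = x*(5 - 3) - 10*x + 6 - sp.Rational(8, 2)
print(sp.expand(strict))       # -8*x + 2
```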
2021-10-23 21:40:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 25, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9027078151702881, "perplexity": 4338.347472979744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00596.warc.gz"}
https://www.johnlees.me/posts/sum-from-i1-to-n-of-iki-for-helix-coil-zipper-model/
# Sum from i=1 to N of ik^i (for Helix-Coil zipper model)

Couldn't easily find a derivation of the sum $\sum_{i=1}^{N} i k^i$. The result is on wikipedia (http://en.wikipedia.org/wiki/List_of_mathematical_series) but the derivation is not. Here is a version I wrote out quickly:

[The handwritten derivation appears as an image in the original post.]
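Since the image does not survive text extraction, here is one standard route to the result (my reconstruction, not necessarily the author's handwritten argument): differentiate the finite geometric series with respect to $k$.

```latex
\begin{align*}
\sum_{i=0}^{N} k^i &= \frac{1-k^{N+1}}{1-k}, \qquad k \neq 1,\\[4pt]
\sum_{i=1}^{N} i\,k^{i-1}
  &= \frac{d}{dk}\left(\frac{1-k^{N+1}}{1-k}\right)
   = \frac{1-(N+1)k^{N}+N k^{N+1}}{(1-k)^2},\\[4pt]
\sum_{i=1}^{N} i\,k^{i}
  &= \frac{k\left(1-(N+1)k^{N}+N k^{N+1}\right)}{(1-k)^2}.
\end{align*}
```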
2022-11-26 16:13:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9536978006362915, "perplexity": 2851.29660587184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00345.warc.gz"}
http://cdsweb.cern.ch/collection/Preprints?ln=zh_CN&as=1
# Preprints 2018-10-17 15:45 Measurement of the charm-mixing parameter $y_{CP}$ A measurement of the charm-mixing parameter $y_{CP}$ using $D^0 \to K^+ K^-$, $D^0 \to \pi^+ \pi^-$, and $D^0 \to K^- \pi^+$ decays is reported. [...] CERN-EP-2018-270 ; LHCB-PAPER-2018-038. - 2018. Fulltext - Related data file(s) - Supplementary information 2018-10-17 04:03 Parton Shower and Matching Uncertainties in Top Quark Pair Production with Herwig 7 / Cormier, Kyle ; Plätzer, Simon ; Reuschle, Christian ; Richardson, Peter ; Webster, Stephen We evaluate the theoretical uncertainties in next-to-leading order plus parton shower predictions for top quark pair production and decay in hadronic collisions. [...] arXiv:1810.06493 ; CERN-TH-2018-219 ; HERWIG-2018-03 ; IPPP/18/91 ; LU-TP 18-33 ; MCNET-18-28 ; UWTHPH-2018-22. - 38 p. Fulltext 2018-10-17 04:03 Demonstration of MeV-Scale Physics in Liquid Argon Time Projection Chambers Using ArgoNeuT / ArgoNeuT Collaboration MeV-scale energy depositions by low-energy photons produced in neutrino-argon interactions have been identified and reconstructed in ArgoNeuT liquid argon time projection chamber (LArTPC) data. [...] FERMILAB-PUB-18-559-ND ; arXiv:1810.06502. - Fulltext 2018-10-17 04:03 $K^+ \to \pi^+ \nu \overline{\nu}$ - NA62 First Result / Velghe, Bob The CERN NA62 experiment uses a novel "kaon decay-in-flight" technique to observe $K^+ \to \pi^+ \nu \overline{\nu}$. [...] arXiv:1810.06424. - Fulltext 2018-10-17 04:03 First observation of the $\mathrm{t\bar{t}H}$ process at CMS / Skovpen, Kirill The top quark and the Higgs boson play a special role in the fundamental interactions of the standard model. [...] arXiv:1810.05715. - Fulltext 2018-10-17 04:03 HEP Software Foundation Community White Paper Working Group - Data and Software Preservation to Enable Reuse / Hildreth, M.D. (Notre Dame U.) ; Boehnlein, A. (Jefferson Lab) ; Cranmer, K. (New York U.) ; Dallmeier-Tiessen, S. (CERN) ; Gardner, R. (Chicago U.) ; Hacker, T. (Purdue U.) ; Heinrich, L. (New York U.) ; Jimenez, I. (UC, Santa Cruz) ; Kane, M. (Unlisted, DE) ; Katz, D.S. (NCSA, Urbana) et al. In this chapter of the High Energy Physics Software Foundation Community Whitepaper, we discuss the current state of infrastructure, best practices, and ongoing developments in the area of data and software preservation in high energy physics. [...] HSF-CWP-2017-06 ; arXiv:1810.01191 ; FERMILAB-FN-1060-CD. - Fulltext 2018-10-16 17:53 Tests of QCD using jets at CMS / Cerci, Salim (Cukurova U.) /CMS Collaboration Inclusive jets, dijets and multijets measurements can be utilized to test perturbative quantum chromodynamics (QCD) predictions and to measure the strong coupling constant $\alpha_{S}$ as well as to constrain parton distribution functions (PDFs). In this talk, the recent measurements using the $pp$ collisions data collected with the CMS detector at the LHC at different center-of-mass energies are presented.. CMS-CR-2018-065.- Geneva : CERN, 2018 - 5 p. Fulltext: PDF; In : 2nd Iran-Turkey joint conference on LHC physics, Tehran, Iran, 23 - 26 Oct 2017 2018-10-16 14:54 Feasibility study of a system for measuring the LHC dipole coil stresses with capacitive probes / Gilquin, J (CERN) CERN-AT-MA-Internal-Note-93-90 ; CERN-AT-93-90-MA ; CERN-AT-MA-93-90. - 1993. - 16 p. 
Full text 2018-10-16 05:02 Periastron Observations of TeV Gamma-Ray Emission from a Binary System with a 50-year Period / VERITAS Collaboration We report on observations of the pulsar / Be star binary system PSR J2032+4127 / MT91 213 in the energy range between 100 GeV and 20 TeV with the VERITAS and MAGIC imaging atmospheric Cherenkov telescope arrays. [...] arXiv:1810.05271. - 12 p. Fulltext 2018-10-16 05:02 Constraining D-foam via the 21-cm Line / Ellis, John ; Mavromatos, Nick E. ; Nanopoulos, Dimitri V. We have suggested earlier that D-particles, which are stringy space-time defects predicted in brane-inspired models of the Universe, might constitute a component of dark matter, and that they might contribute to the masses of singlet fermions that could provide another component. [...] arXiv:1810.05393 ; CERN-TH-2018-217 ; KCL-PH-TH/2018-45 ; ACT-03-18 ; MI-TH-18-183. - 15 p. Fulltext
2018-10-18 08:54:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8482825756072998, "perplexity": 13119.176954350854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511761.78/warc/CC-MAIN-20181018084742-20181018110242-00009.warc.gz"}
https://mathzsolution.com/is-the-family-of-probabilities-generated-by-a-random-walk-on-a-finitely-generated-amenable-group-asymptotically-invariant/
# Is the family of probabilities generated by a random walk on a finitely generated amenable group asymptotically invariant?

Is the family of probabilities $\mu^n$ (convolution powers) generated by a random walk $\mu$ on a finitely generated amenable group $G$ asymptotically invariant ($\|g\mu^n-\mu^n\|_{L^1}\to 0$ for any $g\in G$)? I am not familiar with random walks on amenable groups. Please indicate a reference on the subject.
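Not from the question, but a concrete sanity check on the simplest infinite example $G=\mathbb Z$ (amenable, generated by $\{1\}$), with $\mu$ uniform on $\{-1,0,1\}$: the $L^1$ distance between $g\mu^n$ and $\mu^n$ shrinks as $n$ grows.

```python
# Compare the n-step law mu^n on Z with its translate by g = +1.
import numpy as np

mu = np.array([1/3, 1/3, 1/3])   # lazy walk step distribution on {-1, 0, 1}

def convolve_power(mu, n):
    """n-fold convolution of mu with itself (distribution of S_n)."""
    dist = np.array([1.0])       # point mass at 0
    for _ in range(n):
        dist = np.convolve(dist, mu)
    return dist

g = 1                            # the generator g = +1, acting by shift
for n in (10, 100, 1000):
    d = convolve_power(mu, n)
    shifted = np.concatenate(([0.0] * g, d))   # law of g + S_n
    padded = np.concatenate((d, [0.0] * g))    # law of S_n, aligned
    print(n, np.abs(shifted - padded).sum())   # ||g mu^n - mu^n||_1
```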
2022-12-09 08:11:28
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8800193667411804, "perplexity": 197.19258560944863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711394.73/warc/CC-MAIN-20221209080025-20221209110025-00557.warc.gz"}
https://math.stackexchange.com/questions/3358517/determining-range-using-intermediate-value-theorem
# Determining range using Intermediate Value Theorem

Question: Let $$f(x) = \frac{x^6 -1}{3x -1}.$$ Prove that the range of $$f$$ is $$\Bbb R$$. (Hint: use the Intermediate Value Theorem.)

I thought the IVT was meant to show that the function has a root? Please help, I don't know how I can use the IVT to prove the range.

• Welcome to Mathematics Stack Exchange. Consider $f(x)$ as $x\to\pm\infty$ – J. W. Tanner Sep 16 '19 at 13:20
• Doesn't the IVT only work for a closed interval? In this case, the domain of f(x) is (-infinity, 1/3) U (1/3, infinity). How will this work for an open interval? – Amanda Sep 16 '19 at 13:33

Hint: If $$y\in\mathbb R$$, then asserting that $$y$$ belongs to the range of $$f$$ is the same thing as asserting that the equation $$f(x)-y=0$$ has a root.

You have $$\lim\limits_{x \to -\infty} f(x)= -\infty$$ and $$\lim\limits_{x \to \frac{1}{3}^-} f(x)= \infty$$. Moreover $$f$$ is continuous on the interval $$(-\infty, \frac{1}{3})$$. Therefore by the IVT, the image of $$(-\infty, \frac{1}{3})$$ under $$f$$ is equal to $$\mathbb R$$. A fortiori, the image of $$f$$ is equal to $$\mathbb R$$.

• @Amanda You're right that the usual IVT supposes a closed interval. However it is very easy to extend it to an open one. Suppose that $c < y < d$ where $f : (a,b) \to \mathbb R$ with $\lim\limits_{x \to a^+} f(x) = c$ and $\lim\limits_{x \to b^-} f(x) = d$. Take $a < a^\prime < b^\prime < b$ such that $c < f(a^\prime) <y < f(b^\prime) < d$ and apply the IVT to the restriction of $f$ to the closed interval $[a^\prime,b^\prime]$. – mathcounterexamples.net Sep 16 '19 at 13:44
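A numerical companion to the hint (my own sketch, not part of the thread): treating membership in the range as a root-finding problem on $(-\infty, \frac{1}{3})$.

```python
# For any target y, find a root of f(x) - y = 0 on (-inf, 1/3) by
# bracketing a sign change and applying bisection (scipy's brentq).
from scipy.optimize import brentq

f = lambda x: (x**6 - 1) / (3*x - 1)

def preimage(y, hi=1/3 - 1e-9):
    lo = hi - 1.0
    while f(lo) - y >= 0:        # walk left until f(lo) < y
        lo -= 1.0
    # f(lo) < y, and f(x) -> +inf as x -> (1/3)^-, so a sign change exists
    return brentq(lambda x: f(x) - y, lo, hi)

for y in (-100.0, 0.0, 3.14, 1e6):
    x = preimage(y)
    print(y, x, f(x))            # f(x) should reproduce y
```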
2020-09-28 09:10:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881696462631226, "perplexity": 155.38458415949862}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401598891.71/warc/CC-MAIN-20200928073028-20200928103028-00226.warc.gz"}
http://xml.coverpages.org/fineTUG99.html
# Active TEX and the DOT input syntax Jonathan Fine 1 24 February 1999 ### Abstract: The usual category codes give TEX its familiar backslash and braces input syntax. With Active TEX, all characters are active. This gives the macro programmer complete freedom in defining the input syntax. It also provides a powerful programming environment. The DOT input syntax, like TROFF, uses a period at the start of the line as an escape character. However, its underlying element, attribute and content structure is based on SGML. It is both easy to use and easy to program for. Conversion to other formats, such as SGML, HTML and XML, or to proprietary formats such as Word and RTF, will be straightforward. This is because the DOT syntax is rigorous. This new syntax will be described and demonstrated. All manner of problems connected with TEX disappear when Active TEX packages are used. For example, all input errors can be detected and corrected before they can cause a TEX error message. This will make TEX accessible to many more users. Visit http://www.active-tex.demon.co.uk for information and macros. This is a preliminary version of a paper to be presented to the 20th Annual Meeting of the TEX Users Group (Vancouver, Canada, 15-19 August 1999). Note: This HTML document has been converted from the TeX source document at http://www.active-tex.demon.co.uk/tug99.tex, 1999-04-01. See the author's Web site for a canonical version and updates. # 1. Introduction Much has changed since the introduction of TEX in 1982. Computers have become cheaper, more plentiful, and more powerful. The Internet has grown from a tool used largely by North American academics to become a mass medium subject to powerful commercial interests. And Microsoft, the supplier of the operating system for IBM's first PC, has become a colossus. Donald Knuth gave us a powerful and reliable typesetting system. Other systems may be easier to use, and have all sorts of useful (and perhaps not so useful) features, but when it comes to the typographic quality of the resulting pages, TEX is still superior in many important respects to all of its competitors. No other software even comes close to matching it on its own home ground, which is technical books, articles and preprints that have large amounts of mathematical material. Both individuals and publishers are now making information available on the Internet. This imposes new demands on the typesetting process. For many users, HTML (and perhaps soon its replacement XML) is the preferred means for supplying and receiving textual material. Twenty years ago the typeset page was the principal result of the typesetting process. Today, users are wanting both typeset pages and HTML or similar pages. By typeset pages I mean both pages for printing in the usual way, and also pages for display, say in the Portable Document Format introduced by Adobe. (In principle, this term also includes the formatting of say HTML for display in a browser.) Most TEX authors use a text editor (such as emacs) to prepare a computer file in say the LATEX syntax. Other authors will use a word processor to create a file that is stored in a proprietary format. Later down the line, these files will be typeset, converted into HTML and so forth. By and large, the closer is the syntax of the author's file to being rigorous and compatible with the processing that will be applied to it, the better will be the outcome. Compromise may be necessary. With TEX each author became his or her own typesetter. 
Very often (La)TEX files contain, for example, macro definitions, introduced for the author's own convenience. These can be a great nuisance for those who have to deal with the file later, particularly if they reside in an external file that becomes separated from the main manuscript.

This then is the present context for the use of TEX. Most TEX users now use the LATEX macro package, together with style files and additional packages. LATEX was developed in the early 1980s. The first edition of its manual was published in 1985, about a year after The TEXbook. It did a tremendous job of making the resources of TEX available to non-experts. Around 1990, however, its limitations became clear, and more than an inconvenience. One response was the birth of the LATEX3 project. In 1994 this group released LATEX2e. This helped to standardise the current situation. Recently [11], Rahtz described LATEX as "hugely powerful, but chaotic, and on the verge of becoming unmanageable." He also tells us that the CONTEXT macro package, due to Hans Hagen and Pragma, addresses this problem by incorporating into itself "all the facilities you need." It does away with document classes and user-contributed packages.

Plain TEX, LATEX and CONTEXT all use the familiar 'backslash and braces' input syntax. This can cause problems, because it is not rigorous. Translation to HTML, for example, requires that the source document be parsed. But LATEX for example is in general the only program that can successfully parse LATEX documents. This tends to result in (La)TEX living in a world of its own, isolated from the world of desktop publishing and word processing. For some communities of users, such as mathematicians, this may not be a hardship.

Active TEX is a new way of using TEX. It allows us either to avoid or to solve many of our problems. For the technical, its key idea is that each character is active, and is defined to be a macro. For example, the active letter 'a' is a macro that expands to the control sequence lcletter, followed by an active 'a'. Uppercase letters, digits and visible symbols are treated in a similar manner. By manipulating these definitions, we can make TEX do whatever we want. In particular, we can choose our input syntax. Both TEX and the system macro programmer work harder, to ease the life of both the user and the application programmer.

We will consider the problems relating to macros under three heads, namely input syntax (§2), macro programming (§3), and the processing of text (§4). The final section (§5) gives the history and prospects of Active TEX. This article is somewhat informal, and should not be read as a definitive or legally binding statement. The software is still under development.

# 2. The DOT input syntax

There are two aspects to an input syntax, namely the concrete and the abstract. The abstract syntax is the structure or organisation that the syntax provides. The concrete syntax is a means of expressing objects so organised. Provided they have the same abstract syntax, translation from one concrete syntax to another will be a routine matter. The parsing process starts with a concrete instance of the structure, and produces from it events that characterise its abstract structure.

In SGML the concept of the content model provides a large part of the abstract syntax. It might say, for example, that an article such as this one consists of front matter, sections, and end matter. Each section would be a sequence of paragraphs, together with figures and tables.
The end matter might consist of appendices and a bibliography. The latter would be a sequence of bibliographic items.

In LATEX one would write

\section{Input syntax}

to start a section. In SGML one might write

<section title="Input syntax">

to start a section. This gives two examples of a concrete syntax. In SGML the title is an attribute of the section tag. In LATEX, Input syntax is a parameter of \section.

The abstract syntax provided by SGML is solid and well-understood. It is already widely used in data processing. The concrete syntax however tends to be somewhat verbose, and difficult to use without dedicated software. This has been an obstacle to its widespread use. In the author's view, with XML this problem will become more acute.

Twenty years ago or so, the text formatting programs troff and nroff were developed, as part of UNIX. In these systems, a dot at the start of a line is an escape character that can be used to call a macro. For example

.SH 2.1 "Section heading"

might introduce a section. The author has developed a syntax whose concrete form is similar to the dot syntax of troff and nroff, but whose abstract syntax is modelled on SGML. This syntax we call the DOT syntax. As in SGML, a tag name can contain digits, period and hyphen as well as letters. As a section is, say, a second-level head, one could write

.h2 Input syntax

to start a section. In LATEX one might write

\documentclass{article}
\author{Jonathan Fine}
\title{Active \TeX\ and input syntax}
\date{20 January 1999}

to start an article. In SGML terms, the author, title and date are all attributes of the article element. As in SGML, the DOT syntax allows start tags to have attributes. One might write

.article Active &TeX and input syntax
..author Jonathan Fine
..date 20 January 1999

to specify the same information. This double dot notation for attributes is similar to the leading dots notation that TEX the program [8, page 66] uses to show the content of boxes. LATEX does not really have a concept of attributes. An end tag in the DOT syntax is like so:

./article This is a comment

but as in SGML end tags can often be implied by the context. For example, if a section cannot contain a section, the start of a new section implies the end of the current one.

SGML has the useful concept of a short reference. In the DOT syntax the start of a line, the end of a line, white space at the start of a line and a blank line are the possible short reference events. One can set matters up so that ordinary lines start paragraphs, blank lines end paragraphs, and indented lines commence math mode. Thus the fragment

Einstein's famous equation
  E = m c ^ 2
expresses the equivalence of matter and energy.

might be equivalent to

Einstein's famous equation
.eq
E = m c ^ 2
./eq
expresses the equivalence of matter and energy.

but the former is easier both to type and to read. In summary, the DOT syntax combines the power of SGML with the simple concrete syntax of troff and nroff. It provides a concrete syntax that ordinary authors can use, whose abstract form is equivalent to that of SGML.

# 3. Macro programming

This section is particularly for the TEXnically minded. In Active TEX all characters are active. This is both a problem and an opportunity for the macro programmer. Ordinarily a line in a TEX file such as

\def \hello {\message{Hello world!}}

would define a macro hello, whose execution issues a greeting. This relies on the customary or plain category codes being in force. In Active TEX another approach must be taken.
Ordinarily, control sequences are formed using TEX's eyes. Thus, \def in the source file produces the control sequence def. Active TEX uses the mouth of TEX, or more exactly csname and endcsname, to form control sequences. Macro definitions will be built up using aftergroup accumulation.

The plain code line

\expandafter \aftergroup \csname def\endcsname

contributes the control sequence def aftergroup. Similarly, the lines

\aftergroup {\iffalse}\fi
\iffalse{\fi \aftergroup}

contribute left and right braces respectively. Finally, if the macro

\def \agchar #1{\expandafter \aftergroup \string #1}

is passed a character as an argument, it will contribute aftergroup the inert form of this character. This mechanism allows us to define macros without making use of the ordinary category codes. For example, if we call begingroup, then aftergroup commands as detailed above, and then endgroup, the result could give exactly the same definition of hello as at the beginning of this section.

To store such definitions in a file, a syntax is required. Active TEX has been set up so that in a compiled TEX code (ctc) file, a line such as

def hello {message {'H'e'l'l'o' 'w'o'r'l'd'!}};

has exactly the same effect as the previous definition. Within a ctc file, a letter takes itself and all visible characters that follow, and uses csname, endcsname and aftergroup to form and contribute a control sequence. Similarly, active { and } contribute explicit (or ordinary) begin- and end-group characters { and } aftergroup. Active right quote ' is as agchar above. Finally, the semicolon ; closes the existing accumulation group and opens a new one.

This technique of aftergroup accumulation is enormously powerful. It allows arbitrary control sequences and character tokens to be placed into macro definitions. One can even do calculations, or pick up values from an external file, as the definitions are being made. Tools are required, to make full use of this power. Suitable content in ctc files allows arbitrary macros to be defined. Active TEX has a development environment, which produces ctc files from suitable source code files. For example,

def hello { message { "Hello world!" } } ;

will when compiled produce the ctc code exhibited above. Here is another example.

def ctc.letter {
  begingroup ;
  string.visible.chars ;
  let SP endcs ;
  let TAB SP ;
  let RE suspend.RE ;
  let suspend endcs ;
  xa endgroup xa ag cs ;
}

This macro is used within ctc files to produce control sequences aftergroup. Some comments are in order. Any visible characters can appear in control sequence names. This power should not be abused. We rely on the definitions

def string.visible.chars {
  let lcletter string ;
  let ucletter string ;
  let digit string ;
  let symbol string ;
}
def suspend.RE { suspend ; RE } ;

being made already. The tokens SP, TAB and RE in the source file produce (and here it is a mouthful) characters in the ctc file that in turn produce active space, tab and end-of-line characters aftergroup. The tokens xa, ag, cs and endcs in the source file are shorthand for expandafter, aftergroup, csname and endcsname respectively. It is the latter strings that are written to the ctc file by the compiler. Semicolons in macro definitions are for punctuation only. They are ignored. Outside macro definitions they trigger renewal of the aftergroup accumulation group. This process, of defining macros via ctc files, allows many of the basic problems in TEX macro development to be solved.
For example, one can insist that identifiers (tokens in the source file) be declared before they can be used. No more misspelt identifier names! One can also apply a prefix to chosen identifiers, thus segmenting the name space. This will allow a module to control access to its identifiers. No more name clashes! In the same way, one can use named rather than numbered parameters in macro definitions. For example, instead of

\def \agchar #1{\expandafter \aftergroup \string #1}

as above, one could write

def ag.char Char { xa ag str Char } ;

where Char has been previously declared to be a macro parameter place holder.

Although this process is somewhat indirect, it does not cause performance to suffer. The compilation process, to produce the ctc files, needs to be done only once, by the macro developer. With modern machines, this does not take long. Similarly, most files will be loaded only once, in the process of making a preloaded format file. In fact, Active TEX gives two performance benefits. The first is that macro programmers no longer need to resort to tricks, to obtain access to unusual control sequences or character tokens. Thus, more efficient code can be written. The second is that ctc files are generally quite compact. This compression allows them to be retrieved from the hard disc (or network) more rapidly. The DOT syntax gives the same benefits.

Tools for macro programmers are under development. For example, short references will cause indentation to indicate code lines. This section has given examples of what has been done already, and a taste of what lies in the future. The author invites comments.

# 4. Processing text

We now turn to the raison d'être of TEX, which is of course typesetting. In §2 we saw how the DOT input syntax allows a document to be broken down into elements with attributes. In §3 we saw some examples of how Active TEX can be programmed. This section is concerned with the content of the document, or more exactly with the text and the attribute values.

Typesetting plain text, such as

This is plain text.

is straightforward. Each visible character produces itself, and spaces give interword spaces. Thus the values

let lcletter string ;
let ucletter string ;
let digit string ;
let symbol string ;
def SP { unskip ; ~ } ;

will to a first approximation suffice. (The unskip is present so that multiple space characters will count as one. The ~ is Active TEX's way of calling for an ordinary space character.)

Occasionally, the user will want to add emphasis. In SGML one uses tags <em>emphasis</em> like so, while in LATEX one uses a macro \emph{emphasis}, but in Active TEX one might use

Plain text with {emphasis}.

for example. Because { and } are active, they can be programmed to open and close an emphasised group.

This brings us to perhaps the most important concept of this section, which is that of a data content notation (DCN). Roughly speaking, such a notation tells us how text is to be processed. For example, the plain text above already has a DCN, namely that it is in English. This is very important if we are using a spell checker or a search engine. Computer programming languages are more formal examples of a data content notation. Mathematics encoded in either plain or LATEX is a third example. The DOT syntax will be set up so that a data content notation can be associated to the text in each element, and to the text in each attribute value. The DCN will in a more or less formal manner tell us what is admissible, and how it should be processed.
The specification of a DCN is not however a matter for the DOT syntax. Rather, it is for the users and experts in the area. Many countries have official bodies that attempt to regulate and bring order to the use, at least in printed form, of a language.

The author suggests as a first step that for plain text a DCN along the following lines be used. For emphasis use { and }. For bold use + and +, and for math use $ and $. For verbatim use | and |. Certain nestings will be prohibited. The following

Plain text, {emphasis}, |verbatim| and +bold+, with some elementary $2+2=4$ mathematics.

is an example of its use. In math mode, new rules will be required. The author suggests ordinary TEX, but without the backslashes, like so

sin ^2 theta + cos ^2 theta = 1

as a first approximation. This is only a beginning. Building up a complete system that is capable of handling the complexity of modern mathematical notation, whilst retaining both rigour and ease of use, is not going to be an easy matter. Gaining general acceptance and support of the user community is as much a problem as the formulation and solution of the technical problems.

When SGML is used for markup, there is a tendency to use it for everything, regardless of size. This causes enormous problems to users who either do not have the software tools required, or who prefer to work with plain text files. For example, in the C programming language the & operator gives the address of a variable. Code fragments such as

ptr = &i ;

are common. But in SGML, & followed by a letter gives an entity reference, so for an SGML parser to produce the above as output, it must be given

ptr = &amp;i ;

as input. Something similar happens if one wishes to produce a<b as parser output, for the <b must not be recognised as a start tag. Part of the philosophy of the DOT syntax is that it deals with the big things (and also some of the medium sized), while the data content notation deals with the little things. The DOT syntax will have its parser, and each DCN will have its parser. They take turns in processing the input stream.

Let us now return to typesetting. Most of the time, when TEX is typesetting, it is forming either a horizontal list or a math list. Each DCN will, as it processes characters, add items to the current list. Special characters (such as $, { and }) will change the mode in some way. From time to time, say at the end of a title, the current list will be closed, and material will be added to say the main vertical list. From here on the main difference between plain TEX, LATEX, CONTEXT and Active TEX will be in the libraries of macros used for page makeup, output and so forth.

Typesetting is the purpose of a TEX macro package. Active TEX has been developed to provide a powerful, rigorous and easy to use input syntax. TEX was developed to allow typesetting of the highest quality. However, not until the basic functions of Active TEX have been met will it be possible to move on to the typesetting (composition, hyphenation and justification, galleys and page makeup) aspects of the process. Put another way, macro packages such as plain and LATEX have typesetting as their main purpose. Rigour, power and ease of use are the main goals of Active TEX, at least in this stage of its development. A fourth goal is to provide an enduring fixed point for document source files.

# 5. History and prospects

Although the basic concept of Active TEX is quite simple (all characters are active), it is surprising just how long it has taken for this idea to emerge.
A brief history follows. In plain TEX the tilde ~ is an active character, which produces an unbreakable interword space, and in math mode the apostrophe ' is effectively an active character, used for putting primes on symbols, as in f'(x). Technically, a prime is a superscript. In addition, space and end-of-line could be made active, to achieve special results such as verbatim listing of files. In 1987 Knuth [9] wrote about some macros he had written for his wife, in which many of the symbols are active.

In 1990 Knuth froze the development of TEX. In his announcement [10] he wrote: "Of course I do not claim to have found the best solution to every problem. I simply claim that it is a great advantage to have a fixed point as a building block. Improved macro packages can be added on the input side; improved device drivers can be added on the output side."

In 1992 Fine [2] produced the \noname macro development environment, which like Active TEX is based on aftergroup accumulation. This solves a major technical problem, namely how to define exactly the macros we wish to define, when the category codes are against us. The \asts problem [8, page 373] at the start of Appendix D (Dirty tricks) is solved using aftergroup accumulation.

In 1993 Fine presented a paper [3] to the 1993 AGM of the UK TEX Users Group that contains in embryonic form most if not all of the ideas in this paper. For example, he wrote "Let us solve all category code problems once and for all by insisting that the document be read throughout with fixed category codes." and then described how \' for example could be an active character that parses control sequences, in much the same way as ctc.letter does. This paper also contains other prospects that have not been discussed here, such as visual or WYSIWYG TEX (see [7] for a more recent presentation).

In 1994 Fine [4] argued that the deficiencies in TEX the program had been exaggerated, and (page 385) that "It would be nice if both TEX and its successor shared at least one syntax for compuscripts that are to be processed into documents." This syntax would have to be rigorous.

In 1994-95 Fine went the whole way, and made all characters active. Using this, he produced a prototype TEX macro package that was able to typeset directly from SGML document files. Due to lack of both sponsorship and commercial interest, the project was left unfinished. This work was presented at the Bridewell meeting (January 1995) on Portable Document, and published both in Baskerville [5] and MAPS [6], but regrettably not in the special TUGboat issue 16 (2) on TEX and SGML, published later that year.

In late 1997 the project was revived, and in May 1998 Fine spoke on Active TEX and input syntax at a meeting of the UK TEX Users Group. Since then a proof-of-concept version of the macros has been available to all those who ask.

There have been other developments that make extensive use of active characters. Michael Downes [1] has done important work on the typographic breaking of equations. He writes (p182): "Some of the changes are radical enough that it would be more natural to do them in LATEX3 than in LATEX2e -- e.g., for LATEX3 there is a standing proposal to have nearly all nonalphanumeric characters active by default; having ^ and _ active in this way would have eased some implementation problems." Werner Lemberg [12] describes the CJK (Chinese, Japanese and Korean) package. This package makes the 'extended ASCII' or eight-bit characters active.
He notes (p215): "It's difficult to input Big 5 and SJIS encoding directly into TEX since some of the values used for the encodings' second bytes are reserved for control characters: '{', '}' and '\'. Redefining them breaks a lot of things in LATEX; to avoid this, preprocessors are normally used [...]."

Active TEX has been designed from the ground up so that it can go the whole way, and allow problems such as these to be given completely satisfactory solutions, without unnecessary difficulties. The only real price seems to be performance. Because it does much more, it is not as quick as plain TEX or LATEX. This could be a problem for those who use a 286, but on a 486 or better, disk input/output is the real bottleneck.

One of the great things about TEX the program is that it is a fixed point. I would like Active TEX to become a similar fixed point, upon which users can build style files and the like. I would also like the DOT syntax to become a fixed point. When TEX was developed, Donald Knuth had [8, page vii] the active interest and support of the American Mathematical Society, the National Science Foundation, the Office of Naval Research, the IBM Corporation, the System Development Foundation, as well as hundreds of more or less ordinary users, many of whom went on to play an active part in the life of the TEX Users Group, and the community generally, and some of whom are still with us. I firmly believe that Active TEX is a worthwhile idea whose time has come. Please give it your support.

## Bibliography

1. Michael Downes, Breaking equations, TUGboat, 18 (3), (1997), 182-194
2. Jonathan Fine, The \noname macros, TUGboat, 13 (4), (1992), 505-509
3. Jonathan Fine, New perspectives on TEX macros, Baskerville, 3 (2), (1993), 17-19
4. Jonathan Fine, Documents, programs and compuscripts, TUGboat, 15 (3), (1994), 381-385
5. Jonathan Fine, Formatting SGML manuscripts, Baskerville, 5 (2), (1995), 4-7
6. Jonathan Fine, Formatting SGML manuscripts, MAPS, 14 (1), (1995), 49-52
7. Jonathan Fine, Editing .dvi files, or Visual TEX, TUGboat, 17 (3), (1996), 255-259
8. Donald E. Knuth, The TEXbook, Addison-Wesley, (1984)
9. Donald E. Knuth, Macros for Jill, TUGboat, 8 (3), (1987), 309-314
10. Donald E. Knuth, The future of TEX and METAFONT, TUGboat, 11 (4), (1990), 489
11. Sebastian Rahtz, Editorial, Baskerville, 18 (4), (1998), 1
12. Werner Lemberg, The CJK package: multilingual support beyond Babel, TUGboat, 18 (3), (1997), 214-224

#### Footnotes

1. Active TEX, 203 Coldhams Lane, Cambridge, CB1 3HY, United Kingdom. E-mail: fine@active-tex.demon.co.uk
2017-04-28 02:16:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9092453718185425, "perplexity": 2231.3578567559093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122726.55/warc/CC-MAIN-20170423031202-00574-ip-10-145-167-34.ec2.internal.warc.gz"}
https://minimizeregret.com/about/
# About Minimize Regret

Hi, my name is Tim. You’re reading Minimize Regret, the site where I write about things I’ve recently learned.

I live in Berlin, where I work as a data scientist. I’m interested in quantifying uncertainty, and optimal decision making under uncertainty. Fittingly, topics such as probabilistic programming, reinforcement learning and stochastic optimal control, as well as time series theory and applications are near and dear to my heart.

I recently finished my master’s in statistics and graduated with honors. My thesis on multi-armed bandits was supervised by Alexandra Carpentier and Marius Kloft. Before joining Bayer as a data scientist in November 2018, I worked as a data scientist for the online shop Amorelie (NSFW, I suppose). At Amorelie, I was responsible for the automated forecast and optimal replenishment of thousands of products, and introduced novel ways of attributing TV marketing cost.

Send me an email to [email protected] in case you have any questions. Otherwise, try Twitter or LinkedIn. Or even better, meet me at the next Berlin Machine Learning or Stan User Group meetup.
2022-05-24 22:29:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1805144101381302, "perplexity": 2527.258299214812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00451.warc.gz"}
http://mathhelpforum.com/calculus/110086-simple-exponential-integral.html
1. ## Simple exponential integral

Solve the following integral (showing the steps): $\int {t{e^t}dt}$

2. Originally Posted by baseballman

Solve the following integral (showing the steps): $\int {t{e^t}dt}$

See Wolfram|Alpha and click on Show steps.

3. That's good. But I'm looking for a solution in which integration by parts is NOT used.

4. Originally Posted by baseballman

That's good. But I'm looking for a solution in which integration by parts is NOT used.

There's no escape, dear.

5. Originally Posted by baseballman

That's good. But I'm looking for a solution in which integration by parts is NOT used.

Consider $y = t \sin t$. Then $\frac{dy}{dt} = \sin t + t \cos t$.

Now integrate both sides: $y = \int \sin t + t \cos t \, dt = \int \sin t \, dt + \int t \cos t \, dt$.

Substitute $y = t \sin t$: $t \sin t = \int \sin t \, dt + \int t \cos t \, dt$.

Re-arrange to make the required integral the subject: $\int t \cos t \, dt = t \sin t - \int \sin t \, dt = t \sin t + \cos t + C$.
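The same differentiate-and-rearrange trick handles the integral that was actually asked. A short sketch: start from $y = te^t$ instead of $y = t\sin t$. Then $\frac{dy}{dt} = e^t + te^t$, so integrating both sides gives

$t e^t = \int e^t \, dt + \int t e^t \, dt$

and re-arranging yields

$\int t e^t \, dt = t e^t - e^t + C.$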
2017-10-17 21:44:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9601748585700989, "perplexity": 1824.7554540021308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822488.34/warc/CC-MAIN-20171017200905-20171017220905-00889.warc.gz"}
http://kleene.ss.uci.edu/lpswiki/index.php/User:Kzollman
User:Kzollman

This is a goat. I am Kevin.

$P\wedge$ $\leftrightarrow Q$ $P$ $R\vee Q$ $P\wedge Q\vee P$ $\neg(P\vee Q)$ $\frac{x}{y}$ $\frac{hi}{bye}$
2013-05-22 21:52:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5643873810768127, "perplexity": 7615.295898019331}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00057-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/806179/calculating-limits-using-lh%C3%B4pitals-rule
Calculating limits using l'Hôpital's rule.

After a long page of solving limits using l'Hôpital's rule, only these 2 are left that I can't manage to solve

$$\lim\limits_{x\to0}{\sqrt {\cos x} - \sqrt[3]{\cos x}\over \sin^2 x }$$

$$\lim\limits_{x\to{\pi\over 2}}{\tan 3x - 3\over \tan x - 3 }$$

Thanks in advance for any help :) I edited the second one: by mistake I had entered $0$ instead of $\pi\over 2$

• Since $\tan x \to 0$ as $x \to 0$ the second one can be evaluated directly using continuity. – copper.hat May 23 '14 at 3:59
• Second one isn't indeterminate form! – evil999man May 23 '14 at 4:01
• For the first, note that $\sin^2 x = 1 -\cos^2 x$ and find the limit of ${ y^{1\over 2} - y^{ 1 \over 3} \over 1 - y^2 }$ as $y \to 1$ using l'Hôpital's rule. – copper.hat May 23 '14 at 4:03

$$\lim\limits_{x\to0}{\sqrt {\cos x} - \sqrt[3]{\cos x}\over \sin^2 x }=\lim\limits_{x\to0}{-\frac{1}{2}\sin x(\cos x)^{-\frac{1}{2}} + \frac{1}{3}\sin x (\cos x)^{-\frac{2}{3}}\over 2\sin x \cos x }=\lim\limits_{x\to0}{-\frac{1}{2}(\cos x)^{-\frac{1}{2}} + \frac{1}{3} (\cos x)^{-\frac{2}{3}}\over 2 \cos x }=-\frac{1}{12}$$

For the second one, you can evaluate the limit directly by substituting in $x=0$.

As $\displaystyle\sqrt{\cos x}-\sqrt[3]{\cos x}=(\cos x)^{\frac12}-(\cos x)^{\frac13}$ and gcd$\displaystyle\left(\frac12,\frac13\right)=\frac{\text{gcd}(1,1)}{\text{lcm}(2,3)}=\frac16,$

Set $\displaystyle\sqrt[6]{\cos x}=1+h\implies\cos x=(1+h)^6$ and $\displaystyle\sqrt{\cos x}-\sqrt[3]{\cos x}=(1+h)^3-(1+h)^2=(1+h)^2(1+h-1)$

Finally, $\displaystyle\sin^2x=1-(1+h)^{12}=-12h+O(h^2)$

• very smart solution indeed !! – Paramanand Singh May 23 '14 at 18:37
• @ParamanandSingh, Thanks for your feedback – lab bhattacharjee May 24 '14 at 12:26

@copper.hat thanks, I think I got it

$$\lim\limits_{x\to0}{\sqrt {\cos x} - \sqrt[3]{\cos x}\over \sin^2 x } = \lim\limits_{x\to0}{\sqrt {\cos x} - \sqrt[3]{\cos x}\over 1-\cos^2 x } = \lim\limits_{y\to1}{\sqrt {y} - \sqrt[3]{y}\over 1- y^2 }$$

$$\lim\limits_{y\to1}{\sqrt {y} - \sqrt[3]{y}\over 1- y^2 } = \lim\limits_{y\to1}{\frac{1}{2}y^{-\frac{1}{2}} - \frac{1}{3}y^{-\frac{2}{3}}\over -2y } = \frac{\frac{1}{2}-\frac{1}{3}}{-2} = -\frac{1}{12}$$
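A quick symbolic cross-check of the first limit, as a minimal sketch using SymPy (this assumes a standard SymPy installation and is not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.sqrt(sp.cos(x)) - sp.cos(x) ** sp.Rational(1, 3)) / sp.sin(x) ** 2
print(sp.limit(expr, x, 0))  # -1/12, agreeing with both answers above
```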
2020-10-24 00:17:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938386082649231, "perplexity": 780.3094126238809}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00187.warc.gz"}
https://tex.stackexchange.com/questions/590568/triangle-with-text-involving-white-spaces
# Triangle with text involving white spaces

I am trying to draw a diagram in LaTeX exactly equivalent to this image:

I am having problems with the white spaces. For instance, between "practice" and "management". How would you code this in LaTeX?

• Which problems do you have, exactly? Mar 29, 2021 at 20:00

I would use the tikz package to achieve the desired result.

\begin{tikzpicture}
\coordinate (a) at (0,0);
\coordinate (b) at (4,0);
\coordinate (c) at (2,2);
\draw (a) -- (b) -- (c) -- (a); % Triangle.
\draw (2,1) node[anchor=north]{Platform};
\draw (a) node[anchor=east,align=center] {Case\\ Management};
\draw (b) node[anchor=west,align=center] {Practice\\ Management};
\draw (c) node[anchor=south]{Billing};
\end{tikzpicture}

This version uses labels for the text at the coordinates and barycentric coordinates for the text inside the triangle:

\documentclass[border=3.141592]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[
every label/.style = {align=center}
]
\coordinate[label=left :Case\\ Management] (a) at (0,0);
\coordinate[label=right:Practice\\ Management] (b) at (4,0);
\coordinate[label=above:Billing] (c) at (2,2);
%
\node at (barycentric cs:a=1 ,b=1 ,c=1) {Platform};
\draw (a) -- (b) -- (c) -- cycle;
\end{tikzpicture}
\end{document}
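If the first snippet is compiled on its own, it still needs a document around it; a minimal wrapper, sketched along the lines of the second answer's preamble:

```latex
\documentclass[border=3.141592]{standalone}
\usepackage{tikz}
\begin{document}
% paste the first answer's tikzpicture environment here
\end{document}
```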
2022-09-29 02:18:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9154360294342041, "perplexity": 6394.8675572133625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335303.67/warc/CC-MAIN-20220929003121-20220929033121-00136.warc.gz"}
https://www.physicsforums.com/threads/i-need-help-and-fast.144197/
# I need help and fast

1. Nov 16, 2006

### neji006

Hi, I'm new here. I wanted your help in solving these two questions. I tried several times to solve them, but the computer kept showing that the answer is wrong ((on the Mastering Physics site)), so please, I need the answers to those problems, and fast, because the chapter will be closed in an hour.

1. A factory worker pushes a 30.4 kg crate a distance of 4.00 m along a level floor at constant velocity by pushing downward at an angle of 28.0 below the horizontal. The coefficient of kinetic friction between the crate and floor is 0.255.

A. What magnitude of force must the worker apply to move the crate at constant velocity? Take the free fall acceleration to be 9.8

B. How much work is done on the crate by this force when the crate is pushed a distance of 4.00 m? Take the free fall acceleration to be 9.8

2. A pump is required to lift a mass of 790 of water per minute from a well of depth 13.8 and eject it with a speed of 18.1.

A. How much work is done per minute in lifting the water? Take the free fall acceleration to be = 9.80

B. How much in giving the water the kinetic energy it has when ejected?

C. What must be the power output of the pump? Take the free fall acceleration to be = 9.80.

NOTE: all the quantities are in SI units.

2. Nov 16, 2006

### neji006

Please, I tried hard. I solved all my 5 problems, but those two I just can't get right.

3. Nov 16, 2006

### fargoth

Show your work, let's see where you went wrong.

4. Nov 16, 2006

### neji006

OK, the first one I tried to solve using this equation: Fcos(-28) - coe.k.(w) + Fsin(-28) = 0, and the answer was wrong, so of course I wasn't able to find the work. The second problem was hard for me because I didn't take calculus in the first place.

5. Nov 16, 2006

### fargoth

Hint: pushing a box downwards and forward on a plane when you don't have friction is like pushing it forward... (with only the horizontal component). When you've got friction and you push something downwards, you add your force to its weight...

Last edited: Nov 16, 2006

6. Nov 16, 2006

### Staff: Mentor

$$F = \mu N$$

and express N as the sum of the weight of the crate plus the vertical component of the pushing force....

7. Nov 17, 2006

### neji006

I did that, but it kept telling me it's wrong. Please help me, I don't have much time; it's due at 6 pm today.

8. Nov 17, 2006

### Dorothy Weglend

You didn't do that in your earlier equation. Friction = uN, what is N? It is the weight of the crate + the additional force supplied by the worker. The weight of the crate is mg, and the additional force is the vertical component of his push. It's obvious you understand this, but you are not equating it to N, perhaps because you didn't draw a free body diagram?

So, N = mg + Fv (Fv is the vertical part of the push, you will have to supply this), then Friction = u(mg + Fv)

I agree with the other poster that you are making a mistake using -28 as the angle measure; just because it is below the horizontal doesn't make it a -28. It won't bother your cosine, but it will affect your sine function.

I don't see any units in the second problem, sorry to say. How much water in how much time? Cubic meters? In any case, I don't think you need calculus for this.

It's late. I'm going to bed.

Dorothy

9. Nov 17, 2006

### rieuk

Hope you got it done neji006 ;)

10. Nov 17, 2006

### neji006

I did. Boy, I was nervous. Thanks guys, I got 94% on the work and kinetic energy chapter.
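Putting the thread's hints together (N = mg + F sin θ for the crate), here is a minimal sketch of the arithmetic in Python; the unit choices for problem 2 (kg, m, m/s) are my assumption, based only on the poster's note that everything is in SI units:

```python
import math

# Problem 1: constant velocity => F*cos(theta) = mu*N, with N = m*g + F*sin(theta)
m, g, mu, d = 30.4, 9.8, 0.255, 4.00
theta = math.radians(28.0)
F = mu * m * g / (math.cos(theta) - mu * math.sin(theta))
W = F * math.cos(theta) * d          # work done by the push over 4.00 m
print(F, W)                          # ~99.5 N and ~352 J

# Problem 2 (assuming 790 kg per minute, 13.8 m depth, 18.1 m/s exit speed)
m_w, h, v = 790.0, 13.8, 18.1
W_lift = m_w * g * h                 # work per minute spent lifting the water
KE = 0.5 * m_w * v ** 2              # kinetic energy given to the water per minute
P = (W_lift + KE) / 60.0             # average power output in watts
print(W_lift, KE, P)                 # ~1.07e5 J, ~1.29e5 J, ~3.9 kW
```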
2017-06-26 17:09:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.480272114276886, "perplexity": 944.9126296701196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320841.35/warc/CC-MAIN-20170626170406-20170626190406-00014.warc.gz"}
http://www.ck12.org/book/CK-12-Geometry-Honors-Concepts/section/2.0/
# Chapter 2: Rigid Transformations

Created by: CK-12

Translations, reflections, and rotations are all examples of rigid motions that you have studied in the past. Here, you will formalize the definitions of these transformations and learn how to perform transformations and sequences of transformations with geometry software. You will also learn what it means for a shape to have reflection or rotation symmetry.

## Chapter Outline

### Chapter Summary

You looked at different types of transformations. Rigid transformations were those that preserved distance and angles. Some transformations, such as stretches, were not rigid transformations. You formalized the definitions of translations, reflections, and rotations using vectors, circles, and parallel and perpendicular lines. You also learned how to use Geogebra to perform transformations and composite transformations. You saw that when a shape could be reflected across a line and be carried onto itself it had reflection symmetry. When a shape could be rotated less than $360^\circ$ about a point and be carried onto itself it had rotation symmetry. A solid understanding of rigid transformations will inform your formal understanding of triangle congruence and proof.

Aug 27, 2013
2014-10-02 05:42:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 1, "texerror": 0, "math_score": 0.659904420375824, "perplexity": 1633.288708174377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663711.39/warc/CC-MAIN-20140930004103-00308-ip-10-234-18-248.ec2.internal.warc.gz"}
http://www.filewatcher.com/d/Debian/all/doc/texlive-science-doc_2007.dfsg.17-1~lenny02_all.deb.7486878.html
# texlive-science-doc

## TeX Live: Documentation files for texlive-science

This package provides the documentation for texlive-science

Homepage: http://www.tug.org/texlive
Package version: 2007.dfsg.17-1~lenny02
Architecture: all
Distribution: Debian
Filename: texlive-science-doc_2007.dfsg.17-1~lenny02_all.deb

README: TeX live for Debian. First of all, if you need help with TeX on Debian, ie with respect to file placement, configuration options, etc, please see the document TeX-on-Debian in the tex-common package, which can be found in /usr/share/doc/tex-common/ in the pdf, txt, and html format. This file contains additional information specific to TeX live. Differences […]

README.source: Packaging TeX Live for Debian is a huge task. Development is done in a very specific layout and source packages are generated from that. If you want to know how the *orig.tar.gz* and the *source* packages are generated, please check out the Debian TeX Live packaging infrastructure at http://svn.debian.org/wsvn/debian-tex/texlive-new/trunk/ where you will find a README file expla[…]

New arrow heads for chemical reaction schemes, 4 February 2001. 1) What's the name of the game? LaTeX can be used to typeset many kinds of different documents, but typesetting chemical reactions is esthetically not very pleasing because LaTeX's own arrows \rightarrow, \leftarrow and \rightleftharpoons which you might use for this purpose […]

The SIstyle package for SI units and number typesetting […]

README for the package SIunits: The files in this directory (./SIunits) are Copyri[…]

File: readme.txt (for the alg package). Copyright (c) 1995, 1999, 2003 Staffan Ulfberg. This progra[…]

Algorithm2e is an environment for writing algorithms in LaTeX2e. An algorithm is defined as a floati[…]

This package provides many possibilities to customize the layout of algorithms. You can use one of t[…]

## Browse inside texlive-science-doc_2007.dfsg.17-1~lenny02_all.deb

[DIR] DEBIAN/ (5)
[DIR] usr/ (1)
2016-12-08 09:53:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.94487065076828, "perplexity": 12558.32053081817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542520.47/warc/CC-MAIN-20161202170902-00413-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/lagranges-equations-for-impulsive-forces-question.281703/
# Lagrange’s equations for impulsive forces (Question)

1. Dec 27, 2008

### loranbase

1. The problem statement, all variables and given/known data

There is a mass m1 = m and 2 rods, each of mass m, as drawn in the file below. The question asks for the speed of the system immediately after the impact of F.

(http://www.bordova.com/Lagranges_equations_for_impulsive_forces.pdf) or doc version (http://www.bordova.com/Lagranges_equations_for_impulsive_forces.doc)

2. Relevant equations

Included in the doc file.

3. The attempt at a solution

I have tried to solve the problem by myself, and my solution is also included in the file above. I have found a strange result. If you spot any mistakes in my solution and point them out, I will be glad.

Last edited: Dec 27, 2008
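Since the worked equations live only in the linked files, here is the general relation such problems rest on, sketched in my own notation (not taken from the attachment). Integrating Lagrange's equations over the short impact interval, the ordinary finite forces drop out and only the impulsive ones survive:

$$\left(\frac{\partial T}{\partial \dot q_j}\right)_{\text{after}} - \left(\frac{\partial T}{\partial \dot q_j}\right)_{\text{before}} = \hat Q_j, \qquad \hat Q_j = \int_{t_0}^{t_0+\Delta t} Q_j \, dt,$$

where $T$ is the kinetic energy and $\hat Q_j$ is the generalized impulse of $F$ on coordinate $q_j$. Solving the resulting linear system for the post-impact generalized velocities gives the speed being asked for.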
2017-02-23 03:08:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.822004497051239, "perplexity": 922.6901955642354}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00265-ip-10-171-10-108.ec2.internal.warc.gz"}
http://encyclopedia.kids.net.au/page/re/Religious_conversion
# Religious conversion

Religious conversion is the adoption of new religious beliefs that differ from the convert's previous beliefs; in some cultures (e.g. Judaism) conversion also signifies joining an ethnic group as well as adopting that group's religious beliefs. A person who has undergone conversion is called a convert. Conversion requires internalization of the new belief system.

### Conversion to Judaism

The Tanakh (Hebrew Bible) states that converts deserve special attention. The Hebrew word for "convert", ger, is the same as that for a stranger. It is also related to the root gar - "to dwell". Hence, since the Children of Israel were "strangers" - geirim - in Egypt, they are instructed to be welcoming to those who seek to convert and dwell amongst them.

Jewish law has strict guidelines for accepting new converts to Judaism. According to Jewish law, which is still followed as normative by Orthodox Judaism and Conservative Judaism, potential converts must want to convert to Judaism for its own sake, and for no ulterior motives. A male convert needs to undergo a ritual circumcision, and there has to be a commitment to observe Jewish law. A convert must accept Jewish principles of faith, and reject the previous theology that he or she had prior to the conversion. Ritual immersion in a small pool of water known as a mikveh is required, and the convert takes a new Jewish name and is considered to be a son or daughter of the biblical patriarch Abraham.

The most famous Jewish king, King David, was descended from the convert Ruth, a princess from Moab. The father of the most famous sage of the Talmud, Rabbi Akiva, was a convert.

Christians were forbidden to convert to Judaism on pain of death during most of the Middle Ages. In the 1700s a famous convert by the name of Count Valentin Potoski in Poland was burned at the stake. He was a disciple of Rabbi Elijah, known as the Vilna Gaon.

The Reform Judaism and Conservative Judaism movements are lenient in their acceptance of converts. Many of their members are married to non-Jews, and these movements make an effort to welcome the spouses of Jews who seek to convert. This issue is a lightning rod in modern-day Israel, as many immigrants from the former Soviet Union are not Jewish.

Since around 300 CE, Judaism has stopped encouraging people to join its faith. In fact, converts are often discouraged from becoming Jews and warned that being a Jew entails great risks, such as becoming a victim of Anti-Semitism.

### Differences between Jewish and Christian views

Judaism does not characterize itself as a religion (although one can speak of the Jewish religion and religious Jews). The subject of the Tanach (Hebrew Bible) is the history of the Children of Israel (also called Hebrews), especially in terms of their relationship with God. Thus, Judaism has also been characterized as a culture or as a civilization. Rabbi Mordecai Kaplan defines Judaism as an evolving religious civilization. One crucial sign of this is that one need not believe, or even do, anything to be Jewish; the historic definition of 'Jewishness' requires only that one be born of a Jewish mother, or that one convert to Judaism in accord with Jewish law. (Today, Reform and Reconstructionist Jews also include those born of Jewish fathers and gentile mothers if the children are raised as Jews.)
To Jews, Jewish peoplehood is closely tied to their relationship with God, and thus has a strong theological component. This relationship is encapsulated in the notion that Jews are a chosen people. Although many non-Jews have taken this as a sign of arrogance or exclusivity, Jewish scholars and theologians have emphasized that a special relationship between Jews and God does not in any way preclude other nations having their own relationship with God. For Jews, being "chosen" fundamentally means that Jews have chosen to obey a certain set of laws (see Torah and halakha) as an expression of their covenant with God. Jews hold that other nations and peoples are not required or expected to obey these laws, and face no penalty for not obeying them. Thus, as a national religion, Judaism has no problem with the notion that others have their own paths to God (or "salvation").

Christianity, on the other hand, is characterized by its claim to universality, which marks a break with Jewish identity. As a religion claiming universality, Christianity has had to define itself in relation to religions that make radically different claims about God. Christians believe that Christianity represents the fulfillment of God's promise to Abraham and the nation of Israel, that Israel would be a blessing to all nations.

This crucial difference between the two religions has other implications. For example, conversion to Judaism is more like a form of adoption (i.e. becoming a member of the nation, in part by metaphorically becoming a child of Abraham), whereas conversion to Christianity is explicitly a declaration of faith. Depending on the denomination, this conversion has a social component, as the individual is in many ways adopted into the Church, with a strong family model.

### Conversion to Christianity

In the times of Jesus, gentiles who sought to join him were required to undergo conversion to Judaism first, including circumcision for men. This requirement was later dropped after Paul forced the issue. The origin of Christian baptism in water is derived from the Jewish law requiring a convert to submerge themselves in pure water in order to receive a new pure soul from God. It was only many years after Jesus, when there was a split in the movement, that those seeking to convert to Christianity were no longer faced with the major obstacles that Judaism presented.

Christianity and Islam are two religions that encourage preaching their faith in order to convert non-believers. In both cases, this missionary property has been used as an excuse for religious wars (crusades) against other countries.

In the year 1000, the Viking age parliament of Iceland decided that the entire country should convert to Christianity, and that sacrifice to the old gods, while still allowed, should no longer be made in the open. Similar mass conversions in other Scandinavian countries were not as democratic.

### Conversion to Islam

(to be written)

All Wikipedia text is available under the terms of the GNU Free Documentation License
2021-10-26 22:34:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4228721857070923, "perplexity": 4279.085980199213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00700.warc.gz"}
https://www.lil-help.com/questions/259709/cornell-notes-on-martin-luther-kings-i-have-a-dream-speech
Cornell notes on Martin Luther King's "I Have A Dream" speech

Cornell Notes

Name: Courtney Moore
Topic: I Have a Dream Speech

Points to Remember ** Notes **

Dr. King emphasizes the unity of blacks and whites. He states that if America is to be a great country, this has to happen. He states that even in the past, when there was slavery, some slaves and slave masters could get along. He states that he has faith that one day we will all be able to work together, pray together and struggle together. He worries about the future for his 4 children. He hopes that one day his children can stand hand in hand with white boys and girls as sisters and brothers. He states that one day he hopes that his children are judged by the content of their character, not the color of their skin. ...

JASMINE17

Filename: moore-c-m4-a2doc-73.docx
Filesize: < 2 MB
Print Length: 2 Pages/Slides
2017-07-20 14:24:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35220906138420105, "perplexity": 8232.34027314513}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423222.65/warc/CC-MAIN-20170720141821-20170720161821-00594.warc.gz"}
http://www.math.ubc.ca/~andrewr/CLP/clp_2_ic/sec_parfrac.html
## Section 1.10 Partial Fractions

Partial fractions is the name given to a technique of integration that may be used to integrate any rational function (recall that a rational function is the ratio of two polynomials). We already know how to integrate some simple rational functions

\begin{align*} \int \frac{1}{x}\dee{x} &= \log|x|+C & \int \frac{1}{1+x^2} \dee{x} &= \arctan(x) +C \end{align*}

Combining these with the substitution rule, we can integrate similar but more complicated rational functions:

\begin{align*} \int \frac{1}{2x+3}\dee{x} &= \frac{1}{2} \log|2x+3| +C & \int \frac{1}{3+4x^2}\dee{x} &= \frac{1}{2\sqrt{3}} \arctan\left( \frac{2x}{\sqrt{3}} \right) +C \end{align*}

By summing such terms together we can integrate yet more complicated forms

\begin{align*} \int \left[ x + \frac{1}{x+1} + \frac{1}{x-1} \right]\dee{x} &= \frac{x^2}{2} + \log|x+1| + \log|x-1| +C \end{align*}

However we are not (typically) presented with a rational function nicely decomposed into neat little pieces. It is far more likely that the rational function will be written as the ratio of two polynomials. For example:

\begin{gather*} \int \frac{x^3+x}{x^2-1}\dee{x} \end{gather*}

In this specific example it is not hard to confirm that

\begin{align*} x+\frac{1}{x+1} +\frac{1}{x-1} &=\frac{x(x+1)(x-1) +(x-1) +(x+1)}{(x+1)(x-1)} =\frac{x^3+x}{x^2-1} \end{align*}

and hence

\begin{align*} \int \frac{x^3+x}{x^2-1}\dee{x} &= \int \left[ x + \frac{1}{x+1} + \frac{1}{x-1} \right]\dee{x} \\ &= \frac{x^2}{2} + \log|x+1| + \log|x-1| +C \end{align*}

Of course going in this direction (from a sum of terms to a single rational function) is straightforward. To be useful we need to understand how to do this in reverse: decompose a given rational function into a sum of simpler pieces that we can integrate.

Suppose that $N(x)$ and $D(x)$ are polynomials. The basic strategy is to write $\frac{N(x)}{D(x)}$ as a sum of very simple, easy to integrate rational functions, namely

1. polynomials — we shall see below that these are needed when the degree of $N(x)$ is equal to or strictly bigger than the degree of $D(x)$ (the degree of a polynomial is the largest power of $x$; for example, the degree of $2x^3+4x^2+6x+8$ is three), and
2. rational functions of the particularly simple form $\frac{A}{(ax+b)^n}$ and
3. rational functions of the form $\frac{Ax+B}{(ax^2+bx+c)^m}\text{.}$

We already know how to integrate the first two forms, and we'll see how to integrate the third form in the near future.

To begin to explore this method of decomposition, let us go back to the example we just saw

\begin{align*} x+\frac{1}{x+1} +\frac{1}{x-1} &=\frac{x(x+1)(x-1) +(x-1) +(x+1)}{(x+1)(x-1)} =\frac{x^3+x}{x^2-1} \end{align*}

The technique that we will use is based on two observations:

1. The denominators on the left-hand side of this identity are the factors of the denominator $x^2-1=(x-1)(x+1)$ on the right-hand side.
2. Use $P(x)$ to denote the polynomial on the left hand side, and then use $N(x)$ and $D(x)$ to denote the numerator and denominator of the right hand side. That is

\begin{align*} P(x)&=x & N(x)&= x^3+x & D(x)&= x^2-1. \end{align*}

Then the degree of $N(x)$ is the sum of the degrees of $P(x)$ and $D(x)\text{.}$ This is because the highest degree term in $N(x)$ is $x^3\text{,}$ which comes from multiplying $P(x)$ by $D(x)\text{,}$ as we see in

\begin{align*} x + \frac{1}{x+1} + \frac{1}{x-1} &=\frac{ \overbrace{x}^{P(x)} \overbrace{(x+1)(x-1)}^{D(x)} + (x-1) + (x+1) } {(x+1)(x-1)} =\frac{x^3+x}{x^2-1} \end{align*}

More generally, the presence of a polynomial on the left hand side is signalled on the right hand side by the fact that the degree of the numerator is at least as large as the degree of the denominator.
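The decomposition in this example can be checked mechanically; a small sketch using SymPy (an illustration, not part of the text):

```python
import sympy as sp

x = sp.symbols('x')
# apart() performs the polynomial division and the partial fraction decomposition
print(sp.apart((x**3 + x) / (x**2 - 1)))   # x + 1/(x + 1) + 1/(x - 1)
```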
2018-03-18 11:41:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993466734886169, "perplexity": 478.0635206876525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645613.9/warc/CC-MAIN-20180318110736-20180318130736-00531.warc.gz"}
http://smlnj-gforge.cs.uchicago.edu/scm/viewvc.php/papers/modulespaper/design/principles.tex?view=markup&root=smlnj&sortby=author&pathrev=3564
# SCM Repository

[smlnj] View of /papers/modulespaper/design/principles.tex

Thu Sep 30 13:33:05 2010 UTC (8 years, 9 months ago) by dbm
File size: 14814 byte(s)
initial import

\documentclass[9pt]{sigplanconf} \usepackage{cite} % sort citations \usepackage{amsmath,xspace,aux/math-envs,aux/math-cmds,aux/code,aux/proof,stmaryrd} \input{aux/code} \input{aux/mac} \input{aux/defs} \input{syntax} \bibliographystyle{plain} \begin{document} %\conferenceinfo{ML'07,} {October 5, 2007, Freiburg, Germany.} \title{Principles for an Extensible Module System} \authorinfo{}{University of Chicago}{} \maketitle \begin{abstract} The ML module system has inspired a series of formalisms describing the workings and mechanisms of its variations. These formalisms are quite varied and run the gamut from Leroy's presentation based on syntactic mechanisms alone, to Harper-Lillibridge and Dreyer-Crary-Harper's type-theoretic approach, to the elaboration semantics approach as represented by the Definition. \end{abstract} \category{D.3.1}{Programming Languages}{Formal Definitions and Theory} \category{D.3.3}{Programming Languages}{Language Constructs and Features}[Abstract data types, Modules] \category{F.3.3}{Logics and Meanings of Programs}{Studies of Program Constructs}[Type structure] \terms Languages, Theory \keywords modularity, translucent sum, singleton types, type theory, elaboration semantics, abstract data types, functors, generativity { \section{Outline} \begin{enumerate} \item ML module system is powerful because: \begin{enumerate} \item functors can be typechecked independent of functor applications \item enforces type abstraction by opaque ascription \item hierarchical modularity \item higher-order functors \item type definitions \end{enumerate} \item Background \begin{enumerate} \item Modular module\cite{leroy00} provides a syntactic model \begin{enumerate} \item Functorized over core language syntax and typechecker \item Core language must be aware of paths \item functor can only be applied to rooted path (can be extended to anonymous argument module if parameter dependency in the result signature can be reduced away) \item Because notion of core language (especially types) is abstract, the system is not easily extensible to richly typed languages. \item Does not support shadowing in core language declarations. Shadowing would fundamentally break syntactic notion of opaque ascription. \end{enumerate} \item Definition \cite{mthm97} provides a semantic approach \item Harper-Lillibridge\cite{lillibridge94}, Dreyer-Crary-Harper\cite{dhc03} provide a type theoretic model \begin{enumerate} \item purity \item totality/partiality \item static vs. dynamic effects; strong and weak sealing \item comparability and projectibility \end{enumerate} \item Harper-Pierce \cite{ATTAPL} provides high-level design principles and issues \begin{enumerate} \item sharing type constraints cannot always be expanded out as claimed by Harper-Pierce. Symmetric constraints are necessary in the absence of an ability for type definitions to reference their enclosing signatures and thus specifications that come after the type definition in question.
\item determinacy versus static/dynamic \item first-class modules \item In Stone, full signatures are called ``very precise'' versus abstract; he argues that the avoidance problem is one reason why translucent sums do not have full signatures (most precise); also that restricting all programs to Leroy's named form guarantees the existence of ``most-specific'' interfaces. This terminology leads to the term ``natural interface'' -- the most precise interface that can be computed without appealing to strengthening (M-SELF). \end{enumerate} \item Treatments of first-class modules \begin{enumerate} \item Harper-Lillibridge \item Russo \cite{russo00} \item Dreyer-Crary-Harper \end{enumerate} \item Leroy \cite{Leroy-generativity} gives a module system where type generativity and SML90-style definitional sharing = path equivalence + A-normalization (for functor applications) + S-normalization (a consolidation of sharing constraints). Leroy shows that a module system with generative datatypes (but no constructors), sharing between type paths, and abstract type specifications can be expressed in terms of a module system with generative datatypes and manifest types. Leroy's simplified module system does not include value specifications and datatype constructors, both of which can constrain the order in which specifications must be written and therefore result in situations where sharing constraints cannot in general be reduced to manifest types. \end{enumerate} \item Design space: Propagation of types \begin{enumerate} \item Primarily through functor application \item Shao fully transparent signature calculus \item SML90 - only explicit sharing equations and structure (identity) sharing \item SML93 - plus definitional sharing \item SML97 - plus where type and definitional specs; structure identity sharing eliminated \item existential types, dependent sums, translucent sums, singleton kinds/signatures, Shao flexroot \end{enumerate} \item Key problem in modularity: modularizing types and their interpretations \begin{enumerate} \item Example: Symbol table versus Ord \item type class an imperfect solution because it limits interpretation to a single instance and the type to only generative nonstructured types (i.e., datatypes) \item applicative functors imperfect because they permit too much sharing \item constructors of higher kind \item relationship to expression problem? \end{enumerate} \item True higher-order functors (MacQueen-Tofte 94) -- {\bf full transparency} \begin{enumerate} \item functor parameter type information should be propagated through functor application; in other words, should transparent signature matching semantics carry over to the higher-order functor setting?
\item Shao \cite{shao98} cites optimal propagation of types (ensuring that inlined and separately compiled modules receive the same typing) as a benefit of full transparency \item \begin{verbatim} signature FPS = sig type t end signature FRS = sig type t end structure M = struct type t = int end functor F(X: FPS): FRS = struct type t = X.t end functor G(functor F(X: FPS):FRS) = struct structure R = F(M) end \end{verbatim} \item \verb|t = int| should propagate through the HO functor application to \verb|G(F).R| \item type definitions in signatures insufficient because not all parameter functor F's will propagate this type information \item MacQueen-Tofte 94 appears to be the only module system that accounts for this class of type sharing \item Unfortunately, this feature apparently conflicts with separate compilation as noted by Dreyer-Crary-Harper \cite{dhc03} \item Primary criticism of stamp-based operational semantics is the difficulty of extending such a semantics \item The MT94 semantics also stratifies the stamp computation in a peculiar way \item Shao \cite{shao99} offers an alternative example for fully transparent higher-order functors\\ \begin{verbatim} signature S = sig type t val x : t end funsig FS = fsig (X:S): S structure S = struct type t=int val x=1 end functor F1(X:S) = struct type t=X.t val x=X.x end functor F2(X:S) = struct type t=int val x=1 end functor APPS(F:FS) = F(S) structure R = struct structure R1 = APPS(F1) structure R2 = APPS(F2) val res = (R1.x = R2.x) end \end{verbatim} \item Shao offers a signature language based on gathering all flexible components in a higher-order type constructor that can be applied to obtain the fully transparent signature at a later point. The resultant signature language superficially resembles applicative functors. However, applications in the signature language must be on paths. Consequently, it does not address full transparency in the general case. \item Shao \cite{shao98} extends MacQueen-Tofte fully transparent modules with support for type definitions, type sharing (normalized into type definitions), and hidden module components. \end{enumerate} \item Extending modules to support signature bindings as components \begin{enumerate} \item In ML modules, structures can be arranged in a hierarchy. This feature enables flexible namespace management. In contrast, signatures cannot be arranged in such a hierarchy. In the ML module system, signatures must be defined at the top-level and can never be enclosed in any other signature or module. For complex hierarchies such as SML/NJ's Control module that contains layers of submodules, the corresponding signature CONTROL and the signatures of the submodules PRINT and ELAB are related only incidentally by occurrence in structure specifications in CONTROL. This shortcoming in the signature language unnecessarily pollutes the signature namespace and complicates browsing through and working with highly nested hierarchies. It would be desirable to permit (transparent) signature specifications within signatures. For added flexibility and perhaps increased expressiveness, it may be useful to permit signature definitions within structures and functors. Furthermore, in order for modules to match these signatures enriched with signature specifications, modules must permit corresponding signature definitions. \item Leroy \cite{leroy94} offers an example that introducing signature bindings into structures would add polymorphic modules and F$_\omega$-like type operators.
In particular, he offers \begin{verbatim} functor(x: sig signature X end) (m{x.X/X}) \end{verbatim} as an encoding of $\Lambda X.m$ \item Swasey \cite{swasey06} and Leroy \cite{leroy94} both cite Harper-Lillibridge's proof of the undecidability of $\lambda^{\rightarrow,\exists,\exists=}$ as the reason for their skepticism that such a feature can be added to a module system without breaking type-checking \item Harper and Lillibridge's proof establishes that in a type calculus with opaque and binary sums, subtyping is undecidable in the presence of a Forget rule that forgets transparency. The example they use is a subtyping relation on transparent and opaque sums containing a type constructor with a contravariant subtyping rule such as $T\rightarrow \alpha'$ \item Adding signature components does not necessarily provide parametric polymorphism in the style of System F because functor application uses coercive subtyping \end{enumerate} \item Polymorphism and modules \begin{enumerate} \item Interaction with Hindley-Milner polymorphism in core language \item Moscow ML's first-class modules provide first-class polymorphism \item Example: polymorphic data structures, continuation monad \cite{kahrs94} \end{enumerate} \item Claim: Instantiation is an analysis process that can detect cyclic sharing and other behaviors that may result in an unrealizable signature (though certainly not all behaviors) \end{enumerate} The key observation in Leroy's syntactic presentation of a module system is that we can check a sense of type equality by comparing rooted paths that uniquely determine type identity. Unfortunately, this technique suffers from the inability to support core and module language level shadowing of bindings. True separate compilation poses an interesting challenge in the ML module system. In order to have true separate compilation, the surface signature language must be able to express the full signature of all structures and functors. Even in a module language that only supports first-order functors, this requirement proves to be a problem because the signature language would be unable to express generative types in the body of a functor. Generative types in the body of a functor do not have externally expressible names prior to functor application. \begin{figure} \tiny \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline System & higher-order & first-class & sep comp & rec mod & app fct & gen & phase sep\\ \hline HL \cite{lillibridge94} & & \checkmark & & & & & \\ \hline Leroy 94 & & & & & & & \\ \hline Russo \cite{russo01} & & \checkmark & & \checkmark & & & \\ \hline DCH (\cite{dhc03}) & & & & & & & \\ \hline RTG (\cite{dreyer05}) & & & & & & & \\ \hline $\lambda^{\llparenthesis~\rrparenthesis}$ (\cite{ATTAPL}) & & & & & & & \\ \hline $\lambda^S$ (\cite{ATTAPL}) & & N & & N & N & N & \\ \hline MT94 (\cite{mt94}) & \checkmark & & & & & \checkmark & \checkmark \\ \hline \end{tabular} \end{figure} %Would a completely nondependent module calculus work? More to the point, signature specs may depend on module terms, but is there a way to avoid this kind of dependence? \input{figs/fig-njmod} \input{figs/fig-statsem} \input{figs/fig-rlzn} \input{figs/fig-evalent} \input{figs/fig-eval-fctent} \input{figs/fig-entdec} \input{figs/fig-elabsig} \input{figs/fig-elabmod} \input{figs/fig-elabtyc} \input{figs/fig-extractsig} \section{Comparison to MacQueen-Tofte} In MacQueen-Tofte, signatures support higher-order functors by including a functor signature environment, denoted $\Phi$, that maps functor paths to functor signatures.
Because functor signature components may depend on specifications that came earlier in the enclosing structure signature, the system introduces a binding $\lambda\rho$ that binds $\rho$, the entity variable representing the entire enclosing structure signature. The MT structure signatures require this $\lambda\rho$ binding because they incorporate functor signature environments indexed by functor paths, $\Phi$. Without the $\lambda\rho$-binding, $\Phi$ cannot depend on entities in the enclosing signatures. Structure matching first looks up all $fp$s in $\Phi$ and then attempts to match the static functor with $\Phi[\varphi/\rho]$. In contrast, in the current language, SML/NJ no longer permits nonlocal forms of sharing of the flavor illustrated in the example in the MT94 paper. Instead, structure definitions can express the same sharing. Signature specifications contain the signatures and realizations for structure definitions ({\it i.e.}, $\rho,\Sigma=\Sigma$ and $\rho,\Sigma=_{\overline{\rho}} \Sigma$) at the point of structure signature matching. These definition structure signatures and realizations are filled in during signature elaboration by looking up the static environment and entity path context. \input{primarycomp} \section{Relationship to Harper-Stone Semantics} \input{siglang} \section{FLINT Compilation} The goal of FLINT compilation was to enable all type-based optimizations to work across module boundaries by compiling the module calculus into System F. \input{figs/fig-nrc.tex} \input{figs/fig-tgc.tex} } \bibliography{modules} \end{document}
2019-07-22 23:37:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5025702118873596, "perplexity": 6130.671172756491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528290.72/warc/CC-MAIN-20190722221756-20190723003756-00160.warc.gz"}
https://injurystats.wordpress.com/tag/safety-in-numbers/
# Safety in Numbers Hypothesis for Cycling Safety in Numbers is a well known hypothesis in cycling safety. Essentially, the argument is the number of injuries per cyclist decreases as the amount of cycling increases [1,2]. Or put another way, the risk of cycling injury increases when cycling amounts decrease. The mathematical expression for safety in numbers can be written as $I=I_0\left(\dfrac{C}{C_0}\right)^{0.4}$ where I and C represent number of injuries and amount of cycling respectively. The exponent 0.4 has been suggested by Robinson[2]. Is there evidence supporting this phenomena using NSW hospitalization and cycling participation surveys? The data used can be found here [3,4]. Here are plots of the expected (red dashed line) number of head and arm injuries (left panel) and head injuries only (right panel) if the Safety in Numbers hypothesis is true. The black line represents the observed number of injuries by year. There is a clear divergence between what was observed and what was expected. Therefore, the evidence does not support Robinson’s safety in numbers hypothesis. In fact, the estimated exponent is 0.94 (95% CI: 0.59-1.30) and suggests increases in cycling is associated with a roughly equal increase in injury. A more detailed analysis (and other cycling-related analyses) can be found in our peer-reviewed paper[5]. 1. Jacobsen, P.L. (2003). Safety in numbers: more walkers and bicyclists, safer walking and bicycling. Injury Prevention, 9, 205-209. 2. Robinson, D.L. (2005). Safety in numbers in Australia: more walkers and bicyclists, safer walking and bicycling. Health Promotion Journal of Australia, 16, 47-51. 3. Olivier, J., Walter, S.R., & Grzebieta, R.H. (2013). Long-term bicycle related head injury trends for New South Wales, Australia following mandatory helmet legislation. Accident Analysis and Prevention, 50, 1128–1134. 4. Australian Bureau of Statistics, 2001. Participation in Exercise, Recreation and Sport 2001. ABS, Canberra. 5. Olivier, J., Grzebieta, R., Wang, J.J.J. & Walter, S. (2013). Statistical Errors in Anti-Helmet Arguments. Australasian College of Road Safety Conference.
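The exponent comparison amounts to a log-log regression. A minimal sketch in Python (my own illustration; the variable names are assumptions, and the arrays would hold the hospitalisation and participation figures from [3,4]):

```python
import numpy as np

def expected_injuries(cycling, i0, c0, exponent=0.4):
    """Injuries predicted by the safety-in-numbers power law I = I0 * (C/C0)**exponent."""
    return i0 * (cycling / c0) ** exponent

def fit_exponent(cycling, injuries):
    """Estimate b in log I = log I0 + b * log C by ordinary least squares."""
    b, log_i0 = np.polyfit(np.log(cycling), np.log(injuries), 1)
    return b
```

An estimated exponent near 0.4 would support Robinson's hypothesis; the reported estimate of 0.94 (95% CI: 0.59-1.30) instead sits close to 1.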
2017-10-22 15:36:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3190325200557709, "perplexity": 7044.576885642137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825308.77/warc/CC-MAIN-20171022150946-20171022170946-00140.warc.gz"}
https://www.miniphysics.com/scalar-and-vector-quantities.html
Scalar and vector quantities

Scalar quantities are quantities in which the magnitude is stated, but the direction is either not applicable or not specified.

Examples:
• Length
• Volume
• Mass
• Speed

Vector quantities are quantities in which both the magnitude and the direction must be stated.

Examples:
• Force
• Velocity
• Displacement
• Acceleration

It does not make sense to say that you're applying a force of 10 N on a box without stating the direction. The force can be directed anywhere. Hence, you need to specify the direction in which the force is applied: 10 N in the horizontal direction on the box.

There will be more on how to differentiate speed, displacement, velocity and acceleration in a later chapter.
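As a small illustration (a sketch with made-up numbers), the distinction shows up naturally in code: a vector keeps its components (and hence its direction), while taking its magnitude collapses it to a scalar:

import numpy as np

# A 10 N force applied in the horizontal direction, stored as a vector
F = np.array([10.0, 0.0])        # components carry the direction
magnitude = np.linalg.norm(F)    # a scalar: the direction is discarded
print(magnitude)                 # 10.0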
2022-05-23 17:47:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8557095527648926, "perplexity": 1011.1897053660409}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00010.warc.gz"}
https://calculator.academy/negative-binomial-calculator/
Enter the number of trials, number of successes, and probability of success on a trial into the calculator to determine the negative binomial mean.

## Negative Binomial Formula

The following formula can be used to calculate the mean (expected number of failures) of a negative binomial distribution.

P = k*(1-p)/p

• Where P is the expected number of failures before the k-th success
• p is the probability of success
• k is the number of successes

## Negative Binomial Definition

A negative binomial distribution, also called a Pascal distribution, models the number of failures that occur before a fixed number of successes is reached in a sequence of independent Bernoulli trials.

## Negative Binomial Example

How to calculate the negative binomial mean?

1. First, determine the number of successes. Measure the total number of successes, k.
2. Next, determine the probability of success. Calculate the probability of success, p, on a single trial.
3. Finally, calculate the expected number of failures. Use the formula above.

## FAQ

What is a negative binomial?

Also known as the Pascal distribution, a negative binomial distribution models the number of failures before a fixed number of successes in a sequence of independent Bernoulli trials.
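A minimal sketch of the calculator's formula (the function name is ours, not the site's):

def expected_failures(k, p):
    # mean number of failures before the k-th success in Bernoulli(p) trials
    return k * (1 - p) / p

print(expected_failures(5, 0.25))  # 15.0 expected failures before 5 successes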
2021-10-17 15:34:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336942791938782, "perplexity": 728.0110734627191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585178.60/warc/CC-MAIN-20211017144318-20211017174318-00664.warc.gz"}
https://ltwork.net/twice-the-sum-of-a-number-and-9-is-30-a-translate-the-statement--7627270
Twice the sum of a number and 9 is 30.

a) Translate the statement into a mathematical equation.
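A sketch of the expected translation, letting $n$ stand for the unspecified number: "twice the sum of a number and 9" is $2(n+9)$, so the statement becomes $2(n+9)=30$. Dividing both sides by 2 gives $n+9=15$, hence $n=6$.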
2023-01-27 17:55:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30671751499176025, "perplexity": 2020.549079269032}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00386.warc.gz"}
https://www.gamedev.net/forums/topic/473447-dx9-vbuffer-ibuffer-question/
# DX9 Vbuffer Ibuffer question

## Recommended Posts

Hi everyone, I thought I figured out how DrawIndexedPrimitive works until it completely confused me 10 minutes ago. I was trying to draw an arbitrary polygon. It has 6 sides and 6 vertices. So basically you start with a vertex and play connect the dots, and voila! there is the polygon. In case you want to picture it:

Vertices:
v[0] = VertexPos(-1.0f, 2.0f, 1.0f);
v[1] = VertexPos(2.0f, 1.75f, 1.0f);
v[2] = VertexPos(1.0f, 0.0f, 1.0f);
v[3] = VertexPos(3.0f, -0.25f, 1.0f);
v[4] = VertexPos(0.0f, -2.0f, 1.0f);
v[5] = VertexPos(-2.0f, 1.0f, 1.0f);

Indices:
k[0] = 0; k[1] = 1;
k[2] = 1; k[3] = 2;
k[4] = 2; k[5] = 3;
k[6] = 3; k[7] = 4;
k[8] = 4; k[9] = 5;
k[10] = 5; k[11] = 0;

So, I figured, to the "number of vertices" parameter of the function, I needed to pass 6 vertices since there are 6. Also for the "number of primitives", I thought I should pass 6 again, because it has 6 sides. And I am using line strips. However it didn't completely draw my shape. So I decided to play with numbers, completely confused, and found out that the smallest parameters that work are like this:

HR(gd3dDevice->DrawIndexedPrimitive(D3DPT_LINESTRIP, 0, 0, 0, 0, 11));

Can somebody explain to me why, even though I pass 0 vertices, it draws the whole shape, and why I need to pass at least 11 primitives? Thanks

##### Share on other sites

In your situation, an index buffer is barely beneficial. Try the following:

// Indices
k[0] = 0; k[1] = 1; // Connect vertex 0 to vertex 1 to
k[2] = 2; k[3] = 3; // vertex 2 to vertex 3 to
k[4] = 4; k[5] = 5; // vertex 4 to vertex 5 and back to
k[6] = 0;           // vertex 0

// Draw call
gd3dDevice->DrawIndexedPrimitive(D3DPT_LINESTRIP,
    0,    // You'll basically always be using zero here
    0, 6, // Starting from vertex 0, you're using all of your 6 vertices
    0,    // The first index you're using is index 0
    6);   // You're going to draw 6 primitives (6 lines)

EDIT: As for why it draws even if you specify 0 vertices, you should know that, when drawing indexed primitives, the vertices you specify are only a hint to the graphics driver so it knows which vertices will be indexed by your draw call. If you pass 0 vertices, I bet the driver considers that any vertex in your vertex buffer could be indexed. It's a simple matter of optimization.

EDIT 2: The way you defined your indices, by pairs of two points, corresponds to the D3DPT_LINELIST primitive type, not the D3DPT_LINESTRIP primitive type. More on this on msdn

##### Share on other sites

Oh I see. Thanks!

##### Share on other sites

What you were making was a LINELIST. In a linelist, you can have separate unconnected lines, which requires the extra indices to point out all the start and end points.
2018-02-21 03:51:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20306311547756195, "perplexity": 1998.6920653824166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813322.19/warc/CC-MAIN-20180221024420-20180221044420-00793.warc.gz"}
https://zbmath.org/?q=an:0766.62038&format=complete
# zbMATH — the first resource for mathematics

On estimable and locally-estimable functions in the non-linear regression model. (English) Zbl 0766.62038

Summary: The nonlinear regression model $$y=\eta(\vartheta)+\varepsilon$$ with an error vector $$\varepsilon$$ having zero mean and covariance matrix $$\delta^ 2I$$ $$(\delta^ 2$$ unknown) is considered. Some sufficient conditions for the estimability and local estimability of a function of the parameter $$\vartheta$$ are obtained, whilst the regularity of the model (i.e. the regularity of the Jacobi matrix of the function $$\eta(\vartheta))$$ is not required. In addition, the paper refines the results of A. H. Bird and G. A. Milliken [Commun. Stat., Theory Methods 5, 999-1012 (1976)] concerning the local reparameterization of singular models to regular models.

##### MSC:
62J02 General nonlinear regression
62F10 Point estimation
62H12 Estimation in multivariate analysis

Full Text:

##### References:
[1] A. H. Bird, G. A. Milliken: Estimable functions in the non-linear model. Commun. Statist. Theory Methods 5 (1976), 11, 999-1012. · Zbl 0341.62055
[2] J. Dieudonné: Treatise on Analysis. Volume III. Academic Press, New York 1972. · Zbl 0268.58001
[3] M. Golubickij, V. Gijemin: Ustojčivije otobraženija i jich osoběnnosti (Stable Mappings and Their Singularities). Mir, Moskva 1974.
[4] V. Jarník: Diferenciální počet II (Differential Calculus). NČSAV, Praha 1956.
[5] H. Kohoutková: Exponential regression. Fasciculi Mathematici Nr. 20 (1989), 111-116.
[6] A. Pázman: Nonlinear least squares - uniqueness versus ambiguity. Math. Operationsforsch. Statist., Ser. Statist. 15 (1984), 323-336. · Zbl 0562.62053
[7] R. C. Rao: Lineární metody statistické indukce a jejich aplikace (Linear Methods of Statistical Induction and their Applications). Academia, Praha 1978.
[8] V. I. Smirnov: Kurs vysšej matematiki IV (A Course of Higher Mathematics). Gos. izd. tech.-teor. literatury, Moskva 1953.

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2022-01-20 05:26:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4884951412677765, "perplexity": 6469.844037724052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00390.warc.gz"}
https://www.researcher-app.com/paper/148817
# On the relationship between the plateau modulus and the threshold frequency in peptide gels.

L. G. Rizzi

Relations between static and dynamic viscoelastic responses in gels can be very illuminating and may provide useful tools to study the behavior of bio-materials such as protein hydrogels. An important example comes from the viscoelasticity of semisolid gel-like materials, which is characterized by two regimes: a low-frequency regime where the storage modulus $G^{\prime}(\omega)$ displays a constant value $G_{\text{eq}}$, and a high-frequency power-law stiffening regime, where $G^{\prime}(\omega) \sim \omega^{n}$. Recently, by considering Monte Carlo simulations to study the formation of peptide networks, we found an intriguing and somewhat related power-law relationship between the plateau modulus and the threshold frequency, i.e. $G_{\text{eq}} \sim ( \omega^{*} )^{\Delta}$ with $\Delta = 2/3$. Here we present a simple theoretical approach to describe that relationship and test its validity by using experimental data from a $\beta$-lactoglobulin gel. We show that our approach can be used even in the coarsening regime where the fractal model fails. Remarkably, the very same exponent $\Delta$ is found to describe the experimental data.

Publisher URL: http://arxiv.org/abs/1711.02689

DOI: arXiv:1711.02689v1
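As a back-of-the-envelope illustration (with invented sample values, not data from the paper), the exponent $\Delta$ can be read off from two $(\omega^{*}, G_{\text{eq}})$ pairs on the log-log scale:

import math

# Invented (threshold frequency, plateau modulus) pairs for two gel samples
omega1, G1 = 0.5, 120.0
omega2, G2 = 4.0, 480.0
delta_hat = math.log(G2 / G1) / math.log(omega2 / omega1)
print(delta_hat)  # ~0.667, i.e. the Delta = 2/3 reported in the paper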
2022-06-25 19:50:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40268656611442566, "perplexity": 1581.1767964887972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00101.warc.gz"}
https://ecommons.cornell.edu/handle/1813/5748?show=full
dc.contributor.author  Moczydlowski, Wojciech  en_US
dc.date.accessioned  2007-04-04T20:42:37Z
dc.date.available  2007-04-04T20:42:37Z
dc.date.issued  2006-10-11  en_US
dc.identifier.citation  http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cis/TR2006-2051  en_US
dc.identifier.uri  https://hdl.handle.net/1813/5748
dc.description.abstract  We propose a set theory strong enough to interpret powerful type theories underlying proof assistants such as LEGO and also possibly Coq, which at the same time enables program extraction from constructive proofs. For this purpose, we axiomatize an impredicative constructive version of Zermelo-Fraenkel set theory IZF with Replacement and $\omega$-many inaccessibles, which we call IZF_{R\omega}. Our axiomatization of IZF_{R\omega} utilizes set terms, an inductive definition of inaccessible sets, and the mutually recursive nature of the equality and membership relations. It allows us to define a weakly-normalizing typed lambda calculus \lambda Z_\omega corresponding to proofs in IZF_{R\omega} according to the Curry-Howard isomorphism principle. We use realizability to prove the normalization theorem, which provides a basis for extracting programs from IZF_{R\omega} proofs.  en_US
dc.format.extent  532052 bytes
dc.format.mimetype  application/postscript
dc.language.iso  en_US  en_US
dc.publisher  Cornell University  en_US
dc.subject  computer science  en_US
dc.subject  technical report  en_US
dc.title  A Normalizing Intuitionistic Set Theory with Inaccessible Sets  en_US
dc.type  technical report  en_US
2022-09-26 16:02:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.602875828742981, "perplexity": 10788.268862175775}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00154.warc.gz"}
https://socratic.org/questions/a-circle-is-described-by-the-equation-x-12-2-y-32-2-49-what-are-the-coordinates-
# A circle is described by the equation (x−12)^2+(y−32)^2=49. What are the coordinates for the center of the circle and the length of the radius?

Center $\left(h , k\right) = \left(12 , 32\right)$, radius $r = 7$.

#### Explanation:

The center-radius form of the equation of a circle is given by ${\left(x - h\right)}^{2} + {\left(y - k\right)}^{2} = {r}^{2}$ where $\left(h , k\right)$ is the center and $r$ is the radius of the circle. From the given equation ${\left(x - 12\right)}^{2} + {\left(y - 32\right)}^{2} = 49$, which is equivalent to ${\left(x - 12\right)}^{2} + {\left(y - 32\right)}^{2} = {7}^{2}$, clearly, by inspection, the center is $\left(h , k\right) = \left(12 , 32\right)$ and the radius is $r = 7$.

Have a nice day!!! From the Philippines...
2020-04-05 21:14:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9234458804130554, "perplexity": 648.7636175827572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00163.warc.gz"}
https://mathoverflow.net/questions/321156/algebraic-geometry-over-the-surreal-and-surrcomplex-numbers
Algebraic Geometry Over the Surreal and Surcomplex Numbers

I was wondering whether or not there is some kind of theory of algebraic geometry over the field of surreal and surcomplex numbers?

• I believe that there is some theorem to the effect that "algebraic geometry over any algebraically closed field of characteristic 0 may as well be over $\mathbb{C}$", which would include the surcomplex numbers; I'm not sure about the surreals though. Jan 18, 2019 at 1:09
• @AlecRhea I think there's some theorem of the sort also for real closed fields, saying you might as well do it over the reals. Jan 18, 2019 at 9:28
• @DenisNardin Ah, that seems reasonable; if either of us could produce a reference it'd be ideal though :-). I think it's worth mentioning that these theorems aren't necessarily telling us that there is no interesting and new 'geometry' that takes place over the surreals/surcomplex numbers which can be captured algebraically, but rather that the standard tools of modern algebraic geometry will not be sensitive to any of the new phenomena we might encounter. It also seems like model-theoretic arguments are proving these results (ref?), so the tools under discussion would be first order (cont.) Jan 25, 2019 at 23:46
• and perhaps some second-order machinery would be more successful at algebraically differentiating between geometry over the real/complex vs surreal/surcomplex numbers. Jan 25, 2019 at 23:47
2022-08-11 11:05:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5439642667770386, "perplexity": 418.9796367245167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571284.54/warc/CC-MAIN-20220811103305-20220811133305-00554.warc.gz"}
http://gate-exam.in/CE/Syllabus/Structural-Engineering/Structural-Analysis
# Questions & Answers of Structural Analysis

#### Topics of Structural Analysis

37 Question(s) | Weightage 06 (Marks)

The figure shows a two-hinged parabolic arch of span L subjected to a uniformly distributed load of intensity q per unit length. The maximum bending moment in the arch is equal to

A planar truss tower structure is shown in the figure. Consider the following statements about the external and internal determinacies of the truss.
(P) Externally Determinate
(Q) External Static Indeterminacy = 1
(R) External Static Indeterminacy = 2
(S) Internally Determinate
(T) Internal Static Indeterminacy = 1
(U) Internal Static Indeterminacy = 2
Which one of the following options is correct?

The value of M in the beam ABC shown in the figure is such that the joint B does not rotate. The value of the support reaction (in kN) at B should be equal to __________

Consider the beam ABCD shown in the figure. For a moving concentrated load of 50 kN on the beam, the magnitude of the maximum bending moment (in kN-m) obtained at the support C will be equal to __________

Consider the frame shown in the figure. If the axial and shear deformations in different members of the frame are assumed to be negligible, the reduction in the degree of kinematic indeterminacy would be equal to

Consider the portal frame shown in the figure and assume the modulus of elasticity, $E=2.5\times10^4$ MPa, and the moment of inertia, $I=8\times10^8$ mm$^4$, for all the members of the frame. The rotation (in degrees, up to one decimal place) at the rigid joint Q would be __________

Consider the plane truss with load P as shown in the figure. Let the horizontal and vertical reactions at the joint B be HB and VB, respectively, and VC be the vertical reaction at the joint C. Which one of the following sets gives the correct values of VB, HB and VC?

A plane truss with applied loads is shown in the figure. The members which do not carry any force are

The kinematic indeterminacy of the plane truss shown in the figure is

The portal frame shown in the figure is subjected to a uniformly distributed vertical load w (per unit length). The bending moment in the beam at the joint 'Q' is

For the beam shown below, the stiffness coefficient K22 can be written as

For the 2D truss with the applied loads shown below, the strain energy in the member XY is _______ kN-m. For member XY, assume AE = 30 kN, where A is the cross-section area and E is the modulus of elasticity.

A guided support as shown in the figure below is represented by three springs (horizontal, vertical and rotational) with stiffnesses kx, ky and $k_\theta$ respectively. The limiting values of kx, ky and $k_\theta$ are:

A simply supported beam AB of span L = 24 m is subjected to two wheel loads acting at a distance d = 5 m apart, as shown in the figure below. Each wheel transmits a load P = 3 kN and may occupy any position along the beam. If the beam is an I-section having section modulus S = 16.2 cm$^3$, the maximum bending stress (in GPa) due to the wheel loads is __________
The degree of static indeterminacy of a rigid-jointed frame PQR supported as shown in the figure is

In a beam of length L, four possible influence line diagrams for shear force at a section located at a distance of $\frac{L}{4}$ from the left end support (marked as P, Q, R and S) are shown below. The correct influence line diagram is

For the cantilever beam of span 3 m (shown below), a concentrated load of 20 kN applied at the free end causes a vertical displacement of 2 mm at a section located at a distance of 1 m from the fixed end. If a concentrated vertically downward load of 10 kN is applied at the section located at a distance of 1 m from the fixed end (with no other load on the beam), the maximum vertical displacement in the same beam (in mm) is __________

For the truss shown below, the member PQ is short by 3 mm. The magnitude of the vertical displacement of joint R (in mm) is __________

The static indeterminacy of the two-span continuous beam with an internal hinge, shown below, is __________

Considering the symmetry of the rigid frame as shown below, the magnitude of the bending moment (in kNm) at P (preferably using the moment distribution method) is

The pin-jointed 2-D truss is loaded with a horizontal force of 15 kN at joint S and another 15 kN vertical force at joint U, as shown. Find the force in member RS (in kN) and report your answer taking tension as positive and compression as negative. __________

All members in the rigid-jointed frame shown are prismatic and have the same flexural stiffness EI. Find the magnitude of the bending moment at Q (in kNm) due to the given loading. __________

A uniform beam (EI constant) PQ in the form of a quarter-circle of radius R is fixed at end P and free at the end Q, where a load W is applied as shown. The vertical downward displacement at the loaded point Q is given by ${\delta}_{q}=\beta \left(\frac{W{R}^{3}}{EI}\right)$. Find the value of $\beta$ (correct to 4 decimal places). __________

A uniform beam weighing 1800 N is supported at E and F by cable ABCD. Determine the tension (in N) in segment AB of this cable (correct to 1 decimal place). Assume the cables ABCD, BE and CF to be weightless. __________

Beam PQRS has internal hinges in spans PQ and RS as shown. The beam may be subjected to a moving distributed vertical load of maximum intensity 4 kN/m of any length anywhere on the beam. The maximum absolute value of the shear force (in kN) that can occur due to this loading just to the right of support Q shall be:

For the truss shown in the figure, the force in the member QR is

A three-hinged parabolic arch having a span of 20 m and a rise of 5 m carries a point load of 10 kN at quarter span from the left end, as shown in the figure. The resultant reaction at the left support and its inclination with the horizontal are respectively

The degree of static indeterminacy of a rigidly jointed frame in a horizontal plane and subjected to vertical loads only, as shown in the figure below, is

A rigid bar GH of length L is supported by a hinge and a spring of stiffness K as shown in the figure below.
The buckling load, Pcr, for the bar will be

The degree of static indeterminacy of the rigid frame having two internal hinges, as shown in the figure below, is

The members EJ and IJ of a steel truss shown in the figure below are subjected to a temperature rise of 30°C. The coefficient of thermal expansion of steel is 0.000012 per °C per unit length. The displacement (mm) of joint E relative to joint H along the direction HE of the truss is

The span(s) to be loaded uniformly for maximum positive (upward) reaction at support P, as shown in the figure below, is (are)

The stiffness coefficient kij indicates

The right triangular truss is made of members having an equal cross-sectional area of 1550 mm$^2$ and Young's modulus of $2\times10^5$ MPa. The horizontal deflection of the joint Q is

The influence line diagram (ILD) shown is for the member

A two-span continuous beam having equal spans each of length L is subjected to a uniformly distributed load w per unit length. The beam has constant flexural rigidity. The reaction at the middle support is

A two-span continuous beam having equal spans each of length L is subjected to a uniformly distributed load w per unit length. The beam has constant flexural rigidity. The bending moment at the middle support is
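A general note on the static-indeterminacy questions above (a standard counting rule, stated here for reference since the figures are not reproduced): for a plane rigid-jointed frame, $D_s = (3m + r) - (3j + c)$, where $m$ is the number of members, $r$ the number of external reaction components, $j$ the number of joints, and $c$ the number of condition equations introduced by internal releases (an internal hinge joining $n$ members releases $n-1$ conditions).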
2018-08-17 01:37:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 23, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7221695184707642, "perplexity": 1961.4925567082018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211403.34/warc/CC-MAIN-20180817010303-20180817030303-00377.warc.gz"}
https://math.stackexchange.com/questions/3762878/is-it-possible-to-justify-these-approximations-about-prime-numbers
# Is it possible to justify these approximations about prime numbers?

A recently closed question asked for a possible closed form of the infinite summation $$f(a)=\sum _{i=1}^{\infty } a^{-p_i}$$ for which I already proposed a first simple but totally empirical approximation. Since we quickly face very small numbers, I tried to find approximations of $$g(a)=\Big[\sum _{i=1}^{\infty } a^{-p_i}\Big]^{-1} \qquad \text{and} \qquad h(a)=\Big[\sum _{i=1}^{\infty } (-1)^{i-1} a^{-p_i}\Big]^{-1}$$ All calculations were done with integer values of $a$ in the range $2 \leq a \leq 1000$. What I obtained is $$\color{blue}{g(a)\sim\frac{(a-1) (2a^3+2a-1)}{2 a^2}}\qquad \text{and} \qquad \color{blue}{h(a)\sim\frac{(a-1) \left(a^3+2 a^2+3 a+4\right)}{a^2}}$$ When the corresponding curve fits are performed, in both cases we obtain $R^2 > 0.999999999$. For the investigated values of $a$, $$\text{Round}\left[\frac{(a-1) (2a^3+2a-1)}{2 a^2}-{g(a)}\right]=0$$ $$\text{Round}\left[\frac{(a-1) \left(a^3+2 a^2+3 a+4\right)}{a^2}-{h(a)}\right]=0$$ Not being very used to working with prime numbers, I ask: is there any way to justify, even partly, these approximations?

• Clearly any approximation to $g(a)$ needs the factor $(a-1)$ because the sum blows up at $a=1$. I'd try truncating $(a-1)\sum a^{-p_i}$ to get some approximations and then combine them to hopefully get a better approximation. – user10354138 Jul 20 at 5:26
• @user10354138. The $(a-1)$ term was obvious and was included from the start because of what you said (for sure). – Claude Leibovici Jul 20 at 5:31

These estimates are correct within a reasonable degree of accuracy. Below is the explanation for $f(a)$; the case for $h(a)$ can be dealt with similarly. We have $$f(a) = \frac{1}{a^2} + \frac{1}{a^3} + \frac{1}{a^5} + O\bigg(\frac{1}{a^7}\bigg)$$ whereas $$\frac{2a^2}{(a-1)(2a^3 + 2a - 1)} = \frac{1}{a^2} + \frac{1}{a^3} + \frac{1}{2a^5} + \frac{3}{2a^6} + O\bigg(\frac{1}{a^7}\bigg).$$ Hence, $$f(a) = \frac{2a^2}{(a-1)(2a^3 + 2a - 1)} + O\bigg(\frac{1}{a^5}\bigg)$$ For large values of $a$ the error would obviously be negligible, since it grows no faster than a constant times $a^{-5}$. So this may or may not be a good estimate depending upon whether you are satisfied with the magnitude of the error term $O(a^{-5})$.

The best possible estimate of the form $\dfrac{Ax^2}{(x-1)(Bx^3 + Cx^2 + Dx + E)}$ is obtained by the Laurent series expansion about the point $x = \infty$ and equating the coefficients of the smallest non-prime powers to zero, which gives $A = B = D = 1, C = 0, E = -1$. Hence we have $$f(a) = \frac{a^2}{(a-1)(a^3 + a - 1)} + O\bigg(\frac{1}{a^6}\bigg)$$ which reduces the error by a factor of $a$.

Update 21-Jul-2020: However, using basic properties of primes we can get remarkably sharper estimates. Since every prime $\ge 5$ is of the form $6k \pm 1$, by summing up the geometric sequences $a^{-6k-1} + a^{-6k+1}$ for $k = 1,2,\ldots, \infty$, adding $a^{-2} + a^{-3}$, and taking advantage of the fact that the density of primes among the first few numbers of this form is high, we get $$f(a) = \frac{a^7 + a^6 + a^4 + a^2 -a - 1}{a^3(a^6 - 1)} + O\bigg(\frac{1}{a^{25}}\bigg)$$

• The estimate being a bit too large (by $a^{-5}$ plus smaller terms) also makes a lot of sense, since for smaller $a$ (i.e. those at the lower end of the range) the naive truncation as $a^{-2}+a^{-3}+a^{-5}$ will be a worse underestimate than it will at the high end of the range.
– Steven Stadnicki Jul 20 at 17:51
• This is a fantastic answer with nice reasoning, and the update is just incredibly nice. Would you do the same for $\frac 1 {h(a)}$? I am curious. Thanks and cheers :-) – Claude Leibovici Jul 21 at 6:17
• If I may ask, how did you arrive at this result? – Claude Leibovici Jul 22 at 13:19
• @ClaudeLeibovici Since every prime $\ge 5$ is of the form $6k \pm 1$, by summing up the geometric sequences $a^{-6k-1} + a^{-6k+1}$ for $k = 1,2,\ldots, \infty$, adding $a^{-2} + a^{-3}$, and taking advantage of the fact that the density of primes among the first few numbers of this form is high, we get the estimate with error $O(a^{-25})$. This is because $25$ is the smallest number of this form which is not a prime. – Nilotpal Kanti Sinha Jul 22 at 15:17
• Thanks for the clear explanation. Cheers :-) – Claude Leibovici Jul 22 at 15:20

This is a long comment. Here is a possible approach to estimate $f(a)$. Using Dusart's approximation (later improved by Axler), the $n$-th prime satisfies $$n\log n + n\log\log n - n < p_n < n\log n + n\log\log n$$ where the lower bound holds for all $n \ge 1$ and the upper bound holds for $n \ge 6$. Hence for $a > 1$, we obtain an inequality of the form $$\frac{1}{a^2} + \frac{1}{a^3} + \frac{1}{a^5} + \sum_{n = 6}^{\infty}\frac{1}{a^{n\log n + n\log\log n }} < \sum_{n = 1}^{\infty} \frac{1}{a^{p_n}} < \sum_{n = 1}^{\infty}\frac{1}{a^{n\log n + n\log\log n - n }}$$ This can give some tight approximations if we can convert the left and right sums to closed-form approximations with controllable error terms, which however is the more tedious task.

• As you say, "if we can convert ..."! Thanks & cheers :-) – Claude Leibovici Jul 20 at 7:20

I found this estimate for $g$: $\ g(a)\sim \dfrac{a^2(a^2-1)}{a^2+a-1}$.

## First inequality

$$f(a) = \displaystyle \sum_{i=1}^{+\infty} \dfrac{1}{a^{p_i}} \leqslant \sum_{i=2}^{+\infty} \dfrac{1}{a^i} -\sum_{i=2}^{+\infty} \dfrac{1}{a^{2i}} = \dfrac{1}{a^2}\dfrac{1}{1-\dfrac{1}{a}} -\dfrac{1}{a^4}\dfrac{1}{1-\dfrac{1}{a^2}}$$ $$f(a)\leqslant \dfrac{1}{a(a-1)}-\dfrac{1}{a^2(a^2-1)}= \dfrac{a^2+a-1}{a^2(a^2-1)}$$ $$\fbox{g(a)\geqslant \dfrac{a^2(a^2-1)}{a^2+a-1}}$$

## Second inequality

$$f(a) \geqslant \dfrac{1}{a^2}+\dfrac{1}{a^3}+\dfrac{1}{a^5}+\dfrac{1}{a^7}$$ $$\fbox{g(a)\leqslant \dfrac{a^7}{a^5+a^4+a^2+1}}$$

## Quality of the approximation

$$0\leqslant g(a)-\dfrac{a^2(a^2-1)}{a^2+a-1}\leqslant \dfrac{a^7}{a^5+a^4+a^2+1} - \dfrac{a^2(a^2-1)}{a^2+a-1}$$ And: $$\dfrac{a^7}{a^5+a^4+a^2+1} - \dfrac{a^2(a^2-1)}{a^2+a-1} = \dfrac{a^2}{(a^5+a^4+a^2+1)(a^2+a-1)}$$ So $$\fbox{0\leqslant g(a)-\dfrac{a^2(a^2-1)}{a^2+a-1}\leqslant \dfrac{1}{a^5}}$$ And: $$\forall a \in [2,+\infty[ \ , \ \text{Round} \left( g(a)-\dfrac{a^2(a^2-1)}{a^2+a-1}\right) = 0$$
• Thank you very much! Working the bounds is superb. Cheers :) – Claude Leibovici Jul 21 at 6:20
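A quick numerical cross-check of the $O(a^{-25})$ approximation from the accepted answer (a sketch; truncating the prime sum at $p < 500$ is our assumption, harmless in double precision):

from sympy import primerange

def f(a):
    # truncated prime sum; terms beyond a**-500 are far below double precision
    return sum(float(a) ** -p for p in primerange(2, 500))

def approx(a):
    return (a**7 + a**6 + a**4 + a**2 - a - 1) / (a**3 * (a**6 - 1))

for a in (2, 3, 10):
    print(a, f(a) - approx(a))  # differences of order a**-25 (tiny)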
2020-08-13 22:05:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9611548781394958, "perplexity": 343.0137498565685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739073.12/warc/CC-MAIN-20200813191256-20200813221256-00006.warc.gz"}
https://www.ias.ac.in/describe/article/boms/043/0229
# Studying the change in organic light-emitting diode performance at various vacuum-deposition rates of hole and electron transport layers

# Fulltext
https://www.ias.ac.in/article/fulltext/boms/043/0229

# Keywords
Novel gradient and barrier structures; controlling the vacuum-deposition rate; electroluminescence (EL) efficiency; charge hopping rate; charge recombination zone

# Abstract
The electroluminescence (EL) of classic and thermally activated delayed fluorescence (TADF) organic light-emitting diodes (OLEDs) at various vacuum-deposition rates of the hole and electron transport layers (HTL and ETL) has been studied. The external quantum efficiency (EQE) measurements showed that the best-performing devices were those with a high charge carrier balance inside the emitting layer, which was engineered using hole and electron current manipulation as a result of vacuum-deposition rate control. Changing the vacuum-deposition rate of the HTL and ETL leads to a change in the maximum EQE (EQE$_{\rm max}$) of the classic and TADF OLEDs without obvious changes in the EQE roll-off ratio at high current density. We used a simple analytical model to clarify that the enhanced hole current in the HTL at high deposition rates is dominated by high hole mobility, attributed to an increased hole hopping rate due to the reduction of the intermolecular separation between horizontally oriented $N,N'$-diphenyl-$N,N'$-bis(1-naphthyl)-1,1$'$-biphenyl-4,4$'$-diamine ($\alpha$-NPD) molecules. The increase in the electron current of the tris-(8-hydroxyquinoline) aluminium (Alq$_3$) ETL at low deposition rate was ascribed to high electron injection from the cathode into the ETL, established by fabricating and comparing the $J–V$ characteristics of two electron-only devices that differ in the deposition rate of the ETL near the cathode interface. Finally, we introduced an OLED with novel gradient and barrier structures for the emitting layer, in which the injected charge carriers recombine inside an added recombination zone to raise the radiative recombination and efficiency of the device. Our results demonstrate that the EL efficiency of an OLED can be changed by controlling the vacuum-deposition rate of the organic layers.

# Author Affiliations
1. Laser and Plasma Research Institute, Shahid Beheshti University, Tehran 1983963113, Iran

# Bulletin of Materials Science
Volume 44, 2021
2021-09-24 06:15:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25428980588912964, "perplexity": 9280.03672798148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00048.warc.gz"}
http://heycjt.blogspot.com/2012/08/logistic-regression-model-likelihood.html
## Tuesday, August 21, 2012

### Logit Model Likelihood Function

In a previous post, I mapped out the relationship between the inverse logit and logistic function. In this post, I'll present the likelihood function followed by the log likelihood.

Since the likelihood is expressed in terms of frequencies (according to the text I'm referencing, "Biostatistical Methods: The Assessment of Relative Risks" by John Lachin), consider the following 2x2 table where the Response is some dependent variable, the Group is a binary independent variable (e.g. exposure), and the cell values denote the frequency in each cell. The marginal totals are represented by m1, m2, n1, n2. The grand total is N.

|            | Group 1         | Group 2         | Total |
|------------|-----------------|-----------------|-------|
| Response + | a $(\pi_1)$     | b $(\pi_2)$     | m1    |
| Response − | c $(1 - \pi_1)$ | d $(1 - \pi_2)$ | m2    |
| Total      | n1              | n2              | N     |

The generic likelihood function, $L(\theta)$, is "the total probability of the sample under the assumed model" (pp. 465), denoted thus:

$L(y_1, \cdots , y_N; \theta) = \prod_{i=1}^N f(y_i; \theta)$

If we express the generic likelihood function in terms of the frequencies from the table above (a, b, c, d), then the likelihood function becomes

$L(\pi_1, \pi_2) = \pi^a_1 (1-\pi_1)^c \pi^b_2 (1-\pi_2)^d$

by the fact that the cell probabilities, $\pi_i$, are exponentiated by the number of subjects in each cell (a, b, c, d).

The log likelihood is just the log of the above:

$\ell(\pi_1, \pi_2) = a\:log(\pi_1) + c\:log(1-\pi_1) + b\:log(\pi_2) + d\:log(1-\pi_2)$

Since we'll eventually want to derive the maximum likelihood estimates for $\alpha$ and $\beta$, the log likelihood should be expressed in terms of $\alpha$ and $\beta$ (the substitutions for $\pi_i$ follow from the inverse logit to logistic post):

$\ell(\theta) = a\:log \Bigl[\frac{e^{\alpha + \beta}}{1 + e^{\alpha + \beta}}\Bigr] + c\:log \Bigl[\frac{1}{1 + e^{\alpha + \beta}}\Bigr] + b\:log \Bigl[\frac{e^{\alpha}}{1 + e^{\alpha}}\Bigr] + d\:log \Bigl[\frac{1}{1 + e^{\alpha}}\Bigr]$

Expanding the above (per logarithmic properties), we get

$\ell(\theta) = a\:log\:e^{\alpha + \beta} - a\:log(1 + e^{\alpha + \beta}) - c\:log(1 + e^{\alpha + \beta}) + b\:log\:e^{\alpha} - b\:log\:(1 + e^{\alpha}) - d\:log(1 + e^{\alpha})$

Simplifying and combining terms, we get

$\ell(\theta) = a(\alpha + \beta) + b\:\alpha - (a + c)log(1 + e^{\alpha + \beta}) - (b + d)log(1 + e^{\alpha})$

$\ell(\theta) = (a + b)\alpha + a\:\beta - (n_1)log(1 + e^{\alpha + \beta}) - (n_2)log(1 + e^{\alpha})$

After one more substitution $(a + b = m_1)$, the log likelihood function is as follows, expressed in terms of the frequencies and marginal totals from the 2x2 table:

$\ell(\theta) = (m_1)\alpha + a\:\beta - (n_1)log(1 + e^{\alpha + \beta}) - (n_2)log(1 + e^{\alpha})$

With the log likelihood in this form, the score functions for $\alpha$ and $\beta$ can then be derived and the maximum likelihood estimates obtained (planned for a future blog post).
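As a companion sketch (our own illustration with made-up cell counts, not from the text), the MLEs for this parameterization have closed forms: $\hat{\alpha}$ is the sample logit in group 2, and $\hat{\beta}$ is the log odds ratio:

import numpy as np

# Hypothetical 2x2 counts: (a, c) for group 1, (b, d) for group 2
a, b, c, d = 30, 20, 10, 40

alpha_hat = np.log(b / d)             # logit of group-2 probability, log(b/d)
beta_hat = np.log((a * d) / (b * c))  # log odds ratio, log(ad/bc)

m1, n1, n2 = a + b, a + c, b + d
loglik = (m1 * alpha_hat + a * beta_hat
          - n1 * np.log(1 + np.exp(alpha_hat + beta_hat))
          - n2 * np.log(1 + np.exp(alpha_hat)))
print(alpha_hat, beta_hat, loglik)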
2018-02-19 13:34:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414024949073792, "perplexity": 634.9371306830667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812665.41/warc/CC-MAIN-20180219131951-20180219151951-00579.warc.gz"}
https://discuss.codechef.com/t/isonum-editorial/10999
# ISONUM - Editorial

Author: Sergey Kulik
Tester: Kevin Atienza and Istvan Nagy
Editorialist: Kevin Atienza

Counting

### PROBLEM:

Two numbers are isomorphic if the digits of one are just a relabelling of the digits of the other, and vice versa. Let F(n) be the lowest number isomorphic to n with the same number of digits. Given N and M, find F(1) \bmod M + F(2) \bmod M + \cdots + F(N) \bmod M.

### QUICK EXPLANATION:

Let's call a number V viable if there is an n such that F(n) = V. Note that a number V is viable if and only if, after we treat V as a string of digits and after removing every digit that has already appeared in the string, the resulting string of digits is a prefix of 1023456789. For example, 1010212 is viable.

First, if N = 10^k for some k, compute F(N) \bmod M separately, and decrement N by 1. Therefore, we can now assume that N has at most 11 digits.

Next, collect all viable numbers up to 10^{11} - 1. One can show that the number of such values for a fixed number of digits is equal to a Bell number, so there are fewer than a million numbers in total.

Next, for any such viable number V, let C(V) be the number of n's such that 1 \le n \le N and F(n) = V. C(V) can be computed using a backtracking procedure.

The answer is therefore \sum_V (V\bmod M)\cdot C(V) for all viable numbers V \le N.

### EXPLANATION:

We first note that N can reach up to 10^{11}, so an O(N) solution (or worse) will surely fail. We therefore need to find another strategy to compute the answer.

Let's call a number V viable if there is an n such that F(n) = V. Now, what does a viable number look like? Well, n and V are isomorphic, so each digit in n corresponds to a digit in V. Thus, we must choose a relabelling of n that corresponds to the smallest possible V. In this case, the greedy algorithm works: choose the next digit to relabel in n, and relabel it with the smallest digit not yet used. Care must be taken though that we do not introduce leading zeroes, so we use 1 in the first turn and 0 in the second. For subsequent relabelings, we use the remaining digits in increasing order. For example, in the number 2424929, we replace 2 with 1, 4 with 0, and 9 with 2, so F(2424929) = 1010212. It is not a difficult task to prove why this works.

# Count each F(n)

Now that we know what a viable number looks like, one possible strategy to compute the answer is to enumerate all viable numbers \le N and count for each viable number how many times it appears as F(n) for n \le N. In other words, if C(V) is this number for the viable number V, then the answer is \sum_V (V\bmod M)\cdot C(V) for all viable numbers V \le N.

This is great, assuming we can do the following things:
• Enumerate all viable numbers V
• Compute C(V) for every viable number V

This also assumes that the number of viable numbers \le N is small enough for this algorithm to pass the time limit. Let's count them; specifically, let us count the number of viable numbers < 10^{11}. It can be easily shown that the number of K-digit viable numbers is simply the K-th Bell number, which is the number of ways to partition a set of K labelled elements (to see why, treat the K digit positions as the labelled elements: each partition of the positions corresponds to exactly one viable number via the greedy assignment above). Therefore, we conclude that there are exactly 820987 viable numbers < 10^{11}, which is definitely manageable!
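As a quick sanity check of that count (a sketch, not part of the reference solutions), the Bell numbers can be generated with the Bell triangle and summed for K = 1 to 11:

def bell_numbers(n):
    # Bell triangle: returns [B_1, B_2, ..., B_n]
    row, bells = [1], [1]
    for _ in range(n - 1):
        new_row = [row[-1]]          # each row starts with the previous row's last entry
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
        bells.append(row[-1])
    return bells

print(sum(bell_numbers(11)))  # 820987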
Note: Notice that in the above, we only considered viable numbers of up to 11 digits. However, N can reach up to 10^{11}, which has 12 digits. To get around that, notice that there is exactly one such N having 12 digits, i.e. 10^{11} itself, so if N = 10^{11}, then we should just decrement N by 1 and add F(N) \bmod M separately! Notice also that F(10^{11}) = 10^{11}, so we only need to add 10^{11} \bmod M to the final answer. Therefore, we can now assume that N has at most 11 digits.

# Enumerating all viable numbers

Let's start by enumerating all viable numbers. There's a way to enumerate all such numbers efficiently. To do so, we first write down an equivalent description of viable numbers: A number V is viable if and only if, after we treat V as a string of digits and after removing every digit that has already appeared in the string, the resulting string of digits is a prefix of S := 1023456789. This means that, if we try to construct a viable number from the most significant to the least significant digit, either we reuse a digit, or add the next digit in S that is unused. This gives us a hint to define the following function: E(K,D), which enumerates all K-digit viable "strings", assuming D distinct digits have already been introduced. Here's a possible implementation (the original pseudocode, made runnable as a Python generator):

S = [1,0,2,3,4,5,6,7,8,9]

def E(K, D):
    # enumerates all K-digit viable "strings", assuming D distinct digits can be reused
    if K == 0:
        # output only the empty string
        yield ""
    else:
        # try reusing a digit
        for d in range(D):
            for t in E(K-1, D):
                yield str(S[d]) + t
        # try using a new digit from S
        for t in E(K-1, D+1):
            yield str(S[D]) + t

After defining E(K,D), we can now enumerate all K-digit viable numbers by calling E(K,0).

Note that you should try to optimize your implementation of the above, because it performs about 820987\cdot 11 steps instead of just 820987 (because we're appending strings). One possible implementation would be to output "numbers" instead of "strings"; to "append" a digit in front of the others quickly, precompute powers of ten, multiply the digit by the appropriate power of ten, and add. For example, to attach 5 to 1230, we use the fact that 51230 = 5\cdot 10^4 + 1230.

# Computing C(V)

Now, let's focus on computing C(V), which is the number of integers n \le N isomorphic to V. Let us forget the requirement n \le N first, i.e. let's count how many integers are isomorphic to V.

Notice that if n is isomorphic to V then there is a mapping from the digits of V to the digits of n. If V has D distinct digits, then by the above, we know that exactly the digits 1, 0, 2, \ldots, D-1 appear. So we should count how many ways we can map these digits to other digits to form n. There are 9 choices for 1 (because leading zeroes are not allowed), 9 choices for 0 (because the digit 0 is now available), 8 choices for 2 (we can't reuse digits), 7 choices for 3, etc. Therefore, there are \frac{9\cdot 9!}{(10-D)!} ways to map the digits, and there are \frac{9\cdot 9!}{(10-D)!} integers n isomorphic to V.

Note that \frac{9\cdot 9!}{(10-D)!} is just \frac{9}{10}P_{10,D} where P_{n,k} is the number of k-permutations of n, and \frac{9}{10} is a factor that filters out numbers with leading zeroes.

Now, what if we require n \le N? Some of the n's isomorphic to V might be greater than N. But note that n should have the same number of digits as V. This means that if V has fewer digits than N, then any number isomorphic to V will automatically be \le N, i.e. C(V) = \frac{9\cdot 9!}{(10-D)!}. Thus we can now focus on the case where V and N have the same number of digits.
# Computing C(V)

Now, let’s focus on computing C(V), which is the number of integers n \le N isomorphic to V. Let us forget the requirement n \le N first, i.e. let’s count how many integers are isomorphic to V. Notice that if n is isomorphic to V then there is a mapping from the digits of V to the digits of n. If V has D distinct digits, then by the above, exactly the digits 1, 0, 2, \ldots, D-1 appear in V. So we should count how many ways we can map these digits to other digits to form n. There are 9 choices for the image of 1 (because leading zeroes are not allowed), then 9 choices for 0 (because the digit 0 is now available), 8 choices for 2 (we can’t reuse digits), 7 choices for 3, etc. Therefore, there are \frac{9\cdot 9!}{(10-D)!} ways to map the digits, and hence \frac{9\cdot 9!}{(10-D)!} integers n isomorphic to V.

Note that \frac{9\cdot 9!}{(10-D)!} is just \frac{9}{10}P_{10,D}, where P_{n,k} is the number of k-permutations of n, and \frac{9}{10} is the factor that filters out numbers with leading zeroes.

Now, what if we require n \le N? Some of the n's isomorphic to V might be greater than N. But note that n has the same number of digits as V. This means that if V has fewer digits than N, then any number isomorphic to V is automatically \le N, i.e. C(V) = \frac{9\cdot 9!}{(10-D)!}. Thus we can now focus on the case where V and N have the same number of digits.

Let’s say V and N have K digits, and V has D distinct digits (so D \le K). Let’s refer to the digits of a K-digit integer n as n_{K-1}, \ldots, n_1, n_0, where n_0 is the least significant digit. Let’s define a function C(V,i,\phi), which returns the number of integers n \le N isomorphic to V with the following constraints:

• \phi is the mapping of digits from V to n we have assigned so far, i.e. \phi(V_j) = n_j wherever \phi is defined.
• 0 \le i \le K, and all the digits from V_i to V_{K-1} have already been assigned digits, i.e. all \phi(V_j), j \ge i, are defined and consistent.
• \phi(V_j) = n_j = N_j for all j \ge i, i.e. the digits of n and N from the i-th least significant position upward match.

Note that C(V) is just equal to C(V,K,\{\}), where \{\} denotes an empty mapping.

Now, how do we compute C(V,i,\phi)? Let’s handle a few easy cases first. If i = 0, then all digits have been assigned already, so C(V,i,\phi) is simply 1. Otherwise, we have to figure out which digit to assign V_{i-1} to. If it’s already been assigned, i.e. \phi(V_{i-1}) is defined, then we only have one choice. There are three cases:

• If \phi(V_{i-1}) < N_{i-1}, then any choice for the following digits of n automatically guarantees n < N. Therefore, if we have already assigned t digits, i.e. the number of keys of \phi is t, there are D-t digits of V left to be assigned, chosen from the 10-t digits remaining, so the result is P_{10-t,D-t} = \frac{(10-t)!}{(10-D)!}.
• If \phi(V_{i-1}) > N_{i-1}, then any choice for the following digits of n guarantees n > N, so the result is 0.
• If \phi(V_{i-1}) = N_{i-1}, then we’re good so far, and the result, by definition, is C(V,i-1,\phi).

Now, what happens when V_{i-1} is not yet assigned, i.e. \phi(V_{i-1}) is not yet defined? Then we can choose it to be any digit \le N_{i-1} that isn’t taken yet, i.e. that is not a value \phi(k) for some key k. To do so, we enumerate all such untaken d's, and handle the cases d < N_{i-1} and d = N_{i-1} separately:

• If d < N_{i-1}, then any choice for the following digits of n automatically guarantees n < N. Therefore, if we have already assigned t digits and we assign V_{i-1} to d, there are D-t-1 digits of V left to be assigned, chosen from the 9-t digits remaining, so the result is P_{9-t,D-t-1} = \frac{(9-t)!}{(10-D)!}.
• If d = N_{i-1}, then we’re good so far, and the result, by definition, is C(V,i-1,\phi'), where \phi' is just \phi with the additional mapping V_{i-1} \rightarrow N_{i-1}.

Care must also be taken to avoid adding leading zeroes. Therefore, if i = K, we should exclude the case d = 0 above.

We have now covered all cases, and the above runs in O(K) = O(\log N) time in the worst case! Take note though that it runs faster in many cases.
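Putting the case analysis into code, a rough, unoptimized Python sketch of the counting recursion could look like the following. It walks digits from the most significant end (rather than indexing from the least significant end as above), but it implements the same three cases; the linear scan that computes free_below is exactly what the bitmask trick in the optimizations below replaces:

```python
from math import factorial

def count_leq(V, N):
    """Number of integers n <= N isomorphic to the viable number V,
    where V and N have the same number of digits K (illustrative sketch)."""
    Vd = [int(c) for c in str(V)]   # digits of V, most significant first
    Nd = [int(c) for c in str(N)]
    K = len(Vd)
    D = len(set(Vd))                # number of distinct digits of V

    def rec(i, phi, used):
        # phi maps digits of V to digits of n; used = digits of n taken so far;
        # so far n matches N on every position before i.
        if i == K:
            return 1
        t = len(phi)
        v = Vd[i]
        if v in phi:                # only one choice for this position
            d = phi[v]
            if d < Nd[i]:           # everything below position i is now free
                return factorial(10 - t) // factorial(10 - D)
            if d > Nd[i]:
                return 0
            return rec(i + 1, phi, used)
        # v not assigned yet: try every unused digit d <= Nd[i] (no leading zero)
        lo = 1 if i == 0 else 0
        free_below = sum(1 for d in range(lo, Nd[i]) if d not in used)
        total = free_below * (factorial(9 - t) // factorial(10 - D))
        d = Nd[i]
        if d >= lo and d not in used:
            total += rec(i + 1, {**phi, v: d}, used | {d})
        return total

    return rec(0, {}, set())

# e.g. the only n <= 10 isomorphic to 10 is 10 itself:
assert count_leq(10, 10) == 1
```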
# Minor Optimizations

If you implement all the things above naïvely, you might get the TLE (time limit exceeded) verdict. If that’s the case, you should try optimizing your implementation, because the time limit is a bit tight. Here are a few suggestions:

• When computing C(V,i,\phi), we also need the values K (the number of digits of N and V), D (the number of distinct digits of V) and factorial ratios such as \frac{(10-t)!}{(10-D)!}. You should precompute K and all the \frac{i!}{j!} values beforehand, and D can be computed for each viable number during enumeration, using bitmasks and bit counting to keep track of which digits are present in each number.
• Since we need to access the digits of V from the most significant down, you must be able to extract the digits of V quickly. You can do this either by dividing V by a suitably chosen power of 10 and taking the last digit, or by simply generating the reversal of the digits of V during the enumeration.
• While computing C(V,i,\phi), when V_{i-1} is not yet assigned we need to enumerate all digits d \le N_{i-1} that haven’t yet been used. But take note that for every case d < N_{i-1} the answer is the same, i.e. \frac{(9-t)!}{(10-D)!}, so you can ditch the loop by counting all such digits in constant time, using bitmasks to keep track of which digits are still available.

By carefully implementing the above optimizations, you should be able to get your program accepted within the time limit.

### Time Complexity:

O(B(N) \log N), where B(N) is the number of distinct values in \{F(1), F(2), \ldots, F(N-1)\}. Note that B(10^{11}) = 820987.

### AUTHOR’S AND TESTER’S SOLUTIONS:

@admin, the solutions are not visible. It says, “This XML file does not appear to have any style information associated with it…”

Can you please give a hint on how to calculate C(V) efficiently?

Super question and a fantastic editorial! Kudos

Is it possible to make the second tester’s solution run fast enough to get AC? The version provided here is too slow.
2020-09-18 16:38:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564518690109253, "perplexity": 783.8943370050813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188049.8/warc/CC-MAIN-20200918155203-20200918185203-00172.warc.gz"}
http://mathoverflow.net/questions/45645/bounds-on-regulator-of-elliptic-curve
# Bounds on regulator of elliptic curve

Let E be an elliptic curve over Q with positive rank and trivial torsion structure. Is there any sort of upper bound (conjectural or unconditional) on the regulator of E in terms of the conductor of E? (For the lower bound we have Lang's conjecture.)

- In any specific case, you can get a conjectural upper bound from the leading coefficient of the $L$-function at $s=1$. Is that what you are after, or are you looking for a theoretical uniform bound? If the latter, then dependent on what? It is very unlikely that there is a constant uniform bound that works for all curves. – Alex B. Nov 11 '10 at 5:12
- I was looking for an upper bound in terms of the conductor or the discriminant of E. I've been trying to figure out if I can give some sort of a bound for $\lim_{s\rightarrow 1}L^{(r)}(s)$ using the conductor or the discriminant of the elliptic curve, but I don't know if that's reasonable or not. – Soroosh Nov 12 '10 at 19:05
- There is Lang's conjecture on the regulator times the size of Sha. See his book "Survey of Diophantine Geometry", pp. 99, Chapter 3 section 6, conjecture 6.3. If you don't have access to the book (and can't search google), I can post it here. – Dror Speiser Nov 12 '10 at 20:10
- $L^{(r)}(1)$ can be bounded by complex analysis; in particular, if you don't care about optimal results, use the Phragmén–Lindelöf theorem. I think it should give that $L(1)$ is less than $N^{1/4}$ times some power of $\log N$. Each derivative introduces another log via a Cauchy derivative formula, among other methods. More explicitly, $L(s)$ is bounded by $(\log N)^{2?}$ on the $s=3/2$ line via a limiting Euler product (or do it on $s=3/2+1/\log N$), and then by the functional equation you get a bound of $\sqrt N$ times this on the $s=1/2$ line, and apply convexity to get $N^{1/4}$ in the middle. – Junkie Feb 28 '11 at 12:30
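For reference, the convexity argument sketched in the last comment can be written out as follows. This is a standard Phragmén–Lindelöf sketch in the analytic normalization where the functional equation relates $s$ and $2-s$; the log powers and constants are not meant to be sharp:

```latex
% On the line of absolute convergence, the Euler product gives
|L(s)| \ll (\log N)^{c} \quad \text{for } \operatorname{Re}(s) = \tfrac{3}{2},
% and the functional equation (s \leftrightarrow 2-s) then gives
|L(s)| \ll \sqrt{N}\,(\log N)^{c} \quad \text{for } \operatorname{Re}(s) = \tfrac{1}{2}.
% Phragmen--Lindelof interpolates the conductor exponent linearly between
% the two lines, so at the midpoint s = 1:
|L(1)| \ll N^{1/4}(\log N)^{c}.
```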
2016-05-27 08:48:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585662841796875, "perplexity": 241.46211967077087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276564.72/warc/CC-MAIN-20160524002116-00024-ip-10-185-217-139.ec2.internal.warc.gz"}
http://lptms.u-psud.fr/en/blog/large-deviations-for-the-height-in-1d-kardar-parisi-zhang-growth-at-late-times/
# Large deviations for the height in 1D Kardar-Parisi-Zhang growth at late times

### Pierre Le Doussal 1, Satya N. Majumdar 2, Gregory Schehr 2

#### EPL, European Physical Society/EDP Sciences/Società Italiana di Fisica/IOP Publishing, 2016, 113, pp.60004

We study the atypically large deviations of the height $H \sim {\cal O}(t)$ at the origin at late times in $1+1$-dimensional growth models belonging to the Kardar-Parisi-Zhang (KPZ) universality class. We present exact results for the rate functions for the discrete single-step growth model, as well as for the continuum KPZ equation in a droplet geometry. Based on our exact calculation of the rate functions, we argue that models in the KPZ class undergo a third-order phase transition from a strong-coupling to a weak-coupling phase at late times.

• 1. LPTENS - Laboratoire de Physique Théorique de l'ENS
• 2. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
2021-09-17 09:59:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48401638865470886, "perplexity": 1897.6259904288277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055632.65/warc/CC-MAIN-20210917090202-20210917120202-00450.warc.gz"}
https://charlesfrye.github.io/stats/2019/03/06/cosyne19-gender-bias.html
Controversy over hypothesis testing methodology encountered in the wild a second time! At this year’s Computational and Systems Neuroscience conference, CoSyNE 2019, there was disagreement over whether the acceptance rates indicated bias against women authors. As it turns out, part of the dispute turned on which statistical test to run!

### Controversial Data

CoSyNe is an annual conference where COmputational and SYstems NEuroscientists get together. As a conference in the intersection of two male-dominated fields, concerns about gender bias abound. Further, the conference uses single-blind review, i.e. reviewers but not submitters are anonymous, which could be expected to increase bias against women, though effects might be small.

During the welcome talk, the slide below was posted (thanks to Twitter user @neuroecology for sharing their image of the slide; they have a nice write-up data mining other CoSyNe author data) to support the claim that bias was “not too bad”, since the ratio of male first authors to female first authors was about the same between submitted and accepted posters.

However, this method of viewing the data has some problems: the real metric for bias isn’t the final gender composition of the conference, it’s the difference in acceptance rate across genders. A subtle effect there would be hard to see in data plotted as above. And so Twitter user @meganinlisbon got hold of the raw data and computed the acceptance rates and their ratio in the following tweet:

Phrased as “20% higher for men”, the gender bias seems staggeringly high! It seems like it’s time for statistics to come and give us a definitive answer. Surely math can clear everything up!

### Controversial Statistics

Shortly afterwards, several other Twitter users, including @mjaztwit and @alexpiet, attempted to apply null hypothesis significance testing to determine whether the observed gender bias was likely to be observed in the case that there was, in fact, no bias. Such a result is called significant, and the degree of evidence for significance is quantified by a value $$p$$. For historical reasons, a value of $$0.05$$ is taken as a threshold for a binary choice about significance.

And they got different answers! One found that the observation was not significant, with $$p \approx 0.07$$, while the other found that the observation was significant, with $$p \approx 0.03$$. What gives?

There were some slight differences in low-level, quantitative approach: one test was parametric, the other non-parametric. But those differences weren’t big enough to change the $$p$$ value. The biggest difference was a choice made at a very high level: namely, are we testing whether there was any gender bias in CoSyNe acceptance, or are we testing more specifically whether there was gender bias against women. The former is called a two-tailed test and is more standard. Especially in sciences like biology and psychology, we don’t know enough about our data to completely discount the possibility that there’s an effect opposite to what we might expect. Because we consider extreme events “in both directions”, the typical effect of switching from a two-tailed to a one-tailed test is to cut the $$p$$-value in half. And indeed, $$0.03$$ is approximately half of $$0.07$$.
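To see the one- vs. two-tailed distinction concretely, here is a short Python sketch using scipy's Fisher exact test on a 2x2 acceptance table. The counts below are invented purely for illustration; they are not the actual CoSyNE 2019 submission numbers:

```python
from scipy.stats import fisher_exact

# Hypothetical counts (NOT the real CoSyNE data). Columns are
# male-first-author / female-first-author; rows are accepted / rejected.
table = [[330, 90],    # accepted
         [570, 210]]   # rejected -> acceptance ~37% vs ~30%

# Two-tailed: "is there any gender difference in acceptance rate?"
p_two = fisher_exact(table, alternative="two-sided")[1]
# One-tailed: "is the acceptance rate higher for men specifically?"
p_one = fisher_exact(table, alternative="greater")[1]

print(f"two-tailed p = {p_two:.3f}, one-tailed p = {p_one:.3f}")
# The one-tailed p comes out roughly half the two-tailed p,
# mirroring the 0.03 vs 0.07 dispute described above.
```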
But is it reasonable to run a two-tailed test for this question? The claims and concerns of most of the individuals involved were framed specifically in terms of female-identifying authors (to my recollection, choices for gender identification were male, female, and prefer not to answer, making it impossible to talk about non-binary authors with this data). And given the other evidence for misogynist bias in this field (the undeniably lower rate of female submissions, the near-absence of female PIs, the still-greater sparsity of women among top PIs) it would be a surprising result indeed if there were bias that favored women in just this one aspect. Surprising enough that only very strong evidence would be sufficient, which is approximately what a two-tailed test demands.

Even putting this question aside, is putting this much stock in a single number like the $$p$$ value sensible? After all, the $$p$$ value is calculated from our data, and it can fluctuate from sample to sample. If just two more female-led projects had been accepted or rejected, the two tests would agree on which side of $$0.05$$ the $$p$$ value lay! Indeed, the CoSyNe review process includes a specific mechanism for randomness, namely that papers on the margin of acceptance due to a scoring criterion have their acceptance or rejection determined by the output of a random number generator. And the effect size expected by most is probably not too much larger than what is reported, since the presumption is that the effect is mostly implicit bias from many reviewers or explicit bias from a small cohort. In that case, adhering to a strict $$p$$ cutoff is electing to have your conclusions from this test determined almost entirely by an explicitly random mechanism. This is surely fool-hardy!

It would seem to me that the more reasonable conclusion is that there is moderately strong evidence of a gender bias in the 2019 CoSyNe review process, but that the number of submissions is insufficient to make a definitive determination possible based off of a single year’s data. This data is unfortunately not available for previous years.

### Coda

At the end of the conference, the Executive Committee announced that they had heard the complaints of conference-goers around this bit of gender bias and others, and would be taking concrete steps to address them. First, they would be adding chairs for Diversity and Inclusion to the committee. Second, they would move to a system of double-blind review, in which the authors of submissions are also anonymous to the reviewers. Given the absence of any evidence that such a system is biased against men, and the evidence that such a system reduces biases in general, this is an unambiguously good move, regardless of the precise $$p$$ value of the data for gender bias this year.
2021-02-25 04:26:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.498955637216568, "perplexity": 1230.6640144993373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350717.8/warc/CC-MAIN-20210225041034-20210225071034-00250.warc.gz"}
https://quadraticformulacalculator.net/calculator/standard-deviation-calculator/
# Standard Deviation Calculator

The Standard Deviation is, by definition, the positive square root of the variance of the data set, i.e. the root-mean-square deviation of the data values from their mean.

Note: Please enter comma separated values e.g. 12, 20, 30, 15

## What is Standard Deviation?

Standard deviation, typically denoted by σ, is a statistic that measures the dispersion or spread of a data distribution around the average, μ. For a particular sample, a lower standard deviation means that the data values are closer to the mean or expected value, that is, they are less spread out. Conversely, a higher standard deviation for the same sample suggests that the data values are far apart from the mean, that is, they are spread out over a wider range of values.

## Formula of Standard Deviation

There are multiple ways in which the formula for standard deviation is amended to get unbiased results in different real-life situations. The definitional formula for standard deviation is as follows:

$$σ = \sqrt{\dfrac{ 1}{ N } \sum_{i=1}^{N}(x_i – μ)^{2}}$$

Where

xi = any value from the data set
μ = mean/expected value
N = total number of values in the data set

The above equation may seem intimidating for those looking at it for the first time. However, it is not as complicated as it may seem. xi − μ is the deviation of a data value from the mean. For those unfamiliar with summation, $$\text{Sum} = \sum_{i=1}^{n}x_i$$ merely implies the addition of all values starting from the first one (i = 1), going through i = 2, i = 3, and so on till N (the total number of values).

To make calculations less messy, the following is a computational formula:

$$σ = \sqrt{\dfrac{ 1}{ N } \sum_{i=1}^{N}(x_i)^{2} – (μ)^{2}}$$

In statistics, mathematical models are made, which are then used against real data to draw out crucial conclusions. In real life, the calculated mean rarely ever indicates that there is an equal number of data values less than and greater than the mean. Many factors affect the mean, including skewness and range of the data. Two data sets can have the same mean; however, it is only after calculating the standard deviation that statisticians can reach satisfactory conclusions. Here is a simple example for better understanding.

### Standard Deviation Calculation Example

The average rainfall in two cities, A and B, is 20mm. You are doing a monthly report on the water level of the two cities. You notice that the water level of city A has increased only by 0.5mm while, on the other hand, the water level of city B has risen abundantly by 10mm. If the amount of rainfall in both the cities was the same, what went wrong?

Considering the slope, terrain, weather conditions, and all other factors are kept constant or have a negligible effect, take a look at both the data sets:

| S.No. | Date | Rainfall in City A (mm) | Rainfall in City B (mm) |
|---|---|---|---|
| 1 | May 5th | 0 | 20 |
| 2 | May 12th | 0 | 20 |
| 3 | May 13th | 0 | 30 |
| 4 | May 15th | 0 | 5 |
| 5 | May 17th | 0 | 10 |
| 6 | May 24th | 60 | 0 |
| 7 | May 25th | 50 | 0 |
| 8 | May 26th | 50 | 25 |
| 9 | May 27th | 40 | 40 |
| 10 | May 30th | 0 | 50 |

After taking a look at the data sets, you will notice the following things:

- The total rain in a span of 10 days for both cities is 200mm.
- In City A, it rained heavily for four consecutive days, reaching a total of 200mm.
- In City B, it rained lightly to moderately on most days of the month, adding to a total of 200mm.

From the afore-mentioned observations, you can conclude that there was more dispersion in the data set for City A than City B.
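You can verify this directly with Python's built-in statistics module, using the two rainfall columns from the table above:

```python
import statistics

city_a = [0, 0, 0, 0, 0, 60, 50, 50, 40, 0]
city_b = [20, 20, 30, 5, 10, 0, 0, 25, 40, 50]

# Both cities received the same total rainfall, so the means are equal...
assert statistics.mean(city_a) == statistics.mean(city_b) == 20

# ...but the population standard deviations differ a lot:
print(statistics.pstdev(city_a))  # about 24.9 mm: rain concentrated in a few days
print(statistics.pstdev(city_b))  # about 16.0 mm: rain spread more evenly
```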
When you calculate the standard deviation, it indicates that the values in the data set for City B are closer to the mean than those for City A. As a result of consecutive heavy rainfall in City A, all the water from the rain drained to lower areas and did not get enough time to be absorbed. However, in City B, it rained consistently and moderately, so the water had time to get absorbed, and the water level was higher.

When using standard deviation for statistical inferences, two cases must be considered:

- Population Standard Deviation
- Sample Standard Deviation

## Population Standard Deviation

The population standard deviation is a parameter which is used when all the individual data points can be obtained from the entire population. It is the square root of the variance of a dataset in which all the values can be sampled from the population. The formula for population standard deviation is:

$$σ = \sqrt{\dfrac{ 1}{ N } \sum_{i=1}^{N}(x_i – μ)^{2}}$$

Where

xi = any value from the population
μ = mean/expected value
N = total number of values in the population

### Calculation with example

A group of five students compare their scores on a math test. Calculate the standard deviation of their scores out of 10.

7, 8, 6, 9, 5

Step 1: Calculate the mean of the scores. This will be your μ.

$$μ = \dfrac{7 + 8 + 6 + 9 + 5}{ 5 } = 7$$

Step 2: Subtract the mean from each of the individual scores. These differences are deviations. Scores below the mean will have negative deviations, and scores above the mean will have positive deviations. Scores equal to the mean will naturally have a deviation of 0.

| Score | Deviation (xi − μ) |
|---|---|
| 7 | 7 − 7 = 0 |
| 8 | 8 − 7 = +1 |
| 6 | 6 − 7 = −1 |
| 9 | 9 − 7 = +2 |
| 5 | 5 − 7 = −2 |

Step 3: Square each deviation so that it becomes positive.

| Score | Deviation (xi − μ) | Squared Deviation (xi − μ)² |
|---|---|---|
| 7 | 0 | 0 |
| 8 | +1 | 1 |
| 6 | −1 | 1 |
| 9 | +2 | 4 |
| 5 | −2 | 4 |

Step 4: Sum up the squared deviations.

$$= 0 + 1 + 1 + 4 + 4 = 10$$

Step 5: Divide the sum of the squared deviations by the total number of data values in the population. The calculated result is called the variance.

$$= \dfrac{ 10 }{ 5 } = 2$$

Step 6: Calculate the square root of the variance to get the population standard deviation of the scores.

$$= \sqrt{2} = 1.414$$

## Sample Standard Deviation

Sample standard deviation is a statistic that is most commonly used as an estimator for the population standard deviation, σ. In cases where it is not possible to sample every member of the population, a random sample is taken for convenience, and the sample standard deviation, s, is calculated using a modified formula. Here is why:

While calculating the sample standard deviation, s, the population mean μ is unknown. Therefore, we use x̄, the mean of the sample, which introduces a slight bias into the calculation. Since x̄ is calculated using the data values from the sample, the data values (xi) will, on average, be closer to x̄ than to μ. As a result, the sum of squared deviations will be smaller for the sample, so the uncorrected formula would underestimate the true standard deviation. This bias can be corrected by dividing by N − 1 instead of N. N − 1 is called the degrees of freedom.

⇒ Even this corrected version still carries a noticeable amount of error for small samples (N < 10).

$$s = \sqrt{\dfrac{ 1}{ N – 1 } \sum_{i=1}^{N}(x_i – \overline{x})^{2}}$$

Where

xi = any value from the sample
x̄ = sample mean
N = sample size
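Both formulas are short enough to implement from scratch. The sketch below mirrors the step-by-step procedure of the worked examples; the standard library's statistics.pstdev and statistics.stdev compute the same two quantities:

```python
import math
import statistics

def population_stdev(data):
    """Population standard deviation: divide the squared deviations by N."""
    mu = sum(data) / len(data)
    variance = sum((x - mu) ** 2 for x in data) / len(data)
    return math.sqrt(variance)

def sample_stdev(data):
    """Sample standard deviation: divide by N - 1 (the degrees of freedom)."""
    xbar = sum(data) / len(data)
    variance = sum((x - xbar) ** 2 for x in data) / (len(data) - 1)
    return math.sqrt(variance)

scores = [7, 8, 6, 9, 5]
print(population_stdev(scores))  # 1.414..., matches statistics.pstdev(scores)
print(sample_stdev(scores))      # 1.581..., matches statistics.stdev(scores)
```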
### Calculation with example

A group of five students compare their scores on a math test. Calculate the sample standard deviation of their scores out of 10.

7, 8, 6, 9, 5

Step 1: Calculate the mean of the scores. This will be your x̄.

$$\overline{x} = \dfrac{ 7 + 8 + 6 + 9 + 5 }{ 5 } = 7$$

Step 2: Subtract the mean from each of the individual scores. These differences are deviations. Scores below the mean will have negative deviations, and scores above the mean will have positive deviations. Scores equal to the mean will naturally have a deviation of 0.

| Score | Deviation (xi − x̄) |
|---|---|
| 7 | 7 − 7 = 0 |
| 8 | 8 − 7 = +1 |
| 6 | 6 − 7 = −1 |
| 9 | 9 − 7 = +2 |
| 5 | 5 − 7 = −2 |

Step 3: Square each deviation so that it becomes positive.

| Score | Deviation (xi − x̄) | Squared Deviation (xi − x̄)² |
|---|---|---|
| 7 | 0 | 0 |
| 8 | +1 | 1 |
| 6 | −1 | 1 |
| 9 | +2 | 4 |
| 5 | −2 | 4 |

Step 4: Sum up the squared deviations.

$$= 0 + 1 + 1 + 4 + 4 = 10$$

Step 5: Divide the sum of the squared deviations by one less than the total number of data values in the sample.

$$= \dfrac{ 10 }{ 4 } = 2.5$$

Step 6: Calculate the square root of the result to get the sample standard deviation of the scores.

$$= \sqrt{2.5} = 1.581$$

Notice here that the sample standard deviation (1.581) is greater than the population standard deviation (1.414).

## Why do we need to calculate Standard Deviation?

Statistical parameters like the standard deviation are widely used in business, finance, and the manufacturing industry. It is an important tool financial analysts and business owners use to manage risk and make decisions. Measuring market volatility and performance trends is also done by calculating standard deviation. For example, in investment, an index fund is likely to have a low standard deviation when compared to its benchmark index. On the contrary, aggressive growth funds are expected to have a high standard deviation from relative stock indices due to aggressive bets by portfolio managers to generate higher-than-average returns. In this scenario, a low standard deviation will not be ideal.

In the case of slumping sales or a rise in bad customer reviews, potent risk management maneuvers can be devised using standard deviation. It can also be used to calculate the volatility of stock prices and margins of error in surveys taken by the company.

Quality control measures employ standard deviation to monitor and maintain the caliber of manufactured products. For example, a company manufactures potato chips, and each bag of chips is monitored for its weight. If the average weight is 25 grams and the standard deviation is ±2 grams, the minimum weight of a bag can be 23 grams and the maximum can be 27 grams. Anything greater or lesser than that cannot be distributed by the company.

In scientific research and reasoning, standard deviation plays a crucial role when it comes to analyzing the results obtained from an experiment. Customer reviews, employee surveys, and presidential election polls are all conducted with the aid of standard deviation and calculated margins of error. Even teachers make use of standard deviation when setting exam papers and scoring them.

These are only a handful of practical applications of standard deviation out of many others. It is an essential statistical tool that is easy to employ and gives crucial information about a data set.
2021-01-22 12:05:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7030008435249329, "perplexity": 579.049853692381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529331.99/warc/CC-MAIN-20210122113332-20210122143332-00734.warc.gz"}
https://eccc.weizmann.ac.il//report/2010/192/
### Revision(s):

Revision #1 to TR10-192 | 2nd July 2013 09:56

#### Improved bounds for the randomized decision tree complexity of recursive majority

Authors: Frederic Magniez, Ashwin Nayak, Miklos Santha, Jonah Sherman, Gabor Tardos, David Xiao
Accepted on: 2nd July 2013 09:56

Abstract:

We consider the randomized decision tree complexity of the recursive 3-majority function. For evaluating height-$h$ formulae, we prove a lower bound for the $\delta$-two-sided-error randomized decision tree complexity of $(1/2-\delta) \cdot 2.57143^h$, improving the lower bound of $(1-2\delta)(7/3)^h$ given by Jayram, Kumar, and Sivakumar (STOC '03), and the one of $(1-2\delta) \cdot 2.55^h$ given by Leonardos (ICALP '13). Second, we improve the upper bound by giving a new zero-error randomized decision tree algorithm that has complexity at most $(1.007) \cdot 2.64944^h$. The previous best known algorithm achieved complexity $(1.004) \cdot 2.65622^h$. The new lower bound follows from a better analysis of the base case of the recursion of Jayram et al. The new algorithm uses a novel “interleaving” of two recursive algorithms.

Changes to previous version: New co-authors and improved lower bounds.

### Paper:

TR10-192 | 8th December 2010 20:39

#### Improved bounds for the randomized decision tree complexity of recursive majority

Authors: Frederic Magniez, Ashwin Nayak, Miklos Santha, David Xiao
Publication: 10th December 2010 10:00

Abstract:

We consider the randomized decision tree complexity of the recursive 3-majority function. For evaluating height-$h$ formulae, we prove a lower bound for the $\delta$-two-sided-error randomized decision tree complexity of $(1-2\delta)(5/2)^h$, improving the lower bound of $(1-2\delta)(7/3)^h$ given by Jayram et al. (STOC '03). We also state a conjecture which would further improve the lower bound to $(1-2\delta) \cdot 2.54355^h$.
2022-01-24 03:47:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9424228668212891, "perplexity": 1473.3185881852778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304471.99/warc/CC-MAIN-20220124023407-20220124053407-00032.warc.gz"}
http://www.askamathematician.com/
## Q: What causes friction? (and some other friction questions) Physicist: Political conversations with family, for one. “Friction” is a blanket term to cover all of the wide variety of effects that make it difficult for one surface to slide past another. There a some chemical bonds (glue is an extreme example), there are electrical effects (like van der waals), and then there are effects from simple physical barriers.  A pair of rough surfaces will have more friction than a pair of smooth surfaces, because the “peaks” of one surface can fall into the “valleys” of the other, meaning that to keep moving either something needs to break, or the surfaces would need to push apart briefly. This can be used in hand-wavy arguments for why friction is proportional to the normal force pressing surfaces together.  It’s not terribly intuitive why, but it turns out that the minimum amount of force, Ff, needed to push surfaces past each other (needed to overcome the “friction force”) is proportional to the force, N, pressing those surfaces together.  In fact this is how the coefficient of friction, μ, is defined: Ff = μN. The force required to push this bump “up hill” is proportional to the normal force.  This is more or less the justification behind where the friction equation comes from. The rougher the surfaces the more often “hills” will have to push over each other, and the steeper those hills will be.  For most practical purposes friction is caused by the physical roughness of the surfaces involved.  However, even if you make a surface perfectly smooth there’s still some friction.  If that weren’t the case, then very smooth things would feel kinda oily (some do actually). Sheets of glass tend to be very nearly perfectly smooth (down to the level of molecules), and most of the friction to be found with glass comes from the subtle electrostatic properties of the glass and the surface that’s in contact with it.  But why is that friction force also proportional to the normal force?  Well… everything’s approximately linear over small enough forces/distances/times.  That’s how physics is done! That may sound like an excuse, but that’s only because it is. Q: It intuitively feels like the friction force should be directly proportional to the surface area between materials, yet this is never considered in any practical analysis or application.  What’s going on here? A: The total lack of consideration of surface area is an artifact of the way friction is usually considered.  Greater surface area does mean greater friction, but it also means that the normal force is more spread out, and less force is going through any particular region of the surface.  These effects happen to balance out. If you have one pillar the total friction is μN. If you have two pillars each supports half of the weight, and thus exert half the normal force, so the total friction is μN/2 + μN/2 = μN. Pillars are just a cute way of talking about surface area in a controlled way.  The same argument applies to surfaces in general. Q: If polishing surfaces decreases friction, then why does polishing metal surfaces make them fuse together? A: Polishing two metal surfaces until they can fuse has to do with giving them both more opportunities to fuse (more of their surfaces can directly contact each other without “peaks and valleys” to deal with), and polishing also helps remove impurities and oxidized material.  For example, if you want to weld two old pieces of iron together you need to get all of the rust off first.  
Pure iron can be welded together, but iron oxide (rust) can’t. Gold is an extreme example of this. Cleaned and polished gold doesn’t even need to be heated; you can just slap two pieces together and they’ll fuse.

Inertia welders also need smooth surfaces so that the friction from point to point will be constant (you really don’t want anything to catch suddenly, or everyone nearby is in trouble). This isn’t important to the question; it’s just that inertia welders are awesome.

Q: Why does friction convert kinetic energy into heat?

A: The very short answer is “entropy”. Friction involves, at the lowest level, a bunch of atoms interacting and bumping into each other. Unless that bumping somehow perfectly reverses itself, one atom will bump into the next, which will bump into the next, which will bump into the next, etc. And that’s essentially what heat is. So the movement of one surface over another causes the atoms in each to get knocked about and jiggle. That loss of energy to heat is what causes the surfaces to slow down and stop.

Posted in -- By the Physicist, Physics | 2 Comments

## Q: Is fire a plasma? What is plasma?

Physicist: Generally speaking, by the time a gas is hot enough to be seen, it’s a plasma.

The big difference between a regular gas and a plasma is that in a plasma a fair fraction of the atoms are ionized. That is, the gas is so hot, and the atoms are slamming around so hard, that some of the electrons are given enough energy to (temporarily) escape their host atoms. The most important effect of this is that a plasma gains some electrical properties that a non-ionized gas doesn’t have; it becomes conductive and it responds to electrical and magnetic fields. In fact, this is a great test for whether or not something is a plasma.

For example, our Sun (or any star) is a miasma of incandescent plasma. One way to see this is to notice that the solar flares that leap from its surface are directed along the Sun’s (generally twisted up and spotty) magnetic fields.

A solar flare as seen in the x-ray spectrum. The material of the flare, being a plasma, is affected and directed by the Sun’s magnetic field. Normally this brings it back into the surface (which is for the best).

We also see the conductance of plasma in “toys” like a Jacob’s Ladder. Spark gaps have the weird property that the higher the current, the more ionized the air in the gap, and the lower the resistance (more plasma = more conductive). There are even scary machines built using this principle. Basically, in order for a material to be conductive there need to be charges in it that are free to move around. In metals those charges are shared by atoms; electrons can move from one atom to the next. But in a plasma the material itself is made of free charges; it’s conductive almost by definition.

A Jacob’s Ladder. The electricity has an easier time flowing through the long thread of highly-conductive plasma than it does flowing through the tiny gap of poorly-conducting air.

As it happens, fire passes all these tests with flying colors. Fire is a genuine plasma. Maybe not the best plasma, or the most ionized plasma, but it does alright.

The free charges inside of the flame are pushed and pulled by the electric field between these plates, and as those charged particles move they drag the rest of the flame with them.

Even small and relatively cool fires, like candle flames, respond strongly to electric fields and are even pretty conductive.
There’s a beautiful video here that demonstrates this a lot better than this post does.

The candle picture is from here, and the Jacob’s ladder picture is from here.

Posted in -- By the Physicist, Physics | 10 Comments

## Q: Why are determinants defined the weird way they are?

Physicist: This is a question that comes up a lot when you’re first studying linear algebra. The determinant has a lot of tremendously useful properties, but it’s a weird operation. You start with a matrix, take one number from every column and multiply them together, then do that in every possible combination, and half of the time you subtract, and there doesn’t seem to be any rhyme or reason why. This particular post will be a little math heavy.

If you have a matrix, ${\bf M} = \left(\begin{array}{cccc}a_{11} & a_{21} & \cdots & a_{n1} \\a_{12} & a_{22} & \cdots & a_{n2} \\\vdots & \vdots & \ddots & \vdots \\a_{1n} & a_{2n} & \cdots & a_{nn}\end{array}\right)$, then the determinant is $det({\bf M}) = \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}\cdots a_{np_n}$, where $\vec{p} = (p_1, p_2, \cdots, p_n)$ is a rearrangement of the numbers 1 through n, and $\sigma(\vec{p})$ is the “signature” or “parity” of that rearrangement. The signature is $(-1)^k$, where k is the number of times that pairs of numbers in $\vec{p}$ have to be switched to get to $\vec{p} = (1,2,\cdots,n)$.

For example, if ${\bf M} = \left(\begin{array}{ccc}a_{11} & a_{21} & a_{31} \\a_{12} & a_{22} & a_{32} \\a_{13} & a_{23} & a_{33} \\\end{array}\right) = \left(\begin{array}{ccc}4 & 2 & 1 \\2 & 7 & 3 \\5 & 2 & 2 \\\end{array}\right)$, then

$\begin{array}{ll}det({\bf M}) \\= \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}a_{3p_3} \\=\left\{\begin{array}{ll}\sigma(1,2,3)a_{11}a_{22}a_{33}+\sigma(1,3,2)a_{11}a_{23}a_{32}+\sigma(2,1,3)a_{12}a_{21}a_{33}\\+\sigma(2,3,1)a_{12}a_{23}a_{31}+\sigma(3,1,2)a_{13}a_{21}a_{32}+\sigma(3,2,1)a_{13}a_{22}a_{31}\end{array}\right.\\=a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}\\= 4 \cdot 7 \cdot 2 - 4 \cdot 2 \cdot 3 - 2 \cdot 2 \cdot 2 +2 \cdot 2 \cdot 1 + 5 \cdot 2 \cdot 3 - 5 \cdot 7 \cdot 1\\=23\end{array}$

It turns out (and this is the answer to the question) that the determinant of a matrix can be thought of as the volume of the parallelepiped created by the vectors that are the columns of that matrix. In the last example, these vectors are $\vec{v}_1 = \left(\begin{array}{c}4\\2\\5\end{array}\right)$, $\vec{v}_2 = \left(\begin{array}{c}2\\7\\2\end{array}\right)$, and $\vec{v}_3 = \left(\begin{array}{c}1\\3\\2\end{array}\right)$.

The parallelepiped created by the vectors a, b, and c.

Say the volume of the parallelepiped created by $\vec{v}_1, \cdots,\vec{v}_n$ is given by $D\left(\vec{v}_1, \cdots, \vec{v}_n\right)$. Here come some properties:

1) $D\left(\vec{v}_1, \cdots, \vec{v}_n\right)=0$ if any pair of the vectors are the same, because that corresponds to the parallelepiped being flat.

2) $D\left(a\vec{v}_1,\cdots, \vec{v}_n\right)=aD\left(\vec{v}_1,\cdots,\vec{v}_n\right)$, which is just a fancy math way of saying that doubling the length of any of the sides doubles the volume.

3) $D\left(\vec{v}_1+\vec{w},\cdots, \vec{v}_n\right) = D\left(\vec{v}_1,\cdots, \vec{v}_n\right) + D\left(\vec{w},\cdots, \vec{v}_n\right)$. Together, properties 2 and 3 say that $D$ is “linear”, and this works the same way for every one of the vectors in $D$.

Check this out!
By using these properties we can see that switching two vectors in the determinant swaps the sign:

$\begin{array}{ll} D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right)\\ =D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right)+D\left(\vec{v}_1,\vec{v}_1, \vec{v}_3,\cdots, \vec{v}_n\right) & \textrm{Prop. 1}\\ =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right)-D\left(\vec{v}_1+\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right) & \textrm{Prop. 1} \\ =D\left(-\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =-D\left(\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right) & \textrm{Prop. 2} \\ =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3,\cdots, \vec{v}_n\right)-D\left(\vec{v}_2,\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3,\cdots, \vec{v}_n\right) & \textrm{Prop. 1} \end{array}$

4) $D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3,\cdots, \vec{v}_n\right)=-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3,\cdots, \vec{v}_n\right)$, so switching two of the vectors flips the sign. This is true for any pair of vectors in D. Another way to think about this property is to say that when you exchange two directions you turn the parallelepiped inside-out.

Finally, if $\vec{e}_1 = \left(\begin{array}{c}1\\0\\\vdots\\0\end{array}\right)$, $\vec{e}_2 = \left(\begin{array}{c}0\\1\\\vdots\\0\end{array}\right)$, … $\vec{e}_n = \left(\begin{array}{c}0\\0\\\vdots\\1\end{array}\right)$, then

5) $D\left(\vec{e}_1,\vec{e}_2, \vec{e}_3,\cdots, \vec{e}_n\right) = 1$, because a 1 by 1 by 1 by … box has a volume of 1.

Also notice that, for example,

$\vec{v}_2 = \left(\begin{array}{c}v_{21}\\v_{22}\\\vdots\\v_{2n}\end{array}\right) = \left(\begin{array}{c}v_{21}\\0\\\vdots\\0\end{array}\right)+\left(\begin{array}{c}0\\v_{22}\\\vdots\\0\end{array}\right)+\cdots+\left(\begin{array}{c}0\\0\\\vdots\\v_{2n}\end{array}\right) = v_{21}\vec{e}_1+v_{22}\vec{e}_2+\cdots+v_{2n}\vec{e}_n$

Finally, with all of that math in place,

$\begin{array}{ll} D\left(\vec{v}_1,\vec{v}_2, \cdots, \vec{v}_n\right) \\ = D\left(v_{11}\vec{e}_1+v_{12}\vec{e}_2+\cdots+v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\ = D\left(v_{11}\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + D\left(v_{12}\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + D\left(v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\= v_{11}D\left(\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + v_{12}D\left(\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + v_{1n}D\left(\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\ =\sum_{j=1}^n v_{1j}D\left(\vec{e}_j,\vec{v}_2, \cdots, \vec{v}_n\right) \end{array}$

Doing the same thing to the second entry of D,

$=\sum_{j=1}^n\sum_{k=1}^n v_{1j}v_{2k}D\left(\vec{e}_j,\vec{e}_k, \cdots, \vec{v}_n\right)$

The same thing can be done to all of the vectors in D. But rather than writing n different summations we can write,

$=\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right)$, where every term in $\vec{p} = \left(\begin{array}{c}p_1\\p_2\\\vdots\\p_n\end{array}\right)$ runs from 1 to n.

Whenever two of the e’s left in D are the same, D=0. This means that the only non-zero terms left in the summation are rearrangements, where the elements of $\vec{p}$ are each a number from 1 to n, with no repeats.
All but one of the $D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right)$ will be in a weird order. Switching the order in D can flip the sign, and this sign is given by the signature, $\sigma(\vec{p})$. So, $D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) = \sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right)$, where $\sigma(\vec{p})=(-1)^k$ and k is the number of times that the e’s have to be switched to get to $D(\vec{e}_1, \cdots,\vec{e}_n)$. So,

$\begin{array}{ll} det({\bf M})\\ = D\left(\vec{v}_{1},\vec{v}_{2}, \cdots, \vec{v}_{n}\right)\\ =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) \\ =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}\sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right) \\ =\sum_{\vec{p}}\, \sigma(\vec{p})v_{1p_1}v_{2p_2}\cdots v_{np_n} \end{array}$

Which is exactly the definition of the determinant! The other uses for the determinant, from finding eigenvectors and eigenvalues, to determining whether a set of vectors is linearly independent or not, to handling the coordinates in complicated integrals, all come from defining the determinant as the volume of the parallelepiped created from the columns of the matrix. It’s just not always exactly obvious how.

For example: The determinant of the matrix ${\bf M} = \left(\begin{array}{cc}2&3\\1&5\end{array}\right)$ is the same as the area of this parallelogram, by definition.

The parallelepiped (in this case a 2-d parallelogram) created by (2,1) and (3,5).

Using the tricks defined in the post:

$\begin{array}{ll} D\left(\left(\begin{array}{c}2\\1\end{array}\right),\left(\begin{array}{c}3\\5\end{array}\right)\right) \\[2mm] = D\left(2\vec{e}_1+\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm] = D\left(2\vec{e}_1,3\vec{e}_1+5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm] = D\left(2\vec{e}_1,3\vec{e}_1\right) + D\left(2\vec{e}_1,5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1\right) + D\left(\vec{e}_2,5\vec{e}_2\right) \\[2mm] = 2\cdot3D\left(\vec{e}_1,\vec{e}_1\right) + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 5D\left(\vec{e}_2,\vec{e}_2\right) \\[2mm] = 0 + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 0 \\[2mm] = 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) - 3D\left(\vec{e}_1,\vec{e}_2\right) \\[2mm] = 2\cdot5 - 3 \\[2mm] =7 \end{array}$

Or, using the usual determinant-finding technique, $det\left|\begin{array}{cc}2&3\\1&5\end{array}\right| = 2\cdot5 - 3\cdot1 = 7$.

Posted in -- By the Physicist, Math | 6 Comments
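As a quick numerical check of the volume interpretation (using numpy on the 3x3 example from the post):

```python
import numpy as np

# Columns are the vectors v1, v2, v3 from the 3x3 example above.
M = np.array([[4, 2, 1],
              [2, 7, 3],
              [5, 2, 2]])
print(np.linalg.det(M))  # about 23.0, the volume of the parallelepiped

# In 3 dimensions the same volume is the scalar triple product v1 . (v2 x v3):
v1, v2, v3 = M[:, 0], M[:, 1], M[:, 2]
print(np.dot(v1, np.cross(v2, v3)))  # 23

# Swapping two columns turns the parallelepiped "inside-out" (property 4):
print(round(np.linalg.det(M[:, [1, 0, 2]])))  # -23
```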
## Q: Are white holes real?

Physicist: The Big Bang is sometimes described as being a white hole. But if you think of a white hole as something that’s the opposite of a black hole, then no: white holes aren’t real. They show up when you describe a black hole using some weird coordinates, so they’re essentially just a non-real mathematical artifact. However, white holes are a cute idea, so they show up a lot in sci-fi. White holes are a mathematical abstraction that necessarily exists in the infinite past. That is to say, if you follow the mathematical model that physicists use, you’ll never have a situation where a white hole exists at the same time as anything else. Its existence happens infinitely long ago.

Spacetime gets seriously messed up near and inside of a black hole. To make the math easier, and to help make the situation easier to picture, the Kruskal-Szekeres coordinate system was created.

In this (very unintuitive) diagram, straight lines through the center are lines of constant time, with the future roughly up. The event horizon of the black hole is also the infinite future (from an outside perspective it takes forever to fall all the way into a black hole). That should make very little sense, but keep in mind: black holes and weird spacetime go together like Colonial Williamsburg and a lingering sense of disappointment. The black hole’s interior is the upper triangle, the entire universe is the right triangular region, and the white hole is the lower region.

The boundary of this lower region is in the infinite past. That is: in this goofy mathematical idealization of a static and eternal black hole, a white hole shows up automatically in the infinite past. One of the issues here is that black holes need to form at some point (in the finite past). Taking this model completely seriously and assuming that it implies that white holes are real is a little like saying “imagine an infinite robot-godzilla”, and then worrying about where it came from. It’s an abstraction used to think about other things. Physicists love themselves some math, but the love is tempered by the understanding that writing down an equation doesn’t make things real.

Physicists love themselves some math, but (almost always) recognize the scope and limitations of their own equations.

For example, we can talk about the location “North 97°, East 40°”, but that doesn’t make it exist (North 90° is the north pole, the farthest north you can get by definition).

Sci-fi is about the only place you’ll hear people talking about white holes. White holes are the opposite of black holes: they spit out matter and energy, they’re impossible to enter, they’re very bright, that sort of thing. In fiction, “the opposite of…” is a great way to get weird new ideas (e.g., Bizarro Superman).

The Einstein picture was created here.

Posted in -- By the Physicist, Astronomy, Math, Physics | 4 Comments
## Q: If a photon doesn’t experience time, then how can it travel?

Physicist: It’s a little surprising this hasn’t been a post yet.

Moving from one place to another always takes a little time, no matter how fast you’re traveling. But “time slows down close to the speed of light”, and indeed at the speed of light no time passes at all. So how can light get from one place to another? The short, unenlightening, somewhat irked answer is: look who’s asking.

Time genuinely doesn’t pass from the “perspective” of a photon but, like everything in relativity, the situation isn’t as simple as photons “being in stasis” until they get where they’re going. Whenever there’s a “time effect” there’s a “distance effect” as well, and in this case we find that infinite time dilation (no time for photons) goes hand in hand with infinite length contraction (there’s no distance to the destination).

At the speed of light there’s no time to cover any distance, but there’s also no distance to cover. Left: regular, sub-light-speed movement. Right: “movement” at light speed.

The name “relativity” (as in “theory of…”) comes from the central tenet of relativity, that time, distance, velocity, even the order of events (sometimes) are relative. This takes a few moments of consideration; but when you say that something’s moving, what you really mean is that it’s moving with respect to you.

Everything has its own “coordinate frame”. Your coordinate frame is how you define where things are. If you’re on a train, plane, rickshaw, or whatever, and you have something on the seat next to you, you’d say that (in your coordinate frame) that object is stationary. In your own coordinate frame you’re never moving at all.

How zen is that?

Everything is stationary from its own perspective. Movement is something other things do. When you describe the movement of those other things, it’s always in terms of your notion of space and time coordinates. The last coordinate to consider is time, which is just whatever your clock reads. One of the very big things that came out of Einstein’s original paper on special relativity is that not only will different perspectives disagree on where things are and how fast they’re moving, different perspectives will also disagree on what time things happen and even how fast time is passing (following some very fixed rules).

When an object moves past you, you define its velocity by looking at how much of your distance it covers, according to your clock, and this (finally) is the answer to the question. The movement of a photon (or anything else) is defined entirely from the point of view of anything other than the photon. One of the terribly clever things about relativity is that we can not only talk about how fast other things are moving through our notion of space, but also “how fast” they’re moving through our notion of time (how fast is their clock ticking compared to mine).

The meditating monk picture is from here.

Posted in -- By the Physicist, Relativity | 47 Comments
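For reference, the standard special-relativity formulas behind the paired “time effect” and “distance effect” (not spelled out in the post itself): both are governed by the Lorentz factor, and both effects blow up together as v approaches c:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta\tau = \frac{\Delta t}{\gamma} \ \text{(time elapsed on the moving clock)}, \qquad
L' = \frac{L}{\gamma} \ \text{(contracted distance to the destination)}.
% As v -> c, gamma -> infinity, so both the elapsed time and the
% remaining distance go to zero together.
```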
If fusion in the Sun releases energy*, then the amount released is E = (Δm)c² (where Δm is the change in mass between the hydrogen input and helium output and c is the speed of light). If that energy travels from the Sun to the Earth as light, then each photon of that light carries E = hν (Planck's constant times frequency) of it. If those photons then fall onto a solar panel, that light energy can be converted into electrical energy. If that electrical energy runs a motor, then the energy used is E = VIT (voltage times current times time). If that motor is used to compress a spring, then the energy stored in the spring is E = 0.5kA² (where k is the spring constant and A is the distance it's compressed). If that spring tosses a stone into the air, then at the top of its flight it will have converted all of that energy into gravitational potential, in the amount of E = mgh (mass of the stone times the acceleration of gravity times height). When it falls back to the ground that energy will become kinetic energy again, E = 0.5mv² (where m is the stone's mass and v is its velocity). If that stone falls into water and stirs it up, then the water will heat up by an amount given by E = C(ΔT) (where C is the heat capacity of the water and ΔT is the change in temperature).

The "same energy" is being used at every stage of this example (assuming perfect efficiency). But there's no "carry through" that makes it from the beginning to the end. The only thing that really stays the same is the somewhat artificial constant number that we Humans (or more precisely: Newton) call "energy".

When you want to explain the heck out of something that's a little abstract, it's best to leave it to professional bongo player, and sometimes-physicist, Richard Feynman:

"There is a fact, or if you wish, a law governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call "energy," that does not change in the manifold changes that nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is a strange fact that when we calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. (Something like a bishop on a red square, and after a number of moves – details unknown – it is still on some red square. It is a law of this nature.) (…) It is important to realize that in physics today, we have no knowledge of what energy 'is'. We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. It is an abstract thing in that it does not tell us the mechanism or the reason for the various formulas." -Dick Feynman

The Green Lantern picture is from here.

*Every time energy is released from anything, that thing ends up weighing less. It's just that outside of nuclear reactions (either fission or fusion) the change is so small that it's not worth mentioning.

Posted in -- By the Physicist, Physics | 3 Comments
2013-05-24 18:58:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 39, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6151862144470215, "perplexity": 747.0521142166557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704986352/warc/CC-MAIN-20130516114946-00035-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.techwhiff.com/learn/x-1451-in-a-simple-random-sample-of-1500-young/169592
# X) 14.51 In a simple random sample of 1500 young people, 86% had earned a high...

###### Question:

14.51 In a simple random sample of 1500 young people, 86% had earned a high school diploma. Complete parts a through d below.

a. What is the standard error for this estimate of the percentage of all young people who earned a high school diploma? (Round to four decimal places as needed.) A quick numeric check of part a appears after the similar questions below.

#### Similar Solved Questions

##### About 7/10 of the surface of Earth is covered by water. The rest of the surface is covered by land. How much of Earth's surface is covered by land?

About 7/10 of the surface of Earth is covered by water. The rest of the surface is covered by land. How much of Earth's surface is covered by land?...

##### You received partial credit in the previous attempt. Lightning Electronics is a midsize manufacturer of lithium...

You received partial credit in the previous attempt. Lightning Electronics is a midsize manufacturer of lithium batteries. The company's payroll records for the November 1-14 pay period show that employees earned wages totaling $52,000 but that employee income taxes totaling $7,200 and FICA taxe...

##### 1. Using the identity: e^(imφ) = cos(mφ) + i sin(mφ), show that m is real and...

1. Using the identity: e^(imφ) = cos(mφ) + i sin(mφ), show that m is real and equal to an integer....

##### From an 8 inch by 10 inch rectangular sheet of paper, squares of equal size will be cut from each corner

From an 8 inch by 10 inch rectangular sheet of paper, squares of equal size will be cut from each corner. The flaps will then be folded up to form an open-topped box. Find the maximum possible volume of the box....

##### How do I evaluate int_0^pi sin(x) dx?

How do I evaluate int_0^pi sin(x) dx?...

##### OTZA Company produces two products, A and B. For the coming period, 180,000 direct labor hours...

OTZA Company produces two products, A and B. For the coming period, 180,000 direct labor hours and 225,000 machine hours are available. Information on the two products appears below (contribution margin per unit, direct labor hours per unit, machine hours per unit): Product A — $8.00, 5.00, 6.89; Product B — $5...

##### What should Google do in China to act ethically? What is the main issue for Google...

What should Google do in China to act ethically? What is the main issue for Google in China?...

##### Figure 2 Regression Output SUMMARY OUTPUT Regression Statistics Multiple R R Square Adjusted R Square Standard...

Figure 2 Regression Output. SUMMARY OUTPUT — Regression Statistics: Multiple R 0.921261, R Square 0.848722, Adjusted R Square 0.8055, Standard Error 0.711125, Observations 10. ANOVA Significance MS 0.001347 Regression Residual Total 19.86011 9.930053 19.63628 3.539894 0.505699 23.4 Standard Error Upper 95% C...

##### Please show this equality. n-1 p) p p-1-(1 p)"-1 p (1 -p) 7

Please show this equality. n-1 p) p p-1-(1 p)"-1 p (1 -p) 7...

##### Just the final answers needed. 4. 4.52 g of the oxide of element X react with excess...

Just the final answers needed. 4. 4.52 g of the oxide of element X react with an excess of H2, forming 2.56 g (8 points) of pure element X and water. a) Calculate the mass of O in the oxide. b) Calculate the quantity (mol) of H2O released in the reaction. c) What element is represented by the symbol X? d) How ma...

##### If n is a positive integer and x² = 2x in Zn then either x =...

If n is a positive integer and x² = 2x in Zn then either x = [0] or x = [2]. True / False...
##### Composite rate and journal. Presented below is information related to Novak Manufacturing Corporation. Estimated...

Composite rate and journal. Presented below is information related to Novak Manufacturing Corporation, listing for each asset its cost, estimated salvage value, and estimated life (in years). Asset cost / estimated salvage: $44,900 / $6,200; 33,200 / 4,400; 35,700 / 3,300; 19,800 / 2,300; 24,500 / 3,200. Compute the rate of depreciation per year to be applied to the plant asse...

##### Specifically looking for help with question number 3 and question number 4. On question 3 there...

Specifically looking for help with question number 3 and question number 4. On question 3 there are only eight lines for the journal entry and you must use the correct chart of accounts wording. On question number 4 the number is incorrect but the drop-down of overapplied is correct. Instruc...
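Returning to the sampling question at the top of this page: a minimal check of part a, assuming the usual formula SE = sqrt(p̂(1 − p̂)/n) for the standard error of a sample proportion (the remaining parts are not shown in this extract):

```python
import math

p_hat = 0.86  # sample proportion with a high school diploma
n = 1500      # sample size

se = math.sqrt(p_hat * (1 - p_hat) / n)
print(round(se, 4))  # -> 0.009, i.e. about 0.0090 to four decimal places
```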
2022-09-26 16:28:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4100906252861023, "perplexity": 2357.498012722274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00689.warc.gz"}
https://math.stackexchange.com/questions/2346896/how-to-create-an-ellipse-using-a-light-source-with-a-conical-beam?noredirect=1
# How to create an Ellipse using a Light Source with a Conical Beam

A light source with a right circular conical beam is placed at a height $h$ above the origin of the Cartesian $x$-$y$ plane (i.e. positioned at $(0,0,h)$), and directed vertically downwards. The resultant image on the Cartesian plane is a circle with radius $b$ centred on the origin, i.e. $x^2+y^2=b^2$.

Now we want to move the light source along the positive $x$-axis, such that it is above $(k,0)$, but at height $z$ (i.e. positioned at $(k,0,z)$), and at an angle of incidence $\alpha$ from the vertical, such that the resultant image on the Cartesian plane is an ellipse centred on the origin with semi-minor axis $b$ and semi-major axis $a$ ($a>b$), i.e. $\frac {x^2}{a^2}+\frac{y^2}{b^2}=1$.

What are the values of the horizontal displacement $k$, height $z$ and angle of incidence $\alpha$ required to create the ellipse? (i.e. express $k$, $z$ and $\alpha$ in terms of $h, a, b$)

Here's a nice video on youtube by ElicaTeam illustrating something similar to the above. Here's a screenshot from the video.

After further reflection, it may not be possible to find a solution under the given constraints. Whilst it may be possible to rotate the light source about $O$ such that the distance from $(0,\pm b)$ is always the same, the resultant ellipse will not be centred on $O$, and $(0,\pm b)$ will no longer be the vertices of the semi-minor axis of the ellipse. See simulation (plan view and side view) here. For the required ellipse, perhaps a cone with a different aperture (or semi-vertical angle) is required.

If we assume that the light cone aperture is fixed, then it is possible to create an ellipse-shaped "spotlight" centered on $O$, with semi-minor axis $b$ (specified) but with varying semi-major axis $a$, by changing the angle of tilt $\alpha$, the $x$-axis displacement $k$, and the $z$-axis displacement $z$. See illustration here and screenshots below.

Per the desmos implementation here, it is NOT possible to obtain an origin-centred ellipse with semi-minor radius $b$ if the height of the light source remains the same. The blue curve is the locus of the light source for constant $b$. See also the desmos implementation here, which produces the resultant ellipse (not necessarily origin-centred) by varying the light source position, beam angle and tilt angle.

• Hint: as the semi minor axis is the same as the radius, the distance of the light source from the plane must be the same. – N74 Jul 5 '17 at 5:53
• And the major axis is the minor axis divided by the cosine of the zenith angle ($\alpha$). – Cye Waldman Jul 5 '17 at 14:34
• @N74 Logically so, but then $(0,b)$ may no longer necessarily be the vertex of the semi-minor axis. So the problem may not have a solution under the constraints provided. – hypergeometric Jul 6 '17 at 7:32
• If it is not centered on the origin, increase or decrease $k$ until it is. – David K Jul 8 '17 at 20:30
• I think this question can be solved by setting $u=\arctan (b/h)$ and applying the solution of your follow-up question. – David K Jul 9 '17 at 0:22

We know that every ellipse can be obtained by intersecting a right circular cone with a plane, in this case the $x$-$y$ plane. The eccentricity $e$ of the ellipse determines the angle that the cone's axis makes with this plane, while the size and position of the ellipse are determined by the location of the cone's vertex. Let the vertex of the cone be at $(v_x,0,v_z)$ and let $0\le\theta\lt\frac\pi4$ be the angle that its axis makes with the $z$-axis.
(For clarity, I've changed the variable names from those in the original problem statement. Using both $a$ and $\alpha$ in the same equations is just asking for trouble.)

By rotating and translating the cone $x^2+y^2=z^2$, and then setting $z=0$, we can find after some manipulation that the intersection of the tilted and shifted cone with the $x$-$y$ plane is the conic given by the equation $$x^2\cos2\theta+y^2-2(v_x\cos2\theta-v_z\sin2\theta)x+(v_x^2-v_z^2)\cos2\theta-2v_xv_z\sin2\theta=0.\tag1$$

The center of this ellipse can be found by the usual technique of differentiation, which results in the condition $$v_x=v_z\tan2\theta\tag2$$ for its center to be at the origin. Substituting this back into (1) reduces the equation to $$x^2\cos2\theta+y^2=v_z^2\sec2\theta,\tag3$$ from which we get $$a^2=v_z^2\sec^22\theta,\qquad b^2=v_z^2\sec2\theta$$ and finally $$v_z=\pm\frac{b^2}{a},\qquad \cos2\theta=\frac{b^2}{a^2}=1-e^2.\tag4$$

To generalize this to a cone with arbitrary aperture angle $2\phi$, start with the equation $x^2+y^2=z^2\tan^2\phi$ instead, and proceed in a similar fashion.
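A numerical sanity check of this result (not part of the original answer): specialize to the unit cone $x^2+y^2=z^2$ (half-angle $45°$), pick a target ellipse, place the vertex according to Eqs. (2) and (4), and confirm that points of $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ make a $45°$ angle with the cone's axis.

```python
import math

a, b = 2.0, 1.0                  # target ellipse semi-axes
cos2t = b**2 / a**2              # Eq. (4)
theta = 0.5 * math.acos(cos2t)
v_z = b**2 / a                   # Eq. (4), taking the + sign
v_x = v_z * math.tan(2 * theta)  # Eq. (2)

apex = (v_x, 0.0, v_z)
axis = (-math.sin(theta), 0.0, -math.cos(theta))  # axis tilted toward the origin

for k in range(12):
    t = k * math.pi / 6
    p = (a * math.cos(t), b * math.sin(t), 0.0)    # a point of the ellipse
    u = tuple(pc - ac for pc, ac in zip(p, apex))  # apex -> point
    norm = math.sqrt(sum(c * c for c in u))
    dot = sum(c * d for c, d in zip(u, axis))
    # on a 45-degree cone: cos(angle between u and axis) = 1/sqrt(2)
    assert abs(dot - norm / math.sqrt(2)) < 1e-9
print("all sampled ellipse points lie on the tilted cone")
```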
2019-07-19 15:14:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.867913007736206, "perplexity": 156.2612831319266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526254.26/warc/CC-MAIN-20190719140355-20190719162355-00305.warc.gz"}
https://hal-insu.archives-ouvertes.fr/insu-03667092
# New Evidence for a Physical Link between Asteroids (155140) 2005 UD and (3200) Phaethon

Abstract: In 2018, the near-Earth object (155140) 2005 UD (hereafter UD) experienced a close flyby of the Earth. We present results from an observational campaign involving photometric, spectroscopic, and polarimetric observations carried out across a wide range of phase angles (0.7°-88°). We also analyze archival NEOWISE observations. We report an absolute magnitude of HV = 17.51 ± 0.02 mag and an albedo of pV = 0.10 ± 0.02. UD has been dynamically linked to Phaethon due to their similar orbital configurations. Assuming similar surface properties, we derived new estimates for the diameters of Phaethon and UD of D = 5.4 ± 0.5 km and D = 1.3 ± 0.1 km, respectively. Thermophysical modeling of NEOWISE data suggests a surface thermal inertia of $\Gamma = 300^{+120}_{-110}$ and regolith grain sizes in the range of 0.9-10 mm for UD, and grain sizes of 3-30 mm for Phaethon. The light curve of UD displays a symmetric shape with a reduced amplitude of Am(0) = 0.29 mag, increasing at a linear rate of 0.017 mag/° between phase angles of 0° and ∼25°. Little variation in light-curve morphology was observed throughout the apparition. Using light-curve inversion techniques, we obtained a sidereal rotation period P = 5.235 ± 0.005 hr. A search for rotational variation in spectroscopic and polarimetric properties yielded negative results within observational uncertainties of ∼10% μm⁻¹ and ∼16%, respectively. In this work, we present new evidence that Phaethon and UD are similar in composition and surface properties, strengthening the arguments for a genetic relationship between these two objects.

* Partially based on data collected with the 2 m RCC telescope at Rozhen National Astronomical Observatory.

Document type: Journal articles

https://hal-insu.archives-ouvertes.fr/insu-03667092

Contributor: Nathalie Pothier. Submitted on: Friday, May 13, 2022 - 10:02:20 AM. Last modification on: Sunday, May 15, 2022 - 3:03:25 AM.

### Citation

Maxime Devogèle, Eric Maclennan, Annika Gustafsson, Nicholas Moskovitz, Joey Chatelain, et al. New Evidence for a Physical Link between Asteroids (155140) 2005 UD and (3200) Phaethon. The Planetary Science Journal, 2020, 1, ⟨10.3847/PSJ/ab8e45⟩. ⟨insu-03667092⟩
2022-05-21 11:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44851136207580566, "perplexity": 6198.503669035287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539101.40/warc/CC-MAIN-20220521112022-20220521142022-00162.warc.gz"}
https://mathematica.stackexchange.com/questions/161829/toggle-frame-color-and-label-on-input-field
# Toggle frame color and label on input field

I am attempting to mock up a (somewhat poorly designed) user interface that has input fields that, when required AND when the user tries to click OK, change so that they are outlined in red and show text above them with instructions to "Please supply a value". If the user starts to type text in the field, the red outline and the instructions should disappear.

I have been able to achieve most of this using the following code, except for two problems. The first is that as soon as the user types one character, the input field loses focus. The second is that the label text is picking up the wrong background color. I tried to use the technique described in Giving the input focus to a particular input field but could not get this to work either.

commentRequired = False;
comment = "";

commentField = Dynamic@Labeled[
   Framed[
    InputField[
     Dynamic[comment, (comment = #1;
        If[comment != "", commentRequired = False]) &],
     String, FieldHint -> "Comment", FieldSize -> 33,
     ContinuousAction -> True],
    FrameMargins -> None,
    FrameStyle -> If[commentRequired, Red, CurrentValue["Background"]]],
   (* label; the string itself was lost in the extract and is reconstructed from the description above *)
   Framed[
    Style["Please supply a value",
     FontColor -> If[commentRequired, Red, CurrentValue["Background"]]],
    FrameStyle -> None,
    Background -> If[commentRequired, LightRed, CurrentValue["Background"]]],
   Bottom];

okButton = DefaultButton["Ok",
   If[comment != "", DialogReturn[comment], commentRequired = True]];

CreateDialog[Column[{commentField, okButton}]]

Here is a snapshot of the dialog after pressing OK without supplying a comment:

After typing one character the red frame goes away and the label changes to white (it needs to disappear), but the input field loses focus:

The user must then click the input field again to continue typing.

Move the Dynamic on Labeled to the FrameStyle:

commentField = Labeled[
   Framed[
    InputField[
     Dynamic[comment, (comment = #1;
        If[comment != "", commentRequired = False]) &],
     String, FieldHint -> "Comment", FieldSize -> 33,
     ContinuousAction -> True],
    FrameMargins -> None,
    FrameStyle -> Dynamic@If[commentRequired, Red, CurrentValue["Background"]]],

The Dynamic on the Labeled causes the whole Labeled expression to be reevaluated, which creates a new InputField. Hence the focus loss.

• Cool. If you have other evaluation issues with it, you might look up TrackedSymbols and Refresh. – Michael E2 Dec 12 '17 at 17:59
2021-04-23 17:35:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1905723214149475, "perplexity": 5699.3631553989135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039596883.98/warc/CC-MAIN-20210423161713-20210423191713-00268.warc.gz"}
https://docgo.net/detail-doc.html?utm_source=optical-fiber-with-novel-geometry-for-evanescent-wave-sensing
# Optical fiber with novel geometry for evanescent-wave sensing

Description: First results in the preparation and analysis of an optical fiber with a novel geometry which facilitates the access of chemical species to the evanescent field for sensing purposes are presented. This 's-fiber' is of approximately sectorial cross

Transcript

ELSEVIER — Sensors and Actuators B 29 (1995) 416-422

Optical fiber with novel geometry for evanescent-wave sensing

Vlastimil Matějec, Miroslav Chomát, Marie Pospíšilová, Miloš Hayer, Ivan Kašík
Institute of Radio Engineering and Electronics, Academy of Sciences of the Czech Republic, Chaberská 57, 182 51 Prague, Czech Republic

Abstract

First results in the preparation and analysis of an optical fiber with a novel geometry which facilitates the access of chemical species to the evanescent field for sensing purposes are presented. This 's-fiber' is of approximately sectorial cross section with the core located in the carefully rounded vertex of the sector. Using a perturbation method, the dependences of the attenuation coefficient of the fundamental mode in a weakly-guiding, step-index s-fiber on the fiber normalized frequency, vertex angle and cladding thickness are determined. Attenuation coefficients several times higher than in D-fibers are theoretically attainable. Preforms for drawing s-fibers are prepared from standard MCVD preforms by accurate grinding and polishing of the preforms to a desired sectorial shape. Multimode s-fibers with a core dimension of about 30 μm and a cladding size of about 170 μm, exhibiting satisfactory strength, have been drawn. The resulting shapes of the fiber and core depend on the shape, structure and composition of the preform, the drawing temperature and the drawing velocity. The results have proved the feasibility of the chosen approach to the laboratory preparation of s-fibers. In preliminary experiments the sensing ability of the drawn fibers has been examined.

Keywords: Optical fibers; Novel geometry; Evanescent-wave sensing

1. Introduction

For evanescent-wave chemical sensing a variety of optical fibers differing greatly in geometry and structure have been investigated and tested. They include multimode PCS fibers with removed or modified cladding and various types of tapered, etched, side-polished, eccentrically clad and D-shaped single-mode or few-mode fibers [1-3].

In order to achieve high detection sensitivity of evanescent-wave fiber-optic chemical sensors, the fraction of total optical power that is carried by the evanescent wave should be large, there should be good access of the detected chemical species to the evanescent field of the fiber, and long fiber interaction lengths should be used. For most practical applications sufficient strength, ruggedness and durability of the sensing fiber are also required.

Commercially available and inexpensive multimode PCS fibers with removed or newly formed claddings and with core diameters ranging from 100 to 600 μm, as well as tapered fibers of this type, still seem to be the most widely used type of sensing fibers. Their evanescent field is easily accessible in the whole area where it exists, they possess good mechanical properties and long interaction lengths are easily achievable with them.
However, the fraction of the total optical power that is carried by the evanescent wave, which is under certain assumptions inversely proportional to the normalized frequency of the fiber V [4], is rather low, being usually of the order of 10⁻⁴-10⁻³. This results in a relatively low detection sensitivity of the sensor.

Of the other types of sensing fibers, single-mode or few-mode D-fibers are very promising for future evanescent-wave sensing [3]. The main reasons for this are an easy access to the evanescent field in the whole area over the fiber flat, fiber ruggedness, the possibility of achieving long fiber interaction lengths and, finally, their commercial availability. However, the optical power of the evanescent wave which may interact with the chemical species is, due to the D-fiber geometry, only a small part of the total optical power of the evanescent wave that could theoretically be utilized for this purpose.

Fig. 1. Geometry and structure of the analyzed s-fiber in an absorbing coating. (Figure labels: core, cladding with outer radius r0, rounded vertex, coating with complex index n3 − jk.)

We propose a sensing fiber with a novel geometry which brings the possibility of enhancing the evanescent interaction for single-mode as well as for multimode evanescent-wave sensing [5]. Further, we report the results that have been obtained.

2. Basic features of the s-fiber

The proposed novel fiber is of approximately sectorial cross section with rounded corners, as sketched in Fig. 1. The core of this 's-fiber' is located on the sector axis very close to the rounded vertex of the sector. The lateral flats of the s-fiber form an angle β which is equal to or less than 180°, which was chosen as the limit angle for the s-fiber. The height H of the sectorial cross section of an s-fiber is chosen to be in the range 100-600 μm, which is comparable with the diameters of most conventional sensing fibers. The thickness of the fiber cladding over a certain area in the rounded vertex is either very small, or the cladding may even be completely missing in this area. Depending on the shape of the core in the preform and on the conditions under which the s-fiber is drawn, the fiber core is either circular or takes other forms.

The envisaged advantage of single-mode or few-mode s-fibers, with a suitably chosen vertex angle, over single-mode or few-mode D-fibers with the same core dimensions and corresponding refractive-index profiles is the possibility of achieving a stronger evanescent interaction, which follows from the improved access to the evanescent field in s-fibers.

The idea of the s-fiber may also be applied to multimode sensing. The evanescent interaction in a multimode circular-core fiber can be made stronger by decreasing the fiber normalized frequency V, which can be achieved by decreasing the core diameter. Of course, there are certain practical limits for diminishing the fiber core. In most sensing applications it would be rather difficult to handle and employ long lengths of a declad or thinly coated PCS fiber with a relatively small core diameter, e.g. in the range 20-40 μm. On the other hand, handling and employing a multimode s-fiber with core dimensions in the mentioned range and with outer dimensions ranging from 100 to 600 μm should not present a serious problem.
Although the beneficial effect of decreasing the core dimensions will be partly reduced, due to the impossibility of accessing the whole evanescent field of the s-fiber, it can be expected that the resulting net enhancement of the evanescent interaction may be significant.

3. Theoretical determination of the attenuation of the fundamental mode of the s-fiber

We determine the attenuation of the fundamental mode in a weakly-guiding, step-index, circular-core s-fiber coated with a weakly-absorbing coating, with the aim of assessing the improvement that may be brought by the novel fiber geometry. The light-absorbing property of the coating is caused by the penetration of a chemical species into the coating.

We consider an s-fiber with a core of radius a and refractive index n1 and with a cladding of refractive index n2 whose outer radius r0 and thickness are constant over a certain area in the vertex region (Fig. 1). The complex refractive index of the coating is n3 − jk, where we assume that k << n3 and n3 ≈ n2. The possibility of fulfilling the last assumption lies in using, for example, a suitable porous glass coating prepared by the sol-gel process [4]. Generally, a chemical species that penetrates into the coating influences not only the imaginary part k of the refractive index but, to some degree, also its real part n3. Induced variations of n3 cause changes of the evanescent field which also influence the attenuation of the guided mode. Here this effect is not taken into account. For the sake of simplicity of the calculations we further assume that the coating is uniform and infinite and that n3 = n2 holds.

Under these assumptions the coated s-fiber may be treated as a slight perturbation of an unperturbed fiber with the same core and a non-absorbing infinite cladding of refractive index n2, for which the fundamental mode solution is known and perturbation methods may be applied [6]. The area S in which the refractive index of the cladding is perturbed includes the whole region outside the cross section of the s-fiber.

The power attenuation coefficient γ of the guided mode can be expressed as [6]

γ = α₀η   (1)

where α₀ = kk₀ is the power absorption coefficient of the coating (for bulk absorption), k₀ = 2π/λ₀, and η is the fraction of the total power that propagates through the area S [7]:

η = ∫_S |E|² dA / ∫_{A∞} |E|² dA   (2)

In Eq. (2), E is the electric field of the fundamental mode of the unperturbed fiber.

The determination of η is based on the relation for the fraction η₁ of the total power that propagates in the circular area of the cladding of the unperturbed fiber with inner radius r₁ ≥ r₀ and outer radius r₂ → ∞, which is, for the radially symmetric modal field of the fundamental mode, given by

η₁ = [U²R₁² / (V²K₁²(W))] [K₁²(WR₁) − K₀²(WR₁)]   (3)

In Eq. (3), U, W and V are given by

U = k₀a√(n₁² − nₑ²),  W = k₀a√(nₑ² − n₂²),  V = √(U² + W²)   (4)

where nₑ is the effective refractive index of the mode, R₁ = r₁/a, and K₀, K₁ are the modified Bessel functions. Eq. (3) can easily be obtained using the expressions for the fraction of the power in the core given in Table 14-3 in Ref. [6] and the electric field of the mode in the cladding of the unperturbed fiber.

On the basis of Eq. (3) and Fig. 1, we can express Eq. (2) as

η = (1/2π) ∫ η₁[R₁(φ)] dφ   (5)
The function R₁(φ) in Eq. (5) is given by R₁ = R₀ (R₀ = r₀/a) in the interval of φ in which the thickness of the cladding is constant, and by R₁ = R₀/cos φ for φ in the intervals (0, δ) and (π+β−δ, π+β), where the meaning of the angle δ is evident from Fig. 1. Neglecting the contribution to η from the angular interval (δ, π+β−δ), where the relative thickness of the cladding is large and the value of η₁ is relatively small, we then get for γ of the fundamental mode

γ = [α₀U²R₀² / (2πV²K₁²(W))] { (π − β)[K₁²(WR₀) − K₀²(WR₀)] + 2 ∫₀^δ [K₁²(WR₀/cos φ) − K₀²(WR₀/cos φ)] / cos²φ dφ }   (6)

The integral term in Eq. (6), which is independent of the vertex angle β, represents the contribution to the attenuation originating from two identical parts of an s-fiber for which φ is in the intervals (0, δ) and (π+β−δ, π+β) (see Fig. 1). These fiber parts may be seen as two separated halves of the corresponding D-fiber. The first term in Eq. (6), which linearly depends on β, represents the added contribution to the attenuation appearing owing to the geometry of the s-fiber. For a D-fiber it is β = π and this term vanishes.

Using numerical evaluation of Eq. (6), important dependences of the attenuation coefficient γ on various fiber parameters have been determined. Fig. 2(a) and (b) shows the ratio γ/α₀ as a function of the fiber normalized frequency V for several values of the angle β and two values of R₀. Also shown in Fig. 2 are the curves for a fiber with removed cladding (R₀ = 1.0) and for a fiber with a circular cladding. In Fig. 3(a) and (b) the ratio γ/γ_D, where γ_D is the attenuation coefficient for the corresponding D-fiber, is plotted as a function of the angle β for several values of R₀ and two values of V.

Fig. 2. Dependence of the ratio of the attenuation coefficient γ and the absorption coefficient α₀ on the normalized frequency of the fiber V for several values of the vertex angle β (180°, 150°, 120°, 90°, 60°), for a fiber with removed cladding and for a fiber with a circular cladding, for R₀ = 1.0 (a), 1.5 (b).

Fig. 3. Dependence of the ratio of the attenuation coefficients γ (s-fiber) and γ_D (corresponding D-fiber) on the vertex angle β for several values of the relative radius R₀ of the cladding (1.0, 1.1, 1.3, 1.5) and for V = 1.6 (a), 2.0 (b).
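The single-mode quantities behind Figs. 2 and 3 can be sketched numerically. The snippet below is not the authors' code: it solves the standard LP01 dispersion relation for a weakly guiding step-index fiber and evaluates η₁ of Eq. (3) — the power fraction beyond the normalized radius R₁, which is the building block of Eqs. (5) and (6).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1, k0, k1

def lp01_U(V):
    """Solve U J1(U)/J0(U) = W K1(W)/K0(W) with W = sqrt(V^2 - U^2)."""
    def f(U):
        W = np.sqrt(V**2 - U**2)
        return U * j1(U) / j0(U) - W * k1(W) / k0(W)
    return brentq(f, 1e-6, min(V, 2.404) - 1e-6)  # stay below the J0 cutoff

def eta1(V, R1):
    """Eq. (3): fraction of modal power beyond radius r1 = R1 * a."""
    U = lp01_U(V)
    W = np.sqrt(V**2 - U**2)
    return U**2 * R1**2 / (V**2 * k1(W)**2) * (k1(W * R1)**2 - k0(W * R1)**2)

for V in (1.6, 1.8, 2.0, 2.2):
    print(V, [round(eta1(V, R1), 4) for R1 in (1.0, 1.1, 1.3, 1.5)])
# eta1 grows rapidly as V drops, mirroring the trend of the curves in Fig. 2.
```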
4. Experiments and results

Preforms for drawing s-fibers were prepared in two steps. In the first step a preform with a cylindrically symmetric glass core (GeO2-SiO2) and a glass optical cladding (F-P2O5-SiO2) was prepared by the MCVD method (device: Special Gas Company, UK). A typical refractive-index profile of the preform, measured using the refractive-index profiler P101 (York Technology, UK), is shown in Fig. 4. In the second step, accurate grinding and polishing of this preform were used to prepare a preform of sectorial cross section with a carefully rounded vertex region. The core and cladding structure can be seen in a photo of the prepared sectorial preform with a completely removed part of the cladding in the vertex region, shown in Fig. 5. The vertex angle β of this preform is equal to 70°. The tomographic refractive-index profile of this preform, measured in immersion, is shown in Fig. 6.

Fig. 4. Refractive-index profile of the MCVD preform (x-axis: radius [mm]).

Fig. 5. Photo of a prepared sectorial preform.

Fig. 6. Tomographic refractive-index profile of a sectorial preform (measured in immersion, n = 1.4565 at 580 nm).

The fibers were drawn from the preforms using a graphite resistance furnace (Centor, USA). Drawing temperatures in the range from 1850 to 1950 °C were investigated. A multimode fiber with a core size of about 30 μm and an outer size (height H) of about 170 μm, coated with a layer of the silicone polymer Sylgard 184 (Dow Corning, USA) which was approximately 50 μm thick, was prepared. A drawing velocity of about 15 m/min was used. The endface of a broken s-fiber is shown in Fig. 7.

Fig. 7. Endface of the broken s-fiber (excited with numerical aperture 0.19 at 580 nm).

Preliminary experiments were performed in order to examine the sensing ability of the prepared s-fibers. Solutions of methylene blue in glycerol and ethanol with concentrations of 277 ppm (wt.) and 27 ppm (wt.) were used in the experiments. Their refractive indices were both 1.439. The silicone claddings were removed from the fibers in lengths of about 5 cm. This fiber section was placed into a measuring cell (no solution circulation was used) and its spectral attenuation was measured using a laboratory device with a lock-in amplifier. The diagram of the optical measurement system is shown in Fig. 8. A dynamic range of up to 30 dB and an accuracy of attenuation measurement of about 0.1 dB can be achieved with this device.

In order to verify the detection ability of the optical measurement system for evanescent-wave sensing, experiments with commonly used PCS fibers were also carried out. In these experiments, PCS fibers with a core diameter of 200 μm and coated with a Sylgard 184 layer of approximately the same thickness as in the case of the s-fiber were used. The PCS fibers were excited with an input numerical aperture of about 0.19 using the optical system of the device. The s-fibers were excited with a few-mode fiber with a core diameter of 10 μm and a numerical aperture of 0.125.

In the experiments, the spectral dependence of the output intensity I₀ of the fibers in the empty cell in air was measured first. Then the cell was filled with the solution of methylene blue and the intensity I₁ was determined. The attenuation γ₁ was calculated from the equation

γ₁ = 10 log(I₀/I₁)   (7)

from which γ₁ = 4.342γL, where L is the length of the fiber section without the silicone polymer. The results of these measurements for the methylene blue concentration of 27 ppm are shown in Fig. 9. For the concentration of 277 ppm, the attenuation at 660 nm was found to be 11.8 dB for the s-fiber and 4.3 dB for the PCS fiber.

Fig. 9. Spectral dependence of the attenuation of the s-fiber and PCS fiber in a solution of methylene blue (27 ppm (wt.), length of optical path 5.5 cm; x-axis: wavelength [nm]).

5. Discussion

In the preceding sections, theoretical and experimental results on s-fibers have been presented.
The aim of ,~_~~cH~LE x,y,z x,y ~ J LEGEND L LA~P 5OW, MC-MONOCNROMATOR LE-LENS, CH-CHOPPER, SO S'¢NCHR DETECTOR 0 DETECTOR, PA PP-4EAMPUFIER, -MOTOR, LR-UNE REFER3JJCE, PLL-PHASE LOCK LOOP, PS-PHA.SE SHIFF~R,$1-SAMPL I ,rT'EGRATOR, A-OIP AMPUFIER. ADC A/D CONVERTER, C-COMPUTER Fig. 8. Diagram of the optical measurement system. the theoretical analysis was to provide basic informa- tion on the sensing ability of the s-fiber in single-mode operation. The analysis confirmed the expected rapid increase of the attenuation coefficient ), of the funda- mental mode as the fiber normalized frequency V de- creases and the mode is less confined. Light launching problems and microbending effects, however, would prevent the use of s-fibers with low V values. A reason- able compromise might be the choice of V in the range 1.6-2.0. The results also reveal the beneficial effect of decreasing the vertex angle on the enhancement of the attenuation coefficient. The experiments have proved the feasibility of the chosen approach for laboratory preparation of s-fibers. Attention was given mainly to the preparation of multi- mode s-fibers with small-diameter cores (20 30 ~tm) which are expected to comprise the advantages of high fractional power in the evanescent field and the strength and ruggedness of D-fibers. A good strength and ruggedness of the s-fiber is expected on the basis of similar methods of preparation of s-fibers and D-fibers utilizing grinding and polishing the preforms which should induce similar types of surface flaws in both cases. The drawn fibers exhibit satisfactory mechanical strength and have not shown any obvious signs of degradation even in the parts from which the polymer cladding had been removed. No serious problems have been met when handling the s-fibers. For these reasons they are seen as very promising for applications. Comparing the shape of the drawn s-fiber in Fig. 7 and the shape of the starting preform in Fig. 5 one can conclude that the s-fiber maintained the srcinal shape of the preform only roughly. For example, the vertex angle fl measured in the vertex region of the fiber is approximately 100 ° in contrast to the value of fi equal to 70 ° at the preform. The resulting shape of the s-fiber is in a complex way determined by the shape of the starting preform, material viscosities, surface tension, drawing temperature and drawing velocity. Establishing the relationship between these parameters will require further extensive research. Nevertheless, from Fig. 7, one can draw a qualitative conclusion that the access to Jan 13, 2019 #### Thulium-doped silica-based optical fibers for cladding-pumped fiber amplifiers Jan 13, 2019 Search Similar documents ### Fiber Bundle Optical Coupler with Circle-Distributed for Diffuse Light Collecting of Grain Sample View more... Related Search Thank you for visiting our website and your interest in our free products and services. We are nonprofit website to share and download documents. To the running of this website, we need your help to support us. Thanks to everyone for your continued support. No, Thanks SAVE OUR EARTH We need your sign to support Project to invent "SMART AND CONTROLLABLE REFLECTIVE BALLOONS" to cover the Sun and Save Our Earth. More details... Sign Now! We are very appreciated for your Prompt Action! x
2019-10-22 01:46:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3021678328514099, "perplexity": 2657.8967800831592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795403.76/warc/CC-MAIN-20191022004128-20191022031628-00340.warc.gz"}
http://gateoverflow.in/956/gate2003-69?show=143663
+5 votes 684 views

The following are the starting and ending times of activities $A, B, C, D, E, F, G$ and $H$ respectively in chronological order: $“a_s \: b_s \: c_s \: a_e \: d_s \: c_e \: e_s \: f_s \: b_e \: d_e \: g_s \: e_e \: f_e \: h_s \: g_e \: h_e”$. Here, $x_s$ denotes the starting time and $x_e$ denotes the ending time of activity X. We need to schedule the activities in a set of rooms available to us. An activity can be scheduled in a room only if the room is reserved for the activity for its entire duration. What is the minimum number of rooms required?

1. 3
2. 4
3. 5
4. 6

asked retagged | 684 views

## 6 Answers

+15 votes
Best answer
Solution: B. The problem can be modeled as a graph coloring problem. Construct a graph with one node corresponding to each activity A, B, C, D, E, F, G and H. Connect the activities whose time intervals overlap; the chromatic number of this graph is then the minimum number of rooms required.
answered by Active (1.4k points) selected by
@rajesh draw a horizontal time line and mark the time positions; each overlapping pair gives an edge between its two vertices. I think the time-line method is better.
All those having confusion can also check the register allocation problem, which is solved using graph colouring; the solution is similar.
@Abhinav93 The graph is constructed as follows: when A starts, B and C are also in progress, therefore there are edges (a,b) and (a,c); similarly, by the time B finishes, it has overlapped with a, c, d, e and f, and so on. Hope you understand.
Simple way: for each time unit maintain a stack. For an "s" operation insert into the stack and for an "e" operation delete from the stack (think of it as being able to delete an element from any position of the stack). The maximum height at any instant of time is the answer.
@Arjun Sir, is this an Activity Selection Problem? Please clarify.
@aswlna I think so.

+5 votes
Answer: (B)
Explanation: Room1 – a_s; Room2 – b_s; Room3 – c_s. Now a ends (a_e) and Room1 is free. Room1 – d_s. Now c ends (c_e) and Room3 is free. Room3 – e_s; Room4 – f_s. Now b ends, so Room2 is free; now d ends, so Room1 is free. Room2 – g_s. Now e ends (Room3 free) and f ends (Room4 free). Room1 – h_s. Now g and h end. In total 4 rooms were used.
answered by Loyal (4.2k points)

+1 vote
Ans = 4, as a new room is allocated only when all previously allocated rooms are occupied.
answered by (399 points)

+1 vote
This type of problem can be solved by an array representation: each starting time counts as +1 and each ending time as -1:
+1 +1 +1 -1 +1 -1 +1 +1 -1 -1 -1 -1 -1 -1 -1 -1
Now take prefixes of this array and sum them to find the largest positive value. The first 8 entries give 1+1+1-1+1-1+1+1 = 4, so the answer is 4.
answered by Junior (807 points)
Could you please provide some reference for this solution?

+1 vote
Total 4 rooms required. Option B is the answer.
answered by (349 points)

0 votes
ans 4
answered by Boss (5.3k points)
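The "stack height" idea from the answers above is easy to check mechanically: +1 on every start event, −1 on every end event, and the running maximum is the minimum number of rooms. A small sketch (not from the original thread):

```python
# chronological event list from the question: xs = start of x, xe = end of x
events = ["as", "bs", "cs", "ae", "ds", "ce", "es", "fs",
          "be", "de", "gs", "ee", "fe", "hs", "ge", "he"]

rooms = peak = 0
for ev in events:
    rooms += 1 if ev.endswith("s") else -1  # a start occupies, an end frees a room
    peak = max(peak, rooms)
print(peak)  # -> 4, matching option (B)
```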
2017-08-23 14:00:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39805033802986145, "perplexity": 3062.407143320042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.0/warc/CC-MAIN-20170823132736-20170823152736-00537.warc.gz"}
https://mathoverflow.net/questions/351043/probability-that-one-gaussian-rv-exceeds-all-others-not-the-identically-distrib
# Probability that one Gaussian RV exceeds all others (not the identically distributed case) Imagine we have $$k$$ Gaussian RVs $$X_i \sim N(\mu_i, \sigma_i^2) \text{ for } i=1, \ldots, k$$ and we sample from each of them independently to produce a vector, $$\vec{x} = (x_1, \ldots, x_k)$$. For one of the Gaussian RVs, say $$X_j$$, I am interested in computing the probability that it exceeds all others, i.e. $$\Pr\left\{ \cap_{i\not= j} \, X_j > X_i \right\}.$$ I know I can use Monte Carlo sampling to estimate this probability. But are there any closed-form analytical methods or approximations? • Do you mean $Pr(\{X_j > \max_{i \not= j \colon 1 \leq i \leq k} ~X_i\})$? – Dieter Kadelka Jan 24 at 10:37 • Yes, that's another way to write it. The intersection of the events "$X_j > X_i$" for all $i$ is equivalent to $X_j > \text{max}(\{X_i\}_{i\not=j})$. – ted Jan 24 at 17:31 • this is answered at stats.stackexchange.com/a/114519 – Carlo Beenakker Jan 24 at 19:37
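For reference, the linked answer reduces this to a one-dimensional integral: condition on $X_j = x$ and multiply the CDFs of the others. A sketch of that computation, with a Monte Carlo cross-check (the example means and standard deviations below are made up):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def prob_is_max(mu, sigma, j):
    """P(X_j > X_i for all i != j), independent X_i ~ N(mu_i, sigma_i^2)."""
    others = [i for i in range(len(mu)) if i != j]
    def integrand(x):
        val = stats.norm.pdf(x, mu[j], sigma[j])
        for i in others:
            val *= stats.norm.cdf(x, mu[i], sigma[i])
        return val
    return quad(integrand, -np.inf, np.inf)[0]

mu, sigma = [0.0, 0.5, 1.0], [1.0, 2.0, 0.5]
p = [prob_is_max(mu, sigma, j) for j in range(3)]
print(p, sum(p))  # the three probabilities sum to 1 (ties have probability zero)

# Monte Carlo cross-check
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(200_000, 3))
print((samples.argmax(axis=1) == 2).mean())  # should approximate p[2]
```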
2020-02-27 21:14:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9408947229385376, "perplexity": 384.15623639906454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146809.98/warc/CC-MAIN-20200227191150-20200227221150-00409.warc.gz"}
https://solvedlib.com/n/the-total-e-2-question-glycolysis-l-eaction-jl-6-catana,1899459
# The total E 2 QUESTION Glycolysis L eaction Jl 6 catana WH 1 mol ot AP 0n nerormalion 01

###### Question:

The total E 2 QUESTION Glycolysis L eaction Jl 6 catana WH 1 mol ot AP 0n nerormalion 01 2Moiateano _Moi~ADA 1 V L Mte Po V W etharvol produclion stxri bagins
2022-05-21 21:12:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818967342376709, "perplexity": 1746.0545200156648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00743.warc.gz"}
https://circa.cs.ualberta.ca/index.php?title=CIRCA:CSDH/SCHN_2013&action=getLaTeX
# LaTeX code for CSDH/SCHN 2013

\documentclass[a4paper,11pt]{article}
\usepackage{ulem}
\usepackage{a4wide}
\usepackage[dvipsnames,svgnames]{xcolor}
\usepackage[pdftex]{graphicx}
\usepackage{hyperref}

% commands generated by html2latex

\begin{document}

On Tuesday June 4, 2013 the History and Archives group will be presenting the paper ``Digital Activism and the Digital Humanities'' at Congress in Victoria, Canada.

\hypertarget{Abstract}{}
\section{Abstract}

At the close of every year TIME magazine awards a person or group of persons the honourific ``Person of the Year''. In 2011 this title was awarded to The Protestor. From the Arab Spring to the Occupy Movement, activists worked to gather support, to connect to each other, and to bring about change. In addition to massive mobilizations, The Protestor had an arsenal of digital technologies at their disposal, and terms such as Twitter Revolution, Revolution 2.0 and \#\_\_\_\_\_\_\_\_\_\_ became ubiquitous.
\\
\\Shortly before the unrest of 2011, a collective of digital humanities scholars and practitioners in the U.S., Canada, U.K. and Australia came together to found 4Humanities. In response to alarming funding cuts to many universities and education programs, these advocates believe it is their responsibility to act in defense of the humanities: ``The humanities are in trouble today, and digital methods have an important role to play in effectively showing the public why the humanities need to be part of any vision of a future society.''[1]
\\
\\This paper will discuss the potential for digital activism in humanities advocacy from within the walls of academia:
\begin{itemize}
\item First we will define the term digital activism and discuss its history and some tactics.
\item Next we will describe the international 4Humanities Initiative, its goals and activities.
\item Finally we will outline one activity undertaken at the University of Alberta to assist in this grassroots endeavour - the creation of an Advocacy Guide for digital humanists.
\end{itemize}
The Advocacy Guide is composed of five sections:
\begin{enumerate}
\item What's at Stake - describes the funding and support issues prevalent in the Humanities.
\item Brief History of the Humanities - describes the historical ``splitting'' of the Arts and Sciences.
\item Arguments FOR and AGAINST - covers the arguments both in support of the Humanities as well as those with a negative view.
\item Preparing for Advocacy - describes the important factors to consider when developing an advocacy campaign for the Humanities.
\item Tactics - discusses appropriate digital advocacy tactics drawn from the literature on digital activism.
\end{enumerate}
Alan Liu writes that:
\\
\\``Truly to contribute, I believe, the digital humanities will need to show that it can also take a leadership role. The obvious leadership role at present is service for the cause of the humanities. Now that the humanities are being systematically or catastrophically defunded by nations, states, and universities, the digital humanities can best serve the humanities by helping it communicate in the new arena of networked and social public knowledge, helping it showcase its unique value, and helping it partner across disciplines with the STEM sciences in `grand challenge' projects deemed valuable by the public and its leaders.''
[2]

The digital humanities have an advantage and even a responsibility to make use of the improved analytical and communicative methods afforded to us today. This paper will show some of the ways we can.

\hypertarget{Paper}{}
\section{Paper}

In 2011 TIME magazine awarded the honourific `Person of the Year' to The Protestor. ``No one could have known that when a Tunisian fruit vendor set himself on fire in a public square, it would incite protests that would topple dictators and start a global wave of dissent. In 2011, protesters didn't just voice their complaints; they changed the world.'' Indeed, from the Arab Spring to the Occupy Movement, activists worked to gather support, to connect to each other, and to bring about change. In addition to massive mobilizations, The Protestor had an arsenal of digital technologies at their disposal, and terms such as Twitter Revolution, Revolution 2.0 and hashtag `insert slogan here' became ubiquitous.

The role of digital technologies in activist causes is both widely championed and contested, but our purpose here isn't to focus on this debate. Rather, our point in this paper is to show how we, as digital humanists, can use these technologies in defense of the humanities. In this paper we will:
\begin{itemize}
\item Define digital activism;
\item Outline the current `Crisis of the Humanities' and the need for our community to act in defense of the humanities; and
\item Introduce the 4Humanities initiative, ``a platform and resource for advocacy of the humanities, drawing on the technologies, new-media expertise, and ideas of the international digital humanities community.''
\end{itemize}

\hypertarget{Introduction_to_Digital_Activism}{}
\subsection{Introduction to Digital Activism}

Digital activism is one of many possible appellations referring to the use of digital technology towards the advancement of political and social goals. Others include, but are not limited to: cyberactivism, internet activism, networked activism, liberation technologies, or electronic civil disobedience. Following in the steps of Mary Joyce in \textit{Digital Activism Decoded: The New Mechanics of Change}, the term digital activism is chosen because of its exhaustiveness and exclusivity: ``Exhaustive in that it encompasses all social and political campaigning practices that use digital network infrastructure; exclusive in that it excludes practices that are not examples of this type of practice.'' For example, electronic civil disobedience is not exclusive, as it could refer to any use of electronics, and such activities have long been in practice. The cassette tape was integral to the 1979 Iranian Revolution by allowing the Ayatollah Khomeini to distribute his taped speeches in opposition to the American-backed Shah (Eric Schmidt and Jared Cohen, ``The Digital Disruption'', \textit{Foreign Affairs}, November/December 2010). On the other hand, the terms cyber- and internet activism are not exhaustive. They omit Short Message Service (SMS), one of the most commonly used features on mobile phones. In 2001, when corrupt Philippine President Joseph Estrada was on trial and it appeared that Congress was going to dismiss evidence against him and allow him to remain in power, thousands of Filipinos took to the streets of Manila armed with cell phones. Coordination by text messaging allowed for rapid mobilization and ultimately helped to force Estrada out of office (Clay Shirky, ``The Political Power of Social Media'', \textit{Foreign Affairs}, January/February 2011).
The term digital activism encompasses all actions that make use of the digital network. By using 0s and 1s to store and process information, and by exchanging this information using the standardized language ASCII (American Standard Code for Information Interchange), computers around the world are able to communicate with each other. This universality of binary code, 0s and 1s, is the strength of the digital network. Examples of digital activism include organizing campaigns through email, SMS, or social media, spreading information via listservs, blogs, micro-blogs and websites, uploading and posting targeted multimedia content to a website or web platform, participating in an on-line e-petition campaign, or even hosting webinars to teach less tech-savvy activists how to achieve their goals using digital technology. Hacktivism is another way participants can engage in digital activism. One of the best-known international hacktivist groups is Anonymous, whose activities range from Distributed Denial of Service (DDoS) attacks and website defacement to publicizing personal information about accused persons.

\hypertarget{In_Defense_of_the_Humanities}{}
\subsection{In Defense of the Humanities}

\subsubsection{The History of the Humanities}

According to Dr. Mike Lippman, University of Arizona, Department of Classics, the Humanities originate in 5th-century BC Greece, where we find the first concentrated development of tragedy or drama, comedy, philosophy, and history -- all the major disciplines included in the Humanities today.

The online dictionary defines the Humanities as one part of what is commonly referred to as the Liberal Arts. Also included under the umbrella of the Liberal Arts are the natural sciences, arts, and social sciences. The Liberal Arts include those topics that are not professional or technical subjects. The term `liberal arts' originates from the mid-eighteenth century, translated from the Latin \textit{art\=es l\=iber\=al\=es}, meaning `works befitting a freeman'.

Referring to the core skills employed in the civic life and public debate of classical antiquity, the later-termed `liberal arts' were skills that were thought to foster virtue, knowledge, and articulation. Such skills included grammar, rhetoric, and logic, known in medieval times as the Trivium, three of the foundations that would form the basis for the Humanities. During the era of the medieval church, the Trivium was expanded to include the natural sciences, incorporating arithmetic, geometry, music and astronomy. This new synthesis of the disciplines was referred to as the Quadrivium. The term Humanities comes from the Latin \textit{humanus}, meaning human, cultured and refined, and originates with the Renaissance `humanists' who redefined the traditional subjects of the Trivium as the \textit{Studia Humanitatis}, removing logic and then adding to their newly defined corpus such disciplines as Greek studies (to complement the Latin grammar), history, poetry, and ethics. As such, the Humanities were born.

\subsubsection{Two Cultures? -- The Historical Splitting of the Arts and Sciences}

The Yale Report of 1828 railed against a gradual departure in universities from the classical liberal arts education of the core subjects contained in the Trivium and Quadrivium towards the ever-encroaching elective-based curriculum.
The report was significant in two ways: first, it was seen by many as a decades-long setback in the advancement of education options, and second, it stands as a historical landmark in the conversation surrounding the dissolution of the classical liberal arts education.

One of the original and often quoted discourses pertaining to the split in education is Cardinal Newman's \textit{The Idea of a University}. Newman wrote and lectured extensively in the 1850s on the nature of the university, focusing on the value of a liberal education. His belief was that knowledge was universal and that truth was anything but relative. Newman claimed that truth was specific and attainable through reason and intellect. He is often cited as the original proponent of a generalist education as opposed to a vocational education.

C. P. Snow's famous 1959 lecture and subsequent book entitled \textit{Two Cultures} stands as the quintessential expression of the split between the Humanities and the Sciences, and is often quoted as the first modern critique of the split between the disciplines, positing the divide as a regrettable loss to humanity and knowledge. Snow's work became a major catalyst towards the ``Science Wars'' of the 1990s, an epistemological debate between postmodernist thinking and science that polarized knowledge into objectivist and subjectivist corners, extolling the values of one epistemological view over the other. The debate has resurfaced in recent years as a struggle to unite the so-called `two cultures', though differing views on the value of such an endeavour surface in both the academy and society in general.

\subsubsection{Digital Humanities and the Cuts}

The \textit{Humanist} listserv is likely well known to many in this room today. In operation since 1987, it is described as ``an international online seminar devoted to all aspects of the digital humanities.'' On October 23rd, 2010, Andrew Prescott, Director of Research at the Humanities Advanced Technology and Information Institute at the University of Glasgow, posted a message to the discussion group with the subject heading `Digital Humanities and the Cuts'. He begins his communique:

``Dear Willard,

I am surprised that we have not so far had any discussions on Humanist of the devastating effect that the current financial crisis will have on the study of the arts and humanities internationally.''

Prescott goes on to describe the situation in Britain, where dramatic cuts to higher education were resulting in the slashing of state funding for the teaching of arts, humanities and social sciences. Without a measurable economic value, the humanities were under attack -- not only universities but service providers such as the national museums and libraries as well. He suggests a silver lining may be the survival of the British Arts and Humanities Research Council (AHRC), as in spite of the cuts, funding for scientific research was to be protected. This council, he states, would likely prioritize the digital humanities, which could act as a go-between for arts and humanities faculty and their scientific colleagues. He then calls this hopeful prospect into question:

``But what will be the value of this if the wider study of arts and humanities has been devastated?... Digital humanities cannot thrive if the study of humanities more widely is under attack.''
Also citing the recent closure of the Italian, French, Russian, Classics and Theatre Programmes at SUNY Albany in the USA, Prescott then posits two possible responses to this international crisis:

(1) demonstrate a financial value of the humanities; or

(2) follow the argument of Stanley Fish, writing in the New York Times, and ``drop the deferential pose''.

As Fish writes:

``Leave off being a petitioner and ask some pointed questions: Do you know what a university is, and if you don't, don't you think you should, since you're making its funding decisions? Do you want a university -- an institution that takes its place in a tradition dating back centuries -- or do you want something else, a trade school perhaps? (Nothing wrong with that.) And if you do want a university, are you willing to pay for it, which means not confusing it with a profit center? And if you don't want a university, will you fess up and tell the citizens of the state that you're abandoning the academic enterprise, or will you keep on mouthing the pieties while withholding the funds?''

Prescott finishes his piece with a call to arms for digital humanists -- researchers holding a diversity of skill sets and perhaps at an advantage to support such an endeavour. And the responses came quickly and passionately. Researchers from the US, UK and Italy wrote about the peril of their own funding experiences. Some wrote to assert the value of humanities study and the resulting skills not ``easily quantifiable'' (James Rovira, October 29, 2010). Some took the opportunity to vent their frustrations and worries; one respondent wrote: ``academia as a whole is pretty thoroughly sold out to the bean counters''; another: ``I fear the worst case scenario will be the most likely one: Most people will go to university to do a vocational degree in the vain hope this is how they get a job, whilst the Humanities and Social Sciences wither on the vine until they become something only rich, privileged people do.'' Some offered suggestions, such as taking on the expertise of marketing departments. One such suggestion took the form of a mock advertisement:

``Modern Science brought us:

Mustard Gas -- death toll

Zyklon-B -- death toll

Atomic Weapons -- death toll

Weapons Production -- death toll (possibly with an image of an AK-47)

Cut to 3-part screen: teaching of the Koran, the Torah, and the Bible

Teaching people not to kill -- Priceless

Support the humanities.''

Mostly respondents agreed with the call to arms. On Monday October 25th, Alan Liu of the University of California, Santa Barbara wrote the first post alluding to what would become an international, interdisciplinary grassroots organization of digital humanists working in defense of the humanities. This post first outlined the reasons to act:

(slide?)

\begin{enumerate}
\item If the humanities are in trouble,
\item If the digital humanities now have a special potential and responsibility to represent the humanities
\begin{enumerate}
\item because the digital humanities are affecting an ever larger arc of the humanities,
\item because the (modest) ability of the digital humanities to gain funding shows that they have the potential to seem relevant to administrators, government agencies, and possibly even legislators who otherwise have already dismissed the humanities as yesterday's news,
\item because the digital humanities contribute to the advancement and deployment of technologies that link the humanities with other disciplines (including some STEM fields that need the participation of humanists to pursue interdisciplinary grants) [i.e., the digital humanities have true interdisciplinary potential],
\item and because the digital humanities have the means to communicate quickly and directly to the public in ways that short-circuit the traditional, ponderous levers of university and governmental action,
\end{enumerate}
\end{enumerate}

He then listed opportunities for action:

\begin{enumerate}
\setcounter{enumi}{2}
\item The natural course of action for interested members of the Humanist list at this extraordinary time is to start a site that, without necessarily engaging in direct political advocacy,... [does] the following:
\begin{enumerate}
\item Advocates for the public value of humanities discoveries and projects... We could also likely recruit advocacy statements for the humanities,
\item Provides tools, templates, media expertise, social-network methods, examples, etc., for the local or national networks of humanities educators to make their case before the public.
\end{enumerate}
\end{enumerate}

From this rather exploratory posting, scholars from the U.S., Canada, and U.K. collaborated behind the scenes and rose to the call originally presented by Prescott -- on Friday November 19, 2010, the launch of the 4Humanities initiative, advocating the humanities, powered by digital humanists, was announced.

\hypertarget{The_4Humanities_Initiative}{}
\subsection{The 4Humanities Initiative}

4Humanities is a collaboration of scholars in the field of digital humanities whose goal is to advocate for the future of the humanities in a society where their importance is increasingly neglected. Recognizing the advantageous position of digital technologies in the humanities, 4Humanities capitalizes on the opportunity to further the cause of the humanities through popular and widely distributed information streams. The initiative draws on digital humanities technologies and experience, comprising an international collective. Using multimedia and scholarly experience, 4Humanities advocates on numerous levels, both through the 4Humanities online platform and through a collection of networked initiatives of similar design, such as blogs, newsletters, audio-visual formats, and more. The resources it employs include digital technologies grounded in best practices.

Due to the failing support of government and private funding for the traditional humanities, as well as a general societal attitude of apathy towards the humanities, those concerned with their survival and with an understanding of their value to society have recognized a need for active intervention. As the humanities play an important role in all sectors of society, including those portions of society that would see their demise, those concerned have begun to collaborate in an effort to spread an understanding of their importance, whether for business initiatives, scientific endeavors, or just a basic understanding of human nature as represented through culture and history. 4Humanities understands that society will be better equipped with a humanities background, and likewise, worse off without it.

The 4Humanities initiative employs multiple mediums in order to reach the widest possible audience, using every means from newspaper to social media.
As stated in their Mission, found on the website \href{http://4humanities.org/mission/}{http://4Humanities.org}, 4Humanities:

``is both a platform and a resource for humanities advocacy. As a platform 4Humanities stages the efforts of humanities advocates to reach out to the public. We are a combination newspaper, magazine, channel, blog, wiki, and social network. We solicit well-reasoned or creative demonstrations, examples, testimonials, arguments, opinion pieces, open letters, press releases, print posters, video `advertisements,' write-in campaigns, social-media campaigns, short films, and other innovative forms of humanities advocacy, along with accessibly-written scholarly works grounding the whole in research or reflection about the state of the humanities.''

4Humanities is an ongoing and evolving project, originally founded in 2010 by scholars from Canada, the United States, Australia, and the U.K. With increasing interest in the initiative and an ever-growing partnership of digital humanities communities from around the world, 4Humanities is establishing itself as a critical and authoritative center in digital activism for the value of the traditional humanities. Already partnered with groups such as the Alliance of Digital Humanities Organizations, centerNet, HASTAC, and SDH/SEMI, 4Humanities is quickly rising to prominence on a global scale.

For more information about 4Humanities, contact any of the 4Humanities coordinators: Christine Henseler, Alan Liu, Geoffrey Rockwell, St\'efan Sinclair, Melissa Terras. Contact: ayliu@english.ucsb.edu. \href{/index.php/CIRCA:4Humanities}{Back to 4Humanities}

\end{document}
2020-08-07 13:45:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21331633627414703, "perplexity": 4770.5371249503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737178.6/warc/CC-MAIN-20200807113613-20200807143613-00252.warc.gz"}
https://www.mariakzurek.com/tags/proton-spin/
# Proton Spin

## Studying the proton spin structure with STAR

How does the spin of a nucleon arise from the spins and orbital angular momenta of its quarks and gluons? This fundamental question in hadron physics remains only partially answered. Experiments with polarized proton-proton collisions can bring us closer to a comprehensive understanding of the internal spin structure of matter.
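One standard way to make the question precise is the Jaffe-Manohar spin sum rule, which decomposes the nucleon's total spin of $\frac{1}{2}$ into four contributions:

$\frac{1}{2} = \frac{1}{2}\Delta\Sigma + \Delta G + L_q + L_g$

where $\Delta\Sigma$ is the quark spin contribution, $\Delta G$ the gluon spin contribution, and $L_q$, $L_g$ the quark and gluon orbital angular momenta. Polarized proton-proton collisions at STAR constrain $\Delta G$ in particular.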
2020-02-27 07:07:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401082992553711, "perplexity": 928.4750227647562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146665.7/warc/CC-MAIN-20200227063824-20200227093824-00273.warc.gz"}
https://mathematica.stackexchange.com/questions/216826/orient-a-knot-diagram
# Orient a Knot Diagram

I need some help orienting a knot diagram. For example, I have the diagram

KnotData["Trefoil", "KnotDiagram"]

How can I orient the diagram? I have tried to find a command in Wolfram that assigns an orientation to the knot, but I still can't find it; the closest I have found is the DirectedEdge command. Does anyone know how I could orient the diagram of a knot, or can you give me an idea of which commands to use?

• do you get what you need using KnotData["Trefoil", "KnotDiagram"] /. Line -> Arrow and/or KnotData["Trefoil", "KnotDiagram"] /. Line -> (Arrow@*Reverse)? – kglr Mar 22 '20 at 20:32

You can use ReplaceAll to replace Lines with Arrows:

KnotData["Trefoil", "KnotDiagram"] /. Line -> Arrow

KnotData["Trefoil", "KnotDiagram"] /. Line -> (Arrow @* Reverse)
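If the arrowheads at the segment ends are hard to see, a variation of the same idea (a sketch, assuming the diagram is built from Line primitives, as the answer above relies on) places arrowheads partway along each strand instead:

(* replace each Line with an Arrow carrying mid-curve arrowheads *)
KnotData["Trefoil", "KnotDiagram"] /.
  Line[pts_] :> {Arrowheads[{{0.04, 0.25}, {0.04, 0.75}}], Arrow[pts]}

Here Arrowheads[{{size, pos}}] draws an arrowhead of the given size at fractional position pos along the curve, so the orientation is visible in the middle of the diagram rather than only at the ends of each segment.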
2021-08-04 19:39:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2541096806526184, "perplexity": 2118.8994851096913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154897.82/warc/CC-MAIN-20210804174229-20210804204229-00101.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-cumulative-test-prep-gridded-response-page-530/27
## Algebra 1

$10\frac{1}{2}$

To find this answer, first substitute $y=3\frac{1}{2}$ into the equation $xy=0$. Since $y\neq 0$, the product $xy$ can only equal zero if $x=0$. Substituting both values into the given expression then produces $10\frac{1}{2}$.
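Written out, the solving step is (the expression itself is not reproduced in this excerpt, so only this step is shown):

$x \cdot 3\frac{1}{2} = 0 \implies x = 0$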
2018-08-14 16:39:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9152477979660034, "perplexity": 182.6700949417481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209165.16/warc/CC-MAIN-20180814150733-20180814170733-00104.warc.gz"}