On the Relation between Curvature, Diameter, and Volume of a Complete Riemannian Manifold
Abstract
In this note, we prove that if
N is a compact totally geodesic submanifold of a complete Riemannian manifold (M, g) whose sectional curvature K satisfies K ≥ k > 0, then \(d(m,N) \leqslant \frac{\pi }{{2\sqrt k }}\) for any point m ∈ M. In the case where dim M = 2, the Gaussian curvature K satisfies K ≥ k ≥ 0, and N = γ is a closed geodesic of length l, we get Vol(M, g) ≤ \(\frac{{2l}}{{\sqrt k }}\) if k ≠ 0 and Vol(M, g) ≤ 2l diam(M) if k = 0.
English version (Springer): Ukrainian Mathematical Journal 56 (2004), no. 11, pp. 1873-1883.
Citation Example: Nguyen Doan Tuan, Si Duc Quang. On the Relation between Curvature, Diameter, and Volume of a Complete Riemannian Manifold // Ukr. Mat. Zh. - 2004. - 56, № 11. - pp. 1576–1583.
Full text
|
LaTeX:Symbols
This article will provide a short list of commonly used LaTeX symbols.
Contents: Common Symbols, Operators, Relations, Finding Other Symbols
Here are some external resources for finding less commonly used symbols:
Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use LaTeX on the web) maintains a list of supported commands. The Comprehensive LaTeX Symbol List.
Relations
Commands: \le, \ge, \neq, \sim, \ll, \gg, \doteq, \simeq, \subset, \supset, \approx, \asymp, \subseteq, \supseteq, \cong, \smile, \sqsubset, \sqsupset, \equiv, \frown, \sqsubseteq, \sqsupseteq, \propto, \bowtie, \in, \ni, \prec, \succ, \vdash, \dashv, \preceq, \succeq, \models, \perp, \parallel, \mid, \bumpeq
Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a few examples, plus a few other negations; it works for many of the others as well.
Commands: \nmid, \nleq, \ngeq, \nsim, \ncong, \nparallel, \not<, \not>, \not=, \not\le, \not\ge, \not\sim, \not\approx, \not\cong, \not\equiv, \not\parallel, \nless, \ngtr, \lneq, \gneq, \lnsim, \lneqq, \gneqq
To use other relations not listed here, such as =, >, and <, in LaTeX, you may just use the symbols on your keyboard.
Greek Letters
Commands: \alpha, \beta, \gamma, \delta, \epsilon, \varepsilon, \zeta, \eta, \theta, \vartheta, \iota, \kappa, \lambda, \mu, \nu, \xi, \pi, \varpi, \rho, \varrho, \sigma, \varsigma, \tau, \upsilon, \phi, \varphi, \chi, \psi, \omega
Commands: \Gamma, \Delta, \Theta, \Lambda, \Xi, \Pi, \Sigma, \Upsilon, \Phi, \Psi, \Omega
Arrows
Commands: \gets, \to, \leftarrow, \Leftarrow, \rightarrow, \Rightarrow, \leftrightarrow, \Leftrightarrow, \mapsto, \hookleftarrow, \leftharpoonup, \leftharpoondown, \rightleftharpoons, \longleftarrow, \Longleftarrow, \longrightarrow, \Longrightarrow, \longleftrightarrow, \Longleftrightarrow, \longmapsto, \hookrightarrow, \rightharpoonup, \rightharpoondown, \leadsto, \uparrow, \Uparrow, \downarrow, \Downarrow, \updownarrow, \Updownarrow, \nearrow, \searrow, \swarrow, \nwarrow
(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)
Dots
Commands: \cdot, \vdots, \dots, \ddots, \cdots, \iddots
Accents
Commands: \hat{x}, \check{x}, \dot{x}, \breve{x}, \acute{x}, \ddot{x}, \grave{x}, \tilde{x}, \mathring{x}, \bar{x}, \vec{x}
When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents:
Commands: \vec{\jmath}, \tilde{\imath}
\tilde and \hat have wide versions that allow you to accent an expression:
Commands: \widehat{7+x}, \widetilde{abc}
Command Symbols
Some symbols are used in commands so they need to be treated in a special way.
Commands: \textdollar or \$, \&, \%, \#, \_, \{, \}, \backslash
(Warning: using $ directly for the dollar sign can produce incorrect output. This is a bug as far as we know. Depending on the version of LaTeX, this is not always a problem.)
European Language Symbols
Commands: {\oe}, {\ae}, {\o}, {\OE}, {\AE}, {\AA}, {\O}, {\l}, {\ss}, !`, {\L}, {\SS}
Bracketing Symbols
In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX; type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands:
Commands: \{, \}, \|, \backslash, \lfloor, \rfloor, \lceil, \rceil, \langle, \rangle
You might notice that if you use any of these to typeset an expression that is vertically large, like
(\frac{a}{x} )^2
the parentheses don't come out the right size:
If we put \left and \right before the relevant parentheses, we get a prettier expression:
\left(\frac{a}{x} \right)^2
gives
\left and \right can also be used to resize the following symbols:
Commands: \uparrow, \downarrow, \updownarrow, \Uparrow, \Downarrow, \Updownarrow
Multi-Size Symbols
Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation}, \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes.
In each of the following, the two images show the symbol in display mode, then in inline mode.
Commands: \sum, \int, \oint, \prod, \coprod, \bigcap, \bigcup, \bigsqcup, \bigvee, \bigwedge, \bigodot, \bigotimes, \bigoplus, \biguplus
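As a minimal self-contained illustration of the two modes (my own example, not from the original list), the same summation can be typeset both ways:

```latex
\documentclass{article}
\begin{document}
% Inline math: the limits sit beside the summation sign.
The sum $\sum_{k=1}^{n} k$ grows quadratically.

% Display math: the limits move above and below the summation sign.
\[
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}
\]
\end{document}
```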
|
SolidsWW Flash Applet Sample Problem 3
Revision as of 21:33, 9 August 2011
Flash Applets embedded in WeBWorK questions: solidsWW Example, Sample Problem 3 with solidsWW.swf embedded
A standard WeBWorK PG file with an embedded applet has six sections:
A tagging and description section, that describes the problem for future users and authors,
An initialization section, that loads required macros for the problem,
A problem set-up section, that sets variables specific to the problem,
An Applet link section, that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet),
A text section, that gives the text that is shown to the student, and
An answer and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete.
The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above. A screenshot of the applet embedded in this WeBWorK problem is shown below:
There are other example problems using this applet: solidsWW Flash Applet Sample Problem 1, solidsWW Flash Applet Sample Problem 2.
And other problems using applets: Derivative Graph Matching Flash Applet Sample Problem, USub Applet Sample Problem, trigwidget Applet Sample Problem, solidsWW Flash Applet Sample Problem 1, GraphLimit Flash Applet Sample Problem 2.
Other useful links: Flash Applets Tutorial, Things to consider in developing WeBWorK problems with embedded Flash applets.
PG problem file Explanation
##DESCRIPTION
## Solids of Revolution
##ENDDESCRIPTION
##KEYWORDS('Solids of Revolution')
## DBsubject('Calculus')
## DBchapter('Applications of Integration')
## DBsection('Solids of Revolution')
## Date('7/31/2011')
## Author('Barbara Margolius')
## Institution('Cleveland State University')
## TitleText1('')
## EditionText1('2011')
## AuthorText1('')
## Section1('')
## Problem1('')
##########################################
# This work is supported in part by the
# National Science Foundation
# under the grant DUE-0941388.
##########################################
This is the tagging and description section of the problem file.
The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code.
All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT();
loadMacros(
  "PGstandard.pl",
  "AppletObjects.pl",
  "MathObjects.pl",
);
This is the initialization section of the problem file.
The loadMacros call loads the macros required for the problem; AppletObjects.pl provides the routines used to embed the applet.
TEXT(beginproblem());
$showPartialCorrectAnswers = 1;
Context("Numeric");
$a = 2*random(2,6,1);
$b = 2*$a;
$xy = 'x';
$func1 = "x";
$func2 = "2*$a-x";
$xmax = Compute("2*$a");
$shapeType = 'poly';
$sides = random(3,8,1);
$correctAnswer = Compute("2*$a^3*$sides*tan(pi/$sides)");
This is the problem set-up section of the problem file.
The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y. We set $xy to 'x' to indicate that the function is given in terms of x.
#########################################
# How to use the solidsWW applet.
# Purpose: The purpose of this applet
#   is to help with visualization of solids.
# Use of applet: The applet state
#   consists of the following fields:
#   xmax - the maximum x-value;
#     ymax is 6/5ths of xmax; the minima
#     are both zero.
#   captiontxt - the initial text in
#     the info box in the applet
#   shapeType - circle, ellipse,
#     poly, rectangle
#   piece: consisting of func and cut;
#     this is a function defined piecewise.
#     func is a string for the function
#     and cut is the right endpoint
#     of the interval over which it is
#     defined; there can be any number
#     of pieces
#########################################
# What does the applet do?
# The applet draws three graphs:
#   a solid in 3d that the student can
#     rotate with the mouse,
#   the cross-section of the solid
#     (you'll probably want this to
#     be a circle),
#   the radius of the solid, which
#     varies with the height
#########################################
This is the Applet link section of the problem file.
Those portions of the code that begin the line with # are comments.
###################################
# Create link to applet
###################################
$appletName = "solidsWW";
$applet = FlashApplet(
  codebase => findAppletCodebase("$appletName.swf"),
  appletName => $appletName,
  appletId => $appletName,
  setStateAlias => 'setXML',
  getStateAlias => 'getXML',
  setConfigAlias => 'setConfig',
  maxInitializationAttempts => 10,
  height => '550',
  width => '595',
  bgcolor => '#e8e8e8',
  debugMode => 0,
  submitActionScript => ''
);
You must include the section that follows, which configures the applet and sets its initial state.
###################################
# Configure applet
###################################
$applet->configuration(qq{<xml><plot>
  <xy>$xy</xy>
  <captiontxt>'Compute the volume of the figure shown.'</captiontxt>
  <shape shapeType='$shapeType' sides='$sides' ratio='1.5'/>
  <xmax>$xmax</xmax>
  <theColor>0x0066cc</theColor>
  <profile>
    <piece func='$func1' cut='$a'/>
    <piece func='$func2' cut='$xmax'/>
  </profile>
</plot></xml>});
$applet->initialState(qq{<xml><plot>
  <xy>$xy</xy>
  <captiontxt>'Compute the volume of the figure shown.'</captiontxt>
  <shape shapeType='$shapeType' sides='$sides' ratio='1.5'/>
  <xmax>$xmax</xmax>
  <theColor>0x0066cc</theColor>
  <profile>
    <piece func='$func1' cut='$a'/>
    <piece func='$func2' cut='$xmax'/>
  </profile>
</plot></xml>});
TEXT( MODES(TeX=>'object code',
  HTML=>$applet->insertAll(
    debug=>0,
    includeAnswerBox=>0,
  )));
The lines $applet->configuration(...) and $applet->initialState(...) set the configuration and the initial state of the applet.
The configuration of the applet is done in XML. The argument of the function is set to the value held in the variable $xy.
The code TEXT( MODES(...) ) with $applet->insertAll inserts the applet into the problem page.
Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission.
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT'));
<script>
if (navigator.appVersion.indexOf("MSIE") > 0) {
  document.write("<div width='3in' align='center' style='background:yellow'>You seem to be using Internet Explorer.<br/>It is recommended that another browser be used to view this page.</div>");
}
</script>
END_TEXT
The text between the TEXT(MODES(...)) line and END_TEXT checks whether the student is using Internet Explorer and, if so, displays a warning recommending another browser.
BEGIN_TEXT $BR $BR Find the volume of the figure shown. The cross-section of the figure is a regular $sides-sided polygon. The area of the polygon can be computed as a function of the length of a line segment from the center of the $sides-sided polygon to the midpoint of one of its sides and is given by \($sides x^2\tan\left(\frac{\pi}{$sides}\right)\) where \(x\) is the length of the bisector of one of the sides (shown in black on the cross-section graph). A formula similar to the cylindrical shells formula will then provide the volume of the figure. Simply replace \(\pi\) in the formula \[V=2\pi\int x f(x) dx\] with \($sides \tan\left(\frac{\pi}{$sides}\right)\) to find the volume of the solid shown where for this solid \[f(x)=\begin{cases}x&x\le $a\\ $b-x&$a<x\le $b\end{cases}\] for \(x=0\) to \($b\). \{ans_rule(35) \} $BR END_TEXT Context()->normalStrings;
This is the text section of the problem file.
####################################
#
# Answers
#
## answer evaluators
ANS( $correctAnswer->cmp() );
ENDDOCUMENT();
This is the answer and solution section of the problem file.
The ANS( $correctAnswer->cmp() ) call compares the student's answer against the value of $correctAnswer computed in the problem set-up section.
|
I suspect there is confusion about understanding the problem. The problem is really asking you whether you are allowed to conclude that $(a_n)$ is Cauchy if
all you know is the inequality as stated.
Thus, in order to solve (a) you must find a
particular concrete sequence which satisfies the condition but is not Cauchy. In order to solve (b) you must show that every sequence satisfying the condition is Cauchy. So, here are the solutions:
(a) We must give a sequence $(a_n)$ such that $|a_{n+1} - a_n| < 1/n$
and the sequence $(a_n)$ is not Cauchy. Here is one: define $a_n = 1 + 1/2 + \cdots + 1/n$. Then for all $n$ we have $$|a_{n+1} - a_n| = 1/(n+1) < 1/n,$$ however, as is well known, the sequence $(a_n)$ of partial sums of the harmonic series diverges.
NB: There are sequences which satisfy $|a_{n+1} - a_n| < 1/n$ but are Cauchy. For instance, the sequence $a_n = 0$ is Cauchy and $|a_{n+1} - a_n| = |0 - 0| = 0 < 1/n$.
(b) We must show that
every sequence $(a_n)$ which satisfies $|a_{n+1} - a_n| < 1/2^n$ is Cauchy. Proof: suppose $(a_n)$ is a sequence such that $|a_{n+1} - a_n| < 1/2^n$ for all $n$. To prove that $(a_n)$ is Cauchy, consider any $\epsilon > 0$. We must find $N$ (depending on $\epsilon$ only) such that $|a_m - a_n| < \epsilon$ for all $n, m$ satisfying $m \geq n \geq N$. We claim that any integer $N \geq 1 + \log_2(1/\epsilon)$ is good. As in @HeeKwonLee's answer, we may compute that for all $m \geq n \geq N$ we have$$|a_m - a_n| \leq \sum_{k=n}^{m-1} |a_{k+1} - a_k| < \sum_{k=n}^{\infty} \frac{1}{2^k} = \frac{1}{2^{n-1}} \leq \frac{1}{2^{N-1}} \leq \frac{1}{2^{\log_2(1/\epsilon)}} = \epsilon.$$
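For readers who like to experiment, here is a small pure-Python check of both parts (the specific sequences are my own illustrative choices, not part of the original question):

```python
import math

# Part (a): the harmonic partial sums a_n = 1 + 1/2 + ... + 1/n satisfy
# |a_{n+1} - a_n| = 1/(n+1) < 1/n, yet the sequence is not Cauchy:
# a_{2n} - a_n = sum_{k=n+1}^{2n} 1/k >= n * (1/(2n)) = 1/2 for every n.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    gap = harmonic(2 * n) - harmonic(n)
    assert gap >= 0.5  # the Cauchy criterion fails with epsilon = 1/2

# Part (b): with |a_{n+1} - a_n| < 1/2^n the tail is bounded by a geometric
# series, so |a_m - a_n| < 1/2^(n-1) can be made arbitrarily small.
def b(n):  # an example sequence: partial sums of 1/2^k
    return sum(1.0 / 2**k for k in range(1, n + 1))

eps = 1e-6
N = 1 + math.ceil(math.log2(1 / eps))  # the N from the proof
assert abs(b(N + 50) - b(N)) < eps
```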
|
For predicted labels $\hat{y}$ and true labels $y\in\{0,1\}$, the confusion matrix is given by
\begin{array}{c|c:c|c}
 & y=0 & y=1 & \\ \hline
\hat{y}=0 & \mathrm{TN} & \mathrm{FN} & \hat{\mathrm{N}} \\ \hdashline
\hat{y}=1 & \mathrm{FP} & \mathrm{TP} & \hat{\mathrm{P}} \\ \hline
 & \mathrm{N} & \mathrm{P} & (n_{\mathrm{obs}})
\end{array}
where the entries are counts, $\mathrm{N}$ = "Negative", $\mathrm{P}$ = "Positive", $\mathrm{T}$ = "True", and $\mathrm{F}$ = "False".
The confusion matrix proper is contained within the solid-outlined box, to which I have added the column sums ($\mathrm{N}$, $\mathrm{P}$), the row sums ($\hat{\mathrm{N}}$, $\hat{\mathrm{P}}$), and the total sum ($n_{\mathrm{obs}}$ = number of paired observations).
The confusion matrix is essentially an empirical estimate of the joint distribution between $\hat{y}$ and $y$, i.e. when the entries are normalized by $n_{\mathrm{obs}}$ we get
\begin{array}{c|c:c|c}
 & y=0 & y=1 & \\ \hline
\hat{y}=0 & p[\sim\!\hat{y},\sim\!y] & p[\sim\!\hat{y},\phantom{\sim\!}y] & p[\sim\!\hat{y}] \\ \hdashline
\hat{y}=1 & p[\phantom{\sim}\,\hat{y},\sim\!y] & p[\phantom{\sim}\,\hat{y},\phantom{\sim\!}y] & p[\phantom{\sim}\,\hat{y}] \\ \hline
 & p[\phantom{\sim\hat{y}}\sim\!y] & p[\phantom{\sim\hat{y},,}\,y] & (1)
\end{array}
where I have switched to a Boolean-style notation with $\sim$ = "not".
In the
margins of the table (outside the box), the normalized row and column sums are now the marginal probabilities.
Within this framework, many of the standard confusion matrix based metrics correspond directly to the various conditional probabilities of the above joint distribution.
If we condition on $\boldsymbol{y}$ the table becomes
\begin{array}{|c:c|} \hline
p[\sim\!\hat{y}\mid\sim\!y] & p[\sim\!\hat{y}\mid\phantom{\sim\!}y] \\ \hdashline
p[\phantom{\sim}\,\hat{y}\mid\sim\!y] & p[\phantom{\sim}\,\hat{y}\mid\phantom{\sim\!}y] \\ \hline
\end{array}
where the entries correspond to the metrics\begin{array}{|c:c|}\hline\text{specificity}&\text{miss rate} \\\hdashline\text{fall-out}&\text{sensitivity (recall)}\\\hline\end{array} (Note that these metrics can also be referred to by appending "rate" to the corresponding name from the confusion matrix.)
Alternatively, if we condition on $\boldsymbol{\hat{y}}$ the table becomes
\begin{array}{|c:c|} \hline
p[\sim\!y\mid\sim\!\hat{y}] & p[\phantom{\sim\!}y\mid\sim\!\hat{y}] \\ \hdashline
p[\sim\!y\mid\phantom{\sim}\hat{y}] & p[\phantom{\sim\!}y\mid\phantom{\sim}\hat{y}] \\ \hline
\end{array}
where the entries correspond to the metrics\begin{array}{|c:c|}\hline\text{negative predictive value}&\text{false omission rate}^* \\\hdashline\text{false discovery rate}&\text{positive predictive value (precision)}\\\hline\end{array}(*This one was not in Wikipedia except in their "big table". I was curious why it was the
only one of the conditional probabilities not given a special name.)
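To make the correspondence concrete, here is a small pure-Python sketch (the counts are made-up illustrative data; the variable names follow the tables above):

```python
# Hypothetical counts: TN, FN, FP, TP laid out as in the table above.
TN, FN, FP, TP = 50, 10, 5, 35
n_obs = TN + FN + FP + TP

# Column sums (true-label margins) and row sums (predicted-label margins).
N, P = TN + FP, FN + TP
N_hat, P_hat = TN + FN, FP + TP

# Conditioning on y (normalize each column) gives the first metric table.
specificity = TN / N          # p[~y_hat | ~y]
fall_out    = FP / N          # p[ y_hat | ~y]
miss_rate   = FN / P          # p[~y_hat |  y]
recall      = TP / P          # p[ y_hat |  y]  (sensitivity)

# Conditioning on y_hat (normalize each row) gives the second table.
npv       = TN / N_hat        # negative predictive value
f_o_rate  = FN / N_hat        # false omission rate
fdr       = FP / P_hat        # false discovery rate
precision = TP / P_hat        # positive predictive value

# Each conditional pair sums to 1 within its conditioning slice.
assert abs(specificity + fall_out - 1) < 1e-12
assert abs(miss_rate + recall - 1) < 1e-12
assert abs(npv + f_o_rate - 1) < 1e-12
assert abs(fdr + precision - 1) < 1e-12
```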
|
Above, we defined velocity as the derivative of position and acceleration as the derivative of velocity. Integration allows us to go the other way! As you learned in first semester calculus, integration allows us to generate an object's velocity as a function of time given its acceleration and an initial velocity. We can also find the object's position function by integrating the velocity and using an initial position.
Let's consider an example.
Example \(\PageIndex{8}\): Finding an object's velocity from its acceleration
An object has an acceleration over time given by \(\vecs a(t) = \sin t \,\hat{\mathbf i} - 2t \,\hat{\mathbf j}\), and its initial velocity was \(\vecs v(0) = 2 \,\hat{\mathbf i} + \,\hat{\mathbf j}\).
a. Find the object's velocity as a function of time.
b. Assuming the object was located at the point (2, 3, -1) when time \(t = 0\), determine the object's position function and find its location at time \(t = 3\) sec.
Solution
a. First, we find the velocity as the antiderivative of the acceleration.
\[\begin{align*} \vecs v(t) = \int \vecs a(t) \, dt &= \int \left( \sin t \,\hat{\mathbf i} - 2t \,\hat{\mathbf j}\right)\,dt \\
&= -\cos t \,\hat{\mathbf i} - t^2\,\hat{\mathbf j} + \vecs C_1 \end{align*}\]
Now we use the initial velocity to determine \(\vecs C_1\).
\[\begin{align*} \vecs v(0) = -\cos 0 \,\hat{\mathbf i} - (0)^2\,\hat{\mathbf j} + \vecs C_1 &= 2 \,\hat{\mathbf i} + \,\hat{\mathbf j} \\
-\hat{\mathbf i} + \vecs C_1 &= 2 \,\hat{\mathbf i} + \,\hat{\mathbf j} \\ \\ \text{And so} \quad \vecs C_1 &= 3 \,\hat{\mathbf i} + \,\hat{\mathbf j} \end{align*}\]
Incorporating this constant vector into our velocity function from above, we obtain the velocity describing this object's motion over time:
\[\vecs v(t) = \left( 3 -\cos t \right) \,\hat{\mathbf i} + \left(1 - t^2\right) \,\hat{\mathbf j}\]
b. Since the object's position is at the point (2, 3, -1) when time \(t = 0\), we know \(\vecs r(0) = 2 \,\hat{\mathbf i} + 3 \,\hat{\mathbf j} - \,\hat{\mathbf k}\).
Now, to determine the object's position function, we integrate its velocity.
\[\begin{align*} \vecs r(t) = \int \vecs v(t) \, dt &= \int \bigg[\left( 3 -\cos t \right) \,\hat{\mathbf i} + \left(1 - t^2\right) \,\hat{\mathbf j}\bigg]\,dt \\
&= \left( 3t -\sin t \right) \,\hat{\mathbf i} + \left(t - \frac{t^3}{3}\right) \,\hat{\mathbf j} + \vecs C_2 \end{align*}\]
Now we use the initial position to determine \(\vecs C_2\).
\[\begin{align*} \vecs r(0) = \left( 3(0) -\sin 0 \right) \,\hat{\mathbf i} + \left( 0 - \frac{(0)^3}{3}\right) \,\hat{\mathbf j} + \vecs C_2 &= 2 \,\hat{\mathbf i} + 3 \,\hat{\mathbf j} - \,\hat{\mathbf k}\\ \\ \text{And so} \quad \vecs C_2 &= 2 \,\hat{\mathbf i} + 3 \,\hat{\mathbf j} - \,\hat{\mathbf k} \end{align*}\]
Incorporating this constant vector into our position function from above, we obtain:
\[\vecs r(t) = \left( 2 + 3t -\sin t \right) \,\hat{\mathbf i} + \left(3 + t - \frac{t^3}{3}\right) \,\hat{\mathbf j} - \,\hat{\mathbf k} \]
To find the object's position at time \(t = 3\) seconds, we just evaluate this position function at \(t = 3\).
\[\begin{align*} \vecs r(3) &= \left( 2 + 3(3) -\sin 3 \right) \,\hat{\mathbf i} + \left(3 + 3 - \frac{(3)^3}{3}\right) \,\hat{\mathbf j} - \,\hat{\mathbf k} \\
&= \left( 11 -\sin 3 \right) \,\hat{\mathbf i} - 3 \,\hat{\mathbf j} - \,\hat{\mathbf k} \end{align*}\]
This position vector indicates that the object will be located at the point \( (11 -\sin 3, -3, -1) \) at time \(t = 3\) seconds.
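As a quick sanity check (a pure-Python sketch of my own, not part of the original example), we can integrate the acceleration numerically with small Euler steps and compare against the closed-form position at \(t = 3\):

```python
import math

# Closed-form answer from the example: r(3) = (11 - sin 3, -3, -1).
expected = (11 - math.sin(3.0), -3.0, -1.0)

# Numerically integrate a(t) = (sin t, -2t, 0) twice,
# starting from v(0) = (2, 1, 0) and r(0) = (2, 3, -1).
dt, t = 1e-5, 0.0
v = [2.0, 1.0, 0.0]
r = [2.0, 3.0, -1.0]
while t < 3.0 - 1e-12:
    a = (math.sin(t), -2.0 * t, 0.0)
    for i in range(3):
        r[i] += v[i] * dt          # update position with current velocity
        v[i] += a[i] * dt          # then update velocity
    t += dt

for got, want in zip(r, expected):
    assert abs(got - want) < 1e-2  # Euler error is small at this step size
```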
|
I am trying to understand the difference between the energy spectra of the analytical and the approximate solutions for a quantum well. The particle is inside a box with domain $\Omega=(0,1)\times(0,1)$. For this I have $\hbar = m = 1$, and the energy is given analytically by $E_{m,n}=\frac{\pi^2}{2}(n^2 + m^2)$.
My approximation is done using finite differences with a grid of $60\times 60$. The eigenvalues that I'm getting with the exact solution are always positive, whereas the eigenvalues I'm getting with the approximate solution are always negative.
I am not familiar with quantum mechanics. That said, can you help me understand this? What can be happening, is it a big error due to the approximation? Where or what can I think of to try to understand these results?
I hope the question and the problem are well stated if not just tell me, probably I missed some important data and/or assumption.
UPDATE
Here I paste the code used to do this little simulation
clc;
xO = 0;
xL = 1;
N = 60;
% First build the matrix for a 1D-mesh
h = (xL - xO) / (N-1);
H = diag( (-2/h^2)*ones(1,N), 0 ) + ...
    diag( (1/h^2)*ones(1,N-1), 1 ) + ...
    diag( (1/h^2)*ones(1,N-1), -1 );
% Using the tensor product build the matrix for the 2D-mesh
H = kron(H, eye(N)) + kron(eye(N), H);
H = (-1/2) .* H;
% Compute the eigenvalues E and eigenvectors Psi
[Pcomp, Ecomp] = eig(H);
fEexac = @(m,n) ((pi^2)/2) * (n^2 + m^2);
Eexac = [];
for n=1:N
    for m=1:N
        Eexac((n-1)*N+m,(n-1)*N+m) = fEexac(n,m);
    end
end
% Plot energy from analytical vs computed
figure(1); hold on;
plot(1:N^2, sort(diag(Eexac)), 'b', 1:N^2, sort(diag(Ecomp)), 'r');
% Plot discretized energy spectrum
figure(2); hold on;
plot(1:40, diag(Ecomp(1:40,1:40)), 'r');
% Compare lowest 300 eigenvalues (exact and computed)
figure(3); hold on;
x = 1:300;
plot(1:300, sort(diag(Eexac(1:300,1:300))), 'b', ...
     1:300, sort(diag(Ecomp(1:300,1:300))), 'r');
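For reference, here is a pure-Python sketch of the corresponding 1D kinetic operator $T=-\tfrac12\Delta_h$. It uses the interior-point convention $h=1/(n+1)$ with Dirichlet walls, which differs slightly from the posted $h=1/(N-1)$ grid; for this convention the discrete eigenpairs are known exactly, $v_k(j)=\sin(k\pi jh)$ with $E_k=(1-\cos(k\pi h))/h^2>0$, approaching the continuum $\pi^2k^2/2$ (so the lowest 2D energy $2E_1$ approaches $E_{1,1}=\pi^2$). A correctly assembled finite-difference Hamiltonian therefore has strictly positive eigenvalues:

```python
import math

n = 100                      # interior grid points
h = 1.0 / (n + 1)            # Dirichlet walls at x = 0 and x = 1

def apply_T(v):
    """Apply T = -(1/2) * discrete Laplacian to a vector (zero boundary)."""
    out = []
    for j in range(n):
        left  = v[j - 1] if j > 0 else 0.0
        right = v[j + 1] if j < n - 1 else 0.0
        out.append(-0.5 * (left - 2.0 * v[j] + right) / h**2)
    return out

for k in (1, 2, 3):
    vk = [math.sin(k * math.pi * (j + 1) * h) for j in range(n)]
    Ek = (1.0 - math.cos(k * math.pi * h)) / h**2   # exact FD eigenvalue
    Tv = apply_T(vk)
    # T vk = Ek vk holds exactly for the discrete operator.
    assert max(abs(Tv[j] - Ek * vk[j]) for j in range(n)) < 1e-8
    assert Ek > 0                                   # spectrum is positive
    # As h -> 0 the FD eigenvalue approaches the continuum pi^2 k^2 / 2.
    assert abs(Ek - (math.pi**2) * k**2 / 2) < 0.05 * k**2
```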
|
Sometimes, you may end up having to calculate the volume of shapes that have cylindrical, conical, or spherical shapes and rather than evaluating such triple integrals in Cartesian coordinates, you can simplify the integrals by transforming the coordinates to cylindrical or spherical coordinates. For this topic, we will learn how to do such transformations then evaluate the triple integrals.
Introduction
As you learned in Triple Integrals in Rectangular Coordinates, triple integrals have three components, traditionally called
x, y, and z. When transforming from Cartesian coordinates to cylindrical or spherical or vice versa, you must convert each component to their corresponding component in the other coordinate system.
There are three coordinate systems that we will be considering. The first is the traditional
x, y, and z system, also known as the Cartesian coordinate system; the other two are explored below.
Converting to Cylindrical Coordinates
The second set of coordinates is known as cylindrical coordinates. Working in cylindrical coordinates is essentially the same as working in polar coordinates in two dimensions, except we must account for the
z-component of the system. When transforming from Cartesian to cylindrical, x and y become their polar counterparts. Recall that \(x=r\cos\theta\), \(y=r\sin\theta\), \(r^2=x^2+y^2\), and \(\tan\theta=\dfrac{y}{x}\). Now, the conversion for z is simply \(z=z\). \(r\) and \(\theta\) create a plane parallel to the xy-plane, and adding the z-component simply gives the plane a "height".
Now, say we have the equation \(r=1\) for \(0\leq\theta<2\pi\). In two dimensions, that would simply give us a circle centered at \((0,0)\) with a radius of 1. By adding the
z-axis, the circle has a height of z, which gives it the shape of a cylinder, hence the name cylindrical coordinates.
As seen in Double Integrals in Polar Form, when converting a double integral from Cartesian to polar coordinates, the \(dA\) term, \(dx\,dy\) in Cartesian, gets converted to its polar equivalent.
\[\iint_{D}f(x,y)\, dx\,dy \Rightarrow \iint_{D}f(r\cos \theta ,r\sin \theta)\, r\,dr\,d\theta\]
The same conversion happens with triple integrals from Cartesian to cylindrical for the \(dV\) term except you must account for the z-axis with a \(dz\) term.
\[\iiint_{D}f(x,y,z) dxdydz \Rightarrow \iiint_{D}f(r\cos \theta ,r\sin \theta ,z) r\,dr\,d\theta \;dz\]
Example \(\PageIndex{1}\): Using Cylindrical Coordinates
Convert this triple integral into cylindrical coordinates and evaluate
\[\int_{-1}^{1}\int_{0}^{\sqrt{1-x^2}}\int_{0}^{y}x^2dz\; dy\; dx \nonumber\]
Solution
There are three steps that must be done in order to properly convert a triple integral into cylindrical coordinates.
First, we must convert the bounds from Cartesian to cylindrical. By looking at the order of integration, we know that the bounds really look like
\[\int_{x=-1}^{x=1}\int_{y=0}^{y=\sqrt{1-x^2}}\int_{z=0}^{z=y} \nonumber \]
Using the Cartesian to cylindrical conversions, we see that the new bounds are
\[\int_{x=-1}^{x=1}\int_{y=0}^{y=\sqrt{1-x^2}}\int_{z=0}^{z=y} \Rightarrow \int_{\theta=0}^{\theta=\pi}\int_{r=0}^{r=1}\int_{z=0}^{z=r\sin\theta} \nonumber \]
Next, we convert the integrand to its cylindrical equivalent
\[x^2 \Rightarrow r^2\cos^2\theta \nonumber \]
Thirdly, we convert the differentials at the end of the integral to their cylindrical equivalent being careful to denote the correct order of integration
\[dz\,dy\,dx \Rightarrow r\, dz\, dr\, d\theta \nonumber \]
Finally, we put it all together, and we have our newly cylindrically-converted integral
\[\int_{0}^{\pi}\int_{0}^{1}\int_{0}^{r \sin\theta}r^2\cos^2\theta r\,dz \,dr \, d\theta \nonumber \]
Now, we actually evaluate the integral
\[\begin{align} &\int_{0}^{\pi}\int_{0}^{1}\int_{0}^{r\sin\theta}r^3\cos^2\theta \,dz\, dr\, d\theta \nonumber \\ &= \int_{0}^{\pi}\int_{0}^{1} \left[r^3 \cos^2\theta \cdot z \right]_{z=0}^{z=r\sin\theta} dr\, d\theta \nonumber \\ &= \int_{0}^{\pi}\int_{0}^{1} r^4 \cos^2\theta \sin\theta \,dr\, d\theta \nonumber \\ &= \int_{0}^{\pi} \left[\dfrac{r^5}{5}\cos^2\theta \sin\theta \right]_{r=0}^{r=1} d\theta \nonumber \\ &= \dfrac{1}{5}\int_{0}^{\pi}\cos^2\theta \sin\theta \,d\theta \nonumber \end{align}\]
Using u-substitution, we find that the integrand \(\cos^2\theta \sin \theta\) integrates to
\[\dfrac{1}{5} \left[-\dfrac{1}{3}\cos^3\theta \right]_{\theta=0}^{\theta=\pi} \nonumber \]
Which evaluates to
\[\dfrac{1}{5} \left[-\dfrac{1}{3}\cos^3(\pi)+\dfrac{1}{3}\cos^3(0) \right] = \dfrac{2}{15} \nonumber \]
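As a quick numerical sanity check (a pure-Python midpoint-rule sketch, not part of the original solution), the converted integral does evaluate to \(2/15\); here the inner z-integration has already been carried out, leaving \(\int_{0}^{\pi}\int_{0}^{1} r^4\cos^2\theta\sin\theta \,dr\,d\theta\):

```python
import math

# Midpoint rule for I = ∫_0^π ∫_0^1 r^4 cos²θ sinθ dr dθ.
n = 400
h_t = math.pi / n
h_r = 1.0 / n
total = 0.0
for i in range(n):
    theta = (i + 0.5) * h_t
    w = math.cos(theta) ** 2 * math.sin(theta)
    for j in range(n):
        r = (j + 0.5) * h_r
        total += r**4 * w * h_r * h_t

assert abs(total - 2.0 / 15.0) < 1e-4   # 2/15 ≈ 0.13333
```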
Converting to Spherical Coordinates
Figure 1 shows a visual representation of spherical coordinates. We define \(\rho\) as the distance from the origin to point \(P\). The point \(P\) and the origin create a line segment we will call \(\bar{OP}\), \(O\) being the origin. \(\theta\) is the angle in the
x-y plane from the projection of \(\bar{OP}\), which is shown as \(\bar{OQ}\). \(\phi\) is the angle between the z-axis and \(\bar{OP}\).
The conversions of Cartesian to Spherical are as follows
Just as \(r=\sqrt{x^2+y^2}\), \(\rho=\sqrt{x^2+y^2+z^2}\), and as with cylindrical coordinates, \(\theta=\tan^{-1}\left(\dfrac{y}{x}\right)\).
As you can see from Figure 2, \(r=\rho\sin\phi\), and using this and other trigonometric relations visible here, we can find conversions for
x, y, and z. x and y look like their cylindrical counterparts; however, \(r\) is replaced with \(\rho\sin\phi\). So \(x=\rho\sin\phi\cos\theta\) and \(y=\rho\sin\phi\sin\theta\). Also, from the diagrams, we see that \(z=\rho\cos\phi\).
As for the \(dV\) term of a triple integral, when converted to spherical coordinates, it becomes \(dV=\rho^2 \sin\phi \,d\rho \,d\phi \,d\theta\).
Example \(\PageIndex{2}\): Using Spherical Coordinates
Find the volume of the region bounded below by \(\rho=\cos\phi\) and above by \(\rho=6\), where \(0\leq\phi\leq\frac{\pi}{2}\) and \(0\leq\theta\leq 2\pi\).
Solution
First we must set up an integral to calculate the volume:
\[V=\int_{\theta_0}^{\theta_1}\int_{\phi_0}^{\phi_1}\int_{\rho_0}^{\rho_1}dV\]
Now we replace the \(dV\) term and fill in the bounds of integration:
\[V=\int_{\theta_0=0}^{\theta_1=2\pi}\int_{\phi_0=0}^{\phi_1=\pi/2}\int_{\rho_0=\cos\phi}^{\rho_1=6}\rho^2 \sin\phi \,d\rho \,d\phi \,d\theta\]
From there we evaluate the integral:
\[\begin{align} V&=\dfrac{1}{3}\int_{0}^{2\pi}\int_{0}^{\pi/2} \left(216-\cos^3\phi \right) \sin \phi \,d\phi \,d\theta \\ &=\dfrac{1}{3}\int_{0}^{2\pi}\left[-216 \cos\phi+\dfrac{\cos^4\phi}{4}\right]^{\pi/2}_{0} d\theta \\ &=\dfrac{1}{3}\int_{0}^{2\pi} \left(216-\dfrac{1}{4} \right) d\theta \\ &=\dfrac{1}{3}\cdot\dfrac{863}{4}\cdot 2\pi \\ &=\dfrac{863\pi}{6} \end{align}\]
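A quick numerical check (a pure-Python sketch of my own: the θ-integral contributes a factor of \(2\pi\), the inner ρ-integral is done in closed form, and the φ-integral by the midpoint rule) gives \(V = 863\pi/6 \approx 451.77\):

```python
import math

# V = ∫_0^{2π} ∫_0^{π/2} ∫_{cosφ}^{6} ρ² sinφ dρ dφ dθ.
# The ρ-integral evaluates to (6³ - cos³φ)/3 = (216 - cos³φ)/3.
n = 100000
h = (math.pi / 2) / n
total = 0.0
for i in range(n):
    phi = (i + 0.5) * h
    total += (216.0 - math.cos(phi) ** 3) / 3.0 * math.sin(phi) * h
V = 2.0 * math.pi * total

assert abs(V - 863.0 * math.pi / 6.0) < 1e-3   # ≈ 451.77
```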
Example \(\PageIndex{3}\)
Michael wants to eat a bowl of Fruity Hoops Cereal. However, he needs to go to the store and get milk for his cereal, and he is unsure of how much milk to buy. He needs your help deciding the appropriate amount to purchase. The volume of his cereal bowl can be represented by the region bounded below by \(\rho=4\cos\phi\) and bounded above by \(z=4\). Using this information, find how much milk Michael will need to fill his cereal bowl. The units are in ounces.
First off, we want to draw a diagram, representing the situation, in order to assist us with choosing our bounds of integration.
Contributors Paul Salessi (UCD)
|
Nature's Ron Cowen reviewed a technical paper that is one month old: Writing and Deleting Single Magnetic Skyrmions (Niklas Romming and 7 co-authors from Hamburg). See also reviews in Gizmodo and those found via Google News. Thanks to Viktor K. for the link. Remotely related: sci-fi gets real: tech junkies should look at 27 science-fiction concepts that morphed into reality in 2012.
Skyrmions, certain topologically non-trivial solutions of non-linear sigma models first described by Tony Skyrme in the 1960s, may be thought of as tiny vortices of atoms. In this very recent breakthrough, Romming et al. became able to create and destroy them at will, so it is plausible that skyrmions may be used in future magnetic information-storage technologies.
I was in love with skyrmions decades before I knew their name.
It really began when I was 15. I was obsessively reading Albert Einstein's book "My World View" ("Mein Weltbild", in a Czech translation) that I had found somewhere in the bookshelves (I guess that it would originally belong to my paternal grandfather, a professional painter/artist and geometry teacher).
In this book, one that probably overlaps with "Ideas and Opinions" heavily, Einstein popularly presents his views and insights about relativity, religion, socialism, Jewish questions, Nazism, meanders, Max Planck, alleged incompleteness of quantum mechanics, and other things.
Einstein wrote many inspiring things, many things that looked deeply ethical, many political ideas I would later find myself in disagreement with, many ideas about physics that were right, and some ideas about physics that were wrong.
Nearly 25 years ago, I was only beginning to be exposed to quantum mechanics, and for a year I was an adherent of Einstein's dream to construct the unified field theory de facto as a classical field theory, if you allow me to use the standard terminology. After some months, I had to begin to steal ideas from proper quantum mechanics to explain the hydrogen atom, before I was forced to steal all of quantum mechanics, of course, but let me avoid the hydrogen atom here.
Quantum dynamics implies that some quantum numbers are discrete, but there are also other observables that have to be discrete in the real world (because they were observed as discrete!), although such a quantization rule seems hard to get in a classical field theory. In one of the essays, Einstein wrote something like this (using a modernized terminology):
Quantum mechanics is probably incomplete and a complete description should still be looked for. There is no proof that an old-fashioned, realist, classical theory may not account for the quantum phenomena. For example, the quantization of the electric charge could follow from a classical field theory. There could be a classical field theory that allows us to derive that whenever the charge density vanishes on the boundary of a region, the region contains a charge that is an integral multiple of the elementary charge.

I took that as homework and apparently found a solution. Imagine that at each point of the spacetime there is a field that takes values on a three-sphere. If \(\vartheta(x,y,z,t)=0\) corresponds to a conventionally preferred point of the sphere (the North Pole), in the same way that we know from the two-sphere, we may add a potential energy term to our action such as\[
S_{\rm pot}\sim - C \int \mathrm{d}^4 x\,\vartheta^2
\] that will place the value of \(\vartheta\) in the majority of the spacetime close to the value zero. However, in a limited three-dimensional region, the field \(\vartheta\) may probe all points of the target three-sphere. We may figure out that in those regions, the real space may be "wrapped" on the target space three-sphere.
The charge density may be calculated as the "solid angle" spanned by the infinitesimal region of space in the three-sphere. That means that the charge current is proportional to the Hodge dual of a Jacobian of a sort,\[
j^\kappa = \frac{e}{6}\cdot\frac{1}{2\pi^2} \varepsilon^{\kappa\lambda\mu\nu} \partial_\lambda V^a \partial_\mu V^b \partial_\nu V^c \varepsilon_{abcd} V^d
\] where \(V^a\) is the four-vector embedding the three-sphere pointer into a four-dimensional Euclidean space of a sort; we always have \(V^a V_a=1\). I hope that I inserted the right normalization factor above; \(1/6\) avoids the multiple counting over the permutations of \(a,b,c\) while the other factor divides by the "full solid hyperangle" i.e. the surface/volume of the unit three-sphere \(2\pi^2\) for the integral of \(j^0\) over the regions where something happens to be an integer multiple of \(e\).
This seemed like a cute idea. Later, I learned that the magnetic (monopole) charge density actually is represented by a similar topological trick. However, the electric charge is quantized for purely quantum mechanical reasons. Due to the quantization of energy in the quantum harmonic oscillators, one may only add energy to charged fields by creation operators whose electric charges are quantized. There's nothing wrong with this intrinsically quantum explanation of the electric charge.
In the 1990s, the discovery of dualities (and S-duality in particular) showed that these two constructions or explanations for the charge quantization are equivalent although the proof is in no way obvious.
In 1998, I still didn't know the word "skyrmion", although my adviser Tom Banks was telling me that I should find out what the word meant. ;-) But when Ori Ganor and I asked how the cylindrical M2-branes stretched between pairs of M5-branes are represented on the Coulomb branch of the 6-dimensional (2,0) theory, we found that they are represented by skyrmions, too. The fivebranes become knitted.
This 6-dimensional construction differs from the 4-dimensional construction above by some changes to the dimension only. First, the pointer field isn't labeling a three-sphere but a four-sphere. You may obtain the corresponding vector \(V^a\) from the 5-dimensional transverse Euclidean space as the separation of the corresponding points of the two M5-branes normalized so that it is a unit vector, i.e. as\[
V^a = \frac{\Phi^a_M - \Phi^a_N}{|\Phi_M -\Phi_N|}
\] where the index \(a=1,2,3,4,5\) labels the transverse dimensions to the M5-branes and \(M,N\) label the M5-branes themselves (Chan-Paton indices of a sort).
There's one more difference between the six-dimensional and four-dimensional case. The six-dimensional theory has five and not just four spatial dimensions. So the four-sphere may only be wrapped by four spatial dimensions and the solution remains constant in 1 remaining spatial dimension (plus 1 temporal dimension). That's why the resulting skyrmionic objects are strings rather than point-like objects. They become tensionless strings ("the" tensionless strings known in this theory) in the \(\Phi_M-\Phi_N\to 0\) limit where the Coulomb-branch-based description of the theory breaks down.
In 2000, Ken Intriligator used some nice anomaly considerations to derive structurally similar terms in the six-dimensional theory. I've tried to see that the equations are equivalent to the skyrmion-based ones but the two papers always seemed slightly inequivalent at the end.
In various effective descriptions of nuclear physics, one encounters nonlinear sigma-models, and the baryon number seems to be exactly given by the skyrmionic wrapping number. I guess that the detailed implementation of the nonlinear sigma-models is inequivalent in the condensed-matter setup of Romming et al., but the mathematics is going to be analogous. In the foreseeable future, this 50-year-old piece of mathematical physics, which has appeared in various places of real physics, may dramatically improve magnetic information storage systems.
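As a toy illustration of these winding numbers (my sketch, not from the paper): in the 2D magnetic-skyrmion case the topological charge is \(Q=\frac{1}{4\pi}\int \vec n\cdot(\partial_x \vec n\times\partial_y \vec n)\,dx\,dy\), and a hedgehog profile that flips the magnetization exactly once at the origin should give \(|Q|=1\). A quick numerical check:

```python
import numpy as np

# Toy check (my illustration): the winding number of a 2D hedgehog
# n : R^2 -> S^2 with n flipped at the origin should satisfy |Q| = 1.
L, N = 12.0, 481
xs = np.linspace(-L, L, N)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
r = np.hypot(X, Y)
angle = np.arctan2(Y, X)

f = np.pi * np.exp(-r)          # polar profile: f(0) = pi, f -> 0 at infinity
n = np.stack([np.sin(f) * np.cos(angle),
              np.sin(f) * np.sin(angle),
              np.cos(f)])       # unit vector field, shape (3, N, N)

dndx = np.gradient(n, h, axis=1)
dndy = np.gradient(n, h, axis=2)
density = np.einsum("iab,iab->ab", n, np.cross(dndx, dndy, axis=0))
Q = density.sum() * h * h / (4 * np.pi)
print(round(abs(Q), 3))   # close to 1
```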
|
The answer is quite simple.
The correlation matrix is defined thus:
Let $X = [x_1, x_2, ..., x_n]$ be the $m\times n$ data matrix: $m$ observations, $n$ variables.
Define $X_b= [\frac{(x_1-\mu_1 e)}{s_1}, \frac{(x_2-\mu_2 e)}{s_2}, \frac{(x_3-\mu_3 e)}{s_3}, \dots]$ as the matrix of normalized data, with $\mu_1$ the mean of variable 1, $\mu_2$ the mean of variable 2, etc., $s_1$ the standard deviation of variable 1, etc., and $e$ a vector of all 1s.
The correlation matrix is then
$$C=X_b' X_b$$
(up to the usual $\frac{1}{m-1}$ normalization, which does not affect definiteness).
A matrix $A$ is positive semi-definite if there is no vector $z$ such that $z' A z <0$.
Suppose $C$ is not positive semi-definite. Then there exists a vector $w$ such that $w' C w<0$.
However, $w' C w=w' X_b' X_b w=(X_b w)'(X_b w) = z_1^2+z_2^2+\cdots$, where $z=X_b w$; thus $w' C w$ is a sum of squares and therefore cannot be less than zero.
So not only the correlation matrix but any matrix $U$ which can be written in the form $V' V$ is positive semi-definite.
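A quick numerical illustration of the argument above (not part of the proof): the spectrum of a sample correlation matrix is indeed nonnegative.

```python
import numpy as np

# Numerical sanity check: the eigenvalues of C = X_b' X_b / m are all >= 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # m = 200 observations, n = 5 variables
Xb = (X - X.mean(axis=0)) / X.std(axis=0, ddof=0)  # normalized data
C = Xb.T @ Xb / len(X)                             # correlation matrix (unit diagonal)
eig = np.linalg.eigvalsh(C)
print(eig.min() >= -1e-12)   # True: no negative eigenvalues (up to rounding)
```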
|
Defining parameters
Level: \( N = 27 = 3^{3} \)
Weight: \( k = 3 \)
Nonzero newspaces: \( 3 \)
Newforms: \( 4 \)
Sturm bound: \( 162 \)
Trace bound: \( 1 \)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{3}(\Gamma_1(27))\).
Total New Old
Modular forms 69 51 18
Cusp forms 39 35 4
Eisenstein series 30 16 14

Decomposition of \(S_{3}^{\mathrm{new}}(\Gamma_1(27))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label \(\chi\) Newform Dimension \(\chi\) degree
27.3.b \(\chi_{27}(26, \cdot)\) 27.3.b.a 1 1
27.3.b \(\chi_{27}(26, \cdot)\) 27.3.b.b 2 1
27.3.d \(\chi_{27}(8, \cdot)\) 27.3.d.a 2 2
27.3.f \(\chi_{27}(2, \cdot)\) 27.3.f.a 30 6
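As a sanity check of the header data (my computation, using the standard formulas \([\mathrm{SL}_2(\mathbb Z):\Gamma_1(N)]=N^2\prod_{p\mid N}(1-1/p^2)\) and Sturm bound \(= k\cdot\mathrm{index}/12\)):

```python
from fractions import Fraction
from math import prod

# Standard formulas (my computation, not taken from the page itself).
N, k = 27, 3
ps = [p for p in range(2, N + 1)
      if N % p == 0 and all(p % q for q in range(2, p))]   # prime divisors of N
index = N * N * prod(Fraction(p * p - 1, p * p) for p in ps)  # index of Gamma_1(N)
sturm = k * index / 12
print(index, sturm)   # 648 162, matching the Sturm bound above
```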
|
Let $u,v\in W^{1,p}(\Omega )\cap L^\infty (\Omega )$, $p\in[1,\infty ]$. Then $uv\in W^{1,p}(\Omega )\cap L^\infty (\Omega )$ and $$\partial _i(uv)=u\partial _iv+v\partial _iu.$$
I have trouble understanding the proof. Let $p\in [1,\infty )$ and let $D\subset \subset \Omega $ be open. Let $\rho_n$ be a standard mollifier. Define, for $n$ large enough, $$u_n=\rho_n* u\quad \text{and}\quad v_n=\rho_n*v.$$ Then $$u_n\longrightarrow u\text{ in }W^{1,p}(D)\quad \text{and}\quad v_n\longrightarrow v\text{ in }W^{1,p}(D),$$ and $$\|u_n\|_{L^\infty (\Omega )}\leq \|u\|_{L^\infty (\Omega )}\quad \text{and}\quad \|v_n\|_{L^\infty (\Omega )}\leq \|v\|_{L^\infty (\Omega )}.$$
Question 1: Why does such a $\rho_n$ exist, and why do we have the above convergence in $W^{1,p}$ and the inequality $\|u_n\|_{L^\infty (\Omega )}\leq \|u\|_{L^\infty (\Omega )}$ (and the same with $v_n$)? As far as I can see, $\rho_n$ is an approximation of the identity, but still, why can I do that?
WLOG, one may assume that $u_n\to u$ a.e. in $D$ and $\partial _i u_n\to \partial _i u$ a.e. in $D$.
Question 2: Why can we assume that?
We have in $D$ that $$\partial _i(u_nv_n)=u_n\partial _i v_n+v_n\partial _i u_n\to v\partial _i u+u\partial _i v\quad\text{in }L^1(D).$$
Question 3: Why do we have this relation? Isn't it what we wanted to prove in the beginning? I really don't understand why $$\partial _i(u_nv_n)=u_n\partial _i v_n+v_n\partial _i u_n;$$ I have the impression that this is what we want to prove. Nor do I understand why it converges to $v\partial _i u+u\partial _i v$... This proof looks so weird...
If I can understand everything that happens before, the conclusion will be fine.
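A numerical illustration of Question 1 (my sketch, not part of the proof): convolving with a shrinking bump kernel never increases the sup norm and converges in \(L^1\) away from the boundary.

```python
import numpy as np

# Mollifying u(x) = |x| with a shrinking bump kernel rho_eps: the sup-norm
# bound ||u_eps||_inf <= ||u||_inf holds, and u_eps -> u in L^1.
x = np.linspace(-2, 2, 4001)
h = x[1] - x[0]
u = np.abs(x)

def mollify(u, eps):
    s = x[np.abs(x) < eps]
    rho = np.exp(-1.0 / (1 - (s / eps) ** 2))   # standard bump, support (-eps, eps)
    rho /= rho.sum() * h                        # normalize so the integral is 1
    return np.convolve(u, rho, mode="same") * h

for eps in [0.5, 0.25, 0.1]:
    un = mollify(u, eps)
    err = np.abs(un - u)[np.abs(x) <= 1].sum() * h   # L^1 error away from the boundary
    print(eps, round(err, 4), un.max() <= u.max() + 1e-12)   # error shrinks; bound holds
```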
|
Kale, GM and Jacob, KT (1989)
Gibbs Energies of Formation of $CuYO_2$ and $Cu_2Y_2O_5$ and Phase Relations in the System Cu-Y-O. In: Chemistry of Materials, 1 (5). pp. 515-519.
Abstract
Thermodynamic properties of cuprous and cupric yttrates ($CuYO_2$ and $Cu_2Y_2O_5$) and oxygen potentials corresponding to three three-phase fields in the system Cu-Y-O have been determined by using solid-state galvanic cells: Pt, Cu + $CuYO_2$ + $Y_2O_3 \parallel (Y_2O_3)ZrO_2 \parallel$ Cu + $Cu_2O$, Pt; Pt, $CuYO_2$ + $Cu_2Y_2O_5 \parallel (Y_2O_3)ZrO_2 \parallel Cu_2O$ + CuO, Pt; and Pt, $CuYO_2$ + $Cu_2Y_2O_5$ + $Y_2O_3 \parallel (Y_2O_3)ZrO_2 \parallel Cu_2O$ + CuO, Pt. Yttria-stabilized zirconia was used as the solid electrolyte in the temperature range 873-1323 K. The compound $CuYO_2$ was prepared by the reduction of $Cu_2Y_2O_5$ at 1373 K under argon gas with a residual oxygen partial pressure of $\sim 1$ Pa. For the reaction $\frac {1}{2} Cu_2O(s) + \frac {1}{2} Y_2O_3(s) \rightarrow CuYO_2(s)$, $\Delta G^\circ = -5800 + 3.90T\ (\pm 30)$ J $mol^{-1}$, and for $2CuO(s) + Y_2O_3(s) \rightarrow Cu_2Y_2O_5(s)$, $\Delta G^\circ = 11\,210 - 15.072T\ (\pm 120)$ J $mol^{-1}$. The oxygen potentials corresponding to the coexistence of phases $CuYO_2 + Cu_2Y_2O_5$ and $CuYO_2 + Cu_2Y_2O_5 + Y_2O_3$ were found to be the same over the temperature range of measurement, thus indicating negligible solid solubility of $Y_2O_3$ in $CuYO_2$ and $Cu_2Y_2O_5$. On the basis of the present results and auxiliary thermodynamic data from the literature, phase relations in the Cu-Y-O system at 723, 950, and 1373 K have been deduced.
Item Type: Journal Article. Additional Information: Copyright of this article belongs to the American Chemical Society. Department/Centre: Division of Mechanical Sciences > Materials Engineering (formerly Metallurgy). URI: http://eprints.iisc.ac.in/id/eprint/12931
|
I read on the arXiv the following:
Let $\mathcal{\mathbf{C}}$ be a semisimple spherical tensor category with simple unit and let $\mathbf{\Gamma}$ be the set of isomorphism classes of simple objects.
Unfortunately I could not read further since I didn't understand the jargon:
- category: a collection of objects with morphisms satisfying certain rules; e.g. the category Set of sets and functions between them.
- tensor category: a category with a notion of "tensor product" $\otimes$; e.g. Vect, the category of vector spaces and linear maps between them.
- spherical tensor category: "A spherical category is a monoidal category with duals that behaves as if its morphisms can be drawn and moved around on a sphere." Confused? Another phrasing: "A spherical category is a pivotal category where the left and right trace operations coincide on all objects." Even more confusing, since I don't know what "pivotal" means or why there are left and right "traces".
- pivotal category: "A pivotal category is an autonomous category equipped with a monoidal natural isomorphism $A\to(A^\ast)^\ast$. Pivotal categories have also been called 'sovereign categories.'" This is a kind of category with duals, e.g. (possibly?) Vect with the duality operation $(V^\ast)^\ast = V$.
- semisimple category: "A semisimple category is a category in which each object is a direct sum of finitely many simple objects, and all such direct sums exist." E.g. the category of representations of a finite group $G$. These categories have notions of tensor product $\otimes$ and direct sum $\oplus$, so they behave almost like a ring.
Another look through the paper suggests I am looking for a Frobenius algebra, which has zany rules like these:
Now I can read the second half of the sentence:
If $\Gamma$ is finite we can define $\dim \mathcal{C} = \sum_{i \in \Gamma} d(X_i)^2$
Because the category is semisimple, every object decomposes as a direct sum of simple objects. Presumably there is a way to compute the dimensions of these things? Continuing...
If $\mathcal{C}$ is finite dimensional and braided then the Gauss sums of $\mathcal{C}$ are defined by $$ \Delta_{\pm} \mathcal{C} = \sum_{i \in \Gamma} \omega(X_i)^{\pm 1} d(X_i)^2$$ where $\theta(X) = \omega(X)\, id_X$ is the twist of the simple object $X$ defined by the spherical structure.
So I look up braided monoidal category: the tensor product $\otimes$ has to satisfy the "hexagon rules"...
What is an example of a finite dimensional braided spherical tensor category? And how do I compute each term in the sum above in such an instance?
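One standard example that I believe fits the bill is the Fibonacci modular tensor category: two simple objects \(1,\tau\) with quantum dimensions \(d(1)=1\), \(d(\tau)=\varphi\) (the golden ratio) and twists \(\omega(1)=1\), \(\omega(\tau)=e^{4\pi i/5}\). A small hedged computation of \(\dim\mathcal C\) and the Gauss sums, checking the modular identity \(\Delta_+\Delta_-=\dim\mathcal C\):

```python
import cmath
import math

# Fibonacci category data (standard values, quoted from the literature,
# not from the paper being read): d = (1, phi), omega = (1, e^{4 pi i / 5}).
phi = (1 + math.sqrt(5)) / 2
d = [1.0, phi]
omega = [1.0, cmath.exp(4j * math.pi / 5)]

dimC = sum(di**2 for di in d)            # dim C = 1 + phi^2 = phi + 2

def gauss(s):
    # Gauss sums: Delta_+- = sum_i omega(X_i)^{+-1} d(X_i)^2
    return sum(w**s * di**2 for w, di in zip(omega, d))

Dp, Dm = gauss(+1), gauss(-1)
print(round(dimC, 6))                    # 3.618034
print(abs(Dp * Dm - dimC) < 1e-9)        # True: Delta_+ Delta_- = dim C
```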
|
On p. 76 of the 1996 edition of Serre's
A Course in Arithmetic, one reads the following (inline) remark:
One can prove that, if $A$ has natural density $k$, the analytic density of $A$ exists and is equal to $k$.
Here, $A$ is a subset of $\bf P$ (the set of all positive rational primes), and the natural density of $A$ is actually the natural density of $A$ relative to $\bf P$, viz. the limit $$\lim_{n \to \infty} \frac{|A \cap [1,n]|}{|\mathbf P \cap [1,n]|}$$ (if it exists), while the analytic density of $A$ is actually the analytic density relative to $\bf P$, viz. the limit $$\lim_{s \to 1^+} \frac{\sum_{p \in A} p^{-s}}{\sum_{p \in \mathbf P} p^{-s}}$$ (again, if it exists). Here are then my questions:
Q1. Was Serre the first to make this observation explicit? Q2. Do you know of a paper or book where a proof is provided? Serre doesn't even give a hint about it. Notes (added later).
On Q1: In the light of Lucia's comment below, let me make it clear that I myself find it very hard to believe that the result wasn't known before Serre's remark in the 1970 French edition of his book (p. 126). I'd just like to find out whether Serre was the first to make it explicit.
On Q2: I have my own proof, but would appreciate a reference. The reason is that something sensibly stronger is true, and I'm hoping to understand, from an inspection of the proof Serre may have had in mind, whether this was intentional (e.g., it is evident from that proof that something sensibly stronger is true, but he just didn't care) or not.

Edit (Feb 09, 2016). For future reference, I think it can be useful to put things in order and summarize, here in the OP, what has emerged from the answers and comments of those who have so far contributed to this discussion:
1) As expected, Serre wasn't the first to make explicit the relation between the analytic and natural densities
relative to the primes. The result is already stated on p. 118 of:
E. Landau,
Handbuch der Lehre von der Verteilung der Primzahlen, Erster Band, Teubner: Leipzig, 1909,
where a detailed proof is also presented. This answers both Q1 and Q2.
2) Franz Lemmermeyer, in a comment to the OP, had suggested from the outset that the result should almost surely have appeared in some of Landau's books. This was confirmed by so-called friend Don in his answer (here), where it's also reported that the result was mentioned on p. 225 of the 1st edition of:
H. Hasse,
Vorlesungen über Zahlentheorie, Die Grundlehren der mathematischen Wissenschaften 59, Springer-Verlag: Berlin, 1950.
Interestingly enough, Hasse made a mistake here: he stated not only that the existence of the natural density (relative to the primes) implies that the analytic density (always relative to the primes) also exists, with the two then being equal, but went on to assert that the converse is true as well! As still noted by so-called friend Don, the mistake was fixed in the 2nd (1964) edition of the book (p. 236), and it was mentioned in a comment to his answer that we know by now that Hasse was really wrong, for an example attributed by Serre to a private communication from E. Bombieri (p. 126 in the 1970 French edition of A Course in Arithmetic, or p. 76 in the 1996 English edition) proves the existence of a set of primes that has an analytic (relative) density, but not a natural (relative) density.
3) Comparison results in the same spirit as those considered in this question, but involving
densities on $\mathbf N^+$, are not so rare in the literature. Most notably, it is known (and easy to prove by Abel's summation formula) that the upper analytic density (on $\mathbf N^+$) is not greater than the upper logarithmic density, which is in turn not greater than the upper asymptotic density, see, e.g., Theorem 2 in Section III.1.3 of:
G. Tenenbaum,
Introduction to Analytic and Probabilistic Number Theory, Cambridge Stud. Adv. Math. 46, Cambridge Univ. Press: Cambridge, 1995.
It follows at once that the existence of the natural density (on $\mathbf N^+$) implies the existence of the logarithmic density, and the existence of the logarithmic density implies the existence of the analytic density.
4) On the other hand, it is known that upper and lower asymptotic and natural densities are pretty much independent from each other, in a sense that was first made precise by L. Mišík in:
L. Mišík,
Sets of positive integers with prescribed values of densities, Math. Slovaca 52 (2002), No. 3, pp. 289-296. See here for further reading on the subject.
You may want to read the comments to Question 103111 (Prescribed values for the uniform density) for a more accurate account of Mišík's results and generalizations thereof.
5) Furthermore, it is known that the existence of the analytic density (on $\mathbf N^+$) implies that the logarithmic density also exists, and the two are then equal. This is a non-trivial result, which goes back at least to H. Davenport and P. Erdős, who make an implicit reference to it in the proof of Theorem 1 of:
H. Davenport and P. Erdős,
On sequences of positive integers, Acta Arith. 2(1936), No. 1, 147-151.
The proof is based on the Hardy-Littlewood tauberian theorem. All of this was pointed out by so-called friend Don in a comment to GH from MO's answer (here). An alternative proof, which uses Karamata's tauberian theorem instead, is given by Tenenbaum in his book (Theorem 3 in Section III.1.3). Tenenbaum himself mentioned in a private communication that the special case of Karamata's theorem needed here goes back to:
O. Szász,
Münchner Sitzungsberichte (1929), 325-340.
6) Last but not least, Christian Elsholtz added some further elements to the story (here).
|
Consider a Lagrangian theory of fields $\phi^a(x)$. Sometimes such a theory possesses a symmetry (let's talk about internal symmetries for simplicity), which means that the Lagrangian is invariant under the replacement $\phi^a\to \phi'^a=\phi'^a(\phi,\epsilon)$. Here $\epsilon$ stands for some continuous transformation parameters. Usually one encounters symmetries that are linear in $\phi$, for example $\phi'=e^{i\epsilon}\phi$ for a single complex scalar, or $\phi'^a=\epsilon^{ab}\phi^b$ with an orthogonal matrix $\epsilon^{ab}$ for the Lagrangian $\mathcal{L}=\frac12\left(\partial_\mu\phi^a\right)^2$.
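For instance, the two-scalar rotation example just mentioned can be verified symbolically (a throwaway check of mine, with a single spacetime coordinate \(x\) for brevity):

```python
import sympy as sp

# Symbolic check: the rotation phi' = R(eps) phi is a (linear) symmetry of
# L = (1/2) (d phi^a)^2 for two real scalars phi1, phi2.
x, eps = sp.symbols("x epsilon")
p1, p2 = sp.Function("phi1")(x), sp.Function("phi2")(x)
R = sp.Matrix([[sp.cos(eps), -sp.sin(eps)],
               [sp.sin(eps),  sp.cos(eps)]])
fields = sp.Matrix([p1, p2])
rotated = R * fields

def L(f):
    # kinetic Lagrangian (1/2) sum_a (d_x phi^a)^2
    return sum(sp.diff(fi, x)**2 for fi in f) / 2

print(sp.simplify(L(rotated) - L(fields)))   # 0
```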
My question is whether there are examples of non-linear symmetry transformations appearing in physical models. At the moment I am mostly concerned with classical fields, but comments on quantum extensions are surely welcome.
Clarification.
I appreciate references to various actual models. However, I would like first to see an example as simple and explicit as possible, where the essence is not obscured by technicalities. If there are some fundamental difficulties in constructing a really simple example, there must be a reason for that?
Let me also narrow down what I mean by a non-linear internal transformation. Assume that the replacement $\phi^a(x)\to \phi^a_\epsilon(x)=f(\phi^b(x),\epsilon)$ with some function $f$ leaves the Lagrangian invariant: $L(\phi,\partial\phi)=L(\phi_\epsilon,\partial\phi_\epsilon)$. The parameter $\epsilon$ could be a vector. Then call such a transformation linear if $\frac{\partial \phi^a_\epsilon}{\partial\phi^b}$ is independent of the coordinates, $\partial_\mu\frac{\partial \phi^a_\epsilon}{\partial\phi^b}=0$. In this sense, the shift transformation proposed by Andrew is also linear.
|
Ready to get your head cracked? Ok, let’s define a simple function that multiplies each number of a list of numbers by 2. We will give this function the name of “by2”.
So, you have a function that takes a list of numbers as a parameter and, after the computation, returns a list of numbers. How do you write this in Haskell? Well:
by2 :: [Int] -> [Int]
The brackets represent a list of something, in this case Integers. This is how you define a type in Haskell; I talked about that in my previous post.
Once the type is defined, the next step is to define the actual function. First, you call the function, with the parameters, and then you type the process that the function must do, at the end, you should have something like this:
by2 (x:xs) = (2 * x) : by2 xs
Hm, let me explain what that code is saying: you have a list with at least one element (an Integer in this case); by2 takes the first element and multiplies it by 2. Nothing new, but then I'm prepending the doubled element to... the function applied to the list without its first element? Seems weird? Well, here is the next and last line of code, and then I will explain more deeply a few concepts that you need to know to fully understand this function.
by2 [] = []
First af all, I’m ussing
pattern matching, a way to defining functions that compares patterns, the one given in the function definition with the paramater given. If you want to know more about it, check this.
Remember at the start when I said "a simple function"? Well, actually I may have been wrong; this one is not quite simple, because it introduces an important concept besides pattern matching, recursion. by2 is a recursive function, meaning that the function is applied inside its own definition. To make a function work with recursion, you must make the recursive call smaller than the parameter given. Go back to your code and you will see this: the first element of the list no longer appears in the recursive call of by2. Also, you always need to write your base case, the smallest case that the function may encounter; in this case it is a list with no elements, just like in the second line of code that I showed you. The recursive calls will keep getting smaller until the parameter given to the function is... an empty list! And there, each element, in order, will be inserted into this list.
It’s tough to understand, you may get it better with an example:
by2 [7,3,5] returns [14,6,10]
This is what happened there:
$$(2 \times 7) \triangleright by2 [3,5] \rightarrow$$ $$(2 \times 7) \triangleright ((2 \times 3) \triangleright by2 [5]) \rightarrow$$ $$(2 \times 7) \triangleright ((2 \times 3) \triangleright ((2 \times 5) \triangleright by2 [\hspace{2mm}])) \rightarrow $$ $$(2 \times 7) \triangleright ((2 \times 3) \triangleright ((2 \times 5) \triangleright [\hspace{2mm}])$$
And then it solves the multiplications and each number goes back where it was: 10 goes into the empty list, then 6, and last 14.
Well, I think we explained a few concepts along the way: pattern matching and recursion. But... have you tried the function? Open the terminal, then the compiler, load your file, and test by2!
You can try other parameters, like the empty list, or a list with words. Start playing around, changing the definition of by2, creating a by3, or a plus2; new functions that accept more and different parameters, with new definitions.
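If you want to try the by3 and plus2 exercises, here is one possible complete file (my sketch, not from the post) that follows the exact same pattern-matching-plus-recursion recipe:

```haskell
-- Suggested exercises: by3 and plus2, built just like by2.
by2 :: [Int] -> [Int]
by2 []     = []
by2 (x:xs) = (2 * x) : by2 xs

by3 :: [Int] -> [Int]
by3 []     = []
by3 (x:xs) = (3 * x) : by3 xs

plus2 :: [Int] -> [Int]
plus2 []     = []
plus2 (x:xs) = (x + 2) : plus2 xs

main :: IO ()
main = do
  print (by2 [7,3,5])    -- [14,6,10]
  print (by3 [7,3,5])    -- [21,9,15]
  print (plus2 [7,3,5])  -- [9,5,7]
```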
If anything went wrong, tell me and I’ll be glad to help you out!
You’ve made it this far, kid. Impressive. Don’t stop learning!
|
This is a long comment, not an answer.
Let $E_k$ be the total space of the orientable bundle over $S^2$ with fiber $\mathbb R^2$ and Euler class $k$. $E_k = \mathbb R^2 \rtimes_k S^2$. Let $\pi : E_k \to S^2$ be the bundle projection.
$C_2 E_k = \{ (x,y) \in E_k^2 : x \neq y\}$ is the configuration space, with $p : C_2 E_k \to E_k$ the map $p(x,y)=x$.
Consider the composite $\pi \circ p : C_2 E_k \to S^2$. It's a fibration and the fibers are homotopy-equivalent to $S^3 \vee S^2$, although that's not the most honest way of perceiving the fibers. The idea is to think of $\pi \circ p(x,y) = \pi(x)$ as a point in the $0$-section of $\pi$. If $\pi(y) \neq \pi(x)$ you homotope $y$ to $\pi(y)$, i.e. the $0$-vector over $\pi(y)$. If $\pi(y) = \pi(x)$ you can't do this in $E_k$, so you homotope $y$ to a unit vector in $\pi^{-1}(\pi(x))$. In other words, the fiber of $\pi \circ p$ over $\pi(x)$ looks like the sphere bundle of $\pi$ with each circle fiber over points in a neighbourhood of $\pi(x)$ collapsed. So what's really going on is that the fibers have as a deformation-retract a subspace that's $S^3$ union a $2$-cell, but the attachment map for the $2$-cell is along a great circle. The nice thing about this deformation-retract is that it's equivariant with respect to the monodromy. Precisely,
$$C_2 E_k \simeq (S^3 \cup e^2) \rtimes S^2$$
where the monodromy $SO_2 \to Aut(S^3 \cup e^2)$ is rotation about this great circle. Specifically, if you think of $S^3$ as the unit sphere in $\mathbb C^2$, and let the great circle be $S^1 \times \{0\} \subset S^3$, then it's the action of $S^1$ on $S^3$ given by $(z, (z_1,z_2)) \mapsto (z_1,zz_2)$. The action is trivial on the $2$-cell attachment.
So as a space, it's $S^3 \rtimes S^2$ union a $D^2 \times S^2$ attached along the $S^1 \times S^2 \subset S^3 \rtimes S^2$ corresponding to where the monodromy is trivial.
I think the attaching map is null-homologous in $H_* (S^3 \rtimes_k S^2)$. So this means
$H_*(C_2 E_k)$ is free abelian, with ranks $1, 0, 2, 1, 1, 1, 0, 0, 0$ in dimensions 0 through 8 respectively. The $H_4(C_2 E_k)$ class is interesting, have you computed its self-intersection number?
|
For a discrete abelian cancellative semigroup $S$ with a weight function $\omega$ and associated multiplier semigroup $M_\omega(S)$ consisting of $\omega$-bounded multipliers, the multiplier algebra of the Beurling algebra of $(S,\omega)$ coincides with the Beurling algebra of $M_\omega(S)$ with the induced weight.
Towards an involutive analogue of a result on the semisimplicity of ${\ell}^{1}(S)$ by Hewitt and Zuckerman, we show that, given an abelian $\ast$-semigroup $S$, the commutative convolution Banach $\ast$-algebra ${\ell}^{1}(S)$ is $\ast$-semisimple if and only if Hermitian bounded semicharacters on $S$ separate the points of $S$; and we search for an intrinsic separation property on $S$ equivalent to $\ast$-semisimplicity. Very many natural involutive analogues of Hewitt and Zuckerman's separation property are shown not to work, thereby exhibiting the intricacies involved in analysis on $S$.
Given a morphism $T$ from a Banach algebra $\mathcal B$ to a commutative Banach algebra $\mathcal A$, a multiplication is defined on the Cartesian product space $\mathcal A\times\mathcal B$ perturbing the coordinatewise product, resulting in a new Banach algebra $\mathcal A\times_T\mathcal B$. The Arens regularity as well as amenability (together with its various avatars) of $\mathcal A\times_T\mathcal B$ are shown to be stable with respect to $T$.
|
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition, do we get a different notion?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
Suppose $\sum a_n z_0^n = L$; then $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$. Hence, for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, a convergent geometric series, so $a_n z^n$ is absolutely summable, hence summable.
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$ for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$.
If $\lim_{n\to \infty} \left|\frac{a_{n+1}}{a_n}\right|<1$, then $a_n$ converges to zero.
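A quick numerical sanity check is possible before hunting for the proof. The sketch below (my own illustration, not part of the problem) iterates the integral on a grid for the sample choice $g(s)=\cos s$ and confirms that $n!\,g_n(t)$ becomes tiny; the estimate $|g_n(t)|\le \|g\|_\infty\, t^{n-1}/(n-1)!$ explains why.

```python
# Numerically iterate g_{n+1}(t) = \int_0^t g_n(s) ds on a grid over [0, 1/2]
# and watch n! * g_n shrink, for the sample choice g(s) = cos(s).
import math

N_GRID = 2001
H = 0.5 / (N_GRID - 1)
xs = [i * H for i in range(N_GRID)]

def integrate_prefix(vals):
    """Cumulative trapezoidal integral of vals over the uniform grid."""
    out = [0.0]
    for i in range(1, len(vals)):
        out.append(out[-1] + 0.5 * (vals[i - 1] + vals[i]) * H)
    return out

g = [math.cos(x) for x in xs]
for n in range(1, 31):              # build g_2, ..., g_31 from g_1 = cos
    g = integrate_prefix(g)

# g now holds g_31; the scaled sup-norm 31! * sup|g_31| should be tiny,
# consistent with the bound n! * t^(n-1)/(n-1)! = n * t^(n-1) at t = 1/2
sup_scaled = math.factorial(31) * max(abs(v) for v in g)
```

The bound predicts a value below $31 \cdot (1/2)^{30} \approx 3\times 10^{-8}$.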
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of $n$ independent functions from the proper function space.
I now obtain an expression that is bilinear in the coefficients.
Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations, with $n$ the number of coefficients:
a set of $n$ linear homogeneous equations in the $n$ coefficients.
Now, instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists.
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz,
avoiding the necessity to solve for the coefficients.
I have trouble formulating the question precisely. But it strikes me that a direct solution of the equations can be circumvented, and the values of the functional are instead obtained directly by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or, so to say, a more general principle.
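For what it's worth, the procedure described is the classical Rayleigh-Ritz method: restricting the quadratic functional to the span of the ansatz functions turns its stationary values into the eigenvalues of the generalized problem $H\mathbf{c}=E\,S\mathbf{c}$, whose secular determinant $\det(H-ES)$ is exactly the polynomial described. A minimal numerical sketch under an assumed test problem (lowest Dirichlet eigenvalue of $-d^2/dx^2$ on $[0,1]$, exact value $\pi^2$; the two basis functions are my own choice, not from the question):

```python
# Rayleigh-Ritz sketch: approximate the lowest eigenvalue of -d^2/dx^2 on [0,1]
# with Dirichlet boundary conditions (exact answer: pi^2 ~ 9.8696) using the
# two-function ansatz  u(x) = c1*x(1-x) + c2*x^2(1-x)^2.
# Stationarity in (c1, c2) gives the 2x2 secular equation det(H - E*S) = 0,
# with H_ij = integral of phi_i' phi_j' and S_ij = integral of phi_i phi_j.
import math

def trapezoid(f, n=20000):
    """Trapezoidal quadrature of f over [0, 1]."""
    h = 1.0 / n
    return h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(1.0))

phi  = [lambda x: x * (1 - x),  lambda x: x**2 * (1 - x)**2]
dphi = [lambda x: 1 - 2 * x,    lambda x: 2*x - 6*x**2 + 4*x**3]

H = [[trapezoid(lambda x: dphi[i](x) * dphi[j](x)) for j in range(2)] for i in range(2)]
S = [[trapezoid(lambda x: phi[i](x) * phi[j](x)) for j in range(2)] for i in range(2)]

# det(H - E S) = 0 is the quadratic a*E^2 + b*E + c = 0 in E
a = S[0][0] * S[1][1] - S[0][1] ** 2
b = -(H[0][0] * S[1][1] + H[1][1] * S[0][0] - 2 * H[0][1] * S[0][1])
c = H[0][0] * H[1][1] - H[0][1] ** 2
E_min = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

The smaller root comes out near 9.8698, already very close to $\pi^2$, without ever solving for the coefficients themselves.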
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer z in the mid way of $x, y$ , which is a palindrome and digitsum(z)=digitsum(x).
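The claim is easy to probe numerically before attempting a proof. The sketch below (my own, not from the post) checks the midpoint $z=(x+y)/2$ for a few emirp pairs; note that $x+y$ is even whenever both primes are odd, so $z$ is an integer.

```python
def digitsum(n):
    return sum(int(d) for d in str(n))

def is_palindrome(n):
    return str(n) == str(n)[::-1]

def check_pair(x):
    """True if the midpoint of x and its digit reverse y is a palindrome
    with the same digit sum as x (the claim in the post)."""
    y = int(str(x)[::-1])
    z = (x + y) // 2          # x, y both odd primes, so x + y is even
    return is_palindrome(z) and digitsum(z) == digitsum(x)

# a few prime / reversed-prime pairs: (13,31), (17,71), (37,73), ...
results = {x: check_pair(x) for x in (13, 17, 37, 79, 107, 149)}
```

For instance $x=13$, $y=31$ gives $z=22$, a palindrome with digit sum $4=1+3$.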
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
Prove that if $(f_n)$ is a sequence of Borel measurable functions and if $f(x)=\lim_{n\to \infty}f_n(x)$ exists in $\mathbb{R}$, then $f$ is Borel measurable. In fact, $f$ is Borel measurable even if we only have $f(x)=\lim_{n\to\infty}f_n(x)$ almost everywhere on $D$, some measurable domain.
Attempt/Thoughts:
Suppose each $f_n$ is Borel measurable. That is, for every $\alpha\in\overline{\mathbb{R}}$, the preimage $f_n^{-1}((\alpha,\infty])$ is a Borel set. We are given pointwise convergence, so by definition, $\forall\epsilon>0\ \exists N\in\mathbb{N}:\ \forall n\geq N:\ |f_n(x)-f(x)|<\epsilon$.
I'm not sure how to proceed from here.
I'm not sure how to start the second part at all. I know that it means that $f$ is Borel measurable even if $f(x)=\lim_{n\to\infty}f_n(x)$ on $D\setminus E$, where $m(E)=0$.
Any suggestions would be welcome. Thanks.
This is just a curiosity. I have come across multiple proofs of the fact that there are infinitely many primes, some of them were quite trivial, but some others were really, really fancy. I'll show you what proofs I have and I'd like to know more because I think it's cool to see that something can be proved in so many different ways.
Proof 1 : Euclid's. If there are finitely many primes then $p_1 p_2 ... p_n + 1$ is coprime to all of these guys. This is the basic idea in most proofs : generate a number coprime to all previous primes.
Proof 2 : Consider the sequence $a_n = 2^{2^n} + 1$. We have that $$ 2^{2^n}-1 = (2^{2^1} - 1) \prod_{m=1}^{n-1} (2^{2^m}+1), $$ so that for $m < n$, $(2^{2^m} + 1, 2^{2^n} + 1) \, | \, (2^{2^n}-1, 2^{2^n} +1) = 1$. Since we have an infinite sequence of numbers coprime in pairs, at least one prime number must divide each one of them and they are all distinct primes, thus giving an infinity of them.
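Proof 2 is easy to verify computationally: the Fermat numbers $2^{2^n}+1$ are pairwise coprime, and the telescoping identity holds. A quick sketch (my own illustration):

```python
# Verify the telescoping identity and the pairwise coprimality of the
# Fermat numbers F_n = 2^(2^n) + 1 used in Proof 2.
import math

fermat = [2 ** (2 ** n) + 1 for n in range(8)]   # F_0, ..., F_7

# identity: 2^(2^n) - 1 = (2^(2^1) - 1) * prod_{m=1}^{n-1} (2^(2^m) + 1)
n = 5
prod = 2 ** 2 - 1
for m in range(1, n):
    prod *= 2 ** (2 ** m) + 1
identity_holds = (prod == 2 ** (2 ** n) - 1)

pairwise_coprime = all(
    math.gcd(fermat[i], fermat[j]) == 1
    for i in range(len(fermat)) for j in range(i + 1, len(fermat))
)
```

Since each $F_n$ exceeds 1, picking one prime divisor of each yields infinitely many distinct primes.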
Proof 3 : (Note : I particularly like this one.) Define a topology on $\mathbb Z$ in the following way : a set $\mathscr N$ of integers is said to be open if for every $n \in \mathscr N$ there is an arithmetic progression $\mathscr A$ such that $n \in \mathscr A \subseteq \mathscr N$. This can easily be proven to define a topology on $\mathbb Z$. Note that under this topology arithmetic progressions are open and closed. Supposing there are finitely many primes, notice that this means that the set $$ \mathscr U \,\,\,\, \overset{def}{=} \,\,\, \bigcup_{p} \,\, p \mathbb Z $$ should be open and closed, but by the fundamental theorem of arithmetic, its complement in $\mathbb Z$ is the set $\{ -1, 1 \}$, which is not open, thus giving a contradiction.
Proof 4 : Let $a,b$ be coprime integers and $c > 0$. There exists $x$ such that $(a+bx, c) = 1$. To see this, choose $x$ such that $a+bx \not\equiv 0 \, \mathrm{mod}$ $p_i$ for all primes $p_i$ dividing $c$. If $a \equiv 0 \, \mathrm{mod}$ $p_i$, since $a$ and $b$ are coprime, $b$ has an inverse mod $p_i$, call it $\overline{b}$. Choosing $x \equiv \overline{b} \, \mathrm{mod}$ $p_i$, you are done. If $a \not\equiv 0 \, \mathrm{mod}$ $p_i$, then choosing $x \equiv 0 \, \mathrm{mod}$ $p_i$ works fine. Find $x$ using the Chinese Remainder Theorem.
Now assuming there are finitely many primes, let $c$ be the product of all of them. Our construction generates an integer coprime to $c$, giving a contradiction to the fundamental theorem of arithmetic.
Proof 5 : Dirichlet's theorem on arithmetic progressions (just so that you not bring it up as an example...)
Do you have any other nice proofs?
There is no acceptable/viable mechanism for a free electron to absorb or emit energy, without violating energy or momentum conservation. So its wavefunction cannot collapse into becoming a particle, right? How do 2 free electrons repel each other then?
It is true that the reactions $$e + \gamma \to e, \quad e \to e + \gamma$$ cannot occur without violating energy or momentum conservation. But that doesn't mean that electrons can't interact with anything! For example, scattering $$e + \gamma \to e + \gamma$$ is perfectly allowed. And a classical electromagnetic field is built out of many photons, so the interaction of an electron with such a field can be thought of as an interaction with many photons at once. There are plenty of ways a free electron can interact without violating energy or momentum conservation, so there's no problem here.
To resolve this paradox requires study of time dependent perturbation theory; solving Schrodinger's equation with a time dependent perturbation corresponding to the interaction time of two particles.
If you do this you arrive at the following conclusions:
A single free electron cannot absorb a free photon ( $e + \gamma \to e$ is not a valid interaction)
A single free electron cannot emit a free photon ( $e \to e + \gamma$ is not a valid interaction)
However, two electrons can scatter by exchange of energy ( $ e + e \to e + e$ is a valid interaction)
In this latter case it is common to refer to this process as being due to exchange of "a virtual photon" between the two electrons. But this is just a description of the calculation of time dependent perturbation theory.
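The kinematic statement behind the first two bullet points can be checked directly: for an electron initially at rest absorbing a photon, momentum conservation fixes the final momentum, and the resulting on-shell energy always falls short of the conserved total. A small numerical sketch (my own, in units $c=1$):

```python
# Show that e + gamma -> e cannot conserve both energy and momentum.
# Work in the initial electron's rest frame, units with c = 1.
import math

def mass_shell_gap(E_photon, m=1.0):
    """Electron of mass m at rest absorbs a photon of energy E_photon.
    Momentum conservation forces final electron momentum p = E_photon;
    the on-shell energy sqrt(m^2 + p^2) then disagrees with the conserved
    total m + E_photon.  Returns the (always positive) mismatch."""
    p_final = E_photon                        # momentum conservation
    e_required = m + E_photon                 # energy conservation
    e_on_shell = math.sqrt(m * m + p_final * p_final)
    return e_required - e_on_shell

gaps = [mass_shell_gap(E) for E in (0.01, 0.1, 1.0, 10.0)]
```

Since $\sqrt{m^2+E^2} < m+E$ for every $E>0$, the gap never closes, which is exactly why the one-vertex processes are forbidden while $e+\gamma\to e+\gamma$ scattering is fine.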
Your first statement is false: energy can indeed be added at will to electrons by accelerating them with electrostatic charge distributions or, for example, with rapidly varying radio-frequency (electromagnetic) fields. Neither energy nor momentum conservation is violated in this case. Search on SLAC for more details about this.
Your other questions are unclear. I recommend you do the search, read a bit, and return here if you have further questions.
Basically, because the process $e \to e + \gamma$ doesn't simultaneously conserve energy and momentum; this is why we say that it's mediated by a virtual photon.
In more technical language, this means that the photons that are exchanged between two interacting electrons are allowed to be "off shell", where the "shell" is the relationship $E^2 = p^2c^2$ (for a massless particle). Real particles are required to be on-shell, but virtual particles are allowed to stray from that condition, at least by some amount.
In the Feynman-diagram description of the scattering between two electrons you often find diagrams where one electron emits a photon which is then absorbed by the other; this is a virtual photon and as such both the 'emission' and 'absorption' processes are exempt from these considerations.
As always, though, it bears repeating that Feynman diagrams are calculational tools, and none of the virtual particles that appear in those diagrams actually physically exist in any definable sense. The fact that the 'emission' and 'absorption' processes, as well as the virtual photon itself, seem to defy energy or momentum conservation is purely a quirk of the way in which we've chosen to interpret limited chunks of our calculation.
Let me start with a simple counter-question: how does a free electron in a laser cooling process lose kinetic energy? The photon, hitting the electron, gets absorbed and is afterwards re-emitted with a higher frequency (a higher energy content).
There is no acceptable/viable mechanism for a free electron to absorb or emit energy,...
There is. Photons are indivisible particles only between their emission and absorption. And the term photon is a summary name for a class of particles spanning all possible frequencies (energy contents). So the re-emitted photon mostly does not have the same frequency as the absorbed photon.
So I rewrite the equation from another answer to an interaction between the electron and the photon:
$$e + \gamma \equiv e \leftrightarrow (\gamma_1 + \gamma_2) \to (e + \gamma_1) + \gamma_2 $$
How do 2 free electrons repel each other then?
Besides explanations with virtual photons, another explanation is that for equally charged particles the fields do not exchange energy but work like springs. The electric fields get deformed like springs and afterwards relax, pushing the particles back. But the particles meanwhile lose some amount of their kinetic energy (relative to each other) by emitting photons. Remember, any acceleration of a charge is accompanied by photon emission.
So I'm already aware of the quantum mechanical operator for momentum and how to derive the kinetic energy operator from this: $$\hat T=\frac{\hat p^2}{2m}=\frac{-\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$$ But I'm wondering how to derive the kinetic energy operator solely from the statistical definition of an expectation value.
I've successfully derived the momentum expectation value this way to find: $$\lt p\gt =-i\hbar \int_{-\infty}^{\infty} \psi ^\star \frac{\partial \psi}{\partial x} dx = \int_{-\infty}^{\infty} \biggl(\psi ^\star \biggl(\frac{\hbar}{i}\frac{\partial}{\partial x}\biggr) \psi \biggr)dx = \int_{-\infty}^{\infty} \biggl(\psi ^\star\hat p \psi\biggr) dx $$
It seems to follow the same derivation as before, namely: $$\lt T \gt = \frac{\lt p \gt^2}{2m} =\frac{-\hbar^2}{2m} \biggl( \int_{-\infty}^{\infty} \psi ^\star \frac{\partial \psi}{\partial x} dx \biggr)^2 $$ But I don't see how to manipulate this such that $\biggl( \int_{-\infty}^{\infty} \psi ^\star \frac{\partial \psi}{\partial x} dx \biggr)^2 = \frac{\partial^2}{\partial x^2}$
Any help clarifying this issue would be greatly appreciated.
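It may help to see where the attempted derivation goes wrong: the kinetic-energy expectation is $\langle T\rangle = \langle p^2\rangle/2m$, not $\langle p\rangle^2/2m$. The operator $\hat p$ acts twice inside a single integral, $\langle p^2\rangle = \int \psi^\star\, \hat p(\hat p\,\psi)\, dx$, which is where the second derivative comes from; one never squares the finished number $\langle p\rangle$. A numerical illustration of the difference (my own sketch, units $\hbar=m=1$, using the Gaussian $\psi=\pi^{-1/4}e^{-x^2/2}$, for which $\langle p\rangle=0$ while $\langle p^2\rangle=\tfrac12$):

```python
# Contrast <p>^2/2m with <p^2>/2m for a real Gaussian wavefunction.
import math

# grid over [-8, 8]; psi(x) = pi**(-1/4) * exp(-x**2/2) is normalized
N = 4001
L = 8.0
h = 2 * L / (N - 1)
xs = [-L + i * h for i in range(N)]
psi = [math.pi ** (-0.25) * math.exp(-x * x / 2) for x in xs]
dpsi = [-x * p for x, p in zip(xs, psi)]       # analytic psi'(x) = -x * psi(x)

def trapz(vals):
    """Trapezoidal rule on the uniform grid."""
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# <p>   = -i * integral of psi * psi'  : vanishes for this real psi
# <p^2> = integral of |psi'|^2          (after integrating by parts)
p_integral = trapz([p * d for p, d in zip(psi, dpsi)])
p2_mean = trapz([d * d for d in dpsi])
T_mean = p2_mean / 2                            # <T> = <p^2> / (2m), m = 1
```

Here $\langle p\rangle^2/2m = 0$ but $\langle T\rangle = 1/4$: the two quantities genuinely differ, and the gap is the momentum variance.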
If we want to describe a static spherically symmetric star we can use a metric which matches the Schwarzschild solution with the correct mass on the outside of the star but differs from Schwarzschild inside the matter distribution.
Basically we solve the Einstein equations with a source $T_{\mu\nu}$, for instance $$T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+p\,g_{\mu\nu}$$ where $u_{\mu}$ has zero spatial components, meaning it is the velocity in a static fluid (this can also be seen as a consequence of Einstein equations).
Can we do something similar for a rotating star using the metric for a Kerr black hole?
I heard that it is a much more difficult problem and I would like to understand how difficult it is (Is it possible?) and what makes it so difficult.
Use the following figure as an aid in identifying the relationship between the rectangular, cylindrical, and spherical coordinate systems.
For exercises 1 - 4, the cylindrical coordinates \( (r,θ,z)\) of a point are given. Find the rectangular coordinates \( (x,y,z)\) of the point.
1) \( (4,\frac{π}{6},3)\)
Answer: \( (2\sqrt{3},2,3)\)
2) \( (3,\frac{π}{3},5)\)
3) \( (4,\frac{7π}{6},3)\)
Answer: \( (−2\sqrt{3},−2,3)\)
4) \( (2,π,−4)\)
For exercises 5 - 8, the rectangular coordinates \( (x,y,z)\) of a point are given. Find the cylindrical coordinates \( (r,θ,z)\)of the point.
5) \( (1,\sqrt{3},2)\)
Answer: \( (2,\frac{π}{3},2)\)
6) \( (1,1,5)\)
7) \( (3,−3,7)\)
Answer: \( (3\sqrt{2},−\frac{π}{4},7)\)
8) \( (−2\sqrt{2},2\sqrt{2},4)\)
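The conversions used in exercises 1-8 follow the standard formulas $x=r\cos θ$, $y=r\sin θ$, $z=z$ and their inverses $r=\sqrt{x^2+y^2}$, $θ=\arctan(y/x)$ (quadrant-aware). A short Python sketch, checked against exercises 1 and 5:

```python
import math

def cyl_to_rect(r, theta, z):
    """Cylindrical (r, theta, z) -> rectangular (x, y, z)."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def rect_to_cyl(x, y, z):
    """Rectangular (x, y, z) -> cylindrical (r, theta, z).
    atan2 places theta in the correct quadrant."""
    return (math.hypot(x, y), math.atan2(y, x), z)

# Exercise 1: (4, pi/6, 3) -> (2*sqrt(3), 2, 3)
p1 = cyl_to_rect(4, math.pi / 6, 3)
# Exercise 5: (1, sqrt(3), 2) -> (2, pi/3, 2)
p5 = rect_to_cyl(1, math.sqrt(3), 2)
```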
For exercises 9 - 16, the equation of a surface in cylindrical coordinates is given. Find the equation of the surface in rectangular coordinates. Identify and graph the surface.
9) [T] \( r=4\)
Answer:
A cylinder of equation \( x^2+y^2=16,\) with its center at the origin and rulings parallel to the \(z\)-axis.
10) [T] \( z=r^2\cos^2 θ\)
11) [T] \( r^2\cos(2θ)+z^2+1=0\)
Answer:
Hyperboloid of two sheets of equation \( −x^2+y^2−z^2=1,\) with the \(y\)-axis as the axis of symmetry.
12) [T] \( r=3\sin θ\)
13) [T] \( r=2\cos θ\)
Answer:
Cylinder of equation \( x^2−2x+y^2=0,\) with a center at \( (1,0,0)\) and radius \( 1\), with rulings parallel to the \(z\)-axis.
14) [T] \( r^2+z^2=5\)
15) [T] \( r=2\sec θ\)
Answer:
Plane of equation \( x=2,\)
16) [T] \( r=3\csc θ\)
For exercises 17 - 22, the equation of a surface in rectangular coordinates is given. Find the equation of the surface in cylindrical coordinates.
17) \( z=3\)
Answer: \( z=3\)
18) \( x=6\)
19) \( x^2+y^2+z^2=9\)
Answer: \( r^2+z^2=9\)
20) \( y=2x^2\)
21) \( x^2+y^2−16x=0\)
Answer: \( r=16\cos θ\) or \( r=0\)
22) \( x^2+y^2−3\sqrt{x^2+y^2}+2=0\)
For exercises 23 - 26, the spherical coordinates \( (ρ,θ,φ)\) of a point are given. Find the rectangular coordinates \( (x,y,z)\) of the point.
23) \( (3,0,π)\)
Answer: \( (0,0,−3)\)
24) \( (1,\frac{π}{6},\frac{π}{6})\)
25) \( (12,−\frac{π}{4},\frac{π}{4})\)
Answer: \( (6,−6,\sqrt{2})\)
26) \( (3,\frac{π}{4},\frac{π}{6})\)
For exercises 27 - 30, the rectangular coordinates \( (x,y,z)\) of a point are given. Find the spherical coordinates \( (ρ,θ,φ)\) of the point. Express the measure of the angles in degrees rounded to the nearest integer.
27) \( (4,0,0)\)
Answer: \( (4,0,90°)\)
28) \( (−1,2,1)\)
29) \( (0,3,0)\)
Answer: \( (3,90°,90°)\)
30) \( (−2,2\sqrt{3},4)\)
For exercises 31 - 36, the equation of a surface in spherical coordinates is given. Find the equation of the surface in rectangular coordinates. Identify and graph the surface.
31) [T] \( ρ=3\)
Answer:
Sphere of equation \( x^2+y^2+z^2=9\) centered at the origin with radius \( 3\),
32) [T] \( φ=\frac{π}{3}\)
33) [T] \( ρ=2\cos φ\)
Answer:
Sphere of equation \( x^2+y^2+(z−1)^2=1\) centered at \( (0,0,1)\) with radius \( 1\),
34) [T] \( ρ=4\csc φ\)
35) [T] \( φ=\frac{π}{2}\)
Answer:
The \(xy\)-plane of equation \( z=0,\)
36) [T] \( ρ=6\csc φ\sec θ\)
For exercises 37 - 40, the equation of a surface in rectangular coordinates is given. Find the equation of the surface in spherical coordinates. Identify the surface.
37) \( x^2+y^2−3z^2=0, z≠0\)
Answer: \( φ=\frac{π}{3}\) or \( φ=\frac{2π}{3};\) Elliptic cone
38) \( x^2+y^2+z^2−4z=0\)
39) \( z=6\)
Answer: \( ρcosφ=6;\) Plane at \( z=6\)
40) \( x^2+y^2=9\)
For exercises 41 - 44, the cylindrical coordinates of a point are given. Find its associated spherical coordinates, with the measure of the angle φ in radians rounded to four decimal places.
41) [T] \( (1,\frac{π}{4},3)\)
Answer: \( (\sqrt{10},\frac{π}{4},0.3218)\)
42) [T] \( (5,π,12)\)
43) \( (3,\frac{π}{2},3)\)
Answer: \( (3\sqrt{2},\frac{π}{2},\frac{π}{4})\)
44) \( (3,−\frac{π}{6},3)\)
For exercises 45 - 48, the spherical coordinates of a point are given. Find its associated cylindrical coordinates.
45) \( (2,−\frac{π}{4},\frac{π}{2})\)
Answer: \( (2,−\frac{π}{4},0)\)
46) \( (4,\frac{π}{4},\frac{π}{6})\)
47) \( (8,\frac{π}{3},\frac{π}{2})\)
Answer: \( (8,\frac{π}{3},0)\)
48) \( (9,−\frac{π}{6},\frac{π}{3})\)
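The conversions behind exercises 41-48 are $ρ=\sqrt{r^2+z^2}$, $φ=\arctan(r/z)$ (with $θ$ unchanged) and, in the other direction, $r=ρ\sin φ$, $z=ρ\cos φ$. A short sketch, checked against exercises 41 and 45:

```python
import math

def cyl_to_sph(r, theta, z):
    """Cylindrical (r, theta, z) -> spherical (rho, theta, phi)."""
    return (math.hypot(r, z), theta, math.atan2(r, z))  # atan2 handles z <= 0

def sph_to_cyl(rho, theta, phi):
    """Spherical (rho, theta, phi) -> cylindrical (r, theta, z)."""
    return (rho * math.sin(phi), theta, rho * math.cos(phi))

# Exercise 41: (1, pi/4, 3) -> (sqrt(10), pi/4, 0.3217...)
s41 = cyl_to_sph(1, math.pi / 4, 3)
# Exercise 45: (2, -pi/4, pi/2) -> (2, -pi/4, 0)
c45 = sph_to_cyl(2, -math.pi / 4, math.pi / 2)
```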
For exercises 49 - 52, find the most suitable system of coordinates to describe the solids.
49) The solid situated in the first octant with a vertex at the origin and enclosed by a cube of edge length \( a\), where \( a>0\)
Answer: Cartesian system, \( \{(x,y,z) \mid 0≤x≤a,\ 0≤y≤a,\ 0≤z≤a\}\)
50) A spherical shell determined by the region between two concentric spheres centered at the origin, of radii of \( a\) and \( b\), respectively, where \( b>a>0\)
51) A solid inside sphere \( x^2+y^2+z^2=9\) and outside cylinder \( (x−\frac{3}{2})^2+y^2=\frac{9}{4}\)
Answer: Cylindrical system, \( \{(r,θ,z) \mid r^2+z^2≤9,\ r≥3\cos θ,\ 0≤θ≤2π\}\)
52) A cylindrical shell of height \( 10\) determined by the region between two cylinders with the same center, parallel rulings, and radii of \( 2\) and \( 5\), respectively
53) [T] Use a CAS or CalcPlot3D to graph in cylindrical coordinates the region between elliptic paraboloid \( z=x^2+y^2\) and cone \( x^2+y^2−z^2=0.\)
Answer:
The region is described by the set of points \( \{(r,θ,z) \mid 0≤r≤1,\ 0≤θ≤2π,\ r^2≤z≤r\}.\)
54) [T] Use a CAS or CalcPlot3D to graph in spherical coordinates the “ice cream-cone region” situated above the xy-plane between sphere \( x^2+y^2+z^2=4\) and elliptical cone \( x^2+y^2−z^2=0.\)
55) Washington, DC, is located at \( 39°\) N and \( 77°\) W (see the following figure). Assume the radius of Earth is \( 4000\) mi. Express the location of Washington, DC, in spherical coordinates.
Answer: \( (4000,−77°,51°)\)
56) San Francisco is located at \( 37.78°N\) and \( 122.42°W.\) Assume the radius of Earth is \( 4000\)mi. Express the location of San Francisco in spherical coordinates.
57) Find the latitude and longitude of Rio de Janeiro if its spherical coordinates are \( (4000,−43.17°,112.91°).\)
Answer: \( 43.17°W, 22.91°S\)
58) Find the latitude and longitude of Berlin if its spherical coordinates are \( (4000,13.38°,37.48°).\)
59) [T] Consider the torus of equation \( (x^2+y^2+z^2+R^2−r^2)^2=4R^2(x^2+y^2),\) where \( R≥r>0.\)
a. Write the equation of the torus in spherical coordinates.
b. If \( R=r,\) the surface is called a horn torus. Show that the equation of a horn torus in spherical coordinates is \( ρ=2R\sin φ.\)
c. Use a CAS or CalcPlot3D to graph the horn torus with \( R=r=2\) in spherical coordinates.
Answer:
a. \( ρ=0;\quad ρ^2+R^2−r^2−2Rρ\sin φ=0\)
c.
60) [T] The “bumpy sphere” with an equation in spherical coordinates is \( ρ=a+b\cos(mθ)\sin(nφ)\), with \( θ∈[0,2π]\) and \( φ∈[0,π]\), where \( a\) and \( b\) are positive numbers and \( m\) and \( n\) are positive integers, may be used in applied mathematics to model tumor growth.
a. Show that the “bumpy sphere” is contained inside a sphere of equation \( ρ=a+b.\) Find the values of \( θ\) and \( φ\) at which the two surfaces intersect.
b. Use a CAS or CalcPlot3D to graph the surface for \( a=14, b=2, m=4,\) and \( n=6\) along with sphere \( ρ=a+b.\)
c. Find the equation of the intersection curve of the surface at b. with the cone \( φ=\frac{π}{12}\). Graph the intersection curve in the plane of intersection.
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
Exercises and LaTeX edited by Paul Seeburger
In a previous question, I was looking for an equation for counting the number of integers between $1$ and $x$ that have a prime factor besides $2$ or $3$.
There were 2 iterative equations that came up:
$x−\left\lfloor{\log_2 x}\right\rfloor−\left\lfloor{\log_2\frac{x}{3}}\right\rfloor−\left\lfloor{\log_2\frac{x}{9}}\right\rfloor− \dots$
$x - \left\lfloor{\log_2 x}\right\rfloor - \left\lfloor{\log_3 x}\right\rfloor - \left\lfloor{\frac{x}{6}}\right\rfloor + $ count of integers in $[1,\frac{x}{6}]$ with a factor other than $2$ or $3$
There were two answers that involved interesting approximations:
$\dfrac{\log(2n) \log(3n)}{2 \log 2 \log 3}$ with error $O(\frac{n}{\log n})$
$\dfrac{\log(n)^2}{2 \log(2)\log(3)}$ with error $O(\log n)$
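For what it's worth, the quality of these approximations is easy to probe numerically. The sketch below (my own) counts the $3$-smooth numbers $2^a 3^b \le n$ exactly and compares with $\log(2n)\log(3n)/(2\log 2\log 3)$; the count of integers with some other prime factor is then $n$ minus this.

```python
import math

def count_3smooth(n):
    """Exact count of integers 2^a * 3^b <= n, with a, b >= 0."""
    count = 0
    p3 = 1
    while p3 <= n:                # loop over powers of 3
        p2 = p3
        while p2 <= n:            # multiply in powers of 2
            count += 1
            p2 *= 2
        p3 *= 3
    return count

n = 10 ** 6
exact = count_3smooth(n)
approx = math.log(2 * n) * math.log(3 * n) / (2 * math.log(2) * math.log(3))
```

At $n=10^6$ the exact count is 142 and the approximation lands within a tenth of that, which matches the stated error behaviour.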
Are there any standard methods for getting from the iterative equations above to the approximation equations below? If not, would anyone be able to show an approach to be able to approximating either of the equations above with an error estimate?
Thanks very much,
-Larry
Learning Outcomes
Find the union of two sets. Find the intersection of two sets. Combine unions, intersections, and complements.
All statistics classes include questions about probabilities involving the union and intersections of sets. In English, we use the words "or" and "and" to describe these concepts. For example, "Find the probability that a student is taking a mathematics class or a science class." That is expressing the union of the two sets in words. "What is the probability that a nurse has a bachelor's degree and more than five years of experience working in a hospital?" That is expressing the intersection of two sets. In this section we will learn how to decipher these types of sentences and will learn about the meaning of unions and intersections.
Unions
An element is in the union of two sets if it is in the first set, the second set, or both. The symbol we use for the union is \(\cup\). The word that you will often see that indicates a union is "or".
Example \(\PageIndex{1}\): Union of Two sets
Let:
\[A=\left\{2,5,7,8\right\} \nonumber\]
and
\[B=\lbrace1,4,5,7,9\rbrace \nonumber \]
Find \(A\cup B\)
Solution
We include in the union every number that is in A or is in B:
\[A\cup B=\left\{1,2,4,5,7,8,9\right\} \nonumber \]
Example \(\PageIndex{2}\): Union of Two sets
Consider the following sentence, "Find the probability that a household has fewer than 6 windows or has a dozen windows." Write this in set notation as the union of two sets and then write out this union.
Solution
First, let A be the set of the number of windows that represents "fewer than 6 windows". This set includes all the numbers from 0 through 5:

\[A=\left\{0,1,2,3,4,5\right\} \nonumber \]
Next, let B be the set of the number of windows that represents "has a dozen windows". This is just the set that contains the single number 12:
\[B=\left\{12\right\} \nonumber \]
We can now find the union of these two sets:
\[A\cup B=\left\{0,1,2,3,4,5,12\right\} \nonumber \]
Intersections
An element is in the intersection of two sets if it is in the first set and it is in the second set. The symbol we use for the intersection is \(\cap\). The word that you will often see that indicates an intersection is "and".
Example \(\PageIndex{3}\): Intersection of Two sets
Let:
\[A=\left\{3,4,5,8,9,10,11,12\right\} \nonumber \]
and
\[B=\lbrace5,6,7,8,9\rbrace \nonumber \]
Find \(A\cap B\).
Solution
We only include in the intersection the numbers that are in both A and B:
\[A\cap B=\left\{5,8,9\right\} \nonumber \]
Example \(\PageIndex{4}\): Intersection of Two sets
Consider the following sentence, "Find the probability that the number of units that a student is taking is more than 12 units and less than 18 units." Assuming that students only take a whole number of units, write this in set notation as the intersection of two sets and then write out this intersection.
Solution
First, let A be the set of numbers of units that represents "more than 12 units". This set includes all the numbers starting at 13 and continuing forever:
\[A=\left\{13,\:14,\:15,\:...\right\} \nonumber \]
Next, let B be the set of the number of units that represents "less than 18 units". This is the set that contains the numbers from 1 through 17:
\[B=\left\{1,\:2,\:3,\:...,\:17\right\} \nonumber \]
We can now find the intersection of these two sets:
\[A\cap B=\left\{13,\:14,\:15,\:16,\:17\right\} \nonumber \]
Combining Unions, Intersections, and Complements
One of the biggest challenges in statistics is deciphering a sentence and turning it into symbols. This can be particularly difficult when there is a sentence that does not have the words "union", "intersection", or "complement", but it does implicitly refer to these words. The best way to become proficient in this skill is to practice, practice, and practice more.
Example \(\PageIndex{5}\)
Consider the following sentence, "If you roll a six sided die, find the probability that it is not even and it is not a 3." Write this in set notation.
Solution
First, let A be the set of even numbers and B be the set that contains just 3. We can write:
\[A=\left\{2,4,6\right\},\:\:\:B\:=\:\left\{3\right\} \nonumber \]
Next, since we want "not even" we need to consider the complement of A:
\[A^c=\left\{1,3,5\right\} \nonumber \]
Similarly since we want "not a 3", we need to consider the complement of B:
\[B^c=\left\{1,2,4,5,6\right\} \nonumber \]
Finally, we notice the key word "and". Thus, we are asked to find:
\[A^c\cap B^c=\:\left\{1,3,5\right\}\cap\left\{1,2,4,5,6\right\}=\left\{1,5\right\} \nonumber \]
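Python's built-in `set` type mirrors these operations directly (`|` for union, `&` for intersection, and set difference from the sample space for the complement), which makes a handy way to check answers. A sketch of Example 5:

```python
U = {1, 2, 3, 4, 5, 6}        # sample space: faces of a six-sided die
A = {2, 4, 6}                 # even outcomes
B = {3}                       # "a 3"

A_c = U - A                   # complement of A: {1, 3, 5}
B_c = U - B                   # complement of B: {1, 2, 4, 5, 6}
answer = A_c & B_c            # "not even AND not a 3": {1, 5}
union_demo = A | B            # union, for comparison: {2, 3, 4, 6}
```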
Example \(\PageIndex{6}\)
Consider the following sentence, "If you randomly select a person, find the probability that the person is older than 8 or is both younger than 6 and is not younger than 3." Write this in set notation.
Solution
First, let A be the set of people older than 8, B be the set of people younger than 6, and C be the set of people younger than 3. We can write:
\[A=\left\{x\mid x>8\right\},\:\:\:B\:=\:\left\{x\mid x<6\right\},\:C=\left\{x\mid x<3\right\} \nonumber \]
We are asked to find
\[A\cup\left(B\cap C^c\right) \nonumber \]
Notice that the complement of "\(< \)" is "\(\ge\)". Thus:
\[C^c=\left\{x\mid x\ge3\right\} \nonumber \]
Next we find:
\[B\cap C^c=\left\{x\mid x<6\right\}\cap\left\{x\mid x\ge3\right\}=\left\{x\mid3\le x<6\right\} \nonumber \]
Finally, we find:
\[A\cup\left(B\cap C^c\right)=\:\left\{x\mid x>8\right\}\cup\left\{x\mid3\le x<6\right\} \nonumber \]
The clearest way to display this union is on a number line. The number line below displays the answer:
Exercise
Suppose that we pick a person at random and are interested in finding the probability that the person's birth month came after July and did not come after September. Write this event using set notation.
Sometimes as a result of learning new things you realize that you are incredibly confused about something you thought you understood very well, and that perhaps your intuition needs to be revised. This happened to me when thinking about non-Lagrangian descriptions of QFT's. Below I'll provide a brief description of my intuition and why I think it's been challenged, but for the sake of clarity here is my question: do ''typical" or "generic" QFT's have Lagrangian descriptions? How can one quantify the size of the set of QFT's with and without Lagrangian descriptions? When a QFT is said to not have a Lagrangian descriptions, does this mean it really does not have one, or only that such a description is difficult or impossible to find?
As a young student of QFT, I studied the Wilsonian approach to RG and it left me with a very simple and geometric understanding of field theory. To describe some physical process as a QFT, one first has to understand the symmetries of the problem (such as Poincare symmetry, gauge and global symmetries). Then one writes down a general
polynomial Lagrangian consistent with these symmetries. As an example, consider the case of an $O(n)$ scalar field $\vec{\phi}$ (and let's restrict to Lagrangians with the standard, two derivative kinetic term just for simplicity):
$$ \mathcal{L} = -\frac{1}{2} (\nabla \vec{\phi})^2 + a_2 \vec{\phi}^2 + a_4 \left( \vec{\phi}^2 \right)^2 + a_6 \left( \vec{\phi}^2 \right)^3 + \dots $$
From Wilsonian RG, I'm used to thinking about the space of possible field theories (with the symmetry restrictions imposed above) as corresponding to the infinite-dimensional parameter space $a_2, a_4, a_6, ...$. A point in this space specifies a Lagrangian and defines a field theory. RG flow is simply represented as a trajectory from one point (in the UV) to another (in the IR). Many different starting points can have the same endpoint, which allows a simple pictorial description of universality classes to be drawn.
So from this logic I would think that all QFT's admit Lagrangian descriptions, but some of them might require an infinite number of interaction terms. This intuition was challenged by reading about CFT's and gauge/gravity duality. In these contexts Lagrangian descriptions of the field theory are almost never written down. In fact, according to generalized gauge/gravity (i.e. the belief that gravity with AdS boundary conditions is dual to some CFT), it might seem that many QFT's do not admit Lagrangian descriptions. This generalized gauge/gravity should work just fine in $D=100$, and the UV fixed point field theory certainly doesn't admit a simple Lagrangian description since in sufficiently high dimension all the interaction terms are relevant (and therefore negligible in the UV), which would suggest that the UV fixed point is simply free, but that of course is not the case.
I arrived at this confusion by thinking about AdS/CFT, but I'd be very happy to simply have a strong understanding of what exactly it means for a QFT to not have a Lagrangian description, and a sense for how "typical" such theories are.
Edit: And let me add a brief discussion on CFT's. From the bootstrap approach to CFT's, one starts with CFT "data", i.e. a set of conformal dimensions and OPE coefficients, and then in principle one should be able to solve the CFT (by that I mean calculate all correlation functions). So here is an entirely different way of characterizing field theories, which only applies for conformal theories. Non-CFT's can be obtained by RG flowing away from these fixed points. It would be helpful to understand the connection between this way of thinking about general QFT's and the above Wilsonian one.
|
If I take the definition of David Aldous and Jim Fill, a finite state space Markov chain is time-reversible if it satisfies the detailed balance equation$$\pi_i\,p_{ij}=\pi_j\,p_{ji}$$where the $p_{ij}$'s are the terms of the Markov transition matrix and the $\pi_i$'s are the terms of a probability distribution. Then, by summing both sides of the equation in $i$, we derive that$$\sum_{i=1}^N \pi_i\,p_{ij}=\sum_{i=1}^N \pi_j\,p_{ji}=\pi_j\,\sum_{i=1}^N p_{ji}=\pi_j$$which implies that $(\pi_1,\ldots,\pi_N)$ is a stationary distribution for the Markov transition. If the chain is assumed to be irreducible, then the stationary distribution is unique. And the Markov chain is then ergodic if it is aperiodic.
The converse is not true, in that there exist non-reversible ergodic Markov chains. An example is provided by the Gibbs sampler associated with a vector $(X_1,X_2,X_3)$ and a stationary distribution $P(x_1,x_2,x_3)$. The transition from $\mathbf{X}^t$ to $\mathbf{X}^{t+1}$ given by
Generate $X_1^{t+1}\sim P(x_1|X_2^t,X_3^t)$ Generate $X_2^{t+1}\sim P(x_2|X_1^{t+1},X_3^t)$ Generate $X_3^{t+1}\sim P(x_3|X_1^{t+1},X_2^{t+1})$
is not time-reversible, but if the three conditional distributions have no restriction on their support, the resulting Markov chain is ergodic with distribution $P$.
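To see both facts concretely, here is a small numerical sketch (my own illustration, with an arbitrarily chosen joint distribution, not an example from Aldous and Fill): for three binary variables we can write each conditional update as an $8\times 8$ stochastic matrix, compose them into one systematic-scan kernel, and check that $P$ is stationary while detailed balance fails.

```python
import itertools
import numpy as np

# Arbitrary joint distribution P(x1, x2, x3) on {0,1}^3 with interactions
# (chosen only for illustration).
states = list(itertools.product([0, 1], repeat=3))
w = np.array([np.exp(x1 * x2 + x2 * x3 + 0.5 * x1) for (x1, x2, x3) in states])
P = w / w.sum()

def gibbs_kernel(k):
    """Row-stochastic matrix resampling coordinate k from its full conditional."""
    T = np.zeros((8, 8))
    for i, x in enumerate(states):
        z = list(x)
        # normalizing constant of P(x_k = . | x_{-k})
        norm = sum(P[states.index(tuple(z[:k] + [u] + z[k + 1:]))] for u in (0, 1))
        for v in (0, 1):
            j = states.index(tuple(z[:k] + [v] + z[k + 1:]))
            T[i, j] = P[j] / norm
    return T

# One systematic scan: update x1, then x2, then x3
T = gibbs_kernel(0) @ gibbs_kernel(1) @ gibbs_kernel(2)

print(np.abs(P @ T - P).max())   # ~0: P is stationary for the scan kernel
F = P[:, None] * T               # F[i, j] = pi_i p_ij
print(np.abs(F - F.T).max())     # > 0: detailed balance fails
```

Each single-coordinate kernel satisfies detailed balance on its own; it is the fixed scan order that breaks reversibility of the composition.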
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
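This can be checked explicitly for vector fields whose flows are known in closed form. A small sketch (my own toy example, not from the chat): on $\Bbb R^2$ take $X = x\,\partial_y$ and $Y = \partial_x$, so $[X, Y] = -\partial_y$; because these fields are linear, the four-flow loop displaces $p$ by exactly $t\,[X,Y]$.

```python
import math

# Toy vector fields on R^2: X = x d/dy, Y = d/dx, with [X, Y] = -d/dy.
# Their flows are exact, so no ODE solver is needed.
def flow_X(p, s):      # along X: x stays fixed, y grows at rate x
    x, y = p
    return (x, y + s * x)

def flow_Y(p, s):      # along Y: translate in x
    x, y = p
    return (x + s, y)

def commutator_loop(p, t):
    """Flow along X, Y, -X, -Y, each for time sqrt(t)."""
    s = math.sqrt(t)
    p = flow_X(p, s)
    p = flow_Y(p, s)
    p = flow_X(p, -s)
    p = flow_Y(p, -s)
    return p

p0 = (1.0, 2.0)
t = 1e-4
q = commutator_loop(p0, t)
# displacement is t * [X, Y] = (0, -t), up to floating point
print((q[0] - p0[0], q[1] - p0[1]))
```

For generic (nonlinear) fields the displacement agrees with $t\,[X,Y]$ only up to higher-order corrections in $t$, which is the statement in the message above.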
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
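For what it's worth, the generators above are easy to check by brute force (a quick sketch of my own; permutations of $\{0,1,2,3\}$ in one-line notation, closed under composition):

```python
from itertools import product

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations stored as tuples in one-line notation."""
    return tuple(p[q[i]] for i in range(len(q)))

def generated_subgroup(gens):
    """Close a generating set under composition (fine for a tiny group like S_4)."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a, b in product(elems, repeat=2)} - elems
        if not new:
            return elems
        elems |= new

t12   = (1, 0, 2, 3)   # the transposition (1 2)
t13   = (2, 1, 0, 3)   # the transposition (1 3)
c123  = (1, 2, 0, 3)   # the 3-cycle (1 2 3)
c124  = (1, 3, 2, 0)   # the 3-cycle (1 2 4)
c1234 = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4)

for gens, d in [([t12], 2), ([c123], 3), ([c1234], 4), ([t12, c123], 6),
                ([c1234, t13], 8), ([c123, c124], 12), ([t12, c1234], 24)]:
    print(d, len(generated_subgroup(gens)) == d)   # each line prints True
```

The order-8 subgroup here is the dihedral 2-Sylow generated by a 4-cycle and a diagonal transposition, and the order-12 subgroup is $A_4$, generated by two 3-cycles.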
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
|
I have been thinking about quotients lately and pondered the following:
Let $G$ be a connected linear algebraic group and $X$ a $G$-variety where the action is the morphism $\sigma:G\times X\rightarrow X$. Let $p:L\rightarrow X$ be a line bundle on $X$.
A $G$-linearisation of $L$ is an action of $G$ on $L$ such that $p(g\cdot l)= g\cdot p(l)$, for $l\in L, g\in G$, and which restricts to a linear isomorphism $L_{x}\rightarrow L_{g\cdot x}$ on the fibres. This last condition can be expressed as saying that there is an isomorphism $L\rightarrow g^{\ast}L$, for each $g\in G$ (here $g^{\ast}L$ is the pullback bundle by the automorphism $g$ of $X$). In fact, since $G$ is connected, a $G$-linearisation of $L$ exists if and only if there is an isomorphism $p_{2}^{\ast}L\rightarrow \sigma^{\ast}L$ of bundles on $G\times X$, with $p_{2}$ the projection to $X$.
It is known (Corollary 7.2, p.109, 'Lectures on Invariant Theory' - Dolgachev) that if $X$ is normal then for any $L$ there is some power of $L$ that admits a $G$-linearisation.
Question 1: Can someone provide an example of a non-normal $G$-variety $X$ and a line bundle $L$ for which no power $L^{n}$ admits a $G$-linearisation?
Question 1': If no such example can exist can someone point me towards the literature (if any) where this question is addressed?
The existence result for normal $X$ relies on that fact that there is an exact sequence
$ 0\rightarrow K \rightarrow Pic^{G}(X)\rightarrow Pic(X) \rightarrow Pic(G)$
and that $Pic(G)$ is finite. Here $K$ is the group of rational characters of $G$ and $Pic^{G}(X)$ is the group of line bundles with a $G$-linearisation (or line $G$-bundles in Dolgachev's terminology).
Question 2: Can we extend the exact sequence
$0\rightarrow K \rightarrow Pic^{G}(X) \rightarrow Pic(X)$
to the right for arbitrary $X$ and in a 'canonical' manner? (i.e., is this exact sequence the tail of a canonical long exact sequence for any $G$-variety X?)
Question 2': If so, what groups appear? Do they have any 'down-to-earth' interpretations? (e.g., we have $Pic(G)$ appearing for normal $X$).
Thanks in advance and apologies if this is standard material in GIT - I only have a copy of Dolgachev's notes at hand and these questions are not addressed.
|
Reading Ravenel's "green book", I wonder about his question on p.15 "that the spectrum MU may be constructed somehow using formal group law theory without using complex manifolds or vector bundles. Perhaps the corresponding infinite loop space is the classifying space for some category defined in terms of formal group laws. Infinite loop space theorists, where are you?". What is the state of things on that now?
As far as I know, there is still no such interpretation. The closest I've heard is some rumored (but unpublished) work in derived algebraic geometry interpreting MU as some kind of representing object.
Such a construction of MU in terms of formal group law data would be very welcome (probably even more now than when Ravenel wrote the green book).
EDIT: Some elaboration.
We do know a lot about MU. We know that it has an orientation (Chern classes for vector bundles), and that it's universal for this property. It's not then extremely surprising that we get a formal group law from the tensor product of line bundles, but the fact that MU carries a universal formal group law, and that MU ^ MU carries a universal pair of isomorphic formal group laws, is surprising. At this point it's something we
observe algebraically. Even Lurie's definition of derived formal group laws, assuming I understand correctly, is geared to construct formal group law objects in derived algebraic geometry carrying a connection to the formal group law data that we already know is there on the spectrum level, and hence ties it to the story we already knew for MU implicitly.
Some reasons these days we might want to know how to construct MU from formal group law data:
1. Selfish, ordinary homotopy-theoretic reasons. It's very useful to be able to construct other spectra with specific connections to formal group law data (like K-theory, TMF, etc) and constructing them is generally very difficult. Things like the Landweber exact functor theorem, the Hopkins-Miller theorem, and Lurie's recent work give us a lot of progress in this direction, but they only apply in restricted circumstances. None of these general methods will construct ordinary integral cohomology, corresponding to the additive formal group law (only rational cohomology). If we understood how to build MU, we might understand how to generalize.
2. Equivariant homotopy theory. I would tentatively say that we don't have nearly as good computational and "qualitative" pictures of the equivariant stable categories, because we don't have something like the startling MU-picture that relates it all to some stack like the moduli stack of 1-dimensional formal group laws. If we found MU by _accident_ then we don't really know how the analogue should play out in other, more general, stable categories.
3. Motivic homotopy theory. Hopkins and Morel found that there is some data of formal group laws appearing in motivic stable homotopy theory via the motivic bordism spectrum MGL. I'm not up with the state of the art here but a better understanding of this connection would be very important too - for understanding MGL itself, but also hopefully for understanding the analogues of chromatic data in these categories related to algebraic geometry.
4. (space reserved for connections to other subjects that I've forgotten)
To elaborate on Tyler's comment (and please correct my inaccuracies), the idea is that the moduli space of DERIVED one-dimensional formal group laws (defined appropriately --- roughly, formal group laws in which rings are replaced with Eoo ring spectra) is an affine derived scheme, which is the spectrum (in the sense of AG) of the spectrum (in the sense of AT) MU. This is a derived version of Quillen's theorem that the formal group law of MU is the universal formal group law. If I remember correctly, Jacob Lurie said this is fairly obvious. It's the natural analog of the (much harder) theorem of Lurie's that the moduli stack of derived elliptic curves (roughly, versions of elliptic curves with structure sheaves given by Eoo ring spectra) is representable by a derived enhancement of the moduli of ordinary elliptic curves (the coordinate ring given by the canonical line bundle is then TMF). Another example of the philosophy is Tyler's work with Mark Behrens. But this one is supposed to be easier and less useful (more formal).
If you take the moduli STACK of derived formal groups, then the global sections of the structure sheaf are just the sphere I think -- this is a version of the Adams-Novikov spectral sequence. Anyway the idea is to reinterpret the Quillen-Morava-Ravenel-Devinatz-Hopkins-Smith-.. picture for the stable category via formal group laws as describing the algebraic geometry of the derived moduli stack of formal groups.
There is a very easy theorem along much weaker but related lines in Adams's "Stable Homotopy and Generalized Homology." I refer to Lemma 4.6 of section II.
It isn't written quite like this, but essentially it says that if E is a complex orientable spectrum together with a complex orientation $x \in E^{2}(\mathbf{C}P^{\infty})$ then there is a unique (up to homotopy) map of ring spectra $MU \rightarrow E$ taking the fixed (better fix one) complex orientation of $MU$ to the given complex orientation of $E$.
This doesn't build $MU$ out of formal group laws of course, but it shows $MU$ has this universal property for complex oriented cohomology theories, and this book was around of course when Ravenel wrote the green book.
It does seem like with the modern point of view these ideas should yield a construction out of complex oriented theories if not quite out of formal group laws.
|
Three standard deviations mean that the null hypothesis doesn't seem "great". The deviation from the predictions isn't enough for a discovery in a hard scientific discipline such as particle physics. However, I am convinced that a 3-sigma deviation – formally equal to a 99.7% certainty of a new effect – simply has to be publicized because it's interesting enough. We've covered lots of 2-sigma or 2.5-sigma deviations so it would be unfair to be silent about a 3-sigma one.
The new ATLAS preprint with this potentially interesting finding is called "Search for supersymmetry in events containing a same-flavour opposite-sign dilepton pair, jets, and large missing transverse momentum in \(\sqrt{s}=8\TeV\) \(pp\)-collisions with the ATLAS detector". So look at it: we want final states that contain \(\ell^+\ell^-\), a lepton pair – it's the leptons that apparently follow from a decay of the Z-bosons that will matter; jets; and large missing transverse momentum (assumed to be composed of the lightest superpartners such as the lightest neutralinos – possibly particles of dark matter). The Standard Model didn't do too well.
One shouldn't overstate the deviations from the Standard Model. Most of the channels are OK. For example, a particular channel in the ATLAS data refused to confirm a recently announced 2.6-sigma CMS excess – although the kinematic constraints were slightly different in the two experiments.
However, while rejecting this CMS excess, ATLAS saw an even larger one. It was in the events in which the lepton pair looked like one from a Z-boson of the correct mass (no neutrinos are produced in that). The key table is table 7 which describes SR-Z (the signal region with a Z-boson):\[
\begin{array}{|c|c|c|c|}
\hline {\rm Channel} & e^+e^- & \mu^+\mu^- & \ell^+\ell^-~{\rm combo}\\
\hline {\rm expect} &4.2\pm 1.6& 6.4\pm 2.2 & 10.6\pm 3.2\\
\hline {\rm observed} & 16&13&29 \\
\hline
\end{array}
\] The excess is mostly in the electron pair channel, 3.0 sigma again, but even in the dimuon channel, there is some excess, 1.7 sigma. When combined, the dilepton channels are back to 3 sigma. One expects \(10\pm 3\) and gets \(29\) events, not bad! I vaguely remember some other recent excess that – surprisingly or disappointingly – appeared mostly in the electron channel only but I can't figure out where it was. Do you know where we saw it?
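For the record, here is the naive arithmetic (my own quick sketch, not the collaboration's profile-likelihood treatment): the pure Poisson tail probability for 29 events on a background of \(10.6\) ignores the \(\pm 3.2\) uncertainty and therefore overstates the significance, while a crude Gaussian z-score with the uncertainty added in quadrature already lands much closer to the quoted 3.0 sigma.

```python
import math

expected, observed = 10.6, 29   # SR-Z dilepton combination from table 7
bkg_unc = 3.2

# Pure Poisson tail P(N >= 29 | mean 10.6), ignoring the background uncertainty
p_tail = 1.0 - sum(math.exp(-expected) * expected ** k / math.factorial(k)
                   for k in range(observed))
print(p_tail)   # of order 1e-6, far smaller than a 3-sigma p-value

# Crude Gaussian z-score with the background uncertainty added in quadrature
z = (observed - expected) / math.sqrt(expected + bkg_unc ** 2)
print(z)        # about 4, closer to the quoted 3.0 sigma
```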
As far as I understand, CMS hasn't released its own data on the on-Z signal region where ATLAS is seeing the excess yet.
If you want to believe that this excess is from new physics, it may be a gluino or a squark that is decaying. One exclusion plot – Figure 13b – weakly suggests that the exclusion curve is being repelled from a point with a \(1\TeV\) gluino and a \(600\GeV\) lightest neutralino which may be "the truth" for those who want to be real optimists.
(On Monday, March 16th, this paper by Barenboim et al. will appear that will make pretty much the same conclusion as I did 4 days earlier. A SUSY scenario of "general gauge mediation" is the simplest thing that is needed. The gravitino is light and it's the LSP. Fundamental higher-energy scales are at hundreds of TeV. Only this paragraph was added after March 12th.)
Your humble correspondent doesn't really understand whether they assume these strongly interacting superpartners to be pair-produced and, if they do, why the other member of the pair doesn't destroy the identity of the final state. But maybe it's what gives the jets, and the missing transverse energy is only required to "be there" while its magnitude isn't important. Can you help me?
New Higgs CMS excess
CMS will release a paper tomorrow looking for heavier friends of the now familiar \(125\GeV\) Higgs boson. There is a 2.56-2.64 sigma excess in the mass range \(700\)-\(800\GeV\). The newer papers released recently seem to have many more excesses than 1+ year ago – which supports the conjecture that whenever they see something "not quite mundane", they (both ATLAS and CMS) are delaying the publication, an approach that may be reasonable but also a potentially dangerous bias we should be aware of (and force us to deduce lessons from "comparative literature" – like the statistical evaluation "how many excesses have been seen anywhere" – much more carefully than otherwise).
Incidentally, a few hours ago, there was a CERN webcast kickstarting the new season, the 2015 run. You may watch the recorded press conference here. Be ready for lots of French and German accents. I believe that the content isn't important enough to be analyzed.
|
Jha, Ramanand (1994)
A Cosmology without Big Bang. In: General Relativity and Gravitation, 26 (11). pp. 1067-1073.
Abstract
In contrast to standard ECSK theory with the Brans-Dicke scalar field $\Phi$ nonminimally coupled to the curvature scalar, an additional new pseudoscalar term $\Phi^n E^{\alpha\beta\mu\nu} R_{\alpha\beta\mu\nu}$ (contraction between the Levi-Civita pseudotensor and the curvature tensor) has been included in the Lagrangian. The new term is non-zero due to the non-symmetric nature of the connection and vanishes identically in the general theory of relativity. We show that there exists a nonsingular cosmological solution for a spatially flat ($k = 0$) Robertson-Walker line element in the radiation era; therefore our model has no big bang.
Item Type: Journal Article Additional Information: The copyright of this article belongs to Springer. Department/Centre: Division of Physical & Mathematical Sciences > Physics Depositing User: Anka Setty Date Deposited: 04 Jul 2006 Last Modified: 19 Sep 2010 04:29 URI: http://eprints.iisc.ac.in/id/eprint/7795
|
For a parametrically defined curve we had the definition of arc length. Since vector valued functions are parametrically defined curves in disguise, we have the same definition. We have the added benefit of notation with vector valued functions in that the square root of the sum of the squares of the derivatives is just the magnitude of the velocity vector.
Definition: Arc Length
Let
\[ \textbf{r}(t) = x(t) \, \hat{\textbf{i}} + y(t) \, \hat{\textbf{j}} + z(t) \, \hat{\textbf{k}} \]
be a differentiable vector valued function on [a,b]. Then the arc length \(s\) is defined by
\[ s=\int_{a}^{b}\sqrt{ \left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2+\left(\frac{dz}{dt}\right)^2}\, dt = \int _a^b \left|v(t)\right| \,dt .\]
Example \(\PageIndex{1}\)
Suppose that
\[ \textbf{r}(t) = 3t\,\hat{\textbf{i}} + 2\,\hat{\textbf{j}} + t^2\,\hat{\textbf{k}} \]
Set up the integral that defines the arc length of the curve from 2 to 3. Then use a calculator or computer to approximate the arc length.
Solution
We use the arc length formula
\[ s = \int _2^3 \sqrt{9 + 0 + 4t^2} \, dt = \int_2^3 \sqrt{9+4t^2} \, dt .\]
Notice that we could do this integral by hand by letting \(t = \frac{3}{2}\tan\theta\), however the question only asked us to use a machine to approximate the integral:
\[ s = 5.8386 .\]
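This approximation is easy to reproduce with any numerical integrator; here is a short sketch using composite Simpson's rule:

```python
import math

def speed(t):
    # |v(t)| for r(t) = 3t i + 2 j + t^2 k: sqrt(3^2 + 0^2 + (2t)^2)
    return math.sqrt(9 + 4 * t * t)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

print(round(simpson(speed, 2, 3), 4))   # -> 5.8386
```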
Parameterization by Arc Length
Recall that like parametric equations, vector valued functions describe not just the path of the particle, but also how the particle is moving. Among all representations of a curve there is a "simplest" one. If the particle travels at the constant rate of one unit per second, then we say that the curve is
parameterized by arc length. We have seen this concept before in the definition of radians. On a unit circle one radian is one unit of arc length around the circle. When we say "simplest" we in no way mean that the equations are simple to find, but rather that the dynamics of the particle are simple. To aid us in parameterizing by arc length, we define the arc length function.
Definition: Arc Length Function
If \(\textbf{r}(t)\) is a differentiable vector valued function, then the
arc length function is defined by
\[ s(t) = \int _0^t || \textbf{v}(u) || \, du. \]
Remark: By the second fundamental theorem of calculus, we have
\[ s'(t) = ||v(t)|| .\]
If a vector valued function is parameterized by arc length, then
\[ s(t) = t .\]
If we have a vector valued function \(\textbf{r}(t)\) with arc length function \(s(t)\), then we can invert the arc length function to introduce the new variable
\[ t = s^{-1}(s) .\]
Substituting back, the vector valued function \(\textbf{r}\left(s^{-1}(s)\right)\) has arc length function
\[ s\left(s^{-1}(s)\right) = s ,\]
so it is parameterized by arc length. Unfortunately, this process is usually impossible for two reasons.
1. The integral that defines arc length involves a square root in the integrand; this integral is usually impossible to evaluate in closed form.
2. Even if the integral can be evaluated, finding the inverse of a function is often impossible.

There are a few special curves that can be parameterized by arc length, and one is demonstrated below.
Example \(\PageIndex{2}\): Parameterizing by Arc Length
Find the arc length parameterization of the helix defined by
\[ \textbf{r}(t) = \cos\, t \hat{\textbf{i}} + \sin\,t \hat{\textbf{j}} + t \hat{\textbf{k}} .\]
Solution
First find the arc length function
\[ s(t) = \int_0^t \sqrt{\sin^2 u + \cos^2u + 1}\, du = \int_0^t \sqrt{2}\,du = \sqrt{2}\, t .\]
Solving for \(t\) gives
\[ t= \dfrac{s}{\sqrt2} .\]
Now substitute back into the position equation to get
\[ \textbf{r}(s) = \cos \dfrac{s}{\sqrt2} \, \hat{\textbf{i}} + \sin \dfrac {s}{\sqrt2} \, \hat{\textbf{j}} + \dfrac{s}{\sqrt2} \, \hat{\textbf{k}} .\]
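We can sanity-check the result numerically: a curve parameterized by arc length must have speed identically equal to \(1\). A quick sketch:

```python
import math

def r_prime(s):
    # derivative of r(s) = (cos(s/sqrt(2)), sin(s/sqrt(2)), s/sqrt(2)) with respect to s
    c = 1 / math.sqrt(2)
    return (-c * math.sin(s * c), c * math.cos(s * c), c)

for s in (0.0, 1.0, 2.5, 10.0):
    speed = math.sqrt(sum(x * x for x in r_prime(s)))
    print(round(speed, 12))   # -> 1.0 at every s: parameterized by arc length
```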
Concepts: Curvature and Normal Vector
Consider a car driving along a curvy road. The tighter the curve, the more difficult the driving is. In math we have a number, the
curvature, that describes this "tightness". If the curvature is zero then the curve looks like a line near this point, while if the curvature is a large number, then the curve has a sharp bend. Figure \(\PageIndex{1}\): The image below is part of a curve \(\mathbf{r}(t)\). Red arrows represent unit tangent vectors, \(\mathbf{\hat{T}}\), and blue arrows represent unit normal vectors, \(\mathbf{\hat{N}}\).
Before learning what curvature of a curve is and how to find the value of that curvature, we must first learn about
unit tangent vector. As the name suggests, unit tangent vectors are unit vectors (vectors with length 1) that are tangent to the curve at certain points. Because the tangent line at a given point of a curve is defined as the line that barely touches the curve at that point, we can deduce that tangent lines or vectors have slopes equivalent to the instantaneous slope of the curve at the given point. In other words,
\[ \mathbf {T} = \frac{d \mathbf{r}}{dt}\mathrm{,}\]
which means
\[ \mathbf{\hat{T}} = \frac{\mathbf{T}}{\left | \mathbf{T} \right |}= \frac{d\mathbf{r}/dt}{\left | d\mathbf{r}/dt \right|} .\]
Based on what we learned previously, we know that \(\frac{d\mathbf{r}}{dt} = \mathbf{v} \), where \(\mathbf{v} \) is the velocity at which a point is moving at a given time. Furthermore, the magnitude of the velocity vector is the speed of the curve, meaning \(\left | \frac{d\mathbf{r}}{dt} \right | = \frac{ds}{dt} \). So the formula for the unit tangent vector can be simplified to:
\[\mathbf{\hat{T}} = \frac{\mathrm{velocity}}{\mathrm{speed}} = \frac{d\mathbf{r}/dt}{ds/dt} .\]
And now, let's think about the unit tangent vector when the curve is expressed in terms of arc length, that is, \(\mathbf{r}(s)\) instead of \(\mathbf{r}(t)\). This means:
\[\mathbf{T} = \frac{d\mathbf{r}}{ds}\]
\[\text{and }\mathbf{\hat{T}} = \frac{d\mathbf{r}/ds}{ds/ds} = \frac{d\mathbf{r}}{ds} .\]
With this information, we will be learning what curvature really is and how we can calculate the curvature, denoted as \(k\) (also commonly written \(\kappa\)).
Curvature of a Curve
Curvature is a measure of how much the curve deviates from a straight line. In other words, the curvature of a curve at a point is a measure of how quickly the direction of the curve is changing there; it is the magnitude of the second derivative of the curve at the given point (let's assume that the curve is defined in terms of the arc length \(s\) to make things easier). This means:
\[k= \left | \frac{d^2\mathbf{r}}{ds^2} \right | .\]
Since we know that \(\mathbf{\hat{T}} = d\mathbf{r} / ds\), we can formulate an equation for \(\kappa\) in terms of \(\mathbf{\hat{T}}\):
\[\kappa = \left | \frac{d\mathbf{\hat{T}}}{ds} \right | .\]
Nevertheless, most curves are written as parametric equations in terms of some dummy variable, most commonly \(t\). So let's assume that the curve is given as \(\mathbf{r}(t)\). In that case, we must formulate another equation to find the curvature without taking derivatives with respect to \(s\).
First, we know that
\[ \kappa = \left | \frac{d\mathbf{\hat{T}}}{ds} \right | .\]
Using the Chain Rule, we get
\[ \kappa = \left | \frac {d\mathbf{\hat{T}}}{dt} \cdot \frac{dt}{ds} \right | = \frac{1}{\left | ds/dt \right |} \left |\frac{d\mathbf{\hat{T}}}{dt} \right | ,\]
therefore
\[\kappa = \frac{1}{\left | \mathbf{v} \right |} \left | \frac{d\mathbf{\hat{T}}}{dt} \right |. \]
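This last formula can be sanity-checked numerically by approximating the derivatives with central differences; a circle of radius 3 (our own test case) should give curvature \(1/3\) at every point:

```python
import numpy as np

def curvature(r, t, h=1e-5):
    """kappa = |dT-hat/dt| / |v|, with derivatives taken by central differences."""
    def T_hat(u):
        v = (np.asarray(r(u + h)) - np.asarray(r(u - h))) / (2 * h)  # velocity
        return v / np.linalg.norm(v)                                 # unit tangent
    dT = (T_hat(t + h) - T_hat(t - h)) / (2 * h)                     # dT-hat/dt
    v = (np.asarray(r(t + h)) - np.asarray(r(t - h))) / (2 * h)
    return np.linalg.norm(dT) / np.linalg.norm(v)

# Sanity check on a circle of radius 3: kappa should be 1/3 at any parameter value.
circle = lambda t: (3 * np.cos(t), 3 * np.sin(t))
kappa = curvature(circle, t=0.7)
```

The finite-difference step introduces a small error, but the result agrees with \(1/3\) to several decimal places.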
Definition of Curvature (repeat)
More formally, if \(\textbf{T}(t)\) is the
unit tangent vector function, then the curvature, \(k\), is defined as the rate at which the unit tangent vector changes with respect to arc length.
\[ k = ||\dfrac{d}{ds} (\textbf{T}(t)) || = ||\textbf{r}''(s)||\]
As stated previously, this is not a practical definition, since parameterizing by arc length is typically impossible. Instead we use the chain rule to get
\[ \left|\left|\dfrac{d}{ds} (\textbf{T}(t)) \right|\right| = \left|\left|\textbf{T}'(t) \dfrac{dt}{ds}\right|\right| = \dfrac{||\textbf{T}'(t)||}{|ds/dt|} = \dfrac{ ||\textbf{T}'(t)||}{ ||\textbf{r}'(t)||}. \]
This formula is more practical to use, but still cumbersome. \(\textbf{T}'(t)\) is typically a mess. Instead we can borrow from the formula for the normal vector to get the curvature
\[ K(t) = \dfrac{ ||\textbf{r}'(t) \times \textbf{r}''(t)||}{||\textbf{r}'(t)||^3}. \]
Normal Vector of a Curve
A unit normal vector of a curve is, by definition, perpendicular to the curve at a given point. This means a normal vector of a curve at a given point is perpendicular to the tangent vector at the same point. Furthermore, a normal vector points towards the center of curvature, and the derivative of the unit tangent vector also points towards the center of curvature. In summary, a normal vector of a curve is given by the derivative of the unit tangent vector:
\[\mathbf{N} = \frac{d\mathbf{\hat{T}}}{ds}\quad\text{or}\quad \frac{d\mathbf{\hat{T}}}{dt}\]
To find the unit normal vector, we simply divide the normal vector by its magnitude:
\[\mathbf{\hat{N}} = \frac{d\mathbf{\hat{T}}/ds}{\left | d\mathbf{\hat{T}}/ds\right |}\quad\text{or}\quad \frac{d\mathbf{\hat{T}}/dt}{\left | d\mathbf{\hat{T}}/dt \right |} .\]
Notice that \( \left | d\mathbf{\hat{T}}/ds\right | \) can be replaced with \( \kappa \), so that:
\[\mathbf{\hat{N}} = \frac{1}{\kappa} \frac{d\mathbf{\hat{T}}}{ds} \quad\text{or}\quad \frac{d\mathbf{\hat{T}}/dt}{\left | d\mathbf{\hat{T}}/dt \right |} .\]
Example \(\PageIndex{3}\)
Find the curvature at \(t=\frac{\pi}{2}\) if
\[ r(t) = \cos \,t\, \hat{\textbf{i}} - \frac{1}{t} \hat{\textbf{j}} + \sin\, t\, \hat{\textbf{k}} .\]
Solution
We take derivatives \[ \textbf{r}'(t) = -\sin\, t\, \hat{\textbf{i}} + \frac{1}{t^2}\, \hat{\textbf{j}} + \cos\, t \, \hat{\textbf{k}} \]
\[ \textbf{r}''(t) = -\cos\, t \,\hat{\textbf{i}} - \frac{2}{t^3}\, \hat{\textbf{j}} - \sin\, t\, \hat{\textbf{k}} . \]
Plugging in \(t=\frac{\pi}{2}\) gives
\[ \textbf{r}' \left(\frac{\pi}{2} \right) = -\hat{\textbf{i}} + \dfrac{4}{\pi^2} \,\hat{\textbf{j}} \]
\[ \textbf{r}''\left(\frac{\pi}{2}\right) = -\dfrac{16}{\pi^3}\, \hat{\textbf{j}} - \hat{\textbf{k}} .\]
Now take the cross product to get
\[ \textbf{r}'(\pi/2) \times \textbf{r}''(\pi/2) = -\dfrac{4}{\pi^2} \, \hat{\textbf{i}} -\hat{\textbf{j}} + \dfrac{16}{\pi^3} \, \hat{\textbf{k}} \]
Finally, we plug this information into the curvature formula to get
\[ \dfrac{\sqrt{\dfrac{16}{\pi^4}+1+\dfrac{256}{\pi^6}}}{\left(\sqrt{1+\dfrac{16}{\pi^4}}\right)^3} \approx 0.952 . \]
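As a quick numerical cross-check of this example, using the cross-product formula for curvature:

```python
import numpy as np

t = np.pi / 2
# Derivatives of r(t) = (cos t, -1/t, sin t), evaluated at t = pi/2:
rp  = np.array([-np.sin(t),  1 / t**2,  np.cos(t)])   # r'(t)
rpp = np.array([-np.cos(t), -2 / t**3, -np.sin(t)])   # r''(t)

K = np.linalg.norm(np.cross(rp, rpp)) / np.linalg.norm(rp) ** 3
# K comes out to about 0.952, matching the hand computation above.
```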
Curvature of a Plane Curve
If a curve resides only in the xy-plane and is defined by the function \(y = f(t)\) then there is an easier formula for the curvature. We can parameterize the curve by
\[ \textbf{r}(t) = t \, \hat{\textbf{i}} + f(t)\, \hat{\textbf{j}} .\]
We have
\[ \textbf{r}'(t) = \hat{\textbf{i}} + f '(t) \, \hat{\textbf{j}} \]
\[ \textbf{r}''(t) = f ''(t) \, \hat{\textbf{j}} .\]
Their cross product is just
\[\textbf{r}'(t) \times \textbf{r}''(t) = f''(t)\, \hat{\textbf{k}} \]
which has magnitude
\[ ||\textbf{r}'(t) \times \textbf{r}''(t)|| = |f''(t)| . \]
The curvature formula gives
Definition: Curvature of Plane Curve
\[ K(t) = \dfrac{|f''(t)|}{ \left[1+\left(f'(t) \right)^2 \right]^{3/2}}. \]
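The plane-curve formula is one line of code. A small sketch (the parabola \(f(t)=t^2\) is our own example, not from the text):

```python
def plane_curvature(fp, fpp, t):
    """K(t) = |f''(t)| / (1 + f'(t)^2)^(3/2) for a plane curve y = f(t)."""
    return abs(fpp(t)) / (1 + fp(t) ** 2) ** 1.5

# For f(t) = t**2: f' = 2t and f'' = 2, so at t = 0
# the curvature is 2 / (1 + 0)^(3/2) = 2.
K0 = plane_curvature(lambda t: 2 * t, lambda t: 2.0, 0.0)
```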
Example \(\PageIndex{4}\)
Find the curvature for the curve \( y = \sin x \).
Solution
We have
\[ f '(x) = \cos \, x \] \[ f ''(x) = -\sin \, x .\]
Plugging into the curvature formula gives \[ K(x) = \dfrac{|-\sin x|}{[1+\cos^2 x]^{3/2}} .\]
The Osculating Circle
In first year calculus, we saw how to approximate a curve with a line, parabola, etc. Instead we can find the best fitting
circle at the point on the curve. If \(P\) is a point on the curve, then the best fitting circle will have the same curvature as the curve and will pass through the point \(P\). We will see that the curvature of a circle is the constant \(1/r\), where \(r\) is the radius of the circle. The center of the osculating circle will be on the line containing the normal vector to the curve. In particular, the center can be found by adding
\[ \vec{OP} + \frac{1}{K}\,\textbf{N} . \]
Exercise \(\PageIndex{2}\)
Find the equation of osculating circle to \(y = x^2\) at \(x = -1\).
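One way to sanity-check an answer to this exercise numerically. This is a sketch under the convention (our assumption, consistent with \(f''>0\)) that the unit normal \((-f',1)/\sqrt{1+f'^2}\) points toward the concave side:

```python
import numpy as np

# y = x**2 at x0 = -1: f'(-1) = -2, f''(-1) = 2
x0, fp, fpp = -1.0, -2.0, 2.0
K = abs(fpp) / (1 + fp**2) ** 1.5          # curvature from the plane-curve formula
R = 1 / K                                  # radius of the osculating circle

# Unit normal pointing toward the concave side: (-f', 1)/sqrt(1 + f'^2)
n = np.array([-fp, 1.0]) / np.hypot(fp, 1.0)
center = np.array([x0, x0**2]) + R * n     # P + (1/K) * N-hat
# center works out to (4, 3.5) with radius 5*sqrt(5)/2.
```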
The Normal Component of Acceleration Revisited
How is the normal component of acceleration related to the curvature? If you remember, the normal component of the acceleration tells us how fast the particle is changing direction. If a curve has a sharp bend (high curvature), then the directional change will be faster. We now show that there is a definite relationship between the normal component of acceleration and curvature.
\[ \textbf{a}(t) = a_{\textbf{T}}\textbf{T}(t) + a_{\textbf{N}}\textbf{N}(t) \]
We have
\[ \textbf{a}(t) = \textbf{r}''(t) = \dfrac{d}{dt} (\textbf{r}'(t)) = \dfrac{d}{dt} \left(||\textbf{r}'(t)||\,\textbf{T}(t)\right) = \left(\dfrac{d}{dt} ||\textbf{r}'(t)||\right)\textbf{T}(t) + ||\textbf{r}'(t)||\, \textbf{T}'(t) \]
\[ = s''(t)\textbf{T}(t) + s'\textbf{T}'(t) = s''(t)\textbf{T}(t) + s'||\textbf{T}'(t)||\textbf{N}(t) = s''(t)\textbf{T}(t) + ks'^2 \textbf{N}(t) .\]
So the tangential component of the acceleration is \(s''(t)\) and the normal component is \(k(t)s'^2(t)\).
Exercise \(\PageIndex{3}\)
Find the tangential and normal components of \( \textbf{r}(t) = t\, \hat{\textbf{i}}- 2t\, \hat{\textbf{j}} + t^2 \,\hat{\textbf{k}} \).
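A numerical sketch for this exercise, using the equivalent formulas \(a_T=\frac{\mathbf{r}'\cdot\mathbf{r}''}{\|\mathbf{r}'\|}\) and \(a_N=\frac{\|\mathbf{r}'\times\mathbf{r}''\|}{\|\mathbf{r}'\|}\) (these match \(s''\) and \(k s'^2\) above):

```python
import numpy as np

def tangential_normal(rp, rpp):
    """a_T = (r'.r'')/|r'| and a_N = |r' x r''|/|r'| for given r'(t), r''(t)."""
    rp, rpp = np.asarray(rp, float), np.asarray(rpp, float)
    speed = np.linalg.norm(rp)
    return np.dot(rp, rpp) / speed, np.linalg.norm(np.cross(rp, rpp)) / speed

# r(t) = (t, -2t, t^2):  r'(t) = (1, -2, 2t),  r''(t) = (0, 0, 2)
t = 1.0
aT, aN = tangential_normal([1.0, -2.0, 2 * t], [0.0, 0.0, 2.0])
# At t = 1: aT = 4/3 and aN = 2*sqrt(5)/3.
```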
Contributors Larry Green (Lake Tahoe Community College) Joseph Sanghun Lee (UCD)
Integrated by Justin Marshall.
|
EDIT based on comments below: I add the mathematical formulation of my problem below. I am trying to solve an equation of the form $$\partial_t f(x,y,t)= (\partial^2_x +\partial^2_y) f(x,y,t) \equiv G(x,y,t).$$ Discretizing this equation, we have $$f^{k+1}_{i,j}= f^k_{i,j} + \Delta t\, G^k_{i,j},$$ where $i,j$ refer to the discretized spatial coordinates $x,y$ and $k$ corresponds to the iteration step.
However, here $\Delta t$ is a fixed step size; I want to use a line search to find an optimal step size. Defining $\Delta t \equiv \alpha_k$, I want to find $\alpha_k$ such that $f^{k+1}_{i,j} < f^{k}_{i,j}-c\alpha_k G^\top G$, which is a backtracking Armijo line search. So the equation I am trying to solve is $$ f^{k+1}_{i,j}= f^k_{i,j}+\alpha_k G^k_{i,j} .$$ Below is a backtracking line search algorithm to find $\alpha_k$, but I realize it is not being computed correctly.
I updated my algorithm based on the comment below; however, it still seems my step size at each iteration, $\alpha_k$, is not being updated properly. When I print it out, it just prints the initial value I inputted for it. Is this algorithm not updating $\alpha_k$ correctly? I thought the point of the backtracking line search was to find an optimal value $\alpha_k$ such that I get to the minimum. How can I fix this? Thanks!
I am trying to code the backtracking-Armijo line search algorithm on page 10 here https://people.maths.ox.ac.uk/hauser/hauser_lecture2.pdf.
Below is a sample code for a backtracking line search algorithm. I can check that the algorithm is not correct, but I am not sure where I am going wrong. A few errors I've realized are possibly the if-statement condition and updating alphak at the very end; lastly, I'm not sure what else is wrong (I am unsure how to fix these problems). I tried to follow the algorithm in the book, but it is not too detailed. It is very clear there is a problem with the algorithm, but I am not sure what.
Note, the search direction I choose given by Pk is in the direction of the negative gradient, which I call -g. Assume below that g(i,j) and fk(i,j) are given at the first iteration, and are 2D arrays since they depend on spatial positions i,j.
```fortran
integer, parameter :: nx=10, ny=10, k=10
real, dimension(-nx:nx,-ny:ny) :: fk, fk1, g, gt, Pk
integer :: i, j, m
real :: alphak, c, rho    ! step size at iteration k

c = 0.0001
rho = 0.5

do m = 1, k
    alphak = 1.0
    Pk(i,j) = -g(i,j)                    ! search direction = -gradient
    fk1(i,j) = fk(i,j) + alphak*Pk(i,j)
    gt(i,j) = g(j,i)                     ! transpose of g
    if (fk1(i,j) > fk(i,j) - c*alphak*gt*g) then
        alphak = rho*alphak
        do j = -ny+1, ny-1
            do i = -nx+1, nx-1
                fk1(i,j) = fk1(i,j) - alphak*g(i,j)
            end do
        end do
    end if
    print*, "print out alphak for m=", m
    print "(//(5(5x,e22.14)))", alphak
end do
```
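For comparison, here is a minimal Python sketch of the backtracking Armijo loop from the linked notes, applied to a toy quadratic objective of our own (not the poster's heat-equation setup). Note that the shrink step `alpha = rho*alpha` has to sit inside a loop that re-tests the Armijo condition; a single `if` test, as in the Fortran above, can shrink at most once per outer iteration:

```python
import numpy as np

def backtracking_armijo(f, grad, x, c=1e-4, rho=0.5, alpha0=1.0, max_iter=50):
    """Shrink alpha until the Armijo condition f(x + a*p) <= f(x) + c*a*(g.p) holds,
    where p = -grad(x) is the steepest-descent direction."""
    g = grad(x)
    p = -g
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * p) <= f(x) + c * alpha * np.dot(g, p):
            break
        alpha *= rho          # keep shrinking until sufficient decrease
    return alpha

# Toy quadratic: f(x) = |x|^2, grad f = 2x.
x = np.array([3.0, -4.0])
alpha = backtracking_armijo(lambda x: np.dot(x, x), lambda x: 2 * x, x)
# alpha = 1.0 fails the test, so one shrink gives alpha = 0.5 (which lands at the minimum).
```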
|
I'm reading a book on AdS/CFT by Ammon and Erdmenger and chapter 3 covers supersymmetry. This isn't my first look at SUSY but it's my first in depth look to really try to understand it, and when they talk about constructing a Lagrangian for ##\mathcal{N}=1## chiral superfields they write the most general form,
$$\mathcal{L} = \underbrace{K(\Phi^k,\Phi^{k\dagger})_{|\theta^2\bar{\theta}^2}}_{\text{D-term}} + \underbrace{\left( W(\Phi^k)_{|\theta^2} + W^{\dagger}(\Phi^{k\dagger})_{|\bar{\theta}^2}\right)}_{\text{F-terms}}$$ Initially this rustled my Jimmies because they had just spent the preceding section having me jump through hoops to deal with all the other component fields then decided only to use these 3, but then they address this directly: "In the Lagrangian, only the D-term ... and the F-terms enter". Unfortunately, this is as detailed as the explanation gets and I was hoping someone could please explain why this is the case and why the remaining 6 component fields don't show up? Thanks in advance
|
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
09 Jun 2012, 02:51
macjas wrote:
How many of the integers that satisfy the inequality (x+2)(x+3) / (x-2) >= 0 are less than 5?
A. 1 B. 2 C. 3 D. 4 E. 5
\(\frac{(x+2)(x+3)}{x-2}\geq{0}\) --> the roots are -3, -2, and 2 (equate the expressions to zero to get the roots and list them in ascending order), this gives us 4 ranges: \(x<-3\), \(-3\leq{x}\leq{-2}\), \(-2<x<2\) and \(x>2\) (notice that we have \(\geq\) sign, so, we should include -3 and -2 in the ranges but not 2, since if \(x=2\) then the denominator becomes zero and we cannot divide by zero).
Now, test some extreme value: for example, if \(x\) is a very large number then all three terms will be positive, which gives a positive result for the whole expression, so when \(x>2\) the expression is positive. Now the trick: as in the 4th range the expression is positive, in the 3rd it'll be negative, in the 2nd it'll be positive again, and finally in the 1st it'll be negative: - + - +. So, the ranges where the expression is positive are: \(-3\leq{x}\leq{-2}\) (2nd range) and \(x>2\) (4th range).
\(-3\leq{x}\leq{-2}\) and \(x>2\) means that only 4 integers that are less than 5 satisfy given inequality: -3, -2, 3, and 4.
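Bunuel's ranges can also be confirmed by brute force (a quick sketch; for x ≤ -4 the numerator is positive and the denominator negative, so a small window of integers suffices):

```python
from fractions import Fraction

def satisfies(x):
    # (x+2)(x+3)/(x-2) >= 0, undefined at x = 2
    return x != 2 and Fraction((x + 2) * (x + 3), x - 2) >= 0

solutions = [x for x in range(-10, 5) if satisfies(x)]
# solutions == [-3, -2, 3, 4], i.e. 4 integers: answer D.
```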
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
23 Feb 2013, 21:08
Can someone please explain why we take the denominator (x-2) as one of the roots of this inequality? I thought when you set the equation to = 0 and bring the denominator to the right side it becomes 0. For example (x^2+ 5x-6)/(x^2- 4x+3)=0 we would only consider the solutions of the numerator NOT the denominator.
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
30 Mar 2013, 06:36
How many of the integers that satisfy the inequality (x+2)(x+3)/(x-2)>=0 are less than 5? A. 1 B. 2 C. 3 D. 4 E. 5
We can analyze the numerator \(\geq 0\): \((x+2)(x+3)=0\) gives \(x+2=0, x=-2\) and \(x+3=0, x=-3\). Since we have a "\(\geq\)", we take the external values \(x\geq{-2}\) and \(x\leq{-3}\). Then we analyze the denominator \(>0\) (it can't be \(=0\)): \(x-2>0, x>2\).
The number line is split by the points (-3), (-2), and (+2) into four intervals:
For the denominator (x-2): negative | negative | negative | positive
For the numerator (x+2)(x+3): positive | negative | positive | positive
You sum up the signs of the values and obtain: negative | positive | negative | positive
We are looking for \(\geq 0\) values, so we keep the positive intervals and discard the negative ones: \(-3\leq{x}\leq{-2}\) (here we also have the \(=\)) and \(x>2\) (no \(=\) here). The values less than 5 are: -3, -2, 3, 4.
Is it clear?
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
01 Apr 2013, 16:06
Bunuel wrote:
rakeshd347 wrote:
How many of the integers that satisfy the inequality (x+2)(x+3)/(x-2)>=0 are less than 5? A. 1 B. 2 C. 3 D. 4 E. 5
I am not really good with inequalities to be honest. I have solved this question and found the answer but It took me 4minutes. Is there any short approach please.
Merging similar topics. Please refer to the solutions above.
I still don't understand how -2 and -3 are solutions. Don't they make the numerator = to 0? I kind of understand the theory, but i'm having trouble reconciling the number picking strategy with the theory.
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
Updated on: 09 Jun 2013, 08:00
mp2469 wrote:
I still don't understand how -2 and -3 are solutions. Don't they make the numerator = to 0?
We are given (x+2)(x+3)/(x-2)>=0
Now we cannot cross-multiply by (x-2) as we don't know its sign. All we know from the problem is that x cannot be equal to 2, because that would make the expression undefined.
Now, since we know that \((x-2)^2\) is a positive quantity, we can safely multiply it on both sides; thus we get (x-2)(x+2)(x+3)>=0. Because there is an equality sign in the given inequality, we can say that x=-2 and x=-3 are two valid solutions, for which the expression assumes the value of zero. x can't be equal to 2, as stated before.
I guess inputting numbers [-4, -5 etc] will make the inequality true but when solving practice questions, instinctively, I am missing this range. Is this something I can get good at only by practice? any tips?
To solve \((x+2)(x+3) \geq{0}\), we can use an old method. Think of it this way: \((x+2)(x+3) = 0\) has solutions x=-2 and x=-3; now I use an old trick: if the sign of \(x^2\) and the operator are "the same", i.e. (+,>) or (-,<), we take the external values: \(x\leq{-3}\) and \(x\geq{-2}\). In the other two cases, (+,<) and (-,>), we take the internal values. If the sign were \(\leq\) (that is, \((x+2)(x+3) \leq{0}\)), the solution would be \(-3\leq{x}\leq{-2}\).
Let me know if it's clear now.
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
09 Jun 2013, 07:16
samara15000 wrote:
How many of the integers that satisfy the inequality (x+2)(x+3)/(x-2) >=0 are less than 5?
a. 1 b.2 c.3 d.4 e.5
The answer is (D) indeed. There are only 4 values for which the inequality will hold considering x < 5.
Consider \((x+2)(x+3)/(x-2) \geq 0\) for integer values in [0, 5): we can clearly see that x = 2 is not an acceptable value (it makes the denominator zero), and x = 0, 1 also do not hold. Only 3 and 4 are acceptable values. Now considering the negative range, (-infinity, 0), we can see that -3 and -2 are the values for which the expression equals 0. For all other values in the range, the expression is negative. Hence 4 values, with the solution set {-3, -2, 3, 4}.
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
25 Aug 2013, 11:05
Disclaimer: The below is not going to be helpful for your GMAT
Strictly speaking when x takes the value of 2, the value of the expression leads to infinity. I know GMAT is way too scared of infinity but the question asks if the expression leads to equal to or greater than zero, and since numerator is positive, the infinity is in the positive direction which indeed meets the inequality criteria.
Re: How many of the integers that satisfy the inequality (x+2)(x[#permalink]
25 Aug 2013, 11:08
nave81 wrote:
<apologies for the pedantry>
\(\frac{(x+2)(x+3)}{x-2}\geq{0}\) holds true for \(-3\leq{x}\leq{-2}\) and \(x>2\). Thus four integers that are less than 5 satisfy the given inequality: -3, -2, 3, and 4. Notice that 2 is not among these four integers.
|
Let $k$ be the length of any edge of a regular tetrahedron. Show that the angle between any edge and a face not containing the edge is $\arccos(\frac{1}{\sqrt3})$.
Let the regular tetrahedron be $OABC$. Let $O$ be the origin and let the position vectors of $A,B,C$ be $\vec{a},\vec{b},\vec{c}$.
Let us find the angle between the face $OAB$ and the edge $OC$. The angle between a plane and a line is found via the angle between the normal to the plane and the line. The plane $OAB$ is spanned by the vectors $\vec{a}$ and $\vec{b}$, so its normal is given by $\vec{a}\times\vec{b}$, and the edge $OC$ is $\vec{c}$. Let $\theta$ be the angle between the face $OAB$ and $OC$. The angle between the normal to the face $OAB$ and $OC$ is then $\frac{\pi}{2}-\theta$, so $$\cos\left(\frac{\pi}{2}-\theta\right)=\frac{(\vec{a}\times\vec{b})\cdot\vec{c}}{|\vec{a}\times\vec{b}||\vec{c}|},$$ $$\sin\theta=\frac{(\vec{a}\times\vec{b})\cdot\vec{c}}{|\vec{a}||\vec{b}||\vec{c}|\sin\frac{\pi}{3}}.$$
I am stuck here and could not solve further. Please help me. Thanks.
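Not a proof, but a numeric sanity check of the target value: placing a unit-edge regular tetrahedron with face OAB in the xy-plane (coordinates chosen by us for illustration), the angle between edge OC and that face comes out to \(\arccos(1/\sqrt 3)\approx 54.74^\circ\):

```python
import numpy as np

# Coordinates of a regular tetrahedron OABC with edge length 1:
O = np.array([0.0, 0.0, 0.0])
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.5, np.sqrt(3) / 2, 0.0])
C = np.array([0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)])

# Face OAB lies in the xy-plane, so its unit normal is e_z; the angle between
# edge OC and the face is pi/2 minus the angle between OC and that normal.
n = np.array([0.0, 0.0, 1.0])
theta = np.pi / 2 - np.arccos(np.dot(C - O, n) / np.linalg.norm(C - O))
# theta equals arccos(1/sqrt(3)) ~ 0.9553 rad ~ 54.74 degrees.
```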
|
Suppose we are given an integer $n$. Define \begin{align*} \psi \left( n \right) = \left| \left\{ a \in \left(\mathbb{Z}/ n\mathbb{Z}\right)^\times \mid a^{n-1} \neq 1\right\} \right| . \end{align*} Show: if $\psi \left( n \right) \geq 1$, it holds that $\psi \left( n \right) \geq \frac{1}{2} \phi \left( n \right)$, where $\phi \left( n \right)$ is the totient function.
I do not really have an idea how to solve this. I would be happy to get some hints.
At the moment I know: $\phi \left( n \right) = |\left(\mathbb{Z} / n\mathbb{Z}\right)^\times|$, and I believe there is a way to reduce the problem to using Euler's Totient Theorem, but I am stuck at the first assumption in the theorem being $\gcd\left(a , n \right) = 1$.
Edit: I think we can inspect two cases: $n$ being prime and $n$ being not prime.
Let $n$ be a prime number then it holds that $\phi \left( n \right) = n-1$. With Euler's Totient Theorem we get $a^{\phi\left( n \right)} = a^{n-1} \equiv 1 \, mod \, (n)$ meaning $\psi\left( n \right) = 0$ and the assumption is not fullfilled.
Now we do know that $n$ has to be not prime, meaning $n$ is composite; any suggestions how to proceed?
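A quick empirical check (not a proof, just our own sketch) that the claimed bound holds for small n; `psi` counts units with a^(n-1) not congruent to 1 mod n:

```python
from math import gcd

def psi(n):
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    return sum(1 for a in units if pow(a, n - 1, n) != 1)

def phi(n):
    return sum(1 for a in range(1, n) if gcd(a, n) == 1)

# Whenever psi(n) >= 1, check that psi(n) >= phi(n)/2:
ok = all(psi(n) == 0 or 2 * psi(n) >= phi(n) for n in range(2, 200))
```

The structural reason to expect this: the set of units with a^(n-1) = 1 is a subgroup of the unit group, so if it is proper it has at most half the elements.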
|
So we try by applying the definition and see how it goes.
So we deal with 2 cases, either $\sup S=\infty$ or $\sup S=M<\infty$.
If it is the second case, this means that for all $x\in S$, $x\leq M$, so naturally $ax\leq aM$ (for $a\geq 0$). By the definition of the least upper bound, we have $\sup aS\leq aM=a\sup S$.
For the reverse, let $\epsilon>0$ be given. If $a>0$, then $\frac{\epsilon}{a}>0$. By the definition of sup, there exists $x\in S$ such that $M-\frac{\epsilon}{a}\leq x\leq M$. Hence $aM-\epsilon\leq ax\leq aM$. Since this $\epsilon$ is arbitrary, $\sup aS\geq aM$, and therefore $\sup aS=aM=a\sup S$.
If $a=0$, then $aS=\{0\}$ and $\sup aS=0=a\sup S$, so there is nothing to prove.
And as for the infinity case, it should be quite evident, by choosing a sequence in $S$ such that it goes to infinity.
|
Does math require an $\infty$? This assumes that all of math is somehow governed by a single set of universally agreed upon rules, such as whether infinity is a necessary concept or not. This is not the case.
I might claim that math does not require anything, even though a mathematician requires many things (such as coffee and paper to turn into theorems, etc etc). But this is a sharp (like a sharp inequality) concept, and I don't want to run conversation off a valuable road.
So instead I will claim the following: there are branches of math that rely on infinity, and other branches that do not. But most branches rely on infinity. So in this sense, I think that most of the mathematics that is practiced each day relies on a system of logic and a set of axioms that include infinities in various ways.
Perhaps a different question that is easier to answer is - "Why does math have the concept of infinity?" To this, I have a really quick answer - because $\infty$ is useful. It lets you take more limits, allows more general rules to be set down, and allows greater play for fields like Topology and Analysis.
And by the way - in your question you distinguish between $\lim _{x \to \infty} f(x)$ and $\lim _{y \to 0} f(\frac{1}{y})$. Just because we hide behind a thin curtain, i.e. pretending that $\lim_{y \to 0} \frac{1}{y}$ is just another name for infinity, does not mean that we are actually avoiding a conceptual infinity.
So to conclude, I say that math does not require $\infty$. If somehow, no one imagined how big things get 'over there' or considered questions like How many functions are there from the integers to such and such set, math would still go on. But it's useful, and there's little reason to ignore its existence.
|
I was told some days ago that the probability that two randomly picked numbers are relatively prime to each other is $6/\pi^2$. And it is well known that the value of the Riemann zeta function at 2 is $\pi^2/6$. So I guess there is a correspondence between them. Maybe the probability that $n$ randomly picked numbers are relatively prime to each other (there are two cases here: (1) these $n$ numbers are pairwise relatively prime to each other; (2) the common divisor of all of these $n$ numbers is 1) equals $1/\zeta (n)$? And I think when we consider $n=1$, the probability that one randomly picked number is prime is 0, and meanwhile $\zeta(1)=\infty$. So in this case, in this sense, the proposition still holds. But I think I must be daydreaming... Please let me know if you find the formula above is wrong for some $n$.
Pick two random numbers less than $n$, then
$\lfloor n/2\rfloor^2$ pairs are both divisible by 2. $\lfloor n/3\rfloor^2$ pairs are both divisible by 3. $\lfloor n/5\rfloor^2$ pairs are both divisible by 5. ...
The number of relatively prime pairs less than or equal to $n$ is:
$$ n^2 - \sum\lfloor \frac np\rfloor^2 + \sum\lfloor \frac n{pq}\rfloor^2- \sum\lfloor \frac n{pqr}\rfloor^2 + ... $$
Sums are taken over the distinct primes $p,q,r,\ldots$ less than $n$. Letting $\mu(k)$ be the Möbius function, this is
$$\sum\mu(k)\lfloor n/k\rfloor^2$$
The probability is the limit, as $n$ goes to infinity, of this count divided by $n^2$, or
$$ \sum\frac{\mu(k)}{k^2} .$$
Now, the Dirichlet series that generates the Möbius function is the (multiplicative) inverse of the Riemann zeta function $$ \sum_{n=1}^\infty \frac{\mu(n)}{n^s}=\frac{1}{\zeta(s)}. $$ So we get $\frac{1}{\zeta(2)}=\frac6{\pi^2}$.
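A direct count over pairs up to N (our own sketch) illustrates the limit:

```python
from math import gcd, pi

N = 500
coprime = sum(1 for a in range(1, N + 1)
                for b in range(1, N + 1) if gcd(a, b) == 1)
ratio = coprime / N**2
# ratio is about 0.608, close to 6/pi^2 ~ 0.6079; the error shrinks as N grows.
```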
MathWorld says:
This result is related to the fact that the greatest common divisor of $m$ and $n$, $(m,n)=k$, can be interpreted as the number of lattice points in the plane which lie on the straight line connecting the vectors $(0,0)$ and $(m,n)$ (excluding $(m,n)$ itself). In fact, $6/\pi^2$ is the fractional number of lattice points visible from the origin (Castellanos 1988, pp. 155-156).
|
I am wondering how to find the eigenvalues of a sparse matrix in a given interval [a, b] by an iterative method. To my personal understanding, it is more obvious to use a Krylov subspace method to find the extreme eigenvalues rather than the interior ones.
The following strategy is called
shift and invert and depends upon two important facts: (1) $A-\tau I$ has the same spectrum as $A$, but shifted down by $\tau$, i.e., if $\lambda \in \sigma(A)$ then $\lambda-\tau \in \sigma(A-\tau I)$; (2) assuming that $A$ is invertible, the matrix $A^{-1}$ has a spectrum which is equal to the element-wise inverse of the spectrum of $A$, i.e., if $\lambda \in \sigma(A)$ then $1/\lambda \in \sigma(A^{-1})$.
Since $A-\frac{a+b}{2}I$ will have shifted the portion of $A$'s spectrum which is close to $\frac{a+b}{2}$ near the origin, the eigenvalues of $A$ near $\frac{a+b}{2}$ will be very large in $(A-\frac{a+b}{2}I)^{-1}$, and so it is reasonable to expect a Krylov algorithm to pick them up.
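With SciPy this strategy is available directly: passing `sigma` to `scipy.sparse.linalg.eigsh` turns on shift-invert mode and returns the eigenvalues of A nearest the shift. A sketch on a diagonal test matrix with known spectrum (our own example):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse symmetric test matrix with known spectrum {1, 2, ..., 100}.
A = sp.diags(np.arange(1.0, 101.0)).tocsc()

a, b = 48.2, 52.8            # target interval [a, b]
sigma = 0.5 * (a + b)        # shift to the middle of the interval

# Passing sigma enables shift-invert: eigenvalues of A closest to sigma
# become the largest-magnitude eigenvalues of (A - sigma*I)^{-1},
# which the Lanczos iteration finds quickly.
vals, vecs = eigsh(A, k=4, sigma=sigma)
inside = np.sort(vals[(vals >= a) & (vals <= b)])
# inside is [49., 50., 51., 52.] for this matrix
```

In practice the cost is dominated by factorizing A - sigma*I, so this pays off when the interval contains only a few eigenvalues.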
|
In the following picture, why is Ksp not written simply in terms of S2-? Why is f included with one species only? Please explain the last part.
What does f represent, and why is it used in molar solubility?
Let's assume we have the following equilibria: $$\ce{MX \rightleftharpoons M+ + X-}$$ $$\ce{HX \rightleftharpoons H+ + X-}$$ The expressions for their respective equilibrium constants are the following: $$\mathrm{K_{sp} = [M^+][X^-]}$$ $$\mathrm{K_a = \frac{[H^+][X^-]}{[HX]}}$$ We also know the following (let S be solubility): $$\mathrm{S = [M^+] = [X^-]_{tot}}$$ Now we just need one more equation before we can start solving for S, which is a mass balance. Since all the $\mathrm{X^-}$ that is produced either reacts with water to form $\mathrm{HX}$ or remains as $\mathrm{X^-}$, we get the following equation: $$\mathrm{[X^-]_{tot} = [X^-] + [HX]}$$ Now let's put $\mathrm{[X^-]}$ and $\mathrm{[HX]}$ in terms of $\mathrm{[M^+]}$: $$\mathrm{[X^-] = \frac{K_{sp}}{[M^+]}}$$ $$\mathrm{[HX] = \frac{[H^+][X^-]}{K_a} = \frac{K_{sp}[H^+]}{K_a[M^+]}}$$ Now let's plug those values into our mass balance and replace $\mathrm{[X^-]_{tot}}$ and $\mathrm{[M^+]}$ with S: $$\mathrm{[X^-]_{tot} = \frac{K_{sp}}{[M^+]} + \frac{K_{sp}[H^+]}{K_a[M^+]}}$$ $$\mathrm{S = \frac{K_{sp}}{S} + \frac{K_{sp}[H^+]}{K_a\times S}}$$ $$\mathrm{S^2 = K_{sp} + \frac{K_{sp}[H^+]}{K_a}}$$ $$\mathrm{S = \sqrt{K_{sp} + \frac{K_{sp}[H^+]}{K_a}}}$$ This gives you the exact same expression as the one on the sheet, but hopefully this is much clearer for you.
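Numerically, the final formula is a one-liner. The constants below are hypothetical, chosen only to illustrate how protonation of $\mathrm{X^-}$ raises the solubility (my sketch, not from the sheet):

```python
from math import sqrt

def solubility(Ksp, Ka, H):
    """Molar solubility S = sqrt(Ksp * (1 + [H+]/Ka)), from the mass balance above."""
    return sqrt(Ksp * (1 + H / Ka))

# Hypothetical constants, for illustration only (not a specific salt)
Ksp, Ka, H = 1e-10, 1e-5, 1e-4
print(solubility(Ksp, Ka, H))   # ~3.3e-5, versus sqrt(Ksp) = 1e-5 if X- did not protonate
```

At low pH (large $\mathrm{[H^+]}$) the correction term dominates and the salt is markedly more soluble, which is the qualitative point of the f factor on the sheet.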
|
I'm learning to integrate and I'd like to hear what your favorite integration tricks are.
I can't contribute much to this thread, but I like the fact that:
$$\int_{-a}^{a}{f(x)}dx=0 \space\text{if}\space f(x) \space\text{is odd}$$
There is even a separate thread on stack-exchange on integration by parts.
The discrete analog of integration by parts, i.e. summation by parts, is also an important tool, especially in analytic number theory when we want to find asymptotics. For instance, my recent post here uses this to get an estimate of $\displaystyle \sum_{n=N+1}^{\infty} \dfrac1{n^s}$.
One I really like is this one :
If $f$ is a continuous function for which $f(a+b-t)=f(t)$ then $$\int_a^b t\cdot f(t) \mathrm{d}t=\frac{a+b}{2}\int_a^bf(t) \mathrm{d}t$$
Example :
$$\begin{align} \int_0^{\pi} \frac{x\sin(x)}{1+\cos^2 (x)}\mathrm{d}x &=\frac{\pi}{2}\int_0^{\pi} \frac{\sin(x)}{1+\cos^2 (x)}\mathrm{d}x\\ &=\frac{\pi}{2} \left[-\arctan(\cos(x))\right]_0^{\pi} \\ &=\frac{\pi^2}{4}\end{align}$$
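The worked example can be checked numerically; here is a quick sketch with mpmath (my addition, not part of the original post):

```python
import mpmath as mp

# Integrand from the example above
f = lambda x: x * mp.sin(x) / (1 + mp.cos(x) ** 2)

val = mp.quad(f, [0, mp.pi])
print(val)              # pi^2/4 ~ 2.4674011
print(mp.pi ** 2 / 4)
```

The agreement to machine precision confirms the symmetry identity $\int_a^b t f(t)\,dt = \frac{a+b}{2}\int_a^b f(t)\,dt$ for this integrand.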
|
Solve for $-\pi <\theta < \pi$: $$\tan\theta=\cos\theta$$
I can't get to the correct solution using the identities: $$\tan\theta=\frac{\sin\theta}{\cos\theta} \quad\text{and}\quad \sin^2\theta+\cos^2\theta=1$$
The answer I'm getting is
$$\sin\theta=-\frac12\pm\frac12 \sqrt{5}$$
giving: $0.62$ and $-1.62$.
The answers in the back of the book are $0.67$ and $2.48$.
Any hints much appreciated. Thanks!
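For what it's worth (my sketch, not part of the thread), the numbers $0.62$ and $-1.62$ are values of $\sin\theta$, not $\theta$; the book's answers come from one more step:

```latex
\sin\theta=\cos^2\theta=1-\sin^2\theta
\;\Longrightarrow\; \sin^2\theta+\sin\theta-1=0
\;\Longrightarrow\; \sin\theta=\frac{-1\pm\sqrt{5}}{2}.
```

The root $\frac{-1-\sqrt{5}}{2}\approx-1.62$ lies outside $[-1,1]$ and is discarded, so $\sin\theta\approx0.62$, giving $\theta=\arcsin(0.62)\approx0.67$ or $\theta=\pi-0.67\approx2.48$ on $(-\pi,\pi)$, matching the book.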
|
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd-year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken into account
The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high-entropy, high-energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear up confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directly to the close voter, not the question in meta. When you mention my original post, do you think that it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
|
Recall the theorem that says that if a first order differential equation satisfies continuity conditions, then the initial value problem will have a unique solution in some neighborhood of the initial value. More precisely,
Theorem: A Result For Nonlinear First Order Differential Equations
Let
\[ y'=f(x,y) \;\;\; y(x_0)=y_0 \]
be a differential equation such that both partial derivatives
\[f_x \;\;\; \text{and} \;\;\; f_y \]
are continuous in some rectangle containing \((x_0,y_0)\).
Then there is a (possibly smaller) rectangle containing \((x_0,y_0)\) on which the initial value problem has a unique solution \(\phi(x)\).
Although a rigorous proof of this theorem is outside the scope of the class, we will show how to construct a solution to the initial value problem. First, by translating the origin, we can change the initial value problem to one with
\[y(0) = 0.\]
Next we can change the question as follows. \(\phi(x)\) is a solution to the initial value problem if and only if
\[\phi'(x) = f(x,\phi(x)) \;\;\; \text{and} \;\;\; \phi(0) = 0.\]
Now integrate both sides to get
\[ \phi (t) = \int _0^t f(s,\phi (s)) \, ds .\]
Notice that if such a function exists, then it satisfies \(\phi(0) = 0\).
The equation above is called the
integral equation associated with the differential equation.
It is easier to prove that the integral equation has a unique solution than it is to show that the original differential equation has one. The strategy to find a solution is the following. First guess at a solution and call the first guess \(\phi_0(t)\). Then plug this guess into the integral to get a new function. If the new function is the same as the original guess, then we are done. Otherwise call the new function \(\phi_1(t)\). Next plug \(\phi_1(t)\) into the integral to either get the same function or a new function \(\phi_2(t)\). Continue this process to get a sequence of functions \(\phi_n(t)\). Finally take the limit as \(n\) approaches infinity. This limit will be the solution to the integral equation. In symbols, define recursively
\[\phi_0(t) = 0\]
\[ \phi_{n+1} (t) = \int _0^t f(s,\phi_n (s)) \, ds .\]
Example \(\PageIndex{1}\)
Consider the differential equation
\[y' = y + 2, \;\;\; y(0) = 0.\]
We write the corresponding integral equation
\[ y(t) = \int_0^t \left(y(s)+2 \right) \, ds .\]
We choose
\[ \phi_0(t) = 0\]
and calculate
\[ \phi_1(t) = \int_0^t \left(0+2 \right) \, ds = 2t\]
and
\[ \phi_2(t) = \int_0^t \left(2s+2 \right) \, ds = t^2 + 2t\]
and
\[ \phi_3(t) = \int_0^t \left(s^2+2s+2 \right) \, ds = \frac{t^3}{3}+t^2 + 2t\]
and
\[ \phi_4(t) = \int_0^t \left(\frac{s^3}{3}+s^2+2s+2 \right) \, ds = \frac{t^4}{3\cdot 4}+ \frac{t^3}{3}+t^2 + 2t.\]
Dividing by 2 and adding 1 gives
\[\frac{\phi_4(t)}{2} + 1 = \frac{t^4}{4\cdot 3\cdot 2}+\frac{t^3}{3\cdot 2}+\frac{t^2}{2}+\frac{t}{1}+\frac{1}{1}.\]
The pattern indicates that
\[\frac{\phi_n(t)}{2} + 1 = \sum_{k=0}^{n}\frac{t^k}{k!}\]
so, letting \(n\) approach infinity,
\[\frac{\phi(t)}{2} + 1 = e^t.\]
Solving, we get
\[\phi(t) = 2\left(e^t - 1\right).\]
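The iterates above can be generated mechanically; here is a small SymPy sketch (my addition, not part of the page):

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard(f, n):
    """Return the n-th Picard iterate for y' = f(t, y), y(0) = 0."""
    phi = sp.Integer(0)
    for _ in range(n):
        # phi_{k+1}(t) = integral from 0 to t of f(s, phi_k(s)) ds
        phi = sp.integrate(f(s, phi.subs(t, s)), (s, 0, t))
    return sp.expand(phi)

# y' = y + 2, y(0) = 0, as in the example above
f = lambda x, y: y + 2
print(picard(f, 4))   # t**4/12 + t**3/3 + t**2 + 2*t
```

The fourth iterate matches the hand computation, and further iterates approach $2(e^t-1)$ term by term.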
This may seem like a proof of the uniqueness and existence theorem, but we need to be sure of several details for a true proof.
Does \(\phi_n(t)\) exist for all \(n\)? Although we know that \(f(t,y)\) is continuous near the initial value, the integral could possibly produce a value that lies outside this rectangle of continuity. This is why we may have to pass to a smaller rectangle. Does the sequence \(\phi_n(t)\) converge? The limit may not exist. If the sequence \(\phi_n(t)\) does converge, is the limit continuous? Is \(\phi(t)\) the only solution to the integral equation?
Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
|
Let's say we have two waves moving along a string. One of them is represented by the function: $$f_1(t)=\sin(\omega t)$$
The other one is represented by a function:
$$f_2(t)=-\sin(\omega (\tau-t))$$
Both of these functions are defined over one period.
At time $t=\tau/2$, the waves are overlapping perfectly and destructively interfere. This means we have: $$y(\tau/2)=f_1(\tau/2)+f_2(\tau/2)=\sin\left(\omega\left(\frac \tau 2\right)\right)-\sin\left(\omega\left(\frac \tau 2\right)\right)=0$$
This is all fine and good; it shows that the waves have destructively interfered. But there's a weird part to this: not only does $y(t)=0$ at that instant, but $y^{(n)}(t)$ must also be zero (as $f_1(t)+f_2(t)=0$). However, we know full well that, because they're waves on a string, if we advance to time $\tau$ the waves will have passed each other and be heading in opposite directions.
How can this be? There's an instant at $\tau/2$ where the wave is not only flat, but there is no velocity, acceleration, jerk, snap, or anything that would cause a change in motion.
Where did the energy go? How does the wave start moving again?
|
Hello, how do I find dy/dx of $y = 10^{x^2-1}$? Please work it out for me to see rather than just give an answer. Thanks! And also, since I'm still trying to figure out LaTeX, will you show me how to display this equation using this coding?
Use the derivative for $\displaystyle y=a^x$ and use the chain rule for the index.
Originally Posted by Archie Meade Use the derivative for $\displaystyle y=a^x$ and use the chain rule for the index. So would it be 2x ln10 10^[(x^2)-1]? Please excuse my lack of coding.
The $\displaystyle ln10$ will be a denominator, otherwise yes.
Originally Posted by Archie Meade The $\displaystyle ln10$ will be a denominator, otherwise yes. his derivative is correct. $\displaystyle \frac{d}{dx} (a^u) = a^u \cdot \ln{a} \cdot \frac{du}{dx}$ $\displaystyle \int a^u \, du = \frac{a^u}{\ln{a}} + C$
yes!
i accidentally gave the version from the integral.
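For completeness, the agreed-upon derivative can be verified symbolically (my sketch, not from the thread):

```python
import sympy as sp

x = sp.symbols('x')
d = sp.diff(10 ** (x**2 - 1), x)

# d/dx a^u = a^u * ln(a) * du/dx, so here the result is 2x * ln(10) * 10^(x^2 - 1)
print(sp.simplify(d - 2 * x * sp.log(10) * 10 ** (x**2 - 1)) == 0)  # True
```

So the poster's answer is right, and the $\ln 10$ in the denominator belongs to the antiderivative, as the last reply notes.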
|
I apologize if this question is trivial, but I am new to physics and am struggling with some of the basic concepts.
Working in $\mathbb{R}^2$ with standard coordinates $(x,y)$, suppose we have a particle of mass $m$ moving on a curve $(x(t),y(t))\in\mathbb{R}^2$. Its tangent vector (velocity vector) is $$x^\prime(t)\frac{\partial}{\partial x}+y^\prime(t)\frac{\partial}{\partial y} \ \ \ \ \ \ \ \ \ \ (1)$$This particle's kinetic energy is $\frac{1}{2}m\left((x^\prime(t))^2+(y^\prime(t))^2\right)$. Also, suppose we have some conservative force $F$ so that $F=-\left(\frac{\partial U}{\partial x},\frac{\partial U}{\partial y}\right)$ where $U$ is some smooth potential $U:\mathbb{R}^2\to\mathbb{R}$.
Anything I've read says the kinetic energy in polar coordinates is $$\frac{1}{2}m\left((\dot r)^2+(r\dot\theta)^2\right)$$ and the forces in the $r$ and $\theta$ directions are $$F_r=-\frac{\partial U}{\partial r} \ \ \ \ \text{ and } \ \ \ \ F_\theta=-\frac{1}{r}\frac{\partial U}{\partial \theta}$$
For the second point, I don't understand what it means to say force in the $r$ or $\theta$-direction. It's clear the force in the $x$-direction is just the first component of $F$, but is the force in the $r$-direction just the first component of $F$ in polar coordinates? I don't see how that really makes sense. Also, computing $\frac{\partial U}{\partial x}=\frac{\partial U}{\partial r}\frac{\partial r}{\partial x}+\frac{\partial U}{\partial \theta}\frac{\partial \theta}{\partial x}$ (and similarly $\frac{\partial U}{\partial y}$) I can see where these terms pop up, but don't get how to put the concepts together.
For the first point, I don't understand how they are getting these equations, and especially how they get them so fast! If you use the change of variables formula (i.e. $\frac{\partial}{\partial x}=\frac{\partial r}{\partial x}\frac{\partial}{\partial r}+\frac{\partial\theta}{\partial x}\frac{\partial}{\partial \theta}$ and so on) on equation $(1)$, compute $x^\prime , y^\prime$, and collect like terms, you get that the velocity vector above is $\dot r\frac{\partial}{\partial r}+\dot\theta\frac{\partial}{\partial\theta}$. This takes some work, but in this form it makes sense, to me, to say that the kinetic energy in polar coordinates is $\frac{1}{2}m\left((\dot r)^2+(r\dot\theta)^2\right)$. But any book I've read just computes this extremely quickly by saying $$x^\prime(t)=\dot r\cos\theta - r\dot\theta\sin\theta \ \ \ \ \text{ and } \ \ \ \ y^\prime(t)=\dot r\sin\theta + r\dot\theta\cos\theta$$I see sometimes $\hat r=(\cos\theta,\sin\theta)$ and $\hat\theta=(-\sin\theta,\cos\theta)$ but how can you have a "basis" that changes at every point?
Any help would be greatly appreciated!!
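A symbolic check of the polar kinetic-energy formula (my sketch in SymPy, not part of the question):

```python
import sympy as sp

t, m = sp.symbols('t m')
r = sp.Function('r')(t)
th = sp.Function('theta')(t)

# Cartesian coordinates in terms of the polar ones
x = r * sp.cos(th)
y = r * sp.sin(th)

# Kinetic energy (1/2) m (x'^2 + y'^2), and the claimed polar form
KE = sp.Rational(1, 2) * m * (sp.diff(x, t) ** 2 + sp.diff(y, t) ** 2)
target = sp.Rational(1, 2) * m * (sp.diff(r, t) ** 2 + r ** 2 * sp.diff(th, t) ** 2)

print(sp.simplify(KE - target) == 0)  # True: KE = (1/2) m (r'^2 + r^2 theta'^2)
```

The cross terms cancel because $\hat r$ and $\hat\theta$ are orthogonal at every point, which is exactly the "basis that changes at every point" the books use.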
|
There's a bit more to the story. Mathematica treats variables as complex by default, and I for one have had trouble figuring out how
Limit figures out how to treat variables such as
c in this case.
Some analysis
First, let's examine
a0 (
= a in OP) with the assumption that
c is real:
a0 = (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2 /
(4 (h^2 + 1/4 (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2));
aR = FullSimplify[a0, h > 0 && c \[Element] Reals]
(* -> (h + c^2 h + Sqrt[4 + (1 + c^2)^2 h^2])/(2 Sqrt[ 4 + (1 + c^2)^2 h^2]) *)
$$\frac{\sqrt{\left(c^2+1\right)^2 h^2+4}+c^2 h+h}{2 \sqrt{\left(c^2+1\right)^2 h^2+4}}$$
It simplifies a little if we split the fraction up:
1/2 + Factor[aR - 1/2]
(* -> 1/2 + ((1 + c^2) h)/(2 Sqrt[4 + (1 + c^2)^2 h^2]) *)
$$\frac{\left(c^2+1\right) h}{2 \sqrt{\left(c^2+1\right)^2 h^2+4}}+\frac{1}{2}$$
And let's look at it if
c is an imaginary number
k I:
aI = FullSimplify[a0 /. c -> k I, h > 0 && k \[Element] Reals]
(* -> (h - h k^2 + Sqrt[4 + h^2 (-1 + k^2)^2])/(2 Sqrt[4 + h^2 (-1 + k^2)^2]) *)
$$\frac{\sqrt{h^2 \left(k^2-1\right)^2+4}-h k^2+h}{2 \sqrt{h^2 \left(k^2-1\right)^2+4}}$$
Again it simplifies if we split the fraction up:
1/2 + Factor[aI - 1/2]
(* -> 1/2 - (h (-1 + k) (1 + k))/(2 Sqrt[4 + h^2 (-1 + k^2)^2]) *)
$$\frac{1}{2}-\frac{h (k-1) (k+1)}{2 \sqrt{h^2 \left(k^2-1\right)^2+4}}$$
We see if
c is
+/-I (if
k is
+/-1), the fraction is
0, so the limit is
1/2. If
k > 1 or
k < -1, then the limit of the fraction is
-1/2, so the limit of
a0 is
0. And if
-1 < k < 1, then the limit of the fraction is
+1/2, so the limit of
a0 is
1. If
c is real, then up above one can see the limit of the fraction is
1/2, so the limit of
a0 is
1.
Here are some examples:
Table[Print["c = ", c, " --> ", HoldForm[Limit[a0, h -> Infinity]],
" = ", Limit[a0, h -> Infinity]], {c, {1, I/2, I, 2 I}}];
c = 1 --> Limit[a0, h -> Infinity] = 1
c = I/2 --> Limit[a0, h -> Infinity] = 1
c = I --> Limit[a0, h -> Infinity] = 1/2
c = 2 I --> Limit[a0, h -> Infinity] = 0
How to find the limit sought
One should probably use assumptions:
a1 = Simplify[a];
a2 = FullSimplify[a];
Block[{$Assumptions = c \[Element] Reals}, Limit[a0, h -> Infinity]]
Block[{$Assumptions = c \[Element] Reals}, Limit[a1, h -> Infinity]]
Block[{$Assumptions = c \[Element] Reals}, Limit[a2, h -> Infinity]]
(* -> 1, 1, 1 *)
Then the answers are correct.
EDIT - Addendum
Prompted to investigate further by various comments, I have something to add, including what the actual limit for a generic complex $c$ is (which one might expect Mathematica could return as the correct answer). This goes beyond the current version of the OP's question, which stipulates that $c$ be real, but I hope it sheds some light on the issues
Limit has in dealing with this function.
What should the limit be?
Working it out by hand, I get the limit for a complex $c$ to be
$$\begin{cases} 1 & {\rm Im}(c)^2-{\rm Re}(c)^2<1 \\ {1}/{2} & {\rm Im}(c)^2-{\rm Re}(c)^2=1 \\ 0 & {\rm Im}(c)^2-{\rm Re}(c)^2>1\end{cases}$$
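As an independent numerical cross-check of this piecewise limit (my addition, done outside Mathematica), evaluating the expression at a large value of h in plain complex floating point reproduces the three cases:

```python
import cmath

def a0(c, h):
    """The expression from the question, evaluated numerically for complex c."""
    s = cmath.sqrt(4 * h**2 + (h**2 + c**2 * h**2) ** 2)
    num = (h**2 + c**2 * h**2 + s) ** 2
    den = 4 * (h**2 + 0.25 * (h**2 + c**2 * h**2 + s) ** 2)
    return num / den

# Im(c)^2 - Re(c)^2 is <1, <1, =1, >1, >1, <1 for these test values
for c, lim in [(1, 1.0), (0.5j, 1.0), (1j, 0.5), (2j, 0.0), (1 + 2j, 0.0), (2 + 2j, 1.0)]:
    print(c, abs(a0(c, 1e4) - lim) < 1e-3)  # True for each
```

This matches the green test spheres in the plot below.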
Here is a plot of the real part of
a0 vs.
c when
h = 100. The imaginary parts are nearly zero, and the real part is nearly equal to the limit. You can see the discontinuity forming along the red cliff. The limits at six test values are shown by the green spheres. We will use these
test values for
c again later.
testvalues = {1, I/2, I, 2 I, 1 + 2 I, 2 + 2 I};
Show[ParametricPlot3D[
Evaluate[{Re[c], Im[c], Re[a0]} /. {c -> r E^(I q), h -> 100.}],
{r, 0, 3}, {q, 0, 2 π}, PlotPoints -> {50, 150}, MaxRecursion -> 5,
Exclusions -> None, MeshFunctions -> {#2^2 - #1^2 &, #4 &, #5 &},
MeshStyle -> {Red, GrayLevel[0.5], GrayLevel[0.5]},
Mesh -> {{-1, 0, 1, 2}; {0.97, 1.03}, 11, {0, π/2, π, (3 π)/2}},
MeshShading -> {{{Yellow, Red, Lighter[Purple, 0.5]}}},
BoundaryStyle -> GrayLevel[0.5], Lighting -> "Neutral", PlotRange -> Full,
AxesLabel -> {Re[c], Im[c], ""}],
Graphics3D[{Darker@Green,
Sphere[{Re[#], Im[#], Limit[a0 /. {c -> #}, h -> Infinity]}, 0.07] & /@
testvalues}]
]
There are spikes (poles) where the denominator of
a0 is zero:
Solve[((4 (h^2 + 1/4 (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2)) /. h -> 100) == 0, c] // N
(* -> {{c -> -0.0099995 + 1.00005 I}, {c -> 0.0099995 - 1.00005 I}, {c -> -0.0099995 - 1.00005 I}, {c -> 0.0099995 + 1.00005 I}} *)
How could Mathematica get different answers?
First, it is not a problem with
Simplify (
a1 above) or
FullSimplify (
a2 above). All give the same correct limits with definite values for
c:
Limit[a0 /. {c -> #}, h -> Infinity] & /@ testvalues
Limit[a1 /. {c -> #}, h -> Infinity] & /@ testvalues
Limit[a2 /. {c -> #}, h -> Infinity] & /@ testvalues
(* -> {1, 1, 1/2, 0, 0, 1}
{1, 1, 1/2, 0, 0, 1}
{1, 1, 1/2, 0, 0, 1} *)
I came up with ways to arrive algebraically at the different answers for the limits of
a0 and
a2 for a generic
c. I doubt this is how
Mathematica arrives at its answers, but it shows how the mistake might arise.
To find a limit at infinity, we can replace
h by
1/k and let
k approach 0. In calculus we learn to simplify the expression until we can plug in
k = 0.
b0 = Simplify[a0 /. h -> 1/k, k > 0]
b2 = Simplify[a2 /. h -> 1/k, k > 0]
(* -> (1 + c^2 + Sqrt[1 + 2 c^2 + c^4 + 4 k^2])^2/(2 (1 + c^4 + 4 k^2 + Sqrt[1 + 2 c^2 + c^4 + 4 k^2] + c^2 (2 + Sqrt[1 + 2 c^2 + c^4 + 4 k^2]))) *)
(* -> (2 k^2)/(1 + c^4 + 4 k^2 - Sqrt[1 + 2 c^2 + c^4 + 4 k^2] - c^2 (-2 + Sqrt[1 + 2 c^2 + c^4 + 4 k^2])) *)
b0 // TeXForm // Print
b2 // TeXForm // Print
$$\frac{\left(c^2+\sqrt{c^4+2 c^2+4 k^2+1}+1\right)^2}{2 \left(c^4+c^2 \left(\sqrt{c^4+2 c^2+4 k^2+1}+2\right)+\sqrt{c^4+2 c^2+4 k^2+1}+4 k^2+1\right)}$$
$$\frac{2 k^2}{c^4-c^2 \left(\sqrt{c^4+2 c^2+4 k^2+1}-2\right)-\sqrt{c^4+2 c^2+4 k^2+1}+4 k^2+1}$$
So far, so good:
b0 == b2 // Simplify
(* -> True *)
For
b0, setting
k to zero yields 1 the same as
Limit[a0, h -> Infinity]
b0 /. k -> 0 // TeXForm
$$\frac{\left(c^2+\sqrt{c^4+2 c^2+1}+1\right)^2}{2
\left(c^4+\left(\sqrt{c^4+2 c^2+1}+2\right)
c^2+\sqrt{c^4+2 c^2+1}+1\right)}$$
which we can see by multiplying out the numerator and simplifying:
b0 /. k -> 0 // Expand // Together
(* -> 1 *)
Surprisingly, Simplify does something else and returns
b0 /. k -> 0 // Simplify // TeXForm
$$\frac{c^2+\sqrt{\left(c^2+1\right)^2}+1}{2 c^2+2}$$
whose value depends on
c:
% /. {c -> #} & /@ testvalues
(* -> {1, 1, Indeterminate, 0, 0, 1} *)
Turning to
b2, setting
k to zero yields 0, the same as
Limit[a2, h -> Infinity]
b2 /. k -> 0
(* -> 0 *)
As with
a0 and
a2, the limits of
b0 and
b2 at definite values of
c are correct:
Limit[b0 /. {c -> #}, k -> 0] & /@ testvalues
Limit[b2 /. {c -> #}, k -> 0] & /@ testvalues
(* -> {1, 1, 1/2, 0, 0, 1}
{1, 1, 1/2, 0, 0, 1} *)
A related example
Change
1+c^2 to
1-c^2:
aIm = a0 /. c -> I c
(* -> (h^2 - c^2 h^2 + Sqrt[4 h^2 + (h^2 - c^2 h^2)^2])^2/(4 (h^2 + 1/4 (h^2 - c^2 h^2 + Sqrt[4 h^2 + (h^2 - c^2 h^2)^2])^2)) *)
$$\frac{\left(-c^2 h^2+\sqrt{\left(h^2-c^2
h^2\right)^2+4 h^2}+h^2\right)^2}{4
\left(\frac{1}{4} \left(-c^2
h^2+\sqrt{\left(h^2-c^2 h^2\right)^2+4
h^2}+h^2\right)^2+h^2\right)}$$
Now I'm primarily interested in real
c and the limit at
h -> Infinity. It is similar to the previous one above:
$$\begin{cases} 1 & \left| c\right| <1 \\ {1}/{2} & \left| c\right| =1 \\ 0 & \left| c\right| >1\end{cases}$$
Mathematica gives one of the answers by default
Limit[aIm, h -> Infinity]
(* -> 1 *)
At definite values of
c, we get the right answers
Limit[aIm /. {c -> #}, h -> Infinity] & /@ {0, 1, 2}
(* -> {1, 1/2, 0} *)
With various assumptions, one cannot get an answer that depends on
c.
Limit[aIm, h -> Infinity, Assumptions -> c \[Element] Reals]
(* -> 1 *)
The correct answer appears in some cases:
Limit[aIm, h -> Infinity, Assumptions -> c > 1 || c < -1]
Limit[aIm, h -> Infinity, Assumptions -> c == 1]
Limit[aIm, h -> Infinity, Assumptions -> c == -1]
Limit[aIm, h -> Infinity, Assumptions -> -1 < c < 1]
(* -> 0, 1/2, 1/2, 1 *)
These are wrong; the limit is 1/2 in both cases:
Limit[aIm, h -> Infinity, Assumptions -> c \[Element] Reals && Abs[c] == 1]
Limit[aIm, h -> Infinity, Assumptions -> c == -1 || c == 1]
(* -> 0, 0 *)
Conclusion
Whatever definite value I set
c to, I always got the correct limit, but
Limit does not handle
c as a symbol correctly. With some careless algebraic operations, I was able to derive the answers
Limit gave for original function and the
FullSimplified version. Even with
Assumptions that yield a unique limit,
Limit does not always give the right answer. I would be surprised if it were mathematically impossible to develop an algorithm for finding a large class of limits including the OP's, given what can be done for integrals and by standard calculus techniques. I've never found a lot of use for
Limit, but perhaps it is because of its limitations.
|
A Data-Driven Approach to LaTeX Autocomplete
Posted by Nate on August 24, 2017
Autocomplete
Nearly anywhere you go on the web today you will find some sort of autocomplete feature. Start typing into Google and you get immediate suggestions related to your query. If you code in other languages, many IDEs have built-in, or configurable, autocomplete tools that complete variables, functions, methods, etc., with varying degrees of success. At the best, these tools speed up the process of programming by actively bug checking, suggesting variables of the correct type, methods of the correct object/class, and can sometimes offer documentation when opening functions. These tools allow the user to focus their time on more valuable concepts and ideas, rather than syntax.
When learning a new programming language, especially if it is your first language, it can be difficult to remember syntax, and \(\mathrm{\LaTeX}\) is no exception. “Was it \product, \times, \mult, \prod or something else to produce \(\prod\)?” These questions are often asked by new users and can range from being a rather minor nuisance to a painstakingly slow and annoying time sink. By the way, it is \prod ☺.
To help combat the aforementioned issues (and others), Overleaf has included a default list of commands which it will suggest. Just type \ into the editor to see the dropdown list. The list is by no means comprehensive, but it does offer most of the frequently used commands that are needed to build a basic \(\mathrm{\LaTeX}\) document. What can we do to improve?
When suggesting commands, as of now, we simply use a fuzzy search using Fuse.js, which works well in some cases but surely does not account for the popularity of commands. For example, when typing \c, the fuzzy search ranks commands beginning with c first, and hence columnbreak is the first completion. While, yes, according to the algorithm this is a good match, it isn't the best for productivity. Wouldn't it be nice if chapter, cite, caption, and centering were suggested before that?
To make this happen we have begun studying which commands are being used frequently in publicly available \(\mathrm{\LaTeX}\) documents. Fortunately, there are many collections of public \(\mathrm{\LaTeX}\) documents that we can use, such as the arXiv, and also the Overleaf Gallery, which contains just under 8000 .tex documents (we define a document as a single .tex file). For this blog post, we’re going to use the Overleaf gallery, which contains a mixture of research articles, presentations, and CVs. It also contains a large number of \(\mathrm{\LaTeX}\) examples and \(\mathrm{\LaTeX}\) templates, which are not necessarily the most representative documents, but it provides a good starting point, and as we will see, a useful one.
The reason we want chapter, cite, caption, and centering to be ranked ahead of columnbreak is because they are used more. So to find the commands that should top the suggestion list, one might think to simply look at raw counts of commands. Doing this for our given corpus, we find some odd commands in the top-ten list we didn't quite expect (pgf and pdfglyphtounicode). After a little investigation we found these commands were appearing tens of thousands of times in very few documents and nowhere else. To avoid such extreme cases, we weight commands by the number of documents they appear in (see Methodology for details).
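A document-frequency weighting of this kind can be sketched in a few lines of Python (my toy illustration, not Overleaf's actual pipeline; the regex is deliberately naive about comments and verbatim text):

```python
import re
from collections import Counter

CMD = re.compile(r"\\([A-Za-z]+)")

def command_doc_frequency(docs):
    """For each command, count the number of documents it appears in, so a
    command spammed thousands of times in one file still counts only once."""
    df = Counter()
    for text in docs:
        df.update(set(CMD.findall(text)))
    return df

docs = [
    r"\documentclass{article}\begin{document}\section{A}\textbf{hi}\end{document}",
    r"\documentclass{book}\begin{document}\chapter{B}\textbf{a}\textbf{b}\end{document}",
]
df = command_doc_frequency(docs)
print(df["textbf"], df["chapter"])  # 2 1  (textbf in both docs, chapter in one)
```

Deduplicating per document is precisely what keeps a pgf-style outlier from dominating the ranking.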
It is not too surprising that we find many of \(\mathrm{\LaTeX}\)’s structural commands in the top-ten list, but perhaps textbf is somewhat surprising. I guess bolding is more fashionable than italicizing. Another feature which might spark some interest is the relatively high frequency of chapter when excluding no appearances. This is because of the context in which this command appears. Often, when writing documents with multiple chapters, authors will break these chapters into separate files, each beginning with \chapter{foo}, and have main.tex pull them in via \input{chapters/foo}. This somewhat artificially inflates the frequency of the chapter command (and possibly other commands). We say artificially because really the input files are all part of the same project, and they should be considered together. This, however, has not yet been done in our analysis.
We can produce an analogous bar plot for environments (anything that starts with \begin{…} and ends with \end{…}), where we view an entire environment as a single entity. The following shows mostly what we would expect, with document appearing the most often among all documents, but rather peculiarly, we see that the frequency of the frame environment (from the beamer package) is very large when we have enforced a single appearance. This means that while it's not the most frequently occurring environment, when it is used, it constitutes nearly 40% of the document's environments!
With this data we will rank commands based on their corpus frequency so you spend less time looking for your command, and more time focusing on what’s important. Now you may say:

Hold on, so even once I start my document environment, the next time I open an environment I will be suggested \begin{document} as the number-one completion?

Well, it takes some fine tuning. In particular, one thing we can do is look at the median number of times these commands are used in a document (excluding no appearances). Doing this gives us a better picture of how many times commands are being used in a given document. So if a command is usually used one time per document, then we probably shouldn’t continue suggesting it after that one use (or at least push it down the list). Below you can see the median number of uses of commands in documents in which they appear. Use the dropdown menu to toggle between the top 10 commands and environments!
Dissociating the Data
We have a fair amount of data at this point, and while what we have seen thus far is helpful, we can do better. \(\mathrm{\LaTeX}\) documents should not solely be considered as a stream of input tokens; rather they have logical structure. We would very likely get cleaner, more representative data if we took this into account.
\(\mathrm{\LaTeX}\) Structure
\(\mathrm{\LaTeX}\) documents are built up from smaller pieces, namely commands and environments. Given we have already studied the global use of commands and environments, it is then important to look more closely at how these are used together.
Preamble
Ideally we want to suggest commands based on the context of the cursor's position within your document. An important example of context is the \(\mathrm{\LaTeX}\) document preamble: there, it is highly unlikely that you will need to use commands such as \section{…}, \chapter{…}, or many math commands. Wouldn’t it be nice if we didn’t suggest them?

We can perform a very similar analysis as above to find the commands which occur in the preamble of all the documents (that is, commands that occur before \begin{document}, ignoring documents that contained no document environment).
If you have ever composed a preamble and loaded some packages then it is unlikely that these results will come as a surprise. The long tail on the above plot is attributed to the fact that preambles, while sharing some structure, can vary wildly based on which packages are loaded. It is often the case that commands used in the preamble are dependent upon which packages have already been loaded—we’ll address that point in a minute.
Environments
Just as above we can study which commands are being used most frequently in given environments. In particular, we explore the top 10 as case studies and these can be viewed in the following plot's dropdown menu.
And here we begin to see some real structure emerging from this data. We see much more definitive trends, such as the item command being used extremely heavily in list-like environments, the includegraphics command being used heavily in the figure environment, and so on. This data will allow us to provide context-sensitive autocomplete suggestions based on which document element is currently being edited, providing a much more effective and efficient editing experience.
An important feature to note is the seemingly high frequency of begin and end commands appearing within environments. Naturally, this suggests that documents often have nested environments, which can be common in \(\mathrm{\LaTeX}\) documents, depending on which environments are being used; for example, \begin{table}\begin{tabular}…. If we could understand these nesting patterns we would even be able to provide context-aware environment suggestions! Of course we have a very finite data set, so we can only take this so far.
Packages
For future work, we will begin to explore links between which packages have been loaded and which commands are used most frequently in conjunction with those packages—to suggest commands based on the packages you have loaded.
What's Next?
While getting this data is one thing, implementing it is another. We've already started to improve ShareLaTeX's autocomplete (since we’ve now joined forces): now, along with suggesting commands you have already used in your document, it will suggest the top 100 most frequent commands as indicated in the analysis above!
While we acknowledge this data set is not completely representative, it has given us a great birds-eye view of what .tex documents look like and how people are using the language. In order to obtain data with more predictive power, we are continuing to study the structure and use of \(\mathrm{\LaTeX}\) documents. Along with this, we plan to add more corpora alongside our existing Overleaf Gallery corpus, such as source files from the arXiv and maybe even GitHub.
Methodology
In order to compute the frequencies plotted in the "What can we do to improve" section, let's establish a bit of notation. Let the corpus, or collection of documents, be \(\mathsf{D}\) and the collection of all commands used in \(\mathsf{D}\) be \(\mathsf{C}_\mathsf{D}\). Fun fact: there are roughly 15,000 unique commands used throughout this corpus and over 900,000 total command uses! For each command \(\mathsf{c}\) in \(\mathsf{C}_\mathsf{D}\), we can calculate its local frequency with respect to each document \(\mathsf{d}\in\mathsf{D}\) as the simple ratio
\[f_\mathsf{c,d} = \frac{n_\mathsf{c}}{N_\mathsf{d}}\]
where \(n_\mathsf{c}\) is the number of times the command \(\mathsf{c}\) appears in document \(\mathsf{d}\), and \(N_\mathsf{d}\) is the total number of command uses in the document \(\mathsf{d}\). Note that for many commands \(f_\mathsf{c,d}\) will be 0, since command \(\mathsf{c}\) need not appear in document \(\mathsf{d}\). We can now calculate the global frequency of each command in the given corpus by averaging all local frequencies: \[f_\mathsf{c} = \frac{1}{|\mathsf{D}|} \sum_{\mathsf{d}\in\mathsf{D}} f_{\mathsf{c,d}}\] where \(|\mathsf{D}|\) is the number of documents in the corpus. This method of calculating frequencies weights commands not only by how many uses they have, but also by how many documents we find them in. This gives an effective measure of the permeability of commands through a wide range of documents, and it is what you see plotted above in the lighter green.
With this information alone we can rank commands based on how often they are used. It is also interesting to ask, given the most used commands, how often they are used in the documents that they do appear in. So we define a modified frequency \(\tilde{f}_{\mathsf{c}}\) depending on the set \(\mathsf{D}_\mathsf{c}\), which consists of all documents \(\mathsf{d}\) such that command \(\mathsf{c}\) is found in \(\mathsf{d}\) (that is, \(\mathsf{D}_\mathsf{c} = \{\mathsf{d}\in\mathsf{D}\,|\,\mathsf{c}\in\mathsf{d}\}\)): \[\tilde{f}_{\mathsf{c}} = \frac{1}{|\mathsf{D}_\mathsf{c}|} \sum_{\mathsf{d}\in\mathsf{D}_\mathsf{c}} f_{\mathsf{c,d}}\] This quantity expresses more about how commands are used within the documents that contain them, rather than taking a corpus-wide view. In the plots above, this is represented with the darker shade of green. Note the plots are sorted by their corpus, or global, frequency.
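To make the methodology concrete, here is a small sketch of how the two frequencies could be computed. This is illustrative code written for this article, not Overleaf's actual analysis pipeline; the function names and the toy corpus are invented.

```python
# Toy reproduction of the frequency metrics defined above. A "document"
# is just a list of command tokens; the corpus D is a list of documents.

def local_frequency(command, doc):
    """f_{c,d} = n_c / N_d: a command's share of all command uses in one document."""
    return doc.count(command) / len(doc) if doc else 0.0

def global_frequency(command, corpus):
    """f_c: mean of local frequencies over the whole corpus (zeros included)."""
    return sum(local_frequency(command, d) for d in corpus) / len(corpus)

def conditional_frequency(command, corpus):
    """f~_c: mean local frequency over only the documents containing the command."""
    docs_with = [d for d in corpus if command in d]
    if not docs_with:
        return 0.0
    return sum(local_frequency(command, d) for d in docs_with) / len(docs_with)

corpus = [
    ["documentclass", "begin", "section", "item", "item", "end"],
    ["documentclass", "begin", "end"],
    ["begin", "frac", "frac", "frac", "end"],
]
```

Here `item` appears in only one of the three toy documents, so its global frequency (1/9) is much lower than its conditional frequency (1/3) — exactly the gap between the lighter and darker green bars described above.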
|
Yes, LED dimming can be done with constant current drivers and can even be done with that particular chip. However, you will need additional circuitry to achieve it.
To see what's needed and how, think about how LED PWM control is done professionally:

A constant current driver, set to a specific current value to provide the 100% brightness level for the LED, and operated at 100% duty cycle.
A PWM switch control to modulate the current and provide dimming.

That's it, really. What you have already is only the first half. The TLC5916 is a great chip for what it's doing -- setting up and monitoring a constant current sink for some number of LEDs. But it doesn't include a PWM control. So you need to add a PWM control circuit. With both of those in hand, you are good to go.
Since the TLC5916 is a low-side current sink controller, you'll need a high side PWM switch. You don't say if you are trying to PWM more than one LED. (What you do say, reading carefully, is that you are trying to PWM one of them.) If you intend on modulating more than one, you might consider using a specialized IC that provides a block of 8 source (high side) drivers like the Allegro 2981 and 2982 or the Toshiba TD62783. You can wire the controls over to your microcontroller device (whatever it is) and control up to 8 LEDs that way. Or you can just wire up your own external circuitry, especially if all you want to do is PWM just one LED.
Try adding this schematic to your existing situation and see if it helps you with just one LED (either left or right schematic):
simulate this circuit – Schematic created using CircuitLab
The transistors may be fine as a small-signal variety -- whatever you have lying around. But keep in mind that you really do need to consider all of the various power dissipations involved, including that for your TLC5916.
Some of the resistor values are left out because I don't know enough to help there. But I can provide guidance.
Given that you are using the TLC5916, your high side voltage rail probably isn't higher than \$V_{+}=5\:\textrm{V}\$. However, the TLC5916 outputs can support a maximum rail voltage of \$V_{+}=20\:\textrm{V}\$ so there is quite a range here for actual operation of your LED (or series chain of LEDs.) The TLC5916 gets its work done by regulating current on the low side (at the expense of a small
working voltage there.) So, let's call the LED rail voltage \$V_{+}\$ and the current setting you've designed to be \$I_{set}\$. Your microcontroller output voltage will be \$V_{io}\$.
Then in the left side schematic, we'll operate both \$Q_1\$ and \$Q_2\$ as switches. So \$Q_1\$'s base current needs to be a tenth, or \$I_{B_1}=\frac{I_{set}}{10}\$ (and this sets the collector current of \$Q_2\$.) \$Q_2\$'s base then will need a tenth of that, so \$I_{B_2}=\frac{I_{set}}{100}\$. Therefore, \$R_3\approx\frac{V_{io}-700\:\textrm{mV}}{I_{B_2}}\$ and \$R_2\approx\frac{V_{+}-1\:\textrm{V}-300\:\textrm{mV}}{I_{B_1}}\$. Don't worry about exact values -- you can use nearby standard values. In this left hand circuit, the I/O pin will have to provide \$I_{B_2}\$ or about a hundredth of whatever you are specifying for the LED's 100% current value, \$I_{set}\$.
In the right side schematic, \$R_5\$ sets the current as \$Q_4\$ is being operated as an emitter follower. (The current loading on your I/O pin will be lower than for the left side circuit, though, since \$Q_4\$ isn't operating as a switch and more of its \$\beta\$ becomes available here.) Here, you compute \$R_5\approx 10\cdot\frac{V_{io}-700\:\textrm{mV}}{I_{set}}\$ and pick a nearby standard resistor value. (To those worried about oscillation, it's unlikely here because a microcontroller output typically has \$100\:\Omega\$ of impedance towards the base of \$Q_4\$.)
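As a sanity check on those formulas, here is a small calculator sketch. The 700 mV base-emitter drop and the 1 V + 300 mV drops follow the estimates above; the example supply and I/O voltages (5 V rail, 3.3 V logic, 20 mA LED current) are my assumptions, not values from the question.

```python
# Resistor estimates for the two schematics described above.

def left_circuit_resistors(v_plus, v_io, i_set):
    """Left schematic: Q1 and Q2 driven as switches, base current = Ic / 10."""
    i_b1 = i_set / 10            # Q1 base current (also Q2 collector current)
    i_b2 = i_b1 / 10             # Q2 base current = I_set / 100
    r3 = (v_io - 0.7) / i_b2     # I/O pin -> Q2 base
    r2 = (v_plus - 1.0 - 0.3) / i_b1
    return r2, r3

def right_circuit_r5(v_io, i_set):
    """Right schematic: Q4 as emitter follower, R5 ~ 10 * (V_io - 0.7 V) / I_set."""
    return 10 * (v_io - 0.7) / i_set

r2, r3 = left_circuit_resistors(v_plus=5.0, v_io=3.3, i_set=0.020)
r5 = right_circuit_r5(v_io=3.3, i_set=0.020)
```

For a 20 mA LED this gives R3 near 13 kΩ, R2 near 1.85 kΩ and R5 near 1.3 kΩ; round to nearby standard values, as noted above.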
Using PWM like this won't hurt the TLC5916 IC. (It may signal an error bit, but you can ignore that.) Its output pins are designed to handle loaded and unloaded cases. So it should just work here.
|
Measurement of diffractive photoproduction of vector mesons at large momentum transfer at HERA

Abstract.
Elastic and proton–dissociative photoproduction of \(\rho^0\), \(\phi\) and \(J/\psi\) vector mesons (\(\gamma p\rightarrow Vp\), \(\gamma p\rightarrow VN\), respectively) have been measured in \(e^+p\) interactions at HERA up to \(-t=3\) GeV\(^2\), where
t is the four-momentum transfer squared at the photon–vector–meson vertex. The analysis is based on a data sample in which photoproduction reactions were tagged by detection of the scattered positron in a special-purpose calorimeter. This limits the photon virtuality, \(Q^2\), to values less than 0.01 GeV\(^2\), and selects a \(\gamma p\) average center-of-mass energy of \(\langle W\rangle\) = 94 GeV. Results for the differential cross sections, \(\mbox{d}\sigma/\mbox{d}t\), for \(\rho^0\), \(\phi\) and \(J/\psi\) mesons are presented and compared to the results of recent QCD calculations. Results are also presented for the t-dependence of the pion-pair invariant-mass distribution in the \(\rho^0\) mass region and of the spin-density matrix elements determined from the decay-angle distributions. The Pomeron trajectory has been derived from measurements of the W dependence of the elastic differential cross sections \(\mbox{d}\sigma/\mbox{d}t\) for both \(\rho^0\) and \(\phi\) mesons.

Keywords: Matrix Element, Momentum Transfer, Differential Cross Section, Vector Meson
|
Learning Objectives
Identify a conic in polar form. Graph the polar equations of conics. Define conics in terms of a focus and a directrix.
Most of us are familiar with orbital motion, such as the motion of a planet around the sun or an electron around an atomic nucleus. Within the planetary system, orbits of planets, asteroids, and comets around a larger celestial body are often elliptical. Comets, however, may take on a parabolic or hyperbolic orbit instead. And, in reality, the characteristics of the planets’ orbits may vary over time. Each orbit is tied to the location of the celestial body being orbited and the distance and direction of the planet or other object from that body. As a result, we tend to use polar coordinates to represent these orbits.
In an elliptical orbit, the
periapsis is the point at which the two objects are closest, and the apoapsis is the point at which they are farthest apart. Generally, the velocity of the orbiting body tends to increase as it approaches the periapsis and decrease as it approaches the apoapsis. Some objects reach an escape velocity, which results in an infinite orbit. These bodies exhibit either a parabolic or a hyperbolic orbit about a body; the orbiting body breaks free of the celestial body’s gravitational pull and fires off into space. Each of these orbits can be modeled by a conic section in the polar coordinate system. Identifying a Conic in Polar Form
Any conic may be determined by three characteristics: a single
focus, a fixed line called the directrix, and the ratio of the distances of each to a point on the graph. Consider the parabola \(x=2+y^2\) shown in Figure \(\PageIndex{2}\).
We previously learned how a parabola is defined by the focus (a fixed point) and the directrix (a fixed line). In this section, we will learn how to define any conic in the polar coordinate system in terms of a fixed point, the focus \(P(r,\theta)\) at the pole, and a line, the directrix, which is perpendicular to the polar axis.
If \(F\) is a fixed point, the focus, and \(D\) is a fixed line, the directrix, then we can let \(e\) be a fixed positive number, called the
eccentricity, which we can define as the ratio of the distances from a point on the graph to the focus and the point on the graph to the directrix. Then the set of all points \(P\) such that \(e=\dfrac{PF}{PD}\) is a conic. In other words, we can define a conic as the set of all points \(P\) with the property that the ratio of the distance from \(P\) to \(F\) to the distance from \(P\) to \(D\) is equal to the constant \(e\).
For a conic with eccentricity \(e\),
if \(0≤e<1\), the conic is an ellipse; if \(e=1\), the conic is a parabola; if \(e>1\), the conic is a hyperbola
With this definition, we may now define a conic in terms of the directrix, \(x=\pm p\), the eccentricity \(e\), and the angle \(\theta\). Thus, each conic may be written as a
polar equation, an equation written in terms of \(r\) and \(\theta\).
THE POLAR EQUATION FOR A CONIC
For a conic with a focus at the origin, if the directrix is \(x=\pm p\), where \(p\) is a positive real number, and the eccentricity is a positive real number \(e\), the conic has a polar equation
\[r=\dfrac{ep}{1\pm e \cos \theta}\]
For a conic with a focus at the origin, if the directrix is \(y=\pm p\), where \(p\) is a positive real number, and the eccentricity is a positive real number \(e\), the conic has a polar equation
\[r=\dfrac{ep}{1\pm e \sin \theta}\]
Example \(\PageIndex{1}\): Identifying a Conic Given the Polar Form
For each of the following equations, identify the conic with focus at the origin, the directrix, and the eccentricity.
\(r=\dfrac{6}{3+2 \sin \theta}\) \(r=\dfrac{12}{4+5 \cos \theta}\) \(r=\dfrac{7}{2−2 \sin \theta}\) Solution
For each of the three conics, we will rewrite the equation in standard form. Standard form has a \(1\) as the constant in the denominator. Therefore, in all three parts, the first step will be to multiply the numerator and denominator by the reciprocal of the constant of the original equation, \(\dfrac{1}{c}\), where \(c\) is that constant.
Multiply the numerator and denominator by \(\dfrac{1}{3}\).
\(r=\dfrac{6}{3+2\sin \theta}⋅\dfrac{\left(\dfrac{1}{3}\right)}{\left(\dfrac{1}{3}\right)}=\dfrac{6\left(\dfrac{1}{3}\right)}{3\left(\dfrac{1}{3}\right)+2\left(\dfrac{1}{3}\right)\sin \theta}=\dfrac{2}{1+\dfrac{2}{3} \sin \theta}\)
Because \(\sin \theta\) is in the denominator, the directrix is \(y=p\). Comparing to standard form, note that \(e=\dfrac{2}{3}\). Therefore, from the numerator,
\[\begin{align*} 2&=ep\\ 2&=\dfrac{2}{3}p\\ \left(\dfrac{3}{2}\right)2&=\left(\dfrac{3}{2}\right)\dfrac{2}{3}p\\ 3&=p \end{align*}\]
Since \(e<1\), the conic is an
ellipse. The eccentricity is \(e=\dfrac{2}{3}\) and the directrix is \(y=3\). Multiply the numerator and denominator by \(\dfrac{1}{4}\).
\[\begin{align*} r&=\dfrac{12}{4+5 \cos \theta}\cdot \dfrac{\left(\dfrac{1}{4}\right)}{\left(\dfrac{1}{4}\right)}\\ r&=\dfrac{12\left(\dfrac{1}{4}\right)}{4\left(\dfrac{1}{4}\right)+5\left(\dfrac{1}{4}\right)\cos \theta}\\ r&=\dfrac{3}{1+\dfrac{5}{4} \cos \theta} \end{align*}\]
Because \(\cos \theta\) is in the denominator, the directrix is \(x=p\). Comparing to standard form, \(e=\dfrac{5}{4}\). Therefore, from the numerator,
\[\begin{align*} 3&=ep\\ 3&=\dfrac{5}{4}p\\ \left(\dfrac{4}{5}\right)3&=\left(\dfrac{4}{5}\right)\dfrac{5}{4}p\\ \dfrac{12}{5}&=p \end{align*}\]
Since \(e>1\), the conic is a
hyperbola. The eccentricity is \(e=\dfrac{5}{4}\) and the directrix is \(x=\dfrac{12}{5}=2.4\). Multiply the numerator and denominator by \(\dfrac{1}{2}\).
\[\begin{align*} r&=\dfrac{7}{2-2 \sin \theta}\cdot \dfrac{\left(\dfrac{1}{2}\right)}{\left(\dfrac{1}{2}\right)}\\ r&=\dfrac{7\left(\dfrac{1}{2}\right)}{2\left(\dfrac{1}{2}\right)-2\left(\dfrac{1}{2}\right) \sin \theta}\\ r&=\dfrac{\dfrac{7}{2}}{1-\sin \theta} \end{align*}\]
Because sine is in the denominator, the directrix is \(y=−p\). Comparing to standard form, \(e=1\). Therefore, from the numerator,
\[\begin{align*} \dfrac{7}{2}&=ep\\ \dfrac{7}{2}&=(1)p\\ \dfrac{7}{2}&=p \end{align*}\]
Because \(e=1\), the conic is a parabola. The eccentricity is \(e=1\) and the directrix is \(y=−\dfrac{7}{2}=−3.5\).
Exercise \(\PageIndex{1}\)
Identify the conic with focus at the origin, the directrix, and the eccentricity for \(r=\dfrac{2}{3−\cos \theta}\).
Answer
ellipse; \(e=\dfrac{1}{3}\); \(x=−2\)
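The procedure in Example 1 is mechanical enough to script. The sketch below (illustrative code, using exact rational arithmetic) takes the numerator and the two denominator constants of \(r=\dfrac{num}{const+coeff\cdot trig\,\theta}\) and returns the eccentricity, the directrix distance \(p\), and the conic type. Pass the magnitude of the trigonometric coefficient; the \(\pm\) sign only affects which side of the pole the directrix lies on.

```python
from fractions import Fraction

def identify_conic(num, const, coeff):
    """Classify r = num / (const + coeff*trig(theta)) with focus at the origin.

    Divide through by `const` to reach standard form r = ep / (1 + e*trig(theta)),
    then read off the eccentricity e and the directrix distance p.
    """
    e = Fraction(coeff, const)   # eccentricity
    ep = Fraction(num, const)    # numerator in standard form
    p = ep / e                   # directrix distance
    if e < 1:
        kind = "ellipse"
    elif e == 1:
        kind = "parabola"
    else:
        kind = "hyperbola"
    return e, p, kind
```

Running it on the three equations of Example 1 reproduces \(e=\dfrac{2}{3},\dfrac{5}{4},1\) and \(p=3,\dfrac{12}{5},\dfrac{7}{2}\).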
Graphing the Polar Equations of Conics
When graphing in Cartesian coordinates, each conic section has a unique equation. This is not the case when graphing in polar coordinates. We must use the eccentricity of a conic section to determine which type of curve to graph, and then determine its specific characteristics. The first step is to rewrite the conic in standard form as we have done in the previous example. In other words, we need to rewrite the equation so that the denominator begins with \(1\). This enables us to determine \(e\) and, therefore, the shape of the curve. The next step is to substitute values for \(\theta\) and solve for \(r\) to plot a few key points. Setting \(\theta\) equal to \(0\), \(\dfrac{\pi}{2}\), \(\pi\), and \(\dfrac{3\pi}{2}\) provides the vertices so we can create a rough sketch of the graph.
Example \(\PageIndex{2A}\): Graphing a Parabola in Polar Form
Graph \(r=\dfrac{5}{3+3 \cos \theta}\).
Solution
First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of \(3\), which is \(\dfrac{1}{3}\).
\[\begin{align*} r &= \dfrac{5}{3+3 \cos \theta}=\dfrac{5\left(\dfrac{1}{3}\right)}{3\left(\dfrac{1}{3}\right)+3\left(\dfrac{1}{3}\right)\cos \theta} \\ r &= \dfrac{\dfrac{5}{3}}{1+\cos \theta} \end{align*}\]
Because \(e=1\),we will graph a
parabola with a focus at the origin. The function has a \(\cos \theta\), and there is an addition sign in the denominator, so the directrix is \(x=p\).
\[\begin{align*} \dfrac{5}{3}&=ep\\ \dfrac{5}{3}&=(1)p\\ \dfrac{5}{3}&=p \end{align*}\]
The directrix is \(x=\dfrac{5}{3}\).
Plotting a few key points as in Table \(\PageIndex{1}\) will enable us to see the vertices. See Figure \(\PageIndex{3}\).
A B C D \(\theta\) \(0\) \(\dfrac{\pi}{2}\) \(\pi\) \(\dfrac{3\pi}{2}\) \(r=\dfrac{5}{3+3 \cos \theta}\) \(\dfrac{5}{6}≈0.83\) \(\dfrac{5}{3}≈1.67\) undefined \(\dfrac{5}{3}≈1.67\)
We can check our result with a graphing utility. See Figure \(\PageIndex{4}\).
Example \(\PageIndex{2B}\): Graphing a Hyperbola in Polar Form
Graph \(r=\dfrac{8}{2−3 \sin \theta}\).
Solution
First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of \(2\), which is \(\dfrac{1}{2}\).
\[\begin{align*} r &=\dfrac{8}{2−3\sin \theta}=\dfrac{8\left(\dfrac{1}{2}\right)}{2\left(\dfrac{1}{2}\right)−3\left(\dfrac{1}{2}\right)\sin \theta} \\ r &= \dfrac{4}{1−\dfrac{3}{2} \sin \theta} \end{align*}\]
Because \(e=\dfrac{3}{2}\), \(e>1\), so we will graph a hyperbola with a focus at the origin. The function has a \(\sin \theta\) term and there is a subtraction sign in the denominator, so the directrix is \(y=−p\).
\[\begin{align*} 4&=ep\\ 4&=\left(\dfrac{3}{2}\right)p\\ 4\left(\dfrac{2}{3}\right)&=p\\ \dfrac{8}{3}&=p \end{align*}\]
The directrix is \(y=−\dfrac{8}{3}\).
Plotting a few key points as in Table \(\PageIndex{2}\) will enable us to see the vertices. See Figure \(\PageIndex{5}\).
A B C D \(\theta\) \(0\) \(\dfrac{\pi}{2}\) \(\pi\) \(\dfrac{3\pi}{2}\) \(r=\dfrac{8}{2−3\sin \theta}\) \(4\) \(−8\) \(4\) \(\dfrac{8}{5}=1.6\)
Example \(\PageIndex{2C}\): Graphing an Ellipse in Polar Form
Graph \(r=\dfrac{10}{5−4 \cos \theta}\).
Solution
First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of 5, which is \(\dfrac{1}{5}\).
\[\begin{align*} r &= \dfrac{10}{5−4\cos \theta}=\dfrac{10\left(\dfrac{1}{5}\right)}{5\left(\dfrac{1}{5}\right)−4\left(\dfrac{1}{5}\right)\cos \theta} \\ r &= \dfrac{2}{1−\dfrac{4}{5} \cos \theta} \end{align*}\]
Because \(e=\dfrac{4}{5}\), \(e<1\), so we will graph an
ellipse with a focus at the origin. The function has a \(\cos \theta\), and there is a subtraction sign in the denominator, so the directrix is \(x=−p\).
\[\begin{align*} 2&=ep\\ 2&=\left(\dfrac{4}{5}\right)p\\ 2\left(\dfrac{5}{4}\right)&=p\\ \dfrac{5}{2}&=p \end{align*}\]
The directrix is \(x=−\dfrac{5}{2}\).
Plotting a few key points as in Table \(\PageIndex{3}\) will enable us to see the vertices. See Figure \(\PageIndex{6}\).
A B C D \(\theta\) \(0\) \(\dfrac{\pi}{2}\) \(\pi\) \(\dfrac{3\pi}{2}\) \(r=\dfrac{10}{5−4 \cos \theta}\) \(10\) \(2\) \(\dfrac{10}{9}≈1.1\) \(2\)
Analysis
We can check our result using a graphing utility. See Figure \(\PageIndex{7}\).
Exercise \(\PageIndex{2}\)
Graph \(r=\dfrac{2}{4−\cos \theta}\).
Answer

Defining Conics in Terms of a Focus and a Directrix
So far we have been using polar equations of conics to describe and graph the curve. Now we will work in reverse; we will use information about the origin, eccentricity, and directrix to determine the polar equation.
How to: Given the focus, eccentricity, and directrix of a conic, determine the polar equation
Determine whether the directrix is horizontal or vertical. If the directrix is given in terms of \(y\), we use the general polar form in terms of sine. If the directrix is given in terms of \(x\), we use the general polar form in terms of cosine. Determine the sign in the denominator. If \(p<0\), use subtraction. If \(p>0\), use addition. Write the coefficient of the trigonometric function as the given eccentricity. Write the absolute value of \(p\) in the numerator, and simplify the equation.
Example \(\PageIndex{3A}\): Finding the Polar Form of a Vertical Conic Given a Focus at the Origin and the Eccentricity and Directrix
Find the polar form of the conic given a focus at the origin, \(e=3\) and directrix \(y=−2\).
Solution
The directrix is \(y=−p\), so we know the trigonometric function in the denominator is sine.
Because \(y=−2\), \(–2<0\), so we know there is a subtraction sign in the denominator. We use the standard form of
\(r=\dfrac{ep}{1−e \sin \theta}\)
and \(e=3\) and \(|−2|=2=p\).
Therefore,
\[\begin{align*} r&=\dfrac{(3)(2)}{1-3 \sin \theta}\\ r&=\dfrac{6}{1-3 \sin \theta} \end{align*}\]
Example \(\PageIndex{3B}\): Finding the Polar Form of a Horizontal Conic Given a Focus at the Origin and the Eccentricity and Directrix
Find the polar form of a conic given a focus at the origin, \(e=\dfrac{3}{5}\), and directrix \(x=4\).
Solution
Because the directrix is \(x=p\), we know the function in the denominator is cosine. Because \(x=4\), \(4>0\), so we know there is an addition sign in the denominator. We use the standard form of
\(r=\dfrac{ep}{1+e \cos \theta}\)
and \(e=\dfrac{3}{5}\) and \(|4|=4=p\).
Therefore,
\[\begin{align*} r &= \dfrac{\left(\dfrac{3}{5}\right)(4)}{1+\dfrac{3}{5}\cos\theta} \\ r &= \dfrac{\dfrac{12}{5}}{1+\dfrac{3}{5}\cos\theta} \\ r &=\dfrac{\dfrac{12}{5}}{1\left(\dfrac{5}{5}\right)+\dfrac{3}{5}\cos\theta} \\ r &=\dfrac{\dfrac{12}{5}}{\dfrac{5}{5}+\dfrac{3}{5}\cos\theta} \\ r &= \dfrac{12}{5}⋅\dfrac{5}{5+3\cos\theta} \\ r &=\dfrac{12}{5+3\cos\theta} \end{align*}\]
Exercise \(\PageIndex{3}\)
Find the polar form of the conic given a focus at the origin, \(e=1\), and directrix \(x=−1\).
Answer
\(r=\dfrac{1}{1−\cos\theta}\)
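The how-to steps above can be sketched as a short helper. This is illustrative code (the function name and the output format are mine): given the eccentricity, the signed directrix value, and its axis, it assembles the polar equation.

```python
def polar_conic(e, directrix, axis):
    """Build r = ep / (1 +/- e*trig(theta)) from eccentricity and directrix.

    axis: "x" for a vertical directrix x = directrix (cosine form),
          "y" for a horizontal directrix y = directrix (sine form).
    A negative directrix value gives subtraction in the denominator.
    """
    trig = "cos" if axis == "x" else "sin"
    sign = "+" if directrix > 0 else "-"
    ep = e * abs(directrix)      # numerator of the standard form
    return f"r = {ep} / (1 {sign} {e}*{trig}(theta))"
```

With \(e=3\) and directrix \(y=−2\) (Example 3A) this yields r = 6 / (1 - 3*sin(theta)).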
Example \(\PageIndex{4}\): Converting a Conic in Polar Form to Rectangular Form
Convert the conic \(r=\dfrac{1}{5−5\sin \theta}\) to rectangular form.
Solution:
We will rearrange the formula to use the identities \(r=\sqrt{x^2+y^2}\), \(x=r \cos \theta\),and \(y=r \sin \theta\).
\[\begin{align*} r&=\dfrac{1}{5-5 \sin \theta} \\ r\cdot (5-5 \sin \theta)&=\dfrac{1}{5-5 \sin \theta}\cdot (5-5 \sin \theta)\qquad \text{Eliminate the fraction.} \\ 5r-5r \sin \theta&=1 \qquad \text{Distribute.} \\ 5r&=1+5r \sin \theta \qquad \text{Isolate }5r. \\ 25r^2&={(1+5r \sin \theta)}^2 \qquad \text{Square both sides. } \\ 25(x^2+y^2)&={(1+5y)}^2 \qquad \text{Substitute } r=\sqrt{x^2+y^2} \text{ and }y=r \sin \theta. \\ 25x^2+25y^2&=1+10y+25y^2 \qquad \text{Distribute and use FOIL. } \\ 25x^2-10y&=1 \qquad \text{Rearrange terms and set equal to 1.} \end{align*}\]
Exercise \(\PageIndex{4}\)
Convert the conic \(r=\dfrac{2}{1+2 \cos \theta}\) to rectangular form.
Answer
\(4−8x+3x^2−y^2=0\)
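A quick numerical check of this conversion: sampling the polar equation and substituting the resulting \((x,y)\) into the rectangular form should give a residual of zero. This sketch (written for this section, not part of the original text) verifies the exercise's answer.

```python
import math

def residual(theta):
    """Sample r = 2/(1 + 2*cos(theta)) and evaluate 4 - 8x + 3x^2 - y^2,
    which should vanish at every sampled point where r is defined."""
    r = 2 / (1 + 2 * math.cos(theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    return 4 - 8 * x + 3 * x ** 2 - y ** 2
```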
Key Concepts Any conic may be determined by a single focus, the corresponding eccentricity, and the directrix. We can also define a conic in terms of a fixed point, the focus \(P(r,\theta)\) at the pole, and a line, the directrix, which is perpendicular to the polar axis. A conic is the set of all points \(P\) such that \(e=\dfrac{PF}{PD}\), where the eccentricity \(e\) is a positive real number. Each conic may be written in terms of its polar equation. See Example \(\PageIndex{1}\). The polar equations of conics can be graphed. See Examples \(\PageIndex{2A}\), \(\PageIndex{2B}\), and \(\PageIndex{2C}\). Conics can be defined in terms of a focus, a directrix, and eccentricity. See Examples \(\PageIndex{3A}\) and \(\PageIndex{3B}\). We can use the identities \(r=\sqrt{x^2+y^2}\), \(x=r \cos \theta\), and \(y=r \sin \theta\) to convert the equation for a conic from polar to rectangular form. See Example \(\PageIndex{4}\).
|
What is the Jacobian matrix?
What are its applications?
What is its physical and geometrical meaning?
Can someone please explain with examples?
Here is an example. Suppose you have two implicit differentiable functions
$$F(x,y,z,u,v)=0,\qquad G(x,y,z,u,v)=0$$
and the functions, also differentiable, $u=f(x,y,z)$ and $v=g(x,y,z)$ such that
$$F(x,y,z,f(x,y,z),g(x,y,z))=0,\qquad G(x,y,z,f(x,y,z),g(x,y,z))=0.$$
If you differentiate $F$ and $G$, you get
\begin{eqnarray*} \frac{\partial F}{\partial x}+\frac{\partial F}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial F}{\partial v}\frac{\partial v}{\partial x} &=&0\qquad \\ \frac{\partial G}{\partial x}+\frac{\partial G}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial G}{\partial v}\frac{\partial v}{\partial x} &=&0. \end{eqnarray*}
Solving this system you obtain
$$\frac{\partial u}{\partial x}=-\frac{\det \begin{pmatrix} \frac{\partial F}{\partial x} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial v} \end{pmatrix}}{\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}}$$
and similar for $\dfrac{\partial u}{\partial y}$, $\dfrac{\partial u}{\partial z}$, $\dfrac{\partial v}{\partial x}$, $\dfrac{\partial v}{\partial y}$, $% \dfrac{\partial v}{\partial z}$. The compact notation for the denominator is
$$\frac{\partial (F,G)}{\partial (u,v)}=\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}$$
and similar for the numerator. Then
$$\dfrac{\partial u}{\partial x}=-\dfrac{\dfrac{\partial (F,G)}{\partial (x,v)}}{% \dfrac{\partial (F,G)}{\partial (u,v)}}$$
where $\dfrac{\partial (F,G)}{\partial (x,v)},\dfrac{\partial (F,G)}{\partial(u,v)}$ are Jacobians (after the 19th century German mathematician Carl Jacobi).
The
absolute value of the Jacobian of a coordinate system transformation is also used to convert a multiple integral from one system into another. In $\mathbb{R}^2$ it measures how much the unit area is distorted by the given transformation, and in $\mathbb{R}^3$ this factor measures the unit volume distortion, etc. Another example: the following coordinate transformation (due to Beukers, Calabi and Kolk)
$$x=\frac{\sin u}{\cos v}$$
$$y=\frac{\sin v}{\cos u}$$
For this transformation you get (see Proof 2 in this collection of proofs by Robin Chapman)
$$\dfrac{\partial (x,y)}{\partial (u,v)}=1-x^2y^{2}.$$
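One can check this determinant numerically with central differences. A small sketch (illustrative; the step size $h$ is chosen for double precision):

```python
import math

def bck_jacobian_det(u, v, h=1e-6):
    """Central-difference Jacobian determinant of x = sin(u)/cos(v),
    y = sin(v)/cos(u) (the Beukers-Calabi-Kolk transformation)."""
    x = lambda a, b: math.sin(a) / math.cos(b)
    y = lambda a, b: math.sin(b) / math.cos(a)
    dxu = (x(u + h, v) - x(u - h, v)) / (2 * h)
    dxv = (x(u, v + h) - x(u, v - h)) / (2 * h)
    dyu = (y(u + h, v) - y(u - h, v)) / (2 * h)
    dyv = (y(u, v + h) - y(u, v - h)) / (2 * h)
    return dxu * dyv - dxv * dyu
```

At, say, $(u,v)=(0.4,0.3)$ the numerical value agrees with $1-x^2y^2$ to several decimal places.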
Jacobian sign and orientation of closed curves. Assume you have two small closed curves, one around $(x_0,y_0)$ and another around $(u_0,v_0)$, this one being the image of the first under the mapping $u=f(x,y)$, $v=g(x,y)$. If the sign of $\dfrac{\partial (x,y)}{\partial (u,v)}$ is positive, then both curves will be travelled in the same sense. If the sign is negative, they will have opposite senses. (See Oriented Regions and their Orientation.)
The Jacobian $df_p$ of a differentiable function $f : \mathbb{R}^n \to \mathbb{R}^m$ at a point $p$ is its best linear approximation at $p$, in the sense that $f(p + h) = f(p) + df_p(h) + o(|h|)$ for small $h$. This is the "correct" generalization of the derivative of a function $f : \mathbb{R} \to \mathbb{R}$, and everything we can do with derivatives we can also do with Jacobians.
In particular, when $n = m$, the determinant of the Jacobian at a point $p$ is the factor by which $f$ locally dilates volumes around $p$ (since $f$ acts locally like the linear transformation $df_p$, which dilates volumes by $\det df_p$). This is the reason that the Jacobian appears in the change of variables formula for multivariate integrals, which is perhaps the basic reason to care about the Jacobian. For example this is how one changes an integral in rectangular coordinates to cylindrical or spherical coordinates.
The Jacobian specializes to the most important constructions in multivariable calculus. It immediately specializes to the gradient, for example. When $n = m$ its trace is the divergence. And a more complicated construction gives the curl. The rank of the Jacobian is also an important local invariant of $f$; it roughly measures how "degenerate" or "singular" $f$ is at $p$. This is the reason the Jacobian appears in the statement of the implicit function theorem, which is a fundamental result with applications everywhere.
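A minimal sketch of the volume-dilation idea, using the polar-coordinate map $(r,\theta)\mapsto(r\cos\theta, r\sin\theta)$: a finite-difference approximation of the Jacobian matrix, whose determinant should come out to $r$, the familiar factor in $dx\,dy = r\,dr\,d\theta$. The helper names are mine, not from any particular library.

```python
import math

def numerical_jacobian(f, p, h=1e-6):
    """Approximate the Jacobian matrix df_p of f: R^n -> R^m by central differences."""
    n, m = len(p), len(f(p))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        pp, pm = list(p), list(p)
        pp[j] += h
        pm[j] -= h
        fp, fm = f(pp), f(pm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def det2(J):
    """Determinant of a 2x2 matrix."""
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

polar = lambda p: [p[0] * math.cos(p[1]), p[0] * math.sin(p[1])]
```

Evaluating `det2(numerical_jacobian(polar, [2.0, 0.7]))` gives approximately 2.0: near that point the map dilates areas by the factor $r=2$.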
In single variable calculus, if $f:\mathbb R \to \mathbb R$, then \begin{equation} f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}. \end{equation} A very useful way to think about $f'(x)$ is this: \begin{equation} \tag{$\spadesuit$} f(x + \Delta x) \approx f(x) + f'(x) \Delta x. \end{equation}
One of the advantages of equation $(\spadesuit)$ is that it still makes perfect sense in the case where $f:\mathbb R^n \to \mathbb R^m$:
\begin{equation} f(\underbrace{x}_{n \times 1} + \underbrace{\Delta x}_{n\times 1}) \approx \underbrace{f(x)}_{m \times 1} + \underbrace{f'(x)}_{?} \underbrace{\Delta x}_{n \times 1}. \end{equation} You see, if $f'(x)$ is now an $m \times n$ matrix, then this equation makes perfect sense. So, with this idea, we can extend the idea of the derivative to the case where $f:\mathbb R^n \to \mathbb R^m$. This is the first step towards developing calculus in a multivariable setting. The matrix $f'(x)$ is called the "Jacobian" of $f$ at $x$, but maybe it's more clear to simply call $f'(x)$ the derivative of $f$ at $x$.
The matrix $f'(x)$ allows us to approximate $f$ locally by a linear function (or, technically, an "affine" function). Linear functions are simple enough that we can understand them well (using linear algebra), and often understanding the local linear approximation to $f$ at $x$ allows us to draw conclusions about $f$ itself.
(I know this is slightly late, but I think the OP may appreciate this)
As an application, in the field of control engineering the use of Jacobian matrices allows the local (approximate) linearisation of non-linear systems around a given equilibrium point and so allows the use of linear systems techniques, such as the calculation of eigenvalues (and thus allows an indication of the type of the equilibrium point).
Jacobians are also used in the estimation of the internal states of non-linear systems in the construction of the extended Kalman filter, and also if the extended Kalman filter is to be used to provide joint state and parameter estimates for a linear system (since this is a non-linear system analysis due to the products of what are then effectively inputs and outputs of the system).
I found the most beautiful usage of Jacobian matrices in studying differential geometry, when one abandons the idea that analysis can be done "only on balls of $\mathbb{R}^n$". The definition of the tangent space at a point $p$ of a manifold $M$ can be given via the kernel of the Jacobian of a suitable submersion, or via the image of the differential of a suitable immersion from an open set $U\subseteq\mathbb{R}^{\dim M}$. Quite a simple example, but when I was an undergrad four years ago it gave me the "right" idea of what a linear transformation does in a differential (analytical) framework.
This is not a rigorous explanation, but here is the best intuitive explanation/motivation for the Jacobian matrix. Start with an interval $[x_1,x_2] \subset \mathbb{R}$. What is a common measurement of space for this interval? It is length. To find the length of $[x_1,x_2]$, take $x_2-x_1$. Now suppose I define an invertible linear transformation $T:\mathbb{R} \rightarrow \mathbb{R}$, where $$T(x)=\begin{bmatrix}a\end{bmatrix}x,$$ where $\begin{bmatrix}a\end{bmatrix}$ is a $1\times 1$ matrix with a nonzero entry $a$. The image of $[x_1,x_2]$ under $T$ is the interval with endpoints $ax_1$ and $ax_2$, whose length is $|a|(x_2-x_1)$. Now we ask ourselves this question: how does the length of the new interval relate to the length of the old interval? The length of the image is $|a|$ times the length of $[x_1,x_2]$. But notice that: $$|a|=\left |\det\begin{bmatrix}a\end{bmatrix}\right |.$$ Now suppose you are doing $u$-substitution to evaluate an integral of the form $$\int_{S} f(x) dx.$$ We define $x=x(u)$ and the differential $dx$ becomes $\frac{dx}{du}du$. If you view $dx$ and $du$ as vectors in $\mathbb{R}$, you get $$dx=\begin{bmatrix}\frac{dx}{du}\end{bmatrix}du.$$ The determinant of $\begin{bmatrix}\frac{dx}{du}\end{bmatrix}$ plays the same role as $a$ in that it is a scaling factor between different "infinitesimal" interval lengths.
The higher dimensional analogue of the interval in $\mathbb{R}$ is a parallelepiped in $\mathbb{R}^n$. Measurement of space in $\mathbb{R}^n$ is the $n$-dimensional volume. If you define an invertible linear transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^n$, and if you write $T(x)=Ax$, where $A$ is an $n \times n$ matrix, the absolute value of $\det A$ scales the volume of a parallelepiped. Similarly, if you are dealing with the multidimensional integral: $$\int_{S}f(x_1,...,x_n)dx_1...dx_n$$ and wish to use the change of variables: $$x_i=x_i(u_1,...,u_n),1 \leq i \leq n$$ you can regard $dx=(dx_1,...,dx_n),du=(du_1,...,du_n)$ as vectors in $\mathbb{R}^n$ and relate them by $$dx=\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij}du.$$ The Jacobian matrix here is: $$\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij},$$ and the notation means the $i$th row and $j$th column entry is $\frac{\partial x_i}{\partial u_j}$. The absolute value of the determinant of the Jacobian matrix is a scaling factor between different "infinitesimal" parallelepiped volumes. Again, this explanation is merely intuitive. It is not rigorous as one would present it in a real analysis course.
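As a tiny numerical illustration of this scaling factor, one can check the familiar polar-to-Cartesian change of variables (a standard example, not taken from the answer above):

```python
import numpy as np

# Polar -> Cartesian: x = r cos(theta), y = r sin(theta).
# The Jacobian matrix [[dx/dr, dx/dtheta], [dy/dr, dy/dtheta]]
# has |det| = r, the familiar factor in  dx dy = r dr dtheta.
def jacobian_polar(r, theta):
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.5, 0.7
scale = abs(np.linalg.det(jacobian_polar(r, theta)))
print(scale)   # equals r = 2.5 up to rounding
```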
I don't know much about this, but I know the Jacobian is used in robotics programming for transforming between two frames of reference. The equations become very simple: moving from one frame to another, and then to another, is just a product of Jacobian matrices.
A very short contribution for the applicability question: it is a matrix of partial derivatives. One of its applications is finding local solutions of a system of nonlinear equations. When you have a system of nonlinear equations, the $x$'s that solve the system are not easy to find, because it is difficult to invert the matrix of nonlinear coefficients of the system. However, you can take the partial derivatives of the equations, form the local linear approximation near some value, and then solve that system. Because the system becomes locally linear, you can solve it using linear algebra.
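This is exactly Newton's method for systems: linearize with the Jacobian, solve the resulting linear system, and repeat. A minimal sketch (the particular $2\times 2$ system is a made-up illustration):

```python
import numpy as np

# Newton's method: solve J(x_k) s = -F(x_k), then set x_{k+1} = x_k + s.
# The system below is a hypothetical example, chosen for illustration:
#   x^2 + y^2 - 4 = 0
#   x*y  - 1      = 0
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):                       # Jacobian of F
    x, y = v
    return np.array([[2 * x, 2 * y],
                     [y,     x]])

x = np.array([2.0, 0.5])        # initial guess
for _ in range(50):
    s = np.linalg.solve(J(x), -F(x))   # solve the local linear system
    x = x + s
    if np.linalg.norm(s) < 1e-12:
        break

print(x, np.linalg.norm(F(x)))  # residual is essentially zero at the root
```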
The simplest answer I can give is: the Jacobian matrix is used when a change of variables is needed in a space of dimension greater than one. One of the explanations above illustrates this in the single-variable case.
|
You may indeed identify the generators in the way you did. However, the Lie algebras and Lie groups are different because – as quickly said by Qmechanic – you must use different reality conditions for the coefficients.
A general matrix in the $SU(2)$ group is written as$$ M = \exp[ i( \alpha J_+ + \bar\alpha J_- + \gamma J_0 )] $$where $\alpha\in {\mathbb C}$ and $\gamma\in {\mathbb R}$ while the general matrix in $SU(1,1)$ is given by $$ M' = \exp [ i( \alpha_+ J_+ + \alpha_- J_- + \beta J_0 )] $$where $\alpha_+,\alpha_-,\beta\in {\mathbb R}$ are three different real numbers.
To summarize, for $SU(2)$, the coefficients in front of $J_\pm$ are complex numbers conjugate to each other, while for $SU(1,1)$, they are two independent real numbers. (And I apologize that I am not sure whether the $i$ should be omitted in the exponent of $SU(1,1)$ only according to your convention. Probably.)
If you allow all three coefficients in front of $J_\pm,J_0$ to be three independent complex numbers, you will obtain the complexification of the group. And as Qmechanic also wrote, the complexification of both $SU(2)$ and $SU(1,1)$ is indeed the same, namely $SL(2,{\mathbb C})$.
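A quick numerical check of the $SU(2)$ statement in the spin-$\frac12$ representation ($J_0=\sigma_z/2$, $J_\pm=(\sigma_x\pm i\sigma_y)/2$); the coefficient values below are arbitrary illustrative choices:

```python
import numpy as np

# Spin-1/2 representation of the generators.
Jp = np.array([[0, 1], [0, 0]], dtype=complex)        # J+
Jm = np.array([[0, 0], [1, 0]], dtype=complex)        # J-
J0 = np.array([[0.5, 0], [0, -0.5]], dtype=complex)   # J0

alpha = 0.3 - 1.1j   # arbitrary complex coefficient
gamma = 0.7          # arbitrary real coefficient

# X = alpha J+ + conj(alpha) J- + gamma J0 is Hermitian and traceless,
# so M = exp(iX) should be unitary with det M = 1, i.e. M lies in SU(2).
X = alpha * Jp + np.conj(alpha) * Jm + gamma * J0
lam = np.sqrt(gamma**2 / 4 + abs(alpha)**2)    # eigenvalues of X are ±lam
# For a traceless 2x2 matrix with X^2 = lam^2 * I:
M = np.cos(lam) * np.eye(2) + 1j * np.sin(lam) / lam * X

print(np.allclose(M.conj().T @ M, np.eye(2)))  # unitarity
print(np.isclose(np.linalg.det(M), 1.0))       # unit determinant
```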
|
I wish to solve an equation of the form,
$$ \frac{\partial}{\partial t} \left( \frac{\partial \phi}{\partial x} \right) = -\frac{\partial}{\partial x}(\mathcal{F}) $$
for the variable $\phi$ (e.g. mass).
On the right-hand side is the flux $\mathcal{F}$ of quantity $\phi$.
This equation
"looks like" an advection-diffusion equation, but with the rate of change of the spatial derivative of $\phi$ appearing on the left-hand side.
Applying the finite volume approach we integrate the equation over the cell $\Omega$,
$$ \frac{\partial}{\partial t}\int_{\Omega}\left( \frac{\partial \phi}{\partial x} \right)dx = - \int_{\Omega}\frac{\partial\mathcal{F}}{\partial x} dx $$
$$ \frac{\partial}{\partial t}\left( \frac{\phi_{j+1/2}}{h_j} - \frac{\phi_{j-1/2}}{h_j}\right) = -\left( \frac{\mathcal{F}_{j+1/2}}{h_j} - \frac{\mathcal{F}_{j-1/2}}{h_j}\right) $$
where $h_j$ is the width of the cell.
Is this basic approach correct? I have never needed to solve an equation involving the time derivative of a spatial derivative before; this is the approach I have taken. Does anyone have any advice or direction? I have not yet tried to implement this numerically.
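One way to sidestep the mixed derivative (a sketch of a possible approach, not a verdict on the question) is to treat $\psi = \partial\phi/\partial x$ as the evolved conserved variable, update its cell averages with interface flux differences, and recover $\phi$ by integrating $\psi$. In the sketch below the flux is assumed linear, $\mathcal{F} = c\,\phi$ with $c>0$, purely for illustration:

```python
import numpy as np

# Evolve psi = dphi/dx with the finite-volume update
#   psi_j^{n+1} = psi_j^n - (dt/h) (F_{j+1/2} - F_{j-1/2}),
# then recover phi (up to a constant) by integrating psi.
# Assumed flux: F = c*phi with c > 0 (illustrative only), upwind interfaces.
c, N = 1.0, 200
h = 1.0 / N                               # uniform cell width h_j
x = (np.arange(N) + 0.5) * h
dt = 0.4 * h / c                          # CFL-limited time step

phi = np.exp(-200 * (x - 0.3)**2)         # initial profile
psi = np.gradient(phi, h)                 # psi = dphi/dx

for _ in range(100):
    F_half = c * phi                      # upwind: F_{j+1/2} = c*phi_j for c > 0
    psi -= dt / h * (F_half - np.roll(F_half, 1))   # periodic for simplicity
    phi = np.cumsum(psi) * h              # recover phi up to an additive constant

print(np.isfinite(phi).all())
```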
|
I'm studying about the finite element method in a class but I don't come from a civil engineering background. Anyways, it hasn't been made clear to me what the difference between constitutive laws and governing equations are. To me they both relate physical quantities with one another.
A constitutive law is generally an algebraic relation which tells you the coefficients of a differential equation, while the governing equations are the differential equations themselves.
For example, if I have a metal piston on top of a gas, I can write down the equation of motion for the piston
$$m \ddot X - PA = 0$$
where $P$ is the pressure in the gas and $A$ is the area of the piston. Without knowing how the pressure depends on the piston position, this is not a closed equation: it refers to an undetermined quantity, the pressure. But the ideal gas law, that the pressure $P=C/(V-AX)$ where $C,V$ are constants, determines the pressure in terms of $X$, and gives
$$ m \ddot X -{ AC\over (V - AX)} =0$$
Now the equation is closed--- it tells you the future behavior of X knowing X alone. The ideal gas law is the constitutive relation in this case, while the differential equation is the governing equation.
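The closed governing equation can then be integrated directly; the sketch below uses illustrative parameter values ($m=1$, $A=0.1$, $C=1$, $V=1$, piston starting at rest) and SciPy's `solve_ivp`:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Governing equation m X'' = A C / (V - A X), closed by the ideal-gas
# constitutive law.  Parameter values are illustrative, not physical.
m, A, C, V = 1.0, 0.1, 1.0, 1.0

def rhs(t, y):
    X, Xdot = y
    return [Xdot, (A * C / (V - A * X)) / m]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])   # piston position X at t = 5 (gas pushes it outward)
```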
Constitutive equations are algebraic, governing equations are differential.
A constitutive law is often an approximate solution to another differential (governing) equation which has much smaller transient scale. In the example with a piston we used a static law for pressure in a dynamical situation. We supposed that sound waves and other complications in the gas due to piston's moving relax much faster than $X$ changes, so we do not solve the other differential equation with unknown piston position, but we use its solution with a given (static) piston position. Thus we reduce the number of coupled governing equations.
Another example is a relationship between the time-dependent electric field $\vec{E}(t)$ and current $\vec{j}(t)$ via conductivity $\sigma$: $$\vec{j}(t) = \sigma \vec{E}(t)$$
Here the inertial properties of electrons are neglected (no retardation, no relaxation time, immediate reaction to, or "following" of, the external field). As a matter of fact, there is a governing equation for the electron velocity containing an inertial term, something like $m_e \frac{d\vec{v}}{dt}$, on the left-hand side and the forces on the right-hand side. The forces are the external electric field and the internal friction proportional to velocity (electric resistance). When we neglect the left-hand side (the inertial term), the solution is obtained from the equality of forces (driving and friction); this is how one obtains the solution $\vec{j}(t) = \sigma \vec{E}(t)$ (a constitutive law) in place of a governing equation.
|
My question is regarding Andreev bound states and their transmission probabilities. But to make this self-contained, let's quickly recap, for which I will draw from Tosi, L., Metzger, C., Goffman, M. F., Urbina, C., Pothier, H., Park, S., Krogstrup, P. (2018). Spin-orbit splitting of Andreev states revealed by microwave spectroscopy, as I like their descriptions:
The Josephson supercurrent that flows through a weak link between two superconductors is a manifestation of the coherence of the many-body superconducting state. The link can be an insulating barrier, a small piece of normal metal, a constriction or any other type of coherent conductor. Regardless of its specific nature the supercurrent is a periodic function of the phase difference $\delta$ between the electrodes. However, the exact function is determined by the geometry and material properties of the weak link. A unifying microscopic description of the effect has been achieved in terms of the spectrum of discrete quasiparticle states that form at the weak link: the Andreev bound states (ABS).
Andreev bound states are formed from the coherent Andreev reflections that quasiparticles undergo at both ends of a weak link. Quasiparticles acquire a phase at each of these Andreev reflections and while propagating along the weak link of length $L$. Therefore, the ABS energies depend on $\delta$, on the transmission probabilities for electrons through the weak link and on the ratio $\lambda = L/\xi$ where $\xi$ is the superconducting coherence length. Assuming ballistic propagation, $\xi = \hbar v_F/\Delta$ is given in terms of the velocity $v_F$ of quasiparticles at the Fermi level within the weak link and of the energy gap $\Delta$ of the superconducting electrodes. In a short junction, defined by $L \ll \xi$, each of the $N$ conduction channels of the weak link, with transmission probability $\tau_i$, gives rise to a single spin-degenerate Andreev level at energy $E_{A,i} = \Delta\sqrt{1-\tau_i\sin^2(\delta/2)}$
Now, if I have $N$ conduction channels occupied at chemical potential $\mu$, and assume parabolic dispersion, no SOI, and no Zeeman field, there is a clear hierarchy in Fermi velocities of the aforementioned conduction channels. The 'deepest' lying band (its band minimum is furthest away from $\mu$) will have the highest $v_F$, while the outer most subband will have the lowest $v_F$. My question is then as follows. Does this then also mean that the outer most subband will always have the lowest transmission probability $\tau_N$? Is it true that $\tau_i \geq \tau_j$, for $i>j$? Can we directly relate $\tau_i$ to $v_{F,i}$, or does that require specific assumptions, and when are those not satisfied? I know at least one case in which it is true; in the (excellent, in my opinion) thesis by Bretheau he derives an expression for $\tau_i$ when the junction hosts a delta scatterer of strength $V$ such that $\tau = \frac{1}{1+\left(\frac{m_eV/\hbar^2}{k_F}\right)^2}$, where one does have this hierarchy. What about the case of a smooth barrier potential? And what if every conduction band does not have to see the same scattering potential (say due to where the wavefunction lives in the 3D structure)? Could that lead to a case where this is broken? And finally, and perhaps crucially, what about if the junction is disordered?
|
In 2D CFT, we have the Virasoro generators $L_m$ and the generators $\bar L_m$, which are such that $[L_m,\bar L_n]=0$. Hence I thought that the full conformal algebra was $Vir\oplus \overline{Vir}$. But I see in the literature that they write $Vir\otimes \overline{Vir}$ instead. The same happens in the more general case of a symmetry algebra $A\otimes \overline{A}$. Why is this?
This is just a matter of notation. Suppose $G_1$ and $G_2$ are two (Lie) groups of dimension $d_1$ and $d_2$. We all know the natural way to construct the direct sum of these groups: it's a group of dimension $d_1 + d_2$ that mathematicians usually denote as $G_1 \oplus G_2$. However, physicists usually just write $G_1 \times G_2$. The same holds for (Lie) algebras. Moreover, in the physics literature it is rare to distinguish the Lie algebra from its group, unless there is a possible ambiguity or confusion that could arise.
As a set, the conformal symmetry algebra is $Vir \times \overline{Vir}$. As a vector space, it is $Vir \oplus \overline{Vir}$.
It is also useful to consider the universal enveloping algebra $U(Vir)$, whose generators are products of Virasoro generators of the type $\prod_i L_{m_i}$. This is now an associative algebra, instead of a Lie algebra. Then we have $U(Vir \times \overline{Vir}) = U(Vir) \otimes \overline{U(Vir)}$.
So physicists' writings are right and consistent, provided you accept that $Vir$ may mean various different things (including $U(Vir)$) depending on the context.
Ignoring fine print, the take-home message is that there are basically only 2 correct notations:

$\mathfrak{g}\oplus\mathfrak{h}$ for the direct sum of Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$.

$G\times H$ for the direct product of Lie groups$^1$ $G$ and $H$.

--

$^1$ Let us for simplicity assume that the Lie groups are not vector spaces, which is often the case.
|
Assume I have two functions $f$ and $g$, with derivatives of $g$ at point $x$ and derivatives of $f$ at point $g(x)$ available.
What is the fastest way of computing derivatives of $f \circ g (x)$ ?
For a real-valued function $g(x)$, you can do that as follows:

\begin{equation} \frac{\textrm{d} f}{\textrm{d} x}= \frac{\textrm{d} f}{\textrm{d} g}\frac{\textrm{d} g}{\textrm{d} x} \approx \frac{\textrm{d} f}{\textrm{d} g}\frac{\textrm{Im}\left\{g(x+i\epsilon)\right\}}{\epsilon} \end{equation} where $i$ is the imaginary unit and $\textrm{Im}\{z\}$ gives the imaginary part of the complex number $z$.

You can implement this in any language which can do operations on complex numbers. You can take $\epsilon$ very small, to have a small truncation error $O(\epsilon^2)$, and there is no subtractive cancellation error. I don't know how fast it is, but it is very fast to implement.
For more details see,
Squire, William; Trapp, George, Using complex variables to estimate derivatives of real functions, SIAM Rev. 40, No.1, 110-112 (1998). ZBL0913.65014.
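A minimal sketch of the complex-step trick described above (the function $g$ here is an illustrative choice, not from the question):

```python
import numpy as np

# Complex-step derivative: g'(x) ≈ Im(g(x + i*eps)) / eps.
# No subtraction of nearly equal numbers occurs, so eps can be tiny.
def complex_step(g, x, eps=1e-200):
    return np.imag(g(x + 1j * eps)) / eps

g = lambda z: np.exp(z) * np.sin(z)            # illustrative function
x = 0.7
exact = np.exp(x) * (np.sin(x) + np.cos(x))    # analytic g'(x)
print(abs(complex_step(g, x) - exact))         # agreement to machine precision
```

The chain-rule factor $\frac{\mathrm{d} f}{\mathrm{d} g}$ from the formula above is then applied on top of this estimate of $\frac{\mathrm{d} g}{\mathrm{d} x}$.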
|
Proving that $$\sum_{n=0}^{\infty }\frac{1}{(2n)!!}=\sqrt{e}$$ Firstly, I tried to check the value against the exponential function at $x=0.5$, but I found its terms were not equal to the series terms.
Note that $$(2n)!! = 2\cdot4\cdot 6 \cdots 2n = 2^n n!$$ so your series is just $$\sum_n \frac{(1/2)^n}{n!} = e^{\frac{1}{2}}$$
$$e^x = \sum_{n=0}^\infty\dfrac{x^n}{n!}$$
Plug $x = \dfrac{1}{2}$, then $\dfrac{x^n}{n!} = \dfrac{1}{2^n n!} = \dfrac{1}{2 \cdot 4 \cdot 6 \cdots 2n} = \dfrac{1}{(2n)!!}$
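A two-line numerical confirmation of the identity $(2n)!! = 2^n n!$ and the resulting sum:

```python
import math

# Partial sum of  sum_{n>=0} 1/(2n)!! = sum_{n>=0} 1/(2^n n!)  vs sqrt(e).
total = sum(1.0 / (2**n * math.factorial(n)) for n in range(20))
print(total, math.sqrt(math.e))   # both are 1.6487212707...
```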
|
In the context of Noether's theorem , the Hamiltonian is the constant of motion associated with the time-translational invariance of the Lagrangian. Time-translational invariance is equivalent to the Lagrangian not depending explicitly on time that is $$\dfrac{∂L}{∂t}=0 .$$
The reason they're equivalent is that for an infinitesimal time translation, we can approximate the Lagrangian as the first order expansion of its Taylor series, that is
$$δL ≡ L(q, \dot{q},t +\epsilon ) − L(q, \dot{q},t)= \dfrac{∂L}{∂t}\epsilon $$ $$\text{the right one}$$
But shouldn't $t \mapsto t+ \epsilon$ induce $q(t) \mapsto q(t+ \epsilon)$ and $\dot{q}(t) \mapsto \dot{q}(t+ \epsilon)$? And if that is the case, then $$δL ≡ L( q(t+ \epsilon), \dot{q}(t+ \epsilon),t +\epsilon ) − L(q, \dot{q},t)= \dfrac{∂L}{∂t}\epsilon +\dfrac{∂L}{∂q}\dot{q}\,\epsilon+\dfrac{∂L}{∂ \dot{q}}\ddot{q}\,\epsilon $$ $$\text{ the wrong one}$$
So a Lagrangian would be time-translation invariant if and only if it does not explicitly depend on $q$, $\dot{q}$ and $t$, which does not make sense. So how is it possible to vary time without affecting the coordinates or their derivatives, which are themselves functions of time?
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a $2$-cycle, $3$-cycle, and $4$-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
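Not more elegant, but as a sanity check: SymPy can verify one explicit subgroup for each divisor, with generators chosen along the lines above (the order-12 subgroup being $A_4$), indexing the points as $0,1,2,3$:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# One explicit subgroup of S_4 for each divisor d of 24 (array form:
# Permutation([1, 0, 2, 3]) is the transposition swapping 0 and 1).
gens = {
    1:  [Permutation([0, 1, 2, 3])],                            # trivial
    2:  [Permutation([1, 0, 2, 3])],                            # <(0 1)>
    3:  [Permutation([1, 2, 0, 3])],                            # <(0 1 2)>
    4:  [Permutation([1, 2, 3, 0])],                            # <(0 1 2 3)>
    6:  [Permutation([1, 0, 2, 3]), Permutation([1, 2, 0, 3])], # S_3 on {0,1,2}
    8:  [Permutation([1, 2, 3, 0]), Permutation([2, 1, 0, 3])], # dihedral 2-Sylow
    12: [Permutation([1, 2, 0, 3]), Permutation([0, 2, 3, 1])], # A_4
    24: [Permutation([1, 2, 3, 0]), Permutation([1, 0, 2, 3])], # S_4 itself
}
for d, g in gens.items():
    assert PermutationGroup(g).order() == d
print("every divisor of 24 is realised as a subgroup order")
```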
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
|
I have to solve the Klein Gordon equation for a scalar field, in global $AdS_3$ (covering space, with non periodic $\tau$) written as $$ ds^2=\frac{R^2}{\cos^2\rho}(-d\tau^2+d\rho^2+\sin^2\rho d\theta^2) $$ I Fourier transformed the scalar field as $$ \phi(\rho,\theta,\tau)=\sum_l\int d\omega Y_l(\cos\theta)e^{ik\omega\tau}\chi(\rho) $$ The equation for $\chi(\rho)$ is solved in terms of hypergeometric function, with arguments depending on $l, k, \omega$. Suppose now that I am able to come back to coordinate space, summing over Fourier modes the exact, regular, solution that I found and I impose that the non normalizable mode of this solution goes as a delta $\delta(\theta)\delta(\tau)$ near the boundary ($\rho=\pi/2$), that means that I want a localised source. (This constrains the arbitrary coefficients of the solution to be the inverse of all the stuff $k,l,\omega$-dependent behind the non normalisable mode, such that when I sum over Fourier modes I trivially obtain a delta).
From Klebanov Witten argument we know that we can write a regular solution using the bulk to boundary propagator $$ \phi(z,x)=\int dx'K(z,x-x')\phi_0(x') $$ with $\phi_0(x')$ the source. In case of a localised source $\delta(x')$ the solution become simply the propagator $\phi(z,x)=K(z,x)$.
Now,
my question is: if I take my solution in global coordinates, perform the correct coordinate transformation from global to Poincaré coordinates, where the propagator $K$ is computed, and sum over Fourier modes, will I recover the propagator $K(z,x)$? The question is motivated by the fact that, if this is true, I believe I can use the second approach to recover my results without using the machinery of sums. I hope I was clear.
|
"A crystal is formed by a large number of repetitions of basic pattern of particles in space.
The basic structural unit which when repeated in three spatial directions generates the crystal structure is called unit cell."
Problem:
Assume I take a number of identical perfect geometrical cubes and arrange them along all three spatial directions; a larger box similar to the smaller ones is produced.
From this assumption and the information above, I deduce that a unit cell should be similar to the corresponding crystal. That is, the only difference between them would be their size.
If this is the case, why consider the geometry of the unit cell and not directly that of the respective crystal? By the assumption above, the fundamental cubical boxes are determined by the same geometrical information as the derived cubical box.
That is, if the lengths of the three axes of a cube are represented by the letters $a,b$ and $c$, and the angles between the axes by the Greek letters $\alpha$, $\beta$ and $\gamma$, then for a cube,
$$a=b=c$$$$\alpha=\beta=\gamma=\frac{\pi}{2}$$
This information is general: it holds for cubes of every size. Why, then, define a unit cell and regard it as simpler to handle than the crystal?
|
Note that, denoting $f(x) = \tan x \sin x -x^2$, you found that $$f'''(x)=-\sin x (1-6\sec^4x+\sec^2x) = \sin x (1+3\sec^2 x)(2\sec^2 x -1 ) \geq 0 $$Hence $f''(x)$ is increasing; since $f''(0)=0$, we conclude that $f''(x) \geq 0$.
Hence $f'(x)$ is increasing; since $f'(0)=0$, we conclude that $f'(x) \geq 0$.
Hence $f(x)$ is increasing; since $f(0)=0$, we conclude that $f(x) \geq 0$.
This is what we wish to prove.
From a more advanced perspective, the inequality follows from the fact that the Taylor expansion $$\tan x \sin x = x^2+\frac{x^4}{6}+\cdots$$ at $x=0$ has all coefficients positive; the radius of convergence of this series is $\pi/2$.
To see why all coefficients are positive, write$$\tan x \sin x = \frac{1}{\cos x} - \cos x$$
The Taylor expansion of $\sec x$ at $x=0$ is $$\sec x = \sum_{n=0}^{\infty} \frac{(-1)^n E_{2n}}{(2n)!} x^{2n}$$where the $E_{2n}$ are Euler numbers. The fact that $(-1)^n E_{2n}$ is positive follows from the series evaluation $$\beta(2n+1) = \frac{(-1)^n E_{2n} \pi^{2n+1}}{4^{n+1} (2n)!}$$with $\beta(s)$ the Dirichlet beta function.
Also note that $|E_{2n}| > 1$ when $n>1$; hence the power series of $\frac{1}{\cos x}-\cos x$ has all coefficients positive.
From this, you might want to prove the stronger inequalities:
When $0<x<\frac{\pi}{2}$,
$$\tan x \sin x > x^2 + \frac{x^4}{6} $$
$$\tan x \sin x > x^2 + \frac{x^4}{6} + \frac{31x^6}{360} $$
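These bounds are easy to sanity-check numerically; here is a quick script (the grid size is an arbitrary choice of mine):

```python
import math

def f(x):
    """f(x) = tan(x)*sin(x) - x^2, the function from the argument above."""
    return math.tan(x) * math.sin(x) - x * x

# Check f > 0 and the stronger x^4/6 bound on a grid strictly inside (0, pi/2).
for k in range(1, 150):
    x = k * (math.pi / 2) / 150
    assert f(x) > 0
    assert math.tan(x) * math.sin(x) > x**2 + x**4 / 6
print("inequalities hold on the sample grid")
```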
|
I am trying to understand the derivation of the following equation which describes the motion of Newtonian viscous fluids:
Equation 1: $\rho \dfrac{D u}{Dt} = \rho g_x - \dfrac{\partial p}{\partial x} + \mu \nabla^2 u$
where $\mu$ is the viscosity of the fluid. I am following the proof written in Hibbeler's Fluid Dynamics, which uses the following steps.
First, the following formula is stated based on the free-body diagram below (I am only writing the equations related to the $x$ component of the velocity, namely $u$):
Equation 2: $\rho \dfrac{D u}{Dt} = \rho g_x + \dfrac{\partial \sigma_{xx}}{\partial x} + \dfrac{\partial \tau_{yx}}{\partial y} + \dfrac{\partial \tau_{zx}}{\partial z}$
In the second step, the normal stress and shear stress variables in the previous equation are related to the velocity and viscosity of the fluid.
$\sigma_{xx} = -p + 2\mu \dfrac{\partial u}{\partial x}$
$\tau_{yx} = \mu (\dfrac{\partial u}{\partial y} + \dfrac{\partial v}{\partial x})$
$\tau_{zx} = \mu (\dfrac{\partial u}{\partial z} + \dfrac{\partial w}{\partial x})$
Here the book claims that substituting the last three equations into equation (2) yields equation (1).
I do not understand how the last three equations are derived. Moreover, after substituting them into equation (2), the result is not even similar to equation (1). This is what I got after doing so:
$\rho\dfrac{Du}{Dt} = \rho g_x - \dfrac{\partial p}{\partial x} + \mu \left(\nabla^2 u + \dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 v}{\partial x \partial y} + \dfrac{\partial^2 w}{\partial x \partial z}\right)$
What am I doing wrong? Is it possible the three equations presented in step 2 are wrong? Is there a source which explicitly explains this topic?
Thanks for your help!
|
I am interested in solving the Poisson equation using the finite-difference approach. I would like to better understand how to write the matrix equation with Neumann boundary conditions. Would someone review the following and check whether it is correct?
The finite-difference matrix
The Poisson equation,
$$ \frac{\partial^2u(x)}{\partial x^2} = d(x) $$
can be approximated by a finite-difference matrix equation,
$$ \frac{1}{(\Delta x)^2} \textbf{M}\bullet \hat u = \hat d $$
where $\textbf{M}$ is an $n \times n$ matrix and $\hat u$ and $\hat d$ are $n \times 1$ (column) vectors,
Adding a Neumann boundary condition
A Neumann boundary condition enforces a known flux at the boundary (here we apply it at the left-hand side, where the boundary is at $x=0$),
$$ \frac{\partial u(x=0)}{\partial x} = \sigma $$ Writing this boundary condition as a centred finite difference,
NB: I originally made an error here — a sign error, and I didn't divide by 2. The following has been corrected. $$\frac{u_2 - u_0}{2\Delta x} = \sigma$$
Note the introduction of a mesh point outside the original domain ($u_0$). This term can be eliminated by introducing the second equation, $$ \frac{u_0 - 2u_1 + u_2}{(\Delta x)^2} = d_1 $$
This equation arises because the introduction of the new mesh point gives us more information: it allows us to write the second derivative at $u_1$, the boundary point, in terms of $u_0$ using a centred finite difference.
The part I'm not sure about
Combining these two equations, $u_0$ can be eliminated. To show the working, let's first rearrange each for the unknown,
$$ u_0 = -2\sigma\Delta x + u_2 \\ u_0 = (\Delta x)^2 d_1 + 2 u_1 - u_2 $$
Next they are set equal and rearranged into the form,
$$ \frac{u_2 - u_1}{(\Delta x)^2} = \frac{d_1}{2} + \frac{\sigma}{\Delta x} $$
I chose this form because it matches the matrix equation above. Notice that the $u$ terms are divided by $(\Delta x)^2$ both here and in the original equation. Is this the correct approach?
Finally, using this equation as the first row of the matrix,
Some final thoughts,
Is this final matrix correct? Could I have used a better approach? Is there a standard way of writing this matrix?
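One way to check the boundary-condition handling is to code it up against a problem with a known solution. The NumPy sketch below is my own (the manufactured problem $u''=2$, $u'(0)=\sigma$, with a Dirichlet value at the right end, is an assumption, not part of the question); its first row is exactly the $(u_2-u_1)/(\Delta x)^2 = d_1/2 + \sigma/\Delta x$ equation derived above:

```python
import numpy as np

# Manufactured problem: u'' = 2 on [0, 1], u'(0) = sigma, u(1) = 1 + sigma,
# whose exact solution is u(x) = x^2 + sigma * x.
n = 51
dx = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
sigma = -1.0  # prescribed flux at the left boundary

A = np.zeros((n, n))
b = np.zeros(n)

# First row: Neumann condition with the ghost point eliminated,
# (u2 - u1)/dx^2 = d1/2 + sigma/dx.
A[0, 0], A[0, 1] = -1.0, 1.0
b[0] = dx**2 * (2.0 / 2.0) + dx * sigma

# Interior rows: standard centred second difference.
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    b[i] = dx**2 * 2.0

# Last row: Dirichlet value at x = 1.
A[-1, -1] = 1.0
b[-1] = 1.0 + sigma

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - (x**2 + sigma * x)))
print("max error:", err)
```

Because centred differences are exact for quadratics, the error here should sit at round-off level; for a general $d(x)$ one would instead see second-order convergence as $\Delta x \to 0$.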
|
Let $p$ be a prime. Let $G$ be a solvable, non-regular, transitive permutation group such that some element fixes no point, and each element fixing some point fixes exactly $p$ points. Suppose that for $g \notin N_G(G_{\alpha})$ we have $$ G_{\alpha}^g \cap G_{\alpha} = 1 $$ and that $p$ does not divide the order of $G_{\alpha}$. Then $G$ has a regular normal subgroup $R$ or $G$ has an intransitive normal subgroup $F$ of index $p$ which acts as a Frobenius group on its $p$ orbits.
Proof: Let $G$ be a counter-example of minimal order. Let $N$ be an elementary abelian normal subgroup of $G$. As $N_{\alpha} \le G_{\alpha}$, but $p$ does not divide the order of $G_{\alpha}$ we have $N_{\alpha} = 1$, i.e. $N$ acts semi-regularly on $\Omega$. Since $G$ is a counter-example, $N$ is not transitive. Let $\Sigma$ be the set of orbits of $N$, and let $\Delta$ be the orbit containing $\alpha$. Then $G/N$ acts on the set $\Sigma$ and $G_{\alpha}N / N$ is the stabilizer of $\Delta$. Let $\overline \alpha$ be the set of fixed points of $G_{\alpha}$. Since $G$ acts transitively, the size of each block divides the order of $G$, and hence $\overline \alpha$ is a minimal $G$-block. Also since the orbits of $N$ are $G$-blocks, we have that either $\Delta \cap \overline \alpha = \overline \alpha$ or $\Delta \cap \overline \alpha = \{\alpha\}$.
Let $g \in G_{\alpha}$ and assume that $g$ fixes an orbit $\Delta'$ of $N$, i.e. $\langle g \rangle$ operates on $\Delta'$. If $g$ had no fixed point, then by an orbit decomposition we would have $|\Delta'| = k\cdot o(g)$ for some $k$. But $|\Delta'| = |N|$, which is a power of $p$, and $p$ does not divide $o(g)$, so $g$ must have a fixed point in $\Delta'$ (which must come from $\overline \alpha$). Hence if $\overline \alpha \subseteq \Delta$, then $G / N$ acts as a Frobenius group on $\Sigma$ and the action of $G/N$ on $\Sigma$ is faithful.
I do not understand why $G/N$ acts as a Frobenius group, i.e. that the action is non-regular and each $gN$ fixes at most one point. If $gN$ fixes some point, then this is equivalent to the statement that $g$ fixes some orbit $\Delta'$ of $N$; but in general we do not have $g \in G_{\alpha}$, so I do not see how the previous comments apply?
Let $R / N$ be the Frobenius kernel. Then $R$ acts regularly on $\Omega$, which is impossible.
If some $r \in R$, $r \ne 1$, fixed a point $\beta$, i.e. $\beta^r = \beta$, and $\Delta'$ denotes the orbit of $N$ containing $\beta$, then as this orbit is a block we would have $(\Delta')^r = \Delta'$, i.e. $rN$ fixes a point, which is not possible by the definition of the kernel. So this is clear to me.
Therefore each of the orbits of $N$ contains at most one of the points of $\overline \alpha$ and $G / N$ acts in such a way that some element fixes no point, and each element fixing some point fixes exactly $p$ points.
Here I do not understand why $G/N$ acts in that way?
If $|\Sigma| = p$ then $G$ would not be a counter-example.
What condition is precisely violated so that this is not a counter-example?
Hence $|\Sigma| > p$ and $G/N$ acts faithfully on $\Sigma$. Therefore the assertion holds for $G/N$.
I do not see why the point stabilizers are not divisible by $p$ and the intersection of different conjugates of the point stabilizers is trivial?
Again, if $R/N$ acts regularly on $\Sigma$ then $R$ acts regularly on $\Omega$, which is impossible. Thus $G/N$ has an intransitive normal subgroup $F/N$ of index $p$ acting as a Frobenius group on its $p$ orbits. Each of the $p$ orbits of $F$ on $\Sigma$ contains exactly one of the points of $\overline \alpha$. Hence $F$ acts as a Frobenius group on $\Omega$, and we have a contradiction, proving the assertion. $\square$
I have some difficulty following the arguments in the proof. I have marked the parts I am unsure about. Maybe one additional remark: by saying $\overline \alpha$ is the set of fixed points of $G_{\alpha}$ we used the fact that each element of the point stabilizer has the same set of $p$ fixed points, namely the set $\overline \alpha$. This uses the assumption that different point stabilizers intersect trivially; I can supply a proof of this fact if wanted!
|
Let $(\Omega, \mathcal A, \mu)$ be a $\sigma$-finite measure space and $g$ a measurable function on $\Omega$. Fix $p\in[1,\infty]$ and consider the multiplication operator $$M_g:L^p(\Omega, \mu)\to L^p(\Omega, \mu),\quad f\mapsto fg$$ Then $M_g\in B(L^p(\Omega, \mu))$ if and only if $g \in L^\infty(\Omega, \mu)$, and in that case $\|M_g\| = \|g\|_{L^\infty}$.
My attempt:
I proved the "$\impliedby$" direction.
"$\implies$": Suppose $M_g$ is a bounded operator and assume that $g$ is not essentially bounded. Then there exists a set $A$ of nonzero measure on which $g$ is unbounded. By $\sigma$-finiteness, we can choose an increasing sequence $(A_n)\subset \mathcal A$ of sets of finite measure with $\bigcup_n A_n=\Omega$. I tried considering the functions $f_n=\mathbf 1_{A\cap A_n}$ to get a contradiction, but my attempts have failed.
I'm also not sure how to prove $\|M_g\| = \|g\|_{L^\infty}$. I'm fine with the inequality $\|M_g\|\leq \|g\|_\infty$, but I don't know how to show the other one.
Thank you for any help.
|
Two parts of this answer: (1) a quick sketch of how one gets the result cited and (2) why I believe one can't get a less awkward, more succinct result.
Proof Sketch
The one thing that the numerical version of Snell's law does not give us, but which we always use unconsciously, is the equally important fact that the incident ray $\vec{i}$, the refracted ray $\vec{r}$, and the interface normal $\vec{n}$ all lie in the same plane. In other words, the refracted ray lies in the span $\left<\left\{\vec{i},\,\vec{n}\right\}\right>$ of $\vec{i}$ and $\vec{n}$, and so there are constants $\alpha$ and $\beta$ such that:
$$\vec{r} = \alpha\,\vec{i}+\beta\,\vec{n}\tag{1}$$
Another implication of this assertion is as follows: the vector $\vec{n}\times\vec{i}$ is normal to this span plane, and so is $\vec{n}\times\vec{r}$. Therefore $\vec{n}\times\vec{r}\propto \vec{n}\times\vec{i}$. Now use Snell's numerical law to find the proportionality constant, and you get the first equation you cite.
Take the cross product of both sides of (1) with $\vec{n}$:
$$\vec{n}\times\vec{r}=\alpha\,\vec{n}\times\vec{i}\tag{2}$$
and apply the form of Snell's law you have cited to find:
$$\alpha=\mu\tag{3}$$
Let's agree to work with normalized vectors, in which case taking the inner product of (1) with itself yields:
$$0 = \mu^2-1 + \beta^2 + 2\,\mu\,\beta\, \langle \vec{i},\,\vec{n}\rangle\tag{4}$$
whence, on finding $\beta$ from (4), the result follows with a bit of messing about. You also need to use the fact that the components of $\vec{r}$ and $\vec{i}$ normal to the interface point in the same, not the opposite, direction to decide the sign of the root of the quadratic equation for $\beta$.
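For concreteness, here is a small numerical check of the construction $\vec{r}=\alpha\,\vec{i}+\beta\,\vec{n}$; the specific sign conventions (unit vectors, normal pointing back toward the incidence side, $\mu$ the index ratio) are my own assumptions:

```python
import numpy as np

mu = 1.0 / 1.5  # ratio of refractive indices, e.g. air into glass
n = np.array([0.0, 0.0, 1.0])                   # unit interface normal
i = np.array([np.sin(0.4), 0.0, -np.cos(0.4)])  # unit incident ray

c1 = -np.dot(n, i)                          # cos(incidence angle)
c2 = np.sqrt(1.0 - mu**2 * (1.0 - c1**2))   # cos(refraction angle)
r = mu * i + (mu * c1 - c2) * n             # r = alpha*i + beta*n with alpha = mu

assert abs(np.linalg.norm(r) - 1.0) < 1e-12     # r is a unit vector
sin_i = np.linalg.norm(np.cross(n, i))
sin_r = np.linalg.norm(np.cross(n, r))
assert abs(sin_r - mu * sin_i) < 1e-12          # Snell: n x r = mu * (n x i)
print("refracted ray:", r)
```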
A Simpler Version?
This is a complicated and awkward expression, hard to work with, so one wonders whether there might be a more elegant expression for something as seemingly simple as Snell's law. I don't believe so, and this is because:
(1) Snell's law results from a boundary condition involving the media's optical densities, and is thus not simply a geometric result. If it were, simple vector or tensor expressions would result;
(2) As a result of (1), Snell's law is a nonlinear function of $\vec{i}$ and the ratio $\mu$. It is this nonlinearity that begets optical wavefront aberration in lens systems. You can write the equation derived above as:
$$\vec{r} = \vec{n} + \mu\,(\mathrm{id}-\vec{n}\otimes\vec{n})\,\vec{i} +\left(-\frac{\mu^2}{2}\left(1-\langle\vec{n},\,\vec{i}\rangle^2\right)+\frac{\mu^4}{8}\left(1-\langle\vec{n},\,\vec{i}\rangle^2\right)^2 + \cdots\right)\vec{n}$$
The first two terms above are the normal component ($\vec{n}$) and $\mu$ times the projection $(\mathrm{id}-\vec{n}\otimes\vec{n})\,\vec{i}$ of $\vec{i}$ onto the interface. This second term expresses the conservation of the transverse optical momentum that upholds the Hamiltonian description of ray optics and the conservation of étendue (the optical Liouville theorem, or the second law of thermodynamics applied to light) across discontinuous interfaces. See the latter part of my answer here for a discussion of this point. The other terms are those responsible for optical aberrations; consideration of the linear terms together with the two written out explicitly above alone gives the Seidel theory of aberration. If Snell's law were simply expressible in vector notation, lens design would be trivial.
(3) If we wanted a purely geometric theory, one could perhaps begin by defining the metric tensor in the medium as $n^2(x,\,y,\,z)$ times the identity matrix; the distance function is then the optical path difference and, as discussed in my answer, ray optics becomes the purely geometric theory of finding geodesics in this conformally flat Riemannian geometry, with the geodesics found from the Euler-Lagrange equation $\ddot{X}^k + \Gamma^k_{ij}\,\dot{X}^i\,\dot{X}^j = 0$ parameterized by the affine parameter equal to the optical path length. So we might hope for simple vector expressions expressing purely geometrical relationships in this case. Unfortunately, as discussed in the answer, the six-dimensional Hamiltonian / Riemann-geometric approach breaks down at discontinuous interfaces and we are forced back to the four-dimensional Hamiltonian approach involving only transverse components, because Snell's law conserves only the transverse optical momenta and not the normal one, and so we are again forced to deal with awkward boundary conditions. So again I think we are forced to the same conclusion: no more elegant geometric expression exists for Snell's law.
|
Čech cohomology fails for the plane with the doubled origin, $\mathbb A^2_{00}=\mathbb A^2\cup_{\mathbb A^2-0}\mathbb A^2$, with coefficients in the structure sheaf: $\check{H}^2(\mathbb A^2_{00};\mathcal O)=0\ne H^2(\mathbb A^2_{00};\mathcal O)$. Čech cohomology fails here for both the Zariski and étale topologies.
Mayer-Vietoris shows that the punctured plane $\mathbb A^2-0$ has nontrivial cohomology in degree 1. Similarly, if we take any open subset of $\mathbb A^2$ that contains the origin, the same group appears in its first cohomology. Mayer-Vietoris pushes that into degree 2: $H^2(\mathbb A^2_{00};\mathcal O)=H^1(\mathbb A^2-0;\mathcal O)$. (In other words, if we push forward from the punctured plane to the affine plane $j\colon \mathbb A^2-0\hookrightarrow \mathbb A^2$, then $R^1j_*\mathcal O$ is nontrivial and concentrated on the origin.)
The Čech cohomology for a particular cover amounts to pretending that the open sets of that cover and all of their $n$-fold intersections are acyclic for the sheaf. For a manifold and the constant sheaf, there do exist such “good covers” (eg, convex balls on a Riemannian manifold) and they are cofinal. For a separated scheme, affines are acyclic, so if intersections of affines are affine, then a single cover by affines computes quasi-coherent cohomology. This is why separated often appears as a hypothesis in cohomology (although it could be reduced to “semi-separated,” ie, affine diagonal).
If $U\to X$ is a cover and $U^n$ is the $n$-fold fiber product over $X$, then the Čech complex is: $$F(U)\to F(U^2)\to F(U^3)\to\cdots$$ This defines $\check{H}^*(U\to X;F)$. For any cover, the derived version of this computes the full cohomology: $$RF(U)\to RF(U^2)\to RF(U^3)\to\cdots$$ This gives the Čech-to-cohomology spectral sequence: $E_1^{pq}=H^q(U^p;F)\Rightarrow H^{p+q}(X;F)$ and $E_2^{pq}=\check{H}^p(U\to X;R^qF)$. The edge map is the comparison from Čech cohomology to real cohomology, and the groups off of the edge are obstructions to Čech cohomology working. Of course, we are not interested in a single cover, but in the limit over all covers.
For any cohomology class there exists a cover such that the restriction to the cover kills the class. Refining by that cover moves that class off of $E_1^{p,0}=H^p(U;F)$ to at least $E_1^{(p-1),1}$. If $p=1$ then it has moved into Čech cohomology. That is why first Čech is always correct in the limit over all covers. But there is no guarantee that a cycle can be moved all the way to the Čech edge $E^{0,p}$.
Indeed, this cannot be done in the example from the beginning: the second cohomology of the affine plane with the doubled origin with coefficients in the structure sheaf. Switching back to the language of ordinary spaces, our cover must have two open sets, each containing one of the two origins; their intersection is punctured in codimension 2 and thus has cohomology. Moreover, it is essentially the same cohomology, independent of the choice of cover. It cannot be killed by refining the cover, only by leaving the cover the same and introducing an auxiliary cover of the intersection, ie, Verdier’s theory of hypercovers, a small change to Čech cohomology that always works. I have identified a cycle in $E_1^{1,1}$ that I claim obstructs Čech cohomology from being correct in this example. I have shown that it survives the limit of all covers. I have not shown that it survives to $E_2$, let alone $E_\infty$. For proofs of such pathologies, see the literature on hypercovers.
|
In the vortex panel method the following equation is used
[tex] V_{freestream}\sin \beta_i - \sum_{j=1}^n \displaystyle\frac{\lambda_i}{2\pi} \int \displaystyle\frac{d\theta_{ij}}{dn_i} ds_j = 0[/tex]
where n is the number of panels, i is the control point at which the vortex strength is being calculated and j is the panel which is inducing some velocity at i, [tex]\lambda_i[/tex] is the vortex strength at i and
[tex] \theta_{ij} = \arctan{\displaystyle\frac{y_i-y_j}{x_i-x_j}}[/tex]
My question is what is the value of [tex]\theta_{ij}[/tex] when i = j?
|
Statistical properties of stochastic 2D Navier-Stokes equations from linear models
1.
University of Wyoming, Department of Mathematics, Dept. 3036, 1000 East University Avenue, Laramie, WY 82071
2.
Università di Pavia, Dipartimento di Matematica, via Ferrata 5, 27100 Pavia, Italy
In this paper, we investigate this conjecture for the 2D Navier-Stokes equations driven by an additive noise. In order to check this conjecture, we analyze the coupled system Navier-Stokes/linear advection system in the unknowns $(u,w)$. We introduce a parameter $\lambda$ which gives a system $(u^\lambda,w^\lambda)$; this system is studied for any $\lambda$ proving its well posedness and the uniqueness of its invariant measure $\mu^\lambda$.
The key point is that for any $\lambda \neq 0$ the fields $u^\lambda$ and $w^\lambda$ have the same scaling exponents, by assuming universality of the scaling exponents to the force. In order to prove the same for the original fields $u$ and $w$, we investigate the limit as $\lambda \to 0$, proving that $\mu^\lambda$ weakly converges to $\mu^0$, where $\mu^0$ is the only invariant measure for the joint system for $(u,w)$ when $\lambda=0$.
Mathematics Subject Classification: Primary: 60H15, 60G10; Secondary: 76D0. Citation: Hakima Bessaih, Benedetta Ferrario. Statistical properties of stochastic 2D Navier-Stokes equations from linear models. Discrete & Continuous Dynamical Systems - B, 2016, 21 (9) : 2927-2947. doi: 10.3934/dcdsb.2016080
|
I need to give an options talk about an elementary number theory module. I will discuss how it is the study of the positive integers, particularly the primes, and give some cryptography applications. What would be a good hook for introducing elementary number theory in such a talk?
I have so many ideas for this ... but will select one. I promise. But all my ideas about this satisfy this criterion:
Try talking about something they can start computing while you are talking, but that can quickly be connected to something interesting and mysterious (if not to say unsolved).
Today's suggestion is the "number of divisors" or "sum of divisors" functions, often called $\tau$ and $\sigma$ (or $\sigma_0$ and $\sigma_1$). It is easy to get people computing these for small positive integers, and perhaps to very quickly see patterns (e.g. $\tau(p)=2$, $\sigma(p)=p+1$ for $p$ prime). And prime numbers/factorization are obviously useful in computing them.
But one can quickly ask related questions, such as: "What is the average number of divisors of a number?" Of course, that requires definitions, but for this kind of talk you can sweep that under the rug and say "here is a sample definition" - like $\frac{1}{n}\sum_{k=1}^n \tau(k)$. It turns out this is asymptotically $\log(n)+2\gamma-1$ - yes, that gamma-the-number. On the other hand, I don't think a sharp error term has been established - you may have to peruse Apostol, Stopple, or Hardy/Wright to check; I think the error is only known to lie between $O(1/\sqrt[4]{n})$ and $O(1/\sqrt[3]{n})$, but my info may be out of date.
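To make the asymptotic tangible during the talk, one can compute the average live; a small script along these lines (the cutoff 10000 is an arbitrary choice):

```python
import math

def tau(n):
    """Number of divisors of n, by trial division up to sqrt(n)."""
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 2 if d * d < n else 1  # counts d and n//d, once if d*d == n
        d += 1
    return count

N = 10_000
avg = sum(tau(k) for k in range(1, N + 1)) / N
gamma = 0.5772156649015329  # Euler-Mascheroni constant
print(avg, math.log(N) + 2 * gamma - 1)  # the two agree closely already at N = 10^4
```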
Alternately, to show how quickly new questions can arise, one can make up functions like $\tau_{o}$ and $\tau_{e}$, counting the odd and even divisors, respectively - are they equally well behaved? What is "good" here? They are easy to compute for small values, but different - can we ask/answer the same questions about formulas and so forth?
Or, if the students are quite erudite, you can talk about its Dirichlet series (number 19 here), but that is probably beyond what you can do ...
In any case, you don't have to take this example, but I very much recommend finding some topic that is very easy to investigate that leads really fast to unknown/hard questions. Whether you look for one related to cryptography, algebra, or geometry, you will find one! I hope they all pick number theory. And that many other people answer this question so it becomes a repository for ideas for talks advertising it!
|
I was trying to solve this 1D diffusion problem \begin{equation} \dfrac{\partial^2 T}{\partial \xi^2} = \dfrac{1}{\kappa_S}\dfrac{\partial T}{\partial t}\, , \end{equation} with the boundary conditions \begin{align} &T(\xi = 2Bt^{1/2},t) = A t^{1/2}\, ,\\ &T(\xi=\infty,t) = 0\, ,\\ &T(\xi,0) = 0\, , \end{align} where $A$ and $B$ are constants.
I know that the solution is $T = A \sqrt{t}\,\operatorname{erfc}\!\big(\xi/(2\sqrt{\kappa_S t})\big)/\operatorname{erfc}\big(B/\sqrt{\kappa_S}\big)$.
I tried by using the Laplace transformation, but I found a problem since I have conditions on $\xi = 2Bt^{1/2}$ instead of $\xi = 0$.
More precisely, if the Laplace transform of $T(\xi,t)$ is $\Theta(\xi,s)$, then after applying the Laplace transformation together with $T(\xi=\infty,t) = 0$ and $T(\xi,0) = 0$, I got
\begin{equation} \Theta(\xi,s) = C_1(s)\exp{\left(-\sqrt{\dfrac{s}{\kappa_S}}\xi\right)}\, . \end{equation}
So now, to find $C_1(s)$ and use the convolution property of the Laplace transformation, I need a condition at $\xi = 0$, but I only know that $T(\xi = 2Bt^{1/2},t) = A t^{1/2}$.
Does any of you know if the Laplace transform has some other properties that allow me to solve the problem?
|
The theoretical tools at my disposal are Abel's test and Dirichlet's test. To recap those: say I have an integral of the form $$\int_{a}^{b}f\cdot g \hspace{1.5mm} dx$$ with improperness (a vertical or horizontal asymptote) at $b$.
Abel's test guarantees convergence for
$\bullet$ $g$ monotone and bounded on $(a,b)$ $\hspace{5mm}$ $\bullet$ $\int_{a}^{b}f $ convergent.
Dirichlet's test guarantees convergence for
$\bullet$ $g$ monotone on $(a,b)$ and $\lim_{x\to b}\hspace{1.5mm} g(x) = 0 $ $\hspace{5mm}$ $\bullet$ $\lim_{\beta \to b}$ $\int_{a}^{\beta}f $ bounded.
So for $\int_{1}^{\infty} \frac{\cos(x)}{x^r} dx$, $\cos(x)$ must be my $f$, and since $\int_{1}^{\infty} \cos x \hspace{1mm} dx$ diverges, I must show that its partial integrals are bounded, which is intuitively easy given the graph of $\cos(x)$. But how do I make that precise? With that done, for $r > 0$ I get my monotone function $g(x)=\frac{1}{x^r}$ tending to $0$, as required by Dirichlet's test, right?
|
This post is a sequel to Concatenate and average NetCDFs, where I introduced NCO (NetCDF Operators), showed how to install NCO, and provided examples of the essential operations of record and ensemble concatenation and averaging of variables across multiple NetCDF files. If you are new to NCO, I suggest familiarizing yourself with that post and experimenting with some NetCDF files, as well as reading the NCO user manual. This post explains how to calculate weighted and non-weighted zonal statistics on variables over different dimensions (e.g. temporal and spatial averages) using the ncwa command line tool. It includes several examples, as well as information regarding open-source tools for visualizing NetCDF variables and NCO commands for accessing NetCDF metadata. These tools and examples will aid your workflow as they did mine!
Calculate zonal means using ncwa
In the previous NCO post we used the record averager (ncra) command to average variables over multiple files along their record dimension, typically time. But what if we need to average, or apply other aggregating statistics to, a variable over space, time, or any other combination of dimensions we desire? The NetCDF weighted averager (ncwa) is just the right tool for the task.
Let's say we have 12 NetCDF files that each hold monthly model output for the water year 1992, in this case they contain simulation results from the Community Earth System Model:
Side Note: visualizing data in NetCDFs!
Any NetCDF file can be viewed quickly with the free tool ncview which you can easily get through the aptitude repository:
Using ncview is simple
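The commands are presumably along these lines (Debian/Ubuntu package name; the filename is just the monthly-output placeholder from the example above):

```shell
# Install ncview from the package repositories, then open any NetCDF file.
sudo apt-get install ncview
ncview model_out_1992-06.nc
```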
Alternatively you could use the NASA's free Panoply software to view and create custom high quality figures from your NetCDF files. Panoply has a lot of options and is great for publication plots but I prefer ncview for taking quick views of a NetCDF file on the fly while doing analysis. The figures below were created with Panoply.
For example, here is a plot of simulated ground temperature in the model_out_1992-06.nc file (June ground temperature).
Now, before we do any weighted averaging, let's concatenate the monthly output files along the record dimension (time) using ncrcat to make one NetCDF file, TG_concated.nc, which contains a time series of the monthly data. For simplicity, let's only consider the variable ground temperature (TG) shown above. We can specify the variable we want to subset using the -v option with ncrcat. Alternatively, we could have averaged the variables across their record dimension using ncra, but then we would no longer have the time dimension in the output file.
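Assuming the monthly files follow the naming pattern model_out_1992-MM.nc (my guess from the figure above), the concatenation would look like:

```shell
# Concatenate the 12 monthly files along the record (time) dimension,
# keeping only the TG variable; -O overwrites the output if it exists.
ncrcat -O -v TG model_out_1992-*.nc TG_concated.nc
```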
The resulting file now contains only the TG variable; the -O flag just says to overwrite the output file if it already exists. We can also get useful information about the ground temperature variable using ncinfo:
We can see TG is a 3D variable in time, latitude, and longitude with a corresponding array shape of (12, 192, 288) (months, latitude nodes, longitude nodes). We can leverage this information if we need to aggregate and reduce the dimensionality for plotting or analysis.
Side Note: viewing metadata of NetCDFs!
You might be wondering how to view the metadata of NetCDF files; there are several different ways: 1)
ncinfo netcdf_file will print the most basic and important information about the contents, such as all variables, dimensions, file history, etc.; 2)
ncdump -h netcdf_file provides more detailed info for each variable; 3)
ncks -m netcdf_file will produce output pertaining to dimensions; -m is useful for finding the length and units of dimensions, and you can pass it particular variables as well with -v; 4)
ncks -M netcdf_file will print out information regarding global attributes. Experiment with these as they are all super useful!
As we saw using ncinfo, our variable of interest is three-dimensional in time, latitude, and longitude, so we can average over these dimensions using ncwa (NetCDF weighted averager), thus reducing its dimensionality. In fact, the default behaviour of ncwa is to average over all dimensions, reducing the variable to a scalar. For example, just calling ncwa on TG_concated.nc:
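A minimal sketch of that call (the output filename here is my choice, not from the post):

```shell
# with no -a option, ncwa averages TG over ALL dimensions, leaving a scalar
ncwa -O TG_concated.nc TG_global_mean.nc
```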
The resulting scalar value for TG in the output file, 267.074 Kelvin, is the global average ground temperature over the 12 months. To avoid averaging over all dimensions, you must use the
-a option followed by the dimensions you want to average over. Using ncwa -a followed by the record dimension of the input file is equivalent to the default behaviour of the NCO record averager ncra. By specifying dimensions with -a we can get the latitudinal zonal mean ground temperature by averaging over the time and longitude axes only:
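Assuming CESM-style dimension names (time, lat, lon; an assumption on my part), the command would be along these lines:

```shell
# average over time and longitude only, leaving one value per latitude node
ncwa -O -v TG -a time,lon TG_concated.nc lat_zonal_mean.nc
```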
The new shape of TG in lat_zonal_mean.nc is 192, which is the size of the latitude array in the original file. What we have created is a file in which ground temperature has been averaged over 12 months and over every longitude cell, resulting in a single value for each latitude node. This output file is useful for making a latitudinal zonal mean plot of mean ground temperature:
It is not an error that the line breaks around -60 degrees latitude; this is because there is no land in this stretch of the world, and remember we are viewing ground temperature.
The NCO weighted average operator ncwa also supports subsetting the domain of operation over one or more dimensions using the syntax
ncwa -d dim,[min],[max],[stride] in.nc out.nc, where -d may be repeated once for each dimension to be subset. The resulting output file will have the weighted average only over the desired dimensional domain. If stride is not given then all data between min and max are included; a stride of 2 selects every other value of the dimension (e.g. every other day if dim is time), 3 every third, and so on. You may give min, max, or both; leave one side of the comma empty for an open-ended range. There is no limit on how many dimensions and ranges of dimensions you can subset over. For example, say we wanted to calculate the latitudinal zonal mean as before, but only for latitudes above the equator, and this time we want to see how the zonal average evolves through time:
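A sketch of such a command (dimension names again assumed to be time, lat, lon):

```shell
# keep the time axis, average over longitude, restrict to northern latitudes
ncwa -O -v TG -a lon -d lat,0.0,90.0 TG_concated.nc north_zonal_by_time.nc
```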
The resulting output contains the latitudinal zonal mean over the twelve months of data; the resulting 2D plot shows the mean ground temperature over the latitudinal axis (x axis) versus time (y axis).
The global monthly temperature from the model is out of phase because the concatenated files we started with were in water year order, that is, October through September. I chose not to interpolate the monthly data in the plot; as a result you can see 12 distinct bands, one for each month starting with October. That is not to say that the concatenation did not keep track of the dates correctly (it did), but this example shows that you have flexibility in which date you would like the time series to start from.
Masking with logical conditionals
In addition to subsetting the domain of operation by dimensions, ncwa supports logical conditional statements for selection based on a variable's value and a relational operator ($=, \ne, \gt, \lt, \ge, \le$). There are two methods to do this: the long way,
ncwa -m mask_variable -M mask_value -T Fortran_relational in.nc out.nc; or the short, simple way,
ncwa -B 'mask_variable relational mask_value' in.nc out.nc. Here
-m precedes the variable used for the masking; -T precedes a Fortran-style relational operator (eq, ne, gt, lt, ge, le, corresponding to $=, \ne, \gt, \lt, \ge, \le$); and -M precedes the masking value. If you use the -B option the valid relational operators are ==, !=, >, <, >=, <=, and you should make sure to enclose the conditional expression after -B in quotes.
For example, let's say we wanted to look at two variables from the model: soil evaporation rate (QSOIL) and plant transpiration rate (QVEGT). With ncwa we could calculate the evaporation for locations where the transpiration is high, or vice versa, just to see how this NCO operation works. This requires three steps: first we concatenate the monthly files, this time keeping these two variables; second we calculate the global mean transpiration; and last we calculate the global mean evaporation for locations where transpiration is above the global mean and plot our results.
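The three steps could look like this (output filenames are my own; 9.58266e-06 is the global mean QVEGT value quoted below):

```shell
# 1) concatenate the monthly files, keeping the two variables of interest
ncrcat -O -v QSOIL,QVEGT model_out_1992-*.nc Q_concated.nc
# 2) global mean transpiration (ncwa averages over all dimensions by default)
ncwa -O -v QVEGT Q_concated.nc QVEGT_global_mean.nc
# 3) mean soil evaporation only where transpiration exceeds that global mean
ncwa -O -B 'QVEGT > 9.58266e-06' -a time Q_concated.nc QSOIL_masked.nc
```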
The resulting plot of soil evaporation (QSOIL) in the output file looks like:
The logical indexing resulting from
-B 'QVEGT > 9.58266e-06' appears to have masked the world's largest deserts in the output file. This is an obvious result considering the dearth of vegetation in these regions; and although this example does not clearly illustrate the relationship between evaporation and transpiration (which I expect is also dynamic seasonally), with a well thought-out plan the masking option can be quite useful for scientific data analysis.
Weighted averaging
You might be wondering by now how to calculate
weighted averages with ncwa; after all, ncwa stands for NetCDF weighted averager! It's simple: the only extra switch required is -w followed by the variable to use for the weights. Sometimes it is useful to include the -N option, which tells ncwa to use only the numerator of the weighted average, i.e. to integrate the variable over the selected weighting variable such as grid cell area. Variables (at least from model output) will often be area-weighted to begin with; if so,
-w area without
-N will produce the same output as not weighting by area in the first place. However, if you use
-N -w area -v var, ncwa will calculate the area$\times$var product in each cell without ever dividing by the sum of the areas, thus integrating the variable over area. Here are two examples to clarify.
For the first example we will calculate the annual mean leaf area index (LAI) as simulated by the Community Earth System Model. For comparison we will calculate the same mean weighted by simulated plant transpiration (QVEGT). The result from the latter will cause grid cells that have high LAI and high QVEGT to stand out while lessening the value of LAI where QVEGT is below average.
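Assuming the variable is simply named LAI in a concatenated file LAI_concated.nc (both names are my own, not from the post), the two means would be computed roughly as:

```shell
# plain annual mean LAI per grid cell
ncwa -O -v LAI -a time LAI_concated.nc LAI_mean.nc
# the same mean, weighted by simulated transpiration
ncwa -O -v LAI -w QVEGT -a time LAI_concated.nc LAI_qvegt_weighted.nc
```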
Here is the corresponding plot for the non-weighted mean LAI:
And here is the corresponding plot for the mean LAI weighted by QVEGT:
There are clear differences between the weighted and non-weighted LAI maps; weighting allows us to visualize the locations where vegetation is both dense and transpiring. One observation I made by viewing both plots is that there are large regions in northern North America and Asia that transpire quite a bit yet do not correspond to the highest LAI; most of this region is home to the Boreal, or Taiga, coniferous forests. Perhaps the Boreal forests are transpiring significantly but are not as dense as, say, the tropical rainforests of the world. We could have seen the impact of transpiration on LAI more clearly by simply differencing these two plots, but I wanted to keep the interpretation of the weighting operation as simple as possible.
Just to clarify the use of the
-N option, this example shows how to integrate a variable over grid cell area and time. This is straightforward.
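A sketch of the integration command (area is assumed to be the grid cell area variable; filenames are mine):

```shell
# sum the LAI*area product over time without dividing by the total area,
# i.e. integrate LAI over grid cell area and the 12 months
ncwa -O -v LAI -N -w area -a time LAI_concated.nc LAI_integrated.nc
```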
Now let's take a look at the familiar LAI variable after it has been integrated with respect to grid cell area and time:
By integrating with respect to grid cell area (km$^2$) and time, the values of LAI are now very large and not meaningful in themselves; to be precise, the values are simply the product of LAI and the grid cell area summed over the 12-month time length: $$\mbox{Value for each grid cell above}~=~ \sum_{n=1}^{12} LAI_{n,i,j}\times GCA_{i,j},$$ where $GCA_{i,j}$ is the grid cell area (km$^2$) for the cell at the $i^{th}$ latitude index and the $j^{th}$ longitude index on the model grid, which is at 1$^\circ$ resolution. We see that by doing this the values near the equator increased, and as we move towards the poles the LAI values decreased, as compared to the non-weighted plot of LAI above. This is simply a result of the spherical grid that the model utilizes, which has much larger grid cell areas near the equator.
The
-N option is equivalent to -y ttl, which tells ncwa to take the sum rather than the mean over the dimensions specified after -a; since area is not a dimension in the files we are working with, we needed to list area as the weight variable. Note that integrating some variables may or may not make sense depending on what you are trying to accomplish; remember that some variables may already have been area-weighted, and take special care to understand what units you are dealing with. For example, QVEGT above has units of mm/s. If you weighted QVEGT by grid cell area (which happens to be in km$^2$) and summed over time as shown above for LAI, you would get units of $12\cdot mm\cdot km^2/s$. You could then calculate the average volumetric flux by dividing by 12 and converting the km$^2$ to mm$^2$.
Computing other aggregating statistics besides averaging
ncwa averages by default but also supports additional aggregating statistics, stated after the
-y option. These methods also work for ncra, ncea, and nces. The ncra (NetCDF record averager) is a less sophisticated version of ncwa; you can find more about it and ncea (NetCDF ensemble averager) here. As mentioned, the -N option is equivalent to -y ttl and tells ncwa to sum over the dimensions listed after the -a option. Here is a list of all the statistics that ncwa, ncra, ncea, and nces support, with the operation keyword followed by a colon and its description:
avg: mean value
ttl: sum of values
sqravg: square of mean
sqrt: square root of the mean
avgsqr: mean of sum of squares
max: maximum value
min: minimum value
mabs: maximum absolute value
mebs: mean of absolute value
mibs: minimum absolute value
rms: root-mean-square (normalized by N)
rmssdn: root-mean-square (normalized by N-1)
For example, combining -y max with -d subsetting will calculate the max of the monthly simulated QVEGT for a rectangular region around South America.
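A hedged reconstruction of that command; the latitude/longitude bounds for "around South America" are my guesses, not from the post:

```shell
# per-cell maximum of monthly QVEGT over a South American window
ncwa -O -y max -a time -d lat,-60.0,15.0 -d lon,270.0,330.0 Q_concated.nc QVEGT_max_SA.nc
```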
Final remarks
As we saw, the NCO command line tool for weighted averaging (ncwa) is powerful and flexible; it is not limited to weighted averaging but can compute a range of aggregating statistics, with or without weights, including subsetting and logical indexing or masking. This makes ncwa one of my favorite NCO tools because it can do so many jobs. Once you get familiar with it, I think you will prefer it over opening NetCDF files in your language of choice and doing the operations there, because the NCO syntax is succinct and the inner workings are optimized for speed and efficient memory use. Coupled with a scripting language of your choice, NCO becomes even more powerful. If you enjoyed this post or have suggestions, please comment, like, subscribe, and keep visiting! Cheers and happy NetCDF-ing :)
|
Is there any hope in solving the following linear system efficiently with an iterative method?
$A \in \mathbb{R}^{n \times n}, x \in \mathbb{R}^n, b \in \mathbb{R}^n \text{, with } n > 10^6$
$Ax=b$
with
$ A=(\Delta - K) $, where $\Delta$ is a very sparse matrix with a few diagonals, arising from the discretization of the Laplace operator. On its main diagonal there is $-6$, and there are $6$ other diagonals with $1$ on them.
$K$ is a full $\mathbb{R}^{n \times n}$ matrix that consists completely of ones.
Solving the system with $A=\Delta$ works fine with iterative methods like Gauss-Seidel, because it is a sparse diagonally dominant matrix. I suspect that the problem with $A=(\Delta - K)$ is pretty much impossible to solve efficiently for large $n$, but is there any trick to solve it, perhaps by exploiting the structure of $K$?
EDIT: Would doing something like
$\Delta x^{k+1} = b + Kx^{k}$ // solve for $x^{k+1}$ with Gauss-Seidel
converge to the correct solution? I read that such a splitting method converges if $\rho(\Delta^{-1} K) < 1$, where $\rho$ is the spectral radius. I manually calculated the eigenvalues of $\Delta^{-1} K$ for some different small values of $n$, and they're all zero except one, which has a pretty large negative value (magnitude about $500$ for $n=256$). So I guess that wouldn't work.
EDIT: More information about $\Delta$:
$\Delta \in \mathbb{R}^{n \times n}$ is symmetric and is negative definite and diagonally dominant.
It is created the following way in matlab
n=W*H*D;
e=ones(W*H*D,1);
d=[e,e,e,-6*e,e,e,e];
delta=spdiags(d, [-W*H, -W, -1, 0, 1, W, W*H], n, n);
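In fact the eigenvalue observation above follows from structure: $K$ is the all-ones matrix, i.e. $K = ee^T$ with $e$ the all-ones vector, so it has rank one and $\Delta^{-1}K$ has at most one nonzero eigenvalue (equal to $e^T\Delta^{-1}e$). A small NumPy/SciPy sketch of my own, mirroring the MATLAB construction above on a smaller grid, checks this:

```python
import numpy as np
import scipy.sparse as sp

# small 3D grid; mirrors the MATLAB spdiags construction above (W=H=D=4)
W = H = D = 4
n = W * H * D
e = np.ones(n)
data = np.array([e, e, e, -6 * e, e, e, e])
offsets = [-W * H, -W, -1, 0, 1, W, W * H]
delta = sp.spdiags(data, offsets, n, n).toarray()

K = np.ones((n, n))  # rank one: K = e e^T

# eigenvalues of delta^{-1} K: since rank(K) = 1, at most one is nonzero
lam = np.linalg.eigvals(np.linalg.solve(delta, K))
nonzero = lam[np.abs(lam) > 1e-8]
print(len(nonzero))  # 1: a single large negative eigenvalue, the rest vanish
```

Incidentally, a rank-one perturbation is exactly what update formulas such as Sherman-Morrison exploit: applying $(\Delta - ee^T)^{-1}$ costs only two sparse solves with $\Delta$, provided the update does not make the matrix singular.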
|
For $z > 0$, you have $B = - \sinh(ka) / \cosh (ka)$, so
$$ \phi = \frac{A}{\cosh (ka)} \left( \cosh (ka) \sinh (kz) - \sinh(ka) \cosh(kz) \right) = \frac{A}{\cosh(ka)} \sinh(k(z-a)).$$
[By the way, if you had written the general solution in the form $\phi = C \sinh(k(z - a)) + D \cosh (k(z - a))$, then it would have been obvious that $D = 0$ from the boundary condition at $z = a$, which would have led you immediately to this expression for $\phi$. Of course, $C = A / \cosh(ka)$.]
Similarly, for $z < 0$, we have$$ \phi = \frac{A'}{\cosh (ka)}\sinh(k(z + a)).$$
All that remains is to find $A$ and $A'$. You do this by matching the boundary conditions at $z = 0$.
The original equation is$$ -k^2 \phi + \frac{d^2\phi}{dz^2} = -2\delta(z).$$
If we integrate both sides over an infinitesimally thin interval around $z = 0$, we have
$$ \lim_{z \to 0^+} \frac{d\phi }{dz} - \lim_{z \to 0^-} \frac{d\phi }{dz} = -2$$
So you need to choose $A$ and $A'$ such that $d\phi / dz$ has a discontinuity of $-2$ at $z = 0$ (but such that $\phi$ itself is continuous). In other words, you need
$$ \frac{A}{\cosh (ka)} \times k\cosh(k(0-a)) - \frac{A'}{\cosh(ka)} \times k\cosh(k(0+a)) = -2$$$$ \frac{A}{\cosh (ka)} \times \sinh(k(0-a)) + \frac{A'}{\cosh(ka)} \times \sinh(k(0+a)) = 0$$and this is solved by$$ A = -1, \ \ \ A' = +1.$$
|
I am solving this for particles distributed in a 2D space. First we know that the moment of inertia of a particle about an axis is given by
$$I=Mr^2$$
And we know that the axis of minimum moment of inertia passes through the COM (among parallel axes, the one through the COM minimizes the inertia). Our job is to find the slope of the axis with minimum inertia; then we can use the point-slope form to find the equation of the required axis:
$$y - y_0 = m ( x - x_0 )$$
(Where m is the slope of the line and $(x_0, y_0)$ is the point through which it passes)
Now let us consider the distribution of masses, the coordinate of COM is given by:$$x_{cm} = \frac{1}{M} \sum_i M_i x_i$$$$y_{cm} = \frac{1}{M} \sum_i M_i y_i$$
And now let us shift the origin to the COM to make our calculations easy. If the original coordinate of mass $M_i$ was $(x_i, y_i)$ then the new shifted coordinates are:\begin{align}x_i' & = x_i - x_{cm} \\y_i' & = y_i - y_{cm} \end{align}Hence the axis that we seek now passes through the new shifted origin.
Now let the equation of the line be
$$y = mx$$(as this line passes through origin, $c = 0$).
Then the distance $r_i$ of the $i$th particle from this line, is given by :\begin{align}r_i = \frac{|m x_i' -y_i'|}{\sqrt{1+m^2}}\end{align}
Hence the total moment of Inertia of the system is :
\begin{align}I = \sum M_i r_i^2\end{align}
Now we can differentiate it with respect to $m$ (the slope) and equate it to zero to find the minimum:$$\frac{dI}{dm} = \sum \frac{2M_i(mx_i'-y_i')(x_i'+my_i')}{(1+m^2)^2}$$
Equating it to zero gives :
$$\sum M_i(mx_i'-y_i')(x_i'+my_i')=0$$
Solving for m gives the following quadratic equation :
$$\left(\sum M_i x_i'y_i'\right)m^2 + \left(\sum M_i (x_i'^2-y_i'^2)\right)m - \left(\sum M_i x_i'y_i'\right) = 0$$
Note that the product of the roots (that is, the slopes) is $-1$, i.e. this gives two axes perpendicular to each other; one of them has the minimum moment of inertia and the other the maximum (they can be told apart by the second-derivative test on $I(m)$).
Thus the axis we seek, in the original coordinate system, may be written as:$$y-y_{cm} = m(x-x_{cm}),$$where $m$ is the root corresponding to minimum inertia.
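As a sanity check, here is a short NumPy sketch (with made-up masses and positions) that builds the quadratic above, solves it, and confirms that the product of the two slopes is $-1$ and that the corresponding axes give the extremes of inertia among axes through the COM:

```python
import numpy as np

# hypothetical example data: four point masses in the plane
M = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([0.0, 1.0, 2.0, 0.5])
y = np.array([0.0, 2.0, 1.0, 3.0])

# shift to centre-of-mass coordinates
xc, yc = (M * x).sum() / M.sum(), (M * y).sum() / M.sum()
xp, yp = x - xc, y - yc

# quadratic in the slope m: a m^2 + b m - a = 0,
# with a = sum M x'y' and b = sum M (x'^2 - y'^2)
a = (M * xp * yp).sum()
b = (M * (xp ** 2 - yp ** 2)).sum()
m1, m2 = np.roots([a, b, -a]).real  # real roots: discriminant b^2 + 4a^2 > 0

def inertia(m):
    # I(m) = sum_i M_i (m x_i' - y_i')^2 / (1 + m^2)
    return (M * (m * xp - yp) ** 2).sum() / (1 + m ** 2)

print(np.isclose(m1 * m2, -1.0))  # True: the two critical axes are perpendicular
```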
|
Do you want iPad2 or iPod Touch for free?
The prizes for KAIST Math POW are getting better, thanks to the department's support. From this fall semester, we will have the following prizes:
1st prize: iPad2 16GB
2nd prize: iPod Touch 32GB
3rd prize: 5 WEEKDAY DINNER gift certificates for a buffet restaurant in Yuseong
4th prize: 3 WEEKDAY DINNER gift certificates for a buffet restaurant in Yuseong
5th prize: 2 WEEKDAY DINNER gift certificates for a buffet restaurant in Yuseong
In Seoul Subway Line 2, subway stations are placed around a circular subway line. Assume that each segment of Seoul Subway Line 2 has a fixed price. Suppose that you hid money at each subway station so that the sum of the money is only enough for one roundtrip around Seoul Subway Line 2.
Prove that there is a station that you can start and take a roundtrip tour of Seoul Subway Line 2 while paying each segment by the money collected at visited stations.
The best solution was submitted by Kang, Dongyub (강동엽), Department of Computer Science, Class of 2009. Congratulations!
Here is his Solution of Problem 2011-22. (Typo in the lemma: replace \(a_{n+i}=a_n\) with \(a_{n+i}=a_i\).)
Alternative solutions were submitted by 서기원 (Dept. of Mathematical Sciences, Class of 2009, +3, Alternative Solution), 장경석 (Class of 2011, +3), 김태호 (Class of 2011, +3), 김범수 (Dept. of Mathematical Sciences, Class of 2010, +3), and 박준하 (Hana High School, 2nd year, +3).
Let \(f:\mathbb{R}^n\to \mathbb{R}^{n-1}\) be a function such that for each point a in \(\mathbb{R}^n\), the limit $$\lim_{x\to a} \frac{|f(x)-f(a)|}{|x-a|}$$ exists. Prove that f is a constant function.
For a nonnegative integer n, let \(F_n(x)=\sum_{m=0}^n \frac{(-2)^m (2n-m)! \Gamma(x+1)}{m! (n-m)! \Gamma(x-m+1)}\). Find all x such that \(F_n(x)=0\).
The best solution was submitted by Bumsu Kim (김범수), Dept. of Mathematical Sciences, Class of 2010.
Here is his Solution of Problem 2011-21.
In Seoul Subway Line 2, subway stations are placed around a circular subway line. Assume that each segment of Seoul Subway Line 2 has a fixed price. Suppose that you hid money at each subway station so that the sum of the money is only enough for one roundtrip around Seoul Subway Line 2.
Prove that there is a station that you can start and take a roundtrip tour of Seoul Subway Line 2 while paying each segment by the money collected at visited stations.
For a real number x, let \(d(x)=\min_{n\in\mathbb{Z}} (x-n)^2\). Evaluate the following doubly infinite series: $$\cdots + 8\, d(x/8)+4\, d(x/4) + 2\, d(x/2) + d(x) + \frac{d(2x)}{2} + \frac{d(4x)}{4} + \frac{d(8x)}{8} + \cdots$$
The best solution was submitted by Gee Won Suh (서기원), Dept. of Mathematical Sciences, Class of 2009. Congratulations!
Here is his Solution of Problem 2011-20.
Alternative solutions were submitted by 박승균 (Dept. of Mathematical Sciences, Class of 2008, Alternative Solution, +3) and 장경석 (Class of 2011, +3).
For a nonnegative integer n, let \(F_n(x)=\sum_{m=0}^n \frac{(-2)^m (2n-m)! \Gamma(x+1)}{m! (n-m)! \Gamma(x-m+1)}\). Find all x such that \(F_n(x)=0\).
Find all n≥2 such that the polynomial \(x^n-x^{n-1}-x^{n-2}-\cdots-x-1\) is irreducible over the rationals.
The best solution was submitted by Gee Won Suh (서기원), Dept. of Mathematical Sciences, Class of 2009. Congratulations!
Here is his Solution of Problem 2011-19.
One incorrect solution by W.J. Kim was submitted.
For a real number x, let \(d(x)=\min_{n\in\mathbb{Z}} (x-n)^2\). Evaluate the following doubly infinite series: $$\cdots + 8\, d(x/8)+4\, d(x/4) + 2\, d(x/2) + d(x) + \frac{d(2x)}{2} + \frac{d(4x)}{4} + \frac{d(8x)}{8} + \cdots$$
|
1. Homework Statement
Hi all, I'm currently reviewing for a final and would like some help understanding a certain part of this particular problem: Determine the retarded Green's Function for the D'Alembertian operator ##D = \partial_s^2 - \Delta##, where ##\Delta \equiv \nabla \cdot \nabla## , and which satisfies $$ (\partial_s^2 - \Delta)G(\vec{x},s) = \delta^3(\vec{x}) \delta(s). $$
2. Homework Equations
Define the Fourier Transform as $$ \mathfrak{F}[f(\vec{x})](\vec{k}) = \hat f (\vec{k}) = \frac{1}{(\sqrt{2\pi})^3} \int d\vec{x} e^{i\vec{k} \cdot \vec{x}} f(\vec{x}). $$
3. The Attempt at a Solution
I know how to solve the spatial part of this problem. That is, taking the Fourier transform of the spatial part of the RHS of the differential equation given, $$ \mathfrak{F}[\delta^3(\vec{x})] = \frac{1}{(\sqrt{2\pi})^3} \int d\vec{x} e^{i\vec{k} \cdot \vec{x}} \delta^3(\vec{x}) = \frac{1}{(\sqrt{2\pi})^3}. $$ And, for the LHS, while a bit longer, doing out the integrals yields that $$ \mathfrak{F}[\Delta G](\vec{k}) = -k^2 \hat G(\vec{k}), k^2 = k_1^2 + k_2^2 + k_3^2. $$ Now, using these results to rewrite the D'Alembertian acting on the Green's Function, we have that $$ (\partial_s^2 + k^2)\hat G = \frac{\delta(s)}{(\sqrt{2\pi})^3}. $$ Now, the homework assignment gives as a hint to next verify that $$ \hat G (\vec{k}, s) = H(s)\frac{sin(|\vec{k}|s)}{|\vec{k}|} $$ is a solution to the equation, where ##H(s)## is the Heaviside function and is given by $$ H(s) = \begin{cases} 0 & \text{if } s< 0 \\ 1 & \text{if } s \geq 0 \end{cases}. $$ My first question is why you would think to use this particular solution involving the Heaviside function, and where this comes from. Is it just because you want an (oscillating? why?) solution that turns on for ##s > 0## so that you preserve causality?
Next, I'm told to show the following result formally: $$\mathfrak{F}[\delta(|\vec{x}| - R)] = 4\pi R\frac{sin(|\vec{k}|R)}{|\vec{k}|}. $$ This I can also do and feel comfortable showing (and will save a lot of time not writing here in latex). I am then told to use this result to calculate Inverse Fourier Transform of ##\hat G##. But I'm not sure how to do this correctly, since I'm told I'm supposed to arrive at $$ G_R(\vec{x}, s) = \frac{H(s)}{4\pi s}\delta(s - |\vec{x}|), s=ct,$$ and I have written for my solution just that $$ \hat G(\vec{k}, s) = H(s)\frac{sin(|\vec{k}|s)}{|\vec{k}|} = \frac{H(s)}{4\pi s}\delta(|\vec{x}|-s) \Rightarrow G(\vec{x},s) = \frac{H(s)}{4\pi s}\delta(s - |\vec{x}|). $$ Now clearly this doesn't work (and if it does, it doesn't make much sense to me). Why is it that the arguments within the delta function switch signs? Moreover, I'm not sure the correct way to get the retarded Green's Function from its Fourier Transform.
My professor also wrote this in her course book without an explanation, and simply uses the result to eventually obtain the electromagnetic potentials in the Lorentz gauge. If anyone has any other thoughts to help make this method and steps more intuitive, that would also be greatly appreciated, thanks!
|
Let $\{X_\alpha\}_{\alpha \in I}$ be a collection of mutually disjoint measurable subsets of $\mathbb{R}$. Show that at most countably many of them have positive measure.
I want to see if my proof is correct.
Proof:
Let $\{X_\alpha\}_{\alpha \in \Gamma \subset I}$ be the subcollection such that $m(X_\alpha) > 0, \forall \alpha \in \Gamma$.
Since each set is measurable and has positive measure,
$$ \forall X_{\alpha}, \alpha \in \Gamma, \exists O_{\alpha} = (a_\alpha,b_\alpha) \mbox{ such that } O_\alpha \subset X_{\alpha}$$
Now we must show that $|\Gamma| \leq |\mathbb{N}|$
Let $f: \{O_{\alpha}\}_{\alpha \in \Gamma} \to \mathbb{Q}$ be the function that $$f(O_\alpha) = q_{\alpha}, \mbox{ where }q_\alpha \in (a_\alpha,b_\alpha)$$
This is possible because $\mathbb{Q}$ is dense in $\mathbb{R}$
This function is clearly injective since $\{O_\alpha\}$ is a disjoint collection of open sets. Hence, the same rational can't be in two different open intervals from this collection.
Since $f$ is injective, $|\{O_\alpha\}_{\alpha \in \Gamma}| = |\Gamma| \leq |\mathbb{Q}| = |\mathbb{N}| = \aleph_0$.
Q.E.D
|
Consider a test particle of mass $m$ which is in orbit around a spherical-symmetric body with mass $M$. It therefore has a position as described by the coordinates $r,\phi$, and its motion can be described by the Lagrangian $L$ of the Einstein-Infeld-Hoffman-Equations:
$$L = \frac{mv^2}{2}+ \frac{GmM}{r}+\frac{mv^4}{8c^2} + \frac{3GmMv^2}{2c^2r}-\frac{kmM\left(m+M\right)}{2c^2r^2},$$
where $v$ is the particle's velocity.
But the orbit of the test-particle can also be described by the Schwarzschild-Metric and the corresponding Lagrangian $\mathcal{L}$
$$\mathcal{L} = -\frac{1}{2}\left[-\left(1-\frac{2 G m}{c^2 r}\right) c^2 \dot{t}^2 + \left(1-\frac{2 G m}{c^2 r}\right)^{-1}\dot{r}^2 + r^2 \dot{\varphi}^2\right],$$ where the dot denotes the derivative with respect to the proper time $\tau$ of the particle along the world line.
I know that the Newtonian Lagrangian for a test particle can be derived by requiring $\frac{v}{c}\rightarrow 0$.
Since $L$ simply adds some extra terms to the Lagrangian, it should be possible to do something similar here.
But what kind of expansion is needed to arrive at $L$ from $\mathcal{L}$?
|
Attractors¶ Visualizing Attractors¶
An attractor is a set of values to which a numerical system tends to evolve. An attractor is called a strange attractor if the resulting pattern has a fractal structure. This notebook shows how to calculate and plot two-dimensional attractors of a variety of types, using code and parameters primarily from Lázaro Alonso, François Pacull, Jason Rampe, Paul Bourke, and James A. Bednar.
Clifford Attractors¶
For example, a Clifford Attractor is a strange attractor defined by two iterative equations that determine the
x,y locations of discrete steps in the path of a particle across a 2D space, given a starting point (x0,y0) and the values of four parameters (a,b,c,d):
\begin{equation} x_{n +1} = \sin(a y_{n}) + c \cos(a x_{n})\\ y_{n +1} = \sin(b x_{n}) + d \cos(b y_{n}) \end{equation}
At each time step, the equations define the location for the following time step, and the accumulated locations show the areas of the 2D plane most commonly visited by the imaginary particle.
It's easy to calculate these values in Python using Numba. First, we define the iterative attractor equation:
import numpy as np, pandas as pd, datashader as ds
from datashader import transfer_functions as tf
from datashader.colors import inferno, viridis
from numba import jit
from math import sin, cos, sqrt, fabs

@jit(nopython=True)
def Clifford(x, y, a, b, c, d, *o):
    return sin(a * y) + c * cos(a * x), \
           sin(b * x) + d * cos(b * y)
We then evaluate this equation 10 million times, creating a set of
x,y coordinates visited. The
@jit here and above is optional, but it makes the code 50x faster.
n = 10000000

@jit(nopython=True)
def trajectory_coords(fn, x0, y0, a, b=0, c=0, d=0, e=0, f=0, n=n):
    x, y = np.zeros(n), np.zeros(n)
    x[0], y[0] = x0, y0
    for i in np.arange(n-1):
        x[i+1], y[i+1] = fn(x[i], y[i], a, b, c, d, e, f)
    return x, y

def trajectory(fn, x0, y0, a, b=0, c=0, d=0, e=0, f=0, n=n):
    x, y = trajectory_coords(fn, x0, y0, a, b, c, d, e, f, n)
    return pd.DataFrame(dict(x=x, y=y))
%%time
df = trajectory(Clifford, 0, 0, -1.3, -1.3, -1.8, -1.9)
CPU times: user 1.8 s, sys: 196 ms, total: 1.99 s
Wall time: 1.99 s
df.tail()
                x         y
9999995  1.816108 -1.518004
9999996  2.198861  0.040714
9999997  1.675459 -2.176648
9999998  1.334090  0.987108
9999999 -0.665912 -1.525518
We can now aggregate these 10,000,000 continuous coordinates into a discrete 2D rectangular grid with Datashader, counting each time a point fell into that grid cell:
%%time
cvs = ds.Canvas(plot_width=700, plot_height=700)
agg = cvs.points(df, 'x', 'y')
print(agg.values[190:195, 190:195], "\n")
[[ 34  38  32  43  24]
 [ 25  29  30  34  34]
 [117  30  37  36  29]
 [136 180 117  63  44]
 [ 59  86 132 130  78]]

CPU times: user 771 ms, sys: 105 µs, total: 771 ms
Wall time: 769 ms
A small portion of that grid is shown above, but it's difficult to see the grid's structure from the numerical values. To see the entire array at once, we can turn each grid cell into a pixel, using a greyscale value from white to black:
ds.transfer_functions.Image.border = 0
tf.shade(agg, cmap=["white", "black"])
|
Suppose $A$ is a Borel measurable subset of [0,1], $m$ is Lebesgue measure, and $\varepsilon\in (0,1)$. Prove that there exists a continuous function $f: [0,1]\to \mathbb{R}$ such that $0\le f\le 1$ and $$ m(\{x:f(x)\ne\chi_A(x)\})<\varepsilon. $$ Here $\chi_A(x)$ is the indicator function on $A$.
If $A$ is Borel measurable, then it is certainly Lebesgue measurable. The Lebesgue measure is
regular, meaning that for any $\epsilon > 0$, you can find a closed set $F$ and an open set $U$ with $F \subset A \subset U$ such that$ m(U \backslash F) < \epsilon.$
By Urysohn's lemma, there exists a continuous function $f:[0,1]\to \mathbb R$ with $0 \leq f \leq 1$ such that $f(x) = 0$ for $x \notin U$ and $f(x) = 1$ for $x \in F$. Thus the only place where $f \neq \chi_A$ is the region $U \backslash F$, whose measure is less than $\epsilon$ by assumption.
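On a metric space such as $[0,1]$, Urysohn's lemma in fact has an explicit witness. Writing $d(x,S) = \inf_{s \in S}|x - s|$ and $U^c = [0,1] \setminus U$, a standard choice (in the nondegenerate case where $F$ and $U^c$ are both nonempty) is $$ f(x) = \frac{d(x, U^c)}{d(x, U^c) + d(x, F)}, $$ which is continuous because $F$ and $U^c$ are disjoint closed sets (so the denominator never vanishes), takes values in $[0,1]$, equals $1$ on $F$, and equals $0$ outside $U$.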
|
I am assuming a very simple case, where there is only a mass $m$ with position $x$ under an external force $F$. We know that the Lagrangian takes the form $L = (1/2) m \dot{x}^2$, from which the equation of motion follows as $$\frac{d}{d t} \frac{\partial L}{\partial \dot{x}}= m \ddot{x} = F.\tag{1}$$
Now, consider a coordinate transformation $$y(t) = \frac{x(t)}{n(t)} + \int_0^t\frac{x(\tau)\dot{n}(\tau)}{n^2(\tau)} d\tau\tag{2}$$ which yields $$\dot{y}(t)=\frac{\dot{x}(t)}{n(t)}.\tag{3}$$ I am wondering what is wrong with the following derivation if I want to find the equations of motion through Euler-Lagrange equation in this new coordinate system:
The external force can be mapped to the new coordinate system by the equivalence of virtual work: $F_\text{new} = n(t) F$.
The Lagrangian can be expressed in the new frame as $L=(1/2) m (\dot{y}(t) n(t))^2$.
Therefore, the EL equation is obtained as $$\frac{d}{d t} \frac{\partial L}{\partial \dot{y}} =m n^2(t) \ddot{y}(t)+ 2m \dot{n}(t) n(t) \dot{y}(t)= F_\text{new},\tag{4}$$ which doesn't seem correct to me, since a simple substitution into the equation of motion directly gives $$m n^2(t) \ddot{y} + m \dot{n} n(t) \dot{y}(t) = F_\text{new}\tag{5}$$ instead.
I appreciate your ideas about this discrepancy.
|
I'll begin with a general remark: first-order information (i.e., using only gradients, which encode slope) can only give you directional information: it can tell you that the function value decreases in the search direction, but not for how long. To decide how far to go along the search direction, you need extra information (gradient descent with constant step lengths can fail even for convex quadratic problems). For this, you basically have two choices:
1. Use second-order information (which encodes curvature), for example by using Newton's method instead of gradient descent (for which you can always use step length $1$ sufficiently close to the minimizer).
2. Trial and error (by which of course I mean using a proper line search such as Armijo).
If, as you write, you don't have access to second derivatives, and evaluating the objective function is very expensive, your only hope is to compromise: use enough approximate second-order information to get a good candidate step length such that a line search needs only $\mathcal{O}(1)$ evaluations (i.e., at most a (small) constant multiple of the effort you need to evaluate your gradient).
One possibility is to use
Barzilai-Borwein step lengths (see, e.g., Fletcher: On the Barzilai-Borwein method. Optimization and control with applications, 235–256, Appl. Optim., 96, Springer, New York, 2005). The idea is to use a finite difference approximation of the curvature along the search direction to get an estimate of the step size. Specifically, choose $\alpha_0>0$ arbitrary, set $g^0:=\nabla f(x^0)$, and then for $k=0,1,\dots$:
1. Set $s^k = -\alpha_k^{-1} g^k$ and $x^{k+1}=x^k+s^k$.
2. Evaluate $g^{k+1}=\nabla f(x^{k+1})$ and set $y^k = g^{k+1}-g^{k}$.
3. Set $\alpha_{k+1} = \frac{(y^k)^Ty^k}{(y^k)^Ts^k}$.
This choice can be shown to converge (in practice very quickly) for quadratic functions, but the convergence is
not monotone (i.e., the function value $f(x^{k+1})$ can be larger than $f(x^k)$, but only once in a while; see the plot on page 10 in Fletcher's paper). For non-quadratic functions, you need to combine this with a line search, which needs to be modified to deal with the non-monotonicity. One possibility is choosing $\sigma_k \in (0,\alpha_k^{-1})$ (e.g., by backtracking) such that$$ f(x^k - \sigma_k g^k) \leq \max_{\max(k-M,1)\leq j\leq k} f(x^j) - \gamma \sigma_k (g^k)^Tg^k,$$where $\gamma\in(0,1)$ is the typical Armijo parameter and $M$ controls the degree of monotonicity (e.g., $M=10$). There's also a variant that uses gradient values instead of function values, but in your case the gradient is even more expensive to evaluate than the function, so that doesn't make sense here. (Note: You can of course try to blindly accept the BB step lengths and trust your luck, but if you need any sort of robustness -- as you wrote in your comments -- that would be a really bad idea.)
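A minimal sketch of the BB iteration with the nonmonotone (max-rule) line search just described, in Python with numpy (the function name and defaults are illustrative, not from any reference implementation):

```python
import numpy as np

def bb_nonmonotone(f, grad, x0, alpha0=1.0, M=10, gamma=1e-4, tol=1e-8, maxit=500):
    """Barzilai-Borwein step lengths with a nonmonotone Armijo line search.
    Sketch only, not a reference implementation."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    fvals = [f(x)]                          # recent values for the max-rule
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        sigma = 1.0 / alpha                 # candidate step length
        fmax = max(fvals[-M:])              # nonmonotone reference value
        # backtrack until f(x - sigma*g) <= fmax - gamma*sigma*g'g
        while f(x - sigma * g) > fmax - gamma * sigma * g.dot(g):
            sigma *= 0.5
        s = -sigma * g
        x = x + s
        g_new = grad(x)
        y = g_new - g
        # BB curvature estimate along the step; guard the denominator
        alpha = y.dot(y) / y.dot(s) if y.dot(s) > 0 else alpha0
        g = g_new
        fvals.append(f(x))
    return x
```

On a convex quadratic this typically converges quickly; the max-rule tolerates the occasional increase in the function value while still enforcing eventual decrease.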
An alternative (and, in my opinion, much better) approach would be to use this finite difference approximation already in the computation of the search direction; this is called a
quasi-Newton method. The idea is to incrementally build an approximation of the Hessian $\nabla^2 f(x^k)$ by using differences of gradients. For example, you could take $H_0=\mathrm{Id}$ (the identity matrix) and for $k=0,\dots$ solve$$H_{k}s^{k} = -g^{k},\label{cc1}\tag{1}$$and set$$H_{k+1} = H_k + \frac{(y^k-H_ks^k)(s^k)^T}{(s^k)^Ts^k}$$with $y^k$ as above and $x^{k+1} = x^k +s^k$. (This is called the Broyden update and is rarely used in practice; a better but slightly more complicated update is the BFGS update, for which -- and more information -- I refer to Nocedal and Wright's book Numerical Optimization.) The downside is that a) this would require solving a linear system in each step (but only of the size of the unknown which in your case is an initial condition, hence the effort should be dominated by solving PDEs to get the gradient; also, there exist update rules for approximations of the inverse Hessian, which only require computing a single matrix-vector product) and b) you still need a line search to guarantee convergence...
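For illustration, here is a bare-bones numpy sketch of the rank-one Broyden iteration just described (function name is illustrative; no line search or safeguards, so this is not globally convergent and BFGS would be preferred in practice):

```python
import numpy as np

def broyden_descent(grad, x0, maxit=50, tol=1e-10):
    """Quasi-Newton iteration with the rank-one Broyden update.
    Illustration only: no line search, no safeguards."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                  # H_0 = Id
    g = grad(x)
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        s = np.linalg.solve(H, -g)      # solve H_k s^k = -g^k, eq. (1)
        x = x + s
        g_new = grad(x)
        y = g_new - g
        # Broyden update: H_{k+1} = H_k + (y - H s) s^T / (s^T s)
        H = H + np.outer(y - H @ s, s) / s.dot(s)
        g = g_new
    return x
```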
Luckily, in this context there exists an alternative approach that makes use of every function evaluation. The idea is that for $H_k$ symmetric and positive definite (which is guaranteed for the BFGS update), solving \eqref{cc1} is equivalent to minimizing the quadratic model$$q_k(s) = \frac12 s^T H_k s + s^T g^k.$$In a
trust region method, you would do so with the additional constraint that $\|s\| \leq \Delta_k$, where $\Delta_k$ is an appropriately chosen trust region radius (which plays the role of the step length $\sigma_k$). The key idea is now to choose this radius adaptively, based on the computed step. Specifically, you look at the ratio$$ \rho_k := \frac{f(x^k)-f(x^k+s^k)}{f(x^k)-q_k(s^k)}$$of the actual and predicted reduction in function value. If $\rho_k$ is very small, your model was bad, and you discard $s^k$ and try again with $\Delta_{k+1}<\Delta_k$. If $\rho_k$ is close to $1$, your model is good, and you set $x^{k+1}=x^k+s^k$ and increase $\Delta_{k+1}>\Delta_k$. Otherwise you just set $x^{k+1}=x^k+s^k$ and leave $\Delta_k$ alone. To compute the actual minimizer $s^k$ of $\min_{\|s\|\leq \Delta_k} q_k(s)$, there exist several strategies to avoid having to solve the full constrained optimization problem; my favorite is Steihaug's truncated CG method. For more details, I again refer to Nocedal and Wright.
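The radius-update logic can be sketched compactly. In the following Python/numpy sketch the subproblem is solved only at the Cauchy point for brevity (a serious implementation would use Steihaug's truncated CG, and the thresholds 0.25/0.75 are common but not canonical choices):

```python
import numpy as np

def trust_region(f, grad, hess, x0, delta0=1.0, tol=1e-8, maxit=200):
    """Basic trust-region loop with the rho-based radius update.
    Subproblem solved at the Cauchy point only; sketch, not production code."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(maxit):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        H = hess(x)
        # Cauchy point: minimizer of the quadratic model along -g within the ball
        gHg = g @ H @ g
        tau = 1.0 if gHg <= 0 else min(1.0, gnorm**3 / (delta * gHg))
        s = -(tau * delta / gnorm) * g
        predicted = -(g.dot(s) + 0.5 * s @ H @ s)   # f(x) - q_k(s)
        rho = (f(x) - f(x + s)) / predicted          # actual / predicted reduction
        if rho < 0.25:
            delta *= 0.25                # poor model: shrink radius, reject step
        else:
            if rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
                delta *= 2.0             # good model, step at boundary: grow radius
            x = x + s                    # accept the step
    return x
```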
|
It's well-known that $\sin(\pi \frac pq)$ is always algebraic. In particular, as I understand, it can always be expressed in terms of radicals, because it can be connected to the abelian group of $e^{i\pi \frac pq}$. Because these can always be expressed in terms of radicals, this means there must be some other algebraic numbers (for instance, the roots of irreducible quintics) whose inverse sines are not rational multiples of $\pi$. So what form do these take? Is $\sin(\sqrt 2 \pi)$ known to be transcendental, for instance? That would be the "first place to start looking", to me, but I have little basis for that.
$\sin \pi \alpha$, for $\alpha$ irrational but algebraic, is transcendental by the Gelfond-Schneider theorem.
To start, write
$$\sin \pi \alpha = \frac{e^{i \pi \alpha} - e^{-i \pi \alpha}}{2i}.$$
We will actually show that $e^{i \pi \alpha}$ is transcendental. This follows from writing it as $(e^{i \pi})^{\alpha} = (-1)^{\alpha}$. (This notation may look funny; for complex numbers, $a^b$ is multivalued, and Gelfond-Schneider applies to all of the possible values.)
So I don't know what there is to say beyond "take the inverse sine of some algebraic number, then divide by $\pi$."
|
Let $n$ be a nonnegative integer, and $k$ a positive integer. Could someone explain to me why the identity $$ \sum_{i=0}^n\binom{i+k-1}{k-1}=\binom{n+k}{k} $$ holds?
One way to interpret this identity is to consider the number of ways to choose $k$ integers from the set $\{1,2,3,\cdots,n+k\}$.
There are $\binom{n+k}{k}$ ways to do this, and we can also count the number of possibilities by considering the largest integer chosen. This can vary from $k$ up to $n+k$, and if the largest integer chosen is $l$, then there are $\binom{l-1}{k-1}$ ways to choose the remaining $k-1$ integers.
Therefore $\displaystyle\sum_{l=k}^{n+k}\binom{l-1}{k-1}=\binom{n+k}{k}$, and letting $i=l-k$ gives $\displaystyle\sum_{i=0}^{n}\binom{i+k-1}{k-1}=\binom{n+k}{k}$.
I'm sure there are some clever combinatorial arguments. I've never been very clever, so I'll induct on $n$.
An easy computation shows that the base case $n=0$ holds.
Now, suppose inductively that $$\sum_{i=0}^n\binom{i+k-1}{k-1}=\binom{n+k}{k}$$ Then \begin{align*} \sum_{i=0}^{n+1}\binom{i+k-1}{k-1} &= \sum_{i=0}^n\binom{i+k-1}{k-1}+\binom{(n+1)+k-1}{k-1} \\ &= \binom{n+k}{k}+\binom{n+k}{k-1} \\ &\overset{\circledast}{=} \binom{(n+1)+k}{k} \end{align*} where the equality marked $\circledast$ uses the recursive formula for binomial coefficients. This completes the induction.
Generating function can do the job quite easily: \begin{align*} \frac{1}{(1-x)^k} &= \sum_{i\ge 0} \binom{i+k-1}{k-1}\, x^i \end{align*} Using convolution of generating functions, \begin{align*} \frac{1}{(1-x)}\cdot \frac{1}{(1-x)^k} &= \sum_{n\ge 0} \left(\sum_{i=0}^n \binom{i+k-1}{k-1}\right) x^n \\ \frac{1}{(1-x)^{k+1}} &= \sum_{n\ge 0} \binom{n+k}{k} x^n \\ \implies \sum_{i=0}^n \binom{i+k-1}{k-1} &= \binom{n+k}{k} \end{align*}
How many solutions are there to $a_1+\cdots+a_k\leq n$ in nonnegative integers?
On the one hand, this is $$\begin{align*}&\sum_{i=0}^n\#\{(a_1,\ldots,a_k)\mid a_1+\cdots+a_k=i\}\\ =& \sum_{i=0}^n\binom{i+k-1}{k-1}\end{align*}$$
On the other hand, every solution to $a_1+\cdots+a_k\leq n$ corresponds to a unique solution of $a_1+\cdots+a_k+t=n$ by setting $t=n-(a_1+\cdots+a_k)$. Hence $$\sum_{i=0}^n\binom{i+k-1}{k-1}=\binom{n+k}k.$$
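All of the arguments above can be sanity-checked numerically; a quick brute-force verification in Python using `math.comb`:

```python
from math import comb

# Verify sum_{i=0}^{n} C(i+k-1, k-1) = C(n+k, k) for small n and k
for n in range(10):
    for k in range(1, 10):
        lhs = sum(comb(i + k - 1, k - 1) for i in range(n + 1))
        assert lhs == comb(n + k, k), (n, k)
print("identity holds for all n < 10, 1 <= k < 10")
```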
|
Derivatives of PGF of Bernoulli Distribution
Theorem
$\dfrac {\mathrm d^k} {\mathrm d s^k} \Pi_X \left({s}\right) = \begin{cases} p & : k = 1 \\ 0 & : k > 1 \end{cases}$
Proof
The probability generating function of the Bernoulli distribution is:
$\Pi_X \left({s}\right) = q + ps$
where $q = 1 - p$.
We have that for a given Bernoulli distribution, $p$ and $q$ are constant. Hence:
$\dfrac {\mathrm d} {\mathrm d s} \Pi_X \left({s}\right) = p$
Again, $p$ is constant, so from Derivative of Constant:
$\dfrac {\mathrm d} {\mathrm d s} p = 0$
All higher derivatives are likewise zero, also from Derivative of Constant.
$\blacksquare$
Alternatively, this follows directly from Derivatives of PGF of Binomial Distribution, setting $n = 1$.
$\blacksquare$
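The claim is easy to confirm symbolically; a minimal sympy check (sympy assumed available):

```python
import sympy as sp

s, p = sp.symbols('s p')
Pi = (1 - p) + p * s            # PGF of the Bernoulli distribution, q = 1 - p

assert sp.diff(Pi, s, 1) == p   # first derivative is p
assert sp.diff(Pi, s, 2) == 0   # second and higher derivatives vanish
```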
|
I only have a partial answer for 1. and a hopefully non-confusing answer to 2.
To start with, let us work with the fundamental groupoid, which is more, ahem, fundamental and better suited to generalisation. In particular, we can consider the set $\pi^J(X,a,b)$ of homotopy classes (rel endpoints) of maps $(J,0,1) \to (X,a,b)$, which is more natural in the setting you outline. This is, assuming it isn't empty, a torsor for the groups $\pi^J(X,a,a)$ and $\pi^J(X,b,b)$, so you're not really losing too much. But the more important structure is the whole groupoid.
The unit interval is at least weakly initial in the category of
path-connected bipointed spaces and homotopy classes of maps (and we always have a torsor as above). If you don't assume path-connected, then the two-point set (with any of its topologies) can be allowed, but is completely useless in measuring homotopy. This is an important fact about $[0,1]$, and it can't be derived from formal homotopy theory. One could define $J$-connectedness for other bipointed spaces $J$, but the utility of such a definition is debatable unless you put in extra conditions, like making it a cylinder object.
The 'reason' we get a fundamental groupoid is that $[0,1]$ is an $A_\infty$ topological cogroupoid, namely a groupoid object in $Top^{op}$, up to homotopy, and then coherence of that up to homotopy, and so on, all the way up. Woah, I hear you say, that's a bit extreme. But it is true, and we can just focus on the first few layers.
First, we have a cocomposition $[0,1] \to [0,1]\sqcup_{1,0}[0,1]$ and a coidentity $[0,1] \to \ast$. Then instead of coassociativity, which would be the equality of the two obvious maps$$[0,1] \to [0,1]\sqcup_{1,0}[0,1]\sqcup_{1,0}[0,1],$$we have a homotopy between these two maps. We also have a map$$[0,1] \sqcup_{1,0}[0] \to [0,1]$$expressing the identity on the right, and a similar one on the left. Again, these aren't equal to the identity maps of $[0,1]$, but are homotopic to them. And again, we have coinverses up to homotopy. The choices of all these homotopies aren't important (although you can look up representatives in any book on algebraic topology), because the spaces of such homotopies are contractible.
When we want to involve another space and actually get $\Pi_1(X)$, what we do is hom this topological $A_\infty$-cogroupoid into the space $X$, and get an $A_\infty$-groupoid, and then we truncate it to a groupoid, by quotienting out by these homotopies that we have chosen (but remember the choices are unimportant). It is important that $[0,1]$ is path-connected, because this makes the $A_\infty$-cogroupoid contractible in certain technical ways which are important for generalisations to higher categories (most of the ideas in this answer come from Todd Trimble's work). For instance, in my thesis I defined a certain sort of fundamental bigroupoid which could be applied to topological stacks, and I relied heavily on the $A_\infty$-cogroupoid structure, because it was the only way I could prove I even
had a fundamental bigroupoid (I confess I did have much more complicated interval objects than here).
|
Let $M$ be a smooth manifold and denote $C^\infty_0(M)$ the space of smooth functions with compact support. In Mathematics a
distribution is defined to be a continuous linear functional $\phi : C^\infty_0(M)\to \mathbb{R}$. The space of distributions is usually denoted $\mathfrak{D}'(M)$.
So a distribution is a map that takes a function and outputs a number in a linear and continuous way. The Delta distribution centered at $x\in M$ is for example
$$\delta_x[f]=f(x).$$
Another way to create distributions is to pick $f\in C^\infty_0(M)$ and define
$${f}^\diamond[g]=\int_M fg.$$
That much is fine. The problem is the following: in physics one often forgets all this and treats distributions as functions. So a physicist will almost never bother writing $\phi[f]$ or just $\phi$. They write $\phi(x)$, which is not really correct, since $\phi$ isn't a function on $M$ at all.
The issue though is that there is a terminology around which makes me quite confused. One often talks about "smeared" fields written as
$$\varphi[f]=\int_M\varphi(x)f(x)$$
and talks about the field in "unsmeared form", writing it just $\varphi(x)$. This confuses things further, because it is known that not every distribution $\varphi$ is of the form $f^\diamond$ for some $f$.
This terminology may be found for example in Fewster's notes on QFT on curved spacetime, but I've seem it elsewhere.
This seems to imply that when one picks $\phi\in C^\infty_0(M)$ it is unsmeared, and when one picks $\phi^\diamond\in \mathfrak{D}'(M)$ and applies it to a function it is smeared (but notice that $\phi^\diamond[f]$ is a real number, not even a field anymore after applying it to $f$).
So what really is this smeared and unsmeared terminology about and how does this makes contact with distribution theory from mathematics?
|
1. Rational Functions (Definition)
Definition: Rational Function
A
rational function is a quotient of polynomials \(\dfrac{P(x)}{Q(x)}\).
Example 1
\[\dfrac{(x^2 + x - 1)}{(3x^3+ 1)},\]
\[\dfrac{(x - 1)}{(x^2 +1)}, \text{ and}\]
\[\dfrac{x^2}{(x + 1)}\]
are all rational functions.
Example 2
Find the domain of
\[\dfrac{(x^2 + 1)}{(x^2 -1)}.\]
Solution
The domain of this rational function is the set of all real numbers that do not make the denominator zero. We solve
\[x^2 -1 = 0\]
to find
\[x = 1 \;\;\; \text{or} \;\;\; x = -1.\]
So that the domain is
\[\{x | x \text{ is not }1 \text{ or } -1\}.\]
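The same computation can be done with sympy (assumed available): the excluded points are exactly the real roots of the denominator.

```python
import sympy as sp

x = sp.symbols('x')
excluded = sp.solve(sp.Eq(x**2 - 1, 0), x)   # roots of the denominator
print(excluded)   # [-1, 1]
```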
2. Vertical Asymptotes
Definition: Vertical Asymptote
A
Vertical Asymptote of a rational function occurs where the denominator is 0 and the numerator is not (see Section 5 for what happens when the numerator and denominator share a common factor).
Example 3
Graph the vertical asymptotes of
\[\dfrac{(x^2 + 1)}{(x^2 -1)}\]
Solution
From the last example, we see that there are vertical asymptotes at 1 and -1.
Since \(f(x)\) is positive a little to the left of -1, we say that as
\[x \rightarrow -1^{-} \text{ ("x goes to -1 from the left")},\]
\[f(x) \rightarrow \infty.\]
Similarly since \(f(x)\) is negative a little to the right of -1, we say that as
\[x\rightarrow -1^{+} \text{( "x goes to -1 from the right")}, \]
\[f(x) \rightarrow -\infty.\]
Since \(f(x)\) is negative a little to the left of 1, as
\[x \rightarrow1^{-},\]
\[f(x) \rightarrow -\infty.\]
Similarly since \(f(x)\) is positive a little to the right of 1, as
\[x\rightarrow1^{+},\]
\[f(x) \rightarrow\infty.\]
Four Types of Vertical Asymptotes
Below are the four types of vertical asymptotes, classified by the one-sided behavior of the graph: up on both sides, down on both sides, left up and right down, and left down and right up.
3. Horizontal Asymptotes
Example 4
Consider the rational function
\[f(x) = \dfrac{(3x^2 + x - 1)}{(x^2 - x - 2)}.\]
For the numerator, the term \(3x^2\) dominates when \(x\) is large, while for the denominator, the term \(x^2\) dominates when \(x\) is large. Hence as
\[x \rightarrow \infty,\]
\[f(x)\rightarrow\dfrac{3x^2}{x^2}=3.\]
The line \(y = 3\) is called the
horizontal asymptote, and the left and right behavior of the graph approaches the horizontal line \(y = 3\).
4. Oblique Asymptotes
Consider the function
\[f(x) = \dfrac{(x^2 - 3x - 4)}{(x + 3)}\]
\(f(x)\) does not have a horizontal asymptote, since
\[\dfrac{x^2}{x}= x \]
is not a constant, but we see (on the calculator) that the left and right behavior of the curve is like a line. Our goal is to find the equation of this line.
We use synthetic division to see that
\[\dfrac{(x^2 - 3x - 4)}{(x + 3)} = x - 6 + \dfrac{14}{(x+3)}.\]
For very large \(x\),
\[\dfrac{14}{x+3}\]
is very small, hence \(f(x)\) is approximately equal to
\[x - 6\]
on the far left and far right of the graph. We call this line an
Oblique Asymptote.
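The synthetic division above is easy to double-check with sympy's polynomial division (sympy assumed available):

```python
import sympy as sp

x = sp.symbols('x')
# (x^2 - 3x - 4) divided by (x + 3)
quotient, remainder = sp.div(x**2 - 3*x - 4, x + 3, x)
print(quotient)    # x - 6, the oblique asymptote
print(remainder)   # 14
```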
To graph, we see that there is a vertical asymptote at
\[x = -3\]
with behavior:
left down and right up
The graph has x-intercepts at 4 and -1, and a y intercept at \(-\frac{4}{3}\).
Exercise
Graph
\[\dfrac{(x^3 + 8)}{(x^2 - 3x - 4)}\]
5. Rational Functions With Common Factors
Consider the graph of
\[y = \dfrac{x-1}{x-1}\]
Although its graph looks like the horizontal line \(y = 1\), the function is undefined at \(x = 1\), so the graph has a hole there. In general, when
\[f(x) = \dfrac{g(x)(x - r)}{h(x)(x - r)}\]
with neither \(g(r)\) nor \(h(r)\) zero, the graph will have a
hole at \(x = r\). We call this hole a removable discontinuity.
Example
Graph
\[\begin{align} f(x) &= \dfrac{(x^2 - 4)}{(x^2 - x - 2)} = \dfrac{(x - 2)(x + 2)}{(x - 2)(x + 1)}.\end{align}\]
This graph will have a vertical asymptote at \(x =-1\) and a
hole at \(\left(2, \tfrac{4}{3}\right)\), since the simplified function \(\frac{x+2}{x+1}\) takes the value \(\frac{4}{3}\) at \(x = 2\).
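The cancellation and the y-value of the hole can be checked with sympy (assumed available); the numerator is taken as \(x^2 - 4\), consistent with the factorization \((x-2)(x+2)\) shown:

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 4) / (x**2 - x - 2)
g = sp.cancel(f)        # removes the common factor (x - 2)
print(g)                # (x + 2)/(x + 1)
print(g.subs(x, 2))     # 4/3, the y-value of the hole at x = 2
```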
We end our discussion with a list of steps for graphing rational functions.
Steps in graphing rational functions:
Step 1: Plug in \(x = 0\) to find the y-intercept.
Step 2: Factor the numerator and denominator. Cancel any common factors, remembering to put in the appropriate holes if necessary.
Step 3: Set the numerator equal to 0 to find the x-intercepts.
Step 4: Set the denominator equal to 0 to find the vertical asymptotes. Then plug in nearby values to find the left and right behavior at each vertical asymptote.
Step 5: If the degree of the numerator equals the degree of the denominator, then the graph has a horizontal asymptote. To determine its value, divide the term of highest power of the numerator by the term of highest power of the denominator. If the degree of the numerator equals the degree of the denominator plus 1, then use polynomial or synthetic division to determine the equation of the oblique asymptote.
Step 6: Graph it!
Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
|