The typical difference-in-differences estimator (as fixed effects) fits a model of the form $$ y_{it} = \alpha_i + \delta T_{it} + X_{it}'\beta + \epsilon_{it} $$
where $T$ is some treatment that happens to $i$ at time $t$.
The coefficient $\delta$ is identified from the jump between time periods when $T$ goes from zero to one, essentially using as counterfactuals those that didn't get treated during that period, after controlling for unobservables that don't vary in time.
Normally the (panel) dataset starts with everyone un-treated, and ends with some remaining untreated while others get treated. Alternatively, if everyone (eventually) gets treated, you can still include post-treatment data to improve statistical precision -- the $\delta$ is still identified from the time periods where some got treated and others didn't.
My question: is it legitimate to fit a model where one group starts treated, the other group starts untreated, and then the untreated group gets treated? This is basically the mirror image of a situation in which one group stayed untreated and one group got treated -- we still have treatment heterogeneity in some time periods. Mathematically it seems identical -- the standard error-components motivation seems to still apply.
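This mirrored design can be checked with a quick simulation (a sketch using only numpy; all names and numbers are mine): one group is always treated, the other switches on mid-sample, and a two-way fixed-effects regression recovers the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods, delta = 100, 4, 2.0

# Long-format panel indices
unit = np.repeat(np.arange(n_units), n_periods)
time = np.tile(np.arange(n_periods), n_units)

# Mirrored design: first half of units start treated,
# the second half becomes treated at t = 2
always_treated = unit < n_units // 2
T = np.where(always_treated, 1, (time >= 2).astype(int))

# Outcome with unit fixed effects and treatment effect delta
alpha = rng.normal(size=n_units)[unit]
y = alpha + delta * T + rng.normal(scale=0.5, size=unit.size)

# Two-way fixed effects via dummy-variable OLS (no covariates here)
D_unit = (unit[:, None] == np.arange(n_units)).astype(float)
D_time = (time[:, None] == np.arange(1, n_periods)).astype(float)
X = np.column_stack([T, D_unit, D_time])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(coef[0], 2))  # estimate of delta, close to 2.0
```

With a homogeneous effect, the switching group's jump is compared against the always-treated group, exactly as in the standard design with the roles reversed.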
Am I missing something?
I think that in the statistics literature authors are often very imprecise when it comes to residuals and errors. So far, I could not work the difference out completely and therefore have several questions.
Setting: Let us consider the simple linear regression model, $$ Y = \beta_0 + \beta_1 x + \epsilon,$$ or for a specific value $Y=y_i$ and $x=x_i$,
$$ y_i = \beta_0 + \beta_1 x_i + \epsilon_i$$ with the error terms being normally distributed, i.e., $\epsilon_i \sim N(0, \sigma^2)$. In this setting we consider the $Y=y_i$ and the errors $\epsilon_i$ to be random variables, and the independent variables $x_i$ to be nonrandom variables. The parameters $\beta_0$ and $\beta_1$ are unknown and fixed.
Therefore we have $E[Y] = \beta_0 + \beta_1 x$ and $Var[Y] = \sigma^2$, which means the $Y=y_i$ are also normally distributed according to $Y \sim N(\beta_0 + \beta_1 x_i, \sigma^2)$.
The errors $\epsilon_i$ are now defined as the deviations of the observations $y_i$ from the "true" (deterministic) model $E[Y] = \beta_0 + \beta_1 x$, i.e., $$ y_i - E[Y_i] = (\beta_0 + \beta_1 x_i + \epsilon_i)-(\beta_0 + \beta_1 x_i) = \epsilon_i.$$
We now want to estimate the model $E[Y] = \beta_0 + \beta_1 x$, as the coefficients $\beta_0$ and $\beta_1$ are unknown. We do so by $$ \hat{Y} = \hat{\beta_0} + \hat{\beta_1} x.$$
The residuals, let's call them $\delta_i$, are defined (as I understand it) as the deviations of the observations $y_i$ from the estimated model $\hat{y_i}$, i.e., $$\delta_i = y_i - \hat{y_i}.$$
Now the questions:
1.) In least squares estimation some authors minimize the sum of squared errors (SSE), $\sum \epsilon_i^2$, and some minimize the residual sum of squares (RSS), $\sum \delta_i^2$, which is obviously not the same thing. Some even write that they minimize $\sum \left(y_i - \hat{y_i}\right)^2$ but then they minimize the SSE, which is not even consistent within their own chosen framework. What is now the correct procedure of least squares estimation: minimizing the SSE or the RSS?
2.) How come there are so many different definitions of residuals, errors and least squares in various textbooks and Wikipedia pages? Are the definitions of errors and residuals indeed not strictly clear, i.e., is it debatable what we consider to be the residuals and what the errors?
3.) Can we say anything about the distribution of the residuals, i.e., are they also normally distributed, and if so, with which mean and variance? It appears to me that for some reason it is generally assumed that the residuals are normally distributed, because it is often suggested to plot the residuals against the observations $y_i$ and check whether they really are normally distributed (which is expected to be the correct result).
4.) Some authors also write the estimated formula as $ \hat{y} = \hat{\beta_0} + \hat{\beta_1} x_i + \hat{\epsilon_i}$, where they denote the residuals $\delta_i$ by $\hat{\epsilon_i}$. But I think this is a wrong formula, because then $y_i - \hat{y_i}$, which is the definition of the residuals, leads to a result that is different from the residuals. Also, as I understand it, the estimated model should not contain any random noise term, as we want to plot a straight line. Is there any justification for why some would write the residuals into the estimated formula?
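One way to see the distinction concretely is a small simulation (a sketch; variable names are mine) in which the errors are known because we generate the data ourselves:

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma = 1.0, 2.0, 0.5

x = np.linspace(0.0, 10.0, 200)
eps = rng.normal(scale=sigma, size=x.size)   # errors: unobservable in practice
y = beta0 + beta1 * x + eps

# OLS estimates (these minimize the residual sum of squares)
b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b0 = y.mean() - b1 * x.mean()
residuals = y - (b0 + b1 * x)                # residuals: observable

# Residuals are not the errors, only estimates of them...
print(np.allclose(residuals, eps))           # False
# ...and OLS residuals sum to zero by construction, unlike the errors
print(abs(residuals.sum()) < 1e-8)           # True
```

Since the $\epsilon_i$ are unobservable, only the RSS can actually be minimized in practice; authors who write "SSE" for $\sum(y_i-\hat{y_i})^2$ are using "error" loosely for the residual.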
EDIT: The books I was mainly looking at were:
i) Mathematical Statistics, Wackerly et al.: Here they minimize $\sum (y_i - \hat{y_i})^2$, but call this the sum of squared errors, i.e., they explicitly call $y_i - \hat{y_i}$ the errors. (p. 569)
ii) An Introduction to Statistical Learning, James et al.: They minimize the residual sum of squares and indeed define the residuals as I did above, i.e., $\delta_i = y_i - \hat{y_i}.$
iii) The Elements of Statistical Learning, Hastie et al.: They say that they would be minimizing the residual sum of squares (RSS), but what they actually do is minimize the sum of squared errors, i.e., $\sum (y_i - \beta_0 - \beta_1 x_i)^2$, p. 63
iv) Regression-Models, Methods and Applications, Fahrmeir et al.: They define the residuals as I did, i.e., as $\delta_i = y_i - \hat{y_i}$, and also discuss both residuals and errors, but end up minimizing the sum of squared errors (SSE) in the least squares framework.
Mathematical operators, such as function names, should be set in roman type, not italics. LaTeX already has commands for some operators, including \max, \min, and \log. How can I define additional such commands?
Use \DeclareMathOperator{\foo}{foo}, and \DeclareMathOperator*{\hocolim}{hocolim} for sub- and superscripts in the limits position. This requires \usepackage{amsmath}, which is recommended for math documents anyway.
Minimal example:
\documentclass{article}
\usepackage{amsmath}
\DeclareMathOperator{\foo}{foo}
\DeclareMathOperator*{\hocolim}{hocolim}
\begin{document}
Example of $\foo(x)$ and $\foo x$.
Example of $\hocolim_{x\in X} f(x)$ and displayed
\begin{equation*}
\hocolim_{x\in X} f(x)
\end{equation*}
\end{document}
Alternatively, if you are using any of the packages from the AMS (amsart.cls or amsmath.sty), then there is a command \DeclareMathOperator which does what it says on the tin! For example,
\DeclareMathOperator{\Det}{Det}
I think that it can handle variants, but I don't recall off the top of my head.
As mentioned before, the amsmath command \DeclareMathOperator{\Det}{Det} is a good way to do this, but it is actually basically a wrapper for \newcommand{\Det}{\operatorname{Det}}. So if you only want to use the command once and don't want to define a symbol (especially useful if you are using an online TeX editor), then just use \operatorname.
And just like \DeclareMathOperator*, you can use \operatorname* to specify that subscripts should be placed underneath, in the limits position. This is useful for something like \operatorname*{minimize}. More info here
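For instance, a minimal sketch (assuming amsmath and amssymb are loaded):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{equation*}
  \operatorname*{minimize}_{x \in \mathbb{R}^n} \; f(x)
\end{equation*}
\end{document}
```

In display style the subscript is set underneath "minimize", just as with an operator declared via \DeclareMathOperator*.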
If you're looking for something one-off, you can always use \mathrm in a math environment, like so: \mathrm{ultimatefunction}(x), which will display 'ultimatefunction' in roman type.
Define the command \newoperator as follows:
\providecommand{\newoperator}[3]{%
  \newcommand*{#1}{\mathop{#2}#3}}
Here is an example that defines \FD as an operator:
\newoperator{\FD}{\mathrm{FD}}{\nolimits}
If you need to use this new operator only one time, you could consider using \text{operator} in the math formula. For example: $$ 3 \cdot \text{FoO} (x) $$
A one-off operator that you don't think is worth defining a new macro for can be typeset with \operatorname. It uses the same font as \mathrm, which by default is the main text font. However, unicode-math also defines a command \setoperatorfont to change this. (In LaTeX, it is possible to redefine \operator@font.)
\( \operatorname{foo}_a^b f \equiv \int_a^b \int_{-\infty}^{\infty} f(x) \,\mathrm{d}y \,\mathrm{d}x\)
This also requires amsmath.
Is there any formula for this series?
$$1 + \frac12 + \frac13 + \cdots + \frac 1 n .$$
There is no formula for the nth partial sum of the harmonic series, only approximations. A well known approximation by Euler is that the nth partial sum is approximately $$\ln(n) + \gamma $$ where $\gamma$ is the Euler–Mascheroni constant and is close to $0.5772$. The amount of error in this approximation gets arbitrarily small for sufficiently large values of $n$.
A well known fact in mathematics is that the harmonic series is divergent which means that if you add up enough terms in the series you can make their sum as large as you wish.
Let $H_n=1+1/2+\cdots+1/n$, as usual. Then, for $n>1$,
$$H_n =\log(n+1/2) + \gamma + \varepsilon(n)$$
with
$$0 < \varepsilon(n) < 1/(24n^2)$$
so if you can use a high-precision estimate rather than an exact answer this may be good enough. For example, $1+1/2+1/3+\cdots+1/10^{100}$ is between
230.83572496430610126240565755851882319115230819881722120213855733164212869451291269453666757225157658376140985147843194582191305052276721850285291090752309248454422116629840542211766342541591511108644544
and
230.83572496430610126240565755851882319115230819881722120213855733164212869451291269453666757225157658376140985147843194582191305052276721850285291090752309248454422116629840542211766342541591511108644546
where the numbers are identical up to their last digit. If you need higher precision, a series expansion will yield formulas that are increasingly precise for large values of $n$. (For small values of $n$, calculate directly...)
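The $\ln(n+1/2)+\gamma$ approximation and its error bound are easy to check numerically (a Python sketch; the value of $\gamma$ is hard-coded):

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def harmonic(n):
    """Exact partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_approx(n):
    """Approximation H_n ≈ ln(n + 1/2) + γ."""
    return math.log(n + 0.5) + GAMMA

for n in (10, 100, 1000):
    err = harmonic(n) - harmonic_approx(n)
    # The error is positive and below 1/(24 n^2), as claimed above
    assert 0 < err < 1 / (24 * n**2)
```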
Here is a way to interpret the harmonic numbers combinatorially: $$H_n=\dfrac {\genfrac{[}{]}{0pt}{}{n+1}{2}}{n!}$$
where $\genfrac{[}{]}{0pt}{}{n+1}{2}$ is the absolute value of the Stirling number of the first kind, namely, the number of permutations of $\{1,2,\dots,n,n+1\}$ that have exactly $2$ cycles.
These satisfy the following recurrence: $$\genfrac{[}{]}{0pt}{}{n}{k}=\genfrac{[}{]}{0pt}{}{n-1}{k-1}+(n-1)\genfrac{[}{]}{0pt}{}{n-1}{k}$$ which makes it algebraically obvious that they are related to the Harmonic numbers the way they are, though there is also a purely combinatorial proof. The virtue of this interpretation is that you can prove a whole host of crazy identities involving the Harmonic numbers by just translating the natural identities for the Stirling numbers.
The Stirling numbers of the first kind are also the coefficients of $x(x-1)(x-2)\cdots(x-(n-1))$, so their absolute values are the coefficients of $x(x+1)(x+2)\cdots (x+n-1)$, and so in particular
The $n$th Harmonic number is the coefficient of $x^2$ in $\frac1{n!}x(x+1)(x+2)\cdots(x+n)$.
All of this can be found in Chapter 7 of Benjamin & Quinn's wonderful book Proofs That Really Count.
This question made me recall this wonderful formula for $H_n$ due to Gregory Fontana:
$$H_n = \gamma + \log n + {1 \over 2n} - \sum_{k=2}^\infty { (k-1)! C_k \over n(n+1)\ldots(n+k-1)}, \qquad \textrm{ for } n=1,2,3,\ldots,$$
where the coefficients $C_k$ are the Gregory coefficients given by $${ z \over \log(1-z)} = \sum_{k=0}^\infty C_k z^k \qquad \textrm{ for } |z|<1.$$
Although I remember proving this as a student, about 30 years ago, I've just spent a frustrating 90 minutes trying to recover the proof without success, so unfortunately I cannot sketch it here. PS: I'll post a question shortly, and perhaps someone can beat me to it and put me out of my misery :-)
Well, there are some formulae for the harmonic numbers. For example, one of the most well known is $ H_n = \int_0^1 {1 - x^n \over 1 - x}\, dx$, which has certain useful properties. However, it is not in any way simpler than the original definition. And if you're looking for numerical approximations rather than exact identities, you are certainly better off with the approximation $H_n = \ln(n+1/2) + \gamma + \epsilon(n)$.
LaTeX supports many worldwide languages by means of some special packages. In this article is explained how to import and use those packages to create documents in Italian.
The Italian language has accented characters. For this reason the preamble of your document must be modified accordingly to support these characters and some other features.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[italian]{babel}
\usepackage[T1]{fontenc}
\begin{document}
\tableofcontents
\vspace{2cm} % Add a 2cm space
\begin{abstract}
Questo è un breve riassunto dei contenuti del documento scritto in italiano.
\end{abstract}
\section{Sezione introduttiva}
Questa è la prima sezione, possiamo aggiungere alcuni elementi aggiuntivi e tutto digitato correttamente. Inoltre, se una parola è troppo lunga e deve essere troncato babel cercherà per troncare correttamente a seconda della lingua.
\section{Teoremi Sezione}
Questa sezione è quello di vedere cosa succede con i comandi testo definendo
\[ \lim x = \sin{\theta} + \max \{3.52, 4.22\} \]
\end{document}
There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections.
If you are looking for instructions on how to use more than one language in a single document, for instance English and Italian, see the International language support article.
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle the variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up the input encoding. In this case the package properly handles characters in the Italian alphabet. To use this package, add the next line to the preamble of your document: \usepackage[utf8]{inputenc}. The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
For proper LaTeX document generation you must also choose a font encoding that supports the specific characters of the Italian language; this is accomplished by the fontenc package: \usepackage[T1]{fontenc}. Even though the default encoding works well for Italian, using this specific encoding will avoid glitches with some specific characters. The default LaTeX encoding is OT1.
To extend the default LaTeX capabilities, for proper hyphenation and for translating the names of document elements, import the babel package for the Italian language: \usepackage[italian]{babel}. As you may see in the example in the introduction, instead of "abstract" and "Contents" the Italian words "Sommario" and "Indice" are used.
Sometimes, for formatting reasons, some words have to be broken up into syllables separated by a - (hyphen) to continue the word on a new line. For example, matematica could become mate-matica. The babel package, whose usage was described in the previous section, usually does a good job of breaking up words correctly, but if this is not the case you can use a couple of commands in your preamble.
\usepackage{hyphenat}
\hyphenation{mate-mati-ca recu-perare}
The first command imports the package hyphenat, and the second line is a list of space-separated words with defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document.
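Putting the pieces from the previous sections together, a minimal preamble sketch (the hyphenation word list is illustrative):

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[italian]{babel}
\usepackage{hyphenat}
% Manual hyphenation points for words babel breaks incorrectly
\hyphenation{mate-mati-ca recu-perare}
\begin{document}
Questo è un esempio minimo di documento in italiano.
\end{document}
```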
For more information see
I have the following question; I would really appreciate it if someone could help me clarify my ideas, and I apologize if this is a stupid question. This is from Conway's complex analysis book:
Let $f: G \to \bf{C}$ and $g: G\to \bf{C}$ be branches of $z^a$ and $z^b$ respectively. Show that $fg$ is a branch of $z^{a+b}$ and $f/g$ is a branch of $z^{a-b}$. Suppose that $f(G) \subset G$ and $g(G) \subset G$ and prove that both $f\circ g$ and $g \circ f$ are branches of $ z^{ab}$.
Maybe it is a very stupid question, but I am having a bad time trying to understand. As far as I know, $z^b = \exp(b \ell(z))$ where $\ell$ is a branch of the logarithm in some region $G$. Therefore, if we fix a branch $L(z)$ of the logarithm in the region, we have $\ell(z)=L(z)+i2\pi k$ ($k\in \bf{Z}$ and fixed).
My idea goes as follows: Let $L(z)$ be a branch of the logarithm in $G$. Then $f(z)= \exp(a(L(z)+i2\pi m))$ and $g(z)= \exp(b(L(z)+i2\pi n))$ ($m,n \in \bf{Z}$). Thus
$$f(z)\cdot g(z)= \exp\big(a (L(z)+i2\pi m)+b(L(z)+i2\pi n)\big)$$ $$f(z)\cdot g(z)=\exp((a+b)L(z))\cdot \exp(i2\pi (am+bn))$$
Does this really define a legitimate branch of $z^{a+b}$? I think the answer is no, because $a,b$ can be anything, not just integers, and from what I understand we need $\exp[(a+b)[L(z)+i2\pi k]]$ ($k\in \bf{Z}$) in order to have a branch of $z^{a+b}$.
For the other cases I have the same difficulties. Unless $f$ and $g$ are defined with the same branch of the logarithm, that is $m=n$, which seems to resolve the difficulties; but as written, the statement does not mention that the branches are the same. I really need someone to help me clarify these ideas.
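For instance, when both factors are built from the same determination of the logarithm (the case $m=n=k$), the product does collapse to a branch of the required form:

```latex
f(z)\,g(z)
  = \exp\!\big(a(L(z)+i2\pi k)\big)\,\exp\!\big(b(L(z)+i2\pi k)\big)
  = \exp\!\big((a+b)(L(z)+i2\pi k)\big),
```

which is exactly $\exp\big((a+b)\ell(z)\big)$ for the branch $\ell(z)=L(z)+i2\pi k$ of the logarithm.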
I have more questions of the same sort, but I think it is not a good idea to put them all in one post.
Thanks in advance.
thinking about invariants of 2-knots ... is there an obvious map from $H_2(X)$ (or from part of it) into $\pi_1(X)$?
Where in the derived series of $\pi_1(X)$ would the image of $H_2(X)$ live?
Have you heard of things like Dwyer's filtration on $H_2$ and Stallings/Cochran-Harvey theorems about the lower central/derived series? I cannot resist quoting the opening sentence of Krushkal's paper:
"The lower central series of the fundamental group of a space $X$ is closely related to the Dwyer’s [D] filtration $\phi_k(X)$ of the second homology $H_2(X;\Bbb Z)$."
The Dwyer subspace $\phi_k(X)\subset H_2(X;\Bbb Z)$ is defined as the kernel of the composition $$H_2(X) \to H_2(\pi_1X) \to H_2(\pi_1X/\gamma_{k-1}\pi_1X).$$ The gamma notation for the lower central series runs $\gamma_1G=G$, and $\gamma_{k+1}G=[G,\gamma_k G]$.
Freedman and Teichner showed that $\phi_k(X)$ coincides with the set of homology classes represented by maps of closed $k$-gropes into $X$. Now their $k$-gropes are gropes of class $k$, but you can also do symmetric gropes of depth $k$, or whatever they are called, to get into the derived series. Which series to choose, and how to make use of it? The lower central series has a long-standing record of applications in knot/link theory, due in part to Stallings' theorem (1963, also rediscovered by Casson; note Cochran's topological proof):
Let $\phi:A\to B$ be a homomorphism that induces an isomorphism on $H_1(−;\Bbb Z)$ and an epimorphism on $H_2(−;\Bbb Z)$. Then, for each $n$, $\phi$ induces an isomorphism
$A/\gamma_nA\simeq B/\gamma_nB$.
Dwyer (1975) extended Stallings’ theorem by weakening the hypothesis on $H_2$:
Let $\phi: A\to B$ be a homomorphism that induces an isomorphism on $H_1(−;\Bbb Z)$. Then for any $n\ge 2$ the following are equivalent:
• $\phi$ induces an isomorphism $A/\gamma_nA\simeq B/\gamma_nB$;
• $\phi$ induces an epimorphism $H_2(A;\Bbb Z)/\phi_n(A)\to H_2(B;\Bbb Z)/\phi_n(B)$.
Now if you do want the derived series rather than the lower central series, take a look at the work of Cochran and Harvey who had a series of papers about the analogues of the Stallings and Dwyer theorems for torsion-free derived series. In fact their motivation was also knot theory.
Also Mikhailov has some other generalization of the Dwyer filtration (see also his book with Passi in Springer Lecture Notes in Math) though I'm not sure if this has had any application to knots.
In fact, I think Mikhailov, and possibly Orr and Cochran, also did something about the transfinite Dwyer filtration, which might not be irrelevant to knots and links (at least this is where the transfinite business originated, in papers by Orr and Levine in the 80s).
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m^\ast(E)$? (Then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense.) If ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site (to which I hope my university has a subscription) that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Soil composition by volume and mass, by phase: air, water, void (pores filled with water or air), soil, and total.
Water content or moisture content is the quantity of water contained in a material, such as soil (called soil moisture), rock, ceramics, fruit, or wood. Water content is used in a wide range of scientific and technical areas, and is expressed as a ratio, which can range from 0 (completely dry) to the value of the materials' porosity at saturation. It can be given on a volumetric or mass (gravimetric) basis.
Definitions
Volumetric water content, $\theta$, is defined mathematically as: $$\theta = \frac{V_w}{V_\text{wet}}$$ where $V_w$ is the volume of water and $V_\text{wet} = V_h + V_w + V_a$ is the volume of wet material, that is, host material (e.g., soil particles, vegetation tissue) volume + water volume + air space.
Gravimetric water content[1] is expressed by mass (weight) as follows: $$u = \frac{m_w}{m}$$ where $m_w$ is the mass of water and $m$ is the mass of the substance. Normally the latter is taken before drying: $$u' = \frac{m_w}{m_\text{wet}}$$ except for woodworking, geotechnical and soil science applications, where oven-dried material is used instead: $$u'' = \frac{m_w}{m_\text{dry}}$$ To convert gravimetric water content to volumetric water content, multiply the gravimetric water content by the bulk specific gravity of the material.
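A quick numeric sketch of the conversion (values and names are illustrative, not from the article):

```python
def volumetric_from_gravimetric(u_dry, bulk_density, water_density=1000.0):
    """theta = u'' * (dry bulk density / water density); densities in kg/m^3."""
    return u_dry * bulk_density / water_density

# A soil with 20% gravimetric (dry-basis) water content and
# a dry bulk density of 1300 kg/m^3
theta = volumetric_from_gravimetric(0.20, 1300.0)
print(round(theta, 2))  # 0.26
```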
Derived quantities
In soil mechanics and petroleum engineering, the term water saturation or degree of saturation, $S_w$, is used, defined as $$S_w = \frac{V_w}{V_v} = \frac{V_w}{V_T\phi} = \frac{\theta}{\phi}$$ where $\phi = V_v / V_T$ is the porosity and $V_v$ is the volume of void or pore space. Values of $S_w$ can range from 0 (dry) to 1 (saturated). In reality, $S_w$ never reaches 0 or 1 -- these are idealizations for engineering use.
The normalized water content, $\Theta$ (also called effective saturation or $S_e$), is a dimensionless value defined by van Genuchten[2] as: $$\Theta = \frac{\theta - \theta_r}{\theta_s-\theta_r}$$ where $\theta$ is the volumetric water content; $\theta_r$ is the residual water content, defined as the water content for which the gradient $d\theta/dh$ becomes zero; and $\theta_s$ is the saturated water content, which is equivalent to porosity, $\phi$.
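As a sketch of the formula (sample values are mine):

```python
def effective_saturation(theta, theta_r, theta_s):
    """van Genuchten normalized water content: (theta - theta_r)/(theta_s - theta_r)."""
    return (theta - theta_r) / (theta_s - theta_r)

# Halfway between residual (0.05) and saturated (0.45) water content
Se = effective_saturation(theta=0.25, theta_r=0.05, theta_s=0.45)
print(round(Se, 3))  # 0.5
```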
Measurement
Direct methods
Water content can be directly measured using a known volume of the material and a drying oven. Volumetric water content, $\theta$, is calculated[3] via the volume of water $V_w$ and the mass of water $m_w$: $$V_w = \frac{m_w}{\rho_w} = \frac{m_{\text{wet}}-m_{\text{dry}}}{\rho_w}$$ where $m_{\text{wet}}$ and $m_{\text{dry}}$ are the masses of the sample before and after drying in the oven, and $\rho_w$ is the density of water.
For materials that change in volume with water content, such as coal, the water content, $u$, is expressed in terms of the mass of water per unit mass of the moist specimen: $$u' = \frac{m_{\text{wet}} - m_{\text{dry}}}{m_{\text{wet}}}$$
However, geotechnics requires the moisture content to be expressed with respect to the sample's dry weight (often as a percentage, i.e. % moisture content = $u'' \times 100\%$): $$u'' = \frac{m_{\text{wet}} - m_{\text{dry}}}{m_{\text{dry}}}$$
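The two mass-based conventions differ only in the denominator; a sketch with illustrative masses:

```python
def moisture_wet_basis(m_wet, m_dry):
    """u' = mass of water / wet mass (e.g., the coal convention)."""
    return (m_wet - m_dry) / m_wet

def moisture_dry_basis(m_wet, m_dry):
    """u'' = mass of water / oven-dry mass (geotechnical convention)."""
    return (m_wet - m_dry) / m_dry

# A 125 g sample that weighs 100 g after oven drying
print(moisture_wet_basis(125.0, 100.0))  # 0.2
print(moisture_dry_basis(125.0, 100.0))  # 0.25
```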
For wood, the convention is to report moisture content on an oven-dry basis (i.e. generally drying the sample in an oven set at 105 °C for 24 hours). In wood drying, this is an important concept.
Laboratory methods
Other methods that determine water content of a sample include chemical titrations (for example the Karl Fischer titration), determining mass loss on heating (perhaps in the presence of an inert gas), or after freeze drying. In the food industry the Dean-Stark method is also commonly used.
From the Annual Book of ASTM (American Society for Testing and Materials) Standards, the total evaporable moisture content in aggregate (C 566) can be calculated with the formula: $$p = \frac{W-D}{D}$$ where $p$ is the fraction of total evaporable moisture content of the sample, $W$ is the mass of the original sample, and $D$ is the mass of the dried sample.
Soil moisture
Geophysical methods
There are several geophysical methods available that can approximate in situ soil water content. These methods include: time-domain reflectometry (TDR), neutron probe, frequency domain sensor, capacitance probe, amplitude domain reflectometry, electrical resistivity tomography, ground penetrating radar (GPR), and others that are sensitive to the physical properties of water.[4] Geophysical sensors are often used to monitor soil moisture continuously in agricultural and scientific applications.
Satellite remote sensing method
Satellite microwave remote sensing is used to estimate soil moisture, based on the large contrast between the dielectric properties of wet and dry soil. Microwave radiation is not sensitive to atmospheric variables and can penetrate clouds. Also, the microwave signal can penetrate, to a certain extent, the vegetation canopy and retrieve information from the ground surface.[5] Data from microwave remote sensing satellites such as WindSat, AMSR-E, RADARSAT, ERS-1/2 and Metop/ASCAT are used to estimate surface soil moisture.[6]
Classification and uses
Moisture may be present as adsorbed moisture at internal surfaces and as capillary condensed water in small pores. At low relative humidities, moisture consists mainly of adsorbed water. At higher relative humidities, liquid water becomes more and more important, with the pore-size distribution determining how much of the pore volume it can occupy. In wood-based materials, however, almost all water is adsorbed at humidities below 98% RH.
In biological applications there can also be a distinction between physisorbed water and "free" water — the physisorbed water being that closely associated with and relatively difficult to remove from a biological material. The method used to determine water content may affect whether water present in this form is accounted for. For a better indication of "free" and "bound" water, the water activity of a material should be considered.
Water molecules may also be present in materials closely associated with individual molecules, as "water of crystallization", or as water molecules which are static components of protein structure.
Earth and agricultural sciences
In soil science, hydrology and agricultural sciences, water content has an important role for groundwater recharge, agriculture, and soil chemistry. Many recent scientific research efforts have aimed toward a predictive understanding of water content over space and time. Observations have revealed that spatial variance in water content generally tends to increase as overall wetness increases in semiarid regions, to decrease as overall wetness increases in humid regions, and to peak under intermediate wetness conditions in temperate regions. [7]
There are four standard water contents that are routinely measured and used, which are described in the following table:
| Name | Notation | Suction pressure (J/kg or kPa) | Typical water content (vol/vol) | Conditions |
|---|---|---|---|---|
| Saturated water content | $\theta_s$ | 0 | 0.2–0.5 | Fully saturated soil, equivalent to effective porosity |
| Field capacity | $\theta_{fc}$ | −33 | 0.1–0.35 | Soil moisture 2–3 days after a rain or irrigation |
| Permanent wilting point | $\theta_{pwp}$ or $\theta_{wp}$ | −1500 | 0.01–0.25 | Minimum soil moisture at which a plant wilts |
| Residual water content | $\theta_r$ | −∞ | 0.001–0.1 | Remaining water at high tension |
Lastly, the available water content, $\theta_a$, is defined as $$\theta_a \equiv \theta_{fc} - \theta_{pwp},$$ which can range between 0.1 in gravel and 0.3 in peat.
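The bookkeeping behind $\theta_a$ is a one-line subtraction; the loam values of $\theta_{fc}$ and $\theta_{pwp}$ below are hypothetical, chosen only to fall inside the typical ranges in the table above:

```python
def available_water_content(theta_fc, theta_pwp):
    """Available water content, theta_a = theta_fc - theta_pwp (vol/vol)."""
    if not 0.0 <= theta_pwp <= theta_fc:
        raise ValueError("expect 0 <= theta_pwp <= theta_fc")
    return theta_fc - theta_pwp

# hypothetical loam: field capacity 0.30, permanent wilting point 0.12
theta_a = available_water_content(0.30, 0.12)
print(theta_a)
```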
Agriculture
When a soil becomes too dry, plant transpiration drops because the water is increasingly bound to the soil particles by suction. Below the wilting point plants are no longer able to extract water; at this point they wilt and cease transpiring altogether. The condition in which soil is too dry to maintain reliable plant growth is referred to as agricultural drought, and is a particular focus of irrigation management. Such conditions are common in arid and semi-arid environments.
Some agriculture professionals are beginning to use environmental measurements such as soil moisture to schedule irrigation. This method is referred to as smart irrigation or soil cultivation.

Groundwater
In saturated groundwater aquifers, all available pore spaces are filled with water (volumetric water content = porosity). Above a capillary fringe, pore spaces have air in them too.
Most soils have a water content less than porosity, which is the definition of unsaturated conditions, and they make up the subject of vadose zone hydrogeology. The capillary fringe of the water table is the dividing line between saturated and unsaturated conditions. Water content in the capillary fringe decreases with increasing distance above the phreatic surface.
One of the main complications which arises in studying the vadose zone, is the fact that the unsaturated hydraulic conductivity is a function of the water content of the material. As a material dries out, the connected wet pathways through the media become smaller, the hydraulic conductivity decreasing with lower water content in a very non-linear fashion.
A water retention curve is the relationship between volumetric water content and the water potential of the porous medium. It is characteristic for different types of porous medium. Due to hysteresis, different wetting and drying curves may be distinguished.
References

1. T. William Lambe & Robert V. Whitman (1969). "Chapter 3: Description of an Assemblage of Particles". Soil Mechanics (First ed.). John Wiley & Sons, Inc. p. 553.
2. van Genuchten, M. Th. (1980). "A closed-form equation for predicting the hydraulic conductivity of unsaturated soils" (PDF). Soil Science Society of America Journal 44 (5): 892–898.
3. Dingman, S. L. (2002). "Chapter 6, Water in soils: infiltration and redistribution". Physical Hydrology (Second ed.). Upper Saddle River, New Jersey: Prentice-Hall, Inc. p. 646.
4. F. Ozcep, M. Asci, O. Tezel, T. Yas, N. Alpaslan, D. Gundogdu (2005). "Relationships Between Electrical Properties (in Situ) and Water Content (in the Laboratory) of Some Soils in Turkey" (PDF). Geophysical Research Abstracts 7.
5. [3]
6. [4]
7. Lawrence, J. E., and G. M. Hornberger (2007). "Soil moisture variability across climate zones". Geophys. Res. Lett. 34 (L20402).

Further reading
Field Estimation of Soil Water Content: A Practical Guide to Methods, Instrumentation and Sensor Technology (PDF), Vienna, Austria: International Atomic Energy Agency, 2008, p. 131.
The (asymptotically) most efficient deterministic primality testing algorithm is due to Lenstra and Pomerance, running in time $\tilde{O}(\log^6 n)$. If you believe the Extended Riemann Hypothesis, then Miller's algorithm runs in time $\tilde{O}(\log^4 n)$. There are many other deterministic primality testing algorithms; for example, Miller's paper has an $\tilde{O}(n^{1/7})$ algorithm, and another well-known algorithm is Adleman–Pomerance–Rumely, running in time $(\log n)^{O(\log\log\log n)}$.
In reality, no one uses these algorithms, since they are too slow. Instead, probabilistic primality testing algorithms are used, mainly Miller–Rabin, which is a modification of Miller's algorithm mentioned above (another important algorithm is Solovay–Strassen). Each iteration of Miller–Rabin runs in time $\tilde{O}(\log^2 n)$, and so for a constant error probability (say $2^{-80}$) the entire algorithm runs in time $\tilde{O}(\log^2 n)$, which is much faster than Lenstra–Pomerance.
In all of these tests, memory is not an issue.
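For illustration, here is a minimal Python sketch of Miller–Rabin with random bases. The small-prime pre-check and the default round count are my choices; production implementations (GMP, PARI) add trial division, fixed bases, and a Lucas test for BPSW:

```python
import random

def is_probable_prime(n, k=40):
    """Miller–Rabin probabilistic primality test.

    Writes n - 1 = 2^r * d with d odd, then checks k random bases.
    A composite passes a random base with probability < 1/4, so the
    error probability is below 4^-k; primes always pass.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)           # modular exponentiation, O~(log^2 n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a is a witness: n is composite
    return True

print(is_probable_prime(2**61 - 1))  # a Mersenne prime -> True
print(is_probable_prime(2**61 + 1))  # divisible by 3 -> False
```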
In their comment, jbapple raises the issue of deciding which primality test to use in practice. This is a question of implementation and benchmarking: implement and optimize a few algorithms, and experimentally determine which is fastest in which range. For the curious, the coders of PARI did just that, and they came up with a deterministic function
isprime and a probabilistic function
ispseudoprime, both of which can be found here. The probabilistic test used is Miller–Rabin. The deterministic one is BPSW.
Here is more information from Dana Jacobsen:
Pari since version 2.3 uses an APR-CL primality proof for
isprime(x), and BPSW probable prime test (with "almost extra strong" Lucas test) for
ispseudoprime(x).
They do take arguments which change the behavior:
isprime(x,0) (default.) Uses combination (BPSW, quick Pocklington or BLS75 theorem 5, APR-CL).
isprime(x,1) Uses Pocklington–Lehmer test (simple $n-1$).
isprime(x,2) Uses APR-CL.
ispseudoprime(x,0) (default.) Uses BPSW (M-R with base 2, "almost extra strong" Lucas).
ispseudoprime(x,k) (for $k\geq 1$.) Does $k$ M-R tests with random bases. The RNG is seeded identically in each Pari run (so the sequence is deterministic) but is not reseeded between calls like GMP does (GMP's random bases are in fact the same bases every call so if
mpz_is_probab_prime_p(x,k) is wrong once it will always be wrong).
Pari 2.1.7 used a much worse setup.
isprime(x) was just M-R tests (default 10), which led to fun things like
isprime(9) returning true quite often. Using
isprime(x,1) would do a Pocklington proof, which was fine for about 80 digits and then became too slow to be generally useful.
You also write
In reality, no one uses these algorithms, since they are too slow. I believe I know what you mean, but I think this is too strong, depending on your audience. AKS is, of course, stupendously slow, but APR-CL and ECPP are fast enough that some people use them. They are useful for paranoid crypto, and useful for people doing things like
primegaps or
factordb where one has enough time to want proven primes.
[My comment on that: when looking for a prime number in a specific range, we use some sieving approach followed by some relatively quick probabilistic tests. Only then, if at all, we run a deterministic test.]
In all of these tests, memory is not an issue. It is an issue for AKS. See, for instance, this eprint. Some of this depends on the implementation. If one implements what Numberphile's video calls AKS (which is actually a generalization of Fermat's little theorem), memory use will be extremely high. Using an NTL implementation of the v1 or v6 algorithm like the referenced paper will result in stupidly large amounts of memory. A good v6 GMP implementation will still use ~2 GB for a 1024-bit prime, which is a lot of memory for such a small number. Using some of the Bernstein improvements and GMP binary segmentation leads to much better growth (e.g. ~120 MB for 1024 bits). This is still much larger than other methods need, and, no surprise, it will be millions of times slower than APR-CL or ECPP.
This question was inspired by my attempt to understand the duration of a floating rate note, or FRN for short. Several answers, like this, say the duration of a FRN is just time to next coupon payment. But I'm still a bit confused even with the very definition of durations of FRNs.
In a continuous time model, let $\{P(0,t), t\ge 0\}$ be the YTM curve of zero bonds. Then in this answer by @Gordon it is pointed out that the coupon an FRN with a unit principal pays at $T_2$ with the coupon rate $L(T_1;T_1,T_2)$ to be set at $T_1<T_2$ should be valued $P(0, T_1) - P(0, T_2)$ at time $0$. Hence, extending this slightly, consider an FRN that pays coupon one at $T_1$ set at $T_0:=0$, pays coupon two at $T_2$ set at $T_1$, and so on, until it pays the last coupon (set at $T_{n-1}$) together with the principal (assumed $1$) at $T_n$. Then its value at $t<T_1$ should be $$V_t=\sum_{i=1}^nV(\text{coupon}_i) + P(t, T_n) = \sum_{i=1}^n(P(t, T_{i-1})-P(t, T_i)) + P(t, T_n) = P(t, T_1).$$
And my question is, how to evaluate the (Macaulay) duration of this FRN? The main problem is I don't know what rate I should differentiate $V$ in.
As a guess, if I define the current discount rate to be $r_c$ such that $e^{-r_c\tau} = P(t, T_1)$ where $t\in [0, T_1)$ and $\tau = T_1-t$ is time to next payment of coupon, then I may write $$V_t = P(t, T_1) = e^{-r_c\tau}$$ And if I differentiate in $r_c$, I got $$\frac{dV_t}{dr_c} = -\tau e^{-r_c\tau} = -\tau V_t$$ or $-\frac1V_t\frac{dV_t}{dr_c} = \tau$, which seems to align with the "time to next payment" theory. But I'm just not very sure, so could anybody kindly tell me if this is the correct way to define the duration for such a FRN, or more generally for any continuous time bond model? |
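A quick numerical check of this guess (a sketch only: $P(t,T_1) = e^{-r_c\tau}$ is exactly the definition above, and the values of $r_c$, $\tau$ and the step $h$ are arbitrary):

```python
import math

def frn_value(r_c, tau):
    # between reset dates the FRN is worth V_t = P(t, T1) = exp(-r_c * tau)
    return math.exp(-r_c * tau)

r_c, tau, h = 0.03, 0.25, 1e-6

# central finite difference for dV/dr_c
dV = (frn_value(r_c + h, tau) - frn_value(r_c - h, tau)) / (2.0 * h)
duration = -dV / frn_value(r_c, tau)
print(duration)  # numerically indistinguishable from tau = 0.25
```

which reproduces $-\frac{1}{V_t}\frac{dV_t}{dr_c} = \tau$, the "time to next payment" value.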
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-$p_T$ trigger particles (8 < $p_T$ < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of $^{208}$Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Consider the finite 1D wedge-shaped potential well given by
$$V(x)=V_0\left(\frac{|x|}{a}-1\right) \hspace{10pt}\mathrm{for}\hspace{3pt} |x|<a;\hspace{6pt}V(x)=0 \hspace{10pt}\mathrm{for}\hspace{3pt} |x|>a.$$
I'm trying to find the reflection and transmission coefficients for a stream of electrons coming from $x=-\infty$ (so $E>0$, no bound states involved). In order to do that, I've divided the domain into 4 parts (like in the picture), solved the Schrödinger equation on each one, got a linear system for the coefficients of each wavefunction and solved it. Meanwhile I've also solved the problem numerically, and got pretty expected results, like the one below.
All was well until I plugged the analytic solution for the transmission coefficient into Python to render a graph and got this. $E$ is in eV, $x$ and $a$ in nm.
Now, I've gone over my calculations a dozen times, and I still can't spot a mistake, so I want to know whether it's possible to have a transmission coefficient greater than one for certain values of energy, like in this scenario.
I'm aware that this breaks conservation of probability (red alert!), and I'm still hoping that this is due to a numerical error, but I want to hear some other ideas regarding this phenomenon. I've found this as a related question, but I'm not really satisfied with the answers. Why doesn't this phenomenon happen in a rectangular well? Should I discard the $T>1$ solutions and leave energy gaps in the graph? That seems pretty arbitrary to me, but it may be the case here.
EDIT: The comments suggest to give an insight into my analytical solution.
Some initial substitutions: $k=\sqrt{\frac{2mE}{\hbar^2}}$, $k_0=\sqrt{\frac{2mV_0}{\hbar^2}}$, $q_0=(k_0a)^{2/3}$, $\epsilon^2=\frac{E}{V_0}$.
The time-independant Schrödinger equation for the wavefunction on each section reads
$$\frac{d^2\psi_I}{dx^2}+k^2\psi_I=0$$ $$\frac{d^2\psi_{II}}{dx^2}+(-\frac{x}{a}+\epsilon^2-1)k_0^2\psi_{II}=0$$ $$\frac{d^2\psi_{III}}{dx^2}+(\frac{x}{a}+\epsilon^2-1)k_0^2\psi_{III}=0$$ $$\frac{d^2\psi_{IV}}{dx^2}+k^2\psi_{IV}=0$$
Solutions to $I$ and $IV$ may be immediately given as
$$\psi_I(x)=Ae^{ikx}+Be^{-ikx}$$ $$\psi_{IV}(x)=Ge^{ikx},$$
the first corresponding to the incoming and reflected wave, and the other to the transmitted wave.
Introducing the change of variables
$$\xi(x)=q_0(\frac{|x|}{a}+\epsilon^2-1)$$ transforms the other two equations into Airy differential equations:
$$\frac{d^2\psi_{II}}{d\xi^2}=-\xi\psi_{II}$$ $$\frac{d^2\psi_{III}}{d\xi^2}=-\xi\psi_{III}$$
with general solutions in the form
$$\psi_{II}(x)=C\mathrm{Ai}(-\xi)+D\mathrm{Bi}(-\xi)$$ $$\psi_{III}(x)=E\mathrm{Ai}(-\xi)+F\mathrm{Bi}(-\xi),$$
where $\mathrm{Ai}(z)$ and $\mathrm{Bi}(z)$ are the standard Airy functions of the first and second kind.
Using the fact that both $\psi(x)$ and $\psi'(x)$ are continuous, this gives the linear system:
$$\left(\begin{matrix} e^{-ika} & e^{ika} & -\mathrm{Ai}(-\xi_a) & -\mathrm{Bi}(-\xi_a) & 0 & 0 \\ -\frac{ika}{q_0}e^{-ika} & \frac{ika}{q_0}e^{ika} & \mathrm{Ai}'(-\xi_a) & \mathrm{Bi}'(-\xi_a) & 0 & 0 \\ 0 & 0 & \mathrm{Ai}(-\xi_0) & \mathrm{Bi}(-\xi_0) & -\mathrm{Ai}(-\xi_0) & -\mathrm{Bi}(-\xi_0) \\ 0 & 0 & \mathrm{Ai}'(-\xi_0) & \mathrm{Bi}'(-\xi_0) & \mathrm{Ai}'(-\xi_0) & \mathrm{Bi}'(-\xi_0) \\ 0 & 0 & 0 & 0 & \mathrm{Ai}(-\xi_a) & \mathrm{Bi}(-\xi_a) \\ 0 & 0 & 0 & 0 & \mathrm{Ai}'(-\xi_a) & \mathrm{Bi}'(-\xi_a) \end{matrix} \right)\left(\begin{matrix} A \\ B \\ C \\ D \\ E \\ F \end{matrix} \right)=\left( \begin{matrix} 0 \\ 0 \\ 0 \\ 0 \\ Ge^{ika} \\ -G\frac{ika}{q_0}e^{ika} \end{matrix} \right) $$
where $\xi_a\equiv\xi(x=a)=\xi(-a)=q_0\epsilon^2$ and $\xi_0\equiv\xi(0)=q_0(\epsilon^2-1)$ appear as shorthand notations. Prime corresponds to $\frac{d}{dx}$. The currents are given by
$$J_I=\frac{\hbar}{m}\Im\left(\psi_I^*\frac{d\psi_I}{dx}\right)=\frac{\hbar}{m}(|A|^2-|B|^2)=J_{in}-J_{ref}$$ $$J_{IV}=\frac{\hbar}{m}\Im\left(\psi_{IV}^*\frac{d\psi_{IV}}{dx}\right)=\frac{\hbar}{m}|G|^2=J_{tr}$$
Current conservation requires $J_{in} = J_{ref} + J_{tr}$, i.e. $T+R=1$, which gives $|A|^2-|B|^2=|G|^2$. $G$ is actually undetermined and is used as a free coefficient (every other coefficient can be expressed as $L=G\cdot blabla$), so it may be freely set to be 1. It follows that $T=\frac{J_{tr}}{J_{in}}=\frac{1}{|A|^2}$ and $R=1-\frac{1}{|A|^2}$. Hence, only $A$ is needed to calculate the transmission and reflection coefficients. It may be calculated from the above system using Cramer's rule.
My result:
\begin{equation*}\label{parametara} \begin{split} A&=\frac{iq_0\pi^2e^{2ika}}{ka} \left[\left(\left(\frac{ka}{q_0}\right)^2\mathrm{Ai}^2(-\xi_a)+\mathrm{Ai}'^2(-\xi_a)\right)\mathrm{Bi}(-\xi_0)\mathrm{Bi}'(-\xi_0)\right.\\ &+\left(\left(\frac{ka}{q_0}\right)^2\mathrm{Bi}^2(-\xi_a)+\mathrm{Bi}'^2(-\xi_a)\right)\mathrm{Ai}(-\xi_0)\mathrm{Ai}'(-\xi_0)\\ &\left.-\left(\left(\frac{ka}{q_0}\right)^2\mathrm{Ai}(-\xi_a)\mathrm{Bi}(-\xi_a)+\mathrm{Ai}'(-\xi_a)\mathrm{Bi}'(-\xi_a)\right)(\mathrm{Ai}(-\xi_0)\mathrm{Bi}'(-\xi_0)+\mathrm{Ai}'(-\xi_0)\mathrm{Bi}(-\xi_0)) \right], \end{split} \end{equation*}
where use was made of the Wronskian for Airy functions:
$$\mathcal{W}[\mathrm{Ai}(z),\mathrm{Bi}(z)]=\pi^{-1}.$$
The next step was to plug the absolute square of $A$ into expressions for $T$ and $R$, which rendered the problematic graph. |
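One independent cross-check I would suggest: solve the 6×6 matching system numerically with a linear solver instead of Cramer's rule and compare against the analytic $|A|^2$. Everything below is a sketch with my own assumptions: the electron constant $\hbar^2/2m_e \approx 0.0381$ eV·nm², the parameter values $V_0$ and $a$, and taking the last matrix row to impose derivative continuity at $x=a$ (entries $\mathrm{Ai}'(-\xi_a)$, $\mathrm{Bi}'(-\xi_a)$):

```python
import numpy as np
from scipy.special import airy  # returns (Ai, Ai', Bi, Bi')

HBAR2_2M = 0.0380998  # hbar^2 / (2 m_e) in eV*nm^2, approximate

def transmission(E, V0=5.0, a=1.0):
    """T = 1/|A|^2 with G = 1, solving the 6x6 matching system."""
    k = np.sqrt(E / HBAR2_2M)       # 1/nm
    k0 = np.sqrt(V0 / HBAR2_2M)
    q0 = (k0 * a) ** (2.0 / 3.0)
    eps2 = E / V0
    xi_a, xi_0 = q0 * eps2, q0 * (eps2 - 1.0)
    Aia, Apa, Bia, Bpa = airy(-xi_a)
    Ai0, Ap0, Bi0, Bp0 = airy(-xi_0)
    r = 1j * k * a / q0
    e_m, e_p = np.exp(-1j * k * a), np.exp(1j * k * a)
    M = np.array([
        [e_m,      e_p,     -Aia, -Bia,  0.0,  0.0],
        [-r * e_m, r * e_p,  Apa,  Bpa,  0.0,  0.0],
        [0.0, 0.0,  Ai0,  Bi0, -Ai0, -Bi0],
        [0.0, 0.0,  Ap0,  Bp0,  Ap0,  Bp0],
        [0.0, 0.0,  0.0,  0.0,  Aia,  Bia],
        [0.0, 0.0,  0.0,  0.0,  Apa,  Bpa],
    ], dtype=complex)
    rhs = np.array([0.0, 0.0, 0.0, 0.0, e_p, -r * e_p], dtype=complex)
    A = np.linalg.solve(M, rhs)[0]
    return 1.0 / abs(A) ** 2

for E in (0.5, 2.0, 8.0):
    print(E, transmission(E))
```

If the linear-solve values disagree with the Cramer's-rule formula, the discrepancy localizes the algebra error; if they agree and still exceed one, the matching conditions themselves should be suspected.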
Quick question, what is the approach to this problem?
Keep in mind I am supposed to use the Fundamental Theorem of Line Integrals.
[tex]\int_{C} 2ydx + 2xdy [/tex]
Where C is the line segment from (0,0) to (4,4).
Unless I am missing something I need to make that into the form of [tex]\vec{F} \cdot d\vec{R}[/tex] to use the F.T. of L.I., but I have no clue where to start there. Thanks.
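Here $\vec{F} = (2y, 2x)$ is conservative with potential $f(x,y) = 2xy$ (since $\partial f/\partial x = 2y$ and $\partial f/\partial y = 2x$), so the theorem gives $f(4,4) - f(0,0) = 32$. A quick sketch cross-checking against a direct parametrization:

```python
# F = (2y, 2x) = grad(f) with f(x, y) = 2xy, so by the Fundamental
# Theorem of Line Integrals the integral is f(4, 4) - f(0, 0).
def f(x, y):
    return 2 * x * y

# cross-check: parametrize C as r(t) = (4t, 4t), t in [0, 1],
# so dx/dt = dy/dt = 4, and integrate (2y * 4 + 2x * 4) dt numerically
n = 100_000
total = 0.0
for i in range(n):
    t = (i + 0.5) / n          # midpoint rule
    x, y = 4 * t, 4 * t
    total += (2 * y * 4 + 2 * x * 4) / n

print(f(4, 4) - f(0, 0), total)  # both should be 32
```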
Given a complex manifold $M$, its complexified tangent bundle is $TM \otimes \mathbb C$. It is quite confusing for me as to why we want to do this since at each point $TM$ can already be viewed as a complex vector space.
A complex manifold is given by two alternative definitions. First you have $M$ a "nice" topological space that has an atlas $\phi_j \colon U_j \longrightarrow V_j\subset \mathbb{C}^n$ such that $\phi_j\circ\phi_i^{-1}$ is a biholomorphism. This way you'll get a complex tangent bundle.
Another way is to consider a differentiable manifold of (real) dimension $2n$ together with an automorphism $J$ of the tangent bundle $TM$ such that $J^2= -Id$. Complexifying $TM$ gives a diagonalization for $J$ in the fibers. Locally, it decomposes $$ TM\otimes \mathbb{C} = E_i \oplus E_{-i} $$ as the sum of eigenspaces. Note that $TM\otimes \mathbb{C}$ has real rank $4n$ and from $J^2=-Id$, $E_i$ and $E_{-i}$ both have real rank $2n$. This is called an almost complex structure. If $[E_{-i},E_{-i}] \subset E_{-i}$ then this decomposition is global and we call $E_i$ (which has complex rank $n$) the holomorphic tangent bundle of $M$ and $J$ a complex structure.
The equivalence of these two definitions is not trivial and is given by the Newlander-Nirenberg theorem.
See Huybrechts's book. From the beginning he starts this discussion by exploring the identification of $\mathbb{R}^2$ and $\mathbb{C}$.
Let $a_0=1,a_n=\tan{a_{n-1}}$. Then is $\{a_n\}_{n=0}^\infty$ dense in $\Bbb{R}$? I've drawn a map of this dynamical system and it seems that the sequence is dense on $\Bbb{R}$.
Before I start, I want to point out that I do not answer the specific question. I believe that no one can with the current tools available in mathematics. However, what I will give is an answer at the state of the art. And here comes the second disclaimer: I will summon quite recent research (about 15 years old), with a wealth of technical definitions and theorems. Giving precise proofs would take a small article, so I will have to be a little sketchy to keep things short. The upside is that I can get quite precise propositions.
For all $x \in \mathbb{R}$, we define recursively $a_n (x)$ by $a_0 (x) = x$ and $a_{n+1} (x) = \tan (a_n(x))$. I will not be able to say anything about the sequence $(a_n(1))_{n \geq 0}$, which is why I do not answer your question. Nevertheless, I will be able to say quite a lot of things about $(a_n(x))_{n \geq 0}$ for a Lebesgue-generic real number $x$. Then, you just have to hope that $1$ has these generic properties (there is no obvious reason it has not). This restriction comes from the fact that I work with a chaotic dynamical system, which exhibits sensitivity to initial conditions. For such systems you can usually work out a lot of generic properties, but you cannot tell if any given point is indeed generic or not (and there are indeed a lot of non-generic points and weird potential behaviours).
We can see $(\mathbb{R}, \tan)$ as a dynamical system. It is well-defined up to a countable set, which does not matter in the following since a countable set is negligible for the Lebesgue measure. The first thing to note is that the $\tan$ function is $\pi$-periodic. Hence, the dynamical system descends to the quotient. Let $I := (-\pi/2, \pi/2)$, let $\pi : \ \mathbb{R} \to I$ be the reduction modulo $\pi$, and let:
\begin{equation*} T : \ \left\{ \begin{array}{lll} I & \to & I \\ x & \mapsto & \pi \circ \tan(x) \end{array} \right. . \end{equation*}
Then, for all $n \geq 0$,
\begin{equation*} a_{n+1} = \tan \circ T^n \circ \pi. \end{equation*}
Hence, the density of $(a_n(x))$ in $\mathbb{R}$ is equivalent to the density of $T^n (x)$ in $I$. We can go further, and relate the equidistribution of $T^n (x)$ in $I$ with the equidistribution of $(a_n(x))$ in $\mathbb{R}$ (more on that later).
Now, we only need to study the system $(I, T)$. What does it look like? Here is its plot (courtesy of Wolfram Alpha):
This system has a few nice properties, and a few not-so-nice properties. It has countably many branches (not so nice), but these branches are surjective, which ensures a Markov property and a big image property (nice). It is piecewise $\mathcal{C}^2$, which ensures a bounded distortion property. Hence, the fact that it has countably many branches is manageable. At first sight, it shares a lot of properties with the Gauss map $x \to \{1/x\}$, which has been thoroughly studied.
Now, $T$ has a real not-so-nice property. We like our maps to be expanding, that is, to satisfy $\inf |T'| > 1$: that gets us positive Lyapunov exponents, positive entropy, and chaos in any sense of the word. The Gauss map is expanding everywhere except at $1$, where its derivative is $-1$. But that is not a hard problem; you can check that if you iterate the Gauss map, what you get is expanding. For $T$, however, the problem is more serious: the derivative at $0$ is $1$, but $0$ is a fixed point. Hence, $(T^n)' (0) = 1$ for all $n$. A point which lands close to $0$ will eventually be repelled (that's because $|\tan (x)| > |x|$ for non-zero $x$), but that may take a lot of time. Actually, orbits spend most of their time close to zero. This is an expanding map with a neutral fixed point, and it belongs to a class of dynamical systems which has been studied at length in the past twenty years.
The phenomena which arise are about the same as for one of the Liverani-Saussol-Vaienti maps [LSV] with a parameter $\alpha = 2$:
\begin{equation*} T_2 : \ \left\{ \begin{array}{lll} (0,1] & \to & (0,1] \\ x & \mapsto & \left\{ \begin{array}{lll} x(1+4x^2) & \text{ if } & x \in (0,1/2],\\ 2x-1 & \text{ if } & x \in (1/2,1] \end{array}\right. \end{array} \right. . \end{equation*}
The choice of the parameter $\alpha = 2$ is due to the fact that $T_2 (x) = x + 4x^3 + o(x^3)$ at $0$, just as $T (x) = x + x^3/3 + o(x^3)$ at $0$: the second-order terms are of the same order.
One way of studying these LSV maps, or $T$, is to induce the system away from the neutral fixed point (here $0$). For instance, let $A := (-\pi/2, -\pi/4) \cup (\pi/4, \pi/2)$, where $-\pi/4$ and $\pi/4$ are chosen as the end-points of the branch of $T$ containing $0$. Let $\varphi(x) := \inf \{n > 0 : \ T^n (x) \in A\}$, and let $T_A (x) := T^{\varphi (x)} (x)$. By looking closely at what happens around $0$ (points are repelled), it can be shown that $\varphi (x) < + \infty$ for all but countably many $x$. Hence, $T_A$ is well-defined almost everywhere. Finally, $T_A$ is expanding. Taking into account the Markov property, the big image property and the bounded distortion property, $(A, T_A)$ can be endowed with a Gibbs-Markov structure (see [A], Chapter 4, and in particular Sections 4.7 for definitions and 4.8 for an application to interval maps).
We can then show that $T_A$ has a unique absolutely continuous invariant probability measure, with respect to which it is ergodic and mixing (the general theory gives finitely many ergodic components of positive Lebesgue measure, and the surjectivity of the branches of $T$ gives transitivity and aperiodicity, hence mixing), and which has full support. In particular, for almost every $x \in A$, the orbit $(T_A^n (x))_{n \geq 0}$ is equidistributed (for this invariant measure), and thus dense.
The map $(I, T)$ can then be seen as a Rokhlin tower over $(A, T_A)$ of height $\varphi$. This is already sufficient to prove that almost every $x \in I$ has dense orbits in $I$, and thus that $(a_n (x))_{n \geq 0}$ is dense in $\mathbb{R}$ for Lebesgue-almost every $x$. But we can go further!
Using [LSV], we can show that the transformation $(I, T)$ has, up to a positive multiplicative constant, a unique positive $\sigma$-finite measure $\mu$ which is $T$-invariant and absolutely continuous with respect to the Lebesgue measure. Its density has a piecewise continuous version $g$, which is bounded away from $0$, and such that $g(x) = \Theta (x^{-2})$ at $0$. The dynamical system $(I, T, \mu)$ is ergodic. The measure $\mu$ is isomorphic to the natural invariant measure on the Rokhlin tower over $A$; but the height of the tower has infinite mean (because the trajectories take a long time to get away from $0$), so the measure $\mu$ is infinite.
Then, $\nu := \tan_* \mu$ is a $\sigma$-finite measure on $\mathbb{R}$ which is $\tan$-invariant. It also has a piecewise continuous density $h$, such that $h(x) = \Theta (x^{-2})$ at $0$ and $h(x) \sim c x^{-2}$ at $\pm \infty$ for some $c > 0$, and which satisfies the functional equation:
\begin{equation*} \sum_{n \in \mathbb{Z}} h (\arctan (x) + n \pi) = (1+x^2) h(x). \end{equation*}
To finish with this answer, here are a few consequences.
Density: for almost every $x \in \mathbb{R}$, the sequence $(a_n (x))_{n \geq 0}$ is dense in $\mathbb{R}$.
Trajectories spend most of their time close to $0$: for almost every $x \in \mathbb{R}$, for any neighborhood $B$ of $0$,
\begin{equation*} \lim_{n \to + \infty} \frac{\# \{0 \leq k < n : \ a_k (x) \in B\}}{n} = 1. \end{equation*}
Local time: there is a constant $C$ such that for all measurable $B$ bounded away from $0$, for all probability measure $\mathbb{P}$ absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$,
\begin{equation*} \lim_{n \to + \infty} \frac{\# \{0 \leq k < n : \ a_k \in B\}}{\sqrt{n}} = C \int_B h(t) \ dt \cdot |\mathcal{N}|, \end{equation*}
where $\mathcal{N}$ is a Gaussian random variable of variance $\pi/2$ and the convergence is in distribution in $(\mathbb{R}, \mathbb{P})$. This comes from [A], Corollary 3.7.3. This tells you roughly that about $1/\sqrt{n}$ of the $(a_k (x))_{0 \leq k < n}$ will be in any subset bounded away from zero, with a significant stochastic variation. That can make for nice numerical experiments, if you wish: compute $\# \{0 \leq k < n : \ |a_k (x)| > N \}/\sqrt{n}$ for fixed large $n$ and many random values of $x$ (so as to get a statistical approximation of its distribution), and look at what happens when you let $N$ grow.
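The suggested experiment can be sketched as follows. The sample sizes, the threshold $N$, and the uniform distribution of starting points are my choices; also, floating point cannot track a chaotic orbit exactly, but the statistics are expected to be representative:

```python
import math
import random

def excursion_statistic(x, n, N):
    """#{0 <= k < n : |a_k(x)| > N} / sqrt(n) for a_{k+1} = tan(a_k)."""
    count = 0
    a = x
    for _ in range(n):
        if abs(a) > N:
            count += 1
        a = math.tan(a)
    return count / math.sqrt(n)

random.seed(0)
n, N = 10_000, 10.0
# many random starting points approximate the distribution of the statistic
samples = [excursion_statistic(random.uniform(-1.5, 1.5), n, N)
           for _ in range(200)]
print(min(samples), sum(samples) / len(samples), max(samples))
```

Repeating this for growing $N$ (and $n$) shows the spread predicted by the half-Gaussian limit above.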
It is likely (to my knowledge, this specific result is not in the literature, but I do not think it is that hard to show) that a typical trajectory $(\# \{0 \leq k < n : \ a_k (x) \in B\})_{n \geq 0}$ will behave at large scale like a typical trajectory of the local time of a $1$-dimensional simple random walk.
We can hope that such properties are true for $(a_n (1))_{n \geq 0}$.
[A] J. Aaronson,
An introduction to infinite ergodic theory, Mathematical Surveys and Monographs, Vol.50, American Mathematical Society, 1997.
[LSV] C. Liverani, B. Saussol and S. Vaienti, A probabilistic approach to intermittency,
Ergodic Theory and Dynamical Systems 19 (1999), 671--685.
In canonical quantization, the particles arise as quantized excitations on the vacuum $|0\rangle$. For example, a one-particle state with four momentum $p=(E,\textbf{p})$ is given by $$|p\rangle\sim a^\dagger_{p}|0\rangle.$$ Is it possible to arrive at the particle picture in the path-integral formulation of quantum field theory (QFT)? What is the notion of a particle in path-integral way of doing QFT?
Path integrals compute transition amplitudes between quantum states.
Disclaimer.
I don't want to go into issues of rigged Hilbert spaces and mathematical correctness of definitions here. This answer gives the general rule of thumb for the path integral formulation.
Notation.
In this answer round brackets denote ordinary function arguments (like $\Psi(\vec{x})$), while square brackets denote functional arguments (like $S[\phi(x)]$), which are themselves functions (elements of infinite-dimensional spaces).
As a simple toy model, consider quantum mechanics.
A particle state can be described by a wavefunction $\Psi(\vec{x})$. Moreover, transition amplitudes are characterized by initial and final wavefunctions:
$$ \Psi_I(\vec{x}), \Psi_F(\vec{x}) $$
at times $0$ and $\tau$ respectively. The transition amplitude is given by
$$ \left< \Psi_F \right| \hat{U} \left| \Psi_I \right> = \int d^3 x_I \Psi_I (\vec{x}_I) \int d^3 x_F \Psi_F^{*}(\vec{x}_F) \cdot U(\vec{x}_I, \vec{x}_F), $$
where $U(\vec{x}_I, \vec{x}_F)$ are the matrix elements of the evolution operator. These are given by path integrals:
$$ U(\vec{x}_I, \vec{x}_F) = \intop_{\vec{x}(0)=\vec{x}_I}^{\vec{x}(\tau)=\vec{x}_F} {\cal D}x(t) \, e^{i S[\vec{x}(t)]}. $$
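As a concrete example (a standard textbook result quoted here for illustration, not derived in this answer; units with $\hbar = 1$ to match the $e^{iS}$ convention above), for a free particle with $S = \int_0^\tau \frac{m}{2} \dot{\vec{x}}^2 \, dt$ the path integral can be evaluated exactly:

$$ U(\vec{x}_I, \vec{x}_F) = \left( \frac{m}{2 \pi i \tau} \right)^{3/2} \exp \left( \frac{i m |\vec{x}_F - \vec{x}_I|^2}{2 \tau} \right), $$

which indeed solves the free Schrödinger equation in $(\tau, \vec{x}_F)$ and tends to $\delta^3(\vec{x}_F - \vec{x}_I)$ as $\tau \to 0$.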
The boundary conditions are essential, because they determine the matrix element that is to be evaluated.
Generalization to QFT.
In QFT, states live on the infinite-past and infinite-future hyperplanes. We can loosely associate functionals of the form
$$ \Psi_I[\phi(\vec{x})], \Psi_F[\phi(\vec{x})] $$
to them. Note that $\Psi$ depends on values of the field at the (initial and final) 3d hyperplanes.
The transition amplitude is given by the path integral
$$ \left< \Psi_F \right| \hat{U} \left| \Psi_I \right> = \int {\cal D}\phi_I \Psi_I[\phi_I] \int {\cal D}\phi_F \Psi_F^{*}[\phi_F] \cdot U[\phi_I(\vec{x}), \phi_F(\vec{x})], $$ $$ U[\phi_I(\vec{x}), \phi_F(\vec{x})] = \intop_{\phi(t_I, \vec{x}) = \phi_I(\vec{x})}^{\phi(t_F, \vec{x}) = \phi_F(\vec{x})} {\cal D}\phi e^{i S[\phi]}, $$
where the integral $\int {\cal D}\phi$ is over field configurations between the two boundaries. It depends on the chosen boundary field configurations $\phi_I$ and $\phi_F$, and thus can't be factored out.
General rule.
States and Hilbert spaces are associated to boundaries. Path integrals are over the bulk region and compute transition amplitudes for the given pair of boundary states.
Transition amplitudes between particle states.
In QFT, it is natural to choose the Fock basis (and to label the associated functionals by asymptotic particle states). Thus, path integrals give transition amplitudes between particle states.
The Fock basis is the same as in canonical quantization. It spans the space of rapidly decreasing functionals $\Psi[\phi(\vec{x})]$, and there are explicit expressions for the functional $\Psi$ associated to each element of the Fock space. These are given by Hermite polynomials times decreasing exponentials, just like for the simple harmonic oscillator.
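For illustration, here is the standard result for the free scalar field (quoted, not derived here): the vacuum functional is Gaussian,

$$ \Psi_0[\phi] \propto \exp \left( - \frac{1}{2} \int \frac{d^3 k}{(2 \pi)^3} \, \omega_{\vec{k}} \, |\tilde{\phi}(\vec{k})|^2 \right), \qquad \omega_{\vec{k}} = \sqrt{\vec{k}^2 + m^2}, $$

where $\tilde{\phi}$ is the spatial Fourier transform of the boundary field configuration, and $n$-particle functionals are obtained by multiplying this Gaussian by Hermite-polynomial factors, exactly as for the harmonic oscillator eigenfunctions.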
Conclusion
Path integrals are a tool for computing quantum dynamics, i.e. transition amplitudes between quantum states. They don't replace the canonical formalism of Hilbert spaces and self-adjoint operators. They instead complement it by providing a covariant way of deriving transition amplitudes. You still have to do the Hilbert space quantization, and there will be elementary particles as before.
Just curious. If the purpose of a proof is to inform and persuade, why don't Venn diagrams count? Is it just convention or is there a more, umm, formal reason haha. Thanks!
I don't know exactly what you have written, but I would venture to say that anything you "prove" with Venn diagrams probably has an extremely direct translation into set theory, which would certainly be an acceptable form of proof.
The strongest reason to not let you just use a Venn diagram alone is that your teacher probably wants you to verbalize your explanation. This is a key part of mathematics. Drawing a picture can really help illustrate the idea involved, but it does not always explain the connection to the logic you are working within.
There is also a huge drawback to proving things by Venn diagram: your visual preconceptions may fool you into making a mistake. This cannot happen (or happens to a much smaller degree) when you work in the language of set theory.
Using the default Venn diagram with two intersecting circles, it is "evident" that $A\cup B\ne A\cap B$. But of course this statement is not true in general.
The point is, you need to take care to be specific about what is being talked about. Symbols tend to be more specific, but mathematicians still sometimes make equivocation errors in the language. Really it boils down to not confusing the subjects and objects in a proof. As some other people have said, a Venn diagram typically applies to one set or another, so to generalize is a risky business. The same errors can occur in symbolic math manipulation, though. So there is nothing inherently invalid about the type of proof.
Venn diagrams are hardly distinct from proofs in Geometry by drawing figures.
I do agree with the comment that we are just talking about convention. Over the centuries, mathematicians have changed their opinions on what constitutes a valid proof. Some of Euler's work, for example, comes to mind. It would be rather arbitrary for us to call his proofs weaker than ours today. It's a cultural change in mindset, really.
Venn diagrams, geometrical figures... these are just concepts illustrated on paper. That's it. How is that different from algebraic symbols written on paper? Or English? It's a language that conveys a concept. To de-emphasize the worth of one simply because it takes a different form is at best arbitrary and subjective. At worst it's irrational or bigoted. Let me ask you, is there a formal proof as to why figures and shapes and Venn diagrams are less effective than symbols? It would seem to me that this would require a proof of its own.
Until such time that this is proven, the only criticism I can give of the use of any one proof method over another is that we are fallible humans who may not interpret or capture the meaning of the question adequately enough in one system of proof, but do in another. That's a failure on our own part.
Using Venn diagrams we can prove set identities with three set variables. Why is this correct?
There is a theorem which says that a Boolean algebra $\mathcal{B}$ generated by $X$ is free on $X$ for the class of Boolean algebras if for all distinct $x_1,\dots,x_n \in X$ we have $x_1^{\alpha_1} \wedge x_2^{\alpha_2} \wedge \dots \wedge x_n^{\alpha_n} \neq 0$, where $$x^{\alpha}=\begin{cases}x, ~~~\alpha=1\\ x', ~~\alpha=0\end{cases}$$ So, if $A,B,C \subseteq K$ are as on diagram $(V)$,
then clearly $A^{\alpha}\cap B^{\beta} \cap C^{\gamma} \neq \emptyset,$ where $$A^{\alpha}=\begin{cases}A, ~~~~~~~~~~~~~~~~~\alpha=1\\ A^c=K\backslash A, ~~\alpha=0\end{cases}$$ Let $\mathbf{\Omega}=\langle A,B,C \rangle$ be the set algebra generated by $A,B,C$ under $\cap,\cup,^c$. Using the theorem just mentioned, we see that $\mathbf{\Omega}$ is the free algebra on $X=\{A,B,C\}$ for the class of Boolean algebras. Now, by Theorem 1, every set identity in three variables that holds in $\mathbf{\Omega}$ (i.e., that can be read off the diagram) holds in all Boolean algebras, and therefore in every $\mathscr{P}(X)$.
Theorem 1. Let $\mathcal{A}$ be a free algebra on $X$, $|X|=n$, for a class of algebras $\mathfrak{M}$ in an algebraic language $L$. If $u(x_1,\dots,x_n)=v(x_1,\dots,x_n)$ is an identity in the language $L$ and $u=v$ holds in $\mathcal{A}$, then $u=v$ holds in every $\mathcal{C} \in \mathfrak{M}$.
But four circles $A,B,C,D$ can divide the plane into at most fourteen parts (and we want sixteen). So $A^{\alpha}\cap B^{\beta} \cap C^{\gamma} \cap D^{\delta}= \emptyset$ for some choice of $\alpha, \beta, \gamma, \delta$. Hence for every four-circle Venn diagram $V(A,B,C,D)$ there exists a set identity $u=v$ that is true in $V(A,B,C,D)$ but fails for some sets.
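The freeness condition for three sets is easy to check by brute force. The following sketch (my own illustration, not from the theorem's source) models a "generic" triple of subsets of an 8-point universe, one point per membership pattern, and verifies that all $2^3$ constituents $A^{\alpha}\cap B^{\beta}\cap C^{\gamma}$ are nonempty, which is exactly what the three-circle Venn diagram depicts:

```python
from itertools import product

# Universe with one point per membership pattern: bit i of x records
# whether x belongs to the i-th set.
K = set(range(8))
A = {x for x in K if x & 1}
B = {x for x in K if x & 2}
C = {x for x in K if x & 4}

def constituent(alpha, beta, gamma):
    """A^alpha ∩ B^beta ∩ C^gamma, where S^1 = S and S^0 = K \\ S."""
    result = set(K)
    for S, e in ((A, alpha), (B, beta), (C, gamma)):
        result &= S if e else (K - S)
    return result

# Every one of the 2^3 constituents is nonempty (in fact a singleton).
cells = [constituent(*e) for e in product((0, 1), repeat=3)]
assert all(len(cell) == 1 for cell in cells)
```

With four sets, no arrangement of four circles produces all $2^4 = 16$ cells, which is why the analogous diagram argument breaks down there.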
Mh, I think Venn diagrams are very helpful, and in fact are the proof of several set identities. I even know mathematicians who accept them as a proof.
Generally because Venn diagrams only illustrate particular examples, and most proofs you find in elementary set theory are statements about arbitrary sets. So if you want to use Venn diagrams to prove something that should hold for every set, you would have to actually use them for every possible set. If there are infinitely many cases, you're screwed.
It depends on the teacher. If I were one and could be sure that the student understands Venn diagrams
and has given all the steps in constructing both diagrams, I would accept it,
at least at the start of the course. Later, students must learn a more formal style of proof, because Venn diagrams work only for very specific problems. They can still use Venn diagrams as a visual aid in constructing proofs if they want; it's actually good practice and trains set-theoretic intuition.
[Note: I'm not a teacher or professor.]
The purpose of a proof is
not to inform or persuade (you might not even have anybody to show your proof to). The purpose of a proof is to prove: to establish some conclusion in a sequence of small steps, each easily verified or known to be true.
Ideally, a proof should be verifiable by a computer. You may contrast this with the fact that it is theoretically impossible to write a computer program that would decide whether a given statement is
provable. Update: of course this is somewhat circular as a definition, but you cannot really define a proof. (You can of course use a metalanguage to define a proof in some other formal language, etc.) Defining a proof as a persuasion is just proposing a not completely equivalent plain-language synonym as a "definition".
To address the Venn diagrams: the problem with them is that they are usually hard to convert into a sequence of small, easily verifiable steps. Either you look at one and
guess how to prove the statement, or you don't; a Venn diagram itself is only an illustration of a proof.
Here is another possible reason for not accepting a Venn diagram as a proof. (Here I am using the term "Venn diagram" as opposed to "Euler diagram".) If what is to be proven is a set equality, the definition of equality must be met. What is missing is a logical connection between this equality and the use of Venn diagrams. The result "If two Venn diagrams give the same picture, then the sets they refer to are equal" would suffice. So if you haven't proved this or are not allowed to use it, then you would not be allowed to use a Venn diagram to prove your set equality.
A Venn diagram could be a proof if it captures all possible cases. In the example above, the claim that $A \cup B$ does not equal $A \cap B$ is not captured by the standard picture of two circles with a lens-shaped intersection, so a proof using Venn diagrams with only that one case isn't complete. If I want to prove a statement for ANY sets, I must include strange sets like the Cantor set and other weird possibilities. With Venn diagrams, you would have to show that you have caught EVERY SINGLE possibility, which may not be feasible. Thus, we move to rigorous proof from first principles, straight from the definitions and theorems. To those who said even mathematicians accept Venn diagrams as proofs: well, I am a mathematician, and there may be a couple of reasons for that. Maybe the instructor is happy that their students were able to think things through with Venn diagrams. Proof writing is hard, and the first step is believing what you are trying to prove, which is what Venn diagrams are for. I (a mathematician) would not accept it, because of the false message it sends, as seen in these threads: students may internalize that Venn diagrams are proofs in all situations.
Consider a small system in thermal contact with a large system which has coolness $\beta$. For simplicity, the small system has just one state per energy level. It could be a harmonic oscillator with a small quantum energy. The large system has a multiplicity $\Omega_0$ when the small system is in its ground state.
When the small system absorbs a quantum of energy, the multiplicity of the large system decreases, but its $\beta$ remains the same. It is a heat reservoir, either because it is large or because it is for example melting ice in water. This means that the next quantum will change the multiplicity of the reservoir with the same fractional amount. This results in a negative exponential for multiplicity of the heat reservoir as a function of the amount of energy in the small system. Because the small system has only one state per energy, this means that the multiplicity and the probability of finding the total system in a state where the small system has an energy $E$ is a negative exponential of $E$.
Mathematically, one can start this reasoning with the definition of the thermodynamic beta: $$\beta = \Omega^{-1}\ \frac{{\rm d}\Omega}{{\rm d}E}.$$
One can rewrite that as a differential equation:
$$\frac{{\rm d}}{{\rm d}E} \Omega = - \beta \Omega,$$ where the minus sign reflects the fact that the energy $E$ of the small system is taken from the heat reservoir. This has the solution $$ \Omega = \Omega_0 e^{-\beta E}.$$ The probability of finding the small system in a state with energy $E$ is therefore $$ P(E) \propto e^{-\beta E}.$$ This is the Boltzmann factor. The derivation relies on the small oscillator being distinguishable, which is why it gives the classical distribution function.
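One can check this numerically with a concrete reservoir. The sketch below (the parameter choices are mine, purely illustrative) uses an Einstein solid of $R$ oscillators as the reservoir, whose multiplicity with $q$ quanta is $\Omega(q) = \binom{q+R-1}{q}$, and verifies that removing one quantum changes the multiplicity by the factor $e^{-\beta}$ predicted above (with energy measured in units of one quantum):

```python
from math import comb, log, exp

R, q = 1000, 2000   # reservoir oscillators and total quanta (arbitrary choices)

def omega(n_quanta):
    """Multiplicity of an Einstein solid: ways to put n_quanta into R oscillators."""
    return comb(n_quanta + R - 1, n_quanta)

# beta = d(ln Omega)/dE of the reservoir, estimated by a centered difference.
beta = (log(omega(q + 1)) - log(omega(q - 1))) / 2

# Probability that the small system holds 1 quantum, relative to 0 quanta,
# is the ratio of reservoir multiplicities; it should match exp(-beta).
ratio = omega(q - 1) / omega(q)
```

For these numbers, `ratio` and `exp(-beta)` agree to a few parts in $10^4$, and the agreement improves as the reservoir grows, just as the continuum argument suggests.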
First consider a scheme $X$ with an open cover $\mathcal{U}=\{U_i\}$. An object with descent data on $\mathcal{U}$ is a collection $(\mathcal{E}_i,\phi_{ij})$ where $\mathcal{E}_i$ is a quasi-coherent sheaf on each $U_i$ and $\phi_{ij}$ is an isomorphism $pr_2^*\mathcal{E}_j\to pr_1^*\mathcal{E}_i$ in $Qcoh(U_{ij})$ which satisfies the cocycle condition $$ pr_{13}^*\phi_{ik}=pr_{12}^*\phi_{ij}\circ pr_{23}^*\phi_{jk}. $$ We can make the descent data into a category where the morphisms are those compatible with the $\phi_{ij}$'s. Moreover we have an equivalence of categories $$ \text{Desc}(X,\mathcal{U})\simeq Qcoh(X). $$
Of course the definition of descent data has various generalizations. For example, instead of the category of quasi-coherent sheaves we can consider an arbitrary fibered category $\mathcal{F}$ over spaces. For any map between spaces $\pi: U\to X$ we have a cosimplicial diagram of categories $$ \mathcal{F}(X)\to \mathcal{F}(U)\rightrightarrows \mathcal{F}(U\times_X U)\ldots $$ An object with descent data on $U$ is a collection $(\mathcal{E},\phi)$ where $\mathcal{E}$ is an object in $\mathcal{F}(U)$ and $\phi$ is an isomorphism $pr_2^*\mathcal{E}\to pr_1^*\mathcal{E}$ which satisfies the cocycle condition $$ pr_{13}^*\phi=pr_{12}^*\phi\circ pr_{23}^*\phi. $$ We notice that the category $\text{Desc}(U\to X)$ is equivalent to the fiber product of categories $\mathcal{F}(U)\times_{\mathcal{F}(U\times_X U)} \mathcal{F}(U)$ (as Zhen Lin points out in the comment, we need to also consider $\mathcal{F}(U\times_X U\times_X U)$).
We also notice that if $\pi$ is flat then $\phi$ can be also expressed as the comodule structure $\mathcal{E}\to \pi^*\pi_*\mathcal{E}$ since $\pi^*\pi_*\mathcal{E}\cong (pr_2)_*pr_1^*\mathcal{E}$. Moreover in the flat case (together with some condition on $F$ I guess) we have $$ \text{Desc}(U\to X)\simeq \mathcal{F}(X). $$
Now we consider the case that $\mathcal{F}$ contains some higher structure. In this case we have an augmented cosimplicial diagram of (higher) categories $$ \mathcal{F}(X)\to \mathcal{F}(U)\rightrightarrows \mathcal{F}(U\times_X U)\ldots $$ and the definition of descent data should be modified. For example in the recent version of Yekutieli's Twisted Deformation Quantization of Algebraic Varieties Section 5, an explicit definition of descent data of cosimplicial cross groupoid is given.
Before giving the explicit construction, I would like to know what the descent data SHOULD be. Or, more precisely: what are the properties that descent data must satisfy?
One attempt is to say that descent data are extra structures (say, a comodule structure) on $\mathcal{F}(U)$ such that we have the (weak) equivalence $\text{Desc}(U\to X)\simeq \mathcal{F}(X)$; but the equivalence only exists in good (say flat) cases, while the descent data can be given in general.
Therefore is the alternative description better? The category of descent data should be the homotopy limit (or totalization) of the cosimplicial diagram $$ \mathcal{F}(U)\rightrightarrows \mathcal{F}(U\times_X U)\ldots $$ |
Hi,
I'm currently reading the book "Quantum Field Theory for Mathematicians" by Ticciati and in section 2.3 he mentions that the Lorentz action on the free scalar field creation operators [itex] \alpha(k)^\dagger [/itex] is given by
[tex] U(\Lambda)\alpha(k)^\dagger U(\Lambda)^\dagger = \alpha(\Lambda k)^\dagger .[/tex] Can someone tell me how to show this, or at least how to get started?
Many thanks to Bart Andrews for this contribution!
Question
Consider a gas in contact with a solid surface. The molecules of the gas can adsorb to specific sites on the surface. These sites are sparsely enough distributed over the surface that they do not directly interact. In total, there are N adsorption sites, and each can adsorb n = 0, n = 1, or n = 2 molecules. When an adsorption site is unoccupied, the energy of the site is zero.
When an adsorption site is occupied by a single molecule, the energy of the site is \(\varepsilon_1\). When an adsorption site is doubly occupied, the adsorption energy is \(\varepsilon_2\). In addition, the two adsorbed molecules can interact in a vibrational mode with frequency ω, so that the energy of the doubly occupied adsorption site is \[\varepsilon_2+\nu \hbar \omega \text{ with } \nu = 0,1,2,...\]
The gas above the surface can be considered as a heat and particle reservoir with temperature T and chemical potential μ.
Calculate the grand canonical partition sum \(Z_G\).
Calculate the grand canonical potential \(J\).
Calculate the mean number of adsorbed molecules on the surface directly from \(Z_G\).
Calculate the mean number of adsorbed molecules on the surface directly from \(J\).
Calculate the probability that an adsorption site is in the state with \(n = 2\) and \(\nu = 3\).

Solution

1. Calculate the Grand Canonical Partition Sum
In the question we are told that the binding sites are “sparsely enough distributed over the surface that they do not directly interact” and so we can use the relation \(Z_G = z_G^N\) where \(z_G\) is the single binding site partition sum. From this, we can write down the grand canonical partition sum.
\[ Z_G = z_G^N = \left( \underbrace{1}_{n=0} + \underbrace{e^{- \beta ( \epsilon_1 - \mu)}}_{n=1} + \underbrace{\sum_{\nu = 0}^{\infty} e^{- \beta ( \epsilon_2 + \nu \hslash \omega - 2 \mu)}}_{n=2} \right) ^N \]
So the first term is for the \(n=0\) state and the second term is for the \(n=1\) state. These terms were covered in lectures and so should be fairly apparent. The third term has a factor of two in front of \(\mu\) because there are two particles in the microstate, and the expression for the energy of this microstate is given in the question. In a partition sum we have to sum over all possible microstates, and hence over all possible \(\nu\) in this case. This infinite geometric sum can be evaluated if desired, since \(e^{-\beta \hslash \omega} < 1\). However, this does not simplify things much and so it will not be done here.
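For reference, the closed form of the geometric sum (convergent because \(e^{- \beta \hslash \omega} < 1\)) would be

\[ \sum_{\nu = 0}^{\infty} e^{- \beta ( \epsilon_2 + \nu \hslash \omega - 2 \mu)} = \frac{e^{- \beta ( \epsilon_2 - 2 \mu)}}{1 - e^{- \beta \hslash \omega}} \]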
2. Calculate the Grand Canonical Potential
For this part you can use the definition of the grand canonical potential and simply substitute in the expression for the grand canonical partition sum.
\begin{eqnarray}
J & = & -k_B T \ln Z_G \nonumber \\ & = & -N k_B T \ln z_G \nonumber \\ & = & -N k_B T \ln \left( 1 + e^{- \beta ( \epsilon_1 - \mu)} + \sum_{\nu = 0}^{\infty} e^{- \beta ( \epsilon_2 + \nu \hslash \omega - 2 \mu)} \right) \nonumber \end{eqnarray}

3. Calculate the mean number of adsorbed molecules from \(Z_G\)
For this part we can use a similar approach as to what was shown in lectures.
\begin{eqnarray}
\langle n \rangle & = & N \left( \frac{0+1 \cdot e^{- \beta ( \epsilon_1 - \mu)} + 2 \cdot \sum_{\nu = 0}^{\infty} e^{- \beta ( \epsilon_2 + \nu \hslash \omega - 2 \mu)}}{z_G} \right) \nonumber \\ & = & N \left( \frac{e^{- \beta ( \epsilon_1 - \mu)} + \sum_{\nu = 0}^{\infty} 2e^{- \beta ( \epsilon_2 + \nu \hslash \omega - 2 \mu)}}{z_G} \right) \nonumber \end{eqnarray}
So in the brackets we are calculating the mean number of adsorbed molecules per binding site, and then we multiply this by \(N\) to get the mean number of adsorbed molecules on the surface. The numerator is the sum, over microstates, of the number of molecules times the Boltzmann weight of that microstate, and this is then divided by the partition sum \(z_G\). This works just like any weighted mean.
4. Calculate the mean number of adsorbed molecules from J
Here, we need to make use of the thermodynamic relation between the grand canonical potential, the chemical potential and the particle number.
\begin{eqnarray}
- \langle n \rangle & = & \frac{\partial J}{\partial \mu} \nonumber \\ & = & - N k_B T \frac{\partial \ln z_G}{\partial \mu} \nonumber \\ \langle n \rangle & = & N k_B T \left( \frac{z'_G}{z_G} \right) \nonumber \\ & = & N k_B T \beta \left( \frac{e^{- \beta ( \epsilon_1 - \mu)} + \sum_{\nu = 0}^{\infty} 2e^{- \beta ( \epsilon_2 + \nu \hslash \omega - 2 \mu)}}{z_G} \right) \nonumber \\ & = & N \left( \frac{e^{- \beta ( \epsilon_1 - \mu)} + \sum_{\nu = 0}^{\infty} 2e^{- \beta ( \epsilon_2 + \nu \hslash \omega - 2 \mu)}}{z_G} \right) \nonumber \end{eqnarray}

Here \(z'_G\) denotes \(\partial z_G / \partial \mu\).
This is the same expression as we derived in part three, as required.
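As a sanity check, the two routes to \(\langle n \rangle\) can be compared numerically. The sketch below (parameter values are arbitrary, chosen only for the check; units with \(k_B = 1\)) evaluates the direct average and the \(\mu\)-derivative of \(\ln z_G\) by a central difference:

```python
from math import exp, log

# Arbitrary illustrative parameters in natural units (k_B = 1).
beta, eps1, eps2, hw = 1.0, 0.3, 0.4, 0.7
NU_MAX = 400  # truncation of the nu-sum; terms decay like exp(-beta*hw*nu)

def z_G(mu):
    """Single-site grand canonical partition sum."""
    n2 = sum(exp(-beta * (eps2 + nu * hw - 2 * mu)) for nu in range(NU_MAX))
    return 1.0 + exp(-beta * (eps1 - mu)) + n2

def mean_n_direct(mu):
    """<n> per site: occupation-weighted Boltzmann factors over z_G (part 3)."""
    n1 = 1 * exp(-beta * (eps1 - mu))
    n2 = sum(2 * exp(-beta * (eps2 + nu * hw - 2 * mu)) for nu in range(NU_MAX))
    return (n1 + n2) / z_G(mu)

def mean_n_from_J(mu, h=1e-6):
    """<n> per site from -dJ/dmu = (1/beta) d(ln z_G)/dmu (part 4),
    with the derivative taken by a central difference."""
    return (log(z_G(mu + h)) - log(z_G(mu - h))) / (2 * h * beta)
```

Both functions agree to numerical-derivative accuracy, mirroring parts 3 and 4 above.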
5. Probability an adsorption site is in the state \(n=2\) and \(\nu = 3\)
This expression is simply the state divided by the sum of all possible states.
\[ P(n=2 ; \nu = 3 ) = \frac{ e^{- \beta (\epsilon_2 + 3 \hslash \omega - 2 \mu)}}{z_G} \] |
It’s been a while since my last math post. It seems that my previous assertion correlating the quality of questions with the period of final exams could not have been further from the truth. It now seems quite likely that the occurrence of a few “good” questions on Yahoo Answers in a short period a few weeks back was more of a fluke than the norm.
Problem: Find the standard matrix for the linear transformation which reflects points in the x-y plane across the line y = \frac{-2x}{3}.

Solution: To find the matrix representing a given linear transformation, all we need to do is figure out where the basis vectors, i.e.
\vec{e_{1}}=\begin{bmatrix} 1\\ 0 \end{bmatrix} and \vec{e_{2}}=\begin{bmatrix} 0\\ 1 \end{bmatrix}
are mapped under the transformation.
This is easy to see from the fact that since \vec{e_{1}} and \vec{e_{2}} form a basis of \mathbb{R}^{2} any element
\vec{u} \in \mathbb{R}^{2}
can be written as
\vec{u} = a \cdot \vec{e_{1}} + b \cdot \vec{e_{2}}
for some a, b \in \mathbb{R}.
Now let
A : \mathbb{R}^{2} \to \mathbb{R}^{2}
be a linear transformation. Then, by definition
A(\vec{u}) = A(a \cdot \vec{e_{1}} + b \cdot \vec{e_{2}}) = a \cdot A( \vec{e_{1}} ) + b \cdot A( \vec{e_{2}} )
Thus, the image of any element \vec{u} \in \mathbb{R}^{2} under a linear transformation is completely determined by the images of the basis vectors of the transformation’s domain.
Let’s find the image of \vec{e_{1}}=\begin{bmatrix} 1\\ 0 \end{bmatrix} under a reflection across the line given by y = \frac{-2x}{3}. First, we need to find a line perpendicular to y = \frac{-2x}{3} that passes through the point p_{1} = (1, 0).
In general, a line perpendicular to y = mx + b has slope \frac{-1}{m}, i.e. in our case the perpendicular lines are of the form y = \frac{3x}{2} + b. To find the particular one that passes through p_{1} = (1, 0), we simply plug the values x = 1 and y = 0 into the line equation and solve for the constant b. Then we have
(0) = \frac{3(1)}{2}+b
Thus, the line perpendicular to y = \frac{-2x}{3} that passes through p_{1} = (1, 0) is given by y = \frac{3x}{2} - \frac{3}{2}
Now let’s find the point q_{1} = (x, y) at which our two lines y = \frac{-2x}{3} and y = \frac{3x}{2} – \frac{3}{2} intersect. This is done easily by solving the two line equation simultaneously:
y = \frac{-2x}{3} \ (1)
y = \frac{3x}{2} - \frac{3}{2} \ (2)
Equating (1) and (2) (and multiplying by 6 for simplicity) gives
-4x = 9x – 9
x = \frac{9}{13}
which then gives
y = \frac{-6}{13}
Thus, our two lines intersect at
q_{1} = \left (\frac{9}{13}, \frac{-6}{13} \right )
Now, let {p_{1}}’ = (x, y) be the reflection of p_{1} = (1, 0) across y = \frac{-2x}{3}. Since q_{1} is the midpoint of the segment joining p_{1} and {p_{1}}’ (that is how the reflection is constructed), we have {p_{1}}’ = 2q_{1} - p_{1}, and so

x = 2 \cdot \frac{9}{13} - 1 = \frac{5}{13}

y = 2 \cdot \frac{-6}{13} - 0 = \frac{-12}{13}
so
A( \vec{e_{1}} ) = \begin{bmatrix} \frac{5}{13} \\ \frac{-12}{13} \end{bmatrix}
Similarly, we find the image of \vec{e_{2}}=\begin{bmatrix} 0\\ 1 \end{bmatrix} under a reflection across the line given by y = \frac{-2x}{3}. The line perpendicular to y = \frac{-2x}{3} that passes through the point p_{2} = (0, 1) is given by
y = \frac{3x}{2} + 1 \ (3)
Equating (1) and (3) (and multiplying by 6 again) gives
-4x = 9x + 6
x = \frac{-6}{13}
and so
y = \frac{4}{13}
Therefore, (1) and (3) intersect at
q_{2} = \left (\frac{-6}{13}, \frac{4}{13} \right )
and the image {p_{2}}’ = (x, y) of p_{2} = (0, 1) under the reflection across y = \frac{-2x}{3} is given by {p_{2}}’ = 2q_{2} - p_{2}, i.e.

x = 2 \cdot \frac{-6}{13} - 0 = \frac{-12}{13}

y = 2 \cdot \frac{4}{13} - 1 = \frac{-5}{13}
which yields
A( \vec{e_{2}} ) = \begin{bmatrix} \frac{-12}{13} \\ \frac{-5}{13} \end{bmatrix}
Finally, the matrix A : \mathbb{R}^{2} \to \mathbb{R}^{2} for the linear transformation which reflects points in the x-y plane across the line y = \frac{-2x}{3} is given by
A = \begin{bmatrix} \frac{5}{13} & \frac{-12}{13} \\ \frac{-12}{13} & \frac{-5}{13} \end{bmatrix}
It turns out (although I’m not going to show the derivation here) that, in general, the matrix for the linear transformation which reflects points in the x-y plane across a line through the origin, y = mx, is given by
A = \begin{bmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix}
where \theta is the angle that the line makes with the positive x-axis. Taking

\theta = arctan(m)

works for every m: a negative m gives a negative \theta, but the matrix depends only on 2\theta, and a reflection line is determined only modulo \pi. (Check: m = \frac{-2}{3} gives \cos 2\theta = \frac{5}{13} and \sin 2\theta = \frac{-12}{13}.)
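As a quick numerical check (a sketch; the helper names are mine), one can build the matrix directly from the angle formula and verify it reproduces the entries derived above. Note that simply taking \theta = arctan(m) works even for negative m, since only 2\theta enters the matrix:

```python
from math import atan, cos, sin, isclose

def reflection_matrix(m):
    """Matrix reflecting the plane across the line y = m*x through the origin.
    theta = atan(m) is fine for negative m as well, since only 2*theta enters."""
    t = 2 * atan(m)
    return [[cos(t), sin(t)], [sin(t), -cos(t)]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = reflection_matrix(-2 / 3)
# Columns agree with the derivation: A e1 = (5/13, -12/13), A e2 = (-12/13, -5/13).
# A point on the mirror line, e.g. (3, -2), is left fixed, and reflecting
# twice gives back the original point, as any reflection must.
```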
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long-term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subject to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
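Something like this toy 1-D version of the rule could show it (the lifetime and neighbourhood choices are just my guesses from the description above):

```python
def step(state, ages, lifetime=10):
    """One update of the rule on a 1-D row of cells.
    state: list of 'A'/'B'; ages: steps each cell has spent as 'A'."""
    new_state, new_ages = state[:], ages[:]
    for i, s in enumerate(state):
        if s == 'A':
            if ages[i] >= lifetime:
                new_state[i] = 'B'           # A decays to B after `lifetime` steps
            else:
                new_ages[i] = ages[i] + 1
        elif 'A' in state[max(0, i - 1):i + 2]:
            new_state[i], new_ages[i] = 'A', 0   # B next to an A becomes A again
    return new_state, new_ages
```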
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled in red marker, and some scribbles
The documentary then showed a bird's-eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays, and they notate the range of indices of the tensor array
In some tiles there's a swirl-shaped dirt mound; these represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible laying across the grid, holding up a triangle pool of water
Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied and the sign snapping in the middle. The boys are then forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of a different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta\gamma\delta\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ are taken from a subset of the values the $\alpha,\beta$ indices range over. For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$
However, even if the indices are restricted to certain values, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I would have liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or did he take a site-level conflict into the IRL world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
4
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although this is no longer about Ron Maimon: I can't see the meaning of "campaign" as being well-defined enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be read as if "I were campaigning for my caging".
The theorem is: Let $f, g : X\rightarrow Y$ be two continuous maps between topological spaces and
$H : X\times[0, 1]\rightarrow Y$ a homotopy such that $H(x,0)=f(x)$ and $H(x,1)=g(x)$.
Given $x_0\in X$, let $y_0=f(x_0)$, $y_1=g(x_0)$, and let $\tau(t)=H(x_0,t)$ be a path in $Y$ from $y_0$ to $y_1$.
Then, $g_*=\varphi_{[\tau]}\circ f_*$
i.e. $\require{AMScd}$ \begin{CD} \pi_1(X,x_0) @>f_*>> \pi_1(Y,y_0)\\ @| @VV \varphi_{[\tau]} V\\ \pi_1(X,x_0) @>g_*>> \pi_1(Y,y_1) \end{CD}
where
$f_*:\pi_1(X,x_0) \rightarrow \pi_1(Y,y_0),\space f_*([\gamma])=[f\circ \gamma]$,
$g_*:\pi_1(X,x_0) \rightarrow \pi_1(Y,y_1),\space g_*([\gamma])=[g\circ \gamma]$ and
$\varphi_{[\tau]}:\pi_1(Y,y_0) \rightarrow \pi_1(Y,y_1),\space\varphi_{[\tau]}([\gamma])=[\tau^{-1}]·[\gamma]·[\tau]$.
In the diagram, $\pi_1(X,x_0)$ is meant to go directly to $\pi_1(Y,y_1)$ by $g_*$.
$f_*$ and $g_*$ are group homomorphism and $\varphi_{[\tau]}$ is a group isomorphism.
I began by supposing
$g_*([\gamma])=\varphi_{[\tau]}\circ f_*([\gamma])=[\tau^{-1}]·[f\circ\gamma]·[\tau] \iff [\tau]·g_*([\gamma])·[\tau^{-1}]·f_*([\gamma]^{-1})=e_{y_0}$
and from here I don't know how to continue; I think there is an argument involving something with four pieces (perhaps the sides of a homotopy square) which can help me complete the proof.
Thanks. |
I am struggling to derive the squared bias and variance based on an eigendecomposition for the OLS procedure.
The model
Consider the univariate model $y_i = f(x_i) + \epsilon_i, \ i = 1, \dots, n$ with $E(\epsilon_i)=0$ and $Var(\epsilon_i)=\sigma^2$.
Let $S$ be a projection matrix (obtained by OLS) such that the fitted values are $\hat{Y}=SY$.
Let $S=ODO^T$ with $OO^T=I$ and $D=diag(d_1, \dots, d_n)$ be an eigendecomposition of $S$.
I derived a formula for the squared bias and variance:
(1) $Var(\cdot{}) = \sigma^2n^{-1} \sum_{k=1}^{n}d_k^2$
(2) $bias^2 = n^{-1}f^TO diag((1-d_k)^2)O^Tf,$ where $f = (f(x_1), \dots, f(x_n))^T$
Question
Since the eigenvalues are $d_1=d_2=1$ (univariate model with slope and intercept) and $d_k=0 \ \forall k =3, \dots, n$, I obtain from (1) the variance of the OLS procedure, i.e. $Var(\cdot{}) = 2\sigma^2/n$ (which seems to be correct).
However, I am not sure what the squared bias is (based on the above formula). I thought it was 0, but I cannot see why, since the entries in the diagonal matrix from (2), $diag((1-d_k)^2)$ for $k=3, \dots, n$, are not zero but one. I know that the first two eigenvectors (columns) of the matrix $O$ span a linear subspace. What about the remaining $(n-2)$ eigenvectors?
Sketch of derivation of above expressions
For (1), simplify $Cov(SY)$ and then take the trace to obtain the variance
For (2), expand $(E(SY)-f)^T(E(SY)-f)$ and simplify using the orthonormality of $O$.
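A quick numerical check (a NumPy sketch; the toy design is ours, with `S` and `d` mirroring the notation above) confirms both points: the hat matrix has exactly two unit eigenvalues, and the squared bias vanishes whenever $f$ lies in the column space of the design, because the coordinates of $O^Tf$ along the zero-eigenvalue eigenvectors are themselves zero, so the unit entries of $diag((1-d_k)^2)$ multiply zeros.

```python
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])      # intercept + slope design
S = X @ np.linalg.solve(X.T @ X, X.T)     # hat (projection) matrix

d = np.linalg.eigvalsh(S)                 # eigenvalues: n-2 zeros and two ones
sigma2 = 2.0
var = sigma2 / n * np.sum(d**2)           # formula (1): equals 2*sigma2/n here

# If f is linear (lies in the span of X), then O^T f has zero entries in the
# directions with d_k = 0, so formula (2) gives zero bias even though the
# corresponding diagonal entries (1 - d_k)^2 equal 1.
f = X @ np.array([1.0, 2.0])
bias2 = (S @ f - f) @ (S @ f - f) / n     # numerically zero
```

In other words, the remaining $n-2$ eigenvectors span the orthogonal complement of the model space, and formula (2) only picks up the part of $f$ lying in that complement.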
In the paper http://www.sciencedirect.com/science/article/pii/S0045782509003521, an HDG element-local equation is described on page 584 equation (4), with one of the equations taking the following form
$$-(u_h,\nabla q)_K = -\left\langle\hat{u}_h \cdot n, q - \bar{q}\right\rangle_{\partial K}$$
Which is the variational approximation to the continuous equation $\nabla \cdot u = 0$, with a scalar-valued test function $q$ in a space that makes sense.
The paper defines $$\bar{q} = \frac{1}{|\partial K|} \int_{\partial K} q $$.
How is this interpreted, in a finite element sense? From my understanding, we multiply both sides by a test function $q$ and then attempt to find the solution which satisfies the equation for all possible choices of $q$. How is it possible to modify the test space in this manner?
The paper also states that this is necessary to enforce the identity $$\left\langle\hat{u}_h\cdot n, q - \bar{q}\right\rangle_{\partial K} = 0$$ I agree with this statement, but how might a test function $q - \bar{q}$ be implemented in code? Should I take the basis functions on the element and subtract their mean when assembling the element local linear system? |
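One plausible implementation (a sketch under assumed conventions, not code taken from the paper) does exactly what the question suggests: evaluate the scalar basis at the boundary quadrature points, compute each basis function's weighted mean over $\partial K$, and subtract it before assembling the facet terms. For a toy 1D facet:

```python
import numpy as np

# Hypothetical facet [0,1] with 2-point Gauss quadrature (assumed setup).
s = np.array([0.5 - 0.5 / np.sqrt(3.0), 0.5 + 0.5 / np.sqrt(3.0)])
w = np.array([0.5, 0.5])                     # weights summing to the facet measure

# Values of the scalar test basis {1, s} at the quadrature points.
phi = np.column_stack([np.ones_like(s), s])  # shape (n_quad, n_basis)

facet_measure = w.sum()                      # |dK| for this single facet
phi_bar = (w @ phi) / facet_measure          # mean of each basis function over dK
phi_tilde = phi - phi_bar                    # modified test functions q - q_bar
```

The element-local facet terms are then assembled with `phi_tilde` in place of `phi`, so every assembled test function integrates to zero over the boundary, which is what the identity $\left\langle\hat{u}_h\cdot n, q - \bar{q}\right\rangle_{\partial K} = 0$ requires.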
The amsmath package provides a handful of options for displaying equations. You can choose the layout that better suits your document, even if the equations are really long, or if you have to include several equations in the same line.
The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example:
\begin{equation} \label{eq1} \begin{split} A & = \frac{\pi r^2}{2} \\ & = \frac{1}{2} \pi r^2 \end{split} \end{equation}
You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equation into smaller pieces, and these smaller pieces will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character &, to set the points where the equations are vertically aligned.
This is a simple step, if you use LaTeX frequently surely you already know this. In the preamble of the document include the code:
\usepackage{amsmath}
To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document.
\begin{equation} \label{eu_eqn} e^{\pi i} + 1 = 0 \end{equation}
The beautiful equation \ref{eu_eqn} is known as Euler's identity.
For equations longer than a line, use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right.
Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not.
\begin{multline*} p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\ - 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3 \end{multline*}
split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example, check the introduction of this document.
If there are several equations that you need to align vertically, the align environment will do it:
Usually the binary operators (>, < and =) are the ones aligned for a nice-looking document.
As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example:
\begin{align*} x&=y & w &=z & a&=b+c\\ 2x&=-y & 3w&=\frac{1}{2}z & a&=b\\ -4 + 5x&=2+y & w+2&=-1+w & ab&=cb \end{align*}
Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by an &; also that each equation is separated from the one before by an &.
Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually.
If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here.
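For instance, a minimal gather example (our own, in the same spirit as the align example above) might look like:

```latex
\begin{gather*}
 2x - 5y = 8 \\
 3x^2 + 9y = 3a + c
\end{gather*}
```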
For more information see |
Consider the wave equation:
$$ \square A(t,x^i) = S(t,x^i) , $$
where $\square = -\partial_\mu \partial^\mu = \partial_t ^2 - \nabla^2 $, $S(t,x^i)$ is the source term and $A(t,x^i)$ is the field of interest.
There are various ways to find the Green's function for the wave equation.
$$ \square G(t,x^i) = \delta^{(4)}(t,x^i) \left( = \delta(t) \delta^{(3)}(x^i) \right) $$
For example, Landau starts by setting $G(t,x^i) = \chi(t) \cdot \frac{1}{4\pi r} $ and deduces that $\chi(t) = \delta(t-r)$ ($r:= (x_i x^i )^{1/2} $). On the other hand, the most popular and systematic way is to apply a Fourier transformation:
$$ \ast \ \ast \ \ast $$
$$ \ast \ \ast \ \ast $$
After dealing with some complex integration with an appropriate contour, one gets $G_{\rm{ret}}(t,x^i) = \frac{\delta(t-r)}{4\pi r}$. However, what I found fascinating was the following approach, using Wick rotation.
First, one performs the Wick rotation $ t = x^0 := x^4 / i $ . Then,
$$\square = \eta^{\mu\nu} \partial_\mu \partial_\nu = \delta^{AB}\partial_A \partial_B \ \ \ (\mu, \nu = 0,1,2,3, \ A, B = 1,2,3,4)$$
so,
$$\frac{1}{\square} \delta^{(4)}(x) = \frac{1}{4\pi^2 } \cdot \frac{1}{\delta_{AB} x^A x^B}, $$
in analogy with the three-dimensional Euclidean result
$$ \frac{1}{- \nabla^2} \delta^{(3)}(x) = \frac{1}{4\pi } \cdot \frac{1}{r} $$
(the factors $4 \pi$ and $ 4\pi^2 $ come from the solid angles of $\mathbb{E}^3$ and $\mathbb{E}^4$, respectively).
Therefore,
$$ A(t,x^i) = \frac{1}{\square} S(t,x^i) = \frac{1}{\square} \iiint d^3 x' \int dx'^4 \ S(x'^4/i, x'^i) \delta ^{(4)} (x-x') $$
$$ = \iiint d^3 x' \int dx'^4 \ S(x'^4/i, x'^i) \frac{1}{4 \pi^2} \frac{1}{-(t-t')^2 + R^2 } $$
$$ = \frac{i}{4 \pi^2} \iiint d^3 x' \int^{-i \infty}_{i \infty} dt' \ S(t', x'^i) \frac{1}{-(t-t')^2 + R^2 } $$
$$ = \frac{i}{4 \pi^2 } \iiint d^3 x' \int^{i \infty}_{-i \infty} dt' \frac{S(t', x'^i)}{ (t' -t +R)(t'-t - R ) } $$
where $R:= \left( (x_i - x'_i)(x^i - x'^i) \right)^{1/2} $. This approach is adapted from Lightman et al.'s "Problem book in relativity and gravitation, Princeton University Press."
At the last step, the author performs the complex integration along the following path,
and gets the correct result (the retarded Green's function):
$$ = \frac{i}{4 \pi^2 } \iiint d^3 x' \left[ (2\pi i)\cdot \frac{ S(t-R , x'^i)}{-2R} \right] $$
$$ = \frac{1}{4 \pi } \iiint d^3 x' \left[ \frac{S(t-R , x'^i)}{R} \right] $$
However, I am not sure why the contour should pass between the two poles $ t' = t-R$ and $t' = t+R$. I understand that the direction in which the original contour (the imaginary line from $- i \infty$ to $ i \infty$) is folded corresponds to the different boundary conditions (retarded and advanced), but what if $0<t-R$ or $t+R < 0$, so that there is no residue contribution from the poles and the $t'$-integral vanishes? (Note that the original contour passes over $0$.)
Linear code constructors that do not preserve the structural information
This file contains a variety of constructions which build the generator matrix of special (or random) linear codes and wrap them in a sage.coding.linear_code.LinearCode object. These constructions are therefore not rich objects such as sage.coding.grs_code.GeneralizedReedSolomonCode.
All codes available here can be accessed through the codes object:

sage: codes.random_linear_code(GF(2), 5, 2)
[5, 2] linear code over GF(2)
AUTHORS:
- David Joyner (2007-05): initial version
- David Joyner (2008-02): added cyclic codes, Hamming codes
- David Joyner (2008-03): added BCH code, LinearCodeFromCheckmatrix, ReedSolomonCode, WalshCode, DuadicCodeEvenPair, DuadicCodeOddPair, QR codes (even and odd)
- David Joyner (2008-09): fix for bug in BCHCode reported by F. Voloch
- David Joyner (2008-10): small docstring changes to WalshCode and walsh_matrix
sage.coding.code_constructions.DuadicCodeEvenPair(F, S1, S2)
Constructs the “even pair” of duadic codes associated to the “splitting” (see the docstring for _is_a_splitting for the definition) S1, S2 of n.
Warning
Maybe the splitting should be associated to a sum of q-cyclotomic cosets mod n, where q is a prime.
EXAMPLES:
sage: from sage.coding.code_constructions import _is_a_splitting
sage: n = 11; q = 3
sage: C = Zmod(n).cyclotomic_cosets(q); C
[[0], [1, 3, 4, 5, 9], [2, 6, 7, 8, 10]]
sage: S1 = C[1]
sage: S2 = C[2]
sage: _is_a_splitting(S1,S2,11)
True
sage: codes.DuadicCodeEvenPair(GF(q),S1,S2)
([11, 5] Cyclic Code over GF(3), [11, 5] Cyclic Code over GF(3))
sage.coding.code_constructions.DuadicCodeOddPair(F, S1, S2)
Constructs the “odd pair” of duadic codes associated to the “splitting” S1, S2 of n.
Warning
Maybe the splitting should be associated to a sum of q-cyclotomic cosets mod n, where q is a prime.
EXAMPLES:
sage: from sage.coding.code_constructions import _is_a_splitting
sage: n = 11; q = 3
sage: C = Zmod(n).cyclotomic_cosets(q); C
[[0], [1, 3, 4, 5, 9], [2, 6, 7, 8, 10]]
sage: S1 = C[1]
sage: S2 = C[2]
sage: _is_a_splitting(S1,S2,11)
True
sage: codes.DuadicCodeOddPair(GF(q),S1,S2)
([11, 6] Cyclic Code over GF(3), [11, 6] Cyclic Code over GF(3))
This is consistent with Theorem 6.1.3 in [HP2003].
sage.coding.code_constructions.ExtendedQuadraticResidueCode(n, F)
The extended quadratic residue code (or XQR code) is obtained from a QR code by adding a check bit to the last coordinate. (These codes have very remarkable properties such as large automorphism groups and duality properties - see [HP2003], Section 6.6.3-6.6.4.)
INPUT:
n -- an odd prime
F -- a finite prime field whose order must be a quadratic residue modulo n.
OUTPUT: Returns an extended quadratic residue code.
EXAMPLES:
sage: C1 = codes.QuadraticResidueCode(7,GF(2))
sage: C2 = C1.extended_code()
sage: C3 = codes.ExtendedQuadraticResidueCode(7,GF(2)); C3
Extension of [7, 4] Cyclic Code over GF(2)
sage: C2 == C3
True
sage: C = codes.ExtendedQuadraticResidueCode(17,GF(2))
sage: C
Extension of [17, 9] Cyclic Code over GF(2)
sage: C3 = codes.QuadraticResidueCodeOddPair(7,GF(2))[0]
sage: C3x = C3.extended_code()
sage: C4 = codes.ExtendedQuadraticResidueCode(7,GF(2))
sage: C3x == C4
True
AUTHORS:
David Joyner (07-2006)
sage.coding.code_constructions.QuadraticResidueCode(n, F)
A quadratic residue code (or QR code) is a cyclic code whose generator polynomial is the product of the polynomials \(x-\alpha^i\) (\(\alpha\) is a primitive \(n^{th}\) root of unity; \(i\) ranges over the set of quadratic residues modulo \(n\)).
See QuadraticResidueCodeEvenPair and QuadraticResidueCodeOddPair for a more general construction.
INPUT:
n -- an odd prime
F -- a finite prime field whose order must be a quadratic residue modulo n.
OUTPUT: Returns a quadratic residue code.
EXAMPLES:
sage: C = codes.QuadraticResidueCode(7,GF(2))
sage: C
[7, 4] Cyclic Code over GF(2)
sage: C = codes.QuadraticResidueCode(17,GF(2))
sage: C
[17, 9] Cyclic Code over GF(2)
sage: C1 = codes.QuadraticResidueCodeOddPair(7,GF(2))[0]
sage: C2 = codes.QuadraticResidueCode(7,GF(2))
sage: C1 == C2
True
sage: C1 = codes.QuadraticResidueCodeOddPair(17,GF(2))[0]
sage: C2 = codes.QuadraticResidueCode(17,GF(2))
sage: C1 == C2
True
AUTHORS:
David Joyner (11-2005)
sage.coding.code_constructions.QuadraticResidueCodeEvenPair(n, F)
Quadratic residue codes of a given odd prime length and base ring either don’t exist at all or occur as 4-tuples - a pair of “odd-like” codes and a pair of “even-like” codes. If \(n > 2\) is prime then (Theorem 6.6.2 in [HP2003]) a QR code exists over \(GF(q)\) iff q is a quadratic residue mod \(n\).
They are constructed as “even-like” duadic codes associated to the splitting (Q,N) mod n, where Q is the set of non-zero quadratic residues and N is the set of non-residues.
EXAMPLES:
sage: codes.QuadraticResidueCodeEvenPair(17, GF(13))   # known bug (#25896)
([17, 8] Cyclic Code over GF(13), [17, 8] Cyclic Code over GF(13))
sage: codes.QuadraticResidueCodeEvenPair(17, GF(2))
([17, 8] Cyclic Code over GF(2), [17, 8] Cyclic Code over GF(2))
sage: codes.QuadraticResidueCodeEvenPair(13,GF(9,"z"))   # known bug (#25896)
([13, 6] Cyclic Code over GF(9), [13, 6] Cyclic Code over GF(9))
sage: C1,C2 = codes.QuadraticResidueCodeEvenPair(7,GF(2))
sage: C1.is_self_orthogonal()
True
sage: C2.is_self_orthogonal()
True
sage: C3 = codes.QuadraticResidueCodeOddPair(17,GF(2))[0]
sage: C4 = codes.QuadraticResidueCodeEvenPair(17,GF(2))[1]
sage: C3.systematic_generator_matrix() == C4.dual_code().systematic_generator_matrix()
True
This is consistent with Theorem 6.6.9 and Exercise 365 in [HP2003].
sage.coding.code_constructions.QuadraticResidueCodeOddPair(n, F)
Quadratic residue codes of a given odd prime length and base ring either don’t exist at all or occur as 4-tuples - a pair of “odd-like” codes and a pair of “even-like” codes. If \(n > 2\) is prime then (Theorem 6.6.2 in [HP2003]) a QR code exists over \(GF(q)\) iff \(q\) is a quadratic residue mod \(n\).
They are constructed as “odd-like” duadic codes associated to the splitting (Q,N) mod n, where Q is the set of non-zero quadratic residues and N is the set of non-residues.
EXAMPLES:
sage: codes.QuadraticResidueCodeOddPair(17, GF(13))   # known bug (#25896)
([17, 9] Cyclic Code over GF(13), [17, 9] Cyclic Code over GF(13))
sage: codes.QuadraticResidueCodeOddPair(17, GF(2))
([17, 9] Cyclic Code over GF(2), [17, 9] Cyclic Code over GF(2))
sage: codes.QuadraticResidueCodeOddPair(13, GF(9,"z"))   # known bug (#25896)
([13, 7] Cyclic Code over GF(9), [13, 7] Cyclic Code over GF(9))
sage: C1 = codes.QuadraticResidueCodeOddPair(17, GF(2))[1]
sage: C1x = C1.extended_code()
sage: C2 = codes.QuadraticResidueCodeOddPair(17, GF(2))[0]
sage: C2x = C2.extended_code()
sage: C2x.spectrum(); C1x.spectrum()
[1, 0, 0, 0, 0, 0, 102, 0, 153, 0, 153, 0, 102, 0, 0, 0, 0, 0, 1]
[1, 0, 0, 0, 0, 0, 102, 0, 153, 0, 153, 0, 102, 0, 0, 0, 0, 0, 1]
sage: C3 = codes.QuadraticResidueCodeOddPair(7, GF(2))[0]
sage: C3x = C3.extended_code()
sage: C3x.spectrum()
[1, 0, 0, 0, 14, 0, 0, 0, 1]
This is consistent with Theorem 6.6.14 in [HP2003].
sage.coding.code_constructions.ToricCode(P, F)
Let \(P\) denote a list of lattice points in \(\ZZ^d\) and let \(T\) denote the set of all points in \((F^\times)^d\) (ordered in some fixed way). Put \(n=|T|\) and let \(k\) denote the dimension of the vector space of functions \(V = \mathrm{Span}\{x^e \ |\ e \in P\}\). The associated toric code \(C\) is the evaluation code which is the image of the evaluation map \[\mathrm{eval}_T : V \rightarrow F^n,\]
where \(x^e\) is the multi-index notation (\(x=(x_1,...,x_d)\), \(e=(e_1,...,e_d)\), and \(x^e = x_1^{e_1}...x_d^{e_d}\)), where \(eval_T (f(x)) = (f(t_1),...,f(t_n))\), and where \(T=\{t_1,...,t_n\}\). This function returns the toric codes discussed in [Joy2004].
INPUT:
P -- all the integer lattice points in a polytope defining the toric variety.
F -- a finite field.
OUTPUT: Returns a toric code with length \(n = |T|\) and dimension \(k\) over the field F.
EXAMPLES:
sage: C = codes.ToricCode([[0,0],[1,0],[2,0],[0,1],[1,1]],GF(7))
sage: C
[36, 5] linear code over GF(7)
sage: C.minimum_distance()
24
sage: C = codes.ToricCode([[-2,-2],[-1,-2],[-1,-1],[-1,0],[0,-1],[0,0],[0,1],[1,-1],[1,0]],GF(5))
sage: C
[16, 9] linear code over GF(5)
sage: C.minimum_distance()
6
sage: C = codes.ToricCode([[0,0],[1,1],[1,2],[1,3],[1,4],[2,1],[2,2],[2,3],[3,1],[3,2],[4,1]],GF(8,"a"))
sage: C
[49, 11] linear code over GF(8)
This is in fact a [49,11,28] code over GF(8). If you next type C.minimum_distance() and wait overnight (!), you should get 28.
AUTHOR:
David Joyner (07-2006)
sage.coding.code_constructions.WalshCode(m)
Return the binary Walsh code of length \(2^m\).
The matrix of codewords corresponds to a Hadamard matrix. This is a (constant rate) binary linear \([2^m,m,2^{m-1}]\) code.
EXAMPLES:
sage: C = codes.WalshCode(4); C
[16, 4] linear code over GF(2)
sage: C = codes.WalshCode(3); C
[8, 3] linear code over GF(2)
sage: C.spectrum()
[1, 0, 0, 0, 7, 0, 0, 0, 0]
sage: C.minimum_distance()
4
sage: C.minimum_distance(algorithm='gap')  # check d=2^(m-1)
4
sage.coding.code_constructions.from_parity_check_matrix(H)
Return the linear code that has H as a parity check matrix.

If H has dimensions \(h \times n\) then the linear code will have dimension \(n-h\) and length \(n\).
EXAMPLES:
sage: C = codes.HammingCode(GF(2), 3); C
[7, 4] Hamming Code over GF(2)
sage: H = C.parity_check_matrix(); H
[1 0 1 0 1 0 1]
[0 1 1 0 0 1 1]
[0 0 0 1 1 1 1]
sage: C2 = codes.from_parity_check_matrix(H); C2
[7, 4] linear code over GF(2)
sage: C2.systematic_generator_matrix() == C.systematic_generator_matrix()
True
sage.coding.code_constructions.permutation_action(g, v)
Returns permutation of rows g*v. Works on lists, matrices, sequences and vectors (by permuting coordinates). The code requires switching from i to i+1 (and back again) since the SymmetricGroup is, by convention, the symmetric group on the “letters” 1, 2, …, n (not 0, 1, …, n-1).
EXAMPLES:
sage: V = VectorSpace(GF(3),5)
sage: v = V([0,1,2,0,1])
sage: G = SymmetricGroup(5)
sage: g = G([(1,2,3)])
sage: permutation_action(g,v)
(1, 2, 0, 0, 1)
sage: g = G([()])
sage: permutation_action(g,v)
(0, 1, 2, 0, 1)
sage: g = G([(1,2,3,4,5)])
sage: permutation_action(g,v)
(1, 2, 0, 1, 0)
sage: L = Sequence([1,2,3,4,5])
sage: permutation_action(g,L)
[2, 3, 4, 5, 1]
sage: MS = MatrixSpace(GF(3),3,7)
sage: A = MS([[1,0,0,0,1,1,0],[0,1,0,1,0,1,0],[0,0,0,0,0,0,1]])
sage: S5 = SymmetricGroup(5)
sage: g = S5([(1,2,3)])
sage: A
[1 0 0 0 1 1 0]
[0 1 0 1 0 1 0]
[0 0 0 0 0 0 1]
sage: permutation_action(g,A)
[0 1 0 1 0 1 0]
[0 0 0 0 0 0 1]
[1 0 0 0 1 1 0]
It also works on lists and is a “left action”:
sage: v = [0,1,2,0,1]
sage: G = SymmetricGroup(5)
sage: g = G([(1,2,3)])
sage: gv = permutation_action(g,v); gv
[1, 2, 0, 0, 1]
sage: permutation_action(g,v) == g(v)
True
sage: h = G([(3,4)])
sage: gv = permutation_action(g,v)
sage: hgv = permutation_action(h,gv)
sage: hgv == permutation_action(h*g,v)
True
AUTHORS:
David Joyner, licensed under the GPL v2 or greater.
sage.coding.code_constructions.random_linear_code(F, length, dimension)
Generate a random linear code of length length, dimension dimension, over the field F.
This function is Las Vegas probabilistic: always correct, usually fast. Random matrices over F are drawn until one with full rank is hit.
If F is infinite, the distribution of the elements in the random generator matrix will be random according to the distribution of F.random_element().
EXAMPLES:
sage: C = codes.random_linear_code(GF(2), 10, 3)
sage: C
[10, 3] linear code over GF(2)
sage: C.generator_matrix().rank()
3
sage.coding.code_constructions.walsh_matrix(m0)
This is the generator matrix of a Walsh code. The matrix of codewords corresponds to a Hadamard matrix.
EXAMPLES:
sage: walsh_matrix(2)
[0 0 1 1]
[0 1 0 1]
sage: walsh_matrix(3)
[0 0 0 0 1 1 1 1]
[0 0 1 1 0 0 1 1]
[0 1 0 1 0 1 0 1]
sage: C = LinearCode(walsh_matrix(4)); C
[16, 4] linear code over GF(2)
sage: C.spectrum()
[1, 0, 0, 0, 0, 0, 0, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0]
This last code has minimum distance 8.
\(B_0\): principal or initial loan balance
\(r\): interest rate
\(t_\text{term}\): loan term
\(rt_\text{term}\): loan product, an important parameter which fully specifies a loan
\(P = \frac{rB_0}{1- e^{-rt_\text{term}}}\): repayment rate (dollars per time)
fraction of initial payment going to interest
\(I\): total payment to interest
\(V\): sum of all payments
\(V/B_0\): overpay ratio
Because most loans are advertised by specifying \(r\) and \(t_\text{term}\), we will focus this discussion assuming those values are given. This is not always true but it is the most common case (otherwise loans are usually quoted with the repayment rate and loan term). For reasons of mathematical simplicity, we will focus on the overpay ratio \( V/B_0 = (B_0 +I)/B_0\) instead of directly solving for interest, \(I\). We will show that \(V/B_0\) is linear for very large or very small loan products, \(rt_\text{term}\), but non-linear over the range of most loans. We will show that the repayment rate, \(P/B_0\), goes to infinity as the loan product goes to 0, but levels out to a constant for large loan terms.
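Both quantities are one-liners under the continuous model quoted above (a sketch; the function names are ours, not from the article):

```python
import math

def repayment_rate(B0, r, t_term):
    """P = r*B0 / (1 - exp(-r*t_term)), in dollars per unit time."""
    return r * B0 / (1.0 - math.exp(-r * t_term))

def overpay_ratio(r, t_term):
    """V/B0 = P*t_term/B0, a function of the loan product r*t_term only."""
    rt = r * t_term
    return rt / (1.0 - math.exp(-rt))
```

For example, a 4% loan over 30 years has loan product \(rt_\text{term} = 1.2\) and overpay ratio of about 1.72, i.e. roughly 72% of the principal is paid again as interest.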
In the article on the continuous solution to the loan problem, we solved for the overpay ratio (and by extension, the total interest paid).
Now we want to know how this function behaves in the limit of very large and very small loan products, \(rt_\text{term} \rightarrow \infty \) and \(rt_\text{term} \rightarrow\) 0.
The derivative as \(rt_\text{term} \rightarrow 0\) is found by two applications of L'Hôpital's rule after multiplying top and bottom by \( e^{2rt_\text{term}}\) for simplicity.
At extreme loan products, the overpay ratio behaves as follows:
For \(rt_\text{term}\) near 0 (in practice \(rt_\text{term} < 0.4\)): \(\displaystyle \frac{V}{B_0} \approx 1+\frac{rt_\text{term}}{2}\)
For large \(rt_\text{term}\): \(\displaystyle \frac{V}{B_0} \approx rt_\text{term}\)
These are the first-order Taylor series for the function \(\frac{V}{B_0} = f(rt_\text{term}) \), centered on \(rt_\text{term} = 0\) and taken as \(rt_\text{term} \rightarrow \infty\), respectively. Technically, a Taylor series about \(\infty\) is not the right way to describe the latter, but for non-mathematicians it communicates the idea.
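These two regimes are easy to check numerically (a small sketch; the sample points 0.1 and 10 are arbitrary choices of a small and a large loan product):

```python
import math

def overpay(rt):
    """V/B0 as a function of the loan product rt."""
    return rt / (1.0 - math.exp(-rt))

small, large = 0.1, 10.0
err_small = abs(overpay(small) - (1.0 + small / 2.0))  # small-product approximation
err_large = abs(overpay(large) - large)                # large-product approximation
```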
Begin with the equation for the repayment rate from the continuous solution.
In the limit of a 0 interest loan, \(P = \frac{B_0}{t_\text{term}}\), while in the limit of a long loan term \(t_\text{term}\) the repayment rate just covers the interest, \(P = rB_0\). As the loan term approaches 0, the repayment rate goes to \(\infty\) regardless of interest rate.
Graphically, here is what these limits look like.
Below is the ultimate loan graph. It describes all the loans a reasonable person is likely to consider during their lifetime, which is to say loan products from 0 to 2. Normal mortgages are between 2% and 6% for 15 to 30 years (\(rt_\text{term}\) between 0.3 and 1.8). A subprime auto loan may be up to 25% or 30% and last up to 8 years giving a loan product of just over 2 in the worst case scenario. If the loan you are considering is outside this range, it is highly inadvisable under all circumstances. Loans with loan products above 2 require the borrower to pay back over twice as many dollars as they borrowed over the life of the loan. Usually, that situation results in default which is why loans with the highest interest rates, like payday loans, are only given over short loan terms. Loan products higher than 2 are almost always predatory in that they are typically issued to borrowers who are incapable of understanding the terms of the contract and unlikely to ever be able to pay them in full. The business model is built on the idea that enough harassment and legal bullying will eventually yield a decent return for the lender despite knowing full well they are unlikely to ever collect on the full agreed-upon amount.
The graph below also shows where the loan is on the repayment curve for a given interest rate. |
Given \(n\) circles with radius \(r_i\) and value \(v_i\), try to fill a rectangle of size \(W\times H\) with a subset of the circles (such that they don't overlap) in a way that maximizes the total value of the "payload". The data looks like:
The area was computed with the familiar formula: \(a_i = \pi r_i^2\). The mathematical model can look like:\[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align}\max\>&\sum_i v_i \delta_i\\ & (x_i-x_j)^2 + (y_i-y_j)^2 \ge (r_i+r_j)^2 \delta_i \delta_j & \forall i\lt j \\ & r_i \le x_i \le W - r_i\\ & r_i \le y_i \le H - r_i\\ &\delta_i \in \{0,1\}\end{align}}\] Here the binary variable \(\delta_i\) indicates whether we select circle \(i\) for inclusion in the container. The variables \(x_i\) and \(y_i\) determine the location of the selected circles.
This turns out to be a small but difficult problem to solve. The packing problem is a difficult non-convex problem in its own right and we added a knapsack problem on top of it.
In [1] we found the following solutions:
Solver                                    Objective
Local solver (BONMIN)                     58.57
Improved by global solver (LINDOGLOBAL)   60.36
Combinatorial Benders Decomposition       60.6134 (this post)
In the comments of [1] a variant of Benders Decomposition called Combinatorial Benders Decomposition is suggested by Rob Pratt. Let's see if we can reproduce his results. I often find it useful to do such an exercise to fully understand and appreciate a proposed solution.
First, let's establish that the total number of solutions in terms of the binary knapsack variables \(\delta_i\) is \(2^n\). For \(n=20\) we have \[2^{20} = 1,048,576.\] This is small for a MIP. Unfortunately, in our case we also have the non-convex packing sub-problem to worry about.
One constraint that can be added is related to the area: the total area of the selected circles should be less than the total area of the container: \[\sum_i a_i \delta_i \le A\] where \(A=W \times H\). This constraint is not very useful to add to our MIQCP directly, but it will be used in the Benders approach. The number of solutions for \(\delta_i\) that are feasible w.r.t. the area constraint is only 60,145. The number of these solutions with a value larger than 60.3 is just 83.
To summarize:
Solutions                                     Count
Possible configurations for \(\delta_i\)      1,048,576
Of which feasible w.r.t. area constraint      60,145
Of which have a value better than 60.3        83
Now we have a number that is feasible to enumerate. We know the optimal objective value must be attained by one of the 83 configurations with value better than 60.3. So a simple approach is just to enumerate these solutions, starting with the best.
We can give this scheme a more impressive name:
Combinatorial Benders Decomposition[2]. The approach can be depicted as:
We add a cut in each iteration to the master. This cut \[\sum_{i\in S} (1-\delta_i)\ge 1\] where \(S = \{i|\delta_i=1\}\), will cut off exactly one solution. In theory it can cut off more solutions, but because of the order we generate proposals, just one solution is excluded each cycle.
The sub model is used only to check whether the proposal is feasible w.r.t. our rectangle. The first sub problem that is feasible gives us an optimal solution to the whole problem. I implement the sub problem as a multistart NLP problem [3]. A slight rewrite of the sub problem gives a proper optimization problem:\[\begin{align} \max \>& \phi\\& (x_i-x_j)^2 + (y_i-y_j)^2 \ge \phi (r_i+r_j)^2&i\lt j\end{align}\] If the optimal value \(\phi^* \ge 1\), we are feasible w.r.t. the original problem.
Of course, we could just as well generate all possible configurations \(\delta_i\) that are feasible w.r.t. the area constraint (there are 60,145 of them), order them by value (largest value first) and run through them, stopping as soon as we find a feasible solution (i.e. one where the packing NLP is feasible). No fancy name for this algorithm: "sorting" does not sound very sophisticated.
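The enumeration step can be sketched in a few lines of Python (this is illustrative only, not the author's implementation; the helper name and the tiny two-circle instance are made up):

```python
import itertools
import math

def ranked_candidates(radii, values, W, H):
    """All subsets of circles feasible w.r.t. the area constraint
    sum_i a_i * delta_i <= W*H, sorted by total value, best first."""
    areas = [math.pi * r * r for r in radii]
    A = W * H
    cands = []
    for mask in itertools.product((0, 1), repeat=len(radii)):
        if sum(a for a, d in zip(areas, mask) if d) <= A:
            value = sum(v for v, d in zip(values, mask) if d)
            cands.append((value, mask))
    cands.sort(reverse=True)  # best proposal first
    return cands

# tiny example: two unit circles in a 2x2 box (only one fits by area)
best_value, best_mask = ranked_candidates([1.0, 1.0], [3.0, 2.0], 2, 2)[0]
```

Each proposal in this ranked list would then be handed to the packing NLP; the first feasible one is optimal.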
We can find a better solution with value 60.6134 using this scheme. Of course, it is not easy: we need to solve 69 rather nasty non-convex NLPs to reach this solution.
Best Solution Found

References
[1] Knapsack + packing: a difficult MIQCP, http://yetanothermathprogrammingconsultant.blogspot.com/2018/05/knapsack-packing-difficult-miqcp.html
[2] Codato G., Fischetti M. (2004) Combinatorial Benders' Cuts. In: Bienstock D., Nemhauser G. (eds) Integer Programming and Combinatorial Optimization. IPCO 2004. Lecture Notes in Computer Science, vol 3064. Springer, Berlin, Heidelberg.
[3] Multi-start Non-linear Programming: a poor man's global optimizer, http://yetanothermathprogrammingconsultant.blogspot.com/2018/05/multi-start-non-linear-programming-poor.html
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
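In the plane this is easy to see concretely: the reflection across the line through the origin at angle \(\theta\) has matrix \(\begin{pmatrix}\cos 2\theta & \sin 2\theta\\ \sin 2\theta & -\cos 2\theta\end{pmatrix}\), and composing the reflections at angles \(a\) and \(b\) gives the rotation by \(2(a-b)\). A quick numerical sketch:

```python
import math

def reflection(theta):
    """2x2 matrix of the reflection across the line at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def rotation(phi):
    """2x2 rotation matrix by angle phi."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# reflect across angle b, then across angle a: rotation by 2*(a - b)
a, b = 0.7, 0.2
R = matmul(reflection(a), reflection(b))
expected = rotation(2 * (a - b))
```

This is the n = 2 case of the "at most n reflections" bound.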
Why is the evolute of an involute of a curve $\Gamma$ equal to $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means that starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, and backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
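For the $k=1$ case on $\Bbb R^2$, the identity $d\omega(X,Y) = X\omega(Y) - Y\omega(X) - \omega([X,Y])$ can be checked numerically. A sketch with arbitrarily chosen $f$, $g$, $X$, $Y$ (finite differences stand in for the derivatives):

```python
# Check d(f dx + g dy)(X, Y) = X w(Y) - Y w(X) - w([X, Y]) at a point of R^2.

def partial(h, p, i, eps=1e-6):
    """Central finite difference of a scalar function h at p in direction i."""
    qp, qm = list(p), list(p)
    qp[i] += eps
    qm[i] -= eps
    return (h(qp) - h(qm)) / (2 * eps)

f = lambda p: p[0] * p[1] ** 2        # w = f dx + g dy
g = lambda p: p[0] ** 3 + p[1]
X = [lambda p: p[1], lambda p: p[0] ** 2]          # X = y d/dx + x^2 d/dy
Y = [lambda p: p[0] + 1.0, lambda p: p[0] * p[1]]  # Y = (x+1) d/dx + xy d/dy

def omega(V):
    # w applied to a vector field, as a scalar function
    return lambda p: f(p) * V[0](p) + g(p) * V[1](p)

def act(V, h):
    # vector field V acting on a scalar function h
    return lambda p: V[0](p) * partial(h, p, 0) + V[1](p) * partial(h, p, 1)

def bracket(V, W):
    # Lie bracket [V, W], componentwise: [V,W]^i = V(W^i) - W(V^i)
    return [lambda p, i=i: act(V, W[i])(p) - act(W, V[i])(p) for i in range(2)]

p = [0.3, -0.8]
# dw = (dg/dx - df/dy) dx^dy, evaluated on the pair (X, Y)
lhs = (partial(g, p, 0) - partial(f, p, 1)) * (
    X[0](p) * Y[1](p) - X[1](p) * Y[0](p))
rhs = act(X, omega(Y))(p) - act(Y, omega(X))(p) - omega(bracket(X, Y))(p)
```

The two sides agree up to finite-difference error, which is the coordinate-free content of the invariant formula.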
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection-like operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
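One quick way to sanity-check an identity like this is Monte Carlo. Under the shape–scale reading $\Gamma(k{=}2,\ \theta{=}2/\lambda)$ (an assumption; the rate parameterization changes the numbers), $E[\bar X] = 4/\lambda$ and $\operatorname{Var}(\bar X) = 8/(n\lambda^2)$, so $E[\bar X^2/2] = 4/(n\lambda^2) + 8/\lambda^2$, which the simulation below reproduces:

```python
import random

def mc_mean_sq_half(lam, n, reps=100_000, seed=0):
    """Monte Carlo estimate of E[(mean of n Gamma(2, scale=2/lam) draws)^2 / 2]."""
    rng = random.Random(seed)
    scale = 2.0 / lam
    total = 0.0
    for _ in range(reps):
        xbar = sum(rng.gammavariate(2.0, scale) for _ in range(n)) / n
        total += xbar * xbar / 2.0
    return total / reps

lam, n = 1.0, 5
analytic = 4.0 / (n * lam ** 2) + 8.0 / lam ** 2   # = 8.8 for these values
estimate = mc_mean_sq_half(lam, n)
```

Note this does not match the formula in the question, which suggests re-deriving via $E[\bar X^2] = \operatorname{Var}(\bar X) + (E\bar X)^2$ under whichever parameterization is intended.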
Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
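The brute force can at least be delegated to a machine. A sketch (it relies on the fact that every subgroup of $S_4$ is generated by at most two elements, so sweeping over pairs of permutations finds every subgroup order):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations as tuples
    return tuple(p[i] for i in q)

def generated(gens):
    """Subgroup of S4 generated by the given permutations (closure under products)."""
    elems = set(gens) | {tuple(range(4))}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

perms = list(permutations(range(4)))
# orders of all 2-generated subgroups of S4
orders = {len(generated([a, b])) for a in perms for b in perms}
```

The resulting set of orders is exactly the set of divisors of 24, confirming the claim.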
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
Notation. By $\mathbb{Z}_{(2)}$, I mean the localization of $\mathbb{Z}$ at the prime ideal $(2).$ So basically, this is obtained by adjoining a multiplicative inverse for every positive prime number distinct from $2$. Equivalently, we can think of $\mathbb{Z}_{(2)}$ as the set of rational numbers with odd denominator. So: $\mathbb{Z}_{(2)} \subseteq \mathbb{Q}.$ This is a local ring, of course, and judging by how often my commutative algebra lecturer used the phrase "local ring" during those lectures I mostly didn't understand, that's probably important. Background. Although localization is usually thought of as an "advanced" topic, I recently realized that $\mathbb{Z}_{(2)}$ actually arises in high school math. In particular, given $x \in \mathbb{R}_{\neq 0}$ and $q \in \mathbb{Z}_{(2)}$, we can make good sense of the expression $x^q.$ Such expressions make sense even if $x$ is negative. Of course, we can (and often do) replace $q \in \mathbb{Z}_{(2)}$ with $q \in \mathbb{Q}$ or even $q \in \mathbb{R}$, but notice this forces us to assume that $x$ is positive to compensate. There's also a semiring $\mathbb{N}_{(2)}$ obtained by localizing the natural numbers at the prime ideal $(2)$. More concretely: $\mathbb{N}_{(2)} = \{q \in \mathbb{Z}_{(2)} : q \geq 0\}.$
This gives us three exponentiation functions, all denoted $x,q \mapsto x^q$.
\begin{align*} \mathbb{R} \times \mathbb{N}_{(2)} &\rightarrow \mathbb{R} \\ \mathbb{R}_{\neq 0} \times \mathbb{Z}_{(2)} &\rightarrow \mathbb{R}_{\neq 0} \\ \mathbb{R}_{> 0} \times \mathbb{R}_{\color{white}{(2)}} &\rightarrow \mathbb{R}_{> 0} \end{align*}
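The middle function can be sketched computationally (the helper name is made up): for $q = n/d$ in lowest terms with $d$ odd, $x^q$ is well-defined for negative $x$, with the sign determined by the parity of $n$.

```python
from fractions import Fraction

def real_pow(x, q):
    """x**q for nonzero real x and q in Z_(2), i.e. a rational whose
    denominator (in lowest terms) is odd, so odd roots of negatives exist."""
    q = Fraction(q)  # Fraction normalizes to lowest terms
    if q.denominator % 2 == 0:
        raise ValueError("q must have odd denominator for a negative base")
    magnitude = abs(x) ** (q.numerator / q.denominator)
    sign = -1.0 if (x < 0 and q.numerator % 2 == 1) else 1.0
    return sign * magnitude
```

For example, $(-8)^{1/3} = -2$ and $(-8)^{2/3} = 4$, while $(-8)^{1/2}$ is rejected.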
Question. I'd like to learn a bit more about the number systems $\mathbb{Z}_{(2)}$ and $\mathbb{N}_{(2)}$ from the point of view of these exponentiation functions. For instance, does the localness of $\mathbb{Z}_{(2)}$ tell us anything important about the function $\mathbb{R}_{\neq 0} \times \mathbb{Z}_{(2)} \rightarrow \mathbb{R}_{\neq 0}$ denoted $x,q \mapsto x^q$? If so, I'd like to know about this. Also, I'm interested in "elementary" things that I can teach to high school students about $\mathbb{Z}_{(2)}$ and $\mathbb{N}_{(2)},$ that do not require a lot of abstract mathematics.
Links or references preferred, but direct explanations and further thoughts are also welcome. |
$0 \rightarrow \mathbb{Z} \stackrel{f}{\rightarrow} \mathbb{Z}^3 \stackrel{h}{\rightarrow} H_1(X) \rightarrow 0$
Where $f(1) = (1,0,2)$
Then $H_1(X) \cong \frac{\mathbb{Z}^3}{im(f)} \cong \frac{\mathbb{Z}^3}{\langle 1,0,2 \rangle}$
Is this all correct so far? The image of $1$ under $f$ is $(1,0,2)$, but I'm just confused. The correct answer is supposed to be $H_1(X) \cong \mathbb{Z} \oplus \mathbb{Z}_2$. I'm trying to think about the best way to make sense of this. I often have problems on steps like this when analyzing a short exact sequence. Maybe I should think of $im(f) = \mathbb{Z} \oplus 2\mathbb{Z}$ and then $\frac{\mathbb{Z}^3}{\mathbb{Z} \oplus 2\mathbb{Z}} \cong \frac{\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}}{\mathbb{Z} \oplus 2\mathbb{Z}} \cong \mathbb{Z} \oplus \mathbb{Z}_2$
Any insight in general is appreciated!! Thanks! |
[I already posted this question on the math forum of stackexchange and I was advised that I should post this question here]
In Evans and Jovanovic (1989) you will find a model for entrepreneurs with credit constraints. The part that is important for my question follows. Here it is the production function and the income equation that one should maximize:
$$\begin{aligned} y &= \theta k^{\alpha}\\ I &= y + r(z - k)\\ \max(\theta k^{\alpha} + r(z - k)) &\text{such that } k \leq \lambda z \end{aligned}$$
Where $\lambda\ge1$ is a measure of the credit constraint. For the rest of the notation: $y$ are the earnings of the individual from production; $I$ is income; $k$ is capital; $\theta$ is a skill measurement; $r$ is the interest rate; $z$ is the initial wealth of the individual or household, which he, or the household, can lend out; $\alpha$ is a technology parameter. If you are interested in more details of the model, here is a link: Evans and Jovanovic (1989).
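For reference, the maximization has a simple closed form: the unconstrained first-order condition $\alpha\theta k^{\alpha-1} = r$ gives $k = (\alpha\theta/r)^{1/(1-\alpha)}$, capped by the constraint $k \le \lambda z$. A sketch with made-up parameter values:

```python
def optimal_capital(theta, alpha, r, z, lam):
    """Capital maximizing theta*k**alpha + r*(z - k) subject to k <= lam*z.
    The FOC alpha*theta*k**(alpha-1) = r is capped at the credit constraint."""
    k_unconstrained = (alpha * theta / r) ** (1.0 / (1.0 - alpha))
    return min(k_unconstrained, lam * z)

# constrained entrepreneur: wealth 10 and lam = 1 cap k at 10 (below 25)
k_con = optimal_capital(theta=1.0, alpha=0.5, r=0.1, z=10.0, lam=1.0)
# wealthy entrepreneur: the constraint is slack, so the FOC gives k = 25
k_free = optimal_capital(theta=1.0, alpha=0.5, r=0.1, z=1000.0, lam=1.0)
```

The kink at $k = \lambda z$ is exactly where the credit constraint binds.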
So here comes my doubt. This model is about putting credit constraints on a hypothetical entrepreneur. The entrepreneur borrows $k$ in this situation but only pays the interest $rk$; that is, in each period under analysis (hypothetically, since I did not give a time frame for the equation), he pays only the interest on what he borrowed, i.e. only interest is deducted from his income, never any principal. The individual seems to rent capital rather than borrow it in the more familiar way. So I'll ask you: does this interpretation make sense? If so, is there a larger reason for economic modeling making this kind of implicit assumption?
I'm making a compound fraction for a review worksheet. I currently have this:
\begin{multicols}{2}
\begin{enumerate}
\item Simplify
\[\dfrac{\frac{3}{5} + \frac{5}{5x}}{1 - \frac{1}{10x}}\]
The problem is that the fractions 3/5 are extremely tiny. If I replace \dfrac with \frac, I see that they are exactly the same. I want the main fraction to be large, and the smaller fractions to be normal. Am I missing a package? Is there a better way to get what I need?
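One way to keep the inner fractions at display size is to use \dfrac (from amsmath) for them as well; a sketch:

```latex
\[
  \dfrac{\dfrac{3}{5} + \dfrac{5}{5x}}{1 - \dfrac{1}{10x}}
\]
```

\dfrac forces display style regardless of context, which is why the outer \dfrac alone looks no different inside the list: it is the inner fractions that drop to script size.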
Thanks in advance |
The theorem is called the noiseless coding theorem, and it is often proven in clunky ways in information theory books. The point of the theorem is to calculate the minimum number of bits per variable you need to encode the values of $N$ independent, identically distributed random variables taking values in $1,\dots,K$, where the probability of value $i$ is $p_i$. The minimum number of bits you need on average per variable in the large $N$ limit is defined to be the information in the random variable. It is the minimum number of bits of information per variable you need to record in a computer so as to remember the values of the $N$ copies with perfect fidelity.
If the variables are uniformly distributed, the answer is obvious: there are $K^N$ possibilities for $N$ throws, and $2^{CN}$ possibilities for $CN$ bits, so $C=\log_2(K)$ for large $N$. With any fewer than $CN$ bits, you will not be able to encode the values of the random variables, because they are all equally likely; with any more than this, you will have extra room. This is the information in a uniform random variable.
For a general distribution, you can get the answer with a little bit of the law of large numbers. If you have many copies of the random variable, the probability of observing the particular sequence of values $n_1, n_2, \ldots, n_N$ is

$$ P(n_1, n_2, \ldots , n_N) = \prod_{j=1}^N p_{n_j}$$
This probability is dominated for large $N$ by those configurations where the number of values of type $i$ is equal to $Np_i$, since this is the mean number of type $i$'s. So the value of $P$ on any typical configuration is:

$$ P(n_1,\ldots,n_N) = \prod_{i=1}^k p_i^{Np_i} = e^{N\sum p_i \log(p_i)}$$
So for those possibilities where the probability is not extremely small, the probability is more or less constant and equal to the above value. The total number M(N) of these not-exceedingly unlikely possibilities is what is required to make the sum of probabilities equal to 1.
$$M(N) \propto e^{ - N \sum p_i \log(p_i)}$$
To encode which of the $M(N)$ possibilities is realized in each $N$ picks, you therefore need a number of bits $B(N)$ which is enough to encode all these possibilities:
$$2^{B(N)} \propto e^{ - N \sum p_i \log(p_i)}$$
which means that
$${B(N)\over N} = - \sum p_i \log_2(p_i)$$
And all subleading constants are washed out by the large N limit. This is the information, and the asymptotic equality above is the Shannon noiseless coding theorem. To make it rigorous, all you need are some careful bounds on the large number estimates.
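The asymptotic statement can be illustrated numerically (a sketch, with an arbitrarily chosen distribution): by the law of large numbers, $-\frac{1}{N}\log_2 P(\text{sequence})$ concentrates around the entropy $-\sum_i p_i \log_2 p_i$.

```python
import math
import random

def entropy_bits(p):
    """Shannon entropy -sum p_i log2 p_i, in bits per symbol."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

random.seed(1)
p = [0.5, 0.25, 0.125, 0.125]
N = 200_000
# draw one long i.i.d. sequence and measure -(1/N) log2 of its probability
xs = random.choices(range(len(p)), weights=p, k=N)
bits_per_symbol = -sum(math.log2(p[x]) for x in xs) / N
```

For this distribution the entropy is exactly 1.75 bits, and the empirical value lands within sampling error of it.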
Replica coincidences
There is another interpretation of the Shannon entropy in terms of coincidences which is interesting. Consider the probability that you pick two values of the random variable, and you get the same value twice:
$$P_2 = \sum p_i^2$$
This is clearly an estimate of how many different values there are to select from. If you ask what is the probability that you get the same value $k$ times in $k$ throws, it is
$$P_k = \sum p_i p_i^{k-1}$$
If you expand the coincidence probability at $k=1+\epsilon$ throws, the first-order term in $\epsilon$ is $\sum p_i \log(p_i)$, i.e. minus the Shannon entropy. This is like the replica trick, so I think it is good to keep in mind.
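A quick numerical check of this expansion (with an arbitrary distribution): the derivative of $P_k = \sum_i p_i^k$ at $k=1$ equals $\sum_i p_i \ln p_i$, i.e. minus the natural-log Shannon entropy.

```python
import math

p = [0.5, 0.25, 0.125, 0.125]

def P(k):
    # probability that k throws all coincide: sum_i p_i^k
    return sum(pi ** k for pi in p)

eps = 1e-6
# central finite difference for dP_k/dk at k = 1
slope_at_1 = (P(1 + eps) - P(1 - eps)) / (2 * eps)
H_nats = -sum(pi * math.log(pi) for pi in p)  # Shannon entropy in nats
```

The slope comes out equal to minus the entropy, up to finite-difference error.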
Entropy from information
To recover statistical mechanics from the Shannon information, you are given:
- the values of the macroscopic conserved quantities (or their thermodynamic conjugates): energy, momentum, angular momentum, charge, and particle number
- the macroscopic constraints (or their thermodynamic conjugates): volume, positions of macroscopic objects, etc.
Then the statistical distribution of the microscopic configuration is the maximum entropy distribution (as little information known to you as possible) on phase space satisfying the constraint that the quantities match the macroscopic quantities. |
Does a bond paying a floating coupon (LIBOR) still have value at par when we use OIS as the discount factor? It seems the proposition will be true only when the identity $$B(t,T_2)(1+(T_2-T_1)F(t,T_1,T_2))=B(t,T_1)$$ still holds. Here $B(t,T)$ is the value of a zero coupon bond, and $F(t,T_1,T_2)$ is the forward LIBOR.
John Hull's book Options, Futures and Other Derivatives, 9th edition, page 205, shows the way to calculate the forward LIBOR implied in the swap rate under OIS discounting. But that is the case where we know $B(t,T_1)$ but don't know $B(t,T_2)$.
If we know both $B(t,T_1)$ and $B(t,T_2).$ Can we calculate the forward LIBOR still as above identity?
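If both bond prices are known, the identity quoted above (which is exactly what is in question under OIS discounting) rearranges to give the forward rate directly. A sketch with hypothetical discount factors:

```python
def forward_libor(B1, B2, T1, T2):
    """Forward rate implied by B(t,T2) * (1 + (T2-T1)*F) = B(t,T1).
    This is the classical single-curve identity; under OIS discounting
    its validity is the very thing being questioned."""
    return (B1 / B2 - 1.0) / (T2 - T1)

F = forward_libor(B1=0.99, B2=0.97, T1=1.0, T2=1.5)
```

Whether $F$ computed this way equals the true forward LIBOR under OIS discounting is what the answer below addresses.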
Denote:
$D_{ois}(t)$: the OIS discount factor
$B(t,T)$: bond price
$E_t[\cdot]$: conditional expectation at time $t$ under the OIS risk-neutral measure, which makes $D_{ois}(t)B(t,T)$ a martingale for all $T$.
Use $N(t) = D_{ois}(t)B(t,T_1)$ as a numeraire to change the measure into the OIS $T_1$-forward measure $E^{T_1}_t[\cdot]$ (we simply use the expectation to represent the new measure).
Then $$ \dfrac{B(t,T)}{B(t,T_1)} = \dfrac{D_{ois}(t)B(t,T)}{D_{ois}(t)B(t,T_1)}$$ should be a martingale under $E^{T_1}_t[\cdot]$. Then, using the definition of the forward LIBOR $F(t, T, T_{1})$, we can prove that $$\dfrac{1}{D_{ois}(T)}E_{T}\left[D_{ois}(T_1)\Big((T_1-T) \cdot F(T, T, T_{1})+1\Big)\right] = 1.$$
In thermodynamics, the chemical potential of a species is the energy that can be absorbed or released due to a change of the particle number of the given species, e.g. in a chemical reaction or phase transition. The chemical potential of a species in a mixture is defined as the rate of change of free energy of a thermodynamic system with respect to the change in the number of atoms or molecules of the species that are added to the system. Thus, it is the partial derivative of the free energy with respect to the amount of the species, all other species' concentrations in the mixture remaining constant. The molar chemical potential is also known as partial molar free energy [1]. When both temperature and pressure are held constant, chemical potential is the partial molar Gibbs free energy. At chemical equilibrium or in phase equilibrium the total sum of the products of chemical potentials and stoichiometric coefficients is zero, as the free energy is at a minimum. [2] [3] [4]
Particles tend to move from higher chemical potential to lower chemical potential. In this way, chemical potential is a generalization of "potentials" in physics such as gravitational potential. When a ball rolls down a hill, it is moving from a higher gravitational potential (higher internal energy thus higher potential for work) to a lower gravitational potential (lower internal energy). In the same way, as molecules move, react, dissolve, melt, etc., they will always tend naturally to go from a higher chemical potential to a lower one, changing the particle number, which is conjugate variable to chemical potential.
A simple example is a system of dilute molecules diffusing in a homogeneous environment. In this system, the molecules tend to move from areas with high concentration to low concentration, until eventually the concentration is the same everywhere.
The microscopic explanation for this is based in kinetic theory and the random motion of molecules. However, it is simpler to describe the process in terms of chemical potentials: For a given temperature, a molecule has a higher chemical potential in a higher-concentration area, and a lower chemical potential in a low concentration area. Movement of molecules from higher chemical potential to lower chemical potential is accompanied by a release of free energy. Therefore, it is a spontaneous process.
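The concentration example can be made quantitative with the ideal dilute-solute formula $\mu = \mu^\circ + k_B T \ln(c/c^\circ)$ (a standard result, not stated in this article; the concentrations below are made up):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mu_ideal(c, c_ref, mu_ref, T):
    """Chemical potential of an ideal dilute solute:
    mu = mu_ref + k_B * T * ln(c / c_ref)."""
    return mu_ref + K_B * T * math.log(c / c_ref)

T = 298.15  # room temperature, K
mu_high = mu_ideal(2.0, 1.0, 0.0, T)  # high-concentration region
mu_low = mu_ideal(0.5, 1.0, 0.0, T)   # low-concentration region
# molecules diffuse from the higher-mu region to the lower-mu region
```

The logarithmic dependence on concentration is what drives diffusion down the concentration gradient until the chemical potential is uniform.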
Another example, not based on concentration but on phase, is a glass of liquid water with ice cubes in it. Above 0 °C, an H₂O molecule that is in the liquid phase (liquid water) has a lower chemical potential than a water molecule that is in the solid phase (ice). When some of the ice melts, H₂O molecules convert from solid to liquid where their chemical potential is lower, so the ice cubes shrink. Below 0 °C, the molecules in the ice phase have the lower chemical potential, so the ice cubes grow. At the temperature of the melting point, 0 °C, the chemical potentials in water and ice are the same; the ice cubes neither grow nor shrink, and the system is in equilibrium.
HA ⇌ H⁺ + A⁻

Vinegar contains acetic acid. When acid molecules dissociate, the concentration of the undissociated acid molecules (HA) decreases and the concentrations of the product ions (H⁺ and A⁻) increase. Thus the chemical potential of HA decreases and the sum of the chemical potentials of H⁺ and A⁻ increases. When the sums of the chemical potentials of reactants and products are equal the system is at equilibrium and there is no tendency for the reaction to proceed in either the forward or backward direction. This explains why vinegar is acidic, because acetic acid dissociates to some extent, releasing hydrogen ions into the solution.
Chemical potentials are important in many aspects of equilibrium chemistry, including melting, boiling, evaporation, solubility, osmosis, partition coefficient, liquid-liquid extraction and chromatography. In each case there is a characteristic constant which is a function of the chemical potentials of the species at equilibrium.
In electrochemistry, ions do not always tend to go from higher to lower chemical potential, but they do always go from higher to lower electrochemical potential. The electrochemical potential completely characterizes all of the influences on an ion's motion, while the chemical potential includes everything except the electric force. (See below for more on this terminology.)
The chemical potential $\mu_i$ of species $i$ is defined from the fundamental equation of thermodynamics:

$$ dU = T\,dS - P\,dV + \sum_{i=1}^{n} \mu_i\,dN_i $$

where $dU$ is the infinitesimal change of internal energy $U$, $dS$ the infinitesimal change of entropy $S$, and $dV$ the infinitesimal change of volume $V$ for a thermodynamic system in thermal equilibrium, and $dN_i$ is the infinitesimal change of particle number $N_i$ of species $i$.
From the above equation the chemical potential is given by

$$ \mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,V,N_{j\ne i}} $$

The chemical potential can also be expressed via the Gibbs free energy $G = U + PV - TS$. Its differential is
$$ dG = dU + P\,dV + V\,dP - T\,dS - S\,dT $$

Substituting the expression for $dU$ above gives

$$ dG = -S\,dT + V\,dP + \sum_{i=1}^{n} \mu_i\,dN_i $$
As a consequence, another expression for $\mu_i$ results:

$$ \mu_i = \left(\frac{\partial G}{\partial N_i}\right)_{T,P,N_{j\ne i}} $$
and the change in Gibbs free energy of a system that is held at constant temperature and pressure is simply

$$ dG = \sum_{i=1}^{n} \mu_i\,dN_i $$
In thermodynamic equilibrium, when the system concerned is at constant temperature and pressure but can exchange particles with its external environment, the Gibbs free energy is at its minimum for the system, that is

$$ dG = \mu_1\,dN_1 + \mu_2\,dN_2 + \dots = 0 $$
Use of this equality provides the means to establish the equilibrium constant for a chemical reaction.
Using the enthalpy $H = U + PV$ and the Helmholtz free energy $F = U - TS$, one likewise obtains

$$ \mu_i = \left(\frac{\partial H}{\partial N_i}\right)_{S,P,N_{j\ne i}} $$

and

$$ \mu_i = \left(\frac{\partial F}{\partial N_i}\right)_{T,V,N_{j\ne i}} $$
These different forms for the chemical potential are all equivalent, meaning that they have the same physical content, and may be useful in different physical situations.
The Gibbs–Duhem equation is useful because it relates individual chemical potentials. For example, in a binary mixture, at constant temperature and pressure, the chemical potentials of the two participants are related by
$$ d\mu_B = -\frac{n_A}{n_B}\,d\mu_A $$

where $n_A$ and $n_B$ are the amounts (in moles) of components $A$ and $B$.
Every instance of phase or chemical equilibrium is characterized by a constant. For instance, the melting of ice is characterized by a temperature, known as the melting point, at which solid and liquid phases are in equilibrium with each other. Chemical potentials can be used to explain the slopes of lines on a phase diagram by using the Clapeyron equation, which in turn can be derived from the Gibbs–Duhem equation. [7] They are used to explain colligative properties such as melting-point depression by the application of pressure. [8] Both Raoult's law and Henry's law can be derived in a simple manner using chemical potentials. [9]
Chemical potential was first described by the American engineer, chemist and mathematical physicist Josiah Willard Gibbs. He defined it as follows:
Gibbs later noted also that for the purposes of this definition, any chemical element or combination of elements in given proportions may be considered a substance, whether capable or not of existing by itself as a homogeneous body. This freedom to choose the boundary of the system allows chemical potential to be applied to a huge range of systems. The term can be used in thermodynamics and physics for any system undergoing change. Chemical potential is also referred to as partial molar Gibbs energy (see also partial molar property). Chemical potential is measured in units of energy/particle or, equivalently, energy/mole.
In his 1873 paper A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, Gibbs introduced the preliminary outline of the principles of his new equation, able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e. bodies being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume–entropy–internal energy graph, Gibbs was able to determine three states of equilibrium, i.e. "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so as to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words, to summarize his results in 1873, Gibbs states:
The abstract definition of chemical potential given above—total change in free energy per extra mole of substance—is more specifically called
total chemical potential. [10] [11] If two locations have different total chemical potentials for a species, some of the difference may be due to potentials associated with "external" force fields (electric potential energy differences, gravitational potential energy differences, etc.), while the rest would be due to "internal" factors (density, temperature, etc.). [10] Therefore, the total chemical potential can be split into internal chemical potential and external chemical potential: $$\mu_{\text{tot}} = \mu_{\text{int}} + \mu_{\text{ext}}, \qquad \mu_{\text{ext}} = qV + mgh + \dots$$ where $\mu_{\text{ext}}$ is the external potential energy per particle ($q$ charge, $V$ electric potential, $m$ mass, $g$ gravitational acceleration, $h$ height). Correspondingly, the internal energy splits as $U = U_{\text{int}} + U_{\text{ext}}$ with $U_{\text{ext}} = N(qV + mgh + \dots)$.
The phrase "chemical potential" sometimes means "total chemical potential", but that is not universal.
[10] In some fields, in particular electrochemistry, semiconductor physics, and solid-state physics, the term "chemical potential" means internal chemical potential, while the term electrochemical potential is used to mean total chemical potential. [12] [13] [14] [15] [16]
See main article: Fermi level. Electrons in solids have a chemical potential, defined the same way as the chemical potential of a chemical species: The change in free energy when electrons are added or removed from the system. In the case of electrons, the chemical potential is usually expressed in energy per particle rather than energy per mole, and the energy per particle is conventionally given in units of electronvolt (eV).
Chemical potential plays an especially important role in solid-state physics and is closely related to the concepts of work function, Fermi energy, and Fermi level. For example, n-type silicon has a higher internal chemical potential of electrons than p-type silicon. In a p–n junction diode at equilibrium the chemical potential (
internal chemical potential) varies from the p-type to the n-type side, while the total chemical potential (electrochemical potential, or, Fermi level) is constant throughout the diode.
As described above, when describing chemical potential, one has to say "relative to what". In the case of electrons in semiconductors, internal chemical potential is often specified relative to some convenient point in the band structure, e.g., the bottom of the conduction band. It may also be specified "relative to vacuum", to yield a quantity known as the work function; however, the work function varies from surface to surface even on a completely homogeneous material. Total chemical potential, on the other hand, is usually specified relative to electrical ground.
In atomic physics, the chemical potential of the electrons in an atom is sometimes
[17] said to be the negative of the atom's electronegativity. Likewise, the process of chemical potential equalization is sometimes referred to as the process of electronegativity equalization. This connection comes from the Mulliken electronegativity scale. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is seen that the Mulliken chemical potential is a finite difference approximation of the derivative of the electronic energy with respect to the number of electrons, i.e.,
$$\mu_{\text{Mulliken}} = -\chi_{\text{Mulliken}} = -\frac{IP + EA}{2} = \left[\frac{\delta E[N]}{\delta N}\right]_{N=N_0}$$
In recent years, thermal physics has applied the definition of chemical potential to systems in particle physics and its associated processes. For example, in a quark–gluon plasma or other QCD matter, at every point in space there is a chemical potential for photons, a chemical potential for electrons, a chemical potential for baryon number, electric charge, and so forth.
In the case of photons, photons are bosons and can very easily and rapidly appear or disappear. Therefore, the chemical potential of photons is always and everywhere zero. The reason is, if the chemical potential somewhere was higher than zero, photons would spontaneously disappear from that area until the chemical potential went back to zero; likewise if the chemical potential somewhere was less than zero, photons would spontaneously appear until the chemical potential went back to zero. Since this process occurs extremely rapidly (at least, it occurs rapidly in the presence of dense charged matter), it is safe to assume that the photon chemical potential is never different from zero.
Electric charge is different, because it is conserved, i.e. it can be neither created nor destroyed. It can, however, diffuse. The "chemical potential of electric charge" controls this diffusion: Electric charge, like anything else, will tend to diffuse from areas of higher chemical potential to areas of lower chemical potential.
[18] Other conserved quantities like baryon number are the same. In fact, each conserved quantity is associated with a chemical potential and a corresponding tendency to diffuse to equalize it out. [19]
In the case of electrons, the behavior depends on temperature and context. At low temperatures, with no positrons present, electrons cannot be created or destroyed. Therefore, there is an electron chemical potential that might vary in space, causing diffusion. At very high temperatures, however, electrons and positrons can spontaneously appear out of the vacuum (pair production), so the chemical potential of electrons by themselves becomes a less useful quantity than the chemical potential of the conserved quantities like (electrons minus positrons).
Generally the chemical potential is given as a sum of an ideal contribution and an excess contribution:
$$\mu_i = \mu_i^{\text{ideal}} + \mu_i^{\text{excess}}.$$
In an ideal solution, the chemical potential of species i ($\mu_i$) depends on temperature, pressure and composition. $\mu_{i0}(T,P)$ is defined as the chemical potential of pure species i. Given this definition, the chemical potential of species i in an ideal solution is
$$\mu_i^{\text{ideal}} \approx \mu_{i0}(T,P) + RT\ln(x_i),$$
where R is the gas constant and $x_i$ is the mole fraction of species i. This equation assumes that $\mu_i$ depends only on the mole fraction $x_i$. For a non-ideal solution, the deviation is absorbed into an activity coefficient $\gamma_i$:
$$\mu_i = \mu_{i0}(T,P) + RT\ln(x_i) + RT\ln(\gamma_i) = \mu_{i0}(T,P) + RT\ln(x_i\gamma_i).$$
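As a numerical illustration (my own sketch; the reference potential, temperature, and mole fraction below are made-up values), the ideal and non-ideal expressions can be evaluated directly:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mu_ideal(mu_i0, T, x_i):
    """Ideal-solution chemical potential: mu_i0(T,P) + RT ln(x_i)."""
    return mu_i0 + R * T * math.log(x_i)

def mu_real(mu_i0, T, x_i, gamma_i):
    """Non-ideal solution: the activity coefficient gamma_i corrects the mole fraction."""
    return mu_i0 + R * T * math.log(x_i * gamma_i)

# Hypothetical values: reference mu_i0 = 0 J/mol, T = 298 K, x_i = 0.5
T, x = 298.0, 0.5
print(mu_ideal(0.0, T, x))      # negative: mixing lowers the chemical potential
print(mu_real(0.0, T, x, 1.0))  # gamma_i = 1 recovers the ideal case
```

Setting $\gamma_i = 1$ collapses the non-ideal expression back to the ideal one, which is a quick consistency check on both formulas.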
The figures to the right give a rough picture of the ideal and non-ideal situation.
Charles Kittel and Herbert Kroemer, Thermal Physics (2nd ed.), W. H. Freeman, 1980, p. 357.
I have been given the following homework problem... struggling. Any help would be appreciated.
With the following functions state;
a) State whether the function is monotone.
b) Decide if it is injective, surjective or bijective on the given domain.
c) Find the supremum and infimum (if they exist); in each case state whether or not the function attains its bounds.
$$1.\ \ f(x)=\frac1{1+x^4}:\mathbb{R}\to(0,1]$$ $$2.\ \ f(x)=\tan x:(-\frac\pi2,\frac\pi2)\to\mathbb{R}$$ I understand;
Injective means one-to-one (does monotone check a similar thing?). The supremum is the least upper bound of the set (given the constraints, $\mathbb{R}$/$\mathbb{N}$/$\mathbb{Z}$/etc.). The infimum is the greatest lower bound of the set (given the constraints, $\mathbb{R}$/$\mathbb{N}$/$\mathbb{Z}$/etc.).
How do you apply these when you are given functions, rather than sets? |
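As a numerical sanity check (a probe, not a proof, and not part of the original question), one can sample the first function and observe its bounds directly:

```python
def f(x):
    """f(x) = 1 / (1 + x^4), mapping R onto (0, 1]."""
    return 1.0 / (1.0 + x**4)

# Sample the interval [-10, 10] at spacing 0.01
xs = [i / 100.0 for i in range(-1000, 1001)]
vals = [f(x) for x in xs]

assert max(vals) == 1.0 and f(0) == 1.0  # supremum 1 is attained, at x = 0
assert min(vals) > 0.0                   # infimum 0 is never attained
assert f(100.0) < 1e-7                   # values approach 0 for large |x|
```

The probe suggests the answer the analysis confirms: the supremum 1 is attained (so it is a maximum), while the infimum 0 is only approached as $|x|\to\infty$.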
Electronic Journal of Probability (Electron. J. Probab.), Volume 24 (2019), paper no. 23, 29 pp.
The speed of critically biased random walk in a one-dimensional percolation model
Abstract
We consider biased random walks in a one-dimensional percolation model. This model goes back to Axelson-Fisk and Häggström and exhibits the same phase transition as biased random walk on the infinite cluster of supercritical Bernoulli bond percolation on $\mathbb{Z} ^d$, namely, for some critical value $\lambda _{\mathrm{c} }>0$ of the bias, it holds that the asymptotic linear speed $\overline{\mathrm {v}} $ of the walk is strictly positive if the bias $\lambda $ is strictly smaller than $\lambda _{\mathrm{c} }$, whereas $\overline{\mathrm {v}} =0$ if $\lambda \geq \lambda _{\mathrm{c} }$.
We show that at the critical bias $\lambda = \lambda _{\mathrm{c} }$, the displacement of the random walk from the origin is of order $n/\log n$. This is in accordance with simulation results by Dhar and Stauffer for biased random walk on the infinite cluster of supercritical bond percolation on $\mathbb{Z} ^d$.
Our result is based on fine estimates for the tails of suitable regeneration times. As a by-product of these estimates we also obtain the order of fluctuations of the walk in the sub-ballistic and in the ballistic, nondiffusive phase.
Article information
Source: Electron. J. Probab., Volume 24 (2019), paper no. 23, 29 pp.
Dates: Received: 10 August 2018; Accepted: 9 February 2019; First available in Project Euclid: 23 March 2019
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1553306439
Digital Object Identifier: doi:10.1214/19-EJP277
Citation
Lübbers, Jan-Erik; Meiners, Matthias. The speed of critically biased random walk in a one-dimensional percolation model. Electron. J. Probab. 24 (2019), paper no. 23, 29 pp. doi:10.1214/19-EJP277. https://projecteuclid.org/euclid.ejp/1553306439 |
Jacob, KT and Matthews, T and Hajra, JP (1990)
Low oxygen potential boundary for the stability of $YBa_2Cu_3O_{7-\delta}$. In: Materials Science and Engineering B, 7 (1-2). pp. 25-29.
Abstract
On lowering the oxygen potential, the tetragonal phase of $YBa_2Cu_3O_{7-\delta}$ was found to decompose into a mixture of $Y_2BaCuO_5$, $BaCuO_2$ and $BaCu_2O_2$ in the temperature range 773–1173 K. The 123 compound was contained in a closed crucible of yttria-stabilized zirconia in the temperature range 773–1073 K. Oxygen was removed in small increments by coulometric titration through the solid electrolyte crucible at constant temperature. The oxygen potential was calculated from the open circuit e.m.f. of the solid state cell after successive titrations. Pure oxygen at a pressure of $1.01 \times 10^5$ Pa was used as the reference electrode. The decomposition of the 123 compound manifested itself as a plateau in oxygen potential. The decomposition products were identified by X-ray diffraction. At temperatures above 1073 K there was some evidence of reaction between the 123 compound, the solid electrolyte crucible and platinum. For measurements above 1073 K, the 123 compound was contained in a magnesia crucible placed in a closed outer silica tube. The oxygen potential in the gas phase above the 123 compound was controlled and measured by a solid state cell based on yttria-stabilized zirconia, which served both as a pump and as a sensor. The lower oxygen potential limit for the stability of the 123 compound is given by $\Delta\mu_{O_2} = -181\,450 + 105.37T\ (\pm 400)\ \mathrm{J\ mol^{-1}}$. The oxygen non-stoichiometry parameter $\delta$ for the 123 compound has a value of $0.98\ (\pm 0.01)$ at dissociation.
Item Type: Journal Article Additional Information: Copyright of this article belongs to Elsevier Sequoia. Keywords: Low oxygen; potential boundary Department/Centre: Division of Mechanical Sciences > Materials Engineering (formerly Metallurgy) Depositing User: CA Sharada Date Deposited: 14 Dec 2007 Last Modified: 19 Sep 2010 04:41 URI: http://eprints.iisc.ac.in/id/eprint/12526
Briefly,
Question: Is it "good enough" to use least prime factor in choosing a maximal set of coprime integers in an interval?
The post title comes from a 1993 paper of Erdos and Sarkozy. They define some functions and show their asymptotic growth without giving explicit bounds. To restate their Theorem 7: given a natural number $k \gt 1$ and a positive real number $\epsilon$, there is a real number $\alpha_k$ depending only on $k$, and numbers $n_0$ and $C$ (both depending on $k$ and on $\epsilon$), so that for any $n \gt n_0$, any integer $m$, and any subset $A$ of (integers of) the interval $[m+1,m+n]$ with $|A| \gt n(\alpha_k + \epsilon)$, there are at least $Cn^k$ many ways to pick $k$ members of $A$ which are mutually coprime.
While the order of growth $n^k$ is nailed down, the numbers $C$ and $n_0$ and $\alpha_k$ are not made explicit (although it becomes clear how $\alpha_k$ should tend to 1 with $k$). Part of my goal is to make some other results like these with explicit values. In particular, in another post (link forthcoming ( here it is : A generalization of Landau's function)) I made the claim that the generalized Landau function $g(n,k)$ is like $(n/k)^k$ for $n$ greater than $k^5$. I expect something stronger than this holds, but I do not have a proof of this claim, and while Theorem 7 points in the direction of this claim, the theorem is not explicit enough to confirm or refute this claim.
I now introduce my setup to rephrase the above question more precisely.
If we pick the subset $A$ of all even numbers out of the interval $I=[m+1,m+n]$, it is clear that we can't get even two coprime numbers from $A$, much less $k$ mutually coprime numbers. So for $k \gt 2$, $\alpha_k$ should be larger than 1/2. If $B_{k-1}$ is that subset of $I$ with all members having some prime factor strictly less than the $k$th prime, it is also clear that we can't pick $k$ mutually coprime numbers from this subset. (We can vary this by choosing the set of numbers in $I$ having a prime factor coming from some chosen set of $k-1$ many primes.) So $\alpha_k$ has to grow like $1 - \prod (1 - 1/p)$ with the product being over the first $k$ primes $p$. One can look at the complement of $B_{k-1}$ in the interval and do some analysis and show that, for $n$ sufficiently large, there are sets of $k$ coprime integers in the complement. Because of the estimates used, the 1993 paper can't say much if $n \lt 2^k$.
Indeed, we should be able to do better with $n$ smaller than $2^k$. Let's collect least prime factors of numbers in an interval. Using $LPF$ for least prime factor, define $L(m,n)=\{ LPF(m+i) : 1 \leq i \leq n \}$; I abuse notation and have $L$ stand for the number of elements of $L(m,n)$. Indeed, any set of $k$ mutually coprime numbers have among them $k$ distinct $LPF$ values, so if we can pick $k$ coprimes from the interval, then $k \leq L$. So, given $k$, if we pick $n$ so that for every $m$ we look at $L(m,n)$ and find that $L \geq k$ for all these $m$, we have a nice $n$ for $k$ many coprimes in an interval of length $n$, and we can work on an explicit formula and get our bound, right?
Not so fast. It is possible that for a given $m,n$, we cannot pick $L$ coprimes from the interval. I have not constructed an example, but I imagine that we can pick $m$ and $n$ so that all even numbers in $I$ have an odd prime factor which is at most $n$. (This follows from work of Westzynthius and earlier work on prime gaps.) If $L(m,n)$ contains all primes less than $n$, then any maximal set of coprimes must either avoid an even number or avoid all numbers which have $LPF=p$, an odd prime that divides the even representative. So we may not have $L$ many coprimes.
So first question: Is there an interval $[m+1,m+n]$ of integers with $L(m,n)$ of size $L$ but with no subset of $L$ mutually coprime integers in this interval? If so, what is $n$?
I believe the answer is yes, but I have not worked it out. Note that one estimate of the size of $n$ involves summing prime reciprocals for odd primes, and that the answer thus is no if $n \lt 23$. The answer is probably still no for $n \lt 50$, but I am unsure of this.
However, a yes answer backed up with detail about $m$ and $n$ is not a deal breaker for me. I am willing to make a construction where I start with $n$ large enough (so that $L \geq 2k$, for example) to get what I need. In fact, what I really need is (with the minimum taken over all integers $m$) $K(n) = \min_m \{$ the size of the largest subset of mutually coprime integers from $I \}$. Let us define in parallel $L(n) = \min_m |L(m,n)|$.
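To experiment with these definitions, here is a small brute-force sketch of my own devising (not from the literature); $K$ for a single interval is computed by exhaustive search, so it is only feasible for tiny $n$:

```python
from math import gcd
from itertools import combinations

def lpf(n):
    """Least prime factor of an integer n > 1 (trial division)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def L_set(m, n):
    """Set of least prime factors occurring in the interval [m+1, m+n]."""
    return {lpf(m + i) for i in range(1, n + 1)}

def K_interval(m, n):
    """Size of the largest pairwise-coprime subset of [m+1, m+n].

    Exhaustive search over subsets, largest first: exponential in n.
    """
    nums = list(range(m + 1, m + n + 1))
    for k in range(len(nums), 0, -1):
        for combo in combinations(nums, k):
            if all(gcd(a, b) == 1 for a, b in combinations(combo, 2)):
                return k
    return 0

# Example: interval [2, 7] has LPFs {2, 3, 5, 7}, and {2, 3, 5, 7} is
# itself pairwise coprime, so here |L(m,n)| and the coprime count agree.
print(len(L_set(1, 6)), K_interval(1, 6))
```

Minimizing these quantities over $m$ (to get $L(n)$ and $K(n)$) would require bounding the relevant residues of $m$ modulo a primorial, which is where the search gets hard.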
Next question : How do the growth rates of $L(n)$ and $K(n)$ compare with $n$? In particular, is $2*K(n) \gt L(n)$ for every $n$?
One can prove (as is done in the above post asking about Landau's function; there $C(k)$ is a function of Jacobsthal) that $L(n)=K(n)=$ a function related to $C(K(n))$ for $n$ at most 22. I tried there to build a set using $L(m,n)$; a problem arises in that an element poorly chosen later may not be coprime to an earlier chosen element. Another problem is that it is not clear what the set $L(m,n)$ looks like in general. Westzynthius gives explicitly that there are $m$ where $\max L(m,n)$ is less than any fraction of $n$ for $n$ sufficiently large (so pick a fraction $\epsilon$; then there are large $n$ such that the max is less than $\epsilon n$). In particular, there are intervals of consecutive numbers with every number having a significant smooth factor, i.e. every number is a multiple of some number with several distinct prime factors less than $n$.
However, if $L(n)$ does not grow much faster than $K(n)$, then if we want $k$ coprimes, we pick $n$ with $L(n)$ not much larger than (say) $2*k$, do our construction using $LPF$, and get the $k$ coprimes, and do this in a small enough interval. Then we can use the results to give asymptotics to the generalized Landau function.
So finally, Question: Is there any literature on (or approaching) $L(n)$ and its relation to $K(n)$?
This feels like a decent and original academic research topic to me. If it is, and a student wants to work on it (or an advisor wants to suggest it to a student), I would like to know about it and share some further ideas. Please let me know if this happens.
Gerhard "To Start 2019 Off Right" Paseman, 2019.01.01. |
I am dealing with the vector field:
$$v = \dfrac{\hat{\mathscr r} }{{\mathscr r}^2}$$
And I am studying its divergence. If we compute it we get:
$$\nabla\cdot\left(\dfrac{\hat{\mathscr r} }{{\mathscr r}^2}\right) = 0, \qquad {\mathscr r} \ne 0.$$
I understand we are dealing with a delta function, which explains why we get $\nabla\cdot v = 0$ everywhere but in the origin, where it blows up.
But the fact that $\nabla\cdot v = 0$ does not make sense to me looking at the graph of the vector function:
where we can see how the vector field spreads out. The only reasoning I see is that at the origin the vector field spreads out so much that, once we look away from it, the field cannot spread any more, so we get zero divergence. But I insist: this cannot be seen in the plot of the function.
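As a numerical check (my own sketch, not part of the question), a central-difference estimate of the divergence of $\hat{r}/r^2$ does come out to zero at any point away from the origin, even though the arrows visibly spread out:

```python
def v(x, y, z):
    """The field r_hat / r^2 = (x, y, z) / r^3 in Cartesian components."""
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

def divergence(x, y, z, h=1e-5):
    """Central-difference estimate of div v at (x, y, z), away from the origin."""
    dvx = (v(x + h, y, z)[0] - v(x - h, y, z)[0]) / (2 * h)
    dvy = (v(x, y + h, z)[1] - v(x, y - h, z)[1]) / (2 * h)
    dvz = (v(x, y, z + h)[2] - v(x, y, z - h)[2]) / (2 * h)
    return dvx + dvy + dvz

print(divergence(1.0, 0.5, -0.3))  # numerically ~0 away from the origin
```

The intuition: the outward spreading of the arrows is exactly cancelled by the $1/r^2$ decay of their magnitude, so the net flux out of any small box not containing the origin vanishes.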
The time series is governed by the equation $S(T)=S(0)e^{(\mu-\frac{\delta^2}{2})T+\delta(w(T)-w(0))}$, in which $w(t)$ is a standard Brownian motion. Now given the data $\{S(t)\}_{t=0}^{t=T}$, how to estimate $\delta$ and $\mu$?
By considering the log of the time series, i.e.
$$ \{\log{S(t)}\}_{t=0}^{t=T}$$
we have
$$ \log{S(t)} = \log{S(0)} + (\mu - \delta^2/2)t + \delta w(t) $$
( $w(0) = 0$ ). Taking first differences of this series, $\log{S_{t_i}} - \log{S_{t_{i-1}} }$, gives a new series:
$$ (\mu - \delta^2/2)( t_i - t_{i-1} ) + \delta( w(t_i) - w(t_{i-1} ) )$$
This new series is independent and Normally distributed, with mean $(\mu - \delta^2/2)( t_i - t_{i-1} )$ and standard deviation $\delta\sqrt{ t_i - t_{i-1} }$. One can use MLE to find the "best" estimators of the two unknowns. |
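This estimation recipe can be sketched in a few lines (my own illustration with simulated data and made-up parameter values; it assumes equally spaced observations):

```python
import math
import random

def estimate_gbm(S, dt):
    """MLE of (mu, delta) from equally spaced GBM observations with spacing dt.

    The log-return differences are i.i.d. Normal with mean (mu - delta^2/2)*dt
    and variance delta^2*dt, so the sample mean and variance of the differences
    give the estimates directly.
    """
    r = [math.log(S[i + 1] / S[i]) for i in range(len(S) - 1)]
    n = len(r)
    mean = sum(r) / n
    var = sum((x - mean) ** 2 for x in r) / n  # MLE uses 1/n, not 1/(n-1)
    delta_hat = math.sqrt(var / dt)
    mu_hat = mean / dt + delta_hat ** 2 / 2
    return mu_hat, delta_hat

# Simulate a path with known parameters and recover them (illustrative values).
random.seed(0)
mu, delta, dt, S0 = 0.1, 0.2, 1 / 252, 100.0
S = [S0]
for _ in range(100_000):
    z = random.gauss(0, 1)
    S.append(S[-1] * math.exp((mu - delta**2 / 2) * dt + delta * math.sqrt(dt) * z))

mu_hat, delta_hat = estimate_gbm(S, dt)
print(mu_hat, delta_hat)  # close to (0.1, 0.2)
```

Note the usual caveat: $\delta$ is pinned down very accurately by the sample variance, while the drift $\mu$ converges much more slowly (its error shrinks with total elapsed time, not with the number of observations).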
GloVe word vectors
GloVe stands for Global Vectors for word representation. Previously we were picking up context (c) and target(t) in the window randomly. GloVe makes this selection explicit.
$X_{ij}$ = Number of times $i$ appears in the context of $j$.
Here $i$ and $j$ play the role of context (c) and target (t). The definition of context here is whether or not two words appear in the same window of +/- 10. Context can be defined in a different way too; for example, the context might be the word coming just before another word.
Also, depending on the selection, sometimes $X_{ij}$ = $X_{ji}$ (symmetrical relationship)
$X_{ij}$ is a count which denotes how often a word i and j appear together. The GloVe model optimizes the following:
Objective: How related are words i and j
Define $f(X_{ij})$ such that $f(X_{ij})=0$ if $X_{ij} = 0$. Also, $f(X_{ij})$ should be such that it doesn't give too much weight to frequent words (the, a, of, this, etc.) and doesn't give too little weight to infrequent words (durian, maui).
Also, in the equation below, when $X_{ij} = 0$, $\log X_{ij}$ is undefined. But $f(X_{ij})=0$ in this case, so the convention $0 \log 0 = 0$ applies.
minimize $\sum_{i=1}^{10000} \sum_{j=1}^{10000} f(X_{ij})(\theta_i^T e_j + b_i + b_j^{'} - \log X_{ij})^2$
$b_i$ and $b_j$ are corresponding bias units for i and j (target and context)
The equation learns the parameters $\theta_i$ and $e_j$ using gradient descent. $\theta_i$ and $e_j$ are symmetric due to the definition of $X_{ij}$. The final embedding vector for a word $e_w$ is calculated as $e_w = \frac{e_w + \theta_w}{2}$.
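These notes leave $f$ unspecified; as a sketch, the weighting function proposed in the original GloVe paper, $f(x) = (x/x_{\max})^{0.75}$ capped at 1 with $x_{\max} = 100$, satisfies all the requirements above:

```python
def glove_weight(x, x_max=100.0, alpha=0.75):
    """Weighting f(X_ij): zero for zero counts, capped at 1 for frequent pairs.

    The (x / x_max)^alpha form with x_max = 100, alpha = 0.75 is the choice
    from the original GloVe paper; the course leaves f unspecified.
    """
    if x == 0:
        return 0.0  # makes the f(X_ij) * (...)^2 term vanish when log X_ij is undefined
    return min((x / x_max) ** alpha, 1.0)

assert glove_weight(0) == 0.0
assert glove_weight(100) == 1.0                    # frequent pairs are capped
assert 0 < glove_weight(1) < glove_weight(10) < 1  # rare pairs get down-weighted, not zeroed
```

The cap keeps very common co-occurrences (e.g. with "the") from dominating the loss, while the sublinear power still gives rare pairs a nonzero vote.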
Features learnt by the embedding matrix
While the example shows the features in the embedding matrix as ‘age’ or ‘gender’ in reality these features might be learnt by the algorithm and may not be interpretable.
Source material from Andrew Ng's awesome course on Coursera. The material in the video has been written in text form so that anyone who wishes to revise a certain topic can do so without going through the entire video lectures.
John's answer is a good one; I just wanted to add some equations and additional thoughts. Let me start here:
Heating is really only significant when you get a shock wave i.e. above the speed of sound.
The question asks specifically about a $200^{\circ} C$ increase in temperature in the atmosphere. This qualifies as "significant" heating, and the hypothesis that this would only happen at supersonic speeds is valid, which I'll show here.
When something moves through a fluid, heating happens to both the object and the air. Trivially, the total net heating is $F d$, the drag force times the distance traveled. The problem is that we don't know what the breakdown between the object and the air is. This dichotomy is rather odd, because consider that in steady-state movement
all of the heating goes to the air. The object will heat up, and if it continues to move at the same speed (falling at terminal velocity for instance), it is cooled by the air the exact same amount it is heated by the air.
When considering the exact heating mechanisms, there is heating from boundary layer friction on the surface of the object, and there are form losses from eddies that are ultimately dissipated by viscous heating. After thinking about it, I must admit I think John's suggestion is the most compelling - that the compression of the air itself is what matters most. Since a $1 m$ ball in air is specified, this should be a fairly high Reynolds number, and the skin friction shouldn't matter quite as much as the heating due to stagnation on the leading edge.
Now, the exact amount of pressure increase at the stagnation point may not be exactly $1/2 \rho v^2$, but it's close to that. Detailed calculations for drag should give an accurate number, but I don't have those, so I'll use that expression. We have air at $1 atm$; given the prior assumption that the size of the sphere doesn't matter, I'll say that ambient air is at $293 K$ and the density is $1.3 kg/m^3$. We'll have to look at this as an adiabatic compression of a diatomic gas, giving:
$$\frac{T_2}{T_1} = \left( \frac{P_2}{P_1} \right)^{\frac{\gamma-1}{\gamma}}$$
Diatomic gases have:
$$\gamma=\frac{7}{5}$$
Employ the stagnation pressure expression to get:
$$\frac{P_2}{P_1} = \frac{P_1+\frac{1}{2} \rho v^2}{P_1} = 1+\frac{\frac{1}{2} \rho v^2}{P_1} $$
Put these together to get:
$$\frac{T_2}{T_1} = \left( 1+\frac{\frac{1}{2} \rho v^2}{P_1} \right)^{2/7}$$
Now, our requirement is that $T_2/T_1 \approx (293+200)/293 \approx 1.7$. I get this in the above expression by plugging in a velocity of about $2000 mph$. At that point, however, there might be more complicated physics due to the supersonic flow. To elaborate, the compression process at supersonic speeds might dissipate more energy than an ideal adiabatic compression. I'm not an expert in supersonic flow, and you can say the calculations here assumed subsonic flow; the result illustrates that this is not a reasonable assumption.
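Inverting the last expression for the velocity that produces a 200 °C rise is a quick back-of-the-envelope check of the ~2000 mph figure (same numbers as above):

```python
# Solve (1 + 0.5*rho*v^2/P1)^((gamma-1)/gamma) = T2/T1 for v.
rho, P1, T1 = 1.3, 101325.0, 293.0  # kg/m^3, Pa, K
T2 = T1 + 200.0
gamma = 7.0 / 5.0

ratio = (T2 / T1) ** (gamma / (gamma - 1.0))  # invert the adiabatic relation
v = (2.0 * P1 * (ratio - 1.0) / rho) ** 0.5   # m/s
v_mph = v / 0.44704

print(round(v), "m/s, about", round(v_mph), "mph")
```

This lands around 900 m/s (roughly 2000 mph), well above the speed of sound in air (~343 m/s), consistent with the conclusion that the subsonic assumption breaks down before a 200 °C rise is reached.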
addition:
The Concorde could fly at about Mach 2. The ambient temperature is much lower than room temperature, but the heatup compared to ambient was about $182 K$ for the skin and $153 K$ for the nose. This is interesting because it points to boundary layer skin friction playing a bigger role than I suspected, but that is also wrapped up in the physics of the sonic wavefront which I haven't particularly studied.
You have to ask yourself, what pressure is the nose at and what pressure is the skin at. The flow separates (going under or above the craft) at some point, and that should be the highest pressure, but maybe it's not the highest temperature, and I can't really explain why. We've pretty much reached the limit of the back-of-the-envelope calculations.
(note: I messed up the $\gamma$ value at first and then changed it after a comment. This caused the value to go from 1000 mph to 2000 mph. This is actually much more consistent with the Concorde example since it gets <200 K heating at Mach 2.) |
Assigning per-sample genotypes (HaplotypeCaller)
This document describes the procedure used by HaplotypeCaller to assign genotypes to individual samples based on the allele likelihoods calculated in the previous step. For more context information on how this fits into the overall HaplotypeCaller method, please see the more general HaplotypeCaller documentation. See also the documentation on the QUAL score as well as the one on PL and GQ.
This procedure is NOT applied by Mutect2 for somatic short variant discovery. See this article for a direct comparison between HaplotypeCaller and Mutect2.
Contents

1. Overview
2. Preliminary assumptions / limitations
3. Calculating genotype likelihoods using Bayes' Theorem
4. Selecting a genotype and emitting the call record

1. Overview
The previous step produced a table of per-read allele likelihoods for each candidate variant site under consideration. Now, all that remains to do is to evaluate those likelihoods in aggregate to determine what is the most likely genotype of the sample at each site. This is done by applying Bayes' theorem to calculate the likelihoods of each possible genotype, and selecting the most likely. This produces a genotype call as well as the calculation of various metrics that will be annotated in the output VCF if a variant call is emitted.
Note that this describes the
regular mode of HaplotypeCaller, which does not emit an estimate of reference confidence. For details on how the reference confidence model works and is applied in
GVCF modes (
-ERC GVCF and
-ERC BP_RESOLUTION) please see the reference confidence model documentation.
2. Preliminary assumptions / limitations

Quality
Keep in mind that we are trying to infer the genotype of each sample given the observed sequence data, so the degree of confidence we can have in a genotype depends on both the quality and the quantity of the available data. By definition, low coverage and low quality will both lead to lower confidence calls. The GATK only uses reads that satisfy certain mapping quality thresholds, and only uses “good” bases that satisfy certain base quality thresholds (see documentation for default values).
Ploidy
Both the HaplotypeCaller and GenotypeGVCFs assume that the organism of study is diploid by default, but the desired ploidy can be set using the
-ploidy argument. The ploidy is taken into account in the mathematical development of the Bayesian calculation using a generalized form of the genotyping algorithm that can handle ploidies other than 2. Note that using ploidy for pooled experiments is subject to some practical limitations due to the number of possible combinations resulting from the interaction between ploidy and the number of alternate alleles that are considered. There are some arguments that aim to mitigate those limitations but they are not fully documented yet.
Paired end reads
Reads that are mates in the same pair are not handled together in the reassembly, but if they overlap, there is some special handling to ensure they are not counted as independent observations.
Single-sample vs multi-sample
We apply different genotyping models when genotyping a single sample as opposed to multiple samples together (as done by HaplotypeCaller on multiple inputs or GenotypeGVCFs on multiple GVCFs). The multi-sample case is not currently documented for the public but is an extension of previous work by Heng Li and others.
3. Calculating genotype likelihoods using Bayes' Theorem
We use the approach described in Li 2011 to calculate the posterior probabilities of non-reference alleles (Methods 2.3.5 and 2.3.6) extended to handle multi-allelic variation.
The basic formula we use for all types of variation under consideration (SNPs, insertions and deletions) is:
$$ P(G|D) = \frac{ P(G) P(D|G) }{ \sum_{i} P(G_i) P(D|G_i) } $$
If that is meaningless to you, please don't freak out -- we're going to break it down and go through all the components one by one. First of all, the term on the left:
$$ P(G|D) $$
is the quantity we are trying to calculate for each possible genotype: the conditional probability of the genotype
G given the observed data D.
Now let's break down the term on the right:
$$ \frac{ P(G) P(D|G) }{ \sum_{i} P(G_i) P(D|G_i) } $$
We can ignore the denominator (bottom of the fraction) because it ends up being the same for all the genotypes, and the point of calculating this likelihood is to determine the most likely genotype. The important part is the numerator (top of the fraction):
$$ P(G) P(D|G) $$
which is composed of two things: the prior probability of the genotype and the conditional probability of the data given the genotype.
The first one is the easiest to understand. The prior probability of the genotype
G:
$$ P(G) $$
represents how probable we expect this genotype to be, based on previous observations, studies of the population, and so on. By default, the GATK tools use a flat prior (always the same value) but you can input your own set of priors if you have information about the frequency of certain genotypes in the population you're studying.
The second one is a little trickier to understand if you're not familiar with Bayesian statistics. It is called the conditional probability of the data given the genotype, but what does that mean? Assuming that the genotype
G is the true genotype,
$$ P(D|G) $$
is the probability of observing the sequence data that we have in hand. That is, how likely would we be to pull out a read with a particular sequence from an individual that has this particular genotype? We don't have that number yet, so this requires a little more calculation, using the following formula:
$$ P(D|G) = \prod_{j} \left( \frac{P(D_j | H_1)}{2} + \frac{P(D_j | H_2)}{2} \right) $$
You'll notice that this is where the diploid assumption comes into play, since here we decomposed the genotype
G into:
$$ G = H_1H_2 $$
which allows for exactly two possible haplotypes. In future versions we'll have a generalized form of this that will allow for any number of haplotypes.
Now, back to our calculation, what's left to figure out is this:
$$ P(D_j|H_n) $$
which as it turns out is the conditional probability of the data given a particular haplotype (or specifically, a particular allele), aggregated over all supporting reads. Conveniently, that is exactly what we calculated in Step 3 of the HaplotypeCaller process, when we used the PairHMM to produce the likelihoods of each read against each haplotype, and then marginalized them to find the likelihoods of each read for each allele under consideration. So all we have to do at this point is plug the values from that table into the equation above, and we can work our way back up to obtain:
$$ P(G|D) $$
for the genotype
G.
4. Selecting a genotype and emitting the call record
We go through the process of calculating a likelihood for each possible genotype based on the alleles that were observed at the site, considering every possible combination of alleles. For example, if we see an A and a T at a site, the possible genotypes are AA, AT and TT, and we end up with 3 corresponding probabilities. We pick the largest one, which corresponds to the most likely genotype, and assign that to the sample.
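By way of illustration (this is a toy sketch, not GATK's actual implementation, and the per-read likelihood numbers below are invented), the product formula for $P(D|G)$ and the selection of the most likely genotype can be written as:

```python
# Sketch: compute diploid genotype likelihoods from per-read, per-allele
# likelihoods P(D_j | allele), then pick the most likely genotype.
from itertools import combinations_with_replacement

def genotype_likelihood(read_likelihoods, allele1, allele2):
    """P(D|G) for G = allele1/allele2: product over reads of the
    average of the two per-allele likelihoods."""
    lik = 1.0
    for per_read in read_likelihoods:
        lik *= per_read[allele1] / 2 + per_read[allele2] / 2
    return lik

def call_genotype(read_likelihoods, alleles):
    """Enumerate all diploid genotypes over `alleles` and return the one
    with the largest likelihood (a flat prior is assumed here)."""
    genotypes = list(combinations_with_replacement(alleles, 2))
    scores = {g: genotype_likelihood(read_likelihoods, *g) for g in genotypes}
    return max(scores, key=scores.get), scores

# Five reads at a site with alleles A and T; each entry is P(D_j | allele).
reads = [
    {"A": 0.9, "T": 0.05},
    {"A": 0.8, "T": 0.1},
    {"A": 0.1, "T": 0.9},
    {"A": 0.05, "T": 0.85},
    {"A": 0.9, "T": 0.1},
]
best, scores = call_genotype(reads, ["A", "T"])
print(best)   # a roughly even mix of A and T reads favours the het call A/T
```

Since the reads support both alleles roughly equally, the heterozygous genotype A/T gets the largest likelihood, exactly as in the A-and-T example above.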
Note that depending on the variant calling options specified in the command-line, we may only emit records for actual variant sites (where at least one sample has a genotype other than homozygous-reference) or we may also emit records for reference sites. The latter is discussed in the reference confidence model documentation.
Assuming that we have a non-ref genotype, all that remains is to calculate the various site-level and genotype-level metrics that will be emitted as annotations in the variant record, including QUAL as well as PL and GQ. For more information on how the other variant context metrics are calculated, please see the corresponding variant annotations documentation. |
A point charge \(Q\) is at the centre of a sphere of radius \(r\). Calculate the \(D\)-flux through the sphere. Easy. The magnitude of \(D\) at a distance \(r\) is \(Q/(4\pi r^2)\) and the surface area of the sphere is \(4\pi r^2\). Therefore the flux is just \(Q\). Notice that this is independent of \(r\); if you double \(r\), the area is four times as great, but \(D\) is only a quarter of what it was, so the total flux remains the same. You will probably agree that if the charge is surrounded by a shape such as shown in Figure I.8, which is made up of portions of spheres of different radii, the \(D\)-flux through the surface is still just \(Q\). And you can distort the surface as much as you like, or you may consider any surface to be made up of an infinite number of infinitesimal spherical caps, and you can put the charge anywhere you like inside the surface, or indeed you can put as many charges inside as you like – the total normal component of the flux is then just the total charge enclosed. This is Gauss’s theorem, which is a consequence of the inverse square nature of Coulomb’s law. \(\text{FIGURE I.8}\)
Definition: Gauss’s theorem
The total normal component of the \(D\)-flux through any closed surface is equal to the charge enclosed by that surface.
Examples
A long rod carries a charge of \(\lambda\) per unit length. Construct around it a cylindrical surface of radius \(r\) and length \(l\). The charge enclosed is \(l\lambda\), and the field is directed radially outwards, passing only through the curved surface of the cylinder. The \(D\)-flux through the cylinder is \(l\lambda\) and the area of the curved surface is 2\(\pi rl\), so \(D = l\lambda /(2\pi rl)\) and hence \(E=\lambda / (2\pi\epsilon r)\).
\(\text{FIGURE I.9}\)
A flat plate carries a charge of \(\sigma\) per unit area. Construct around it a cylindrical surface of cross-sectional area \(A\). The charge enclosed by the cylinder is \(A\sigma\), so this is the \(D\)-flux through the cylinder. It all goes through the two ends of the cylinder, which have a total area 2\(A\), and therefore \(D\) = \(\sigma\)/2 and \(E = \sigma\)/(2\(\epsilon\)).
\(\text{FIGURE I.10}\)
A hollow spherical shell of radius \(a\) carries a charge \(Q\). Construct two gaussian spherical surfaces, one of radius less than \(a\) and the other of radius \(r > a\). The smaller of these two surfaces has no charge inside it; therefore the flux through it is zero, and so \(E\) is zero. The charge enclosed by the larger sphere is \(Q\) and its area is \(4\pi r^2\). Therefore \(D=Q/(4\pi r^2)\text{ and }E=Q/(4\pi \epsilon r^2 )\). (It is worth going to Chapter 5 of Celestial Mechanics, subsection 5.4.8, to go through the calculus derivation, so that you can appreciate Gauss’s theorem all the more.)
A point charge \(Q\) is in the middle of a cylinder of radius \(a\) and length \(2l\). Calculate the flux through the cylinder.
An infinite rod is charged with \(\lambda\) coulombs per unit length. It passes centrally through a spherical surface of radius \(a\). Calculate the flux through the spherical surface.
These problems are done by calculus in section 5.6 of Celestial Mechanics, and furnish good examples of how to do surface integrals, and I recommend that you work through them. However, it is obvious from Gauss’s theorem that the answers are just \(Q\) and \(2a\lambda\) respectively.
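If you want a numerical illustration of the theorem (my own sketch, not part of the text), you can integrate the normal component of \(D\) over a unit sphere with the point charge placed off-centre inside, and check that the flux is still just \(Q\):

```python
# Midpoint-rule surface integral of D . n over the unit sphere, with the
# point charge at an arbitrary interior position; Gauss's theorem predicts
# the answer is Q regardless of where the charge sits.
import numpy as np

def flux_through_unit_sphere(charge_pos, Q=1.0, n=400):
    theta = (np.arange(n) + 0.5) * np.pi / n        # polar angle, midpoints
    phi = (np.arange(2 * n) + 0.5) * np.pi / n      # azimuth, midpoints
    T, P = np.meshgrid(theta, phi, indexing="ij")
    # surface points of the unit sphere; these are also the outward normals
    pts = np.stack([np.sin(T) * np.cos(P),
                    np.sin(T) * np.sin(P),
                    np.cos(T)], axis=-1)
    r = pts - np.asarray(charge_pos, dtype=float)   # vector charge -> surface
    rmag = np.linalg.norm(r, axis=-1)
    D_dot_n = Q * np.sum(r * pts, axis=-1) / (4 * np.pi * rmag ** 3)
    return np.sum(D_dot_n * np.sin(T)) * (np.pi / n) ** 2

print(flux_through_unit_sphere([0.3, 0.0, 0.0]))    # close to Q = 1
print(flux_through_unit_sphere([0.0, 0.0, 0.0]))    # also close to 1
```

Moving the charge around inside the sphere changes \(D\) at every surface point, but the quadrature keeps returning a value very close to \(Q\).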
A point charge \(Q\) is in the middle of a cube of side \(2a\). The flux through the cube is, by Gauss’s theorem, \(Q\), and the flux through one face is \(Q\)/6. I hope you enjoyed doing this by calculus in section 1.8. |
Below I show a neat application of perfect hashing, which is one of my favorite (cluster of) algorithms. Amazingly, we use it to obtain a purely information-theoretic (rather than algorithmic) statement.
Suppose we have a finite universe $U$ of size $n$ and a $k$-element subset of it $S \subseteq U$ with $k \ll n$. How many bits do we need to encode it? The obvious answer is $\log_2 \binom{n}{k} = \Theta(k \cdot \log(n / k))$.
Can we, however, improve this bound if we allow some approximation?
Even if $n = 2k$, it is not difficult to show the lower bound of $k \cdot \log_2(1 / \delta)$ bits if we allow ourselves to be wrong, when answering queries “does $x$ belong to $S$?”, with probability at most $\delta$ (hint: $\varepsilon$-nets). Can we match this lower bound?
One approach that does not quite work is to hash each element of $S$ to an $l$-bit string using a sufficiently good hash function $h \colon U \to \{0, 1\}^l$, and, when checking if $x$ lies in $S$, compute $h(x)$ and check if this value is among the hashes of $S$. To see why it does not work, let us analyze it: if $x \notin S$, then the probability that $h(x)$ coincides with at least one hash of an element of $S$ is around $k \cdot 2^{-l}$. To make the latter less than $\delta$, we need to take $l = \log_2(k / \delta)$ yielding the overall bound of $k \cdot \log_2(k / \delta)$ falling short of the desired size.
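To make this suboptimal scheme concrete (a toy sketch of my own; the choice of hash and parameters is arbitrary), here it is in code, with $l = \log_2(k/\delta)$ bits per element:

```python
# Store an l-bit fingerprint of each element of S. Membership queries have
# no false negatives; a query outside S collides with some stored
# fingerprint with probability about k * 2**-l, hence the union bound.
import hashlib

def fingerprint(x, l, salt=b"demo"):
    """An l-bit hash of x (blake2b here; any good hash would do)."""
    h = hashlib.blake2b(repr(x).encode(), salt=salt).digest()
    return int.from_bytes(h[:8], "big") % (1 << l)

def encode(S, l):
    return {fingerprint(x, l) for x in S}

def maybe_member(sketch, x, l):
    return fingerprint(x, l) in sketch

k, l = 100, 20                  # l = log2(k / delta) for delta ~ 1e-4
S = range(k)
sk = encode(S, l)
print(all(maybe_member(sk, x, l) for x in S))   # True: no false negatives
```

The storage is $k \cdot \log_2(k/\delta)$ bits, which is exactly the gap the Cuckoo-hashing construction below closes.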
To get the optimal size, we need to avoid using the union bound in the above argument. In order to accomplish this, let us use perfect hashing on top of the above hashing scheme! It is convenient to use a particular approach to perfect hashing called Cuckoo hashing. In short, there is a way to generate two simple hash functions $h_1, h_2 \colon U \to [m]$ for $m = O(k)$ and place the elements of our set $S$ into $m$ bins without collisions so that for every $x \in S$, the element $x$ is placed either in bin $h_1(x)$ or in bin $h_2(x)$. Now, to encode our set $S$, we build a Cuckoo hash table for it, and then for each of the $m$ bins, we either store one bit indicating that it’s empty, or store an $l$-bit hash of an element that is placed into it. Now we can set $l = \log_2(2 / \delta)$, since we compare the hash of a query to merely two hashes, instead of $k$. This gives the overall size $m + k \cdot \log_2 (2 / \delta) = k \cdot (\log_2(1 / \delta) + O(1))$, which is optimal up to a low-order term. Of course, the encoding should include $h_1$, $h_2$ and $h$, but it turns out they can be taken to be sufficiently simple so that their size does not really matter.
Two remarks are in order. First, in this context people usually bring up Bloom filters. However, they require $1.44$ times more space, and, arguably, they are more mysterious (if technically simple). Second, one may naturally wonder why anyone would care about distinguishing bounds like $k \cdot \log_2 (1 / \delta)$ and $k \cdot \log_2(k / \delta)$. In my opinion, there are two answers to this. First, it is just a cool application of perfect hashing (an obligatory link to one of my favorite comic strips). Second, compressing sets is actually important in practice and constant factors do matter, for instance when we are aiming to transfer the set over the network.
Update Kasper Green Larsen observed that we can combine the naive and not-quite-working solutions to obtain the optimal bound. Namely, by hashing everything to $\log_2(k / \delta)$ bits, we effectively reduce the universe size to $n’ = k / \delta$. Then, the naive encoding takes $\log_2 \binom{n’}{k} \approx H(\delta) \cdot n’ = H(\delta) \cdot k / \delta \approx k \cdot \log_2 (1 / \delta)$ bits. |
I am studying QFT by myself. On page 197 of the book by Lewis H. Ryder (2nd edition), the author takes the functional derivative of equation 6.69:
$$\frac {\delta\widehat{Z}[\phi]}{\delta\phi}$$
where $$\widehat {Z}[\phi]=\frac{{e}^{iS}}{\int{{e}^{iS}}{\cal D}\phi}\tag{6.69}$$
and
$$S=-\int{\left[\frac {1}{2} \phi(\Box+{m }^{ 2 })\phi -{\cal L }_{ int } \right] { d }^{ 4 }x }.\tag{6.71} $$
The result in Eq. 6.72 is:
$$\frac { \delta }{ \delta \phi } \left\{ \exp\left[ -i\int { \left[ \frac { 1 }{ 2 } \phi (\Box +{ m }^{ 2 })\phi-{\cal L}_{ int } \right] } { d }^{ 4 }x \right] \right\} { \left[ \int { \exp\left[ iS \right] } {\cal D}\phi \right] }^{ -1 }\\= \left( \Box +{ m }^{ 2 } \right) \phi \widehat { Z } [\phi ]-\frac { \partial { \cal L }_{ int } }{ \partial \phi }\widehat { Z } [\phi ].\tag{6.72} $$
I don't understand how this calculation proceeds. I know how to take the functional derivative of a functional,
but I do not know how to apply it to a functional integral like $\widehat{Z}[\phi]$. I would be most thankful if anyone could help me.
PS: Are there any detailed textbooks or other literature about this technique?
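For what it's worth, here is one way to organize the step (my own sketch of the standard manipulation, not Ryder's exact derivation). The denominator of $\widehat{Z}[\phi]$ is a number (the field there is integrated over), so only the numerator depends on $\phi$, and the chain rule for functionals gives
$$\frac{\delta}{\delta \phi(x)}\, e^{iS[\phi]} = i\, \frac{\delta S}{\delta \phi(x)}\, e^{iS[\phi]}.$$
For the quadratic term, one integrates by parts twice (equivalently, uses the symmetry of $\Box + m^2$) to get
$$\frac{\delta}{\delta \phi(x)} \int \frac{1}{2}\, \phi(y)\, (\Box + m^2)\, \phi(y)\, d^4 y = (\Box + m^2)\, \phi(x),$$
so that
$$\frac{\delta S}{\delta \phi(x)} = -\left[ (\Box + m^2)\, \phi(x) - \frac{\partial {\cal L}_{int}}{\partial \phi(x)} \right],$$
which, up to the overall factors of $\pm i$ fixed by the book's conventions, reproduces Eq. 6.72.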
The circle space of a spherical circle plane
(BELGIAN MATHEMATICAL SOC TRIOMPHE, 2014)
We show that the circle space of a spherical circle plane is a punctured projective 3-space. The main ingredient is a partial solution of the problem of Apollonius on common touching circles.
On Kleinewillinghöfer types of finite Laguerre planes with respect to homotheties
(2016)
© 2016, University of Queensland. All rights reserved. Kleinewillinghöfer classified Laguerre planes with respect to central automorphisms and obtained a multitude of types. In this paper we investigate the Kleinewillinghöfer ...
Maps between curves and arithmetic obstructions
(2017)
Let X and Y be curves over a finite field. In this article we explore methods to determine whether there is a rational map from Y to X by considering L-functions of certain covers of X and Y and propose a specific family ...
Asymptotic enumeration of symmetric integer matrices with uniform row sums
(CAMBRIDGE UNIV PRESS, 2013)
We investigate the number of symmetric matrices of nonnegative integers with zero diagonal such that each row sum is the same. Equivalently, these are zero-diagonal symmetric contingency tables with uniform margins, or ...
Maximal differential uniformity polynomials
(2017)
We provide an explicit infinite family of integers m such that all the polynomials of F2n [x] of degree m have maximal differential uniformity for n large enough. We also prove a conjecture of the third author in these cases.
New characterisations of tree-based networks and proximity measures
(2017)
Phylogenetic networks are a type of directed acyclic graph that represent how a set X of present-day species are descended from a common ancestor by processes of speciation and reticulate evolution. In the absence of ...
Tate-Shafarevich groups of constant elliptic curves and isogeny volcanos
(2019)
We describe the structure of Tate-Shafarevich groups of constant elliptic curves over function fields by exploiting the volcano structure of isogeny graphs of elliptic curves over finite fields.
Improved rank bounds from 2-descent on hyperelliptic Jacobians
(2018)
© 2018 World Scientific Publishing Company. We describe a qualitative improvement to the algorithms for performing 2-descents to obtain information regarding the Mordell-Weil rank of a hyperelliptic Jacobian. The improvement ...
On automorphism groups of toroidal circle planes
(2018)
© 2018, Springer International Publishing AG, part of Springer Nature. Schenkel proved that the automorphism group of a flat Minkowski plane is a Lie group of dimension at most 6 and described planes whose automorphism ...
The Drinker Paradox and its Dual
(2018)
The Drinker Paradox is as follows. In every nonempty tavern, there is a person such that if that person is drinking, then everyone in the tavern is drinking. Formally, \[ \exists x \big(\varphi \rightarrow \forall ... |
The Annals of Statistics, Volume 24, Number 3 (1996), 1235-1249.
On the existence of inferences which are consistent with a given model
Abstract
If $\{p_{\theta}\}$ is a $\sigma$-additive statistical model and $\pi$ a finitely additive prior, then any statistic $T$ is sufficient, with respect to a suitable inference consistent with $\{p_{\theta}\}$ and $\pi$, provided only that $p_{\theta}(T = t) = 0$ for all $\theta$ and $t$. Here, sufficiency is to be intended in the Bayesian sense, and consistency in the sense of Lane and Sudderth. As a corollary, if $\{p_{\theta}\}$ is $\sigma$-additive and diffuse, then, whatever the prior $\pi$, there is an inference which is consistent with $\{p_{\theta}\}$ and $\pi$. Two versions of the main result are also obtained for predictive problems.
Article information
Source: Ann. Statist., Volume 24, Number 3 (1996), 1235-1249.
Dates: First available in Project Euclid: 20 September 2002
Permanent link to this document: https://projecteuclid.org/euclid.aos/1032526966
Digital Object Identifier: doi:10.1214/aos/1032526966
Mathematical Reviews number (MathSciNet): MR1401847
Zentralblatt MATH identifier: 0866.62001
Citation
Berti, Patrizia; Rigo, Pietro. On the existence of inferences which are consistent with a given model. Ann. Statist. 24 (1996), no. 3, 1235--1249. doi:10.1214/aos/1032526966. https://projecteuclid.org/euclid.aos/1032526966 |
There are many possible definitions; see e.g. Rutanen et al. [1].
Generally speaking, if you define your $O$ over domain $X$, you will need an order relation $\leq$ on $X$. The definition then reads:
$\qquad O(g) = \{ f : X \to \mathbb{R}_{\geq 0} \mid \exists\,x_0 \in X, c \in \mathbb{R}_{>0}.\ \forall\,x\in X.\\\qquad\qquad\qquad x_0 \leq x \implies f(x) \leq c \cdot g(x) \}.$
So if $X = \mathbb{N}^k$ you can use lexicographic order, element-wise $\leq$, ...
Beware that the properties your $O$ has depend on both the definition and the order you choose. The outcome may not be as expected.
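As a toy illustration (my own example, not from the answer above): take $X = \mathbb{N}^2$ with the element-wise order. Then $f(m,n) = m+n$ lies in $O(m \cdot n)$, witnessed by $x_0 = (2,2)$ and $c = 1$, while the same witness fails if you insist on $x_0 = (1,1)$:

```python
# Element-wise order on N^2: (m, n) >= x0 means m >= x0[0] and n >= x0[1].
# For m, n >= 2 we have m*n - (m + n) = (m - 1)*(n - 1) - 1 >= 0, so c = 1
# works; starting from (1, 1) it does not, e.g. f(1, 5) = 6 > g(1, 5) = 5.
def f(m, n): return m + n
def g(m, n): return m * n

def witness_holds(x0, c, bound=50):
    """Finite spot-check of f(x) <= c * g(x) for all x >= x0 (up to bound)."""
    return all(f(m, n) <= c * g(m, n)
               for m in range(x0[0], bound) for n in range(x0[1], bound))

print(witness_holds((2, 2), 1))   # True
print(witness_holds((1, 1), 1))   # False
```

The check is only over a finite grid, but the inequality in the comment shows it holds for all $m, n \geq 2$.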
[1] A general definition of the O-notation for algorithm analysis by K. Rutanen et al. (2015)
The electric field and the magnetic field are not the same thing. An electric dipole with positive charge on one end and negative charge on the other is not the same thing as a magnetic dipole having a north and a south pole. More specifically: An object can have positive charge but it can’t have “northness”.
On the other hand, electricity and magnetism are not unrelated. In fact, under certain circumstances, a magnetic field will exert a force on a charged particle that has no magnetic dipole moment. Here we consider the effect of a magnetic field on such a charged particle.
FACT: A magnetic field exerts no force on a charged particle that is at rest in the magnetic field.
FACT: A magnetic field exerts no force on a charged particle that is moving along the line along which the magnetic field, at the location of the particle, lies.
FACT: A magnetic field does exert a force on a charged particle that is in the magnetic field and is moving, as long as the velocity of the particle is not along the line along which the magnetic field is directed. The force in such a case is given by:
\[\vec{F}=q\vec{v}\times \vec{B}\label{16-1}\]
Note that the cross product yields a vector that is perpendicular to each of the multiplicands. Thus the force exerted on a moving charged particle by the magnetic field within which it finds itself, is always perpendicular to both its own velocity, and the magnetic field vector at the particle’s location.
Consider a positively-charged particle moving with velocity \(v\) at angle \(\theta\) in the \(x-y\) plane of a Cartesian coordinate system in which there is a uniform magnetic field in the \(+x\) direction.
To get the magnitude of the cross product \(\vec{v}\times \vec{B}\) that appears in \(\vec{F}=q\vec{v}\times \vec{B}\) we are supposed to establish the angle that \(\vec{v}\) and \(\vec{B}\) make with each other when they are placed tail to tail. Then the magnitude \(|\vec{v}\times \vec{B}|\) is just the absolute value of the product of the magnitudes of the vectors times the sine of the angle in between them. Let’s put the two vectors tail to tail and establish that angle. Note that the magnetic field as a whole is an infinite set of vectors in the \(+x\) direction. So, of course, the magnetic field vector \(\vec{B}\), at the location of the particle, is in the \(+x\) direction.
Clearly the angle between the two vectors is just the angle \(\theta\) that was specified in the problem. Hence,
\[|\vec{v}\times \vec{B}|=|vB\sin \theta|,\]
so, starting with our given expression for \(\vec{F}\), we have:
\[\vec{F}=q\vec{v}\times \vec{B}\]
\[|\vec{F}|=|q\vec{v} \times \vec{B}|\]
\[|\vec{F}|=|qvB\sin\theta|\]
Okay, now let’s talk about the direction of \(\vec{F}=q\vec{v}\times \vec{B}\). We get the direction of \(\vec{v}\times \vec{B}\) and then we think. The charge \(q\) is a scalar. If \(q\) is positive, then, when we multiply the vector \(\vec{v}\times \vec{B}\) by \(q\) (to get \(\vec{F}\) ), we get a vector in the same direction as that of \(\vec{v}\times \vec{B}\). So, whatever we get (using the right-hand rule for the cross product) for the direction of \(\vec{v}\times \vec{B}\) is the direction of \(\vec{F}=q\vec{v}\times \vec{B}\). But, if \(q\) is negative, then, when we multiply the vector \(\vec{v}\times \vec{B}\) by \(q\) (to get \(\vec{F}\)), we get a vector in opposite direction to that of \(\vec{v}\times \vec{B}\). So, once we get the direction of \(\vec{v}\times \vec{B}\) by means of the righthand rule for the cross product of two vectors, we have to realize that (because the charge is negative) the direction of \(\vec{F}=q\vec{v}\times \vec{B}\) is opposite the direction that we found for \(\vec{v}\times \vec{B}\).
Let’s do it. To get the direction of the cross product vector \(\vec{v}\times \vec{B}\) (which appears in \(\vec{F}=q\vec{v}\times \vec{B}\), draw the vectors \(\vec{v}\) and \(\vec{B}\) tail to tail.
Extend the fingers of your right hand so that they are pointing directly away from your right elbow. Extend your thumb so that it is at right angles to your fingers.
Now, keeping your fingers aligned with your forearm, align your fingers with the first vector appearing in the cross product \(\vec{v}\times \vec{B}\), namely \(\vec{v}\).
Now rotate your hand, as necessary, about an imaginary axis extending along your forearm and along your middle finger, until your hand is oriented such that, if you were to close your fingers, they would point in the direction of the second vector.
The direction in which your right thumb is now pointing is the direction of \(\vec{v}\times \vec{B}\). We depict a vector in that direction by means of an \(\times\) with a circle around it. That symbol is supposed to represent the tail feathers of an arrow that is pointing away from you.
Let’s not forget about that \(q\) in the expression \(\vec{F}=q\vec{v}\times \vec{B}\). In the case at hand, the charged particle under consideration is positive. In other words \(q\) is positive. So, \(\vec{F}=q\vec{v}\times \vec{B}\) is in the same direction as \(\vec{v}\times \vec{B}\).
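As a quick numerical check of the setup above (the values of \(q\), \(v\), \(B\), and \(\theta\) here are my own arbitrary choices):

```python
# F = q v x B for a positive charge moving in the x-y plane at angle theta,
# with a uniform field along +x. The cross product lands on the -z axis:
# "into the page" in the diagram's orientation.
import numpy as np

q = 1.0                                   # positive charge (arbitrary units)
theta = np.radians(30.0)                  # angle between v and B
v = 5.0 * np.array([np.cos(theta), np.sin(theta), 0.0])
B = np.array([2.0, 0.0, 0.0])             # uniform field in the +x direction

F = q * np.cross(v, B)                    # magnetic part of the Lorentz force
print(F)                                  # only the z-component is nonzero
print(np.linalg.norm(F))                  # equals |q v B sin(theta)|
```

The magnitude agrees with \(|qvB\sin\theta|\), and the sign of the \(z\)-component matches the right-hand-rule direction found above.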
A magnetic field will also interact with a current-carrying conductor. We focus our attention on the case of a straight current-carrying wire segment in a magnetic field:
FACT: Given a straight, current-carrying conductor in a magnetic field, the magnetic field exerts no force on the wire segment if the wire segment lies along the line along which the magnetic field is directed. (Note: The circuit used to cause the current in the wire must exist, but is not shown in the following diagram.)
FACT: A magnetic field exerts a force on a current-carrying wire segment that is in the magnetic field, as long as the wire is not collinear with the magnetic field.
The force exerted on a straight current-carrying wire segment, by the (uniform) magnetic field in which the wire is located, is given by
\[\vec{F}=I\vec{L}\times \vec{B}\label{16-2}\]
where:
\(\vec{F}\) is the force exerted on the wire-segment-with-current by the magnetic field the wire is in,
\(I\) is the current in the wire,
\(\vec{L}\) is a vector whose magnitude is the length of that segment of the wire which is actually in the magnetic field, and, whose direction is the direction of the current (which depends both on how the wire segment is oriented and how it is connected in the (not-shown) circuit.)
\(\vec{B}\) is the magnetic field vector. The magnetic field must be uniform along the entire length of the wire for this formula to apply, so, \(\vec{B}\) is the magnetic field vector at each and every point along the length of the wire.
Note that, in the preceding diagram, \(\vec{F}\) is directed into the page as determined from \(\vec{F}=I\vec{L}\times \vec{B}\) by means of the right-hand rule for the cross product of two vectors.
Effect of a Uniform Magnetic Field on a Current Loop
Consider a rectangular loop of wire. Suppose the loop to be in a uniform magnetic field as depicted in the following diagram:
Note that, to keep things simple, we are not showing the circuitry that causes the current in the loop and we are not showing the cause of the magnetic field. Also, the magnetic field exists throughout the region of space in which the loop finds itself. We have not shown the full extent of either the magnetic field lines depicted, or the magnetic field itself.
Each segment of the loop has a force exerted on it by the magnetic field the loop is in. Let’s consider the front and back segments first:
Because both segments have the same length, both segments make the same angle with the same magnetic field, and both segments have the same current; the force \(\vec{F}=I\vec{L}\times \vec{B}\) will be of the same magnitude in each. (If you write the magnitude as \(F=ILB\sin\theta\), you know the magnitudes are the same as long as you know that for any angle \(\theta\), \(\sin (\theta)=\sin (180^{\circ}-\theta)\).) Using the right-hand rule for the cross product to get the direction, we find that each force is directed perpendicular to the segment upon which it acts, and, away from the center of the rectangle:
The two forces, \(F_{\mbox{FRONT}}\) and \(F_{\mbox{BACK}}\) are equal in magnitude, collinear, and opposite in direction. About the only effect they could have would be to stretch the loop. Assuming the material of the loop is rigid enough not to stretch, the net effect of the two forces is no effect at all. So, we can forget about them and focus our attention on the left and right segments in the diagram.
Both the left segment and the right segment are at right angles to the magnetic field. They are also of the same length and carry the same current. For each, the magnitude of \(\vec{F}=I\vec{L}\times \vec{B}\) is just \(IwB\) where \(w\) is the width of the loop and hence the length of both the left segment and the right segment.
Using the right-hand rule for the cross product of two vectors, applied to the expression \(\vec{F}=I\vec{L}\times \vec{B}\) for the force exerted on a wire segment by a magnetic field, we find that the force \(F=IwB\) on the right segment is upward and the force \(F=IwB\) on the left segment is downward.
The two forces are equal (both have magnitude \(F=IwB\) ) and opposite in direction, but, they are not collinear. As such, they will exert a net torque on the loop. We can calculate the torque about the central axis:
by extending the lines of action of the forces and identifying the moment arms:
The torque provided by each force is \(r_{\perp}F\). Both torques are counterclockwise as viewed in the diagram. Since they are both in the same direction, the magnitude of the sum of the torques is just the sum of the magnitudes of the two torques, meaning that the magnitude of the total torque is just \(\tau=2r_{\perp}F\). We can get an expression for \(2r_{\perp}\) by recognizing, in the diagram, that \(2r_{\perp}\) is just the distance across the bottom of the triangle in the front of the diagram:
and defining the angle \(\theta\), in the diagram, to be the angle between the plane of the loop and the vertical.
From the diagram, it is clear that \(2r_{\perp}=l \sin \theta\).
Thus the magnetic field exerts a torque of magnitude
\[\tau=2r_{\perp} F\]
\[\tau=[l(\sin\theta)](IwB)\]
on the current loop.
The expression for the torque can be written more concisely by first reordering the multiplicands so that the expression appears as
\[\tau=IlwB \sin \theta\]
and then recognizing that the product \(lw\) is just the area \(A\) of the loop. Replacing \(lw\) with \(A\) yields:
\[\tau=IAB \sin\theta\]
Torque is something that has direction, and, you might recognize that \(\sin\theta\) appearing in the preceding expression as something that can result from a cross product. Indeed, if we define an area vector to have a magnitude equal to the area of the loop,
\[|\vec{A}|=lw\]
and, a direction perpendicular to the plane of the loop,
we can write the torque as a cross product. First note that the area vector as I have defined it in words to this point, could point in the exact opposite direction to the one depicted in the diagram. If, however, we additionally stipulate that the area vector is directed in accord with the righthand rule for something curly something straight, with the loop current being the something curly and the area vector the something straight (and we do so stipulate), then the direction of the area vector is uniquely determined to be the direction depicted in the diagram.
Now, if we slide that area vector over to the right front corner of the loop,
it becomes more evident (you may have already noticed it) that the angle between the area vector \(\vec{A}\) and the magnetic field vector \(\vec{B}\), is the same \(\theta\) defined earlier and depicted in the diagram just above.
This allows us to write our expression for the torque (of magnitude \(\tau=IAB \sin\theta\), counterclockwise as viewed in the diagram) as:
\[\vec{\tau}=I\vec{A}\times \vec{B}\]
Check it out. The magnitude of the cross product \(|\vec{A}\times \vec{B}|\) is just \(AB\sin \theta\), meaning that our new expression yields the same magnitude \(\tau=I AB\sin \theta\) for the torque as we had before. Furthermore, the right-hand rule for the cross product of two vectors yields the torque direction depicted in the following diagram.
Recalling that the sense of rotation associated with an axial vector is determined by the righthand rule for something curly, something straight; we point the thumb of our cupped right hand in the direction of the torque vector and note that our fingers curl around counterclockwise, as viewed in the diagram.
Okay, we’re almost there. So far, we have the fact that if you put a loop of wire carrying a current \(I\) in it, in a uniform magnetic field \(\vec{B}\), with the loop oriented such that the area vector \(\vec{A}\) of the current loop makes an angle \(\theta\) with the magnetic field vector, then, the magnetic field exerts a torque
\[\vec{\tau}=I\vec{A}\times \vec{B}\]
on the loop.
This is identical to what happens to a magnetic dipole when you put it in a uniform magnetic field. It experiences a torque \(\vec{\tau}=\vec{\mu}\times \vec{B}\). In fact, if we identify the product \(I\vec{A}\) as the magnetic dipole moment of the current loop, then the expressions for the torque are completely identical:
\[\vec{\tau}=\vec{\mu}\times \vec{B}\label{16-3}\]
where:
\(\vec{\tau}\) is the torque exerted on the victim. The victim can be either a particle that has an inherent magnetic dipole moment, or, a current loop.
\(\vec{\mu}\) is the magnetic dipole moment of the victim. If the victim is a particle, \(\vec{\mu}\) is simply the magnitude and direction of the inherent magnetic dipole moment of the particle. If the victim is a current loop, then \(\vec{\mu}=I\vec{A}\) where \(I\) is the current in the loop and \(\vec{A}\) is the area vector of the loop, a vector whose magnitude is the area of the loop and whose direction is the direction in which your right thumb points when you curl the fingers of your right hand around the loop in the direction of the current. (See the discussion below for the case in which the victim is actually a coil of wire rather than a single loop.)
\(\vec{B}\) is the magnetic field vector at the location of the victim.
A single loop of wire can be thought of as a coil of wire that is wrapped around once. If the wire is wrapped around \(N\) times, rather than once, then the coil is said to have \(N\) turns or \(N\) windings. Each winding makes a contribution of \(I\vec{A}\) to the magnetic dipole moment of the current loop. The contribution from all the loops is in one and the same direction. So, the magnetic moment of a current-carrying coil of wire is:
\[\vec{\mu}=NI\vec{A}\label{16-4}\]
where:
\(\vec{\mu}\) is the magnetic moment of the coil of wire.
\(N\) is the number of times the wire was wrapped around to form the coil. \(N\) is called the number of windings. \(N\) is also known as the number of turns.
\(I\) is the current in the coil. The coil consists of one long wire wrapped around many times, so, there is only one current in the wire. We call that one current the current in the coil.
\(\vec{A}\) is the area vector of the loop or coil. Its magnitude is the area of the plane shape whose perimeter is the loop or coil. Its direction is the direction your extended right thumb would point if you curled the fingers of your right hand around the loop in the direction of the current.
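Putting Equations \ref{16-3} and \ref{16-4} together in code (a sketch; the coil dimensions, current, and field strength below are made-up values):

```python
# Torque on an N-turn rectangular coil in a uniform field:
# mu = N I A  (Equation 16-4),  tau = mu x B  (Equation 16-3).
import numpy as np

N, I = 50, 0.2                       # number of turns, current (A)
l, w = 0.04, 0.03                    # coil side lengths (m)
theta = np.radians(25.0)             # angle between area vector and B

A_vec = l * w * np.array([np.sin(theta), 0.0, np.cos(theta)])  # area vector
B = np.array([0.0, 0.0, 0.5])        # uniform field along +z (T)

mu = N * I * A_vec                   # magnetic moment of the coil
tau = np.cross(mu, B)                # torque exerted by the field
print(np.linalg.norm(tau))          # equals N*I*(l*w)*B*sin(theta)
```

The computed magnitude reproduces \(\tau = NIAB\sin\theta\) with \(A = lw\), as derived above for the single loop.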
Some Generalizations Regarding the Effect of a Uniform Magnetic Field on a Current Loop
We investigated the effect of a uniform magnetic field on a current loop. A magnetic field will exert a torque on a current loop whether or not the magnetic field is uniform. Since a current loop has some spatial extent (it is not a point particle), using a single value-plus-direction for \(\vec{B}\) in \(\vec{\tau}=\vec{\mu} \times \vec{B}\) will yield an approximation to the torque. It is a good approximation as long as the magnetic field is close to being uniform in the region of space occupied by the coil.
We investigated the case of a rectangular loop. The result for the torque exerted on the current-carrying loop or coil is valid for any plane loop or coil, whether it is circular, oval, or rectangular. |
If the external potential $V$ in the time-dependent Schroedinger equation doesn't depend on time, then we can separate the wavefunction as spatial part and time part.
$$ \Psi(x,t)=\psi(x) \theta(t) $$
$\psi(x)$ is the solution of the well-known time-independent Schroedinger equation, and $\theta(t) = e^{-iEt/\hbar}$. Then the probability density is time-independent, or stationary:
$$ \rho=|\Psi(x,t)|^{2} = |\psi(x)|^{2} $$
Thus, $ \frac{d}{dt} \int \rho d \tau =0 $.
However, now introduce a complex potential $V(\vec{x})=V_{1}(\vec{x})+i V_{2}(\vec{x})$, where $V_{1}$ and $V_{2}$ are real functions.
Then the time-dependent Schroedinger equation is
$$ i \hbar \frac{\partial}{\partial t}\Psi(\vec{x},t)= -\frac{\hbar^{2}}{2m} \nabla^{2} \Psi(\vec{x},t)+ \left\{V_{1}(\vec{x})+i V_{2}(\vec{x}) \right\}\Psi(x,t) $$
Let's calculate $ \frac{d}{dt} \int \rho d \tau $ at first.
$$ \begin{align*} \frac{d}{dt} \int \rho d \tau &= \frac{d}{dt} \int \Psi^{\ast}\Psi d \tau = \int \frac{\partial}{\partial t}(\Psi^{\ast} \Psi)d \tau \\ &=\int \left (\Psi^{\ast} \frac{\partial}{\partial t} \Psi + \Psi \frac{\partial}{\partial t} \Psi^{\ast} \right)d \tau\\ &= \int 2\, \mathrm{Re} \left(\Psi^{\ast} \frac{\partial}{\partial t}\Psi \right) d \tau \end{align*} $$
From the given Schroedinger equation, the kinetic term contributes only a total divergence (which vanishes upon integration), so
$$ 2\, \mathrm{Re} \left(\Psi^{\ast} \frac{\partial}{\partial t} \Psi \right) = -\nabla \cdot \vec{j} + \frac{2 V_{2}(\vec{x})}{\hbar} \Psi^{\ast}\Psi $$
where $\vec{j} = \frac{\hbar}{m}\, \mathrm{Im}\left(\Psi^{\ast} \nabla \Psi \right)$ is the usual probability current.
Thus,
$$ \frac{d}{dt} \int \rho d \tau = \frac{2}{\hbar} \int V_{2}(\vec{x})\, \Psi^{\ast}\Psi \, d \tau \neq 0 $$
$V_{2}$ is generally not zero, meaning that the density isn't stationary.
This would violate the statement that the wavefunction is stationary if the external potential doesn't depend on time.
There is no problem if the potential is real, but I guess an imaginary potential has some kind of physical meaning. How should I think about this?
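One way to see what the result above implies: for a spatially constant $V_2$, the norm $N=\int \rho \, d\tau$ obeys $dN/dt = (2V_2/\hbar)N$, so it decays exponentially for $V_2 < 0$ (absorption) and grows for $V_2 > 0$ (gain). A quick numerical sketch in units with $\hbar = 1$ and a made-up value of $V_2$:

```python
import math

hbar = 1.0
V2 = -0.3            # constant negative imaginary potential: absorption
dt, steps = 1e-4, 10000

N = 1.0              # initial norm, integral of |Psi|^2
for _ in range(steps):
    N += dt * (2 * V2 / hbar) * N   # dN/dt = (2 V2 / hbar) N, Euler step

t = steps * dt
N_exact = math.exp(2 * V2 * t / hbar)   # analytic solution N(t) = N(0) e^{2 V2 t / hbar}
```

The numerical norm tracks the analytic exponential decay, illustrating why imaginary potentials are used to model particle absorption or loss.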
''Diamond Paradox'' by Diamond (1971)
This is a less-known paradox, usually presented as a counterpoint to the famous Bertrand paradox. It is a starting point in the literature on informational frictions in consumer markets, and researchers in the field agree on its significance.
Its idea is diametrically opposite to that of Bertrand. Consider the following simple example. There are $2$ firms which produce homogeneous goods at zero marginal cost and compete in prices, $p$. The firms set prices simultaneously. Also there is a single consumer whose demand is given by $1-p$. Importantly, the consumer does not observe the prices set by firms and, therefore, needs to search for them sequentially, where search is costly. Suppose that the cost of visiting a firm is given by $0 < c \leq \frac{1}{2}$. Then, the unique equilibrium of the market is that both firms charge the monopoly price $$p^M= \frac{1}{2}.$$
This is a diametrically opposite result to that of Bertrand.
The reasoning behind the result is as follows. Suppose both firms charge $p=0$. Then, the consumer randomly visits one of the firms, say firm $i$, and buys. However, firm $i$ could have charged $c$ and made positive profits, as the consumer would have bought the goods anyway: she would have suffered cost $c$ had she left firm $i$ in order to buy from the rival firm. By the same argument, one can see that $p=c$ cannot be an equilibrium, as now firm $i$ can charge $c+c$ and improve its profit. Continuing this way, it is easy to arrive at an equilibrium where both firms charge $p^M$. A firm does not want to charge $p^M+c$ simply because its profit is maximized at $p^M$.
Formal Analysis of the Example
Timing: First, the firms simultaneously set prices. Second, the consumer, without knowing the prices, engages in sequential search. The first search is free and the consumer visits each firm with equal probability. The consumer can come back to a previously searched firm for free. The consumer has to observe a firm's price in order to buy goods from that firm.
Beliefs: In equilibrium, the consumer has correct beliefs about the strategies of the firms. If, upon visiting a firm, she observes a price different from the equilibrium one, the consumer assumes that the rival firm has deviated to the same price too. Thus, the consumer has symmetric out-of-equilibrium beliefs. Note: the result of the game does not change if the consumer has passive beliefs.
Strategies: Strategies of the firms are prices. As mixing is allowed, let $F(p)$ represent the probability that a firm charges a price no greater than $p$. Strategy of the consumer is whether to search for the second price, upon observing the first one. This strategy is given by a reservation price $r$, such that upon observing a price lower than $r$ she buys outright, upon observing a price greater than $r$ she searches further, and upon observing a price equal to $r$ she is indifferent between buying immediately and searching further.
Equilibrium Notion: The concept of Perfect Bayesian Equilibrium (PBE) is employed. A PBE is characterized by a price distribution $F(p)$ for each firm and the consumer's reservation price strategy given by $r$ such that $(i)$ each firm chooses $F(p)$ to maximize its profit, given the equilibrium strategy of the other firm and the consumer's optimal search strategy, and $(ii)$ the consumer searches according to the reservation price rule $r$, given correct beliefs concerning the equilibrium strategies of the firms.
Theorem: For any $c>0$, there exists a PBE characterized by triple $(p^M, p^M, r)$, where $p^M$'s are charged with probability $1$ and $$r=1.$$
Proof: First, I prove that $r=1$, i.e., that the consumer buys outright when she observes any price lower than $1$. Clearly, if she observes a price greater than $1$ she does not buy from that firm, as this yields a negative payoff. Now, suppose she observes price $p'<r$. Then, she expects the rival firm to charge $p'$ too. Thus, if she buys outright her payoff is $\int_{p'}^{1}(1-p)dp$, and if she searches she expects a payoff equal to $\int_{p'}^{1}(1-p)dp - c$. As the former is greater than the latter, she is better off buying immediately. This proves that $r=1$.
Next, I prove that both firms charge $p^M$. Clearly, firms never charge above $1$ as they will never sell. Then, the expected profit of a firm is $\frac{1}{2}(1-p)p$ because the consumer visits a firm half of the time. It is easy to see that the profit is maximized at $p^M$.
QED. |
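A quick numerical check of the last step of the proof, that the firm's profit $\frac{1}{2}(1-p)p$ is indeed maximized at $p^M=\frac{1}{2}$ (pure brute force over a price grid):

```python
# Expected profit of a firm visited with probability 1/2, facing demand 1 - p
grid = [i / 10000 for i in range(10001)]        # candidate prices in [0, 1]
profits = [0.5 * (1 - p) * p for p in grid]
p_best = grid[profits.index(max(profits))]      # maximizer of the profit
```

The grid search returns $p \approx 0.5$, matching the first-order condition $1 - 2p = 0$.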
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ... |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... |
Answer
30.4 meters per second
Work Step by Step
We first must find the value of $\mu$ from the original tension and wave speed: $\mu = \frac{F}{v^2} = \frac{14}{18^2}=0.0432 \ \mathrm{kg/m}$. Thus, we can find the new velocity: $ v = \sqrt{\frac{F}{\mu}}=\sqrt{\frac{40}{0.0432}}=30.4 \ m/s$
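The same computation in code (the tensions 14 N and 40 N and the original speed 18 m/s come from the work step above):

```python
import math

F1, v1 = 14.0, 18.0        # original tension (N) and wave speed (m/s)
mu = F1 / v1**2            # linear mass density from v = sqrt(F/mu), kg/m
F2 = 40.0                  # new tension (N)
v2 = math.sqrt(F2 / mu)    # new wave speed, about 30.4 m/s
```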
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?(a) $A$(b) $C^{-1}A^{-1}BC^{-1}AC^2$(c) $B$(d) $C^2$(e) $C^{-1}BC$(f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
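Not part of the exam, but Problem 7 can be sanity-checked numerically. The trick in part (b) is that once $A\mathbf{v}=\lambda\mathbf{v}$ is known, $A^3\mathbf{v}=\lambda^3\mathbf{v}$ follows without forming $A^3$; a quick check of my own:

```python
A = [[-3, -4],
     [ 8,  9]]
v = [-1, 2]

Av = [A[0][0]*v[0] + A[0][1]*v[1],      # matrix-vector product A v
      A[1][0]*v[0] + A[1][1]*v[1]]

lam = Av[0] / v[0]                      # eigenvalue, since A v = lam * v
A3v = [lam**3 * x for x in v]           # A^3 v = lam^3 v, no need to form A^3
```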
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University) |
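Problem 6 can be checked by hand-coding the $2\times 2$ inverse (a quick verification of my own, not part of the exam):

```python
A = [[3.0, 2.0],
     [5.0, 3.0]]
b = [1.0, 2.0]

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # 3*3 - 2*5 = -1, nonzero so A is invertible
Ainv = [[ A[1][1]/det, -A[0][1]/det],       # adjugate over determinant:
        [-A[1][0]/det,  A[0][0]/det]]       # [[-3, 2], [5, -3]]

x = [Ainv[0][0]*b[0] + Ainv[0][1]*b[1],     # x = A^{-1} b
     Ainv[1][0]*b[0] + Ainv[1][1]*b[1]]
```

The solution comes out as $x_1 = 1$, $x_2 = -1$, which indeed satisfies both equations.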
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...
Basic LaTeX Equation Maker supports a subset of LaTeX commands, including the most commonly used commands.
Common Commands
Whitespace: \space
Normal text: \text{ content }
Bold text: \textbf{ content }
Italic text: \textit{ content }
Underline: \underline{ content }
Superscripts: base^{ super }
Subscripts: base_{ sub }
Square Root: \sqrt[ root ]{ expression }
Fractions: \frac{ numerator }{ denominator }
Summation: \sum_{ start }^{ end }
Product: \prod_{ start }^{ end }
Integration: \int_{ start }^{ end }
Limits: \lim_{ x \to number }
Math sizing and formatting
To change the math text size, use one of the following commands: \tiny \scriptsize \small \normalsize \large \Large \LARGE \huge \Huge. Group the math to be sized inside braces, for example {\tiny i=0} will produce a tiny sized i=0. To format math as bold use \bf: {\bf \vec{B}}. Similarly, use \it for italic.
Matrices
Matrices start with the command \begin{matrix} and end with the command \end{matrix}. Columns are delimited with the & symbol, rows are delimited with two backslashes: \\.
Example of a 2x2 matrix:
\begin{matrix}
1 & 2 \\ 8 & 1 \end{matrix}
This renders as
Align
To show multiple lines of equations, use the command \begin{align} and end with the command \end{align}. Like matrices, rows are delimited with two backslashes: \\.
Example of an equation with two lines:
\begin{align}
y = (x-1)(x+1) \\ =x^2-1 \end{align}
This renders as
Parentheses, Brackets, and Braces
To place parentheses, brackets, or braces around an expression use the \left and \right commands, each followed by the appropriate symbol. For example, to place brackets around a matrix:
\left[ \begin{matrix} 1 & 2 \\ 8 & 1 \end{matrix} \right]
To show a brace, you must escape it by placing a backslash immediately before it: \{ and \}. There are also shortcut commands to place parentheses/brackets around a matrix: use \begin{pmatrix} ... \end{pmatrix} to place parentheses around a matrix, and use \begin{bmatrix} ... \end{bmatrix} to place square brackets around a matrix.
Greek letters
Greek letters are shown by a backslash plus the name of the Greek letter. For example, Greek pi is shown with \pi. Uppercase Greek letters are shown with capitalized names.
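For instance, a small illustrative sample (not the full list of supported letters):

```latex
% lowercase Greek letters: backslash plus the name
\alpha \quad \beta \quad \gamma \quad \pi
% uppercase Greek letters: capitalize the name
\Gamma \quad \Delta \quad \Pi \quad \Omega
```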
Chemical Bonding and Molecular Structure: Molecular Orbital Theory
MOED (molecular orbital energy diagram), from O2 onwards:
\sigma 1s < \sigma^{*}1s < \sigma2s < \sigma^{*}2s < \sigma2p_{z} < \pi2p_{x} = \pi2p_{y} < \pi^*2p_{x} = \pi^*2p_{y} < \sigma^{*}2p_{z}
MOED for molecules up to and including N2:
\sigma 1s < \sigma^{*}1s < \sigma2s < \sigma^{*}2s < \pi2p_{x} = \pi2p_{y} < \sigma2p_{z} < \pi^*2p_{x} = \pi^*2p_{y} < \sigma^{*}2p_{z}
H2 molecule (2e-)
Bond\ order = \frac{2 - 0}{2} = 1 (diamagnetic)
Be2 (8e-)
MOED = σ1s² σ*1s² σ2s² σ*2s²
BO = \frac{4 - 4}{2} = 0 (can't exist)
diamagnetic
Short cut to determine magnetic nature:
No. of electrons | Bond order | Magnetic character
10 | 1   | P
11 | 1.5 | P
12 | 2   | D
13 | 2.5 | P
14 | 3   | D
15 | 2.5 | P
16 | 2   | P
17 | 1.5 | P
18 | 1   | D
P - Paramagnetic
D - Diamagnetic
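The bond-order column of the shortcut table follows from counting bonding versus antibonding electrons; a small helper of my own (not from the notes) reproduces it:

```python
def bond_order(n):
    """Bond order for a homonuclear diatomic with n electrons (10 <= n <= 18).

    sigma1s + sigma2s hold 4 bonding electrons and sigma*1s + sigma*2s hold 4
    antibonding ones; the 2p-derived bonding MOs hold up to 6 more electrons,
    after which electrons go into the antibonding 2p MOs.
    """
    assert 10 <= n <= 18
    bonding = 4 + min(n - 8, 6)
    antibonding = 4 + max(n - 14, 0)
    return (bonding - antibonding) / 2
```

Note the bond order is the same under both MOED orderings above, since swapping the π2p and σ2p levels does not change the bonding/antibonding counts; only the magnetic character depends on the ordering.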
Computer Science > Social and Information Networks
Title: Comparative Resilience Notions and Vertex Attack Tolerance of Scale-Free Networks
(Submitted on 1 Apr 2014)
Abstract: We are concerned with an appropriate mathematical measure of resilience in the face of targeted node attacks for arbitrary degree networks, and subsequently comparing the resilience of different scale-free network models with the proposed measure. We strongly motivate our resilience measure termed \emph{vertex attack tolerance} (VAT), which is denoted mathematically as $\tau(G) = \min_{S \subset V} \frac{|S|}{|V-S-C_{max}(V-S)|+1}$, where $C_{max}(V-S)$ is the largest connected component in $V-S$. We attempt a thorough comparison of VAT with several existing resilience notions: conductance, vertex expansion, integrity, toughness, tenacity and scattering number. Our comparisons indicate that for arbitrary degree distributions VAT is the only measure that fully captures both the major \emph{bottlenecks} of a network and the resulting \emph{component size distribution} upon targeted node attacks (both captured in a manner proportional to the size of the attack set). For the case of $d$-regular graphs, we prove that $\tau(G) \le d\Phi(G)$, where $\Phi(G)$ is the conductance of the graph $G$. Conductance and expansion are well-studied measures of robustness and bottlenecks in the case of regular graphs but fail to capture resilience in the case of highly heterogeneous degree graphs. Regarding comparison of different scale-free graph models, our experimental results indicate that PLOD graphs with degree distributions identical to BA graphs of the same size exhibit consistently better vertex attack tolerance than the BA type graphs, although both graph types appear asymptotically resilient for BA generative parameter $m = 2$. BA graphs with $m = 1$ also appear to lack resilience, not only exhibiting very low VAT values, but also great transparency in the identification of the vulnerable node sets, namely the hubs, consistent with well known previous work.
Submission history: From Gunes Ercal, [v1] Tue, 1 Apr 2014 01:55:33 GMT.
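To make the VAT definition concrete, here is a brute-force sketch for small graphs (exponential in $|V|$, for illustration only; I read $\min_{S \subset V}$ as ranging over nonempty proper subsets):

```python
from itertools import combinations

def _components(nodes, adj):
    """Connected components of the subgraph induced on `nodes`."""
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in adj[u] if v in nodes and v not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def vat(V, edges):
    """tau(G) = min over attack sets S of |S| / (|V-S-Cmax(V-S)| + 1)."""
    adj = {v: set() for v in V}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    best = float("inf")
    for k in range(1, len(V)):               # nonempty proper subsets S
        for S in combinations(V, k):
            rest = set(V) - set(S)
            cmax = max(len(c) for c in _components(rest, adj))
            best = min(best, len(S) / (len(rest) - cmax + 1))
    return best

# Star K_{1,4}: removing the single hub shatters the graph, so VAT is low
star_tau = vat(range(5), [(0, i) for i in (1, 2, 3, 4)])
```

For the star, the optimal attack set is the hub alone: $|S|=1$ and the four remaining nodes are all isolated, giving $\tau = 1/(4-1+1) = 0.25$.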
"La variante di Lüneburg" and China's rice production
In 1993, Paolo Maurensig, an Italian writer from Gorizia, wrote a novel entitled "La variante di Lüneburg" (Adelphi, 1995, pp. 164, ISBN 88-459-0984-0). The novel is set in Nazi Germany during World War II and the main theme is the game of chess.
At the beginning of the story, Maurensig tells a legend according to which the game of chess was invented by a Chinese peasant with a formidable gift for mathematics. (There are different versions of this story; as far as I understand, the earliest written record is contained in the Shahnameh and takes place in India, instead.) The peasant asks the king, in exchange for the game he invented, for a quantity of rice equal to that obtained with the following procedure: first one grain of rice should be placed on the first square of the chess board, then two grains on the second square, then four on the third and so on, every time doubling the number of rice grains. The king accepts, not realizing what he is agreeing to. Let's try to calculate how much rice that would be. The series implied by the peasant has a more general form called the geometric series and is well known in mathematics:
$$
s_m = \sum_{k=0}^m x^k
$$
If we calculate the first steps in the sum we obtain:
\begin{eqnarray*}
s_0 &=& 1 \\
s_1 &=& 1+x \\
s_2 &=& 1+x+x^2 \\
\dots && \\
\end{eqnarray*}
To see how this relates to our problem, we can set $x=2$ and see that the sum will be: $1+ 2+ 4+ 8+ \dots $
If we observe $s_1$ and $s_2$ in the previous equations, we see that we can write the second in terms of the first in two different ways:
\begin{eqnarray*}
s_2 &=& s_1+x^2 \\
s_2 &=& 1 + x (1+x) = 1 + x s_1.\\
\end{eqnarray*}
In the first case, we grouped the first two terms in $s_2$, whereas in the second case we grouped the last two terms and realized that they shared a common factor $x$.
If we continue writing the terms of the sum for higher orders, we realize that what we obtained above is true in general:
\begin{eqnarray*}
s_{m} &=& 1+x+\dots+x^m \\
s_{m+1} &=& 1+x+\dots+x^m+x^{m+1} = s_m+x^{m+1} \\
&=& 1 + x (1+\dots+x^{m-1}+x^m) = 1 + x s_m, \\
\end{eqnarray*}
which also means that the right-hand side of the last two equations above must be equal:
\begin{eqnarray*}
s_m+x^{m+1} &=& 1 + x s_m,
\end{eqnarray*}
and, therefore, rearranging:
\begin{eqnarray*}
s_m &=& \frac{x^{m+1}-1}{x-1}.
\end{eqnarray*}
This is the general solution for the sum of the geometric series. If we want to know
how many grains of rice the king will have to give to the peasant, we need to substitute the values of $x$ and $m$. We already saw above that $x$ should be equal to 2. We also know that the chess board has 8 rows and 8 columns, giving 64 squares. Because we start with $s_0$ in the first square, we need to calculate the series for $m=63$, that will correspond to the last square:
$$ s_{63} = \frac{2^{64}-1}{2-1} = 18\,446\,744\,073\,709\,551\,615, $$
which in words would sound something like eighteen quintillions...
In 1999, China produced approximately 198 million tons of rice, which corresponds to $198\,000\,000\,000\,000$ grams. If we assume for simplicity that the production is constant over the years and that one gram of rice is approximately 50 grains, the king will have to give the peasant the
entire rice production in China for more than 18 hundred years.
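The arithmetic above in a few lines (the 50 grains-per-gram conversion is the same rough assumption as in the text):

```python
total_grains = 2**64 - 1                 # s_63, the sum of the geometric series
annual_grams = 198_000_000_000_000       # China's 1999 rice production, in grams
grains_per_gram = 50                     # rough conversion
years = total_grains / (annual_grams * grains_per_gram)   # about 1863 years
```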
Needless to say, when the king realized the mistake, he killed the peasant. |
I am trying to derive Galerkin type weak formulation for the Stokes equations. I'm having a bit of a problem reconciling the notation in the integration by parts. I know that the answer I'm looking for is: $ \int_\Omega \Delta\mathbf{u}\cdot\mathbf{v}d\Omega = \int_\Gamma (\mathbf{n}\cdot\nabla\mathbf{u})\cdot\mathbf{v}d\Gamma - \int_\Omega \nabla\mathbf{u}:\nabla\mathbf{v}d\Omega $
When I integrate by parts myself I get: $ \int_\Omega \nabla u\cdot\mathbf{v}d\Omega = \int_\Gamma u(\mathbf{v}\cdot\mathbf{n})d\Gamma - \int_\Omega u \nabla\cdot\mathbf{v}d\Omega \quad \Rightarrow \quad \int_\Omega\Delta\mathbf{u}\cdot\mathbf{v}d\Omega = \int_\Omega (\nabla\cdot (\nabla\mathbf{u}))\cdot\mathbf{v}d\Omega = \int_\Gamma \nabla\mathbf{u} (\mathbf{v}\cdot\mathbf{n})d\Gamma - \int_\Omega\nabla\mathbf{u}\nabla\cdot\mathbf{v}d\Omega $
I assume I should be using a dot product for the vector/matrix multiplication, but even so I can't reconcile my answer with what I know the correct answer to be. For instance the line integral should be a scalar, but with my answer $\nabla\mathbf{u}$ is a matrix and $\mathbf{v}\cdot\mathbf{n}$ is a scalar so I fail to see how their product could be a scalar.
I did notice that the formula I used applies to scalar $u$'s. Is there another identity I should be using when $\mathbf{u}$ is a vector?
To be frustratingly concise, they are the same quantity. In fact, that was the whole genius of Einstein to realize that the inertia of a particle (the "measurement of how much a particle accelerates given a force") is really a measure of its energy. Thus, the title of his $1905$ paper:
"Does the inertia of a body depend on its energy content?"
Now, the playground for this story is a bit muddled due to one of the most unfortunate notions in the history of modern physics: the relativistic mass. This is the reason you misspelled the Einstein mass-energy relation as $E=m_0 c^2$ instead of correctly spelling it as $E_0=mc^2$. But, I will come to this point later, for now, I will try to bypass this issue. Anyway, let's dive in.
So, in special relativity, the total energy of a particle (or, simply, the energy of the particle) and the momentum of a particle are given by $$E=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}=mc^2\Big[1+\frac{1}{2}\frac{v^2}{c^2}+\frac{3}{8}\frac{v^4}{c^4}+\cdot\cdot\cdot\Big]$$$$\vec{p}=\frac{m\vec{v}}{\sqrt{1-\frac{v^2}{c^2}}}=m\vec{v}\Big[1+\frac{1}{2}\frac{v^2}{c^2}+\frac{3}{8}\frac{v^4}{c^4}+\cdot\cdot\cdot\Big]$$Here, the $m$ that I have used is the mass of the particle, or, as people liked to call it in the early days of relativity, the rest mass of the particle. Now, the key feature to notice here is that it is the same $m$ in the expression for the energy as it is in the expression for the momentum. Now, when you take the limit in which $\frac{v}{c}$ is small and retain the first non-trivial contribution from $v$, you get the usual Newtonian relations $$E=mc^2+\frac{1}{2}mv^2$$ $$\vec{p}=m\vec{v}$$Now, at this moment, you can easily see that it is the same $m$ in the formula for the Newtonian $\vec{p}$ that would appear in $\vec{F}=m\vec{a}$. Moreover, you also notice that it is the same $m$ in all the four expressions that I have written. In particular, it is the same $m$ in $E=mc^2+\frac{1}{2}mv^2$ (which contains the famous $E_0=mc^2$ where $E_0$ means the energy of the particle when $v=0$, or, in other words, its rest energy). In Newtonian mechanics, we don't carry along the $mc^2$ term in the expression for $E$ because, essentially, we assume that $m$ doesn't change and thus, $\frac{1}{2}mv^2$ alone is a good enough (conserved) quantity which is worthy of being named energy. But, the main point is that it is the same $m$ that dictates the inertia (via entering the formula for the momentum) of the particle that dictates its rest-energy (via entering the formula for, well, the rest-energy).
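One can check numerically that the Newtonian expression approximates the exact energy for small $v/c$, with the discrepancy controlled by the next term of the expansion, $\frac{3}{8}m\frac{v^4}{c^2}$ (units with $m=c=1$ here, chosen only for convenience):

```python
import math

m, c = 1.0, 1.0
v = 0.01 * c        # small velocity compared to c

E_exact = m * c**2 / math.sqrt(1 - v**2 / c**2)   # relativistic energy
E_newton = m * c**2 + 0.5 * m * v**2              # rest energy + kinetic term

error = E_exact - E_newton   # should be close to (3/8) m v^4 / c^2
```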
Extra Comments
So, in order to preserve the relation that momentum is mass times velocity, people invented the term
relativistic mass $M$ defined as $\frac{m}{\sqrt{1-\frac{v^2}{c^2}}}$ and happily wrote $\vec{p}=M\vec{v}$ while being relativistically correct. But, apart from being a linguistic nightmare where you necessarily needed to invent the term rest mass to distinguish the relativistically invariant mass from the made-up relativistic mass, it was also a physically problematic scheme as best explained by Lev Okun in this article. So, now, we only have one mass, the mass, or if you really wish, the rest mass. But, the total energy is not equal to $mc^2$; it is only the rest energy which is equal to $mc^2$. So, one should only write $E_0=mc^2$. With the forbidden relativistic mass, you could simply write $E=Mc^2$ where $E$ would have the right to be called the total energy and not just the rest energy, but, it was just not worth it!
Lee and Bryk (1989) analyzed a set of data in illustrating the use of multilevel modeling. The data set includes mathematics scores for senior-year high school students from 160 schools. For each student, information on her/his social and economic status (SES) is also available. For each school, data are available on the sector (private or public) of the schools. In addition, a school level variable called average SES (MSES) was created from the students' SES to measure the SES of each school.
There is a clear two-level structure in the data. At the first level, the student level, we have information on individual students. At the second level, the school level, we have information on schools. The students are nested within schools. This kind of data are called multilevel data.
As an example, we use a subset of data from the Lee and Bryk (1989) study. In total, there are \(n=7,185\) students at the first level. At the second level, there are \(J=160\) schools. The data are saved in the file
LeeBryk.csv. A subset of the data is shown below.
sector=0 indicates a public school and
sector=1 indicates a private Catholic high school.
> usedata('LeeBryk') > head(LeeBryk) schoolid math ses mses sector 1 1 5.88 -1.53 -0.43 0 2 1 19.71 -0.59 -0.43 0 3 1 20.35 -0.53 -0.43 0 4 1 8.78 -0.67 -0.43 0 5 1 17.90 -0.16 -0.43 0 6 1 4.58 0.02 -0.43 0 >
We first look at the math score variable alone. The variance of the variable math is 47.31. The variance can be viewed as the residual variance after removing the mean, or the residual variance from fitting a regression model with intercept only. This assumes that every student across all schools has the same mean score or intercept, such that
\[math_{ij} = \beta_0 + e_{ij}.\]
Given there are 160 schools, it is more reasonable to believe that the intercept (or average math score) is different for each school. Using a regression model way, we then have
\[math_{ij}=\beta_{j}+e_{ij},\]
where $\beta_j$ is the average math score or intercept for each school $j$.
Because the intercepts differ, they have their own variation, or variance, across schools, which can again be captured by a regression model with an intercept only:
\[\beta_{j}=\beta_{0}+v_{j}\]
where $v_j$ is the deviation for the average score of school $j$ from the overall average $\beta_0$.
With such a specification, the variance of math can be expressed as the sum of the variance of the residuals \(e_{ij}\) -- the within-school variance -- and the variance of \(v_{j}\) -- the between-school variance. Specifically, the variance of math is approximately \(8.614 + 39.148\) = between-school variance + within-school variance.
Combining the two regressions, we have a two-level regression model. Note that the model can be written as
\[math_{ij}=\beta_{0}+v_{j}+e_{ij}.\]
The model is called a mixed-effects model: \(\beta_{0}\) is called the fixed effect, the average intercept across all schools, and \(v_{j}\) is called the random effect, the school-specific deviation from that average.
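The additive variance decomposition can be checked with a quick simulation. The sketch below (in Python rather than R, purely for illustration; the variance components are copied from the estimates quoted above, while the per-school sample size of 45 is made up) generates data from the random-intercept model and verifies that the total variance is close to the sum of the between- and within-school variances.

```python
import random
import statistics

random.seed(1)

# Simulate the two-level model math_ij = beta0 + v_j + e_ij.
# Variance components are illustrative, matching the estimates
# quoted in the text (between ~ 8.614, within ~ 39.148).
beta0 = 12.64                                  # overall average score
var_between, var_within = 8.614, 39.148
n_schools, n_per_school = 160, 45              # 45 per school is made up

scores = []
for j in range(n_schools):
    v_j = random.gauss(0.0, var_between ** 0.5)      # school deviation
    for _ in range(n_per_school):
        e_ij = random.gauss(0.0, var_within ** 0.5)  # student residual
        scores.append(beta0 + v_j + e_ij)

# The total variance should be close to the sum of the two components.
total_var = statistics.pvariance(scores)
print(round(total_var, 1))  # close to 8.614 + 39.148 = 47.762
```

With 160 simulated schools, the total variance lands within sampling error of \(8.614+39.148=47.762\).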
A multilevel model or a mixed-effects model can be estimated using the R package
lme4. Particularly, the function
lmer() should be used. The function estimates not only the fixed effect $\beta_0$ but also the random effects $v_{j}$. It is called as
lmer(math ~ 1 + (1|schoolid), data=LeeBryk). In the formula, the first "1" requests a fixed effect, the overall intercept, and
(1|schoolid) specifies a random component in the intercept that varies across schools.
The output includes the
Fixed effects, for which a t-test is also conducted for its significance. The
Random effects part includes the variance of the residuals ($e_{ij}$) and the variance of the random intercept ($v_j$).
> library(lme4)
Loading required package: Matrix
> usedata('LeeBryk')
>
> m2<-lmer(math ~ 1 + (1|schoolid), data=LeeBryk)
> summary(m2)
Linear mixed model fit by REML ['lmerMod']
Formula: math ~ 1 + (1 | schoolid)
   Data: LeeBryk

REML criterion at convergence: 47116.7

Scaled residuals:
     Min       1Q   Median       3Q      Max
-3.06294 -0.75337  0.02658  0.76060  2.74221

Random effects:
 Groups   Name        Variance Std.Dev.
 schoolid (Intercept)  8.614   2.935
 Residual             39.148   6.257
Number of obs: 7185, groups: schoolid, 160

Fixed effects:
            Estimate Std. Error t value
(Intercept)  12.6374     0.2444   51.71
>
The random-effects $v_j$ can be obtained using the function
ranef(). With them, the intercept or average math score for each school can be calculated and also plotted as shown below.
> library(lme4)
Loading required package: Matrix
> usedata('LeeBryk')
>
> m2<-lmer(math ~ 1 + (1|schoolid), data=LeeBryk)
Warning message:
'rBind' is deprecated.
 Since R version 3.2.0, base's rbind() should work fine with S4 objects
> bi<-ranef(m2)
>
> school.intercept <- bi$schoolid + 12.6374
> plot(school.intercept[,1], type='h',
+      xlab='School ID', ylab='Intercept')
> abline(h=12.6374)
>
> hist(school.intercept[,1])
>
The intraclass correlation coefficient (ICC) is defined as the ratio of the between-class variance (the variance explained by the multilevel structure) to the total variance of the outcome variable. For the example above, we have intraclass correlation coefficient
\[\tau=\frac{8.614}{8.614+39.148}=0.18.\]
In social science, it often ranges from 0.05 to 0.25. When ICC is large, it means the between-class variance cannot be ignored and therefore a multilevel model is preferred. It has been suggested that if ICC > 0.1, one should consider the use of a multilevel model.
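Since the ICC is just a ratio of the two estimated variance components, the value above can be verified in a couple of lines (Python for illustration; the values are copied from the random-intercept model output above):

```python
# Variance components from the random-intercept model above.
var_between = 8.614   # variance of the random intercepts v_j
var_within = 39.148   # residual variance of e_ij

# Intraclass correlation: share of total variance due to schools.
icc = var_between / (var_between + var_within)
print(round(icc, 2))  # 0.18
```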
We have shown the differences in the average score or intercept for each school. What causes the differences? School-level covariates, such as the average SES or whether the school is private or public, can be used to explore potential factors related to them. Using a two-level model, we can specify a model as
\begin{eqnarray*}math_{ij} & = & \beta_{j}+e_{ij}\\ \beta_{j} & = & \beta_{0}+\beta_{1}mses_{j}+\beta_{2}sector_{j}+v_{j}. \end{eqnarray*}
Using a mixed-effect model, it can be written as
\[math_{ij} = \beta_{0}+\beta_{1}mses_{j}+\beta_{2}sector_{j}+v_{j}+e_{ij},\]
where $\beta_k, k=0,1,2$ are fixed-effects parameters. Based on the output of the
lmer(), both mses and sector are significant given the t-values in the fixed effects table. Note that to get the associated p-value, the R package
lmerTest can be used.
> library(lme4)
Loading required package: Matrix
> library(lmerTest)

Attaching package: 'lmerTest'

The following object is masked from 'package:lme4':

    lmer

The following object is masked from 'package:stats':

    step

> usedata('LeeBryk')
>
> m3<-lmer(math~1 + mses + sector + (1|schoolid), data=LeeBryk)
> summary(m3)
Linear mixed model fit by REML
t-tests use Satterthwaite approximations to degrees of freedom [lmerMod]
Formula: math ~ 1 + mses + sector + (1 | schoolid)
   Data: LeeBryk

REML criterion at convergence: 46946.3

Scaled residuals:
     Min       1Q   Median       3Q      Max
-3.08323 -0.75079  0.01932  0.76659  2.78831

Random effects:
 Groups   Name        Variance Std.Dev.
 schoolid (Intercept)  2.312   1.520
 Residual             39.161   6.258
Number of obs: 7185, groups: schoolid, 160

Fixed effects:
            Estimate Std. Error       df t value Pr(>|t|)
(Intercept)  12.0994     0.1986 160.5500  60.919  < 2e-16 ***
mses          5.3336     0.3685 151.0000  14.475  < 2e-16 ***
sector        1.2196     0.3058 149.6000   3.988 0.000104 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Correlation of Fixed Effects:
       (Intr) mses
mses    0.246
sector -0.698 -0.357
>
> anova(m3)
Analysis of Variance Table of type III with Satterthwaite
approximation for degrees of freedom
       Sum Sq Mean Sq NumDF  DenDF F.value    Pr(>F)
mses   8205.1  8205.1     1  151.0 209.522 < 2.2e-16 ***
sector  622.9   622.9     1  149.6  15.907 0.0001038 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
The variable math can be predicted by certain variables such as individual SES. If we ignore the multilevel structure, we can fit a simple regression as
\[math_{ij}=\beta_{0}+\beta_{1}ses_{ij}+e_{ij}.\]
This assumes that the relationship between math and ses, the slope, is the same for all schools. However, as we showed earlier, the intercepts differ across schools, so the slopes may differ as well. In this case, the model should be written as
\[ math_{ij}=\beta_{0j}+\beta_{1j}ses_{ij}+e_{ij},\]
where \(\beta_{0j}\) and \(\beta_{1j}\) are intercept and slope for the $j$th school. We can also predict the intercept and slope using the school level covariates such as the sector of the school and the SES of the school. Then, we would have a two-level model shown below:
\begin{eqnarray*} math_{ij} & = & \beta_{0j}+\beta_{1j}ses_{ij}+e_{ij}\\ \beta_{0j} & = & \gamma_{0}+\gamma_{1}mses_{j}+\gamma_{2}sector_{j}+v_{0j}.\\ \beta_{1j} & = & \gamma_{3}+\gamma_{4}mses_{j}+\gamma_{5}sector_{j}+v_{1j}\end{eqnarray*}
The above two-level model can again be written as a mixed-effects model
\begin{eqnarray} math_{ij} & = & \beta_{0j}+\beta_{1j}ses_{ij}+e_{ij}\nonumber \\ & = & \gamma_{0}+\gamma_{1}mses_{j}+\gamma_{2}sector_{j}+v_{0j}\nonumber \\ & & +(\gamma_{3}+\gamma_{4}mses_{j}+\gamma_{5}sector_{j}+v_{1j})*ses_{ij}+e_{ij}. \end{eqnarray}
To use the R package for model estimation, we first substitute the second-level equations into the first-level equation to obtain a mixed model. In the mixed model, the $\gamma$s are fixed effects and $v_{0j}$ and $v_{1j}$ are random effects. The variances of $v_{0j}$ and $v_{1j}$ and their correlation can also be obtained, as can the variance of $e_{ij}$. These variance parameters are called random-effects parameters, while the $\gamma$s are called fixed-effects parameters. In R, each term in the mixed-effects model needs to be specified except for $e_{ij}$. The random effects are specified using the notation
|, where the terms before it are the random-effects terms and the term after it is the grouping variable or class variable.
For the math example, the R input and output are shown below.
> library(lme4)
Loading required package: Matrix
> library(lmerTest)

Attaching package: 'lmerTest'

The following object is masked from 'package:lme4':

    lmer

The following object is masked from 'package:stats':

    step

> usedata('LeeBryk')
>
> m4<-lmer(math~1 + mses + sector
+          + ses + ses*mses + ses*sector
+          + (1 + ses|schoolid), data=LeeBryk)
> summary(m4)
Linear mixed model fit by REML
t-tests use Satterthwaite approximations to degrees of freedom [lmerMod]
Formula: math ~ 1 + mses + sector + ses + ses * mses + ses * sector +
    (1 + ses | schoolid)
   Data: LeeBryk

REML criterion at convergence: 46505.1

Scaled residuals:
     Min       1Q   Median       3Q      Max
-3.14225 -0.72479  0.01375  0.75506  2.98329

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 schoolid (Intercept)  2.40703 1.5515
          ses          0.01443 0.1201   1.00
 Residual             36.75766 6.0628
Number of obs: 7185, groups: schoolid, 160

Fixed effects:
             Estimate Std. Error        df t value Pr(>|t|)
(Intercept)   12.1026     0.2028  166.0000  59.678  < 2e-16 ***
mses           3.3225     0.3885  178.0000   8.553 5.33e-15 ***
sector         1.1882     0.3081  148.0000   3.857 0.000171 ***
ses            2.9057     0.1483 4352.0000  19.595  < 2e-16 ***
mses:ses       0.8476     0.2717 3546.0000   3.119 0.001829 **
sector:ses    -1.5793     0.2246 4347.0000  -7.033 2.34e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Correlation of Fixed Effects:
           (Intr) mses   sector ses    mss:ss
mses        0.213
sector     -0.676 -0.345
ses         0.077 -0.146 -0.065
mses:ses   -0.143  0.179 -0.082  0.279
sector:ses -0.062 -0.081  0.094 -0.679 -0.357
> anova(m4)
Analysis of Variance Table of type III with Satterthwaite
approximation for degrees of freedom
            Sum Sq Mean Sq NumDF  DenDF F.value    Pr(>F)
mses        2688.9  2688.9     1  178.5   73.15 5.329e-15 ***
sector       546.8   546.8     1  148.3   14.88 0.0001707 ***
ses        14113.4 14113.4     1 4351.7  383.96 < 2.2e-16 ***
mses:ses     357.6   357.6     1 3546.5    9.73 0.0018288 **
sector:ses  1818.0  1818.0     1 4346.7   49.46 2.342e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
The parameters in the table of
Fixed effects give the estimates for $\gamma$s. Hence, we have
\begin{eqnarray*} \beta_{0j} & = & 12.10+3.32mses_{j}+1.19sector_{j}+v_{0j}\\ \beta_{1j} & = & 2.91+0.85mses_{j}-1.58sector_{j}+v_{1j} \end{eqnarray*}
Note that, based on the F-tests, both the school average SES (mses) and the sector of the school are significant predictors of both the intercept and the slope.
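To make the fitted equations concrete, the snippet below (Python for illustration; the example school's mses value is made up, and the random effects $v_{0j}$, $v_{1j}$ are set to zero) evaluates the expected intercept and slope for a hypothetical school:

```python
# Fixed-effects estimates from the equations above.
g0, g1, g2 = 12.10, 3.32, 1.19    # intercept equation coefficients
g3, g4, g5 = 2.91, 0.85, -1.58    # slope equation coefficients

# A hypothetical private Catholic school (sector = 1) with mses = 0.3;
# random effects are set to 0, i.e., we predict for a typical school
# with these covariate values.
mses, sector = 0.3, 1

beta0 = g0 + g1 * mses + g2 * sector   # expected school intercept
beta1 = g3 + g4 * mses + g5 * sector   # expected school ses slope

print(round(beta0, 3), round(beta1, 3))  # 14.286 1.585
```

Note how sector raises the expected intercept but lowers the expected ses slope, reflecting the negative sector:ses interaction.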
The parameters in the table of
Random effects give the variance estimates. Therefore, the variance of $e_{ij}$ is 36.76, the variance of $v_{0j}$ is 2.41, and the variance of $v_{1j}$ is 0.014. The estimated correlation between $v_{0j}$ and $v_{1j}$ is almost 1. Individual values of $v_{0j}$ and $v_{1j}$ can be obtained using the function
ranef().
Note that with the fixed effects and random effects, we can calculate the intercept $\beta_{0j}$ and slope $\beta_{1j}$ for each school. Then, the individual regression line can be plotted together with the scatterplot of all data.
> library(lme4)
Loading required package: Matrix
> library(lmerTest)

Attaching package: 'lmerTest'

The following object is masked from 'package:lme4':

    lmer

The following object is masked from 'package:stats':

    step

> usedata('LeeBryk')
> attach(LeeBryk)
>
> m4<-lmer(math~1 + mses + sector
+          + ses + ses*mses + ses*sector
+          + (1 + ses|schoolid), data=LeeBryk)
Warning message:
'rBind' is deprecated.
 Since R version 3.2.0, base's rbind() should work fine with S4 objects
>
> ## calculate betas
> # random effects
> rancof<-ranef(m4)$schoolid
> beta0<-beta1<-rep(0, 160)
>
> for (i in 1:160){
+   index<-min(which(schoolid==i))
+   # use the fixed-effects estimates from summary(m4):
+   # 12.1026 + 3.3225*mses + 1.1882*sector for the intercept
+   beta0[i] <- 12.1026 + 3.3225*mses[index] +
+               1.1882*sector[index] + rancof[i,1]
+   beta1[i] <- 2.9057 + 0.8476*mses[index] -
+               1.5793*sector[index] + rancof[i,2]
+ }
>
> hist(beta0)
> hist(beta1)
>
> ## plot the relationship for each school
> plot(ses, math)
>
> for (i in 1:160){
+   abline(beta0[i], beta1[i])
+ }
>
Surely yes, and in more generality, but can it be proved?
It seems that most, if not all, statements about quadratic forms representing primes fall back on algebraic number theory (i.e. splitting of primes in $\mathbb{Q}(\sqrt{7})$) for their proofs, and so are incompatible with the condition that $0 < y < x/10$.
Some related references which didn't lead to a proof: First of all there is this previous MO post, which suggests a negative answer.
There is also this paper of Iwaniec, which uses sieve methods but which also uses the multiplicative structure of solutions to the quadratic form.
There is also the interesting Theorem 5.36 of Iwaniec and Kowalski, which states that the arguments of prime elements of $\mathbb{Z}[i]$ are equidistributed in $(0, 2\pi)$. This is proved using the Hecke $L$-function $\sum_{\alpha \in \mathbb{Z}[i]} \big( \frac{\alpha}{|\alpha|} \big)^{ik} |\alpha|^{-s}$, for all $k$ divisible by 4. This generalizes further, but presumably not to real quadratic fields, where the infinite unit group would foul the construction up.
Finally, using a straight-up sieve (with only the additive structure of solutions to $x^2 - 7 y^2$) seems hopeless, as sieves tend to be bad at finding primes. There is the recent work of Friedlander-Iwaniec on $x^2 + y^4$ and Heath-Brown on $x^3 + 2y^3$, but these use algebraic number theory in $\mathbb{Q}(i)$ and $\mathbb{Q}(\sqrt{-3})$, and seem unlikely to generalize here.
I wonder if there is a promising approach out there which I have overlooked? Thank you! |
By the electrostatic induction a total charge $-q$ will appear in the inner surface of $A$ (let's call it $S_{A, int}$) with a density $\sigma_{A,int}$, and a total charge $+q$ will appear on the outer surface $S_{A,ext}$ with a density $\sigma_{A,ext}$.
Let's call the electrostatic fields generated by the charge densities $\sigma_{B}$, $\sigma_{A,int}$ and $\sigma_{A,ext}$ respectively $\bf{E}_{B}$, $\bf{E}_{\sigma_{A,int}}$ and $\bf{E}_{\sigma_{A,ext}}$.
My question is: how to prove (possibly in a rigorous way) the two following facts?
$\bf{E}_{\sigma_{A,int}}$ is such that $$ \bf{E}_{B}+\bf{E}_{\sigma_{A,int}}=0$$
everywhere outside the cavity (therefore also outside the conductor $A$).
Both $\bf{E}_{\sigma_{A,int}}$ and $\bf{E}_{\sigma_{A,ext}}$ are zero everywhere inside the cavity.
Attempts: Griffiths' Introduction to Electrodynamics proposes a similar situation in Chapter 2.5 (Example 2.9) and states that $\sigma_{A,int}$ is such that "its field cancels that of $B$, for all points exterior to the cavity".
He explains the statement saying "I cannot give you a satisfactory explanation at the moment"; nevertheless, I did not find a proper explanation of this fact anywhere in the book.
Anyway, he tries to justify the fact by saying "For that same cavity could have been carved out of a huge spherical conductor with radius of 27 light years or whatever. In that case the density $\sigma_{A,ext}$ is simply too far away to produce a significant field and the two other fields ($\bf{E}_{B}$ and $\bf{E}_{\sigma_{A,int}}$) would
have to accomplish the cancellation by themselves".
This does make sense to me, but I'm looking for a more rigorous explanation (or at least a reference where I can find one).
I would guess that, since electrostatic fields are conservative: $$\oint_{\gamma} \bf{E}_{\sigma_{A,int}} \cdot \bf{ds}=\int_{\gamma_1}\bf{E}_{\sigma_{A,int}}\cdot \bf{ds}+\int_{\gamma_2}\bf{E}_{\sigma_{A,int}}\cdot \bf{ds}=0\,\,\,\, \forall \gamma_1,\gamma_2\tag{*}$$
Where $\gamma_1$ is
any curve like the red one in the picture (connecting any two points $C$ and $D$ in the cavity and passing through the cavity) and $\gamma_2$ is any curve like the green one in the picture (hence passing inside the conductor). Since $\gamma_2$ passes through the conductor, surely $$\int_{\gamma_2}\bf{E}_{\sigma_{A,int}}\cdot \bf{ds}=0\,\,\,\,\,\forall \gamma_2$$
Therefore, from $(*)$
$$\int_{\gamma_1}\bf{E}_{\sigma_{A,int}}\cdot \bf{ds}=0 \,\,\,\,\forall \gamma_1$$
Nevertheless I'm not totally sure of the following implication
$$\int_{\gamma_1}\bf{E}_{\sigma_{A,int}}\cdot \bf{ds}=0 \,\,\,\,\forall \gamma_1\implies \bf{E}_{\sigma_{A,int}}=0 \,\,\,\,\, \mathrm{inside} \,\,\, \mathrm{the} \,\,\, \mathrm{cavity}$$
Can I come to this conclusion with this reasoning? (The same reasoning would lead to the conclusion that also $\bf{E}_{\sigma_{A,ext}}=0$ inside the cavity). |
Re: One form <-----> vector field
That makes sense, but doesn't that only work for a perfect differential df, whose coefficients are \frac{\partial f }{\partial x^i}? If df is not the differential of some function f, but instead a general 1-form, would you still write df[V] = V[f] only this...
Re: One form <-----> vector field
I got a question about L*_i. L*_i is a function that takes a form wj and maps it to a number. L*_i is also a vector. Would it be true that L*_i(wj) = wj(L*_i)? I'm used to forms being linear functions of vectors. I'm not used to vectors being linear...
The basis doesn't have to be the partials, but the partials are the most natural choice. You can convert between the basis of partials and any other basis by the vielbein. I'm not sure if this is accurate to say or not, but the basis in classical differential geometry is: e_i=\frac{\partial...
The components of a vector change when the basis is changed, but the vector does not, since a vector is something that exists without coordinates. So your question 1 is right. As for your other question, you don't count the basis when counting rank, just the indices on the component. If there...
I thought the Lie derivative is linear in both arguments. The Lie derivative L of the tensor T in the direction of the vector field X obeys: L_X (T_1+T_2)=L_X (T_1)+L_X(T_2), L_{X_1+X_2} (T)=L_{X_1} (T)+L_{X_2}(T). That has always confused me. I understand what you mean, that...
Given a smooth manifold with no other structure (like a metric), one can define a derivative for a vector field called the Lie derivative. One can also define a Lie derivative for any tensor, including covectors. Incidentally, with antisymmetric covectors (differential forms) one can define...
Thanks. According to Wikipedia, if U and V are simply connected open subsets of R^n, and f: U->V is smooth, then having the Jacobian non-singular is enough for U and V to be diffeomorphic. So I guess that's why your function from the line to the circle fails, as the circle is not simply...
What is the relationship between being globally diffeomorphic and the Jacobian of the diffeomorphism? All I can think of is that if the Jacobian at a point is non-zero, then the map is bijective around that point. For example, if: f(x)=x_0+J(x_0)(x-x_0), where J(x0) is the Jacobian...
Maybe I'm reading my book wrong, but it claims that a flow is a group of global diffeomorphisms. The flow \sigma(t,x), where t is the group parameter and x is a point on the manifold, is given by: \sigma^\mu(t,x)=e^{tX^\mu(x)}x^\mu, where X^\mu(x) is the vector field at the point x in the...
When calculating the derivative of a vector field X at a point p of a smooth manifold M, one uses the Lie derivative, which gives the derivative of X in the direction of another vector field Y at the same point p of the manifold. If the manifold is a Riemannian manifold (that is, equipped...
That makes really good intuitive sense. But for some reason the book I have defines embedding in a slightly esoteric way. Instead of topological spaces, it talks about embedding of differentiable manifolds. First it defines an immersion. Basically, a smooth map f: M -> N between manifolds...
Two topological spaces can be diffeomorphic to each other, but here we're dealing with one topological space, the real line, and two incompatible atlases that we call structures. When saying that a manifold is diffeomorphic, I think what is usually meant is that it is locally diffeomorphic to...
The definition of having multiple differentiable structures is that given two atlases, {(U_i ,\phi_i)} and {(V_j,\psi_j)} (where the open sets are the first entry and the homeomorphisms to an open subset of Rn are the second entry), that the union {(U_i,V_j;\phi_i,\psi_j)} is not necessarily...
The action of a dual f on a vector v is: f_i v^i, where the index i is summed over the dimension of the vector space. So how would it go when you write it in functional form like you did. Would \int \frac{\partial }{\partial x^\mu} dx^\nu be equal to \int v^\mu\frac{\partial }{\partial...
Re: homeomorphism
That's a good one. So the point {1} is actually an open set on the domain: [0,1]\cup (2,3]. So if you take an open set on this domain to be say {1}\cup(2,2.5), then it maps to [1,1.5), which is not open on the codomain [0,2]. So I guess this is an example of where Hurkyl...
Re: homeomorphism
I can't think of one. The book mentions f(x)=x^2 on (-a,a) maps to [0,a^2), which is not open. However, the problem with this is that f(x)=x^2 is not a bijection, so it doesn't make sense to even speak about an inverse. Can you have a continuous bijection whose inverse is...
The definition of a homeomorphism between topological spaces X, Y, is that there exists a function Y=f(X) that is continuous and whose inverse X=f-1(Y) is also continuous. Can I assume that the function f is a bijection, since inverses only exist for bijections? Also, I thought that if a...
Yeah. I got confused because of the overloading of the parameter t. So it'd be: t=t, x,y,z=x(t),y(t),z(t), instead of something like: t,x,y,z={t(s)=s},x(s),y(s),z(s). Also, it is true that for parallel transport: \partial_k <X,Y>=0, but I guess that's just a path with a direction only on the...
You're right. Your proof is much simpler than the one my instructor gave. In fact, the proof is practically tautological. The reason I made a distinction is because usually little d means a coordinate derivative and big D means a covariant derivative, and the person wrote it with a little d. I...
Do you mean: D_t <X,Y> = <D_tX,Y> + <X,D_tY>? Then this is just the Leibniz rule and there is no need for metric compatibility. In all the standard physics books the proof goes something like this: \partial_k (V^iU_i)=\partial_k (V^i) U_i+V^i\partial_k (U_i); parallel transport implies...
How do we define open sets in a topological space? In a metric space it is simple as you can define an open ball when given a metric (distance function).I read somewhere that for a manifold you determine if a set is open by defining a one-to-one map from that set to Euclidean space, and if...
In a lot of textbooks on relativity the Levi-Civita connection is derived like this: V=V^ie_i, dV=dV^ie_i+V^ide_i, dV=\partial_jV^ie_idx^j+V^i \Gamma^{j}_{ir}e_j dx^r, which after relabeling indices: dV=(\partial_jV^i+V^k \Gamma^{i}_{kj})e_i dx^j, so that the covariant derivative is...
So are you saying that for Riemannian space (i.e., positive definite distance), the metric contains the complete information about the manifold (i.e., all the coordinate charts and the set of points)? So if you are given a metric, then that uniquely determines the manifold? Also, I was...
I always thought one could define a manifold as a collection of points with a distance function or metric tensor. But in a layperson's book by Penrose, he defined a manifold as a collection of points with a rule for telling you if a function defined on the manifold is smooth. He says this is...
I was reading Penrose's layperson book, "The Road to Reality", and he says that the covariant derivative is the difference between the vector at the new point, and the old vector (at the nearby point) parallel transported to the new point. Using that interpretation, I tried to derive the...
Thanks. That was very helpful. I have a few questions. How would you define continuity? Say you are mapping two vectors A and B in the tangent plane at point p to the tangent plane in point p'. Then would it be the typical way in analysis, where gij(A-B)i(A-B)j less than delta implies the same...
Here's an example. Consider spherical coordinates in R3. Some of the basis are: \vec{e_r}=(sin \theta cos\phi, sin\theta sin\phi, cos \theta), \vec{e_\theta}=(r cos \theta cos \phi, r cos \theta \sin \phi, -r sin\theta). Now \partial_\theta\vec{e_\theta}=-r\vec{e_r} follows...
If you have the expression: px-f(x), then that doesn't tell you anything. Given an f(x), how do you find a new function? px-f(x) is written in terms of 2 variables. You want it only in terms of one variable. The Legendre transform solves it by saying max{px-f(x)}. This is a function of one...
What are the best current lower bounds for time and circuit depth for 3SAT?
As far as I know, the best known "model-independent" time lower bound for SAT is the following. Let $T$ and $S$ be the running time and space bound of any SAT algorithm. Then we must have $T \cdot S \geq n^{2 \cos(\pi/7) - o(1)}$ infinitely often. Note $2 \cos(\pi/7) \approx 1.801$. (The result that Suresh cites is a little obsolete.) This result appeared in STACS 2010, but that is an extended abstract of a much longer paper, which you can get here: http://www.cs.cmu.edu/~ryanw/automated-lbs.pdf
Of course, the above work builds on a lot of prior work which is mentioned in Lipton's blog (see Suresh's answer). Also, as the space bound S gets close to n, the time lower bound T gets close to n as well. You can prove a better "time-space tradeoff" in this regime; see Dieter van Melkebeek's survey of SAT time-space lower bounds from 2008.
If you restrict yourself to multitape Turing machines, you can prove $T \cdot S \geq n^{2-o(1)}$ infinitely often. That was proved by Rahul Santhanam, and follows from a similar lower bound that's known for PALINDROMES in this model. We believe you should be able to prove a quadratic lower bound that is "model-independent" but that has been elusive for some time.
For non-uniform circuits with bounded fan-in, I know of no depth lower bound better than $\log n$.
A partial answer: as Richard Lipton outlines in this post, the best bounds are time-space tradeoffs, that ask for a lower bound on time with space $o(n)$. The best known bound in this vein is due to Ryan Williams, who gives a bound of the form $n^c$, where $c$ is slightly more than $\sqrt{3}$.
My understanding is that, without additional assumptions, we do not have a superlinear time, as in $\Omega(n^c)$ for constant $c > 1$, lower bound for 3SAT.
My understanding is the same as Lev Reyzin. It is possible that there exists a deterministic complete algorithm for SAT which runs in space O(n) and in time O(n). It's amazing that the existence of such an efficient algorithm is not prohibited. |
Rayleigh-Bénard Convection
A model of thermal convection
Rayleigh-Bénard convection (RBC) is the buoyancy-driven flow of a fluid heated from below and cooled from above.
This model of thermal convection is a paradigm for nonlinear and chaotic dynamics, pattern formation and fully developed
turbulence (Kadanoff 2001 [1]). RBC plays an important
role in a large range of phenomena in geophysics, astrophysics, meteorology,
oceanography and engineering.
The problem under investigation is:
Given an incompressible fluid enclosed in a container heated from below and cooled from above, what are the flow dynamics? In particular, what is the heat transfer from the bottom to the top?
The mechanisms that determine the dynamics are:
Thermal diffusion: heat flux due to the temperature gradient in the box.
Buoyancy-driven convection: bulk fluid movement due to temperature differences, which change the density and thus the relative buoyancy.
Inner friction: force that resists the deformation of the fluid parcels. Its effect is very strong near the plates, where the fluid is at rest.
In the numerical simulation below, the fluid is at rest in the bulk until
convection rolls appear, which in turn are destabilized by
finger-like structures that detach from the boundary layers, called plumes.
Eventually these formations trigger the breakdown of the convection rolls and the
motion becomes chaotic/turbulent. Video by M. Zimmermann.
In the Oberbeck (1879)-Boussinesq (1903) approximation, where the density \(\rho\) is assumed to depend linearly on the temperature \(T\), the dimensionless equations of motion are:
\( \begin{cases} \frac{\partial }{\partial t}T+\mathbf{u}\cdot \nabla T=\Delta T & (1)\\ \frac{1}{Pr}\left(\frac{\partial }{\partial t}\mathbf{u}+\mathbf{u}\cdot \nabla \mathbf{u}\right)-\Delta \mathbf{u}+\nabla p=Ra T\mathbf{e_z} & (2)\\ \nabla\cdot \mathbf{u}=0 & (3)\\ \end{cases} \)
where \(\mathbf{u}\) is the velocity field and \(p\) is the pressure.
On the two plates at height \(z = 0\) and \(z = 1\), respectively, the velocity field
satisfies
no-slip boundary conditions and the temperature is \(1\) at \(z = 0\) and
\(0\) at \(z = 1\).
All the quantities \((\textbf{u},T,p)\) are assumed to be periodic in the horizontal variables \((x,y)\). From the non-dimensionalization, only two control parameters are left: the Prandtl number, \(Pr\), and the Rayleigh number, \(Ra\). By definition
\(Pr=\frac{\nu}{\kappa}\)
where \(\nu\) is the fluid's kinematic viscosity and \(\kappa\) is the thermal diffusivity and
\(Ra\propto (T_{hot}-T_{cold}) H^3\)
where \(H\) is the height of the box.
The Nusselt number
In the applications, it is of interest to measure the effectiveness of the motion.
The most natural and accepted measure that quantifies the enhancement of vertical heat flux due to convection
is the
Nusselt number, \(Nu\).
Integrating over \(z\in (0,1)\) the long-time and horizontal-space averaged heat flux \(\mathbf{u}T-\nabla T\) in the vertical direction, we obtain the mathematical definition of the Nusselt number
\(Nu=\int\limits_0^1 \langle (\mathbf{u} T-\nabla T)\cdot \mathbf{e_z}\rangle dz.\)
Since the convective fluid flow increases vertical heat transport beyond the purely conductive flux, our challenge is to determine the relationship
\(Nu=Nu(Ra,Pr) \)
from the equations of motion.
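As a sanity check on this definition, the pure conduction state (\(\mathbf{u}=0\), \(T=1-z\)) should give \(Nu=1\), since then \((\mathbf{u}T-\nabla T)\cdot \mathbf{e_z}=-\partial_z T=1\). The sketch below (Python, midpoint-rule quadrature on an illustrative grid) verifies this numerically.

```python
# Nusselt number of the pure-conduction state u = 0, T(z) = 1 - z.
# The vertical heat flux is w*T - dT/dz; here w = 0 and dT/dz = -1.
N = 1000                       # number of grid cells in z (illustrative)
dz = 1.0 / N

nu = 0.0
for k in range(N):
    z = (k + 0.5) * dz         # cell midpoint
    w, T = 0.0, 1.0 - z        # conduction state
    dTdz = -1.0                # exact derivative of T = 1 - z
    nu += (w * T - dTdz) * dz  # midpoint-rule quadrature of the flux

print(nu)  # 1.0 up to rounding: pure conduction gives Nu = 1
```

Any convective contribution \(wT>0\) on average adds to this baseline, which is why \(Nu>1\) measures the enhancement of heat transport by the flow.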
In 1954, W.V.R. Malkus [2] predicted the scaling law
\(Nu\sim Ra^{\frac{1}{3}}\)
by a
marginally stable boundary layer argument,
based on the concept that
the boundary layer thickness \(\delta\) adjusts itself so as to be,
as a convection layer, marginally stable.
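A heuristic version of this argument (a sketch, not Malkus's original derivation) runs as follows: the heat must diffuse through the boundary layers of thickness \(\delta\), so \(Nu\sim 1/\delta\), while marginal stability requires the boundary-layer Rayleigh number \(Ra_{\delta}=Ra\,\delta^{3}\) to sit at a critical value of order one. Combining the two,

\(Ra\,\delta^{3}\sim Ra_{c} \quad\Longrightarrow\quad \delta\sim Ra^{-\frac{1}{3}} \quad\Longrightarrow\quad Nu\sim \frac{1}{\delta}\sim Ra^{\frac{1}{3}}.\)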
The scaling
\(Nu\sim Pr^{\frac{1}{2}} Ra^{\frac{1}{2}}\)
has been postulated (Kraichnan (1962) [3] and Spiegel (1971) [4]) as an asymptotic regime in which the heat transfer and the strength of turbulence become independent of the kinematic viscosity and the thermal diffusivity.
Infinite Prandtl-number limit
In many situations in nature, when the fluid is very viscous (e.g., Earth's mantle and engine oils), the Prandtl number is very large. This motivates the interest in studying the limiting case in which the inertia of the fluid can be neglected:
\(\begin{cases} \frac{\partial }{\partial t}T+\mathbf{u}\cdot \nabla T=\Delta T\\ -\Delta \mathbf{u}+\nabla p=Ra T \mathbf{e_z}\\ \nabla\cdot \mathbf{u}=0\\ \end{cases}\)
In this case of infinite-Prandtl-number fluids the scaling
\(Nu\sim Ra^{\frac{1}{3}}\)
is believed to be valid in the asymptotically high-Rayleigh number regime (Grossmann & Lohse 2000 [5]).
The derivation of physically relevant upper bounds has a long history, that goes
back to the sixties (Howard (1963) [6] and Busse (1969) [7]). Several decades after,
the introduction of the
background field method (Doering & Constantin [11]) has produced significant
results. This method consists of decomposing the temperature field into a background profile
and a perturbation term,
\(T(x,y,z,t)=\tau(z)+\theta(x,y,z,t)\) , where
\(\tau(0)=1\), \(\tau(1)=0\quad\) and \(\quad\theta(x,y,0,t)=0=\theta(x,y,1,t)\).
By the background field method (BFM) the problem of finding upper bounds for \(Nu\) is
reduced to the problem of constructing a background profile \(\tau\) that satisfies a
marginal stability constraint.
Then the Dirichlet integral of a marginally stable
background temperature profile produces an upper bound
for the Nusselt number.
Here, the construction of a "good" stable background profile is possible thanks to the instantaneous linear slaving of the velocity field
to the temperature field.
Within the BFM (via marginal stability) many important results have been obtained. The first rigorous upper bound, optimal up to logarithmic correction, was proved by P. Constantin and C. Doering in [8]:
\(Nu\lesssim \big(\ln^{\frac{2}{3}}(Ra)\big) Ra^{\frac{1}{3}}.\)
Their proof relies on an \(L^{\infty}\) maximal regularity argument for the Stokes equation together
with bounds on the average Laplacian squared of the temperature.
The construction of a non-monotone background profile with a "log-layer" in the bulk is the central idea in Doering, Otto & Westdickenberg [9] to derive
\(Nu\lesssim \big(\ln^{\frac{1}{3}}(Ra)\big) Ra^{\frac{1}{3}}.\)
By refinement of the argument in [9], F. Otto and C. Seis in [10] improved the last bound obtaining
\(Nu\lesssim \big(\ln^{\frac{1}{15}}(Ra)\big) Ra^{\frac{1}{3}}.\)
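To get a sense of how these successive logarithmic corrections compare in size, here is a small illustrative computation (not from the text; the Rayleigh number is arbitrary, chosen only for the comparison):

```python
import math

# Prefactors of the bounds Nu <~ ln^p(Ra) * Ra^(1/3),
# with p = 2/3 (Constantin & Doering [8]), 1/3 (DOW [9]), 1/15 (Otto & Seis [10]).
Ra = 1e10          # an arbitrary, illustratively large Rayleigh number
L = math.log(Ra)   # natural log; a different base only changes constants
pref = {p: L ** p for p in (2/3, 1/3, 1/15)}

# each refinement shrinks the logarithmic prefactor toward 1
assert pref[2/3] > pref[1/3] > pref[1/15] > 1.0
```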
Despite the fact that the background field method has produced many significant upper bounds, it is not optimal. Indeed C. Nobili and F. Otto [in preparation] showed that the Nusselt number produced by the background field method
\( \widetilde{Nu}:=\inf\limits_{\substack{\tau:(0,1)\rightarrow \mathbb{R} \\ \tau(0)=1, \tau(1)=0}}\left\{\int_0^1\left(\frac{d\tau}{dz}\right)^2dz\quad \vert \;\;\tau \text{ marginally stable }\right\}\)
can be bounded from below by \(\big(\text{ln}^\frac{1}{15}\left(Ra\right)\big) Ra^\frac{1}{3}\).
In particular this implies that \(\widetilde{Nu} \sim \big(\text{ln}^\frac{1}{15}\left(Ra\right)\big) Ra^\frac{1}{3} \) and
no better bound can be produced by the background field method.
Nevertheless, background stability can be combined with other methods to produce better bounds. Indeed, by the additional use of \(L^{\infty}\) maximal regularity for the Stokes equation, F. Otto and C. Seis [10] obtained the upper bound
\(Nu\lesssim \Big(\ln^{\frac{1}{3}}\big(\ln(Ra)\big)\Big) Ra^{\frac{1}{3}},\)
which is, up to now, the best known.
Finite Prandtl numbers
When \(Pr\) is finite, the equation for the temperature is coupled with the full Navier-Stokes equation for the velocity field, i.e.,
\(\frac{1}{Pr}\left(\frac{\partial \mathbf{u}}{\partial t}+\mathbf{u}\cdot \nabla \mathbf{u}\right)-\Delta \mathbf{u}+\nabla p=Ra T \mathbf{e_z},\)
\(\nabla\cdot \mathbf{u}=0.\)
As opposed to the case \(Pr=\infty\), here a major difficulty comes from the fact that the velocity and the temperature field are not
instantaneously slaved to each other.
Our current interest is in deriving rigorous upper bounds for \(Nu\)
that reproduce both physical scalings \(Nu\sim Ra^{\frac{1}{3}}\) and
\(Nu\sim Ra^{\frac{1}{2}}\) in some parameter regimes, up to logarithms.
In 1996 C. Doering and P. Constantin [11] applied the background field method to the problem, finding
\(Nu\lesssim c Ra^{\frac{1}{2}}\)
for no-slip (or stress-free) boundary conditions.
Up to now, the best rigorous upper bound for large but finite Prandtl number was proved by X. Wang in 2006 [12]. With a perturbation argument on the Stokes equation, the author proves
\(Nu\lesssim \big(\ln^{\frac{2}{3}}(Ra)\big) Ra^{\frac{1}{3}} \;\; \mbox{ for }\;\; Pr\geq c_0 Ra\)
where \(c_0\) depends only on the aspect ratio of the domain.
Combining (logarithmically failing) maximal regularity estimates in \(L^{\infty}\) and \(L^1\) for the nonstationary Stokes equation, with force terms given by the buoyancy term and the nonlinear term respectively, A. Choffrut, C. Nobili and F. Otto proved [in preparation] that
\( Nu \lesssim \begin{cases} \big( \text{ln}^\frac{1}{3}(Ra) \big) Ra^\frac{1}{3} &\;\; \mbox{ for }\;\; Pr \gtrsim \big( \text{ln}^\frac{1}{3}(Ra) \big) Ra^\frac{1}{3}\;, \\ \big( \text{ln}^\frac{1}{2}(Ra) \big) Pr^{-\frac{1}{2}} Ra^\frac{1}{2} &\;\; \mbox{ for }\;\; Pr \lesssim \big( \text{ln}^\frac{1}{3}(Ra) \big) Ra^\frac{1}{3}\;. \\ \end{cases} \)
The result on the one hand improves the bound in [12] and on the other hand captures a regime where the inertial effects are so strong that the convective nonlinearity is not negligible. In particular, the notion of solution for the Navier-Stokes equation is that of Leray.
[1] Kadanoff, L. P. Turbulent heat flow: structures and scaling. Phys. Today 54 (8) (2001), 34-39.
[2] Malkus, W. V. R. The heat transport and spectrum of thermal turbulence. Proc. R. Soc. Lond. A 225 (1954), 196-212.
[3] Kraichnan, R. H. Turbulent thermal convection at arbitrary Prandtl number. Phys. Fluids 5 (1962), 1374.
[4] Spiegel, E. A. Convection in stars I. Basic Boussinesq convection. Ann. Rev. Astron. Astrophys. 98 (1971), 323.
[5] Grossmann, S. and Lohse, D. Scaling in thermal convection: a unifying theory. J. Fluid Mech. 407 (2000), 27.
[6] Howard, L. Heat transport by turbulent convection. J. Fluid Mech. 17 (1963), 405.
[7] Busse, F. H. Fundamentals of thermal convection. In: Peltier, W. (ed.), Mantle Convection: Plate Tectonics and Global Dynamics, Gordon and Breach (1989), 23.
[8] Doering, C. R. and Constantin, P. Infinite Prandtl number convection. J. Stat. Phys. 94 (1-2) (1999), 159.
[9] Doering, C. R., Otto, F. and Westdickenberg, M. G. Bounds on vertical heat transport for infinite-Prandtl-number Rayleigh-Bénard convection. J. Fluid Mech. 560 (2006), 229.
[10] Otto, F. and Seis, C. Rayleigh-Bénard convection: improved bounds on the Nusselt number. J. Math. Phys. 52 (8) (2011).
[11] Doering, C. R. and Constantin, P. Variational bounds on energy dissipation in incompressible flows: III. Convection. Phys. Rev. E 53 (1996), 5957.
[12] Wang, X. Bound on vertical heat transport at large Prandtl number. Comm. Pure Appl. Math. 61 (2008), 789.
I've a question about the effect of aliasing on the magnitude of autocorrelations. From a simulation in MATLAB, I don't see any effect of aliasing or any need to anti-alias filter when I take the magnitude of the autocorrelation. Which means I can undersample the data and then take the autocorrelation. There is a paper "Effects of Aliasing on Spectral Moment Estimates Derived from the Complete Autocorrelation Function" which says something like what I claim. Would anybody please let me know if I've made a mistake?
Decimating before calculating the autocorrelation, in the presence of noise, is inferior to calculating the autocorrelation using the full dataset. Assume that the signal of interest is embedded in white noise. The vector $x[n], n = 0, 1, ..., N-1$ consists of samples from a discrete random process. The autocorrelation function of the vector $x[n]$ is:
$$ A_x[k] = \frac{1}{N-k} \sum_{i=0}^{N-1-k} x[i]x[i+k] $$
That is, $k$ is the lag used for the autocorrelation calculation. In your proposed scenario, you are decimating the autocorrelation function output by a factor $D$ (i.e. you are only calculating the function for lags $0, D, 2D, ...$) and comparing that result to the autocorrelation function of $x[n]$ decimated by the same factor $D$. Let $x_d[n]$ be the decimated sequence; its autocorrelation function is:
$$ A_{x_d}[k] = \frac{D}{N-k} \sum_{i=0}^{\frac{N-1-k}{D}} x[iD]x[(i+k)D] $$
(for simplicity here, I have assumed that $D$ is a factor of $N$ in the above equation)
Your inquiry can be written as:
$$ A_x[kD] \stackrel{?}{\approx} A_{x_d}[k] $$
$$ \frac{1}{N-kD} \sum_{i=0}^{N-1-kD} x[i]x[i+kD] \stackrel{?}{\approx} \frac{D}{N-k} \sum_{i=0}^{\frac{N-1-k}{D}} x[iD]x[(i+k)D] $$
Looking at this qualitatively, the summation on the left hand side has more terms than its counterpart on the right side. If $x[n]$ is second-order stationary, then the expected value of each term in each sum is the same; the act of averaging multiple samples that have the same expected value increases the signal to noise ratio. Stated a little differently, you can think of the terms in each sum as samples from a new random process:
$$ y[n] = x[n]x[n+kD] $$
Since the noise present in $x[n]$ is white, the expected value of $y[n]$ is the true autocorrelation of the signal of interest at lag $kD$. Therefore, we would like to accurately estimate the expected value of $y[n]$. Our method for doing so is by calculating a sample average; it can easily be shown that the variance in the sample average estimator decreases given a larger sample size, converging to the actual expected value as the number of samples tends to $\infty$.
So, if there is white noise present in the signal (which is often the case), you're going to get a better estimate of the underlying signal's second-order statistics by using a larger sample size in the calculation (this might sound intuitively obvious). In the context of your two approaches, this is accomplished by using the full, non-decimated signal in the autocorrelation calculation and decimating afterward (i.e. only calculating the result for certain lag values).
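The variance argument can be checked with a short simulation; the sketch below is illustrative Python (all parameters made up), not part of the original answer. For white noise, the full-data estimate of the lag-$kD$ autocorrelation has markedly lower variance than the decimate-first estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 4096, 4
lag = 8                      # a lag that is a multiple of D, so k = lag // D = 2
full_est, dec_est = [], []
for _ in range(200):
    x = rng.standard_normal(N)   # white noise: true autocorrelation at lag 8 is 0
    # estimate A_x[lag] from the full sequence
    full_est.append(np.mean(x[:-lag] * x[lag:]))
    # decimate first, then estimate the same quantity at lag k = lag // D
    xd = x[::D]
    k = lag // D
    dec_est.append(np.mean(xd[:-k] * xd[k:]))

# both estimators are unbiased, but the full-data one averages roughly D times
# as many terms, so its variance is roughly D times smaller
assert np.var(full_est) < np.var(dec_est)
```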
Seems a little odd to me. The Matlab script below compares the "downsampled autocorrelation" to the "autocorrelation of the downsampled signals". For dual sine waves this actually comes pretty close (relative error of about -50dB) but for white noise it's simply wrong (relative error > +6 dB). While there may be some computational advantage, it's not clear to me how useful the downsampled autocorrelation is even in the dual sine wave case. The peaks in the spectrum still show up in the wrong place.
% script to check whether autocorrelation is immune to aliasing
% create two sine waves at 18k and 21k (assuming sample rate of 44.1k)
n = 8192;
t = (0:n-1)'/44100;
x = sin(2*pi*t*21000)+sin(2*pi*t*18000);
% calculate autocorrelation of original signal and one that's downsampled
% by 4 and thus heavily aliased
y = xcorr(x,x);
y2 = xcorr(x(1:4:end),x(1:4:end));
d = y(4:4:end)-4*y2;
% calculate the error in dB
err = 10*log10(sum(d.^2)./sum(y2.^2));
fprintf('Dual sine wave relative error = %6.2f dB\n',err);
%% try the same thing for white noise
x = 2*rand(n,1)-1;
y = xcorr(x,x);
y2 = xcorr(x(1:4:end),x(1:4:end));
d = y(4:4:end)-4*y2;
err = 10*log10(sum(d.^2)./sum(y2.^2));
fprintf('White noise relative error = %6.2f dB\n',err);
For specific types of inputs the effect of frequency aliasing on the magnitude of autocorrelations may be negligible. However, I don't think this will be true in general.
For instance, for a bandlimited input or for white noise the under-sampling will not impact the shape of the autocorrelation (although it might change the scaling in a predictable way). The autocorrelation of white noise is a delta, and it will remain a delta if down-sampled.
Now, the power spectrum is related to the autocorrelation by the Fourier transform. So if your claim were true, it seems you could also claim that frequency aliasing does not change the frequency content of the input, and that is not true. But there might be exceptions (special cases).
A function assigns to each element of a set exactly one element of a related set. Functions find their application in various fields like representation of the computational complexity of algorithms, counting objects, and the study of sequences and strings, to name a few. The third and final chapter of this part highlights the important aspects of functions.
A function or mapping (denoted $f: X \rightarrow Y$) is a relationship from elements of one set X to elements of another set Y (X and Y are non-empty sets). X is called the Domain and Y the Codomain of the function ‘f’.
Function ‘f’ is a relation on X and Y such that for each $x \in X$, there exists a unique $y \in Y$ such that $(x,y) \in f$. ‘x’ is called the pre-image and ‘y’ the image under the function f.
A function can be one to one or many to one but not one to many.
A function $f: A \rightarrow B$ is injective or one-to-one if for every $b \in B$, there exists at most one $a \in A$ such that $f(a) = b$.
This means a function
f is injective if $a_1 \ne a_2$ implies $f(a_1) \ne f(a_2)$.
$f: N \rightarrow N, f(x) = 5x$ is injective.
$f: N \rightarrow N, f(x) = x^2$ is injective.
$f: R\rightarrow R, f(x) = x^2$ is not injective as $(-x)^2 = x^2$
A function $f: A \rightarrow B$ is surjective (onto) if the image of f equals its codomain. Equivalently, for every $b \in B$, there exists some $a \in A$ such that $f(a) = b$. This means that for any y in B, there exists some x in A such that $y = f(x)$.
$f : Z \rightarrow Z, f(x) = x + 2$ is surjective.
$f : R \rightarrow R, f(x) = x^2$ is not surjective since we cannot find a real number whose square is negative.
A function $f: A \rightarrow B$ is bijective or one-to-one correspondent if and only if
f is both injective and surjective.
Prove that a function $f: R \rightarrow R$ defined by $f(x) = 2x – 3$ is a bijective function.
Explanation − We have to prove this function is both injective and surjective.
If $f(x_1) = f(x_2)$, then $2x_1 – 3 = 2x_2 – 3 $ and it implies that $x_1 = x_2$.
Hence, f is
injective.
Here, $2x – 3 = y$
So, $x = (y+3)/2$ which belongs to R and $f(x) = y$.
Hence, f is
surjective.
Since
f is both surjective and injective, we can say f is bijective.
The
inverse of a one-to-one corresponding function $f : A \rightarrow B$, is the function $g : B \rightarrow A$, holding the following property −
$f(x) = y \Leftrightarrow g(y) = x$
The function f is called
invertible, if its inverse function g exists.
A Function $f : Z \rightarrow Z, f(x)=x+5$, is invertible since it has the inverse function $ g : Z \rightarrow Z, g(x)= x-5$.
A Function $f : Z \rightarrow Z, f(x)=x^2$ is not invertible since it is not one-to-one, as $(-x)^2=x^2$.
Two functions $f: A \rightarrow B$ and $g: B \rightarrow C$ can be composed to give a composition $g o f$. This is a function from A to C defined by $(g o f)(x) = g(f(x))$.
Let $f(x) = x + 2$ and $g(x) = 2x + 1$, find $( f o g)(x)$ and $( g o f)(x)$.
$(f o g)(x) = f (g(x)) = f(2x + 1) = 2x + 1 + 2 = 2x + 3$
$(g o f)(x) = g (f(x)) = g(x + 2) = 2 (x+2) + 1 = 2x + 5$
Hence, $(f o g)(x) \neq (g o f)(x)$
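The two compositions above can be checked mechanically; the following Python sketch (the `compose` helper is ours, purely illustrative) evaluates both orders:

```python
# Hypothetical helper: compose two Python functions, (g o f)(x) = g(f(x))
def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x + 2        # f(x) = x + 2
g = lambda x: 2 * x + 1    # g(x) = 2x + 1

fog = compose(f, g)        # (f o g)(x) = f(g(x)) = 2x + 3
gof = compose(g, f)        # (g o f)(x) = g(f(x)) = 2x + 5

assert fog(0) == 3 and gof(0) == 5
assert fog(1) != gof(1)    # composition is not commutative here
```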
If f and g are one-to-one then the function $(g o f)$ is also one-to-one.
If f and g are onto then the function $(g o f)$ is also onto.
Composition is always associative but is not commutative in general.
L # 1
Show that
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Last edited by krassi_holmz (2006-03-09 02:44:53)
IPBLE: Increasing Performance By Lowering Expectations.
L # 2
If
Let
log x = x', log y = y', log z = z'. Then:
x'+y'+z'=0.
Rewriting in terms of x' gives:
Well done, krassi_holmz!
L # 3
If x²y³=a and log (x/y)=b, then what is the value of (logx)/(logy)?
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b) / (log a - 2b).
Last edited by krassi_holmz (2006-03-10 20:06:29)
Very well done, krassi_holmz!
L # 4
You are not supposed to use a calculator or log tables for L # 4. Try again!
Last edited by JaneFairfax (2009-01-04 23:40:20)
No, I didn't
I remember
You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again:
no calculators or log tables to be used (directly or indirectly) at all!!
Last edited by JaneFairfax (2009-01-06 00:30:04)
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b) / (log a - 2b)
Hi ganesh
for L # 1: since log_b(a) = 1/log_a(b) and log_a(a) = 1, we have
1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = log_abc(a) + log_abc(b) + log_abc(c) = log_abc(abc) = 1.
Best Regards
Riad Zaidan
Hi ganesh
for L # 2 I think that the following proof is easier:
Assume log(x)/(b-c) = log(y)/(c-a) = log(z)/(a-b) = t.
So log(x) = t(b-c), log(y) = t(c-a), log(z) = t(a-b).
So log(x) + log(y) + log(z) = tb - tc + tc - ta + ta - tb = 0.
So log(xyz) = 0, so xyz = 1. Q.E.D.
Best Regards
Riad Zaidan
Gentlemen,
Thanks for the proofs.
Regards.
\log_2(16) = \log_2 \left( \frac{64}{4} \right) = \log_2(64) - \log_2(4) = 6 - 2 = 4
\log_2(\sqrt[3]{4}) = \frac{1}{3} \log_2(4) = \frac{2}{3}
L # 4
I don't want a method that will rely on defining certain functions, taking derivatives,
noting concavity, etc.
Change of base:
Each side is positive, and multiplying by the positive denominator
keeps whatever direction of the alleged inequality the same direction:
On the right-hand side, the first factor is equal to a positive number less than 1,
while the second factor is equal to a positive number greater than 1. These facts are by inspection combined with the nature of exponents/logarithms.
Because of (log A)B = B(log A) = log(A^B), I may turn this into:
I need to show that
Then
Then 1 (on the left-hand side) will be greater than the value on the
right-hand side, and the truth of the original inequality will be established.
I want to show
Raise a base of 3 to each side:
Each side is positive, and I can square each side:
-----------------------------------------------------------------------------------
Then I want to show that when 2 is raised to a number equal to
(or less than) 1.5, then it is less than 3.
Each side is positive, and I can square each side:
Last edited by reconsideryouranswer (2011-05-27 20:05:01)
Signature line:
I wish I had a more interesting signature line.
Hi reconsideryouranswer,
This problem was posted by JaneFairfax. I think it would be appropriate she verify the solution.
Hi all,
I saw this post today and saw the probs on log. Well, they are not bad, they are good. But you can also try these problems here by me (Credit: to a book):
http://www.mathisfunforum.com/viewtopic … 93#p399193
Practice makes a man perfect.
There is no substitute to hard work All of us do not have equal talents but everybody has equal oppurtunities to build their talents.-APJ Abdul Kalam
JaneFairfax, here is a basic proof of L4:
For all real a > 1, y = a^x is a strictly increasing function.
log(base 2)3 versus log(base 3)5
2*log(base 2)3 versus 2*log(base 3)5
log(base 2)9 versus log(base 3)25
2^3 = 8 < 9, so 2^(>3) = 9, i.e. log(base 2)9 > 3
3^3 = 27 > 25, so 3^(<3) = 25, i.e. log(base 3)25 < 3
So, the left-hand side is greater than the right-hand side, because
its logarithm is the larger number.
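As a sanity check only (the problem forbids calculators, so this is verification rather than a proof), the comparison can be confirmed numerically:

```python
import math

lhs = math.log(3, 2)   # log_2 3
rhs = math.log(5, 3)   # log_3 5
assert lhs > rhs

# the doubled form used above: log_2 9 > 3 while log_3 25 < 3
assert math.log(9, 2) > 3 > math.log(25, 3)
```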
I am doing an Intermediate Microeconomics class... in a 2*2*2*2 economy, I know that MRS (marginal rate of substitution) is supposed to be equal to MRTS (marginal rate of technical substitution) in order for it to be a competitive equilibrium. Why is this? I understand that supply is supposed to be equal to demand... Thank you!
Equilibrium conditions will require, among other things:
1.) $MRS_{a,b}^i=MRS_{a,b}^j \;\;\forall\, i,j \in N$, where $N$ is the set of agents
2.) $MRTS_{x,y}^i=MRTS_{x,y}^j \;\;\forall\, i,j \in J$, where $J$ is the set of firms
3.) $MRT=MRS$ (Assuming that 1,2 hold)
And I note that 1,2 are efficiency conditions. 3 is efficiency in the output market.
Why? Suppose 1,2 hold at:
$MRS^i_{y,x}=3$
$MRT_{y,x}=2$
Obviously, condition 3 does not hold.
Can you see why this cannot be an equilibrium outcome? Think about what should happen in the economy and the forces that will drive this economy toward a state s.t. condition 3 holds.
So:
$MRT_{x,y}=\frac{MC_x}{MC_y} \equiv \frac{P_x}{P_y} = MRS$
where I assume 1 holds so MRS is same for all agents. |
I've confused myself about a rather trivial point. I could write the Lagrangian of the Dirac equation as
$\mathcal{L} = \frac{\mathrm{i}}{2} \left( \bar \psi \gamma^\nu \partial_\nu \psi + \mathrm{c.c.} \right)$
which, for all I can tell is the same as
$\frac{\mathrm{i}}{2}\, \partial_\nu \left( \bar \psi \gamma^\nu \psi \right)$
So, assuming the current vanishes sufficiently fast at infinity, the volume integral should always vanish, regardless of what $\psi$ is. But that can't be because that would mean that, according to the principle of least action, literally all wave-functions would be solutions (and they'd all be stable under variation too). |
Cone Angle
When a sample is inserted at the focus of a convergent beam (at the focal plane), then biconical transmittance should be calculated:
\[T_b=\frac{\text{total flux in output beam}}{\text{total flux in input beam}}.\]
The biconical transmittance \(T_b\) can be calculated as follows:
\[ T_b=\frac{\int_{\varphi=0}^{2\pi}\int_{\xi=0}^{\xi_{\max}} I(\xi,\varphi)T(\xi,\varphi)\sin\xi\, d\xi\, d\varphi}{\int_{\varphi=0}^{2\pi}\int_{\xi=0}^{\xi_{\max}} I(\xi,\varphi)\sin\xi\, d\xi\, d\varphi}, \]
where the angles \(\xi\) and \(\varphi\) are the polar and azimuthal angles of a spherical coordinate system, and \(\xi_{\max}\) is the maximum semi-angle of the convergent cone.
In the case of a Lambertian source, \(I(\xi,\varphi)=I_0\cos\xi\), and the equation simplifies to:
\[ T_b=(\pi \sin^2\xi_{\max})^{-1}\int_{\varphi=0}^{2\pi}\int_{\xi=0}^{\xi_{\max}}T(\xi,\varphi)\sin\xi\cos\xi\, d\xi\, d\varphi.\]
In the case when the principal ray of the cone is perpendicular to the multilayer sample the formula can be further simplified:
\[T_b=2(\sin\xi_{\max})^{-2}\int_{\xi=0}^{\xi_{\max}} T(\xi)\sin\xi\cos\xi d\xi\]
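As an illustration of the last formula, the following Python sketch (not OptiLayer code; the transmittance function and grid size are made up) evaluates the normal-incidence Lambertian average by simple trapezoidal quadrature:

```python
import math

# Sketch of the simplified formula above: biconical transmittance for a
# Lambertian source at normal incidence,
#   T_b = 2 / sin^2(xi_max) * integral_0^{xi_max} T(xi) sin(xi) cos(xi) d(xi).
# T(xi) and the grid size n are illustrative assumptions.

def cone_average(T, xi_max, n=20):
    # composite trapezoidal rule on a uniform angular grid of n intervals
    h = xi_max / n
    s = 0.0
    for i in range(n + 1):
        xi = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * T(xi) * math.sin(xi) * math.cos(xi)
    return 2.0 * s * h / math.sin(xi_max) ** 2

# a constant angular transmittance must average to itself
assert abs(cone_average(lambda xi: 0.9, math.radians(10)) - 0.9) < 1e-3
```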
The Cone Angle database of OptiLayer allows you to create cone angle averaging specifications. When a Cone Angle is loaded into memory, all computations of Transmittance, Reflectance, Back Reflectance, and Absorptance take cone angle averaging into account. All synthesis procedures will also take cone averaging into account. The only exception to this rule is a target with UDT characteristics.
The cone can be specified with Half-angle (in degrees), f/number, or Numerical aperture. Computations are performed on an angular grid; as the number of points grows, the accuracy improves, but the computational time grows proportionally. OptiLayer uses sophisticated high-precision integration procedures, so in most cases 10-20 points are quite sufficient.
The type of distribution can be Uniform Intensity, Lambertian, or User-Defined. In the last case the Cone Angle Editor allows you to specify the number of Angular points for the Cone Angle Intensity distribution and the spreadsheet itself. The number of angular points need not coincide with the Cone averaging grid parameter; OptiLayer will perform all necessary interpolation automatically.
In the ACTIVE study, we have data on verbal ability at two times (hvltt and hvltt2). To investigate the relationship between them, we can generate a scatterplot using hvltt and hvltt2 as shown below. From the plot, it is easy to observe that those with a higher score on hvltt tend to have a higher score on hvltt2. Certainly, it is not difficult to find two people where the one with higher score on hvltt has a lower score on hvltt2. But in general hvltt and hvltt2 tend to go up and down together.
Furthermore, the association between hvltt and hvltt2 can apparently be described by a straight line. The red line in the figure is one that runs through the points in the scatter plot. The straight line can be expressed as
\[hvltt2 = 5.19 + 0.73*hvltt\]
It says that multiplying one's test score (hvltt) at the first occasion by 0.73 and adding 5.19 predicts his/her test score (hvltt2) at the second occasion.
> usedata('active')
> attach(active)
> plot(hvltt, hvltt2)
> abline(5.19, 0.73, col='red')
A regression model / regression analysis can be used to predict the value of one variable (the dependent or outcome variable) on the basis of other variables (the independent or predictor variables). Typically, we use \(y\) to denote the dependent variable and \(x\) to denote an independent variable.
The simple linear regression model for the population can be written as
\[y = \beta_0 + \beta_1 x + \epsilon\]
where
In practice, the population parameters \(\beta_0\) and \(\beta_1\) are unknown. However, with a set of sample data, they can be estimated. One often-used estimation method for regression analysis is the least squares method. We explain the method here.
Let the model below represent the estimated regression model:
\[ y = b_0 + b_1 x + e \]
where
Assume we already get the estimates \(b_0\) and \(b_1\), then we can make a prediction of the value of \(y\) based on the value of \(x\). For example, for the \(i\)th participant, if his/her score on the independent variable is \(x_i\), then the corresponding predicted outcome \(\hat{y}_i\) based on the regression model is
\[\hat{y}_i = b_0 + b_1 x_i.\]
The difference between the predicted value and the observed data for this participant, also called the prediction error, is
\(e_i = y_i - \hat{y}_i\).
The parameters \(b_0\) and \(b_1\) can potentially take any values. A good choice is the values that make the errors smallest, which leads to the least squares estimation.
The least squares estimation obtains \(b_0\) and \(b_1\) by minimizing the sum of the squared prediction errors. Therefore, such a set of parameters makes \(\hat{y}_i\) closest to \(y_i\) on average. Specifically, one minimizes SSE as
\(SSE = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} (y_i - b_0 - b_1 x_i)^2.\)
The values of \(b_0\) and \(b_1\) that minimize SSE can be calculated by
\[b_1 = \frac{S_{xy}}{S_x^2} = \frac{ \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) }{ \sum_{i=1}^{n} (x_i - \bar{x})^2 }\]
and
\[ b_0 = \bar{y} - b_1 \bar{x} \]
where \(\bar{x} = \sum_{i=1}^{n} x_i / n\) and \(\bar{y} = \sum_{i=1}^{n} y_i / n\).
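The closed-form estimates above can be computed directly; here is a minimal Python sketch (with made-up data, not the ACTIVE study) implementing the two formulas:

```python
# Least-squares estimates from the formulas above:
#   b1 = sum((x_i - xbar)(y_i - ybar)) / sum((x_i - xbar)^2),  b0 = ybar - b1*xbar
def least_squares(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    b0 = ybar - b1 * xbar
    return b0, b1

# illustrative data lying exactly on the line y = 1 + 2x
b0, b1 = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
assert abs(b0 - 1) < 1e-9 and abs(b1 - 2) < 1e-9
```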
If no linear relationship exists between the two variables, we would expect the regression line to be horizontal, that is, to have a slope of zero. Therefore, we can test the hypothesis that the population regression slope is 0. More specifically, we have the null hypothesis
\[ H_0: \beta_1 = 0.\]
And the alternative hypothesis is
\[ H_1: \beta_1 \neq 0.\]
To conduct the hypothesis testing, we calculate a t test statistic
\[ t = \frac{b_1 - \beta_1}{s_{b_1}} \]
where \(s_{b_1}\) is the standard error of \(b_1\).
If \(\beta_1 = 0\) in the population, then the statistic t follows a t-distribution with degrees of freedom \(n-2\).
Both the parameter estimates and the t test can be conducted using an R function
lm().
To show how to conduct a simple linear regression, we analyze the relationship between hvltt and hvltt2 from the ACTIVE study. The R input and output for the regression analysis is given below.
In lm(), the first required input is a "formula" to specify the model. A model formula is formed using the "~" operator. For a regression, we use y ~ x, which is interpreted as: the outcome variable y is modelled by a linear predictor x. Therefore, for the regression analysis in this specific example, we use the formula hvltt2 ~ hvltt.
The fitted model is saved in an object, here named regmodel. To show the basic results, including the estimated regression intercept and slope, one can type the name of the object directly in the R console. To obtain more detailed output, the summary function can be applied to the regression object.
The estimated intercept is 5.1853 and the estimated slope is 0.7342. Their corresponding standard errors are 0.5463 and 0.0200, respectively. The t statistic for the slope is 36.719. Based on the t distribution with 1573 degrees of freedom, the p-value is < 2e-16. Therefore, one would reject the null hypothesis if the significance level 0.05 is used.
> usedata("active")
>
> regmodel <- lm(hvltt2 ~ hvltt, data=active)
> regmodel

Call:
lm(formula = hvltt2 ~ hvltt, data = active)

Coefficients:
(Intercept)        hvltt
     5.1854       0.7342

> summary(regmodel)

Call:
lm(formula = hvltt2 ~ hvltt, data = active)

Residuals:
     Min       1Q   Median       3Q      Max
-14.6671  -2.5408   0.2565   2.7250  13.5987

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   5.1853     0.5463   9.492   <2e-16 ***
hvltt         0.7342     0.0200  36.719   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.951 on 1573 degrees of freedom
Multiple R-squared:  0.4615,    Adjusted R-squared:  0.4612
F-statistic:  1348 on 1 and 1573 DF,  p-value: < 2.2e-16
A significance test shows whether a linear relationship exists, but not how strong the relationship is. For that, an effect size measure is needed.
\(R^2\) is an effect size measure. Cohen (1988) defined some standards for interpreting \(R^2\).
For the regression example, the \(R^2 = 0.4615\), representing a large effect.
Simply speaking, the \(R^2\) tells the percentage of variance in \(y\) that can be explained by the regression model or by \(x\).
The variance of y is
\[ Var(y) = \sum_{i=1}^n (y_i - \bar{y})^2. \]
With some simple re-arrangement (the cross-product term vanishes for the least squares estimates), we have
\begin{align*}
Var(y) & = \sum_{i=1}^n (y_i - \bar{y})^2 \\ &= \sum_{i=1}^n (y_i - \hat{y}_i + \hat{y}_i - \bar{y})^2\\ &= \sum_{i=1}^n (y_i - \hat{y}_i)^2 + \sum_{i=1}^n (\hat{y}_i - \bar{y})^2\\ &= \text{Residual variance + Variance explained by regression}. \end{align*}
And the \(R^2\) is
\[ R^2 = \frac{\sum_{i=1}^n (\hat{y}_i - \bar{y})^2}{Var(y)} = 1 - \frac{\sum_{i=1}^n (y_i - \hat{y}_i)^2}{Var(y)}. \]
This can be illustrated using the figure below. |
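The decomposition and the two equivalent expressions for \(R^2\) can be verified numerically; the following Python sketch uses made-up data (not the ACTIVE study):

```python
# Verify SST = SSE + SSR and R^2 = SSR/SST = 1 - SSE/SST for a least-squares fit.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.9]   # illustrative, roughly linear data
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# least-squares slope and intercept
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar
yhat = [b0 + b1 * xi for xi in x]

sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))   # residual sum of squares
ssr = sum((yh - ybar) ** 2 for yh in yhat)             # explained sum of squares
sst = sum((yi - ybar) ** 2 for yi in y)                # total sum of squares

assert abs(sst - (sse + ssr)) < 1e-9   # the decomposition above
r2_a = ssr / sst
r2_b = 1 - sse / sst
assert abs(r2_a - r2_b) < 1e-9         # both forms of R^2 agree
```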
Statistics: Variance and Standard Deviation
Standard deviation: \sigma = \sqrt{\text{variance}}
For an individual series (general method, actual mean): \sigma = \sqrt{\frac{\sum (x_i - M)^{2}}{n}}
For an individual series (shortcut method, actual mean): \sigma = \sqrt{\frac{\sum x_i^{2}}{n} - \left(\frac{\sum x_i}{n}\right)^{2}}
For an individual series (assumed mean): \sigma = \sqrt{\frac{\sum {d_i}^{2}}{n} - \left(\frac{\sum d_i}{n}\right)^{2}} where d_i = x_i - A
For an individual series (assumed mean, step deviation): \sigma = h \sqrt{\frac{\sum {d_i}^{2}}{n} - \left(\frac{\sum d_i}{n}\right)^{2}} where d_i = \frac{x_i - A}{h}
For grouped data (actual mean): \sigma = \sqrt{\frac{\sum f_i \ {x_i}^{2}}{N} - \left(\frac{\sum f_i \ x_i}{N}\right)^{2}} where N = \sum f_i
For grouped data (assumed mean / shortcut method): \sigma = \sqrt{\frac{\sum f_i \ {d_i}^{2}}{N} - \left(\frac{\sum f_i \ d_i}{N}\right)^{2}} where d_i = x_i - A
For grouped data (assumed mean / shortcut method, step deviation): \sigma = h \sqrt{\frac{\sum f_i \ {d_i}^{2}}{N} - \left(\frac{\sum f_i \ d_i}{N}\right)^{2}} where d_i = \frac{x_i - A}{h}
For a symmetrical distribution: Mean deviation = \frac{4}{5} (standard deviation)
Coefficient of standard deviation = \frac{\sigma}{M} where \sigma = S.D. and M = Mean
1. Variance and standard deviation for ungrouped data
\sigma^{2}=\frac{1}{n}\sum (x_{i}-\overline{x})^2, \ \ \ \sigma = \sqrt{\frac{1}{n} \sum (x_{i}-\overline{x})^2}
2. Variance and standard deviation of a discrete frequency distribution
\sigma^{2}=\frac{1}{N}\sum f_i(x_{i}-\overline{x})^2, \ \ \ \sigma = \sqrt{\frac{1}{N}\sum f_i(x_{i}-\overline{x})^2}
3. Variance and standard deviation of a continuous frequency distribution
\sigma^{2}=\frac{1}{N}\sum f_i(x_{i}-\overline{x})^2, \ \ \ \sigma = \frac{1}{N}\sqrt{N\sum f_i x_i^2-\left(\sum f_{i}x_{i}\right)^2}
4. Shortcut method to find variance and standard deviation
\sigma^{2}=\frac{h^{2}}{N^{2}}\left[N\sum f_i y_i^2-\left(\sum f_{i}y_{i}\right)^2\right], \ \ \ \sigma = \frac{h}{N}\sqrt{N\sum f_i y_i^2-\left(\sum f_{i}y_{i}\right)^2}
where y_{i} = \frac{x_{i} - A}{h} |
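The equivalence of the direct, short-cut, and step-deviation formulas above can be checked numerically. This sketch (with illustrative data and an arbitrary assumed mean A and step h) compares the three forms for ungrouped data:

```python
import random

random.seed(0)
xs = [random.uniform(10, 90) for _ in range(1000)]
n = len(xs)
mean = sum(xs) / n

# Direct ("actual mean") formula: mean squared deviation from the mean
var_direct = sum((x - mean) ** 2 for x in xs) / n

# Short-cut formula: mean of squares minus square of mean
var_short = sum(x * x for x in xs) / n - mean ** 2

# Step-deviation form with d_i = (x_i - A) / h; variance picks up a factor h^2
A, h = 50.0, 10.0
ds = [(x - A) / h for x in xs]
var_step = h * h * (sum(d * d for d in ds) / n - (sum(ds) / n) ** 2)

assert abs(var_direct - var_short) < 1e-6
assert abs(var_direct - var_step) < 1e-6
```

All three agree because expanding the square in the direct formula gives exactly the short-cut form, and the shift by A and scaling by h cancel out of the deviations.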
The wave is
$\bar{E} = E_{0} sin(\frac{2\pi z}{\lambda} + wt) \bar{i} + E_{0} cos(\frac{2 \pi z}{\lambda}+wt) \bar{j}$
Let's simplify with $z = 1$. In the $xy$-plane the field tip is then given by the parametrization $(\sin(\frac{2\pi }{\lambda}+wt), \cos(\frac{2\pi }{\lambda} + wt))$, where $t$ is time and $\lambda$ is wavelength. This parametrization satisfies the equation $1^2=x^{2}+y^{2}$ (taking $E_0 = 1$), a circle.
Now, let's vary the value of $z$. We know now that the field cannot move into the $x$ or $y$ coordinates -- or do we? Not really; the simplification above is naive, since the $x$-$y$ parametrization depends on the dimension $z$ -- but can we see something from it? If so, how should I proceed?
The given solution is that the wave moves along the $z$-axis in the negative direction as $t$ increases, which is exactly the thing I cannot see.
The way I am trying to solve these kinds of problems is:
Parametrize the equation, hold everything else constant, change one dimension, and observe how the other variables respond.
...now however I find it hard to parametrize the $z$ so a bit lost. So how can I visualize the wave with pen-and-paper? |
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either
$$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$
The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes.
On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who
doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$).
However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one.
So,
has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme?
Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis
$$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$
Or is there an entangled counterfeiting strategy that does better?
Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that 5/8 is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).
This post has been migrated from (A51.SE)
Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
Now that LIGO has finally measured gravitational waves using a huge laser interferometer, to me, the question remains: why was it possible? As explained in many news articles, gravitational waves are similar to water waves or electromagnetic waves; they just do not propagate in a medium like water, but in space-time itself. If space-time itself gets contracted and expanded by the gravitational waves, so does any means of measurement. The ruler you use for measurement (the laser beam) gets deformed while the wave travels through the measuring device. Otherwise the "ruler" would have to live outside of space-time, but there is no outside. If space-time were a cup filled with pudding, on which we had painted a straight line with 10 marks, pushing into the pudding slightly with our thumb does bend the line, but for us, there remain 10 marks on the line; to measure the extension, we would have to use a ruler outside of our space-time (pudding) to measure, let's say, 11 marks. But, well, there is no outside. I assume the same happens not only to the 3 spatial dimensions but also to the time dimension. Because they "did it", what am I missing?
The short answer is that waves that are "in the apparatus" are indeed stretched. However the "fresh waves" being produced by the laser are not. So long as the "new" waves spend much less time in the interferometer than it takes to expand them (which takes roughly 1/gravitational wave frequency), then the effect you are talking about can be neglected.
Details:
There is an
apparent paradox: you can think about the detection in two ways. On the one hand you can imagine that the lengths of the detector arms change and that the round-trip travel time of a light beam is subsequently changed and so the difference in the time-of-arrival of wavecrests translates into a phase difference that is detected in the interferometer. On the other hand you have the analogy to the expansion of the universe - if the arm length is changed, then isn't the wavelength of the light changed by exactly the same factor and so there can be no change in the phase difference? I guess this latter is your question.
Well clearly, the detector works so there must be a problem with the second interpretation. There is an excellent discussion of this by Saulson 1997, from which I give a summary.
Interpretation 1:
If the two arms are in the $x$ and $y$ directions and the incoming wave the $z$ direction, then the metric due to the wave can be written $$ds^2 = -c^2 dt^2 + (1+ h(t))dx^2 + (1-h(t))dy^2,$$ where $h(t)$ is the strain of the gravitational wave.
For light travelling on geodesic paths the metric interval $ds^2=0$; this means that (considering only the arm aligned along the x-axis for a moment) $$c dt = \sqrt{(1 + h(t))}dx \simeq (1 + \frac{1}{2}h(t))dx$$ The time taken to travel the path is therefore increased to $$\tau_+ = \int dt = \frac{1}{c}\int (1 + \frac{1}{2}h(t))dx$$
If the original arm is of length $L$ and the perturbed arm length is $L(1+h/2)$, then the time difference for a photon to make the
round trip along each arm is $$\Delta \tau = \tau_+ - \tau_- \simeq \frac{2L}{c}h$$ leading to a phase difference in the signals of $$\Delta \phi = \frac{4\pi L}{\lambda} h$$ This assumes that $h(t)$ is treated as a constant for the time that the laser light is in the apparatus.
Interpretation 2:
In analogy with the expansion of the universe, the gravitational wave
does change the wavelength of light in each arm of the experiment. However, only the waves that are in the apparatus as the gravitational wave passes through can be affected.
Suppose that $h(t)$ is a step function so that the arm changes length from $L$ to $L(1+h(0)/2)$ instantaneously. The waves that are just arriving back at the detector will be unaffected by this change, but subsequent wavecrests will have had successively further to travel and so there is a phase lag that builds up gradually to the value defined above in interpretation 1. The time taken for the phase lag to build up will be $2L/c$.
But then what about the waves that enter the apparatus later? For those, the laser
frequency is unchanged, and since the speed of light is constant, the wavelength is also unchanged. These waves travel in a lengthened arm and therefore experience a phase lag exactly equivalent to interpretation 1.
In practice, the "buildup time" for the phase lag is short compared with the reciprocal of the frequency of the gravitational waves. For example the LIGO path length is about 1,000 km, so the "build up time" would be 0.003 s compared with the reciprocal of the $\sim 100$ Hz signal of 0.01 s and so is relatively unimportant when interpreting the signal (the detection sensitivity of the interferometer is indeed compromised at higher frequencies because of this effect). |
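To get a feel for the magnitudes in the last paragraph, here is a quick back-of-the-envelope computation. The numbers are rough assumptions for illustration (effective folded path of order 1000 km, a 1064 nm laser, strain $h \sim 10^{-21}$), not exact LIGO specifications:

```python
import math

c = 3.0e8              # speed of light, m/s
L_eff = 1.0e6          # m: the ~1000 km effective path quoted in the answer
lam = 1.064e-6         # m: Nd:YAG laser wavelength (assumed)
h = 1e-21              # typical gravitational-wave strain

dphi = 4 * math.pi * L_eff / lam * h   # phase difference from the answer's formula
buildup = L_eff / c                    # phase-lag build-up time (answer's 2L/c with 2L ~ 1000 km)

assert 1e-9 < dphi < 1e-7              # a minuscule phase shift, ~1e-8 rad
assert abs(buildup - 0.003) < 1e-3     # ~0.003 s, well below 0.01 s = 1/(100 Hz)
```

This makes both points of the answer concrete: the phase shift to be detected is of order $10^{-8}$ radians, and the build-up time is comfortably shorter than the period of a $\sim 100$ Hz gravitational wave.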
I am currently toying around with the behaviour of a classical relativistic point particle. For a free particle we get the action
\begin{align} S =\int_\tau - m\sqrt{- \dot X_\mu \dot X^\mu}. \end{align} This action is invariant under reparametrization of the worldline parameter $\tau$, which I can see mathematically, but I don't fully grasp its physical meaning (compare e.g. http://www.damtp.cam.ac.uk/user/tong/string/one.pdf ).
I would have intuitively linked this to Lorentz invariance, but we can add potential terms like $V( X_\mu X^\mu)$ that are Lorentz invariant yet break the reparametrization invariance, and we can add terms that are invariant under reparametrization but not under Lorentz transformations, like $ \dot X^1$. So the two don't seem to be linked at all -- but what does the reparametrization invariance mean then, and when is it relevant? For example, I would like to experiment a bit with simple potentials. More concretely, a relativistic theory that reduces to the harmonic oscillator in the non-relativistic limit. Naively I would just write down an action like this:
\begin{align} S =\int_\tau - m\sqrt{- \dot X_\mu \dot X^\mu} - \frac{m \omega^2}{2} \vec{x}^2. \end{align}
This is obviously not Lorentz invariant, which shouldn't be a problem I guess -- the non-relativistic version isn't Galilei invariant either. But it also breaks the reparametrization invariance -- is that a problem?
I hope I could formulate my confusion clearly. Thanks in advance for any help!
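One concrete way to see the invariance of the free action is to evaluate it numerically for the same worldline under two different parametrizations and check that the value agrees. A sketch with $c = 1$, mostly-plus signature, and a made-up timelike worldline (all choices here are illustrative):

```python
import math

m = 1.0
EPS = 1e-6   # finite-difference step for worldline derivatives

def action(t_of, x_of, a, b, n=20000):
    """S = -m * integral of sqrt((dt/dtau)^2 - (dx/dtau)^2) dtau, midpoint rule."""
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        tau = a + (i + 0.5) * h
        dt = (t_of(tau + EPS) - t_of(tau - EPS)) / (2 * EPS)
        dx = (x_of(tau + EPS) - x_of(tau - EPS)) / (2 * EPS)
        s -= m * math.sqrt(dt * dt - dx * dx) * h
    return s

# One timelike worldline, two parametrizations of it
t1 = lambda tau: tau
x1 = lambda tau: 0.3 * math.sin(tau)
t2 = lambda s: t1(s * s)     # reparametrization tau = s^2, s in [0, 1]
x2 = lambda s: x1(s * s)

S1 = action(t1, x1, 0.0, 1.0)
S2 = action(t2, x2, 0.0, 1.0)
assert abs(S1 - S2) < 1e-4   # the action does not depend on the parametrization
```

The square root makes the integrand homogeneous of degree one in the velocities, which is exactly why the Jacobian of the reparametrization cancels; a potential term like $\vec{x}^2$ without a square root lacks this property, which is why it breaks the invariance.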
The fact is that, in the general case $$\vec{E} = -\vec{\nabla}V - \frac{\partial\vec{A}}{\partial t};$$(signs depend on conventions used) where $\vec{A}$ is called
vector potential. You can consult for example Wikipedia.
Let us consider homogeneous Maxwell equations:
$$\begin{cases}\vec{\nabla}\cdot\vec{B} = 0,\\\vec{\nabla}\times\vec{E} + \frac{\partial\vec{B}}{\partial t} = 0;\end{cases}$$
It is well known that every divergence-free field can be written as the curl of another vector field (in a simply connected domain), just as a curl-free field can be written as the gradient of a scalar function (again in a simply connected domain). Thus, from the first equation,
$$\vec{B} = \vec{\nabla}\times\vec{A},$$
and substituting this in the second equation,
$$\vec\nabla\times\left(\vec{E} + \frac{\partial\vec{A}}{\partial t}\right)=0,$$
since one can exchange the curl with the derivative w.r.t. time, and so one can set:
$$\vec{E} + \frac{\partial\vec{A}}{\partial t} = -\vec\nabla V,$$
from which
$$\vec{E} = -\vec{\nabla}V - \frac{\partial\vec{A}}{\partial t}.$$
Note that if your magnetic field is time-independent, you recover the well-know formula
$$\vec{E} = -\vec\nabla V.$$ |
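As a sanity check of the construction above, one can verify numerically that for any smooth choice of $V$ and $\vec{A}$, the fields $\vec{E} = -\vec{\nabla}V - \partial_t\vec{A}$ and $\vec{B} = \vec{\nabla}\times\vec{A}$ satisfy the homogeneous Maxwell equations automatically. A sketch with made-up potentials and central finite differences:

```python
import math

def V(x, y, z, t):
    """An arbitrary smooth scalar potential (illustrative choice)."""
    return x * x * math.sin(t) + y * z

def A(x, y, z, t):
    """An arbitrary smooth vector potential (illustrative choice)."""
    return (y * t, math.cos(x) * t * t, x * y * z)

H = 1e-5
def d(f, args, i):
    """Central difference of scalar function f w.r.t. argument i."""
    a = list(args); a[i] += H; up = f(*a)
    a[i] -= 2 * H; lo = f(*a)
    return (up - lo) / (2 * H)

def comp(F, k):
    return lambda *a: F(*a)[k]

def E(x, y, z, t):
    """E = -grad V - dA/dt, componentwise."""
    p = (x, y, z, t)
    return tuple(-d(V, p, k) - d(comp(A, k), p, 3) for k in range(3))

def curl(F, p):
    return (d(comp(F, 2), p, 1) - d(comp(F, 1), p, 2),
            d(comp(F, 0), p, 2) - d(comp(F, 2), p, 0),
            d(comp(F, 1), p, 0) - d(comp(F, 0), p, 1))

B = lambda *a: curl(A, a)

p = (0.3, -0.7, 1.1, 0.5)
faraday = [curl(E, p)[k] + d(comp(B, k), p, 3) for k in range(3)]   # curl E + dB/dt
divB = sum(d(comp(B, k), p, k) for k in range(3))                   # div B

assert all(abs(v) < 1e-3 for v in faraday)
assert abs(divB) < 1e-3
```

Both homogeneous equations hold up to finite-difference error, with no condition imposed on $V$ or $\vec{A}$: that is the content of the derivation above.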
(I am reposting here a question I asked on stack overflow, since it actually sits right in between programming
(modeling of 2D physics) and physics proper (kinematics). I think I have the physics part of my solution right, but would be grateful to anyone who could provide a second look).
Issue:
I am modeling a simple differential drive robot
(such as the e-Puck, Khepera, etc.) with pyBox2D. This class of robots is usually controlled by indicating speeds for the right and left wheels in rad/s. However, Box2D can only control a (kinematic) body through two parameters: linear velocity (in m/s, as a 2D vector) and angular velocity (in rad/s). I need to convert my wheel speeds to linear + angular velocities.
Linear velocity is actually straightforward. Given a wheel radius $r$ in meters, and current robot orientation $\theta$ in radians, the forward speed is simply the average of the two wheel speeds in meters/sec and converted to a velocity vector according to current orientation. Let $v_r,v_l$ be the angular velocities of the two wheels, $Sp$ the forward speed of the robot, and $V$ its linear velocity. Then
$$Sp = \frac{(v_r \cdot r) + (v_l \cdot r)}{2} \tag{1}$$
$$V = (Sp \cos(\theta), Sp \sin(\theta)) \tag{2}$$
I cannot quite figure out the correct formula for angular velocity ($Av$), though. Intuitively, it should be the difference between the two speeds divided by the distance separating the wheels ($Sep$):
$$Av = \frac{(v_r - v_l )}{Sep} \tag{3}$$
in the limit cases: when $v_r=-v_l$, the robot spins in place, and when either $v_l=0$ or $v_r = 0$, the robot spins (pivots) around the stationary wheel.
I checked my formulas with a standard computational robotics text (Dudek and Jenkin,
Computational Principles of Mobile Robotics), and they seem to be correct.
However, my model does not exhibit the expected behavior with formula (3). As a test, I set up a robot with the left wheel speed at 0 and progressively increased the right wheel's speed. The expected behavior is that the robot should pivot around the left wheel with increasing velocity. Instead, the robot spins in circles of increasing radius, i.e. it spirals outward, which suggests that the angular velocity is insufficient (or that my linear velocity is too large, I guess).
(Notice that I use the
kinematic body type offered by the physics simulation package I use (Box2D), which excludes all computations of friction and forces. In other words, dynamics factors such as slippage, friction, etc. do not play a role in my problem.)
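For reference, a minimal sketch of the standard differential-drive kinematics (function and variable names are mine). Note that since $v_r, v_l$ are wheel angular speeds in rad/s, the wheel radius $r$ converts them to rim speeds in the angular term too, i.e. $Av = r(v_r - v_l)/Sep$ rather than formula (3):

```python
import math

def diff_drive_velocities(v_l, v_r, r, sep, theta):
    """Wheel angular speeds (rad/s) -> body linear velocity and angular velocity.

    Standard differential-drive kinematics: wheel radius r converts wheel
    angular speed to rim speed in BOTH the linear and angular terms.
    """
    sp = r * (v_r + v_l) / 2.0        # forward speed Sp, m/s   (formula 1)
    av = r * (v_r - v_l) / sep        # angular velocity Av, rad/s
    return (sp * math.cos(theta), sp * math.sin(theta)), av

# Equal and opposite wheel speeds -> spin in place (no translation)
(vx, vy), av = diff_drive_velocities(-2.0, 2.0, r=0.02, sep=0.05, theta=0.0)
assert abs(vx) < 1e-12 and abs(vy) < 1e-12 and av > 0

# One wheel stopped -> pivot about it: the centre's turning radius Sp/Av
# equals sep/2, a fixed circle rather than an outward spiral
(vx, vy), av = diff_drive_velocities(0.0, 4.0, r=0.02, sep=0.05, theta=0.0)
assert abs(math.hypot(vx, vy) / av - 0.05 / 2) < 1e-12
```

The returned pair maps directly onto Box2D's two kinematic-body controls: the 2D linear velocity vector and the scalar angular velocity.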
This answer focuses more on minimalizing the code, rather than finding the source of the problem, as the top-voted answer does. It is intended to be concise and hands-on, but digestible rather than exhaustive. Suggestions for improvement welcome!
Here are some strategies for reducing your code, which will help you get better and faster answers, since it will be clearer what your problem is and the other users will see that you put some effort into producing a concise Minimal Working Example. Thanks for that!
Most likely, not all of these things will apply to your question, so just pick what does apply. However, it is advised that you provide the community with something that will reproduce the problem in the easiest way possible. Typically this requires code that starts with
\documentclass and ends with
\end{document} (if using LaTeX). It will allow readers to copy-and-paste-and-compile your code and see exactly what problems you might be experiencing.
What follows below are snippets of code;
bad references imply that it should typically not be used, as it may not be part of the problem, while good references make suggestions that should be used instead. Note that these snippets should still form part of a larger,
\documentclass...
\end{document} structure as mentioned above.
Document Class - Bad:
\documentclass{MyUniversitysThesisClass}
- Bad:
\documentclass[..]{standalone}
...unless your problem relates to the
standalone document class.
standalone is meant for cropping stand-alone images within a main document usually. If this doesn't pertain to you, don't use it.
+ Good:
\documentclass{article}
Using a non-standard document class? Does your problem still show up with
article? Then
use article.
Document Class Options - Bad:
\documentclass[12pt, a5paper, final, oneside, onecolumn]{article}
+ Good:
\documentclass{article}
Using any options for your document class? Does your problem still show up without them? Then
get rid of them. Comments - Bad:
\usepackage{booktabs} % hübschere Tabllen, besseres Spacing
\usepackage{colortbl} % farbige Tabellenzellen
\usepackage{multirow} % mehrzeilige Zellen in Tabellen
\usepackage{subfloat} % Sub-Gleitumgebungen
+ Good:
\usepackage{booktabs}
\usepackage{colortbl}
\usepackage{multirow}
\usepackage{subfloat}
You put comments in your code to remember what packages are there for? Great habit, but usually not necessary in a MWE –
get rid of them. Loading Packages - Bad:
\usepackage{a4wide}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{url}
\usepackage[algoruled,vlined]{algorithm2e}
\usepackage{graphicx}
\usepackage[ngerman, american]{babel}
\usepackage{booktabs}
\usepackage{units}
\usepackage{makeidx}
\makeindex
\usepackage[usenames,dvipsnames]{color}
\usepackage{colortbl}
\usepackage{epstopdf}
\usepackage{rotating}
+ Good:
% Assuming your problem is related e.g. to the rotation of a figure, you might need:
\usepackage{rotating}
You’ve developed an awesome template with lots of helpful packages? Does your problem still show up if you remove some or even most of them? Then
get rid of those that aren’t necessary for reproducing the problem. (If you should later find out that another package is complicating the situation, you can always ask another question or edit the existing question.)
In most cases, even packages like
inputenc or
fontenc are not necessary in MWEs, even though they are essential for many non-English documents in “real” documents.
Images - Bad:
\includegraphics{graphs/dataset17b.pdf}
+ Good:
\usepackage[demo]{graphicx}
....
\includegraphics{graphs/dataset17b.pdf}
+ Good:
\usepackage{graphicx}
....
\includegraphics{example-image}% Image from the mwe package
Your problem includes
an image? Does your problem show up with any image? Then use the
demo option for the
graphicx package – this way, other users who don’t have your image file won’t get an error message because of that. If you prefer an actual image that you can rotate, stretch, etc., use the
mwe package, which provides a number of dummy images, named e.g.
example-image.
If your problem is specific to the size of the included image, still use
mwe's
example-image, but also specify the
width and
height so it more readily replicates your
custom-image dimensions. Again, this way the problem is reproducible without using your image.
Text - Bad:
In \cite{cite:0}, it is shown that $\Delta \subset {U_{\mathcal{{D}}}}$. Hence
Y. Q. Qian's characterization of conditionally uncountable elements was a
milestone in constructive algebra. Now it has long been known that there exists
an almost everywhere Clifford right-canonically pseudo-integrable, Clairaut
subset \cite{cite:0}. The groundbreaking work of J. Davis on isomorphisms was a
major advance. In future work, we plan to address questions of uniqueness as
well as degeneracy. Thus in \cite{cite:0}, the main result was the
classification of meromorphic, completely left-invariant systems.
+ Good:
\usepackage{lipsum} % just for dummy text
...
\lipsum[1-3]
+ Good:
Foo bar baz.
Need a
few paragraphs of text to demonstrate your problem? Use a package that produces dummy text. Popular choices are
lipsum (plain paragraphs) and
blindtext (can produce entire documents with section titles, lists, and formulae).
Need just a
tiny amount of text? Then keep it maximally simple; avoid formulae, italics, tables – anything that’s not essential to the problem. Popular choices for dummy words are
foo,
bar, and
baz.
Bibliography Files + Good:
\usepackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@book{Knu86,
author = {Knuth, Donald E.},
year = {1986},
title = {The \TeX book},
}
\end{filecontents*}
\bibliography{\jobname} % if you’re using BibTeX
\addbibresource{\jobname.bib} % if you’re using biblatex
Need a
.bib file to reproduce your problem? Use a maximally simple entry embedded in a
filecontents environment in the preamble. During the compilation, this will create a .bib file in the same directory as the
.tex file, so users compiling your code only need to save one file by themselves.
Another option for
biblatex would be to use the file
biblatex-examples.bib, which should be installed with
biblatex by default. You can find it in
bibtex/bib/biblatex/.
Data -- Bad:
Never include data as an image. - Bad:
Number of points Values
10 100
20 400
30 1200
40 2345
etc...
+ Good:
\usepackage{filecontents}
\begin{filecontents*}{data.txt}
Number of points, Values
10, 100
20, 400
30, 1200
40, 2345
\end{filecontents*}
Including the data as part of the MWE makes the example portable as well. Of course, the input may differ depending on what package you use to manage the data (some require CSV, some don't).
Index + Good:
\usepackage{filecontents}
\begin{filecontents*}{\jobname.ist}
delim_0 "\\dotfill "
\end{filecontents*}
The index style can be included in the
filecontents* environment in the preamble. The contents (and file extension) will differ according to the required indexing application (
makeindex or
xindy).
Sometimes a problem can only be demonstrated with an index that spans several pages. The
testidx package is like
lipsum etc but the dummy text is interspersed with
\index to make it easier to test index styles. It has over 400 top-level terms (along with some sub-items and sub-sub-items) that includes every basic Latin letter group (A–Z) as well some extended Latin characters and a few digraphs.
- Bad:
\begin{document}
aa\index{aa}
ab\index{ab}
...
zy\index{zy}
zz\index{zz}
\printindex
\end{document}
+ Good:
\begin{document}
\testidx
\printindex
\end{document}
If page breaking is the source of your problem (for example, after a letter group heading or between an item and sub-item), there's a high probability of an awkward break occurring given the large number of test items, but you can alter the page dimensions or font size to ensure one occurs in your MWE.
Glossaries
The
glossaries package comes with some files containing dummy entries, which can be used in MWEs.
- Bad:
\newglossaryentry{sample1}{name={sample1},description={description 1}}
...
\newglossaryentry{sample100}{name={sample100},description={description 100}}
\newacronym{ac1}{ac1}{acronym 1}
...
\newacronym{ac100}{ac100}{acronym 100}
+ Good:
\loadglsentries{example-glossaries-brief}
\loadglsentries[\acronymtype]{example-glossaries-acronym}
See Dummy Entries for Testing for a complete list of dummy entry files provided by
glossaries. There's an additional file
example-glossaries-xr.tex provided by
glossaries-extra.
Formatting your code
Formatting of code is done using Markdown. See the relevant FAQ How do I mark code blocks?. There also exists some syntax-highlighting, a discussion of which can be following at What is syntax highlighting and how does it work?.
With the above in mind, don't post your code in comments, since comments only support a limited amount of Markdown.
Posting a Picture of Your Output
It’s often helpful to see what your current, faulty output looks like. If you’re not sure how to do that, have a look at How does one add a LaTeX output to a question/answer? and how can i upload an image to be included in a question or answer?.
Selection of packages inspired by Inconsistent rotations with \sidewaysfigure. Math ramble generated by Mathgen. Bibliography sample from lockstep’s question biblatex: Putting thin spaces between initials. |
Electrostatic Potential and Capacitance: Potential due to a Point Charge, Electric Dipole and System of Charges

The potential at a point due to an electric dipole is V=\frac{p\cos\theta}{4\pi \varepsilon_{0}r^{2}}
For a point on the axial line: V=\frac{p}{4\pi \varepsilon_{0}r^{2}}
For a point on the equatorial line: V = 0
A surface over which the potential is everywhere the same is called an equipotential surface.
Electrostatic potential due to an infinite charged wire at distance r: V=-\frac{\lambda}{2\pi \varepsilon_{0}}\log_{e} r+\ k (k = constant)
Electrostatic potential due to a thin infinite non-conducting plane sheet: V=-\frac{\sigma}{2\varepsilon_{0}}r
For a uniformly charged sphere (R = radius):
Outside the sphere: V=\frac{1}{4\pi \varepsilon_{0}}.\frac{Q}{r}=\frac{\sigma\ R^{2}}{\varepsilon_{0}r}
On the surface: V=\frac{1}{4\pi \varepsilon_{0}}.\frac{Q}{R}=\frac{\sigma\ R}{\varepsilon_{0}}
Inside the sphere: V=\frac{1}{4\pi \varepsilon_{0}}.\frac{Q\left(3R^{2}-r^{2}\right)}{2R^{3}}
V_{centre} > V_{surface} > V_{outside}
The unit of electric potential is the volt: when 1 joule of work is done in bringing 1 coulomb of charge, the potential is said to be 1 volt.
1. Potential due to System of charges
Let there be a number of point charges q 1, q 2, q 3, ........ q n at distances r 1, r 2, r 3, ......, r n respectively from the point P, where the electric potential is given by
V = \frac{1}{4\pi \varepsilon_{0}}\sum_{i = 1}^{n} \frac{q_{i}}{r_{i}}
2. Potential Gradient
The rate of change of potential with distance in electric field is called potential gradient. Potential gradient = \frac{dV}{dr}
3. Relation between potential gradient and electric field intensity is given by
E = -\left[\frac{dV}{dr}\right]
4. The net torque experienced by the dipole is
τ = pE sin θ, or in vector form \overrightarrow{\tau} = \overrightarrow{p} \times \overrightarrow{E}
5. Potential Difference:
V_{ab} = -\int_{a}^{b} \overrightarrow{E}.d \ \overrightarrow{r}
6. Work done in Rotating an Electric Dipole in a uniform electric field
⇒ W external = pE(1 − cos θ)
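As a quick sanity check of formula 1 (the potential of a system of charges) and the dipole formulas above, here is a small numerical sketch with illustrative values, comparing the exact superposition of two point charges against the far-field dipole approximation:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def potential(charges, point):
    """V at `point` due to point charges [(q, (x, y, z)), ...] by superposition."""
    k = 1 / (4 * math.pi * EPS0)
    return sum(q * k / math.dist(pos, point) for q, pos in charges)

# A small dipole: charges +-q at z = +-a, dipole moment p = q * 2a
q, a = 1e-9, 1e-3
dipole = [(q, (0, 0, a)), (-q, (0, 0, -a))]
p = q * 2 * a

r = 1.0
V_axial = potential(dipole, (0, 0, r))        # theta = 0 on the axial line
V_equatorial = potential(dipole, (r, 0, 0))   # theta = 90 degrees

# Matches V = p / (4 pi eps0 r^2) on the axis (to order a^2/r^2),
# and vanishes on the equatorial line
assert abs(V_axial - p / (4 * math.pi * EPS0 * r**2)) / V_axial < 1e-5
assert abs(V_equatorial) < 1e-12
```

The relative error of the axial approximation is of order $a^2/r^2$, which is why the dipole formulas hold only for points far from the dipole compared with its size.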
I'm trying to understand Milnor's proof of the existence of exotic 7-spheres.
Milnor finds his examples among $S^{3}$ bundles over $S^{4}$ (with structure group $SO(4)$ ). Such a bundle can be described as follows:
Given $M$, an $S^{3}$ bundle over $S^{4}$, if we restrict $M$ to the northern (or southern) hemisphere of $S^{4}$, it must trivialize since each hemisphere is contractible. Hence, we can build $M$ by specifying, for each point $p$ in $S^{3}$ = equator of $S^{4}$ = intersection of northern and southern hemispheres, an element of $SO(4)$ which glues $p\times S^{3}$ in the northern hemisphere to $p\times S^{3}$ in the southern hemisphere.
This defines a function $f:S^{3}\rightarrow SO(4)$, which is known as the clutching function for $M$. By usual fiber bundle theory, the isomorphism type of $M$ only depends on the homotopy class of $f$.
$SO(4)$ is double covered by $S^3\times S^3$, and hence $\pi_3(SO(4)) = \mathbb{Z}\oplus\mathbb{Z}$. Thus, $f$ is really determined (at least, up to homotopy) by an ordered pair of integers (i,j).
Now, as the bundles have structure group $SO(4)$, it makes sense to talk about the Pontryagin classes of $M$. In Milnor's proof of the existence of exotic spheres, he needs to argue that $p_1(M) = \pm 2(i-j)$. His first step in this argument is that "clearly $p_1(M)$ is a linear function of $i$ and $j$."
It IS clear to me that the Pontryagin classes associated to $(ni, nj)$ for $n\in \mathbb{Z}$ will depend linearly on $n$. For, if we let $N_{i,j}$ denote the principal $SO(4)$ bundle over $S^{4}$ corresponding to $(i,j)$, then $N_{ni,nj}$ is clearly obtained as the pullback of $N_{i,j}$ via a degree $n$ map from $S^{4}$ to itself.
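The scaling step can be written out explicitly; the display below is a sketch using the standard naturality of Pontryagin classes, with $f_n : S^4 \to S^4$ denoting a degree-$n$ map (notation introduced here, not Milnor's):

```latex
p_1\left(N_{ni,\,nj}\right)
  = p_1\left(f_n^{\ast} N_{i,j}\right)
  = f_n^{\ast}\, p_1\left(N_{i,j}\right)
  = n \cdot p_1\left(N_{i,j}\right),
```

since $f_n^{\ast} : H^4(S^4;\mathbb{Z}) \to H^4(S^4;\mathbb{Z})$ is multiplication by $\deg f_n = n$.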
However, it's not clear to me why $p_1(M)$ is additive in $(i,j)$. Am I missing something simple?
And while we're talking about it, is more true? That is, for any sphere bundle over a sphere, say $S^{k}\rightarrow E\rightarrow S^{n}$, should the characteristic classes (Pontryagin, Stiefel-Whitney, Euler) be linear in terms of the clutching function?
For example, we can think of $p_1$ as a map from $\pi_{n-1}(SO(k+1))\rightarrow H^{4}(S^{n})$. Is this map a homomorphism? How about for the other characteristic classes? |
Ideal derivative filter
Let $f(x)$ be a signal bandlimited to frequencies $(-\pi,\, \pi)$. Given $f(x)$ as input, the same $f(x)$ is given as output by a system that has as its impulse response the sinc function:
$$\operatorname{sinc}(x) = \left\{\begin{array}{ll}1&\text{if }x = 0,\\\frac{\sin(\pi x)}{\pi x}&\text{otherwise.}\end{array}\right.\tag{1}$$
Taking the derivative $f'(x)$ of signal $f(x)$ is a linear time-invariant operation. By associativity, $f(x)$ is differentiated by a system that has as its impulse response the derivative of $\operatorname{sinc}(x):$
$$\operatorname{sinc}'(x) = \left\{\begin{array}{ll}0&\text{if }x = 0,\\\frac{\cos(\pi x)}{x} - \frac{\sin(\pi x)}{\pi x^2}&\text{otherwise.}\end{array}\right.\tag{2}$$
Both of $\operatorname{sinc}(x)$ and $\operatorname{sinc}'(x)$ are bandlimited to $(-\pi,\, \pi)$ and can thus be sampled at integer $x$ without aliasing. Sampling $\operatorname{sinc}'(x)$ at integer $x$ gives the ideal discrete-time impulse response for you to use in filtering samples of $f(x)$ at integer $x$ to obtain samples of $f'(x)$.
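At nonzero integers the sine term of Eq. 2 vanishes, so the ideal taps reduce to $\operatorname{sinc}'(n) = (-1)^n/n$, with $\operatorname{sinc}'(0) = 0$. A small numpy sketch (names are mine):

```python
import numpy as np

def ideal_derivative_taps(N):
    """Samples of sinc'(x) (Eq. 2) at integer x = -N..N.  At nonzero integers
    sin(pi*n) = 0, so only the cosine term survives: h[n] = (-1)**n / n; h[0] = 0."""
    n = np.arange(-N, N + 1)
    h = np.zeros(2 * N + 1)
    nz = n != 0
    h[nz] = np.cos(np.pi * n[nz]) / n[nz]
    return h

h = ideal_derivative_taps(3)
# antisymmetric: [1/3, -1/2, 1, 0, -1, 1/2, -1/3]
```

Depending on how the convolution sum is oriented, the same taps may appear with the opposite overall sign, as in the least-squares tables below.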
Figure 1. $\operatorname{sinc}(x)$ (red) and its derivative $\operatorname{sinc}'(x)$
Windowed filter
The infinitely long ideal impulse response can be multiplied by a window function to obtain a realizable impulse response. SciPy can calculate several types of window functions and do finite-impulse-response (FIR) filtering with an arbitrary impulse response by scipy.signal.convolve. You need to delay the impulse response to make it that of a causal system.
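To sketch the windowing step (numpy-only for self-containedness; numpy's np.blackman stands in for SciPy's window functions, and the filter half-length, test signal, and frequency are arbitrary choices):

```python
import numpy as np

N = 25                                     # filter half-length (arbitrary)
n = np.arange(-N, N + 1)
h = np.zeros(2 * N + 1)
h[n != 0] = np.cos(np.pi * n[n != 0]) / n[n != 0]   # ideal taps sinc'(n)
h_w = h * np.blackman(2 * N + 1)           # windowed, realizable taps

x = np.arange(200.0)
f = np.sin(0.2 * x)                        # test tone well inside (-pi, pi)
df = np.convolve(f, h_w, mode='same')      # approximates f'(x) = 0.2*cos(0.2*x)

err = np.max(np.abs(df[N:-N] - 0.2 * np.cos(0.2 * x[N:-N])))
```

The edge samples, where the filter hangs over the ends of the data, are excluded from the error check; on the interior samples the windowed estimate matches the true derivative to well under a percent here.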
Least squares derivative filter
A least squares filter impulse response is obtained by sampling $\operatorname{sinc}'(x)$ from Eq. 2 symmetrically at integer $-N\le x\le N$ with $N$ determining the filter order. This includes the samples of the impulse response that have the largest absolute value and consequently have the largest contribution to the mean square error.
Least squares derivative filter for pre-oversampled data
For sampled data that is oversampled by a factor $\beta \ge 1$ compared to the sufficient sampling frequency of $1$, a least squares derivative filter is obtained by minimizing mean square error (MSE) or deviation of the Fourier transform of the impulse response from the Fourier transform $i \omega$ of derivation, over the bandwidth $\omega\in(-\pi,\,\pi)$ of sinc:
$$\begin{align}\mathrm{MSE} &= \frac{1}{2\pi}∫_{-\pi}^{\pi}\left|iω - \sum_{n=1}^N \left(c_n e^{-niω/β} - c_n e^{niω/β}\right)\right|^2 dω\\&= \frac{1}{\pi}∫_{0}^{\pi}\left(ω + 2 \sum_{n=1}^N c_n \sin\left(\frac{nω}{β}\right)\right)^2 dω,\end{align}\tag{3}$$
where the unorthodox notation $e^{-iω/β}$ represents a one-sample delay at the oversampled sampling frequency or a delay of $1/β$ samples at the critical sampling frequency of the sinc, and $c_n$ is the value of the antisymmetric impulse response at discrete time index $n=1\ldots N$ with a complementary value $-c_{n}$ at time $-n$. The reason for the unorthodox notation is that it gives MS values that are comparable between different $\beta$. If you require coefficients for calculation of the derivative as if the sampling frequency was $1$, then divide each coefficient (solved in the following) by $\beta$. MSE is minimized when partial derivatives of MSE with respect to all $c_n$ are zero. At $\beta = 1$ we get the least squares filter discussed earlier. The least squares solutions and the corresponding impulse responses can be calculated by this Python script:
from sympy import *

for N in [1, 2, 3, 4]:  # <------ number of non-zero coefs / 2, too large is too slow to solve
    omega, beta = symbols('omega beta', real=True)
    c = [Symbol('c_'+str(i + 1), real=True) for i in range(N)]
    MSE = integrate((omega + 2*sum([c[n]*sin((n + 1)*omega/beta) for n in range(N)]))**2, (omega, 0, pi))/pi
    for use_beta in [1, 1.5, 2, 4, 8]:  # <------- Oversampling factor
        use_MSE = MSE.subs(beta, use_beta)
        sol = solve([diff(use_MSE, c[n]) for n in range(N)], [c[n] for n in range(N)])
        for n in range(N):
            print(str(-sol[c[N - n - 1]].evalf()), end=',')
        print('0', end=',')
        for n in range(N-1):
            print(str(sol[c[n]].evalf()), end=',')
        print(str(sol[c[N-1]].evalf()), end=' ')
        print('# N='+str(N)+', beta='+str(use_beta)+', MSE='+str(use_MSE.subs(sol).evalf()))
    print()
SymPy's solver will hang on some inputs, but manages to find the solutions for $N = 1\ldots4$ and $\beta = 1, 1.5, 2, 4, 8$:
-1.00000000000000,0,1.00000000000000 # N=1, beta=1, MSE=1.28986813369645
-1.13548530933771,0,1.13548530933771 # N=1, beta=1.5, MSE=0.178081981619031
-1.27323954473516,0,1.27323954473516 # N=1, beta=2, MSE=0.0475902571416442
-2.12680294939990,0,2.12680294939990 # N=1, beta=4, MSE=0.00251926307592025
-4.06211519814366,0,4.06211519814366 # N=1, beta=8, MSE=0.000151083662191318
0.500000000000000,-1.00000000000000,0,1.00000000000000,-0.500000000000000 # N=2, beta=1, MSE=0.789868133696453
0.330596796032270,-1.24876549439064,0,1.24876549439064,-0.330596796032270 # N=2, beta=1.5, MSE=0.0130608565500205
0.288948415598137,-1.51850657748724,0,1.51850657748724,-0.288948415598137 # N=2, beta=2, MSE=0.000919718098315458
0.382389721267460,-2.75841263168952,0,2.75841263168952,-0.382389721267460 # N=2, beta=4, MSE=2.58832725686194e-6
0.689919219439685,-5.37907341717520,0,5.37907341717520,-0.689919219439685 # N=2, beta=8, MSE=9.33496011568591e-9
-0.333333333333333,0.500000000000000,-1.00000000000000,0,1.00000000000000,-0.500000000000000,0.333333333333333 # N=3, beta=1, MSE=0.567645911474231
-0.116742660811357,0.423760333529959,-1.31069003715342,0,1.31069003715342,-0.423760333529959,0.116742660811357 # N=3, beta=1.5, MSE=0.00108046987847814
-0.0790631626589370,0.433017934639585,-1.64079658337694,0,1.64079658337694,-0.433017934639585,0.0790631626589370 # N=3, beta=2, MSE=2.01122717118286e-5
-0.0825922199951234,0.659247420212084,-3.07101401580036,0,3.07101401580036,-0.659247420212084,0.0825922199951234 # N=3, beta=4, MSE=3.01168290348385e-9
-0.140650397261751,1.22875551919474,-6.03556856432330,0,6.03556856432330,-1.22875551919474,0.140650397261751 # N=3, beta=8, MSE=6.53134493440793e-13
0.250000000000000,-0.333333333333333,0.500000000000000,-1.00000000000000,0,1.00000000000000,-0.500000000000000,0.333333333333333,-0.250000000000000 # N=4, beta=1, MSE=0.442645911474231
0.0443010798143126,-0.173517915867411,0.482863594553193,-1.34856865228067,0,1.34856865228067,-0.482863594553193,0.173517915867411,-0.0443010798143126 # N=4, beta=1.5, MSE=9.55482684708157e-5
0.0232132887981686,-0.144357365428612,0.528060960838853,-1.71358998904610,0,1.71358998904610,-0.528060960838853,0.144357365428612,-0.0232132887981686 # N=4, beta=2, MSE=4.70708311492379e-7
0.0191194948759654,-0.179273078273186,0.859556991552367,-3.25775951118075,0,3.25775951118075,-0.859556991552367,0.179273078273186,-0.0191194948759654 # N=4, beta=4, MSE=3.75015864227226e-12
0.0307242274505004,-0.317445566053152,1.62921582434694,-6.42899176483137,0,6.42899176483137,-1.62921582434694,0.317445566053152,-0.0307242274505004 # N=4, beta=8, MSE=4.88983660976190e-17
The filter with N=2, beta=2, MSE=0.000919718098315458 has the same number of taps as Rick Lyons' filter −3/16, 31/32, 0, −31/32, 3/16, which is slightly suboptimal in the least squares sense with MSE=0.0009390870.
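As a cross-check of the table, Eq. 3 can be evaluated numerically for the quoted N=2, β=2 solution. The coefficients below are the Eq.-3 values of $c_1, c_2$ behind that printed line (the printed taps carry the opposite overall sign, which does not change the full-band MSE):

```python
import numpy as np

beta = 2.0
c1, c2 = -1.51850657748724, 0.288948415598137   # Eq.-3 coefficients, N=2, beta=2

# (1/pi) * integral over (0, pi) of (w + 2*(c1*sin(w/2) + c2*sin(w)))^2,
# by the trapezoidal rule on a dense grid
w = np.linspace(0.0, np.pi, 200001)
integrand = (w + 2 * (c1 * np.sin(w / beta) + c2 * np.sin(2 * w / beta)))**2
dw = w[1] - w[0]
mse = np.sum((integrand[:-1] + integrand[1:]) / 2) * dw / np.pi
```

The numeric integral reproduces the symbolic MSE from the SymPy run to many digits.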
Evaluate: $$\int_C \frac{e^z+\sin{z}}{z}dz$$ where, $C$ is the circle $|z|=5$ traversed once in the counterclockwise direction.
I can't find an anti-derivative of this function, and I am not sure one exists. I was thinking about using several theorems, but each comes up short. For instance, I cannot use the Closed Curve Theorem, since the function in the integrand is not entire (at least I don't think it is).
Any hints (to help get me started)? |
In his webpage, Fabrice Bellard mentions an exotic formula for $\pi$ as follows $$\pi = \frac{1}{740025}\left(\sum_{n = 1}^{\infty}\dfrac{3P(n)}{{\displaystyle \binom{7n}{2n}2^{n - 1}}} - 20379280\right)\tag{1}$$ where \begin{align} P(n) = &-885673181n^{5} + 3125347237n^{4} -2942969225n^{3}\notag\\ &+1031962795n^{2} - 196882274n + 10996648\notag \end{align} and he further adds that this was obtained while testing some numerical relations with PSLQ Algorithm which is a kind of integer relation algorithm.
My point here is not to ask for a proof of the formula $(1)$, because it is based on a computer algorithm, but rather to understand how we can be so sure of the accuracy of such formulas obtained via these algorithms unless we have some other proof based on analytical arguments. Fundamentally a computer algorithm can't operate on a real number. The most we can hope for is operations on algebraic numbers. To deal with arbitrary real numbers one does not need "arbitrary precision arithmetic" but rather "infinite precision arithmetic", which is kind of impossible to achieve via computers.
How does one guarantee a computer generated formula like $(1)$ involving transcendental numbers ($\pi$) to be true?
Update: Just so as to highlight the computer based calculations done for algebraic numbers I refer to my answer regarding a Ramanujan's formula for $\pi$. |
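Although no finite computation proves $(1)$, one can at least test it far beyond any plausible coincidence: with exact rational arithmetic the only rounding happens in the final float conversion. A sketch (the 40-term cutoff is an arbitrary choice; the constants are taken as printed above):

```python
from fractions import Fraction
from math import comb, pi

def P(n):
    return (-885673181*n**5 + 3125347237*n**4 - 2942969225*n**3
            + 1031962795*n**2 - 196882274*n + 10996648)

# Exact partial sum of formula (1); the terms shrink by a factor of roughly
# 130 per n, so 40 terms are far more than enough for double precision.
s = sum(Fraction(3 * P(n), comb(7 * n, 2 * n) * 2**(n - 1)) for n in range(1, 40))
pi_approx = float((s - 20379280) / 740025)
```

Integer relation algorithms like PSLQ work the same way in reverse: they detect a candidate relation at some working precision, and one then re-verifies it at much higher precision before attempting an analytic proof.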
[I have no idea about the history --though I do have interest, and would like the OP to post whatever his research reveals-- but this is how I like to approach the transition pedagogically. Pardon the length; I'll attempt to edit things down later.]
We'll enter the discussion at the point where we agree that $\sin\theta$ and $\cos\theta$ are perfectly well-understood for each acute (and, we'll say, non-zero) angle $\theta$, via a right triangle with $\theta$ at one non-right vertex. (We avoid $\theta=0^{\circ}$ for the same reason we avoid $\theta=90^{\circ}$: something doesn't seem quite proper about the triangle involved.) The lore of First Quadrant Trig is fairly rich, with plenty of identities and formulas, many (most? all?) of which have picture-proofs. For instance, there's a lot to learn from the similar and right triangles in the figure I call the "Complete Triangle"; also, the "Law of Cosines" (for acute triangles, anyway); even the power series representations of at least four of the six trig functions.
Most importantly here, First Quadrant Trig includes the elegantly-illustrated Angle-Sum and Angle-Difference Formulas (taken from one of my previous answers):
Of course, the figures only seem to make sense for $\alpha$, $\beta$, $\alpha+\beta$, and $\alpha-\beta$ (strictly) between $0^{\circ}$ and $90^{\circ}$ ... but when they make sense, boy do they make sense! Interestingly, the formulas they represent allow our knowledge to expand beyond the comfort zone of the First Quadrant.
For instance, while we may or may not have felt comfortable defining sine and cosine for a right angle (You can't have two right angles in a triangle!), the right-hand sides of the Angle Addition Formula have no problems churning out consistent values when $\beta$ is the complement of $\alpha$ (if you will, the "co-$\alpha$"), even if the left-hand sides seem non-sensical:
$$\begin{align}\text{“}\sin 90^\circ\text{''} &= \sin{(\alpha+\text{co-}\alpha)} \\&= \sin\alpha \cos(\text{co-}\alpha)+\cos\alpha\sin(\text{co-}\alpha) = \sin^2\alpha + \cos^2\alpha = 1 \\\text{“}\cos 90^\circ\text{''} &= \cos{(\alpha+\text{co-}\alpha)} \\&= \cos\alpha \cos(\text{co-}\alpha)-\sin\alpha\sin(\text{co-}\alpha) = \cos\alpha \sin\alpha - \sin\alpha \cos\alpha = 0 \\\end{align}$$
What these formulas tell us is that, if the symbols "$\sin 90^\circ$" and "$\cos 90^\circ$" are going to mean anything (and still be consistent with what we've come to understand about Angle Addition), they must mean "$1$" and "$0$", respectively. Likewise, the Angle Subtraction Formulas (with $\beta = \alpha$) provide the other edge-case values:
$$\begin{align}\text{“}\sin 0^\circ\text{''} &= \sin{(\alpha-\alpha)} = \sin\alpha \cos\alpha-\cos\alpha\sin\alpha = 0 \\\text{“}\cos 0^\circ\text{''} &= \cos{(\alpha-\alpha)} = \cos\alpha \cos\alpha+\sin\alpha\sin\alpha = 1\end{align}$$
Of course, there's always the possibility that these symbols really don't mean anything. However, there don't seem to be any obvious contradictions with known First Quadrant Lore; indeed, the purported right angle values allow the Angle Subtraction Formula to re-confirm what we knew "from definition":
$$\sin(\text{co}\theta)=\cos\theta \qquad \cos(\text{co}\theta)=\sin\theta$$
and the ostensible zero angle values are certainly consistent with common sense statements like
$$\sin(\theta+0)=\sin\theta \qquad \cos(\theta-0) = \cos\theta$$
All things considered, there's not much controversy here, so we have little problem augmenting our strict right-triangle definition of trig functions and accepting the computed values for right angles and zero angles.
In the same way, we can use the Formulas (and our newly-christened right angle values) to explore the Second Quadrant: we simply throw our First Quadrant angles over the wall. For example,
$$\begin{align}\sin(\theta+90^{\circ}) &= \sin\theta \cos 90^{\circ} + \cos\theta \sin 90^{\circ} = \cos\theta\end{align}$$
This result makes some sense:
The "vertical shadow" of a unit segment rotated to angle $\theta + 90^{\circ}$ matches the "horizontal shadow" of a unit segment rotated only to angle $\theta$; I bet it works the other way, too ... Hey, waydaminnit ...
$$\begin{align}\cos(\theta+90^{\circ}) &= \cos\theta \cos 90^{\circ} - \sin\theta \sin 90^{\circ} = -\sin\theta\end{align}$$
... We get a negative? What's up with that?
At this point, we have two choices: retreat from the insanity, or embrace it. It turns out that the latter is the better course to take here: that single, tiny, intuition-shattering negative sign is the key to understanding how First Quadrant Trig extends to All Quadrants Trig.
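The two computations above are easy to sanity-check numerically, negative sign and all (a throwaway Python check; the 37° angle is an arbitrary choice):

```python
from math import sin, cos, radians

theta = radians(37)   # an arbitrary acute angle
s90, c90 = sin(radians(90)), cos(radians(90))   # 1 and (numerically) 0

sin_shifted = sin(theta) * c90 + cos(theta) * s90   # sin(theta + 90 deg)
cos_shifted = cos(theta) * c90 - sin(theta) * s90   # cos(theta + 90 deg)
# sin_shifted equals cos(theta); cos_shifted equals -sin(theta)
```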
You know the story from here: We use the Angle Addition Formulas to push from the Second Quadrant to the Third, to the Fourth, and beyond; and use the Angle Subtraction Formulas to assign trig values to negative angles. And we begin to make interesting observations that bolster our confidence in these values:
- sine values are signed just like $y$ coordinates in each quadrant; cosine values just like $x$ coordinates; kinda convenient, that.
- values repeat as we go all the way around the circle, because Angle Addition ultimately assures $\sin(\theta+360^{\circ}k) = \sin\theta$ and $\cos(\theta+360^{\circ}k) = \cos\theta$.
- the triangle area formula $\frac{1}{2} a b \sin C$ now works for any-size $C$
- the Law of Sines and Law of Cosines work for any triangle
- the way is paved for complex exponentials, non-Euclidean geometry, Fourier analysis, etc, etc, etc.
It's a good thing we didn't let that negative sign scare us off!
Sure, we're forced to abandon the idea that the sine and cosine of an arbitrary $\theta$ should come from right triangles with angle $\theta$, but the gains from our expanded perspective more than make up for that. We all out-grow the training wheels sometime.
As I've admitted, I don't know how this conceptual progression matches with the actual history of trigonometry's development. However, I like using this approach as an object lesson in how mathematics often advances: we play around with intuitively-appealing notions, understand the heck out of that stuff by observing patterns, and let those observations guide exploration beyond our intuition's limits. The tail, as they say, wags the dog.
Of course, there are other engines driving mathematical advancement, too, but this seems to appeal to students. It helps make the case that math is dynamic, subject to refinement (or overhaul) with every new discovery, and that its study is a never-ending and not-always-predictable journey.
A year ago I asked [this] (Proving a few properties of Bertrand curves) same question (without the "why can't a curve be a Bertrand mate of itself" part) - see that post if you want to know the essential definitions - and gave a partial answer to the first question (I showed it was constant, but hand-waved away a sign in the last step). At the time I was a lot less mature so I made a few steps that currently seem unjustified, and unfortunately even now I seem to be unable to correct them, which is why I'm asking this question. Before I did that, however, I tried to find other solutions besides my own partial one and found this on Kreyszig's differential geometry book, which is essentially what I wrote on the other post, but more direct:
where the following notation is used: $t^{*}$ is the tangent vector to $x^{*}$, which is a Bertrand mate of $x$, and $p$ is the normal vector to $x$. The * just means we're talking about vectors tangent/normal/binormal to $x^{*}$ instead of $x$. That being said, my questions are:
Isn't the correct expression given by $t^{*} = t \cos(\alpha) \pm b \sin(\alpha) $? After all, $t^{*} = \langle t^{*}, t \rangle t + \langle t^{*}, b \rangle b $, and $\langle t^{*}, b \rangle ^2 = \sin^2(\alpha)$, since $\langle t^{*}, t^{*} \rangle = 1$, where $\langle t, t^{*} \rangle = \cos(\alpha)$ by definition. Also, in that case what determines the $\pm$ sign, and how?
What, if any, are the problems in assuming $\alpha$ is a Bertrand mate of itself, that is, $a = 0$? Is it just uninteresting or is there some other unwanted complication? |
In my quantum mechanics textbook it says that the relation between the basis $|x\rangle$ and $|p\rangle$ is given by:
$\langle p | x \rangle = \Large \frac{e^{-ip x/ \hbar}}{\sqrt{2\pi \hbar}} \, .$
However, I'm not sure how to go about proving this relation. My idea was that the eigenstates of momentum can be written as below:
$\phi(x,p) = \Large \frac{1}{\sqrt{2\pi \hbar}} e^{-ipx/\hbar},$
which form an orthogonal basis.
But we can expand any state, $|\text{state}\rangle$, in terms of the orthogonal basis, right?
So is my notation below correct for the expansion of the state $| x \rangle$ ?
$|x\rangle = \int_{-\infty}^{+\infty} \phi(p', x) |p'\rangle dp'$
If it were correct then the following would also be correct?
Using the fact that $\langle p | p' \rangle = \delta(p - p')$, $\langle p | x \rangle$ can be computed:
\begin{align} \langle p | x \rangle =& \int_{-\infty}^{+\infty} \phi(p', x) \langle p | p' \rangle dp' \\ =& \int_{-\infty}^{+\infty} \phi(p', x) \delta(p - p') dp' \\ =& \phi(p, x) \\ =& \Large \frac{e^{-ip x/ \hbar}}{\sqrt{2\pi \hbar}} \end{align} as required.
Does the logic work here? |
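The $\delta$-orthogonality used above can also be checked numerically on a finite domain: for a box of half-width $L$, the overlap integral equals $\sin((p-p')L/\hbar)/(\pi(p-p'))$, which tends to $\delta(p-p')$ as $L\to\infty$. A minimal sketch with $\hbar = 1$ (all the numbers here are arbitrary choices):

```python
import numpy as np

hbar = 1.0
L = 50.0                                   # half-width of the finite box
x = np.linspace(-L, L, 400001)
dx = x[1] - x[0]

def phi(p):                                # <p|x> as a function of x
    return np.exp(-1j * p * x / hbar) / np.sqrt(2 * np.pi * hbar)

p1, p2 = 0.7, 1.3                          # two arbitrary momenta
overlap = np.sum(phi(p1) * np.conj(phi(p2))) * dx      # finite-L <p1|p2>
analytic = np.sin((p1 - p2) * L / hbar) / (np.pi * (p1 - p2))
```

The numeric overlap matches the analytic finite-$L$ expression; as $L$ grows, the sinc-shaped overlap becomes taller and narrower around $p_1 = p_2$, as a delta function should.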