I am reading the paper by Griffin and Brown (2010) where at one step in their MCMC procedure they need to sample from the following conditional posterior:
$$ p(\lambda|\gamma, \Psi)\propto \pi(\lambda)\frac{1}{(2\gamma^2)^{p\lambda}(\Gamma(\lambda))^p}\left(\prod_{i=1}^p\Psi_i\right)^\lambda $$
They say that $\lambda$
can be updated using a Metropolis-Hastings random walk update on $\log\lambda$. We propose $\lambda' = \exp\{\sigma^2_\lambda z\}\lambda$, where $z$ is standard normal; then $\lambda'$ is accepted with probability $$ \min\left\{1, \frac{\pi(\lambda')}{\pi(\lambda)}\left(\frac{\Gamma(\lambda)}{\Gamma(\lambda')}\right)^p\left((2\gamma^2)^{-p}\prod_{i=1}^p\Psi_i\right)^{\lambda'-\lambda}\right\} $$
My question:
why is there no term in the acceptance probability that accounts for the fact that we are proposing new draws on the logarithmic scale?
See for example here: Sampling on a logarithmic scale The paper, however, has more than 300 citations and I know several successful papers that have used the same type of Metropolis-Hastings procedure; therefore, I'm inclined to think I am missing something. But what?
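To make the setup concrete, here is a minimal sketch (my own, not from the paper) of the random-walk update on $\log\lambda$, with a placeholder $\mathrm{Exp}(1)$ prior for $\pi(\lambda)$ and illustrative values for $p$, $\gamma^2$ and $\prod_i\Psi_i$; the acceptance ratio is coded exactly as printed in the paper:

```python
import math
import random

random.seed(0)

def log_target(lam, p, gamma2, log_psi_prod):
    # log of pi(lam) * (2*gamma2)^(-p*lam) * Gamma(lam)^(-p) * (prod Psi)^lam
    log_prior = -lam  # Exp(1) prior, purely for illustration
    return (log_prior
            - p * lam * math.log(2.0 * gamma2)
            - p * math.lgamma(lam)
            + lam * log_psi_prod)

def mh_step(lam, sigma, p, gamma2, log_psi_prod):
    # multiplicative proposal, i.e. a Gaussian random walk on log(lambda)
    lam_new = lam * math.exp(sigma * random.gauss(0.0, 1.0))
    # acceptance ratio exactly as stated in the paper (no extra Jacobian
    # term appears there -- that omission is what the question is about)
    log_ratio = (log_target(lam_new, p, gamma2, log_psi_prod)
                 - log_target(lam, p, gamma2, log_psi_prod))
    return lam_new if math.log(random.random()) < log_ratio else lam

lam = 1.0
for _ in range(1000):
    lam = mh_step(lam, sigma=0.5, p=5, gamma2=1.0, log_psi_prod=0.0)
print(lam > 0)  # the multiplicative proposal keeps lambda positive
```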
Griffin and Brown (2010), "Inference with normal-gamma prior distributions in regression problems", Bayesian Analysis, https://projecteuclid.org/euclid.ba/1340369797
Projection Operators
We have already seen that if $V$ is a finite-dimensional nonzero vector space over the complex numbers, then every linear operator on $V$ has at least one eigenvalue. If instead $V$ is a finite-dimensional nonzero vector space over the real numbers, then a linear operator $T \in \mathcal L (V)$ need not have an eigenvalue. We will subsequently see, however, that this can only happen when the dimension of $V$ is even - that is, every linear operator on a nonzero real vector space of odd dimension has an eigenvalue.
Before we look at the proof, we will need to look at a type of operator on $V$ known as a projection operator. Let $U$ and $W$ be subspaces of $V$ such that $V$ is the direct sum of $U$ and $W$, that is, $V = U \oplus W$. Then if $v \in V$, we have that $v$ can be written uniquely as the sum of an element $u \in U$ and an element $w \in W$: $v = u + w$.
Definition: Let $V$ be a vector space and let $U$ and $W$ be subspaces of $V$ such that $V = U \oplus W$. Then $v$ can be written uniquely as $v = u + w$ where $u \in U$ and $w \in W$. The Projection Operator Onto $U$ is the linear operator $P_{U,W} \in \mathcal L(V)$ defined by $P_{U, W}(v) = u$ for all $v \in V$.
The following proposition verifies that $P_{U, W}$ is indeed a linear operator on $V$.
Proposition 1: Let $V$ be a vector space and let $U$ and $W$ be subspaces of $V$ such that $V = U \oplus W$. Then every $v \in V$ can be written uniquely as $v = u + w$ where $u \in U$ and $w \in W$, and the transformation $P_{U, W}$ defined by $P_{U, W}(v) = u$ is a linear operator. Proof: We first note that clearly $P_{U,W} : V \to V$, since for every vector $v \in V$ we have that $P_{U,W} (v) \in V$; so $P_{U,W}$ maps elements of $V$ back to elements of $V$. To show that $P_{U,W}$ is a linear operator, we must show that both the additivity and homogeneity properties hold. For additivity, let $v_1, v_2 \in V$. Then both of these vectors can be written uniquely as $v_1 = u_1 + w_1$ and $v_2 = u_2 + w_2$ for $u_1, u_2 \in U$ and $w_1, w_2 \in W$. Now $v_1 + v_2 = (u_1 + u_2) + (w_1 + w_2)$ with $u_1 + u_2 \in U$ and $w_1 + w_2 \in W$, so $P_{U,W}(v_1 + v_2) = u_1 + u_2 = P_{U,W}(v_1) + P_{U,W}(v_2)$. Therefore the additivity property holds. For homogeneity, let $a \in \mathbb{F}$. Then $a v_1 = au_1 + aw_1$ with $au_1 \in U$ and $aw_1 \in W$, so $P_{U,W}(a v_1) = a u_1 = a P_{U,W}(v_1)$. Therefore the homogeneity property holds, and so $P_{U, W} \in \mathcal L (V)$. $\blacksquare$
The following simple proposition will tell us that the range of $P_{U,W}$ is precisely $U$ and that the null space of $P_{U,W}$ is precisely $W$.
Proposition 2: Let $V$ be a vector space and let $U$ and $W$ be subspaces of $V$ such that $V = U \oplus W$. Then $\mathrm{range} ( P_{U,W} ) = U$ and $\mathrm{null} ( P_{U,W}) = W$. Proof: We will first show that $\mathrm{range} ( P_{U,W} ) = U$. Let $u \in \mathrm{range} ( P_{U,W} )$. Then there exists a $v \in V$ such that $P_{U,W}(v) = u$. So for $v \in V$ we have that $v = \underbrace{u}_{\in U} + \underbrace{0}_{\in W}$. Thus $u \in U$. Now let $u \in U$. Thus since $U$ is a subspace of $V$ we have that $u \in V$. Since $V = U \oplus W$ we have that $u$ can be uniquely written as $u = \underbrace{u}_{\in U} + 0$. Thus $P_{U,W}(u) = u$ so $u \in \mathrm{range} ( P_{U,W} )$. Thus $\mathrm{range} ( P_{U,W} ) = U$. We will now show that $\mathrm{null} ( P_{U,W}) = W$. Let $w \in \mathrm{null} (P_{U, W})$. Then $P_{U,W}(w) = 0$. Thus we have that $w = \underbrace{0}_{\in U} + \underbrace{w}_{\in W}$ so $w \in W$. Now let $w \in W$. We have that $w \in V$ as well since $W$ is a subspace of $V$. Thus since $V = U \oplus W$ we have that $w \in V$ can be uniquely written as $w = \underbrace{0}_{\in U} + \underbrace{w}_{\in W}$. So $P_{U,W} (w) = 0$ and hence $w \in \mathrm{null} ( P_{U,W})$. $\blacksquare$
Proposition 3: Let $V$ be a vector space and let $U$ and $W$ be subspaces of $V$ such that $V = U \oplus W$. Then $P_{U,W} (v) = v$ if and only if $v \in U$. Proof: $\Rightarrow$ Suppose that $P_{U,W}(v) = v$. Since $v \in V$ and $V = U \oplus W$, we have that $v = u + w$ where $u \in U$ and $w \in W$. By how $P_{U,W}$ is defined, we have that $P_{U,W}(v) = u$. Thus $v = u \in U$. $\Leftarrow$ Suppose that $v \in U$. Since $U$ is a subspace of $V$, we have that $v \in V$. Since $V = U \oplus W$, $v$ can be written uniquely as $v = u + w$ where $u \in U$ and $w \in W$. Since $v \in U$, the unique decomposition is $v = \underbrace{v}_{\in U} + \underbrace{0}_{\in W}$, so $P_{U,W} (v) = v$. (Note that if $u \neq v$ then $w \neq 0$ and we would have more than one way to express $v$, contradicting the fact that $V = U \oplus W$.) $\blacksquare$
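As a concrete illustration (my own example, with $V=\mathbb{R}^3$), one can build the matrix of $P_{U,W}$ from bases of $U$ and $W$ and check the properties in Propositions 1-3 numerically:

```python
import numpy as np

# Illustrative sketch: given bases for subspaces U and W with R^3 = U (+) W,
# build the matrix of the projection P_{U,W} onto U along W.
U = np.array([[1.0, 0.0, 0.0]]).T              # U = span{e1}
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]]).T              # W = span{e2, e1 + e3}

B = np.hstack([U, W])                          # basis of V adapted to U (+) W
# In the (U, W) basis the projection keeps the U-coordinates and kills W.
D = np.diag([1.0, 0.0, 0.0])                   # dim(U) ones, then zeros
P = B @ D @ np.linalg.inv(B)                   # change of basis back

v = np.array([2.0, 3.0, 4.0])
print(P @ P @ v)            # idempotent: P^2 v = P v
print(P @ W[:, 0])          # W lies in the null space: P w = 0
print(P @ (5.0 * U[:, 0]))  # P fixes U: P u = u
```

Note that $P_{U,W}$ here is an oblique (not orthogonal) projection: it depends on the choice of the complement $W$, not just on $U$.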
Even before quantization, charged bosonic fields exhibit a certain "self-interaction". The body of this post demonstrates this fact, and the last paragraph asks the question.
Notation/ Lagrangians
Let me first provide the respective Lagrangians and elucidate the notation.
I am talking about complex scalar QED with the Lagrangian $$\mathcal{L} = \frac{1}{2} D_\mu \phi^* D^\mu \phi - \frac{1}{2} m^2 \phi^* \phi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ where $D_\mu \phi = (\partial_\mu + ie A_\mu) \phi$, $D_\mu \phi^* = (\partial_\mu - ie A_\mu) \phi^*$ and $F^{\mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$. I am also mentioning usual QED with the Lagrangian $$\mathcal{L} = \bar{\psi}(iD_\mu \gamma^\mu-m) \psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ and "vector QED" (U(1) coupling to the Proca field) $$\mathcal{L} = - \frac{1}{4} (D^\mu B^{* \nu} - D^\nu B^{* \mu})(D_\mu B_\nu-D_\nu B_\mu) + \frac{1}{2} m^2 B^{* \nu}B_\nu - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$
The four-currents are obtained from Noether's theorem. Natural units $c=\hbar=1$ are used. $\Im$ means imaginary part.
Noether currents of particles
Consider the Noether current of the complex scalar $\phi$ $$j^\mu = \frac{e}{m} \Im(\phi^* \partial^\mu\phi)$$ Introducing the local $U(1)$ gauge coupling we have $\partial_\mu \to D_\mu=\partial_\mu + ie A_\mu$ (with $-ieA_\mu$ for the complex conjugate). The new Noether current is $$\mathcal{J}^\mu = \frac{e}{m} \Im(\phi^* D^\mu\phi) = \frac{e}{m} \Im(\phi^* \partial^\mu\phi) + \frac{e^2}{m} |\phi|^2 A^\mu$$ Similarly for a Proca field $B^\mu$ (massive spin 1 boson) we have $$j^\mu = \frac{e}{m} \Im(B^*_\nu(\partial^\mu B^\nu-\partial^\nu B^\mu))$$ which, by the same procedure, leads to $$\mathcal{J}^\mu = \frac{e}{m} \Im(B^*_\nu(\partial^\mu B^\nu-\partial^\nu B^\mu))+ \frac{e^2}{m} |B|^2 A^\mu$$
Similar $e^2$ terms also appear in the Lagrangian itself, as $e^2 A^2 |\phi|^2$. On the other hand, for a bispinor $\psi$ (spin 1/2 massive fermion) we have the current $$j^\mu = \mathcal{J}^\mu = e \bar{\psi} \gamma^\mu \psi$$ which picks up no such term, since it does not contain any derivative $\partial_\mu$.
"Self-charge"
Now consider very slowly moving or even static particles: we have $\partial_0 \phi, \partial_0 B \to \pm im\phi, \pm im B$ and the current is essentially $(\rho,0,0,0)$. For $\phi$ we thus have approximately $$\rho = e (|\phi^+|^2-|\phi^-|^2) + \frac{e^2}{m} (|\phi^+|^2 + |\phi^-|^2) \Phi$$ where $A^0 = \Phi$ is the electrostatic potential and $\phi^\pm$ are the "positive and negative frequency parts" of $\phi$ defined by $\partial_0 \phi^\pm = \pm im \phi^\pm$. A similar term appears for the Proca field.
For the interpretation let us pass back to SI units, in this case we only get a $1/c^2$ factor. The "extra density" is $$\Delta \rho = e\cdot \frac{e \Phi}{mc^2}\cdot |\phi|^2$$ That is, there is an extra density proportional to the ratio of the energy of the electrostatic field $e \Phi$ and the rest mass of the particle $mc^2$. The sign of this extra density is dependent only on the sign of the electrostatic potential and both frequency parts contribute with the same sign (which is superweird). This would mean that
classically, the "bare" charge of bosons in strong electromagnetic fields is not conserved; only this generalized charge is.
After all, it seems a bad convention to call $\mathcal{J}^\mu$ the electric charge current. By multiplying it by $m(c^2)/e$ it becomes a matter density current with the extra term corresponding to mass gained by electrostatic energy. However, that does not change the fact that the "bare charge density" $j^0$ seems not to be conserved for bosons.
Now to the questions:
On a theoretical level, is charge conservation at least temporarily or virtually violated for bosons in strong electromagnetic fields? (Charge conservation will quite obviously not be violated in the final S-matrix, and as an $\mathcal{O}(e^2)$ effect it will probably not be reflected in first order processes.) Is there an intuitive physical reason why such a violation does not arise for fermions even on a classical level? Charged bosons do not have a high abundance in fundamental theories, but they do often appear in effective field theories. Is this "bare charge" non-conservation reflected in them in any way, and does it have associated experimental phenomena? Extra clarifying question: Say we have $10^{23}$ bosons with charge $e$, so that their total charge is $10^{23}e$. Now let us bring these bosons from very far away to very close to each other. As a consequence, they will be in a much stronger field $\Phi$. Does their measured charge change from $10^{23}e$? If not, how do the bosons compensate in terms of $\phi, B, e, m$? If this is different for bosons rather than fermions, is there an intuitive argument why?
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void
I’m not sure to whom the image or the idea is due. Please comment if you have information. (See comments below for current information.)
The rules will naturally generalize those in Connect-Four. Namely, starting from an empty board, the players take turns placing their coins into the $\omega\times 4$ grid. When a coin is placed in a column, it falls down to occupy the lowest available cell. Let us assume for now that the game proceeds for $\omega$ many moves, whether or not the board becomes full, and the goal is to make a connected sequence in a row of $\omega$ many coins of your color (you don’t have to fill the whole row, but rather a connected infinite segment of it suffices). A draw occurs when neither or both players achieve their goals.
In the $\omega\times 6$ version of the game that is shown, and indeed in the $\omega\times n$ version for any finite $n$, I claim that neither player can force a win; both players have drawing strategies.
Theorem. In the game Connect-$\omega$ on a board of size $\omega\times n$, where $n$ is finite, neither player has a winning strategy; both players have drawing strategies. Proof. For a concrete way to see this, observe that either player can ensure that there are infinitely many of their coins on the bottom row: they simply place a coin into some far-out empty column. This blocks a win for the opponent on the bottom row. Next, observe that neither player can afford always to answer those moves on top, since this would lead to a draw, with a mostly empty board. Thus, it must happen that infinitely often we are able to place a coin onto the second row. This blocks a win for the opponent on the second row. And so on. In this way, either player can achieve infinitely many of their coins on each row, thereby blocking every row as a win for their opponent. So both players have drawing strategies. $\Box$
Let me point out that on a board of size $\omega\times n$, where $n$ is odd, we can also make this conclusion by a strategy-stealing argument. Specifically, I argue first that the first player can have no winning strategy. Suppose $\sigma$ is a winning strategy for the first player on the $\omega\times n$ board, with $n$ odd, and let us describe a strategy for the second player. After the first move, the second player mentally ignores a finite left-initial segment of the playing board, which includes that first move and has an odd number of cells altogether in it (and hence an even number of empty cells remaining); the second player will now aim to win on the now-empty right side of the board, by playing as though playing first in a new game, using strategy $\sigma$. If the first player should ever happen to play on the ignored left side of the board, then the second player can answer somewhere there (it doesn't matter where). In this way, the second player plays with $\sigma$ as though he is the first player, and so $\sigma$ cannot be winning for the first player, since in this way the second player would win in this stolen manner.
Similarly, let us argue by strategy-stealing that the second player cannot have a winning strategy on the board $\omega\times n$ for odd finite $n$. Suppose that $\tau$ is a winning strategy for the second player on such a board. Let the first player always play at first in the left-most column. Because $n$ is odd, the second player will eventually have to play first in the second or later columns, leaving an even number of empty cells in the first column (perhaps zero). At this point, the first player can play as though he was the second player on the right-side board containing only that fresh move. If the opponent plays again to the left, then our player can also play in that region (since there were an even number of empty cells). Thus, the first player can steal the strategy $\tau$, and so it cannot be winning for the second player.
I am unsure about how to implement the strategy stealing arguments when $n$ is even. I shall give this more thought. In any case, the theorem for this case was already proved directly by the initial concrete argument, and in this sense we do not actually need the strategy stealing argument for this case.
Meanwhile, it is natural also to consider the $n\times\omega$ version of the game, which has only finitely many columns, each infinite. The players aim to get a sequence of $\omega$-many coins in a column. This is clearly impossible, as the opponent can prevent a win by always playing atop the most recent move. Thus:
Theorem. In the game Connect-$\omega$ on a board of size $n\times\omega$, where $n$ is finite, neither player has a winning strategy; both players have drawing strategies.
Perhaps the most natural variation of the game, however, occurs with a board of size $\omega\times\omega$. In this version, like the original Connect-Four, a player can win by either making a connected row of $\omega$ many coins, or a connected column or a connected diagonal of $\omega$ many coins. Note that we orient the $\omega$ size column upwards, so that there is no top cell, but rather, one plays by selecting a not-yet-filled column and then occupying the lowest available cell in that column.
Theorem. In the game Connect-$\omega$ on a board of size $\omega\times\omega$, neither player has a winning strategy. Both players have drawing strategies. Proof. Consider the strategy-stealing arguments. If the first player has a winning strategy $\sigma$, then let us describe a strategy for the second player. After the first move, the second player will ignore finitely many columns at the left, including that first actual move, aiming to play on the empty right side of the board as though the first player using stolen strategy $\sigma$ (but with colors swapped). This will work fine, as long as the first player also plays on that part of the board. Whenever the first player plays on the ignored left-most part, simply respond by playing atop. This prevents a win in that part of the board, and so the second player will win on the right side by pretending to be first there. So there can be no such winning strategy $\sigma$ for the first player.
If the second player has a winning strategy $\tau$, then as before let the first player always play in the first column, until $\tau$ directs the second player to play in another column, which must eventually happen if $\tau$ is winning. At that moment, the first player can pretend to be second on the subboard omitting the first column. So $\tau$ cannot have been winning after all for the second player. $\Box$
In the analysis above, I was considering the game that proceeded in time $\omega$, with $\omega$ many moves. But such a play of the game may not actually have filled the board completely. So it is natural to consider a version of the game where the players continue to play transfinitely, if the board is not yet full.
So let us consider now the transfinite-play version of the game, where play proceeds transfinitely through the ordinals, until either the board is filled or one of the players has achieved the winning goal. Let us assume that the first player also plays first at limit stages, at $\omega$ and $\omega\cdot 2$ and $\omega^2$, and so on, if game play should happen to proceed for that long.
The concrete arguments that I gave above continue to work for the transfinite-play game on the boards of size $\omega\times n$ and $n\times\omega$.
Theorem. In the transfinite-play version of Connect-$\omega$ on boards of size $\omega\times n$ or $n\times\omega$, where $n$ is finite, neither player can have a winning strategy. Indeed, both players can force a draw while also filling the board in $\omega$ moves. Proof. It is clear that on the $n\times\omega$ board, either player can force each column to have infinitely many coins of their color, and this fills the board, while also preventing a win for the opponent, as desired.
On the $\omega\times n$ board, consider a variation of the strategy I described above. I shall simply always play in the first available empty column, thereby placing my coin on the bottom row, until the opponent also plays in a fresh column. At that moment, I shall play atop his coin, thereby placing another coin in the second row; immediately after this, I also play subsequently in the left-most available column (so as to force the board to be filled). I then continue playing in the bottom row, until the opponent also does, which she must, and then I can add another coin to the second row and so on. By always playing the first-available second-row slot with all-empty second rows to the right, I can ensure that the opponent will eventually also make a second-row play (since otherwise I will have a winning condition on the second row), and at such a moment, I can also make a third-row play. By continuing in this way, I am able to place infinitely many coins on each row, while also forcing that the board becomes filled. $\Box$
Unfortunately, the transfinite-play game seems to break the strategy-stealing arguments, since the play is not symmetric for the players, as the first player plays first at limit stages.
Nevertheless, following some ideas of Timothy Gowers in the comments below, let me show that the second player has a drawing strategy.
Theorem. In the transfinite-play version of Connect-$\omega$ on a board of size $\omega\times\omega$, the second player has a drawing strategy. Proof. We shall arrange either that the second player blocks all possible winning configurations for the first player, or else that both players achieve column wins (which counts as a draw). To block all row wins, the second player will arrange to occupy infinitely many cells in each row; to block all diagonal wins, the second player will aim to occupy infinitely many cells on each possible diagonal; and for the column wins, the second player will aim either to have infinitely many cells on each column or to copy a winning column of the opponent on another column.
To achieve these things, we simply play as follows. Take the columns in successive groups of three. On the first column in each block of three, that is on the columns indexed $3m$, the second player will always answer a move by the first player on this column. In this way, the second player occupies every other cell on these columns—all at the same height. This will block all diagonal wins, because every diagonal winning configuration will need to go through such a cell.
On the remaining two columns in each group of three, columns $3m+1$ and $3m+2$, let the second player simply copy moves of the opponent on one of these columns by playing on the other. These moves will therefore be opposite colors, but at the same height. In this way, the second player ensures that he has infinitely many coins on each row, blocking the row wins. And also, this strategy ensures that in these two columns, at any limit stage, either neither player has achieved a winning configuration or both have.
Thus, we have produced a drawing strategy for the second player. $\Box$
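The column-response rule in this proof can be summarized in a few lines of code (an encoding of my own, identifying columns with natural numbers): on columns $3m$ the second player answers in place, and columns $3m+1$ and $3m+2$ mirror each other.

```python
# Sketch of the second player's drawing strategy from the proof above:
# columns come in groups of three. On columns 3m the second player answers
# a first-player move in the same column (stacking at the same heights);
# on columns 3m+1 and 3m+2 the second player copies the opponent's move
# onto the partner column.
def second_player_response(first_move_column: int) -> int:
    r = first_move_column % 3
    if r == 0:
        return first_move_column       # answer on top, same column
    elif r == 1:
        return first_move_column + 1   # mirror onto the partner column
    else:
        return first_move_column - 1   # mirror back

print(second_player_response(6))   # a 3m column: answer in place
print(second_player_response(7))
print(second_player_response(8))
```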
Thus, there is no advantage to going first. What remains is to determine if the first player also has a drawing strategy, or whether the second player can actually force a win.
Gowers explains in the comments below also how to achieve such a copying mechanism to work on a diagonal, instead of just on a column.
I find it also fascinating to consider the natural generalizations of the game to larger ordinals. We may consider the game of Connect-$\alpha$ on a board of size $\kappa\times\lambda$, for any ordinals $\alpha,\kappa,\lambda$, with transfinite play, proceeding until the board is filled or the winning conditions are achieved. Clearly, there are some instances of this game where a player has a winning strategy, such as the original Connect-Four configuration, where the first player wins, and presumably many other instances.
Question. In which instances of Connect-$\alpha$ on a board of size $\kappa\times\lambda$ does one of the players have a winning strategy?
It seems to me that the groups-of-three-columns strategy described above generalizes to show that the second player has at least a drawing strategy in Connect-$\alpha$ on board $\kappa\times\lambda$, whenever $\alpha$ is infinite.
Stay tuned…
I came upon the term "implied state price density" in a couple of papers. As far as I understand the concept, one basically tries to extract the "pricing density" from the market data.
For the sake of simplicity we assume a constant interest rate $r$ and also don't make any assumptions on the model used to evolve $S_t$. The price of a call is then
$C(t,S_t,K,r,T)=e^{-r(T-t)}\int_0^{\infty}(S_T-K)^+f(S_T|S_t)dS_T$
According to Douglas T. Breeden and Robert H. Litzenberger in their paper Prices of State-Contingent Claims Implicit in Option Prices one can recover the density via the formula:
$p(S_T|S_t)=e^{r(T-t)}\frac{\partial^2 C(t,S_t,K,r,T)}{\partial K^2}|_{K=S_T}$
How does one arrive at this formula? I tried to differentiate $C(t,S_t,K,r,T)$, but according to the rules for differentiating parameter integrals this is not how one arrives at the above formula (what am I missing?)
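For reference, here is a sketch of one standard route, assuming $f$ is continuous so that differentiation under the integral sign (Leibniz rule) applies; the boundary term vanishes at each step because the integrand $(S_T-K)$ is zero at $S_T=K$:

```latex
\frac{\partial C}{\partial K}
  = -e^{-r(T-t)}\int_K^{\infty} f(S_T|S_t)\,dS_T,
\qquad
\frac{\partial^2 C}{\partial K^2}
  = e^{-r(T-t)} f(K|S_t),
```

so that evaluating at $K=S_T$ and multiplying by $e^{r(T-t)}$ recovers the density.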
P.S. You can read the paper online for free at JSTOR after you register. Or just email me and I will send you the pdf-file.
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied by a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are constructing, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
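If useful, the bookkeeping can also be offloaded to the computer; a quick sketch (my own encoding, representing $a+b\sqrt{\delta}$ as the pair $(a,b)$ and using exact rational arithmetic):

```python
from fractions import Fraction
import random

random.seed(1)

def mul(x, y, delta):
    # the quoted rule: (a + b*sqrt(d)) * (c + d'*sqrt(d))
    a, b = x
    c, d = y
    return (a * c + b * d * delta, b * c + a * d)

def rand_pair():
    return (Fraction(random.randint(-9, 9), random.randint(1, 9)),
            Fraction(random.randint(-9, 9), random.randint(1, 9)))

delta = Fraction(2)
ok = all(
    mul(mul(x, y, delta), z, delta) == mul(x, mul(y, z, delta), delta)
    for x, y, z in (tuple(rand_pair() for _ in range(3)) for _ in range(100))
)
print(ok)  # True: associativity holds on all sampled triples
```

This is only spot-checking random triples, of course, not a proof, but it is a fast sanity check before grinding out the symbolic computation.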
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder if I can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = a_n\prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given transcendental $s$, minimising $|P(s)|$ proceeds as follows:
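A toy brute-force sketch of that minimisation (my own illustration, not from the chat: it uses integer coefficients bounded in magnitude and degree, since with purely natural-number coefficients and $s>0$ the minimum would trivially be the constant polynomial $1$):

```python
import itertools
import math

def min_abs_poly(s, max_deg=3, max_coef=3):
    # Brute-force min |P(s)| over nonzero polynomials with integer
    # coefficients in [-max_coef, max_coef] and degree <= max_deg.
    # (A toy stand-in for the optimisation discussed above, not an
    # efficient knapsack encoding.)
    best = None
    coef_range = range(-max_coef, max_coef + 1)
    for coefs in itertools.product(coef_range, repeat=max_deg + 1):
        if not any(coefs):
            continue  # skip the zero polynomial
        value = abs(sum(c * s ** k for k, c in enumerate(coefs)))
        if best is None or value < best:
            best = value
    return best

# for s = pi within these bounds the winner is P(x) = x - 3
m = min_abs_poly(math.pi)
```

Since $s$ is transcendental, no nonzero integer polynomial can make the value exactly zero, so the minimum over any finite search space is strictly positive.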
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity, in the form of natural induction, exists, define for an integer $b \ge 2$ the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums form a monotonically increasing sequence in $M$, and the corresponding series converges by the ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
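As a concrete illustration (my own sketch, not from the chat: using exact rationals so every partial sum is a finite object):

```python
from fractions import Fraction

def liouville_partial_sum(b, M):
    # Exact partial sum sum_{k=1}^{M} 1/b^{k!} as a rational number;
    # every step here is a finite computation.
    total = Fraction(0)
    factorial = 1
    for k in range(1, M + 1):
        factorial *= k
        total += Fraction(1, b ** factorial)
    return total

s3 = liouville_partial_sum(10, 3)  # 1/10 + 1/100 + 1/10**6
s4 = liouville_partial_sum(10, 4)
gap = s4 - s3                      # 1/10**24: the terms shrink super-exponentially
```

The rapidly shrinking gaps are exactly what the usual Liouville-style transcendence argument exploits.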
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome |
Definition 1:- Let $(X,d)$ be a metric space and $A$ a subset of $X$. Then $\text{Boundary}(A)=\{x\in X:$ every open ball centered at $x$ intersects both $A$ and $A^c\}$. Definition 2:- $\overline A=A\cup A'$, where $A'$ is the set of all limit points of $A$. Using these two definitions,
My aim is to prove
$A$ is closed iff $A$ contains its boundary.
Let $A$ be closed $\implies \overline {A}=A\implies \text{Boundary}(A)=\overline A \cap \overline {X\setminus A}=A \cap \overline {X\setminus A}\subseteq A.$ Conversely, suppose $A$ contains its boundary, that is, $\text{Boundary}(A)\subseteq A.$ Let $x\in \overline A$; then we need to prove that $x\in A$. $x\in \overline A \implies$ every open set containing $x$ intersects $A$. How do I complete the proof? |
I have derived a likelihood function for $\theta$ as follows:
$$L(\theta)=(2\pi\theta)^{-n/2} \exp\left(\frac{ns}{2\theta}\right)$$
Where $\theta$ is an unknown parameter, $n$ is the sample size, and $s$ is a summary of the data. I now am trying to show that
$s$ is a sufficient statistic for $\theta$.
In Wikipedia the Fisher–Neyman factorization is described as:
$$f_\theta(x)=h(x)g_\theta(T(x))$$
My first question is about notation. In my problem, I believe what Wikipedia represents as $x$ is $\theta$, and what Wikipedia represents as $\theta$ is $s$. Please confirm whether that sounds right; it's a point of confusion for me.
Which would mean I'm trying to define the following 3 functions to complete the factorization and confirm that $s$ is sufficient for $\theta$
$$T(\theta)$$ $$g_s(T(\theta))$$ $$h(\theta)$$
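For what it's worth, here is one possible reading of the factorization for the likelihood above (an illustration on my part, treating $x$ as the data vector, which enters only through $s = T(x)$, while $\theta$ stays as the parameter):

$$f_\theta(x) = \underbrace{1}_{h(x)} \cdot \underbrace{(2\pi\theta)^{-n/2}\exp\left(\frac{n\,T(x)}{2\theta}\right)}_{g_\theta(T(x))}, \qquad T(x) = s$$

Here $h$ depends on the data alone and $g_\theta$ depends on the data only through $T(x)$, which is exactly what the factorization requires.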
But by this point I feel like I've done something wrong, and I'm not really understanding why this factorization is demonstrating sufficiency. I don't really see what is going on with $g$ and $T$. |
Towards Understanding the Origin of Cosmic-Ray Electrons
We present the precision measurement of the electron flux with a particular emphasis on the behavior at high energies. The measurement is based on 28.1 million electron events collected by AMS from May 19, 2011 to November 12, 2017. This corresponds to a factor of three increase in statistics and a factor of two increase in the energy range compared to our results published in 2014 [PRL 113, 121102 (2014)]. The latest precision results on cosmic-ray electrons up to 1.4 TeV reveal new and unexpected features. Our latest data on cosmic-ray electrons and positrons are crucial for providing insights into the origins of high energy cosmic-ray electrons and positrons.
We found that in the entire energy range the electron and positron spectra have distinctly different magnitudes and energy dependences. The electron flux exhibits a significant excess starting from $ 42.1_{-5.2}^{+5.4} $ GeV compared to the lower energy trends, but the nature of this excess is different from the positron flux excess above $25.2 \pm 1.8$ GeV. Contrary to the positron flux, which has an exponential energy cutoff of $810_{-180}^{+310}$ GeV, the electron flux does not exhibit an energy cutoff below 1.9 TeV.
Figure 1 shows the latest AMS results on the precision measurements of the electron spectrum and the positron spectrum from the most recent AMS data. These measurements are based on 28.1 million electron and 1.9 million positron events.
Figure 2 shows the latest AMS results on the precision measurements of the electron spectrum. The other most recent measurements are shown for comparison.
Similar to the analysis of the positron flux, we examine the changing behavior of the electron spectrum using the same power law approximation
$$ \label{eq:1} \Phi_{e^{-}}(E)= \begin{cases} C(E/20.04\mbox{ GeV})^{\gamma}, & E \leq E_{0}; \\ C(E/20.04\mbox{ GeV})^{\gamma}(E/E_{0})^{\Delta\gamma}, & E > E_{0}. \end{cases} \tag{1} $$
A fit to data is performed in the energy range [20.04−1400] GeV. The results are presented in Figure 3. The fit yields $E_{0} = 42.1_{-5.2}^{+5.4}$ GeV for the energy where the spectrum behavior changes. The significance of this change is established at 7σ.
To examine the energy dependence of the electron flux in a model independent way, the flux spectral index $\gamma$ is calculated from $\gamma = d[\log(\Phi)]/d[\log(E)]$ over non-overlapping energy intervals which are chosen to have sufficient sensitivity to the spectral index. The energy interval boundaries are 3.36, 5.00, 7.10, 10.32, 17.98, 27.25, 55.58, 90.19, 148.81, 370 and 1400 GeV. The results are presented in Figure 4 together with the positron results. As seen, both the electron and positron indices decrease (soften) rapidly with energy below ∼10 GeV, and then they both start increasing (harden) above ∼20 GeV. In particular, the electron spectral index increases from $\gamma = −3.295\pm0.026$ in the energy range [17.98 − 27.25] GeV to an average $\gamma = −3.180\pm0.008$ in the range [55.58 − 1400] GeV, where it is nearly energy independent.
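The model-independent estimate can be sketched as a least-squares slope in log–log space over each interval (a minimal illustration with synthetic numbers, not AMS data):

```python
import math

def spectral_index(energies, fluxes):
    # Least-squares slope of log(flux) vs log(energy) over one interval,
    # i.e. gamma = d[log(Phi)]/d[log(E)] for an approximate power law.
    xs = [math.log(e) for e in energies]
    ys = [math.log(f) for f in fluxes]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# synthetic check: a pure power law should return its exponent
gamma_true = -3.18
energies = [60.0, 100.0, 200.0, 500.0, 1000.0]
fluxes = [2.0e-3 * e ** gamma_true for e in energies]
gamma_fit = spectral_index(energies, fluxes)
```

In practice each interval must contain enough events that the fitted slope is statistically meaningful, which is why the boundaries above are not equally spaced.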
As seen in Figure 4, the behavior of the electron and positron spectral indices is distinctly different.
New sources of high energy positrons, such as dark matter, may also produce an equal amount of high energy electrons. We test this hypothesis using the source term from our positron analysis. The electron flux is parametrized as a sum of a power law component and the positron source term with the exponential energy cutoff:
$$ \label{eq:2} \Phi_{e^{-}}(E)= C_{e^{-}}(E/E_{1})^{\gamma_{e^{-}}} + f_{e^{-}}C_{S}^{e^{+}}(E/E_{2})^{\gamma_{S}^{e^{+}}}\exp(-E/E_{S}^{e^{+}}) \tag{2} $$
The power law component is characterized by the normalization factor $C_{e^{-}}$ and the spectral index $\gamma_{e^{-}}$. The constant $E_1= 41.61$ GeV corresponds to the beginning of the fit range; it does not affect the fitted value of $\gamma_{e^{-}}$. The values of the source term parameters $C_{S}^{e^{+}}$, $\gamma_{S}^{e^{+}}$, $E_2$, and $E_{S}^{e^{+}}$ are taken from positron data. A fit to the data with the source term normalization $f_{e^{−}}$ fixed to 1 is performed in the energy range [41.61 − 1400] GeV, where the solar modulation effects are negligible. It yields $C_{e^{-}} = (1.965 \pm 0.010) \times 10^{−3} [ \mathrm{m^{2}\,sr\,s\,GeV}]^{-1}$ and $\gamma_{e^{-}} = −3.248 \pm 0.007$ for the power law component with $\chi^2/\mathrm{d.o.f.} = 15.5/24$. The result of the fit is presented in Figure 5(a).
A similar fit of Eq. (\ref{eq:2}) to data, but with $f_{e^{-}}$ fixed to 0, yields $C_{e^{-}} = (2.124 \pm 0.010) \times 10^{−3} [\mathrm{m^{2}\,sr\,s\,GeV}]^{-1}$ and $\gamma_{e^{-}} = −3.186 \pm 0.006$ with $\chi^2/\mathrm{d.o.f.} = 15.2/24$. The result of this fit is presented in Fig. 5(b). Varying the normalization of the source term $f_{e^{-}}$ as a free fit parameter does not improve the $\chi^2$ and yields $f_{e^{-}} = 0.5_{-0.6}^{+1.2}$ . As seen in Figures 5 (a) and (b) the data are consistent both with the charge symmetric positron source term ($f_{e^{-}} = 1$ in Eq. (\ref{eq:2})) and also with the absence of such a term ($f_{e^{-}} = 0$). Therefore, it is not possible to extract any additional information on the existence and properties of the source term with the electron flux.
To investigate the existence of a finite energy cutoff as seen in the positron flux, the electron flux is fitted with
$$ \label{eq:3} \Phi_{e^{-}}(E)= C_{S}(E/41.61\mbox{ GeV})^{\gamma_{S}}\exp(-E/E_{S}) \tag{3} $$
A fit to data in the energy range [41.61, 1400] GeV yields the inverse cutoff energy $1/E_{s} = 0.00_{-0.00}^{+0.08}$ TeV$^{-1}$ with $\chi^2/\mathrm{d.o.f.} = 15.2/23$. A study of the cutoff significance shows that $E_{s} < 1.9$ TeV is excluded at the 5σ level. These results are presented in Figure 6.
In addition to a small contribution of secondary electrons produced in the collisions of ordinary cosmic rays with the interstellar gas, there are several astrophysical sources of primary cosmic-ray electrons. It is assumed that there are only a few astrophysical sources of high energy electrons in the vicinity of the solar system each making a power law-like contribution to the electron flux. In addition, there are several physics effects which may introduce some spectral features in the original fluxes. Therefore, it is important to know the minimal number of distinct power law functions needed to accurately describe the AMS electron flux.
We found that in the entire energy range [0.5 − 1400] GeV the electron flux is well described by the sum of two power law components:
$$ \label{eq:4} \Phi_{e^{-}}(E)= \dfrac{E^{2}}{\hat{E}^{2}}[1+(\hat{E}/E_{t})^{\Delta\gamma_{t}}]^{-1}[C_{a}(\hat{E}/E_{a})^{\gamma_{a}}+C_{b}(\hat{E}/E_{b})^{\gamma_{b}}] \tag{4} $$
The two components, $a$ and $b$, correspond to two power law functions. To account for solar modulation effects, the force-field approximation is used, with the energy of particles in the interstellar space $\hat{E}=E+\varphi_{e^{-}}$ and the effective modulation potential $\varphi_{e^{-}}$. The additional transition term, $[1+(\hat{E}/E_{t})^{\Delta\gamma_{t}}]^{-1}$, has vanishing impact on the flux behavior at energies above $E_{t}$ (e.g. < 0.7% above 40 GeV). A fit to the data in the energy range [0.5−1400] GeV is presented in Figure 7. We conclude that in the energy range [0.5 − 1400] GeV the sum of two power law functions provides an excellent description of the data with $\chi^{2}/\mathrm{d.o.f.} = 36.5/68$.
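Eq. (4) is straightforward to evaluate numerically; the sketch below implements the model with illustrative placeholder parameters (not the AMS fit values):

```python
import math

def electron_flux_model(E, phi, Et, dgt, Ca, Ea, ga, Cb, Eb, gb):
    # Eq. (4): two power-law components with a low-energy transition term
    # and force-field solar modulation. All parameter values fed in below
    # are illustrative placeholders, not the AMS fit results.
    Ehat = E + phi                                 # interstellar energy E + modulation potential
    transition = 1.0 / (1.0 + (Ehat / Et) ** dgt)  # tends to 1 for Ehat >> Et when dgt < 0
    comp_a = Ca * (Ehat / Ea) ** ga
    comp_b = Cb * (Ehat / Eb) ** gb
    return (E ** 2 / Ehat ** 2) * transition * (comp_a + comp_b)

params = dict(phi=1.0, Et=4.0, dgt=-2.0, Ca=1e-2, Ea=20.0, ga=-3.6,
              Cb=5e-4, Eb=300.0, gb=-3.1)
flux_100 = electron_flux_model(100.0, **params)
flux_1000 = electron_flux_model(1000.0, **params)
```

With any softly falling exponents the model flux decreases with energy, and the transition factor only matters near and below $E_t$.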
An analysis of the individual components in the electron flux, namely the power law $a$ and $b$ terms, is presented in Figures 8 and 9 together with the corresponding positron data. As seen in Figure 8, at low energies positrons come from cosmic ray collisions, electrons do not. As seen in Figure 9, the positron source term has a cutoff, whereas electrons have neither the source term nor the cutoff.
In the entire energy range the electron and positron spectra have distinctly different magnitudes and energy dependences. The different behavior of the cosmic-ray electrons and positrons measured by AMS is clear evidence that most high energy electrons originate from different sources than high energy positrons. |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternatively, compute the operator norm of $A$ and see if it is larger or smaller than $2$ or $1/2$
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
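That shortcut can also be machine-checked; here is a quick sketch (the class name and setup are my own, not from the chat):

```python
from fractions import Fraction

class QuadExt:
    # Element a + b*sqrt(delta) of Q(sqrt(delta)); a toy sketch for
    # spot-checking the multiplication rule discussed above.
    def __init__(self, a, b, delta):
        self.a, self.b, self.delta = Fraction(a), Fraction(b), delta

    def __mul__(self, other):
        # (a + b sqrt(d))(c + d' sqrt(d)) = (ac + bd'·delta) + (bc + ad') sqrt(delta)
        a, b, c, d, delta = self.a, self.b, other.a, other.b, self.delta
        return QuadExt(a * c + b * d * delta, b * c + a * d, delta)

    def __eq__(self, other):
        return (self.a, self.b, self.delta) == (other.a, other.b, other.delta)

# spot-check associativity on a few elements of Q(sqrt(2))
x = QuadExt(1, 2, 2)
y = QuadExt(3, -1, 2)
z = QuadExt(Fraction(1, 2), 5, 2)
assoc = ((x * y) * z) == (x * (y * z))
```

A spot check is of course weaker than the symbolic argument, but it catches sign slips in the multiplication rule quickly.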
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
revised
It is not always (perhaps not even "usually") the case that "most" pairs of vertices are "much" less than the diameter apart. Based on meager experience (see below), it seems common that "most" pairs are "almost" the diameter apart, but relatively few are exactly that far apart. Asking that the average distance (ad) or the square root of the average squared distance (rasd) be very close to the diameter (either with difference less than 1 or with ratio going to 1) is a much stronger condition.
A small experiment I tried (for degree 3 or 4) was: choose two permutations of $t$ objects, then look at the group and Cayley graph determined by these two (and their inverses.) For my choices (all with $t \le 9$) the group was usually $A_t$ or $S_t.$ The ratio of the ad and rasd to the diameter seemed (in the best cases) to increase with $t.$ Three examples:
The permutations $(1 2)(3 4) (5 6)$ and $(1 7 8 5 3 4 6 9 2)$ generate $S_9$ creating a graph of degree $3$ with $9!=362880$ vertices and diameter $22.$ The distribution of distances is $$3, 6, 12, 24, 46, 90, 176, 344, 672, 1310, 2531, 4867, 9270, 17201, 30867, 51354$$ $$ 75493, 86173, 61359, 19347, 1699, 35$$ leading to $17.03$ and $ 17.14$ for the ad and rasd (this was the best of 5 trials)
The permutations $(1 2 3 4 5 6 7 8)$ and $( 1 6 7 2 3 5 8 4)$ generate $S_8$ creating a graph of degree $4$ with $8!=40320$ vertices and diameter $13.$ The distribution of distances is $$4, 12, 34, 94, 250, 648, 1642, 3939, 8275, 12468, 9843, 2998, 112$$ leading to $9.76$ and $ 9.86$ for the ad and rasd (this was the best of 15 trials)
Two random 7-cycles will usually generate $A_7$ (2520 vertices, degree 4). Out of 100 trials, 6 came out with diameter $9$ and distance distribution $4, 12, 34, 92, 252, 573, 936, 582, 34$ giving ad and rasd of $6.6336$ and $ 6.7456.$ (and none were better.) Perhaps someone can figure out (rather than observe from random trials) what optimum choices are for two $t$-cycles or a $t$-cycle and an involution.
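These experiments are easy to reproduce with a BFS over the Cayley graph; a sketch follows (the two 7-cycles below are my own arbitrary picks, so the group and distances they produce need not match the best cases quoted above):

```python
from collections import deque

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations as tuples of 0-based images
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def distance_distribution(gens):
    # BFS from the identity over the Cayley graph generated by gens
    # and their inverses; returns {distance: vertex count}.
    n = len(gens[0])
    identity = tuple(range(n))
    step = list(gens) + [inverse(g) for g in gens]
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        v = queue.popleft()
        for g in step:
            w = compose(g, v)
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    counts = {}
    for d in dist.values():
        counts[d] = counts.get(d, 0) + 1
    return counts

a = (1, 2, 3, 4, 5, 6, 0)  # the 7-cycle (0 1 2 3 4 5 6)
b = (1, 3, 5, 2, 6, 4, 0)  # another 7-cycle, chosen arbitrarily
counts = distance_distribution([a, b])
n_vertices = sum(counts.values())
ad = sum(d * c for d, c in counts.items()) / n_vertices
rasd = (sum(d * d * c for d, c in counts.items()) / n_vertices) ** 0.5
```

Since 7-cycles are even permutations, whatever group this generates is a subgroup of $A_7$, so its order divides 2520.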
Following a suggestion by @Gerhard: A Rubik's cube (with unmarked center faces) is known to have $n=43,252,003,274,489,856,000$ positions. There are $18$ basic moves (counting a half turn as a single move). This defines a vertex-transitive graph of degree 18. It was long suspected, and now is known, that this graph has diameter 20. The team which proved this released numbers showing approximately how many nodes are at distance $d$ from a given one. The numbers (being approximate) do not add exactly to the correct value of $n$, although they come close. After $1, 18$, each number is roughly $13$ times the previous until the last few, where the ratios are about $12$, $2.5$, $0.05$ and finally $2\cdot10^{-10}$. Using the given numbers, the ad and rasd are both about $17$.
Here are some final comments on fixed degree: fix a degree $k$, suppose only that a graph is regular of degree $k$, let $x$ be a particular vertex, and suppose all vertices are within $m$ of $x.$ The number at distance $d$ is at most $k(k-1)^{d-1}.$ It seems clear that we want to maximize the number of vertices in the graph to have a relatively large average (squared) distance from $x$. If this count were exact for all $d \le m$ then the proportion at maximal distance would be close to $1-\frac1k.$ Then, for fixed $k$, the average distance and average squared distance would exceed $(1-\epsilon)m$ and $(1-\epsilon)m^2$ for large enough $m$. However, the diameter of the entire graph might be as large as $2m.$ A diameter of $m$ can only happen in a very few cases, and not much is known about how close one can come to this in general. This paper about the degree-diameter problem mentions a lower bound of $(\frac{k}{1.57})^m$ for the number of vertices. However, I don't know the number of vertices at maximal distance. |
I think my question is similar to this one, but different in that I consider a set of realisations, not only one. Sorry if this question is really easy, I'm just not sure how to go rigorously about it.
Say I have $N$ realisations from a multivariate normal distribution $\mathcal{N}(\mu,\Sigma)$. Intuitively, I would expect that the larger $N$, the more likely I would be to get at least one realisation lying within $\epsilon$ s.d. of the mean $\mu$. I feel that this mixes discrete and continuous probabilities, and I need help to understand how to come up with what I guess is an expectation formula in this case.
Note that while it is clear what "within $\epsilon$ s.d." means in the one-dimensional case, the multidimensional counterpart corresponds to "within the range $[0,\epsilon]$ of the Mahalanobis distance".
If I had to have a go at it, I would find the probability of none being within, as the probability of NOT being within (one minus the cumulative density at $\epsilon$ s.d.), raised to the power $N$, or:
$$ \left( \frac{1-\mathrm{erf}(\epsilon)}{2} \right)^N $$
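A one-dimensional sanity check of this line of reasoning (stdlib only, my own sketch): for a standard normal, $P(|Z|\le\epsilon) = \mathrm{erf}(\epsilon/\sqrt{2})$, which is the quantity to raise to the power $N$ in complement form — worth comparing against the expression above.

```python
import math
import random

def p_within(eps):
    # P(|Z| <= eps) for a one-dimensional standard normal
    return math.erf(eps / math.sqrt(2))

def p_at_least_one(eps, n):
    # 1 - P(all n i.i.d. draws miss the eps-band around the mean)
    return 1.0 - (1.0 - p_within(eps)) ** n

# Monte Carlo cross-check of the "one minus (none)^N" argument
random.seed(0)
eps, n, trials = 0.5, 10, 20000
hits = sum(
    any(abs(random.gauss(0.0, 1.0)) <= eps for _ in range(n))
    for _ in range(trials)
)
empirical = hits / trials
analytic = p_at_least_one(eps, n)
```

The same structure carries over to the multivariate case, with the Mahalanobis-distance probability (a chi distribution) replacing `p_within`.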
So the probability of at least one would be one minus that; is that right? |
I need to calculate the length of a curve $y=2\sqrt{x}$ from $x=0$ to $x=1$.
So I started by taking $\int\limits^1_0 \sqrt{1+\frac{1}{x}}\, \text{d}x$, and then doing substitution: $\left[u = 1+\frac{1}{x}, \text{d}u = \frac{-1}{x^2}\text{d}x \Rightarrow -\text{d}u = \frac{1}{x^2}\text{d}x \right]^1_0 = -\int\limits^1_0 \sqrt{u} \,\text{d}u$ but this obviously will not lead to the correct answer, since $\frac{1}{x^2}$ isn't in the original formula.
Wolfram Alpha is doing a lot of steps for this integration, but I don't think that many steps are needed.
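As a numerical sanity check on whatever antiderivative you end up with (my own sketch, not the intended pen-and-paper route): the substitution $x = t^2$ turns $\sqrt{1+1/x}\,dx$ into $2\sqrt{1+t^2}\,dt$, removing the singularity at $x=0$.

```python
import math

def integrand(t):
    # after substituting x = t**2, sqrt(1 + 1/x) dx becomes
    # 2*sqrt(1 + t**2) dt, which is smooth on [0, 1]
    return 2.0 * math.sqrt(1.0 + t * t)

def simpson(f, a, b, n=1000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

numeric = simpson(integrand, 0.0, 1.0)
exact = math.sqrt(2) + math.log(1 + math.sqrt(2))  # closed form, for comparison
```

Any correct symbolic route should land on the same value, roughly 2.2956.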
How would I start with this integration? |
The Characteristic Polynomial of a Matrix
Recall from The Eigenvalues of a Matrix page that if $A$ is an $n \times n$ matrix, then the number $\lambda$ is said to be an eigenvalue of $A$ if there exists a nonzero vector $v$ such that $Av = \lambda v$. Furthermore, the vectors $v$ for which $Av = \lambda v$ are called the corresponding eigenvectors to the eigenvalue $\lambda$.
Now suppose that $\lambda$ is an eigenvalue of the matrix $A$. Then $Av = \lambda v$ for some nonzero vector $v$. We can rewrite this equation as:

(1)
$$(\lambda I - A)v = 0$$
We note that the above equation represents a homogeneous system of linear equations. Furthermore, note that $v$ is a nonzero vector. So if $\lambda I - A$ were an invertible matrix, then the only solution to this system would be the trivial solution, namely $v = 0$, which cannot happen. Therefore $\lambda I - A$ is not invertible, and so $\mathrm{det} (\lambda I - A) = 0$. This equation (which produces a polynomial in $\lambda$) is extremely useful for finding the eigenvalues of a matrix, and we formally define it below.
Definition: If $A$ is an $n \times n$ matrix, then the Characteristic Polynomial of $A$ is the function $f(\lambda) = \mathrm{det} (\lambda I - A)$.
For example, consider the matrix $A = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}$. Now let's form the matrix $\lambda I - A$:

(2)
$$\lambda I - A = \begin{bmatrix} \lambda - 1 & -2\\ -3 & \lambda - 4 \end{bmatrix}$$
Now let's set the determinant of this matrix equal to zero:

(3)
$$\mathrm{det}(\lambda I - A) = (\lambda - 1)(\lambda - 4) - 6 = \lambda^2 - 5\lambda - 2 = 0$$
The resulting eigenvalues are the roots of the polynomial above, which can be computed using the quadratic formula. They are $\lambda_1 = \frac{5 - \sqrt{33}}{2}$ and $\lambda_2 = \frac{5 + \sqrt{33}}{2}$, as you should verify.
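The quadratic-formula step is easy to script for the $2 \times 2$ case (a small sketch; the function names are mine):

```python
import math

def char_poly_2x2(a, b, c, d):
    # det(lambda*I - A) = lambda^2 - (a+d)*lambda + (a*d - b*c)
    trace = a + d
    det = a * d - b * c
    return trace, det

def eigenvalues_2x2(a, b, c, d):
    trace, det = char_poly_2x2(a, b, c, d)
    disc = trace * trace - 4 * det  # discriminant of lambda^2 - trace*lambda + det
    root = math.sqrt(disc)          # assumes real eigenvalues (disc >= 0)
    return (trace - root) / 2, (trace + root) / 2

# for A = [[1, 2], [3, 4]] these are the roots (5 -/+ sqrt(33))/2
lam1, lam2 = eigenvalues_2x2(1, 2, 3, 4)
```

Each returned value should make the determinant expression vanish, which is a quick way to verify the roots.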
Now the following theorem gives us an upper bound for the number of eigenvalues that a square $n \times n$ matrix can have.
Theorem 1: If $A$ is an $n \times n$ matrix then $A$ has at most $n$ distinct eigenvalues.
In general, if we have a relatively small matrix $A$, then computing the corresponding characteristic polynomial to find the eigenvalues of $A$ is manageable. However, for very large matrices this method is not very practical because the number of computations grows extremely fast. For example, consider a $100 \times 100$ matrix. In expanding the determinant of such a matrix by cofactors, we would need to compute determinants of $100$ $99 \times 99$ matrices, and for each $99 \times 99$ matrix, we would need to compute the determinants of $99$ $98 \times 98$ matrices, and so forth. Even after all of the additions, subtractions, etc. associated with this process, we would end up with a polynomial of degree $100$ which could be very complicated to solve.
Of course, there are ways to work around this. One such way is to reduce $A$ to an upper triangular matrix via similarity transformations, from which the eigenvalues can be read off the diagonal immediately. We will look at other methods later on.
The Open Mapping Theorem
Recall from the Open and Closed Mappings page that if $X$ and $Y$ are topological spaces then a function $f : X \to Y$ is said to be an open mapping if for every open set $U$ in $X$ the image $f(U)$ is an open set in $Y$.
We are now ready to prove the very important Open Mapping theorem.
Theorem 1 (The Open Mapping Theorem): Let $X$ and $Y$ be Banach spaces and let $T : X \to Y$ be a bounded linear operator. Then the range $T(X)$ is closed if and only if $T$ is an open mapping.

Proof: From the theorem on the Second IFF Criterion for the Range of a BLO to be Closed when X and Y are Banach Spaces page, if $X$ and $Y$ are Banach spaces and $T : X \to Y$ is a bounded linear operator, then the range $T(X)$ is closed if and only if there exists a positive constant $M' \in \mathbb{R}$, $M' > 0$, such that for all $x \in X$ with $\| T(x) \| < 1$ there exists an $x' \in X$ such that $T(x) = T(x')$ and $\| x' \| < M'$. So if $\displaystyle{r = \frac{1}{M'} > 0}$ then, equivalently, $T(X)$ is closed if and only if for every $x \in X$ with $\| T(x) \| < r$ there exists an $x' \in X$ such that $T(x) = T(x')$ and $\| x' \| < 1$.

$\Rightarrow$ Suppose that $T(X)$ is closed. Then the condition above holds. Let $U$ be any open set in $X$. We want to show that $T(U)$ is an open set in $T(X)$. Since $U$ is open in $X$, for every $u \in U$ there exists an open ball centered at $u$ fully contained in $U$; in other words, there exists an $r_u > 0$ such that $B(u, r_u) \subseteq U$. Let $x \in X$ be such that $\| T(x) - T(u) \| < r r_u$. Then $\left \| T \left( \frac{x - u}{r_u} \right) \right \| < r$, so by the (rescaled) condition above there exists an $x' \in X$ such that $T(x') = T(x - u) = T(x) - T(u)$ and $\| x' \| < r_u$. Therefore $\| (x' + u) - u \| = \| x' \| < r_u$, so $(x' + u) \in B(u, r_u)$, and $T(x) = T(x') + T(u) = T(x' + u) \in T(B(u, r_u))$. But since $B(u, r_u) \subseteq U$ we have that $T(B(u, r_u)) \subseteq T(U)$, and therefore $\{ y \in T(X) : \| y - T(u) \| < r r_u \} \subseteq T(U)$. So every point of $T(U)$ has a relatively open ball around it contained in $T(U)$; hence $T(U)$ is open in $T(X)$. Since this holds for every open set $U$ in $X$, $T$ is an open mapping.

$\Leftarrow$ Suppose that $T$ is an open mapping. Let $B_X$ denote the open unit ball in $X$. Then $T(B_X)$ is open in $T(X)$. Since $0 \in T(B_X)$, there exists an $r > 0$ such that $\{ y \in T(X) : \| y \| < r \} \subseteq T(B_X)$. So if $\| T(x) \| < r$ then $T(x) \in T(B_X)$, and hence there exists an $x' \in B_X$ such that $T(x) = T(x')$ and $\| x' \| < 1$. By the criterion above, $T(X)$ is closed. $\blacksquare$
A homework problem I recall from functional analysis was to prove that the weak closure of the unit sphere, $S$, in an
infinite-dimensional real normed vector space is the unit ball, $B$.
Looking back at what I turned in, I argued as follows:
Note that $S$ would be weakly dense in $B$ if, for any nonempty (relatively) weakly open subset $U\subset B$, one has $S\cap U\neq\emptyset$. Let $U$ be such a subset and let $x_{0}\in U\subset B$. Fixing $\epsilon>0$ and $x^{*}\in X^{*}$, one has, by continuity, that the inverse image $$V_{*}^{\epsilon}:=(x^{*})^{-1}[(\langle x^{*},x_{0}\rangle-\epsilon,\langle x^{*},x_{0}\rangle+\epsilon)]$$ is weakly open, and hence, $U\cap V_{*}^{\epsilon}$ is (relatively) weakly open in $B$, and contains $x_{0}$. As long as $x^{*}$ does not vanish identically, its kernel has codimension $1$, so since $\text{dim}(X)=\infty$, one must have that $\text{ker}(x^{*})$ is nontrivial. Then, finding a nonzero $\xi\in\text{ker}(x^{*})$, one has $$x_{0}+t\xi\in S$$ for some $t\in\mathbb{R}$. Finally, this yields $$|\langle x^{*},x_{0}\rangle-\langle x^{*},x_{0}+t\xi\rangle|=|t|\cdot|\langle x^{*},\xi\rangle|=0<\epsilon$$ which means $x_{0}+t\xi\in V_{*}^{\epsilon}$.
Now, I have two questions:
If we knew that $V_{*}^{\epsilon}\subset U$, we'd be done. Why can we assume this? (It seems in some of the proofs I've seen elsewhere, this is assumed WLOG) Why do we need $\text{dim}(X)=\infty$? We are using the fact that $$X/\text{ker}(x^{*})\cong\mathbb{R}$$ so if the kernel were trivial, wouldn't this still be a contradiction as long as $\text{dim}(X)\geq 2$? |
$$\int f(x)\, dx \sim \sum_i^k f(x_i) \Delta x$$
$$
\begin{align*}
\Delta x & = \int \dot{x}(t) dt \\
& \sim \dot{x}(t) \Delta t
\end{align*}
$$
These are all special cases of "Deterministic Quadratures", where the integration step size $\Delta x$ is non-random ("deterministic") and we are summing up a bunch of quadrilateral pieces ("Quadratures") to approximate areas.
This post interprets the numerical integration as a special case of stratified sampling, and shows that deterministic quadrature rules are statistically biased estimators.
Suppose we want to compute $E_X[f(X)] = \int_x p(x)\cdot f(x) dx$ via a Monte Carlo estimator.
A plain Monte Carlo estimator is given by $\mu = \frac{1}{n}\sum_i^n f(X_i)$, and has variance $\frac{1}{n}\sigma^2$ where $\sigma^2=\mathrm{Var}[f(X_i)]$.
Stratified sampling (see previous blog post) introduces a stratification variable $Y$ with a discrete distribution of $k$ strata with probability densities $p(y_1),p(y_2),...,p(y_k)$, respectively. A common choice is to assign each $p(y_1) = p(y_2) = ... = p(y_k) = 1/k$, and arrange the strata in a grid. In this case, $y_i$ correspond to the corners of the strata and $P(X|Y=y_i)$ corresponds to a uniform distribution over that square ($X_i = Y_i + 0.1*\text{rand}()$).
The variance of this estimator is $\sum_i^k p(y_i)^2 \cdot V_i$, where $V_i$ is the variance of the per-stratum estimator of $\mathbb{E}[f(X)|Y=y_i]$ for stratum $i$.
Suppose we estimate $\mathbb{E}[X|Y=y_i]$ by $y_i$ itself; that is, we just sample at the corners where the $y_i$ are.
The good news is that $V_i = 0$, and we've reduced the variance of the total estimator to zero. The bad news is that the estimator is now biased, because our per-stratum estimators $\mathbb{E}[X|Y=y_i]$ are biased (our estimator never uses information from the interiors/edges of the squares). The estimator is also inconsistent for any fixed, finite number of strata: drawing more samples never removes the bias.
Does this figure remind you of anything? This estimator behaves identically to a Riemann Summation in 2D!
What does this mean?
If you ever take a Riemann sum with fixed step size, your integral is biased! If you are monitoring mean ocean temperatures and place your sensors in a regular grid across the water (like above), your estimator is biased! If you are recording video at a too-low frequency and pick up aliasing artifacts, your estimates are biased! If you use a fixed timestep in a physics simulation, such as Euler or Runge-Kutta methods, your space integrals are biased!
The takeaway here is that *all* deterministic quadrature rules are statistically biased. That's okay, though: the values of these biased estimators *do* converge to the true expectation as you crank the number of strata to infinity (e.g. $\Delta x \to 0$).
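To make the bias concrete, here is a small one-dimensional illustration (my own example with $f(x)=x^2$ on $[0,1]$, not from the post): corner sampling is exactly a left Riemann sum and under-estimates $\int_0^1 x^2\,dx = 1/3$ for any finite number of strata, while jittering the sample location within each stratum keeps the estimate unbiased.

```python
import random

# Estimate E[f(U)] = ∫_0^1 f(x) dx for f(x) = x² (true value 1/3) with k strata.
f = lambda x: x * x
k = 10
h = 1.0 / k
true_value = 1.0 / 3.0

# Deterministic "corner" estimator = left Riemann sum: biased for finite k.
riemann = sum(f(i * h) * h for i in range(k))

# Stratified estimator: one uniform sample per stratum; unbiased in expectation.
random.seed(0)
reps = 2000
stratified_mean = sum(
    sum(f((i + random.random()) * h) * h for i in range(k))
    for _ in range(reps)
) / reps

print(riemann, stratified_mean)  # 0.285 (biased low) vs ≈ 0.333
```

With $k=10$ the left Riemann sum gives exactly $0.285$ no matter how many times you "rerun" it, while the jittered estimator averages to $1/3$ over repetitions.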
A "less-biased" physics simulator ought to incorporate random time steps (for instance, sampled between 0 and MAX_TIME_STEP). To reduce variance, one might generate multiple samples, compute a nonlinear update for each, then average the results.
Fairly obvious in hindsight, but I found it interesting to think about the relationship between stratified sampling and deterministic quadrature :) |
Degree of irreducible representation divides order of group
Latest revision as of 13:03, 14 October 2018

This article gives the statement, and possibly proof, of a constraint on numerical invariants that can be associated with a finite group. It states a result of the form that one natural number divides another: specifically, the degree of an irreducible linear representation divides the order of the group. This fact is related to linear representation theory.

Statement
Let $G$ be a finite group and $\varphi$ an irreducible representation of $G$ over an algebraically closed field of characteristic zero (or, more generally, over any splitting field of characteristic zero for $G$). Then, the degree of $\varphi$ divides the order of $G$.
Related facts

Other facts about degrees of irreducible representations
Further information: Degrees of irreducible representations
- Degree of irreducible representation divides index of center
- Degree of irreducible representation divides index of abelian normal subgroup
- Order of inner automorphism group bounds square of degree of irreducible representation
- Number of irreducible representations equals number of conjugacy classes
- Sum of squares of degrees of irreducible representations equals order of group

Similar fact about irreducible projective representations

Breakdown for a field that is not algebraically closed
Let $G$ be the cyclic group of order three and let the field be $\mathbb{R}$, the real numbers. Then, there are two irreducible representations of $G$ over $\mathbb{R}$: the trivial representation, and a two-dimensional representation given by the action by rotation by multiples of $2\pi/3$. The two-dimensional representation has degree $2$, and this does not divide the order of the group, which is $3$.
We still have the following results:
- Degree of irreducible representation over reals divides twice the group order
- Degree of irreducible representation over any field divides product of order and Euler totient function of exponent
- Degree of irreducible representation of nontrivial finite group is strictly less than order of group
- Maximum degree of irreducible real representation is at most twice maximum degree of irreducible complex representation

Facts used

The table below lists key facts used directly and explicitly in the proof. Fact numbers as used in the table may be referenced in the proof. This table need not list facts used indirectly, i.e., facts that are used to prove these facts, and it need not list facts used implicitly through assumptions embedded in the choice of terminology and language.
Fact 1 (Character orthogonality theorem). The part relevant for us: for an irreducible representation over a splitting field of characteristic zero with character $\chi$, $\sum_{g \in G} \chi(g)\overline{\chi(g)} = |G|$. Used in Step (1) as the equation setup that we then tinker with.

Fact 2 (Size-degree-weighted characters are algebraic integers). For an irreducible linear representation $\varphi$ of a finite group $G$ over an algebraically closed field of characteristic zero (or more generally, over any splitting field), with character $\chi$, a conjugacy class $c$ in $G$ and an element $g \in c$, the number $|c|\chi(g)/\chi(1)$ (with $1$ denoting the identity element of the group) is an algebraic integer. Used in Step (3) to show certain parts of an expression are algebraic integers; relies on algebraic number theory and linear representation theory.

Fact 3 (Characters are algebraic integers). Used in Step (4) to show certain parts of an expression are algebraic integers; relies on basic linear representation theory.
Proof

Given: A finite group $G$, an irreducible linear representation $\varphi$ of $G$ over a splitting field of characteristic zero for $G$, with character $\chi$ and degree $d$. Note that $d$ equals $\chi(1)$, i.e., the value of $\chi$ at the identity element of $G$.

To prove: $d$ divides the order of $G$.

Proof:

1. By Fact (1), $\sum_{c} |c|\chi(g_c)\overline{\chi(g_c)} = |G|$, where the sum is over all conjugacy classes $c$ of $G$, and $\chi(g_c)$ denotes the value of $\chi$ at any element of $c$. The factor $|c|$ appears because for each conjugacy class $c$, $|c|$ elements of the class appear in the full statement of the orthogonality theorem.
2. Dividing both sides of Step (1) by $\chi(1)$ gives $\sum_{c} \frac{|c|\chi(g_c)}{\chi(1)}\overline{\chi(g_c)} = \frac{|G|}{\chi(1)}$.
3. By Fact (2), each $\frac{|c|\chi(g_c)}{\chi(1)}$ is an algebraic integer.
4. By Fact (3), each $\chi(g_c)$ is an algebraic integer, and the complex conjugate of an algebraic integer is also an algebraic integer, so each $\overline{\chi(g_c)}$ is an algebraic integer.
5. The set of algebraic integers forms a ring, so by Steps (3) and (4) the finite sum of products of algebraic integers on the left side of Step (2) is an algebraic integer.
6. By Steps (2) and (5), $\frac{|G|}{\chi(1)}$ is an algebraic integer.
7. Both $|G|$ and $\chi(1)$ are positive integers, so their quotient is a positive rational number. The only way a rational number can be an algebraic integer is if it is an integer; hence $\chi(1) = d$ divides $|G|$. $\blacksquare$
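The arithmetic in Steps (1), (2), (6), and (7) can be made concrete for $G = S_3$ (order $6$) and its $2$-dimensional irreducible character. The character values $2, 0, -1$ on the classes of sizes $1, 3, 2$ are standard character-table values assumed here for illustration, not derived in the article.

```python
from fractions import Fraction

# Concrete check of the proof's arithmetic for G = S_3, |G| = 6, and its
# 2-dimensional irreducible character χ. Conjugacy classes: identity (size 1),
# transpositions (size 3), 3-cycles (size 2); χ = 2, 0, -1 there. χ is
# real-valued, so conj(χ) = χ. (Assumed table values, not derived here.)
class_sizes = [1, 3, 2]
chi = [Fraction(2), Fraction(0), Fraction(-1)]
G_order = 6
degree = chi[0]                     # χ(1), the degree of the representation

# Step (1): Σ_c |c| χ(g_c) conj(χ(g_c)) should equal |G|.
orthogonality = sum(s * x * x for s, x in zip(class_sizes, chi))

# Steps (2)/(6)/(7): |G| / χ(1) is rational and an algebraic integer,
# hence an honest integer.
ratio = Fraction(G_order) / degree
print(orthogonality, ratio)   # 6 and 3
```

Here the quotient $|G|/\chi(1) = 6/2 = 3$ comes out an integer, exactly as Step (7) demands.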
Box Topological Products of Topological Spaces
Recall that if $\{ X_i \}_{i \in I}$ is an arbitrary collection of topological spaces and $\displaystyle{\prod_{i \in I} X_i}$ is the Cartesian product of these spaces, then we can define the product topology on $\displaystyle{\prod_{i \in I} X_i}$ to be the topology $\tau$ induced by the collection of projection maps $\displaystyle{\pi_i : \prod_{i \in I} X_i \to X_i}$. We then said that the resulting topological space is a topological product.
Given an arbitrary collection of topological spaces $\{ X_i \}_{i \in I}$ there is another topology we can put on the product $\displaystyle{\prod_{i \in I} X_i}$ known as the box topology which we define below.
Definition: Let $\{ X_i \}_{i \in I}$ be an arbitrary collection of topological spaces and let $\displaystyle{\prod_{i \in I} X_i}$ be the Cartesian product. Then the Box Topology on $\displaystyle{\prod_{i \in I} X_i}$ is the topology $\tau$ with a basis $\mathcal B = \left \{ \prod_{i \in I} U_i : U_i \subseteq X_i \: \mathrm{is \: open \: for \: all \:} i \in I \right \}$. The space $\displaystyle{\prod_{i \in I} X_i}$ with the box topology is called a Box Topological Product or simply Topological Product if the context of the topology is unambiguous. Sometimes we write $\displaystyle{\prod_{i \in I}^{\mathrm{BOX}} X_i}$ to denote the space $\displaystyle{\prod_{i \in I} X_i}$ is accompanied with the box product topology.
In other words, basic open sets in a box topological product $\displaystyle{\prod_{i \in I} X_i}$ are sets $U = \prod_{i \in I} U_i$ where each set $U_i$ in the Cartesian product is an open set in the corresponding topological space $X_i$; general open sets are unions of such products.
It is important to note that if $\{ X_i \}_{i \in I}$ is a finite collection of topological spaces then the product topology and box product topology on $\displaystyle{\prod_{i \in I} X_i}$ produce the same topological space. |
Recall from Substitution Rule the method of integration by substitution. When evaluating an integral such as
\[\int_2^3 x(x^2 - 4)^5 dx,\]
we substitute \(u = g(x) = x^2 - 4\). Then \(du = 2x \, dx\) or \(x \, dx = \frac{1}{2} du\) and the limits change to \(u = g(2) = 2^2 - 4 = 0\) and \(u = g(3) = 9 - 4 = 5\). Thus the integral becomes
\[\int_0^5 \frac{1}{2}u^5 du\]
and this integral is much simpler to evaluate. In other words, when solving integration problems, we make appropriate substitutions to obtain an integral that becomes much simpler than the original integral.
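As a numerical sanity check (added here; the values are computed, not quoted from the text), both sides of this substitution give the same number:

```python
# Check that ∫_2^3 x(x² − 4)⁵ dx and ∫_0^5 (1/2)u⁵ du agree, using a simple
# midpoint rule for each. The exact common value is (1/12)·5⁶ = 15625/12.
def midpoint(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

original    = midpoint(lambda x: x * (x * x - 4) ** 5, 2, 3)
substituted = midpoint(lambda u: 0.5 * u ** 5, 0, 5)
print(original, substituted)   # both ≈ 1302.08
```

Both estimates land on $15625/12 \approx 1302.08$, confirming that the change of limits was carried out correctly.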
We also used this idea when we transformed double integrals in rectangular coordinates to polar coordinates and transformed triple integrals in rectangular coordinates to cylindrical or spherical coordinates to make the computations simpler. More generally,
\[\int_a^b f(x) dx = \int_c^d f(g(u))g'(u) du,\]
where \(x = g(u)\), \(dx = g'(u) \, du\), and \(u = c\) and \(u = d\) satisfy \(a = g(c)\) and \(b = g(d)\).
A similar result occurs in double integrals when we substitute
\(x = f (r,\theta) = r \, \cos \, \theta\) \( y = g(r, \theta) = r \, \sin \, \theta\), and \(dA = dx \, dy = r \, dr \, d\theta\).
Then we get
\[\iint_R f(x,y) \, dA = \iint_S f(r \, \cos \, \theta, \, r \, \sin \, \theta) \, r \, dr \, d\theta\]
where the domain \(R\) is replaced by the domain \(S\) in polar coordinates. Generally, the function that we use to change the variables to make the integration simpler is called a transformation or mapping.
Planar Transformations
A planar transformation \(T\) is a function that transforms a region \(G\) in one plane into a region \(R\) in another plane by a change of variables. Both \(G\) and \(R\) are subsets of \(R^2\). For example, Figure \(\PageIndex{1}\) shows a region \(G\) in the \(uv\)-plane transformed into a region \(R\) in the \(xy\)-plane by the change of variables \(x = g(u,v)\) and \(y = h(u,v)\), or sometimes we write \(x = x(u,v)\) and \(y = y(u,v)\). We shall typically assume that each of these functions has continuous first partial derivatives, which means \(g_u, \, g_v, \, h_u,\) and \(h_v\) exist and are also continuous. The need for this requirement will become clear soon.
Definition: one-to-one transformation
A transformation \(T: \, G \rightarrow R\), defined as \(T(u,v) = (x,y)\), is said to be a one-to-one transformation if no two points map to the same image point.
To show that \(T\) is a one-to-one transformation, we assume \(T(u_1,v_1) = T(u_2, v_2)\) and show that as a consequence we obtain \((u_1,v_1) = (u_2, v_2)\). If the transformation \(T\) is one-to-one in the domain \(G\), then the inverse \(T^{-1}\) exists with the domain \(R\) such that \(T^{-1} \circ T\) and \(T \circ T^{-1}\) are identity functions.
Figure \(\PageIndex{2}\) shows the mapping \(T(u,v) = (x,y)\) where \(x\) and \(y\) are related to \(u\) and \(v\) by the equations \(x = g(u,v)\) and \(y = h(u,v)\). The region \(G\) is the domain of \(T\) and the region \(R\) is the range of \(T\), also known as the
image of \(G\) under the transformation \(T\).
Example \(\PageIndex{1A}\): Determining How the Transformation Works
Suppose a transformation \(T\) is defined as \(T(r,\theta) = (x,y)\) where \(x = r \, \cos \, \theta, \, y = r \, \sin \, \theta\). Find the image of the polar rectangle \(G = \{(r,\theta) | 0 \leq r \leq 1, \, 0 \leq \theta \leq \pi/2\}\) in the \(r\theta\)-plane to a region \(R\) in the \(xy\)-plane. Show that \(T\) is a one-to-one transformation in \(G\) and find \(T^{-1} (x,y)\).
Solution
Since \(r\) varies from 0 to 1 in the \(r\theta\)-plane, the points map into the disc of radius 1 in the \(xy\)-plane. Because \(\theta\) varies from 0 to \(\pi/2\) in the \(r\theta\)-plane, we end up getting a quarter circle of radius \(1\) in the first quadrant of the \(xy\)-plane (Figure \(\PageIndex{2}\)). Hence \(R\) is a quarter circle bounded by \(x^2 + y^2 = 1\) in the first quadrant.
In order to show that \(T\) is a one-to-one transformation, assume \(T(r_1,\theta_1) = T(r_2, \theta_2)\) and show as a consequence that \((r_1,\theta_1) = (r_2, \theta_2)\). In this case, we have
\[T(r_1,\theta_1) = T(r_2, \theta_2),\]
\[(x_1,y_1) = (x_2,y_2),\]
\[(r_1 \cos \, \theta_1, r_1 \sin \, \theta_1) = (r_2 \cos \, \theta_2, r_2 \sin \, \theta_2),\]
\[r_1 \cos \, \theta_1 = r_2 \cos \, \theta_2, \, r_1 \sin \, \theta_1 = r_2 \sin \, \theta_2.\]
Dividing, we obtain
\[\frac{r_1 \cos \, \theta_1}{r_1 \sin \, \theta_1} = \frac{ r_2 \cos \, \theta_2}{ r_2 \sin \, \theta_2}\]
\[\frac{\cos \, \theta_1}{\sin \, \theta_1} = \frac{\cos \, \theta_2}{\sin \, \theta_2}\]
\[\tan \, \theta_1 = \tan \, \theta_2\]
\[\theta_1 = \theta_2\]
since the tangent function is a one-to-one function in the interval \(0 \leq \theta \leq \pi/2\). Also, since \(0 \leq r \leq 1\), we have \(r_1 = r_2\) and \(\theta_1 = \theta_2\). Therefore, \((r_1,\theta_1) = (r_2, \theta_2)\) and \(T\) is a one-to-one transformation from \(G\) to \(R\).
To find \(T^{-1}(x,y)\), solve for \(r,\theta\) in terms of \(x,y\). We already know that \(r^2 = x^2 + y^2\) and \(\tan \, \theta = \frac{y}{x}\). Thus \(T^{-1}(x,y) = (r,\theta)\) is defined as \(r = \sqrt{x^2 + y^2}\) and \(\theta = \tan^{-1} \left(\frac{y}{x}\right)\).
Example \(\PageIndex{1B}\): Finding the Image under \(T\)
Let the transformation \(T\) be defined by \(T(u,v) = (x,y)\) where \(x = u^2 - v^2\) and \(y = uv\). Find the image of the triangle in the \(uv\)-plane with vertices \((0,0), \, (0,1)\), and \((1,1)\).
Solution
The triangle and its image are shown in Figure \(\PageIndex{3}\). To understand how the sides of the triangle transform, call the side that joins \((0,0)\) and \((0,1)\) side \(A\), the side that joins \((0,0)\) and \((1,1)\) side \(B\), and the side that joins \((1,1)\) and \((0,1)\) side \(C\).
For the side \(A: \, u = 0, \, 0 \leq v \leq 1\) transforms to \(x = -v^2, \, y = 0\) so this is the side \(A'\) that joins \((-1,0)\) and \((0,0)\). For the side \(B: \, u = v, \, 0 \leq u \leq 1\) transforms to \(x = 0, \, y = u^2\) so this is the side \(B'\) that joins \((0,0)\) and \((0,1)\). For the side \(C: \, 0 \leq u \leq 1, \, v = 1\) transforms to \(x = u^2 - 1, \, y = u\) (hence \(x = y^2 - 1\)), so this is the side \(C'\) that makes the upper half of the parabolic arc joining \((-1,0)\) and \((0,1)\).
All the points in the entire region of the triangle in the \(uv\)-plane are mapped inside the parabolic region in the \(xy\)-plane.
Exercise \(\PageIndex{1}\)
Let a transformation \(T\) be defined as \(T(u,v) = (x,y)\) where \(x = u + v, \, y = 3v\). Find the image of the rectangle \(G = \{(u,v) : \, 0 \leq u \leq 1, \, 0 \leq v \leq 2\}\) from the \(uv\)-plane after the transformation into a region \(R\) in the \(xy\)-plane. Show that \(T\) is a one-to-one transformation and find \(T^{-1} (x,y)\).
Hint
Follow the steps of Example \(\PageIndex{1B}\).
Answer
\(T^{-1} (x,y) = (u,v)\) where \(u = \frac{3x-y}{3}\) and \(v = \frac{y}{3}\)
Using the definition, we have
\[\Delta A \approx J(u,v) \Delta u \Delta v = \left|\frac{\partial (x,y)}{\partial (u,v)}\right| \Delta u \Delta v.\]
Note that the Jacobian is frequently denoted simply by
\[J(u,v) = \frac{\partial (x,y)}{\partial (u,v)}.\]
Note also that
\[ \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial y}{\partial u} \\ \dfrac{\partial x}{\partial v} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \left( \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial x}{\partial v} \frac{\partial y}{\partial u}\right) = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\ \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} .\]
Hence the notation \(J(u,v) = \frac{\partial(x,y)}{\partial(u,v)}\) suggests that we can write the Jacobian determinant with partials of \(x\) in the first row and partials of \(y\) in the second row.
Example \(\PageIndex{2A}\): Finding the Jacobian
Find the Jacobian of the transformation given in Example \(\PageIndex{1A}\).
Solution
The transformation in the example is \(T(r,\theta) = ( r \, \cos \, \theta, \, r \, \sin \, \theta)\) where \(x = r \, \cos \, \theta\) and \(y = r \, \sin \, \theta\). Thus the Jacobian is
\[J(r, \theta) = \frac{\partial(x,y)}{\partial(r,\theta)} = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\ \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{vmatrix} = \begin{vmatrix} \cos \theta & -r\sin \theta \\ \sin \theta & r \cos \theta \end{vmatrix} = r \, \cos^2\theta + r \, \sin^2\theta = r ( \cos^2\theta + \sin^2\theta) = r.\]
Example \(\PageIndex{2B}\): Finding the Jacobian
Find the Jacobian of the transformation given in Example \(\PageIndex{1B}\).
Solution
The transformation in the example is \(T(u,v) = (u^2 - v^2, uv)\) where \(x = u^2 - v^2\) and \(y = uv\). Thus the Jacobian is
\[J(u,v) = \frac{\partial(x,y)}{\partial(u,v)} = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\ \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} 2u & -2v \\ v & u \end{vmatrix} = 2u^2 + 2v^2.\]
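Both Jacobians above can be double-checked numerically with central finite differences; the helper below is a generic sketch added here, not a function from the text.

```python
import math

# Finite-difference check of a 2x2 Jacobian determinant ∂(x,y)/∂(u,v).
def jacobian_det(F, u, v, h=1e-6):
    # central-difference partials of F = (x(u,v), y(u,v))
    xu = (F(u + h, v)[0] - F(u - h, v)[0]) / (2 * h)
    xv = (F(u, v + h)[0] - F(u, v - h)[0]) / (2 * h)
    yu = (F(u + h, v)[1] - F(u - h, v)[1]) / (2 * h)
    yv = (F(u, v + h)[1] - F(u, v - h)[1]) / (2 * h)
    return xu * yv - xv * yu

polar = lambda r, t: (r * math.cos(t), r * math.sin(t))     # J should be r
squares = lambda u, v: (u * u - v * v, u * v)               # J should be 2u² + 2v²

print(jacobian_det(polar, 0.7, 0.9))      # ≈ 0.7
print(jacobian_det(squares, 1.0, 2.0))    # ≈ 10.0 = 2(1)² + 2(2)²
```

At \((r,\theta) = (0.7, 0.9)\) the determinant comes out \(\approx 0.7 = r\), and at \((u,v) = (1,2)\) it comes out \(\approx 10 = 2u^2 + 2v^2\), matching the symbolic results.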
Exercise \(\PageIndex{2}\)
Find the Jacobian of the transformation given in the previous checkpoint: \(T(u,v) = (u + v, 3v)\).
Hint
Follow the steps in the previous two examples.
Answer
\[J(u,v) = \frac{\partial(x,y)}{\partial(u,v)} = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\ \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} 1 & 1 \\ 0 & 3 \end{vmatrix} = 3\]
Answer
The circumference of the earth is 24,800 miles
Work Step by Step
We can convert the angle to radians:
$\theta = 7^{\circ}12' = (7+\frac{12}{60})^{\circ} = 7.2^{\circ}$
$\theta = (7.2^{\circ})(\frac{\pi~rad}{180^{\circ}}) = 0.1257~rad$

We can find the earth's radius:
$S = \theta ~r \Rightarrow r = \frac{S}{\theta} = \frac{496~mi}{0.1257~rad} = 3945.9~mi$

We can find the circumference $C$ of the earth:
$C = 2\pi~r = (2\pi)(3945.9~mi) = 24,800~mi$

The circumference of the earth is 24,800 miles.
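The same arithmetic in code (a restatement of the steps above; note the radius actually cancels, since $C = 2\pi S/\theta = S \cdot \frac{360^{\circ}}{7.2^{\circ}} = 50S$):

```python
import math

# Re-run the arc-length arithmetic: an arc of S = 496 miles subtends θ = 7°12′.
theta = (7 + 12 / 60) * math.pi / 180   # 7.2° in radians ≈ 0.12566
r = 496 / theta                          # S = rθ ⇒ r ≈ 3947 mi (3945.9 when θ is rounded to 0.1257)
C = 2 * math.pi * r                      # circumference

print(round(theta, 4), round(r, 1), round(C))   # 0.1257 3947.0 24800
```

Keeping full precision for $\theta$ moves the radius slightly (to about $3947$ mi) but leaves the circumference at exactly $496 \times 50 = 24{,}800$ miles, since $\pi$ cancels.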
Exponential and logarithmic functions are used to model population growth, cell growth, and financial growth, as well as depreciation, radioactive decay, and resource consumption, to name only a few applications. In this section, we explore integration involving exponential and logarithmic functions.
Integrals of Exponential Functions
The exponential function is perhaps the most efficient function in terms of the operations of calculus. The exponential function, \(y=e^x\), is its own derivative and its own integral.
Rule: Integrals of Exponential Functions
Exponential functions can be integrated using the following formulas.
\[∫e^xdx=e^x+C\]
\[∫a^xdx=\dfrac{a^x}{\ln a}+C\]
Example \(\PageIndex{1}\): Finding an Antiderivative of an Exponential Function
Find the antiderivative of the exponential function \(e^{−x}\).
Solution: Use substitution, setting \(u=−x,\) and then \(du=−dx\). Multiply the du equation by −1, so you now have \(−du=dx\). Then,
\(∫e^{−x}dx=−∫e^udu=−e^u+C=−e^{−x}+C.\)
Exercise \(\PageIndex{1}\)
Find the antiderivative of the function using substitution: \(x^2e^{−2x^3}\).
Hint
Let u equal the exponent on e.
Answer
\(∫x^2e^{−2x^3}dx=−\dfrac{1}{6}e^{−2x^3}+C\)
A common mistake when dealing with exponential expressions is treating the exponent on e the same way we treat exponents in polynomial expressions. We cannot use the power rule for the exponent on e. This can be especially confusing when we have both exponentials and polynomials in the same expression, as in the previous checkpoint. In these cases, we should always double-check to make sure we’re using the right rules for the functions we’re integrating.
Example \(\PageIndex{2}\): Square Root of an Exponential Function
Find the antiderivative of the exponential function \(e^x\sqrt{1+e^x}\).
Solution
First rewrite the problem using a rational exponent:
\(∫e^x\sqrt{1+e^x}dx=∫e^x(1+e^x)^{1/2}dx.\)
Using substitution, choose \(u=1+e^x\). Then \(du=e^xdx\). We have
\(∫e^x(1+e^x)^{1/2}dx=∫u^{1/2}du.\)
Then
\(∫u^{1/2}du=\dfrac{u^{3/2}}{3/2}+C=\dfrac{2}{3}u^{3/2}+C=\dfrac{2}{3}(1+e^x)^{3/2}+C\)
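One way to check the result (a numeric check added here, not in the original): differentiate \(F(x)=\frac{2}{3}(1+e^x)^{3/2}\) numerically and compare with the integrand.

```python
import math

# F should satisfy F'(x) = e^x * sqrt(1 + e^x) at every x.
F = lambda x: (2.0 / 3.0) * (1.0 + math.exp(x)) ** 1.5
f = lambda x: math.exp(x) * math.sqrt(1.0 + math.exp(x))

def deriv(g, x, h=1e-6):
    # central difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (-1.0, 0.0, 1.5):
    print(x, deriv(F, x), f(x))   # the last two columns should agree
```

The agreement at several sample points confirms the antiderivative up to the constant \(C\), which differentiation cannot detect.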
Figure \(\PageIndex{1}\): The graph shows an exponential function times the square root of an exponential function.
Exercise \(\PageIndex{2}\)
Find the antiderivative of \(e^x(3e^x−2)^2\).
Hint
Let \(u=3e^x−2.\)
Answer
\(∫e^x(3e^x−2)^2dx=\dfrac{1}{9}(3e^x−2)^3+C\)
Example \(\PageIndex{3}\): Using Substitution with an Exponential Function
Use substitution to evaluate the indefinite integral \(∫3x^2e^{2x^3}dx.\)
Solution
Here we choose to let u equal the expression in the exponent on e. Let \(u=2x^3\) and \(du=6x^2dx\). Again, du is off by a constant multiplier; the original function contains a factor of \(3x^2,\) not \(6x^2\). Multiply both sides of the equation by \(\dfrac{1}{2}\) so that the integrand in u equals the integrand in x. Thus,
\(∫3x^2e^{2x^3}dx=\dfrac{1}{2}∫e^udu\).
Integrate the expression in u and then substitute the original expression in x back into the u integral:
\(\dfrac{1}{2}∫e^udu=\dfrac{1}{2}e^u+C=\dfrac{1}{2}e^{2x^3}+C.\)
Exercise \(\PageIndex{3}\)
Evaluate the indefinite integral \(∫2x^3e^{x^4}dx\).
Hint
Let \(u=x^4.\)
Answer
\(∫2x^3e^{x^4}dx=\dfrac{1}{2}e^{x^4}+C\)
As mentioned at the beginning of this section, exponential functions are used in many real-life applications. The number e is often associated with compounded or accelerating growth, as we have seen in earlier sections about the derivative. Although the derivative represents a rate of change or a growth rate, the integral represents the total change or the total growth. Let’s look at an example in which integration of an exponential function solves a common business application.
A price–demand function tells us the relationship between the quantity of a product demanded and the price of the product. In general, price decreases as quantity demanded increases. The marginal price–demand function is the derivative of the price–demand function and it tells us how fast the price changes at a given level of production. These functions are used in business to determine the price–elasticity of demand, and to help companies determine whether changing production levels would be profitable.
Example \(\PageIndex{4}\): Finding a Price–Demand Equation
Find the price–demand equation for a particular brand of toothpaste at a supermarket chain when the demand is 50 tubes per week at $2.35 per tube, given that the marginal price—demand function, \(p′(x),\) for x number of tubes per week, is given as
\[p'(x)=−0.015e^{−0.01x}.\]
If the supermarket chain sells 100 tubes per week, what price should it set?
Solution
To find the price–demand equation, integrate the marginal price–demand function. First find the antiderivative, then look at the particulars. Thus,
\[p(x)=∫−0.015e^{−0.01x}dx=−0.015∫e^{−0.01x}dx.\]
Using substitution, let \(u=−0.01x\) and \(du=−0.01dx\). Then, divide both sides of the du equation by −0.01. This gives
\[\dfrac{−0.015}{−0.01}∫e^udu=1.5∫e^udu=1.5e^u+C=1.5e^{−0.01x}+C.\]
The next step is to solve for C. We know that when the price is $2.35 per tube, the demand is 50 tubes per week. This means
\[p(50)=1.5e^{−0.01(50)}+C=2.35.\]
Now, just solve for C:
\[C=2.35−1.5e^{−0.5}=2.35−0.91=1.44.\]
Thus,
\[p(x)=1.5e^{−0.01x}+1.44.\]
If the supermarket sells 100 tubes of toothpaste per week, the price would be
\[p(100)=1.5e^{−0.01(100)}+1.44=1.5e^{−1}+1.44≈1.99.\]
The supermarket should charge $1.99 per tube if it is selling 100 tubes per week.
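The computation is easy to reproduce by machine; here is a short Python sketch (standard library only) that rebuilds \(p(x)=1.5e^{-0.01x}+C\) from the condition \(p(50)=2.35\) and evaluates \(p(100)\).

```python
import math

# p'(x) = -0.015 e^(-0.01x)  =>  p(x) = 1.5 e^(-0.01x) + C
C = 2.35 - 1.5 * math.exp(-0.01 * 50)   # impose p(50) = 2.35

def p(x):
    return 1.5 * math.exp(-0.01 * x) + C

print(C)       # close to 1.44
print(p(100))  # close to 1.99
```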
Example \(\PageIndex{5}\): Evaluating a Definite Integral Involving an Exponential Function
Evaluate the definite integral \[∫^2_1e^{1−x}dx.\]
Solution
Again, substitution is the method to use. Let \(u=1−x,\) so \(du=−1dx\) or \(−du=dx\). Then \(∫e^{1−x}dx=−∫e^udu.\) Next, change the limits of integration. Using the equation \(u=1−x\), we have
\[u=1−(1)=0\]
\[u=1−(2)=−1.\]
The integral then becomes
\[∫^2_1e^{1−x}\,dx=−∫^{−1}_0e^u\,du=∫^0_{−1}e^u\,du=e^u\big|^0_{−1}=e^0−(e^{−1})=−e^{−1}+1.\]
See Figure.
Figure \(\PageIndex{2}\): The indicated area can be calculated by evaluating a definite integral using substitution.
Exercise \(\PageIndex{4}\)
Evaluate \(∫^2_0e^{2x}dx.\)
Hint
Let \(u=2x.\)
Answer
\(\dfrac{1}{2}∫^4_0e^udu=\dfrac{1}{2}(e^4−1)\)
Example \(\PageIndex{6}\): Growth of Bacteria in a Culture
Suppose the rate of growth of bacteria in a Petri dish is given by \(q(t)=3^t\), where t is given in hours and \(q(t)\) is given in thousands of bacteria per hour. If a culture starts with 10,000 bacteria, find a function \(Q(t)\) that gives the number of bacteria in the Petri dish at any time t. How many bacteria are in the dish after 2 hours?
Solution
We have
\[Q(t)=∫3^tdt=\dfrac{3^t}{\ln 3}+C.\]
Then, at \(t=0\) we have \(Q(0)=10=\dfrac{1}{\ln 3}+C,\) so \(C≈9.090\) and we get
\[Q(t)=\dfrac{3^t}{\ln 3}+9.090.\]
At time \(t=2\), we have
\[Q(2)=\dfrac{3^2}{\ln 3}+9.090\]
\[=17.282.\]
After 2 hours, there are 17,282 bacteria in the dish.
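As a quick computational check of this example (a Python sketch; units are thousands of bacteria, as in the problem):

```python
import math

C = 10 - 1 / math.log(3)   # from Q(0) = 10 (thousands of bacteria)

def Q(t):
    return 3 ** t / math.log(3) + C

print(Q(2))   # close to 17.282 thousand bacteria
```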
Exercise \(\PageIndex{5}\)
From Example, suppose the bacteria grow at a rate of \(q(t)=2^t\). Assume the culture still starts with 10,000 bacteria. Find \(Q(t)\). How many bacteria are in the dish after 3 hours?
Hint
Use the procedure from Example to solve the problem
Answer
\(Q(t)=\dfrac{2^t}{\ln 2}+8.557.\) There are 20,099 bacteria in the dish after 3 hours.
Example \(\PageIndex{7}\): Fruit Fly Population Growth
Suppose a population of fruit flies increases at a rate of \(g(t)=2e^{0.02t}\), in flies per day. If the initial population of fruit flies is 100 flies, how many flies are in the population after 10 days?
Solution
Let \(G(t)\) represent the number of flies in the population at time t. Applying the net change theorem, we have
\(G(10)=G(0)+∫^{10}_02e^{0.02t}dt\)
\(=100+[\dfrac{2}{0.02}e^{0.02t}]∣^{10}_0\)
\(=100+[100e^{0.02t}]∣^{10}_0\)
\(=100+100e^{0.2}−100\)
\(≈122.\)
There are 122 flies in the population after 10 days.
Exercise \(\PageIndex{6}\)
Suppose the rate of growth of the fly population is given by \(g(t)=e^{0.01t},\) and the initial fly population is 100 flies. How many flies are in the population after 15 days?
Hint
Use the process from Example to solve the problem.
Answer
There are 116 flies.
Example \(\PageIndex{8}\): Evaluating a Definite Integral Using Substitution
Evaluate the definite integral using substitution: \[∫^2_1\dfrac{e^{1/x}}{x^2}\,dx.\]
Solution
This problem requires some rewriting to simplify applying the properties. First, rewrite the exponent on e as a power of x, then bring the \(x^2\) in the denominator up to the numerator using a negative exponent. We have
\[∫^2_1\dfrac{e^{1/x}}{x^2}\,dx=∫^2_1e^{x^{−1}}x^{−2}\,dx.\]
Let \(u=x^{−1},\) the exponent on \(e\). Then
\[du=−x^{−2}\,dx\]
\[−du=x^{−2}\,dx.\]
Bringing the negative sign outside the integral sign, the problem now reads
\[−∫e^u\,du.\]
Next, change the limits of integration:
\[u=(1)^{−1}=1\]
\[u=(2)^{−1}=\dfrac{1}{2}.\]
Notice that now the limits begin with the larger number, meaning we must multiply by −1 and interchange the limits. Thus,
\[−∫^{1/2}_1e^udu=∫^1_{1/2}e^udu=e^u|^1_{1/2}=e−e^{1/2}=e−\sqrt{e}.\]
Exercise \(\PageIndex{7}\)
Evaluate the definite integral using substitution: \[∫^2_1\dfrac{1}{x^3}e^{4x^{−2}}dx.\]
Hint
Let \(u=4x^{−2}.\)
Answer
\[∫^2_1\dfrac{1}{x^3}e^{4x^{−2}}dx=\dfrac{1}{8}[e^4−e]\].
Integrals Involving Logarithmic Functions
Integrating functions of the form \(f(x)=x^{−1}\) results in the absolute value of the natural log function, as shown in the following rule. Integral formulas for other logarithmic functions, such as \(f(x)=\ln x\) and \(f(x)=\log_a x\), are also included in the rule.
Rule: Integration Formulas Involving Logarithmic Functions
The following formulas can be used to evaluate integrals involving logarithmic functions.
\[\begin{align} ∫x^{−1}dx &=\ln |x|+C \\ ∫\ln x\,dx &= x\ln x−x+C =x (\ln x−1)+C \\ ∫log_a\,x\,dx &=\dfrac{x}{\ln a}(\ln x−1)+C \end{align}\]
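These formulas can be spot-checked numerically (a Python sketch, standard library only): differentiating each right-hand side should recover the corresponding integrand.

```python
import math

def deriv(F, x, h=1e-6):
    # central-difference numerical derivative
    return (F(x + h) - F(x - h)) / (2 * h)

F_ln = lambda x: x * math.log(x) - x                    # candidate for the ln x formula
F_log2 = lambda x: x / math.log(2) * (math.log(x) - 1)  # candidate for the log_2 x formula

for x in [0.5, 1.0, 3.0]:
    assert abs(deriv(F_ln, x) - math.log(x)) < 1e-6
    assert abs(deriv(F_log2, x) - math.log2(x)) < 1e-6
print("differentiating the formulas recovers the integrands")
```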
Example \(\PageIndex{9}\): Finding an Antiderivative Involving \(\ln x\)
Find the antiderivative of the function \[\dfrac{3}{x−10}.\]
Solution
First factor the 3 outside the integral symbol. Then use the \(u^{−1}\) rule. Thus,
\[∫\dfrac{3}{x−10}dx=3∫\dfrac{1}{x−10}dx=3∫\dfrac{du}{u}=3\ln |u|+C=3\ln |x−10|+C,x≠10.\]
See Figure.
Figure \(\PageIndex{3}\): The domain of this function is \(x \neq 10.\)
Exercise \(\PageIndex{8}\)
Find the antiderivative of \[\dfrac{1}{x+2}.\]
Hint
Follow the pattern from Example to solve the problem.
Answer
\[\ln |x+2|+C\]
Example \(\PageIndex{10}\): Finding an Antiderivative of a Rational Function
Find the antiderivative of \[\dfrac{2x^3+3x}{x^4+3x^2}.\]
Solution
This can be rewritten as \(∫(2x^3+3x)(x^4+3x^2)^{−1}dx.\) Use substitution. Let \(u=x^4+3x^2\), then \(du=(4x^3+6x)dx.\) Alter du by factoring out the 2. Thus,
\[du=(4x^3+6x)dx=2(2x^3+3x)dx\]
\[\dfrac{1}{2}du=(2x^3+3x)dx.\]
Rewrite the integrand in
u:
\[∫(2x^3+3x)(x^4+3x^2)^{−1}dx=\dfrac{1}{2}∫u^{−1}du.\]
Then we have
\[\dfrac{1}{2}∫u^{−1}du=\dfrac{1}{2}\ln |u|+C=\dfrac{1}{2}\ln ∣x^4+3x^2∣+C.\]
Example \(\PageIndex{11}\): Finding an Antiderivative of a Logarithmic Function
Find the antiderivative of the log function \(\log_2 x\).
Solution
Follow the format in the formula listed in the rule on integration formulas involving logarithmic functions. Based on this format, we have
\[∫log_2xdx=\dfrac{x}{\ln 2}(\ln x−1)+C.\]
Exercise \(\PageIndex{9}\)
Find the antiderivative of \(log_3x\).
Hint
Follow Example and refer to the rule on integration formulas involving logarithmic functions.
Answer
\[\dfrac{x}{\ln 3}(\ln x−1)+C\]
The next example is a definite integral of a trigonometric function. With trigonometric functions, we often have to apply a trigonometric property or an identity before we can move forward. Finding the right form of the integrand is usually the key to a smooth integration.
Example \(\PageIndex{12}\): Evaluating a Definite Integral
Evaluate the definite integral \[∫^{π/2}_0\dfrac{\sin x}{1+\cos x}dx.\]
Solution
We need substitution to evaluate this problem. Let \(u=1+\cos x\) so \(du=−\sin x\,dx.\) Rewrite the integral in terms of u, changing the limits of integration as well. Thus,
\[u=1+\cos(0)=2\]
\[u=1+\cos\left(\dfrac{π}{2}\right)=1.\]
Then
\[∫^{π/2}_0\dfrac{\sin x}{1+\cos x}dx=−∫^1_2u^{−1}du=∫^2_1u^{−1}du=\ln |u|\Big|^2_1=\ln 2−\ln 1=\ln 2.\]
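The value \(\ln 2\) is easy to confirm with a simple composite Simpson quadrature (a Python sketch, standard library only):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

val = simpson(lambda x: math.sin(x) / (1 + math.cos(x)), 0.0, math.pi / 2)
print(val, math.log(2))   # the two values agree to high precision
```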
Key Concepts
Exponential and logarithmic functions arise in many real-world applications, especially those involving growth and decay. Substitution is often used to evaluate integrals involving exponential functions or logarithms.
Key Equations
Integrals of Exponential Functions
\(∫e^xdx=e^x+C\)
\(∫a^xdx=\dfrac{a^x}{\ln a}+C\)
Integration Formulas Involving Logarithmic Functions
\(∫x^{−1}dx=\ln |x|+C\)
\(∫\ln xdx=x\ln x−x+C=x(\ln x−1)+C\)
\(∫log_axdx=\dfrac{x}{\ln a}(\ln x−1)+C\)
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org. |
I'm dealing with Fourier series and I'm trying to figure out why $\log(1+e^x) - \frac{x}{2}$ is even. I've tried the $f(-x) = f(x)$ method but it doesn't give me the equality. But I've plotted it, and it is even? :S
$$\begin{align} \ln(1+e^x)-\frac x2 &= \ln(1+e^x)-\ln(e^{\frac x2}) \\ &= \ln\left((1+e^x)e^{\frac {-x}2}\right) \\ &=\ln(e^{\frac {-x}2} +e^{\frac x2}) \\ \end{align}$$ From this it should be obvious that the function is indeed even.
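A quick numerical check of this identity (a Python sketch, standard library only):

```python
import math

def f(x):
    return math.log(1 + math.exp(x)) - x / 2

for x in [0.1, 1.0, 5.0, 20.0]:
    assert abs(f(x) - f(-x)) < 1e-9                        # f is even
    symmetric = math.log(math.exp(-x / 2) + math.exp(x / 2))
    assert abs(f(x) - symmetric) < 1e-9                    # matches the derivation
print("numerically even on the test points")
```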
Another way of treating this is that a continuous function $ \ f(x) \ $ can be "separated" into "even" and "odd" components,
$$ f_e(x) \ = \ \frac{f(x) \ + \ f(-x)}{2} \ \ \ \text{and} \ \ \ f_o(x) \ = \ \frac{f(x) \ - \ f(-x)}{2} \ \ . $$
Here, we have
$$ f_o(x) \ = \ \frac{[ \ \log(1+e^x) \ - \ \frac{x}{2} \ ] \ - \ [ \ \log(1+e^{-x}) \ - \ \frac{(-x)}{2} \ ]}{2} $$
$$ = \ \frac{ \ \log(1+e^x) \ - \ \ \log(1+e^{-x}) \ - \ x }{2} $$
$$ = \ \frac{1}{2} \ \left[ \ \log \left(\frac{1+e^x}{1+e^{-x}} \right) \ - \ x \ \right] \ = \ \frac{1}{2} \ \left[ \ \log \left(\frac{e^x \ [1+e^x]}{e^x+1} \right) \ - \ x \ \right] \ \ $$
$$ = \ \frac{1}{2} \ [ \ \log (e^x ) \ - \ x \ ] \ \ = \ \ \frac{1}{2} \ [ \ x \ - \ x \ ] \ = \ 0 \ \ . $$
Our function has zero "odd component", so it is purely even. [We could also have shown that $ \ f_e(x) \ = \ f(x) \ $ .]
Why do you say that checking $f(-x)=f(x)$ doesn't work? Of course it does. $\ddot\smile$ $$\ln(1+e^{-x})-\frac{-x}2=\ln\frac{e^x+1}{e^x}+\frac x2=\ln(e^x+1)-\ln e^x+\frac x2=\\=\ln(1+e^x)-x+\frac x2=\ln(1+e^x)-\frac x2$$ |
I am seeking a closed form for the function $$f(x)=\,_3F_2\left(\tfrac12,\tfrac12,\tfrac12;\tfrac32,\tfrac32;x\right)$$
I expect there to be one, because of this post and Wolfram. The Wolfram link produces closed forms involving $\mathrm{Li}_2$ for any value of $x$ that I've tried so far, so I can only assume that a general closed form exists.
I've started my attempts by noticing that $$f(x)=\frac12\int_0^1 \frac{_2F_1(\tfrac12,\tfrac12;\tfrac32;xt)}{\sqrt{t}}dt,$$ because $$\frac12\int_0^1 \frac{(xt)^n}{\sqrt{t}}dt=\frac{x^n}{2n+1}$$ which would introduce another factor of $$\frac{n+1/2}{n+3/2}$$ when computing the ratio of the terms. Similarly, $$_2F_1\left(\tfrac12,\tfrac12;\tfrac32;x\right)=\frac12\int_0^1 \frac{_1F_0(\tfrac12;;xt)}{\sqrt{t}}dt.$$
The last hypergeometric I was able to recognize as $$_1F_0\left(\tfrac12;;xt\right)=\frac1{\sqrt{1-xt}}.$$ So, all in all, $$f(x)=\frac14\int_0^1\int_0^1 \frac{1}{\sqrt{vu}\sqrt{1-xvu}}dvdu,$$ which looks like the Beta function's evil cousin.
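As a numerical sanity check on that double integral (a Python sketch; the substitution $u=r^2$, $v=s^2$ removes the $1/\sqrt{uv}$ singularity, since $du\,dv = 4rs\,dr\,ds$ turns the integral into $\int_0^1\!\int_0^1 dr\,ds/\sqrt{1-xr^2s^2}$), the series definition of the $_3F_2$ and the integral agree:

```python
import math

def f_series(x, terms=200):
    """Partial sum of 3F2(1/2,1/2,1/2; 3/2,3/2; x) from the term ratio."""
    total, t = 0.0, 1.0
    for n in range(terms):
        total += t
        t *= (n + 0.5) ** 3 / ((n + 1.5) ** 2 * (n + 1)) * x
    return total

def f_integral(x, n=400):
    """Midpoint rule for the de-singularized double integral."""
    h = 1.0 / n
    pts = [(k + 0.5) * h for k in range(n)]
    return sum(h * h / math.sqrt(1 - x * (r * s) ** 2)
               for r in pts for s in pts)

assert abs(f_series(0.5) - f_integral(0.5)) < 1e-4
print(f_series(0.5))
```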
I do not know how to turn this integral into something containing $\mathrm{Li}_2$ and I need some help. Thanks! |
I propose to collect here open problems from the theory of continued fractions. Any types of continued fractions are welcome.
Guy, Unsolved Problems In Number Theory, F21, attributes to Bohuslav Divis the conjecture that in each real quadratic field there is an irrational with all partial quotients 1 or 2; more generally, same question but with 1 and 2 replaced by any pair of distinct positive integers.
every integer appears as the denominator of a finite continued fraction whose partial quotients are bounded by an absolute constant.
From M. Waldschmidt, "Open Diophantine Problems" (Moscow Mathematical Journal vol. 4, no. 1, 2004, pp. 245-305):
Does there exist a real algebraic number of degree $\geq 3$ with bounded partial quotients? Does there exist a real algebraic number of degree $\geq 3$ with unbounded partial quotients?
Find fixed points (mod 1) of Minkowski's question mark function, see A058914 from "The On-Line Encyclopedia of Integer Sequences". This picture
is taken from http://mathworld.wolfram.com/MinkowskisQuestionMarkFunction.html It shows that there is only one positive fixed point (mod 1) less than 1/2. It is approximately 0.42037. Does this constant have a closed-form expression?
Hermite's problem is an open problem in mathematics posed by Charles Hermite in 1848. He asked for a way of expressing real numbers as sequences of natural numbers, such that the sequence is eventually periodic precisely when the original number is a cubic irrational.
Guy, Unsolved Problems In Number Theory, F21, attributes to Leo Moser the conjecture that there is a constant $c$ such that every $n$ can be expressed as $n=a+b$ in such a way that the sum of the partial quotients of $a/b$ is less than $c\log n$.
A collection of Open problems in geometry of continued fractions by Oleg Karpenkov.
It is known that (see J.W. Porter, On a theorem of Heilbronn) $$H(b)=\dfrac{1}{\varphi(b)}\sum\limits_{1\le a\le b\atop(a,b)=1}\ell(a/b)= \dfrac{2\log 2}{\zeta(2)}\cdot\log b+C_P-1+O_\varepsilon(b^{-1/6+\varepsilon}),$$ where $\ell(a/b)$ is the length of the standard continued fraction expansion and $C_P$ is Porter's constant. Averaging over numerators and denominators, one can prove an asymptotic formula for the variance. Let $$E(R)=\dfrac{2}{R(R+1)}\sum\limits_{b\le R}\sum\limits_{a\le b}\ell(a/b) $$ and $${D}(R)=\dfrac{2}{R(R+1)}\sum\limits_{b\le R}\sum\limits_{a\le b}\left( \ell(a/b)-E(R)\right)^2. $$ Then (see D. Hensley, The Number of Steps in the Euclidean Algorithm) $${D}(R)=D_1\cdot\log R+o(\log R). $$ But if the denominator is fixed, then for the variance only an upper bound of the right order is known (see Bykovskii V.A., Estimate for dispersion of lengths of continued fractions): $$\dfrac{1}{b}\sum\limits_{a=1}^{b}\left(\ell\left(\dfrac{a}{b}\right)- \dfrac{2\log2}{\zeta(2)}\log b\right)^2\ll\log b.$$
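The mean length $H(b)$ above can be probed numerically with a few lines of Python (a sketch; here $\ell$ is taken to be the number of partial quotients produced by the Euclidean algorithm, which matches the canonical expansion with last quotient $\ge 2$):

```python
import math

def cf_length(a, b):
    """Number of partial quotients in a/b = [0; a1, ..., al], for 0 < a < b."""
    n = 0
    while a:
        b, a = a, b % a
        n += 1
    return n

b = 10007                                  # a prime, so every 0 < a < b is coprime to b
lengths = [cf_length(a, b) for a in range(1, b)]
mean = sum(lengths) / len(lengths)
leading = 2 * math.log(2) / (math.pi ** 2 / 6) * math.log(b)
print(mean, leading, mean - leading)       # gap should be near C_P - 1, about 0.47
```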
Conjecture: $$\dfrac{1}{\varphi(b)}\sum\limits_{1\le a\le b\atop(a,b)=1}(\ell(a/b)-H(b))^2=D_1\log b+o(\log b).$$ |
Bernoulli Differential Equations
We are now going to look at a method for solving another class of differential equations. Let $p(x)$ and $g(x)$ be continuous on an interval of interest, and consider the following non-linear differential equation:(1)
If either $n = 0$ or $n = 1$, the differential equation above is linear, so suppose that $n > 1$. We can "convert" this differential equation to be in a linear form using some substitutions. Let $v = y^{1-n}$. Then $v' = (1 - n)y^{-n}y'$. Now take the differential equation above and divide both sides by $y^n$ to get that:(2)
We can now apply the substitutions we made above to get that:(3)
This differential equation is linear, and we can solve this differential equation using the method of integrating factors. The important thing to remember for Bernoulli Differential Equations is that we make the following substitutions:(4)
Let's look at some examples of solving Bernoulli Differential equations.
Example 1 Solve the differential equation $y' + 2xy = 3xy^3$.
We first start off by dividing the differential equation above by $y^3$ to get:(5)
Now we will make the substitution $v = y^{1-3} = y^{-2}$ and so $v' = -2 y^{-3} y'$, which gives $y^{-3}y' = -\frac{1}{2} v'$. Applying these substitutions, we get:(6)
Let's solve this differential equation using integrating factors. Let $\mu (x) = e^{\int -4x \: dx} = e^{-2x^2}$. Multiplying both sides of the differential equation above by the integrating factor, we have that:(7)
We can solve the integral on the right by using substitution. Let $u = -2x^2$. Then $du = -4x \: dx$ and so $\frac{3}{2} du = -6x \: dx$. Hence:(8)
We are almost done. We will now use the substitution $v = y^{-2}$ that we started with to get that:(9) |
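The numbered equations are not reproduced above, but carrying the computation through (our own completion of the example, worth re-deriving) gives the general solution $y = \left(\tfrac{3}{2} + Ce^{2x^2}\right)^{-1/2}$. A quick numerical check in Python: a Runge-Kutta integration of $y' = 3xy^3 - 2xy$ should track this closed form.

```python
import math

C = 1.0  # arbitrary integration constant chosen for the check

def y_exact(x):
    return (1.5 + C * math.exp(2 * x * x)) ** -0.5

def rhs(x, y):
    return 3 * x * y ** 3 - 2 * x * y   # the original Bernoulli equation

n, h = 1000, 1e-3
y = y_exact(0.0)
for i in range(n):                      # classic RK4 on [0, 1]
    x = i * h
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2, y + h * k1 / 2)
    k3 = rhs(x + h / 2, y + h * k2 / 2)
    k4 = rhs(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

assert abs(y - y_exact(1.0)) < 1e-8
print("RK4 agrees with the closed form at x = 1")
```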
After reading this blog post, I learned the BSD conjectural formula for the coefficient of the leading term $a_0$ of the L-function of an elliptic curve $E$, namely$$a_0 \stackrel{?}{=} \frac{\Omega_E\cdot Reg_E \cdot \prod_p c_p \cdot \#Sha(E/\mathbb{Q})}{(\# E_{tors}(\mathbb{Q}))^2}$$All the terms are defined, if people are interested and don't already know, at the above link. Now the factors in the numerator here come in several flavours: the real period $\Omega_E$ arises after looking at the curve over $\mathbb{R}$; the numbers $c_p$ are 1 for all but finitely many primes $p$, and come from looking at the curve over $p$-adic numbers, hence completions of $\mathbb{Q}$; $Reg_E$, the regulator, is the volume of a certain torus (not the curve itself!) given by comparing the rational and real points of $E$; the Sha group arises from comparing Galois cohomology over $\mathbb{Q}$ with all its completions at finite primes. Clearly the Archimedean place and the non-Archimedean ones behave differently, but one can often unify them in certain formalisms. (If one is willing to split the denominator, and invert the regulator, then it is a product of three ratios, each of which is something like (something about a completion)/(some sort of volume), but this is just extremely flaky and ignorant, and best ignored)
My question is this: can we write this product (or perhaps the whole quotient) more uniformly via reinterpreting various terms in more abstract ways via places?
Please note I'm
not trying to do anything with such a formulation, I'm just curious if it is known.
EDIT: the factor $\Omega_E/\# E_{tors}(\mathbb{Q})$ is the volume of the stack given by the action groupoid $E(\mathbb{Q})\otimes\mathbb{R}//E(\mathbb{Q})$. Do the other terms measure other geometric objects, such that the whole thing is the measure of some adelic object? |
Introduction
In this example we investigate an AlGaAs/GaAs quantum well from a quantum-mechanical point of view. The simulation involves calculating the band profile in the heterostructure and comparing the influence of different QM solvers.
Band-structure calculation
The growth direction of the sample is the 001 direction on $Al_{0.4}Ga_{0.6}As$. The lattice mismatch between GaAs and AlAs is very low, which leaves the structure unstrained. The calculated profile including bowing parameters is depicted in figure 1.
Eigenfunction calculation
In this example we are going to consider just the electron wave-functions, but the same ideas could be used for the hole functions.
Single-band method
In figure 2. the single-band wave-functions are calculated, which means the effective mass of the electron is a constant, energy-independent value in the Schrödinger equation:
\begin{equation} - \nabla \frac{\hbar^2}{2m_e(x)} \nabla \Phi + V(x) \Phi = E\Phi. \end{equation}
The $m_e(x)$ is the effective mass of the electron, while the $V(x)$ is the conduction band edge - both of them are position dependent.
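As a toy illustration of the single-band approach (this is NOT the solver used for the figures; it uses arbitrary units with $\hbar^2/2m=1$, a well of depth 1 and width 2, and a simple finite-difference shooting method):

```python
# Shooting method for  -psi'' + V(x) psi = E psi  on [-5, 5], toy units.
def V(x):
    return -1.0 if abs(x) < 1.0 else 0.0

def psi_end(E, x0=-5.0, x1=5.0, n=2000):
    """Integrate from the left with psi(x0) = 0 and return psi(x1)."""
    h = (x1 - x0) / n
    psi_prev, psi = 0.0, 1e-6
    for i in range(2, n + 1):
        x = x0 + (i - 1) * h
        psi_prev, psi = psi, 2 * psi - psi_prev + h * h * (V(x) - E) * psi
    return psi

# The bound-state energy is where psi(x1) changes sign; bisect on E.
lo, hi = -1.0 + 1e-6, -1e-6
for _ in range(60):
    mid = (lo + hi) / 2
    if psi_end(lo) * psi_end(mid) <= 0:
        hi = mid
    else:
        lo = mid
E0 = (lo + hi) / 2
print(E0)   # ground-state energy of the toy well, around -0.45 in these units
```

Replacing the constant well with a position-dependent mass and the real band profile is what a production single-band solver does.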
8 band k.p method
We are able to calculate the wave functions of the quasi-particles in the sample by coupling the hole and electron bands \citep{pryor1998eight}. This can be used for more realistic calculations, since the effective mass of the electron is energy dependent.
The first confined state in the Quantum well is plotted in figure 3.
It has a bit different energy, which reflects in the different electron density calculated in the next section.
Lateral dispersion
In the section before we calculated the wave functions for the case where the electron has zero in-plane momentum ($k_{||}= 0$). If this lateral momentum is not zero, it changes the eigenenergy of the electrons, which can be described by an E(k) dispersion relation, shown in figure 4.
As the figure shows, the problem can be treated as a constant-mass problem over some energy range. In our approach we use a constant mass calculated from the 8 band k.p wave function at $k_{||} = 0$.
Charge density calculation
For the constant mass approach we can calculate the density in the sample according to the equation: \begin{equation} n(x) = \sum_{i = 0}^{N} |\Phi_i(x)|^2 k_bT \frac{1}{4 \pi} \frac{2 m_{DOS}(x)}{\hbar^2} \ln\left(1+\exp((E_i-E_f)/k_bT)\right) \end{equation}
in one dimension, with two dimensional k-space. Where $E_i, \Phi_i$ is the i-th eigenenergy and eigenfunction of the sample.
In figure 5. we compare 3 different density calculations with effective mass. It shows that the eigenenergy of the electron wave function lies above the conduction band edge, which results in fewer electrons in the band.
We can calculate the charge carrier density without assuming a constant mass in the lateral dimension, but for this we have to calculate the wave function for each parallel $k_{||}$ point in the sample. This ends up nearly at the same result as in figure 5. |
I am tasked with finding the current
I through the following circuit at an array of frequencies. I have a solution however I am fairly new to AC systems and just want to make sure I am on the right track.
The values of \$ V_R, V_C, \$ and \$ V_L\$ were measured using an oscilloscope, and we can assume for the purpose of this question that they are 0.8 V, 3.8 V, and 5.6 V respectively.
Here is my solution assuming a frequency of 500 Hz and a voltage of 14.1 V peak-to-peak, also there is a correction that the capacitor is \$2.2 \ \mu F \$
not \$0.22 \ \mu F \$:
\$ I = \frac E Z \$
\$ \omega = 2\pi f \$
\$ Z = Z_R + Z_C + Z_L\$
\$ Z_R = 480 + j0 \ \Omega \$
\$ Z_C = 0 - \frac j {\omega C} \ \Omega = 0 - j144.7 \ \Omega \$
\$ Z_L = 88 + j {\omega L} \ \Omega = 88 + j314.2 \ \Omega \$
Summing the impedances we obtain an effective impedance of \$592.8 \ \angle \ 16.6^ \circ \ \Omega\$
Now we need the phase angle which can be found with the vector sum of the measured voltages.
\$ \theta = arctan(\frac {V_L - V_C} {V_R} ) = 66.0^\circ \$
With this the final answer for the current should be
\$ I = \frac {14.1 \ \angle \ 66.0^ \circ} {592.8 \ \angle \ 16.6^ \circ} \$
Which gives a final current of \$ I = 23.8 \ \angle \ 49.4^ \circ \ mA\$
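Python's built-in complex type makes this phasor arithmetic easy to double-check (a sketch using the question's corrected values; the 0.1 H inductance is our inference from $j\omega L = j314.2\ \Omega$ at 500 Hz):

```python
import cmath, math

f = 500.0
w = 2 * math.pi * f
C = 2.2e-6                    # corrected capacitor value
L = 0.1                       # assumption: inferred from j*w*L = j314.2 ohm
Z = (480 + 0j) + (-1j / (w * C)) + (88 + 1j * w * L)

print(abs(Z), math.degrees(cmath.phase(Z)))        # about 593 ohm at 16.6 deg

E = cmath.rect(14.1, math.radians(66.0))           # phase from the measured voltages
I = E / Z
print(abs(I) * 1000, math.degrees(cmath.phase(I))) # about 23.8 mA at 49.4 deg
```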
Why might my measurements be different from the calculated values?
Edit: the measured voltages are peak-to-peak not RMS. |
I'm trying to understand Stern-Gerlach experiment on a computational level. Suppose we have a neutral particle with magnetic moment (e.g. a neutron), and apply an inhomogeneous magnetic field to it (let it change linearly with coordinate). As I understand, its Hamiltonian would look like:
$$\hat H=-\frac{\hbar^2}{2m}\nabla^2+\left(\frac e{mc}\right)\hat{\vec s}\cdot\vec B$$
Now the spin operator is $$\hat s_i=\frac{\hbar}2\sigma_i,$$
where $\sigma_i$ is $i$th Pauli matrix.
So, for magnetic field $\vec B=\vec e_x B_0 x$ we'd have Schrödinger 1D (Y and Z directions can be separated due to translation symmetry) equation:
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}+\left(\frac {\hbar e}{2mc}\right)\sigma_x B_0 x\psi=i\hbar \frac{\partial\psi}{\partial t}.$$
I now try to solve this equation numerically, taking initial wave function in the following form:
$$\psi(x,t=0)=\begin{pmatrix}\psi_0(x)\\ \psi_0(x)\end{pmatrix},$$
where $\psi_0(x)$ is a gaussian wave packet with zero average momentum.
The problems start when I select $\sigma_x$ as is usually given:
$$\sigma_x=\begin{pmatrix}0&1\\1&0\end{pmatrix}.$$
The solution appears to look like showed below. I.e. both wave function components accelerate left!
I thought, what if I choose another axis as $x$, so I tried doing the same with $\sigma_y$:
$$\sigma_y=\begin{pmatrix}0&-i\\i&0\end{pmatrix}.$$
The result in the animation below. Now it's a bit better: the wavefunction at least splits into two parts, one going left, another right. But still, both parts are composed of a mix of spin-up and spin-down states, so not really what one would expect from Stern-Gerlach experiment.
Finally, I tried the last option — using $\sigma_z$:
$$\sigma_z=\begin{pmatrix}1&0\\0&-1\end{pmatrix}.$$
The result is again showed below. Finally, I get the splitting into "independent" spin parts, i.e. one spin part goes left, another one goes right.
Now, the question: how to interpret these results? Why does choice of active axis result in such drastic differences in results? How should I have done instead to get meaningful results? Shouldn't permutation of Pauli matrices not affect results? |
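One way I have tried to see what is happening (a Python sketch, not a full simulation): decompose the initial spinor $(1,1)/\sqrt{2}$ in the eigenbasis of whichever Pauli matrix couples to the field. The packet can only split into two parts when the initial state has weight on BOTH eigenvectors, and the parts look like clean up/down states only when those eigenvectors are the basis spinors themselves ($\sigma_z$).

```python
import math

s = 1 / math.sqrt(2)
psi = [s, s]                                  # the initial spinor (up + down)

eigvecs = {                                   # (eigenvector for +1, for -1)
    "sigma_x": ([s, s], [s, -s]),
    "sigma_y": ([s, 1j * s], [s, -1j * s]),
    "sigma_z": ([1, 0], [0, 1]),
}

def prob(v):
    """|<v|psi>|^2 for a (possibly complex) 2-spinor v."""
    amp = sum(a.conjugate() * b for a, b in zip(v, psi))
    return abs(amp) ** 2

for name, (plus, minus) in eigvecs.items():
    print(name, round(prob(plus), 2), round(prob(minus), 2))
# sigma_x: all weight on the +1 eigenvector, so the whole packet feels a
#   single linear potential and accelerates one way (no splitting).
# sigma_y: a 50/50 split, but each separated packet is a sigma_y
#   eigenvector, i.e. a mix of up and down in the z-basis being plotted.
# sigma_z: a 50/50 split into the basis spinors themselves, which is the
#   textbook Stern-Gerlach picture.
```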
Answer
$1$
Work Step by Step
$\tan45^{\circ}=\frac{\sin45^{\circ}}{\cos45^{\circ}}$ $\tan45^{\circ}=\frac{\frac{\sqrt2}{2}}{\frac{\sqrt2}{2}}$ $\tan45^{\circ}=1$
|
So, given some data,
Mathematica 10.2 can now attempt to figure out what probability distribution might have produced it. Cool! But suppose that, instead of having data, we have something that is in some ways better -- a formula. Let's call it $f$. We suspect -- perhaps because $f$ is non-negative over some domain and because the integral of $f$ over that domain is 1 -- that $f$ is actually the PDF of some distribution (Normal, Lognormal, Gamma, Weibull, etc.) or some relatively simple transform of that distribution.
Is there any way that
Mathematica can help figure out the distribution (or simple transform) whose PDF is the same as $f$?
Example: Consider the following formula:
1/(2*E^((-m + Log[5])^2/8)*Sqrt[2*Pi])
$$\frac{e^{-\frac{1}{8} (\log (5)-m)^2}}{2 \sqrt{2 \pi }}$$
As it happens -- and as I discovered with some research and guesswork -- this formula is the PDF of
NormalDistribution[Log[5], 2] evaluated at $m$. But is there a better way than staring or guessing to discover this fact? That is, help me write
FindExactDistribution[f_, params_].
Notes
The motivation for the problem comes from thinking about Conjugate Prior distributions but I suspect it might have a more general application.
One could start with mapping PDF evaluated at $m$ over a variety of continuous distributions. And if I did this I would at some point get to what I will call $g$, which is the PDF or the
NormalDistributionwith parameters $a$ and $b$ evaluated at $m$.
1/(b*E^((-a + m)^2/(2*b^2))*Sqrt[2*Pi])
$$\frac{e^{-\frac{(m-a)^2}{2 b^2}}}{\sqrt{2 \pi } b}$$
But unless I knew that if I replaced $a$ by
Log[5] and $b$ by $2$ that I would get $f$, this fact would not mean a lot to me. I suppose I could look at the
TreeForm of $f$ and $g$ and I would notice certain similarities, and that might be a hint, but I am not sure how to make much progress beyond that observation. Ultimately, the problem looks to be about finding substitutions in parts of a tree ($g$) which, after evaluation, yield a tree that matches a target $f$. I have the suspicion that this is a difficult problem with an NKS flavor but one for which
Mathematica and its ability to transform expressions might be well suited.
I appreciate the responses here. But let me provide an example that is perhaps not so easy. Suppose the target function
f is as follows: $\frac{7}{10 (a-2)^2}$ for the domain ($-\infty,\frac{13}{10}$]. If we create a probability distribution out of this and then generate 10,000 random samples from the distribution and then run FindDistribution
dis = ProbabilityDistribution[7/(10 (-2 + a)^2), {a, -\[Infinity], 13/10}]; rv = RandomVariate[dis,10^4]; fd=FindDistribution[rv,5]
The result is a mixture distribution of normal distributions, a beta distribution, a weibull distribution, a normal distribution and a mixture distribution of a normal distribution and a gamma distribution.
The mixture distributions are clearly of the wrong form, the normal distribution is clearly not right, Although I am not positive, I don't believe the Weibull Distribution or the Beta Distribution is correct either. In fact, I don't know what the correct answer is, though I think it might be a fairly simple transform of a single parameter distribution. The point, however, is that the FindDistribution process, does not seem to work in this case. And that's why I am hoping for something better. |
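For what it's worth, the density $7/(10(a-2)^2)$ on $(-\infty, 13/10]$ looks like a reflected, shifted Pareto: if $X\sim\text{Pareto}(x_m=7/10,\ \alpha=1)$ then $a = 2 - X$ appears to have exactly this pdf. That is my own guess at the "simple transform", not something from the post, so here is a numerical comparison (in Python, for independence from the Mathematica code above):

```python
import math, random

def target_pdf(a):
    """The pdf 7/(10 (a-2)^2) on (-inf, 13/10]."""
    return 7 / (10 * (a - 2) ** 2) if a <= 1.3 else 0.0

def reflected_pareto_pdf(a):
    """pdf of  a = 2 - X  with X ~ Pareto(x_m = 7/10, alpha = 1)."""
    x = 2 - a
    return 0.7 / x ** 2 if x >= 0.7 else 0.0

for a in [-10.0, -1.0, 0.0, 1.0, 1.3]:
    assert math.isclose(target_pdf(a), reflected_pareto_pdf(a))

# sampling check: an inverse-CDF draw of Pareto(0.7, 1) is 0.7/U
random.seed(0)
sample = [2 - 0.7 / (1 - random.random()) for _ in range(10 ** 5)]
frac = sum(s > 0 for s in sample) / len(sample)
print(frac)   # should be close to P(a > 0) = 0.65
```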
Let $(M,g)$ be a Riemannian manifold with Levi Civita connection $\nabla$. Then $\nabla$ satisfies a compatibility condition:
$(\nabla_ZX,Y)+(X,\nabla_ZY)=Z((X,Y))$ where $(\cdot,-)$ is a Hermitian pairing. In general, if we have a connection $\nabla$ on bundle $E$ (in our case $E=TM$) one can define the dual connection with the formula $\nabla'_Z(\alpha)=Z(\alpha(\cdot))-\alpha(\nabla_Z(\cdot))$ where $\alpha$ is a one form. My question is the following:
Does the dual connection satisfy compatibility condition?
I did some computation and found that compatibility is equivalent to the condition: $Z(g^{ij})\alpha_i\beta_j=-g^{ij}Z^p\Gamma_{pi}^q\alpha_q\beta_j-g^{ij}Z^p\Gamma_{pj}^q\alpha_i\beta_q$
where $\Gamma_{ij}^k$ are defined by $\nabla_{\partial_i}\partial_j=\Gamma_{ij}^k\partial_k$, and $g^{ij}$ are the components of the matrix inverse to $(g_{ij})_{i,j}$, where $g_{ij}=g(\partial_i,\partial_j)$ (I used the Einstein summation convention). |
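For what it is worth, the displayed condition seems to be exactly metric compatibility for the inverse metric, so the answer should be yes. A sketch of the computation (my own, worth double-checking):

```latex
\begin{align*}
\partial_p g^{ij}
  &= -\,g^{ik} g^{jl}\,\partial_p g_{kl}
     && \text{(differentiate } g^{ik}g_{kl}=\delta^i_l\text{)}\\
  &= -\,g^{ik} g^{jl}\left(\Gamma^q_{pk}g_{ql}+\Gamma^q_{pl}g_{kq}\right)
     && (\nabla g = 0)\\
  &= -\,g^{ik}\Gamma^j_{pk}-g^{jl}\Gamma^i_{pl}.
\end{align*}
```

Contracting the last line with $Z^p\alpha_i\beta_j$ and relabelling the dummy indices reproduces the displayed condition, so the dual connection is compatible with the induced pairing on one-forms.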
I have been working with neural networks for a while now and I've read that I could use a simple feedforward neural network to create a self-driving car.
I was wondering how this possibly works because usually for any data that depends on time, I thought that I need to use a RNN.
I am working with Java so implementing a neural network is slightly harder than simply using Numpy in Python etc. So I was not able yet to code my own RNN but still want to create a small 2D simulation where a car should drive a track.
So my question now is how one would make an ANN that learns how to drive a track. My attempt was to have 9 input values that determine the distance to the border of the track in a circular shape in front of the car. The distance is given as a value between 0 and 1 and is calculated with a nonlinear function (like in computer graphics) that might look something like this:$$\tag{$\forall k \in \mathbb{R}:k>0$} d\left(x\right)\ =\frac{-1}{\left(x\cdot k+1\right)^2}+1$$
Probably 2 hidden layers with 5 neurons each.
The first value of my output determines how fast the car should accelerate. The second one tells the car to drive left/right $$(0,0.5) \rightarrow left $$$$(0.5,1) \rightarrow right$$
The real question is
how would I make my network learn to drive better? Like what happens when I hit a wall? What should the error be? |
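Since the track gives no labelled targets, one common answer is to treat the 9-sensor network as a policy and train it with a simple evolutionary search (mutate the weights, keep the car that drives farthest) rather than with an error gradient. A minimal forward-pass sketch in Python (layer sizes from the question above; everything else, including the sensor values, is a placeholder assumption):

```python
import math, random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def init_layer(n_in, n_out, rng):
    # each row holds n_in weights plus one bias term
    return [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_out)]

def forward(layers, inputs):
    a = list(inputs)
    for layer in layers:
        a = [sigmoid(sum(w * x for w, x in zip(row, a + [1.0]))) for row in layer]
    return a

rng = random.Random(42)
net = [init_layer(9, 5, rng), init_layer(5, 5, rng), init_layer(5, 2, rng)]

sensors = [0.2, 0.4, 0.7, 0.9, 1.0, 0.9, 0.7, 0.4, 0.2]   # 9 distances in [0, 1]
accel, steer = forward(net, sensors)
print(accel, "left" if steer < 0.5 else "right")
```

A training loop would then clone `net`, perturb each weight slightly, simulate a lap, and keep the mutant only if it travels farther before hitting a wall; no explicit error signal is needed.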
The missing factor method is a particularly nice way to understand fraction division. It builds on what we know about multiplication and division, reinforcing that these operations have the same relationship whether the numbers are whole number, fractions, or anything else. It makes sense. But we’ve seen that it doesn’t always work out nicely. For example,
$$\frac{3}{4} \div \frac{1}{3} = \_\_$$
can be rewritten as
$$\frac{1}{3} \cdot \_\_ = \frac{3}{4} \ldotp$$
You want to ask: \(\frac{1}{3}\) times what gives \(\frac{3}{4}\)?
For the numerator: \(1 \cdot \_\_ = 3\). We can fill in the blank with a 3. For the denominator: \(3 \cdot \_\_ = 4\). We can fill in the blank with \(\frac{4}{3}\). (Why does that work?)
So we have: $$\frac{1}{3} \cdot \frac{3}{\frac{4}{3}} = \frac{3}{4} \ldotp$$
You learned about fractions like $$\frac{3}{\frac{4}{3}}$$
back in the “What is a Fraction?” chapter. This means that each \(\frac{4}{3}\) of a kid gets 3 pies. So how much does an individual kid (one whole kid) get? You could draw a picture to help you figure it out. But we can also use the key fraction rule to help us out.
$$\frac{3}{\frac{4}{3}} = \frac{3 \cdot 3}{3 \cdot \frac{4}{3}} = \frac{9}{4} \ldotp$$
This process is going to be key to understanding why the “invert and multiply” rule for fraction division actually makes sense.
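As a quick sanity check (not part of the original text), Python's exact rational arithmetic in the `fractions` module confirms the missing-factor computation:

```python
from fractions import Fraction

# "1/3 times what equals 3/4?" -- the missing factor is the quotient.
missing = Fraction(3, 4) / Fraction(1, 3)   # 9/4, as found above
check = Fraction(1, 3) * missing            # should recover 3/4
```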
Simplify an Ugly Fraction
Example \(\PageIndex{1}\):
\(7 \frac{2}{3}\) pies are shared equally by \(5 \frac{1}{4}\) children. How much pie does each child get?
Technically, we could just write down the answer as $$\frac{7 \frac{2}{3}}{5 \frac{1}{4}}$$and be done! The answer is equivalent to this fraction, so why not?
Is there a way to make this look friendlier? Well, if we change those mixed numbers to “improper” fractions, it helps a little:
$$\frac{7 \frac{2}{3}}{5 \frac{1}{4}} = \frac{\frac{23}{3}}{\frac{21}{4}}$$
That’s a bit better, but it’s still not clear how much pie each kid gets. Let’s use the key fraction rule to make the fraction even friendlier. Let’s multiply the numerator and denominator each by 3. (Why three?) Remember, this means we’re multiplying the fraction by \(\frac{3}{3}\), which is just a special form of 1, so we don’t change its value.
$$\frac{3 \cdot \frac{23}{3}}{3 \cdot \frac{21}{4}} = \frac{23}{\frac{63}{4}} \ldotp$$
Now multiply numerator and denominator each by 4. (Why four?)
$$\frac{4 \cdot 23}{4 \cdot \frac{63}{4}} = \frac{92}{63} \ldotp$$
We now see that the answer is \(\frac{92}{63}\). That means that sharing \(7 \frac{2}{3}\) pies among \(5 \frac{1}{4}\) children is the same as sharing 92 pies among 63 children. (In both situations, each individual child gets exactly the same amount of pie.)
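Again purely as a check (my own illustration), the same computation with exact rational arithmetic:

```python
from fractions import Fraction

pies = Fraction(7) + Fraction(2, 3)      # 7 2/3 pies  -> 23/3
children = Fraction(5) + Fraction(1, 4)  # 5 1/4 kids  -> 21/4
share = pies / children                  # pie per child
```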
Example \(\PageIndex{2}\):
Let’s forget the context now and just focus on the calculations so that we can see what is going on more clearly. Try this one:
$$\frac{\frac{3}{5}}{\frac{2}{3}} \ldotp$$
Multiplying the numerator and denominator each by 5 (why did we choose 5?) gives
$$\frac{\frac{3}{5}}{\frac{2}{3}} = \frac{5 \cdot \frac{3}{5}}{5 \cdot \frac{2}{3}} = \frac{3}{\frac{10}{3}} \ldotp$$
Now multiply the numerator and denominator each by 3 (why did we choose 3?):
$$\frac{3 \cdot 3}{3 \cdot \frac{10}{3}} = \frac{9}{10} \ldotp$$
On Your Own

Each of the following is a perfectly nice fraction, but it could be written in a simpler form. So do that! Write each of them in a simpler form following the examples above. $$\frac{\frac{2}{3}}{\frac{1}{3}}, \qquad \frac{2 \frac{1}{5}}{2 \frac{1}{4}}, \qquad \frac{\frac{5}{7}}{\frac{3}{5}}, \qquad \frac{\frac{3}{7}}{\frac{4}{5}} \ldotp$$

Think / Pair / Share

Jessica calculated the second exercise above this way: $$\frac{2 \frac{1}{5}}{2 \frac{1}{4}} = \frac{\frac{1}{5}}{\frac{1}{4}} = \frac{\frac{1}{5} \cdot 4}{\frac{1}{4} \cdot 4} = \frac{\frac{4}{5}}{1} = \frac{4}{5} \ldotp$$Is her solution correct, or is she misunderstanding something? Carefully explain what is going on with her solution, and what you would do as Jessica’s teacher.

Isaac calculated the last exercise above this way: $$\frac{\frac{3}{7}}{\frac{4}{5}} = \frac{\frac{3}{7} \cdot 7}{\frac{4}{5} \cdot 5} = \frac{3}{4} \ldotp$$Is his solution correct, or is he misunderstanding something? Carefully explain what is going on with his solution, and what you would do as Isaac’s teacher.
Perhaps without realizing it, you have just found another method to divide fractions.
Example: 3/5 ÷ 4/7
Consider \(\frac{3}{5} \div \frac{4}{7}\). We know that a fraction is the answer to a division problem, meaning
$$\frac{3}{5} \div \frac{4}{7} = \frac{\frac{3}{5}}{\frac{4}{7}} \ldotp$$
And now we know how to simplify ugly fractions like this one! Multiply the numerator and denominator each by 5:
$$\frac{(\frac{3}{5}) \cdot 5}{(\frac{4}{7}) \cdot 5} = \frac{3}{\frac{20}{7}} \ldotp$$
Now multiply them each by 7:
$$\frac{(3) \cdot 7}{(\frac{20}{7}) \cdot 7} = \frac{21}{20} \ldotp$$
Done! So
$$\frac{3}{5} \div \frac{4}{7} = \frac{21}{20} \ldotp$$
Example: 5/9 ÷ 8/11
Let’s do another! Consider \(\frac{5}{9} \div \frac{8}{11}\):
$$\frac{5}{9} \div \frac{8}{11} = \frac{\frac{5}{9}}{\frac{8}{11}} \ldotp$$
Let’s multiply numerator and denominator each by 9 and by 11 at the same time. (Why not?)
$$\frac{\frac{5}{9}}{\frac{8}{11}} = \frac{(\frac{5}{9}) \cdot 9 \cdot 11}{(\frac{8}{11}) \cdot 9 \cdot 11} = \frac{5 \cdot 11}{8 \cdot 9} \ldotp$$
(Do you see what happened here?)
So we have
$$\frac{\frac{5}{9}}{\frac{8}{11}} = \frac{5 \cdot 11}{8 \cdot 9} = \frac{55}{72} \ldotp$$
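Both worked examples can be verified mechanically (an illustration, not part of the text). Multiplying numerator and denominator by both inner denominators clears the inner fractions, which is exactly what the hand computations above do:

```python
from fractions import Fraction

# 3/5 ÷ 4/7: multiply top and bottom by 5 and by 7
a = (Fraction(3, 5) * 5 * 7) / (Fraction(4, 7) * 5 * 7)   # 21/20
# 5/9 ÷ 8/11: multiply top and bottom by 9 and by 11
b = (Fraction(5, 9) * 9 * 11) / (Fraction(8, 11) * 9 * 11)  # 55/72
```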
On Your Own
Compute each of the following, using the simplification technique in the examples above.
$$\frac{1}{2} \div \frac{1}{3}, \qquad \frac{4}{5} \div \frac{3}{7}, \qquad \frac{2}{3} \div \frac{1}{5}, \qquad \frac{45}{59} \div \frac{902}{902}, \qquad \frac{10}{13} \div \frac{2}{13} \ldotp$$
Invert and multiply
Consider the problem \(\frac{5}{12} \div \frac{7}{11}\). Janine wrote:
$$\frac{\frac{5}{12}}{\frac{7}{11}} = \frac{\frac{5}{12} \cdot 12 \cdot 11}{\frac{7}{11} \cdot 12 \cdot 11} = \frac{5 \cdot 11}{7 \cdot 12} = \frac{5}{12} \cdot \frac{11}{7} \ldotp$$
She stopped before completing her final step and exclaimed: “Dividing one fraction by another is the same as multiplying the first fraction with the second fraction upside down!”
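Janine's conjecture is easy to spot-check exhaustively over small fractions (my own verification sketch, not part of the text):

```python
from fractions import Fraction
from itertools import product

# Check (a/b) / (c/d) == (a/b) * (d/c) for all small positive fractions.
for a, b, c, d in product(range(1, 6), repeat=4):
    assert Fraction(a, b) / Fraction(c, d) == Fraction(a, b) * Fraction(d, c)
```

A spot check is of course not a proof; the Think / Pair / Share below asks you to work out why the general pattern holds.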
Think / Pair / Share
First check each step of Janine’s work here and make sure that she is correct in what she did up to this point. Then answer these questions:
Do you understand what Janine is saying? Explain it very clearly.

Work out \(\frac{\frac{3}{7}}{\frac{4}{13}}\) using the simplification method. Is the answer the same as \(\frac{3}{7} \cdot \frac{13}{4}\)?

Work out \(\frac{\frac{2}{5}}{\frac{3}{10}}\) using the simplification method. Is the answer the same as \(\frac{2}{5} \cdot \frac{10}{3}\)?

Work out \(\frac{\frac{a}{b}}{\frac{c}{d}}\) using the simplification method. Is the answer the same as \(\frac{a}{b} \cdot \frac{d}{c}\)?

Is Janine right? Is dividing two fractions always the same as multiplying the first fraction by the second fraction turned upside down? What do you think? (Do not just think about examples. This is a question about whether something is always true.)

Summary
We now have several methods for solving problems that require dividing fractions:
Dividing fractions: Draw a picture using the rectangle method, and use that to solve the division problem. Find a common denominator and divide the numerators. Rewrite the division as a missing factor multiplication problem, and solve that problem. Simplify an ugly fraction. Invert the second fraction (the divisor) and then multiply.

Think / Pair / Share
Discuss your opinions about our five methods for solving fraction division problems with a partner:
Which method for division of fractions is the easiest to understand why it works? Which method for division of fractions is the easiest to use in computations? What are the benefits and drawbacks of each method? (Think both as a future teacher and as someone solving math problems here.) |
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, }
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the
standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a
topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.

Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the
final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations end with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
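The school-child observation is just the statement that addition and multiplication commute with reduction modulo $2^k$; here is a quick computational check (my own illustration, not from the paper), using the numbers from the display above:

```python
k = 6                       # how many final binary digits we fix
x, y = 1261, 153            # the numbers from the worked display above
x2 = x + 5 * 2**k           # a different number with the same final k digits as x
y2 = y + 9 * 2**k           # likewise for y

def same_final(a, b):
    """True when a and b agree on their final k binary digits,
    i.e. are close in the final-digits (2-adic) topology."""
    return a % 2**k == b % 2**k

# Agreement on inputs' final digits forces agreement on the outputs'
# final digits -- continuity of + and * in the final-digits topology.
```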
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the
final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is the precisely same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
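As a concrete illustration (my own, not from the paper), the final-digits comparison can be coded directly on reversed binary strings:

```python
def fd_less(n, m):
    """Final-digits order on nonnegative integers: compare binary digits
    from the final (least significant) digit; at the first disagreement
    0 precedes 1.  If one representation runs out first, the longer
    number is lower when its next digit is 0 and higher when it is 1.
    Zero is represented by the empty string, per the text's convention."""
    a = bin(n)[2:][::-1] if n else ""   # final digit first
    b = bin(m)[2:][::-1] if m else ""
    for da, db in zip(a, b):
        if da != db:
            return da == "0"
    if len(a) == len(b):
        return False                    # n == m
    longer_is_n = len(a) > len(b)
    next_digit = (a if longer_is_n else b)[min(len(a), len(b))]
    # the longer number is lower iff its next digit is 0
    return longer_is_n == (next_digit == "0")
```

One can check that this places the evens to the left, the odds to the right, and $0$ directly in the middle, as the text describes.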
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
|
Frequently we will want to estimate the empirical probability density function of real-world data and compare it to the theoretical density from one or more probability distributions. The following example shows the empirical and theoretical normal density for EUR/USD high-frequency tick data \(X\) (which has been transformed using log-returns and normalized via \(\frac{X_i-\mu_X}{\sigma_X}\)). The theoretical normal density is plotted over the range \(\left(\lfloor\mathrm{min}(X)\rfloor,\lceil\mathrm{max}(X)\rceil\right)\). The results are in the figure below. The discontinuities and asymmetry of the discrete tick data, as well as the sharp kurtosis and heavy tails (a corresponding interval of \(\approx \left[-8,+7\right]\) standard deviations away from the mean) are apparent from the plot.
We also show the theoretical and empirical density for the EUR/USD exchange rate log returns over different timescales. We can see from these plots that the distribution of the log returns seems to be asymptotically converging to normality. This is a typical empirical property of financial data.
The following R source generates empirical and theoretical density plots across different timescales. The data is loaded from files that are sampled at different intervals. I can't supply the data, unfortunately, but you should get the idea.
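For readers without R, the core transform (log-returns standardized to zero mean and unit standard deviation) can be sketched in Python; this is my own illustration with made-up prices, not the original code:

```python
import math

def standardized_log_returns(prices):
    """Log-returns of a price series, standardized to zero mean and
    unit sample standard deviation (same transform as diff(log(x))
    followed by (r - mean)/sd in the R code)."""
    r = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mu = sum(r) / len(r)
    sd = math.sqrt(sum((x - mu) ** 2 for x in r) / (len(r) - 1))
    return [(x - mu) / sd for x in r]

# Hypothetical EUR/USD quotes, purely for demonstration
z = standardized_log_returns([1.4301, 1.4305, 1.4298, 1.4310, 1.4307])
```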
[source lang="R"]
# Function that reads Reuters CSV tick data and converts Reuters dates
# Assumes format is Date,Tick
readRTD <- function(filename) {
  tickData <- read.csv(file=filename, header=TRUE, col.names=c("Date", "Tick"))
  tickData$Date <- as.POSIXct(strptime(tickData$Date, format="%d/%m/%Y %H:%M:%S"))
  tickData
}

# Boilerplate function for Reuters FX tick data transformation and density plots
plot.reutersFXDensity <- function() {
  filenames <- c("data/eur_usd_tick_26_10_2007.csv",
                 "data/eur_usd_1min_26_10_2007.csv",
                 "data/eur_usd_5min_26_10_2007.csv",
                 "data/eur_usd_hourly_26_10_2007.csv",
                 "data/eur_usd_daily_26_10_2007.csv")
  labels <- c("Tick", "1 Minute", "5 Minutes", "Hourly", "Daily")

  # Save the old graphics state so it can be restored at the end
  op <- par(mfrow=c(length(filenames), 2), mar=c(0, 0, 2, 0), cex.main=2)

  tickData <- c()
  i <- 1
  for (filename in filenames) {
    tickData[[i]] <- readRTD(filename)
    # Transform: `$Y = \nabla\log(X_i)$`
    logtick <- diff(log(tickData[[i]]$Tick))
    # Normalize: `$\frac{(Y-\mu_Y)}{\sigma_Y}$`
    logtick <- (logtick - mean(logtick)) / sd(logtick)
    # Theoretical density range: `$\left[\lfloor\mathrm{min}(Y)\rfloor,\lceil\mathrm{max}(Y)\rceil\right]$`
    x <- seq(floor(min(logtick)), ceiling(max(logtick)), .01)
    plot(density(logtick), xlab="", ylab="", axes=FALSE, main=labels[i])
    lines(x, dnorm(x), lty=2)
    #legend("topleft", legend=c("Empirical", "Theoretical"), lty=c(1, 2))
    plot(density(logtick), log="y", xlab="", ylab="", axes=FALSE, main="Log Scale")
    lines(x, dnorm(x), lty=2)
    i <- i + 1
  }
  par(op)
}
[/source] |
Focus Questions
The following questions are meant to guide our study of the material in this section. After studying this section, we should understand the concepts motivated by these questions and be able to write precise, coherent answers to these questions.
What is the unit circle and why is it important in trigonometry? What is the equation for the unit circle? What is meant by “wrapping the number line around the unit circle?” How is this used to identify real numbers as the lengths of arcs on the unit circle? How do we associate an arc on the unit circle with a closed interval of real numbers?
Beginning Activity
As has been indicated, one of the primary reasons we study the trigonometric functions is to be able to model periodic phenomena mathematically. Before we begin our mathematical study of periodic phenomena, here is a little “thought experiment” to consider.
Imagine you are standing at a point on a circle and you begin walking around the circle at a constant rate in the counterclockwise direction. Also assume that it takes you four minutes to walk completely around the circle one time. Now suppose you are at a point \(P\) on this circle at a particular time \(t\).
Describe your position on the circle \(2\) minutes after the time \(t\). Describe your position on the circle \(4\) minutes after the time \(t\). Describe your position on the circle \(6\) minutes after the time \(t\). Describe your position on the circle \(8\) minutes after the time \(t\).
The idea here is that your position on the circle repeats every \(4\) minutes. After \(2\) minutes, you are at a point diametrically opposed from the point you started. After \(4\) minutes, you are back at your starting point. In fact, you will be back at your starting point after \(8\) minutes, \(12\) minutes, \(16\) minutes, and so on. This is the idea of periodic behavior.
The Unit Circle and the Wrapping Function
In order to model periodic phenomena mathematically, we will need functions that are themselves periodic. In other words, we look for functions whose values repeat in regular and recognizable patterns. Familiar functions like polynomials and exponential functions do not exhibit periodic behavior, so we turn to the trigonometric functions. Before we can define these functions, however, we need a way to introduce periodicity. We do so in a manner similar to the thought experiment, but we also use mathematical objects and equations. The primary tool is something called
the wrapping function. Instead of using any circle, we will use the so-called unit circle. This is the circle whose center is at the origin and whose radius is equal to \(1\), and the equation for the unit circle is \(x^{2}+y^{2} = 1\).
Figure \(\PageIndex{1}\): Setting up to wrap the number line around the unit circle
Figure \(\PageIndex{1}\) shows the unit circle with a number line drawn tangent to the circle at the point \((1, 0)\). We will “wrap” this number line around the unit circle. Unlike the number line, the length once around the unit circle is finite. (Remember that the formula for the circumference of a circle is \(2\pi r\), where \(r\) is the radius, so the length once around the unit circle is \(2\pi\).) However, we can still measure distances and locate the points on the number line on the unit circle by wrapping the number line around the circle. We wrap the positive part of this number line around the circumference of the circle in a counterclockwise fashion and wrap the negative part of the number line around the circumference of the unit circle in a clockwise direction.
Two snapshots of an animation of this process for the counterclockwise wrap are shown in Figure \(\PageIndex{2}\) and two such snapshots are shown in Figure \(\PageIndex{3}\) for the clockwise wrap.
Figure \(\PageIndex{2}\): Wrapping the positive number line around the unit circle
Figure \(\PageIndex{3}\): Wrapping the negative number line around the unit circle
Following is a link to an actual animation of this process, including both positive wraps and negative wraps.
Figures \(\PageIndex{2}\) and \(\PageIndex{3}\) only show a portion of the number line being wrapped around the circle. Since the number line is infinitely long, it will wrap around the circle infinitely many times. A result of this is that infinitely many different numbers from the number line get wrapped to the same location on the unit circle.
The number 0 and the numbers \(2\pi\), \(-2\pi\), and \(4\pi\) (as well as others) get wrapped to the point \((1, 0)\). We will usually say that these numbers get mapped to the point \((1, 0)\). The number \(\pi /2\) is mapped to the point \((0, 1)\). This is because the circumference of the unit circle is \(2\pi\) and so one-fourth of the circumference is \(\frac{1}{4}(2\pi) = \pi/2\). If we now add \(2\pi\) to \(\pi/2\), we see that \(5\pi/2\) also gets mapped to \((0, 1)\). If we subtract \(2\pi\) from \(\pi/2\), we see that \(-3\pi/2\) also gets mapped to \((0, 1)\).
However, the fact that infinitely many different numbers from the number line get wrapped to the same location on the unit circle turns out to be very helpful as it will allow us to model and represent behavior that repeats or is periodic in nature.
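Although cosine and sine have not been introduced yet, readers with some programming background can preview the wrapping function numerically: the wrap of a real number \(t\) is the point \((\cos t, \sin t)\). The following check of the claims above is my own illustration, not part of the text:

```python
import math

def wrap(t):
    """The wrapping function: the real number t lands on the unit-circle
    point (cos t, sin t); numbers differing by a multiple of 2*pi
    therefore wrap to the same point (rounding absorbs float noise)."""
    return (round(math.cos(t), 9), round(math.sin(t), 9))
```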
Exercise \(\PageIndex{1}\)
Find two different numbers, one positive and one negative, from the number line that get wrapped to the point \((-1, 0)\) on the unit circle. Describe all of the numbers on the number line that get wrapped to the point \((-1, 0)\) on the unit circle. Find two different numbers, one positive and one negative, from the number line that get wrapped to the point \((0, 1)\) on the unit circle. Find two different numbers, one positive and one negative, from the number line that get wrapped to the point \((0, -1)\) on the unit circle. Answer
Some positive numbers that are wrapped to the point \((-1, 0)\) are \(\pi, 3\pi, 5\pi\). Some negative numbers that are wrapped to the point \((-1, 0)\) are \(-\pi, -3\pi, -5\pi\).
The numbers that get wrapped to \((-1, 0)\) are the odd integer multiples of \(\pi\).
Some positive numbers that are wrapped to the point \((0, 1)\) are \(\dfrac{\pi}{2}, \dfrac{5\pi}{2}, \dfrac{9\pi}{2}\).
Some negative numbers that are wrapped to the point \((0, 1)\) are \(-\dfrac{3\pi}{2}, -\dfrac{7\pi}{2}, -\dfrac{11\pi}{2}\).
Some positive numbers that are wrapped to the point \((0, -1)\) are \(\dfrac{3\pi}{2}, \dfrac{7\pi}{2}, \dfrac{11\pi}{2}\).
Some negative numbers that are wrapped to the point \((0, -1)\) are \(-\dfrac{\pi}{2}, -\dfrac{5\pi}{2}, -\dfrac{9\pi}{2}\).
One thing we should see from our work in Exercise \(\PageIndex{1}\) is that integer multiples of \(\pi\) are wrapped either to the point \((1, 0)\) or \((-1, 0)\) and that odd integer multiples of \(\dfrac{\pi}{2}\) are wrapped either to the point \((0, 1)\) or \((0, -1)\). Since the circumference of the unit circle is \(2\pi\), it is not surprising that fractional parts of \(\pi\) and the integer multiples of these fractional parts of \(\pi\) can be located on the unit circle. This will be studied in the next exercise.
Exercise \(\PageIndex{2}\)
The following diagram is a unit circle with \(24\) equally spaced points plotted on the circle. Since the circumference of the circle is \(2\pi\) units, the increment between two consecutive points on the circle is \(\dfrac{2\pi}{24} = \dfrac{\pi}{12}\).
Label each point with the smallest nonnegative real number \(t\) to which it corresponds. For example, the point \((1, 0)\) on the x-axis corresponds to \(t = 0\). Moving
counterclockwise from this point, the second point corresponds to \(\dfrac{2\pi}{12} = \dfrac{\pi}{6}\).
Figure \(\PageIndex{4}\): Points on the unit circle
Using Figure \(\PageIndex{4}\), approximate the \(x\)-coordinate and the \(y\)-coordinate of each of the following:
The point on the unit circle that corresponds to \(t =\dfrac{\pi}{3}\). The point on the unit circle that corresponds to \(t =\dfrac{2\pi}{3}\). The point on the unit circle that corresponds to \(t =\dfrac{4\pi}{3}\). The point on the unit circle that corresponds to \(t =\dfrac{5\pi}{3}\). The point on the unit circle that corresponds to \(t = \dfrac{\pi}{4}\). The point on the unit circle that corresponds to \(t =\dfrac{7\pi}{4}\). Answer
For \(t = \dfrac{\pi}{3}\), the point is approximately \((0.5, 0.87)\). For \(t = \dfrac{2\pi}{3}\), the point is approximately \((-0.5, 0.87)\). For \(t = \dfrac{4\pi}{3}\), the point is approximately \((-0.5, -0.87)\). For \(t = \dfrac{5\pi}{3}\), the point is approximately \((0.5, -0.87)\). For \(t = \dfrac{\pi}{4}\), the point is approximately \((0.71, 0.71)\). For \(t = \dfrac{7\pi}{4}\), the point is approximately \((0.71, -0.71)\).
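These approximations can be confirmed numerically, using the fact that the point for arc length \(t\) is \((\cos t, \sin t)\) (a check I have added; it is not part of the text):

```python
import math

# Coordinates of the wrapped point for each arc length t in the answer.
points = {t: (math.cos(t), math.sin(t))
          for t in (math.pi/3, 2*math.pi/3, 4*math.pi/3,
                    5*math.pi/3, math.pi/4, 7*math.pi/4)}
# e.g. points[math.pi/3] is close to (0.5, 0.87), as tabulated above
```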
Arcs on the Unit Circle
When we wrap the number line around the unit circle, any closed interval on the number line gets mapped to a continuous piece of the unit circle. These pieces are called
arcs of the circle. For example, the segment \(\Big[0, \dfrac{\pi}{2}\Big]\) on the number line gets mapped to the arc connecting the points \((1, 0)\) and \((0, 1)\) on the unit circle as shown in Figure \(\PageIndex{5}\). In general, when a closed interval \([a, b]\) is mapped to an arc on the unit circle, the point corresponding to \(t = a\) is called the initial point of the arc, and the point corresponding to \(t = b\) is called the terminal point of the arc. So the arc corresponding to the closed interval \(\Big[0, \dfrac{\pi}{2}\Big]\) has initial point \((1, 0)\) and terminal point \((0, 1)\).
Figure \(\PageIndex{5}\): An arc on the unit circle
Exercise \(\PageIndex{3}\)
Draw the following arcs on the unit circle.
The arc that is determined by the interval \([0, \dfrac{\pi}{4}]\) on the number line. The arc that is determined by the interval \([0, \dfrac{2\pi}{3}]\) on the number line. The arc that is determined by the interval \([0, -\pi]\) on the number line. Answer
Coordinates of Points on the Unit Circle
When we have an equation (usually in terms of \(x\) and \(y\)) for a curve in the plane and we know one of the coordinates of a point on that curve, we can use the equation to determine the other coordinate for the point on the curve. The equation for the unit circle is \(x^2+y^2 = 1\). So if we know one of the two coordinates of a point on the unit circle, we can substitute that value into the equation and solve for the value(s) of the other variable.
For example, suppose we know that the x-coordinate of a point on the unit circle is \(-\dfrac{1}{3}\). This is illustrated on the following diagram. This diagram shows the unit circle \(x^2+y^2 = 1\) and the vertical line \(x = -\dfrac{1}{3}\). This shows that there are two points on the unit circle whose x-coordinate is \(-\dfrac{1}{3}\). We can find the \(y\)-coordinates by substituting the \(x\)-value into the equation and solving for \(y\).
\[\begin{align*} x^2+y^2 &= 1 \\[4pt] (-\dfrac{1}{3})^2+y^2 &= 1 \\[4pt] \dfrac{1}{9}+y^2 &= 1 \\[4pt] y^2 &= \dfrac{8}{9} \end{align*}\]
Since \(y^2 = \dfrac{8}{9}\), we see that \(y = \pm\sqrt{\dfrac{8}{9}}\) and so \(y = \pm\dfrac{\sqrt{8}}{3}\). So the two points on the unit circle whose \(x\)-coordinate is \(-\dfrac{1}{3}\) are
\[ \left(-\dfrac{1}{3}, \dfrac{\sqrt{8}}{3}\right),\]
which is in the second quadrant and
\[ \left(-\dfrac{1}{3}, -\dfrac{\sqrt{8}}{3}\right),\]
which is in the third quadrant.
The first point is in the second quadrant and the second point is in the third quadrant. We can now use a calculator to verify that \(\dfrac{\sqrt{8}}{3} \approx 0.9428\). This seems consistent with the diagram we used for this problem.
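The same computation is easy to script: given one coordinate of a point on \(x^2 + y^2 = 1\), solve for the other. A minimal Python sketch of the worked example above:

```python
import math

def unit_circle_y(x):
    """Both y-values on the unit circle x^2 + y^2 = 1 for a given x with |x| <= 1."""
    y = math.sqrt(1 - x * x)
    return (y, -y)

y_pos, y_neg = unit_circle_y(-1 / 3)
print(round(y_pos, 4))   # 0.9428, matching sqrt(8)/3
```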
Exercise \(\PageIndex{4}\)
Find all points on the unit circle whose \(y\)-coordinate is \(\dfrac{1}{2}\). Find all points on the unit circle whose x-coordinate is \(\dfrac{\sqrt{5}}{4}\). Answer We substitute \(y = \dfrac{1}{2}\) into \(x^{2} + y^{2} = 1\).
\[x^{2} + (\dfrac{1}{2})^{2} = 1\]
\[x^{2} = \dfrac{3}{4}\] \[x = \pm\dfrac{\sqrt{3}}{2}\]
The two points are \((\dfrac{\sqrt{3}}{2}, \dfrac{1}{2})\) and \((-\dfrac{\sqrt{3}}{2}, \dfrac{1}{2})\)
We substitute \(x = \dfrac{\sqrt{5}}{4}\) into \(x^{2} + y^{2} = 1\).
\[(\dfrac{\sqrt{5}}{4})^{2} + y^{2} = 1\]
\[y^{2} = \dfrac{11}{16}\] \[y = \pm\dfrac{\sqrt{11}}{4}\]
The two points are \((\dfrac{\sqrt{5}}{4}, \dfrac{\sqrt{11}}{4})\) and \((\dfrac{\sqrt{5}}{4}, -\dfrac{\sqrt{11}}{4})\).
Summary
In this section, we studied the following important concepts and ideas:
The unit circle is the circle of radius 1 that is centered at the origin. The equation of the unit circle is \(x^2+y^2 = 1\). It is important because we will use this as a tool to model periodic phenomena.
We “wrap” the number line about the unit circle by drawing a number line that is tangent to the unit circle at the point \((1, 0)\). We wrap the positive part of the number line around the unit circle in the counterclockwise direction and wrap the negative part of the number line around the unit circle in the clockwise direction.
When we wrap the number line around the unit circle, any closed interval of real numbers gets mapped to a continuous piece of the unit circle, which is called an arc of the circle. When the closed interval \([a, b]\) is mapped to an arc on the unit circle, the point corresponding to \(t = a\) is called the initial point of the arc, and the point corresponding to \(t = b\) is called the terminal point of the arc.
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth, subject to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
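That A-to-B rule is easy to prototype. A toy sketch in Python (hypothetical grid size and neighborhood choices, not either game's actual engine):

```python
# Toy version of the rule above: type-A particles turn into type B after 10
# steps, and type-B particles turn back into A when any neighbor is type A.
# Cells hold 'A', 'B', or None; the grid wraps around (toroidal).
SIZE, STEPS_TO_B = 20, 10

def step(grid, ages):
    new_grid = [row[:] for row in grid]
    new_ages = [row[:] for row in ages]
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] == 'A':
                new_ages[i][j] += 1
                if new_ages[i][j] >= STEPS_TO_B:
                    new_grid[i][j], new_ages[i][j] = 'B', 0
            elif grid[i][j] == 'B':
                neighbors = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                             for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                if 'A' in neighbors:
                    new_grid[i][j], new_ages[i][j] = 'A', 0
    return new_grid, new_ages

grid = [[None] * SIZE for _ in range(SIZE)]
ages = [[0] * SIZE for _ in range(SIZE)]
grid[5][5], grid[5][6] = 'A', 'B'
grid, ages = step(grid, ages)
print(grid[5][6])  # 'A': the B particle had an A neighbor
```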
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled red markers and some scribbles
The documentary then showed one of the bird's eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays, and they notate the range of indices of the tensor array
In some tiles, there's a swirl of dirt mount, they represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible laying across the grid, holding up a triangle pool of water
Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied so that the sign snaps in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also ask about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta}{}_{\gamma,\delta,\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the range of the $\alpha,\beta$ indices. For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$
However even if taking indices to have certain values only, it is unsure if it is of any use since most tensor expressions have indices taken from a set of consecutive numbers rather than random integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I had liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict to the IRL world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
4
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon, but I can't see here the meaning of "campaign" enough well-defined. And yes, it is a little bit of source of fear for me, that maybe my behavior can be also measured as if "I would campaign for my caging". |
Learning Objectives
Graph plane curves described by parametric equations by plotting points. Graph parametric equations.
It is the bottom of the ninth inning, with two outs and two men on base. The home team is losing by two runs. The batter swings and hits the baseball at \(140\) feet per second and at an angle of approximately \(45°\) to the horizontal. How far will the ball travel? Will it clear the fence for a game-winning home run? The outcome may depend partly on other factors (for example, the wind), but mathematicians can model the path of a projectile and predict approximately how far it will travel using parametric equations. In this section, we’ll discuss parametric equations and some common applications, such as projectile motion problems.
Graphing Parametric Equations by Plotting Points
In lieu of a graphing calculator or a computer graphing program, plotting points to represent the graph of an equation is the standard method. As long as we are careful in calculating the values, point-plotting is highly dependable.
How to: Given a pair of parametric equations, sketch a graph by plotting points
Construct a table with three columns: \(t\), \(x(t)\), and \(y(t)\). Evaluate \(x\) and \(y\) for values of \(t\) over the interval for which the functions are defined. Plot the resulting pairs \((x,y)\).
Example \(\PageIndex{1}\): Sketching the Graph of a Pair of Parametric Equations by Plotting Points
Sketch the graph of the parametric equations \(x(t)=t^2+1\), \( y(t)=2+t\).
Solution
Construct a table of values for \(t\), \(x(t)\), and \(y(t)\), as in Table \(\PageIndex{1}\), and plot the points in a plane.
\(t\) | \(x(t)=t^2+1\) | \(y(t)=2+t\)
\(−5\) | \(26\) | \(−3\)
\(−4\) | \(17\) | \(−2\)
\(−3\) | \(10\) | \(−1\)
\(−2\) | \(5\) | \(0\)
\(−1\) | \(2\) | \(1\)
\(0\) | \(1\) | \(2\)
\(1\) | \(2\) | \(3\)
\(2\) | \(5\) | \(4\)
\(3\) | \(10\) | \(5\)
\(4\) | \(17\) | \(6\)
\(5\) | \(26\) | \(7\)
The graph is a parabola with vertex at the point \((1,2)\),opening to the right. See Figure \(\PageIndex{2}\).
Analysis
As values for \(t\) progress in a positive direction from \(0\) to \(5\), the plotted points trace out the top half of the parabola. As values of \(t\) become negative, they trace out the lower half of the parabola. There are no restrictions on the domain. The arrows indicate direction according to increasing values of \(t\). The graph does not represent a function, as it will fail the vertical line test. The graph is drawn in two parts: the positive values for \(t\), and the negative values for \(t\).
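The table-building step is mechanical enough to script. A short Python sketch that regenerates the table for this example:

```python
def x(t):
    return t ** 2 + 1

def y(t):
    return 2 + t

# Rebuild the table for integer t from -5 to 5.
table = [(t, x(t), y(t)) for t in range(-5, 6)]
for row in table:
    print(row)
```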
Exercise \(\PageIndex{1}\)
Sketch the graph of the parametric equations \(x=\sqrt{t}\), \( y=2t+3\), \(0≤t≤3\).
Answer
Example \(\PageIndex{2}\): Sketching the Graph of Trigonometric Parametric Equations
Construct a table of values for the given parametric equations and sketch the graph:
\(x=2 \cos t\)
\(y=4 \sin t\)
Solution
Construct a table like that in Table \(\PageIndex{2}\) using angle measure in radians as inputs for \(t\), and evaluating \(x\) and \(y\). Using angles with known sine and cosine values for \(t\) makes calculations easier.
\(t\) | \(x=2 \cos t\) | \(y=4 \sin t\)
\(0\) | \(x=2 \cos(0)=2\) | \(y=4 \sin(0)=0\)
\(\dfrac{\pi}{6}\) | \(x=2 \cos(\dfrac{\pi}{6})=\sqrt{3}\) | \(y=4 \sin(\dfrac{\pi}{6})=2\)
\(\dfrac{\pi}{3}\) | \(x=2 \cos(\dfrac{\pi}{3})=1\) | \(y=4 \sin(\dfrac{\pi}{3})=2\sqrt{3}\)
\(\dfrac{\pi}{2}\) | \(x=2 \cos(\dfrac{\pi}{2})=0\) | \(y=4 \sin(\dfrac{\pi}{2})=4\)
\(\dfrac{2\pi}{3}\) | \(x=2 \cos(\dfrac{2\pi}{3})=−1\) | \(y=4 \sin(\dfrac{2\pi}{3})=2\sqrt{3}\)
\(\dfrac{5\pi}{6}\) | \(x=2 \cos(\dfrac{5\pi}{6})=−\sqrt{3}\) | \(y=4 \sin(\dfrac{5\pi}{6})=2\)
\(\pi\) | \(x=2 \cos(\pi)=−2\) | \(y=4 \sin(\pi)=0\)
\(\dfrac{7\pi}{6}\) | \(x=2 \cos(\dfrac{7\pi}{6})=−\sqrt{3}\) | \(y=4 \sin(\dfrac{7\pi}{6})=−2\)
\(\dfrac{4\pi}{3}\) | \(x=2 \cos(\dfrac{4\pi}{3})=−1\) | \(y=4 \sin(\dfrac{4\pi}{3})=−2\sqrt{3}\)
\(\dfrac{3\pi}{2}\) | \(x=2 \cos(\dfrac{3\pi}{2})=0\) | \(y=4 \sin(\dfrac{3\pi}{2})=−4\)
\(\dfrac{5\pi}{3}\) | \(x=2 \cos(\dfrac{5\pi}{3})=1\) | \(y=4 \sin(\dfrac{5\pi}{3})=−2\sqrt{3}\)
\(\dfrac{11\pi}{6}\) | \(x=2 \cos(\dfrac{11\pi}{6})=\sqrt{3}\) | \(y=4 \sin(\dfrac{11\pi}{6})=−2\)
\(2\pi\) | \(x=2 \cos(2\pi)=2\) | \(y=4 \sin(2\pi)=0\)
Figure \(\PageIndex{4}\) shows the graph.
By the symmetry shown in the values of \(x\) and \(y\), we see that the parametric equations represent an ellipse. The ellipse is mapped in a counterclockwise direction as shown by the arrows indicating increasing \(t\) values.
Analysis
We have seen that parametric equations can be graphed by plotting points. However, a graphing calculator will save some time and reveal nuances in a graph that may be too tedious to discover using only hand calculations. Make sure to change the mode on the calculator to parametric (PAR). To confirm, the \(Y=\) window should show
\[\begin{align*} X_{1T} &= \\ Y_{1T} &= \end{align*}\]
instead of \(Y_1=\).
Exercise \(\PageIndex{2}\)
Graph the parametric equations: \(x=5 \cos t\), \(y=3 \sin t\).
Answer
Example \(\PageIndex{3}\): Graphing Parametric Equations and Rectangular Form Together
Graph the parametric equations \(x=5 \cos t\) and \(y=2 \sin t\). First, construct the graph using data points generated from the parametric form. Then graph the rectangular form of the equation. Compare the two graphs.
Solution
Construct a table of values like that in Table \(\PageIndex{3}\).
\(t\) | \(x=5 \cos t\) | \(y=2 \sin t\)
\(0\) | \(x=5 \cos(0)=5\) | \(y=2 \sin(0)=0\)
\(1\) | \(x=5 \cos(1)≈2.7\) | \(y=2 \sin(1)≈1.7\)
\(2\) | \(x=5 \cos(2)≈−2.1\) | \(y=2 \sin(2)≈1.8\)
\(3\) | \(x=5 \cos(3)≈−4.95\) | \(y=2 \sin(3)≈0.28\)
\(4\) | \(x=5 \cos(4)≈−3.3\) | \(y=2 \sin(4)≈−1.5\)
\(5\) | \(x=5 \cos(5)≈1.4\) | \(y=2 \sin(5)≈−1.9\)
\(−1\) | \(x=5 \cos(−1)≈2.7\) | \(y=2 \sin(−1)≈−1.7\)
\(−2\) | \(x=5 \cos(−2)≈−2.1\) | \(y=2 \sin(−2)≈−1.8\)
\(−3\) | \(x=5 \cos(−3)≈−4.95\) | \(y=2 \sin(−3)≈−0.28\)
\(−4\) | \(x=5 \cos(−4)≈−3.3\) | \(y=2 \sin(−4)≈1.5\)
\(−5\) | \(x=5 \cos(−5)≈1.4\) | \(y=2 \sin(−5)≈1.9\)
Plot the \((x,y)\) values from the table (Figure \(\PageIndex{6}\)).
Next, translate the parametric equations to rectangular form. To do this, we solve for \(t\) in either \(x(t)\) or \(y(t)\), and then substitute the expression for \(t\) in the other equation. The result will be a function \(y(x)\) if solving for \(t\) as a function of \(x\), or \(x(y)\) if solving for \(t\) as a function of \(y\).
\[\begin{align*} x &= 5 \cos t \\ \dfrac{x}{5} &= \cos t \end{align*}\]
Solve for \(\cos t\).
\(y=2 \sin t\)
Solve for \(\sin t\).
\(\dfrac{y}{2}=\sin t\)
Then, use the Pythagorean identity \({\cos}^2 t+{\sin}^2 t=1\).
\[\begin{align*} {\cos}^2 t+{\sin}^2 t &=1 \\ {\left(\dfrac{x}{5}\right)}^2+{\left(\dfrac{y}{2}\right)}^2 &= 1 \\ \dfrac{x^2}{25}+\dfrac{y^2}{4} &=1 \end{align*}\]
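The elimination of the parameter can be checked numerically: every point \((5 \cos t, 2 \sin t)\) should satisfy the rectangular equation \(\dfrac{x^2}{25}+\dfrac{y^2}{4}=1\). A quick Python verification:

```python
import math

# Sample many parameter values and confirm each point lies on the ellipse.
for k in range(200):
    t = k * 0.05
    x, y = 5 * math.cos(t), 2 * math.sin(t)
    assert abs(x ** 2 / 25 + y ** 2 / 4 - 1) < 1e-12
print("all sampled points satisfy x^2/25 + y^2/4 = 1")
```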
Analysis
In Figure \(\PageIndex{7}\), the data from the parametric equations and the rectangular equation are plotted together. The parametric equations are plotted in blue; the graph for the rectangular equation is drawn on top of the parametric in a dashed style colored red. Clearly, both forms produce the same graph.
Example \(\PageIndex{4}\): Graphing Parametric Equations and Rectangular Equations on the Coordinate System
Graph the parametric equations \(x=t+1\) and \(y=\sqrt{t}\), \(t≥0\), and the rectangular equivalent \(y=\sqrt{x−1}\) on the same coordinate system.
Solution
Construct a table of values for the parametric equations, as we did in the previous example, and graph the rectangular equation \(y=\sqrt{x−1}\) on the same grid, as in Figure \(\PageIndex{8}\).
Analysis
With the domain on \(t\) restricted, we only plot positive values of \(t\). The parametric data is graphed in blue and the graph of the rectangular equation is dashed in red. Once again, we see that the two forms overlap.
Exercise \(\PageIndex{3}\)
Sketch the graph of the parametric equations \(x=2 \cos \theta\) and \(y=4 \sin \theta\), along with the rectangular equation on the same grid.
Answer
The graph of the parametric equations is in red and the graph of the rectangular equation is drawn in blue dots on top of the parametric equations.
Applications of Parametric Equations
Many of the advantages of parametric equations become obvious when applied to solving real-world problems. Although rectangular equations in \(x\) and \(y\) give an overall picture of an object's path, they do not reveal the position of an object at a specific time. Parametric equations, however, illustrate how the values of \(x\) and \(y\) change depending on \(t\), such as the location of a moving object at a particular time.
A common application of parametric equations is solving problems involving projectile motion. In this type of motion, an object is propelled forward in an upward direction forming an angle of \(\theta\) to the horizontal, with an initial speed of \(v_0\), and at a height \(h\) above the horizontal.
The path of an object propelled at an inclination of \(\theta\) to the horizontal, with initial speed \(v_0\), and at a height \(h\) above the horizontal, is given by
\[\begin{align*} x &= (v_0 \cos \theta)t \\ y &= −\dfrac{1}{2}gt^2+(v_0 \sin \theta)t+h \end{align*}\]
where \(g\) accounts for the effects of gravity and \(h\) is the initial height of the object. Depending on the units involved in the problem, use \(g=32 ft / s^2\) or \(g=9.8 m / s^2\). The equation for \(x\) gives horizontal distance, and the equation for \(y\) gives the vertical distance.
How to: Given a projectile motion problem, use parametric equations to solve.
The horizontal distance is given by \(x=(v_0 \cos \theta)t\). Substitute the initial speed of the object for \(v_0\). The expression \(v_0 \cos \theta\) is the horizontal component of the initial velocity, where \(\theta\) is the angle at which the object is propelled; substitute that angle for \(\theta\). The vertical distance is given by the formula \(y=−\dfrac{1}{2}gt^2+(v_0 \sin \theta)t+h\). The term \(−\dfrac{1}{2}gt^2\) represents the effect of gravity. Depending on units involved, use \(g=32 ft/s^2\) or \(g=9.8 m/s^2\). Again, substitute the initial speed for \(v_0\), and the height at which the object was propelled for \(h\). Proceed by calculating each term to solve for \(t\).
Example \(\PageIndex{5}\): Finding the Parametric Equations to Describe the Motion of a Baseball
Solve the problem presented at the beginning of this section. Does the batter hit the game-winning home run? Assume that the ball is hit with an initial velocity of \(140\) feet per second at an angle of \(45°\) to the horizontal, making contact \(3\) feet above the ground.
Find the parametric equations to model the path of the baseball. Where is the ball after \(2\) seconds? How long is the ball in the air? Is it a home run? Solution
1. Use the formulas to set up the equations. The horizontal position is found using the parametric equation for \(x\). Thus,
\[\begin{align*} x &= (v_0 \cos \theta)t \\ x &= (140 \cos(45°))t \end{align*}\]
The vertical position is found using the parametric equation for \(y\). Thus,
\[\begin{align*} y &=−16t^2+(v_0 \sin \theta)t+h \\ y &= −16t^2+(140 \sin(45°))t+3 \end{align*}\]
2. Substitute \(2\) into the equations to find the horizontal and vertical positions of the ball.
\[\begin{align*} x &= (140 \cos(45°))(2) \\ x &= 198\space feet \\ y &= −16{(2)}^2+(140 \sin(45°))(2)+3 \\ y &=137\space feet \end{align*}\]
After \(2\) seconds, the ball is \(198\) feet away from the batter’s box and \(137\) feet above the ground.
3. To calculate how long the ball is in the air, we have to find out when it will hit ground, or when \(y=0\). Thus,
\[\begin{align*} y &= −16t^2+(140 \sin(45°))t+3 \\ 0 &= −16t^2+(140 \sin(45°))t+3 \quad \text{Set }y(t)=0 \text{ and solve the quadratic.} \\ t &≈ 6.2173 \end{align*}\]
When \(t=6.2173\) seconds, the ball has hit the ground. (The quadratic equation can be solved in various ways, but this problem was solved using a computer math program.)
4. We cannot confirm that the hit was a home run without considering the size of the outfield, which varies from field to field. However, for simplicity’s sake, let’s assume that the outfield wall is \(400\) feet from home plate in the deepest part of the park. Let’s also assume that the wall is \(10\) feet high. In order to determine whether the ball clears the wall, we need to calculate how high the ball is when \(x = 400\) feet. So we will set \(x = 400\), solve for \(t\), and input \(t\) into \(y\).
\[\begin{align*} x &= (140 \cos(45°))t \\ 400 &= (140 \cos(45°))t \\ t &= 4.04 \\ y &= −16{(4.04)}^2+(140 \sin(45°))(4.04)+3 \\ y &= 141.8 \end{align*}\]
The ball is \(141.8\) feet in the air when it soars out of the ballpark. It was indeed a home run. See Figure \(\PageIndex{10}\).
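The whole baseball example can be reproduced in a few lines. The sketch below uses the problem's values with \(g = 32\ ft/s^2\):

```python
import math

V0, THETA, H = 140.0, math.radians(45), 3.0   # initial speed, angle, height
G = 32.0                                       # ft/s^2

def x(t):
    return V0 * math.cos(THETA) * t

def y(t):
    return -0.5 * G * t ** 2 + V0 * math.sin(THETA) * t + H

# Position after 2 seconds: about 198 ft downrange, 137 ft up.
print(round(x(2)), round(y(2)))

# Flight time: solve y(t) = 0 with the quadratic formula (larger root).
a, b, c = -0.5 * G, V0 * math.sin(THETA), H
t_ground = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(t_ground, 4))      # about 6.2173 seconds

# Height when the ball reaches a wall 400 ft away.
t_wall = 400 / (V0 * math.cos(THETA))
print(round(y(t_wall), 1))     # about 141.8 ft -- well over a 10 ft wall
```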
Media
Access the following online resource for additional instruction and practice with graphs of parametric equations.
Key Concepts
When there is a third variable, a third parameter on which \(x\) and \(y\) depend, parametric equations can be used.
To graph parametric equations by plotting points, make a table with three columns labeled \(t\), \(x(t)\), and \(y(t)\). Choose values for \(t\) in increasing order. Plot the last two columns for \(x\) and \(y\). See Example \(\PageIndex{1}\) and Example \(\PageIndex{2}\).
When graphing a parametric curve by plotting points, note the associated \(t\)-values and show arrows on the graph indicating the orientation of the curve. See Example \(\PageIndex{3}\) and Example \(\PageIndex{4}\).
Parametric equations allow the direction or the orientation of the curve to be shown on the graph. Equations that are not functions can be graphed and used in many applications involving motion. See Example \(\PageIndex{5}\).
Projectile motion depends on two parametric equations: \(x=(v_0 \cos \theta)t\) and \(y=−16t^2+(v_0 \sin \theta)t+h\). Initial velocity is symbolized as \(v_0\). \(\theta\) represents the initial angle of the object when thrown, and \(h\) represents the height at which the object is propelled.
Let me go a little further: I believe that there also exists a topologically mixing subshift on two symbols with no fully-supported invariant measure. I don't know of a reference for this result, but I think that a direct construction should be "not too hard", in the sense that a completely detailed proof would take up perhaps five pages or so.
Here is a very rough sketch. We want to find a sequence $x=(a_n)_{n=-\infty}^\infty \in \{1,2\}^{\mathbb{Z}}$ such that the orbit closure of $x$ is a subshift with the desired properties. Let us denote this orbit closure by $A$, and the shift by $T$. Our criteria mean the following:
1) Topological mixing. This means that for every pair of subwords $u$ and $v$ of $(a_n)_{n=-\infty}^\infty$, there exists an integer $N(u,v)>0$ such that for all $n \geq N(u,v)$, we can find a word $\omega$ of length $n$ such that $u\omega v$ is a subword of $(a_n)_{n=-\infty}^\infty$. Since every nonempty open set contains a cylinder I think it is not too hard to see that this implies topological mixing.
2) There is no fully-supported invariant probability measure on $A$. In particular, every invariant measure on the orbit closure gives zero measure to the cylinder set $\{(b_n)_{n=-\infty}^{\infty} \colon b_0=2\}$. To achieve this we want $(a_n)_{n=-\infty}^\infty$ to have the following property: there is a sequence of positive real numbers $(\varepsilon_n)_{n=1}^\infty$, which decreases to zero, such that for every $n \in \mathbb{N}$, every subword of $(a_n)_{n=-\infty}^\infty$ with length $n$ contains at most $n\varepsilon_n$ instances of the symbol $2$. If this property holds, then it is not difficult to see that \[\lim_{n\to\infty}\sup_{x \in A} \frac{1}{n}\sum_{i=0}^{n-1}\chi_{\{x_0=2\}}\left(T^ix\right)=0\]and by the Birkhoff ergodic theorem combined with the ergodic decomposition theorem, this forces every invariant measure on $A$ to give zero measure to the aforementioned cylinder set.
I am not really going to say much more, except that I think that these requirements can be seen to be compatible, in the sense that such a sequence $(a_n)_{n=-\infty}^\infty$ exists. To give a very rough idea of how it might be constructed, we could start like this. Begin by defining $a_0:=2$, and $a_{\pm1}:=1$. We will build the sequence outward from the origin in a sequence of inductive stages, roughly as follows. At each stage, we consider the set of all subwords of the finite sequence defined in the previous stage. We construct the new stage by appending copies of these subwords to the sequence defined in the previous stage in a careful manner. To ensure that (1) holds for the limit construction, we will need to include an awful lot of copies of every pair of subwords from the previous stage, separated by big lists of ones, once each for each pair of subwords and each different separating length in a certain range depending on the pair of subwords. On the other hand, to ensure that (2) holds, we need to make sure that every time a subword containing some twos is appended, it is padded out on both sides with big blocks of ones to a sufficient extent that we do not create a subword with too high a proportion of twos in it. The easiest way to achieve this is to ensure that $N(u,v)$ is always very, very, very large relative to the lengths of $u$ and $v$.
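A toy computation illustrates why the padding makes condition (2) plausible. The construction below is far simpler than the one sketched above (it ignores the mixing requirement entirely) and is only meant to show the maximum density of the symbol 2 decaying over longer windows:

```python
def padded_sequence(stages):
    """One-sided toy sequence: each 2 is separated by geometrically growing runs of 1s."""
    seq = [2]
    for k in range(1, stages + 1):
        seq += [1] * (4 ** k) + [2]
    return seq

def max_window_density(seq, n):
    """Largest proportion of 2s over all length-n windows of seq."""
    return max(seq[i:i + n].count(2) / n for i in range(len(seq) - n + 1))

seq = padded_sequence(6)
print([max_window_density(seq, n) for n in (10, 100, 1000)])  # decreasing
```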
I think that this program can be followed, although I haven't gone to the significant effort of trying to write down all of the details. If there is a general moral, then I think it is this: there are an awful lot of subshifts, and if a small collection of criteria on a possible subshift does not lead to an obvious contradiction, then the desired subshift can probably be constructed by this kind of combinatorial procedure. |
Higher Order Homogeneous Differential Equations - Constant Coefficients
Recall that if we have a second order linear homogeneous differential equation with constant coefficients $a, b, c \in \mathbb{R}$, that is:(1)
$$a \frac{d^{2}y}{dt^{2}} + b \frac{dy}{dt} + cy = 0$$
We saw that the roots $r_1$ and $r_2$ of the characteristic equation $ar^2 + br + c = 0$ determined the form of the solutions to our differential equation. In particular, if $r_1$ and $r_2$ were both real and distinct roots, then the solution to our differential equation was of the form $y = Ce^{r_1t} + De^{r_2t}$. If $r_1$ and $r_2$ were complex conjugate roots where $r_1 = \lambda + \mu i$ and $r_2 = \lambda - \mu i$, then the solution to our differential equation was of the form $y = Ce^{\lambda t}\cos (\mu t) + De^{\lambda t}\sin (\mu t)$. Lastly, if $r_1$ and $r_2$ were real nondistinct roots, that is $r_1 = r_2$ (the root of the characteristic equation has multiplicity two), then the solution to our differential equation was of the form $y = Ce^{r_1t} + Dte^{r_1t}$.
We will now begin to extend these ideas to higher order linear homogeneous differential equations with constant coefficients.
Consider the following $n^{\mathrm{th}}$ order linear homogeneous differential equation with constant coefficients $a_0, a_1, ..., a_n \in \mathbb{R}$:(2)
$$a_0 \frac{d^{n}y}{dt^{n}} + a_1 \frac{d^{n-1}y}{dt^{n-1}} + ... + a_{n-1} \frac{dy}{dt} + a_n y = 0$$
Like when we dealt with second order differential equations of this type, for specific values of $r$ we will have that $y = e^{rt}$ is a solution to our differential equation. Plugging this into the differential equation above, we have that:(3)
$$\left ( a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n \right ) e^{rt} = 0$$
We want the equation above to be equal to zero so that $y = e^{rt}$ is a solution to our $n^{\mathrm{th}}$ order linear homogeneous differential equation. This happens if and only if $e^{rt} = 0$ or $a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n = 0$. Note though that $e^{rt}$ is never zero, and so, more specifically, the roots of the polynomial $a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n$ will give us suitable values of $r$ for which $y = e^{rt}$ is a solution to our differential equation. This polynomial has an important name which we've already seen but redefine below.
Definition: The Characteristic Equation for the $n^{\mathrm{th}}$ order linear homogeneous differential equation $a_0 \frac{d^{n}y}{dt^{n}} + a_1 \frac{d^{n-1}y}{dt^{n-1}} + ... + a_{n-1} \frac{dy}{dt} + a_n y = 0$ with constant coefficients $a_0, a_1, ..., a_n \in \mathbb{R}$ is $a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n = 0$. Some people prefer the term "Auxiliary Equation" or "Characteristic Polynomial" to mean the same thing as "Characteristic Equation".
Note that if we have an $n^{\mathrm{th}}$ order linear homogeneous differential equation with constant coefficients, then the coefficient $a_0 \neq 0$ (since otherwise we would have a lower order differential equation). Thus, the characteristic polynomial $a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n$ will be of degree $n$. By the Fundamental Theorem of Algebra, this characteristic polynomial has $n$ roots, not necessarily all real and not necessarily all distinct; call them $r_1$, $r_2$, …, $r_n$. We will subsequently begin to look at the general solutions under the various cases in which these roots vary.
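As a quick numeric sketch of the ideas above (the third-order equation below is a made-up example, not one from the text), we can locate the integer roots of a characteristic polynomial and verify directly that $y = e^{rt}$ solves the corresponding differential equation:

```python
import math

# Hypothetical example (not from the text): y''' - 6y'' + 11y' - 6y = 0.
# Its characteristic polynomial is r^3 - 6r^2 + 11r - 6.
coeffs = [1, -6, 11, -6]          # a_0, a_1, a_2, a_3

def char_poly(r):
    # Horner evaluation of a_0 r^3 + a_1 r^2 + a_2 r + a_3
    return ((coeffs[0]*r + coeffs[1])*r + coeffs[2])*r + coeffs[3]

# Integer root candidates (rational root theorem: divisors of a_3 = -6)
roots = [r for r in range(-6, 7) if r != 0 and char_poly(r) == 0]
print(roots)                      # [1, 2, 3]

# Check that y = e^{rt} solves the ODE: for y = e^{rt}, y^{(k)} = r^k e^{rt},
# so the left-hand side of the ODE equals char_poly(r) * e^{rt}.
r, t = 2, 0.7
residual = char_poly(r) * math.exp(r*t)
print(residual)                   # 0.0
```

Since the roots $1, 2, 3$ are real and distinct, the general solution is $y = C_1e^{t} + C_2e^{2t} + C_3e^{3t}$, the direct analogue of the second order case.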
Here is an archetypal vignette for Newtonian mechanics. Two compound atoms called $\mathbf{A}$ and $\mathbf{B}$ interact with each other by exchanging another particle, $\sf{X}$, which is called the exchange particle. The interaction begins when $\mathbf{A}$ emits $\sf{X}$ at event $\mathbf{A}_{\it{i}}$, which is called the initial event of the interaction. This is written as
$\mathbf{A}_{\it{i}\sf{-1}} \to \mathbf{A}_{\it{i}} + \sf{X}_{\it{i}}$
The interaction concludes when $\mathbf{B}$ absorbs $\sf{X}$ at event $\mathbf{B}_{\it{f}}$, which is called the final event of the interaction:

$\mathbf{B}_{\it{f}} + \sf{X}_{\it{f}} \to \mathbf{B}_{\it{f}\sf{+1}}$
$\Psi \left( \bar{r}, t \right) ^{\mathbf{A}} = \left( \mathbf{A}_{1}, \mathbf{A}_{2} \ldots \mathbf{A}_{\it{i}} \ldots \mathbf{A}_{\it{f}} \ldots \right)$
$\Psi ^{\sf{X}} = \left( \sf{X}_{\it{i}} \ldots \sf{X}_{\it{f}} \right)$
$\Psi \left( \bar{r}, t \right) ^{\mathbf{B}} = \left( \mathbf{B}_{1}, \mathbf{B}_{2} \ldots \mathbf{B}_{\it{i}} \ldots \mathbf{B}_{\it{f}} \ldots \right)$
Since $\mathbf{A}$ and $\mathbf{B}$ are composed of atoms, we assume that they can be described by space-time events with a position $\bar{r}$ and a time of occurrence $t$. We do not assume that $\sf{X}$ is an atom; rather, we often take it to be a photon or a graviton. So we cannot always describe $\sf{X}$ using a trajectory, and the position of $\sf{X}$ is well-defined only for the initial and final events, where it is included as part of an atom. Overall, the interaction is characterized by the following quantities.
$\Delta \bar{p}^{ \mathbf{A}} = - \, \bar{p}^{ \sf{X}}$
$\Delta \bar{p}^{ \mathbf{B}} = \bar{p}^{ \sf{X}}$
$\begin{align} \Delta t = \frac { h \left( \, f-i \right) }{ E ^{\sf{X}} } \end{align}$
$\begin{align} \ell = \frac { h \left( \, f-i \right) }{ p ^{\sf{X}} } \end{align}$ |
The cross product or vector product is a binary operation on two vectors in three-dimensional space ($\mathbb{R}^3$) and is denoted by the symbol $\times$. Given two linearly independent vectors $a$ and $b$, the cross product, $a \times b$, is a vector that is perpendicular to both $a$ and $b$ and therefore normal to the plane containing them.
\[\LARGE a\times b=\begin{vmatrix} i & j & k\\ a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \end{vmatrix}\]
Here $a_{1}$, $a_{2}$, $a_{3}$ are the components of the vector $\overrightarrow{a}$, and $b_{1}$, $b_{2}$, $b_{3}$ are the components of $\overrightarrow{b}$. The magnitude of the cross product is given by,
\[\LARGE \left | a\times b \right |=\left | a \right |\left | b \right |\sin \theta\]
The cross product formula is used to determine the cross product or the angle between any two vectors, depending on the given problem.
Solved Examples

Question 1: Calculate the cross product of the vectors a = <3, 4, 7> and b = <4, 9, 2>.

Solution:
The cross product is given by
a $\times$ b = $\begin{vmatrix} i & j & k \\3 & 4 & 7 \\4 & 9 & 2 \end{vmatrix}$
a $\times$ b = $i(4\times 2-9\times 7)-j(3 \times 2-4\times 7)+k(3\times 9-4\times 4)$

a $\times$ b = $i(8-63)-j(6-28)+k(27-16)$

a $\times$ b = $-55i+22j+11k$
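As an independent check of the arithmetic, here is a short Python sketch implementing the determinant expansion component by component:

```python
def cross(a, b):
    """Cross product of two 3-vectors via the determinant expansion."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2*b3 - a3*b2,    # i component
            a3*b1 - a1*b3,    # j component (minus sign already folded in)
            a1*b2 - a2*b1)    # k component

a, b = (3, 4, 7), (4, 9, 2)
c = cross(a, b)
print(c)                      # (-55, 22, 11)

# Perpendicularity check: a . (a x b) = 0 and b . (a x b) = 0
dot = lambda u, v: sum(x*y for x, y in zip(u, v))
print(dot(a, c), dot(b, c))   # 0 0
```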
I am working on a physics research project for school and I have run into some trouble working with Mathematica. I am a fairly inexperienced Mathematica user, so any help would very much be appreciated.
I need to find the roots of the transcendental equation $$\zeta_n \tan(\zeta_n) - \sqrt{R^2-\zeta^2_n}=0$$ and then collect them into a list, $\{\zeta_n\}$, which I can sum over.
FindRoot works but only finds one root at a time. For example $R^2=18$ gives
FindRoot[Sqrt[18. - zeta^2] - zeta Tan[zeta] == 0, {zeta, 3}]
{zeta -> 3.66808}

From a plot of the function I know there should be two roots for this particular value of $R$; however, FindRoot gives only one.
I have had no luck with NSolve or Solve either.
NSolve[xi[zeta, Erg4]- zeta*Tan[zeta] == 0, zeta]
NSolve[Sqrt[18. - zeta^2] - zeta Tan[zeta] == 0, zeta]
Also, once I have succeeded in finding all the roots, how does one put them into an indexed set $\{\zeta_n\}$?
Electronic Communications in Probability, Volume 22 (2017), paper no. 2, 6 pp.

A heat flow approach to the Godbillon-Vey class

Abstract
We give a heat flow derivation for the Godbillon-Vey class. In particular, we prove that if $(M,g)$ is a compact Riemannian manifold with a codimension 1 foliation $\mathcal{F}$, defined by an integrable 1-form $\omega$ such that $||\omega ||=1$, then the Godbillon-Vey class can be written as $[-\mathcal{A} \omega \wedge d\omega ]_{dR}$ for an operator $\mathcal{A} :\Omega ^*(M)\rightarrow \Omega ^*(M)$ induced by the heat flow.
Article information

Source: Electron. Commun. Probab., Volume 22 (2017), paper no. 2, 6 pp.
Dates: Received: 2 October 2014; Accepted: 28 June 2015; First available in Project Euclid: 5 January 2017
Permanent link to this document: https://projecteuclid.org/euclid.ecp/1483585771
Digital Object Identifier: doi:10.1214/16-ECP3836
Mathematical Reviews number (MathSciNet): MR3607797
Zentralblatt MATH identifier: 1358.58016
Subjects: Primary: 58J65 (Diffusion processes and stochastic analysis on manifolds), 53C12 (Foliations, differential geometric aspects); Secondary: 60H30 (Applications of stochastic analysis), 60J60 (Diffusion processes)

Citation
Ledesma, Diego S. A heat flow approach to the Godbillon-Vey class. Electron. Commun. Probab. 22 (2017), paper no. 2, 6 pp. doi:10.1214/16-ECP3836. https://projecteuclid.org/euclid.ecp/1483585771 |
How can I evaluate the limit superior of a sequence?
How can I evaluate the limit superior of a sequence? I couldn't find anything related to this in the documentation.
EDIT: the limit superior of a sequence $(x_n)$ is defined as
$$\limsup x_n=\lim_{n\to\infty} \sup \{x_k:k\ge n\} =\inf\{\sup \{x_k:k\ge n\}: n\in \mathbb N \} $$
where the first definition is the more interesting computationally.
It seems that, computationally, evaluating the limit superior is hard because it is done via brute force. This happens not only in Sage but also in other CASs. It seems there is still a long way to go in the development of computer algebra systems.
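As a rough illustration of that brute-force approach via the first definition above, here is a plain Python sketch (not a Sage built-in) that approximates the tail suprema $\sup\{x_k : k \ge n\}$ for a sample sequence whose limit superior is 1:

```python
# limsup x_n = inf over n of sup{x_k : k >= n}; approximate each tail sup
# by scanning a finite horizon of terms (brute force).
def tail_sup(x, n, horizon=100000):
    return max(x(k) for k in range(n, n + horizon))

x = lambda n: (-1)**n * (1 + 1/n)   # limsup = 1, liminf = -1

for n in (10, 1000, 100000):
    print(n, tail_sup(x, n))        # decreases toward the limsup, 1
```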
Observation of New Properties of Secondary Cosmic Rays Lithium, Beryllium and Boron
Lithium, beryllium, and boron nuclei in cosmic rays are thought to be produced by the collisions of nuclei with the interstellar medium. They are called secondary cosmic rays. Precise knowledge of their spectra in the GV-TV rigidity region provides important information on the propagation of cosmic rays.
AMS published the precision measurement of the lithium, beryllium, and boron fluxes in cosmic rays in the rigidity range from 1.9 GV to 3.3 TV. This measurement is based on 1.9 million lithium, 0.9 million beryllium, and 2.6 million boron nuclei collected by AMS during the first 5 years of operation aboard the International Space Station (ISS). The total error on each of the fluxes is 3%–4% at 100 GV. Figure 1 shows the fluxes of lithium, beryllium and boron measured by AMS.
As seen, the Li and B fluxes have an identical rigidity dependence above ∼7 GV and all three secondary fluxes have an identical rigidity dependence above ∼30 GV with the Li/Be flux ratio of $2.0 \pm 0.1$, as shown in Figure 2. Note that the different rigidity dependence of the Be flux and Li fluxes below 30 GV is due to the significant presence of the radioactive $\mathrm{^{10}Be}$ isotope, which has a half-life of 1.4 million years.
Precise measurements of primary cosmic rays helium, carbon, and oxygen, by AMS, have shown a hardening of all their spectra above 200 GV. In addition, above 60 GV, the spectra of He, C, and O were found to have an identical rigidity dependence [M. Aguilar et al., Phys. Rev. Lett. 119, 251101 (2017)].
The detailed knowledge of lithium, beryllium, and boron flux rigidity dependence is important to study the origin of the hardening in cosmic ray fluxes.
There are many theoretical models describing the behavior of cosmic rays. For example, if the hardening in cosmic rays is related to the injected spectra at their source, then similar hardening is expected both for secondary and primary cosmic rays [S. Thoudam and J. R. Hörandel, Mon. Not. R. Astron. Soc. 435, 2532 (2013)]. However, if the hardening is related to propagation properties in the Galaxy then a stronger hardening is expected for the secondary with respect to the primary cosmic rays [A. E. Vladimirov, G. Jóhannesson, I. V. Moskalenko, and T. A. Porter, Astrophys. J. 752, 68 (2012)]. The theoretical models have their limitations, as none of them predicted the AMS observed spectral behavior of the primary cosmic rays He, C, and O nor the secondary cosmic rays Li, Be, and B.
To examine the rigidity dependence of the secondary fluxes, detailed variations of the flux spectral indices with rigidity were obtained in a model-independent way. The lithium, beryllium and boron flux $\Phi$ spectral indices $\gamma$ were calculated from $\gamma = d[\log(\Phi)]/d[\log(R)]$ over rigidity intervals bounded by 7.09, 12.0, 16.6, 22.8, 41.9, 60.3, 192, and 3300 GV. The results are presented in Figure 3 together with the spectral indices of helium, carbon, and oxygen [M. Aguilar et al., Phys. Rev. Lett. 119, 251101 (2017)]. As seen, the magnitude and the rigidity dependence of the lithium, beryllium, and boron spectral indices are nearly identical, but distinctly different from the rigidity dependence of helium, carbon, and oxygen. In addition, above ∼200 GV, Li, Be, and B all harden more than He, C, and O.
This observed behavior is completely unexpected.
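The model-independent spectral-index calculation can be sketched in a few lines; the flux below is a synthetic single power law with an assumed index of −2.7, for illustration only (not AMS data):

```python
import math

# gamma = d[log(Phi)] / d[log(R)], estimated from two (R, Phi) points
def spectral_index(R1, phi1, R2, phi2):
    return (math.log(phi2) - math.log(phi1)) / (math.log(R2) - math.log(R1))

# Synthetic flux Phi ~ R^-2.7 (illustrative normalization)
phi = lambda R: 5.0e3 * R**-2.7

# Over the interval bounded by 60.3 and 192 GV the estimate recovers -2.7
print(spectral_index(60.3, phi(60.3), 192.0, phi(192.0)))
```

For real, binned flux measurements the same log-log slope would be fit over each rigidity interval rather than taken from two points.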
To examine the difference between the rigidity dependence of primary and secondary cosmic rays in detail, the ratios of the lithium, beryllium, and boron fluxes to the carbon and oxygen fluxes were computed using the corresponding flux values. The detailed variations with rigidity of the spectral indices $\Delta$ of the secondary to primary flux ratios $\Phi_{S}/\Phi_{P}$ were obtained in a model independent way using $\Delta = d[\log(\Phi_{S}/\Phi_{P})]/d[\log(R)]$ over rigidity intervals [60.3 – 192] and [192 – 3300] GV and shown in Figure 4. Above ∼200 GV these spectral indices exhibit hardening of $0.13 \pm 0.03$ on average. This shows that at high rigidities the secondary cosmic rays harden more than the primary cosmic rays. This additional hardening of secondary cosmic rays is consistent with expectations when the hardening is due to the propagation in the Galaxy. This is a new observation.
Figure 5 shows a comparison of the secondary cosmic ray fluxes Li, Be and B with the AMS primary cosmic ray fluxes He, C and O. As seen, the three secondary fluxes have an identical rigidity dependence above 30 GV, as do the three primary fluxes above 60 GV. The rigidity dependences of primary cosmic ray fluxes and of secondary cosmic ray fluxes are distinctly different.
In conclusion, the precise, high statistics measurements of the lithium, beryllium, and boron fluxes from 1.9 GV to 3.3 TV show that the Li and B fluxes have identical rigidity dependence above 7 GV and all three fluxes have identical rigidity dependence above 30 GV with the Li/Be flux ratio of $2.0 \pm 0.1$. The three fluxes deviate from a single power law above 200 GV in an identical way. As seen in Figure 5, this behavior of secondary cosmic rays has also been observed in primary cosmic rays He, C, and O but the rigidity dependences of primary cosmic rays and of secondary cosmic rays are distinctly different. In particular, above 200 GV, the spectral indices of secondary cosmic rays harden by an average of $0.13 \pm 0.03$ more than the primaries. These are new properties of high energy cosmic rays. |
Continuity of global attractors for a class of non local evolution equations
1.
Instituto de Matemática e Estatística-Universidade de São Paulo, Rua do Matão, 1010, Cidade Universitária, CEP 05508-090, São Paulo-SP, Brazil
2.
Unidade Acadêmica de Matemática e Estatística UAME/CCT/UFCG, Avenida Aprígio Veloso, 882, Bairro Universitário, Caixa Postal: 10.044, CEP 58109-970, Campina Grande-PB, Brazil
We prove that the global attractors of the family of non local evolution equations

$\frac{\partial m(r,t)}{\partial t}=-m(r,t)+ g(\beta J \ast m(r,t)+ \beta h),\quad h,\ \beta \geq 0,$
are continuous with respect to the parameters $h$ and $\beta$ if one assumes a property implying normal hyperbolicity for their (families of) equilibria.
Mathematics Subject Classification: Primary: 34G20; Secondary: 47H1. Citation: Antônio Luiz Pereira, Severino Horácio da Silva. Continuity of global attractors for a class of non local evolution equations. Discrete & Continuous Dynamical Systems - A, 2010, 26 (3): 1073-1100. doi: 10.3934/dcds.2010.26.1073
Towards Understanding the Origin of Cosmic-Ray Positrons
Studies of light cosmic ray antimatter species, such as positrons, antiprotons, and antideuterons, are crucial for the understanding of new phenomena in the cosmos, since the yield of these particles from cosmic ray collisions is small. Our data published in 2013 and 2014 have generated widespread interest and discussions of the observed excess of high energy positrons. The explanations of these results included three classes of models: annihilation of dark matter particles, acceleration of positrons to high energies in astrophysical objects, and production of high energy positrons in the interactions of cosmic ray nuclei with interstellar gas. Most of these explanations differ in their predictions for the behavior of cosmic ray positrons at high energies. With the high statistics data sample of cosmic ray positrons, we performed new precise measurements of positrons up to 1 TeV and analyzed the changing behavior of the cosmic ray positron flux. These experimental results are crucial for understanding the origin of high energy positrons in the cosmos. The measurement (see Figure 1) is based on 1.9 million positrons collected by AMS from May 19, 2011 to November 12, 2017. This corresponds to a factor of three increase in statistics and a factor of two increase in energy range compared to our earlier results [PRL 113, 121102 (2014)]. Note that with precise knowledge of the detector acceptance for electrons and positrons, the positron flux, $\Phi_{e^{+}}$, is more sensitive to new physics phenomena than the positron fraction, $\Phi_{e^{+}}/(\Phi_{e^{+}}+\Phi_{e^{-}})$, since it is independent of the energy dependence of electrons.
The positron flux exhibits complex energy dependence. Its distinctive properties are:
- a significant excess starting from $25.2 \pm 1.8$ GeV compared to the lower-energy, power-law trend;
- a sharp drop-off above $284_{-64}^{+91}$ GeV;
- in the entire energy range, a flux well described by the sum of a term associated with the positrons produced in the collision of cosmic rays, which dominates at low energies, and a new source term of positrons, which dominates at high energies; and
- a finite energy cutoff of the source term of $E_{s} = 810_{-180}^{+310}$ GeV, established with a significance of more than 4σ.
These experimental data on cosmic ray positrons show that, at high energies, they predominantly originate either from dark matter annihilation or from other astrophysical sources.
Figures 1 to 11 summarize this previously unobserved behavior. $\tilde{E}$ is the spectrally weighted mean energy for a flux proportional to $E^{-3}$.
In-depth studies of the detector performance have been performed for this analysis. These studies include the tracker resolution at rigidities close to the maximum detectable rigidity of 2 TV, charge confusion studies, track finding efficiency improvements, reconstruction of electromagnetic showers in the TeV energy range, and proton rejection with the electromagnetic calorimeter. In addition to these studies, exhaustive verifications of the results were performed including different analysis methods and by tightening the selection criteria that allow us to achieve a high purity positron sample.
An example of the stability of the results is demonstrated with an analysis, which aims at a higher signal/background ratio using a tighter cut on the ECAL information. In this analysis the proton rejection is increased by a factor ~3 compared to the published analysis. This tight-cut analysis has a signal efficiency of 65% instead of the nominal 90%. The results of this tight selection analysis do not alter the flux value as presented in Figure 2 for the last five energy bins.
Figure 3 shows the AMS result together with earlier experiments: PAMELA, Fermi-LAT, MASS, CAPRICE, AMS-01, and HEAT. The new AMS data significantly extend the measurements into the uncharted high energy region.
To examine the changing behavior of the positron spectrum highlighted in Figure 1 by the vertical color bands, we use a power law approximation:
\begin{equation}
\Phi_{e^{+}}(E)= \begin{cases} C(E/55.58\mbox{ GeV})^{\gamma}, & E \leq E_{0}; \\ C(E/55.58\mbox{ GeV})^{\gamma}(E/E_{0})^{\Delta\gamma}, & E > E_{0}. \end{cases} \label{eq:1} \end{equation}
Fits to data are performed in two energy ranges: [7.10−55.58] GeV and [55.58 − 1000] GeV. The first range corresponds to the increase of the spectrum (hardening), while the second range corresponds to the spectrum decrease (softening). The results are presented in Figure 4. The fit in the range [7.10−55.58] GeV yields $E_{0} = 25.2 \pm 1.8$ GeV for the energy where the spectrum increases. The significance of this increase is established at more than 6σ. The fit in the energy range [55.58 − 1000] GeV yields $E_{0} = 284_{-64}^{+91}$ GeV for the energy at which the spectrum begins to decrease. The significance of this decrease is established at more than 3σ.
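The piecewise power law of Eq. (1) can be sketched as follows. Only the break energy $E_0 = 25.2$ GeV and the reference energy 55.58 GeV are taken from the text; the normalization $C$, index $\gamma$, and $\Delta\gamma$ values are illustrative placeholders, not the fitted values:

```python
# Eq. (1): a power law in E/55.58 GeV, with an extra factor (E/E0)^dgamma
# switched on above the break energy E0. Parameter values are placeholders.
def phi_eq1(E, C=1.0, gamma=-2.8, E0=25.2, dgamma=0.5, Eref=55.58):
    base = C * (E / Eref)**gamma
    return base if E <= E0 else base * (E / E0)**dgamma

# The two branches agree at E = E0, so the flux is continuous at the break
print(phi_eq1(25.2), phi_eq1(25.2 + 1e-9))
```

In a real fit, $C$, $\gamma$, $\Delta\gamma$, and $E_0$ would be free parameters adjusted to the measured flux points in each energy range.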
The complex behavior of the positron flux (as illustrated in Figure 4) is consistent with the existence of a new source of high energy positrons with a characteristic cutoff energy, whether of dark matter or other astrophysical origin. It is not consistent with the exclusive secondary production of positrons in collisions of cosmic rays.
There are many models predicting the secondary positrons. They differ in the underlying assumptions and the predictions for the positron flux near Earth. Among these models, GALPROP is widely regarded as a standard framework for prediction of fluxes of secondaries based on the data from accelerator experiments and from cosmic-ray studies. In Figure 5 two GALPROP predictions are shown. As seen, even in the framework of a single model (i.e. GALPROP), there is a wide range of the predictions for the secondary positron flux. However, independent of the input parameter choice, GALPROP predictions show a peak in the spectrum of secondary positrons below 10 GeV and then steady decrease with the increasing energy.
The model predictions depend heavily on the choice of parameters and on the cosmic ray propagation scenarios. This is also illustrated with another model (T. Delahaye et al. A&A 524, A51 (2010)) shown in Figure 6. As seen, in this model the positron flux predictions are different by a factor of 2 to 10 in the energy range from 1 GeV to 1 TeV, but none of these predictions is consistent with the AMS measurements.
As seen, the observed rise and fall of the spectrum at high energies is not related to the models of secondary production.
The accuracy of the AMS data allows for a detailed study of the properties of the new source of positrons up to 1 TeV. We present the analysis of the positron flux using the simplest model, in which the positron flux is parameterized as the sum of a diffuse term and a source term:
\begin{equation}
\label{eq:2} \Phi_{e^{+}}(E) = \frac{ E^{2} }{ \hat{E} ^{2}}\big[ C_{d}( \hat{E} /E_{1})^{\gamma_{d}} + C_{s}( \hat{E} /E_{2})^{\gamma_{s}} \mbox{exp} (- \hat{E} /E_{s})\big] \end{equation}
The diffuse term describes the low energy part of the flux dominated by the positrons produced in the collisions of ordinary cosmic rays with the interstellar gas. It is characterized by a normalization factor $C_{d}$ and a spectral index $\gamma_{d}$. The source term has an exponential cutoff, which describes the high energy part of the flux dominated by a source. It is characterized by a cutoff energy $E_{s}$, a normalization factor $C_{s}$, and a spectral index $\gamma_{s}$. In order to account for solar modulation effects, the force-field approximation is used, with the energy of particles in the interstellar space $\hat{E}=E+\varphi_{e^{+}}$, where the effective solar potential $\varphi_{e^{+}}$ accounts for the solar effects. The fit to the measured flux yields the inverse cutoff energy $1/E_{s}=1.23\pm 0.34\mbox{ TeV}^{-1}$ corresponding to $E_{s}=810^{+310}_{-180}$ GeV and $\chi^{2}/\mathrm{d.o.f.} = 50/68$. The result of the fit is presented in Figure 7.
As seen in Figure 7, the diffuse term dominates at low energies and then gradually vanishes with increasing energy. The source term dominates the positron spectrum at high energies. It is the contribution of the source term that leads to the observed excess of the positron flux above $25.2 \pm 1.8$ GeV. The drop-off of the flux above $284_{-64}^{+91}$ GeV is well described by the sharp exponential cutoff of the source term.
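A minimal sketch of the diffuse-plus-source parameterization in Eq. (2): apart from the cutoff $E_s = 810$ GeV quoted from the fit, every parameter value below is an illustrative placeholder, and $E_1$, $E_2$ stand in for the fixed reference energies:

```python
import math

# Eq. (2) sketch: diffuse term + source term with exponential cutoff.
# All numeric defaults except Es are placeholders, not the fitted values.
def phi_eq2(E, Cd=6.5e-2, gd=-4.0, Cs=6.8e-5, gs=-2.6,
            Es=810.0, phi_sol=1.1, E1=7.0, E2=60.0):
    Ehat = E + phi_sol                      # force-field solar modulation
    diffuse = Cd * (Ehat / E1)**gd          # dominates at low energies
    source = Cs * (Ehat / E2)**gs * math.exp(-Ehat / Es)  # cutoff at Es
    return (E**2 / Ehat**2) * (diffuse + source)

# The diffuse term falls faster, so the source term takes over at high energy
for E in (10.0, 100.0, 500.0):
    print(E, phi_eq2(E))
```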
To study the significance of the cutoff energy $E_{s}$ we varied all six fit parameters to find the regions in 6D parameter space corresponding to the confidence levels from 1 to 5σ with a step of 0.01σ. As an example, Figure 8 shows projections of the 6D regions of 1σ (black line, 68.26% CL), 2σ (green line, 95.54% CL), 3σ (blue line, 99.74% CL), and 4σ (red line, 99.99% CL) onto the plane of parameters $1/E_{s}-C_{s}$. Detailed analysis shows that a point where the parameter $1/E_{s}$ reaches 0 corresponds to the confidence level of 4.07σ, i.e., the significance of the source term energy cutoff is established at more than 4σ, or at the 99.99% CL.
Analysis of the individual components, namely the diffuse term and the source term, is presented in Figures 9 and 10. As seen, the positron diffuse term vanishes at high energies and can be described by the collisions of cosmic rays. The source term, with an exponential cutoff, dominates at high energies.
The experimental data on cosmic ray positrons show that, at high energies, positrons predominantly originate either from dark matter annihilation or from other astrophysical sources. |
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, ..., x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1}-2$ non-constant polynomials in $R$ dividing it.
But, for $n=2$, I can't find any non-constant divisors of $f(x,y)=xy$ other than $x$, $y$, $xy$
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed. No, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second order ODEs look the way they do. I have a question mark regarding the linked answer.

Where does the term $e^{(r_1-r_2)x}$ come from?
It seems like it is taken out of the blue, but it yields the desired result. |
Precision Measurement of Cosmic-Ray Nitrogen and its Primary and Secondary Components
Nitrogen nuclei in cosmic rays are thought to be produced both in astrophysical sources, mostly via the CNO cycle [H. A. Bethe, Phys. Rev. 55, 434 (1939)], and by the collisions of heavier nuclei with the interstellar medium. Therefore, the nitrogen flux $\Phi_{\rm N}$ is expected to contain both primary and secondary components. Precise knowledge of the primary component of cosmic nitrogen provides important insights into the details of nitrogen production in astrophysical sources, while precise knowledge of the secondary component of the cosmic nitrogen provides insights into the details of propagation processes of cosmic rays in the Galaxy.
Over the last 50 years, only a few experiments have measured the nitrogen flux. Typically, these measurements have errors larger than 40%–50% above 100 GV.
AMS published the precise measurement of the nitrogen flux in cosmic rays in the rigidity range from 2.2 GV to 3.3 TV, based on 2.2 million nitrogen nuclei collected by AMS during the first five years of operation. The total flux error is 4% at 100 GV. The flux is shown in Figure 1a. The AMS measurement as a function of kinetic energy is presented in Figure 1b in comparison with earlier measurements and with the predictions of the cosmic ray propagation model GALPROP. As seen, the GALPROP model does not explain our data.
To determine the primary and secondary components in the nitrogen flux, we have chosen the rigidity dependence of the oxygen flux as characteristic of primary fluxes and the rigidity dependence of the boron flux as characteristic of secondary fluxes. The secondary component of the oxygen flux is the lowest among He, C, and O. The boron flux has no primary contribution and is mostly produced from the interactions of primary cosmic rays C and O with interstellar matter. To obtain the fractions of the primary $\Phi^{P}_{\rm N}$ and secondary $\Phi^{S}_{\rm N}$ components in the nitrogen flux $\Phi_{\rm N}= \Phi^{P}_{\rm N}+\Phi^{S}_{\rm N}$, a fit of $\Phi_{\rm N}$ to the weighted sum of a characteristic primary cosmic ray flux, namely, oxygen $\Phi_{\rm O}$[Observation of identical behavior of He, C and Oxygen Cosmic Rays at High Rigidities], and of a characteristic secondary cosmic ray flux, namely, boron $\Phi_{\rm B}$ [Observation of New Properties of Secondary Cosmic Rays Lithium, Beryllium and Boron], was performed over the entire rigidity range, as shown in Figure 2. The fit yields $\Phi^{P}_{\rm N} = (0.090\pm0.002) \times \Phi_{\rm O}$ and $\Phi^{S}_{\rm N} = (0.62\pm0.02) \times \Phi_{\rm B}$ with a $\chi^{2}/\mathrm{d.o.f.} = 51/64$.
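The two-component decomposition amounts to a two-parameter linear least-squares fit, sketched below with synthetic flux arrays constructed from the published coefficients 0.09 and 0.62 so that the fit simply recovers them (the numbers are not real flux values):

```python
# Fit phi_N = a*phi_O + b*phi_B by solving the 2x2 normal equations.
def fit_two_components(phi_N, phi_O, phi_B):
    Soo = sum(o*o for o in phi_O)
    Sbb = sum(b*b for b in phi_B)
    Sob = sum(o*b for o, b in zip(phi_O, phi_B))
    Sno = sum(n*o for n, o in zip(phi_N, phi_O))
    Snb = sum(n*b for n, b in zip(phi_N, phi_B))
    det = Soo*Sbb - Sob*Sob
    return ((Sno*Sbb - Snb*Sob) / det,   # a: primary (oxygen-like) weight
            (Snb*Soo - Sno*Sob) / det)   # b: secondary (boron-like) weight

phi_O = [100.0, 50.0, 20.0, 8.0]   # primary-like rigidity dependence
phi_B = [40.0, 15.0, 4.0, 1.0]     # secondary-like: falls faster
phi_N = [0.09*o + 0.62*b for o, b in zip(phi_O, phi_B)]
print(fit_two_components(phi_N, phi_O, phi_B))
```

The actual analysis fits measured fluxes with uncertainties over the full rigidity range, but the structure of the fit is the same.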
Figure 3 shows a) the nitrogen to oxygen (N/O) flux ratio and b) nitrogen to boron (N/B) flux ratio as function of rigidity. As seen, the contribution of the secondary component in the nitrogen flux decreases, and the contribution of the primary component correspondingly increases, with rigidity.
The observation that the nitrogen flux can be fit over a wide rigidity range as the simple linear combination of primary and secondary fluxes is a new and important result, which permits the determination of the N/O abundance ratio at the source without the need to consider the Galactic propagation of cosmic rays. The measured value of N/O abundance at the source of $0.090\pm0.002$ is to be compared to the measured N/O abundance in the Solar system of $0.135_{-0.047}^{+0.051}$ [Synthesis of Elements in Stars, K. Lodders, Springer-Verlag Berlin Heidelberg p. 379-417 (2010)].
Finally, Figure 4 shows the three distinctly different rigidity dependencies above 30 GV of the primary He, C, and O cosmic ray fluxes, the secondary Li, Be, and B fluxes, and the N flux. As seen, the three secondary fluxes have identical rigidity dependence above 30 GV as do the three primary fluxes above 60 GV, but they are different from each other. The rigidity dependence of the nitrogen flux is different from the dependence of both the primary fluxes and the dependence of the secondary fluxes.
In conclusion, a precision measurement of the nitrogen flux in cosmic rays from 2.2 GV to 3.3 TV shows that, remarkably, the nitrogen flux is well described over the entire rigidity range by the sum of a primary flux equal to 9% of the oxygen flux and a secondary flux equal to 62% of the boron flux.
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time.
I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what’s on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view random questions on a variety of topics or to download paper practice tests.
MTEL General Curriculum Mathematics Practice
Question 1
There are 15 students for every teacher. Let t represent the number of teachers and let s represent the number of students. Which of the following equations is correct?
\( \large t=s+15\)
Hint:
When there are 2 teachers, how many students should there be? Do those values satisfy this equation?
\( \large s=t+15\)
Hint:
When there are 2 teachers, how many students should there be? Do those values satisfy this equation?
\( \large t=15s\)
Hint:
This is a really easy mistake to make, which comes from transcribing directly from English: "1 teacher equals 15 students." To see that it's wrong, plug in s=2; do you really need 30 teachers for 2 students? To avoid this mistake, insert the word "number": "Number of teachers equals 15 times number of students" is more clearly problematic.
\( \large s=15t\)
Question 2
Use the graph below to answer the question that follows. Which of the following is a correct equation for the graph of the line depicted above?
\( \large y=-\dfrac{1}{2}x+2\)
Hint:
The slope is -1/2 and the y-intercept is 2. You can also try just plugging in points. For example, this is the only choice that gives y=1 when x=2.
\( \large 4x=2y\)
Hint:
This line goes through (0,0); the graph above does not.
\( \large y=x+2\)
Hint:
The line pictured has negative slope.
\( \large y=-x+2\)
Hint:
Try plugging x=4 into this equation and see if that point is on the graph above.
Question 3
A map has a scale of 3 inches = 100 miles. Cities A and B are 753 miles apart. Let d be the distance between the two cities on the map. Which of the following is not correct?
\( \large \dfrac{3}{100}=\dfrac{d}{753}\)
Hint:
Units on both side are inches/mile, and both numerators and denominators correspond -- this one is correct.
\( \large \dfrac{3}{100}=\dfrac{753}{d}\)
Hint:
Unit on the left is inches per mile, and on the right is miles per inch. The proportion is set up incorrectly (which is what we wanted). Another strategy is to notice that one of A or B has to be the answer because they cannot both be correct proportions. Then check that cross multiplying on A gives part D, so B is the one that is different from the other 3.
\( \large \dfrac{3}{d}=\dfrac{100}{753}\)
Hint:
Unitless on each side, as inches cancel on the left and miles on the right. Numerators correspond to the map, and denominators to the real life distances -- this one is correct.
\( \large 100d=3\cdot 753\)
Hint:
This is equivalent to part A.
Question 4
Which of the following is the equation of a linear function?
\( \large y={{x}^{2}}+2x+7\)
Hint:
This is a quadratic function.
\( \large y={{2}^{x}}\)
Hint:
This is an exponential function.
\( \large y=\dfrac{15}{x}\)
Hint:
This is an inverse function.
\( \large y=x+(x+4)\)
Hint:
This is a linear function, y=2x+4; its graph is a straight line with slope 2 and y-intercept 4.
Question 5
The Americans with Disabilities Act (ADA) regulations state that the maximum slope for a wheelchair ramp in new construction is 1:12, although slopes between 1:16 and 1:20 are preferred. The maximum rise for any run is 30 inches. The graph below shows the rises and runs of four different wheelchair ramps. Which ramp is in compliance with the ADA regulations for new construction?
A
Hint:
Rise is more than 30 inches.
B
Hint:
Run is almost 24 feet, so rise can be almost 2 feet.
C
Hint:
Run is 12 feet, so rise can be at most 1 foot.
D
Hint:
Slope is 1:10 -- too steep.
Question 6
A publisher prints a series of books with covers made of identical material and using the same thickness of paper for each page. The covers of the book together are 0.4 cm thick, and 125 pieces of the paper used together are 1 cm thick. The publisher uses a linear function to determine the total thickness, T(n) of a book made with n sheets of paper. What are the slope and intercept of T(n)?
Intercept = 0.4 cm, Slope = 125 cm/page
Hint:
This would mean that each page of the book was 125 cm thick.
Intercept =0.4 cm, Slope = \(\dfrac{1}{125}\)cm/page
Hint:
The intercept is how thick the book would be with no pages in it. The slope is how much 1 extra page adds to the thickness of the book.
Intercept = 125 cm, Slope = 0.4 cm
Hint:
This would mean that with no pages in the book, it would be 125 cm thick.
Intercept = \(\dfrac{1}{125}\)cm, Slope = 0.4 pages/cm
Hint:
This would mean that each new page of the book made it 0.4 cm thicker.
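The linear model behind this question is easy to check directly; a quick sketch using the numbers from the question (0.4 cm for the covers, 125 sheets per cm of paper):

```python
# T(n) = intercept + slope * n: covers contribute 0.4 cm,
# and 125 sheets per cm means each sheet adds 1/125 cm.
def book_thickness(n_sheets):
    intercept = 0.4          # cm, thickness of the two covers alone
    slope = 1 / 125          # cm per sheet of paper
    return intercept + slope * n_sheets

print(book_thickness(0))     # covers only: 0.4 cm
print(book_thickness(250))   # 250 sheets add 250/125 = 2 cm of paper
```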
Question 7
A family went on a long car trip. Below is a graph of how far they had driven at each hour. Which of the following is closest to their average speed driving on the trip?
\( \large d=20t\)
Hint:
Try plugging t=7 into the equation, and see how it matches the graph.
\( \large d=30t\)
Hint:
Try plugging t=7 into the equation, and see how it matches the graph.
\( \large d=40t\)
\( \large d=50t\)
Hint:
Try plugging t=7 into the equation, and see how it matches the graph.
Question 8
In March of 2012, 1 dollar was worth the same as 0.761 Euros, and 1 dollar was also worth the same as 83.03 Japanese Yen. Which of the expressions below gives the number of Yen that are worth 1 Euro?
\( \large {83}.0{3}\cdot 0.{761}\)
Hint:
This equation gives less than the number of yen per dollar, but 1 Euro is worth more than 1 dollar.
\( \large \dfrac{0.{761}}{{83}.0{3}}\)
Hint:
Number is way too small.
\( \large \dfrac{{83}.0{3}}{0.{761}}\)
Hint:
One strategy here is to use easier numbers, say 1 dollar = .5 Euros and 100 yen, then 1 Euro would be 200 Yen (change the numbers in the equations and see what works). Another is to use dimensional analysis: we want # yen per Euro, or yen/Euro = yen/dollar \(\times\) dollar/Euro = \(83.03 \times \dfrac {1}{0.761}\)
\( \large \dfrac{1}{0.{761}}\cdot \dfrac{1}{{83}.0{3}}\)
Hint:
Number is way too small.
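The dimensional-analysis hint above can be checked with the rates given in the question:

```python
# March 2012 rates from the question
yen_per_dollar = 83.03
euro_per_dollar = 0.761

# yen/euro = (yen/dollar) * (dollar/euro), and dollar/euro = 1 / (euro/dollar)
yen_per_euro = yen_per_dollar / euro_per_dollar
print(round(yen_per_euro, 2))   # roughly 109 yen per euro
```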
Question 9
Which of the lines depicted below is a graph of \( \large y=2x-5\)?
a
Hint:
The slope of line a is negative.
b
Hint:
Wrong slope and wrong intercept.
c
Hint:
The intercept of line c is positive.
d
Hint:
Slope is 2 -- for every increase of 1 in x, y increases by 2. Intercept is -5 -- the point (0,-5) is on the line.
Question 10
The equation \( \large F=\frac{9}{5}C+32\) is used to convert a temperature measured in Celsius to the equivalent Fahrenheit temperature. A patient's temperature increased by 1.5° Celsius. By how many degrees Fahrenheit did her temperature increase?
1.5°
Hint:
Celsius and Fahrenheit don't increase at the same rate.
1.8°
Hint:
That's how much the Fahrenheit temp increases when the Celsius temp goes up by 1 degree.
2.7°
Hint:
Each degree increase in Celsius corresponds to a \(\dfrac{9}{5}=1.8\) degree increase in Fahrenheit. Thus the increase is 1.8+0.9=2.7.
Not enough information.
Hint:
A linear equation has constant slope, which means that every increase of the same amount in one variable, gives a constant increase in the other variable. It doesn't matter what temperature the patient started out at.
Question 11
Use the graph below to answer the question that follows: The graph above represents the equation \( \large 3x+Ay=B\), where A and B are integers. What are the values of A and B?
\( \large A = -2, B= 6\)
Hint:
Plug in (2,0) to get B=6, then plug in (0,-3) to get A=-2.
\( \large A = 2, B = 6\)
Hint:
Try plugging (0,-3) into this equation.
\( \large A = -1.5, B=-3\)
Hint:
The problem said that A and B were integers and -1.5 is not an integer. Don't try to use slope-intercept form.
\( \large A = 2, B = -3\)
Hint:
Try plugging (2,0) into this equation.
If you found a mistake or have comments on a particular question, please contact me (please copy and paste at least part of the question into the form, as the numbers change depending on how quizzes are displayed). General comments can be left here. |
1. Lines (definitions)
Everyone knows what a line is, but providing a rigorous definition proves to be a challenge.
Definition: Line
A line with slope \(m\) through a point \(P = (a,b)\) is the set of all points \((x,y)\) such that
\[\dfrac{y-b}{x-a}= m.\]
2. The Point-Slope Form of the equation of a Line
Given a point \((x_1,y_1)\) and a slope \(m\), the equation of the line is
Definition: Point-Slope Equation of a Line
\[y-y_1=m(x-x_1)\]
3. Piecewise Linear Functions
A function is piecewise linear if it is made up of parts of lines.
Example 1
\[f(x)=\begin{cases} x+4 & \text{if }x\leq-2 \\ 2x-1 & \text{if } -2<x<1 \\ -2x & \text{if } x\geq1\end{cases}\]
We graph this line by sketching the appropriate parts of each line on the same graph.
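The case analysis above translates directly into code; a minimal sketch of Example 1's function:

```python
# The piecewise linear f from Example 1, written as a direct translation
# of its three cases
def f(x):
    if x <= -2:
        return x + 4
    elif x < 1:          # -2 < x < 1
        return 2 * x - 1
    else:                # x >= 1
        return -2 * x

print(f(-3), f(0), f(2))   # one sample point from each piece
```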
4. Applications
Example 2
Suppose you own a hotel that has 150 rooms. At $80 per room, you have 140 rooms occupied and for every $5 increase in price you expect to have two additional vacancies. Come up with an equation that gives rooms occupied as a function of price.
Solution
Let \(x\) be the price of a room and \(y\) be the number of rooms occupied. Then we have an equation of a line that passes through the point \((80,140)\) and has slope \(-\frac{2}{5}\) (two rooms lost for every $5 increase in price). Hence the equation is:
\[y - 140 = -\dfrac{2}{5}(x - 80)\]
or
\[y = -\dfrac{2}{5} x + 32 + 140\]
or
\[y = -\dfrac{2}{5} x + 172.\]
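As a sanity check, the same line can be rebuilt from two data points the problem implies: 140 rooms occupied at $80, and two more vacancies (138 rooms occupied) at $85:

```python
# Two (price, occupancy) points implied by the problem statement
x1, y1 = 80, 140
x2, y2 = 85, 138      # $5 more -> 2 more vacancies

slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1
print(slope, intercept)   # slope and y-intercept of the occupancy line
```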
Exercise 1
What should you do if your two-year-old daughter has a 40 degree C temperature?
Hint: We have the two points: \((0,32)\) and \((100,212)\).
Exercise 2
Suppose that your company earned $30,000 five years ago and $35,000 three years ago. Assuming a linear growth model, how much will it earn this year?
Exercise 3
My rental was bought for $204,000 three years ago. Depreciation is set so that the house depreciates linearly to zero in twenty years from the purchase of the house. If I plan to sell the house in twelve years for $250,000 and capital gains taxes are 28% of the difference between the purchase price and the depreciated value, what will my taxes be?
Exercise 4
Wasabi restaurant must pay either a flat rate of $400 for rent or 5% of the revenue, whichever is larger. Come up with the equation of the function that gives rent as a function of revenue.
Larry Green (Lake Tahoe Community College) |
Frequently we will want to estimate the empirical probability density function of real-world data and compare it to the theoretical density from one or more probability distributions. The following example shows the empirical and theoretical normal density for EUR/USD high-frequency tick data \(X\) (which has been transformed using log-returns and normalized via \(\frac{X_i-\mu_X}{\sigma_X}\)). The theoretical normal density is plotted over the range \(\left(\lfloor\mathrm{min}(X)\rfloor,\lceil\mathrm{max}(X)\rceil\right)\). The results are in the figure below. The discontinuities and asymmetry of the discrete tick data, as well as the sharp kurtosis and heavy tails (a corresponding interval of \(\approx \left[-8,+7\right]\) standard deviations away from the mean) are apparent from the plot.
We also show the theoretical and empirical density for the EUR/USD exchange rate log returns over different timescales. We can see from these plots that the distribution of the log returns seems to be asymptotically converging to normality. This is a typical empirical property of financial data.
The following R source generates empirical and theoretical density plots across different timescales. The data is loaded from files that are sampled at different intervals. I can't supply the data, unfortunately, but you should get the idea.
[source lang="R"]
# Function that reads Reuters CSV tick data and converts Reuters dates
# Assumes format is Date,Tick
readRTD <- function(filename) {
  tickData <- read.csv(file=filename, header=TRUE, col.names=c("Date", "Tick"))
  tickData$Date <- as.POSIXct(strptime(tickData$Date, format="%d/%m/%Y %H:%M:%S"))
  tickData
}

# Boilerplate function for Reuters FX tick data transformation and density plot
plot.reutersFXDensity <- function() {
  filenames <- c("data/eur_usd_tick_26_10_2007.csv",
                 "data/eur_usd_1min_26_10_2007.csv",
                 "data/eur_usd_5min_26_10_2007.csv",
                 "data/eur_usd_hourly_26_10_2007.csv",
                 "data/eur_usd_daily_26_10_2007.csv")
  labels <- c("Tick", "1 Minute", "5 Minutes", "Hourly", "Daily")

  # Save the previous graphics state so it can be restored at the end
  op <- par(mfrow=c(length(filenames), 2), mar=c(0, 0, 2, 0), cex.main=2)

  tickData <- c()
  i <- 1
  for (filename in filenames) {
    tickData[[i]] <- readRTD(filename)
    # Transform: `$Y = \nabla\log(X_i)$`
    logtick <- diff(log(tickData[[i]]$Tick))
    # Normalize: `$\frac{(Y-\mu_Y)}{\sigma_Y}$`
    logtick <- (logtick - mean(logtick)) / sd(logtick)
    # Theoretical density range: `$\left[\lfloor\mathrm{min}(Y)\rfloor,\lceil\mathrm{max}(Y)\rceil\right]$`
    x <- seq(floor(min(logtick)), ceiling(max(logtick)), .01)
    plot(density(logtick), xlab="", ylab="", axes=FALSE, main=labels[i])
    lines(x, dnorm(x), lty=2)
    #legend("topleft", legend=c("Empirical","Theoretical"), lty=c(1,2))
    plot(density(logtick), log="y", xlab="", ylab="", axes=FALSE, main="Log Scale")
    lines(x, dnorm(x), lty=2)
    i <- i + 1
  }
  par(op)
}
[/source]
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: \((f \star g)(t) \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau\), whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing the data points needed to calculate the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store every snapshot and spatial point.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
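On the indexing question: for two length-N inputs, the mode='full' cross-correlation has 2N−1 samples (hence ~400 for N=200), and index N−1 corresponds to zero lag, so the lag is argmax(result) − (N−1). A minimal sketch with numpy (`np.correlate` here; `scipy.signal.correlate` uses the same indexing for 1-D 'full' output):

```python
import numpy as np

# Sketch: recover a known lag between two signals via full cross-correlation.
n = 200
t = np.arange(n)
a = np.sin(2 * np.pi * t / 50)
b = np.roll(a, 10)            # b lags a by 10 samples

# Subtract means so the correlation isn't dominated by any DC offset
c = np.correlate(b - b.mean(), a - a.mean(), mode='full')  # length 2n-1
lag = int(np.argmax(c)) - (n - 1)   # index n-1 corresponds to zero lag
print(lag)
```

A positive lag here means the first argument lags the second; negating the argmax offset convention is a common source of sign errors, so it is worth testing on a known shift like this before applying it to real data.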
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper) |
Search
Now showing items 1-10 of 24
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
The Trace of a Square Matrix
Before we look at what the trace of a matrix is, let's first define what the main diagonal of a square matrix is.
Definition: If $A$ is an square $n \times n$ matrix, then the Main Diagonal of $A$ consists of the entries $a_{11}, a_{22}, ..., a_{nn}$ (entries whose row number is the same as their column number).
The following image is a graphical representation of the main diagonal of a square matrix.
We are now ready to look at the definition of the trace of a square matrix.
Definition: If $A$ is a square $n \times n$ matrix, then the Trace of $A$, denoted $\mathrm{tr}(A)$, is the sum of all of the entries in the main diagonal, that is, $\mathrm{tr}(A) = \sum_{i=1}^n a_{ii}$. If $A$ is not a square matrix, then the trace of $A$ is undefined.
Calculating the trace of a matrix is relatively easy. For example, given the following $4 \times 4$ matrix $A = \begin{bmatrix} 3 & 2 & 0 & 4\\ 4 & 1 & -2 & 3\\ -3 & -2 & -4 & 7 \\ 3 & 1 & 1 & 5 \end{bmatrix}$ then $\mathrm{tr}(A) = 3 + 1 + (-4) + 5 = 5$.
Example 1
(1)
Given the following matrix $B$, calculate $\mathrm{tr}(B)$:
We note that there are five entries of the main diagonal, that is $b_{11} = 1, \: b_{22} = 11, \: b_{33} = 3, \: b_{44} = 1, \: b_{55} = 14$. The sum of these entries is the trace of $B$, that is $\mathrm{tr}(B) = 1 + 11 + 3 + 1 + 14 = 30$.
Example 2
(2)
Find all values of $n$ such that $\mathrm{tr}(C) = 23$.
We need only consider the entries in the main diagonal. By the definition of the trace of a matrix, it follows that $\mathrm{tr}(C) = 3 + n^2 + 4$. We were given that $\mathrm{tr}(C) = 23$, and we can therefore solve for $n$ as follows:(3)
That is, $n^2 + 7 = 23$, so $n^2 = 16$ and $n = \pm 4$. Hence if $n = \pm 4$ then $\mathrm{tr}(C) = 23$.
Example 3
Prove that if $C = A + B$, then $\mathrm{tr}(C) = \mathrm{tr}(A) + \mathrm{tr}(B)$ (assume $A, \: B, \: C$ are all $n \times n$ square matrices). Proof: If $C = A + B$, then $c_{ii} = a_{ii} + b_{ii}$ for each $i$, and therefore we have that $\mathrm{tr}(C) = \sum_{i=1}^n c_{ii} = \sum_{i=1}^n (a_{ii} + b_{ii}) = \sum_{i=1}^n a_{ii} + \sum_{i=1}^n b_{ii} = \mathrm{tr}(A) + \mathrm{tr}(B)$. $\blacksquare$
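Both the worked example above and the additivity of the trace are easy to check numerically; a small sketch with numpy (the matrix $B$ here is an arbitrary stand-in, not the one from Example 1):

```python
import numpy as np

# Numerical check of the worked example and of tr(A+B) = tr(A) + tr(B)
A = np.array([[3, 2, 0, 4],
              [4, 1, -2, 3],
              [-3, -2, -4, 7],
              [3, 1, 1, 5]])
B = np.arange(16).reshape(4, 4)   # arbitrary 4x4 matrix

assert np.trace(A) == 3 + 1 + (-4) + 5          # diagonal sum = 5
assert np.trace(A + B) == np.trace(A) + np.trace(B)
print(np.trace(A), np.trace(B))
```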
Consider a repetitive chain of events noted by $\Psi = \left( \sf{\Omega}_{1} , \sf{\Omega}_{2} , \sf{\Omega}_{3} \ \ldots \ \right)$ where $\sf{\Omega}_{1} = \sf{\Omega}_{2} = \sf{\Omega}_{3}$ etc. Earlier we gave an example of $\Psi$ as a simple movie loop called
The Almost-Dead March. That movie was boring so here is a more detailed example called The March to a Better Tomorrow. It begins with a terrible battle, there was blood everywhere. Unfortunately, our protagonists lost and had to retreat. There was a long march back to safer ground; left, right, left, right … up into the mountains … left, right, left right. It was freezing, and the marchers were almost dead. But then as they came over a rise the sun came out, a ray of hope illuminated their hearts, and they bravely marched on to begin a new day. We can make a mathematical version of this brief storyline as follows. The slog through bloody battle can be represented by associating a red sensation with each step, so the first couple of events are
$\sf{P_1} =$ { , } and
$\sf{P_2} =$ { , }
These two steps are bundled together
$\sf{\Omega} = \left( \sf{P}_{1} , \sf{P}_{2} \right) =$ ( { , } , { , } )
And then repeated over and over again to express battle sequences
$\Psi =$ ( { , } , { , } , { , } , { , } … )
Similarly, the long freezing march through the mountains could be represented by
$\sf{\Omega}^{\prime} =$ ( { , } , { , } )
And when the sun comes out, the march might be described as
$\sf{\Omega}^{\prime \prime} =$ ( { , } , { , } )
Overall we can make a crude representation of the movie with just a beginning, middle and ending as $\Psi = \left( \sf{\Omega}_{1}, \sf{\Omega}_{2} \ldots \sf{\Omega}^{\prime}_{\it{j}}, \sf{\Omega}^{\prime}_{\it{j} + \sf{1}} \ldots \sf{\Omega}^{\prime \prime}_{\it{f} - \sf{1}} , \sf{\Omega}^{\prime \prime}_{\it{f}} \right)$. This mathematical portrayal is getting complicated even though it conveys much less information than the storyline. So to simplify we define a new class of particles that are combinations of conjugate seeds and thermodynamic seeds. For example let
Then the events of battle can be represented using half as many particles
$\sf{\Omega} =$ ( , ) instead of
$\sf{\Omega} =$ ( { , } , { , } )
Reducing the number of particles by a factor of two is a big simplification, and we intend to use it a lot. So we give special names to these new particles: the enduring union of a conjugate seed and a thermodynamic seed is called a thermodynamic quark. These quarks are symbolized using lower-case Roman letters without serifs, and they are named after their thermodynamic seeds. Thus quarks are objectified from pairs of Anaxagorean sensations. Objectification changes narrative forms of description from using adjectives to identify sensations, to using nouns for identifying particles. For example, we may report detecting an up-quark instead of seeing a white sensation on the right side. Click on any icon in the table below for more detail.
Pairs of Sensations → Quarks

- burning sensation on the right → top quark
- burning sensation on the left → top anti-quark
- freezing sensation on the right → bottom quark
- freezing sensation on the left → bottom anti-quark
- cool sensation on the right → strange quark
- cool sensation on the left → strange anti-quark
- warm sensation on the right → charmed quark
- warm sensation on the left → charmed anti-quark
- white sensation on the right → up quark
- white sensation on the left → up anti-quark
- black sensation on the right → down quark
- black sensation on the left → down anti-quark
- yellow sensation on the right → negative quark
- yellow sensation on the left → negative anti-quark
- blue sensation on the right → positive quark
- blue sensation on the left → positive anti-quark
- green sensation on the right → northern quark
- green sensation on the left → northern anti-quark
- red sensation on the right → southern quark
- red sensation on the left → southern anti-quark
Quarks are building-blocks we can use to describe more complicated sensations. If you imagine the grey conjugate seeds as the basic building-blocks for these marching scenarios, then the quarks are like painted blocks. Descriptions made using quarks hide complexity, and they are more succinct. But these compressed reports are still useful because our bodies are bilateral, so most sensations are directly experienced with strong left and right-side character. For example, we usually see with binocular vision, and hear in stereo. So dropping seeds in favor of quarks does not lose too much precision.
Next step: physical particles.
Thermodynamic Quark (Definition 3-34): The union of a conjugate seed with a thermodynamic seed.
Southern Anti-quark (Definition 3-35): $\sf{\overline{a}} \equiv \{ \sf{A}, \sf{\overline{O}} \}$
Southern Quark (Definition 3-36): $\sf{a} \equiv \{ \sf{A}, \sf{O} \}$
Bottom Anti-quark (Definition 3-37): $\sf{\overline{b}} \equiv \{ \sf{B}, \sf{\overline{O}} \}$
Bottom Quark (Definition 3-38): $\sf{b} \equiv \{ \sf{B}, \sf{O} \}$
Charmed Anti-quark (Definition 3-39): $\sf{\overline{c}} \equiv \{ \sf{C}, \sf{\overline{O}} \}$
Charmed Quark (Definition 3-40): $\sf{c} \equiv \{ \sf{C}, \sf{O} \}$
Down Anti-quark (Definition 3-41): $\sf{\overline{d}} \equiv \{ \sf{D}, \sf{\overline{O}} \}$
Down Quark (Definition 3-42): $\sf{d} \equiv \{ \sf{D}, \sf{O} \}$
Negative Anti-quark (Definition 3-43): $\sf{\overline{e}} \equiv \{ \sf{E}, \sf{\overline{O}} \}$
Negative Quark (Definition 3-44): $\sf{e} \equiv \{ \sf{E}, \sf{O} \}$
Positive Anti-quark (Definition 3-45): $\sf{\overline{g}} \equiv \{ \sf{G}, \sf{\overline{O}} \}$
Positive Quark (Definition 3-46): $\sf{g} \equiv \{ \sf{G}, \sf{O} \}$
Northern Anti-quark (Definition 3-47): $\sf{\overline{m}} \equiv \{ \sf{M}, \sf{\overline{O}} \}$
Northern Quark (Definition 3-48): $\sf{m} \equiv \{ \sf{M}, \sf{O} \}$
Strange Anti-quark (Definition 3-49): $\sf{\overline{s}} \equiv \{ \sf{S}, \sf{\overline{O}} \}$
Strange Quark (Definition 3-50): $\sf{s} \equiv \{ \sf{S}, \sf{O} \}$
Top Anti-quark (Definition 3-51): $\sf{\overline{t}} \equiv \{ \sf{T}, \sf{\overline{O}} \}$
Top Quark (Definition 3-52): $\sf{t} \equiv \{ \sf{T}, \sf{O} \}$
Up Anti-quark (Definition 3-53): $\sf{\overline{u}} \equiv \{ \sf{U}, \sf{\overline{O}} \}$
Up Quark (Definition 3-54): $\sf{u} \equiv \{ \sf{U}, \sf{O} \}$
I am attempting to calculate the functional derivative of a functional $$E[\rho] = \int G(\rho(\mathbf{r}),\nabla\rho(\mathbf{r}),\mathbf{r})d\mathbf{r},$$ where $$G(\rho(\mathbf{r}),\nabla\rho(\mathbf{r}),\mathbf{r})=\rho(\mathbf{r})^{4/3}\left(\alpha-\frac{(\nabla\rho(\mathbf{r})\cdot\nabla\rho(\mathbf{r}))^{3/4}}{137 \rho(\mathbf{r})^{2}}\right),$$ and $\alpha$ is a constant. This is for use in a computational chemistry code.
To find the functional derivative I think I should use the Euler-Lagrange equation, $$\frac{\delta G}{\delta \rho}=\frac{\partial G}{\partial \rho} - \nabla\cdot\frac{\partial G}{\partial \nabla \rho}, $$ as given on the Wikipedia article on functional derivatives.
What I am struggling with is the second term in the E-L equation. Firstly, I am not sure how to approach the partial derivative with respect to $\nabla\rho$. So far, I have used the chain rule to obtain $$ \frac{\partial G}{\partial \nabla\rho}=-\frac{3}{4\times 137 \rho^{2/3}}\frac{1}{(\nabla\rho(\mathbf{r})\cdot\nabla\rho(\mathbf{r}))^{1/4}}\left(\frac{\partial}{\partial \nabla\rho}(\nabla\rho(\mathbf{r})\cdot\nabla\rho(\mathbf{r}))\right), $$ but I am not sure how to proceed with the differentiation of the dot product. Furthermore, it appears from the E-L equation that I must then find the divergence of this partial derivative. I think that the result of $\frac{\partial G}{\partial \nabla\rho}$ will be a scalar function, so am not sure how the divergence can be applied here.
I would appreciate some advice on how to tackle the partial derivative and subsequent divergence. Perhaps I am missing something, or there is a flaw in my reasoning. |
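One standard identity appears to resolve the dot-product step: treating $\nabla\rho$ as an independent vector variable, the derivative of the dot product with respect to its vector argument is $2\nabla\rho$, which would give

```latex
\frac{\partial}{\partial \nabla\rho}\left(\nabla\rho\cdot\nabla\rho\right) = 2\nabla\rho,
\qquad\text{so}\qquad
\frac{\partial G}{\partial \nabla\rho}
  = -\frac{3}{2 \times 137\, \rho^{2/3}}\,
    \frac{\nabla\rho}{\left(\nabla\rho\cdot\nabla\rho\right)^{1/4}}.
```

Note that this is a vector field rather than a scalar, so the divergence in the Euler-Lagrange equation is then well defined.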
Now showing items 1-9 of 9
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\rm int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_{\rm t}$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ... |
New Reconstruction Method in the Electromagnetic Calorimeter (ECAL) Analysis
The key detector for measurements of electrons and positrons in AMS is the Electromagnetic Calorimeter, ECAL (see Figure 1). The ECAL consists of a multilayer sandwich of lead foils and ∼50,000 scintillating fibers with an active area of 648 × 648 mm$^2$ and a thickness of 166.5 mm, corresponding to 17 radiation lengths, $X_0$. The calorimeter is composed of 9 superlayers, each 18.5 mm thick and made of 11 grooved, 1 mm thick lead foils interleaved with 10 layers of 1 mm diameter scintillating fibers. In each superlayer, the fibers run in one direction only. The 3D imaging capability of the detector is obtained by stacking alternate superlayers with fibers parallel to the $x$ and $y$ axes (5 and 4 superlayers, respectively).
All fibers are read out on one end only by 324 photomultipliers (PMT). Each PMT has four anodes and is surrounded by a magnetic shield which contains light guides, the PMT base and the frontend electronics. Each anode covers an active area of 9 × 9 mm$^2$, corresponding to about 35 fibers, defined as a cell. Figure 1 (left) schematically shows the construction and the optical face of one superlayer of the lead-fiber matrix, against which a grid of PMTs is mounted. These PMT grids on the four ECAL faces define the ECAL coordinate system. Figure 1 (right) illustrates the locations of optical fibers within a cell. In total there are 1296 cells segmented into 18 layers longitudinally, two per superlayer, with 72 transverse cells in each layer providing a fine granularity sampling of the shower in three dimensions. The signals are processed over a wide dynamic range, from a minimum ionizing particle, which produces about 10 photoelectrons per cell, up to the 60,000 photoelectrons produced in one cell by the core of the electromagnetic shower of a 1 TeV electron, corresponding to deposited energy of 60 GeV.
Reconstruction of electrons and positrons in the calorimeter uses a 3-dimensional shower parametrization, which accounts for the detector specifics: finite size of the calorimeter, non-uniform efficiency of the signal collection, and saturation effects due to the electronics and due to high energy density in the active calorimeter elements (A. Kounine, Z. Weng, W. Xu, and C. Zhang, Nucl. Instr. Methods Sect. A 869, 110 (2017)).
An individual electromagnetic shower is described by seven parameters, which fully determine the observed pattern of energy depositions in the calorimeter cells: the shower energy ($E_0$); the 3-dimensional spatial point, ($x_0$ , $y_0$ , $z_0$ ), corresponding to the location of the shower maximum in the ECAL coordinate system; the two angles ($K_X$, $K_Y$) that, together with the spatial point, define the shower axis; and the location ($T_0$) of the shower maximum on the shower axis. The parameter ($\beta$) depends on the specific construction and materials of the calorimeter. This is a minimal parameter set, which allows accurate shower parametrization of individual showers without introducing noticeable correlation between these parameters.
The longitudinal shower profile in terms of the depth t in the calorimeter (in units of radiation length) is described by empirical parametrization [Particle Data Group, Phys. Rev. D 98, 030001 (2018)]:
$$ \frac {dE}{dt}(t) = E_{0} \frac{(\beta t)^{\beta T_{0}}\beta e^{-\beta t}}{\Gamma(\beta T_{0}+1)} $$
using the parameters described above. In our calorimeter, we found that the scale parameter $\beta$ is constant ($\beta=0.65$). The individual shower parameters $E_0$ and $T_0$ are obtained from a fit to observed energy depositions in the ECAL cells of each shower. Figure 2 shows the description of electron showers over a wide energy range. |
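The parametrization can be sanity-checked numerically. The sketch below is illustrative only (it is not from the AMS software, and the values `E0 = 100` and `T0 = 7.0` are arbitrary choices rather than fit results); it confirms that the profile integrates to $E_0$ and peaks at $t = T_0$:

```python
import math

def shower_profile(t, E0, T0, beta=0.65):
    """Longitudinal energy deposition dE/dt at depth t (in radiation lengths).

    Implements dE/dt = E0 * (beta*t)^(beta*T0) * beta * exp(-beta*t) / Gamma(beta*T0 + 1),
    with the fixed scale parameter beta = 0.65 quoted in the text.
    """
    a = beta * T0
    return E0 * (beta * t) ** a * beta * math.exp(-beta * t) / math.gamma(a + 1.0)

# Illustrative per-shower parameters: a 100 GeV shower peaking at 7 radiation lengths.
E0, T0, dt = 100.0, 7.0, 0.001
ts = [i * dt for i in range(1, 120000)]
vals = [shower_profile(t, E0, T0) for t in ts]

integral = sum(v * dt for v in vals)   # should recover the total energy E0
peak_t = ts[vals.index(max(vals))]     # should sit at the shower maximum T0
```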
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
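A minimal sketch of that back-of-the-envelope estimate (ignoring overlapping-window corrections, which change the answer only slightly):

```python
LETTERS = 26
WORD = "COVFEFE"

# Probability that a fixed 7-character window of uniform random letters spells WORD
p = (1 / LETTERS) ** len(WORD)

# Rough expected number of characters to type before a match: ~1/p = 26^7, about 8 billion
expected_chars = 1 / p
```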
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? Sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
In this section, we show that every prime has a primitive root. To do this we need to introduce polynomial congruence.
Definition: polynomial congruence
Let \(f(x)\) be a polynomial with integer coefficients. We say that an integer \(a\) is a root of \(f(x)\) modulo \(m\) if \(f(a)\equiv 0 (mod\ m)\).
Notice that \(x\equiv 3 (mod\ 11)\) is a root for \(f(x)=2x^2+x+1\) since \(f(3)=22\equiv 0(mod \ 11)\).
We now introduce Lagrange’s theorem for primes, which is the analogue modulo \(p\) of the fundamental theorem of algebra. This theorem will be an important tool in proving that every prime has a primitive root.
Lagrange’s Theorem
Let \[m(x)=b_nx^n+b_{n-1}x^{n-1}+...+b_1x+b_0\] be a polynomial of degree \(n, n\geq 1\) with integer coefficients and with leading coefficient \(b_n\) not divisible by a prime \(p\). Then \(m(x)\) has at most \(n\) distinct incongruent roots modulo \(p\).
Using induction, notice that if \(n=1\), then we have \[m(x)=b_1x+b_0 \ \ \mbox{and} \ \ p \nmid b_1.\] A root of \(m(x)\) is a solution of the congruence \(b_1x+b_0\equiv 0(mod \ p)\). Since \(p\nmid b_1\), this congruence has exactly one solution by Theorem 26.
Suppose that the theorem is true for polynomials of degree \(n-1\), and let \(m(x)\) be a polynomial of degree \(n\) with integer coefficients and where the leading coefficient is not divisible by \(p\). Assume now, for a contradiction, that \(m(x)\) has \(n+1\) incongruent roots modulo \(p\), say \(x_0,x_1,...,x_{n}\). Thus \[m(x_k)\equiv 0(mod \ p)\] for \(0\leq k\leq n\). Thus we have \[\begin{aligned} m(x)-m(x_0)&=&b_n(x^n-x_0^n)+b_{n-1}(x^{n-1}-x_0^{n-1})+...+b_1(x-x_0) \\ &=& b_n(x-x_0)(x^{n-1}+x^{n-2}x_0+...+xx_0^{n-2}+x_0^{n-1})\\&+& b_{n-1}(x-x_0)(x^{n-2}+x^{n-3}x_0+...+xx_0^{n-3}+x_0^{n-2})+...+b_1(x-x_0)\\&=& (x-x_0)f(x)\end{aligned}\] where \(f(x)\) is a polynomial of degree \(n-1\) with leading coefficient \(b_n\). Notice that since \(m(x_k)\equiv m(x_0)(mod \ p)\), we have \[m(x_k)-m(x_0)=(x_k-x_0)f(x_k)\equiv 0(mod \ p).\] Since \(x_k\not\equiv x_0(mod \ p)\) for \(k\neq 0\), it follows that \(f(x_k)\equiv 0(mod \ p)\) for all \(1\leq k\leq n\), and thus \(x_1,x_2,...,x_n\) are roots of \(f(x)\). This is a contradiction, since we have a polynomial of degree \(n-1\) with \(n\) distinct roots.
We now use Lagrange’s Theorem to prove the following result.
Theorem 60: Consider the prime \(p\) and let \(p-1=kn\) for some integer \(k\). Then \(x^n-1\) has exactly \(n\) incongruent roots modulo \(p\).
Since \(p-1=kn\), we have \[\begin{aligned} x^{p-1}-1&=&(x^n-1)(x^{n(k-1)}+x^{n(k-2)}+...+x^n+1) \\&=&(x^n-1)f(x)\end{aligned}\] By Fermat’s little theorem, we know that \(x^{p-1}-1\) has \(p-1\) incongruent roots modulo \(p\). Also, every root of \(x^{p-1}-1\) is a root of \(f(x)\) or a root of \(x^n-1\). By Lagrange’s Theorem, \(f(x)\) has at most \(p-n-1\) roots modulo \(p\), so \(x^n-1\) has at least \(n\) roots modulo \(p\). But again by Lagrange’s Theorem, \(x^n-1\) has at most \(n\) roots modulo \(p\). Thus \(x^n-1\) has exactly \(n\) incongruent roots modulo \(p\).
We now prove a lemma that gives us how many incongruent integers can have a given order modulo \(p\).
Lemma 1: Let \(p\) be a prime and let \(m\) be a positive integer such that \(p-1=mk\) for some integer \(k\). Let \(S(m)\) denote the number of incongruent integers of order \(m\) modulo \(p\), that is \[S(m)=|\{a: 0<a<p, \ \ ord_pa=m \}|.\] Then \(S(m)\leq \phi(m)\).
Notice that if \(S(m)=0\), then \(S(m)\leq \phi(m)\). If \(S(m)>0\), then there is an integer \(a\) of order \(m\) modulo \(p\). Since \(ord_pa=m\), the powers \(a,a^2,...,a^m\) are incongruent modulo \(p\). Also, each power of \(a\) is a root of \(x^m-1\) modulo \(p\) because \[(a^k)^m=(a^m)^k\equiv 1(mod \ p)\] for all positive integers \(k\). By Theorem 60, we know that \(x^m-1\) has exactly \(m\) incongruent roots modulo \(p\), so every root is congruent to one of these powers of \(a\). We also know by Theorem 57 that the powers \(a^k\) with \((k,m)=1\) have order \(m\). There are exactly \(\phi(m)\) such integers \(k\) with \(1\leq k \leq m\), and thus if there is one element of order \(m\) modulo \(p\), there must be exactly \(\phi(m)\) incongruent integers of order \(m\) less than \(p\). Hence \(S(m)\leq \phi(m)\).
In the following theorem, we determine exactly how many incongruent integers have a given order modulo \(p\), and thereby establish the existence of primitive roots for prime numbers.
Theorem
Every prime number has a primitive root.
Let \(p\) be a prime. For each positive integer \(m\) such that \(p-1=mk\) for some integer \(k\), let \(F(m)\) be the number of positive integers of order \(m\) modulo \(p\) that are less than \(p\). Since the order modulo \(p\) of an integer not divisible by \(p\) divides \(p-1\), it follows that \[p-1=\sum_{m\mid p-1}F(m).\] By Theorem 42, we see that \[p-1=\sum_{m\mid p-1}\phi(m).\] By Lemma 1, \(F(m)\leq \phi(m)\) when \(m\mid (p-1)\). Together with \[\sum_{m\mid p-1}F(m)=\sum_{m\mid p-1}\phi(m)\] this forces \(F(m)=\phi(m)\) for each positive divisor \(m\) of \(p-1\). In particular, \(F(p-1)=\phi(p-1)\geq 1\), so there are \(\phi(p-1)\) incongruent integers of order \(p-1\) modulo \(p\). Thus \(p\) has \(\phi(p-1)\) primitive roots.
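The counting in this proof is easy to check by brute force for a small prime. The sketch below (illustrative, not part of the text) verifies for \(p=13\) that the number of elements of each order \(m\) dividing \(p-1\) equals \(\phi(m)\), and that 13 has \(\phi(12)=4\) primitive roots:

```python
from math import gcd

def order(a, p):
    """Multiplicative order of a modulo the prime p (requires gcd(a, p) = 1)."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def phi(n):
    """Euler's totient function, by direct count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p = 13
F = {}                      # F[m] = number of 0 < a < p with ord_p(a) = m
for a in range(1, p):
    m = order(a, p)
    F[m] = F.get(m, 0) + 1

# Elements of order p-1 are exactly the primitive roots of p
primitive_roots = [a for a in range(1, p) if order(a, p) == p - 1]
```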
Exercises
Find the incongruent roots modulo 11 of \(x^2+2\).
Find the incongruent roots modulo 11 of \(x^4+x^2+1\).
Find the incongruent roots modulo 13 of \(x^3+12\).
Find the number of primitive roots of 13 and of 47.
Find a complete set of incongruent primitive roots of 13.
Find a complete set of incongruent primitive roots of 17.
Find a complete set of incongruent primitive roots of 19.
Let \(r\) be a primitive root of \(p\) with \(p\equiv 1(mod \ 4)\). Show that \(-r\) is also a primitive root.
Show that if \(p\) is a prime and \(p\equiv 1(mod \ 4)\), then there is an integer \(x\) such that \(x^2\equiv -1(mod \ p)\). |
This is related to a question I answered earlier which raised a question in my mind.
My question is the following,
Suppose we have a vector space $\mathbb{V}$ with real coefficients.
Let $\textbf{T}$ be an operator on this space which has the following two properties:
$ \textbf{T}( \textit{u} + \textit{v})= \textbf{T}(\textit{u}) + \textbf{ T}( \textit{v}) \qquad (\textit{u},\textit{v} \in \mathbb{V})$ $\textbf{T}(c v) = c \textbf{T}(\textit{v}) \qquad (\textit{v} \in \mathbb{V}, c \in \mathbb{Q})$
Can we prove that $\textbf{T}(c\textit{v}) = c\textbf{T}(\textit{v}) $ for $c \in \mathbb{R}$ ?
I suspect we can prove this for any particular $r\in \mathbb{R}$ by taking a sequence of rational points $\lbrace q_n \rbrace$ which converge to $r$ and writing the following equality.
$$\textbf{T}(q_n \textit{v}) = q_n \textbf{T}(\textit{v}) \qquad (\forall n)$$
$$\lim_{n\rightarrow \infty} \textbf{T}(q_n \textit{v}) = \lim_{n\rightarrow \infty} q_n \textbf{T}(\textit{v}) $$
$$\lim_{n\rightarrow \infty} \textbf{T}(q_n \textit{v}) = r \textbf{T}(\textit{v}) $$
The statement would be proven if we can pull the limit inside but I'm not enough of an analyst to be able to justify that at this stage. |
I am dealing with the decomposition of the representation $5\otimes5$ of $SU(5)$:
$$5\otimes5=15\oplus10 $$
demonstration:
$$u^iv^j=\frac{1}{2}(u^iv^j+u^jv^i)+\frac{1}{2}(u^iv^j-u^jv^i)=$$
$$=\frac{1}{2}(u^iv^j+u^jv^i)+\frac{1}{2}\epsilon^{ijxyk}\epsilon_{xyklm}u^lv^m$$
where the term $\frac{1}{2}(u^iv^j+u^jv^i)$ has 15 independent components and the other has 10 components.
My question is: since $\epsilon^{ijxyk}$ is invariant under $SU(5)$, shouldn't the tensor $\epsilon_{xyklm}u^lv^m$ transform under the $\overline{10}$ representation, having three lower free indices?
(according to my notation an upper index transform under the $D$ representation while a lower index transforms under the $\overline{D}$ representation).
This post imported from StackExchange Physics at 2015-03-12 12:20 (UTC), posted by SE-user Caos |
The Class Equation for Groups Acting on a Set
Recall from The Orbit and Stabilizer of a Point in a Group Acting on a Set page that if $(G, \cdot)$ is a group acting on a (nonempty) set $A$, then for each $a \in A$ we defined the orbit of $a$ under $G$ to be the set $Ga = \{ b \in A : b = ga \: \mathrm{for \: some \:} g \in G \}$, and for each $a \in A$ we defined the stabilizer of $a$ under $G$ to be the set $G_a = \{ g \in G : ga = a \}$.
We then proved that for each $a \in A$, $G_a$ is a subgroup of $G$. We are now about to prove an extension of The Class Equation for groups acting on a set. We first need the following lemma.
Lemma 1: Let $(G, \cdot)$ be a group acting on a (nonempty) set $A$. Then for each $a \in A$, $Ga$ is in bijection with the set of left cosets of $G_a$ in $G$, and so, for each $a \in A$, $|Ga| = [G:G_a]$ Proof:Fix $a \in A$. Let $g_1, g_2 \in G$. Observe that $g_1a = g_2a$ if and only if $g_2^{-1}(g_1a) = a$, or equivalently, if and only if $(g_2^{-1} \cdot g_1)a = a$. This happens if and only if $g_2^{-1} \cdot g_1 \in G_a$. But this happens if and only if $g_1G_a = g_2G_a$. Let $\mathrm{cosets}(G_a)$ be the set of all left cosets of $G_a$. Let $f : \mathrm{cosets}(G_a) \to Ga$ be defined for all $gG_a \in \mathrm{cosets}(G_a)$ by $f(gG_a) = ga$. From the discussion above, we see that $g_1G_a = g_2G_a$ if and only if $g_1a = g_2a$, and so $f$ is well-defined and injective. Furthermore, $f$ is surjective since for all $ga \in Ga$ we have that $gG_a \in \mathrm{cosets}(G_a)$ is such that $f(gG_a) = ga$. So indeed, $Ga$ is in bijection with the set of left cosets of $G_a$ in $G$, and for each $a \in A$, $|Ga| = [G:G_a]$. $\blacksquare$
Lemma 2: Let $(G, \cdot)$ be a group acting on a (nonempty) set $A$. Then the set of orbits $\{ Ga : a \in A \}$ partition $A$. Proof:Define an equivalence relation $\sim$ on $A$ for all $a, b \in A$ by $a \sim b$ if and only if there exists a $g \in G$ such that $a = gb$. We now check that $\sim$ is indeed an equivalence relation on $A$. Reflexivity:By the second axiom of a group action we have that $a = ea$ for all $a \in A$, where $e \in G$ denotes the identity. Thus $a \sim a$ for every $a \in A$. Symmetry:Suppose that $a \sim b$. Then there exists a $g \in G$ such that $a = gb$. Then we have that $g^{-1}a = g^{-1}(gb) = b$. Since $g \in G$ implies $g^{-1} \in G$ and $b = g^{-1}a$ we see that $b \sim a$. Thus, for all $a, b \in A$, if $a \sim b$ then $b \sim a$. Transitivity:Suppose that $a \sim b$ and $b \sim c$. Then there exists a $g \in G$ such that $a = gb$, and there exists a $g' \in G$ such that $b = g'c$. So by the first axiom of a group action we have that $a = gb = g(g'c) = (g \cdot g')c$. Since $g, g' \in G$ we have that $g \cdot g' \in G$. Thus $a \sim c$. So for all $a, b, c \in A$, if $a \sim b$ and $b \sim c$ then $a \sim c$. So indeed, $\sim$ is an equivalence relation on $A$, and thus, the equivalence classes partition $A$. But for each $a \in A$, the equivalence class $[a]$ of $a$ is $[a] = \{ b \in A : b \sim a \} = \{ b \in A : b = ga \: \mathrm{for \: some \:} g \in G \} = Ga$. So the set of orbits $\{ Ga : a \in A \}$ partitions $A$. $\blacksquare$
Definition: Let $(G, \cdot)$ be a group acting on a (nonempty) set $A$. The Subset of $A$ Fixed by $G$ is defined to be the set $A^G = \{ a \in A : ga = a \: \mathrm{for \: all \:} g \in G \}$.
We are now ready to state and prove the generalized class equation for groups acting on a set.
Theorem 3 (The Class Equation for Groups Acting on a Set): Let $(G, \cdot)$ be a group acting on a finite (nonempty) set $A$. Then $\displaystyle{|A| = |A^G| + \sum [G : G_a]}$ where the sum runs over one representative of each orbit $Ga$ with $|Ga| > 1$. Proof:By Lemma 2 we have that the set of orbits $\{ Ga : a \in A \}$ partitions $A$, and so $\displaystyle{|A| = \sum |Ga|}$, where the above sum runs over one representative for each orbit. Suppose that $a \in A$ is such that $|Ga| = 1$. Then $Ga = \{ b \in A : b = ga \: \mathrm{for \: some \:} g \in G \} = \{ a \}$. That is, for all $g \in G$ we have that $ga = a$, and so $a \in A^G$. So $\displaystyle{|A| = |A^G| + \sum |Ga|}$, where the above sum runs over one representative for each orbit $Ga$ with $|Ga| > 1$. By Lemma 1, $|Ga| = [G:G_a]$, and hence $\displaystyle{|A| = |A^G| + \sum [G:G_a]}$, where the above sum runs over one representative for each orbit $Ga$ with $|Ga| > 1$. $\blacksquare$
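As a concrete check of the theorem (an illustrative sketch, not part of the original page), the class equation can be verified for $S_3$ acting on itself by conjugation, where $A^G$ is the center $\{e\}$ and the non-singleton orbits are the transpositions and the 3-cycles:

```python
from itertools import permutations

G = list(permutations(range(3)))          # S_3 as permutation tuples

def compose(f, g):
    """(f o g)(i) = f[g[i]]."""
    return tuple(f[g[i]] for i in range(3))

def inverse(f):
    inv = [0] * 3
    for i, v in enumerate(f):
        inv[v] = i
    return tuple(inv)

def act(g, x):
    """Conjugation action: g . x = g x g^{-1}."""
    return compose(compose(g, x), inverse(g))

A = G                                     # G acts on the set A = G by conjugation

# A^G: elements fixed by every g, i.e. the center of G
fixed = [x for x in A if all(act(g, x) == x for g in G)]

# Sum [G : G_a] over one representative of each orbit with |Ga| > 1
seen, index_sum = set(), 0
for a in A:
    orbit = frozenset(act(g, a) for g in G)
    if len(orbit) == 1 or orbit in seen:
        continue
    seen.add(orbit)
    stabilizer = [g for g in G if act(g, a) == a]
    index_sum += len(G) // len(stabilizer)    # [G : G_a] = |Ga| by Lemma 1
```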
We now turn our attention to finding derivatives of inverse trigonometric functions. These derivatives will prove invaluable in the study of integration later in this text. The derivatives of inverse trigonometric functions are quite surprising in that their derivatives are actually algebraic functions. Previously, derivatives of algebraic functions have proven to be algebraic functions and derivatives of trigonometric functions have been shown to be trigonometric functions. Here, for the first time, we see that the derivative of a function need not be of the same type as the original function.
Example \(\PageIndex{4A}\): Derivative of the Inverse Sine Function
Use the inverse function theorem to find the derivative of \(g(x)=\sin^{−1}x\).
Solution
Since for \(x\) in the interval \([−\dfrac{π}{2},\dfrac{π}{2}],f(x)=\sin x\) is the inverse of \(g(x)=\sin^{−1}x\), begin by finding \(f′(x)\). Since
\[f′(x)=\cos x \nonumber\]
and
\[f′(g(x))=\cos ( \sin^{−1}x)=\sqrt{1−x^2} \nonumber\]
we see that
\[g′(x)=\dfrac{d}{dx}(\sin^{−1}x)=\dfrac{1}{f′(g(x))}=\dfrac{1}{\sqrt{1−x^2}} \nonumber\]
Analysis
To see that \(\cos(\sin^{−1}x)=\sqrt{1−x^2}\), consider the following argument. Set \(\sin^{−1}x=θ\). In this case, \(\sin θ=x\) where \(−\dfrac{π}{2}≤θ≤\dfrac{π}{2}\). We begin by considering the case where \(0<θ<\dfrac{π}{2}\). Since \(θ\) is an acute angle, we may construct a right triangle having acute angle \(θ\), a hypotenuse of length \(1\) and the side opposite angle \(θ\) having length \(x\). From the Pythagorean theorem, the side adjacent to angle \(θ\) has length \(\sqrt{1−x^2}\). This triangle is shown in Figure \(\PageIndex{2}\) Using the triangle, we see that \(\cos(\sin^{−1}x)=\cos θ=\sqrt{1−x^2}\).
In the case where \(−\dfrac{π}{2}<θ<0\), we make the observation that \(0<−θ<\dfrac{π}{2}\) and hence
\(\cos(\sin^{−1}x)=\cos θ=\cos(−θ)=\sqrt{1−x^2}\).
Now if \(θ=\dfrac{π}{2}\) or \(θ=−\dfrac{π}{2}\), then \(x=1\) or \(x=−1\), and since in either case \(\cos θ=0\) and \(\sqrt{1−x^2}=0\), we have
\(\cos(\sin^{−1}x)=\cosθ=\sqrt{1−x^2}\).
Consequently, in all cases,
\[\cos(\sin^{−1}x)=\sqrt{1−x^2}.\]
Example \(\PageIndex{4B}\): Applying the Chain Rule to the Inverse Sine Function
Apply the chain rule to the formula derived in Example \(\PageIndex{4A}\) to find the derivative of \(h(x)=\sin^{−1}(g(x))\) and use this result to find the derivative of \(h(x)=\sin^{−1}(2x^3).\)
Solution
Applying the chain rule to \(h(x)=\sin^{−1}(g(x))\), we have
\(h′(x)=\dfrac{1}{\sqrt{1−(g(x))^2}}g′(x)\).
Now let \(g(x)=2x^3,\) so \(g′(x)=6x.\) Substituting into the previous result, we obtain
\(h′(x)=\dfrac{1}{\sqrt{1−4x^6}}⋅6x=\dfrac{6x}{\sqrt{1−4x^6}}\)
Exercise \(\PageIndex{4}\)
Use the inverse function theorem to find the derivative of \(g(x)=\tan^{−1}x\).
Hint
The inverse of \(g(x)\) is \(f(x)=\tan x\). Use Example \(\PageIndex{4A}\) as a guide.
Answer
\(g′(x)=\dfrac{1}{1+x^2}\)
The derivatives of the remaining inverse trigonometric functions may also be found by using the inverse function theorem. These formulas are provided in the following theorem.
Derivatives of Inverse Trigonometric Functions
\[\begin{align} \dfrac{d}{dx}\sin^{−1}x&=\dfrac{1}{\sqrt{1−x^2}} \label{trig1} \\[4pt] \dfrac{d}{dx}\cos^{−1}x&=\dfrac{−1}{\sqrt{1−x^2}} \label{trig2} \\[4pt] \dfrac{d}{dx}\tan^{−1}x&=\dfrac{1}{1+x^2} \label{trig3} \\[4pt] \dfrac{d}{dx}\cot^{−1}x&=\dfrac{−1}{1+x^2} \label{trig4} \\[4pt] \dfrac{d}{dx}\sec^{−1}x&=\dfrac{1}{|x|\sqrt{x^2−1}} \label{trig5} \\[4pt] \dfrac{d}{dx}\csc^{−1}x&=\dfrac{−1}{|x|\sqrt{x^2−1}} \label{trig6} \end{align}\]
Example \(\PageIndex{5A}\): Applying Differentiation Formulas to an Inverse Tangent Function
Find the derivative of \(f(x)=\tan^{−1}(x^2).\)
Solution
Let \(g(x)=x^2\), so \(g′(x)=2x\). Substituting into Equation \ref{trig3}, we obtain
\(f′(x)=\dfrac{1}{1+(x^2)^2}⋅(2x).\)
Simplifying, we have
\(f′(x)=\dfrac{2x}{1+x^4}\).
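The result can be spot-checked numerically with a central difference (an illustrative sketch, not part of the original text):

```python
import math

def f(x):
    return math.atan(x ** 2)        # f(x) = tan^{-1}(x^2)

def fprime(x):
    return 2 * x / (1 + x ** 4)     # the derivative computed above

# Central-difference approximation of f'(x) at a sample point
h = 1e-6
x = 0.3
numeric = (f(x + h) - f(x - h)) / (2 * h)
```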
Example \(\PageIndex{5B}\): Applying Differentiation Formulas to an Inverse Sine Function
Find the derivative of \(h(x)=x^2 \sin^{−1}x.\)
Solution
By applying the product rule, we have
\(h′(x)=2x\sin^{−1}x+\dfrac{1}{\sqrt{1−x^2}}⋅x^2\)
Exercise \(\PageIndex{5}\)
Find the derivative of \(h(x)=\cos^{−1}(3x−1).\)
Hint
Use Equation \ref{trig2}. with \(g(x)=3x−1\)
Answer
\(h′(x)=\dfrac{−3}{\sqrt{6x−9x^2}}\)
Example \(\PageIndex{6}\): Applying the Inverse Tangent Function
The position of a particle at time \(t\) is given by \(s(t)=\tan^{−1}\left(\dfrac{1}{t}\right)\) for \(t≥\tfrac{1}{2}\). Find the velocity of the particle at time \( t=1\).
Solution
Begin by differentiating \(s(t)\) in order to find \(v(t)\).Thus,
\(v(t)=s′(t)=\dfrac{1}{1+\left(\dfrac{1}{t}\right)^2}⋅\dfrac{−1}{t^2}\).
Simplifying, we have
\(v(t)=−\dfrac{1}{t^2+1}\).
Thus, \(v(1)=−\dfrac{1}{2}.\)
Exercise \(\PageIndex{6}\)
Find the equation of the line tangent to the graph of \(f(x)=\sin^{−1}x\) at \(x=0.\)
Hint
\(f′(0)\) is the slope of the tangent line.
Answer
\(y=x\) |
Now showing items 1-10 of 33
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Perhaps this is trivial but I would like to plot the following function:
\begin{equation} p(x)=e^{\left( -\frac{d}{1-c}\right)\left[ W_0\left[B(1+x/r)^{1/d}\right]-W_0[B] \right]} \end{equation} where $W_0$ is the $k=0$ branch of the Lambert-W function and \begin{equation} B=\frac{(1-c)r}{1-(1-c)r}e^{\frac{(1-c)r}{1-(1-c)r}} \end{equation} in the same way as in the attached picture below. I am not really sure how to make a $(\log p(x),x)$ plot, but I guess that if I could do one, the $(-\log p(x),x)$ plot could be obtained by plotting $p^{-1}(x)$?
My (poor) attempt so far is the following:
c = 0.99999;
r = 0.0001;
B = ((1 - c) r)/(1 - (1 - c) r) Exp[((1 - c) r)/(1 - (1 - c) r)];
p = Table[
   Exp[(-d/(1 - c)) (ProductLog[0, B*(1 + x/r)^(1/d)] - ProductLog[0, B])],
   {d, 0.5, 2, 0.1}];
LogPlot[Evaluate[p^-1], {x, 0, 10}, PlotRange -> {10^-1, 10^2}]
But even for the first one I am not able to get them correct.
I would really appreciate your help. Thank you. |
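In case a cross-check outside Mathematica helps, here is a hedged Python sketch of the same $p(x)$, with a hand-rolled Newton iteration for $W_0$ so nothing beyond the standard library is assumed; the value of $d$ is picked arbitrarily for the check. Whatever the parameters, $p(0)$ should come out as exactly $1$, since the two $W_0$ terms cancel at $x=0$:

```python
import math

def lambert_w0(y, tol=1e-14):
    """Principal branch W_0(y) for y >= 0, via Newton's method on w*e^w = y."""
    w = math.log1p(y)  # reasonable starting point for y >= 0
    for _ in range(100):
        e = math.exp(w)
        w_new = w - (w * e - y) / (e * (w + 1))
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

c, r, d = 0.99999, 0.0001, 1.0  # d = 1 chosen arbitrarily for this check
t = (1 - c) * r / (1 - (1 - c) * r)
B = t * math.exp(t)

def p(x):
    return math.exp(-(d / (1 - c)) *
                    (lambert_w0(B * (1 + x / r) ** (1 / d)) - lambert_w0(B)))

assert abs(p(0.0) - 1.0) < 1e-12
assert 0.0 < p(10.0) < 1.0  # p decays for x > 0
```

Values of $p(x)$ from this sketch can then be compared against the Mathematica ProductLog version point by point.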
Mitra, J and Raychaudhuri, AK and Gayathri, N and Mukovskii, Ya M (2002)
Point-contact spectroscopy of single crystal $La_{0.75}Sr_{0.25}MnO_3$ and resistivity due to electron-phonon interaction. In: Physical Review B, 65 (14). pp. 140406-1.
Abstract
In this paper we report point-contact spectroscopy (PCS) measurements on single crystals of metallic $La_{0.75}Sr_{0.25}MnO_3$. The electron-phonon coupling function as obtained from the PCS shows large peaks for phonon frequencies $20\,\mathrm{meV} \leq \omega \leq 100\,\mathrm{meV}$. This leads to a rather large electron-phonon coupling constant $\lambda \simeq 1.2$. We have shown that a sizable fraction of the total resistivity in the temperature range $T \leq 0.4T_c$ can be of phononic origin and can be explained using the experimentally observed electron-phonon coupling function. As $T \rightarrow T_c$, extra contributions to the resistivity arising from spins dominate. We find that the magnetoresistance vanishes in the temperature range $T \leq 0.4T_c$, where the predominant contribution to $\rho$ arises from phonons. We have also performed PCS in a 6 T magnetic field. The resistivity calculated from the spectrum at 6 T does not differ appreciably from that calculated at 0 T. Our experiment has been validated by a similar experiment on a nonmagnetic perovskite oxide system, $Na_{0.9}WO_3$.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to The American Physical Society.
Department/Centre: Division of Physical & Mathematical Sciences > Physics
Depositing User: Sahana R Sahini
Date Deposited: 22 Nov 2007
Last Modified: 19 Sep 2010 04:29
URI: http://eprints.iisc.ac.in/id/eprint/7485
Is there a good characterization of the set $S$ of positive integers $n$ such that $\frac{1}{n}$ can be represented as a difference of Egyptian fractions with all denominators $< n$? For example, $44 \in S$ because $$ \dfrac{1}{44} = \left( \frac{1}{33} + \frac{1}{12}\right) - \frac{1}{11} $$
If I'm not mistaken, the first few members of $S$ are $$ 6, 12, 15, 18, 20, 21, 24, 28, 30, 33, 35, 36, 40, 42, 44, 45 $$ This does not appear to be in the OEIS yet; I intend to submit it soon. [ EDIT: It is now in OEIS as A278638.]
Here are some things I know so far:
- If $n \in S$, then $mn \in S$ for any positive integer $m$.
- $mn \in S$ for integers $m,n$ with $n < m < 2n$, because $$\dfrac{1}{mn} = \dfrac{1}{n(m-n)} - \dfrac{1}{m(m-n)}$$
- $S$ contains no prime or prime power.
- There are no members of the form $2p^k$ where $p$ is a prime $> 3$.
- There are no members of the form $3p^k$ where $p$ is a prime $> 11$.
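The second identity is easy to machine-verify with exact rational arithmetic; a quick sketch (exact fractions, no floating point; helper name is ours):

```python
from fractions import Fraction

def identity_holds(m, n):
    # 1/(mn) = 1/(n(m-n)) - 1/(m(m-n)), valid whenever m != n
    return Fraction(1, m * n) == Fraction(1, n * (m - n)) - Fraction(1, m * (m - n))

# check it across the stated range n < m < 2n
assert all(identity_holds(m, n) for n in range(2, 30) for m in range(n + 1, 2 * n))

# and the worked example for 44 from the question:
assert Fraction(1, 44) == (Fraction(1, 33) + Fraction(1, 12)) - Fraction(1, 11)
```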
Diagonal Matrices of Linear Operators Examples 2
Recall from the Diagonal Matrices of Linear Operators page that if $V$ is a finite-dimensional vector space and $T \in \mathcal L (V)$, then $T$ is said to be diagonalizable if there exists a basis $B_V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix.
We saw that if $T$ has $\mathrm{dim} V$ distinct eigenvalues then there exists a basis $B_V$ of $V$ of eigenvectors corresponding to these $\mathrm{dim} V$ eigenvalues.
We also saw a chain of equivalent statements regarding $T$ being diagonalizable.
We will now look at some more problems regarding diagonal matrices of linear operators.
Example 1

Recall that if $T \in \mathcal L (V)$ has $\mathrm{dim} (V)$ distinct eigenvalues then there exists a basis $B_V$ of $V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix. More succinctly, that basis $B_V$ is a basis of nonzero eigenvectors corresponding to the distinct eigenvalues. Prove that the converse of this theorem is not true in general.
We want to prove that if there exists a basis $B_V$ of $V$ such that $\mathcal M(T, B_V)$ is a diagonal matrix, then it need not be true that $T$ has $\mathrm{dim} (V)$ distinct eigenvalues.
Consider the vector space $\mathbb{R}^2$. Suppose that $T(x, y) = (x, y)$ and use the standard basis $\{ (1, 0), (0, 1) \}$ in $\mathbb{R}^2$. Note that $T$ is simply the identity operator. Furthermore, $T$ has only one eigenvalue, $1$, since for any vector $(x, y) \in \mathbb{R}^2$ we have that $T(x, y) = 1(x, y)$. Now:
$$T(1, 0) = (1, 0) = 1(1, 0) + 0(0, 1) \quad , \quad T(0, 1) = (0, 1) = 0(1, 0) + 1(0, 1)$$
Therefore we have that the matrix of $T$ with respect to $\{ (1, 0), (0, 1) \}$ is:
$$\mathcal M (T, \{ (1, 0), (0, 1) \}) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
Clearly $\mathcal M (T, \{ (1, 0), (0, 1)\})$ is diagonal, but $T$ does not have $\mathrm{dim} (\mathbb{R}^2) = 2$ distinct eigenvalues. In fact, the matrix $\mathcal M (T, \{ (1, 0), (0, 1)\})$ shows that $T$ has only one distinct eigenvalue.
Example 2

Let $T$ be a noninvertible linear operator on the finite-dimensional vector space $V$. Prove that if there exists a basis $B_V$ of $V$ for which $\mathcal M (T, B_V)$ is a diagonal matrix then $V = \mathrm{null} (T) \oplus \mathrm{range} (T)$.
Suppose that there exists a basis $B_V$ of $V$ for which $\mathcal M (T, B_V)$ is a diagonal matrix. Since $T$ is noninvertible, $0$ is an eigenvalue of $T$. Let $0$, $\lambda_1$, $\lambda_2$, …, $\lambda_m$ be the distinct eigenvalues of $T$ (noting that then $\lambda_1, \lambda_2, ..., \lambda_m \neq 0$). Then we have that:
$$V = \mathrm{null} (T - 0I) \oplus \mathrm{null} (T - \lambda_1 I) \oplus \mathrm{null} (T - \lambda_2 I) \oplus \cdots \oplus \mathrm{null} (T - \lambda_m I)$$
Note that $\mathrm{null} (T - 0I) = \mathrm{null} (T)$. We then only need to show that $\mathrm{range} (T) = \mathrm{null} (T - \lambda_1 I) \oplus \mathrm{null} ( T - \lambda_2 I) \oplus \cdots \oplus \mathrm{null} (T - \lambda_m I)$.
Let $v \in \mathrm{range} (T)$. Since $0, \lambda_1, \lambda_2, ..., \lambda_m$ are distinct eigenvalues of $T$ and there exists a basis $B_V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix, then this implies that there exists a basis of nonzero eigenvectors corresponding to these eigenvalues. Let $\{ v_0, v_1, v_2, ..., v_m \}$ be this basis.
Then we have that every vector $v \in V$ can be written as:
$$v = a_0 v_0 + a_1 v_1 + a_2 v_2 + \cdots + a_m v_m$$
Note that $v_0 \in \mathrm{null} (T)$, $v_1 \in \mathrm{null} (T - \lambda_1I)$, …, $v_m \in \mathrm{null} (T - \lambda_m I)$, since $v_0$ is an eigenvector corresponding to $0$, $v_1$ is an eigenvector corresponding to $\lambda_1$, …, $v_m$ is an eigenvector corresponding to $\lambda_m$. Apply the linear operator $T$ to both sides to get that:
$$T(v) = a_0 T(v_0) + a_1 T(v_1) + \cdots + a_m T(v_m) = a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + \cdots + a_m \lambda_m v_m$$
Therefore as we can see, $T(v) \in \mathrm{span} (v_1, v_2, ..., v_m) = \mathrm{null} (T - \lambda_1 I) \oplus \mathrm{null} ( T - \lambda_2 I) \oplus \cdots \oplus \mathrm{null} (T - \lambda_m I)$, and every element of $\mathrm{range} (T)$ arises in this way. Therefore $\mathrm{range} (T) \subseteq \mathrm{null} (T - \lambda_1 I) \oplus \mathrm{null} ( T - \lambda_2 I) \oplus \cdots \oplus \mathrm{null} (T - \lambda_m I)$.
Now let $v \in \mathrm{null} (T - \lambda_1 I) \oplus \mathrm{null} ( T - \lambda_2 I) \oplus \cdots \oplus \mathrm{null} (T - \lambda_m I)$. Then we have that $v$ can be written uniquely as $v = u_1 + u_2 + ... + u_m$ where $u_1 \in \mathrm{null} (T - \lambda_1 I)$, $u_2 \in \mathrm{null} (T - \lambda_2 I)$, …, $u_m \in \mathrm{null} (T - \lambda_m I)$. Therefore $T(u_1) = \lambda_1 u_1$, $T(u_2) = \lambda_2 u_2$, …, $T(u_m) = \lambda_m u_m$.
Thus we have that:
$$v = u_1 + u_2 + \cdots + u_m = T \left ( \frac{u_1}{\lambda_1} + \frac{u_2}{\lambda_2} + \cdots + \frac{u_m}{\lambda_m} \right )$$
Thus $v \in \mathrm{range} (T)$, since $v$ is the image under $T$ of a linear combination of $u_1, u_2, ..., u_m \in V$ (each $\lambda_i \neq 0$, so the division is valid), and so $\mathrm{null} (T - \lambda_1 I) \oplus \mathrm{null} ( T - \lambda_2 I) \oplus \cdots \oplus \mathrm{null} (T - \lambda_m I) \subseteq \mathrm{range} (T)$.
Thus $\mathrm{range} (T) = \mathrm{null} (T - \lambda_1 I) \oplus \mathrm{null} ( T - \lambda_2 I) \oplus \cdots \oplus \mathrm{null} (T - \lambda_m I)$.
Therefore $V = \mathrm{null} (T) \oplus \mathrm{range} (T)$. |
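As an illustration (not part of the original notes), the decomposition can be checked numerically for a concrete diagonalizable, noninvertible operator; here $T = \mathrm{diag}(0, 2, 5)$ on $\mathbb{R}^3$, where $\mathrm{null}(T) = \mathrm{span}(e_1)$ and $\mathrm{range}(T) = \mathrm{span}(e_2, e_3)$:

```python
import numpy as np

T = np.diag([0.0, 2.0, 5.0])            # diagonal, hence diagonalizable; 0 is an eigenvalue
null_basis = np.eye(3)[:, [0]]          # null(T) = span(e1)
range_basis = T @ np.eye(3)[:, [1, 2]]  # range(T) = span(T e2, T e3) = span(e2, e3)

# together they span all of R^3, and the sum is direct since dimensions add to 3
combined = np.hstack([null_basis, range_basis])
assert np.linalg.matrix_rank(combined) == 3
```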
Dynamic programming algorithms are the bread and butter for structured prediction in NLP.
They are also quite a pain to debug, especially when implemented directly in your language of choice, and not in some high-level programming language for dynamic programming algorithms, such as Dyna.
In this post, I suggest a way, or more of a “debugging trick” to make sure that your dynamic programming algorithm is implemented correctly. This trick is a by-product of working with spectral algorithms, where parameters are masked by an unknown linear transformation. Recently, Avneesh, Chris and I have successfully debugged a hypergraph search algorithm for MT with it.
The idea is simple. Take your parameters for the model, and transform them by a random linear transformation, such that the linear transformation will cancel out if you compute marginals or any other quantity using the dynamic programming algorithm. Marginals have to be positive. If following the linear transformation, you are getting any negative marginals, that’s a (one-way) clear indication that somewhere you have a bug. You are actually supposed to get the same inside/outside probabilities and marginals as you get without the linear transformation.
Here are more exact details. Consider the CKY algorithm for parsing with context-free grammars — in its real semiring version, i.e. when it is computing the inside probabilities. The deduction rules (Dyna style or otherwise) are simple to understand, and they look something like:
\(\mathrm{goal \vdash root\_weight(a) + span(a,0,N)}\)
\(\mathrm{span(a,i,j) \vdash rule\_weight(a \rightarrow b c) + span(b,i,k) + span(c,k,j)} \)
\(\mathrm{span(a,i,i+1) \vdash rule\_weight(a \rightarrow x) + word(x, i, i+1)} \)
Here, the indices in \( \mathrm{span} \) denote spaces between words. So for example, in the sentence “The blog had a new post”, if we wanted to denote an NP over “The blog”, that would be computed in \( \mathrm{span(“NP”, 0, 2)}\). N is the length of the sentence.
\( \mathrm{word(x,i,i+1)} \) are axioms that denote the words in the sentence. For each word in the sentence, such as a “blog”, you would seed the chart with the word elements in the chart (such as \(\mathrm{word(“blog”, 1, 2)}\)).
\( \mathrm{rule\_weight(a \rightarrow b c)}\) are the probabilities \(p(a \rightarrow b c | a) \) for the underlying context-free grammar.
Let \(n\) be the number of nonterminals in the grammar. In addition, denote by \(R\) an \(n \times n \times n\) array such that \(R_{abc} = p(a \rightarrow b c | a)\) (and \(0\) if the rule is not in the grammar). In addition, denote by \(\alpha(i,j)\) a vector of size \(1 \times n\) such that \(\alpha_a(i,j) = \mathrm{span(a,i,j)}\).
Then, now, note that for any \(a\), the above CKY algorithm (in declarative form) dictates that:
\(\alpha_a(i,j) = \sum_{k=i+1}^{j-1} \sum_{b,c} \alpha_b(i,k) \alpha_c(k,j) R_{abc}\)
for non-leaf spans \(i,j\).
One way to view this is as a generalization of matrix product — tensor product. \(R\) is actually a three dimensional tensor (\(n \times n \times n\)) and it can be viewed as a mapping that maps a pair of vectors, and through *contraction* returns another vector:
\(\alpha_a(i,j) = \sum_{k=i+1}^{j-1} R(2,3; \alpha(i,k), \alpha(k,j))\)
Here, \(R(x,y ; w,v)\) denotes a contraction operation on the \( x \) and \( y \) dimensions. Contraction here means that we sum out the two dimensions (\(2\) and \(3\)) while multiplying in the two vectors \(w \in \mathbb{R}^n \) and \(v \in \mathbb{R}^n \).
Now that we have this in a matrix form, we can note something interesting. Let’s say that we have some matrix \(G \in \mathbb{R}^{n \times n}\), which is invertible. Now, we define linearly transformed \(\alpha\) with overline, as (call the following “Formula 1”):
\(\overline{\alpha}_a(i,j) = \left( \sum_{k=i+1}^{j-1} R(2,3; \overline{\alpha}(i,k)\times G^{-1} , \overline{\alpha}(k,j)\times G^{-1} ) \right) \times G \).
It is easy to verify that \(\overline{\alpha}(i,j) \times G^{-1} = \alpha(i,j)\). The \(G\) acts as a “plug”: it linearly transforms each \(\alpha\) term. We transform it back with \(G^{-1}\) before feeding it to \(R\), and then transform the result of \(R\) with \(G\) again, to keep everything linearly transformed.
Now, let \(\beta\) be a vector of size \(1 \times n\) such that \(\beta_a = \mathrm{root\_weight(a)}\). If we define \(\overline{\beta} = \beta (G^{-1})^{\top}\), then, the total probability of a string using the CKY algorithm can be computed as:
\(\langle \alpha(0,N), \beta \rangle = \alpha(0,N) \beta^{\top} = \alpha(0,N) G G^{-1} \beta^{\top} = \langle \overline{\alpha}(0,N), \overline{\beta} \rangle\).
This means that if we add the multiplication by \(G\) (and its inverse) to all of our \(\alpha\) and \(\beta\) terms as above, the calculation of our total probability of string would not change, and therefore, since \(p(a \rightarrow b c | a)\) are all positive, computing the total probability with the linear transformations should also be positive, and not only that — it should be identical to the result as if we are not using \(G\)!
The next step is noting that the multiplication by \(G\) can be folded into our \(R\) tensor. If we define
\([\overline{R}]_{abc} = \sum_{a’,b’,c’} R_{a’b’c’} [G^{-1}]_{bb’} [G^{-1}]_{cc’} G_{a’a}\)
then Formula 1 can be replaced with:
\( \overline{\alpha}_a(i,j) = \left( \sum_{k=i+1}^{j-1} \overline{R}(2,3; \overline{\alpha}(i,k), \overline{\alpha}(k,j)) \right) \).
The last set of parameters we need to linearly transform by \(G\) is that of \( \mathrm{rule\_weight(a \rightarrow x)}\). It is not hard to guess how. First, denote by \( \gamma_x \in \mathbb{R}^n \) a vector such that \( [\gamma_x]_a = \mathrm{rule\_weight(a \rightarrow x)}\).
Note \(\alpha_a(i,i+1) = [\gamma_{x_i}]_a \) where \(x_i \) is the \(i\)th word in the sentence. We need to multiply gamma for each \( x \) by \(G\) on the right: \(\overline{\gamma}_x = \gamma_x G\). We do that for all \(x\) in the vocabulary, defining these linearly transformed gammas. This means now that we also make sure that for the leaf nodes it holds that \( \alpha(i,i+1) = \overline{\alpha}(i,i+1) G^{-1} \) — all linear transformations by \( G \) will cancel from top to bottom when using \( \overline{\beta} \) for root probabilities, \(\overline{R}\) for binary rule probabilities and \( \overline{\gamma}\) for preterminal rule probabilities (instead of \( \beta\), \( R \) and \( \gamma \)).
Perfect! By linearly transforming our parameters \(\mathrm{rule\_weight}\), \(\mathrm{root\_weight}\), we got parameters which are very sensitive to bugs in our dynamic programming algorithm. If we have the slightest mistake somewhere, it is very likely, if \(G\) is chosen well, that some of the total tree probabilities on a training set won’t be identical (or will even be negative) to the non-linearly transformed parameters, even though they should be.
You might think that this is a whole lot of hassle to go through for debugging. But really, linearly transforming the parameters is just about few lines of code. Here is what Avneesh used in Python:
G = np.random.random_sample((rank, rank))
G = G + G.transpose()
Ginv = np.linalg.inv(G)
for src_RHS in paramDict:
    if src_RHS == "Pi":
        paramDict[src_RHS] = np.absolute(paramDict[src_RHS]).dot(Ginv)
    else:
        for target_RHS in paramDict[src_RHS]:
            parameter = np.absolute(paramDict[src_RHS][target_RHS])
            arity = len(parameter.shape) - 1
            if arity == 0:
                paramDict[src_RHS][target_RHS] = G.dot(parameter)
            elif arity == 1:
                paramDict[src_RHS][target_RHS] = G.dot(parameter).dot(Ginv)
            elif arity == 2:
                result = np.tensordot(G, parameter, axes=[1, 0])
                result = np.tensordot(Ginv, result, axes=[1, 1]).swapaxes(0, 1)
                result = np.tensordot(Ginv, result, axes=[1, 2]).swapaxes(1, 2)
                paramDict[src_RHS][target_RHS] = result
Voila! This code is added before the dynamic programming algorithm starts to execute. Then, the results of computing the inside probabilities with this linear transformation are compared to the results without the linear transformation — they have to be identical.
Lines 1-3 create an invertible G matrix. Note that we make sure it is symmetric, just to be on the safe side, in case we need to multiply something by \(G^{\top}\) instead of \( G \). This debugging method does not have to use a symmetric \( G\).
Next, we multiply all parameters in paramDict. "Pi" holds the root probabilities, which we multiply by \(G^{-1}\). The "arity" value then tells us whether the parameter set is a vector (preterminal rules; arity = 0), a matrix (unary rules, not discussed above but with a very similar formulation; arity = 1), or a tensor (binary rules; arity = 2).
If you want more details about this idea of linearly transforming the parameters, you can find them here for latent-variable PCFGs. The linear transformations in that paper are used for spectral learning of L-PCFGs — but as I mentioned, a by-product of that is this debugging trick. When using EM with dynamic programming algorithms, the usual method for making sure everything is working correctly, is checking that the likelihood increases after each iteration. The linear-transformation debugging method is more fine-grained and targeted towards making sure the dynamic programming algorithm is correct. In addition, I witnessed cases in which the likelihood of EM continuously improved, but there was still a bug with the dynamic programming algorithm.
By the way, here I described mostly the debugging method applied to the inside algorithm. The same linear transformations can be also used with the outside algorithm, and therefore to compute any marginal — the linear transformations should cancel in that case as well. It is also not too hard to imagine how to generalize this to the case where the dynamic programming algorithm is not CKY, but some other dynamic programming algorithm. |
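To make the recipe concrete end to end, here is a self-contained toy version (the random positive "grammar", the sizes, and all names are mine, not taken from the original code): it runs the inside algorithm once with the raw parameters and once with the \(G\)-transformed parameters, and checks that the two total probabilities agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n, V = 3, 4                      # number of nonterminals, vocabulary size

# Random positive "grammar" weights (not normalized; positivity is what matters):
R = rng.random((n, n, n))        # R[a,b,c] ~ rule_weight(a -> b c)
gamma = rng.random((V, n))       # gamma[x,a] ~ rule_weight(a -> x)
beta = rng.random(n)             # beta[a] ~ root_weight(a)

def inside_total(R, gamma, beta, sent):
    """Inside algorithm: total weight of a sentence under the tensor form of CKY."""
    N = len(sent)
    chart = {(i, i + 1): gamma[x] for i, x in enumerate(sent)}
    for width in range(2, N + 1):
        for i in range(N - width + 1):
            j = i + width
            alpha = np.zeros(n)
            for k in range(i + 1, j):
                # contraction R(2,3; alpha(i,k), alpha(k,j))
                alpha += np.einsum('abc,b,c->a', R, chart[(i, k)], chart[(k, j)])
            chart[(i, j)] = alpha
    return chart[(0, N)] @ beta

sent = [0, 2, 1, 3, 2]
p = inside_total(R, gamma, beta, sent)

# Fold a random invertible G into the parameters, as described in the post:
G = rng.random((n, n))
G = G + G.T
Ginv = np.linalg.inv(G)
R_bar = np.einsum('xyz,by,cz,xa->abc', R, Ginv, Ginv, G)   # R-bar_{abc}
gamma_bar = gamma @ G                                      # gamma-bar_x = gamma_x G
beta_bar = Ginv @ beta                                     # beta-bar = beta (G^{-1})^T

p_bar = inside_total(R_bar, gamma_bar, beta_bar, sent)
assert p > 0 and np.isclose(p, p_bar)                      # all the G's cancel
```

If the inside recursion had a bug (say, an off-by-one in the split point \(k\)), the second assertion would almost surely fail, which is exactly the point of the trick.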
Let's say we have two types of balls, black and white. There are $B$ black balls and $W$ white balls, s.t. $B + W = N$ where $N$ is the total number of balls.
We want to divide these $N$ balls evenly into $k$ groups, each with integer size $n = \frac{N}{k}$ balls per group. We further require that there are no more than $l$ black balls per group. That is, if $b_i$ is the number of black balls in group $i$, then $b_i \leq l$ for $1 \leq i \leq k$.
The question is:
What is the probability that no group violates the black ball limit if the balls are distributed randomly?
The motivation for this question is rooted in the concept of sharding a distributed state machine, where you have $N$ nodes, of which $B$ are Byzantine (potentially faulty), and you want to divide the nodes into $k$ groups, none of which can have more than $l$ Byzantine nodes for the system to remain secure. Existing sharding architectures mostly assume that the distribution of nodes follows a cumulative binomial distribution (e.g. Ethereum sharding), but this is clearly wrong as sampling is done without replacement.
Using the correct base hypergeometric distribution, it is straightforward to solve for the joint pdf: \begin{equation} P(B_1 = b_1, \ldots, B_{k} = b_{k}) = \frac{1}{\binom{N}{B}} \cdot \prod_{j=1}^{k} \binom{n}{b_j} \end{equation}
where $B_i$ is the random variable whose outcomes $b_i$ are the number of black balls in shard $i$.
We can then solve for the CDF, but end up with a nasty sum:
\begin{equation} P(B_1 \leq l, \ldots, B_{k} \leq l) = \frac{1}{\binom{N}{B}} \cdot \overbrace{\sum_{b_1=0}^l \cdots \sum_{b_k=0}^l}^{\sum_{i=1}^k b_i = B} \prod_{i=1}^{k} \binom{n}{b_i} \end{equation}
This CDF evokes the generalized Vandermonde's identity, but that gives the unconstrained solution.
My current approach is as follows:
1. Count the number of ways to divide the $B$ black balls into $k$ bins with no more than $l$ black balls in any bin. This is essentially this answer that uses a combinatorial PIE, or equivalently using a constrained generating function.
2. Count the number of ways to divide the $B$ black balls into $k$ bins with no more than $n$ black balls in any bin, by the same logic.
3. Divide 1 (the total number of correct solutions) by 2 (the total number of possible solutions) to get the probability.
My thinking is that once the $B$ black balls are assigned, then the $W$ white ball placement is already determined: $w_i = n - b_i$.
But this solution does not match simulated results. For example, for $N = 30$, $B=3$, $k = 5$, and $l = 2$, the above logic says the probability of no limit being broken is 85.7%, while simulation indicates it is 97.6%.
Is there a closed form solution to this probability? Thanks! |
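For what it's worth, the "nasty" constrained sum is perfectly tractable to evaluate directly for small $B$, which gives an exact reference value to test any candidate closed form against (the helper name is mine):

```python
from math import comb, prod
from itertools import product

def prob_within_limit(N, B, k, l):
    """Exact P(B_1 <= l, ..., B_k <= l): sum the multivariate hypergeometric
    pmf over all compositions of B into k parts, each part at most l."""
    n = N // k
    total = 0
    for bs in product(range(min(l, B) + 1), repeat=k):
        if sum(bs) == B:
            total += prod(comb(n, b) for b in bs)
    return total / comb(N, B)

# the example from above: N=30, B=3, k=5, l=2
print(prob_within_limit(30, 3, 5, 2))  # ~0.9754, matching the simulated 97.6%
```

For this example the sum works out to $3960/\binom{30}{3} = 3960/4060$, which agrees with the simulation and not with the composition-counting approach.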
I have been asked this question by school kids, colleagues and family (usually less formally):
When ascending a flight of stairs, you perform mechanical work to gain potential energy ($W_{ascend} = E_{pot} = m \cdot g \cdot h$).
However, when descending, you have to exert an equivalent force to stop yourself from accelerating and hitting the ground (with $v_{splat} = \sqrt{2 \cdot g \cdot h}$ ). If you arrive downstairs with $v_{vertical} << v_{splat}$, you counteracted basically all of your potential energy, i.e. $\int F(h) \cdot dh = W_{descend} \approx E_{pot} = m \cdot g \cdot h$.
So is the fact that ascending stairs is commonly perceived as significantly more exhausting than descending the same stairs purely a biomechanical thing, e.g. having joints instead of muscles absorb/counteract kinetic energy? Or is there a physical component I am missing?
edit1:
I feel I need to clarify some points in reaction to the first answers.
A)
The only reason I introduced velocity into the question was to show that you actually have to expend energy going downstairs to prevent ending up as a wet spot on the floor at the bottom of the steps.
The speed with which you ascend or descend doesn't make a difference when talking about the energy, which is why I formulated the question primarily using energy and mechanical work. Imagine that while ascending you pause for a tiny moment after each step ($v = 0$). Regardless of whether you ascended very slowly or very quickly, you would have invested the same amount of work and gained the same amount of potential energy ($\delta W = m \cdot g \cdot \delta h_{step} = \delta E_{pot}$).
The same holds true while descending. After each step, you would have gained kinetic energy equivalent to $E_{kin} = m \cdot g \cdot \delta h_{step}$, but again, imagine you take a tiny pause after each step. For each step, you will have to exert a force with your legs such that you come to a complete stop (at least in y direction). However fast or slow you do it, you mathematically will end up expending $W_{step} = \int F(h) \cdot dh = m \cdot g \cdot \delta h_{step}$.
If you expended any less "brake" work, some of your kinetic energy in y direction would remain for each step, and adding that up over a number of steps would result in an arbitrarily high terminal velocity at the bottom of the stairs. Since we usually survive descending stairs, my argument is that you will have to expend approximately the same amount of energy going down as going up, in order to reach the bottom of arbitrarily long flights of stairs safely (i.e. with $v_y \approx 0$).
B) I am fairly sure that friction does not play a significant role in this thought experiment. Air friction as well as friction between your shoes and the stairs should be pretty much the same while ascending and descending. In both cases, it would be basically the same amount of additional energy expenditure, still yielding identical total energy amounts for ascending and descending. Anna v is of course right in pointing out that you need the friction between your shoes and the stairs to be able to exert any force at all without slipping (such as on ice), but in the case of static friction without slippage, no significant amount of energy should be dissipated, since said friction exerts force mainly in x direction, but the deceleration of your body has a mostly y component, since the x component is roughly constant while moving on the stair (~orthogonal directions of frictional force and movement, so no energy lost to friction work).

edit2: Reactions to some more comments and replies, added some emphasis to provide structure to the wall of text
C) No, I am not arguing that descending is subjectively less exhausting, I am asking why it is less exhausting when the mechanics seem to indicate it shouldn't be.
D)
There is no "free" or "automatic" normal force emanating from the stairs that stops you from accelerating.
The normal force provided by the mechanical stability of the stairs stops the stairs from giving in when you step on them, alright, but you have to provide an equal and opposite force (i.e. from your legs) to decelerate your center of gravity, otherwise you will feel the constraining force of the steps in a very inconvenient manner. Try not using your leg muscles when descending stairs if you are not convinced (please use short stairs for your own safety).
E) Also, as several people pointed out,
we as humans have no way of using or reconverting our stored potential energy to decelerate ourselves. We do not have a built-in dynamo or similar device that allows us to do anything with it - while descending the stairs we actually have to "get rid of it" in order to not accelerate uncontrollably. I am well aware that energy is never truly lost, but also the "energy diversion instead of expenditure" process some commenters suggested is flawed (most answers use some variation of the argument I'm discussing in C, or "you just need to relax/let go to go downhill", which is true, but you still have to decelerate, which leads to my original argument that decelerating mathematically costs exactly as much energy as ascending).
F) Some of the better points so far were first brought up by dmckee and Yakk:
- Your muscles have to continually expend chemical energy to sustain a force, even if the force is not acting in the sense of $W = F \cdot s$. Holding up a heavy object is one example of that. This point merits more discussion, I will post about that later today.
- You might use different muscle groups in your legs while ascending and descending, making ascending more exhausting for the body (while not really being harder energetically). This is right up the alley of what I meant by biomechanical effects in my original post.

edit 3: In order to address E as well as F1, let's try and convert the process to explicit kinematics and equations of motion. I will try to argue that the force you need to exert is the same during ascent and descent both over y direction (amount of work) and over time (since your muscles expend energy per time to be able to exert a force).
When ascending (or descending stairs), you bounce a little to not trip over the stairs. Your center of gravity moves along the x axis of the image with two components: your roughly linear ascent/descent (depends on steepness of stairs, here 1 for simplicity) and a component that models the bounce in your step (also, alternating of legs). The image assumes $$h(x) = x + A \cdot \cos(2 \pi \cdot x) + c$$ Here, $c$ is the height of your CoG over the stairs (depends on body height and weight distribution, but is ultimately without consequence) and $A$ is the amplitude of the bounce in your step.
By differentiation, we obtain velocity and acceleration in y direction $$ v(x) = 1 - 2 \pi \cdot A \sin(2 \pi \cdot x)\\ a(x) = -(2 \pi)^2 \cdot A \cos(2 \pi \cdot x) $$ The total force your legs have to exert has two parts: counteracting gravity, and making you move according to $a(x)$, so $$F(x) = m \cdot g + m \cdot a(x)$$ The next image shows $F(x)$ for $A = 0.25$ and $m = 80\ \mathrm{kg}$. I interpret the image as showing the following:
In order to gain height, you forcefully push with your lower leg, a) counteracting gravity and b) gaining momentum in y direction. This corresponds to the maxima in the force plotted roughly in the center of each step. Your momentum carries you to the next step. Gravity slows your ascent, such that on arriving on the next step your velocity in y direction is roughly zero (not plotted $v(x)$). During this period of time right after completely straightening the pushing lower leg, your leg exerts less force (remaining force depending on the bounciness of your stride, $A$) and you land with your upper foot, getting ready for the next step. This corresponds to the minima in F(x).
The exact shape of h(x) and hence F(x) can be debated, but they should look qualitatively similar to what I outlined. My main points are:
Walking down the stairs, you read the images right-to-left instead of left-to-right. Your h(x) will be the same and hence F(x) will be the same, so $W_{desc} = \int F(x) \cdot dx = W_{asc}$: the spent amounts of energy should be equal.
In this case, the minima in F(x) correspond to letting yourself fall to the next step (as many answers pointed out), but crucially, the maxima correspond to exerting a large force on landing with your lower leg in order to a) hold your weight up against gravity and b) decelerate your fall to near zero vertical velocity.
If you move with roughly constant x velocity, $F(x)$ is proportional to $F(t)$. This is important for the argument that your muscles consume energy based on the time they are required to exert a force, $W_{muscle} \approx \int F(t) \cdot dt$. Reading the image right-to-left, F(t) is read right-to-left, but keeps its shape. Since the time required for each segment of the ascent is equal to that of the equivalent "falling" descent portion (time symmetry of classical mechanics), the integral $W_{muscle}$ remains constant as well.
This result carries over to non-linear muscle energy consumption functions that depend on higher orders of F(t), to model strength limits, muscle exhaustion over time, and so on.
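The equality of ascent and descent work can be checked numerically. A minimal Python sketch (assuming the $h(x)$ model from this post, with unit step height, $A = 0.25$ and $m = 80\,\mathrm{kg}$) integrates $F$ over five steps in both reading directions:

```python
import math

# F(x) = m*g + m*a(x) with a(x) = -(2*pi)^2 * A * cos(2*pi*x), per the post.
def force(x, m=80.0, g=9.81, A=0.25):
    return m * g - m * (2 * math.pi) ** 2 * A * math.cos(2 * math.pi * x)

def work(n_steps=5, N=10000, descent=False):
    # Trapezoidal integral of F over x in [0, n_steps]; descent reads the
    # same force profile right-to-left, i.e. F(n_steps - x).
    dx = n_steps / N
    total = 0.0
    for i in range(N + 1):
        x = i * dx
        f = force(n_steps - x) if descent else force(x)
        total += (0.5 if i in (0, N) else 1.0) * f * dx
    return total

W_up = work(descent=False)
W_down = work(descent=True)
# Both equal m*g*h: the bounce term integrates to zero over whole steps.
```

Both integrals come out equal to $mgh$, confirming that the oscillating bounce term contributes nothing over whole steps.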
Arun from the National Public School in Bangalore sent us this detailed solution:
It is clearly seen that the angle at $C$ is twice the angle at $D$. Here we have the fundamental theorem with regard to angles in circles -
" the angle subtended by an arc of a circle at the centre is doublethe angle subtended by it at any point on the circle "
This can be proved as follows :
Given a circle $(C,r)$ in which arc $AB$ subtends angle ($ACB$) at the centre and angle ($ADB$) at any point $D$ on the circle.
To Prove:
(i) angle $(ACB)$ = $2\times$angle $(ADB)$, when arc $AB$ is a minor arc or a semicircle.
(ii) Reflex angle $(ACB) = 2\times$ angle $(ADB)$, when arc $AB$ is a major arc
Construction: Join $AB$ and $DC$. Produce $DC$ to a point $X$ outside the circle.
Clearly, there are 3 cases.
Case I : arc $AB$ is a minor arc [Fig (i)]
Case II : arc $AB$ is a semicircle [Fig (ii)]
Case III : arc $AB$ is a major arc [Fig (iii)]
We know that when one side of a triangle is produced then the exterior angle so formed is equal to the sum of the interior opposite angles.
Therefore,
angle $(ACX)$ = angle $(CAD)$ + angle $(CDA)$ [consider triangle $CAD$]
angle $(BCX)$ = angle $(CBD)$ + angle $(CDB)$ [consider triangle $CBD$]
But,
angle $(CAD)$ = angle $(CDA)$ [since $CD = CA = r$ ] and ,
angle $(CBD)$ = angle $(CDB)$ [since $CD = CB = r$ ]
Therefore,
angle $(ACX) = 2\times$ angle $(CDA)$ and
angle $(BCX) = 2\times$angle $(CDB)$
In Fig (i) and Fig (ii) ,
angle $(ACX)$ + angle $(BCX)$ = $2 \times$ angle $(CDA)$ +$2\times$ angle $(CDB) \Rightarrow$
angle $(ACB)$ = $2$[angle $(CDA)$ + angle $(CDB)$]$\Rightarrow$
angle $(ACB)$ = $2 \times$angle $(ADB)$
in Fig (iii),
angle $(ACX)$ + angle $(BCX)$ = $2\times$ angle $(CDA)$ + $2\times$angle $(CDB) \Rightarrow$
reflex angle $(ACB)$ = $2$[angle $(CDA)$ + angle $(CDB)$]$\Rightarrow$
reflex angle $(ACB)$ = $2\times $ angle $(ADB)$
Hence proved.
This theorem has 2 main implications:
(i) The angle in a semicircle is a right angle .
i.e. in fig (2), where arc $AB$ is a semicircle, or in other words $ACB$ is a diameter,
angle $(ADB) = \frac{1}{2} \times$ angle $(ACB) =\frac{1}{2}\times$($180$ degrees) = $90$ degrees = a right angle
This holds true for any position of point $C$ on the semicircle.
(ii) Angles in the same segment of a circle are equal.
This means that if an arc subtends $2$ angles, at $2$ different points on the circle, these angles will be equal.
We can prove this, by proving that each of the $2$ angles is equalto half the angle subtended at the centre. |
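The theorem also lends itself to a quick numerical check. A small Python sketch (my setup: unit circle centred at the origin playing the role of $C$, points represented as complex numbers, with arbitrary positions for $A$, $B$ and $D$):

```python
import cmath

# A, B, D lie on the unit circle centred at C = 0; the central angle ACB
# should be double the inscribed angle ADB for any D on the major arc.
def angle_at(vertex, p, q):
    # Unsigned angle between the rays vertex->p and vertex->q.
    return abs(cmath.phase((p - vertex) / (q - vertex)))

A = cmath.exp(0.4j)
B = cmath.exp(1.5j)
central = angle_at(0, A, B)            # angle ACB at the centre

for t in (2.5, 3.0, 4.0, 5.5):         # several positions of D on the major arc
    D = cmath.exp(1j * t)
    assert abs(central - 2 * angle_at(D, A, B)) < 1e-12
```

That the inscribed angle is the same for every position of $D$ is exactly implication (ii): angles in the same segment are equal.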
Inaccessible
Inaccessible cardinals are the traditional entry-point to the large cardinal hierarchy (although there are some weaker large cardinal notions, such as universe cardinals).
If $\kappa$ is inaccessible, then $V_\kappa$ is a model of ZFC, but this is not an equivalence, since the weaker universe cardinals also have this feature, and these are not all regular when they exist. Every inaccessible cardinal $\kappa$ is a beth fixed point, and consequently $V_\kappa=H_\kappa$. (Zermelo) The models of second-order ZFC are precisely the models $\langle V_\kappa,\in\rangle$ for an inaccessible cardinal $\kappa$. The uncountable Grothendieck universes are precisely the sets of the form $V_\kappa$ for an inaccessible cardinal $\kappa$.
Weakly inaccessible
A cardinal $\kappa$ is weakly inaccessible if it is an uncountable regular limit cardinal. Under the GCH, this is equivalent to inaccessibility, since under GCH every limit cardinal is a strong limit cardinal. So the difference between weak and strong inaccessibility only arises when GCH fails badly. Every inaccessible cardinal is weakly inaccessible, but forcing arguments show that any inaccessible cardinal can become a non-inaccessible weakly inaccessible cardinal in a forcing extension, such as after adding an enormous number of Cohen reals (this forcing is c.c.c. and hence preserves all cardinals and cofinalities, and hence also all regular limit cardinals). Meanwhile, every weakly inaccessible cardinal is fully inaccessible in any inner model of GCH, since it remains a regular limit cardinal in that model and hence is also a strong limit there. In particular, every weakly inaccessible cardinal is inaccessible in the constructible universe $L$. Consequently, although the two large cardinal notions are not provably equivalent, they are equiconsistent.
Grothendieck universe
The concept of Grothendieck universes arose in category theory out of the desire to create a hierarchy of notions of smallness, so that one may form such categories as the category of all small groups, or small rings or small categories, without running into the difficulties of Russell's paradox.
A Grothendieck universe is a transitive set $W$ that is closed under pairing, power set and unions. That is,
(transitivity) If $b\in a\in W$, then $b\in W$.
(pairing) If $a,b\in W$, then $\{a,b\}\in W$.
(power set) If $a\in W$, then $P(a)\in W$.
(union) If $a\in W$, then $\cup a\in W$.
Universe axiom
The Grothendieck universe axiom is the assertion that every set is an element of a Grothendieck universe. This is equivalent to the assertion that the inaccessible cardinals form a proper class.
I am trying to estimate residential demand for electricity in a country where electricity is sold (to all households (HH)) at an increasing two-part tariff. By choosing marginal prices as my key independent variable (I am mostly interested in estimating the price elasticity of demand), I have come to understand that OLS regression cannot be pursued because the price variable would be endogenous. Therefore, the literature suggests I use either Maximum Likelihood estimation (MLE) or Generalized Method of Moments (GMM) estimation. However, I am struggling with finding my objective function.
In their study, Reiss and White (2005) pursue a moments-based approach and go on to define the reduced form for the HH's consumption level of electricity $x^*$ (as a function of an increasing two-tier price schedule), as such:
$$ x^*= \begin{cases} x(p_1,y,z,\varepsilon;\beta) \qquad & \text{if } \varepsilon \le c_1 \\ \bar{x} & \text{if } c_1< \varepsilon < c_2\\ x(p_2,y_2,z,\varepsilon;\beta) \qquad & \text{if } \varepsilon>c_2, \end{cases} \label{eq:opt.con.lvl.ecx} \tag{1} $$
where:
$p_1,p_2$ are the marginal prices on the two price schedule tiers (demand is assumed strictly decreasing in $p$);
$\bar{x}$ is the electricity consumption threshold after which the price switches to the higher tier;
$z$ are observable consumer characteristics;
$\varepsilon$ are unobserved consumer characteristics (demand is assumed strictly increasing in $\varepsilon$);
$y_2=y+\bar{x} \cdot (p_2-p_1)$ is the virtual income when consumers lie on the second tier (hence, $y$ is income); and
$c_j$ (where $j=1,2$) is the solution to $x(p_j,y_j,z,c_j;\beta)=\bar{x}$ with $y_1=y$. In other words, $c_j$ is the maximum (for $j=1$) or minimum (for $j=2$) value of $\varepsilon$ for which consumption occurs on tier $j$.
Conditional on the observables, the authors
integrate \eqref{eq:opt.con.lvl.ecx} piecewise to obtain:
$$ E(x^*|\cdot) = E_{\varepsilon}[x(p_2,y_2,z,\varepsilon;\beta)] + h(p_1,p_2,\bar{x},y,z;\beta) \label{eq:pw.int} \tag{2} $$
where $h(\cdot)\equiv \tau_2 - \tau_1$ is a 'sorting correction function defined by the truncated moments' (I used quotation marks as I do not grasp what this means):
$$ \tau_j=\int_{-\infty}^{c_j(\beta)} [\bar{x}-x(p_j,y_j,z,\varepsilon;\beta)]dF_{\varepsilon}, \label{eq:srt.corr.fcn} \tag{3} $$
with $c_1,c_2$ defined in \eqref{eq:opt.con.lvl.ecx} and $y_1=y$. The authors go on to evaluate 'the moments' in \eqref{eq:srt.corr.fcn} with the error specification in $F_{\varepsilon}$ assumed $N(0,\sigma^2)$ and $\varepsilon$ entering demand \eqref{eq:opt.con.lvl.ecx} additively, to find:
$$ E(x^*|\cdot) = [x(p_1,y,z;\beta)-\sigma\lambda_1]\Phi_1 + \bar{x}\cdot(\Phi_2 - \Phi_1) \\ + [x(p_2,y_2,z;\beta)+\sigma\lambda_2](1-\Phi_2) \label{eq:estim} \tag{4} $$
where, $\Phi_j$ is the standard normal distribution evaluated at $c_j(\beta)/\sigma$, $\phi_j$ is the normal density at $c_j(\beta)/\sigma$, $\lambda_1=\phi_1/\Phi_1$, and $\lambda_2=\phi_2/(1-\Phi_2)$.
The authors end up using \eqref{eq:estim} because it fits their data well.
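For concreteness, here is a minimal Python sketch of evaluating an expression of the form \eqref{eq:estim}. The linear demand $x(p,y)=\beta_0+\beta_p p+\beta_y y$ and every parameter value below are my own toy assumptions (Reiss and White use a richer specification); only the structure of (4) is taken from the text above:

```python
import math

def Phi(t):   # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):   # standard normal density
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def expected_consumption(p1, p2, xbar, y, beta, sigma):
    """E(x*|.) as in (4), for a toy additive demand x(p, y) + eps."""
    b0, bp, by = beta                     # toy linear demand (my assumption)
    x = lambda p, inc: b0 + bp * p + by * inc
    y2 = y + xbar * (p2 - p1)             # virtual income on the second tier
    c1 = xbar - x(p1, y)                  # eps thresholds solving x + eps = xbar
    c2 = xbar - x(p2, y2)
    P1, P2 = Phi(c1 / sigma), Phi(c2 / sigma)
    lam1 = phi(c1 / sigma) / P1
    lam2 = phi(c2 / sigma) / (1.0 - P2)
    return ((x(p1, y) - sigma * lam1) * P1               # tier-1 households
            + xbar * (P2 - P1)                           # bunched at the kink
            + (x(p2, y2) + sigma * lam2) * (1.0 - P2))   # tier-2 households

val = expected_consumption(1.0, 2.0, 10.0, 50.0, (5.0, -2.0, 0.1), 1.0)
```

A Monte Carlo draw of $\varepsilon\sim N(0,\sigma^2)$ pushed through (1) reproduces the same expectation, which is a handy sanity check when adapting the formula to a two-branch case like (5).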
Question(s):
1. What math and reasoning did the authors use to move from \eqref{eq:opt.con.lvl.ecx} to \eqref{eq:pw.int}, \eqref{eq:srt.corr.fcn} and \eqref{eq:estim}?
2. Is the authors' approach strictly for GMM estimation (as they chose to pursue a moments-based approach), or could MLE be carried out as well?
3. Can anyone show me what \eqref{eq:pw.int}, \eqref{eq:srt.corr.fcn} and \eqref{eq:estim} would look like, given that my case has \eqref{eq:my.opt.con.lvl.ecx} (below) as the reduced form equation for HH consumption?
4. Alternatively to Q.3, could anyone provide me with textbook/audio/video material which can help me formulate my own objective function and assumptions to find the best fit for my data, for either GMM or MLE?
$$ x^*= \begin{cases} x(p_1,y,z,\varepsilon;\beta) \qquad & \text{if } \varepsilon \leq c_1 \\ x(p_2,y_2,z,\varepsilon;\beta) \qquad & \text{if } \varepsilon>c_2, \end{cases} \label{eq:my.opt.con.lvl.ecx} \tag{5} $$
I am already grateful to those who made it thus far in reading my long question. Any help is deeply appreciated. Thanks!
Gabriele
Reiss, P. C., & White, M. W. (2005). Household electricity demand, revisited.
The Review of Economic Studies, 72(3), 853-883. |
All Issues Volume 45, № 6, 1993
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 731–743
This is a brief survey of the development and applications of the concept of the characteristic function for different classes of linear operators.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 744–752
The maximal commuting proper extensions of a closed Hermitian operator and a dual pair of continuous operators in a Hilbert space are described; the criteria of their existence are established.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 753–762
A method for the study of differential equations with pulse influence on the surfaces, which was realized in [1] for a bounded domain in the phase space, is now extended to the entire space $R^n$. We prove theorems on the existence of integral surfaces in the critical case and justify the reduction principle for these equations.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 763–769
A Jordan curve $L$ is studied, which satisfies an interior or exterior wedge condition for wedges of various geometric forms. We obtain estimates of the continuity moduli for the conformal mappings of the exterior (interior) of $L$ onto the exterior (interior) of the unit disk.
Convergence of the series of large-deviation probabilities for sums of independent equally distributed random variables
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 770–784
The series $\sum\nolimits_{n \geqslant 1} {\tau _n P(|S_n | \geqslant \varepsilon n^\alpha )}$ is studied, where $S_n$ are the sums of independent equally distributed random variables, $\tau_n$ is a sequence of nonnegative numbers, $\alpha > 0$, and $\varepsilon > 0$ is an arbitrary positive number. For a broad class of sequences $\tau_n$, the necessary and sufficient conditions are established for the convergence of this series for any $\varepsilon > 0$.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 785–790
Uniform estimates are obtained for the monotone approximation of the functions from the generalized Babenko classes.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 791–802
Expressions for partial scattering matrices $S_l(\lambda)$ are obtained for all natural $l$ by using Adamyan's result, which establishes a universal relationship between the scattering matrix for the wave equation with finite potential in an even-dimensional space and the characteristic operator function of a special contraction operator, which describes the dissipation of energy from the region of the space containing a scatterer. It is shown that this problem can be reduced to the case of $l = 0$ for all even $l$ and to the case of $l = 1$ for all odd $l$.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 803–808
Conditions are established under which the unital divisor extracted from a matrix polynomial over an arbitrary field is determined uniquely by its characteristic polynomial. The result obtained is applied to the problem of solving matrix polynomial equations.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 809–833
The concept of a generalized ψ-derivative of a function of a complex variable is introduced and applied to classify functions analytic in Jordan domains. The approximations of functions from the classes introduced by this procedure are studied by using algebraic polynomials constructed on the basis of the Faber polynomials after the summation of Faber series. Analogs of the author's results are obtained for the classes $L_{\beta}^{\psi} \mathfrak{R}$.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 834–842
We study linear stochastic differential equations with deviating argument of neutral type and establish sufficient conditions of stability. The functions determining the initial perturbations of solutions are found.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 843–853
The class $S_{Ψ}^{ *} (A)$ of the entire Dirichlet series $F(s) = \sum\nolimits_{n = 0}^\infty {a_n exp(s\lambda _n )}$ is studied, which is defined for a fixed sequence $A = (a_n ),\; 0 < a_n \downarrow 0,\sum\nolimits_{n = 0}^\infty {a_n< + \infty } ,$ by the conditions $0 ≤ λ_n ↗ +∞$ and $λ_n ≤ ψ(\ln^+(1/a_n))$ imposed on the parameters $λ_n$, where $ψ$ is a positive continuous function on $(0, +∞)$ such that $ψ(x) ↑ +∞$ and $x/ψ(x) ↑ +∞$ as $x →+ ∞$. In this class, the necessary and sufficient conditions are given for the relation $ϕ(\ln M(σ, F)) ∼ ϕ(\ln μ(σ, F))$ to hold as $σ → +∞$, where $M(\sigma ,F) = sup\{ |F(\sigma + it)|:t \in \mathbb{R}\} ,\mu (\sigma ,F) = max\{ a_n exp(\sigma \lambda _n ):n \in \mathbb{Z}_ + \}$, and $ϕ$ is a positive continuous function increasing to $+∞$ on $(0, +∞)$, for which $\ln ϕ(x)$ is a concave function and $ϕ(\ln x)$ is a slowly increasing function.
Calculation of the indicator of an entire function of rational order in terms of its Taylor coefficients
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 854–858
By using both the Pólya theorem on the connection between the growth of an entire exponential function and the location of singularities of its Borel transform and the analog of this result for finite-order entire functions (due to Mclntyre), we obtain estimates for the indicator of the growth of an entire function in terms of its Taylor coefficients and, in some cases, determine this indicator exactly.
On the asymptotic estimates of the best approximations of differentiable functions by algebraic polynomials in the space $L_1$
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 859–862
An asymptotically exact estimate is obtained for the best approximations of $r$ times differentiable functions by algebraic polynomials in the space $L_1$.
On a transformation of the Wiener process in $ℝ^m$ by a functional of the local time type on a surface
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 863–866
A transformation of the Wiener process $ξ_t$ in $ℝ^m$ is considered. This transformation is realized by a multiplicative functional $α_t = u(ξ_t)/u(ξ_0)$, where the function $u$ is constructed in a certain way by using a functional of the local time type on a surface. It is proved that this transformation is equivalent to the successive application of an absolutely continuous change of measure and killing on the surface.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 867–870
The asymptotic behavior of the ratio $E_n (f)_p/ω(f, α/ n)_p$ is studied for individual periodic functions $f \in L_p$.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 871–875
The action of an empirical correlation operator on the subspaces of vector Hermite polynomials of a given order is studied. The principal part of this operator is selected.
Ukr. Mat. Zh. - 1993. - 45, № 6. - pp. 876–878
Necessary and sufficient conditions are presented for the existence of dynamical systems on manifolds, for which the set of nonwandering points consists of a disconnected union of 2-dimensional tori with a hyperbolic structure. |
In the beginning of chapter 3 on scattering theory in Weinberg's QFT book there is a use of the Cauchy residue theorem that I just cannot get.
First some notation, we are looking at states that are effectively non-interacting, and are considered to be a direct product of one-particle states described by their momenta $p_i$ and a bunch of (possibly discrete) indexes $n_i$. To simplify notation the sum over all indexes and integral over all momenta is written:
$$\int ~\mathrm d\alpha\, \ldots \ \equiv \sum_{n_1\sigma_1n_2\sigma_2\cdots} \int ~\mathrm d^3p_1~ \mathrm d^3p_2\, \ldots\tag{3.1.4} $$
The energy of such a state $\alpha$ is denoted $E_\alpha$ and is the sum of 1 particle energies corresponding to the momenta: $$E_\alpha = p_1^0 +p_2^0+\ldots\tag{3.1.7}$$ Now in the book we are looking at some integrals that look like this:
$$\int ~\mathrm d\alpha~ \frac{e^{-i E_\alpha t} g(\alpha) T_{\beta \alpha}}{E_\alpha - E_\beta \pm i \epsilon}\tag{3.1.21b} $$
$g(\alpha)$ is a smooth function that is non-zero on a finite range $\Delta E$ of energies. $T_{\beta \alpha}$ can probably also be assumed to be smooth.
Now the author extends the integral to a semi-circle in the upper half plane of the energies, uses the Cauchy residue theorem, and takes $t \to - \infty$ to get the result 0. My problems are:
1. The integral is not actually over the energies, but over the momenta. The energy is a function of the momenta, so I'm sure we can do some kind of substitution to get an integral over energy, but this integral will not be over $\mathbb{R}$, since the energy of each particle is positive. So we cannot close a semi-circle.
2. To use the Cauchy theorem, $g(\alpha)T_{\beta \alpha}$ must be analytic after doing all integrals except for the energy integral, but if at the same time $g(\alpha)$ is supposed to be zero outside of some finite range of energies, this is not possible.
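The asymmetry between $t\to-\infty$ and $t\to+\infty$ can at least be seen numerically in a one-energy toy model (entirely my construction, not Weinberg's setup: a Gaussian packet $g(E)$, a single pole at $E_\beta - i\epsilon$ below the real axis, and integration over a finite positive energy range):

```python
import cmath

E_beta, eps = 1.5, 0.05
g = lambda E: cmath.exp(-50.0 * (E - E_beta) ** 2)   # smooth energy packet

def I(t, N=12000):
    # Trapezoidal rule for  integral_0^3 dE g(E) e^{-iEt} / (E - E_beta + i*eps);
    # the pole sits at E_beta - i*eps, i.e. in the lower half-plane.
    a, b = 0.0, 3.0
    h = (b - a) / N
    total = 0j
    for k in range(N + 1):
        E = a + k * h
        w = 0.5 if k in (0, N) else 1.0
        total += w * g(E) * cmath.exp(-1j * E * t) / (E - E_beta + 1j * eps)
    return h * total

# For t -> -infinity the contour closes in the upper half-plane, where there
# is no pole, so the integral dies off; for t -> +infinity the pole residue
# survives, with magnitude of order 2*pi*e^{-eps*t}.
```

Running it gives $|I(-50)| < 10^{-3}$ versus $|I(+50)|\approx 0.6$. Note this works only because the toy $g$ is effectively analytic; for a compactly supported, non-analytic $g$ the decay as $t\to-\infty$ is slower than exponential, which is precisely your second objection.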
To measure the escape velocity, if I use the equation -
$$-\frac{GMm}{r^2}=\frac{mv\,dv}{dr} $$
and I put my final distance to be $\infty$ , then I get the answer,
$$u = \sqrt\frac{2GM}{R} \;. \tag{1}$$ Which is quite obvious!
But, if I use the equations -
$$U_{\infty} - U_i = \frac{GMm}{R}$$
and
$$K_{\infty}+U_{\infty} = K_R+U_R$$
Where, $K_R$ and $U_R$ are kinetic energies and potential energies at R respectively , where R is the radius of the earth.
Now in these two equations, if I put $U_{\infty} = 0$, then I get,
$$K_{\infty} = K_R - \frac{GMm}{R}\tag{3} $$
And since $K_{\infty}$ is positive and not zero, we end up getting that
$$u > \sqrt\frac{2GM}{R} \tag{2} $$
That is to say: if I want to take the object to infinity, the speed must be that in (1), but if I want to make its final potential energy 0, then its speed must be greater than (1). Also, (1) cannot be used in (3), because that would mean the final $KE$ is 0, which it obviously is not!
What is generating this paradox? Please tell me where I am wrong.
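A short numeric sketch of the energy bookkeeping (Earth values and a 1 kg test mass are my assumptions) shows what happens at launch speed exactly $u=\sqrt{2GM/R}$: the kinetic energy at radius $r$ equals $GMm/r$, which tends to zero, i.e. $K_\infty=0$ rather than "positive and not zero".

```python
import math

# Launch radially at u = sqrt(2GM/R); by energy conservation,
# (1/2)mv^2 - GMm/r = (1/2)mu^2 - GMm/R = 0, hence K(r) = GMm/r -> 0.
G, M, m = 6.674e-11, 5.972e24, 1.0
R = 6.371e6
u = math.sqrt(2 * G * M / R)

def kinetic_at(r):
    # max() guards against tiny negative rounding near v = 0.
    v_sq = max(0.0, u * u - 2 * G * M / R + 2 * G * M / r)
    return 0.5 * m * v_sq

K_R = 0.5 * m * u * u          # equals GMm/R at the escape speed
K_far = kinetic_at(1e6 * R)    # ~ GMm/(10^6 R): already nearly zero
```

Evaluating `kinetic_at` at ever larger radii shows the kinetic energy draining exactly into the potential term, with nothing left over at infinity.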
The Feynman propagator for the free electron field is the Fourier transform w.r.t. $y$ of the time-ordered 2-point VEV $\left<0\right|\mathcal{T}[\hat\psi(x)\hat\psi(x+y)]\left|0\right>$, taking $\hbar=c=1$,$$\mathcal{F}[\left<0\right|\mathcal{T}[\hat\psi(x)\hat\psi(x+y)]\left|0\right>](k)=
\frac{k\!\cdot\!\gamma+m_e}{k^2-m_e^2+\mathrm{i}\epsilon}.$$In QED, after renormalization, we obtain, valid in the infrared, when $k^2-m_e^2$ is small (given in this neat form, representing the sum of
many perturbative integrals, in, for example, Appelquist & Carazzone, Phys. Rev. D 11, 2856 (1975)),$$\mathcal{F}[\left<0\right|\mathcal{T}[\hat\psi(x)\hat\psi(x+y)]\left|0\right>](k)=
\frac{k\!\cdot\!\gamma+m_e}{(k^2-m_e^2+\mathrm{i}\epsilon)^{1-{\alpha_{EM}}/{\pi}}},$$where $\alpha_{EM}\approx 1/137$.This is often called an infraparticle propagator or a dressed particle propagator (I'm rehearsing this stuff, please bear with me, and feel free to comment on anything that seems to need some change).When we move to renormalization group methods, the coupling constant becomes a function of the renormalization scale. In this case, if we measure the Fourier transform of the 2-point VEV, using the operator $\mathcal{F}[\mathcal{T}[\hat\psi(x)\hat\psi(x+y)]](k)$, the measurement scale is determined by the wave-number $k$, so I take it we can write the 2-point VEV as$$\mathcal{F}[\left<0\right|\mathcal{T}[\hat\psi(x)\hat\psi(x+y)]\left|0\right>](k)
{{?\atop =}\atop\ }
\frac{k\!\cdot\!\gamma+m_e}{(k^2-m_e^2+\mathrm{i}\epsilon)^{1-{\alpha_R}(k)/{\pi}}},$$where the running coupling constant $\alpha_R(k)$ is $\alpha_R(m_e)\approx 1/137$ when $k=m_e$, at about 0.5 MeV and $\alpha_R(m_Z)\approx 1/127$ when $k=m_Z$, the mass of the $Z$ particle, at about 90 GeV (see here).
My understanding is that in QED the function $\alpha_R(k)$ is an increasing function of $k$, even to the extent that there is a Landau pole at finite inverse length $m_L$, $\alpha_R(m_L)=\infty$, but that the high-energy behavior of QED has only been calculated perturbatively, so that an analytic form for $\alpha_R(k)$ is not known. I would like, however, to look at a good article, perhaps a review, that gives as closed a form for $\alpha_R(k)$ as is currently known, in as neat a form as possible (this is an
implicit question that may be too much to ask of the literature, given that almost everyone has moved on to supersymmetry, noncommutative geometry, string theory, etc., etc., and QED is obviously not empirically useful at high energy).
I would also like better to understand the relationship between the running coupling constant formalism and the Källén–Lehmann formalism. Can we equate the two, at least approximately,$$\frac{k\!\cdot\!\gamma+m_e}{(k^2-m_e^2+\mathrm{i}\epsilon)^{1-{\alpha_R}(k)/{\pi}}}
{{?\atop \approx}\atop\ }
\int\limits_{m_e}^\infty \frac{(k\!\cdot\!\gamma+m)f(m^2)\mathrm{d}m^2}{k^2-m^2+\mathrm{i}\epsilon}.$$Here, $f(m^2)\ge 0$ is undefined for $m>m_L$ if there is a Landau pole in QED, but I suppose $f(m^2)$ would not have any poles and would approach zero fast enough for at least the integral $\int\limits_0^\infty f(m^2)\mathrm{d}m^2$ to exist in a well-defined theory? We could equate the two exactly if we were talking about a scalar Feynman propagator, because we could solve for $\alpha_R(k)$, but the change from $k\!\cdot\!\gamma+m_e$ to $k\!\cdot\!\gamma+m$ doesn't look good because the two expressions have different effects for different components of the Dirac spinor.
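In the scalar analogue at least, the equivalence can be made exact through a Stieltjes-type identity, and this is easy to check numerically. The following Python sketch is entirely my construction (scalar case only, nothing about the spinor structure in the question), verifying the identity that lets a fractional power be traded for a spectral integral:

```python
import math

# Toy check of the Stieltjes identity behind a Kallen-Lehmann-style rewrite:
#   A^{-gamma} = (sin(pi*gamma)/pi) * Integral_0^inf  t^{-gamma} / (t + A) dt,
# valid for A > 0 and 0 < gamma < 1.
def fractional_power_spectral(A, gamma, N=100000):
    # Substitute t = e^u so the integrable endpoint singularity at t = 0 is
    # handled smoothly; then apply the trapezoidal rule in u.
    umin, umax = math.log(1e-12), math.log(1e6)
    h = (umax - umin) / N
    total = 0.0
    for k in range(N + 1):
        t = math.exp(umin + k * h)
        w = 0.5 if k in (0, N) else 1.0
        total += w * t ** (1.0 - gamma) / (t + A)   # extra t from dt = t du
    return math.sin(math.pi * gamma) / math.pi * total * h

# gamma is exaggerated versus the QED-like value 1 - alpha/pi ~ 0.9977,
# purely so the quadrature converges on a modest grid.
gamma, A = 0.7, 2.0
approx = fractional_power_spectral(A, gamma)
exact = A ** (-gamma)
```

For the QED-like exponent the spectral weight piles up near the branch point, which is one numerical symptom of the difficulty the question raises.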
Better understanding the 2-point VEVs would of course leave the $n$-point VEVs to think about. Uggh.
Finally, there are three explicit Questions here (marked by question marks!), but I would also be interested in any ruminations from this starting point that include references.This post imported from StackExchange Physics at 2015-04-27 11:38 (UTC), posted by SE-user Peter Morgan |
Note: According to OPs formulation in the bounty text this answer is aimed to give at least a glimpse of some aspects around the definition of set and set theories with focus on the axiom of choice. The top voted answers already contain the essential information.
On the definition of sets:
In modern times sets (and all other mathematical objects) are defined not in order to specify what they
are but instead what we want to do with them. This led to different set theories which emerged essentially due to three different strands and common to all of them was the goal to develop a substantial mathematical theory.
Set theory as a tool in understanding the
infinite leading to the theory of cardinals and ordinals. We owe Georg Cantor the fundamentals of this theory who did it more or less single-handed and against all odds.
Set theory as
foundation of supplying the subject matter of mathematics. This claim reflects the mainstream and in many books we can find something like set theory is the foundation of mathematics.
Set theory as supplier of a
common mode of reasoning for diverse areas of mathematics. This is strongly related with the second strand and the axiom of choice is a famous set-theoretical principle of this type.
This reasoning from M. Potter's
Set Theory and Its Philosophy is followed by
The history of such theories is now a century old ... - and yet there is, even now,
no consensus in the literature about the form they should take.
Conclusion: There is no widely agreed single set theory which is favored by the mathematicians from which we could derive a right definition of a set. Instead depending on the area of research and the richness of results within these areas different set theories like ZF, ZFC, etc. are taken as their basis.
Set theory today:
Presumably most of the daily work is explicitly or implicitly based upon
ZF, ZFC or a somewhat weaker version in between. But we should be aware that each of these set theories has benefits as well as drawbacks.
U. Felgner writes in
Models of ZF-Set theory
We believe that the
ZF-axioms describe in a correct way our intuitive contemplations concerning the notion of sets. The axiom of choice (AC) is intuitively not so clear as the other ZF-axioms are, but we have learned to use it because it seems to be indispensable in proving mathematical theorems. On the other hand, (AC) has strange consequences, such as every set can be well-ordered, and we are unable to imagine a well-ordering of the set of real numbers.
Besides the well-order theorem (WOT) there are many other equivalents to the AC which are then also to accept.
Historical aspects around AC:
Some mathematicians had difficulties deciding for or against this axiom e.g. van der Waerden:
In 1930, van der Waerden published his
Modern Algebra, detailing the exciting new applications of the axiom. The book was very influential, providing Zorn and Teichmüller with a proving ground for their versions of choice, but van der Waerden's Dutch colleagues persuaded him to abandon the axiom in the second edition of 1937. He did so, but the resulting limited version of abstract algebra brought such a strong protest from his fellow algebraists that he was moved to reinstate the axiom and all its consequences in the third edition of 1950. (P. Maddy, 1988)
H. Herrlich summarizes the historical development in his
Axiom of Choice
After Gödel (1938) proved the relative consistency of the Axiom of Choice by constructing within a given model of
ZF a model of ZFC, the proponents of AC gained ground. Most modern textbooks take AC for granted and the vast majority of mathematicians use AC freely.
However, after Cohen (1963) proved the relative consistency of the negation of
AC and, moreover, provided a method, called forcing, for producing a plethora of models of ZF that have or fail to have a wide range of specified properties, a growing number of mathematicians started to investigate the ZF world by substituting AC by a variety of possible alternatives, sometimes just by weakening AC and sometimes by replacing AC by axioms that contradict it.
And with respect to a
true definition of sets he continues
All this work demonstrates how useful or convenient such axioms as
AC and its possible alternatives are. But the question of the truth of AC is not touched, and Hilbert's First Problem remains unanswered. It is conceivable, even likely, that it will never be solved, despite Hilbert's optimistic slogan expressed in his Paris lecture: in mathematics there is no ignorabimus.
Pros and Cons of AC:
Herrlich's book is an interesting source of information around AC. He presents many equivalents of AC and some related concepts to AC. The main part are the chapters
Disasters without Choice, consisting of 11 sections organised by mathematical disciplines, and the chapter Disasters with Choice, consisting of 2 sections. To get a glimpse of such consequences, I pick out a few easily understandable examples:
From Section 4: Disasters without Choice
Section 4.4: Disasters in Algebra I: Vector Spaces
In
ZFC every vector space is uniquely determined, up to isomorphism, by a single cardinal number, its dimension. Each of the two fundamental results which together enable us to associate a dimension with a given vector space fails badly in ZF.
Disaster 4.42: The following can happen:
Vector spaces may have no bases
Vector spaces may have two bases with different cardinalities.
Theorem 4.44: Equivalent are:
Every vector space has a basis
AC
Section 4.6: Disasters in Elementary Analysis: The Reals and Continuity
Disaster 4.53: The following can happen
$\mathbb{R}$ may fail to be Fréchet, i.e., not every accumulation point $x$ of a subset $A$ may be reachable by a sequence $(a_n)$ in $A$.
... (9 more to follow)
Though the Axiom of Choice is responsible for many beautiful results, it is equally responsible for the existence of several dreadful monstrosities - unwelcome and unneeded.
From Section 5: Disasters with Choice
Section 5.1: Disasters in Elementary Analysis:
Definition 5.1: The equation $f(x+y)=f(x)+f(y)$ is called the
Cauchy-equation
Consider a function $f:\mathbb{R}\rightarrow\mathbb{R}$ that satisfies the Cauchy-equation for all real $x$ and $y$. Then it is easily seen that
$f(r\cdot x)=r\cdot f(x)$ for all rational $r$ and real $x$, i.e. $f$ is $\mathbb{Q}$-linear.
In particular:
$f(r)=f(1)\cdot r$ for all rational $r$.
And continuity of $f$ would imply
$f(x)=f(1)\cdot x$ for all $x\in \mathbb{R}$
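The "easily seen" $\mathbb{Q}$-linearity steps can be spelled out:

```latex
% Sketch, assuming only the Cauchy equation f(x+y) = f(x) + f(y):
f(2x) = f(x) + f(x) = 2f(x),
\qquad
f(nx) = n f(x) \quad (n \in \mathbb{N}, \text{ by induction}).
% Setting x = y = 0 gives f(0) = 2f(0), so f(0) = 0 and f(-x) = -f(x).
% For a rational r = p/q with p \in \mathbb{Z}, q \in \mathbb{N}:
q \, f\!\left(\tfrac{p}{q}\,x\right) = f(p x) = p \, f(x)
\;\Longrightarrow\;
f(r x) = r \, f(x).
```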
Are there solutions of the Cauchy-equation that fail to be continuous? None has ever been constructed and in
ZF none will ever be. However, the Axiom of Choice guarantees the existence of such monsters; even worse, under AC there are far more undesirable solutions of the Cauchy-equation than there are desirable ones:
Disaster 5.2: In
ZFC there are
$2^{\aleph_0}$ continuous solutions $f:\mathbb{R}\rightarrow\mathbb{R}$ and
$2^{(2^{\aleph_0})}$ non-continuous solutions $f:\mathbb{R}\rightarrow\mathbb{R}$ of the Cauchy-equation.
Conclusion: It is good to be aware that there are benefits and drawbacks in ZF as well as ZFC and it's plausible that this is also the case for other set theories. So, there is no true definition of a set theory as framework for a true definition of a set.
Two hints:
A classic source to read and think about sets is
Naive Set Theory by P. Halmos. It presumably covers most of the aspects around sets you might need for daily work.
On the other hand if you are curious, to see which functions in real analysis can be defined or not be defined according to underlying set theories, you may want to have a short look into
Strange Functions in Real Analysis. |
The XYZPipeJunction is removed using circular trig functions from Both.
Both = ContourPlot3D[x^4 + y^4 + z^4 - (x^2 + y^2 + z^2)^2 + 3 (x^2 + y^2 + z^2) == 3, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}]
Dice = ContourPlot3D[x^4 + y^4 + z^4 - (Cos[x] Cos[y] Cos[z])^2 + 3 (Cos[x] Cos[y] Cos[z]) == 3, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}]
because for small arguments, $ \cos x \cos y \cos z \approx 1- (x^2+y^2+z^2)/2. $
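That second-order approximation is easy to sanity-check numerically; the sketch below (Python, my own check, not part of the question) confirms the error shrinks like the fourth power of the argument size:

```python
import math
import itertools

def lhs(x, y, z):
    return math.cos(x) * math.cos(y) * math.cos(z)

def quad(x, y, z):
    return 1 - (x * x + y * y + z * z) / 2

# The difference should shrink like r^4 as the box size t shrinks.
for t in [0.1, 0.05, 0.025]:
    err = max(abs(lhs(x, y, z) - quad(x, y, z))
              for x, y, z in itertools.product([-t, 0.0, t], repeat=3))
    assert err < 3 * t**4  # generous constant for the O(r^4) remainder
```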
How to remove Dice from Both and keep only XYZPipeJunction? I believe only a small tweaking needed.
EDIT1
Changed to a more suitable Title
EDIT2:
I can do the RegionFunction. Taking liberty with the function itself to be plotted, I wish to get it in a changed trigonometric form without polynomials. It may involve some small mathematics, as indicated. The surfaces (Dice, within) are slightly different with such a change, in sectional view:
within = ContourPlot3D[
  x^4 + y^4 + z^4 - (x^2 + y^2 + z^2)^2 + 3 (x^2 + y^2 + z^2) == 3,
  {x, -2, 2}, {y, -2, 2}, {z, -2, 2},
  RegionFunction -> Function[{x, y, z}, x^2 + y^2 + z^2 < 1.5^2]]

Show[{Dice, within}, PlotRange -> {{-2, 0}, {0, 2}, {-2, 2}}]
So also "without"; the next, modified XYZPipeJunction, whose trig form is sought to be proposed or suggested, could also be a bit different in shape/size.
EDIT 3
Apologies about clarity on this topic. I will clean the board and present it afresh with as much clarity as I can. I shall retain the above for a while for reference.
The surface
$$ x^{2n} + y^{2n} + z^{2n} = 1 $$
approximates a "dice" (cube) more and more closely as the positive integer n increases, with the corners remaining slightly rounded.
$$ \cos x \cos y \cos z = a < 1 $$
is also one such shape in the appropriate octant. It may be variously called a spheroid, a "deep" sphere, a non-linear spheroid, etc.
For small arguments the approximation
$$ \cos x \cos y \cos z \approx 1- (x^2+y^2+z^2)/2 $$
is valid, and similarly $ \cos 2x, \cos 2y, \cos 2z, \ldots $ serve for fourth-order approximations with doubled or quadrupled arguments, which go on to represent a multiplicity of surfaces. This is necessitated by the implicit nature of the binding $ F(x,y,z) = 0. $
Now consider the following combined representation with two such sets of spheroids and Hexapods, if you will. Due to their period $\pi$ (inside and outside the Hexapods), separating the spheroids to Show them separately is more easily done than for the Hexapods.
$$ \cos 2 x \cos 2 y \cos 2 z - \cos x \cos y \cos z + \frac{1}{2} =0 $$
C3[x_, y_, z_] = Cos[x] Cos[y] Cos[z];

ThreeSurfaces = ContourPlot3D[
  C3[2 x, 2 y, 2 z] - C3[x, y, z] + .5 == 0,
  {x, -Pi, Pi}, {y, -Pi, Pi}, {z, -Pi, Pi},
  PlotLabel -> "Three Surfaces", PlotRange -> All]
My query is:
1) How far is "ThreeSurfaces" representative of "Both"? This is more to do with the math.
2) How to put "Both" in implicit/periodic form, changing the monotonic behavior at points far away from the origin to periodic behavior? This also is to do with the math.
3) How to isolate and Show the Hexapod separately? This is to do with Mathematica; RegionFunction cannot be readily applied.
It is like gaming for creative fun, hope you enjoy .. |
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
Author Message stubner Joined: 14 Mar 2006 Posts: 7
Posted: Wed Apr 19, 2006 2:19 pm Post subject: absolute values Hi everybody,
it seems I haven't used absolute values much lately, since only yesterday I found that things like $|x|$ or $|o|$ look off-balance to me. The space to the right of the letter looks larger than the space to the left of it.
In order to test this systematically, I have taken some code from testfont.tex (without fully understanding it ;-):
\documentclass{article}
\renewcommand{\rmdefault}{ptm}
\usepackage[slantedGreek]{mtpro2}
\def\math{\def\ii{i} \def\jj{j}
\def\\##1{|##1|+}\mathtrial
\def\\##1{##1_2+}\mathtrial
\def\\##1{##1^2+}\mathtrial
\def\\##1{##1/2+}\mathtrial
\def\\##1{2/##1+}\mathtrial
\def\\##1{##1,{}+}\mathtrial
\def\\##1{d##1+}\mathtrial
\let\ii=\imath \let\jj=\jmath \def\\##1{\hat##1+}\mathtrial}
\newcount\skewtrial \skewtrial='177
\def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L \\M \\N \\O
\\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z \\a \\b \\c \\d \\e \\f \\g
\\h \\\ii \\\jj \\k \\l \\m \\n \\o \\p \\q \\r \\s \\t \\u \\v \\w \\x \\y
\\z \\\alpha \\\beta \\\gamma \\\delta \\\epsilon \\\zeta \\\eta \\\theta
\\\iota \\\kappa \\\lambda \\\mu \\\nu \\\xi \\\pi \\\rho \\\sigma \\\tau
\\\upsilon \\\phi \\\chi \\\psi \\\omega \\\vartheta \\\varpi \\\varphi
\\\Gamma \\\Delta \\\Theta \\\Lambda \\\Xi \\\Pi \\\Sigma \\\Upsilon
\\\Phi \\\Psi \\\Omega \\\partial \\\ell \\\wp$\par}
\def\mathsy{\begingroup\skewtrial='060 % for math symbol font tests
\def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L
\\M \\N \\O \\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z$\par}
\math\endgroup}
\begin{document}
\math
\end{document}
IMO most lowercase letters look off-center to me, with too much space to the right of the letter (f, v, w, e are exceptions). The Greeks are fine, while the uppercase letters are mixed (R has too much space on the left, U and M on the right). The other tests (besides absolute values) look fine.
Other opinions?
cheerio
ralf jautschbach Joined: 17 Mar 2006 Posts: 11
Posted: Wed Apr 19, 2006 3:52 pm Post subject: Re: absolute values
stubner wrote: Hi everybody,
it seems I haven't used absolute values much lately, since only yesterday I found that things like $|x|$ or $|o|$ look off-balance to me. The space to the right of the letter looks larger than the space to the left of it.
ralf
|i| and |\pi|, for example, seem to have too much space on the right. |\eta| looks like there is not enough space on the right.
Jochen Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Thu Apr 20, 2006 4:04 pm Post subject: Basically, I want to reiterate the remark I made in the last post to the "firstimpressions" posting by stubner. If you start looking carefully
at any mathematical typesetting (as opposed to just reading it) you will find thousands of non-optimal things. Some of these are actually due
to the design of TeX (see some remarks of mine in the "spacing" posting by zeller), and some to the varying circumstances of individual characters.
All sorts of things that one would never even notice while reading a mathematics paper can stand out when one looks at things a character at a time, and sometimes one becomes overly concerned.
(The link
http://support.pctex.com/files/JWPXMWRZTYLV/abs.pdf
shows Computer Modern and MTPro2 characters inside absolute values
and parentheses, and I think that you will find cases where CM is spaced better than MTPro2, but also cases where the opposite is true.)
For example, although I agree that |M| and |U| have too much space to the right of the letters, I wouldn't agree that |R| has too much space to the left of the R, or at most just a tiny extra bit of space. By contrast, in Computer Modern, the |R| definitely has this problem to a much greater degree.
Notice, moreover, that in MTPro2, (M) and (U) and (R) look nicely balanced. Of course, that's partly because of the character of the right parenthesis---it has a top piece that extends backwards, unlike almost
all characters! In Computer Modern this doesn't pose as great a problem
mainly because the ) is much thinner and unshaped.
The case of |i|, where there is certainly more space on the right, is also instructive. Notice that the dot on the Times-Italic i is very close to being the rightmost part of the character, while in CM it is nowhere near the right, because of the curlicue at the bottom. For this reason, I had to make the italic correction of the i rather big; otherwise, superscripts would
be very close to the dot, making reading very unpleasant. Since the italic correction is always added to the i, this gives the extra space before the |
or the ). Naturally, I had to compensate for this by adding more negative
kerning between the i and all other characters, but you can't kern with the ), as I've mentioned before, in one of the two postings I mentioned.
Similarly, if you compare x^i in CM and MTPro2, you'll see that the superscript i in CM has a curlicue to the left, which keeps it separated from the x, while in MTPro2, I needed to make a greater italic correction to the x in order to get superscripts adequately far away.
TeX has \scriptspace to determine extra space after a subscript or superscript; alas, that it does not also have a \prescriptspace, to determine some extra space _before_ superscripts! (And similarly, see
one of the previously mentioned postings, the spacing in scriptstyle and scriptscriptstyle should be more flexible.)
At any rate, for now, I'll leave things as they are. Possibly in a future release I'll try to address some of these questions, though it simply isn't possible to optimize all spacing.
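[A follow-up note for readers: if a particular combination bothers you in your own documents, a local workaround is to absorb the extra space with a hand kern. This is only a sketch; the macro names and the -2mu amount are illustrative choices, not something provided by MTPro2:]

```latex
\documentclass{article}
\usepackage{mathtools} % provides \DeclarePairedDelimiter
% Standard paired delimiter for absolute values:
\DeclarePairedDelimiter\abs{\lvert}{\rvert}
% Hand-tuned variant: pull the right bar in slightly.
% The -2mu is an illustrative amount, not a recommendation.
\newcommand{\abstight}[1]{\lvert #1 \mkern-2mu\rvert}
\begin{document}
Compare $\abs{i}$ and $\abstight{i}$, or $\abs{M}$ and $\abstight{M}$.
\end{document}
```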
Orbital Graphs
Suppose that (as per usual) \Omega = \{1..n\} for some n \in \mathbb{N} and G \leq S_{\Omega}.
If we pick two distinct numbers \alpha, \beta \in \Omega, then the graph \Gamma_{(\alpha,\beta)} = \left<\Omega, \left(\alpha, \beta\right)^G\right> is called the orbital graph of G with base pair (\alpha, \beta).
Computing the orbital graph for a given pair (\alpha, \beta) is a simple orbit computation, but what if we want to compute the set of all orbital graphs up to isomorphism?
Naive computation
In GAP we can naively compute all orbital graphs with the following code.
OrbitalGraphs := function(G)
  local basepairs;
  basepairs := Arrangements([1..LargestMovedPoint(G)], 2);
  return List(basepairs, bp -> DigraphByEdges(Orbit(G, bp, OnTuples)));
end;
Even for fairly tame groups acting on 100 points this will get us 9900 graphs, and 1000 points will give us 999000, many of which will be isomorphic to each other. For example
gap> og := OrbitalGraphs(PrimitiveGroup(100,2));;
gap> time;
169626
gap> Length(og);
9900

Cleverer construction
Let’s do some neat group theory. Things are computed fastest if one doesn’t compute them at all, so let’s avoid computing all that superfluous junk.
The main idea behind the slightly cleverer algorithm is that two orbital graphs \Gamma_{(\alpha, \beta)} and \Gamma_{(\gamma, \delta)} are isomorphic iff there is an element \sigma \in G such that (\alpha, \beta)^\sigma = (\gamma, \delta). (This would of course need a short proof.) To compute one representative per orbit, we first compute the orbits of G on \Omega, and then, for each of those orbits, the orbits of a point stabilizer of one point in the orbit. The resulting code looks like this:
OrbitalGraphs := function(G)
  local orbits, omega, o, oo, graphs;
  omega := [1..LargestMovedPoint(G)];
  orbits := Orbits(G, omega);
  graphs := [];
  for o in orbits do
    for oo in Orbits(Stabilizer(G, o[1]), omega) do
      Add(graphs, Orbit(G, [o[1], oo[1]], OnTuples));
    od;
  od;
  return graphs;
end;
This cuts down the computation time enough so that we can deal with groups acting on 1000 points or more, but we can still do better, as we can save ourselves the computation of the orbits of G on \Omega^2.
We do this because we know that for any basepair (\alpha, \beta) the orbit under G acting on pairs consists of pairs where the first entry is in the orbit of \alpha under G, and the second point is in the orbit of \beta under the point stabilizer of \alpha in G, conjugated by the element of G that takes \alpha to its image in its orbit.
OrbitalGraphs := function(G)
  local orbits, omega, o, oo, graph, graphs;
  omega := [1..LargestMovedPoint(G)];
  orbits := Orbits(G, omega);
  graphs := [];
  graph := List(omega, x -> []);
  for o in orbits do
    for oo in Orbits(Stabilizer(G, o[1]), omega) do
      graph := List(omega, x -> []);
      graph{o} := List(o, pt -> OnTuples(oo, RepresentativeAction(G, o[1], pt)));
      Add(graphs, Digraph(graph));
    od;
  od;
  return graphs;
end;
Slightly more advanced versions of orbital graph computations could filter out irrelevant orbital graphs early. I will leave this as an exercise for the reader (at least for now).
The ideas above have been employed in [1] to efficiently compute orbital graphs.
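For readers outside GAP, the same pair-orbit idea fits in a few lines of Python; this is my own illustrative sketch, not the code used in [1]:

```python
from itertools import permutations

def orbit(gens, pair):
    """Orbit of an ordered pair under permutations given as tuples p, with p[i] the image of i."""
    seen = {pair}
    frontier = [pair]
    while frontier:
        a, b = frontier.pop()
        for p in gens:
            img = (p[a], p[b])
            if img not in seen:
                seen.add(img)
                frontier.append(img)
    return seen

def orbital_graphs(gens, n):
    """One edge set per orbit of G on ordered pairs of distinct points 0..n-1."""
    graphs, covered = [], set()
    for pair in permutations(range(n), 2):
        if pair not in covered:
            o = orbit(gens, pair)
            covered |= o
            graphs.append(frozenset(o))
    return graphs

# Cyclic group C4 = <(0 1 2 3)> acting on 4 points: the pair orbits are the
# "difference" classes d = 1, 2, 3, giving three orbital graphs.
assert len(orbital_graphs([(1, 2, 3, 0)], 4)) == 3
```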
Bibliography
[1] New refiners for permutation group search, Journal of Symbolic Computation 92, pp. 70–92, 2019.
I am trying to determine whether the series $$\sum_{n = 1}^\infty \frac{2^{n^{2}}}{n!}$$ is conditionally convergent or divergent. I tried the ratio test: $$\frac{2^{(n+1)^{2}}}{(n+1)!} \cdot \frac{n!}{2^{n^{2}}} = \frac{2^{2n+1}}{n+1},$$ which is $> 1$ as $n\to \infty$, so the series is divergent? I am not sure if I am on the right track.
Well, applying the ratio test:
$$\lim_{n\to\infty}\space\left|\frac{\frac{2^{\left(n+1\right)^2}}{\left(n+1\right)!}}{\frac{2^{n^2}}{n!}}\right|=\lim_{n\to\infty}\space\left|\frac{2^{2n+1}}{n+1}\right|\space\to\space\infty\tag1$$
Note that ${2^{n^2} \over n!} = { (2^n)^n \over n!} \ge { (2^n)^n \over n^n}= ({2^n \over n})^n \ge 1$ for all $n \ge 1$. Hence the terms do not tend to $0$, and in fact $\sum_n {2^{n^2} \over n!} \ge N$ for all $N$, so the series diverges.
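The divergence is easy to corroborate numerically: the simplified ratio $2^{2n+1}/(n+1)$ exceeds $1$ and the terms themselves grow, so they cannot tend to $0$. A quick Python check:

```python
from math import factorial

def term(n):
    return 2**(n * n) / factorial(n)

def ratio(n):
    # a_{n+1}/a_n simplified: 2^(2n+1)/(n+1)
    return 2**(2 * n + 1) / (n + 1)

for n in range(1, 10):
    assert ratio(n) > 1           # the ratio-test limit is +infinity
    assert term(n + 1) > term(n)  # terms increase, so they cannot tend to 0
```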
Can someone please advise how to compute the following (as my results go into thousands):
E.g. I have used
and the result (for T=12) = 580103.7261
Thanks
The formula looks like the square-root-of-$t$ rule for scaling return-variance, but for simple multi-period returns, not log-returns. That is, the formula shows you how to compute
$$\mbox{var}\bigg(1+R^{(\tau)}\bigg) = \mbox{var}\bigg(\prod_{t=1}^{\tau} 1+R_t\bigg)$$ starting from the expectation and variance of $R_t$.
In which case $r^d$ would be a single-period return. (The notation is unfortunate: writing $\sigma^{\tau}_{i}$ when $\tau$ is an integer.) What exactly do the values of $\mu$ and $\sigma$ show? If I take them to mean 0.9% and 2.35%, i.e. 0.009 and 0.0235, then I get an annualised vol of about 9%, which seems reasonable. |
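If the formula in question is indeed this i.i.d. scaling rule, the variance has a closed form: with $m_2 = \sigma^2 + (1+\mu)^2 = E[(1+R)^2]$, independence gives $\mbox{var}\big(\prod_{t=1}^{\tau}(1+R_t)\big) = m_2^{\tau} - (1+\mu)^{2\tau}$. A Python sketch of this (my own derivation check, using the 0.9%/2.35% figures read off above; the normal return distribution in the Monte-Carlo part is just a modelling choice):

```python
import math
import random

def compounded_var(mu, sigma, tau):
    """Variance of prod_{t=1..tau}(1 + R_t) for i.i.d. R_t with mean mu, sd sigma."""
    m2 = sigma**2 + (1 + mu)**2            # E[(1+R)^2]
    return m2**tau - (1 + mu)**(2 * tau)   # E[P^2] - (E[P])^2

sd12 = math.sqrt(compounded_var(0.009, 0.0235, 12))
print(round(sd12, 3))  # about 0.09, i.e. ~9% annualised vol

# Monte-Carlo sanity check:
random.seed(0)
sims = [math.prod(1 + random.gauss(0.009, 0.0235) for _ in range(12))
        for _ in range(100_000)]
mean = sum(sims) / len(sims)
mc_var = sum((s - mean)**2 for s in sims) / len(sims)
assert abs(math.sqrt(mc_var) - sd12) < 0.002
```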
Group Actions of a Group on a Set
Definition: Let $(G, \cdot)$ be a group and let $A$ be a (nonempty) set. A Left Group Action of the group $G$ on the set $A$ is a map $G \times A \to A$, denoted for all $g \in G$ and all $a \in A$ by $(g, a) \to ga$, that satisfies the following properties:
1. $g_1(g_2a) = (g_1 \cdot g_2)a$ for all $g_1, g_2 \in G$ and for all $a \in A$.
2. $1a = a$ for all $a \in A$ (where $1 \in G$ denotes the identity element).
Similarly, a Right Group Action of the group $G$ on the set $A$ is a map $A \times G \to A$, denoted for all $g \in G$ and all $a \in A$ by $(a, g) \to ag$, that satisfies the following properties:
1. $(ag_1)g_2 = a(g_1 \cdot g_2)$ for all $g_1, g_2 \in G$ and for all $a \in A$.
2. $a1 = a$ for all $a \in A$.
In either case we say that the group $G$ is (left/right) acting on the set $A$.
We begin by stating some basic results regarding (left) group actions of a group on a set.
Proposition 1: Let $(G, \cdot)$ be a group that is left acting on the set $A$. For each $g \in G$ let $\sigma_g : A \to A$ be defined for all $a \in A$ by $\sigma_g(a) = ga$. Then:
a) For each $g \in G$, $\sigma_g$ is a permutation of the set $A$, and so $\sigma_g \in S_A$ for all $g \in G$.
b) The map $\varphi : G \to S_A$ (called the Associated Permutation Representation) defined for all $g \in G$ by $\varphi(g) = \sigma_g$ is a group homomorphism of $G$ to $S_A$.
c) If $\psi : G \to S_A$ is any homomorphism from $G$ to $S_A$ then the map $G \times A \to A$ defined by $(g, a) \to [\psi(g)](a)$ is a left group action of $G$ on $A$.
A similar result can be stated when $(G, \cdot)$ is right acting on a set $A$.
Proof of a) For each $g \in G$ consider $\sigma_g$ and $\sigma_{g^{-1}}$. For all $a \in A$ we have that:
\begin{align} \quad (\sigma_g \circ \sigma_{g^{-1}})(a) &= \sigma_g (\sigma_{g^{-1}}(a)) \\ &= \sigma_g(g^{-1}a) \\ &= g(g^{-1}a) \\ &= (g \cdot g^{-1})a \\ &= 1a \\ &= a \end{align}
\begin{align} \quad (\sigma_{g^{-1}} \circ \sigma_g)(a) &= \sigma_{g^{-1}} (\sigma_g(a)) \\ &= \sigma_{g^{-1}}(ga) \\ &= g^{-1}(ga) \\ &= (g^{-1} \cdot g)a \\ &= 1a \\ &= a \end{align}
Therefore $\sigma_g \circ \sigma_{g^{-1}} = \mathrm{id}_A = \sigma_{g^{-1}} \circ \sigma_g$, and so each $\sigma_g$ has an inverse $\sigma_{g^{-1}}$. Thus each $\sigma_g$ is a permutation of $A$. $\blacksquare$
Proof of b) By part (a) we have that $\varphi$ is indeed a map from $G$ to $S_A$. All that remains is to show that $\varphi$ is a group homomorphism. Let $g_1, g_2 \in G$. We claim that $\sigma_{g_1 \cdot g_2} = \sigma_{g_1} \circ \sigma_{g_2}$. Indeed, for all $a \in A$ we have that:
\begin{align} \quad \sigma_{g_1 \cdot g_2}(a) = (g_1 \cdot g_2)a = g_1(g_2a) = g_1 \sigma_{g_2}(a) = \sigma_{g_1}(\sigma_{g_2}(a)) = (\sigma_{g_1} \circ \sigma_{g_2})(a) \end{align}
Therefore $\varphi(g_1 \cdot g_2) = \sigma_{g_1 \cdot g_2} = \sigma_{g_1} \circ \sigma_{g_2} = \varphi(g_1) \circ \varphi(g_2)$. So $\varphi : G \to S_A$ is a group homomorphism. $\blacksquare$
Proof of c) Let $\psi : G \to S_A$ be a group homomorphism of $G$ to $S_A$. We aim to show that $(g, a) \to [\psi(g)](a)$ is a left group action of $G$ on $A$, i.e., we need to verify the two properties in the definition. For all $g_1, g_2 \in G$ and for all $a \in A$ we have that:
\begin{align} \quad g_1(g_2a) = g_1([\psi(g_2)](a)) = [\psi(g_1)]([\psi(g_2)](a)) = [\psi(g_1) \circ \psi(g_2)](a) = [\psi(g_1 \cdot g_2)](a) = (g_1 \cdot g_2)a \end{align}
And for all $a \in A$ we have that:
\begin{align} \quad 1a = [\psi(1)](a) = [\mathrm{id}_A](a) = a \end{align}
So indeed, $(g, a) \to [\psi(g)](a)$ is a left group action of $G$ on $A$. $\blacksquare$ |
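As a toy illustration of the definition, the additive group $\mathbb{Z}_{12}$ acts on $\{0, \dots, 11\}$ by translation; a short Python check of both axioms and of Proposition 1a (that each $\sigma_g$ is a permutation), my own example rather than one from the text:

```python
n = 12  # Z_12 acting on {0, ..., 11} by g.a = (g + a) mod n

def act(g, a):
    return (g + a) % n

points = range(n)
group = range(n)  # identity element is 0; the group operation is addition mod n

# Axiom 1: g1(g2 a) = (g1 * g2) a   (here "*" is addition mod n)
assert all(act(g1, act(g2, a)) == act((g1 + g2) % n, a)
           for g1 in group for g2 in group for a in points)
# Axiom 2: 1a = a (the identity element acts trivially)
assert all(act(0, a) == a for a in points)
# Proposition 1a: each sigma_g is a permutation of the points
assert all(sorted(act(g, a) for a in points) == list(points) for g in group)
```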
Question
Four fair six-sided dice are rolled. The probability that the sum of the results is $22$ equals $$\frac{X}{1296}.$$ What is the value of $X$?
My Approach
I simplified it to the equation of the form:
$x_{1}+x_{2}+x_{3}+x_{4}=22, 1\,\,\leq x_{i} \,\,\leq 6,\,\,1\,\,\leq i \,\,\leq 4 $
Solving this equation results in:
$x_{1}+x_{2}+x_{3}+x_{4}=22$
I removed restriction of $x_{i} \geq 1$ first as follows-:
$\Rightarrow x_{1}^{'}+1+x_{2}^{'}+1+x_{3}^{'}+1+x_{4}^{'}+1=22$
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
$\Rightarrow \binom{18+4-1}{18}=1330$
Now I removed the restriction $x_{i} \leq 6$ by calculating the number of bad cases and then subtracting it from $1330$:
calculating the bad combinations, i.e. those with some $x_{i} \geq 7$
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
We can distribute $7$ to $2$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{2}$ ways.
We can distribute $7$ to $1$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{1}$ ways, and then distribute the rest among all the others,
i.e
$$\binom{4}{1} \binom{14}{11}$$
Therefore, the number of bad combinations equals $$\binom{4}{1} \binom{14}{11} - \binom{4}{2}$$
Therefore, the solution should be:
$$1330-\left( \binom{4}{1} \binom{14}{11} - \binom{4}{2}\right)$$
However, I am getting a negative value. What am I doing wrong?
EDIT
I am asking for my approach, because if the question is for a larger number of dice and if the sum is higher, then predicting the value of dice will not work. |
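(A brute-force count over all $6^4 = 1296$ outcomes settles what $X$ must be, and is a handy check against any inclusion-exclusion attempt:)

```python
from itertools import product

# Count the 6^4 = 1296 equally likely outcomes whose sum is 22.
X = sum(1 for roll in product(range(1, 7), repeat=4) if sum(roll) == 22)
print(X)  # 10, so the probability is 10/1296
```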
I'm trying to put the following system of equations into exact differential equation form:
$$ \left\{ \begin{array}{l} \dot x = y^2-x \\ \dot y = 2y \end{array} \right. $$
What I tried:
$$\frac{dx}{dy}=\frac{y^2-x}{2y} \quad \Rightarrow \quad 2ydx+(x-y^2)dy=0 $$
Next I tried to find an integration factor:
$$\frac{N_x-M_y}{M}=-\frac{1}{2y} $$
$$\Rightarrow \quad \mu(y)=e^{\int-\frac{1}{2y}dy}=e^{-\frac{1}{2}ln|y|}=\frac{1}{\sqrt{|y|}}$$
Is there a way I can get rid of the absolute value? omitting it gives an exact differential equation but only for part of the domain. |
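(Restricting to $y>0$, so $|y|=y$ and $\mu(y)=y^{-1/2}$, exactness $M_y = N_x$ of the scaled equation can be verified numerically; a small finite-difference check of my own, stdlib only:)

```python
import math

# After multiplying 2y dx + (x - y^2) dy = 0 by mu(y) = y**-0.5 (taking y > 0):
def M(x, y):
    return 2 * y / math.sqrt(y)           # = 2*sqrt(y)

def N(x, y):
    return (x - y * y) / math.sqrt(y)

def dMdy(x, y, h=1e-6):
    return (M(x, y + h) - M(x, y - h)) / (2 * h)

def dNdx(x, y, h=1e-6):
    return (N(x + h, y) - N(x - h, y)) / (2 * h)

# Exactness: M_y == N_x at sample points with y > 0.
for x, y in [(1.0, 2.0), (-3.0, 0.5), (0.0, 4.0)]:
    assert abs(dMdy(x, y) - dNdx(x, y)) < 1e-5
```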
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle ($\Delta\varphi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and $\sqrt{s_{\rm NN}}=8.16$ TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi$(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
General Vector Notation
There are two common ways to denote a vector. The first way is with the use of an arrow such as $\vec{u}$. However, it is also common to use boldface such as $\mathbf{u}$. In the linear algebra section of MathOnline the former notation will be used much more frequently.
Component Form of a Vector
The component form of a vector in Euclidean $n$-space is commonly denoted as $\vec{u} = (u_{1}, u_{2}, \dots , u_{n})$, $\vec{u} \in \mathbb{R}^n$, where $u_1, u_2, \dots, u_n$ are said to be the components of $\vec{u}$, while the notation "$\vec{u} \in \mathbb{R}^n$" says that $\vec{u}$ exists within Euclidean $n$-space.
Matrix Form of a Vector
A vector can also be represented in terms of an $n \times 1$ matrix known as a column matrix or column vector. Alternatively, we can represent a vector as a $1 \times n$ matrix known as a row matrix or row vector. For example, the column-matrix form of a vector looks as follows:
$$\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$$
While the row-matrix form of a vector looks like this:
$$\vec{u} = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix}$$
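In code, the three notations amount to a choice of shape; a tiny plain-Python illustration (my own, not from the text):

```python
u = (1, 2, 3)              # component form: a vector in R^3

column = [[c] for c in u]  # 3x1 column matrix: [[1], [2], [3]]
row = [list(u)]            # 1x3 row matrix:    [[1, 2, 3]]

# Same components, different shapes:
assert [entry[0] for entry in column] == row[0] == list(u)
```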
I am studying entanglement entropy.
It holds for any local quantum system that the entanglement entropy of a region $A$ in a highly mixed state is extensive,
$$ S_A \sim \frac{\text{Vol}(A)}{\epsilon^d} $$
where $\epsilon$ is the length between sites (or the UV cutoff for a regularized QFT). This is because
$$S_A=\log[\text{dim}(\mathcal{H}_A)]$$
where $\mathcal{H}_A$ is the Hilbert space of the degrees of freedom in $A$, and its dimension scales as the number of degrees of freedom per site raised to the power of the number of sites, $\sim \exp(\text{Vol}(A)/\epsilon^d)$.
On the other hand, for a pure state the entanglement entropy is not extensive, since $S_A=S_B$, where $B$ is the complement of $A$. In fact it is proven that it follows an 'area law'.
The usual intuitive argument for this area law is that to compute the entanglement entropy we count the pairs entangled across the boundary of $A$. Since the theory is local, the most entangled pairs will be those that lie within $\sim\epsilon$ of the boundary, while the farther sites won't contribute to the entropy. This way we get that the EE must scale as $\text{Vol}(\partial A)$.
My problem is that I don't know whether I understand why we count all the states in $A$ in one case and only the states near the boundary in the other.
The reason I found is that in the case of the pure state we are talking about the vacuum: the system is in its ground state, and a site can only see what is nearby, since the interactions are local.
But if we excite the system (for example, if we consider a thermal state $\rho = e^{-\beta H}/\text{tr }e^{-\beta H}$, which is a mixed state), the system has enough energy to go beyond the local behavior and then we need to consider all the sites for the EE.
Am I right with that? |
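(As a minimal illustration of the pure-state statement, here is a two-qubit computation of my own, with real amplitudes only: the product state $|00\rangle$ has $S_A = 0$, while the Bell state attains the maximal $S_A = \ln 2$:)

```python
import math

def reduced_density(psi):
    """rho_A[i][k] = sum_j psi[i][j] * psi[k][j] for a real 2-qubit amplitude matrix psi."""
    return [[sum(psi[i][j] * psi[k][j] for j in range(2)) for k in range(2)]
            for i in range(2)]

def entropy(rho):
    # Eigenvalues of a real symmetric 2x2 matrix in closed form.
    a, b, d = rho[0][0], rho[0][1], rho[1][1]
    disc = math.sqrt(((a - d) / 2)**2 + b * b)
    eigs = [(a + d) / 2 + disc, (a + d) / 2 - disc]
    return -sum(p * math.log(p) for p in eigs if p > 1e-12)

s = 1 / math.sqrt(2)
bell = [[s, 0], [0, s]]      # (|00> + |11>)/sqrt(2): maximally entangled
product = [[1, 0], [0, 0]]   # |00>: a product state

assert abs(entropy(reduced_density(bell)) - math.log(2)) < 1e-9
assert entropy(reduced_density(product)) < 1e-9
```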
Examples of Computing Jacobi Symbols
Recall from the Jacobi Symbols page that if $P, Q \in \mathbb{Z}$ and $Q$ has prime power factorization $Q = q_1^{e_1}q_2^{e_2} \cdots q_k^{e_k}$ then the Jacobi symbol of $P$ over $Q$ is defined to be:
$$\left ( \frac{P}{Q} \right ) = \left ( \frac{P}{q_1} \right )^{e_1} \left ( \frac{P}{q_2} \right )^{e_2} \cdots \left ( \frac{P}{q_k} \right )^{e_k}$$
where the terms in the product on the right are Legendre symbols. We proved some basic properties of Jacobi symbols, which are summarized below:
1. $\left ( \frac{PP'}{Q} \right ) = \left ( \frac{P}{Q} \right ) \left ( \frac{P'}{Q} \right )$. 2. $\left ( \frac{P}{QQ'} \right ) = \left ( \frac{P}{Q} \right ) \left ( \frac{P}{Q'} \right )$. 3. If $P \equiv P' \pmod Q$ then $\left ( \frac{P}{Q} \right ) = \left ( \frac{P'}{Q} \right )$.
We also noted the following very important result:
If $\left ( \frac{P}{Q} \right ) = -1$ then $x^2 \equiv P \pmod Q$ has no solutions, just like with Legendre symbols. However, if $\left ( \frac{P}{Q} \right ) = 1$ then NOTHING can be determined! Example 1 Compute the Jacobi symbol $\left ( \frac{5}{12} \right )$. Does $x^2 \equiv 5 \pmod {12}$ have any solutions?
We have that:
$$\left ( \frac{5}{12} \right ) = \left ( \frac{5}{2} \right )^2 \left ( \frac{5}{3} \right ) = \left ( \frac{5}{3} \right ) = \left ( \frac{2}{3} \right )$$
Since $3 \equiv 3 \pmod 8$ we have that $\left ( \frac{2}{3} \right ) = -1$. Hence $\left ( \frac{5}{12} \right ) = -1$. We conclude that $x^2 \equiv 5 \pmod {12}$ has no solutions.
Example 2 Compute the Jacobi symbol $\left ( \frac{16}{60} \right )$. Does $x^2 \equiv 16 \pmod {60}$ have any solutions?
We have that:
$$\left ( \frac{16}{60} \right ) = \left ( \frac{16}{2} \right )^2 \left ( \frac{16}{3} \right ) \left ( \frac{16}{5} \right ) = \left ( \frac{1}{3} \right ) \left ( \frac{1}{5} \right ) = 1$$
We cannot conclude whether or not $x^2 \equiv 16 \pmod {60}$ has a solution by Jacobi symbols. However, it clearly does have a solution, namely $x = 4$. |
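For an algorithmic double-check, the Jacobi symbol for odd $Q$ can be computed without factoring, using quadratic reciprocity and the rule for $\left(\frac{2}{n}\right)$; note the even moduli $12$ and $60$ above must first be split off through the defining product. A Python sketch:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed without factoring n."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2 using (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 when gcd(a, n) > 1

assert jacobi(2, 3) == -1   # matches the (2/3) = -1 used in Example 1
assert jacobi(5, 3) == -1
assert jacobi(2, 15) == 1   # (2/3)(2/5) = (-1)(-1) = 1, yet 2 is not a square mod 15
```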
Is there an example of a commutative ring with an ideal that contains all the non-units?
I was trying to think of some subring of $\mathbb Q$, but I couldn't get it to work.
Atiyah–Macdonald's book "Introduction to Commutative Algebra" explains what Jake says:
Proposition 1.6. i) Let $A$ be a ring and $\mathfrak{m} \neq (1)$ an ideal of $A$ such that every $x \in A - \mathfrak{m}$ is a unit in $A$. Then $A$ is a local ring and $\mathfrak{m}$ is its maximal ideal. ... ii) Let $A$ be a ring and $\mathfrak{m}$ a maximal ideal of $A$, such that every element of $1 + \mathfrak{m}$ (i.e., every $1 + x$, where $x \in \mathfrak{m}$) is a unit in $A$. Then $A$ is a local ring.
Proof. i) Every ideal $\neq (1)$ consists of non-units, hence is contained in $\mathfrak{m}$. Hence $\mathfrak{m}$ is the only maximal ideal of $A$.
So note that for a commutative ring $R$ these are equivalent:
1. The ring $R$ is local,
2. The set $m$ of non-units is an ideal,
3. The set $m$ of non-units is the only maximal ideal.
For example the ring $k[[X]]$, where $k$ is a field.
Take the ring $R$ of $2 \times 2$ matrices $A = [a_{ij}]$ with $a_{11} = a_{22}$ and $a_{21} = 0$ over a field $F,$ and let $I$ be the ideal consisting of those matrices in $R$ with $a_{11} = a_{22} = 0.$
Take $R=\mathbb Z / 9 \mathbb Z$ and $I=3R$. Then $I=\{0, 3, 6\}$, which are exactly the non-units.
More generally, $R=\mathbb Z / p^2 \mathbb Z$ and $I=pR$, where $p$ is a prime. |
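A quick computational check of the $\mathbb Z / 9 \mathbb Z$ example (Python):

```python
n = 9
# Units of Z/9Z are the residues with a multiplicative inverse.
units = {k for k in range(n) if any((k * j) % n == 1 for j in range(n))}
non_units = set(range(n)) - units
ideal_3R = {(3 * r) % n for r in range(n)}
# The non-units are exactly the ideal 3R = {0, 3, 6}.
assert non_units == ideal_3R == {0, 3, 6}
```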
For a basic intuition of the conditional probability formula, I always like using a two-way table. Let's say there are 150 students in a year group, of whom 80 are female and 70 male, each of whom must study exactly one language course. The two-way table of students taking the different courses is:
| French German Italian | Total
-------- --------------------------- -------
Male | 30 20 20 | 70
Female | 25 15 40 | 80
-------- --------------------------- -------
Total | 55 35 60 | 150
Given that a student takes the Italian course, what is the probability they are female? Well the Italian course has 60 students, of whom 40 are females studying Italian, so the probability must be:
$$P(\text{F|Italian})=\frac{n(\text{F} \cap \text{Italian})}{n(\text{Italian})}=\frac{40}{60}=\frac{2}{3}$$
where $n(A)$ is the cardinality of the set $A$, i.e. the number of items it contains. Note that we needed to use $n(\text{F} \cap \text{Italian})$ in the numerator and not just $n(\text{F})$, because the latter would have included all 80 females, including the other 40 who do not study Italian.
But if the question were flipped around, what is the probability that a student takes the Italian course, given that they are female? Then 40 of the 80 female students take the Italian course, so we have:
$$P(\text{Italian|F})=\frac{n(\text{Italian} \cap \text{F})}{n(\text{F})}=\frac{40}{80}=\frac{1}{2}$$
I hope this provides intuition for why
$$P(A|B)=\frac{n(A \cap B)}{n(B)}$$
Understanding why the fraction can be written with probabilities instead of cardinalities is a matter of equivalent fractions. For example, let us return to the probability a student is female given that they are studying Italian. There are 150 students in total, so the probability that a student is female and studies Italian is 40/150 (this is a "joint" probability) and the probability a student studies Italian is 60/150 (this is a "marginal" probability). Note that dividing the joint probability by the marginal probability gives:
$$\frac{P(\text{F} \cap \text{Italian})}{P(\text{Italian})}=\frac{40/150}{60/150}=\frac{40}{60}=\frac{n(\text{F} \cap \text{Italian})}{n(\text{Italian})}=P(\text{F|Italian})$$
(To see that the fractions are equivalent, multiplying numerator and denominator by 150 removes the "/150" in each.)
More generally, if your sampling space $\Omega$ has cardinality $n(\Omega)$ — in this example the cardinality was 150 — we find that
$$P(A|B)=\frac{n(A \cap B)}{n(B)}=\frac{n(A \cap B)/n(\Omega)}{n(B)/n(\Omega)}=\frac{P(A \cap B)}{P(B)}$$ |
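The whole table argument is mechanical enough to script; a small Python sketch of the example above, using exact fractions:

```python
from fractions import Fraction

total = 150
n_F_and_It = 40
n_It = 60
n_F = 80

# Conditional probabilities directly from counts:
p_F_given_It = Fraction(n_F_and_It, n_It)   # 2/3
p_It_given_F = Fraction(n_F_and_It, n_F)    # 1/2

# Same answers from joint / marginal probabilities:
joint = Fraction(n_F_and_It, total)
marginal_It = Fraction(n_It, total)
assert joint / marginal_It == p_F_given_It == Fraction(2, 3)
assert p_It_given_F == Fraction(1, 2)
```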
Since $A$ is symmetric, there is an orthogonal matrix $D$ (i.e., $D^{-1} = D^T$) so that $A = D^T \Lambda D$, where $\Lambda$ is diagonal. Since
$$h^T Ah = A \Leftrightarrow (DhD^T)^T \Lambda (DhD^T) = \Lambda$$
we get $$O(q) = D^T O_{\Lambda} D,$$
where $O_\Lambda = \{ h: h^T \Lambda h = \Lambda\}$. Thus to answer (1), (2), it suffices to assume that $A$ is diagonal.
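This reduction can be sanity-checked numerically. The sketch below (NumPy; a random positive-definite $A$ for simplicity, so that elements of $O_\Lambda$ are easy to construct via the rescaling $k = B^{-1} Q B$ with $Q$ orthogonal, the same trick used later in the argument) confirms that conjugating an element of $O_\Lambda$ by $D$ gives an element of $O(q)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric positive-definite A (simplest case: all eigenvalues > 0).
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)

# Diagonalize: A = D^T Λ D with D orthogonal.
eigvals, eigvecs = np.linalg.eigh(A)
Lam = np.diag(eigvals)
D = eigvecs.T
assert np.allclose(D.T @ Lam @ D, A)

# An element k of O_Λ: k = B^{-1} Q B with B = Λ^{1/2} and Q orthogonal,
# since then k^T Λ k = B Q^T B^{-1} (B S B) B^{-1} Q B = Λ (here S = I).
B = np.diag(np.sqrt(eigvals))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
k = np.linalg.inv(B) @ Q @ B
assert np.allclose(k.T @ Lam @ k, Lam)

# Conjugating back lands in O(q): h = D^T k D satisfies h^T A h = A.
h = D.T @ k @ D
assert np.allclose(h.T @ A @ h, A)
```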
By choosing a different $D$ if necessary, assume that the diagonal entries $\lambda_i$ of $\Lambda$ are ordered by their signs. If some of them, say $\lambda_{k+1}, \cdots, \lambda_l$, are zero, then it is not hard to see that $h$ must be of the form
$$ h = \begin{bmatrix} A & 0 & B \\ Y & G & Z \\ C& 0& D\end{bmatrix},$$
where $G \in GL(l-k,\mathbb R)$ and the blocks $Y, Z$ are arbitrary: the zero eigenvalues contribute nothing to $h^T \Lambda h$, so the corresponding rows of $h$ are unconstrained, while the corresponding columns of $h$ must vanish outside $G$, since they have to be $\Lambda$-orthogonal to the columns spanning the nondegenerate part. This already shows that $O(q)$ is noncompact.
Thus we now assume that $\lambda_i\neq 0$ for all $i$. Let $B,S$ be the diagonal matrices with diagonal entries $B_i = \sqrt{|\lambda_i|}$ and $S_i = \text{sgn} (\lambda_i)$ respectively. Then (using $B^T = B$ in the last step)$$\begin{split} &h^T \Lambda h = \Lambda \\ \Leftrightarrow\ &h^T BSB h = B SB \\ \Leftrightarrow\ &(B^{-1} h^T B) S (B h B^{-1}) = S \\ \Leftrightarrow \ & (BhB^{-1})^T S (BhB^{-1}) = S\end{split}$$
Thus again we reduce to the case $\Lambda = S$. (Note that we also found that if $A$ is positive or negative definite, then $O(q)$ is compact, since $S = \pm I$ and so $O(q)$ is diffeomorphic to the orthogonal group. On the other hand, that $O(q)$ is noncompact otherwise can be checked as suggested by Mike in the comments.)
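For a concrete picture of the indefinite case, the hyperbolic rotations give an unbounded family inside $O(1,1)$, with $S = \mathrm{diag}(1,-1)$, which is one way to see the noncompactness just mentioned. A small NumPy check:

```python
import numpy as np

S = np.diag([1.0, -1.0])  # the indefinite form x^2 - y^2, signature (1, 1)

def boost(t):
    """Hyperbolic rotation: lies in O(1,1) for every real t."""
    return np.array([[np.cosh(t), np.sinh(t)],
                     [np.sinh(t), np.cosh(t)]])

# Each boost preserves the form, boost(t)^T S boost(t) = S,
# by the identity cosh^2 t - sinh^2 t = 1.
for t in (0.0, 1.0, 5.0):
    assert np.allclose(boost(t).T @ S @ boost(t), S)

# The entries grow without bound as t increases, so O(1,1) is an
# unbounded subset of the 2x2 matrices, hence noncompact.
print(boost(10.0)[0, 0])  # cosh(10) ≈ 1.1e4
```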
Of course these objects are well known: they are the indefinite orthogonal groups $O(p,q)$. To show that $O(q)$ is indeed smooth, one may argue as usual, using the mapping $ h\mapsto h^T Sh$ and showing that its tangent map is surjective for all $h$ in the preimage of $S$. But the smoothness also follows from the fact that every closed subgroup of a Lie group is automatically a smooth Lie subgroup (Cartan's closed subgroup theorem), so I will just skip the checking.
The dimension of $O_S$ turns out to be the same as that of the orthogonal group $O(n)$, namely $\frac{1}{2}n(n-1)$.
Thus to sum up, all $O(q)$ are smooth manifolds, and if $A$ has $a$ positive eigenvalues, $b$ zero eigenvalues and $c$ negative eigenvalues, then $O(q)$ is diffeomorphic to $O(a,c) \times GL(b,\mathbb R) \times \mathbb R^{b(a+c)}$ (the last factor coming from the free blocks $Y, Z$ above) and its dimension is $$b^2 + b(a+c) + \frac{1}{2} (a+c) (a+c-1).$$
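In the nondegenerate case the dimension count can be verified numerically by computing the dimension of the Lie algebra $\{X : X^T S + S X = 0\}$, the linearization of $h^T S h = S$ at the identity. A sketch (the helper name `lie_algebra_dim` is my own, not from the answer):

```python
import numpy as np

def lie_algebra_dim(S):
    """Dimension of {X : X^T S + S X = 0}, computed as the nullity of the
    linear map X -> X^T S + S X on n-by-n matrices."""
    n = S.shape[0]
    images = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            images.append((E.T @ S + S @ E).ravel())
    M = np.array(images).T  # columns = images of the basis matrices E_ij
    return n * n - np.linalg.matrix_rank(M)

# Both agree with (a+c)(a+c-1)/2 for nondegenerate forms:
assert lie_algebra_dim(np.diag([1.0, 1.0, 1.0])) == 3        # O(3)
assert lie_algebra_dim(np.diag([1.0, 1.0, 1.0, -1.0])) == 6  # O(3,1)
```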