The Gaussian Process is an incredibly interesting object.
Paraphrasing from Rasmussen & Williams 2006, a Gaussian Process is a stochastic process (a collection of random variables, indexed on some set) where any finite number of them have a multivariate normal distribution.
Sample paths of GPs can represent functions. They also have many different interpretations, e.g. in the sense of covariances or basis function representations.
An MVN is completely characterized by its mean and covariance, and in the case of a GP, we can use the covariance to specify what sort of function space we want to look at. For example, to create a representation of a continuous function, let \(y_d\) be a vector of points corresponding to domain points \(x_{n,d}\). To enforce (mean-square) continuity, all we need is a covariance function that is continuous, so that the correlation between \(y_i\) and \(y_j\) tends to one as \(x_i \to x_j\).
Covariance between points is usually specified using kernels. For example, the RBF kernel is \(k(x, x') = \exp\left(-\tfrac{1}{2}\lVert x - x' \rVert^2\right)\).
This can be scaled (hence scaling the function along the y-axis), and a scale (the length-scale \(\ell\)) can be applied to the domain variables, giving \(k(x, x') = \sigma^2 \exp\left(-\lVert x - x' \rVert^2 / 2\ell^2\right)\). Functions drawn from a GP with this kernel are infinitely differentiable, because the kernel itself is infinitely differentiable.
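As a minimal sketch (function and parameter names are my own), the scaled RBF kernel above might be implemented as:

```python
import numpy as np

def rbf_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    """RBF kernel matrix between two 1-D input arrays."""
    sq_dists = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)
```

Here `variance` scales the function along the y-axis and `lengthscale` scales the domain variables, as described above.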
Knowing a few (“training”) points, one can predict all points between the training points (i.e. smoothing) by using the fact that, if \(x_2\) is the set of training points, the conditional distribution of the predicted points \(x_1\) is also multivariate normal, with \(\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2)\) and \(\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\).
In this case, \(\Sigma_{ij} = k({\bf x_i}, {\bf x_j})\), but interestingly, because differentiation is a linear operator, the derivative of a GP is also a GP (assuming the mean and covariance are differentiable), in which case the kernels are \(\operatorname{cov}(f'(x), f(x')) = \partial k(x, x')/\partial x\) and \(\operatorname{cov}(f'(x), f'(x')) = \partial^2 k(x, x')/\partial x\,\partial x'\).
Simulations
Prior Draw from a Gaussian Process
GP Prior Draw Code
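The elided code block might look something like the following sketch (all names are my own; a small jitter term keeps the Cholesky factorization numerically stable):

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale**2)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
K = rbf_kernel(x, x) + 1e-9 * np.eye(len(x))  # jitter for numerical stability
L = np.linalg.cholesky(K)
f_prior = L @ rng.standard_normal(len(x))  # one sample path from the prior
```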
Posterior of a Gaussian Process
We fit a GP with an RBF kernel and length-scale 1.0 to the points \( \begin{bmatrix} -0.5 & -1 \\ 0 & 0.5 \\ 0.5 & 0 \end{bmatrix} \) (each row an \((x, y)\) pair).
GP Posterior Code
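A sketch of what the elided posterior computation might look like, using the three \((x, y)\) points above and the standard MVN conditioning formulas (all names are my own):

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale**2)

# training points (x, y) from the text
x_train = np.array([-0.5, 0.0, 0.5])
y_train = np.array([-1.0, 0.5, 0.0])
x_test = np.linspace(-2, 2, 41)

K = rbf_kernel(x_train, x_train) + 1e-9 * np.eye(len(x_train))
K_s = rbf_kernel(x_test, x_train)
K_ss = rbf_kernel(x_test, x_test)

mu_post = K_s @ np.linalg.solve(K, y_train)        # posterior mean
cov_post = K_ss - K_s @ np.linalg.solve(K, K_s.T)  # posterior covariance
```

At the three training inputs the posterior mean reproduces the training targets almost exactly (up to the jitter).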
Derivatives of the GP
I’ve written code to calculate the derivative kernels efficiently, but I’m still testing it:
GP Derivative Covariances (1-D)
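For the 1-D RBF kernel \(k(x, x') = \exp(-(x - x')^2 / 2\ell^2)\), the derivative covariances have closed forms; a sketch (my own naming) that implements them:

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * d**2 / ell**2)

def rbf_d1(x1, x2, ell=1.0):
    """cov(f'(x1), f(x2)) = dk/dx1."""
    d = x1[:, None] - x2[None, :]
    return -(d / ell**2) * np.exp(-0.5 * d**2 / ell**2)

def rbf_d1d2(x1, x2, ell=1.0):
    """cov(f'(x1), f'(x2)) = d^2 k / (dx1 dx2)."""
    d = x1[:, None] - x2[None, :]
    return (1.0 / ell**2 - d**2 / ell**4) * np.exp(-0.5 * d**2 / ell**2)
```

A finite-difference comparison against `rbf` is a quick way to validate these.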
This seems to work, and I’ve used previous versions of it, coupled with TensorFlow’s Adam and Stan’s HMC/L-BFGS, to solve differential equations:
Modeling SDE equivalents of GPs
There’s some great literature out there about modeling GPs as solutions of differential equations with a random component, but before I encountered that, the following was a brute-force attempt to model the functions \(\mu(X_t), \sigma(X_t)\) in \(dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t\), where \(X_t\) is a continuous-time stochastic process and \(W_t\) is the standard Wiener process:
When \(X_t\) is a Gaussian Process, equating the Euler-Maruyama representation of the SDE above with the GP expressed as \(LZ\) where \(L\) is the Cholesky-decomposition of the covariance matrix, results in the random normal vector \(Z\) being exactly equal to the random part of the SDE: \(dW_t = \sqrt{\Delta t} Z\).
Hence modeling the functions \(\mu, \sigma\) and minimizing the distance between \(Z, dW_t\) is a way to obtain those functions without solving the SDE. When this is mathematically infeasible, the algorithm fails.
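A toy illustration of the identity (the drift and diffusion below are my own choices for illustration): simulate an Euler-Maruyama path, then recover the driving noise \(Z\) exactly when \(\mu, \sigma\) are known:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = lambda x: -x        # assumed drift, chosen for illustration
sigma = lambda x: 0.5    # assumed constant diffusion
dt, n = 0.01, 1000

# simulate an Euler-Maruyama path driven by standard normal draws z
z = rng.standard_normal(n)
x = np.zeros(n + 1)
for i in range(n):
    x[i + 1] = x[i] + mu(x[i]) * dt + sigma(x[i]) * np.sqrt(dt) * z[i]

# knowing mu and sigma, the driving noise is recovered exactly
dx = np.diff(x)
z_hat = (dx - mu(x[:-1]) * dt) / (sigma(x[:-1]) * np.sqrt(dt))
```

In the brute-force scheme described above, \(\mu, \sigma\) would instead be unknown and fitted by minimizing the distance between \(Z\) and the recovered noise.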
I’ve got the suspicion that the RBF GP can’t (easily) be represented this way because it’s infinitely differentiable; the Matern GP isn’t, so a derivative of high enough order would appear to be random (which is what gets set to \(u(t)\)), and integrating that many times over would lead to a nice function. So it’s probably unsurprising that the RBF kernel needs to be written via its infinite series representation before it can be represented as a state-space model (SSM).
There’s also interesting literature out there about Gaussian convolution models, which in some sense represent moving-average counterparts of the autoregressive approach above.
I am a bit confused about resonances in QFT. I am reading Schwartz's QFT book and, as far as I understand, if in a reaction the mass of the particle acting as a propagator is bigger than the sum of the masses of the interacting particles, the propagator particle can be on shell; the propagator then has an imaginary part, and doing some math you get the Breit-Wigner distribution (this is chapter 24.1.4). However, the particles that were discovered as resonances are not propagators, as far as I understand. For example, in muon-muon scattering to obtain the J/psi particle, the two muons interact by exchanging a photon, not a J/psi particle, so the propagator is that corresponding to a photon. Can someone explain to me how this works? How do resonances form, and how does this on-shell propagator come into play? Thank you!
This is partly about clarifying the (nonperturbative) connection between field operators and particles, and partly about relating this connection to how perturbative calculations are done.
I'll start with the nonperturbative part. In general, the connection between field operators and particles is not one-to-one in QFT. Fields are what we use to define the model and to construct local observables. Particles are phenomena that the model predicts. This is evident in QCD, where the model and its observables are constructed in terms of quark and gluon fields, but the particles are mesons and baryons. The connection is closer to being one-to-one in QED, but even in QED there are subtleties: electrons always have associated electric fields, and we can have short-lived bound states like positronium. The example you cited ($\mu^+\mu^-\rightarrow J/\psi$) requires a model more analogous to QCD, where the connection is not one-to-one at all.
Continuing with the nonperturbative part: let $|0\rangle$ be the vacuum state, and let $A(x)$ be some operator constructed from the field operators. To be specific, let's take $A(x)$ to be something like $A(x)=\mu_+(y)\mu_-(x-y)$, where $\mu_\pm(x)$ are muon field operators. The important thing here is that when applied to $|0\rangle$, the operators $\mu_\pm(x)$ do not simply create muon particles. The state $A(x)|0\rangle$ is a superposition of many terms, one of which contains just a pair of oppositely charged muon particles, but other terms contain other stuff, including photons, the $J/\psi$, other mesons, and just about anything else that has the same quantum numbers as a pair of oppositely charged muons. The relative amplitudes of these various contributions change in time (I'm working in the Heisenberg picture here, so the $x$ in $A(x)$ includes a time coordinate), but there is no time at which a simple construct like $A(x)|0\rangle$ contains only a pair of muon particles. To construct such a state would require acting on the vacuum with an operator much more complicated than $A(x)$, and I'm pretty sure that nobody knows exactly which operator that would be.
The key question here is: how can we isolate the contribution from just the particle (or resonance) of interest, say the $J/\psi$ meson? We don't know how to do this directly in the state-vector $A(x)|0\rangle$, but we can do it indirectly. Here's the idea: write $$ A(x)|0\rangle = \sum_p f(p)|p\rangle + \text{other stuff},$$ where $|p\rangle$ represents a state with nothing but a single $J/\psi$ with momentum $p$, $f(p)$ is a complex coefficient, and "other stuff" includes a term with nothing but photons and any other terms that have the same quantum numbers as a pair of oppositely charged muons. We don't know how to express the states $|p\rangle$ explicitly in terms of field operators, and that's okay. We'll use a trick. To use the trick, we need to remember that the $J/\psi$ is not a stable particle, so the vectors $|p\rangle$ cannot be eigenstates of the Hamiltonian. This means $$ e^{-iHt}|p\rangle = z|p\rangle + \text{other stuff},$$ where $z$ is some time-dependent complex number whose magnitude must decrease in time, because $|p\rangle$ is not an eigenstate. Empirically, we expect the magnitude to decay exponentially, so $$ e^{-iHt}|p\rangle \approx e^{-\Gamma t}e^{-iE(p)t}|p\rangle + \text{other stuff}$$ for some $\Gamma>0$. There is no violation of unitarity here: as the norm of the $|p\rangle$ term decreases, the norm of the "other stuff" increases, because that's where the decay products are going. I wrote this relationship as an approximation only because the time-dependence of the norm of the $|p\rangle$ term might not be exactly described by the simplistic $\exp(-\Gamma t)$ factor.
Now, consider the time-ordered correlation function $$ \langle 0|T\,B(y)A(x)|0\rangle,$$ where $A$ and $B$ are operators chosen so that $A|0\rangle$ and $B|0\rangle$ include terms describing the initial and final (respectively) configurations of interest, such as an incoming pair of muons and an outgoing spray of whatever. We can use the LSZ trick to select the right incoming/outgoing particle configurations. I'll be very schematic here, because this is already worked out in several places. (I don't have access to Schwartz's QFT book right now, but it's in Weinberg's book that AccidentalFourierTransform cited, at least for the special case $\Gamma=0$, and allowing $\Gamma\neq 0$ is an easy generalization.) Write the $T$-ordering in terms of $\theta$-functions and insert a complete set of $J/\psi$ momentum eigenstates to get \begin{align*} \langle 0|T\,B(y)A(x)|0\rangle &\sim \theta(y_0-x_0)\sum_p\langle 0|B(y)|p\rangle\,\langle p|A(x)|0\rangle\\ &+ \theta(x_0-y_0)\sum_p\langle 0|A(x)|p\rangle\,\langle p|B(y)|0\rangle + \text{other stuff}.\end{align*} We're inserting $J/\psi$ eigenstates because we want to isolate the contribution from the $J/\psi$ resonance; everything else is lumped into "other stuff". Now write $x=(t,\mathbf{x})$ to separate the temporal and spatial components of $x$, and use $$ \langle 0|A(x)|p\rangle = e^{-\Gamma t}e^{-iE(p)t}\langle 0|A(0,\mathbf{x})|p\rangle,$$ and likewise for $\langle 0|B(y)|p\rangle$. Now take the Fourier transform to go from $x,y$ to the energy/momentum domain. As shown elsewhere (like Weinberg's book), this reveals the $J/\psi$ pole, with an imaginary part due to $\Gamma$. This pole off the real axis shows up as a resonance in scattering cross-sections. (At least, it looks like a pole when we ignore the "other stuff". With a little hand-waving and empirical input, we could probably almost justify this.)
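Schematically, the step that produces the resonance shape is the one-sided time Fourier transform of the decaying exponential (keeping only the $|p\rangle$ term):
$$ \int_0^\infty dt\; e^{iEt}\, e^{-iE(p)t - \Gamma t} = \frac{i}{E - E(p) + i\Gamma}, $$
which has a pole at $E = E(p) - i\Gamma$, just below the real axis; its squared magnitude is the Breit-Wigner (Lorentzian) line shape $\propto 1/\big[(E-E(p))^2 + \Gamma^2\big]$.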
Everything I've said so far is nonperturbative, except maybe the LSZ part. Now I'll connect it to perturbation theory. Working to some low order in a small-coupling expansion, the photon "pole" (or whatever the massless version of a pole is) will show up in a propagator, because the photon corresponds closely to one of the fields that was used to construct the model. But we won't see the $J/\psi$ pole/resonance in a small-coupling expansion (at least not without some other fancy trickery, like what is mentioned on page 153 of Peskin and Schroeder's An Introduction to Quantum Field Theory), because it doesn't correspond closely to any individual field that was used to construct the model.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I am confused about what it is exactly that a reduced density operator describes. To illustrate, I came across the following seemingly paradoxical argument.
Consider a bipartite system $AB$, described by the pure state:
$$| \psi \rangle = \sum_{a,b} \psi(a,b) |ab \rangle$$
Its density matrix is defined as:
$$\rho \equiv | \psi \rangle \langle \psi |$$
With reduced density operators:
$$\rho_A = tr_B(\rho) \qquad \rho_B = tr_A(\rho)$$
We can construct a new operator:
$$\rho_{AB} = \rho_A \otimes \rho_B$$
This is equal to $\rho$ only if $|\psi\rangle$ is a separable (product) state, i.e. there are no correlations between $A$ and $B$. Another way to say this is:
$$0 = S(\rho) \leq S(\rho_A) + S(\rho_B) = S(\rho_A \otimes \rho_B) = S(\rho_{AB})$$
Because if $A$ and $B$ are entangled, then $\rho$ contains information about these correlations, but $\rho_{AB}$ does not (is this wrong?).
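As a concrete check (a Bell state, my own example): the joint state is pure, so \(S(\rho) = 0\), while each reduced state is maximally mixed with \(S(\rho_A) = S(\rho_B) = \ln 2\), and \(\rho \neq \rho_A \otimes \rho_B\):

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# partial traces: reshape to (i_A, i_B, j_A, j_B) and trace out one subsystem
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ibjb->ij', rho4)   # trace over B
rho_B = np.einsum('aiaj->ij', rho4)   # trace over A

def entropy(r):
    """Von Neumann entropy in nats."""
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())
```

Here `entropy(rho)` is 0 and `entropy(rho_A)` is ln 2, while `np.kron(rho_A, rho_B)` is the maximally mixed I/4, which clearly differs from `rho`: the product construction keeps the marginals but discards the correlations.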
So say $A$ and $B$ are entangled; then $\rho_{AB}$ does not contain that information, but I was under the impression that $\rho_A$ and $\rho_B$ do contain (some? all?) of this correlation information; otherwise both $A$ and $B$ should be described by pure states. I realise that $\rho_{AB}$ has no physical meaning in the case of entanglement, but even as a purely mathematical construct I don't see how that information can just vanish.
I guess my question is this: what information exactly does the reduced density operator encompass? And, if any of it is related to the correlations, how can this be reconciled with the argument in the paragraph above?
Let's say I'm trying to derive the expression for the velocity of a really low-orbit satellite. Let $S$ be the frame of reference of the earth's center (at rest). Let $S'$ be the frame of an observer on the surface of the earth.
The first step should be writing:
\begin{equation} \boldsymbol{r_s}=\boldsymbol{r}-\boldsymbol{R} \end{equation}
where $\boldsymbol{r_s}$ is the position vector of the satellite from the surface of the earth, $\boldsymbol{r}$ is the position vector of the satellite from the center, and $\boldsymbol{R}$ is the position vector of the surface frame from the center frame.
The next thing should be deriving the equality with respect to time.
Since $\boldsymbol{R}$ and $\boldsymbol{r}$ are vectors of constant magnitude, the result should be:
\begin{equation} \boldsymbol{v_s}=\boldsymbol{\omega}\times\boldsymbol{r}-\boldsymbol{\Omega}\times\boldsymbol{R} \end{equation}
where $\boldsymbol{\omega}$ is the angular velocity of the satellite and $\boldsymbol{\Omega}$ is the angular velocity of the earth. But I feel like something is missing there. The "derivative operator" in rotating systems is defined as $\frac{d}{dt} = \left(\frac{d}{dt}\right)_r + \boldsymbol{\omega}\times$, where $\left(\frac{d}{dt}\right)_r$ is the derivative taken in the rotating system.
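For the constant-magnitude case, the identity \(\frac{d\mathbf{r}}{dt} = \boldsymbol{\omega}\times\mathbf{r}\) is easy to sanity-check numerically (a toy setup of my own choosing):

```python
import numpy as np

omega = 2.0
w_vec = np.array([0.0, 0.0, omega])  # rotation about the z-axis
R, t, h = 7000.0, 0.3, 1e-6

def r(t):
    """A vector of constant magnitude R rotating with angular velocity omega."""
    return R * np.array([np.cos(omega * t), np.sin(omega * t), 0.0])

numeric = (r(t + h) - r(t - h)) / (2 * h)  # central-difference derivative
analytic = np.cross(w_vec, r(t))           # omega x r
```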
This might be really simple, but I've always had a really hard time with rotating frames of reference.
EDIT:
My answer is wrong. If I suppose that the satellite has a polar orbit with the same angular velocity as the earth, and that the observer on the surface is on the equator, then:
\begin{equation} v_s=0 \end{equation}
which is false.
The equation and its meaning:
Consider two sets $(A_i)_{i=0,\dots,m_a}$, $(B_i)_{i=0,\dots,m_b}$ of Hermitian matrices and a set of positive semidefinite matrices $(C_i)_{i=0,\dots,m_c}$. Each matrix has dimension $n \times n$. These form the ODE: $$ y^\prime(t) = \sum_{i=0}^{m_a} (A_iy+yA_i^\dagger) + \sum_{i=0}^{m_b}(B_iy-yB_i^\dagger) + \sum_{i=0}^{m_c}C_iyC^\dagger_i. \qquad(*) $$ Note that $y(0)$ is a Hermitian $n\times n$ matrix too.
Eqs. like $(*)$ describe the norm-preserving evolution of a linear operator with respect to the anti-commutator (1st), commutator (2nd) and "basis-transform" (3rd). Eqs. like this occur often in Quantum mechanics, see "master equations" for more information.
The problem with vectorization of the matrix matrix form:
Most common ODE solvers operate on vectors. So let $x(t)$ be the vector containing all columns of $y(t)$ stacked; then, with the Kronecker product $\otimes$, the ODE becomes: $$ x^\prime(t) = \left(\sum_{i=0}^{m_a} (1 \otimes A_i + A_i^*\otimes 1) + \sum_{i=0}^{m_b}(1 \otimes B_i - B_i^*\otimes 1) + \sum_{i=0}^{m_c}C_i^*\otimes C_i\right)x \equiv Xx(t). $$ But $X$ is a full $n^2 \times n^2$ matrix! Note that the Jacobian coincides with $X$. So each product $Xx(t)$ costs $O(n^4)$, asymptotically worse than the $O(n^3)$ matrix-matrix products above. Thus, I'm looking for an algorithm to solve the ODE in the matrix form $(*)$ for $y(t_{i=0,\dots,m_t})$.
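For concreteness, a sketch that evaluates the right-hand side in matrix form and checks it against the vectorized Kronecker form on random data (all names and data are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A = [random_hermitian(n)]
B = [random_hermitian(n)]
C = [random_hermitian(n)]
y = random_hermitian(n)

def rhs_matrix(y):
    """O(n^3) evaluation of (*) using matrix-matrix products."""
    out = np.zeros_like(y)
    for Ai in A:
        out += Ai @ y + y @ Ai.conj().T
    for Bi in B:
        out += Bi @ y - y @ Bi.conj().T
    for Ci in C:
        out += Ci @ y @ Ci.conj().T
    return out

# the full n^2 x n^2 vectorized operator X (O(n^4) per product)
I = np.eye(n)
X = sum(np.kron(I, Ai) + np.kron(Ai.conj(), I) for Ai in A) \
  + sum(np.kron(I, Bi) - np.kron(Bi.conj(), I) for Bi in B) \
  + sum(np.kron(Ci.conj(), Ci) for Ci in C)

x = y.reshape(-1, order='F')  # stack the columns of y (column-major vec)
```

Both forms agree (`X @ x` equals the column-stacking of `rhs_matrix(y)`), but the matrix form never builds the $n^2 \times n^2$ operator.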
A few additional properties
The eigenvalues of the matrices may vary over several orders of magnitude,
The dimension $n$ is never bigger than $10^3$; usually it is between $10^1$ and $10^2$,
The matrices $A_i,B_i,C_i$ may individually be sparse, but the sum, i.e. $X$, is not sparse,
The eigenvalues can be negative,
Stiffness and time-dependent scaling of the matrices, i.e. $A_j \to \alpha_j(t) A_j$, can occur,
Precision should be at least $10^{-5}$, but never more than $10^{-8}$.
What I've tried
I know ODEPACK, but it relies on the Jacobian, which is unnecessarily big here. RK4 fails, except for very, very small step sizes. So, are there solutions which are easy to adapt for this problem, preferably in Fortran?
Peikert's Method
The continuous error distribution over $K_\mathbb{R}$, denoted $\psi$, is usually a Gaussian that adds noise to $(a_i, b_i)$, producing the equation:
$(a_i, b_i = s · a_i + e_i) \in R_q × K_\mathbb{R}/qR$
By analyzing $e_i ← ψ$ and $a_i ← R_q$ an attacker is able to derive:
$\lbrace a'_i = a_i \pmod \mathfrak{q} ,\ \ b'_i = b_i \pmod \mathfrak{q} \rbrace \ \in R/{\mathfrak{q}} × K_\mathbb{R}/\mathfrak{q}$
This is done by reducing the samples modulo q.
In section 2.2.4 Peikert states: "Any fractional ideal I ⊂ K maps, via the canonical embedding σ, to a lattice L = σ(I) ⊂ H, which is called an ideal lattice." This then means that the ideal divisor, treated as a fractional ideal, maps to the ideal lattice.
By demonstrating that you can derive the variables $(a_i, b_i)$ from the Gaussian distribution, it's immediately apparent that $\psi \pmod q$ is detectably non-uniform. This amounts to an attack that can separate the "noise" $e_i$ from the secret term $s \cdot a_i$, which in turn means a bias is evident in the error distribution. This is a serious issue for R-LWE, because the scheme fundamentally relies on the error distribution being unbiased. Against a poorly implemented R-LWE scheme, an attack using reduction $\pmod q$ may succeed.
The norm of the ideal divisor is defined as $N(\mathfrak{q}) := |R/\mathfrak{q}|$, so if (as Peikert states) $N(\mathfrak{q})$ is too small, the decision form is weak: you simply look for any non-uniform $\hat s$ and accept if one exists. The second bullet point of the attacks in section 3.1 is similar, in that an attacker focuses on errorless samples, which in turn attack the statistical uniformity of the distribution.
The third bullet point addresses solutions to both of the attacks by stating a proper implementation of a continuous Gaussian parameter mitigates the attacks. Essentially, if you've implemented it properly no attacker could distinguish between the error, or "noise" compared to $(a_i, b_i)$.
Modular Arithmetic
Reduction $\pmod q$ is defined by the rules of modular arithmetic. This is more or less straightforward for addition, subtraction, and multiplication, but a little more nuanced for division. Addition and multiplication are carried out by adding or multiplying as you normally would, then dividing the result by $m$ and keeping the remainder, which is an element of the ring. Given integers $(a,b)$, they are congruent $\pmod m$ if their difference is divisible by $m$; this is expressed as $a \equiv b \pmod m$. The numbers satisfying $a \equiv 0 \pmod m$ are exactly those divisible by $m$. This is relevant because the rings used in R-LWE are defined as some form of integer ring, and reduction $\pmod q$ for prime $q$ restricts the range to $\{0, 1, \dots, q-1\}$. Fermat's little theorem states that for a prime $q$ and an arbitrary integer $a$:
$a^{q-1} \equiv \bigg \lbrace \begin{matrix}1 \pmod{q},\ q \nmid{a} \\ 0 \pmod{q},\ q \mid{a}\end{matrix}$
The notation $q \mid{a}$ is read as "q divides a" and equivalently, $q \nmid{a}$ is read as "q does not divide a."
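Both cases of Fermat's little theorem are easy to check with modular exponentiation (a quick illustration; $q = 7$ is chosen arbitrarily):

```python
q = 7  # an arbitrary prime for illustration

for a in range(1, 30):
    if a % q != 0:
        assert pow(a, q - 1, q) == 1  # q does not divide a
    else:
        assert pow(a, q - 1, q) == 0  # q divides a, so a^(q-1) ≡ 0 (mod q)
```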
Tying it Together
If I am able to calculate:
$a'_i = a_i \pmod \mathfrak{q} ,\\ b'_i = b_i \pmod \mathfrak{q}$
Using a reduction $\pmod q$, I'm effectively able to sift through the integers to pull apart the "noisy" errors from the private key $e_i$. Given Fermat's little theorem, an algorithm to return an accept value under the conditions Peikert outlines in the first bullet point would be more or less straightforward to implement. The reduction using modular arithmetic is how all operations are carried out with respect to the elements of the ring.
Examples
Examples for the decision version of R-LWE would be having to find the closest vector, which is hidden by "noise." If a bias is present, and $\psi$ is detectably non-uniform the CVP is much more straightforward because the noise is essentially removed from the problem.
An example of the search version of R-LWE would be finding the shortest non-zero vector given a basis, but again this becomes straightforward once the error is removed. You simply compare the lengths of the dual vectors, and this is as simple as comparing rows in a matrix which represent the coefficient vector of $e$.
An example outside of cryptography is a standard analog clock. The way we tell time uses modular arithmetic. If it's 3 p.m. and you tell someone you'll meet them in 11 hours, once you hit the number 12 you reduce the number so the actual time of the meeting is less than 12 (assuming you aren't using a 24 hour standard, in which case the time will always be less than or equal to 24).
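In code, the clock example is just a remainder computation:

```python
start_hour, delta = 3, 11            # 3 p.m., meeting in 11 hours
meeting = (start_hour + delta) % 12  # reduce once you pass 12
# meeting is 2, i.e. 2 a.m. on a 12-hour clock
```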
Reduction $\pmod q$, in the methods Peikert outlines, is the difference between only knowing the hour of a meeting and needing the exact time down to seconds. His attack based on poor implementations makes this problem easier. Instead of only having an hour, an alarm might as well go off when you return the ACCEPT for $\hat s$, because the value is detectably non-uniform. The same is true for the search form of R-LWE: you're essentially able to find the correct meeting time by reducing the numbers you sift through. So instead of comparing all possible times on the clock, you can reduce it to a smaller range (like comparing the dual vectors as rows in a matrix).
Why is this not 0 on 50g or Prime?
07-16-2018, 03:35 PM
Post: #1
Why is this not 0 on 50g or Prime?
\[\frac { 1 }{ \infty +\cfrac { 1 }{ 0 } } =\frac { 1 }{ \infty +\infty } =\frac { 1 }{ \infty } =0\]The above is correct, no? Why then do both the HP 50g (in exact mode) and the HP Prime (in CAS view) say that the first expression, 1/(inf+1/0), is "undefined"?
<0|ɸ|0>
-Joe-
07-16-2018, 04:56 PM
Post: #2
RE: Why is this not 0 on 50g or Prime?
07-16-2018, 05:05 PM
Post: #3
RE: Why is this not 0 on 50g or Prime?
(07-16-2018 03:35 PM)Joe Horn Wrote: \[\frac { 1 }{ \infty +\cfrac { 1 }{ 0 } } =\frac { 1 }{ \infty +\infty } =\frac { 1 }{ \infty } =0\]The above is correct, no? Why then do both the HP 50g (in exact mode) and the HP Prime (in CAS view) say that the first expression, 1/(inf+1/0), is "undefined"?
Because \( \frac{1}{0} \) is undefined? Or failing that, because \( \infty + \frac{1}{0} = \frac{\infty \cdot 0 + 1}{0} \) and \(\infty \cdot 0\) is undefined?
— Ian Abbott
07-16-2018, 05:22 PM (This post was last modified: 07-16-2018 05:51 PM by BartDB.)
Post: #4
RE: Why is this not 0 on 50g or Prime?
try the following:
\[\frac { 1 }{ \infty +\left | \cfrac { 1 }{ 0 } \right | } \]
Also in WA:
http://www.wolframalpha.com/input/?i=1%2...bs(1%2F0))
EDIT:
The reason for the WA answer is that 1/0 gives "complex infinity", which cannot be added to real infinity (definition of complex infinity: a complex quantity with infinite magnitude but indeterminate phase).
The 50G also recognises 2 infinities: see AUR p3-289
"The calculator recognizes two kinds of infinity: signed and unsigned. Evaluating '1/0' gives an unsigned infinity. Selecting infinity from the keyboard ... returns '+inf' and the sign can be changed. Calculations with the unsigned infinity return unsigned infinity or ? as their result. Calculations with the signed infinity can return ordinary numeric results, as in the example. Positive infinity and unsigned infinity are equal if tested with ==, but are not identical if tested with SAME."
07-16-2018, 05:38 PM
Post: #5
RE: Why is this not 0 on 50g or Prime?
(07-16-2018 03:35 PM)Joe Horn Wrote: \[\frac { 1 }{ \infty +\cfrac { 1 }{ 0 } } =\frac { 1 }{ \infty +\infty } =\frac { 1 }{ \infty } =0\]The above is correct, no?
No.
Quote:Why then do both the HP 50g (in exact mode) and the HP Prime (in CAS view) say that the first expression, 1/(inf+1/0), is "undefined"?
Because it is. Mathematically, 1/0 is undefined, period.
Very, very informally you can see why this way:
The value of 1/0 depends on the sign of 0.
So, without knowing the sign of 0 the denominator is Undefined and the calculator is Ok.
Very informally. It can be made more rigorous by taking limits, etc., but that would be overkill.
Regards.
V.
.
Find All My HP-related Materials here: Valentin Albillo's HP Collection
07-17-2018, 03:51 AM
Post: #6
RE: Why is this not 0 on 50g or Prime?
Fascinating. I'm always learning. Thanks, guys!
<0|ɸ|0>
-Joe-
07-17-2018, 12:37 PM
Post: #7
RE: Why is this not 0 on 50g or Prime?
Excluding limits when approaching from positive or negative, isn't 0 actually neither positive nor negative by definition?
Maybe I've just been happily uninformed for many years...
(07-16-2018 05:38 PM)Valentin Albillo Wrote:
I've never had a belief or opinion about these... are these simply defined as allowed and not defined respectively as a convention?
--Bob Prosperi
07-17-2018, 01:20 PM
Post: #8
RE: Why is this not 0 on 50g or Prime?
My HP71 has +/- 0.
07-17-2018, 10:33 PM
Post: #9
RE: Why is this not 0 on 50g or Prime?
This is an interesting topic. The answer also depends on whether we allow hyperreal definitions.
My stupid engineer head claims that 1/0 = inf, by purely "logical" reasoning, because 1/(1+inf) = 0. Also 0/0 = 1. So 0 stands for something so nonexistently insignificant that you just don't bother, inf is something so enormous that you just don't have the imagination for it, and 1 is the swap point of the numerical continuum.
Of course this is just a personal view, and as such a worthless little fallacy, which I throw out of the window if it gives a wrong solution for the task at hand.
It's also interesting how all the primes $P$ have equivalents in $1/P$ form, if I have understood correctly.
07-17-2018, 11:12 PM (This post was last modified: 07-17-2018 11:16 PM by Claudio L..)
Post: #10
RE: Why is this not 0 on 50g or Prime?
This is quite a subject.
The IEEE standard decided to create the signed zero, so they could map 1/0 to either +Inf or -Inf. That was their solution to avoid the "Undefined".
What if you don't want to have signed zero?
In reality, 1/0 should be "complex infinity", which is a circle, not a point (all points at an infinite distance from zero, coming from any direction). If you define a complex infinity (Wolfram and any serious math package seem to prefer this), then you don't need a signed zero and problem solved. Or not? How do you even handle a complex infinity? The only operation you can define with it is 1/Inf = 0. Anything else you try will have to be "Undefined", since you have more than a point as a result.
Now if you don't want to deal with the concept of complex numbers, then you could take only the real axis, and define "unsigned infinity", which is 2 points at the intersection of the complex infinity circle with the +x and -x real axis. But in practice this is the same as the complex infinity: because it represents 2 points, any operation that's not 1/Inf will be undefined.
Then you may need to use the individual "values", +Inf and -Inf that you can actually define a few more operations on.
So now we have 4 different types of infinity (complex, unsigned, +Inf and -Inf) to make a real mess of everything.
For newRPL I didn't know what to do so I opted for:
0 means 0, there's no signed zero. I implemented the complex infinity but not the unsigned infinity.
Then for practical reasons, when the user disables the complex mode flag, 1/0= +Inf.
Why? Because VTile said it, take it up on him :-)
EDIT: If you disable complex mode, Joe Horn gets his expected result of 0 (granted, there will be an error saying "Infinite result" but you can ignore that), but if you activate complex mode, Valentin gets his more academically correct "Undefined".
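The IEEE 754 behavior mentioned above (a signed zero, with 1/0 mapping to signed infinities) can be checked in any language with IEEE floats; a small Python sketch (NumPy is used because plain Python raises an exception on float division by zero):

```python
import math
import numpy as np

# the two zeros compare equal but carry different signs
assert 0.0 == -0.0
assert math.copysign(1.0, -0.0) == -1.0

# IEEE division: the sign of the zero picks the sign of the infinity
with np.errstate(divide='ignore', invalid='ignore'):
    print(np.float64(1.0) / np.float64(0.0))   # inf
    print(np.float64(1.0) / np.float64(-0.0))  # -inf
    print(np.float64(0.0) / np.float64(0.0))   # nan
```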
07-18-2018, 12:40 AM
Post: #11
RE: Why is this not 0 on 50g or Prime?
Oh, it appears my HP71 has it too.
Though I can happily report that I've never encountered it, most likely because I'm a Mechanical Engineer and such notions never seem to crop up in the real world of measuring and calculating physical things.
Wonder if there's a flag to disable that, to ensure I don't encounter it in the future as well....
--Bob Prosperi
07-18-2018, 12:45 AM
Post: #12
RE: Why is this not 0 on 50g or Prime?
.
Hi, Bob:
rprosperi Wrote:Oh, it appears my HP71 has it too.
Mine too, +0 and -0 (which played an essential part in one of my S&SMCs), plus +Inf, -Inf, signaling NaNs and silent (i.e.: unsignaling) NaNs, all of which can be the real and/or imaginary part of a complex number.
Regards.
V.
.
Find All My HP-related Materials here: Valentin Albillo's HP Collection
07-18-2018, 01:23 AM
Post: #13
RE: Why is this not 0 on 50g or Prime?
(07-18-2018 12:45 AM)Valentin Albillo Wrote: Mine too, +0 and -0 (which played an essential part in one of my S&SMCs), plus +Inf, -Inf, signaling NaNs and silent (i.e.: unsignaling) NaNs, all of which can be the real and/or imaginary part of a complex number.
Yes, I explored most of these when I first got my 71 in the 80's. I found it particularly interesting to fabricate programs that generate NaNs (both kinds), but also of course to create programs that could handle (and even DISPLAY/PRINT) answers that include Infinity, thus proving wrong my Numerical Methods professor, who had claimed that any good program spends as much code trapping and preventing such cases to avoid crashes as it does on the actual intended calculations!
But I somehow never noticed the -0 capability, or possibly did but then seeing no use for it, successfully forgot it over the years.
Anyhow, I am thrilled to continue to learn new things about my favorite machine, so thanks to you guys for this interesting discussion.
--Bob Prosperi
07-18-2018, 01:41 PM
Post: #14
RE: Why is this not 0 on 50g or Prime?
07-18-2018, 04:05 PM
Post: #15
RE: Why is this not 0 on 50g or Prime?
(07-18-2018 01:41 PM)KeithB Wrote:
First off, yes, that is the right consensus and a mathematically solid definition. Hence my note that this is my personal, rather meaningless, philosophical little thinking game.
However, if 1/inf = 0 and (a/b)/(a/b) = 1, then it would seem only reasonable to say that (1/inf)/(1/inf) = 1 and 0/0 = 1. However, it is not allowed to declare inf = inf; only inf <> inf is allowed. With mathematical rigor this (0/0 = 1) is of course not the case, since there are these 'special' rules and blu tack for certain things.
07-18-2018, 05:18 PM (This post was last modified: 07-18-2018 05:18 PM by ijabbott.)
Post: #16
RE: Why is this not 0 on 50g or Prime?
(07-18-2018 04:05 PM)Vtile Wrote:
So if \(0/0 = 1\), does \(2\cdot 0/0 = 0/0 = 1\) or does \(2\cdot 0/0 = 2\cdot 1 = 2\)? Things are undefined for a reason!
— Ian Abbott
07-18-2018, 05:45 PM (This post was last modified: 07-18-2018 06:05 PM by Vtile.)
Post: #17
RE: Why is this not 0 on 50g or Prime?
(07-18-2018 05:18 PM)ijabbott Wrote: So if \(0/0 = 1\), does \(2\cdot 0/0 = 0/0 = 1\) or does \(2\cdot 0/0 = 2\cdot 1 = 2\)?
Number 2 would be the solution: in symbolic form, (2a/b)*(b/a) = 2, where a = 1 and b = inf.
.. But what if +inf + n = -inf
Mind you, this is a personal view and a small amusing thinking game; like I said earlier, if something is not working I throw it out of the window.
07-18-2018, 06:34 PM
Post: #18
RE: Why is this not 0 on 50g or Prime?
According to me,
(2a/b)*(b/a) = (2a/a)*(b/b) = 2*something, because b/b is undefined (inf/inf)
07-18-2018, 07:01 PM (This post was last modified: 07-18-2018 08:09 PM by Vtile.)
Post: #19
RE: Why is this not 0 on 50g or Prime?
(07-18-2018 06:34 PM)klesl Wrote: According to me
In my little thinking bubble, (inf/inf) would be 1. All this naturally arises from the nature of infinity and also its inverse being zero (which would lead to something odd like 1/Real as a fraction, known as
07-18-2018, 08:31 PM
Post: #20
RE: Why is this not 0 on 50g or Prime?
Not for all values of a and b.
More generally, $\frac{x}{x} = 1$ for all $x \neq 0$.
The discontinuities must always be excluded. For example, in equations with fractions, the points where the denominator(s) equal 0 must be excluded (i.e., given exception handling).
Please refer to "Does the Cook-Levin theorem relativize?".
Also refer to Arora, Impagliazzo and Vazirani's paper: Relativizing versus Nonrelativizing Techniques: The Role of Local Checkability.
In the paper by Baker, Gill and Solovay (BGS) on Relativizations of the P =? NP question (SIAM Journal on Computing, 4(4):431-442, December 1975), they give a language $B$ and a language $U_B$ such that $U_B \in NP^B$ and $U_B \not\in P^B$, thus proving that there are oracles $B$ for which $P^B \neq NP^B$.
We shall modify the $U_B$ and $B$ to $U_{B'}$ and $B'$ such that we get a new language that cannot be reduced to 3SAT even if there is availability of $B'$ as an oracle.
First assume that we can pad every $3SAT$ boolean instance $\phi$ to $\phi'$ with some additional dummy 3CNF expressions such that $|\phi'|$ is odd and they are equivalent, i.e., $\phi$ is satisfiable iff $\phi'$ is satisfiable. We can do it in $n+O(1)$ time and with $O(1)$ padding, but even if it takes polynomial time and extra polynomial padding it does not matter.
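The padding step can be sketched as follows. The string encoding and the helper name `pad_to_odd` are illustrative assumptions, not part of the paper's construction: appending a tautological clause over a fresh variable `z0` preserves satisfiability, and a single extra blank fixes the length parity if needed.

```python
def pad_to_odd(phi: str) -> str:
    """Pad a 3-CNF formula (as a string) to odd length, preserving satisfiability.

    The appended clause uses a fresh variable z0 and is a tautology, so
    phi' is satisfiable iff phi is. O(1) extra length, n + O(1) time.
    """
    padded = phi + " & (z0 | ~z0 | z0)"  # tautological clause over a fresh variable
    if len(padded) % 2 == 0:
        padded += " "                    # whitespace tweak to make |phi'| odd
    return padded
```

Any other equisatisfiable O(1)-length padding would do just as well; only the odd final length matters for the argument.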
Now we need to combine the $B$ and $3SAT$ to $B'$ somehow so that BGS theorem still holds but additionally $3SAT \in P^{B'}$. So we do something like the following.
$U_{B'} = \{1^n \ \ |\ \ \exists x \in B', $ such that $|x| = 2n\}$ and
$B' = B'_{constructed} \ \cup \{\phi \ \ |\ \ \phi \in 3SAT $ and $ |\phi| $ is odd $\}$.
Now we shall construct $B'_{constructed}$ according to the theorem such that if the deterministic machine $M_i^{B'}$ for input $1^n$ ($n$ is determined as in theorem) asks the oracle $B'$ a query of odd length we check if it is in $3SAT$ and answer correctly but if it asks a query of even length we proceed according to the construction, that is, answering correctly if it is already in the table, otherwise answer no every time. Then since we are running for $1^n$ we flip the answers at $2n$ length so that $M_i^{B'}$ does not decide $U_{B'}$.
We can prove similarly as in the BGS theorem that for this $B'$ and $U_{B'}$ too, we have $U_{B'} \in NP^{B'}$ and $U_{B'} \not\in P^{B'}$.
$U_{B'} \in NP^{B'}$ is easy to prove. We construct a non-deterministic Turing machine which, for input $1^n$, creates non-deterministic branches that each run for $2n$ steps to generate a different $2n$-length string, and then asks the oracle $B'$ whether that $2n$-length string is in $B'$; if the answer is yes it accepts $1^n$, else it rejects $1^n$. This construction shows that $U_{B'} \in NP^{B'}$.
$U_{B'} \not\in P^{B'}$ can be proved with the help of a diagonalization argument. Basically, $U_{B'}$ differs from $L(M_i^{B'})$ for every oracle Turing machine $M_i$ that has $B'$ as an oracle. This is because of how we construct $B'_{constructed}$.
Now we shall prove by contradiction that there does not exist a reduction from $U_{B'}$ to $3SAT$ even with the availability of oracle $B'$.
Assume there is a reduction using oracle $B'$, i.e., $U_{B'} \leq^{B'}_P 3SAT$.
That means we can reduce a string of the form $1^n$ to a 3SAT instance $\phi$ using a polynomial-time deterministic machine which uses $B'$ as oracle.
We can now describe a deterministic TM $M^{B'}$ which will decide strings $U_{B'}$ in polynomial time using $B'$ as an oracle. First this machine reduces the input $1^n$ to a 3SAT-instance $\phi$ using $B'$ as an oracle. This can be done because we have the reduction above. Then if $\phi$ is not odd length $M^{B'}$ will pad it to make $\phi'$ which is odd length. Next, it will give this $\phi'$ to oracle $B'$ and get the answer yes/no. It will accept if the answer is yes and reject if the answer is no.
This machine is deterministically polynomial and uses oracle $B'$.
Thus we have proved that $U_{B'} \in P^{B'}$, a contradiction.
Therefore $U_{B'} \not\leq^{B'}_P 3SAT$.
Prove $2^n \mid (b+\sqrt{b^2-4c})^n + (b-\sqrt{b^2-4c})^n $ for all $n\ge 1$ and $b,c$ are integers.
Is it possible to prove this without induction?
Since the characteristic polynomial of twice the integer matrix
$$\mathbb P = \left( \begin{array}{cc} b & 1 \\ -c & 0 \\ \end{array} \right)$$
is $\lambda^2 - 2 b \lambda + 4c$, its roots are
$$b \pm \sqrt{b^2-4c}.$$
The sum of those roots is the trace of $2\mathbb P$. Because $\mathbb P$ has integral coefficients, all coefficients of $\mathbb P^n$ are integral, whence all coefficients of $(2\mathbb P)^n = 2^n \mathbb P^n$ are obviously divisible by $2^n$. But the trace of $(2 \mathbb P)^n$ is the sum of the $n^\text{th}$ powers of its eigenvalues, QED.
As others have noted, an induction
must be lurking here, if only in the relationship $(2\mathbb P)^n = 2^n \mathbb P ^n$. But this approach helps make the result immediately obvious.
There is a close relationship between this solution and one based on the two-term recursion mentioned by Jack D'Aurizio. That recursion can be represented by the right multiplication of
$$\mathbb Q = \left( \begin{array}{cc} 2b & 1 \\ -4c & 0 \\ \end{array} \right)$$
on $(a_{n-1}, a_{n-2})$, producing $(a_n, a_{n-1})$. Note that $\mathbb Q$ and $2\mathbb P$ have the same characteristic polynomial. The principal difference, then, is that $2\mathbb P$ has all even coefficients whereas $\mathbb Q$ does not, making the result for $2\mathbb P$ a little more evident.
The sequence given by: $$ a_n = \left(b+\sqrt{b^2-4c}\right)^n + \left(b-\sqrt{b^2-4c}\right)^n $$ satisfies the recurrence relation: $$ a_{n+2} = \color{red}{2}b\cdot a_{n+1} - \color{red}{4}c\cdot a_{n}.$$ Since $\nu_2(a_n)\geq n$ holds for $n=0$ and $n=1$, it holds for every $n$ by the previous relation.
Ok, this is still induction :D
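The recurrence makes the claim easy to check numerically. The sketch below (the helper name `a_seq` is an illustrative assumption) builds $a_n$ from $a_0 = 2$, $a_1 = 2b$ and verifies $2^n \mid a_n$:

```python
def a_seq(b, c, n):
    """a_k = (b + sqrt(b^2-4c))^k + (b - sqrt(b^2-4c))^k, computed via the
    integer recurrence a_{k+2} = 2b*a_{k+1} - 4c*a_k with a_0 = 2, a_1 = 2b."""
    a = [2, 2 * b]
    for k in range(2, n + 1):
        a.append(2 * b * a[k - 1] - 4 * c * a[k - 2])
    return a[: n + 1]
```

For instance $b=3$, $c=2$ gives roots $4$ and $2$, so $a_k = 4^k + 2^k$, each term divisible by $2^k$ as claimed.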
We want to show$$2^n\vert\left(\left(b+\sqrt{b^2-4c}\right)^n+\left(b-\sqrt{b^2-4c}\right)^n\right).$$ Using $x-y=\frac{x^2-y^2}{x+y}$ indeed we have
$$ 2^n\vert\left(\left(b+\sqrt{b^2-4c}\right)^n+\left(\frac{4c}{b+\sqrt{b^2-4c}}\right)^n\right) \\ 2^n\vert\frac{\left(b+\sqrt{b^2-4c}\right)^{2n}+4^nc^n}{\left(b+\sqrt{b^2-4c}\right)^n} \\ 2^n\vert\frac{\left(2b^2-4c+2b\sqrt{b^2-4c}\right)^n+4^nc^n}{\left(b+\sqrt{b^2-4c}\right)^n}\\2^n\vert\frac{2^n\left(\left(b^2-2c+b\sqrt{b^2-4c}\right)^n+2^nc^n\right)}{\left(b+\sqrt{b^2-4c}\right)^n}.$$
Let $x_1$ and $x_2$ be the roots of the quadratic equation $$x^2 - bx + c = 0$$
Then $$x_1 = \frac{b + \sqrt{b^2 - 4c}}{2}$$ $$x_2 = \frac{b - \sqrt{b^2 - 4c}}{2} $$
The above statement is equivalent to $$2^n \,\big| \, 2^n\big({x_1}^n + {x_2}^n \big),$$ where $p_n = {x_1}^n + {x_2}^n$ is an integer for every $n$: since $x_1 + x_2 = b$ and $x_1 x_2 = c$, Newton's identities give $p_0 = 2$, $p_1 = b$ and $p_n = b\,p_{n-1} - c\,p_{n-2}$.
Yes, there aren't too many resources on this for some reason. For a very long time, the standard go-to was "just use BDF methods". This mantra was set in stone for a few historical reasons: for one, Gear's code was the first widely available stiff solver, and for another, the MATLAB suite didn't/doesn't include an implicit RK method. However, this heuristic isn't ...
It is because we typically neglect higher order terms in error estimates. For example, we can show that $$\|e\| \le C(u) h^2 + {\cal O}(h^3).$$ The point is that when $h$ is small, the cubic term is small and can be neglected. In fact, when $h$ is small, you can observe quadratic convergence. But whenever $h$ is not small (where "small" is relative to ...
There is another approach called the adjoint method, which is commonly used in inverse problems for PDE and which is quite easy to generalize to other problems. This is going to be long. Your observations give you a field $u_{\text{exp}}$; you'd like to find a value of $\mu$ for which $\frac{1}{2}\iint(u - u_\text{exp})^2\,dx\,dt$ is a minimum, ...
Because the advection-diffusion equation is linear, there are many exact solutions. One reference with many exact solutions (including source terms) is M. Th. van Genuchten and W. J. Alves, Analytical Solutions of the One-Dimensional Convective-Dispersive Solute Transport Equation, U.S. Department of Agriculture, Agricultural Research Service, Technical ...
Add a right-hand-side/forcing function, and use the Method of Manufactured Solutions. Then you can have any solution you want. Also, Separation of Variables works on this PDE just fine, so it also has an analytical solution.
You will have a problem if $x=0$ is part of your domain because in that case your advection velocity $u=1/x$ becomes singular. In particular, there will be cells close to $x=0$ where the cell Peclet number is very large, and consequently you will be in the advection-dominated regime. You will need to stabilize your discretization.
I think this greatly depends on what kind of physics you are trying to model, even though for some problems both approaches are viable. Lagrangian vs. Eulerian framework: for certain problems Lagrangian frameworks are better suited than their Eulerian counterparts. For instance, if one is interested in studying the current patterns or sedimentation problems ...
The stationary equation you show transports information from the right to the left via the advection term; it also diffuses slightly. If you switch off the diffusion term altogether, then you only have transport from the right to the left, and you need to also drop the boundary condition at the left: because information flows from the right to the left, nothing ...
We have the following problem: $$\frac{\partial u}{\partial t}+v\color{red}{\frac{\partial u}{\partial x}}-\nu\color{blue}{\frac{\partial^2u}{\partial x^2}}=0 \tag{*}$$ The function $u$ may represent, for example, a concentration that propagates at velocity $v>0$ and disperses in a medium with viscosity $\nu>0$. Since we are only discussing how terms ...
Your observed quadratic convergence indicates that the Jacobian is likely correct. Have you looked at the solutions for your under-resolved configurations? Galerkin optimality uses the operator norm, which contains only the symmetric part, thus the solution of the discrete system could be quite different from the projection of the exact solution. This ...
Since you haven't had an answer yet, I'll reformulate my comment. Saying that a method is $p$-th order accurate implies that a polynomial manufactured solution of lesser order can be captured exactly. For example, a 2nd order method will represent a linear solution up to machine precision. This has often helped me in finding implementation issues. It's ...
In the advection dominated case the problem can develop physics which is invisible to a computational mesh that is too coarse (say by having elements which contain many wavelengths). This leads to instability because neighboring elements could disagree considerably on what they believe are the right values to represent this physics (think interpolating high ...
There are whole books written on this, but you should investigate (search the web, really) upwind diffusion and Streamline Upwind Petrov-Galerkin finite element methods first. There are many more methods that extend these ideas or approach the problem from a different direction.
The finite element method has similar problems to FD with regards to stability in solving the advection-diffusion equation, i.e. the same restrictions on Peclet number apply. One remedy, also similar to that used in FD, is to use a formulation that includes "upwinding". A nice set of lecture notes that discusses how upwinding is added to FE formulations is ...
There are two different flavors of smoothing and stability. The spurious oscillations in convection-diffusion problems are not an artifact of the linear solver but an inevitable artifact of the discretization method. Any linear solver for non-symmetric matrices you use will give you the same wiggly answer, or it's wrong. Multi-grid is one such linear solver. ...
What I describe below is a version of the method of characteristics. Its applications are limited, but if it works in your case, it would be a fairly simple, fast, and accurate process. If you want to discretize on a mesh, it might not be that helpful. Assuming your solutions $\phi$ are smooth, you can reduce this to a family of ODE problems. Exchange the ...
The answer is YES, you can impose the Dirichlet boundary condition weakly on the inflow boundary $\partial \Omega^-$. Say for the following advective equation: $$\begin{cases}u_t + \nabla \cdot (\mathbf{v} u) = 0 \quad \text{in }\Omega,\\u= g \quad \text{on }\partial \Omega^-,\end{cases}$$ where $\partial \Omega^-$ is the inflow boundary: $\mathbf{v}\...
Well, you can use Crank-Nicolson here but then you'll have to construct and solve a linear system for each time-step. That's easy to do but it would be much easier to use an ODE integrator that is available in MATLAB. Due to the diffusion operator in the RHS the implicit integrator ode23tb seems to be a good choice here but experimenting with different ...
An adaptive grid is a grid network that automatically clusters grid points in regions of high flow field gradients; it uses the solution of the flow field properties to locate the grid points in the physical plane. The adaptive grid evolves in steps of time in conjunction with a time-dependent solution of the governing flow field equations, which computes ...
Including an actual ghost cell for FVM discretization is more or less a matter of convenience. For instance, if the boundary condition on the wall is a Neumann type, you actually do not need to discretize in the $x^-$ direction since the flux is given by the boundary condition and is known.If the boundary condition is of Dirichlet type, what you need is to ...
Typically you would use a slope limiter (or artificial diffusion and just cross your fingers) which detects where the solution has gone negative and modifies the solution to restore positivity (often by modifying the gradient of the solution in order to maintain conservation, at least in conservative schemes like Discontinuous Galerkin and Finite Volume)....
What is usually done when you have a matrix-like variable due to the dimensions of your problem and the indexing is to linearize the index. I.e., if you have $C_{i,j}$ you would replace that with $\hat{C}_k=C_{i,j}$ where $k=i\times N+j$ and $N$ is the number of points on one side of your $N\times N$ grid of points that you are doing finite differences on. ...
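A minimal NumPy sketch of this index linearization (variable names are illustrative); note that row-major `ravel` uses exactly the ordering $k = i \times N + j$:

```python
import numpy as np

N = 4                                  # N x N grid of points

def flat(i, j):
    """Map the 2-D grid index (i, j) to the 1-D unknown index k = i*N + j."""
    return i * N + j

C = np.arange(N * N, dtype=float).reshape(N, N)  # some grid variable C[i, j]
C_hat = C.ravel()                                # flattened vector of unknowns

assert C_hat[flat(2, 3)] == C[2, 3]              # same entry, linearized index
```

With this ordering, the finite-difference matrix acts on the single vector `C_hat` instead of a 2-D array.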
I'm assuming the limitation in 2D and 3D is storing the Jacobian. One option is to retain the time derivatives and use an explicit "pseudo" time-stepping to iterate to steady state. Normally the CFL number you need for diffusive and reactive systems might get prohibitively small. You could try nonlinear multigrid (it's also called Full Approximation Storage ...
The solution you believe to be inaccurate is actually by far the more accurate one; you've simply plotted it in a very deceptive way. For $\nu=2$, the exact solution is actually no bigger than about $10^{-35}$ everywhere -- it's zero for all intents and purposes. Therefore the numerical solution is correct to 10 digits -- far better than the accuracy of ...
Use a first order upwind (for the convection component) and a second order central difference (for the diffusion component). So the end result would be equivalent to discretising the equation $$\frac{\partial u}{\partial t} = \frac{\partial \boldsymbol{v}}{\partial x} + D\frac{\partial^2 u}{\partial x^2}.$$ So using the $\theta$-method you will end up ...
You need to rederive the term $$\frac{d}{(h_j^+ + h_j^-)^2}$$ because this appears to be wrong. It must lead to a factor of 4 when switching to a uniform grid, because you've taken the sum of two things that are $O(h)$ and squared it. Perhaps if you spell out the derivation, we can help you find the error.
I was (still am) looking for good answers for this. I work with multi-level adaptive grids where I use some sort of criterion for refinement. Folks doing FEM enjoy rather cheap (computationally), rigorous error estimates that they use as a refinement criterion. For us doing FDM/FVM, I have not had luck finding any such estimates. In this context, if you want ...
The equation is $\partial_{t} \psi = \partial_{x} F(\phi)$, where $\psi = \partial_x \phi$. The time-integration can be done, e.g., by explicit time-stepping: $$\psi^{j+1}_i = \psi^j_i + \frac{\tau}{2\, dx} (F^j_{i+1}-F^j_{i-1}),$$ which requires that $F$ is known at each grid node $i$ at the "old" time slice $j$. This can be achieved by adding a numerical "...
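A sketch of that explicit update with periodic boundaries (uniform grid assumed; the function and variable names are illustrative, not from the original answer):

```python
import numpy as np

def step(psi, F, tau, dx):
    """One explicit update: psi^{j+1}_i = psi^j_i + tau/(2*dx) * (F_{i+1} - F_{i-1}).

    F holds the values of F(phi) at the old time slice j; boundaries are periodic.
    """
    return psi + tau / (2 * dx) * (np.roll(F, -1) - np.roll(F, 1))
```

A quick sanity check: with a spatially constant `F`, the central difference vanishes and `psi` is returned unchanged.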
Find the intersection of the curves $y=\cos x$ and $y = \sin 3x$ for $-\frac{\pi}{2}\le x\le \frac{\pi}{2}$.
Enter the sum of the x-coordinates of the intersections.
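As a purely numeric cross-check (a sketch, not part of the original problem), the crossings of $\cos x$ and $\sin 3x$ on $[-\pi/2, \pi/2]$ can be located with a sign-change scan followed by bisection:

```python
import numpy as np

f = lambda x: np.cos(x) - np.sin(3 * x)   # zeros = intersection x-coordinates

def bisect(a, b, tol=1e-12):
    """Refine a bracket [a, b] with f(a)*f(b) < 0 down to width tol."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# coarse scan for sign changes (grid size chosen so no root sits on a node)
xs = np.linspace(-np.pi / 2, np.pi / 2, 1998)
vals = f(xs)
roots = [bisect(xs[i], xs[i + 1])
         for i in range(len(xs) - 1) if vals[i] * vals[i + 1] < 0]
```

For this window the scan finds three crossings whose x-coordinates sum to essentially zero.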
This problem is part of the set Trigonometry.
Description: Test - 1 Number of Questions: 20 Created by: Yashbeer Singh Tags: Test - 1 Soil Mechanics
The results for sieve analysis carried out for three types of sand, P, Q and R, are given in the adjoining figure. If the fineness modulus values of the three sands are given as FMP, FMQ and FMR, it can be stated that
Likelihood of general shear failure for an isolated footing in sand decreases with
Likelihood of general shear failure for an isolated footing in sand decreases with decreasing intergranular packing of the sand.
For a saturated sand deposit, the void ratio and the specific gravity of solids are 0.70 and 2.67, respectively. The critical (upward) hydraulic gradient for the deposit would be
Given $e = 0.70$ and $G = 2.67$. The critical hydraulic gradient brings the effective stress to zero:
$$i_c = \frac{G - 1}{1 + e} = \frac{2.67 - 1}{1 + 0.7} = 0.98$$
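The computation above as a one-liner (the function name is an illustrative assumption):

```python
def critical_gradient(G, e):
    """Critical (upward) hydraulic gradient i_c = (G - 1)/(1 + e),
    the gradient at which the effective stress drops to zero."""
    return (G - 1.0) / (1.0 + e)
```

`critical_gradient(2.67, 0.70)` evaluates to about 0.98, matching the worked value.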
Two geometrically identical isolated footings, X (linear elastic) and Y (rigid), are loaded identically (shown alongside). The soil reactions will
Quicksand condition occurs when
Quick sand condition occurs when the upward seepage pressure in soil becomes equal to submerged unit weight of the soil. So, effective stress is equal to zero.
A soil is composed of solid spherical grains of identical specific gravity and diameter between 0.075 mm and 0.0075 mm. If the terminal velocity of the largest particle falling through water without flocculation is 0.5 mm/s, that for the smallest particle would be
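The standard reasoning here is Stokes' law: terminal velocity is proportional to the square of the particle diameter, so a tenfold smaller grain settles 100 times slower. A hedged sketch (function name illustrative):

```python
def scaled_terminal_velocity(v_ref, d_ref, d):
    """Stokes' law scaling: v ~ d**2, so v = v_ref * (d/d_ref)**2
    for grains of the same density settling in the same fluid."""
    return v_ref * (d / d_ref) ** 2
```

For the 0.075 mm grain at 0.5 mm/s, the 0.0075 mm grain comes out at 0.005 mm/s.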
The e-log p curve shown in the figure is representative of
e - log P curve or e Vs P curve on semi log graph paper is for over consolidated clay as p increases and 'e' decreases.
Deposit with flocculated structure is formed when
Flocculated structure is formed when clay particles settle on fresh water lake bed.
Dilatancy correction is required when a strata is
Dilatancy correction, also known as the correction due to submergence of the soil, is applied for silt and fine sand and is given as
$$N_c = 15 + \frac{1}{2}(N_0 - 15)$$
Where, $N_0 = $ value of SPT after the correction due to overburden pressure has been applied.
A precast concrete pile is driven with a 50 kN hammer falling through a height of 1.0 m with an efficiency of 0.6. The set value observed is 4 mm per blow and the combined temporary compression of the pile, cushion and the ground is 6 mm. As per Modified Hiley Formula, the ultimate resistance of the pile is
In a compaction test, G, w, S and e represent the specific gravity, water content, degree of saturation and void ratio of the soil sample respectively. If yw represents the unit weight of water and yd represents the dry unit weight of the soil, the equation for zero air voids line is
Group symbols assigned to silty sand and clayey sand are respectively
SM - silty sand
SC - clayey sand
When a retaining wall moves away from the back-fill, the pressure exerted on the wall is termed as
Active earth pressure - when there is horizontal strain and the retaining wall moves away from the backfill. Passive earth pressure - when the retaining wall moves toward the backfill. Pore pressure - neutral pressure, constant in all directions.
A fine grained soil has liquid limit of 60 and plastic limit of 20. As per the plasticity chart, according to IS classification, the soil is represented by the letter symbols
A clay soil sample is tested in triaxial apparatus in consolidated-drained conditions at a cell pressure of 100 kN/m2. What will be the pore water pressure at a deviator stress of 40 kN/m2?
The relationship among specific yield (Sy), specific retention (Sr) and porosity ($\eta$) of an aquifer is
Porosity of an aquifer: $\eta = S_y + S_r$, i.e., specific yield plus specific retention, hence $S_y = \eta - S_r$.
Compaction by vibratory roller is the best method of compaction in case of
Compaction by vibratory roller - well graded dry sand.
The number of blows observed in a Standard Penetration Test (SPT) for different penetration depths are given as follows:
Penetration of sampler | Number of blows
0-150 mm | 6
150-300 mm | 8
300-450 mm | 10
The observed N value is
In the standard penetration test (SPT), the number of blows for the first 15 cm of penetration is not counted; it is treated as the seating drive of the sampler into the soil. The total number of blows for the next two 15 cm increments is taken for the standard penetration resistance. So N = 8 + 10 = 18.
If $\sigma_h, \sigma_v, \sigma_h'$ and $\sigma_v'$ represent the total horizontal stress, total vertical stress, effective horizontal stress and effective vertical stress on a soil element respectively, the coefficient of earth pressure at rest is given by
For a sample of dry, cohesionless soil with friction angle $\phi$, the failure plane will be inclined to the major principal plane by an angle equal to
Freezing point depression is a colligative property observed in solutions that results from the introduction of solute molecules to a solvent. The freezing points of solutions are all lower than that of the pure solvent and is directly proportional to the molality of the solute.
\[\Delta{T_f} = T_f(solvent) - T_f (solution) = K_f \times m\]
where \(\Delta{T_f}\) is the freezing point depression, \(T_f\) (solution) is the freezing point of the solution, \(T_f\) (solvent) is the freezing point of the solvent, \(K_f\) is the freezing point depression constant, and \(m\) is the molality.

Introduction
Nonelectrolytes are substances with no ions, only molecules. Strong electrolytes, on the other hand, are composed mostly of ionic compounds, and essentially all soluble ionic compounds form electrolytes. Therefore, if we can establish that the substance we are working with is uniform and is not ionic, it is safe to assume that we are working with a nonelectrolyte, and we may attempt to solve this problem using our formulas. This will most likely be the case for all problems you encounter related to freezing point depression and boiling point elevation in this course, but it is a good idea to keep an eye out for ions. It is worth mentioning that these equations work for both volatile and nonvolatile solutions. This means that for the sake of determining freezing point depression or boiling point elevation, the vapor pressure does not affect the change in temperature. Also remember that a pure solvent is a solvent that has had nothing extra added to it or dissolved in it. We will be comparing the properties of that pure solvent with its new properties when added to a solution.
Adding solutes to an ideal solution results in a positive ΔS, an increase in entropy. Because of this, the newly altered solution’s chemical and physical properties will also change. The properties that undergo changes due to the addition of solutes to a solvent are known as colligative properties. These properties are dependent on the amount of solutes added, not on their identity. Two examples of colligative properties are boiling point and freezing point: due to the addition of solutes, the boiling point tends to increase and freezing point tends to decrease.
The freezing point and boiling point of a pure solvent can be changed when added to a solution. When this occurs, the freezing point of the pure solvent may become lower, and the boiling point may become higher. The extent to which these changes occur can be found using the formulas:
\[\Delta{T}_f = -K_f \times m\]
\[\Delta{T}_b = K_b \times m\]
where \(m\) is the solute molality and the \(K\) values are proportionality constants (\(K_f\) and \(K_b\) for freezing and boiling, respectively).
If solving for the proportionality constant is not the ultimate goal of the problem, these values will most likely be given. Some common values for \(K_f\) and \(K_b\) respectively, are:
Solvent | \(K_f\) | \(K_b\)
Water | 1.86 | 0.512
Acetic acid | 3.90 | 3.07
Benzene | 5.12 | 2.53
Phenol | 7.27 | 3.56
Molality is defined as the number of moles of solute per kilogram
solvent. Be careful not to use the mass of the entire solution. Often, the problem will give you the change in temperature and the proportionality constant, and you must find the molality first in order to get your final answer.
The solute, in order for it to exert any change on colligative properties, must fulfill two conditions. First, it must not contribute to the vapor pressure of the solution, and second, it must remain suspended in the solution even during phase changes. Because the solvent is no longer pure with the addition of solutes, we can say that the chemical potential of the solvent is lower. Chemical potential is the molar Gibbs energy that one mole of solvent is able to contribute to a mixture. The higher the chemical potential of a solvent is, the more it is able to drive the reaction forward. Consequently, solvents with higher chemical potentials will also have higher vapor pressures.
Boiling point is reached when the chemical potential of the pure solvent, a liquid, reaches that of the chemical potential of pure vapor. Because of the decrease of the chemical potential of mixed solvents and solutes, we observe this intersection at higher temperatures. In other words, the boiling point of the impure solvent will be at a higher temperature than that of the pure liquid solvent. Thus,
boiling point elevation occurs with a temperature increase that is quantified using
\[\Delta{T_b} = K_b \times m\]
where
\(K_b\) is known as the ebullioscopic constant and \(m\) is the molality of the solute.
Freezing point is reached when the chemical potential of the pure liquid solvent reaches that of the pure solid solvent. Again, since we are dealing with mixtures with decreased chemical potential, we expect the freezing point to change. Unlike the boiling point, the chemical potential of the impure solvent requires a colder temperature for it to reach the chemical potential of the solid pure solvent. Therefore, a
freezing point depression is observed.
Example \(\PageIndex{1}\)
2.00 g of some unknown compound reduces the freezing point of 75.00 g of benzene from 5.53 to 4.90 \(^{\circ}C\). What is the molar mass of the compound?
SOLUTION
First we must compute the molality of the benzene solution, which will allow us to find the number of moles of solute dissolved.
\[ \begin{align*} m &= \dfrac{\Delta{T}_f}{-K_f} \\[4pt] &= \dfrac{(4.90 - 5.53)^{\circ}C}{-5.12^{\circ}C / m} \\[4pt] &= 0.123 m \end{align*}\]
\[ \begin{align*} \text{Amount Solute} &= 0.07500 \; kg \; benzene \times \dfrac{0.123 \; mol}{1 \; kg \; benzene} \\[4pt] &= 0.00923 \; mol \; solute \end{align*}\]
We can now find the molecular weight of the unknown compound:
\[ \begin{align*} \text{Molecular Weight} &= \dfrac{2.00 \; g \; unknown}{0.00923 \; mol} \\[4pt] &= 216.7 \; g/mol \end{align*}\]
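The worked example can be condensed into a short helper (the function name is an illustrative assumption; valid for nonelectrolyte solutes):

```python
def molar_mass_from_fpd(mass_solute_g, mass_solvent_kg, dT_f, Kf):
    """Molar mass from freezing-point depression.

    dT_f = T_f(solution) - T_f(solvent), negative for a depression;
    Kf is the cryoscopic constant in degC per molal.
    """
    molality = dT_f / (-Kf)              # mol solute per kg solvent
    moles = molality * mass_solvent_kg   # total moles of dissolved solute
    return mass_solute_g / moles
```

`molar_mass_from_fpd(2.00, 0.07500, 4.90 - 5.53, 5.12)` gives roughly 217 g/mol, in line with the rounded value above.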
The freezing point depression is especially vital to aquatic life. Since saltwater will freeze at colder temperatures, organisms can survive in these bodies of water.
Applications
Road salting takes advantage of this effect to lower the freezing point of the ice it is placed on. Lowering the freezing point allows the street ice to melt at lower temperatures. The maximum depression of the freezing point is about −18 °C (0 °F), so if the ambient temperature is lower, \(\ce{NaCl}\) will be ineffective. Under these conditions, \(\ce{CaCl_2}\) can be used since it dissolves to make three ions instead of two for \(\ce{NaCl}\).
Problems
Benzophenone has a freezing point of 49.00 °C. A 0.450 molal solution of urea in this solvent has a freezing point of 44.59 °C. Find the freezing point depression constant for the solvent. (Answer: 9.80 °C/m)

References

Atkins, Peter and de Paula, Julio. Physical Chemistry for the Life Sciences. New York, N.Y.: W. H. Freeman Company, 2006. (124-136).

Contributors

Lorigail Echipare (UCD), Zachary Harju (UCD)
Several nice replies have already stated the pros of finite element methods being flexible and powerful. Here I will give another advantage of FEM, from the Sobolev space and differential geometry point of view: the possibility of the finite element space inheriting the physical continuity conditions of the Sobolev space in which the true solution lies.
For example, the Raviart-Thomas face element for plane elasticity and mixed methods for diffusion; the Nédélec edge element for computational electromagnetics.
Normally the solution of a PDE is a differential $k$-form lying in the "energy $L^2$-integrable" space: $$
H\Lambda^k = \{\omega \in \Lambda^k: \omega \in L^2(\Lambda^k), \mathrm{d} \omega \in L^2(\Lambda^k)\}
$$ where $\mathrm{d}$ is the exterior derivative. We can build the de Rham cohomology around this space, which means we can construct an exact de Rham sequence like the following in 3D space:
$$
\mathbb{R}^3 \xrightarrow{id}\, H(\mathbf{grad},\Omega) \xrightarrow{\nabla}\,
H(\mathbf{curl},\Omega) \xrightarrow{\nabla \times}\,H(\mathrm{div},\Omega)
\xrightarrow{\nabla \cdot}\, L^2(\Omega)
$$
The range of each operator is the null space of the next, and many nice properties follow from this. If we can build a finite element space that inherits this de Rham exact sequence, then the Galerkin method based on that finite element space will be stable and will converge to the real solution. Moreover, we get the stability and approximation properties of the interpolation operator simply from the commuting diagram of the de Rham sequence, and we can build a posteriori error estimation and an adaptive mesh refinement procedure based on this sequence.
For more about this, please see Douglas Arnold's article in Acta Numerica, "Finite element exterior calculus, homological techniques, and applications", and a slide deck briefly introducing the idea.
I've been wondering about something, and it might be nonsense (if so I apologize!). Consider the unit disk in $\mathbb{R}^2$ and a function $f$ defined on the disk. I can compute its double integral as
$$\int_{D(0,1)} f dA = \int_{0}^{2\pi} \int_0^1 f(r,\theta) r dr d\theta$$
by polar coordinates. Now, separately, consider a function $g$ defined on the upper hemisphere of $S^2$ (the sphere in $\mathbb{R}^3$). In spherical coordinates, I can compute its integral over this region as
$$\int_0^{2\pi} \int_0^{\frac{\pi}{2}} g(\theta,\phi) \sin(\theta)\, d\theta\, d\phi$$
What I'm wondering about is the following: given a function $f$ on the unit disc, can I find a function $g$ on the hemisphere such that $$\int_{D(0,1)} f dA = \int_{0}^{2\pi} \int_0^{\frac{\pi}{2}} g(\theta,\phi) \sin(\theta) d\theta d\phi$$
Of course I could just choose some $g$ that satisfies that, but I was wondering if there was a systematic way for each $f$ on the unit disk to associate it with a $g$ on the sphere such that their integrals are the same.
So far, I was thinking about the following. I know I can map the disk onto the upper hemisphere by the map $F(x,y) = (x,y, \sqrt{1-x^2-y^2})$. At first, I naively just thought of assigning each point on the sphere the value of $f(x,y)$ at the corresponding point beneath it, but that fails. This would send a constant function on the disk to a constant function on the hemisphere, but their integrals are different. Integrating a constant over the unit disk would just yield the constant times $\pi$, whereas integrating the same constant over the upper hemisphere would yield $2\pi$ times the constant. I would be okay with this if this process always differed just by a factor of two (i.e., I could identify $g$ with $\frac{1}{2} f$), but I don't think that works.
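A quick midpoint-rule check of that observation (purely numeric, not an answer to the question): the constant 1 integrates to $\pi$ over the unit disk but to $2\pi$ over the upper hemisphere.

```python
import numpy as np

n = 400

# hemisphere: int_0^{2pi} int_0^{pi/2} sin(theta) dtheta dphi = 2*pi
theta = (np.arange(n) + 0.5) * (np.pi / 2) / n   # midpoint rule in theta
hemi = 2 * np.pi * np.sum(np.sin(theta)) * (np.pi / 2) / n

# disk: int_0^{2pi} int_0^1 r dr dtheta = pi
r = (np.arange(n) + 0.5) / n                     # midpoint rule in r
disk = 2 * np.pi * np.sum(r) / n
```

`hemi` comes out close to $2\pi$ and `disk` close to $\pi$, confirming the factor-of-two discrepancy for constants.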
I thought maybe some change of coordinates might work, but I can't seem to get it to pan out. If anyone could give a suggestion or a pointer in the right direction, that would be very helpful. I do not want a full solution, just a pointer or a reference that discusses relevant ideas would be great. Thanks! |
Let's suppose we have a closed plane curve whose points are described by a single parametric equation $P(x(t), y(t))$, where $t$ is an increasing parameter (example: a circle), or by a set of parametric equations for segments of the curve (example: a rectangle).
Now we transform the curve by performing the following operation on all points of the closed curve:
$P_r(x_r(t), y_r(t))$ where $x_r(t)=\dfrac{x(t-\Delta {t})+x(t+\Delta {t})}{2}, y_r(t)= \dfrac{y(t-\Delta {t})+y(t+\Delta {t})}{2}$
In this step, the bigger the value of $\Delta t$ we take, the more rounded a transformed curve we obtain; the index $r$ marks the new point, which is simply the midpoint of the segment joining $P(x(t-\Delta t), y(t-\Delta t))$ and $P(x(t+\Delta t), y(t+\Delta t))$. I would name this transformation of a closed curve $C$ "averaging" and denote it $A$, so we have $C_r=A(C)$. Maybe it has some other name, does someone know? The parametric function $P(x(t), y(t))$ should be of such a nature that it gives infinitely many "loops" of the curve (as in the case of the circle, with period $2\pi$), but of course we "average" the curve only over a single loop.
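Here is a discrete numerical sketch of the operation $A$: one loop of the curve is sampled at equally spaced values of $t$, so the shift $\Delta t$ becomes a shift of $k$ sample indices. (Using the plain vertex average as a stand-in for the center of gravity is a simplifying assumption.)

```python
import math

def average_curve(pts, k):
    # One application of A: each point becomes the midpoint of the
    # samples k steps behind and k steps ahead; indices wrap around,
    # mirroring the periodicity of the parameter t.
    K = len(pts)
    return [((pts[(i - k) % K][0] + pts[(i + k) % K][0]) / 2,
             (pts[(i - k) % K][1] + pts[(i + k) % K][1]) / 2)
            for i in range(K)]

def vertex_centroid(pts):
    # Plain average of the sample points (an assumed stand-in for CG).
    K = len(pts)
    return (sum(p[0] for p in pts) / K, sum(p[1] for p in pts) / K)

# An off-center ellipse sampled at 360 points.
ellipse = [(2 * math.cos(2 * math.pi * i / 360) + 1,
            math.sin(2 * math.pi * i / 360) - 3) for i in range(360)]

curve = ellipse
for _ in range(200):           # iterate A many times
    curve = average_curve(curve, 15)
```

For equally spaced samples the vertex centroid is preserved exactly (each coordinate list is just averaged with shifted copies of itself), and for a circle $A$ scales the radius by $\cos(\Delta t)$, so iterating contracts the curve toward its center, consistent with the circle case mentioned below.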
Main questions:
1. What conditions should be imposed on the parameter $t$ and on $\Delta t$ to be sure that, after the transformation, the center of gravity (CG) of the new closed curve is exactly the same as that of the old one?
2. Should any conditions be assumed for the original closed curve to have a stable CG? (For rectangles and circles the transformation acts properly.) Or maybe the CG is stable under the described transformation for any closed curve... but if so, how to prove it?
3. What does the composition of $n$ averaging operations give, i.e. $C_{r_n}=A^n(C)$ with $n\to\infty$? Does it always converge to a point, and does it converge to the CG? (For a circle this is true; it can be proved, since it is easy to obtain the equation of the transformed circle.)
4. Assuming $\Delta t=$ const and known, in what circumstances does the inverse transformation $A^{-1}$ exist (for a circle there is only one such transformation)? How to construct $C=A^{-1}(C_r)$? |
General case. In relativistic thermodynamics, inverse temperature $\beta^\mu$ is a vector field, namely the multipliers of the 4-momentum density in the exponent of the density operator specifying the system in terms of statistical mechanics, using the maximum entropy method, where $\beta^\mu p_\mu$ (in units where $c=1$) replaces the term $\beta H$ of the nonrelativistic canonical ensemble. This is done in
C.G. van Weert, Maximum entropy principle and relativistic hydrodynamics, Annals of Physics 140 (1982), 133-162.
for classical statistical mechanics and for quantum statistical mechanics in
T. Hayata et al., Relativistic hydrodynamics from quantum field theory on the basis of the generalized Gibbs ensemble method, Phys. Rev. D 92 (2015), 065008. https://arxiv.org/abs/1503.04535
For an extension to general relativity with spin see also
F. Becattini, Covariant statistical mechanics and the stress-energy tensor, Phys. Rev. Lett 108 (2012), 244502. https://arxiv.org/abs/1511.05439
Conservative case. One can define a scalar temperature $T:=1/(k_B\sqrt{\beta^\mu\beta_\mu})$ and a velocity field $u^\mu:=k_BT\beta^\mu$ for the fluid; then $\beta^\mu=u^\mu/(k_BT)$, and the distribution function for an ideal fluid takes the form of a Jüttner distribution $e^{-u\cdot p/(k_BT)}$.
For an ideal fluid (i.e., assuming no dissipation, so that all conservation laws hold exactly), one obtains the form commonly used in relativistic hydrodynamics (see Chapter 22 of the book Misner, Thorne, Wheeler, Gravitation). It amounts to treating the thermodynamics nonrelativistically in the rest frame of the fluid.
Note that the definition of temperature consistent with the canonical ensemble needs a distribution of the form $e^{-\beta H - \text{terms linear in } p}$, conforming with the identification of the noncovariant $\beta^0$ as the inverse canonical temperature. Essentially, this is due to the frame dependence of the volume that enters the thermodynamics. This is in agreement with the noncovariant definition of temperature used by Planck and Einstein and was the generally agreed upon convention until at least 1968; cf. the discussion in
R. Balescu, Relativistic statistical thermodynamics, Physica 40 (1968), 309-338.
In contrast, the covariant Jüttner distribution has the form $e^{-u_0 H/(k_BT) - \text{terms linear in } p}$. Therefore the covariant scalar temperature differs from the canonical one by a velocity-dependent factor $u_0$. This explains the different transformation law. The covariant scalar temperature is simply the canonical temperature in the rest frame, turned covariant by redefinition.
Quantum general relativity. In quantum general relativity, accelerated observers interpret temperature differently. This is demonstrated for the vacuum state in Minkowski space by the Unruh effect, which is part of the thermodynamics of black holes. This seems inconsistent with the assumption of a covariant temperature.
Dissipative case. The situation is more complicated in the more realistic dissipative case. Once one allows for dissipation, amounting to going from Euler to Navier-Stokes in the nonrelativistic case, trying to generalize this simple formulation runs into problems. Thus it cannot be completely correct. In a gradient expansion at low order, the velocity field defined above from $\beta^\mu$ can be identified in the Landau-Lifschitz frame with the velocity field proportional to the energy current; see (86) in Hayata et al.. However, in general, this identification involves an approximation as there is no reason for these velocity fields to be exactly parallel; see, e.g.,
P. Van and T.S. Biró, First order and stable relativistic dissipative hydrodynamics, Physics Letters B 709 (2012), 106-110. https://arxiv.org/abs/1109.0985
There are various ways to patch the situation, starting from a kinetic description (valid for dilute gases only): The first reasonable formulation by Israel and Stewart based on a first order gradient expansion turned out to exhibit acausal behavior and not to be thermodynamically consistent. Extensions to second order (by Romatschke, e.g., https://arxiv.org/abs/0902.3663) or third order (by El et al., https://arxiv.org/abs/0907.4500) remedy the problems at low density, but shift the difficulties only to higher order terms (see Section 3.2 of Kovtun, https://arxiv.org/abs/1205.5040).
A causal and thermodynamically consistent formulation involving additional fields was given by Mueller and Ruggeri in their book Extended Thermodynamics (1993) and its 2nd edition, Rational Extended Thermodynamics (1998).
Paradoxes. Concerning the paradoxes mentioned in the original post:
Note that the formula $\langle E\rangle = \frac32 k_B T$ is valid only under very special circumstances (nonrelativistic ideal monatomic gas in its rest frame), and does not generalize. In general there is no simple relationship between temperature and velocity.
One can say that your paradox arises because in the three scenarios, three different concepts of temperature are used. What temperature is and how it transforms is a matter of convention, and the dominant convention changed some time after 1968, when Balescu's paper mentioned above appeared; that paper shows that until 1963 temperature was universally defined as being frame-dependent. Today both conventions are alive, the frame-independent one being dominant.
This post imported from StackExchange Physics at 2016-06-24 15:03 (UTC), posted by SE-user Arnold Neumaier |
Difference between revisions of "Quasirandomness"
(→Introduction)
(→Examples of quasirandomness definitions)
==A possible definition of quasirandom subsets of <math>[3]^n</math>==
Revision as of 04:37, 12 March 2009
A possible definition of quasirandom subsets of [math][3]^n[/math]
As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function.
Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math])
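To make this sampling procedure concrete, here is one possible implementation sketch. The text does not pin down the distribution of interval lengths, so drawing the length uniformly from $\{0,\dots,n\}$ is an assumption:

```python
import random

def random_wrap_intervals(n, count=4):
    # Draw one random permutation pi of [n], then `count` independent
    # intervals of consecutive positions, allowed to wrap around mod n,
    # and return their images under pi as sets.
    perm = list(range(1, n + 1))
    random.shuffle(perm)
    intervals = []
    for _ in range(count):
        start = random.randrange(n)
        length = random.randrange(n + 1)   # assumed length distribution
        intervals.append({perm[(start + j) % n] for j in range(length)})
    return intervals

# A, A', B, B' as they appear in the expectation defining quasirandomness:
A, A2, B, B2 = random_wrap_intervals(20)
```

Note that all four intervals share the same permutation, as in the definition above; only the interval positions and lengths are drawn independently.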
As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect).
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined). |
When something feels cold, it isn't the temperature that makes it feel cold; it's the rate of heat transfer (heat flux) from your body to the object.
Consider that thermal conduction follows Fourier's Law, $\dot{q} = -k A \Delta T$
We can simplify this equation for our purposes by taking one of the objects to be our body, at body temperature, 37$^\circ$C, and the other
test object to always have the same contact area. We can then conclude that the heat flux depends on the thermal conductivity of the object, which is a property of the material it's made of, and its temperature.
$$\dot{q}\sim k~(T - 37)$$
Let's play around with this a bit.
Case 1: Touching the same object, at different temperatures. In this case, $T$ varies while $k$ remains constant. This gives us $\dot{q}\sim(T - 37)$, which is why a dime at 0°C feels colder than a dime at 30°C.
Case 2: Touching a dime and a clay coin, at the same temperature. In this case, $T$ is the same, and the equation reduces to $\dot{q} \sim k$. We know the conductivity of a clay coin is less than that of a metal coin. That's why we feel the metal coin removing heat from our finger more quickly, and hence it feels colder.
Consequently, looking from the perspective of the coins, the metal coin gains heat faster and hence reaches your body temperature faster; the clay coin takes more time to reach the same temperature.
Since temperature difference is the driving parameter for conduction, both will, given enough time, reach the same temperature.
Now for some slightly more complicated math.
The Fourier equation can be rewritten as$$\begin{align}\dot{q} &= -k A \Delta T = \frac{\text{d}q}{\text{d}t} \\\therefore \frac{\text{d}q}{\text{d}t} &= -k A (T - T_{res}) \\\text{d}q &= m C \text{d}T \\\therefore m C \frac{\text{d}T}{\text{d}t} &= -k A (T - T_{res}) \\\therefore \frac{\text{d} T}{T - T_{res}} &= -\frac{k A}{m C} \text{d} t \\\therefore \int_{T_{init}}^{T}\frac{\text{d} T}{T - T_{res}} &= \int_0^t-\frac{k A}{m C} \text{d} t \\\therefore \ln\left(\frac{T - T_{res}}{T_{init} - T_{res}}\right) &= -\frac{k A}{m C} t \\\therefore \frac{\Delta T}{\Delta T_{init}} &= \exp\left(-\frac{k A}{m C} t\right) \\\therefore \frac{\Delta T}{\Delta T_{init}} &= \exp(-\phi t) \\\therefore \dot{q} &= -k A \exp(-\phi t) \Delta T_{init}\end{align}$$
As you can see from this picture, the temperature difference falls much quicker if $\phi$ is larger, i.e. $k A / m C$ is larger. If the mass and contact area of both objects under comparison are identical, this implies the larger the $k / C$, the quicker the object reaches equilibrium with the heat reservoir. Here is the same plot, but showing the temperature of the coin initially at 0C with time, when touched to a heat reservoir at 37C.
This next image shows the heat flux plotted against time. As you can see, a smaller $k$ gives a smaller heat flux, but sustained over a longer period of time.
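The behaviour described here can be sketched numerically. With identical $m$, $C$, and $A$ for the two coins, $\phi = kA/mC$ scales with $k$; the numbers below are purely illustrative stand-ins, not real material data:

```python
import math

def remaining_fraction(phi, t):
    # Fractional remaining temperature difference: dT/dT_init = exp(-phi t)
    return math.exp(-phi * t)

def heat_flux(k, phi, t, dT_init=37.0):
    # Heat flux ~ k exp(-phi t) dT_init (contact area folded into the scale)
    return k * math.exp(-phi * t) * dT_init

# Illustrative values: metal has much larger k, hence much larger phi.
k_metal, phi_metal = 400.0, 0.40
k_clay,  phi_clay  = 1.0,   0.001
```

This reproduces the qualitative picture in the plots: the metal coin equilibrates sooner and delivers the larger initial flux, while the clay coin's smaller flux persists longer.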
Notes:
$C$ is the specific heat of the material the coin is made of, $k$ is its thermal conductivity, $m$ is its mass, and $A$ is the contact area between the object and the heat reservoir. |
Difference between revisions of "Talk:Fujimura's problem"
(→General n)
You get a non-constructive quadratic lower bound for the quadruple problem by taking a random subset of size <math>cn^2</math>. If c is not too large the linearity of expectation shows that the expected number of tetrahedrons in such a set is less than one, and so there must be a set of that size with no tetrahedrons. I think <math> c = \frac{24^{1/4}}{6} + o(\frac{1}{n})</math>.
One upper bound can be found by counting tetrahedrons. For a given n the tetrahedral grid has <math>\frac{1}{24}n(n+1)(n+2)(n+3)</math> tetrahedrons. Each point on the grid is part of n tetrahedrons, so <math>\frac{1}{24}(n+1)(n+2)(n+3)</math> points must be removed to remove all tetrahedrons. This gives an upper bound of <math>\frac{1}{8}(n+1)(n+2)(n+3)</math>.
Latest revision as of 03:22, 29 March 2009
Let [math]\overline{c}^\mu_{n,4}[/math] be the size of the largest subset of the tetrahedral grid:
[math] \{ (a,b,c,d) \in {\Bbb Z}_+^4: a+b+c+d=n \}[/math]
which contains no tetrahedrons [math](a+r,b,c,d), (a,b+r,c,d), (a,b,c+r,d), (a,b,c,d+r)[/math] with [math]r \gt 0[/math]; call such sets
tetrahedron-free.
These are the currently known values of the sequence:
 n                                    0   1   2
 [math]\overline{c}^\mu_{n,4}[/math]  1   3   7
n=0
[math]\overline{c}^\mu_{0,4} = 1[/math]:
There are no tetrahedrons, so no removals are needed.
n=1
[math]\overline{c}^\mu_{1,4} = 3[/math]:
Removing any one point on the grid will leave the set tetrahedron-free.
n=2
[math]\overline{c}^\mu_{2,4} = 7[/math]:
Suppose the set can be tetrahedron-free in two removals. One of (2,0,0,0), (0,2,0,0), (0,0,2,0), and (0,0,0,2) must be removed. Removing any one of the four leaves three tetrahedrons to remove. However, no point coincides with all three tetrahedrons, therefore there must be more than two removals.
Three removals (for example (0,0,0,2), (1,1,0,0) and (0,0,2,0)) leaves the set tetrahedron-free with a set size of 7.
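The values $1, 3, 7$ above are small enough to confirm by brute force. The sketch below enumerates the grid and all tetrahedrons directly, then searches subsets from the largest size down (exponential in the number of grid points, so only sensible for tiny $n$):

```python
from itertools import combinations

def grid(n):
    # All lattice points (a,b,c,d) with nonnegative coordinates summing to n.
    return [(a, b, c, n - a - b - c)
            for a in range(n + 1)
            for b in range(n + 1 - a)
            for c in range(n + 1 - a - b)]

def tetrahedrons(n):
    # All tetrahedrons {(a+r,b,c,d), (a,b+r,c,d), (a,b,c+r,d), (a,b,c,d+r)}
    # with r > 0 contained in the grid for this n.
    tets = []
    for r in range(1, n + 1):
        for (a, b, c, d) in grid(n - r):
            tets.append({(a + r, b, c, d), (a, b + r, c, d),
                         (a, b, c + r, d), (a, b, c, d + r)})
    return tets

def max_tetrahedron_free(n):
    # Largest subset of the grid containing no tetrahedron.
    pts, tets = grid(n), tetrahedrons(n)
    for size in range(len(pts), 0, -1):
        for sub in combinations(pts, size):
            s = set(sub)
            if not any(t <= s for t in tets):
                return size
    return 0
```

This confirms the small values in the table, and also that the $n=2$ grid contains $5 = \tfrac{1}{24}\cdot 2\cdot 3\cdot 4\cdot 5$ tetrahedrons, matching the counting formula used in the upper bound below.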
General n
A lower bound of 2(n-1)(n-2) can be obtained by keeping all points with exactly one coordinate equal to zero.
You get a non-constructive quadratic lower bound for the quadruple problem by taking a random subset of size [math]cn^2[/math]. If c is not too large the linearity of expectation shows that the expected number of tetrahedrons in such a set is less than one, and so there must be a set of that size with no tetrahedrons. I think [math] c = \frac{24^{1/4}}{6} + o(\frac{1}{n})[/math].
With coordinates (a,b,c,d), take the value a+2b+3c. This forms an arithmetic progression of length 4 for any of the tetrahedrons we are looking for. So we can take subsets of the form a+2b+3c=k, where k comes from a set with no such arithmetic progressions. [This paper] gives a complicated formula for the possible number of subsets.
One upper bound can be found by counting tetrahedrons. For a given n the tetrahedral grid has [math]\frac{1}{24}n(n+1)(n+2)(n+3)[/math] tetrahedrons. Each point on the grid is part of n tetrahedrons, so [math]\frac{1}{24}(n+1)(n+2)(n+3)[/math] points must be removed to remove all tetrahedrons. This gives an upper bound of [math]\frac{1}{8}(n+1)(n+2)(n+3)[/math]. |
I'm looking for a continuous function $f$ defined on the compact interval $[0,1]$ which is not of bounded variation. I think such a function might exist. Any ideas?
Of course the function $f$ such that $$ f(x) = \begin{cases} 1 & \text{if $x \in [0,1] \cap \mathbb{Q}$} \\\\ 0 & \text{if $x \notin [0,1] \cap \mathbb{Q}$} \end{cases} $$ is not of bounded variation on $[0,1]$, but it is not continuous on $[0,1]$. |
The running of the coupling strengths is usually visualized on a logarithmic scale like here
What surprises me is that the weak and the electromagnetic coupling strength do not meet before the GUT scale. Why is this the case?
A common argument in Grand Unified Theories is that all elementary forces meet at some energy scale. Above this threshold we have only one interaction, described by a gauge group $G$, and correspondingly only one coupling strength. The symmetry gets broken spontaneously to the standard model gauge group, $G \rightarrow SU(3) \times SU(2) \times U(1)$, at lower energies; the coupling strengths split, and the new gauge bosons and possibly exotic fermions get masses comparable to the GUT scale (this is called the survival hypothesis).
Now, this is speculative beyond the standard model stuff, but in the standard model something very similar happens. The standard model gauge group $SU(3) \times SU(2) \times U(1)$ gets broken at energies below the electroweak scale.
$$SU(3) \times SU(2) \times U(1) \rightarrow SU(3) \times U(1) $$
Most books and papers talk about a unified electroweak interaction.
Shouldn't this mean that the electromagnetic and weak coupling strength get unified?
And a bonus: shouldn't all fermions and bosons get masses comparable to the electroweak scale? Even without the neutrino, the mass difference between the lightest (electron, $\approx 0.5 \cdot 10^{-3}$ GeV) and heaviest (top, $\approx 170$ GeV) is six orders of magnitude. |
Cold dark matter is thought to fill our galactic neighborhood with a density $\rho$ of about 0.3 GeV/cm${}^3$ and with a velocity $v$ of roughly 200 to 300 km/s. (The velocity dispersion is much debated.) For a given dark matter mass $m$ and nucleon scattering cross section $\sigma$, this will lead to a constant collision rate of roughly
$r \sim \rho v \sigma / m$
for every nucleon in normal matter. The kinetic energy transferred to the nucleon (which is essentially at rest) will be roughly
$\Delta E \sim 2 v^2 \frac{M m^2}{(m+M)^2}$,
where $M \approx 1$ amu $\approx 1$ GeV/c${}^2$ is the mass of a nucleon. The limits for light ($m \ll M$) and heavy ($m \gg M$) dark matter are
$\Delta E_\mathrm{light} \sim 2 v^2 \frac{m^2}{M}$ and $\Delta E_\mathrm{heavy} \sim 2 v^2 M$.
This leads to an apparent intrinsic heat production in normal matter
$\tilde{P} \sim r \Delta E / M$,
which is measured in W/kg. The limits are
$\tilde{P}_\mathrm{light} \sim 2 \rho v^3 \sigma m / M^2$ and $\tilde{P}_\mathrm{heavy} \sim 2 \rho v^3 \sigma / m$.
What existing experiment or observation sets the upper limit on $\tilde{P}$?
(Note that $\tilde{P}$ is only sensibly defined on samples large enough to hold onto the recoiling nucleon. For tiny numbers of atoms--e.g. laser trap experiments--the chance of any of the atoms colliding with dark matter is very small, and those that do will simply leave the experiment.)
The best direct limit I could find looking around the literature comes from dilution refrigerators. The NAUTILUS collaboration (resonant-mass gravitational wave antenna) cooled a 2350 kg aluminum bar down to 0.1 K and estimated that the bar provided a load of no more than 10 $\mu$W to the refrigerator. Likewise, the (state-of-the-art?) Triton dilution refrigerators from Oxford Instruments can cool a volume of (240 mm)${}^3$ (which presumably could be filled with lead for a mass of about 150 kg) down to ~8mK. Extrapolating the cooling power curve just a bit, I estimated it handled about $10^{-7}$ W at that temperature.
In both cases, it looked like the direct limit on intrinsic heating is roughly $\tilde{P} < 10^{-9}$W/kg.
However, it looks like it's also possible to use the Earth's heat budget to set a better limit. Apparently, the Earth produces about 44 TW of power, of which about 20 TW is unexplained. Dividing this by the mass of the Earth, $6 \times 10^{24}$ kg,
limits the intrinsic heating to $\tilde{P} < 3 \times 10^{-12}$W/kg.
Is this Earth-heat budget argument correct? Is there a better limit elsewhere?
To give an example, the CDMS collaboration searches for (heavy) dark matter in the range 1 to 10${}^3$ GeV/c${}^2$ with sensitivities to cross sections greater than 10${}^{-43}$ to 10${}^{-40}$ cm${}^2$ (depending on mass). A 100 GeV dark matter candidate with a cross-section of 10${}^{-43}$ cm${}^2$ would be expected to generate $\tilde{P} \sim 10^{-27}$ W/kg, which is much too small to be observed.
On the other hand, a 100 MeV dark matter particle with a cross-section of $10^{-27}$ cm${}^2$ (which, although not nearly as theoretically motivated as heavier WIMPs, is not excluded by direct detection experiments) would be expected to generate $\tilde{P} \sim 10^{-10}$ W/kg. This would have shown up in measurements of the Earth's heat production.
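Both example rates can be reproduced with a short order-of-magnitude script (SI units throughout; taking $v = 250$ km/s is an assumed value within the quoted 200 to 300 km/s range):

```python
# Order-of-magnitude check of the heating rates quoted above.
GEV_KG = 1.783e-27           # kg per GeV/c^2
RHO    = 0.3 * GEV_KG * 1e6  # local DM density: 0.3 GeV/cm^3 -> kg/m^3
V      = 250e3               # assumed DM speed, m/s
M_N    = 1.67e-27            # nucleon mass, kg

def heating_rate(m_gev, sigma_cm2):
    """Apparent heating P~ in W/kg for DM mass m (GeV/c^2), cross section (cm^2)."""
    m = m_gev * GEV_KG
    sigma = sigma_cm2 * 1e-4                     # cm^2 -> m^2
    rate = (RHO / m) * V * sigma                 # collisions per nucleon per second
    dE = 2 * V**2 * M_N * m**2 / (m + M_N)**2    # recoil energy per collision, J
    return rate * dE / M_N                       # W per kg of nucleons
```

With these inputs, a 100 GeV WIMP at $\sigma = 10^{-43}$ cm${}^2$ comes out near $10^{-27}$ W/kg and a 100 MeV particle at $\sigma = 10^{-27}$ cm${}^2$ near $10^{-10}$ W/kg, in line with the estimates quoted above.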
EDIT: So it looks like I completely neglected the effects of coherent scattering, which has the potential to change some of these numbers by 1 to 2 orders of magnitude. Once I learn more about this, I will update the question. |
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ... |
$$\large \displaystyle \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} (\sin{x})^{200} \, dx$$
The closed form of the integral above can be expressed as $\dfrac{\pi b!}{a^b (c!)^2}$, where $a$, $b$ and $c$ are natural numbers. Find $a+b+c$.
Inspiration
Is there an easy way to prove that if $M$ is finitely generated projective $R$-module, then $M$ is a reflexive module using just the definition of the natural map?
The natural map $\theta: M \rightarrow M^{**}$ is defined by $\theta(m)=\widehat{m},$ where given $f \in M^{*},$ we have $\widehat{m}(f)=f(m).$
Using just the natural map and the dual basis for projective modules, it is easy to see that $M$ is torsionless (or that $\theta$ is injective), but I'm struggling to see that $\theta$ is surjective.
I'm using this page's notation, and they also give another path to prove what I'm asking. |
More specifically, I was wondering if there are well-known conditions to put on $X$ in order to make $K_0(X)\simeq K^0(X)$. Wikipedia says they are the same if $X$ is smooth. It seems to me that you get a nice map from the coherent sheaves side to the vector bundle side (the hard direction in my opinion) if you impose some condition like "projective over a Noetherian ring". Is this enough? In other words, is the idea to impose enough conditions to be able to resolve a coherent sheaf, $M$, by two locally free ones $0\to \mathcal{F}\to\mathcal{G}\to M\to 0$?
Imposing that you can resolve by a length $2$ sequence of vector bundles is too strong. What you want is that there is some $N$ so that you can resolve by a length $N$ sequence of vector bundles. By Hilbert's syzygy theorem, this follows from requiring that the scheme be regular. (Specifically, if the scheme is regular of dimension $d$, then every coherent sheaf has a resolution by projectives of length $d+1$.)
Here is a simple example of what goes wrong on singular schemes. Let $X = \mathrm{Spec} \ A$ where $A$ is the ring $k[x,y,z]/(xz-y^2)$. Let $k$ be the $A$-module on which $x$, $y$ and $z$ act by $0$. I claim that $k$ has no finite free resolution. I will actually only show that $k$ has no graded finite free resolution. Proof: The Hilbert series of $A$ is $(1-t^2)/(1-t)^3 = (1+t)/(1-t)^2$. So every graded free $A$-module has Hilbert series of the form $p(t) (1+t)/(1-t)^2$ for some polynomial $p$, and anything which has a finite resolution by such modules also has Hilbert series of the form $p(t) (1+t)/(1-t)^2$. In particular, its Hilbert series must vanish at $t=-1$. But $k$ has Hilbert series $1$, which does not.
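The Hilbert series step is easy to sanity-check numerically: since $xz-y^2$ is a nonzerodivisor of degree $2$, the degree-$d$ piece of $A$ has dimension $\binom{d+2}{2}-\binom{d}{2}$, while the coefficient of $t^d$ in $(1+t)/(1-t)^2$ is $2d+1$. A short sketch comparing the two:

```python
from math import comb

def dim_A(d):
    # Dimension of the degree-d graded piece of A = k[x,y,z]/(xz - y^2):
    # all degree-d forms minus multiples of the degree-2 relation.
    # (math.comb returns 0 when d < 2, as wanted.)
    return comb(d + 2, 2) - comb(d, 2)

def series_coeff(d):
    # Coefficient of t^d in (1+t)/(1-t)^2 = (1+t) * sum_j (j+1) t^j,
    # which works out to 2d + 1 for every d >= 0.
    return 2 * d + 1
```

The dimensions $1, 3, 5, 7, \dots$ grow linearly, matching the double pole of the series at $t=1$ (a curve of a surface singularity has 2-dimensional coordinate ring).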
There is, of course, a resolution of $k$ which is not finite. If I am not mistaken, it looks like
$$\cdots \to A[-4]^4 \to A[-3]^4 \to A[-2]^4 \to A[-1]^3 \to A \to k$$
You want coherent sheaves to have finite global resolutions by locally free sheaves. So definitely you need the regularity of $X$ to ensure that a locally free resolution stops at a finite stage. You also need a global condition such as quasiprojectivity over an affine base to guarantee that you can start the process. (The last condition is not optimal.)
Edit: In reading the follow up comments, I realize my answer was a bit cryptic. The inverse map $K_0(X)\to K^0(X)$ would send the class of a coherent sheaf to the alternating sum of the classes in a resolution. In general, these groups behave quite differently. $K^0(X)$ is contravariant like cohomology and $K_0(X)$ is covariant for proper maps like (Borel-Moore) homology. That they coincide for regular schemes is reminiscent of Poincaré duality.
Asking $K^0(X)$ to be isomorphic to $K_0(X)$ is not always "good enough". Of course, it will allow you to carry over constructions for $K_0(X)$ to $K^0(X)$, but not canonically. And it can happen that $K^0(X)\cong K_0(X)$ without $X$ being regular. For example, take $X= \textrm{Spec} A$, $A=k[x]/(x^n)$ with $n\geq 2$. Then you have an infinite resolution as given in David's answer for $k$. Computing $Tor^A_i(k,k)$ shows that $k$ has no finite resolution. (In fact, $Tor_i^A(k,k) = k^2$ for all $i>0$.) Now, although the above "existence of finite resolution" fails, it is not hard to see that $K^0(X)\cong \mathbf{Z}\cong K_0(X)$ in this case. (Use that $A$ is a local ring and the length map on $A$.) Of course, the natural map $K^0(X) \longrightarrow K_0(X)$ is not an isomorphism. (It is given by $1\mapsto n$.)
[Edit: I added another example]
[Edit 2: There was something wrong with the example below as noted by Michael. I fixed the problem]
Let me also add to my answer the following "snake in the grass". If you work with general schemes, even if regular, one requires the extra assumption of "finite-dimensionality". For example, take the scheme $X=\textrm{Spec} (k \times k[t_1]\times k[t_1,t_2] \times \ldots)$. Now, even though $A = k\times k[t_1]\times\ldots$ is regular, there is an infinite resolution for $k$ of the form $$\ldots \longrightarrow A\longrightarrow A\longrightarrow A \longrightarrow k \longrightarrow 0$$ which corresponds geometrically to taking a point, then adding a line, then adding a plane, etc. Again, take the Tor's to see that $k$ has no finite resolution. Do note that $X$ is not noetherian.
[Edit 3: I added the following for completeness]
Let $X$ be a regular finite-dimensional scheme. Assume that $X$ has enough locally frees. (This notion also arose in the question "Are schemes that 'have enough locally frees' necessarily separated?".) Then the canonical morphism $K^0(X) \longrightarrow K_0(X)$ is an isomorphism. In the second example, $X=\textrm{Spec} \ A$ is regular, but not finite-dimensional. Does $X$ have enough locally frees?
Overview
I am looking for a way to solve a structured linear system in Python without using a for loop (preferably using vectorization, if possible).
Background
Consider the following linear system: \begin{align} \begin{pmatrix} E_0 \\ F_1 & E_1 \\ & F_2 & E_2 \\ && \ddots & \ddots \\ &&& F_{K-1} & E_{K-1} \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_{K-1} \end{pmatrix} = \begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_{K-1} \end{pmatrix} \end{align} where $E_i, F_i \in \mathbb{R}^{n \times n}$, and $x_i, b_i \in \mathbb{R}^n$ for $i = 0, \ldots, K-1$
Further, the $E_i$ are invertible for $i = 0, \ldots, K-1$.
Then this system can be solved through forward substitution:
Solve $E_0 x_0 = b_0$
for $i = 1, \ldots, K-1$: Solve $E_i x_i = b_i - F_i x_{i-1}$
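The forward substitution above can be sketched with dense stand-in blocks (an illustration only; the asker's actual `Ek`/`Fk` are not reproduced here):

```python
import numpy as np

# Toy sketch of the block forward substitution: dense random blocks,
# with E_i made diagonally dominant so they are invertible.
rng = np.random.default_rng(0)
K, n = 5, 3
E = rng.normal(size=(K, n, n)) + 4 * np.eye(n)   # E_0, ..., E_{K-1}
F = rng.normal(size=(K, n, n))                    # F_1, ..., F_{K-1} (F[0] unused)
b = rng.normal(size=(K, n))

x = np.empty((K, n))
x[0] = np.linalg.solve(E[0], b[0])
for i in range(1, K):
    x[i] = np.linalg.solve(E[i], b[i] - F[i] @ x[i - 1])

# Check against solving the assembled (K n) x (K n) block system directly.
M = np.zeros((K * n, K * n))
for i in range(K):
    M[i*n:(i+1)*n, i*n:(i+1)*n] = E[i]
    if i > 0:
        M[i*n:(i+1)*n, (i-1)*n:i*n] = F[i]
assert np.allclose(M @ x.ravel(), b.ravel())
```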
My Current Implementation
The block matrices $E_i$ and $F_i$ are available by calling
Ek(i) and
Fk(i).
Currently $x$ and $b$ are shaped as numpy arrays with shape $K \times n$ so that
x[k] gives $x_k$, and so forth.
import numpy as np
from scipy.sparse.linalg import spsolve

# define K and n, create b and initialize x
x[0] = spsolve(Ek(0), b[0])
for i in range(1, K):
    x[i] = spsolve(Ek(i), b[i] - Fk(i) @ x[i-1])
Can this be vectorized? I would like to avoid a Python-level for loop here, since such loops are quite slow.
I am trying to solve an optimization problem
$$\begin{align} &\min f(x)\\ &\text{subject to } Ax\leq b\\ &x \in \mathbb{R}^{\sim 10000},\ b \in \mathbb{R}^{\sim 10000} \end{align}$$
$A$ is somewhat sparse (usually less than 5% populated) and I can efficiently evaluate $f(x)$ and $\nabla f(x)$. The Hessian of $f(x)$, however, is prohibitively expensive to compute. $f(x)$ is a convex, non-linear, smooth function.
I tried Matlab's built-in solver
fmincon but I keep receiving memory errors even though I am running on a system with 32GB memory. The exact Matlab settings I use are
options = optimoptions(@fmincon, 'GradObj', 'on', 'SubproblemAlgorithm', 'cg', ...
    'Display', 'iter', 'Hessian', {'lbfgs', 20}, 'MaxIter', 50, 'Diagnostics', 'on');
[x, fval] = fmincon(@(x) myObjFunc(x), x0, A, b, [], [], lb, [], [], options);
I would be very happy if someone could recommend a suitable open-source solver for this problem (preferably with some Matlab interface) or, even better, more elaborate settings for Matlab's fmincon to circumvent the memory issues.
I already found the
tomopt package. However, this is not open source. In case I cannot find any open-source alternative, I will check this out.
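For reference, the same problem shape can be set up with the open-source scipy stack; a sketch with a tiny stand-in objective (the real f, A, b are not reproduced here, and trust-constr with a BFGS quasi-Newton Hessian stands in for fmincon's lbfgs option):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, BFGS
from scipy.sparse import random as sprandom

# Stand-in smooth convex objective with a cheap gradient; dimensions are
# kept tiny for illustration (the real problem has ~10000 variables).
n = 50
rng = np.random.default_rng(0)
c = rng.normal(size=n)

def f(x):
    return 0.5 * x @ x + c @ x

def grad(x):
    return x + c

A = sprandom(n, n, density=0.05, random_state=0, format="csr")  # sparse constraints
b = np.ones(n)

# trust-constr accepts sparse constraint matrices and a quasi-Newton Hessian
# approximation, so no explicit Hessian is ever formed.
res = minimize(f, np.zeros(n), jac=grad, hess=BFGS(), method="trust-constr",
               constraints=[LinearConstraint(A, -np.inf, b)],
               options={"maxiter": 500})
assert np.all(A @ res.x <= b + 1e-6)
```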
Kakeya problem
Revision as of 04:53, 19 March 2009
Define a Kakeya set to be a subset [math]A\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
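The small values above can be confirmed by brute force; a minimal Python sketch, assuming nothing beyond the definition of a Kakeya set:

```python
from itertools import combinations, product

n = 2
points = list(product(range(3), repeat=n))

# One representative per direction pair {d, -d}, d != 0: (3^n - 1)/2 directions.
directions, seen = [], set()
for d in points:
    if d == (0,) * n or d in seen:
        continue
    seen.add(d)
    seen.add(tuple(-c % 3 for c in d))
    directions.append(d)

def is_kakeya(S):
    # S must contain a full line {a, a+d, a+2d} in every direction d
    return all(
        any(all(tuple((a[i] + t * d[i]) % 3 for i in range(n)) in S
                for t in range(3)) for a in S)
        for d in directions)

# smallest m for which some m-element subset is a Kakeya set
k2 = next(m for m in range(1, 3 ** n + 1)
          if any(is_kakeya(frozenset(S)) for S in combinations(points, m)))
print(k2)  # → 7, matching k_2 = 7
```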
General lower bounds
Trivially,
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, we have
[math]k_{n+m} \leq k_m k_n[/math];
this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below.
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\gtrsim 3^{(n+1)/2}.[/math]
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]

General upper bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
This estimate can be improved using an idea due to Ruzsa. Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207\ldots+o(1))^n \le k_n \le (1.88988+o(1))^n.[/math] |
$\sin x^2$ does not converge as $x \to \infty$, yet its integral from $0$ to $\infty$ does.
I'm trying to understand why and would like some help in working towards a formal proof.
$x\mapsto \sin(x^2)$ is integrable on $[0,1]$, so we have to show that $\lim_{A\to +\infty}\int_1^A\sin(x^2)dx$ exists. Make the substitution $t=x^2$, so $x=\sqrt t$ and $dx=\frac{dt}{2\sqrt t}$. Integrating by parts, $$\int_1^A\sin(x^2)dx=\int_1^{A^2}\frac{\sin t}{2\sqrt t}dt=-\frac{\cos A^2}{2A}+\frac{\cos 1}2-\frac 14\int_1^{A^2}\frac{\cos t}{t^{3/2}}dt,$$ and since $\lim_{A\to +\infty}\left(-\frac{\cos A^2}{2A}+\frac{\cos 1}2\right)=\frac{\cos 1}2$ and the integral $\int_1^{+\infty}t^{-3/2}dt$ is finite, we conclude that $\int_1^{+\infty}\sin(x^2)dx$ converges, and so does $\int_0^{+\infty}\sin(x^2)dx$. The value of this integral can be computed using the residue theorem.
The humps for $x\mapsto \sin(x^2)$ go up and down. Each has an area smaller than that of the last. The areas converge to 0 as you progress down the $x$-axis. By the alternating series test, this converges.
I solved this one integral as a particular case of the formula I provide here: http://www.mymathforum.com/viewtopic.php?f=15&t=26243 under the name Weiler.
$$\int\limits_0^\infty {\sin \left( {a{x^2}} \right)\cos \left( {2bx} \right)dx} = \sqrt {\frac{\pi }{{8a}}} \left( {\cos \frac{{{b^2}}}{a} - \sin \frac{{{b^2}}}{a}} \right)$$
$$\int\limits_0^\infty {\cos \left( {a{x^2}} \right)\cos \left( {2bx} \right)dx} = \sqrt {\frac{\pi }{{8a}}} \left( {\cos \frac{{{b^2}}}{a} + \sin \frac{{{b^2}}}{a}} \right)$$
So you have
$$\int\limits_0^\infty {\sin \left( {{x^2}} \right)dx} = \sqrt {\frac{\pi }{8}} $$
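This closed form can be checked numerically; a sketch via `scipy.special.fresnel`, where $S(z)=\int_0^z\sin(\pi t^2/2)\,dt$ and hence $\int_0^X\sin(x^2)\,dx=\sqrt{\pi/2}\;S\!\left(X\sqrt{2/\pi}\right)$:

```python
import numpy as np
from scipy.special import fresnel

def partial_integral(X):
    # ∫_0^X sin(x^2) dx rewritten via the Fresnel sine integral S
    S, _ = fresnel(X * np.sqrt(2.0 / np.pi))
    return np.sqrt(np.pi / 2.0) * S

for X in (10.0, 100.0, 1000.0):
    print(X, partial_integral(X))
print(np.sqrt(np.pi / 8))  # limiting value sqrt(pi/8) ≈ 0.6267
```

The partial integrals oscillate around the limit with amplitude roughly $1/(2X)$, matching the hump-by-hump decay discussed in the other answers.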
This is also informative, and works even when the integral has no closed form. Taking Davide's substitution, define $$ A_n^+ = \int_{2 \pi n}^{2 \pi n + \pi} \; \frac{\sin t}{2 \sqrt t} \; dt \; , $$ $$ A_n^- = \int_{2 \pi n + \pi}^{2 \pi n + 2 \pi} \; \frac{\sin t}{2 \sqrt t} \; dt \; , $$ and finally $$ A_n = A_n^+ + A_n^- = \int_{2 \pi n }^{2 \pi n + 2 \pi} \; \frac{\sin t}{2 \sqrt t} \; dt \; . $$
Next, I just used $\int_{m \pi}^{m \pi + \pi} \sin t dt = \pm 2,$ depending upon the integer $m,$ and took bounds based on the size of the denominators.
I suppose we need to start with $n \geq 1.$ With that, $$ \frac{1}{\sqrt{2 \pi n + \pi}} \leq A_n^+ \leq \frac{1}{\sqrt{2 \pi n}}, $$ $$ \frac{-1}{\sqrt{2 \pi n + \pi}} \leq A_n^- \leq \frac{-1}{\sqrt{2 \pi n + 2 \pi}}, $$ and $$ 0 \leq A_n \leq \frac{1}{\sqrt{ 8 \pi} \; \; n^{3/2}}.$$
What does this say about convergence? The integral is $$ \sum_{n = 0}^\infty A_n. $$ Convergence does not depend on the initial terms, so we may start at the more convenient $n=1.$ From the $3/2$ exponent in the estimate of $A_n,$ we see that the sum is a finite constant. We do see modest oscillation in the indefinite integral, however the $\sqrt n$ terms in the denominators of $A_n^+$ and $A_n^-$ tell us that eventually the indefinite integral stays within any desired distance of the infinite integral.
This idea, cancellation of alternating contributions, can be used with far worse integrands, $\sin (x^5 - x - 1)$ comes to mind.
Here is a much simpler method to show only the convergence of the integral.$$\begin{align}\int_0^\infty\sin\left(x^2\right)\,\mathrm{d}x&=\int_0^\infty\frac{\sin(x)}{2\sqrt{x}}\,\mathrm{d}x\tag{1}\\&=\sum_{k=0}^\infty\int_{2k\pi}^{(2k+2)\pi}\frac{\sin(x)}{2\sqrt{x}}\,\mathrm{d}x\tag{2}\\&=\sum_{k=0}^\infty\int_{2k\pi}^{(2k+1)\pi}\frac{\sin(x)}2\left(\frac1{\sqrt{x}}-\frac1{\sqrt{x+\pi}}\right)\mathrm{d}x\tag{3}\\&=\sum_{k=0}^\infty\int_{2k\pi}^{(2k+1)\pi}\frac{\sin(x)}2\frac{\pi}{\sqrt{x}\sqrt{x+\pi}\left(\sqrt{x}+\sqrt{x+\pi}\right)}\mathrm{d}x\tag{4}\\&\le\int_0^\pi\frac{\sin(x)}2\left(1+\sum_{k=1}^\infty\frac1{4\sqrt{2\pi} k^{3/2}}\right)\mathrm{d}x\tag{5}\\&=1+\frac{\zeta\!\left(\frac32\right)}{4\sqrt{2\pi}}\tag{6}\end{align}$$Explanation:
$(1)$: substitute $x\mapsto\sqrt{x}$
$(2)$: break the integral into $2\pi$ segments
$(3)$: $\sin(x+\pi)=-\sin(x)$
$(4)$: algebra
$(5)$: $\frac{\pi}{\sqrt{x}\sqrt{x+\pi}\left(\sqrt{x}+\sqrt{x+\pi}\right)}\le\min\left(1,\frac1{4\sqrt{2\pi} k^{3/2}}\right)$ for $x\ge2k\pi$
$(6)$: evaluate integral and sum
In fact we have: \begin{split} \int_0^\infty \sin(x^2) dx=\frac{1}{2\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{3/2}}\,dx<\infty\end{split} See below
Employing the change of variables $2u =x^2$ and then integrating by parts, we get \begin{split} \int_0^\infty \sin(x^2) dx&=&\frac{1}{\sqrt{2}}\int^\infty_0\frac{\sin(2x)}{\sqrt{x}}\,dx\\&=&\frac{1}{\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{2\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{3/2}}\,dx \\&= &\frac{1}{2\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{3/2}}\,dx\end{split}
Here we used $\sin 2x =(\sin^2x)'$ for the integration by parts and $\lim_{x\to 0}\frac{\sin x}{x}=1$ to see that the boundary term vanishes at $0$. For the convergence, $$ \int^\infty_1\frac{\sin^2 x}{x^{3/2}}\,dx\le \int^\infty_1\frac{1}{x^{3/2}}\,dx<\infty,$$ and since $|\sin x|\le |x|$ we have $$\int^1_0\frac{\sin^2 x}{x^{3/2}}\,dx \le \int^1_0\frac{\ x^2}{x^{3/2}}\,dx = \int^1_0\sqrt x\,dx<\infty.$$
Patrick of UC Davis told us not to overlook a new intriguing Korean hep-th paper.
The paper is rather short so you may try to quickly read it – I did – and I am a bit disappointed after I did. The abstract suggested that there is something special about the Standard Model (if it is not completely unique) that makes its rewriting as a double field theory more natural (or completely natural if not unavoidable). I couldn't find
any fingerprint of this sort in the paper. It seems to me that what they did to the Standard Model could be done to any quantum field theory in the same class.
Double field theory is a quantum field theory but it has a special property that emerged while describing phenomena in string theory. But if you remember your humble correspondent's recent comments about "full-fledged string theory", "string-inspired research", and "non-stringy research", you should know that I would place this Korean paper in the middle category. I disagree with them that the features they are trying to cover or find are "purely stringy". It's a field theory based on finitely many point-like particle species – their composition is picked by hand, and so are various interactions and constraints taming these fields – so it is simply
not string theory, where all these technicalities (the field content, interactions, and constraints) are completely determined from a totally different starting point (and not "adjustable" at all). They're not solving the full equations of string theory etc. Again, I don't think that the theories they are describing should be counted as "string theory", although the importance of string theory for this research to have emerged is self-evident.
What is double field theory (DFT)? In string theory, there is this cute phenomenon called T-duality.
If a circular dimension is compactified on a circle of radius \(R\) i.e. circumference \(2\pi R\), the momentum becomes quantized in units of \(1/R\) i.e. \(p=n/R\), \(n\in\ZZ\). That's true even in ordinary theories of point-like particles and follows from the single-valuedness of the wave function on the circle. However, in string theory, a closed string may also be wrapped around the circle \(w\) times (the winding number). In this way, you deal with a string of the minimum length \(2\pi R w\) whose minimum mass is \(2\pi R w T\) where \(T=1/2\pi\alpha'\) is the string tension (mass/energy density per unit length).
So there are contributions to the mass of the particle-which-is-a-string that go like \(n/R\) and \(wR/ \alpha'\), respectively (note that the factors of \(2\pi\) cancel). Well, a more accurate comment is that there are contributions to \(m^2\) that go like \((n/R)^2\) and \((wR/ \alpha')^2\), respectively, but I am sure that you become able to fix all these technical "details" once you start to study the theory quantitatively.
There is a nice symmetry between \(n/R\) and \(wR/ \alpha'\). If you exchange \(n\leftrightarrow w\) and \(R\leftrightarrow \alpha'/R\), the two terms get interchanged. (The squaring changes nothing about it.) That's cool. It means that the spectrum (and degeneracies) of a closed string on a circle of radius \(R\) is the same as on the radius \(\alpha'/R\). This is no coincidence. The symmetry actually does exist in string theory and applies to all the interactions, too. In particular, something special occurs when \(R^2=\alpha'\). For this "self-dual" radius, the magnitude of momentum-like and winding-like contributions is the same and that's the point where string theory produces new enhanced symmetries. For example, in bosonic string theory on the self-dual circle, \(U(1)\times U(1)\) from the Kaluza-Klein \(g_{\mu 5}\) and B-field \(B_{\mu 5}\) potentials gets extended to \(SU(2)\times SU(2)\).
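This symmetry of the spectrum is easy to check numerically; a minimal sketch in units with \(\alpha'=1\) (the cutoff \(N\) on the quantum numbers is just for illustration):

```python
import numpy as np

def spectrum(R, N=6):
    # m^2 contributions (n/R)^2 + (w R)^2 over momentum number n and
    # winding number w, in string units alpha' = 1
    return np.sort([(n / R) ** 2 + (w * R) ** 2
                    for n in range(-N, N + 1) for w in range(-N, N + 1)])

# T-duality: R <-> 1/R combined with n <-> w leaves the multiset invariant
R = 1.7
assert np.allclose(spectrum(R), spectrum(1 / R))
```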
You may compactify several dimensions on circles, i.e. on the torus \(T^k\). The T-duality may be interpreted as \(n_i\leftrightarrow w_i\), the exchange of the momenta and winding numbers, which may be generalized "locally" on the world sheet to the "purely left-moving" parity transformation of \(X_i\). The reversal of the sign of the left-moving part of \(X_i\) only may also be interpreted as a Hodge-dualization of \(\partial_\alpha X_i\) on the world sheet.
In the full string theory, the theory has the \(O(k,k)\) symmetry rotating all the \(k\) compactified coordinates \(X_i\) and their \(k\) T-duals \(\tilde X_i\) if you ignore the periodicities as well as what the bosons are actually doing on the world sheet (left-movers vs right-movers). Normally, we only want to use either the tilded or the untilded fields \(X\). The double field theory is any formalism that tries to describe the string in such a way that
both \(X_i\) and \(\tilde X_i\) exist at the same time. You have to "reduce" 1/2 of these spacetime coordinates at a "different place" not to get a completely wrong theory but it's possible to find such a "different place": some extra constraints on the string fields.
When the periodicities (circular compactification) are not ignored, the theory on \(T^k\) has only the discrete subgroup \(O(k,k,\ZZ)\) as a symmetry, and the moduli space of the vacua is the coset\[
{\mathcal M}= (O(k,\RR)\times O(k,\RR)) \backslash O(k,k,\RR) / O(k,k,\ZZ)
\] because both compact simple \(O(k,\RR)\) transformations remain symmetries. This \(k^2\)-dimensional moduli space parameterizes all the radii and angles in the torus \(T^k\) as well as all the compact antisymmetric B-field components on it. It's not hard to see why there are \(k^2\) parameters associated with the torus: you may describe each torus using "standardized" periodic coordinates between zero and one, and all the information about the shape and the B-field may be stored in the general (non-symmetric, non-antisymmetric) tensor \(g_{mn}+B_{mn}\) which obviously has \(k^2\) components.
OK, what do you do with the Standard Model?
I said that the spacetime coordinates are effectively "doubled" when we add all the T-dual coordinates at the same moment. In this "Standard Model" case, it's done with all the spacetime coordinates, including – and especially – the 3+1 large dimensions. So instead of 3+1, we get 3+1+1+3 = 4+4 dimensions (note that the added dimensions have the opposite signature so the "sum" always has the same number of time-like and space-like coordinates).
The parent spacetime is 8-dimensional and the parent Lorentz group is \(O(4,4)\). This is broken to \(O(3,1)\times O(1,3)\). We obviously don't want an eight-dimensional spacetime. The authors describe some (to my taste, ad hoc) additional constraints that make all the fields in the 8-dimensional spacetime independent of 1+3 coordinates. So they only depend on the 3+1 coordinates we know and love.
They work hard to rewrite the whole ordinary Standard Model in terms of fields in this 8-dimensional parent spacetime with some extra restrictions and claim that it can be done. They just make some "very small" comments that their formalism bans the \(F\wedge F\) term in QCD – which would solve the strong CP-problem – as well as some quark-lepton couplings (an experimental prediction about the absence of some dimension-six operators). I don't quite get these claims. And their indication that the quarks transform under the first \(O(3,1)\) while the leptons transform under the other \(O(1,3)\), but they may also transform under the "same" factor of the group, sounds scary to me. Depending on this choice, one must obtain very different theories, right?
Aside from the very minor (I would say) issue concerning the \(\theta\)-angle, I think it's fair to say that they present no evidence that the Standard Model is "particularly willing" to undergo this doubling exercise.
Even though I "independently discovered" the basic paradigm of the double field theory before others published it, I do share some worries with my then adviser who was discouraging me. The \(O(k,k,\RR)\) symmetry is really "totally broken" at the string scale, by the stringy effects. Some of the bosonic components are left-moving, others (one-half) are right-moving. This is no detail. The left-movers and right-movers are two totally separate worlds. So there isn't any continuous symmetry that totally mixes them.
In some sense, the \(O(k,k,\RR)\) symmetry is an illusion that only "seems" to be relevant in the field-theoretical limit but it's totally broken at the string scale. This fate of a symmetry is strange because we're used to symmetries that are restored at high energies and broken at low energies. Here, it seems to be the other way around.
After all, the symmetry is brutally broken in the double field theory Standard Model, too. The eight spacetime dimensions aren't really equal. Things can depend on four of them but not the other four. Maybe this separation is natural and may be done covariantly – they make it look like it is the case. But I still don't understand any sense or regime in which the \(O(k,k,\RR)\) symmetry could be truly unbroken which is why it seems vacuous to consider this symmetry physical.
Maybe such a symmetry may be useful and important even if it can never be fully restored. I just don't follow the logic. I don't understand why this symmetry would be "necessary for the consistency" or otherwise qualitatively preferred. That's why I don't quite see why we should trust things like \(\theta=0\) which follow from the condition that the Standard Model may be rewritten as a double field theory even though this rewriting doesn't seem to be "essential" for anything.
But at the end, I feel that they have some chance to be right that there's something special and important about theories that may be rewritten in this way. The broader picture of \(O(4,4)\)-symmetric theories reminds me of many ideas especially by Itzhak Bars who has been excited about "theories with two time coordinates" for many years. Here, we have "four times".
Quite generally, more than "one time coordinate" makes the theory inconsistent if you define it too ordinarily. The plane spanned by two of the time coordinates contains circles – and, as you may easily verify, they are closed time-like curves, the source of logical paradoxes (involving your premature castration of your grandfather). So the new time coordinates cannot be quite treated on par with the time coordinate we know. There have to be some gauge-fixing conditions or constraints that only preserve the reality of one time coordinate.
The idea is that there may be some master theory with a noncompact symmetry, \(O(4,4)\) or \(O(\infty,\infty)\) or something worse, which has some huge new "gauge" symmetry that may be fixed in many ways and the gauge fixing produces the "much less noncompact" theories we know – theories with at most one time and with compact Yang-Mills gauge groups. Is this picture really possible? And if it is, are the "heavily noncompact" parent theories more than some awkward formalism that teaches us nothing true? Can these "heavily noncompact" parent theories unify theories that look very different in the normal description? And if this unification may be described mathematically, should we believe that it's physically relevant, or is it just a bookkeeping device that reshuffles many degrees of freedom in an unphysical way?
I am not sure about the answers to any of these questions. Many questions in physics are open and many proposals remain intriguing yet unsettled for a very long time. But I also want to emphasize that it is perfectly conceivable that these questions may be settled and will soon be settled. And they may be settled in both ways. It may be shown that this double field theory formalism is natural, important, and teaches us something. But it may also be shown that it is misguided. Before robust enough evidence exists in either direction, I would find it very dangerous and unscientific to prematurely discard one of the possibilities. The usefulness, relevance, or mathematical depth of the double field theory formalism is just a
working hypothesis, I think, and the amount of evidence backing this hypothesis (e.g. nontrivial consistency checks) is in no way comparable to the evidence backing the importance and validity of many established concepts in string theory (or physics).
Given a forward-in-time approximation I have the coupled equations: $$ \frac{T^{(n+1)} - T^{(n)}}{\Delta t} = x T^{(n)} - y h^{(n)} \\ \frac{h^{(n+1)} - h^{(n)}}{\Delta t} = -z h^{(n)} - \alpha T^{(n)} $$ where $x, y, z$ and $\alpha$ are constants. I see from my simulation that the solution is damping. But how can I use Von Neumann analysis to find the amplification factor $A$?
You can rewrite your finite difference method into the form
\begin{align}\left[\begin{matrix} T^{n+1} \\ h^{n+1}\end{matrix} \right]=\left[\begin{matrix}A_{11} & A_{12} \\ A_{21} & A_{22} \end{matrix} \right] \left[\begin{matrix} T^n\\h^n\end{matrix} \right] = \left[\begin{matrix}A_{11} & A_{12} \\ A_{21} & A_{22} \end{matrix} \right]^{n+1}\left[\begin{matrix} T^0\\h^0\end{matrix} \right] \end{align}
Rewritten this way, you will find that the matrix $A$ becomes $$A=I+\Delta t\,J,$$ where $I$ is the identity matrix and $J$ is the Jacobian of the right-hand side of your original system. $A$ can be thought of as the amplification matrix, whose eigenvalues determine the stability of the numerical scheme. To ensure absolute stability, you need the spectral radius $\rho(A)<1$, i.e. every eigenvalue of $A$ must lie inside the unit circle in the complex plane.
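As a concrete sketch (the constants are illustrative, not taken from the question):

```python
import numpy as np

# Forward-Euler update for (T, h): A = I + dt * J, with J the Jacobian
# [[x, -y], [-alpha, -z]] of the right-hand side.
x, y, z, alpha = -1.0, 0.5, 2.0, 0.3   # illustrative constants
dt = 0.1
J = np.array([[x, -y],
              [-alpha, -z]])
A = np.eye(2) + dt * J
rho = max(abs(np.linalg.eigvals(A)))
print(rho)  # spectral radius < 1 for these values, so the iteration is damped
```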
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition of algebraic closure, do we get a different notion?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, and hence $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then, for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $\sum a_n z^n$ is absolutely summable, and in particular summable
Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ .
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$.
If $\lim_{n\to \infty} \left|\frac{a_{n+1}(t)}{a_n(t)}\right|<1$, then $a_n(t)$ converges to zero.
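As a numerical illustration, take the simplifying assumption $g\equiv 1$ (my choice, not the problem's), for which the iterated integrals have the closed form $g_n(t)=t^{n-1}/(n-1)!$, so that $n!\,g_n(1/2)=n\,(1/2)^{n-1}\to 0$; a trapezoidal iteration reproduces this:

```python
# Numerical illustration for the simple choice g ≡ 1 (an assumption made
# here for illustration), where g_n(t) = t^(n-1)/(n-1)! in closed form.
import math

M = 2000                      # grid points on [0, 1/2]; an arbitrary choice
h = 0.5 / M
g = [1.0] * (M + 1)           # g_1 = g ≡ 1 sampled on the grid

for n in range(1, 15):
    exact = 0.5 ** (n - 1) / math.factorial(n - 1)
    assert abs(g[-1] - exact) < 1e-6   # numeric iterate matches closed form
    # g_{n+1}(t) = \int_0^t g_n(s) ds via a cumulative trapezoidal rule
    cum, new = 0.0, [0.0]
    for i in range(1, M + 1):
        cum += 0.5 * h * (g[i - 1] + g[i])
        new.append(cum)
    g = new

# hence n! g_n(1/2) = n (1/2)^(n-1), which tends to zero
seq = [n * 0.5 ** (n - 1) for n in range(1, 15)]
assert seq[-1] < 3e-3
```

In general $|g_n(t)| \le \|g\|_\infty\, t^{n-1}/(n-1)!$, so $n!\,|g_n(t)| \le n\,\|g\|_\infty\,t^{n-1}$, which is exactly the kind of estimate the ratio-test attempt is after.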
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of linearly independent functions from the appropriate function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero), I get a set of $n$ equations, with $n$ the number of coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients.
Now, instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity of solving for the coefficients.
I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional are instead obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, a more general principle, so to say.
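A minimal numerical sketch of this observation (the $2\times 2$ matrices $H$ and $S$ below are made up; they stand in for the bilinear form and the overlap of two basis functions): each root of the secular determinant $\det(H-\lambda S)=0$ is a stationary value of the quotient $c^{T}Hc/c^{T}Sc$, obtained without first solving for the coefficients.

```python
# A 2x2 sketch of the secular-determinant idea.  H and S are made-up
# symmetric matrices (S positive definite) for illustration only.
import math

H = [[2.0, 0.5], [0.5, 1.0]]
S = [[1.0, 0.2], [0.2, 1.0]]

# det(H - lam*S) = a*lam^2 + b*lam + c, expanded by hand for the 2x2 case
a = S[0][0] * S[1][1] - S[0][1] * S[1][0]
b = -(H[0][0] * S[1][1] + H[1][1] * S[0][0]
      - H[0][1] * S[1][0] - H[1][0] * S[0][1])
c = H[0][0] * H[1][1] - H[0][1] * H[1][0]
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]

for lam in roots:
    # a null vector of (H - lam*S), read off from its first row
    cvec = [-(H[0][1] - lam * S[0][1]), H[0][0] - lam * S[0][0]]
    num = sum(cvec[i] * H[i][j] * cvec[j] for i in range(2) for j in range(2))
    den = sum(cvec[i] * S[i][j] * cvec[j] for i in range(2) for j in range(2))
    # each root of the secular determinant IS a stationary value
    # of the functional c^T H c / c^T S c
    assert abs(num / den - lam) < 1e-9
```

The "deeper principle" the question gestures at is that the stationary values of a quotient of quadratic forms are exactly the generalized eigenvalues of the pencil $(H, S)$, which is why the determinant condition already carries them.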
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum($z$) = digitsum($x$).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere-continuous function whose Fourier series diverges at a point. Hermann Amandus Schwarz then gave a simpler example.)
It's discussed very carefully (but with no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
How would I find the series expansion of $\displaystyle\frac{1}{(1+x)(1−x)(1+x^2)(1−x^2)(1+x^3)(1−x^3)\cdots}$, i.e. how do I turn it back into an infinite power series?
If you set
$$f(x)=\prod_{n=1}^\infty (1-x^n)^{-1}=\sum_{n=0}^\infty p(n)x^n$$
we see that yours is just
$$f(x^2)=\sum_{n=0}^\infty p(n)x^{2n}.$$
$$\frac{1}{1-x^k}=1+x^k+x^{2k}+x^{3k}+\cdots$$ $$\frac{1}{1+x^k}=1-x^k+x^{2k}-x^{3k}+\cdots$$
I suppose that you can make the expression shorter using $(1+a)(1-a)=1-a^2$, so $$A=\frac{1}{(1+x)(1−x)(1+x^2)(1−x^2)(1+x^3)(1−x^3)\cdots}=\frac{1}{(1−x^2)(1−x^4)(1-x^6)\cdots}$$ and now use the fact that $$\frac{1}{1-y}=\sum_{i=0}^{\infty}y^i$$ and replace $y$ successively by $x^2$, $x^4,\cdots,x^{2n}$ before computing the overall product.
Doing so, for a large number of terms, you should arrive at $$A=1+x^2+2 x^4+3 x^6+5 x^8+7 x^{10}+11 x^{12}+15 x^{14}+22 x^{16}+30 x^{18}+42 x^{20}+56 x^{22}+77 x^{24}+101 x^{26}+135 x^{28}+176 x^{30}+O\left(x^{31}\right)$$ and, as mentioned earlier, the coefficients correspond to sequence $\text{A000041}$ of the $\text{OEIS}$, where $a_n$ is the number of partitions of $n$ (the partition numbers).
Probably off-topic: for an infinite number of terms, $$A=\frac{1}{\left(x^2;x^2\right){}_{\infty }}$$ in which the $q$-Pochhammer symbol appears and which, for sure, leads to the same expansion.
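The expansion above is easy to reproduce mechanically; a short sketch (the truncation degree $30$ is an arbitrary choice):

```python
# Expand 1/((1-x^2)(1-x^4)...(1-x^30)) and check that the coefficient of
# x^(2n) is the partition number p(n) (OEIS A000041).
N = 30  # truncation degree; factors (1 - x^(2k)) with 2k > N cannot contribute

coeffs = [0] * (N + 1)
coeffs[0] = 1
for k in range(2, N + 1, 2):                 # multiply by 1/(1 - x^k) ...
    for i in range(k, N + 1):                # ... as an in-place cumulative sum
        coeffs[i] += coeffs[i - k]

partition_numbers = [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176]
assert [coeffs[2 * n] for n in range(16)] == partition_numbers
assert all(coeffs[m] == 0 for m in range(1, N + 1, 2))  # odd powers vanish
```

The in-place inner loop is the standard partition-counting recurrence: multiplying by the geometric series $1/(1-x^k)$ amounts to adding, for each degree $i$, the coefficient at degree $i-k$.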
If $1$, $\alpha_1$, $\alpha_2$, $\alpha_3$, $\ldots \alpha_9$ are the $10$th roots of unity, then what is the value of $$ (1 - \alpha_1)(1 - \alpha_2)(1 - \alpha_3) \cdots (1 - \alpha_9)? $$
I am not being able to solve this. Please help!
Let $$f(x)=x^{10}-1=(x-1)(x-\alpha_1)\cdots(x-\alpha_9).$$ Then $$ f'(x)=10x^9=\sum_{i=0}^9\prod_{0\le j\le9, j\neq i}(x-\alpha_j) $$ (where I denote $\alpha_0=1$).
Can you calculate $f'(1)$? Do you see why it answers your question?
Hint: Let $ \mu_n $ be the set of $ n $th roots of unity. Then, we have
$$ \prod_{\zeta \in \mu_n} (x - \zeta) = x^n - 1 = (x-1)(x^{n-1} + x^{n-2} + \ldots + x + 1) $$
and therefore
$$ \prod_{\zeta \in \mu_n - \{1\}} (x - \zeta) = x^{n-1} + x^{n-2} + \ldots + x + 1 $$
Let the roots of $x^n-1=0$ be $$1,\alpha_1,\alpha_2,\cdots,\alpha_{n-1}$$
Let $1-x=y\iff x=1-y$.
So, the equation whose roots are $$1-1,1-\alpha_1,1-\alpha_2,\cdots,1-\alpha_{n-1}$$ will be $$(1-y)^n-1=0\iff(-1)^ny^n+(-1)^{n-1}\binom n1y^{n-1}+\cdots-\binom n1y=0$$
So, the roots of $$y^{n-1}-\binom n1y^{n-2}+\cdots+(-1)^{n-1}\binom n1=0$$ will be $$1-\alpha_1,1-\alpha_2,\cdots,1-\alpha_{n-1}$$
Can you use Vieta's formula now?
Hint: What is the product of the roots of the polynomial $(1-x)^{10} - 1$, other than $0$?
Here is an other approach. You have this result:
Roots to any polynomial with real coefficients are either real or come in complex conjugate pairs.
We have one real root among the $\alpha_i$, namely $-1$; its contribution to the product is a factor of $(1-(-1))=2$.
Then let us consider the product of any complex conjugate pair: $$(1-z)(1-z^*) = 1+R(z)^2+I(z)^2-2R(z)=2(1-R(z)),$$ where in the last step we used $R(z)^2+I(z)^2 = 1$, which is true on the unit circle.
Now consider the distribution of these $R(z)$ on our circle. Maybe there is some way we can arrange them to make the pairwise product simpler. We had $8$ complex roots, which means $4$ complex conjugate pairs.
But we can also pair these pairs up so we have $2$ pairs of pairs with opposite real signs. The real parts are cosine values, so the pairwise products become $1-\cos^2$, which should ring a trigonometric bell, and then we are rather close to a solution!
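All of these hints lead to the same number; a direct numerical check (the value $10$ is what $f'(1)$ gives in the first answer):

```python
# Numerical check that prod_{k=1}^{9} (1 - alpha_k) = 10 for the
# non-trivial 10th roots of unity alpha_k = exp(2*pi*i*k/10).
import cmath

n = 10
prod = 1
for k in range(1, n):
    prod *= 1 - cmath.exp(2j * cmath.pi * k / n)

assert abs(prod - n) < 1e-9     # the product equals n, here 10
```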
The first relationship is actually more interesting than you suggest. Specifically, applying your recipe to the ordered partitions of $n$ produces $F_{2n-1}$, while adding the products without reducing the last entry to $1$ produces $F_{2n}$.
It’s not too hard to see why this works. Suppose that it’s true for some $n$, and consider the ordered partitions of $n+1$. They are of two types: those that end in $1$, and those that don’t. Each partition of $n+1$ that ends in $1$ is obtained from a partition $\pi$ of $n$ by appending $+1$; its reduced product is the product of the entries in $\pi$, so the sum of the reduced products of these partitions of $n+1$ is $F_{2n}$.
Each partition of $n+1$ that does not end in $1$ is obtained from a partition $\pi$ of $n$ by adding $1$ to its last element; its reduced product is the same as the reduced product of $\pi$, so the sum of the reduced products of these partitions of $n+1$ is $F_{2n-1}$. Thus, the sum of the reduced products of all of the ordered partitions of $n+1$ is $F_{2n}+F_{2n-1}=F_{2n+1}=F_{2(n+1)-1}$, as desired.
The sum of the unreduced products of the partitions of $n+1$ that end in $1$ is the same as the sum of their reduced products, or $F_{2n}$, so to complete the argument, we need only show that the sum of the unreduced products of the partitions that do not end in $1$ is $F_{2n+1}$. But this is clear: if $\pi'$ is an ordered partition of $n+1$ obtained by adding $1$ to the last element of some ordered partition $\pi$ of $n$, the unreduced product of $\pi'$ is the sum of the reduced and unreduced products of $\pi$. Summing over all such partitions of $n+1$ then yields a total of $F_{2n}+F_{2n-1}=F_{2n+1}$, just as in the last paragraph.
Of course the induction gets off the ground with no difficulty, since the sums for $n=1$ are $F_1=1=F_2$.
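Both claims are easy to check by brute force for small $n$; a sketch (the composition generator and the indexing $F_1=F_2=1$ are my own conventions, and "reduced product" is read as the product of all entries except the last, i.e. with the last entry reduced to $1$):

```python
# Brute-force verification for small n: over all ordered partitions
# (compositions) of n, the reduced products sum to F_{2n-1} and the full
# products sum to F_{2n}.

def compositions(n):
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def fib(m):                        # F_1 = F_2 = 1
    a, b = 1, 1
    for _ in range(m - 1):
        a, b = b, a + b
    return a

def prod(xs):
    p = 1
    for v in xs:
        p *= v
    return p

for n in range(1, 10):
    assert sum(prod(c[:-1]) for c in compositions(n)) == fib(2 * n - 1)
    assert sum(prod(c) for c in compositions(n)) == fib(2 * n)
```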
I’d meant to add this a while back, but I got busy and forgot:
This differs from savicko1’s argument chiefly in that it looks only at the immediately preceding integer.
The second question can be dealt with similarly. Let $P(n)$ be the set of ordered partitions of $n$. Call an ordered partition of $n$ odd or even according as the number of terms is odd or even. Let $P_o(n)$ be the set of odd ordered partitions of $n$, $P_o^-$ the set of odd ordered partitions of $n$ whose last term is $1$, and $P_o^+(n)$ be the set of odd ordered partitions of $n$ whose last term is greater than $1$, and define $P_e(n)$, $P_e^-(n)$, and $P_e^+(n)$ similarly for even ordered partitions of $n$. For each ordered partition $\pi$ of $n$ let $f(\pi)$ be the product of the factors $2^{x-1}$ as $x$ ranges over the odd-numbered terms of $\pi$. Finally, let $$s(n)=\sum_{\pi\in P(n)}f(\pi)\text{ and }t(n)=\sum_{\pi\in P_o(n)}f(\pi)\;.$$
Then $$\begin{align*}
&\sum_{\pi\in P_o^-(n+1)}f(\pi)=\sum_{\pi\in P_e(n)}f(\pi)=s(n)-t(n)\;,\\
&\sum_{\pi\in P_o^+(n+1)}f(\pi)=2\sum_{\pi\in P_o(n)}f(\pi)=2t(n)\;,\\
&\sum_{\pi\in P_e^-(n+1)}f(\pi)=\sum_{\pi\in P_o(n)}f(\pi)=t(n)\;,\text{ and}\\
&\sum_{\pi\in P_e^+(n+1)}f(\pi)=\sum_{\pi\in P_e(n)}f(\pi)=s(n)-t(n)\;,
\end{align*}\tag{1}$$
and since the lefthand sides of $(1)$ sum to $s(n+1)$, $$
s(n+1)=2s(n)+t(n)\tag{2}$$ and $$t(n+1)=\sum_{\pi\in P_o^-(n+1)}f(\pi)+\sum_{\pi\in P_o^+(n+1)}f(\pi)=s(n)+t(n)\;.$$ If $s(k)=F_{2k}$ for $k\le n$, then $$\begin{align*}
s(n+1)&=2F_{2n}+t(n)\\
&=2F_{2n}+s(n-1)+t(n-1)\\
&=2F_{2n}+s(n-1)+\big(s(n)-2s(n-1)\big)\qquad\text{(from }(2)\text{)}\\
&=3F_{2n}-F_{2n-2}\\
&=2F_{2n}+F_{2n-1}\\
&=F_{2n}+F_{2n+1}\\
&=F_{2n+2}\;,
\end{align*}$$
and the result follows by induction.
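The weighted count $s(n)=\sum_{\pi}f(\pi)$ can likewise be checked by brute force for small $n$ (helpers repeated so the sketch is self-contained):

```python
# Brute-force verification for small n that s(n) = sum over ordered
# partitions of f(pi) equals F_{2n}, where f(pi) multiplies 2^(x-1) over
# the odd-numbered entries x of pi.

def compositions(n):
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def fib(m):                        # F_1 = F_2 = 1
    a, b = 1, 1
    for _ in range(m - 1):
        a, b = b, a + b
    return a

def f(pi):
    p = 1
    for pos, x in enumerate(pi, start=1):
        if pos % 2 == 1:           # only odd-numbered terms contribute
            p *= 2 ** (x - 1)
    return p

for n in range(1, 10):
    assert sum(f(pi) for pi in compositions(n)) == fib(2 * n)
```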
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
State
A state is something that a system is in. The system is where I perform my measurement; the state is the result of that measurement.
When I perform one kind of a measurement, and then I perform another kind of a measurement, the two results are correlated. In particular, the first result may determine the probability that the second measurement will yield a specific value, i.e., that the system will be in a specific state with respect to the second measurement, as opposed to some other state.
The abstract symbol for a state \(x\) is \(|x\rangle\). This is just a label; it is not a number.
The symbol for the transition from state \(x\) to state \(y\) is \(\langle y|x\rangle\). This, actually, is a number!
Amplitude
Specifically, it is a complex number called the amplitude. The reason why it is a complex number is experimental: we found that if there are two possible ways for a system to reach state \(y\) starting from state \(x\), it is these complex numbers, not the probabilities, that need to be summed.
Probability
The actual probability that the system in state \(x\) will also be in state \(y\) is computed as the square of the absolute value of the complex number:
Base states
It is assumed as an axiom that any state can be expressed as a sum of base states:
What we know about the base states is that they are orthogonal:
\[\langle i|j\rangle=\delta_{ij}.\]
State vector
The contribution of each base state \(|i\rangle\) to \(|x\rangle\) is characterized by \(\langle i|x\rangle\), which is just a complex number. The state \(|x\rangle\) can, therefore, be viewed as a vector that is expressed in terms of base vectors \(|i\rangle\) in some complex vector space.
The transition amplitude from state \(x\) to state \(y\) can be expressed through a set of base states as:
The probability that a system in state \(x\) is in state \(x\) is unity:
\[\langle x|x\rangle=\sum\limits_i\langle x|i\rangle\langle i|x\rangle=1.\]
The probability that a system in state \(x\) is found in some base state is also unity:
\[\sum\limits_i|\langle i|x\rangle|^2=\sum\limits_i\langle i|x\rangle\langle i|x\rangle^\star=1.\]
From this one can see that
\[\langle i|x\rangle=\langle x|i\rangle^\star.\]
And since any state \(y\) can be a base state in some set of base states, it is true in general that
\[\langle y|x\rangle=\langle x|y\rangle^\star.\]
Operators
When you do something to a system, you change its state. This is expressed by an operator acting on that state:
\[|y\rangle=\hat A|x\rangle.\]
This is defined to mean the following:
\[\hat A|x\rangle=\sum\limits_{ij}|i\rangle\langle i|\hat A|j\rangle\langle j|x\rangle,\]
which means that \(\hat A\) is just a collection of matrix elements \(A_{ij}\), expressed with respect to some set of base states.
Expectation value
A measurement may be expressed in the form of an operator. If this is the case, the average (expectation) value of that measurement can be expressed as:
\[A_\mathrm{av}=\langle x|\hat A|x\rangle,\]
which really is just shorthand for
\[A_\mathrm{av}=\sum\limits_{ij}\langle x|i\rangle\langle i|\hat A|j\rangle\langle j|x\rangle.\]
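A finite-dimensional sketch of the rules so far (the three base states, the sample amplitudes, and the operator matrix below are all made up for illustration):

```python
# States as complex vectors of amplitudes <i|x> over a set of base states.
import math

amps = [0.5 + 0.5j, 0.3 - 0.2j, -0.1 + 0.4j]      # raw amplitudes <i|x>
norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
x = [a / norm for a in amps]                       # normalized: <x|x> = 1

def bracket(y, x):     # <y|x> = sum_i <y|i><i|x>, with <y|i> = <i|y>*
    return sum(yi.conjugate() * xi for yi, xi in zip(y, x))

assert abs(bracket(x, x) - 1) < 1e-12              # total probability is 1

y = [1 / math.sqrt(2), 1j / math.sqrt(2), 0]
assert abs(bracket(y, x) - bracket(x, y).conjugate()) < 1e-12  # <y|x>=<x|y>*

# expectation value <x|A|x> of a Hermitian matrix of elements <i|A|j>
A = [[1.0, 0.2j, 0], [-0.2j, 2.0, 0], [0, 0, 3.0]]
A_av = sum(x[i].conjugate() * A[i][j] * x[j]
           for i in range(3) for j in range(3))
assert abs(A_av.imag) < 1e-12        # Hermitian averages are real
```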
Wave function
What if there is an infinite number of states? For instance, a particle's position \(l\) along a line may be expressed in terms of the set of individual positions as base states. But there is an infinite number of such positions possible. Thus, our sum
\[|l\rangle=\sum\limits_x|x\rangle\langle x|l\rangle\]
becomes instead the integral
\[|l\rangle=\int|x\rangle\langle x|l\rangle dx.\]
This equation has little practical meaning since \(|x\rangle\) is just an abstract symbol. However, the amplitude for a system in state \(l\) to be later found in state \(k\), previously expressed as the sum (1):
\[\langle k|l\rangle=\sum\limits_x\langle k|x\rangle\langle x|l\rangle.\]
is now the integral
\[\langle k|l\rangle=\int\langle k|x\rangle\langle x|l\rangle dx.\]
Both \(\langle k|x\rangle\) and \(\langle x|l\rangle\) are just complex numbers; complex-valued functions, in fact, of the continuous variable \(x\):
\[\langle k|x\rangle=\langle x|k\rangle^\star=\phi^\star(x),\]
\[\langle x|l\rangle=\psi(x).\]
These functions are called wave functions, mainly because they typically appear in the form of periodic complex-valued functions. With their help, the transition amplitude can now be expressed as:
\[P(\phi\rightarrow\psi)=\int\phi^\star(x)\psi(x)dx,\]
and the expectation value of an operator can be written as
Algebraic operator
In this context, \(\hat A\) no longer works as a matrix operator converting a state vector into another state vector, but as an algebraic operator converting a wave function into another wave function. How the matrix operator, expressed in terms of states and amplitudes, and the algebraic operator, expressed usually as a differential operator, relate to each other is another question!
Position operator
If we know the probability \(P(x)\) that a particle will be at position \(x\), we can compute the average position of the particle after many measurements as follows:
\[\bar x=\int xP(x)dx.\]
But this is the same as
\[\int x\phi^\star(x)\phi(x)dx=\int\phi^\star(x)x\phi(x)dx,\]
which is formally identical to the expectation value (2) for a measurement that can be expressed in the form of an operator \(\hat x\). In other words, \(\hat x\) can be viewed as the position operator. When the base states are positions, the position operator is just a multiplication of the wave function by \(x\).
Momentum operator
The same computation can be performed for the momentum, using as base states states of definite momentum:
\[\int p\phi^\star(p)\phi(p)dp=\int\phi^\star(p)p\phi(p)dp.\]
The question is: can the momentum operator be expressed in terms of base states of position?
The amplitude of a system, which is in state \(\beta\), to be found in a state of definite momentum \(p\), is just the definite integral
\[\int\limits_{-\infty}^\infty\langle p|x\rangle\langle x|\beta\rangle dx.\]
The relationship between position and momentum, specifically the amplitude for a particle to be found at position \(x\) after it has been measured to have momentum \(p\), is assumed to be
\[\langle x|p\rangle=e^{ipx/\hbar},\qquad\text{so that}\qquad\langle p|x\rangle=e^{-ipx/\hbar}.\]
So our integral becomes
\[\langle p|\beta\rangle=\int\limits_{-\infty}^\infty e^{-ipx/\hbar}\langle x|\beta\rangle dx.\]
Now let's use, as \(|\beta\rangle\), the state \(\hat p|k\rangle\) (it can be any state, after all), where \(\langle x|k\rangle=\phi(x)\) and the \(k\)'s are assumed to be states of definite momentum. This way, our earlier expression becomes:
\[\int\limits_{-\infty}^\infty\langle p|x\rangle\langle x|\beta\rangle dx=\int\limits_{-\infty}^\infty\langle p|x\rangle\langle x|\hat p|k\rangle dx=\int\limits_{-\infty}^\infty\langle p|x\rangle p\langle x|k\rangle dx.\]
Then \(\langle x|\beta\rangle\) is just \(p\langle x|k\rangle=p\phi(x)\), and:
\[\langle p|\beta\rangle=\int\limits_{-\infty}^\infty e^{-ipx/\hbar}p\phi(x)dx.\]
The integral can be computed by observing that \(de^{-ipx/\hbar}/dx=(-i/\hbar)pe^{-ipx/\hbar}\), integrating by parts, and assuming that \(\phi(x)=0\) at \(x=\pm\infty\):
\[\langle p|\beta\rangle=\frac{\hbar}{i}\int\limits_{-\infty}^\infty e^{-ipx/\hbar}\frac{\partial\phi}{\partial x}dx,\]
so, from (3):
\[\langle x|\beta\rangle=\frac{\hbar}{i}\frac{\partial\phi}{\partial x},\]
and we now have an expression for the momentum operator \(\hat p\):
\[\hat p=\frac{\hbar}{i}\frac{\partial}{\partial x}.\]
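A quick numerical sanity check (units with \(\hbar=1\); the plane wave and the value of \(k\) are my arbitrary choices): applying \(\hat p\) to \(e^{ikx}\), with the derivative taken by finite differences, returns \(\hbar k\,e^{ikx}\).

```python
# Check that (hbar/i) d/dx has the plane wave e^{ikx} as an eigenfunction
# with eigenvalue hbar*k; hbar = 1 and k = 2.5 are arbitrary choices.
import cmath

hbar, k, x0, h = 1.0, 2.5, 0.7, 1e-5

def phi(x):
    return cmath.exp(1j * k * x)

dphi = (phi(x0 + h) - phi(x0 - h)) / (2 * h)   # central finite difference
p_phi = (hbar / 1j) * dphi

assert abs(p_phi - hbar * k * phi(x0)) < 1e-6
```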
Time displacement
How does a system evolve over time? Let's consider the time displacement operator \(\hat U(t_1,t_2)\):
\[\langle\chi|\hat U(t_1,t_2)|\phi\rangle.\]
S-matrix
When \(t_1\rightarrow-\infty\) and \(t_2\rightarrow+\infty\), we call \(\hat U(t_1,t_2)\) the S-matrix.
Making \(t_1=t\) and \(t_2=t+\Delta t\), observing that when \(\Delta t=0\), \(U_{ij}\) (in some coordinate representation) must be \(\delta_{ij}\), and assuming that for small \(\Delta t\) the change in \(\phi\) will be linear, we get:
\[U_{ij}=\delta_{ij}-\frac{i}{\hbar}H_{ij}(t)\Delta t.\]
(The factor \(-i/\hbar\) is introduced for reasons of convenience.)
In other words, the difference between the wave function at the two nearby times can be expressed as:
\[\phi'-\phi=-\frac{i}{\hbar}\Delta t\bar H\phi,\]
or, dividing by \(\Delta t\) and recognizing the left-hand side as a time differential:
\[i\hbar\frac{\partial\phi}{\partial t}=\bar H\phi.\]
Schrödinger equation
Schrödinger, that kind chap, then just decided to use in place of \(\hat H\) an operator that he concocted on the basis of the classical expression for energy:
\[E=\frac{p^2}{2m}+V.\]
His equation,
\[i\hbar\frac{\partial\phi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\phi+V\phi,\]
describes the wave function of a particle moving in a potential field \( V\).
A crucial thought is that the Schrödinger equation is not as fundamental as you might have been led to believe. Indeed, there's no single Schrödinger equation; the actual equation of a system depends on the characteristics of that system, and is often derived heuristically, through the process of operator substitution.
Operator substitutions
One result is a "rule of thumb": substitution rules that are used to derive quantum operators from the classical quantities of momentum, energy, and position:
\[\hat p\rightarrow\frac{\hbar}{i}\frac{\partial}{\partial x},\]
\[\hat H\rightarrow i\hbar\frac{\partial}{\partial t},\]
\[\hat x\rightarrow x.\]
Commutativity
The operators \(\hat x\) and \(\hat p\) do not commute:
\[(\hat x\circ\hat p)\phi=x\frac{\hbar}{i}\frac{\partial\phi}{\partial x},\]
\[(\hat p\circ\hat x)\phi=\frac{\hbar}{i}\frac{\partial(x\phi)}{\partial x}=\frac{\hbar}{i}\frac{\partial x}{\partial x}\phi+\frac{\hbar}{i}x\frac{\partial\phi}{\partial x},\]
\[(\hat p\circ\hat x-\hat x\circ\hat p)\phi=\frac{\hbar}{i}\phi.\]
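This commutator can be checked numerically at a point, for an arbitrary smooth test function (a Gaussian here, with \(\hbar=1\); both choices are mine):

```python
# Pointwise finite-difference check of (p x - x p) phi = (hbar/i) phi.
import math

hbar, x0, h = 1.0, 0.3, 1e-5

def phi(x):
    return math.exp(-x * x)        # arbitrary smooth test function

def d(f, x):                       # central difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

px_phi = (hbar / 1j) * d(lambda t: t * phi(t), x0)   # p acting on (x phi)
xp_phi = x0 * (hbar / 1j) * d(phi, x0)               # x acting on (p phi)

assert abs((px_phi - xp_phi) - (hbar / 1j) * phi(x0)) < 1e-8
```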
Probability Current
A simple manipulation of the Schrödinger equation—multiplying on the left by \(\phi^\star\), multiplying the equation's complex conjugate on the left by \(\phi\), and subtracting one from the other—can lead to the continuity equation:
\[\phi^\star\left(-\frac{\hbar^2}{2m}\nabla^2\phi+V\phi-i\hbar\frac{\partial\phi}{\partial t}\right)-\phi\left(-\frac{\hbar^2}{2m}\nabla^2\phi^\star+V\phi^\star+i\hbar\frac{\partial\phi^\star}{\partial t}\right)\]
\[=\frac{-\hbar^2}{2m}(\phi^\star\nabla^2\phi+\nabla\phi^\star\nabla\phi-\nabla\phi\nabla\phi^\star-\phi\nabla^2\phi^\star)-i\hbar\left(\phi^\star\frac{\partial\phi}{\partial t}+\phi\frac{\partial\phi^\star}{\partial t}\right)\]
\[=\frac{-\hbar^2}{2m}\nabla(\phi^\star\nabla\phi-\phi\nabla\phi^\star)-i\hbar\frac{\partial\phi^\star\phi}{\partial t},\]
or, substituting
\[{\bf\mathrm{j}}=\frac{-i\hbar^2}{2m}(\phi^\star\nabla\phi-\phi\nabla\phi^\star),\]
\[\rho=\hbar\phi^\star\phi,\]
we get
\[-i\left(\nabla{\bf\mathrm{j}}+\frac{\partial\rho}{\partial t}\right)=0,\]
\[\nabla{\bf\mathrm{j}}+\frac{\partial\rho}{\partial t}=0.\]
References
Feynman, Richard P., The Feynman Lectures on Physics, Vol. III, Addison-Wesley, 1977.
Aitchison, I. J. R. & Hey, A. J. G., Gauge Theories in Particle Physics, Institute of Physics Publishing, 1996.
The problem is given: source
Let $c_0$ be the space of real sequences $x = \{x_n\}_{n = 1}^\infty$ converging to $0$. Let $\ell^\infty$ be the set of bounded real sequences $w = \{w_n\}^\infty_{n=1}$ furnished with the norm $\|w \|_\infty = \sup_{n \geq 1} |w_n|$. Prove that $c_0$ is closed in $\ell^\infty$.
The pdf says
TL;DR: Please check whether my recap of the proof is correct, in as much detail as possible, so I have a deep understanding of what is really going on. Thank you.
Start with a sequence in $c_0$, our goal is to show that its limit $\omega \in \ell^\infty$ is also in $c_0$. This limit exists by completeness of $\ell^\infty$.
Denote this sequence by $(x_n^k)_{k=1}^{\infty} = ( \{x^1_1,x_2^1,\dots \}, \{x_1^2, x_2^2,\dots \}, \dots)$. As the objects in $c_0$ are sequences, we have a sequence of sequences.
A quick remark: My intuition tells me that since the elements of $c_0$ all converge to $0$, it would seem to make sense to me that this sequence (of sequences) will also converge to a sequence (of sequences) to $0.$ That is $\omega = 0.$
Instead of showing that the entire sequence (of sequences) $(x_n^k)_{k=1}^{\infty}$ converges to a limit (of sequences) $(\omega^{k}_n)_{k =1}^{\infty}$ (this should be a sequence of constant sequences), we show that each term of $(x_n^k)_{k=1}^{\infty}$ converges to each term of $(\omega_n^k)_{k = 1}^{\infty}$. Note that for a fixed $k$, $\omega_n^k = (\omega_1^k, \omega_2^k, \dots)$, so the elements are numbers.
I am guessing $(\omega_n^k)_{k =1}^{\infty}$ is another sequence in $c_0$? I don't know why you want $(\omega_n^k)_{k =1}^{\infty}$to be in $\ell^\infty$ in the first place.
I am also guessing that when they say
Take $\epsilon >0$ and $N_0 \in \mathbb{N}$ such that $\sup_{1 \leq n \leq \infty} |x_{n}^{k} - \omega_n| < \epsilon/2$ for all $k > N_0$.
They are using the convergence of $(x_n^k)_{k=1}^{\infty}$
I am guessing that when they say
For each $k $, choose $N_1 \in \Bbb N$ such that $|x_n^k| < \epsilon/2$ for all $n > N_1$ and $k > N_0$
They are using the convergence of the elements in $(x_n^k)_{k=1}^{\infty}$, which all go to $0$.
Remark: As opposed to (5), they don't really know if the sequence (of sequences) $(\omega_n)$ really goes to a sequence (of sequences) of $0$s. So they denote this "mystery" limit of the sequence (of sequences) by $(\omega_n^k)_{k = 1}^{\infty}$.
(7) I think the conclusion now is that $0 = \omega \in c_0$?
(8) Would it be firmer to take $k,n > \max \{N_0,N_1\}$?
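For comparison, here is a compact version of the standard argument the recap is circling around (my own wording, not necessarily the pdf's):

```latex
\textbf{Claim.} $c_0$ is closed in $\ell^\infty$.
Let $x^k \in c_0$ with $\|x^k - \omega\|_\infty \to 0$. Fix $\varepsilon > 0$.
Choose $k$ with $\|x^k - \omega\|_\infty < \varepsilon/2$, then $N$ with
$|x^k_n| < \varepsilon/2$ for all $n > N$ (possible since $x^k \in c_0$).
For every $n > N$,
\[
  |\omega_n| \le |\omega_n - x^k_n| + |x^k_n|
            < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon,
\]
so $\omega_n \to 0$, i.e.\ $\omega \in c_0$.
```

Note that $\omega$ need not be the zero sequence (the constant sequence $x^k = x$ for a fixed $x \in c_0$ converges to $x$, not to $0$); only its tail entries are forced to be small.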
I will show that there is a unique $f_a$ which obeys $f(1)=0$, $f(x) = f(x-1) + \log^a x$ and is convex on $(e^{a-1}, \infty)$. The functional equation gives us a unique extension to $(1,\infty)$; I am not sure whether it is convex there. (I will write $\log^a x$ for $(\log x)^a$, as the original poster does.)
Construction Set $g(x) = \log^a x$. So$$g'(x) = a \frac{\log^{a-1} x}{x} \ \mbox{and}$$$$g''(x) = \left( \frac{d}{dx} \right)^2 \log^a x = \frac{a (a-1 - \log x) \log^{a-2} x}{x^2}.$$For $x \in \mathbb{C} \setminus (-\infty, -1)$, set$$h_2(x) = - \sum_{n=1}^{\infty} g''(x+n).$$(If $a<2$, we also have to remove $x=0$, so we don't have $\log 1$ in the denominator.) We have $g''(x+n) = O((\log n)^{a-1}/n^2)$, so the sum converges, and does so uniformly on compact sets. So $h_2(x)$ is an analytic function. Moreover, for $x \in (e^{a-1}, \infty)$, we have $g''(x) <0$, so $h_2(x)>0$. By construction, we have $$h_2(x) - h_2(x-1)=g''(x).$$ Also, easy estimates give $h_2(x) = O(\log^a x/x)$ as $x \to \infty$.
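A quick numerical spot check of the derivative formulas used in this construction (the exponent $a=2.7$ and the sample points are arbitrary choices):

```python
# Spot check of the claimed g''(x) = a (a-1-log x) (log x)^(a-2) / x^2
# for g(x) = (log x)^a, against a second-order finite difference.
import math

a = 2.7

def g(x):
    return math.log(x) ** a

def g2_formula(x):
    return a * (a - 1 - math.log(x)) * math.log(x) ** (a - 2) / x ** 2

h = 1e-4
for x in (3.0, 10.0, 50.0):
    g2_numeric = (g(x + h) - 2 * g(x) + g(x - h)) / h ** 2
    assert abs(g2_numeric - g2_formula(x)) < 1e-5

# sign change at log x = a - 1, as used for convexity on (e^(a-1), infinity)
assert g2_formula(math.exp(a - 1) - 0.5) > 0
assert g2_formula(math.exp(a - 1) + 0.5) < 0
```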
Put $h_1(x) = \int_{t=1}^x h_2(t) dt$. Then $h_1(x) - h_1(x-1) = g'(x) + C$ for some $C$. Sending $x$ to $\infty$, we have $$\lim_{x \to \infty} h_1(x) - h_1(x-1) = \lim_{x \to \infty} \int_{t=x-1}^x O(\log^a x/x) dt = O(\log^a x/x)=0,$$ and $\lim_{x \to \infty} g'(x) =0$, so $C=0$.
Integrating again, put $h(x) = \int_{t=1}^x h_1(t) dt$. So $h(x) - h(x-1) = \log^a x + C$ for some $C$. This time, I couldn't figure out whether or not $C$ is the correct constant. But, if it isn't, that's okay: Set $f(x) = h(x) - C(x-1)$. Now the condition $f(1) = 0$ and $f(x)-f(x-1) = \log^a x$ are okay, and $f''(x) = h''(x) = h_2(x)$ which, as we observed, is $>0$ for $x>e^{a-1}$.
Uniqueness Suppose there was some other $C^2$ function $\tilde{f}(x) = f(x)+r(x)$ which met the required criteria. Then $r(x)-r(x-1)=0$, so $r(x)$ is periodic. Suppose for the sake of contradiction that $r$ is not constant. (If $r$ is constant, then the constant is zero, as $r(1)=0$.) Then $r''$ is a periodic function with average $0$, so $r''(y)<0$ for some $y$, and we then have $r''(y)=r''(y+1)=r''(y+2)=\cdots$. But then $\tilde{f}''(y+n) = f''(y+n) + r''(y+n) = O(\log^{a-1} n/n^2)+r''(y)$ is negative for all large $n$, contradicting the convexity of $\tilde{f}$.
Convexity for all $x$? What remains is the question: Is $f''(x)>0$ for all $x \in (1,\infty)$? Or, equivalently, is $$\sum_{n=1}^{\infty} g''(x+n)<0$$ for all $x \in (1, \infty)$? I expected that the answer would be "no", and I would just have to search for a little bit to find a counter-example to finish this answer. But, so far, numerical computations suggest the answer is always "yes". I think I could prove this by unenlightening bounds, but instead I'm going to go to bed and see if I think of a better strategy in the morning.
I need help knowing whether the result obtained in the following problem is correct, or whether there is a better way to solve it.
Clients arrive at a bank according to a Poisson process of parameter $\lambda > 0$, to a waiting system with a single server. The service of each client consists of two independent and successive stages, each lasting a random time distributed according to an exponential law of parameter $\mu > 0$. The lengths of service in each stage are independent of each other and of the arrivals to the system. The same server attends both stages, so it cannot receive a second person in stage one while the preceding person is in stage two. Write the generator $Q$ of a continuous-time Markov chain that models the number of clients in the system, and explain what the states of the chain represent. Determine under what conditions an invariant vector for $Q$ exists, and find such a vector.
The form in which I determine my solution is the following.
First, I construct a process $Y_t$ that represents the number of service stages remaining before all the clients in the system have been served. If there are $n$ clients in the system, then the number of stages still to be completed is $2n$ if the client in service is in stage one, and $2(n-1)+1$ if that client is in stage two.
The generator Q associated with this process has the form
$$ Q=\left(\begin{matrix} -\lambda& 0& \lambda& 0& \cdots\\ \mu& -\lambda -\mu& 0& \lambda& \\ 0& \mu& -\lambda-\mu& 0&\cdots\\ 0& 0& \mu& -\lambda-\mu& \\ \vdots& & \vdots& & \ddots \end{matrix}\right) $$ Using the generator Q and the relation that fulfills an invariant measure, and that is $\pi Q=0$, we have
$$ \begin{aligned} \lambda\pi_0&=\mu\pi_1\\ (\lambda+\mu)\pi_1&=\mu\pi_2\\ (\lambda+\mu)\pi_n&=\lambda\pi_{n-2}+\mu\pi_{n+1},\quad n=2,3,\cdots \end{aligned} $$ If we look for a solution of the form $\pi_n=x^n$, then we obtain
$$ (\lambda+\mu)x^n=\lambda x^{n-2}+\mu x^{n+1} $$
Dividing by $x^{n-2}$ gives
$$ (\lambda+\mu)x^2=\lambda+\mu x^3 $$
Such a polynomial has roots
$$ \begin{aligned} x&=1\\ x_1&=\frac{1}{2}(\rho+\sqrt{\rho^2+4\rho})\\ x_2&=\frac{1}{2}(\rho-\sqrt{\rho^2+4\rho}), \end{aligned} $$ where $\rho=\frac{\lambda}{\mu}$. The root $x=1$ is discarded, since the invariant measure must be summable.
It can be shown that the general solution is any combination of the form
$$ \pi_n=c_1x_1^n+c_2x_2^n, \quad n=0,1, \cdots $$ One can also show, using the first two balance equations, that there is a unique solution for the coefficients $c_k$, of the form
$$ c_k=\frac{1-2\rho}{\prod_{j\neq k}(1-x_j/x_k)} $$ We conclude that the invariant distribution of the $Y$-chain has elements of the form
$$ \pi_n=\frac{1-2\rho}{\sqrt{\rho^2+4\rho}}\left\{\left[\frac{1}{2}\left(\rho+\sqrt{\rho^2+4\rho}\right)\right]^n-\left[\frac{1}{2}\left(\rho-\sqrt{\rho^2+4\rho}\right)\right]^n\right\}, \quad n\geq 0 $$
If I now define the process (which is really what interests me) $X_t$ as the process that counts the number of clients in the system at time $t$, then using the above I was able to see that the invariant distribution for this process has elements of the form
$$ \begin{aligned} \gamma_n&=\sum_{k=2(n-1)+1}^{2n}\pi_k\\ &=\sum_{j=1}^2\sum_{k=2(n-1)+1}^{2n}c_jx_j^k\\ &=\sum_{j=1}^2c_j(x_j^{-1}+1)x_j^{2n} \end{aligned} $$ That is, a mixture of two geometric distributions.
And finally it can be seen that the condition for the invariant measure $\gamma$ to exist is that $x_1<1$ and $x_2<1$, or equivalently that $2\lambda<\mu$. |
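As a numerical sanity check (a sketch, with assumed rates $\lambda=1$, $\mu=3$, so that $2\lambda<\mu$), one can truncate the stage-counting chain, solve $\pi Q=0$ directly, and confirm that the tail of $\pi$ decays geometrically at the rate $x_1$ appearing in the closed form:

```python
import numpy as np

lam, mu = 1.0, 3.0   # assumed arrival and per-stage service rates (2*lam < mu)
N = 200              # truncation level for the stage-counting chain

# Generator of Y_t = number of service stages left in the system:
# an arrival adds 2 stages (k -> k+2, rate lam); completing a stage
# removes one (k -> k-1, rate mu).
Q = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k + 2 <= N:
        Q[k, k + 2] = lam
    if k >= 1:
        Q[k, k - 1] = mu
    Q[k, k] = -Q[k].sum()

# Solve pi Q = 0 with sum(pi) = 1 (replace one balance equation
# by the normalization condition).
A = np.vstack([Q.T[:-1], np.ones(N + 1)])
b = np.zeros(N + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# The tail should decay geometrically at rate x1 = (rho + sqrt(rho^2+4rho))/2.
rho = lam / mu
x1 = 0.5 * (rho + np.sqrt(rho**2 + 4 * rho))
ratio = pi[60] / pi[59]
print(ratio, x1)   # the two numbers should agree closely
```

The truncation level and the probe index are arbitrary; any state far from both boundaries gives essentially the same ratio.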
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
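The quoted conversion factors are easy to encode; a minimal sketch (the constant and function names are mine):

```python
# Unit relations quoted above: 1 tog = 0.1 RSI (m^2*K/W) and 1 clo = 0.155 RSI.
RSI_PER_TOG = 0.1
RSI_PER_CLO = 0.155

def tog_to_rsi(tog):
    # Convert a tog rating to thermal insulance in m^2*K/W.
    return tog * RSI_PER_TOG

def clo_to_tog(clo):
    # Convert a clo rating to tog via the common RSI unit.
    return clo * RSI_PER_CLO / RSI_PER_TOG

print(clo_to_tog(1.0))   # 1 clo = 1.55 tog, as stated in the text
```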
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec R$ is the constant shift of the coordinate origin, while $\vec p$ keeps rotating).
Would anyone be kind enough to shed some light on this for me?
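To illustrate numerically what happens (a sketch: an assumed circular unit orbit about the original origin, unit mass, and shift $\vec R=(2,0,0)$), $\vec L$ about the original centre is constant, while $\vec{L'}=\vec L+\vec R\times\vec p$ oscillates, because the torque of the central force about the shifted point does not vanish:

```python
import numpy as np

# Circular orbit of a unit mass about the origin: r(t) = (cos t, sin t, 0),
# p(t) = (-sin t, cos t, 0).  Angular momentum about the origin is constant,
# but about a shifted point R it picks up the time-dependent term R x p.
R = np.array([2.0, 0.0, 0.0])           # assumed shift of the new origin
for t in (0.0, 1.0, 2.0):
    r = np.array([np.cos(t), np.sin(t), 0.0])
    p = np.array([-np.sin(t), np.cos(t), 0.0])
    L = np.cross(r, p)                  # z-component equals 1 for every t
    Lp = np.cross(R + r, p)             # z-component is 1 + 2*cos(t): varies
    print(t, L[2], Lp[2])
```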
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to ever make a time machine? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions hold less energy than making the distance between them even smaller would, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Let $A = \mathbb{R}\setminus\mathbb{Q}$. Then it can be shown that $A + A = \mathbb{R}$, for example by using the fact that $A$ is $G_{\delta}$. Let $q\in\mathbb{Q}$. This means that $q = r_1+r_2$ where $r_1,r_2$ are irrational numbers. But this is not too surprising, as every rational can be written $q = \left(\frac{q}{2} + r_1\right) + \left(\frac{q}{2} - r_1\right)$. The question is, is this the only way? More precisely, if $q = r_1 + r_2$ is rational and $r_1,r_2$ are irrational, does this mean that there exists $r\in\mathbb{R}\setminus\mathbb{Q}$ and $q_1,q_2\in\mathbb{Q}$ such that $q_1+q_2 = q$, and $r_1 = q_1 + r, r_2 = q_2 - r$?
Yes, the set of solutions to your equations is:
\begin{align} q_2 = q - q_1 \\ r = r_1 - q_1 \end{align}
where $q_1$ is a free variable allowed to be any rational number.
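A quick numerical check of this solution family (a sketch with assumed sample values $q=3/2$ and $r_1=\sqrt 2$):

```python
from fractions import Fraction
import math

# Check the parametric solution: pick q rational, r1 irrational, and any
# rational q1; then q2 = q - q1 and r = r1 - q1 satisfy r1 = q1 + r and
# r2 = q2 - r, with r1 + r2 = q.
q = Fraction(3, 2)
r1 = math.sqrt(2)              # an irrational summand (assumed example)
r2 = float(q) - r1             # forced by r1 + r2 = q
for q1 in (Fraction(0), Fraction(1, 3), Fraction(-5, 7)):
    q2 = q - q1
    r = r1 - float(q1)
    assert abs((float(q1) + r) - r1) < 1e-12   # r1 = q1 + r
    assert abs((float(q2) - r) - r2) < 1e-12   # r2 = q2 - r
```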
You are venturing into slippery terrain.
When you write $r_1=q_1+r$, do you imply that $r_1$ is the sum of a rational $q_1$ and a "rational-free" irrational $r$? We shall see that this concept is not so simple. In fact, since $(\mathbb Q,+)$ is a normal subgroup of $(\mathbb R,+)$, we can define the quotient space $\mathbb R/\mathbb Q$ of the reals modulo the rationals.
And thanks to the axiom of choice we can construct a Vitali set $V$ such that $V\subset[0,1]$ and $V$ contains exactly one representative for each class of $\mathbb R/\mathbb Q$.
This may seem a convenient construction, but this set belongs to a freak show: it is not measurable, and we have no way of knowing exactly what is inside.
Let the representative of a real $x$ be noted $[x]$.
Obviously for any rational $q$ then $0$ seems a suitable choice for $[q]$.
But what would be $[\sqrt{2}]$? Would this be $\sqrt{2}-1\in[0,1]$, or maybe $\sqrt{2}-\frac 34$, which is also an element of $[0,1]$?
We could say then that, since $\mathbb Q$ is countable, let us just fix an ordering $\phi$ based on the bijection with $\mathbb N$ and select $[x]=x+\phi(\min\{n\mid x+\phi(n)\in[0,1]\})$.
Unfortunately, $[\sqrt{2}]$ and $[\sqrt{2}+17/113]$ would not be the same via this process, although $\sqrt{2}$ and $\sqrt{2}+17/113$ belong to the same class, and this is quite annoying. This is why the axiom of choice is required to pick a suitable representative for every real.
Now let's come back to your question:
First let's notice that for $(v_1,v_2)\in V^2$, $v_1+v_2$ being rational means that $[v_1]=[-v_2]$ ($v_1+v_2=q\iff v_1=q-v_2=-v_2\pmod{\mathbb Q})$
Thus if you have two irrationals $r_1+r_2=q$ then $\begin{cases} r_1=[r_1]+q_1\\r_2=[r_2]+q_2\end{cases}$ and $[r_1]+[r_2]=q-q_1-q_2$
So $[r_1]=[-r_2]$, let's call it $v$.
Consequently we have $\begin{cases}r_1=v+q_1\\r_2=(q_3-v)+q_2\end{cases}\iff\begin{cases}r_1=q_1+v\\r_2=(q_3+q_2)-v\end{cases}$ with $q_1+q_2+q_3=q$
So indeed the construction you have envisaged with $r_1,r_2$ being the sum of rationals and $\pm$ the same irrational number makes sense.
However, as we have seen there is no way in practice to exhibit $v\in V$. All we can access are elements in this class which are already $q_i+v$.
You should really see $r_1+r_2=q$ as $[r_1]=[-r_2]$ which is just $r_2=q-r_1$ and nothing more really that can be said about the "inners" of $r_1$ or $r_2$. They just sum up to a rational... |
The following is from
Representation Theory of Finite Groups, by B. Steinberg; Example 3.1.14, p.16:
Let $\rho:S_3\to GL_2(\mathbb{C})$ be specified on the generators $(1 \ 2)$ and $(1 \ 2\ 3)$ by
$$\rho_{(1 \ 2)}=\begin{bmatrix}-1 & -1 \\ 0 & 1\end{bmatrix}, \ \rho_{(1 \ 2 \ 3)}=\begin{bmatrix}-1 & -1 \\ 1 & 0\end{bmatrix} $$
I'm trying to verify that $\rho:S_3\to GL_2(\mathbb{C})$ is a representation. By definition a representation is a homomorphism, thus I want to show that $\rho_{gh}=\rho_g\rho_h$ for every $g,h\in S_3$. Starting with $$\rho_{(1 \ 2)}\rho_{(1 \ 2 \ 3)}=\begin{bmatrix}-1 & -1 \\ 0 & 1\end{bmatrix}\begin{bmatrix}-1 & -1 \\ 1 & 0\end{bmatrix}=\begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}$$ we need to verify that $\rho_{(1 \ 2)(1 \ 2 \ 3)}=\rho_{(2 \ 3)}=\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}$, but I can't see how to do that. Moreover, I have doubts about checking the homomorphism property only for generators; that is, why is checking the homomorphism property for the generators enough to establish that $\rho$ is a representation? |
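On the last question: a map out of a group given by a presentation is a well-defined homomorphism as soon as the images of the generators satisfy the defining relations, e.g. $S_3=\langle s,r\mid s^2=r^3=e,\ srs=r^{-1}\rangle$; every other product identity then follows by reduction. A short check (a sketch; the variable names are mine) that the given matrices satisfy these relations, and that $\rho_{(1\,2)}\rho_{(1\,2\,3)}$ is the matrix computed above:

```python
import numpy as np

# Images of the generators s = (1 2) and r = (1 2 3) from the question.
s = np.array([[-1, -1], [0, 1]])
r = np.array([[-1, -1], [1, 0]])
I = np.eye(2, dtype=int)

# S3 has the presentation <s, r | s^2 = r^3 = e, s r s = r^{-1}>.
# Checking these relations on the generator images is exactly what makes
# "check on generators" sufficient: every product of generators reduces
# to a normal form using them.
assert np.array_equal(s @ s, I)                                   # s^2 = e
assert np.array_equal(r @ r @ r, I)                               # r^3 = e
assert np.array_equal(s @ r @ s, np.linalg.matrix_power(r, 2))    # srs = r^{-1} = r^2

# The product computed in the question:
print(s @ r)   # -> [[0 1], [1 0]], the image of (2 3)
```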
Difference between revisions of "Kakeya problem"
Latest revision as of 00:35, 5 June 2009
A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
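The small values quoted here can be confirmed by exhaustive search (a sketch; brute force is feasible only for these tiny cases):

```python
from itertools import combinations, product

def smallest_kakeya(n):
    """Smallest size of a subset of F_3^n containing a line in every direction."""
    pts = list(product(range(3), repeat=n))
    # One representative per direction {d, 2d}, d nonzero.
    dirs = {min(d, tuple((2 * x) % 3 for x in d)) for d in pts if any(d)}
    # All lines {e, e+d, e+2d}, grouped by direction.
    lines_by_dir = {
        d: [frozenset(tuple((e[i] + t * d[i]) % 3 for i in range(n))
                      for t in range(3)) for e in pts]
        for d in dirs}
    for size in range(1, len(pts) + 1):
        for S in map(set, combinations(pts, size)):
            # S is Kakeya iff it fully contains some line in every direction.
            if all(any(l <= S for l in ls) for ls in lines_by_dir.values()):
                return size

print(smallest_kakeya(1), smallest_kakeya(2))   # -> 3 7
```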
Basic Estimates
Trivially, we have
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to
[math]k_{n+m} \leq k_m k_n[/math];
this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
Lower Bounds
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\ge 3^{(n+1)/2}.[/math]
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
The better estimate
[math]k_n\ge (9/5)^n[/math]
is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements).
A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]
Upper Bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math] |
Here's a useful topological fact.
Fact. If $K$ is a compact subset of a domain $\Omega$, then there exists a domain $U$ such that $K\subset U$ and $\overline{U}$ is a compact subset of $\Omega$.
Proof: Fix a point $z_0\in \Omega$. For $n=1,2,\dots$ let $G_n$ be the connected component of $z_0$ in the set $$\{z\in\Omega: |z|<n, \ \operatorname{dist}(z,\partial \Omega)>1/n\}$$Using the connectedness of $\Omega$, one can show that $\bigcup_n G_n=\Omega$. Therefore, the sets $G_n$ form an open cover of $K$. There is a finite subcover; since the sets $G_n$ are nested, this means $K\subset G_n$ for some $n$. This $G_n$ is the desired $U$. $\quad\Box$
Back to the issue. It's not really about holomorphic functions and their derivatives. We are to prove the following:
Theorem 1. If statement $A$ holds on every compact subset of domain $\Omega$, then statement $B$ holds on every compact subset of $\Omega$.
But instead we prove
Theorem 2. If statement $A$ holds on domain $\Omega$, then statement $B$ holds on every compact subset of $\Omega$.
Indeed, suppose Theorem 2 is proved. Given $\Omega$ as in Theorem 1 and its compact set $K$, take $U$ from the topological fact. Since $\overline{U}$ is a compact subset of $\Omega$, property $A$ holds on $U$. Theorem 2 says that property $B$ holds on $K$, which was to be proved. |
This article appeared originally in my blog.
It looks like the Mythbusters tend to ignore air resistance.
In a recent episode, they claimed to have demonstrated that a horizontally fired bullet and a bullet that is simply dropped fall to the ground in the same amount of time.
They were wrong. What they actually demonstrated is that air resistance causes the fired bullet to hit the ground later than the dropped one.
Their argument would apply perfectly in a vacuum, as on the surface of the Moon, but not here on the Earth, where the bullet’s motion is governed not just by the laws of gravity, but also by the laws of a non-conservative force, namely air resistance. (Why is it non-conservative? Some of the bullet’s kinetic energy is converted into heat, as it travels through the air at high speed. Unless we also include the thermodynamics of the air into our equations of motion, the equations will not conserve energy, as the amount of kinetic energy converted into heat will just appear “lost”.)
The bullet’s velocity, $v$, can be written as $v^2=v_h^2+v_v^2$, where $v_h$ is the horizontal and $v_v$ is the vertical component. The initial horizontal velocity is $v_0$. The initial vertical velocity is 0.
Air resistance is proportional to the square of the bullet’s velocity. To be precise, acceleration due to air resistance will be
\[a_\mathrm{air}=\kappa v^2,\]
where $\kappa$ is an unknown proportionality factor. (To be more precise, $\kappa=\frac{cA\rho}{2m}$, where $c$ is the dimensionless drag coefficient, $A$ is the bullet’s cross-sectional area, $\rho$ is the density of the air, and $m$ is the bullet’s mass.) The direction of the acceleration will be opposite the direction of the bullet’s motion.
The total acceleration of the bullet will have two components: a vertical component $g$ due to gravity, and a component $a_\mathrm{air}$ opposite the direction of the bullet’s motion.
The resulting equations of motion can be written as:
\begin{align}\frac{dv_h}{dt}&=-\kappa vv_h,\\\frac{dv_v}{dt}&=g-\kappa vv_v.\end{align}
Right here we can see the culprit: air resistance not only slows the bullet down horizontally, it also reduces its downward acceleration.
This is a simple system of two differential equations in the two unknown functions $v_h(t)$ and $v_v(t)$. Its solution is not that simple, unfortunately. However, it can be greatly simplified if we notice that given that $v_v\ll v_h$, $v\simeq v_h$, and therefore, we get
\begin{align}\frac{dv_h}{dt}&=-\kappa v_h^2,\\\frac{dv_v}{dt}&=g-\kappa v_vv_h.\end{align}
This system is solved by
\begin{align}v_h&=\frac{1}{\kappa t+C_1},\\v_v&=v_h\left[\left(\frac{1}{2}\kappa t^2+C_1t\right)g+C_2\right].\end{align}
Given
\[v_0=\frac{1}{\kappa t_0+C_1},\]
we have
\[C_1=\frac{1}{v_0}-\kappa t_0,\]
which leads to
\[v_h=\frac{v_0}{\kappa(t-t_0)v_0+1},\]
or, if we set $t_0=0$,
\[v_h=\frac{v_0}{\kappa tv_0+1}.\]
Similarly, given $v_v(0)=0$, we get $C_2=0$, thus
\[v_v=gt\frac{\kappa v_0t+2}{2\kappa v_0t+2}.\]
The distance traveled horizontally ($s_h$) and vertically ($s_v$) between $t_0=0$ and $t_1$ can be obtained by simple integration of the respective velocities with respect to $t$ between 0 and $t_1$:
\begin{align}s_h&=\frac{\log(\kappa v_0t_1+1)}{\kappa},\\s_v&=g\frac{\kappa v_0t_1(\kappa v_0t_1+2)-2\log(\kappa v_0t_1+1)}{(2\kappa v_0)^2}.\end{align}
The claim by the Mythbusters was that the time it took for the fired bullet to hit the ground was only $\sim 40~{\rm ms}$ more than the time it took for a dropped bullet to fall, which is a negligible difference. But it is not! Taking $g\simeq 10~{\rm m}/{\rm s}^2$, it is easy to see that the time it takes for a bullet to fall from a height of $1~{\rm m}$, using the well-known formula $\frac{1}{2}gt^2$, is $447~{\rm ms}$; the difference measured by the Mythbusters is nearly 10% of this number!
Not only did the fired bullet take longer to hit the ground, the Mythbusters’ exquisite setup allows us to calculate the bullet’s initial velocity $v_0$ and drag coefficient $\kappa$. This is possible because the Mythbusters conveniently provided three pieces of information (I am using approximate numbers here): the length of the path that the bullet traveled $s_h\simeq 100~{\rm m}$, the height of the bullet at the time of firing ($s_v\simeq 1~{\rm m}$), and the time it took for the fired bullet to hit the ground. Actually, what they provided was the difference between the time for a fired vs. a dropped bullet to hit the ground, but we know what it is for the dropped bullet (and because it is never moving very rapidly, we
can ignore air resistance in its case), so $t_1=447+40=487~{\rm ms}$. The solution is given by
\begin{align}\kappa&=0.0054~\mathrm{m}^{-1},\\v_0&=272.3~\mathrm{m}/\mathrm{s}.\end{align}
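These numbers can be checked against the closed-form distances derived above (a sketch, using $g\simeq 10~\mathrm{m/s^2}$ and the rounded inputs from the text):

```python
import math

# With kappa = 0.0054 1/m and v0 = 272.3 m/s, the closed-form distances
# at t1 = 0.487 s should reproduce the inputs s_h ~ 100 m and s_v ~ 1 m.
g, kappa, v0, t1 = 10.0, 0.0054, 272.3, 0.487
a = kappa * v0 * t1
s_h = math.log(a + 1) / kappa
s_v = g * (a * (a + 2) - 2 * math.log(a + 1)) / (2 * kappa * v0) ** 2
print(s_h, s_v)   # close to 100 m and 1 m respectively
```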
Given a bullet cross-sectional area of $A=2~{\rm cm}^2=2\times 10^{-4}~{\rm m}^2$, an approximate air density of $\rho=1~{\rm kg}/{\rm m}^3$, and a bullet mass of $m=20~{\rm g}=0.02~{\rm kg}$, the dimensionless drag coefficient for the bullet can be calculated as $c=2\kappa m/(A\rho)=1.08$, which is not at all unreasonable for a tumbling bullet. Of course the actual values of $A$ and $m$ may differ from the ones I’m using here, resulting in a different value for the dimensionless drag coefficient $c$.
And now (May 28, 2012) I feel obliged to include a footnote, given the almost hysterical news coverage in the last couple of days about an exact solution to this same problem obtained by a 16-year old Dresden youth of Indian descent, Shouryya Ray.
The news coverage was breathless but incomplete: the youth, we were told, solved a problem that baffled Newton 350 years ago, but there was no description of the actual problem (other than a hint that it had to do with air resistance) and its solution, except for a press photo with him holding up a large sheet of paper with an equation.
I suspected that his variables $u$ and $v$ were in fact the horizontal and vertical components of a projectile, which I denoted with $v_h$ and $v_v$ above. So Ray's equation would read, in my notation:
\[\frac{g^2}{2v_h^2}-\frac{\kappa g}{2}\left(\frac{v_v\sqrt{v_h^2+v_v^2}}{v_h^2}+{\rm arcsinh}\left|\frac{v_v}{v_h}\right|\right)={\rm const,}\]
where I picked up an extra minus sign because I considered $g$ to be positive when pointing downwards.
But how did he find this neat form? Is it even correct? To find out, I tried a few ways to integrate the equations but to no avail. I admit I was ready to give up when I came across a solution posted on reddit. The derivation is actually quite simple. We begin with (I now use the overdot for the time derivative):
\begin{align}
\dot{v}_h=-\kappa\sqrt{v_h^2+v_v^2}v_h,\\ \dot{v}_v=g-\kappa\sqrt{v_h^2+v_v^2}v_v. \end{align}
Now multiply the first equation by $\dot{v}_v$ and the second by $\dot{v}_h$:
\begin{align}
\dot{v}_h\dot{v}_v=-\kappa\sqrt{v_h^2+v_v^2}v_h\dot{v}_v,\\ \dot{v}_h\dot{v}_v=g\dot{v}_h-\kappa\sqrt{v_h^2+v_v^2}\dot{v}_hv_v. \end{align}
Subtract one from the other and rearrange:
\[g\dot{v}_h=\kappa\sqrt{v_h^2+v_v^2}(\dot{v}_hv_v-v_h\dot{v}_v).\]
Now substitute $v_v=sv_h$:
\[g\dot{v}_h=\kappa\sqrt{(s^2+1)v_h^2}(\dot{v}_hv_v-v_h\dot{v}_v).\]
Divide both sides by $v_h^3$, noting that with $v_v=sv_h$ we have $\dot{v}_hv_v-v_h\dot{v}_v=\dot{v}_h(sv_h)-v_h(\dot{s}v_h+s\dot{v}_h)=-\dot{s}v_h^2$:
\[g\frac{\dot{v}_h}{v_h^3}=-\kappa\sqrt{(s^2+1)}\dot{s}.\]
In this form, both sides are full derivatives with respect to $t$ and thus both sides can be integrated, to yield
\[-\frac{1}{2}\frac{g}{v_h^2}=-\kappa\left[\frac{1}{2}s\sqrt{s^2+1}+\frac{1}{2}{\rm arcsinh}~s\right]+C,\]
where $C$ is an integration constant. Multiplying by $g$ and replacing $s$ with $v_v/v_h$, we get back Ray's equation.
This is an elegant result. Its practical utility may be limited, however, as the solution is implicit. Actually computing $v_h$ and $v_v$ as functions of time still requires numerical methods, so one might as well just solve the original differential equations numerically. Perhaps this explains why Ray's formula is not better known. But it has been discovered before. Apart from trivial notational differences, this very formulation is discussed by Parker (Am. J. Phys, 45, 7, 606, July 1977). But given its importance to ballistics, it should come as no surprise that this problem is discussed in depth in textbooks dating back to the middle of the 19th century, for instance in an 1860 book by Didion.
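Since the closed-form result is implicit anyway, a natural cross-check is numerical: integrate the two drag equations and verify that the left-hand side of Ray's equation stays constant along the trajectory. The sketch below is my own check, not part of the original discussion; it uses illustrative values ($g=9.81~\mathrm{m/s^2}$, $\kappa=0.0054~\mathrm{m}^{-1}$, an arbitrary initial velocity) and takes $\mathrm{arcsinh}(v_v/v_h)$ without the absolute value, which is the form the derivation actually produces.

```python
import math

G, KAPPA = 9.81, 0.0054   # g positive downward, kappa as in the text

def deriv(vh, vv):
    """Right-hand sides of the drag equations for (v_h, v_v)."""
    v = math.sqrt(vh * vh + vv * vv)
    return -KAPPA * v * vh, G - KAPPA * v * vv

def rk4_step(vh, vv, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(vh, vv)
    k2 = deriv(vh + 0.5 * dt * k1[0], vv + 0.5 * dt * k1[1])
    k3 = deriv(vh + 0.5 * dt * k2[0], vv + 0.5 * dt * k2[1])
    k4 = deriv(vh + dt * k3[0], vv + dt * k3[1])
    return (vh + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            vv + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def invariant(vh, vv):
    """Left-hand side of Ray's equation (arcsinh without the |.|)."""
    return (G * G / (2 * vh * vh)
            - 0.5 * KAPPA * G * (vv * math.sqrt(vh * vh + vv * vv) / (vh * vh)
                                 + math.asinh(vv / vh)))

vh, vv = 272.3, -50.0          # illustrative initial velocity components
i0 = invariant(vh, vv)
for _ in range(10000):         # integrate for one second
    vh, vv = rk4_step(vh, vv, 1.0e-4)
print(abs(invariant(vh, vv) - i0) / abs(i0))  # tiny: the quantity is conserved
```

Note that $v_v$ changes sign along this trajectory, which is precisely where the version with the absolute value would fail to be conserved.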
The first time I ever heard the term, "gauge invariance", I didn't know what to make of it. Of course it didn't help that Hungarian physics literature uses the term, "mértékinvariancia", which, literally translated, means something like "invariance of measure". But to be honest, the English phrase doesn't appear to be much more meaningful either.
It wasn't until I came across an excellent informal introduction to the concept in Aitchison's and Hey's book, Gauge Theories in Particle Physics, that I finally began to understand what this whole concept was all about. And it didn't take very long to realize just how powerful a concept it really is!
The idea is simple and has its roots in the well-known (non-gauge) invariances in classical physics. For instance, everyone knows that the equations of physics remain invariant under, say, a translation of the coordinate system. This is just a fancy way of saying that there are no absolute positions; what matters is not where an object is in absolute terms, but where it is relative to other objects, i.e.,
coordinate differences.
So when you have an equation of physics involving the $x$ coordinate of two objects, $x_1$ and $x_2$, the equation may contain their differences but not the absolute coordinates themselves. An equation in the form, $F(x_1)=0$ won't be
physical, whereas an equation like $F(x_2-x_1)=0$ might be.
Or, even for a single object, you may have a physical equation that contains the time derivative of its $x$ coordinate. Since $d(x+C)/dt$ is the same as $dx/dt$ for any constant $C$, such equations will remain invariant under a translation.
Translations, rotations, time translations, Lorentz-boosts... these transformations are all geometrical. But is it only geometrical transformations under which physical equations remain invariant? Of course not. While much of physics is about geometry (some hope that one day, all of it will be), physics also deals with non-geometrical quantities. Take, for instance, voltage. It is another one of those "relative" quantities: what matters is not the absolute voltage but the relative potential between two conductors. In other words, adding a constant to all voltages should leave physics equations invariant. This may, at first, look to be at odds with equations like $R=U/I$ but only until you realize that in this case, $U$ is really the potential
difference, $U_2-U_1$, between two points in a circuit: and adding the same constant to both $U_2$ and $U_1$ will leave this difference unchanged, so Ohm's equation remains valid.
There is one common characteristic to these transformations under which physical laws remain unchanged. Namely that all are
global: the value that characterizes the transformation is the same everywhere in all of spacetime.
What if this is not so? What if we use a transformation that is characterized by a value that is different everywhere (e.g., a smooth function of spacetime coordinates, as opposed to a constant)?
At first sight, the idea may appear like madness. Such an arbitrary transformation must surely destroy the validity of any physics equation.
Then again... there's another way of looking at this. Okay, so our physics equation was mangled by that strange transformation. But is there any way to unmangle it?
Surprisingly, the answer is yes. Even more surprisingly, we find that when the equations are unmangled, i.e., changed to accommodate our strange transformation, the new components that appear in them will correspond to known physical forces!
By far the simplest
gauge theory is electromagnetism. And by far the simplest way to present electromagnetism as a gauge theory is through the non-relativistic Schrödinger equation of a particle moving in empty space:
\[i\hbar\frac{\partial\psi}{\partial t}=\frac{-\hbar^2}{2m}\nabla^2\psi.\]
Although the equation contains the wave function $\psi$, we know that the actual probability of finding a particle in some state is a function of $|\psi|$. In other words, the phase of the complex function $\psi$ can be changed without altering the outcome of physical experiments: all physical experiments will produce the same result if we perform the following substitution:
\[\psi\rightarrow e^{ip(x,t)}\psi,\]
where $p(x,t)$ is an arbitrary smooth function of space and time coordinates.
Let's see what happens to the Schrödinger equation though when we apply this transformation. First, the left-hand side:
\[\frac{\partial\psi'}{\partial t}=\frac{\partial e^{ip(x,t)}}{\partial t}\psi+e^{ip(x,t)}\frac{\partial\psi}{\partial t}=e^{ip(x,t)}\left[i\frac{\partial p(x,t)}{\partial t}\psi+\frac{\partial\psi}{\partial t}\right].\]
Next, the right-hand side, which is a bit more difficult to tackle, but hey, it's just straightforward algebraic manipulation:
\begin{align} \nabla^2\psi'&=\nabla\{\nabla[e^{ip(x,t)}\psi]\}=\nabla\{\nabla[e^{ip(x,t)}]\psi+e^{ip(x,t)}\nabla\psi\}\\ &=\nabla[e^{ip(x,t)}i\nabla p(x,t)\psi+e^{ip(x,t)}\nabla\psi]=\nabla\{e^{ip(x,t)}[i\nabla p(x,t)\psi+\nabla\psi]\}\\ &=\nabla e^{ip(x,t)}[i\nabla p(x,t)\psi+\nabla\psi]+e^{ip(x,t)}[i\nabla p(x,t)\psi+\nabla\psi]\\ &=e^{ip(x,t)}i\nabla p(x,t)[i\nabla p(x,t)\psi+\nabla\psi] +e^{ip(x,t)}[i\nabla^2p(x,t)\psi+i\nabla p(x,t)\nabla\psi+\nabla^2\psi]\\ &=e^{ip(x,t)}\{i\nabla p(x,t)[i\nabla p(x,t)\psi+\nabla\psi]+i\nabla^2p(x,t)\psi+i\nabla p(x,t)\nabla\psi+\nabla^2\psi\}\\ &=e^{ip(x,t)}\{\nabla^2\psi+2i\nabla p(x,t)\nabla\psi-[\nabla p(x,t)]^2\psi+i\nabla^2p(x,t)\psi\}\\ &=e^{ip(x,t)}\{[\nabla+i\nabla p(x,t)]^2\psi\}. \end{align}
On both sides of the equation, we now have an extra factor $e^{ip(x,t)}$, which we can safely drop, resulting in the following equation:
\[i\hbar\left[i\frac{\partial p(x,t)}{\partial t}\psi+\frac{\partial\psi}{\partial t}\right]=\frac{-\hbar^2}{2m}[\nabla+i\nabla p(x,t)]^2\psi,\]
or
\[i\hbar\frac{\partial\psi}{\partial t}=\frac{-\hbar^2}{2m}\left\{[\nabla+i\nabla p(x,t)]^2-\frac{2m}{\hbar}\frac{\partial p(x,t)}{\partial t}\right\}\psi.\]
Whatever it is, it is definitely not the Schrödinger equation of a particle in empty space. In other words, we can conclude that the Schrödinger equation is
not invariant under the gauge transformation $\psi\rightarrow e^{ip(x,t)}\psi$.
Now of course that is not exactly surprising. We have, after all, mangled the wave function beyond recognition by changing its complex phase with an arbitrary amount at each point of spacetime.
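Incidentally, the operator identity used in the computation above, $\nabla^2(e^{ip}\psi)=e^{ip}[\nabla+i\nabla p]^2\psi$, is easy to sanity-check numerically in one dimension. The sketch below uses an arbitrarily chosen $\psi$ and $p$ (my own test functions, not from the text) and compares a central-difference second derivative of $e^{ip}\psi$ against the expanded form $e^{ip}[\psi''+2ip'\psi'+(ip''-p'^2)\psi]$:

```python
import cmath, math

def psi(x):
    """Arbitrary smooth complex test wave function."""
    return cmath.exp(-x * x + 0.4j * x)

def dpsi(x):   # psi'
    return (-2 * x + 0.4j) * psi(x)

def d2psi(x):  # psi''
    return ((-2 * x + 0.4j) ** 2 - 2) * psi(x)

def p(x):      # arbitrary smooth gauge function
    return math.sin(2 * x)

def dp(x):     # p'
    return 2 * math.cos(2 * x)

def d2p(x):    # p''
    return -4 * math.sin(2 * x)

def transformed(x):
    """The gauge-transformed wave function e^{ip} psi."""
    return cmath.exp(1j * p(x)) * psi(x)

x0, h = 0.3, 1e-3
# Left side: numerical second derivative of e^{ip} psi
lhs = (transformed(x0 + h) - 2 * transformed(x0) + transformed(x0 - h)) / h**2
# Right side: e^{ip} [psi'' + 2i p' psi' + (i p'' - p'^2) psi]
rhs = cmath.exp(1j * p(x0)) * (d2psi(x0) + 2j * dp(x0) * dpsi(x0)
                               + (1j * d2p(x0) - dp(x0) ** 2) * psi(x0))
print(abs(lhs - rhs))  # close to zero: the identity holds
```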
But what if we start with a Schrödinger equation that already includes components that look like the ones we ended up with? For instance:
\[i\hbar\frac{\partial\psi}{\partial t}=\frac{-\hbar^2}{2m}[(\nabla+i{\bf\mathrm{A}})^2+V]\psi.\]
Starting with this equation, when we perform the gauge transformation $\psi\rightarrow e^{ip(x,t)}\psi$ we end up with the following substitutions:
\begin{align}{\bf\mathrm{A}}&\rightarrow{\bf\mathrm{A}}+\nabla p(x,t),\\
V&\rightarrow V-\frac{2m}{\hbar}\frac{\partial p(x,t)}{\partial t},\end{align}
after which our new Schrödinger equation remains valid.
What we have here is a vectorial and a scalar quantity, and a pair of transformation laws that supposedly do not alter the validity of our physics. But we already have just such a set of quantities in physics, in electromagnetism. If $\vec{A}$ were the electromagnetic vector potential and $V$ were the scalar potential, the transformation rules above would leave the measurable physical quantities invariant: namely, the magnetic field, $\vec{B}=\nabla\times\vec{A}$, and the electric field, $\vec{E}=-\nabla V-\partial\vec{A}/\partial t$. Which suggests that the modified Schrödinger equation above is nothing less than the Schrödinger equation of a charged particle moving in an electromagnetic field characterized by $\vec{A}$ and $V$. Which it indeed is, as can be confirmed through experiment.
This is nothing short of remarkable. We have, after all, made no a priori assumptions about electromagnetism. We started off with the Schrödinger equation of a particle moving in empty space, observed that the probability of finding a particle in a state does not depend on the complex phase of its wave function, and made an attempt to incorporate this invariance into the equation itself. We were successful, and in the process, we managed to recover a vectorial and a scalar quantity that satisfy Maxwell's equations: in a sense, we "invented" electromagnetism!
Pure magic, if you ask me. But this really is just the beginning. The transformations represented by $e^{ip(x,t)}$, that is, rotations in a plane, form an
Abelian group; these transformations are commutative. However, when the wave function is not a complex-valued function but something more intricate (such as a pair of quaternions, the simplest solution of the Dirac equation) the gauge transformation can become something more intricate as well. When the gauge transformation is not commutative, the force field that corresponds to it will contain the commutator of the gauge transformation, which will make the field self-interacting, and things get really interesting... But no, this is not why the photon is massless while other interactions are carried by massive particles. That was my naïve initial thought when I first read about the fact that a non-commutative gauge transformation results in a field that is self-interacting, but, as usual, reality tends to be more complex than one's naïve first impressions of it. What we get into here is Yang-Mills theory, and I am quite a long way away from being able to say anything meaningful about it just yet!

References

Aitchison, I. J. R. & Hey, A. J. G., Gauge Theories in Particle Physics, Institute of Physics Publishing, 1996
A linear program (LP)
\begin{alignat}{1} & \min_{x} {c}^{T}x, \\ \mathrm{s.t.} & \quad Ax = b, \\ & x \geq 0. \end{alignat}
is called combinatorial if the size of the entries of the matrix $A \in \mathbb{R}^{m \times n}$ is bounded by a polynomial in the dimension of the LP. (See Éva Tardos, "A strongly polynomial algorithm to solve combinatorial linear programs",
Operations Research (1985).) The size of a rational number is defined as the length of its binary representation.
It seems as though any entry size is a polynomial of the LP dimension: trivially, size($a_{ij}$) = size($a_{ij}$) $\cdot\ m^0 n^0$, where the size of the LP is $m \times n$. So it seems like any LP is combinatorial. Why isn't this statement true?
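One way to see the gap: "bounded by a polynomial" has to mean a single, fixed polynomial in $m$ and $n$ that works for every instance of a whole family of LPs. Any individual instance trivially bounds its own entry sizes, but consider a hypothetical family (my illustration) whose entries grow like $2^{2^n}$ with the dimension $n$: the entry size is $2^n+1$ bits, which no polynomial in $n$ can bound.

```python
# size(a) = length of the binary representation of a
for n in (3, 4, 5):
    entry = 2 ** (2 ** n)          # an entry of a hypothetical n-dimensional LP
    print(n, entry.bit_length())   # prints 3 9, 4 17, 5 33: exponential in n
```

By contrast, network-flow LPs are combinatorial in this sense: their constraint matrices are incidence matrices with entries in {-1, 0, 1}, so the entry size is constant regardless of the dimension.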
One can construct a "dual" $S^*$ to $S$, for which a lot is known.
Indeed, this is related to "convex algebraic geometry", namely, an approach to describing convex semialgebraic sets as sets $S^*$ of solutions of a linear matrix inequality in $y:=(y_1,\dots, y_m)$: $$y_1A_1+\dots+y_mA_m\succeq A_0,\quad A_k=A_k^\top\in \mathbb{R}^{n\times n}, \tag{*}$$ where $T\succeq U$ means $T-U$ is positive semidefinite. There is a wealth of material on this; see e.g. Victor Vinnikov's survey. In loc. cit. you can find a footnote on p. 1 explaining that not much generality is added by considering Hermitian $A_k$ rather than real symmetric $A_k$.
On (*) one can consider a semidefinite optimisation problem (SDP): $$\max b^\top y, \quad y \text{ satisfies (*)}, \quad b:=(b_1,\dots,b_m)\in\mathbb{R}^m.\tag{P}$$ It has a naturally defined dual: $$\min_{N\succeq 0}\ tr(A_0 N), \quad tr(A_kN)=b_k,\quad 1\leq k\leq m.\tag{D}$$ This is related to your set $S$ by setting $A_0=0$ and $A_1=I$. That is, a point $b$ is in $S$ if and only if (D) has a solution. The duality of (P) and (D) comes from the computation $$tr(A_0N)=tr((-T+\sum_k y_k A_k)N)=\sum_k y_k tr(A_k N)- tr(TN)=b^\top y-tr(TN), \quad T\succeq 0.$$ Save for some pathological cases, at the optimum of (D) one has $TN=0$, and the optima of (D) and (P) are equal.
Building upon this, one can relate $S$ and $S^*$: for each $b\in S$ there exists a unique $y\in S^*$ such that $b^\top y=0$.
A lot is understood about the boundary of $S^*$; see Theorem 2.1. Basically, it is a component in an algebraic set defined by $\det(-A_0+\sum_k y_kA_k)=0$. An interesting nontrivial example comes from determinantal representations of certain cubic curves, for which one has one compact convex component (this is the boundary of $S^*$) and one unbounded branch.
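As a toy illustration of (*) (my own minimal example, not from the survey): for $2\times 2$ symmetric matrices, $T\succeq 0$ is equivalent to $\mathrm{tr}\,T\ge 0$ and $\det T\ge 0$, so feasibility of the LMI can be tested in a few lines.

```python
def is_psd_2x2(a, b, c):
    """PSD test for [[a, b], [b, c]]: both eigenvalues are >= 0
    iff trace >= 0 and determinant >= 0."""
    return a + c >= 0 and a * c - b * b >= 0

def lmi_feasible(y, A, A0):
    """Check y_1 A_1 + ... + y_m A_m >= A_0 for 2x2 symmetric
    matrices stored as (a, b, c) triples."""
    s = [sum(yk * Ak[i] for yk, Ak in zip(y, A)) - A0[i] for i in range(3)]
    return is_psd_2x2(*s)

# Toy instance: A1 = I, A2 = [[1, 1], [1, 0]], A0 = 0
A = [(1.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
A0 = (0.0, 0.0, 0.0)
print(lmi_feasible([1.0, 0.0], A, A0))   # identity matrix: True
print(lmi_feasible([0.0, 1.0], A, A0))   # indefinite matrix: False
```

For larger $n$ one would of course use a proper eigenvalue routine or an SDP solver instead of the trace/determinant shortcut.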
Sorry I had to start a new thread here because the latex was just not generating on the other thread, I don't know if it is my browser or something else but I am having a very difficult day with the tex.
anyhow
[tex] \lim_{x \rightarrow 0} \frac {\sqrt{5x+9}-3}{x} [/tex]
This gives 0/0 off the bat so apply L'H once I get:
[tex] \lim_{x \rightarrow 0} \frac {(5x+9)\cdot 5}{2} [/tex]
Which plugs in nicely to give [itex] \frac {45}{2} [/itex]
Did I do that all right?
Hi again can any latex specialists help me. My code definitely says the equation I wrote above is (sqrt(5x+9))-3/x but on my browser there is a red message saying the latex is not valid when I look at the thread and on my preview it is showing me a completely different other tex thing that I wrote two hours ago . I can't seem to avoid this problem, does anyone else see the mix up that I am seeing or is it just my own browser having a hissy fit?
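For what it's worth, a quick numerical check is a good way to validate an L'Hôpital computation. Here the difference quotient settles near 5/6, which is what differentiating the numerator gives ($\frac{5}{2\sqrt{5x+9}} \rightarrow \frac{5}{6}$ as $x \rightarrow 0$), rather than 45/2:

```python
def f(x):
    """The original difference quotient (sqrt(5x+9) - 3) / x."""
    return ((5 * x + 9) ** 0.5 - 3) / x

for x in (1e-2, 1e-4, 1e-6):
    print(x, f(x))   # values approach 5/6 = 0.8333...
```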
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/$\psi$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
Learning Objectives
To balance equations that describe reactions in solution. To calculate the quantities of compounds produced or consumed in a chemical reaction. To solve quantitative problems involving the stoichiometry of reactions in solution.
A balanced chemical equation gives the identity of the reactants and the products as well as the accurate number of molecules or moles of each that are consumed or produced.
Stoichiometry is a collective term for the quantitative relationships between the masses, the numbers of moles, and the numbers of particles (atoms, molecules, and ions) of the reactants and the products in a balanced chemical equation. A stoichiometric quantity is the amount of product or reactant specified by the coefficients in a balanced chemical equation. This section describes how to use the stoichiometry of a reaction to answer questions like the following: How much oxygen is needed to ensure complete combustion of a given amount of isooctane? (This information is crucial to the design of nonpolluting and efficient automobile engines.) How many grams of pure gold can be obtained from a ton of low-grade gold ore? (The answer determines whether the ore deposit is worth mining.) If an industrial plant must produce a certain number of tons of sulfuric acid per week, how much elemental sulfur must arrive by rail each week?
All these questions can be answered using the concepts of the mole, molar and formula masses, and solution concentrations, along with the coefficients in the appropriate balanced chemical equation.
Stoichiometry Problems
When carrying out a reaction in either an industrial setting or a laboratory, it is easier to work with masses of substances than with the numbers of molecules or moles. The general method for converting from the mass of any reactant or product to the mass of any other reactant or product using a balanced chemical equation is outlined and described in the following text.
Steps in Converting between Masses of Reactant and Product
1. Convert the mass of one substance (substance A) to the corresponding number of moles using its molar mass.
2. From the balanced chemical equation, obtain the number of moles of another substance (B) from the number of moles of substance A using the appropriate mole ratio (the ratio of their coefficients).
3. Convert the number of moles of substance B to mass using its molar mass.
It is important to remember that some species are present in excess by virtue of the reaction conditions. For example, if a substance reacts with the oxygen in air, then oxygen is in obvious (but unstated) excess.
Converting amounts of substances to moles—and vice versa—is the key to all stoichiometry problems, whether the amounts are given in units of mass (grams or kilograms), weight (pounds or tons), or volume (liters or gallons).
To illustrate this procedure, consider the combustion of glucose. Glucose reacts with oxygen to produce carbon dioxide and water:
\[ C_6H_{12}O_6 (s) + 6 O_2 (g) \rightarrow 6 CO_2 (g) + 6 H_2O (l) \tag{3.6.1}\]
Just before a chemistry exam, suppose a friend reminds you that glucose is the major fuel used by the human brain. You therefore decide to eat a candy bar to make sure that your brain does not run out of energy during the exam (even though there is no direct evidence that consumption of candy bars improves performance on chemistry exams). If a typical 2 oz candy bar contains the equivalent of 45.3 g of glucose and the glucose is completely converted to carbon dioxide during the exam, how many grams of carbon dioxide will you produce and exhale into the exam room?
The initial step in solving a problem of this type is to write the balanced chemical equation for the reaction. Inspection shows that it is balanced as written, so the strategy outlined above can be adapted as follows:
1. Use the molar mass of glucose (to one decimal place, 180.2 g/mol) to determine the number of moles of glucose in the candy bar:
\[ moles \, glucose = 45.3 \, g \, glucose \times {1 \, mol \, glucose \over 180.2 \, g \, glucose } = 0.251 \, mol \, glucose \]
2. According to the balanced chemical equation, 6 mol of \(CO_2\) is produced per mole of glucose; the mole ratio of \(CO_2\) to glucose is therefore 6:1. The number of moles of \(CO_2\) produced is thus
\[ moles \, CO_2 = mol \, glucose \times {6 \, mol \, CO_2 \over 1 \, mol \, glucose } \]
\[ = 0.251 \, mol \, glucose \times {6 \, mol \, CO_2 \over 1 \, mol \, glucose } \]
\[ = 1.51 \, mol \, CO_2 \]
3. Use the molar mass of \(CO_2\) (44.010 g/mol) to calculate the mass of \(CO_2\) corresponding to 1.51 mol of \(CO_2\):
\[ mass \, of \, CO_2 = 1.51 \, mol \, CO_2 \times {44.010 \, g \, CO_2 \over 1 \, mol \, CO_2} = 66.5 \, g \, CO_2 \]
These operations can be summarized as follows:
\[ 45.3 \, g \, glucose \times {1 \, mol \, glucose \over 180.2 \, g \, glucose} \times {6 \, mol \, CO_2 \over 1 \, mol \, glucose} \times {44.010 \, g \, CO_2 \over 1 \, mol \, CO_2} = 66.4 \, g \, CO_2 \]
Discrepancies between the two values are attributed to rounding errors resulting from using stepwise calculations in steps 1–3. (Remember that you should generally carry extra significant digits through a multistep calculation to the end to avoid this!) This amount of gaseous carbon dioxide occupies an enormous volume—more than 33 L. Similar methods can be used to calculate the amount of oxygen consumed or the amount of water produced.
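The chained calculation above collapses into a few lines of code (a sketch using the molar masses quoted in the text, carrying full precision to the final rounding):

```python
M_GLUCOSE = 180.2        # g/mol, C6H12O6 (value used in the text)
M_CO2 = 44.010           # g/mol
MOLE_RATIO = 6           # 6 mol CO2 per 1 mol glucose

grams_glucose = 45.3
mol_glucose = grams_glucose / M_GLUCOSE          # ~0.251 mol
grams_co2 = mol_glucose * MOLE_RATIO * M_CO2     # no intermediate rounding
print(round(grams_co2, 1))   # prints 66.4
```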
The balanced chemical equation was used to calculate the mass of product that is formed from a certain amount of reactant. It can also be used to determine the masses of reactants that are necessary to form a certain amount of product or, as shown in Example \(\PageIndex{1}\), the mass of one reactant that is required to consume a given mass of another reactant.
Example \(\PageIndex{1}\): The US Space Shuttle
The combustion of hydrogen with oxygen to produce gaseous water is extremely vigorous, producing one of the hottest flames known. Because so much energy is released for a given mass of hydrogen or oxygen, this reaction was used to fuel the NASA (National Aeronautics and Space Administration) space shuttles, which have recently been retired from service. NASA engineers calculated the exact amount of each reactant needed for the flight to make sure that the shuttles did not carry excess fuel into orbit. Calculate how many tons of hydrogen a space shuttle needed to carry for each 1.00 tn of oxygen (1 tn = 2000 lb).
The US space shuttle Discovery during liftoff. The large cylinder in the middle contains the oxygen and hydrogen that fueled the shuttle’s main engine.
Given: reactants, products, and mass of one reactant
Asked for: mass of other reactant
Strategy: Write the balanced chemical equation for the reaction. Convert mass of oxygen to moles. From the mole ratio in the balanced chemical equation, determine the number of moles of hydrogen required. Then convert the moles of hydrogen to the equivalent mass in tons.
Solution:
We use the same general strategy for solving stoichiometric calculations as in the preceding example. Because the amount of oxygen is given in tons rather than grams, however, we also need to convert tons to units of mass in grams. Another conversion is needed at the end to report the final answer in tons.
A We first use the information given to write a balanced chemical equation. Because we know the identity of both the reactants and the product, we can write the reaction as follows:
\[ H_2 (g) + O_2 (g) \rightarrow H_2O (g) \]
This equation is not balanced because there are two oxygen atoms on the left side and only one on the right. Assigning a coefficient of 2 to both \(H_2O\) and \(H_2\) gives the balanced chemical equation:
\[ 2 H_2 (g) + O_2 (g) \rightarrow 2 H_2O (g) \]
Thus 2 mol of \(H_2\) react with 1 mol of \(O_2\) to produce 2 mol of \(H_2O\).
1. B To convert tons of oxygen to units of mass in grams, we multiply by the appropriate conversion factors:
\[ mass \, of \, O_2 = 1.00 \, tn \times {2000 \, lb \over 1 \, tn} \times {453.6 \, g \over 1 \, lb} = 9.07 \times 10^5 \, g \, O_2 \]
Using the molar mass of \(O_2\) (32.00 g/mol, to four significant figures), we can calculate the number of moles of \(O_2\) contained in this mass of \(O_2\):
\[ mol \, O_2 = 9.07 \times 10^5 \, g \, O_2 \times {1 \, mol \, O_2 \over 32.00 \, g \, O_2} = 2.83 \times 10^4 \, mol \, O_2 \]
2. Now use the coefficients in the balanced chemical equation to obtain the number of moles of \(H_2\) needed to react with this number of moles of \(O_2\):
\[ mol \, H_2 = mol \, O_2 \times {2 \, mol \, H_2 \over 1 \, mol \, O_2} \]
\[ = 2.83 \times 10^4 \, mol \, O_2 \times {2 \, mol \, H_2 \over 1 \, mol \, O_2} = 5.66 \times 10^4 \, mol \, H_2 \]
3. The molar mass of \(H_2\) (2.016 g/mol) allows us to calculate the corresponding mass of \(H_2\):
\[mass \, of \, H_2 = 5.66 \times 10^4 \, mol \, H_2 \times {2.016 \, g \, H_2 \over mol \, H_2} = 1.14 \times 10^5 \, g \, H_2 \]
Finally, convert the mass of \(H_2\) to the desired units (tons) by using the appropriate conversion factors:
\[ tons \, H_2 = 1.14 \times 10^5 \, g \, H_2 \times {1 \, lb \over 453.6 \, g} \times {1 \, tn \over 2000 \, lb} = 0.126 \, tn \, H_2 \]
The space shuttle had to be designed to carry 0.126 tn of \(H_2\) for each 1.00 tn of \(O_2\). Even though 2 mol of \(H_2\) are needed to react with each mole of \(O_2\), the molar mass of \(H_2\) is so much smaller than that of \(O_2\) that only a relatively small mass of \(H_2\) is needed compared to the mass of \(O_2\).
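The whole chain, from tons of oxygen to tons of hydrogen, can be reproduced in a few lines (a sketch using the conversion factors and molar masses from the worked solution):

```python
G_PER_TON = 2000 * 453.6   # 1 tn = 2000 lb; 1 lb = 453.6 g
M_O2, M_H2 = 32.00, 2.016  # g/mol

mol_o2 = 1.00 * G_PER_TON / M_O2    # ~2.83e4 mol of O2 per ton
mol_h2 = 2 * mol_o2                 # 2 mol H2 per 1 mol O2
tons_h2 = mol_h2 * M_H2 / G_PER_TON
print(round(tons_h2, 3))   # prints 0.126
```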
Exercise \(\PageIndex{1}\): Roasting Cinnabar
Cinnabar (or cinnabarite), \(HgS\), is the common ore of mercury. Because of its mercury content, cinnabar can be toxic to human beings; however, because of its red color, it has also been used since ancient times as a pigment.
Alchemists produced elemental mercury by roasting cinnabar ore in air:
\[ HgS (s) + O_2 (g) \rightarrow Hg (l) + SO_2 (g) \]
The volatility and toxicity of mercury make this a hazardous procedure, which likely shortened the life span of many alchemists. Given 100 g of cinnabar, how much elemental mercury can be produced from this reaction?
Answer
86.2 g
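This answer can be verified directly; the check below is mine, not part of the original exercise, and uses standard atomic weights for Hg and S:

```python
M_HG, M_S = 200.59, 32.06        # g/mol, standard atomic weights
m_hgs = M_HG + M_S               # 232.65 g/mol for HgS

grams_hg = 100 * M_HG / m_hgs    # 1:1 mole ratio of HgS to Hg
print(round(grams_hg, 1))        # prints 86.2
```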
Calculating Moles from Volume
Quantitative calculations involving reactions in solution are carried out in the same way as those involving masses of reactants; however, volumes of solutions of known concentration are used to determine the number of moles of reactants. Whether dealing with volumes of solutions of reactants or masses of reactants, the coefficients in the balanced chemical equation give the number of moles of each reactant needed and the number of moles of each product that can be produced. An expanded version of the flowchart for stoichiometric calculations is shown in Figure \(\PageIndex{2}\). The balanced chemical equation for the reaction and either the masses of solid reactants and products or the volumes of solutions of reactants and products can be used to determine the amounts of other species, as illustrated in the following examples.
The balanced chemical equation for a reaction and either the masses of solid reactants and products or the volumes of solutions of reactants and products can be used in stoichiometric calculations.
Example \(\PageIndex{2}\) : Extraction of Gold
Gold is extracted from its ores by treatment with an aqueous cyanide solution, which causes a reaction that forms the soluble \([Au(CN)_2]^-\) ion. Gold is then recovered by reduction with metallic zinc according to the following equation:
\[ Zn(s) + 2[Au(CN)_2]^-(aq) \rightarrow [Zn(CN)_4]^{2-}(aq) + 2Au(s) \]
What mass of gold can be recovered from 400.0 L of a \(3.30 \times 10^{-4}\) M solution of \([Au(CN)_2]^-\)?
Given: chemical equation and molarity and volume of reactant
Asked for: mass of product
Strategy: Check the chemical equation to make sure it is balanced as written; balance if necessary. Then calculate the number of moles of \([Au(CN)_2]^-\) present by multiplying the volume of the solution by its concentration. From the balanced chemical equation, use a mole ratio to calculate the number of moles of gold that can be obtained from the reaction. To calculate the mass of gold recovered, multiply the number of moles of gold by its molar mass.
Solution: A The equation is balanced as written; proceed to the stoichiometric calculation. Figure \(\PageIndex{2}\) is adapted for this particular problem as follows:
As indicated in the strategy, start by calculating the number of moles of \([Au(CN)_2]^-\) present in the solution from the volume and concentration of the \([Au(CN)_2]^-\) solution:
\( \begin{align} moles\: [Au(CN)_2 ]^- & = V_L M_{mol/L} \\ & = 400.0\: \cancel{L} \left( \dfrac{3.30 \times 10^{-4}\: mol\: [Au(CN)_2 ]^-} {1\: \cancel{L}} \right) = 0.132\: mol\: [Au(CN)_2 ]^- \end{align} \)
B Because the coefficients of gold and the \([Au(CN)_2]^-\) ion are the same in the balanced chemical equation, assuming that Zn(s) is present in excess, the number of moles of gold produced is the same as the number of moles of \([Au(CN)_2]^-\) (i.e., 0.132 mol of Au). The problem asks for the mass of gold that can be obtained, so the number of moles of gold must be converted to the corresponding mass using the molar mass of gold:
\( \begin{align} mass\: of\: Au &= (moles\: Au)(molar\: mass\: Au) \\
&= 0.132\: \cancel{mol\: Au} \left( \dfrac{196.97\: g\: Au} {1\: \cancel{mol\: Au}} \right) = 26.0\: g\: Au \end{align}\)
At a 2011 market price of over $1400 per troy ounce (31.10 g), this amount of gold is worth $1170.
\( 26.0\: \cancel{g\: Au} \times \dfrac{1\: \cancel{troy\: oz}} {31.10\: \cancel{g}} \times \dfrac{\$1400} {1\: \cancel{troy\: oz\: Au}} = \$1170 \)
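A short script reproduces both the recovered mass and the dollar value (a sketch; the troy-ounce conversion and the 2011 price are the ones quoted above):

```python
M_AU = 196.97                    # g/mol
volume_l, conc = 400.0, 3.30e-4  # L, mol/L

mol_au = volume_l * conc         # 1:1 mole ratio of Au to [Au(CN)2]-
grams_au = mol_au * M_AU
dollars = grams_au / 31.10 * 1400   # 31.10 g per troy oz, $1400/oz
print(round(grams_au, 1), round(dollars))   # prints 26.0 1170
```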
Exercise \(\PageIndex{2}\) : Lanthanum Oxalate
What mass of solid lanthanum(III) oxalate nonahydrate \([La_2(C_2O_4)_3 \cdot 9H_2O]\) can be obtained from 650 mL of a 0.0170 M aqueous solution of \(LaCl_3\) by adding a stoichiometric amount of sodium oxalate?
Answer
3.89 g
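Likewise for this exercise; the check below is mine, with the molar mass assembled from standard atomic weights:

```python
# La2(C2O4)3 . 9 H2O: 2 La, 6 C, 12 + 9 = 21 O, 18 H
M = 2 * 138.91 + 6 * 12.011 + 21 * 15.999 + 18 * 1.008   # ~704.0 g/mol

mol_la = 0.650 * 0.0170   # mol La3+ in 650 mL of 0.0170 M LaCl3
grams = mol_la / 2 * M    # 2 La per formula unit of the product
print(round(grams, 2))    # prints 3.89
```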
Summary
Either the masses or the volumes of solutions of reactants and products can be used to determine the amounts of other species in the balanced chemical equation. Quantitative calculations that involve the stoichiometry of reactions in solution use volumes of solutions of known concentration instead of masses of reactants or products. The coefficients in the balanced chemical equation tell how many moles of reactants are needed and how many moles of product can be produced.
Learning Objectives
Understand the difference between induced and natural radioactivity. Know which elements have unstable nuclei. Correlate the relationship between binding energy and stability.
In Section 3.3, we saw how J. J. Thomson discovered the electron using cathode ray tubes (CRTs). In November 1895, a German physicist named Wilhelm Roentgen used this same technology to discover the x-ray. Roentgen's ray originated from the CRT and caused a barium platinocyanide screen to fluoresce. After seeing this, Roentgen placed objects between the ray and the screen. He noted that the ray would still penetrate the substances and leave an image on photographic film.
When naming this type of radiation, Roentgen reflected on his experiments. He could not confidently explain his observations, so he decided to call this new type of radiation the x-ray. Throughout his lifetime, he continued to explore this technology and even used his wife as a test subject (refer to Chapter 1 for her x-ray image). By never patenting his invention, Roentgen freely shared the x-ray with other researchers and medical professionals. In 1901, he was awarded the Nobel Prize in Physics for his work with the x-ray.
As with any new technology, people looked for ways to apply the x-ray to day-to-day life. Thomas Edison, the inventor of the light bulb, thought the average person should have an x-ray machine in their home, and he designed x-ray machines to be smaller and portable. Unfortunately, many of his technicians died from radiation poisoning.
The Harmful Effects of X-rays
In November 1896, an article entitled "The harmful effects of X-rays" was published in Nature. The witness had worked as an X-ray demonstrator during the summer in London, exposing himself at the rate of several hours per day. He testified: "In the first two or three weeks I felt no inconvenience, but after a while many dark spots appeared on the fingers of my right hand, which pierced under the skin and gradually became very painful; the rest of the skin was red and strongly inflamed. My hand was so bad that I was constantly forced to bathe it in very cold water. An ointment momentarily calmed the pain, but the epidermis had dried up, had become hard and yellow like parchment and completely insensible, so I was not surprised when my hand began to peel."
"Soon the skin and nails fell off and the fingers swelled, the pain remaining constant. I lost the skin of my right and left hands; four of my nails disappeared from the right hand, two from the left, and three others were ready to fall off. For more than six weeks I was unable to hold anything in my right hand, and I have not been able to hold a pen since the loss of my nails …"
During Edison's time, people would host x-ray parties in their homes. The host of these gatherings would allow guests to x-ray different parts of their bodies, and people would leave these events with framed souvenirs. X-rays were even used to measure foot size: devices like the fluoroscope were placed in shoe stores, where technicians, consumers, and observers could look in oculars while the foot was being x-rayed. Unnecessary exposure to radiation was not considered a hazard at the time; in this era, most believed this was the most accurate way to measure the foot.
Video \(\PageIndex{1}\): The shoe-fitting fluoroscope was a common fixture in shoe stores during the 1930s, 1940s, and 1950s. The first fluoroscopic device for x-raying feet may have been created during World War I to eliminate the need for patients to remove their boots, speeding up the processing of the large number of injured military personnel seeking help. After the war, the device was modified for shoe fitting and shown for the first time at a shoe retailers' convention in Boston in 1920.

Radioactivity
When Becquerel heard about Roentgen's discovery, he wondered if his fluorescent minerals would give off the same x-rays. Becquerel placed some of his rock crystals on top of a well-covered photographic plate and set them in the sunlight. The sunlight made the crystals glow with a bright fluorescent light, but when Becquerel developed the film he was very disappointed. He found that only one of his minerals, a uranium salt, had fogged the photographic plate. He decided to try again, and this time to leave them out in the sun for a longer period of time. Fortunately, the weather did not cooperate and Becquerel had to leave the crystals and film stored in a drawer for several cloudy days. Before continuing his experiments, Becquerel decided to check one of the photographic plates to make sure the chemicals were still good. To his amazement, he found that the plate had been exposed in spots where it had been near the uranium-containing rocks, even though some of these rocks had not been exposed to sunlight at all. In later experiments, Becquerel confirmed that the radiation from the uranium had no connection with light or fluorescence, but that the amount of radiation was directly proportional to the concentration of uranium in the rock. Becquerel had discovered radioactivity.

The Curies and Radium
One of Becquerel's assistants, a young Polish scientist named Maria Skłodowska (who became Marie Curie after she married Pierre Curie), became interested in the phenomenon of radioactivity. With her husband, she decided to find out if chemicals other than uranium were radioactive. The Austrian government was happy to send the Curies a ton of pitchblende from the mining region of Joachimsthal because it was waste material that had to be disposed of anyway. The Curies wanted the pitchblende because it was the residue of uranium mining. From the ton of pitchblende, the Curies separated \(0.10 \: \text{g}\) of a previously unknown element, radium, in the form of the compound radium chloride. This radium was many times more radioactive than uranium.
By 1902, the world was aware of a new phenomenon called radioactivity and of new elements which exhibited natural radioactivity. For this work, Becquerel and the Curies shared the 1903 Nobel Prize in Physics. These three researchers continued to work with hazardous radioactive materials, and they experienced various ailments, including burns and weakness, from handling these substances. In 1906, Pierre Curie accidentally fell under a horse-drawn cart. He died immediately from his wounds and left Marie with two small children (Irène, 9 years old, and Ève, 2 years old). After his death, Marie continued her research on radium and polonium. In 1911, she was awarded the Nobel Prize in Chemistry for the discoveries of radium and polonium. To this day, Marie Curie is the only person ever to have received Nobel Prizes in two different sciences.
Figure \(\PageIndex{2}\): Everyone knows that winning the Nobel Prize is a big deal, but why do we even have a Nobel Prize? And why does it matter?
Further experiments provided information about the characteristics of the penetrating emissions from radioactive substances. It was soon discovered that there were three common types of radioactive emissions. Some of the radiation could pass easily through aluminum foil while some of the radiation was stopped by the foil. Some of the radiation could even pass through foil up to a centimeter thick. The three basic types of radiation were named alpha, beta, and gamma radiation. The actual composition of the three types of radiation was still not known.
Eventually, scientists were able to demonstrate experimentally that the alpha particle, \(\alpha\), was a helium nucleus (a particle containing two protons and two neutrons), a beta particle, \(\beta\), was a high-speed electron, and gamma rays, \(\gamma\), were a very high energy form of light (even higher energy than x-rays).
Unstable Nuclei May Disintegrate
A nucleus (with one exception, hydrogen-1) consists of some number of protons and neutrons pulled together in an extremely tiny volume. Since protons are positively charged and like charges repel, it is clear that protons cannot remain together in the nucleus unless there is a powerful force holding them there. The force which holds the nucleus together is generated by the nuclear binding energy.
A nucleus with a large amount of binding energy per nucleon (proton or neutron) will be held together tightly and is referred to as stable. These nuclei do not break apart. When there is too little binding energy per nucleon, the nucleus will be less stable and may disintegrate (come apart). Such disintegrations are referred to as natural radioactivity. It is also possible for scientists to smash nuclear particles together and cause nuclear reactions between normally stable nuclei. These disintegrations are referred to as artificial radioactivity. None of the elements above atomic number 92 on the periodic table occurs naturally on Earth; they are all products of artificial radioactivity (man-made).
When nuclei come apart, they do so violently, accompanied by a tremendous release of energy in the form of heat, light, and radiation. This energy comes from some of the nuclear binding energy; in chemical reactions, by contrast, the energy comes from electrons moving between energy levels. A typical nuclear change (such as fission) may involve millions of times more energy per atom than a chemical change (such as burning)!
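As a rough numerical illustration of binding energy (the particle masses below are standard values, not taken from this text), the mass defect of helium-4 gives its binding energy per nucleon:

```python
# Binding energy of He-4 from the mass defect (illustrative sketch;
# masses in unified atomic mass units are standard reference values).
m_proton = 1.007276   # u
m_neutron = 1.008665  # u
m_he4 = 4.001506      # u (nuclear mass, electrons excluded)
U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

mass_defect = 2*m_proton + 2*m_neutron - m_he4
binding_energy = mass_defect * U_TO_MEV   # total, ~28.3 MeV
per_nucleon = binding_energy / 4          # ~7.07 MeV per nucleon

print(f"{binding_energy:.2f} MeV total, {per_nucleon:.2f} MeV per nucleon")
```

The large value per nucleon is why helium-4 is so stable, consistent with the text's point that more binding energy per nucleon means a more stable nucleus.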
Need More Practice? Turn to Section 5.E of this OER and answer questions #5 and #7.
To expand my comment (and Robert Bryant's) a little further, the main point here is that tensor products of
finite dimensional irreducible representations behave nicely. In the equivalent Lie algebra setting, the finite dimensional irreducibles are those whose highest weights are dominant integral in the dual of a Cartan subalgebra $\mathfrak{t}:= \mathrm{Lie} \;T$ of $\mathfrak{g} := \mathrm{Lie} \;G$, relative to some fixed choice of positive (or simple) roots.
In particular, $V(\mu) \otimes V(\nu)$ has highest weight $\lambda:= \mu + \nu$ with multiplicity 1. The complete reducibility of this tensor product depends mainly on the fact that the center $Z(\mathfrak{g})$ of the universal enveloping algebra of $\mathfrak{g}$ (generated by "Casimir operators") acts by distinct central characters on the distinct finite dimensional representations. In other words, the sum of all copies of a typical $V(\pi)$ in the tensor product decomposition is a single "eigenspace" for the action of the center, where the action is given by a "central character" $\chi_\pi$.
Since the summand $V(\lambda)$ occurs only once in the tensor product (the weight space for $\lambda$ being one dimensional), $V(\lambda)$ may be characterized as the set of all vectors on which $Z(\mathfrak{g})$ acts by the central character $\chi_\lambda$. (There may of course be many isomorphic summands of smaller highest weights $\pi$, where the center acts by the single character $\chi_\pi$.) In different language this is essentially Robert's comment.
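As a concrete illustration (my own, not from the original answer): for $\mathfrak{g} = \mathfrak{sl}_2$, where highest weights are nonnegative integers, the decomposition of $V(m)\otimes V(n)$ is the classical Clebsch-Gordan rule, and the top weight $m+n$ indeed occurs exactly once:

```python
# Clebsch-Gordan rule for sl2 (illustrative; V(k) has highest weight k
# and dimension k+1): V(m) (x) V(n) = V(m+n) + V(m+n-2) + ... + V(|m-n|).
def tensor_decompose(m, n):
    """Highest weights appearing in the decomposition of V(m) (x) V(n)."""
    return [m + n - 2*i for i in range(min(m, n) + 1)]

weights = tensor_decompose(2, 3)
print(weights)  # [5, 3, 1]

# dimensions add up: dim V(m) * dim V(n) = sum of dim V(k)
assert sum(k + 1 for k in weights) == (2 + 1) * (3 + 1)
# the top weight m+n occurs with multiplicity one
assert weights.count(2 + 3) == 1
```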
I'm trying to find the normalisation constant $N$ for the following wavefunction:
$$ \psi\left(x\right) = \left\{ \begin{array}{lr} N \left(x^2 - l^2\right)^2 &\: \left|x\right| \le l \\ 0 &\: otherwise \end{array} \right. $$
Using:
$$ \int_{-\infty}^{\infty} \left|\psi\left(x\right)\right|^2 \, dx = 1 $$
The answer should be:
$$ N = \sqrt{\frac{315}{256}} \frac{e^{i \phi}}{\sqrt{l}} $$
However I get:
$$ \int_{-\infty}^{\infty} \left|\psi\left(x\right)\right|^2 \, dx \ = \int_{-l}^{l} N^2 \left(x^2 - l^2 \right)^4 \, dx \ = \ \frac{N^2}{10} \left[\frac{\left(x^2 - l^2\right)^5}{x}\right]_{\,-l}^{\,l} = 0 $$
Which is clearly wrong, and I do not understand where the phase could have come from. Am I approaching this completely the wrong way?
I have now corrected the integration, however I get (double checked with Mathematica):
$$ N = \sqrt{\frac{315}{256}} \frac{1}{l^\frac{9}{2}} $$
Which is the wrong power of $l$ (it should be $\frac{1}{2}$). Substituting either the answer or my answer into the original integral does not yield $1$ either.
(I am working through these quantum physics notes: http://ocw.mit.edu/courses/physics/8-04-quantum-physics-i-spring-2013/lecture-notes/MIT8_04S13_Lec04.pdf)
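A quick symbolic check (my own sketch using sympy, not part of the original question) reproduces the corrected integral and gives $N \propto l^{-9/2}$, consistent with the second attempt above:

```python
import sympy as sp

x = sp.symbols('x')
l, N = sp.symbols('l N', positive=True)  # take N real and positive for the check

psi = N*(x**2 - l**2)**2
norm = sp.integrate(psi**2, (x, -l, l))  # = 256*N**2*l**9/315
N_val = sp.solve(sp.Eq(norm, 1), N)[0]
print(N_val)  # the positive root, proportional to l**(-9/2)
```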
I am trying to derive the non-relativistic Lagrangian for a complex scalar field from taking the non-relativistic limit of the complex scalar field Lagrangian. I am following the steps in "QFT for the Gifted Amateur," section 12.3, which uses the mostly-minus metric convention.
They start out with the complex scalar field Lagrangian (with interaction term),
$$\mathcal{L} = \partial^{\mu}\psi^{\dagger}\partial_{\mu}\psi-m^2\psi^{\dagger}\psi -\lambda(\psi^{\dagger}\psi)^2\tag{1} $$
and take the non-relativistic limit,
$$\psi \rightarrow \frac{1}{\sqrt{2m}}e^{-imt}\Psi(\textbf{x},t),\tag{2}$$
and the corresponding adjoint expression and plug them into $\mathcal{L}$. The book first studied the time derivative part, and argues:
As in the last example, the guts of taking the [non-relativistic] limit may be found in the time derivatives. We find
$$ \partial_{0}\psi^{\dagger}\partial_{0}\psi = \frac{1}{2m} \left[ \partial_0 \Psi^{\dagger}\partial_0\Psi + im \left( \Psi^{\dagger}\partial_0\Psi - (\partial_0\Psi^{\dagger})\Psi \right) +m^2\Psi^{\dagger}\Psi \right]. \tag{3}$$ The first term going as $1/m$ is negligible in comparison with the others.
I agree with the equation they derived, however, I don't know why the first term that goes as $(1/m)$ is negligible, because if you expand out the rest of the Lagrangian (this is now my work, not the textbook's,) you get
$$ \mathcal{L} = \frac{m}{2}\Psi^{\dagger}\Psi + \frac{i}{2}\left( \Psi^{\dagger}\partial_0\Psi-(\partial^0\Psi^{\dagger})\Psi \right) + \frac{1}{2m}\partial^0\Psi^{\dagger}\partial_0\Psi$$$$ - \frac{1}{2m}\nabla\Psi^{\dagger} \cdot \nabla\Psi - \frac{m}{2}\Psi^{\dagger}\Psi - \frac{\lambda}{4m^2}(\Psi^{\dagger}\Psi)^2. \tag{4} $$
Then it is claimed that $\Psi$ and its adjoint may be expanded in plane waves, like
$$ \Psi = \sum a_{\textbf{p}}e^{-ip\cdot x},\tag{5} $$
so $\left( \Psi^{\dagger}\partial_0\Psi - \Psi \partial_0\Psi^{\dagger} \right) $ can be replaced with $2\Psi^{\dagger}\partial_0\Psi$, which leaves
$$ \mathcal{L} = i\Psi^{\dagger}\partial_0\Psi + \frac{1}{2m}\partial^0\Psi^{\dagger}\partial_0\Psi - \frac{1}{2m}\nabla\Psi^{\dagger} \cdot \nabla \Psi - \frac{\lambda}{4m^2} (\Psi^{\dagger}{\Psi})^2.\tag{6} $$
The algebra I am fine with, but I say that if you look at the full Lagrangian, not just the time derivative part, both the time derivative and spatial derivative parts go as $1/m$, and even worse, the interaction term goes as $1/m^2$. The book neglected the time derivative part and uses as its final answer
$$ \mathcal{L} = i\Psi^{\dagger}\partial_0\Psi - \frac{1}{2m}\nabla\Psi^{\dagger} \cdot \nabla\Psi - \frac{\lambda}{4m^2} \left( \Psi^{\dagger}\Psi \right)^2. \tag{7}$$
Is there a reason why the $\frac{1}{2m}\partial_0\Psi^{\dagger}\partial_0\Psi$ time-derivative term is neglected because "it is small compared to the other two terms in the time-derivative $(\partial_0\psi^{\dagger}\partial_0\psi)$ expression," even though it is $\mathbf{not}$ small compared to the other terms in the Lagrangian (at least, I don't think it is)?
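Equation (3) itself can be verified symbolically. The sketch below is my own check using sympy, treating $\Psi^\dagger$ as an independent function of $t$ and suppressing the spatial dependence:

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
Psi = sp.Function('Psi')(t)    # Psi(x, t) with x-dependence suppressed
PsiD = sp.Function('PsiD')(t)  # stands in for Psi^dagger, treated independently

# non-relativistic substitution, Eq. (2), and its adjoint
psi = sp.exp(-sp.I*m*t) * Psi / sp.sqrt(2*m)
psiD = sp.exp(sp.I*m*t) * PsiD / sp.sqrt(2*m)

lhs = sp.diff(psiD, t) * sp.diff(psi, t)
# right-hand side of Eq. (3)
rhs = (sp.diff(PsiD, t)*sp.diff(Psi, t)
       + sp.I*m*(PsiD*sp.diff(Psi, t) - sp.diff(PsiD, t)*Psi)
       + m**2*PsiD*Psi) / (2*m)

print(sp.simplify(lhs - rhs))  # 0
```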
You are right. The notion of "escape" from a gravitational field is something that only exists in an idealized mathematical model of the situation. In that idealized model, we have only one gravitator, in a space-time of infinite extent. With that, we can state the idea of the escape velocity as thus:
It is the minimum speed that an object must be thrown away from the gravitator, in order that it will reach infinite distance, after an infinite lapse of time.
If you throw the object with less speed than this, it will come back: either it will strike the gravitator's surface, or it will go into a repeating orbit around it that nonetheless stays within some maximum distance of the gravitator. But if you throw it outward with speed at least the escape velocity, then the orbital path "becomes infinitely large" - it goes out to, and intersects (in a sense), infinity.
In formal language, if $\mathbf{r}(t)$ is a vector pointing from the gravitator toward the thrown object and which traces its trajectory, and at $t = 0$ we throw the object, the escape velocity is the minimum speed $v_\mathrm{esc}$ such that if
$$|\mathbf{r}'(0)| \ge v_\mathrm{esc}$$
i.e. the speed at the time of the throw is at least the escape velocity, then
$$\lim_{t \rightarrow \infty} |\mathbf{r}(t)| = \infty$$
that is, the distance from the gravitator grows without bound. In reality, of course, there are two limitations: for one, we can never have an infinite time go by, which means that the object will, at all times, still be theoretically under the influence of the gravitator, even if that influence is extremely small. The other limitation is that in reality, there will be other gravity sources present at a distance, which will then change the trajectory in other ways once their influence begins to dominate control over the motion of the thrown object, and this, in turn, will also prevent reaching that infinite distance. For example, if we are talking about the Solar System, and we are "escaping" the Earth, if we launch a craft away with no more than Earth's escape velocity, then it will only "escape" until the Sun's gravity begins to dominate it, at which point it will follow a more complicated trajectory based on mutual influence between the Earth and Sun (mostly, it will orbit the Sun, since the Earth will move away in the interim, but it may encounter the Earth again), and also, to some extent, the other planets.
Nonetheless, it is useful because we can take it as a ballpark for the minimum speed, and hence minimum effort which our engine must expend, to "get away" from the gravity source and get to a more remote destination like, say, Mars. Of course, actually travelling there will require more speed change (so-called "delta vee"), because we will also need to navigate to intercept that destination and then settle onto its surface safely.
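For reference, the standard Newtonian formula behind all of this (not derived in the answer above) is $v_\mathrm{esc} = \sqrt{2GM/r}$; a quick numerical sketch for Earth, using standard constant values:

```python
import math

# Standard reference values (not from the answer above)
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m, mean radius

# v_esc = sqrt(2GM/r): minimum launch speed to reach infinity (idealized model)
v_esc = math.sqrt(2 * G * M_earth / R_earth)
print(f"{v_esc/1000:.1f} km/s")  # ~11.2 km/s at Earth's surface
```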
Answer
Write $I$ in terms of the sine function: $$I=k-k\sin^2\theta$$
Work Step by Step
$$I=k\cos^2\theta$$ The exercise asks us to write $I$ in terms of the sine function. This can be done by changing $\cos\theta$ into $\sin\theta$. Using the Pythagorean identity $$\cos^2\theta=1-\sin^2\theta$$ we can rewrite $I$ as follows: $$I=k(1-\sin^2\theta)$$ $$I=k-k\sin^2\theta$$
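A one-line symbolic check (my own sketch) confirms the rewrite:

```python
import sympy as sp

theta, k = sp.symbols('theta k')
# k*cos^2(theta) equals k - k*sin^2(theta) identically
assert sp.simplify(k*sp.cos(theta)**2 - (k - k*sp.sin(theta)**2)) == 0
print("identity verified")
```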
Now showing items 1-10 of 18
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV
(Springer, 2014-08)
The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
I am trying to prove the following:
Let $G$ be a topological group acting properly on a Hausdorff locally compact space $X$, i.e. preimages of compact sets under the map $$G\times X\to X\times X$$ $$(g,x)\mapsto (gx,x)$$ are compact. Then the quotient $X/G$ is Hausdorff.
Take $x,x'\in X$ such that $Gx\not=Gx'$.
By using that $X$ is Hausdorff locally compact and the properness of the action, I found disjoint open sets $V_1,V_2$ with compact closure and a compact subset $H\subset G$, such that $x\in V_1$, $x'\in V_2$ and $$(G-H)\overline{V_1}\cap \overline{V_2}=\varnothing.$$ So now we need to work on $H$. Since $(Hx)^c\cap V_2$ is an open neighborhood of $x'$, there exists an open set $A_2$ with $x'\in A_2\subset V_2$ such that $$A_2\cap Hx=\varnothing.$$ It implies that $x\in (H^{-1}A_2)^c\cap V_1$. Now, I'd like to find as before an open set $A_1$ with $x\in A_1\subset V_1$ such that $$A_1\cap H^{-1}A_2=\varnothing,$$ which would be enough to conclude. But $H^{-1}A_2$ is not closed like $Hx$, so I'm stuck...
Are my first ideas correct? Is there a way to exit this dead end?
Theproof.tex
\section{Completing the proof}
In this section, we show how to deduce DHJ($k$) from DHJ($k-1$). We will treat $k = 3$ separately, since we have better bounds in this case due to the Gunderson-R\"{o}dl-Sidirenko theorem (and the probabilistic Sperner theorem).
\subsection{The \texorpdfstring{$k = 3$}{k=3} case} As described in Section~\ref{sec:outline-incr}, to establish DHJ($3$) it suffices to establish the density increment theorem stated therein. (One also needs the rigorous reduction from DHJ($3$) to the equal-slices version, Proposition~\ref{prop:edhj-equiv-dhj}.) For the reader's convenience, we restate the density increment theorem: \begin{named}{Theorem~\ref{thm:incr3}} Assume $n \geq 2 \uparrow^{(7)} (1/\delta)$. If $\ens{3}^n(A) \geq \delta$, then either $A$ contains a combinatorial line, or we may pass to a subspace of dimension at least $\log^{(7)} n$ on which $\ens{3}(A) \geq \delta + \Omega(\delta^3)$. \end{named} \begin{proof} Throughout the proof we may assume $\delta \leq 2/3$, as otherwise DHJ($3$) is easy. By Proposition~\ref{prop:degen}, we have $\eqs{3}^n(A) \geq \delta - 6/n \geq \delta - \delta^{50}$, by our assumption on $n$.
Let $\theta = 2\delta^{50}$. (We will not be especially economical when it comes to parameters which do not matter.) Applying Lemma~\ref{lem:32}\noteryan{not bothering to point out that the hypothesis holds}, we pass to subspace of dimension at least $(\delta^{50}/7.5) \sqrt{n} \geq n^{1/3}$ on which either $\eqs{3}(A) \geq \delta + \delta^{50}$, or both $\eqs{3}(A) \geq \delta - 4\delta^{13}$ and $\eqs{2}(A) \geq \delta - 4\delta^{25}$. We repeat this step until either the second conclusion obtains or until the step has been repeated $\lfloor 1/\delta^{47}\rfloor$ times. In either case, the dimension $n$ becomes no smaller than $n' = n \uparrow 3 \uparrow (-1/\delta^{47})$. Having done this, we either have $\eqs{3}(A) \geq \delta + \Omega(\delta^3)$ and may stop (as $n' \gg \log^{(7)} n$ by the assumption on $n$), or we have $\eqs{3}(A), \eqs{2}(A) \geq \delta - 4.5\delta^{13}$. \noteryan{$4.5$ is weird, I know, I messed up the constants earlier and was too lazy to readjust them} With another application of Proposition~\ref{prop:degen} we may obtain $\ens{3}(A), \ens{2}(A) \geq \delta - 5 \delta^{13}$, reducing $n'$ to at most, say, $n \uparrow 3 \uparrow (-1/\delta^{48})$.
Next, we apply Theorem~\ref{thm:correlate}, taking $\delta_k = \delta_{k-1} = \delta - 5\delta^{13}$. Note that we have proved the probabilistic DHJ($2$) theorem in Lemma~\ref{lem:pdhj2}. We certainly have $n' \geq \pdhj{2}{\delta_{k-1}} = \Theta(1/\delta^2)$, and thus we either obtain a combinatorial line in $A$, or a set $B \subseteq [3]^{n'}$ with $\ens{3}(B) \geq \delta - 5 \delta^{13}$ and \[ \frac{\ens{3}(A \cap B)}{\ens{3}(B)} \geq \delta - 5\delta^{13} + (\delta - 5\delta^{13})(\delta - 5\delta^{13})^2/2 \geq \delta + \delta^3/2 - 12.5\delta^{13} \geq \delta + \delta^3/4, \] where we used $\pdhj{2}{\delta_{k-1}} = \delta_{k-1}^2/2$. \noteryan{check these calculations} Also, $B$ may be written as $B_1 \cup B_2$, where $B_a$ is $a3$-insensitive. We henceforth write $A$ instead of $A \cap B$.
We now return to the uniform distribution by applying Lemma~\ref{lem:back-to-uniform}. Its $\beta$ satisfies $\beta \geq \delta - 5\delta^{13}$, and its $\delta$ needs (crucially) to be replaced by $\delta + \delta^3/4$. We choose its $\eta = \delta^3/8$. Note that the resulting $r = (\beta \eta/15k)\sqrt{n'}$ indeed satisfies $r \gg 2 \ln n$, by the initial assumption on $n$.\noteryan{I think I checked.} We now pass to a restriction on some $n' \geq r \geq n \uparrow 3 \uparrow (-1/\delta^{49})$ coordinates on which we have, say, \begin{equation} \label{eqn:3inc} \unif_3(B) \geq \delta^4/50, \quad \unif_3(A) \geq (\delta + \delta^3/8) \unif_3(B). \end{equation} Crucially, since $B$ was the union of a $13$-insensitive set and a $23$-insensitive set before, it remains so in the restriction. We also now apply Lemma~\ref{lem:switch} to partition $B$ into sets $C_1$ and $C_2$, where $C_1$ is $13$-insensitive and $C_2$ is an intersection of $13$- and $23$-insensitive sets.
Nearing the end, we now apply Corollary~\ref{cor:multi-partition} twice; once with $C = C_1$ and once with $C = C_2$. We choose $\eta = \delta^7/4000$ and for simplicity we take $\ell = 2$ both times. We will select the parameter $d$ and check the hypothesis on $n'$ later. We can therefore partition each of $C_1$, $C_2$ into disjoint $d$-dimensional combinatorial subspaces, along with error sets, $E_1$, $E_2$. Writing $E = E_1 \cup E_2$ we have \begin{equation} \label{eqn:3err} \unif_3(E) \leq 2\eta + 2\eta \leq \frac{\delta^7}{1000} \leq \frac{\delta^3}{20} \unif_3(B). \end{equation} We now simply discard the strings in $E$ from $A$ and $B$. Then $B$ is perfectly partitioned into disjoint $d$-dimensional subspaces, and some arithmetic shows that, whereas we had $\unif_3(A)/\unif_3(B) \geq \delta + \delta^3/8$ in~\eqref{eqn:3inc}, we now still have \[ \frac{\unif_3(A)}{\unif_3(B)} \geq \delta + \delta^3/8 - \frac{\unif_3(E)}{\unif_3(B) - \unif_3(E)}, \] and the fraction on the right is easily seen to be at most $\delta^3/19$, using~\eqref{eqn:3err}. Thus we have $\unif_3(A)/\unif_3(B) \geq \delta + \delta^3/20$, with $B$ perfectly partitioned into disjoint $d$-dimensional subspaces. It follows that we can pass to at least one $d$-dimensional subspace on which $\unif_3(A) \geq \delta + \delta^3/20$.
We come now to the final setting of parameters. To execute the partitioning argument, we required that \begin{equation} \label{eqn:crazy3} n' \geq \left(d^{3m(d)}\right)^{3m(d^{3m(d)})}, \end{equation} where \[ m(d) = 2\mdhj{2}{\delta^7/16000}{d} = 2(10d)^{d2^d}(16000/\delta^7)^{2^d}, \] using the Gunderson-R\"{o}dl-Sidirenko theorem. We will ensure that \begin{equation} \label{eqn:sat-me} d \geq 800/\delta^7, \end{equation} so that we may bound $m(d) \leq (20d)^{2d2^d} \leq 2 \uparrow 4 \uparrow d$. Then the right-hand side of~\eqref{eqn:crazy3} is at most $2 \uparrow^{(6)} (3d)$, and with $n' \geq n \uparrow 3 \uparrow (-1/\delta^{49})$ we get that~\eqref{eqn:crazy3} is satisfied so long as $d \ll \log^{(6)} n$, using also the initial assumption on $n$.\noteryan{Getting pretty out of control at this point.} Thus we set, say, \[ d \approx (\log^{(7)} n)^{100} \] and we've satisfied~\eqref{eqn:sat-me} using the initial assumption $n \geq 2 \uparrow^{(7)} (1/\delta)$.
It remains to note that, having passed to a $(\log^{(7)} n)^{100}$-dimensional subspace on which $\unif_3(A) \geq \delta + \delta^3/20$, we can pass to a further subspace of dimension at least the claimed $\log^{(7)} n$ on which $\unif_3(A) \geq \delta + \delta^3/40$ via the embedding argument sketched in Section~\ref{sec:outline-dist}, justified by Lemma~\ref{lem:distributions}. \end{proof}
\subsection{The general \texorpdfstring{$k$}{k} case}
For general $k \geq 4$ we will need to use the inductive deduction of the multidimensional DHJ($k-1$) theorem from the DHJ($k-1$) theorem, given in Proposition~\ref{prop:mdhj-from-dhj}. Because the overall argument then becomes a double-induction\noteryan{I think}, the resulting bounds for DHJ($k$) are of Ackermann-type \noteryan{I think}, and we do not work them out precisely.\noteryan{cop-out}.
\begin{theorem} The general $k$ case of the density Hales--Jewett theorem follows from the $k-1$ case. \end{theorem} \begin{proof} (Sketch.) Having proved DHJ($k-1$), we deduce the multidimensional version via Proposition~\ref{prop:mdhj-from-dhj}, the equal-slices version via Proposition~\ref{prop:edhj-equiv-dhj}, and the probabilistic version from Proposition~\ref{prop:pdhj-from-edhj} (only the $d = 1$ case is necessary). The overall goal for proving DHJ($k$) is, as in the $k = 3$ case, to show that for $n$ sufficiently large as a function of $\delta$, we can either find a line in $A$ or can obtain a density increment of $\Omega(\eps)$ on a combinatorial subspace of dimension $d'$, where $\eps = \delta \cdot \pdhj{k-1}{\delta/2}$ depends only on $\delta$ and $k$, and where $d' = \omega_n(1)$.
Beginning with $\ens{k}(A) \geq \delta$, we choose $\theta = 2\eps^C$ for some large constant $C$; note that $\theta$ only depends on $\delta$ and $k$. We pass from $\ens{k}(A)$ to $\eqs{k}(A)$ via Proposition~\ref{prop:degen}, incurring density loss only $\eps^c$ by assuming $n$ large enough ($n$ becomes slightly smaller here). We next apply Lemma~\ref{lem:32}. This gives us either a density increment of $\eps^c$, or achieves $\eqs{k}(A), \eqs{k-1}(A) \geq \delta - \eps^c$ for some smaller but still large constant $c$. We can repeat this step $1/\eps^c$ times, eventually achieving a density increment of $\Omega(\eps)$ and stopping if the first outcome keeps happening; otherwise achieving $\eqs{k}(A), \eqs{k-1}(A) \geq \delta - O(\eps^c)$. This brings $n$ down to some fractional power $n'$, but the fraction depends only on $\eps^c$ and hence on $\delta$ and $k$; thus we may continue to assume it is ``large enough''. \noteryan{guess I'm getting pretty sketchy here} This lets us ensure that $\ens{k}(A), \ens{k-1}(A) \geq \delta - O(\eps^c) \geq \delta/2$.
We now apply Theorem~\ref{thm:correlate} with $\delta_k = \delta_{k-1} = \delta/2$. This requires $n'$ to be larger than $\Pdhj{k-1}{\delta/2}$, which will be a very large but still ``constant'' quantity, depending only on $\delta$ and $k$. Having applied the theorem, we either get a combinatorial line, or a density increment of $\delta_k \cdot \pdhj{k-1}{\delta/2} = \Omega(\eps)$ on some set $B$. Note that $\Omega(\eps) \gg O(\eps^c)$, so we are able to obtain the desired relative density increment. The set $B$ has $\ens{k}(B) \geq \delta/2$ and is a union of $ak$-insensitive sets, as in the theorem statement. We can preserve this relative density increment under the uniform distribution by passing to another subspace according to Lemma~\ref{lem:back-to-uniform} with $\eta = \eps^C$. The dimension $n'$ continues to decrease to some $r$, but this is still $\omega_n(1)$ as we have only involved ``constant'' factors depending on $\delta$ and $k$. As in the DHJ($3$) proof, $B$ still has the special structure as a union of $ak$-insensitive sets. We again apply Lemma~\ref{lem:switch} to write it as a disjoint union of $k$ sets, each of which is an intersection of up to $k$ many $ak$-insensitive sets.
We now apply Corollary~\ref{cor:multi-partition} to each of these sets, taking $\eta = \eps^C$ again and $\ell = k$ each time for simplicity. It suffices now to analyze what parameter $d$ we may choose; the remainder of the proof is essentially the same as in the DHJ($3$) case. At this point, the original $n$ has shrunk to some $n'$ which is some fractional power of $n$ depending on $\delta$ and $k$ only, and this $n'$ is required to be large enough compared to $\pdhj{k-1}{\delta/2}$, a very large function of $\delta$ and $k$ only. The function $m(d)$ arising in the corollary depends on $\eta$ and hence on $\delta$ and $k$, and also on $d$; the dependence of $f(d) = d^{3m(d)}$ on $d$ will be quite bad due to the inductive argument in Proposition~\ref{prop:mdhj-from-dhj}. Still, when we iterate it $\ell = k$ times, the function $f^{(k)}(d)$ is just another (huge) function of $k$, $\delta$, and $d$. We will then have some demand on how large $n'$ needs to be as a function of $d$. Crucially, we may take $d$ to be some function $\omega_n(1)$ which satisfies this demand.
Finally, we need to take our uniform-distribution density increment of $\Omega(\eps)$ on a $d$-dimensional subspace, and produce an equal-slices density increment of $\Omega(\eps)$. This will require reducing $d$ further to some $d'$, a function of $d$ and $\eps$ (hence of $\delta$ and $k$), but this $d'$ will still be $\omega_n(1)$. \noteryan{this is all poorly explained and probably wrongly explained in several places :) luckily it's right overall :)} \end{proof}
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...
I am a bit confused about classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; an additional term varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec R$ is the constant shift of the coordinate origin, while $\vec p$ is, roughly speaking, rotating).
Would anyone be kind enough to shed some light on this for me?
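A tiny numerical illustration may help (a sketch only; the circular orbit, unit mass and radius, angular frequency, and the shift $\vec R$ are all values I have assumed for the example). It confirms the calculation in the question: $L$ about the force center is constant, but the shifted $\vec{L'} = \vec L + \vec R \times \vec p$ picks up the time-varying $\vec R \times \vec p$ term, because the central force exerts a nonzero torque about the shifted origin.

```python
import numpy as np

# Unit-mass particle on a unit circular orbit about the origin (assumed example).
w = 2.0                                                # angular frequency
t = np.linspace(0.0, 1.0, 50)
r = np.stack([np.cos(w*t), np.sin(w*t)], axis=1)       # position r(t)
p = np.stack([-w*np.sin(w*t), w*np.cos(w*t)], axis=1)  # momentum (m = 1)

# z-component of L about the force center: constant
Lz = r[:, 0]*p[:, 1] - r[:, 1]*p[:, 0]

# Same quantity about an origin shifted by a constant R: picks up R x p(t)
R = np.array([1.0, 0.0])
Lz_shifted = (r[:, 0] + R[0])*p[:, 1] - (r[:, 1] + R[1])*p[:, 0]

print(np.allclose(Lz, Lz[0]))                  # True: conserved about the center
print(np.allclose(Lz_shifted, Lz_shifted[0]))  # False: R x p varies in time
```

So the extra term is real: angular momentum is only conserved about points where the net torque vanishes, and shifting the origin of a central-force problem generally breaks that.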
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to ever make a time machine? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10 47 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
I would like to compute the Christoffel symbols of the second kind using the geodesic equation. To practice, I have tried the Schwarzschild Ansatz $$ g_{00} = \mathrm e^\nu,\quad g_{11} = - \mathrm e^\lambda,\quad g_{22} = -r^2,\quad g_{33} = -r^2 \sin(\theta)^2, $$ where $\nu$ and $\lambda$ are functions of $r$.
The Lagrangian is $$ L = \mathrm e^\nu \dot t^2 - \mathrm e^\lambda \dot r^2 - r^2 \dot \theta^2 - r^2 \sin(\theta)^2 \dot \phi^2.$$
From this, I have computed for Euler Lagrange equations: $$ 0 = 2 \mathrm e^\nu \ddot t $$ $$ \mathrm e^\nu \nu' \dot t^2 - \mathrm e^\lambda \lambda' \dot r^2 - 2 r \dot \theta^2 - 2 r \sin(\theta)^2 \dot\phi^2 = -2 \mathrm e^\lambda \ddot r $$ $$ - 2 r^2 \sin(\theta) \cos(\theta) \dot \phi^2 = - 2r^2 \ddot \theta $$ $$ 0 = - 2 r^2 \sin(\theta)^2 \ddot \phi $$
With the second I got: $$ \Gamma^1_{00} = \frac{\nu'}2 \mathrm e^{\nu-\lambda}, \quad \Gamma^1_{11} = - \frac{\lambda'}2, \quad \Gamma^1_{22} = - r \mathrm e^{-\lambda}, \quad \Gamma^1_{33} = - r\sin(\theta)^2 \mathrm e^{-\lambda}$$
And the third one: $$ \Gamma^2_{33} = - \sin(\theta)\cos(\theta) $$
From the first and fourth equation I would deduce that any $\Gamma^0_{\mu\nu} = 0$ as well as $\Gamma^3_{\mu\nu} = 0$. The solution says that this is not the case. How can I obtain the other nonzero Christoffel symbols?
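As a cross-check (a sympy sketch of my own, not part of the original question), one can compute $\Gamma^a_{\mu\nu}$ directly from the metric of the ansatz. It shows, for instance, that $\Gamma^0_{01} = \nu'/2$, $\Gamma^3_{13} = 1/r$, and $\Gamma^3_{23} = \cot\theta$ are nonzero; these are the terms a naive reading of the first and fourth Euler-Lagrange equations misses, since the full $t$ equation is $\ddot t + \nu'\dot r\dot t = 0$ once $\frac{d}{d\tau}(2\mathrm e^\nu \dot t)$ is expanded.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
nu = sp.Function('nu')(r)
lam = sp.Function('lambda')(r)
x = [t, r, th, ph]

# Metric of the ansatz: diag(e^nu, -e^lambda, -r^2, -r^2 sin^2 theta)
g = sp.diag(sp.exp(nu), -sp.exp(lam), -r**2, -r**2 * sp.sin(th)**2)
ginv = g.inv()

def Gamma(a, m, n):
    """Christoffel symbol of the second kind, Gamma^a_{mn}."""
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[a, b] * (sp.diff(g[b, m], x[n]) + sp.diff(g[b, n], x[m])
                      - sp.diff(g[m, n], x[b]))
        for b in range(4)))

print(Gamma(0, 0, 1))   # nu'/2 -- nonzero, unlike the naive reading above
print(Gamma(3, 1, 3))   # 1/r
print(Gamma(3, 2, 3))   # cot(theta)
```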
I am trying to do a Molecular Dynamics simulation of a complex fluid confined between solid surfaces. I would like to find the flow rate as a function of fluid film thickness, $h$, for a plane Poiseuille flow. (The flow rate is not expected to show an $h^3$ dependence, in general, if $h$ is small, on the order of a few nanometers.)
The question I have is how to get an estimate for the simulation time required for the system to attain steady state, so that the flow rate measurements can be made after that time. If we assumed a continuum, we would have this equation:
$$\frac{\partial M}{\partial t} = \nu \frac{\partial^2 M}{\partial y^2} - \frac{1}{\rho} \frac{\partial P}{\partial x}$$
to describe this unsteady Poiseuille flow ($M$ being the momentum, $\nu$ the kinematic viscosity, and $P$ the pressure), with the applied pressure gradient being very high to start with compared to the viscous diffusion term (since it is applied by adding large elementary body forces to the particles). The pressure gradient term can be assumed to be constant along the direction of flow $x$ and in time $t$.
Now, if we didn't have the pressure term, we would have a momentum diffusion equation from where we could estimate the time required for the unsteadiness to decay. But, from the above equation (or from any other starting point), it is not obvious to me how one can get an estimate for the simulation time required for steady state to be reached.
Thank you very much for your patience and help!
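Not a full answer, but the usual scaling argument can be sketched numerically. With the pressure gradient constant, the transient part of the momentum equation above obeys a pure diffusion equation, whose modes decay on the time scale $t \sim h^2/\nu$ (roughly $h^2/(\pi^2\nu)$ for the slowest Fourier mode with no-slip walls). The viscosity and film thicknesses below are illustrative values of my own, not taken from the question, and nanoconfined viscosity can differ from the bulk value, so treat these strictly as order-of-magnitude estimates.

```python
# Diffusive relaxation-time estimate t ~ h^2 / nu (illustrative values only).
nu = 1e-6   # kinematic viscosity of bulk water at room temperature, m^2/s

for h in (2e-9, 10e-9, 50e-9):
    t_relax = h**2 / nu      # time for momentum to diffuse across the film
    print(f"h = {h*1e9:4.0f} nm  ->  t ~ {t_relax:.1e} s")
```

In practice one would run for several multiples of this time and then confirm stationarity of the measured velocity profile before averaging the flow rate.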
A friend and I have been discussing turning a $O(n^2)$ graph problem's algorithm into $O(n\log n)$, or at least less than $O(n^2)$. And no - this is not a homework question. We've narrowed it down to the following subproblem:
Let $A,B,P,$ and $Q$ be a group of 4 nodes, with coordinates $(x_A,y_A),(x_B,y_B)$, etc., such that:
$x_A$ and $x_P$ are less than both $x_B$ and $x_Q$
$y_A$ and $y_B$ are greater than $y_P$ and $y_Q$.
Let there be an arbitrary cluster of nodes contained within the quadrilateral formed by $A,B,P,$ and $Q$. Find a pair of paths $A\rightarrow B$ and $P\rightarrow Q$ such that:
Every node in the cluster is hit by exactly one of the two paths
The two paths have the minimum possible total (Euclidean) length
The two paths do not intersect
The paths always increase in $x$. That is, if we enumerate path $A\rightarrow B$ as $\{A,(x_1,y_1),(x_2,y_2),\ldots,(x_n,y_n),B\}$, then $x_i<x_j$ for $0<i<j\leq n$. Also, $x_A<x_1$ and $x_n<x_B$. This property must hold for the $P\rightarrow Q$ path as well.
It is also worth mentioning that no two nodes in the entire problem will have the same $x$ coordinate.
I'm positive some geometric trick will play a huge role in finding the solution. Keep in mind this is just a subproblem and a solution to this in $O(n^2)$ will still be helpful - though I'm hoping for $O(n\log n)$.
Here's an example problem for clarity:
Any solutions or even helpful ideas are always appreciated. Diagrams are also very much appreciated.
Finally, for added clarity, here is what a sample solution might look like:
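For what it's worth, here is a hedged sketch of a bitonic-tour-style $O(n^2)$ DP for a relaxed version of the subproblem: it covers all points with two $x$-monotone paths starting at the two leftmost points and ending at the two rightmost, minimizing total length, but it does not enforce the non-crossing constraint or a particular pairing of starts with ends (in many instances the optimum is non-crossing anyway). All function names are my own, and the block verifies itself against brute force on a small random instance.

```python
import math
import random

def d(p, q):
    return math.dist(p, q)

def two_monotone_paths(pts):
    """Min total length of two x-monotone paths that start at the two
    leftmost points, end at the two rightmost, and together cover every
    point exactly once.  O(n^2) DP in the style of the bitonic tour."""
    pts = sorted(pts)                 # by x; x-coordinates assumed distinct
    n = len(pts)
    INF = float('inf')
    # Invariant: after processing points 0..i in x order, one path ends
    # at point i; dp[j] = best cost with the other path ending at j < i.
    dp = [INF] * n
    dp[0] = 0.0                       # points 0 and 1 are the two starts
    for i in range(2, n):
        ndp = [INF] * n
        best = INF
        for j in range(i - 1):
            if dp[j] == INF:
                continue
            # point i extends the path currently ending at i-1 ...
            ndp[j] = min(ndp[j], dp[j] + d(pts[i-1], pts[i]))
            # ... or the path currently ending at j
            best = min(best, dp[j] + d(pts[j], pts[i]))
        ndp[i-1] = best
        dp = ndp
    return dp[n-2]                    # the other path must end at point n-2

def brute_force(pts):
    """Exponential check: try every assignment of interior points."""
    pts = sorted(pts)
    n = len(pts)
    inner = pts[2:n-2]
    best = float('inf')
    for mask in range(1 << len(inner)):
        a = [p for k, p in enumerate(inner) if mask >> k & 1]
        b = [p for k, p in enumerate(inner) if not mask >> k & 1]
        for e0, e1 in ((pts[n-1], pts[n-2]), (pts[n-2], pts[n-1])):
            c0 = [pts[0]] + a + [e0]
            c1 = [pts[1]] + b + [e1]
            best = min(best, sum(d(u, v) for u, v in zip(c0, c0[1:]))
                           + sum(d(u, v) for u, v in zip(c1, c1[1:])))
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
assert abs(two_monotone_paths(pts) - brute_force(pts)) < 1e-9
print("DP agrees with brute force")
```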
The sections of the holomorphic line bundle $\mathcal{O}(n)$ are acted on by the covariant derivative $$ d+nA, $$ where $A$ is the connection on the $U(1)$ bundle to which $\mathcal{O}(n)$ is associated.
My question is, are there holomorphic line bundles associated to $U(1)^M$ bundles, where $M$ is some integer? The sections of such a holomorphic line bundle would be acted on by the covariant derivative
$$ d+\sum_{b=1}^M n_b A_b, $$ where $n_b$ denotes the representation w.r.t. each $U(1)$ factor. It would be natural to denote such a line bundle as $\mathcal{O}(n_1,n_2,\ldots,n_M)$. Do such holomorphic line bundles exist in the mathematical literature? References would be highly appreciated.
Edit: To be more precise, I am interested in the holomorphic line bundles described in the previous paragraph defined over toric manifolds $X=(\mathbb{C}^N-\mathcal{P})/({\mathbb{C}^{\times}})^k$, especially the case where $k>1$.
(Here $\mathcal{P}$ denotes a subset of $\mathbb{C}^N$ fixed under a continuous subgroup of $({\mathbb{C}^{\times}})^k$.)
In particular, I would like to understand the generalization of the line bundles over $\mathbb{P}^1 \times \mathbb{P}^1$ which are denoted as $\mathcal{O}(m,n)$, mentioned in Line bundles and vector bundles on $\mathbb P^1 \times \mathbb P^1$. There, it is explained in the comments that $\mathcal{O}(m,n)= p_1^*\mathcal{O}(m)\otimes p_2^*\mathcal{O}(n)$, where $p_1$ and $p_2$ denote the projections onto the two factors.
I'm trying to get a transfer function of $F=ma$ in the Laplace domain. This should be simple, yet I'm confused. The transfer function is displacement over force. So, I have two approaches.
First approach: integrate w.r.t. time twice, then take the Laplace transform of that. $$\ddot{x}=F \\ \dot{x}=Ft+c_1 \\ x=\dfrac{1}{2}Ft^2+c_1t+c_2$$ which makes for a transfer function with $c_1$ and $c_2$ zero: $$h(t)=\frac{x}{F}=\frac{1}{2}t^2$$ The right-hand side has a Laplace transform of $$H(s)=\frac{1}{s^3}$$
Second approach: take the Laplace transform of the double integration, which is simply $$H(s)=\frac{1}{s^2}$$
I have heard that the second one is correct, but why is the first one not correct?
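One way to see the discrepancy (a sympy sketch; unit mass and a unit-step force are assumed): the first approach computes the transform of the output for one particular input, so the extra factor of $1/s$ is the transform of that input, not part of the transfer function. Dividing the output transform by the input transform recovers $1/s^2$.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Response of x'' = F to a unit-step force (F = 1, zero initial conditions):
X = sp.laplace_transform(t**2 / 2, t, s, noconds=True)       # X(s) = 1/s^3
# Transform of the input itself, the unit step:
U = sp.laplace_transform(sp.Integer(1), t, s, noconds=True)  # U(s) = 1/s

H = sp.simplify(X / U)   # transfer function = output / input = 1/s^2
print(X, U, H)
```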
Velocity factor is a property of electromagnetic wave propagation, not wire. A transmission line (like coax) is a conduit for an electromagnetic wave in itself, so the velocity factor can be defined. In fact, the velocity factor can be derived from the lumped model of a transmission line:
(Schematic: the standard lumped transmission-line model, with series resistance R and inductance L, and shunt conductance G and capacitance C, per unit length.)
Following the usual simplifying assumptions of a lossless transmission line, R and G become insignificant. If $L$ and $C$ are inductance (henry) and capacitance (farad) per meter, then the velocity of propagation in a transmission line is:
$$ v = \frac{1}{\sqrt{LC}} \tag{1} $$
The velocity factor is just this relative to the velocity of propagation in a vacuum: $v/c$.
A very similar equation is the characteristic impedance of the transmission line:
$$ Z_0 = \sqrt{\frac{L}{C}} \tag{2} $$
Thus, if we know any two of:
characteristic impedance, capacitance per unit length, or inductance per unit length
then we can calculate the velocity factor. Let's try it for Belden 9223, which specifies in the datasheet:
$$ Z_0 = 50\:\Omega \\C = 37\:\mathrm{pF}/\mathrm{ft} = 1.21 \cdot 10^{-10}\:\mathrm{F/m} $$
So by equation (2):
$$ \begin{align}50\:\Omega &= \sqrt{\frac{L}{1.21 \cdot 10^{-10}}} \\50^2 &= \frac{L}{1.21 \cdot 10^{-10}} \\L &= 3.03 \cdot 10^{-7}\:\mathrm{H}/\mathrm{m}\end{align} $$
And then by equation (1):
$$ \begin{align}v &= \frac{1}{\sqrt{(3.03 \cdot 10^{-7})(1.21 \cdot 10^{-10})}} \\v &= 165289256 \:\mathrm{m/s}\end{align} $$
Thus the velocity factor is:
$$ v/c = 165289256 / 299792458 = 0.55 $$
The datasheet says 0.56. I attribute the discrepancy to rounding error.
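The arithmetic above is easy to script. A small sketch (Python; the Belden 9223 figures are the ones quoted in the text, everything else is standard constants) reproduces the ~0.55 result by combining equations (1) and (2):

```python
import math

c0 = 299_792_458.0                 # speed of light in vacuum, m/s

def velocity_factor(z0, cap_per_m):
    """Velocity factor from characteristic impedance (ohms) and
    capacitance per metre (F/m): L = Z0^2 * C from eq. (2), then
    v = 1/sqrt(L*C) from eq. (1)."""
    L = z0**2 * cap_per_m                 # H/m
    v = 1.0 / math.sqrt(L * cap_per_m)    # m/s
    return v / c0

C = 37e-12 / 0.3048                # Belden 9223: 37 pF/ft converted to F/m
print(round(velocity_factor(50.0, C), 2))   # ~0.55
```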
So what about a single wire? What values do we use for L and C?
That depends on the geometry of the wire. In the case of coax, the wave is propagating within the dielectric between the center conductor and the shield. This dielectric has a known geometry and composition, so the manufacturer can specify a velocity factor.
In the case of a wire, the wave will be propagating between the wire, and something else. Maybe the ground. Maybe another wire. Maybe the same wire some distance away, as in the case of a dipole. Are you stretching the wire in a straight line, or winding it into a coil? The capacitance will depend on the permittivity of the space containing the electric field. Is it air? A tree? Because there are so many variables, a wire datasheet cannot possibly specify a velocity factor. And while we might measure one, we must be careful to specify what propagation mode we are talking about, and the conditions under which it was measured.
The electromagnetic field in a medium gets attenuated exponentially: $$\mathbf E = \mathbf E_0e^{-x/\delta}$$
Where $x$ is the distance the signal has traveled. Since the power of the signal is proportional to the square of the field, the power will be attenuated as $P = P_0 e^{-2x/\delta}$. The quantity $\delta$ is called the skin depth: it is the distance over which the field is attenuated by a factor of $1/e$. So your question becomes: what is the skin depth of water at microwave frequencies?
That depends... on the conductivity and dielectric constant of the water. The saltier the water, the greater the conductivity, and thus the greater the attenuation.
Let $\epsilon_0\approx 8.854\cdot 10^{-12}\:F/m$ be the electric permittivity of vacuum, $\mu_0 = 4\pi\cdot 10^{-7}\:H/m$ the magnetic permeability of vacuum, $\epsilon_r$ the dielectric constant of water (approximately $80$), and $g$ the conductivity of water. The skin depth can be calculated from these parameters:
$$\frac{1}{\delta} = 2\pi f\sqrt{\frac{1}{2}\epsilon_r\epsilon_0\mu_0\left(-1 + \sqrt{1 + \left(\frac{g}{2\pi f\epsilon_0\epsilon_r}\right)^2}\right)}$$
Notice that, if $g$ is greater than $2\pi f\epsilon_0\epsilon_r$ by at least two orders of magnitude, i.e. $g\gg 2\pi f\epsilon_0\epsilon_r$, then the skin depth can be approximated:$$\delta = \sqrt{\frac{1}{\pi fg\mu_0}}$$
If $g\ll 2\pi f\epsilon_0\epsilon_r$, one easily obtains the approximation:$$\delta = \frac{2}{g}\sqrt{\frac{\epsilon_0\epsilon_r}{\mu_0}}$$
In both the first (good-conductor) approximation and the second (bad-conductor) approximation, increasing the conductivity decreases the skin depth and hence increases the attenuation. Thus, in all cases, the greater the conductivity, the greater the attenuation. According to this data, for water at a signal frequency of $f = 2.485\:GHz$ we have:
Pure water. $g\approx 5.5\cdot 10^{-6}\:S/m$. Then $g\ll 2\pi f\epsilon_0\epsilon_r$, so $\delta\approx 8.01\:km$.
Drinkable water. $g\approx 0.001\:S/m$. Then $g\ll 2\pi f\epsilon_0\epsilon_r$, so $\delta\approx 44.54\:m$.
Sea water. $g\approx 5\:S/m$. Then $g\gg 2\pi f\epsilon_0\epsilon_r$, so $\delta\approx 8.91\:mm$.
As you can see, salty water is evil. After the signal has penetrated $8.91\:mm$ into sea water, its power has decreased by a factor of $1/e^2\approx 0.13534$, i.e. $86.466\%$ of its power was lost. In WW2 this was a problem to overcome in submarine communication. =). In pure water, the wave needs to travel $8.01\:km$ to lose its power by a factor of $e^{-2}$, a long way, so it won't be attenuated much. In your case, however, I guess it's drinkable water. Assuming $0.05\:S/m$ for drinkable water, we get a skin depth of $80\:cm$. So the skin depth varies a lot for small variations in conductivity; you will need to measure the conductivity of your water.
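The exact expression above is easy to evaluate directly (a Python sketch; the helper name is mine, the conductivities are the illustrative values used in this answer, and small differences from the rounded figures above come down to the constants used). Rerunning it with a measured conductivity is then trivial:

```python
import math

EPS0 = 8.854e-12            # electric permittivity of vacuum, F/m
MU0 = 4 * math.pi * 1e-7    # magnetic permeability of vacuum, H/m

def skin_depth(f, g, eps_r=80.0):
    """Skin depth in metres from the exact formula above:
    f in Hz, g (conductivity) in S/m, eps_r dimensionless."""
    w = 2 * math.pi * f
    loss = g / (w * EPS0 * eps_r)          # the ratio g / (2 pi f e0 er)
    root = math.sqrt(0.5 * eps_r * EPS0 * MU0 * (math.sqrt(1 + loss**2) - 1))
    return 1.0 / (w * root)

f = 2.485e9
for label, g in (("pure", 5.5e-6), ("drinkable", 1e-3), ("sea", 5.0)):
    print(f"{label:9s} water: delta = {skin_depth(f, g):.3g} m")
```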
There is also the problem of reflectance: a signal might be reflected back by the water. The skin depth relates to water's imaginary optical constant $k$ by:$$\delta = \frac{c}{2\pi fk}$$
The water reflectance, that is, the ratio of the power that is reflected, at a normal incidence angle, can be calculated:$$R = \frac{(n-1)^2 + k^2}{(n+1)^2 + k^2}$$
Where, $n$ is the refractive index of water, that is, $n\approx 8.5$ for this frequency. The relation $n = \sqrt{\epsilon_r}$ might help. Since $k$ is calculable and related to the $\delta$, one can show that, the greater the attenuation (low skin depth), the greater the reflectance. Thus, you have two problems to overcome: The reflectance of the interface air-water, and about the non-reflected waves, ie, the ones inside the water, there is the skin depth (attenuation).
Be aware that this reflectance is only for normal incidence. For incidences with a generic angle $\theta$ (ray cast), the reflectance will change (often to increase its value, ie, reflect more).
For drinkable water (probably your case, since there is a fish involved...), we have, in the worst scenario, a skin depth of $\delta\approx 80\:cm$ and a reflectance of $R = 0.61940$, meaning roughly $62\%$ of the incoming radiant power at normal incidence will be reflected back and won't penetrate the water. Only roughly $38\%$ will penetrate the water, where it encounters an exponential attenuation of $e^{-2x/0.8} = e^{-x/0.4}$.
Now, your final formula! Assuming the signal leaves the antenna with power $P_0$, the power $P$ that the receiver will receive is:$$P = \frac{P_0}{4\pi r^2}(1-R)e^{-x/\delta},\qquad P = \frac{P_0}{4\pi r^2}\,0.38060\, e^{-x/0.4}$$
Where $r$ is the distance between the antenna and the receiver (assuming spherical propagation), and $x$ is the distance that the signal will have to travel thru water.
As a side note, there is no such thing as a "rate of signal absorption per distance through an intervening medium of water" as you suggested, because the attenuation is exponential, as opposed to linear.
Well, besides answering your question, I hope I helped you learn something new about physics. =).
Learning Objectives
To calculate a mass-energy balance and a nuclear binding energy. To understand the differences between nuclear fission and fusion.
Nuclear reactions, like chemical reactions, are accompanied by changes in energy. The energy changes in nuclear reactions, however, are enormous compared with those of even the most energetic chemical reactions. In fact, the energy changes in a typical nuclear reaction are so large that they result in a measurable change of mass. In this section, we describe the relationship between mass and energy in nuclear reactions and show how the seemingly small changes in mass that accompany nuclear reactions result in the release of enormous amounts of energy.
Mass–Energy Balance
The relationship between mass (m) and energy (E) is expressed in the following equation:
\[E = mc^2 \label{Eq1}\]
where
\(c\) is the speed of light (\(2.998 \times 10^8\; m/s\)), and \(E\) and \(m\) are expressed in units of joules and kilograms, respectively.
Albert Einstein first derived this relationship in 1905 as part of his special theory of relativity: the mass of a particle is directly proportional to its energy. Thus according to Equation \(\ref{Eq1}\), every mass has an associated energy, and similarly, any reaction that involves a change in energy must be accompanied by a change in mass. This implies that all exothermic reactions should be accompanied by a decrease in mass, and all endothermic reactions should be accompanied by an increase in mass. Given the law of conservation of mass, how can this be true? The solution to this apparent contradiction is that chemical reactions are indeed accompanied by changes in mass, but these changes are simply too small to be detected. As you may recall, all particles exhibit wavelike behavior, but the wavelength is inversely proportional to the mass of the particle (actually, to its momentum, the product of its mass and velocity). Consequently, wavelike behavior is detectable only for particles with very small masses, such as electrons. For example, the chemical equation for the combustion of graphite to produce carbon dioxide is as follows:
\[\textrm{C(graphite)} + \frac{1}{2}\textrm O_2(\textrm g)\rightarrow \mathrm{CO_2}(\textrm g)\hspace{5mm}\Delta H^\circ=-393.5\textrm{ kJ/mol} \label{Eq2}\]
Combustion reactions are typically carried out at constant pressure, and under these conditions, the heat released or absorbed is equal to ΔH. When a reaction is carried out at constant volume, the heat released or absorbed is equal to ΔE. For most chemical reactions, however, ΔE ≈ ΔH. If we rewrite Einstein's equation as

\[\Delta E=(\Delta m)c^2 \label{Eq3}\]

we can rearrange the equation to obtain the following relationship between the change in mass and the change in energy:
\[\Delta m=\dfrac{\Delta E}{c^2} \label{Eq4}\]
Because \(1\;\textrm{J} = 1\;(\textrm{kg}\cdot\textrm{m}^2)/\textrm{s}^2\), the change in mass is as follows:

\[\Delta m=\dfrac{\Delta E}{c^2}=\dfrac{-393.5\times 10^3\textrm{ J/mol}}{(2.998\times 10^8\textrm{ m/s})^2}=-4.38\times 10^{-12}\textrm{ kg/mol}\]
This is a mass change of about \(3.6 \times 10^{-10}\) g/g of carbon that is burned, or about 100-millionths of the mass of an electron per atom of carbon. In practice, this mass change is much too small to be measured experimentally and is negligible.
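As a quick numerical check (a Python sketch; the 12.011 g/mol molar mass of carbon is assumed), the mass equivalent of the combustion enthalpy works out to the quoted \(3.6\times10^{-10}\) g per gram of carbon:

```python
c = 2.998e8          # speed of light, m/s
dE = 393.5e3         # J/mol released when graphite burns (Equation 2)

dm_kg_per_mol = dE / c**2                     # Delta m = Delta E / c^2
dm_g_per_gC = dm_kg_per_mol * 1e3 / 12.011    # per gram of carbon burned
print(f"{dm_g_per_gC:.2e} g lost per g of carbon")   # ~3.6e-10
```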
In contrast, for a typical nuclear reaction, such as the radioactive decay of \(^{14}\textrm{C}\) to \(^{14}\textrm{N}\) and an electron (a β particle), there is a much larger change in mass:

\[^{14}_{6}\textrm{C}\rightarrow \,^{14}_{7}\textrm{N}+\,^{0}_{-1}\beta\]
We can use the experimentally measured masses of subatomic particles and common isotopes given in Table 20.1 to calculate the change in mass directly. The reaction involves the conversion of a neutral \(^{14}\textrm{C}\) atom to a positively charged \(^{14}\textrm{N}\) ion (with six, not seven, electrons) and a negatively charged β particle (an electron), so the mass of the products is identical to the mass of a neutral \(^{14}\textrm{N}\) atom. The total change in mass during the reaction is therefore the difference between the mass of a neutral \(^{14}\textrm{N}\) atom (14.003074 amu) and the mass of a \(^{14}\textrm{C}\) atom (14.003242 amu):
\[\begin{align} \Delta m &= {\textrm{mass}_{\textrm{products}}- \textrm{mass}_{\textrm{reactants}}}
\\&=14.003074\textrm{ amu} - 14.003242\textrm{ amu} = - 0.000168\textrm{ amu}\end{align} \label{Eq7}\]
The difference in mass, which has been released as energy, corresponds to almost one-third of the mass of an electron. The change in mass for the decay of 1 mol of \(^{14}\textrm{C}\) is \(-0.000168\) g \(= -1.68 \times 10^{-4}\) g \(= -1.68 \times 10^{-7}\) kg. Although a mass change of this magnitude may seem small, it is about 1000 times larger than the mass change for the combustion of graphite. The energy change is as follows:

\[\Delta E=(\Delta m)c^2=(-1.68\times 10^{-7}\textrm{ kg})(2.998\times 10^8\textrm{ m/s})^2=-1.51\times 10^{10}\textrm{ J/mol}\]
The energy released in this nuclear reaction is more than 100,000 times greater than that of a typical chemical reaction, even though the decay of \(^{14}\textrm{C}\) is a relatively low-energy nuclear reaction.
Because the energy changes in nuclear reactions are so large, they are often expressed in kiloelectronvolts (1 keV = \(10^3\) eV), megaelectronvolts (1 MeV = \(10^6\) eV), and even gigaelectronvolts (1 GeV = \(10^9\) eV) per atom or particle. The change in energy that accompanies a nuclear reaction can be calculated from the change in mass using the relationship 1 amu = 931 MeV. The energy released by the decay of one atom of \(^{14}\textrm{C}\) is thus

\[\Delta E=(-0.000168\textrm{ amu})(931\textrm{ MeV/amu})=-0.156\textrm{ MeV}\]
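The same check for the \(^{14}\textrm{C}\) decay (a Python sketch using the numbers quoted in the text):

```python
c = 2.998e8          # speed of light, m/s
dm_mol = 1.68e-7     # kg lost per mole of 14C decayed

print(f"{dm_mol * c**2:.2e} J/mol")        # ~1.51e10 J/mol released
print(f"{0.000168 * 931:.3f} MeV/atom")    # ~0.156 MeV per atom
```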
Example \(\PageIndex{1}\)
Calculate the changes in mass (in atomic mass units) and energy (in joules per mole and electronvolts per atom) that accompany the radioactive decay of \(^{238}\textrm{U}\) to \(^{234}\textrm{Th}\) and an α particle. The α particle absorbs two electrons from the surrounding matter to form a helium atom.

Given: nuclear decay reaction

Asked for: changes in mass and energy

Strategy:

A Use the mass values in Table 20.1 to calculate the change in mass for the decay reaction in atomic mass units.

B Use Equation \(\ref{Eq4}\) to calculate the change in energy in joules per mole.

C Use the relationship between atomic mass units and megaelectronvolts to calculate the change in energy in electronvolts per atom.

Solution:

A Using particle and isotope masses from Table 20.1, we can calculate the change in mass as follows:
\(\begin{align}\Delta m &= \textrm{mass}_{\textrm{products}}-\textrm{mass}_{\textrm{reactants}} \\&=(234.043601\textrm{ amu}+4.002603\textrm{ amu}) - 238.050788\textrm{ amu} = - 0.004584\textrm{ amu}\end{align}\)
B Thus the change in mass for 1 mol of \(^{238}\textrm{U}\) is \(-0.004584\) g or \(-4.584 \times 10^{-6}\) kg. The change in energy in joules per mole is as follows:
\[\Delta E = (\Delta m)c^2 = (-4.584\times 10^{-6}\textrm{ kg})(2.998\times 10^8\textrm{ m/s})^2 = -4.120\times 10^{11}\textrm{ J/mol}\]

C The change in energy in electronvolts per atom is as follows:

\[\Delta E = (-0.004584\textrm{ amu})(931\textrm{ MeV/amu}) = -4.27\textrm{ MeV/atom}\]
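Numerically (a Python sketch, with the isotope masses from Table 20.1 as quoted above):

```python
c = 2.998e8                                        # speed of light, m/s
dm_amu = (234.043601 + 4.002603) - 238.050788      # products - reactant, amu
dE_J_per_mol = (dm_amu * 1e-3) * c**2              # amu ~ g/mol -> kg/mol -> J/mol

print(f"{dm_amu:.6f} amu, {dE_J_per_mol:.3e} J/mol, "
      f"{dm_amu * 931:.2f} MeV/atom")
```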
Exercise \(\PageIndex{1}\)
Calculate the changes in mass (in atomic mass units) and energy (in kilojoules per mole and kiloelectronvolts per atom) that accompany the radioactive decay of tritium (\(^{3}\textrm{H}\)) to \(^{3}\textrm{He}\) and a β particle.

Answer
Δm = \(-2.0 \times 10^{-5}\) amu; ΔE = \(-1.9 \times 10^{6}\) kJ/mol = −19 keV/atom

Nuclear Binding Energies
We have seen that energy changes in both chemical and nuclear reactions are accompanied by changes in mass. Einstein's equation, which allows us to interconvert mass and energy, has another interesting consequence: the mass of an atom is always less than the sum of the masses of its component particles. The only exception to this rule is hydrogen-1 (\(^{1}\textrm{H}\)), whose measured mass of 1.007825 amu is identical to the sum of the masses of a proton and an electron. In contrast, the experimentally measured mass of an atom of deuterium (\(^{2}\textrm{H}\)) is 2.014102 amu, although its calculated mass is 2.016490 amu:
\[\begin{align}m_{^2\textrm H}&=m_{\textrm{neutron}}+m_{\textrm{proton}}+m_{\textrm{electron}}
\\&=1.008665\textrm{ amu}+1.007276\textrm{ amu}+0.000549\textrm{ amu}=2.016490\textrm{ amu} \end{align}\label{Eq10}\]
The difference between the sum of the masses of the components and the measured atomic mass is called the mass defect of the nucleus. Just as a molecule is more stable than its isolated atoms, a nucleus is more stable (lower in energy) than its isolated components. Consequently, when isolated nucleons assemble into a stable nucleus, energy is released. According to Equation \(\ref{Eq4}\), this release of energy must be accompanied by a decrease in the mass of the nucleus.
The amount of energy released when a nucleus forms from its component nucleons is the nuclear binding energy (Figure \(\PageIndex{1}\)). In the case of deuterium, the mass defect is 0.002388 amu, which corresponds to a nuclear binding energy of 2.22 MeV for the deuterium nucleus. Because the magnitude of the mass defect is proportional to the nuclear binding energy, both values indicate the stability of the nucleus.
Just as a molecule is more stable (lower in energy) than its isolated atoms, a nucleus is more stable than its isolated components.
Not all nuclei are equally stable. Chemists describe the relative stability of different nuclei by comparing the binding energy per nucleon, which is obtained by dividing the nuclear binding energy by the mass number (A) of the nucleus. As shown in Figure \(\PageIndex{2}\), the binding energy per nucleon increases rapidly with increasing atomic number until about Z = 26, where it levels off to about 8–9 MeV per nucleon and then decreases slowly. The initial increase in binding energy is not a smooth curve but exhibits sharp peaks corresponding to the light nuclei that have equal numbers of protons and neutrons (e.g., \(^{4}\textrm{He}\), \(^{12}\textrm{C}\), and \(^{16}\textrm{O}\)). As mentioned earlier, these are particularly stable combinations.
Because the maximum binding energy per nucleon is reached at \(^{56}\textrm{Fe}\), all other nuclei are thermodynamically unstable with regard to the formation of \(^{56}\textrm{Fe}\). Consequently, heavier nuclei (toward the right in Figure \(\PageIndex{2}\)) should spontaneously undergo reactions such as alpha decay, which result in a decrease in atomic number. Conversely, lighter elements (on the left in Figure \(\PageIndex{2}\)) should spontaneously undergo reactions that result in an increase in atomic number. This is indeed the observed pattern.
Heavier nuclei spontaneously undergo nuclear reactions that decrease their atomic number. Lighter nuclei spontaneously undergo nuclear reactions that increase their atomic number.
Example \(\PageIndex{2}\)
Calculate the total nuclear binding energy (in megaelectronvolts) and the binding energy per nucleon for
56Fe. The experimental mass of the nuclide is given in Table A4. Given: nuclide and mass Asked for: nuclear binding energy and binding energy per nucleon Strategy: A Sum the masses of the protons, electrons, and neutrons or, alternatively, use the mass of the appropriate number of 1H atoms (because the mass of a 1H atom equals the combined mass of one proton and one electron). B Calculate the mass defect by subtracting the experimental mass from the calculated mass. C Determine the nuclear binding energy by multiplying the mass defect by the conversion factor of 931 MeV per atomic mass unit. Divide this value by the number of nucleons to obtain the binding energy per nucleon. Solution:
A An iron-56 atom has 26 protons, 26 electrons, and 30 neutrons. We could add the masses of these three sets of particles; however, noting that 26 protons and 26 electrons are equivalent to 26
1H atoms, we can calculate the sum of the masses more quickly as follows:
\(\begin{align}\textrm{calculated mass} &=26(\textrm{mass }{}^1\textrm{H})+30(\textrm{mass }n)\\ &=26(1.007825)\textrm{ amu}+30(1.008665)\textrm{ amu}=56.463400\textrm{ amu}\\ \textrm{experimental mass} &=55.934938\textrm{ amu}\end{align}\)
B We subtract to find the mass defect:
\(\begin{align}\textrm{mass defect} &=\textrm{calculated mass}-\textrm{experimental mass}\\ &=56.463400\textrm{ amu}-55.934938\textrm{ amu}=0.528462\textrm{ amu}\end{align}\)
C The nuclear binding energy is thus 0.528462 amu × 931 MeV/amu = 492 MeV. The binding energy per nucleon is 492 MeV/56 nucleons = 8.79 MeV/nucleon.
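The arithmetic in this example is easy to check programmatically. A quick sketch (masses in amu taken from the text above, with the same 931 MeV/amu conversion factor):

```python
# Binding energy of iron-56, following the worked example above.
m_H = 1.007825      # mass of a 1H atom (one proton + one electron), amu
m_n = 1.008665      # mass of a neutron, amu
m_Fe56 = 55.934938  # experimental mass of 56Fe, amu

calculated = 26 * m_H + 30 * m_n        # 26 (p + e) pairs and 30 neutrons
mass_defect = calculated - m_Fe56       # 0.528462 amu
binding_energy = mass_defect * 931      # ~492 MeV
per_nucleon = binding_energy / 56       # ~8.79 MeV/nucleon

print(round(binding_energy), round(per_nucleon, 2))  # -> 492 8.79
```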
Exercise \(\PageIndex{2}\)
Calculate the total nuclear binding energy (in megaelectronvolts) and the binding energy per nucleon for
238U.

Answer

1800 MeV/238U; 7.57 MeV/nucleon

Summary
Unlike a chemical reaction, a nuclear reaction results in a significant change in mass and an associated change of energy, as described by Einstein’s equation. Nuclear reactions are accompanied by large changes in energy, which result in detectable changes in mass. The change in mass is related to the change in energy according to Einstein’s equation: ΔE = (Δm)c\(^{2}\). Large changes in energy are usually reported in kiloelectronvolts or megaelectronvolts (thousands or millions of electronvolts). With the exception of 1H, the experimentally determined mass of an atom is always less than the sum of the masses of the component particles (protons, neutrons, and electrons) by an amount called the mass defect of the nucleus. The energy corresponding to the mass defect is the nuclear binding energy, the amount of energy released when a nucleus forms from its component particles. In nuclear fission, nuclei split into lighter nuclei with an accompanying release of multiple neutrons and large amounts of energy. The critical mass is the minimum mass required to support a self-sustaining nuclear chain reaction. Nuclear fusion is a process in which two light nuclei combine to produce a heavier nucleus plus a great deal of energy. |
Reference: Page 4 of this document
Given two polynomials $p(x)$ and $q(x)$, each of degree $n$ and represented in coefficient form, it takes $\mathcal{\Theta}(n)$ time to evaluate either one at a given input $x$.
In the reference linked, in order to get the product of these two polynomials, $c(x) = p(x)q(x)$, via brute force, we have to compute new coefficients via convolution $\left(c_k = \sum_{i=0}^k a_i b_{k-i} \right)$, which takes $\mathcal{\Theta}(n^2)$ time.
However, if we wanted to evaluate this product with some given input $x$, we can evaluate the two polynomials $p(x)$ and $q(x)$ separately and then multiply the two scalar results. Overall, this would take $\mathcal{\Theta}(n)$ time for two evaluations and a scalar multiplication. This lets us avoid having to do convolutions and would be faster than $\mathcal{\Theta}(n^2)$.
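To make the comparison concrete, here is a small sketch (function names are mine): evaluating the product pointwise via Horner's rule in $\Theta(n)$, versus first forming the coefficients by $\Theta(n^2)$ convolution.

```python
# Sketch contrasting the two routes to c(x0) = p(x0) * q(x0).
# Coefficients are listed in increasing degree; function names are mine.
def horner(coeffs, x):
    acc = 0
    for a in reversed(coeffs):   # Theta(n): one multiply-add per term
        acc = acc * x + a
    return acc

def eval_product(p, q, x):
    return horner(p, x) * horner(q, x)   # Theta(n) overall

def convolve(p, q):                      # Theta(n^2) coefficient route
    c = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            c[i + j] += a * b
    return c

p, q = [1, 2, 3], [4, 5]   # p(x) = 1 + 2x + 3x^2, q(x) = 4 + 5x
print(eval_product(p, q, 2), horner(convolve(p, q), 2))  # -> 238 238
```

Both routes agree on the value; only the convolution route produces the coefficients $\{c_0, c_1, c_2, \dotsc\}$.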
Question: When it comes to evaluation of the product of two polynomials, is it actually $\mathcal{\Theta}(n)$ using the method above or are we somehow still doomed to $\mathcal{\Theta}(n^2)$?
I'm aware there's a faster way to do this in $\mathcal{O}(n \log n)$ time via Fast Fourier Transform, but I'm more curious about the necessity of the convolution calculations. I suspect it's more if we care about getting the coefficients of the product than the actual evaluation, as in we more care about knowing $\{c_0, c_1, c_2, \dotsc\}$ than $c(x) = p(x)q(x)$. |
Let $a$, $b$ and $c$ be real numbers such that $a^2+b^2+c^2=1$. Prove that: $$\frac{1}{(1-ab)^2}+\frac{1}{(1-ac)^2}+\frac{1}{(1-bc)^2}\leq\frac{27}{4}$$
This inequality is stronger than $\sum\limits_{cyc}\frac{1}{1-ab}\leq\frac{9}{2}$ with the same condition,
which we can prove by AM-GM and C-S: $$\sum\limits_{cyc}\frac{1}{1-ab}=3+\sum\limits_{cyc}\left(\frac{1}{1-ab}-1\right)=3+\sum\limits_{cyc}\frac{ab}{1-ab}\leq$$ $$\leq3+\sum\limits_{cyc}\frac{(a+b)^2}{2(2a^2+2b^2+2c^2-2ab)}\leq3+\sum\limits_{cyc}\frac{(a+b)^2}{2(a^2+b^2+2c^2)}\leq$$ $$\leq3+\frac{1}{2}\sum_{cyc}\left(\frac{a^2}{a^2+c^2}+\frac{b^2}{b^2+c^2}\right)=\frac{9}{2},$$ but for the starting inequality this idea does not work.
By the way, I have a proof of the following inequality.
Let $a$, $b$ and $c$ be real numbers such that $a^2+b^2+c^2=3$. Prove that: $$\sum\limits_{cyc}\frac{1}{(4-ab)^2}\leq\frac{1}{3}$$ (we can prove it by SOS and uvw).
This inequality is weaker, though, which is not comforting.
We can assume, of course, that all variables are non-negative.
Thank you! |
$$ \newcommand{\bsth}{{\boldsymbol\theta}} \newcommand{\va}{\textbf{a}} \newcommand{\vb}{\textbf{b}} \newcommand{\vc}{\textbf{c}} \newcommand{\vd}{\textbf{d}} \newcommand{\ve}{\textbf{e}} \newcommand{\vf}{\textbf{f}} \newcommand{\vg}{\textbf{g}} \newcommand{\vh}{\textbf{h}} \newcommand{\vi}{\textbf{i}} \newcommand{\vj}{\textbf{j}} \newcommand{\vk}{\textbf{k}} \newcommand{\vl}{\textbf{l}} \newcommand{\vm}{\textbf{m}} \newcommand{\vn}{\textbf{n}} \newcommand{\vo}{\textbf{o}} \newcommand{\vp}{\textbf{p}} \newcommand{\vq}{\textbf{q}} \newcommand{\vr}{\textbf{r}} \newcommand{\vs}{\textbf{s}} \newcommand{\vt}{\textbf{t}} \newcommand{\vu}{\textbf{u}} \newcommand{\vv}{\textbf{v}} \newcommand{\vw}{\textbf{w}} \newcommand{\vx}{\textbf{x}} \newcommand{\vy}{\textbf{y}} \newcommand{\vz}{\textbf{z}} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator\mathProb{\mathbb{P}} \renewcommand{\P}{\mathProb} % need to overwrite stupid paragraph symbol \DeclareMathOperator\mathExp{\mathbb{E}} \newcommand{\E}{\mathExp} \DeclareMathOperator\Uniform{Uniform} \DeclareMathOperator\poly{poly} \DeclareMathOperator\diag{diag} \newcommand{\pa}[1]{ \left({#1}\right) } \newcommand{\ha}[1]{ \left[{#1}\right] } \newcommand{\ca}[1]{ \left\{{#1}\right\} } \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\nptime}{\textsf{NP}} \newcommand{\ptime}{\textsf{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\card}[1]{\left\lvert{#1}\right\rvert} \newcommand{\abs}[1]{\card{#1}} \newcommand{\sg}{\mathop{\mathrm{SG}}} \newcommand{\se}{\mathop{\mathrm{SE}}} \newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \DeclareMathOperator{\var}{var} \DeclareMathOperator{\cov}{cov} \newcommand\independent{\perp\kern-5pt\perp} \newcommand{\CE}[2]{ \mathExp\left[ #1 \,\middle|\, #2 \right] } \newcommand{\disteq}{\overset{d}{=}} $$
FAISS, Part 1
FAISS is a powerful GPU-accelerated library for similarity search. It’s available under MIT on GitHub. Even though the paper came out in 2017, and, under some interpretations, the library lost its SOTA title, when it comes to practical concerns:
- the library is actively maintained and cleanly written.
- it’s still extremely competitive by any metric, enough so that the bottleneck for your application won’t likely be in FAISS anyway.
- if you bug me enough, I may fix my one-line EC2 spin-up script that sets up FAISS deps here.
This post will review context and motivation for the paper. Again, the approximate similarity search space may have progressed to different kinds of techniques, but FAISS’s techniques are powerful, simple, and inspirational in their own right.
Motivation
At a high level,
similarity search helps us find, within a fixed “database” of high-dimensional real vectors, the vectors most similar to a given query vector, without resorting to checking each one. In database terms, we’re making an index of high-dimensional real vectors.

Who Cares

Spam Detection
Tinder bot 1 bio: “Hey, I’m just down for whatever you know? Let’s have some fun.”
Tinder bot 2 bio: “Heyyy, I’m just down for whatevvver you know? Let’s have some fun.”
Tinder bot 3 bio: “Heyyy, I’m just down for whatevvver you know!!? I just wanna find someone who wants to have some fun.”
You’re Tinder and you know spammers make different accounts, and they randomly tweak the bios of their bots, so you have to check similarity across all your users’ bios. How?
Recommendations
You’re or and users clicking on ads keep the juices flowing.
Or you’re and part of trapping people with convenience is telling them what they want before they want it. Or you’re and you’re trying to keep people inside on a Friday night with another Office binge.
Luckily for those companies, their greatest minds have turned those problems into summarizing me as faux-hipster half-effort yuppie as encoded in a dense 512-dimensional vector, which must be matched via inner product with another 512-dimensional vector for Outdoor Voices’ new marketing “workout chic” campaign.
Problem Setup
You have a set of database vectors \(\{\textbf{y}_i\}_{i=0}^\ell\), each in \(\mathbb{R}^d\). You can do some prep work to create an index. Then at runtime I ask for the \(k\) closest vectors, which might be measured in \(L^2\) distance, or the vectors with the largest inner product.
Formally, we want the set \(L=\text{$k$-argmin}_i\norm{\textbf{x}-\textbf{y}_i}\) given \(\textbf{x}\).
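This query has an exact brute-force baseline, which is also the thing FAISS approximates. A sketch (variable names are mine, not FAISS API):

```python
import numpy as np

# Exact brute-force answer to the k-argmin query above.
def knn_exact(database, query, k):
    # ||y - x||^2 = ||y||^2 - 2 y.x + ||x||^2; the ||x||^2 term is
    # constant across the database, so it can be dropped for ranking.
    d2 = (database ** 2).sum(axis=1) - 2.0 * database @ query
    return np.argsort(d2)[:k]

rng = np.random.default_rng(0)
Y = rng.normal(size=(1000, 16))   # database vectors
x = Y[42] + 1e-3                  # query lying near a known vector
print(knn_exact(Y, x, 1))         # -> [42]
```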
Overlooking the fact that this is probably an image of \(k\)-nearest neighbors, this summarizes the situation, in two dimensions:
Why is this hard?
Suppose we have 1M embeddings at a dimensionality of about 1K. This is a very conservative estimate, but it already amounts to scanning over 1GB of data per query if done naively.
Let’s continue to be extremely conservative, say our service is replicated so much that we have one machine per live query per second, which is still a lot of machines. Scanning over 1GB of data serially on one node with 10 Gb/s of RAM bandwidth isn’t something you can do at interactive speeds: it clocks in at about 1 second of response time for just this extremely crude simplification.
Exact methods for answering the above problem (Branch-and-Bound, LEMP, FEXIPRO) limit the search space. The most recent SOTA for exact search is still 1–2 orders of magnitude slower than approximate methods. For the previous use cases, we don’t care about exactness (though there certainly are cases where it does matter).
Related Work Before FAISS
FAISS itself is built on product quantization work from its authors, but for context there were a couple of interesting approximate nearest-neighbor search problems around.
Tangentially related is the lineage of hashing-based approaches Bachrach et al 2014 (Xbox), Shrivastava and Li 2014 (L2ALSH), Neyshabur and Srebro 2015 (Simple-ALSH) for solving inner product similarity search. The last paper in particular has a unifying perspective between inner product similarity search and \(L^2\) nearest neighbors (namely a reduction from the former to the latter).
However, for the most part, it wasn’t locally-sensitive hashing, but rather clustering and hierarchical index construction that was the main approach to this problem before. One of the nice things about the FAISS paper in my view is that it is a disciplined epitome of these approaches that’s effectively implemented.
After FAISS
Just kidding. Like the second place winners for ILSVRC 2012 will tell you, simple and fast beats smart and slow. As this guy proved, a CPU implementation from 2 years in the future still won’t compete with a simpler GPU implementation from the past.
You might say this is an unfair comparison, but life (resource allocation) doesn’t need to be fair either.
Evaluation
FAISS provides an engine which approximately answers the query \(L=\text{$k$-argmin}_i\norm{\textbf{x}-\textbf{y}_i}\) with the response \(S\).
The metrics for evaluation here are:
- Index build time, in seconds. For a set of \(\ell\) database vectors, how long does it take to construct the index?
- Search time, in seconds, which is the average time it takes to respond to a query.
- R@k, or recall-at-\(k\). Here the response \(S\) may be slightly larger than \(k\), but we look at the closest \(k\) items in \(S\) with an exact search, yielding \(S_k\). This value is then \(\card{S_k\cap L}/k\), where \(k=\card{L}\).

FAISS details
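The R@k metric described above takes only a few lines to compute; a sketch (names are mine; `S_ranked` holds the response \(S\) re-ranked by an exact distance computation):

```python
# Recall-at-k: keep the closest k items of the response, then measure
# overlap with the true k-nearest set L (|L| = k).
def recall_at_k(S_ranked, L, k):
    S_k = set(S_ranked[:k])
    return len(S_k & set(L)) / k

# Toy check: the top-3 of the response shares 2 items with L.
print(recall_at_k([3, 1, 4, 2, 5], L={3, 1, 9}, k=3))  # -> 0.666...
```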
In the next post, I’ll take a look at how FAISS addresses this problem. |
I have a data set $x_{1}, x_{2}, \ldots, x_{k}$ and want to find the parameter $m$ that minimizes the sum $$\sum_{i=1}^{k}\big|m-x_i\big|,$$ that is,
$$\min_{m}\sum_{i=1}^{k}\big|m-x_i\big|.$$
Probably you ask for a proof that the median solves the problem? Well, this can be done like this:
The objective is piecewise linear and hence differentiable except at the points $m=x_i$. What is the slope of the objective at some point $m\neq x_i$? Well, the slope is the sum of the slopes of the mappings $m\mapsto |m-x_j|$, and each of these is either $+1$ (for $m>x_j$) or $-1$ (for $m<x_j$). Hence, the slope equals the number of $x_i$'s smaller than $m$ minus the number of $x_i$'s larger than $m$. You see that the slope is zero if there are equally many $x_i$'s smaller and larger than $m$ (possible for an even number of $x_i$'s). If there is an odd number of $x_i$'s, then the slope is $-1$ just left of the "middlest" one and $+1$ just right of it, hence the middlest one is the minimum.
A generalization of this problem to multiple dimensions is called the geometric median problem. As David points out, the median is the solution for the 1-D case; there, you could use median-finding selection algorithms, which are more efficient than sorting. Sorts are $O(n\log n)$ whereas selection algorithms are $O(n)$; sorts are only more efficient if multiple selections are needed, in which case you could sort (expensively) once, and then repeatedly select from the sorted list.
The link to the geometric median problem mentions solutions for multidimensional cases.
The explicit solution in terms of the median is correct, but in response to a comment by mayenew, here's another approach.
It is well-known that $\ell^1$ minimization problems generally, and the posted problem in particular, can be solved by linear programming.
The following LP formulation will do for the given exercise with unknowns $z_i,m$:
$$ \min \sum z_i $$ such that: $$ z_i \ge m - x_i $$ $$ z_i \ge x_i - m $$
Clearly $z_i$ must equal $|x_i - m|$ at the minimum, so this asks the sum of absolute values of errors to be minimized.
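A sketch of this LP (requires SciPy; the data vector is my example) confirms that the optimizer recovers the median:

```python
import numpy as np
from scipy.optimize import linprog

# Unknowns are (z_1..z_k, m): minimize sum z_i
# subject to z_i >= m - x_i and z_i >= x_i - m.
x = np.array([1.0, 2.0, 3.0, 10.0, 100.0])
k = len(x)

c = np.r_[np.ones(k), 0.0]            # objective: sum of z_i
A1 = np.c_[-np.eye(k), np.ones(k)]    # -z_i + m <= x_i
A2 = np.c_[-np.eye(k), -np.ones(k)]   # -z_i - m <= -x_i
res = linprog(c, A_ub=np.r_[A1, A2], b_ub=np.r_[x, -x],
              bounds=[(0, None)] * k + [(None, None)])
print(res.x[-1])  # ≈ 3.0, the median of x
```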
The over-powered convex analysis way to show this is just take subgradients. In fact this is equivalent to the reasoning used in some of the other answers involving slopes.
The optimization problem is convex (because the objective is convex and there are no constraints.) Also, the subgradient of $\left|m-x_i\right|$ is
-1 if $m<x_i$
[-1,1] if $m=x_i$
+1 if $m>x_i$.
Since a convex function is minimized if and only if its subgradient contains zero, and the subgradient of a sum of convex functions is the (set) sum of the subgradients, you get that 0 is in the subgradient if and only if $m$ is the median of $x_1,\ldots x_k$.
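A quick numeric sanity check of this argument (my own toy data): scan a grid of candidate $m$'s and confirm the minimizer of $\sum_i |m-x_i|$ is the median.

```python
import numpy as np

# Brute-force scan of the piecewise-linear objective.
x = np.array([0.0, 1.0, 2.0, 7.0, 50.0])
grid = np.linspace(-10.0, 60.0, 7001)           # step 0.01, hits 2.0
objective = np.abs(grid[:, None] - x[None, :]).sum(axis=1)
m_star = grid[objective.argmin()]
print(round(m_star, 6), np.median(x))  # -> 2.0 2.0
```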
We're basically after: $$ \arg \min_{m} \sum_{i = 1}^{N} \left| m - {x}_{i} \right| $$
One should notice that $ \frac{\mathrm{d} \left | x \right | }{\mathrm{d} x} = \operatorname{sign} \left( x \right) $ (Being more rigorous would say it is a Sub Gradient of the non smooth $ {L}_{1} $ Norm function).
Hence, differentiating the sum above yields $ \sum_{i = 1}^{N} \operatorname{sign} \left( m - {x}_{i} \right) $. This equals zero only when the number of positive terms equals the number of negative terms, which happens when $ m = \operatorname{median} \left\{ {x}_{1}, {x}_{2}, \cdots, {x}_{N} \right\} $.
One should notice that the
median of a discrete group is not uniquely defined.
Moreover, it is not necessarily an item within the group. |
A simple question.
If $Y=\frac{1}{X}$ and I know $f_X(x)$, is it true that $E(Y) = E(1/X) = \int_{-\infty}^\infty \frac{1}{x}f_X(x) dx$?
Yes. In general if $X\sim f(x)$ then for a function $g(x)$ you have $E(g(X)) = \int g(x)f(x)dx$. You can verify this for simple cases by deriving the distribution of the transformed variable. The completely general result takes some more advanced math which you can probably safely avoid :)
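For simple cases you can check both routes numerically. A sketch (the distribution here is my example): for $X \sim \text{Uniform}(1, 2)$, the exact LOTUS value is $E(1/X) = \int_1^2 (1/x)\,dx = \ln 2$.

```python
import numpy as np

# Monte Carlo estimate of E(1/X) for X ~ Uniform(1, 2).
rng = np.random.default_rng(0)
samples = rng.uniform(1.0, 2.0, size=1_000_000)
estimate = (1.0 / samples).mean()
print(estimate)  # close to np.log(2) ≈ 0.6931
```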
Another approach, if you are happy with a numerical estimate (as opposed to the theoretical exact value), is to generate a bunch of data from the distribution, do the transformation, then take the mean of the transformed data as the estimate of the expected value. This avoids integration, which can be nice in ugly cases, but does not give the theory, relationship, or exact value. |
How to count the solutions of the equation \$x \cdot y + x \cdot z + y \cdot z = n, (0 \leq n \leq 10^4)\$ with the constraint that \$x \geq y \geq z \geq 0\$ ?
To solve this, I've isolated the z value and brute-forced the values of x and y. If the values x, y, z satisfy the constraint and the equation, then it's a valid solution.
#include <stdio.h>

int main() {
    int x, y, z, n, counter;
    while (scanf("%d", &n), n != -1) {
        counter = 0;
        for (x = n; x >= 0; --x) {
            for (y = x; y > 0; --y) {
                z = n - x * y;
                if (z < 0) continue;  /* negative z isn't valid */
                z /= (x + y);
                if (z <= y && y <= x && x * y + x * z + y * z == n)
                    ++counter;
            }
        }
        printf("%d\n", counter);
    }
    return 0;
}
The input contains several lines with one value for n. The program must stop when this value is -1.
The output is the number of solutions of the equation in separated lines.
Sample Input:
20
1
9747
-1
Sample Output:
5
1
57
How to make this code faster ? |
I encountered a problem while having a look at the proof of the strong Markov property in Liggett's book about Markov processes, "Continuous Time Markov Processes: An Introduction". I want to prove the strong Markov property for continuous-time Markov chains using the same approach as in the book. As far as I can see, the proof in the book uses the following statement:
Let $(X_t)_{t\geq 0}$ be a continuous Markov chain with values in the discrete space $S$. Further let $Y$ be a bounded random variable and $\xi$ a finite stopping time. Then the following holds $Pr_x$-a.s.
$$ E_x[Y] = E_{x}[E_{\xi}[Y]] \iff E_x[E_x[Y\cdot \theta_{\xi} \vert {\cal{F}}_{\xi} ]] = E_{x}[E_{\xi}[Y]]$$
$$ \implies E_x[Y\cdot \theta_{\xi} \vert {\cal{F}}_{\xi} ] = E_{\xi}[Y] $$
where ${\cal{F}}_{\xi} := \{ A \in {\cal{F}}: A \cap \{\xi\leq t\}\in {\cal{F}}_t \text{ for all } t\geq 0 \}$ and $\theta_s$ the shift operator for right-continuous functions $\omega$ i.e. $(\theta_s \omega)(t) = \omega (t+s)$.
I have a problem with the implication in the second line of the equations. In general this doesn't hold, because I can find two different random variables such that the expectations would be the same, right? So am I missing another argument that is being used to prove the statement?
Thank you for your help! |
I am trying to derive Galerkin type weak formulation for the Stokes equations. I'm having a bit of a problem reconciling the notation in the integration by parts. I know that the answer I'm looking for is: $ \int_\Omega \Delta\mathbf{u}\cdot\mathbf{v}d\Omega = \int_\Gamma (\mathbf{n}\cdot\nabla\mathbf{u})\cdot\mathbf{v}d\Gamma - \int_\Omega \nabla\mathbf{u}:\nabla\mathbf{v}d\Omega $
When I integrate by parts myself I get: $ \int_\Omega \nabla u\cdot\mathbf{v}d\Omega = \int_\Gamma u(\mathbf{v}\cdot\mathbf{n})d\Gamma - \int_\Omega u \nabla\cdot\mathbf{v}d\Omega $ which would suggest $ \int_\Omega\Delta\mathbf{u}\cdot\mathbf{v}d\Omega = \int_\Omega (\nabla\cdot (\nabla\mathbf{u}))\cdot\mathbf{v}d\Omega = \int_\Gamma \nabla\mathbf{u} (\mathbf{v}\cdot\mathbf{n})d\Gamma - \int_\Omega\nabla\mathbf{u}\nabla\cdot\mathbf{v}d\Omega $
I assume I should be using a dot product for the vector/matrix multiplication, but even so I can't reconcile my answer with what I know the correct answer to be. For instance the line integral should be a scalar, but with my answer $\nabla\mathbf{u}$ is a matrix and $\mathbf{v}\cdot\mathbf{n}$ is a scalar so I fail to see how their product could be a scalar.
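(A sketch of one way to reconcile this, my own working: since each component $u_i$ of $\mathbf{u}$ is a scalar, the scalar Green identity $\int_\Omega (\Delta u_i)\,v_i\,d\Omega = \int_\Gamma v_i\,(\mathbf{n}\cdot\nabla u_i)\,d\Gamma - \int_\Omega \nabla u_i\cdot\nabla v_i\,d\Omega$ can be applied per component and summed:

$$ \int_\Omega \Delta\mathbf{u}\cdot\mathbf{v}\,d\Omega = \sum_i \int_\Omega (\Delta u_i)\,v_i\,d\Omega = \sum_i\left(\int_\Gamma v_i\,(\mathbf{n}\cdot\nabla u_i)\,d\Gamma - \int_\Omega \nabla u_i\cdot\nabla v_i\,d\Omega\right) = \int_\Gamma (\mathbf{n}\cdot\nabla\mathbf{u})\cdot\mathbf{v}\,d\Gamma - \int_\Omega \nabla\mathbf{u}:\nabla\mathbf{v}\,d\Omega, $$

where the Frobenius product $\nabla\mathbf{u}:\nabla\mathbf{v}=\sum_i \nabla u_i\cdot\nabla v_i$ collects the volume terms, and each boundary term pairs the scalar normal derivative $\mathbf{n}\cdot\nabla u_i$ with the scalar $v_i$, so every integrand is indeed a scalar.)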
I did notice that the formula I used applies for scalar $u$'s. Is there another identity I should be using when $u$ is a vector? |
I'm trying to reproduce Kaluza & Klein's result of obtaining the electromagnetic field by introducing a fifth dimension. The basic idea is that the extra components of the five-dimensional metric will materialize in four dimensions as components of the electromagnetic vector potential. For instance, by postulating the appropriate five-dimensional metric and writing up the equation of motion for a particle in empty space, we should be able to recover the four dimensional equation of motion for a charged particle in an electromagnetic field.
Dealing with a single particle, that's a rather special case. Texts on Kaluza-Klein usually focus instead on the relativistic action, which would be applicable to all mechanical systems. My goal here, however, was simply to outline the approach and demonstrate through a simple case how it works, not to develop a comprehensive theory; that has been done by Kaluza over 80 years ago.
My first attempt was a naïve one: I thought I might be able to derive the desired result in flat space, without having to consider curvature with the associated computational complications. That is not so: as I now discovered, curvature, in particular the Christoffel-symbols, play an essential role in the theory, as it is due to the Christoffel-symbols that the electromagnetic field tensor will appear in the four dimensional equation of motion.
We start with empty 5-space. We use upper-case indices for 5-dimensional coordinates (0...4), while lower-case indices will be used in four dimensions (0...3). The electromagnetic field tensor, $F_{ab}$, is defined as $F_{ab}=\nabla_aA_b-\nabla_bA_a=\partial_aA_b-\partial_bA_a$, the contributions of the Christoffel-symbols canceling out each other due to their symmetry in the first two indices. The metric tensor of 5-space is assumed to take the following form (the reason for this peculiar choice will become evident later on):
\[G_{AB}=\begin{pmatrix}g_{ab}+g_{44}A_aA_b&g_{44}A_a\\g_{44}A_b&g_{44}\end{pmatrix},\]
where $A_a$ is an arbitrary 4-vector. Writing up the metric tensor in this form does not imply any loss of generality. The inverse of the metric tensor takes the following form:
\[G^{AB}=\begin{pmatrix}g^{ab}&-A^a\\-A^b&g_{44}^{-1}+A^2\end{pmatrix}.\]
The result can be verified through direct calculation, i.e., by computing $G_{AB}G^{BC}$. What next? Why, computing the Christoffel-symbols of course:
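For instance, writing $A^2=A_bA^b$ and raising indices with $g^{ab}$ (so the off-diagonal entries of $G^{AB}$ carry upper indices), the block-by-block check reads:

\begin{align}
(g_{ab}+g_{44}A_aA_b)g^{bc}+g_{44}A_a(-A^c)&=\delta_a{}^c+g_{44}A_aA^c-g_{44}A_aA^c=\delta_a{}^c,\\
g_{44}A_bg^{bc}+g_{44}(-A^c)&=0,\\
g_{44}A_b(-A^b)+g_{44}\left(g_{44}^{-1}+A^2\right)&=1.
\end{align}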
\[\Gamma_{AB}^C=G^{CD}\Gamma_{ABD}=\frac{1}{2}G^{CD}(\partial_AG_{BD}+\partial_BG_{AD}-\partial_DG_{AB}).\]
Wherever the notation might appear ambiguous, I use an upper left index (4) or (5) to distinguish between the four-dimensional and the five dimensional Christoffel-symbols.
Now is the time to make some assumptions about the 5-dimensional metric. First, we assume that the component $g_{44}$ remains constant everywhere. Second, we postulate that the fifth direction forms a so-called Killing field, meaning that the metric will not change with respect to the fifth coordinate: $\partial_4G_{AB}=0$. This is Kaluza's celebrated "cylinder condition". These identities imply that $\Gamma_{a44}=\Gamma_{4b4}=\Gamma_{44c}=0$. Now let's try some of the other Christoffel-symbols:
\begin{align}
{}^{(5)}\Gamma_{4b}^c&=G^{cD}\Gamma_{4bD}=G^{cd}\Gamma_{4bd}+G^{c4}\Gamma_{4b4}=\frac{1}{2}g^{cd}(\partial_4G_{bd}+\partial_bG_{4d}-\partial_dG_{4b})\\ &=\frac{1}{2}g^{cd}[\partial_b(g_{44}A_d)-\partial_d(g_{44}A_b)]=\frac{1}{2}g_{44}g^{cd}(\partial_bA_d-\partial_dA_b)=\frac{1}{2}g_{44}g^{cd}F_{bd}=\frac{1}{2}g_{44}F_b{}^c,\\ {}^{(5)}\Gamma_{a4}^c&=G^{cD}\Gamma_{a4D}=G^{cd}\Gamma_{a4d}+G^{c4}\Gamma_{a44}=\frac{1}{2}g^{cd}(\partial_aG_{4d}+\partial_4G_{ad}-\partial_dG_{a4})\\ &=\frac{1}{2}g^{cd}[\partial_a(g_{44}A_d)-\partial_d(g_{44}A_a)]=\frac{1}{2}g_{44}g^{cd}(\partial_aA_d-\partial_dA_a)=\frac{1}{2}g_{44}g^{cd}F_{ad}=\frac{1}{2}g_{44}F_a{}^c,\\ {}^{(5)}\Gamma_{44}^b&=G^{bD}\Gamma_{44D}=G^{bd}\Gamma_{44d}+G^{b4}\Gamma_{444}=0. \end{align}
There are more, but these are all we're going to need. With the Christoffel-symbols at hand, we can begin to rewrite the five-dimensional equation of motion in the hope that we can extract something useful and interesting about motion in four dimensions. In explicit notation, the equation of motion takes the following form (geodesic equation):
\[\frac{d^2x^A}{d\tau^2}+\Gamma_{BC}^A\frac{dx^B}{d\tau}\frac{dx^C}{d\tau}=0.\]
But since we are trying to recover the equation of motion in four dimensions, we can just ignore the $A=4$ case:
\[\frac{d^2x^a}{d\tau^2}+\Gamma_{BC}^a\frac{dx^B}{d\tau}\frac{dx^C}{d\tau}=0.\]
Rewriting this in terms of Christoffel-symbols that we can evaluate, and making some dummy index substitutions, we get:
\begin{align}\frac{d^2x^a}{d\tau^2}+\Gamma_{BC}^a\frac{dx^B}{d\tau}\frac{dx^C}{d\tau}&=\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}+\Gamma_{4c}^a\frac{dx^4}{d\tau}\frac{dx^c}{d\tau}+\Gamma_{b4}^a\frac{dx^b}{d\tau}\frac{dx^4}{d\tau}+\Gamma_{44}^a\frac{dx^4}{d\tau}\frac{dx^4}{d\tau}\\
&=\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}+\frac{1}{2}g_{44}F_c{}^a\frac{dx^c}{d\tau}\frac{dx^4}{d\tau}+\frac{1}{2}g_{44}F_b{}^a\frac{dx^b}{d\tau}\frac{dx^4}{d\tau}\\ &=\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}+g_{44}F_b{}^a\frac{dx^b}{d\tau}\frac{dx^4}{d\tau}=0, \end{align}
i.e.,
\[\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}=-g_{44}\frac{dx^4}{d\tau}F_b{}^a\frac{dx^b}{d\tau},\]
which is formally identical to the equation of motion in 4D spacetime in an electromagnetic field characterized by $F_b{}^a$, for a particle with a charge-mass ratio of $-g_{44}dx^4/d\tau$ (in other words, the momentum in the fifth direction will be proportional to the charge). There is, of course, some sleight of hand involved in what I have done, namely that what we see on the left is the five-dimensional Christoffel-symbol in what is supposed to be a 4-dimensional equation, consequently hiding a term in the form $g_{44}A_cF_b{}^a(dx^b/d\tau)(dx^c/d\tau)$, but this derivation nevertheless should suffice to demonstrate the basic idea: starting with empty 5-dimensional space, we can recover an equation of motion in four dimensions that contains the electromagnetic field tensor. In any case, I believe the sleight of hand is
necessary, because the case of a "pure" electromagnetic field would be a nonphysical situation in general relativity: the electromagnetic field itself carries energy and will also influence the particle's motion gravitationally by introducing curvature, which is what I suspect is hidden behind the unwanted term that I eliminated by cheating.
By the way, all this is, by and large, the Kaluza part of the theory. Klein's contribution was with regards to the compactification of the fifth dimension. No, not for aesthetic reasons, though a compactified dimension certainly helped explaining why the fifth dimension couldn't be seen; no, the main reason was to account for the quantized electric charge. It was through compactification that Kaluza achieved a fifth dimension admitting only discrete solutions. |
Skills to Develop
To know how to use half-lives to describe the rates of first-order reactions
Another approach to describing reaction rates is based on the time required for the concentration of a reactant to decrease to one-half its initial value. This period of time is called the
half-life of the reaction, written as \(t_{1/2}\). Thus the half-life of a reaction is the time required for the reactant concentration to decrease from \([A]_0\) to \([A]_0/2\). If two reactions have the same order, the faster reaction will have a shorter half-life, and the slower reaction will have a longer half-life.
The half-life of a first-order reaction under a given set of reaction conditions is a constant. This is not true for zeroth- and second-order reactions. The half-life of a first-order reaction is independent of the concentration of the reactants. This becomes evident when we rearrange the integrated rate law for a first-order reaction (Equation 14.21) to produce the following equation:
Substituting \([A]_0/2\) for \([A]\) and \(t_{1/2}\) for \(t\) (to indicate a half-life) into Equation \(\ref{21.4.1}\) gives
Substituting \(\ln{2} \approx 0.693\) into the equation results in the expression for the half-life of a first-order reaction:
\[t_{1/2}=\dfrac{0.693}{k} \label{21.4.2}\]
The half-life of a first-order reaction is independent of \([A]\).
If we know the rate constant for a first-order reaction, then we can use half-lives to predict how much time is needed for the reaction to reach a certain percent completion.
Number of Half-Lives | Percentage of Reactant Remaining
1 | \(\dfrac{100\%}{2}=50\%\) | \(\dfrac{1}{2}(100\%)=50\%\)
2 | \(\dfrac{50\%}{2}=25\%\) | \(\dfrac{1}{2}\left(\dfrac{1}{2}\right)(100\%)=25\%\)
3 | \(\dfrac{25\%}{2}=12.5\%\) | \(\dfrac{1}{2}\left(\dfrac{1}{2}\right )\left (\dfrac{1}{2}\right)(100\%)=12.5\%\)
n | \(\dfrac{100\%}{2^n}\) | \(\left(\dfrac{1}{2}\right)^n(100\%)\)
As you can see from this table, the amount of reactant left after
n half-lives of a first-order reaction is \(\left(\dfrac{1}{2}\right)^n\) times the initial concentration.
For a first-order reaction, the concentration of the reactant decreases by a constant factor of one-half with each half-life, independent of [A].
Example \(\PageIndex{1}\)
The anticancer drug cis-platin hydrolyzes in water with a rate constant of 1.5 × 10\(^{-3}\) min\(^{-1}\) at pH 7.0 and 25°C. Calculate the half-life for the hydrolysis reaction under these conditions. If a freshly prepared solution of cis-platin has a concentration of 0.053 M, what will be the concentration of cis-platin after 5 half-lives? after 10 half-lives? What is the percent completion of the reaction after 5 half-lives? after 10 half-lives? Given: rate constant, initial concentration, and number of half-lives Asked for: half-life, final concentrations, and percent completion Strategy: Use Equation \(\ref{21.4.2}\) to calculate the half-life of the reaction. Multiply the initial concentration by 1/2 to the power corresponding to the number of half-lives to obtain the remaining concentrations after those half-lives. Subtract the remaining concentration from the initial concentration. Then divide by the initial concentration, multiplying the fraction by 100 to obtain the percent completion. SOLUTION A We can calculate the half-life of the reaction using Equation \(\ref{21.4.2}\):
Thus it takes almost 8 h for half of the cis-platin to hydrolyze.
B After 5 half-lives (about 38 h), the remaining concentration of cis-platin will be as follows:
After 10 half-lives (77 h), the remaining concentration of cis-platin will be as follows:
C The percent completion after 5 half-lives will be as follows:
The percent completion after 10 half-lives will be as follows:
Thus a first-order chemical reaction is 97% complete after 5 half-lives and essentially 100% complete after 10 half-lives.
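The numbers in this example are straightforward to reproduce; a quick sketch (rate constant and initial concentration from the text):

```python
import math

# Checking the worked numbers for cis-platin hydrolysis.
k = 1.5e-3                     # rate constant, min^-1
t_half = math.log(2) / k       # ≈ 462 min, i.e. almost 8 h
c0 = 0.053                     # initial concentration, M
c5, c10 = c0 / 2**5, c0 / 2**10
pct5 = 100 * (c0 - c5) / c0    # percent completion after 5 half-lives
pct10 = 100 * (c0 - c10) / c0  # percent completion after 10 half-lives
print(round(t_half / 60, 1), round(pct5, 1), round(pct10, 1))
# -> 7.7 96.9 99.9
```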
Exercise \(\PageIndex{1}\)
Ethyl chloride decomposes to ethylene and HCl in a first-order reaction that has a rate constant of 1.6 × 10\(^{-6}\) s\(^{-1}\) at 650°C. What is the half-life for the reaction under these conditions? If a flask that originally contains 0.077 M ethyl chloride is heated at 650°C, what is the concentration of ethyl chloride after 4 half-lives?

Answer a

4.3 × 10\(^{5}\) s = 120 h = 5.0 days

Answer b

4.8 × 10\(^{-3}\) M

Radioactive Decay Rates
Radioactivity, or radioactive decay, is the emission of a particle or a photon that results from the spontaneous decomposition of the unstable nucleus of an atom. The rate of radioactive decay is an intrinsic property of each radioactive isotope that is independent of the chemical and physical form of the radioactive isotope. The rate is also independent of temperature. In this section, we will describe radioactive decay rates and how half-lives can be used to monitor radioactive decay processes.
In any sample of a given radioactive substance, the number of atoms of the radioactive isotope must decrease with time as their nuclei decay to nuclei of a more stable isotope. Using N to represent the number of atoms of the radioactive isotope, we can define the rate of decay of the sample, which is also called its activity (A), as the decrease in the number of the radioisotope’s nuclei per unit time:
Activity is usually measured in disintegrations per second (dps) or disintegrations per minute (dpm).
The activity of a sample is directly proportional to the number of atoms of the radioactive isotope in the sample:
\[A = kN \label{21.4.4}\]
Here, the symbol k is the radioactive decay constant, which has units of inverse time (e.g., s⁻¹, yr⁻¹) and a characteristic value for each radioactive isotope. If we combine Equation \(\ref{21.4.3}\) and Equation \(\ref{21.4.4}\), we obtain the relationship between the number of decays per unit time and the number of atoms of the isotope in a sample:
Equation \(\ref{21.4.5}\) is the same as the equation for the reaction rate of a first-order reaction, except that it uses numbers of atoms instead of concentrations. In fact, radioactive decay is a first-order process and can be described in terms of either the differential rate law (Equation \(\ref{21.4.5}\)) or the integrated rate law:
\[N = N_0e^{−kt} \]
\[\ln \dfrac{N}{N_0}=-kt \label{21.4.6}\]
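A minimal numerical sketch of Equations \(\ref{21.4.4}\)–\(\ref{21.4.6}\) in Python, using iodine-131's half-life of 8.0207 days as an illustrative value (the sample size N0 is an assumption of mine):

```python
import math

# Illustrative numbers: iodine-131, half-life 8.0207 days.
t_half = 8.0207                      # days
k = math.log(2) / t_half             # decay constant, day^-1 (0.693/t_half)
N0 = 1.0e6                           # assumed initial number of nuclei

N = N0 * math.exp(-k * 2 * t_half)   # N = N0 e^(-kt); two half-lives -> N0/4
A = k * N                            # activity A = kN, decays per day
```

After two half-lives the integrated rate law leaves exactly one quarter of the nuclei, which is a quick consistency check on the decay constant.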
Because radioactive decay is a first-order process, the time required for half of the nuclei in any sample of a radioactive isotope to decay is a constant, called the half-life of the isotope. The half-life tells us how radioactive an isotope is (the number of decays per unit time); thus it is the most commonly cited property of any radioisotope. For a given number of atoms, isotopes with shorter half-lives decay more rapidly, undergoing a greater number of radioactive decays per unit time than do isotopes with longer half-lives. The half-lives of several isotopes are listed in Table 14.6, along with some of their applications.
Radioactive Isotope | Half-Life | Typical Uses
hydrogen-3 (tritium) | 12.32 yr | biochemical tracer
carbon-11 | 20.33 min | positron emission tomography (biomedical imaging)
carbon-14 | 5.70 × 10³ yr | dating of artifacts
sodium-24 | 14.951 h | cardiovascular system tracer
phosphorus-32 | 14.26 days | biochemical tracer
potassium-40 | 1.248 × 10⁹ yr | dating of rocks
iron-59 | 44.495 days | red blood cell lifetime tracer
cobalt-60 | 5.2712 yr | radiation therapy for cancer
technetium-99m* | 6.006 h | biomedical imaging
iodine-131 | 8.0207 days | thyroid studies tracer
radium-226 | 1.600 × 10³ yr | radiation therapy for cancer
uranium-238 | 4.468 × 10⁹ yr | dating of rocks and Earth’s crust
americium-241 | 432.2 yr | smoke detectors

*The m denotes metastable, where an excited-state nucleus decays to the ground state of the same isotope.
Note
Radioactive decay is a first-order process.
Radioisotope Dating Techniques
In our earlier discussion, we used the half-life of a first-order reaction to calculate how long the reaction had been occurring. Because nuclear decay reactions follow first-order kinetics and have a rate constant that is independent of temperature and the chemical or physical environment, we can perform similar calculations using the half-lives of isotopes to estimate the ages of geological and archaeological artifacts. The techniques that have been developed for this application are known as radioisotope dating techniques.
The most common method for measuring the age of ancient objects is carbon-14 dating. The carbon-14 isotope, created continuously in the upper regions of Earth’s atmosphere, reacts with atmospheric oxygen or ozone to form ¹⁴CO₂. As a result, the CO₂ that plants use as a carbon source for synthesizing organic compounds always includes a certain proportion of ¹⁴CO₂ molecules as well as nonradioactive ¹²CO₂ and ¹³CO₂. Any animal that eats a plant ingests a mixture of organic compounds that contains approximately the same proportions of carbon isotopes as those in the atmosphere. When the animal or plant dies, the carbon-14 nuclei in its tissues decay to nitrogen-14 nuclei by a radioactive process known as beta decay, which releases low-energy electrons (β particles) that can be detected and measured:
\[ \ce{^{14}C \rightarrow ^{14}N + \beta^{−}} \label{21.4.7}\]
The half-life for this reaction is 5700 ± 30 yr.
The ¹⁴C/¹²C ratio in living organisms is 1.3 × 10⁻¹², with a decay rate of 15 dpm/g of carbon (Figure \(\PageIndex{2}\)). Comparing the disintegrations per minute per gram of carbon from an archaeological sample with those from a recently living sample enables scientists to estimate the age of the artifact, as illustrated in Example 11. Using this method implicitly assumes that the ¹⁴CO₂/¹²CO₂ ratio in the atmosphere is constant, which is not strictly correct. Other methods, such as tree-ring dating, have been used to calibrate the dates obtained by radiocarbon dating, and all radiocarbon dates reported are now corrected for minor changes in the ¹⁴CO₂/¹²CO₂ ratio over time.
Example \(\PageIndex{2}\)
In 1990, the remains of an apparently prehistoric man were found in a melting glacier in the Italian Alps. Analysis of the ¹⁴C content of samples of wood from his tools gave a decay rate of 8.0 dpm/g carbon. How long ago did the man die?

Given: isotope and final activity

Asked for: elapsed time

Strategy:
A. Use Equation \(\ref{21.4.4}\) to calculate N₀/N. Then substitute the value for the half-life of ¹⁴C into Equation \(\ref{21.4.2}\) to find the rate constant for the reaction.
B. Using the values obtained for N₀/N and the rate constant, solve Equation \(\ref{21.4.6}\) to obtain the elapsed time.

SOLUTION
We know the initial activity from the isotope’s identity (15 dpm/g), the final activity (8.0 dpm/g), and the half-life, so we can use the integrated rate law for a first-order nuclear reaction (Equation \(\ref{21.4.6}\)) to calculate the elapsed time (the amount of time elapsed since the wood for the tools was cut and began to decay).
\[t = \dfrac{\ln(N_0/N)}{k}\]
A From Equation \(\ref{21.4.4}\), we know that A = kN. We can therefore use the initial and final activities ( A 0 = 15 dpm and A = 8.0 dpm) to calculate N 0/ N:
Now we need only calculate the rate constant for the reaction from its half-life (5730 yr) using Equation \(\ref{21.4.2}\):
This equation can be rearranged as follows:
B Substituting into the equation for t,
From our calculations, the man died 5200 yr ago.
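The carbon-14 arithmetic of this example can be checked with a short Python sketch (the 5730 yr half-life and the 15 and 8.0 dpm/g activities come from the example; the variable names are mine):

```python
import math

# Numbers from the worked example above.
t_half = 5730.0                 # yr, half-life of carbon-14 used in the example
k = math.log(2) / t_half        # rate constant, ~1.21e-4 yr^-1
A0, A = 15.0, 8.0               # dpm/g of carbon: living tissue vs. the sample

t = math.log(A0 / A) / k        # elapsed time, from ln(N/N0) = -kt and A = kN
```

The computed time comes out close to 5200 yr, in agreement with the text; substituting a different measured activity for A dates any other sample the same way.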
Exercise \(\PageIndex{2}\)
It is believed that humans first arrived in the Western Hemisphere during the last Ice Age, presumably by traveling over an exposed land bridge between Siberia and Alaska. Archaeologists have estimated that this occurred about 11,000 yr ago, but some argue that recent discoveries in several sites in North and South America suggest a much earlier arrival. Analysis of a sample of charcoal from a fire in one such site gave a ¹⁴C decay rate of 0.4 dpm/g of carbon. What is the approximate age of the sample?

Answer: 30,000 yr
Summary The half-life of a first-order reaction is independent of the concentration of the reactants. The half-lives of radioactive isotopes can be used to date objects.
The half-life of a reaction is the time required for the reactant concentration to decrease to one-half its initial value. The half-life of a first-order reaction is a constant that is related to the rate constant for the reaction: t₁/₂ = 0.693/k. Radioactive decay reactions are first-order reactions. The rate of decay, or activity, of a sample of a radioactive substance is the decrease in the number of radioactive nuclei per unit time.
Revision as of 20:33, 11 February 2009

== The Problem ==
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
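The wildcard construction of a combinatorial line can be sketched in a few lines of Python (`combinatorial_line` is a hypothetical helper name, not from the wiki):

```python
def combinatorial_line(template):
    """Expand a template with wildcard 'x' into its combinatorial line:
    replace every wildcard by the same digit, once for each of 1, 2, 3."""
    return [template.replace("x", digit) for digit in "123"]
```

For the template from the text, `combinatorial_line("112x1xx3")` yields the three points of the line given in the example.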
The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
== Threads ==

* (1-199) A combinatorial approach to density Hales-Jewett (inactive)
* (200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
* (300-399) The triangle-removal approach (inactive)
* (400-499) Quasirandomness and obstructions to uniformity (final call)
* (500-599) TBA
* (600-699) A reading seminar on density Hales-Jewett (active)

== Unsolved questions ==
Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose.
IP-Szemeredi (a weaker problem than DHJ)
Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].)
Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any c-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner.
The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for k=4. (I will sketch the simple proof later) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines, having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our c-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma.
Finally, let me prove that there is a square if d is large enough compared to c. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length d. It has a one-to-one mapping to [4]^d: given a point ((x_1,…,x_d),(y_1,…,y_d)) where x_i,y_j are 0 or 1, it maps to (z_1,…,z_d), where z_i=0 if x_i=y_i=0, z_i=1 if x_i=1 and y_i=0, z_i=2 if x_i=0 and y_i=1, and finally z_i=3 if x_i=y_i=1. Any combinatorial line in [4]^d defines a square in the Cartesian product, so the density HJ implies the statement.
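The bijection in this proof sketch is easy to state in code; the four cases collapse to z_i = x_i + 2·y_i (`to_4d` is my own name for this hypothetical helper):

```python
def to_4d(x, y):
    """Map a point ((x_1..x_d),(y_1..y_d)) of the Cartesian product,
    with each coordinate 0 or 1, to (z_1..z_d) in [4]^d = {0,1,2,3}^d."""
    return tuple(xi + 2 * yi for xi, yi in zip(x, y))
```

For example, the four coordinate cases (0,0), (1,0), (0,1), (1,1) map to 0, 1, 2, 3 respectively, as described in the text.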
Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product.
This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do.
I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.
Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U,(V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think.
I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A.
Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all.
O'Donnell.35: Just to confirm I have the question right…
There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits
[ x_i x'_i ]
[ y_i y'_i ]
[ z_i z'_i ]

are equal to one of the following:

[ 0 0 ]   [ 0 0 ]   [ 0 1 ]   [ 1 0 ]   [ 1 1 ]   [ 1 1 ]
[ 0 0 ]   [ 0 1 ]   [ 0 1 ]   [ 1 0 ]   [ 1 0 ]   [ 1 1 ]
[ 0 0 ]   [ 1 0 ]   [ 0 1 ]   [ 1 0 ]   [ 0 1 ]   [ 1 1 ]

?
McCutcheon.469: IP Roth:
Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$.
Presumably, this should be (perhaps much) simpler than DHJ, k=3.
High-dimensional Sperner

Kalai.29: There is an analogue of Sperner but with high-dimensional combinatorial spaces instead of "lines"; I do not remember the details (Kleitman? Katona? those are the usual suspects).
Fourier approach
Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y} where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density for subspaces. (OK, you can try it directly for the k=3 density HJ problem too but Sperner would be easier;) This is not unrelated to the regularity philosophy.

Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again.
The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient.
You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7.
DHJ for dense subsets of a random set
Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic.
Idris is Turing Complete! It does check for totality (termination when programming with data, productivity when programming with codata) but doesn't require that everything is total. Interestingly, having data and codata is enough to model Turing Completeness since you can write a monad for partial functions. I did this, years ago, in Coq - it's probably ...
For a rather simple version of dependent type theory, Gilles Dowek gave a proof of undecidability of typability in a non-empty context: Gilles Dowek, "The undecidability of typability in the $\lambda\Pi$-calculus", which can be found here. First let me clarify what is proven in that paper: he shows that in a dependent calculus without annotations on the ...
The main differences are along two dimensions -- in the underlying theory, and in how they can be used. Let's just focus on the latter. As a user, the "logic" of specifications in LiquidHaskell and refinement type systems generally, is restricted to decidable fragments so that verification (and inference) is completely automatic, meaning one does not require ...
Certainly, assigning a type to $\lambda x. x\ x$ is not enough for inconsistency: in system $F$, we can derive$$ \lambda x.x\ x:(\forall X.X)\rightarrow (\forall X.X)$$in a pretty straightforward way (this is a good exercise!). However, $(\lambda x.x\ x)(\lambda x.x\ x)$ cannot be well typed in this system, assuming $\omega$-consistency of 2nd order ...
It is a common misconception that we can translate let-expressions to applications. The difference between let x : t := b in v and (fun x : t => v) b is that in the let-expression, during type-checking of v we know that x is equal to b, but in the application we do not (the subexpression fun x : t => v has to make sense on its own). Here is an example:...
Refinement types are simply usual types with predicates. That is, given that $T$ is a usual type and $P$ is some predicate on $T$, $$\{v:T \mid P(v)\}$$ is a refinement type. $T$ in this case is called a base type. AFAIK, in Liquid Haskell, they also allow some dependent function types, that is types $\{x:T_1 \to T_2 \mid P\}$ [1]. Notice that fully ...
There is recent work by Paul-André Melliès and Noam Zeilberger that explores this. In particular the papers Functors are Type Refinement Systems and An Isbell Duality Theorem for Type Refinement Systems. There's also a video of a talk on the first one.I think there is a lot of confusion around refinement types due to people thinking of particular systems ...
The dependent sum is a common generalization of both the cartesian product $A \times B$ and the coproduct $A + B$. It just so happens that the HoTT book introduces dependent sum by generalizing $A \times B$, because that does not require the boolean type to be defined first.The coproduct is a special case of dependent sum. Given types $A$ and $B$, consider ...
In traditional Martin-Löf type theory there is no distinction between types and propositions. This goes under the slogan "propositions as types". But there are sometimes reasons for distinguishing them. CoC does precisely that.There are many variants of CoC, but most would have$$\mathsf{Prop} : \mathsf{Type}$$but not $\mathsf{Type} : \mathsf{Prop}$. ...
Dependent types are types which depend on values in any way. A classic example is "the type of vectors of length n", where n is a value. Refinement types, as you say in the question, consist of all values of a given type which satisfy a given predicate. E.g. the type of positive numbers. These concepts aren't particularly related (that I know of). Of course, ...
First, I assume you've already heard of the Church-Turing thesis, which states that anything we call “computation” is something that can be done with a Turing machine (or any of the many other equivalent models). So a Turing-complete language is one in which any computation can be expressed. Conversely, a Turing-incomplete language is one in which there is ...
The question under what circumstances we need to jump from a universe to one higher in the hierarchy is a good one. Having the hierarchy and the ability to climb it is important. You need to jump levels when you want to treat a universe as a type or as part of a type. For example to define functions of (non-dependent) type$$A \rightarrow \mathcal{U}_i$$...
Gilles answer is a good one, except for the paragraph on the real numbers, which is completely false, except for the fact that the real numbers are indeed a different kettle of fish. Because this sort of misinformation seems to be quite widespread, I would like to record here a detailed rebuttal.It is not true that all inductive types are denumerable. For ...
Yes, it is possible to express a precise type for a sorting routine, such that any function having that type must indeed sort the input list.While there might be a more advanced and elegant solution, I'll sketch an elementary one, only.We will use a Coq-like notation. We start by defining a predicate requiring that f: nat -> nat acts as a permutation ...
It is an illusion that the computation rules "define" or "construct" the objects they speak about. You correctly observed that the equation for $\mathrm{ind}_{=_A}$ does not "define" it, but failed to observe that the same is true in other cases as well. Let us consider the induction principle for the unit type $1$, which seems particularly obviously "...
To elaborate on gallais' clarifications, a type theory with impredicative Prop, and dependent types, can be seen as some subsystem of the calculus of constructions, typically close to Church's type theory. The relationship between Church's type theory and the CoC is not that simple, but has been explored, notably by Geuvers' excellent article. For most ...
There are a lot of misconceptions here. To begin, MLTT doesn't have subtypes, so Java is not going to simply be a fragment of it. It does not require dependent types to make either of the types you gave. A dependent type system doesn't need to have a "type" of types (a universe) in general (MLTT does have universes though), nor do you need dependent types ...
I will slightly amend Martin's answer to explain where cumulativity comes in (the rule which says that $X : \mathcal{U}_i$ and $i \leq j$ entail $X : \mathcal{U}_j$). Suppose we have $A : \mathcal{U}_{42}$ and we would like to give a type to $A \to \mathcal{U}_{99}$. The formation rule for $\to$ is this:$$\frac{\Gamma \vdash X : \mathcal{U}_i \qquad \Gamma \...
The problem with Church encodings is that you cannot obtain induction principles for your types meaning that they are pretty much useless when it comes to proving statements about them.In terms of minimality of the system, one path mentioned in the comments is to use containers and (W / M)-types however they are rather extensional so that's not really ...
But is that exactly where they are located in the lambda cube?The lambda cube is not a giant spectrum on which all programming languages can be classified. It is precisely eight languages, which combine a lambda calculus (values abstracted over values) with all possible combinations of three features:Values abstracted over types (parametric polymorphism)...
I think you're confusing two things: dependently typed languages are convenient for specifying properties and giving proofs about functional programs. The techniques you mention are possible decision procedures for certain properties of functional programs.The ability to specify program properties usually takes place within a logic. Dependent types are a ...
There are multiple ways to define a mathematical structure, depending on what properties you consider to be the definition. Between equivalent characterizations, which one you take to be the definition and which one you take to be an alternative characterization is not important.In constructive mathematics, it is preferable to pick a definition that makes ...
Coq has 3 "big" types:Prop is meant for propositions. It is impredicative, meaning that you can instantiate polymorphic functions with polymorphic types. It also has proof irrelevance, meaning that if $p_1, p_2 : P$ then $p_1 = p_2$. This allows terms that are only used for proof to be erased in any code generated by Coq.Set is meant for computation. It's ...
There's a nice idiom, which is explained more in chapter 22 of Types and Programming Languages (it's used for polymorphism rather than dependent types, but the idea is the same). The idiom is as follows:A type checking system can be turned into a type inference system by making the rules linear and adding constraints.I'll use the application rule as an ...
Your code does not work. I would suggest that you forget about the universe levels for the time being (the $\ell$ thing) and focus on simpler things first. Here is working code:

idd : (A : Set) → A → A
idd A a = a

The type of idd is (A : Set) → A → A. It is a dependent product, i.e., it could be written as $\prod_{A : \mathsf{Set}} A \to A$ in mathematical ...
Yes, it can. While conceptually it's not that difficult, it hasn't been studied all that much. One aspect of the field is cost semantics such as the research done by Guy Blelloch.In the vein of the video Anton mentioned is Daniellson's work in Lightweight Semiformal Time Complexity Analysis for Purely Functional Data Structures. This does indeed use a ...
Why are recursive types seldomly seen in dependent type theory?The point of inductive types is precisely that you get normalization. Unrestricted recursive types simply lead to non-normalizing terms.Given any type $A$, we may inhabit $A$ with a non-normalizing term as follows. Consider the recursive type$$D = D \to A.$$The term $\omega \mathrel{{:}{=}} ...
The problem is not specific to homotopy type theory. In type theory in general, if there is a type of all types, then every type is inhabited. This was shown first by Girard who encoded the Burali-Forti paradox in type theory.A simplification of the paradox was found by Hurkens. Here is Agda code for it, and here is Coq code.
I'm no HoTT person, but I'll throw in my two cents. Suppose we are wanting to make a function $$f_A : \prod_{x,y : A}\prod_{p : x =_A y} C(x,y,p)$$ How would we do this? Well, suppose we're given any $x,y : A$ and a proof of their equality $p : x =_A y$. Since I know nothing about the arbitrary type $A$, I know nothing about the `structure' of $x,y$. ...
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash what did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}, make a small html file that looks like<!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Glossary
Here we will describe a few terms often used in the context of FLEUR calculations
atomic units
Almost all input and output in the FLEUR code is given in atomic units, with the exception of the U and J parameters for the LDA+U method in the input-file and the bandstructure and the DOS output-files where the energy unit is eV.
energy units: 1 Hartree (htr) = 2 Rydberg (Ry) = 27.21 electron volt (eV)
length units: 1 bohr (a.u.) = 0.529177 Ångström = 0.0529177 nm
electron mass, charge and Planck's constant h/2π (ℏ) are unity
speed of light = e'^2^'/ℏ · 1/α; fine-structure constant α: 1/α = 137.036
band gap
The band-gap printed in the output ([[out]] file) of the FLEUR code is the energy separation between the highest occupied Kohn-Sham eigenvalue and the lowest unoccupied one. Generally this value differs from the physical band-gap, or the optical band-gap, due to the fact that Kohn-Sham eigenvalues are in a strict sense Lagrange multipliers and not quasiparticle energies (see e.g. Perdew & Levy, PRL 51, 1884 (1983)).
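A minimal sketch of how such a Kohn-Sham gap is read off an eigenvalue spectrum (the eigenvalue and occupation lists below are hypothetical, energies in eV; FLEUR does this internally from the [[out]] file data):

```python
# Band gap as the separation between the highest occupied and lowest
# unoccupied Kohn-Sham eigenvalues (hypothetical example data, in eV).
eigenvalues = [-5.2, -1.3, 0.9, 2.4]
occupations = [2.0, 2.0, 0.0, 0.0]

homo = max(e for e, f in zip(eigenvalues, occupations) if f > 0)
lumo = min(e for e, f in zip(eigenvalues, occupations) if f == 0)
print(lumo - homo)  # Kohn-Sham gap, not the physical quasiparticle gap
```

As the text stresses, this number is a difference of Lagrange multipliers and generally underestimates the physical gap.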
core levels
States which are localized near the nucleus and show no or negligible dispersion can be treated in an atomic-like fashion. These core levels are excluded from the valence electrons and not described by the FLAPW basisfunctions. Nevertheless, their charge is determined at every iteration by solving a Dirac equation for the actual potential. Either a radially symmetric Dirac equation is solved (one for spin-up, one for spin-down) or, if @@kcrel=1@@ in the input file, even a magnetic version (cylindrical symmetry) is solved.
distance (charge density)
In an iteration of the self consistency cycle, from a given input charge density, ρ'^in^', an output density, ρ'^out^', is calculated. As a measure of how different these two densities are, the distance of charge densities (short: distance, d) is calculated. It is defined as the integral over the unit cell: {$ d = \int || \rho^{in} - \rho^{out} || d \vec r $}\ and gives an estimate of whether self-consistency is approached or not. Typically, values of 0.001 milli-electron per unit volume (a.u.'^3^') are small enough to ensure that most properties have converged. You can find this value in the out-file, e.g. by @@grep dist out@@. In spin-polarized calculations, distances for the charge- and spin-density are provided; for non-collinear magnetism calculations even three components exist. Likewise, in an LDA+U calculation a distance of the density matrices is given.
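A minimal numerical sketch of this distance on a uniform real-space grid (the densities below are hypothetical random arrays; FLEUR computes the integral internally):

```python
import numpy as np

# Distance d = integral over the unit cell of |rho_in - rho_out|,
# approximated as a Riemann sum over grid points times the voxel volume.
def density_distance(rho_in, rho_out, voxel_volume):
    return np.sum(np.abs(rho_in - rho_out)) * voxel_volume

rng = np.random.default_rng(0)
rho_in = rng.random((8, 8, 8))                              # fake input density
rho_out = rho_in + 1e-4 * rng.standard_normal((8, 8, 8))    # nearly converged
d = density_distance(rho_in, rho_out, voxel_volume=0.01)
print(d)  # small value -> close to self-consistency
```

A decreasing d from iteration to iteration is the usual sign that the mixing scheme is converging.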
energy parameters
To construct the FLAPW basisfunctions such that only the relevant (valence) electrons are included (and not, e.g., 1s, 2s, 2p for a 3d-metal), we need to specify the energy range of interest. Depending slightly on the shape of the potential and the muffin-tin radius, each energy corresponds to a certain principal quantum number "n" for a given "l". E.g. if for a 3d transition metal all energy parameters are set to the Fermi level, the basis functions should describe the valence electrons 4s, 4p, and 3d. Also for the vacuum region we define energy parameters. If more than one principal quantum number per "l" is needed, local orbitals can be specified.
Fermi level
In a calculation, this is the energy of the highest occupied eigenvalue (or, sometimes it can also be the lowest unoccupied eigenvalue, depending on the "thermal broadening", i.e. numerical issues). In a bulk calculation, this energy is given relative to the average value of the interstitial potential; in a film or wire calculation, it is relative to the vacuum zero.
interstitial region
Every part of the unit cell that does not belong to the muffin-tin spheres and not to the vacuum region. Here, the basis (charge density, potential) is described by 3D planewaves.
lattice harmonics
Symmetrized spherical harmonics. According to the point group of the atom, only certain linear combinations of spherical harmonics are possible. A list of these combinations can be found at the initial section of the out-file.
local orbitals
To describe states outside the valence energy window, it is recommended to use local orbitals. This can be useful for lower-lying semicore-states, as well as unoccupied states (note, however, that this just enlarges the basis-set and does not cure DFT problems with unoccupied states).
magnetic moment
The magnetic (spin) moment can be defined as the difference between "spin-up" and "spin-down" charge, either in the entire unit cell, or in the muffin-tin spheres. Both quantities can be found in the out-file, the latter one explicitly marked by " --> mm", the former has to be calculated from the charge analysis (at the end of this file). \ The orbital moments are found next to the spin moments when SOC is included in the calculation. They are only well defined in the muffin-tin spheres as {$ m_{orb} = \mu_B \sum_i < \phi_i | r \times v | \phi_i > $}.\ In a collinear calculation, the spin direction without SOC is arbitrary, but assumed to be in z-direction. With SOC, it is in the direction of the specified spin-quantization axis. The orbital moment is projected on this axis. In a non-collinear calculation, the spin directions are given explicitly in the input-file.
muffin-tin sphere
Spherical region around an atom. The muffin-tin radius is an important input parameter. The basis inside the muffin-tin sphere is described by spherical harmonics times a radial function. This radial function is given numerically on a logarithmic grid. The charge density and potential here are also described by a radial function times the lattice harmonics.
Procedures 1 and 2 are equivalent when the action is quadratic in the momentum, and when there is a gauge fixing which produces a unitary quantum field theory. Unitarity is not obvious in the Path integral, as immediately noted by Dirac. It is established either by proving reflection positivity in the path integral directly, or by passing to a canonical description where the unitarity is obvious, because the Hamiltonian is real.
It is important to note that the fact that the quantities in the path integral are not operators is
completely insignificant. Their products don't commute, and require careful definition in terms of time order, which in every way corresponds to the ordering ambiguities in the Hamiltonian formalism. If you want to think of them as operators, you can: they act on the incoming boundary conditions in the same way as Heisenberg operators, because they are just the matrix elements of Heisenberg operators. There is no difference in the properties of the quantities in the path integral formalism and any other formalism; they don't get easier in the path integral.

Feynman-Fourier transform
For Hamiltonians which are not quadratic in the momentum, it is trickier to pass to a path integral: the quantum action is not equal to the classical action. The general prescription to pass to the Feynman description is through the phase-space path integral:
$$ K(x,y) = \int Dx\, Dp\; e^{i\int (p\dot{x} - H(p,q))\, dt}$$
where the term $p\dot{x}$ is to be interpreted as $p_t(x_{t+\epsilon} - x_t)$, that is, $\dot{x}$ is a forward difference, and H(p,q) is "normal ordered", meaning all p terms are commuted to appear first.
Then the Feynman form is given by integrating out the momentum. This cannot be done in closed form in general, so there are many examples of well-behaved Hamiltonians whose Lagrangian description is not closed-form expressible, for example
$$H= p^4 + V(x)$$
and there are converse examples of nice Lagrangians whose Hamiltonian form is not very nice. I will give such an example in a minute. But first, the Feynman transform.
When the Hamiltonian is of the form
$$H =K(p) + V(x)$$
Then the Lagrangian description is expressed entirely in terms of the function K' appearing in this formula:
$$ e^{-K'(v)} = \int e^{-K(p) + i p v} dp$$
That is, the Feynman transform K' is minus the log of the Fourier transform of the exponential of minus the original function. To see that this works is simple: you Wick rotate each integral over p and do the integral formally using K'.
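The transform can be checked numerically; here is a hedged sketch for the quadratic case $K(p) = p^2/2$, where $K'(v) = v^2/2$ up to the additive constant $-\log\sqrt{2\pi}$ coming from the normalization of the Gaussian integral (the constant only shifts the path-integral measure):

```python
import numpy as np

# Numerical check of e^{-K'(v)} = integral of e^{-K(p) + i p v} dp
# for K(p) = p^2/2, via a Riemann sum on a fine momentum grid.
def feynman_transform(K, v, p_max=40.0, n=400001):
    p = np.linspace(-p_max, p_max, n)
    dp = p[1] - p[0]
    val = (np.exp(-K(p)) * np.exp(1j * p * v)).sum() * dp
    return -np.log(val.real)  # K'(v), including the constant

K = lambda p: 0.5 * p**2
const = -0.5 * np.log(2 * np.pi)   # v-independent normalization constant
for v in (0.0, 1.0, 2.0):
    print(v, feynman_transform(K, v), 0.5 * v**2 + const)
```

The two columns agree, confirming that the quadratic case reproduces itself under the transform.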
Each exactly expressible Feynman transform is interesting, but there are very few. In the literature, there is exactly one:
$$ K(p) = {1\over 2} p^2 \implies K'(v) = {1\over 2} v^2 $$
This takes care of quadratic momentum. If you restrict your attention to the published literature, the Feynman integral table is that ridiculous. This much takes care of all the usual quantum field theories, however, so it is not insignificant.
More Feynman transforms
Since the literature on this is pathetic, here are some nontrivial Feynman transforms, and the physics they describe:
Cauchy quantum mechanics: $$ K(p) = |p| \implies K'(v) = \log( 1+ v^2) $$ (up to an additive constant)
This is a nice transform, because the Lagrangian path integral you get (in imaginary time) is
$$\int Dq\, e^{-\int \left[\log(1+|\dot{q}|^2) + V(q)\right] dt}$$
This path integral defines a path integral over Levy flights whose stable distribution is the Cauchy distribution. You can see this by looking at the propagation function between adjacent times, it gives a Cauchy distribution. This path integral defines Cauchy quantum mechanics. It's a special case of
Levy quantum mechanics: $$ K(p) = |p|^\alpha \implies K'(v) = - \log( L_\alpha(v) ) $$
For $0<\alpha<2$, and where $L_\alpha$ is the unit Levy stable distribution for exponent $\alpha$. These quantum mechanical systems have been studied in recent years, but their path integral doesn't appear anywhere in the literature. The path integral is given by the Feynman transform.
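A hedged numerical sanity check of the Cauchy case above: the propagation kernel between adjacent times, $(1/2\pi)\int e^{-|p|} e^{ipv}\,dp$, should be the Cauchy density $1/(\pi(1+v^2))$.

```python
import numpy as np

# Fourier transform of e^{-|p|} on a fine grid; the imaginary part cancels
# by symmetry, so only the cosine part is summed.
p = np.linspace(-60.0, 60.0, 600001)
dp = p[1] - p[0]
weight = np.exp(-np.abs(p))
for v in (0.0, 0.5, 2.0):
    kernel = (weight * np.cos(p * v)).sum() * dp / (2 * np.pi)
    print(v, kernel, 1.0 / (np.pi * (1 + v**2)))
```

The kernel matches the Cauchy distribution, which is the statement that the short-time steps of this path integral are Cauchy-distributed Levy flights.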
There are tons more interesting Feynman transforms, they are the analog of Legendre transforms in classical mechanics, and are just as useful.
Proving unitarity
The path integral is well defined for any Euclidean statistical theory, but only a very few of these continue to quantum mechanics. A proof of unitarity usually passes to a Hamiltonian formulation, because this is manifestly unitary.
An example of a nonunitary renormalizable path integral statistical system which is otherwise perfectly ok is
$$\int d^8x \left[\, |\nabla \phi|^4 + Z|\nabla\phi|^2 + t\,\phi^2 + \lambda \phi^4 \,\right] $$
This system was studied in $8-\epsilon$ dimensions by Mukhamel, because its epsilon expansion is pretty much the same as the $4-\epsilon$ expansion of the $\phi^4$ model. In eight dimensions, it defines a perfectly good second order point when Z and t are tuned to the right values. But the theory is absolutely not unitary--- there aren't any interacting scalar quantum theories in 8 dimensions. This can be seen immediately from the Kallen representation.
Any propagator in a unitary theory can be expressed in Euclidean space as
$$ G(k) = \int ds\, {\rho(s) \over k^2 + s}$$
That is, as a superposition of ordinary propagators at different values $s$ of the squared mass. $\rho(s)$ is non-negative, because in real time it is the norm of the state created by the field whose propagator you are expressing. It is this representation that tells you that wrong sign poles are ghost states.
The ${1\over k^4 + (A+B) k^2 + AB}$ Mukhamel Lifschitz-point propagator (with strange parametrization) is expressible as a spectral representation by partial fractions:
$$G(k) \propto {1\over k^2 + A} - {1\over k^2 + B}$$
This Källén-Lehmann spectral representation is clearly ghosty. The double-pole case has a Lehmann function which is the derivative of a delta function, which is not positive definite either, by limits.
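The partial-fraction identity behind the ghost argument is easy to verify numerically; a minimal sketch with hypothetical mass-squared parameters A and B:

```python
import numpy as np

# Check 1/(k^4 + (A+B)k^2 + AB) = [1/(k^2+A) - 1/(k^2+B)] / (B-A):
# one of the two poles necessarily enters with a negative residue (a ghost).
A, B = 1.3, 2.7                      # hypothetical values, A != B
k = np.linspace(0.0, 5.0, 101)
quartic = 1.0 / (k**4 + (A + B) * k**2 + A * B)
split = (1.0 / (k**2 + A) - 1.0 / (k**2 + B)) / (B - A)
print(np.max(np.abs(quartic - split)))  # agrees to rounding error
```

Since the residue of the second pole is negative whenever $B > A$, no choice of parameters can make both spectral weights positive.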
There are tons of non-unitary Euclidean theories, and to find the unitary ones, the Hamiltonian formulation is very helpful. Finding a no-ghost gauge and transforming to canonical form is how you prove that gauge theory is unitary, for example.
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate system (that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, and $\vec{p}$ is, roughly, rotating).
would anyone be kind enough to shed some light on this for me?
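The extra term can be seen numerically; here is a hedged sketch (a particle on a circular orbit about the old origin, with unit mass, radius, and angular frequency assumed for illustration):

```python
import numpy as np

# For a circular orbit about the origin, L = r x p is constant in time,
# but after a fixed shift R of the origin the extra term R x p rotates
# with the momentum, so L' = R x p + L varies in time.
m, omega, a = 1.0, 1.0, 1.0
R = np.array([2.0, 0.0, 0.0])            # constant shift of the origin
for t in (0.0, 0.5, 1.0):
    r = a * np.array([np.cos(omega*t), np.sin(omega*t), 0.0])
    p = m * a * omega * np.array([-np.sin(omega*t), np.cos(omega*t), 0.0])
    L = np.cross(r, p)                   # constant: L_z = m a^2 omega
    Lshift = np.cross(R + r, p)          # L'_z = 1 + 2 cos(omega t): varies
    print(t, L[2], Lshift[2])
```

This reproduces the "absurd" result in the question: about the shifted origin the central force exerts a net torque, so $\vec{L'}$ need not be conserved.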
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me to another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
I try to be rational and keep my questions as impersonal as I can in order to comply to the community guidelines. But this one is making me
mad. Here it goes. Consider the uniform distribution on $[0, \theta]$. The likelihood function, using a random sample of size $n$, is $\frac{1}{\theta^{n}}$. Now $1/\theta^n$ is decreasing in $\theta$ over the range of positive values. Hence it will be maximized by choosing $\theta$ as small as possible while still satisfying $0 \leq x_i \leq \theta$.

The textbook says 'That is, we choose $\theta$ equal to $X_{(n)}$, or $Y_n$, the largest order statistic'.

But if we want to minimize $\theta$ to maximize the likelihood, why do we choose the biggest $x$? Suppose we had real numbers for $x$ like $X_{1} = 2, X_{2} = 4, X_{3} = 8$. If we choose 8, that yields $\frac{1}{8^{3}}=0.001953125$. If we choose 2, that yields $\frac{1}{2^{3}}=0.125$. Therefore why do we want the maximum in this case, $X_{(n)}$, and not $X_{1}$, since we've just seen with real numbers that the smaller the $x$ the bigger the likelihood? Thanks!
$\theta$ is the parameter to estimate, which corresponds to the upper bound of the $U(0,\theta)$. The observed samples are $x_1=2,\,x_2=4$, and $x_3=8$. The likelihood function to maximize is $\mathcal{L}(\theta|X) = \frac{1}{\theta^n}$ with $X$ corresponding to the observed values (a vector, really), $\theta$ the upper margin of the interval for which this uniform distribution is defined, and $n$ the number of samples.
In order to "see" this intuitively, it's important to realize that $\theta$ has to be at least as big as your largest sampled value ($8$) to avoid leaving samples out of the interval for which your probability density function is defined.
Picking out $8$ would render the value of the $pdf$ at any point equal to $\frac{1}{8}$, and the joint probability distribution (you are sampling three observations - a vector), $\frac{1}{8^3}$. That would certainly be the maximum of $\mathcal{L}(\theta|X)$, because it is the smallest admissible denominator: $\theta = 2$ or $\theta = 4$ would make the likelihood zero, since the density vanishes for samples above $\theta$ (so $\frac{1}{2^3}$ and $\frac{1}{4^3}$ are never attained).
What you are doing is wrong. You must find the likelihood function. You found $1/\theta^n$; but where is it defined? It is true that $X_n$ is the maximum likelihood estimator because it maximizes the true likelihood function. How do you find it?
Added: Your answer is actually in the right direction but as I mentioned it is missing a crucial point which alters everything. So the right way of writing down the likelihood function is as follows:
\begin{align}L(x_n;\theta)=\prod_{n=1}^N\theta^{-1}\mathbf{1}_{0\leq x_n\leq \theta}(x_n)\\=\theta^{-N}\prod_{n=1}^N\mathbf{1}_{0\leq x_n\leq \theta}(x_n)\end{align}
Until now, $L$ is a function of $x_n$; now let's write it as a function of $\theta$:
\begin{align}L(\theta;x_n)=\theta^{-N}\prod_{n=1}^N\mathbf{1}_{\theta \geq x_n}(x_n)\end{align}
Observe that $L(\theta;x)$ is zero if $\theta<x_{(N)}$ (where $x_{(N)}$ denotes the largest observation) and is a positive, decreasing function of $\theta$ if $\theta\geq x_{(N)}$. We can now see that for any choice $\theta>x_{(N)}$, $L(x_{(N)};x)>L(\theta;x)$, which means the maximum is reached at $\hat\theta=x_{(N)}$.
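The argument above is easy to check numerically. Below is a small sketch (using the sample $x = (2, 4, 8)$ from the question) that evaluates the full likelihood, including the indicator term that the question's calculation leaves out:

```python
import numpy as np

def likelihood(theta, x):
    """Uniform(0, theta) likelihood: theta^-n if all samples fit, else 0."""
    x = np.asarray(x, dtype=float)
    if theta < x.max():  # indicator term: some sample falls outside [0, theta]
        return 0.0
    return theta ** (-len(x))

x = [2, 4, 8]
# theta = 2 is NOT admissible: the likelihood is zero, not 1/2^3.
print(likelihood(2, x))   # 0.0
# theta = 8 (the largest order statistic) is the smallest admissible value.
print(likelihood(8, x))   # 1/8^3 ≈ 0.00195
# Any theta > 8 gives a strictly smaller likelihood.
print(likelihood(9, x) < likelihood(8, x))  # True
```

The indicator is exactly what makes the "smaller $\theta$ is better" intuition fail below $x_{(N)}$.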
I was reading this paper http://www.ist.temple.edu/~vucetic/documents/wang11kdd.pdf on adaptive multi-hyperplane machines for non-linear classification.
In that paper, they describe a multiclass SVM with multiple weight vectors per class.
The loss for any classification is
$l(x_n,y_n) = \max\left(0,\,1 + \max_{i\in\mathcal{Y}\setminus y_n} g(i,x_n) - g(y_n,x_n)\right)$
where $y_n$ is the label for the nth example and $x_n$ is the features.
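For concreteness, here is a toy sketch of the multiclass hinge loss written above, with a hypothetical scoring function $g(i, x) = w_i^T x$ (one hyperplane per class; the paper's $g$ takes a max over several hyperplanes per class, but the loss has the same shape):

```python
import numpy as np

def multiclass_hinge_loss(W, x, y):
    """max(0, 1 + max_{i != y} g(i, x) - g(y, x)) with g(i, x) = W[i] @ x."""
    scores = W @ x                        # one score per class
    rivals = np.delete(scores, y)         # scores of all classes i != y
    return max(0.0, 1.0 + rivals.max() - scores[y])

# Toy example: 3 classes, 2 features.
W = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
x = np.array([2.0, 0.5])
print(multiclass_hinge_loss(W, x, y=0))  # 0.0: correct class wins by margin > 1
print(multiclass_hinge_loss(W, x, y=1))  # 2.5: correct class loses, positive loss
```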
I have some confusion about the training of this algorithm. They call this the multi-hyperplane machine (MM).
They say the convex-approximated problem is defined as
$\min_{W}P(W|z) = \frac{\lambda}{2}\|W\|^2 + \frac{1}{N}\sum_{n=1}^{N}l_{cvx}(W;(x_n,y_n);z_n)$
where they have the concave term $-g(y_n,z_n)$ replaced with the convex term $-w^T_{y_n,z_n}x_n$.
I am not sure if I have described it clearly, but I am going to attach a screenshot of the paper as well. The thing is, I don't see the difference between $-g(y_n,z_n)$ and $-w^T_{y_n,z_n}x_n$; they seem like the same term to me.
I might be asking a lot. But can anyone provide some info?
I have marked with a red rectangle the part that I didn't understand. Why is it so?
Answer
It reaches specifications for the time to start up, starting up in 53 seconds, but it does not have 96 percent efficiency.
Work Step by Step
We first find how long it takes to reach 17 rpm. The angular acceleration is:
$$\alpha=\frac{\tau}{I}=\frac{896{,}000\ \mathrm{Nm}}{2.65\times10^7\ \mathrm{kg\,m^2}}=0.034\ \mathrm{rad/s^2}$$
Since 17 rpm equals 1.78 rad/s, it follows that:
$$\Delta t =\frac{1.78\ \mathrm{rad/s}}{0.034\ \mathrm{rad/s^2}}=53\ \mathrm{s}$$
We now check the efficiency specification by finding the average power delivered during spin-up:
$$P=\frac{I\omega^2}{2t}=\frac{(2.65\times10^7)(1.78)^2}{2(53)}\approx 7.9\times10^5\ \mathrm{W}$$
96 percent of 1.5 MW is 1.44 MW, far more than about 0.8 MW, so the turbine is not meeting the efficiency specification.
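As a quick sanity check, the numbers above can be reproduced in a few lines (assuming the given torque, moment of inertia, and a 1.5 MW rated output from the problem statement):

```python
import math

tau = 896_000        # applied torque, N·m (given)
I = 2.65e7           # moment of inertia, kg·m^2 (given)
rated_power = 1.5e6  # rated output, W (assumed from the problem statement)

alpha = tau / I                   # angular acceleration, rad/s^2
omega = 17 * 2 * math.pi / 60     # 17 rpm converted to rad/s
t = omega / alpha                 # spin-up time, s
power = I * omega**2 / (2 * t)    # average power during spin-up, W

print(round(alpha, 3))                # ≈ 0.034
print(round(t))                       # ≈ 53
print(power < 0.96 * rated_power)     # True: efficiency spec not met
```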
This is a reference request for a relationship in quantum field theory between the electromagnetic potential and the electromagnetic field when they are presented in test function form. $U(1)$ gauge invariance becomes a particularly simple constraint on test functions for smeared electromagnetic potential operators to be gauge invariant observables. This is such a simple constraint that I think it must be out there, but I have never seen it in textbooks or in the literature, presumably because we mostly do not work with test function spaces in QFT; instead we use operator-valued distributions directly, where, however, gauge fixing is a perpetual nuisance.
For the electromagnetic potential operator-valued distribution smeared by a test function $f^\rho(x)$ on Minkowski space, $\hat A_f=\int_M \hat A_\rho(x)f^{\rho*}(x)\mathrm{d}^4x$, to be an observable that is invariant under $U(1)$ gauge transformations $\hat A_\rho(x)\rightarrow\hat A_\rho(x)-\partial_\rho\alpha(x)$, we require that $\int_M \partial_\rho\alpha(x)f^{\rho*}(x)\mathrm{d}^4x$ must be zero for all scalar functions $\alpha(x)$.
Integrating by parts over a region $\Omega$ in Minkowski space, we obtain, in terms of differential forms,$$\int_\Omega d\alpha\wedge(\star f^*)=\int_{\partial\Omega}\alpha\wedge(\star f^*)-\int_\Omega \alpha\wedge(d\!\star\! f^*),$$which will be zero for large enough $\Omega$, and hence for the whole of Minkowski space, for any smooth test function $f^\rho(x)$ that has compact support and is divergence-free, $d\!\star\! f=0$.[If we constrain the gauge transformation function $\alpha(x)$ not to increase faster than polynomially with increasing distance in any direction, it will be enough for the test function $f^\rho(x)$ to be Schwartz and divergence-free.]
So we have proved:
Theorem: The smeared electromagnetic potential $\hat A_f$ is a $U(1)$ gauge invariant observable if the test function $f^\rho(x)$ is smooth, of compact support, and divergence-free.
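The theorem can be illustrated numerically in a two-dimensional toy setting. Any divergence-free test function in 2D can be written as $f = (\partial_y F, -\partial_x F)$ for a scalar potential $F$; the sketch below (my own construction, not from any reference) checks on a grid that $\int \partial_\rho\alpha\, f^\rho\, \mathrm{d}^2x \approx 0$ for a rapidly decaying $F$ and a polynomially growing $\alpha$:

```python
import numpy as np

# Symmetric grid over a region large enough that the Gaussian envelope
# has decayed to machine precision at the boundary.
n = 801
xs = np.linspace(-6, 6, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")

F = np.exp(-X**2 - Y**2)             # scalar potential for the test function
fx = np.gradient(F, dx, axis=1)      # f^x = dF/dy
fy = -np.gradient(F, dx, axis=0)     # f^y = -dF/dx, so div f = 0

alpha = X * Y                        # a polynomially growing gauge function
dax = np.gradient(alpha, dx, axis=0) # d(alpha)/dx
day = np.gradient(alpha, dx, axis=1) # d(alpha)/dy

integral = np.sum(dax * fx + day * fy) * dx * dx
print(abs(integral) < 1e-8)  # True: the smeared gauge term vanishes
```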
The divergence-free condition on $f^\rho(x)$ ensures that the commutator for creation and annihilation operators associated with the electromagnetic potential $\hat A_f=\mathbf{\scriptstyle a}^{\,}_{f^*}+\mathbf{\scriptstyle a}^{\dagger}_f$,$$[\mathbf{\scriptstyle a}^{\,}_f,\mathbf{\scriptstyle a}^\dagger_g]=-\hbar\int \tilde f^*_\rho(k)\tilde g^\rho(k)2\pi\delta(k_\nu k^\nu)\theta(k_0)\frac{\mathrm{d}^4k}{(2\pi)^4},$$is positive semi-definite (which is necessary for us to be able to construct a vacuum sector Hilbert space), and that because $\delta f=\delta g=0$ we can construct, on Minkowski space, $f=\delta F$, $g=\delta G$, where $F$ and $G$ are bivector potentials for the electromagnetic potential test functions $f$ and $g$.
In terms of $F$ and $G$, we can write $a^{\,}_F=\mathbf{\scriptstyle a}^{\,}_{\delta F}$, $a_G^\dagger=\mathbf{\scriptstyle a}^\dagger_{\delta G}$, which satisfy the electromagnetic field commutator$$[a^{\,}_F,a_G^\dagger]=-\hbar\int k^\alpha\tilde F_{\alpha\mu}^*(k) k^\beta\tilde G_\beta{}^\mu(k)2\pi\delta(k_\nu k^\nu)\theta(k_0)\frac{\mathrm{d}^4k}{(2\pi)^4}.$$Consequently, turning the usual relationship around because we are working with test functions instead of directly with quantum fields, we can regard test functions for the electromagnetic field as potentials for test functions for the electromagnetic potential.
Because of the restriction that electromagnetic potential test functions must have compact support (or that gauge transformations must be constrained if electromagnetic potential test functions are taken to be Schwartz), electromagnetic potential observables are less general than electromagnetic field observables if electromagnetic field test functions are taken to be Schwartz (as is most commonly assumed), or equivalent if electromagnetic field test functions are taken to be smooth and to have compact support.
So, references?
EDIT (October 24th 2011): Noting the Answer from user388027, and my comment, a decent reference for what constraints are conventionally imposed on gauge transformations would be welcome. I would particularly hope for a rationale for the constraints from whatever theoretical standpoint is taken by the reference.

This post has been migrated from (A51.SE)
Period detection and plotting¶
Introduction¶
gammapy.time provides methods for period detection in time series, i.e. light curves of \(\gamma\)-ray sources. The period detection is implemented in the scope of the Lomb-Scargle periodogram, a method that detects periods in unevenly sampled time series typical for \(\gamma\)-ray observations. We refer to the astropy.stats.LombScargle class and the documentation within for an introduction to the Lomb-Scargle algorithm, its interpretation and usage [1].
With robust_periodogram, the analysis is extended to a more general case where the unevenly sampled time series is contaminated by outliers, e.g. due to the source's high states. This robust periodogram includes the naive Lomb-Scargle implementation as a special case.
robust_periodogram returns the periodogram of the input. This is done by fitting a single sinusoidal model to the light curve and computing a normalised \(\chi^2\)-statistic for each period of interest. The Lomb-Scargle algorithm uses a naive least-squares regression and is thus sensitive to outliers in the light curve. In contrast, robust_periodogram uses different loss functions that account for outliers [2]. The location of the highest periodogram peak is assumed to be the period of an intrinsic periodic behaviour.
The result's significance can be estimated in terms of a false alarm probability (FAP) with the respective function of the astropy.stats.LombScargle class. It computes the probability of the highest periodogram peak being observed by chance if the underlying light curve consisted of Gaussian white noise only.
Both the periodogram and the light curve can be plotted with plot_periodogram.
See the Astropy docs for more details about the Lomb-Scargle periodogram and its false alarm probability [1]. The loss functions for the robust periodogram are provided by scipy.optimize.least_squares [2].
Getting Started¶
Basic Usage¶
robust_periodogram takes a light curve in the data format time and flux as input. It returns the period grid, the periodogram peaks and the location of the highest periodogram peak.
>>> import numpy as np
>>> from gammapy.time import robust_periodogram
>>> time = np.linspace(0.1, 10, 100)
>>> flux = np.sin(2 * np.pi * time)
>>> periodogram = robust_periodogram(time, flux)
>>> periodogram['best_period']
0.99
The returned period diverges from the true period of \(P = 1\), since this period is not contained in the linear period grid automatically computed by robust_periodogram.
Period Grid¶
The checked periods can be specified optionally by forwarding an array periods.
>>> periods = np.linspace(0.1, 10, 100)
>>> periodogram = robust_periodogram(time, flux, periods=periods)
>>> periodogram['best_period']
1.0
If not given, a linear grid will be computed limited by the length of the light curve and the Nyquist frequency.
Measurement Uncertainties¶
robust_periodogram can also handle measurement uncertainties. They can be forwarded as an array flux_err.
>>> rand = np.random.RandomState(42)
>>> flux_err = 0.1 * rand.rand(100)
>>> periodogram = robust_periodogram(time, flux, flux_err=flux_err, periods=periods)
>>> periodogram['best_period']
1.0
Loss Function and Loss Scale¶
To obtain a robust periodogram, the loss function loss and the loss scale parameter scale need to be given.
>>> periodogram = robust_periodogram(time, flux, loss='huber', scale=1)
For available parameters, see [2]. The choice of loss and scale depends on the data set and needs to be optimised by the user.
If the loss function linear is used, robust_periodogram performs an ordinary linear least-squares regression. It is then identical to astropy.stats.LombScargle and scale can be set arbitrarily. This is the default setting.
>>> from astropy.stats import LombScargle
>>> periods = np.linspace(1.1, 10, 90)
>>> periodogram = robust_periodogram(time, flux, periods=periods)
>>> LSP = LombScargle(time, flux).power(1. / periods)
>>> np.isclose(periodogram['power'], LSP).all()
True
Also, if scale is set to infinity, this results in the Lomb-Scargle periodogram for any loss. Default settings are recommended if no outliers are expected in the light curve.
False Alarm Probabilities¶
For the determination of peak significance in terms of a false alarm probability, see [1] and [7]. Methods for the false alarm probability can be chosen from methods [3]. The respective module can be called, for example with the baluev method:
>>> from astropy.stats.lombscargle import _statistics
>>> periods = np.linspace(0.1, 10, 100)
>>> periodogram = robust_periodogram(time, flux, periods=periods)
>>> fap = _statistics.false_alarm_probability(
...     periodogram['power'].max(), 1. / periodogram['periods'].min(),
...     time, flux, flux_err, 'standard', 'baluev'
... )
>>> fap
0.0
If loss functions other than linear are used, the bootstrap method is not recommended, because it internally calls astropy.stats.LombScargle (linear least-squares regression), which is not identical to the non-linear robust periodogram.
Plotting¶
>>> import matplotlib.pyplot as plt
>>> from gammapy.time import plot_periodogram
>>> fig = plot_periodogram(
...     time, flux, periodogram['periods'], periodogram['power'],
...     flux_err, periodogram['best_period'], fap
... )
>>> fig.show()
Example¶
An example of detecting a period with robust_periodogram is shown in the figure below. The code can be found under [4]. The light curve of the X-ray binary LS 5039 is used, observed in 2005 with H.E.S.S. at energies above \(0.1\,\mathrm{TeV}\) [4]. The robust periodogram reveals the period of \(P = (3.907 \pm 0.001)\,\mathrm{d}\), in agreement with [5] and [6].
The FAP of the highest periodogram peak is estimated as \(4.06\times10^{-19}\) with the baluev method. The other methods return the following FAP:
method        FAP
'single'      \(1.04\times10^{-21}\)
'naive'       \(5.40\times10^{-16}\)
'davies'      \(4.05\times10^{-15}\)
'baluev'      \(4.05\times10^{-15}\)
'bootstrap'   \(0.0\)
The plot of the light curve shows no evidence for outliers. Thus, linear is used as loss with an arbitrary scale of \(1\). As periods, a linear grid is forwarded that is limited to \(10\,\mathrm{d}\) to decrease computation time in favour of a higher resolution of \(0.001\,\mathrm{d}\).
The periodogram has many spurious peaks, which are due to several factors:
Errors in observations lead to leakage of power from the true peaks.
The signal is not a perfect sinusoid, so additional peaks can indicate higher-frequency components in the signal.
Sampling biases the periodogram and leads to failure modes. Its impact can be quantified by the spectral window function. This is the periodogram of the observation window and can be computed by setting flux and flux_err to one and running astropy.stats.LombScargle.
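Since the spectral window function is just the periodogram of the sampling times themselves, it can be sketched with plain numpy (a classical-periodogram approximation, not the gammapy implementation):

```python
import numpy as np

def spectral_window(times, freqs):
    """Classical periodogram of the observation window (flux set to one)."""
    times = np.asarray(times, dtype=float)
    # |mean(exp(-2*pi*i*f*t))|^2, normalised to 1 at f = 0
    phases = np.exp(-2j * np.pi * np.outer(freqs, times))
    return np.abs(phases.mean(axis=1)) ** 2

# Hypothetical nightly sampling: one observation per day for 100 days.
times = np.arange(100.0)
freqs = np.array([0.37, 1.0])
power = spectral_window(times, freqs)
print(power[1])         # ~1.0: strong window peak at f = 1/day from nightly sampling
print(power[0] < 1e-6)  # True: no power at an unrelated frequency
```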
It shows a prominent peak around one day that arises from the nightly observation cycle. Aliases in the light curve's periodogram, \(P_{alias}\), are expected to appear at \(f_{true} + n f_{window}\). In terms of periods,
\[P_{alias} = \left(\frac{1}{P_{true}} + n f_{window}\right)^{-1}\]
for integer values of \(n\) [7]. For the peak in the spectral window function at \(f_{window} = 0.997\,\mathrm{d}^{-1}\), this corresponds to the third highest peak in the periodogram at \(P_{alias} = 0.794\,\mathrm{d}\).
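The alias relation above can be evaluated directly. With the quoted \(f_{window} = 0.997\,\mathrm{d}^{-1}\) and \(P_{true} = 3.907\,\mathrm{d}\), the first alias lands close to the quoted \(0.794\,\mathrm{d}\) peak (the small offset depends on the exact window-peak frequency used):

```python
def alias_period(p_true, f_window, n):
    """P_alias = (1/P_true + n * f_window)^-1 for integer n."""
    return 1.0 / (1.0 / p_true + n * f_window)

p_true = 3.907    # orbital period of LS 5039, days
f_window = 0.997  # window-function peak, 1/days
for n in (1, 2):
    print(round(alias_period(p_true, f_window, n), 3))
# n = 1 gives ≈ 0.798 d, near the quoted 0.794 d alias.
```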
[1] (1, 2, 3) Astropy docs, Lomb-Scargle Periodograms, Link
[2] (1, 2, 3) Scipy docs, scipy.optimize.least_squares, Link
[3] Astropy docs, Utilities for computing periodogram statistics, Link
[4] (1, 2) Gammapy docs, period detection example, Link
[5] F. Aharonian, 3.9 day orbital modulation in the TeV gamma-ray flux and spectrum from the X-ray binary LS 5039, Link
[6] J. Casares, A possible black hole in the gamma-ray microquasar LS 5039, Link
[7] (1, 2) Jacob T. VanderPlas, Understanding the Lomb-Scargle Periodogram, Link
There are many interesting new hep-th papers today. The first author of the first paper is Heisenberg; Kallosh and Linde have a post-Planck update on the CMB – a pattern claimed to be compatible with the KKLT scenario. There are 17 new papers omitting cross-listings, but I choose the second paper:
In particular, the previously neglected "geometric factor" is \[
\frac{1}{\pi^{m/2}}\int \det ({\mathcal R} + \omega\cdot 1)
\] OK, something like a one-loop measure factor in path integrals. This factor influences the density of the vacua. Does the factor matter?
They decide that sometimes it does and sometimes it does not. More curiously, they find that this factor tends to be an interestingly behaved function of the topological invariants: it decreases steeply towards a minimum as you increase some topological numbers, and then it starts to increase again.
Curiously enough, the minimum is pretty much reached for the values of the topological numbers that are exactly expected to be dominant in string compactifications. In this sense, the critical dimensions and typical invariants in string theory conspire to produce the lowest number of vacua that is mathematically possible, at least when this geometric factor is what we look at.
This is a "hope" I have been explicitly articulating many times – that if you actually count the vacua really properly, or perhaps with some probability weighting that has to be there to calculate which of them could have arisen at the beginning, the "most special or simplest" vacua could end up dominating.
They're not quite there, but they have some substantial factor that reduces – but does not sufficiently reduce – the number of vacua for very large Hodge numbers, i.e. in this sense "complicated topologies of the compactification manifolds". Note that large Hodge numbers (which may become comparable to 1,000 for Calabi-Yau threefolds) are really needed to get high estimates of the number of vacua such as the famous $10^{500}$. You need many cycles and many types of fluxes to obtain the high degeneracies.
Wouldn't it be prettier if the Occam-style vacua with the lowest Hodge numbers were the contenders to become the string vacuum describing the world around us? There could still be a single viable candidate. I have believed that the counting itself is insufficient and that a Hartle-Hawking-like wave function gives a probabilistic weighting that could pick the simplest one. They have some evidence that previously neglected effects could actually suppress the very number of the vacua with large Hodge numbers or other signs of "contrivedness".
Clearly, everyone whose world view finely depends on claims about "the large number of string vacua" has the moral duty to study the paper and perhaps to try to go beyond it.
Suppose the following linear system is given $$Lx=c,\tag1$$ where $L$ is a weighted Laplacian, known to be positive semi-definite with a one-dimensional null space spanned by $1_n=(1,\dots,1)\in\mathbb{R}^n$. The underlying objective is translation invariant in $x\in\mathbb{R}^{n}$: replacing $x$ by $x+a1_n$ does not change the function value (whose derivative gives $(1)$). The only positive entries of $L$ are on its diagonal, each of which is the sum of the absolute values of the negative off-diagonal entries in its row.
I found in one highly cited academic work in its field that, although $L$ is not strictly diagonally dominant, methods such as Conjugate Gradient, Gauss-Seidel, and Jacobi can still be safely used to solve $(1)$. The rationale is that, because of translation invariance, one is free to fix one point (e.g. remove the first row and column of $L$ and the first entry from $c$), thus converting $L$ to a strictly diagonally dominant matrix. Nevertheless, the original system is solved in the full form of $(1)$, with $L\in\mathbb{R}^{n\times n}$.
Is this assumption correct and, if so, what are alternative rationales? I'm trying to understand why the convergence of these methods still holds.
If the Jacobi method is convergent for $(1)$, what can one state about the spectral radius $\rho$ of the iteration matrix $D^{-1}(D-L)$, where $D$ is the diagonal matrix with the diagonal entries of $L$? Is $\rho(D^{-1}(D-L))\leq1$, thus different from the general convergence guarantee $\rho(D^{-1}(D-L))<1$? I'm asking this since the eigenvalues of the normalized Laplacian $D^{-1}L$, which has ones on the diagonal, should be in the range $[0, 2]$.
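A quick numerical check illustrates the concern. For a toy path-graph Laplacian (my own example, not from the cited work), the Jacobi iteration matrix has spectral radius exactly 1, so the standard strict-contraction guarantee does not apply:

```python
import numpy as np

# Weighted Laplacian of a path graph on 3 nodes: positive semi-definite,
# null space spanned by the all-ones vector, not strictly diagonally dominant.
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
D = np.diag(np.diag(L))
J = np.linalg.inv(D) @ (D - L)   # Jacobi iteration matrix D^{-1}(D - L)

rho = max(abs(np.linalg.eigvals(J)))
print(rho)  # spectral radius 1: on the unit circle, not a strict contraction
```

Here the eigenvalue $1$ of $J$ belongs to the null vector $1_n$ of $L$: the error component along $1_n$ is never damped, which is harmless since solutions are only defined up to translation. This toy example also has an eigenvalue $-1$ (from the bipartite structure of the path graph), on which plain Jacobi would oscillate, so $\rho \leq 1$ alone does not settle convergence.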
From the original work:
......................................
At each iteration, we compute a new layout (x(t +1), y(t + 1)) by solving the following linear system: $$ L · x(t + 1) = L(x(t),y(t)) · x(t) \\ L · y(t + 1) = L(x(t),y(t)) · y(t) \tag 8$$ Without loss of generality we can fix the location of one of the sensors (utilizing the translation degree of freedom of the localized stress) and obtain a strictly diagonally dominant matrix. Therefore, we can safely use Jacobi iteration for solving (8)
.......................................
In the above, the notion of "iteration" refers to the underlying minimization procedure and is not to be confused with Jacobi iteration. So, the system is solved by Jacobi (iteratively), and then the solution is brought to the right-hand side of (8), but now for another iteration of the underlying minimization. I hope this clarifies the matter.
Note that I found "Which iterative linear solvers converge for positive semidefinite matrices?", but am looking for a more elaborate answer.