Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
What's the Coulomb Branch and why is it important? I'm studying the introduction of flavour degrees of freedom in the AdS/CFT correspondence and now I'm supposed to calculate the mass spectrum of mesons in the Coulomb branch. I have searched the concept but I always find very long and complex explanations. Could anyone explain it in a direct way, pointing out some physical intuition?
| If the vacuum of the theory is supersymmetric - i.e. SUSY is not broken - then it is annihilated by the SUSY generators. On the other hand, using the SUSY algebra one can show that the Hamiltonian can be written in terms of the SUSY generators. This implies that the vacuum $|0\rangle$ is supersymmetric if and only if $\langle 0|H|0\rangle=0$, i.e. the vacuum energy vanishes. The classical moduli space is defined as the space of scalar field configurations for which the scalar potential of the theory vanishes.
Now consider $\mathcal N=2$ super Yang-Mills. This theory has three dynamical scalar fields: one in the adjoint of the gauge group (from the vector multiplet) and the other two in an arbitrary representation (from the hypermultiplet). The configurations with a non-vanishing vev for the adjoint scalar and vanishing vevs for the hypermultiplet scalars form the Coulomb branch of the moduli space. The configurations with a vanishing vev for the adjoint scalar and non-vanishing vevs for the scalars in the other representation form the Higgs branch of the theory.
The main importance of these branches in my opinion is that they characterize different phases of supersymmetric gauge theories. A little bit more can be found here: What is the relation between the representation the Higgs field transforms under, the types of couplings in the theory and Higgs/Coulomb branches?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Why are free electrons free? This is what I understand so far: in a conductor, the ions have a weak pull on the valence electrons. So when an electric field is applied, the free electrons are able to easily move about. Makes sense.
In a neutral conductor with no electric field, the free electrons aren't bound to any ions. Why? I understand that the ions have a weak pull on the electrons, but what makes electrons leave the ion and stay free?
| An atom in isolation offers a potential well, and electrons form bound states in the well. The energy of those bound states can be calculated exactly in the case of a single-electron (hydrogen-like) atom, or by variational and other computational methods in more complicated cases.
Now when you put several atoms together in a tight and regular array, they offer a combined well resulting from the sum of all their potentials.
That combined well might look (in cartoon form) something like this:
Here the red lines represent electron energy levels that are still confined by the locally stronger potential of their "own" nucleus, but the blue line represents those slightly higher energy levels that see the combined potential as a single large energy well (with some high-frequency detailed structure).
In a conductor, non-conduction electrons fill all the red (local) energy levels and the remaining electrons must (because of Pauli exclusion) then occupy the blue (non-localized) conduction levels.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Convert $\text{mm}^2$ to $\text{m}^2$ Stuck on a very very basic concept.
I have $1750\text{ mm}^2$ and need to get into $\text{m}^2$.
I figured $1750\text{ mm}^2 = (1750\text{ mm}) \cdot (1750\text{ mm})$, but I know this isn't right.
| What you need to do is realize how many $\text{mm}$ fit into one $\text{m}$. The answer is $1000=10^3$. Therefore: $1 \text{m}=10^3 \text{mm}$, or equivalently, $1 \text{mm}= 10^{-3} \text{m}$.
$$1750\ \text{mm}^2=1750\, (10^{-3}\,\text{m})^2=1750 \times 10^{-6}\ \text{m}^2 = 1.75\times 10^{-3}\ \text{m}^2$$
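A quick way to sanity-check the arithmetic, sketched in Python (the helper name is my own, not from the thread): the key point is that the prefix factor gets squared for areas.

```python
# Sanity check of the conversion: 1 mm^2 = (10^-3 m)^2 = 10^-6 m^2,
# so the prefix factor is squared, not the quantity itself.
def convert_area_mm2_to_m2(area_mm2):
    mm_per_m = 1000.0                 # 1 m = 10^3 mm
    return area_mm2 / mm_per_m**2     # divide by (10^3)^2 = 10^6

print(convert_area_mm2_to_m2(1750.0))  # 0.00175
```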
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How does a thin film of soap help defog bathroom mirrors? We often find bathroom mirrors get fogged while hot water is used. Looking this up on the internet, we find several easy solutions:
* use a hot air blower
* use a heater behind the mirror
* a vinegar + water mixture
* a thin film of shaving cream/foam lather
* a thin film of regular (any) bath soap lather
I would like to know:
How does a thin film of soap lather prevent condensation on the mirror surface during a hot shower?
A thread on DIY(StackExchange): technique to make a shower mirror fog-free.
| If you look at condensation fog through a strong magnifier, you'll notice that the fog is actually composed of a large number of hemispherical water droplets. The optical effect is caused by the fact that the tiny droplets act like lenses, scattering the light. On a vertical glass panel like a mirror, the maximum stable droplet radius is strongly affected by the surface tension of the liquid; with a high surface tension, large hemispherical droplets are stable.
If you coat the mirror with soap, however, the surface tension is vastly reduced, and thus there is a reduction in the maximum stable condensation droplet radius. As a result, instead of condensing as a large number of hemispherical droplets which scatter light, you get a uniform thin film of soap water, which has no optical scattering effect.
So it's not so much that you're eliminating condensation (the water will still condense), but rather you're changing the geometric (and thus optical) properties of the water droplets that condense on the surface.
Admittedly, this is a guess, so if anyone has any corrections feel free to point them out.
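One way to make the surface-tension argument semi-quantitative is the capillary length $\sqrt{\gamma/\rho g}$, which sets the size of the largest droplet that surface tension can hold against gravity. A sketch with typical textbook surface-tension values (my own numbers, not from the answer above):

```python
import math

# Capillary length sqrt(gamma / (rho * g)): the scale below which surface
# tension dominates gravity for a droplet.  Surface tensions are typical
# textbook figures (illustrative assumptions).
rho = 1000.0   # water density, kg/m^3
g = 9.81       # m/s^2

lengths = {}
for name, gamma in [("pure water", 0.072), ("soapy water", 0.025)]:
    lengths[name] = math.sqrt(gamma / (rho * g))
    print(f"{name}: capillary length ~ {lengths[name]*1000:.1f} mm")
# lower surface tension -> smaller maximum stable droplet size
```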
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the effect of air friction on the velocity of a satellite? I have heard that a satellite's speed increases because of air friction, but I am confused: how is that possible?
| Air friction, like any form of friction we observe in everyday life, opposes the motion of the body (in this case the motion of the satellite).
It's true that air friction is responsible for decreasing the speed of the satellite along its path, thereby decreasing its kinetic energy and ultimately the total energy of the satellite.
But, as we know, any system transits to a lower energy state after losing energy (exactly as excited electrons in an atom come back to the ground state after losing energy).
The satellite therefore settles into a new orbit of smaller radius than before, and in a lower circular orbit the orbital speed $v=\sqrt{GM/r}$ is higher, so its speed (and with it the angular velocity $\omega = v/r$) increases. Note that angular momentum is not actually conserved here, since the drag force exerts a torque on the satellite; the speed-up follows from the orbit dynamics rather than from angular momentum conservation.
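The claimed speed-up is easy to illustrate with circular-orbit speeds $v=\sqrt{GM/r}$ at two altitudes (the numbers below are illustrative choices of my own):

```python
import math

# Circular-orbit speed v = sqrt(GM/r): a drag-induced drop in altitude
# *raises* the orbital speed.  Radii are illustrative LEO values.
GM = 3.986e14          # Earth's gravitational parameter, m^3/s^2
r_high = 6.871e6       # orbit radius at ~500 km altitude, m
r_low = 6.771e6        # orbit radius at ~400 km altitude, m

v_high = math.sqrt(GM / r_high)
v_low = math.sqrt(GM / r_low)
print(v_high < v_low)  # True: the lower orbit is the faster one
```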
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Understanding fields and their correlation to forces I seem to be confused between the concept of a "force", and a field.
Now let's assume there is a magnetic field of $1$ $\mathrm{Tesla}$, what does that mean in relation to force?
Finally, if the field is $1$ $\mathrm{Tesla}$, does that mean the force produced at that field is always the same?
For example, if a magnetic field source (a solenoid) of $1$ $\mathrm{Tesla}$ can apply a force of $10{,}000$ $\mathrm{Newtons}$, and another source (a permanent magnet) generates the same field strength under the same conditions, does it produce the same force?
| No, the magnetic field does not determine the force by itself; the force depends on other things. If the force discussed is the magnetic force on a moving electric charge, the force is given by
$$
q \mathbf v\times \mathbf B
$$
in SI units. Here $q$ is electric charge of the particle, $\mathbf v$ its velocity and $\mathbf B$ is magnetic field.
If the force is the magnetic force on a piece of uncharged or static matter, then the formula is more complicated and depends on the magnetization, shape and orientation of the body. In the simplest case of a small body in a time-independent magnetic field, the net magnetic force is given by
$$
(\mathbf m \cdot \nabla)\,\mathbf B
$$
where $\mathbf m$ is the magnetic moment of the body. In more complicated cases (the force of a big electromagnet on a car), the net EM force on the body may be calculated with the help of the Maxwell stress tensor.
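The first formula, $q\,\mathbf v\times\mathbf B$, can be evaluated directly; a minimal sketch with made-up example values:

```python
# Direct evaluation of F = q v x B (example values are my own).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

q = 1.602e-19            # charge of a proton, C
v = (1.0e5, 0.0, 0.0)    # velocity along x, m/s
B = (0.0, 0.0, 1.0)      # 1 tesla field along z

F = tuple(q * c for c in cross(v, B))
print(F)   # force along -y; same field, different v gives a different force
```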
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why would spacetime curvature cause gravity? It is fine to say that for an object flying past a massive object, the spacetime is curved by the massive object, and so the object flying past follows the curved path of the geodesic, so it "appears" to be experiencing gravitational acceleration. Do we also say along with it, that the object flying past in reality exeriences NO attraction force towards the massive object? Is it just following the spacetime geodesic curve while experiencing NO attractive force?
Now to the other issue: suppose two objects are at rest relative to each other, i.e. they are not following any spacetime geodesic. Then why will they experience gravitational attraction towards each other? E.g. why will an apple fall to earth? Why won't it sit there in its original position high above the earth? How does the curvature of spacetime cause it to experience an attractive force towards the earth, and why would we need to exert a force in the reverse direction to prevent it from falling? How does the curvature of spacetime cause this?
When the apple was detached from the branch of the tree, it was stationary, so it did not have to follow any geodesic curve. So we cannot just say that it fell to earth because its geodesic curve passed through the earth. Why did the spacetime curvature cause it to start moving in the first place?
|
When the apple was detached from the branch of the tree, it was stationary, so it did not have to follow any geodesic curve.
Even when at rest in space, the apple still advances in space-time. Here is a visualization of the falling apple in distorted space-time:
http://www.youtube.com/watch?v=DdC0QN6f3G4
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "156",
"answer_count": 6,
"answer_id": 0
} |
Bogoliubov transformation with a slight twist Given a Hamiltonian of the form
$$H=\sum_k \begin{pmatrix}a_k^\dagger & b_k^\dagger \end{pmatrix}
\begin{pmatrix}\omega_0 & \Omega f_k \\ \Omega f_k^* & \omega_0\end{pmatrix} \begin{pmatrix}a_k \\ b_k\end{pmatrix}, $$
where $a_k$ and $b_k$ are bosonic annihilation operators, $\omega_0$ and $\Omega$ are real constants and $f_k$ is a complex constant.
How does one diagonalise this with a Bogoliubov transformation? I've seen an excellent answer to a similar Phys.SE question here, but I'm not quite sure how it translates to this example. Any hints or pointers much appreciated.
| This is an eigenvalue problem.
Let's assume your Bogoliubov transformation is of the form $(a_k,b_k)^T=X(c_k,d_k)^T$. What this transformation does is bring your Hamiltonian to the form $H_k=\omega_1c_k^\dagger c_k+\omega_2 d_k^\dagger d_k$, with the bosonic commutation relations still holding for the new field operators $c_k$ and $d_k$.
Now you can check that $X$ is just the matrix whose columns are the normalized eigenvectors of your original matrix.
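For a $2\times2$ matrix of this form the eigen-decomposition can be written down in closed form and verified explicitly: with $c=\Omega f_k$, the eigenvalues are $\omega_0\pm|c|$ and the normalized eigenvectors are $(1,\pm\bar c/|c|)/\sqrt2$. A small pure-Python sketch with illustrative numbers of my own:

```python
import math

# For H_k = [[w0, c], [conj(c), w0]] with c = Omega * f_k, the eigenvalues
# are w0 +/- |c| with normalized eigenvectors (1, +/- conj(c)/|c|)/sqrt(2).
w0, c = 2.0, 0.3 + 0.4j                  # illustrative values; |c| = 0.5

lam_plus, lam_minus = w0 + abs(c), w0 - abs(c)
v_plus = (1/math.sqrt(2), c.conjugate()/abs(c)/math.sqrt(2))
v_minus = (1/math.sqrt(2), -c.conjugate()/abs(c)/math.sqrt(2))

def apply_H(v):
    return (w0*v[0] + c*v[1], c.conjugate()*v[0] + w0*v[1])

# verify H v = lambda v on both branches
for lam, v in [(lam_plus, v_plus), (lam_minus, v_minus)]:
    Hv = apply_H(v)
    print(all(abs(Hv[i] - lam*v[i]) < 1e-12 for i in range(2)))  # True
```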
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/102967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Prove EM Waves Are Transverse In Nature Why do we say that EM waves are transverse in nature? I have seen some proofs regarding my question, but they all calculate the flux through an imaginary cube. Here is my real problem: I can't imagine the infinitesimal area used for calculating the flux, because a line of force will intersect a surface (perpendicularly or not) at only one point, so $\mathbf E\cdot d\mathbf s$ will be zero, and hence even the flux through one face of the cube will always be zero. I am a bit confused. I don't know vector calculus, but I do know calculus.
| I am going to try to "unconfuse" you.
Let's start with a picture of ocean waves: seen from the side, they look like sine waves (just like BMS's red EM wave); seen from the top, the crests and troughs make lines (not points). The same thing happens with the E and B waves (they make lines), and since the two are perpendicular to each other, they define a plane (two intersecting lines make a plane). This is why the flux is evaluated over a surface area (not at a point).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Can someone please explain the "infrared catastrophe"? In my readings I've run into this idea of an "infrared catastrophe" associated with 1/f noise. As far as I can tell it is because when you graph the periodogram of the 1/f signal you see the PSD goes to infinity as frequency goes to 0. Not sure what that means practically though. If we are talking about a sound wave, does that mean the sound becomes infinitely loud at low frequencies? What is the "catastrophe"?
| $1\over f$ noise does have finite power in any frequency band bounded away from zero, so there is no real catastrophe here. The term is borrowed from an older problem: the classical attempt to explain blackbody radiation predicted that a blackbody would radiate with infinite power. This did not appear to be the case, so the result was regarded as a catastrophe for the model. It was. (Strictly speaking, the blackbody divergence sits at high frequencies and is known as the ultraviolet catastrophe; in quantum field theory, "infrared catastrophe" refers instead to the divergence associated with the emission of arbitrarily many soft, low-frequency photons.)
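The finiteness claim is easy to quantify: the power of an ideal $1/f$ spectrum in a band $[f_1, f_2]$ is $\int_{f_1}^{f_2} df/f = \ln(f_2/f_1)$, which grows only logarithmically with the span and diverges only if the band literally touches $f=0$. A minimal sketch:

```python
import math

# Power of an ideal 1/f spectrum in a band [f1, f2]: integral of (1/f) df.
def band_power_one_over_f(f1, f2):
    return math.log(f2 / f1)

print(round(band_power_one_over_f(1.0, 1000.0), 2))    # 6.91
print(round(band_power_one_over_f(1e-6, 1000.0), 2))   # 20.72: pushing the
# lower edge a million times closer to zero adds surprisingly little power
```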
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Effect of linear terms on a QFT I was told when first learning QFT that linear terms in the Lagrangian are harmless and we can essentially just ignore them. However, I've recently seen in the linear sigma model,
\begin{equation}
{\cal L} = \frac{1}{2} \partial _\mu \phi _i \partial ^\mu \phi _i - \frac{m ^2 }{2} \phi _i \phi _i - \frac{ \lambda }{ 4} ( \phi _i \phi _i ) ^2
\end{equation}
with $m ^2 =-\mu^2 > 0$, adding a linear term in one of the fields $\phi_N$, does change the final results as you no longer have Goldstone bosons (since the $O(N)$ symmetry is broken to begin with).
Are there any other effects of linear terms that we should keep in mind or is this the only exception of the "forget about linear terms" rule?
| Linear terms can be thought of as source terms. They are important for defining the effective potential (which is the Legendre transform of the (log of the) partition function with respect to the source).
I'm not sure why one would say that one can forget about them, since, for instance, they imply a non-zero value of $\langle \phi\rangle$ even in the symmetric phase. This implies, for example, that the tadpole diagrams are non-zero, that you effectively have $\phi^3$ vertices, etc. Maybe the reason is that if you shift the field $\phi\to\phi-\langle\phi\rangle$ the source disappears from the Lagrangian...
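The last remark can be checked at the classical level: expanding $V(\phi)=\frac{m^2}{2}\phi^2+\frac{\lambda}{4}\phi^4+J\phi$ about its minimum $\phi_0$ removes the linear term, since the coefficient of the term linear in the shifted field is $V'(\phi_0)=0$. A small sketch with toy numbers of my own:

```python
# Toy check: for V(phi) = (m2/2) phi^2 + (lam/4) phi^4 + J phi,
# shifting to the minimum phi0 removes the linear term, because the
# coefficient of the term linear in the shifted field is V'(phi0) = 0.
m2, lam, J = 1.0, 0.5, 0.3        # illustrative parameter values

def Vp(phi):                       # dV/dphi
    return m2*phi + lam*phi**3 + J

phi0 = 0.0                         # Newton's method for V'(phi0) = 0
for _ in range(50):
    Vpp = m2 + 3*lam*phi0**2       # d^2V/dphi^2 > 0 here
    phi0 -= Vp(phi0) / Vpp

print(abs(Vp(phi0)) < 1e-12)       # True: no linear term survives the shift
```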
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 0
} |
Difference between positron and electron scattering in Coulomb field In first order of perturbation theory the S-matrix amplitude for electron scattering in the Coulomb field will be (up to normalization factors)
$$
S_{fi} = \frac{iZ q^2}{\sqrt{2E_{f}2E_{i}}}\bar {u}(p_{f}, s_{f})\gamma_{0}u (p_{i}, s_{i}) \int \frac{d^{4}xe^{i(p_{f} - p_{i})x}}{|\mathbf x|},
$$
where $f, i$-indices mark correspondingly final and initial waves, $s_{i, f}$ marks the polarization.
What is the difference for the case of positron scattering (the amplitude of scattering has different expression)?
| For positrons there is an overall minus sign and the spinors are the positron spinors. But to calculate the cross section you take the modulus squared of the S-matrix, so the sign does not matter. If you are going to calculate the unpolarized cross section you need to sum over the spins; in this case the spinor parts are the same too. Thus the unpolarized cross sections for electrons and positrons turn out to be the same, but only for the first-order S-matrix. If you include the higher orders of the S-matrix, then they differ: because of the signs of the charges, the second-order contributions have different signs for electrons and positrons.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Derivation of the general Lorentz transformation The standard Lorentz transformation or boost with velocity $u$ is given by
$$\left(\begin{matrix} ct \\ x \\ y \\ z \end{matrix}\right) = \left(\begin{matrix}
\gamma & \gamma u/c & 0 & 0 \\
\gamma u/c & \gamma & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{matrix}\right) \, \left(\begin{matrix} ct^\prime \\ x^\prime \\ y^\prime \\ z^\prime \end{matrix}\right) = L_u \,\left(\begin{matrix} ct^\prime \\ x^\prime \\ y^\prime \\ z^\prime \end{matrix}\right)$$
where $\gamma = \gamma(u) = 1/\sqrt{1-u^2/c^2}$. In the standard Lorentz transformation, it is assumed that the $x$ and $x^\prime$ axes coincide, and that $O^\prime$ is moving directly away from $O$.
If we drop the first condition, allowing the inertial frames to have arbitrary orientations, then "we must combine [the standard Lorentz transformation] with an orthogonal transformation of the $x$, $y$, $z$ coordinates and an orthogonal transformation of the $x^\prime$, $y^\prime$, $z^\prime$ coordinates. The result is
$$\left(\begin{matrix} ct \\ x \\ y \\ z \end{matrix}\right) = L \,\left(\begin{matrix} ct^\prime \\ x^\prime \\ y^\prime \\ z^\prime \end{matrix}\right)$$
with
$$L = \left(\begin{matrix} 1 & 0 \\ 0 & H \end{matrix}\right)\, L_u \,\left(\begin{matrix} 1 & 0 \\ 0 & K^\textrm{t} \end{matrix}\right)$$
where $H$ and $K$ are $3 \times 3$ proper orthogonal matrices, $L_u$ is the standard Lorentz transformation matrix with velocity $u$, for some $u < c$, [and 't' denotes matrix transpose]."
I have two questions:
1. Why are two orthogonal transformations, for both the unprimed and primed spatial coordinates, necessary? That is, why isn't one orthogonal transformation sufficient to align the axes of the inertial frames?

2. Why does the first orthogonal transformation use the transposed orthogonal matrix $K^\textrm{t}$?
| To apply the Lorentz transformation to some vector $\vec v$ when you have the boost matrix $L_x$ along the $x$ axis but want to boost along another axis $\vec q$, you can temporarily change coordinates so that $\vec q$ is parallel to $\vec e_x$:
$$\vec v'=U\vec v.$$
Now your intermediate result would be
$$L_x\vec v'=L_x U\vec v.$$
But it's still in the temporary basis. Let's now go back to original basis. As $U$ is orthogonal, its inverse equals its transpose, so we get:
$$L_q\vec v=U^TL_xU\vec v.$$
Thus,
$$L_q=U^TL_xU.$$
So, the answers are:
1. The first transformation converts the vector to a temporary basis in which the boost axis coincides with the axis you need; the second one returns to the original basis.

2. The inverse of an orthogonal matrix is equal to its transpose, so it is simply easier to use the transpose of the transformation to return to the original basis.
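The conjugation recipe is easy to verify numerically: build a boost along a rotated axis as $U L_x U^{\mathrm T}$ and check that the result still preserves the Minkowski metric, $L^{\mathrm T}\eta L=\eta$. A pure-Python sketch with parameter values of my own:

```python
import math

# Boost along a rotated axis via conjugation: L = U Lx U^T, then check
# that L is still a Lorentz transformation, i.e. L^T eta L = eta.
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

beta = 0.6
g = 1/math.sqrt(1 - beta**2)
Lx = [[g, g*beta, 0, 0], [g*beta, g, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

th = 0.7                           # rotate the boost axis in the x-y plane
c, s = math.cos(th), math.sin(th)
U = [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

L = matmul(matmul(U, Lx), transpose(U))

eta = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
check = matmul(matmul(transpose(L), eta), L)
ok = all(abs(check[i][j] - eta[i][j]) < 1e-12
         for i in range(4) for j in range(4))
print(ok)  # True
```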
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why don't the leaves of an electrometer repel each other in water? A normal electrometer filled with air will show its leaves repelling, as it should in an electrostatic demonstration. But what if it is filled with water, or even oil, instead: what will happen?
My guess is that the water is charged too, making the net repelling force equal to zero.
But what will happen if it is filled with oil or another liquid?
There are a few options:
A) the liquid is a conductor and the electrometer is discharged;
B) the liquid is non-conductive, but its viscosity is too high, so the leaves will not move;
C) if the viscosity is low enough, the leaves will move.
In normal water you will get A.
As Coulomb forces are usually not that big, in an insulating liquid B is most probable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What is the exact relation between $\mathrm{SU(3)}$ flavour symmetry and the Gell-Mann–Nishijima relation I'm trying to understand how the Gell-Mann–Nishijima relation has been derived:
\begin{equation}
Q = I_3 + \frac{Y}{2}
\end{equation}
where $Q$ is the electric charge of the quarks, $I_3$ is the isospin quantum number and $Y$ is the hypercharge given by:
\begin{equation}
Y = B + S
\end{equation}
where $B$ is the baryon number and $S$ is the strangeness number.
Most books (I have looked at) discuss the Gell-Mann–Nishijima in relation to the approximate global $\mathrm{SU(3)}$ flavour symmetry that is associated with the up-,down- and strange-quark at high enough energies. But I have yet to fully understand the connection between the Gell-Mann–Nishijima and the $\mathrm{SU(3)}$ flavour symmetry.
Can the Gell-Mann–Nishijima relation somehow been derived or has it simply been postulated by noticing the relation between $Q$, $I_3$ and $Y$? If it can be derived, then I would be very grateful if someone can give a brief outline of how it is derived.
| The Gell-Mann–Nishijima relation arises from electroweak symmetry breaking. If we give our Higgs SU(2) doublet a vev,
$$
\langle(\phi^+, \phi^0)\rangle = (0, v/\sqrt{2}),
$$
we find that the theory remains invariant under a combination of the diagonal (Cartan) SU(2) generator $T_3$ and the weak hypercharge $Y$, because
$$
e^{iQ} \langle(\phi^+, \phi^0)\rangle = \langle(\phi^+, \phi^0)\rangle\\
Q \langle(\phi^+, \phi^0)\rangle = (T_3 + Y/2) \langle(\phi^+, \phi^0)\rangle = 0
$$
because on our Higgs doublet,
$$
T_3 + Y/2 = \left(\begin{array}{cc}
1/2 & 0\\
0 & -1/2\end{array}\right) +
\left(\begin{array}{cc}
1/2 & 0\\
0 & 1/2\end{array}\right) =
\left(\begin{array}{cc}
1 & 0\\
0 & 0\end{array}\right)
$$
One can always find a combination of the U(1) generator and the Cartan generator of SU(2) which annihilates the vacuum. This form of the relation and these hypercharge assignments are a convention. The general form is $Q=T_3+aY$, with $a$ determined from the hypercharge of the Higgs boson.
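The $2\times2$ matrix algebra above is simple enough to check directly (the value of $v$ below is only an illustrative scale):

```python
import math

# On the Higgs doublet, Q = T3 + Y/2 = diag(1, 0), which annihilates
# the vev (0, v/sqrt(2)).
v = 246.0                           # illustrative vev scale
T3 = [[0.5, 0.0], [0.0, -0.5]]
Y_half = [[0.5, 0.0], [0.0, 0.5]]   # Y/2, with Y = +1 on the doublet
Q = [[T3[i][j] + Y_half[i][j] for j in range(2)] for i in range(2)]

vev = [0.0, v / math.sqrt(2)]
Q_vev = [sum(Q[i][j] * vev[j] for j in range(2)) for i in range(2)]
print(Q)      # [[1.0, 0.0], [0.0, 0.0]]
print(Q_vev)  # [0.0, 0.0]: the vacuum is electrically neutral
```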
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Surface current and current density I want to know: when I am asked to find the surface current density and the space (volume) current density, what should I do? Also, what is the difference between a surface current and a space current? And if I know one of them, can it help me find the other?
| The following discussion refers to electrodynamics in three dimensions.
Definitions.
Let a two-dimensional surface $\Sigma\subset\mathbb R^3$ be given, then a surface current density on $\Sigma$ is a function $\mathbf K:\Sigma\to \mathbb R^3$. In other words, it is a vector field on the surface. For each point $p$ on the surface, it physically represents the charge per unit time passing through a unit cross-sectional length on the surface.
You can think of surface current density like this. Consider some point $p$ on the surface $\Sigma$. Let $\gamma$ be a short curve segment on $\Sigma$ passing through $p$. Let $\mathbf n$ denote the unit vector tangent to $\Sigma$ at $p$ but normal to $\gamma$, then
$$
(\mathbf K(p)\cdot\mathbf n)\mathrm{len}(\gamma)
$$
approximates the charge per unit time flowing on the surface and passing through $\gamma$.
On the other hand, a current density (or space current density as you call it) is a function $\mathbf J:\mathbb R^3\to\mathbb R^3$. In other words, it is a vector field. For each point $\mathbf x$, the current density at that point represents the charge per unit time passing through a unit cross-sectional area.
You can think of the current density like this. Consider some point $\mathbf x\in\mathbb R^3$, and let $\alpha$ denote a small surface segment passing through $\mathbf x$. Let $\mathbf n$ denote the unit vector at $\mathbf x$ normal to $\alpha$, then
$$
(\mathbf J(\mathbf x)\cdot \mathbf n)\mathrm{area}(\alpha)
$$
approximates the charge per unit time passing through $\alpha$.
Computing $\mathbf J$ from $\mathbf K$.
A surface current density $\mathbf K$ can be written as a current density $\mathbf J$ that includes Dirac deltas. For example, the surface current density of a spinning sphere of radius $R$ with angular velocity $\omega$ and with uniform surface charge density $\sigma$ fixed to it is given in spherical coordinates by
\begin{align}
\mathbf K = \sigma\omega R\sin\theta\,\hat{\boldsymbol\phi}
\end{align}
which can equivalently be written as a current density
\begin{align}
\mathbf J = \sigma\omega R\sin\theta\,\delta(r-R)\,\hat{\boldsymbol\phi}
\end{align}
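As a consistency check of this $\mathbf K$ (my own addition, not part of the answer), one can integrate the surface current over the sphere and recover the textbook magnetic moment of a uniformly charged spinning shell, $m_z=\frac{4\pi}{3}\sigma\omega R^4$, via $m_z = \frac12\int (\mathbf r\times\mathbf K)_z\, dA$ with $(\mathbf r\times\mathbf K)_z = RK\sin\theta$:

```python
import math

# Numerical cross-check: integrating K = sigma*omega*R*sin(theta) over the
# sphere gives the magnetic moment m_z = (4*pi/3)*sigma*omega*R^4 of a
# uniformly charged spinning shell.  Unit parameter values are illustrative.
sigma, omega, R = 1.0, 1.0, 1.0
N = 20000
dtheta = math.pi / N

# m_z = (1/2) * integral of (r x K)_z dA, with (r x K)_z = R*K*sin(theta)
m_z = 0.0
for i in range(N):
    th = (i + 0.5) * dtheta                       # midpoint rule in theta
    K = sigma * omega * R * math.sin(th)
    dA = 2 * math.pi * R**2 * math.sin(th) * dtheta
    m_z += 0.5 * R * K * math.sin(th) * dA

print(abs(m_z - 4*math.pi/3 * sigma * omega * R**4) < 1e-6)  # True
```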
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Does water cool faster with a metal canister of ice inserted, or with the ice mixed in directly? Let's say we have two canisters: a bigger one (a 2 l metal canister) holding 1 l of water at 100 °C, and a smaller one (a 1 l metal canister) holding 1 l of ice. We want to cool the boiled water down to 50 °C. One option is to put the ice itself (only the ice, not the whole canister) into the boiled water and wait until the water reaches 50 °C.
But what if we instead insert the whole small canister (with the ice still inside) into the big one: will the boiled water cool down faster, given that the smaller canister's metal surface has a higher thermal conductivity than the water itself?
| If the cold canister you insert has more surface area than all the ice cubes that you would otherwise insert, then we would presume (for most circumstances) that the canister will perform the cooling more quickly.
A complication arises if the canister is filled with ice cubes. Then you have two surface-area problems: canister-to-larger-vessel, and ice-cubes-to-canister. Here, the cold canister walls are assumed to be perfectly conducting of heat.
If any of this seems off-target, please help by refining the grammar in your post. I often revise the grammar in posts, but here I am not able to, because there are ambiguities that cannot be resolved.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Current through two inductors after a long time I'm having trouble with visualizing the following problem, which is asking me for the final, steady current in both inductors $L_{1}$ and $L_{2}$. I was thinking that after a long time, essentially the current will be stable and thus there will be no induced current in the inductors. However, how would the current in the circuit split in this case? (I suppose that we would have two short wires as the resistance of the inductors is 0). I know that the current through the resistor is:
$I_{R} = \displaystyle\frac{e}{R}$
However, I really can't visualize how short circuits work, and mostly in this case in which we have two paths with zero resistance. How would the currents split?
Here is a picture of the problem.
| Try taking a look at this page; it goes into good detail about inductors in a DC circuit:
http://www.ibiblio.org/kuphaldt/electricCircuits/DC/DC_15.html
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/103945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Do photons make the universe expand? I have a problem understanding the ideas behind a basic assumption of cosmology. The Friedmann equations follow from Newtonian mechanics and conservation of Energy-momentum $(E_{kin}+E_{pot}=E_{tot})$ or equally from Einsteins field equation with a Friedmann-Lemaitre-Robertson-Walker metric. In a radiation dominated, flat universe, standard cosmology uses the result from electrodynamics for radiation pressure
$P_{rad}=\frac{1}{3}\rho_{rad}$, where $\rho_{rad}$ is the energy density of radiation in the universe and $P_{rad}$ is the associated pressure. It then puts the second Friedmann equation into the first to derive the standard result $\rho_{rad} \propto a^{-4}$, where $a$ is the scale factor on the metric. Putting this result into the first Friedmann equation yields $$a\propto\sqrt{t}$$ where $t$ is the proper time.
Therefore we used the standard radiation pressure of classical electrodynamics to derive an expression for the expansion of the universe. My problem is understanding why this is justified. Is the picture that photons crash into the walls of the universe to drive the expansion really valid? Certainly not (what would the walls of the universe be, and what would they be made of? ;)), but at least this is how the radiation equation of state is derived. Is there any further justification for this? Note that I'm talking here about a radiation-dominated universe, so matter and dark energy can be neglected. Therefore, can we not derive a certain rate of expansion for the universe without anything mysterious like dark energy?
| The fallacy here is assuming conservation of energy. Noether's theorem ensures that energy is conserved if the Hamiltonian of the system is time invariant. But the universe is expanding, so the very frame in which you describe it is changing over time; therefore you cannot apply energy conservation.
It is, however, rotationally invariant, so you know angular momentum is conserved. It is also shift invariant, so linear momentum is also conserved.
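Independently of the energy-conservation point, the scaling $a\propto\sqrt t$ quoted in the question is easy to verify numerically: for a flat radiation-dominated universe $\rho\propto a^{-4}$ gives $(\dot a/a)^2 = H_0^2/a^4$, i.e. $\dot a = H_0/a$, with exact solution $a(t)=\sqrt{2H_0 t}$. A sketch of my own (units with $H_0=1$, step sizes chosen for illustration):

```python
import math

# Forward-Euler integration of adot = H0/a, the flat radiation-dominated
# Friedmann equation, compared with the closed form a(t) = sqrt(2*H0*t).
H0 = 1.0
dt = 1e-5
a, t = 0.1, 0.005                  # start on the exact curve a^2 = 2*H0*t

while t < 1.0:
    a += (H0 / a) * dt
    t += dt

print(abs(a - math.sqrt(2 * H0 * t)) < 1e-3)  # True: a tracks sqrt(2 H0 t)
```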
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
} |
Will a sound composed of the frequencies 450 Hz, 650 Hz and 850 Hz have a clearly defined musical pitch? Why? According to my lecturer, the perceived pitch of a sound composed of the following harmonics: 750 Hz, 1000 Hz, 1250 Hz is equal to the fundamental frequency, which is the highest common factor of the harmonic frequencies; so 250 Hz.
He also said that the harmonic frequencies 450 Hz, 650 Hz and 850 Hz do not have a clearly defined musical pitch.
How can this be true when the harmonic frequencies have a highest common factor and therefore a fundamental frequency of 50Hz?
| The brain is quite good at filling in for a few missing harmonics. For example music still sounds reasonable on a smartphone speaker even though that speaker is incapable of creating low frequencies. This Wikipedia article explains the phenomenon (thanks to Glen for the link).
So it's possible that if you heard the 2nd, 3rd and 4th overtones of 250Hz your brain would fill in the blanks and it would still sound like a 250Hz note. I use the qualifier "possible" because I'd want to try the experiment before committing myself - taking out both the fundamental and the first overtone seems quite a big change.
However, for 450Hz, 650Hz and 850Hz to sound like a 50Hz note your brain would have to fill in the fundamental and the first seven overtones. I suspect this would be a mental step too far, and this combination of frequencies would instead sound like a discord.
There must be sound synthesis apps for PCs that could generate this combination of frequencies. It would be an interesting experiment to try.
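In lieu of a synthesis app, a short NumPy sketch (my own illustrative code, with an arbitrary sample rate) can generate both combinations for comparison, and a gcd check confirms the common fundamentals:

```python
import numpy as np
from math import gcd
from functools import reduce

fs = 44100                      # sample rate, Hz (arbitrary choice)
t = np.arange(0, 1.0, 1 / fs)   # one second of audio

def tone(freqs):
    """Equal-amplitude sum of sinusoids at the given frequencies."""
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

periodic = tone([750, 1000, 1250])   # harmonics 3, 4, 5 of 250Hz
odd_set = tone([450, 650, 850])      # harmonics 9, 13, 17 of 50Hz

print(reduce(gcd, [750, 1000, 1250]))   # 250
print(reduce(gcd, [450, 650, 850]))     # 50
```

Writing the two arrays to WAV files (e.g. with `scipy.io.wavfile.write`) and listening would be exactly the experiment proposed above.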
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Galilean, SE(3), Poincare groups - Central Extension After having learnt that the Galilean group (with its central extension) has the unitary operator
$$ U = \sum_{i=1}^3\Big(\delta\theta_iL_i + \delta x_iP_i + \delta\lambda_iG_i +dtH\Big) + \delta\phi\mathbb{\hat1} = \sum_{i=1}^{10} \delta s_iK_i + \delta\phi\mathbb{\hat1} $$
This makes sure the commutation relations hold in the group (especially for the boosts). However, in the case of the Poincare group, the commutators still hold without a central extension. Similarly for SE(3) (there are no central extensions involved).
My question is: why is there a necessity for a central extension in the first case, but not in the others?
PS: This answer is somewhat related to the question, but I am not able to get to the bottom of it.
| The central extensions are classified by the second cohomology group:
http://en.wikipedia.org/wiki/Group_extension .
If this group is trivial then each central extension is semidirect (and hence in some sense trivial). In particular, this is the case for the Poincare group but not for the Galilei group.
However, if you want to take a nonrelativistic limit starting with the Poincare group, you need to introduce the nonrelativistic energy $E= cp_0 -mc^2$, which can be done only in a (trivial) 11-dimensional central extension of the Poincare group by a central generator, the mass $m$. In this form, the presentations of the Poincare group and the Galilei group can be made to look very similar.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Temperature of gases I can't find any law that states this (maybe the combined gas law does and I'm misinterpreting it?), but Feynman said that if you compress a gas, the temperature increases. This makes sense: for example, a diesel engine (or a gas engine with insufficient octane or too high a compression ratio). Also, just thinking about a piston "hitting" particles as the gas is compressed makes it plausible that energy is imparted.
But he goes on to say that when the gas expands, there is a decrease in temperature. This used to make more sense to me, but the more I think about it, it doesn't at all. Why would the particles lose energy if the container expands?
| Just for completeness, when a gas expands its temperature does not necessarily change. The temperature of the gas only changes if it does work on something, for example its container as discussed in Danu's answer. If a gas is expanding into a vacuum it does no work and (to a first approximation) its temperature does not change. This type of expansion is known as a Joule expansion.
I used the qualifier "to a first approximation" above because the temperature is only guaranteed not to change if the gas is ideal. In real gases there are forces acting between the gas molecules, and even in a Joule expansion the gas may do work against these forces, so the kinetic energy of the gas molecules, and therefore the temperature, may change. This is known as the Joule-Thomson effect.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
When combining three spin $\frac{1}{2}$ particles what are the corresponding states? I want to combine three spin half particles and this is what I have so far.
I used the lowering operator $J_{-}$ on the top states and found the following states fine:
$$|\frac{3}{2},\frac{3}{2}\rangle , |\frac{3}{2},\frac{1}{2}\rangle , |\frac{3}{2},-\frac{1}{2}\rangle , |\frac{3}{2},-\frac{3}{2}\rangle $$
So the combination of three spin $\frac{1}{2}$ particles is equivalent to a spin $\frac{3}{2}$ particle, right?
In the case where I did it for combining two spin $\frac{1}{2}$ particles I found this was equivalent to a spin 1 particle and an additional spin zero particle.
So my question is: is there a singlet or any more states that accompany what I found for the case where I combined three spin $\frac{1}{2}$ particles?
| No, it is not equivalent. The spin $s = S = 3/2$ particle will have spin projections between $S_3 = 3/2$ and $-3/2$, as you have worked out. That is it: it will just be a multiplet with 4 members.
The three particles with spin $s = 1/2$ can also have a combined spin of $S = 3/2$, which forms the same 4-member multiplet. However, they can also form $S = 1/2$ multiplets, which are doublets. Compare with two electrons: there you have a symmetric and an antisymmetric combination of up and down, one of which is the $|1, 0\rangle$ from the triplet, while the other is the $|0, 0\rangle$ from the singlet.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Help understanding proof in simultaneous diagonalization The proof is from Principles of Quantum Mechanics by Shankar. The theorem is:
If $\Omega$ and $\Lambda$ are two commuting Hermitian operators, there exists (at least) a basis of common eigenvectors that diagonalizes them both.
The proof is:
Consider first the case where at least one of the operators is nondegenerate, i.e. to a given eigenvalue, there is just one eigenvector, up to a scale. Let us assume $\Omega$ is nondegenerate. Consider any one of its eigenvectors:
$$\Omega\left|\omega_i\right\rangle=\omega_i\left|\omega_i\right\rangle$$
$$\Lambda\Omega\left|\omega_i\right\rangle=\omega_i\Lambda\left|\omega_i\right\rangle$$
Since $[\Omega,\Lambda]=0$
$$\Omega\Lambda\left|\omega_i\right\rangle=\omega_i\Lambda\left|\omega_i\right\rangle$$
i.e., $\Lambda\left|\omega_i\right\rangle$ is an eigenvector with eigenvalue $\omega_i$. Since this vector is unique up to a scale,
$$\Lambda\left|\omega_i\right\rangle=\lambda_i\left|\omega_i\right\rangle$$
Thus $\left|\omega_i\right\rangle$ is also an eigenvector of $\Lambda$ with eigenvalue $\lambda_i$...
What I do not understand is the statement/argument "Since this vector is unique up to a scale." I do not see how the argument allows to state the equation following it. What axiom or what other theorem is he using when he states "since this vector is unique up to a scale"?
| When $\lambda_1$ is an eigenvalue of a matrix and $v_1$ and $v_2$ are the components of the corresponding eigenvector, then the following equation holds:
$\begin{pmatrix} a-\lambda_1 & b \\ c &d-\lambda_1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}=\begin{pmatrix} 0 \\ 0 \end{pmatrix}$
Now when you scale up the eigenvector (say by three) it looks like this:
$\begin{pmatrix} a-\lambda_1 & b \\ c &d-\lambda_1 \end{pmatrix}\begin{pmatrix} 3v_1 \\ 3v_2 \end{pmatrix}=\begin{pmatrix} 0 \\ 0 \end{pmatrix}$
This you can write as
$3 \begin{pmatrix} a-\lambda_1 & b \\ c &d-\lambda_1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}=\begin{pmatrix} 0 \\ 0 \end{pmatrix}$
But the matrix multiplied with the eigenvector still yields the zero vector!
This is what he meant when he said "Since this vector is unique up to a scale.": any scaled up eigenvector of a matrix is still an eigenvector. And how does the last equation follow from it?
When you write
$$\Omega\Lambda\left|\omega_i\right\rangle=\omega_i\Lambda\left|\omega_i\right\rangle$$ then you know that $\Lambda\left|\omega_i\right\rangle$ gives you a vector which is an eigenvector of $\Omega$ with eigenvalue $\omega_i$. But you said that $\Omega$ is nondegenerate, so for any $\omega_i$ there is a unique $\left|\omega_i\right\rangle$. What this means is that the vector you get by applying $\Lambda$ to $\left|\omega_i\right\rangle$ must be $\left|\omega_i\right\rangle$ up to a scale factor.
Luckily, any eigenvector which is scaled up (here by $\lambda_i$) is still an eigenvector, so that you get
$$\Omega\lambda_i\left|\omega_i\right\rangle=\omega_i \lambda_i \left|\omega_i\right\rangle$$
or
$$\Lambda\left|\omega_i\right\rangle=\lambda_i\left|\omega_i\right\rangle $$
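To make the argument concrete, here is a small NumPy check (an illustrative construction of my own, not from Shankar): build a nondegenerate Hermitian $\Omega$, a $\Lambda$ that commutes with it, and verify that $\Lambda\left|\omega_i\right\rangle$ is indeed a multiple of $\left|\omega_i\right\rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A nondegenerate Hermitian Omega in a random orthonormal basis, and a
# Lambda built as a polynomial in Omega, which guarantees [Omega, Lambda] = 0.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Omega = U @ np.diag([1.0, 2.0, 3.0]) @ U.T
Lam = Omega @ Omega - 2 * Omega

assert np.allclose(Omega @ Lam, Lam @ Omega)   # they commute

w, v = np.linalg.eigh(Omega)
for i in range(3):
    x = v[:, i]
    lam_i = x @ Lam @ x
    # Lam x is an eigenvector of Omega with eigenvalue w[i]; since that
    # eigenvector is unique up to a scale, Lam x must equal lam_i * x.
    assert np.allclose(Lam @ x, lam_i * x)
print("every eigenvector of Omega is also an eigenvector of Lambda")
```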
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Proving a step in this field-theoretic derivation of the Bogoliubov de Gennes (BdG) equations In the following derivation of the BdG mean-field Hamiltonian, I am confused about the second step:
$H_{MF-eff} = \int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})
+\int d^{3}r\triangle^{\star}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})\triangle(\mathbf{r})-\int d^{3}r\frac{|\triangle(\mathbf{r})|^{2}}{U}$
$ = \int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})-\int d^{3}r\psi_{\downarrow}(\mathbf{r})H_{E}^{\star}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})
+\int d^{3}r\triangle^{\star}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})\triangle(\mathbf{r})-\int d^{3}r\frac{|\triangle(\mathbf{r})|^{2}}{U}$
$= \int d^{3}r\left(\begin{array}{cc}
\psi_{\uparrow}^{\dagger}(\mathbf{r}) & \psi_{\downarrow}(\mathbf{r})\end{array}\right)\left(\begin{array}{cc}
H_{E}(\mathbf{r}) & \triangle(\mathbf{r})\\
\triangle^{\star}(\mathbf{r}) & -H_{E}^{\star}(\mathbf{r})
\end{array}\right)\left(\begin{array}{c}
\psi_{\uparrow}(\mathbf{r})\\
\psi_{\downarrow}^{\dagger}(\mathbf{r})
\end{array}\right)+const.
$
with
$H_{E}(\mathbf{r})=\frac{-\hbar^{2}}{2m}\nabla^{2}$
In the second step, we have taken
$\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})\nabla^{2}\psi_{\downarrow}(\mathbf{r}) = -\int d^{3}r\psi_{\downarrow}(\mathbf{r})\nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})$............(1).
I can prove (by integration by parts and putting the surface terms to 0) that $\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})\nabla^{2}\psi_{\downarrow}(\mathbf{r}) = \int d^{3}r \nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})$
but how is it justified to now take
$\int d^{3}r \nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}(\mathbf{r}) = - \int d^{3}r\psi_{\downarrow}(\mathbf{r})\nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})$
in order to prove (1) ?
| Write the differential operator $\nabla^2$ in terms of derivatives as $\nabla^2=\partial^2_x+\partial^2_y+\partial^2_z$. Write each derivative as a limit (i.e.: $\partial_x f(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} $). Rearrange the fields by putting $\psi_\downarrow (r')$ on the left, of course keep track of fermionic exchanges with a minus sign. Reinstate the limit as a derivative and then as a differential operator.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Can effect of gravity be broken (counteracted) by electric force? Can we make a jacket using an electronic circuit that uses electric force to cancel the effect of gravity so that we get lifted in air.
| Short answer: not really.
There are three ways to cancel gravity:
-one: you create anti-gravitation particles, which no human scientist has ever produced,
let alone observed (or whether they would be their own antiparticle, I don't remember)
-two: you propel yourself with air,
so basically if by an electronic jacket you mean an electric-powered airplane, then yes
-three: use photons, like a high-powered laser beam
Electromagnetic forces only act strongly between two charged entities; the Earth is neutral, so this would be slightly problematic.
edit
Electrostatic levitation can only work when the person floating and the surface they are floating above carry like charges, so that they repel; hence there cannot be a standalone jacket that helps you levitate. The maglev train uses electromagnetic forces not only to levitate but also to propel itself at high speed without touching the ground. This process requires a rail, and it cannot hover on its own.
P.S.
In physics there is a concept called an antimatter engine that emits a beam of photons to propel itself; because photons are excitations of the electromagnetic field, a jacket carrying a mini antimatter engine could in theory propel you.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Collection Efficiency of Photon I am currently reading this paper:
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.116802#references
In the second column of text in page 2, the author makes various claims about the number of photons detected and the collection efficiencies of different optical components.
'we detect 2.6 million counts per second from the QD with excitation
laser power above QD saturation. The trion lifetime is 0.65 ns; i.e.
the QD emits ∼7.7 × 10$^8$ photons per second when driven well above
saturation. Taking into account the effect of the ∼90 ns dead time of
our time-correlated single photon counting module, our overall
collection efficiency is 0.45%. After correcting for the SSPD
efficiency (≃40%), beam splitter (80%), and polarizer (50%) losses as
well as the fiber coupling efficiency (40%), we conclude that 7.0% of
the photons emitted by the QD are collected by the objective.'
QD - Quantum dot
SSPD - Superconducting single-photon detector
TCSPC - time-correlated single photon counting
While I understand that the dead time of 90ns means that for every photon detected, the instrument will not be able to detect any photons for the next 90ns, I do not understand how it factors into the calculation to arrive at the $0.45\%$ overall collection efficiency.
The following are the steps that I took to try to arrive at the 0.45% collection efficiency:
Given the trion lifetime, $\tau$ $= 0.65ns$
no. of photons from radiative recombination per second $= 1s/0.65ns$ $= 1.538*10^9 $
no. of photons emitted by the sample $= 1.538*10^9 * 0.5 = 7.7*10^8$ (value given in the paper) The factor of half is due to the fact that there is a thin metal gate with a power reflectance $R < 0.5$ deposited on the top surface (page 1, second column).
From the different collection efficiencies of the optical components, the resultant efficiency is straightforward:
$0.4*0.8*0.5*0.4 = 0.07$ i.e. $7\%$ (value given in the text)
Therefore, of the emitted photons
$7.7*10^8*0.07 = 5.36*10^7$ photons get detected by the SSPD (not the TCSPC module).
With these values I seem to be getting a much higher overall collection efficiency.
Can anyone clarify the steps taken to arrive at the overall collection efficiency of $0.45\%$?
Thank you very much.
| 2.6 million counts, with each count producing a dead time of 90 ns, gives a total dead time of 0.234 seconds per second. So the total number of counts, assuming no dead time, should be 3.4 million counts (2.6 / (1 - 0.234)). Since the QD emits 770 million photons per second, the detector efficiency is 3.4 / 770 (which I get as 0.44%).
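The same arithmetic, written out as a short Python sketch (the input values are the ones quoted in the paper):

```python
# Dead-time correction for the quoted photon-count numbers.
detected = 2.6e6      # counts per second actually registered
dead_time = 90e-9     # dead time per count, seconds
emitted = 7.7e8       # photons per second emitted by the QD

dead_fraction = detected * dead_time        # ~0.234 s of dead time per second
true_rate = detected / (1 - dead_fraction)  # ~3.4e6 counts/s without dead time
efficiency = true_rate / emitted

print(f"dead fraction:  {dead_fraction:.3f}")
print(f"corrected rate: {true_rate:.3g} counts/s")
print(f"efficiency:     {efficiency:.2%}")   # ~0.44%, i.e. the quoted ~0.45%
```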
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/104889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Symmetry Breaking And Phase transition
*
*Is every phase transition associated with a symmetry breaking? If yes, what is the symmetry that a gaseous phase has but the liquid phase does not?
*What is the extra symmetry that normal $\bf He$ has but superfluid $\bf He$ does not? Is the symmetry breaking, in this case, a gauge symmetry breaking?
Update Unlike gases, liquids have short-range order. Does it not mean that during the gas-to-liquid transition, the short-range order of liquids breaks the translation symmetry? At least locally?
| The classical situation with no symmetry breaking is the case of the so-called isostructural transitions. The word "isostructural" is misleading, since what is meant is "isosymmetric". However, historically the term emerged. There are a number of examples of such transitions. One is the alpha-alpha' transition in hydrogen-metal systems; others are phase separation in fluids and polymer solutions, and the coil-globule transition in polymers. Such a transition in a solid phase has been reported for SmS. In the case of the solid phase the crystal lattice changes its volume but preserves its structure (this gave rise to the name).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 3,
"answer_id": 2
} |
Scattering geometry question While reading up on light scattering I came across this slide:
My vector maths is a bit rusty and I am having trouble understanding the last term (scattering geometry).
What is the significance of $\hat{r} \times \hat{E_{i}} \times \hat{r}$?
| You have three terms in the equation, which together define the scattered wave. The first two define the amplitude at any given time and place along the vector $\hat{r}$, which points in the direction you're interested in.
$\hat{r}\times\hat{E}\times\hat{r}$ tells you that the wave is going to be perpendicular to $\hat{r}$ and also in the plane defined by the two vectors $\hat{r},\hat{E}$.
Your picture is not ideal: it shows only the angle $\theta$ to the direction of the light, not the angle to $\hat{E}$. This angle is important, because if $\hat{r}$ is not perpendicular to $\hat{E}$, the scattered wave will be smaller in amplitude. For instance, if $\hat{r}$ were vertical, i.e. parallel to $\hat{E}$ in your picture, then the scattered wave would have zero amplitude.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What happens when we bring an electron and a proton together? I have a couple of conceptual questions that I have always been asking myself.
Suppose we have an electron and a proton at very large distance apart, with nothing in their way. They would feel each the other particle's field - however weak - and start accelerating towards each other.
Now:
1) Do they collide and bounce off? (conserving momentum)
2) Does the electron get through the proton, i.e. between its quarks?
3) Do both charges give off Brehmsstrahlung radiation while moving towards each other?
Different scenario:
Suppose I can control the two particles, and I bring them very close to each other (but they are not moving so quickly as before, so they have almost no momentum).
Then I let them go:
1) Would an atom be spontaneously formed?
2) If anything else happens: what kind of assumptions do we make before solving the TISE for a hydrogen atom? Does the fact that the electron is bound enter in it?
This is to say: is quantum mechanics (thus solving the Schrödinger equation) the answer to all my questions here?
| To rephrase: an electron approaches a cation.
Most likely result: a photon is emitted and the electron is captured.
Added: The way I see it: an ion and an electron at a large distance apart. This would probably be an electron in a very high orbital of the ion. As the electron approaches, all the natural forces affect the electron and the electron slows down when it approaches the nucleus. This would equate to dropping down orbitals which would release a photon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 3
} |
Uncertainty principle in Quantum mechanics The Uncertainty principle says that "△x△p>h/2"; we cannot precisely obtain both position $x$ and momentum $p$ simultaneously.
Is this because the uncertainty is a natural characteristic, or is it because we do not know additional values? (e.g., like the additional dimensions in superstring theory.)
| "△x△p>h/2" is a simple consequence of the fundamental principle of using wavefunctions ("Amplitudes") to determine the probability of finding a particle.
A plane wave is evenly spread over all space and is the eigenfunction of one precisely known value of p.
In order to get anything other than such complete indeterminacy of position x, one must add several plane waves with different p, forming a wave packet which tails off at the spatial extremes and becomes more and more localised at one value of x the more different p are added to the superposition.
In the limit you get an infinitely narrow wave packet (Dirac impulse), which is the eigenfunction of a precisely known value of x, which contains all possible p values (p is completely indeterminate).
Reality always lies in between these two extreme situations, and △x△p>h/2 follows from a Fourier analysis of the wave superposition (see e.g. Schiff: Quantum Mechanics).
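The Fourier statement can be checked numerically. Below is a NumPy sketch of my own for a Gaussian packet (the minimum-uncertainty case), with the momentum-space spread obtained from an FFT; in units with $\hbar = 1$ the product $\Delta x\,\Delta k$ comes out at the bound $1/2$.

```python
import numpy as np

sigma = 1.3
x = np.linspace(-40, 40, 2**14)
h = x[1] - x[0]

# normalised Gaussian wave packet, centred at x = 0 with <p> = 0
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h)

# position spread from |psi|^2
dx = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * h)

# momentum-space distribution from the FFT of psi
k = 2 * np.pi * np.fft.fftfreq(x.size, d=h)
pk = np.abs(np.fft.fft(psi))**2
pk /= pk.sum()
dk = np.sqrt(np.sum(k**2 * pk))

print(dx * dk)   # ~0.5: the Gaussian saturates the uncertainty bound
```

A narrower packet (smaller `sigma`) shrinks `dx` and inflates `dk` by exactly the compensating amount, which is the trade-off described above.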
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Non-Hermitian operator with real eigenvalues? So we know that in Quantum Mechanics we require the operators to be Hermitian, so that their eigenvalues are real ($\in \mathbb{R}$) because they correspond to observables.
What about a non-Hermitian operator which, among others, also has real ($\mathbb{R}$) eigenvalues? Would they correspond to observables? If not, why not?
| For Hermitian matrices, eigenvectors corresponding to different eigenvalues are orthogonal. This guarantees not only that the eigenvalues are real, but that expectation values are too.
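A two-line NumPy illustration (my own example): an upper-triangular matrix is not Hermitian, yet its eigenvalues are real; what it loses is the orthogonality of its eigenvectors.

```python
import numpy as np

# Non-Hermitian (A != A^dagger) but with purely real eigenvalues 1 and 2.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

w, v = np.linalg.eig(A)
print(w)                    # real eigenvalues
print(v[:, 0] @ v[:, 1])    # nonzero: the eigenvectors are not orthogonal
```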
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Why is QM maximally predictive? Let's suppose I'm in the lab and I claim that I can predict more than QM can, specifically, I can predict exactly at which moment in time a particle decays. You don't believe me (naturally) so I set up the experiment, provide a piece of paper with a time written on it, and start the clock. At the time I have written down, the particle decays.
Exactly which of the six postulates of QM would this violate? As far as I can tell, it violates none of them so long as the results from multiple identical trials of this experiment reproduce the correct particle decay time distribution.
(And yes, I'm aware of this paper http://arxiv.org/abs/1005.5173, but I would prefer a simpler explanation.)
| Particle decay is not a good example here because a lot of the stochasticity comes from not following the nuclear dynamics closely. A better example might be the decay of a metastable state in an atom. There we have an atom that is not in an energy eigenstate by itself, but the whole system is. There are well-studied toy models of an atom in a quantized electromagnetic field that allow us to predict the decay rates. But prior to measurement, the system is still in a superposition of decayed and not decayed. Measurement is what seems to be the culprit, then. But if you could predict the outcome of your measurement, then the most glaring postulate you would violate would be unitarity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
} |
Total Electrical potential energy of two particle system I have recently been studying electrostatics and I couldn't properly understand how the potential energy of a two-particle system is found.
Suppose you have two particles with charges $Q_1$ and $Q_2$ respectively. The distance between them is $r$. What is the total electrical potential energy of the system comprising of the two particles?
Well, I know the answer. I want to know the reasons behind the answer.
My book says "Fix one of the charges and then bring the other from infinity; hence we get the answer." I am not satisfied with this, as I saw that in Wikipedia they find the potential energy of each charge with respect to the other, then add and divide by $2$.
Why is this taking potential with respect to each other valid? Why do they divide by two?
What does taking potential with respect to other mean? Do they fix one charge and bring another one?
what about three charges $Q_1, Q_2$ and $Q_3$ which are situated in the vertices of an equilateral triangle with side length $r$?
| If a positive charge $Q_1$ is fixed at some point in space and another positive charge $Q_2$ is brought close to it, it will experience a repulsive force and will therefore have potential energy.
Now to find this potential energy, we take the potential energy at infinity to be $0$. So if a test charge $q$ is brought from infinity to a distance $r$ from $Q_1$, the work done in doing so is the potential energy of the system of $q$ and $Q_1$, which is equal to $$U=k\frac{qQ_1}{r}$$
If you want to find the Total Electric Potential Energy of a system with charges $Q_1$ and $Q_2$ with respect to a test charge $q$, you just add the individual potential energies, so it will be $$U_t=kq(\frac{Q_1}{r_1}+\frac{Q_2}{r_2})$$ where $r_1$ and $r_2$ are the respective distances. Remember, this is not the potential energy of the charges with respect to each other but with respect to a test charge, which when taken as a unit test charge will be the Electric Potential of the system.
The division by $2$ that you saw comes from a different bookkeeping: if you write the total energy as a sum over charges, $U=\frac{1}{2}\sum_i q_iV_i$, where $V_i$ is the potential at charge $i$ due to all the other charges, then every pair is counted twice (once for each member of the pair), and the factor $\frac{1}{2}$ corrects that double counting. If you instead sum over distinct pairs, no such factor is needed.
For charges in a triangle, just add the three individual potential energies of each side.
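A minimal sketch of the pairwise sum (my own illustrative code, with made-up charge and distance values): each pair is counted exactly once here, which is also why the alternative formula that sums $q_iV_i$ over every charge must divide by $2$.

```python
import numpy as np
from itertools import combinations

K = 8.99e9   # Coulomb constant, N m^2 / C^2

def total_potential_energy(charges, positions):
    """Sum k q_i q_j / r_ij over distinct pairs, each counted once."""
    U = 0.0
    for i, j in combinations(range(len(charges)), 2):
        r = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[j]))
        U += K * charges[i] * charges[j] / r
    return U

# three equal charges at the vertices of an equilateral triangle of side r
q, r = 1e-6, 0.1
pos = [(0.0, 0.0), (r, 0.0), (r / 2, r * np.sqrt(3) / 2)]
U = total_potential_energy([q, q, q], pos)
print(U, 3 * K * q**2 / r)   # the two numbers agree
```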
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Quantum Mechanics - Rectangular Potential Barrier - Normalisation I have a quick question about a particle incident on a potential barrier, specifically regarding the normalisation of the wave functions.
The problem is set up as on this webpage:
https://www.ntmdt.com/spm-basics/view/tunneling-effect
And through consideration of the boundary conditions (continuity of wave function and derivative of wave function) the final wave function looks like this:
My question is: are the wave functions either side of the barrier normalisable? And if not does this mean the situation is not physical?
| They are not normalisable, because they either come from or extend to infinity: the integral of the probability density over all space diverges, which gives non-physical results.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does field line concept explain electric field due to dipole? Consider an electric dipole consisting of charges $-q$ and $+q$, separated by a distance $2a$ and placed in free space. Let $P$ be point on the line joining the two charges (axial line) at a distance $r$ from the centre of $O$ of the dipole.
You can observe in the above figure that the electric field has two directions at the same point $P$; does this mean electric field lines of two charges can intersect?
This is the common figure given in all textbooks. What I can observe is that the field line due to the $+q$ charge ends at the $-q$ charge and then doesn't progress towards the point $P$, and nowhere do I see the lines of $+q$ and $-q$ intersecting, so I can't conclude that field lines intersect. So, does the field line concept explain the electric field due to an electric dipole?
What I observed on the Wikipedia dipole page is that the axial field line simply vanishes!
Sometimes I might have misunderstood the concept, if so pardon me and explain.
| At any point the electric field is the vector sum of the fields from the two charges. So while the fields from $A$ and $B$ are indeed in opposite directions at your point $p$ you just add them (well, subtract their magnitudes since they're in opposite directions) and this gives you the net field.
I wouldn't take the field lines too seriously. They are not physical objects, they are just notional paths following the direction the field vector points in. If you look at the length of a field line as a function of its angle to the axis you'll find the length goes to infinity as the angle goes to zero. So the field line exactly on the axis has an infinite length and therefore never reaches the other charge. But, as I say, these field lines just show the direction of the field so there's no special physical significance to the infinite length.
See also the question: Are the axial electric field lines of a dipole the only ones that extend to infinity?
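The vector addition on the axis is easy to write out; here is a short Python sketch with illustrative numbers of my own (the charge, separation, and distance are not from the question):

```python
k = 8.99e9          # Coulomb constant, N m^2 / C^2
q, a = 1e-9, 0.01   # charge magnitude and half-separation
r = 0.1             # distance of the axial point P from the centre O

# On the axis the two contributions are antiparallel, so their magnitudes
# subtract; the field from the nearer charge (+q) wins.
E_plus = k * q / (r - a)**2
E_minus = k * q / (r + a)**2
E_net = E_plus - E_minus

# compare with the far-field dipole formula E = 2 k p / r^3, with p = 2 a q
p = 2 * a * q
print(E_net, 2 * k * p / r**3)   # close, since r >> a
```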
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why does the topological entropy scale with $\log(L)$ in 1D? Why, in one dimension, does the topological entropy scale with the size of system as $S \sim \ln L$, while in a 2D system it scales with $S \sim L$? Why does dimensionality play such an important role here? I mean, is there any simple but straightforward idea to understand these results?
| In a 1D system, all you can do is vary the size of a subset, which only ever gives $\propto L$ possibilities. The entropy is the logarithm of that count, so we have $\propto \ln L$.
In higher dimensions however, you can also vary the shape, and that is combinatorially much more powerful: you have exponentially many possibilities (think of how you can thread a wire through a grid). The logarithm is then merely able to resolve that into a linear relationship, so it's $S \propto L$ here.
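A toy counting illustration of that combinatorial gap (my own example, not a derivation): contiguous segments of a 1D chain number only $\sim L^2$, so their log grows like $\ln L$, while even a crude family of 2D boundary shapes, monotone lattice paths across an $L\times L$ grid, already numbers $\binom{2L}{L}\sim 4^L$, so the log grows like $L$.

```python
from math import comb, log

for L in (10, 20, 40):
    n_1d = L * (L + 1) // 2      # contiguous sub-intervals of a length-L chain
    n_2d = comb(2 * L, L)        # monotone lattice paths across an L x L grid
    print(L, round(log(n_1d), 2), round(log(n_2d), 2))
# log(n_1d) creeps up like ln L; log(n_2d) roughly doubles when L doubles
```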
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/105967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Which temperature to evaluate fluid properties in pipe? I am always confused about which temperature to evaluate fluid properties at. Let's say I have a helical pipe and I know the inlet temperature, outlet temperature, and surface temperature, and the inlet Reynolds number. I must determine the length of the pipe needed to satisfy the outlet temperature, which means I must know the mass flow rate. I can do this by determining the inlet density and viscosity.
When I use the inlet temperature for these properties, the length is 1.046 m
When I use the average between the inlet and the surface, the length is 0.3994 m.
When I use the average between the inlet and the outlet, the length is 0.5768 m.
As you can see, the temperature I use drastically changes the pipe length.
Also, I am always confused as to what temperature to evaluate the properties at for the Nusselt number as well.
| The changes are huge, so I would recommend re-deriving the pipe flow rate with a (linearly) temperature-dependent formula for viscosity and density. You'll get $Q(T)$; from this you can get the heat flux, and thus a nonlinear differential equation for $T(x)$, which you can integrate numerically. Then find the intercept of $T(x)$ with the desired outlet temperature.
Beforehand, check that a linear dependence is accurate enough for the fluid and temperature range you consider.
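A minimal sketch of this marching procedure (every property value below is a made-up placeholder, substitute your fluid's data; the linear $h(T)$ fit is an assumption standing in for a proper Nusselt correlation):

```python
import math

# March the energy balance  m_dot * cp * dT/dx = h(T) * pi * D * (T_s - T)
# downstream with an Euler step, letting the heat-transfer coefficient
# vary with the local bulk temperature, and stop at the target outlet T.

T_in, T_out, T_s = 300.0, 340.0, 350.0   # K (hypothetical)
m_dot, cp, D = 0.05, 4180.0, 0.01        # kg/s, J/(kg K), m (hypothetical)

def h_coeff(T):
    # Hypothetical linear fit of h(T); in practice derive this from a
    # Nusselt correlation with T-dependent viscosity and conductivity.
    return 800.0 + 2.0 * (T - T_in)      # W/(m^2 K)

x, T, dx = 0.0, T_in, 1e-3
while T < T_out:
    dTdx = h_coeff(T) * math.pi * D * (T_s - T) / (m_dot * cp)
    T += dTdx * dx
    x += dx

print(f"required length ~ {x:.2f} m")
```

With a constant $h$ the length has a closed form, $L=\frac{\dot m c_p}{h\pi D}\ln\frac{T_s-T_{in}}{T_s-T_{out}}$; the numerical march reduces to that when `h_coeff` is constant.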
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why Do Sausages Always Split Lengthwise? Sausages universally split parallel to the length of the sausage. Why is that?
| I'll have to take a page from my EE background and say it's because of the path of least resistance. If you look at the end of a sausage, there is already tension along that plane, in multiple locations:
A chain is only as strong as its weakest link (see what I did there? :) ), so it's natural for a hot dog/sausage to split along a path that's already strained.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49",
"answer_count": 2,
"answer_id": 1
} |
Speed of light originating from a star with gravitational pull close to black-hole strength? Imagine you have a star which is on the brink of turning into a black hole. Let's say it is infinitely close to becoming a black hole, but not there yet.
Since there is no event horizon but there is a very strong gravitational pull, will the speed of light originating from this star be close to 0?
A follow-up question: What will the speed of light be after it travels far enough from the star to escape its gravitational pull?
| I assume you don't mean the speed of light, but you are essentially asking: Will light escape that strong gravitational pull?
If this is your question then first:
Direct quote from wikipedia -> "An object whose radius is smaller than its Schwarzschild radius ($r_s = \frac{2GM}{c^2}$) is called a black hole. The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body (a rotating black hole operates slightly differently)"
Everything has a Schwarzschild radius. An ordinary object's Schwarzschild radius is very, very small, though (Earth's is only about 9.0 mm). If your star is infinitely close to becoming a black hole, then its radius is equal to the Schwarzschild radius and light will not escape from it.
As for your follow-up question, if any light escapes this Schwarzschild radius (because, as noted by Jim, light could hypothetically escape if your star is not yet definitely a black hole), it will always travel at the speed of light $c = 299\,792\,458$ m/s.
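The quoted numbers are easy to check with $r_s = 2GM/c^2$ (the mass values below are standard SI figures):

```python
# Quick check of the Schwarzschild radius r_s = 2GM/c^2 in SI units.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

def schwarzschild_radius(M):
    return 2 * G * M / c**2

r_earth = schwarzschild_radius(5.972e24)   # Earth's mass in kg
r_sun   = schwarzschild_radius(1.989e30)   # Sun's mass in kg
print(f"Earth: {r_earth*1e3:.2f} mm, Sun: {r_sun/1e3:.2f} km")
# Earth comes out near the ~9 mm quoted above; the Sun near 3 km.
```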
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Centrifugal force when there is no friction Assume that a coin is placed on a circular disk and the disk is rotated with constant angular velocity.
If there is no friction between the surfaces of the disk and the coin, then according to theory the coin will move away from the centre of the disk. But I am confused here: the centripetal and centrifugal forces are of equal magnitude, so why does the latter effectively come into play?
| The coin will not move.
First, to differentiate between centrifugal and centripetal, I'll start by stating the definitions first.
Centrifugal force is the apparent force that draws a rotating body away from the center of rotation. It is caused by the inertia of the body as the body's path is continually redirected.
Centripetal force is a force that makes a body follow a curved path: its direction is always orthogonal to the velocity of the body, toward the fixed point of the instantaneous center of curvature of the path. Centripetal force is generally the cause of circular motion.
This means centrifugal force is a fictitious force (or pseudo force). It isn't actually there. Any body travelling in a straight line resists having its direction changed (inertia). When a body moves in a circle, it tries to go straight at every point on the circumference. It is the centripetal force which keeps the body rotating by pulling it back to the circular trajectory. So centripetal force is basically the cause of circular motion. That is why when you rotate anything over your head and release it, it goes straight, as a tangent to the circle.
Coming back to the question, friction itself will act as a centripetal force in this case. If there is no friction, there is no centripetal force and hence, there is no centrifugal force. Therefore, the coin does not move.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
force on a moving charge in magnetic field Need help in understanding the direction of the magnetic force in a magnetic field! Totally confused by directions.
Why is it that the magnetic force is perpendicular to the direction of the magnetic field and the velocity of the charged particle?
Why is it (the force) not in the same direction as the magnetic field?
| A magnetic field exerts a force on a moving charge. Given a magnetic field, $\vec{B}$, and a charge, $q$, moving with velocity, $\vec{v}$, the magnetic force, $\vec{F}$, on the charge is:$$\vec{F}= q(\vec{v} \times \vec{B})$$
The directions of these with respect to one another can be found using the right hand rule. See the picture below.
The magnetic force is perpendicular to the direction of the magnetic field because $\vec{v}$ and $\vec{B}$ enter through a cross product, which always yields a vector perpendicular to both of its factors. Give this a read.
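A numerical illustration of that perpendicularity (the field and velocity values below are arbitrary examples):

```python
import numpy as np

# F = q (v x B) is perpendicular to both v and B, whatever directions
# they point in; the dot products below vanish.
q = 1.602e-19                          # C
v = np.array([2.0e5, -1.0e5, 3.0e4])   # m/s
B = np.array([0.1, 0.25, -0.05])       # T

F = q * np.cross(v, B)

print(np.dot(F, v), np.dot(F, B))      # both essentially zero
```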
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If I drop a leaf twice from the height of a tree in a completely controlled environment, will the trajectory in each case be the same? Putting my question in other words, can earth form again if a similar initial universe condition is given? The uncertainty principle says that we cannot tell with certainty the position of a particle if we know its velocity with greater surety, and vice versa. But I have always felt that this restriction will vanish for a 'god' who has the advantage of knowing how the particles initially were in the beginning, and how they would interact and how the story would go on.... Therefore, can complete knowledge of the initial conditions of the world completely remove the uncertainty principle?
| From my understanding of your questions, you are confusing the "scientific method" and the "uncertainty principle". The scientific method says that given the same starting conditions, within a controlled environment, etc., the results should be the same (i.e. repeatable within some degree of accuracy). The uncertainty principle deals with an entirely different thing. It tells us that either one of the measurements of a particle's position and momentum can be accurately determined, but not both at the same time.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 5
} |
How to express magnetic field vector in terms of force on current I am preparing for an exam and one of the questions I have come across asks:
Define the electric field $\mathbf{E}$ and the magnetic flux density $\mathbf{B}$, in terms of the force on charges and currents.
By the Lorentz force law we have:
$$\mathbf{F}=q(\mathbf{E}+\mathbf{v}\times \mathbf{B})$$
Where $\mathbf{v}$ is the velocity of the charge carrying particle. If we then set $\mathbf{B}=\vec{0}$ we get:
$$\mathbf{E}=\lim_{q\to 0}\left(\frac{\mathbf{F}}{q}\right)$$
However, setting $\mathbf{E}=\vec{0}$ we get: $\mathbf{F}=q\mathbf{v}\times\mathbf{B}=\mathbf{I}\times\mathbf{B}$, where $\mathbf{I}$ is the current vector. But there is no unique inversion for the cross product, and therefore I am not sure how I am supposed to define $\mathbf{B}$ in terms of $\mathbf{F}$ and $\mathbf{I}$. Is there a standard definition like for the electric field?
| You might be interested in this. Let's look at the case where $\vec E=0$. If you know $q \vec v$, $\vec F$ and $\alpha=\angle(\vec B;q \vec v)$, you can actually find $\vec B$.
$$\vec F=q \vec v \times \vec B$$
$$\vec B=\frac{|\vec F|}{q|\vec v| \sin\alpha} \left (\frac{\vec v \cos\alpha}{|\vec v|}+\frac{(\vec F\times \vec v) \sin\alpha}{|\vec F\times \vec v|} \right)$$
Let's look at the case where $\vec E \neq0$:
$$\vec F-q \vec E=q \vec v \times \vec B$$
$$\vec B=\frac{|\vec F- q \vec E|}{q|\vec v| \sin\alpha} \left (\frac{\vec v \cos\alpha}{|\vec v|}+\frac{\big((\vec F- q \vec E)\times \vec v\big) \sin\alpha}{|(\vec F- q \vec E)\times \vec v|} \right)$$
If you are interested in how these equations were derived, look here: https://math.stackexchange.com/questions/4277293/finding-vec-b-from-vec-a-times-vec-b-vec-a-and-alpha-angle-vec.
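A quick numerical round-trip (arbitrary example values, taking $q>0$): build $\vec F$ from a known $\vec B$, then reconstruct $\vec B$ from $\vec F$, $\vec v$ and $\alpha$. The perpendicular term uses $\vec F\times\vec v$; the order of the cross product is easy to get flipped, so a check like this is worth running.

```python
import numpy as np

# Round-trip check: F = q (v x B), then recover B from F, v and alpha.
q = 1.6e-19
v = np.array([1.0e5, 0.0, 0.0])
B_true = np.array([0.3, 0.4, 0.0])

F = q * np.cross(v, B_true)

cos_a = np.dot(v, B_true) / (np.linalg.norm(v) * np.linalg.norm(B_true))
alpha = np.arccos(cos_a)               # angle between B and q*v (q > 0)

Fxv = np.cross(F, v)                   # sets the perpendicular direction
B_rec = (np.linalg.norm(F) / (q * np.linalg.norm(v) * np.sin(alpha))) * (
    np.cos(alpha) * v / np.linalg.norm(v)
    + np.sin(alpha) * Fxv / np.linalg.norm(Fxv)
)

print(B_rec)   # recovers (0.3, 0.4, 0.0)
```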
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Electrostatic and gravitational forces? Electrostatic force between two charged particles depends on the magnitude of the charges and the distance between them. If the charges have mass $m$ and $m'$ then, what will be the total force including gravitational and electrostatics forces? Distance between them is $d$.
| Seeing your comment, it seems you are concerned about a group of charges with a certain mass. Then you need to apply Gauss's law for the cases where it becomes difficult to apply Coulomb's law or the principle of superposition. In the case of the gravitational force, find the centres of mass of either configuration and you can proceed to find the force using Newton's law of gravitation. The vector sum of the two forces will then give your net force.
For some special cases, like the force between the Earth and other planets which also have magnetism, you can add the magnetic force to get the net force.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/106947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Would QM be detectable in a all boson universe If there was a universe with the same laws as this one, but there were only bosons in it, would QM 'do anything'?
Would there be any QM effects - such as an energy level (but that would require fermions..).
| WSC - Not all matter is made of fermions. Things such as helium-4 and carbon-12 are bosons, and there are many other composite particles and molecules that are actually composite bosons. Any composite particle with an even number of fermions, and thus with an integer value of spin, is bosonic, which to me seems a little moronic.
I imagine that you are thinking primarily of the force-carrying bosons and not the many composite bosons. It is easy to lose track of which particles are in which family, as there are so many ways to classify any particular compound. I think there is something wrong with a classification system that groups the magical force-carrying particles with something as interesting as, but entirely different (to me at least) from, helium-4. Yes, helium-4, especially when ultracold and behaving as a superfluid, is wonderful, but it seems like it should be in a different family than the Higgs boson, the photon, the weak and strong force particles and the colorful gluon zoo. Perhaps composite bosons should be called something else. I propose "PartyOns".
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding friction forces of stacked boxes on a table Consider the following system. The given friction coefficients are for static friction. Let $g=10\ \mathrm{m/s^2}$.
If $F(t)=10t$, for example, determine the friction forces $f_{AB}(t)$ (between the boxes $A$ and $B$) and $f_{BF}(t)$ (between the box $B$ and the floor).
I am confused about how the friction forces grow in reaction to the external force $F(t)$. Could you explain the concept for solving this kind of problem? It is not homework, for sure.
| $f_{AB}(t)$ will be on the left side and $f_{BF}(t)$ on the right.
Maximum $f_{BF}(t)=\mu_{BF} (m_A+m_B)g=(0.6)(30)(10)=180\ \mathrm{N}$
So $F(t)$ will have to be greater than $180\ \mathrm{N}$ for $B$ to move. When it does, $A$ will experience a pseudo force, say $F'(t)$, in the left direction. If $B$ accelerates at $a_B$, then $$F'(t)=m_Aa_B=10(180/30)=60\ \mathrm{N}$$
But the maximum $f_{AB}(t)=\mu_{AB} m_Ag=(0.3)(10)(10)=30\ \mathrm{N}$.
Thus $F'(t)>f_{AB}(t)$, and hence $A$ will appear to slide towards the left once $B$ moves and will eventually fall off.
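The threshold arithmetic can be laid out in a few lines. The masses and static coefficients ($m_A=10$ kg, $m_B=20$ kg, $\mu_{BF}=0.6$, $\mu_{AB}=0.3$) come from the figure referenced in the question, so they are assumptions here:

```python
# Reproducing the answer's threshold arithmetic with assumed figure values.
g = 10.0
m_A, m_B = 10.0, 20.0          # kg (assumed from the figure)
mu_AB, mu_BF = 0.3, 0.6        # static coefficients (assumed from the figure)

f_BF_max = mu_BF * (m_A + m_B) * g   # 180 N: F(t) must exceed this to move B
f_AB_max = mu_AB * m_A * g           # 30 N: the most friction can exert on A

t_slip = f_BF_max / 10.0             # with F(t) = 10 t, threshold at t = 18 s

a_B = f_BF_max / (m_A + m_B)         # 6 m/s^2, the acceleration used above
pseudo_on_A = m_A * a_B              # 60 N > 30 N, so A slides backwards

print(f_BF_max, f_AB_max, t_slip, pseudo_on_A)
```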
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do rocket engines have a throat? Diagrams of rocket engines like this one,
(source)
always seem to show a combustion chamber with a throat, followed by a nozzle.
Why is there a throat? Wouldn't the thrust be the same if the whole engine was a U-shaped combustion chamber with a nozzle?
| The whole point to the throat is to increase the exhaust velocity. But not just increase it a little bit -- a rocket nozzle is designed so that the nozzle chokes. This is another way of saying that the flow accelerates so much that it reaches sonic conditions at the throat. This choking is important. Because it means the flow is sonic at the throat, no information can travel upstream from the throat into the chamber. So the outside pressure no longer has an effect on the combustion chamber properties.
Once it is sonic at the throat, and assuming the nozzle is properly designed, some interesting things happen. When we look at subsonic flow, the gas speeds up as the area decreases and slows down as the area increases. This is the traditional Venturi effect. However, when the flow is supersonic, the opposite happens. The flow accelerates as the area increases and slows as it decreases.
So, once the flow is sonic at the throat, the flow then continues to accelerate through the expanding nozzle. This all works together to increase the exhaust velocity to very high values.
From a nomenclature standpoint, the throat of a nozzle is the location where the area is the smallest. So a "U-shaped chamber with a nozzle" will still have a throat -- it's defined as wherever the area is the smallest. If the nozzle is a straight pipe then there is no throat to speak of.
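The converge-then-diverge behavior follows from the standard isentropic area-Mach relation for a calorically perfect gas; a small sketch (assuming $\gamma=1.4$, i.e. air-like exhaust, purely for illustration):

```python
import math

# Isentropic area-Mach relation:
#   A/A* = (1/M) * ((2 + (g-1) M^2) / (g+1)) ^ ((g+1) / (2 (g-1)))
# The ratio is 1 only at M = 1 (the throat) and grows on both sides,
# which is why the duct must first converge to sonic and then diverge.

def area_ratio(M, gamma=1.4):
    e = (gamma + 1) / (2 * (gamma - 1))
    return (1 / M) * ((2 + (gamma - 1) * M**2) / (gamma + 1))**e

for M in (0.5, 1.0, 2.0):
    print(M, area_ratio(M))
# 0.5 -> ~1.340, 1.0 -> 1.0, 2.0 -> ~1.688
```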
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52",
"answer_count": 4,
"answer_id": 0
} |
Force exerted on ceiling by a simple mechanical system I have a simple static mechanical system, but I reach a conclusion that seems to me counter-intuitive:
There is a pulley fixed to the ceiling and there is a weight fixed to a rope which goes through the pulley and is fixed to a point on the floor. I denote by $T$ the tension in the rope and draw the forces applied to the weight (since the weight is in equilibrium, I should have $T=mg$), and to the pulley. Since the pulley is in equilibrium, the force exerted on it by the ceiling should be $2T$, and thus I arrive at the conclusion that in this setting, the force exerted on the ceiling (by Newton's 3rd law) is equal to $2mg$. Is my reasoning correct?
| If there is no air drag, the pulley and the rope are frictionless and massless, the rope is not slacking anywhere, the rope is unstretchable and the rope is attached at exactly the centre of mass of the object so that no torque is produced, yes.
How is it counter-intuitive? It may seem like it should require an exact $mg$ force on the top too, but don't forget that one end of the rope is attached to the ground. The extra $mg$ comes into the picture because there is a tension $T$ in the segment of rope attached to the floor as well, which means the support at the top has to balance both segments by carrying $2T$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Sum of acceleration vectors If a point mass has some accelerations $\mathbf{a_1} $ and $\mathbf{a_2} $, why is mathematically true that the "total" acceleration is $\mathbf{a}= \mathbf {a_1}+\mathbf {a_2}$?
| This is due to the superposition principle: when several forces act upon a body, the net force is the sum of the individual forces: $$\vec F_{net} = \sum \vec F_i $$
However, this is only true when the relation between the force and the acceleration is linear.
Let's take the gravitational force as an example: say you have three bodies and you have already calculated $\vec a_1$ and $\vec a_2$ - the accelerations felt from the third body due to the other two. Then the force on the third one would be $$m \vec a =\vec F_1 + \vec F_2= m \vec a_1 + m \vec a_2 = m(\vec a_1 + \vec a_2)=m\vec a_{1+2} = \vec F_{net} $$ since the force is linear in $\vec a$. Here $\vec a_{1+2}$ - the total acceleration - is really $\vec a_1 + \vec a_2$.
Counter-example:
If you had an environment where the acceleration is proportional to the force squared then the superposition principle would not be true. Let's say that this quadratic relationship would be the case for the gravitational force, then the force on the third body would be (I'm just considering the x-component here):
$$\begin{align}
m a_x & = (F_{net})^2\\
&=( F_{1x} + F_{2x})^2\\
&=(m a_{1x} + m a_{2x})^2\\
& = (m a_{1x})^2+2m^2 a_{1x} a_{2x}+(m a_{2x} )^2\\
&=( F_{1x})^2+ (F_{2x})^2 + 2m^2 a_{1x} a_{2x}\\
\end{align}$$
The linearity is not given ($(a+b)^2\neq a^2+b^2$) and hence the superposition principle is not valid. You can see this by looking at the $2m^2 a_{1x}a_{2x}$ term: in principle the superposition principle just says that the sum of the forces has the same effect as the combination of the individual forces. Here, however, the squared sum has the effect of the combined squared forces plus a cross term.
This in turn means that in this case, the total acceleration which you get on the right hand side is not just $\vec a_1 + \vec a_2$.
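The contrast between the linear and the hypothetical quadratic law can be shown with two lines of arithmetic (the force values are arbitrary examples):

```python
import numpy as np

# Linear case: with F = m a, the accelerations from two forces simply add.
m = 2.0
F1 = np.array([3.0, 0.0])
F2 = np.array([0.0, 4.0])
a1, a2 = F1 / m, F2 / m
assert np.allclose(m * (a1 + a2), F1 + F2)   # superposition holds

# Hypothetical quadratic law (x-components only): squaring breaks additivity.
F1x, F2x = 3.0, 4.0
print((F1x + F2x)**2, F1x**2 + F2x**2)       # 49 vs 25: cross term 2*F1x*F2x
```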
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Why is the periodicity of fields in finite temperature QCD a consequence of the trace in the action? In finite-temperature QCD, the gauge fields must be periodic in the temporal direction. They say this is a consequence of the trace in the action for the gauge fields. How does the trace imply that the fields must be periodic?
| A derivation is given for the case of a scalar field in these lecture notes. The same arguments apply for gauge fields in QCD.
The idea is that when calculating the partition function, which is given by a euclidean path integral of the quantum field theory, we take the trace of the exponential of the hamiltonian $\hat{H}$ and possibly other terms involving a chemical potential $\mu$ and the number operator $\hat{N}$:
$$Z=\text{Tr}\;\text{exp}[-\beta(\hat{H}-\mu \hat{N})].$$
The trace is given explicitly in terms of the fields $\phi$ by
$$Z=\int d\phi\langle\phi|\text{exp}[-\beta(\hat{H}-\mu \hat{N})]|\phi\rangle.$$
As described in the lecture notes, further manipulation of this expression reveals a delta function, which ensures the periodicity of the fields with respect to time.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Behavior of Saha and Boltzmann So I'm just wondering why the Saha and Boltzmann distributions behave differently as temperature increases?
I know one is for ionization levels while the other is for energy levels but is that the answer?
| It's likely due to a combination of two reasons, one of which you already mentioned:
*As you state, the Saha equation relates the ionization levels while the Maxwell-Boltzmann distribution deals with the energy levels.

*The Saha equation incorporates quantum mechanical effects in its derivation, while the Maxwell-Boltzmann distribution treats quantum effects as negligible.
I recommend checking out Fundamentals of Plasma Physics by J Bittencourt, as the Maxwell-Boltzmann distribution and Saha ionization equations are well covered in the book.
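To see the Saha behaviour concretely, here is a sketch for pure hydrogen (the statistical-weight factor $2g_{II}/g_I$ equals 1 for hydrogen, and the number density used is an arbitrary example):

```python
import math

# Degree of ionization x = n_p/n_tot of pure hydrogen from the Saha equation
#   n_p n_e / n_H = (2 pi m_e k T / h^2)^(3/2) * exp(-chi / k T),
# solving x^2/(1-x) = S/n_tot.  Note the sharp switch-on around 10^4 K.
k = 1.381e-23; h = 6.626e-34; m_e = 9.109e-31
chi = 13.6 * 1.602e-19            # hydrogen ionization energy, J

def ionization_fraction(T, n_tot=1e20):
    S = (2 * math.pi * m_e * k * T / h**2)**1.5 * math.exp(-chi / (k * T))
    s = S / n_tot
    return (-s + math.sqrt(s * s + 4 * s)) / 2   # positive root of x^2/(1-x)=s

for T in (5000, 10000, 25000):
    print(T, ionization_fraction(T))
```

A pure Boltzmann factor $e^{-\chi/kT}$ at $10^4$ K is only $\sim 10^{-7}$, yet the Saha result there is already mostly ionized: the phase-space prefactor makes the difference.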
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Electric field in a sphere with a cylindrical hole drilled through it Suppose that you have a sphere of radius $R$ and uniform charge density $\rho$; a cylindrical hole with radius $a$ ($a\ll R$) is drilled through the center of the sphere, leaving it like a "necklace bead".
I would like to find a function for the electric field (1) very far away from the sphere ($r\gg R$) and (2) inside the hole, near the center of the bead $r\ll R$.
In case (1), I simply treat it as a point charge and calculating the electric field is trivial.
However, I am uncertain how to approach part (2) and would appreciate any assistance. The combination of spherical and cylindrical geometries seems to make this quite tricky. I am unsure what approximation or simplification to make from the knowledge that $r\ll R$.
Would it perhaps be correct to find the electric field from (1) a complete, uniformly charged sphere and (2) a cylinder of charge density $-\rho$? Summed together, the charge densities would result in our original "bead" system, so then I can just add together the expressions for the electric field. Doing case (1) is quite easy, but (2) is nontrivial for positions that are not along the axis of the cylinder, but perhaps due to our condition that $r\ll R$ and $a\ll R$, we can assume that the field from the cylinder along the $z$-axis is a good enough approximation.
| For a cylinder:
$$dV=\pi a^2dr\\dq=\rho dV=\rho\pi a^2dr\\dE=Kdq/r^2=K\rho\pi a^2dr/r^2\\E=\int dE=K\rho\pi a^2\int_{r_0}^{r_0+l} dr/r^2=\frac{K\rho\pi a^2l}{r_0(r_0+l)}$$
For a point inside the cylinder, as in the figure, the field due to the cylinder segment of length $R-x$ on one side is cancelled by a similar segment on the opposite side; the resultant field from the remaining length $2x$ is:
$$E=\frac{K\rho\pi a^2l}{r_0(r_0+l)}
=\frac{K\rho\pi a^2(2x)}{(R-x)({R-x}+2x)}\\
=\frac{2K\rho\pi a^2x}{R^2-x^2}
=\frac{2K\rho\pi a^2x}{R^2\left(1-\frac{x^2}{R^2}\right)}
\approx\frac{2K\rho\pi a^2x}{R^2} \text{ as } x\ll R\\
=\frac{\rho a^2x}{2\epsilon_0R^2}$$
And for sphere:
$$\large E=\begin{cases}
\frac{\rho x}{3\epsilon_0}\;0\le x\le R
\\\frac{\rho R^3}{3\epsilon_0 x^2}\;x\ge R
\end{cases}$$
Now $E$ can be easily calculated
$$E_{out}=
\frac{\rho R^3}{3\epsilon_0 r^2}-\frac{\rho a^2(2R)}{4\epsilon_0(r-R)(r-R+2R)}\\
=\frac{\rho R^3}{3\epsilon_0 r^2}-\frac{2\rho a^2R}{4\epsilon_0(r^2-R^2)}\\
\approx \frac{\rho R^3}{3\epsilon_0 r^2}-\frac{\rho a^2R}{2\epsilon_0r^2} \text{ as } r\gg R \\
=\frac{\rho R}{6\epsilon_0 r^2}\left[2R^2-3a^2\right]
$$
Note that, $\large\lim_{a\to 0}E=\frac{\rho R^3}{3\epsilon_0 r^2}$
Similarly
$$E_{in}=\frac{\rho x}{3\epsilon_0}-\frac{\rho a^2x}{2\epsilon_0R^2}\\
=\frac{\rho x}{6\epsilon_0}.\left[2-3\frac{a^2}{R^2}\right]$$
Here also, $\large\lim_{a\to 0}E=\frac{\rho x}{3\epsilon_0}$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Sign convention for EMF When we define the EMF, why is there no negative sign in $\mathcal{E} = \oint \vec{E} \cdot d\vec{l}$?
Usually, when we talk about potential, there should be a negative sign, right?
| The conceptual problem here is that of EMF, $\mathcal{E}$, vs electric potential, $V$. They aren't really the same thing despite being measured in the same units. For instance, the EMF is caused by an external agent that isn't the conservative electrostatic field, like, say, a chemical reaction in a battery or a solar cell. Work is done to cause a charge separation; this charge separation, however, produces an electrostatic field opposite in direction to the EMF that caused it.

As a visualization: for an EMF pointing to the right, the EMF separates negative and positive charges so that the positive ones move in the direction of the EMF, ending up to the right of the negative ones. If you have drawn this situation, you can see that since the charge is separated, there now exists an electrostatic field caused by the charges themselves which points to the left, equal in magnitude but opposite in direction to the applied EMF. Hence there is a sign difference between the definitions of electric potential and EMF. This is a simple visualization; there are more rigorous ways to prove it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
About the recent discovery of tetraquark boundstates I am referring to this,
http://home.web.cern.ch/about/updates/2014/04/lhcb-confirms-existence-exotic-hadron
So how does this work if we stick to keeping quarks in the 3 dimensional fundamental representation of $SU(3)$?
This bound-state seems to have 2 anti-quarks and 2 quarks. So with just 3 colours how do we make the whole thing anti-symmetric with respect to the colour quantum number?
Is there anything called an "anti-colour" quantum number that an anti-quark can possess, so that there are a total of $(3\times 2)^2$ colour options to choose from for the 2 quarks and 2 anti-quarks? I have never heard of such a thing!
The point is that unlike the $U(1)$ charge, the non-Abelian charge doesn't occur in the Lagrangian for the quarks. The Lagrangian only sees the different flavours, the gauge groups and the gauge coupling constant.
| Antiquarks can be distinguished from quarks, so you only need to antisymmetrize two at a time. That's no problem, and even if you had 3 quarks it wouldn't be. Furthermore, you only need the total state to be antisymmetric. You could have antisymmetry in space, symmetry in spin and symmetry in color, and the whole thing would be antisymmetric. (Like how you can put two electrons in each atomic orbital, and both singlet and triplet are allowed.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/107990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Experimental evidence for the relic neutrinos What are the experimental (indirect) evidence for the cosmic neutrino background? Where can I read more about this?
The discussion on the wikipedia page about the C$\nu$B seems to me to be more about the evidence of the number of generation of neutrinos, than about the cosmic neutrino background...
| We have, at this time, no tools capable of detecting neutrinos at the very low energies of the cosmic neutrino background, and if such tools existed they would have to contend with numerous backgrounds making the experiment ferociously difficult.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/108075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
De Broglie Wavelengths I have a working knowledge of wave-particle duality, I think. I know the de Broglie wavelength is a sort of probability of finding a particle in a specific position, and is calculated by $\lambda=\frac{h}{\vec{p}}$. I have a couple questions I'm hoping to have cleared up, though.
First, since $\vec{p}$ is momentum, a vector, what happens to the direction in the above formula? Is it meant to be the magnitude of momentum instead? $\lambda$ having a direction doesn't seem to make sense.
Second, clearly if $\vec{v}=\vec{0}$, then $\vec{p}=m \vec{v}=\vec{0}$, and $\lambda$ is undefined. Does this mean particles that are stationary have no wave character? This also doesn't make sense, so I think this is not the case.
| I have never seen de Broglie's relation written with vector quantities. A quick search online reveals a lack of vectors as well. In the relation
$$\lambda = \frac{h}{p}$$
it is implied that the quantity $p$ is the magnitude of the momentum $\left | \vec{p} \right |=p.$ Yes, the word momentum in a strict sense refers to a vector quantity, but often physicists will use this term when referring to the well-defined scalar $p$.
On a related note, the term wavelength typically (always, probably) refers to a scalar, not a vector. So trying to ascribe a direction to a wavelength is not something one would expect to be done. Though, as David H described in comments, there is a commonly-used vector quantity that is related to the wavelength: the wave vector:
$$\vec{k}=\frac{2\pi}{\lambda}\hat{v},$$
where the unit vector $\hat{v}$ typically points in the direction of propagation. If you fiddle around with these definitions you can write something akin to the de Broglie relation with vectors: $\vec{p}=\hbar \vec{k},$ with $\hbar=h/(2\pi)$.
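A quick numerical example using the scalar relation $\lambda = h/p$ (the electron speed is an arbitrary, non-relativistic example):

```python
# de Broglie wavelength using the magnitude of momentum, lambda = h/p.
h = 6.626e-34          # J s
m_e = 9.109e-31        # kg

v = 1.0e6              # m/s, example electron speed (non-relativistic)
p = m_e * v            # scalar magnitude |p|
lam = h / p

print(f"{lam*1e9:.3f} nm")   # ~0.727 nm, comparable to atomic scales
```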
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/108158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the notion of a spatial angle in general relativity? Is there a notion of spatial angles in general relativity?
Example:
The worldline of a photon is given by $x^{\mu}(\lambda)$. Suppose it flies into my lab where I have a mirror. I align the mirror in such a way that I measure a right angle between the incoming and outgoing photon. How can I now calculate the worldline of the outgoing photon?
Of course I cheated a bit since I just used the old euclidean notion of an angle when I say "I measure a right angle between the incoming and outgoing photon". But I hope that this is allowed locally.
| As a geometric quantity, the value of an "angle" can be determined and expressed in a coordinate-free way:
Given three pairwise space-like events, "$A$", "$B$" and "$C$", and given the positive real numbers
$\frac{s^2[ A C ]}{s^2[ A B ]}$, $\frac{s^2[ A C ]}{s^2[ B C ]}$ and $\frac{s^2[ B C ]}{s^2[ A B ]} = \frac{s^2[ A C ]}{s^2[ A B ]} / \frac{s^2[ A C ]}{s^2[ B C ]}$
as ratios between squares of interval magnitudes (or minimum arc lengths) between event pairs,
then the value of "angle at $B$, between $A$ and $C$" may either be expressed (directly) as
$\angle [ A B C ] := $
$\text{ArcSin} \left[ \frac{1}{2} \sqrt{ 2 + 2 \frac{s^2[ A C ]}{s^2[ A B ]} + 2 \frac{s^2[ A C ]}{s^2[ B C ]} - \frac{s^2[ B C ]}{s^2[ A B ]} - \frac{s^2[ A B ]}{s^2[ B C ]} - \frac{s^2[ A C ]}{s^2[ A B ]} \frac{s^2[ A C ]}{s^2[ B C ]} } \right];$
or, more generally, in terms of squares of interval magnitudes (or minimum arc lengths) involving additional events "$F$", "$G$" (named in the sense of variables) which are space-like to each other as well as to "$A$", "$B$" and "$C$", as
$\angle [ A B C ] := \text{Limit}_{\{ F, G \}} {\huge[} $
${\Large \{ } \frac{s^2[ B F ]}{s^2[ A B ]} \rightarrow 0, \frac{s^2[ B G ]}{s^2[ B C ]} \rightarrow 0, $
$2 + 2 \frac{s^2[ B F ]}{s^2[ A B ]} + 2 \frac{s^2[ B F ]}{s^2[ A F ]} - \frac{s^2[ A F ]}{s^2[ A B ]} - \frac{s^2[ A B ]}{s^2[ A F ]} - \frac{s^2[ B F ]}{s^2[ A B ]} \frac{s^2[ B F ]}{s^2[ A F ]} \rightarrow 0, $
$2 + 2 \frac{s^2[ B G ]}{s^2[ B C ]} + 2 \frac{s^2[ B G ]}{s^2[ C G ]} - \frac{s^2[ C G ]}{s^2[ B C ]} - \frac{s^2[ B C ]}{s^2[ C G ]} - \frac{s^2[ B G ]}{s^2[ B C ]} \frac{s^2[ B G ]}{s^2[ C G ]} \rightarrow 0 {\Large \} }; $
$\text{ArcSin} \left[ \frac{1}{2} \sqrt{ 2 + 2 \frac{s^2[ F G ]}{s^2[ B F ]} + 2 \frac{s^2[ F G ]}{s^2[ B G ]} - \frac{s^2[ B G ]}{s^2[ B F ]} - \frac{s^2[ B F ]}{s^2[ B G ]} - \frac{s^2[ F G ]}{s^2[ B F ]} \frac{s^2[ F G ]}{s^2[ B G ]} } \right] {\huge]} . $
If the region containing the events $A$, $B$ and $C$ is flat then these two "angle" values are equal.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/108359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Could Charles-Augustin de Coulomb measure the charge in Coulombs?
*
*Did Charles-Augustin de Coulomb know:
*
*Coulomb's constant
*Coulomb (as a unit)
If not, when was it first measured?
| No, Coulomb did not know the Coulomb as a unit. According to this page, the Coulomb was defined at the 9th CGPM (General Conference on Weights and Measures) conference, in 1948. Wikipedia gives the same date.
Coulomb could not measure charges, but he could create a charge and then halve it, quarter it, etc. by letting the charge flow from one object to another identical one. That way he established that the force between 2 charges is inversely proportional to the square of the distance between them, and proportional to the product of the charges. Note that this is the same formula as Newton's law of gravity; it just uses charges instead of masses.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/108719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Constant of motion An exercise from Goldstein (9.31-3rd Ed) asks to show that for a one-dimensional harmonic oscillator $u(q,p,t)$ is a constant of motion where
$$
u(q,p,t)=\ln(p+im\omega q)-i\omega t
$$
and $\omega=(k/m)^{1/2}$. The demonstration is easy but the physical significance of the constant of motion is not so clear to me. Indeed I can show that $u$ can be rewritten like:
$$
u(q,p,t)=i\phi+\ln(m\omega A)
$$
where $\phi$ is the phase and $A$ the amplitude of the vibration of the oscillator. I can also demonstrate that $m\omega A=\sqrt{2mE}$, where $E$ is the total energy of the oscillator. But is there any further significance of $u$ that I'm missing?
| The quantity inside the natural log seems to be proportional to the classical analog of the raising operator in quantum mechanics:
$$
a_{+}= \frac{1}{\sqrt{2m}}\bigg(\frac{\hbar{}}{i}\frac{d}{dx}+im\omega{}x\bigg)\\
a_{+}= \frac{1}{\sqrt{2m}}\bigg(\hat{p}+im\omega{}q\bigg)
$$
Where $\hat{p}$ is the quantum mechanical momentum operator and I have changed x to the generalized coordinate q to show the similarity to the problem.
As you noted, $\omega{}t$ is related to $\phi{}$.
Conclusion: This constant of motion, $u$, is probably related to the raising operator for a time-dependent problem.
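As a sanity check (mine, not part of the original answer), one can evaluate $u$ numerically along the oscillator solution $q=A\sin(\omega t+\phi)$, $p=m\omega A\cos(\omega t+\phi)$. Comparing $e^{u}$ rather than $u$ sidesteps the $2\pi i$ branch ambiguity of the complex logarithm; the parameter values are arbitrary:

```python
import numpy as np

m, k = 1.0, 4.0
omega = np.sqrt(k / m)
A, phi = 0.7, 0.3                            # arbitrary amplitude and phase

t = np.linspace(0.0, 10.0, 200)
q = A * np.sin(omega * t + phi)              # harmonic-oscillator solution
p = m * omega * A * np.cos(omega * t + phi)

u = np.log(p + 1j * m * omega * q) - 1j * omega * t
# p + i*m*omega*q = m*omega*A * exp(i(omega t + phi)), so exp(u) is constant:
assert np.allclose(np.exp(u), m * omega * A * np.exp(1j * phi))
```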
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/108928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
An identity for spinor helicity formalism I have a question about the spinor helicity formalism from arXiv:1308.1697
Denote the massless spin-1/2 fermions as in Eqs. (2.10)-(2.11) of that paper:
$$v_+(p)= \begin{pmatrix} |p]_a \\ 0 \end{pmatrix} $$
$$v_-(p)= \begin{pmatrix} 0 \\ |p \rangle^{\dot{a}} \end{pmatrix} $$
$$\bar{u}_-(p)= (0, \langle p |_{\dot{a}})$$
$$\bar{u}_+(p)= ([p |^{a},0)$$
For real momenta, there is an identity in that paper
$$ [k| \gamma^{\mu} |p \rangle^*= [p|\gamma^{\mu}|k \rangle \tag{2.33}$$
My question is, how to prove (2.33)? I know $$[p|^a=(|p \rangle^{\dot{a}})^*, \qquad \langle p |_{\dot{a}} = (|p]_a)^* \tag{2.14}$$ for real momenta.
By using (2.14) I got $$[k| \gamma^{\mu} |p \rangle = ([p|)^* \gamma^{\mu} (|k \rangle)^*,$$ and since $\gamma^{\mu *} \neq \gamma^{\mu}$ for $\mu=2$, I still miss a complex conjugation...
| The problem is sloppy (but convenient) notation. The objects,
\begin{equation}
\left| k \right] ^a , \quad \left| p \right\rangle ^{ \dot{a} }
\end{equation}
are two-component spinors, while $\gamma_\mu$ is a $4\times 4$ matrix. So it's not even clear what the brakets mean. When we write the braket,
\begin{equation}
\left[ k | \gamma ^\mu | p \right\rangle
\end{equation}
what we really mean is that we pick out the Pauli matrix in the $ \gamma ^\mu $ with the correct index structure. For example,
\begin{equation}
\left[ k \right| ^a \left( \begin{array}{cc}
0 & \sigma ^\mu _{ a \dot{a} } \\
\bar{\sigma} ^\mu _{ \dot{a} a } & 0
\end{array} \right) \left| p \right\rangle^ {\dot{a}} = \left[ k \right| ^a \sigma ^{ \mu } _{ a \dot{a} } \left| p \right\rangle ^{ \dot{a} }
\end{equation}
with similar results for the other brakets.
Now getting back to your question. The $ \gamma ^\mu $ matrix is not invariant under complex conjugation:
\begin{equation}
\left( \gamma ^\mu \right) ^\dagger = \gamma _0 \gamma ^\mu \gamma _0 = \left( \begin{array}{cc}
0 & \bar{\sigma} ^\mu \\
\sigma ^\mu & 0
\end{array} \right)
\end{equation}
so all complex conjugation does is switch the positions of the $ \sigma ^\mu $ and $ \bar{\sigma} ^\mu $ matrices. Therefore, we can just omit the complex conjugation if we remember the "pick the Pauli matrix with the correct index structure" prescription.
Then we have,
\begin{equation}
\left( \left[ k \right| ^a ( \gamma ^\mu ) _{ a \dot{a} } \left| p \right\rangle ^{ \dot{a} } \right) ^\ast = \left[ p \right| \gamma ^\mu \left| k \right\rangle
\end{equation}
where it is understood that now we are picking out the $ \bar{\sigma} ^\mu $ matrix instead of $ \sigma ^\mu $ in $ \gamma ^\mu $.
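A numeric spot-check of (2.33) can be done with explicit two-component spinors. The spinor normalization below is an assumption on my part; the only ingredients the identity actually needs are the Hermiticity of the $\sigma^\mu$ matrices and the real-momentum relation $|p]=(|p\rangle)^*$:

```python
import numpy as np

def angle_spinor(E, theta, phi):
    # two-component spinor for a massless momentum (assumed convention)
    return np.sqrt(2 * E) * np.array([np.cos(theta / 2),
                                      np.sin(theta / 2) * np.exp(1j * phi)])

# sigma^mu = (1, sigma_x, sigma_y, sigma_z) -- all four are Hermitian
sigma = [np.eye(2),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

lam_k = angle_spinor(2.0, 0.7, 1.1)
lam_p = angle_spinor(3.0, 2.1, -0.4)

for s in sigma:
    lhs = np.conj(lam_k) @ s @ lam_p   # stands in for [k| gamma^mu |p>
    rhs = np.conj(lam_p) @ s @ lam_k   # stands in for [p| gamma^mu |k>
    assert np.isclose(np.conj(lhs), rhs)
```

The assertion passes for every $\mu$ because $(\lambda_k^*\, \sigma^\mu \lambda_p)^* = \lambda_p^*\, (\sigma^\mu)^\dagger \lambda_k = \lambda_p^*\, \sigma^\mu \lambda_k$, exactly the Hermiticity argument of the answer.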
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How do I simulate this simple quantum circuit in MATLAB? I want to simulate a circuit similar to the one below in MATLAB. If you have a state matrix describing the state of 3 qubits, I understand that you could apply a CNOT matrix tensored with an identity matrix to $\psi_{0}$ to get $\psi_{1}$, but if you want to apply a controlled operation to the 1st and 3rd qubit to get $\psi_2$, how can you do this? It's like you need to "remove" the information about the second qubit, apply a CNOT gate, and then somehow integrate the result back with the superposition of the second qubit... I do not understand how to do this.
In general if I have a superposition of N qubits, how do I apply a controlled operation on qubits i and j?
| I think this will answer your question. How does the CNOT between qubits one and three work?
$$\left|000\right\rangle \to \left|000\right\rangle$$
$$\left|001\right\rangle \to \left|001\right\rangle$$
$$\left|010\right\rangle \to \left|010\right\rangle$$
$$\left|011\right\rangle \to \left|011\right\rangle$$
$$\left|100\right\rangle \to \left|101\right\rangle$$
$$\left|101\right\rangle \to \left|100\right\rangle$$
$$\left|110\right\rangle \to \left|111\right\rangle$$
$$\left|111\right\rangle \to \left|110\right\rangle$$
So its matrix would look like:
$$\left(\begin{array}{cccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
\end{array}\right)$$
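The question mentions MATLAB; the same construction can be sketched in Python/NumPy (`np.kron` plays the role of MATLAB's `kron`). The trick that answers "how do I skip the middle qubit" is to build the gate from projectors on the control qubit, with an identity sitting in the skipped slot:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])   # |0><0| on the control qubit
P1 = np.array([[0, 0], [0, 1]])   # |1><1| on the control qubit

# CNOT with control = qubit 1, target = qubit 3 (identity on qubit 2)
cnot13 = np.kron(P0, np.kron(I, I)) + np.kron(P1, np.kron(I, X))

# check against the basis mapping listed above: flip qubit 3 iff qubit 1 is 1
expected = np.zeros((8, 8))
for b in range(8):
    a_bit = (b >> 2) & 1          # qubit 1 = most significant bit
    out = b ^ a_bit               # XOR flips the least significant bit iff a_bit = 1
    expected[out, b] = 1
assert np.array_equal(cnot13, expected)
```

For an arbitrary control $i$ and target $j$ among $N$ qubits, the same projector trick works: put $P_0$/$P_1$ at position $i$, $X$ at position $j$, and identities everywhere else in the Kronecker product, then sum the two terms.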
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why maximum energy transfer at natural frequency even if max amplitude occurs below $f_0$ This is a paragraph from my book:
"For a damped system, the resonant frequency at which the amplitude is a maximum is lower than the natural frequency. However, maximum transfer of energy, or energy resonance, always occurs when the applied frequency is equal to the natural frequency."
This doesn't make intuitive sense to me. I understand that if there is damping, maximum amplitude occurs below $f_0$... so shouldn't maximum energy be transferred at this driving frequency, where the amplitude is maximum, instead of at $f_0$?
| Your question has been answered by following standard mathematical procedures. Nonetheless, your concern about intuitive sense is quite fair. On one hand, the idea that energy transfer should be maximum at a frequency below the natural frequency $\omega_0$ is not wrong at all. The work done by the external force over one period of the stationary oscillation is maximum at $$\omega''=\omega_0 \sqrt{\frac{\lambda ^2+2 \sqrt{\lambda ^4-\lambda ^2+1}-2}{3\lambda ^2}},$$ where $\lambda=\omega_0 \tau$, and $\tau$ is the characteristic decay time in the factor $\exp(-t/\tau)$ of the underdamped transient. The frequency $\omega''$ is below $\omega_0$ and above the frequency of the periodic part of the underdamped oscillation, namely $\omega'=\sqrt{\omega_0^2-1/\tau^2}$. On the other hand, the maximum averaged power, which is the work done over one period divided by the period $T=2\pi/\omega$, will have its maximum to the right of $\omega''$. The standard procedure shows it is at $\omega_0$. The point here is whether our intuition deals with the work done over one period or with the average power.
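The distinction can also be seen numerically for a driven damped oscillator (illustrative parameters of my choosing; $\beta=b/2m$ is the damping rate). The steady-state displacement amplitude peaks at $\sqrt{\omega_0^2-2\beta^2}$, below $\omega_0$, while the cycle-averaged power $\tfrac12 b(\omega A)^2$ peaks at $\omega_0$ itself:

```python
import numpy as np

m, k, b, F0 = 1.0, 4.0, 0.4, 1.0          # mass, spring constant, damping, drive
w0 = np.sqrt(k / m)
beta = b / (2 * m)

w = np.linspace(0.5 * w0, 1.5 * w0, 200001)
denom = (w0**2 - w**2) ** 2 + (2 * beta * w) ** 2
amplitude = (F0 / m) / np.sqrt(denom)          # steady-state displacement amplitude
avg_power = 0.5 * b * (w * amplitude) ** 2     # mean dissipated = mean input power

w_amp = w[np.argmax(amplitude)]
w_pow = w[np.argmax(avg_power)]
assert abs(w_amp - np.sqrt(w0**2 - 2 * beta**2)) < 1e-3   # amplitude peak below w0
assert abs(w_pow - w0) < 1e-3                             # power peak exactly at w0
```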
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is the molecule of hot water heavier than that of cold water? We know that a molecule of hot water ($H_2O$) has more energy than one of cold water (temperature = energy),
and according to Einstein relation $E=mc^2$ ,this extra energy of the hot molecule has a mass.
Does that make the hot molecule heavier?
| According to http://www.verticallearning.org/curriculum/science/gr7/student/unit01/page05.html , the average velocity of a water molecule at 20C is approximately 590 m/s:
If you now calculate the difference in mass between water molecules "at rest" and moving at 590 m/s, the relevant ratio is the Lorentz factor
$$\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.$$
Expanding $\delta m/m = \gamma - 1$ for the case where $v\ll c$, we get
$$\frac{\delta m}{m} = \frac12\left(\frac{v}{c}\right)^2 \sim 2\cdot10^{-12}$$
The difference is there; it might be measurable; but it is very, very tiny. Any condensation on your colder sample will be many times heavier than the difference in mass.
Note that the above calculation is for "cool water" vs "water at absolute zero". The difference between "hot" and "cold" water will be even smaller. Also note that to do this calculation properly, you really need to evaluate the expression for $\gamma$ at every velocity, and integrate. This doesn't change the basic answer: "yes it changes; but it is a very very small change".
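The order of magnitude quoted above is easy to reproduce directly:

```python
import math

c = 299792458.0        # speed of light, m/s
v = 590.0              # typical water-molecule speed at ~20 C, m/s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
dm_over_m = gamma - 1.0                      # fractional mass increase

# agrees with the leading-order expansion (1/2)(v/c)^2 ~ 2e-12
assert math.isclose(dm_over_m, 0.5 * (v / c) ** 2, rel_tol=1e-3)
assert 1e-12 < dm_over_m < 3e-12
```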
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
What would we observe from thompson's lamp thought experiment? The thought experiment goes like this:
Say there is some circuit which turns a lamp on/off with just a flick
of a switch. Say its off; you flick it, it turns on; flick it again it
turns off, and so on.
So say you are conducting the experiment for two minutes. When the
remaining time halves, you flick the switch; and when that remaining
time halves, you flick again...This process goes on infinitely, until
two minutes is reached.
Mathematics says that the sum of all those remaining half-times will eventually tend to two minutes. And the outcome, i.e. the final state, is always unpredictable: we can't really know if it's on or off.
Let's not concern ourselves with the final state. What would we actually observe towards the end, a few microseconds before the two-minute mark? Wouldn't the lamp appear turned on? What would happen as the time between two flicks tends to zero?
| The lamp would be "on", but with a reduced luminous flux output. This is very similar to pulse-width modulation, which is used to control the brightness of LEDs in various applications.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
What is charge? I know this isn't the right place for asking this question, but in other places the answers are so awful. I'm studying electricity, so I keep seeing things like "charges", "electrons have negative charge", etc. But I didn't quite understand what charge is. I searched a little on the internet and found it related to electromagnetic fields; then I thought "negative and positive may be associated with the behaviour of the particle in the field, great!", but the articles about e.m. fields already presuppose "negative" and "positive" charges. In other places, I see answers relating charges to the amount of electrons/protons in an atom, but if that's right, is the "negative" electron an atom without any protons? What about the neutron? So, my questions are (1) What are charges; and (2) How can a particle "be" electrically charged? What does that really mean?
Thanks for your time.
|
Ans 1: Charge is the conserved quantity (a physical observable) associated with a certain symmetry, via Noether's theorem.
-
Ans 2: The electric charge or Electromagnetic charge is due to the electromagnetic U(1) gauge symmetry.
-
See also this post: Is there any theory for origination of charge?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 4,
"answer_id": 0
} |
How is "Band Intensity" related to absorption coefficient I am interested in the linear absorption of $762\,\rm nm$ light near a transition of molecular oxygen. I need to find some experimental numbers that will tell us how far the $762\,\rm nm$ light will propagate before getting absorbed. Specifically, I want to know the e-folding length, $\gamma^{-1}$ (the length over which the intensity will drop by $e^{-1}$). I believe this is also called the optical depth when using Beer-Lambert law.
My main problem is that I do not know the definitions of experimentally measured quantities and how they relate to the e-folding length. I was reading "Atmospheric Propagation of Radiation" by Frederick Smith and page 61 says that for the inverse wavelength $\lambda^{-1}=13\,120.909\,\rm cm^{-1}$ the Band Intensity is $1.95\times10^{-22}\,\rm cm$. In "Laser Remote Chemical Analysis" they call it the integrated band intensity for this line but with units of cm-molecule (basically the same thing).
Does anyone know how the band intensity relates to the e-folding or absorption length?
Our best guess based on physical and dimensional arguments is that the e-folding length will go like $\gamma^{-1} \propto 1 / (B N \Delta\lambda)$ where $B$ is the band intensity with units of $\rm cm$, $N$ is the number density with units of $\rm cm^{-3}$, and $\Delta\lambda$ is the line width of the transition with units of $\rm cm$.
| What the question refers to as "band intensity" is also referred to as "line strength" $S$. To calculate an absorption coefficient $k$ from $S$, you also need a line shape function $f(\nu - \nu_0)$, where $\nu_0$ is the center of the line:
$$k = Sf(\nu - \nu_0)$$
Then "optical depth" = $ku$, where $u$ is called "path length" but is really a measure of the absorbing substance in the path.
See pages 15 and 16 of this lecture for more information: http://irina.eas.gatech.edu/EAS8803_Fall2009/Lec5.pdf
and also: http://nit.colorado.edu/atoc5560/week4.pdf
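As a rough illustration of how the pieces combine (every number except $S$ is an assumption of mine, and the line shape is taken as a pressure-broadened Lorentzian; this is not from the cited lectures):

```python
import math

S = 1.95e-22       # line strength, cm (= cm^-1 per molecule cm^-2), from the question
N = 5.2e18         # assumed O2 number density at sea level, molecule cm^-3
gamma_L = 0.05     # assumed Lorentzian half-width at ~1 atm, cm^-1

f0 = 1.0 / (math.pi * gamma_L)   # peak of a normalized Lorentzian line shape, cm
k = S * N * f0                   # line-center absorption coefficient, cm^-1
L_e = 1.0 / k                    # e-folding length, cm

assert 1e1 < L_e < 1e4           # of order a meter to tens of meters for these inputs
```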
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Range of Poisson's ratio I know the range of Poisson's ratio is $-1$ to $0.5$, but how do you arrive at this result? I am an 11th grade student and I am not too familiar with advanced physics.
| Poisson's ratio can be expressed in terms of Young's modulus $E$, bulk modulus $K$ and shear modulus $G$. I hope you know how to arrive at these relationships:
$$ G=\frac{E}{2(1+\nu)}$$
$$ K=\frac{E}{3(1-2\nu)}$$
$E$, $K$ and $G$ are all NOT less than zero.
From the equation for $G$ you can see that it's the value $\nu=-1$ which would make the denominator $0$ and $G$ arbitrarily large. That's the limit for $G$.
And from the equation for $K$ it's the value $\nu=0.5$ that makes the denominator $0$ and $K$ arbitrarily large. That's the limit for $K$.
Therefore $\nu$ can only fall between $-1$ and $0.5$.
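The bounds can be checked directly from the two relations (the value of $E$ below is purely illustrative):

```python
E = 200e9   # Young's modulus, Pa (steel-like, purely illustrative)

def G(nu):  # shear modulus, G = E / (2(1 + nu))
    return E / (2 * (1 + nu))

def K(nu):  # bulk modulus, K = E / (3(1 - 2 nu))
    return E / (3 * (1 - 2 * nu))

# inside the admissible range both moduli stay positive and finite
for i in range(1, 100):
    nu = -1 + 1.5 * i / 100          # samples strictly inside (-1, 0.5)
    assert G(nu) > 0 and K(nu) > 0

# approaching the bounds, one modulus diverges
assert G(-0.999999) > 1e14   # nu -> -1: shear modulus blows up
assert K(0.499999) > 1e14    # nu -> 0.5: bulk modulus blows up
```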
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Could you theoretically map the internal distribution of mass in a black hole using Hawking radiation? Assuming you could measure the qualities of the radiation emanating from all around a black hole, could this be used to determine the internal geometry or makeup of the mass inside?
| The answer is no, if our current understanding of Hawking radiation is correct. The problem is that Hawking radiation is thermal. This means that it comes in a statistical ensemble of quantum states. In particular it can't carry information about the evolution of quantum states you dropped into the black hole. This is the origin of the famous information paradox.
Of course nobody has solved the information paradox, although several competing explanations have been advocated. It's conceivable that perhaps there is hidden information in Hawking radiation within a more complete understanding.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/109759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Why does Hawking Radiation not add up to zero? First off: I am not familiar with the details of quantum mechanics or relativity.
My understanding of Hawking radiation is as follows: pairs of particles and anti-particles can and will spontaneously form in space, and when that happens near the event horizon of a black hole, the anti-particle might fall into the black hole while the particle does not, effectively causing the black hole to lose mass.
Now, this is totally fine with me. The thing I do not understand is: Why do more anti-particles fall into the black hole than particles? If these pairs pop up randomly, shouldn't the effects of Hawking-radiation and anti-Hawking-radiation cancel out?
| What do you mean, "cancel out"? Even if a particle and its antiparticle annihilate, energy is still conserved, e.g. an electron annihilating with a positron gives two photons (well, depending on their energy). The radiated energy can't just vanish without a trace.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Optics biconvex and plano convex What is the resultant focal length if a biconvex lens is cut in half and the resulting two plano-convex lenses are arranged in such a way that the plane surface of one faces the curved surface of the other, taking the focal length of each plano-convex lens to be $f$?
| Let $f'$ be the focal length of the biconvex lens initially, and let $f$ be the focal length of each of the two plano-convex lenses.
Then, by Gullstrand's equation,
$$P'=P_1+P_2-P_1P_2d$$
or, in terms of focal lengths,
$$\frac{1}{f_{eq}}=\frac{1}{f_1}+\frac{1}{f_2}-\frac{d}{f_1f_2}$$ (for separated lenses)
or, $$\frac{1}{f_{eq}}=\frac{1}{f_1}+\frac{1}{f_2}-\frac{d}{nf_1f_2}$$ (for a thick lens).
Here, $P'$ is the power of the biconvex lens initially, and $P_1$ and $P_2$ are the powers of the two plano-convex lenses.
According to the question, the plano-convex lenses are arranged so that the curved surface of one touches the plane surface of the other. Therefore, the effective distance between them is zero.
So, $P'=P_1+P_2$ (since $d=0$).
Since the plano-convex lenses are of equal focal length, they also have equal power:
$$\frac{1}{f'} = \frac{1}{f} + \frac{1}{f} = \frac{2}{f}$$
Therefore $f=2f'$,
which implies that the focal length of each resulting plano-convex lens is twice the focal length of the biconvex lens.
The Gullstrand's Equation used here can be easily derived from the Lens' Maker's formula also.
Using the Lens-Maker's equation:
$$ \frac{1}{f} = \frac{n_{lens} - n_o}{n_o}\left(\frac{1}{R_{left}}-\frac{1}{R_{right}}\right)$$
And because we have an equiconvex lens, $R_{right}=-R_{left}$ (the negative sign is from the sign convention of the equation), so we have:
$$ \frac{1}{f_{equi}} = \frac{n_{lens} - n_o}{n_o}\left(\frac{1}{R_{left}}+\frac{1}{R_{left}}\right) = 2\frac{n_{lens} - n_o}{n_o}\frac{1}{R_{left}}.$$
So $f_{equi}=\frac{R_{left}n_o}{2(n_{lens} - n_o)}$ and for the planoconvex lens, $R_{right} \approx \infty$ so the equation becomes:
$$ \frac{1}{f_{plano}} = \frac{n_{lens} - n_o}{n_o}\left(\frac{1}{R_{left}}+\frac{1}{\infty}\right) = \frac{n_{lens} - n_o}{n_o}\frac{1}{R_{left}}.$$
So when you cut the lens in half, the focal length is actually doubled when you cut the lens, since $f_{plano}=\frac{R_{left}n_o}{n_{lens} - n_o}=2 f_{equi}$.
For more info on the Lens-Maker's equation, you can look here for an explanation.
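Plugging illustrative numbers into the Lens-Maker's equation confirms the factor of two (glass index and radius are assumed values):

```python
n_lens, n_o = 1.5, 1.0   # glass in air (illustrative)
R = 0.1                  # radius of curvature of each curved face, m

# equiconvex: R_left = R, R_right = -R  ->  1/f = ((n_lens-n_o)/n_o)(1/R + 1/R)
f_equi = 1.0 / ((n_lens - n_o) / n_o * (2.0 / R))
# plano-convex: R_right -> infinity     ->  1/f = ((n_lens-n_o)/n_o)(1/R)
f_plano = 1.0 / ((n_lens - n_o) / n_o * (1.0 / R))

assert abs(f_plano - 2.0 * f_equi) < 1e-12   # halving the lens doubles f
```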
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why do He-3 atoms repel each other much more strongly than electrons? Is there a simple answer to this question? See the last line of this paragraph:
http://en.wikipedia.org/wiki/Fermionic_condensate#Fermionic_superfluids
| The following may be more clear:
The BCS theory can be applied to 3He, as well as to electrons. However, the Cooper pairs for 3He are much more complex creatures than those in a conventional superconductor. Due to the hard-core repulsion of the helium nuclei, the two 3He atoms in the Cooper pair feel a greater need to keep away from each other than the electrons do; so while the electrons bind together in a tight, symmetric, s-wave package, the 3He atoms bind together in a loose, anti-symmetric, p-wave package. "P-wave" just means that the pair has angular orbital momentum L=1. An L=1 system has three different quantum-mechanical states, denoted by ml = 1, 0, or -1.
Since all combinations of fermions must be anti-symmetric, the spin angular momentum in this case must be symmetric. There are three symmetric ways to combine the spins of two spin-1/2 fermions, Y = |++>, |-->, or |+-> + |-+>. (Read this as both spin up, both spin down, or the symmetric combination of the two.)
So a 3He Cooper pair has three different possible orbital states, and three different possible spin states. This gives a total of nine different combinations, each of which is weighted in the order parameter Y by a complex number, giving 18 degrees of freedom. This allows the 3He superfluid to behave in much more complex ways than the conventional superconductor, with its two degrees of freedom.
Note the phrase "Due to the hard-core repulsion of the helium nuclei, the two 3He atoms in the Cooper pair feel a greater need to keep away from each other than the electrons do", a statement just as hand-waved as the Wiki's: "These complications arise because helium atoms repel each other much more strongly than electrons".
The last statement, I would say, amounts to the mundane "two electrons tied in tandem with another two repel more than one with one", since He3 has two electrons in its orbitals. Both try to explain why in the He3 Cooper pair we find the He3 in P waves and not in S waves.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding Tension in an Elastic String? I know that this is a homework type question and I'm not asking a particular physics question, but I'm really desperate for help.
Here's the question:
I tried to divide the string to 2 parts with $O$ as the mid-point of $AB$.
Let the tension in $AO$ be $T_{1}$ and in $OB$ be $T_{2}$; then $T_{2}-T_{1}=m\ddot x$. I don't know what to do next.
Here's the marking scheme:
| Six years later:
We are given that the natural length of the string is $2l$.
When the mass is suspended at the midpoint, $O$, of the string, the extension perpendicular to $AB$ is $x$. Let $M$ be the position of the mass at perpendicular extension $x$.
Then the extended length of the string from $A$ to $M$ (the hypotenuse of a triangle with sides $a$ and $x$) is:
$$AM=\sqrt{a^{2}+x^{2}}$$
Since the string is extended at its mid-point, this is the same as the extension from M to B. Hence, the total length, $AMB$, of the extended string is
$$L=2\sqrt{a^{2}+x^{2}}$$
The extension of the string from its natural length is
$$\delta {l}=2\sqrt{a^{2}+x^{2}}-2l=2\left[\sqrt{a^{2}+x^{2}}-l\right]$$
Using the formula for Tension in a string and substituting for $\delta {l}$:
$$\begin{align} T &=\lambda \frac{\delta {l}}{l} \\
&=\lambda \frac{2\left[\sqrt{a^{2}+x^{2}}-l\right]}{2l} \\
&=\lambda \frac{\sqrt{a^{2}+x^{2}}-l}{l}\end{align} $$
Finally, if $x^{2} \ll a^{2}$, then $\sqrt{a^{2}+x^{2}}\approx a$, and this gives:
$$T \approx \lambda \frac{(a-l)}{l}$$
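A quick numeric check of the small-$x$ limit (arbitrary values of mine, with $a>l$ so the string is taut):

```python
import math

lam, a, l = 5.0, 1.0, 0.8   # modulus lambda, half-span a, natural half-length l

def tension(x):
    # T = lambda * (sqrt(a^2 + x^2) - l) / l, from the derivation above
    return lam * (math.sqrt(a**2 + x**2) - l) / l

# as x -> 0 the tension tends to lambda * (a - l) / l
assert abs(tension(1e-4) - lam * (a - l) / l) < 1e-6
```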
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How might eddy currents and terminal velocity be broken down? This is a branch from this question.
With regards eddy currents and terminal velocity I have located a homework question but would appreciate more detail (and translation of the symbols described therein) about the forces at play.
Eddy currents are generated through the resistance of electrons interacting with magnetic fields as they cut across coils and force them downwards.
Since there is resistance it would stand to reason that this would reduce the speed at which a magnet falls - but that there would be an equilibrium whereby further slowing a magnet would reduce the flux generation required to sustain that reduced descent.
So the forces at play include:
*
*gravity
*mass of the magnet
*coils of the coil (solenoid?)
*air resistance (may be discounted?)
May these be further broken down? Are other variables at play?
| Yes, Faraday's law and eddy currents will cause terminal velocities to exist for falling objects, outside of any other friction consideration. The most widely used example to show it is this setup:
These are two connected rails of width $l$ with a third rail freely gliding along the two main rails. The whole setup can be reduced down to a single winding of varying (increasing) size (varying length $L$), and of ~fixed resistance R. The winding is placed in a perpendicular uniform constant magnetic field $\vec{B}(\vec{r})=B_0 \vec{1_z}$. An analysis of this simplified problem quickly gives:
$$
\text{When terminal velocity is reached, the whole gravitational power is dissipated}\\
\text{through the resistance, and none contributes to an actual increase in kinetic energy, hence:}\\
P_{gravity}=P_{dissipated}\\
\text{an expression of mechanical power, and an expression of electrical power:}\\
\Leftrightarrow m\vec{g}\cdot\vec{v}=mgv=\frac{U^2}{R}\\
\text{Faraday Law:}\\
\Leftrightarrow mgv=\frac{\left(\frac{d\Phi}{dt}\right)^2}{R}\\
\text{Expression of the magnetic Flux. The only non time-constant variable is L:}\\
\Leftrightarrow mvg=\frac{\left(\frac{d(B\cdot L\cdot l)}{dt}\right)^2}{R}
=\frac{\left(B\cdot l \cdot \frac{dL}{dt}\right)^2}{R}\\
\Leftrightarrow mvg=\frac{(Blv)^2}{R}\Leftrightarrow v=\frac{mgR}{(Bl)^2}$$
Once you have understood this, you will be able to physically grasp why a terminal velocity would exist both in this situation and when the magnet is falling through the coil, and to apply the exact same method to your precise situation if you want to further analyse your experiment.
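One can confirm the terminal-velocity formula by integrating the equation of motion $m\,\dot v = mg - (Bl)^2 v/R$ directly (parameter values are arbitrary illustrations):

```python
# parameters (arbitrary illustration values)
m, g, R = 0.05, 9.81, 2.0   # kg, m/s^2, ohm
B, l = 0.8, 0.3             # tesla, m

v_term = m * g * R / (B * l) ** 2   # predicted terminal velocity, ~17 m/s here

# forward-Euler integration of m dv/dt = m g - (B l)^2 v / R
v, dt = 0.0, 1e-4
for _ in range(500_000):            # 50 s of fall, many decay times
    v += dt * (g - (B * l) ** 2 * v / (m * R))

assert abs(v - v_term) / v_term < 1e-3
```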
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Lorentz invariance? What exactly is meant by Lorentz invariance?
Is it just an experimental observation, or is there a theory that postulates it?
What quantities do we expect to be Lorentz invariant?
Charge? Charge densities? Forces? Lagrangians?
| Lorentz invariant is shorthand for "invariant under the action of the Lorentz transformation". It is used to describe a quantity whose value is the same in all inertial frames. For example, the electric charge of any electron, or the quantity $\Delta x_\mu \Delta x^\mu$ for any two events.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Are there two different ways to express the position operator $x$ in terms of the creation and annihilation operators? As we know, to express the position operator $x$ in terms of the creation and annihilation operators $a^{+}$ and $a$, one way is:
$$x= \sqrt{\frac{\hbar}{2\mu\omega}}(a^++a);$$
$$p= i\sqrt{\frac{\mu\hbar\omega}{2}}(a^+-a).$$
But how about
$$x= -i\sqrt{\frac{\hbar}{2\mu\omega}}(a^+-a);$$
$$p= \sqrt{\frac{\mu\hbar\omega}{2}}(a^++a)? $$
I want to know whether the two expressions are fine.
PS: It seems that I can use them to calculate something like fluctuation $\langle\Delta_x\rangle^2$ at coherent state $|\alpha\rangle$.
Both expressions can give the right answer.
By the 1st expression,
$\langle\alpha|x^2|\alpha\rangle=\frac{\hbar}{2\mu\omega}[(\alpha+\alpha^*)^2+1]$,
$\langle\alpha|x|\alpha\rangle^2=\frac{\hbar}{2\mu\omega}[(\alpha+\alpha^*)^2]$
then,
$\langle\Delta_x\rangle^2=\langle\alpha|x^2|\alpha\rangle - \langle\alpha|x|\alpha\rangle^2 =\frac{\hbar}{2\mu\omega}$.
By the 2nd way,
$\langle\alpha|x^2|\alpha\rangle=-\frac{\hbar}{2\mu\omega}[(\alpha-\alpha^*)^2-1]$,
$\langle\alpha|x|\alpha\rangle^2=-\frac{\hbar}{2\mu\omega}[(\alpha-\alpha^*)^2]$
therefore,
$\langle\Delta_x\rangle^2=\langle\alpha|x^2|\alpha\rangle - \langle\alpha|x|\alpha\rangle^2 =\frac{\hbar}{2\mu\omega}$.
The result is the same.
And if we check the dimension, this formula is also OK.
| You are just dealing with the unitary transformation
$$U : L^2(\mathbb R) \to L^2(\mathbb R)\:.$$
defined by the unique linear continuous extension of
$$U|n\rangle := i^n |n\rangle\:,$$
which implies
$$ U ^\dagger a U = i a\:,\qquad U ^\dagger a^\dagger U = -i a^\dagger\:.$$
The two pairs of operators $x,p$ are related by means of this same unitary transformation. As is well-known, unitary transformations preserve the structure of quantum mechanics.
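This equivalence is easy to check numerically with truncated matrices. Below is a short Python sketch (not part of the original answer) using an $N$-dimensional Fock-space truncation and setting $\hbar=\mu=\omega=1$; because $U$ is diagonal, the relations $U^\dagger a U = ia$ and $U^\dagger a^\dagger U = -ia^\dagger$ hold exactly entrywise even after truncation.

```python
import numpy as np

N = 8                                # truncated Fock-space dimension
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)     # annihilation operator: a|n> = sqrt(n)|n-1>
U = np.diag(1j ** n)                 # U|n> = i^n |n>
Ud = U.conj().T

# U^dagger a U = i a  and  U^dagger a^dagger U = -i a^dagger
assert np.allclose(Ud @ a @ U, 1j * a)
assert np.allclose(Ud @ a.conj().T @ U, -1j * a.conj().T)

# the two (x, p) pairs from the question, with hbar = mu = omega = 1
c = 1 / np.sqrt(2)
x1 = c * (a.conj().T + a)
p1 = 1j * c * (a.conj().T - a)
x2 = -1j * c * (a.conj().T - a)
p2 = c * (a.conj().T + a)

# the second pair is the unitary transform of the first
assert np.allclose(Ud @ x1 @ U, x2)
assert np.allclose(Ud @ p1 @ U, p2)
print("both (x, p) pairs are unitarily equivalent")
```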
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Is gravitational time dilation different from other forms of time dilation? Is gravitational time dilation caused by gravity, or is it an effect of the inertial force caused by gravity?
Is gravitational time dilation fundamentally different from time dilation due to acceleration, are they the same but examples of different configurations?
Could you recreate the same kind of time dilation without gravity using centrifugal force?
| Well, the answer is "no": time dilation is always the same effect and is due to velocity! Indeed, when an object is located in a gravitational field it is falling. Even when you sit on your chair you are falling in the Earth's gravitational field; otherwise you would float in the air as in the ISS! Let's equate the factors for time dilation of special and of general relativity, where $R_S$ is the Schwarzschild radius:
\begin{equation}
\Delta t'=\frac{\Delta t}{\sqrt{1-\frac{v^{2}}{c^{2}}}}=\frac{\Delta t}{\sqrt{1-\frac{R_{S}}{r}}}
\end{equation}
hence $\frac{v^{2}}{c^{2}}=\frac{R_{S}}{r}\Longrightarrow v^{2}=\frac{2GM}{r}$ and
$v=\sqrt{2rg}=\sqrt{2V}$, where $V$ is the gravitational potential.
We therefore put in evidence the relationship that links velocity and gravity in relativistic time dilation (both in SR and GR):
\begin{equation}
v=\sqrt{2V}
\end{equation}
Thinking of time dilation as always due to velocity also explains why in artificial gravity (e.g. in a hypothetical vertically accelerating space elevator) we would experience both the sensation of natural gravity "and" time dilation, though no natural gravity is present.
If you love gravity and relativity, I would add that curved spacetime produces quantitatively exact results but "qualitatively" (I think) it is a wrong concept. There is no curved spacetime but a fluid quantum vacuum. The relativistic effects are the same:
https://hal.archives-ouvertes.fr/hal-01423134v6
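To make the identification $v=\sqrt{2GM/r}$ concrete, here is a small numerical sketch (my own illustration, using approximate constants): the gravitational dilation factor at radius $r$ equals the special-relativistic factor evaluated at the escape velocity.

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2 (approximate)
M = 5.972e24    # kg, Earth
r = 6.371e6     # m, Earth's mean radius
c = 2.998e8     # m/s

Rs = 2 * G * M / c**2           # Schwarzschild radius of the Earth
v = math.sqrt(2 * G * M / r)    # escape velocity at the surface

gr_factor = 1 / math.sqrt(1 - Rs / r)         # gravitational time dilation
sr_factor = 1 / math.sqrt(1 - v**2 / c**2)    # SR dilation at v = escape velocity

print(f"Rs/r = {Rs / r:.3e}, v^2/c^2 = {v**2 / c**2:.3e}")
assert math.isclose(gr_factor, sr_factor, rel_tol=1e-12)
```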
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 5,
"answer_id": 3
} |
Why does squeezing a water bottle make the water come out? This seems natural, but I can't wrap my head around it when I think about it.
When I squeeze an open bottle filled with water, the water will spill out. When I squeeze a bottle, the material collapses where I squeeze it, but expands in other areas, resulting in a constant volume. If the volume is constant, then I would think that the water shouldn't spill out.
If I were to guess, there is something related to the pressure my hand is creating inside the bottle, but I'm not entirely sure.
| jdj081's answer is good. I just want to address where I think you originally went wrong.
Your confusion lies in using the word "volume" in two different ways. You should differentiate between volume of the container (capacity is a better term, as jdj081 states) and volume of the liquid.
The liquid's volume doesn't change. The container's volume does. Since not all of the liquid can fit, it spills out of the container. Yes, the liquid's volume is still the same, but this does not mean the container's volume is also the same.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Why do electrons have less energy than photons with the same wavelength? I am studying quantum physics and I have a question: what is the physical explanation for electrons having less energy than photons with the same wavelength?
Energy of a photon : $E = h c/\lambda$.
Energy of an electron: $E = h^2/(2m\lambda^2)$
| The energy of the particle is proportional to the oscillation frequency of its wavefunction, $E=h\nu$. A photon always moves at the speed $c$, so its wavelength is related to the frequency in the usual way for a traveling wave, $\lambda = c/\nu = hc/E$.
A massive particle moves more slowly than the photon, so its wavelength is shorter for the same amount of energy. Naively, we might guess that a particle moving at speed $v$ would have $\lambda = hv/E$ as its wavelength. This is not correct because it fails to account for relativity, but it may give you an idea of why the wavelength is shorter for a particle with mass.
To get the correct relationship, we need to consider the relativistic energy of the particle. According to special relativity, the energy is actually $E = \sqrt{p^2c^2 + m^2c^4}$. For a particle at rest, this is the famous $E=mc^2$. The kinetic energy is the difference between the total energy and the energy at rest (mass energy).
For a photon, all of the energy is kinetic because it has no mass. For a non-relativistic electron, with momentum $p \ll mc$, we can use a Taylor expansion to get an approximate expression for the kinetic energy.
\begin{align}
KE &= \sqrt{m^2c^4 + p^2c^2}-mc^2\\
&\approx mc^2\left(1+{1 \over 2}{p^2c^2 \over m^2c^4 }\right)-mc^2\\
&={p^2 \over 2m}
\end{align}
The de Broglie wavelength is related to momentum by $\lambda =h/p$, and plugging it in, we obtain the formulas you asked about.
\begin{align}
E_{\mathrm{photon}} &= {hc \over \lambda}\\
E_\mathrm{electron} &= {h^2 \over 2m\lambda^2}
\end{align}
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Do I need to take weight of the rocket into account when calculating escape velocity? Here there is the old problem.
I know from the old problem that the work $W_v$ that I need to make a rocket fast enough to reach the escape velocity is
$$W_v= G \frac{mM}{r}$$
therefore because $$W_v=F\cdot S = G \frac{mM}{r} \rightarrow F_v=\frac{W}{S}=G \frac{mM}{rS} $$
that is the force I need to make a rocket fast enough to reach the escape velocity BUT
Do I also have to count the weight of the rocket?
If yes then the equation will be like this:
$$F_f=F_g - F_v= G \frac{mM}{r^2}-G \frac{mM}{rS}=G \frac{mM}{r}\biggl(\frac{1}{r} - \frac{1}{S}\biggr) = G \frac{mM(S-r)}{r^2 S} $$
| Escape velocity doesn't depend on the mass of the rocket (but the work does depend on its mass). The escape velocity equation looks like this:
$$
v=\sqrt{\frac{2GM_{earth}}{r}}
$$
And Work is just change in kinetic energy $\Delta KE$ or $KE_{final}-KE_{initial}$, So Work equation looks like this:
$$
W=\frac{m_{rocket}v_{final}^2}{2}-\frac{m_{rocket}v_{initial}^2}{2}=\frac{m_{rocket}}{2}(v_{final}^2-v_{initial}^2)=\frac{m_{rocket}\Delta v^2}{2}
$$
And don't forget that the fuel mass is decreasing (when the engine is powered), so the acceleration is changing (it's just Newton's second law: $a=\frac{F}{m(t)}$, or in full: $a=\frac{F}{m_{rocket}+m_{fuel}(t)}$). In order to solve this you have to "construct" a differential equation. (The aforementioned equation in differential form looks like this: $\ddot x= \frac{F}{m_{rocket}+m_{fuel}(t)}$, and $v=\int_{t_0}^{t_1} a(t)\, dt$.)
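For the constant-mass part of the answer, here is a small sketch (my own, with approximate constants) showing that $v$ is mass-independent while $W$ is not:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

def escape_velocity(M, r):
    """v = sqrt(2GM/r): no rocket mass appears."""
    return math.sqrt(2 * G * M / r)

def work_from_rest(m_rocket, M, r):
    """Work to bring a rocket from rest to escape velocity: W = G m M / r."""
    return G * m_rocket * M / r

v = escape_velocity(M_earth, R_earth)
print(f"escape velocity: {v / 1000:.2f} km/s")   # about 11.2 km/s

# the work equals the kinetic energy at escape velocity, for any rocket mass
for m in (10.0, 1000.0):
    assert math.isclose(work_from_rest(m, M_earth, R_earth),
                        0.5 * m * v**2, rel_tol=1e-12)
```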
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/110976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How does band gap vary with the cell volume? How does the band gap vary with the cell volume? Is there a relation?
If the volume is compressed, the interaction between atoms would be stronger, therefore the perturbation is larger and hence the splitting would be greater. Is my assumption right?
What is the mathematical background? Any help would be greatly appreciated.
| Let's assume, you take a one-dimensional chain of atoms and compress it.
In order to investigate the bandstructure, you will need to determine the electronic wavefunctions of quantum-mechanically allowed states. If you know your wavefunctions for the initial condition, before you compress your chain of atoms, you need to also scale the solution in order to still fulfill Bloch's theorem.
Thus, the energies of your wavefunctions (and therefore also the width of forbidden regions, meaning bandgaps) will scale with your transformation of the lattice. Compression will therefore result in shorter wavelengths and therefore higher energies.
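The kinetic part of this scaling can be illustrated with a toy model (my own, not a real band-structure calculation): for a particle in a box of size $L$, every level, and hence every spacing between levels, scales as $1/L^2$, so compressing the cell pushes energies and gaps up.

```python
import math

hbar = 1.055e-34   # J s
m = 9.109e-31      # kg, electron mass

def box_levels(L, n_max=3):
    """E_n = n^2 pi^2 hbar^2 / (2 m L^2) for a particle in a 1-D box."""
    return [(n * math.pi * hbar) ** 2 / (2 * m * L**2) for n in range(1, n_max + 1)]

L = 1e-9                  # a 1 nm "cell"
E = box_levels(L)
Ec = box_levels(L / 2)    # compress the cell to half its size

# halving L multiplies every level, and every gap, by 4
for e, ec in zip(E, Ec):
    assert math.isclose(ec, 4 * e, rel_tol=1e-12)
print(f"E2 - E1 grows from {E[1] - E[0]:.3e} J to {Ec[1] - Ec[0]:.3e} J")
```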
Hope this helps!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Magdeburg Hemispheres The Magdeburg Hemisphere experiment was the experiment that showed the effect of pressure differences on a vacuumed sphere.
We know that the force caused by pressure is $\Delta p A$ and so you can calculate the force by using the area of the base of one of the hemispheres of the vacuumed ball.
It is intuitive to me that the area we use is the area created by the diameter of the sphere, a line through the center of the sphere, but is there a clearer reason that we use this area? I'm not sure how to explain why this is the "important" area.
| Integrating the force over the surface of the half-sphere yields
$$F=\int_\text{C}\Delta p\hat{\mathbf{n}}dS=\int_0^{\pi/2}\int_0^{2\pi}\Delta p\left(
\begin{array}{c}
\sin (\theta ) \cos (\phi ) \\
\sin (\theta ) \sin (\phi ) \\
\cos (\theta ) \\
\end{array}
\right)R^2\sin(\theta)\,d\phi d\theta=\left(
\begin{array}{c}
0 \\
0 \\
\pi R^2\Delta p \\
\end{array}
\right)$$
where $C$ is the surface of one hemisphere, assuming that the base is sitting on the $xy$-plane and that the radius is $R$. Thus $|F|=A\Delta p$, as you originally wrote.
Mathematica proof:
Integrate[\[CapitalDelta]p Sin[\[Theta]] r^2 {Cos[\[Phi]] Sin[\[Theta]],
  Sin[\[Theta]] Sin[\[Phi]], Cos[\[Theta]]}, {\[Theta], 0, \[Pi]/2},
  {\[Phi], 0, 2 \[Pi]}]
{0, 0, \[Pi] r^2 \[CapitalDelta]p}
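The same surface integral can also be cross-checked numerically; here is a Python sketch (mine) using a midpoint rule on the $(\theta,\phi)$ grid, with $R=\Delta p=1$:

```python
import numpy as np

R, dp = 1.0, 1.0
n_t, n_p = 400, 400

theta = (np.arange(n_t) + 0.5) * (np.pi / 2) / n_t   # midpoints on [0, pi/2]
phi = (np.arange(n_p) + 0.5) * (2 * np.pi) / n_p     # midpoints on [0, 2 pi]
T, P = np.meshgrid(theta, phi, indexing="ij")

# integrand: dp * n_hat * R^2 sin(theta) dtheta dphi
nx = np.sin(T) * np.cos(P)
ny = np.sin(T) * np.sin(P)
nz = np.cos(T)
w = dp * R**2 * np.sin(T) * (np.pi / 2 / n_t) * (2 * np.pi / n_p)

F = np.array([(nx * w).sum(), (ny * w).sum(), (nz * w).sum()])
print(F)   # should be close to (0, 0, pi R^2 dp)
assert np.allclose(F, [0.0, 0.0, np.pi * R**2 * dp], atol=1e-3)
```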
General Case
The general case can be argued in two ways. The traditional way is by geometry, which is what dmckee used in his answer. Here is a second way:
Consider an arbitrary surface $C$ (not necessarily a hemisphere), with a flat base $B$ with area $A$. The total surface of the object is, of course, $D=B\cup C$. The total force acting on the object is
$$\mathbf{F}=\int_D\Delta p\hat{\mathbf{n}}\,dS=\mathbf{0}.$$
The reason it's zero is because otherwise, the object would spontaneously accelerate without any source of propulsion, which contradicts reality.
We can then break up the integral to give
$$\mathbf{0}=\int_B\Delta p\hat{\mathbf{n}}\,dS+\int_C\Delta p\hat{\mathbf{n}}\,dS=A\Delta p\hat{\mathbf{n}}_B+\int_C\Delta p\hat{\mathbf{n}}\,dS
\\
\Rightarrow \int_C\Delta p\hat{\mathbf{n}}\,dS=-A\Delta p\hat{\mathbf{n}}_B$$
where $\hat{\mathbf{n}}_B$ is a vector normal to the base $B$. Substituting $A=\pi R^2$ and $\hat{\mathbf{n}}_B=-\hat{\mathbf{z}}$ gives the hemisphere result I wrote above as a special case (but again, it doesn't have to be a hemisphere, it can be anything).
Note: implicit in the above derivation was that $\Delta p$ was constant. If $\Delta p$ is not constant, then the object can spontaneously accelerate (that's how balloons work).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Effect of wavelength on photon detection When some photon detector detects a photon, is it an instantaneous process (because a photon can be thought of as a point particle), or does the detection require a finite amount of time depending on the wavelength of the photon?
EDIT: I guess what I am wondering is if a photon has a wavelength and travels at a finite speed, then if a photon had a wavelength of 300,000,000m, would its interaction with the detector last 1s? Or does the uncertainty principle say that a photon with wavelength 300,000,000m (and therefore energy E), it cannot be known exactly when it hit the detector with an accuracy better than 1s. Or is it more like this: suppose there is a stream of photons moving towards the detector with wavelengths of 300,000,000m and they reach the detector at a rate of 10 photons/second and the detector has a shutter speed such that the shutter is open for 1s at a time, then it would record 10 photon hits (records all the photons). But if the shutter speed is only 0.5s, then it would record 2.5 hits on average?
EDIT2: I'm not interested in the practical functioning of the detector and amplification delays. I'm looking at and ideal case (suppose the photon is 'detected' the instant an electron is released from the first photomultiplier plate). It is a question regarding the theory of the measurement, not the practical implementation.
| The wavelength of a photon is closely related to the minimum possible uncertainty in its position. So, for a photon with a wavelength of $3\times10^8\ \mathrm{m}$, and if we assume that before the detection we already know as much about the photon as is possible, we still won't know exactly when it will be detected, and the uncertainty will be on the order of a second.
The interaction between the photon and the detector is itself in principle instantaneous, and in principle there is no limit on how accurately we can measure when the interaction occurred. It's all about what we can or can't predict in advance.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 3
} |
The choice of pivot point in non-equilibrium scenarios It is true that under equilibrium conditions, no matter what pivot point one chooses, the resulting net torque will always be zero. I wonder if this principle applies to non-equilibrium scenarios as well, namely, is the net torque of the system the same regardless of the choice of pivot point? For example, if we put a sphere on an inclined smooth ramp, we would expect the sphere to slip down the ramp without rolling. However, if we choose the point of contact between the sphere and the surface of the ramp as the pivot point, we will have a torque due to gravity. Apparently this contradicts our assumption. What's the mistake here?
| Yes, only in equilibrium is the choice of the point where moments are summed arbitrary. This is because to move from one point to another the moment vector transformation is
$$ \sum\vec{M}_A = \sum\vec{M}_B + \vec{r}_{AB} \times \sum\vec{F} $$
and if $\sum \vec{F} = 0$ (static equilibrium) then
$$ \sum\vec{M}_A = \sum\vec{M}_B $$
When motion is involved then $\sum \vec{F} = m \vec{a}$ and so the net moment changes from point to point. To get the correct equations of motion here is what you need to do
*
*Sum of all forces equals mass times acceleration of the center of mass. $\sum \vec{F} = m \vec{a}_{cm}$
*Sum of all moments about the center of mass equals the rate of change of angular momentum $\sum \vec{M}_{cm} = I_{cm} \vec\alpha + \vec \omega \times I_{cm} \vec{\omega}$
Another way to look at it is that net torque represents some force at a distance, and so by moving the point around you are moving the distance unless the force is zero.
For the sphere on the incline all the forces acting on the sphere pass through the center of mass and so $\sum\vec{M}_{cm} = \sum (\vec{r}_i-\vec{r}_{cm}) \times \vec{F}_i = 0$.
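The transfer rule above is easy to check numerically. A short sketch (mine, with the convention that $\vec r_{AB}$ points from $A$ to $B$):

```python
import numpy as np

rng = np.random.default_rng(0)
forces = rng.normal(size=(5, 3))   # five arbitrary forces...
points = rng.normal(size=(5, 3))   # ...applied at arbitrary points

def net_moment(about):
    return sum(np.cross(p - about, f) for p, f in zip(points, forces))

A = np.array([1.0, 2.0, 3.0])
B = np.array([-2.0, 0.5, 1.0])
F_net = forces.sum(axis=0)

# transfer rule: M_A = M_B + r_AB x F_net, with r_AB the vector from A to B
assert np.allclose(net_moment(A), net_moment(B) + np.cross(B - A, F_net))

# add one balancing force at the origin: net force zero, so the moment
# becomes independent of the chosen point
forces_eq = np.vstack([forces, -F_net])
points_eq = np.vstack([points, np.zeros(3)])
M_A = sum(np.cross(p - A, f) for p, f in zip(points_eq, forces_eq))
M_B = sum(np.cross(p - B, f) for p, f in zip(points_eq, forces_eq))
assert np.allclose(M_A, M_B)
print("moment transfer rule verified")
```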
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why fermions have a first order (Dirac) equation and bosons a second order one? Is there a deep reason for a fermion to have a first order equation in the derivative while the bosons have a second order one? Does this imply deep theoretical differences (like phase space dimension etc.)?
I understand that for a fermion, with half integer spin, you can form another Lorentz invariant using the gamma matrices $\gamma^\nu\partial_\nu $, which contracted with a partial derivative are kind of the square root of the D'Alembertian $\partial^\nu\partial_\nu$. Why can't we do the same for a boson?
Finally, how is this treated in a Supersymmetric theory? Do a particle and its superpartner share a same order equation or not?
| Spin-1/2 admits first order equations simply because
$
(\mathbf{1/2,1/2})\otimes (\mathbf{0,1/2})
$
contains the representation $(\mathbf{1/2,0})$ so that a linear equation for free particles can be written (i.e. it contains a derivative acting on one field and returning one field).
The first term in the product is the derivative that transforms as a Lorentz vector $(\mathbf{1/2,1/2})$, whereas $(\mathbf{0,1/2})$ and $(\mathbf{1/2,0})$ are left and right-handed spinors respectively. The Clebsch-Gordan coefficients are nothing but the gamma matrices.
Clearly the same is forbidden for the integer spins. For a scalar it is trivial. For a spin-1 field $(\mathbf{1/2,1/2})$ one has that $( \mathbf{ 1/2, 1/2}) \otimes ( \mathbf{1/2,1/2}) $ does not contain $(\mathbf{1/2,1/2})$, and so on for higher integer spins. It is basically the group structure of Lorentz symmetry that forbids first order equations for integer spins.
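The statement that the gamma matrices intertwine these representations rests on the Clifford algebra $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$; here is a quick numerical check (mine, in the Weyl representation with the mostly-minus metric):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def off_diag(a, b):
    """Build the 4x4 block matrix [[0, a], [b, 0]]."""
    z = np.zeros((2, 2), dtype=complex)
    return np.block([[z, a], [b, z]])

# Weyl (chiral) representation of the gamma matrices
gammas = [off_diag(I2, I2)] + [off_diag(s, -s) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified")
```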
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46",
"answer_count": 7,
"answer_id": 3
} |
How does a spinning electron produce a magnetic field? I learned in my undergraduate physics class that atoms have magnetic fields produced by the orbit of electrons and the spin of electrons. I understand how an orbit can induce a magnetic field because a charge moving in a circle is the same as a loop of current.
What I do not understand is how a spinning ball of charge can produce a magnetic field. Can someone explain how spin works, preferably in a way I can understand?
| An electron is not like a ball, as it has no volume at all. So it cannot spin like a ball. The magnetic moment comes "as is" from quantum mechanics, which does not explain its nature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 8,
"answer_id": 7
} |
Isotropy of Space Weinberg writes in his Cosmology text "Likewise, isotropy requires the mean value of any three-tensor $t_{ij}$ at $x=0$ to be proportional to $\delta_{ij}$ and hence to $g_{ij}$, which equals $a^2\delta_{ij}$ at $x = 0$"
May someone please illuminate the point.
| Since we have local isotropy, if I zoom in enough the space-time has rotational invariance. In other words, at any local point in space, the mean value of any tensor must be rotationally invariant, i.e. under rotation, we must have
$$
\langle t_{ij} \rangle \to R_i{}^k R_j{}^l\langle t_{kl} \rangle = \langle t_{ij} \rangle
$$
for all rotations $R$. In matrix notation, we must have
$$
[ \langle t \rangle, R] = 0
$$
Now, since this is true for any rotation matrix $R$, $\langle t_{ij} \rangle \propto \delta_{ij} $ (This is the statement of Schur's lemma)
Now, at $x=0$, since $g_{ij} \propto \delta_{ij}$, we have
$$
\langle t_{ij} \rangle \propto g_{ij}
$$
More generally isotropy implies that for any tensor, we must have under rotations
$$
\langle t_{i_1i_2\cdots} \rangle \to R_{i_1}{}^{j_1} R_{i_2}{}^{j_2} \cdots \langle t_{j_1 j_2 \cdots } \rangle = \langle t_{i_1 i_2 \cdots } \rangle
$$
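The Schur's-lemma step can be checked numerically: any real $3\times 3$ matrix that commutes with two rotations about different axes (with generic angles) is already forced to be a multiple of $\delta_{ij}$. A sketch (mine):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def comm_map(R):
    """Matrix of the linear map t -> tR - Rt on the 9 components of t (row-major vec)."""
    I = np.eye(3)
    return np.kron(I, R.T) - np.kron(R, I)

# stack the constraints [t, R] = 0 for two generic rotations
M = np.vstack([comm_map(rot_z(0.7)), comm_map(rot_x(1.1))])
_, s, Vt = np.linalg.svd(M)

assert np.sum(s < 1e-10) == 1            # the solution space is one-dimensional...
t = Vt[-1].reshape(3, 3)
assert np.allclose(t, t[0, 0] * np.eye(3), atol=1e-10)   # ...spanned by the identity
print("a tensor commuting with all rotations is proportional to delta_ij")
```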
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there an equation to calculate the average speed of liquid molecules? I seem to remember from first year physics that we can calculate the RMS speed of a stationary, ideal gas with $v=\sqrt{\frac{3RT}{M}}$. Does a similar equation exist for liquids?
| If you are looking for the root mean square velocity, it can be very high (relative to the long-range velocity).
If we treat water as a gas (which it is not), the velocity comes out to around 650 m/s: http://www.madsci.org/posts/archives/2007-07/1184005380.Bc.r.html
For an order-of-magnitude calculation, the velocity in liquid water is about 8 times slower than in the gas, so the velocity becomes around 80 m/s.
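These numbers are easy to reproduce (my sketch; 300 K, approximate constants):

```python
import math

R = 8.314    # J mol^-1 K^-1
M = 0.018    # kg/mol, water
T = 300.0    # K

v_rms_gas = math.sqrt(3 * R * T / M)   # ideal-gas RMS speed, v = sqrt(3RT/M)
print(f"ideal-gas v_rms for water vapour: {v_rms_gas:.0f} m/s")   # ~645 m/s

# crude order-of-magnitude estimate for the liquid, following the answer
v_liquid = v_rms_gas / 8
print(f"rough liquid estimate: {v_liquid:.0f} m/s")   # ~80 m/s
```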
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Heat generated on thin disc plate in magnetic field There are eddy currents in a thin circular copper plate of thickness h and radius a, in a harmonic magnetic field which is perpendicular to the plate; the effect of self-induction is negligible. Now we must get the average heat generated in it.
As I said, I have the final equation, but how do you get to this equation? Does anyone know the solution from the first step?
$$
\overline P=\frac{1}{16}\pi\gamma ha^4B_0^2\omega^2
$$
| In macroscopic EM theory, we can calculate the heat evolved per unit time due to electric currents in a region $V$ by
$$
\int_V \mathbf j \cdot \mathbf E\,dV.
$$
If you express the electric field in terms of $B_0$ and $\omega$ and assume Ohm's law $\mathbf j = \gamma\mathbf E$, where $\gamma$ is the conductivity, you should be able to derive a formula similar or identical to the one you wrote above.
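Here is how that derivation can be carried out symbolically (my sketch, assuming $B(t)=B_0\sin\omega t$ and writing $\gamma$ for the conductivity); the result comes out proportional to $\omega^2$:

```python
import sympy as sp

r, a, h, t = sp.symbols('r a h t', positive=True)
B0, w, gamma = sp.symbols('B_0 omega gamma', positive=True)

# Faraday's law on a circle of radius r:  E * 2 pi r = -d/dt (B * pi r^2)
E = -sp.Rational(1, 2) * r * sp.diff(B0 * sp.sin(w * t), t)

# local dissipation gamma * E^2, integrated over the disc volume (thickness h)
P_inst = sp.integrate(gamma * E**2 * 2 * sp.pi * r * h, (r, 0, a))

# average over one period
period = 2 * sp.pi / w
P_avg = sp.simplify(sp.integrate(P_inst, (t, 0, period)) / period)
print(P_avg)
assert sp.simplify(P_avg - sp.pi * gamma * h * a**4 * B0**2 * w**2 / 16) == 0
```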
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A question about a complex integration in Peskin's QFT textbook On page 27 (2.52), the integration is
$$\int_{-\infty}^{\infty}dp \frac{p e^{ipr}}{\sqrt{p^2+m^2}}$$
He says that there are two branch cuts starting from $\pm im$
But I learned in complex analysis that $\sqrt{z^2+m^2}$ has only one branch cut from $-im$ to $im$, because a point going around $im$ or $-im$ only gets a minus sign, but a point going around $\infty$ only keeps the sign. Therefore the point at $\infty$ is not a branch point and the branch cut is from $-im$ to $im$. So who is wrong?
| Remember this all starts from the fact that $\left(z^2+m^2\right)^{-\frac{1}{2}}=\left(z+im\right)^{-\frac{1}{2}}\left(z-im\right)^{-\frac{1}{2}}.$
So you have two branch points: one in $-im$ and one in $im.$
Let's now consider the branch cuts as on the image you posted. If you rotate the upper branch cut by $\pi$, so that it is directed toward the bottom, then the two branch cuts will intersect. You have the following situation:
*
*From $im$ to $-im$: you have only one branch cut (the one starting from $im$ that has been rotated)
*From $-im$ to, let's say, "$0-i\infty$": you have two branch cuts overlapping. In this situation, since the sum of the exponents is an integer number ($-\frac{1}{2}-\frac{1}{2}=-1$), you have no polydromy (multivaluedness). Then you have no line of discontinuities here.
So, rotating the branch cuts in the figure you obtain the well known branch cut from $im$ to $-im.$ Then they are equivalent.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/111939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Why wasn't the meter defined using a round-number fraction (like 1/300 000 000) of the distance travelled by light in 1 second? We know that 1 meter is the distance travelled by light in vacuum within a time interval of 1/299,792,458 second. My question is why we didn't take a simpler number like 1/300,000,000, or why not just 1?
| Because $299\,792\,458\ \mathrm{m/s}$ is the speed of light. By using $300\times10^6$ we won't get one meter
after $1/(300\times10^6)\ \mathrm{s}$.
We could change the speed of light and set it to $300\times10^6\ \mathrm{m/s}$ by changing either the
definition of second or the previous definition of meter. However, it would be harder to
do that because changing the definition of second or old definition of meter would change
other units.
In short, people had defined the meter as the distance between two marks on a metal rod. Then they
decided to change this definition by using light, in a way that gives the same meter as before.
After all, the speed of light had already been measured with the old definition of the meter.
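To see how small the change would have been, here is a two-line sketch (mine): a "metre" defined via a rounded $3\times10^8\ \mathrm{m/s}$ would differ from the real one by about 0.07 %.

```python
c_true = 299_792_458    # m/s, exact by the current definition of the metre
c_round = 300_000_000   # m/s, the "nicer" number

# distance light travels in 1/c_round seconds, measured in today's metres
new_metre = c_true / c_round
print(f"1 'round-number metre' = {new_metre:.8f} m")
print(f"relative change: {(1 - new_metre) * 100:.3f} %")   # about 0.069 %
```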
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/112096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Description of the heat equation with an additional term I have the following equation:
$$\frac{\partial U}{\partial t}=k\frac{\partial^2 U}{\partial x^2}-v_{0}\frac{\partial U}{\partial x}, x>0$$
with boundary and initial conditions:
$$U(0,t)=0$$
$$U(x,0)=f(x)$$
The problem asks for an interpretation of each of the terms in the above equation, noting what kinds of systems it can model, besides solving it by Fourier transform.
The Fourier transform solution is quite simple to do; however, I cannot give a physical interpretation of the terms of the equation, let alone a system that it can model. So I wanted to ask your help to answer this question. Thank you very much for your help and attention.
| The heat equation is an example of a convection-diffusion equation.
Your problem is one-dimensional in space (only $x$), which simplifies it a bit. The term on the left hand side is the time-rate of change of the internal energy $U$ (often a multiple of the temperature).
The second term is a diffusion term, as, in time, it diffuses or "smooths" peaks. The last term is a convective term, i.e. your medium is moving at constant velocity $v_0$ to the right.
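A quick numerical experiment (my sketch: explicit finite differences, upwind for the convective term) makes both interpretations visible: an initial bump spreads out (diffusion, coefficient $k$) while its peak travels to the right at speed $v_0$:

```python
import numpy as np

k, v0 = 0.01, 1.0        # diffusivity and drift velocity
nx = 400
dx = 0.025
dt = 0.4 * min(dx**2 / (2 * k), dx / v0)   # stable explicit time step

x = np.arange(nx) * dx
U = np.exp(-((x - 2.0) ** 2) / 0.1)        # initial bump f(x)

def step(U):
    Unew = U.copy()
    # central difference for diffusion, upwind (v0 > 0) for convection
    Unew[1:-1] += dt * (k * (U[2:] - 2 * U[1:-1] + U[:-2]) / dx**2
                        - v0 * (U[1:-1] - U[:-2]) / dx)
    Unew[0] = 0.0                          # boundary condition U(0, t) = 0
    return Unew

t, T = 0.0, 3.0
while t < T:
    U = step(U)
    t += dt

peak = x[np.argmax(U)]
print(f"peak moved from x = 2.0 to x = {peak:.2f} (about 2 + v0*T)")
assert 4.6 < peak < 5.4
assert U.max() < 1.0      # diffusion has lowered the peak
```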
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/112230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Taylor expansion of the metric Consider a coordinate change
$$
x^a\mapsto \tilde x^a=x^a+\epsilon y^a
$$
In the note I am reading, the author calculate the change of metric by
$$
g_{ab}(x) = \tilde g_{ab}(\tilde x)=\tilde g_{ab}(x^a+\epsilon y^a)=\tilde g_{ab}(x^a)+\epsilon\mathcal{L}_Y\tilde g_{ab}(x^a)+\cdots
$$
My question is, why do we use $\mathcal{L}_Y\tilde g$ rather than $\nabla_Y\tilde g$ in the first order term?
Update
The following is my new try.
As Prahar pointed out, I should write
$$
g_{ab}(x)\mapsto\tilde g_{ab}(\tilde x)=\frac{\partial x^c}{\partial {\tilde x}^a}\frac{\partial x^d}{\partial {\tilde x}^b} g_{cd}(x)
$$
Then we have
$$
g_{ab}(x)=\tilde g_{ab}(x^a+\epsilon y^a)=\tilde g_{ab}(x^a)+\epsilon\nabla_Y\tilde g_{ab}(x^a)+\cdots
$$
And I have to show
$$
\nabla_Y\tilde g_{ab}(x^a)=\mathcal{L}_Yg_{ab}(x^a)
$$
We have
$$
\begin{align}
\tilde g_{ab}(x^a)&=\frac{\partial x^c}{\partial {\tilde x}^a}\frac{\partial x^d}{\partial {\tilde x}^b} g_{cd}(x^a)\\
&=\left(\delta_a^c-\epsilon\frac{\partial y^c}{\partial \tilde x^a}\right)\left(\delta_b^d-\epsilon\frac{\partial y^d}{\partial \tilde x^b}\right)\left.\right|_{x^a-\epsilon y^a}g_{cd}(x^a-\epsilon y^a)\\
&=g_{ab}(x^a-\epsilon y^a)-\epsilon\left[\frac{\partial y^d}{\partial \tilde x^b}g_{ad}+\frac{\partial y^c}{\partial \tilde x^a}g_{cb}\right]\left.\right|_{x^a-\epsilon y^a}+O(\epsilon^2)\\
&=g_{ab}(x^a) - \epsilon\nabla_Yg_{ab}(x^a)-\epsilon\left[\frac{\partial y^d}{\partial \tilde x^b}g_{ad}+\frac{\partial y^c}{\partial \tilde x^a}g_{cb}\right]\left.\right|_{x^a} +O(\epsilon^2)
\end{align}
$$
So we get
$$
\nabla_Y\tilde g_{ab}(x^a)=\nabla_Yg_{ab}(x^a)-\epsilon\nabla_Y\left(\nabla_Yg_{ab}+\frac{\partial y^d}{\partial \tilde x^b}g_{ad}+\frac{\partial y^c}{\partial \tilde x^a}g_{cb}\right)\left.\right|_{x^a} + O(\epsilon^2)
$$
On the other hand, we have
$$
\mathcal{L}_Yg_{ab}(x^a)=\nabla_Yg_{ab}(x^a)+g_{ac}\nabla_bY^c+g_{cb}\nabla_aY^c
$$
I cannot see these two are identical. Is there anything wrong in my deduction?
| I'm not quite sure what you are doing in your post. It is not true that
$$
g_{ab}(x) = {\tilde g}_{ab}({\tilde x})
$$
The correct equality as I pointed out is
$$
{\tilde g}_{ab}({\tilde x}) = \frac{ \partial x^c}{ \partial{\tilde x}^a} \frac{ \partial x^d}{ \partial{\tilde x}^b} g_{cd}(x)
$$
where ${\tilde x}^a = x^a + \epsilon y^a(x) \implies x^a = {\tilde x}^a - \epsilon y^a(x) $. This implies
$$
\frac{ \partial x^c}{ \partial{\tilde x}^a} = \delta^c_a - \epsilon \frac{\partial y^c(x)}{\partial {\tilde x}^a } = \delta^c_a - \epsilon \frac{\partial y^c(x)}{\partial x^b } \frac{\partial x^b}{\partial {\tilde x}^a } = \delta^c_a - \epsilon\partial_a y^c + {\cal O}(\epsilon^2)
$$
where we have introduced notation
$$
\partial_a y^c \equiv \frac{\partial y^c(x)}{\partial x^a}
$$
We therefore have
$$
{\tilde g}_{ab}({\tilde x}) = {\tilde g}_{ab} ( x + \epsilon y ) = {\tilde g}_{ab}(x) + \epsilon y^c \partial_c {\tilde g}_{ab}(x) + {\cal O}(\epsilon^2)
$$
From the other side of the equality, we also have
\begin{equation}
\begin{split}
{\tilde g}_{ab}({\tilde x}) &= \left( \delta^c_a - \epsilon\partial_a y^c + {\cal O}(\epsilon^2) \right) \left( \delta^d_b - \epsilon\partial_b y^d + {\cal O}(\epsilon^2) \right) g_{cd}(x) \\
&= g_{ab}(x) - \epsilon \left( \partial_a y^c g_{cb} + \partial_b y^d g_{ad} \right) + {\cal O}(\epsilon^2)
\end{split}
\end{equation}
We therefore have
$$
{\tilde g}_{ab}(x) + \epsilon y^c \partial_c {\tilde g}_{ab}(x) + {\cal O}(\epsilon^2) = g_{ab}(x) - \epsilon \left( \partial_a y^c g_{cb} + \partial_b y^d g_{ad} \right) + {\cal O}(\epsilon^2)
$$
and thus
$$
g_{ab}(x) = {\tilde g}_{ab}(x) + \epsilon \left( y^c \partial_c {\tilde g}_{ab} + \partial_a y^c g_{cb} + \partial_b y^d g_{ad} \right) + {\cal O}(\epsilon^2)
$$
Finally, since ${\tilde g}_{ab} = g_{ab}+ {\cal O}(\epsilon)$, we have
$$
g_{ab}(x) = {\tilde g}_{ab}(x) + \epsilon \left( y^c \partial_c g_{ab} + \partial_a y^c g_{cb} + \partial_b y^d g_{ad} \right) + {\cal O}(\epsilon^2)
$$
The quantity in the bracket above is precisely the Lie derivative of $g_{ab}$.
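For a concrete check, the first-order identity can be verified symbolically on a sample metric and vector field (my sketch, in two dimensions with SymPy): transforming $g$ and expanding in $\epsilon$ reproduces exactly the Lie-derivative combination in the bracket.

```python
import sympy as sp

x1, x2, eps = sp.symbols('x1 x2 epsilon')
X = [x1, x2]
g = sp.Matrix([[1, 0], [0, x1**2]])   # a sample metric g_ab
y = [x2, x1]                          # a sample vector field y^a

# inverse coordinate change to first order: x^a = xt^a - eps*y^a(xt)
x_old = [X[a] - eps * y[a] for a in range(2)]
J = sp.Matrix(2, 2, lambda c, a: sp.diff(x_old[c], X[a]))    # dx^c/dxt^a
g_old = g.xreplace({X[0]: x_old[0], X[1]: x_old[1]})         # g_cd at x(xt)
g_tilde = sp.Matrix(2, 2, lambda a, b: sum(
    J[c, a] * J[d, b] * g_old[c, d] for c in range(2) for d in range(2)))

# (L_y g)_ab = y^c d_c g_ab + g_cb d_a y^c + g_ac d_b y^c
Lg = sp.Matrix(2, 2, lambda a, b: sum(
    y[c] * sp.diff(g[a, b], X[c])
    + g[c, b] * sp.diff(y[c], X[a])
    + g[a, c] * sp.diff(y[c], X[b]) for c in range(2)))

# check g_ab(x) = gtilde_ab(x) + eps*(L_y g)_ab + O(eps^2), component by component
for a in range(2):
    for b in range(2):
        d = sp.expand(g_tilde[a, b] + eps * Lg[a, b] - g[a, b])
        assert d.coeff(eps, 0) == 0 and d.coeff(eps, 1) == 0
print("first-order identity verified for this example")
```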
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/112363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Amateur's question on Black Holes Black holes are caused by massive curvature of the fabric of space-time. Is it right to believe that, theoretically, forces of electromagnetic origin could also lead to distortion of the fabric of space-time (though it may not be as tremendous as the extent to which distortion is brought about by gravitational forces)? If so, could we venture on the existence of tiny holes in space-time due to the electromagnetic effect on the space-time fabric?
| Electromagnetic effects do lead to a curvature of spacetime, as gravitation couples to any quantity in the stress-energy tensor, as dictated by the Einstein field equations. Specifically, the tensor is given by,
$$T^{ab}=-\frac{1}{\mu_0}\left( F^{ac} F_{c}^b +\frac{1}{4}g^{ab}F_{cd}F^{cd}\right)$$
where $F$ is the field-strength of the electromagnetic $4$-potential $A$, which the electric and magnetic fields depend on. The corresponding field equations are,
$$R^{ab}-\frac{1}{2}g^{ab}R + g^{ab}\Lambda = -\frac{8\pi G}{\mu_0}\left(F^{ac} F_{c}^b +\frac{1}{4}g^{ab}F_{cd}F^{cd} \right)$$
The theory is often referred to as 'Einstein-Maxwell theory.' Analytic and approximate black hole solutions to the theory are known, c.f. Spherically symmetric black hole solutions to Einstein-Maxwell theory with a Gauss-Bonnet term by D.L. Wiltshire. From their abstract:
The only spherically symmetric solutions of the theory are shown to be generalisations of the Reissner-Nordstrom and Robinson-Bertotti solutions. The “Reissner-Nordstrom” solutions have asymptotically flat and asymptotically anti-de Sitter branches, however, the latter are unstable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/112629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
If I'm floating in space and I turn on a flashlight, will I accelerate? Photons have no mass but they can push things, as evidenced by laser propulsion.
Can photons push the source which is emitting them? If yes, will a more intense flashlight accelerate me more? Does the wavelength of the light matter? Is this practical for space propulsion? Doesn't it defy the law of momentum conservation?
Note: As John Rennie mentioned, all in all the wavelength doesn't matter, but for a more accurate answer regarding that, see the comments in DavePhD's answer .
Related Wikipedia articles: Ion thruster, Space propulsion
| This does not directly answer your question, but this is related. If you are floating in space the photons that hit you are also exerting a force. When you float in space a large number of photons emitted by the sun will hit you. These photons exert a force, this mechanism is referred to as radiation pressure. This force is significant enough that you can actually control a spacecraft with it.
NASA is doing that with the Kepler space telescope. The space telescope lost one of its reaction wheels. Reaction wheels are used to alter a spacecraft's orientation. With the remaining reaction wheels, the orientation of the telescope cannot be controlled with the accuracy needed for scientific missions. NASA devised a way to make use of the radiation pressure for controlling the spacecraft's orientation.
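For a sense of scale on the flashlight question (illustrative numbers, not from the answer): the thrust of a perfectly collimated beam is $F = P/c$, which depends only on the emitted power, not the wavelength. A sketch assuming a 1 W flashlight and a 100 kg astronaut-plus-suit:

```python
c = 299_792_458.0           # speed of light, m/s

def photon_thrust(power_w: float) -> float:
    """Thrust of a perfectly collimated beam of light: F = P / c.

    Only the total emitted power matters, not the wavelength."""
    return power_w / c

power = 1.0                 # assumed: ~1 W of emitted light
mass = 100.0                # assumed: astronaut plus suit, kg

force = photon_thrust(power)
accel = force / mass
print(f"thrust ~ {force:.2e} N, acceleration ~ {accel:.2e} m/s^2")
# roughly 3.3e-9 N and 3.3e-11 m/s^2: you do accelerate, just imperceptibly
```

So momentum is conserved (the light carries momentum $P/c$ per unit time in one direction, you recoil the other way), but photon propulsion from a flashlight is hopeless in practice.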
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/112866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60",
"answer_count": 4,
"answer_id": 1
} |
Single Photon Hits A Linear Polarizer, What Happens? If a linear polarized single photon strikes a linear polarizer such that its polarization is at 45 degrees to the polarization axis of the polarizer, what happens?
There is a 50% chance (since $\cos^2 45^\circ = 1/2$) that the photon is transmitted, and assuming it is transmitted, it somehow rotates its polarization to match that of the polarizer axis. But the photon has energy defined by $E=h\nu$ before and after it encounters the polarizer. So it cannot lose energy by projecting its amplitude onto the polarization axis, as per the normal explanation.
Also assuming the linear polarizer is a dichroic type, then I understand that this means the long molecules in the film have their long axis aligned perpendicular to the polarization axis - so how does the photon encounter the molecules. Is it absorbed and then re-emitted orthogonal to the molecular axis somehow?
| Since you are talking about photons, you are also talking about quantum mechanics. In the language of QM, the incoming photon induces a transition to a virtual state of a molecule in the polarizer. When the molecule returns to the ground state, a photon of rotated polarization is emitted.
This transition to a virtual state is not absorption. The energy of the photon does not have to be anywhere near the energy of a molecular transition. The photon and the molecule find themselves in a state of mixed photon and molecule character. This state retains its coherence properties, and the phase of the outgoing photon is related to the phase of the incoming photon, and its frequency is identical to that of the incoming photon.
In real absorption, something (like a collision) interrupts the coherence of the mixed state, which could lead to the molecule finding itself in a true excited state. This can only happen if the energy of the photon matches the energy of the excited state. In that case, we say the photon has been absorbed. The molecule will eventually spontaneously drop back to the ground state emitting a photon, but that photon will have no fixed phase relationship with the original, and might not have the same frequency.
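A minimal Jones-vector sketch of the transmission probability (my addition, not part of the answer): project the photon's polarization state onto the polarizer axis and square the amplitude. At $45^\circ$ this gives exactly $1/2$, the single-photon version of Malus's law:

```python
import numpy as np

def transmission_probability(theta_deg: float) -> float:
    """Probability that a linearly polarized photon passes a polarizer whose
    axis makes angle theta with the photon's polarization (Malus's law
    applied to a single photon)."""
    photon = np.array([1.0, 0.0])               # |H>, Jones vector
    th = np.radians(theta_deg)
    axis = np.array([np.cos(th), np.sin(th)])   # polarizer transmission axis
    amplitude = axis @ photon                   # projection <axis|psi>
    return amplitude ** 2

print(transmission_probability(45.0))   # 0.5 (up to float error)
print(transmission_probability(0.0))    # 1.0
print(transmission_probability(90.0))   # ~0.0
```

The transmitted photon keeps its full energy $h\nu$; it is the ensemble of many photons that reproduces the classical factor-of-two intensity loss.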
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/113017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find the points where potential is null Let's say we have two charges called $q_1$ and $q_2$, respectively $20 \, C$ and $-40\,C$, at a distance $d=1\,m$
We want to find all the points where electric potential is null.
I solved the equation
$$\frac{q_1}{4\pi\epsilon_0r_1} + \frac{q_2}{4\pi\epsilon_0(d-r_1)}=0$$
For $r_1$ (distance from $q_1$), and found $r_1=\frac13\,m$
However this is not the only solution: there is another point not in-between the charges, but $1\,m$ left from $q_1$ ($r_1=-1\,m$)
How can I set up an equation giving me both the solutions?
| Let's set up coordinates: put the $20\,C$ charge at the origin and the $-40\,C$ charge at $r=-1$.
The expression for $V(r)$ is then simply (I'll set $\kappa = 1/4\pi\epsilon_0$ to $1$ for convenience):
$$ V(r) = \frac{20}{r} - \frac{40}{r+1} $$
so the potential is zero when:
$$ \frac{20}{r} = \frac{40}{r+1} $$
Only this isn't quite right because the potential for each charge is symmetric so the potential due to charge $A$ obeys $V_A(r) = V_A(-r)$ and likewise for the other charge. It's because you're ignoring this that your equation gives you only one null point. The equation really should be:
$$ \frac{20}{|r|} = \frac{40}{|r+1|} $$
The easy way to deal with those modulus operators is to square both sides:
$$ \frac{400}{r^2} = \frac{1600}{(r+1)^2} $$
and if we rearrange this we get the quadratic equation:
$$ 3r^2 - 2r - 1 = 0 $$
Quadratic equations have two roots, and the two roots are going to give you the two null points. If we use the usual expression for the roots of a quadratic equation we get $r = 1$ and $r = -\tfrac{1}{3}$.
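The two roots can be checked numerically; a short sketch (my addition) using the answer's coordinates ($20\,C$ at the origin, $-40\,C$ at $r=-1$, $\kappa = 1$):

```python
import numpy as np

q1, q2 = 20.0, -40.0         # charges, with kappa = 1/(4 pi eps0) set to 1

def V(r):
    """Potential on the axis, with q1 at r = 0 and q2 at r = -1."""
    return q1 / abs(r) + q2 / abs(r + 1)

# Squaring the condition V(r) = 0 leads to the quadratic 3r^2 - 2r - 1 = 0
roots = np.roots([3.0, -2.0, -1.0])
print(sorted(roots))         # approximately [-1/3, 1]
for r in roots:
    assert abs(V(r)) < 1e-9  # both roots are genuine null points
```

Because squaring can introduce spurious solutions in general, plugging each root back into $V(r)$ as above is a worthwhile final check.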
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/113091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does light travel in straight paths? Why light travels in straight paths? What's the real cause that makes light photons to go in a straight line? and what are the factors that could change the path of light externally? (I'm excluding reflection, refraction and deviation here.)
| I'll give other ways of getting to the same answer.
First of all, the wonderful path integral formulation (which can be found in Feynman's excellent book, "QED"):
In order to consider the probability of a particle going from a point $A$ to a point $B$, you consider ALL the possible paths. Each path has a probability contribution and we just have to sum them up (integrate, to be more precise). You might want to watch this video, where each contribution is represented by an arrow.
The contribution of each trajectory will roughly depend on the time taken. But this is the quantum-mechanical view; the point is that all the deviations from a straight line tend to cancel out, giving us a straight line in the classical limit.
This interpretation has a deep importance in modern physics since it applies not only to photons, but to any other particle. It would be written mathematicaly as:
$$\langle \text{final state} \vert e^{-iHt} \vert \text{initial state} \rangle = \int e^{iS[x]} \, Dx(t)$$
Where $S$ is the classical action, as seen in Lagrangian mechanics.
Another, quite similar view is that light travels in a way that minimizes the travel time (this is not strictly true in the case of mirrors, where it might be a maximum, but we are not interested in that here). This is called Fermat's principle and can be derived from Huygens' principle (which Mike Dunlavey explained).
So in free space, the way you arrive earliest is by going in a straight line.
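The cancellation of non-straight paths can be illustrated with a toy numerical sketch (my construction, not Feynman's; the geometry and wavenumber are assumptions). Sum the phasors $e^{ikL}$ over a family of paths that detour through an intermediate point, and compare paths near the straight line with strongly bent ones:

```python
import numpy as np

# Toy stationary-phase demo: a photon goes from A = (-1, 0) to B = (1, 0)
# via an intermediate point (0, y).  Each path contributes a phasor
# exp(i k L(y)), with path length L(y) = 2 * sqrt(1 + y^2).  Paths near the
# straight line (y ~ 0) add coherently; strongly bent paths oscillate in
# phase and largely cancel.
k = 200.0          # wavenumber: the geometry spans many wavelengths
dy = 1e-3

def phasor_sum(y_lo: float, y_hi: float) -> complex:
    y = np.arange(y_lo, y_hi, dy)
    L = 2.0 * np.sqrt(1.0 + y**2)
    return complex(np.sum(np.exp(1j * k * L)) * dy)

near = phasor_sum(-0.3, 0.3)   # paths close to the straight line
far = phasor_sum(1.0, 2.0)     # strongly bent detours
print(abs(near), abs(far))     # the near-straight paths dominate
```

The phase $kL(y)$ is stationary at $y = 0$, so only a narrow band of nearly straight paths contributes in the large-$k$ (classical) limit; this is the stationary-phase mechanism behind "light travels in straight lines."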
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/113135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Does the Standard Model require neutrinos to be massless? I am an undergraduate student in Physics, I have a basic understanding of Particle Physics and Quantum Mechanics but none whatsoever of Quantum Field Theory.
I know that Neutrino mixing requires neutrinos to be massive (but why? Physically, couldn't neutrinos mix if they were massless?), and that their mass is usually estimated to be lower than an upper threshold.
But mathematically, does the Standard Model actually predict an upper limit on the neutrino mass, or does it just say that they are massless?
In the former case, what is it stopping it from predicting a lower limit?
In the latter case, so is it wrong?
| Your question is addressed in this paper. The Standard Model as is can accommodate massive neutrinos, but if the neutrinos have a mass and no right-handed neutrinos are added, the model becomes non-renormalisable. Adding right-handed neutrinos fixes this.
The Standard Model doesn't make any predictions of neutrino mass, but then it doesn't predict any of the fermion masses. The masses of leptons and quarks are input parameters.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/113242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Wrapping plastic in aluminium foil to protect it from heat Does it make any sense to wrap the plastic handle of a pan in aluminium foil to protect it from overheating when placing it to the hot oven?
| Everything in the oven will tend toward the same equilibrium temperature.
Aluminium foil is not a good insulator.
Three suggested courses of action:

1. Remove the plastic handle.
2. Find a different pan with a metal handle, and use an insulator (i.e. an oven mitt) when removing the pan from the oven.
3. Find a material with a good insulating value to protect the plastic handle.

COA 3 will be the most difficult: your solution will most likely be large/clumsy and will cost $$$.
Use the KISS (Keep It Simple Stupid) principle to solve your problem.
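A crude back-of-envelope sketch (my addition; conductivities are typical handbook values, and the temperature drop is illustrative) of why foil adds essentially no insulation. In reality a thin foil simply equilibrates to oven temperature rather than sustaining this flux, but the steady-state comparison makes the point:

```python
def heat_flux(k: float, dT: float, thickness: float) -> float:
    """Steady-state conductive heat flux through a slab: q = k * dT / L, in W/m^2."""
    return k * dT / thickness

dT = 180.0                                              # assumed temperature drop, K

foil = heat_flux(k=237.0, dT=dT, thickness=20e-6)       # aluminium foil, ~20 um
sleeve = heat_flux(k=0.2, dT=dT, thickness=5e-3)        # silicone sleeve, ~5 mm
print(f"foil: {foil:.2e} W/m^2, silicone sleeve: {sleeve:.2e} W/m^2")
```

Aluminium conducts heat several orders of magnitude better than a polymer insulator, so wrapping the handle in foil does essentially nothing to protect it.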
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/113364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |