$k$-dependence of the energy in solid state physics In a crystal, the electrons are subject to a periodic potential due to the fact that the atoms form a periodic lattice. From this periodicity we can obtain the Bloch theorem, and get a general formula for the electron wave function, known as the Bloch function. Through the Bloch theorem, it seems that we can show the existence of band structures, i.e. that the electronic energies depend on $k$ and on an integer $n$ referring to the band index, and do not depend on $x$. This means we have a Schrödinger equation of the form (in 1D): $$\left[ -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) \right] \Psi_{nk}(x) = E_n(k) \Psi_{nk}(x),$$ with $V$ a periodic potential, and $\Psi_{nk}(x)$ a Bloch function. My question is: how can we prove that this non-dependence of the energy on $x$ is true for any periodic potential $V$? I could find examples for specific $V$'s, like for rectangular potential barriers, but not in general.
I think your question is confused: the quantum numbers $k$ and $n$ denote a wavefunction with a particular spatial dependence $\psi(x)$ and energy $E_{n,k}$. So in that sense, the energy does sensitively depend on $x$. One should note that this is the same result as that of the simple harmonic oscillator, which has states labeled by $n$, or the hydrogen atom with states labeled by $n,l,m$. In both of these cases, the quantum numbers uniquely denote an eigenstate with a particular dependence on $x$. Perhaps what you are wondering is how to show that the energies of a lattice model are invariant upon translation by a lattice vector. This is shown in Bloch's theorem (see wikipedia link here: https://en.wikipedia.org/wiki/Bloch%27s_theorem). One can see how this works by substituting $x$ by $x+a$ and recognizing $V(x+a)=V(x)$.
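One way to see the band structure $E_n(k)$ concretely, without picking a special potential like rectangular barriers, is to diagonalize the Bloch Hamiltonian in a plane-wave basis (the "central equation"). The sketch below is illustrative: the cosine potential, its strength, and the basis size are all assumptions, with $\hbar = m = a = 1$.

```python
import numpy as np

# Plane-wave (central-equation) diagonalization for a 1D cosine
# potential V(x) = 2*V0*cos(2*pi*x).  Units: hbar = m = a = 1.
# V0 and nG are illustrative choices, not from the question.
V0 = 0.05          # Fourier component V_G of the potential (assumed)
nG = 11            # number of reciprocal-lattice vectors kept
G = 2*np.pi*(np.arange(nG) - nG//2)   # G_n = 2*pi*n

def bands(k):
    """Eigenvalues E_n(k) of the Bloch Hamiltonian at crystal momentum k."""
    H = np.diag(0.5*(k + G)**2)       # kinetic term, diagonal in G
    for i in range(nG - 1):           # V couples states with G' = G +/- 2*pi
        H[i, i+1] = H[i+1, i] = V0
    return np.linalg.eigvalsh(H)

# The eigenvalue problem contains k and the band index n but no x:
# that is exactly the k-dependence (and x-independence) being asked about.
ks = np.linspace(-np.pi, np.pi, 101)
E = np.array([bands(k)[:3] for k in ks])   # lowest three bands

gap = bands(np.pi)[1] - bands(np.pi)[0]    # zone-boundary band gap
print(f"gap at k = pi: {gap:.4f} (weak-coupling estimate 2*V0 = {2*V0})")
```

For a weak potential the gap at the zone boundary comes out close to $2|V_G|$, the textbook nearly-free-electron result, which is a quick consistency check on the construction.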
{ "language": "en", "url": "https://physics.stackexchange.com/questions/638028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Help understanding Einstein notation This is basically the same question as this one. I have the same problem with the sign. In the Dirac equation $(i\gamma^{\mu}\partial_{\mu}-m)\psi = 0$, the term $i\gamma^{\mu}\partial_{\mu}$ is: $$i\gamma^{\mu}\partial_{\mu} = \sum_{j=0}^{3}i\gamma^{j}\partial_{j}$$ However, the Einstein summation convention is being used. The answer of the linked question says that this is because $\partial_{\mu}$ is contravariant; however, I have seen many times $\partial_{\mu}$ being used with the sign convention. For instance, the D'Alembertian is: $$\square^{2} = \partial_{0}^{2}-\partial_{1}^{2}-\partial_{2}^{2}-\partial_{3}^{2} = \partial^{\mu}\partial_{\mu} = \partial_{\mu}\partial^{\mu}$$ So, what's the difference here? When do I have to change the signs and when do I not?
We use the metric $[\eta]=\mathrm{diag}(+,-,-,-).$ Note first that $$X^\mu Y_\mu=X^0Y_0+X^1 Y_1+X^2Y_2+X^3Y_3 \tag{1},$$ but also $$X^\mu Y_\mu=\eta^{\mu\nu}X_\mu Y_\nu=\eta^{00}X_0Y_0+\eta^{11}X_1Y_1+\eta^{22}X_2Y_2+\eta^{33}X_3Y_3, \tag{2}$$ which, using the components of the metric gives $$X^\mu Y_\mu=X_0Y_0-X_1Y_1-X_2Y_2-X_3Y_3. \tag{3}$$ Note the position of the indices in $(3)$ compared to $(1)$. We have both indices down in $(3)$ at the cost of introducing factors of $\pm1$ from the Minkowski metric.
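A quick numeric check of eqs. $(1)$–$(3)$ with random four-vectors (component values are arbitrary) makes the index bookkeeping explicit:

```python
import numpy as np

# Verify X^mu Y_mu in the three forms (1)-(3), with eta = diag(+,-,-,-).
# X_up holds contravariant components X^mu; lowering an index is just a
# matrix multiply with the metric.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
rng = np.random.default_rng(0)
X_up = rng.normal(size=4)   # X^mu (arbitrary test values)
Y_up = rng.normal(size=4)   # Y^mu

X_dn = eta @ X_up           # X_mu = eta_{mu nu} X^nu
Y_dn = eta @ Y_up

lhs = X_up @ Y_dn           # eq. (1): X^mu Y_mu, no explicit signs
rhs = np.einsum('mn,m,n->', np.linalg.inv(eta), X_dn, Y_dn)  # eq. (2)
explicit = (X_dn[0]*Y_dn[0] - X_dn[1]*Y_dn[1]
            - X_dn[2]*Y_dn[2] - X_dn[3]*Y_dn[3])             # eq. (3)

print(lhs, rhs, explicit)   # all three agree
```

The point of the demonstration: the minus signs only appear once both indices are in the same position, exactly as in eq. $(3)$.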
{ "language": "en", "url": "https://physics.stackexchange.com/questions/638990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Relation between critical temperature and density of states The BCS theory predicts that the critical temperature of the superconducting transition is given by $$ T_c \approx \theta \exp \left (- \frac{1}{U D(\epsilon_F)} \right ) $$ where $\theta$ is the Debye temperature, $U$ is the coupling constant of the electron-phonon interaction and $D(\epsilon_F)$ the density of states at the Fermi level. In other words, the larger $D(\epsilon_F)$ (or $U$) is the higher the critical temperature gets. However, strictly speaking, this relation is true only in the framework of BCS and in the weak coupling approximation (i.e. $U D(\epsilon_F) \ll 1$). Question: is there any experimental (or even theoretical) evidence that the rule "higher DOS at the Fermi level implies higher $T_c$" holds also for non-BCS superconductors or outside the weak coupling regime? I'm not asking specifically about the exponential rule, but about the general dependence of $T_c$ on $D(\epsilon_F)$. I'd appreciate any references if this is indeed the case.
It is an experimental fact that HTS superconductors generally show large temperature-independent Pauli susceptibility, indicating high DOS near the Fermi surface. See e.g. here.
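To make the exponential sensitivity in the question's formula concrete, here is a toy evaluation of $T_c \approx \theta \exp(-1/(U D))$. The values of $\theta$ and $U$ are illustrative assumptions, not fitted to any real material:

```python
import math

# Toy evaluation of the weak-coupling BCS formula quoted in the question,
# T_c ~ theta * exp(-1/(U*D)).  All parameter values are assumptions.
theta = 300.0    # Debye temperature in K (assumed)
U = 0.5          # electron-phonon coupling (assumed, arbitrary units)

def Tc(D):
    return theta * math.exp(-1.0/(U*D))

for D in (0.3, 0.4, 0.5):    # DOS at the Fermi level (arbitrary units)
    print(f"D = {D}: U*D = {U*D:.2f}, Tc ~ {Tc(D):.2f} K")
```

Even modest increases in $D(\epsilon_F)$ raise $T_c$ by an order of magnitude in the weak-coupling regime, which is why the "higher DOS, higher $T_c$" rule of thumb is so tempting to extrapolate beyond BCS.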
{ "language": "en", "url": "https://physics.stackexchange.com/questions/639101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Why is the intensity of light reaching the sensor or film with a particular lens directly proportional to $\frac{D^2}{f^2}$? The following is quoted from my book: "The intensity of light reaching the sensor or film is proportional to the area viewed by the camera lens and to the effective area of the lens. The size of the area that the lens "sees" is proportional to the square of the angle of view of the lens, and so is roughly proportional to $\frac{1}{f^2}$. The effective area of the lens is controlled by means of an adjustable lens aperture, or diaphragm, a nearly circular hole with variable diameter $D$; hence the effective area is proportional to $D^2$. Putting these factors together, we see that the intensity of light reaching the sensor or film with a particular lens is proportional to $\frac{D^2}{f^2}$." My question is: how did they conclude that the area that the lens "sees" is roughly proportional to the square of the angle of view of the lens and to $\frac{1}{f^2}$, and how is the effective area proportional to $D^2$? Ultimately, my question is: how is the intensity of light proportional to $\frac{D^2}{f^2}$? Can someone please explain? I did not understand what the paragraph explained. Please help.
The first factor, proportional to $D^2$, is rather simple: the larger your lens aperture, i.e. the larger the area that collects light, the more photons you get. The second factor is a bit misleading. Actually, the smaller the focal length $f$, the larger the angle of view $\alpha$ that the camera sees, both in the horizontal and vertical: $$ \alpha = 2 \cdot \arctan \left(\frac{d}{2 f} \right) $$ (see https://en.wikipedia.org/wiki/Angle_of_view). $d$ is either the width or height of your camera chip, $\alpha$ the corresponding angle (so there is $\alpha_w$ and $\alpha_h$). The width of the observed scene is proportional to $\tan(\alpha_w/2)$, the height is proportional to $\tan(\alpha_h/2)$. Thus, according to the equation above, both width and height are proportional to $\frac{1}{f}$. The total amount of photons your camera collects is of course proportional to the image area, therefore to the product of width and height, and thus proportional to $\frac{1}{f^2}$.
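A short numerical check of this scaling (sensor width $d$ and scene distance are assumed example values):

```python
import math

# Numerical check of the D^2/f^2 scaling: halving the focal length
# doubles the width (and height) of the scene imaged onto the sensor,
# so the scene area grows as 1/f^2; collected light grows with the
# aperture area, i.e. as D^2.  d and L are illustrative assumptions.
d = 0.024          # sensor width in m (full-frame chip, assumed)
L = 10.0           # distance to the scene in m (assumed)

def scene_width(f):
    alpha = 2*math.atan(d/(2*f))     # angle of view for focal length f
    return 2*L*math.tan(alpha/2)     # width of the scene at distance L

w50 = scene_width(0.050)             # 50 mm lens
w25 = scene_width(0.025)             # 25 mm lens
print(f"width ratio 25mm/50mm: {w25/w50:.3f}")   # -> 2.000
```

Since both width and height double when $f$ halves, the same scene light is spread over four times the area, matching the $1/f^2$ factor in the book's argument.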
{ "language": "en", "url": "https://physics.stackexchange.com/questions/639459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is the number of isotopes of an element bounded? Is there a known reason why any given element has finitely many isotopes? Here I mean both stable and unstable isotopes. If we know this, do we have a reason why, for a given element, are the isotopes limited to that particular number?
In the comments you clarify your question by asking, as an example, about hydrogen isotopes. If you look at a Table of Nuclides, you will see that there are at least 7 hydrogen isotopes which have been identified so far. There are links attached to each entry in the table that give data on the reactions for creating these exotic nuclides. You can see that He-3 through He-10 have been identified, and C-8 through C-22. At the extremely neutron-rich ends, the halflives are extremely short, and neutron emission is prevalent. It seems, based on the experimental data, that the only restriction on neutron-rich isotopes of any element is the experimental ability to make the nucleus last long enough/in sufficient numbers to get repeatable measurements demonstrating they have actually been made. EDIT: As pointed out in a comment, there most likely is a limiting halflife. Check out this question and answer. There's no reason to assume that H-8 and H-9 won't eventually be formed and identified. It will take a lot of ingenuity, patience, and money.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/639685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
What is the basic physics of current electricity? Why does the current decrease when the length of a resistor increases, and how is the speed of electricity almost $c$ (the speed of light)?
The field due to the battery sets up a surface charge in the wire. The surface charge is negative near the negative pole of the battery, and positive near the positive terminal, and varies more or less linearly along the wire. The surface charge in turn sets up inside the wire an electric field which is constant across the diameter of the wire, and along the length. This field accelerates electrons.
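The answer above covers the field picture; for the first part of the question, the standard model of resistance already shows why current drops with resistor length. A minimal sketch (the numbers are illustrative, with a resistivity close to copper's):

```python
# Why current decreases with resistor length: R = rho*L/A (resistivity
# times length over cross-section), and I = V/R for a fixed voltage.
# All numeric values below are illustrative assumptions.
rho = 1.7e-8      # resistivity in ohm*m (roughly copper)
A = 1e-6          # cross-sectional area in m^2
V = 1.5           # applied voltage in volts

def current(L):
    R = rho*L/A           # resistance grows linearly with length
    return V/R

for L in (1.0, 2.0, 4.0):
    print(f"L = {L} m: I = {current(L):.3e} A")
```

Doubling the length doubles the resistance and halves the current, which is the inverse proportionality the question asks about.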
{ "language": "en", "url": "https://physics.stackexchange.com/questions/640065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does mass have an effect on Centripetal Acceleration? I am using an online simulation for a lab concerning Centripetal Acceleration. When I change the mass the graph indicates that the magnitude of the acceleration is constant. According to the Centripetal Acceleration formula: $a=v^2/r$, this is true because no mass is present in the relationship. However, when I use Newton's Second Law of Motion, $a=f/m$, I can see that the mass and the acceleration are inversely proportional. Both of these ideas are found when I look them up online, now I am a bit confused on which one might be more valid.
If you change the mass, then either the acceleration or the centripetal force must change as well (or both). That is what we see from Newton's 2nd law. The simulator you are using seems to adjust the force. That's why you see no acceleration change. That is just a choice by the software developers or possibly something you can adjust for in the simulation settings. Whether that is a reasonable approach depends on the physical scenario. * *In scenarios such as celestial orbits, it is reasonable since the gravitational force increases proportionally with mass. *In scenarios such as a game of tetherball, it might not be reasonable since the string force might be limited or constant regardless of mass. Note that the expression $a=v^2/r$ isn't needed for this analysis.
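The two bullet cases can be contrasted in a few lines. The numeric values (low Earth orbit parameters, a fixed string force) are illustrative assumptions:

```python
# Contrast the two scenarios from the answer.  For a gravitational orbit
# F = G*M*m/r^2, so a = F/m = G*M/r^2 is independent of m; for a fixed
# string force F0, a = F0/m shrinks as m grows.  Values are illustrative.
G, M, r = 6.674e-11, 5.97e24, 7.0e6   # Earth mass, low orbit radius (SI)
F0 = 50.0                              # fixed string force in N (assumed)

def a_grav(m):
    return (G*M*m/r**2)/m              # F/m with F proportional to m

def a_string(m):
    return F0/m                        # F/m with F held fixed

for m in (1.0, 2.0, 10.0):
    print(f"m = {m:>4} kg: a_grav = {a_grav(m):.3f} m/s^2, "
          f"a_string = {a_string(m):.2f} m/s^2")
```

This is why an orbit simulator can show mass-independent acceleration without contradicting $a = F/m$: the force model matters.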
{ "language": "en", "url": "https://physics.stackexchange.com/questions/640216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Why is this result no contradiction to length contraction? Let's consider 2 events in an inertial system $S$: $$x_1=(0,0,0,0) \quad \quad \quad x_2=( \Delta t, \Delta x,0,0)$$ If we assume the two events occur in the same place we have: $\Delta \bar{t}= \gamma \Delta t$ for every other inertial system, also known as time dilation. If we assume instead the 2 events occur at the same time in $S$ we get $\Delta \bar{x} =\gamma \Delta x$, hence in every other reference system (with non zero relative motion with respect to $S$?) we have that this length appears to be bigger than in the system $S$, which appears to be a contradiction to the effect of length contraction. I don't see how to solve this issue: is it related that we give definition of "length" to some objects, which we assume to be at rest in the reference frame where we define the proper length? I'm not sure if in this case $ \Delta x$ can be associated to a length. I'm just very confused and (as you already know by now) new to relativity. Thank you for any help!
Let me start by saying that $x_2=( \Delta t, \Delta x,0,0)$ is not an event; the notation is somewhat confusing. If you state that $\Delta \bar{x}=\gamma \Delta x$ you must also put $\Delta \bar{t}=\frac{1}{\gamma}\Delta t$. You didn't do this though. Let me explain. The Lorentz transformations are: $$x'=\gamma(x-vt)$$ $$t'=\gamma(t-\frac{vx}{c^2})$$ For $t=0$ this gives for $x'$ $$x'=\gamma x,$$ so a distance $\gamma\Delta x$ is squeezed into a distance $\Delta x'$, which means space in the moving frame seems contracted (say a bus is squeezed). When $x=vt$ we get $$t'=\gamma(t-\frac{v^2}{c^2}t)\rightarrow$$ $$t'=\gamma t(1-\frac{v^2}{c^2}),$$ which means that $$t'=\frac{1}{\gamma}t$$ So a duration $\frac{1}{\gamma}\Delta t$ is squeezed into a duration $\Delta t'$. This means that time seems to be dilated.
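A numeric check of the two limits from the question may help (units with $c=1$; the value of $v$ is an arbitrary choice). Both formulas in the question come out correct; they simply refer to different pairs of events:

```python
import numpy as np

# Explicit Lorentz transforms, c = 1.  v is an arbitrary choice.
v = 0.6
g = 1/np.sqrt(1 - v**2)                      # gamma = 1.25 for v = 0.6

def lorentz(t, x):
    return g*(t - v*x), g*(x - v*t)          # (t', x')

# Same-time events in S (both at t = 0): separation grows by gamma,
# matching the question's Delta x-bar = gamma * Delta x.
_, x1 = lorentz(0.0, 0.0)
_, x2 = lorentz(0.0, 1.0)
assert np.isclose(x2 - x1, g)

# Same-place events in S (both at x = 0): duration grows by gamma,
# i.e. time dilation, Delta t-bar = gamma * Delta t.
t1, _ = lorentz(0.0, 0.0)
t2, _ = lorentz(1.0, 0.0)
assert np.isclose(t2 - t1, g)
print("both gamma factors verified for v =", v)
```

Length contraction refers to measuring the endpoints of a rod *simultaneously in the primed frame*, which is a different pair of events from the simultaneous-in-$S$ pair above; that is the resolution of the apparent contradiction.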
{ "language": "en", "url": "https://physics.stackexchange.com/questions/640277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
How does physics "remember" that there's an EM wave when $E, B = 0$? I don't understand this subject well but I'll try to explain what I mean. In case of a pendulum or other macro oscillators you can calculate the "next" state from position/angle and velocity/angular velocity. They give you the entire description of the oscillator's state. Position and velocity are out of phase - when the displacement is largest, the velocity goes to 0 and vice versa. In EM waves, $E$ and $B$ are in phase and, if I understand correctly, each of them gives rise to the other. What happens when they both go to $0$? How would physics "remember" that there's a wave? Are there some physical properties like "electric field velocity" or "magnetic field velocity"?
For the photon in a box with the right distance between the walls (a multiple of the wavelength), the E and B fields of the standing wave are out of phase, shifted by a quarter period in time and a quarter wavelength in space. With that picture it is obvious that the photon has a constant energy content, which also corresponds to our intuition.
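The out-of-phase standing-wave fields can be written $E \propto \sin(kx)\cos(\omega t)$ and $B \propto \cos(kx)\sin(\omega t)$. A short check (illustrative normalization, $c = 1$) shows the energy in the box stays constant even at the instants when $E$ vanishes everywhere, because the energy is then entirely in $B$:

```python
import numpy as np

# Standing wave in a unit box: E ~ sin(kx)cos(wt), B ~ cos(kx)sin(wt).
# One wavelength per box, c = 1; normalization is arbitrary.
k = w = 2*np.pi
x = np.linspace(0, 1, 2000, endpoint=False)   # uniform samples over the box

def total_energy(t):
    E = np.sin(k*x)*np.cos(w*t)
    B = np.cos(k*x)*np.sin(w*t)
    return np.mean(E**2 + B**2)    # energy density ~ E^2 + B^2, box-averaged

energies = [total_energy(t) for t in np.linspace(0, 1, 17)]
print(f"min {min(energies):.6f}, max {max(energies):.6f}")   # constant in time
```

This is the field-theory answer to "how does physics remember": the state is specified by $E$ and $B$ over all space, and when $E=0$ at some instant the information (and energy) sits in $B$, and vice versa.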
{ "language": "en", "url": "https://physics.stackexchange.com/questions/640387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
In which frame (with respect to which object) is velocity taken in the Bernoulli equation? Duplicate: Bernoulli's equation and reference frames. Why do we need to take the velocity of air in the tunnel with respect to the train, and the velocity of the train with respect to the earth (inertial frame)? (Here we are comparing the pressure difference between the air inside and outside the train.) A train with cross-sectional area $S_t$ is moving with speed $v_t$ inside a long tunnel of cross-sectional area $S_0$ ($S_0 = 4S_t$). Assume that almost all the air (density $\rho$) in front of the train flows back between its sides and the walls of the tunnel. Also, the air flow with respect to the train is steady and laminar. Take the ambient pressure and that inside the train to be $p_0$. If the pressure in the region between the sides of the train and the tunnel walls is $p$, then $p_0 - p = \frac{7}{2N} \rho v_t^2$. The value of $N$ is _______. Answer: 9 Solution: https://youtu.be/RXmdziWqI6I?t=540 Question Source: JEE Advanced 2020 Such a relation is also applied to find the pressure difference above and below airplane wings by NCERT. Do we need to take the velocities of the fluid with respect to the moving object? Question is kept for future readers.
I think I see your problem: the difficulty is in understanding why $v = \frac{4}{3}v_{train}$. This relation comes from the equation of continuity, in which we have to consider the velocity of the fluid with respect to the train (in the train's frame the flow is steady). Out of the tunnel area $4A$, an area $A$ is blocked by the train, so $$v \cdot 3A = v_{train} \cdot 4A.$$ As for Bernoulli's theorem, it is based on energy conservation, which can be applied in any inertial frame and gives the correct answer (you can check it for yourself). I hope this helps; if not, please ask.
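Working the numbers through with exact fractions (this just redoes the answer's two equations; $N$ is the unknown from the problem statement):

```python
from fractions import Fraction

# Train-frame calculation for the JEE problem above, done exactly.
# Air far ahead moves at v_t through area S0 = 4*St and squeezes
# through the gap S0 - St = 3*St.  Take v_t = 1 and rho = 1.
S0, St = Fraction(4), Fraction(1)
v_t = Fraction(1)

v = v_t*S0/(S0 - St)                      # continuity: v_t*S0 = v*(S0 - St)
dp = Fraction(1, 2)*(v**2 - v_t**2)       # Bernoulli: p0 - p = rho/2 (v^2 - v_t^2)

# The problem states p0 - p = (7/(2N)) * rho * v_t^2; solve for N.
N = Fraction(7, 2)/dp
print(v, dp, N)        # -> 4/3, 7/18, 9
```

Both steps are done in the train's rest frame, where the flow is steady, which is exactly the condition Bernoulli's equation needs.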
{ "language": "en", "url": "https://physics.stackexchange.com/questions/643194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Value of $\Omega(0)$? If we define $\Omega$ as the following, where $E_r$ is the energy in state $r$, $$\Omega(E)=\int...\int d^{3N}p\ d^{3N}q\ \delta(E-E_r) \tag{1}$$ Then the laplace transform of $\Omega$ is the canonical partition function $Z(\beta)$, $$L\{\Omega(E)\}=\int_{0}^{\infty}\Omega(E)e^{-\beta E}dE=Z(\beta) \tag{2}$$ Motivated by this idea, I was wondering what the following would work out to, $$\beta \equiv \frac{1}{\Omega(E)}\frac{\partial \Omega}{\partial E} \tag{3}$$ $$\beta \Omega(E) = \frac{\partial \Omega}{\partial E} \tag{4}$$ $$L\{\beta \Omega(E)\}=L\left \{\frac{\partial \Omega}{\partial E}\right\} \tag {5}$$ $$\beta Z(\beta) = \beta Z(\beta) - \Omega(0) \tag{6}$$ Which implies that, $$\Omega(0) = 0 \tag{7}$$ This seems incorrect as the number of microstates associated with zero energy should be 1. Shouldn't it? Particularly for a microcanonical ensemble, $$P_r=\frac{1}{\Omega} \tag{8}$$ And taking (7) and (8) into account it seems it would be never possible to have a system with zero energy. It seems like I'm doing some rather loose mixing and matching of ideas from the canonical and microcanonical ensemble. I think that's where I'm getting myself in trouble.
You are mixing equations from the microcanonical and canonical ensembles. For clarity, let us write $$\beta_*(E) = \frac{\partial \ln \Omega}{\partial E}(E)$$ for the inverse temperature in the microcanonical ensemble and $E_*(\beta)$ for the average energy in the canonical ensemble. Then equation $(4)$ is rewritten as $$\Omega(E)\, \beta_*(E) = \frac{\partial \Omega}{\partial E}(E),$$ and because $\beta_*$ depends on $E$ you have $$\beta Z(\beta)\neq \int_0^{+\infty}\beta_*(E)\,\Omega(E)\,e^{-\beta E}\, \text{d}E.$$ In other words, the step from $(4)$ to $(6)$ treats a function of $E$ as if it were a constant that could be pulled out of the Laplace transform.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/643515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are the properties of anti-hydrogen opposite to those of hydrogen? In the series Crisis of Infinite Earths, the whole story is that there is a wave of antimatter rampaging through the multiverse. So I got interested and googled "how to create antimatter", and I found out that when an antiproton and a positron are present in an atom, it creates an anti-hydrogen atom. My question is, what is anti-hydrogen? What can be its properties? We all know that hydrogen is flammable, so is anti-hydrogen a non-flammable material? Because it has opposite properties, right?
The properties of anti-matter in general are very similar to the properties of matter. Richard Feynman considered the problem: if we made radio contact with an alien civilization, how would we be able to tell if they were made of matter or anti-matter? It turns out to be a complicated process. See The Feynman Lectures on Physics, section 52-8. Anti-hydrogen is very much like hydrogen. It would combine with anti-oxygen to make anti-water (I am making up names, but it should be clear what I mean). Anti-water could support life just like water. Anti-people would be made of familiar anti-elements, which would have the same chemistry as elements. The only difference between anti-hydrogen and hydrogen is that it is made of an anti-electron (or positron) and an anti-proton. The only difference between an anti-electron and an electron is that the anti-electron has a positive charge instead of negative. Likewise an anti-proton has a negative charge instead of positive. Anti-electrons repel each other just like electrons do. Anti-electrons are attracted to anti-protons, just like electrons and protons attract. Anti-electrons in anti-atoms would interact with each other in the same kind of way as electrons in atoms do, and would form the same kinds of chemical bonds. As long as anti-matter and matter don't come in contact, they have pretty much identical behavior. Of course, if they do come in contact, they annihilate each other in a flash of gamma rays.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/643615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
How to generate electric current without a permanent magnet? The question is pretty simple: Can we build a device that converts mechanical work into electric current1 without employing a permanent magnet and without access to any external source of current? The restrictions in place seem to rule out the possibility of current generation via induction; and I cannot think of another practical method. I have heard that industrial alternators sometimes work with electromagnets, but we don't have access to any external source of current, so this path doesn't seem viable. Do we really need stupid magnetic rocks to produce current? Unacceptable. To be more specific and minimize the risk of misunderstandings: my question is more or less equivalent to the following one Can we build a device, powered by hand via some sort of rotating lever, that produces electric current, crucially without employing any external current and without any permanent magnet? [1]: Usable electric current, let's say sufficient to properly power up a lamp; doesn't matter if AC or DC.
Doesn't a battery do this? Also, capacitors. EDIT: With the edit, it looks like the premise of your question could be satisfied by a Van de Graaff generator: https://en.wikipedia.org/wiki/Van_de_Graaff_generator which uses friction to strip electrons from a substance and create an electrostatic potential.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/643971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Can someone help me understand this part of Feynman's Lost Lecture? Feynman's lost lecture So at 17:38, 3Blue1Brown states: We know that once the planet has turned an angle $\theta$ off the horizontal with respect to the sun, that corresponds to walking $\theta$ degrees around our circle in our velocity diagram, since the acceleration vector rotates just as much as the radial vector. I'm not sure I quite follow the logic here. Yes, I can see that the acceleration vector rotates just as much as the radius vector, since the sun exerts its force along the radius vector. But what I don't understand is how that translates into saying that when the position of the planet has turned an angle of $\theta$ with respect to the sun, that "corresponds to walking $\theta$ degrees around our circle in our velocity diagram", and certainly not how that can be derived from the fact that the acceleration vector rotates just as much as the radius vector. Specifically, I'm not sure of the meaning of the phrase "walking $\theta$ degrees around our circle in our velocity diagram".
On the right diagram, there are 4 sections, each an equal angle of 15 degrees, so 60 degrees around the orbit. On the left velocity diagram, it was said earlier that the tips of the velocity vectors are equally spaced, so 4 of those sections is a fraction of 4 out of 24 'steps' around the circle, or 60 degrees again. True, the mention of the 'acceleration vector' was not helpful in the video.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/644073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the problem with a rotating singularity? In most cases, people ask how can a point spin, resulting in a 'ringularity' as an answer. But I'm not quite sure why a point can't spin. After all, it's like saying how can something with mass have no volume, or how can a ringularity spin if each frame of its rotation is identical (since each point on the ring is identical).
I guess that you're asking about the question How can a singularity in a black hole rotate if it's just a point?, and this highly upvoted answer. The answer is just wrong. The angular momentum of a black hole is not "in" the singularity, but rather in the shape of the whole spacetime manifold, so there isn't really a problem. The singularity of a nonrotating black hole isn't a point anyway. In the case of a Schwarzschild black hole, the causal structure of the spacetime is such that the $r\to 0$ limit should be treated as a sphere, or rather a cylinder when you include the $t$ dimension. The sphere doesn't have a well defined radius, but it's a sphere, not a point. In the case of a charged black hole, there is good reason to believe that there is a singularity at the inner event horizon, due to the so-called mass inflation instability, so the singularity does have a well-defined size and it isn't zero. The same instability exists in rotating black holes, so the theoretical ring singularity inside the inner horizon probably never exists in the real world. It scarcely matters, because the angular momentum is encoded in the spacetime outside the outer event horizon in any case. Someone wrote a better answer to that question many years later. I just upvoted it and downvoted the other, but it's still 20 points short of being the top answer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/644205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the energy of a capacitor mean the energy stored in both plates? I have a doubt about this: does the term potential energy of a parallel-plate capacitor mean the energy stored in both plates or in a single plate, since in the formula $E=Q^2/2C$, $Q$ is the charge of only one plate? Please help me with this.
It is the energy for the entire system. It is the energy required to put $+Q$ on one plate and $-Q$ on the other plate.
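Equivalently, $Q^2/(2C)$ is the work done transferring charge from one plate to the other: at intermediate charge $q$ the potential difference is $q/C$, so $W=\int_0^Q (q/C)\,dq$, one integral for the pair of plates rather than one per plate. A minimal numeric check (the values of $Q$ and $C$ are illustrative):

```python
# Work to charge a capacitor, W = integral of (q/C) dq from 0 to Q,
# computed with a midpoint Riemann sum and compared with Q^2/(2C).
# Q and C values are illustrative assumptions.
Q = 3e-6      # final charge in coulombs
C = 2e-6      # capacitance in farads

n = 10000
dq = Q/n
W = sum(((i + 0.5)*dq/C)*dq for i in range(n))   # midpoint rule, exact
                                                  # for a linear integrand
print(W, Q**2/(2*C))    # both ~ 2.25e-6 J
```

The midpoint rule is exact here because the integrand $q/C$ is linear, so the sum reproduces $Q^2/(2C)$ to floating-point precision.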
{ "language": "en", "url": "https://physics.stackexchange.com/questions/644466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does the $H_2$ in cosmic clouds virtually not radiate? I am reading the lecture notes of a friend. He has written about the cold (3-20 K) molecular $H_2$ cosmic gas, that "practically doesn't radiate, due to symmetry". Can someone confirm that $H_2$ has this feature of radiating much less electromagnetic radiation than other molecules? And how can this be explained with the symmetry of the molecule? Excuse me, for making you interpret vague statements. Nevertheless, I am very curious
Yes, it is correct that cold $H_2$ radiates much less electromagnetic radiation than other molecules (for example, carbon monoxide). Here's one way to think about this intuitively. On macroscopic scales, electromagnetic radiation is produced when charged objects accelerate. The atoms in a hydrogen molecule consist of charged particles (protons and electrons). These particles are certainly capable of accelerating. However, since momentum is conserved and the two hydrogen atoms are identical, if one of the atoms accelerates, the other atom must accelerate by exactly the same amount in the opposite direction. Therefore, the radiation from the two atoms cancels out. (This assumes that the molecule does not interact with other molecules, which is very reasonable in the interstellar medium.) At molecular scale, the relative acceleration of the two atoms in a diatomic molecule becomes quantized as rotational and vibrational transitions. But the same intuition applies: if the molecule rotates or vibrates, conservation of momentum implies that there is no net radiation. Of course, the protons and electrons in the two atoms can still accelerate relative to each other. Since protons and electrons have very different mass-to-charge ratios, this does produce radiation. However, the proton-electron interaction is much stronger than the atom-atom interaction. Using quantum mechanics, that means that the available energy levels (the electronic levels) are much more widely separated than the rotational and vibrational levels. At the very low temperatures of the interstellar medium, there simply isn't enough energy available to jump between different electronic levels at any significant rate. As such, interstellar hydrogen can't produce much radiation through this process either.
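A rough Boltzmann estimate backs up the last paragraph. The energy scales below are approximate (first electronic excitation of $H_2$ around 11 eV; the lowest allowed rotational jump for a homonuclear molecule, $J=0\to2$, around 0.044 eV):

```python
import math

# Boltzmann suppression factors exp(-dE/kT) for H2 in a 20 K cloud.
# The two excitation energies are approximate literature-scale values.
k_eV = 8.617e-5          # Boltzmann constant in eV/K
T = 20.0                 # cold-cloud temperature in K

def boltzmann(dE_eV):
    return math.exp(-dE_eV/(k_eV*T))

print(f"electronic (~11 eV):           {boltzmann(11.0):.3e}")
print(f"rotational J=0->2 (~0.044 eV): {boltzmann(0.044):.3e}")
```

The electronic factor underflows to zero, and even the lowest rotational level is suppressed by many orders of magnitude, which is why astronomers typically trace cold molecular gas with CO rather than $H_2$ itself.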
{ "language": "en", "url": "https://physics.stackexchange.com/questions/644697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are charged particles cold? Are charged particles colder than neutral ones? If a charged particle is vibrating due to temperature, it will release some of its energy as electromagnetic waves. So that means it's losing energy, cooling itself off. Is there an error in my logic?
There are two main errors: * *Single particles don't have a temperature. Temperature is a statistical feature of bulk matter. *Single particles don't emit EM radiation when they move. Instead their energy is quantised. Under classical theories all atoms would quickly collapse as their electrons radiate all their energy away.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/644809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
If photons also have particle properties, why should they not collide with each other? Collisions between fermions are possible, as are collisions between fermions and photons (bosons), but not collisions between photons (which were described by Albert Einstein as particles in the photoelectric effect experiment). Why?
Having particle properties does not mean that a photon is a small sphere traveling at the speed of light. Collisions in quantum mechanics mean that particles interact and modify their free-particle behavior. Quantum electrodynamics predicts an electron-electron interaction, an electron-photon interaction, but also a photon-photon interaction. However, it is weaker than the electron-electron interaction and not observable in practice at usual energies and energy densities. Experimentally it is studied at high energies (see, for instance, the Wikipedia page on two-photon physics).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/644911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 3 }
When I walk down the stairs where does my potential energy go? When I leave my room I walk down three flights of stairs releasing about 7kJ of potential energy. Where does it go? Is it all getting dispersed into heat and sound? Is that heat being generated at the point of impact between my feet and the ground, or is it within my muscles? Related question, how much energy do I consume by walking? Obviously there's the work I'm doing against air resistance, but I feel like that doesn't account for all the energy I use when walking.
There is a nice answer by @annav; I would like to add something none of the other answers mention, that is atmospheric pressure. However small the change in atmospheric pressure is when you walk down the stairs, it does change, and it means that as you descend, your body must take more pressure. At low altitudes above sea level, the pressure decreases by about 1.2 kPa (12 hPa) for every 100 metres. https://en.wikipedia.org/wiki/Atmospheric_pressure As the atmospheric pressure rises (when you walk down the stairs), your body needs to act against it (so your body needs to adjust accordingly), and this needs energy from the body.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/645226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88", "answer_count": 9, "answer_id": 5 }
Conventions for graded wedge product in supergeometry There are two conventions for the graded exterior product on superspace (see https://ncatlab.org/nlab/show/signs+in+supergeometry): $$\alpha \wedge \beta = (-1)^{pq+|\alpha||\beta|}\beta\wedge\alpha \;(\text{Deligne})\tag{1}$$ $$\alpha \wedge \beta = (-1)^{(p+|\alpha|)(q+|\beta|)}\beta\wedge\alpha \;\;(\text{Bernstein})\tag{2}$$ for a $p$-form $\alpha$ and a $q$-form $\beta$, while $|\cdot|$ represents the Grassmann parity. On the other hand, there are two possibilities for the exterior derivative, depending on whether or not it changes the parity of forms. Thus, for supercoordinates $z^A$ we have: $$ |d|=0 \;\Rightarrow \; |dz^A|=|z^A|, \quad |d|=1 \;\Rightarrow \; |dz^A|=|z^A|+1.\tag{3} $$ I read in this and this references that the cases $|d|=0$ and $|d|=1$ correspond to the Deligne and Bernstein conventions, respectively; but for the $|d|=1$ case it seems not to hold if we consider $\alpha=dz^A$ and $\beta=dz^B$, since for this case $p=1=q$ and $|\alpha|=|z^A|+1,$ $|\beta|=|z^B|+1$ and thus we get $$dz^A\wedge dz^B = (-1)^{|z^A||z^B|}dz^B\wedge dz^A ,\tag{4}$$ meanwhile in the literature we commonly found: $$ \;dz^A\wedge dz^B = (-1)^{1+|z^A||z^B|}dz^B\wedge dz^A \;\;(\text{Deligne}) \tag{5} $$ $$\;dz^A\wedge dz^B = (-1)^{(1+|z^A|)(1+|z^B|)}dz^B\wedge dz^A \;\;(\text{Bernstein}).\tag{6} $$ What am I misunderstanding?
TL;DR: The confusion seems to be that the notation $|\cdot|$ somewhat misleadingly sometimes denotes the Grassmann-grading and sometimes the total grading. * *In the Deligne convention $|d|_g=0$, the form-degree $|\cdot|_f$ and the Grassmann-grading $|\cdot|_g$ are two independent gradings. This leads to eqs. (1) & (5). *In the Bernstein convention $|d|=1$, the total grading $$|\cdot|~=~|\cdot|_f+|\cdot|_g$$ (which determines the Koszul/permutation sign) is the sum of the form-degree $|\cdot|_f$ and the Grassmann-grading $|\cdot|_g$. This leads to eqs. (2) & (6). If this sounds crazy, then consider the following sanity-check: Go to the purely bosonic case with no Grassmann-odd objects. Then the Deligne & Bernstein conventions should preferably collapse to the same standard exterior algebra for differential forms.
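As a toy check of that sanity check, one can encode each sign rule as plain parity arithmetic (the encoding of a form as a pair (form degree $p$, Grassmann parity $g$) is my own bookkeeping, not from the nLab page):

```python
def deligne_sign(p, ga, q, gb):
    # eq. (1): form degrees and Grassmann parities enter independently
    return (-1) ** (p * q + ga * gb)

def bernstein_sign(p, ga, q, gb):
    # eq. (2): the total degree p + g determines the Koszul sign
    return (-1) ** ((p + ga) * (q + gb))

# purely bosonic case: both conventions collapse to the ordinary (-1)**(p*q)
bosonic_agree = all(
    deligne_sign(p, 0, q, 0) == bernstein_sign(p, 0, q, 0) == (-1) ** (p * q)
    for p in range(4) for q in range(4)
)

# one-forms dz^A, dz^B with Grassmann parities ga, gb: eqs. (5) and (6)
deligne_oneforms = [deligne_sign(1, ga, 1, gb) for ga in (0, 1) for gb in (0, 1)]
bernstein_oneforms = [bernstein_sign(1, ga, 1, gb) for ga in (0, 1) for gb in (0, 1)]
```

Setting all Grassmann parities to zero, both rules collapse to the ordinary $(-1)^{pq}$, while for one-forms $dz^A$, $dz^B$ they reproduce the sign tables of eqs. (5) and (6).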
{ "language": "en", "url": "https://physics.stackexchange.com/questions/645308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simultaneity of timelike separated events It can be easily shown that if we have two time-like separated events which are not simultaneous in one frame then they cannot be made simultaneous by Lorentz transformation. But, if those events are simultaneous in one frame then they can be made non-simultaneous by Lorentz transform, which contradicts the previous statement because we can Lorentz transform back to the original frame making it simultaneous again? Am I missing something?
if we have two time-like separated events...But, if those events are simultaneous in one frame then they can be made non-simultaneous by Lorentz transform If two events are both time-like separated and simultaneous then the two events have the same spacetime coordinates and will be simultaneous in all frames. Furthermore, if the two events are simultaneous in one frame but not in another then the events cannot be time-like separated. So there is no contradiction. In the second scenario you either have to drop time-like separation or drop that there is a frame where the events are not simultaneous; you can't keep both.
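A quick numeric illustration of this (a hedged sketch in units with $c=1$; the event separations are made up): take two timelike-separated events with $\Delta t = 1$, $\Delta x = 0.5$, and scan boosts $v \in (-1,1)$. The transformed separation $\Delta t' = \gamma(\Delta t - v\,\Delta x)$ would vanish only for $v = \Delta t/\Delta x = 2$, which is superluminal, so no frame makes the events simultaneous; the smallest attainable $|\Delta t'|$ is the proper time $\sqrt{\Delta t^2 - \Delta x^2}$.

```python
import math

dt, dx = 1.0, 0.5            # timelike: dt**2 - dx**2 = 0.75 > 0

# |dt'| = gamma * |dt - v*dx| over a grid of boost velocities v in (-1, 1)
min_abs_dt = min(
    abs(dt - v * dx) / math.sqrt(1 - v * v)
    for v in (i / 1000 for i in range(-999, 1000))
)

proper_time = math.sqrt(dt ** 2 - dx ** 2)   # the invariant lower bound
```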
{ "language": "en", "url": "https://physics.stackexchange.com/questions/645407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Shor's algorithm entanglement verification I would like to ask whether the entanglement verification is necessary in Shor's algorithm. In the paper Nature Photon. 6, 773–776 (2012), they mentioned that they tried to factorize 21 to avoid the entanglement verification, but I'm not sure why this verification can be avoided. Are there any certain numbers that don't require the entanglement verification process? If so, could you please let me know the reason?
In the abstract of this paper they say The algorithmic output is distinguishable from noise, in contrast to previous demonstrations. The previous demonstrations seem to be claims to have factored $15$. Shor's algorithm works (slowly) even if your "quantum" computer has a coherence time of zero. The reason is that it runs the quantum subroutine in a loop, classically checking the output until it finds a factor. If the qubits are effectively classical (measured after every gate operation), then the output of the quantum subroutine will be uniformly random, and eventually it will be correct. If you're factoring a large number, then the algorithm terminating in less than the age of the universe proves that your quantum computer is working. If you're factoring a tiny number that could be factored instantaneously by trial division, then Shor's algorithm terminating by itself proves nothing. You need some other evidence that the computation is really quantum, such as the frequency distribution of different outputs. It's somewhat moot, though, because neither this paper nor the previous demonstrations actually implemented Shor's algorithm. Their quantum computers couldn't even store the number they claimed to factor, much less do modular exponentiation. There may have been merit to the experiments from an engineering perspective, but tying them to Shor's algorithm was just clickbait. See Smolin, Smith and Vargo, "Oversimplifying quantum factoring" and Dattani and Bryans, "Quantum factorization of 56153 with only 4 qubits".
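To make the role of the classical wrapper concrete, here is a hedged sketch in which the quantum subroutine is replaced by brute-force classical order finding (so nothing quantum is simulated; only Shor's classical post-processing loop for $N=21$ is shown):

```python
import math
import random

def find_order(a, n):
    """Classically find the multiplicative order r of a mod n
    (this is the step a quantum computer would speed up)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, rng=random.Random(0)):
    """Classical post-processing loop of Shor's algorithm."""
    while True:
        a = rng.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:              # lucky guess: a already shares a factor with n
            return g
        r = find_order(a, n)
        if r % 2:              # odd order: draw another base
            continue
        y = pow(a, r // 2, n)
        if y == n - 1:         # trivial square root of 1: retry
            continue
        return math.gcd(y - 1, n)

factor = shor_classical(21)
```

The point is that the outer loop keeps drawing random bases $a$ and checks each candidate factor classically, so even a subroutine returning garbage would eventually let the loop terminate on a tiny $N$; that is why terminating on $N=21$ by itself proves nothing about the hardware.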
{ "language": "en", "url": "https://physics.stackexchange.com/questions/645525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Group velocity is zero at zone boundaries The E-k diagram derived in the Kronig-Penney model shows that the slope of the curve is zero at all zone boundaries as well as at k=0, which makes the group velocity zero at these points. Why is it so? Why is the group velocity zero at these points?
At the boundaries of the Brillouin zone ($k = n\pi/a$), the electron wavelength satisfies the Bragg condition $2a = n\lambda$. Therefore, strong reflected waves are generated and they mix with the traveling wave. Eventually, the waves become standing waves, which have zero group velocity.
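As a concrete one-band illustration (the tight-binding dispersion $E(k) = -2t\cos(ka)$ is assumed here as a stand-in for a Kronig-Penney band; it has the same qualitative shape): the group velocity $v_g = \frac{1}{\hbar}\frac{dE}{dk}$ vanishes at $k=0$ and at the zone boundary $k=\pi/a$.

```python
import math

t, a = 1.0, 1.0   # hopping energy and lattice constant (assumed units)

def energy(k):
    # one-band tight-binding dispersion
    return -2 * t * math.cos(k * a)

def group_velocity(k, h=1e-6):
    # central difference for dE/dk, with hbar set to 1
    return (energy(k + h) - energy(k - h)) / (2 * h)

v_center = group_velocity(0.0)              # zone centre
v_boundary = group_velocity(math.pi / a)    # zone boundary
v_mid = group_velocity(math.pi / (2 * a))   # maximal |v_g| inside the zone
```

Analytically $v_g = 2ta\sin(ka)$ here (with $\hbar=1$), which is zero exactly where the standing waves form.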
{ "language": "en", "url": "https://physics.stackexchange.com/questions/645660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does motional EMF depend on the shape of the path taken, or on the shortest distance between two given points? A semi-circular conducting wire of radius 2 m is rotated in a uniform magnetic field $B=0.1\text{ T}$ ($\vec k$) about point $O$ with angular speed $\omega=10\text{ rad/s}$ as shown in the figure. The axis of rotation is parallel to $B$. Find the magnitude of the potential difference between point $M$ and point $N$. My attempt: I tried to perform the integration along each arc. I took the velocity of a small element and tried to write $\mathrm dE=(\vec v \times \vec B)\cdot\mathrm d\vec l$, but it doesn't prove beneficial, as the direction changes at every point. But in the given solution, they assumed straight lines between ($O$ and $M$) and ($O$ and $N$) respectively, and treated those parts as rods rotating about $O$, but I can't understand this approach. Can anyone please explain the concepts we can use in this question, or any alternate approaches?
The given solution uses the fact that a closed loop that encloses a constant amount of magnetic flux has no e.m.f. Let $\alpha$ denote the path $\overset{\Huge\frown}{OM}$ and $\beta$ the straight line $\overline{OM}$. By Faraday's law, $$ \oint \mathbf{E}\cdot \text{d}\mathbf{r} = \int_\alpha \mathbf{E}\cdot \text{d}\mathbf{r} - \int_\beta \mathbf{E}\cdot \text{d}\mathbf{r} = V_{\overset{\Huge\frown}{OM}} - V_{\overline{OM}} = -\frac{\text{d}\Phi_\mathbf{B}}{\text{d}t} = 0.$$ Hence, for the purposes of calculating the potential difference between $M$ and $N$, the curved wire $\overset{\Huge\frown}{OM}$ can be replaced by the straight wire $\overline{OM}$. The same reasoning applies for the arc $\overset{\Huge\frown}{ON}$.
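A small numeric cross-check of the replacement-rod idea, using the problem's numbers $B = 0.1$ T and $\omega = 10$ rad/s (the rod length $L = 4$ m is an assumption standing in for the straight distance $OM$ in the figure): summing $\mathrm dE = (\vec v \times \vec B)\cdot \mathrm d\vec l = \omega r B\,\mathrm dr$ along the rod reproduces the closed form $V = \frac{1}{2}B\omega L^2$.

```python
B = 0.1       # tesla
omega = 10.0  # rad/s
L = 4.0       # metres -- assumed straight-line distance OM

# Riemann (midpoint) sum of dE = omega * r * B * dr along the rod
n = 100_000
dr = L / n
emf_numeric = sum(omega * (i + 0.5) * dr * B * dr for i in range(n))

emf_closed = 0.5 * B * omega * L ** 2   # = 8.0 V for these numbers
```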
{ "language": "en", "url": "https://physics.stackexchange.com/questions/645742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Explain the theory behind this problem (if possible using a number line, graph, or any other method of pictorial representation) In a situation in which data are known to three significant digits, we write 6.379 m = 6.38 m and 6.374 m = 6.37 m. When a number ends in 5, we arbitrarily choose to write 6.375 m = 6.38 m. We could equally well write 6.375 m = 6.37 m, “rounding down” instead of “rounding up,” because we would change the number 6.375 by equal increments in both cases. Now consider an order-of-magnitude estimate, in which factors of change rather than increments are important. We write 500 m ~ $10^{3}$ m because 500 differs from 100 by a factor of 5 while it differs from 1000 by only a factor of 2. We write 437 m ~ $10^{3}$ m and 305 m ~ $10^{2}$ m. What distance differs from 100 m and from 1000 m by equal factors, so that we could equally well choose to represent its order of magnitude as $10^{2}$ m or as $10^{3}$ m?
You are looking for a value $x$ such that $\displaystyle \frac x {100} = \frac{1000} x \\ \Rightarrow x^2 = 10^5 \\ \Rightarrow x = \sqrt{10^5} = 100\sqrt{10} \approx 316$
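In other words, the answer is the geometric mean of 100 and 1000. A two-line check:

```python
x = (100 * 1000) ** 0.5   # geometric mean of 100 and 1000, about 316.2

ratio_up = 1000 / x       # factor by which x falls short of 1000
ratio_down = x / 100      # factor by which x exceeds 100
```

Both ratios come out equal (about 3.16), confirming that roughly 316 m sits the same factor away from $10^2$ m and $10^3$ m.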
{ "language": "en", "url": "https://physics.stackexchange.com/questions/645967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If an antimatter-matter reaction is a perfect transfer of matter to energy (light), is there a perfect transfer of energy to matter? I am aware that two photons can combine to form an electron and positron pair, but if any matter and antimatter can annihilate to form photons, shouldn't there be a way for photons, or any energy for that matter, to turn back into quarks other than electrons? EDIT FOR CLARIFICATION: I am simply wondering if photons, or any energy for that matter, can create matter other than electrons/positrons. Specifically, I am wondering if photons could form antiquarks/antiprotons.
Gamma rays are photons of very large energy. Technology has advanced to the point of talking of gamma gamma colliders, so yes, the electromagnetic energy will be used to create a lot of elementary particles for study. Photon beams can be made so energetic and so intense that when brought into collision with each other they can produce copious amounts of elementary particles. ... For example: a GAMMA-GAMMA collider is uniquely suited for a direct measurement of the partial decay width of a Higgs boson into two gamma quanta. The decay amplitude involves loops of any charged particles whose mass is derived from the Higgs mechanism. A measurement of the two-photon width would be a sensitive test of various models predicting higher mass particles without producing them directly in accelerators. There are several such models with two-photon couplings different from that of the Standard Model: supersymmetric models, techni-color models, and other extensions of the Standard Model
{ "language": "en", "url": "https://physics.stackexchange.com/questions/646068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do sound waves cause air molecules to oscillate? I would like to clear up some confusion about the mechanics of air particles that are propagating a sound wave. I understand that there is no net movement of air molecules when a sound wave passes through air. Instead, the particles oscillate and the wave is propagated through various elastic collisions between air molecules which cause the compression to keep moving forward. What I don’t understand is how the air molecules move back to approximately their original position after colliding with the other particles to keep the wave moving further. Doesn’t this seemingly violate the laws of conservation of momentum? (This can’t be the case since sound exists) If a particle hits its neighbor, and that neighbor molecule now has the momentum from the wave to keep moving forward, how can the original particle have the momentum to move backward and to its original place. I would also like to clarify that I understand gas molecules have their own random motion in addition to the wave motion and was wondering if this had something to do with the aforementioned phenomena.
Sound waves are referred to as pressure waves, and if you understand this, it should answer your question. When sound waves propagate, they form alternate regions of high pressure, called compressions, and low pressure, called rarefactions in the air. The air molecules move toward and away from these regions as the wave propagates. It is not that momentum is not conserved (momentum is always conserved), but more about the motion of particles moving into lower pressure regions, and those that move away from higher pressure regions, as the wave travels through the air. Also, the motion of individual gas molecules in the air is pretty much random, but as explained above, as the sound waves travel through the air, there is a collective motion of gas molecules in the volume surrounding the vibrating sound wave.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/646174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Why is the difference of points not a valid definition for a vector in curved space? On page 49 of MTW (1973 edition), the following picture is shown: After seeing this picture, the question which arose in my head is: why exactly can we not define a vector as a difference of points in curved space?
The short answer is that vectors must satisfy the eight axioms of a vector space in order to qualify as vectors. Flat space is formally known as affine space, which I have described in my answer here. In affine space, one can subtract two points to obtain a vector. By defining an origin and position vectors, one can also make points behave like vectors, therefore making it a vector space. However, on a manifold, the situation is quite different. Position vectors can no longer be defined because there is no way to make points add like vectors. Even if we smoothly label the points using coordinates $x^1, \ldots ,x^n$, where $n$ is the dimension, there is simply no way to choose coordinates such that points obey the vector space axioms. This is why the "difference of two points" is no longer a meaningful definition of a vector. Conversely, if the points in our manifold do end up behaving like vectors, our manifold is an affine space. However, the partial derivatives along all smooth curves passing through a point on the manifold do form a vector space, which is the tangent space. I believe this answer on Mathematics.SE (which I personally upvoted) gives a really good summary of the situation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/646316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If $E=mc^2$, then why do different substances have different calorific values? Today during a classroom discussion, I realised that if we consider the equation $E=mc^2$, then we are establishing a relation between energy and mass, but we often observe that different substances produce different amounts of energy when they are burned. For example: burning a kilogram of wood will not produce the same amount of energy as burning a kilogram of petrol.
If you converted 1 pound of fuel completely into energy it would be the same as converting 1 pound of wood completely into energy. But when you burn wood, you get leftover mass in two forms: smoke and ashes. When you burn gas, you get leftover mass in exhaust ($CO_2$, $CO$, etc.) The energy you get from burning a substance is: $E = c^2(m_{substance} - m_{leftovers})$ But if you convert the whole thing to energy so that there's no mass left then you get $E = mc^2$
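To put numbers on how tiny the converted mass is in ordinary burning (the heat of combustion used here, roughly 46 MJ per kilogram of gasoline, is an assumed textbook figure, not something from the question):

```python
c = 299_792_458.0          # speed of light, m/s

E_burn = 46e6              # J released burning 1 kg of gasoline (assumed ~46 MJ/kg)
m_lost = E_burn / c ** 2   # mass deficit carried away by the combustion products

E_total = 1.0 * c ** 2     # J if the whole kilogram were converted to energy
fraction = m_lost / 1.0    # fraction of the fuel mass actually "used up"
```

So only about half a nanogram per kilogram of fuel is actually converted to energy; the remaining $\sim 10^{17}$ J stays locked up as the rest mass of the combustion products.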
{ "language": "en", "url": "https://physics.stackexchange.com/questions/646875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 3 }
Functional integrations We often see functional versions of Gaussian integrations $$ \int_{-\infty}^{\infty} d^dx\, e^{-x^{T}Mx} = \sqrt{\frac{\pi^d}{\det M}} \to \int[\mathcal{D}X] e^{-i\int X \mathcal{O}X} = (\det{\mathcal{O}})^{-1}$$ where $\mathcal{O}$ is some hermitian operator. The map to convert a Gaussian integral into a path integral is very obvious here $$x \to X\\M \to \mathcal{O} $$ Similarly, is there a way to convert the following integral $$\int^{\infty}_{0} dt~ e^{-st} = \frac{1}{s}$$ into a functional integral (or path integral over fields)?
The second integral's functional equivalent, say \begin{equation} \int \mathcal{D}X e^{-i \int \mathcal{O} X} \end{equation} is just a Lagrange multiplier on the operator $\mathcal{O}$, so you'll just get a delta functional $\delta[\mathcal{O}]$. If you'd like to make this into a more precise answer, I suggest having a look into some algebraic QFT. Also, this is a similar question to this entry.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/647089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing the metric tensor from its Killing vectors? On page 139 of Carroll's GR book, during the discussion of Killing vectors, he quotes an explicit coordinate basis representation for the Killing vectors on $S^2$: \begin{array}{l} R=\partial_{\phi} \\ S=\cos \phi\,\partial_{\theta}-\cot \theta \sin \phi\,\partial_{\phi} \\ T=-\sin \phi\,\partial_{\theta}-\cot \theta \cos \phi\,\partial_{\phi} . \tag{3.188} \end{array} I am trying to understand why one can go backwards, and compute the metric tensor components from the Killing vectors. It is straightforward to show that \begin{align} g^{\mu \nu} = \sum_i K_i^\mu K_i^\nu \end{align} where $\mu, \nu = \theta, \phi$ and $i = 1, 2, 3$ corresponds to the $R, S, T$ Killing vectors, respectively. I can't find a discussion of this sort of calculation of $g^{\mu \nu}$ from Killing vectors in Carroll's book, or anywhere else really. Does this always work? Is there an intuitive physical reason why it should work? I was trying to show that it is true using Killing's equation, but this was unsuccessful.
You're going to need some sort of extra assumption in order to make this idea work, because not every spacetime even has a single global Killing vector, much less enough Killing vectors to span the spacetime.
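That said, the specific $S^2$ contraction in the question does check out numerically, at least for the unit two-sphere with the generators (3.188); this is a special feature of maximally symmetric spaces rather than a general recipe. Evaluating $\sum_i K_i^\mu K_i^\nu$ at an arbitrary sample point reproduces the inverse round metric $g^{\theta\theta}=1$, $g^{\phi\phi}=1/\sin^2\theta$:

```python
import math

theta, phi = 0.7, 1.3   # arbitrary sample point on the sphere

# (K^theta, K^phi) components of R, S, T from eq. (3.188)
R = (0.0, 1.0)
S = (math.cos(phi), -math.sin(phi) / math.tan(theta))
T = (-math.sin(phi), -math.cos(phi) / math.tan(theta))

K = [R, S, T]
g_inv = [[sum(k[mu] * k[nu] for k in K) for nu in range(2)] for mu in range(2)]

# expected inverse metric of the unit round sphere
expected = [[1.0, 0.0], [0.0, 1.0 / math.sin(theta) ** 2]]
```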
{ "language": "en", "url": "https://physics.stackexchange.com/questions/647196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Quantum volume required to break a 2048 bit number used in RSA encryption I am curious as to whether there is a specific amount of quantum volume that will allow a quantum computer to break a 2048 bit number used in RSA encryption, and if so, what that number is. (within a realistic time frame of less than 1 hour) Thanks
Performing integer factorization on a quantum computer successfully depends mainly on the number of available qubits and their quality (low noise and long decoherence time). Of course, quantum volume is linked to these two parameters. According to this article, discussed here, some millions of qubits are necessary to break a 2,048-bit RSA key.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/647336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can you exit the event horizon with a rocket? The reason given in most places about why one cannot escape out from an event horizon is the fact that the escape velocity at the event horizon is equal to the speed of light, and no one can go faster than speed of light. But, you don't really need to reach the escape velocity to get away from a massive object like a planet. For example, a rocket leaving earth doesn't have escape velocity at launch, but it still can get away from earth since it has propulsion. So, if a rocket is just inside the event horizon of a black hole, it doesn't need to have the escape velocity to get out, and it should at least be able to come out of the event horizon through propulsion. Also, if the black hole is sufficiently large, the gravitational force near the event horizon will be weaker, so a normal rocket should be able to get out easily. Is this really theoretically possible? If it was just the escape velocity being too high was the problem of getting out, I don't see any reason why a rocket cannot get out. This is a similar question, but my question is not about a ship with Alcubierre drive.
The force of gravity is small. But it's a small force on a thing that contains zero energy. So the force of gravity is infinite in a sense. The rocket lost all of its energy when it was lowered to the event horizon. It may be worth mentioning that the rocket motor burns fuel at an infinitely slow rate, as seen by a distant observer. That has some effect on how much force the rocket generates.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/647487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48", "answer_count": 6, "answer_id": 4 }
Quantum mechanics and logical statements I am a math student and currently working on my bachelor thesis with a philosophy professor. The subject is paraconsistency and thus also dialetheism, which is the belief that a statement can be true and false at the same time. I had a general/introductory course on QM, and one experiment I recall is the Stern-Gerlach experiment, where you measure spin with a magnetic field. Consider the statement (I choose $\varphi$, $J$ to specify an atom and measurement; is that a good idea?): The next specific silver atom $\varphi$ I send through the Stern-Gerlach experiment has spin up during that specific measurement $J$. As I understand it, the validity of this statement can only be determined after executing the experiment. Because of our understanding of QM and the currently accepted theory, this is inherently probabilistic and not predictable? What do you think about the formulation of the statement? Do you think I can say that the above statement is neither true nor false (or arguably both) before actually sending the atom through the experiment? Or do I have some misunderstanding about QM and what statements you can make about it?
There's a difference between paraconsistent and quantum logics (there's more than one of each, and there is some overlap). In paraconsistent logics, the explosion theorem $(p\land\neg p)\to q$ fails in general. In quantum logics, the distributive laws $p\land(q\lor r)=(p\land q)\lor(p\land r)$ and $p\lor(q\land r)=(p\lor q)\land(p\lor r)$ fail. Both are weaker than classical logic, and classical logic and the strongest quantum logic need to be somewhat weakened to obtain something paraconsistent. Critics of quantum logic have argued that classical logic is adequate to describe quantum phenomena. The easiest way to do this is to phrase everything in terms of eigenstates rather than the values of classical variables. We thereby consider an observable's value undefined in states which aren't eigenstates of its associated operator on the Hilbert space.
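A standard counterexample to distributivity can even be checked mechanically. In the lattice of closed subspaces of $\mathbb{R}^2$ (with $\wedge$ = intersection and $\vee$ = closed span), take three distinct lines through the origin; the toy encoding below is my own, with the lattice operations hard-coded for this small case:

```python
ZERO, PLANE = "0", "R2"

def meet(a, b):
    """Intersection of two closed subspaces of R^2."""
    if a == b:
        return a
    if a == PLANE:
        return b
    if b == PLANE:
        return a
    return ZERO          # two distinct lines (or anything with ZERO) meet in {0}

def join(a, b):
    """Closed span of two subspaces of R^2."""
    if a == b:
        return a
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return PLANE         # two distinct lines span the whole plane

# three distinct lines through the origin: x-axis, y-axis, diagonal
p, q, r = "x", "y", "diag"

lhs = meet(p, join(q, r))            # p AND (q OR r) = p AND R2 = p
rhs = join(meet(p, q), meet(p, r))   # (p AND q) OR (p AND r) = 0 OR 0 = 0
```

With $p$ the x-axis, $q$ the y-axis and $r$ the diagonal, $p\wedge(q\vee r) = p$ while $(p\wedge q)\vee(p\wedge r) = 0$, so the distributive law fails.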
{ "language": "en", "url": "https://physics.stackexchange.com/questions/647760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Expansion of $\Gamma(-\epsilon)$ and $\Gamma(1-\epsilon)$ I'm currently trying to do some loop calculations in QFT and have come to a point where I need to expand a product of $\Gamma$-functions. Let $\epsilon$ be the parameter introduced in the $\overline{\text{MS}}$-regularization scheme that we want to let go to zero in the end. I'm aware of the usual expansions $$ \Gamma(\epsilon)\approx \frac{1}{\epsilon}-\gamma_{EM}+\mathcal{O}(\epsilon) \quad\text{and}\quad \Gamma(-1+\epsilon)\approx -\frac{1}{\epsilon}+\gamma_{EM}-1+\mathcal{O}(\epsilon). $$ What I need are slightly different variations of this, i.e. $\Gamma(-\epsilon)$ and $ \Gamma(1-\epsilon)$. According to this paper there exists an identity $$\Gamma(-x)=-\frac{\Gamma(1-x)}{x}, $$ which would solve the problem. Unfortunately, I can't find another source that confirms this identity or that states what the constraints on $x$ are, if there are any... Does someone know of a proof of this or another source?
$\Gamma(1-\varepsilon)$ is the easier one since we know that $\Gamma(n)=(n-1)!$ so $\Gamma(1)=0!=1$; so $$\Gamma(1-\varepsilon)=1-\varepsilon\Gamma'(1).$$ The derivative of Gamma function is related to another special function called $\psi$ function defined by $$\psi(x)=\frac{\Gamma'(x)}{\Gamma(x)}=(\ln\Gamma(x))'.$$ By consulting some tables you will find that $$\psi(1)=-\gamma;$$ so $$\Gamma(1-\varepsilon)=1+\gamma\varepsilon.$$ While for $\Gamma(-\varepsilon)$, the formula you found in other's paper is nothing else but basic properties of Gamma function $$\Gamma(z+1)=z\Gamma(z).$$ Or you can understand this from Euler's reflection formula $$\Gamma(1-z)\Gamma(z)=\frac{\pi}{\sin\pi z}.$$ A proof for this formula can be found in Wikipedia.
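Both expansions, and the recurrence-derived identity, can be sanity-checked numerically with the standard library's `math.gamma`:

```python
import math

gamma_em = 0.5772156649015329   # Euler-Mascheroni constant
eps = 1e-4

# Gamma(1 - eps) ≈ 1 + gamma_em * eps
lhs1 = math.gamma(1 - eps)
rhs1 = 1 + gamma_em * eps

# Gamma(-x) = -Gamma(1 - x)/x, from Gamma(z + 1) = z*Gamma(z) with z = -x
x = 0.3
lhs2 = math.gamma(-x)
rhs2 = -math.gamma(1 - x) / x

# hence Gamma(-eps) ≈ -1/eps - gamma_em
lhs3 = math.gamma(-eps)
rhs3 = -1 / eps - gamma_em
```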
{ "language": "en", "url": "https://physics.stackexchange.com/questions/647828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is everything not invisible if 99% of space is empty? If every object is 99% empty space, how is reflection possible? Why doesn't light just pass through? Also, light travels in a straight line, doesn't it? The wave nature doesn't say anything about its motion. And does light reflect after striking an electron, an atom, or what?
It depends on what you are calling "empty". Electrons are point particles. Or maybe they are not, but their size has been probed down to $10^{-22}$ m and found to be less than this already very small value. Atomic nuclei are not points. They are made of protons and neutrons, which are not points either. But protons and neutrons are made of quarks that ARE point particles in the same sense the electron is. That's why one can conclude that not 99%, but an honest 100% of the space is empty. Then again, what makes matter non-transparent (not only to light, but to other particles as well) are the interactions. Light interacts with charged particles (electrons in the first place), and this is what makes some substances opaque, reflective or whatever. In the quantum world, one gets used to the notion of the "interaction cross-section". This is not a real size of anything. It is the probability of an interaction expressed in terms of a virtual "area" of a target particle that would give the same probability of interaction if the particles were "classic" projectiles and targets. The "classic" analogy ends when one discovers that the interaction cross-section depends in a very complex manner on the energy of the particles, as well as on the interaction type in the first place.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/648041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 12, "answer_id": 3 }
Can I apply the Born rule to a Dirac spinor? How does a Dirac spinor such as: $$ \psi = \pmatrix{a_0+ib_0\\a_1+ib_1\\a_2+ib_2\\a_3+ib_3} $$ connect to a probability? Can one apply the Born rule to this object?
The Born rule, in its simplest form, tells us that the probability (density) of finding a particle at a given point is the square of the particle's wave function at that point. That is, $$\rho = \mid \psi (x,t)\mid^2$$ which is always a positive real number. As such, the Dirac spinor is not a wave function. They do not even represent traditional states in a Hilbert space. By convention, spinors are taken to be the elements of a representation space of the Dirac algebra. But anyway, from the fermion 4-current $$j^{\mu}=\bar \psi\gamma^{\mu}\psi$$ and if we take the $\mu=0$ component, we define the probability density as $$j^0=\rho=\bar \psi\gamma^{0}\psi = \psi^{\dagger} \psi$$ and this quantity is not always positive definite.
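A small numeric illustration in the Dirac basis, where $\gamma^0 = \mathrm{diag}(1,1,-1,-1)$ (the sample spinor components below are arbitrary): since $(\gamma^0)^2 = 1$, the density $j^0 = \bar\psi\gamma^0\psi = \psi^\dagger\psi$ is a sum of squared moduli and hence positive, while the Lorentz scalar $\bar\psi\psi = \psi^\dagger\gamma^0\psi$ can have either sign.

```python
# arbitrary components a_i + i b_i of a Dirac spinor
psi = [0.5, 0.2j, 2 + 1j, -1.5j]

# gamma^0 in the Dirac basis: diag(1, 1, -1, -1)
g0 = [1, 1, -1, -1]

# psi-bar = psi^dagger gamma^0, and (gamma^0)^2 = 1, so:
rho = sum(abs(z) ** 2 for z in psi)                     # psi-bar gamma^0 psi = psi^dag psi
scalar = sum(s * abs(z) ** 2 for s, z in zip(g0, psi))  # psi-bar psi
```

For this particular spinor, `rho` comes out to about 7.54 while `scalar` is negative, so only the former can serve as a probability density.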
{ "language": "en", "url": "https://physics.stackexchange.com/questions/648202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 4 }
Magnetic field induced by a rotating charged disk Let's assume that there is a disk of total electric charge $Q$ rotating about its axis with a constant angular velocity $\vec{\omega}$. I know that one can easily compute the magnetic field generated on the axis of the rotation of the disk, which is possible due to the symmetry of the problem. My question is the following: Is there any way to use the Biot-Savart law (or any other method) in order to compute the magnetic field on an arbitrary point on the plane of the disk itself? I would, probably naively, think of somehow reducing the problem to a $2D$ one, but fail to implement it.
Your rotating charged disk can be thought of as many concentric rings of current. Each ring can be thought of as many short elements of current, each of which contributes to the magnetic field at any chosen point as predicted by Biot-Savart. Not being a mathematician, if I had to do this I would let a computer do a numeric summation. (Decades ago, I did a similar calculation for selected points within the field of a Helmholtz coil.)
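A minimal sketch of such a numeric summation (pure Python; the disk parameters are invented, and the field point is taken in the plane but outside the disk so the integrand stays finite). Each ring of radius $a$ carries current $\mathrm dI = \sigma\omega a\,\mathrm da$, and Biot-Savart for one ring at an in-plane distance $d$ from the centre gives $B_z = \frac{\mu_0\,\mathrm dI}{4\pi}\oint \frac{a(a-d\cos\varphi)}{(d^2+a^2-2ad\cos\varphi)^{3/2}}\,\mathrm d\varphi$; the rings are then summed. As a check, far away the result should approach the equatorial field of a dipole with moment $m = Q\omega R^2/4$.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def bz_ring(current, a, d, nphi=720):
    """B_z from one current loop of radius a, at an in-plane point
    a distance d from its centre (midpoint rule in phi; needs d != a)."""
    dphi = 2 * math.pi / nphi
    total = 0.0
    for k in range(nphi):
        phi = (k + 0.5) * dphi
        s2 = d * d + a * a - 2 * a * d * math.cos(phi)
        total += a * (a - d * math.cos(phi)) / s2 ** 1.5 * dphi
    return MU0 * current / (4 * math.pi) * total

def bz_disk(Q, R, omega, d, nr=200):
    """Rotating uniformly charged disk, decomposed into current rings."""
    sigma = Q / (math.pi * R * R)     # surface charge density
    da = R / nr
    total = 0.0
    for i in range(nr):
        a = (i + 0.5) * da
        total += bz_ring(sigma * omega * a * da, a, d)
    return total

# invented parameters: 1 uC on a 5 cm disk spinning at 100 rad/s
Q, R, omega = 1e-6, 0.05, 100.0
d = 20 * R                            # in-plane point, far outside the disk

b_numeric = bz_disk(Q, R, omega, d)
m_dipole = Q * omega * R ** 2 / 4                     # magnetic moment of the disk
b_dipole = -MU0 * m_dipole / (4 * math.pi * d ** 3)   # equatorial dipole field
```

For points on the disk itself the ring at $a = d$ makes the integrand singular, which is where a more careful treatment (e.g. excluding a small neighbourhood of the field point) would be needed.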
{ "language": "en", "url": "https://physics.stackexchange.com/questions/648551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do Newton's laws of motion hold true in non-inertial frames of reference? My book derived the formula for the acceleration of a rocket at any instant in the following way: $v_r=$ velocity of gas released from the nozzle relative to the rocket $dt=$ infinitesimal time interval $dm=$ mass of the gas released from the nozzle in time interval $dt$ $dP=$ momentum of the gas released in time interval $dt$ $F=$ thrust force acting directly opposite to the direction of the release of gas $M=$ mass of the rocket after time interval $dt$ We know from Newton's 2nd law, $$F=\frac{dP}{dt}$$ $$\implies F=\frac{dm}{dt}v_r$$ $$\implies a=\frac{1}{M}\frac{dm}{dt}v_r$$ In this derivation, the velocity of the released gas, $v_r$, has been calculated from the perspective of the rocket, which is constantly accelerating, making it a non-inertial frame of reference. Newton's laws don't hold true in non-inertial frames of reference, but we used Newton's 2nd law in this derivation. So, how is this derivation correct? PS: A similar derivation can be found in Fundamentals of Physics by Halliday, Walker & Resnick
The derivation that you show is not calculated from the perspective of the rocket. It is calculated from the perspective of a stationary outside observer. The calculation simply uses the relative velocity of the gas to calculate what force the rocket will experience. Newton's laws are valid in non-inertial reference frames as well (with the usual non-relativistic limitations), as long as you treat the reference frame acceleration as a gravity-like, external force on all masses. In your specific example that would not be very practical, because the purpose of the calculation is to determine the size of that external force. But you can use Newton's laws to determine what will happen to objects inside the rocket while it is being accelerated. For example, if you want to calculate the trajectory of a ball that is being tossed by an astronaut in the rocket, from the perspective of that astronaut.
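A quick numeric cross-check of the result (the rocket parameters below are invented): stepping the gravity-free acceleration $a = \frac{1}{M}\frac{dm}{dt}v_r$ forward in time as the rocket loses mass reproduces the Tsiolkovsky relation $\Delta v = v_r\ln(M_0/M_1)$ that follows from integrating it.

```python
import math

v_r = 3000.0             # exhaust speed relative to rocket, m/s (assumed)
M0, M1 = 1000.0, 400.0   # initial and final rocket mass, kg (assumed)
burn_rate = 5.0          # dm/dt, kg/s (assumed constant)

t_burn = (M0 - M1) / burn_rate
n = 200_000
dt = t_burn / n

# explicit Euler integration of a = (1/M)(dm/dt) v_r, no gravity
v, M = 0.0, M0
for _ in range(n):
    a = burn_rate * v_r / M
    v += a * dt
    M -= burn_rate * dt

delta_v_closed = v_r * math.log(M0 / M1)   # Tsiolkovsky rocket equation
```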
{ "language": "en", "url": "https://physics.stackexchange.com/questions/648688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
How does a string pull a pulley at both ends? For an Atwood's machine, why does the string exert a downwards force on the pulley that is twice its tension? I'm neither able to understand nor imagine this scenario. How does a wrapped string with tension $T$ pull a pulley downwards with a force of $T$ at each end? In short, I want to understand how the tension $T$ in the rope pulls on the pulley (please open the link; I was not able to upload the image here): https://linksharing.samsungcloud.com/cdbxRTLOi3d2
At one level the answer is trivial: the pulley is subject to two parallel forces T, so the resultant force is 2T. If, however, your question is exactly how the force is applied to the pulley, the answer is that it is the vertical component of the normal force integrated over the surface of the pulley. To see that, imagine the pulley is replaced by a square. You will easily see that a normal force exists at the top left and top right corners of the square, where the rope changes direction, and that the vertical components of those normal forces add to 2T. If you replace the square by a hexagon, you can repeat the calculation of normal forces at each of the corners where the rope changes direction, and you will see again that the vertical components add to 2T. If you like, you can repeat it for an octagon, or any polygon with an increasing number of sides, and you will find the vertical components of the normal forces at the corners always sum to 2T. The case of a circular pulley is the same arrangement taken to an infinite number of corners of infinitely short line segments.
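The polygon argument above can be run mechanically: model the rope path over the top half of an $n$-segment polygonal "pulley", and at every corner the massless rope pushes the pulley with $T(\hat d_\text{after}-\hat d_\text{before})$, where $\hat d$ are the unit directions of the rope segments. The sum telescopes, so the total is exactly $2T$ straight down for any $n$. A stdlib-only Python sketch (geometry and tension value are arbitrary illustrations):

```python
import math

T = 10.0   # rope tension (arbitrary units)

def pulley_force(n):
    """Total force on an n-segment polygonal 'pulley' whose corners lie on
    the top half of a unit circle, with the rope coming up the left side,
    over the top, and down the right side."""
    # Points along the rope path: hanging end, corners, hanging end.
    pts = [(-1.0, -10.0)]
    for k in range(n + 1):
        ang = math.pi - k * math.pi / n    # sweep from 180 to 0 degrees
        pts.append((math.cos(ang), math.sin(ang)))
    pts.append((1.0, -10.0))
    # Unit direction of each straight rope segment.
    dirs = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        dirs.append(((x1 - x0) / seg, (y1 - y0) / seg))
    # Each massless-rope corner pushes the pulley with T*(d_after - d_before).
    fx = fy = 0.0
    for (ax, ay), (bx, by) in zip(dirs, dirs[1:]):
        fx += T * (bx - ax)
        fy += T * (by - ay)
    return fx, fy

f_coarse = pulley_force(2)       # a crude two-segment wrap
f_smooth = pulley_force(1000)    # approaching a circular pulley
```

Both the crude wrap and the near-circular one give a net force of magnitude $2T$ pointing straight down, independent of the corner count.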
{ "language": "en", "url": "https://physics.stackexchange.com/questions/648806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What interval to use when proving orthogonality of wavefunctions? When proving that $\psi_1=\sin(n\pi x/a)$ and $\psi_2=\cos(n\pi x/a)$ are orthogonal to each other in a 1D box, the main problem that I am facing is what to use as the domain of integration. If I take the interval $[0,a]$ as we use in the Schrodinger wave equation, the result does not give $0$, but if I take the interval from $[-a,a]$, it satisfies the orthogonality. How do I know which interval I am to use? Is there any rule?
There is no rule, it depends on what the author of the exercise wants to use. However, the solutions $\psi_1$ and $\psi_2$ will also come out slightly differently, depending on the setting chosen in the exercise. If the solutions satisfy the Schrödinger equation for the correct potential, they will automatically come out as orthogonal. The problem you face is that you are using solutions for one potential in a different potential, so they don't actually solve your Schrödinger equation. In short: you shouldn't think about how to choose the correct potential for given solutions, but rather how to get the correct solutions for a given potential.
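The interval dependence the asker noticed is easy to see numerically: $\int \sin(n\pi x/a)\cos(m\pi x/a)\,dx$ vanishes on $[-a,a]$ because the integrand is odd, but generally not on $[0,a]$. A quick stdlib-only check with a midpoint rule (the quantum numbers $n=1$, $m=2$ are chosen just for illustration):

```python
import math

a = 1.0

def integral(f, lo, hi, n=20001):
    # composite midpoint rule, stdlib only
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def overlap(n, m, lo, hi):
    # integral of sin(n pi x / a) * cos(m pi x / a) over [lo, hi]
    return integral(lambda x: math.sin(n * math.pi * x / a)
                              * math.cos(m * math.pi * x / a), lo, hi)

on_half = overlap(1, 2, 0.0, a)   # over [0, a]: nonzero in general
on_full = overlap(1, 2, -a, a)    # over [-a, a]: zero by odd symmetry
```

Analytically `on_half` is $-2a/(3\pi)\approx-0.212$, while `on_full` is exactly zero, matching what the asker observed.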
{ "language": "en", "url": "https://physics.stackexchange.com/questions/649076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Block on an accelerating wedge In the above figure, the wedge is being accelerated towards the right as shown in the figure. According to my teacher it is possible to keep the block at rest or even accelerate it in the upward direction along the inclined plane of the wedge. To explain, he taught us about pseudo forces, stating that in the wedge frame we can assume a pseudo force to the right on the block as shown in the figure. He also said that we can do the same from the ground frame, but when I try to do it I cannot find any force that resists the motion of the block down the inclined plane.
In the laboratory frame, the equations of motion are given by $$N\cos\theta-mg=0$$ $$N\sin\theta =ma$$ In the accelerating frame, the equations of motion are given by $$N\cos\theta-mg=0$$ $$N\sin\theta-ma=0$$ Both are equivalent.
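Dividing the second equation by the first (in either frame) gives the wedge acceleration needed to hold the block at rest on the incline: $a=g\tan\theta$. A small numerical sanity check (the angle and mass are arbitrary illustrative values):

```python
import math

g = 9.81
theta = math.radians(30.0)   # incline angle (illustrative)
m = 2.0                      # block mass in kg (illustrative)

# From N cos(theta) = m g and N sin(theta) = m a:
a = g * math.tan(theta)          # required wedge acceleration
N = m * g / math.cos(theta)      # normal force from the vertical equation

# The horizontal equation should then be satisfied automatically.
horizontal_residual = N * math.sin(theta) - m * a
```

For $\theta=30^\circ$ this gives $a\approx 5.66\ \mathrm{m\,s^{-2}}$, and the residual in the horizontal equation is zero to machine precision.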
{ "language": "en", "url": "https://physics.stackexchange.com/questions/649341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Relative Circular Motion Four particles $P_1, P_2,P_3,P_4$ are moving in a plane. At $t=0$, they are at the four corners of a square $ABCD$ of edge length $l$. Each of the particles has a constant speed $v$. The velocity of $P_1$ is always directed to $P_2$, that of $P_2$ is always directed to $P_3$, that of $P_3$ is always directed to $P_4$, that of $P_4$ is always directed to $P_1$. At what time $t$ will all the four particles collide and where will they collide? Also how to calculate the angular acceleration of the line joining the particle and the final point of collision at any instant? My attempt: First I assumed for any small time $dt$, any of the one particle moves a distance $v$ $dt$. Then for an adjacent particle the change in the angle (very small) $\Rightarrow$ $tan(d\theta)=\frac{vdt}{l}$ and it gives $\omega=\frac{v}{l}$. So all the particles are rotating with this angular frequency. But now how can we use this value of angular frequency? Firstly, I am finding it difficult to visualize the situation due to which I am not able to understand exactly what concept should we apply. Secondly, can we use linear equations of motion to approach this? Also can we view the motion with respect to a particle? Found this on Wikipedia (Mice Problem/Pursuit Curve)
Hint: Try putting 4 dots on the paths after a short time. Join the 4 dots and see what happens. There is a nice simple answer to this...
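The "nice simple answer" can be seen in a direct simulation: by symmetry the four particles always sit on the corners of a shrinking, rotating square, so each target moves perpendicular to the line joining it to its pursuer, the separation shrinks at the constant rate $v$, and all four meet at the centre after $t=l/v$. A minimal Euler-step sketch (the step size and stopping threshold are arbitrary choices):

```python
import math

l, v, dt = 1.0, 1.0, 1e-4

# particles start on the corners of square ABCD; each one chases the next
p = [(0.0, 0.0), (l, 0.0), (l, l), (0.0, l)]
t = 0.0
while True:
    # separation between particle 0 and its target, particle 1
    d01 = math.hypot(p[1][0] - p[0][0], p[1][1] - p[0][1])
    if d01 < 1e-3:
        break
    new = []
    for i in range(4):
        x, y = p[i]
        tx, ty = p[(i + 1) % 4]
        d = math.hypot(tx - x, ty - y)
        new.append((x + v * dt * (tx - x) / d,
                    y + v * dt * (ty - y) / d))
    p = new
    t += dt

t_analytic = l / v   # separation shrinks at the constant rate v
```

The simulated time agrees with $l/v$ to within the step size, and the meeting point is the centre of the square, $(l/2,\,l/2)$.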
{ "language": "en", "url": "https://physics.stackexchange.com/questions/649446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Does real life have "update lag" for mirrors? This may sound like a ridiculous question, but it struck me as something that might be the case. Suppose that you have a gigantic mirror mounted at a huge stadium. In front, there's a bunch of people facing the mirror, with a long distance between them and the mirror. Behind them, there is a man making moves for them to follow by looking at him through the mirror. Will they see his movements exactly when he makes them, just as if they had been simply facing him, or will there be some amount of "optical lag"?
When I read a few answers to this question, I noted how everyone wants to emphasize that it is a small effect because light moves so quickly. So that set me pondering whether we can find an example with longer distances. If we look out into the depths of space, we don't see mirrors but we do see something that acts to direct the path of light, namely gravitational 'lenses'. A gravitational lens is just a heavy thing such as a galaxy that happens to be on or near the line of sight from Earth to some other galaxy. It can then happen that light can travel from the further galaxy to Earth by more than one path, and the paths do not have to be of the same length. So we end up seeing the same distant galaxy at two or more different moments in its life, all at the same time (i.e. time of arrival of the light) for us. And the time differences now can be long: not nanosecond or seconds, but years or millennia. Any friendly astronomer reading this may care to give examples.
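For scale, the stadium "lag" and the astrophysical one differ by some fifteen orders of magnitude. A back-of-the-envelope sketch (the 100 m stadium distance is an assumption, and the one-light-year path difference is just a round number; real lens delays range from days to millennia):

```python
c = 299_792_458.0        # speed of light, m/s

# Stadium mirror: light goes leader -> mirror -> viewers next to the leader.
d = 100.0                # assumed leader-to-mirror distance, metres
mirror_lag_s = 2 * d / c # round-trip delay, seconds (sub-microsecond)

# Gravitational lens: suppose two image paths differ by one light-year.
seconds_per_year = 3.156e7
path_diff = 9.461e15     # one light-year, metres
lens_lag_years = path_diff / c / seconds_per_year   # delay in years
```

The stadium delay is about $0.7\ \mu\text{s}$, utterly imperceptible, while the lens delay is (by construction here) a full year.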
{ "language": "en", "url": "https://physics.stackexchange.com/questions/649738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 7, "answer_id": 0 }
Electric flux due to a charge on a square surface What is the flux through the square if a charge $q$ is placed on the surface of the square? Now, according to me, if we use the solid angle method the flux should be $$\frac{q}{\epsilon_0}\cdot\frac{2\pi}{4\pi}$$ as the solid angle is $2\pi$; however, I'm not sure if this is correct :(. Now, what if I placed the charge at some distance $d$ above the square? Can I still use solid angles? I feel it's not possible to use solid angles in the second case. (To sum it up: can I always use solid angles to find flux?) (Edit: I am a high school student, so complex math stuff isn't good with me :(. Also, I'm not too good with solid angles.) Also, I feel that the flux can't be zero, as it can be clearly seen that some field lines are passing along the area vector through the y-axis
If the charge is in the surface, the electric field vectors are also parallel to the surface, due to symmetry. Then $$\Phi_E=\int \vec E\cdot d\vec A=\int \vec E\cdot \vec n d A=0$$ because of $\vec E\cdot \vec n=0$. No need for solid angles to get that result. The electric field of a point charge at the origin is (in suitable charge units) $$\vec E = q \frac{\vec r}{r^3}$$ For every plane through the point charge, the electric field in that plane is trivially parallel to that plane (because $\vec E\sim \vec r$). Hence, it is perpendicular to the normal of that plane.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/650171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we cool Earth by shooting powerful lasers into space? In a sense, the climate change discussion revolves around the unwanted warming of the earth's atmosphere as a whole. It seems a bit too obvious to be true, but could we cool the atmosphere by simply shooting that unwanted energy somewhere else? Energy might be collected from remote expanses where it would otherwise be somewhat pointless to harvest it due to lack of habitability and resulting anticipated losses due to transmission (ocean surface, ???) If so, what would be a good place to shoot it?
... could we cool the atmosphere by simply shooting that unwanted energy somewhere else? Yes, but only by making our problems worse than they are already. A heat pump removes unwanted heat (that is how a refrigerator works) but it takes energy to make a heat pump work. Either this energy comes from non-renewable resources - which just generates more heat and makes our energy problems worse - or it comes from renewable resources such as solar power. But using energy from renewable resources to export heat that you have generated by consuming non-renewable resources makes no sense - just replace the non-renewable resources with the renewable resources in the first place. Here is an analogy - you have lit a fire in your room to run a steam engine which generates all your electricity. But the fire is making the room too hot and you want to run an air conditioning unit to cool the room down. You could build a bigger fire to generate more electricity to run the air conditioning unit - but that makes your problem worse. Or you could install solar panels to generate electricity to power the air conditioning unit - but if you have solar panels why do you still need your fire and steam engine ?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/650490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 8, "answer_id": 0 }
On the existence of an orthonormal basis for each subsystem in a separable state A separable state in $\mathcal{H}_{a}\otimes\mathcal{H}_{b}$ is given by $$\rho_{s}=\sum_{\alpha,\beta}p(\alpha,\beta)|\alpha\rangle\!\langle\alpha|\otimes|\beta\rangle\!\langle\beta|.$$ Now, my question is: what can we say about the existence of $\{|\alpha \rangle\}$ and $\{|\beta \rangle\}$ such that all of them are elements of a complete basis (non-unique) in each individual subsystem? Do they exist? Or for what kind of states do they exist (for example, in the case of zero-discord states they exist, as pointed out by one of the commentators)? A reason I think such a set of bases exists for every separable state is that the separable state space is the convex hull of tensor products of symmetric rank-$1$ projectors which are affinely independent (Caratheodory's theorem); thus by definition, there exists a set of linearly independent vectors for each subsystem (mostly non-unique), let's say those are $\{|\alpha\rangle\}$ and $\{|\beta\rangle\}$. Surely, we can add a few more vectors to both suitably to span the whole space!! Is this true? Any help is appreciated.
No, such a decomposition does not always exist. To see this, we need two facts: (1) The set of all separable states has the same dimension as the set of all density operators (it is a finite-volume subset) -- that is, it has $\approx D^4$ real parameters (if both systems have dimension $D$). (2) The family you describe is fully specified by the $p$ ($\approx D^2$ parameters) and the basis choice for the two bases, which are given by a $D\times D$ unitary matrix (i.e. $\approx D^2$ real parameters) each, i.e., $\approx 3D^2$ parameters in total. Thus, the ansatz you describe has by far not enough parameters to describe all separable states.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/650622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equation of motion of a gas's center of mass due to heating So here's a pretty basic question that I don't know how to solve within standard thermodynamics. Let's say I have a container with a gas in it. I transfer heat, $Q$, to the gas from the bottom of the container. Let us assume the local equilibrium hypothesis is true for such a system. We know that, since hot air is lighter, the lighter air will move up, causing the center of mass to move to the bottom. Let the center of mass undergo a displacement, $\vec s_{CM}$. Is there a way to find the equation of motion for the velocity $\vec v_{CM}$ of the center of mass of the gas? (Assume any required parameters such as density $\rho$, etc.)
The natural convection problem you are envisioning can be solved for the time- and spatial variations of the temperature, the velocity, and density using the basic spatially differentiated transport equations (Navier-Stokes equations {with typically a linearized buoyancy term} in conjunction with the continuity {mass balance} equation and the differential energy balance equation {first law of thermodynamics}). For approaches to setting this up, see Transport Phenomena by Bird, Stewart, and Lightfoot. Depending on the specific geometry, an analytic solution is usually not possible, and one would have to resort to a numerical solution, typically involving computational fluid dynamics (CFD).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/650779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Gravitational wave and 1st law of thermodynamics Introduction: A prediction of general relativity is that any moving mass produces fluctuations in the space-time fabric, commonly referred to as gravitational waves. This prediction was recently confirmed by the LIGO experiment. The generation of such gravitational waves requires energy, as stated in the wiki article linked above: Water waves, sound waves, and electromagnetic waves are able to carry energy, momentum, and angular momentum and by doing so they carry those away from the source. Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other—the angular momentum is radiated away by gravitational waves. The first law of thermodynamics states that: The first law of thermodynamics, also known as Law of Conservation of Energy, states that energy can neither be created nor destroyed Given that, one can infer that any moving object having a mass would create gravitational waves - even ever so tiny - thus experiencing a drag. Question: How can a system, for instance the Earth-Moon orbit, be stable and not decay over time in the model of general relativity? (Where does the energy come from?) Is this question solved?
The objects that are radiating gravitational waves must be losing the energy and angular momentum they had in their orbits. For example, when two black holes spiral in toward each other, they radiate away this energy and angular momentum as their orbit decays and they eventually merge. How can a system, for instance the Earth-Moon orbit, be stable and not decay over time in the model of general relativity? For the Moon-Earth system, the amount of gravitational wave radiation is incredibly small. The time needed for any noticeable orbital decay is greater than the age of the universe. More energy is lost to dissipative forces, and so the Moon and the planets are in effectively fixed orbits where the total energy and angular momentum is practically constant.
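How small is "incredibly small" for the Earth-Moon system? The leading-order quadrupole formula for two point masses on a circular orbit, $P=\frac{32}{5}\frac{G^4}{c^5}\frac{(m_1 m_2)^2(m_1+m_2)}{r^5}$, gives only a few microwatts. A quick evaluation (point-mass, circular-orbit idealization):

```python
G  = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c  = 2.998e8      # speed of light, m/s
m1 = 5.972e24     # Earth mass, kg
m2 = 7.346e22     # Moon mass, kg
r  = 3.844e8      # mean Earth-Moon distance, m

# Leading-order quadrupole radiation from a circular two-body orbit.
P = (32.0 / 5.0) * G**4 / c**5 * (m1 * m2)**2 * (m1 + m2) / r**5
```

The result is roughly $7\ \mu\text{W}$: completely negligible next to the terawatts dissipated by ordinary tidal friction in the same system.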
{ "language": "en", "url": "https://physics.stackexchange.com/questions/650879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Does the physical singularity of the Reissner-Nordstrom metric have a ring structure? The physical singularity of the Kerr metric has a ring structure due to the axi-symmetric nature of the metric. The Reissner-Nordstrom metric is the solution for a non-spinning, electrically charged black hole, and has two horizons: an event horizon and a Cauchy surface, the locations of both depend on the black hole's mass and charge. Question: mathematically what is the structure of the physical singularity in the Reissner-Nordstrom metric? Is it a point like the Schwarzschild case? Is it a ring like the Kerr case? Or is it different from both? And why? I've done some digging but cannot find a concrete explanation. The answer at this SE question is helpful, since it shows that even theoretically, the existence of the Kerr singularity could be a mathematical artifact. But I'm curious about if such a mathematical feature exists for the Reissner-Nordstrom metric, and why?
The singularity in a Reissner-Nordström geometry is at $r = 0$. With a little assistance from Mathematica I find the Kretschmann scalar to be: $$ K = \frac{56r_q^4}{r^8} - \frac{48 r_q^2 r_s}{r^7} + \frac{12 r_s^2}{r^6} $$ and this is finite everywhere except at $r = 0$. theoretically, the existence of the Kerr singularity could be a mathematical artifact. Neither the RN nor the Kerr metric can represent a real black hole since they are both time independent and have existed for an infinite time in the past, which is obviously not possible in a universe $13.8$ billion years old. So we can be confident they are both a mathematical ideal not a real object. In addition it is widely believed (though I think not proved) that the geometry inside the inner horizon is unstable to perturbations. So even if a real RN or Kerr geometry could be constructed it would immediately evolve into some other geometry in some way that is not fully understood.
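One can also see the claim numerically: in the Kretschmann scalar quoted above, the $56 r_q^4/r^8$ term dominates and diverges as $r\to 0$, while $K$ stays finite at any $r>0$ (and, being a spherically symmetric point at $r=0$, the singularity is point-like rather than ring-like). A quick illustration in geometrized units, with $r_s=2$, $r_q=0.5$ chosen arbitrarily:

```python
# Kretschmann scalar of the Reissner-Nordstrom metric (from the answer),
# in geometrized units with illustrative values r_s = 2, r_q = 0.5.
r_s, r_q = 2.0, 0.5

def kretschmann(r):
    return 56 * r_q**4 / r**8 - 48 * r_q**2 * r_s / r**7 + 12 * r_s**2 / r**6

far  = kretschmann(10.0)   # small and finite far from the hole
near = kretschmann(1e-3)   # the 56 r_q^4 / r^8 term dominates and blows up
```

Moving the evaluation point from $r=10$ down to $r=10^{-3}$ takes $K$ from about $10^{-5}$ up past $10^{24}$, in line with the $r^{-8}$ divergence.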
{ "language": "en", "url": "https://physics.stackexchange.com/questions/650955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Proton Electron Merger Can somebody explain what would happen if an electron & a proton, very close to each other are left to "fall" to each other in a straight line?
I wish to point out that electrons are LEPTONS, and protons are HADRONS (forgive the SCREAMING). All protons are made up of 3 quarks (u u d). Neutrons have (u d d) quarks. Leptons have 0 quarks and do not participate in strong force interactions, which are mediated by gluon exchanges between the hadron's component quarks. A lepton can only contribute energy (from its kinetic motion). While Coulomb attraction exists between the proton and incoming electron, you need a lot of energy to get a proton u quark to transition to d (a naive probability of ⅔, assuming you approach to within $10^{-17}$ meters), but to balance out all the quantum book-keeping, you also need an electron anti-neutrino! A Feynman diagram would show this (see http://hst-archive.web.cern.ch/archiv/HST2002/feynman/examples.htm). As such, the electron would more probably lose its energy as Bremsstrahlung emissions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/651283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Bloch functions vs. Bloch state vectors Let $$\hat{H}=\frac{\hat{\mathbf{p}}^2}{2m} + V_L(\mathbf{r})$$ where the lattice potential $V_L(\mathbf{r})=V_L(\mathbf{r}+\mathbf{R})$ for any lattice vector $\mathbf{R}$, and let $\hat{T}_{\mathbf{R}}$ denote translation by $\mathbf{R}$. Skipping the details for now, we can derive the following eigenvalue equations: \begin{align} \hat{T}_{\mathbf{R}}|n,\mathbf{k}\rangle&=\exp\left(-i\mathbf{k}\cdot\mathbf{R}\right)|n,\mathbf{k}\rangle\\ \hat{H}|n,\mathbf{k}\rangle&=E_n\left(\mathbf{k}\right)|n,\mathbf{k}\rangle \end{align} The first equation implies that the wave function $\psi_{n\mathbf{k}}\left(\mathbf{r}\right)$ associated with $|n,\mathbf{k}\rangle$ obeys $$\psi_{n\mathbf{k}}\left(\mathbf{r}+\mathbf{R}\right)=\exp\left(i\mathbf{k}\cdot\mathbf{R}\right)\psi_{n\mathbf{k}}\left(\mathbf{r}\right)$$ and letting $$u_{n\mathbf{k}}\left(\mathbf{r}\right)=\exp\left(-i\mathbf{k}\cdot\mathbf{r}\right)\psi_{n\mathbf{k}}\left(\mathbf{r}\right)$$ we see that $u_{n\mathbf{k}}\left(\mathbf{r}+\mathbf{R}\right)=u_{n\mathbf{k}}\left(\mathbf{r}\right)$, which is Bloch's theorem. Here's my question. What is $|u_{n\mathbf{k}}\rangle$? It is tempting to claim that $|u_{n\mathbf{k}}\rangle=\exp\left(-i\mathbf{k}\cdot\mathbf{r}\right)|n,\mathbf{k}\rangle$, but this does not seem to be the case: \begin{align*} |u_{n\mathbf{k}}\rangle&=\int d^3r|\mathbf{r}\rangle u_{n\mathbf{k}}\left(\mathbf{r}\right)\\ &= \int d^3r|\mathbf{r}\rangle\exp\left(-i\mathbf{k}\cdot\mathbf{r}\right)\psi_{n\mathbf{k}}\left(\mathbf{r}\right) \end{align*} and the $\exp\left(-i\mathbf{k}\cdot\mathbf{r}\right)$ factor cannot just be pulled out of the integral.
The trick is to replace the $\mathbb C$-valued function $\exp(i\mathbf k\cdot\mathbf r)$ by the operator $\exp(i\mathbf k\cdot\hat{\mathbf r})$ (which no longer depends on $\mathbf r$). More explicitly, you are defining $u$ by: $$ u_{n\mathbf{k}}\left(\mathbf{r}\right)=\exp\left(-i\mathbf{k}\cdot\mathbf{r}\right)\psi_{n\mathbf{k}}\left(\mathbf{r}\right)$$ i.e.: $$\langle \mathbf r|u_{n\mathbf k}\rangle =\exp(-i\mathbf k\cdot\mathbf r)\langle \mathbf r|\psi_{n\mathbf k}\rangle$$ Then, you can compute: \begin{align} |u_{n\mathbf k}\rangle &= \int \text d\mathbf r|\mathbf r\rangle \langle\mathbf r|u_{n\mathbf k}\rangle \\ &= \int \text d\mathbf r|\mathbf r\rangle\exp(-i\mathbf k\cdot\mathbf r)\langle \mathbf r|\psi_{n\mathbf k}\rangle \\ &= \int \text d\mathbf r\exp(-i\mathbf k\cdot\hat{\mathbf r})|\mathbf r\rangle\langle \mathbf r|\psi_{n\mathbf k}\rangle \\ &= \exp(-i\mathbf k\cdot\hat{\mathbf r})\int \text d\mathbf r|\mathbf r\rangle\langle \mathbf r|\psi_{n\mathbf k}\rangle \\ &= \exp(-i\mathbf k\cdot\hat{\mathbf r})|\psi_{n\mathbf k}\rangle \end{align}
{ "language": "en", "url": "https://physics.stackexchange.com/questions/651420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
True or False: energy is conserved in all collisions Using introductory physics, how would you answer this question? (I have a disagreement with my instructor and I’m curious to hear your input) One of us says true because the question doesn’t specify “kinetic energy,” or a “system” and all energy is always conserved. The other says false because “only perfectly elastic collisions conserve energy. Otherwise energy will be lost to sound or light” What’s your opinion?
In any situation you will encounter, most notably using mere introductory physics, it is "true". Energy is always conserved, but gets quite handily transmuted between different forms. As a last resort, energy becomes simple thermal heat. But it is still energy, and is conserved. In the typical teacher's mind, the answer is "false". Because the average teacher will state that kinetic energy is lost in an inelastic collision. It is, but only as kinetic energy. What actually happens is that the energy just puts on a different nametag.
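The two readings can coexist in a single calculation: in a perfectly inelastic collision, momentum is conserved, kinetic energy is not, and the "missing" kinetic energy is exactly the amount that reappears as heat, sound, and deformation. A sketch with made-up masses and speeds:

```python
# Perfectly inelastic head-on collision, illustrative numbers.
m1, v1 = 2.0, 3.0     # kg, m/s
m2, v2 = 1.0, 0.0     # second block initially at rest

# Momentum conservation fixes the final common velocity.
v_f = (m1 * v1 + m2 * v2) / (m1 + m2)

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after  = 0.5 * (m1 + m2) * v_f**2

# The kinetic energy that "disappears" becomes heat, sound and deformation.
dissipated = ke_before - ke_after
```

Here 9 J of kinetic energy becomes 6 J of kinetic energy plus 3 J of "other" energy: false if the question means kinetic energy, true if it means total energy.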
{ "language": "en", "url": "https://physics.stackexchange.com/questions/651797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 12, "answer_id": 1 }
What is the difference between elastic force and restoring force? Elasticity is the ability of a material to return to its original shape after being stretched or compressed. When an elastic material is stretched or compressed, it exerts an elastic force. The restoring force is a force that acts to bring a body to its equilibrium position. Are elastic force and restoring force same?
This is a matter of semantics, but I’d say that an elastic force is a specific type of restoring force, in the context of a physical elastic/spring type system. A restoring force is any general force that opposes an applied force, usually describing harmonic oscillation. A different type of restoring force could be electromagnetic, for example.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/651882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inclined Stationary Bike Why would it make you work harder if you incline a stationary exercise bike? What's the physics involved? Assuming all other factors remain constant and only the incline changes, why would you burn more calories?
To the best of my knowledge, it would not. Perhaps pedalling would become more awkward due to the incline but in terms of the mechanics/physics of the bike it would be the same. Most stationary trainers have a resistance setting to deal with the difficulty of the incline. But this is not really your question. I can’t imagine any significant mechanism that would make an incline burn more energy. Is there some mechanism you are thinking of when asking this question?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/652165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Coefficient of a 1-form in the position representation of the momentum operator where the configuration space is NOT $\mathbb{R}^m$ I found this in the book Geometric Phase in Quantum Systems by A. Bohm et al., where the position-space representation of the momentum operator carries (this is exactly where my doubt is) a coefficient 1-form with the condition $$\partial_i \omega_j - \partial_j \omega_i =0 \implies d\omega=0$$ The author(s) argued via the Poincaré lemma that, for an $\mathbb{R}^m$ configuration space, the term can be "gauged away". I understand the usual momentum operator representation without this 1-form, but this point is very non-trivial for me. Can someone please explain to me how this arises and what it means in detail?
* *For $M=\mathbb{R}^m$, starting from the canonical commutation relations (CCRs), we have the Stone-von Neumann theorem, which proves the existence of the standard Schrödinger position representation (i.e. without the 1-form $\omega$) up to unitary equivalence, cf. e.g. this Phys.SE post. *However conjugating with unitary multiplication operators naturally induces an exact $\omega$ one-form, cf. e.g. my Phys.SE answer here. *Therefore it is quite natural to consider a 1-form $\omega$ from the onset, even for a topologically non-trivial $m$-dimensional configuration space $M$. *The CCRs then imply that the 1-from $\omega$ must be a closed one-form. The cohomology class $[\omega]$ is conserved under conjugating with unitary multiplication operators. *Since $M=\mathbb{R}^m$ is a contractible space, we may use Poincare Lemma to show that the cohomology class $[\omega]=[0]$ is trivial.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/652280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is electron-positron annihilation time reversible? Consider that a low energy electron and positron annihilate creating two 511keV photons with no other particles around. To time reverse this process, we send two 511eV photons to collide hoping that they would produce an electron/positron pair. However photons don’t interact with each other, at least not at such low energies. Instead of colliding, they just ignore each other and move on. The picture changes when other particles are involved, but then it would not be the exact time reversal of the annihilation described above, but a time reversal of a different process involving other particles. If I cross electron and positron beams, I’d have fireworks with likely most particles annihilating to gamma rays. Yet if I cross two beams of 511keV gamma rays in vacuum, nothing happens. They just pass through each other with no interaction whatsoever. Not a single electron/positron pair would be created to detect. So is annihilation time reversible? If yes, then how? If no, then would it not violate some fundamental symmetry of nature?
However photons don't interact with each other, at least not at such low energies. A half-MeV photon is a gamma ray, and you are asking about the cross section of gamma-gamma scattering. In this paper the reverse reaction is considered for use in astrophysics observations. Instead of colliding, they just ignore each other and move on. This is true for low-energy photons. It has to do with coupling constants and quantum number conservation, but the time-reversed diagrams mathematically work. They are considering gamma-gamma colliders.
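Kinematically, the reverse reaction sits right at the edge: two head-on 511 keV photons have an invariant mass $\sqrt{s}=2E_\gamma$ that just equals the $2m_ec^2$ threshold for creating an $e^+e^-$ pair, which is part of why nothing visible happens there (the cross section at threshold is vanishingly small). A one-line check, with energies in MeV and the electron rest energy rounded to 0.511 MeV:

```python
m_e_c2  = 0.511   # electron rest energy in MeV (rounded)
E_gamma = 0.511   # each photon's energy in MeV

# Head-on photons: total four-momentum is (2E, 0), so the invariant mass is
sqrt_s = 2 * E_gamma

threshold = 2 * m_e_c2        # minimum sqrt(s) for e+ e- pair production
margin = sqrt_s - threshold   # zero: exactly at threshold
```

Any head-on photon pair with slightly more energy is above threshold, which is what the proposed gamma-gamma colliders exploit.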
{ "language": "en", "url": "https://physics.stackexchange.com/questions/652544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Proof of form of 4D rotation matrices I am considering rotations in 4D space. We use $x, y, z, w$ as coordinates in a Cartesian basis. I have found sources that give a parameterization of the rotation matrices as \begin{align} &R_{yz}(\theta) = \begin{pmatrix} 1&0&0&0\\0&\cos\theta&-\sin\theta&0\\0&\sin\theta&\cos\theta&0\\0&0&0&1 \end{pmatrix}, R_{zx}(\theta) = \begin{pmatrix} \cos\theta&0&\sin\theta&0\\0&1&0&0\\-\sin\theta&0&\cos\theta&0\\0&0&0&1 \end{pmatrix},\\ &R_{xy}(\theta) = \begin{pmatrix} \cos\theta&-\sin\theta&0&0\\\sin\theta&\cos\theta&0&0\\0&0&1&0\\0&0&0&1 \end{pmatrix}, R_{xw}(\theta) = \begin{pmatrix} \cos\theta&0&0&-\sin\theta\\0&1&0&0\\0&0&1&0\\\sin\theta&0&0&\cos\theta \end{pmatrix},\\ &R_{yw}(\theta) = \begin{pmatrix} 1&0&0&0\\0&\cos\theta&0&-\sin\theta\\0&0&1&0\\0&\sin\theta&0&\cos\theta \end{pmatrix}, R_{zw}(\theta) = \begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&\cos\theta&-\sin\theta\\0&0&\sin\theta&\cos\theta \end{pmatrix}, \end{align} where the subscript labels a plane that is being rotated. This seems to be a very intuitive extension of lower dimensional rotations. However, I would really like to see a proof that these are correct, and I'm not sure how I could go about doing that. By correct, I mean that these 6 matrices can generate any 4D rotation. My initial attempt was to construct a set of transformations from the definition of the transformations (as matrices) that define a 4D rotation, \begin{align} \{R|RR^T = I\}, \end{align} where $I$ is the identity matrix (4D), but this has 16 (constrained) parameters and I thought that there must be an easier way.
For each of your 4D rotation matrices $~\mathbf R~$: if the equation $$\mathbf Z^T\, \mathbf Z= \left(\mathbf R\,\mathbf Z\right)^T\,\left(\mathbf R\,\mathbf Z\right)$$ is fulfilled for every $\mathbf Z$, the rotation matrix $~\mathbf R~$ is orthogonal, $~\mathbf R^T\,\mathbf R=\mathbf I_4$, where $$\mathbf Z= \begin{bmatrix} x \\ y \\ z \\ w \\ \end{bmatrix}$$ Edit: you can also check the determinant of the rotation matrix; an orthogonal matrix has determinant $\pm 1$, and a proper rotation has determinant $+1$. (Note that determinant $1$ on its own does not imply orthogonality.)
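Both checks can be run mechanically. A stdlib-only Python sketch that builds two of the listed generators, composes them into a generic "double rotation", and verifies $\mathbf R^T\mathbf R=\mathbf I_4$ and $\det\mathbf R=+1$ (the angles are arbitrary):

```python
import math

def R_xy(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def R_zw(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, -s], [0, 0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def det(A):
    # cofactor expansion along the first row; fine for a 4x4
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

R = matmul(R_xy(0.7), R_zw(-1.2))     # a generic "double rotation"
RtR = matmul(transpose(R), R)
ortho_err = max(abs(RtR[i][j] - (1.0 if i == j else 0.0))
                for i in range(4) for j in range(4))
det_R = det(R)
```

Note that this verifies the six matrices (and their products) are proper rotations; the harder part of the original question, showing they generate all of $SO(4)$, is a Lie-algebra dimension-counting argument rather than a numerical check.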
{ "language": "en", "url": "https://physics.stackexchange.com/questions/652658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Does gravity get stronger the higher up you are on a mountain? So I saw this article stating that gravity is stronger on top of a mountain due to there being more mass under you. However, I have read some questions other people have asked, and most of the responses state that the mass is concentrated at the middle of the Earth, meaning gravity doesn't get stronger the higher up you go. I would like to know which of these is correct, as the article is from a pretty reliable source. Here is the link to the article: https://nasaviz.gsfc.nasa.gov/11234
You are getting different answers from NASA and from other sources, as they are talking about slightly different things. NASA is talking about the acceleration of the GRACE satellite towards the earth, as it orbited over different regions. When it went over the Himalayas, for example, the acceleration (gravity) was higher than average. Other sources are talking about the difference in acceleration due to gravity at ground level, compared to if you were to walk up the Himalayas, then the acceleration would decrease. That's because even though there would be more mass underneath, you've increased the distance from the earth. More detail: At the bottom of a cone shaped mountain of mass $m$, radius $r$ and height $r$, the acceleration due to gravity is $g$, due to the earth of mass $M$, radius $R$. $$g=\frac{GM}{R^2}\tag1$$ the difference in gravity after climbing the mountain is $$\frac{GM}{{(R+r)}^2}+\frac{Gm}{{(\frac{3}{4}r)}^2} - g\tag2$$ The 3/4 is due to the position of the COM of a cone. Using 1) it's $$g\bigl((1+\frac{r}{R})^{-2}+\frac{16mR^2}{9Mr^2}-1\bigr)\tag3$$ From formulae for the volume of a sphere and a cone and assuming equal density $$\frac{m}{M} = \frac{r^3}{4R^3}\tag4$$ so 3) becomes, in terms of $g$ $$y= (1+\frac{r}{R})^{-2}+\frac{4r}{9R}-1\tag5$$, putting $x = \frac{r}{R}$ $$y= (1+x)^{-2}+\frac{4}{9}x-1\tag6$$ plotting this shows that there is a decrease in the acceleration due to gravity for all realistic cone shaped mountains. For Everest, if it were a cone, $x=0.0014$ and the reduction in gravity is $y=0.002g$, so the usual $9.81$ becomes about $9.79 \;\text{m}\,\text{s}^{-2}$.
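Equation (6) is easy to evaluate numerically; the sketch below (with an illustrative Everest-like value of $x$) reproduces the quoted reduction of about $0.002g$:

```python
def gravity_change(x):
    """Fractional change in g on top of a cone-shaped mountain, eq. (6): x = r/R."""
    return (1.0 + x) ** -2 + (4.0 / 9.0) * x - 1.0

x_everest = 0.0014        # r ~ 8.9 km over R ~ 6371 km, as in the answer
y = gravity_change(x_everest)
# y < 0: gravity decreases at the summit, by roughly 0.002 g
```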
{ "language": "en", "url": "https://physics.stackexchange.com/questions/652752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 6, "answer_id": 2 }
Why does $dG < 0$ imply that processes involving chemical reactions are spontaneous? Here is a short proof/derivation of why $dG < 0$ implies that a process is spontaneous (for constant temperature and pressure): But this derivation assumes that only mechanical work is done on the system. If the process involves chemical changes the expression for internal energy becomes: With this expression for $dU$ the last step in the derivation no longer holds: since $dG = dQ - TdS + \sum_i \mu_i\,dN_i \neq dQ - TdS$, $dQ < TdS$ is not equivalent to $dG < 0$. It seems to me like there could be a hypothetical process for which $dG < 0$ but where $dQ > TdS$. With a process like that, the total entropy change (the environment plus the system) is negative (i.e. it is impossible). Yet, $dG < 0$ is commonly used by chemists as a criterion for the spontaneity of chemical reactions. What am I missing?
Indeed the derivation assumes that only mechanical work can be done on the system, and no transfer of particles and associated energy can happen; it is in the first sentence, "no matter can come in or out". This does not mean chemical changes inside the system are forbidden; instead, they can happen but they don't affect the fact that the system is closed and don't contradict the assumption that only mechanical work is possible. So the derivation is correct even if internal changes of chemical composition happen. On the formal aspect: the derivation uses the expression $$ dU = dQ - PdV $$ which is valid due to assumptions. But it does not use your expression for $dU$ in terms of $dS,dV,dN_i$, so there is no problem with the derivation, even if $N_i$ change due to chemical processes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/653042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Law of conservation of energy using work done vs Maxwell's theory of electromagnetic radiation Consider the motion of a charged particle in a uniform magnetic field, $\vec{B} = B_0(-\hat{k})$. Let the initial velocity with which it enters the field be $\vec{v_i} = v_0(-\hat{i})$. It is well known that it follows a circular path of radius $R = \frac{m v_0}{qB_0}$. 1. Using the work-energy theorem: $$\Delta K = W_B = \int \vec{F_B}\cdot\vec{v}\,dt = \int q(\vec{v}\times\vec{B})\cdot\vec{v}\,dt = 0$$ $$ K_f = K_i $$ Therefore the speed of the charged particle and the radius of the path remain constant. 2. Using the theory of electromagnetic radiation: the direction of the velocity changes continuously, so the charged particle is in accelerated motion. Therefore it continuously loses energy in the form of electromagnetic radiation, and it must follow a spiral path. Which of the two arguments is correct? N.B. I am a high school student, so please limit the answer to high-school mathematics.
Yes, your second reasoning is correct. Accelerating charges do emit electromagnetic radiation, and what you described is known as synchrotron radiation. This energy loss increases sharply as the particle approaches the speed of light. In fact, this is a limiting factor of the maximum speed that particle accelerators can produce. If no additional energy is provided, then the particle will indeed spiral. The first one is just a simplification, since radiation from accelerating charges is not typically considered at the high-school level. The power radiated by a point charge is given by the Larmor formula and a derivation of the angular distribution of the radiation (which is slightly more complicated) can be found here.
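For a sense of scale, here is a sketch using the non-relativistic Larmor formula, $P = q^2 a^2 / (6\pi\varepsilon_0 c^3)$ (the speed and radius below are made-up illustrative values, not from the question): even at a tenth of the speed of light on a metre-scale circle, the energy radiated per revolution is a vanishingly small fraction of the kinetic energy, which is partly why the high-school treatment can safely ignore it.

```python
import numpy as np

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
c    = 2.99792458e8        # speed of light, m/s
m_e  = 9.1093837015e-31    # electron mass, kg

v, r = 0.1 * c, 1.0        # illustrative: 0.1c around a 1 m circle
a = v**2 / r               # centripetal acceleration

# non-relativistic Larmor formula: total radiated power
P = e**2 * a**2 / (6 * np.pi * eps0 * c**3)

E_per_turn = P * (2 * np.pi * r / v)   # energy radiated in one revolution
KE = 0.5 * m_e * v**2
loss_fraction = E_per_turn / KE        # fractional energy loss per turn
```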
{ "language": "en", "url": "https://physics.stackexchange.com/questions/653328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How would I determine the charge density of an active lava flow? It is my understanding that in molten silicates, such as a basaltic lava flow, electrons are decoupled from solid state structures. If this is the case, as lava is flowing through a tube, how would I estimate the charge density and current of the flow for a given composition...say at 10m/s?
Perhaps through the use of some sort of semiconductor material. I am watching a documentary right now and it is talking about the sun being misunderstood as just a ball of burning gas. On the contrary, it is an electrically charged sphere of what they believe is a liquid material. It could be very possible that lava has a similar electrical charge. It would also explain lightning when some volcanoes explode, or even how gravity works. Perhaps there is an electromagnetic condition that comes into play deep under the Earth's surface.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/653699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What's the difference between a perfect fluid and an ideal gas? This is how I understand it at the moment: * *A perfect fluid is a collection of non-interacting particles, which are as a whole characterised by energy and pressure. *An ideal gas is also a collection of non-interacting particles, but here the ideal gas law holds. There, we have pressure, volume and temperature (let's assume a fixed number of particles for both cases), s.t. by applying the ideal gas law, again two parameters remain. Furthermore, the stress-energy tensor of a perfect fluid can be seen e.g. here on Wikipedia, but I haven't found the stress-energy tensor of an ideal gas.
An ideal fluid is a fluid that is incompressible and has no internal resistance to flow (zero viscosity). In addition, ideal fluid particles undergo no rotation about their center of mass (irrotational). An ideal fluid can flow in a circular pattern, but the individual fluid particles are irrotational. Real fluids exhibit all of these properties to some degree, but we shall often model fluids as ideal in order to approximate the behavior of real fluids. When we do so, one must be extremely cautious in applying results associated with ideal fluids to non-ideal fluids. Italics mine. The ideal gas law: The ideal gas law states that $PV = NkT$, where P is the absolute pressure of a gas, V is the volume it occupies, N is the number of atoms and molecules in the gas, and T is its absolute temperature. An ideal gas is compressible because the volume depends on the pressure and temperature. Of course real gases and real fluids have deviations in their behavior, but I think the main difference leading to the different mathematical models is compressibility.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/653790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Can incompatible observables share an eigenvector? I was recently introduced to the concept of compatible and incompatible observables and specifically the generalized uncertainty principle, which is written in my textbook as: $$ \sigma_A^2\sigma_B^2 \geq \left(\frac{1}{2i} \langle [A, B] \rangle \right)^2 $$ where $A$ and $B$ are some observables. If $A$ and $B$ do not commute then they cannot have a complete set of common eigenfunctions and this is given as an exercise to prove in my textbook. My question is can they even share a single eigenvector? If the wavefunction points along the common eigenvector, wouldn't both standard deviations be equal to zero and the equation not be satisfied, as the left-hand side of the inequality would be equal to zero, and the right side would be a positive number (as the operators don't commute). But my textbook seems to imply that non-commuting operators can share an eigenvector.
It is possible for two non commuting operators to have an eigenvector in common. Consider two operators $A$ and $B$, that in some basis can be written as $$ A = \begin{pmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \end{pmatrix}$$ $$ B = \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix}$$ $$ AB-BA = \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0 \end{pmatrix} \neq 0$$ Yet, the vector $$\textbf{v} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$$ is a common eigenvector. But the inequality is still satisfied, because on the RHS there is the expected value of $[A,B]$ over the state. $\textbf{v}$ is an eigenstate of $[A,B]$ with eigenvalue 0, therefore the expected value is 0 and the inequality holds. This is a general result. If two operators share a common eigenvector, it will be also an eigenvector of their commutators, with eigenvalue 0. Proof: If $A\textbf{v} = a\textbf{v}$ and $B\textbf{v} = b\textbf{v}$, then $$(AB-BA)\textbf{v} = ab\textbf{v}-ba\textbf{v} = 0$$
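The example above can be checked directly in a few lines (a quick NumPy sketch of the matrices from this answer):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
v = np.array([1.0, 0.0, 0.0])

comm = A @ B - B @ A          # the commutator [A, B]

noncommuting = not np.allclose(comm, np.zeros((3, 3)))
# v is a common eigenvector (eigenvalue 1 for both A and B) ...
shared = np.allclose(A @ v, 1.0 * v) and np.allclose(B @ v, 1.0 * v)
# ... and, as proved above, the commutator annihilates it
annihilated = np.allclose(comm @ v, np.zeros(3))
```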
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Could wavefunction values be quantized? According to standard quantum mechanics, Hilbert space is defined over the complex numbers, and amplitudes in a superposition can take on values with arbitrarily small magnitude. This probably does not trouble non-realist interpretations of the wavefunction, or explicit wave function collapse interpretations, but does come into play in serious considerations of realist interpretations that reject explicit collapse (e.g. many worlds, and the quantum suicide paradox). Are there, or could there be, models of "quantum mechanics" on Hilbert-like spaces where amplitudes belong to a discretized subset of the complex numbers like the Gaussian integers -- in effect, a module generalization of a Hilbert space -- and where quantum mechanics emerges as a continuum limit? (Clearly ray equivalence would still be an important consideration for this space, since almost all states with Gaussian-integer valued amplitudes will have $\left< \psi | \psi \right> > 1$.) Granted, a lot of machinery would need to change, with the normal formalism emerging as the continuous limit. An immediate project with discretized amplitudes could be the preferred basis problem, as long as there are allowed bases that themselves differ by arbitrarily small rotations. Note: the original wording of the question was a bit misleading and a few responders thought I was requiring all amplitudes to have a magnitude strictly greater than zero; I've reworded much of the question.
One problem with this idea is normalization: $$\int_{\mathbb R} \psi^* (x) \psi(x)~ dx = 1$$ You are integrating over infinite space. If $\psi$ has a minimum non-zero value, $\psi$ must be $0$ everywhere except in a finite volume. Now switch to the momentum basis. Because $\psi$ has bounded support, its Fourier transform cannot have bounded support. To be normalizable, the tails would have to have infinitesimal values. So you cannot have discrete values in momentum space. Does this fit your theory? Another problem is that wave functions are continuous. If there are only a discrete set of values, you would have discontinuous functions. Unless you are talking about a space with holes in it? Constant values in distinct regions? Given $$\hat p \psi(x) = -i\hbar \frac{\partial\psi}{\partial x}$$ a $\psi$ that was constant, except where interrupted by discontinuities would correspond to $\hat p = 0$ except where it is undefined or perhaps has infinite spikes. Likewise $$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} = E \psi$$ would lead to $E = 0$ except perhaps at the discontinuities.
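The bounded-support argument can be illustrated numerically. Below is a sketch: a normalized box wavefunction on $[-1,1]$ has momentum amplitudes $\varphi(k)=\sin k/(k\sqrt{\pi})$, which stay nonzero at arbitrarily large $|k|$ while decaying toward zero, so they cannot respect any fixed minimum magnitude.

```python
import numpy as np

# psi(x): normalized box function supported on [-1, 1]
x = np.linspace(-1.0, 1.0, 200_001)
dx = x[1] - x[0]
psi = np.full_like(x, np.sqrt(0.5))        # integral of |psi|^2 over [-1,1] is 1

def phi(k):
    """Momentum amplitude (1/sqrt(2 pi)) * integral psi(x) e^{-ikx} dx (trapezoid rule)."""
    vals = psi * np.exp(-1j * k * x)
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dx / np.sqrt(2.0 * np.pi)

# far out in momentum space the amplitude is small but nonzero
tails = [abs(phi(k)) for k in (100.0, 500.0, 1000.0)]
# analytic result for comparison: |sin(k) / (k * sqrt(pi))|
```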
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Brachistochrone to a vertical line Just for fun, I am working through some problems in Mathematics of Classical and Quantum Physics by Byron and Fuller. Problem 2.13 reads: Prove that a particle moving under gravity in a plane from a fixed point $P$ to a vertical line $L$ will reach the line in minimum time by following the cycloid from $P$ to $L$ that intersects $L$ at right angles. Solving the Brachistochrone is more or less straightforward calculus of variations, taking $P$ to be the origin wlog. The action to be minimized is given by (ignoring the optimization-irrelevant leading factor of $\frac{1}{\sqrt{2g}}$): $$ f(x, y, y') = \sqrt{\frac{1 + {\left(\frac{dy}{dx}\right)}^2}{y}} $$ And the derivation of the resulting cycloid, which can be found literally everywhere, gives the parametric function: \begin{align*} x &= a(\theta - \sin \theta) \\ y &= a(1 - \cos \theta) \end{align*} Where $a$ is a free parameter allowed to vary. (Note: gravity is taken to be in the y-positive direction to eliminate a pair of redundant minus signs.) If $L$ is taken to be the line $x = x_b$, then in theory it ought to be possible to minimize the cost function with respect to $a$. If the cycloid intersects the line at a right angle, then it follows that $\frac{dy}{d\theta} = 0$ and $\frac{dx}{d\theta} \ne 0$ at the point of intersection. \begin{align*} \frac{dy}{d\theta} = 0 &= a \sin(\theta) \\ \theta &= n\pi : n \in \mathbb{Z} \\ \frac{dx}{d\theta} \ne 0 &= a(1 - \cos \theta) \\ \theta &\ne 2n\pi : n \in \mathbb{Z} \end{align*} Because only the first solution could possibly be the curve of fastest descent, the value of $\theta$ must be $\pi$, which gives $a = x_b / \pi$. So, if we take the derivative of the cost function with respect to $a$ and set it equal to zero, it ought to be the case that the equation yields this value. Unfortunately, this is not what I find. Instead, I think I might be being too careless with what is a function of what.
The cost function is given by: $$ I = \int_0^{x_b} \sqrt{\frac{x'^2 + y'^2}{y}} dx $$ Where $x$ and $y$ are functions of $\theta$, and their primed derivatives are with respect to $\theta$. The most problematic part is going to be finding the upper limit ($\theta_b$), which we only know implicitly: $$ x_b = a(\theta_b - \sin \theta_b) $$ Because $\theta_b$ is also a function of $a$, it is necessary to make this explicit. So let $\theta_b(a)$ be the unique solution to the aforementioned implicit equation. With some algebraic manipulation $I$ becomes: \begin{align*} I &= \int_0^{x_b} \sqrt{\frac{{x'(\theta)}^2 + {y'(\theta)}^2}{y(\theta)}} dx \\ &= \int_0^{\theta_b(a)} a(1 - \cos \theta) \sqrt{\frac{a^2{(1 - \cos \theta)}^2 + a^2{(\sin \theta)}^2}{a(1-\cos \theta)}} d\theta \\ &= \int_0^{\theta_b(a)} a(1 - \cos \theta) \sqrt{2a} d\theta \\ &= {\left[\sqrt{2} a^{3/2} (\theta - \sin \theta)\right]}_{0}^{\theta_b(a)} \\ &= \sqrt{2} a^{3/2} (\theta_b(a) - \sin \theta_b(a)) \\ \end{align*} Next, take the derivative and set to 0. \begin{align*} \frac{dI}{da} = 0 &= \frac1{\sqrt{2a}} (3a(\theta_b(a) - \sin \theta_b(a)) + 2a^2 \theta_b'(a) (1 - \cos \theta_b(a))) \\ &= \frac1{\sqrt{2a}}(3x_b + 2a^2 \theta_b'(a) (1 - \cos \theta_b(a))) \end{align*} From here, we have to take care to find $\theta_b'(a)$.
\begin{align*} x_b &= a(\theta_b(a) - \sin \theta_b(a)) \\ \frac{d}{da} x_b &= \frac{d}{da} (a(\theta_b(a) - \sin \theta_b(a))) \\ 0 &= \theta_b(a) - \sin \theta_b(a) + a \theta_b'(a) (1 - \cos \theta_b(a)) \\ \theta_b'(a) &= - \frac{\theta_b(a) - \sin \theta_b(a)}{a (1 - \cos \theta_b(a))} \\ \end{align*} Now to continue solving: \begin{align*} 0 &= \frac1{\sqrt{2a}}(3x_b + 2a^2 \theta_b'(a) (1 - \cos \theta_b(a))) \\ &= \frac1{\sqrt{2a}}(3x_b - 2x_b) \\ &= \frac{x_b}{\sqrt{2a}} \end{align*} Which seems to imply that the critical point lies at $a = +\infty$, which is clearly wrong (given this would imply an infinite distance to travel), and also not equal to the previously found value $a = x_b/\pi$. Which begs the question, where is my error? And is there a better approach for this problem?
Hint : You must start from the time to travel between $\theta_1$ and $\theta_2$ on a cycloid : \begin{equation} t_{2}-t_{1}=\sqrt{\dfrac{\,R\,}{g}}\, \int\limits_{\theta_{1}}^{\theta_{2}}\mathrm{d}\theta=\sqrt{\dfrac{\,R\,}{g}}\, \left(\theta_{2}-\theta_{1}\right) \tag{01}\label{01} \end{equation} If the motion of the particle starts at $\left(x_{1},y_{1}\right)=\left(0,0\right)$, so $\:\theta_{1}=0\:$, then the time $\:t\:$ needed to reach at $\left(x_{2},y_{2}\right)=\left(x,y\right)$, with $\:\theta_{2}=\theta \:$, is \begin{equation} t=\sqrt{\dfrac{\,R\,}{g}}\, \theta = \dfrac{\theta}{\omega} \tag{02}\label{02} \end{equation} where \begin{equation} \omega = \dfrac{\,\theta \,}{t}=\dfrac{\mathrm{d}\theta }{\mathrm{d} t}=\sqrt{\dfrac{\,g\,}{R}}=\text{constant} \tag{03}\label{03} \end{equation} Note : The variable $\,R\,$ is yours $\,a$. Then think of a $\,R-$parametric family of cycloids intersecting the vertical line with $\theta(R)$. The time for the $\,R-$cycloid would be \begin{equation} t(R)=\sqrt{\dfrac{\,R\,}{g}}\, \theta(R) \tag{04}\label{04} \end{equation} and for the minimum $\,t(R)\,$ find the roots of \begin{equation} \dfrac{\mathrm d t(R)}{\mathrm d R}=0 \tag{05}\label{05} \end{equation} The angle $\theta(R)$ is the following implicit function of $\,R$(1) \begin{equation} \theta(R)-\sin\left[\theta(R)\right]=L/R \tag{06}\label{06} \end{equation} where $\,L\,$ the horizontal distance of the vertical line from point $\texttt P$. 
We must remind that the slope $\,\mathrm d y(\theta)/\mathrm d x(\theta)\,$ on a cycloid and the angle $\,\theta\,$ are related as follows \begin{equation} \dfrac{\mathrm d y}{\mathrm d x}=\dfrac{\sin\theta}{1-\cos\theta}=\cot\left(\dfrac{\theta}{2}\right) \tag{07}\label{07} \end{equation} $=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!$ (1) The angle $\,\theta(L/R)\,$ as function of the dimensionless variable $\,L/R\,$ would have a graph like that in Figure-02 below. Note that for $\,L/R=k\pi\,(k=1,2,3\cdots)\,$ we have $\,\theta\left(L/R\right)=k\pi\,$ while the graph is repeat of a segment of width $\,2\pi$ : $\,\theta\left(L/R+2\pi\right)=\theta\left(L/R\right)+2\pi\,$. $=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!$ \begin{equation} \sqrt{g}\,\dfrac{\mathrm d t(R)}{\mathrm d R}\boldsymbol{=}\dfrac{\mathrm d }{\mathrm d R}\left(\sqrt{R}\,\theta\right)\boldsymbol{=}\dfrac{1}{2\sqrt{R}}\,\theta\boldsymbol{+}\sqrt{R}\,\dfrac{\mathrm d \theta}{\mathrm d R} \tag{08}\label{08} \end{equation} \begin{equation} \theta\boldsymbol{=}\sin\theta\boldsymbol{+}\dfrac{L}{R} \boldsymbol{\implies} \tag{09}\label{09} \end{equation} \begin{equation} \dfrac{\mathrm d \theta}{\mathrm d R}\boldsymbol{=}\cos\theta\dfrac{\mathrm d \theta}{\mathrm d R}\boldsymbol{-}\dfrac{L}{R^2} \boldsymbol{\implies} \nonumber \end{equation} \begin{equation} \dfrac{\mathrm d \theta}{\mathrm d R}\boldsymbol{=}\boldsymbol{-}\dfrac{L}{(1\boldsymbol{-}\cos\theta)R^2} \tag{10}\label{10} \end{equation} \eqref{08},\eqref{09},\eqref{10} $\:\boldsymbol{\implies}$ \begin{equation} \sqrt{g}\,\dfrac{\mathrm d t(R)}{\mathrm d R}\boldsymbol{=}\dfrac{1}{2\sqrt{R}}\left(\sin\theta\boldsymbol{+}\dfrac{L}{R}\right)\boldsymbol{-}\sqrt{R}\dfrac{L}{(1\boldsymbol{-}\cos\theta)R^2} 
\tag{11}\label{11} \end{equation} \begin{equation} \dfrac{\mathrm d t(R)}{\mathrm d R}\boldsymbol{=}0 \quad \boldsymbol{\implies}\quad \cos\left(\dfrac{\theta}{2}\right)\cdot \left[\sin^3\left(\dfrac{\theta}{2}\right)\boldsymbol{-}\dfrac{L}{2R}\cos\left(\dfrac{\theta}{2}\right)\right]\boldsymbol{=}0 \tag{12}\label{12} \end{equation} etc
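The stationarity condition (12), whose relevant root is $\cos(\theta/2)=0$, i.e. $\theta=\pi$ and hence $R=L/\pi$, can also be verified numerically by scanning the one-parameter family of travel times (04). A sketch with illustrative values of $g$ and $L$:

```python
import numpy as np

g, L = 9.81, 1.0                      # illustrative values

def theta_of(R):
    """Solve theta - sin(theta) = L/R for theta in (0, 2*pi) by bisection (eq. 06)."""
    target = L / R
    lo, hi = 1e-9, 2.0 * np.pi - 1e-9
    for _ in range(80):               # theta - sin(theta) is monotone on (0, 2*pi)
        mid = 0.5 * (lo + hi)
        if mid - np.sin(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def travel_time(R):                   # eq. (04): t(R) = sqrt(R/g) * theta(R)
    return np.sqrt(R / g) * theta_of(R)

Rs = np.linspace(0.6, 3.0, 2401) * L / np.pi
times = np.array([travel_time(R) for R in Rs])
R_best = Rs[np.argmin(times)]
theta_best = theta_of(R_best)
# expect R_best ~ L/pi and theta_best ~ pi: the cycloid meets the line at a right angle
```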
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Units of a scalar field Consider the Lagrangian density $$\mathscr{L} = \frac{1}{2} \partial_\mu a \partial^\mu a + \frac{m^2}{2} a^2.$$ I understand why $[a]=m$, i.e. $a$ has mass dimension one. What and why are the units of $a$ in the SI system? If I were to know the units of $a$ in the SI, then knowing where and how many $c$'s and $\hbar$'s to place is easy to know, just match the units of each term. In this case, however, since I don't know the units of $a$ in the SI I can put any number of $c$'s and $\hbar$'s I want and give $a$ any units I want.
There is no standardization of units for fields in particle physics. The fields themselves are not physical observables, so there is no need to specify precise units for them. As a consequence, they are normally given (like everything else) in units of powers of energy, $[a]=$ GeV. If you wanted to convert that to SI, you could, keeping it in units of energy, make the units of the field $a$ joules. However, there would be nothing wrong with expressing them in other units that are related by powers of $c$ and $\hbar$; there is no physics in the specific choice.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding two algebra steps in Altland & Simons I would like to understand two summations on page 184-185, where we are told that assuming $x > 0$: $$ G_{\pm} (x, \tau) = - \frac{T}{L} \sum_{p, \omega_{\, n}} \frac{1}{- i \omega_n \mp p} e^{- i p x - i \omega_{\, n} \; \tau} \; $$ $$= \mp i T \sum_n \Theta(\pm n) e^{\omega_{\, n} \; (\mp x - i \tau)} \;$$ $$ \approx \frac{1}{2 \pi} \frac{1}{\pm i x - \tau}.$$ How do we get the last two steps (the second equality and the approximation)? The only comments the book makes is that for the first step we integrate over momenta, and the second is that we approximate the frequency sum by an integral. It's not clear to me how to perform these summations/integrals.
I think Altland&Simons have in mind approximating both momentum and Matsubara frequency sums by integrals, which is permissible in the limits $L\to \infty$ and $T\to 0$ respectively. Introducing the spacings of momentum and frequency grids $\delta p=2\pi/L$ and $\delta\omega=2\pi T$ $$ G_{+} (x, \tau) = - \frac{T}{L} \sum_{p_m, \omega_{\, n}} \frac{1}{- i \omega_n - p_m} e^{- i p_m x - i \omega_{\, n} \; \tau} \;=\\- \frac{1}{(2\pi)^2}\sum_{p_m, \omega_{\, n}} \frac{\delta p\;\delta\omega}{- i \omega_n - p_m} e^{- i p_m x - i \omega_{\, n} \; \tau} \; \approx\\- \frac{1}{(2\pi)^2} \int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty}dp\;\frac{ e^{- i p x - i \omega\tau}}{{- i \omega - p}} = \\ -\frac{i}{2\pi} \int_{0}^{\infty} d\omega\;e^{- \omega x - i \omega\tau} =\frac{1}{2 \pi} \frac{1}{ i x - \tau}.$$
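The last step can be sanity-checked numerically: the frequency integral $-\tfrac{i}{2\pi}\int_0^\infty d\omega\, e^{-\omega x - i\omega\tau}$ indeed reproduces $\tfrac{1}{2\pi}\tfrac{1}{ix-\tau}$. A sketch with arbitrary test values of $x>0$ and $\tau$:

```python
import numpy as np

x, tau = 0.7, 0.3                            # arbitrary point with x > 0

omega = np.linspace(0.0, 200.0, 2_000_001)   # e^{-200 x} is utterly negligible
f = np.exp(-omega * (x + 1j * tau))

# trapezoid rule for the omega-integral
d_omega = omega[1] - omega[0]
integral = (f.sum() - 0.5 * (f[0] + f[-1])) * d_omega

G_numeric = -1j / (2.0 * np.pi) * integral
G_closed = 1.0 / (2.0 * np.pi * (1j * x - tau))
```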
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the amplitude in the third region smaller than that in the first region? Please explain its physical significance. Consider a finite potential barrier and the case where the energy of the particle is lower than the height of the potential barrier. The figure is shown and we have three regions. I solved the Schrödinger equation in the three regions. I can see that in the third region and the first region we have plane wave solutions, but in the third region the amplitude of the wave is smaller than in the first region. Why does this happen? Please elaborate.
If it's about a particle moving from left to right, classically it wouldn't go past the barrier. However in quantum mechanics it's possible that it can do 'tunnelling' and there is a small probability that it can get to the other side. The amplitudes of the waves squared are proportional to the probability that the particle is at a given side of the barrier - so the amplitudes are showing that there is less chance that the particle is on the right than the left. The amplitude reduces as the wave passes through the barrier, so there is more chance of a particle tunnelling through a thinner barrier.
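The thin-versus-thick barrier statement can be made quantitative with the standard $E<V_0$ decay constant $\kappa=\sqrt{2m(V_0-E)}/\hbar$: inside the barrier the amplitude falls off like $e^{-\kappa L}$, so the transmission probability falls roughly like $e^{-2\kappa L}$. A sketch with made-up electron-scale numbers (this ignores the exact prefactor of the full transmission coefficient):

```python
import numpy as np

hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # electron mass, kg
eV   = 1.602176634e-19    # J

E, V0 = 1.0 * eV, 2.0 * eV                      # particle energy below the barrier height
kappa = np.sqrt(2.0 * m_e * (V0 - E)) / hbar    # decay constant inside the barrier

# rough transmission probabilities ~ e^{-2 kappa L} for two barrier widths
T_thin  = np.exp(-2.0 * kappa * 0.1e-9)   # 0.1 nm barrier
T_thick = np.exp(-2.0 * kappa * 0.5e-9)   # 0.5 nm barrier
```

Even a factor of five in barrier width changes the tunnelling probability by nearly two orders of magnitude here, which is the "thinner barrier, better chance" statement in numbers.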
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bremsstrahlung Radiation A thought experiment. Consider an electron falling into a black hole. To an external observer of the electron and the black hole, the electron accelerates, and should give off Bremsstrahlung radiation. From the electron's frame of reference, it is travelling along a geodesic in free fall, and so is not accelerating at all and doesn't generate Bremsstrahlung radiation. Which is the correct situation, and why?
You don't need a black hole for this thought experiment: just drop an electron from a height on the surface of the Earth, and you have exactly the same problem. The Equivalence Principle of General Relativity claims that such a system should be indistinguishable from an accelerated electron. However, Maxwell's Equations tell us that accelerated charges emit radiation, but no such radiation appears to be observed. This "thought experiment" has been studied since 1909, and Wikipedia even has an article about it, including a resolution. The bottom line is that such a charge does indeed radiate. Essentially, while in the charge's rest frame it would appear to not radiate, when one moves into the lab frame, this transformation is not a Lorentz Transformation, and it leads to a radiating solution in the lab frame.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Water drainage from AC to a bucket kept on floor I have a split AC at home; the water outlet pipe from it was extended and kept in a bucket so that the water doesn't make puddles on the floor. The AC mechanic told me that when the bucket fills with water and the pipe gets submerged, the water would stop flowing and it would start leaking from the split AC unit at the top. I argued against it and said gravity would take care of it even if the pipe gets submerged: since the water is coming from an upper level (at least an 8 foot difference), the water would flow down from the pipe into the bucket and the bucket should eventually overflow. It turns out the mechanic was right: as soon as the tip of the pipe got submerged, water started overflowing from the top. I still don't know why this happens. Can someone explain? Crude image of the situation: https://imgur.com/SQFTe8W
Basically, when the bottom pipe is submerged in the bucket, the pressure of the water surrounding the mouth of this pipe will be $$\tag 1 P=P_0+\rho gh$$ where $h$ is the depth of the pipe, $P_0$ is atmospheric pressure, $\rho$ is the density of water and $g$ is the acceleration due to gravity. Even though the water released at the top by the AC outlet is being pulled down by gravity, as $h$ in equation (1) increases there will come a point when the pressure in the pipe is exceeded by the pressure given by equation (1). The point when this happens does depend on the height of the AC unit, but when $P\gt P_{AC}$ (the pressure at the top), the water in the pipe will stop flowing downward, and the AC unit will leak at the top.
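Equation (1) in numbers (SI values; the submersion depth is illustrative):

```python
rho = 1000.0       # density of water, kg/m^3
g   = 9.81         # m/s^2
P0  = 101_325.0    # atmospheric pressure, Pa

def mouth_pressure(h):
    """Pressure at the submerged pipe mouth at depth h metres (equation 1)."""
    return P0 + rho * g * h

# every 10 cm of submersion adds roughly 1 kPa of back-pressure on the drain line
extra = mouth_pressure(0.10) - P0
```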
{ "language": "en", "url": "https://physics.stackexchange.com/questions/657116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Gravitational wave of two interacting masses Consider two masses $m_1$ and $m_2$ that are connected by a spring. Mass 1 follows the worldline $x_1(\tau)$ while mass 2 follows $x_2(\tau)$. Note that the argument $\tau$ is the proper time in the rest frame of mass 1 and in the rest frame of mass 2 respectively but we do not use a different symbol for both. I want to calculate how the system reacts to an incident gravitational wave (GW). Both worldlines fulfill the geodesic equation: $$m_1\frac{d^2x_1^\mu}{d\tau^2}=f^\mu_1-m_1\Gamma^\mu_{\nu\lambda}(x_1(\tau))\frac{dx_1^\nu}{d\tau}\frac{dx_1^\lambda}{d\tau}$$ $$m_2\frac{d^2x_2^\mu}{d\tau^2}=f^\mu_2-m_2\Gamma^\mu_{\nu\lambda}(x_2(\tau))\frac{dx_2^\nu}{d\tau}\frac{dx_2^\lambda}{d\tau}$$ where $f^\mu_1$ and $f^\mu_2$ are forces that act on the different masses. How do we calculate the four-forces? Do we need a specific frame for that?
> What is the freely falling system? Weber writes in his paper that it is the center of mass between the two masses. Why should the center of mass be freely falling in the presence of a GW?

In an old-fashioned Newtonian picture, you could say a "freely-falling" system has no forces acting on it besides gravity. As responsibly grown-up relativists, we should say a "freely-falling" system means one following a geodesic in space-time. Since gravitational waves are part of the metric, particles moving with a gravitational wave, without being acted on by an external force, are also following geodesics and so are also freely falling.

> For me this looks wrong since in a freely falling system $\Gamma=0$ and then the geodesic equations would look different?

As implied by my comment above, $\Gamma$ need not be zero for a freely falling observer. I didn't follow all of your algebra, but I am sure your last term $\partial_\alpha \Gamma^\mu_{\nu \rho} L^\alpha$ cannot be correct. The reason is that you should have an equation between tensors. The left hand side $\frac{D^2 \xi}{D \tau^2}$ is a tensor, and the first term $\sim R x^2$ is a tensor, but $\partial \Gamma L$ is not a tensor. Therefore, this equation can at best hold in one coordinate system. However there's nothing in your calculation that picks out a special coordinate system that explains why you would get a term that's not a tensor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/657236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Pressure exerted by an ideal gas according to kinetic theory of gases In my textbook and Wikipedia, I have observed that the force exerted on a wall of the container by one molecule is taken into account, such that $F=\frac{2mu}{\Delta t}$ (the momentum transferred per collision is $2mu$), where ${\Delta t}=\frac{2l}{u}$. But this change in time is the time required for a molecule to move from one wall to the opposite one and back. In a gas container, each gas molecule doesn't get to move this freely. Then why do we assume ${\Delta t}=\frac{2l}{u}$? Is it that the molecules remain in random motion and tend to maintain constant density all over the place, for which the statistical value of $\Delta t$ turns out to be the same? Another small question: were polyatomic molecules also considered as one sphere each in the kinetic theory of gases? Or was each atom a sphere, but not each molecule?
Your answer to the first question is correct. We assume individual atoms travel the whole way across. This is not true. They collide some (although much less often than you’d think; try creating two jets of air from fans and notice they seem to have no effect on each other. Like point one at your face, and then bring in another going sideways across that first stream, but not hitting you. No change is noticed, as if they don’t interact at all.) But if the collisions are perfectly elastic and no energy is lost during collision, the average of the entire situation is the same as each particle having to make the whole journey. The answer to the second question is that we model each molecule as a sphere; we don’t try to make shapes of atoms connected as spheres. More accurately, we don’t consider shape here. That was just for illustration. The only sense in which we pretend it’s a sphere is that the colliding particle is returned along the impact line, but that is it. We assume a point and mass and elastic collision. But to answer, the unit of analysis is a molecule not an atom.
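The pass-through equivalence described above can be checked numerically. Here is a minimal 1-D sketch (all numbers are illustrative) that counts the momentum delivered to one wall by non-interacting bouncing particles and compares it with the kinetic-theory prediction $F=\sum_i m u_i^2/L$:

```python
import random

# Minimal 1-D "gas in a box": non-interacting point particles bouncing
# elastically between walls at x=0 and x=L.  (For identical masses a 1-D
# elastic collision merely swaps velocities, which is statistically the
# same as the particles passing through each other, so ignoring
# collisions does not change the average wall force.)
random.seed(0)
m, L, dt, T = 1.0, 1.0, 2e-3, 80.0
N = 60
x = [random.uniform(0, L) for _ in range(N)]
v = [random.choice((-1, 1)) * random.uniform(0.5, 2.0) for _ in range(N)]

impulse = 0.0                      # momentum delivered to the right wall
for _ in range(int(T / dt)):
    for i in range(N):
        x[i] += v[i] * dt
        if x[i] > L:               # elastic bounce off the right wall
            x[i] = 2 * L - x[i]
            impulse += 2 * m * abs(v[i])
            v[i] = -v[i]
        elif x[i] < 0:             # elastic bounce off the left wall
            x[i] = -x[i]
            v[i] = -v[i]

F_sim = impulse / T
F_theory = m * sum(vi * vi for vi in v) / L   # per-particle F = m u^2 / L
print(F_sim / F_theory)            # close to 1
```

The agreement shows that the $\Delta t = 2L/u$ bookkeeping reproduces the time-averaged wall force.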
{ "language": "en", "url": "https://physics.stackexchange.com/questions/657905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the relation between the frequency vector and the Nyquist frequency? When trying to comprehend the concept of Nyquist frequency in FFT, I came across the following definition for half of the frequency range: $$f = -f_{n}/2:df:f_{n}/2-1;$$ where $f$ represents the frequency vector and $f_n$ represents the Nyquist frequency. Why only take half of the frequency range into consideration? And why specifically this half and not start at the 0 point? I understand that the Nyquist frequency is used in signal processing to avoid aliasing, but how was the above range set as a relation between the frequency vector and the Nyquist frequency?
Why only take half of the frequency range into consideration? This is actually the full frequency range. Any other frequency you can think of outside this range is an alias of some frequency in this range. And why specifically this half and not start at the 0 point? It's an arbitrary choice. One reason to do it this way is that if you are using the discrete Fourier transform (the FFT being one method of calculating the DFT) as an approximation for a continuous-time Fourier transform, then using the range $[-f_n/2,f_n/2]$ gives an approximation of what would be measured over the same frequency range in the continuous-time system. If you instead use the range $[0,f_n]$, the person examining the data has to remember that in the continuous-time system being approximated, the features seen in the range $[f_n/2, f_n]$ would actually occur (assuming the sampling rate is above the Nyquist rate) in the range $[-f_n/2,0]$. And why do we take 1 in the end, instead of leaving it solely as fn/2? That "1" should actually be $df$. Or else $f_n/2-1$ should have been $f_{n/2-1}$. You need to subtract it off because the frequency $f_n/2$ is an alias of $-f_n/2$, so the DFT does not provide separate results for both endpoints.
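For concreteness, here is a small numpy sketch (assuming, as the $\pm f_n/2$ endpoints suggest, that $f_n$ in the question denotes the sampling rate): `fftshift(fftfreq(...))` produces exactly the vector $-f_n/2 : df : f_n/2 - df$.

```python
import numpy as np

fn = 8.0            # sampling rate (Hz); the Nyquist frequency is fn/2 = 4 Hz
N = 8               # number of samples
df = fn / N         # frequency resolution (here 1 Hz)

# fftfreq gives [0, ..., positive, negative, ...]; fftshift reorders it
# so the axis runs from -fn/2 up to fn/2 - df.
f = np.fft.fftshift(np.fft.fftfreq(N, d=1 / fn))
print(f)            # [-4. -3. -2. -1.  0.  1.  2.  3.]
```

Note that $+4$ Hz does not appear: it is the same DFT bin as $-4$ Hz, which is why the upper endpoint stops one $df$ short.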
{ "language": "en", "url": "https://physics.stackexchange.com/questions/658268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do geodesics explain two identical balls thrown up at the different speeds? As stated in the title, two identical balls, both thrown directly upward, but at different speeds. The slower ball will reverse direction at a lower height than the faster ball. But the curvature of spacetime that they are passing through would be nearly identical. How do geodesics explain these two paths? Another way of looking at the same issue: Two identical balls dropped, but from different heights. Both balls travel straight down but hit the ground at different velocities. Each ball should be passing through the same spacetime curvature at the point of impact. Since neither ball experienced a force or acceleration and their motion is purely a product of the curvature of spacetime, why are they traveling a different speeds at impact?
You must take into account the time dimension. You are assuming that since both balls were thrown upward from the same location on the same gravitating mass, they will follow the same geodesic. This is not true. You must take into account the time dimension (in addition to space). When you do this, you see that they do not follow the same path in space-time. That is, it takes less time for the ball thrown faster to reach a certain height. The other ball will take longer to reach this point (or may never reach it, because it may have already reached its maximum height and may be falling downward). Therefore, the two balls follow different paths in space-time, and their motion is determined by two different geodesics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/658395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Is there a relationship between quantum physics and chaos theory on a classical scale? Im a complete physics lay person and I read somewhere that chaotic systems are subject to tiny differences in initial conditions and that the brain is a chaotic system. Does that mean our thoughts are subject to quantum randomness?
We don’t know to what extent (if at all) our thoughts are influenced by quantum phenomena. Some scientists, including Nobel laureate Roger Penrose, are convinced that quantum phenomena are a fundamental component of consciousness. Others, including Max Tegmark, argue that neurons are too large and too slow to be significantly affected by quantum-level events.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/658544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
What happens within an extended object passing the event horizon of a black hole? I hope this question is not too infantile: Falling into a Black Hole (intentionally or by accident) and thereby passing the Schwarzschild radius is very often said to be "nothing special" in the view of the falling observer. Tidal forces can be neglected when the mass is only big enough. But let’s say you are in a starship which is 100m long. There is a point in time, where the head of the rocket is inside the event horizon and the tail outside. Then, it cannot be possible to switch on a light on the tail by moving a switch on the head, because this would mean, that a signal is sent out of the Schwarzschild radius. Same for communicating to the tail or even controlling the movement of your legs (outside) from the brain (inside). I cannot imagine that this absurd situation is indeed the case - what is wrong with this funny view? Is inside and outside not defined in the view of the observer?
I'm inclined to say that your "funny" view is correct. It is impossible to transmit information out of a black hole if the information originates inside the event horizon. You can't turn on a light on the tail by sending a signal from the head, because what will carry the signal? Light cannot escape the event horizon. Neither can electrons, which would carry electric currents. Neither can you send a signal from your brain to your legs, again because your body's molecules cannot escape the event horizon. Of course, all known materials would be ripped apart if they straddled an event horizon. Atoms are held together by forces between the nucleus and electrons, and photons of light carry the force between them. If the light cannot travel from the nucleus to the electron or vice versa because that atom straddles the event horizon, the atom itself would be ripped apart.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/658782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Can we feel heat in outer space? Is there air outside of earth atmosphere? If not, could we feel heat coming from sun?
The pressure in space is very low as there are very few atoms per unit volume (compared to the atmosphere). As a result, convection and conduction do not work for heat transfer, but radiation still works. Without sunlight, there is only the 2.7 K cosmic microwave background radiation from the big bang, so it is very cold (about -270 degrees Celsius). With sunlight you get quite warm, because the only way for you to lose heat is again to radiate it away: vacuum is a good insulator. In this context, it is worth mentioning that due to the low pressure around you, the boiling point is very low and liquids in your body, like blood, will boil very fast. There are interesting questions and answers about what actually happens if an astronaut takes off his helmet. Here is just one from another forum as an example: https://www.quora.com/What-would-happen-to-an-astronaut-who-took-their-helmet-off-in-space-for-a-few-seconds.
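The "radiation only" balance can be made quantitative. A sketch for an idealized blackbody sphere at Earth's distance from the Sun (ignoring albedo and internal heat sources):

```python
# Equilibrium temperature of an idealized blackbody sphere at 1 AU:
# it absorbs sunlight over its cross-section (pi r^2) and radiates from
# its whole surface (4 pi r^2), so
#   sigma * T^4 * 4*pi*r^2 = S * pi*r^2   =>   T = (S / (4*sigma))**0.25
S = 1361.0        # solar constant at 1 AU, W/m^2
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

T_sunlit = (S / (4 * sigma)) ** 0.25
T_shade = 2.7     # without sunlight: the cosmic microwave background

print(T_sunlit)   # ~278 K, i.e. around +5 degrees Celsius
```

So a body in full sunlight near Earth's orbit settles around room-temperature-ish values, while in shadow it radiates down toward the CMB temperature.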
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Perturbative expansion of $\varphi^4$ theory and Green's functions I'm working in the $\varphi^4$ QFT where $$S(\varphi)=\frac{1}{2} \mu \varphi^{2}+\frac{1}{4 !} \lambda_{4} \varphi^{4}$$ and the text says that we can expand (assuming small $\lambda_4$) as $$\exp (-S(\varphi))=\exp \left(-\frac{1}{2} \mu \varphi^{2}\right) \sum_{k \geq 0} \frac{1}{k !}\left(-\frac{\lambda_{4}}{24}\right)^{k} \varphi^{4 k}$$ which makes sense, but then it says that we interchange the series expansion in $\lambda_4$ with an integration over $\varphi$ and arrive at the following Green's functions $$G_{2 n}=H_{2 n} / H_{0}$$ where $$H_{2 n}=\frac{1}{\mu^{n}} \sum_{k \geq 0} \frac{(4 k+2 n) !}{2^{2 k+n}(2 k+n) ! k !}\left(-\frac{\lambda_{4}}{24 \mu^{2}}\right)^{k}$$ and for the life of me I can't figure out how they are getting to this. I understand how it worked for the free theory, where $$\begin{aligned} Z(J) &=N \int \exp \left(-\frac{1}{2} \mu \varphi^{2}+J \varphi\right) d \varphi \\ &=N \int \exp \left(-\frac{1}{2} \mu\left(\varphi-\frac{J}{\mu}\right)^{2}+\frac{J^{2}}{2 \mu}\right) d \varphi=\exp \left(\frac{J^{2}}{2 \mu}\right) \end{aligned}$$ and we can use the taylor expansion of $\exp \left(\frac{J^{2}}{2 \mu}\right)$ to get $$G_{2 n}=\frac{(2 n) !}{2^{n} n !} \frac{1}{\mu^{n}}$$ But I'm miffed how I can't get to the result stated for the $\varphi^4$ theory. If I were to follow the logic directly from the free theory case, then I wouldn't even taylor expand to get $$\exp (-S(\varphi))=\exp \left(-\frac{1}{2} \mu \varphi^{2}\right) \sum_{k \geq 0} \frac{1}{k !}\left(-\frac{\lambda_{4}}{24}\right)^{k} \varphi^{4 k}$$ but instead complete the square on this expression. Any insights would be greatly appreciated!
Here are some tips (to do in order): * *Swap the order of integration and summation (truncate the series first since we cannot do this with an infinite series and we are just forming an asymptotic expansion anyway). *Change the integration variable to $x=\frac{\mu \phi^2}{2}$. *Use the definition of the gamma function. *Use the Legendre duplication formula (or the explicit form of $\Gamma(m+\tfrac12)$) to turn the gamma functions into the factorials in the stated result. (just google the definitions of 3 and 4 if you need to). Let me know if you need any more guidance!
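As a sanity check on the quoted series (in this zero-dimensional toy model the "path integral" is an ordinary integral), one can compare the truncated $H_{2n}$ against direct numerical integration. A sketch with illustrative values $\mu=1$, $\lambda_4=0.1$; the series is only asymptotic, so it is truncated at small $k$:

```python
import math

mu, lam = 1.0, 0.1

def S(phi):
    return 0.5 * mu * phi**2 + lam / 24.0 * phi**4

def moment(p, a=-10.0, b=10.0, n=20000):
    """Numerical integral of phi^p * exp(-S(phi)) (composite Simpson)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        phi = a + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * phi**p * math.exp(-S(phi))
    return total * h / 3.0

def H(two_n, kmax=6):
    """Truncated perturbative series for H_{2n} from the text."""
    n = two_n // 2
    s = 0.0
    for k in range(kmax + 1):
        s += (math.factorial(4 * k + 2 * n)
              / (2**(2 * k + n) * math.factorial(2 * k + n) * math.factorial(k))
              * (-lam / (24.0 * mu**2))**k)
    return s / mu**n

G2_exact = moment(2) / moment(0)   # <phi^2> from the integral
G2_series = H(2) / H(0)            # the same from the series
print(G2_exact, G2_series)         # both ~0.955, below the free value 1/mu = 1
```

The two agree to better than a part in a thousand at this coupling, which is a useful check that the combinatorial factors in $H_{2n}$ are right.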
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
About the "worst prediction in all of physics": the cosmological problem Physicist Sabine Hossenfelder on YouTube just recently posted a video about vacuum energy, the cosmological constant and the "worst prediction" in physics. The worst prediction in physics refers in this case to the enormous discrepancy of a factor of $10^{120}$ between the measured value of the cosmological constant and the calculated vacuum energy density from quantum field theory. This is known as cosmological constant problem. What's confusing is that she claims this isn't a problem at all, because "the cosmological constant has nothing to do with the vacuum energy from QFT". As I understand, she says that the cosmological constant is simply a natural constant related to spacetime itself, just like, say, the gravitational constant. This seems to go against everything I've heard about this problem from other physicists. To my understanding, it isn't wrong to interpret the cosmological constant as just a constant in the Einstein equations, but the hope was to explain the measured value using QFT. Moreover, if space really has such a huge vacuum energy density, then it should have been completely "ripped" apart by now. Not to mention that there are other interpretations of dark energy apart from a pure cosmological constant. What is really the deal with all of this? Note: related videos are Are Dark Matter and Dark Energy Scientific?, Dark Energy might not exist after all. The video in the first link is Physicist Despairs over Vacuum Energy.
The 120 orders of magnitude discrepancy between the enormous value predicted by QED and the tiny value actually measured via the cosmological constant is due to the fact that the dark energy of free space is hidden from our time-like spacetime. We see only the tip of the iceberg in the form of dark energy's noise, thus the ZPF (zero-point field) energy. Beyond the ZPF vacuum energy we have normal matter and light in our Universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 8, "answer_id": 6 }
Stress developed in a Hoop due to rotation In the given question with constant angular velocity. It asks us to find longitudinal stress at each of the positions. Now I'm not even sure what longitudinal stress is but here is what I tried as A is the farthest it will have the highest Radius and hence highest centripetal force. Therefore component of tension should be the highest at A. But the answer is given is that it's minimum at A. Can someone tell me where am I going wrong?
I would think that the tensions at points B and C should combine to produce the centripetal acceleration of the center of mass for the right side of the hoop. I currently have no idea how you would find the tensions at O and A.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Magnetic moment and angular momentum of electron I recently got to know about something really interesting. These are as follows: 1: The magnetic moment of an electron is, $\cfrac{ev}{2πr}$, where $e$ is the charge of the electron, $v$ is its velocity, and $r$ is the radius of the orbit it revolves. 2: The direction of the magnetic moment of the electron is anti-parallel to the direction of angular momentum. 3: The ratio $\cfrac ML$, where M is the magnetic moment, and L is the angular momentum, is constant $\cfrac e{2m}$. Are these facts somewhat or in some way related to the spin quantum number?
What you described is called the gyromagnetic ratio between magnetic moment ($M$) and orbital angular momentum ($L$) of the electron $$\gamma_l=\frac{M}{L}=\frac{e}{2m}$$ where $e$ and $m$ are charge and mass of the electron. However this relation holds only for the orbital angular momentum. The corresponding relation for the spin angular momentum ($S$) experimentally turned out to be $$\gamma_s=\frac{M}{S}=\frac{e}{m}$$ Notably, this ratio is not the same, but double of the $\gamma_l$ above.
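Plugging in CODATA values for the electron's charge and mass gives the numbers explicitly (a quick sketch):

```python
# Numerical values of the two gyromagnetic ratios for the electron.
e = 1.602176634e-19      # elementary charge, C
m = 9.1093837015e-31     # electron mass, kg

gamma_l = e / (2 * m)    # orbital:  M/L = e/2m  (~8.8e10 C/kg)
gamma_s = e / m          # spin:     M/S = e/m

print(gamma_l, gamma_s / gamma_l)   # the ratio is 2 in this simple model
```

The factor of 2 is the electron's g-factor in this simple treatment; QED corrections shift it slightly above 2.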
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the Heisenberg equation invariant under unitary transformation of momentum and position operators? Let $P$ be the momentum operator, $Q$ the position operator and $H\left(Q,P\right)$ the Hamiltonian operator. The Heisenberg equation of motion is $$\dot{P}=-i\left[P,H\left(Q,P\right)\right] \tag 1$$ Suppose we have $P=U^\dagger P'U$ and $Q=U^\dagger Q'U$. If substitute this in $(1)$ we obtain $$\frac{d}{dt}\left(U^\dagger P' U\right)=-iU^\dagger \left[P',H'\left(Q',P'\right)\right]U \tag2$$ Now from $(2)$, in order for the Heisenberg equation to be invariant, we should have $$\frac{d}{dt}\left(U^\dagger P'U\right)=U^\dagger \dot{P'}U \tag 3$$ Expanding the rhs of $(3)$ we obtain $$\frac{d}{dt}\left(U^\dagger P'U\right)=\frac{d}{dt}\left(U^\dagger\right)P'U+U^\dagger \dot{P'}U+U^\dagger P'\frac{d}{dt}U$$ Finally in order to $(3)$ to be true we should have $$\frac{d}{dt}\left(U^\dagger\right)P'U+U^\dagger P'\frac{d}{dt}U=0 \tag 4$$ Why is this last expression true?
Every unitary operator $U$ that depends on time can be written in the form $U=e^{if(t)A}$ where $A$ is hermitian and $f:\mathbb{R}\rightarrow\mathbb{R}$ is a sufficiently regular function of time. We then have $U^{\dagger}=e^{-if(t)A}$. Thus we have $\dot{U}=i\dot{f}Ae^{if(t)A}=i\dot{f}AU$ and $\dot{U^{\dagger}}=-i\dot{f}Ae^{-if(t)A}=-i\dot{f}AU^{\dagger}$. Substituting into $(4)$ and using that $A$ commutes with $U$ and $U^{\dagger}$, the left-hand side becomes $i\dot{f}\,U^{\dagger}[P',A]U$, so $(4)$ is solved as long as $A$ and $P'$ commute.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A lattice of synchronized clocks Over the years I have seen this image which always confused me: (from Wikipedia Spacetime) "In special relativity, the observer measures events against an infinite latticework of synchronized clocks." This sounds needlessly artificial and abstract. Let me take a stab at what they are trying to say. Say the event is a star exploding which is 1 billion light years away. The light reaches my eyeballs 1 billion years after the event happened. Therefore, one of these synchronized clocks mis-times the event. Do these clocks account for the time taken for the light from events to reach our eyeballs? In the exact same frame, I could have an observer 1 billion light years away right next to the star when it exploded registering a "more correct" time. So the observers disagree on the timing of the events, even though they are in the same frame? Or does the far-away observer back-calculate the time of the event, knowing the distance to the star, and then they agree? Now that I think about it, are all these clocks actually out of synch, but appear to be in synch by the time the light from all of them has reached the eyeballs of our observer?
All it means is that in your stationary frame you have a plane of simultaneity, so if it is 2:37pm where you are it is 2:37pm everywhere in your frame of reference. For example, if it is 2:37pm where you are 'now' it is also 2:37pm 'now' on Jupiter, which is about 33 light minutes away. If light from Jupiter arrives at you now at 2:37pm, then in your frame of reference it must have left Jupiter 33 minutes ago at 2:04pm. So all it really means is that in your inertial rest frame it is the same time everywhere. To take your exploding star example, if it is 2:37pm on August 18th 2021 when you see the explosion, then it is also 'now' the same time and date a billion light years away, and the star would have exploded a billion years earlier.
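The back-calculation in the Jupiter example is just one subtraction. A trivial sketch (times given in minutes since midnight):

```python
# In the observer's rest frame "now" is the same everywhere, so an event
# seen at arrival_time happened distance/c earlier.
c = 1.0  # work in light-minutes per minute, so c = 1

def emission_time(arrival_time_min, distance_lmin):
    return arrival_time_min - distance_lmin / c

# Light from Jupiter (~33 light-minutes away) arriving at 2:37 pm
# (minute 877 of the day) left at 2:04 pm (minute 844).
print(emission_time(877, 33))   # 844
```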
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 0 }
How to treat pointlike objects in General Relativity? In general relativity we usually treat falling bodies and most small objects as pointlike. It is then enough to solve the geodesic equation in order to predict their motion. However, it appears to me that a pointlike object in General Relativity should always give rise to a Black Hole. Is that the case? And if so, how is that compatible with our treatment?
Even an extended body has a metric that appears to be that of a black hole far enough from the body. For example, the Schwarzschild interior solution is an exact solution for the interior of an incompressible fluid, which can be matched exactly to the exterior Schwarzschild solution (i.e. the Schwarzschild black hole solution) at the edge of the fluid. So if your particle's mass is very small, then its Schwarzschild radius is very small, and unless you can get very close to the particle, you cannot really tell the difference for most purposes between this and a "not-quite-point-like" object with a very small spatial extent. If your point particle has a large mass, then you probably do want to think of it as a black hole anyway. If you somehow have a situation that's between these extremes, then you probably need a theory of quantum gravity, which, of course, we don't yet have in consistent form.
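To see how small "very small" is, a quick sketch computing the Schwarzschild radius $r_s = 2Gm/c^2$:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(m_kg):
    return 2 * G * m_kg / c**2

r_1kg = schwarzschild_radius(1.0)        # ~1.5e-27 m: far below any
r_earth = schwarzschild_radius(5.97e24)  # measurable scale; Earth: ~9 mm
print(r_1kg, r_earth)
```

For laboratory-scale masses the horizon scale is many orders of magnitude below even the Planck length, which is why treating such objects as point particles on geodesics causes no trouble in practice.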
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Deriving the equivalent capacitance in a series circuit formula When we derive the formula for the effective capacitance in series, we say: $$Q/C_{eqv} = Q/C_1 + Q/C_2 + Q/C_3$$ (if there were 3 capacitors in this case). We would then cancel $Q$ to obtain the formula. I understand why each capacitor has the same charge, but why does the effective capacitor have the same charge as each individual capacitor? I'd expect the effective capacitor to store a total charge of 3Q (in the given example), not Q? When the capacitor discharges, would the overall amount of charge released not be 3Q (i.e. the overall charge of the capacitors)? I saw a similar question on here, and it was answered by explaining that the 'inner capacitors' are isolated from the rest of the circuit, and the +Q and -Q charges cancel? But even so, the isolated charges can trigger electron flow from the 'outer capacitors' during discharge. If anyone can clear up these doubts, I would be grateful.
Just think of one capacitor with the plates some distance apart. Now put a conducting plate in the middle without touching the two outer plates. This plate will have positive charges on one side and negative charges on the other, and the arrangement is now the same as two capacitors in series. But you cannot extract any of the charge on the inner plate. If you connect the outer plates, only the charges on the outer plates will drive a current through the external circuit; the inner plate just lets its separated charges flow back to its other side. So it is true that you will have a current from one side of the inner plate to the other, but no charge goes outside, since there was never any net charge on the inner plate to begin with: the + and - were just separated.
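Numerically, the point is that the source only ever moves a single $Q$. A sketch with illustrative values:

```python
# Series capacitors: the charge Q on the equivalent capacitor equals the
# charge on each individual capacitor, not their sum -- the inner plates'
# +Q and -Q are separated charge that never leaves the inner island.
def series_capacitance(caps):
    return 1.0 / sum(1.0 / c for c in caps)

caps = [1e-6, 2e-6, 3e-6]      # farads
V = 12.0                       # volts across the whole chain

C_eq = series_capacitance(caps)
Q = C_eq * V                   # charge delivered by the source

# the same Q sits on every capacitor; the individual voltages add to V
voltages = [Q / c for c in caps]
print(Q, sum(voltages))        # sum of voltages == 12.0
```

On discharge, the external circuit likewise sees only $Q$, not $3Q$.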
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Virtual displacement for a block sliding down a wedge A block slides on a frictionless wedge which rests on a smooth horizontal plane. There are two constraints in this system. One that the wedge can only move horizontally and another that the block must remain in contact with the wedge. We want to find the virtual displacements for the two block system. To find those virtual displacements we imagine to freeze the constraints and look for the possible displacements. Now if I freeze the constraints then the wedge cannot move. The only possible motion is that the small block slides parallal along the incline. However I have found on many articles online that there is a virtual displacement for the wedge as well. This confuses me how to view the virtual displacements in this case. Can anyone please explain this.
If the block is accelerated down the ramp, it gains momentum, but momentum is conserved, so the wedge must move in the other direction. This can be calculated using Lagrangian mechanics as follows: * *The kinetic energies of the block ($b$) and the wedge ($w$) are $$ T_b = \frac 12 m_b (({\underbrace{\dot d \cos(\alpha) - \dot x}_{\text{net velocity of the block in $x$-direction}}})^2 + \dot d^2 \sin^2(\alpha))~, \qquad T_w = \frac 12 m_w \dot x^2~, $$ where $m$ denotes the mass, $d$ is as in your sketch and $x$ is the coordinate parallel to the horizontal plane. *The potential energies are $$ V_b = m_b g (h - d\sin(\alpha))~, \qquad V_w = 0~, $$ where $h$ denotes the maximum height of the wedge. *The Lagrangian is $$ \mathcal L = T_b + T_w - V_b - V_w = \frac 12 m_b (\dot d \cos(\alpha) - \dot x)^2 + \frac 12 m_b \dot d^2 \sin^2(\alpha) + \frac 12 m_w \dot x^2 - m_b g (h - d\sin(\alpha))~, $$ and $d,x$ are the independent variables. Because of $$ (\dot d \cos(\alpha) - \dot x)^2 = \dot d^2 \cos^2(\alpha) - 2 \dot d \dot x\cos(\alpha) + \dot x^2~, $$ and $\sin^2(\alpha) + \cos^2(\alpha) = 1$, it holds $$ \mathcal L = \frac 12 m_b \dot d^2 - m_b \dot d \dot x \cos(\alpha) + \frac 12 (m_b + m_w) \dot x^2 - m_bg(h - d \sin(\alpha))~. $$ *The Euler-Lagrange equations are $$ \partial_t \partial_{\dot d} \mathcal L = \partial_d \mathcal L \qquad \Leftrightarrow \qquad m_b \ddot d - m_b \ddot x \cos(\alpha) = m_b g \sin(\alpha)~, \tag{1} $$ $$ \partial_t \partial_{\dot x} \mathcal L = \partial_x \mathcal L \qquad \Leftrightarrow \qquad -m_b \ddot d \cos(\alpha) + (m_b + m_w) \ddot x = 0~. \tag{2} $$ *From equation (2) there follows $$ \ddot x = \frac{m_b \cos(\alpha)}{m_b + m_w}\, \ddot d~, \tag{3} $$ which can be plugged into (1) to obtain $$ m_b \ddot d - \frac{m_b^2 \cos^2(\alpha)}{m_b + m_w}\, \ddot d = m_b g \sin(\alpha)~. $$ This can be solved for $\ddot d$: $$ \ddot d = \frac{ g \sin (\alpha) }{1 - \frac{m_b}{m_b + m_w} \cos^2(\alpha)} = \frac{(m_b + m_w)\, g \sin(\alpha)}{m_w + m_b \sin^2(\alpha)}~. $$ And with (3) we find $$ \ddot x = \frac{m_b\, g \sin(\alpha) \cos(\alpha)}{m_w + m_b \sin^2(\alpha)}~. $$ Since the block's horizontal velocity was written as $\dot d \cos(\alpha) - \dot x$, the wedge's own velocity in this parametrization is $-\dot x$: the wedge accelerates opposite to the block's horizontal motion, just as momentum conservation demands. Indeed, $m_b(\ddot d \cos(\alpha) - \ddot x) - m_w \ddot x = 0$ is exactly equation (2).
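A quick numeric sanity check on the standard closed-form result for this textbook problem (the formulas below are stated as assumptions and verified against horizontal momentum conservation and the fixed-wedge limit):

```python
import math

# Standard closed-form accelerations for a block (mass mb) sliding on a
# frictionless wedge (mass mw, incline angle alpha) that is itself free
# to slide on a frictionless floor:
#   d'' = (mb + mw) g sin(a) / (mw + mb sin^2(a))  (block along the
#                                                   incline, relative
#                                                   to the wedge)
#   x'' =  mb g sin(a) cos(a) / (mw + mb sin^2(a)) (wedge recoil)
def accelerations(mb, mw, alpha, g=9.81):
    D = mw + mb * math.sin(alpha) ** 2
    ddd = (mb + mw) * g * math.sin(alpha) / D
    xdd = mb * g * math.sin(alpha) * math.cos(alpha) / D
    return ddd, xdd

mb, mw, alpha = 2.0, 5.0, math.radians(30)
ddd, xdd = accelerations(mb, mw, alpha)

# Horizontal momentum conservation: the block's horizontal acceleration
# is ddd*cos(alpha) - xdd while the wedge recoils with -xdd.
p_dot = mb * (ddd * math.cos(alpha) - xdd) + mw * (-xdd)
print(p_dot)                       # ~0

# Fixed-wedge limit: as mw -> infinity, ddd -> g sin(alpha)
ddd_inf, _ = accelerations(mb, 1e12, alpha)
print(ddd_inf, 9.81 * math.sin(alpha))
```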
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does transformer with load drain more power from source than just load? Let's consider an AC source of $\rm 12\ V$ and a load of $\rm 12\ Ohm$. The current is $1\ A$ and the power is $\rm 12\ W$. Now, we have the same source and the same load, but now there is an ideal transformer in the path with a turns ratio $p=1/100$. Then the current through the load should be: $I=U_{\rm source}/(p\, R_{\rm load})=100\ \rm A$ And the power is: $\frac {U_{\rm source}I_{\rm load}}{p}=120\,000\ \rm W$ So, it seems that a transformer with a load drains more power from the source than just the load directly connected to the source. Am I right? If so, it also means that a source which would not kill me directly can kill me through a transformer?
A load is normally given in units of power. A resistance becomes a load when you apply a particular voltage. So, no, the load (power) doesn't change across an ideal transformer. But the power across a resistance does vary as you change the voltage. It is definitely possible for a transformer to make an ideal voltage source more dangerous. I just wouldn't use the term "load" in describing that scenario.
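A sketch of the arithmetic in the question (assuming an ideal transformer and an ideal, zero-impedance source):

```python
# Same 12 V source and 12-ohm resistance, with and without a step-up
# transformer of turns ratio p = Np/Ns = 1/100.
V_src, R = 12.0, 12.0
p = 1.0 / 100.0                 # primary:secondary turns ratio

# direct connection
I_direct = V_src / R
P_direct = V_src * I_direct     # 12 W

# through the ideal transformer: secondary voltage is V_src / p
V_sec = V_src / p               # 1200 V
I_sec = V_sec / R               # 100 A through the resistance
P_load = V_sec * I_sec          # 120 kW dissipated in the resistance
I_pri = I_sec / p               # 10 kA drawn from the ideal source

print(P_direct, P_load, I_pri)
```

The same fixed resistance dissipates 10,000 times more power at 100 times the voltage, and the ideal source is assumed able to supply the 10 kA primary current; a real source's internal impedance would limit this drastically.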
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A bowling ball on an infinitely long track We know that after a bowling ball is thrown out with a certain velocity onto a non-smooth track, it first skids, since the translational velocity of the center of mass (decelerating due to friction) is greater than the tangential velocity of the point of contact of the ball with the floor, which points in the opposite direction. But after some time the torque due to friction causes the angular velocity to increase, and eventually the tangential velocity of the point of contact reaches the same value as the translational velocity, so the contact point has $v=0$ relative to the ground; the ball starts rolling without slipping, and eventually comes to a stop. My question here is: after rolling without slipping is achieved, how do the translational kinetic energy and rotational kinetic energy change? How does the translational kinetic energy decrease while the rotational kinetic energy is increased by the torque due to friction?
My question here is after rolling without slipping is achieve how does the translational kinetic energy and rotational kinetic energy change? How does translational kinetic energy decrease while rotational kinetic energy is increase by torque due to friction? Once the ball is rolling without slip, it will roll forever, with its translational and rotational kinetic energy unchanged. There will be no friction force acting at the point of contact, so long as the track is flat and horizontal.
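The skid-then-roll phase described in the question can be simulated directly. A minimal sketch for a solid sphere ($I=\frac{2}{5}mr^2$) with illustrative numbers, showing the well-known result that rolling sets in at $v=\frac{5}{7}v_0$, after which friction does no more work and both kinetic energies stay constant:

```python
# Time-stepped sketch of a ball thrown with pure sliding: kinetic
# friction acts until v = omega*r, after which the contact point is at
# rest and friction stops doing work.
mu, g, m, r = 0.2, 9.81, 6.0, 0.11
I = 0.4 * m * r * r               # solid sphere: I = 2/5 m r^2
v, w = 8.0, 0.0                   # initial: sliding, no spin
dt = 1e-4

while v > w * r + 1e-9:           # skidding phase
    v -= mu * g * dt              # friction decelerates the CoM...
    w += mu * m * g * r / I * dt  # ...while its torque spins the ball up

KE_trans = 0.5 * m * v * v        # was 0.5*m*8^2; has decreased
KE_rot = 0.5 * I * w * w          # was 0; has increased
print(v, 5 * 8.0 / 7)             # v settles at 5/7 of the launch speed
print(v, w * r)                   # rolling without slipping: v == w*r
```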
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Is a westward flying plane heavier than an eastward one? I understand that you weigh less at the equator due to the increased centrifugal force. From my understanding, the faster you circle the Earth, the less your effective normal force you would feel, up to the point where you are orbiting the planet and the force becomes zero. The Earth rotates about 1000mph near the equator, faster than an average commercial plane. It seems to me then that a commercial plane travelling west near the equator would cancel some of the centrifugal force of the Earth's rotation and become heavier compared to an eastward flying plane. Would it?
The airplanes experience identical centrifugal forces at the equator, regardless of their flight direction. The equation of motion in a frame fixed to the surface of the Earth is: $$\vec F -2m\vec{\Omega}\times\vec{v}-m\vec{\Omega}\times(\vec{\Omega}\times\vec{r})=m\vec a$$ If we put that on the equator in standard North-East-Down (NED) coordinates, the centrifugal term becomes a negative force in the down direction: $$ F_D^{\rm cent} = -m\Omega^2 a \approx -m\frac{g}{289}$$ where $a=6378137\,m$ is the semi-major axis, $g=9.80665\,$m/s$^2$ is standard gravity, and $\Omega=2\pi/{\rm day}$ is the angular frequency. Clearly, the velocity doesn't enter the expression for the centrifugal force, so it can't depend on the velocity in the east direction. For eastward velocities $\pm v_E$ (with $v_E>0$), the cross product in the Coriolis force is also strictly up or down: $$ F_D^{\rm Cor} = \mp 2mv_E \Omega \approx \mp mv_E\frac g {67238}$$ Hence the east (west) bound plane is pushed up (down), and thus weighs less (more) than a stationary plane. This is known as the Eötvös effect, which is expressed as a reduction (increase) in the gravity for east (west) bound ships...and now of course, planes. In an inertial frame, there are neither centrifugal nor Coriolis forces. The difference in apparent weight is instead due to the different centripetal forces required; the difference in the downward force between the east- and west-bound planes is: $$ \Delta F_D = -m\frac{(a\Omega + v_E)^2}{a} + m\frac{(a\Omega - v_E)^2}{a}$$ $$ \Delta F_D = -\frac{m}{a}\big[((a\Omega)^2+2v_Ea\Omega+v_E^2)- ((a\Omega)^2-2v_Ea\Omega+v_E^2)\big]$$ $$ \Delta F_D = -\frac{m}{a}\big[4v_Ea\Omega\big]$$ $$ \Delta F_D = -4mv_E\Omega$$ which is exactly the difference given by the Coriolis force (when comparing $+v_E$ with $-v_E$; the earlier expression is for $\pm v_E$ separately, hence the factor of 2 difference).
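Putting numbers in (the cruise speed and aircraft mass below are illustrative assumptions):

```python
import math

# Eotvos effect for an airliner at the equator: fractional weight change
# 2*v*Omega/g for flight at speed v east (lighter) or west (heavier).
g = 9.80665
Omega = 2 * math.pi / 86164.0    # sidereal day, s
v = 250.0                        # typical cruise speed, m/s

frac = 2 * v * Omega / g
print(frac)                      # ~0.37% of the weight

# difference between the west- and east-bound planes: 4 m v Omega
m_plane = 70000.0                # kg, hypothetical airliner mass
dF = 4 * m_plane * v * Omega
print(dF)                        # ~5.1 kN, roughly the weight of 500 kg
```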
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 3, "answer_id": 0 }
Is it more efficient to generate heat burning electrolyzed hydrogen, or through an electric resistance? It's convenient and simple to use electricity to generate heat, doubtlessly, say in an electric kettle or boiler or heater, but is it more efficient to generate heat burning electrolyzed hydrogen, or through the resistance (kettle / boiler / heater)? Assume the same amount of electricity was used.
According to this wikipedia article, electrolysis has an efficiency between 70% and 80%, so not all of the electric energy used for electrolysis would go into dissociating water molecules; some would be lost as heat. Therefore, the energy that can be obtained from burning the hydrogen obtained from electrolysis would be less than the electric energy spent for electrolysis. A resistive heater, by contrast, converts essentially all of the electrical energy directly into heat, so direct resistive heating is the more efficient route.
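A back-of-the-envelope comparison makes the point; the 75% figure below is an assumed midpoint of the 70-80% range quoted above:

```python
# Compare 1 kWh of electricity used two ways (illustrative numbers only).
electric_energy_kwh = 1.0

# Resistive heating: essentially all electrical energy becomes heat.
heat_resistive = electric_energy_kwh * 1.0

# Electrolysis route: only ~70-80% of the energy ends up stored as chemical
# energy in the hydrogen, so burning it releases at most that much heat.
electrolysis_efficiency = 0.75   # assumed midpoint of the quoted range
heat_hydrogen = electric_energy_kwh * electrolysis_efficiency

print(heat_resistive, heat_hydrogen)   # 1.0 vs 0.75 kWh
```

Even ignoring further losses in capturing the combustion heat, the hydrogen route delivers only about three quarters of what the resistor does.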
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Quantum properties of long wavelength electromagnetic radiation How could we have known that electromagnetic radiation is quantized if we only knew about long wavelength radiation? What are the 'quantum' properties shown by long wavelength electromagnetic radiation?
I will assume here that by "long wavelength" you mean longer than ~150 meters (i.e., in the AM radio band). It is true that all EM radiation exists in quanta, each photon with energy E = (planck's constant x frequency). But in the AM radio regime, the energy per photon is so small that there is no experimental evidence at all that this electromagnetic radiation consists of quanta. RF transmitters and receivers that work with 150 meter wavelengths are designed, built, and operated with no considerations of quantization at all. So if all we had to work with were AM radio technology, we would have no experimental evidence in hand that long wavelength electromagnetic radiation consists of individual photons, because its quantum properties are so small.
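One way to see how tiny the quanta are: compute the photon energy at a 150 m wavelength using standard constants (the 1 W transmitter below is just an assumed example):

```python
# Back-of-envelope photon energy at the edge of the AM band.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 150.0   # m

E_photon_J = h * c / wavelength        # energy of one photon, joules
E_photon_eV = E_photon_J / 1.602e-19   # same, in electron-volts

# Photons per second emitted by an assumed 1 W transmitter:
photons_per_watt = 1.0 / E_photon_J

print(E_photon_eV, photons_per_watt)
```

The photon energy comes out around $10^{-8}$ eV, and a 1 W transmitter emits on the order of $10^{26}$ photons per second, so the granularity is hopelessly far below anything a radio receiver could resolve.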
{ "language": "en", "url": "https://physics.stackexchange.com/questions/660972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$\rm kph/MeV$ for Light yield? I was reading an article on Scintillation and I came across a peculiar unit $\rm kph/MeV$ for Light yield. It stated that Organic Scintillators have a lower light yield (1-10 kph/MeV). Here does kph mean kilometers per hour, or am I missing something? I am used to photons per MeV for Scintillation Yield.
As nobody else seems to have a better idea, I'll convert my comment into an answer. Without seeing the "article on Scintillation" I can't really say what the writer was talking about, but one possibility is that they meant "thousands (k) of photons (ph) per MeV".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/661106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can an ideal dipole experience an electric force? It is known that the electric force on a charged body is given as $\vec{F} = q \vec{E}$ given that $\vec{E}$ is uniform. Now, for an ideal dipole, what would we take as the charge for calculating the force exerted on it by an external electric field?
You can simply use the formula you stated to compute the force on each component of the dipole separately and then you add them up to get the net force. Note that this will be zero unless the electric field varies in space. To give more details, consider a dipole with charges $\pm q$ separated by a distance $d$ along the $x$-axis, with the charges at $x = 0$ and $x = d$. We will take the ideal limit at the end. The force on the positive charge is $qE(d)$, while that on the negative charge is $-qE(0)$. The total force is then $$F = q(E(d) - E(0)) = qd\frac{E(d) - E(0)}{d}$$ Now, the ideal dipole limit is $d \to 0$, with $qd = p$ fixed. We are thus left with $F = p\left.\frac{dE}{dx}\right|_{x=0}$.
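A quick numerical check of this limit (the field $E(x)$ below is an arbitrary smooth example chosen for illustration): shrinking $d$ with $p = qd$ held fixed, the net force converges to $p\,E'(0)$.

```python
def E(x):
    # arbitrary smooth, non-uniform field chosen for illustration
    return 3.0 + 2.0 * x + 0.5 * x**2   # so E'(0) = 2.0

def dipole_force(p, d):
    q = p / d                    # charge grows as the separation shrinks
    return q * E(d) - q * E(0)   # +q at x = d, -q at x = 0

p = 1.0e-3                       # dipole moment, held fixed
for d in (1e-1, 1e-3, 1e-6):
    print(d, dipole_force(p, d))
```

As $d \to 0$ the printed force approaches $p\,E'(0) = 2\times 10^{-3}$, matching the formula above.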
{ "language": "en", "url": "https://physics.stackexchange.com/questions/661208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Rigorous proof that a net force of zero guarantees zero linear acceleration in rigid bodies I've never found a rigorous proof of this fact. The center of mass' acceleration is not necessarily the linear acceleration, especially if the body is attached to a pin or another geometric constraint; then the center of mass spins like the rest of the body. So how can we find the linear acceleration of a body? EDIT: ok, the pin or constraint seems to add an external force and thus is a bad example to illustrate the zero translational acceleration derived from zero net force. Yet, the result still seems to be true.
I may be misinterpreting this question. But if not, then the statement is true by definition. If there is some other constraint which redirects the center-of-mass motion, such as a pin at the corner or whatever, then that constraint has by definition imparted a force. For any acceleration of the center of mass, the corresponding force may be found with Newton’s second law. To some extent this is a matter of semantics, but I think we all agree on what we’re talking about.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/661405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Is a 6-quark particle viable? It is my understanding (which may be flawed) that protons and neutrons are stable because the 3 (R, G, and B) quarks form a "white" color singlet. Wouldn't 6 quarks or even 9 quarks create a white singlet? What about RGBGR?
The deuteron (or $^{2}$H nucleus) is a color-neutral bound state of three $u$ quarks and three $d$ quarks. Because six quarks can combine to make a color-neutral state, this six-quark state is possible. However, the quarks inside are not arranged in anything like a symmetric state. They are tightly bound into two substructures which are already themselves color neutral, one with composition $uud$ and the other $udd$; these are just the proton and neutron constituents of the deuterium nucleus. There are also nine-quark structures, the nuclei of $^{3}$H and $^{3}$He. The former is unstable, but its decay lifetime is a matter of years, not the yoctoseconds typical of really unstable strong states. The latter is stable outright. As in the deuteron, these states are almost entirely combinations of smaller color-neutral nucleons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/661653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Is a superconductor just a perfect conductor or anything more than that? If I had a hypothetical perfect conductor having infinite conductivity and I cooled it below a certain temperature, would it be a superconductor? If not, then how can we distinguish between the two using experimental and theoretical methods? I only know the following from Wikipedia: Distinction between a perfect conductor and a superconductor Superconductors, in addition to having no electrical resistance, exhibit quantum effects such as the Meissner effect and quantization of magnetic flux. In perfect conductors, the interior magnetic field must remain fixed but can have a zero or nonzero value. In real superconductors, all magnetic flux is expelled during the phase transition to superconductivity (the Meissner effect), and the magnetic field is always zero within the bulk of the superconductor. Please clarify this doubt for me and let me know if there is something additional to it.
The main difference between superconductor (SC) and perfect conductor: In a SC the kinetic energy of every charge carrier is quantized, so a supercurrent, once established, can be dissipated only by external energy excitations stronger than the kinetic energy quantum. In a perfect conductor the kinetic energy of every charge carrier can increase/decrease smoothly, by arbitrarily small values, so the current dissipates due to vanishingly weak energy fluctuations (thermal, electromagnetic etc.) The experimental method to distinguish SC and perfect conductor is trivial: SC can keep a supercurrent forever, a perfect conductor cannot (despite a vanishingly small resistivity). Note, the Meissner effect is a direct consequence from the persistent supercurrents in a SC.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/661726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Do we have an upper bound to the size of the six hypothetical curled up dimensions in string theory? String theory requires ten (or eleven for M-theory) spacetime dimensions, i.e. six (or seven) extra spatial dimensions. These dimensions are not observed at large scales and so it has been hypothesised that they are curled up and invisible at larger scales. Often the example of an ant on a lamppost is given. To the ant, the space is two-dimensional (up/down and around) but to an outside observer it appears to have just one dimension (up/down). Have any experiments been conducted to find these extra dimensions and down to what size have they excluded/failed to find the existence of these dimensions?
Roughly $10^{-19}$ meters. See "Search for Microscopic Black Hole Signatures at the Large Hadron Collider", http://arxiv.org/abs/1012.3375, and Mureika et al, "Any black holes at the LHC?", http://arxiv.org/abs/1111.5830
{ "language": "en", "url": "https://physics.stackexchange.com/questions/661868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What would have caused the Gravitational Waves in the Gravitational Wave Background? We discovered the CMB, the Cosmic Microwave Background, in 1965. It's the oldest light that we can see and it came from 380,000 years after the Big Bang. It has been proposed that there must be a Gravitational Wave Background (GWB) too. But on what grounds is it proposed? There were no massive binary mergers at that time. What could have caused this GWB?
It's actually the same mechanism that's ultimately responsible for the temperature fluctuations in the CMB. Quantum fluctuations during inflation get stretched outside the horizon, and "freeze" into a classical state which we later observe as background radiation. The CMB temperature fluctuations are thought to be fluctuations in density, which arise from over and under densities in the gravitational field and in matter from this process. The GWB would come from gravitational waves produced in this process. Some references [1] https://arxiv.org/abs/0907.5424 [2] https://arxiv.org/abs/1801.04268 [3] https://arxiv.org/abs/1605.01615
{ "language": "en", "url": "https://physics.stackexchange.com/questions/662080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }