Why are the diagonals of the pressure tensor non-negative? I understand that the pressure tensor is simply the momentum flux, which makes sense to me (pressure is force per unit area, which is momentum change per unit time per unit area). From this, a simple mathematical deduction shows that the flux in any direction of the momentum component in that direction must be non-negative. But why is this the case? Surely, there can be more flux in one direction than the other, i.e. more particles with more speed coming from one direction than the other. It seems weirdly unintuitive, physically speaking, that a quantity's flux is positive regardless of the direction. Simplifying to the one-dimensional case, this would mean the momentum flux is always positive in both directions. How can this make any sense? I understand the mathematics, but I think the bridge between it and the physics is really missing for me. Thank you in advance!
From a mechanical perspective, the mechanical pressure $p$ is the magnitude of a momentum density flux, but certainly does not correspond to the entirety of the non-convective momentum density flux in the system. Recall that the mechanical pressure $p$ is related to the stress tensor $\bar{\bar{\sigma}}$ in the following way: $$p = -\frac{\text{Tr}(\bar{\bar{\sigma}})}{3} = -\langle{\sigma_{ii}}\rangle$$ Since the mechanical pressure is a function of a stress tensor invariant, it is isotropic and does not change with respect to coordinate frame rotations (as you point out). However, the net momentum density flux between control volumes is not always zero because you have additional momentum density flux contributions from both external forces $\vec{f}$ and the deviatoric component of the stress tensor $\bar{\bar{s}}$.
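A small numerical check of the claim that the mechanical pressure $p = -\text{Tr}(\bar{\bar{\sigma}})/3$ is invariant under coordinate-frame rotations. The stress tensor below is invented for illustration (an isotropic part plus a traceless deviatoric part); the rotation angle is arbitrary.

```python
import numpy as np

# Hypothetical stress tensor (Pa): isotropic pressure part plus a
# traceless deviatoric part (made-up numbers for illustration).
p_iso = 101325.0
sigma = -p_iso * np.eye(3) + np.array([[0.0, 5.0, 0.0],
                                       [5.0, 0.0, 2.0],
                                       [0.0, 2.0, 0.0]])

def mechanical_pressure(s):
    """p = -Tr(sigma)/3, the negative mean normal stress."""
    return -np.trace(s) / 3.0

# Rotate the coordinate frame: sigma' = R sigma R^T.
# The trace (hence p) is invariant under the rotation.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
sigma_rot = R @ sigma @ R.T

p0 = mechanical_pressure(sigma)
p1 = mechanical_pressure(sigma_rot)
```

The deviatoric part contributes nothing to $p$ (its trace is zero), and rotating the frame leaves $p$ unchanged, exactly as the answer states.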
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What determines the color of the light emitted in a Tokamak? We see images of Tokamak plasma with all sorts of colours from red to purple. Why do we see any light at all, since the plasma should be so hot as to have dissociated all its electrons? Is it all from contamination or unwanted cooling?
In most cases it's a fake: they often use test gases with the explicit goal of being partially ionized, so that they give off light and thus allow us to take fancy photographs. The ultimate example of this was the DCX at the 1958 Atoms for Peace conference, where they injected xenon (IIRC) to make a visible glowing ring. It's also possible to take pictures during startup, when the plasma isn't entirely thermalized. There are some YouTube videos showing the MAST startup where you can see this happen.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
In what context is enthalpy a convenient concept? Internal energy $U$ is clearly an important concept; the first law of thermodynamics states that for an isolated system internal energy is constant $(\Delta U=0)$ and that for a closed system the change in internal energy is the sum of the heat absorbed by the system $Q$ and the work done on the system $W$ $(\Delta U=Q+W)$. Enthalpy $H$ is the sum of the internal energy and the product of the pressure and volume of the system $(H=U+PV)$. I was taught that enthalpy is a preferred quantity to internal energy for constant pressure systems where $\Delta H=Q$, as opposed to constant volume systems where $\Delta U=Q$. But why would anyone care about what quantity is equal to $Q$ under certain conditions instead of simply reporting $Q$? Enthalpy seems redundant in this context. Is enthalpy a convenient concept in other contexts, such as systems with varying pressure? Is it described by any fundamental laws as internal energy is by the first law?
The quick answer is: flow processes. e.g. fluid flowing into a chemical plant; gas running through a turbine; chemicals reacting at constant pressure. The equation $dU = T dS - p dV$ gives us $$ dH = T dS + V dp$$ which you can regard as a 'fundamental law' (equivalent to the familiar one for $dU$). An important application is the Joule-Kelvin or throttling process, where $H$ is constant. This in turn leads to Bernoulli's equation for laminar flow in the absence of heat transport. In chemistry you get the van't Hoff equation.
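A quick numerical illustration of why $\Delta H = Q$ at constant pressure, using an ideal monatomic gas heated at constant $P$ (the amounts and temperature change below are made-up numbers):

```python
# Constant-pressure heating of an ideal monatomic gas:
# check that the enthalpy change equals the heat absorbed, Q_p.
R = 8.314          # gas constant, J/(mol K)
n = 2.0            # amount of gas, mol (illustrative)
Cv = 1.5 * R       # constant-volume molar heat capacity (monatomic)
Cp = Cv + R        # constant-pressure molar heat capacity
dT = 50.0          # temperature rise, K (illustrative)

Q_p = n * Cp * dT        # heat absorbed at constant pressure
dU = n * Cv * dT         # change in internal energy
W_by_gas = n * R * dT    # P*dV work done by the gas (ideal gas, constant P)
dH = dU + W_by_gas       # dH = dU + d(PV) at constant P
```

The bookkeeping $\Delta H = \Delta U + P\Delta V = Q_p$ is exactly why $H$ is the convenient state function for constant-pressure (and flow) processes: the $P\Delta V$ work is folded in automatically.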
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Compton Scattering Lowest Order Feynman Diagram I have some trouble understanding the lowest order Feynman diagrams for Compton scattering. Do two Feynman diagrams mean that Compton scattering can happen through two processes? What is the specific process in each case?
Here is a clearer sum of the two lowest order diagrams: It shows the two geometric ways energy and momentum can be exchanged between the two incoming particles to produce the two outgoing ones, to first order in a series expansion. Do two Feynman diagrams mean that Compton scattering can happen through two processes? What is the specific process in each case? Cross sections are calculated in quantum field theory as sums of a convergent series expansion of the original scattering amplitude. The series has diminishing constants in front of each order, as in all series expansions. These are the first order diagrams, corresponding to two integrals over the available momentum and energy phase space. There is a one to one correspondence of all the elements in the Feynman diagram to terms within the integral. Have a look here. That there are two lowest order diagrams does not mean that they can be separated; both contribute to the calculation. The true value needs the sum of all orders, but for most experimental checks, the first order is sufficient. Here is how a calculation for these two diagrams goes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is a nucleus a collection of quarks or a collection of neutrons and protons? I do not know much about particle physics. But people say that neutrons and protons are composed of quarks, and in turn a nucleus is composed of neutrons and protons. Therefore, the question is: is this hierarchy necessarily well defined? Could we view a nucleus as a collection of quarks, without the in-between neutrons or protons?
Could we view a nucleus as a collection of quarks, without the in-between neutrons or protons? Conceptually this is similar to asking "could we view a building as a collection of bricks, cement, glass, and iron?". In some sense one can, but it would be a funny way of defining a building, because there is a hierarchy, and just the materials do not a building make. A proton or neutron is a collection of quarks organized into a whole by the strong force, with small leaks of the strong force, called the nuclear force, which allow neutrons and protons to organize into nuclei. The strong force is studied with lattice QCD, which has fitted the observed spectra of bound quarks. The nuclei are studied with the shell model using the nuclear force, which fits the observed periodic table of elements. So it is all due to the self-organization that the forces impose on complex systems composed of quarks. One can say that people are a collection of quarks and electrons, after all, as far as elementary particles are concerned, ignoring the organizing principles of the forces between them.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Periods of non-circular Schwarzschild orbits I have been thinking about non-circular orbits in the Schwarzschild spacetime. How would you define a period of one orbit? I was thinking, in terms of proper time, for $r$, how long it takes to go from one apogee to another. For $\phi$, again in terms of $\tau$, how long it takes to cover $2\pi$. What about $t$, though? Is my reasoning wrong?
You actually described two inequivalent definitions of "period," both legitimate. The paper Geisler and McVittie (1965), "Orbital periods in the Schwarzschild space-time" (http://adsabs.harvard.edu/full/1965AJ.....70...14G), considers essentially the same two definitions of "period" that you described. One is the time from one perihelion to the next; they call this the anomalistic period. The other is the lapse between two successive passages across $\phi=0$; they call this the sidereal period. The two periods are not the same. Both periods may be expressed either in terms of the object's own proper time $\tau$, or in terms of the coordinate time $t$. These again are not the same. The key is to specify the worldline by expressing all of the coordinates as functions of a shared parameter, which we can take to be the object's proper time. Then we have functions $r(\tau)$, $\phi(\tau)$, and $t(\tau)$. We need all of these functions anyway to solve the free-fall equations that define what "orbit" means. Given those functions, we can use $r(\tau)$ to compute what those authors called the anomalistic period, or we can use $\phi(\tau)$ to compute what those authors called the sidereal period, both in terms of the object's proper time. To relate those proper-time periods to coordinate-time periods, we can use the function $t(\tau)$.
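A numerical sketch of why the two periods differ. Rather than integrating in proper time as described above, this uses the standard Schwarzschild orbit (Binet) equation in $\phi$, with $u = 1/r$ and $G = c = 1$; the mass, angular momentum, and starting radius are illustrative values chosen by hand for a bound, eccentric orbit. Because the orbit precesses, the azimuth swept between successive perihelia exceeds $2\pi$, so the radial (anomalistic) and angular (sidereal) periods cannot coincide.

```python
import math

# Schwarzschild orbit equation (G = c = 1), u = 1/r:
#   d^2u/dphi^2 + u = M/L^2 + 3*M*u^2
# M, L^2 and the starting radius are illustrative values for a bound orbit.
M, L2 = 1.0, 20.0

def rhs(u):
    return M / L2 + 3.0 * M * u * u - u

u, v = 0.08, 0.0    # start at perihelion (r = 12.5 M): v = du/dphi = 0
h = 1.0e-3          # RK4 step in phi
phi = 0.0
prev_v = 0.0
delta_phi = None    # azimuth swept between successive perihelia
while phi < 20.0:
    # one RK4 step for the pair (u, v)
    k1u, k1v = v, rhs(u)
    k2u, k2v = v + 0.5 * h * k1v, rhs(u + 0.5 * h * k1u)
    k3u, k3v = v + 0.5 * h * k2v, rhs(u + 0.5 * h * k2u)
    k4u, k4v = v + h * k3v, rhs(u + h * k3u)
    u += (h / 6.0) * (k1u + 2 * k2u + 2 * k3u + k4u)
    v += (h / 6.0) * (k1v + 2 * k2v + 2 * k3v + k4v)
    phi += h
    if prev_v > 0.0 and v <= 0.0:   # du/dphi crosses + -> -: next perihelion
        delta_phi = phi
        break
    prev_v = v
```

For these parameters `delta_phi` comes out noticeably larger than $2\pi \approx 6.28$: the perihelion advances each orbit, which is exactly the gap between the anomalistic and sidereal definitions.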
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the length of a null geodesic? There are many questions about this but none of them addresses my concrete question. If it is indeed true that for light we have $ds^2 = 0$, does that mean that in 4d spacetime the total "distance" is zero for light? By distance I mean the length of the geodesic that light moves on (describes).
If it is indeed true that for light we have $ds^2=0$ does that mean that in 4d spacetime total "distance" is zero for light? Yes, but the scare quotes on the word “distance” are exceptionally important. The spacetime interval is not just a distance in spacetime. In space a distance is always just measured by a ruler. In spacetime the interval comes in three flavors, spacelike which is measured by rulers, timelike which is measured by clocks, and lightlike which is zero. Also, if two points in space are separated by 0 distance then they are the same point. The same is not true of the interval. There are many events with a spacetime interval of zero from any given event, they form what is called the light cone. Because the interval is zero between lightlike separated events we need a different parameter for identifying events on a lightlike line. This is the affine parameter. For timelike lines the affine parameter is proportional to proper time, but it also works for lightlike lines.
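A tiny flat-spacetime check of the point above, that distinct events on a light ray are separated by zero interval (Minkowski coordinates, $c = 1$; the two events are an invented 3-4-5 example):

```python
# Two distinct events connected at light speed (c = 1): the spatial
# separation is sqrt(3^2 + 4^2) = 5, traversed in time 5.
e1 = (0.0, 0.0, 0.0, 0.0)   # (t, x, y, z)
e2 = (5.0, 3.0, 4.0, 0.0)

dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
ds2 = -dt**2 + dx**2 + dy**2 + dz**2   # Minkowski interval, signature (-+++)
```

The interval `ds2` vanishes even though `e1` and `e2` are different events, which is why a separate affine parameter is needed to label points along a lightlike line.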
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Please help identify this physics apparatus! This was my grandfather's and I have no idea what it is, only that it is some piece of physics equipment! The main black cylinder doesn't seem like it wants to rotate, but I'm not sure if it should.
It looks like an induction coil with the make and break device at the bottom and a switch right at the bottom. If you connect it up to an accumulator, be very, very careful as the output between the two balls, when separate, could be lethal. Also the electrical insulation elsewhere may be poor and you might get a shock just by touching the switch. Use with very great care and preferably have somebody who knows about such devices with you.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 2, "answer_id": 0 }
Difference in spectrum of green laser and green LED In an experiment I conducted, I used a spectrometer to find the spectrum of a green laser and a green LED. This is what I found: LED spectrum: Laser spectrum: Why is the spectral width of the LED so wide compared with that of the laser?
In short, because they produce light using completely different mechanisms. In fact, the light produced in a green laser is actually frequency-doubled infrared light (which is part of why cheap green laser pointers that don't properly filter out the leftover infrared light are dangerous).
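The frequency-doubling arithmetic mentioned above is easy to check: the common green laser pointer design pumps a 1064 nm infrared line through a nonlinear crystal that doubles the frequency, halving the wavelength to 532 nm.

```python
# Second-harmonic generation in a typical green laser pointer:
# doubling the frequency of the 1064 nm infrared pump halves the
# wavelength to 532 nm (green).
c = 299_792_458.0       # speed of light, m/s

lambda_ir = 1064e-9     # infrared pump wavelength, m
f_ir = c / lambda_ir
f_green = 2.0 * f_ir    # frequency doubling
lambda_green = c / f_green
```

The leftover 1064 nm light is invisible to the eye, which is exactly why a pointer without a proper infrared filter is dangerous.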
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Falling object and the side wind I don't have a big knowledge of physics so I'm sorry in advance if it doesn't make sense. If a brick falls from a 100 m tall building when there is a side wind of 30 m/s, is there any way I can find how far the brick would have traveled from the origin? If there's any equation I can utilize please let me know. I'm from architecture and trying to find out how to protect pedestrians from falling objects. Thank you very much!
There's no easy way but it is possible to 'model' what you're asking. The brick in question has a velocity vector $\vec{v}$ made up of two components that are completely independent of each other: (1) A vertical one $v_y$: For simplicity's sake we'll assume that no significant drag forces act in the $y$ direction. If $t=0$ is the moment the brick starts falling, then (with $g=9.81\:\mathrm{ms^{-2}}$): $$v_y=gt$$ and the distance travelled: $$y=\frac12 gt^2$$ Using the height of the building $y=H$ we can then calculate the falling time as: $$t_f=\sqrt{\frac{2H}{g}}$$ (2) A horizontal one $v_x$, caused by air drag: For high Reynolds numbers, the brick will experience a horizontal force caused by the wind, modelled as: $$F_D=\frac12 \rho u^2C_DA$$ where $u$ is the sideways wind velocity and I refer to this reference for the other symbols' meanings. More precisely, $u$ is the speed of the wind $v_{wind}$ relative to the horizontal velocity $v_x$. Assuming $v_{wind}\gg v_x$, we can take $u$ to be constant, so the force $F_D$ is also constant. Using Newton: $$F_D=ma_x$$ with $a_x$ the horizontal acceleration: $$a_x=\frac{\mathrm{d}}{\mathrm{d}t}v_x$$ For constant $a_x$, the distance $x$ travelled sideways is given by: $$x=\frac12 a_xt^2$$ Because the horizontal and vertical movements are independent of each other, using $t_f$ gives us $x_f$, the distance travelled sideways during the fall: $$\boxed{x_f\approx \frac{1}{2mg}\rho u^2 C_DAH}$$ Now, this is all very fine and dandy, but finding a reliable value for the drag coefficient $C_D$ may prove the hardest part.
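Plugging illustrative numbers into the boxed estimate $x_f \approx \rho u^2 C_D A H / (2mg)$. Every value below is an assumption (brick dimensions, mass, and especially the drag coefficient are rough guesses), so treat the result as an order-of-magnitude sketch only.

```python
import math

# Assumed values -- all rough guesses for a standard brick:
rho = 1.225        # air density, kg/m^3
u = 30.0           # sideways wind speed, m/s
C_D = 2.0          # drag coefficient of a bluff rectangular body (rough)
A = 0.21 * 0.10    # wind-facing area, m^2 (~21 cm x 10 cm brick face)
H = 100.0          # building height, m
m = 3.0            # brick mass, kg
g = 9.81           # gravitational acceleration, m/s^2

t_f = math.sqrt(2.0 * H / g)                       # fall time, ~4.5 s
x_f = rho * u**2 * C_D * A * H / (2.0 * m * g)     # sideways drift, m
```

With these numbers the drift comes out to several tens of metres. Note the self-consistency caveat flagged in the answer: here the brick's final sideways speed $a_x t_f$ approaches the wind speed itself, so the constant-force assumption $v_{wind} \gg v_x$ is breaking down and the formula overestimates the true drift.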
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Reason for force in Stern-Gerlach experiment I'm currently working on an assignment, and I'm having some trouble understanding how the magnetic field deflects the silver atoms passing through it. From what I understood, the atoms are deflected up or down by a specific amount according to their magnetic moment, but I can't understand what is causing this phenomenon. I'm an engineering student and physics isn't my main field, so I would prefer that the explanation not be too specific. Thanks in advance for your help.
If an atom has a magnetic moment (not momentum), that means it acts like a tiny dipole magnet -- like a tiny bar magnet. A uniform magnetic field will not exert a net force on a dipole magnet, because (thinking of it as a bar magnet) the magnetic field will push on one pole and pull on the other pole with equal but opposite force. However, if the magnetic field is nonuniform - if it diverges upward, for example - you can imagine that when the dipole is aligned with the field, the field is stronger at the location of one of the poles than at the other because of the divergence. Let's say the nonuniform field is due to the N pole of an electromagnet. Then the N pole of the dipole is repelled (upward in this scenario) and the S pole of the dipole is attracted downward. But the net force (upward minus downward) depends on whether the dipole is aligned with its North pole up or its S pole up. In the former case, the net force is downward and in the latter case the net force is upward. The key result of the Stern-Gerlach experiment is not that the atoms are deflected, but that they are deflected by a fixed amount upward or downward - with nothing in between. This is proof of a quantum property of spin, which relates to the magnetic moment: projection of the spin onto any axis (in this case an axis aligned with the diverging magnetic field) is quantized. "Quantized" means the spin angular momentum along an axis can only have certain discrete values.
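The net force described above can be written as $F_z = \mu_z \,\partial B/\partial z$, and since the silver atom's moment comes from a single unpaired electron spin, $\mu_z = \pm\mu_B$ gives exactly two opposite deflections. The field gradient, beam speed, and magnet length below are rough illustrative assumptions, not the historical values.

```python
# Deflection of a silver atom in a Stern-Gerlach field gradient.
# F_z = mu_z * dB/dz with mu_z = +/- mu_B (one unpaired electron spin).
mu_B = 9.274e-24    # Bohr magneton, J/T
dBdz = 1.0e3        # field gradient, T/m (order-of-magnitude assumption)
m_Ag = 1.79e-25     # mass of a silver atom (~108 u), kg
v = 500.0           # beam speed, m/s (rough oven-beam figure)
L = 0.05            # magnet length, m (assumed)

t = L / v           # time spent in the gradient
# two quantized deflections, one per spin projection:
deflections = [s * mu_B * dBdz / m_Ag * t**2 / 2.0 for s in (+1, -1)]
```

The two deflections are equal and opposite, a fraction of a millimetre for these numbers, with nothing in between: that two-spot pattern is the quantization the answer describes.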
{ "language": "en", "url": "https://physics.stackexchange.com/questions/447457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can a battleship float in a tiny amount of water? Given a battleship, suppose we construct a tub with exactly the same shape as the hull of the battleship, but 3 cm larger. We fill the tub with just enough water to equal the volume of space between the hull and the tub. Now, we very carefully lower the battleship into the tub. Does the battleship float in the tub? I tried it with two large glass bowls, and the inner bowl seemed to float. But if the battleship floats, doesn't this contradict what we learned in school? Archimedes' principle says "Any floating object displaces its own weight of fluid." Surely the battleship weighs far more than the small amount of water that it would displace in the tub. Note: I originally specified the tub to be just 1 mm larger in every direction, but I figured you would probably tell me when a layer of fluid becomes so thin, buoyancy is overtaken by surface tension, cohesion, adhesion, hydroplaning, or whatever. I wanted this to be a question about buoyancy alone.
Yes, it floats. And it has displaced its "own weight of water" in the sense that if you had filled the container with water and only then lowered the ship into the container, nearly all of that water would have been displaced and would now be sloshing around on the floor.
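A numerical way to see it: hydrostatic pressure depends only on depth, not on how much water is present, so the upward force on the hull is set by the draft alone. The toy "battleship" below is a box-shaped hull with invented dimensions; the 3 cm gap is the one from the question.

```python
# Box-shaped "hull" floating in a form-fitting tub with a 3 cm gap.
rho, g = 1000.0, 9.81      # water density (kg/m^3), gravity (m/s^2)
length, width = 20.0, 5.0  # toy hull footprint, m (invented)
A = length * width         # hull bottom area, m^2
mass = 2.0e5               # hull mass, kg (invented)

draft = mass / (rho * A)   # equilibrium depth: rho*g*A*draft = mass*g
gap = 0.03                 # 3 cm water layer around and under the hull, m
perimeter = 2.0 * (length + width)

# water actually present: thin layer beside the hull plus under it
water_volume = perimeter * gap * draft + A * gap
water_mass = rho * water_volume

# buoyant force = hydrostatic pressure at the bottom times bottom area
buoyant_force = rho * g * draft * A
```

The buoyant force exactly supports the hull's weight even though the water actually in the tub weighs a small fraction of the ship, which is the resolution of the apparent paradox.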
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 8, "answer_id": 2 }
The water analogy seems to imply that power = current. Why is this incorrect? Many people think of the water analogy to try to explain how electromagnetic energy is delivered to a device in a circuit. Using that analogy, in a DC circuit, one could imagine the power-consuming device is like a water wheel being pushed by the current. In the case of an actual water wheel, the more water that flows per unit of time, the more energy gets delivered to the wheel per unit of time: power = current, but in electric circuits power = voltage x current. Why is this?
The analogy with water actually holds really nicely if you consider a water wheel, or other hydroelectric system. But what you're missing is that the power produced does not depend only on the amount of water going past - it also depends on the speed at which it does so. (This makes sense for the hydro system because kinetic energy depends on velocity and mass.) To make the analogy better, rather than thinking of the speed of the flow, think about how far the water has fallen to acquire that speed. At this point you have a volume per second of water - the current - and you have a loss of height, which is literally a potential difference.
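The analogy above can be made quantitative: for water dropping through a height $h$, the power is $P = (\rho Q)(gh)$, a mass current times an energy drop per unit mass, mirroring $P = I \times V$. The flow rate and height below are invented numbers.

```python
# Hydraulic analogue of P = V * I (illustrative numbers).
rho = 1000.0   # water density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
Q = 0.5        # volumetric flow, m^3/s
h = 12.0       # height drop, m

mass_current = rho * Q               # kg/s  -- analogue of the current I
potential_drop = g * h               # J/kg  -- analogue of the voltage V
P = mass_current * potential_drop    # watts
```

Doubling the "current" at fixed "voltage" doubles the power, and vice versa, exactly as in the electrical formula; current alone is not enough.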
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 9, "answer_id": 7 }
Energy dependence on boundary conditions for particle in a box I am taking a course in solid state physics, and I have some trouble with the "hard wall" and the periodic boundary conditions for a particle in a box. The thing is that we obtain, for a box of length $L$, $k= n \pi/L$, $n\in \mathbb{N}$ for the hard box, where we have insisted that $\psi(0)=\psi(L)=0$. We obtain $k= 2\pi m/L$, $m\in \mathbb{Z}$, for the case of periodic boundary conditions, i.e. by insisting that $\psi(x)=\psi(x+L)$. Now, if we solve for the energies, we obtain for a particle of mass $M$ $$E= \frac{\hbar^2 k^2}{ 2 M}$$ and because the allowed values of $k$ are different for the two sets of boundary conditions, we end up with different energies as well: $$E_n= \frac{\pi^2\hbar^2 n^2}{ 2 M L^2}$$ for the hard wall, and $$E_m= 4\frac{\pi^2 \hbar^2 m^2}{ 2 M L^2}$$ for the periodic boundary conditions. So the choice of boundary conditions seemingly affects the energies. Shouldn't the physical situation be unaffected by the choice of boundary conditions? For example, in treating the free electron model of metals, some authors solve for the allowed energies of the electrons by modelling the metal as a cube of side length $L$ and using periodic boundary conditions. If one used the hard wall boundary conditions instead, we would obtain energies different by a factor of 4. What don't I understand?
From what I understand of your question, you are treating a quantum problem classically. When you're dealing with $\hbar$, you leave behind deterministic classical physics and enter the world of probabilistic quantum physics. If you consider your wave-function, with sinusoidal changes within a box, the particle is treated as a wave with energies in a probable range. So of course the attributes of a wave will change given the length of your box. The choice of boundary condition affects the physical probabilities. In other words, you change the physical situation when you change the boundary conditions for particle-wave behaviour. This was super quick and dirty, but hopefully it helps you to think about it differently. https://en.m.wikipedia.org/wiki/Particle_in_a_box
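A quick numerical check of the two spectra, with a resolution of the asker's worry that the answer above does not spell out: although individual levels differ (periodic levels are 4x the hard-wall ones at equal quantum number), the *number of states* below a given energy agrees once the $\pm m$ degeneracy of the periodic levels is counted, which is why bulk free-electron results do not depend on the choice. (Units $\hbar = M = L = 1$ for simplicity.)

```python
import math

# Hard wall: k = n*pi/L, n = 1, 2, ...
# Periodic:  k = 2*pi*m/L, m = 0, +/-1, +/-2, ...
hbar = M = L = 1.0

def E(k):
    return (hbar * k) ** 2 / (2.0 * M)

E_hard = [E(n * math.pi / L) for n in range(1, 5)]
E_per = [E(2.0 * math.pi * m / L) for m in range(0, 5)]

def count_hard(Emax):
    n, c = 1, 0
    while E(n * math.pi / L) <= Emax:
        c += 1
        n += 1
    return c

def count_periodic(Emax):
    c, m = 1, 1          # the single m = 0 state
    while E(2.0 * math.pi * m / L) <= Emax:
        c += 2           # +m and -m are degenerate
        m += 1
    return c

Emax = E(100.5 * math.pi / L)   # a cutoff well up in the spectrum
```

At this cutoff the two boundary conditions yield essentially the same state count: level-by-level the spectra differ, but the density of states (and hence bulk quantities like the Fermi energy in the large-$L$ limit) does not.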
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is a numerical lattice wavefunction smooth? -- graphene tight-binding case I tried to follow exactly Sec. II.K [page 112-113, Hamiltonian after Eq. (113)] of the standard Review of Modern Physics paper on graphene, which is a tight-binding model of a graphene stripe under a magnetic field. It's periodic, and hence Fourier transformed, along x, but open along y. The resulting Landau-level-like energy spectrum looks perfectly fine, as in the paper. However, I got confused by the wavefunctions, since they look somewhat messy, sawtooth-like, and not smooth. I haven't played with tight-binding models much and am not sure if this is correct or not. Probably one doesn't expect lattice wavefunctions to be smooth at all? Another question is whether Landau level (LL) degeneracy is in general lifted in lattice models. If so, is the lattice LL a certain superposition of many degenerate LLs, which depends on the lattice model details? Here I plot, at a certain $k_x$ around the flat bands, norms of wavefunctions of the lowest 4 armchair bands (from left to right) on one sublattice.
The interaction in almost all tight-binding models has finite range (e. g. only includes nearest-neighbor or next-nearest-neighbor interactions), and therefore the associated matrix-valued trigonometric polynomial is smooth (even analytic). Hence, Bloch functions and energy band functions are locally smooth away from band crossings.
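A minimal illustration of the point above in the simplest possible setting, a nearest-neighbor 1D tight-binding chain (not the graphene model itself): the Bloch band is the trigonometric polynomial $E(k) = -2t\cos k$, manifestly smooth in $k$, even though the real-space wavefunction is only defined on discrete lattice sites.

```python
import numpy as np

# Nearest-neighbor 1D tight-binding band: E(k) = -2 t cos(k).
# Finite interaction range -> trigonometric polynomial -> smooth in k.
t = 1.0
k = np.linspace(-np.pi, np.pi, 201)
E = -2.0 * t * np.cos(k)

# consecutive differences of a smooth band shrink with the k-grid spacing
dE = np.diff(E)
```

The smoothness lives in $k$-space (Bloch functions and band energies); nothing forces the site-by-site amplitudes of an eigenvector on an open stripe to look smooth, consistent with the "sawtooth" plots in the question.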
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Two wheels and a ball problem confusion When I was preparing for a job interview for chemistry, the following question appeared: Now, my answer was as such: If the bigger wheel is rotated clockwise, the smaller wheel moves anticlockwise so the ball will move down and fall off the plank. However, the answer (which was pretty brief) only mentioned the Centre of Masses being inline, so it will stay level. I'm just left quite confused. Why is the Centre of Mass being "level" have any effect on this system?
The drawing is not entirely clear, but I'll try... Assuming that the black dots are bearings with axis normal to the picture, both blue wheels can rotate freely. The rotational motion of the small blue wheel is generated through an horizontal friction force. This force do not causes motion of the green bar since it is contained in the horizontal plane. The friction force has no vertical component because it is tangent to the contact surfaces. Therefore, there is no mechanism to impose any motion to the ball. If there is no ball motion and the wheels center of mass is in its centers, then the system's center of mass is not changed. However, if the center of mass of the small wheel is not in its center, and assuming the big wheel is free to rotate but not to translate, then a small centrifugal force will move the ball in the vertical direction. Note that the ball is not in a stable equilibrium position. Therefore any perturbation in the horizontal plane will make the ball move in this plane.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can two electrons repel if it's impossible for free electrons to absorb or emit energy? There is no acceptable/viable mechanism for a free electron to absorb or emit energy without violating energy or momentum conservation. So its wavefunction cannot collapse into becoming a particle, right? How do two free electrons repel each other then?
To resolve this paradox requires study of time dependent perturbation theory: solving Schrodinger's equation with a time dependent perturbation corresponding to the interaction of two particles. If you do this you arrive at the following conclusions: A single free electron cannot absorb a free photon ($e + \gamma \to e$ is not a valid interaction). A single free electron cannot emit a free photon ($e \to e + \gamma$ is not a valid interaction). However, two electrons can scatter by exchange of energy ($e + e \to e + e$ is a valid interaction). In this latter case it is common to refer to this process as being due to exchange of "a virtual photon" between the two electrons. But this is just a description of the calculation of time dependent perturbation theory.
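The first conclusion above also follows from pure kinematics, which is easy to check numerically (units with $c = 1$): for an electron initially at rest, absorbing a photon of energy $E$ would leave total energy $m + E$ and total momentum $E$, but an on-shell electron with momentum $E$ has energy $\sqrt{m^2 + E^2} < m + E$ for any $E > 0$.

```python
import math

# Kinematic check that e + gamma -> e cannot conserve four-momentum
# (electron initially at rest, c = 1, energies in MeV).
m = 0.511  # electron mass, MeV

for E in (0.001, 0.1, 1.0, 100.0):
    required = m + E                   # energy if absorption happened
    on_shell = math.sqrt(m * m + E * E)  # max energy an electron of momentum E can have
    assert on_shell < required         # conservation fails for every E > 0
```

So no final electron state exists with the required energy and momentum simultaneously, exactly the statement that $e + \gamma \to e$ is not a valid interaction.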
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 0 }
Why is the Coulomb gauge a possible gauge choice? In classical field theory we find that adding the gradient of some scalar field to the magnetic vector potential does not change the physics at all. So we have the symmetry: $\boldsymbol{A}\rightarrow\boldsymbol{A}+\nabla f$ Then there is something like this written in almost every book on electrodynamics: "For example, we can use the Coulomb gauge $\nabla \cdot \boldsymbol{A}=0$." I can't understand this implication. Why does this symmetry allow us to say that the divergence is zero? What is $f$ in this case?
Why does this symmetry allow us to say that the divergence is zero? Stipulate a vector potential with non-zero divergence $$\nabla\cdot\mathbf{A}\ne 0$$ 'Gauge away' the divergence $$\nabla\cdot\mathbf{A}'=\nabla\cdot\left(\mathbf{A}+\nabla f \right)=0$$ and it follows that $$\nabla\cdot\mathbf{A}+\nabla^2f=0\Rightarrow\nabla^2f=-\nabla\cdot\mathbf{A}$$ So the required $f$ is a solution of Poisson's equation with source $-\nabla\cdot\mathbf{A}$; such a solution always exists (given suitable boundary conditions), which is why the Coulomb gauge condition can always be imposed.
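The Poisson equation $\nabla^2 f = -\nabla\cdot\mathbf{A}$ from the answer can be solved explicitly on a periodic grid with FFTs. The vector potential below is an invented smooth 2D field with non-zero divergence; the check is that $\mathbf{A}' = \mathbf{A} + \nabla f$ ends up divergence-free.

```python
import numpy as np

# 'Gauge away' the divergence of a 2D periodic vector potential:
# solve nabla^2 f = -div A spectrally, then verify div(A + grad f) ~ 0.
N = 64
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing='ij')

# an invented smooth periodic vector potential with non-zero divergence
Ax = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
Ay = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

kx = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)   # angular wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing='ij')

def ddx(F, K):
    """Spectral partial derivative along the axis with wavenumbers K."""
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(F)))

div_A = ddx(Ax, KX) + ddx(Ay, KY)

# nabla^2 f = -div A  =>  -(kx^2 + ky^2) f_hat = -div_hat
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                  # avoid 0/0; the zero mode of f is arbitrary
f_hat = np.fft.fft2(div_A) / K2
f_hat[0, 0] = 0.0
f = np.real(np.fft.ifft2(f_hat))

Ax_new = Ax + ddx(f, KX)
Ay_new = Ay + ddx(f, KY)
div_new = ddx(Ax_new, KX) + ddx(Ay_new, KY)
```

The original divergence is order one; after the gauge transformation it is zero to machine precision, which is a concrete demonstration that the Coulomb gauge is reachable.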
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Completeness of electron-electron scattering in the Feynman diagram During electron-electron scattering the electrons in any case change their directions and thereby undergo accelerations. Shouldn't the electrons emit photons during this time, losing part of their kinetic energy and slowing down a bit?
Shouldn’t the electrons emit photons in this time...? Yes. They do. Just as in classical physics, an accelerating (or scattering) electron in QFT emits EM radiation. If desired, this radiation can be described in terms of photons, although (as usual) that's not necessarily the most natural description. To see this, consider the same Feynman diagram but with one or more external photon lines emanating from one or more of the electron lines. These amplitudes are non-zero, so this process does occur. In fact, when we consider higher-order terms in the small-coupling expansion (and these diagrams with extra photon lines are of higher order), we must include this effect in order to get a meaningful result. Weinberg's book (The Quantum Theory of Fields, Volume 1) has an entire chapter devoted to this subject (Chapter 13: "Infrared Effects"). Page 544 emphasizes that we must account for this effect even if we don't care about or don't detect the EM radiation: ...it is not really possible to measure the rate... for a reaction... involving definite numbers of photons and charged particles, because photons of very low energy can always escape undetected. What can be measured is the rate... for such a reaction to take place with no unobserved photon having an energy greater than some small quantity..., and with not more than some small total energy... going into any number of unobserved photons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can we define energy other than as the capability to do work? The capability to do work is actually a property of energy, not a means to define it, because we cannot define a thing on the basis of what it does or what it can do.
To set up a mathematical model that describes a physical system, one has to define observables, and one has to define "laws", i.e. axioms imposed on the mathematical model so that the mathematical solutions correspond to the measured data and also yield predictions for future behavior. Here we start with the definition of a force in words: one of the foundation concepts of physics, a force may be thought of as any influence which tends to change the motion of an object. "Foundation" means it is a definition, expressed differently in different physics models. This leads sequentially to the concepts of "work", "energy", and "power". Work refers to an activity involving a force and movement in the direction of the force. A force of 20 newtons pushing an object 5 meters in the direction of the force does 100 joules of work. Energy is the capacity for doing work. You must have energy to accomplish work - it is like the "currency" for performing work. To do 100 joules of work, you must expend 100 joules of energy. So, in the manner that the physics models have developed, there is no other definition. Can one invent another definition? Yes. Example: one may start by postulating the Lorentz transformations on four-vectors and postulate that "energy is the component of the four-vector that, given the momentum, yields the invariant mass of the particle as the four-vector's length". Momentum, $mv$, is the basis in this case. This would be correct, but a very complicated way of defining energy, particularly in the emergent classical mechanics system.
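The four-vector definition sketched at the end is easy to make concrete in natural units ($c = 1$): the invariant length of the energy-momentum four-vector $(E, \vec p)$ is the mass, $m^2 = E^2 - |\vec p|^2$, so $E = \sqrt{m^2 + p^2}$, and at low speed this reduces to the familiar kinetic energy $p^2/2m$ on top of the rest energy. The numbers below are illustrative.

```python
import math

# Energy defined via the four-vector length (c = 1): E = sqrt(m^2 + p^2).
m = 1.0
p = 0.01                        # |p| << m: non-relativistic regime

E = math.sqrt(m * m + p * p)    # total energy
kinetic = E - m                 # subtract the rest energy
newtonian = p * p / (2.0 * m)   # the classical p^2 / 2m
```

The relativistic kinetic energy agrees with $p^2/2m$ to high accuracy at small momentum, showing how the classical notion of energy emerges from the four-vector definition.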
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is gas flow always compressible? From Franz Durst's Fluid Mechanics: An Introduction to the Theory of Fluid Flows: When a fluid element reacts to pressure changes by adjusting its volume and consequently its density, the fluid is called compressible. When no volume or density changes occur with pressure or temperature, the fluid is regarded as incompressible although, strictly, incompressible fluids do not exist. So, strictly speaking, although the fluid is always compressible is there a case where gas fluid flow maintains constant density?
For "normal" gas flows, you can only have a constant density if the fluid pressure is not changing (assuming constant temperature). Unfortunately, fluid flow requires a pressure drop, so as a gas is flowing down a pipe, the pressure is decreasing, the gas volume is increasing, the density is decreasing, and the gas is slowly accelerating down the pipe as a result. There may be a theoretical way to increase the gas temperature at the correct rate as it flows down a pipe, such that the increase in temperature causes an increase in pressure, and a corresponding change in density profile such that density remains constant. However, this would (again) interfere with the pressure drop required to maintain fluid flow, so the answer appears to be "no" for this question.
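The chain of effects described above (pressure falls along the pipe, so density falls and the gas accelerates) can be sketched for an isothermal ideal gas with an assumed linear pressure drop; the molar mass, pipe area and mass flow below are made-up illustration values:

```python
import numpy as np

M_molar = 0.029     # kg/mol, roughly air (illustrative)
R_gas = 8.314       # J/(mol K)
T = 300.0           # K, held constant (isothermal assumption)

P = np.linspace(2.0e5, 1.0e5, 50)   # Pa, inlet -> outlet along the pipe
rho = P * M_molar / (R_gas * T)     # ideal gas: density tracks pressure

A = 0.01            # m^2, pipe cross-section (illustrative)
mdot = 0.1          # kg/s, assumed constant mass flow
v = mdot / (rho * A)                # continuity: velocity rises downstream
```

The density halves as the pressure halves, and mass conservation then forces the velocity up by the same factor.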
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
I have a question regarding the Painlevé-Gullstrand (PG) metric with factor 2 I have a question regarding the Painlevé-Gullstrand (PG) metric. If we have the line element in a radial fall we get: $$d\theta = d\phi = 0$$ $$ds^2 = -dT^2 + \left(dr+\sqrt{\frac{r_s}{r}}dT\right)^2.$$ Writing out the binomial formula we obtain: $$ds^2 = -dT^2 + dr^2 + 2 \sqrt{\frac{r_s}{r}} dr dT + \frac{r_s}{r} dT^2.$$ If we now want to write down the metric tensor, we should obtain: $$g_{\mu\nu} = \begin{pmatrix} 1 & 2\sqrt{\frac{r_s}{r}} \\ 2\sqrt{\frac{r_s}{r}} & \frac{r_s}{r} -1\\ \end{pmatrix}.$$ So am I right, that the factor 2 also comes into the metric?
You can write the line element with this general ansatz: $ds^2=\begin{bmatrix} dT & dr \\ \end{bmatrix} \begin{bmatrix} g_{00} & g_{01} \\ g_{01} & g_{11} \\ \end{bmatrix} \begin{bmatrix} dT \\ dr \\ \end{bmatrix} \qquad (1)$ The metric $g_{\mu\nu}$ must be symmetric! From equation (1) we obtain: $ds^2=g_{00} \,dT^2+2\,dT\,dr\,g_{01}+g_{11}\,dr^2$ Now compare the coefficients with your line element: $g_{00}=\frac{r_s}{r}-1$ $2\,g_{01}=2\sqrt{\frac{r_s}{r}}$ $g_{01}=\sqrt{\frac{r_s}{r}}$ $g_{11}=1$ So the factor 2 belongs to the cross term of the line element, not to the metric: the off-diagonal component is $g_{01}=\sqrt{\frac{r_s}{r}}$, not $2\sqrt{\frac{r_s}{r}}$.
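The coefficient matching can be done mechanically; a sketch with sympy, treating $dT$ and $dr$ as plain symbols:

```python
import sympy as sp

dT, dr, rs, r = sp.symbols('dT dr r_s r', positive=True)

# Radial Painleve-Gullstrand line element from the question
ds2 = sp.expand(-dT**2 + (dr + sp.sqrt(rs/r)*dT)**2)

# Read off ds^2 = g00 dT^2 + 2 g01 dT dr + g11 dr^2
g00 = ds2.coeff(dT, 2)
g01 = sp.Rational(1, 2) * ds2.coeff(dT, 1).coeff(dr, 1)
g11 = ds2.coeff(dr, 2)
```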
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Non-Relativistic Limit of Klein-Gordon Probability Density In the lecture notes accompanying an introductory course in relativistic quantum mechanics, the Klein-Gordon probability density and current are defined as: $$ \begin{eqnarray} P & = & \dfrac{i\hbar}{2mc^2}\left(\Phi^*\dfrac{\partial\Phi}{\partial t}-\Phi\dfrac{\partial\Phi^*}{\partial t}\right) \\ \vec{j} &=& \dfrac{\hbar}{2mi}\left(\Phi^*\vec{\nabla}\Phi-\Phi\vec{\nabla}\Phi^*\right) \end{eqnarray} $$ together with the statement that: One can show that in the non-relativistic limit, the known expressions for the probability density and current are recovered. The 'known' expressions are: $$ \begin{eqnarray} \rho &=& \Psi^*\Psi \\ \vec{j} &=& \dfrac{\hbar}{2mi}\left(\Psi^*\vec{\nabla}\Psi-\Psi\vec{\nabla}\Psi^*\right) \end{eqnarray} $$ When taking a 'non-relativistic limit', I am used to taking the "limit" $c \to \infty$, which does give the right result for $\vec{j}$, but for the density produces $P=0$. How should one then take said limit to recover the non-relativistic equations?
You can substitute $\Phi = e^{-imc^2t/\hbar} \Psi$ (note the factor $i$ in the phase, which splits off the rest energy as a rapid oscillation) and then neglect the second-order time derivative of $\Psi$. Drop the constant $mc^2$ and you will have recovered the Schrödinger equation; inserting the same substitution into $P$ gives $P = \Psi^*\Psi + \mathcal{O}(1/c^2)$, so the known density is recovered as $c \to \infty$.
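The density limit can be checked symbolically; a sketch where $\Psi$ and $\Psi^*$ are treated as independent functions of $t$ (a common trick, since the conjugate of an unknown function is awkward to manipulate):

```python
import sympy as sp

t, m, c, hbar = sp.symbols('t m c hbar', positive=True)
psi = sp.Function('psi')(t)          # slowly varying envelope Psi
psistar = sp.Function('psistar')(t)  # stands in for Psi*

Phi = sp.exp(-sp.I*m*c**2*t/hbar) * psi
Phistar = sp.exp(sp.I*m*c**2*t/hbar) * psistar

# Klein-Gordon density from the text
P = sp.I*hbar/(2*m*c**2) * (Phistar*sp.diff(Phi, t) - Phi*sp.diff(Phistar, t))
P = sp.simplify(sp.expand(P))

# Everything beyond |Psi|^2 carries a 1/c^2 factor and drops out as c -> oo
correction = sp.simplify(P - psi*psistar)
```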
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Why do we care only about canonical transformations? In Hamiltonian mechanics we search change of coordinates that leaves the Hamilton equation invariant: these are the canonical transformations. My question is: why we want to leave the equations invariant? I mean: we want to solve a differential equation (the Hamilton equation), so why restrict ourselves only to a certain kind of change of coordinates? Why we care about the "form"? I suppose that sometimes could exist some non-canonical transformation that makes me solve the equation... or not?
You're right. Sometimes it can be fine to take a non-canonical transformation and this is often done in practice (though not usually emphasized). I'll explain in an example from quantum mechanics. The analogy to classical mechanics can be seen by replacing the commutator with the Poisson bracket.. Consider a harmonic oscillator Hamiltonian $$ H = \frac{1}{2} m \omega^2 X_0^2 + \frac{1}{2m}P_0^2 $$ We have that $X_0$ and $P_0$ satisfy the canonical commutation relations $[X_0,P_0] = i\hbar$. Suppose we want to think about the harmonic oscillator in phase space but with $X$ and $P$ on equal footing. We can rescale the variables in such a way that 1) the new variables have the same dimensions and 2) canonical commutation relations are preserved. \begin{align} X' &= \sqrt{m\omega} X_0\\ P' &= \frac{1}{\sqrt{m\omega}} P_0 \end{align} This gives us \begin{align} H &= \frac{\omega}{2}(X'^2 + P'^2)\\ [X',P'] &= i\hbar \end{align} This was a canonical transformation because it preserves the commutation relations. Ok. Now I'm going to describe and perform a non-canonical transformation and afterwards I will motivate it more. Say that not only do we want $X$ and $P$ to have the same dimension but we actually want them to be dimensionless! In quantum mechanics there is $\hbar$ which conveniently has units of position times momentum (see canonical commutation relation). If we divide through by this we can make both dimensionless. \begin{align} X &= \frac{1}{\sqrt{2\hbar}} X'\\ P &= \frac{1}{\sqrt{2\hbar}} P'\\ H &= \hbar \omega (X^2+P^2)\\ [X,P] &= \frac{i}{2} \end{align} We see that this transformation is non-canonical because it changes the commutation relations. However, it is still useful because it brings the Hamiltonian into a very nice and recognizable form. First of all, we can see that $$ X^2 + P^2 = a^{\dagger}a + \frac{1}{2} $$ Where $a$ and $a^{\dagger}$ are the annihilation and creation operators. The $\frac{1}{2}$ is unimportant for this discussion. 
A bit of work can show that in this form we have \begin{align} a &= X+iP\\ a^{\dagger} &= X-iP\\ X &= \frac{1}{2}(a^{\dagger}+a)\\ P &= \frac{i}{2}(a^{\dagger}-a) \end{align} This transformation was really nice because now that $X$ and $P$ are unitless we can ALSO treat them on equal ground with the unitless bosonic creation and annihilation operators, and it is very obvious that $X$ and $P$ are directly related to the real and imaginary (Hermitian and anti-Hermitian) parts of $a$. All of this was to motivate that sometimes it is useful to do non-canonical transformations. Now a note as to why we might want to avoid non-canonical commutation relations. The answer is just that they introduce the possibility of making a mistake. Say we are looking for the equations of motion. We know the Heisenberg equation of motion for an operator is $$ \dot{O} = -\frac{i}{\hbar}[O,H] $$ We can calculate the time evolution for $X$ as \begin{align} \dot{X} &= -\frac{i}{\hbar}\hbar \omega [X,P^2] = -\frac{i}{\hbar} 2\frac{i}{2}\hbar \omega P = \omega P \end{align} However, if we had given the Hamiltonian to our friend but forgotten to tell them that we are using $X$ and $P$ with $[X,P] = \frac{i}{2}$, then they may have accidentally used the canonical commutation relations and calculated \begin{align} \dot{X} = -\frac{i}{\hbar}\hbar \omega [X,P^2] \rightarrow -\frac{i}{\hbar} \hbar \omega 2 i \hbar P= 2\hbar \omega P \end{align} This would be incorrect. So basically, the canonical commutation relations are a shared convention. You can use non-canonical transformations, and sometimes this may be useful, but allowing yourself to use non-canonical transformations means you have to do a better job keeping track of what variables you are using. In quantum mechanics this is something which is natural to do anyway, so it is not really surprising or commented upon when such non-canonical variables are used.
In classical mechanics it may be less useful and more strongly enforced that one should not use non-canonical coordinates so it is done less often. See the Wikipedia on Optical Phase Space for an example where these sorts of $X$ and $P$ quadrature operators are used. Otherwise see any intro quantum optics book.
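These conventions admit a quick numerical sanity check in a truncated Fock basis; a sketch with numpy, where $[X,P]=i/2$ holds exactly except in the last basis state (the usual truncation artifact):

```python
import numpy as np

N = 30  # Fock-space truncation

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

# Dimensionless quadratures from the text: X = (a† + a)/2, P = i(a† - a)/2
X = (ad + a) / 2
P = 1j * (ad - a) / 2

comm = X @ P - P @ X   # should equal (i/2) * identity, up to truncation
```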
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 1 }
Potential Difference of a battery - What does it mean? I have studied current electricity for a while now. When I look back at basic concepts, I am quite clear about what current, electrons and resistance are, but I cannot form a picture of the potential difference or voltage of a battery. Also, in a circuit it is said that the potential drops across a resistance; why is that so? What does it mean to have a potential difference? I asked my friends too, but none of them have quite understood the concept either. So, can you clarify the concept of p.d., using analogies or any way that might make it easy?
A voltage difference of 1 volt means that you gain 1 joule if you move 1 coulomb of charge between the electrodes from plus to minus. A resistor exerts friction on charge carriers, so if this charge is moved through a resistor, this energy will appear as heat in the resistor. If you move the charge through vacuum, the energy will appear as kinetic energy at the low-voltage electrode. In the case of alternating voltage, which in practice is usually sinusoidal, a rating of 1 volt means that the root-mean-square (RMS) voltage is 1 volt, so the amplitude of the sinusoidal time-dependent voltage is $\sqrt{2}$ volts.
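The RMS convention in the last sentence is easy to verify numerically; a sketch with an illustrative 50 Hz signal:

```python
import numpy as np

# A "1 V" AC voltage usually means 1 V RMS, i.e. amplitude sqrt(2) V.
t = np.linspace(0.0, 1.0, 200001)       # one second, fine grid
amplitude = np.sqrt(2.0)                # volts
v = amplitude * np.sin(2*np.pi*50*t)    # 50 Hz mains-like signal

v_rms = np.sqrt(np.mean(v**2))          # -> ~1.0 V

# Energy picture: moving 1 C through 1 V releases 1 J
energy = 1.0 * 1.0                      # joules = volts * coulombs
```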
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
What is ground state useful for? Many textbooks and research are oriented to computing or finding the ground state of a quantum mechanical system. Why is it such a big deal? What can be done once one has the ground state?
The ground state is the state of lowest energy, i.e. the state that nature will most probably occupy, and if the system is not in it, there must be a reason why not - a reason why the system is excited. In addition, the ground state is very often the zero-order approximation, e.g. of perturbation theory or a mean-field approximation, so the ground state gives the largest contribution to your solution, and the higher orders give certain - maybe in your case even negligible - corrections to the calculated energy levels. It gives you a rough estimate of whatever solution you are looking for.
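Computing a ground state is also a very concrete task; a minimal sketch, diagonalizing a finite-difference Hamiltonian for the 1D harmonic oscillator in units where $\hbar=m=\omega=1$ (exact answer $E_0=1/2$):

```python
import numpy as np

n = 800
x = np.linspace(-8.0, 8.0, n)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + x^2/2 with a second-order central difference
kinetic = (np.diag(np.full(n, 2.0))
           - np.diag(np.ones(n-1), 1)
           - np.diag(np.ones(n-1), -1)) / (2*dx**2)
H = kinetic + np.diag(0.5 * x**2)

energies, states = np.linalg.eigh(H)   # eigenvalues in ascending order
E0 = energies[0]                       # ~0.5, the ground-state energy
ground = states[:, 0]                  # Gaussian-shaped, nodeless wavefunction
```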
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
It seems that the Euler equation in thermodynamics and the first law of thermodynamics are in contradiction The Euler equation in thermodynamics is as follows: $U=TS-PV+\mu N$. But the first law of thermodynamics states that $dU=TdS-PdV+\mu dN$. However, differentiating the Euler equation gives $dU=TdS+SdT-PdV-VdP+\mu dN+Nd\mu$. Then, $SdT-VdP+Nd\mu=0$. But I don't think that this is always correct. Edit: I see that it's only correct if the system is homogeneous. Can you give me an example of a homogeneous system and a nonhomogeneous system?
The Euler equation is a consequence of the extensive property of energy $U(\lambda S,\lambda V,\lambda N)= \lambda U(S,V,N)$. \begin{align}U(S,V,N)&=\Big(\frac{\partial{U}}{\partial{S}}\Big)_{N,V}S+\Big(\frac{\partial{U}}{\partial{V}}\Big)_{N,S}V+\Big(\frac{\partial{U}}{\partial{N}}\Big)_{S,V}N\\ &=TS-PV+\mu N\end{align} And the first law of thermodynamics is just a statement which says that energy is conserved. $U(S,V,N)$ is a state function and therefore $dU(S,V,N)$ is an exact differential, it can be written in the form shown below. \begin{align}dU(S,V,N)&= \Big(\frac{\partial{U}}{\partial{S}}\Big)_{N,V}dS+\Big(\frac{\partial{U}}{\partial{V}}\Big)_{N,S}dV+\Big(\frac{\partial{U}}{\partial{N}}\Big)_{S,V}dN\\&=TdS-PdV+\mu dN \end{align} Both the Euler equation and the first law of thermodynamics are logical consequences of different properties of energy, $U(S,V,N)$ (extensivity and conservation of energy) and both are true. As you concluded, this leads to $SdT-VdP+Nd\mu=0$, which is the Gibbs-Duhem equation. Your question is the proof of Gibbs-Duhem equation.
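The degree-one homogeneity that drives the Euler relation can be exercised on a toy function; a sketch with sympy, using $U = SV/N$ purely as an illustration of extensivity (it is not a realistic equation of state):

```python
import sympy as sp

S, V, N, lam = sp.symbols('S V N lam', positive=True)

# Toy energy function, homogeneous of degree one: U(lam S, lam V, lam N) = lam U
U = S*V/N

T = sp.diff(U, S)        # temperature, (dU/dS)_{V,N}
negP = sp.diff(U, V)     # -P, (dU/dV)_{S,N}
mu = sp.diff(U, N)       # chemical potential, (dU/dN)_{S,V}

euler = T*S + negP*V + mu*N   # Euler relation: should reproduce U
```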
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does the inverse-square law apply to linearly polarized light? It's a stupid question, but: we did an experiment using linearly polarized microwave radiation generators and receivers. Our teacher asked us to check experimentally whether the receiver measurements are proportional to the intensity of the radiation $I$ or to the intensity of the electric field $E$. To check that, we took the measurements $M$ of the receiver at different distances $R$ between the receiver and the generator. She said that if the receiver responds to $I$, the diagram of $M=f(\frac{1}{R^2})$ will fit a straight line better than the $M=f(\frac{1}{R})$ diagram; if the reverse happens, the receiver responds to $E$. So that's why I am confused: shouldn't the $\frac{1}{R^2}$ relationship indicate that the receiver's measurements are proportional to $E$?
Static electric fields fall off as $1/R^{2}$, but the ${\bf E}$ field associated with a spherical wave does not. The electric field of a radiating spherical wave falls off as $1/R$. In finding the solutions of problems with accelerating charges, the difference in behavior can often be used to separate space into a "quasi-static zone" (close to the sources, where the ${\bf E}$ field falls off as $1/R^{2}$) and a "radiation zone" (farther away, where ${\bf E}$ falls as $1/R$). If you are measuring a wave's intensity as a function of distance from the source, you should observe the $1/R^{2}$ behavior. The reason for this is that the amount of energy carried by an outgoing wave is proportional to ${\bf E}^{2}$, and thus falls off as $1/R^{2}$ even though ${\bf E}$ itself goes down as $1/R$.
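The teacher's discrimination test can be mimicked on synthetic data; a sketch where the readings are generated to follow the radiation-zone $1/R^2$ intensity law plus a little noise, and the two candidate straight-line fits are compared:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic detector readings obeying an inverse-square law plus noise
R = np.linspace(0.5, 3.0, 30)
M = 1.0 / R**2 + rng.normal(0.0, 0.005, R.size)

def r_squared(xdata, ydata):
    """Coefficient of determination for the best straight line y = a*x + b."""
    a, b = np.polyfit(xdata, ydata, 1)
    resid = ydata - (a*xdata + b)
    return 1.0 - np.sum(resid**2) / np.sum((ydata - ydata.mean())**2)

fit_inv_sq = r_squared(1.0/R**2, M)   # near 1: the right model
fit_inv = r_squared(1.0/R, M)         # visibly worse
```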
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Question about Lagrange method and line element Consider the following line element: $$ds^{2} = K(x,y,z,t)(-dt^2+dx^2)+M(x,y,z,t)dxdt+dy^2+dz^2$$ Then the Lagrangian method gives us the Lagrangian from the line element: $$\mathcal{L}^2 = K(x,y,z,t)(-\dot{t}^2+\dot{x}^2)+M(x,y,z,t)\dot{x}\dot{t}+\dot{y}^2+\dot{z}^2 \tag{1}$$ where, for example, $\dot{x} := \frac{dx}{d\lambda}$ and $\lambda$ is an affine parameter. Given now one of the four Lagrange equations: $$\frac{\partial \mathcal{L}^2}{\partial t}-\frac{d}{d\lambda}\Big( \frac{\partial \mathcal{L}^2}{\partial \dot{t}} \Big) = 0$$ Consider just a part of this expression: $$-\frac{d}{d\lambda}\Big( \frac{\partial \mathcal{L}^2}{\partial \dot{t}} \Big)$$ then, $$-\frac{d}{d\lambda}\Big( \frac{\partial}{\partial \dot{t}} [K(x,y,z,t)(-\dot{t}^2+\dot{x}^2)+M(x,y,z,t)\dot{x}\dot{t}+\dot{y}^2+\dot{z}^2] \Big) = -\frac{d}{d\lambda}\Big( -2K\dot{t}+M\dot{x}\Big) = 2K\ddot{t}-M\ddot{x} \implies $$ $$ 2K\ddot{t}-M\ddot{x} = 0 \tag{2}$$ My question is: are both $(1)$ and $(2)$ valid? I mean, did I calculate this right?
The Lagrangian is $$ L= \frac{1}{2} g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}$$ and the Euler/Lagrange eqs. are $$\frac{d}{d\lambda}\frac{\partial L}{\partial(dx^\mu/d\lambda)} = \frac{\partial L}{\partial x^\mu} $$ Now the part in the $t$ component that you considered becomes $$-\frac{d}{d\lambda}\Big( \frac{\partial}{\partial \dot{t}} [K(x,y,z,t)(-\dot{t}^2+\dot{x}^2)+M(x,y,z,t)\dot{x}\dot{t}+\dot{y}^2+\dot{z}^2] \Big) = -\frac{d}{d\lambda}\Big( -2K\dot{t}+M\dot{x}\Big) = 2K\ddot{t}-M\ddot{x} +2\frac{dK}{d\lambda} \dot{t}-\frac{dM}{d\lambda}\dot{x}$$ where the total derivatives $\frac{dK}{d\lambda}$ and $\frac{dM}{d\lambda}$ are calculated as $\frac{\partial K}{\partial x^\alpha}\frac{d x^\alpha}{d\lambda}$ and $\frac{\partial M}{\partial x^\alpha}\frac{d x^\alpha}{d\lambda}$, respectively. So, to answer your questions directly: (1) is almost right; (2) was definitely incomplete (and therefore wrong), since you forgot the indirect dependence of $K$ and $M$ on $\lambda$.
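The forgotten chain-rule terms can be exhibited with sympy; a sketch where $K$ and $M$ depend on $\lambda$ only through $t(\lambda)$ and $x(\lambda)$ (the $y$, $z$ dependence plays no role here and is suppressed):

```python
import sympy as sp

lam = sp.Symbol('lam')
t = sp.Function('t')(lam)
x = sp.Function('x')(lam)
K = sp.Function('K')(t, x)   # K(t(lam), x(lam)): indirect lam-dependence
M = sp.Function('M')(t, x)

tdot, xdot = t.diff(lam), x.diff(lam)
L = K*(-tdot**2 + xdot**2) + M*xdot*tdot

p_t = L.diff(tdot)              # conjugate momentum: -2 K t' + M x'
dp_t = p_t.diff(lam)            # total derivative, chain rule included

# What one gets when forgetting dK/dlam and dM/dlam:
naive = -2*K*t.diff(lam, 2) + M*x.diff(lam, 2)
extra = sp.simplify(dp_t - naive)   # the forgotten terms, nonzero in general
```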
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do mass and motion affect space-time differently? Mass is said to create curvatures in space-time thereby creating gravity, yet technically the smallest movements, even on Earth, create gravitational waves. Are there different "types" of disturbances of space-time? How?
Spacetime curvature is actually caused not by mass but by the density and flow of energy and momentum. These quantities are encapsulated in something called the “energy-momentum-stress tensor”. Even massless particles like photons can cause spacetime to curve because they have energy and momentum even though they don’t have mass. But this is a tiny theoretical effect, and of course realistically it is the energy of massive particles that causes gravity. So I’ll answer your other question by talking in terms of mass. If a distribution of mass is sitting still, it creates a static gravitational field that decreases at large distances like $1/r^2$. But if it is moving in a certain way, it radiates a gravitational wave whose field decreases like $1/r$. In order to radiate, the quadrupole moment of the mass distribution must have a nonzero third time derivative. (Or the octupole moment can have a nonzero fourth derivative, etc.) This means, for example, that a spinning sphere does not radiate, but a dumbbell spinning end over end does.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The reasoning behind doing series expansions and approximating functions in physics It is usual in physics that when we have a variable that is very small or very large, we do a power series expansion of the function of that variable and eliminate the high-order terms. My question is: why do we usually make the expansion and then approximate? Why don't we just take the limit of that function when that value is very small (tends to zero) or very large (tends to infinity)?
Consider the function $f(x)$ defined by $$ f(x)\equiv \int^\infty_{-\infty} ds\ \big(\exp(-s^2-xs^4) - \exp(-s^2)\big). \tag{1} $$ When $x=0$, we get $f(0)=0$. What if we want to know the value of $f(x)$ when $x$ is a very small positive number? We don't know how to evaluate this integral exactly and explicitly, and just saying that the result will be "close to zero" is not very enlightening. We could evaluate the integral numerically, but that requires a computer (or a very patient person with a lot of time), and if we do the calculation that way, then we have to re-do it for each new value of $x$ that we care about. An alternative is to expand in powers of $x$: $$ f(x) = \int^\infty_{-\infty} ds\ (-xs^4)\exp(-s^2) + \int^\infty_{-\infty} ds\ \frac{(-xs^4)^2}{2!}\exp(-s^2) + \int^\infty_{-\infty} ds\ \frac{(-xs^4)^3}{3!}\exp(-s^2) +\cdots \tag{2} $$ Each term in this expansion is an elementary integral, which can be evaluated explicitly, so we end up with a series in powers of $x$ with explicit numeric coefficients. The expansion doesn't converge (it's an asymptotic expansion), but if $x$ is small enough, then the first few terms give a good approximation, and we don't have to re-compute the coefficients every time we want to try a new value of $x$. Examples like this are everywhere in physics. This particular example is the single-variable version of an integral that shows up in the simplest type of non-trivial quantum field theory (the "$\phi^4$ model").
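The payoff can be made concrete numerically; a sketch comparing a brute-force evaluation of $f(x)$ at $x=0.01$ with the first two series terms, using the Gaussian moments $\int s^4 e^{-s^2}\,ds = \tfrac{3}{4}\sqrt{\pi}$ and $\int s^8 e^{-s^2}\,ds = \tfrac{105}{16}\sqrt{\pi}$:

```python
import numpy as np

def f_numeric(x, smax=8.0, n=200001):
    """Brute-force grid evaluation of f(x); the integrand is ~0 at the ends."""
    s = np.linspace(-smax, smax, n)
    vals = np.exp(-s**2 - x*s**4) - np.exp(-s**2)
    return np.sum(vals) * (s[1] - s[0])

x = 0.01
term1 = -x * 3*np.sqrt(np.pi)/4            # from the s^4 moment
term2 = x**2/2 * 105*np.sqrt(np.pi)/16     # from the s^8 moment
series = term1 + term2

exact = f_numeric(x)   # the two-term series already agrees closely
```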
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Will it be complicated to use Gauss's Law in this problem? The electric field strength depends only on the $x$ and $y$ coordinates according to the law $\vec{E}=a(x\hat{i}+y\hat{j})/(x^2+y^2)$, where $a$ is a constant. Find the flux of the vector $\vec{E}$ through the closed surface $x^4+y^4+z^4=81$. I used the integral form of Gauss's Law to solve this question, but the integration was very complicated. I took the normal direction of the surface from its gradient and then did the surface integral. If this were a sphere, the surface integral would be very easy. So how do I do this? Is there any other method?
Hint: The surface $x^4+y^4+z^4=81$ is a rounded cube centered at the origin, crossing each coordinate axis at $\pm 3$. Incidentally, the electric field is exactly that produced by the infinite $z-$axis carrying a uniform linear charge density $\lambda$. So it would be interesting to compare your result with that obtained by applying Gauss's Law to the charge on the $z-$axis from $z=-3$ to $z=+3$. Note that it's not necessary to find the flux through the given surface directly: you can use another, more convenient surface (a sphere, cube or cylinder, for example) with provably equal flux.
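A sketch of the equivalent-surface idea: integrate $\vec E\cdot\hat n$ numerically over capped coaxial cylinders of height 6 (the caps contribute nothing, since $\vec E$ has no $z$-component) and observe that the flux $2\pi a h = 12\pi a$ is independent of the radius, as a divergence-free field off the axis requires:

```python
import numpy as np

a = 1.0     # field constant
h = 6.0     # the z-axis pierces x^4+y^4+z^4=81 at z = -3 and z = +3

def cylinder_flux(Rc, nphi=400, nz=200):
    """Flux of E = a(x, y, 0)/(x^2+y^2) through the side of a cylinder of radius Rc."""
    phi = (np.arange(nphi) + 0.5) * 2*np.pi / nphi
    z = -h/2 + (np.arange(nz) + 0.5) * h / nz
    PHI, _ = np.meshgrid(phi, z)
    x, y = Rc*np.cos(PHI), Rc*np.sin(PHI)
    Ex, Ey = a*x/(x**2 + y**2), a*y/(x**2 + y**2)
    En = Ex*np.cos(PHI) + Ey*np.sin(PHI)      # outward normal is (cos, sin, 0)
    dA = (2*np.pi*Rc/nphi) * (h/nz)
    return np.sum(En) * dA                    # caps add zero: E_z = 0

flux_small = cylinder_flux(1.0)
flux_large = cylinder_flux(2.5)               # both equal 12*pi*a
```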
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What would happen if antiprotons were studied under the conditions of particle deceleration that they are collected under? I have not had a very clear understanding of how antiprotons are collected, but I do know that when they collide with matter they explode in a puff of energy. Or so I have been told. If the properties of antimatter (antiprotons) currently studied were instead measured under the deceleration conditions they are collected under, perhaps they would show different properties in that environment?
I do know that when they collide with matter they explode in a puff of energy. Or so I have been told. Don't trust whoever told you that: it's nonsense. There is nothing like "a puff of energy". Energy is not a substance; it's a physical quantity particles may have in variable amounts. Collision of an antiproton with "matter" can be better analyzed as a proton-antiproton annihilation (strictly speaking this isn't the only possibility - a neutron-antiproton annihilation is also possible, but it's slightly less simple to analyze). From proton-antiproton annihilation in vacuum you're left with almost 2 GeV of energy available in the COM frame, and a lot of possible products may result: pions (provided total charge remains zero), photons, kaons, and several other particles, with different probabilities. In the end (after microseconds) only stable particles will remain, i.e. photons, electrons, positrons, neutrinos, antineutrinos.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing a matrix element with the Wigner-Eckart-theorem I learned about the Wigner-Eckart theorem and want to apply it to the following matrix element \begin{equation} \langle j \, m | r_kr_l | j' \, m'\rangle. \end{equation} I know this can be done by writing the involved tensor operator as a sum of irreducible tensors \begin{equation} r_kr_l = \left(\frac{1}{3}\delta_{kl}\vec{r}^2\right) + \left(r_kr_l - \frac{1}{3}\delta_{kl}\vec{r}^2\right) \end{equation} which leads to \begin{align} \langle j \, m | r_kr_l | j' \, m'\rangle &= \langle j \, m | \frac{1}{3}\delta_{kl}\vec{r}^2 | j' \, m'\rangle + \langle j \, m | r_kr_l - \frac{1}{3}\delta_{kl}\vec{r}^2 | j' \, m'\rangle\\ &= \frac{1}{3}\lambda(j)\delta_{jj'}\delta_{mm'}\delta_{kl} + \langle j \, m | r_kr_l - \frac{1}{3}\delta_{kl}\vec{r}^2 | j' \, m'\rangle, \end{align} as $\vec{r}^2$ is a scalar. But I don't know how to proceed. How can I compute the second matrix element? Do I have to write the tensor in spherical coordinates so that I can apply Wigner-Eckart theorem to $Y_l^m(\vartheta,\phi)$ here?
I am assuming here that with $r_i$ you mean the Cartesian components of the position operator. In general the decomposition of $u_i \, v_j$, with $\vec{u}$ and $\vec{v}$ vectors, includes an additional antisymetric component $\frac{1}{2} \left(u_i \, v_j - u_j \, v_i\right)$ with 3 additional DOFs. In this case we don't have to worry about that. Writing everything explicitly in terms of spherical harmonics reveals that the job is not fully done by just splitting this object in the trace and the traceless symmetric component. A bit of algebra leads us to \begin{equation} r_i \, r_j \, = \, \frac{\sqrt{4 \pi}}{3} \, Y^{m=0}_{l=0} \, r^2 \, \delta_{ij} \, + \,r^2 \, S_{ij} \, , \end{equation} with \begin{equation} S = \begin{bmatrix} -\frac{2}{3}\sqrt{\frac{\pi}{5}} Y^{m=0}_{l=2} + 4 \sqrt{\frac{\pi}{30}} \mathrm{Re}\left[Y^{m=2}_{l=2}\right] & 4 \sqrt{\frac{\pi}{30}} \mathrm{Im}\left[Y^{m=2}_{l=2}\right] & - 2 \sqrt{\frac{2\pi}{15}} \mathrm{Re}\left[Y^{m=1}_{l=2}\right] \\ 4 \sqrt{\frac{\pi}{30}} \mathrm{Im}\left[Y^{m=2}_{l=2}\right] & -\frac{2}{3}\sqrt{\frac{\pi}{5}} Y^{m=0}_{l=2} - 4 \sqrt{\frac{\pi}{30}} \mathrm{Re}\left[Y^{m=2}_{l=2}\right] & - 2 \sqrt{\frac{2\pi}{15}} \mathrm{Im}\left[Y^{m=1}_{l=2}\right] \\ - 2 \sqrt{\frac{2\pi}{15}} \mathrm{Re}\left[Y^{m=1}_{l=2}\right] & - 2 \sqrt{\frac{2\pi}{15}} \mathrm{Im}\left[Y^{m=1}_{l=2}\right] & \frac{4}{3}\sqrt{\frac{\pi}{5}} Y^{m=0}_{l=2} \end{bmatrix} . \end{equation} This shows that the traceless symmetric part contains only components of a rank 2 spherical tensor. But the relation between Cartesian and Spherical components certainly mixes distinct values of $m$ for a specific Cartesian component. In particular, the real and imaginary parts of spherical harmonics will always mix $Y^{l=l_i}_{m=|m_i|}$ and $Y^{l=l_i}_{m=-|m_i|}$. 
We see here that when it comes to applying the Wigner-Eckart theorem to a Cartesian tensor like this, we may be able to establish the rank $l$ of the distinct pieces directly and deduce the contribution to the matrix element that depends solely on $l$. On the other hand, like $S$, each piece clearly contains different irreducible components with different $m$'s in (generically) non-trivial combinations and the Wigner-Eckart formula depends on $m$ via a Clebsch-Gordan coefficient. Hence, in order to write the full explicit result, including the $m$ dependent CG coefficient, we would have to go through each irreducible term in each Cartesian component. It should be mentioned that here we had the advantage of having the position vector directly involved and we were able to write things in terms of the Spherical Harmonics. In general, the relation between irreducible Spherical components and (products of) general Cartesian vectors can be computed as an extrapolation of the one you find for spherical harmonics (see section 3.10 Tensor Operators in Sakurai's Modern Quantum Mechanics, revised edition). Finally, after you are done with the decomposition, you are free to use the Wigner-Eckart theorem in its general form to compute the matrix elements. Or, alternatively, look directly at the integrals which end up having integrands with 3 spherical harmonics. For these, there is a formula (eq 3.7.73 in Sakurai's book), which is nothing but an explicit application of the Wigner-Eckart theorem.
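One entry of the matrix $S$ can be verified directly; a sketch checking with sympy that $z^2 = r^2\big(\frac{\sqrt{4\pi}}{3}Y^{m=0}_{l=0} + \frac{4}{3}\sqrt{\frac{\pi}{5}}\,Y^{m=0}_{l=2}\big)$, i.e. the trace piece plus the $S_{33}$ entry quoted above:

```python
import sympy as sp
from sympy import Ynm

r, theta, phi = sp.symbols('r theta phi', real=True)

# Explicit spherical harmonics (no phi-dependence for m = 0)
Y00 = Ynm(0, 0, theta, phi).expand(func=True)   # equals 1/(2 sqrt(pi))
Y20 = Ynm(2, 0, theta, phi).expand(func=True)   # proportional to 3 cos^2(theta) - 1

z = r*sp.cos(theta)
decomposed = r**2 * (sp.sqrt(4*sp.pi)/3 * Y00
                     + sp.Rational(4, 3)*sp.sqrt(sp.pi/5) * Y20)

residual = sp.simplify(sp.expand(z**2 - decomposed))   # should vanish identically
```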
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are Maxwell's equations "physical"? The canonical Maxwell's equations are derivable from the Lagrangian $${\cal L} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} $$ by solving the Euler-Lagrange equations. However: The Lagrangian above is invariant under the gauge transformation $$A_\mu \to A_\mu - \partial_\mu \Lambda(x) $$ for some scalar field $\Lambda(x)$ that vanishes at infinity. This implies that there will be redundant degrees of freedom in our equations of motion (i.e. Maxwell's equations). Therefore, as I understand gauge fixing, this implies that Maxwell's equations (without gauge fixing) can lead to unphysical predictions. Question: Hence my question is simply: are Maxwell's equations (the ones derived from $\cal{L}$ above) actually physical, in the sense that they do not make unphysical predictions? Example: The general solution to the equations of motion derived from $\cal{L}$ is given by $$A_\mu(x) = \sum_{r=0}^3 \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2E_{\mathbf{p}}}}\left(\epsilon^r_\mu(\mathbf{p}) a^r_\mathbf{p}e^{-ipx} + \epsilon^{*r}_\mu(\mathbf{p}) (a^r_\mathbf{p})^\dagger e^{ipx} \right)$$ where we have, at first, 4 polarization states for external photons. My understanding is that we can remove one of these degrees of freedom by realizing that $A_0$ is not dynamical, but to remove the other one we have to impose gauge invariance (cf. (2)). This seems to imply that unless we fix a gauge, Maxwell's equations will predict a longitudinal polarization for the photon.
1. You are correct that there are gauge degrees of freedom in the solution for $A_\mu$ - precisely the ordinary gauge transformations. But $A$ is not physical; the electromagnetic field strength $F$ is. There are no gauge degrees of freedom in $F$, and as an equation for $F$ Maxwell's equations are physical.
2. The polarization of the classical $A_\mu$ has nothing to do with any photons. There are no photons in a classical theory. Maxwell's equations alone classically fully suffice to allow only transverse EM waves, see e.g. this question and its answers.
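Point 1 can be checked mechanically: under $A_\mu \to A_\mu - \partial_\mu \Lambda$, the field strength $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$ is unchanged because mixed partial derivatives commute. A sketch with sympy and an arbitrary smooth $\Lambda(t,x,y,z)$:

```python
import sympy as sp

coords = sp.symbols('t x y z')
A = [sp.Function(f'A{mu}')(*coords) for mu in range(4)]
Lam = sp.Function('Lambda')(*coords)

def field_strength(Af):
    """Component-wise F_{mu nu} = d_mu A_nu - d_nu A_mu."""
    return [[sp.diff(Af[nu], coords[mu]) - sp.diff(Af[mu], coords[nu])
             for nu in range(4)] for mu in range(4)]

A_gauged = [A[mu] - sp.diff(Lam, coords[mu]) for mu in range(4)]

gauge_invariant = all(
    sp.simplify(field_strength(A)[mu][nu] - field_strength(A_gauged)[mu][nu]) == 0
    for mu in range(4) for nu in range(4))
```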
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
If the universe is flat, does that imply that the Big Bang produced an infinite amount of energy? Too much density and the universe is closed, analogous to a sphere in four dimensions: you travel in a straight line and you end up where you started. Too little and you have a saddle: not sure about the destination if you travel in a straight line. Just the right amount and the topology is flat. The flat topology is infinite: you travel in a straight line forever. If the topology is flat (and at this point all evidence indicates that it is to within 0.4%), then multiplying the critical density by an infinite amount of cubic meters gives you an infinite energy/stress.$$\rho_{CRIT}\space kg\space m^{-3}\times \infty\space m^3=\infty\space kg$$ Is that a reasonable interpretation?
What you calculated above ($\infty$, for a flat space universe) is just the proper mass-energy of matter. It doesn't take into account the - negative - energy of the gravitational "field" itself, which cannot be localised in General Relativity. The "total energy" of the universe could be 0, but there's no way we could give a physical sense to it, since the whole universe energy cannot be measured from "inside".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
are there changing magnetic and electric fields that are not EM radiation? Let us consider these two Maxwell equations: $$\frac{\partial \vec{B}}{\partial t}=-\vec{\nabla}\times \vec{E}$$ and $$\frac{\partial \vec{E}}{\partial t}=\frac{1}{\epsilon_0}\left(-\vec{J}+\frac{1}{\mu_0}\vec{\nabla}\times \vec{B}\right).$$ When we consider Faraday's law of induction, we usually assume that the changes are slow, and thus we can neglect radiation by assuming that the left-hand side of the second equation is zero. That is, a changing current creates a changing magnetic field, which in turn creates a changing electric field, per the first equation. I cannot understand this. First, if we can neglect the change in E from the second equation, shouldn't we also neglect the change in B in the first equation? Second, this implies that we can have changing electric and magnetic fields that are not electromagnetic waves. But aren't all changing magnetic or electric fields EM waves? Or is this approximation equivalent to a charge moving at constant speed, in which case the changes in E and B are not due to radiation but just to the translational motion of the static field lines?
An electric or magnetic field always obeys a wave equation; this can be proven by eliminating one or the other from the two equations that you display. In order to qualify as radiation the wave should transport energy, that is, propagate. Evanescent fields exist only near the current or charge, see https://en.wikipedia.org/wiki/Evanescent_field. These do not transport energy away from the source and are not radiation. For slowly time-varying currents and charges the fields are nearly purely evanescent. The faster the time variation, the higher the fraction of propagating fields.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Stone skipping: does spinning help? I have tried to hurl a stone with some added rotation and it performed slightly better, but I have great difficulty replicating this feat consistently. My question is: should I add angular momentum to my throw, or was it just a waste of energy?
There are two tricks to a good skipping stone. Lydéric Bocquet published some meaningful work on this subject. Firstly, the spin makes sure the stone maintains its orientation just as a spinning top does. Secondly, the orientation you want to impart to the stone should cause its trailing edge to hit the water first so that the collision will drive it back upwards due to "angle of attack" (kinda like holding your flat hand out the window of a moving car and orienting it 'trailing side down'). The spin is not wasted energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Interpretation of the Boltzmann factor and partition function $$p_i = \frac{ \exp\left(-\frac{\epsilon _i}{k_BT} \right)}{Z} $$ $$ Z= \sum_{i} \exp\left(-\frac{\epsilon _i}{k_BT} \right)$$ A) Is $p_i$ the probability of the system having an energy equal to $\epsilon_i$? (Probability to be in any of the many microstates that have energy $\epsilon_i$). B) Or is $p_i$ the probability of the system being in one particular microstate which happens to have energy $\epsilon_i$? (This microstate is not the only microstate with the same energy). If A) is correct then: $$ Z= \sum_{\epsilon_i} \exp\left(-\frac{\epsilon _i}{k_BT} \right)$$ If B) is correct then: $$ Z= \sum_{\epsilon_i} \Omega_i\exp\left(-\frac{\epsilon _i}{k_BT} \right),$$ where $\Omega_i$ is the multiplicity of the macrostate of energy $\epsilon_i$. From the derivation of the Boltzmann distribution I am inclined to understand it as B). But I have never seen the multiplicity in the partition function. What is the correct interpretation of the Boltzmann distribution?
To the first question, the answer is B: $p_i$ is the probability of being in the $i$-th microstate, which happens to have an energy $\varepsilon_i$. However, microstates other than the $i$-th one may also have an energy $\varepsilon_i$. The reason you never see the multiplicity in the partition function is because you are probably looking at summations done over the microstates: $$Z=\sum_i e^{-\frac{\varepsilon_i}{k_BT}}$$ instead of over the internal energies as you’ve written above.
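A quick numerical check that the two summation conventions agree, using a made-up three-level spectrum (the energies and multiplicities below are arbitrary illustration values):

```python
import math

def z_microstates(energies, kT):
    # Partition function as a sum over individual microstates
    return sum(math.exp(-e / kT) for e in energies)

def z_levels(levels, kT):
    # Partition function as a sum over energy levels, weighted by
    # the multiplicity Omega_i of each level
    return sum(g * math.exp(-e / kT) for e, g in levels)

# Toy spectrum: energy 0 appears once, energy 1 twice, energy 2 three times
micro = [0.0, 1.0, 1.0, 2.0, 2.0, 2.0]
levels = [(0.0, 1), (1.0, 2), (2.0, 3)]

kT = 1.0
Z = z_microstates(micro, kT)
p_one_microstate = math.exp(-1.0 / kT) / Z   # interpretation B: one microstate
p_energy_equals_1 = 2 * p_one_microstate     # interpretation A: any state of that energy
```

Both functions return the same $Z$; the Boltzmann $p_i$ is the per-microstate probability, and the probability of observing a given *energy* is that number times the multiplicity.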
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why is a temperature gradient set up in a heated rod? Suppose one end of a cylindrical rod is maintained at 100 degrees Celsius and the other at 0 degrees Celsius. My book says that after reaching "steady state" the rod will have developed a constant temperature gradient all throughout the rod. Why does the rod reach a steady state? Shouldn't the rod keep absorbing heat until it reaches 100 degrees? Assume the curved portion of the rod is perfectly insulated.
You can use your same argument for the $0$ degrees Celsius. In other words, you state: ... shouldn't the rod keep absorbing heat until it reaches 100 degrees? But you could just as easily say "Shouldn't the rod keep 'losing heat' until it reaches $0$ degrees?" You might find this statement subjectively a little more unreasonable than your statement, but hopefully it can then show you how your question also cannot be the case. To show why you get a constant temperature gradient, the other answers here explain why this is.
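The relaxation to a linear profile can be watched directly in a finite-difference sketch of the 1D heat equation with fixed-temperature ends (grid size and step ratio are arbitrary illustration choices):

```python
# Explicit finite-difference relaxation of dT/dt = alpha * d2T/dx2
# on a rod whose ends are held at 100 C and 0 C.
N = 51                      # grid points along the rod
T = [0.0] * N               # initial temperature everywhere
T[0] = 100.0                # hot end, held fixed
T[-1] = 0.0                 # cold end, held fixed
r = 0.4                     # alpha*dt/dx^2; must be <= 0.5 for stability

for _ in range(20000):      # iterate until (essentially) steady state
    new = T[:]
    for i in range(1, N - 1):
        new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    new[0], new[-1] = 100.0, 0.0   # re-impose the boundary temperatures
    T = new
```

The profile settles onto the straight line $T(x) = 100\,(1 - x/L)$: each interior point ends up gaining exactly as much heat from its hot neighbour as it loses to its cold one, which is why neither 100 nor 0 degrees wins.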
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
On Planets orbiting binary stars Several years ago a discovery was made of a planet orbiting a star of a binary system (two stars orbiting each other). Since binary star systems are plentiful in our galaxy, I presume we will be discovering even more such planets. However, as far as I know, no planet has been discovered that orbits both stars of a binary star system. Question: is it feasible and likely that a planet would orbit both stars of a binary system?
Generally speaking, binaries cannot have shared planets orbiting at close distances; neither the dynamics nor the kinematics works out. If a binary has a common planet, the planet must be far enough away, that is, its orbital radius must be far greater than the distance between the two stars. This approximates the two stars as a single star at their center of mass, whose gravitational attraction provides the centripetal force, so the system can be stable. It is impossible to circle two stars at close range. Each star of the binary can have its own planet; in that case, the planet needs to be close to its own star and far away from the other star.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Voltmeter has high resistance If the voltage remains constant in a parallel combination, then why doesn't a voltmeter with low resistance give the correct reading when there is only one resistor? If there are two resistors, then the voltage across the one we are measuring would be lower; but if there is only one resistor in the circuit and we are not looking at it from the current perspective, then please tell me whether I am correct or not.
Voltage in a parallel combination is the same across all devices in the combination. If the voltmeter has low resistance it will act like a "load": it will draw current itself, leading to a change in the current passing through the resistance. Let me explain mathematically. In a parallel combination the current passing through each component is $I_1 = V/R_1$, where $I_1$ is the current through one component and $R_1$ its resistance. If $R_1$ for the voltmeter is not very high, then this current will be high, leading to a change in the current passing through the resistance. Hence, a voltmeter with high resistance should be used.
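A small sketch of this loading effect (all component values are arbitrary illustration choices):

```python
def measured_voltage(v_source, r_series, r_load, r_meter):
    # A real voltmeter of resistance r_meter placed across r_load forms a
    # parallel combination, so the reading differs from the unloaded voltage.
    r_parallel = r_load * r_meter / (r_load + r_meter)
    return v_source * r_parallel / (r_series + r_parallel)

ideal = 10.0 * 1000.0 / (1000.0 + 1000.0)             # 5 V without the meter
good = measured_voltage(10.0, 1000.0, 1000.0, 1e9)    # high-resistance meter
bad = measured_voltage(10.0, 1000.0, 1000.0, 1000.0)  # meter comparable to load
```

With a 1 GΩ meter the reading is essentially the true 5 V; with a meter equal to the load, the reading drops to 3.33 V because the meter itself diverts a third of the current.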
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do we measure the age of the universe? How do we measure the age of the universe if time is relative to the observer? What is the reference frame we use to measure the age of the universe?
Cosmologists use the “comoving frame”, which is the unique reference frame at each point in which the universe appears isotropic. For example, if you measure the temperature of the cosmic microwave background to be 2.725 K in every direction, you are in a comoving frame. Observers moving relative to this frame see a hotter CMB in the direction of their motion. So the age of the universe is, in principle, the age measured by a clock in a comoving frame. Of course, it is really a calculated age, not a directly-observed time on a clock. We calculate the age of the universe by measuring how fast the universe is expanding, and by using General Relativity to extrapolate back in time to the start of the expansion, the Big Bang.
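The extrapolation can be sketched for a flat matter-plus-$\Lambda$ universe: the comoving age is $t_0 = \int_0^1 \frac{da}{a\,H(a)}$ with $H(a) = H_0\sqrt{\Omega_m/a^3 + \Omega_\Lambda}$. The parameter values below are round illustrative numbers, not measured ones:

```python
import math

def universe_age_gyr(h0_km_s_mpc=70.0, omega_m=0.3, omega_l=0.7, steps=100000):
    # Age of a flat matter + Lambda universe: t0 = integral_0^1 da / (a H(a)),
    # evaluated with the midpoint rule. 1 km/s/Mpc is about 1/978 Gyr^-1.
    h0_per_gyr = h0_km_s_mpc / 978.0
    total = 0.0
    da = 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da
        h = h0_per_gyr * math.sqrt(omega_m / a ** 3 + omega_l)
        total += da / (a * h)
    return total
```

A pure-matter universe ($\Omega_m = 1$) reproduces the textbook $t_0 = \tfrac{2}{3}H_0^{-1}$, while adding $\Lambda$ pushes the age up to roughly 13.5 Gyr for these parameters.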
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why air spacing for high power beam splitters? Ordinary glass cube beam splitters are constructed with a dielectric or hybrid coating on the hypotenuse of a right angle prism which is then cemented to another right angle prism. These fail in high energy applications because the cement absorbs enough energy to cause problems. For higher power applications, the two prisms are optically contacted so that there is no cement to fail. I have read that for even higher power applications, the prisms are air spaced. Are these air spaced prisms constructed by coating one prism, and bringing it close to, but not touching, the second prism? If so, why do these have a higher damage threshold than optically contacted prisms? Is the spacing chosen to be large enough to avoid frustrated total internal reflection? Or, when people refer to air spaced prisms do they mean that the mechanism is frustrated total internal reflection, and there is no coating on either prism?
There's a really good answer in another thread. Basically it's FTIR that's causing one part of the beam, and evanescent waves that couple to the second prism for the second beam. That way there's no coating to burn off. The air is not important, just the spacing of the gap, which controls how much of the beam goes where. Vacuum works just as well, but is obviously somewhat more difficult to arrange. I suspect vacuum-gapped versions are common in NIF though.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is the composite fermion not included in the anyon contents of FQH topological orders? For example, both the $\nu=1/3$ Laughlin state and the Moore-Read state has a simple interpretation in terms of composite fermions, which are bound states of an electron and two fluxes. Both the Laughlin states and the Moore-Read state also have anyons, since they are both topologically ordered. Laughlin states have Abelian $ne/m$ anyons, with $m=1/\nu$ and $n<m$, and Moore-Read state hosts non-Abelian anyons $\sigma$ with charge $e/4$ and a neutral fermion $\chi$. However, composite fermions themselves do not appear in the anyon contents of either state, despite being such an important step in describing these states. My question is why.
Just some comments on your reply to Jainendra. (I cannot directly comment under your reply because I am a new user.) As both Wen and Jain have mentioned, the anyons are the excitations of FQH states which have fractional braiding statistics and can be described by excited CF wave functions. On the other hand, if you attach a fractional number of quantized vortices to electrons, in principle, you can have some particles with anyonic statistics. We can write down such wave functions, in a framework consistent with CFs, as shown in our paper https://journals.aps.org/prb/abstract/10.1103/PhysRevB.104.115135 . We numerically confirm these intermediate states between two FQH states are gapped, as suggested by the adiabatic approach proposed by Greiter and Wilczek. But so far we treat these states as fictitious and we do not have a clear idea of their physical realizations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why does diode built-in voltage persist when current begins flowing? When P and N type semiconductors are connected together, some of the electrons drift from the N type into the P type to recombine with holes. This leaves positive ions on the N type and negative ions on the P type. Therefore an electric field exists across the depletion zone that stops further drift from electrons. This is also the potential that an external voltage source must exceed in order for current begin flowing. But once current starts flowing, why does the built in voltage still remain at the depletion zone? Once current begins flowing, don't the electrons that drifted over the PN junction at the beginning now flow forward? Why is there still an electric field across the junction?
The built-in field comes from diffusion of charge carriers, from a region of high concentration into a region of lower concentration. While an applied forward current DOES change those concentrations, the numbers (for silicon diodes at least) do not indicate a zero-thickness depletion/space-charge region at any feasible current density. Calculating the current density that does cause the depletion region to vanish is a standard sort of textbook problem; that much current would melt any normal diode if it didn't vaporize the attachment wires. The more useful effect is to reverse bias a PN junction to widen the depletion region, which modulates the stray capacitance (making a voltage-variable capacitance, a varicap diode).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Still Confused about Linear Momentum in a Circle A point mass with mass $m$, at distance $r$ from the center of a circle, with constant tangential velocity and constant angular velocity, is swung around the circle. ($p$ is linear momentum.) Angular momentum is radius × linear momentum. It is conserved. If $r$ is increased, linear momentum decreases. However, shouldn't linear momentum be conserved as well? Where does the linear momentum go?
"Linear momentum is conserved when there is no force acting on the system". So far so good. As the mass is moving in a circle, there is a force acting on it and linear momentum is not conserved. If the force vanishes, the bass will move on a straight line and both linear and angular momentum are conserved.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Varying a scalar field Lagrangian density I was varying a scalar field Lagrangian density and looked at this term $${\cal L}~=~-\frac{1}{2}\partial _\mu\phi\partial^\mu\phi.$$ The result that I need to arrive at is $$-\frac{1}{2}\delta(\partial _\mu\phi\partial^\mu\phi)=\nabla_\mu\nabla^\mu\phi\delta\phi.$$ But I can't find a way to do it. How do the partial derivatives turn into the nablas? Reference: https://arxiv.org/abs/1809.09274. This is the paper where I see it; the action is in (1) and the variation result is in (4).
It is through the 4d version of the 3d vector identity $$ \nabla\cdot (\delta \phi \nabla \phi)=(\nabla \delta \phi) \cdot (\nabla\phi)+\delta \phi \nabla^2 \phi $$ together with an integration by parts that gives $$ \delta\left\{\frac 12 \int (\nabla\phi)\cdot (\nabla\phi) d^3x\right\} $$ $$ = \int (\nabla\delta \phi)\cdot (\nabla\phi) d^3x$$ $$ = - \int \delta \phi \nabla^2 \phi \,d^3x $$ There is no real difference between the 3d and 4d cases except for the range of summation over the repeated indices.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why did Planck take frequency from a wave and incorporate it to explain the particle behavior of light? Well, if we assume anything to be a particle then it can never be a wave, right? But when Planck developed his idea he incorporated the frequency into the formula $E = h\nu$, yet frequency is a characteristic of waves, not particles?
Planck was trying to explain experimentally measured spectra of thermal EM radiation within electromagnetic theory, where radiation is a collection of waves. The formula $$ \Delta E = hf $$ was for quantum of energy by which energy of an oscillator can change, it does not in any way require that there be particles of light involved. The idea of particles of light never played any role in Planck's contributions. It was Einstein and other physicists after him who pushed through the concept of particles of light, which carry energy $hf$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Explanation of Lenz's Law phenomena If we drop a magnet through a copper pipe (without it touching any of the sides), it would fall slower than it would if there were no pipe. Having the pipe otherwise accelerate the magnet would be in violation of Lenz's law and conservation of energy. I agree that, if there is any interaction between the copper pipe and the magnet, then it should be to decelerate the magnet. But, why is there even any interaction between the pipe and magnet in the first place? Why can't there just be zero interaction? This way, the magnet is only affected by gravity, just how it would be if it were just a normal wooden or plastic pipe! Would zero interaction be a violation of some other kind of laws? Such as, the Lorentz force law, or Biot-Savart, or something else? Would zero interaction violate special relativity? Or, some rules of quantum mechanics?
It looks like what we call Lenz's Law can actually be explained by the Lorentz force and Biot-Savart Law. Suppose in the classic example of dropping a magnet down a conducting pipe (not touching the walls), the north pole is facing downwards while the south faces up. The relative motion of the magnet and pipe will induce a Lorentz force on the charges in the pipe, specifically in a horizontal direction that will appear counterclockwise around the pipe when viewed from above. By Biot-Savart, such current will then create its own magnetic field, and (inside the pipe) with an orientation against the field of the falling magnet. Hence, we observe Lenz's law.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Water mixture at different temperatures Let's say that we mix homogeneously and instantly cold water at $t^\circ $ C and hot water at $T^\circ$ C (like in a water tap) in ratio $p:1$. My question is the following: What is the instant temperature of this mixture? Is there a law in that sense? I am a mathematician, not a physicist; all I know about that is Newton's cooling law. However I can't see how to apply it here.
The above answers are correct at normal pressure, where all is linear. At high pressure and temperature the heat capacity of water strongly depends on temperature and simple averaging no longer works.
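For reference, the linear law alluded to above is just an energy balance with constant specific heat: the heat lost by the hot water equals the heat gained by the cold water, so the mixture temperature is the mass-weighted average. A minimal sketch (the function name is mine):

```python
def mix_temperature(t_cold, t_hot, ratio):
    # Mixing ratio:1 (cold:hot) by mass, assuming a constant specific heat
    # and no heat exchange with the surroundings. The energy balance
    # ratio * c * (T_mix - t_cold) = c * (t_hot - T_mix) gives:
    return (ratio * t_cold + t_hot) / (ratio + 1.0)
```

So equal parts of 0 C and 100 C water give 50 C, and two parts of 10 C water with one part of 70 C water give 30 C; the nonlinearity mentioned above only matters far from ordinary tap conditions.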
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
How does measuring the location of one entangled photon affect its pair? If you have two entangled photons that are moving in opposite directions and you measure the location of one of them, what happens to the wave function of the location of the other photon?
For two entangled particles there is only one wave function, the wave function of the entangled system. It is not the case that each entangled particle has its own wave function. If you want to say there is such a thing as the collapse of the wave function, then when you measure one of the particles, the wave function of the system collapses, so now you have information about both particles. To give a concrete example lets consider the case of two electrons in the singlet spin state with wave function: $$|\psi\rangle~=~\frac{1}{\sqrt{2}}\left(|+\rangle|-\rangle~-~|-\rangle|+\rangle\right)$$ Then, for example, if you make a measurement of spin along the z-direction, your state will "collapse" to either $|+\rangle|-\rangle$, or $|-\rangle|+\rangle$, meaning, that if the spin of the first electron is $+1/2$, the spin of the second will be $-1/2$, and vice versa. Everything works out fine, and there is no mention of a wave function for each electron, which is the same as saying that an entangled state is not a tensor-product state, it cannot be separated into two independent states.
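The collapse described above can be sketched with the singlet state written as a 4-component real vector in the basis $(|++\rangle, |+-\rangle, |-+\rangle, |--\rangle)$:

```python
import math

s = 1 / math.sqrt(2)
singlet = [0.0, s, -s, 0.0]   # (|+-> - |-+>) / sqrt(2)

def measure_first_z(state):
    # Probability that the first spin is found up/down along z, and the
    # normalized post-measurement state if "up" is obtained.
    p_up = state[0] ** 2 + state[1] ** 2      # components |++>, |+->
    p_down = state[2] ** 2 + state[3] ** 2    # components |-+>, |-->
    norm = math.sqrt(p_up)
    collapsed_up = [state[0] / norm, state[1] / norm, 0.0, 0.0]
    return p_up, p_down, collapsed_up

p_up, p_down, collapsed = measure_first_z(singlet)
```

Each outcome occurs with probability 1/2, and after finding the first spin up the state is entirely $|+-\rangle$: the second spin is down with certainty, exactly the anticorrelation the answer describes, without ever assigning each particle its own wave function.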
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Do real electrons solve the Thomson Problem? The question of how $N$ electrons (seen as point charges) on a conducting sphere will arrange themselves in the electrostatic final state was first posed by J.J. Thomson in 1904--hence, aka the Thomson Problem. If these abstract point charge electrons are initially placed randomly, they will migrate along potential gradients to a state with a locally minimum potential energy. The Thomson Problem is seen as finding the geometrical arrangement of the N charges with the global minimum potential energy. Unfortunately there are usually a great number of local minima, e.g. on the order of $10^6$ for $N$ of several hundred, so numerical techniques don't necessarily produce the global minimum and analytic techniques to date have only solved the problem for some small values of $N$. Notwithstanding the applications of the problem to many other practical phenomena, I have some general questions about the specific real case of electrons on a conducting sphere. If the sphere initially has a random distribution of excess charge ($N$ electrons), will they in fact somehow end up in the global minimum potential energy state, or will they as in a numerical simulation just find a local minimum and be stuck there? Is there any way to know that? If so, how do they do it? Another question: is it valid to think of the electrons as ultimately stationary points on the vertices of some geometric arrangement on the sphere in the first place? I.e., given quantum effects, statistical considerations, etc.
If the sphere is a good conductor then the electrons will be all over the sphere. If it is an insulator then they will remain where you put them.
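On the numerical side of the question: the classical point-charge model (not the quantum electrons of the answer above) can be explored with a naive projected gradient descent. Step size, iteration count, and seed below are arbitrary choices; for large $N$ this kind of scheme typically lands in one of the many local minima the question mentions, but for very small $N$ it does reach the known global minima:

```python
import math
import random

def energy(points):
    # Total Coulomb energy: sum of 1/r_ij for unit charges
    return sum(1.0 / math.dist(points[i], points[j])
               for i in range(len(points)) for j in range(i + 1, len(points)))

def _normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def descend(n, steps=2000, lr=0.05, seed=0):
    # Move each charge along the net repulsive Coulomb force, then project
    # back onto the unit sphere; repeat until (approximately) converged.
    rng = random.Random(seed)
    pts = [_normalize((rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)))
           for _ in range(n)]
    for _ in range(steps):
        forces = []
        for i in range(n):
            f = [0.0, 0.0, 0.0]
            for j in range(n):
                if i == j:
                    continue
                d = [pts[i][k] - pts[j][k] for k in range(3)]
                r3 = math.dist(pts[i], pts[j]) ** 3
                for k in range(3):
                    f[k] += d[k] / r3     # repulsive 1/r^2 pair force
            forces.append(f)
        pts = [_normalize(tuple(pts[i][k] + lr * forces[i][k] for k in range(3)))
               for i in range(n)]
    return pts
```

For $N=2$ the charges end up antipodal (energy $1/2$), and for $N=4$ on the vertices of a regular tetrahedron (energy $\approx 3.674$), which are the known Thomson global minima for those $N$.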
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why are my interference patterns completely out of phase? DIY physics enthusiast here doing a double slit eraser experiment at home with a laser pointer, double slit diaphragm, and few linear polarizers (horizontal at one slit, vertical at the other, +/-45 degrees for the eraser). When I angle the eraser polarizer at -45 degrees or +45 degrees I get the interference pattern back, however the interference patterns (light/dark bands) are completely out of phase for -45 vs +45. How come that happens? +45 =   |   |   |   |   |   | -45 =  |   |   |   |   |   | P.S. My physics "knowledge" is all from the University of YouTube, so you may have to explain it to me like I'm 5. :) Edit: Here is a link to a video I uploaded showing the setup and shifting interference pattern I am talking about: https://www.youtube.com/watch?v=1rN3iLcbb2M
The +45° polarised light has positive amplitude for both the vertically polarised and horizontally polarised components; the -45° polarised light, on the other hand, has positive amplitude for the vertical component and negative amplitude for the horizontal one. This sign difference makes the two interference patterns on the screen out of phase.
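This can be made quantitative with a small Jones-calculus sketch. The slit-1 field is taken as horizontally polarised with path-dependent phase $\delta$ and the slit-2 field as vertically polarised with unit amplitude (these amplitudes and the overall normalisation are illustrative assumptions):

```python
import cmath
import math

def intensity(eraser_deg, delta):
    # Project each slit's polarisation onto the eraser's transmission axis
    # and add the amplitudes coherently; delta is the path phase difference.
    th = math.radians(eraser_deg)
    amp = math.cos(th) * cmath.exp(1j * delta) + math.sin(th)
    return abs(amp) ** 2
```

The result is $I_{+45} \propto 1 + \cos\delta$ and $I_{-45} \propto 1 - \cos\delta$: the bright fringes of one eraser angle coincide with the dark fringes of the other, shifted by exactly half a fringe, and their sum reproduces the fringe-free pattern seen with no eraser.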
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why are Spin 1/2 particles invariant to $4\pi$ rotation loops while Spin 1 particles are invariant to $2\pi$ loops? Why do Spin 1/2 particles when turning them by 360 deg get a phase factor of -1 and a loop of 720 deg leads to the identity while for spin 1 particles a loop of 360 deg gives already the identity?
You ask for "why". Well, the reason is that the rotation operator in matrix form does that. For a spinor $\left( \begin{array}{} a \\ b \end{array} \right)$, you find that the rotation operator about an axis defined by the unit vector $\hat{n}$ through an angle $\theta$ is: $$U(\theta)=\cos\left(\frac{\theta}{2}\right)\mathbb{I}-i\sin\left(\frac{\theta}{2}\right)(\vec{\sigma}\cdot\hat{n})$$ where the $\sigma_i$ are the Pauli matrices. This is equivalent to $$\left( \begin{array}{cc} \cos\left(\frac{\theta}{2}\right)-i \sin\left(\frac{\theta}{2}\right)n_z & -i\sin\left(\frac{\theta}{2}\right)(n_x-in_y) \\ -i\sin\left(\frac{\theta}{2}\right)(n_x+in_y) & \cos\left(\frac{\theta}{2}\right) + i \sin\left(\frac{\theta}{2}\right) n_z \end{array} \right)$$ For simplicity, let's choose $\hat{n}=\hat{z}$, that is, turning about the z axis. The matrix becomes $$U_z(\theta)=\left( \begin{array}{cc} \cos\left(\frac{\theta}{2}\right)-i \sin\left(\frac{\theta}{2}\right) & 0 \\ 0 & \cos\left(\frac{\theta}{2}\right) + i \sin\left(\frac{\theta}{2}\right) \end{array} \right)=\left( \begin{array}{cc} \exp\left(-i\frac{\theta}{2}\right) & 0 \\ 0 & \exp\left(+i\frac{\theta}{2}\right) \end{array} \right)$$ So, whereas the rotation matrix for objects in real space is the usual one, spins get transformed this way. This is the "why": mathematics works like this. But let me point out some common misconceptions:
* When the space is rotated by the usual matrices, spins undergo this transformation, which is not actually a rotation. Real rotations induce these spin transformations.
* This matrix, acting on $\left( \begin{array}{} a \\ b \end{array} \right)$, produces a minus sign, that is: $$U_z(2\pi)\left( \begin{array}{} 1 \\ 0 \end{array} \right)=-1\left( \begin{array}{} 1 \\ 0 \end{array} \right)$$ But note that $-1=\exp(i\pi)$ is just a global phase that is not relevant. The state remains an eigenvector of $S_z$ with the same eigenvalue $\hbar /2$.
This is consistent with the fact that a spin "pointing in $z$" is not affected by a rotation about that axis. This is a global phase factor. It doesn't affect the physics, as it must be. It would only matter if you rotated only part of the system.
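The contrast with integer spin can be checked directly: an $S_z$ eigenstate with magnetic quantum number $m$ acquires the phase $e^{-im\theta}$ under a rotation by $\theta$ about $z$, so half-integer $m$ picks up $-1$ at $2\pi$ while integer $m$ is already back to $+1$. A quick numerical sketch:

```python
import cmath
import math

def rotation_phase(m, theta):
    # Phase acquired by an eigenstate of S_z with quantum number m
    # under a rotation by theta about the z axis
    return cmath.exp(-1j * m * theta)

def u_z(theta):
    # Spin-1/2 rotation operator about z as a 2x2 diagonal matrix
    return [[rotation_phase(0.5, theta), 0.0],
            [0.0, rotation_phase(-0.5, theta)]]
```

For $m = \pm 1/2$ the half-angle in the exponent means a $2\pi$ rotation gives $e^{\mp i\pi} = -1$ and only $4\pi$ returns the identity, whereas for $m \in \{-1, 0, 1\}$ (spin 1) every phase is already $1$ at $2\pi$.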
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Height of water fountain due to pressure difference This was a question on a fluid dynamics exam: the pressure $p_A$ on a plane is lower than the atmospheric pressure. We fill a water bottle with a straw on the ground and open it in the plane. Water will come out due to the pressure difference. Calculate the maximum height of the water fountain above the straw. What I think happens is that the only place where water can come out is at the straw, so we have a force upwards due to the pressure difference $p_{bottle}-p_A$. Water will only move upwards if this force can counteract the gravitational force $\rho_{water} g$ (I think we can neglect the fact that we are high up and the gravitational force is weaker). Then I tried to calculate it as if it were a point mass thrown up with a force $p_{bottle}-p_A$. But to be honest I'm not even able to calculate the initial velocity (I'm notoriously bad at physics). Any help or insight will be appreciated.
In my judgment, it is invalid to apply the Bernoulli equation to this problem because the Bernoulli equation applies only to steady state flows (or to flows that are nearly steady state), and this problem involves neither. However, water is very nearly incompressible, particularly for a change in pressure that is relatively small (like in the present case). So the volume of water will not change significantly when the pressure on it is reduced. That means that, for all practical purposes, there will be no fountain effect. All that will happen will be that the pressure in the water within the bottle will drop to the new pressure, essentially instantaneously.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
3D perfectly elastic collision between two points There is a high probability, I think, that this question is a duplicate of some other question ... but to my knowledge, it hasn't been posed in this exact manner: Assume we have 2 points, $P_1$ and $P_2$, of mass $m_1$ and $m_2$ in a world coordinate system $(O, \vec{i}_0, \vec{j}_0, \vec{k}_0)$. The point $P_1$ is moving with the constant velocity $\begin{bmatrix} v_{x1i}\\ v_{y1i}\\ v_{z1i}\end{bmatrix}$ while the point $P_2$ is stationary. The point $P_1$ undergoes a perfectly elastic collision with $P_2$. How will these two points move after the collision? My attempt This problem is about the conservation of linear momentum: therefore the momentum of the system formed by these two points remains constant. Before the collision the momentum of the system is: $$\vec{P}_{init} = m_1\cdot \begin{bmatrix} v_{x1i}\\ v_{y1i}\\ v_{z1i}\end{bmatrix} + m_2 \cdot \begin{bmatrix} 0\\0\\0\end{bmatrix}$$ After the collision, the linear momentum of the system is: $$ \vec{P}_{fin} = m_1\cdot \begin{bmatrix} v_{x1f}\\ v_{y1f}\\ v_{z1f}\end{bmatrix} + m_2 \cdot \begin{bmatrix} v_{x2f}\\v_{y2f}\\v_{z2f}\end{bmatrix}$$ The unknowns are $v_{x1f}, v_{y1f}, v_{z1f}, v_{x2f}, v_{y2f}, v_{z2f}$. But we have only three equations $\vec{P}_{init} = \vec{P}_{fin}$ and six unknowns ... One can also use the law of conservation of energy to obtain another equation: $$ \frac{m_1}{2} \cdot \left(v_{x1i}^2 + v_{y1i}^2 + v_{z1i}^2\right) = \frac{m_1}{2}\cdot \left( v_{x1f}^2+v_{y1f}^2+v_{z1f}^2\right) + \frac{m_2}{2}\cdot \left( v_{x2f}^2+v_{y2f}^2+v_{z2f}^2\right)$$ but there are still only four equations and six variables ...
You must know the direction for at least one of the masses after the collision. Then, for an elastic collision, you can use the fact that the speeds relative to the center of mass are reversed during the collision.
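In the head-on special case (everything along one line of motion), the "reverse the CM-frame speeds" rule gives the familiar closed-form result. A small sketch with illustrative numbers:

```python
def head_on_elastic(m1, m2, v1):
    """Head-on elastic collision: P2 initially at rest, P1 approaching along
    the line of centres with speed v1. Velocities relative to the centre of
    mass are reversed by the collision."""
    v_cm = m1 * v1 / (m1 + m2)          # centre-of-mass velocity
    u1, u2 = v1 - v_cm, -v_cm           # velocities in the CM frame
    return v_cm - u1, v_cm - u2         # reverse them, transform back

# Equal masses: the moving point stops and the other takes its velocity
v1f, v2f = head_on_elastic(1.0, 1.0, 4.0)
```

This reproduces the standard one-dimensional formulas $v_{1f}=\frac{m_1-m_2}{m_1+m_2}v_1$ and $v_{2f}=\frac{2m_1}{m_1+m_2}v_1$; in full 3D the two unresolved degrees of freedom are exactly the direction information mentioned above.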
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Van der Waals equation of state plot limitations When I plot the van der Waals equation of state in terms of pressure (bar) versus density (mol/L) for propane at 400 K, $$P=\frac{RT}{\big(V_m-b\big)}-\frac{a}{V_m^2}$$ in terms of density, $$P=\frac{RT}{\big(\frac{1}{\rho}-b\big)}-\frac{a}{\big(\frac{1}{\rho}\big)^2}$$ I get a very large negative value at $\rho=1/b$. I understand that this is because the first term becomes undefined at this value for the density but is there also a physical reason or understanding as to why the equation fails to model propane accurately? Also, past the critical density of $\rho_c=5\ \mathrm{mol\,L^{-1}}$ it does not accurately model propane, why is this?
The Van der Waals equation is a model. It's popular because it gives physically realistic predictions without too much work, but it's rarely used in serious modelling because it doesn't describe real substances with especially good accuracy. Overall: it is not surprising that the model doesn't model propane correctly as there really isn't any substance that the model describes especially well at all conditions. The $b$ term in the Van der Waals equation represents the volume of the particles (which is treated as zero in the ideal gas law). As you observe, $P\rightarrow \infty$ as $1/\rho\rightarrow b$; this represents the fact that, once there is no space between the particles, no amount of force can shrink the system further (the model assumes that the gas is composed of infinitely hard particles). If propane followed the Van der Waals equation exactly we would find that $b = \frac{1}{3\rho_c}$ i.e. the density of the particles themselves would be $3$ times the critical density. As $\rho \rightarrow 0$, the behaviour of real gases, Van der Waals gases, and ideal gases all converge. All reasonable models should agree in this limit. As $\rho \rightarrow \infty$, non-ideal behaviours become more significant and the three models begin to diverge. For the isotherm that you plotted, the difference between the real pressure and the prediction of the Van der Waals model begins to grow very quickly around $\rho = \rho_c$, but to my knowledge this is just a coincidence. The exact point at which the model breaks down is likely different for different isotherms.
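Both behaviours are easy to see numerically. A short sketch (the Van der Waals constants for propane below are approximate values of the kind found in standard tables, so treat them as illustrative):

```python
R = 0.083145  # gas constant, L·bar/(mol·K)
# Approximate Van der Waals constants for propane (illustrative values)
a = 9.39      # L^2·bar/mol^2
b = 0.0905    # L/mol

def vdw_pressure(rho, T):
    """Van der Waals pressure (bar) at molar density rho (mol/L), temperature T (K)."""
    Vm = 1.0 / rho
    return R * T / (Vm - b) - a / Vm**2

T = 400.0
p_low = vdw_pressure(0.01, T)      # nearly ideal: P ≈ rho*R*T
p_mid = vdw_pressure(5.0, T)       # around the critical density
p_near_b = vdw_pressure(10.9, T)   # steep rise as rho -> 1/b ≈ 11.05 mol/L
```

The first term blows up as $\rho \to 1/b$ because the model's hard particles leave no free volume; just past $1/b$ that term flips sign, which is the large negative value in the plot.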
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why we neglect the $\hbar ω/2$ in the Hamiltonian of the Electromagnetic Field? After the quantization of the electric and the magnetic field, we get the Hamiltonian of the electromagnetic field: $$H= \hbar ω(a^{\dagger}a +1/2) .$$ with $\hbar$ the Planck constant and $a^{\dagger}$ the creation operator. Why can we neglect the term $\hbar ω/2$ in many cases, e.g. when we want to describe the Rabi Hamiltonian, where we just take $H= \hbar ωa^{\dagger}a$ .
Because the vacuum energy (the constant $\hbar\omega/2$ that's being neglected) doesn't affect the analysis you want to do with the Rabi Hamiltonian: a constant term shifts every energy level by the same amount and contributes only an unobservable global phase to the time evolution.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
What does it mean for a plane wave to be an eigenstate of a 3D vector? Show that $$ \phi_p(r) = \left(\frac{1}{2\pi\hbar}\right)^{3/2}e^{ip\cdot r/\hbar}$$ is an eigenstate of p, where p and r are 3D vectors. I'm unclear on what the final equation I'm trying to get to should look like. I thought that the final result would be showing that $\phi_p$ is equal to $ap$, with $a$ being some constant. However, I also thought that a vector is an eigenvector for a particular matrix/operator, but I'm not sure what should be the operator for which p and $\phi_p$ are eigenvectors.
It's almost certain that you're misreading the text, or that it got mangled in some other way, and that it should really read Show that $$ \phi_{\vec p}(\vec r) =\left(\frac{1}{2\pi\hbar}\right)^{3/2}e^{i\vec p\cdot \vec r/\hbar}$$ is an eigenstate of $\hat{\vec p}$, where $\vec p$ and $\vec r$ are 3D vectors, and where $$ \hat{\vec p} = -i\hbar \nabla $$ is the quantum-mechanical momentum operator. You're being asked to show that $\hat{\vec p} \phi_{\vec p} = -i\hbar \nabla\phi_{\vec p}(\vec r)$ is a multiple of $\phi_{\vec p}(\vec r)$. If that's not the case, then you should double-check with whoever set you that exercise, or look for a better textbook. The exercise text as you've quoted it makes no sense other than as a mangled version of the quote in this answer.
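The computation the exercise wants can be checked symbolically; a quick SymPy sketch (the normalization factor cancels, so it is dropped):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
px, py, pz = sp.symbols('p_x p_y p_z', real=True)
hbar = sp.symbols('hbar', positive=True)

phi = sp.exp(sp.I * (px * x + py * y + pz * z) / hbar)

# Apply each component of p_hat = -i*hbar*grad and divide by phi
eigenvalues = [sp.simplify(-sp.I * hbar * sp.diff(phi, var) / phi)
               for var in (x, y, z)]
# eigenvalues == [p_x, p_y, p_z]: phi is a simultaneous eigenstate of all
# three momentum components, with eigenvalue vector p
```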
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\hbar \approx 0$ and the spread of QM wave function Is there a direct mathematical method to show that if a quantum wave funtion is initially sharply localized, then it will stay sharply localized if $\hbar \approx 0$? In that case the Ehrenfest theorem implies the transition from quantum mechanics to classical mechanics. Of course, we are dealing with the propagation of a wave function, but let's not mess with path integrals. Thus, the structure of the general solution of Schrödinger equation should imply the result - if possible.
The problem with trying to understand the spread of the wavefunction in the classical limit by taking $\hbar\approx 0$ or taking the limit $\hbar\to 0$ is that in reality $\hbar\neq 0$. Taking the real non-zero value of $\hbar$ there are cases of perfectly ordinary objects for which Ehrenfest's theorem doesn't imply anything like classical behaviour. Rather the wavefunction of that object spreads out a lot over time. The classical limit is actually a result of decoherence and information being copied out of 'classical' objects into the environment. See Zurek and Paz's paper on the quantum mechanical theory of the orbit of Saturn's moon Hyperion: https://arxiv.org/abs/quant-ph/9612037
{ "language": "en", "url": "https://physics.stackexchange.com/questions/456575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Equation for: How does Thickness of Aluminum Pipe affect the velocity of the magnet falling through? I need help in finding the equation for how the velocity of a magnet dropped through an aluminum foil pipe is affected when the thickness of the pipe is changed. Thank You.
An exact equation would require an exact description of the shape, strength, and orientation of the magnet. But generally, we can say the thicker the walls, the slower the fall. This is because the thicker the walls, the lower their electrical resistance, so the induced eddy currents, and the braking force they exert on the magnet, are larger.
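A common simplification models the eddy-current braking as a drag force $F=-kv$, with $k$ proportional to wall thickness (thicker walls, lower resistance, larger eddy currents). Under that assumption the terminal velocity scales inversely with thickness. Everything here is an illustrative model, not an exact equation; the lumped constant is made up:

```python
g = 9.81  # m/s^2

def terminal_velocity(m, thickness, k_per_metre=500.0):
    """Terminal speed of a magnet falling in a conducting pipe, assuming
    eddy-current drag F = -k*v with k proportional to wall thickness.
    k_per_metre is an assumed lumped constant standing in for the magnet's
    strength and geometry and the pipe's conductivity."""
    k = k_per_metre * thickness
    return m * g / k

v_thin = terminal_velocity(0.01, 0.001)   # 1 mm wall
v_thick = terminal_velocity(0.01, 0.002)  # 2 mm wall: half the terminal speed
```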
{ "language": "en", "url": "https://physics.stackexchange.com/questions/456728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Being Frictionless, surface of contact I frequently hear references to a smooth surface, or frictionless pulley. Can being frictionless be obtained if only one of the 2 surfaces has 0 coefficient of friction? Or is it a property of the contact between those 2 surfaces (in friction due to material roughness)? What is the cause of friction, molecular attraction or adhesion between materials?
I realize this is not an answer. But frankly, I don't think there is a simple answer. The very good, in my opinion, comments posted thus far confirm that friction is a complex topic. Although we try to use some simple models, one can always find examples where the models we use do not apply. If we were faced with actually trying to determine a friction force involving two objects, we would probably have to resort to performing actual experiments involving the objects involved. Then we may be able to predict, within specific limits, what the friction force actually is. In the meantime, the admittedly oversimplified models can at least be useful in applying other physics concepts (e.g. determining friction work) on an academic level. Hope this helps (but probably isn't very satisfying). It was a very good question.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/456919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How does the negative energy solution to the Dirac equation predict the antielectron? Please, can someone explain how the negative energy solution can be used to predict the existence of the antielectron?
It was not a prediction, it was a conjecture, in what was Dirac's hole theory: The negative E solutions to the equation are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. .... any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy. Real electrons obviously do not behave in this way, or they would disappear by emitting energy in the form of photons. To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates. It was a theory that was superseded by quantum electrodynamics and the problems of the theory were not resolved, although In certain applications of condensed matter physics, however, the underlying concepts of "hole theory" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively charged electron, though it is referred to as a "hole" rather than a "positron". The negative charge of the Fermi sea is balanced by the positively charged ionic lattice of the material. One should read the link and links therein for a clear picture.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Derivation of the work-energy theorem We state the following version of work-energy theorem : $$ K_2-K_1=Fd=W $$ Where acceleration is assumed to be constant, so is the force $F$. Then the physicists proceed by writing $K_2-K_1=F[x(t_2)-x(t_1)]$ $$=F(x_2-x_1)$$ Notice, $x(t)$ was a polynomial of time $t$ with highest degree of $2$. Now $x_2-x_1$ is just a quantity. Then they write $K_2-K_1=F \left. x \right|_{x_1}^{x_2}= F \displaystyle \int_{x_1}^{x_2} \, dx $ $$K_2-K_1=\displaystyle \int_{x_1}^{x_2} F \, dx $$ If $G(x)$ is a function such that $\dfrac{\mathrm dG(x)}{\mathrm dx}=F$ then one finds, $$K_2-K_1=G_2-G_1$$ Substituting $G(x)=-U(x)$ yields, $K_2-K_1=-U_2+U_1$ Or this $$E \equiv K_1+U_1=K_2+U_2$$ My question is, isn’t $F$ constant here too? If it is constant then how they apply these equations to non-constant forces like Spring force $F(x)=-kx$?
The crucial point lies in the equation itself Then they write $K_2-K_1=F \left. x \right|_{x_1}^{x_2}= F \displaystyle \int_{x_1}^{x_2} \, dx $ We can take $F$ out of the integral if and only if $F$ is not a function of $x$. For example, $F=10\,\mathrm{N}$ is a constant force. It does not depend on time or on the position of the particle. Only under this condition can we write the above equation. Also, you wrote that Where acceleration is assumed to be constant ... So this is the case. However, the force that you write down ($F=-kx$) depends on $x$, the position of the particle, so it's not a constant force. In this case we cannot write the above equation, since we cannot pull a factor out of an integral if it depends on the variable of integration. In math, $$F(x)\int dx\neq \int F(x)\,dx$$ So, in general, we define the work done as $$W=\int F(x)\,dx\,\,(\text{Eqn. 1})$$ To answer your question: $F=-kx$ is clearly a non-constant force, so we cannot use the derivation that you wrote above; we have to use the general form of the work, Eqn. 1.
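A quick numerical check of the distinction, with an arbitrary illustrative spring constant:

```python
import numpy as np

k = 4.0            # N/m, illustrative spring constant
x1, x2 = 0.5, 1.5  # m

# General form: W = ∫ F(x) dx with F(x) = -k x, via the trapezoid rule
xs = np.linspace(x1, x2, 100001)
F = -k * xs
W_general = float(np.sum((F[:-1] + F[1:]) / 2 * np.diff(xs)))

# Same answer from the potential: W = U(x1) - U(x2) with U(x) = (1/2) k x^2
W_potential = 0.5 * k * x1**2 - 0.5 * k * x2**2

# The constant-force shortcut F*(x2 - x1) gives a different (wrong) answer here
W_wrong = (-k * x1) * (x2 - x1)
```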
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
If mass can be converted to energy then how is it said that energy can't be created? From the mass-energy equivalence $E=mc^2$, it can be seen that energy can be created and it is not converted from one form of energy. Or am I wrong? Can you explain?
Mass and energy are equivalent. From the $E=mc^2$ equation, a change in energy corresponds to a change in mass. For better understanding, consider a collision between two bodies A and B of equal mass $m_0$ moving with opposite velocities in an inertial frame of reference. Let the collision be perfectly inelastic (kinetic energy is not conserved): the two bodies form a third body C which is at rest (by the law of conservation of momentum). Now the resulting body doesn't have mass $2m_0$ as one may expect. The resulting mass is the sum of $2m_0$ and the mass equivalent of the kinetic energy of the two bodies. In this way, mass and energy are equivalent. For further understanding, refer to Special Relativity by Robert Resnick.
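Putting numbers on the collision described above (the masses and speeds are chosen arbitrarily for illustration):

```python
c = 299_792_458.0  # m/s

def gamma(v):
    """Lorentz factor."""
    return 1.0 / (1.0 - (v / c) ** 2) ** 0.5

m0 = 1.0      # kg, each incoming body
v = 0.6 * c   # equal and opposite speeds in the frame where C ends up at rest

# Momentum conservation: total momentum is zero, so C is at rest.
# Energy conservation: 2*gamma(v)*m0*c^2 = M*c^2, hence
M = 2 * gamma(v) * m0   # = 2.5 kg, not 2 kg

kinetic_converted = (M - 2 * m0) * c**2  # kinetic energy that became rest mass
```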
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Why does sound behave differently in water than in air? I noticed in some experiments at home that sound does not behave the same in water than in air. Is there a good scientific explanation to this? I noticed that the sound sounded distorted in water but not in air. I also used a software that I could use to hear the sound as if I had ears that are meant for underwater. I do not have the files because they are self wiped after I am done
You can think of waves in matter, whether light waves or sound waves, as acting on tiny tuning forks. Light is scattered, reflected, etc., by being re-radiated by atoms. Like a tuning fork, the electron in an atom has a characteristic frequency, the natural frequency, at which it oscillates with the least damping. Other frequencies are absorbed. This gives rise to objects having a characteristic color: natural frequency gives color. A similar principle applies to sound. Materials treat different wavelengths differently in transmission, re-radiation, etc. Strike a tuning fork and its natural frequency yields a characteristic tone. So as mentioned above, the ear is attuned to certain wave properties in the air, and the air itself propagates sound in a characteristic way. Water transmits sounds differently; its composite "tuning forks" are stiffer. Then the transfer from the tuning forks of water to the tuning forks of the ear also has a different relation than the air-to-ear relationship. Paul G. Hewitt's Conceptual Physics has a really good treatment of this in non-mathematical terms.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Conceptual understanding of operators in QM Do operators in QM represent in some fashion the action of the measurement apparatus on a state being measured? Usually operators in QM are introduced as abstract transformations whose eigenvectors/eigenvalues are axiomatically the possible results of measurement, with an explanation along the lines of "because it works". However it seems like a coincidence that the operators that determine possible measurement results are, well, operators that transform states on which they act, as though the dynamical act of measurement itself were being modeled by a coarsely-grained apparatus-state interaction during the process of measurement, and the possible results of measurement are those fixed-point states for which the operator isn't "scrambling things up" during measurement (i.e the eigenfunctions). For example the momentum operator is associated with infinitesimal spatial translations, which makes sense because an apparatus that measures momentum has to in some fashion probe how a state translates in space without changing it. Has a view like this been fleshed out? It seems like it could shed some light on the measurement problem; it would make sense for the dynamical evolution of states being acted on by operators to eventually settle down (collapse) to the fixed points of the operator.
Quantum observables, i.e. Hermitian/self-adjoint operators, are mathematical representations of measurable quantities. The spectrum of the operator, namely the set \begin{align*} \sigma(F) = \bigl \{ z \in \mathbb{C} \; \; \vert \; \; F - z \; \, \mbox{not invertible} \bigr \} \end{align*} of complex numbers where $F - z$ is not invertible, is the set of possible outcomes of measurements. Of course, for Hermitian operators, the spectrum is real. The statistics of the outcome distribution is captured by the "projection-valued measure" associated to $F$, which quantifies the "density of states" in a given spectral region. The measurement process itself has nothing to do with the observable. If you wish to model that, then you need to modify the Hamiltonian, i.e. the dynamics. There is no collapse of wave functions to some fixed point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Why does a SUSY Lagrangian only contain $F$ and $D$-terms? I'm reading a book on AdS/CFT by Ammon and Erdmenger and chapter 3 covers supersymmetry. This isn't my first look at SUSY but it's my first in depth look to really try to understand it, and when they talk about constructing a Lagrangian for $\mathcal{N}=1$ chiral superfields they write the most general form, $$ \mathcal{L} = \underbrace{K(\Phi^k\Phi^{k\dagger})_{|\theta^2\bar{\theta^2}}}_{\text{D-term}} + \underbrace{\left( W(\Phi^k)_{|\theta^2} + W^{\dagger}(\Phi^{k\dagger})_{|\bar{\theta^2}}\right)}_{\text{F-terms}} $$ Initially this rustled my Jimmies because they had just spent the preceding section having me jump through hoops to deal with all the other component fields then decided only to use these 3, but then they address this directly: "In the Lagrangian, only the $D$-term ... and the $F$-terms enter". Unfortunately, this is as detailed as the explanation gets and I was hoping someone could please explain why this is the case and why the remaining 6 component fields don't show up? I posted this in PhysicsForums and failed to get an answer unfortunately, so hopefully this sees more attention
This answer should help https://physics.stackexchange.com/a/403388/221660 . Also, you may check Section 95 of "Quantum Field Theory" by Srednicki or Section 4 and 5 of "Supersymmetry and Supergravity " by Wess and Baggar. I wanted to write this as a comment but I can't due to the low reputation. XD
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What happens to the spin when photon is absorbed by an electron? Photon is spin 1 and electron is spin 1/2, so when a photon is absorbed by an electron it is destroyed and the electron becomes excited by that amount of energy. The next moment the electron will go back to it's ground state and emits a photon with the same energy as the original and everything seems good so far. I wonder what happens to the spin of the photon? Is it similar to the momentum of photon scattered by a free electron that relatively speaking electron is at rest the frequency of photon emitted is shifted or electron moves while frequency of photon remains the same as original?
As has been indicated above, it is not the electron which will gain some internal angular momentum (spin) from the photon, but the atom which absorbs the energy as well as the angular momentum of the photon ($1\hbar$). This will result in an increase of the orbital angular momentum exactly by $1\hbar$ ($l \to l+1$) plus an increase of the energy ($n \to n+1, n+2, n+3, \ldots$) depending on the frequency (= energy) of the photon. Remember: total energy and total angular momentum are the quantities which both have to be conserved!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Can you jump higher if you run? If so, why? (High jumping) I have often wondered why high jumpers can jump higher if you run. The way I see it is that you only build up horizontal speed, and since you're running on a plane, I cannot see how this speed can be used to increase the vertical speed. I cannot say that I have any definite proof of this claim, but there must be a reason why ALL high jumpers run instead of just jumping from standing still. After giving it some thought, and with the help from comments, I figured that running gives you the speed needed to not having to convert any vertical speed to horizontal speed. In high jumping, you need some horizontal speed to get over the pole. However, the world record for standing high jump is 1.90m, while the record for regular high jump is 2.45. I have a hard time believing that you can gain 55cm purely from this. And especially since standing high jumpers are allowed to perform the jump with both feet. Regular high jumpers are only allowed to have one foot touching the ground during take off.
You don't. Velocity is a vector, and the speed you are accumulating is along the x-coordinate, which is perpendicular to the y-coordinate. To illustrate this, think of the time it takes a bullet to reach the ground when fired horizontally (ignoring air): it takes the same time as a bullet simply dropped, so the horizontal motion doesn't affect the vertical time lapse. With $a_x=0$, $a_y=g=-9.8\ \mathrm{m/s^2}$, $v_{x0}$ given and $v_{y0}=0$: if $v_{x0}=0$, then $v_y=gt$, $v_x=0$, $y=y_0+\tfrac{1}{2}gt^2$ and $x=x_0$; if $v_{x0}\neq 0$, then $v_y=gt$, $v_x=v_{x0}$, $y=y_0+\tfrac{1}{2}gt^2$ and $x=x_0+v_{x0}t$. Either way, the result for $y$ is the same.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 6, "answer_id": 5 }
Friction acting as an internal force I was solving this problem in my assignment: Assuming a frictional force F acts on the block of mass m, a force -F will act on plank of mass M. Hence, the net work done by frictional force should be zero, as friction is an internal force , but option D is given incorrect. What's the error in my reasoning? Thanks in advance.
The internal forces have a zero resultant, but the associated power can be non-zero when there is slipping. We have in general $\overrightarrow{{{F}_{1\to 2}}}=-\overrightarrow{{{F}_{2\to 1}}}$ but $P=\overrightarrow{{{F}_{1\to 2}}}\centerdot \overrightarrow{{{v}_{2}}}+\overrightarrow{{{F}_{2\to 1}}}\centerdot \overrightarrow{{{v}_{1}}}=\overrightarrow{{{F}_{1\to 2}}}\centerdot \left( \overrightarrow{{{v}_{2}}}-\overrightarrow{{{v}_{1}}} \right)=\overrightarrow{{{F}_{1\to 2}}}\centerdot \overrightarrow{{{v}_{g}}}$ with $\overrightarrow{{{v}_{g}}}$ the sliding velocity. Sorry for my English!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Different expressions for distance & displacement: $\int d|\vec r|$, $\int |d\vec r|$, and $|\int d\vec r|$ I came across these expressions in my book. And the book says that all these are different from each other. The expressions are: $\int d|\vec r|$, $\int |d\vec r|$, and $|\int d\vec r|$ Are they different from each other? I know that $|\int d\vec r|$ means magnitude of displacement, and $\int |d\vec r|$ means total distance. But what about $\int d|\vec r|$? I think it should mean total distance too, but I'm not sure if $\int d|\vec r|$ and $\int |d\vec r|$ have the same meaning; do they? Edit: Some of the answers say that the question is not very clear, and that a little more explanation would help. I'm not sure what else to add, so I'm attaching a picture of that page
Looks like an integral over radial distance: $\int d|\vec r|$ adds up changes in the magnitude of the position vector, so $\int_1^2 d|\vec r| = |\vec r_2| - |\vec r_1|$, the net change in distance from the origin. It is not the same as $\int |d\vec r|$, the total path length, since $|\vec r|$ can decrease along the way.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why is the far point of human eye infinite? In my exams, the presence of this question, which unfortunately I couldn't answer, made me wonder why is the far point of an eye infinite? First thing that came into my mind was that how come we can see till infinity? Far point of eye is sometimes described as the farthest point from the eye at which images are clear. As stated here There's obviously a limit to a distance where the eye can see. If there is, then why isn't that taken in consideration for accurate measurements?
I think you are confusing the optical definition of infinity with the literal definition of it. In an optical system, including your eyes, infinity is that distance at which light entering the optical system is considered parallel. This site gives the definition as: In optics, it is the region from which a point on an object sends rays of light which are considered to be parallel onto an optical system. Consequently it forms a clear image in the focal plane of that system. In clinical optometry, 6 metres is usually regarded as infinity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Calculation of Gravitational Potential at the centre of the cube I came across this problem in a test and have been able to come up with a solution however I am unsure if it is correct. I started by building a cube of twice the initial dimensions to bring point P to the centre of a larger cube. This was to bring an element of symmetry to the figure. Since, each cube contributes potential V to the centre, the net gravitational potential will be 8V since there are 8 cubes. Now I wanted to find the Gravitational Potential at the centre of my original cube. So I considered an imaginary solid cube inside the larger one such that point P lies at the centre of both the cubes i.e the larger and smaller one. Now, I found the factor by which the gravitational potential had increased at the centre. Since, gravitational potential is directly proportional to the mass of the particle and inversely proportional to the distance, I considered the fact that on average, particles on the inner imaginary solid cube would be at a distance twice of that between particles on the outer larger sphere and point P. Since the mass of the larger cube is 8 times the mass of the smaller one, I concluded that the potential at point P due to a cube with same dimensions as the original cube would be 2V. Therefore the potential due to the remaining parts of the larger cube would be 6V. If we divide the remaining part of the cube providing a potential of 6V into 8 symmetrical parts, they resemble what the problem asks us to find. Thus, the answer came out to be 3V/4. However I recently received a hint that this was incorrect and have not been able to conclude why.
Each cube in the top layer has a matching cube in the bottom layer exactly opposite it: Top layer: t1 t2 t3 t4 Bottom layer: b1 b2 b3 b4 t1 is pulling the center with the same force but in exactly the opposite direction as b4, so the net force is 0. The same applies to t2/b3, t3/b2 and t4/b1. All of these have a net zero effect, so the net gravitational force at the center point is 0. This is exactly analogous to why the net gravity at the center of a sphere is 0.
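The pairing argument can be checked numerically by treating each of the eight sub-cubes as a point mass at its centre (a crude stand-in for the real mass distribution, but the cancellation by symmetry is exact):

```python
import itertools

# Put P at the origin; the 8 sub-cube centres then sit at (±1, ±1, ±1)
# in some convenient units. Every centre is the same distance from P, and
# diagonally opposite centres pull P in exactly opposite directions.
force = [0.0, 0.0, 0.0]
for centre in itertools.product((-1.0, 1.0), repeat=3):
    r2 = sum(c * c for c in centre)          # = 3 for every centre
    for i in range(3):
        force[i] += centre[i] / r2 ** 1.5    # G*m = 1 units

# force comes out (0, 0, 0): the net gravitational force at the centre vanishes
```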
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I recreate the greenhouse effect in my car? I am trying to preheat my car during winter mornings using the greenhouse effect. I understand the greenhouse effect in cars works by visible light passing through the glass, with most UV and infrared being blocked by the glass. The visible light is either absorbed by the interior and turned into heat or reflected largely as infrared and later turned into heat. Darker colors absorb more light. I set a 500W halogen spotlight inches from my car windshield and turned it on. Unfortunately, after approximately 1 hour, I do not see any increase in interior temperature. What might I be doing wrong?
500W is the power input, not the light output. Halogen bulbs have a Luminous efficiency of less than 5%. So if your windshield blocks IR, you're probably getting less than 20W of that power into the car. It's going to be difficult to measure the effect of that. Full sunlight on a windshield could be more than 1500W.
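A rough steady-state energy balance shows why the effect is hard to notice; every number here is an assumed round figure, not a measured value:

```python
# Halogen bulb: roughly 4% of electrical power leaves as visible light (assumption)
P_in = 500.0 * 0.04          # ≈ 20 W actually delivered into the cabin

# At steady state, power in equals power leaking out through glass and body.
UA = 25.0                    # W/K, assumed overall heat-loss coefficient of the cabin

dT_lamp = P_in / UA          # ≈ 0.8 K above ambient, below what you'd notice
dT_sun = 1500.0 / UA         # full sun on the windshield: tens of kelvin
```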
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does the resistance of the voltmeter affect the behavior of this circuit? I have this setup. It consists of a battery of no internal resistance with voltage $V$ and a resistor with resistance $R$. It also consists of a voltmeter of some (not so large) resistance as good ones should have. Now my question is, will the resistance of the voltmeter matter for this particular circuit? I think that the voltmeters need to be high resistance only when there are multiple resistances in the circuit... and as the internal resistance of the battery always contributes to the total resistance in real cases, a voltmeter is always expected to be very high resistance. (Please correct me if I'm wrong) I have seen other questions related to voltmeters on this site and googled up but couldn't find an answer.
You are right that a low resistance in the voltmeter will not affect its measurement in this particular circuit. The rule that a voltmeter resistance should be high compared with all resistors in the circuit is so that you can use the voltmeter in series resistance/impedance circuits without having to ask your question. I would modify your claim to include impedance and not just resistance.
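The loading effect is easy to see with a voltage-divider model; the resistance values below are arbitrary illustration numbers:

```python
def meter_reading(V, R_series, R_meter):
    """Ideal battery V behind a series resistance R_series, with the
    voltmeter (resistance R_meter) connected across the output."""
    return V * R_meter / (R_series + R_meter)

V = 9.0

# The circuit in the question: zero internal resistance, so any meter reads V
exact = meter_reading(V, 0.0, 50.0)     # = 9.0 V regardless of R_meter

# Add a 10-ohm series resistance and the meter's resistance starts to matter
loaded = meter_reading(V, 10.0, 50.0)   # 7.5 V: a low-resistance meter loads the divider
unloaded = meter_reading(V, 10.0, 1e7)  # ≈ 9.0 V: a high-resistance meter barely does
```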
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 4 }
Why and how fast does lightning propagate? While looking at this youtube video of lightning in 1000 fps I was somewhat surprised that such a frame rate is enough to capture the propagation of lightning. I have no clue about photography or physics, but the naive questions I want to ask are: * *How "quickly" does lightning propagate, and what are the main contributing factors to the propagation speed (e.g density of clouds etc)? *What exactly is propagating? It can't be light, since it's much too fast, so somehow the "sequence of discharges" that is lightning propagates. Why does it do this?
There are two processes going on in a lightning strike. There are stepped leaders, searching for the path of least resistance, and there is the lightning strike when that path is found. See youtube.com/watch?v=XWuZqw3LopE. Each process has its own speed, and the stepped leader process may not proceed at a constant speed. Also see youtube.com/watch?v=RLWIBrweSU8.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we know whether it's a $1D$ or a $2D$ motion just by looking at the position-time relation? How do I know whether it is a $2D$ or a $1D$ motion, just by looking at position-time, or velocity-time, or acceleration-time equations? Maybe the question is not very clear, I'm not sure I'm getting it across properly, so I'll try to use some examples to make the question clearer. We have a position-time equation: $\vec r = 6t^2\,\hat i + 3t^2\,\hat j$. It's easy to see that it is a $1D$ motion, because its locus equation is a straight line. Likewise, $\vec r = 5t\,\hat i + 2t^2\,\hat j$ is a $2D$ motion, because its trajectory equation is a parabola. Other examples of two-dimensional motions are: $\vec r = 30t\,\hat i + (20 - 10t^2)\,\hat j$ (projectile motion) and $\vec r = \sin 2t\,\hat i + \cos 2t\,\hat j$ (circular motion). How did I know that these were $2$-dimensional motions? I checked their trajectory equations. My question is, is it possible to know just by looking at position-time equations, whether the body is moving in a straight line or changing its direction (i.e. $2D$ motion), without checking its equation of trajectory?
In general the position vectors you are looking at take the form $$\mathbf x=f(t)\hat i+g(t)\hat j$$ Now, let's think about what is first taught when learning about lines. In the x-y plane, a line can be described by $$y=mx+b$$ Now, you can probably convince yourself that our vector components $\langle i,j\rangle$ can be viewed as coordinates $(x,y)$. Therefore, our motion is along a line if $$g(t)=mf(t)+b$$ for constants $m$ and $b$. (Unless $f$ is a constant function, in which case $g$ can be anything and we will still have a line without following this form, analogous to lines of the form $x=c$. And if $f$ and $g$ are both constant functions, then you are just sitting at a point.)
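The criterion above can also be checked numerically: sample both components and test whether $g(t)$ is an affine function of $f(t)$. This is just an illustrative sketch (the helper name and tolerances are my own), applied to the trajectories from the question:

```python
import numpy as np

def is_straight_line(f, g, t_max=5.0, n=200):
    """Check whether the path (f(t), g(t)) lies on a straight line by
    fitting g = m*f + b and testing the residual; the vertical-line
    case f = const is handled separately."""
    t = np.linspace(0.01, t_max, n)
    x, y = f(t), g(t)
    if np.allclose(x, x[0]):          # vertical line x = c
        return True
    m, b = np.polyfit(x, y, 1)        # least-squares fit of y = m*x + b
    return np.allclose(y, m * x + b, atol=1e-8)

# r = 6t^2 i + 3t^2 j  -> straight line (1D motion)
print(is_straight_line(lambda t: 6 * t**2, lambda t: 3 * t**2))   # True
# r = 5t i + 2t^2 j    -> parabola (2D motion)
print(is_straight_line(lambda t: 5 * t, lambda t: 2 * t**2))      # False
```

The same test correctly flags the circular motion $\vec r = \sin 2t\,\hat i + \cos 2t\,\hat j$ as two-dimensional.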
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does the magnetic field start from the north pole and end at the south pole? In magnets such as bar magnets, the magnetic field lines start from the north pole and end at the south pole, but I don't know the reason for it. Why does this happen?
The question contains some misunderstandings about magnetic fields. In a permanent magnet the magnetic field is the result of the alignment of subatomic particles, mostly of electrons. In detail, subatomic particles have an intrinsic magnetic dipole, and in permanent magnets these dipoles are aligned (frozen). Any magnetic field forms closed loops. This holds for the magnetic field of the subatomic particles as well as for the macroscopic field of a permanent magnet, which is the superposition of the magnetic fields of the involved subatomic particles. In a permanent magnet the aligned particles form chains. But the magnet is of limited length, and at the two ends of the body the magnetic field leaves the body and continues even in vacuum. Between the ends of a magnet the field bends around and connects these ends. Because magnetic fields partly run through empty space to form closed loops, it is possible to align other magnets in them. Having named the "poles" of a first magnet, the alignment of other magnets in the field of this first magnet allows one to mark their poles. This was done many centuries ago, because the earth has a magnetic field and some materials found on the earth are permanent magnets; the compass was invented and the earth's poles were named. "In magnets such as bar magnets the magnetic field lines are starting from North pole and ends in south pole." So the field lines are not ending; they are loops, and some part of each loop goes through space. This is true for every magnet (unless it is closed into a donut). The magnetic fields of two magnets get into superposition thanks to the part of the field lines running through empty space.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
How can there be current in a purely inductive circuit? In a purely inductive circuit there are two emfs: one is applied and the other is induced. The applied emf and the induced emf are equal and opposite, so how can there be current in a purely inductive circuit? The induced emf is not a potential drop, and it drives current in the direction opposing the change in current. I'm confused about this concept, please help.
Going along a path around the circuit, the potential differences must be given by $$V_s+V_L=0$$ where $V_s$ is the potential of the source and $V_L$ is the potential across the inductor. But we know for the inductor $$V_L=-L\frac{\text d I}{\text d t}$$ So we have $$V_s=L\frac{\text d I}{\text d t}$$ If the source has the form $V_s(t)=V_0\sin(\omega t)$ then we can find, upon integration $$I(t)=-\frac{V_0}{\omega L}\cos(\omega t)$$ So we see that we get an alternating current with our alternating potential.
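As a sanity check, one can integrate $dI/dt = V_s(t)/L$ numerically and compare with the closed form above. The component values here are arbitrary illustrations, not from the question:

```python
import numpy as np

# Illustrative component values (assumed, not from the question)
V0, omega, L = 10.0, 2 * np.pi * 50, 0.1   # volts, rad/s, henries

t = np.linspace(0.0, 0.1, 200_001)
dt = t[1] - t[0]

# dI/dt = V_s(t)/L, integrated with the trapezoid rule, starting from
# the analytic initial current I(0) = -V0/(omega*L)
dIdt = V0 * np.sin(omega * t) / L
I_num = -V0 / (omega * L) + np.concatenate(
    ([0.0], np.cumsum(0.5 * (dIdt[1:] + dIdt[:-1]) * dt)))

I_exact = -V0 / (omega * L) * np.cos(omega * t)
print(np.max(np.abs(I_num - I_exact)))   # tiny discretization error
```

The numerical current tracks $-\frac{V_0}{\omega L}\cos(\omega t)$ to within the discretization error, confirming that an alternating source drives an alternating current even with zero resistance.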
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Length Contraction Scenario Suppose a space ship is traveling from star A to star B at some significant fraction of the speed of light. In the frame of the ship, the distance A to B is less than the distance in A and B's rest frame. Is it possible for the ship to quickly increase its speed so that in its frame the ship is then closer to A compared to when it was going slower? That is, in the time interval that it is accelerating it moves further away from A due to its speed but the effect of increased length contraction has it end up closer? That sounds like an odd situation to be in.
This is an interesting puzzle and I don't think I agree with the previous answer. Suppose the ship is stationary, halfway between A and B, which are $L$ apart. In a very short (negligible) time it accelerates towards B at high speed. A and B are now $L/\gamma$ apart. The ship is still halfway between A and B (playing the 'negligible time' card). So A was $L/2$ away and is now $L/2\gamma$ away. A gets closer. If you want, you can avoid the 'negligible time' argument and the fact that acceleration is off-limits for special relativity by hypothesising that another ship travelling at constant high speed just happens to pass the stationary one, and the two pilots compare notes and data as they pass. Odd situation - yes. But not inconsistent.
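To put numbers on this scenario (all values assumed for illustration): a ship at rest halfway between stars $10$ light-years apart boosts to $0.8c$; immediately afterwards, A is closer than it was before the boost:

```python
import math

L = 10.0          # separation of A and B in their rest frame, light-years (assumed)
beta = 0.8        # ship jumps from rest to 0.8c (assumed)
gamma = 1 / math.sqrt(1 - beta**2)   # Lorentz factor, = 5/3 here

d_before = L / 2            # distance to A while at rest, halfway along
d_after = (L / 2) / gamma   # distance to A right after the boost

print(d_before, d_after)    # 5.0 light-years -> 3.0 light-years
```

So in this example the boost brings A from 5 to 3 light-years away in the ship's frame, in negligible time.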
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What does it mean to say that glass has refractive index 1.5? The refractive index of a material depends on the wavelength of the light incident upon it which is why dispersion happens. When we say that glass has refractive index 1.5 which wavelength do we have in mind?
In this Wikipedia page it says: Standard refractive index measurements are taken at the "yellow doublet" sodium D line, with a wavelength of 589 nanometers. Therefore it's in the middle of the visible light band. In the list provided in that page you can see that glass (it calls it fused silica) at 20 Celsius degrees has a refractive index of around $1.5$ at that wavelength.
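For fused silica specifically, the wavelength dependence can be evaluated from the standard three-term Sellmeier dispersion formula. The coefficients below are the widely quoted Malitson fit for fused silica; treat them as assumed inputs rather than something stated in the answer:

```python
import math

def n_fused_silica(lmbda_um):
    """Refractive index of fused silica from the Sellmeier equation
    (Malitson coefficients; wavelength in micrometres)."""
    B = (0.6961663, 0.4079426, 0.8974794)
    C = (0.0684043**2, 0.1162414**2, 9.896161**2)  # in um^2
    l2 = lmbda_um**2
    n2 = 1 + sum(b * l2 / (l2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

print(n_fused_silica(0.589))   # ~1.458 at the sodium D line
```

The same function also shows normal dispersion: the index at 400 nm comes out larger than at 700 nm, which is why a quoted "refractive index of glass" must specify a reference wavelength.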
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When is the order of magnitude not equal to the exponent of scientific notation? Explain why the order of magnitude is sometimes not the same as the exponent in scientific notation. Is it because of the units?
The notation $n \,\mathrm{e}\, m$ is usually read as $n \times 10^m$; coming from the early age of computers/pocket calculators, it's a historical way of writing scientific notation. On the contrary, $n \times e^m$ (with Euler's number $e$) is just that; it is never written as $n \,\mathrm{e}\, m$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Unclear assumption in deriving fluid energy conservation laws I am currently working through Alexandre Chorin's Mathematical Introduction to Fluid Mechanics. In the first chapter, he treats the change in kinetic energy of a fluid region $W\subset D$ subject to the fluid flow map $\varphi_t:\mathbf{x}\mapsto\varphi(\mathbf{x},t)$ in the following manner ($\frac{D}{Dt}$ denoting the material derivative): \begin{aligned} \frac{d}{dt} E_{\text{kinetic}} &= \frac{d}{dt}\left[\frac{1}{2}\int_{W_t} \rho\,\|\mathbf{u}\|^2\, dV\right] \\ &= \frac{1}{2}\int_{W_t} \rho\,\frac{D\|\mathbf{u}\|^2}{Dt}\, dV \\ &= \int_{W_t} \rho\left(\mathbf{u}\cdot\left(\frac{\partial\mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)\right) dV \end{aligned} As best as I can determine, an implicit assumption seems to be made that the fluid has a density constant with time in this derivation. Specifically, it appears that the assumption $\frac{\partial}{\partial t}(\rho\|\mathbf{u}\|^2) = \rho\frac{D}{Dt}(\|\mathbf{u}\|^2)$ is being made rather than obeying the typical product rule. I am not quite sure why. I would greatly appreciate it if someone could help shed some light on what is going on here! If more context is needed, I can provide it upon request, or you may reference the presentation which is given on page 12 of Chorin's book.
Indeed, there is no assumption. You can use the Reynolds transport theorem https://en.wikipedia.org/wiki/Reynolds_transport_theorem With the Green (divergence) theorem, it gives: $\frac{d}{dt}\int\limits_{\Omega(t)} f\rho\, d\tau = \int\limits_{\Omega(t)} \left(\frac{\partial (f\rho)}{\partial t} + \vec{\nabla}\cdot(f\rho\vec{v})\right) d\tau$ Then: $\vec{\nabla}\cdot(f\rho\vec{v}) = \rho\,\vec{v}\cdot\vec{\nabla}f + f\,\vec{\nabla}\cdot(\rho\vec{v})$ so $\frac{\partial (f\rho)}{\partial t} + \vec{\nabla}\cdot(f\rho\vec{v}) = \rho\left(\frac{\partial f}{\partial t} + \vec{v}\cdot\vec{\nabla}f\right) + f\underbrace{\left(\frac{\partial\rho}{\partial t} + \vec{\nabla}\cdot(\rho\vec{v})\right)}_{0\ \text{(continuity)}}$ and therefore $\frac{d}{dt}\int\limits_{\Omega(t)} f\rho\, d\tau = \int\limits_{\Omega(t)} \left(\frac{\partial f}{\partial t} + (\vec{v}\cdot\vec{\nabla})f\right)\rho\, d\tau = \int\limits_{\Omega(t)} \frac{Df}{Dt}\, dm$ It is rather intuitive: you apply Newton's law to a small fluid particle and add them all up. Sorry for my English!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Entropy of the big bang At the moment of big bang, all the matter was in perfect order, that is entropy 0 so what force or disturbance would occur to begin the chaos and the entropy start to increase?
At the moment of big bang, all the matter was in perfect order, that is entropy 0 [...] This is not true as far as we know. We think it was in a state of less than maximum entropy (else heat death would have occurred instantly), but there is no reason to think it had 0 entropy. "Typical" initial conditions would have had maximum entropy. [...] so what force or disturbance would occur to begin the chaos and the entropy start to increase? A force or disturbance is not necessary in order to make entropy increase.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do electromagnetic waves have the magnetic and electric field intensities in the same phase? My question is: in electromagnetic waves, if we consider the electric field as a sine function, the magnetic field will also be a sine function, but I am confused about why this is so. If I look at Maxwell's equations, the changing magnetic field generates the electric field and the changing electric field generates the magnetic field, so in my opinion, if the accelerating electron generates a sinusoidal electric field, then its magnetic field should be a cosine function, because $\frac{d(\sin x)}{dx}=\cos x$.
$E$ and $B$ are in phase for a running plane wave, but are 90 degrees out of phase for a standing wave. This can be easily seen by considering the vector potential $A(t, x)$, using $E = -\partial_t A$ and $B=\partial_x A$ (the overall signs do not affect the phase relationship). For $A=\sin(\omega t - kx)$ you find that $E$ and $B$ are in phase. For $A=\sin(\omega t)\sin(kx)$, a standing wave, $E$ and $B$ are out of phase.
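This phase comparison can be reproduced symbolically. The sketch below uses the conventions $E=-\partial_t A$ and $B=\partial_x A$ for a transverse $A$ (signs chosen for definiteness; they don't affect the phase relationship):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
w, k = sp.symbols('omega k', positive=True)

# Running wave: E and B share the same cos(w t - k x) factor -> in phase
A_run = sp.sin(w * t - k * x)
E_run = -sp.diff(A_run, t)    # -omega * cos(omega*t - k*x)
B_run = sp.diff(A_run, x)     # -k * cos(omega*t - k*x)

# Standing wave: E ~ cos(w t), B ~ sin(w t) -> 90 degrees out of phase in time
A_st = sp.sin(w * t) * sp.sin(k * x)
E_st = -sp.diff(A_st, t)      # -omega * cos(omega*t) * sin(k*x)
B_st = sp.diff(A_st, x)       # k * sin(omega*t) * cos(k*x)

print(E_run, B_run)
print(E_st, B_st)
```

For the running wave, $E/\omega$ and $B/k$ are identical functions (in phase); for the standing wave, one oscillates as $\cos\omega t$ and the other as $\sin\omega t$.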
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 0 }
Variational Navier-Stokes: where to find study material "for dummies"? I have worked with the Navier-Stokes equations before, but I'm a physicist. I was talking to a mathematician and they use a completely different notation, and I am very lost. First of all, I use the control-volume method for discretization and they use finite elements. Second, they talk about variational forms, $H$ and $Q$ spaces, and $\Omega$ domains, which I have seen for the first time. Can anybody point my way to a document, or book, or small chapter where I can understand the mathematical variational point of view of the Navier-Stokes equations, as simply as possible? (I'm interested in the incompressible stationary case for a fluid, so, very simple.)
I'm a physics undergrad student. I've been self-studying continuum mechanics, and as part of that, reading some fluid dynamics material. At first I faced similar difficulties. Kip S. Thorne's Modern Classical Physics helped me grasp the physical intuition behind most of the concepts. Then I was motivated to study the variational formulation of fluids. It uses a physicist's familiar language. * *Badin, Gualtiero, Crisciani, Fulvio; Variational Formulation of Fluid and Geophysical Fluid Dynamics: Mechanics, Symmetries and Conservation Laws
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is blowing so different than sucking? Why is it so easy to blow out a candle from a significant distance, but nearly impossible to suck enough air to do the same? Even without focusing the airflow through a nozzle or something, this effect seems to be present. For example, it's easy to feel the air coming out of a box fan, but very hard to feel the air going into one.
Assume a constant density of the air $\rho$. Consider an imaginary tube of air in the region of the candle, of area $A_{\rm open}$ and speed $v_{\rm open}$, being reduced to an area $A_{\rm lips}$ and speed $v_{\rm lips}$ at the lips. Using conservation of mass, $A_{\rm open}\, v_{\rm open}\, \rho = A_{\rm lips}\, v_{\rm lips}\, \rho$. This is a greatly simplified analysis of what happens when you are sucking. If $A_{\rm open}\gg A_{\rm lips}$ then the candle is in a region where the air speed $v_{\rm open}$ is low, and so the candle is not blown out. When you blow air through your lips, the air is directed towards the candle through an imaginary tube which only increases in area by a relatively small amount, because blowing the air through your lips gives it momentum in the direction of the candle. In this case the speed of the air is fast enough to blow out the candle. When sucking, the air which eventually enters the lips comes from a multitude of directions, whereas when blowing, the air exiting the lips moves in approximately one direction. Having the nozzle of a vacuum cleaner a few centimetres above a surface is not a very efficient way of removing dirt from a surface. To remove the dirt efficiently, the nozzle needs to be placed close to the surface to ensure that the air passing over the dirt is fast moving. You could suck out the candle if you put your lips around it!
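The continuity argument above can be put into numbers. All areas and speeds here are made-up illustrative values:

```python
# Conservation of mass for the imaginary tube (incompressible air):
#   A_open * v_open = A_lips * v_lips
A_lips = 1.0       # cm^2, area at the pursed lips (assumed)
v_lips = 10.0      # m/s, air speed at the lips (assumed)

# Blowing: the directed jet stays narrow, so the area grows only a little
A_blow = 3.0 * A_lips
v_blow = A_lips * v_lips / A_blow      # still a few m/s at the candle

# Sucking: air converges from all directions, so the effective area is huge
A_suck = 100.0 * A_lips
v_suck = A_lips * v_lips / A_suck      # only 0.1 m/s at the candle

print(v_blow, v_suck)
```

With the same lip speed, the air at the candle is dozens of times faster when blowing than when sucking, purely because of the different effective tube areas.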
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Resistor Capacitor circuits I am a high school physics teacher and came across some issues today that I am unable to explain. We made a circuit with an $11$ V power source connected to a $100~k\Omega$ resistor and $100~\mu F$ capacitor. We used a Vernier voltage probe to observe the potential across the capacitor increase. The potential increased to about $6$ V and stayed relatively constant. I'm really unsure of why it would not go up to $11$ V. We also observed the capacitor discharging by removing the power source from the circuit. It seemed to discharge down to about 0.2V and stay around there. I short circuited the capacitor to completely drain it and the potential went down to zero but then slowly built back up to 0.2V. We discovered that the voltage probe itself seems to be charging the capacitor and that just doesn't make sense to me. Big thanks in advance to anybody that can help me make sense of either or both of these issues.
For one, at $11\,\text{V}$ you are using the probe outside its specifications (up to $10\,\text{V}$). The $6\,\text{V}$ issue might have to do with the black lead of the sensor being directly connected to ground through the interface connector (or whatever potential your measurement device like a tablet or laptop is at). This could possibly also cause the charging of the capacitor when the power is disconnected. From the information given this is really mostly speculation. Best would be to test again with a voltage differential probe that doesn't have this property, if you have one handy. Or just a regular multimeter will do. With an internal resistance of $1\,\text{M}\Omega$, the probe acting as a voltage divider should not affect the results to such a degree. However, you can easily eliminate this possibility by using a different resistor and measuring again.
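On that last point, a quick voltage-divider calculation with the values from the question shows why the probe's nominal $1\,\text{M}\Omega$ input resistance alone could only pull the steady-state reading down to about $10\,\text{V}$, nowhere near the observed $6\,\text{V}$:

```python
V = 11.0      # source voltage, volts (from the question)
R = 100e3     # series resistor, ohms (from the question)
Rp = 1e6      # nominal probe input resistance, ohms

# In steady state no current flows into the capacitor, so the probe
# and R form a simple voltage divider across the ideal source.
V_measured = V * Rp / (R + Rp)
print(V_measured)   # 10.0 V
```

So probe loading by itself accounts for about a 1 V drop, which is why a grounding issue or an out-of-spec input is the more likely culprit.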
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why don't you get burned by the wood benches in a sauna? When you go to the sauna you may sit in a room with 90°C+. If it is a "commercial" sauna it will be on for the whole day. How does it come that when you sit on the wood you don't get burned? I believe this question is different than the "classical" one concerning the "feeling" of heat, which may be explained with a low heat transfer. After a much shorter time other objects seem much "hotter", and the heat transfer is not different (as it's still a room filled with the same air). My guess would be that the reason is the heat capacity but I cannot really explain it. In my understanding a capacity is the ability to store something (heat, charge, ...). Why should an object be cooler if it can store less heat? Also, cannot this be ignored in this case, as the wood is exposed to the temperature for a very long time?
Wood is a poor conductor of heat. "The thermal conductivity of wood is relatively low because of the porosity of timber. Thermal conductivity declines as the density of the wood decreases. ... For example, the thermal conductivity of pine in the direction of the grain is 0.22 W/(m·°C), and perpendicular to the grain 0.14 W/(m·°C)." Some values of thermal conductivity, in W/(m·K): wood across the grain, white pine: 0.12; wood across the grain, balsa: 0.055; wood across the grain, yellow pine, timber: 0.147; wood, oak: 0.17; wool, felt: 0.07; wood wool, slab: 0.1 - 0.15. (https://www.engineeringtoolbox.com/thermal-conductivity-d_429.html)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68", "answer_count": 4, "answer_id": 1 }
Why can all solutions to the simple harmonic motion equation be written in terms of sines and cosines? The defining property of SHM (simple harmonic motion) is that the force experienced at any value of displacement from the mean position is directly proportional to it and is directed towards the mean position, i.e. $F=-kx$. From this, $$m\left(\frac{d^2x}{dt^2}\right) +kx=0.$$ Then I read from this site: "Let us interpret this equation. The second derivative of a function of x plus the function itself (times a constant) is equal to zero. Thus the second derivative of our function must have the same form as the function itself. What readily comes to mind is the sine and cosine function." How can we assume so plainly that it should be sine or cosine only? They do satisfy the equation, but why are they brought into the picture so directly? What I want to ask is: why can the SHM displacement, velocity etc. be expressed in terms of sine and cosine? I know the "SHM is the projection of uniform circular motion" proof, but an algebraic proof would be appreciated.
These are all good and correct answers, but I will answer from a different perspective. Any linear differential equation of degree $n$ has $n$ linearly independent solutions, i.e. these $n$ solutions span a vector space, with any set of $n$ independent solutions forming a basis. For simple harmonic motion, the differential equation is: $$m\left(\dfrac{d^2x}{dt^2}\right)+kx = 0$$ As stated in other answers, one can take the solutions to be linear combinations of $\sin(\omega t)$ and $\cos(\omega t)$, or one could take $\exp(i\omega t)$ and $\exp(-i\omega t)$. These are both sets of linearly independent functions, and both pairs solve the equation, yet they are not the same functions - they are two different sets of basis functions. To get from one set of solutions to another, one needs a change of basis.
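A symbolic check that the whole two-parameter family solves the equation (a quick sketch using sympy, with $\omega=\sqrt{k/m}$):

```python
import sympy as sp

t = sp.symbols('t', real=True)
m, k, A, B = sp.symbols('m k A B', positive=True)
w = sp.sqrt(k / m)

# Arbitrary element of the solution space spanned by sin and cos
x = A * sp.sin(w * t) + B * sp.cos(w * t)
print(sp.simplify(m * sp.diff(x, t, 2) + k * x))   # 0 for every A, B

# The complex-exponential basis spans the same two-dimensional space
x_exp = sp.exp(sp.I * w * t)
print(sp.simplify(m * sp.diff(x_exp, t, 2) + k * x_exp))   # 0
```

Both residuals simplify to zero identically in $A$ and $B$, which is the vector-space statement above: any linear combination of the basis solutions is again a solution.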
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 1 }
Random-walk/Brownian-motion on a 2D lattice I started to learn stochastic processes this year. I have only had two classes, but I already have a problem. We learned about Einstein's and Langevin's descriptions of Brownian motion, and now I need to solve a task related to Brownian motion / random walks. The task's text is the following (sorry, maybe it turned out as a very raw translation): To understand the Perrin experiment, consider the following variation of the 2D Brownian motion: On a 2D lattice with lattice constant $l$ there is a moving particle. In every interval $\tau$, the particle jumps with equal probability to any of the 4 neighbouring lattice points. These events are independent of each other. At $t = 0$, the particle is at the point $(x_{0} = 0, y_{0} = 0)$. Calculate the expected value of the displacement $\sqrt{\left< r_{t}^{2} \right>} = \sqrt{\left< x_{t}^{2} \right> + \left< y_{t}^{2} \right>}$ after $t = N \cdot \tau$. I already solved the 1D Brownian motion (and also the random walk), using the Chapman-Kolmogorov equation, then the Kramers-Moyal expansion to get the Fokker-Planck equation: $$ P \left( x, t + \tau \right) = \int_{- \infty}^{\infty} \Phi \left( \Delta \right) P \left( x - \Delta, t \right)\, d \Delta \quad \text{(Chapman-Kolmogorov)}$$ $$\vdots$$ $$ \frac{\partial P \left( x, t \right)}{\partial t} = \frac{\overline{\Delta^{2}}}{2 \tau} \frac{\partial^{2} P \left( x, t \right)}{\partial x^{2}} \quad \text{(Fokker-Planck)} $$ Then I naturally said, "oh wow, what a coincidence, that's totally a Gaussian, if the initial condition is $P \left( x, t=0 \right) = \delta \left( x \right)$" (so the start is from the center of the 1D coordinate system). And then I easily found $\left< x \right> = 0$ and $\left< x^{2} \right> = 2Dt$. But I'm pretty confused about what I should do in the 2D case.
I sense (I'm not sure) that now I have a $P \left( x, y \right) \equiv P \left( x, y, t \right)$ probability for the system, so I can only calculate $\left< x_{t}^{2} y_{t}^{2} \right>$, but I can cut that into two parts: $$ \left< x_{t}^{2} y_{t}^{2} \right> = \iint x^{2} \cdot y^{2} P \left( x, y \right)\,dx\,dy = \int x^{2} f_{1}(x)\, dx \cdot \int y^{2} f_{2}(y)\, dy = \left< x^{2} \right> \cdot \left< y^{2} \right> $$ because we assumed that jumping either way is independent of everything. (I actually don't even know why I wrote this, just trying to show the actual confusion in my head...) Now I have two 1D random-walk problems, where jumping one way or the other is equally $P = \frac{1}{4}$. But if I do the same calculations I also get $2Dt$ for $\left< x^{2} \right>$ and $\left< y^{2} \right>$, and then get that $\sqrt{\left< r_{t}^{2} \right>} = \sqrt{\left< x_{t}^{2} \right> + \left< y_{t}^{2} \right>} = \sqrt{2Dt + 2Dt} = 2 \sqrt{Dt}$, which looks totally BS to me, but that's just my opinion. Where did my thinking go wrong?
Just to make this post clear for anyone, who comes here in the future: Case closed, I actually had the right answer in the post, and Aaron Stevens confirmed it in a reply, thank you. So for the 2D diffusion, $\left< r^{2} \right> = 4Dt$.
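The result $\left< r^2 \right> = 4Dt$, i.e. $\left< r^2 \right> = N l^2$ with $D = l^2/(4\tau)$, is easy to confirm with a quick Monte Carlo sketch (parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
l, N, walkers = 1.0, 400, 20_000   # lattice constant, steps, sample size

# Each step is one of the four nearest-neighbour moves, probability 1/4 each
moves = l * np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
steps = moves[rng.integers(0, 4, size=(walkers, N))]
r = steps.sum(axis=1)                    # final displacement of each walker

msd = np.mean(np.sum(r**2, axis=1))      # Monte Carlo estimate of <r^2>
print(msd, N * l**2)                     # both close to 400
```

The sampled mean-square displacement lands within statistical error of $N l^2$, matching the analytic $4Dt$.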
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Could magnetic fields really be completely substituted by relativity and electric fields? In many textbooks (especially those for undergraduate level), magnetic fields are described merely as a relativistic side product of electric fields when considering frames in motion relative to moving charges. Everybody knows the argument, so I will not repeat it here. Is this view correct in terms of "deeper theory"? I learned that on university in the relativistic lectures, but nevertheless we are all talking about magnetic fields as "own entities", obeying Maxwell's equations. Could we really remove magnetic fields from physics and still get the same laws? For my personal feeling, all we know is, that E and B are related by certain relativistic transformation rules which are intrinsically consistent, but I cannot imagine physics without B at all. In particular, how could Faraday's law be deduced, which at first assumes no particular relative movement but makes a statement about $\partial \vec B/\partial t$ and $\nabla \times \vec E$ in fixed frames?
No, this is a common misunderstanding of the relativistic argument. Consider the three spatial dimensions of height, width, and depth. You describe paintings in terms of their height and width, but one day you discover that if you rotate the painting, some of the height and width can turn into depth. That tells us that rotational symmetry relates the three spatial dimensions. It does not say that depth isn't real -- the whole point is that the three dimensions are on an equal footing. It does not say that we can always make every object have zero depth -- good luck building a house using that principle. It especially does not mean that invoking depth is a "mistake" due to not correctly considering rotational effects. The only thing it tells us is that if we have something with no depth, we can still rotate it into something with depth. The exact same holds for electromagnetism; substitute "rotations" for "Lorentz boosts", "height/width" for "electric fields", and "depth" for "magnetic fields".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What's the variation of the Christoffel symbols with respect to the metric? By the Leibniz rule, I expected it to be $$\delta \Gamma^\sigma_{\mu\nu} = \frac 12 (\delta g)^{\sigma\lambda}(g_{\mu\lambda,\nu}+g_{\nu\lambda,\mu}-g_{\mu\nu,\lambda}) + \frac 12 g^{\sigma\lambda}(\partial_\nu (\delta g)_{\mu\lambda}+\partial_\mu (\delta g)_{\nu\lambda}-\partial_\lambda (\delta g)_{\mu\nu}) .\tag{1}$$ However, according to Sean Carroll, $$\delta \Gamma^\sigma_{\mu\nu} = \frac 12 g^{\sigma\lambda}(\nabla_\nu (\delta g)_{\mu\lambda}+\nabla_\mu (\delta g)_{\nu\lambda}-\nabla_\lambda (\delta g)_{\mu\nu}).$$ In other words, the first term on the RHS of (1) is not there and partials on the second term are replaced by covariant derivatives. Why?
$\Gamma^{a}_{bc} = \cfrac{1}{2}g^{ad}(\partial_{b}g_{dc} + \partial_{c}g_{bd} - \partial_{d}g_{bc}) \Rightarrow$ $\delta\Gamma^{a}_{bc} = \cfrac{1}{2}\delta g^{ad}(\partial_{b}g_{dc} + \partial_{c}g_{bd} - \partial_{d}g_{bc}) + \cfrac{1}{2}g^{ad}(\partial_{b}\delta g_{dc} + \partial_{c}\delta g_{bd} - \partial_{d}\delta g_{bc}) \Rightarrow$ (using $\delta g^{ad} = -g^{af}g^{de}\,\delta g_{fe}$) $\delta\Gamma^{a}_{bc} = -\cfrac{1}{2}g^{af}g^{de}(\delta g_{fe})(\partial_{b}g_{dc} + \partial_{c}g_{bd} - \partial_{d}g_{bc}) + \cfrac{1}{2}g^{ad}(\partial_{b}\delta g_{dc} + \partial_{c}\delta g_{bd} - \partial_{d}\delta g_{bc}) \Rightarrow$ $\delta\Gamma^{a}_{bc} = -g^{af}(\delta g_{fe})\Gamma^{e}_{bc} + \cfrac{1}{2}g^{ad}(\partial_{b}\delta g_{dc} + \partial_{c}\delta g_{bd} - \partial_{d}\delta g_{bc}) \Rightarrow$ (relabelling $f \to d$) $\delta\Gamma^{a}_{bc} = -g^{ad}(\delta g_{de})\Gamma^{e}_{bc} + \cfrac{1}{2}g^{ad}(\partial_{b}\delta g_{dc} + \partial_{c}\delta g_{bd} - \partial_{d}\delta g_{bc}) \Rightarrow$ $\delta\Gamma^{a}_{bc} = \cfrac{1}{2}g^{ad}\big[\partial_{b}\delta g_{dc} + \partial_{c}\delta g_{bd} - \partial_{d}\delta g_{bc} - 2\,\delta g_{de}\Gamma^{e}_{bc}\big] \Rightarrow$ (splitting the $-2\,\delta g_{de}\Gamma^{e}_{bc}$ term into the connection corrections of the covariant derivatives; the extra $\Gamma$ terms cancel pairwise) $\delta\Gamma^{a}_{bc} = \cfrac{1}{2}g^{ad}\big[\partial_{b}\delta g_{dc} - \Gamma^{e}_{bc}\delta g_{ed} - \Gamma^{e}_{bd}\delta g_{ec} + \partial_{c}\delta g_{bd} - \Gamma^{e}_{cd}\delta g_{eb} - \Gamma^{e}_{cb}\delta g_{ed} - \partial_{d}\delta g_{bc} + \Gamma^{e}_{db}\delta g_{ec} + \Gamma^{e}_{dc}\delta g_{eb}\big] \Rightarrow$ $\delta\Gamma^{a}_{bc} = \cfrac{1}{2}g^{ad}\big[\nabla_{b}\delta g_{dc} + \nabla_{c}\delta g_{bd} - \nabla_{d}\delta g_{bc}\big]$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
Do orbiting planets have infinite energy? I know that planets can't have infinite energy, due to the law of conservation of energy. However, I'm confused because I see a contradiction and it would be great if someone could explain it. Energy is defined as the capacity to do work. Work is defined as Force x Distance. Force is defined as Mass x Acceleration. Thus, if we accelerate a mass for some distance by using some force, we are doing work, and we must have had energy in order to do that work. In orbit, planets change direction, which is a change in velocity, which is an acceleration. Planets have mass, and they are moving over a particular distance. Thus, work is being done to move the planets. In an ideal world, planets continue to orbit forever. Thus, infinite work will be done on the planets as they orbit. How can infinite work be done (or finite work over an infinite time period, if you'd like to think of it that way) with a finite amount of energy? Where is the flaw in this argument?
The power expended when moving in orbit is $\vec {F}\cdot\vec {v}=-\nabla \phi \cdot\frac {d\vec {r}}{dt}=-\frac {d\phi}{dt}$, where $\phi$ is the gravitational potential. Hence the work done by the gravitational force is $W=\int {\vec {F}\cdot\vec {v}\, dt}=-\int {\frac {d\phi}{dt}\,dt}$. For periodic motion, the integral $W$ over one period is zero. For hyperbolic motion, the integral $W$ over the entire time of motion is likewise zero.
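A quick numerical illustration of this: integrate one elliptical Kepler orbit and accumulate $\int \vec F\cdot\vec v\,dt$; the net work over a full period comes out essentially zero. (This is a sketch in arbitrary units — $GM$, the starting point, and the time step are all assumptions chosen just for the demonstration.)

```python
import math

# Illustrative units: GM = 1, unit mass; values chosen to give a bound orbit.
GM = 1.0
x, y = 1.0, 0.0        # start at an apsis
vx, vy = 0.0, 1.2      # tangential speed < sqrt(2) => bound, elliptical orbit
dt = 1e-4

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

# Orbital energy and Kepler period from the vis-viva relation
E = 0.5 * (vx**2 + vy**2) - GM / math.hypot(x, y)   # = -0.28 here
a_sm = -GM / (2 * E)                                # semi-major axis
T = 2 * math.pi * math.sqrt(a_sm**3 / GM)

work = 0.0
ax, ay = accel(x, y)
for _ in range(int(T / dt)):                  # one full period
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # velocity Verlet step
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    work += (ax * vx + ay * vy) * dt          # F.v dt per unit mass

print(work)  # a tiny number, negligible compared with |E| = 0.28
```

The accumulated work is only integration noise: the force does positive work on the way in and an equal amount of negative work on the way out, so no energy source is needed to keep the orbit going.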
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Electric charge of the Higgs field The Higgs field is \begin{equation} \Phi = \frac{1}{\sqrt{2}} \left( \begin{array}{cc} \phi_{1} + i\phi_{2} \\ \phi_{3} + i\phi_{4} \end{array} \right) \tag{1} \end{equation} with $\phi_{1}$ and $\phi_{2}$ carrying electric charge $+1$ respectively, while $\phi_{3}$ and $\phi_{4}$ are electrically neutral. Under the entry "Higgs Boson" in Wikipedia, it states: It (the Higgs field) consists of four components: two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarisation components of the massive $W^+$, $W^−$, and $Z$ bosons. Before interaction between the Higgs field and the gauge bosons, the total electric charge is the total charge of $\phi_{1}$ and $\phi_{2}$, which is $+2$. After the interaction, however, the total electric charge is the sum of the charges of $W^{+}$ and $W^{-}$, which is $0$; the electric charge is not conserved. What is wrong? Besides, if two components of the Higgs field carry positive electric charge, the whole space (even the whole universe) is electrically positive since the Higgs field permeates the whole space. This is very doubtful and seems not reasonable to me. Is this case true?
Simply read the WP article and heed its consistency. It is in P&S conventions, so please do not look at Srednicki, whose opposite conventions evidently confuse you consistently. Now, $$Q=T_3+Y_w/2, $$ so for the Higgs doublet, $Y_w =1$, hence Q = +1 for the upper component and 0 for the lower component, the one that picks up the v.e.v., cf. (1) in the WP article. So both the physical Higgs and the goldstone pumping into and out of the vacuum are neutral. SSB does not break charge. The $\phi_1+i\phi_2$ then has charge $+1$, whereas the individual real and imaginary components in it do not have a well-defined charge. There are the corresponding terms in the Lagrangian that involve the conjugate Higgs doublet, with conjugate goldstones $\phi_1-i\phi_2$ of negative charge, just as there are as many electrons as positrons, so to speak, poetically. Check that all terms $\Phi ^* \cdot \Phi$ and their functions are neutral in charge, weak isospin, and hypercharge. It is a charge +1 goldstone eaten by $(W_1-iW_2)/\sqrt{2}\equiv W^+$, with similar charge properties, perhaps counterintuitively: $W_1-iW_2$ ate $\partial (\phi_1+i\phi_2)$! I don't have a glib maxim for it, but it follows by direct calculation from the covariantly completed Higgs kinetic terms: Remind yourself of how the resulting mass term emerges proportional to $W^+ W^-$, skipping Lorentz indices, so it conserves charge, likewise. A good SM text should help. As always, check the electron-neutrino-Higgs Yukawa on the last line of (6) in the WP article to ensure you understand how it conserves charge, weak hypercharge, and weak isospin.
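The charge assignment above can be tabulated in two lines. A minimal sketch, using the Gell-Mann–Nishijima relation $Q = T_3 + Y_w/2$ with the quoted P&S hypercharge convention ($Y_w = +1$ for the doublet); the component labels are the usual ones, shown only for readability:

```python
# Q = T3 + Y_w/2 in the P&S convention quoted above.
# The Higgs doublet carries weak hypercharge Y_w = +1.
Yw = 1
components = {"upper component (phi^+)": +0.5,   # T3 = +1/2
              "lower component (phi^0)": -0.5}   # T3 = -1/2
for name, T3 in components.items():
    print(name, "Q =", T3 + Yw / 2)
# upper component (phi^+) Q = 1.0
# lower component (phi^0) Q = 0.0
```

So the component that acquires the v.e.v. is exactly the neutral one, and the vacuum carries no net electric charge.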
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How do I determine the components of a cinematic jump, for vertical and horizontal velocity? I have been tasked with determining the feasibility of The Rock's jump in the movie 'Skyscraper'. I am using projectile motion equations, but I have gotten stuck while calculating the horizontal and vertical velocities: the values I get do not satisfy a vector sum for the total velocity. My calculations are listed below: Distance run = 26 metres. Time taken = 3.03 seconds. Horizontal velocity = distance/time = 26/3.03 = 8.58 m/s. I estimated the angle of the jump to be 70 degrees from a snippet of the movie, and the maximum height of the jump to occur after 0.73 seconds. Vertical velocity at the peak: 0 = v sin(70°) − 9.8 × 0.73, so 7.15 = v sin(70°), giving v = 7.61 m/s. The sum of the two components is 16.19 m/s, which does not satisfy the Pythagorean relation. I am wondering where I have gone wrong, and what I have to do to solve for the initial velocity. Thank you.
Of course the three numbers don't describe the sides of a right triangle. You are incorrectly assuming that $v_0=v_{0,x}+v_{0,y}$ and then also expecting the Pythagorean theorem to work out with the then incorrect value of $v_0$. This makes sense because, in general, $$v_{0,x}+v_{0,y}\neq\sqrt{v_{0,x}^2+v_{0,y}^2}$$ The sum of your vectors should be a vector sum $$\mathbf{v}_0=v_{0,x}\hat i +v_{0,y}\hat j$$ Whose magnitude is then given by $$v_0=\sqrt{v_{0,x}^2+v_{0,y}^2}$$ And whose angle is given by $$\tan\theta=\frac{v_{0,y}}{v_{0,x}}$$ These equations should then be self consistent between your measured and calculated values. Then you will be able to smell what the math is cooking.
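To make this concrete with the asker's own numbers — taking $v_{0,x} = 8.58$ m/s and $v_{0,y} = 7.61$ m/s at face value as an illustration, not as a claim about the movie:

```python
import math

# The asker's component values (assumed correct for this illustration)
vx, vy = 8.58, 7.61   # m/s

v0 = math.hypot(vx, vy)                  # magnitude: sqrt(vx^2 + vy^2), NOT vx + vy
theta = math.degrees(math.atan2(vy, vx))

print(round(v0, 2))     # 11.47 (m/s), not 16.19
print(round(theta, 1))  # 41.6 (degrees)
```

Note that the resulting launch angle (about 42°) disagrees with the 70° estimated from the clip, so the measured run distance/time and the assumed angle cannot all be right at once. That internal inconsistency in the measurements, not the vector algebra, is what needs revisiting.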
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Do Chern Insulators (QAHE) have topological order (long-range quantum entanglement)? I know the IQHE is an example of "invertible" topological order in Professor Wen's definition, and topological insulators are SRE because they require underlying symmetry protection. The Chern insulator (QAHE), by contrast, requires underlying TRS breaking rather than TRS protection. Apart from lacking the external magnetic field, it is almost the same as the IQHE. Is it also an example of long-range quantum entanglement? More precisely, what topological order does it have? Also an "invertible" topological order?
It has an "invertible" topological order, characterized by a gravitational Chern-Simons term.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is the central maximum of single slit diffraction pattern wider than other fringes? I had answered a question, Single Slit Diffraction Experiment, explaining why the central maximum is the brightest one, but it also asks: Why is the central maximum twice as wide as the others? That is what is tripping me up. I have tried my best to find the reason and have looked at many diagrams, but I still can't understand why.
You can see this from the derivation of the fringe width in the single-slit experiment, though I will give you a simple answer without much mathematics. If you look at diagrams of Fraunhofer single-slit diffraction patterns, you will see that the central maximum is symmetrical about the center line; all the other maxima and minima are found on just one side of it (the pattern as a whole is, of course, also symmetrical about the center line). Both the mathematics and experimental results show that the positions of the minima depend on the details of the particular experiment: to a good approximation, each minimum lies at a distance from the center line equal to a constant (which depends on the experiment) multiplied by a whole number — that number is $1$ for the first pair of minima, $2$ for the second pair, and so on. There is no minimum on the center line itself (corresponding to the number $0$). Let $\text{Min}_1$ be the distance from the center line to the first two minima, one on each side of the center line; let $\text{Min}_2$ be the distance of the second pair of minima from the center line, and so on. $\text{Min}_2$ is about twice $\text{Min}_1$. The width of a maximum is roughly the distance between adjacent minima, so all the secondary maxima are about $\text{Min}_1$ in width. The central maximum, however, lies between the first two minima on opposite sides of the center line, so its width is $2 \times \text{Min}_1$, whereas every secondary bright fringe lies between adjacent minima on the same side of the center line. So now you can see why the width of the central maximum is about twice the width of the other bright fringes.
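A small numerical sketch makes the factor of two explicit, using the small-angle minima positions $y_m = m\lambda L/a$. The wavelength, slit width, and screen distance below are illustrative assumptions, not values from the question:

```python
# Single-slit Fraunhofer minima: a*sin(theta_m) = m*lambda  (m = 1, 2, ...)
# Small-angle positions on the screen: y_m = m * lam * L / a
lam = 650e-9    # wavelength, m (illustrative)
a = 0.10e-3     # slit width, m (illustrative)
L = 1.0         # slit-to-screen distance, m (illustrative)

y = [m * lam * L / a for m in (1, 2, 3)]  # first three minima on one side

central_width = 2 * y[0]        # central max spans from -y_1 to +y_1
secondary_width = y[1] - y[0]   # a secondary max sits between adjacent minima

print(central_width / secondary_width)  # 2.0
```

Changing the wavelength, slit width, or screen distance rescales all the minima positions together, so the ratio of the central width to a secondary width stays at 2.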
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }