Length contraction and simultaneous length measurements I am just working through an argument from Halliday Resnick to derive the Lorentz contraction (see quote below). Some paragraphs before this, the authors note that: If the rod is moving, however, you must note the positions of the end points simultaneously (in your reference frame) or your measurement cannot be called a length. A paragraph later they invoke the following argument: Length contraction is a direct consequence of time dilation. Consider once more our two observers. This time, both Sally, seated on a train moving through a station, and Sam, again on the station platform, want to measure the length of the platform. Sam, using a tape measure, finds the length to be $L_0$, a proper length because the platform is at rest with respect to him. Sam also notes that Sally, on the train, moves through this length in a time $\Delta t = L_0/v$ where $v$ is the speed of the train; that is, $$ L_0 = v \Delta t \quad \text{(Sam)} $$ This time interval $\Delta t$ is not a proper time interval because the two events that define it (Sally passes the back of the platform and Sally passes the front of the platform) occur at two different places, and therefore Sam must use two synchronized clocks to measure the time interval $\Delta t$. For Sally, however, the platform is moving past her. She finds that the two events measured by Sam occur at the same place in her reference frame. She can time them with a single stationary clock, and so the interval $\Delta t_0$ that she measures is a proper time interval. To her, the length $L$ of the platform is given by $$ L = v \Delta t_0 \quad \text{(Sally)}. $$ Then they conclude by dividing the two equations above: $$ \frac{L}{L_0} = \frac{v\Delta t_0}{v \Delta t} = \frac{1}{\gamma}$$ or $$ L = \frac{L_0}{\gamma} $$ which is the length contraction equation. 
However, I don't see in what sense the length was measured simultaneously in the derivation above. What is the detailed connection between the statement that a length measurement has to be simultaneous and the quoted derivation?
I think that in the derivation above the length was not measured by locating both ends of the moving rod (the platform) simultaneously. Einstein gives an example of a simultaneous measurement in his popular book explaining special relativity:
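To make the quoted result concrete, here is a minimal numerical check of $L = L_0/\gamma$ (a sketch; the train speed $v = 0.6c$ and platform length $L_0 = 100\,\mathrm{m}$ are illustrative values, not from the book):

```python
import math

def gamma(beta):
    """Lorentz factor for a speed expressed as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.6          # train speed as a fraction of c (illustrative)
L0 = 100.0          # proper platform length in metres (illustrative)

# Sam: proper length L0, non-proper time interval Delta t = L0 / v.
# Sally: proper time interval Delta t0 = Delta t / gamma, so
# L = v * Delta t0 = L0 / gamma.
L = L0 / gamma(beta)
print(gamma(beta))  # 1.25
print(L)            # 80.0
```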
{ "language": "en", "url": "https://physics.stackexchange.com/questions/127852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How was Newton able to guess that gravitational force is inversely proportional to distance squared? This question is puzzling me since I learnt about the gravitation law in school. Why did Newton guess/assume that gravitational force is inversely proportional to the square of distance? Did he verify that experimentally? (I remember hearing that the first experimental verification of the law of gravitation was after Newton's death.) If the answer to the above question is no, is it for example more plausible to suppose that $F\propto1/r^2$ than to suppose that $F\propto1/r^4$? Did Newton carry out a thought experiment that makes $F\propto1/r^2$ a plausible guess? So in summary: Why did Newton choose exponent of $-2$ instead of any other exponent? Was it a guess that depended on pure luck or an educated guess?
Well, for one, if $F$ goes like $r^{-4}$, all orbits except the unstable circular orbit will fly outward or collapse into the center! So that's a no-no. (I know that because Newton's laws in a central force field give conservation of angular momentum, and in that case, if $U$ is your potential, you get a differential equation for the radius $r$: $\ddot{r}=-\frac{\partial}{\partial r}\left(U(r)+\frac{\ell^2}{2r^2}\right)$, where $\ell$ is the conserved angular momentum per unit mass. With $U=-\frac{1}{r}$ the effective potential has a nice stable dip in which $r$ can oscillate. With $U=-\frac{1}{r^3}$, the potential of an attractive $r^{-4}$ force, there is no stable minimum: a particle can always lower its potential energy by continuing in the same radial direction.) I haven't read the history in detail, but I believe the order of events was something like this: First, Kepler shows that "The orbit of every planet is an ellipse with the Sun at one of the two foci" (from Wikipedia). Then, Newton shows that an elliptical orbit, together with the methods of Newtonian mechanics, implies that the force must act as an inverse square. (A proof of this can be found in A. P. French's Newtonian Mechanics, though I forget how much history is given there and how geometric his methods are.) Disclaimer: You can see my sources are shaky. There may be more to the story.
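The stability argument can be illustrated numerically. The sketch below (my own illustration, not from French's book) integrates a planar orbit under an attractive $1/r^2$ force and under a $1/r^4$ force, each started 10% faster than the circular orbit at $r = 1$; the inverse-square radius stays bounded while the inverse-fourth radius runs away:

```python
import numpy as np

def integrate_orbit(power, n_steps=50000, dt=2e-3):
    """Semi-implicit Euler integration of a unit mass under the attractive
    central force F = -r_hat / r**power, started 10% faster than the
    circular orbit at r = 1 (circular speed there is 1 for any power)."""
    pos = np.array([1.0, 0.0])
    vel = np.array([0.0, 1.1])       # 10% above the circular speed
    radii = []
    for _ in range(n_steps):
        r = np.linalg.norm(pos)
        acc = -pos / r**(power + 1)  # magnitude 1/r**power, directed inward
        vel = vel + acc * dt
        pos = pos + vel * dt
        radii.append(np.linalg.norm(pos))
        if radii[-1] > 50.0:
            break                    # the orbit has run away
    return np.array(radii)

r_sq = integrate_orbit(2)   # inverse-square force: r oscillates, stays bounded
r_4 = integrate_orbit(4)    # inverse-fourth force: r grows without bound
print(r_sq.max(), r_4.max())
```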
{ "language": "en", "url": "https://physics.stackexchange.com/questions/128245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Is there a substance that doesn't reflect OR absorb light from the visible light spectrum? Is there a substance that doesn't reflect or absorb visible light but may reflect light from another spectrum? Is there a theoretical substance that would have these properties? EDIT: Sorry I wasn't quite clear with my original question. I've updated it to express more what I was thinking. Would a substance be "invisible" if it didn't reflect or absorb light? Does a real or theoretical substance like that exist? I assume we can see glass and so on because it refracts light and to a certain extent reflects it.
It's surprisingly difficult to get a material that absorbs almost no light across the whole visible range, but one candidate would be black silicon. This has a textured surface created by etching, and the texture means light hitting the surface is multiply reflected sideways before it gets a chance to be reflected back away from the surface. With some absorption at each reflection, the multiple reflections mean the overall reflectivity can be a few percent. The material still reflects in the medium to far infra-red because the length scale of the texture is smaller than the wavelength of medium to far infra-red light. An example of the reflectance spectrum is given here: Response to question v2: Assuming I understand what you're getting at, the only technology I know of that would fit is metamaterials, and in particular metamaterial cloaking. Metamaterial cloaks are effectively waveguides that can bend light around an object, so the object in effect neither reflects nor absorbs light. I should add that we are a long way from actually achieving this in any useful way, and so far what invisible materials do exist work only at microwave wavelengths. However, the limitations are essentially technical, and the idea would work if we could but fabricate the structures required.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/128458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why are magnetic fields so much weaker than electric? In EM radiation, the magnetic field is $3\times10^8$ times smaller than the electric field (in SI units), but is it valid to say it's "weaker"? These fields have different units, so I don't think you can compare them directly, but even so it seems like we only interact with the electric field of EM radiation, not the magnetic field. Why is this?
As you already indicated, physical units need to be considered. When working in SI units, the ratio of electric field strength over magnetic field strength in EM radiation equals 299 792 458 m/s, the speed of light $c$. However, the numerical value for $c$ depends on the units used. When working in units in which the speed of light $c=1$, one would conclude that both fields are equal in magnitude. A better way to look at this is to consider the energy carried by an electromagnetic wave. It turns out that the energy associated with the electric field is equal to the energy associated with the magnetic field. So in terms of energies electric and magnetic fields are equals.
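The equal-energy claim can be checked directly: for a plane wave with $B = E/c$, the electric and magnetic energy densities $u_E = \tfrac{1}{2}\epsilon_0 E^2$ and $u_B = B^2/2\mu_0$ coincide. A minimal sketch (the field amplitude is an arbitrary illustrative value):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
c = 1 / math.sqrt(eps0 * mu0)

E = 100.0                 # electric field amplitude in V/m (illustrative)
B = E / c                 # plane-wave relation between the amplitudes

u_E = 0.5 * eps0 * E**2   # electric energy density, J/m^3
u_B = B**2 / (2 * mu0)    # magnetic energy density, J/m^3
print(c)                  # ~2.9979e8 m/s
print(u_E, u_B)           # equal, despite B being ~3e8 times "smaller"
```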
{ "language": "en", "url": "https://physics.stackexchange.com/questions/128512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 0 }
Higher-dimensional metrics in (hyper)-spherical coordinates I want to compute the components of the Riemann curvature tensor (for a case similar to the Schwarzschild solution) in 4 + 1 dimensions, but I want to use a higher-dimensional analogue of spherical coordinates. I first want to investigate a metric for the Euclidean case, i.e. "flat" space-time with no matter or energy present. How would I write this Euclidean metric using a higher-dimensional analogue of spherical coordinates?
Aren't you basically asking for the formula for higher-dimensional vectors, so that you can place yourself at the center of the sphere with appropriate lines to all vector points? I think I get what you're asking for; I asked the same questions when I was younger. Anyway, here's a hyperlink to a six-dimensional operator. The operator is used in quantum physics, where they are talking about bosons and fermions. The reason I am referencing this is that it works at the quantum level from one to six dimensions as far as $x$, $y$ and $z$ go, though they use a lot of jargon. As for the sphere of influence you were talking about, you could always plug the sphere formula into the equation.
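For what it's worth, the flat metric the question asks about can be obtained mechanically: write the 4-dimensional hyperspherical coordinate map and compute $g = J^\top J$ from its Jacobian; adding $-dt^2$ then gives the flat 4+1 line element. A sketch using sympy (the coordinate names are my own choice):

```python
import sympy as sp

r, psi, theta, phi = sp.symbols('r psi theta phi', positive=True)

# 4-dimensional hyperspherical coordinates -> Cartesian coordinates
X = sp.Matrix([
    r * sp.cos(psi),
    r * sp.sin(psi) * sp.cos(theta),
    r * sp.sin(psi) * sp.sin(theta) * sp.cos(phi),
    r * sp.sin(psi) * sp.sin(theta) * sp.sin(phi),
])

J = X.jacobian([r, psi, theta, phi])
g = sp.simplify(J.T * J)   # induced flat metric in the new coordinates
sp.pprint(g)
# diag(1, r**2, r**2*sin(psi)**2, r**2*sin(psi)**2*sin(theta)**2)
```

So the flat spatial line element is $dr^2 + r^2\,d\psi^2 + r^2\sin^2\psi\,d\theta^2 + r^2\sin^2\psi\sin^2\theta\,d\phi^2$.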
{ "language": "en", "url": "https://physics.stackexchange.com/questions/128662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does Free Will Theorem imply that quantum mechanics plays crucial role in our brain’s functioning (consciousness)? * *Does Free Will Theorem imply that quantum mechanics plays crucial role in our brain’s functioning (consciousness)? *Is opposite statement of Free Will Theorem right: If elementary particles have a certain amount of free will, then so must we? Because to me elementary particles does have a bit of free will – quantum mechanics guarantees that nobody can predict what one is going to do, say in double slit experiment. *So Penrose was right and origins of our consciousness lie in the laws of quantum mechanics? *Is the only way our free will can come from is that of quantum mechanics?
I would bet that consciousness is an emergent property of our neural network and not a quantum mechanical effect. Quantum effects are most probably washed away very quickly by the thermal bath in our brain.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/128814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
What is the relation between photoelectric current and frequency of incident light? I googled it a bit and found that photoelectric current is independent of the frequency (of incident light). Some further reading revealed that actually the "saturation current" is independent of frequency. I could not find anything about the instantaneous current (current other than the saturation current). Speculation 1: If the saturation current is not reached, then radiation of higher frequency will give a greater photoelectric current. Reason 1: Greater frequency means greater velocity of the electrons, which helps them counteract the "space charge", so more electrons can reach the anode. Problem 1: Let's say the intensity (W/m²) of the radiation remains the same, and only the frequency increases. The intensity multiplied by the area of the plate gives the total energy that arrives at the plate each second. So $IA = E/\Delta t$, where $E = nhf$ ($n$ photons of frequency $f$). Let's say each photon is able to pull out one electron from the plate, so the current $i = ne/\Delta t$, where $e$ is the charge of the electron. That gives $n/\Delta t = i/e$, and $IAe = hfi \Rightarrow i = IAe/hf$, so when we increase the frequency, if the intensity of the radiation remains the same, the current decreases. Is this right? Are my speculation and reason correct? And please help me resolve my problem.
If we increase the frequency, the current will not increase. The frequency sets the energy with which each photon hits an electron, so the electrons will move faster towards the anode, but the number of ejected electrons will not increase; therefore the current does not increase.
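The question's own estimate can be made concrete. Assuming, as the question does, one photoelectron per photon and a fixed intensity $I$, the current is $i = IAe/(hf)$, which falls as the frequency rises. A quick numerical sketch (the intensity, area and frequencies are illustrative values):

```python
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C

def photocurrent(intensity, area, freq):
    """Current i = I*A*e/(h*f), assuming one photoelectron per photon."""
    return intensity * area * e / (h * freq)

I0 = 1.0        # intensity in W/m^2 (illustrative)
A = 1e-4        # plate area in m^2 (illustrative)
f1 = 6.0e14     # green light, Hz
f2 = 1.2e15     # ultraviolet, twice the frequency

i1 = photocurrent(I0, A, f1)
i2 = photocurrent(I0, A, f2)
print(i1, i2)   # doubling f at fixed intensity halves the current
```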
{ "language": "en", "url": "https://physics.stackexchange.com/questions/128964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Maximum Density that We Can Store Information at? I was informed that: There is a maximum density at which we can store information. For a sphere with surface area A, the maximum information that can be contained within is equivalent to the maximum entropy of a sphere of size A, which is given by $$S_{max} = \dfrac{A}{4l_p^2}$$ where $l_p$ is the Planck length and Boltzmann constant is set to $1$. Incidentally, that's the equation for the entropy of a black hole. Is this true? If so, why or how does it work? Why is the Boltzmann constant set to 1, and how does that relate to the Planck length?
Yes, it is true. The Planck length is defined as $$ L = \sqrt{\frac{G\hbar}{c^3}} $$ which, in the real world, happens to be equal to $1.616\times 10^{-35}\,{\rm m}$ (meters). In everyday life, we use units like the SI units – based on kilograms, meters, seconds, kelvins etc. But adult theoretical physicists often use smarter, more natural units chosen so that the numerical value of several universal constants, namely those below, is equal to one: $$ c = \hbar = k = \epsilon_0 = 1$$ and sometimes $G=1$, too. There is no obvious relationship between the Planck length and the Boltzmann constant – the usual formulae for the former don't even include the latter because the former is a non-thermal concept. The only relationship is that both of them like to be set to one by adult physicists. At any rate, if one tries to compress too much information (imagine memory chips) into too small a space, the information has to be carried by matter, which is massive and gravitationally attracts. If the density increases above the density at which a star would collapse to a black hole, any piece of matter will collapse, too. The black hole carries a huge entropy which is – because the black hole is the ultimate stage of macroscopic evolution and because the entropy never goes down (the second law) – the maximum entropy that a localized object of the same mass or the same size may have. The black hole entropy (the information it carries in the invisible "atoms" the black hole is composed of) is equal to $$ S = k \frac{A}{4L^2} = k\frac{Ac^3}{4 G\hbar} $$ which, by the choice of units I mentioned, physicists often simplify as $S=A/4G$ or even $A/4$. Black holes just can't be beaten in the amount of information, assuming that something else is kept fixed. The fact that the information can't be any denser is also the basis of the holographic principle, whose most explicit and mathematical incarnation (or proof) is the AdS/CFT correspondence.
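As a sanity check on the numbers quoted above, one can compute the Planck length and the entropy bound for, say, a sphere of one-metre radius in SI units (the radius is an arbitrary illustrative choice):

```python
import math

G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34 # reduced Planck constant, J s
c = 2.99792458e8       # speed of light, m/s
k = 1.380649e-23       # Boltzmann constant, J/K

L_p = math.sqrt(G * hbar / c**3)
print(L_p)             # ~1.616e-35 m

R = 1.0                            # sphere radius in metres (illustrative)
A = 4 * math.pi * R**2             # surface area
S = k * A / (4 * L_p**2)           # S = k A / (4 L_p^2)
bits = A / (4 * L_p**2) / math.log(2)
print(S, bits)                     # an absurdly large number of bits
```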
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
When Fire meets Water To preface, I am not a scientific mind, but a writer looking for some validity to a possible scene. That being said, please forgive me! In my scene, huge masses of fire are raining from the sky and crashing into a salt-water ocean. [Edit]: I would imagine the burning substance as some sort of 'napalm'? I am unfamiliar with how fire would exist in the atmosphere. I guess that's the magic part (; My question is: What would happen when fire meets water in such a way? Would it make noise? Would it cause large amounts of steam? How about smoke? If it occurred near land, would the steam or smoke drift away or towards the land? I realize I am asking about the results of interactions between something magical and the physical world, but please bear with me!
It matters what's on fire. * *If it's a flammable material that floats, like oil or gasoline, some fraction of it will remain on or return to the surface and you may have flames on the water. (This is basically the only thing that I remember from watching Black Beauty as a kid.) *If it's a flammable material that sinks, the water will probably extinguish the flames, but the residual heat may boil the water around the fuel and send up clouds of steam. *If it's a flammable material that reacts with water, like metallic sodium or lithium, it may burn more violently when it hits the water.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Can Minkowski spacetime be redefined as a non-flat Riemannian manifold? Minkowski space-time is defined in terms of a flat pseudo-Riemannian manifold. I have wondered whether it can be redefined as a Riemannian manifold, and in that case what type of curvature would appear. Formally: Let M be a semi-Riemannian manifold of dimension 4, corresponding to the Minkowski space, let g be the metric tensor (non positive definite), T the Riemann curvature tensor and P a generic point of M. Question 1 Which (if any) of the following is true? a. at every P there exists one system of coordinates for which the metric g becomes Riemannian (positive definite) in a ball of non-infinitesimal radius R centred at P b. there exists one system of coordinates for which g is Riemannian (positive definite) at all P of M Comment: in words, can we, with a change of coordinates, get rid of semi-Riemannianity – either in a finite region or globally? If this is the case, what is the price in terms of curvature? This is the next question: Question 2 c. if a) above is true, is it true that T cannot be null in the entire ball? And what type of curvature does T "display"? d. if b) above is true, is it true that T cannot be null in the entire ball? And what type of curvature does T "display"? Thanks a lot
As mentioned by Qmechanic, the answer to your questions is no. However, assuming space-time is oriented, we have the following: For any pseudo-Riemannian metric $g$, there exists a normalized time-like one-form $h^0$ and a Riemannian metric $g^R$ so that $$ g = 2h^0\otimes h^0 - g^R $$ This yields a locally Euclidean topology compatible with the manifold topology.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Normal modes of the 2D double pendulum I'm performing an experiment with a 2D double pendulum, and in part of it I want to investigate the normal modes of the double pendulum, where the pendula are not of equal length or of equal mass. My question is - how will I actually know when I've successfully excited a normal mode? I start by setting the initial angles to be in (roughly) the correct proportion to one another in order for the initial setup to be an eigenvector, but of course once I release the pendulum there is a slight 'jolt' which means I can't be sure the initial conditions were exactly an eigenvector (of course, realistically I'm only going to be closely approximating one). Then, once data is recorded I can generate a phase portrait with the computer, and the phase portrait I see when I get quite close to an eigenvector looks kind of like a tilted cylinder (sorry, don't know how to post a Matlab graph here). Is there a way to tell from this phase portrait (with the angles $\theta_1$ and $\theta_2$) whether I've hit a normal mode or not? Would appreciate some help. (ie. if someone could post a picture of a phase portrait of the normal modes of a double pendulum, so that I know what I'm looking for, that would be very appreciated)
When you have excited a normal mode, both pendulums will oscillate with the same frequency and will be either in phase or out of phase. I think a better method to find the normal modes would be to oscillate the point of suspension at one of the normal-mode frequencies and look for the resonant response.
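For small oscillations, the normal-mode frequencies and the angle ratios $\theta_2/\theta_1$ you should aim for in the initial conditions can be computed from the linearized equations of motion. A sketch (masses, lengths and $g$ are illustrative values, not from the experiment):

```python
import numpy as np

def normal_modes(m1, m2, l1, l2, g=9.81):
    """Small-oscillation normal modes of a planar double pendulum.

    Linearizing about theta1 = theta2 = 0 gives
    M @ theta_ddot = -K @ theta for theta = (theta1, theta2).
    """
    M = np.array([[(m1 + m2) * l1**2, m2 * l1 * l2],
                  [m2 * l1 * l2,      m2 * l2**2]])
    K = np.array([[(m1 + m2) * g * l1, 0.0],
                  [0.0,                m2 * g * l2]])
    w2, vecs = np.linalg.eig(np.linalg.inv(M) @ K)
    order = np.argsort(w2)                  # sort modes by frequency
    return np.sqrt(w2[order]), vecs[:, order]

omega, modes = normal_modes(m1=1.0, m2=1.0, l1=1.0, l2=1.0)
print(omega)                # the two normal-mode angular frequencies
print(modes[1] / modes[0])  # theta2/theta1 ratio to set up for each mode
```

For equal masses and lengths this reproduces the textbook result $\omega^2 = (g/l)(2 \pm \sqrt{2})$ with $\theta_2/\theta_1 = \pm\sqrt{2}$ (in phase and out of phase, as described above).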
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Symmetric eigenfunctions? So a symmetric eigenfunction / wavefunction is defined as: $$P_{ij} \psi_a (r_1,r_2,\dots,r_i,\dots,r_j,\dots,r_N )=\psi_a(r_1,r_2,\dots,r_i,\dots,r_j,\dots,r_N )$$ But for it to be symmetric, does this have to be true for all $ij$ combinations or only one? (Note that $P_{ij}$ is the operator that exchanges $r_i$ and $r_j$.)
Well, it's just terminology: A (wave)function $\psi(x_1,\dots,x_N)$ is * *symmetric in $i,j$ iff $\psi(x_1,\dots,x_i,\dots,x_j,\dots,x_N) = \psi(x_1,\dots,x_j,\dots,x_i,\dots,x_N)$ *antisymmetric in $i,j$ iff $\psi(x_1,\dots,x_i,\dots,x_j,\dots,x_N) = -\psi(x_1,\dots,x_j,\dots,x_i,\dots,x_N)$ *fully symmetric iff symmetric in all $i,j$ *fully antisymmetric iff antisymmetric in all $i,j$. The wavefunction of multiple bosons is fully symmetric, the wavefunction of multiple fermions is fully antisymmetric.
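The terminology can be checked with a toy two-particle example: the symmetrized combination $f(x_1)g(x_2)+f(x_2)g(x_1)$ is symmetric under exchange, while the antisymmetrized combination picks up a minus sign. A minimal numeric sketch (the functions $f$ and $g$ are arbitrary choices):

```python
import math

f = math.sin          # arbitrary single-particle functions
g = math.exp

def psi_sym(x1, x2):
    """Symmetrized two-particle combination."""
    return f(x1) * g(x2) + f(x2) * g(x1)

def psi_anti(x1, x2):
    """Antisymmetrized two-particle combination."""
    return f(x1) * g(x2) - f(x2) * g(x1)

x1, x2 = 0.3, 1.7
print(psi_sym(x1, x2) - psi_sym(x2, x1))    # 0: symmetric under exchange
print(psi_anti(x1, x2) + psi_anti(x2, x1))  # 0: antisymmetric under exchange
```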
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there an analogous Gauss' law which is applicable for a gravitational field? Consider the Earth to be a flat infinite plane having a surface mass density comparable to the mass density of the actual Earth. * *Can there be an analogous Gauss' law that gives the gravitational field at any point on a particular Gaussian surface? *If yes, what would be the substitute for the permittivity constant that appears in Gauss' law for an electric field?
Any inverse-square law can be substituted by a Gauss law. In gravitation, the gravitational field is $$E_{g}(R)=\frac{GM}{R^{2}}$$ Think of a sphere of radius $R$ around the object of mass $M$ (this can be generalized to any shape). The gravitational flux coming out of it is $$\phi_{g}= E_{g}\times4\pi R^{2}=4\pi GM$$ So Gauss's law will read $$\phi_{g}=4\pi G\times M_{enclosed}$$ Similarly for electric charge $$\phi_{e}=\frac{q_{enclosed}}{\epsilon_{0}}$$ Comparing the two: the substitute for $1/\epsilon_{0}$ is $4\pi G$.
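Applied to the infinite plane from the question, a pillbox Gaussian surface gives a uniform field $g = 2\pi G\sigma$, independent of distance from the plane. A quick numerical sketch (the surface density $\sigma$ is an illustrative value, not the Earth's):

```python
import math

G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

def plane_field(sigma):
    """Field of an infinite plane of surface mass density sigma (kg/m^2),
    from the gravitational Gauss law with a pillbox: 2 g A = 4 pi G sigma A."""
    return 2 * math.pi * G * sigma

sigma = 1.0e9            # kg/m^2 (illustrative)
g = plane_field(sigma)
print(g)                 # ~0.419 m/s^2, the same at every distance from the plane
```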
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there inductance to a DC circuit? When a DC circuit is carrying current, large amounts or small, is there induced-emf due to the inductance? Or is it only applied to AC circuits?
Yes, there is inductance in a DC circuit. Inductance is the ratio of magnetic flux to current, and if it changes it can and will change the original current flow. If you have a continuously moving contact that changes the amount of inductance for a given current, then you can control the current flow with induction; either quantity can control the other as long as the contact position or the loop count is changing. $$V_L=N\frac{\mathrm{d}\phi}{\mathrm{d}t}$$ where $V_L$ is the induced voltage in volts, $N$ is the number of turns in the coil, and $\frac{\mathrm{d}\phi}{\mathrm{d}t}$ is the rate of change of magnetic flux in webers per second. The equation simply states that the induced voltage ($V_L$) is proportional to the number of turns in the coil and to the rate of change of the magnetic flux ($\mathrm{d}\phi/\mathrm{d}t$). In other words, when the rate of change of the flux is increased, or the number of turns in the coil is increased, the opposing induced voltage will also increase; so with a moving contact you have an ongoing change in current flow from self-induction.
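Even with a fixed inductor, a DC circuit shows its inductance whenever the current changes, most visibly at switch-on, where the current follows $i(t) = \frac{V}{R}\left(1 - e^{-Rt/L}\right)$. A sketch with illustrative component values:

```python
import math

V = 12.0    # supply voltage, volts (illustrative)
R = 6.0     # resistance, ohms (illustrative)
L = 0.3     # inductance, henries (illustrative)

def current(t):
    """RL switch-on transient: i(t) = (V/R) * (1 - exp(-R t / L))."""
    return (V / R) * (1.0 - math.exp(-R * t / L))

tau = L / R                   # time constant, 0.05 s here
print(current(tau))           # ~63% of the final current after one tau
print(current(10 * tau))      # essentially the steady DC value V/R = 2 A
```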
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
What is $c + (-c)$? If object A, moving at velocity $v$ (normalized so that $c=1$) relative to a ground observer, emits object B at velocity $w$ relative to A, the velocity of B relative to the ground observer is $$ v \oplus w = \frac{v+w}{1+vw} $$ As expected, $v \oplus 1 = 1$, as "nothing can go faster than light". Similarly, $v \oplus -1 = -1$ (the same thing in the other direction). But what if object A is moving at the speed of light and emits object B at the speed of light in the exact opposite direction? In other words, what is the value of $$1 \oplus -1?$$ Putting the values into the formula yields the indeterminate form $\frac{0}{0}$. This invites making sense of things by taking a limit, but $$ \lim_{(v,w)\to (1,-1)} \frac{v+w}{1+vw}$$ is not well-defined, because the limit depends on the path taken. So what would the ground observer see? Is this even a meaningful question? Edit: I understand $1 \oplus -1$ doesn't make sense mathematically (I thought I made that clear above!); I'm asking what would happen physically. I'm getting the sense that my fears were correct: it's physically a nonsensical situation.
As already noted, Special Relativity cannot account for an observer moving at the speed of light. It is also instructive to calculate the proper time for the object ${\bf A}$: as you approach the speed of light the proper time becomes zero. So it is even impossible to define in ${\bf A}$'s frame of reference the moment at which ${\bf B}$ will be emitted.
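The path dependence mentioned in the question is easy to see numerically: approaching $(v,w) \to (1,-1)$ along three different paths gives three different answers. A small sketch:

```python
def add_velocities(v, w):
    """Relativistic velocity addition with c = 1."""
    return (v + w) / (1 + v * w)

eps = 1e-6  # how close we approach the point (1, -1)

# Path 1: along w = -1, v -> 1: the composite stays at -1
p1 = add_velocities(1 - eps, -1.0)
# Path 2: along v = 1, w -> -1: the composite stays at +1
p2 = add_velocities(1.0, -1 + eps)
# Path 3: along w = -v, v -> 1: the composite stays at 0
p3 = add_velocities(1 - eps, -(1 - eps))
print(p1, p2, p3)   # -1.0, 1.0, 0.0: the limit depends on the path
```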
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 1 }
How does exciting an electron's surrounding electromagnetic field cause 'electron excitation'? In more meaningful words than the ones above, how does adding energy to the EM field cause the electron to change orbitals or oscillate in a different pattern?
how does adding energy to the EM field cause the electron to change orbitals or oscillate in a different pattern. Here one is using two frameworks, the classical and the quantum mechanical. The classical electromagnetic field is composed of an enormous number of photons, each with energy $E = h\nu$, where $\nu$ is the frequency of the electromagnetic field. The electron is an elementary particle, a quantum mechanical entity. It interacts with photons via a well-prescribed mathematical calculational mechanism, the Feynman diagrams. The electron may be free, and then it can scatter off a photon and take part of its energy, or have even more complicated interactions; but since you are framing your question in terms of EM fields, this is another story. Electrons bound in free atoms can be kicked up to higher energy levels by a photon with the correct energy. Electrons in a solid are also bound, but complicated potentials exist which may allow many energy levels common to the whole solid, to the point of seeming continuous. An EM field with the appropriate energy may move the electrons around in orbitals allowed for the solid, which may manifest in the solid differently than before exposure to the said EM field, for example as oscillations due to different collective orbitals.
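The "photon with the correct energy" condition can be illustrated for hydrogen, whose Bohr-model levels are $E_n = -13.6\,\mathrm{eV}/n^2$: kicking the electron from $n=1$ to $n=2$ requires a photon of about $10.2\,\mathrm{eV}$. A sketch:

```python
h = 4.135667696e-15   # Planck constant in eV*s

def hydrogen_level(n):
    """Bohr-model energy of hydrogen level n, in eV."""
    return -13.6 / n**2

dE = hydrogen_level(2) - hydrogen_level(1)   # energy the photon must supply
nu = dE / h                                  # required photon frequency
print(dE)   # 10.2 eV
print(nu)   # ~2.47e15 Hz, in the ultraviolet
```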
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Potential Energy Concept Imagine a book that we lift with a force exactly equal to the force of gravity, so the forces cancel out and the book moves with a constant velocity. Consider the situation after the book has been lifted and has come to rest once again. According to the work and kinetic energy laws, $\Delta W = \Delta K.$ This seems to hold here, since both are zero. That's okay, but where did the increase in the potential energy come from? Is energy not conserved?
Imagine a book that we lift it with a force that is exactly equal to the force of gravity so the forces cancel out Ok, so sum of the forces is 0 and the acceleration is zero. and the book moves with a constant velocity. Spooky. Was the book moving initially? ...after the book has been lifted, and it has come to rest once again. According to the work and kinetic energy laws ΔW=ΔK This seems to hold here, since both are zero. Presumably not then; sounds like magic. Energy is always conserved (globally). If there was a change in height, a force was applied to the book and energy was added to the book-Earth system. If no net force was applied to the book, it didn't change height and no gravitational potential was added to the system.
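The bookkeeping behind this resolution is simple: the lifting force does work $+mgh$, gravity does $-mgh$, so the net work and hence $\Delta K$ are zero, while the lifter's $+mgh$ is exactly the gain in gravitational potential energy of the book-Earth system. A minimal numeric sketch (mass and height are illustrative values):

```python
g = 9.81      # m/s^2
m = 1.2       # book mass in kg (illustrative)
h = 0.5       # lift height in m (illustrative)

W_lift = m * g * h        # work done by the hand on the book
W_gravity = -m * g * h    # work done by gravity on the book
delta_K = W_lift + W_gravity   # net work = change in kinetic energy
delta_U = m * g * h            # gain in gravitational potential energy

print(delta_K)            # 0.0: consistent with starting and ending at rest
print(W_lift == delta_U)  # True: the lifter's work became potential energy
```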
{ "language": "en", "url": "https://physics.stackexchange.com/questions/129777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Diffusion in the standard map Consider the standard map (also known as the Chirikov map): $$ p_{n+1} = p_n + K \sin(\theta_n) \\ \theta_{n+1} = \theta_n + p_{n+1} $$ I know that the diffusion coefficient according to the Einstein relation is defined through the relation $$ \sigma^2 = 2Dt $$ with $\sigma^2$ the variance of the position in the case of the 1-D random walker. Can you figure out why the diffusion coefficient for the standard map is given by $$ D = \lim_{n \rightarrow \infty} \frac{\langle(p_n - p_0)^2\rangle}{n} \quad ? $$ I mean, the $n$ makes sense, but what about the variance? Why is this given just in terms of $p$? What happens with the $\theta$?
The following is a passage from Diffusion in the standard map (pdf) by Itzhack Dana and Shmuel Fishman: A major difficulty in the analysis of chaotic behavior of Hamiltonian systems is the proximity of chaotic and regular orbits on various scales [2]. Thus, for $K \approx 1$, the phase plane of (1) is an intricate mixture of regular and chaotic orbits [3]. For $K \gg 1$ chaotic orbits fill almost the entire phase plane, but "islets of stability", in particular accelerator modes [2, 7], are known to exist for $K$ arbitrarily large. When a chaotic orbit approaches such an islet, it may wander near it in a "regular" fashion. The area of each islet generally decreases when $K$ increases. Distinctive features of a stochastic or random motion, which allow a statistical description of it, are the rapid decay of correlations and diffusion. The decay of correlations in the chaotic region is related to the local instability, measured by the Lyapunov characteristic exponent [1, 2, 8]. For $K \gg 1$, the decay of correlations is actually exponential, with the decay exponent proportional to the Lyapunov exponent [8]. The existence of a definite characteristic time for the decay of correlations and the resulting statistical independence generally imply diffusion in the unbounded direction of the action $I$ for $K > K_\mathrm{c}$ [...]. Therefore a definite diffusion coefficient $D$ can be associated with the chaotic motion $$ D = \lim_{n\to\infty} \frac{\big\langle(I_n-I_0)^2\big\rangle}{n}, $$ where the average is taken over some ensemble of initial positions $(I_0,\theta_0)$ within the chaotic region. For large $K$, where the Lyapunov exponent is large, diffusion was verified unambiguously [2].
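For large $K$ the diffusion coefficient can be estimated directly by iterating the map over an ensemble of initial conditions; treating successive $\sin\theta_n$ as uncorrelated gives the quasilinear estimate $D_{ql} = K^2/2$ with the definition above (oscillating corrections of order one exist). A sketch:

```python
import numpy as np

def estimate_D(K, n_orbits=4000, n_steps=1000, seed=0):
    """Estimate D = <(p_n - p_0)^2>/n for the standard map at kick strength K."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_orbits)
    p = np.zeros(n_orbits)          # start every orbit at p_0 = 0
    for _ in range(n_steps):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
    return np.mean(p**2) / n_steps

K = 10.0
D = estimate_D(K)
print(D, K**2 / 2)   # numerical estimate vs the quasilinear value
```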
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
When I move my arm forward in vacuum, will my body move backward? Let's say I stay at point $x=0$ in vacuum. When I move my arm forward such that it will have a positive $x$ position (say $x=5$) will the rest of my body move backward such that it will have a negative $x$ position (like $x=-1$)?
Yes. Only internal forces act within your body, and without external forces the center of mass of your body cannot change position. Since the center of mass does not move, when the arm moves forward the rest of the body must move in the opposite direction.
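Conservation of the center of mass also fixes how far the body recoils. A sketch with made-up masses (say a 4 kg arm and a 56 kg rest-of-body, using the question's position units):

```python
m_arm = 4.0     # kg (illustrative)
m_body = 56.0   # kg (illustrative)

# Both start at x = 0; the center of mass must stay at 0.
x_arm = 5.0     # the arm ends up at x = 5, as in the question
x_body = -m_arm * x_arm / m_body    # recoil of the rest of the body

com = (m_arm * x_arm + m_body * x_body) / (m_arm + m_body)
print(x_body)   # ~ -0.357: the body drifts slightly backwards
print(com)      # ~0: the center of mass has not moved
```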
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to differentiate between rotating frame and linearly accelerating frame? Two friends, $A$ and $B$ are part of an experiment. $A$ is placed in a closed box and made to accelerate in free space at an acceleration $g$. $B$ is also placed in a closed box, but is made to rotate in a circle at uniform speed, such that the radial acceleration is also $g$. Can $A$ and $B$ perform some experiment from their boxes to tell who is moving radially and who is moving linearly?
A simple pendulum would be a good experiment to detect both kinds of non-inertial frame, rotating and linearly accelerating. If you know the weight of the pendulum in an inertial frame, then in the rotating frame its apparent weight decreases because of the radially outward centrifugal force acting on it. (If the Earth stopped rotating, our weight would increase!) Think: what would happen if you were inside a rotating hollow sphere? In the linearly accelerating frame, the pendulum cannot hang perpendicular to the direction of motion; it makes an angle with respect to the 'vertical line' because of the pseudo-force, and the tangent of that angle is proportional to the magnitude of the acceleration. (This is a principle behind measuring the acceleration, and hence the speed, of a spacecraft in outer space, where there is no appreciable gravity.)
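The last point can be made quantitative: in a frame with linear acceleration $a$, the pendulum settles at $\theta = \arctan(a/g)$ from the vertical. A quick sketch (the value of $g$ is an assumption):

```python
import math

g = 9.81  # m/s^2, assumed local gravity
# Pendulum tilt from the vertical in a frame with linear acceleration a:
for a in [0.0, 1.0, 9.81]:
    theta = math.degrees(math.atan2(a, g))
    print(a, round(theta, 2))
```

At $a = g$ the pendulum hangs at 45 degrees; for small $a$ the angle grows nearly linearly, which is what makes it usable as an accelerometer.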
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How can gravity affect light? I understand that a black hole bends the fabric of spacetime to a point that no object can escape. I understand that light travels in a straight line through spacetime unless distorted by gravity. If spacetime is being curved by gravity, then light should follow that bend in spacetime. But in Newton's Law of Universal Gravitation, the masses of both objects must be entered, and a photon has no mass; why should a massless photon be affected by gravity in Newton's equations? What am I missing?
If the mass of light is assumed to be strictly zero, Newtonian gravity would produce zero force. However, the orbit of light is determined by acceleration, not force. For zero mass the acceleration is undefined (0/0); in the limit of the photon mass going to zero, the force goes to zero, but the acceleration is of course independent of the photon mass. One can therefore simply apply the Newtonian acceleration to an object moving at speed $c$. This results in the value of Einstein's 1911 paper, which is half the GR prediction and half the experimental value. See https://en.m.wikipedia.org/wiki/Gravitational_lens#History.
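The factor-of-two difference can be checked with standard constants (solar mass and radius are textbook values, not taken from the answer): for a ray grazing the Sun, the 1911 "Newtonian" deflection is $2GM/(c^2R)$ while GR gives $4GM/(c^2R)$.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m, grazing impact parameter

alpha_newton = 2 * G * M_sun / (c**2 * R_sun)   # 1911 / "Newtonian" value
alpha_gr = 4 * G * M_sun / (c**2 * R_sun)       # full GR value

arcsec = lambda rad: rad * 180 / 3.14159265358979 * 3600
print(arcsec(alpha_newton), arcsec(alpha_gr))   # ~0.87 and ~1.75 arcsec
```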
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 4 }
Which way does the scale tip? I found the problem described in the attached picture on the internet. In the comment sections there were two opposing solutions, so it made me wonder which of those would be the actual solution. So basically the question is the following. Assume we have two identical beakers, filled with the same amount of the same liquid, let's say water. In the left beaker a ping pong ball is attached to the bottom of the beaker with a string, and above the right beaker a steel ball of the same size (volume) as the ping pong ball is hung by a string, submerging the steel ball in the water as shown in the picture. If both beakers are put onto a scale, which side will tip? According to the internet, either of the following answers was believed to be the solution. * *The left side would tip down, because the ping pong ball and the cord add mass to the left side, since they are actually connected to the system. *The right side would tip down, because of the buoyancy of the water on the steel ball pushing the steel ball up and the scale down. Now what would the solution be according to physics?
Here is a free body diagram of the balls: … and one of the water volume: The four balance equations are $$ \begin{align} B_1 - T_1 - m_1 g & =0 \\ B_2 + T_2 - m_2 g & = 0 \\ F_1 + T_1 - B_1 - M g & = 0 \\ F_2 - B_2 - M g & = 0 \end{align} $$ where $\color{magenta}{B_1}$,$\color{magenta}{B_2}$ are the buoyancy forces, $\color{red}{T_1}$,$\color{red}{T_2}$ are the cord tensions and $M g$ is the weight of the water, $m_1 g$ the weight of the ping pong ball and $m_2 g$ the weight of the steel ball. Solving the above gives $$\begin{align} F_1 & = (M+m_1) g \\ F_2 & = M g + B_2 \\ T_1 & = B_1 - m_1 g \\ T_2 & = m_2 g - B_2 \end{align} $$ So it will tip to the right if the buoyancy of the steel ball $B_2$ is more than the weight of the ping pong ball $m_1 g$. $$\boxed{F_2-F_1 = B_2 - m_1 g > 0}$$ This is the same answer as @rodrigo but with diagrams and equations.
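Plugging in illustrative numbers (a 4 cm diameter ball and the typical ~2.7 g mass of a ping pong ball; these values are assumptions, not from the question) shows how decisively the right side wins:

```python
# F2 - F1 = B2 - m1*g: right side tips down if the buoyancy on the steel
# ball exceeds the weight of the ping pong ball. Illustrative numbers.
rho_w = 1000.0                       # kg/m^3, water density
g = 9.81                             # m/s^2
V = 4 / 3 * 3.14159265 * 0.02**3     # ball volume, m^3 (r = 2 cm)
m_pp = 0.0027                        # kg, typical ping pong ball mass

B2 = rho_w * V * g                   # buoyancy on the submerged steel ball
F2_minus_F1 = B2 - m_pp * g
print(F2_minus_F1)                   # positive -> right side tips down
```

The buoyant force (~0.33 N) is more than ten times the ping pong ball's weight (~0.026 N), so the imbalance is far from marginal.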
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81", "answer_count": 7, "answer_id": 1 }
Some objects seem to have the same color whether the light that we perceive is emitted or reflected. Is color only a property of perception, considering these two examples: * *The glass used in a green traffic light looks green no matter how it is illuminated, either by a white light bulb behind it or a white light bulb in front, and *A green LED (to stay with the same color) emits green light, and receptors in our eyes that are excited by green light are stimulated, so we eventually perceive this emitted light as green. Does not the correspondence between the color of the objects (LEDs) and our perception indicate that some property of color is a part of the "real" world?
"Color" is a property of perception. "Optical spectrum" is a physical property of light and objects that interact with it. There is no great mystery behind the (scientific) definitions of the words "color" and "spectrum", which refer to very different aspects of light. When a physicist talks about "red light", it only means that the choice of light source is either obvious or simply not important to the experiment. If the light "looks reasonably red" to a person of normal eyesight, it will probably do in the context the physicist is talking in. However, if you want to explore the detailed relationship between color perception and physical spectrum, then you will hit on a large number of important and non-trivial relationships that are partly a consequence of the physics of light and partly a consequence of biology. I would say that even a cursory exploration of the topic is way past what can be done on this site.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What exactly are crystal planes and how do they reflect x-rays? * *What exactly are crystal planes and how do they reflect x-rays? *Are crystal planes real physical planes or just an abstract concept? *What are these planes made of? *If they are an abstraction, what do the x-rays hit and get reflected by? Individual atoms? *Then where is the concept of a plane coming in? *And why do different crystal structures have different 'active' reflecting planes? *Could someone please clear up the concept of these planes and Bragg's Law for me?
1. and 2. Crystal planes contain sets of atoms which occupy identical positions in the primitive cell. 3. The planes are, so to speak, made out of atoms (and nothing in between); or, if you want a more quantum-mechanical picture, there are electron orbitals in between, which are responsible for the chemical bonding. 4. and 5. X-rays get scattered by individual atoms. If many atoms act as scattering centers (e.g. a crystal plane, or a set of them), the scattered spherical waves, originating from the atoms as point sources, combine into a plane wave again; that is where the concept of a plane comes in. 6. This depends on the crystal structure and symmetry. 7. Wikipedia might help with nice pictures there.
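Bragg's law itself, $n\lambda = 2d\sin\theta$, can be put to work directly. A sketch with standard values (Cu K-alpha radiation on the Si (111) planes; these numbers are illustrative, not from the answer):

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta).
lam = 1.5406e-10   # m, Cu K-alpha wavelength
a = 5.431e-10      # m, silicon lattice constant

# Interplanar spacing for (hkl) planes of a cubic crystal:
h, k, l = 1, 1, 1
d = a / math.sqrt(h**2 + k**2 + l**2)

theta = math.degrees(math.asin(lam / (2 * d)))  # first-order (n = 1) angle
print(round(theta, 2))   # Bragg angle in degrees
```

Different crystal structures have different sets of spacings $d$ (and different extinction rules), which is why different planes are "active" in reflection for different crystals.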
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Will the Universe eventually stop expanding? Sorry if this is a naive question, not being even a part-qualified physicist in any way, shape or form. I've read that the universe is expanding and the rate of expansion is increasing, the assumption being that it will continue expanding indefinitely. However, isn't there another possibility? Let me illustrate my question prior to posing it. If I throw a ball in the air, it accelerates away from my hand until momentum is lost and it starts to slow down, eventually falling back to my hand. Is it possible that the expanding universe is also in the throes (excuse the pun) of an initial acceleration phase, prior to that acceleration slowing and the universe eventually compressing? Again, sorry if it's a stupid observation!
Carroll (2004), introducing the standard Friedmann-Lemaître-Robertson-Walker model, derives a formula that sheds light on the fate of the cosmos: to determine the dividing line between perpetual expansion and eventual recollapse, note that collapse requires the Hubble parameter to pass through zero as it changes from positive to negative. The scale factor $a_{*}$ at which this turnaround occurs can be found by setting $H=0$ in the Friedmann equation, which after rearranging gives $$\Omega_{\Lambda 0}\,a_{*}^3+(1-\Omega_{M0}-\Omega_{\Lambda 0})\,a_{*}+\Omega_{M0}=0$$ where the $\Omega$'s are the density parameters associated with vacuum energy and matter in the cosmos, defined as $\frac {8 \pi G \rho}{3H^2}$. This cubic equation in the scale factor $a_{*}$ gives us predictive power: if it has no real positive solution, then there is no turnaround and we have to expect perpetual expansion. The current experimental data indeed favor such an open-ended cosmic future.
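The cubic can be solved numerically to check the claim. A sketch assuming roughly the concordance values $\Omega_{M0}\approx 0.3$, $\Omega_{\Lambda 0}\approx 0.7$ (an assumption, not quoted in the answer):

```python
import numpy as np

# Turnaround condition:
#   Omega_L * a^3 + (1 - Omega_M - Omega_L) * a + Omega_M = 0
Omega_M, Omega_L = 0.3, 0.7

roots = np.roots([Omega_L, 0.0, 1 - Omega_M - Omega_L, Omega_M])
real_positive = [r.real for r in roots
                 if abs(r.imag) < 1e-12 and r.real > 0]
print(real_positive)   # empty list -> no turnaround: perpetual expansion
```

With these values the cubic reduces to $0.7\,a_*^3 + 0.3 = 0$, whose only real root is negative, so no physical turnaround scale factor exists.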
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
What is entropy really? On this site, change in entropy is defined as the amount of energy dispersed divided by the absolute temperature. But I want to know: What is the definition of entropy? Here, entropy is defined as average heat capacity averaged over the specific temperature. But I couldn't understand that definition of entropy: $\Delta S$ = $S_\textrm{final} - S_\textrm{initial}$. What is entropy initially (is there any dispersal of energy initially)? Please give the definition of entropy and not its change. To clarify, I'm interested in the definition of entropy in terms of temperature, not in terms of microstates, but would appreciate explanation from both perspectives.
The entropy of a system is the amount of information needed to specify the exact physical state of a system given its incomplete macroscopic specification. So, if a system can be in $\Omega$ possible states with equal probability then the number of bits needed to specify in exactly which one of these $\Omega$ states the system really is in would be $\log_{2}(\Omega)$. In conventional units we express the entropy as $S = k_\text{B}\log(\Omega)$.
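A small numerical illustration of the two conventions (the system with $2^{100}$ equally likely states is hypothetical):

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant

Omega = 2**100       # hypothetical: 100 independent two-state degrees of freedom
bits = math.log2(Omega)        # information needed to pin down the exact state
S = k_B * math.log(Omega)      # the same quantity in conventional units

print(bits, S)   # 100 bits, about 9.6e-22 J/K
```

The conversion is just a change of logarithm base and units: one bit of missing information corresponds to $k_B \ln 2$ of thermodynamic entropy.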
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 14, "answer_id": 5 }
How is the integrand concluded to be identically zero? In expanding the classical Klein-Gordon field in Fourier space to write it in terms of $\phi(\mathbf{p})$ instead of $\phi(\mathbf{x})$, I reached the following result. $$\int \mathrm{d}^3p\exp({i\mathbf{p}\cdot\mathbf{x}})\left[\frac{\partial^2}{\partial \mathrm{t}^2}+|\mathbf{p}|^2 + m^2\right]\phi(\mathbf{p},t) =0$$ Now, how is it concluded that $$\left[\frac{\partial^2}{\partial \mathrm{t}^2}+|\mathbf{p}|^2 + m^2\right]\phi(\mathbf{p},t) =0.$$ I suspect there is a physical rather than mathematical reasoning for this, since, at least apparently, the integrand could have the whole complex plane as its range. (Or is the assumption that $\phi(\mathbf{x})$ is real, and therefore $\phi ^*(\mathbf{p})=\phi(\mathbf{-p})$, relevant?)
This is essentially the statement that the Fourier transform is injective and therefore invertible. In looser language, it states that the functions $\mathbf x\mapsto\exp(i\mathbf p\cdot\mathbf x)$ are linearly independent, so any sum of them (i.e. $\int\mathrm d^3\mathbf p$) that gives zero must have identically zero coefficients. Thus, it applies to any (nice enough) function $f$: if $$ \tilde f(\mathbf x)=\frac{1}{(2\pi)^{3/2}}\int \mathrm d^3\mathbf p f(\mathbf p)\exp(i\mathbf p\cdot\mathbf x)=0 $$ then you can conclude that $f(\mathbf p)$ itself is identically zero. The standard way to see this is via a direct inversion. The Fourier transform is essentially its own inverse, and one can calculate the double transform to see that $$ f(\mathbf p)=\frac{1}{(2\pi)^{3/2}}\int \mathrm d^3\mathbf x \tilde f(\mathbf x)\exp(-i\mathbf p\cdot\mathbf x) $$ under appropriate niceness conditions on $f$. With that calculation in hand, it is trivial to show that if $\tilde f$ vanishes then $f$ must do so as well. This does take some getting used to. You are correct in your feeling that the fact that an integral $\int g(p)\mathrm dp$ vanishes does not by itself imply that the integrand is zero. Instead, the principle at play is slightly subtler: the integral is of the form $\int g(x,p)\mathrm dp$, and it vanishes for each and every $x$. In the former case you have a single piece of information, which is not enough to pin down an arbitrary function of position, but in the latter case you have one condition for each real $x$, and that turns out (if the dependence of $g(x,p)$ on $p$ for different $x$s is 'different enough') to be sufficient.
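The invertibility is easy to check numerically with a discrete Fourier transform, which plays the same role on sampled data (a sketch, not part of the original argument):

```python
import numpy as np

# If the transform of f vanished identically, f would have to vanish too.
# Check the underlying fact numerically: the inverse transform recovers f
# exactly (up to roundoff), so the transform loses no information.
rng = np.random.default_rng(1)
f = rng.standard_normal(64)        # arbitrary "coefficient" function

f_tilde = np.fft.fft(f)            # forward transform
f_back = np.fft.ifft(f_tilde)      # inverse transform

err = np.max(np.abs(f_back - f))
print(err)                         # ~1e-15: inversion is exact to roundoff
```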
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Transformer ratios - 1:2 vs 50:100 I am only the equivalent of a high school student, so please, if possible, don't answer this question with anything too complex or really advanced university level. I am very happy to research new concepts anyone mentions, but can you please keep it reasonably simple. With transformer windings, primary vs secondary, is there any difference between or advantage to using a 50:100 ratio rather than a 1:2 ratio? With a 1:2 ratio it would seem easier to make the wires thicker and allow for larger currents, whereas with a 50:100 ratio to give the same cross sectional area of wire, the coil would have to be much longer, larger, and involve more metal in production. So there are obvious disadvantages to using 50:100, but are there any advantages? Thank you very much.
I cannot possibly improve on CuriousOne's excellent answer, but I can try to respond to Carl Witthoft's comment: "Consider the inductance as well as the coupling efficiency in each design." When not loaded, the primary coil of a transformer acts as a self-inductance. The self-inductance is proportional to the number of turns squared, whereas the resistance of the wire is proportional to the number of turns $N$. (This is an approximation, because the turns do not all have the same circumference.) For a given supply voltage, the current therefore scales like $1/N^2$, and since the coil has resistance, the ohmic heating ($I^2R$) scales like $1/N^3$. Furthermore, the B-field depends on $NI$ and so scales like $1/N$, so the chance of saturating the core is reduced. Finally, if the transformer with no load is connected to the mains supply, then on average over a cycle no power is dissipated, but current still has to flow through the supply cables, which have resistance. With a larger self-inductance the current is smaller, and so is the ohmic heating in the supply lines.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 2 }
Is there a difference in handwritten nabla $\vec{\nabla}$ with an overset arrow and typeset nabla $\nabla$? According to some physicist at KIT it is usual to write the following when using pen and paper: whereas in typeset texts you write $\nabla$. Is that true? Are there sources for this convention?
Yes, the reason for this is that in maths $\nabla$ is often used as the vector differential operator, which is a vector operator. When typesetting, the convention to denote a vector is bold text, e.g. $\bf{x}$. However, for handwriting you can't really write bold font, so other conventions are needed. Common ones are putting an arrow over the symbol, as in your example, or underscoring it. Note this is probably not how the notation developed historically, what with handwritten maths having existed before typesetting, but this is the standard modern usage, although some books use other conventions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Amplification of magnetic field Can we, by any means, amplify a magnetic signal as we can an electric signal? As both electric and magnetic fields can be represented in the form of a wave, the analogy seems natural. I want both the input and the output to be magnetic signals.
I presume your idea revolves around permanent magnets. If so, stacking magnets of decreasing size in a pyramid configuration grants you a powerful localized field at the smallest point. Halbach configurations allow for combined field manipulation, which comes with the downside of slowly degrading the total field strength over time. Please do let me know what you find, as I am researching a similar query.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why is density an intensive property? I am still trying to understand what intensive and extensive properties are. Possibly someone can give a pointer to a decent text (preferably on the web), as I am not too happy (to say the least) with what I found so far on the web. I already asked one question here on this, which I finally answered myself. My new problem (among several others) is that density seems to be one of the first properties taken as an example of an intensive property. While it seems a good approximation of what I know about solids and liquids, it seems to me a lot more problematic with gases, as they tend to occupy all the available space you give them. But none of the documents I found seems to make any restriction regarding the density of gases. It seems to me that my opinion (apparently contested) that velocity is an intensive property may be easier to support than the intensiveness of density in the case of gases. Or to put it differently, I do not see why pressure should be more intensive than volume, while Wikipedia lists pressure as intensive, but not volume. The ideal gas law states that $PV=nRT$, which apparently gives a pretty symmetrical role to $P$ and $V$. And density depends on pressure (actually using this same formula and molecular weight). If it were not for the fact that some principles seem to be based on the concept, such as the state postulate which I found on Wikipedia, I would start wondering whether these are real concepts in physics.
From the ideal gas law $ PV = nRT $ we can develop: $$PV = \frac {m}{M}RT \rightarrow PM= \frac {m} {V} RT \quad $$ and since $ \frac {m}{V}= \rho \quad $ where $\rho$ is the density of the gas and $M$ the molar mass, we have $$ PM = \rho RT \rightarrow \rho = \frac {PM}{RT}$$ So density depends only on intensive properties. Let's also prove that the ratio of two intensive properties is itself intensive. Take three properties $a, b, c \quad$ related by $a= \frac {b}{c}$, with $b$ and $c$ intensive. If $a$ were extensive, that would lead to $ac = b$, which is a contradiction, because the LHS depends on the system's size while the RHS does not. PS: You seem confused about pressure being intensive; if so, check this and this out.
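A quick numerical check that $\rho = PM/(RT)$ does not change with the amount of gas (the gas and the conditions are illustrative assumptions):

```python
R = 8.314       # J/(mol K)
M = 0.028       # kg/mol, e.g. N2
P, T = 101325.0, 300.0   # Pa, K

rho = P * M / (R * T)    # depends only on intensive quantities

# Doubling the amount of gas at the same P and T doubles both m and V,
# leaving m/V unchanged:
for n in (1.0, 2.0):
    V = n * R * T / P
    m = n * M
    assert abs(m / V - rho) < 1e-12

print(rho)      # ~1.14 kg/m^3, independent of n
```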
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
What makes us move in time? Time is considered to be a dimension, and we are moving at a certain rate in one direction in time. What force makes us move in time? I mean, it must be either time moving or us moving in time, so there has to be some force that 'pushes'/'pulls'? Was this 'time inertia' acquired during the big bang, or is nothing moving and I am just being silly?
Take a landscape. It can be modeled by a function $f(x,y,z)$. If all the derivatives $df/dx$, $df/dy$, $df/dz$ are zero, the landscape is flat to infinity and nothing interesting exists in it. If one of the derivatives is different from zero, then we perceive a shape, and generally a landscape has a shape. As an example, suppose that this landscape is a cone, and that a funny "life" exists as a function of the distance from the center of the cone. All in one snapshot for us: birth is at the cone's base, middle age at some distance up, and death where $f$ is zero. In a similar manner we can think of time for each of us as starting at birth, making a four-dimensional shape, and ending at death; another life form would see us the way we see the cone in the example. Thus time as a dimension for human perception is a $df/dt$: if nothing changed, there would be an uninteresting landscape. Now we have developed means of studying what at first is a fourth-dimensional time axis, because all matter exists and has a $df/dt$ in the four-dimensional space. From observing how nature behaves thermodynamically and microscopically, we have concluded that time has an arrow, i.e. one cannot "move" in the negative direction: entropy always increases, and that defines an arrow of time independent of human perception. The "motion" of time is the motion of our perception. When we look at a three-dimensional landscape we can perceive it from zero to infinity; the landscape is not moving. With time, we are at a specific $t=t_0$ sequentially, and the four-dimensional landscape opens to our perception in slices, ultimately controlled by the rate of increase of entropy in our surroundings. The "force" is the usual statistical mechanics and quantum statistical mechanics that rule the nature of matter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Magnetic Force on a Ferromagnetic Material I am currently working on a project involving solenoids, and I needed a force equation (newtons, not a measure of magnetic field strength). What I came up with after some digging around on the internet is the equation: $$F = (NI)\mu_0\frac{\text{Area}}{2g^2}$$ Where $F$ is force (in newtons), $N$ is the number of turns in the coil, $I$ is the current being passed through the coil, $\mu_0$ is the magnetic permeability of vacuum, and $g$ is the gap between the coil and the ferromagnetic material. (Area $A$ and $g$ can be in any units, as long as you're consistent with the usage.) I don't know in which plane exactly the area $A$ is taken. Assuming I have a rod moving lengthwise into a solenoid, which plane would $A$ represent? Plane a, plane b, or another plane that I did not consider relevant to this problem? Rod: Edit: I was looking for the force an electromagnet would exert on a ferromagnetic material moving into the coil, something like this. Edit: If the equation I was using before does not work, I don't suppose anyone has the correct one? Edit: After looking at the equation some more, I realized I had written it wrong. It should be: $$F = (NI)^2\mu_0\frac{\text{Area}}{2g^2}$$
I don't know where you got your formula from, but I derived it this way: the field inside the solenoid is $\mu_0ni \hat{z}$ (say). Since the material is ferromagnetic, there is an induced, bound surface current $K\hat{\phi}$ (with $K=M$, where $M$ is the magnetization). The magnetization is uniform, so the bound volume current is zero, $$ J_b~=~\nabla \times \left(M\hat{z}\right)~=~0 \,.$$ From the Lorentz force, $F=i \vec{l}\times \vec{B}$: $$ \begin{align} \Rightarrow F &= A\vec{K}\times\vec{B} \\ &=AK\left(\mu_0ni\right) \left(\hat{\phi}\times\hat{z}\right) \\ &=AM(\mu_0ni)\hat{r} \\ &=A(\mu_0ni)(\chi_m H)\hat{r} \\ &=A(\mu_0ni)(\chi_m \, ni)\hat{r} \\ &=\chi_mA\mu_0 \left(ni \right)^2\hat{r}\,, \end{align} $$ where $A$ is the area of the surface that $K$ flows on, i.e., the curved surface of the cylinder, $2\pi RL$, where $R$ is the radius and $L$ the length of the ferromagnet. The force is radially out of the surface of the core, stretching it out as if to fill the coil. I don't know why the force should depend on the "gap" between the two.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Optical signal filters Are there any optical filters which filter the signal's frequency and not based on the wavelength of the light? So what I mean is, if I have a modulated/pulsating light signal riding on a large DC offset, is there some way I can filter out the DC offset using optics alone? I've tried searching the internet for this but this is obviously hard since I don't know which keywords to use... "optical filters" are always based on filtering the spectrum of light, not on the temporal signal. Would appreciate any help and/or discussions/debates about this, thanks a lot!
You can use a reverse saturable absorber (RSA): choose its threshold such that the RSA transmits only the AC part of the signal and acts as an opaque object for the DC offset.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why are the squares of the components of velocity equal? In Principles of Physics by Resnick, Halliday, Walker, when evaluating $v_{rms}$, they first found $p = \frac{nM(v_x)^2}{V}$ and then substituted $(v_x)^2 = \frac{1}{3} v^2$ where $v^2 = (v_x)^2 + (v_y)^2 + (v_z)^2$, saying that there are many molecules, all moving in random directions, hence the average values of the squares of the velocity components are equal. But I couldn't understand their reasoning. How have they come to the conclusion that the averages of the squares of the components of the velocity are equal? Can anyone explain their reasoning? Help.
It is just the splitting of a vector into components: $$v^{2}= v_{x}^{2}+v_{y}^{2}+v_{z}^{2} \,,$$ like when you split the velocity of a projectile into two components. The molecules move in random directions, so no direction is preferred: by this isotropy, the averages over all molecules satisfy $$ \langle v_{x}^{2}\rangle=\langle v_{y}^{2}\rangle=\langle v_{z}^{2}\rangle \,.$$ (Note it is the averages of the squares that are equal; for a single molecule the three components are generally all different, and averaging the components themselves would just give zero.) Averaging the first equation and substituting then gives $$ \langle v^{2}\rangle= 3\langle v_{x}^{2}\rangle \qquad\text{i.e.}\qquad \langle v_{x}^{2}\rangle=\tfrac{1}{3}\langle v^{2}\rangle \,,$$ which is the substitution the book makes.
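The isotropy argument can be checked with a quick Monte Carlo: draw many random velocity directions at fixed speed and compare the averages of the squared components (a sketch, not from the book):

```python
import numpy as np

rng = np.random.default_rng(2)
n, v = 200_000, 1.0

# Random directions: normalize Gaussian vectors to speed v (uniform on the sphere)
vec = rng.standard_normal((n, 3))
vec *= v / np.linalg.norm(vec, axis=1, keepdims=True)

means = (vec**2).mean(axis=0)   # <vx^2>, <vy^2>, <vz^2>
print(means)                    # each close to v^2 / 3
```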
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How can I calculate the force that is applied on a tube by another tube? Let's say there are two tubes (cylinders with no tops or bottoms) with charges $q_1$ and $q_2$, radii $b_1$ and $b_2$, and lengths $l_1$ and $l_2$. These tubes are located along each other's axis, like in this figure: If the electric field that the first tube creates at a point is $$ E = \frac{q}{4\pi\varepsilon_0}\left(\frac{1}{\sqrt{b^2 + (c-a)^2}} - \frac{1}{\sqrt{b^2 + (c+a)^2}}\right) $$ where $b$ is the radius of the tube, $c-a$ is the distance between the centre of the furthest part of the tube and the point, $c+a$ is the distance between the centre of the closest part of the tube and the point, $q$ is the total charge on the tube and $\varepsilon_0$ is the electric constant. Here is the figure of the tube and the point for those who didn't understand from my description: The question is: how can I calculate the force between these two tubes? Update: The electric field formula I found is not valid here, since it holds only for a point on the axis of the cylinder. Thus I would be pleased if you could show me how to solve the problem from the beginning.
The answers already in here are good; unfortunately the integrals that arise are quite nasty, and don't have solutions in terms of elementary functions. Here is some more detail, in the special case when the tubes have zero length (so they are just charged circular loops), and further they have the same radius $b$, with separation $d$. You'll see that this is plenty nasty already! By general considerations (dimensional analysis in particular), the force will be directed along the common axis and will take the form $$ F=\frac{q_1q_2}{4\pi\epsilon_0 d^2}f\left(\frac{b}{d}\right) $$ for some function $f$. All the nontrivial information in the problem is encoded in the function $f$, which tells us how the force depends on the geometry of the setup. It remains to work out what this function looks like. We can get a long way with some physical intuition and limiting cases. Firstly, the limiting case when the ratio $x=\frac{b}{d}$ of radius to separation is very small. Now the rings essentially become point particles, so we should reduce to Coulomb's law: $f(x) \sim 1$ as $x\to0$. Now, what happens if we increase the radius while keeping separation fixed? The charge is tending to get more separated, so the forces should be decreasing: we should find that $f(x)$ decreases as $x$ increases. Finally, the limiting case when $x$ is very large: now, the rings have such a wide radius that locally the problem looks like the force between parallel charged wires, which is $\frac{\lambda_1\lambda_2}{2\pi\epsilon_0 d}$ per unit length, where the $\lambda$s denote charges per unit length. From this, you can work out that $F\sim \frac{1}{2\pi b}\frac{q_1q_2}{2\pi\epsilon_0 d}$,and $f(x)\sim\frac{1}{\pi x}$ as $x\to\infty$. Now the main physics lesson to draw is that you've now learnt pretty much everything qualitative about the force from these simple considerations without a calculation! Unless you really need to, you can stop here... But I suppose I'll carry on a little further. 
Consider two small elements of the circular wires. We can put them at positions $(b\cos\theta_1,b\sin\theta_1,0)$ and $(b\cos\theta_2,b\sin\theta_2,d)$, with ends separated by small angles $\delta \theta_1,\delta \theta_2$. The $\theta$s denote cylindrical polar angles of the positions of the charge elements in question. They carry charges $\frac{\delta \theta_1}{2\pi}q_1,\frac{\delta \theta_2}{2\pi}q_2$. They are separated by distance $r=\sqrt{(b\cos\theta_1-b\cos\theta_2)^2+(b\sin\theta_1-b\sin\theta_2)^2+d^2}=\sqrt{d^2+2b^2(1-\cos(\theta_1-\theta_2))}$. The force between them in the $z$-direction (other directions give 0 in the end by symmetry) is hence $$ \frac{q_1q_2\delta\theta_1\delta\theta_2 d}{(2\pi)^24\pi\epsilon_0 r^3} $$ from Coulomb's law. The total force is then given by summing over all such elements, which in the limit as they become very small is the integral: $$ F =\int_{-\pi}^\pi\int_{-\pi}^\pi\frac{q_1q_2d}{(2\pi)^24\pi\epsilon_0 r^3}d\theta_1d\theta_2 \\ =\frac{q_1q_2}{4\pi\epsilon_0 d^2} \frac{1}{4\pi^2}\int_{-\pi}^\pi\int_{-\pi}^\pi \left[1+2\frac{b^2}{d^2}(1-\cos(\theta_1-\theta_2))\right]^{-3/2}d\theta_1d\theta_2. $$ Here one of the integrals can be shifted by periodicity to be over $\phi=\theta_1-\theta_2$, and the second will then give simply $2\pi$. The remaining integral is what gives us $f$, which can be simplified to $$ f(x)=\frac{1}{2\pi}\int_{-\pi}^\pi \left[1+4x^2 \sin^2\left(\frac\phi2\right)\right]^{-3/2}d\phi, $$ which finally can be evaluated, but only in terms of the Jacobi elliptic integral: $$ f(x)=\frac{2 E\left(\frac{1}{1+\frac{1}{4 x^2}}\right)}{\pi \sqrt{4 x^2+1}} $$ where $E$ is the special function, the Complete Elliptic Integral of the second kind. Here's a graph of $f$. It has all the properties that we worked out without the messy computation.
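As a check on the algebra, the integral form of $f$ can be evaluated numerically and compared against the two limits worked out above (a sketch; the quadrature settings are arbitrary):

```python
import numpy as np

def f(x, n=20001):
    """Geometric factor f(x) for the force between two coaxial charged rings,
    f(x) = (1/2pi) * integral over [-pi, pi] of [1 + 4x^2 sin^2(phi/2)]^(-3/2)."""
    phi = np.linspace(-np.pi, np.pi, n)
    integrand = (1 + 4 * x**2 * np.sin(phi / 2) ** 2) ** (-1.5)
    dphi = phi[1] - phi[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) / 2) * dphi  # trapezoid rule
    return integral / (2 * np.pi)

print(f(1e-4))             # ~1: recovers the Coulomb point-charge limit
print(f(50) * np.pi * 50)  # ~1: recovers the parallel-wire limit f ~ 1/(pi x)
```

Both limits come out right, confirming the asymptotics obtained earlier without the elliptic-integral machinery.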
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why does spacetime get curved around a massive object? What problems do we face if we do not consider spacetime to be curved? As far as I know from GTR, a mass bends the spacetime around it. But why does this bending occur? There is the real-life example of a net bending when a mass is placed on it, but it is very difficult for me to visualise the bending of spacetime due to a mass. What is actually happening? What is the physical basis behind this bending? The books I have studied so far just say that spacetime "just bends" around an object, and even my teacher told me that it just bends, without any explanation. So I want to know the reason for the occurrence of this bending. What is the need for spacetime to bend around an object? What physical problem do we encounter if we do not assume the bending of spacetime?
In general relativity the fundamental equation is roughly $$\textrm{Curvature} = \textrm{Matter content}.$$ In the context of general relativity it does not make sense to ask "why" matter curves spacetime, because this is the most basic assumption of the theory. It makes more sense to ask why one would try to construct a theory in which gravitation is geometric. The answer to that question is the equivalence of gravitational and inertial mass: the dynamics of a particle in a gravitational field are independent of the particle's mass. Accepting that the equation for gravitation should be something like $$\textrm{Geometry} = \textrm{Matter},$$ one can make physical arguments for why the left-hand side should be (a particular measure of) curvature: the curvature has the right number of derivatives (two), it is zero for the spacetime of special relativity, it has the correct number of components, and, using the correct measure of curvature, conservation of energy is automatic.
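For reference, the schematic relation above, written out explicitly, is the Einstein field equation: $$ G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}. $$ The left-hand side is built from second derivatives of the metric (the curvature), vanishes in flat spacetime, and satisfies $\nabla^\mu G_{\mu\nu}=0$, which is what makes local energy-momentum conservation automatic.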
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why should any physicist know, to some degree, experimental physics? I've been trying to design a list of reasons why a proper theoretical physicist should understand the methods and the difficulty of doing experimental physics. So far I've only thought of two points:

1. Know how a theory can or cannot be verified;
2. Be able to read papers based on experimental data.

But that's pretty much all I can think of. Don't get me wrong: I think experimental physics is very hard to work on and I'm not trying to diminish it with my ridiculously short list. I truly can't think of any other reason. Can somebody help me?
Because otherwise you are a mathematician. The point of physics is to describe nature using the language of maths, but the only way to stay in contact with nature is to interact with it through experiments and observations. If you completely lose the ability to grasp how a process starts and develops, how much it can be influenced by external factors, and how to extract significant data in order to understand and reproduce it, then you are just playing with numbers. You may find interesting things, but you are not doing physics any more. Moreover, nowadays many theories get tested with computational simulations, which share many of the techniques that have been known for ages by experimentalists, especially in data analysis. Getting your hands dirty from time to time will make you a much better physicist, not only when it comes to designing an experimental test of your own work: indeed, that would be an easy task if you had kept your model close enough to nature.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 2 }
Why isn't jumping against a wall an elastic collision? According to this calculator http://www.abecedarical.com/javascript/script_collision1d.html when a low-mass object hits a high-mass object, it is reflected with a final velocity almost equal and opposite to its initial velocity. If I jump at a wall, why is my body not reflected? I know that the collision is not fully elastic, but it should be at least similar.
Changing shape could still be an elastic deformation (one from which the body springs back). So obviously there are also plastic deformations involved when jumping against a wall.
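For reference, the calculator linked in the question implements the standard 1D elastic-collision formulas. A minimal sketch (the person and wall masses and the speed below are made-up numbers) showing why a perfectly elastic bounce would send a light body back with nearly its full speed:

```python
def elastic_1d(m1, v1, m2, v2):
    # Final velocities for a 1D perfectly elastic collision,
    # from conservation of momentum and kinetic energy.
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# A 70 kg person at 5 m/s hitting an effectively immovable 10^6 kg wall:
v1p, v2p = elastic_1d(70.0, 5.0, 1e6, 0.0)
# v1p is close to -5 m/s: an elastic collision would rebound the person
# at almost full speed. The missing rebound in reality is the energy
# absorbed by the (partly plastic) deformation of the body.
```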
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Poynting vector plane wave I'm calculating the Poynting vector for a plane wave and I have some doubts. $$ \bar S = \frac 1 2 \bar E \times \bar H^* = ... = \frac {| \bar E|^2} {2 \zeta} \hat i_k $$ Now if I consider a cylindrical volume and apply the divergence theorem I get $$ \int_{s_1} Re \,\, \bar S \,\,\hat i_n dS = - \frac {| \bar E|^2} {2 \zeta} A$$ $$ \int_{s_2} Re \,\, \bar S \,\,\hat i_n dS = \frac {| \bar E|^2} {2 \zeta} A$$ $$ \int_{s_l} Re \,\, \bar S \,\,\hat i_n dS = 0$$ where $s_1$ is the left face, $s_2$ is the right face and $s_l$ is the lateral face of the cylinder. So I should have $$ \int_{S} Re \,\, \bar S \,\,\hat i_n dS = - \frac {| \bar E|^2} {2 \zeta}A +\frac {| \bar E|^2} {2 \zeta} A = 0$$ Is this possible?
You calculated the net amount of energy radiated from a cylinder with no source inside, with only a plane wave passing through it, and got 0... which seems right to me. By the way, the Re seems superfluous in your equations, since $\bar S$ here is already real.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Gravity doesn't seem to work the way it is supposed to This has been a bit of an awkward question that's been plaguing me ever since I started watching space documentaries on Discovery about 10 years ago. I was saving this for the day I would ever meet Professor Brian Cox, where I could point and say "HA! See, you can't explain that one" (he probably can, and so could someone here). It's a simple question: when you get on a merry-go-round or something that rotates fast, the force pushes you outward, but when they talk about things in space that rotate, they refer to the force that pulls them inwards. So which is it? If I spun something like a big beach ball really, really fast, then I'd still be thrown off it. If I took that into space, should it suddenly keep me glued to it? I apologise for my ignorance, but I've never had this explained to me.
The outward force is the centrifugal force, which is a pseudo-force (or virtual force): it arises because the motion is described in an accelerating frame of reference. The force pulling you inwards when in an orbit is actually gravity. A pseudo-force is called that because it isn't actually exerted. When we experience a pseudo-force, it is because our frame of reference (the point from which the measurements are being made) is itself accelerating, like a turning car. Classical mechanics doesn't distinguish between rest and uniform motion (at constant velocity); acceleration, however, changes the forces.
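To make the balance concrete: in a circular orbit, gravity supplies exactly the centripetal acceleration $v^2/r$, so nothing is "thrown off". A minimal numeric sketch (the values are roughly those of a 400 km orbit around Earth):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
r = 6.771e6          # orbital radius for ~400 km altitude, m

# Circular-orbit condition: G M / r^2 = v^2 / r  ->  v = sqrt(G M / r)
v = math.sqrt(G * M_EARTH / r)

# The gravitational and required centripetal accelerations match exactly:
a_centripetal = v ** 2 / r
a_gravity = G * M_EARTH / r ** 2
```

This gives a speed of roughly 7.7 km/s, the familiar low-Earth-orbit value.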
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Direction of current in concentric cylinders Example 7.2 in David Griffiths E & M book (3rd edition) has a side view of 2 concentric cylinders, with smaller radius $a$ and larger radius $b$. The region in between $a$ and $b$ has conductivity $\sigma$. "If they are maintained at a potential difference $\textit{V}$, what current flows from one cylinder to the other for a given length $L$?" The E field is pointing radially outward along $\textit{s}$. My question is: what direction is the current? Do electrons flow in the opposite direction of an E field? If so, does that mean the current is flowing radially inward, along $\textit{-s}$, from $b$ to $a$?
Yes - electrons flow from the negative to the positive, so in the opposite direction to the conventional direction of the electric field (which points from positive to negative). So if the E field points outwards, the electrons flow from the outer to the inner cylinder. The direction does not affect the answer (the calculation of the flow) though - at least not in magnitude. And while you might say the electrons flow from outer to inner, you would still say that the current (conventional sense) flows from inner to outer.
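To carry the example through numerically: between the cylinders $E = \lambda/(2\pi\epsilon_0 s)\,\hat s$, so $J = \sigma E$, and integrating $J$ over a coaxial surface of length $L$ gives the standard result $I = 2\pi\sigma L V/\ln(b/a)$. A small sketch (the parameter values below are made up for illustration):

```python
import math

def radial_current(sigma, L, V, a, b):
    # I = 2π σ L V / ln(b/a): the radial current between coaxial
    # cylinders of radii a < b held at potential difference V,
    # with conductivity σ in the gap.
    return 2 * math.pi * sigma * L * V / math.log(b / a)

# Example: σ = 0.1 S/m, L = 1 m, V = 12 V, a = 1 cm, b = 3 cm
I = radial_current(0.1, 1.0, 12.0, 0.01, 0.03)
```

Note that only the ratio $b/a$ enters, not the overall scale of the radii.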
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Who does work while walking? While walking, the work done by friction is zero. But who does the work, actually? How does someone get displaced? The same situation arises when someone climbs without slipping, for example up a ladder.
Let's try to look at it from two different perspectives:

1. Internal work: like the work required to stretch a spring, you do work (spend energy) while stretching and contracting your leg muscles and bending your legs.

2. External work:

2a. When you move a crate on a horizontal surface, you do work against the frictional force. If the friction is F and the crate (or your body) is displaced by d, then the work W done by you is W = Fd (irrespective of the body's mass or weight, which factor into the friction). F will be larger for heavier bodies.

2b. When you walk, the center of gravity (CG) of your body moves slightly up (while the legs cross each other) and down (legs apart). While raising your CG, you do work against gravity. Now think of repeating a process of raising a stone to a certain height and dropping it down: each time you raise it you do work; each time you drop it you are not doing any work! (I may need to expand 2b further.)
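Point 2b can be put into numbers: each time the centre of gravity rises by $h$, the walker does $W = mgh$ against gravity. A minimal sketch (the 70 kg mass and 5 cm rise are hypothetical illustrative values):

```python
def work_per_step(mass_kg, cg_rise_m, g=9.81):
    # Work done against gravity while raising the centre of gravity:
    # W = m g h. The energy is largely not recovered on the way down;
    # it is dissipated in muscles and tendons.
    return mass_kg * g * cg_rise_m

W = work_per_step(70.0, 0.05)  # roughly 34 J per stride
```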
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 10, "answer_id": 7 }
Ice and liquid water interacting across a boundary Imagine we have two thermodynamic systems, one a mass of ice and the other an equal mass of liquid water, with both at 273.16K. Each system is isolated, except that they can interact with each other across a boundary that permits the exchange of heat but not matter or work. What will the two systems look like at equilibrium? Somehow I want to automatically imagine that each system will be identical, a combination of liquid water, ice, and water vapor at 273.16K. But if this is true then the two systems were initially at the same temperature but not in thermodynamic equilibrium, an apparent violation of the zeroth law of thermodynamics.
The original state is not in thermodynamic equilibrium. The liquid side holds much more internal energy than the solid side, by the latent heat of fusion.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What's causing the effect of salt in voltage arcs? I just came across this video demonstrating that salt increases the length of voltage arcs. There is no explanation, which leaves me quite confused. Does the salt decompose during the process?
SPECULATION: "Does the salt decompose during the process?" I suspect that it does. The salt ionizes easily and the ions would migrate under the influence of the electric field. In so doing, they will further ionize the air they traverse, creating a stream of charge carriers. As the electrodes are pulled further apart, the energy available for ionization becomes smaller (the ions gain less energy per unit length as the voltage is constant but the distance increases, so the electric field strength goes down). At a certain distance, the ions lose more energy during each collision than they gain during the acceleration due to the electric field - and then the arc extinguishes. So the arc is a result of the balance between ionization and recombination. The thing about having an electrolyte on one electrode only: there is no charged particle to recombine with. That probably really helps. If I'm right, then putting electrolyte on both electrodes would be less effective. Now I'm curious... Incidentally - it sounded like this was an AC arc (there was a distinct low-frequency buzzing) - but that doesn't jibe with the above explanation unless the ions have a really long lifetime...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Potential difference between point on surface and point on axis of uniformly charged cylinder Question: Charge is uniformly distributed with charge density $ρ$ inside a very long cylinder of radius $R$. Find the potential difference between the surface and the axis of the cylinder. Express your answer in terms of the variables $ρ$, $R$, and appropriate constants. $Attempt:$ I am struggling with determining which Gaussian surface to use. If I use a cylinder, then the cylinder would have an infinite area, right? How can I deal with that? If I use a sphere (since I am trying to find the potential difference between only two points, one on the surface and one on the axis), what will be the charge inside the sphere? If I use a sphere as my Gaussian surface, I get: $$\int \overrightarrow{E}.d\overrightarrow{A}=\frac{Q }{\epsilon _{0}}$$ $$\Delta V = -\int_{i}^{f}\overrightarrow{E}.d\overrightarrow{s}$$ $$E = \frac{\rho }{4\pi R^{2}\epsilon _{0}}$$ $$\Delta V = \frac{\rho }{4\pi R^{2}\epsilon _{0}} \int_{0}^{R}dR=\frac{\rho }{4\pi R\epsilon _{0}}$$ But this is wrong.
Actually, using a cylinder for your Gaussian surface is the best approach. The fact that the area is infinite does not matter if you write the length of the cylinder as a variable, say $l$. For a coaxial Gaussian cylinder of radius $s \le R$, the surface area is $A = 2\pi s l$ and the enclosed charge is $Q = \rho \pi s^2 l$, so the $l$ terms cancel when you apply Gauss's law, leaving $E(s) = \rho s / 2\epsilon_0$. Integrating this from the axis to the surface gives the potential difference.
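Carrying this through as a numeric check: Gauss's law gives $E(s) = \rho s/2\epsilon_0$ inside the cylinder, and integrating from the axis to the surface yields $\Delta V = \rho R^2/4\epsilon_0$. A minimal sketch (the charge density and radius are arbitrary sample values):

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def E_inside(rho, s):
    # Gauss's law on a coaxial cylinder of radius s, length l:
    # E * 2π s l = ρ π s² l / ε0   ->   E = ρ s / (2 ε0)
    return rho * s / (2 * EPS0)

def delta_V(rho, R, n=10000):
    # ΔV = ∫_0^R E(s) ds, midpoint rule (exact here: the integrand is linear)
    h = R / n
    return sum(E_inside(rho, (i + 0.5) * h) for i in range(n)) * h

# Closed form for comparison: ΔV = ρ R² / (4 ε0)
```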
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Why does a photon have a wave nature? Wave theory does not account for the photon model, which was developed only to explain quantum effects like the photoelectric effect. Then why do we talk about a photon's reflection and refraction, since that would require it to have wave properties? This has been mentioned here: (http://en.m.wikipedia.org/wiki/Photon) It is light that has wave-particle duality, not the photon; the photon is simply a means of explaining the particle nature of light.
It's a matter of definition. For example, by photon, do you mean first or second quantization (the latter being the canonical treatment)? Although there's no ambiguity in theoretical physics, the nature of photon has generated much debate in the community of applied physics. See, for example, these articles: http://arxiv.org/pdf/quant-ph/0605102.pdf http://www.cft.edu.pl/~birula/publ/CQO7.pdf
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why are some variables summed infinitesimally and others aren't? This is something that has been bothering me and I hope the title kind of makes sense. It may be a stupid question but please be gentle. My question is, let's say we have current: $$I=\frac{dq}{dt}$$ And I understand that if we want to find the total charge over some time, for example, you do: $$q=\int I \; dt$$ And what this does, by the definition of an integral, is it sums every product of the infinitesimally small $dt$ and the value of $I$ at that $t$. However, how come you can't have an infinitesimally small current? Meaning, why can't this exist: $$dI=\frac{dq}{dt}$$ Is this a calculus question? Or is this just by the definition of current? I'm really confused. Thanks in advance.
The current by definition is the flow of charge. Time and charge can be quantified infinitesimally, at least when dealing with a macroscopic system. As such, a small amount of charge flowing during a small time interval is what current is. As the system gets smaller, at the quantum level, current is quantized, based on ballistic charge transport. So, dealing with current, if you want an "infinitesimal current" $dI$, it would be a second derivative of the charge flow. This can be used to talk about charge drift-diffusion, for example, just as the second derivative is used to describe curvature in calculus.
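The first relation in the question, $q=\int I\,dt$, is easy to see in action numerically: it just sums the charge slices $I\,dt$. A minimal sketch (the waveforms are arbitrary examples):

```python
import math

def charge(I, t0, t1, n=100000):
    # q = ∫ I(t) dt, approximated by summing I * dt slices (midpoint rule)
    h = (t1 - t0) / n
    return sum(I(t0 + (i + 0.5) * h) for i in range(n)) * h

# A constant 2 A for 3 s delivers q = 6 C;
# a sinusoidal current transfers zero net charge over a full period.
q_const = charge(lambda t: 2.0, 0.0, 3.0)
q_sine = charge(math.sin, 0.0, 2 * math.pi)
```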
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Relating Quantum Mechanics to Classical Electromagnetism I've been directed to a few articles, and I am sure there is a related post, but can someone explain the procedure by which we can view classical electromagnetism through quantum mechanics? Indeed we need to be able to look at any field as an ensemble of particles (photons), but how can we develop classical field theory assuming quantum mechanics?
I think the fundamental problem that many people have with quantum mechanics is that it seems to be about particles when, in reality, it is about quanta. A quantum is not the same thing as a particle! A quantum is an exchanged amount of a physical quantity between two parts of a physical system. That can be a quantum of energy, a quantum of momentum, a quantum of spin, etc. Quanta do NOT have to be discrete amounts (e.g. integer multiples of a unit). The numerical amount of the quantum of energy exchanged between an atom and a field can, for instance, be from the continuum of the atom's spectrum, in which case it is not discretized. What is "discrete" about the exchange of quanta is the "before and after" picture. In quantum systems, one can't divide the events "before" and "after" into arbitrarily small fractions or time slices. It's an all-or-nothing kind of deal. Either the interaction has happened, or it has not, but one can't "watch" it happen halfway through, as one can in classical mechanics. As a result, we are not looking at fields as an ensemble of particles. Even quantum fields are, as the name implies, "smooth" objects. What differentiates them from classical fields is that they can only interact by exchanging parts of their physical state in the form of quanta, and that we can only measure the differences between initial and final states in terms of quanta, which, in many important cases, can be interpreted as particles. What determines the physical dynamics, however, is not the individual particle that shows up in our measurement devices, but the totality of quanta that can be exchanged in the process of interest.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can there be a voltage when there is no current? I'm told at school that the Electromotive Force (e.m.f) of a battery equals the potential difference between the terminals of the battery when there is no current. How is that possible? How can there be a potential difference with no charge flowing?
Another useful analogy, apart from the gravity one described by David Z, is temperature. You can think of temperature as your potential, and the heat flow as your current. Two points of space may be at different temperatures, but if they are correctly insulated, they won't exchange heat. The heat will flow only if they are connected somehow. For current it is the same: negative charges go from low to high potential, if there is a suitable way to get through!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
If everything is relative to each other in this universe, why do we keep the Sun as the reference point? And why do we study the solar system and the universe relative to it, and not relative to the Earth?
why not relative to the Earth? Scientists do express things relative to the Earth, where that makes sense. I couldn't imagine trying to forecast the weather or model the global circulation of the Earth's atmosphere from the perspective of a non-rotating frame with its origin at the solar system barycenter. Astronomers, at least those dealing with Earth-bound equipment, also have to express things relative to the rotating Earth. They need to know where to point their Earth-bound equipment, after all. Meteorologists not only represent the weather from a geocentric point of view, they model the weather from that perspective. This means that all kinds of fictitious forces arise in their models, because the Earth is both accelerating and rotating. There's nothing wrong with that per se, so long as one does the mathematics correctly. Since any alternative (e.g., a non-rotating frame) is even worse, meteorologists make sure they do the mathematics correctly. On the other hand, using an Earth-centered, Earth-fixed perspective to describe the behaviors of the distant objects that astronomers observe is even more ludicrous than trying to describe the Earth's weather from a non-rotating, barycenter-based perspective. It's much easier to describe the behaviors of solar system bodies from the perspective of a non-rotating, barycenter-based frame. That frame similarly isn't all that good for describing the behavior of the galaxy as a whole.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
How to prove the Levi-Civita contraction? I want to prove the following relation \begin{align} \epsilon_{ijk}\epsilon^{pqk} = \delta_{i}^{p}\delta_{j}^{q}-\delta_{i}^{q}\delta_{j}^{p} \end{align} I tried expanding the sum \begin{align} \epsilon_{ijk}\epsilon^{pqk} &= \epsilon_{ij1}\epsilon^{pq1} + \epsilon_{ij2}\epsilon^{pq2} + \epsilon_{ij3}\epsilon^{pq3} \end{align} I made the assumption that $\epsilon_{ij1} = \delta_{2i}\delta_{3j} - \delta_{3i}\delta_{2j}$, then I tried to argue the following using cyclic permutations \begin{align} \epsilon_{ijk}\epsilon^{pqk} &= (\delta_{2i}\delta_{3j}-\delta_{3i}\delta_{2j})(\delta^{2p}\delta^{3q}-\delta^{3p}\delta^{2q}) \\&+ (\delta_{3i}\delta_{1j}-\delta_{1i}\delta_{3j})(\delta^{3p}\delta^{1q}-\delta^{1p}\delta^{3q}) \\&+ (\delta_{1i}\delta_{2j}-\delta_{2i}\delta_{1j})(\delta^{1p}\delta^{2q}-\delta^{2p}\delta^{1q}) \end{align} and then I realized that this was getting long and messy and I lost my way. How does one prove the Levi-Civita contraction?
One way to see this is to consider the fact that the vector space of rank (3,3) completely antisymmetric tensors ($ \Lambda_3^3(R^3) $) has dimension one (it's just a linear algebra exercise). Then define the tensor: $$ M_{ijk}^{lmn} = \delta_i^{[l} \, \delta_j^m \, \delta_k^{n]} = \frac{1}{3!} \sum_{\sigma \in S_3} sgn(\sigma) \, \delta_i^{\sigma(l)} \, \delta_j^{\sigma(m)} \, \delta_k^{\sigma(n)} $$ where we are summing over all the permutations $\sigma$ of three numbers, and $sgn(\sigma)$ denotes the sign of the permutation. It is worth noting that $$ M_{ijk}^{lmn}=\frac{1}{3!}\begin{vmatrix} \delta_i^l & \delta_i^m & \delta_i^n\\ \delta_j^l & \delta_j^m & \delta_j^n \\ \delta_k^l & \delta_k^m & \delta_k^n \end{vmatrix} $$ by the Leibniz formula for the determinant (http://en.wikipedia.org/wiki/Leibniz_formula_for_determinants). So we have $M \in \Lambda_3^3(R^3) $. Since $M \neq 0$, $M$ is a basis for the space $ \Lambda_3^3(R^3) $. Now consider the tensor $$ \epsilon_{ijk} \, \epsilon^{lmn} = B_{ijk}^{lmn} $$ Since $B \in \Lambda_3^3(R^3) $ and $M$ is a basis, there exists a constant $k$ such that $$ B_{ijk}^{lmn} = k \, M_{ijk}^{lmn} \implies \epsilon_{ijk} \, \epsilon^{lmn} = k \, \delta_i^{[l} \, \delta_j^m \, \delta_k^{n]} $$ Now, to determine $k$, contract $\epsilon_{lmn}$ on both sides and use the fact that $\epsilon_{lmn} \, \epsilon^{lmn} =3!$ (since you sum $3!$ terms equal to one) $$ \epsilon_{ijk} \, \epsilon^{lmn} \, \epsilon_{lmn} = 3! \, \epsilon_{ijk} = k \, \delta_i^{[l} \, \delta_j^m \, \delta_k^{n]} \epsilon_{lmn} = k \, \delta_i^{l} \, \delta_j^m \, \delta_k^{n} \, \epsilon_{[lmn]} = k \, \delta_i^{l} \, \delta_j^m \, \delta_k^{n} \, \epsilon_{lmn} = k \, \epsilon_{ijk} $$ So we finally get $k=3!$ and $$ \epsilon_{ijk} \, \epsilon^{lmn}=\begin{vmatrix} \delta_i^l & \delta_i^m & \delta_i^n\\ \delta_j^l & \delta_j^m & \delta_j^n \\ \delta_k^l & \delta_k^m & \delta_k^n \end{vmatrix} $$ And you can get the identity you want by contracting.
The same argument could be used in any dimension to show that $$ \epsilon_{i_1,\dots,i_n} \, \epsilon^{j_1,\dots,j_n} = \begin{vmatrix} \delta_{i_1}^{j_1} & \dots & \delta_{i_1}^{j_n}\\ \vdots & & \vdots \\ \delta_{i_n}^{j_1} & \dots & \delta_{i_n}^{j_n} \end{vmatrix} $$
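The contraction identity can also be verified by brute force, which makes a nice check on the index gymnastics. A minimal sketch in Python (indices run over 0, 1, 2 rather than 1, 2, 3):

```python
from itertools import product

def levi_civita(i, j, k):
    # Sign of the permutation (i, j, k) of (0, 1, 2); zero if any index
    # repeats. (i-j)(j-k)(k-i)/2 is +1 for even and -1 for odd permutations.
    return (i - j) * (j - k) * (k - i) // 2

def delta(a, b):
    return 1 if a == b else 0

# Verify ε_{ijk} ε^{pqk} = δ_i^p δ_j^q - δ_i^q δ_j^p for every index choice.
for i, j, p, q in product(range(3), repeat=4):
    lhs = sum(levi_civita(i, j, k) * levi_civita(p, q, k) for k in range(3))
    rhs = delta(i, p) * delta(j, q) - delta(i, q) * delta(j, p)
    assert lhs == rhs
```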
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
How many molecules rub off when I press a key? I have a lot of questions below but my overarching questions are: Do surfaces rubbing lightly together always strip molecules off of each other? and How can we model that? Clearly the answer to the first question is yes in general. We've all seen worn out keyboards and the like. But it is not clear (at least to my non-expertise) that two substances might exist such that rubbing them together does no damage on the molecular level. If you lightly brushed the face of a diamond with a feather, how long would it be before a visible dent would form? Is it more likely that the feather would be "used up" first? Does hardness play a role? When I hit a key on a computer keyboard, how many of my skin cells rub off? How much plastic rubs off the key (on average)? Where do the plastic molecules end up? Just in the air? Embedded in skin cells? What about very small mechanical systems: Does friction and loss of matter between elements make micro or nanoscopic mechanical devices unreliable? Can they be designed to avoid such issues?
An answer written by user Enthalpy, found at https://www.chemicalforums.com/index.php?topic=103339.msg363521#msg363521: This question is still debated in mechanical engineering, where unsound loads on ball bearings serve to measure a life expectancy which is then extrapolated to normal loads. Books give formulas for that, but SKF, the biggest manufacturer, says there is "no wear at all below some threshold load". I tend to believe SKF's "zero wear" claim, because they have experimented more than the books' and standards' authors did, and because nuclear engineering can detect nearly single atoms ripped from the parts and carried by the lubricant.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 2, "answer_id": 1 }
Special Relativity, 2nd Postulate -- Why? As a lowly physics undergrad who has been chewing on this 2nd postulate of special relativity for a year or more, I simply can't wrap my head around reasons why it is true or how Einstein might have been convinced enough to propose this postulate. Consider Alfred, who is riding in a car travelling at 88 m/s with his headlights on, and Bernard, who is hitchhiking on the side of the road. Why does the light propagating from Alfred's car move at $c$ relative to both Alfred and Bernard, and not at $c$ + 88 m/s relative to Bernard? The nifty results of special relativity all kind of hinge on this idea, and asking my professors in class hasn't really yielded an answer much more than "because we have never observed a case otherwise".
Maxwell's theory had predicted that the speed of light varies with the speed of the observer. Initially (prior to Fitzgerald and Lorentz advancing the ad hoc length contraction hypothesis) the Michelson-Morley experiment was compatible with the assumption that the speed of light varies with the speed of the light source (as predicted by Newton's emission theory of light) and incompatible with the assumption that it is independent of the speed of the source (as predicted by the ether theory). So in 1905 Einstein's constant-speed-of-light postulate had no justification. It was just a consequence of the Lorentz transforms. The principle of relativity was also a consequence of the Lorentz transforms. Einstein extracted the two consequences, called them "postulates", and deduced the Lorentz transforms from them. He also procrusteanized space and time to fit the new "theory".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Converting two component product to four component notation Consider the product of two left Weyl spinors in the notation commonly found in supersymmetry, \begin{equation} \chi ^\alpha\eta_\alpha = \chi ^\alpha \epsilon _{ \alpha \beta } \eta ^\beta \end{equation} This is equal to, \begin{equation} \left( \begin{array}{c} \chi ^\alpha \\ 0 \end{array} \right) ^T\left( \begin{array}{cc} \epsilon _{ \alpha \beta } & 0 \\ 0 & \epsilon ^{ \dot{\alpha} \dot{\beta} } \end{array} \right) \left( \begin{array}{c} \eta ^\beta \\ 0 \end{array} \right) = \bar{\eta} _L ^\ast \gamma _0 C \chi _L \end{equation} where I have used some common spinor identities and defined $ \eta _L \equiv P _L \eta$, $\chi _L \equiv P _L \chi $ ($\eta $ and $ \chi$ are now four-component spinors). I also use the definition $C \equiv i \gamma_0 \gamma _2 $. While I don't think anything is particularly wrong with this derivation, I have never seen a term like this in normal quantum field theory. Is there a simpler way to reformulate this to correspond to the common expression for such mass terms, or is my discomfort with this term due to my ignorance?
Just realize that you can form ordinary Dirac spinors from 2-spinors by using charge conjugation: $i\sigma_2\eta^*$ gives a right-handed field that can fit in the right-handed slot (forming a 4-component Majorana field) $$ \Psi_1=\left(\begin{array}{c}\eta \\ i\sigma_2\eta^*\end{array}\right) $$ And analogously for $\Psi_2$ in terms of $\chi$. Then you just look at the 'mass term' $\bar\Psi_1 \Psi_2$ to get your term (well, in fact you also need to insert a $P_L$). I think the textbook by Ramond shows this kind of thing. Actually, if you also add the hermitian conjugate to your expression, you can even fit everything in a single Dirac spinor $$ \Psi=\left(\begin{array}{c}\eta \\ i\sigma_2\chi^*\end{array}\right) $$ and look at $\bar{\Psi}\Psi$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Difference between heat capacity and entropy? Heat capacity $C$ of an object is the proportionality constant between the heat $Q$ that the object absorbs or loses & the resulting temperature change $\delta T$ of the object. Entropy change is the amount of energy dispersed reversibly at a specific temperature, divided by that temperature. But they have the same unit, joule/kelvin, just like work & energy share a unit. My intuition says these two are different, as one concerns a temperature change and the other refers to a specific temperature. But I cannot figure out the actual differences. What are the differences between heat capacity and entropy?
It is a bit difficult to see whether entropy and heat capacity are entirely different concepts. They are both related to the filling of energy "destinations" in a system as temperature increases; in a sense they are different ways of seeing the same thing. As a starting point we can clearly see that the two are related: $C=T\left(\frac{\partial S}{\partial T}\right)_\textrm{constraints}$ Here's an initial attempt to put the concepts into words to at least give some sense of the relationship: Entropy is the cumulative filling of energy destinations between absolute zero (motionless) and a given temperature. Heat capacity is the rate of change of entropy with temperature… scaled by temperature. Thoughts?
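As a tiny symbolic illustration of that relation (a sketch; I am using the constant-volume entropy of a monatomic ideal gas, $S = \tfrac32 nR\ln T + \text{const}$, as the example, with the constant dropping out of the derivative):

```python
import sympy as sp

T, n, R = sp.symbols('T n R', positive=True)

# Constant-volume entropy of a monatomic ideal gas, up to a T-independent
# constant that does not survive differentiation: S = (3/2) n R ln T
S = sp.Rational(3, 2) * n * R * sp.log(T)

C_v = T * sp.diff(S, T)    # C = T * (dS/dT) at fixed volume
print(C_v)                 # recovers the familiar 3nR/2
```

The heat capacity comes out temperature-independent here, but for a general $S(T)$ the formula shows exactly how the two quantities differ: one is a cumulative sum, the other a (scaled) slope.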
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 6, "answer_id": 2 }
Why isn't the quark charge taken as primitive? Why is the electron's charge implicitly taken to be the elementary charge? It would save a lot of fractions in particle physics problems.
It's a matter of history. When George Stoney developed Stoney units in 1881, or when Robert Millikan performed the oil drop experiment in 1909, it wasn't yet known that it was possible for anything to have a charge smaller in magnitude than the charge of an electron. By the time the quark model was proposed, in 1964, the convention of taking the "elementary charge" to be the magnitude of the charge of an electron was already firmly established. Changing the definition of the "elementary charge" unit because of the quark would have led to too much confusion.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is the fine structure constant a rational number? Since the fine structure constant (denoted alpha) is a pure real number, it just occurred to me to ask if it is a rational number or not.
As other people have stated, we are currently getting this quantity experimentally, so thus far this is not an answerable question. The origin of the value ~1/137 remains unexplained; there is currently no theoretical explanation for why it has the value that it has. In the future, physicists might be able to explain its origin, perhaps by deriving it from other more fundamental constants. Then we would be able to answer your question. Contrast this with, say, the gyromagnetic factor, which characterizes the magnetic moment of quantum particles. This dimensionless quantity can be derived theoretically from quantum electrodynamics. Someday, we hope, the same will be true of $\alpha$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do we need $2^\text{nd}$ quantization of the Dirac equation As a Mathematician reading about the Dirac equation on the internet, I am left with a great deal of confusion about it. So let me start with its definition: The Dirac equation is given by, $$ i \hbar \gamma^\mu \partial_\mu \psi = m c\cdot \psi $$ where the Dirac matrices $\gamma^\mu$ are defined by $\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu = 2\eta^{\mu\nu}$ and where $\psi$ is a "solution". The first source of confusion already starts with the $\psi$'s. It seems that people freely see them as spinor-valued functions or as "operator fields". But if I understand this correctly, seeing them as operators is not part of the original picture, but was later added as the so-called second quantization. Right? Now my question is the following: Why do we need this second quantization of the Dirac equation? What experiments can not be described by the original Dirac equation? Maybe there is a list somewhere or some such?
Solutions of the Dirac equation were originally interpreted as multi-component wave functions or states. Each component is similar to good ol' non-relativistic quantum mechanics. This non-operator theory is sometimes called relativistic quantum mechanical spinor theory. Yes, second quantization is a method that, after all is said and done, requires the solutions to be interpreted as operators rather than states. This is because we impose particular commutation relations among the components of the solutions, which would simply commute if they were states/wave functions. This new theory is called QFT (for spin-half particles). One disadvantage of the non-operator theory is that some states have negative energy. Apparently that's bad. The operator-valued QFT theory, on the other hand, has all positive energies. Source: If you don't mind paying for textbooks, this is a particularly self-study friendly one.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 7, "answer_id": 0 }
Hydrogen Bomb Mass to Energy? How much mass is converted to energy when a hydrogen bomb explodes? I remember an eighth grade chemistry class where, by going through the nuclear processes, my teacher estimated that roughly 2 g of matter was converted in a fission bomb. This is a surprisingly small amount of mass! I have never seen the process involved in a fusion device.
The most powerful hydrogen bomb ever exploded had a TNT equivalent of 50 Mt TNT, if I remember correctly. The TNT energy equivalent is 4.184 MJ/kg (i.e. 4184 MJ per tonne), so $E = mc^2$ gives a mass loss of about 2.3 kg, if my calculations are correct.
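The arithmetic can be checked in a few lines (a sketch; constants are rounded, and the 4.184 MJ figure is per kilogram of TNT):

```python
C_LIGHT = 2.998e8                 # speed of light, m/s
TNT_J_PER_KG = 4.184e6            # 1 kg of TNT releases about 4.184 MJ

yield_kg_tnt = 50e6 * 1000        # 50 megatons of TNT, in kilograms
energy_j = yield_kg_tnt * TNT_J_PER_KG     # ~2.1e17 J released
mass_converted = energy_j / C_LIGHT**2     # E = m c^2  =>  m = E / c^2

print(round(mass_converted, 2))   # about 2.33 kg
```

So even a 50 Mt device converts only a couple of kilograms of mass, consistent with the ~2 g figure for a much smaller fission bomb.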
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Quantum mechanics threshold First of all I beg your forgiveness as I am not a physicist and the question I am going to ask may sound silly. I am aware that beyond a certain threshold in the hierarchy of building blocks of matter (electrons, atoms, etc.) the 'standard' laws of physics (e.g. Newtonian physics) do not apply and we enter a totally different environment where so-called quantum mechanics applies. 1. Where is this threshold located in relation to types of particles? 2. Are there any other similar thresholds in physics indicating completely new environments? If yes, what are they (other than classical mechanics, quantum mechanics, ... maybe string theory)?
Newtonian physics is generally a good approximation in a problem as long as any significant differences in the action involved in the problem are much larger than Planck's constant (if not, quantum mechanics will be needed), the speeds involved in the problem are much less than the speed of light (if not, special relativity will be needed), and as long as the Schwarzschild radius of any gravitating object in the problem is much smaller than the object's radius (if not, general relativity will be needed). In addition, if a problem meets the criteria for needing both quantum mechanics and special relativity, then quantum field theory is needed. Quantum mechanics is generally adequate for analyzing the electrons within atoms, but quantum field theory is generally needed for any other kind of subatomic particles. The above are really just rules of thumb. For example, macroscopic quantum phenomena exist, in which quantum phenomena are apparent at a macroscopic scale.
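A rough numerical illustration of the first rule of thumb (a sketch; the numbers are order-of-magnitude estimates, not precise data):

```python
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s

# electron in a hydrogen atom: the characteristic action m*v*r is
# comparable to hbar, so quantum mechanics is unavoidable
m_e, v_e, r_e = 9.11e-31, 2.2e6, 5.3e-11
action_electron = m_e * v_e * r_e            # ~1.1e-34 J*s

# a dust grain drifting in air: the action dwarfs hbar, Newton is fine
m_g, v_g, L_g = 1e-9, 1e-3, 1e-6
action_grain = m_g * v_g * L_g               # 1e-18 J*s, ~1e16 * hbar

assert action_electron < 10 * hbar           # quantum regime
assert action_grain > 1e12 * hbar            # classical regime
assert v_e / c < 0.01                        # and non-relativistic throughout
```

The point is that the "threshold" is not a sharp line at a particular particle, but a comparison of scales: the same electron is quantum inside an atom and effectively classical in an old CRT television beam.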
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Work done against gravity The work done against gravity is $mgh$; well, at least that's what my textbook says. I have a question: I can apply a force, say 50 N, so the total work done = $mgh + mah$, where $ma$ is the extra applied force. But the truth is that irrespective of the force applied, the work done against gravity is always $mgh$. Why? For example, when I move an object with a larger force, the work done is more, so work depends on the force. But in the case of gravity it always depends only upon the weight.
If I take a mass $m$ and apply a force $F$ (greater than $mg$) to it for a distance $h$ upwards then I will do work of: $$ W = Fh \tag{1} $$ The force $F$ has to be greater than the force due to gravity, $mg$, or the object won't move upwards, so let's write the force I apply to the mass as: $$F = mg + F'$$ then equation (1) becomes: $$\begin{align} W &= (mg + F')h \\ &= mgh + F'h \end{align}$$ and the first term $mgh$ is work done against gravity while the second term is the work done to increase the velocity of the mass i.e. after the distance $h$ the velocity of the object will be given by: $$ \tfrac{1}{2}mv^2 = F'h $$ or: $$ v = \sqrt{\frac{2F'h}{m}} $$ So if you apply any force $F$ over a distance $h$ then subtract off the increase in the kinetic energy you'll be left with an amount of energy equal to $mgh$. That's why the work done against gravity is always $mgh$.
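A quick numeric check of this bookkeeping (a sketch with made-up numbers):

```python
import math

m, g, h = 2.0, 9.81, 3.0   # mass (kg), gravity, height lifted (m); invented
F = 50.0                   # constant upward force, N (must exceed m*g = 19.62 N)

F_prime = F - m * g                  # the part of F beyond gravity
v = math.sqrt(2 * F_prime * h / m)   # speed after rising through h

# F*h splits exactly into work against gravity plus kinetic energy gained
assert math.isclose(F * h, m * g * h + 0.5 * m * v**2)
print(m * g * h)   # ~58.9 J of the 150 J went into fighting gravity
```

Whatever $F$ you choose, subtracting the kinetic energy always leaves exactly $mgh$, which is the claim in the answer above.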
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
The purpose of auxiliary lens in microscope Can someone explain me the purpose of auxiliary lens in microscope. The specification says: Eyepiece: Extra wide field 10x Eyepiece w Spectacle Correction 30mm Ocular Objective 0.7-45x , Auxiliary 2x And the manufacturer claims that it is having magnification of 90X. Is it true ? Perhaps they are multiplying the power of Objective, Eyepiece and Auxiliary to get 90X. But is correct to say the magnification is 90 X BTW this is the microscope I am talking about and the specification is given at the end.
This is likely to be a standard Barlow Lens (see http://en.wikipedia.org/wiki/Barlow_lens) which, when attached to one of the normal eyepieces, decreases the focal length by half and thus increases the magnification by 2X. The previous answer describes a more complicated type of Barlow lens that turns any eyepiece into a zoom lens, which you may or may not have.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Only transverse photons are gauge-invariant (Peskin page 298) Seven lines down from the top of page 298 of P & S, it says "Single particle states containing one electron, one positron, or one transversely polarized photon are gauge-invariant, while states with timelike and longitudinal photon polarizations transform under gauge motions". Here is eqn (4.6): $\psi(x) \rightarrow e^{i\alpha(x)}\psi(x), A_{\mu} \rightarrow A_{\mu} - \frac{1}{e}\partial_{\mu}\alpha(x)$ I see that in a gauge transformation, the transformation of electrons and positrons is nothing more than a phase change and so these are manifestly gauge-invariant. However, for photons, $A_1$ and $A_2$ (the transverse photons) change in just the same way as $A_0$ and $A_3$ (the timelike and longitudinal photons). What's more, they all seem to be transformed, not gauge invariant. Probably I am looking at this in the wrong way. Can someone help me to see this in the proper light?
Just consider the gauge transformation after Fourier transforming everything. A Fourier transform turns derivatives into momenta, such that we get \begin{equation} \tilde A_\mu \rightarrow \tilde A_\mu - \frac1e k_\mu \tilde\alpha \;. \end{equation} This mean that only the component parallel to $k_\mu$ (the longitudinal one) will change, while the transversely polarized components (perpendicular to $k_\mu$) are left unchanged.
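A quick numerical check of this statement (a sketch; the momentum direction and numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.array([1.0, 0.0, 0.0, 1.0])   # photon momentum along z: (E, 0, 0, k_z)
A = rng.normal(size=4)               # some Fourier mode of the potential
alpha, e = 0.7, 1.0                  # gauge-function Fourier mode and coupling

A_new = A - (1.0 / e) * k * alpha    # the Fourier-space gauge transformation

# transverse (x, y) components untouched; timelike and longitudinal shift
assert np.allclose(A_new[1:3], A[1:3])
assert not np.allclose(A_new[[0, 3]], A[[0, 3]])
```

Since the shift is proportional to $k_\mu$, only the components along the momentum direction (and the timelike one) can ever change.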
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
$\rm Lux$ and $W/m^2$ relationship? I am reading a bit about solar energy, and for my own curiosity, I would really like to know the insolation on my balcony. That could tell me how much a solar panel could produce. Now, I don't have any equipment, but I do have a smartphone and an app called Light Meter, which tells me the luminous flux per area in the unit lux. Can I in some way calculate W/m2 from lux? E.g. from the current value of 6000 lux.
There is no simple conversion, it depends on the wavelength or color of the light. However, for the sun there is an approximate conversion of $0.0079 \, \text{W/m}^2$ per Lux. To plug in numbers as an example: if we read 75,000 Lux on a light sensor, we convert that reading to $\text{W/m}^2$ as follows: $$75,000 \times 0.0079 = 590 \, \text{W/m}^2 \, .$$ Source
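As a small helper (a sketch; the function name is mine, and the factor is only valid for the solar spectrum):

```python
LUX_TO_W_PER_M2 = 0.0079   # rough conversion factor, sunlight only

def sunlight_irradiance(lux):
    """Approximate solar irradiance in W/m^2 from a lux reading."""
    return lux * LUX_TO_W_PER_M2

print(sunlight_irradiance(75_000))   # the worked example above, ~592.5 W/m^2
print(sunlight_irradiance(6_000))    # the 6000 lux balcony reading, ~47 W/m^2
```

So the 6000 lux reading corresponds to roughly 47 W/m², far below full-sun conditions, which is worth knowing before sizing a panel.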
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
Dirac Notation Question Appearing In a Projection So I have a part of the energy eigenvalue equation that look like this: $$ \delta(\hat{x})|n\rangle $$ Where n is the energy basis of the Hamiltonian I'm considering. To deal with this, I tried projecting onto a complete set of position states and I obtained: $$ \int^{\infty}_{-\infty}|x\rangle\langle x|\delta(\hat{x})|n\rangle dx $$ Which becomes: $$ \int^{\infty}_{-\infty}\delta(x) \phi_n(x) |x\rangle dx $$ Where $\phi_n(x)$ is the position representation of the energy eigenstate. And this is where I am getting confused. Obviously with the delta function we should be able to collapse the integral, evaluating $\phi_n(x)$ at zero, but I am confused about what to do with the other ket, or if there is anything I can do with it. I would love to be able to move it outside of the integral, but I don't know if there is any justification in that.
As Alfred Centauri said, I don't know that there's much more to do here than to say $$ \int^{\infty}_{-\infty} dx \ \delta(x) \phi_n(x) |x\rangle = \phi_n(0) |x=0\rangle $$ You could instead write this as $$ \sum_m \phi_n(0) |m\rangle \langle m |x=0\rangle = \phi_n(0) \sum_m \phi^*_m(0) |m\rangle $$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What technology can result from such expensive experiments as undertaken in CERN? I wonder what technology can be obtained from such very expensive experiments/institutes as e.g. CERN? I understand that e.g. the discovery of the Higgs Boson confirms our understanding of matter. However, what can result from this effort? Are there examples in history where such experiments directly or indirectly led to correspondingly(!) important new technology? Or is the progress that comes from developing and building such machines greater than that from the actual experimental results?
Places like CERN are a huge forcing function for computer science - think high performance computing, networking, data storage, etc. If my memory is correct, Tim Berners-Lee was at CERN when he started developing the WWW...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49", "answer_count": 8, "answer_id": 3 }
Directions and magnitudes of static and kinetic friction between stacked blocks Say you have block 1 on top of block 2 and the whole system is accelerating toward the right at a certain acceleration. Due to inertia, block 1 'wants' to move backward to the left, so there has to be a force of static friction on it that acts to the right in order for it to stay in place. Let's say the maximum static force that block 2 can put on block 1 is equal to $\mu N=M_1a$. That is also the maximum acceleration of the system. If the acceleration is to be greater than this maximum: 1. would block 1 start moving to the left? 2. would the kinetic friction force still point in the same direction as the static friction force? 3. if 1 and 2 are true, is the kinetic friction force greater than the force of static friction?
1. No, block 1 would move to the right, but with smaller acceleration than block 2. You could say it would move to the left relative to block 2. 2. Yes it would, since block 1 still 'wants' to move to the left relative to block 2 (the 'relative' part is very important). 3. No, it would be smaller. Block 1 would now move with an acceleration that is less than the mentioned maximum, because the frictional force (which was giving block 1 its acceleration) has now decreased. Kinetic friction is always smaller than static friction. (If anyone knows any examples of the opposite, please comment.)
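A small numeric sketch of points 1 and 3 (the coefficients and acceleration are invented; note the mass of block 1 cancels, since $a_{\max} = \mu N / M_1 = \mu g$):

```python
g = 9.81
mu_s, mu_k = 0.4, 0.3        # assumed static and kinetic coefficients
a_system = 5.0               # acceleration of block 2, m/s^2

a_max = mu_s * g             # largest acceleration friction can give block 1
if a_system <= a_max:
    a_block1 = a_system      # blocks move together
else:
    a_block1 = mu_k * g      # block 1 slips; only kinetic friction drives it

# here 5.0 > 0.4 * 9.81 = 3.924, so block 1 slips but still moves right
assert a_system > a_max
assert 0 < a_block1 < a_system   # forward, just slower than block 2
```

Block 1 never accelerates leftward in the ground frame; it only falls behind block 2.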
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Tensor product notation convention? For a two-particle state, the Dirac ket is written as $$\lvert\textbf{r}_1\rangle \otimes \lvert\textbf{r}_2 \rangle. $$ Then how do we write its bra vector, $$\langle\textbf{r}_1\rvert \otimes \langle\textbf{r}_2\rvert ~~\text{or}~~\langle\textbf{r}_2\rvert \otimes \langle\textbf{r}_1\rvert ~~\text{?} $$ Is there any rule or convention? I'm just asking about the order of the bra vector.
The way I imagine it is that the left side of the tensor product is exclusively reserved for Hilbert space 1 and the right side is for Hilbert space 2, so that the total Hilbert space you are working in is written as: $$ H=H_1⊗H_2 $$ And so when you have a wavefunction in H you write: $$|\psi\rangle = |r_1\rangle \otimes |r_2 \rangle $$ And then: $$|\psi\rangle^\dagger = (|r_1\rangle \otimes |r_2 \rangle)^\dagger = |r_1\rangle^\dagger \otimes |r_2 \rangle^\dagger $$ $$\langle\psi| = \langle r_1| \otimes \langle r_2| $$ Hence there is no way $r_1$ and $r_2$ can swap places.
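A quick finite-dimensional check with NumPy (a sketch using 3-component column vectors and the Kronecker product for $\otimes$):

```python
import numpy as np

rng = np.random.default_rng(1)
r1 = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))   # |r1>
r2 = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))   # |r2>

ket = np.kron(r1, r2)      # |r1> (x) |r2> as a 9-component column
bra = ket.conj().T         # its dual (row) vector

# the dagger acts factor-wise and does NOT swap the tensor factors
assert np.allclose(bra, np.kron(r1.conj().T, r2.conj().T))
# swapping the factors gives a genuinely different row vector
assert not np.allclose(bra, np.kron(r2.conj().T, r1.conj().T))
```

This works because the transpose of a Kronecker product is the Kronecker product of the transposes, in the same order.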
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Does a pulled rubber band contain as much energy as a twisted rubber band? Let's say I take two similar rubber bands. One of them I pull until it almost reaches its breaking point. The other I twist until it almost reaches its breaking point. Do both of these rubber bands contain (roughly) the same amount of energy? It seems that the twisted band would contain more, but I am having a hard time justifying this.
Yes, I believe they do possess the same amounts of elastic potential energy. By stretching both rubber bands to breaking points, this means that both are stretched for equal distance, only that one loops around itself when twisted, while the other gets stretched far apart. In the end, they will possess the same amounts of elastic potential energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Relationship between two viewpoints on Thermodynamics I've always seen the standard viewpoint on Thermodynamics that it is all about studying phenomena related to a property of systems called temperature. Then we have the zeroth law, which gives us a means to measure temperature, and so on. Every basic Physics course takes this viewpoint when discussing Thermodynamics, and I believe that historically this was indeed the first viewpoint. Now, studying Thermodynamics in more in-depth books, there's another viewpoint: that Thermodynamics is concerned with the study of equilibrium states of macroscopic matter. That is, we have some macroscopic piece of matter and we are interested in some states of equilibrium of that piece of matter. It, however, doesn't seem obvious to me what the connection between these two viewpoints is. Of course one could say: "one of the properties of macroscopic matter in those equilibrium states is temperature", but again, it's not clear at first what the connection between this and the first viewpoint is. It's not really clear what temperature has to do with those equilibrium states. So what's the relationship between those two viewpoints on Thermodynamics?
Regarding the first viewpoint, it has also been long emphasized that thermodynamics is about studying phenomena related to a property of equilibrium systems called temperature. The equilibrium is always important in all thermodynamics --- although you can make up some measure of temperature for a nonequilibrium system, none of the familiar thermodynamic formulas will apply and so you don't really have thermodynamic behaviour anymore. The second viewpoint is not really thermodynamics per se. Rather this is the point of view of statistical mechanics, i.e., the rational justification of thermodynamics from mechanical laws. Statistical mechanics tells us that standard thermodynamics is most plainly evident in the macroscopic limit. Of course statistical mechanics also goes beyond the macroscopic limit and in a way lets us "extend" thermodynamics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
"Magic" Speed to Drive Over a Speed Bump: Myth or Reality? A somewhat controversial aspect of speed bumps (sleeping policemen for those in the UK) is that they arguably cause more loss of life than they prevent by acting as a hindrance to emergency response vehicles. In thinking of ways to minimize this adverse impact, I vaguely remembered hearing the claim that if a car travels fast enough, there is a critical velocity beyond which the vehicle would just glide over the bump as if there were no obstruction at all. This sounds too outlandish to be true, but then again Mother Nature has surprised me before. I'm obviously not promoting (or condoning) unsafe driving practices, but is there any experimental or theoretical merit to the claim that the impulse of a speed bump can be negated by merely driving fast enough across it?
Yes, it's true. I had a 1990 Toyota Camry and my friend had a 1987 Honda Accord. If we drove 30 mph over the 15 mph speed bumps, the car would glide right over, and the same happened when we hit 50 mph over the many 25 mph speed bumps on Ramblewood Drive in Columbus, Ohio, where I lived from 1986/87 until 2002. I'm not sure if we had to exactly double the speed or if we could go even higher and still glide over. If we went only 35 mph over the 25 mph bumps, it was really bumpy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Does air conduct heat better than saturated steam? This engineering toolbox table shows the thermal conductivity of steam as 0.016. I understand that water is better at conducting heat than air, but if I read this correctly, steam is worse at conducting heat than water? Would there be any difference for steam in a vacuum? Or would it also be 0.016?
If you had a vacuum inside a closed container, the water inside (or some of the water) would go to vapor directly, without any heat source, until some equilibrium is reached.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
DC Generator with magnet as rotor DC generators convert the AC current induced in them to DC using split-ring commutators, right? And the graph of the current will be like this. But the question is: how would the graph look if the magnet is the rotor and not the armature? Some of my friends and I are of the opinion that the graph will be like this, while a few of my friends say that it will be like the usual one. I would like to know how it would be.
Yes, that is correct. The rotation of the armature commutes the connection to the circuit, so current in the circuit flows in the same direction regardless of the direction in the armature. If the armature is fixed, it will only couple to the circuit in a unique way, and the circuit will experience the current in the same sense as the armature, which is the way you are showing below.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Advanced kinematics problem regarding $F = ma$ I really need help with this problem. Thank you for your time. Question: Look at the figure below. Derive the formula for the magnitude of the force $\vec{F}$ exerted on the large block ($m_C$) in the figure (Figure 1) such that the mass $m_A$ does not move relative to $m_C$. Ignore all friction. Assume $m_B$ does not make contact with $m_C$. Approach: I drew force diagrams to analyze the problem. I found out that $F$ applied to $m_C$ must equal the tension exerted on $m_A$. What makes the problem difficult is that the tension is tilted, which means that it is not simply $9.81 \cdot m_B$.
"I found out that F applied to mC must equal to the tension exerted on mA" is wrong. Instead: $$F=(m_a+m_b+m_c)\,a$$ $$T=m_a a$$ $$T_x=m_b a, \qquad T_y=m_b g$$ $$T^2=T_x^2+T_y^2$$ Replace and get $$F=\frac{(m_a+m_b+m_c)\,m_b g}{\sqrt{m_a^2-m_b^2}}$$
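Plugging numbers in (a sketch; the masses are invented, and the closed form requires $m_a > m_b$ for the square root to be real):

```python
import math

m_a, m_b, m_c = 2.0, 1.0, 5.0   # kg, illustrative values with m_a > m_b
g = 9.81

a = m_b * g / math.sqrt(m_a**2 - m_b**2)   # common acceleration of the stack
F = (m_a + m_b + m_c) * a                  # force on the large block

# agrees with the closed form quoted above
assert math.isclose(F, (m_a + m_b + m_c) * m_b * g / math.sqrt(m_a**2 - m_b**2))
print(round(F, 2))   # about 45.31 N for these masses
```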
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does Energy-Momentum have a special case? I was reading about energy-momentum, and I came across this simplified equation: $$E^2 = (mc^2)^2 + (pc)^2$$ where $m$ is the mass and $p$ is the momentum of the object. That said, the equation is pretty fundamental and nothing is wrong when looked upon, and I similarly believed this, but then I came across a "special" case where it does not apply: * If the body's speed $v$ is much less than $c$, then the equation reduces to $E = (mv^2/2) + mc^2$. I find this really crazy, because Einstein always wanted to create a theory/equation that applies to every aspect of physics and has no "fudge" factors, so there is some irony here. Next, why does this not work in every aspect? Surely an equation should be "universal" and should still work with any values given. Most importantly, why does this not work if the velocity is "much" slower than light? What do they mean by "much slower"; what is the boundary for "much slower"? Regards,
First, your findings are correct and were found by Einstein. However, the fact that a new term appears only shows that our previous knowledge was incomplete. The new term expresses the energy related to the rest mass of the body, which was not considered before. And it makes sense, because previously, when writing down the energy of a body, only those terms were included that related to forms of energy which we knew could be interchanged. Since mass was not known to interchange with energy, the law of mass conservation was known but kept separate. In conclusion, the new term is not incompatible with the reproducibility of previous physics; it is part of the new knowledge brought in. As for what is understood by "much slower", it depends on the sensitivity of your experiment or phenomena. But a good rule of thumb is: relativity matters when the kinetic energy is of the order of the rest mass energy. Remember that the total energy of a particle relates to the rest mass $m_0$ as $$E = \frac{m_0c^2}{\sqrt{1-\frac{v^2}{c^2}}}$$ Expanding for $\frac{v}{c} \ll 1$ you get $$E = m_0c^2 + m_0\frac{v^2}{2} + \frac{3}{8}m_0 \frac{v^4}{c^2} + ...$$ so you should account for relativity at least when the 3rd term is important for you.
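The expansion can be verified symbolically (a sketch using SymPy's series expansion of the exact expression):

```python
import sympy as sp

v, c, m0 = sp.symbols('v c m_0', positive=True)
E = m0 * c**2 / sp.sqrt(1 - v**2 / c**2)   # total relativistic energy

# expand in powers of v around 0, i.e. for v << c
expansion = sp.series(E, v, 0, 6).removeO()
print(expansion)   # m0*c^2 + m0*v^2/2 + 3*m0*v^4/(8*c^2)

assert sp.simplify(
    expansion - (m0 * c**2 + m0 * v**2 / 2 + sp.Rational(3, 8) * m0 * v**4 / c**2)
) == 0
```

The first term is the rest energy, the second is the Newtonian kinetic energy, and the third is the leading relativistic correction; keeping only the first two gives exactly the "special case" quoted in the question.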
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 10, "answer_id": 2 }
How can I calculate the speed of an object knowing its horizontal and vertical velocity components? Let's say a ball is thrown and it experiences typical projectile motion (moves in a parabolic arc etc.) and the only information we know is the equations for the horizontal and vertical components of its velocity for its entire path. From the given information, how does one calculate the total/actual speed of the ball relative to the direction it is travelling in at any given point (ignoring drag)? As an example (horizontal and vertical components of velocity respectively): $$V_x = 30$$ $$V_y = 20 - 9.81t$$ Is it simply a matter of using Pythagoras' theorem and taking the magnitude? $$ V=\sqrt{(V_x)^2 + (V_y)^2} $$
Yes, of course. Velocity is just a vector, and its norm is $\sqrt{V \cdot V}$, i.e., $\sqrt{\sum_i V_i ^2}$
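For instance (a sketch using the example components from the question):

```python
import math

def speed(t):
    """Speed at time t from the velocity components in the question."""
    vx = 30.0
    vy = 20.0 - 9.81 * t
    return math.hypot(vx, vy)   # sqrt(vx**2 + vy**2)

print(speed(0.0))          # launch: sqrt(30^2 + 20^2) ~ 36.06 m/s
print(speed(20.0 / 9.81))  # apex (vy = 0): the speed is just vx = 30 m/s
```

The speed is smallest at the apex of the arc, where the vertical component vanishes and only $V_x$ remains.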
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to carry out the perturbation expansion of an anharmonic oscillator to high orders? I think this is a standard problem in quantum mechanics. Consider the anharmonic oscillator $E \psi = \left(- \frac{1}{2} \frac{\partial^2}{\partial^2 x } + \frac{1}{2}x^2 + \epsilon x^4 \right) \psi$. Formally, the ground state energy has the asymptotic expansion $E(\epsilon) \sim \sum_{n=0}^\infty a_n \epsilon^n$. How to calculate the coefficients $a_n$ to high orders, say, for $n= 20$?
As mentioned in the comments by Bubble, this is answered in Ground State Energy Calculations for the Quartic Anharmonic Oscillator, Robert Smith. Notes for Math 4901, University of Minnesota, Morris (2013), but as the document is not crawlable by the Wayback Machine I'll summarize it here. Smith considers hamiltonians of the form $$ H=-\frac12 \frac{\mathrm d^2}{\mathrm dx^2}+\frac12 x^2+\lambda x^{2K} $$ for $K=2,3,4,\ldots$, and the resulting spectrum is of the form $$ E(\lambda)=\sum_{i=0}^\infty E_i \lambda^i, $$ where the coefficients are given as $$ E_i=\frac1iX_{K,i-1}, $$ for $i\geq1$, in terms of the variables $X_{j,i}$ which satisfy the recurrence relations $$ X_{j,i}=\frac1{2j}\left\{ \frac{j-1}2 \left[4(j-1)^2-1\right]X_{j-2,i}+2(2j-1)\sum_{m=0}^iE_mX_{j-1,i-m}-2(2j+K-1)X_{j+K-1,i-1} \right\} $$ and $$ X_{j,0}=\frac{2j-1}j E_0 X_{j-1,0}+\frac{j-1}{4j}(4j^2-8j+3)X_{j-2,0} $$ with initial conditions $X_{0,i}=\delta_{0,i}$. The first few terms of this expansion are $$ E(\lambda)=\frac12+\frac34\lambda-\frac{21}8\lambda^2+\frac{333}{16}\lambda^3-\frac{30885}{128}\lambda^4+\cdots. $$ This result is obtained via the method of Swenson and Danforth, which Smith explains in detail in Appendix A, with a further reference to the method in Fernandez, Francisco. Introduction to Perturbation Theory in Quantum Mechanics. New York, CRC Press, 2001. (Personally, though, I tend to think that if you're getting this intense about an $x^4$ anharmonicity, then you should also be worrying about terms in $x^6$ - and that's in an ideal case with no symmetry-breaking odd terms. But that's just me.)
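To make this concrete, here is a short implementation of the recurrence in exact rational arithmetic (a sketch; the function name and bookkeeping are mine, and I take $E_0 = 1/2$, the harmonic ground state, as the starting point). It reproduces the coefficients listed above and goes to high order without rounding error:

```python
from fractions import Fraction

def anharmonic_energy_coeffs(K, N):
    """Coefficients E_0..E_N of E(lambda) for the ground state of
    H = -1/2 d^2/dx^2 + 1/2 x^2 + lambda x^(2K),
    via the Swenson-Danforth recurrence quoted above."""
    E = [Fraction(1, 2)]                  # E_0 = 1/2, harmonic ground state
    X = {}                                # X[(j, i)] = X_{j,i}
    for i in range(N):
        X[(0, i)] = Fraction(int(i == 0))     # X_{0,i} = delta_{0,i}
        # to reach E_N we need X_{j,i} up to j = K + (N-1-i)(K-1) at order i,
        # since each order pulls in X_{j+K-1} from the order below
        for j in range(1, K + (N - 1 - i) * (K - 1) + 1):
            term = Fraction(j - 1, 2) * (4 * (j - 1) ** 2 - 1) \
                * X.get((j - 2, i), Fraction(0))
            term += 2 * (2 * j - 1) * sum(
                E[m] * X.get((j - 1, i - m), Fraction(0)) for m in range(i + 1))
            if i >= 1:
                term -= 2 * (2 * j + K - 1) * X.get((j + K - 1, i - 1), Fraction(0))
            X[(j, i)] = term / (2 * j)
        E.append(X[(K, i)] / (i + 1))     # E_{i+1} = X_{K,i} / (i+1)
    return E

print(anharmonic_energy_coeffs(2, 4))
# [Fraction(1, 2), Fraction(3, 4), Fraction(-21, 8), Fraction(333, 16),
#  Fraction(-30885, 128)]
```

For the $a_{20}$ asked about in the question, `anharmonic_energy_coeffs(2, 20)` runs essentially instantly, since the cost is only polynomial in the order.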
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Loads in decelerating a falling object I remove trees in confined spaces, setting up rigging systems, and I am keen to look at the science behind it in more depth. So far the formulas that I have found don't take into account the ongoing effect of gravity once the object or log begins decelerating. So far: if a $300\:kg$ log falls $1\:m$ and is then slowed down (evenly) over $2\:m$ to a stop, how much force is required (or exerted on the elements within the system)? $$KE = \frac12mv^2 = Fd$$ where $d$ is the stopping distance, or $$F=\frac{\frac12mv^2}{d}$$ $$F=\frac{\frac12\cdot300\cdot(1\cdot9.806)^2}{2}=7212\:N$$ is the stopping force required to balance the force of the log. As this force is on both sides of the pulley it is doubled (I know widening the angle at the pulley can reduce this). I'm after confirmation that this is the correct formula for this scenario. Scratch that; I know this formula is wrong for the scenario, because if the log's velocity is zero there is still force on the system, but not according to the formula???
It may help to decompose the force required into two parts. First, notice that if you provide mg, or 300 x 9.8 newtons in opposition, the log will not accelerate. If stationary it will remain so, and if moving at some velocity it will continue to move at that velocity. Now look at the 1 meter fall. This will provide a velocity, although your calculation of it is incorrect (v is 9.8 m/sec if the log falls for 1 second, not if it falls for 1 meter). Actually, your force calculation doesn't need to deal with velocity at all. In the absence of any extra forces such as gravity (which you dealt with in the first step), an energy balance suffices: $$mgh = Fd$$ (the potential energy released equals the work absorbed) and $$F=\frac{mgh}{d} = \frac{300 \times 9.8 \times 1}{2} = 1470 N$$ So the force required will be$$F_{total}= F + mg = 1470 + (300\times9.8) = 4410 N$$ This, of course, is a wildly optimistic figure, since it essentially assumes that you can produce just the right force so that deceleration is constant over the 2 meters of controlled fall. Things like the elasticity of the rope and the difficulty of gauging just how much force is being applied will cause peak forces to be considerably higher.
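The same total drops out of a one-line energy balance over the whole 3 m of travel (numbers from the question):

```python
# Energy balance for the rigging example above: all potential energy released
# over the free fall plus the controlled descent must be absorbed by the
# braking force acting over the stop distance.
m, g = 300.0, 9.8          # kg, m/s^2
h_free, d_stop = 1.0, 2.0  # m of free fall, m of controlled deceleration

F_total = m * g * (h_free + d_stop) / d_stop
# F_total = 4410 N, matching F + mg = 1470 + 2940 above
```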
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
What will happen in principle if one tries to push a neutron star? When we push an object it moves due to the mutual repulsion between the electrons in our hand and the electrons in that object. Since a neutron star contains only neutrons, what will happen in principle if one tries to push it?
Repulsion happens with any fermions, not just electrons. If neutrons did not repel each other we wouldn't have neutron stars; they would all collapse into black holes instead. So the neutrons in our atoms would still repel a neutron star. However, what would really happen is this: a neutron star's gravity is so intense that protons and electrons are forced to merge into neutrons, and no atom can remain an atom there. So anything that tries to touch a neutron star would be sucked in by gravity, collapse into a lump of neutrons, and feed its mass into the star. And if the star collects enough mass, it will collapse into a black hole.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is cesium used to measure time in atomic clocks? Seconds are measured by the frequency emission of cesium. Why is a frequency from the emission spectrum of cesium used as the standard in defining a second? Why particularly cesium?
Rather than write something unintelligible, I'll quote from a page on cesium clocks. According to quantum theory, atoms can only exist in certain discrete ("quantized") energy states depending on what orbits about their nuclei are occupied by their electrons. Different transitions are possible; those in question refer to a change in the electron and nuclear spin ("hyperfine") energy level of the lowest set of orbits called the "ground state." Cesium is the best choice of atom for such a measurement because all of its 55 electrons but the outermost are confined to orbits in stable shells of electromagnetic force. Thus, the outermost electron is not disturbed much by the others. The point is that we want an extremely spectrally "pure" source so there's minimal uncertainty about the wavelength.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Natural frequency of spring-mass system You can find videos and articles on the internet about resonance. For this they take, for example, a system of two springs with a mass in between, and set the driving-force frequency close to the natural frequency. But how do they know the natural frequency of the system?
You can find the natural frequency of such a system theoretically, if you know the stiffness of the springs. Refer to this example on Wolfram. How do you find the stiffness of the springs? Well... do a simple oscillation experiment: hang a known test mass from the spring, find the natural frequency from the period of oscillation, and reverse-calculate the stiffness since the mass is known. That was for the 2-spring-1-mass system. For general vibrating systems, one has to again do corresponding experiments (which may involve high-frequency oscillations, making observations difficult) to find the natural frequencies.
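Once the stiffnesses are known, the calculation itself is one line. A sketch with hypothetical values: for a mass between two springs, the effective stiffness is the sum $k_1 + k_2$, so the natural frequency is $\sqrt{(k_1+k_2)/m}$.

```python
import math

# Hypothetical measured values for the 2-spring-1-mass system:
k1, k2 = 100.0, 150.0              # N/m, spring stiffnesses
m = 0.5                            # kg, mass between the springs

omega = math.sqrt((k1 + k2) / m)   # rad/s; the two stiffnesses add
f_nat = omega / (2 * math.pi)      # natural frequency in Hz
```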
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The effect of windspeed on a car I've worked problems in the past in trig class concerning the effect of wind on the speed of a plane and it's flight path and was wondering if a similar thing occurs with a car. First off, I'm pretty sure that if the speedometer reads 60 mph, even if the wind is blowing 15 mph in the same direction you will still have only traveled 60 miles at the end of an hour. My question is whether the car is traveling faster due to aid of the wind with respect to amount of work the engine has to perform and the gas consumed. Is it correct to believe that the car is traveling 60 mph off of 45 mph effort, or, does it not work that way?
Yes indeed. The wind does do some work in increasing the speed of the car. But in everyday situations, the speed of the wind - and therefore the force it exerts - is usually too small to do much work on the car. To get an idea of how much work the wind does while the car moves, look at how far the car moves due to the wind alone after you stop the car: it hardly moves at all. The work done by the wind while the car is moving is even smaller, as the relative velocity of the wind with respect to the car becomes smaller. Thus the car essentially moves only due to the work done by the engine. In the case of a wind blowing opposite to the direction of the car, at high car velocities the wind applies considerable force due to the higher relative velocity, so you must take into account the work done by the wind in that case.
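To put rough numbers on the relative-velocity dependence (a sketch with assumed car parameters): aerodynamic drag depends on the speed relative to the air, so a tailwind cuts the drag component of the engine's effort, while rolling resistance and drivetrain losses are unaffected.

```python
# Assumed, car-like parameters - only the drag term is being illustrated.
rho, Cd, A = 1.2, 0.3, 2.2      # air density (kg/m^3), drag coeff., frontal area (m^2)
mph = 0.44704                   # m/s per mph
v, w = 60 * mph, 15 * mph       # road speed, tailwind speed

drag_still = 0.5 * rho * Cd * A * v**2
drag_tail = 0.5 * rho * Cd * A * (v - w)**2
reduction = 1 - drag_tail / drag_still   # = 1 - (45/60)^2 ~ 0.44
```

So the drag portion of the load falls by roughly 44% with a 15 mph tailwind at 60 mph, though the total fuel saving is smaller because drag is only part of the total resistance.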
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Will 5 pizzas in the same Hot Bag stay warmer than 5 pizzas in 5 separate Hot Bags? For example, say I am delivering 5 pepperoni pizzas to 5 different addresses. In one scenario, I Keep all 5 in the same insulated Hot Bag, I carry that bag to the door, and I quickly remove one of the pizzas from the bag to give to the customer. In the other scenario, I use a separate Hot Bag for each pizza. This would mean that only one bag would need to be opened while the other 4 bags could stay closed. Which method would keep the pizzas warmer?
My guess is it's best to put each pizza into its own insulator, but then stack them for transport so that the stack has the same size and surface area a larger insulating container would have. Stacking the boxes in transport minimizes heat loss since it's proportional to exposed outside area. Opening a container would let significant heat out, so that should be avoided. Even better would be to make some insulated structure that you put the pizzas in during transport, each in their own insulated container.
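The surface-area argument can be made concrete with assumed box dimensions: heat loss scales with exposed area, and stacking cuts that area by roughly a factor of three here.

```python
# Assumed pizza-box dimensions, just to compare exposed areas.
L, h = 0.45, 0.05   # m: box footprint and height

def area(l, w, ht):
    """Total outside surface area of a rectangular box."""
    return 2 * (l * w + l * ht + w * ht)

separate = 5 * area(L, L, h)    # five boxes, each losing heat on all six faces
stacked = area(L, L, 5 * h)     # one five-high stack
ratio = separate / stacked      # ~2.9x more exposed area when kept separate
```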
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
When is the displacement current equal to conduction current in case of a parallel plate capacitor being charged? I came across a text - "Whenever a conduction current is oscillating in time, the displacement current is equal to the conduction current in case of parallel plate capacitor." I am not sure what "oscillating in time" means here! Below is the question that triggered this (I am trying to discuss part (b) of the question).
Assuming that we have a uniform electric field between the plates of a capacitor, we can find its intensity to be (using Gauss' Law) $E(t)=\frac{\sigma(t)}{\epsilon_0}=\frac{q(t)}{\epsilon_0 A}$, where $q(t)$ is the absolute value of the charge on each plate, and which is not constant in time since the capacitor is being charged. Now, considering a circular surface between the plates, with same radius R and the same axis as the plates, the electric flux will be $$\Phi_E(t)=E(t) S=\frac{q(t)}{\epsilon_0 A}\pi R^2=\frac{q(t)}{\epsilon_0}.$$ So, the displacement current is $$I_D=\epsilon_0\frac{\partial\Phi_E}{\partial t}=\frac{dq(t)}{dt}=I(t),$$ hence, even if the current varies in time, the displacement current will be equal to it.
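A concrete way to see this equality during charging (a sketch with assumed RC values, not part of the original question): compute the conduction current from the circuit and the displacement current as the numerical time-derivative of the plate charge; they track each other at every instant.

```python
import math

# Assumed RC charging circuit values:
C, V, R = 1e-6, 10.0, 1e3        # farads, volts, ohms
tau = R * C

def q(t):
    """Charge on the plates while the capacitor charges."""
    return C * V * (1 - math.exp(-t / tau))

t = 0.5e-3
I_conduction = (V / R) * math.exp(-t / tau)       # current in the wire
h = 1e-9
I_displacement = (q(t + h) - q(t - h)) / (2 * h)  # eps0 dPhi_E/dt = dq/dt
# the two agree to numerical-derivative accuracy
```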
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does Earth's atmosphere behave like an ideal gas? In every book I found this sentence stated as an assumption, without explanation. Can somebody help me understand it better?
The Earth's atmosphere is mostly nitrogen and oxygen, both of whose behaviors are very close to ideal at the temperatures and pressures found in the atmosphere. Nitrogen, the dominant gas in the atmosphere, comes particularly close to exhibiting ideal behavior. Gaseous oxygen exhibits about a 3% departure at 20 atmospheres at standard temperature, with departures from ideal reducing more or less linearly at reduced pressures. There is one component whose behavior is markedly non-ideal, and that is H2O. Water vapor is a trace gas, at most a few percent of the atmosphere (and that extreme occurs only in very humid tropical regions). Unless you are studying cloud physics, you can pretty much ignore the departures from the ideal gas approximation. The errors that result from assuming ideal behavior are generally less than one percent. Non-ideal behavior becomes important if you want to model the atmospheres of Venus or the gas giants because of the higher pressures. For example, the CO2 that is by far the dominant component of Venus's atmosphere is a supercritical fluid at the surface of Venus. Departures from ideal behavior can be quite extreme in this regime.
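To put a number on "very close to ideal" (a rough sketch; the van der Waals constants for N2 here are assumed textbook values): the compressibility factor $Z = PV/RT$ equals 1 for an ideal gas, and near surface conditions nitrogen's $Z$ sits well within the quoted one-percent band.

```python
# Van der Waals estimate of nitrogen's departure from ideality.
R = 0.08314           # L*bar/(mol*K)
a, b = 1.370, 0.0387  # assumed vdW constants for N2 (L^2*bar/mol^2, L/mol)
T, P = 288.0, 1.0     # roughly sea-level conditions

V = R * T / P                       # start from the ideal-gas molar volume
for _ in range(50):                 # fixed-point iteration of (P + a/V^2)(V - b) = RT
    V = b + R * T / (P + a / V**2)

Z = P * V / (R * T)                 # ~0.999: a departure well under 1%
```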
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
When referring to weights and mass of weights in a physics laboratory, do we use the term mass or weights? What terminology is used in a physics report to refer to the weights we use in the laboratory - weights, masses, or the mass of the weights? My question is mainly about the weights that we use in the physics laboratory to hang on springs/rulers: do we refer to them as masses or weights?
If you care about the inertia you use "mass". When you are considering the force of gravity you use "weight". So when you do calculations about the force in the cables due to acceleration of an elevator car you need to know both its mass and its weight... The (calibrated) object you place on a scale is called a "weight" - because that is the property you most care about. But a calibrated weight is never calibrated in newtons - it is calibrated in grams. This is because weight is a function of position on earth - mass is not. So when you want to determine the mass of an unknown object you can use scales to compare its weight with that of a known object. When you know the ratio of weights you know the ratio of masses. Thus it is OK to calibrate a scale in grams and indicate mass in grams - as long as you understand that you had to calibrate locally using a known mass to get the conversion factor. This is one reason why even cheap 5-digit digital balances (you can get them at Amazon for about $30) ship with calibration weights - to get that 0.01% accuracy you need a local reference calibration. There is about a 0.7% change in force of gravity from the North Pole to the Nevado Huascarán mountain in Peru (you are heavier at the North Pole) - or about 0.01% for every degree of latitude (on average). In most other contexts when you talk about a physical object you call it a "mass".
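To see where that ~0.7% comes from (approximate local gravity values; the numbers here are assumptions, not measurements from the answer):

```python
# Approximate local gravitational accelerations:
g_pole = 9.8322       # m/s^2, North Pole
g_huascaran = 9.7639  # m/s^2, summit of Nevado Huascarán (near Earth's lowest)

pct = 100 * (g_pole - g_huascaran) / g_huascaran
# pct ~ 0.7: a spring scale calibrated at one site reads ~0.7% off at the
# other, while a balance comparing against a known mass is unaffected.
```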
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the difference between diffraction and interference of light? I know these two phenomena but I want to know a little deep explanation. What type of fringes are obtained in these phenomena?
Feynman has come from heaven to answer your question! Listen to him: No one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them. The best we can do, roughly speaking, is to say that when there are only a few sources, say two interfering sources, then the result is usually called interference, but if there is a large number of them, it seems that the word diffraction is more often used.$_1$ To be more explicit, read this passage from Ajoy Ghatak: We should point out that there is not much of a difference between the phenomena of interference and diffraction; indeed, interference corresponds to the situation when we consider the superposition of waves coming out from a number of point sources, and diffraction corresponds to the situation when we consider waves coming out from an area source like a circular or rectangular aperture, or even a large number of rectangular apertures (like the diffraction grating).$_2$ Credits: $_1$ Feynman Lectures on Physics $_2$ Optics - Ajoy Ghatak.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 7, "answer_id": 1 }
Will an un-scattered photon go to the edge of the universe? Will an unhindered (un-scattered) photon go to the edge of the universe?
Will an unhindered (un-scattered) photon go to the edge of the universe? To add to the answer of @RedAct : If you are thinking of the three dimensional part of the universe we observe now, it is an instantaneous universe, i.e. time t has one value. In this sense we are at the edge of the expanding universe, the right cutoff at 13.8 billion years in the image below. Unscattered photons from the beginning of the photon separation at 380000 years from the Big Bang give us the cosmic microwave background radiation, and have reached the edge of the observable universe as defined above. For a complete answer see the answer of RedAct.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why is cooling much harder than heating? I'm trying to invent a distillation apparatus that runs solely on electricity. Suddenly, I realized that cooling things is really hard, while heating them up is so easy. Actually, it seems that there are just three ways to cool something down: * *Peltier modules (very inefficient) *Compressing and expanding gases (hard to make at home, the device is too big) *Some rare endothermic processes, such as dissolving $KNO_3$ in water My question, however, is not how to solve my issue. I want to know why the cooling options are so limited, and why they are so expensive and tricky. For heating, the options are much easier: * *Current flow (just pick a wire and a battery) *Rubbing things *Burning/dissolving acids in water and other chemistry (if you're lucky, you get so much heat you won't need any more in your life) *Absorbing electromagnetic waves I, for one, blame the second law of thermodynamics.
It is because heating and cooling happen because of different physics. Consider the following equation: $\partial T/\partial t = c \nabla^2 T + H(x,t,T)$ where $H(x,t,T)$ is the external heating - the source term. Cooling is a diffusive process, but heating can be nonlinear. In many cases $H \gg c \nabla^2 T$, and therefore heating is faster. This equation also tells us that what you are implying may not be true in all cases: if $H \ll c \nabla^2 T$, you will have faster cooling. So it depends on the parameter regime.
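A toy simulation of the equation above makes the asymmetry visible (my own sketch, with made-up parameters): a localized source heats its surroundings quickly, but once the source is switched off, the temperature can only relax diffusively, which is much slower.

```python
import numpy as np

# Explicit finite differences for dT/dt = c * d2T/dx2 + H, ends held at T = 0.
nx, c, dx, dt = 50, 1.0, 1.0, 0.2    # dt <= dx^2/(2c) for stability
T = np.zeros(nx)
H = np.zeros(nx)
H[nx // 2] = 5.0                     # heater at the middle node

def step(T, H):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    return T + dt * (c * lap + H)

for _ in range(200):                 # heating phase: source dominates
    T = step(T, H)
peak = T.max()

H[:] = 0.0
for _ in range(200):                 # cooling phase: diffusion only
    T = step(T, H)
# T.max() is still well above zero: cooling lags far behind heating
```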
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 5, "answer_id": 4 }
$ε_0$ affects electric field intensity, but $μ_0$ doesn't affect magnetic field intensity? I'll be honest: this question is actually a homework problem. I've spent the past hour going through Google and several textbooks trying to answer the question "Why does $ϵ_0$ affect electric field intensity but $μ_0$ does not affect the magnetic field intensity?" I don't understand much about electromagnetism, but I haven't found an explanation for this. From what I've seen, $μ_0$ seems to be inherently related to the magnetic field strength in the same way that $ϵ_0$ is related to the electric field strength. Can anyone help me by providing a very general answer or by pointing me to a good resource? I don't want or expect anyone to do my homework for me, but a nudge in the right direction would be much appreciated!
What's probably happening here is the following: The fundamental or microscopic fields $\mathbf{E}$ and $\mathbf{B}$ are technically called the electric field strength and the magnetic induction, while $\mathbf{D}$ and $\mathbf{H}$, their macroscopic counterparts, are called the electric displacement and the magnetic field, a quite weird nomenclature, since you would think $\mathbf{E}$ and $\mathbf{B}$ would be simply called the fields, but that's history for you. In this context, saying that $\mu_0$ doesn't affect the magnetic field intensity would mean that it doesn't affect $\mathbf{H}$, in much the same way that $\epsilon_0$ doesn't affect the electric displacement, that is, $\mathbf{D}$. What $\mu_0$ does affect is the magnetic induction $\mathbf{B}$, which is often simply called the magnetic field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How is an electron able to emit and absorb the same (virtual) photon? When calculating the lowest-order self-energy corrections for an electron, for example, Feynman diagrams involving the emission and re-absorption of a (virtual) photon need to be considered (as here, for example: http://quantummechanics.ucsd.edu/ph130a/130_notes/node475.html ), but how is it possible for the emitted photon to be re-absorbed by the same electron? Why hasn't the photon immediately raced away into the distance, leaving the electron far behind, before it can be re-absorbed? Is this just a case of not taking these diagrams too literally? Edit: After the electron emits the photon, both particles should be considered free particles, and hence they cannot meet again unless the photon collides with some other particle and races back to be absorbed by the source electron. In other words, we need at least a third vertex in the Feynman diagram for it to be possible for an electron to emit and absorb the same photon. So, how can such a process be possible with just two vertices?
Indeed, do not take Feynman diagrams as literal representations of what is happening in a particle picture. Only the external lines of a diagram correspond to real particles - the internal lines, though called virtual particles, are little more than artifacts of the perturbative expansion we do to calculate QFT amplitudes, and there is little reason to ascribe to these virtual particles any kind of physical reality. Instead of saying "the correction to the self-energy comes from the electron emitting and reabsorbing a virtual photon", it is perfectly fine to say that the self-energy correction comes from interaction with the electromagnetic field, and is, to lowest order in the canonical perturbative expansion, represented by the Feynman graph you describe. The talk about emitting/absorbing virtual particles is wholly superfluous.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Simple estimation of the critical temperature of water I'm trying to develop fermi estimation skills and I came up with a question for which I don't even know where to start from. Here goes: Is it possible to estimate the critical temperature (say in Kelvin degrees) of water in a simple way using fermi estimation? By critical temperature I mean the temperature of the point at the end of the coexistence line of water and vapour. See this plot.
I haven't done a calculation yet, but I would use an extrapolation based on the Clausius-Clapeyron formula: $$\frac{dP}{dT} = \frac{L}{T\Delta V}$$ You then take any two known thermodynamic quantities of water and water vapor and linearly extrapolate to the point where the difference is zero. A good choice could be the entropy: the entropy of water vapor is easily estimated by treating it as an ideal gas and taking into account the internal degrees of freedom of the H2O molecule. The entropy of water then follows from the latent heat; the way this depends on temperature follows from the well-known heat capacity of water. So, I think it's not that difficult to come up with an estimate with only a blank sheet of paper and a calculator.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Drift velocity of electrons with changing area What would happen with the drift velocity of a cylindrical resistor's diameter increases, with a given voltage between its terminals? According to the expression: \begin{align} R&=\rho\frac{L}{A}\\ I&=neAv_d\\ \Delta V &= IR\\ v_d&=\frac{\Delta V}{\rho L n e}\\ \end{align} The resistivity does not change, neither does the length of the resistor nor the term $ne$ but the resistance does change as well as the current, so the area is eliminated from the expression. I wonder if the drift velocity would be the same after increasing the diameter or if my derivation is wrong.
The drift velocity is the average velocity due to an applied electric field. In a conductor, electrons scatter around at the Fermi velocity but have a net zero average (i.e., equal scattering in all directions). When the electric field is applied, the electrons are given a small velocity in one direction. Thus, we can say, $$ v_\textrm{drift}=\eta E $$ where $\eta$ is some constant. Since the electric field comes from the gradient of a potential applied across the length of the bar, $L$, this approximates to $$ v_\textrm{drift}\simeq\eta\frac{V}{L} $$ which is similar to what you have. Since there is no factor of $A$ in the latter equation (nor in my $\eta$ here), increasing the area (by increasing the diameter) should not change the drift velocity.
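A quick numerical cross-check with copper-like numbers (the values are assumed, chosen only for illustration): the drift velocity computed from $v_d = V/(\rho L n e)$ matches the one obtained through $R = \rho L/A$ and $I = neAv_d$ for any choice of area $A$.

```python
# Assumed copper-like parameters:
rho = 1.68e-8    # ohm*m, resistivity
n = 8.5e28       # carriers per m^3
e = 1.602e-19    # C, elementary charge
L, V = 1.0, 1.0  # m, volts

v_drift = V / (rho * L * n * e)   # a few mm/s, with no A anywhere

# Cross-check through R = rho*L/A and I = n*e*A*v_d for two different areas:
for A in (1e-6, 5e-6):
    R = rho * L / A
    I = V / R
    assert abs(I / (n * e * A) - v_drift) < 1e-12 * v_drift
```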
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is rubber an incompressible material? I know its Poisson's ratio is close to 0.5, but I don't understand physically what a Poisson's ratio of 0.5 means and why it implies incompressibility. When I tried searching, I found that rubber (and similar polymers) conserve volume after deformation and so are incompressible. But the same seems to be the case with steel (Poisson's ratio around 0.3): it conserves volume after deformation. So can someone explain this?
The bulk modulus of elasticity is given by $$K = \frac{E}{3(1-2\nu)}$$ where $E$ is Young's modulus and $\nu$ is the Poisson ratio (0.5 for rubber). The bulk modulus basically tells us how much a material resists a change in volume under pressure. Plugging in $\nu = 0.5$ gives a zero denominator, making the bulk modulus infinite - which means the material is (ideally) impossible to compress. :D
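Numerically, you can watch the bulk modulus blow up as $\nu \to 0.5$ (here in units of the Young's modulus $E$, with a few sample ratios):

```python
# K/E = 1/(3(1 - 2*nu)) diverges as nu approaches 0.5.
for nu in (0.30, 0.45, 0.49, 0.499):
    K_over_E = 1.0 / (3.0 * (1.0 - 2.0 * nu))
    print(f"nu = {nu}: K/E = {K_over_E:.1f}")
# steel-like nu = 0.3 gives K/E ~ 0.8; rubber-like nu = 0.499 gives K/E ~ 167
```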
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 3 }
Why are non-orthogonal states indistinguishable? I want to know what it means, mathematically, for quantum states to be distinguishable. As a student without a physics background, could anyone explain to me, using linear algebra only, why non-orthogonal states are indistinguishable?
Two states $\rho_0$ and $\rho_1$ are perfectly distinguishable if there exists a POVM measurement $\{P_0,P_1\}$ such that $$\operatorname{Tr}(\rho_0P_0)=1$$ $$\operatorname{Tr}(\rho_0P_1)=0$$ $$\operatorname{Tr}(\rho_1P_0)=0$$ $$\operatorname{Tr}(\rho_1P_1)=1$$ and $P_0+P_1=I$, where $I$ is the identity matrix. You can interpret the above equations as follows: If we obtain outcome $P_0$, we know for sure that the state was $\rho_0$, whereas if we obtain outcome $P_1$, we know with certainty that the state was $\rho_1$. It is not too hard to show that the above conditions can be satisfied if and only if the states are orthogonal. If they are non-orthogonal, there is always a non-zero probability of making a mistake when we try to identify which state we were measuring.
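To see the non-zero error probability concretely, here is a small numerical sketch using the standard Helstrom bound for equal priors (a known result, but the qubit examples are my own choice): the best achievable success probability is $\tfrac12(1 + \tfrac12\lVert\rho_0-\rho_1\rVert_1)$, which equals 1 only for orthogonal states.

```python
import numpy as np

def dm(psi):
    """Density matrix |psi><psi| of a pure state vector."""
    psi = np.asarray(psi, dtype=complex)
    return np.outer(psi, psi.conj())

def p_success(rho0, rho1):
    """Helstrom success probability for equal priors."""
    eig = np.linalg.eigvalsh(rho0 - rho1)
    return 0.5 * (1 + 0.5 * np.abs(eig).sum())

# Orthogonal pair |0>, |1>: perfectly distinguishable.
p_orth = p_success(dm([1, 0]), dm([0, 1]))

# Non-orthogonal pair |0>, |+>: the best strategy still errs ~15% of the time.
plus = np.array([1, 1]) / np.sqrt(2)
p_non = p_success(dm([1, 0]), dm(plus))
```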
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Origin of quark masses Does all the mass of the quarks in the standard model come from the Higgs sector or is there also a contribution to quark masses due to QCD chiral symmetry breaking?
In the standard model the masses of the quarks and the other elementary particles come from the Higgs mechanism. In the case of the quarks, the masses given in the table of the link are calculated using convoluted theoretical models and data input from scattering experiments. At the moment chiral symmetry breaking does not contribute to the quark masses except insofar as it enters the models fitted for the masses. Chiral symmetry breaking is what binds the quarks in a proton or a pion with a fixed mass. Protons would fly apart except for the strong nuclear force, which is carried by gluons (force lines between quarks). When gluons clump together they form a "glueball." The modern conception of the proton includes more than the three "valence" quarks - a down (d) and two up (u) quarks - which account for only about 2 percent of the proton's mass. The rest comes from a "sea" of virtual quarks and glueballs. Even outside the bounds of the proton, virtual particles spring into and out of existence within the teeming vacuum described by QCD. (Rendering courtesy of Alex Dzierba, Curtis Meyer and Eric Swanson) All these four-vectors, including the valence quark four-vectors, added up and taken into a four-dot-product, give the mass squared of the proton, in this example. With the self-interaction strong force terms the valence quarks would be bound by all this sea, but what determines that the mass of the proton is exactly what is measured, as well as the masses of all the rest of the hadrons? This is where chiral symmetry breaking in QCD comes in, through its relation to hadronic masses. The resulting effective theory of baryon bound states of QCD (which describes protons and neutrons) then has mass terms for these, disallowed by the original linear realization of the chiral symmetry, but allowed by the nonlinear (spontaneously broken) realization thus achieved as a result of the strong interactions. Bound states here means stable states and resonances.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Gibbs free energy and maximum work I'm getting confused between two important results. The Gibbs free energy is $G = H-TS$ where $H$ is the enthalpy and $S$ is the entropy. When the temperature and pressure are constant the change in the Gibbs energy represents maximum net work available from the given change in system . But $dG = VdP-SdT$, so at constant temperature and pressure i'm getting $dG=0$. This is the criteria for phase equilibria. I'm getting Gibbs free energy change at constant $T$ and $P$ as maximum work in one relation and zero in another. How are these compatible?
It means that there exists a third variable, say, $X$, describing the system. The corresponding thermodynamic relation should be \begin{equation} dG = VdP - SdT + \bigg(\frac{\partial G}{\partial X}\bigg)_{P,T} dX. \end{equation} If $X$ is not fixed, the system at constant $T$ and $P$ adjusts itself such that $G$ is minimized with respect to $X$. Then, $(\partial G/\partial X)_{P,T} = 0$, leading to $dG = 0$ even if $X$ changes by a small amount $dX$. Now suppose that one controls the system to make it go through a thermodynamic process involving a change in $X$. During this process, the non-expansive work the system does on the outside is less than or equal to $-\Delta G$ (equal in a reversible process). For example, consider a battery. The decrease in its Gibbs free energy from the fully charged state to the completely discharged state is equal to the maximum (ideal) electrical work the battery can do. In this case, the variable "$X$" is a measure of how far the chemical reaction inside the battery has proceeded.
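As a concrete instance of the "maximum non-expansion work" reading: for a battery, $-\Delta G$ of the cell reaction fixes the EMF through $-\Delta G = nFE$. A sketch with an assumed textbook value for the Daniell cell:

```python
# Assumed value: Delta G ~ -212 kJ/mol for the Daniell (Zn/Cu) cell reaction,
# with n = 2 electrons transferred per formula unit.
F = 96485.0          # C/mol, Faraday constant
dG, n_e = -212e3, 2  # J/mol, electrons transferred

E_cell = -dG / (n_e * F)   # ~1.10 V, the familiar Daniell-cell voltage
```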
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
If light is electromagnetic, then can light produce electricity or attract metals? What I mean to say is: if light is electromagnetic in nature, shouldn't it show electric or magnetic effects on matter? For example, if light falls on a metal it should produce a current due to its electric nature, but it doesn't. Secondly, due to its magnetic nature, shouldn't it attract metal objects, get deflected towards them, or magnetize them with its magnetic field?
I think the OP is referring to the classical nature of light (not the famous quantum effects associated with it). EM radiation can stimulate antennas or induce electric currents on conductors - so if light is EM radiation, why doesn't it create electrical currents on conductors? This is because the wavelengths are so small (visible light is at hundreds of THz) that making antennas for it becomes very, very challenging, as the required antenna size scales with the wavelength, putting it at the sub-micron scale. Also, the frequency of visible light lies below the plasma frequency of most conductors, so the electrons just bounce back and forth quickly, reflecting the light with no detectable EMF - this is not the case for lower-frequency signals (say, radio or TV signals), so that radiation can indeed cause an EMF on the conductor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
How to choose the Correct Green's Function? In order to find the Green's function of the Helmholtz operator, $$(\nabla^2+k^2)G(\vec r-\vec r\,')=\delta^{(3)} (\vec r-\vec r\,'),$$ one can obtain four different Green's functions corresponding to four different ways of avoiding the poles when choosing the contour. How does one choose the correct Green's function for a given problem?
Consider the case of a free scalar field, governed by the usual Lagrangian, $$\mathcal{L} = \frac{1}{2}\partial_\mu \phi \partial^\mu \phi - \frac{1}{2}m^2 \phi^2$$ The propagator, or equivalently the Green's function of the theory, can be thought of as the response when we use a delta function as a source in the equation of motion, i.e. $$(\square + m^2) \Delta_F(x-y) \sim \delta^{(4)}(x-y)$$ Explicitly, it is given by a Fourier integral over four-momentum, $$\Delta_F (x-y)=\int \frac{d^4 p}{(2\pi)^4} \frac{i}{p^2-m^2} e^{-ip \cdot (x-y)}$$ The $p^0$ integrand has singularities at $p^0 = \pm \sqrt{\vec p^{\,2}+m^2} = \pm E_{\vec p}$. We can choose a contour which avoids these by dipping below the first and going above the other. However, to apply the residue theorem, the contour must be closed. Since the exponential contains the factor $e^{-ip^0(x^0-y^0)}$, for $x^0 > y^0$ we must close the contour in the lower-half plane (clockwise), and in the upper-half plane if $y^0 > x^0$. Alternatively, we can define the Feynman propagator, $$\Delta_F = \int \frac{d^4 p}{(2\pi)^4} \, \frac{i}{p^2-m^2 + i\epsilon} e^{-ip\cdot (x-y)}$$ The $i\epsilon$ prescription due to Feynman shifts the poles by an infinitesimal amount off the real axis; as a result, the integral running straight along the real line is equivalent to the aforementioned contour. Depending on your purpose, it may be useful to pick a different contour: the retarded propagator $\Delta_R$ is defined by the contour which goes over each pole on the real line, and the advanced propagator by the contour going under both. To understand what these mean physically, consider the context of response theory, where the response function $\chi$ determines how a system changes under the addition of a source $\phi$, i.e. $$\delta \langle \mathcal{O}_i(t) \rangle = \int dt' \chi(t-t')\phi(t')$$ It's clear the above is in fact a convolution, $\chi \ast \phi$, and $\chi$ also has the interpretation of a Green's function.
But we can't affect the past, so clearly, $$\chi(t) = 0, \quad t < 0$$ For the Fourier transform, this is equivalent to stating, $$\chi(\omega) \quad \mathrm{analytic}, \quad \mathrm{Im} \, \omega > 0$$ In other words, $\chi(\omega)$ has no poles in the upper-half plane. This object $\chi(t)$ is in fact our retarded Green's function, and is also called the causal Green's function, precisely because the above requirement is imposed.
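A minimal numerical sketch of this pole-position/causality link, using the standard damped-oscillator response $\chi(\omega) = 1/(\omega_0^2 - \omega^2 - i\gamma\omega)$ (an illustrative choice, not taken from the answer above): its poles both lie in the lower half-plane, so the reconstructed $\chi(t)$ should vanish for $t < 0$.

```python
import numpy as np

# Damped-oscillator response chi(w) = 1/(w0^2 - w^2 - i*gamma*w).
# Its poles sit at w = +/- sqrt(w0^2 - gamma^2/4) - i*gamma/2, i.e. in the
# LOWER half-plane, so chi(t) must vanish for t < 0 (causality).
w0, gamma = 1.0, 0.5
w = np.linspace(-50.0, 50.0, 200_001)  # truncated real-frequency grid
dw = w[1] - w[0]
chi_w = 1.0 / (w0**2 - w**2 - 1j * gamma * w)

def chi_t(t):
    """chi(t) = (1/2pi) * integral of chi(w) e^{-i w t} dw, as a Riemann sum."""
    return (chi_w * np.exp(-1j * w * t)).sum().real * dw / (2.0 * np.pi)

print(chi_t(-5.0))  # ~0: no response before the source acts
print(chi_t(+5.0))  # ~ -0.293: damped ringing, e^{-gamma t/2} sin(Omega t)/Omega
```

For $t < 0$ the factor $e^{-i\omega t}$ forces the contour closed in the upper half-plane, which encloses no poles; the numerics reproduce exactly that (up to small truncation error from the finite frequency grid).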
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }