Q | A | meta |
|---|---|---|
Can a huge gravitational force cause visible distortions on an object? In space, would it be possible to have an object generating such a huge gravitational force that it would be possible for an observer (not directly affected by the gravitational force and the spacetime distortion) to see some visual distortions (bending) of another small object placed near it?
(E.g., a building on a very large planet would have a lower base of a different size than its roof.)
We assume that the object would not collapse on itself because of the huge gravitational force.
| Of course! This is exactly what we see in images of gravitational lensing, where enormous clusters of galaxies bend the light from other galaxies around themselves.
This bending of light was also one of the first lines of evidence that supports Einstein's General Relativity, where we observed our own Sun bending the light from other stars during an eclipse.
More theoretically (but still possible), if we place a long rod near a black hole (pointing toward it), the rod would appear distorted to a distant observer, and the inner tip of the rod would appear redder (gravitationally redshifted).
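As a rough quantitative check (my own illustration, not part of the original answer), the weak-field GR deflection angle for light grazing a mass is $\alpha = 4GM/(c^2 b)$; for light grazing the Sun's limb this reproduces the famous 1.75 arcseconds measured during the 1919 eclipse:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter for a grazing ray)

# Weak-field GR deflection angle: alpha = 4 G M / (c^2 b)
alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"{alpha_arcsec:.2f} arcsec")  # ~1.75 arcsec
```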
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
What is the difference of the two stable states of a Flip-flop?
In electronics, a flip-flop or latch is a circuit that has two stable
states and can be used to store state information. [...] Flip-flops and latches are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic.
Source: Flip-flop (electronics), Wikipedia
What are these states in Flip-flops?
I know that DRAM-storage makes use of capacitors. The two states are: Either the capacitor is charged or not. But I have no idea how to imagine the state of Flip-flops. What changes in the flip-flop if the state changes?
I am not a physicist, so please keep your answer as simple as possible.
| If you imagine a small loop lying on a surface, how many different directions can an electric current go within it?
The answer is two of course. With some pretzel-like shape changes, that's also what happens in a flip-flop: "flip" is (for example) a counter-clockwise current, and "flop" is a clockwise current (again, the shape is more like a pretzel though).
Because flip-flops maintain active currents, they take constant power to stay on. That's why storing information in some other form such as charge (flash drives), magnetic fields held by tiny magnetized regions (disk drives and tapes), or physically different "dots" (any form of optical storage) is always more efficient.
On the other hand, the flip-flop is very fast, since it's essentially constantly being "primed" with electrical current. That's why the registers and some forms of cache memory -- the very fastest forms of storage in a computer chip -- are built from flip-flops, but not much else. Good use of hierarchies of slower and slower memories can make those very high speeds seem available much more broadly, since computers tend to go over the same small and local set of data many times before reaching out to slower storage devices.
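The two-current picture can also be seen in logic terms. Below is a toy sketch of mine of a cross-coupled NOR (SR) latch, the textbook bistable circuit: after the set or reset input is released, the pair of outputs settles into one of two self-reinforcing states and holds it.

```python
def sr_latch_step(q, qbar, s, r):
    """One settling step of a cross-coupled NOR latch: Q = NOR(R, Qbar), Qbar = NOR(S, Q)."""
    return (not (r or qbar), not (s or q))

def settle(q, qbar, s, r, steps=4):
    # Iterate a few times so the feedback loop reaches a fixed point.
    for _ in range(steps):
        q, qbar = sr_latch_step(q, qbar, s, r)
    return q, qbar

# Pulse "set", then release both inputs: the state persists.
q, qbar = settle(False, True, s=True, r=False)
q, qbar = settle(q, qbar, s=False, r=False)
print(q, qbar)   # True False

# Pulse "reset", then release: the other stable state persists.
q, qbar = settle(q, qbar, s=False, r=True)
q, qbar = settle(q, qbar, s=False, r=False)
print(q, qbar)   # False True
```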
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is GPS time measuring the proper time on the mean sea level or the GPS station itself? LeapSecond.com states:
Global Positioning System time is the atomic time scale implemented by the atomic clocks in the GPS ground control stations and the GPS satellites themselves.
Does GPS time measure the proper time on the mean sea level (rotating geoid) or the proper time of the GPS station itself?
In other words, is GPS time exactly 19 SI seconds behind TAI time, where an SI second is defined as the proper time on the rotating geoid equal to the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest and at a temperature of 0 K?
Put another way, are GPS time and TAI time synchronized?
| GPS time is locked to TAI time, with a constant difference of 19 seconds as you say. In fact GPS is used as part of the TAI process. See for example http://tycho.usno.navy.mil/ptti/1993/Vol%2025_13.pdf and http://tycho.usno.navy.mil/ptti/ptti2000/paper12.pdf for details.
If you're interested in this area there's a lot of useful stuff on the US Navy web site. Have a Google for something like "gps time site:navy.mil" for lots of useful articles.
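Since the offset is a constant 19 s, converting between the two timescales is trivial. A minimal sketch (function names are mine; this deliberately ignores UTC leap-second bookkeeping, which is where the real complexity lives):

```python
from datetime import datetime, timedelta

GPS_TAI_OFFSET = timedelta(seconds=19)  # TAI is always 19 s ahead of GPS time

def tai_from_gps(gps: datetime) -> datetime:
    """Convert a GPS-timescale timestamp to TAI (constant 19 s offset)."""
    return gps + GPS_TAI_OFFSET

def gps_from_tai(tai: datetime) -> datetime:
    return tai - GPS_TAI_OFFSET

t = datetime(2012, 7, 20, 12, 0, 0)
print(tai_from_gps(t))  # 2012-07-20 12:00:19
```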
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What equations govern the formation of droplets on a surface? When some smooth surface (like that of a steel or glass plate) is brought into contact with steam (e.g., over boiling milk), water is usually seen to condense on that surface not uniformly but as droplets. What are the equations that govern the formation and growth of these droplets? In particular, what role does the geometry of the surface play in it? Also, is it possible to prepare experimental conditions where no droplets are formed but water condenses uniformly?
It is more about the properties of the material and the surface roughness/texture than the geometry of the surface on the large scale. There is a nice Wikipedia article on wetting, which you may find useful. In brief, it is the difference in surface tension at the liquid-solid ($\gamma_{LS}$), liquid-air ($\gamma_{LA}$), and solid-air ($\gamma_{SA}$) interfaces that defines the value of the contact angle $\theta=\arccos\left(\left(\gamma_{SA}-\gamma_{LS}\right)/\gamma_{LA}\right)\neq0$ and leads to droplet formation. If the angle is zero, the liquid will tend to cover the whole surface uniformly.
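The relation quoted above is Young's equation. A small sketch (the numerical surface tensions below are illustrative values I chose, not measured data) showing how the three tensions decide whether droplets form or the film spreads:

```python
import math

def young_contact_angle(gamma_sa, gamma_ls, gamma_la):
    """Contact angle in degrees from Young's equation: cos(theta) = (γ_SA - γ_LS) / γ_LA."""
    cos_theta = (gamma_sa - gamma_ls) / gamma_la
    if cos_theta >= 1.0:
        return 0.0       # complete wetting: uniform film, no droplets
    if cos_theta <= -1.0:
        return 180.0     # complete dewetting
    return math.degrees(math.acos(cos_theta))

# Illustrative numbers (mN/m); water has gamma_LA ~ 72.8 mN/m at room temperature.
print(young_contact_angle(83.0, 50.0, 72.8))   # partial wetting: droplets, ~63 deg
print(young_contact_angle(122.8, 50.0, 72.8))  # cos(theta) >= 1 -> 0 deg: uniform film
```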
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Sound frequency of dropping bomb Everyone has seen cartoons of bombs being dropped, accompanied by a whistling sound as they drop. This sound gets lower in frequency as the bomb nears the ground.
I've been lucky enough to not be near falling bombs, but I assume this sound is based on reality.
Why does the frequency drop? Or does it only drop when it is falling at an oblique angle away from you, and is produced by Doppler shift?
I would have thought that most bombs would fall pretty much straight down (after decelerating horizontally), and therefore they would always be coming slightly closer to me (if I'm on the ground), and thus the frequency should increase..
| If you're on the aircraft which is dropping a bomb that whistles, you'll hear the pitch drop as the bomb accelerates downward away from you. Perhaps the classic cartoon sound effect was inspired by the experience of someone who flew on a bomber rather than an observer on the ground.
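The received pitch can be estimated with the ordinary Doppler formula for a receding source, $f = f_0\,c_s/(c_s + v)$. A quick sketch of mine (the 800 Hz whistle frequency is an assumption, and air drag is ignored):

```python
# Doppler shift heard from the aircraft as the bomb accelerates away at ~g.
c_s = 343.0   # speed of sound in air, m/s
f0 = 800.0    # assumed whistle frequency at the source, Hz
g = 9.81      # free-fall acceleration, m/s^2

for t in [0.0, 1.0, 2.0, 3.0]:
    v = g * t                    # recession speed of the bomb (no drag)
    f = f0 * c_s / (c_s + v)     # receding source: the heard pitch drops
    print(f"t={t:.0f}s  v={v:5.1f} m/s  heard {f:6.1f} Hz")
```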
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 3
} |
Will adding heat to a material increase or decrease entropy? Does adding heat to a material, thereby increasing the electrical resistance of the material, increase or decrease its entropy?
Follow up questions:
Is there a situation where heat flux, i.e. thermal flux, will change entropy?
Does increasing resistance to em transfer prevent work from being done?
| If by "heating" you mean "adding heat", then the answer is yes, except for the unusual situation where a material is at negative temperature. When you add heat to a system
$$ dS = {dQ\over T} $$
and this is always positive when T is positive. This is the definition of the thermodynamic temperature in the most fundamental way of looking at it, the partial derivative of S with respect to U at fixed volume and fixing all other conserved quantities is the reciprocal temperature $\beta$.
The only exception to this rule is for systems where there is a negative temperature. This occurs in spin systems, where there is a maximum energy state and a minimum energy state. As you add heat energy to these systems, the entropy rises, then falls, which means that the inverse temperature smoothly goes down to zero, then turns negative. This corresponds to a temperature that goes to infinity and comes back out the negative side at negative infinity, going toward zero from the other direction. Negative-temperature systems are rare, since they require an upper bound on the energy, which means you are restricted to systems such as nuclear spins, which are decoupled from the electron spins for a long time.
The question of whether entropy always increases with increasing temperature is a different question, and this has to do with the sign of the specific heat. For normal systems, the specific heat is always positive, so that the temperature increases with energy (beta decreases), and this is true for negative temperature systems too, as long as you define the specific heat properly as the change in beta with U. Even at negative inverse-temperature, the negative inverse-temperature becomes larger negative with increasing energy.
For neutral large black holes, as Carl Brannen points out, the specific heat is negative.
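For an ordinary material with roughly constant heat capacity, integrating $dS = dQ/T$ along a heating path gives $\Delta S = C\ln(T_2/T_1)$, which is positive whenever heat is added at positive temperature. A quick numerical illustration of mine, using the familiar heat capacity of water:

```python
import math

def entropy_change(C, T1, T2):
    """ΔS = ∫ dQ/T = C ln(T2/T1) for a constant heat capacity C (temperatures in kelvin)."""
    return C * math.log(T2 / T1)

# Heating 1 kg of water (C ≈ 4186 J/K) from 20 °C to 80 °C
dS = entropy_change(4186.0, 293.15, 353.15)
print(f"{dS:.0f} J/K")  # positive: adding heat at positive T raises entropy
```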
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Is my electric kettle collecting old water during the week? I have an electric kettle at work. On Monday, it's empty and I pour in about 1.5 liters of water. I usually end up drinking about 1 liter per day and refilling 1 liter each morning.
At the end of the week, I rinse it out because in my mind, there is .5 liters of "old" water from the beginning of the week on the bottom. Is that true? Or is the water constantly mixed around from being used and then refilled?
| Realistically, you're probably very close to having it fully mixed. There are at least three sources of mixing present. Diffusion is a very slow process, so you can ignore it. You will presumably be agitating the water a lot when you pour the new water in, and that might completely mix everything. If that doesn't fully mix the water, there will also be some convection due to the heating you apply when you boil the water for your tea.
Here's what you get if you assume that it's fully mixed each day. On Monday, you add 1.5 L and drink 1.0 L, leaving 0.5 L. On Tuesday you add 1.0 L and mix everything up. You then drink 1.0 L of the mixture, leaving 0.3333 L of Tuesday's water and 0.1667 L of Monday's water. On Wednesday you add 1.0 L, mix everything up, and drink 1.0 L of the mixture. Etc.
After the $n^{\mathrm{th}}$ day, you have $1.5/3^n$ L of Monday's water left, $1.0/3^{n-1}$ L of Tuesday's water left, and in general $1.0/3^{n-k+1}$ L of the $k^{\mathrm{th}}$ day's water left (for $k\ge2$). For the particulars you gave, that leaves you with 6.17 mL of Monday's water, 12.3 mL of Tuesday's water, 37.0 mL of Wednesday's water, 111 mL of Thursday's water, and 333 mL of Friday's water left over after you have poured Friday's tea.
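The day-by-day argument is easy to simulate. A short sketch of mine that reproduces the numbers above under the perfect-mixing assumption:

```python
def week_of_tea(days=5, start=1.5, refill=1.0, drunk=1.0):
    """Track how many litres of each day's water remain, assuming perfect mixing daily."""
    amounts = []                       # amounts[k] = litres left from day k
    for day in range(days):
        amounts.append(start if day == 0 else refill)
        total = sum(amounts)
        # Drinking `drunk` litres of a well-mixed kettle removes the same
        # fraction of every day's contribution.
        amounts = [a * (1 - drunk / total) for a in amounts]
    return amounts

left = week_of_tea()
for day, a in zip("Mon Tue Wed Thu Fri".split(), left):
    print(f"{day}: {1000 * a:6.2f} mL")
```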
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How does one quantize the phase-space semiclassically? Often, when people give talks about semiclassical theories they are very shady about how quantization actually works.
Usually they start with talking about a partition of $\hbar$-cells then end up with something like the WKB-wavefunction and shortly thereafter talk about the limit $\hbar\rightarrow 0$.
The quantity that is quantized is usually the Action $\oint p\, \mathrm{d}q$ which is supposed to be a half-integer times $2\pi\hbar$.
What is the curve we integrate over? Is it a trajectory, a periodic orbit or anything like this? And how does this connect to the partition in Planck-cells?
Additionally, what is the significance of the limit $\hbar\rightarrow 0$?
| Many symplectic manifolds (phase spaces of mechanical systems) admit a coordinate system where the symplectic two form can be written locally as:
$\omega = \sum_i dp_i \wedge dq_i + \sum_j dI_j \wedge d\theta_j$
Where $ p_i, q_i$ are linear coordinates $ I_j$ are radial coordinates and $\theta_j$ are angular coordinates.
The submanifold parameterized by the $\theta_j$ is a torus, and a result due to Śniatycki states that the system can be quantized by means of a Hilbert space of wave functions (distributions) supported only on points whose coordinates $ I_j$ satisfy:
$ 2 I_j = m \hbar$
One of the simplest examples admitting such a quantization is the two-sphere, whose symplectic form is its area form
$A = r \mathrm{sin} \theta d\theta \wedge d\phi = dz \wedge d\phi $
where $\theta, \phi$ are the spherical coordinates and $z$ is the coordinate along the axis.
In this case the Bohr-Sommerfeld condition is given by:
$ z =\frac{ m}{2} \hbar$,
which is the spin projection quantization condition.
In more sophisticated mathematical language, one says that an open dense subset of the symplectic manifold is foliated by Lagrangian tori, and the integration is over the generating cycles of the tori.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 0
} |
How do I show that all Brillouin zones have the same volume? I have read in a few books that all Brillouin zones have the same volume, and I can vaguely see how it works, but have not been able to think up a formal proof.
Help?
| A Brillouin zone is defined as the range of k's which represent a unique one particle pseudomomentum state in the crystal. If you count the total number of states in the Brillouin zone, you do an integral over k:
$$ \int {d^dk\over (2\pi)^d} = {V\over (2\pi)^d} $$
Where V is equal to the total volume in momentum space. This is equal to the dimension of the Hilbert space, so it is independent of how you define the zone. Another way of saying this is if you have a large crystal of volume V, the dimension of the Hilbert space is $N= V/v$ where V is the total crystal volume and v is the volume of a unit cell, and in Fourier space, you get a dual lattice of equidistributed discrete momentum states, with total number of k states in any Brillouin zone equal to N. In the limit $V\rightarrow\infty$, the total number of states is proportional to the volume of the zone, so it must be the same no matter how you chop it up.
I don't know how formal you want, but one can get as formal as you like.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is a tensor? I have a pretty good knowledge of physics, but couldn't deeply understand what a tensor is and why it is so fundamental.
A tensor is, in colloquial language, a multi-dimensional generalization of a vector, in which variations in one direction affect the others.
In Newtonian mechanics we assume that forces, velocities, etc. along mutually orthogonal directions are mutually independent:
$$F=F_x\vec i + F_y\vec j +F_z\vec k $$
$F_x\vec i \cdot F_y\vec j =0$ since $\vec i \cdot \vec j =0$
Whenever we apply some force, we resolve it into components, and the net work done along a given direction is zero if the components contribute nothing along that direction.
I.e., a force applied in one direction will not have any effect in a direction perpendicular to it.
Whereas some physical quantities like pressure applied in one direction can produce effect in other directions also. The corresponding directional quantity is the stress tensor.
If we press a balloon in one direction, we can see expansion in the mutually perpendicular directions as well.
If we push a solid cube against a wall, it won't move up, whereas a balloon rises up. This is a simple analogy that distinguishes a vector from a tensor.
Any variation in the x component of the stress tensor has effects reflected in the y and z directions as well.
$$ σ = \begin{pmatrix} σ_{11} & σ_{12} & σ_{13}\\ σ_{21} & σ_{22} & σ_{23} \\ σ_{31} & σ_{32} & σ_{33}\end{pmatrix}$$
Similar is the case of moment of inertia.
Here we still treat the effects of the cause in the other directions independently.
In other words, we treat the effects in the y and z directions independently.
Hence stress and moment of inertia are rank-2 tensors.
I.e., at a time we connect only 2 spatial directions.
The Levi-Civita symbol is a rank-3 tensor that we use in the angular momentum commutation relations
$$[L_i, L_j]=i\hbar \epsilon_{ijk} L_k$$
Here the order of all three indices $i,j,k$ decides the value/sign of the function $\epsilon_{ijk}$.
In electrodynamics and relativistic mechanics we may come across rank 4 tensors as well.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68",
"answer_count": 12,
"answer_id": 10
} |
Phase shift of 180 degrees of a transverse wave on reflection from a denser medium Can anyone please provide an intuitive explanation of why a phase shift of 180 degrees occurs in the electric field of an EM wave when it is reflected from an optically denser medium?
I tried searching for it, but everywhere the result is just used; the reason behind it is never specified.
| It is easy to see why there is a sign change in the case where the electric field reflects from a conducting surface. The electric field would excite a current in the conducting surface, which in turn would force the electric field to be zero at the interface. Therefore, the reflected field must be such that it cancels the incident field at the surface. Hence, the negative sign.
In the case of a dielectric interface one can use a similar argument, but with some differences. In this case not all the power is reflected. Some of it is transmitted into the material on the other side of the interface. However, due to energy conservation, the power that is transmitted is always smaller than the incident power. Hence, the amplitude of the transmitted field is smaller. To satisfy the boundary conditions, the reflected field must have a negative sign so that it can be subtracted from the amplitude of the incident field to match that of the transmitted field.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54",
"answer_count": 6,
"answer_id": 4
} |
Does gravity slow the expansion of the universe? Does gravity slow the expansion of the universe?
I read through the thread http://www.physicsforums.com/showthread.php?t=322633 and I have the same question. I know that the universe is not being stopped by gravity, but is the force of gravity slowing it down in any way? Without the force of gravity, would space expand faster?
Help me formulate this question better if you know what I am asking.
| The answer is that yes gravity does slow the expansion of space (leaving aside dark energy for the moment), but to get a better grasp on what's going on you need to look into this a bit more deeply.
If we make a few simplifying assumptions about the universe, e.g. it's roughly uniform everywhere, we can solve the Einstein equation to give the FLRW metric. This is an equation that tells us how spacetime is expanding, and actually it seems to be a pretty good fit to what we see so we can be reasonably confident it's at least a good approximation to the way the universe behaves.
To reduce gravity you simply reduce the density of matter in the universe, because after all it's the matter generating the gravity. At low matter densities the FLRW metric tells us that the universe expands forever. As you increase the matter density the expansion slows, and for densities above a critical density (the ratio of the actual density to this critical density is conventionally called $\Omega$, so this corresponds to $\Omega > 1$) the expansion comes to a halt and the universe collapses back again.
So yes, gravity does slow the expansion and the FLRW metric tells us by how much. If you want to pursue this further try Googling for the FLRW metric. The Wikipedia article is very thorough but a bit technical for non GR geeks, but Googling should find you more accessible descriptions.
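The deceleration shows up even in a toy integration of the Friedmann equation for a flat, matter-only universe, where $\dot a = H_0\sqrt{\Omega_m/a}$. A sketch of mine in arbitrary units, with dark energy switched off:

```python
# Matter-only flat FLRW universe: da/dt = H0 * sqrt(Omega_m / a), Lambda = 0.
# Integrate forward with a crude Euler step and watch the expansion rate fall.
H0 = 1.0            # Hubble rate today, arbitrary units
a, dt = 1.0, 1e-4   # scale factor, time step
rates = []
for step in range(30000):
    adot = H0 * (1.0 / a) ** 0.5   # Friedmann equation with Omega_m = 1
    if step % 10000 == 0:
        rates.append(adot)
    a += adot * dt
print(rates)  # strictly decreasing: gravity slows the expansion as a grows
```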
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 0
} |
Partition function of an interacting gas By reading an article, I found a partition function that, according to the author, describes an interacting gas with random variables as coupling constants.
$$Z =\int \mathrm{d} \lambda_i e^{i(K^{ij}\lambda_i\lambda_j + V^{ijk}\lambda_i\lambda_j\lambda_k)}\mathrm{exp}(e^{iS_{eff}(\lambda)})$$
This expression is totally unfamiliar to me. Could someone show me how to derive that, providing a reference (online course, textbook, etc.) if necessary?
| This is a quantum partition function, not a statistical mechanical partition function. He is just talking about an idealized self-interacting field. If you have a scalar with cubic self interactions, you write the Lagrangian as
$$ \partial_\mu \phi \partial^\mu \phi - \lambda \phi^3 $$
If you fourier transform the field variables, this is
$$ \int_k k^2 |\phi_k|^2 + \int_{k_1,k_2,k_3} \delta(k_1+k_2+k_3) \phi_{k_1}\phi_{k_2}\phi_{k_3} $$
Which, if you think of k as a lattice, can be abstracted to the form Banks writes down. The remaining $S_{eff}$ term is from renormalization, which changes the low-energy theory according to the contributions to the low-energy effective action from the high-energy degrees of freedom you are neglecting. This is heuristic, because a real renormalizable model requires a $\phi^4$ term too.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Apparent contradiction between quantum calculations and intuition for reflection at step potential? I am rather confused because it would seem that mathematical conclusions I have drawn here goes against my physical intuition, though both aren't too reliable to begin with.
We have a potential step described by $$V(x)=\begin{cases}0& x\le0\\V_0 & x>0\end{cases}$$
and a wavefunction $\psi(x)$ that satisfies the equation $$-{\hbar^2\over 2m}{\partial^2 \over \partial x^2}\psi(x)+V(x)\psi(x)=E\psi(x).$$
I wish to find the probability of reflection. By continuity constraints at $x=0$ I have arrived at the reflection amplitude being $$R={k-q\over k+q},$$ where $k=\sqrt{2mE\over \hbar^2}$ and $q=\sqrt{2m(E-V_0)\over \hbar^2}$ then we let $V_0\to -\infty$ giving $q\to \infty$, so $R\to -1$
$\implies |R|^2\to 1.$
But I would have guessed that $|R|^2$ should vanish at the limit so that the incident wave is totally transmitted!
Could someone please explain?
| Recall that if we have an incident wave
$$\frac{1}{\sqrt{k_1}}e^{ik_1 x}$$
from left (region 1, $x<0$, with constant potential $V_1$) that is partially transmitted
$$\frac{T}{\sqrt{k_2}}e^{ik_2 x} $$
to the right (region 2, $x>0$, with constant potential $V_2$), and partly reflected back to region 1,
$$\frac{R}{\sqrt{k_1}}e^{-ik_1 x} $$
then the reflection coefficient is well known to be
$$R~=~ \frac{k_1-k_2}{k_1+k_2}, $$
where
$$k_i ~:=~\frac{\sqrt{2m(E-V_i)}}{\hbar}.$$
The OP's question is related to the fact that the reflection probability $|R|^2$ is invariant under the permutation $V_1\leftrightarrow V_2$. Roughly speaking, quantum mechanically, the reflection probability $|R|^2$ is the same whether the incident wave meets a potential barrier/wall or a potential abyss/well!
Intuitively, one would probably have guessed that the wave tends to go to the region with the smallest potential $V_i$. Classically, this is because one is forgetting momentum conservation of the wave, and implicitly allowing the wave to shed/absorb momentum at $x=0$ to/from the environment. Quantum mechanically, momentum conservation is implemented by the requirement that the left and right derivatives of the wave function at $x=0$ should be the same. Momentum conservation implies that when the wave meets a potential step (either up or down), a fraction of the wave must always be reflected to preserve momentum.
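The $V_1\leftrightarrow V_2$ symmetry of $|R|^2$ is easy to verify numerically from the formula above. A quick sketch of mine, in units with $\hbar = 1$ and $m = 1/2$:

```python
import cmath

def reflection_prob(E, V1, V2, hbar=1.0, m=0.5):
    """|R|^2 for a plane wave of energy E hitting the step V1 -> V2 at x = 0."""
    k1 = cmath.sqrt(2 * m * (E - V1)) / hbar
    k2 = cmath.sqrt(2 * m * (E - V2)) / hbar
    return abs((k1 - k2) / (k1 + k2)) ** 2

E = 2.0
print(reflection_prob(E, 0.0, 1.0))  # step up (barrier)
print(reflection_prob(E, 1.0, 0.0))  # step down (well): same reflection probability
```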
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
About an electrostatics integral and a delta-function kernel I'm having trouble with an integral and I would like some pointers on how to "take" it:
$$
\int \limits_{-\infty}^{\infty}\frac{3\gamma a^{2}d^{3}\mathbf r}{4 \pi \left( r^{2} + \frac{\gamma^{2}}{c^{2}}(\mathbf r \cdot \mathbf u)^{2} + a^{2}\right)^{\frac{5}{2}}}
$$
Here $\mathbf u$, $a$ and $\gamma$ are constants, and the integrand converges to a Dirac delta $\delta(\mathbf r)$ as $a\rightarrow 0$. The integral must be equal to 1.
| First choose a direction for u, along the z-axis. Then the integral is
$$ I = \int {1\over (x^2 + y^2 + A z^2 + B)^{5/2} } dx dy dz $$
Rescale z by $\sqrt{A}$ to get rid of A and restore rotational invariance.
$$ I = {1\over \sqrt{A}} \int {1\over (x^2 + y^2 + z^2 + B)^{5/2}} dx dy dz $$
Now you find the B dependence immediately from rescaling x, y and z by $\sqrt{B}$ (or from dimensional analysis-- B has units of length squared):
$$ I = {1\over \sqrt{A} B} \int {1\over (r^2 + 1)^{5/2}} d^3r $$
The only thing undetermined is the transcendental factor, which is just a number. You evaluate it by doing it radially and doing a string of substitutions:
$$ \sqrt{A}B I = 4\pi \int_0^{\infty} {r^2\over (r^2 +1)^{5/2} } dr $$
first $u=r^2 + 1$ gives
$$ \sqrt{A}B I = 2\pi \int_1^\infty {\sqrt{u-1}\over u^{5/2}} du $$
Then $v = {1\over u}$ makes it
$$ \sqrt{A}B I = 2\pi \int_0^1 \sqrt{1-v}\, dv = {4\pi\over 3} $$
So
$$ I = {4\pi \over 3 \sqrt{A}B} $$
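The closed form is easy to spot-check numerically against the radial integral from the intermediate step. A sketch of mine, using a crude midpoint rule and arbitrary positive values for A and B:

```python
import math

A, B = 2.0, 3.0            # arbitrary positive constants for the check

# Radial integral from the derivation: ∫_0^∞ r^2/(r^2+1)^(5/2) dr, which should be 1/3.
n, R = 200000, 200.0       # midpoint rule on [0, R]; the tail beyond R is ~1/(2R^2)
h = R / n
s = 0.0
for i in range(n):
    r = (i + 0.5) * h
    s += r**2 / (r**2 + 1) ** 2.5
s *= h

I_numeric = 4 * math.pi / (math.sqrt(A) * B) * s
I_closed = 4 * math.pi / (3 * math.sqrt(A) * B)
print(abs(I_numeric - I_closed) / I_closed)  # small relative error
```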
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Measurement and uncertainty principle in QM The Wikipedia says on the page for the uncertainty principle:
Mathematically, the uncertainty relation between position and momentum arises because the expressions of the wave function in the two corresponding bases are Fourier transforms of one another (i.e., position and momentum are conjugate variables).
Does that mean that position and momentum are just 2 different measurements of the same wave function? I.e., it is the same thing that is being measured, just in two different ways? Meaning, they are not really two different things, but two different views on the same thing?
Can I provide a pedestrian answer? When you measure the position of a quantum, you project it, or force it to commit, to a unique position, and from Fourier analysis this commitment requires all possible momenta. Think of focusing a quantum wave to a single location (à la the Dirac delta); this would require a wave generator to combine all momenta (and therefore no unique momentum). On the other hand, a measured momentum would hold for the wave throughout all space, making its position totally arbitrary. Also, when you do measure a quantum system, you do change it, as it changes your measurement apparatus, unless you just performed the very same measurement a moment before. A quantum measurement is generally not objective, since your apparatus and the quantum wave are both involved. Measuring is an active process in quantum mechanics, changing your lab (its measuring dials) and the quantum wave.
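The Fourier-transform statement in the question can be checked directly: make a Gaussian wave packet narrower in position and its momentum-space width grows, so that the product stays at the minimum-uncertainty value $\sigma_x\sigma_k = 1/2$. A numerical sketch of mine, using a discrete FFT as a stand-in for the continuous transform:

```python
import numpy as np

# A Gaussian wave packet: narrow in position -> wide in momentum, and vice versa.
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

for sigma_x in [0.5, 1.0, 2.0]:
    psi = np.exp(-x**2 / (4 * sigma_x**2))   # |psi|^2 has standard deviation sigma_x
    p_k = np.abs(np.fft.fft(psi)) ** 2       # momentum-space probability weights
    p_k /= p_k.sum()
    sigma_k = np.sqrt((p_k * k**2).sum())    # mean k is 0 by symmetry
    print(f"sigma_x={sigma_x}  sigma_x*sigma_k={sigma_x * sigma_k:.3f}")  # ~0.500 each
```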
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why does vibration loosen screws? I am trying to figure out why vibrations (say, from an engine) loosen screws. It seems to me that there is evident symmetry between loosening and tightening a screw. I am wondering what breaks this symmetry.
| Regardless of whether the "local" situation is symmetric or not with respect to loosening and tightening, what you essentially have is a random walk. At any point in time, the screw can stay where it is, get a little looser or get a little tighter. There is, in practical terms, a limit as to how tight the screw can get but no limit on how loose. For any degree of looseness, there's a positive probability that you'll eventually reach that point, assuming the vibration is strong enough to move the screw at all.
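The asymmetry described above (a wall on the tight side, nothing on the loose side) shows up in a crude Monte Carlo sketch of mine; the step distribution and scales are arbitrary:

```python
import random

# Random walk in "tightness": a hard stop at fully tight, no floor on the loose side.
random.seed(42)
trials, loosened = 2000, 0
for _ in range(trials):
    tightness = 10              # start fully tight; cannot exceed this
    for _ in range(500):        # vibration events
        tightness += random.choice([-1, 0, 1])   # symmetric local motion
        tightness = min(tightness, 10)           # the tightening wall
        if tightness <= 0:      # call it "fallen out"
            loosened += 1
            break
print(loosened / trials)  # a sizeable fraction work loose despite symmetric steps
```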
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 2,
"answer_id": 1
} |
Hamiltonian and the space-time structure I'm reading Arnold's "Mathematical Methods of Classical Mechanics" but I failed to find rigorous development for the allowed forms of Hamiltonian.
Space-time structure dictates the form of the Hamiltonian. Indeed, we know how a free particle should move in an inertial frame of reference (in a straight line), so the Hamiltonian should respect this.
I know how the form of the free particle Lagrangian can be derived from Galileo transform (see Landau's mechanics).
I'm looking for a text that presents a rigorous incorporation of space-time structure into Hamiltonian mechanics. I'm not interested in Lagrangian or Newtonian approach, only Hamiltonian. The level of the abstraction should correspond to the one in Arnolds' book (symplectic manifolds, etc).
Basically, I want to be able to answer the following question: "Given certain metrics, find the form of kinetic energy".
| From reading the comments to the question, I think that a partial answer could be to show the Hamiltonian character of the relativistic dynamics of a material particle in an electromagnetic field. If this interpretation of the question isn't correct then at least I hope to help in finding out its true interpretation.
Let $(M,g)$ be a Lorentzian $4$-manifold and $F$ the closed $2$-form on $M$ describing the electromagnetic field.
Let $H\in C^{\infty}(T^\ast M)$ be the kinetic energy defined by $H(\nu)=\frac{1}{2}g(g^\sharp\nu,g^\sharp\nu).$
It is easily proved that if $\omega_0$ denotes the canonical symplectic form, then $\omega_F := \omega_0+(\tau_M^\ast)^\ast F$ is also a symplectic form on $T^\ast M.$ (Here $\tau_M^\ast:T^\ast M\to M$ is the projection on the base.)
Then the motions are the projection on the base of the integral curves for the Hamiltonian vector field $X_H=\omega_F^\sharp(dH).$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Has anyone else thought about gravity in this way? Picture yourself standing on a ball that is expanding at such a rate that it makes you stick to the ball.
Everything in the universe is expanding at this same rate.
To escape the earth's gravitational pull we would need to jet upward faster than the expansion of the earth.
Each object expands at a different rate on its surface according to its size.
Thus different gravitational effects for planets of different sizes.
When in space we are subject to being affected by the most distant body if we stand in its way.
I just can not explain the reaction of our tides with our moon.
Have any scientists seriously considered an idea like this?
Follow up July 23
I am no scientist, but I think someone with more knowledge might explore this idea a little further.
At the very least, the idea that everything in the total universe is expanding, including all parts of the atom, could be used as a simple way to get the same formulas and the same results for the effects of gravity on anything on the surface of a spherical planet or a donut-shaped planet.
The area of mass will grow but the density will remain the same.
The idea could be cross-referenced with light shifting etc., to see if it falls in line with the known action of planetary gravity and the known expansion effect of the whole universe.
Maybe the gravity effect of a planet on its surface dweller is a completely different force than the one that maintains the orbits of planets (Kepler's laws, I believe).
What happens when we have an eclipse of the moon? Does the Earth's orbit around the sun change for the time of this eclipse?
My summary is that if scientists cannot explain gravity completely, then maybe the common thought of all these years is not a completely correct one.
| This view fails to account for free orbiting behavior. You know, the planets around the sun, the moons around the planets, artificial satellites around the Earth.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Can light exist in $2+1$ or $1+1$ spacetime dimensions? Spacetime of special relativity is frequently illustrated with its spatial part reduced to one or two spatial dimension (with light sector or cone, respectively). Taken literally, is it possible for $2+1$ or $1+1$ (flat) spacetime dimensions to accommodate Maxwell's equations and their particular solution - electromagnetic radiation (light)?
| No, because the polarization of the electromagnetic field must be perpendicular to the direction of motion of the light, and there aren't enough directions to enforce this condition. So in 1d, a gauge theory becomes nonpropagating, there are no photons, you just get a long range Coulomb force that is constant with distance.
In the 1960s, Schwinger analyzed QED in 1+1 d (the Schwinger model) and showed that electrons are confined with positrons to make positronium mesons. A much more elaborate model was solved by 't Hooft (the 't Hooft model, the nonabelian Schwinger model), which describes a confining meson spectrum.
EDIT: 2+1 Dimensions
Yes, light exists in 2+1 dimensions, and there is no major qualitative difference with 3+1 dimensions. I thought you wanted 1+1, where it's interesting.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 4,
"answer_id": 1
} |
What does symplecticity imply? Symplectic systems are a common object of studies in classical physics and nonlinearity sciences.
At first I assumed it was just another way of saying Hamiltonian, but I also heard it in the context of dissipative systems, so I am no longer confident in my assumption.
My question now is, why do authors emphasize symplecticity and what is the property they typically imply with that? Or in other more provocative terms: Why is it worth mentioning that something is symplectic?
| Classical mechanics is the study of second-order systems. The obvious geometric formulation is via semi-sprays, ie second-order vectorfields on the tangent bundle. However, that's not particularly useful as there's no natural way to derive a semi-spray from a function (ie potential).
Lagrangian and Hamiltonian mechanics are two solutions to that problem. While these formalisms are traditionally formulated on the tangent and cotangent bundles (ie velocity and momentum phase space), they were further generalized: Lagrangian mechanics led to the jet-bundle formulation of classical field theory, and Hamiltonian mechanics to the Poisson structure.
The symplectic structure is a stripped-down version of the structure of the cotangent bundle - the part that turned out to be necessary for further results, most prominently probably phase space reduction via symmetries. It doesn't feature prominently in undergraduate mechanics lectures (at least not the ones I attended) because, when working in canonical coordinates, it takes a particularly simple form - basically the minus in Hamilton's equations, where it's used similarly to the metric tensor in relativity, ie to make a contravariant vector field from the covariant differential of the Hamilton function.
Symplectic geometry also plays its role in thermodynamics: As I understand it, the Gibbs-Duhem relation basically tells us that we're dealing with a Lagrangian submanifold of a symplectic space, which is the reason why the thermodynamical potentials are related via Legendre transformations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 1
} |
Is it theoretically possible to reach $0$ Kelvin? I'm having a discussion with someone.
I said that it is -even theoretically- impossible to reach $0$ K, because that would imply that all molecules in the substance would stand perfectly still.
He said that this isn't true, because my theory violates energy-time uncertainty principle.
He also told me to look up the Schrödinger equation and solve it for an oscillator approximating a molecule, and see that its lowest energy state is still non-zero.
Is he right in saying this and if so, can you explain me a bit better what he is talking about.
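The oscillator computation alluded to above is easy to check numerically: discretize $-\tfrac12\psi'' + \tfrac12 x^2\psi = E\psi$ on a grid and diagonalize. A minimal finite-difference sketch (my addition, in units $\hbar = m = \omega = 1$):

```python
import numpy as np

# Finite-difference Hamiltonian for H = -1/2 d^2/dx^2 + x^2/2 on a grid
n, L = 1000, 16.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
# -1/2 second-difference operator: diagonal 1/dx^2, off-diagonals -1/(2 dx^2)
kinetic = (np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)) / dx**2
H = kinetic + np.diag(0.5 * x**2)
E = np.linalg.eigvalsh(H)
print(E[:3])   # ~ [0.5, 1.5, 2.5]: the lowest energy is hbar*omega/2, not 0
```

The ground-state energy comes out close to $\tfrac12\hbar\omega$, which is the zero-point energy the friend is referring to.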
| From Wikipedia, Negative temperature:
In physics, certain systems can achieve negative temperature; that is,
their thermodynamic temperature can be expressed as a negative
quantity on the kelvin scale.
A substance with a negative temperature is not colder than absolute
zero, but rather it is hotter than infinite temperature. As Kittel and
Kroemer (p. 462) put it, "The temperature scale from cold to hot runs:
+0 K, . . . , +300 K, . . . , +∞ K, −∞ K, . . . , −300 K, . . . , −0 K."
The inverse temperature β = 1/kT (where k is Boltzmann's constant)
scale runs continuously from low energy to high as +∞, . . . , −∞.
From Positive and negative picokelvin temperatures:
... of the procedure for cooling an assembly of silver or rhodium nuclei
to negative nanokelvin temperatures.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 8,
"answer_id": 0
} |
Can I study Quantum Computing or Quantum Mechanics with an Engineering background? I am currently studying Electrical & Electronic Engineering. I wish to pursue Quantum Mechanics or Quantum Computing as my research subject. Is it possible for me to do my M.Tech. and then pursue my research subject? What are the prerequisites for studying these subjects? I would be grateful if you could help me.
| Mathematical prerequisites for studying introductory Quantum Mechanics are basics in complex numbers, Fourier analysis, differential equations and linear algebra. I think it is also necessary to have a grounding in Classical Mechanics. It would also help if you are comfortable working with probabilities at a basic level.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Explanation for $E$ not falling off at $1/r^2$ for infinite line and sheet charges? For an infinite line charge, $E$ falls off with $1/r$; for an infinite sheet of charge it's independent of $r$! The infinitesimal contributions to $E$ fall off with $1/r^2$, so why doesn't the total $E$ fall off the same way for the infinite line and sheet charges?
| $E$ falls off at $1/r^2$ when there are 3 degrees of freedom for the field lines to spread. When you have an infinite line in three dimensional space, that's equivalent to having the field spread from a point in a cross section of this space, which goes as $1/r$. Similarly, field lines spreading from a flat sheet have only 1 remaining degree of freedom to spread in, it's like spreading from a point in 1-dimensional space, which goes as $1/r^0$.
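This scaling is easy to verify by brute force: superpose many $1/r^2$ point-charge contributions for a long line and a large sheet, and compare the field at two distances. A quick numerical sketch (my addition; finite sizes stand in for "infinite", so the ratios are only approximate):

```python
import numpy as np

def field_strength(points, obs):
    # Superpose 1/r^2 Coulomb contributions (unit charges, k = 1)
    # and return the magnitude of the total field at `obs`.
    d = obs - points                        # vectors charge -> observer
    r = np.linalg.norm(d, axis=1)
    return np.linalg.norm((d / r[:, None] ** 3).sum(axis=0))

# "Infinite" line of charge along the x-axis
n = 20001
line = np.c_[np.linspace(-500.0, 500.0, n), np.zeros(n), np.zeros(n)]
E1 = field_strength(line, np.array([0.0, 1.0, 0.0]))
E2 = field_strength(line, np.array([0.0, 2.0, 0.0]))
print(E1 / E2)        # ~2, i.e. E ~ 1/r rather than 1/r^2

# "Infinite" sheet of charge in the z = 0 plane
g = np.linspace(-200.0, 200.0, 401)
X, Y = np.meshgrid(g, g)
sheet = np.c_[X.ravel(), Y.ravel(), np.zeros(X.size)]
E1s = field_strength(sheet, np.array([0.0, 0.0, 1.0]))
E2s = field_strength(sheet, np.array([0.0, 0.0, 2.0]))
print(E1s / E2s)      # ~1, i.e. E independent of r
```

Doubling the distance halves the field for the line and leaves it essentially unchanged for the sheet, exactly as the degrees-of-freedom argument predicts.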
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Reflectance vs. Thin Metal film Thickness Graph Is there formula that gives reflectance of very thin film of given metal (tens of nanometers) to the visible light of given wavelength(808nm) ? Which properties of metals are needed for the formula ?
I would like to draw a plot of reflectance that is a function of titanium film thickness. Thanks
| You need to know the index of refraction of the metal AND the substrate. (You didn't mention the substrate, but I assume your 10nm-thick film is not floating in space!!)
This is a three-layer structure: Air, thin film, substrate. You need to know wavelength, the incident angle, the refractive index of all three layers, and the thickness of the film. With those parameters in hand, you can do the calculation.
The calculation is conventionally set up using the transfer matrix method. You can find many programs online. (This page alone has 2 programs, plus links to 9 others on various different websites.) But with only three layers, the formula is sufficiently simple that you can probably do the calculation more quickly by hand.
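For three layers the transfer-matrix result collapses to the classic Airy formula, $r = (r_{01} + r_{12}e^{2i\beta})/(1 + r_{01}r_{12}e^{2i\beta})$ with $\beta = 2\pi n_1 d/\lambda$, so the reflectance-vs-thickness plot is a few lines of code. A sketch at normal incidence (my addition; the film index below is only a placeholder — look up tabulated optical constants for titanium at 808 nm, and the glass substrate is an assumption):

```python
import numpy as np

def reflectance(n0, n1, n2, d, lam):
    """Normal-incidence reflectance of a single film (index n1, thickness d)
    on a substrate (index n2), via the Airy summation -- equivalent to the
    2x2 transfer-matrix result for an air/film/substrate stack."""
    r01 = (n0 - n1) / (n0 + n1)            # Fresnel amplitude coefficients
    r12 = (n1 - n2) / (n1 + n2)
    beta = 2 * np.pi * n1 * d / lam        # complex phase thickness
    rho = (r01 + r12 * np.exp(2j * beta)) / (1 + r01 * r12 * np.exp(2j * beta))
    return abs(rho) ** 2

lam = 808e-9
n_film = 3.1 + 4.0j     # PLACEHOLDER for Ti at 808 nm -- use measured data
n_glass = 1.5           # assumed substrate index
for d in (0.0, 10e-9, 20e-9, 50e-9, 100e-9):
    print(f"{d * 1e9:5.0f} nm   R = {reflectance(1.0, n_film, n_glass, d, lam):.3f}")
```

Two sanity checks on the conventions: at $d = 0$ the formula reduces to the bare air/substrate reflectance, and for thick films it saturates at the bulk-metal value $|r_{01}|^2$.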
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Dual Resonance Model: Fermions I am going through Ramond's 1971 paper Dual Theory for Free Fermions Phys Rev D3 10, 2415 where he first attempts to introduce fermions into the conventional dual resonance model.
I get the 'gist' of what he's doing: he draws an analogy of the bosonic oscillators satisfying the Klein-Gordon equation, and extends it to incorporate some version of the Dirac equation. Great.
Now without resorting to string theory (since I know nothing about it) and perhaps minimally resorting to field theory (after all, this is still S-matrix theory, right?), how can I understand his correspondence principle (eqn 3)?
$$p^2-m^2=\langle P\rangle\!\cdot\!\langle P\rangle-m^2\rightarrow\langle P\!\cdot\! P\rangle-m^2$$
(the same correspondence principle appears in Frampton's 1986 book "Dual Resonance Models" equation 5.63). Is this a special property of harmonic oscillators?
| Ramond provides some explanation of his usage of averages in a recollection paper, "Early supersymmetry on the prairie".
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Why does burning magnesium explode when sprinkled with water? Magnesium powder burns extremely well and reaches temperatures of 2500°C. However, attempts to extinguish such a magnesium fire with conventional water (e.g. from a garden hose) only make it worse: the flame grows astronomically and the whole thing gets even hotter. Why is this?
| Burning magnesium reacts with water and liberates hydrogen gas, which then ignites. The combination of hot magnesium, hydrogen and water is not a good one, and the sprayed water spreads the burning metal around, causing more problems than were already there.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
What is a good introductory book on quantum mechanics? I'm really interested in quantum theory and would like to learn all that I can about it. I've followed a few tutorials and read a few books but none satisfied me completely. I'm looking for introductions for beginners which do not depend heavily on linear algebra or calculus, or which provide a soft introduction for the requisite mathematics as they go along.
What are good introductory guides to QM along these lines?
| Many recommendations have already been made. I would just like to recommend Principles of Quantum Mechanics by Ramamurti Shankar.
I like this book because it starts with all the necessary algebra, then goes into the operator formulation of classical mechanics needed in quantum mechanics, and then into quantum mechanics itself.
I would recommend it over Griffiths for a person who is not great at linear algebra and is at the preliminary level. After that Griffiths is fine.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103",
"answer_count": 19,
"answer_id": 9
} |
Must all symmetries have consequences? Must all symmetries have consequences?
We know that translational invariance, for example, leads to momentum conservation, etc, cf. Noether's Theorem.
Is it possible for a theory or a model to have a symmetry of some kind with no physical consequences at all for that symmetry?
| What do you mean by "consequences"?
Classically, topological numbers might fit that bill. In quantum field theory, that depends upon whether it is sensible to consider superpositions of topological sectors. If there is an S-duality, it might possibly be.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Electrial Conductivity of Thin Metal Films What is the best way to find specific/electric conductivity which is dependent of very thin film thickness?
| One way to do this is either a two point or four point measurement - the four point will be more accurate if you have significant contact resistance. If not, a simpler two point setup will be sufficient.
One basic setup is to pass a known current between two probes which lie on the outside of two other probes, which you are trying to measure the voltage difference across. Using the known current and measured voltage (and a known geometry), you can deduce the sheet resistance in Ohms (per square). From that, you can then multiply by the film thickness to get resistivity.
More information from:
http://en.wikipedia.org/wiki/Four-terminal_sensing
http://en.wikipedia.org/wiki/Sheet_resistance#section_2
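As a concrete sketch of the last step (measurement numbers are made up; the $\pi/\ln 2$ factor is the standard geometric correction for a collinear four-point probe on a thin, laterally large film):

```python
import math

def sheet_resistance(voltage, current):
    """Sheet resistance (ohm/square) from a collinear four-point probe,
    assuming film thickness << probe spacing and a sample much larger
    than the probe head: geometric factor pi/ln(2) ~= 4.532."""
    return (math.pi / math.log(2)) * voltage / current

def resistivity(voltage, current, thickness_m):
    # resistivity (ohm*m) = sheet resistance (ohm/sq) * film thickness (m)
    return sheet_resistance(voltage, current) * thickness_m

# hypothetical measurement: 1 mV across the inner probes at 1 mA, 50 nm film
rs = sheet_resistance(1.0e-3, 1.0e-3)
print(rs)                                   # ~4.53 ohm/sq
print(resistivity(1.0e-3, 1.0e-3, 50e-9))   # resistivity in ohm*m
```

Repeating the measurement for films of different thickness then gives the thickness dependence of the resistivity directly.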
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Hawking radiation from point of view of a falling observer This paper tells that Hawking claimed that the falling to a black hole observer will not detect any radiation. But only because the frequency of the Hawking radiation will be of the order $1/R_s$ so that the falling observer will not have a suitable detector.
This argument seems fallacious to me. First, the falling observer can communicate with a distant observer who has a suitable detector. That observer will tell him that the BH indeed evaporates. So its radius will decrease for both the falling and the distant observer.
Before the falling observer touches the horizon, the radius of the BH will become smaller than his own dimensions, so he will be able to detect the radiation. The smaller the BH becomes, the higher the frequency of the Hawking radiation will be. At some point the apparent temperature of the BH for the falling observer will reach infinity, so that the BH will explode.
Am I right that this claim by Hawking is fallacious and that in fact all outside observers will detect Hawking radiation sooner or later?
| According to http://edoc.ub.uni-muenchen.de/6024/1/Deeg_Dorothea.pdf, section 5.3.2, the Hawking radiation for the free-falling observer vanishes at the event horizon if the observer is at rest at the horizon, but is nonzero above the horizon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Understanding Tensors I don't seem to be able to visualize tensors. I am reading The Morgan Kauffman Game Physics Engine Development and he uses tensors to represent aerodynamics but he doesn't explain them so I am not really able to visualize them. Please explain in very simple ways. I just want to understand the basics.
| You could probably do a lot worse than taking a look at the video "What's a Tensor?" by Daniel Fleische on YouTube: http://www.youtube.com/watch?v=f5liqUk0ZTw
He gives a nicely visual, but maths-lite answer to that question by using children's blocks, small arrows, a couple of pieces of cardboard and a pointed stick.
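Since the book in question is about game physics, one concrete way to see a rank-2 tensor is as a linear machine that eats a vector and returns a vector, generally pointing in a different direction. A small sketch (my addition, with made-up numbers) using the inertia tensor of a solid box:

```python
import numpy as np

# Inertia tensor of a solid box with half-extents (a, b, c), about its
# center and in its principal axes: I_xx = m/3 * (b^2 + c^2), etc.
m, a, b, c = 2.0, 1.0, 2.0, 3.0
I = (m / 3.0) * np.diag([b * b + c * c, a * a + c * c, a * a + b * b])

omega = np.array([0.0, 1.0, 0.0])      # spin about a principal axis
print(I @ omega)                       # angular momentum, parallel to omega

omega2 = np.array([1.0, 1.0, 0.0])     # spin about a non-principal axis
print(I @ omega2)                      # NOT parallel to omega2 -- this is
                                       # why I must be a tensor, not a scalar
```

A scalar could only stretch a vector; the tensor can also rotate it, which is exactly what aerodynamic and inertia tensors do in a physics engine.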
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Bound states in QCD: Why only bound states of 2 or 3 quarks and not more? Why when people/textbooks talk about strong interaction, they talk only about bound states of 2 or 3 quarks to form baryons and mesons?
Does the strong interaction allow bound states of more than 3 quarks?
If so, how is the stability of a bound state of more than 3 quarks studied?
| There is no known reason that you can't have bound states like $qq\bar{q}\bar{q}$ or $qqqq\bar{q}$ or higher number excitations, but none have been observed to date.
You do have to make a color-neutral state, of course.
In the mid-2000s some folks thought for a while that they had evidence of pentaquark states (of the $qqqq\bar{q}$ kind), but it was eventually concluded that they were wrong.
Added June 2013: Looks like we may have good evidence of four-quark bound states, though the detailed structure is not yet understood, and in the comments Peter Kravchuk points out that pentaquarks have come back while I wasn't paying attention (and the same state, too). Seems some egg may have moved from face to face.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
} |
Height of the atmosphere - conflicting answers Okay. I have two ways of working out the height of the atmosphere from pressure, and they give different answers. Could someone please explain which one is wrong and why? (assuming the density is constant throughout the atmosphere)
1) $P=h \rho g$, $\frac{P}{\rho g} = h = \frac{1.01\times 10^5}{1.2\times9.81} \approx 8580\,\mathrm{m}$
2) Pressure acts over SA of Earth. Let r be the radius of the Earth. Area of the Earth is $4 \pi r^2$
Volume of the atmosphere is the volume of a sphere with radius $(h+r)$ minus the volume of a sphere with radius $r$.
$\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$
Pressure exerted by the mass of the atmosphere is:
$P=\frac{F}{A}$
$PA=mg$
$4\pi r^2 P = \rho V g$
$4\pi r^2 P = \rho g (\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3)$
$\frac{4\pi r^2 P}{\rho g} = \frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$
$3 \times \frac{r^2 P}{\rho g} = (h+r)^3 - r^3$
$3 \times \frac{r^2 P}{\rho g} + r^3 = (h+r)^3$
$(3 \times \frac{r^2 P}{\rho g} + r^3)^{\frac{1}{3}} - r = h$
$(3 \times \frac{(6400\times10^3)^2 \times 1.01 \times 10^5}{1.23 \times 9.81} + (6400\times10^3)^3)^{\frac{1}{3}} - (6400\times10^3) = h = 8570m$
I know from Occam's razor that the first is the right one, but surely, since $h\rho g$ comes from considering the weight of the fluid above, say, a 1 m$^2$ square, considering the weight of the atmosphere above a sphere should give the same answer?
| The first formula is just a first order expansion in $1/r$ of the second formula which is thus the exact one.
The expansion is:
$$h = \frac{P}{\rho g} - \left( \frac{P}{\rho g} \right)^2 \frac{1}{r} + \frac{5}{3} \left( \frac{P}{\rho g} \right)^3 \frac{1}{r^2} + \ldots$$
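This is easy to check numerically (my addition, using $\rho = 1.2\ \mathrm{kg/m^3}$ in both formulas; note the question mixes $1.2$ and $1.23$ between its two parts, which accounts for part of the apparent 8600 m vs 8570 m gap):

```python
P, rho, g, r = 1.01e5, 1.2, 9.81, 6400e3
a = P / (rho * g)                                 # method 1: flat-slab height
h_exact = (3 * r**2 * a + r**3) ** (1 / 3) - r    # method 2: spherical shell
h_series = a - a**2 / r + (5 / 3) * a**3 / r**2   # the expansion above
print(a, h_exact, h_series)
# a ~ 8580 m; h_exact and h_series agree closely, sitting about
# a^2/r ~ 11.5 m below a -- the first-order curvature correction
```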
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to charge an object with electricity I know this is a rather basic question, but how do you charge an object? Not a battery, an object. I'm guessing it involves static electricity, but I'm not sure. Some resources I've been reading talk about charging two objects with opposing voltages, and I am trying to figure out how you do it. I think you do it with DC current, but past that, I'm not sure.
Here is the paper I am talking about: http://www.avonhistory.org/school/gravitor.htm
| To charge an object, you first need to make sure that it is insulated, so the charge cannot leak away. That is easy if you are charging an insulator, but if you want to charge a metal object you need to mount it on an insulator as metals conduct electricity.
Then you can charge the object. To do this you need to add or remove electrons to/from the object. You can easily do this by rubbing it with a dielectric material. See for example the Van de Graaff generator. You always create equal and opposite charges: if your metal becomes positive, the rubbing material will be negative. The positive object has electrons removed, the negative one has them added.
I don't know what you think of the article you linked, but controlling gravity is something that cannot be done with electricity - if it is possible at all, and there are good reasons for thinking it's not.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Is dimensional analysis used outside fluid mechanics and transport phenomena? Most dimensionless numbers (at least the ones easily found) used for dimensional analysis are about fluid dynamics, or transport phenomena, convection and heat transfer - arguably also sort of fluid mechanics.
My understanding of dimensional analysis is the following: Derive dimensionless numbers from the description of a system, find the ones physically meaningful, and use them to compare different situations or to scale experiments.
Is this possible in other fields, like classical mechanics, and their engineering applications?
Example: describe a horizontal beam by:
$$
X=\frac
{\text{forces acting on the beam}}
{\text{forces beam can withstand without plastic deformation}}
$$
Both parts of the ratio being functions of shape, density, gravity, material constants etc.
My assumption is yes, it's possible, but most fields outside the sort-of fluid mechanics described above are easy enough to calculate without dimensional analysis.
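The beam ratio above can indeed be made dimensionless. A hedged sketch for one special case (my addition: a rectangular cantilever bending under its own weight, with illustrative numbers):

```python
def beam_load_number(rho, g, L, h, sigma_yield):
    """Dimensionless ratio X = (max bending stress from self-weight) /
    (yield stress) for a rectangular cantilever of length L and depth h:
    M_max = w L^2 / 2 with w = rho g b h, section modulus S = b h^2 / 6,
    hence sigma_max = 3 rho g L^2 / h (the width b cancels)."""
    return 3.0 * rho * g * L**2 / (h * sigma_yield)

# a 2 m steel cantilever, 5 cm deep (illustrative numbers)
X = beam_load_number(7850.0, 9.81, 2.0, 0.05, 250e6)
print(X)   # ~0.074: same X means same proximity to yield, at any scale
```

Because $X \propto L^2/h$, scaling a design up uniformly by a factor $k$ multiplies $X$ by $k$ — precisely the kind of conclusion dimensional analysis delivers without solving the full beam equations.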
| Particle physics uses dimensional analysis quite often, not only to derive and verify equations, but also to understand the physics behind many quantities that are not classical.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
How does nature prevent transient toroidal event horizons? How does nature prevent transient toroidal event horizons, and does it really need to?
Steps to construct a (transient) toroidal event horizon in a asymptotically flat Minkowski spacetime:
*
*take a circle of radius $R$
*take $N$ equidistant points on the circle.
*consider the tangent line at each equidistant point, and label the two infinite directions on each tangent as clockwise or counterclockwise relative to the circle
*pick an orientation (CW or CCW), and then throw black holes of radius $r \approx \frac{\pi R}{N - \pi}$ along each tangent line from asymptotic infinity. Choose the tangential momenta with which they are sent to be $p$
When all the black holes arrive at the circle at time $t_0$, their event horizons become connected. Even assuming that nature abhors this event horizon topology, it will take at least $t= \frac{R}{c}$ for the event horizon to reach the center of the circle. So there is "plenty" of time for causal curves to pass through the inner region and reach infinity
How does the topology censorship theorem avoid this to happen?
| The problem with this argument is that in 4d, the horizon of a black hole scales linearly with the mass. If you divide a circle into N segments, and have black holes whose radius is order R/N, where R is the radius of the big circle, their total mass is order R, so that the light rays passing through the center can be trapped by the total gravitational field of all the black holes inside.
This argument is specific to 4d, where the mass/radius relationship is linear. In 4d, you probably can't form a toroidal horizon even transiently. But in 5d and above, you can have spinning black holes with a toroidal horizon topology, and this argument is what shows that this is possible. The exact stable spinning toroidal black hole solutions were found in the last decade, and are now a major focus of research.
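The mass bookkeeping behind this can be sketched in a few lines (my numerical illustration, using the 4d linear mass-radius relation $M = c^2 r/2G$ and the $r \approx \pi R/(N-\pi)$ choice from the question):

```python
import math

G, c = 6.674e-11, 2.998e8   # SI values

def horizon_ratio(R, N):
    """(Combined Schwarzschild radius of all N small holes) / R, for
    N black holes of horizon radius r = pi*R/(N - pi) each.  With the
    4d relation M = c^2 r / (2G), this is simply N * r / R."""
    r = math.pi * R / (N - math.pi)
    M_total = N * c**2 * r / (2 * G)
    return 2 * G * M_total / (c**2 * R)

for N in (10, 100, 1000, 10**6):
    print(N, horizon_ratio(1.0, N))
# tends to pi ~ 3.14 > 1 as N grows: however finely you slice the ring,
# the total mass stays large enough to trap light crossing the center
```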
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Special Relativity Second Postulate That the speed of light is constant for all inertial frames is the second postulate of special relativity, but this does not mean that nothing can travel faster than light.
*
*so is it possible the point that nothing can travel faster than light was wrong?
|
So is it possible the point that nothing can travel faster than light was wrong?
No. The "nothing can travel faster than light" restriction logically follows from the two postulates of special relativity.
I'll try to briefly show you how to get to the conclusion.
*
*First you have to convince yourself that the two postulates imply the phenomenon called the relativity of simultaneity. That is the first thing discussed in every textbook on special relativity, so I'm not getting into it.
*Now we use the following claim from p.1: if one were able to get from event A to event B only by moving with faster-than-light speed (spacelike events), then we can change the time order of the events A and B just by changing our reference frame. We can make A and B simultaneous, make A precede B, or make B precede A -- all that just by moving to a different reference frame.
*Now we can start a proof by contradiction. Suppose that we have some way to transmit faster-than-light signals. It then immediately follows from p.2 that we can transmit instantaneous signals (by making emission and reception events simultaneous) and even signals that are received before they are transmitted (by swapping the order of emission and reception events).
*Imagine that we have two guys $\alpha$ and $\beta$, equipped with such a spectacular communication channel. Then $\alpha$ could send a signal to $\beta$ "back in time", and then $\beta$ will return the signal to $\alpha$ instantaneously. Which means that $\alpha$ will receive his own signal from the future. Such ability instantly leads one to lots of self-contradictory situations. Hence our assumption was false.
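Point 2 above — that spacelike-separated events can swap their time order under a boost — is a one-line Lorentz-transformation computation. A minimal sketch (my addition, in units with $c = 1$):

```python
import math

def t_prime(t, x, v, c=1.0):
    """Time coordinate of event (t, x) in a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (t - v * x / c**2)

# Spacelike pair: B is later than A in this frame, but too far away for
# light to connect them (dt = 1 < dx = 3).
A, B = (0.0, 0.0), (1.0, 3.0)
for v in (0.0, 0.5, -0.5):
    dt = t_prime(*B, v) - t_prime(*A, v)
    print(f"v = {v:+.1f}c:  t'_B - t'_A = {dt:+.3f}")
# at v = +0.5c the sign flips: B now precedes A, so a faster-than-light
# signal from A to B would run backward in time in that frame
```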
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 3
} |
Is it wrong to talk about wave functions of macroscopic bodies? Does a real macroscopic body, like a table, a human or a cup, permit description by a wave function? When is it possible and when not?
For example, in "Statistical Physics, Part I" by Landau & Lifshitz it is argued that such systems must be described via the density matrix (chapter I, on the statistical matrix). As far as I got it, roughly speaking, macroscopic bodies are so sensitive to external interactions that they can never be counted as closed systems; one has to include everything else to form a system. Is my interpretation right?
When is it wrong to talk about wave functions of bodies that surround us?
| Those degrees of freedom of a quantum system that are described by a pure partial state must be very well shielded from unwanted interactions with the environment, otherwise they will be decohered to a mixed state in a moment. This shielding can be done for a few degrees of freedom (like a superconducting current) but not for the position and momentum of macroscopic bodies. Therefore these dof are always described by density matrices.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 7,
"answer_id": 0
} |
Is there a non-perturbative renormalization? If so, how does it work? Is there a method to renormalize a theory without using perturbative expansions for the divergences? For example, is there a method to get masses and other renormalized quantities without using expansions and counterterms?
I have heard about lattice gauge theory,
but what other solutions or examples of non-perturbative renormalization (numerical or analytical) are there?
| The problem with renormalizations in QFT is that, while the counter-terms are known, they cannot be joined with the bare Lagrangian, but are supposed to be treated perturbatively. A quote of the Lagrangian from A. Zee (2003 edition, page 175):$$L= \left [ \frac{1}{2}[(\partial \phi)^2 -m_P ^2\phi^2]-\frac{\lambda_P}{4!}\phi^4 \right ] + A(\partial \phi)^2 +B\phi^2+C\phi^4\quad (1)$$ The Lagrangian in the square brackets by itself gives wrong equations with wrong solutions. The wrongness can be represented as "corrections" to the mass, charge and the field strength. The rest is OK. In order to subtract these unnecessary and harmful corrections in a systematic way, one introduces the counter-terms (everything outside the big square brackets in (1)) and joins them to the perturbation $\frac{\lambda_P}{4!}\phi ^4$ in the perturbation theory. Then, with an appropriate choice of $A$, $B$, and $C$, one can cancel or subtract the harmful corrections in each order appearing due to the "initial interaction" $\frac{\lambda_P}{4!}\phi ^4$. These heavy techniques ("know-how") are offered nowadays because of a refusal to recognize the wrongness of the original Lagrangian, which is based on a wrong understanding of physics and on mathematical errors.
An example of exact renormalization is presented in my toy model here: http://arxiv.org/abs/1110.3702 and here. The subtraction of counter-terms is done exactly in the wrong (model) Lagrangian, so the reminder is a physical Lagrangian with no problem.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
What criteria distinguishes causality from retrocausality? The brilliant philosopher David Hume remarked that if two events are always found to be correlated to each other with one event happening prior to the other, we call the earlier event the cause and the latter event the effect. However, it has been pointed out that two events can be correlated, with one happening after the other, but only because they both have the same common cause, and not because of any direct causation. It's not "post hoc, ergo propter hoc".
What stops me from defining a new verb "retrocause"? It works just like Hume's definition, but only in reverse. We note the presence of a broken egg is always correlated perfectly (or very very nearly so up to an exponential degree of accuracy. Loschmidt reversal, anyone?) with the existence of an unbroken egg in its past. So, we say broken eggs retrocause unbroken eggs. In the same manner, unbroken eggs retrocause hens laying eggs. And hens retrocause female chicks. And female chicks retrocause hatching eggs. And so on and so forth.
In delayed choice experiments, what stops us from saying the choice of apparatus settings retrocause the preferred basis of a quantum system in its past?
What distinguishes causality from retrocausality?
| I don't see any problem at all with your definition of retrocausality. After all, it's just a definition... So your first question is moot.
The difference between causality and retrocausality is the parameter time. On the quantum scale we do use equations such that time moves backward to describe certain interactions. However, on the larger scale these many interactions form a system where time moves forward (i.e. positive) not backward (negative). Thus we tend to use causality, not retrocausality.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 10,
"answer_id": 9
} |
Hawking radiation and black hole entropy Is black hole entropy, computed by means of quantum field theory on curved spacetime, the entropy of matter degrees of freedom i.e. non-gravitational dofs? What is one actually counting?
| No it isn't. This is a mysterious thing in quantum field theory on curved space, as first noted by 't Hooft. If you assume there is a certain amount of entropy in the quantum fields surrounding the black hole, due to their thermal nature, you might estimate that there is a local contribution to the entropy from each approximate mode at the correct local Hawking temperature of the black hole.
This entropy is divergent in quantum fields in curved space, because the time dilation factor means that, at a fixed energy, the number of modes diverges as you approach the horizon. This is one of the paradoxes that led 't Hooft to the holographic principle.
Within AdS/CFT models, it is easy to give an answer-- the entropy of a black hole is the entropy of its CFT description. This includes systems like stacked branes, in which case the entropy of the black hole is the number of vacuum states. This is Strominger and Vafa's famous calculation of 1995-96. This entropy coincides with the extremal horizon area (although in this case, the black hole is extremal, so the temperature is zero).
Within string theory, this mystery is essentially resolved. The entropy is the entropy of the microscopic constituents of the black hole. It is not resolvable in curved-space QFT because of the 't Hooft divergence, and it is not well resolved in an agreed upon manner in any other approach (this means loops).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Shor's algorithm and Bohmian Mechanics Do quantum computer's tell us anything about the foundations of quantum theory? In particular Shor argued in the famous thread by 't Hooft
Why do people categorically dismiss some simple quantum models?
that quantum computation was at odds with 't Hooft's ideas.
Does quantum computation tell us anything new about hidden variables like Bohmian mechanics (which, at least so far, is 100% in agreement with everything we know about physics, contrary to what some people (e.g. Motl) claim)?
| Let me first mention a recent paper on quantum computing in the Bohm interpretation - http://arxiv.org/abs/1205.2563 , FWIW, though I cannot offer any comments on it right now, sorry.
Another thing. As nightlight noticed in his posts on hidden variables, there is an off-the-shelf mathematical trick (an extension of the Carleman linearization) that embeds a system of partial differential equations into a quantum field theory (see, e.g., my article in Int'l Journ. of Quantum Information ( akhmeteli.org/akh-prepr-ws-ijqi2.pdf ), the end of Section 3, and references there. There is also a substantially updated version at arxiv.org/abs/1111.4630).
nightlight also mentioned what that may mean for quantum computing. One can imagine a situation where Nature is described correctly by a quantum field theory (QFT), whereas actually only a limited subset of the entire set of states of the QFT is realized in Nature, the subset that is correctly described by a (classical) system of partial differential equations, so there are obvious limitations on how fast quantum computing can be. Of course this is highly hypothetical, but perhaps quite relevant to the question above on the relation between quantum computing and foundations of quantum theory.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
How can I estimate the elasto-optic coefficients ($p_{11}$ and $p_{12}$) of a material? I am attempting to estimate the elasto-optic coefficients ($p_{11}$ and $p_{12}$) of $\mathrm{TiO}_2$ and $\mathrm{ZrO}_2$, where $p_{11}$ and $p_{12}$ refer to the elements of a strain-optic tensor for a homogeneous material as given in Hocker (Fiber-optic sensing of pressure and temperature, 1979).
I have found a document which specifies that the longitudinal elasto-optic coefficient ($p_{12}$) can be estimated using the Lorentz-Lorenz relation that it gives as
$$p_{12} = \frac{(n^2 - 1)(n^2 + 2)}{3n^4}$$
however no reference is given, and other sources give the Lorentz-Lorenz relation as something rather different. For example Wikipedia says that the equation relates the refractive index of a substance to its polarizability and gives it as
$$\frac{n^2 - 1}{n^2 + 2} = \frac{4\pi}{3}N\alpha$$
which bears only a vague relation to the earlier equation.
Does anyone know of any other ways in which to estimate the elasto-optic coefficients of a material?
| Boyd is a useful reference. See for example:
E.L. Buckland and R.W. Boyd, "Electrostrictive contribution to the intensity-dependent refractive index of optical fibers," Opt. Lett. 21:1117 (1 Aug. 1996)
where the dielectric constant epsilon is the square of the refractive index n.
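For what it's worth, the estimate quoted in the question is trivial to evaluate. A sketch in Python; the refractive indices below are assumed round literature values for illustration, not measurements:

```python
# Estimate the longitudinal elasto-optic coefficient p12 from the
# Lorentz-Lorenz-type relation quoted in the question:
#   p12 = (n^2 - 1)(n^2 + 2) / (3 n^4)
# The refractive indices are assumed typical values, for illustration only.

def p12_estimate(n):
    """Rough p12 from refractive index n (the question's formula)."""
    return (n**2 - 1) * (n**2 + 2) / (3 * n**4)

for material, n in [("fused silica", 1.45),
                    ("TiO2 (rutile, avg.)", 2.6),
                    ("ZrO2", 2.15)]:
    print(f"{material}: n = {n}, p12 ~ {p12_estimate(n):.3f}")
```

For fused silica ($n \approx 1.45$) this gives $p_{12} \approx 0.34$, versus the measured value of about 0.27, so it is an order-of-magnitude estimate at best, which is roughly what one should expect from a one-parameter relation.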
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
The observation of a non-SM resonance at 38 MeV Was reported here. Of course if this is real it is very exciting. It leads me to the question: given that it took so long to find this resonance at a meager 38 MeV, is it possible that all SUSY particles are hiding down in the MeV or KeV range (or lower)?
| User1247 pointed out my mistaken reading of the scale in a previous answer, now deleted.
Fortunately I found a pi0 mass plot from LHCb which shows that there is gamma-gamma mass resolution to clear up this point about a 38 MeV diphoton resonance.
By now they could provide us with a definitive plot.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
Why isn't Hawking radiation frozen on the boundary, like in-falling matter? From the perspective of a far-away observer, matter falling into a black hole never crosses the boundary. Why doesn't a basic symmetry argument prove that Hawking radiation is therefore also frozen on the boundary, and therefore not observable? Wouldn't the hawking radiation have to have started its journey before the formation of the black hole? Furthermore, wouldn't the radiation be infinitely red-shifted?
| Classically, this is true. Something exiting from a classical static black hole would have had to have started before the universe was created. The frozen-on-the-horizon view of a black hole is the view you get under classical general relativity. When you add quantum mechanics, this view is no longer quite valid, and you get Hawking radiation, which from a far-away observer's viewpoint, interacts with the infalling matter.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
does AdS/CFT implies that there is an CFT in physical horizons? so my rough understanding is that the AdS/CFT duality is some sort of isomorphism between an N dimensional gravitational theory and a N-1 dimensional conformal field theory on the boundary.
The boundary of our 3+1D spacetime is the union of the cosmological horizon plus the event or apparent horizons of black holes in it, which are 2+1D spaces
So my question is: does this imply that there are some conformal fields evolving in those 2+1D physical boundaries at this very moment, that map one-on-one the evolution of the 3+1D spacetime? like some sort of crystal ball?
can you influence what happens in the 3+1D space by affecting what happens in those 2+1D boundaries?
I intentionally made this question as wild as possible so we mere non-stringy mortals can know where we stand with all this duality business
| The answer is no, not from AdS/CFT, but yes from the holographic principle which gives rise to AdS/CFT, and which AdS/CFT confirms. The reason is the "C" in "CFT": the conformal symmetry is a special property of extremal black holes, whose horizons are in curved space. The cosmological horizon is locally flat space; every near-horizon geometry is Rindler.
The relation between the degrees of freedom of locally flat horizons and the space nearby and inside is not worked out at all; there are no examples of a real thermal black hole boundary correspondence. What you do have are states corresponding to black holes in AdS space, but the description is then on the AdS boundary, not on the black hole boundary. Further, if you have two descriptions of the same small-size black hole in two different AdS spaces, they can't be matched up in any known way.
So the description of AdS/CFT is only a shadow of the full holographic principle, which does suggest something exactly like what you describe in the question.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Bernoulli equation from energy conservation I have derived the energy conservation equation:
\begin{equation}
\frac{\partial}{\partial t} \left [ \frac{1}{2} \rho v^2 + \varepsilon + \rho \phi \right ] + \frac{\partial}{\partial x_j} \left [ \frac{1}{2} \rho v^2 v_j + \rho h v_j + \rho \phi v_j \right ]=0
\end{equation}
where $\varepsilon$ is internal energy and $\rho \phi$ is the gravitational energy.
I was trying to derive the Bernoulli equation from the above equation for time-independent flow. It is not clear to me which term is the Bernoulli term. How can I derive the Bernoulli equation?
|
I was trying to derive the Bernoulli equation from the above equation
for time independent flow
If you are studying something time-independent then you just set $\frac{\partial}{\partial t}$ to zero:
$$
\frac{\partial}{\partial x_j} \left [ \frac{1}{2} \rho v^2 v_j + \rho h v_j + \rho \phi v_j \right ]=0
$$
The next step is to get rid of $\rho$. Bernoulli's equation doesn't contain $\rho$, does it? We'll need the mass balance for the stationary state:
$$\frac{\partial \rho v_j}{\partial x_j} = 0$$
which together with the first equation leads to:
$$
\rho v_j \frac{\partial}{\partial x_j} \left [ \frac{1}{2} v^2 + h + \phi \right ]=0
$$
Now one should recall that Bernoulli's law is valid only along streamlines/pathlines, which coincide for a steady flow. If something called $A$ is constant along the vector field $\boldsymbol v$, then it should satisfy the equation
$$v_j \frac{\partial A}{\partial x_j} = 0$$
Actually it is just a derivative of $A$ along the vector field $\boldsymbol v$ --- if the derivative is zero then $A$ is constant along the vector field.
Thus we got the Bernoulli's law:
$$
\frac{1}{2} v^2 + h + \phi = const
$$
along the streamline/pathline for the stationary case.
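As a quick numerical sanity check of the final statement (a sketch, not part of the derivation; the tank geometry is invented): for steady outflow from an open tank, both the free surface and the jet are at atmospheric pressure, so the enthalpy term $h$ is the same at both points and drops out, and the Bernoulli sum is indeed the same at both ends of the streamline.

```python
# Torricelli outflow from an open tank as a check of
#   B = v^2/2 + h + phi = const along a streamline.
# The pressure (hence h) is atmospheric at both points, so only the
# kinetic and gravitational terms matter.  Numbers are made up.
import math

g = 9.81       # m/s^2
depth = 2.0    # m, assumed tank depth

# Point 1: free surface (v ~ 0, z = depth); point 2: outlet (z = 0).
v_surface = 0.0
v_outlet = math.sqrt(2 * g * depth)   # Torricelli's result

B1 = 0.5 * v_surface**2 + g * depth   # phi = g z
B2 = 0.5 * v_outlet**2 + g * 0.0

print(f"B at surface = {B1:.4f}, B at outlet = {B2:.4f}")
```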
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is current not 0 in a regular resistor - battery circuit immediately after you closed a circuit? In regular open circuits with either a capacitor or inductor element, (when capacitor is uncharged) with a battery, when a switch is closed to complete the circuit the current is said to be 0 because current doesn't jump immediately.
But in a circuit with just resistors, as soon as a switch is closed the current isn't 0?
Example is this question from 2008 AP Physics C Exam
http://apcentral.collegeboard.com/apc/public/repository/ap08_physics_c_em_frq.pdf
http://www.collegeboard.com/prod_downloads/ap/students/physics/ap08_physics_c_e&m_sgs_rev.pdf
Go to Question 2 for details.
| This question is working within the realm of 'circuit theory', which is an idealization useful for introductory teaching of electromagnetism. It is really a simplification of electrodynamic field theory, just a special case making useful assumptions. A lot of conceptual problems in circuit based questions come from forgetting that you are dealing with a slightly unreal situation.
The answer is as above: the current does not propagate instantaneously throughout the circuit, but at some finite speed $<c$. In introductory problems, however, this speed is much faster than anything you need to worry about.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to choose a suitable topic for PhD in Physics? After completion of graduate courses when a student is supposed to start real research in Physics, (to be more specific, suppose in high energy physics), how does one select the problem to work on? The area is vast, mature and lots of problems remain to be solved. This vastness of the field and various levels of difficulty of unsolved problems may give rise to confusion regarding choice of problem. The time one can spend at graduate school is limited (let us assume about four years after courses are over) Can anybody guide me or share one's views about this issue?
What type of problems should be avoided at PhD level? I guess problems which even the best theorists had failed to completely solve should be left. There are less difficult problems which are solved in collaboration of say three/four/five or more highly experienced physicists which may not be possible for a beginner who will be working practically alone. So should one start with the simplest unsolved problems? Or is it enough for a problem to be interesting to work on, irrespective of its level of difficulty? In general, what type of work is expected from a graduate student to be eligible for a PhD degree?
| This is what thesis advisors are for.
Indeed it is difficult for a student to identify a problem or topic area which is both interesting enough to potentially get you a job later on, but also has not yet been overgrazed by other physicists. That is why identifying a good thesis advisor, and convincing him to take you as a student, is the most critical task for a starting grad student.
Some advisors will involve you in their own research, carving out little subproblems you can tackle while you get up to speed. Some advisors don't collaborate with students but have a knack for identifying promising problems that haven't already been done. Some advisors just aren't very good advisors and leave students to sink or swim on their own. You need to carefully evaluate the options available at your institution.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |
Black-holes are in which state of matter? Wikipedia says,
A black hole grows by absorbing everything nearby, during its life-cycle. By absorbing other stars, objects, and by merging with other black-holes, they could form supermassive Black-holes
*
*When two black holes come to merge, don't they rotate with an increasing angular velocity as they come closer and closer? (How does this differ from a neutron star? I mean, which is more powerful?)
And it also says,
Inside of the event horizon, all paths bring the particle closer to the center of the black hole.
*
*What happens to the objects that are absorbed into a black hole? Which state are they really in now? They would've already been plasma during their accretion spin. Would they be deposited on the surface, or would they still be attracted and moved towards the center? If so, then the surface of a black hole couldn't be a solid.
| The matter inside a black hole would be a non-Newtonian liquid of subatomic particles, the mass being too heavy for an atom to hold itself together.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Does water need to pumped up from deep ocean? OTEC (Ocean Thermal Energy Conversion) utilizes the temperature gradient between cold deep ocean water, and warmer water to do work. I understand the pressure in the depths may be as high as a couple of orders greater than surface atmospheric pressure.
I also remember, vaguely, that a fluid moves from an area of high pressure to low pressure; wouldn't a sealed pipe merely need valves at the top to control the flow? Does water need to be pumped up out of the deeps?
| Typically, yes, the water does need to be pumped up.
Because if it released energy by rising, it would already have risen to the surface.
OTEC depends on a high-enough temperature difference between the lower-depth water intake and the higher-depth one, for that temperature difference to do enough work to provide some surplus power, in addition to the power needed to pump the water up.
Tepco's OTEC plant on Nauru (1982-3) reportedly generated 120kW electricity gross, of which 90kW was needed to operate the plant. The surplus 30kW was fed into the grid.
More context: OTEC is estimated to be viable with a ${\Delta}T$ of 20 Kelvin, so definitely the tropics, and predominantly the western Pacific. Unsurprisingly, Japan has been particularly active in OTEC. The global harnessable resource is estimated at $10^{13}W$, which is the same order of magnitude as total global energy consumption.
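The thermodynamics behind that $\Delta T$ threshold is easy to sketch; all numbers below are assumed typical values, not data from a specific plant:

```python
# Back-of-envelope Carnot limit for OTEC: warm surface water ~ 25 C,
# deep water ~ 5 C, i.e. the ~20 K difference mentioned above.
T_warm = 298.0   # K (assumed)
T_cold = 278.0   # K (assumed)

eta_carnot = 1 - T_cold / T_warm
print(f"Carnot limit: {eta_carnot:.1%}")   # only a few percent

# Real plants reach a fraction of this, which is why pumping power
# dominates, as in the Nauru figures above (90 kW of 120 kW gross).
gross_kw, pumping_kw = 120.0, 90.0
print(f"Nauru net fraction: {(gross_kw - pumping_kw) / gross_kw:.0%}")
```

With a Carnot ceiling near 7%, and real efficiencies well below that, a small parasitic pumping load already consumes most of the gross output.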
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Calculating equation of motion from action Suppose my action integral is $$S=\int d^4x(\nabla \times A)^2$$ and $\delta S$ gives $$\delta S =\int d^4x [2(\nabla \times A).(\nabla \times \delta A)]$$
I would like to calculate the coefficient of $\delta A$ from this action integral. But I am stuck. How can I separate the $\delta A$ from the term like this?
| Let's do what Heidar says and write it with indices, and identify the Lagrangian.
$$
L=\frac{1}{2}(\vec{\nabla}\times \vec{A})^2 = \frac{1}{2}\epsilon_{ijk}\partial_j A_k \epsilon_{ilm}\partial_l A_m
$$
where, if you haven't heard of it yet, you pretend there is a summation symbol for each repeated index. Then since there are no bare $A_i$ sitting by themselves, only $\partial_i A_j$s the only part of Lagrange's equations that will contribute are
$$
\partial_q \frac{\partial L}{\partial (\partial_q A_p)}
$$
which we set equal to zero following the equations. Then
$$
\frac{\partial L}{\partial (\partial_q A_p)}=\frac{1}{2}(\epsilon_{ijk}\delta_{jq}\delta_{kp}\epsilon_{ilm}\partial_l A_m+\epsilon_{ijk}\partial_j A_k \epsilon_{ilm}\delta_{lq}\delta_{mp})
$$
using
$$
\frac{\partial (\partial_i A_j)}{\partial (\partial_q A_p)}=\delta_{iq}\delta_{jp}.
$$
Then we have
$$
\partial_q \frac{\partial L}{\partial (\partial_q A_p)}=\partial_q(\epsilon_{iqp}\epsilon_{ijk}\partial_j A_k) = \partial_q ((\delta_{qj}\delta_{pk}-\delta_{qk}\delta_{pj})\partial_{j} A_k)=\partial_q (\partial_q A_p - \partial_p A_q)=0
$$
where i used the contracted epsilon identity and changed the repeated indices as i needed them in order to combine terms. Hope this helps.
EDIT:
Well, I'll still try and help out, hopefully I don't make anything any worse.
$$
\delta S = \int d^3 x \frac{1}{2}\epsilon_{ijk}\epsilon_{ilm}\delta(\partial_j A_k)\partial_l A_m+\int d^3 x\frac{1}{2}\epsilon_{ijk}\epsilon_{ilm}\partial_j A_k \delta(\partial_l A_m)
$$
Now with the variations $\delta$ we can interchange the order of $\partial$ and $\delta$
$$
\delta(\partial_i A_j)=\partial_i (\delta A_j)
$$
So with the two terms multiplied above we get
$$
\partial_j( \delta A_k)\partial_l A_m=\partial_j (\delta A_k \partial_l A_m)-\delta A_k \partial_j \partial_l A_m
$$
from the product rule. This helps isolate the variation of the field. Please (everyone) let me know if this is still confusing and/or wrong. Hope this helps.
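The contracted epsilon identity invoked in the first part can be checked mechanically. A throwaway sketch in plain Python (no libraries), looping over all free indices:

```python
# Check the contracted epsilon identity used above:
#   sum_i eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl

def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return (j - i) * (k - i) * (k - j) // 2   # gives +1, -1 or 0

def delta(a, b):
    return 1 if a == b else 0

ok = True
for j in range(3):
    for k in range(3):
        for l in range(3):
            for m in range(3):
                lhs = sum(eps(i, j, k) * eps(i, l, m) for i in range(3))
                rhs = delta(j, l) * delta(k, m) - delta(j, m) * delta(k, l)
                ok = ok and (lhs == rhs)

print("identity holds for all index values:", ok)
```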
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Gravitational inverse-square law I was looking at the gravitational inverse-square law:
$$
F_G = G \frac{Mm}{r^2}
$$
Does this law come from experimental data? Why is it an exact inverse-square law? Could it be
$$
F_G = G \frac{Mm}{r^{2.00000000000000001}}
$$
or there is a mathematical method to find exactly this law?
| It is exact in the Newtonian limit, i.e., for speeds slow compared to the speed of light, away from strong gravitational fields or horizons, and for big enough objects that quantum effects can be ignored. The $1/r^2$ comes from solving Poisson's equation $\nabla^2\phi=4\pi G \rho$. If there were corrections to this law they would come from additional higher-derivative terms. Such higher derivatives would not affect the $1/r^2$ behavior at large $r$. So, in the appropriate limit, yes, it is exact.
It is also extremely well confirmed observationally, since it is the only law that gives elliptical, or stable, planetary orbits.
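One can verify the vacuum case of Poisson's equation numerically: $\phi=-GM/r$ has vanishing Laplacian away from the source. A sketch with $GM$ set to 1 (an assumed convenience, not a physical value), using central finite differences:

```python
# Check that phi = -1/r (units with GM = 1, assumed for convenience)
# satisfies Laplace's equation away from the origin.

def phi(x, y, z):
    return -1.0 / (x * x + y * y + z * z) ** 0.5

def laplacian(f, x, y, z, h=1e-3):
    """Central second differences in all three directions."""
    return (f(x + h, y, z) + f(x - h, y, z)
          + f(x, y + h, z) + f(x, y - h, z)
          + f(x, y, z + h) + f(x, y, z - h) - 6 * f(x, y, z)) / h**2

val = laplacian(phi, 1.0, 2.0, -0.5)
print(f"laplacian of -1/r at a generic point: {val:.2e}")  # close to zero
```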
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Where does the heat flow in the Earth's crust switch from primarily solar to geothermal? Ok, maybe more of a geology question than physics, but maybe somebody has been involved in modeling these heat flows?
Essentially I'm asking if we know at what sort of depth the heat source becomes primarily from below rather than from above. I've heard that the fully solar domain is down to about 5 m, and that temperatures below that are fairly stable, but crucially the temperature at the bottom of a 100 m borehole is supposed to be around the mean temperature for that latitude, which suggests to me that solar is at least a large input even at that depth.
I'm struggling to find any reference to this exact point in the literature.
| To clarify your question a bit, the average heat flow is towards the surface (if you average over a long enough period) so I guess you're asking to what depth does the heat flow, i.e. temperature, vary with the time of day or season of the year. In that case Google for something like "diurnal variation temperature depth" and "annual variation temperature depth". You'll get hits like this one that states the diurnal variation is limited to 1 metre and the annual fluctuation to 9 to 12 metres.
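Those figures are consistent with a simple diffusion estimate: a periodic surface temperature decays into the ground as $e^{-z/d}$ with skin depth $d=\sqrt{2\alpha/\omega}$. A sketch (the soil diffusivity is an assumed textbook value; real soils vary by a factor of a few):

```python
# Thermal skin depth for a periodically heated half-space.
import math

alpha = 1e-6   # m^2/s, thermal diffusivity of soil (assumed typical value)

depths = {}
for name, period_s in [("diurnal", 86400.0), ("annual", 86400.0 * 365)]:
    omega = 2 * math.pi / period_s        # angular frequency of the forcing
    d = math.sqrt(2 * alpha / omega)      # skin depth of the thermal wave
    depths[name] = d
    # amplitude drops below ~5% of the surface swing at about 3 skin depths
    print(f"{name}: skin depth {d:.2f} m, negligible below ~{3 * d:.1f} m")
```

This gives roughly 0.17 m (diurnal) and 3.2 m (annual) skin depths, i.e. the variation is essentially gone below about half a metre and about ten metres respectively, matching the 1 m and 9 to 12 m figures in the reference.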
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
The vacuum light speed: Is it really constant, i.e., independent of location in space-time? I am by no means an expert in this field, however something puzzles me about the speed of light and the relativity of time and space (space-time).
Is it universally acknowledged that the speed of light (299,792,458 m/s) is the universal speed limit, and that nothing can travel faster than light? That is a measurement based on a man-made interpretation of time (hours, minutes, and seconds etc. are man-made; there is nothing natural dictating how long a second should be).
For instance, according to Einstein, time and space bend around the physical matter of the universe, so for example, time near or on the surface of a "super massive black hole" should be drastically slower, relative to that of earth. Lets say for example that for every second that passes on the black hole, 10 seconds pass on earth, so essentially time on the surface of the black hole is 10 times slower than the time on earth.
Given the example above, is the speed of light at the surface of the black hole still 299,792,458 m/s, or is it 2,997,924,580 m/s?
| As pointed out by Michael Duff here, the speed of light in vacuum is a mere conversion constant. So, just as Michael Duff puts it in his article, you can just as well ask if the number of liters to the gallon is independent of location in space-time.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
A problem of missing energy when charging a second capacitor A capacitor is charged. It is then connected to an identical uncharged capacitor using superconducting wires. Each capacitor has 1/2 the charge as the original, so 1/4 the energy - so we only have 1/2 the energy we started with. What happened?
My first thought was that the difference in energy is due to heat produced in the wire. It may be heat, or it may be that this is needed to keep equilibrium.
| Short answer: this is a textbook example of the limitations of ideal circuit theory. There seems to be a paradox until the underlying premises are examined closely.
The fact is that, if we assume ideal capacitors and ideal superconductors, i.e., ideal short circuits, there appears to be unexplained missing energy.
What's not being considered is the energy lost to radiation at the moment the two capacitors are connected together.
At the moment the capacitors are connected, in accord with ideal circuit theory, there should be an impulse (infinitely large, infinitely brief) of current that instantaneously changes the voltage on both capacitors.
But this ignores the self-inductance of the circuit and the associated electromagnetic effects. The missing energy is transferred to the electromagnetic field.
From the comments:
This answer is just plain wrong. – Olin Lathrop
and
Agreed with @OlinLathrop. - Lenzuola
If you find yourself in agreement with the comments above, consider the following excerpt from the exercise "A Capacitor Paradox" by Kirk T. McDonald, Joseph Henry Laboratories, Princeton University:
Two capacitors of equal capacitance C are connected in parallel by
zero-resistance wires and a switch, as shown in the lefthand figure
below.
Initially the switch is open, one capacitor is charged to voltage V0
and the other is uncharged. At time t = 0 the switch is closed. If
there were no damping mechanism, the circuit would then oscillate
forever, at a frequency dependent on the self inductance L and the
capacitance C. However, even in a circuit with zero Ohmic
resistance, damping occurs due to the radiation of the oscillating
charges, and eventually a static charge distribution results.
And then, in problem 2:
Verify that the “missing” stored energy has been radiated away by the
transient current after the switch was closed, supposing that the
Ohmic resistance of all circuit components is negligible.
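The arithmetic of the "missing" half is worth making explicit (a quick sketch; the component values are arbitrary, and the lost fraction is independent of them):

```python
# Energy bookkeeping for the two-capacitor problem.
C, V0 = 1.0, 1.0   # arbitrary illustrative values

E_initial = 0.5 * C * V0**2            # one charged capacitor
V_final = V0 / 2                       # charge shared equally, Q conserved
E_final = 2 * (0.5 * C * V_final**2)   # two capacitors at V0/2

lost_fraction = 1 - E_final / E_initial
print(f"initial: {E_initial}, final: {E_final}, lost fraction: {lost_fraction}")
```

Exactly half the stored energy is unaccounted for by the capacitors themselves, independent of $C$ and $V_0$, and that is the half carried off by radiation (or dissipated, if any resistance is present).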
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 1
} |
Do I have the meaning of the property temperature correct? OKay my book just starts out talking about the vague definition we have for temperature and we ended up with the Zeroth law of Thermodynamics which states:
Two systems are in thermal equilibrium if and only if they have the same
temperature.
So does that mean, the (brief) meaning of the property temperature is a measure of whether one system is in thermal equilibrium with another?
| That would be definition of temperature in thermodynamic framework. However, as Ron has remarked, it can be understood better in framework of statistical mechanics which in some sense is a more fundamental science than thermodynamics.
For a non-isolated system (i.e. a system which is allowed to exchange energy with its surroundings) temperature is a parameter which tells how energies are distributed in the system$^{**}$. More precisely, consider a system which can exist in various states (configurations) $|1>, |2>, |3>,...$ of energies $E_1,\:E_2,\:E_3,..$ respectively. Then saying that this system has temperature $T$ means that the probability for this system to be found in state $|i>$ of energy $E_i$ is $\sim \exp(-E_i/kT)$.
Now consider two systems $A$, and $B$.
Suppose $A$ can exist in states $|A1>, |A2>, |A3>,...$ of energies $E^A_1,\:E^A_2,\:E^A_3\:,..$ respectively; and $B$ can exist in states $|B1>, |B2>, |B3>,...$ of energies $E^B_1,\:E^B_2,\:E^B_3\:,..$ respectively. Now suppose we allow these systems to exchange energy with each other. We say that systems $A$, and $B$ have attained equilibrium when the energy distribution in the combined system $A+B$ as well as as in its subsystems $A$, and $B$ is no more changing with time, i.e. when
$1.$ The combined system $A+B$ is at a definite temperature $T$ (i.e. energy distribution in $A+B$ is given according to parameter $T$ )
$2.$ Systems $A$ and $B$ themselves are at some definite temperatures $T_A$ and $T_B$.
With these definitions of temperature and of equilibrium one can show that at equilibrium we should have $T_A=T_B=T$. (You can try to prove it yourself for the simple case when both the systems $A$ and $B$ have finitely many energy states of distinct energy).
** for an isolated system at energy $E$ temperature is defined in a different way.
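The distribution described above is easy to tabulate. A sketch with made-up energy levels, measured in units of $kT$ purely for convenience:

```python
# Boltzmann weights p_i ~ exp(-E_i / kT) for a made-up three-level system.
# Energies are given in units of kT (an assumed convenience).
import math

energies = [0.0, 1.0, 2.0]                 # E_i / kT
weights = [math.exp(-e) for e in energies]
Z = sum(weights)                           # partition function
probs = [w / Z for w in weights]

print("probabilities:", [f"{p:.3f}" for p in probs])

# The ratio of populations of two states depends only on the energy gap:
ratio = probs[1] / probs[0]
print(f"p1/p0 = {ratio:.3f}, exp(-(E1-E0)/kT) = {math.exp(-1.0):.3f}")
```

The normalization constant $Z$ drops out of any population ratio, which is why only energy differences (and the single parameter $T$) matter.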
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Why don't rockets tip over when they launch? Rockets separate from the launch pad and supporting structures very early in flight. It seems like they should tip over once that happens.
*
*Why don't they tip over ?
*Is it due to a well designed center of gravity or do they somehow achieve aerodynamic stabilization ?
| More fundamental than the gimballed thrust system or verniers is the relationship between the "center of gravity" and "center of pressure" on a rocket (or any kind of projectile, e.g., a bullet).
For the rocket to fly nose-forward and not flip around, the center of gravity must be ahead of the center of pressure. In building small amateur or model rockets, the guideline is always that the CG should be at least 1 body diameter ahead of the CP.
The center of gravity is the point where the mass components of the rocket "act" with respect to the ground (i.e., you can treat the CG as a point representing "the rocket" when calculating the opposing force of gravity pulling the rocket downward). The center of pressure is the point where aerodynamic forces on the rocket body, nose, and any fins sum together and "act".
In the above answer about gimballed thrust, for example, the gimbals are actually acting to force the CP back under the CG when the nose tilts. Fins (e.g., on missiles or model rockets) act in the same way to keep the CP under the CG.
So individual technologies for doing that may vary, but the underlying principle here is the CG/CP relationship. Hope that helps.
https://web.archive.org/web/20130216063642/http://exploration.grc.nasa.gov/education/rocket/rktcp.html
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 3,
"answer_id": 0
} |
Complete set of observables in classical mechanics I'm reading "Symplectic geometry and geometric quantization" by Matthias Blau and he introduces a complete set of observables for the classical case:
The functions $q^k$ and $p_l$ form a complete set of observables in the sense that any function which Poisson commutes (has vanishing Poisson brackets) with all of them is a constant.
I wonder why is it so? That is why do we call it a complete set of observables? As I understand it means the functions satisfying the condition above form coordinates on a symplectic manifold, but I don't see how.
| Any observable $H$ in classical mechanics defines a flow of states by regarding it as a Hamiltonian. This flow acts on observables $f$ by $df/dt = \{H,f\}$ (this is Hamilton's equation). The idea of a complete set of observables is that it is a set for which any observable with constant flow for all members of the set (ie. Poisson commute with the set) is constant. Intuitively, these flows move all over the phase space, so if $f$ is nonconstant, the flow of $f$ along one of the observables in the complete set can detect this.
These functions don't have to form coordinates.
EDIT: To complement QMechanic's counterexample, here is a compact counterexample: consider the 2-sphere with its ordinary symplectic form and the functions $\cos 2\theta$, $\sin\phi$, and $\cos\phi$, where $\theta$ is the polar angle, and $\phi$ is the azimuthal angle. These are symmetric across the equator, so they don't distinguish points, but it is pretty clear that they are a complete set.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/36017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Driving a solution of optical isomer molecules with the resonant frequency What happens when we drive a solution of optical isomer molecules (enantiomers) with a microwave radiation in resonance with the tunneling frequency of the molecules (the frequency of the transition between the eigenstates of the Hamiltonian)? I expect it will become a racemic mixture. Is that correct?
Update: Any reference for an experiment that does that is appreciated.
| Your intuition is right, as far as your solution of optically active (chiral) molecules can be assimilated to an ensemble of harmonically driven two-level systems. For the two level system (left- and right handed molecules being the respective states) which starts out with only left handed molecules, the radiation drives the system to a state where left- and right handed molecules are present in proportion of 50%.
However, the fine tuning of the frequency is not important in reaching the final 50% distribution. A larger difference between the driving- and the eigenfrequency of the left-right transition leads only to a smoother, more prolonged transition to the final 50% state.
I cannot derive this from pure theory, but numerical simulation shows it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/36181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Group Velocity and Phase Velocity of Matter Wave? In quantum mechanics, what is the difference between group velocity and phase velocity of matter wave? How can it also be that phase velocity of matter wave always exceeds the speed of light?
| Actually, a matter wave describes the probability of finding a particle at any point of space at any time. It is not exactly a sine wave; it is a wave packet, built from many single-frequency components. The velocity of each single-frequency component is called the phase velocity, and the overall velocity of the composite packet is called the group velocity.
The phase velocity does not carry any meaningful physical information for a matter wave. All the meaningful information (momentum, velocity, etc.) is carried by the packet as a whole, so the velocity of the particle is given by the group velocity, not the phase velocity.
Since the phase velocity does not carry any physical signal, it can be greater than the speed of light without violating the theory of relativity. Only the group velocity must be less than the speed of light.
Hope this helps you understand.
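As a concrete check (a sketch, not part of the original answer): for a non-relativistic free particle the dispersion relation is $\omega(k)=\hbar k^2/2m$, so the phase velocity $\omega/k$ comes out as half the group velocity $d\omega/dk$, and it is the group velocity that matches the classical particle velocity $p/m$.

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg

def omega(k):
    # non-relativistic free-particle dispersion: hbar*omega = (hbar*k)^2 / (2m)
    return hbar * k**2 / (2 * m)

k = 1.0e10               # wavenumber, 1/m

v_phase = omega(k) / k                                 # omega / k
dk = 1.0e3
v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)   # numerical d(omega)/dk
v_classical = hbar * k / m                             # p/m with p = hbar*k

print(v_phase, v_group, v_classical)
```

Here the phase velocity is below $c$; it is for the relativistic dispersion, where the phase velocity is $E/p$, that it always exceeds $c$ while the group velocity stays below it.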
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/36242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Zero entropy change If you put a object in contact with a heat reservoir that is infinitesimally higher in temperature than the object and allow equilibrium to be reached the entropy change is zero right?
| When talking about infinitesimals, you need to specify how close to zero it is. The entropy loss is $\delta Q\over T$ and the gain is $\delta Q \over T-\delta T$, so the net gain is $\delta Q \delta T \over T^2$ to leading infinitesimal order, and it vanishes linearly in $\delta T$ and $\delta Q$ both.
For iterated infinitesimal processes that approximate a path in state space, the entropy gain will be zero when you compose the path from many nearly-reversible steps, so in this sense the entropy gain is zero.
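A quick numerical sketch of that scaling (illustrative values assumed): the exact net entropy change $\delta Q/(T-\delta T) - \delta Q/T$ tracks the leading-order estimate $\delta Q\,\delta T/T^2$ and vanishes linearly as $\delta T \to 0$.

```python
T = 300.0    # reservoir temperature, K
dQ = 1.0     # heat transferred, J

results = []
for dT in (1.0, 0.1, 0.01):
    dS_exact = dQ / (T - dT) - dQ / T    # gain of the object minus loss of the reservoir
    dS_leading = dQ * dT / T**2          # leading-order estimate
    results.append((dT, dS_exact, dS_leading))
    print(dT, dS_exact, dS_leading)
```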
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/36378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Waves travelling with water flow Suppose I use a tool to create a circular wave in the river. If there are two fish swimming 1m from the source (2m from one another), they will both feel the wave at the same time.
What will happen if the river flows from one fish to the other? How will it affect the waves?
| Assuming the river is deep enough or equivalently the waves are small enough compared with the depth.
Relative to the bank the wave will retain its circular form, but as a whole it will be carried along with the river. So, assuming the fish swim against the current to stay stationary relative to the bank, the upstream fish will feel the wave later than the downstream one.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/36485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What are distinguishable and indistinguishable particles in statistical mechanics? What are distinguishable and indistinguishable particles in statistical mechanics? While learning different distributions in statistical mechanics I came across this doubt; Maxwell-Boltzmann distribution is used for solving distinguishable particle and Fermi-Dirac, Bose-Einstein for indistinguishable particles. What is the significance of these two terms in these distributions?
| Assume you have two particle A and B in states 1 and 2. If the two particle are distinguishable, then by exchanging the particles A and B, you will obtain a new state that will have the same properties as the old state i.e. you have degeneracy and you have to count both states when calculating the entropy for example. On the other hand, for indistinguishable particles, exchanging A and B is a transformation that does nothing and you have the same physical state. This means that for indistinguishable particles, particle labels are unphysical and they represent a redundancy in describing the physical state and that is why you would have to divide by some symmetry factor to get the proper counting of states.
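A small counting sketch (my own illustration, with three single-particle states and two particles) makes the difference concrete: Maxwell-Boltzmann counting treats each labelled assignment as distinct, Bose-Einstein counts occupation patterns, and Fermi-Dirac additionally forbids double occupancy.

```python
from itertools import product, combinations_with_replacement, combinations

states = range(3)   # three single-particle states
N = 2               # two particles

distinguishable = list(product(states, repeat=N))         # labelled particles: 3^2 assignments
bosons = list(combinations_with_replacement(states, N))   # unlabelled, multiple occupancy allowed
fermions = list(combinations(states, N))                  # unlabelled, Pauli exclusion

print(len(distinguishable), len(bosons), len(fermions))   # 9 6 3
```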
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/37556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 0
} |
Quantum Mechanics- Antenna emitting electromagnetic radiation Radio signals are being transmitted in a frequency of $ 8.4 \times 10^9 \text{s}^{-1} $ and being received by an antenna that is capable of receiving power of $ 4 \times 10^{-21} \text{Watt} $ ($ 1 \, \text{Watt} = 1 \, \text{J s}^{-1} $ ) .
Estimate that number of photons per second of this electromagnetic radiation that this antenna is capable of receiving.
I can easily calculate the number of photons per second that this radio produces, dividing the emitted power by the energy of one photon,
$\hbar\omega=h\nu = h\times8.4 \times 10^9\ \text{s}^{-1}$
but have no idea how to calculate the number of photons that this antenna can receive.
| It doesn't matter how strong the transmitting station is. The receiving antenna is only absorbing a tiny amount of power. The idea of the calculation is to show that even in a "classical" situation like a receiving antenna, at some level the "granular" nature of electromagnetic energy makes itself apparent.
At least that seems to be the way the numbers are set up. It seems to me to be a very dishonest calculation. I don't believe there is any way to observe the quantum nature of radiation by looking at the current in a radio receiver.
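For what it's worth, the estimate the problem seems to want is just the received power divided by the energy of one photon, which comes out at roughly seven hundred photons per second:

```python
h = 6.62607015e-34    # Planck constant, J*s
nu = 8.4e9            # transmission frequency, Hz
P = 4e-21             # power the antenna can receive, W

E_photon = h * nu     # energy of one photon, J
n_per_second = P / E_photon

print(E_photon, n_per_second)
```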
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/37616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
2D Car Physics including Throttle For a simulation for testing on automatic cruise control, I came across the equation:
$$
v_{n+1} = (1 - k_1 / m) v_n + (1 - k_b) \begin{pmatrix}
T_n \\
θ_n \\
\end{pmatrix}
$$
where:
*
*$T$ = throttle position
*$k_1$ = viscous friction
*$k_b = k_2 / m$
*$k_2 = m g \sin(θ)$
*$v$ = velocity
$k_b$ doesn't make sense. The matrix part doesn't make sense to me. Can anyone expound the equation?
Why isn't the angle put in sin or cos first?
Also, why is there viscous friction in solid physics? Isn't the angular component in $k_2$ enough?
SOURCE:
A Fuzzy Logic Book: "scribd.com/doc/105335356/124/INDUSTRIAL-APPLICATIONS"; Page 508. Number 13.2.
| The "viscous friction" component is a loss of speed that is proportional to velocity. This proportionality is the same that occurs through viscous losses such as viscous dampers or the losses associated with bearings.
$$k_b= \frac{m\,g\,\sin(\theta)}{m}$$
This corresponds to the forward or rearward force on the car due to gravity on a hill, divided by the mass, which would correspond to the acceleration due to gravity on a hill.
This acceleration should not be multiplied by the throttle position in a physically accurate simulation. Nor should an angle be multiplied by an acceleration, so I think there must be a transcription error somewhere but I was unable to look at the source book.
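The printed update rule looks garbled in the source, but a physically sensible discrete-time model consistent with the terms listed (my assumption, not the book's exact equation) is $m\,(v_{n+1}-v_n)/\Delta t = T_n - k_1 v_n - m g \sin\theta$. A sketch with hypothetical parameter values:

```python
import math

m, g = 1200.0, 9.81          # assumed car mass (kg) and gravity (m/s^2)
k1 = 60.0                    # assumed viscous-friction coefficient, N*s/m
theta = math.radians(3.0)    # assumed road grade
T = 2000.0                   # assumed constant throttle force, N
dt = 0.1                     # time step, s

v = 0.0
for _ in range(5000):        # integrate 500 s with forward Euler
    a = (T - k1 * v - m * g * math.sin(theta)) / m
    v += a * dt

v_terminal = (T - m * g * math.sin(theta)) / k1   # steady state: net force = 0
print(v, v_terminal)
```

The steady state shows why the $\sin\theta$ belongs on the gravity term: on a steeper hill the same throttle yields a lower cruise speed.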
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/37694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If the earth would stop spinning, what would happen? What would happen if the earth would stop spinning? How much heavier would we be? I mean absolutely stop spinning. How much does the centrifugal force affect us?
If you give technical answers (please do), please explain them in laymen's terms, thanks.
Edit: I am asking what would be different if the earth were not spinning. Nevermind the part about it stopping.
| While the Earth spins, part of the gravitational pull on us supplies the centripetal acceleration $a=v^2/r$ needed to keep us moving in a circle (the centripetal force is $F=mv^2/r$), so our apparent weight is slightly less than the full gravitational force. If the Earth were not spinning, that reduction would vanish and we would feel slightly heavier, though only by a fraction of a percent at the equator.
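The size of the rotational effect can be estimated directly: the centripetal acceleration at the equator is $\omega^2 R$, only about a third of a percent of $g$ (a rough sketch with standard values):

```python
import math

R = 6.378e6              # equatorial radius of the Earth, m
T_sidereal = 86164.1     # sidereal rotation period, s
g = 9.81                 # m/s^2

omega = 2 * math.pi / T_sidereal
a_centripetal = omega**2 * R

print(a_centripetal, a_centripetal / g)   # ~0.034 m/s^2, ~0.35% of g
```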
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/37952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Matrix mechanics for those with wave mechanics background Just curious:
Is there any book or resource that teaches matrix mechanics (quantum mechanics) only without wave mechanics stuff - meaning that the book assumes wave mechanics background.
|
This answer contains some additional resources that may be useful. Please note that answers which simply list resources but provide no details are strongly discouraged by the site's policy on resource recommendation questions. This answer is left here to contain additional links that do not yet have commentary.
*
*Modern Quantum Mechanics, by J.J. Sakurai & Jim Napolitano
*Quantum Mechanics by R.L Liboff has some introductory Matrix Mechanics. It is an ok but dated book for a first course in QM.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Is Heisenberg's matrix mechanics or Schrödinger's wave mechanics more preferred? Which quantum mechanics formulation is popular: Schrödinger's wave mechanics or Heisenberg's matrix mechanics? I find this extremely confusing: Some post-quantum mechanics textbooks seem to prefer wave mechanics version, while quantum mechanics textbooks themselves seem to prefer matrix mechanics more (as most formulations are given in matrix mechanics formulation.)
So, which one is more preferred?
Add: also, how is generalized matrix mechanics different from matrix mechanics?
| Ever since Dirac came up with transformation mechanics (kets and bras), and showed both matrix and wave mechanics are special cases of it, physicists have worked with kets and bras. With von Neumann later, this was reinterpreted as Hilbert spaces.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
Interpretation of field operator Consider a real scalar field operator $\varphi$. It can be written in terms of creation and anihilation operators as
$$\varphi(\textbf{x})=\int \tilde{dk}\,[\, a(k)e^{i\mathbf{k}\cdot\mathbf{x}}+a(k)^{\dagger}e^{-i\mathbf{k}\cdot\mathbf{x}}\,]$$
where $\tilde{dk}$ is a Lorenz-invariant measure.
If $\varphi$ is interpreted as creating a particle at $\textbf{x}$ when acting on the vacuum, what is its action on a generic state? It seems to be creating a superposition of a state with one added quantum of energy through the creation operator, and a state with one less quanta of energy through the annihilation operator.
| As the formula clearly shows, $\phi(x)$ cannot be interpreted as a pure creation operator of any type. It is a combination of creation and annihilation operators. Creation operators are those called $a(k)^\dagger$ and annihilation operators are called $a(k)$.
So yes, if $\phi(x)$ acts on a generic state with a well-defined number of particles $N$, it produces a linear superposition of states that have $N+1$ and $N-1$ particles, respectively. When it acts on the vacuum, however, the annihilation operator piece drops out and it creates a 1-particle state.
It's somewhat hard to understand what you mean by "interpretation". The only right interpretation is the right calculation. It is an operator that gives something if it acts on a state, and all these answers may be calculated. They shouldn't be interpreted, they should be calculated.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why is the solar noon time different every day? If you check the local time for solar noon is different every day. Why is it so? Is it because Earth doesn't make a complete rotation in exactly 24 hours?
The following is an example of the solar noon differences (also sunrise and sunset), computed by the Python Astral module for the city of Guayaquil
| Solar noon is defined as the moment the Sun crosses the local meridian, its highest point in the sky for that day (not necessarily the zenith). Because the Earth also revolves around the Sun, it has to rotate a bit more than one full turn between successive solar noons, and since the orbit is elliptical and the axis is tilted, that extra bit varies through the year. The interval between solar noons is therefore not exactly 24 hours, and the clock time of solar noon drifts from day to day.
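The day-to-day drift is captured by the "equation of time". A commonly used approximation (assumed here; accurate to roughly a minute) shows solar noon wandering between about 14 minutes early and 16 minutes late over a year:

```python
import math

def equation_of_time(day_of_year):
    """Approximate minutes by which true solar noon leads mean (clock) noon.

    Standard approximation driven by orbital eccentricity and axial tilt;
    good to roughly a minute.
    """
    B = math.radians(360.0 * (day_of_year - 81) / 365.0)
    return 9.87 * math.sin(2 * B) - 7.53 * math.cos(B) - 1.5 * math.sin(B)

offsets = [equation_of_time(d) for d in range(1, 366)]
print(min(offsets), max(offsets))
```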
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 3
} |
Magnitude of New Comet C/2012 S1 (ISON) A new comet (magnitude 18.8) has been discovered beyond the orbit of Jupiter.
Comet ISON will get within 0.012 AU of the Sun by the end of November 2013 and ~0.4 AU from Earth early in January 2014. It may reach very welcome negative magnitudes at the end of November 2013. Forecasts of comet magnitudes have been disappointing in the past. It's clear that comet magnitudes depend on the quantity of dust and ices that are ablated by the Sun's energy, and therefore on nucleus size and distance from the Sun and Earth.
My question is, to what extent is a comet's magnitude dependent on the ratio of nucleus dust to ice and on the type of ices?
| The magnitude of any object which doesn't have its own light depends on its albedo, i.e. $$\frac{\text{light reflected}}{\text{light received}}$$
So different types of ice have different albedos, and hence different magnitudes. Though I can't find a relation between the ratio of nucleus dust to ice and the comet's magnitude, the albedo/magnitude can be used to find the composition of the surface of the comet (it's not pure ice, is it?)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Are all static solutions of Einstein's equations spherically symmetric? Is it true that all static solutions in GR are also spherically symmetric? Is there a proof of this?
Similarly, are all stationary solutions axisymmetric?
| The answer to your first question is no. In fact, you can find static, stationary solutions to GR corresponding to cosmic strings and domain walls, or even more exotic solutions, like the c-metric.
Your statement about axisymmetry is harder, mainly because I don't know of many non-axisymmetric solutions, period.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the Lagrangian "math" or "science"? I've seen in class that we can get from Lagrangian to derive equations of motion (I know its used elsewhere in physics, but I haven't seen it yet). It's not clear to me whether the Lagrangian itself follows from the equations of motion, or whether it represents a fundamentally different approach - whether it's a different "model", or whether it's largely a mathematical observation (it may illuminate/integrate what came before, but for all intents and purposes it is equivalent).
I'm just curious about how these relationships are generally understood. I ask because it's not at all obvious to me that the principle of least action should be true. I'd also be curious whether the answer to this question would be the same across different fields of physics, though if the answer's "no" much detail would probably go over my head.
| It is a mathematical observation that follows from Newton's laws. See for example Lanczos
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Determining the center of mass of a cone I'm having some trouble with a simple classical mechanics problem, where I need to calculate the center of mass of a cone whose base radius is $a$ and height $h$..!
I know the required equation. But, I think that I may be making a mistake either with my integral bounds or that $r$ at the last..!
$$z_{cm} = \frac{1}{M}\int_0^h \int_0^{2\pi} \int_0^{a(1-z/h)} r \cdot r \:dr d\phi dz$$
'Cause, once I work this out, I obtain $a \over 2$ instead of $h \over 4$...!
Could someone help me?
| I see the problem you have here: change the $r \cdot r \:dr\, d\phi\, dz$ there to $z \cdot r \:dr\, d\phi\, dz$. The volume element in cylindrical coordinates is $r\, dr\, d\phi\, dz$, and for $z_{cm}$ you weight it by $z$, not by a second factor of $r$. Then you should get the correct answer, $h/4$.
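A quick numerical check (a sketch, keeping the question's bounds with the apex at $z=h$): weighting the cylindrical volume element $r\,dr\,d\phi\,dz$ by $z$ gives $z_{cm}=h/4$ above the base.

```python
a, h = 2.0, 5.0   # base radius and height (arbitrary test values)
N = 2000
dz = h / N

moment = 0.0
volume = 0.0
for i in range(N):
    z = (i + 0.5) * dz                 # midpoint rule in z
    r_max = a * (1 - z / h)            # cone radius at height z
    slab = r_max**2 / 2 * dz           # radial integral of r dr; common 2*pi factor cancels
    volume += slab
    moment += z * slab

z_cm = moment / volume
print(z_cm, h / 4)   # both ~1.25
```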
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Width of Gaussian Beam and Refractive Index I know that in free space, the width of a Gaussian beam can be written as $W=W_0\sqrt{1+(\frac{z}{z_0})^{2}}$. However, I was wondering if it was possible to express this width as a function of refractive index instead (since I don't believe a Gaussian beam originating in say, glass, will diverge in the same manner as one in air). Anyone have any ideas?
| Yes, and the formula you already have still works: you just have to use the Rayleigh range appropriate to the medium. Since the wavelength inside the medium is $\lambda_0/n$, the Rayleigh range $z_0=\pi W_0^2/\lambda$ becomes $n$ times its free-space value. A Gaussian beam in glass therefore diverges more slowly than one in air: its profile is exactly the free-space profile "stretched" in the $z$ direction by a factor of $n$.
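A small sketch of the index dependence (my own illustration, assuming the waist sits inside the medium): with the in-medium wavelength $\lambda_0/n$, the Rayleigh range $z_0=\pi W_0^2 n/\lambda_0$ grows with $n$, so at the same physical distance the beam in glass is narrower than the beam in air.

```python
import math

def beam_width(z, w0, wavelength0, n=1.0):
    # 1/e^2 radius of a Gaussian beam at distance z from the waist,
    # propagating inside a uniform medium of refractive index n
    z_R = math.pi * w0**2 * n / wavelength0   # Rayleigh range scales with n
    return w0 * math.sqrt(1 + (z / z_R)**2)

w0 = 10e-6        # 10 micron waist
lam = 633e-9      # HeNe vacuum wavelength
z = 5e-3          # 5 mm of propagation

w_air = beam_width(z, w0, lam, n=1.0)
w_glass = beam_width(z, w0, lam, n=1.5)
print(w_air, w_glass)
```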
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
New physics at high energies, cosmic rays, particle-detectors in space New physics is expected at high energies and cosmic rays have high energies, so have there been or are there any plans to put particle detectors in space to study cosmic rays for new physics ?
| It is difficult to detect very high energy cosmic rays in space. The reason is that to detect them one needs to interact with them, and the interaction rate is determined by how much material you put in their way. After an interaction you get massive showers of very high energy particles, so you need lots of detectors to record them. Getting big, heavy detectors into space is difficult. That's why cosmic ray observatories are down on the ground, where the atmosphere can be used as a natural cosmic ray stopper.
That said, there are some special cosmic rays worth studying from space. One example is anti-protons: these are pretty easy to stop and detect because they are anti-matter (indeed, essentially none make it to our surface, since they annihilate so readily). The Alpha Magnetic Spectrometer http://en.wikipedia.org/wiki/Alpha_Magnetic_Spectrometer has just been loaded onto the ISS to do just that.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Extending the idea of superdense coding I was reading through the superdense coding protocol, that lets A convey two classical bits to B by sending one qubit (assuming B sends A a qubit beforehand). So B creates a 2-qubit state and sends the first qubit to A. A performs a transformation on this qubit and sends it back. Based on that, B can distinguish whether the two bits of A were 00, 01, 10 or 11.
The question is, why can this idea not be extended to convey more than 2 bits? Can A convey n bits to B this way? More specifically, can B create an n-qubit state and send the first qubit A? A can then apply a unitary transformation on this qubit and send it back to B, who can distinguish between the 2^n possibilities and figure out which n bits A had.
| The problem is that you can't create $2^{n}$ orthogonal states on $n$ qubits using only operations on 1 qubit. The operations you can perform on the single qubit are $I,X,Z,XZ$ (or something of the same form). If you use those four you obtain four orthogonal states on $n$ qubits but any other operations will give states that are not orthogonal to these four. If the states are not orthogonal they are not reliably distinguishable. If you use states that are not distinguishable and then some strategy (such as maximum likelihood estimation) to try and guess the state you've received then you'll wind up with a noisy channel which has a capacity of less than or equal to $2n$ bits.
If you used $4$ qubits, sent $2$ and kept $2$ to act on and send later then you can send $4$ bits but obviously this is no better than using $2$ 2-qubit systems.
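To see the orthogonality argument concretely, here is a small numpy sketch of the standard two-qubit protocol (my own illustration): the four encodings $I, X, Z, XZ$ on A's half of a Bell pair produce the four orthogonal Bell states, which B can distinguish perfectly; it is exactly this orthogonality that cannot be extended to $2^n$ states with single-qubit operations.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2), prepared by B

encodings = {'00': I, '01': X, '10': Z, '11': X @ Z}  # A's four possible operations

bell_basis = {                                        # the four Bell states B measures against
    '00': np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2),
    '01': np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2),
    '10': np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2),
    '11': np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2),
}

decoded = {}
for bits, U in encodings.items():
    state = np.kron(U, I) @ bell      # A acts on her qubit only
    # B identifies the Bell state with unit overlap
    decoded[bits] = max(bell_basis, key=lambda b: abs(bell_basis[b] @ state))

print(decoded)   # each 2-bit message decodes to itself
```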
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Galilean relativity in projectile motion Consider a reference frame $S'$ moving in the initial direction of motion of a projectile launched at time $t=0$. In the frame $S$ the projectile motion is:
$$x=u(\cos\theta)t$$
$$y=u(\sin\theta)t-\frac{g}{2}t^2$$
I know that at $y_{max}$, $\frac{dy}{dt}=0$, so using this I find that $$t_{y_{max}}=\frac{u\sin\theta}{g}$$
so therefore: $$y_{max}=\frac{u^2\sin^2\theta}{2g}$$
I know that when the particle lands at $y_{bottom}=0$ the distance in the $x$ direction is $$x_{y_{bottom}}=\frac{2u^2\sin\theta\cos\theta}{g}$$
but I am confused about how to describe the motion for the particle in $S'$ frame.
| In the $S'$ frame, your variables are $x' = x - t\cdot u \cos\theta $ and $y' = y - t\cdot u \sin\theta$. If you do the change of variable, you get that the motion now is described by
$$x' = 0$$
$$y' = -\frac{g}{2}t^2$$
So in your new frame of reference you have vertical free fall from rest.
This is not very helpful in finding out when or where does the projectile hits the ground, but is very relevant if you want to know where will the projectile be after releasing it from a plane moving at constant velocity: right below it all the time. Disregarding air resistance, of course.
EDIT
The system with a prime is moving with velocity $(u \cos\theta, u\sin\theta)$, so if you have a velocity in the unprimed system, to convert it to the primed system, you have to subtract the velocity of the origin:
$$\vec{v'} = \vec{v} - (u \cos\theta, u\sin\theta)$$
Integrating this, you can get the relation for the position vector:
$$\vec{r'} = \vec{r} - (u \cos\theta, u\sin\theta)t + \vec{r}_0$$
where $\vec{r}_0$ is the position of the origin of the primed system for $t=0$. Both systems share origin for $t=0$, so $\vec{r}_0=\vec{0}$.
Now replace $\vec{r'}=(x',y')$ and $\vec{r}=(x,y)$ and you will get the equations above.
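A quick numerical check of the transformation (illustrative launch parameters assumed): in $S'$ the horizontal coordinate stays at zero and the vertical one follows $-gt^2/2$.

```python
import math

u, theta, g = 20.0, math.radians(35.0), 9.81   # assumed launch speed and angle

rows = []
for t in (0.0, 0.5, 1.0, 1.5):
    # trajectory in S
    x = u * math.cos(theta) * t
    y = u * math.sin(theta) * t - 0.5 * g * t**2
    # transform to S' (frame moving with the initial velocity)
    xp = x - u * math.cos(theta) * t
    yp = y - u * math.sin(theta) * t
    rows.append((t, xp, yp))
    print(t, xp, yp)   # xp = 0, yp = -g t^2 / 2
```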
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/38983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can the Air friction force be applied to an object? Suppose we have an object and we throw it straight upward in the air. How do we apply the Air friction force to this object while moving upwards and after that downwards? Sorry if it's easy because I'm still a novice!
| You can make a good estimate of the force of air resistance from
Where p is the density of air, v is the speed, A is the cross section area and Cd is a factor depending on the shape (ie how streamlined) of the object - you can look this up for simple shapes.
The tricky bit of the equation is that the drag depends on speed, which is constantly changing, and then the drag also changes, which changes the speed.... The easiest way to deal with this is to do it numerically, work out the speed at each point in a spreadsheet/computer program, then work out the drag at that point and adjust the force.
Remember also that the drag always acts to slow the object, so on the way up it acts down, and on the way down the force acts up!
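A minimal numerical sketch of that procedure (ball-park parameters assumed for a thrown ball): forward-Euler steps with the drag always opposing the velocity. With drag the ascent takes less time than the descent, and the ball lands slower than it was thrown.

```python
import math

m = 0.145                 # assumed mass, kg (baseball-sized)
Cd = 0.47                 # drag coefficient of a sphere
A = math.pi * 0.037**2    # cross-sectional area, m^2
rho = 1.225               # air density, kg/m^3
g = 9.81
dt = 1e-3

v, y, t = 30.0, 0.0, 0.0  # thrown straight up at 30 m/s
t_up = None
while y >= 0.0:
    drag = 0.5 * rho * Cd * A * v * abs(v)   # signed: always opposes the velocity
    a = -g - drag / m
    v += a * dt
    y += v * dt
    t += dt
    if t_up is None and v <= 0.0:
        t_up = t                             # time of the peak

t_down = t - t_up
print(t_up, t_down, abs(v))
```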
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
quantum entanglement at microwave frequencies Entanglement of optical photons using non-linear crystals has been around for a long time. Macroscopic entanglement using diamonds recently reported in the literature and receiving considerable attention. Quantum mechanics in biology has been the subject of fascinating research. Has anyone demonstrated entanglement of low frequency (i.e., microwave) photons? I see no theoretical objection, has it ever been demonstrated? Ditto entanglement at xray?
| You can certainly entangle microwave photons with other quantum objects: http://arxiv.org/abs/1209.0441
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Schrödinger and thermodynamics I heard that Schrödinger pointed out that (classical/statistical) thermodynamics is impaired by logical inconsistencies and conceptual ambiguities.
I am not sure why he said this and what he is talking about. Can anyone point some direction to study what he said?
| You asked about logical inconsistencies and conceptual ambiguities in classical thermodynamics and classical statistical mechanics. As far as classical thermodynamics is concerned, there are no inconsistencies or ambiguities. Classical thermodynamics is based on several axioms (known as laws) and a bunch of definitions. All the rest is deduction. If this makes it seem like Euclidean plane geometry, it is.
Classical statistical mechanics (SM), on the other hand, has big problems. SM attempts to compute thermodynamics properties, i.e. macroscopic variables, from microscopic properties. Classical SM assumes that atoms and molecules obey classical mechanics. They don't. The result is that one cannot reproduce macroscopic reality that way without a number of additional assumptions. The problem of course is that on a microscopic level matter inherently obeys quantum mechanics.
For example, classical mechanics assumes that identical particles are, in principle, distinguishable because you can mark them with a sufficiently small pen. In quantum mechanics, identical particles are really identical. You can't mark them, and if you interchange the positions and velocities of two of them, NOTHING changes -- so you can't even tell that the interchange occurred.
----- Paul J. Gans
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Definition of CFT A standard QFT cannot be defined as a set of Poincare-invariant correlation functions because this does not take into account the possibility of non-perturbative effects (e.g. instantons)
Can we define a CFT as a set of conformally invariant correlation functions?
What is the correct definition of a CFT?
| The exact correlation functions as defined by a lattice simulation do take into account all nonperturbative effects, they contain all the physics. It is only the expansion of the correlation functions that doesn't take instantons into account. So yes, you can define a CFT by its correlation functions.
This is true for the usual situation of fields which can be added naturally and averaged. This excludes cases where the fields are of the sigma model type--- you can't add points on a manifold. In this case, an ad-hoc solution is to embed the sigma model into a larger R^n where you can define addition of points, and then you can define the correlation functions.
The definition of a CFT is that you have conformally invariant correlation functions.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the number of rays projected by a source of light finite? Take a source of light which gives out infinite number of rays, each ray with finite number of photons and each photon with a finite amount of energy,
Then, Aren't the number of photons become infinite and hence the energy in the beam of light becomes infinite? If this is wrong, then how's this finite?
| I've had the same question, but let's look at it differently.
Consider a one-dimensional line:
Let's say we have an illuminated line of finite length.
If we treat the photon as a particle, then it has dimensions.
Now suppose this line is illuminated by photons, arranged so that they cover the whole length of the line.
If the line has a finite length $L$ and the diameter of a photon is a finite $d$, then we have $L/d$ photons (finite). We can apply the same reasoning to a two-dimensional plane of surface $S$: if $ds$ is the cross-sectional area of a photon, then the surface is illuminated by $S/ds$ photons (finite). This way we avoid the confusion of infinite energy, since a light ray is nothing physical; it's just a concept created to explain certain situations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Einstein's mass-energy relation Suppose we have 1 kg of wood and 1 kg of uranium and if we need to find out how much energy would each of the substance give, we'd have to use Einstein's mass-energy relation as follows:
In the case of wood, $E_{\text{wood}} = 1 \times (3\times10^8)^2 = 9 \times 10^{16}\ \mathrm{J}$
In the case of uranium, $E_{\text{uranium}} = 1 \times (3\times10^8)^2 = 9 \times 10^{16}\ \mathrm{J}$
Question: According to the equation, both give the same amount of energy. But in reality, 1 kg of uranium will give a lot of energy when compared to 1 kg of wood. What have I missed? Any explanation regarding the $E=mc^2$ problem would be helpful...
| Einstein's mass-energy relation, $E = mc^2$, gives the total energy content of the system. But this is not the energy we get from the object. When you annihilate an electron with a positron, both particles vanish, so the released energy is equal to the energy of the two particles according to Einstein's formula. But when you burn 1 kg of wood, you just transform matter from one form to another (turning organic substances into CO$_2$ and other stuff by oxidation). And when you use the nuclear energy of uranium, it just decays into other atoms. Both of these processes release some energy, but it is not equal to $mc^2$; it is much, much less. And because nuclear reactions are much more efficient than burning, you get more energy from 1 kg of uranium than from 1 kg of wood.
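To put numbers on this, a rough back-of-the-envelope comparison of each process's yield against the full $mc^2$ (the per-kilogram figures below are assumed order-of-magnitude values, not precise data):

```python
c = 3.0e8                      # speed of light, m/s
rest_energy = 1.0 * c**2       # full E = mc^2 for 1 kg: 9e16 J

wood_combustion = 1.6e7        # J/kg, assumed typical heat of combustion
u235_fission = 8.0e13          # J/kg, assumed (~200 MeV per fissioned nucleus)

print(f"E = mc^2             : {rest_energy:.1e} J")
print(f"burning wood taps     {wood_combustion / rest_energy:.1e} of mc^2")
print(f"fissioning U-235 taps {u235_fission / rest_energy:.1e} of mc^2")
```

Both fractions are tiny compared to 1, which is why neither process releases anywhere near $9\times10^{16}$ J, but the fission fraction is several million times larger than the combustion one.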
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Something I don't understand in Quantum Mechanics I've just started on QM and I'm puzzled with a lot of new ideas in it.
1.On a recent lecture I've attended, there is an equation says:
$\langle q'|\sum_q q|q\rangle \langle q|q' \rangle =\sum_q q\, \delta(q,q')$
I don't understand why $\langle q'|q\rangle \langle q|q' \rangle =\delta (q,q')$
Can you explain this equation for me?
2.Actually, I'm still not clear about the bra-ket notation. I've learnt the bra and the ket could be considered as vectors. Then what are the elements of the vectors?
Thank you very much!
| *
*The equation is true if $|q\rangle$, $|q'\rangle$ are chosen from an orthonormal set of vectors, such as an eigenbasis of an operator. Then, by definition, $\langle q|q' \rangle = \delta_{q,q'}$
*$| q \rangle$ just denotes some vector labeled $q$ in some Hilbert space. The dimension equals the number of distinct classical states that your system can be in.
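As a concrete finite-dimensional illustration (a hypothetical 3-state system; in a continuous position basis the Kronecker delta becomes a Dirac delta):

```python
import numpy as np

# Orthonormal basis kets |0>, |1>, |2> as column vectors.
basis = [np.eye(3)[:, i] for i in range(3)]

# <q|q'> = delta_{q,q'}
for i, q in enumerate(basis):
    for j, qp in enumerate(basis):
        assert np.isclose(q.conj() @ qp, 1.0 if i == j else 0.0)

# Completeness: sum_q |q><q| = identity (each |q><q| is a projector)
resolution = sum(np.outer(q, q.conj()) for q in basis)
print(np.allclose(resolution, np.eye(3)))   # True
```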
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why do reversible processes not increase the entropy of the universe infinitesimally? The book Commonly Asked Questions in Thermodynamics states:
When we refer to the passage of the system through a sequence of internal equilibrium states without the establishment of equilibrium with the surroundings this is referred to as a reversible change. An example that combines the concept of reversible change and reversible process will now be considered.
For this example, we define a system as a liquid and a vapor of a substance in equilibrium contained within a cylinder that on one circular end has a rigid immovable wall and on the other end has a piston exerting a pressure equal to the vapor pressure of the fluid at the system temperature. Energy in the form of heat is now applied to the outer surface of the metallic cylinder and the heat flows through the cylinder (owing to the relatively high thermal conductivity), increasing the liquid temperature. This results in further evaporation of the liquid and an increase in the vapor pressure. Work must be done on the piston at constant temperature to maintain the pressure. This change in the system is termed a reversible change. It can only be called a reversible process if the temperature of the substance surrounding the cylinder is at the same temperature as that of the liquid and vapor within the cylinder. This requirement arises because if the temperatures were not equal the heat flow through the walls would not be reversible, and thus, the whole process would not be reversible.
But if the system and surroundings are in fact at the same temperature then why would this process occur at all?
My understanding is that in fact that they are infinitesimally different in temperature so I guess my question is why infinitesimality gets these processes "off the hook" for being irreversible. In other words, why do these infinitesimal changes not correspond to an infinitesimal increase in the entropy of the universe, rather than none at all?
| By definition a reversible process in an isolated system cannot increase entropy. If entropy is increased during a process in an isolated system then the process is irreversible, by definition.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 4
} |
How does one derive the Lamb shift for the Hydrogen atom? I've been perusing my copies of Srednicki and Peskin & Schroeder, and I can't seem to find an explanation of how one derives the Lamb shift that I can follow.
How does one derive the Lamb shift? What order in perturbation theory do you have to go up to? Does one use a path integral or canonical quantization approach?
| A derivation is here:
http://en.wikipedia.org/wiki/Lamb_shift#Derivation
or in Landau-Lifshitz. Bethe's original derivation is found e.g. in Matt Schwartz's Harvard lecture here
http://isites.harvard.edu/fs/docs/icb.topic792163.files/20-LambShift.pdf
The leading contribution to the Lamb shift is the one-loop level (the first non-classical correction) but of course, the effect receives corrections at every higher order, too.
One may derive it in the operator approach or path integral approach, much like pretty much everything in physics. These are just equivalent languages to do physics.
The Lamb shift involves an atom, which is not quite an elementary particle. So the usual perturbative rules of QED have to be "generalized" to deal with the composite object. Otherwise, it's about a virtual photon emitted and reabsorbed by the atom. If the atom were elementary, it would be a simple "photon loop" correction to the atom's propagator. A divergent term has to be removed (equivalently, one has to find sensible limits of the integral), and what is left is some "truncated logarithmic divergence" that produces the roughly 1,000 MHz shift for the relevant levels.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What are some of the best books on complex systems and emergence? I'm rather interested in getting my feet wet at the interface of complex systems and emergence. Can anybody give me references to some good books on these topics? I'm looking for very introductory technical books.
| I like the book Energy Landscapes by David Wales. It deals with various classes of complex systems (clusters, glasses, proteins) in the context of chemistry.
I want to add: emergence is fraught with flaky ideas; a lot of appeals to ignorance are rooted in the idea of irreducible complexity. The reasoning goes: because we can't, say, predict the weather from $F=ma$, God must make it happen.
If you're interested in something "bigger" than the chemistry of complex systems, I like the work of David Bohm (say, The Undivided Universe).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/39712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 1
} |
Time dependent Lagrangian Suppose I have a mechanical system with $\ell + m$ degrees of freedom and an associated Lagrangian:
$$L(\alpha, \beta, \dot{\alpha}, \dot{\beta}, t),$$
where $\alpha \in \mathbb{R}^{\ell}$ and $\beta \in \mathbb{R}^{m}$. Now suppose I have a known $\mathbb{R}^{\ell}$-valued function $f(t)$ and define a new Lagrangian:
$$M(\beta, \dot{\beta}, t)~:=~L(f(t), \beta, \dot{f}(t), \dot{\beta}, t)$$
Do the equations that derive from $M$ correctly describe the motion of the initial mechanical system, where the first $\ell$ degrees of freedom are constrained to the motion $f(t)$ (by means of an external force)?
| Let us reformulate OP's question(v2) as follows:
If we have a Lagrangian $L=L(q^1,\ldots, q^N, \dot{q}^1,\ldots, \dot{q}^N,t)$, and if we eliminate some of the $q^i$ variables from the Lagrangian [by using some given fixed curves $q^i=f^i(t)$], can we still derive the correct Lagrangian equations of motion (LEOM) for the remaining $q^i$ variables from this partially reduced Lagrangian $\tilde{L}$ ?
A special case of the above question is:
If we have a Lagrangian $L=L(q^1,\ldots, q^N, \dot{q}^1,\ldots, \dot{q}^N,t)$, and if we eliminate some of the $q^i$ variables from the Lagrangian [by using some of the LEOM and possibly boundary conditions], can we still derive the correct LEOM for the remaining $q^i$ variables from this partially reduced Lagrangian $\tilde{L}$ ?
Answer to the latter case: In many simple cases, this actually holds, but it is easy to give counterexamples.
Counterexample: Let $L=q^1 q^2$ be the product of two variables $q^1$ and $q^2$. The corresponding LEOM are $q^1=0$ and $q^2=0$. Let us now eliminate $q^2$ from the Lagrangian by using the LEOM $q^2=0$. Then the new partially reduced Lagrangian $\tilde{L}=q^1\cdot 0 =0$ vanishes, so that we can no longer deduce the LEOM for the remaining $q^1$ variable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/40759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Physical meaning of some operators formed by $|Q\rangle \langle Q|$ In Dirac's formulation of quantum mechanics,
Suppose that $q$ represents position observable.
About $|q\rangle \langle q|$: what does this operator mean? I do get that it results in an operator, but unsure of what physical meaning it has. The same with $|\psi \rangle \langle \psi |$. ($\Psi$ representing wavefunction, and the corresponding eigenstate $| \psi \rangle$.)
| It's a projection operator. Acting on a state, it measures how much of that state lies along $|\psi \rangle$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/40825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Distribution of charge on a hollow metal sphere
A hollow metal sphere is electrically neutral (no excess
charge). A small amount of negative charge is suddenly
placed at one point P on this metal sphere. If we check on
this excess negative charge a few seconds later we will find
one of the following possibilities:
(a) All of the excess charge remains right around P.
(b) The excess charge has distributed itself evenly over the
outside surface of the sphere.
(c) The excess charge is evenly distributed over the inside
and outside surface.
(d) Most of the charge is still at point P, but some will have
spread over the sphere.
(e) There will be no excess charge left.
Which one is correct and why?
I guess it is some kind of electrostatic induction - phenomena going on. Am I right? I understand that excess charge is distributed over hollow sphere and that negative and positive charges are distributed opposite sides, but don't know which one positive or negative go to inside surface.
| Yeah, the right answer would be (b): the negative charge that started at point P will be distributed evenly over the surface of the sphere, since (as most of the commenters here have mentioned) point P has too much negative charge on it, so the excess electrons will repel each other until they move to places where there is less negative charge. But also note that point P will still be negatively charged, just like the rest of the sphere; all of the sphere's surface will now carry the same amount of charge.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/40993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
What are the reasons for leaving the dissipative energy term out of the Hamiltonian when writing the Lyapunov function? I have a problem with one of my study questions for an oral exam:
The Hamiltonian of a nonlinear mechanical system, i.e. the sum of the kinetic and potential energies, is often used as a Lyapunov function for controlling the position and velocity of the system. Consider a damped single degree-of-freedom system, $m\ddot{x}+c\dot{x}+kx=0$, where $m$ is the mass, $c$ is the velocity-proportional damping and $k$ is the stiffness. A candidate Lyapunov function is the Hamiltonian $V=\frac{1}{2}m\dot{x}^2+\frac{1}{2}kx^2$. What are the reasons for leaving out the dissipative energy term when writing the Lyapunov function?
The only thing what comes into my mind for this question is, that a dissipative energy term in the Lyapunov function would have a "-" sign and the Lyapunov function would thus not be positive definite anymore. Is that correct?
| 1) In the presence of friction, the Lagrange equation gets modified
$$\tag{1} \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{x}}\right)-\frac{\partial L}{\partial x}~=~ -\frac{\partial{\cal F} }{\partial \dot{x}}$$
by the Rayleigh dissipation function
$$\tag{2} {\cal F}~: =~ \frac{1}{2} c\dot{x}^2 ~\geq ~0 . $$
Here the Lagrangian is
$$\tag{3} L~:=~T-V, \qquad T~:=~\frac{1}{2} m\dot{x}^2~\geq ~0, \qquad V~:=~\frac{1}{2} kx^2~\geq ~0. $$
It is not possible to write a velocity-dependent potential for the friction force, and a Lagrangian (or Hamiltonian) description of the damped oscillator must be modified a la (1) to accommodate the friction term, cf. e.g. this and this Phys.SE posts.
2) The energy function
$$\tag{4} h(x,\dot{x})~:=~ \dot{x} \frac{\partial L}{\partial \dot{x}}-L ~=~T+V~\geq ~0 $$
is precisely the mechanical energy of the system.
One may show that the energy dissipation rate is given
by the Rayleigh dissipation function
$$\tag{5} \frac{dh}{dt}~=~-2{\cal F} ~\stackrel{(2)}{\leq} ~0. $$
The positive semi-definiteness (4) of $h$ and the negative semi-definiteness (5) of the time derivative $\frac{dh}{dt}$ are among the conditions that one usually demands of a Lyapunov function, and it is not hard to see that the mechanical energy $h$ is in fact a Lyapunov function for the damped oscillator.
On the other hand, it is unclear how to include ${\cal F}$ in the Lyapunov function, for reasons explained above.
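Eq. (5) is easy to check numerically; the sketch below (with made-up parameter values) integrates $m\ddot{x}+c\dot{x}+kx=0$ and compares the energy lost, $h(0)-h(T)$, with the accumulated integral of $2{\cal F} = c\dot{x}^2$:

```python
# Assumed parameters for illustration only
m, c, k = 1.0, 0.3, 4.0
x, v = 1.0, 0.0                  # initial conditions
dt, steps = 1e-4, 200_000        # integrate to t = 20 s

def h(x, v):                     # mechanical energy T + V, eq. (4)
    return 0.5 * m * v**2 + 0.5 * k * x**2

h0 = h(x, v)
E_dissipated = 0.0
for _ in range(steps):
    a = -(c * v + k * x) / m     # m xdd = -c xd - k x
    E_dissipated += c * v**2 * dt     # accumulate 2F dt
    v += a * dt                       # semi-implicit Euler step
    x += v * dt

h_final = h(x, v)
print(f"h(0) = {h0:.4f}, h(T) = {h_final:.4f}, dissipated = {E_dissipated:.4f}")
```

Up to integration error, the energy lost matches the Rayleigh-function integral, and $h$ decreases toward zero, as a Lyapunov function should.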
References:
*
*Herbert Goldstein, Classical Mechanics, Chapter 1 and 2.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/41034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Calibrating an electronic temperature sensor based on power consumption I'm working with an electronic temperature logger that is being affected by heat generated internally.
How does one come up with a calibration equation to calculate a more accurate reading of ambient temperature based on what the temperature sensor reads, taking into account its own power consumption?
details:
After a few hours and in equilibrium, the sensor reports values that are actually 1 degree Celsius higher than the ambient room temperature (22C) measured by a calibrated device. The sensor is accurate to 0.1 degree C at reporting the temperature of the device itself (which due to heat generated by the electronics has gotten warmer)
The device consumes ~0.1 watts of power, weighs about 200 g and has an average specific heat capacity of 1.0 J/(g·°C) (weighted mix of glass, ABS, FR-4, copper). Dimensions are 1" x 3" x 4".
What I've got so far is this heating calculation: 200 g × 1 °C × 1.0 J/(g·°C) / 0.1 W = 2000 s ≈ 33 minutes to heat up 1 degree.
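The heating arithmetic above, plus the steady-state correction it leads to, can be sketched as follows (a minimal model assuming Newton's law of cooling and the numbers quoted above; the thermal resistance is inferred from the observed 1 °C equilibrium offset at 0.1 W):

```python
mass = 200.0     # g
c_p = 1.0        # J/(g*K), assumed average specific heat
power = 0.1      # W
delta_T = 1.0    # K, observed self-heating offset at equilibrium

# Initial heat-up time, ignoring losses: ~33 minutes per degree
t_heat_min = mass * c_p * delta_T / power / 60.0
print(f"time to self-heat 1 K: {t_heat_min:.1f} min")

# In steady state, Newton's law of cooling gives offset = P * R_th,
# so the observed 1 K offset at 0.1 W implies R_th = 10 K/W.
R_th = delta_T / power

def ambient_from_reading(reading_c, power_w=power):
    """Equilibrium calibration: subtract the self-heating offset."""
    return reading_c - power_w * R_th

print(ambient_from_reading(23.0))   # 22.0, matching the calibrated reference
```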
I'm assuming what we need is sensor value − heat generated + heat dissipated to arrive at the actual temperature, which will require measuring the $k$ in Newton's law of cooling? Then what?
I'd really appreciate your help here.
| Typically you would attempt to measure rather than calculate the effect.
Perhaps by having a second, calibrated device with a long probe that provides an independent measurement of the temperature. You do this in situ if possible, or in some reasonable test stand (which might be as simple as a disposable cooler filled with your working fluid).
The only real alternative is to read the data sheet: either for the whole device (if it is an off-the-shelf instrument), or for the particular chip (if it is something that you manufactured to spec).
As a desperate fall-back position you might be able to find a rule-of-thumb for devices in the same class, but those are unlikely to be centrally tabulated. Ask around is my best suggestion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/41140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between a spinor and a vector or a tensor? Why do we call a 1/2 spin particle satisfying the Dirac equation a spinor, and not a vector or a tensor?
| Actually, the Dirac equation is somewhat of a "square root" of the Klein-Gordon equation, so intuitively it can't represent a vector or a tensor: symbolically, a spinor corresponds to a square root of a "differential", so the transformation rules have to differ from those of tensors (in a vague sense one is taking the "square root" of the tensor transformation rules; spinors actually come from "half" of the Gram density for tensors), and hence from those of vectors. The above discussion can be made rigorous in the principal-bundle, or rather vector-bundle, setting; one very good book is Spin Geometry (PMS-38), Princeton University Press, by H.B. Lawson, where one can find why the Dirac equation describes only spin-1/2 particles.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/41211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 4,
"answer_id": 0
} |
Direction of Potential Gradient & Electric field Potential gradient is the negative of the electric field:
$dV=-\vec{E}\cdot \operatorname{d}\!\vec{r}$
Does the negative sign mean that the direction of potential gradient $\operatorname{d}\!V\!/\!\operatorname{d}\!\vec{r}$ is opposite to that of the electric field $\vec{E}$? And if it does mean so, why is its direction opposite to the field? Explanations would be helpful.
| One thing to note about potential: it's simply the work done in moving an electric charge against the electrical force. Thus, the negative sign says that the work is done against the electric force (either attraction or repulsion). In other words, electric potential decreases in the direction of the electric field. Hence, you're quite right about the concept.
This link explains it in a visual way.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/41283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Will the siphoning effect help a system pump water upwards if the water's entry and exit points are at the same height? I am looking to pump water from a pool up to a roof for solar heating (black plastic tubing) and then back into the pool with the original source water. Does the gravitational force of the water flowing back down the pipe into the pool assist the pump and therefore decrease the pumps required strength?
I have been told that the strength of the pump needed would have to be the same regardless of the exit point's height. i.e. If the water was being pumped into a tank on the roof instead of flowing back to the pool.
| Short answer:
If you have a U-shaped tube full of water with both ends in the pool, and you lift the center of the tube up to roof height, you can pump water through that tube without regard to how high the top is.
The only resistance will be the resistance of water flowing through the tube (and solar collector).
The height will not matter.
(That's if the height is less than 10 meters. If it's more than 10 meters, air pressure will be insufficient to keep the water in the tube, and the water will evaporate, form bubbles, and separate at the top.)
You also have to make sure the tube and solar collector have no air leaks, because, depending on the height, the solar collector and its joints will be at negative pressure.
If enough air leaks into the tube, it could block the water from flowing.
ADDED: If you get bubbles in the tube or solar collector, the way to get rid of them is 1) block both lower ends of the tube so no water can flow out, 2) go to the top, fill up the tube and solar collector with water, to remove all bubbles, 3) seal it up again at the top so no air can leak in, 4) unblock the tube at the bottom ends.
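The 10-meter figure above comes from balancing atmospheric pressure against the weight of the water column; a quick sanity check (neglecting the water's vapor pressure):

```python
rho = 1000.0       # kg/m^3, density of water
g = 9.81           # m/s^2
p_atm = 101_325.0  # Pa, standard atmosphere

max_lift = p_atm / (rho * g)
print(f"maximum siphon crest height ~ {max_lift:.2f} m")  # ~10.33 m
```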
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/41331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
why sometimes touching old flickering tube lights starts them properly In my old house there are two old tube lights. Some times they don't start properly, (specially at evening time, may be it is because of low voltage), they starts flickering i.e. on and off continuously. And when my elder brother touches them, mostly at center they just goes on. Even i have tried many times, the touching works, and even after removing the hand, once it is on(working properly), doesn't stop them, and they remains on.
So why and how touching them works.
Sorry for my bad English.
Thanks in advance.
| I'll throw this into the mix. If I plug in one end of my guitar cable to my amp, turn the amp on, bring up the volume a little, and touch only the center contact on the other end of the cable, I get a loud 60 Hertz buzz from my amp. I believe this is because my body is acting as an antenna - picking up some radiation from the house wiring, which is typically unshielded.
Could this "antenna effect" also play a part in triggering the fluorescent lamps to light when they are touched?
I also know that if you take a fluorescent tube and go stand under a high voltage line, it will glow because of the very high E-field levels.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/41503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Why doesn't my particle simulation end in a flat disc? I've made a 3D particle simulator where particles attract each other with a force proportional to the inverse square of the radius. The purpose of my experiment is to see if this alone would create a flat disk (like some galaxies), since an inverse-square attraction is the same as gravity and we can consider the particles stars. The particles (or stars) are initially randomly distributed around a point in space, with a starting velocity given by the cross product of the vector pointing towards the center and the axis of rotation (to make sure there is a net angular momentum).
As you probably already guessed this is not enough to make it form a flat disc. I've been reading about galaxies and found that the cause of the disc shape is because angular momentum is hard to get rid of. However, angular momentum is not really a separate law right? I mean, by using the attraction by the inverse square of the radius the angular momentum should be constant? Please, correct me if I'm wrong.
So what am I missing? Dark matter? Dissipation? I realize that galaxy formation is a REALLY broad and advanced topic, but my simulation is ONLY meant to get particles to form a disc shape.
Any help appreciated!
| You get a disk when the particles lose energy (often by radiating it away) but keep their angular momentum.
Galaxies are made of stars, but the stars are born from clouds of gas. The gas has many ways to radiate energy away, which can cause it to settle into a disk. The stars may then be formed in the disk structure.
Not all galaxies are disks, though, many are spherical. As you say, it's not a simple problem and it depends sensitively on the initial conditions.
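On the questioner's side point that angular momentum stays constant under a pure inverse-square attraction: a minimal single-particle orbit integration (the initial conditions below are arbitrary bound-orbit choices) confirms it numerically:

```python
# Single particle orbiting a fixed inverse-square attractor.
GM = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 0.8             # arbitrary initial conditions (bound orbit)
dt, steps = 1e-4, 100_000     # integrate ~2.5 orbits

def ang_mom(x, y, vx, vy):
    return x * vy - y * vx    # L_z per unit mass

L0 = ang_mom(x, y, vx, vy)    # 0.8
for _ in range(steps):
    r3 = (x * x + y * y) ** 1.5
    vx += -GM * x / r3 * dt   # semi-implicit (symplectic) Euler:
    vy += -GM * y / r3 * dt   # update velocity first,
    x += vx * dt              # then position with the new velocity
    y += vy * dt

print(L0, ang_mom(x, y, vx, vy))   # L is conserved to rounding error
```

Energy loss (the ingredient the simulation is missing) has to be added on top of this conservative dynamics; the force law alone will neither create nor destroy angular momentum.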
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/41583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |