Practical and theoretical implementation discussion.
I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \(\vec{c}\) and intensity \(I\), then the irradiance at a point \(\vec{p}\) on some surface can be derived as:
\(E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{d\omega}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)
Assuming that the BRDF is constant, \(f(\omega_o,\vec{p},\omega_i) = C = \text{const}\), the outgoing radiance from that surface point due to the point light can be computed as:
\(L_o(\vec{p},\omega_o) = L_e(\vec{p},\omega_o) + \int_{\Omega}{f(\omega_o,\vec{p},\omega_i)L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} =\)
\( L_e(\vec{p},\omega_o) + C\int_{\Omega}{L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + CE(\vec{p}) = \)
\(L_e(\vec{p},\omega_o) + CI \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)
This agrees with what one is used to seeing in real-time graphics implementations. How does one motivate similar expressions for more complex BRDFs, considering that radiance is not defined for a point light?
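To make the constant-BRDF result above concrete, here is a minimal Python sketch of point-light shading. The function and variable names are mine, emission \(L_e\) is taken as zero, and \(C = \text{albedo}/\pi\) is assumed as the usual energy-conserving Lambertian constant:

```python
import numpy as np

def lambert_point_light(p, n, c, I, albedo):
    """Outgoing radiance at surface point p (unit normal n) due to a point
    light at c with intensity I, for a constant BRDF C = albedo/pi.
    Emission L_e is taken to be zero in this sketch."""
    d = c - p
    r2 = d.dot(d)                    # squared distance ||c - p||^2
    wi = d / np.sqrt(r2)             # unit direction toward the light
    cos_theta = max(n.dot(wi), 0.0)  # clamp: light below the horizon contributes nothing
    E = I * cos_theta / r2           # irradiance E = I cos(theta) / r^2
    return (albedo / np.pi) * E      # L_o = C * E

# Example: light one unit above the surface point, straight along the normal
p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
c = np.array([0.0, 0.0, 1.0])
L = lambert_point_light(p, n, c, I=1.0, albedo=1.0)
# cos(theta) = 1 and r^2 = 1, so L = 1/pi
```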
Point and directional light sources are not physical and are commonly defined via delta functions/distributions. For a delta directional light, you take the solid-angle version of the rendering integral (which is what you have above) and replace \(L_i\) by an angular delta function, which makes the integral degenerate to a simple integrand evaluation. For a point light, which is a delta volume light, you would take the volume version of the rendering equation, i.e. the one that integrates over 3D space, and again replace \(L_i\) by a positional delta function.
I am aware that a point light is not physical; I am just wondering how the commonly accepted formulae in real-time CG are motivated. In most cases they use the intensity as if it were radiance, with BRDFs defined in terms of radiance. Do you have a reference for the part where you mention replacing the radiance with a delta function? That will also cause even more problems mathematically, as it will turn out that you're multiplying distributions in some cases.
What I am looking for is a reference with a mathematically robust derivation that is consistent with radiometry definitions. Papers/books suggestions are welcome.
Well, you have an integral over an angle that is non-zero if and only if it includes a singular direction. That's a delta function, isn't it? If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.

> That will also cause even more problems mathematically as it will turn out that you're multiplying distributions in some cases.

Yup. You know what else causes problems? Perfectly specular reflectors and refractors, which also have deltas in their BSDFs. You have two choices when dealing with them: approximate them with spikey but non-delta BSDFs, or sample them differently. In general, you don't try to evaluate those for arbitrary directions; you just cast one ray in the appropriate direction. It's exactly the same for point and directional lights. Either use light sources that subtend a small but finite solid angle, or special-case them. You evaluate the integral (for a single light source) by ignoring everything but the delta direction and then ignoring the delta factor in the integrand. You can think of it as using a Monte Carlo estimator f(x)/pdf(x), where both the f and the pdf have identical delta functions that cancel out.
I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.
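The special-casing described above can be sketched as follows. This is only an illustration; the scene interface (`point_lights` as a list of `(position, intensity)` pairs, and `sample_area_light` returning a `(direction, radiance, pdf)` triple) is a hypothetical stand-in, not any particular renderer's API:

```python
import math

def direct_light(p, n, brdf, point_lights, sample_area_light):
    L = 0.0
    # Delta lights: evaluate, don't sample. The delta in the integrand and
    # the delta in the sampling pdf cancel, leaving a plain evaluation of
    # brdf * I * cos(theta) / r^2 for the one singular direction.
    for c, I in point_lights:
        d = [ci - pi for ci, pi in zip(c, p)]
        r2 = sum(x * x for x in d)
        r = math.sqrt(r2)
        wi = [x / r for x in d]
        cos_t = max(sum(ni * wii for ni, wii in zip(n, wi)), 0.0)
        L += brdf(wi) * I * cos_t / r2
    # Non-delta lights: ordinary Monte Carlo estimator f(x)/pdf(x).
    wi, Li, pdf = sample_area_light()
    cos_t = max(sum(ni * wii for ni, wii in zip(n, wi)), 0.0)
    if pdf > 0.0:
        L += brdf(wi) * Li * cos_t / pdf
    return L

# Example: one point light one unit along the normal, Lambertian BRDF 1/pi,
# and a dummy area-light sampler that returns zero radiance.
Lv = direct_light((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                  lambda wi: 1.0 / math.pi,
                  [((0.0, 0.0, 1.0), 1.0)],
                  lambda: ((0.0, 0.0, 1.0), 0.0, 1.0))
# Lv = (1/pi) * 1 * 1 / 1 = 1/pi
```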
I understand it in the sense that you want just that one direction; however, I don't see how that solves the intensity vs. radiance issue. After all, the rendering equation considers radiance, not intensity.
> If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.

I would be very grateful if you could refer me to a publication of his that tackles this issue; I am not exactly sure how to find his publications from his username.

> Yup. You know what else causes problems?

I mean purely theoretical problems, not specifically the ones you are referring to, though I meant precisely the case where you have a perfect mirror/refraction and a point light, since then you get a product of distributions.

> I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.

PBRT doesn't really go much further beyond implementation details; most of the arguments are of "an intuitive" nature, and I want something a bit more formal. The closest thing I could find on the topic was from http://www.oceanopticsbook.info/view/li ... f_radiance, namely:

> Likewise, you cannot define the radiance emitted by the surface of a point source because \( \Delta A\) becomes zero even though the point source is emitting a finite amount of energy.

And seeing as the rendering equation uses radiance, I want to understand how a point light source for which radiance is not defined fits into this framework.

vchizhov wrote: ↑Fri Apr 26, 2019 3:27 pm
> PBRT doesn't really go much further beyond implementation details; most of the arguments are of "an intuitive" nature, and I want something a bit more formal.

Here's what I found in PBRT after a brief search: http://www.pbr-book.org/3ed-2018/Light_ ... eIntegrand
It classifies point sources as producing delta distributions. It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.
Thank you, I have looked through PBRT already, but just saying "it's a Dirac delta" is hardly mathematically robust; there's no derivation. It doesn't even derive central relationships like \(\frac{\cos\theta}{r^2}dA = d\omega\).
> It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.

The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof (be it through measure theory, differential geometry, or even starting with Maxwell's equations). For one thing, the BRDF is defined in terms of radiance, not in terms of intensity. And if it is indeed well-defined, I want to understand the details of why it is so, and not just rely on hand-waving arguments. That is, I'd want a proof similar to what I did above for the diffuse case; obviously the thing I derived above does not apply to a BRDF that actually depends on \(\omega_i\).
If your concern is with undefined quantities, you might need to start by defining exactly what you mean by a point light, since it's already a physically implausible entity. E.g., is it meaningful to have the \(d\omega\) or \(dA\) terms from your Lambertian derivation? Does it have a normal? Is it a singular point, or is it the limit of an arbitrarily small sphere? If it were me, I'd just start with a definition that is consistent with a delta distribution, because it's consistent with what I want to represent, I know it makes the math work, and I can actually start implementing something.
Beyond that, I'm not sure there's anything else I can say that will convince you without a lot more work than I have time for. Best of luck.
I mostly agree with you, vchizhov. My understanding is that the delta function is an ad-hoc construct, often defined as
\(\int_{-\infty}^\infty f(x) \delta(x - x_0) \, dx = f(x_0)\)
Note that the above is not a valid Lebesgue integral. A more rigorous, Lebesgue-compatible definition is as a measure:
\(\int_{-\infty}^\infty f(x) \, \mathrm{d} \delta_{x_0}(x) = f(x_0)\)
The whole point of this exercise is, in my understanding, for notational convenience: to write a specific discrete value \(f(x_0)\) as integral in order to avoid special-casing when you're working in an integral framework.
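One way to see this numerically (a small sketch; the Gaussian width and integration grid are arbitrary choices of mine): replace the delta by a narrow normalized bump and observe that the integral collapses to a point evaluation of \(f\).

```python
import math

def integrate_against_bump(f, x0, sigma, n=200001, half_width=1.0):
    """Riemann sum of f(x) * bump(x) over [x0-half_width, x0+half_width],
    where bump is a normalized Gaussian approximating delta(x - x0)."""
    a = x0 - half_width
    dx = 2.0 * half_width / (n - 1)
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    total = 0.0
    for i in range(n):
        x = a + i * dx
        total += f(x) * norm * math.exp(-0.5 * ((x - x0) / sigma) ** 2) * dx
    return total

x0 = 0.3
approx = integrate_against_bump(math.cos, x0, sigma=1e-3)
# As sigma -> 0 this tends to cos(0.3): the "integral" is just f(x0)
```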
So if you really don't want to deal with delta functions/measures, you can explicitly special-case. For the rendering equation, this means avoiding the reflectance integral altogether and evaluating the integrand at the chosen location. So in effect you are defining the outgoing radiance at the shading point rather than defining some infinitesimally small light source. To also account for the contribution of other light sources, you'll need to sum that just-defined radiance with a proper reflectance integral. You'd do the same for plastic-like BRDFs that are the sum of a perfect specular lobe and a smooth lobe. This seems like a more mathematically correct approach to me. It's more cumbersome though, which is why people prefer the more convenient (and controversial) approach of just thinking of it as defining a point light source instead.
Honestly, I like this discussion and I'd like to have a bullet-proof definition of everything that doesn't involve ad-hoc constructs. I myself try to avoid the use of delta stuff whenever I can. Volumetric null scattering is one example where I don't like the introduction of a forward-scattering delta phase function – there's a cleaner way to do it.
I assume that you have a wide-sense stationary discrete-time input noise signal $x(n)$, a linear time-invariant filter with impulse response $h(n)$, and an output noise process $y(n)$. If we assume that $x(n)$ is zero mean (which is not necessary, but makes it easier) and if we model $x(n)$ as perfectly band-limited, we get for its variance
$$\sigma_x^2=\frac{1}{2\pi}\int_{-\pi}^{\pi}S_x(e^{j\theta})d\theta=\frac{1}{\pi}\int_{\theta_1}^{\theta_2}S_x(e^{j\theta})d\theta\tag{1}$$
where $S_x(e^{j\theta})$ is the power spectral density of $x(n)$, $\theta$ is the normalized frequency $\theta=2\pi f/f_s$, and $\theta_1$ and $\theta_2$ are the lower and upper band-edges, respectively. With the frequency response of the filter
$$H(e^{j\theta})=\sum_{n=-\infty}^{\infty}h(n)e^{-jn\theta}$$
we can write the power spectral density of the output noise as
$$S_y(e^{j\theta})=S_x(e^{j\theta})\left|H(e^{j\theta})\right|^2$$
and its variance is
$$\sigma_y^2=\frac{1}{2\pi}\int_{-\pi}^{\pi}S_y(e^{j\theta})d\theta=\frac{1}{2\pi}\int_{-\pi}^{\pi}S_x(e^{j\theta})\left|H(e^{j\theta})\right|^2d\theta=\frac{1}{\pi}\int_{\theta_1}^{\theta_2}S_x(e^{j\theta})\left|H(e^{j\theta})\right|^2d\theta$$
From this equation we can get a lower and upper bound for $\sigma_y^2$ as follows
$$\min_{\theta\in [\theta_1,\theta_2]}\left|H(e^{j\theta})\right|^2\frac{1}{\pi}\int_{\theta_1}^{\theta_2}S_x(e^{j\theta})d\theta\le\sigma_y^2\le\max_{\theta\in [\theta_1,\theta_2]}\left|H(e^{j\theta})\right|^2\frac{1}{\pi}\int_{\theta_1}^{\theta_2}S_x(e^{j\theta})d\theta$$
and from (1) we finally get
$$\min_{\theta\in [\theta_1,\theta_2]}\left|H(e^{j\theta})\right|^2\sigma_x^2\le\sigma_y^2\le\max_{\theta\in [\theta_1,\theta_2]}\left|H(e^{j\theta})\right|^2\sigma_x^2\tag{2}$$
These bounds can be useful (i.e. relatively tight) if the noise bandwidth is small compared to the bandwidth of the filter $h(n)$. Unfortunately, they are almost useless if the filter bandwidth is small compared to the bandwidth of the input noise process $x(n)$.
In the latter case, i.e. if the filter bandwidth is much smaller than the noise bandwidth, you can come up with similar bounds as in (2). You just need to estimate the minimum and maximum of the power spectral density $S_x(e^{j\theta})$ within the filter bandwidth.
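For what it's worth, the bounds in (2) are easy to check numerically. The sketch below (band edges and the moving-average filter are arbitrary example choices, not from the text) creates band-limited noise by zeroing FFT bins, filters it, and compares the output variance against the bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1 << 16
theta = np.fft.rfftfreq(N) * 2.0 * np.pi      # normalized frequency grid in [0, pi]
theta1, theta2 = 0.2 * np.pi, 0.4 * np.pi     # example band edges

# Band-limit white Gaussian noise by zeroing its spectrum outside the band
X = np.fft.rfft(rng.standard_normal(N))
X[(theta < theta1) | (theta > theta2)] = 0.0
x = np.fft.irfft(X, n=N)

h = np.ones(8) / 8.0                          # example LTI filter: moving average
y = np.convolve(x, h, mode="same")            # output noise process y(n)

H2 = np.abs(np.fft.rfft(h, n=N)) ** 2         # |H(e^{j theta})|^2 on the same grid
band = (theta >= theta1) & (theta <= theta2)
lower = H2[band].min() * x.var()
upper = H2[band].max() * x.var()
ok = lower <= y.var() <= upper                # the bounds in (2) should hold
```

Note that for this particular filter the lower bound is nearly useless (|H| has a zero inside the band), which matches the remark above that the bounds are only tight when the noise band is narrow relative to the filter's passband.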
Calculate the potential inside a uniformly charged solid sphere of radius $R$ and total charge $q$.
My attempt:
There are several ways to solve this problem but I'm curious as to whether this particular method is applicable.
WLOG let the point $P$ lie on the $z$-axis at a distance $z<R$ from the center of the sphere (the origin).
Consider an infinitesimal volume element whose position vector $\mathbf{r}$ makes an angle $\theta$ with the $z$-axis.
Let $r'$ be the distance between the volume element and $P$. By the cosine rule (as shown in the figure), $$r'=\sqrt{z^2+r^2-2zr\cos\theta}$$
$$\displaystyle dV=\frac{1}{4\pi\epsilon_0}\frac{dQ}{r'}=\frac{1}{4\pi\epsilon_0}\frac{\rho\ d\tau}{r'}=\frac{1}{4\pi\epsilon_0}\frac{\rho r^2\sin\theta\ dr\ d\theta\ d\phi}{\sqrt{z^2+r^2-2zr\cos\theta}}$$
$$\displaystyle V=\int_0^{2\pi}\int_0^\pi\int_0^R\frac{1}{4\pi\epsilon_0}\frac{\rho r^2\sin\theta\ dr\ d\theta\ d\phi}{\sqrt{z^2+r^2-2zr\cos\theta}}$$
The only problem is that the denominator of the integrand goes to $0$ when $\theta=0$ and $r=z$. How do I circumvent this problem?
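A sketch of one standard way around this: the singularity is integrable, which becomes visible if the $\theta$-integral is carried out first with the substitution $u=\cos\theta$:
$$\int_0^\pi\frac{\sin\theta\ d\theta}{\sqrt{z^2+r^2-2zr\cos\theta}}=\int_{-1}^{1}\frac{du}{\sqrt{z^2+r^2-2zru}}=\frac{(z+r)-|z-r|}{zr}=\begin{cases}2/z & r<z\\2/r & r>z\end{cases}$$
so the troublesome point $\theta=0$, $r=z$ drops out (the set $r=z$ has measure zero in the remaining $r$-integral). Continuing with $\rho=3q/(4\pi R^3)$,
$$V=\frac{\rho}{4\pi\epsilon_0}\cdot 2\pi\left[\int_0^z r^2\,\frac{2}{z}\,dr+\int_z^R r^2\,\frac{2}{r}\,dr\right]=\frac{\rho}{2\epsilon_0}\left(R^2-\frac{z^2}{3}\right)=\frac{q}{8\pi\epsilon_0 R}\left(3-\frac{z^2}{R^2}\right)$$
which agrees with the familiar interior potential of a uniformly charged sphere.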
Hi,
I have a simple question as in the title.
Why is volumetric emission proportional to absorption coefficient?
I often see the volumetric emission term written as \(\sigma_a(x)\,L_e(x,\omega)\). I have also seen another representation, a volumetric emittance function (e.g. in Mark Pauly's thesis, Robust Monte Carlo Methods for Photorealistic Rendering of Volumetric Effects), which has the unit of radiance divided by metre (that is, W sr^-1 m^-3).
Do particles that emit light not scatter light at all?
Thanks
I don't quite understand the question: doesn't the first term refer to a particle density at an integration position \(x\) that emits radiance \(L_e\) in viewing direction \(\omega\), and that absorbs radiance according to \(\sigma_a(x)\)? The emitted radiance is not proportional to the absorption, but is scaled by \(\sigma_a(x)\). Let \(\sigma_a := 1.0\) be a constant for all \(x\) with respect to the density field; then your model only accounts for emission.
With \(L_b(x_b,\omega) + \int_{x_b}^{x} L_e(x',\omega)\,\sigma_a(x')\,dx'\), where \(x_b\) is the position of a constantly radiating background light and the integral runs along the viewing ray from the backlight to the integration position, you get the classical emission+absorption model that is e.g. used for interactive DVR in SciVis. In- and out-scattering can be incorporated into the equation. See Nelson Max's '95 paper on optical models for DVR for the specifics: https://www.cs.duke.edu/courses/cps296. ... dering.pdf
Also note that those models usually don't consider individual particles, but rather particle densities, and then derive coefficients e.g. by considering the projected area of all particles inside an infinitesimally flat cylinder projected on the cylinder cap.
Emission and absorption are sometimes expressed with a single coefficient in code for practical reasons, e.g. so that a single coefficient in [0..1] can be used to look up an RGBA tuple in a single, pre-computed and optionally pre-integrated transfer function texture.
Thanks for reply.
I can find the volumetric emission term which is proportional to the absorption coefficient for example in Jensen's Photon Mapping book, the Spectral and Decomposition Tracking paper, or Wojciech Jarosz's thesis. In Jarosz's thesis, there is the following sentence by equation (4.12) on page 60:
The thesis says: "Media, such as fire, may also emit radiance, Le, by spontaneously converting other forms of energy into visible light. This emission leads to a gain in radiance expressed as:" -- I think it is required that emitting particles should not scatter light in order for L_e^V to be representable in the decomposed form \sigma_a(x) L_e(x, w).

Regarding "I think it is required that emitting particles should not scatter light": I'm not quite sure if I understand how you come to this assumption, or if I totally understand your question, but I don't see why particles that emit light shouldn't also scatter light.
However, the mental model behind radiance transfer is not one that considers the interaction of individual particles. The model rather derives the radiance in a density field due to emission, absorption, and scattering phenomena at certain sampling positions and in certain directions. So the question is: for position x, how much light is emitted by particles at or near x, how much light arrives there due to other particles scattering light towards x ("in-scattering"), and conversely: how much light is absorbed due to local absorption phenomena at x, and how much light is scattered away from x ("out-scattering", distributed w.r.t. the phase function).
See e.g. Hadwiger et al. "Real-time Volume Graphics", p. 6:
(https://doc.lagout.org/science/0_Comput ... aphics.pdf): "Analogously, the total emission coefficient can be split into a source term q, which represents emission (e.g., from thermal excitation), and a scattering term."
Out-scattering + heat dissipation etc. ==> total absorption at point x contributed to a viewing ray in direction w
In-scattering + emission ==> added radiance at point x along the viewing direction w
It is not about individual particles. The scattering equation is about the four effects contributing to the total radiance at a point x in direction w. There are no individual particles associated with the position x, you consider particle distributions and how they affect the radiance at x. The radiance increases if particles scatter light towards x, or if particles at (or near) x emit light. The radiance goes down due to absorption and out-scattering from the particle density at x. The point x is usually the sampling position that is encountered when marching a ray through the density field, and is not associated with individual particle positions.
I didn't find a more general source and am working with this paper anyway - the paper also shows the scattering equation and states that it has a combined emission+in-scattering term: http://www.vis.uni-stuttgart.de/~amentm ... eprint.pdf (cf. Eq. 3 on page 3).
Hope I'm not misreading your question?
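To make the emission + absorption model concrete, here is a minimal ray-marching sketch (my own illustration, not code from Max's paper; a homogeneous medium is assumed so the marched result can be checked against the closed-form solution L_b·e^(-sigma_a·D) + L_e·(1 - e^(-sigma_a·D))):

```python
import math

def march_emission_absorption(L_b, L_e, sigma_a, depth, n_steps=10_000):
    """Euler ray march of dL/ds = -sigma_a * L + sigma_a * L_e,
    starting from the background radiance L_b behind the medium."""
    ds = depth / n_steps
    L = L_b
    for _ in range(n_steps):
        # absorption removes sigma_a * L per unit length,
        # emission adds sigma_a * L_e per unit length
        L += (-sigma_a * L + sigma_a * L_e) * ds
    return L

def closed_form(L_b, L_e, sigma_a, depth):
    T = math.exp(-sigma_a * depth)  # transmittance through the medium
    return L_b * T + L_e * (1.0 - T)
```

A sigma_a of zero leaves the background radiance untouched, and a very thick medium converges to the emitted radiance L_e, matching the intuition in the posts above.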
My current thinking process when reading the paper you mentioned last is as follows:
0. "However, the mental model behind radiance transfer is not one that considers the interaction of individual particles." -- Yes, I know.
1. eq. (1) says that contribution from source radiance Lm(x', w) is proportional to the extinction coefficient sigma_t(x').
- I can understand the RTE in this form. The probability density with which light interacts (via one of scattering/absorption/emission) with the medium at x' is proportional to the particle density, that is, sigma_t(x').
2. eq. (3) says that once an interaction happens, it is emission with probability (1 - \Lambda) and scattering with probability \Lambda.
- I can understand the latter because scattering albedo \Lambda is the probability that scattering happens out of some interaction. This is straightforward.
However, I can't understand the former. The original question: why is volumetric emission proportional to the absorption coefficient?
I can understand that absorption happens with probability (1 - \Lambda), but I cannot understand why emission also happens with probability (1 - \Lambda).
Now my question can be paraphrased as follows:
Shouldn't the probability that emission happens be independent of the absorption coefficient?
I'm sorry in case the above explanation confuses you more, and thank you for your kindness in replying in such detail.
I found an interesting lecture script. http://www.ita.uni-heidelberg.de/~dulle ... pter_3.pdf
Section 3.3 Eq 3.9.
As I understand, ultimately it is a matter of definition motivated by thermodynamics of a special case.
I imagine that the same particles that block light along some beam also emit light of their own. So it makes sense that emission and absorption strength have a common density-related prefactor.
The reverse view, from the point of importance being emitted into the scene, seems more intuitive to me: importance particles have a chance to interact with particles of the medium in proportion to their cross section. If they interact, the medium transfers energy to the imaging sensor.
That lecture script says:
"This is Kirchhoff’s law. It says that a medium in thermal equilibrium can have any emissivity jν and extinction αν, as long as their ratio is the Planck function."
Which sounds like they really CAN'T have any emissivity and extinction, but have to have them in a specific ratio.
For example, for green light of wavelength 570 nm and 2000 degrees K, that ratio is (from the Planck function) 6537. So the extinction is relatively small in comparison.
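The 6537 figure can be reproduced by evaluating the Planck spectral radiance at 570 nm and 2000 K. The snippet below is my own check; the unit, W·m⁻²·sr⁻¹·µm⁻¹, is my inference from the magnitude, since the figure is quoted without units:

```python
import math

# CODATA constants (SI)
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
k = 1.380649e-23      # Boltzmann constant, J/K

def planck_B_lambda(lam, T):
    """Planck spectral radiance B_lambda(T) in W / (m^2 * sr * m)."""
    return (2.0 * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * k * T))

# Green light, 570 nm, at 2000 K, expressed per micrometre of wavelength
B_per_um = planck_B_lambda(570e-9, 2000.0) * 1e-6
print(round(B_per_um))  # a value close to the quoted 6537
```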
Later it says "If the temperature is constant along the ray, then the intensity will indeed exponentially approach [the Planck function]".
Anyway, the real reason I am replying is so I can share this video of a "black" flame.
The flame emits light but has no shadow (it seems fires don't have shadows), but can be made to have one and even appear black under single-frequency lighting:
https://www.youtube.com/watch?v=5ZNNDA2WUSU
This seems to contradict the notion that a medium has to absorb light in order to emit it... unless the amount absorbed is very tiny, as suggested by the lecture.
Ha! Now this comes a bit late, but I appreciate you posting this experiment. It is very cool indeed.
I think that, in contrast to the assumptions in that part of the lecture, the lamp is not a black body. At least, obviously, its emission spectrum does not follow Planck's law. Please don't ask when in reality the idealization as a black body is justified ...
Somewhere I read that good emitters are generally also good absorbers, in the sense of material properties. The experiment displays this very well, since the sodium absorbs a lot of the light, whereas normal air and a normal flame do not.
|
I did the following proof earlier and just wanted confirmation as to whether it works. The question was to show$$\frac{d}{dx}x^2=2x$$by the difference-quotient definition of a derivative, and then prove that limit with the $\epsilon$, $\delta$ definition for limits.
I said that $$\frac{d}{dx}x^2=\lim_{h\to0}\frac{(x+h)^2-x^2}{h}=\lim_{h\to0}\frac{h^2+2xh}{h}$$

Then we assert that this is equal to $2x$, which we show is true using the $\epsilon$, $\delta$ definition. We must show that for all $\epsilon > 0$, there exists $\delta>0$ such that for all $|h-0|<\delta$ we have $$\left|2x-\frac{h^2+2xh}{h}\right|<\epsilon$$

Assuming $h\neq 0$ this simplifies to $$\left|2x-h-2x\right|=|-h|=|h|<\epsilon$$

So we're left with: for all $|h|<\delta$ we must have $|h|<\epsilon$, which is true for all $\epsilon$ if $\delta=\epsilon$. Hence we can conclude the limit does equal $2x$ and $$\frac{d}{dx}x^2=2x$$

I know it's kind of a dull question, so if you have any cool insights or tips to share, that'd be appreciated!
Yes it is correct. A tip is to immediately write: $$\lim_{h \rightarrow 0} \frac{h^2+2xh}{h}=\lim_{h \rightarrow 0} 2x+h.$$ You can now see directly, to get $2x+h$ within $\epsilon>0$ of $2x$, we must choose $|h|<\epsilon$.
Note that if $f$ is a continuous function, and $f(0)$ exists, then $\lim_{h\to 0} f(h) = f(0)$ (this is just the definition of continuity).
This lets you immediately plug in $h=0$ in after dividing by $h$, avoiding the epsilon-delta portion of your argument.
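The choice $\delta=\epsilon$ can also be sanity-checked numerically; a small sketch (the values are arbitrary, and floating-point rounding adds a tiny error on top of the exact $|h|$):

```python
def difference_quotient(x, h):
    """((x+h)^2 - x^2) / h, defined for h != 0."""
    return ((x + h) ** 2 - x ** 2) / h

# The proof shows |difference_quotient(x, h) - 2x| = |h|,
# so delta = epsilon works: any 0 < |h| < delta keeps the
# quotient within epsilon of 2x.
x = 3.0
for eps in (1e-2, 1e-4, 1e-6):
    delta = eps
    h = delta / 2          # an arbitrary h with 0 < |h| < delta
    assert abs(difference_quotient(x, h) - 2 * x) < eps
```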
|
Ex.7.2 Q7 Coordinate Geometry Solution - NCERT Maths Class 10 Question
Find the coordinates of a point \(A\), where \(AB\) is the diameter of circle whose center is \((2, -3)\) and \(B\) is \((1, 4)\)
Text Solution

Reasoning:
The coordinates of the point \(P(x, y)\) which divides the line segment joining the points \(A(x_1, y_1)\) and \(B(x_2, y_2)\) internally in the ratio \(m_1 : m_2\) are given by the Section Formula.
What is the known?
The \(x\) and \(y\) co-ordinates of the center of the circle and one end of the diameter \(B\).
What is the unknown?
The coordinates of a point \(A\).
Steps:
From the Figure,
Let the coordinates of point \(A\) be \((x, y)\). Given that the mid-point of \(AB\) is \(C\,(2, -3)\), the center of the circle:
\(\begin{align} \therefore \; (2, - 3) &= \left( {\frac{{{\text{x}} + 1}}{2},\frac{{{\text{y}} + 4}}{2}} \right) \end{align}\)
\(\begin{align} \Rightarrow \frac{{{\text{x}} + 1}}{2}& = 2{\text{ and }}\frac{{{\text{y}} + 4}}{2} = - 3 & & \end{align}\) (By Cross multiplying & transposing)
\(\begin{align} \Rightarrow {\text{x}} + 1 &= 4{\text{ and y}} + 4 = - 6 \end{align}\)
\(\begin{align} \Rightarrow {\text{x}} &= 3{\text{ and y}} = - 10\end{align}\)
Therefore, the coordinates of \(A\) are \((3, -10)\)
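Since \(C\) is the midpoint of \(AB\), the answer can also be read off as \(A = 2C - B\) componentwise; a quick check (illustrative code, not part of the NCERT solution):

```python
def other_end_of_diameter(center, b):
    """If center is the midpoint of AB, then A = 2*center - B."""
    return (2 * center[0] - b[0], 2 * center[1] - b[1])

A = other_end_of_diameter((2, -3), (1, 4))
print(A)  # (3, -10), matching the solution above
```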
|
I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \(\vec{c}\) and intensity \(I\), then the irradiance at point \(\vec{p}\) at some surface can be derived as:
\(E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{d\omega}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2}\frac{dA}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)
Assuming that the brdf is constant \(f(\omega_o,\vec{p},\omega_i) = C = \text{const}\) the outgoing radiance from that surface point due to that point light can be computed as:
\(L_o(\vec{p},\omega_o) = L_e(\vec{p},\omega_o) + \int_{\Omega}{f(\omega_o,\vec{p},\omega_i)L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} =\)
\( L_e(\vec{p},\omega_o) + C\int_{\Omega}{L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + CE(\vec{p}) = \)
\(L_e(\vec{p},\omega_o) + CI \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)
Which agrees with what one is used to seeing in implementations in real-time graphics. How does one motivate similar expressions for more complex brdfs considering the fact that radiance for a point light is not defined?
Point and directional light sources are not physical and are commonly defined via delta functions/distributions. For a delta directional light, you take the solid angle version of the rendering integral (which is what you have above) and replace \(L_i\) by an angular delta function, which makes the integral degenerate to a simple integrand evaluation. For a point light, which is a delta volume light, you would take the volume version of the rendering equation, i.e. the one that integrates over 3D space, and then again replace \(L_i\) by a positional delta function.
I am aware that a point light is not physical, I am just wondering how the commonly accepted formulae in real-time cg are motivated. They are using in most cases the intensity as if it were radiance, with brdfs defined in terms of radiance. Do you have a reference for the part where you mention that you're replacing the radiance with a delta function? That will also cause even more problems mathematically as it will turn out that you're multiplying distributions in some cases.
What I am looking for is a reference with a mathematically robust derivation that is consistent with radiometry definitions. Papers/books suggestions are welcome.
Well, you have an integral over an angle that is non-zero if and only if it includes a singular direction. That's a delta function, isn't it? If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.

vchizhov wrote: "That will also cause even more problems mathematically as it will turn out that you're multiplying distributions in some cases."

Yup. You know what else causes problems? Perfectly specular reflectors and refractors, which also have deltas in their BSDFs. You have two choices when dealing with them -- approximate them with spikey but non-delta BSDFs, or sample them differently. In general, you don't try to evaluate those for arbitrary directions; you just cast one ray in the appropriate direction. It's exactly the same for point and directional lights. Either use light sources that subtend a small but finite solid angle, or special-case them. You evaluate the integral (for a single light source) by ignoring everything but the delta direction and then ignoring the delta factor in the integrand. You can think of it as using a Monte Carlo estimator f(x)/pdf(x), where both the f and the pdf have identical delta functions that cancel out.
I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.
I understand it in the sense that you want just that one direction, however, I don't see how that solves the intensity vs radiance issue. After all the rendering equation considers radiance and not intensity.
I would be very grateful if you could refer me to a publication of his that tackles this issue, I am not exactly sure how to find his publications from his username.If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications. I mean purely theoretical problems, not specifically the ones you are referring to, though I meant precisely the case where you have a perfect mirror/refraction and a point light, since then you get a product of distributions.Yup. You know what else causes problems? PBRT doesn't really go much further beyond implementation details, most of the arguments are of "an intuitive" nature, I want something a bit more formal. The closest thing I could find on the topic was from: http://www.oceanopticsbook.info/view/li ... f_radiance, namely:I don't have a specific reference for this, but the first place I'd look for one is the PBRT book. And seeing as the rendering equation uses radiance I want to understand how a point light source for which radiance is not defined fits into this framework.Likewise, you cannot define the radiance emitted by the surface of a point source because \( \Delta A\) becomes zero even though the point source is emitting a finite amount of energy. Here's what I found in PBRT after a brief search: http://www.pbr-book.org/3ed-2018/Light_ ... eIntegrandvchizhov wrote: ↑Fri Apr 26, 2019 3:27 pmPBRT doesn't really go much further beyond implementation details, most of the arguments are of "an intuitive" nature, I want something a bit more formal. The closest thing I could find on the topic was from: http://www.oceanopticsbook.info/view/li ... 
f_radiance, namely:And seeing as the rendering equation uses radiance I want to understand how a point light source for which radiance is not defined fits into this framework.Likewise, you cannot define the radiance emitted by the surface of a point source because \( \Delta A\) becomes zero even though the point source is emitting a finite amount of energy.
It classifies point sources as producing delta distributions. It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.
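For concreteness, here is a sketch (my own hedged illustration, not code from PBRT or from any poster) of the special-casing discussed above: instead of evaluating incident radiance, the delta collapses the reflectance integral to one BRDF evaluation in the light direction, with \(I\cos\theta/r^2\) standing in for the integrated radiance. With a constant Lambertian BRDF \(C = \text{albedo}/\pi\), this reproduces the expression derived in the opening post:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def shade_point_light(p, n, light_pos, intensity, brdf, w_o):
    """Special-cased direct lighting from a point light:
    L_o = f(w_o, w_i) * I * cos(theta) / r^2 -- no integral evaluated."""
    to_light = sub(light_pos, p)
    r2 = dot(to_light, to_light)
    w_i = normalize(to_light)
    cos_theta = max(dot(n, w_i), 0.0)
    return brdf(w_o, w_i) * intensity * cos_theta / r2

albedo = 0.8
lambertian = lambda w_o, w_i: albedo / math.pi   # constant BRDF C

# Light two units straight above a point with normal +z:
L = shade_point_light((0, 0, 0), (0, 0, 1), (0, 0, 2), 10.0,
                      lambertian, (0, 0, 1))
# equals C * I * cos(theta) / r^2 = (0.8 / pi) * 10 * 1 / 4
```

Swapping `lambertian` for any other BRDF callable gives exactly the formula the opening post asks about, with intensity used where radiance would normally appear.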
Thank you, I have looked through pbrt already, but just saying - "it's a Dirac delta" is hardly mathematically robust - there's no derivation. It doesn't even derive central relationships like \(\frac{\cos\theta}{r^2}dA = d\omega\).
"It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas." -- The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof (be it through measure theory, differential geometry, or even starting with Maxwell's equations). For one thing, the brdf is defined in terms of radiance, not in terms of intensity. And if it is indeed well-defined, I want to understand the details of why it is so, and not just rely on hand-waving arguments. As in, have a similar proof to what I did above for the diffuse case; obviously the thing I derived above does not apply to a brdf that actually depends on $\omega_i$, however.

"You didn't use real wrestling. If you use real wrestling, it's impossible to get out of that hold." - Bobby Hill

vchizhov wrote: ↑Fri Apr 26, 2019 7:27 pm "The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof ..."
If your concern is with undefined quantities, you might need to start by defining exactly what you mean by a point light, since it's already a physically implausible entity. E.g., is it meaningful to have the dw or dA terms from your Lambertian derivation? Does it have a normal? Is it a singular point, or is it the limit of an arbitrarily small sphere? If it were me, I'd just start with a definition that is consistent with a delta distribution, because it's consistent with what I want to represent, I know it makes the math work, and I can actually start implementing something.
Beyond that, I'm not sure there's anything else I can say that will convince you without a lot more work than I have time for. Best of luck.
I mostly agree with you, vchizhov. My understanding is that the delta function is an ad-hoc construct, often defined as
\(\int_{-\infty}^\infty f(x) \delta(x - x_0) \, dx = f(x_0)\)
Note that the above is not a valid Lebesgue integral. A more rigorous, Lebesgue-compatible definition is as a measure:
\(\int_{-\infty}^\infty f(x) \, \mathrm{d} \delta_{x_0}(x) = f(x_0)\)
The whole point of this exercise is, in my understanding, for notational convenience: to write a specific discrete value \(f(x_0)\) as integral in order to avoid special-casing when you're working in an integral framework.
So if you really don't want to deal with delta functions/measures, you can explicitly special-case. For the rendering equation, this means avoiding the reflectance integral altogether and evaluating the integrand at the chosen location. So in effect you are
defining the outgoing radiance at the shading point rather than defining some infinitesimally small light source. To also account for the contribution of other light sources, you'll need to sum that just-defined radiance with a proper reflectance integral. You'd do the same for plastic-like BRDFs that are the sum of a perfect specular lobe and a smooth lobe. This seems like a more mathematically correct approach to me. It's more cumbersome though; that's why people prefer the more convenient (and controversial) approach of just thinking of it as defining a point light source instead.
Honestly, I like this discussion and I'd like to have a bullet-proof definition of everything that doesn't involve ad-hoc constructs. I myself try to avoid the use of delta stuff whenever I can. Volumetric null scattering is one example where I don't like the introduction of a forward-scattering delta phase function – there's a cleaner way to do it.
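As a sketch of the "sum a special-cased delta term with a proper reflectance integral" recipe, consider a plastic-like material in a toy setup of my own choosing: a diffuse lobe plus a perfect mirror under a constant environment radiance, where the exact answer is (albedo + k_s)·L_env. The smooth lobe goes through Monte Carlo; the delta lobe is never put through the integral:

```python
import math
import random

random.seed(42)

L_env = 1.0     # constant incoming radiance from every direction
albedo = 0.5    # diffuse (smooth-lobe) reflectance
k_s = 0.3       # perfect-mirror (delta-lobe) reflectance

# Smooth lobe: Monte Carlo over the hemisphere with uniform
# solid-angle sampling (pdf = 1 / (2*pi), so cos(theta) ~ U[0, 1]).
N = 20_000
acc = 0.0
for _ in range(N):
    cos_theta = random.random()
    f_diffuse = albedo / math.pi
    acc += f_diffuse * L_env * cos_theta / (1.0 / (2.0 * math.pi))
L_diffuse = acc / N              # estimates albedo * L_env = 0.5

# Delta lobe: handled by a single mirror ray instead of the integral.
L_specular = k_s * L_env         # exactly 0.3

L_total = L_diffuse + L_specular  # close to the exact 0.8
```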
|
There are two prominent uses of the term "average" in algorithm analysis.
Average-case as a special case of expected costs
Here, "average case" just means "expected case w.r.t. the uniform distribution", since we usually analyse with uniform inputs in mind (everything else is hard, and there's not much reason to prefer one distribution over another in most cases). Example: the average-case running-time cost of sorting algorithms is often analyzed w.r.t. uniformly-random permutations.
Average cost in the classic sense.
When analyzing data structures, we can look at average costs across the contained elements for a fixed instance -- no probability distribution here (well, you could...). That is, we may still have/want to consider a worst-, average- or best-case instance. Example: Consider BSTs. The average search cost of a given tree is the total cost for searching all contained elements (one after the other) divided by the number of contained elements. This is a classic quantity in AofA called internal path length.
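The average-cost-for-a-fixed-instance idea can be made concrete with a toy BST (my own illustration; here "cost" counts comparisons, with the root costing 1, which shifts the classic internal path length by n):

```python
def insert(tree, key):
    """Insert key into a BST represented as nested dicts (or None)."""
    if tree is None:
        return {'key': key, 'left': None, 'right': None}
    side = 'left' if key < tree['key'] else 'right'
    tree[side] = insert(tree[side], key)
    return tree

def total_search_cost(tree, depth=1):
    """Sum over all stored keys of the comparisons needed to find them."""
    if tree is None:
        return 0
    return (depth
            + total_search_cost(tree['left'], depth + 1)
            + total_search_cost(tree['right'], depth + 1))

# A fixed instance -- no probability distribution anywhere:
t = None
for k in [4, 2, 6, 1, 3, 5, 7]:   # insertion order giving a balanced tree
    t = insert(t, k)

avg_cost = total_search_cost(t) / 7   # (1*1 + 2*2 + 4*3) / 7 = 17/7
```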
Note: There are situations where average and expected do not usually mean the same thing. For instance, the expected (also: average-case) height of BSTs is in $O(\log n)$ but the average height is in $\Theta(\sqrt{n})$. That is because "expected" is implicitly (by tradition) meant w.r.t. uniformly-random permutations of insertion operations whereas "average" means the average over all BSTs of a given size. The two distributions are not the same, and apparently significantly so!
Recommendation: Whenever you use "expected" or "average-case", be very clear about which quantities are random w.r.t. which distribution.
The specific sentence you quote is indeed not clear if read in isolation -- if you ignore that CLRS specify exactly what "simple uniform hashing" means on the very same page.
There are two potentially random variables here: 1) the content of the hashtable itself, and 2) the key searched for.
Simple uniform hashing is a simple way of specifying both. We abstract from sequences of insertions¹ and just assume that every one of the $n$ elements we inserted independently hashed to each of the $m$ addresses with probability $1/m$. We assume that the searched key hashes to each address with probability $1/m$.
That's how the proof works: our search hits each list with probability $1/m$ (cf 2), and they all have the same expected length of $n/m$ (via 1). Hence, the expected cost (under this specific model) for searching for $x$ not in the table is proportional to
$\qquad\displaystyle\begin{align*} T_u(x,n,m) &= 1 + \sum_{i=1}^m \operatorname{Pr}[h(x) = i] \cdot \mathbb{E}[\operatorname{length}(T[i])] \\ &\overset{1,2}{=} 1 +\sum_{i=1}^m \frac{1}{m} \cdot \frac{n}{m} \\ &= 1 + \frac{n}{m}.\end{align*}$
The "$+1$" is there to account for computing $h(x)$ and accessing $T[h(x)]$, the sum represents the cost for searching along the list.
¹ That is fair since the sequence of insertions does not have as much impact on the resulting structure as for, say, BSTs. The hash function shakes everything up. We don't want to talk about the precise interaction of sequence and hash function, so we just assume that the result of both is independently uniform -- that's something we can work with. It may not represent reality, of course!
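The $1 + n/m$ result can be illustrated with a quick simulation of the simple-uniform-hashing model (my own sketch: random bucket choices stand in for the hash function, covering both assumptions 1 and 2):

```python
import random

random.seed(1)

def avg_unsuccessful_search_cost(n, m, trials=2000):
    """Average of 1 + length of the probed chain, with both the n stored
    keys and the searched key hashing uniformly into m buckets."""
    total = 0
    for _ in range(trials):
        lengths = [0] * m
        for _ in range(n):
            lengths[random.randrange(m)] += 1      # assumption (1)
        total += 1 + lengths[random.randrange(m)]  # assumption (2)
    return total / trials

cost = avg_unsuccessful_search_cost(n=50, m=10)
# theory predicts 1 + n/m = 6; the simulation lands close to it
```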
|
Recently, I read a tutorial about Hough line transform at the OpenCV tutorials. It is a technique to find lines in an image using a parameter space. As explained in the tutorial, for this it is necessary to use the
polar coordinate system. In the commonly used Cartesian coordinate system, a line would be represented by \(y=mx+b\). In the polar coordinate system, on the other hand, a line is represented by

\begin{equation}
\label{eq:PolarCoordinateSystem_LineRepresentation}
\rho = x \cdot \cos{\theta} + y \cdot \sin{\theta}
\end{equation}
This article tries to explain the relation between these two forms.
In \eqref{eq:PolarCoordinateSystem_LineRepresentation} there are two new parameters: radius \(\rho\) and angle \(\theta\) as also depicted in the following figure. \(\rho\) is the length of a vector which always starts at the pole \((0,0)\) (analogous term to the origin in the Cartesian coordinate system) and ends at the line (orange in the figure) so that \(\rho\) will be orthogonal to the line. This is important because otherwise the following conclusions wouldn't work.
So, first start with the \(y\)-intercept \(b=\frac{\rho}{\sin{\theta}}\). Note that the angle \(\theta\) comes up twice: between the \(x\)-axis and the \(\rho\) vector plus between the \(y\)-axis and the blue line (on the right side). We will use trigonometrical functions to calculate the \(y\)-intercept. This is simply done by using the \(\sin\)-function\begin{equation*} \begin{split} \sin{\theta} &= \frac{\text{opposite}}{\text{hypotenuse}} \\ \sin{\theta} &= \frac{\rho}{b} \\ b &= \frac{\rho}{\sin{\theta}} \end{split} \end{equation*}
and that is exactly what the equation said for the \(y\)-intercept.
Now it is time for the slope \(m=-\frac{\cos{\theta}}{\sin{\theta}}\). For this, the relation\begin{equation*} m = \tan{\alpha} \end{equation*}
is needed, where \(\alpha\) is the slope angle of the line. \(\alpha\) can be calculated by using our known \(\theta\) angle:\begin{equation*} \alpha = 180^{\circ} - (180^{\circ} - 90^{\circ} - \theta) = 90^{\circ} + \theta. \end{equation*}
Now we have \(m=\tan{\left(90^{\circ} + \theta\right)}\), which is equivalent to \(m=\frac{\sin{\left(90^{\circ} + \theta\right)}}{\cos{\left(90^{\circ} + \theta\right)}}\). Because of \(\sin{x}=\cos{\left(90^{\circ}-x\right)}\) and \(\cos{x}=\sin{\left(90^{\circ}-x\right)}\) we can do a little bit of rewriting\begin{equation*} m=\frac {\cos\left({90^{\circ} - (90^{\circ} + \theta)}\right)} {\sin\left({90^{\circ} - (90^{\circ} + \theta}\right))} = \frac {\cos\left({-\theta}\right)} {\sin\left({-\theta}\right)} = \frac {\cos\left({\theta}\right)} {-\sin\left({\theta}\right)} = -\frac {\cos\left({\theta}\right)} {\sin\left({\theta}\right)} \end{equation*}
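Both relations can be verified numerically; a small sketch (the function name is mine):

```python
import math

def polar_to_slope_intercept(rho, theta):
    """Convert the polar line rho = x*cos(theta) + y*sin(theta)
    to y = m*x + b. Requires sin(theta) != 0 (non-vertical line)."""
    m = -math.cos(theta) / math.sin(theta)
    b = rho / math.sin(theta)
    return m, b

rho, theta = 2.0, math.radians(60)
m, b = polar_to_slope_intercept(rho, theta)

# Every point on y = m*x + b satisfies the polar form:
for x in (-1.0, 0.0, 3.5):
    y = m * x + b
    assert abs(x * math.cos(theta) + y * math.sin(theta) - rho) < 1e-9
```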
I swear I read the question many times, but I am still not quite sure what it is about. So I will try to clarify some things, hoping that they are relevant.
First off, any complex multiple of a wave function indeed describes exactly the same state of a system. This follows directly from the postulate of quantum mechanics which states that the expectation value of an observable $A$ represented by a self-adjoint operator $\hat{A}$ is given as follows,$$\langle A \rangle=\frac{\langle \Psi | \hat{A} | \Psi \rangle}{\langle \Psi | \Psi \rangle} \, ,$$where $\Psi$ is the wave function. It is indeed easy to show that given $\Psi$ we can multiply it by any complex number $c$ and this will not change the expectation value of any observable,$$\langle A \rangle=\frac{\langle c \Psi | \hat{A} | c \Psi \rangle}{\langle c \Psi | c \Psi \rangle} =\frac{c^{*} c \langle \Psi | \hat{A} | \Psi \rangle}{c^{*} c \langle \Psi | \Psi \rangle}=\frac{\langle \Psi | \hat{A} | \Psi \rangle}{\langle \Psi | \Psi \rangle} \, ,$$since the $c^{*} c$ factors in the numerator and denominator cancel exactly. Also note that the simpler definition of the expectation value,$$\langle A \rangle = \langle \Psi | \hat{A} | \Psi \rangle \, ,$$holds true only for a normalized wave function $\Psi$ and follows trivially from the more general definition above.
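This cancellation is easy to check numerically. The following is a minimal sketch in a finite-dimensional toy Hilbert space (the dimension, operator, and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random self-adjoint "observable" and an unnormalized "wave function".
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2                  # self-adjoint by construction
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

def expectation(psi, A):
    """<A> = <psi|A|psi> / <psi|psi>, well-defined for unnormalized psi."""
    return (psi.conj() @ A @ psi) / (psi.conj() @ psi)

c = 3.0 - 2.0j                            # arbitrary complex rescaling
assert np.allclose(expectation(psi, A), expectation(c * psi, A))
assert abs(expectation(psi, A).imag) < 1e-12   # self-adjoint => real value
```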
Secondly, for the simplest case of one particle in one spatial dimension, if one would like to interpret the wave function $\Psi(x,t)$ as a probability amplitude that the particle is at $x$, and consequently its square modulus, $ \left|\Psi(x, t)\right|^2 = {\Psi(x, t)}^{*}\Psi(x, t)$, as the probability density that the particle is at $x$, one has to normalize it as follows,$$\int\limits_{-\infty}^\infty d x \, |\Psi(x,t)|^2 = 1 \, .$$This is required because one of the axioms of probability theory states that the probability that some elementary event in the entire sample space will occur is 1.
The sample space is just the set of all possible outcomes of an experiment, which in our case is the set of events of finding the particle at any possible position, i.e., anywhere between $-\infty$ and $+\infty$. Thus, in our case the probability of finding the particle anywhere between $-\infty$ and $+\infty$ has to be 1, and consequently, the wave function has to be normalized. Otherwise its interpretation as a probability amplitude would be inconsistent with probability theory.
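Numerically, normalization is just dividing by the square root of the computed norm. A small sketch with a discretized Gaussian wave packet (the grid and names are illustrative):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4) * np.exp(1j * 1.5 * x)   # unnormalized packet

norm2 = np.sum(np.abs(psi)**2) * dx              # discrete <psi|psi>
psi_normalized = psi / np.sqrt(norm2)

# Total probability is now 1, as the probability axioms require.
assert abs(np.sum(np.abs(psi_normalized)**2) * dx - 1.0) < 1e-12
```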
Hope it helps.
Practical and theoretical implementation discussion.
I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \(\vec{c}\) and intensity \(I\), then the irradiance at point \(\vec{p}\) at some surface can be derived as:
\(E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{d\omega}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2}\frac{dA}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)
Assuming that the brdf is constant \(f(\omega_o,\vec{p},\omega_i) = C = \text{const}\) the outgoing radiance from that surface point due to that point light can be computed as:
\(L_o(\vec{p},\omega_o) = L_e(\vec{p},\omega_o) + \int_{\Omega}{f(\omega_o,\vec{p},\omega_i)L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} =\)
\( L_e(\vec{p},\omega_o) + C\int_{\Omega}{L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + CE(\vec{p}) = \)
\(L_e(\vec{p},\omega_o) + CI \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)
Which agrees with what one is used to seeing in implementations in real-time graphics. How does one motivate similar expressions for more complex brdfs considering the fact that radiance for a point light is not defined?
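For concreteness, the constant-BRDF closed form above can be evaluated directly; the following sketch assumes the common convention \(C = \text{albedo}/\pi\) and ignores emission, occlusion, and units (all names are illustrative):

```python
import numpy as np

def lambert_point_light(p, n, c, I, albedo):
    """Outgoing radiance at point p with normal n due to a point light at c
    with intensity I, for a constant BRDF C = albedo/pi."""
    d = c - p
    r2 = d @ d                     # squared distance ||c - p||^2
    wi = d / np.sqrt(r2)           # unit direction toward the light
    cos_theta = max(n @ wi, 0.0)
    C = albedo / np.pi
    return C * I * cos_theta / r2

p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
c = np.array([0.0, 0.0, 2.0])      # light straight above at distance 2
L = lambert_point_light(p, n, c, I=10.0, albedo=0.8)
assert abs(L - (0.8 / np.pi) * 10.0 / 4.0) < 1e-12
```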
Point and directional light sources are not physical and are commonly defined via delta functions/distributions. For a delta directional light, you take the solid angle form of the rendering equation (which is what you have above) and replace \(L_i\) by an angular delta function, which makes the integral degenerate into a simple evaluation of the integrand. For a point light, which is a delta volume light, you would take the volume form of the rendering equation, i.e. the one that integrates over 3D space, and again replace \(L_i\) by a positional delta function.
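One way to make the delta reading concrete is a limiting argument: model the point light as a small uniformly bright (Lambertian) sphere of radius \(\varepsilon\) with fixed intensity \(I = L\pi\varepsilon^2\), and watch the reflectance integral approach \(C I \cos\theta / r^2\) as \(\varepsilon \to 0\). A sketch (the setup and all names are illustrative assumptions):

```python
import math

C = 0.25          # constant BRDF
I = 10.0          # intensity of the limiting point light
r = 2.0           # distance from shading point to light center
cos_theta = 0.7   # incidence cosine of the central direction

point_light = C * I * cos_theta / r**2

for eps in (0.5, 0.1, 0.01):
    L = I / (math.pi * eps**2)    # radiance so the sphere has intensity I
    # Solid angle subtended by a sphere of radius eps at distance r:
    omega = 2 * math.pi * (1 - math.sqrt(1 - (eps / r)**2))
    # The integrand is ~constant over the tiny cone, so the integral factorizes:
    sphere = C * L * cos_theta * omega
    print(eps, sphere, point_light)
```

As the radius shrinks, `sphere` converges to `point_light`, which is exactly the degenerate evaluation the delta formulation encodes.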
I am aware that a point light is not physical; I am just wondering how the commonly accepted formulae in real-time CG are motivated. In most cases they use the intensity as if it were radiance, with brdfs defined in terms of radiance. Do you have a reference for the part where you mention replacing the radiance with a delta function? That will also cause even more problems mathematically, as it turns out that you're multiplying distributions in some cases.
What I am looking for is a reference with a mathematically robust derivation that is consistent with radiometry definitions. Papers/books suggestions are welcome.
Well, you have an integral over an angle that is non-zero if and only if it includes a singular direction. That's a delta function, isn't it? If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.
"That will also cause even more problems mathematically as it will turn out that you're multiplying distributions in some cases."
Yup. You know what else causes problems? Perfectly specular reflectors and refractors, which also have deltas in their BSDFs. You have two choices when dealing with them: approximate them with spikey but non-delta BSDFs, or sample them differently. In general, you don't try to evaluate those for arbitrary directions; you just cast one ray in the appropriate direction. It's exactly the same for point and directional lights. Either use light sources that subtend a small but finite solid angle, or special-case them. You evaluate the integral (for a single light source) by ignoring everything but the delta direction and then ignoring the delta factor in the integrand. You can think of it as using a Monte Carlo estimator f(x)/pdf(x), where both the f and the pdf have identical delta functions that cancel out.
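The "deltas cancel" estimator can be sketched for a perfect mirror: the sampling routine returns the single admissible direction together with the ratio f·cos/pdf in which the shared delta has already been canceled analytically, so no delta is ever evaluated numerically (names and the reflectance value are illustrative):

```python
import numpy as np

def sample_mirror(wo, n, reflectance=0.9):
    """Return (wi, weight), where weight = f * cos / pdf with deltas canceled.

    f   = reflectance * delta(wi - mirror(wo)) / cos(theta_i)
    pdf = delta(wi - mirror(wo)); the deltas cancel, leaving the reflectance.
    """
    wi = 2.0 * (wo @ n) * n - wo          # mirror direction about the normal
    return wi, reflectance

n = np.array([0.0, 0.0, 1.0])
wo = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
wi, weight = sample_mirror(wo, n)
assert np.allclose(wi, [-1.0 / np.sqrt(2.0), 0.0, 1.0 / np.sqrt(2.0)])
assert weight == 0.9
```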
I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.
I understand it in the sense that you want just that one direction; however, I don't see how that solves the intensity vs radiance issue. After all, the rendering equation considers radiance and not intensity.
"If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications."
I would be very grateful if you could refer me to a publication of his that tackles this issue; I am not exactly sure how to find his publications from his username.
"Yup. You know what else causes problems?"
I mean purely theoretical problems, not specifically the ones you are referring to, though I meant precisely the case where you have a perfect mirror/refraction and a point light, since then you get a product of distributions.
"I don't have a specific reference for this, but the first place I'd look for one is the PBRT book."
PBRT doesn't really go much further beyond implementation details; most of the arguments are of "an intuitive" nature, and I want something a bit more formal. The closest thing I could find on the topic was from http://www.oceanopticsbook.info/view/li ... f_radiance, namely: "Likewise, you cannot define the radiance emitted by the surface of a point source because \( \Delta A\) becomes zero even though the point source is emitting a finite amount of energy." And seeing as the rendering equation uses radiance, I want to understand how a point light source for which radiance is not defined fits into this framework.
vchizhov wrote: ↑Fri Apr 26, 2019 3:27 pm: "PBRT doesn't really go much further beyond implementation details ..."
Here's what I found in PBRT after a brief search: http://www.pbr-book.org/3ed-2018/Light_ ... eIntegrand
It classifies point sources as producing delta distributions. It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.
Thank you, I have looked through PBRT already, but just saying "it's a Dirac delta" is hardly mathematically robust; there's no derivation. It doesn't even derive central relationships like \(\frac{\cos\theta}{r^2}dA = d\omega\).
"It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas."
The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof (be it through measure theory, differential geometry, or even starting from Maxwell's equations). For one thing, the brdf is defined in terms of radiance, not in terms of intensity. And if it is indeed well-defined, I want to understand the details of why it is so, and not just rely on hand-waving arguments. As in, have a proof similar to what I did above for the diffuse case; obviously the thing I derived above does not apply to a brdf that actually depends on $\omega_i$, however.
vchizhov wrote: ↑Fri Apr 26, 2019 7:27 pm: "The issue is that I am not convinced that it is well-defined ..."
If your concern is with undefined quantities, you might need to start by defining exactly what you mean by a point light, since it's already a physically implausible entity. E.g., is it meaningful to have the \(d\omega\) or \(dA\) terms from your Lambertian derivation? Does it have a normal? Is it a singular point, or is it the limit of an arbitrarily small sphere? If it were me, I'd just start with a definition that is consistent with a delta distribution, because it's consistent with what I want to represent, I know it makes the math work, and I can actually start implementing something.
Beyond that, I'm not sure there's anything else I can say that will convince you without a lot more work than I have time for. Best of luck.
I mostly agree with you, vchizhov. My understanding is that the delta function is an ad-hoc construct, often defined as
\(\int_{-\infty}^\infty f(x) \delta(x - x_0) \, dx = f(x_0)\)
Note that the above is not a valid Lebesgue integral. A more rigorous, Lebesgue-compatible definition is as a measure:
\(\int_{-\infty}^\infty f(x) \, \mathrm{d} \delta_{x_0}(x) = f(x_0)\)
The whole point of this exercise is, in my understanding, for notational convenience: to write a specific discrete value \(f(x_0)\) as integral in order to avoid special-casing when you're working in an integral framework.
So if you really don't want to deal with delta functions/measures, you can explicitly special-case. For the rendering equation, this means avoiding the reflectance integral altogether and evaluating the integrand at the chosen location. So in effect you are
defining the outgoing radiance at the shading point rather than defining some infinitesimally small light source. To also account for the contribution of other light sources, you'll need to sum that just-defined radiance with a proper reflectance integral. You'd do the same for plastic-like BRDFs that are the sum of a perfect specular lobe and a smooth lobe. This seems like a more mathematically correct approach to me. It's more cumbersome though, which is why people prefer the more convenient (and controversial) approach of just thinking of it as defining a point light source instead.
Honestly, I like this discussion and I'd like to have a bullet-proof definition of everything that doesn't involve ad-hoc constructs. I myself try to avoid the use of delta stuff whenever I can. Volumetric null scattering is one example where I don't like the introduction of a forward-scattering delta phase function – there's a cleaner way to do it.
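The point about notational convenience can be made concrete: integrating against the point measure \(\delta_{x_0}\) is, by definition, just evaluating \(f(x_0)\), and a narrow-Gaussian mollification of the delta recovers the same number in the limit. A sketch (the function and names are illustrative):

```python
import math

def f(x):
    return math.cos(x) + x**2

x0 = 0.5
special_cased = f(x0)    # the "integral" against the point measure delta_{x0}

# Narrow-Gaussian approximation of the delta, integrated on a fine grid
# covering +/- 10 standard deviations around x0:
sigma = 1e-3
half = 10000
xs = [x0 + (i - half) * (10 * sigma / half) for i in range(2 * half + 1)]
dx = xs[1] - xs[0]
mollified = sum(
    f(x) * math.exp(-(x - x0) ** 2 / (2 * sigma**2))
    / (sigma * math.sqrt(2 * math.pi))
    for x in xs
) * dx
assert abs(mollified - special_cased) < 1e-5
```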
Numerical Methods for PDEs: In Occasion of Raytcho Lazarov's 70th Birthday
Joseph Pasciak, Texas A&M University
An Analysis of Finite Element Approximation to the Eigenvalues of Problems Involving Fractional Order Differential Operators
Authors: Bangti Jin, Texas A&M University; Raytcho Lazarov, Texas A&M University; Joseph Pasciak, Texas A&M University
Abstract:
In this talk, we consider an eigenvalue problem coming from a boundary value problem involving fractional derivatives. Specifically, we consider the Caputo and Riemann-Liouville fractional differential operators and associated boundary conditions. These boundary value problems will be investigated from a variational point of view. We are interested in the case when the differential operator is of order \(\alpha\) with \(\alpha \in \left(1, 2\right)\). These derivatives lead to non-symmetric boundary value problems. The Riemann-Liouville case is somewhat simpler as the underlying variational problem is coercive on a natural subspace of \(H^{\alpha/2} \left(0, 1\right)\) even though its solutions are less regular. The variational formulation of the Caputo derivative case is more interesting as it leads to a variational problem involving different test and trial spaces. In this case, one is required to prove variational stability on the discrete level as well.
In both cases, the analysis of the eigenvalue problem involves the derivation of so-called shift theorems, which demonstrate that the solution of the variational problem and its adjoint are more regular, i.e., lie in \(H^{\alpha/2+\gamma} \left(0, 1\right)\) with \(\gamma > 0\). This regularity pickup enables one to prove that the error between the solution and its finite element approximation converges with \(\gamma\)-dependent rates. This, in turn, can be used to deduce eigenvalue/eigenvector convergence rates. Finally, the results of numerical experiments illustrating the theory will be presented.
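The fractional operators in question can be probed numerically via the Grünwald-Letnikov approximation of the Riemann-Liouville derivative. The sketch below is not the variational/finite element setting of the talk, just an illustration (names and tolerances are ours), checked against the closed form \(D^{\alpha} x^2 = \frac{\Gamma(3)}{\Gamma(3-\alpha)} x^{2-\alpha}\):

```python
import math

def gl_derivative(f, x, alpha, n=4000):
    """Grunwald-Letnikov approximation of the Riemann-Liouville derivative
    of order alpha at x, using n grid steps on [0, x]."""
    h = x / n
    w, total = 1.0, f(x)                  # k = 0 term, weight +1
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k          # recurrence for (-1)^k * C(alpha, k)
        total += w * f(x - k * h)
    return total / h**alpha

alpha, x = 1.5, 1.0                       # alpha in (1, 2), as in the talk
exact = math.gamma(3) / math.gamma(3 - alpha) * x ** (2 - alpha)
approx = gl_derivative(lambda t: t * t, x, alpha)
assert abs(approx - exact) / exact < 0.01
```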
Bastiaan Cornelis van Fraassen (born 5 April 1941) is Distinguished Professor of Philosophy at San Francisco State University and the McCosh Professor of Philosophy Emeritus at Princeton University, noted for his seminal contributions to philosophy of science.
Van Fraassen earned his B.A. (1963) from the University of Alberta and his M.A. (1964) and Ph.D. (1966, under the direction of Adolf Grünbaum) from the University of Pittsburgh. He previously taught at Yale University, the University of Southern California, the University of Toronto and, from 1982 to 2008, at Princeton University, where he is now emeritus.[1] At San Francisco State University, he teaches courses in the philosophy of science, philosophical logic and the role of models in scientific practice.[2][3]
Van Fraassen is an adult convert to the Roman Catholic Church[4] and is one of the founders of the Kira Institute. He is a fellow of the American Academy of Arts and Sciences; an overseas member of the Royal Netherlands Academy of Arts and Sciences since 1995;[5] and a member of the Académie Internationale de Philosophie des Sciences ("International Academy of the Philosophy of Science").[6] In 1986, van Fraassen received the Lakatos Award for his contributions to the philosophy of science and, in 2012, the Philosophy of Science Association's inaugural Hempel Award for lifetime achievement in philosophy of science.[7]
Among his many students are the philosophers Elisabeth Lloyd at Indiana University and Anja Jauernig at New York University.
Van Fraassen coined the term "constructive empiricism" in his 1980 book The Scientific Image, in which he argued for agnosticism about the reality of unobservable entities. That book was "widely credited with rehabilitating scientific anti-realism."[8] As the Stanford Encyclopedia explains: "The constructive empiricist follows the logical positivists in rejecting metaphysical commitments in science, but she parts with them regarding their endorsement of the verificationist criterion of meaning, as well as their endorsement of the suggestion that theory-laden discourse can and should be removed from science. Before van Fraassen's The Scientific Image, some philosophers had viewed scientific anti-realism as dead, because logical positivism was dead. Van Fraassen showed that there were other ways to be an empiricist with respect to science, without following in the footsteps of the logical positivists."[9]
In his 1989 book Laws and Symmetry, van Fraassen attempted to lay the groundwork for explaining physical phenomena without assuming that such phenomena are governed by rules or laws that can be said to cause their behavior. Focusing on the problem of underdetermination, he argued for the possibility that theories could be empirically equivalent yet differ in their ontological commitments. He rejects the notion that the aim of science is to produce an account of the physical world that is literally true, holding instead that its aim is to produce theories that are empirically adequate.[10] Van Fraassen has also studied the philosophy of quantum mechanics, philosophical logic, and epistemology.
Van Fraassen has been the editor of the Journal of Philosophical Logic and co-editor of the Journal of Symbolic Logic.[11]
In his paper Singular Terms, Truth-value Gaps, and Free Logic, van Fraassen opens with a very brief introduction of the problem of non-referring names.
Instead of any unique formalization, though, he simply adjusts the axioms of a standard predicate logic such as that found in Quine's Methods of Logic. Instead of an axiom like \(\forall x\,Px \Rightarrow \exists x\,Px\) he uses \((\forall x\,Px \land \exists x\,(x = a)) \Rightarrow \exists x\,Px\); this will naturally be true if the existential claim of the antecedent is false. If a name fails to refer, then an atomic sentence containing it, that is not an identity statement, can be assigned a truth value arbitrarily. Free logic is proved to be complete under this interpretation.
He indicates, however, that he sees no good reason to call statements which employ them either true or false. Some have attempted to solve this problem by means of many-valued logics; van Fraassen offers in their stead the use of supervaluations. Questions of completeness change when supervaluations are admitted, since they allow for valid arguments that do not correspond to logically true conditionals.[12]
In his essay "The Anti-Realist Epistemology of van Fraassen's The Scientific Image ", Paul M. Churchland, one of van Fraassen's critics, contrasted van Fraassen's idea of unobservable phenomena with the idea of merely unobserved phenomena.[13]
|
Practical and theoretical implementation discussion.
I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \(\vec{c}\) and intensity \(I\), then the irradiance at point \(\vec{p}\) at some surface can be derived as:
\(E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{d\omega}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2}\frac{dA}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)

Assuming that the brdf is constant, \(f(\omega_o,\vec{p},\omega_i) = C = \text{const}\), the outgoing radiance from that surface point due to that point light can be computed as:

\(L_o(\vec{p},\omega_o) = L_e(\vec{p},\omega_o) + \int_{\Omega}{f(\omega_o,\vec{p},\omega_i)L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} =\)

\( L_e(\vec{p},\omega_o) + C\int_{\Omega}{L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + CE(\vec{p}) = \)

\(L_e(\vec{p},\omega_o) + CI \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)

Which agrees with what one is used to seeing in real-time graphics implementations. How does one motivate similar expressions for more complex brdfs, considering the fact that radiance for a point light is not defined?
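For concreteness, the closed-form result above can be evaluated directly. The sketch below computes \(E(\vec{p})\) and the Lambertian \(L_o\) with \(L_e = 0\); all scene values are made up purely for illustration:

```python
import numpy as np

# Evaluating E(p) = I * cos(theta) / ||c - p||^2 and the Lambertian
# outgoing radiance L_o = C * E(p), with L_e = 0.
I = 10.0                        # point-light intensity (hypothetical)
C = 0.18 / np.pi                # constant (Lambertian) BRDF, albedo 0.18
c = np.array([0.0, 2.0, 0.0])   # light position
p = np.array([0.0, 0.0, 0.0])   # shaded surface point
n = np.array([0.0, 1.0, 0.0])   # unit surface normal at p

d = c - p
r2 = d @ d                      # squared distance ||c - p||^2
wi = d / np.sqrt(r2)            # unit direction from p toward the light
cos_theta = max(n @ wi, 0.0)    # clamp light below the horizon to zero

E = I * cos_theta / r2          # irradiance at p
L_o = C * E                     # outgoing radiance due to the point light
print(E, L_o)
```

This is exactly the `I * cos / r^2` term one sees in real-time shaders, with the BRDF constant pulled out front.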
Point and directional light sources are not physical and are commonly defined via delta functions/distributions. For a delta directional light, you take the solid angle version of the rendering integral (which is what you have above) and replace \(L_i\) by an angular delta function, which makes the integral degenerate to a simple integrand evaluation. For a point light, which is a delta volume light, you would take the volume version of the rendering equation, i.e. the one that integrates over 3D space, and then again replace \(L_i\) by a positional delta function.
I am aware that a point light is not physical; I am just wondering how the commonly accepted formulae in real-time CG are motivated. In most cases they use the intensity as if it were radiance, with brdfs defined in terms of radiance. Do you have a reference for the part where you mention replacing the radiance with a delta function? That will also cause even more problems mathematically, as it will turn out that you're multiplying distributions in some cases.
What I am looking for is a reference with a mathematically robust derivation that is consistent with radiometry definitions. Papers/books suggestions are welcome.
Well, you have an integral over an angle that is non-zero if and only if it includes a singular direction. That's a delta function, isn't it? If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.

> That will also cause even more problems mathematically as it will turn out that you're multiplying distributions in some cases.

Yup. You know what else causes problems? Perfectly specular reflectors and refractors, which also have deltas in their BSDFs. You have two choices when dealing with them: approximate them with spikey but non-delta BSDFs, or sample them differently. In general, you don't try to evaluate those for arbitrary directions; you just cast one ray in the appropriate direction. It's exactly the same for point and directional lights. Either use light sources that subtend a small but finite solid angle, or special-case them. You evaluate the integral (for a single light source) by ignoring everything but the delta direction and then ignoring the delta factor in the integrand. You can think of it as using a Monte Carlo estimator f(x)/pdf(x), where both the f and the pdf have identical delta functions that cancel out.
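The special-casing described here can be sketched in code: skip the integral entirely and evaluate the integrand at the single delta direction, with the cancelled delta factors omitted. The Phong-style lobe below is only an assumed stand-in for "a more complex brdf", and all numbers are hypothetical:

```python
import numpy as np

# Direct lighting from a point light, special-cased for an arbitrary
# (non-delta) BRDF: the reflectance integral collapses to one evaluation
#   L_o = f(wo, wi) * I * cos(theta) / r^2
# A normalized Phong-like lobe stands in for "some more complex brdf".

def brdf(wo, wi, n, kd=0.5 / np.pi, ks=0.4, shininess=32.0):
    wr = 2.0 * (n @ wi) * n - wi                    # mirror wi about n
    spec = max(wr @ wo, 0.0) ** shininess
    return kd + ks * (shininess + 2) / (2 * np.pi) * spec

def direct_point_light(p, n, wo, light_pos, intensity):
    d = light_pos - p
    r2 = d @ d
    wi = d / np.sqrt(r2)                            # the single delta direction
    cos_theta = max(n @ wi, 0.0)
    return brdf(wo, wi, n) * intensity * cos_theta / r2

n = np.array([0.0, 1.0, 0.0])
p = np.zeros(3)
wo = np.array([0.0, 1.0, 0.0])
L = direct_point_light(p, n, wo, np.array([0.0, 2.0, 0.0]), 10.0)
print(L)
```

This mirrors what perfect mirrors get on the BSDF side: one ray in the special direction, no attempt to evaluate a density at arbitrary directions.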
I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.
I understand it in the sense that you want just that one direction; however, I don't see how that solves the intensity vs radiance issue. After all, the rendering equation considers radiance and not intensity.
> If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.

I would be very grateful if you could refer me to a publication of his that tackles this issue; I am not exactly sure how to find his publications from his username.

> Yup. You know what else causes problems?

I mean purely theoretical problems, not specifically the ones you are referring to, though I did mean precisely the case where you have a perfect mirror/refraction and a point light, since then you get a product of distributions.

> I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.

PBRT doesn't really go much further beyond implementation details; most of the arguments are of "an intuitive" nature, and I want something a bit more formal. The closest thing I could find on the topic was from http://www.oceanopticsbook.info/view/li ... f_radiance, namely:

> Likewise, you cannot define the radiance emitted by the surface of a point source because \( \Delta A\) becomes zero even though the point source is emitting a finite amount of energy.

And seeing as the rendering equation uses radiance, I want to understand how a point light source for which radiance is not defined fits into this framework.

Here's what I found in PBRT after a brief search: http://www.pbr-book.org/3ed-2018/Light_ ... eIntegrand
It classifies point sources as producing delta distributions. It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.
Thank you, I have looked through pbrt already, but just saying - "it's a Dirac delta" is hardly mathematically robust - there's no derivation. It doesn't even derive central relationships like \(\frac{\cos\theta}{r^2}dA = d\omega\).
> It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.

The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof (be it through measure theory, differential geometry, or even starting with Maxwell's equations). For one thing, the brdf is defined in terms of radiance, not in terms of intensity. And if it is indeed well-defined, I want to understand the details of why it is so, and not just rely on hand-waving arguments. As in, have a similar proof to what I did above for the diffuse case; obviously the thing I derived above does not apply to a brdf that actually depends on \(\omega_i\), however.

"You didn't use real wrestling. If you use real wrestling, it's impossible to get out of that hold." - Bobby Hill
If your concern is with undefined quantities, you might need to start by defining exactly what you mean by a point light, since it's already a physically implausible entity. E.g., is it meaningful to have the dw or dA terms from your Lambertian derivation? Does it have a normal? Is it a singular point, or is it the limit of an arbitrarily small sphere? If it were me, I'd just start with a definition that is consistent with a delta distribution, because it's consistent with what I want to represent, I know it makes the math work, and I can actually start implementing something.
Beyond that, I'm not sure there's anything else I can say that will convince you without a lot more work than I have time for. Best of luck.
I mostly agree with you, vchizhov. My understanding is that the delta function is an ad-hoc construct, often defined as
\(\int_{-\infty}^\infty f(x) \delta(x - x_0) \, dx = f(x_0)\)
Note that the above is not a valid Lebesgue integral. A more rigorous, Lebesgue-compatible definition is as a measure:
\(\int_{-\infty}^\infty f(x) \, \mathrm{d} \delta_{x_0}(x) = f(x_0)\)
The whole point of this exercise is, in my understanding, for notational convenience: to write a specific discrete value \(f(x_0)\) as integral in order to avoid special-casing when you're working in an integral framework.
So if you really don't want to deal with delta functions/measures, you can explicitly special-case. For the rendering equation, this means avoiding the reflectance integral altogether and evaluating the integrand at the chosen location. So in effect you are defining the outgoing radiance at the shading point rather than defining some infinitesimally small light source. To also account for the contribution of other light sources, you'll need to sum that just-defined radiance with a proper reflectance integral. You'd do the same for plastic-like BRDFs that are the sum of a perfect specular lobe and a smooth lobe. This seems like a more mathematically correct approach to me. It's more cumbersome though, which is why people prefer the more convenient (and controversial) approach of just thinking of it as defining a point light source instead.
Honestly, I like this discussion and I'd like to have a bullet-proof definition of everything that doesn't involve ad-hoc constructs. I myself try to avoid the use of delta stuff whenever I can. Volumetric null scattering is one example where I don't like the introduction of a forward-scattering delta phase function – there's a cleaner way to do it.
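That special-casing can be written out explicitly, with no delta under the integral sign. A sketch in the thread's own notation, where \(\omega_L\) is the direction from \(\vec{p}\) toward \(\vec{c}\) and \(L_i^{\mathrm{cont}}\) is the remaining, non-singular incident radiance (both labels are mine, not the thread's):

```latex
L_o(\vec{p},\omega_o)
  = L_e(\vec{p},\omega_o)
  + \underbrace{f(\omega_o,\vec{p},\omega_L)\, I\,
      \frac{\cos\theta_L}{||\vec{c}-\vec{p}||^2}}_{\text{point-light term, defined directly}}
  + \int_{\Omega} f(\omega_o,\vec{p},\omega_i)\,
      L_i^{\mathrm{cont}}(\vec{p},\omega_i)\cos\theta_i\, d\omega_i
```

The point-light term is postulated as a definition of its contribution to the outgoing radiance, rather than obtained by integrating an (undefined) incident radiance.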
|
I have been trying to do this for the past 2 hours. How do I do this?
It's just a straightforward application of the definitions:
$\bullet \,$If $gf$ is injective, then $f$ is injective because $f(a)=f(b)\Rightarrow gf(a)=gf(b)\Rightarrow a=b$.
$\bullet \,$If $gf$ is surjective, then $g$ is surjective because $b\in S\Rightarrow \exists \,a\in S$ such that $gf(a)=b$, so $f(a)\in T$ gets sent to $b$ by $g$.
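A finite toy example (the sets and maps are chosen arbitrarily) illustrating both implications:

```python
# Finite sanity check with functions represented as dicts.
# f : S -> T, g : T -> S; here g∘f happens to be a bijection of S.
S = {0, 1}
T = {'a', 'b', 'c'}
f = {0: 'a', 1: 'b'}          # injective (not surjective onto T)
g = {'a': 0, 'b': 1, 'c': 1}  # surjective onto S (not injective)

gf = {s: g[f[s]] for s in S}  # the composite g ∘ f : S -> S

def injective(h):
    # no two keys share a value
    return len(set(h.values())) == len(h)

def surjective(h, codomain):
    return set(h.values()) == codomain

assert injective(gf) and injective(f)           # gf injective => f injective
assert surjective(gf, S) and surjective(g, S)   # gf surjective => g surjective
print("both implications illustrated")
```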
|
I am interested in seeing examples of a space $X$ (preferably a closed smooth manifold, but any finite-dimensional CW-complex would also be of interest) with a vector bundle $\xi\colon E \to X$ on it, so that there is exactly one index $i$ with $w_i(\xi) \neq 0$, and $i$ is bigger than $8$. Here are some remarks:
(1) First, this could only happen if $i$ is a power of 2: As a module over the Steenrod algebra, $H^{\ast}(BO;\mathbb F_2) = \mathbb F_2[w_1,w_2,w_3,\dots]$ is generated by $w_{2^k}$, so the first nonzero SW-class is always of degree $2^k$ (this is also an exercise in Milnor-Stasheff).
(2) For $i=1,2,4,8$ one could take $S^1 = \mathbb RP^1, S^2 = \mathbb CP^1, S^4 = \mathbb HP^1$ and $S^8 = \mathbb OP^1$ with the canonical bundles on these spaces.
(3) Beyond dimension 8, a sphere (or even a connected sum of products of spheres) does not give rise to any such bundle. This can be seen from analyzing $$w\colon \tilde{KO}(S^{d_1} \times \dots \times S^{d_r}) \to H^{\ast}(S^{d_1} \times \dots \times S^{d_r};\mathbb F_2),$$ the main input for understanding this map is Adams' Hopf invariant one theorem.
|
No, your "following" is not accurate. You wrote an SB Lagrangian invariant under O(N)×O(N) (⊂ O(2N)), except for the λ term, which is only invariant under its diagonal subgroup O(N), instead.

The N φs and the N εs fit into a 2N-vector, $(\vec{\phi},\vec{\epsilon})$, so the symmetry starts as O(2N), but the g term is only invariant under its O(N)×O(N) subgroup. The indices of the φs and the εs need not know about each other, except for the λ term contracting them together: think of synchronized swimming. The λ term is thus invariant under O(N), not O(N)×O(N).
To tease a stay against confusion, pick N = 3, so six real scalar fields. Display the O(6)-invariant part, which the g term restricts to O(3)×O(3), and finally the λ term to O(3). Now study the further SSB built in; this is a popular problem I sometimes assign.
The key is always in the 2N×2N Goldstone mass matrix $\langle \delta_i \delta_j V \rangle$ and, in particular, its kernel, consisting of the null eigenvectors.
Now simplify the algebra by defining $$\frac{-2m^2}{g} \equiv v^2,$$ so that the positive overall scale of the potential, g/8, may be safely dropped. Further shifting the potential by the innocuous constant terms to convert it to a sum of squares, obtain $$V=(\vec{\phi}^2 - v^2)^2+(\vec{\epsilon}^2 -v^2)^2+ \frac{4\lambda}{g} (\vec{\phi}\cdot\vec{\epsilon})^2. $$
It is evident that, for λ = 0, this is the two standard Goldstone hyper-sombrero potentials superposed, so their minima are at $\langle \vec{\phi}^2 \rangle= \langle \vec{\epsilon}^2 \rangle= v^2 $.

Naturally, $\langle \vec{\phi} \rangle$ may pick any orientation in the bottom of its sombrero hypersurface, and $\langle \vec{\epsilon} \rangle$ an arbitrary, in general different, one in its own; so the group is SSBroken down to O(N-1)×O(N-1). Your 2N×2N Goldstone mass matrix will have 2(N-1) null vectors, so goldstons. For N = 3, you get 4 goldstons.
For λ ≠ 0, however, the symmetry is only O(N), as stated.

For λ > 0, it is manifest from the potential's sum-of-positive-squares form that $$\langle \vec{\phi}^2 \rangle=v^2;\qquad \langle \vec{\epsilon}^2 \rangle=v^2; \qquad\langle \vec{\phi} \cdot \vec{\epsilon}\rangle =0~.$$ That is to say, the vacuum orientations of $\vec{\phi}$ and $\vec{\epsilon}$ must be orthogonal. W.l.o.g., take $\langle \phi_1 \rangle=v =\langle \epsilon_2 \rangle$. The surviving invariance is then only O(N-2), and the goldstons 2N-3, so 3 for N = 3 -- can you see it in your Goldstone matrix? (Hint: Confirm only $\phi_1, \epsilon_2$ and $\phi_2+\epsilon_1$ are massive, for all N.)
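A quick numerical cross-check of the goldston count for N = 3 and λ > 0 (a sketch: v = 1 and c = 4λ/g = 1 are arbitrary choices): build the Goldstone mass matrix at the orthogonal vacuum by finite differences and count its near-null eigenvalues.

```python
import numpy as np

# Potential with v = 1 and the overall scale dropped, c = 4*lambda/g > 0:
#   V = (phi.phi - 1)^2 + (eps.eps - 1)^2 + c * (phi.eps)^2
N, c = 3, 1.0

def V(x):
    phi, eps = x[:N], x[N:]
    return (phi @ phi - 1.0) ** 2 + (eps @ eps - 1.0) ** 2 + c * (phi @ eps) ** 2

# Orthogonal vacuum: <phi> = e1, <eps> = e2.
x0 = np.zeros(2 * N)
x0[0] = 1.0
x0[N + 1] = 1.0

# Goldstone mass matrix <d_i d_j V> via central finite differences.
h = 1e-4
E = np.eye(2 * N)
H = np.empty((2 * N, 2 * N))
for i in range(2 * N):
    for j in range(2 * N):
        H[i, j] = (V(x0 + h * (E[i] + E[j])) - V(x0 + h * (E[i] - E[j]))
                   - V(x0 - h * (E[i] - E[j])) + V(x0 - h * (E[i] + E[j]))) / (4 * h * h)

eigvals = np.linalg.eigvalsh(H)
n_goldstons = int(np.sum(np.abs(eigvals) < 1e-5))
print(n_goldstons)  # kernel dimension = number of goldstons
```

For N = 3 this reports 2N - 3 = 3 null directions: $\phi_3$, $\epsilon_3$, and $\phi_2-\epsilon_1$, while $\phi_1$, $\epsilon_2$ and $\phi_2+\epsilon_1$ come out massive, matching the hint.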
The plot thickens for 0 > λ > -g/2. Now the λ term is compelled to grow, not shrink, and, magnitudes being equal (dictated by the other terms), it presses to align $\langle \vec{\phi} \rangle$ with $\langle \vec{\epsilon} \rangle$, so then $\langle \vec{\phi} \rangle=\langle \vec{\epsilon} \rangle$.

Specifically, consider the first variation that apparently stymied you in the first place (recall that stationarity is required for every component of the fields, not just the magnitudes of their group vectors!), $$\langle \frac{\delta V}{\delta\vec{\phi}} \rangle =\langle \frac{\delta V}{\delta\vec{\epsilon}} \rangle =0, $$ and thus $$0=-v^2\vec{\phi}+(\vec{\phi}\cdot\vec{\phi})\vec{\phi} +\frac{2\lambda}{g} (\vec{\phi}\cdot\vec{\epsilon})\vec{\epsilon} \\0=-v^2\vec{\epsilon}+(\vec{\epsilon}\cdot\vec{\epsilon})\vec{\epsilon}+ \frac{2\lambda}{g} (\vec{\epsilon}\cdot \vec{\phi})\vec{\phi}.$$ It is manifest that $ \vec{\epsilon}\propto \vec{\phi}$, so define $\vec{\epsilon}\equiv a \vec{\phi}$ for real nonvanishing a. The extremizing conditions then reduce to just $$v^2=\langle \vec{\phi}\cdot \vec{\phi} \rangle \left(1+ \frac{2\lambda}{g} a^2\right); \qquad v^2=\langle \vec{\phi}\cdot \vec{\phi} \rangle \left(a^2+ \frac{2\lambda}{g}\right),$$ so, then, $a^2=1$, recalling the condition $\frac{2\lambda}{g}+1>0$.
Take a = 1, perfect alignment of $\langle \vec{\phi} \rangle$ with $\langle \vec{\epsilon} \rangle$, and $\langle \vec{\phi}^2 \rangle=\langle \vec{\epsilon}^2 \rangle=v^2/(1+2\lambda/g)$. You then have an unbroken residual subgroup O(N-1), so only N-1 goldstons now, 2 for N = 3. Observe the v.e.v.s increase without bound with decreasing negative λ.

Given this alignment, you might go back to the potential and monitor how λ < -g/2, beyond the pale, would overwhelm the terms in the sombrero potentials and flip them, destabilizing them, thus preventing SSB, among other calamities.
|
You have to check that two conditions are met:
1. $\cup \mathcal{B} = X$.
2. $\forall B_1, B_2 \in \mathcal{B}: \forall x \in B_1 \cap B_2: \exists B_3 \in \mathcal{B}: x \in B_3 \subseteq B_1 \cap B_2$.
If these hold for a family $\mathcal{B}$ of subsets of $X$, then this family forms a base for some topology on $X$. And these conditions are both necessary and sufficient.
Clearly for your family the first condition is met: for every $x = (x_1, x_2, \ldots) \in \mathbb{R}^\omega$ we can just take $(x_1 - 1, x_1 + 1) \times (x_2 - 1, x_2 + 1) \times \ldots \in \mathcal{B}$ which contains $x$.
As to the second, consider the sets $B_1 = \prod_{n=1}^\infty (-\frac{1}{n+1}, 1-\frac{1}{n+1})$ (indeed in $\mathcal{B}$ as all intervals have length 1) and $B_2 = \prod_{n=1}^\infty (-1+\frac{1}{n+1}, \frac{1}{n+1})$, which is in $\mathcal{B}$ for the same reason. Both contain $0 = (0,0,0,\ldots)$. But $B_1 \cap B_2 = \prod_{n=1}^\infty (-\frac{1}{n+1}, \frac{1}{n+1})$ and there is no member of $\mathcal{B}$ that contains $0$ and is a subset of this intersection (why? every member of $\mathcal{B}$ has all factors of length exactly $1$, while the $n$-th factor of the intersection has length $\frac{2}{n+1} < 1$ for $n \ge 2$).
So it's indeed not a base for a topology, as you suspected.
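The parenthetical "why?" can be checked mechanically; a small sketch comparing the shrinking factor lengths of the intersection against the fixed length-1 factors of basis elements:

```python
from fractions import Fraction

# The n-th factor of B1 ∩ B2 is (-1/(n+1), 1/(n+1)), an interval of
# length 2/(n+1), while every factor of a basis element has length 1.
# A length-1 interval can fit inside the n-th factor only if 2/(n+1) >= 1.
factor_length = {n: Fraction(2, n + 1) for n in range(1, 6)}
can_fit = {n: length >= 1 for n, length in factor_length.items()}
print(can_fit)  # only n = 1 leaves room for a length-1 factor
```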
|
I'm trying to find the Fourier Transform of the following rectangular pulse:
$$ x(t) = rect(t - 1/2) $$
This is simply a rectangular pulse stretching from 0 to 1 with an amplitude of 1. It is 0 elsewhere. I tried using the definition of the Fourier Transform:
$$ X(\omega) = \int_0^1 (1)\,e^{-j\omega t}\,dt $$
However carrying out the relatively simple integration and subbing in the bounds results for me in this:
$$ X(\omega) = \frac{1}{j\omega}\left[1 - e^{-j\omega}\right] $$
Unfortunately, Wolfram Alpha has a different answer when I use it to compute this Fourier transform: it's got the sinc function. I'd appreciate any help on this, in case I've got some giant conceptual error. I have an exam on this stuff in a bit less than a week :/
Edit: also realized I used j; it's the same with i (the imaginary #)
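For what it's worth, the direct-integration result \( (1-e^{-j\omega})/(j\omega) \) and Wolfram's sinc form are the same function, just written differently; the snippet below compares them numerically (note numpy's `np.sinc` is the normalized variant, \(\sin(\pi x)/(\pi x)\)):

```python
import numpy as np

# Compare X(w) = (1 - exp(-j w)) / (j w)  against the sinc form
#         X(w) = exp(-j w / 2) * sin(w/2) / (w/2).
w = np.linspace(0.1, 20.0, 200)   # avoid w = 0, where both limits equal 1

X_direct = (1 - np.exp(-1j * w)) / (1j * w)
X_sinc = np.exp(-1j * w / 2) * np.sinc(w / (2 * np.pi))

print(np.max(np.abs(X_direct - X_sinc)))  # agreement to machine precision
```

The \(e^{-j\omega/2}\) phase factor comes from the pulse being centered at \(t = 1/2\) (the shift property), which is why Wolfram's answer looks different at first glance.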
|
Answer
$$-\tan x\cos x=\sin(-x)$$ $\text{C}$ is the answer.
Work Step by Step
$$A=-\tan x\cos x$$ A Quotient Identity related to $\tan x$ states that $$\tan\theta=\frac{\sin\theta}{\cos\theta}$$ That means we can rewrite $A$ as follows: $$A=-\frac{\sin x}{\cos x}\cos x$$ $$A=-\sin x$$ Also, from Negative-Angle Identities, we know $$\sin(-\theta)=-\sin\theta$$ Therefore, $$A=\sin(-x)$$ $\text{C}$ is the answer.
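A quick numerical spot-check of the identity:

```python
import math

# Spot-check -tan(x)cos(x) = sin(-x) at a few sample angles
# (avoiding odd multiples of pi/2, where tan is undefined).
for x in (0.3, 1.0, -2.5, 4.0):
    assert math.isclose(-math.tan(x) * math.cos(x), math.sin(-x), abs_tol=1e-12)
print("identity holds at the sampled points")
```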
|
In Python, objects can declare their textual representation using the
__repr__ method. IPython expands on this idea and allows objects to declare other, rich representations including:
A single object can declare some or all of these representations; all are handled by IPython's
display system. This Notebook shows how you can use this display system to incorporate a broad range of content into your Notebooks.
The
display function is a general purpose tool for displaying different representations of objects. Think of it as
from IPython.display import display
A few points:
display on an object will send
If you want to display a particular representation, there are specific functions for that:
from IPython.display import (
    display_pretty,
    display_html,
    display_jpeg,
    display_png,
    display_json,
    display_latex,
    display_svg,
)
To work with images (JPEG, PNG) use the
Image class.
from IPython.display import Image
i = Image(filename='../images/ipython_logo.png')
Returning an
Image object from an expression will automatically display it:
i
Or you can pass an object with a rich representation to
display:
display(i)
An image can also be displayed from raw data or a URL.
Image(url='http://python.org/images/python-logo.gif')
SVG images are also supported out of the box.
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
By default, image data is embedded in the notebook document so that the images can be viewed offline. However it is also possible to tell the
Image class to only store a
link to the image. Let's see how this works using a webcam at Berkeley.
from IPython.display import Image
img_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'
# by default Image data are embedded
Embed = Image(img_url)
# if kwarg `url` is given, the embedding is assumed to be false
SoftLinked = Image(url=img_url)
# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=img_url, embed=True)
Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not today's image.
Embed
Here is today's image from the same webcam at Berkeley (refreshed every minute, if you reload the notebook), visible only with an active internet connection, which should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline.
SoftLinked
Of course, if you re-run this Notebook, the two images will be the same again.
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the
HTML class.
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
Header 1 Header 2 row 1, cell 1 row 1, cell 2 row 2, cell 1 row 2, cell 2
You can also use the
%%html cell magic to accomplish the same thing.
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
Header 1 Header 2 row 1, cell 1 row 1, cell 2 row 2, cell 1 row 2, cell 2
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
from IPython.display import Javascript
Pass a string of JavaScript source code to the
JavaScript object and then display it.
js = Javascript('alert("hi")');
display(js)
The same thing can be accomplished using the
%%javascript cell magic:
%%javascript
alert("hi");
Here is a more complicated example that loads
d3.js from a CDN, uses the
%%html magic to load CSS styles onto the page and then runs ones of the
d3.js examples.
Javascript( """$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""")
%%html
<style type="text/css">
circle {
  fill: rgb(31, 119, 180);
  fill-opacity: .25;
  stroke: rgb(31, 119, 180);
  stroke-width: 1px;
}
.leaf circle {
  fill: #ff7f0e;
  fill-opacity: 1;
}
text {
  font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);

var diameter = 600,
    format = d3.format(",d");

var pack = d3.layout.pack()
    .size([diameter - 4, diameter - 4])
    .value(function(d) { return d.size; });

var svg = d3.select(e).append("svg")
    .attr("width", diameter)
    .attr("height", diameter)
  .append("g")
    .attr("transform", "translate(2,2)");

d3.json("data/flare.json", function(error, root) {
  var node = svg.datum(root).selectAll(".node")
      .data(pack.nodes)
    .enter().append("g")
      .attr("class", function(d) { return d.children ? "node" : "leaf node"; })
      .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });

  node.append("title")
      .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });

  node.append("circle")
      .attr("r", function(d) { return d.r; });

  node.filter(function(d) { return !d.children; }).append("text")
      .attr("dy", ".3em")
      .style("text-anchor", "middle")
      .text(function(d) { return d.name.substring(0, d.r / 3); });
});

d3.select(self.frameElement).style("height", diameter + "px");
The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using MathJax.
You can pass raw LaTeX text as a string to the
Math object:
from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')
With the
Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as
eqnarray:
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
Or you can enter LaTeX directly with the
%%latex cell magic:
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
IPython makes it easy to work with sounds interactively. The
Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the
Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
from IPython.display import Audio
Audio(url="http://www.nch.com.au/acm/8k16bitpcm.wav")
A NumPy array can be auralized automatically. The
Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs. This can be auralized as follows:
import numpy as np
max_time = 3
f1 = 220.0
f2 = 224.0
rate = 8000.0
L = 3
times = np.linspace(0, L, rate * L)
signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)
Audio(data=signal, rate=rate)
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
Using the nascent video capabilities of modern browsers, you may also be able to display local videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you; we will continue testing this and looking for ways to make it more robust.
The following cell loads a local file called
animation.m4v, encodes the raw video as base64 for http transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider.
from IPython.display import HTML
from base64 import b64encode

video = open("../images/animation.m4v", "rb").read()
video_encoded = b64encode(video).decode('ascii')
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)
You can even embed an entire page from another site in an iframe; for example this is today's Wikipedia page for mobile users:
from IPython.display import IFrame
IFrame('http://jupyter.org', width='100%', height=350)
IPython provides built-in display classes for generating links to local files. Create a link to a single file using the
FileLink object:
from IPython.display import FileLink, FileLinks
FileLink('Cell Magics.ipynb')
Alternatively, to generate links to all of the files in a directory, use the
FileLinks object, passing
'.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory,
FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
FileLinks('.')
The IPython Notebook allows arbitrary code execution in both the IPython kernel and in the browser, through HTML and JavaScript output. More importantly, because IPython has a JavaScript API for running code in the browser, HTML and JavaScript output can actually trigger code to be run in the kernel. This poses a significant security risk, as it would allow IPython Notebooks to execute arbitrary code on your computer.
To protect against these risks, the IPython Notebook has a security model that specifies how dangerous output is handled. Here is a short summary:
A full description of the IPython security model can be found on this page.
Much of the power of the Notebook is that it enables users to share notebooks with each other using http://nbviewer.ipython.org, without installing IPython locally. As of IPython 2.0, notebooks rendered on nbviewer will display all output, including HTML and JavaScript. Furthermore, to provide a consistent JavaScript environment on the live Notebook and nbviewer, the following JavaScript libraries are loaded onto the nbviewer page,
before the notebook and its output is displayed:
Libraries such as mpld3 use these capabilities to generate interactive visualizations that work on nbviewer.
|
Temperature
Amount of a substance
Luminous intensity
are pretty much bogus fundamental units. The unit temperature is just an expression of the Boltzmann constant (or you could say the converse, that the Boltzmann constant is not fundamental as it is merely an expression of the anthropocentric and arbitrary unit temperature).
The unit energy will be whatever is the unit of force times the unit of length. A Joule is the same as a Newton-meter, which are already defined in the SI system.
You should read the NIST page on units to get the low-down on it.
In my opinion, electric charge is a more fundamental physical quantity than electric current, but NIST (or more accurately, BIPM) defined the unit current first and then, using the unit current and unit time, they defined the unit charge. I would have sorta defined charge first and then current.
Just like the unit charge (or current) is just another way to express the vacuum permittivity or, alternatively the Coulomb constant and the unit temperature is just another way to express the Boltzmann constant, the unit time, unit length, and unit mass, all three taken together
could be just another way to express the speed of light, the Planck constant, and the gravitational constant. But because $G$ is not easy to measure (given independent units of measure) and can never be measured as accurately as we can measure the frequency of "radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom", we will never have $G$ as a defined constant as we do for $c$ and as we will soon for $\hbar$ and perhaps for $\epsilon_0$ and $k_\text{B}$.
But once we define length, time, and mass independently, we cannot define energy independently. The Joule is a "derived unit".
EDIT: so I will try to explain why the candela is bogus (I had already for the mol). There is a sorta arbitrary specification of frequency; then what is the difference between 1 candela and $\frac{4 \pi}{683} \approx$ 0.0184 watts? Bogus base unit.
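As a quick sanity check of the arithmetic above (an editorial sketch, not part of the original post): the candela's defining constant is 1/683 W/sr at 540 THz, so an isotropic 1 cd source radiates 4π/683 watts over the full sphere.

```python
# 1 cd = 1/683 W/sr of 540 THz light; integrate over the whole sphere.
import math

watts_per_sr = 1 / 683          # the defining constant, inverted
full_sphere_sr = 4 * math.pi    # solid angle of the full sphere
total_watts = watts_per_sr * full_sphere_sr
print(round(total_watts, 4))    # about 0.0184 W, as claimed above
```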
|
While doing some exercises on the variation of the metric tensor $g_{\mu\nu}$ and of its inverse $g^{\mu\nu}$, I came across the following identity:
$$\begin{align} & \delta(g_{\mu\nu}g^{\mu\nu})=\delta g_{\mu\nu} g^{\mu\nu} + g_{\mu\nu}\delta g^{\mu\nu} \overset{!}{=} 0 \\ \iff & \delta g_{\mu\nu} g^{\mu\nu} = - g_{\mu\nu}\delta g^{\mu\nu} \tag{1} \end{align}$$
This has the following consequence for the variation of the square root of the determinant of the metric:
$$\begin{align}\delta\sqrt{-g} &= \frac{1}{2} \sqrt{-g} g^{\mu\nu} \delta g_{\mu\nu} \tag{2} \\ & \overset{!}{=} - \frac{1}{2} \sqrt{-g} g_{\mu\nu} \delta g^{\mu\nu}. \tag{3}\end{align}$$
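Identity $(2)$ can be sanity-checked numerically (an editorial sketch; the metric and variation below are random symmetric matrices, not from the text): the directional derivative of $\sqrt{-\det g}$ along $h$ should equal $\frac{1}{2}\sqrt{-g}\,g^{\mu\nu}h_{\mu\nu} = \frac{1}{2}\sqrt{-g}\,\operatorname{tr}(g^{-1}h)$.

```python
# Compare a central finite difference of sqrt(-det(g + eps*h)) against
# the analytic first-order variation (1/2) sqrt(-g) tr(g^{-1} h).
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * rng.standard_normal((4, 4))
g = 0.5 * (g + g.T)                      # keep the metric symmetric
h = rng.standard_normal((4, 4))
h = 0.5 * (h + h.T)                      # symmetric variation

eps = 1e-6
numeric = (np.sqrt(-np.linalg.det(g + eps * h))
           - np.sqrt(-np.linalg.det(g - eps * h))) / (2 * eps)
analytic = 0.5 * np.sqrt(-np.linalg.det(g)) * np.trace(np.linalg.inv(g) @ h)
print(abs(numeric - analytic))           # should be tiny
```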
Then say I have a non-linear action, which I want to expand around $\eta_{\mu\nu}$ (with $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}(x)$). I observe a contradiction which I couldn't resolve so far, and I would be very thankful if somebody could point out where I am (probably) making a mistake.
Let's take the following term:
$$S = \partial_\mu g^{\mu\nu} \partial_\nu \sqrt{-g}. \tag{4}$$
I can expand using $(2)$, and I get:
$$\begin{align} \partial_\mu g^{\mu\nu} \partial_\nu \sqrt{-g} &= \partial_\mu g^{\mu\nu} \frac{1}{2} \sqrt{-g} g^{\alpha\beta} \partial_\nu g_{\alpha\beta} \\ &= \frac{1}{2} \partial_\mu h^{\mu\nu} \eta^{\alpha\beta} \partial_\nu h_{\alpha\beta} + \mathcal{O}(h^3) \\ & = \frac{1}{2} \partial_\mu h^{\mu\nu} \partial_\nu h + \mathcal{O}(h^3) \end{align}$$
where I defined $h=\eta^{\alpha\beta} h_{\alpha\beta}$. Now doing the same using $(3)$, I get:
$$\begin{align} \partial_\mu g^{\mu\nu} \partial_\nu \sqrt{-g} & = \partial_\mu g^{\mu\nu} \left( -\frac{1}{2} \right) \sqrt{-g} g_{\alpha\beta} \partial_\nu g^{\alpha\beta} \\ &= -\frac{1}{2} \partial_\mu h^{\mu\nu} \eta_{\alpha\beta} \partial_\nu h^{\alpha\beta} + \mathcal{O}(h^3) \\ &= -\frac{1}{2} \partial_\mu h^{\mu\nu} \partial_\nu h + \mathcal{O}(h^3) \end{align}$$
So I get the same result with an extra minus sign. Which one is right, and why?
Thank you very much in advance!
|
Here is a hotch-potch of examples, counterexamples, theorems, ... which I
~~plagiarized~~ adapted from the answer by some guy with a complicated name to the analogous question for complex analytic spaces. I hope they will give you some intuition for flatness, that "riddle that comes out of algebra, but which technically is the answer to many prayers" (Mumford, Red Book, page 214).
Let $f:X\to Y$ be a scheme morphism, locally of finite presentation. Then:
a) $f$ smooth $\implies$ $f$ flat.
b) $f$ flat $\implies$ $f$ open (i.e. sends open subsets to open subsets).
Beware however that the natural morphism $\operatorname {Spec}\mathbb Q \to \operatorname {Spec} \mathbb Z$ is flat and yet not open: this is because it is not locally of finite presentation.
c) Open immersions are flat.
d) However general open maps need not be flat. A counterexample is: $$\operatorname {Spec} k\to \operatorname {Spec} k[\epsilon]=\operatorname {Spec} \frac {k[T]}{\langle T^2\rangle }$$
e) The normalization $X=Y^{\operatorname {nor}}\to Y$ of a non-normal scheme is
NEVER flat. For example the normalization of the cusp $C=V(y^2-x^3)\subset \mathbb A^2$ :$$\mathbb A^1\to C:t\mapsto (t^2,t^3)$$ is not a flat morphism.
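One can at least verify symbolically that the normalization map really lands on the cusp, and is invertible away from the origin (a tiny editorial sketch, added for illustration; flatness itself is not something this check can see):

```python
# t -> (t^2, t^3) parametrizes the cusp y^2 = x^3, with inverse t = y/x
# on the open set x != 0 (the failure is concentrated at the origin).
import sympy as sp

t = sp.symbols('t')
x, y = t**2, t**3
assert sp.simplify(y**2 - x**3) == 0   # the image lies on the cusp
assert sp.simplify(y / x - t) == 0     # t is recovered where x != 0
print("parametrization lies on the cusp")
```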
f) A closed immersion is
NEVER flat, unless it is also an open immersion [cf. c)].
g) If $X,Y$ are regular and $f:X\to Y$ is finite and surjective, then $f$ is flat.
For example, the projection of the parabola $y=x^2$ onto the $y$-axis is flat, even though one fiber is a single point (but a non-reduced one!) while the other fibers have two points (both reduced). As another illustration, every non-constant morphism between smooth projective curves is flat.
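To make the fiber count in g) concrete (an editorial sketch, not part of the original answer): for the projection of $y=x^2$ to the $y$-axis, the fiber over $y=c$ is $\operatorname{Spec} k[x]/(x^2-c)$, whose dimension as a $k$-vector space is always 2, matching the constant Hilbert polynomial of h).

```python
# dim_k k[x]/(p) equals deg p, so every fiber of the parabola projection
# has k-dimension 2: two reduced points for c != 0, one double point at c = 0.
import sympy as sp

x = sp.symbols('x')

def fiber_dimension(c):
    p = sp.Poly(x**2 - c, x)
    return p.degree()               # basis of k[x]/(p): 1, x, ..., x^(deg-1)

for c in (0, 1, -1, 4):
    print(c, fiber_dimension(c), sp.roots(x**2 - c, x))  # roots with multiplicity
```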
h) If $Y$ is integral and $X\subset Y\times \mathbb P^n$ is a closed subscheme, the projection $X\to Y$ is flat if and only if all fibers $X_y=\operatorname {Spec}\kappa(y)\times_Y X$ ($y$ closed in $Y$) have the same Hilbert polynomial.
In particular the fibers must have the same dimension, so that for example the blow-up morphism $\widetilde {\mathbb P^n}\to \mathbb P^n$ of $\mathbb P^n$ at a point $O$ is not flat, since all fibers are a single point, except the fiber at $O$ which is a $\mathbb P^{n-1}$. Notice how the morphism $\operatorname {Spec} k\to \operatorname {Spec} k[\epsilon]$ evoked above (for which you have only one fiber!) yields a counterexample to g) if you do not assume $Y$ reduced. This very general result h) (which is at the heart of the theory of Hilbert schemes) might be the best illustration of what flatness really means.
|
Given the Klein Gordon equation $$\left(\Box +m^{2}\right)\phi(t,\mathbf{x})=0$$ it is possible to find a solution $\phi(t,\mathbf{x})$ by carrying out a Fourier decomposition of the scalar field $\phi$ at a given instant in time $t$, such that $$\phi(t,\mathbf{x})=\int\frac{d^{3}k}{(2\pi)^{3}}\tilde{\phi}\left(t,\mathbf{k}\right)e^{i\mathbf{k}\cdot\mathbf{x}}$$ where $\tilde{\phi}\left(t,\mathbf{k}\right)$ are the Fourier modes of the corresponding field $\phi(t,\mathbf{x})$.
From this we can calculate the required evolution of the Fourier modes $\tilde{\phi}\left(t,\mathbf{k}\right)$ such that at each instant in time $t$, $\phi(t,\mathbf{x})$ is a solution to the Klein Gordon equation. This can be done, following on from the above, as follows: $$\left(\Box +m^{2}\right)\phi(t,\mathbf{x})=\left(\Box +m^{2}\right)\int\frac{d^{3}k}{(2\pi)^{3}}\tilde{\phi}\left(t,\mathbf{k}\right)e^{i\mathbf{k}\cdot\mathbf{x}}\qquad\qquad\qquad\qquad\qquad\qquad\;\;\,\\ =\int\frac{d^{3}k}{(2\pi)^{3}}\left[\left(\partial^{2}_{t}+\mathbf{k}^{2}+m^{2}\right)\tilde{\phi}\left(t,\mathbf{k}\right)\right]e^{i\mathbf{k}\cdot\mathbf{x}} =0\\ \Rightarrow \left(\partial^{2}_{t}+\mathbf{k}^{2}+m^{2}\right)\tilde{\phi}\left(t,\mathbf{k}\right)=0 \qquad\qquad\qquad$$
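As a quick symbolic cross-check (an editorial addition), the mode equation above is solved by plane waves $e^{-i\omega t}$ with $\omega=\sqrt{\mathbf{k}^2+m^2}$:

```python
# Verify that phi~(t,k) = exp(-i*omega*t), omega = sqrt(k^2 + m^2),
# satisfies (d^2/dt^2 + k^2 + m^2) phi~ = 0.
import sympy as sp

t, k, m = sp.symbols('t k m', positive=True)
omega = sp.sqrt(k**2 + m**2)
phi = sp.exp(-sp.I * omega * t)

residual = sp.diff(phi, t, 2) + (k**2 + m**2) * phi
assert sp.simplify(residual) == 0
print("exp(-i*omega*t) solves the mode equation")
```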
Question: This is all well and good, but why is it that in this case we only perform a Fourier decomposition of the spatial part only, whereas in other cases, such as for finding solutions for propagators (Green's functions), we perform a Fourier decomposition over all 4 spacetime coordinates? [e.g. $$G(x-y)=\int\frac{d^{4}k}{(2\pi)^{4}}\tilde{G}\left(k\right)e^{ik\cdot (x-y)}$$ (where in this case $k\cdot x\equiv k_{\mu}x^{\mu}$).]
Is it simply because when we construct the appropriate QFT for a scalar field we do so in the Heisenberg picture, or is there something else to it?
Apologies if this is a really dumb question but it's really been bugging me for a while and I want to get the reasoning straight in my mind!
|
Will that affect the quality of speech comparison, and by how much?
That is impossible to tell without knowing what the Node.js thing does internally; I think it's a bit much to ask for us to search for what you meant. As a comment: signal processing in JavaScript sounds like a bad idea, performance-wise
and development-wise; it's really not what JS was designed and optimized for, and there's a significant lack of libraries, let alone efficient ones. I always urge people to use the right tools for their job, and JS isn't that here, I think.
That being said:
From a pure signal point of view, you can model the effect of quantization as noise. The problem here is that this kind of noise is neither uncorrelated with the signal, nor necessarily white. To make things a bit harder on the compensation side, its amplitude typically isn't even Gaussian. Oh well, but here goes quantization noise power, a figure that is both very important to understand the maximum SNR you can get out of a digital system and doesn't say much as long as you don't say how well the signal processing can deal with this specific kind of noise.
Luckily, for "small enough" quantization steps, a little stochastic consideration¹ says that the assumption that quantization noise (QN henceforth) is additive is pretty justified.
Now, assuming your signal amplitude is really uniformly distributed and your ADC is perfectly uniform, the amplitude $\text{SNR}_Q$ (signal-to-QN-ratio) for an M-bit ADC becomes
$$\begin{align*}\text{SNR}_Q &= 2^M\\\text{SNR}_Q \text{[dB]}&= 20 \log_{10}(2^M)\\ &= 20 \log_{10}(2)\,M \\&\approx 6 M\text,\end{align*}$$which implies that for 8bit, your $\text{SNR}_{Q,8b} \approx 48\,\text{dB}$, and for 16bit $\text{SNR}_{Q,16b} \approx 96\,\text{dB}$.
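The dB figures can be reproduced directly (a trivial editorial sketch of the ≈6 dB-per-bit rule of thumb):

```python
# Amplitude SNR of an ideal M-bit uniform quantizer is 2^M,
# i.e. 20*log10(2^M) ≈ 6.02*M dB.
import math

def snr_q_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(snr_q_db(8), 1), round(snr_q_db(16), 1))   # roughly 48 and 96 dB
```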
Now, speech definitely isn't uniform in amplitude; it's a bit hard to justify this model without knowing what your recording looks like, but I'd rather say it's composed of sines; in that case, you get an additional $1.8\,\text{dB}$ of SNR in both cases.
Point here is that I doubt that a "real world, non-studio-equipment, non-anechoic-chamber-silence" speech recording will ever come anywhere close to $48\,\text{dB}$, so probably, no, that's a fine choice; and by the way you ask this question, I kind of doubt it will be trivial to extend the algorithm (which probably uses a lot of elegant numerical math internally) to 16 bit, anyway.
¹ the amount of noise power correlated to the signal necessarily being very limited, and a mutually independent set of sufficiently many i.i.d. realizations (Normal due to CLT) added up still being Normal if their moments weren't too different ...
|
Let $K$ be a complete, algebraically closed non-Archimedean field, and let $p \in K[x]$ be of degree $d > 0$ and norm 1. (Here the norm of a polynomial is the maximum of the norms of its coefficients.)
I am interested in finding unique representations for elements of the affinoid algebra $$A = K \langle x, p^{-1} \rangle = K \langle x, y \rangle /(p(x)y - 1).$$ Led by the case $p = x$, I thought it might hold that every element of this algebra can be written uniquely as a series $\sum_{i = -\infty}^\infty g_i p^i$, where $g_i \in K[x]$ has $\deg g_i < d$ and $|g_i| \to 0$ as $|i| \to \infty$.
I realised my "proof" of this fact is incorrect, owing to my inability to compare $\left |\sum_{i = -\infty}^\infty g_i p^i \right |$ to $\max_i |g_i|$. It's simple to reduce the claim to elements of $K \langle x \rangle \subseteq A$, and it holds by an easy induction on degree for $K[x]$. But now I'm starting to think the claim itself might be incorrect. If so, is there a standard form of any kind?
If it's relevant, we can assume $p$ has all non-zero coefficients of norm 1 and is divisible by $x$; feel free to add more restrictions to $p$ if they're needed.
|
Answer
$8(\cos(30^{\circ})+i\sin(30^{\circ}))$
Work Step by Step
To find the trigonometric form of a complex number from its Cartesian form, two things must be done: find the modulus of the complex number, and find its argument.

To find the modulus, apply the Pythagorean theorem to the coefficients of the complex number: $r=\sqrt{(4)^{2}+(4\sqrt{3})^{2}}\\=\sqrt{64}\\=8$

To find the argument, first find the basic argument of the complex number. This is done by taking $\arctan$ of the absolute value of the imaginary-part coefficient over the real-part coefficient. Therefore, the basic argument is $\arctan({\frac{4}{4\sqrt{3}}})\\=\arctan(\frac{1}{\sqrt{3}})\\=30^{\circ}$

Second, identify the quadrant in which the complex number lies. Below are the conditions for the four quadrants:
First quadrant: both real and imaginary coefficients are positive [argument = basic argument]
Second quadrant: real coefficient negative, imaginary coefficient positive [argument = $180^{\circ}$ − basic argument]
Third quadrant: both coefficients negative [argument = $180^{\circ}$ + basic argument]
Fourth quadrant: real coefficient positive, imaginary coefficient negative [argument = $360^{\circ}$ − basic argument]

Since both the real and imaginary coefficients are positive, the complex number lies in the first quadrant, and the argument of its trigonometric form is $30^{\circ}$. Thus, the trigonometric form of the Cartesian complex number $4\sqrt{3}+4i$ is $8(\cos(30^{\circ})+i\sin(30^{\circ}))$
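The worked example can be cross-checked with Python's cmath (an editorial addition):

```python
# Modulus and argument of 4*sqrt(3) + 4i via cmath.polar.
import cmath
import math

z = 4 * math.sqrt(3) + 4j
r, theta = cmath.polar(z)
print(r, math.degrees(theta))   # 8 and 30 degrees, up to rounding
```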
|
Search found 5 matches
Search found 5 matches • Page
1of 1 Fri Apr 26, 2019 10:18 pm Forum: General Development Topic: Point lights and non-diffuse brdfs Replies: 9 Views: 1095
Thanks anyways, I appreciate it. I'll keep looking for a formal proof, and if I find one I'll make sure to link it for future readers.
Fri Apr 26, 2019 7:27 pm Forum: General Development Topic: Point lights and non-diffuse brdfs Replies: 9 Views: 1095
Thank you, I have looked through pbrt already, but just saying - "it's a Dirac delta" is hardly mathematically robust - there's no derivation. It doesn't even derive central relationships like \(\frac{\cos\theta}{r^2}\,dA = d\omega\). It doesn't try to evaluate the incoming radiance directly, but instead...
Fri Apr 26, 2019 3:27 pm Forum: General Development Topic: Point lights and non-diffuse brdfs Replies: 9 Views: 1095
I understand it in the sense that you want just that one direction, however, I don't see how that solves the intensity vs radiance issue. After all the rendering equation considers radiance and not intensity. If you really need academic respectability to be convinced, maybe you should take a look at...
Thu Apr 25, 2019 7:11 pm Forum: General Development Topic: Point lights and non-diffuse brdfs Replies: 9 Views: 1095
I am aware that a point light is not physical, I am just wondering how the commonly accepted formulae in real-time cg are motivated. In most cases they use the intensity as if it were radiance, with brdfs defined in terms of radiance. Do you have a reference for the part where you mention that...
Wed Apr 24, 2019 5:00 pm Forum: General Development Topic: Point lights and non-diffuse brdfs Replies: 9 Views: 1095
I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \vec{c} and intensity I , then the irradiance at point \vec{p} at some surface can be derived as: E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{...
|
The proof will be done in three parts. First we explain the martingale difference method used to obtain concentration inequalities. At some point, we will need to estimate the probability of discrepancy between coupled delayed renewal processes, and thus, the second part of the proof is the explicit construction of a coupling of these processes. The proof of the lemma is concluded in a third step.
Notation alert B.1
For notational simplicity, we will shift the indexes by one unit throughout the present section, to study \(\mathbb {E}\prod _{i=1}^{k}\alpha _i^{\xi _i}\) instead of \(\mathbb {E}\prod _{i=0}^{k-1}\alpha _i^{\xi _{i+1}}\).
The method of martingale difference.
We refer to Sect. 4 of McDiarmid [10] for what we now present. Let \(g(\xi _1,\ldots ,\xi _{k}):=\sum _{i=1}^{k} \xi _i\log \alpha _i\). We will upper-bound \(\mathbb {E}(e^{g-\mathbb {E}g})\). Let, for \(i=1,\ldots ,k\),
$$\begin{aligned} \Delta _i:=\mathbb {E}(g|\mathcal {F}_{1}^{i})-\mathbb {E}(g|\mathcal {F}_{1}^{i-1}), \end{aligned}$$
where, for \(i\ge 1\), \(\mathcal {F}_1^i\) is the \(\sigma \)-algebra generated by \(\xi _j,j=1,\ldots ,i\) and \(\mathcal {F}_1^0\) is the trivial \(\sigma \)-algebra. These quantities sum telescopically in \(i\) to \(\sum _{i=1}^{k}\Delta _i(\xi _{1}^{i})=g(\xi _{1}^{k})-\mathbb {E}g(\xi _1^k)\). Since \(\mathcal {F}_1^{i-1}\subset \mathcal {F}_1^{i}\), we have \(\mathbb {E}(\Delta _i|\mathcal {F}_{1}^{i-1})= 0\), which means that \(\Delta _i\), \(i\ge 1\), forms a martingale difference sequence. If there exists, for any \(i\ge 1\), a finite real number \(d_i\) such that \(|\Delta _i|\le d_i\) a.s., we can use the Azuma-Hoeffding inequality (see the proof of Lemma 4.1 of McDiarmid [10]) which states
$$\begin{aligned} \mathbb {E}e^{g-\mathbb {E}g}\le e^{\frac{1}{8}\sum _{i=1}^kd_i^2}. \end{aligned}$$
(9)
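As an aside (editorial illustration, not part of the proof), the bound (9) is easy to test in the fully independent toy case, where the expectation factorizes and both sides are exactly computable: for independent Bernoulli(\(p_i\)) variables \(\xi_i\) and \(g=\sum_i \xi_i\log\alpha_i\), each martingale difference ranges over an interval of width \(d_i=|\log\alpha_i|\), so Hoeffding's lemma applies factor by factor.

```python
# Exact check of E exp(g - Eg) <= exp(sum_i d_i^2 / 8) for independent
# Bernoulli variables; the alphas and ps below are arbitrary toy values.
import math

alphas = [0.5, 0.9, 0.2, 0.7]
ps     = [0.3, 0.8, 0.5, 0.1]

lhs = 1.0
for a, p in zip(alphas, ps):
    c = math.log(a)
    # E exp((xi - p) * c) for a single Bernoulli(p) factor
    lhs *= p * math.exp((1 - p) * c) + (1 - p) * math.exp(-p * c)

rhs = math.exp(sum(math.log(a) ** 2 for a in alphas) / 8)
print(lhs <= rhs)   # True, by Hoeffding's lemma applied to each factor
```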
To upper bound \(\sum _id_i^2\), let us compute, for \(i=1,\ldots ,k-1\),
$$\begin{aligned} \Delta _i(\xi _{1}^{i})=&\sum _{u_{i+1}^{k}\in \{0,1\}^{k-i}}\mathbb {P}(u_{i+1}^{k}|\xi _{1}^{i})g(\xi _1^iu_{i+1}^{k})-\sum _{u_{i}^{k}\in \{0,1\}^{k-i+1}}\mathbb {P}(u_{i}^{k}|\xi _{1}^{i-1})g(\xi _1^{i-1}u_{i}^{k})\\ \le&|\sum _{u_{i+1}^{k}}\mathbb {P}(u_{i+1}^{k}|\xi _{1}^{i-1}1)g(\xi _1^{i-1}1u_{i+1}^{k})-\sum _{u_{i+1}^{k}}\mathbb {P}(u_{i+1}^{k}|\xi _{1}^{i-1}0)g(\xi _1^{i-1}0u_{i+1}^{k})| \end{aligned}$$
where we used the convention that \(\xi _1^0=\emptyset \) and the notation of concatenation between strings \(a_i^jb_k^l=(a_i,\ldots ,a_j,b_k,\ldots ,b_l)\). A similar computation yields \(\Delta _k(\xi _{1}^{k})\le \left| \log \alpha _k\right| \).
Therefore,
$$\begin{aligned} \Delta _i(\xi _{1}^{i})\le \sum _{j=0}^{k-i}D_{i,i+j}(\xi _{1}^{i})|\log \alpha _{i+j}|=(D(\xi _1^k)L)_i\,,\,\,i=1,\ldots ,k \end{aligned}$$
where \(D(\xi _{1}^{k})\) is the upper triangular \(k\times k\) matrix defined by
$$\begin{aligned} D_{i,i+j}(\xi _{1}^{i})&:=\sum _{u_{i+1}^{k},v_{i+1}^{k}\in \{0,1\}^{k-i}}\mathcal {Q}(u_{i+1}^{k},v_{i+1}^{k}|1\xi _{1}^{i-1}1,1\xi _{1}^{i-1}0)\mathbf{1}\{u_{i+j}\ne v_{i+j}\}\\&=\mathcal {Q}(u_{i+j}\ne v_{i+j}|1\xi _{1}^{i-1}1,1\xi _{1}^{i-1}0), \quad \text{ for } j=0,\ldots ,k-i, \end{aligned}$$
\(L\) is the \(k\times 1\) matrix (column vector) with entries \(L_{i,1}=|\log \alpha _i|,\,i=1,\ldots ,k\), and finally \(\mathcal {Q}((\cdot ,\cdot )|1\xi _{1}^{i-1}1,1\xi _{1}^{i-1}0)\) denotes the law of a coupling between two discrete renewal sequences having the same inter-arrival distribution and starting from the different configurations \(1\xi _{1}^{i-1}1\) and \(1\xi _{1}^{i-1}0\).
Coupling and conclusion of the proof of the Lemma.
Let \(\ell (1\xi _{1}^{i})\) denote the smallest integer \(k\) such that \(\xi ^i_{i-k+1}=0^k\), where \(0^k=(0,\ldots ,0)\) denotes the string of \(k\) consecutive 0’s. Observe that \(\ell (1\xi _{1}^{i})\le \ell (10^i)=i\) (this is one of the main differences with the paper of Chazottes et al. [3]). Then, for \(i\ge 1\), \(D_{i,i+j}(\xi _{1}^{i})\) is equal to the probability that two coupled renewal processes, one undelayed (the one starting with \(1\xi _{1}^{i-1}1\)), and the other with delay \(\ell (1\xi _{1}^{i-1}0)\ge 1\) (the one starting with \(1\xi _{1}^{i-1}0\)), disagree at time \(j\). Recall the coupling that we defined before the statement of the lemma. Using this coupling we have the upper bound,
$$\begin{aligned} D_{i,i+j}(\xi _{1}^{i})=\mathbb {P}_\mathbf{U}\left( {\tilde{\xi }}^{(0)}_j\ne {\tilde{\xi }}^{(\ell (\xi _{1}^{i}))}_j\right) \le \mathbb {P}_\mathbf{U}(\tau _{0,\ell (\xi _{1}^{i})}\ge j) \le \sup _{\ell =1,\ldots ,i}\mathbb {P}_\mathbf{U}(\tau _{0,\ell }\ge j) \end{aligned}$$
where the last inequality follows by taking the supremum over all the possible values of \(\ell (\xi _{1}^{i})\) for any \(i=1,\ldots ,k\). In view of (9), we now take
$$\begin{aligned} d_i:=( DL)_i=\sum _{j=i}^k\sup _{\ell =1,\ldots ,i}\mathbb {P}_\mathbf{U}(\tau _{0,\ell }\ge j)|\log \alpha _j| \end{aligned}$$
and thus,
$$\begin{aligned} \sum _{i=1}^kd_i^2\le \sum _{i=1}^k\left( \sum _{j=i}^k\sup _{\ell =1,\ldots ,i}\mathbb {P}_\mathbf{U}(\tau _{0,\ell }\ge j)|\log \alpha _j|\right) ^2. \end{aligned}$$
Recalling that we translated the indexes at the beginning of the proof, what we proved, from (9), is that
$$\begin{aligned} \mathbb {E}\left( e^{\sum _{i=0}^{k-1} \xi _i\log \alpha _i-\mathbb {E}\sum _{i=0}^{k-1} \xi _i\log \alpha _i}\right) \le e^{\frac{1}{8}\sum _{i=1}^k\left( \sum _{j=i}^k\sup _{\ell =1,\ldots ,i}\mathbb {P}_\mathbf{U}(\tau _{0,\ell }\ge j)|\log \alpha _{j-1}|\right) ^2} \end{aligned}$$
and therefore
$$\begin{aligned} \mathbb {E}\prod _{i=0}^{k-1}\alpha _i^{\xi _i}\le \prod _{i=0}^{k-1}\alpha _i^{\mathbb {P}(\xi _i=1)}e^{\frac{1}{8}\sum _{i=1}^k\left( \sum _{j=i}^k\sup _{\ell =1,\ldots ,i}\mathbb {P}_\mathbf{U}(\tau _{0,\ell }\ge j)|\log \alpha _{j-1}|\right) ^2}. \end{aligned}$$
(10)
This concludes the proof of the lemma.
|
Lecture: HGX205, M 18:30-21
Section: HGW2403, F 18:30-20

Exercise 01
Prove that \(\neg\Box(\Diamond\varphi\wedge\Diamond\neg\varphi)\) is equivalent to \(\Box\Diamond\varphi\rightarrow\Diamond\Box\varphi\). What have you assumed?
Define strategy and winning strategy for modal evaluation games. Prove the Key Lemma: \(M,s\vDash\varphi\) iff V has a winning strategy in \(G(M,s,\varphi)\).
Prove that modal evaluation games are determined, i.e. either V or F has a winning strategy.
And all exercises for Chapter 2 (see page 23,
open minds)

Exercise 02
Let \(T\) with root \(r\) be the tree unraveling of some possible world model, and \(T’\) be the tree unraveling of \(T,r\). Show that \(T\) and \(T’\) are isomorphic.
Prove that the union of a set of bisimulations between \(M\) and \(N\) is a bisimulation between the two models.
We define the bisimulation contraction of a possible world model \(M\) to be the “quotient model”. Prove that the relation linking every world \(x\) in \(M\) to the equivalence class \([x]\) is a bisimulation between the original model and its bisimulation contraction.
And exercises for Chapter 3 (see page 35,
open minds): 1 (a) (b), 2.

Exercise 03
Prove that modal formulas (under possible world semantics) have the ‘Finite Depth Property’.
And exercises for Chapter 4 (see page 47,
open minds): 1 – 3.

Exercise 04
Prove the principle of Replacement by Provable Equivalents: if \(\vdash\alpha\leftrightarrow\beta\), then \(\vdash\varphi[\alpha]\leftrightarrow\varphi[\beta]\).
Prove the following statements:
“For each formula \(\varphi\), \(\vdash\varphi\) is equivalent to \(\vDash\varphi\)” is equivalent to “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable”.
“For every set of formulas \(\Sigma\) and formula \(\varphi\), \(\Sigma\vdash\varphi\) is equivalent to \(\Sigma\vDash\varphi\)” is equivalent to “for every set of formulas \(\Sigma\), \(\Sigma\) being consistent is equivalent to \(\Sigma\) being satisfiable”.
Prove that “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable” using the finite version of the Henkin model.
And exercises for Chapter 5 (see page 60,
open minds): 1 – 5.

Exercise 05
Exercises for Chapter 6 (see page 69,
open minds): 1 – 3.

Exercise 06
Show that “being equivalent to a modal formula” is not decidable for arbitrary first-order formulas.
Exercises for Chapter 7 (see page 88,
open minds): 1 – 6. For exercise 2 (a) – (d), replace the existential modality E with the difference modality D. In clause (b) of exercise 4, “completeness” should be “correctness”.

Exercise 07
Show that there are infinitely many non-equivalent modalities under T.
Show that GL + Id is inconsistent and that Un proves GL.
Give a complete proof of the fact: in S5, every formula is equivalent to one of modal depth \(\leq 1\).
Exercises for Chapter 8 (see page 99,
open minds): 1, 2, 4 – 6.

Exercise 08
Let \(\Sigma\) be a set of modal formulas closed under substitution. Show that \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W,R,V’),w\vDash\Sigma\] holds for any valuations \(V\) and \(V’\).
Define a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\) as a “functional bisimulation”, namely a bisimulation regardless of valuation. Show that if there is a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\), then for any valuations \(V\) and \(V’\), we have \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W’,R’,V’),w\vDash\Sigma.\]
Exercises for Chapter 9 (see page 99,
open minds).

Exercise the last
Exercises for Chapter 10 and 11 (see page 117 and 125,
open minds).
|
Context. I have an algebraic element $\alpha$ over $\mathbb Q$, and I want to write $\mathbb Z[\alpha]:=\{a+\alpha b,\ a,b\in \mathbb Z\}$ as a quotient ring of the form $\mathbb Z[X]/I$.
Is the following approach correct?
Let $\pi$ be an irreducible of $\mathbb Z[X]$ such that $\pi(\alpha)=0$.
Then let's consider the function
$$ \begin{matrix}\varphi\colon& \mathbb Z[X] & \to & \mathbb Z[\alpha] \\ &P& \mapsto& P(\alpha).\end{matrix}$$
The function $\varphi$ is a surjective ring morphism.
Plus, if $\varphi(P)=0$, then let's do the euclidean division (in $\mathbb Q[X]$) of $P$ by $\pi$:
$$P=Q\pi + R.$$
So we have $Q(\alpha)\pi(\alpha)+R(\alpha)=0$, so $R(\alpha)=0$ since $\pi(\alpha)=0$.
So $P\in (\pi)$ where $(\pi)=\pi\mathbb Z[X]$ the ideal generated by $\pi$.
Conversely, if $P\in (\pi)$ we obviously have $P\in \mathrm{ker}(\varphi)$.
Then,
$$\mathbb Z[X]/(\pi)\simeq \mathbb Z[\alpha].$$
Edit.
Thanks to a comment, I should assume certain conditions on $\alpha$ which would assure that $\mathbb Z[\alpha]$ is a ring. It seems that $\alpha$ is algebraic of degree $2$ is sufficient, so I will be assuming this.
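For a concrete sanity check of the claimed isomorphism (an editorial sketch, with the hypothetical choice $\alpha=\sqrt 2$ and $\pi = X^2-2$): reducing an integer polynomial modulo $\pi$ and evaluating at $\alpha$ gives the same number as evaluating directly, and the remainder is exactly the $a+bX$ representative of the class.

```python
# Z[X]/(X^2 - 2) vs Z[sqrt(2)]: the remainder mod pi has degree < 2,
# and evaluation at alpha = sqrt(2) is unchanged by the reduction.
import sympy as sp

X = sp.symbols('X')
alpha = sp.sqrt(2)
pi = X**2 - 2

P = 3*X**5 - 7*X**3 + X + 4          # an arbitrary integer polynomial
R = sp.rem(P, pi, X)                  # remainder: a + b*X with a, b integers

assert sp.simplify(P.subs(X, alpha) - R.subs(X, alpha)) == 0
print(R)   # the degree < 2 representative of P mod pi
```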
|
Let $e_1,e_2,\dots$ be a Schauder basis for a Hilbert space $(V , \langle \cdot , \cdot \rangle)$. Let $A:V \to V$ be an operator. Finally, let $V_n = {\rm span}( e_1, \dots, e_n)$. Let $i_n : V_n \to V$ be the injection so that $i_n^\dagger$ is the orthogonal projection. Finally, define $A_n = i_n^\dagger \circ A \circ i_n : V_n \to V_n$.
1) Are necessary and sufficient conditions known for the spectrum of $A_n$ to converge to the spectrum of $A$?
2) Same question, but for the eigen-spaces?
(p.s. I am an engineer with a fair knowledge of differential geometry. I apologize if this question is trivial. Functional analysis is a weakness for me.)
|
There are a thousand apps for organising your life: calendars, todo lists, note trackers. But the big kahuna, the true Swiss army knife, is org-mode. Out of the box, org-mode understands LaTeX and code snippets, todo lists and bookmarks, projects and agendas. Best of all, it comes with a powerful text editor, Emacs! This week, Ben gave us a live demo of org-mode in Emacs.
You can get the original org-mode file from here
What is ORG mode?
- Text-based way to organise notes, links, code, and more
- Time management tool: TODO lists, tasks, projects
- Table editor, spreadsheet
- Interactive code notebook
- Can be exported to LaTeX, markdown, HTML, …
All in Emacs!
The ORG file format
A simple text format like Markdown.
Anything can be edited, fixed easily by hand if needed. You can start simple, and discover extra features [Try Alt right/left, shift right/left, TAB to collapse/expand]
* Some section
** A subsection
*** Subsection
**** subsubsection
 - list
 - another item
   - sub-items
** Another subsection
Links
Links files, URLs, shell command, elisp, DOI, …
<10.5281/zenodo.2530733>
LaTeX Standard LaTeX formulae are recognised and rendered
Inline like this (e^{i\pi} = -1)
[ e^{i\theta} = \cos\left(\theta\right) + i\sin\left(\theta\right) ]
Toggle equation view: C-c C-x C-l
Tables: Start creating a table with “| headings |” then TAB. Alt + arrow keys move columns and rows. Functions can also manipulate cells.
| b | a | c |
|   |   |   |
|   | kjdhfsf |   |
Source code, notebooks: Type “<s TAB” (there are other shortcuts for Examples, Quotes, …). Supports many languages. C-c C-c runs the code block.
#+BEGIN_SRC python :results output
print("hello")
#+END_SRC

#+RESULTS:
: hello
Tables and code blocks Tables can be used as input and output to code blocks Provides a way to pass data between languages
#+NAME: cxx-generate
#+BEGIN_SRC C++ :includes <iostream>
for(int i = 0; i < 5; i++) {
  std::cout << i << ", " << i*i*i - 2*i << "\n";
}
#+END_SRC

#+RESULTS: cxx-generate
| 0 |  0 |
| 1 | -1 |
| 2 |  4 |
| 3 | 21 |
| 4 | 56 |

#+BEGIN_SRC python :var data=cxx-generate
import matplotlib.pyplot as plt
import numpy as np
d = np.array(data)
plt.plot(d[:, 0], d[:, 1])
plt.show()
#+END_SRC

#+RESULTS:
: None
Task management
Creating tasks in org-mode:
Add “TODO” to the start of a (sub)section (or S-right). C-c C-d to choose a deadline from the calendar. C-c a to see Agenda views.
** Project1
*** TODO thing1
DEADLINE: <2019-02-22 Fri>
*** TODO thing2
DEADLINE: <2019-02-20 Wed>
** Project2
*** TODO do something
DEADLINE: <2019-02-27 Wed>
*** TODO send email
More task management
- Once tasks are done they can be marked “DONE”
- Other states can be customised: NEXT, WAITING, CANCELLED, …
- S-right cycles between states; alternatively, type C-c C-t or just write the keyword yourself.
** DONE that thing
CLOSED: [2019-02-18 Mon 10:28]
- State "DONE"    from "WAITING" [2019-02-18 Mon 10:28]
- State "WAITING" from "DONE"    [2019-02-18 Mon 10:27] \\
  waiting for X
This can be customised to fit your preferred way of working
Getting Things Done (GTD) Time management
How much time do you spend on each task?
- C-c C-x C-i clock in
- C-c C-x C-o clock out
- C-c a c Agenda clock view
- C-c C-x C-r insert / update clock table

Presentations!
This presentation is Org mode with org-show
Files can be exported to many other formats: C-c C-e
e.g. LaTeX -> PDF C-c C-e l p
Can be used as alternative to writing raw LaTeX.
|
Sign-changing solutions for some nonhomogeneous nonlocal critical elliptic problems
Departamento de Matemática, Universidad Técnica Federico Santa María, Avenida España 1680, Valparaíso, Chile
We study the problem

$ (-\Delta_{\Omega})^{s} u = \left| u\right| ^{\frac{4}{N-2s}}u +\varepsilon f(x)\quad \mbox{in }\Omega, $

where $ \Omega $ is a bounded domain in $ \mathbb{R}^{N} $, $ N>4s $, $ s\in (0,1] $, $ f\in L^{\infty}(\Omega) $ with $ f\geq 0 $ and $ f\neq0 $, $ \varepsilon>0 $, and $ (-\Delta_{\Omega})^{s} $ is the spectral fractional Laplacian. We show that the number of sign-changing solutions goes to infinity as $ \varepsilon\rightarrow 0 $, under suitable assumptions on $ \Omega $ and $ f $.

Keywords: Fractional Laplacian, sign-changing solution, nonlinear elliptic equation, critical exponent, reduction method.

Mathematics Subject Classification: Primary: 35B20, 35B40; Secondary: 35J60, 35B38.

Citation: Salomón Alarcón, Jinggang Tan. Sign-changing solutions for some nonhomogeneous nonlocal critical elliptic problems. Discrete & Continuous Dynamical Systems - A, 2019, 39 (10) : 5825-5846. doi: 10.3934/dcds.2019256
|
freqz uses an FFT-based algorithm to calculate the Z-transform frequency response of a digital filter. Specifically, the statement

[h,w] = freqz(b,a,p)

returns the p-point complex frequency response, H(e^jω), of the digital filter.
In its simplest form, freqz accepts the filter coefficient vectors b and a, and an integer p specifying the number of points at which to calculate the frequency response. freqz returns the complex frequency response in vector h, and the actual frequency points in vector w in rad/sample.
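For readers outside MATLAB, the core of what freqz computes can be sketched in NumPy. This is my own minimal re-implementation, not MathWorks code; it evaluates the rational transfer function directly on the unit circle rather than using MATLAB's FFT-based approach:

```python
import numpy as np

def freqz_np(b, a, n_points=256, fs=2 * np.pi, whole=False):
    """Evaluate H(e^{jw}) = B(e^{jw}) / A(e^{jw}), in the spirit of freqz.

    Returns (h, f) where f is in the same units as fs; by default fs = 2*pi,
    so f is in radians/sample over [0, pi) (or [0, 2*pi) if whole=True).
    """
    b = np.atleast_1d(np.asarray(b, dtype=float))
    a = np.atleast_1d(np.asarray(a, dtype=float))
    last = fs if whole else fs / 2
    f = np.linspace(0.0, last, n_points, endpoint=False)
    w = 2 * np.pi * f / fs                               # radians/sample
    z = np.exp(1j * w)
    num = sum(bk * z**(-k) for k, bk in enumerate(b))    # B(e^{jw})
    den = sum(ak * z**(-k) for k, ak in enumerate(a))    # A(e^{jw})
    return num / den, f
```

For example, `freqz_np([1], [1, -0.5])` gives the response of the one-pole filter y[n] = 0.5 y[n-1] + x[n], whose DC gain is 2.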
freqz can accept other parameters, such as a sampling frequency or a vector of arbitrary frequency points. The example below finds the 256-point frequency response for a 12th-order Chebyshev Type I filter. The call to freqz specifies a sampling frequency fs of 1000 Hz:

[b,a] = cheby1(12,0.5,200/500);
[h,f] = freqz(b,a,256,1000);
Because the parameter list includes a sampling frequency, freqz returns a vector f that contains the 256 frequency points between 0 and fs/2 used in the frequency response calculation.
This toolbox uses the convention that unit frequency is the Nyquist frequency, defined as half the sampling frequency. The cutoff frequency parameter for all basic filter design functions is normalized by the Nyquist frequency. For a system with a 1000 Hz sampling frequency, for example, 300 Hz corresponds to 300/500 = 0.6. To convert normalized frequency to angular frequency around the unit circle, multiply by π. To convert normalized frequency back to hertz, multiply by half the sample frequency.
If you call freqz with no output arguments, it plots both magnitude versus frequency and phase versus frequency. For example, a ninth-order Butterworth lowpass filter with a cutoff frequency of 400 Hz, based on a 2000 Hz sampling frequency, is

[b,a] = butter(9,400/1000);
To calculate the 256-point complex frequency response for this filter, and plot the magnitude and phase with freqz, use

freqz(b,a,256,2000)
freqz can also accept a vector of arbitrary frequency points for use in the frequency response calculation. For example,

w = linspace(0,pi);
h = freqz(b,a,w);

calculates the complex frequency response at the frequency points in w for the filter defined by vectors b and a. The frequency points can range from 0 to 2π. To specify a frequency vector that ranges from zero to your sampling frequency, include both the frequency vector and the sampling frequency value in the parameter list.
These examples show how to compute and display digital frequency responses.
Compute and display the magnitude response of the third-order IIR lowpass filter described by the following transfer function:
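The transfer function, reconstructed here from the coefficient vectors defined in the code below, is

$$H(z) = 0.05634\,\frac{\left(1+z^{-1}\right)\left(1 - 1.0166\,z^{-1} + z^{-2}\right)}{\left(1 - 0.683\,z^{-1}\right)\left(1 - 1.4461\,z^{-1} + 0.7957\,z^{-2}\right)}$$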
Express the numerator and denominator as polynomial convolutions. Find the frequency response at 2001 points spanning the complete unit circle.
b0 = 0.05634;
b1 = [1 1];
b2 = [1 -1.0166 1];
a1 = [1 -0.683];
a2 = [1 -1.4461 0.7957];
b = b0*conv(b1,b2);
a = conv(a1,a2);
[h,w] = freqz(b,a,'whole',2001);
Plot the magnitude response expressed in decibels.
plot(w/pi,20*log10(abs(h)))
ax = gca;
ax.YLim = [-100 20];
ax.XTick = 0:.5:2;
xlabel('Normalized Frequency (\times\pi rad/sample)')
ylabel('Magnitude (dB)')
Design an FIR bandpass filter with a passband between 0.35π and 0.8π rad/sample and 3 dB of ripple. The first stopband goes from 0 to 0.1π rad/sample and has an attenuation of 40 dB. The second stopband goes from 0.9π rad/sample to the Nyquist frequency and has an attenuation of 30 dB. Compute the frequency response. Plot its magnitude in both linear units and decibels. Highlight the passband.
sf1 = 0.1;
pf1 = 0.35;
pf2 = 0.8;
sf2 = 0.9;
pb = linspace(pf1,pf2,1e3)*pi;
bp = designfilt('bandpassfir', ...
    'StopbandAttenuation1',40,'StopbandFrequency1',sf1, ...
    'PassbandFrequency1',pf1,'PassbandRipple',3,'PassbandFrequency2',pf2, ...
    'StopbandFrequency2',sf2,'StopbandAttenuation2',30);
[h,w] = freqz(bp,1024);
hpb = freqz(bp,pb);
subplot(2,1,1)
plot(w/pi,abs(h),pb/pi,abs(hpb),'.-')
axis([0 1 -1 2])
legend('Response','Passband','Location','South')
ylabel('Magnitude')
subplot(2,1,2)
plot(w/pi,db(h),pb/pi,db(hpb),'.-')
axis([0 1 -60 10])
xlabel('Normalized Frequency (\times\pi rad/sample)')
ylabel('Magnitude (dB)')
Design a 3rd-order highpass Butterworth filter having a normalized 3-dB frequency of 0.5π rad/sample. Compute its frequency response. Express the magnitude response in decibels and plot it.
[b,a] = butter(3,0.5,'high');
[h,w] = freqz(b,a);
dB = mag2db(abs(h));
plot(w/pi,dB)
xlabel('\omega / \pi')
ylabel('Magnitude (dB)')
ylim([-82 5])
Repeat the computation using fvtool.

fvtool(b,a)
freqs evaluates the frequency response of an analog filter defined by two input coefficient vectors, b and a. Its operation is similar to that of freqz; you can specify a number of frequency points to use, supply a vector of arbitrary frequency points, and plot the magnitude and phase response of the filter. These examples show how to compute and display analog frequency responses.
Design a 5th-order analog Butterworth lowpass filter with a cutoff frequency of 2 GHz. Multiply by 2π to convert the frequency to radians per second. Compute the frequency response of the filter at 4096 points.
n = 5;
f = 2e9;
[zb,pb,kb] = butter(n,2*pi*f,'s');
[bb,ab] = zp2tf(zb,pb,kb);
[hb,wb] = freqs(bb,ab,4096);
Design a 5th-order Chebyshev Type I filter with the same edge frequency and 3 dB of passband ripple. Compute its frequency response.
[z1,p1,k1] = cheby1(n,3,2*pi*f,'s');
[b1,a1] = zp2tf(z1,p1,k1);
[h1,w1] = freqs(b1,a1,4096);
Design a 5th-order Chebyshev Type II filter with the same edge frequency and 30 dB of stopband attenuation. Compute its frequency response.
[z2,p2,k2] = cheby2(n,30,2*pi*f,'s');
[b2,a2] = zp2tf(z2,p2,k2);
[h2,w2] = freqs(b2,a2,4096);
Design a 5th-order elliptic filter with the same edge frequency, 3 dB of passband ripple, and 30 dB of stopband attenuation. Compute its frequency response.
[ze,pe,ke] = ellip(n,3,30,2*pi*f,'s');
[be,ae] = zp2tf(ze,pe,ke);
[he,we] = freqs(be,ae,4096);
Plot the attenuation in decibels. Express the frequency in gigahertz. Compare the filters.
plot(wb/(2e9*pi),mag2db(abs(hb)))
hold on
plot(w1/(2e9*pi),mag2db(abs(h1)))
plot(w2/(2e9*pi),mag2db(abs(h2)))
plot(we/(2e9*pi),mag2db(abs(he)))
axis([0 4 -40 5])
grid
xlabel('Frequency (GHz)')
ylabel('Attenuation (dB)')
legend('butter','cheby1','cheby2','ellip')
The Butterworth and Chebyshev Type II filters have flat passbands and wide transition bands. The Chebyshev Type I and elliptic filters roll off faster but have passband ripple. The frequency input to the Chebyshev Type II design function sets the beginning of the stopband rather than the end of the passband.
|
The following describes my understanding of floating point representations.
(For numbers $b, m, E_{min}, E_{max} \in \mathbb{N} \setminus \{0\}$, $b>1$, $E_{min} \leq E_{max}$...)
Let $F(b, m, E_{min}, E_{max})$ be the set of all real numbers $x \in \mathbb{R}$ that can represented as
$x = \sigma \cdot (\sum_{i=0}^{m-1} s_i b^{-i}) \cdot b^E$
where $\sigma \in \{-1, 1\}$, $s_i \in \{0, ..., b-1\}$, $s_0 \neq 0$, $E \in \{E_{min}, ..., E_{max}\}$.
$F(b, m, E_{min}, E_{max})$ is then called a floating point range.
The mapping of an arbitrary real number $x \in \mathbb{R}$ to its closest floating point representation $x' \in F(b, m, E_{min}, E_{max})$, for a given floating point range, is called rounding.
Now my question is how to do the rounding for an arbitrary real number given an arbitrary floating point range (where the base $b$ is not necessarily $2$ or $10$). Once I have this, I want to know how to convert floating point representations from one floating point range into another, for example one with a different base. There won't always be an exact representation, so I have to find the closest again. How do I do this?
Thank you in advance.
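Not an authoritative algorithm, but one way to sketch round-to-nearest for an arbitrary base in Python, following the definition above (underflow/overflow are only handled by a crude exponent clamp, and ties use Python's round-half-to-even):

```python
import math

def round_to_float(x, b, m, Emin, Emax):
    """Round a real x to the nearest element of F(b, m, Emin, Emax).

    Representation: sigma * (s_0 . s_1 ... s_{m-1})_b * b**E with s_0 != 0,
    i.e. an m-digit base-b significand in [1, b).
    """
    if x == 0.0:
        return 0.0
    sigma = -1.0 if x < 0 else 1.0
    a = abs(x)
    E = math.floor(math.log(a, b))          # want b**E <= a < b**(E+1)
    if a < float(b)**E:                     # guard against rounding inside log()
        E -= 1
    elif a >= float(b)**(E + 1):
        E += 1
    E = max(Emin, min(E, Emax))             # clamp into the allowed range
    scaled = a / float(b)**E * float(b)**(m - 1)  # unrounded m-digit significand
    digits = round(scaled)                  # nearest m-digit significand
    if digits >= b**m:                      # rounding carried into an extra digit
        digits //= b
        E += 1
    return sigma * digits * float(b)**(E - (m - 1))
```

Converting between two floating point ranges is then just re-rounding: apply the function with the new $(b, m, E_{min}, E_{max})$ to the exact value of the old representation.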
|
Construct two functions $ f,g: R^+ → R^+ $ satisfying:
$f, g$ are continuous; $f, g$ are monotonically increasing; $f \ne O(g)$ and $g \ne O(f)$.
There are many examples for such functions. Perhaps the easiest way to understand how to get such an example, is by manually constructing it.
Let's start with function over the natural numbers, as they can be continuously completed to the reals.
A good way to ensure that $f\neq O(g)$ and $g\neq O(f)$ is to
alternate between their orders of magnitude. For example, we could define
$f(n)=\begin{cases} n & n \text{ is odd}\\ n^2 & n \text{ is even}\\ \end{cases}$
Then, we could have $g$ behave the opposite on the odds and evens. However, this
doesn't work for you, because these functions are not monotonically increasing.
However, the choice of $n,n^2$ was somewhat arbitrary, and we could simply increase the magnitudes so as to have monotonicity. This way, we may come up with:
$f(n)=\begin{cases} n^{2n} & n \text{ is odd}\\ n^{2n-1} & n \text{ is even}\\ \end{cases}$, and $g(n)=\begin{cases} n^{2n-1} & n \text{ is odd}\\ n^{2n} & n \text{ is even}\\ \end{cases}$
Clearly these are monotone functions. Also, $f(n)\neq O(g(n))$, since on the odd integers, $f$ behaves like $n^{2n}$ while $g$ behaves like $n^{2n-1}=n^{2n}/n=o(n^{2n})$, and vice-versa on the evens.
Now all you need is to complete them to the reals (e.g. by adding linear parts between the integers, but this is really beside the point).
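A quick numeric sanity check of this construction over the integers (Python):

```python
def f(n):
    # n^(2n) on odd n, n^(2n-1) on even n
    return n**(2 * n) if n % 2 == 1 else n**(2 * n - 1)

def g(n):
    # the mirror image of f
    return n**(2 * n - 1) if n % 2 == 1 else n**(2 * n)

# f/g equals n on odd n and 1/n on even n, so the ratio is unbounded in
# both directions: neither function is O of the other.
```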
Also, now that you have this idea, you could use the trigonometric functions in order to construct ``closed formulas'' for such functions, since $\sin$ and $\cos$ are oscillating, and peak on alternating points.
A good illustration for me is: http://www.wolframalpha.com/input/?i=sin%28x%29%2B2x%2C+cos%28x%29%2B2x

$$ f(x) = 2x + \sin(x) $$ $$ g(x) = 2x + \cos(x) $$ $$ f\neq O(g) $$ $$ g\neq O(f) $$
|
I've seen Julius' MATLAB code and I know what it does.
Essentially, given an LTI filter with impulse response, $h[n]$, and frequency response:
$$\begin{align} H(e^{j \omega}) &\triangleq \Big| H(e^{j \omega}) \Big| \, e^{j \phi(\omega) } \\&= \sum\limits_{n=-\infty}^{\infty} h[n] \, e^{-j \omega n}\end{align}$$
Then $\Big| H(e^{j \omega}) \Big| > 0$ is the magnitude response and $\phi(\omega)$ is the phase response.
For a minimum-phase filter, the phase response (expressed in radians) is the negative of the Hilbert transform of the natural log of the magnitude response.
The natural log of the magnitude response, $\log \big| H(e^{j \omega}) \big|$, is just like decibels (dB) but expressed in a different dimensionless unit, the neper. 8.685889638 dB is equal to 1 neper. Essentially, the nepers are the real part of the complex natural log of the complex frequency response, and the phase in radians is the imaginary part.
Just like radians is the mathematically natural unit for angle, so also are nepers the mathematically natural unit for relative change of magnitude. A change of magnitude of 1% is nearly the same as 0.01 neper. But, like dB, going up 0.1 neper followed by going down 0.1 neper will land you at exactly the original magnitude. But going up 10% followed by going down 10% will land you at slightly less than the original magnitude.
So, the (unwrapped) minimum phase of a filter, given its magnitude response, is:
$$\begin{align} \phi(\omega) &= - \mathscr{H} \Big\{ \log \big| H(e^{j \omega}) \big| \Big\} \\\\ &= - \int\limits_{-\pi}^{\pi} \log \Big| H(e^{j \theta}) \Big| \, \Big( 2 \pi \tan \big(\tfrac{\omega-\theta}{2} \big) \Big)^{-1} \, \mathrm{d}\theta \\\end{align}$$
Where $\mathscr{H} \big\{ \cdot \big\}$ is the Hilbert Transform.
If we were to evaluate that integral directly, we would need to take the Cauchy principal value to deal with the division by zero when $\theta = \omega$. But we won't do the Hilbert transform that way.
Okay, so Step 2 is understanding that the Discrete-Time Fourier Transform (DTFT) of this sequence:
$$ g[n] \triangleq \begin{cases}0 \qquad & n<0 \\1 \qquad & n=0 \\2 \qquad & n>0 \\\end{cases} $$
is
$$\begin{align} G(e^{j\omega}) &= \sum\limits_{n=-\infty}^{\infty} g[n] \, e^{-j \omega n} \\\\ &= \frac{e^{j\omega}+1}{e^{j\omega}-1} \\\\ &= \frac{e^{j\omega/2}+e^{-j\omega/2}}{e^{j\omega/2}-e^{-j\omega/2}} \\\\ &= \frac{2\cos(\omega/2)}{2j\sin(\omega/2)} \\\\ &= -j \frac{1}{\tan(\omega/2)} \\\end{align}$$
I'm running out of time; I'll return to this.
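For what it's worth, the two steps can be combined numerically. The sketch below is my own NumPy code, not from the original answer: it applies the folding window $g[n]$ from Step 2 to the real cepstrum of $\log|H|$ and exponentiates, which yields the minimum-phase response matching a sampled magnitude response.

```python
import numpy as np

def min_phase_from_magnitude(mag):
    """Minimum-phase frequency response matching a sampled magnitude response.

    mag: uniformly spaced samples of |H| around the FULL unit circle.
    """
    mag = np.asarray(mag, dtype=float)
    n = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-12))   # nepers; floor avoids log(0)
    cep = np.fft.ifft(log_mag).real            # real cepstrum of log|H|
    w = np.zeros(n)                            # discrete version of g[n]:
    w[0] = 1.0                                 #   keep n = 0 once,
    w[1:(n + 1) // 2] = 2.0                    #   double the "positive-time" bins
    if n % 2 == 0:
        w[n // 2] = 1.0                        #   Nyquist bin kept once
    return np.exp(np.fft.fft(cep * w))         # complex minimum-phase response
```

Feeding it the magnitude of a filter that is already minimum-phase should reproduce that filter's impulse response (up to cepstral aliasing).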
|
Although I've not specifically attempted a \$15\:\text{A}\$ boosted LM317 before, this is along the lines of what I'd try out first. This is roughly taken from the Figure 23 you mentioned:
simulate this circuit – Schematic created using CircuitLab
In this case, I went for the D44/D45 series devices. (The PNP version has simply HORRIBLE Early Effect, but it's not a big deal here.)
The values of \$R_6\$, \$R_8\$, and \$R_9\$ are set to drop somewhere from \$150-200\:\text{mV}\$ at full load. They will need to be rated for at least \$1\:\text{W}\$, but I would not feel comfortable with less than \$2\:\text{W}\$ resistors there. If you adjust those values, please keep in mind the dissipation question. You are talking about a lot of current.
To reduce the oscillation, you really want some ESR in \$C_2\$ to add a nice 'zero'. If you see oscillation in the output, try adding a small series resistor to \$C_2\$. \$15-39\:\text{m}\Omega\$ (as shown with \$R_{10}\$) should put a crimp in the oscillation. You might just make provisions for it and jumper it, without using a resistor, if your output seems fine with the output capacitor you selected. But here is one of those cases where output capacitor ESR is actually a good thing.
Your schematic shows an AC input. That's not good. I hope your schematic was just mistaken, there.
Since the minimum specification for the LM317 is \$3\:\text{V}\$ from input terminal to output terminal, the externally added circuit will always have more than enough headroom to operate so long as you supply that difference.
Keep in mind this is a linear power supply. With \$\approx 3.3\:\text{V}\$ output and \$\approx 3\:\text{V}\$ overhead, you will have little better than 50% efficiency. At full load, you will have \$\ge 45\:\text{W}\$ wasted dissipation, not counting the load's dissipation. And more than that, likely, because this ignores whatever you have supplying the unregulated input DC voltage -- where it is likely you have still more dissipation in diode rectifiers from AC, etc.
While perhaps \$3\:\text{W}\$ dissipation might occur in the emitter resistors, that still leaves pretty much all the rest with the bypass BJTs. Getting rid of \$15\:\text{W}\$ each will be the challenge. Note that if you want to allow a maximum junction temperature of, say, \$100\:^\circ\text{C}\$, and the worst-case ambient temperature you care about supporting is \$45\:^\circ\text{C}\$, then this means you need \$\frac{100^\circ\:\text{C}-45^\circ\:\text{C}}{15\:\text{W}}\approx 3.7\:\frac{^\circ\text{C}}{\text{W}}\$. For the parts I mentioned, junction to case is already \$1.8\:\frac{^\circ\text{C}}{\text{W}}\$. That leaves you only \$1.9\:\frac{^\circ\text{C}}{\text{W}}\$ for whatever you use as a heatsink plus the bonding interface between the BJTs and that heatsink. That's not a lot to work with.
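The heatsink arithmetic can be written out explicitly (illustrative Python, same numbers as the thermal discussion above):

```python
# Thermal budget for one bypass BJT
T_j_max = 100.0      # allowed junction temperature, deg C
T_ambient = 45.0     # worst-case ambient, deg C
P_diss = 15.0        # dissipation per transistor, W

# Total junction-to-ambient thermal resistance allowed, C/W
theta_total = (T_j_max - T_ambient) / P_diss

theta_jc = 1.8                         # junction-to-case for the D44/D45 parts
theta_budget = theta_total - theta_jc  # left for heatsink + bonding interface
```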
You might consider putting more of the dissipation into the emitter resistors, I suppose. More degeneration won't hurt you. I chose to set them at about a minimum resistance for the circuit, so increasing their values will be fine. (Don't decrease them much, though.) You need to work out this balancing act on your own.
|
Let $f(x) = \sum_{n=1}^\infty \frac{(-1)^n}{\sqrt n} \arctan\left(\frac{x}{\sqrt n}\right)$. Show that $f(x)$ converges uniformly.
First, it is easy to see that the series converges for every $x$ by Leibniz test.
Now, I'm not so sure how to prove uniform converges. I thought about the fact that $\arctan\left(\frac{x}{\sqrt n}\right)$ is bounded by $\frac{\pi}{2}$, problem is, it's not a supremum but an upper-bound.
I've tried to look for other tests like Weierstrass M-test but it didn't fit here.
EDIT: I think we should use the Cauchy criterion. Assume by contradiction that the convergence is not uniform; then there is an $\varepsilon > 0$ such that for every $N$ there are $m,n > N$ and some $x$ such that:
$$\left| \sum_{k=m}^n \frac{(-1)^k}{\sqrt k} \arctan(\frac{x}{\sqrt k})\right| \ge \varepsilon$$
Now, for every $x$:
$$\left| \sum_{k=m}^n \frac{(-1)^k}{\sqrt k} \arctan(\frac{x}{\sqrt k})\right| \le \left| \sum_{k=m}^n \frac{(-1)^k}{\sqrt k} \frac{\pi}{2} \right| = \frac{\pi}{2} \left|\sum_{k=m}^n \frac{(-1)^k}{\sqrt k}\right|$$
Since the latter series converges by the Leibniz test, it is Cauchy, and therefore there is an $N$ such that for every $m,n > N$:
$$\left|\sum_{k=m}^n \frac{(-1)^k}{\sqrt k}\right| < \frac{2\varepsilon}{\pi}$$
And so, we're done.
Could you verify my proof please?
Thanks.
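As a numerical illustration only (not a proof): the alternating-series tail bound $|S - S_N| \le a_{N+1}(x) \le \frac{\pi/2}{\sqrt{N+1}}$ is independent of $x$ because $|\arctan| < \pi/2$, and partial sums in Python behave accordingly:

```python
import math

def partial_sum(x, N):
    """Partial sum S_N(x) of the series in the question."""
    return sum((-1)**n / math.sqrt(n) * math.atan(x / math.sqrt(n))
               for n in range(1, N + 1))
```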
|
I'm trying to find the field equations for some particular Lagrangian. In the middle I faced the term
$$\frac{\delta \Gamma_{\beta\gamma}^{\alpha}}{\delta g^{\mu\nu}} \, .$$
I know that
$$\delta \Gamma_{\beta\gamma}^{\alpha} = \frac{1}{2}g^{\sigma\alpha}(\nabla_{\beta}(\delta g_{\sigma\gamma}) + \nabla_{\gamma}(\delta g_{\sigma\beta}) - \nabla_{\sigma}(\delta g_{\beta\gamma})) \, .$$
I have two questions:
Is the expression for $\delta \Gamma_{\beta\gamma}^{\alpha}$ somehow related to $\frac{\delta \Gamma_{\beta\gamma}^{\alpha}}{\delta g^{\mu\nu}}$?
The idea at the end is to have terms like $$\frac{\delta \mathcal{L}}{\delta g^{\alpha\beta}} = 0$$ and thus make the variation of the action vanish for arbitrary $\delta g^{\alpha\beta}$. So in simple words, is there any way to pull the factor $\delta g^{\alpha\beta}$ out of the variation of the Christoffel symbol?
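One standard intermediate step (sketched here, using only $\delta(g_{\sigma\mu}g^{\mu\gamma})=0$) converts variations of the covariant metric into variations of the contravariant one:

$$\delta g_{\sigma\gamma} = -\,g_{\sigma\mu}\,g_{\gamma\nu}\,\delta g^{\mu\nu}$$

Substituting this into the expression for $\delta \Gamma_{\beta\gamma}^{\alpha}$ writes it entirely in terms of covariant derivatives acting on $\delta g^{\mu\nu}$; inside the action, those derivatives can then be moved off $\delta g^{\mu\nu}$ by integration by parts, which is how one isolates an overall factor of $\delta g^{\mu\nu}$ even though no pointwise ratio $\frac{\delta \Gamma_{\beta\gamma}^{\alpha}}{\delta g^{\mu\nu}}$ exists by itself.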
|
Now showing items 1-10 of 27
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
What is Young’s Double Slit Experiment?
Young’s double-slit experiment uses two coherent sources of light placed a small distance apart, usually only a few orders of magnitude greater than the wavelength of light. Young’s double-slit experiment helped in understanding the wave theory of light, which is explained with the help of a diagram. A screen or photodetector is placed at a large distance ‘D’ away from the slits as shown.
The original Young’s double-slit experiment used diffracted light from a single source passed into two more slits to be used as coherent sources. Lasers are commonly used as coherent sources in the modern-day experiments.
Table of Contents: Derivation · Position of Fringes · Shape of Fringes · Intensity of Fringes · Special Cases · Displacement of Fringes · Constructive and Destructive Interference
Each source can be considered as a source of coherent light waves. At any point on the screen at a distance ‘y’ from the centre, the waves travel distances l1 and l2, creating a path difference of Δl at the point in question. The point subtends an angle of approximately θ at the sources (since the distance D is large, there is only a very small difference between the angles subtended at the two sources).

Derivation of Young’s Double Slit Experiment
Consider a monochromatic light source ‘S’ kept at a considerable distance from two slits, s1 and s2. S is equidistant from s1 and s2, so s1 and s2 behave as two coherent sources, both being derived from S. The light passes through these slits and falls on a screen at a distance ‘D’ from the slits, with ‘d’ the separation between the two slits.

If s1 is open and s2 is closed, only the part of the screen opposite s1 is illuminated. The interference pattern appears only when both slits s1 and s2 are open.
When the slit separation (d) and the screen distance (D) are kept unchanged, to reach
P the light waves from s 1 and s 2 must travel different distances. It implies that there is a path difference in Young’s double slit experiment between the two light waves from s 1 and s 2. Approximations in Young’s double slit experiment Approximation 1: D > > d:Since D > > d, the two light rays are assumed to be parallel, then the path difference, Approximation 2: d/λ >> 1:Often, d is a fraction of a millimetre and λ is a fraction of a micrometre for visible light.
Under these conditions θ is small, so we can use the approximation sin θ ≈ tan θ = y/D.
∴ Path difference, \(\Delta z = \frac{yd}{D}\)
This is the path difference between two waves meeting at a point on the screen. Due to this path difference in Young’s double slit experiment, some points on the screen are bright and some points are dark.
Position of Fringes in Young’s Double Slit Experiment
Position of Bright Fringes
For maximum intensity at P
Δz = nλ (n = 0, ±1, ±2, . . . .)
Or d sin θ = nλ (n = 0, ±1, ±2, . . . .)
The bright fringe for n = 0 is known as the central fringe. Higher-order fringes are situated symmetrically about the central fringe. The position of the nth bright fringe is given by
\(y_{bright} = \frac{n\lambda D}{d}\) (n = 0, ±1, ±2, . . . .)
Position of Dark Fringes
For minimum intensity at P, \(\Delta z=\left( 2n-1 \right)\frac{\lambda }{2}\ \left( n=\pm 1,\pm 2,\ldots \right)\), i.e. \(d\sin \theta =\left( 2n-1 \right)\frac{\lambda }{2}\)
The first minima are adjacent to the central maximum on either side. The position of the dark fringes is \({{y}_{dark}}=\frac{\left( 2n-1 \right)\lambda D}{2d}\ \left( n=\pm 1,\pm 2,\ldots \right)\)
Fringe Width
The distance between two adjacent bright (or dark) fringes is called the fringe width: \(\beta =\frac{n\lambda D}{d}-\frac{\left( n-1 \right)\lambda D}{d}=\frac{\lambda D}{d}\)
⇒ \(\beta \propto \lambda\)
If the apparatus of Young’s double slit experiment is immersed in a liquid of refractive index μ, then the wavelength of light, and hence the fringe width, decreases μ times: \({\beta }'=\frac{\beta }{\mu }\)
If white light is used in place of monochromatic light, coloured fringes are obtained on the screen, with the red fringes wider than the violet ones.
Angular width of Fringes
Let the angular position of the nth bright fringe be \({{\theta }_{n}}\); because it is small, \(\tan {{\theta }_{n}}\approx {{\theta }_{n}}\), so
\({{\theta }_{n}}\approx \tan {{\theta }_{n}}=\frac{{{y}_{n}}}{D}=\frac{n\lambda D/d}{D}=\frac{n\lambda }{d}\)
Similarly, the angular position of the (n+1)th bright fringe is \({{\theta }_{n+1}}\).
∴ The angular width of a fringe in Young’s double slit experiment is given by \(\theta ={{\theta }_{n+1}}-{{\theta }_{n}}=\frac{\left( n+1 \right)\lambda }{d}-\frac{n\lambda }{d}=\frac{\lambda }{d}\)
We know that \(\beta =\frac{\lambda D}{d}\)
⇒ \(\theta =\frac{\lambda }{d}=\frac{\beta }{D}\)
Angular width is independent of ‘n’, i.e. the angular width of all fringes is the same.
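The fringe formulas above are easy to check numerically. Here is a small sketch; all parameter values are illustrative, not from the text:

```python
# Numeric sketch of the YDSE fringe formulas derived above.
# All parameter values are illustrative.
lam = 600e-9   # wavelength (m)
d   = 1e-3     # slit separation (m)
D   = 1.0      # slit-to-screen distance (m)

def y_bright(n):
    """Position of the n-th bright fringe: y = n*lam*D/d."""
    return n * lam * D / d

def y_dark(n):
    """Position of the n-th dark fringe: y = (2n-1)*lam*D/(2d)."""
    return (2 * n - 1) * lam * D / (2 * d)

beta  = lam * D / d   # fringe width (m)
theta = lam / d       # angular fringe width (rad), independent of n

# Adjacent bright fringes are exactly one fringe width apart:
assert abs(y_bright(2) - y_bright(1) - beta) < 1e-12
print(beta)   # 6e-4 m, i.e. 0.6 mm fringes
```

With these numbers the dark fringes sit exactly halfway between the bright ones, as the derivation requires.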
Maximum Order of Interference Fringes
The position of the nth order maxima on the screen is \(y=\frac{n\lambda D}{d};\ n=0,\pm 1,\pm 2,\ldots\)
But ‘n’ cannot take infinitely large values, as that would violate the second approximation, i.e. θ is small (or y << D).
⇒ \(\frac{y}{D}=\frac{n\lambda }{d}<<1\)
Hence, the above formula for interference maxima is applicable when \(n<<\frac{d}{\lambda }\)
When ‘n’ becomes comparable to \(\frac{d}{\lambda }\), the path difference can no longer be written as \(\frac{yd}{D}\); for maxima we must use the exact condition, path difference = nλ
⇒ \(d\sin \theta =n\lambda\)
⇒ \(n=\frac{d\sin \theta }{\lambda }\). Since \(\sin \theta \le 1\), the highest order of interference maxima is \({{n}_{\max }}=\left[ \frac{d}{\lambda } \right]\)
where [ ] denotes the greatest integer (floor) function.
Similarly, the highest order of interference minima is \({{n}_{\min }}=\left[ \frac{d}{\lambda }+\frac{1}{2} \right]\)
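As a quick sketch (with illustrative numbers), the cut-off orders follow directly from the floor function:

```python
import math

# Sketch of the cut-off orders n_max = [d/lam] and n_min = [d/lam + 1/2],
# where [.] is the greatest integer (floor) function. Numbers are illustrative.
lam = 600e-9    # wavelength (m)
d   = 0.1e-3    # slit separation (m)

ratio = d / lam                   # ~166.67 here
n_max = math.floor(ratio)         # highest order of maxima
n_min = math.floor(ratio + 0.5)   # highest order of minima

print(n_max, n_min)   # 166 167
```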
Shape of Interference Fringes in YDSE
From the given YDSE diagram, the path difference from the two slits is \({{s}_{2}}P-{{s}_{1}}P\approx d\sin \theta\) (constant).
The above equation represents a hyperbola with its two foci at \(s_1\) and \(s_2\).
The interference pattern we get on the screen is a section of the surface obtained when this hyperbola is revolved about the axis \(s_1s_2\).
If the path difference is λ/2, the fringe represents the first minimum; if it is λ, the first maximum; if it is zero, the central maximum.
If the screen is the yz plane (perpendicular to the line joining the slits), the fringes are concentric circles.
If the screen is the xy plane (parallel to the line joining the slits), the fringes are hyperbolic with a straight central section.
Intensity of Fringes In Young’s Double Slit Experiment
For two coherent sources \(s_1\) and \(s_2\), the resultant intensity at point P is given by
I = I1 + I2 + 2 √(I1 · I2) cos φ
Putting I1 = I2 = I0 (since d << D, the two slits contribute nearly equal intensities),
I = I0 + I0 + 2 √(I0 · I0) cos φ
I = 2I0 + 2I0 cos φ
I = 2I0 (1 + cos φ)
I = \(4{{I}_{0}}{{\cos }^{2}}\left( \frac{\phi }{2} \right)\).
For maximum intensity, \(\cos \frac{\phi }{2}=\pm 1\), i.e. \(\frac{\phi }{2}=n\pi ,\ n=0,\pm 1,\pm 2,\ldots\)
or φ = 2nπ.
For a phase difference φ = 2nπ, the path difference is \(\Delta x=\frac{\lambda }{2\pi }\left( 2n\pi \right)=n\lambda\)
The intensity at the bright points is maximum and given by
Imax = 4I0
For minimum intensity, \(\cos \frac{\phi }{2}=0\), i.e. \(\frac{\phi }{2}=\left( n-\frac{1}{2} \right)\pi ,\ \text{where}\ n=\pm 1,\pm 2,\pm 3,\ldots\)
φ = (2n – 1)π
For a phase difference φ = (2n – 1)π, \(\frac{2\pi }{\lambda }\Delta x=\left( 2n-1 \right)\pi\), so \(\Delta x=\left( 2n-1 \right)\frac{\lambda }{2}\)
Thus, the intensity of the minima is given by
Imin = 0
If \({{I}_{1}}\ne {{I}_{2}},{{I}_{\min }}\ne 0.\)
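A short numerical sketch of the intensity formula, taking I0 as an arbitrary unit:

```python
import math

# Sketch of the two-source intensity formula I(phi) = 4*I0*cos^2(phi/2),
# derived above from I = I1 + I2 + 2*sqrt(I1*I2)*cos(phi) with I1 = I2 = I0.
# I0 is an arbitrary unit.
I0 = 1.0

def intensity(phi):
    return 4 * I0 * math.cos(phi / 2) ** 2

# Maxima (4*I0) at phi = 2n*pi, minima (0) at phi = (2n - 1)*pi:
assert abs(intensity(0.0) - 4 * I0) < 1e-12
assert abs(intensity(2 * math.pi) - 4 * I0) < 1e-12
assert abs(intensity(math.pi)) < 1e-12
```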
Special Cases
Rays Not Parallel to Principal Axis:
From the above diagram,
Path difference \(\Delta x=\left( A{{S}_{1}}+{{S}_{1}}P \right)-{{S}_{2}}P\) \(\Delta x=A{{S}_{1}}-\left( {{S}_{2}}P-{{S}_{1}}P \right)\) \(\Delta x=d\sin \theta -\frac{yd}{D}\)
For maxima \(\Delta x=n\lambda\)
For minima \(\Delta x=\left( 2n-1 \right)\frac{\lambda }{2}\)
Using this we can calculate different positions of maxima and minima.
Source Placed Beyond the Central Line:
If the source is placed a little above or below this centre line, the waves interacting with \(S_1\) and \(S_2\) have a path difference at a point P on the screen,
Δ x= (distance of ray 2) – (distance of ray 1)
= \(\left( S\,{{S}_{2}}+{{S}_{2}}P \right)-\left( S\,{{S}_{1}}+{{S}_{1}}P \right)\)
= \(\left( S\,{{S}_{2}}-S\,{{S}_{1}} \right)+\left( {{S}_{2}}P-{{S}_{1}}P \right)\)
= bd/a + yd/D → (*)
We know Δx = nλ for maximum
Δx = (2n – 1) λ/2 for minimum
By knowing the value of Δx from (*) we can calculate different positions of maxima and minima.
Displacement of Fringes in YDSE
When a thin transparent plate of thickness ‘t’ is introduced in front of one of the slits in Young’s double slit experiment, the fringe pattern shifts toward the side where the plate is present.
The dotted lines denote the path of the light before introducing the transparent plate. The solid lines denote the path of the light after introducing a transparent plate.
Path difference before introducing the plate \(\Delta x={{S}_{1}}P-{{S}_{2}}P\)
Path difference after introducing the plate \(\Delta {{x}_{new}}={{S}_{1}}P'-{{S}_{2}}P'\)
The path length \({{S}_{2}}P'={{\left( {{S}_{2}}P'-t \right)}_{air}}+{{t}_{plate}}={{\left( {{S}_{2}}P'-t \right)}_{air}}+{{\left( \mu t \right)}_{plate}}\)
where \(\mu t\) is the optical path length of the plate, so \({{S}_{2}}P'={{\left( {{S}_{2}}P' \right)}_{air}}+\left( \mu -1 \right)t\)
Then we get \({{\left( \Delta x \right)}_{new}}={{\left( {{S}_{1}}P' \right)}_{air}}-\left( {{\left( {{S}_{2}}P' \right)}_{air}}+\left( \mu -1 \right)t \right)={{\left( {{S}_{1}}P'-{{S}_{2}}P' \right)}_{air}}-\left( \mu -1 \right)t\), i.e. \({{\left( \Delta x \right)}_{new}}=d\sin \theta -\left( \mu -1 \right)t=\frac{yd}{D}-\left( \mu -1 \right)t\)
Then, \(y=\underbrace{\frac{\Delta x\,D}{d}}_{\left( 1 \right)}+\underbrace{\frac{D}{d}\left( \mu -1 \right)t}_{\left( 2 \right)}\)
Term (1) defines the position of a bright or dark fringe, the term (2) defines the shift occurred in the particular fringe due to the introduction of a transparent plate.
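The shift term (D/d)(μ − 1)t can be sketched numerically; all values here are illustrative:

```python
# Sketch of the fringe shift when a transparent plate of thickness t and
# refractive index mu covers one slit: shift = (D/d)*(mu - 1)*t.
# All values are illustrative.
lam  = 600e-9     # wavelength (m)
d, D = 1e-3, 1.0  # slit separation and screen distance (m)
mu, t = 1.5, 6e-6 # plate refractive index and thickness

shift = (D / d) * (mu - 1) * t           # shift of the pattern (m)
shift_in_fringes = (mu - 1) * t / lam    # same shift in fringe widths

print(shift, shift_in_fringes)   # 0.003 m, i.e. 5 fringe widths
```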
Constructive and Destructive Interference
For constructive interference, the path difference must be an integral multiple of the wavelength.
Thus for a bright fringe to be at ‘y’,
nλ = yd/D
Or, y_n = nλD/d
Where n = 0, ±1, ±2, ±3, …
The 0th fringe represents the central bright fringe.
Similarly, the expression for a dark fringe in Young’s double slit experiment can be found by setting the path difference as:
Δl = (n + ½)λ
This simplifies to y_n = (n + ½)λD/d
Note that these expressions require that θ be very small. Hence y/D needs to be very small, which means D should be very large and y small; the formula therefore works best for fringes close to the central maximum. In general, for best results, d/D must be kept as small as possible for a good interference pattern.
The Young’s double slit experiment was a watershed moment in scientific history because it firmly established that light indeed behaved as a wave.
The Double Slit Experiment was later conducted using electrons, and to everyone’s surprise, the pattern generated was similar as expected with light. This would forever change our understanding of matter and particles, forcing us to accept that matter like light also behaves like a wave.
|
Let $E_\delta =[0,1]^{N-1}\times [0,\delta]$, $p\in [1,\infty)$ and $1/p+1/p'=1$. Let $\varphi\in C^1(E_\delta)$ such that $$\varphi(x)=0,\ \forall \ x\in [0,1]^{N-1}\times \{0\}.$$
By the fundamental theorem of calculus and Holder inequality, if $x'=(x_1,\cdots,x_{N-1})$ and $x=(x',t)$ then $$|\varphi(x)-\varphi(x',0)|=\left|\int_0^t \frac{\partial \varphi(x',s)}{\partial s}ds\right|\le t^{1/p'}\left(\int_0^\delta\left|\frac{\partial \varphi(x',s)}{\partial s}\right|^pds\right)^{1/p},$$
which implies that $$|\varphi(x)|^p\le t^{p/p'}\int_0^\delta\left|\frac{\partial \varphi(x',s)}{\partial s}\right|^pds,$$
Integrating over \(E_\delta\) and applying Fubini's theorem in the above inequality (and using \(\int_0^\delta t^{p/p'}\,dt=\frac{\delta^{p/p'+1}}{p/p'+1}=\frac{\delta^p}{p}\), since \(p/p'+1=p\)), we may conclude that $$\int_{E_\delta} |\varphi(x)|^p\le \frac{\delta^p}{p}\int_{E_\delta}|\nabla \varphi(x)|^p.$$
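As a sanity check (not part of the proof), the one-dimensional version of the inequality can be verified numerically for a test function, here $\varphi(t) = t$ with $p = 2$, where the two sides come out to $\delta^3/3$ and $\delta^3/2$:

```python
import numpy as np

# 1-D numeric sanity check of
#   int_0^delta |phi|^p  <=  (delta^p / p) * int_0^delta |phi'|^p
# for the test function phi(t) = t (so phi(0) = 0), with p = 2.
delta, p = 0.3, 2
t = np.linspace(0.0, delta, 10001)
phi = t.copy()               # phi(t) = t
dphi = np.gradient(phi, t)   # numerical derivative, equals 1 everywhere

dt = t[1] - t[0]
lhs = float(np.sum(np.abs(phi) ** p) * dt)                      # ~ delta^3 / 3
rhs = (delta ** p / p) * float(np.sum(np.abs(dphi) ** p) * dt)  # ~ delta^3 / 2

assert lhs <= rhs
print(lhs, rhs)
```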
Now let $\Omega\subset\mathbb{R}^N$ be a bounded regular domain.
So, we can think of the boundary of $\Omega$ as covered by a finite union of sets of the above form; therefore, there is a constant $C>0$ such that if $\Omega_\delta=\{x\in \Omega:\ \operatorname{dist}(x,\partial\Omega)<\delta\}$ then $$\int_{\Omega_\delta} |\varphi(x)|^p\le C\delta^p\int_{\Omega_\delta}|\nabla \varphi(x)|^p.$$
By density, this is also true for functions in $W_0^{1,p}(\Omega)$. The last inequality implies that $$\lim_{\delta\to 0}\frac{1}{\delta^p}\int_{\Omega_\delta}|u|^p=0, \forall\ u\in W_0^{1,p}(\Omega). \tag{1}$$
My question is: can we conclude $(1)$ for function in $W_0^{1,p}(\Omega)$, without regularity on the boundary of $\Omega$. I mean, is it true for any open set $\Omega$?
|
Ex.9.1 Q16 Some Applications of Trigonometry Solution - NCERT Maths Class 10 Question
The angles of elevation of the top of a tower from two points at a distance of \(4\,\rm{m}\) and \(9\,\rm{m}\) from the base of the tower and in the same straight line with it are complementary. Prove that the height of the tower is \(6\,\rm{m.}\)
Text Solution What is Known?
The angles of elevation of the top of a tower from two points at distances of \(4\,\rm{m}\) and \(9\,\rm{m}\) from the base of the tower are complementary.
What is Unknown?
To prove height of the tower is \(6\,\rm{m.}\)
Reasoning:
Let the height of the tower be \(CD\). \(B\) is a point \(4\rm{m}\) away from the base \(C\) of the tower, and \(A\) is a point \(5\rm{m}\) away from \(B\) in the same straight line (hence \(9\rm{m}\) from the base). The angles of elevation of the top \(D\) of the tower from the points \(B\) and \(A\) are complementary.
Since the angles are complementary, if one angle is \(\theta \), the other is (\(90^\circ \) − \(\theta \)). The trigonometric ratio involving \(CD, \,BC,\, AC\) and the angles is \(\tan\theta \).
Using \(\tan\theta \) and \(\tan(90^\circ - \theta) = \cot\theta \), the two ratios are equated to find the height of the tower.
Steps:
In \(\Delta BCD\),
\[\begin{align}& \text{tan}\theta \,\text{= }\frac{CD}{BC} \\ & \text{tan}\theta \,\text{= }\frac{CD}{4}\qquad (1) \\ \end{align}\]
Here,
\[\begin{align}AC&=AB+BC \\ & =5+4 \\ & =9 \end{align}\]
In \(\Delta ACD\),
\[\begin{align}{\rm{tan}}\left( {90 - \theta } \right)\,&= \frac{{CD}}{{AC}}\\\cot \theta &= \frac{{CD}}{9}\,\,\,\,\,\,\,\,\qquad\left[ {\,{\rm{tan}}\left( {90 - \theta } \right)\, = \,\cot \theta } \right]\\\frac{1}{{{\rm{tan}}\theta }}\, &= \frac{{CD}}{9}\,\,\,\,\,\,\,\qquad\left[ {\cot \theta = \frac{1}{{{\rm{tan}}\theta }}} \right]\\{\rm{tan}}\theta &= \frac{9}{{CD}}\qquad (2)\end{align}\]
From equation (1) and (2)
\[\begin{align}\frac{{CD}}{4} &= \frac{9}{{CD}}\\C{D^2} &= 36\\CD &= \pm 6\end{align}\]
Since, Height cannot be negative
Therefore, height of the tower is \(6\,\rm{m.}\)
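A quick numerical check of the result (the angle values here are computed for illustration, not given in the problem):

```python
import math

# Check: with distances a = 4 m and b = 9 m from the base and complementary
# elevation angles, the tower height is h = sqrt(a*b) = 6 m.
a, b = 4.0, 9.0
h = math.sqrt(a * b)

theta = math.atan(h / a)   # elevation angle from the nearer point B
phi   = math.atan(h / b)   # elevation angle from the farther point A

print(h)   # 6.0
assert abs(theta + phi - math.pi / 2) < 1e-12   # the two angles are complementary
```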
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
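To make that concrete numerically — with my own choice of fields $X = \partial/\partial x$ and $Y = x\,\partial/\partial y$ on $\Bbb R^2$, so $[X, Y] = \partial/\partial y$ and both flows have closed forms:

```python
import math

# Flow commutator illustration: flow sqrt(t) along X, then Y, then back
# along X, then back along Y; the net displacement is t*[X, Y].
def flow_X(p, s):    # exact time-s flow of X = d/dx: (x, y) -> (x + s, y)
    x, y = p
    return (x + s, y)

def flow_Y(p, s):    # exact time-s flow of Y = x*d/dy: (x, y) -> (x, y + x*s)
    x, y = p
    return (x, y + x * s)

t = 0.01
r = math.sqrt(t)
p = (1.0, 2.0)

q = flow_Y(flow_X(flow_Y(flow_X(p, r), r), -r), -r)

dx, dy = q[0] - p[0], q[1] - p[1]
print(dx, dy)   # ~ (0, 0.01): displacement is t along [X, Y] = d/dy
```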
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking caring of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $orb(u) \neq orb(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $orb(u) \neq orb(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate. Let $X_{i},....,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}([\frac{(\frac{X_{1}+...+X_{n}}{n})^2}{2}] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually was too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2 cycle, 3 cycle, and 4 cycle, respectively); $d=8$ is Sylow's theorem; $d=12$, take $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
|
There is no need for an epsilon transition, it is a waste of space.
For a regular expression $E$, with resulting automaton $A$, must respect these properties of the transition function, $\delta$:
$A$ has exactly one initial state $q_0$, which is not
accessible$^{*}$ from any state. That is,$$ \delta(q, a) \neq q_0 \quad \forall q \in A, \forall a \in \Sigma$$
$A$ has exactly one final state $q_f$, which is not
co-accessible$^{**}$ to any state. That is,$$ \delta(q_f, a) = \emptyset \quad \forall a\in \Sigma$$
These two points are important. The initial state has no transitions into it, and the final state has no transitions out of it. This means we have no "merge conflicts", as it were.
Let $F$ be the set of all states co-accessible to the final state of $N(s)$, $s_f$.
Let $T$ be the set of all states accessible from the initial state of $N(t)$, $t_0$.
We then have the two states $s_f$ and $t_0$ and their transitions like so:
$$ \begin{align}F \rightarrow & s_f \rightarrow \emptyset \\\emptyset \rightarrow &t_0 \rightarrow T \\\end{align}$$
You can see how merging the two creates no conflicts or discrepancies:
$$ \begin{align}F \cup \emptyset \rightarrow & s_ft_0 \rightarrow \emptyset \cup T \\F \rightarrow & s_ft_0 \rightarrow T \\\end{align}$$
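A minimal sketch of this merge (the dict-based automaton encoding below is mine, not from any particular library). Because $s_f$ has no outgoing transitions and $t_0$ no incoming ones, identifying them needs no epsilon edge:

```python
# Concatenate two automata N(s), N(t) that satisfy the two properties above,
# by merging the final state of N(s) with the initial state of N(t).
def concatenate(ns, nt):
    """ns, nt: {'delta': {(state, symbol): set_of_states}, 'q0': ..., 'qf': ...}
    with disjoint state names, each satisfying the two properties above."""
    merged = ns["qf"]          # s_f and t_0 become a single state
    delta = dict(ns["delta"])  # s_f has no outgoing keys here, so no clash
    for (q, a), targets in nt["delta"].items():
        src = merged if q == nt["q0"] else q
        delta[(src, a)] = {merged if s == nt["q0"] else s for s in targets}
    return {"delta": delta, "q0": ns["q0"], "qf": nt["qf"]}

# N(a): q0 --a--> qf     N(b): p0 --b--> pf
na = {"delta": {("q0", "a"): {"qf"}}, "q0": "q0", "qf": "qf"}
nb = {"delta": {("p0", "b"): {"pf"}}, "q0": "p0", "qf": "pf"}
nab = concatenate(na, nb)   # recognizes "ab"
print(nab["delta"])   # {('q0', 'a'): {'qf'}, ('qf', 'b'): {'pf'}}
```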
$^{*}$ A state $q$ of $A$ is
accessible from a state $p$ if there is a computation in $A$ whose source is $p$ and whose destination is $q$. A state $q$ is accessible if it is accessible from an initial state.
$^{**}$ A state $p$ of $A$ is
co-accessible to a state $q$ if there is a computation in $A$ whose source is $p$ and whose destination is $q$. A state $p$ is co-accessible if it is co-accessible to a final state.
|
Gödel has proved the existence of undecidable propositions for any system of recursive axioms capable of formalizing arithmetic. But do we know the logical causes of this state of undecidability? In other words, what are the common or recurrent characteristics of this type of propositions, if any?
This answer is a bit technical, but I think the OP will find it interesting.
Let me first recall a bit about the
history of the incompleteness theorem (IT). How IT is usually stated is:
If PA (or any recursively axiomatizable theory extending PA) is consistent, then "PA is consistent" is undecidable in PA.
However, this is
not what Goedel originally proved! Goedel's proof required an additional assumption: that PA is reasonably correct (specifically, that PA is $\omega$-consistent). This is of course an unfortunate hypothesis; besides making the theorem weaker, it also brings in a hopefully-unnecessary bit of Platonism. (OK, let's say you're fine with Platonism; why should you care about theories which are consistent but false? See below for an answer to this.)
Specifically, Goedel could easily show that "PA doesn't prove Con(PA)," but showing "PA doesn't
disprove Con(PA)" required the additional assumption. So strictly speaking, Goedel's original argument certainly contained an unprovability theorem, but arguably fell short of a full undecidability (i.e. unprovability and undisprovability) theorem.
Goedel left it as an open question whether this assumption could be done away with. This was resolved, leading to the usual statement of IT, by Rosser, who showed how a technical trick could improve Goedel's argument. So that lifted Goedel's proof to a genuine undecidability theorem.
Rosser's improvement didn't just lead to a "more formalist" version of IT; it also had real consequences, relevant to your question. Specifically, it shows:
Unprovability is incomputable: there is no algorithm which can determine if a given sentence in the language of arithmetic is unprovable from PA.
Why not? Well, suppose A were such an algorithm. Then we could construct a consistent complete recursively axiomatizable extension of PA (contradicting Rosser's version of IT!) as follows:
Enumerate the sentences of arithmetic in a reasonable way as P_1, P_2, P_3, ...
Define Q_i inductively as follows:
Q_1="Not P_1" if P_1 is unprovable in PA, and Q_1="P_1" otherwise.
Having defined Q_i, we let Q_{i+1}="Not P_{i+1}" if the sentence "(Q_1 and Q_2 and ... and Q_i) implies P_{i+1}" is unprovable in PA, and Q_{i+1}="P_{i+1}" otherwise.
Assuming the existence of A, the sequence Q_i is computable. Now consider the theory T=PA+{Q_1, Q_2, Q_3, ...}; this theory is recursively axiomatized, extends PA, and (by an easy induction argument) is consistent; whoops!
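For concreteness, the greedy Q_i construction can be sketched as code — with the caveat that the parameter `provable` stands for exactly the algorithm A that Rosser's theorem rules out, so this is a thought experiment; the stub at the bottom is a toy stand-in just to make the control flow runnable:

```python
# Sketch of the Q_i construction above. `provable(assumptions, p)` plays the
# role of the hypothetical decision algorithm A for PA-provability, which
# cannot actually exist.
def complete_extension(sentences, provable):
    """Greedily decide each sentence: Q_i = 'not P_i' exactly when
    (Q_1 and ... and Q_{i-1}) -> P_i is unprovable."""
    decided = []
    for p in sentences:
        if provable(decided, p):
            decided.append(p)
        else:
            decided.append("not " + p)
    return decided

# Toy stand-in oracle: a "theory" proving exactly what is already assumed.
toy_provable = lambda assumptions, p: p in assumptions
print(complete_extension(["P1", "P2"], toy_provable))   # ['not P1', 'not P2']
```

Note how the toy run negates everything — at no point does truth enter, only (un)provability, which is the point made below.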
Now note that T is
probably wrong about many things! At no point in the construction of T did we refer to the truth of sentences of arithmetic, merely their (un)provability. So this argument really needed the full strength of Rosser's version of IT; yet the conclusion, that PA-provability is incomputable, is of interest even if we adopt a fully Platonistic viewpoint and have no a priori interest in incorrect theories of arithmetic.
Now on to the meat of the question:
some reasons for undecidability.
The point above - that unprovability is incomputable - can be modified to "undecidability is incomputable" without too much effort. Intuitively, this says that there are many reasons for undecidability (or unprovability), and that we will never have a full understanding of what drives the phenomenon. Interestingly, this clashes with the general fact - mentioned by jobermark above - that almost all
currently known examples of undecidability come from self-reference issues.
There are instances of unprovability which do not come from paradoxes, however, and for which there is reasonable philosophical evidence for undisprovability; let me give an example of one of these, which I learned from Asaf Karagila. This is an instance of undecidability, not from PA, but rather from ZFC ( = standard set theory; this is a first-order theory, despite being about sets, and Goedel's theorems apply to it). In particular, the relevant
language is different - rather than working in the language ${+, \times, <, 0, 1}$ (or similar) of arithmetic, we're working in the language ${\in}$ of set theory.
An
inaccessible cardinal is a particularly large infinite cardinal. Specifically, it is a cardinal which is bigger than the powerset of any cardinal below it, and also satisfies a more technical "regularity" condition. Intuitively, inaccessible cardinals cannot be "built out of smaller cardinals".
Let P be the sentence "There is an inaccessible cardinal" (it is not hard, but somewhat tedious, to express this in the language of set theory), and let's think about P in the set theory ZFC. There are a number of philosophical arguments for the consistency of ZFC together with P; see e.g. Penelope Maddy's article "Believing the axioms". So let's for the moment take Con(ZFC+P) for granted, that is, that ZFC doesn't
disprove P. How can we show that ZFC doesn't prove P either?
The usual proof of this is via Goedel's theorem: show that ZFC+P proves Con(ZFC). However, there is a proof avoiding this! Suppose ZFC did prove P. Then let V be a model of ZFC. Since V is a model of ZFC, and ZFC proves P, there is some k in V that V thinks is (i) an inaccessible cardinal, and (ii) the
least inaccessible cardinal (since V satisfies Regularity, for any definable property there is a least cardinal with that property if any cardinal with that property exists in the first place).
But now consider V_k, the kth level of the cumulative hierarchy of V. It's not hard to check that V_k satisfies the ZFC axioms. But V_k can't have an inaccessible cardinal! This is because any inaccessible cardinal m in V_k would also be an inaccessible cardinal in V (this isn't obvious, and takes a short argument; in particular, this
fails if we replace "inaccessible" with a more complicated large cardinal notion!), and would be less than k (since every cardinal in V_k is less than k); but k was by definition the least inaccessible cardinal in V. So this is a contradiction.
Note that at no point did we invoke IT! This is an example of an unprovability phenomenon which arises for "purely mathematical" (that is, non-metamathematical) reasons. Of course, the distinction is a very subjective one, and reasonable people much smarter than me may very well disagree with the claim I made in the previous sentence; but I still think it's interesting.
Incidentally, it's also worth pointing out that the incompleteness theorem
itself has a proof which doesn't really hinge on the paradoxes: namely, Kripke gave a proof which in certain respects is similar to the argument about inaccessibles I outlined above. See this question of mine at mathoverflow for some more proofs of incompleteness.
We know some, but not all of the potential causes of undecidability, and the ones we do know about are, at root, the same reasons we have paradoxes in ordinary thought. The proofs we have of undecidability generally rely upon a specific known paradox and just formalize it.
For instance Goedel's proof is an elaborate inescapable version of the Cretan paradox. It produces a proposition that says: Proposition #[big number] is not on the list of things we can ever prove, and by the way
this is proposition #[big number]. In other words: Cretan-numbered propositions are never provably true, says the Cretan-numbered proposition. But unlike the original Cretan, it only says it is not provable, not that it is lying. So we can avoid absolute paradox by accepting the fact there are things that are true but the system can't prove them.
A simpler version relies upon the 'Berry paradox' -- 'The smallest positive integer not definable in fewer than twelve words' contains fewer than twelve words and therefore does not define any positive integer. If we count off the numbers of provable propositions systematically, we can, using the system of enumeration, always find the equivalent of the magic number twelve in the original paradox. We can prove that any statement provable about a given number must be 'at least this complex to state', yet state that fact in a statement less complex than that stated minimum.
But we can only
see the paradoxes that we see. If we can force a single paradox into so complex a system, we cannot know that there are not other paradoxes lurking around the corner. We in fact know that we cannot know that. The proof itself is not a single deduction, but an algorithm one can apply to any such system. So elaborating the system to resolve a known paradox always still leaves it subject to the same proof of incompleteness.
We can find new undecidable aspects of any formal system by formalizing the principles that the paradoxes that we know derive from:
1. Self-reference and negation have limited compatibility (Russell's paradox and all its relatives).
2. Infinity and ordering have limited meaningful application with respect to one another (the unexpected independence of the Axiom of Choice and the Continuum Hypothesis).
3. Continuity and identifiability have limited compatibility (sorites issues, the quirky limitations of infinitesimals, and the Banach-Tarski paradox).
4. The ability to know which generalizations constitute definitions and which do not is implicitly limited (important Goedel-Bernays-von Neumann collections that cannot be sets -- e.g. the 'group' of all groups under products, which cannot be a group because its base set would contain sets of every size, but simply is a group from any normal perspective).
Kant had already pointed at a single case of each of these limitations with his 'antinomies' -- four problems he felt he had demonstrated were meaningful issues beyond the ability for humans to resolve:
Limitation 1 is the essence of his answer to the problem of free will, 2 to the problem of the end of time, 3 to the problem of atoms, and 4 to the problem of the necessary being.
Formal systems don't really remove these limitations: we have produced formal systems that evade or resolve particular instances of them, or we have simply come to accept them. But that does not mean these are the only sources of potential conflict. We know that they are not.
Given that your main question has been adequately tackled in the comments and answers, I thought perhaps another perspective might be illuminating.
Definition:A statement is undecidable if there is neither a proof that it is true, nor a proof that it is false.
Proposition: Yesterday, I had a cup of coffee in the morning
Now, is there a proof of this fairly innocuous proposition? There is nothing complicated or metaphysical about it; it's easy to understand and requires no complicated or even simple mathematics, and is within the everyday experience of just about everybody; as for its truth...I can vouch for its truth.
Ergo: true, undecidable propositions are easy to find; you too can do this at home, without any danger to one's health or those of your guests or friends; though they may not be impressed.
I would suggest that most propositions are like this, true or false but undecidably so. Why then should we suppose that formal systems are any different?
The question can be turned on its head, can we find a formal system such that all propositions about it can be shown to be decidable? Yes, there are; but they're so simple that they're not worth bothering about.
What is important about the exercise that Godel went through is the invention of a formal system, and its attendant apparatus; for example, there's a proof of Hilbert's Nullstellensatz using tools developed from Godel's ideas.
Undecidability then is a pervasive phenomenon about the real world, which we should not be at all surprised to find reflected in formal systems that look to explain or reflect it - like the natural numbers or geometry; the surprise, I suppose, is that it can be shown in such a simple system as arithmetic; but then, as questions like Fermat's theorem have shown, or the still outstanding Riemann hypothesis, such questions can be very difficult; so, not really that surprising.
|
This is a very nice problem: I struggled with it for some time. I’ll go part way, and you should be able to finish it off.
Certainly $b$ is nonzero, ’cause it’s the constant term of an irreducible quadratic. Now, since the roots of $x^2+ax+b$ are among the roots of $f$, we have $(x^2+ax+b)\big|f(x)$, so we may write $f(x)=(x^2+ax+b)(x^2+a'x+b')$, a real factorization of $f$. From the form of $f$, we get $a+a'=0$ and $bb'=1$, so that we may rewrite:$$x^4+x+1=(x^2+ax+b)(x^2-ax+1/b)\,.$$Expand this out and compare coefficients to get a pair of equations in $a$ and $b$, solve the easy one for $a$ and make a substitution, and get a sextic equation for $b$, coefficients happening to come from the set $\{0,\pm1\}$.
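To check the algebra by machine, here is a small sketch (assuming SymPy is available; the variable names are mine) that expands the product, reads off the two coefficient equations, and eliminates $a$ to produce the sextic for $b$:

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

# Expand (x^2 + a x + b)(x^2 - a x + 1/b), subtract x^4 + x + 1,
# and multiply by b to clear the denominator.
expr = sp.expand(((x**2 + a*x + b) * (x**2 - a*x + 1/b) - (x**4 + x + 1)) * b)
poly = sp.Poly(expr, x)

eq_x2 = poly.coeff_monomial(x**2)   # b^2 + 1 - a^2*b  ->  a^2 = b + 1/b
eq_x1 = poly.coeff_monomial(x)      # a - a*b^2 - b    ->  a*(1/b - b) = 1

# Eliminate a: square the second relation and substitute a^2 = b + 1/b,
# then clear denominators with b^3.
sextic = sp.expand(((b + 1/b) * (1/b - b)**2 - 1) * b**3)
print(sextic)   # b**6 - b**4 - b**3 - b**2 + 1
```

The printed polynomial is the promised sextic with coefficients in $\{0,\pm1\}$.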
I think you ought to be able to finish it off now, but I’ll give more hints if you can’t.
EDIT, a week later: You have asked for a method for finding the polynomial for $b+1/b$, when $b^6-b^4-b^3-b^2+1=0$. We also know that $b^3-b-1-1/b+1/b^3=0$, so let’s write $\beta=b+1/b$ and first compute $\beta^3$:\begin{align}\beta^3&=b^3+3b+3/b+1/b^3\\&=4b+1+4/b\qquad\text{(by subtracting zero)}\\&=4\beta+1\,,\end{align}so that $\beta^3-4\beta-1=0$, which is $\Bbb Q$-irreducible by the rational root test.
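The same minimal polynomial falls out mechanically as a resultant (again a SymPy sketch, assuming SymPy is available): eliminating $b$ between the sextic and $b^2-\beta b+1=0$ (which encodes $\beta=b+1/b$) yields $(\beta^3-4\beta-1)^2$, the square appearing because $b$ and $1/b$ give the same $\beta$.

```python
import sympy as sp

b, t = sp.symbols('b t')

sextic = b**6 - b**4 - b**3 - b**2 + 1
# beta = b + 1/b  <=>  b^2 - beta*b + 1 = 0; eliminate b via the resultant.
res = sp.resultant(sextic, b**2 - t*b + 1, b)
print(sp.factor(res))   # (t**3 - 4*t - 1)**2
```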
FURTHER EDIT, a few days later yet: You asked for irreducibility (over $\Bbb Q$) of $x^6-x^4-x^3-x^2+1$. Here’s my argument, which I’m still a little worried about.
Over $\Bbb F_2$, we have $x^6+x^4+x^3+x^2+1=(x^2+x+1)(x^4+x^3+x^2+x+1)$, both factors being irreducible. (The second factor has for its roots the primitive fifth roots of unity, and they show up only in the field with $16$ elements.)
Over $\Bbb F_3$, we have $x^6-x^4-x^3-x^2+1=(x^3+x^2+x-1)(x^3-x^2-x-1)$, and both factors are $\Bbb F_3$-irreducible ’cause they don’t have roots in $\Bbb F_3$.
Now, what kind of factorization can $h=x^6-x^4-x^3-x^2+1$ have over $\Bbb Z$? No linear factors, we know, so either (1) three quadratics, (2) two cubics, or (3) a quadratic and a quartic. But if there were a quadratic irreducible factor of $h$, there would be a quadratic factor over $\Bbb F_3$, and there isn’t. So possibilities (1) and (3) are excluded. In possibility (2), take one of those $\Bbb Z$-irreducible cubic factors, call it $g$, and look at it in characteristic two. There, $g$ still divides $h$, so can’t remain irreducible, and therefore has a root modulo $2$, but $h$ itself doesn’t have such a root, so (2) is also excluded. (There must be a better argument!) Conclusion? It follows that $h$ has no nontrivial $\Bbb Z$-factorization.
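These factorization patterns are easy to confirm by machine (a SymPy sketch, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
h = x**6 - x**4 - x**3 - x**2 + 1

# Degrees of the irreducible factors modulo 2 and modulo 3.
deg2 = sorted(sp.Poly(f, x).degree() for f, _ in sp.factor_list(h, modulus=2)[1])
deg3 = sorted(sp.Poly(f, x).degree() for f, _ in sp.factor_list(h, modulus=3)[1])
print(deg2, deg3)   # [2, 4] [3, 3]

# And h is irreducible over Q, as argued above.
print(sp.Poly(h, x, domain='QQ').is_irreducible)   # True
```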
|
Let $G$ be a finite group, $L(G)$ its subgroup lattice and $\mu$ the Möbius function.
Consider the Euler totient of $G$ defined as follows: $$ \varphi(G) = \sum_{H \le G}\mu(H,G) |H| $$ Let $X=\{M_1, \dots, M_n \}$ be the set of maximal subgroups of $G$. By applying the Crosscut Theorem with $X$ (see this comment of Richard Stanley) and next the inclusion–exclusion principle, we get that:$$ \varphi(G) = |G \setminus \bigcup_{i=1}^n M_i| $$ In other words, $\varphi(G)$ is the number of elements $g \in G$ such that $\langle g \rangle = G$. It follows that $$ \varphi(G) \neq 0 \Leftrightarrow G \text{ cyclic}$$
Note that $\varphi(\mathbb{Z}/n) = \varphi(n)$ the usual Euler's totient function.
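In the cyclic case this reduces to number theory: the subgroup lattice of $\mathbb{Z}/n$ is the divisor lattice of $n$, so $\mu(H,G)$ is the number-theoretic $\mu(n/d)$ for the subgroup of order $d$. A self-contained Python sketch comparing the Möbius sum with a direct count of generators:

```python
from math import gcd

def moebius(n):
    # number-theoretic Moebius function, by trial factorization
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def phi_lattice(n):
    # sum over the subgroups of Z/n: one of order d for each divisor d of n
    return sum(moebius(n // d) * d for d in range(1, n + 1) if n % d == 0)

def phi_generators(n):
    # number of g with <g> = Z/n, i.e. gcd(g, n) == 1
    return sum(1 for g in range(1, n + 1) if gcd(g, n) == 1)

print([phi_lattice(n) for n in (1, 6, 12, 30)])   # [1, 2, 4, 8]
print(all(phi_lattice(n) == phi_generators(n) for n in range(1, 200)))   # True
```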
Now, consider the dual Euler totient of $G$ defined as follows: $$ \hat{\varphi}(G) = \sum_{H \le G}\mu(1,H) |G:H| $$
Question: $ \hat{\varphi}(G) \neq 0 \Leftrightarrow G$ has a faithful irreducible complex representation? Remark: We will see below that $(\Rightarrow)$ is true. So the question reduces to $(\Leftarrow)$. It is true for the finite simple groups $G$ of order $<10000$:
$$ \begin{array}{c|c|c} G & |G| & \hat{\varphi}(G) \newline \hline A_5 & 60 & 8 \newline \hline PSL(2,7) & 168 & 228 \newline \hline A_6 & 360 & 8748 \newline \hline PSL(2,8) & 504 & 19056 \newline \hline PSL(2,11) & 660 & 24932 \newline \hline PSL(2,13) & 1092 & 105684 \newline \hline PSL(2,17) & 2448 & 389496 \newline \hline A_7 & 2520 & 188136 \newline \hline PSL(2,19) & 3420 & 1148028 \newline \hline PSL(2,16) & 4080 & 1935584 \newline \hline PSL(3,3) & 5616 & 395496 \newline \hline PSU(3,3) & 6048 & 507168 \newline \hline PSL(2,23) & 6072 & 2234784 \newline \hline PSL(2,25) & 7800 & 5391800 \newline \hline M_{11} & 7920 & 1044192 \newline \hline PSL(2,27) & 9828 & 7778916 \newline \end{array}$$
Any idea about the meaning of these numbers?
Proof of $(\Rightarrow)$
Consider the relative version $$ \hat{\varphi}(H,G) = \sum_{K \in [H,G]}\mu(H,K) |G:K|.$$
Warning: $-\hat{\varphi}(H,G)$ is not the Möbius invariant of the bounded coset poset $\hat{C}(H,G)$ because $$-\mu(\hat{C}(H,G)) = \sum_{K \in [H,G]}\mu(K,G) |G:K| $$ and $\mu(K,G) \neq \mu(H,K)$ in general.
Now if $[H,G]$ is boolean of rank $n$ then $\mu(K,G) = (-1)^n \mu(H,K)$; moreover (independently) by Theorem 3.21 of this paper, if $ \hat{\varphi}(H,G) \neq 0$ then there is an irreducible complex representation $V$ of $G$ such that $G_{(V^H)} = H$. Next, using a dual reformulation of the Crosscut Theorem with $X$ the set of atoms, we can extend the proof of Theorem 3.21 to any interval $[H,G]$ (i.e. without assuming it to be boolean). Finally by taking $H = 1$, we get that for any finite group $G$, if $\hat{\varphi}(G) \neq 0$ then there is an irreducible complex representation $V$ such that $G_{(V)} = 1$, which means that $V$ is faithful.
|
You'll need to understand the
sampling theorem. In short, each signal has what we call a spectrum¹, which is the Fourier transform of the signal as it comes in the time domain (if it is a time signal) or the spatial domain (if it is a picture). Since the Fourier transform is bijective, a signal and its transform are equivalent; in fact, one can often interpret the Fourier transform as a change of basis. We call that "conversion to frequency domain", since the Fourier transform's values at low positions describe the things that change slowly in the original (time or spatial) domain signal, whereas high-frequency content is represented by Fourier transform values at high positions.
Generally, such spectra can have a certain
support; the support is the minimal interval outside of which the spectrum is 0.
If you now use an observing system whose ability to reproduce frequencies is limited to an interval that is smaller than said support (which often is infinite, by the way, and always is infinite for signals that have finite extension in time or space), you can not represent the original signal with that system.
In this case, your picture has a certain resolution – which is, in the end, the fact that you evaluate the value of your function at discrete points in a fixed, non-infinitesimal spacing. The inverse of that spacing is the (spatial) sampling rate.
Thus, your picture cannot represent the original signal – it's simply mathematically impossible that the mapping of underlying function to pixels is truly equivalent to the original function, since we know that in this case, the total range of frequencies representable by your evaluation at discrete points ("sampling") is half the sampling rate, and thus, something
must go wrong with the part of your signal's spectrum that is above half the sampling rate.
What happens is, in fact, that the spectrum gets aliased – every spectral component at a frequency $f_o \ge \frac{f_\text{sample}}{2}$ gets "shifted" down by $n\cdot f_\text{sample},\, n\in \mathbb Z$, so that $|f_o-nf_\text{sample}| < \frac{f_\text{sample}}{2}$. In effect, that leads to "structure" where it feels like there shouldn't be any.
Take the "large" structures from your picture that I've painted green:
It certainly looks like there is low-frequency content here - but in reality, it's just the high-frequency content at frequencies $>\frac{f_\text{sample}}2$ that got aliased to low frequencies, since it was close to an integer multiple of the sampling rate.
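The aliasing rule can be demonstrated numerically (a NumPy sketch; the 100 Hz sampling rate and 95 Hz tone are arbitrary choices of mine): a tone above half the sampling rate shows up at its alias frequency, here $|95-100| = 5$ Hz.

```python
import numpy as np

fs = 100.0                     # sampling rate (Hz), arbitrary for the demo
f0 = 95.0                      # tone above fs/2 = 50 Hz
n = np.arange(2000)
x = np.cos(2 * np.pi * f0 * n / fs)

# Locate the peak of the magnitude spectrum of the sampled tone.
spectrum = np.abs(np.fft.rfft(x))
f_peak = np.fft.rfftfreq(n.size, d=1.0 / fs)[np.argmax(spectrum)]
print(f_peak)                  # 5.0, not 95.0
```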
So,
yes, you can predict the artifacts that happen to a 2D signal when being sampled by comparing its Fourier transform to the bandwidth offered by the sampling rate.
¹ this might be different from the spectrum as used in linear algebra to describe the Eigen-properties of operators.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Suppose that I have 10 kg of plastic/polymer that is insoluble in water. I then divide the plastic/polymer into two equal parts. Let us name the parts A and B in order to identify them. I then proceed to make beads from each part. I make $n_A$ beads of radius $r_A$ from part A. I also make $n_B$ beads of radius $r_B$ from part B. Here $r_A>r_B$ and consequently $n_B>n_A$. Suppose that I put the beads from part A into 10 liters of water and I do the same for the beads from part B. Which one is more viscous, the "solution" from part A or from part B? Note: here $n_A,n_B$ stand for arbitrary numbers.
Disclaimer: I will be talking about the shear viscosity. I think that most of what I write is also applicable to extensional viscosity, although with different numerical constants, but I am not fully sure so I will limit myself to shear viscosity.
Most theoretical models predict that the shear viscosity of both solutions will
be equal. The reason is that these models describe the viscosity of the suspension $\mu$ as the viscosity of the 'empty' liquid $\mu_0$ $+$ a correction term that depends on the volume concentration of solids, $\phi$.
The most famous one is the Einstein viscosity equation that simply reads: $$\frac{\mu}{\mu_0}=1+2.5\phi $$
This model works for low solid concentrations when inter-particle interactions can be neglected. Later Batchelor showed that including long-range hydrodynamic interactions an additional correction of $+6.2\phi^2$ should be included, but still only a dependence on the volume concentration is there.
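To first order the bead radius never enters, only the volume fraction $\phi$ does; a plain-Python sketch with an illustrative (made-up, dilute) volume fraction:

```python
def relative_viscosity(phi):
    # Einstein's dilute-suspension law plus Batchelor's O(phi^2) correction
    return 1.0 + 2.5 * phi + 6.2 * phi**2

# Both bead batches come from the same 5 kg of polymer in the same 10 L of
# water, so they have the same volume fraction phi; the radius never enters.
phi = 0.05          # an illustrative (dilute) volume fraction
print(relative_viscosity(phi))   # 1.1405 for either batch
```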
A nice summary of more models can be found in this paper: K.D Danov,
J. Colloid Interface Sci., 235, 144–149 (2001). That paper also includes a discussion showing that the factor 2.5 should depend on the mobility of the particle. This would mean that it will be higher for smaller particles, so in your case the solution with part B will be somewhat more viscous.
Another overview of models, including models which have an increasing viscosity with increasing particle size can be found here. So that would point to part A giving the more viscous solution.
So to summarize: to first order (i.e. low concentrations) the two viscosities will be the same. At higher orders you have competing effects of the particle radius that should increase the viscosity and the particle number that should also increase it. Which effect is dominant is probably hard to tell theoretically.
|
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
|
The Annals of Applied Probability, Volume 29, Number 1 (2019), 326-375.
Random switching between vector fields having a common zero
Abstract
Let $E$ be a finite set, $\{F^{i}\}_{i\in E}$ a family of vector fields on $\mathbb{R}^{d}$ leaving positively invariant a compact set $M$ and having a common zero $p\in M$. We consider a piecewise deterministic Markov process $(X,I)$ on $M\times E$ defined by $\dot{X}_{t}=F^{I_{t}}(X_{t})$ where $I$ is a jump process controlled by $X$: ${\mathsf{P}}(I_{t+s}=j|(X_{u},I_{u})_{u\leq t})=a_{ij}(X_{t})s+o(s)$ for $i\neq j$ on $\{I_{t}=i\}$.
We show that the behaviour of $(X,I)$ is mainly determined by the behaviour of the linearized process $(Y,J)$ where $\dot{Y}_{t}=A^{J_{t}}Y_{t}$, $A^{i}$ is the Jacobian matrix of $F^{i}$ at $p$ and $J$ is the jump process with rates $(a_{ij}(p))$. We introduce two quantities $\Lambda^{-}$ and $\Lambda^{+}$, respectively, defined as the
minimal (resp., maximal) growth rate of $\|Y_{t}\|$, where the minimum (resp., maximum) is taken over all the ergodic measures of the angular process $(\Theta,J)$ with $\Theta_{t}=\frac{Y_{t}}{\|Y_{t}\|}$. It is shown that $\Lambda^{+}$ coincides with the top Lyapunov exponent (in the sense of ergodic theory) of $(Y,J)$ and that under general assumptions $\Lambda^{-}=\Lambda^{+}$. We then prove that, under certain irreducibility conditions, $X_{t}\rightarrow p$ exponentially fast when $\Lambda^{+}<0$ and $(X,I)$ converges in distribution at an exponential rate toward a (unique) invariant measure supported by $M\setminus \{p\}\times E$ when $\Lambda^{-}>0$. Some applications to certain epidemic models in a fluctuating environment are discussed and illustrate our results.

Article information
Source: Ann. Appl. Probab., Volume 29, Number 1 (2019), 326-375.
Dates: Received: September 2017. Revised: June 2018. First available in Project Euclid: 5 December 2018.
Permanent link to this document: https://projecteuclid.org/euclid.aoap/1544000431
Digital Object Identifier: doi:10.1214/18-AAP1418
Mathematical Reviews number (MathSciNet): MR3910006
Zentralblatt MATH identifier: 07039127
Subjects: Primary: 60J25 (Continuous-time Markov processes on general state spaces); 34A37 (Differential equations with impulses); 37H15 (Multiplicative ergodic theory, Lyapunov exponents); 37A50 (Relations with probability theory and stochastic processes); 92D30 (Epidemiology)
Citation
Benaïm, Michel; Strickler, Edouard. Random switching between vector fields having a common zero. Ann. Appl. Probab. 29 (2019), no. 1, 326--375. doi:10.1214/18-AAP1418. https://projecteuclid.org/euclid.aoap/1544000431
|
Answer
Please see the work below.
Work Step by Step
We know that $v=\sqrt{\frac{\gamma P}{\rho}}$ We plug in the known values to obtain: $v=\sqrt{\frac{(1.29)(48000)}{0.35}}$ $v=420\frac{m}{s}$
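The plug-in arithmetic is quickly checked (plain Python); the exact value is about $420.6$ m/s, which agrees with the quoted $420$ m/s:

```python
from math import sqrt

gamma = 1.29      # adiabatic index (given)
P = 48000.0       # pressure in Pa (given)
rho = 0.35        # density in kg/m^3 (given)

v = sqrt(gamma * P / rho)
print(v)          # ~420.6 m/s
```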
|
I am stuck on the following exercise.
It is given that the series $\sum_{n=1}^\infty a_n$ is convergent, but not absolutely convergent and $\sum_{n=1}^\infty a_n=0$. Denote by $s_k$ the partial sum $\sum_{n=1}^k a_n$, k=1,2,... Then
$s_k=0$ for infinitely many k
$s_k>0$ for infinitely many k, and $s_k<0$ for infinitely many k
it is possible that $s_k>0$ for all k
it is possible that $s_k>0$ for all but a finite number of values of k
Here $\sum_{n=1}^\infty a_n=\lim_{k\to \infty}s_k=0$, hence it is possible that its partial sums satisfy $s_k=0$ for infinitely many k; hence the first option is correct.
Here the series is convergent but not absolutely convergent, therefore the series has negative terms, hence $s_k$ can be greater than and less than $0$ infinitely many times; hence option 2 is correct.
I have the example $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}$: this series is convergent but not absolutely convergent, and $s_k>0$ for all k for this series, hence option 3 is correct. I am not getting option 4.
Please correct me if I am wrong.
Thank you.
|
One can use binomial expansion in combination with the complex extension of trig functions:
$$\cos(xy)=\frac{e^{xyi}+e^{-xyi}}{2}=\frac{a^{xy}+a^{-xy}}2$$
Using $a=e^i$ for simplicity.
We also have:
$$(a+a^{-1})^n=\sum_{i=0}^{\infty}\frac{n!a^{n-i}a^{-i}}{i!(n-i)!}=\sum_{i=0}^{\infty}\frac{n!a^{n-2i}}{i!(n-i)!}$$
Which is obtained by binomial expansion.
We also have:
$$(a+a^{-1})^n=(a^{-1}+a)^n=\sum_{j=0}^{\infty}\frac{n!a^{2j-n}}{j!(n-j)!}$$
And, combining the two, we get:
$$(a+a^{-1})^n=\frac{\sum_{i=0}^{\infty}\frac{n!a^{n-2i}}{i!(n-i)!}+\sum_{j=0}^{\infty}\frac{n!a^{2j-n}}{j!(n-j)!}}2=\frac12\sum_{i=0}^{\infty}\frac{n!}{i!(n-i)!}(a^{n-2i}+a^{-(n-2i)})$$
If we have $\cos(n)=\frac{a^n+a^{-n}}2$, then we have
$$(2\cos(n))^k=(a^n+a^{-n})^k=\sum_{i=0}^{\infty}\frac{k!}{i!(k-i)!}\frac{a^{n(k-2i)}+a^{-n(k-2i)}}2$$
Furthermore, the far right of the last equation can be simplified back into the form of cosine:
$$\sum_{i=0}^{\infty}\frac{k!}{i!(k-i)!}\frac{a^{n(k-2i)}+a^{-n(k-2i)}}2=\sum_{i=0}^{\infty}\frac{k!}{i!(k-i)!}(\cos(n(k-2i)))$$
Thus, we can see that $\cos(nk)$ is simply the first of the many terms in the expansion of $(2\cos(n))^k$, and we may rewrite the summation formula as:
$$(2\cos(n))^k=\cos(nk)+\sum_{i=1}^{\infty}\frac{k!}{i!(k-i)!}(\cos(n(k-2i)))$$
And rearranging terms, we get:
$$\cos(nk)=2^k\cos^k(n)-\sum_{i=1}^{\infty}\frac{k!}{i!(k-i)!}(\cos(n(k-2i)))$$
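This rearranged identity is easy to sanity-check numerically for integer $k$ (plain Python; the sum is truncated at $i=k$, since the binomial coefficients vanish beyond it, and the test angle is an arbitrary choice):

```python
from math import cos, comb

def cos_nk(n, k):
    # cos(n*k) via the rearranged identity: 2^k cos^k(n) minus lower-order terms
    return (2 * cos(n))**k - sum(comb(k, i) * cos(n * (k - 2 * i))
                                 for i in range(1, k + 1))

# compare against cos(n*k) directly for several k
for k in range(1, 8):
    assert abs(cos_nk(0.7, k) - cos(0.7 * k)) < 1e-9
print("identity holds")
```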
This becomes explicit formulas for $n=0,1,2,3,\dots$
I note that there is no way by which you may reduce the above formula without the knowledge that $n,k\in\mathbb{Z}$.
Also, it is quite difficult to produce the formula for, say, $\cos(10x)$, because as you proceed to do so, you will notice that it requires knowledge of $\cos(8x),\cos(6x),\cos(4x),\dots$, which you can eventually solve, starting with $\cos(2x)$ (it comes out to be the well known double angle formula), using this to find $\cos(4x)$, using that to find $\cos(6x)$, etc., all the way to $\cos(10x)$.
Notably, this can be easier than Chebyshev Polynomials because it only requires that you know the odd/even formulas less than the one you are trying to solve. (due to $-2i$)
But this is the closest I may give to you for the formula of $\cos(xy)$, $x,y\in\mathbb{R}$.
It is also true for $x,y\in\mathbb{C}$.
Then, use $x=y$ to get $\cos(xx)=\cos(x^2)$
$$\cos(x^2)=2^x\cos^x(x)-\sum_{i=1}^{\infty}\frac{x!}{i!(x-i)!}(\cos(x(x-2i)))$$
It is definitely
not periodic in the $2\pi i$ sense, but it is a formula you can use.
Obviously, I don't recommend using it because it is complicated.
|
Can anyone help me with this exercise, please?
A topological space $X$ is said to be irreducible if $X\neq\emptyset$ and if every pair of non-empty open sets in $X$ intersect, or equivalently, if every non-empty open set is dense in $X$. Show that $\text{Spec}(A)$ is irreducible if and only if the nilradical of $A$ is a prime ideal.
Notation: $A$ is a commutative ring with $1$ (not necessarily $1\ne0$) $\eta= \text{nilradical of $A$ }= \bigcap\limits_{\mathscr{p}\text{ prime}}\mathscr{p}=\{a\in A:\text{$a$ is nilpotent}\}$ $\text{Spec}(A)=\{p\subset A:\text{$p$ prime}\}$, and the topology is such that $V(E)=\{p\subset A\text{ prime}:E\subset p\}$ is a basis for closed sets, for all subsets $E\subset A$ (we can show that the complements of these sets form a basis for open sets)
If the nilradical $\eta=\mathscr{p}$ is prime, then every non-empty closed set $V(E)$ satisfies: "$p\in V(E)\implies V(E)=\text{Spec}(A)$" (since every prime contains $\eta=p$); hence, every non-empty open set contains $p$, so $\text{Spec}(A)$ is irreducible.
The converse is the problem...
A previous exercise showed that there exist minimal prime ideals in every ring $A$.
I assumed that $\eta$ is not a prime ideal, hence there exist at least two distinct minimal prime ideals. So, let $p$ be a minimal prime ideal and $E=\bigcap\{q\subset A:\text{$q$ is prime minimal, $q\ne p$}\}$. If there is a finite number of minimal prime ideals (for example, if $A$ is Noetherian), then the complement of $V(E)$ is contained in $V(p)$ (since if a finite intersection of prime ideals is contained in any ideal $I$, then at least one of these prime ideals is contained in $I$), hence, $\text{Spec}(A)$ is not irreducible.
But this argument seems not to work for general rings...
Any help will be appreciated!
Thanks!
|
Is there an example of an eigenfunction of a linear time invariant (LTI) system that is
not a complex exponential? Justin Romberg's Eigenfunctions of LTI Systems says such eigenfunctions do exist, but I am not able to find one.
All eigenfunctions of an LTI system can be described in terms of complex exponentials, and complex exponentials form a complete basis of the signal space. However, if you have a system that is
degenerate, meaning you have eigensubspaces of dimension >1, then the eigenvectors to the corresponding eigenvalue are all linear combinations of vectors from the subspace. And linear combinations of complex exponentials of different frequencies are not complex exponentials anymore.
Very simple example: The identity operator 1 as an LTI system has the whole signal space as eigensubspace with eigenvalue 1. That implies ALL functions are eigenfunctions.
I thought I had worded my response clearly---apparently not :-). The original question was, "Are there eigensignals besides the complex exponential for an LTI system?". The answer is, if one is given the fact that the system is LTI but nothing else is known, then the only confirmed eigensignal is the complex exponential. In specific cases, the system may have additional eigensignals as well. The example I gave was the ideal LPF with sinc being such an eigensignal. Note that the sinc function is not an eigensignal of an arbitrary LTI system. I gave the LPF and the sinc as an example to point out a non-trivial case---x(t) = y(t) will satisfy a mathematician but not an engineer :->. I am sure one can come up with other specific non-trivial examples that have other signals as eigensignals besides the complex exponential. But these other eigensignals will work for those specific examples only.
Also, cos and sin are not, in general, eigensignals. If cos(wt) is applied and the output is A cos(wt + theta), then this output cannot be expressed as a constant times the input (except when theta is 0 or pi, or A=0), which is the condition needed for a signal to be an eigensignal. There may be conditions under which cos and sin are eigensignals, but they are special cases and not general.
CSR
For any arbitrary LTI system, the complex exponential is, to the best of my knowledge, the only known eigensignal. On the other hand, consider the ideal LPF. The $\operatorname{sinc}$ function: $$\operatorname{sinc}(t) \triangleq \frac{\sin(\pi t)}{\pi t}$$ can easily be seen to be an eigensignal. This points to the existence of LTI systems (such as the ideal LPF) having signals other than complex exponentials as eigensignals ($\frac{\sin(\pi t)}{\pi t}$ in this case).
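As a crude discrete-time illustration of the sinc-through-ideal-LPF claim (a NumPy sketch; the length, the band edges, and the brick-wall DFT filter are all choices of mine, and truncating the sinc introduces only small edge errors):

```python
import numpy as np

N = 4096
n = np.arange(N) - N // 2
x = np.sinc(0.2 * n)                 # band-limited to |f| <= 0.1 cycles/sample

# Ideal low-pass with cutoff 0.25 cycles/sample, applied in the DFT domain.
X = np.fft.fft(x)
f = np.fft.fftfreq(N)
y = np.fft.ifft(np.where(np.abs(f) <= 0.25, X, 0)).real

print(np.max(np.abs(y - x)))         # tiny: the sinc passes through unchanged
```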
Maybe spatially invariant multidimensional objects, like lenses with circular symmetry, qualify as well. This is called the Fourier-Bessel expansion. There is no $t$ for time, but the convolution-frequency-domain relations hold.
|
Find the number of irreducible monic polynomials of degree $2$ over a field with five elements.
Please anyone help me.
Note: this result is given in this answer.
The number of irreducible monic polynomials of degree $n$ (only when $n$ is prime) over the field with $p$ elements is $\frac{p^n-p}{n}$. In your case $p=5$ and $n=2$.
Alternately Gauss gave the following result,
The number of irreducible monic polynomials of degree $n$ over $F_q$ is given by $$N_q(n)=\frac{1}{n}\sum_{d|n}\mu(d)q^{n/d}$$ where $\mu$ is the Mobius function.
For a proof see this.
Hints.
Count the number of monic quadratic polynomials, irreducible or not.
Count the number of products $(x-a)(x-b)$, remembering that $a$ might equal $b$.
Subtract.
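Following the hints: there are $5^2 = 25$ monic quadratics, and $5 + \binom{5}{2} = 15$ distinct products $(x-a)(x-b)$, leaving $10$ irreducible ones. A brute-force cross-check in plain Python (for degree $2$, irreducible over $\mathbb{F}_5$ is equivalent to having no root):

```python
p = 5

# Count monic x^2 + a x + b with no root in F_p.
irreducible = sum(
    1
    for a in range(p)
    for b in range(p)
    if all((x * x + a * x + b) % p != 0 for x in range(p))
)

reducible = p + p * (p - 1) // 2       # (x-a)^2 plus (x-a)(x-b) with a != b
print(irreducible, p * p - reducible)  # 10 10
```

Both counts agree with the formula $\frac{p^2-p}{2} = 10$.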
|
We know that there are Whitehead theorem for homotopy and homology theory.
Is there the Whitehead theorem for cohomology theory for 1-connected CW complexes?
Conceptually, the following two theorems (both due to Whitehead) are Eckmann-Hilton duals.
Theorem. A weak homotopy equivalence between CW complexes is a homotopy equivalence.
Theorem. A homology isomorphism between simple spaces is a weak homotopy equivalence.
They don't look dual, but they are. See J.P. May, "The dual Whitehead Theorems", London Math. Soc. Lecture Note Series Vol. 86 (1983), 46-54.
The point is that the second statement is really about cohomology, and the standard cellular proof of the first statement dualizes word-for-word to a "cocellular" proof of the second. Cocellular constructions are what appear in Postnikov towers, and they can be used more systematically than can be found in the literature. Yet another plug: they are central in the upcoming book "More Concise Algebraic Topology" by Kate Ponto and myself.
As Sean says, the key point is the Universal Coefficient Theorem, but the details are not completely obvious unless you make some finiteness assumptions.
Suppose that $f:X\to Y$ is such that $H^{\ast}(f;\mathbb{Z})$ is an isomorphism. Let $Z$ be the cofibre of $f$, so $\tilde{H}^{\ast}(Z;\mathbb{Z})=0$. If we can prove that $\tilde{H}_{\ast}(Z;\mathbb{Z})=0$ then we can appeal to the ordinary homological Whitehead theorem. MathJax is mangling my tildes: all (co)homology groups of $Z$ below should be read as reduced.
For any prime $p$, we have a universal coefficient sequence for $H^{\ast}(Z;\mathbb{Z}/p)$ in terms of $H^{\ast}(Z;\mathbb{Z})$, so $H^{\ast}(Z;\mathbb{Z}/p)=0$. As ${\mathbb{Z}/p}$ is a field we also know that $H_{\ast}(Z;\mathbb{Z}/p)$ is a free module with $H^{\ast}(Z;\mathbb{Z}/p)$ as its dual, so we must have $H_{\ast}(Z;\mathbb{Z}/p)=0$. Using the universal coefficient theorem for homology we deduce that $H_{\ast}(Z;\mathbb{Z})/p$ and $\text{ann}(p,H_{\ast}(Z;\mathbb{Z}))$ are zero, so multiplication by $p$ is an isomorphism on $H_{\ast}(Z;\mathbb{Z})$. As this holds for all $p$, we see that $H_{\ast}(Z;\mathbb{Z})$ is a rational vector space. Thus, if it is nontrivial it will contain a copy of $\mathbb{Q}$ so (via universal coefficients again) $H^{\ast}(Z;\mathbb{Z})$ will contain a copy of $\text{Ext}(\mathbb{Q},\mathbb{Z})$. This group is nonzero (in fact, enormous) by a standard calculation, so this contradicts the initial assumption.
Sure.
The basic point is that for simply-connected spaces, you can determine the connectivity of a map by looking at the connectivity of the cofiber instead of the connectivity of the fiber.
In homology, you determine the connectivity of the cofiber by looking at $H_*(C;\mathbb{Z})$, because of the Hurewicz Theorem.
In cohomology, you appeal to: if $X$ is simply-connected, then $X$ is $n$-connected if and only if $[ X, K(G,m)] = *$ for all $m \leq n$ and all abelian groups $G$. This is because (by basic obstruction theory) if $X$ is $(n-1)$-connected, then $[X, K(G,n)] \cong \mathrm{Hom}(\pi_n(X), G )$.
This results in a long list of groups to check, admittedly; but it can be whittled down by Universal Coefficients theorems if you like.
|
I'm going to assume you were just handed this schematic and asked to perform some tasks. The results you got do not sound correct for the circuit. (It's not a great design. But it's enough to get by.)
The first thing to check is the voltage of the power supply. If this is a \$9\:\text{V}\$ battery that was sitting on a lab shelf, definitely check it out with a meter -- using a \$1\:\text{k}\Omega\$ resistor across its terminals while you use the meter to measure the voltage. If this is a commercial power supply, check it anyway. It doesn't hurt to verify that your voltage is what it is supposed to be.
The next thing to check is the resistor values. Make sure they are near the values you are told they should be. Again, a few moments spent here can save you lots of time, later. It's only five resistors, so it shouldn't take a lot of time.
It's a good idea to have checked the datasheet for the specific BJT you are using. The 2N3904 comes in a variety of packages and I wouldn't be surprised to find different pinouts even for the same TO-92 package. It's worth a moment to make sure you know where your pins are located. I won't suggest that you spend time checking out the BJT, since you will be doing that soon enough when you plug it into the circuit. But if you do have a transistor checker available, please do use it.
Finally, check your wiring two or three times over. I make mistakes. So you can make them, too. Check things over several times. Might save more time, later.
I won't explain, but I'll do a few quick calculations for you. These will be added here so that you can check this out with a voltmeter before running your tests.
Assuming that the BJT isn't saturated, the base current should be:
$$I_\text{B}=\frac{V_\text{TH}-V_\text{BE}}{R_\text{TH}+\left(\beta+1\right)\:R_\text{E}}$$
Here, \$R_\text{E}=200\:\Omega+2.2\:\text{k}\Omega=2.4\:\text{k}\Omega\$, \$R_\text{TH}=\frac{R_1\:R_2}{R_1+R_2}\$ and \$V_\text{TH}=9\:\text{V}\frac{R_2}{R_1+R_2}\$. The value for \$V_\text{BE}\$ will vary a little, but I can already tell that it's closer to \$600\:\text{mV}\$ than to \$700\:\text{mV}\$. So I'll just pick something just under the middle of that. This gives me about \$800\:\text{nA}\$ for the base current (using a random guess for \$\beta\approx 260\$.) And therefore a collector current of only about \$210\:\mu\text{A}\$. About a \$2.1\:\text{V}\$ drop across the collector resistor.
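If you want to redo this arithmetic yourself, here's a short Python sketch. Note the divider values \$R_1 = 100\:\text{k}\Omega\$, \$R_2 = 15\:\text{k}\Omega\$ are my assumption for illustration (this excerpt doesn't give them); plug in the values from your schematic:

```python
# Bias-point arithmetic for the divider-biased NPN stage described above.
# R1/R2 below are ASSUMED values for illustration, not from the schematic.
R1, R2 = 100e3, 15e3
Vcc, Vbe, beta = 9.0, 0.65, 260       # beta is a random guess, as in the text
Re = 200 + 2.2e3                      # 2.4 kOhm total emitter resistance
Rth = R1 * R2 / (R1 + R2)             # Thevenin resistance of the divider
Vth = Vcc * R2 / (R1 + R2)            # Thevenin voltage of the divider

Ib = (Vth - Vbe) / (Rth + (beta + 1) * Re)
Ic = beta * Ib
Vb = Vth - Ib * Rth                   # expected base voltage, no signal applied

print(f"Ib = {Ib*1e9:.0f} nA, Ic = {Ic*1e6:.0f} uA, Vb = {Vb:.2f} V")
```

With these assumed divider values the numbers land near the figures quoted above (roughly 800 nA of base current and a base voltage between 1.1 and 1.2 V).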
So the next thing you do is do NOT put a signal up to your circuit. Just leave it open for now. Apply power and measure the output voltage. It should be in the area of about \$6.5\:\text{V}\$ to \$7.3\:\text{V}\$. Hopefully nearer the middle of that area. If the voltage is outside that range, stop everything and go measure the base voltage. This should be:
$$V_\text{B}=V_\text{TH} - I_\text{B}\cdot R_\text{TH}$$
In this case, that works out to about \$1.1\:\text{V}\$ to \$1.2\:\text{V}\$. If the base voltage also is too far outside that range (a few tenths of a volt above or below it), then just stop. Something is wired wrong. Get help and/or check things out.
These are basic checks.
If these things seem okay, then there is a good chance you have it wired correctly. Go back and verify the voltage source, just in case. Make sure it's still \$9\:\text{V}\$.
If all that is in line, then go ahead and add the capacitors and then apply your starting signal. You definitely should be getting more than a few millivolts (at the collector, relative to the [-] terminal of the power supply.)
Which reminds me -- polarity is important. Make sure that the emitter of the BJT is towards the [-] side of the power supply and that the collector of the BJT is towards the [+] side.
My guess is that you didn't hook up the power supply correctly, got a PNP instead of an NPN, or really flubbed up the wiring somehow. Check things. Then check them again.
Also, this particular circuit will have varying gain -- and some distortion as a result. (If you actually get it built up correctly.) But for these purposes, that should be fine enough.
|
Ex.13.5 Q2 Surface Areas and Volumes Solution - NCERT Maths Class 10 Question
A right triangle, whose sides are \(3 \,\rm{cm}\) and \( 4\,\rm{cm}\) (other than hypotenuse) is made to revolve about its hypotenuse. Find the volume and surface area of the double cone so formed. (Choose value of π as found appropriate.)
Text Solution

What is Known?
A right triangle whose sides are \(3 \,\rm{cm}\) and \( 4\,\rm{cm}\) (other than hypotenuse) is made to revolve about its hypotenuse to form cone.
What is Unknown?
The volume and surface area of the double cone formed.
Reasoning:
Draw a figure to visualize the double cone formed
In order to find the volume and surface area, we need to find \(BD\) or radius of the double cone
From the figure it’s clear that \(BD \bot AC\)
To find \(BD\)
(i) We first find \(AC\) using Pythagoras theorem
\(\begin{align}A{C^2} &= A{B^2} + B{C^2}\\AC &= \sqrt {A{B^2} + B{C^2}} \end{align}\)
(ii) Using AA criterion of similarity
Prove
\[\begin{align}&{\Delta ABC}\sim{\Delta BDC}\\&{ \frac{{AB}}{{BD}} = \frac{{AC}}{{BC}}}\end{align}\]
(Corresponding Sides of similar triangles are in proportion)
\[\begin{align}\text{Radius or}\, BD = \frac{{AB \times BC}}{{AC}}\end{align}\]
Since we know \(AB, BC\) and \(AC\), \(BD\) can be found out.
Since double cone is made by joining \(2\) cones by their bases
Therefore Volume of double cone \(=\) Volume of Cone \(ABB’\) \(+\) Volume of Cone \(BCB’\)
We will find the volume of the cone by using formulae;
Volume of the cone\(\begin{align} = \frac{1}{3}\pi {r^2}h\end{align}\)
where
\(r\) and \(h\) are the radius and height of the cone respectively.
Visually from the figure it’s clear that CSA of double cone includes CSA of both the cones
Therefore, CSA of double Cone \(=\) CSA of cone \(ABB’ +\) CSA of cone \(BCB’\)
We will find the CSA of the cone by using formulae;
CSA of a cone\( = \pi rl\)
where
\(r\) and \(l\) are the radius and slant height of the cone respectively.

Steps:
In \(\Delta ABC\) right-angled at \(B\)
\[\begin{align}A{C^2} &= A{B^2} + B{C^2}\\AC &= \sqrt {{{\left( {3cm} \right)}^2} + {{\left( {4cm} \right)}^2}} \\&= \sqrt {9c{m^2} + 16c{m^2}} \\&= \sqrt {25c{m^2}} \\&= 5cm\end{align}\]
Consider \(\Delta ABC\) and \(\Delta BDC\)
\[\begin{align}\angle ABC &= \angle CDB = 90^\circ {\rm{ }}\left( {BD \bot AC} \right)\\\angle BCA &= \angle BCD{\rm{}}\left( {{\rm{common}}} \right)\end{align}\]
By AA criterion of similarity \(\Delta ABC \sim \Delta BDC\)
Therefore,
\(\begin{align}\frac{{AB}}{{BD}} = \frac{{AC}}{{BC}}\end{align}\) (Corresponding sides of similar triangles are in proportion)
\[\begin{align}BD &= \frac{{AB \times BC}}{{AC}}\\&= \frac{{3cm \times 4cm}}{{5cm}}\\&= \frac{{12}}{5}cm\\&= 2.4cm\end{align}\]
Volume of double cone \(=\) Volume of Cone ABB’ \(+\) Volume of Cone BCB’
\[\begin{align}&= \frac{1}{3} \times \pi {(BD)^2} \times AD + \frac{1}{3}\pi {(BD)^2} \times DC\\&= \frac{1}{3} \times \pi {(BD)^2}\left[ {AD + DC} \right]\\&= \frac{1}{3} \times \pi {(BD)^2} \times AC\\&= \frac{1}{3} \times 3.14 \times 2.4cm \times 2.4cm \times 5cm\\&= \frac{{90.432}}{3}c{m^3}\\&= 30.144c{m^3}\\&= 30.14c{m^3}\end{align}\]
CSA of double Cone \(=\) CSA of cone ABB’ \(+\) CSA of cone BCB’
\[\begin{align}&= \pi \times BD \times AB + \pi \times BD \times BC\\&= \pi \times BD\left[ {AB + BC} \right]\\&= 3.14 \times 2.4cm \times \left[ {3cm + 4cm} \right]\\&= 3.14 \times 2.4cm \times 7cm\\&= 52.752c{m^2}\\&= 52.75c{m^2}\end{align}\]
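The whole computation can be verified numerically; a small Python sketch (using \(\pi = 3.14\) as in the solution above):

```python
import math

# Numerical check of the double-cone result for the 3-4-5 right triangle.
AB, BC = 3.0, 4.0
AC = math.hypot(AB, BC)          # hypotenuse = 5 cm (Pythagoras)
BD = AB * BC / AC                # common radius = 2.4 cm (similar triangles)

pi = 3.14                        # value of pi used in the worked solution
volume = pi * BD**2 * AC / 3     # (1/3)pi r^2 AD + (1/3)pi r^2 DC, AD + DC = AC
csa = pi * BD * (AB + BC)        # pi r AB + pi r BC (slant heights are the sides)

print(round(volume, 3), round(csa, 3))   # 30.144 52.752
```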
|
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox:

\documentclass[handout]{beamer}
\usepackage{pgfpages}
\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]
\begin{document}
\begin{frame}
\[\bigcup_n \sum_n\]
\[\underbrace{aaaaaa}_{bbb}\]
\end{frame}
\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
Let $X,Y$ be two disjoint closed sets on $\mathbb{R}$ such that $X\cup Y = [a,b]$. Show that $X= \emptyset$ or $Y = \emptyset$.
Here's what I've got by now:
Let $k \in X\cup Y$. Therefore $k \in X$ or $k \in Y$. Suppose that $X \neq \emptyset$ and $k \in X$, hence $k \notin Y$ which implies that $k \in \mathbb{R} \setminus Y$. From that it follows that $\exists \epsilon_k > 0$ such that $(k - \epsilon_k, k + \epsilon_k) \subset \mathbb{R} \setminus Y$. Since $k \in X\cup Y = [a,b]$ it follows that $a \leq k \leq b$.
Now let's get an $\epsilon > 1$ such that $(k-\frac{\epsilon_k}{\epsilon},k+\frac{\epsilon_k}{\epsilon}) \subset \big((k - \epsilon_k, k + \epsilon_k) \cap [a,b] \big)$. That $\epsilon$ clearly exists, and then it follows that $(k-\frac{\epsilon_k}{\epsilon},k+\frac{\epsilon_k}{\epsilon}) \subset \mathbb{R}\setminus Y \cap [a,b]$.
Now see that $\mathbb{R} \setminus Y \cap [a,b] = \mathbb{R} \setminus Y \cap (X \cup Y) = X$. From our previous conclusion, it follows that $X$ is open.
Using the same reasoning, by supposing that $Y\neq \emptyset$ it follows that $Y$ is open.
So now suppose that $X,Y \neq \emptyset$. Therefore $X,Y$ are open sets, so $X\cup Y$ is also an open set, hence it cannot equal the closed interval $[a,b]$, which is not open. Therefore $X = \emptyset$ or $Y = \emptyset$.
Edit for cases when $k=a$ or $k=b$
As the user 5xum pointed out, to guarantee the existence of that $\epsilon > 0$ we need $k \notin \{a,b\}$. So if I can guarantee that there is such $k \in X$, we're done without loss of generality for the case where $Y \neq \emptyset$.
To prove that, suppose that there isn't such $k \in X$. Therefore $X=\{a\}$ or $X=\{b\}$ or $X=\{a,b\}$. In all those three cases $Y$ cannot be closed, because $Y$ would be respectively $(a,b]$, $[a,b)$ and $(a,b)$. Since $Y$ is closed, that $k$ exists.
Can someone please check my work? That was a hard lemma for me and even after that proof attempt, I'm not 100% sure if it's fully correct! Thanks and any kind of help is highly appreciated!
|
Ex.6.3 Q6 Triangles Solution - NCERT Maths Class 10 Question
In Figure , if \(\Delta \mathrm{ABE} \cong \Delta \mathrm{ACD}\), show that \(\Delta ADE \sim \Delta ABC\).
Diagram
Text Solution

Reasoning:
As we know, if two triangles are congruent to each other, their corresponding parts are equal.
If one angle of a triangle is equal to one angle of the other triangle and the sides including these angles are proportional, then the two triangles are similar.
This criterion is referred to as the SAS (Side–Angle–Side) similarity criterion for two triangles.
Steps:
In \(\begin{align}\Delta ABE\,\,{\rm {and}}\,\Delta ACD \end{align}\)
\[\begin{align}& AE=AD\,\,(\because \Delta ABE\cong \Delta ACD\,\,\rm{given}).........(1) \\ & AB=AC\,\,(\because\Delta ABE\cong \Delta ACD\,\,\rm{given}).........(2) \\ \end{align}\]
Now Consider \(\Delta ADE,\,\,\Delta ABC\)
\[\begin{align} \frac{AD}{AB}&=\frac{AE}{AC}\qquad\,\text{from}\,\,(1)\,\And (2) \\ \rm{and}\quad\,\,\angle DAE&=\angle BAC\,\,\,\left( \text{Common}\,\text{angle} \right) \\ \Rightarrow\quad \Delta ADE&\sim{\ }\Delta ABC\left( \text{SAS}\,\text{criterion} \right) \\ \end{align}\]
|
I'll list two approaches, either of which should work.
Projected gradient descent
One general approach would be to use projected gradient descent, which provides a way to accommodate these kinds of constraints. For specific kinds of constraints there may be a better way.
Let's recall how we train neural networks. We have an objective function $\Psi$ that computes the total loss, as a function of the weights $w$. Training a neural network amounts to solving the optimization problem "minimize $\Psi(w)$". This is typically done using (stochastic) gradient descent to find the weights $w$ that minimize $\Psi(w)$. In each iteration, you take a single gradient step to update $w$ (backpropagation is just a way to help you compute the gradient $\nabla \Psi(w)$, to help you take this step).
Now you want to add some constraints on $w$. Let $\mathcal{R}$ denote the region (linear subspace) of $w$'s that satisfy all your constraints. Now we want to solve the optimization problem "minimize $\Psi(w)$ subject to $w \in \mathcal{R}$". One standard way to solve such an optimization problem is to use projected gradient descent: in each iteration, you take a single gradient step to update $w$, then project $w$ to the nearest point in $\mathcal{R}$; and repeat. So this amounts to doing the standard backprop training procedure for a neural network, but after each weight update, projecting the weights to the nearest valid value that satisfies all the constraints. You can read more about projected gradient descent in standard resources. In your situation, it is easy to project onto a linear subspace; this is a simple matter of linear algebra. So projected gradient descent should be straightforward to apply. It also shouldn't be too hard to add the projection step, in a framework like Tensorflow. This also has the nice benefit of generalizing beyond linear constraints.
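As a concrete illustration of the projection step, here is a small numpy sketch (the tiny quadratic loss and all names are mine, standing in for a real network objective). The Euclidean projection onto $\{w : Aw = c\}$ is $w \mapsto w - A^\top(AA^\top)^{-1}(Aw - c)$, assuming $A$ has full row rank:

```python
import numpy as np

def project_onto_affine(w, A, c):
    """Euclidean projection of w onto {w : A w = c} (A assumed full row rank)."""
    correction = A.T @ np.linalg.solve(A @ A.T, A @ w - c)
    return w - correction

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))          # 2 linear constraints on 5 "weights"
c = rng.standard_normal(2)
w = project_onto_affine(rng.standard_normal(5), A, c)   # feasible start

target = np.arange(5.0)                  # toy loss: ||w - target||^2 / 2
for _ in range(200):
    grad = w - target                    # gradient of the toy loss
    w = project_onto_affine(w - 0.1 * grad, A, c)   # gradient step, then project

print(np.allclose(A @ w, c))             # constraints hold after "training"
```

Because every iteration ends with a projection, the iterate satisfies the constraints exactly at all times, which is the property you want from projected gradient descent.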
Reparameterization
The other plausible approach would be a reparameterization trick. If you have $n$ weights subject to $m$ independent linear constraints, your region $\mathcal{R}$ has the form $\{w : Aw=c\}$, where $A$ is an $m \times n$ matrix and $c$ a constant $m$-vector. This can be reparametrized into the form $\{w_0 + Bv\}$, where $w_0$ is any particular solution of $Aw_0=c$, $B$ is an $n\times (n-m)$ matrix whose columns span the null space of $A$, and $v$ ranges over all possible $(n-m)$-vectors. In other words, we can find $w_0$ and $B$ such that every valid weight value $w$ can be expressed as $w=w_0+Bv$ for some $v$. (You can get $w_0$ from the pseudoinverse of $A$, and $B$ from a null-space computation, e.g. via the SVD of $A$.) Then, instead of solving the optimization problem "minimize $\Psi(w)$ subject to $w \in \mathcal{R}$", you can solve the optimization problem "minimize $\Psi(w_0+Bv)$", where now there are no constraints on $v$. This in turn can be done using gradient descent.
Once you've found the matrix $B$ (and a particular solution $w_0$ with $Aw_0=c$), you'll probably find the latter approach very easy to implement in frameworks like Tensorflow; you just let $v$ be the variables you are trying to solve for, compute $w$ from $v$ using $w=w_0+Bv$, and then the neural network uses these derived weights $w$, and you can ask Tensorflow to minimize the loss as usual.
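A numpy sketch of this reparameterization (toy dimensions, not framework code): take a particular solution from the pseudoinverse and a null-space basis from the SVD, so that $w = w_0 + Bv$ is feasible for every $v$:

```python
import numpy as np

# Every w with A w = c can be written w = w0 + B v, where w0 is one
# particular solution and the columns of B span the null space of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))          # 2 constraints on 5 "weights"
c = rng.standard_normal(2)

w0 = np.linalg.pinv(A) @ c               # particular solution: A @ w0 == c
_, _, Vt = np.linalg.svd(A)
B = Vt[2:].T                             # 5 x 3 null-space basis (A full row rank)

v = rng.standard_normal(3)               # unconstrained parameters
w = w0 + B @ v                           # derived weights
print(np.allclose(A @ w, c))             # feasible for ANY choice of v
```

Since the constraint holds identically in $v$, an optimizer can update $v$ freely and the derived weights never leave the feasible set.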
|
Three-dimensional antenna radiation patterns. The radial distance from the origin in any direction represents the strength of radiation emitted in that direction. The top shows the directive pattern of a horn antenna; the bottom shows the omnidirectional pattern of a dipole antenna.
In the field of antenna design the term radiation pattern (or antenna pattern or far-field pattern) refers to the directional (angular) dependence of the strength of the radio waves from the antenna or other source.[1][2][3]
Particularly in the fields of fiber optics, lasers, and integrated optics, the term radiation pattern may also be used as a synonym for the near-field pattern or Fresnel pattern.[4] This refers to the positional dependence of the electromagnetic field in the near field, or Fresnel region, of the source. The near-field pattern is most commonly defined over a plane placed in front of the source, or over a cylindrical or spherical surface enclosing it.[1][4]
The far-field pattern of an antenna may be determined experimentally at an antenna range, or alternatively, the near-field pattern may be found using a near-field scanner, and the radiation pattern deduced from it by computation.[1] The far-field radiation pattern can also be calculated from the antenna shape by computer programs such as NEC. Other software, like HFSS, can also compute the near field.
The far-field radiation pattern may be represented graphically as a plot of one of a number of related variables, including: the field strength at a constant (large) radius (an amplitude pattern or field pattern), the power per unit solid angle (power pattern) and the directive gain. Very often, only the relative amplitude is plotted, normalized either to the amplitude on the antenna boresight, or to the total radiated power. The plotted quantity may be shown on a linear scale, or in dB. The plot is typically represented as a three-dimensional graph (as at right), or as separate graphs in the vertical plane and horizontal plane. This is often known as a polar diagram.
Reciprocity
It is a fundamental property of antennas that the receiving pattern (sensitivity as a function of direction) of an antenna when used for receiving is identical to the far-field radiation pattern of the antenna when used for transmitting. This is a consequence of the reciprocity theorem of electromagnetics and is proved below. Therefore, in discussions of radiation patterns the antenna can be viewed as either transmitting or receiving, whichever is more convenient.

Typical patterns
Typical polar radiation plot. Most antennas show a pattern of "lobes" or maxima of radiation. In a directive antenna, shown here, the largest lobe, in the desired direction of propagation, is called the "main lobe". The other lobes are called "sidelobes" and usually represent radiation in unwanted directions.
Since electromagnetic radiation is dipole radiation, it is not possible to build an antenna that radiates equally in all directions, although such a hypothetical isotropic antenna is used as a reference to calculate antenna gain. The simplest antennas, monopole and dipole antennas, consist of one or two straight metal rods along a common axis.
These axially symmetric antennas have radiation patterns with a similar symmetry, called omnidirectional patterns; they radiate equal power in all directions perpendicular to the antenna, with the power varying only with the angle to the axis, dropping off to zero on the antenna's axis. This illustrates the general principle that if the shape of an antenna is symmetrical, its radiation pattern will have the same symmetry.
In most antennas, the radiation from the different parts of the antenna interferes at some angles. This results in zero radiation at certain angles where the radio waves from the different parts arrive out of phase, and local maxima of radiation at other angles where the radio waves arrive in phase. Therefore the radiation plot of most antennas shows a pattern of maxima called "lobes" at various angles, separated by "nulls" at which the radiation goes to zero.
A rectangular radiation plot, an alternative presentation method to a polar plot.
The larger the antenna is compared to a wavelength, the more lobes there will be. In a directive antenna in which the objective is to direct the radio waves in one particular direction, the lobe in that direction is larger than the others; this is called the "main lobe". The axis of maximum radiation, passing through the center of the main lobe, is called the "beam axis" or "boresight axis". In some antennas, such as split-beam antennas, there may exist more than one major lobe. A minor lobe is any lobe except a major lobe.
The other lobes, representing unwanted radiation in other directions, are called "side lobes". The side lobe in the opposite direction (180°) from the main lobe is called the "back lobe". Usually it refers to a minor lobe that occupies the hemisphere in a direction opposite to that of the major (main) lobe.
Minor lobes usually represent radiation in undesired directions, and they should be minimized. Side lobes are normally the largest of the minor lobes. The level of minor lobes is usually expressed as a ratio of the power density in the lobe in question to that of the major lobe. This ratio is often termed the side lobe ratio or side lobe level. Side lobe levels of −20 dB or smaller are usually not desirable in many applications. Attainment of a side lobe level smaller than −30 dB usually requires very careful design and construction. In most radar systems, for example, low side lobe ratios are very important to minimize false target indications through the side lobes.
Proof of reciprocity
For a complete proof, see the reciprocity (electromagnetism) article. Here, we present a common simple proof limited to the approximation of two antennas separated by a large distance compared to the size of the antenna, in a homogeneous medium. The first antenna is the test antenna whose patterns are to be investigated; this antenna is free to point in any direction. The second antenna is a reference antenna, which points rigidly at the first antenna.
Each antenna is alternately connected to a transmitter having a particular source impedance, and a receiver having the same input impedance (the impedance may differ between the two antennas).
It will be assumed that the two antennas are sufficiently far apart that the properties of the transmitting antenna are not affected by the load placed upon it by the receiving antenna. Consequently, the amount of power transferred from the transmitter to the receiver can be expressed as the product of two independent factors; one depending on the directional properties of the transmitting antenna, and the other depending on the directional properties of the receiving antenna.
For the transmitting antenna, by the definition of gain, G, the radiation power density at a distance r from the antenna (i.e. the power passing through unit area) is
$$\mathrm{W}(\theta,\Phi) = \frac{\mathrm{G}(\theta,\Phi)}{4 \pi r^{2}} P_{t}.$$
Here, the arguments $\theta$ and $\Phi$ indicate a dependence on direction from the antenna, and $P_{t}$ stands for the power the transmitter would deliver into a matched load. The gain $G$ may be broken down into three factors: the antenna gain (the directional redistribution of the power), the radiation efficiency (accounting for ohmic losses in the antenna), and lastly the loss due to mismatch between the antenna and transmitter. Strictly, to include the mismatch, it should be called the realized gain,[4] but this is not common usage.
For the receiving antenna, the power delivered to the receiver is
$$P_{r} = \mathrm{A}(\theta,\Phi) W\,.$$
Here $W$ is the power density of the incident radiation, and $A$ is the antenna aperture or effective area of the antenna (the area the antenna would need to occupy in order to intercept the observed captured power). The directional arguments are now relative to the receiving antenna, and again $A$ is taken to include ohmic and mismatch losses.
Putting these expressions together, the power transferred from transmitter to receiver is
$$P_{r} = A \frac{G}{4 \pi r^{2}} P_{t},$$
where $G$ and $A$ are directionally dependent properties of the transmitting and receiving antennas respectively. For transmission from the reference antenna (2), to the test antenna (1), that is
$$P_{1r} = \mathrm{A_{1}}(\theta,\Phi) \frac{G_{2}}{4 \pi r^{2}} P_{2t},$$
and for transmission in the opposite direction
$$P_{2r} = A_{2} \frac{\mathrm{G_{1}}(\theta,\Phi)}{4 \pi r^{2}} P_{1t}.$$
Here, the gain $G_{2}$ and effective area $A_{2}$ of antenna 2 are fixed, because the orientation of this antenna is fixed with respect to the first.
Now for a given disposition of the antennas, the reciprocity theorem requires that the power transfer is equally effective in each direction, i.e.
$$\frac{P_{1r}}{P_{2t}} = \frac{P_{2r}}{P_{1t}},$$
whence
$$\frac{\mathrm{A_{1}}(\theta,\Phi)}{\mathrm{G_{1}}(\theta,\Phi)} = \frac{A_{2}}{G_{2}}.$$
But the right hand side of this equation is fixed (because the orientation of antenna 2 is fixed), and so
\frac{\mathrm{A_{1}}(\theta,\Phi)}{\mathrm{G_{1}}(\theta,\Phi)} = \mathrm{constant},
i.e. the directional dependence of the (receiving) effective aperture and the (transmitting) gain are identical (QED). Furthermore, the constant of proportionality is the same irrespective of the nature of the antenna, and so must be the same for all antennas. Analysis of a particular antenna (such as a Hertzian dipole), shows that this constant is \frac{\lambda^{2}}{4\pi}, where \lambda is the free-space wavelength. Hence, for any antenna the gain and the effective aperture are related by
\mathrm{A}(\theta,\Phi) = \frac{\lambda^{2} \mathrm{G}(\theta,\Phi)}{4 \pi}.
Even for a receiving antenna, it is more usual to state the gain than to specify the effective aperture. The power delivered to the receiver is therefore more usually written as
P_{r} = \frac{\lambda^{2} G_{r} G_{t}}{(4 \pi r)^{2}} P_{t}
(see link budget). The effective aperture is however of interest for comparison with the actual physical size of the antenna.
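The chain of relations above (gain, effective aperture, and the power-transfer formula) is easy to check numerically. A minimal Python sketch — the function names are mine, not from any antenna library:

```python
import math

def gain_to_aperture(gain, wavelength):
    # Effective aperture from gain: A = lambda^2 * G / (4*pi)
    return wavelength ** 2 * gain / (4 * math.pi)

def friis_received_power(p_t, g_t, g_r, wavelength, r):
    # Friis transmission equation: P_r = P_t * G_t * G_r * (lambda / (4*pi*r))**2
    return p_t * g_t * g_r * (wavelength / (4 * math.pi * r)) ** 2

# Consistency check: the received power computed via the receiving aperture,
# P_r = A_r * G_t / (4*pi*r^2) * P_t, must agree with the Friis formula.
p_t, g_t, g_r, lam, r = 1.0, 2.0, 3.0, 0.1, 100.0
via_aperture = gain_to_aperture(g_r, lam) * g_t / (4 * math.pi * r ** 2) * p_t
```

Both routes give the same received power, which is just the identity A = λ²G/4π substituted into P_r = A G_t P_t / (4πr²).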
Practical consequences

When determining the pattern of a receiving antenna by computer simulation, it is not necessary to perform a calculation for every possible angle of incidence. Instead, the radiation pattern of the antenna is determined by a single simulation, and the receiving pattern inferred by reciprocity. When determining the pattern of an antenna by measurement, the antenna may be either receiving or transmitting, whichever is more convenient.

References

^ a b c Constantine A. Balanis: "Antenna Theory, Analysis and Design", John Wiley & Sons, Inc., 2nd ed. 1982. ISBN 0-471-59268-4
^ David K. Cheng: "Field and Wave Electromagnetics", Addison-Wesley Publishing Company Inc., 2nd ed. 1998. ISBN 0-201-52820-7
^ Edward C. Jordan & Keith G. Balmain: "Electromagnetic Waves and Radiating Systems", Prentice-Hall, 2nd ed. 1968. ISBN 81-203-0054-8
^ a b c Institute of Electrical and Electronics Engineers: "The IEEE Standard Dictionary of Electrical and Electronics Terms", 6th ed. New York, N.Y.: IEEE, 1997. IEEE Std 100-1996. ISBN 1-55937-833-6 [ed. Standards Coordinating Committee 10, Terms and Definitions; Jane Radatz, chair]
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).
External links
Understanding and Using Antenna Radiation Patterns By Joseph H. Reisert
|
Similar to this question: Applications of connectedness I want to collect applications of compactness.
E.g.: compact + discrete => finite, which can be used to prove the finiteness of the automorphism group of polarized abelian varieties.
Any continuous function on a compact space is bounded and admits a maximum. This is perhaps the most important application of compactness.
Also pretty important in my opinion is the fact that a bijective continuous map $f:X\rightarrow Y$ between Hausdorff topological spaces is automatically a homeomorphism if $X$ is compact.
Another important application of compactness is the Stone-Weierstrass theorem: for $X$ compact, a subalgebra of $C^0(X,\mathbb{R})$ that contains the constant functions is dense if and only if it separates points.
Now something a bit more fancy:
Let $G$ be a compact group. Then the semi-group generated by an element is dense in the closed subgroup generated by that element.

All maximal compact subgroups of a connected Lie group are conjugate to one another.
Let $f_n$ be a sequence of continuous functions on a compact space that converges uniformly to $f$. Then for every neighborhood $V$ of $f^{-1}(0)$, $f_n^{-1}(0)$ is contained in $V$ for all sufficiently large $n$.
Compact manifolds (more generally ANR) have finitely generated homology groups.
On a compact smooth Riemannian manifold, there are always infinitely many geodesics connecting any two points.
The list goes on forever. Let me end with a famous conjecture on compact spaces (Kaplansky). Let $X$ be a Hausdorff compact space. Are all algebra homomorphisms from $C^0(X)$ to a Banach algebra $A$ continuous?
I thought I'd give the experts on this subject a day to respond. But as they haven't, I'll describe this nice computational application of compactness myself:
One interesting feature of compact spaces is that they can be exhaustively searched on a computer in a finite time, even if they are infinite.
A great example is described here. Consider the Cantor space of infinite binary sequences $C=2^\omega$. Suppose we have a computable predicate on this space, ie. a computable function $f\colon 2^\omega\rightarrow 2=\{0,1\}$. Then we can search all of $C$ to find an element that satisfies $f(x)=1$, or show that there is no such $x$, with an algorithm that is guaranteed to terminate.
The algorithm is described here. The important point is that (with the right topology) the computable functions are continuous and that the Cantor space is compact. This means that any predicate $f$ on $C$ is uniformly continuous in the sense that there is an $n$ such that $f$ can be computed without examining more than the first $n$ digits in its argument. From that it can eventually be deduced that the search for an element of $C$ satisfying $f$ can be completed in finite time.
This is somewhat surprising given that we can't exhaustively search $\mathbb{N}$ with a computable predicate in finite time, and yet $2^\omega$ seems like a 'larger' space.
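The search itself is short enough to write down. Here is a minimal Python sketch in the style of Escardó's searcher over $2^\omega$, with sequences represented as functions from indices to bits; this is an illustrative adaptation, not the code from the linked article:

```python
def find(p):
    # Search the Cantor space 2^omega for a sequence satisfying p.
    # Sequences are functions n -> {0, 1}; p must be "computable", i.e.
    # it may only ever inspect finitely many bits of its argument, which
    # is exactly the uniform-continuity property that makes this terminate.
    def cons(bit, rest):
        return lambda n: bit if n == 0 else rest(n - 1)
    # Lazily-built witness for the subtree of sequences starting with 0:
    left = lambda n: find(lambda s: p(cons(0, s)))(n)
    if p(cons(0, left)):
        return cons(0, left)
    # Otherwise commit to 1 and search the other subtree lazily.
    right = lambda n: find(lambda s: p(cons(1, s)))(n)
    return cons(1, right)

def exists(p):
    # find(p) satisfies p whenever any sequence does, so one test decides.
    return p(find(p))
```

Because `p` can only look at finitely many bits, every recursive call is eventually cut off, and `exists` decides in finite time whether any point of the (uncountable) Cantor space satisfies the predicate.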
A map from a compact space to a Hausdorff space is a homeomorphism if and only if it is continuous and bijective. This is useful to prove, for example, that all simple closed curves are homeomorphic to the circle.
You might be interested in this article by Terry Tao, which contains a really great discussion of compactness, applications of compactness, and what compactness means. But my favorite part of the article is the end, since it is the only part that contained a statement which really surprised me. I will quote it directly, not feigning to describe it better than Terry Tao:
Another use of compactifications is to allow one to rigorously view one type of mathematical object as a limit of others. For instance, one can view a straight line in the plane as the limit of increasingly large circles, by describing a suitable compactification of the space of circles which includes lines; this perspective allows us to deduce certain theorems about lines from analogous theorems about circles, and conversely to deduce certain theorems about very large circles from theorems about lines. In a rather different area of mathematics, the Dirac delta function is not, strictly speaking, a function, but exists in a certain (local) compactification of spaces of functions, such as spaces of measures or distributions. Thus one can view the Dirac delta function as a limit (in a suitably weak topology) of classical functions, which can be very useful for manipulating that function. One can also use compactifications to view the continuous as the limit of the discrete; for instance, it is possible to compactify the sequence Z/2Z, Z/3Z, Z/4Z, etc. of cyclic groups, so that their limit is the circle group T = R/Z. These simple examples can be generalised into much more sophisticated examples of compactifications (and to the closely related concept of completions), which have many applications in geometry, analysis, and algebra.
Compactness is crucial to many discretization arguments. For example, if you have a compact subset K of a domain D in the complex numbers, it is sometimes useful to cover it with a grid of squares. To do this, you argue by compactness that there is some positive delta such that every point in K is at least delta away from the complement of D. Then you place down a grid of squares of sidelength delta/2 (say), and each square that intersects K will lie entirely in D. One reason this is useful is that the boundary of the union of squares that intersect K is guaranteed to be reasonably nice in a way that the boundary of K is not.
Compactness allows one to formulate Poincare Duality, which effectively doubles your data on the geometry of a compact manifold.
The following maximization result makes it very explicit how one can use compactness to transfer results about finite sets to compact sets.
Let $X$ be a compact topological space and $P$ be a (strict) partial order on $X$. Assume that $L_x=\{y\in X: y\,P\,x\}$ is open for all $x$. Then there exists a $P$-maximal element.
Proof: Suppose not. Then the family of all $L_x$ covers $X$. By compactness, there is a finite subcover, so $X$ is covered by $L_{x_1},L_{x_2},\ldots,L_{x_n}$. So the finite set $\{x_1,\ldots,x_n\}$ has no $P$-maximal element, which is impossible.
Let me include an example of compactness which is a bit farther away from analysis and geometry.
Given a set $F = \{ \phi_i \}$ of propositional symbols, assume that you form some compound propositions connecting those with the symbols $\wedge$, $\vee$, $\neg$ and parentheses (let me not be really precise here). A valid concatenation is for instance $(\phi \vee \psi) \wedge (\neg \rho)$. Take any set $X$ of compound propositions. We say that $X$ is non-contradictory if one can assign a truth value to all the $\phi_i$ in such a way that, using the ordinary rules for connectives, all sentences in $X$ are true.
Theorem: If every finite subset of $X$ is non-contradictory, $X$ is non-contradictory as well.
The proof is simple. The set of possible choices for truth values is just $Y = \{0, 1\}^F = \prod_{\phi_i \in F} \{0, 1 \}$. Topologize $Y$ with the product topology, using the discrete topology on $\{0, 1\}$. Then $Y$ is compact by Tychonoff's theorem. For each compound proposition $\psi \in X$, the set $\{\psi \text{ is true} \}$ depends on only finitely many coordinates, hence is a closed (indeed clopen) subset of $Y$.
The hypothesis says that the intersection of finitely many of these closed sets is non-empty; by compactness, the intersection of all of them is non-empty, which is the claim.
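For a finite set of sentences the "non-contradictory" condition is directly computable by brute force over truth assignments. A small Python sketch (the representation of sentences as Python functions is my own choice for illustration):

```python
from itertools import product

def non_contradictory(sentences, symbols):
    # Brute-force search over truth assignments: each sentence is a
    # function taking a dict {symbol: bool} and returning a bool.
    # Returns True iff some assignment makes every sentence true.
    return any(
        all(s(dict(zip(symbols, values))) for s in sentences)
        for values in product([False, True], repeat=len(symbols))
    )
```

The compactness theorem above says this finitary check controls the infinite case: if every finite subset of $X$ passes it, the whole of $X$ is non-contradictory.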
As a generalization of "compact + discrete => finite", we have "locally compact + linear => finite dimensional," which supplies a lot of no-go theorems in functional analysis. For instance, compact operators on infinite-dimensional spaces can't be invertible.
Compactifications were mentioned in general above... but I think this might be worth mentioning.
The Stone-Cech compactification $\beta$ is used all the time since it produces a compact Hausdorff space from an arbitrary space in the "most efficient way." Just looking at applications of $\beta$ might be a more pointed question than asking about the wide world of compactness.
$\beta$ is used frequently in topological algebra since the topological structure of universal algebras on non-compact spaces is often highly complicated. Specifically, it is used in Applications of the Stone-Cech compactification to free topological groups to give extremely short proofs of some important results in the study of free topological groups. The original proof of Joiner's Fundamental Lemma is rather long and complicated. The paper by Hardy, Morris, and Thomas-Smith I have linked here (sorry if you don't have free access to this) gives a two page proof. One can also prove some nice embedding theorems for topological groups in just a few lines using $\beta$. To see some more applications of $\beta$ to topological groups see Arhangel'skii and Tkachenko's book.
As a special case of coudy's first answer, compactness is commonly used in calculus of variations to show existence of a maximizer (or minimizer).
Suppose we have some collection $X$ of functions, a continuous functional $E : X \to R$, and we know $M :=\sup_{f \in X} E[f] < \infty$. So for each $n$ there is a function $f_n$ with $E[f_n] \ge M-1/n$, but we do not know if there is a function $f$ with $E[f] = M$. If $X$ is compact in an appropriate topology, $f_n$ will have a convergent subsequence $f_{n_k} \to f$, and the limit $f$ of this subsequence must have $E[f] = M$. The Arzela-Ascoli theorem is a good example of a way to obtain the compactness.
Gordan's lemma is another application of "compact + discrete => finite", but it is one of the basic building blocks of the theory of toric varieties.
In some sense the theory of division by polynomials (i.e. the Gröbner basis theory) is a manifestation of compactness, e.g. Dickson's lemma can be stated and proved as an application of compactness, see e.g. the wikipedia entry. I guess this is an example of the principle mentioned in the answer of Michael Greinecker.
Every scheme whose underlying space is Hausdorff and compact is affine. ;-)
[This answer is just for amusement]
The fact that a Hausdorff topological vector space is locally compact if and only if it is finite dimensional is central in functional analysis, for the theory of compact and Fredholm operators.
|
Juan Maldacena wrote a popular essay on gauge symmetries and their breaking:
The next Maldacena may arrive from Zimbabwe.
Analogously, we learn that Maldacena's work with gauge theories was helped by the chronic inflation in his homeland, Argentina. The persistent monetary inflation and currency reforms – something that many of us consider to be "once in a century" painful event – became as mundane in Argentina as a gauge transformation. In fact, as Maldacena shows (and he is not the first one, I guess), it is not just an analogy. The switching to another unit of wealth
is a special case of a gauge transformation.
With this experience, a European or North American gauge theorist facing Maldacena must feel just like a European soccer player facing Argentina, if we recall another observation by Juan at Strings 2014.
The paper is full of images from Wikipedia. The beginning is all about the Beauty and the Beast and the concept of symmetry. You are also reminded about the electricity and magnetism.
But the financial representation of the gauge field \(A_\mu\) and the gauge symmetry is the most interesting thing, of course. The financial gauge group is isomorphic to \(\mathbb{R}\) but otherwise it works well. Maldacena offers you a financial interpretation of the field strength \(F_{\mu\nu}\) as well, of course.
If you think about a lattice version of the gauge theory, the links correspond to a "conversion of one currency from another". If you go around a loop constructed from these links, the exchange rates may be mismatched and you may earn (or lose) money. Your original assets ("before" you make the round trip) get pretty much multiplied by a factor\[
{\rm Assets}_{\rm after} = {\rm Assets}_{\rm before} \cdot \exp(\text{Magnetic flux})
\] if I add a formula to Maldacena's presentation. The exponential is the monodromy, the Wilson loop without any trace. You shouldn't forget that the trading gauge group is noncompact.
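The loop of currency conversions can be sketched in a few lines of Python; the exchange-rate numbers below are made up purely for illustration:

```python
import math

def monodromy(rates):
    # Product of the link factors around a closed loop of conversions --
    # the Wilson loop without a trace, in the noncompact "trading" group.
    factor = 1.0
    for r in rates:
        factor *= r
    return factor

# Made-up exchange-rate factors on the four links of a small loop.
rates = [1.10, 0.50, 2.00, 0.95]
flux = math.log(monodromy(rates))  # plays the role of the magnetic flux
```

A nonzero flux means the round trip multiplies your assets by exp(flux) ≠ 1, i.e. an arbitrage opportunity; consistent pricing corresponds to a pure-gauge configuration with zero flux through every loop.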
In the real world, we have to consider all these objects in the quantum mechanical language, as Maldacena discusses in another section, and the too obviously wrong consequences of the most naive realization of the symmetry must be avoided by the spontaneous symmetry breaking, the Higgs mechanism.
If someone likes semi-technical popular texts on physics, I recommend it.
|
Ex. 3.6 Q2 Pair of Linear Equations in Two Variables Solution - NCERT Maths Class 10

Question
Formulate the following problems as a pair of equations, and hence find their solutions:
(i) Ritu can row downstream \(20\,\rm{km}\) in \(2\) hours, and upstream \(4\,\rm{km}\) in \(2\) hours. Find her speed of rowing in still water and the speed of the current.

(ii) \(2\) women and \(5\) men can together finish an embroidery work in \(4\) days, while \(3\) women and \(6\) men can finish it in \(3\) days. Find the time taken by \(1\) woman alone to finish the work, and also that taken by \(1\) man alone.

(iii) Roohi travels \(300\,\rm{km}\) to her home partly by train and partly by bus. She takes \(4\) hours if she travels \(60\,\rm{km}\) by train and the remaining by bus. If she travels \(100\,\rm{km}\) by train and the remaining by bus, she takes \(10\) minutes longer. Find the speed of the train and the bus separately.
Solution:

(i)
Let Ritu’s speed of rowing in still water and the speed of the stream be \(x\,{\rm{km/h}}\) and \(y\,{\rm{km/h}}\) respectively.
Ritu’s speed of rowing;
Upstream \(= \left( {x – y} \right) \,\rm{km/h}\)
Downstream \(= \left( {x + y} \right)\,\rm{km/h}\)
According to question,
Ritu can row downstream \(20 \,\rm{km}\) in \(2\) hours,
\[\begin{align}2\left( {x + y} \right) &= 20\\x + y& = 10 \qquad\left( 1 \right)\end{align}\]
Ritu can row upstream \(4\,\rm{ km}\) in \(2\) hours,
\[\begin{align}2\left( {x - y} \right) &= 4\\x - y &= 2 \qquad \left( 2 \right)\end{align}\]
Adding equation (1) and (2), we obtain
\[\begin{align} 2x&=12 \\ x&=6 \\\end{align}\]
Putting \(x = 6\) in equation (1), we obtain
\[\begin{align}6 + y &= 10\\y& = 4\end{align}\]
Hence, Ritu’s speed of rowing in still water is \(6 \,\rm{km/h}\) and the speed of the current is \(4 \,\rm{km/h}.\)
(ii)
Let the number of days taken by a woman and a man to finish the work be
\(x\) and \(y\) respectively.
Therefore, work done by a woman in \(1\) day \( = \frac{1}{x}\)
and work done by a man in 1 day \( = \frac{1}{y}\)
According to the question,
\(2\) women and \(5\) men can together finish an embroidery work in \(4\) days;
\[\frac{2}{x} + \frac{5}{y} = \frac{1}{4} \qquad \left( 1 \right)\]
\(3\) women and \(6\) men can finish it in \(3\) days
\[\frac{3}{x} + \frac{6}{y} = \frac{1}{3} \qquad \left( 2 \right)\]
Substituting \(\begin{align}\frac{1}{x} = p \end{align}\) and \(\begin{align}\frac{1}{y} = q \end{align}\)in equations (1) and (2), we obtain
\[\begin{align}
\frac{2}{x} + \frac{5}{y} &= \frac{1}{4} \Rightarrow 2p + 5q = \frac{1}{4} \Rightarrow 8p + 20q - 1 = 0 \quad & & \left( 3 \right)\\ \frac{3}{x} + \frac{6}{y} &= \frac{1}{3} \Rightarrow 3p + 6q = \frac{1}{3} \Rightarrow 9p + 18q - 1 = 0 \quad & & \left( 4 \right) \end{align}\]
By cross-multiplication, we obtain
\[\begin{align}\frac{p}{{ - 20 - ( - 18)}} &= \frac{q}{{ - 9 - ( - 8)}} = \frac{1}{{144 - 180}}\\
\frac{p}{{ - 2}} &= \frac{q}{{ - 1}} = \frac{1}{{ - 36}}\\\frac{p}{{ - 2}}& = \frac{1}{{ - 36}}{\text{ and }}\frac{q}{{ - 1}} = \frac{1}{{ - 36}}\\p &= \frac{1}{{18}}\quad {\text{ and }} \quad q = \frac{1}{{36}}\end{align}\]
\(\begin{align}{\text{Therefore, }}p &= \frac{1}{x} = \frac{1}{{18}}\\
& \Rightarrow x = 18\\{\text{and, }}q &= \frac{1}{y} = \frac{1}{{36}}\\& \Rightarrow y = 36\end{align}\)
Hence, the number of days taken by a woman is \(18\) and by a man is \(36.\)
(iii)
Let the speed of train and bus be
\(u \,\rm{km/h}\) and \(v\,\rm{ km/h}\) respectively.
According to the given information,
Roohi travels \(300\,\rm{ km}\) and takes \(4\) hours if she travels \(60 \,\rm{km}\) by train and the remaining by bus
\[\frac{{60}}{u} + \frac{{240}}{v} = 4 \qquad \left( 1 \right)\]
If she travels \(100\,\rm{ km}\) by train and the remaining by bus, she takes \(10\) minutes longer
\[\frac{{100}}{u} + \frac{{200}}{v} = \frac{{25}}{6} \qquad \left( 2 \right)\]
Substituting \(\begin{align}\frac{1}{u}=p\end{align}\) and \(\begin{align}\frac{1}{v}=q \end{align}\) in equations (1) and (2), we obtain
\[\begin{align}\frac{{60}}{{u}} + \frac{{240}}{v} &= 4{\rm{ }} \Rightarrow 60p + 240q = 4 \qquad \left( 3 \right)\\\frac{{100}}{u} + \frac{{200}}{v} &= \frac{{25}}{6}{\rm{ }} \Rightarrow 100p + 200q = \frac{{25}}{6}{\rm{ }} \Rightarrow 600p + 1200q = 25 \qquad \left( 4 \right)\end{align}\]
Multiplying equation (3) by 10, we obtain
\[600p + 2400q = 40 \quad \left( 5 \right)\]
Subtracting equation (4) from (5), we obtain
\[\begin{align}1200q &= 15\\q &= \frac{{15}}{{1200}}\\q& = \frac{1}{{80}}\end{align} \]
Substituting \(\begin{align}q = \frac{1}{{80}} \end{align}\) in equation (3), we obtain
\[\begin{align}60p + 240 \times \frac{1}{{80}} &= 4\\60p &= 4 - 3\\p &= \frac{1}{{60}}\end{align}\]
\(\begin{align}{\text{Therefore, }}p &= \frac{1}{u} = \frac{1}{{60}}\\& \Rightarrow u = 60\\{\text{and, }}q &= \frac{1}{v} = \frac{1}{{80}}\\& \Rightarrow v = 80\end{align}\)
Hence, speed of the train \(= 60\,{\rm{ km/h}}\)
And speed of the bus \(= 80\,{\rm{ km/h}}\)
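All three parts reduce to \(2\times2\) linear systems, so the answers are easy to verify programmatically. A small Python sketch using Cramer's rule (the helper name is mine):

```python
from fractions import Fraction as F

def solve2(a1, b1, c1, a2, b2, c2):
    # Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule.
    det = a1 * b2 - a2 * b1
    return F(c1 * b2 - c2 * b1, det), F(a1 * c2 - a2 * c1, det)

# (i)   x + y = 10,  x - y = 2          -> still-water and current speeds
# (ii)  8p + 20q = 1, 9p + 18q = 1      -> p = 1/x, q = 1/y
# (iii) 60p + 240q = 4, 600p + 1200q = 25
speeds = solve2(1, 1, 10, 1, -1, 2)
p_w, q_m = solve2(8, 20, 1, 9, 18, 1)
p_t, q_b = solve2(60, 240, 4, 600, 1200, 25)
```

Inverting the substituted variables recovers the days in (ii) and the speeds in (iii), matching the worked answers above.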
|
I am taking input from an electret mic amplified using LM358 amplifier from my PIC16F877A's ADC unit. I am getting the readings in Volts from the ADC which ranges from 2.5V to 5V. How can I convert these readings into dB?
dB SPL is a unit of sound pressure level.

You can't convert a voltage to a dB SPL reading unless you know:
- the microphone sensitivity (or simply the analog output voltage to input pressure ratio), which tells you essentially the voltage level it will output for a given sound pressure level
- the gain that the preamp has applied
Your microphone has a sensitivity of -46 dBV/Pa, which gives 0.005012 V RMS / Pa
1 Pa (pascal) equals 94 dB sound pressure (SPL)
The dB equation for voltage is \$ 20 \times \log \frac {V_1}{V_o} \$
where V1 is the voltage being measured, and \$ V_0 \$ the reference level
If we do an example calculation for the measurement of 2.5v (assuming a unity gain for the amplifier) we get
\$ 20 \times \log \frac {2.5}{0.005012} = 53.96dB \$
so the voltage is 53.96 dB above the 1 Pa reference voltage, and the SPL will be 94 + 53.96 = 147.96 dB SPL.

We assumed unity gain for the preamplifier; if the actual gain was 20 dB then the SPL becomes

147.96 - 20 = 127.96 dB SPL

and if the actual gain was 10 dB then the SPL becomes

147.96 - 10 = 137.96 dB SPL ...
-46dB V/Pa is how I read it and 1 Pa is the sound pressure in newtons per sq metre. 0dB SPL is 20 micro Pascal therefore, 1 Pa is 50,000 times bigger or, in dB it is 94 dB SPL.
So, if you are measuring -46 dBV then you are measuring a SPL of 94 dB. -46 dBV is near enough 5 mV RMS so, again, if you measure 5mV RMS then the SPL is 94dB.
If you have a pre-amplifier with a gain of ten, then 50mV RMS equates to 94dB SPL and 5mV would equate to a SPL of 74 dB.
This should be enough to get you started.
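Putting the pieces together, the conversion is a one-liner once the sensitivity and the preamp gain are known. A Python sketch with the illustrative values used above (-46 dBV/Pa sensitivity, 94 dB SPL at 1 Pa); substitute the figures for your actual microphone and amplifier:

```python
import math

# Illustrative constants taken from the discussion above.
SENSITIVITY_DBV_PER_PA = -46.0  # mic output at 1 Pa, in dBV
PA_REF_DB_SPL = 94.0            # 1 Pa corresponds to 94 dB SPL

def voltage_to_db_spl(v_rms, preamp_gain_db=0.0):
    # Work in dB: measured voltage level -> mic-terminal level -> SPL.
    v_dbv = 20.0 * math.log10(v_rms)   # measured level in dBV
    mic_dbv = v_dbv - preamp_gain_db   # undo the preamp gain
    return PA_REF_DB_SPL + (mic_dbv - SENSITIVITY_DBV_PER_PA)
```

With these numbers a 5 mV RMS reading at unity gain maps to roughly 94 dB SPL, matching the second answer's sanity check.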
|
I need help with what seems like a pretty simple integral for a Fourier transformation. I need to transform $\psi \left( {0,t} \right) = {e^{ - {{\left( {\frac{t}{2}} \right)}^2}}}$ into $\psi(0,\omega)$ by solving:
$$ \frac{1}{{2\pi }}\int_{ - \infty }^{ + \infty } {\psi \left( {0,t} \right){e^{ - i\omega t}}dt} $$
So far I've written (using Euler's formula):
$$\psi \left( {0,\omega } \right) = \frac{1}{{2\pi }}\int_{ - \infty }^{ + \infty } {\psi \left( {0,t} \right){e^{ - i\omega t}}dt} = \frac{1}{{2\pi }}\left( {\int_{ - \infty }^{ + \infty } {{e^{ - {{\left( {\frac{t}{2}} \right)}^2}}}\cos \omega tdt - i\int_{ - \infty }^{ + \infty } {{e^{ - {{\left( {\frac{t}{2}} \right)}^2}}}\sin \omega tdt} } } \right)$$
$$ \begin{array}{l} = \frac{1}{{2\pi }}\left( {{I_1} - i{I_2}} \right)\\ \end{array}$$
I just don't recall a way to solve these integrals by hand. Wolfram Alpha tells me that the result of the first integral is ${I_1} = 2\sqrt \pi {e^{ - {\omega ^2}}}$ and of the second $I_{2}=0$. But in my notes I have ${I_1} = 2\sqrt \pi {e^{ - {{\left( {{\omega ^2}/2} \right)}^2}}}$.
Can anybody tell me how one can solve this type of integrals and if the result from Wolfram Alpha is accurate? Any help will be appreciated.
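For reference, the standard way to evaluate such Gaussian integrals by hand is to complete the square in the exponent; the computation below suggests the Wolfram Alpha result is the correct one (and $I_2 = 0$ follows since the sine integrand is odd):

```latex
\begin{aligned}
I_1 - iI_2
  &= \int_{-\infty}^{+\infty} e^{-t^2/4}\, e^{-i\omega t}\, dt
   = \int_{-\infty}^{+\infty}
     e^{-\left(\frac{t}{2} + i\omega\right)^2}\, e^{-\omega^2}\, dt
     &&\text{(complete the square)} \\
  &= e^{-\omega^2} \int_{-\infty}^{+\infty} e^{-u^2}\, 2\, du
     &&\text{(substitute } u = \tfrac{t}{2} + i\omega,\ dt = 2\,du\text{)} \\
  &= 2\sqrt{\pi}\, e^{-\omega^2},
\end{aligned}
```

using $\int_{-\infty}^{+\infty} e^{-u^2}\,du = \sqrt\pi$; shifting the contour back to the real axis is justified because the integrand is entire and decays rapidly.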
|
Rather than answer the question numerically I have outlined the four different cases: reversible/irreversible and isothermal/adiabatic.
In adiabatic changes no heat is transferred to the system; that is, the heat absorbed from or released to the surroundings is zero. A vacuum (Dewar) flask realises a good approximation to an adiabatic container. Any work done must therefore be at the expense of the internal energy. If the ‘system’ is a gas then its temperature will not remain constant during any expansion or compression.
In expansion the work done is $dw=-pdV$ and the change in internal energy $ dU=C_vdT$.
The heat change is zero then $dq=0$ which means from the First Law $$dU = dw$$and so$$ C_vdT = -pdV$$
Dividing both sides by $T$ and using $p=RT/V$ for one mole of a perfect gas, $$C_v\frac{dT}{T}=-R\frac{dV}{V}$$ If the gas starts at $T_1,V_1$ and ends up at $T_2,V_2$ the last equation can be integrated (and rearranged) to give $$\ln\left (\frac{T_2}{T_1}\right)=-\ln\left(\frac{V_2}{V_1}\right)^{R/C_v}$$ or $$\frac{T_1}{T_2}=\left(\frac{V_2}{V_1}\right)^{R/C_v}$$
Using the relationship $C_p = C_v+R$, $$\frac{T_1}{T_2}=\left(\frac{V_2}{V_1}\right)^{(C_p-C_v)/C_v}$$ Using the gas law this can be rewritten in a more useful form as $$ p_1V_1^{\gamma } = p_2V_2^{\gamma }$$ where $\gamma = C_p/C_v$. This is also written as $pV^\gamma = \text{const}$.
The change in internal energy in an adiabatic process is$$\Delta U=C_v(T_2-T_1)$$
Adiabatic reversible
In a reversible adiabatic change we use the formulas above to work out what happens. If $n$ moles of a gas fill a container of volume $V_1$ at $p_1$ atm. and the gas is expanded reversibly and adiabatically until it is in equilibrium at a final pressure $p_2$, we can calculate the final volume and temperature. The values of $C_p$ and $C_v$ are assumed to be known and constant with temperature.
The equation to use is
$$ p_1V_1^{\gamma} = p_2V_2^{\gamma}$$and as only $V_2$ is unknown this can be calculated. The work done is $$w=nC_v(T_2-T_1)$$
Adiabatic irreversible
In an irreversible adiabatic change, if $n$ moles of a perfect gas expand irreversibly from a pressure of $p_1$ against a constant external pressure $p_2$, the temperature drops from $T_1$ to $T_2$. We can calculate how much work is done and the final volume. The internal energy change is $\Delta U= nC_v(T_2-T_1)$ and the work done on the gas is $w=-p_2(V_2-V_1)$; as $\Delta U = w$ (the system is adiabatic) the final volume can be obtained. Similarly if the volumes are known the final temperature can be obtained.
Isothermal irreversible
In an isothermal irreversible change the work done on suddenly allowing a perfect gas to expand from $V_1$ to $V_2$ is determined by the final external pressure $p_2$ and is $$ q=-w =\int_{V_1}^{V_2} p\,dV = p_2(V_2-V_1)$$ thus expansion into a vacuum does no work.
Isothermal reversible
A reversible isothermal change performs the maximum possible amount of work and, assuming a perfect gas, $$ q=-w_{rev} =\int_{V_1}^{V_2} p\,dV=nRT \int_{V_1}^{V_2} \frac{1}{V}\,dV$$
$$w_{rev} =- nRT\ln\left(\frac{V_2}{V_1}\right)= - nRT\ln\left(\frac{p_1}{p_2}\right)$$
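As a quick numerical sanity check of the reversible adiabatic relations, here is a sketch; symbols follow the text above, with $R/C_v = \gamma - 1$:

```python
# Sketch of the reversible adiabatic relations for a perfect gas.

def adiabatic_p2(p1, v1, v2, gamma):
    # p1 * V1**gamma = p2 * V2**gamma
    return p1 * (v1 / v2) ** gamma

def adiabatic_t2(t1, v1, v2, gamma):
    # T1 / T2 = (V2 / V1)**(R/Cv)  =>  T2 = T1 * (V1 / V2)**(gamma - 1)
    return t1 * (v1 / v2) ** (gamma - 1.0)

# Example: a monatomic perfect gas (gamma = 5/3) doubling its volume.
gamma = 5.0 / 3.0
p2 = adiabatic_p2(1.0, 1.0, 2.0, gamma)
t2 = adiabatic_t2(300.0, 1.0, 2.0, gamma)
```

A useful consistency check is that the ideal-gas combination $pV/T$ is the same before and after the change, and that an adiabatic expansion cools the gas.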
|
Oceanic Basin Modes: Quasi-Geostrophic approach¶
This tutorial was contributed by Christine Kaufhold and Francis Poulin.
As a continuation of the Quasi-Geostrophic (QG) model described in the other tutorial, we will now see how we can use Firedrake to compute the spatial structure and frequencies of the freely evolving modes in this system, what are referred to as basin modes. Oceanic basin modes are low frequency structures that propagate zonally in the oceans that alter the dynamics of Western Boundary Currents, such as the Gulf Stream. In this particular tutorial we will show how to solve the QG eigenvalue problem with no basic state and no dissipative forces. Unlike the other demo that integrated the equations forward in time, in this problem it is necessary to compute the eigenvalues and eigenfunctions for a particular differential operator. This requires using PETSc matrices and eigenvalue solvers in SLEPc.
This demo requires SLEPc and slepc4py to be installed. This is most easily achieved by providing the optional --slepc flag to either firedrake-install (for a new installation) or firedrake-update (to add SLEPc to an existing installation).
Governing PDE¶
We first briefly recap the nonlinear, one-layer QG equation that we considered previously. The interested reader can find the derivations in [Ped92] and [Val06]. This model consists of an evolution equation for the Potential Vorticity, \(q\), and an elliptic problem through which we can determine the streamfunction,
Where \(\psi\) is the stream-function, \(\vec{u}=(u, v)\) is the velocity field, \(q\) is the Potential Vorticity (PV), \(\beta\) is the Coriolis parameter and \(F\) is the rotational Froude number. The velocity field is easily obtained using
We assume that the amplitude of the wave motion is very small, which allows us to linearize the equations of motion and therefore neglect the nonlinear advection,
We look for wave-like solutions that are periodic in time, with a frequency of \(\omega\)
This has the advantage of removing the time derivative from the equation and replacing it with an eigenvalue, \(i \omega\). By substituting the above solution into the QG equation, we can find a complex eigenvalue problem of the form
Weak Formulation¶
To use a finite element method it is necessary to formulate the weak form and then we can use SLEPc in Firedrake to compute eigenvalue problems easily. To begin, we multiply this equation by a Test Function \(\phi\) and integrate over the domain \(A\).
To remove the Laplacian operator we use integration by parts and the Divergence theorem to obtain
No-normal flow boundary conditions are required and mathematically this means that the streamfunction must be a constant on the boundary. Since the test functions inherit these boundary conditions, \(\hat{\phi} = 0\) on the boundary, the boundary integral vanishes and the weak form becomes,
Firedrake code¶
Using this form, we can now implement this eigenvalue problem in Firedrake. We import the Firedrake, PETSc, and SLEPc libraries.
from firedrake import *
from firedrake.petsc import PETSc
try:
    from slepc4py import SLEPc
except ImportError:
    import sys
    warning("Unable to import SLEPc, eigenvalue computation not possible (try firedrake-update --slepc)")
    sys.exit(0)
We specify the geometry to be a square geometry with \(50\) cells with length \(1\).
Lx = 1.
Ly = 1.
n0 = 50
mesh = RectangleMesh(n0, n0, Lx, Ly, reorder=None)
Next we define the function spaces within which our solution will reside.
Vcg = FunctionSpace(mesh, 'CG', 3)
We impose zero Dirichlet boundary conditions, in a strong sense, which guarantee that we have no-normal flow at the boundary walls.
bc = DirichletBC(Vcg, 0.0, "on_boundary")
The two non-dimensional parameters are the \(\beta\) parameter, set by the sphericity of the Earth, and the Froude number, the relative importance of rotation to stratification.
beta = Constant('1.0')
F = Constant('1.0')
Additionally, we can create some Functions to store the eigenmodes.
eigenmodes_real, eigenmodes_imag = Function(Vcg), Function(Vcg)
We define the Test Function \(\phi\) and the Trial Function \(\psi\) in our function space.
phi, psi = TestFunction(Vcg), TrialFunction(Vcg)
To build the weak formulation of our equation we need to build two PETSc matrices in the form of a generalized eigenvalue problem, \(A\psi = \lambda M\psi\). We impose the boundary conditions on the mass matrix \(M\), since that is where we used integration by parts.
a = beta*phi*psi.dx(0)*dx
m = -inner(grad(psi), grad(phi))*dx - F*psi*phi*dx
petsc_a = assemble(a).M.handle
petsc_m = assemble(m, bcs=bc).M.handle
We can declare how many eigenpairs (eigenvalues together with their eigenfunctions) we want to find:
num_eigenvalues = 1
Next we will set parameters for our eigenvalue solver. The first specifies that we have a generalized eigenvalue problem that is non-Hermitian. The second specifies the spectral transform shift factor to be non-zero. The third selects the Krylov-Schur method, which is the default so this is not strictly necessary. Then, we ask for the eigenvalues with the largest imaginary part. Finally, we specify the tolerance.
opts = PETSc.Options()
opts.setValue("eps_gen_non_hermitian", None)
opts.setValue("st_pc_factor_shift_type", "NONZERO")
opts.setValue("eps_type", "krylovschur")
opts.setValue("eps_largest_imaginary", None)
opts.setValue("eps_tol", 1e-10)
Finally, we build our eigenvalue solver using SLEPc. We add our PETSc matrices into the solver as operators and use setFromOptions() to call the PETSc parameters we previously declared.
es = SLEPc.EPS().create(comm=COMM_WORLD)
es.setDimensions(num_eigenvalues)
es.setOperators(petsc_a, petsc_m)
es.setFromOptions()
es.solve()
Additionally we can find the number of converged eigenvalues.
nconv = es.getConverged()
We now get the real and imaginary parts of the eigenvalue and eigenvector for the leading eigenpair (the one whose imaginary part is largest in magnitude). First we check whether we actually managed to converge any eigenvalues at all.
if nconv == 0:
    import sys
    warning("Did not converge any eigenvalues")
    sys.exit(0)
If we did, we go ahead and extract them from the SLEPc eigenvalue solver:
vr, vi = petsc_a.getVecs()
lam = es.getEigenpair(0, vr, vi)
and we gather the final eigenfunctions
eigenmodes_real.vector()[:], eigenmodes_imag.vector()[:] = vr, vi
We can now list and show plots for the eigenvalues and eigenfunctions that were found.
print("Leading eigenvalue is:", lam)
try:
    from matplotlib import pyplot
    plot(eigenmodes_real)
    pyplot.gcf().show()
    plot(eigenmodes_imag)
    pyplot.gcf().show()
except ImportError:
    warning("Matplotlib not available, not plotting eigenmodes")
Below is a plot of the spatial structure of the real part of one of the eigenmodes computed above.
Below is a plot of the spatial structure of the imaginary part of one of the eigenmodes computed above.
This demo can be found as a Python script in qgbasinmodes.py.
You started off correctly but along the way used the wrong result that $\lim_{x \to 3}f(x) = f(3)$. This is not known in advance as we don't know whether $f$ is continuous or not. Another problem is that you are trying to deal with both $x \to 3^{-}$ and $x \to 3^{+}$ separately. This is required only when the definition of the function concerned is different for the cases $x < 3$ and $x > 3$.
You only need to prove one result here and that is $$\lim_{x \to 3}(x - 3)f(x) = 0$$ and this is easily done by using the Squeeze theorem. Let $a(x) = (x - 3)f(x)$ then we know that $$0 \leq |a(x)| = |(x - 3)||f(x)| \leq |x - 3|$$ because we know that $|f(x)| \leq 1$. Thus by Squeeze theorem we get $$\lim_{x \to 3}|a(x)| = 0$$ Further $$-|a(x)| \leq a(x)\leq |a(x)|$$ and again applying Squeeze theorem gives us $$\lim_{x \to 3}a(x) = 0$$ Now note that $a(3) = 0$ so $a$ is continuous at $3$.
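The squeeze argument can be sanity-checked numerically; a small sketch with one hypothetical bounded $f$ (the choice $f(x) = \sin(1/(x-3))$ is mine, picked because it has no limit at $3$ yet satisfies $|f| \le 1$):

```python
import math

# Hypothetical bounded f with |f(x)| <= 1 and no limit as x -> 3:
def f(x):
    return math.sin(1.0 / (x - 3.0))

def a(x):
    return (x - 3.0) * f(x)

# As x -> 3, |a(x)| is squeezed between 0 and |x - 3|.
for k in range(1, 8):
    x = 3.0 + 10.0**(-k)
    assert abs(a(x)) <= abs(x - 3.0)
print(abs(a(3.0 + 1e-7)))  # tiny, even though f oscillates wildly near 3
```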
The second part about $b(x)$ is easy if you know the first part.
As requested by OP here is an approach via the $\epsilon-\delta$ definition. As I have mentioned in comments, this definition cannot be used to evaluate a limit of a function, but it can be used to check whether a number is a limit of the function or not. So in this method we need to guess the limit somehow.
For the current problem we are asked to prove that $a(x)$ is continuous at $x_{0} = 3$. By definition of continuity this is equivalent to proving that $\lim_{x \to 3}a(x) = a(3)$. Now $a(3) = 0$ so we have to prove that $\lim_{x \to 3}a(x) = 0$. So you see that the question itself has given you the limit $0$ so that the trouble of guessing the limit is not here. Lucky!!
Now to prove that $\lim_{x \to 3}a(x) = 0$ we need to ensure that for every $\epsilon > 0$ there is a number $\delta > 0$ such that $$|a(x) - 0| < \epsilon$$ whenever $0 < |x - 3| < \delta$. Thus we need to start with an $\epsilon > 0$ and somehow try to find a suitable $\delta > 0$ (depending on $\epsilon$) such that $|a(x)| < \epsilon$ whenever $0 < |x - 3| < \delta$.
Now let $\epsilon > 0$ be given. Our goal is to satisfy the inequality $$|a(x)| < \epsilon$$ or $$|(x - 3)f(x)| < \epsilon$$ Now note that $|f(x)| \leq 1$ so we already know that $$|(x - 3)f(x)| = |x - 3| |f(x)| \leq |x - 3|$$ and hence if we can get $|x - 3| < \epsilon$ then automatically we will have $$|a(x)| \leq |x - 3| < \epsilon$$ and our goal will be achieved. Thus we can take $\delta = \epsilon$ here and then $0 < |x - 3| < \delta$ will imply $0 < |x - 3| < \epsilon$ and this will imply that $|a(x)| < \epsilon$. Our proof is now complete and $\lim_{x \to 3}a(x) = 0$.
I am confused about, what I believe, refers to passive and active transformations in QM. What I have understood so far is that the matrix elements $\langle \psi| \hat{H}|\phi\rangle$ should remain unchanged under transformations; this implies that $\hat{H} = \hat{U}^\dagger \hat{H} \hat{U}$.
On the other hand, under certain transformation $\hat{U}$, an arbitrary operator $\hat{\Omega}$ transforms as $\hat{U} \hat{\Omega} \hat{U}^\dagger$.
1) Why should the matrix elements of the hamiltonian remain unchanged under a transformation? Is that the same as saying: "The evolution of the state is governed by the hamiltonian; under a transformation the system should still evolve in the same way, hence the matrix elements should be conserved"?
2) What does $\hat{H} = \hat{U}^\dagger \hat{H} \hat{U}$ mean? Is that talking about how the hamiltonian transforms? Or is it only a condition on the transformation $\hat{U}$?
3) How does $\hat{\Omega} \to \hat{U} \hat{\Omega} \hat{U}^\dagger$ compare to $\hat{U}|\phi\rangle$? Does an active transformation mean applying $\hat{U}$ to every state, while a passive transformation is leaving vectors as they are and transforming the operators as $\hat{\Omega} \to \hat{U} \hat{\Omega} \hat{U}^\dagger$? Or does applying a transformation mean doing both of the above?
There ought to be a diagram showing how the angle $\theta$ is defined. Nevertheless, the boundary condition which you have been given is confusing.
The diagram above shows a vertical plane through the centre of the spherical electrode. There is cylindrical symmetry here, so a cylindrical co-ordinate system is the obvious choice. If the plane is rotated through any azimuthal angle $\theta$ about the vertical axis $z$ through the centre of the sphere, then all measurements (potentials, current densities, etc) would be the same at all points with the same cylindrical co-ordinates $\rho, z$. So there could be no difference in potential or current density around any circle of radius $\rho$ which is centred on and perpendicular to the $z$ axis.
Inside the conducting electrode there is no resistance (infinite conductivity) so there could be a current of constant density around such a circle inside the electrode, without there being any change in potential around any azimuthal circle : $$J_{\theta}=\text{constant}$$
Inside the regions of finite conductivity 1 & 2, there could not be any current around an azimuthal circle, because there would have to be a change in potential between the start and the end points, which are the same, so this is impossible :$$J_{1\theta}=J_{2\theta}=0$$
In all three regions the boundary condition applies for all values of $\theta$ and particular values of $\rho, z$. There is nothing special about $\theta=\frac12 \pi$. The boundary condition given in the book is confusing. It suggests that there is something special about the azimuthal angle $\theta=\frac12\pi$ but there is nothing in the problem which supports this.
Perhaps a spherical polar system is being used. If so, and if $\phi$ is the polar angle, then $\phi=\frac12 \pi$ defines the horizontal plane through the centre of the sphere. The above boundary condition is still true for this value of $\phi$ and all values of $r$ : $$J_{1\theta}(\phi=\frac12 \pi)=J_{2\theta}(\phi=\frac12 \pi)=0$$ But there is nothing special about this plane. The boundary condition applies for all other planes perpendicular to the axis, and all values of $\phi$ and $r$ provided that $\rho=r\sin\phi$ and $z=r\cos\phi$ are constant.
Consider the following two polynomials \begin{align} p(s)&:=s^n+\alpha_{n-1}s^{n-1}+\cdots+\alpha_1s+\alpha_0,\\ q(s)&:=s^{n-1}+\alpha_{n-1}s^{n-2}+\cdots+\alpha_2 s+\alpha_1, \end{align} where $\alpha_{0},\dots,\alpha_{n-1}$ are positive real numbers.
Assume that $p(s)=sq(s)+\alpha_0$ is Hurwitz, i.e., every root $\lambda_i\in\mathbb{C}$ of $p(s)$ satisfies $\Re\mathrm{e}(\lambda_i)\leq 0$. Then, can we conclude that $q(s)$ is also Hurwitz?
For $n=2$ and $n=3$, the answer is positive. Indeed, the result follows by applying Descartes' rule of signs. What about the general case $n>3$?
Any help will be appreciated.
A hypothetical diatomic molecule has a bond length of $0.8860\ \mathrm{nm}$. When the molecule makes a rotational transition from $l = 2$ to the next lower energy level, a photon is released with $\lambda_r = 1403\ \mathrm{\mu m}$. At a vibrational transition to a lower energy state, a photon is released with $\lambda_v = 4.844\ \mathrm{\mu m}$. Determine the spring constant $k$.
What I've done is:
1) Calculate the moment of inertia of the molecule by equating Planck's equation to the transition rotational energy:
$$E = \frac{hc}{\lambda} = \frac{2 \hbar^2}{I}$$
Thus solving for $I$:
$$I = 1.562 \times 10^{-46}\ \mathrm{kg\,m^2}$$
2) Knowing that the moment of inertia of a diatomic molecule rotating about its CM can be expressed as $I = \rho r^2$, solve for $\rho$ (where $\rho$ is the reduced mass and $r$ is the distance from one of the molecules to the axis of rotation; thus $r$ is half the bond length. EDIT: $r$ is the bond length and NOT half of it. Curiously, if you use the half value you get a more reasonable k: around $3 N/m$):
$$\rho = \frac{I}{r^2} = 1.99 \times 10^{-28}\ \mathrm{kg}$$
3) Solve for $k$ from the frequency of vibration equation:
$$f = \frac{1}{2\pi}\sqrt{\frac{k}{\rho}}$$
Knowing:
$$\omega = 2\pi f$$
We get:
$$k = \omega^2 \rho = (c/\lambda_v)^2 \rho = 0.763 N/m$$
Where $\lambda_v= 4.844 \times 10^{-6} m$
The problem I see is that the method seems to be OK but the result does not convince me. We know that the stiffness constant $k$ is a measure of the resistance offered by a body to deformation. The unknown diatomic molecule we're dealing with seems to be much more elastic than $\mathrm{H_2}$ (which has $k = 550\ \mathrm{N/m}$).
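For reference, here is the arithmetic above reproduced in Python. Note it follows the post in taking $\omega = c/\lambda_v$; with the usual $\omega = 2\pi c/\lambda_v$ the result would come out $4\pi^2 \approx 39$ times larger, which may be the source of the surprisingly small $k$:

```python
import math

hbar = 1.054571817e-34   # J s
h    = 6.62607015e-34    # J s
c    = 2.998e8           # m/s

lam_r = 1403e-6          # rotational photon wavelength, m
lam_v = 4.844e-6         # vibrational photon wavelength, m
r     = 0.8860e-9        # bond length, m

# Step 1: l = 2 -> 1 transition energy: E = hc/lam_r = 2*hbar^2/I
I = 2 * hbar**2 * lam_r / (h * c)

# Step 2: reduced mass from I = rho * r^2 (full bond length, per the EDIT)
rho = I / r**2

# Step 3: k as computed in the post, with omega taken as c/lam_v
k = (c / lam_v)**2 * rho
print(I, rho, k)   # roughly 1.57e-46 kg m^2, 2.0e-28 kg, 0.77 N/m
```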
In "A Classical Introduction to Modern Number Theory" by Ireland and Rosen, a very close bound is obtained by completely elementary means.
Assume $p_n \le x < p_{n+1}$, where $p_i$ denotes the $i$'th prime. By definition, $\pi(x) = n$.
Consider the numbers up to $x$: $\{1,2,\cdots, x \}$. They are divisible only by primes among $\{ p_1, p_2, \cdots, p_{n} \}$. If you decompose those numbers into a square part and a square-free part, i.e. write $a = rs^2$ where $r$ is square-free (a product of distinct primes) and $s^2$ is a perfect square, we see that there are 2 constraints on $r,s$:
Since $r$ is square-free, it must be a product of distinct primes among the first $n$ primes. There are only $2^{n}$ options for $r$.
Since $s^2 \le a$, we must have $s \le \sqrt{a} \le \sqrt{x}$, so there are at most $\sqrt{x}$ options for $s$.
All in all, there are at most $2^n \times \sqrt{x}$ options for those numbers between $1$ and $x$:$$ x \le 2^{n} \sqrt{x} = 2^{\pi(x)} \sqrt{x}$$
This shows that $\pi(x) \ge \log_{2} \sqrt{x} = \frac{\log_{2} x}{2}$. $\blacksquare$
Similarly, if we decompose those numbers into an $n$'th-power and an $n$-powerfree number, we find:$$\pi(x) \ge \frac{\ln x (1-\frac{1}{n}) }{\ln n} $$But $n=2$ already gives the best bound. I will think later about improving this to $\pi(x) \ge \log_2 x$ - I believe it is possible with a similar method.
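The bound $\pi(x) \ge \frac{\log_2 x}{2}$ is easy to sanity-check numerically; a small sketch (the sieve and the range up to 10000 are just illustrative choices):

```python
import math

def primes_upto(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_upto(10000)

def pi(x):
    # prime-counting function, valid for x <= 10000 here
    return sum(1 for p in primes if p <= x)

# check pi(x) >= log2(x)/2 for a range of x
for x in range(2, 10001):
    assert pi(x) >= math.log2(x) / 2
```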
I have a question from my teacher, and I cannot understand how to find the modulation index from the figure.
The question provides a figure like this:
And the information signal is a sinusoidal test signal with peak amplitude $6\ \rm V$ and is applied to an AM-DSB-C modulator, the Fourier spectrum of the modulated signal is shown above.
The solution is like this:
As $\displaystyle \frac {\frac {Acm}{4}}{\frac{Ac}{2}}=\frac{3}{4}$, $m=1.5$, so the modulation index is $1.5$. Moreover, as $\displaystyle m=\frac{x}{c}$ and the peak amplitude of $s(t)$ is $6\ \rm V$, the DC offset ($c$) is $4\ \rm V$.
I know where the $\displaystyle \frac {Ac}{2}=4$ comes from:
as $s_{AM-DSB-C}(t)=A\big(s(t)+c\big)\cos(2 \pi f_c t)$
\begin{align} \mathcal F\left\{s_{AM-DSB-C}(t)\right\} &=S_{AM-DSB-C}(f)\\ &=\frac{A}{2}\big[S(f-f_c)+S(f+f_c)\big] + \frac {Ac}{2} \big[\delta (f-f_c) + \delta (f+f_c)\big] \end{align}
So, from the second term we can get $\displaystyle\frac {Ac}{2}=4$.
But, I cannot understand how the solution can get $\displaystyle \frac {Acm}{4}=3$.
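The sideband weight can be checked numerically. The sketch below builds the AM-DSB-C signal with assumed values $A=1$, $c=4$ and peak $x=6$ (so $m = x/c = 1.5$), and reads the spectral line weights off an FFT; the sideband bin carries $Ax/4 = Acm/4$:

```python
import numpy as np

A, c_off, x_pk = 1.0, 4.0, 6.0          # assumed values: m = x_pk/c_off = 1.5
f_c, f_m, fs, N = 1000.0, 100.0, 8000, 8000

t = np.arange(N) / fs                    # exactly 1 s => integer cycles of both tones
s = A * (x_pk * np.cos(2*np.pi*f_m*t) + c_off) * np.cos(2*np.pi*f_c*t)

S = np.fft.rfft(s) / N                   # one-sided spectrum, 1 Hz per bin
carrier  = abs(S[int(f_c)])              # A*c/2   -> 2.0
sideband = abs(S[int(f_c + f_m)])        # A*x/4   -> 1.5, i.e. A*c*m/4
print(carrier, sideband, 2 * sideband / carrier)  # last value recovers m = 1.5
```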
Let
$\Omega\subseteq\mathbb{R}^n$ be a bounded domain,
$H:=W_0^{1,2}(\Omega)$ be the Sobolev space,
$|\;\cdot\;|_p$ be the seminorm $$|u|_p^p:=\int_\Omega|\nabla u|^p\;d\lambda^n\;\;\;\text{for }u:\Omega\to\mathbb{R}\;\text{weakly differentiable}$$ on $L^p:=L^p(\Omega)$, and
$\left\|\;\cdot\;\right\|_p$ be the $L^p$-norm.
From the basic theory of the eigenvalue problem of the Laplacian, one knows that $$R(u):=\frac{|u|_2^2}{\left\|u\right\|_2^2}\;\;\;\text{for }u\in H\setminus\left\{0\right\}\tag{1}$$ attains its infimum in $H\setminus\left\{0\right\}$. Now, I would like to show, that $$\tilde{R}(u):=R(u)+\frac{\left\|\sqrt{\alpha}u\right\|_2^2}{\left\|u\right\|_2^2}\;\;\;\text{for }u\in H\setminus\left\{0\right\}\tag{2}\;,$$ for some $\alpha\in L^\infty$, has a minimum, too.
We may note that we can assume that $R$ attains its minimum $\lambda_1$ at $u_1\in H$ with $\left\|u_1\right\|_2^2=1$. Then, $$\tilde{R}(u_1)=\lambda_1+\left\|\sqrt{\alpha} u_1\right\|_2^2\;.$$ However, I don't see how to proceed from here. Maybe this is not the right track and we need to use the Poincaré inequality $$\left\|u\right\|_2^2\le C|u|_2^2\;\;\;\text{for all }u\in H\;,$$ for some $C>0$, instead.
|
Illinois Journal of Mathematics, Volume 48, Number 1 (2004), 89-96.
On the geometry of positively curved manifolds with large radius
Abstract
Let $M$ be an $n$-dimensional complete connected Riemannian manifold with sectional curvature $K_M\geq 1$ and radius $\operatorname{rad}(M)>\pi /2$. For any $x\in M$, denote by $\operatorname{rad} (x)$ and $\rho (x)$ the radius and conjugate radius of $M$ at $x$, respectively. In this paper we show that if $\operatorname{rad} (x)\leq \rho (x)$ for all $x\in M$, then $M$ is isometric to a Euclidean $n$-sphere. We also show that the radius of any connected nontrivial (i.e., not reduced to a point) closed totally geodesic submanifold of $M$ is greater than or equal to that of $M$.
Article information
Source: Illinois J. Math., Volume 48, Number 1 (2004), 89-96.
Dates: First available in Project Euclid: 13 November 2009
Permanent link: https://projecteuclid.org/euclid.ijm/1258136175
Digital Object Identifier: doi:10.1215/ijm/1258136175
Mathematical Reviews number (MathSciNet): MR2048216
Zentralblatt MATH identifier: 1048.53024
Subjects: Primary: 53C21: Methods of Riemannian geometry, including PDE methods; curvature restrictions [See also 58J60]. Secondary: 53C20: Global Riemannian geometry, including pinching [See also 31C12, 58B20]
Citation
Wang, Qiaoling. On the geometry of positively curved manifolds with large radius. Illinois J. Math. 48 (2004), no. 1, 89--96. doi:10.1215/ijm/1258136175. https://projecteuclid.org/euclid.ijm/1258136175
Logarithms and Exponentials
log computes logarithms, by default natural logarithms, log10 computes common (i.e., base 10) logarithms, and log2 computes binary (i.e., base 2) logarithms. The general form log(x, base) computes logarithms with base base.
log1p(x) computes \(\log(1+x)\) accurately also for \(|x| \ll 1\).
exp computes the exponential function.
expm1(x) computes \(\exp(x) - 1\) accurately also for \(|x| \ll 1\).
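A quick illustration of the accuracy point, here in Python since its math module exposes the same C library functions log1p and expm1: the naive forms round 1 + x first and lose most of the answer for tiny x.

```python
import math

x = 1e-15
naive_log = math.log(1 + x)    # 1 + x rounds in double precision first
good_log  = math.log1p(x)      # accurate for |x| << 1
naive_exp = math.exp(x) - 1
good_exp  = math.expm1(x)
print(naive_log, good_log)     # the naive value is noticeably off here
```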
Keywords math Usage
log(x, base = exp(1))
logb(x, base = exp(1))
log10(x)
log2(x)
log1p(x)
exp(x)
expm1(x)
Arguments

x: a numeric or complex vector.

base: a positive or complex number: the base with respect to which logarithms are computed. Defaults to \(e\) = exp(1).
Details
All except logb are generic functions: methods can be defined for them individually or via the Math group generic.

log10 and log2 are only convenience wrappers, but logs to bases 10 and 2 (whether computed via log or the wrappers) will be computed more efficiently and accurately where supported by the OS. Methods can be set for them individually (and otherwise methods for log will be used).

logb is a wrapper for log for compatibility with S. If (S3 or S4) methods are set for log they will be dispatched. Do not set S4 methods on logb itself.

All except log are primitive functions.
Value

A vector of the same length as x containing the transformed values. log(0) gives -Inf, and log(x) for negative values of x is NaN. exp(-Inf) is 0.
For complex inputs to the log functions, the value is a complex number with imaginary part in the range \([-\pi, \pi]\): which end of the range is used might be platform-specific.
S4 methods

exp, expm1, log, log10, log2 and log1p are S4 generic and are members of the Math group generic.

Note that this means that the S4 generic for log has a signature with only one argument, x, but that base can be passed to methods (but will not be used for method selection). On the other hand, if you only set a method for the Math group generic then the base argument of log will be ignored for your class.
References

Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole. (for log, log10 and exp.)

Chambers, J. M. (1998) Programming with Data. A Guide to the S Language. Springer. (for logb.)
See Also

Aliases: log, logb, log10, log2, log1p, exp, expm1

Examples
library(base)
# NOT RUN {
log(exp(3))
log10(1e7) # = 7
x <- 10^-(1+2*1:9)
cbind(x, log(1+x), log1p(x), exp(x)-1, expm1(x))
# }
Documentation reproduced from package base, version 3.5.1, License: Part of R 3.5.1

Community example (base v3.4.1, Sep 16, 2017):

x1 <- c(1.1, -2.3, 2.5, 0.5, -3.2, -4, 5.2, -2.2, -2.2, 3)
y3 <- log2(x1)
y3
Nonrelativistic solution

The variables used will be:
$x$ for the distance travelled
$v$ for velocity
$a$ for acceleration ($1~\mathrm{g}$)
$t$ for the time
$c$ for the speed of light.

Non braking

Assuming the velocity you arrive at does not matter we take the equation $$x = \frac12 a t^2\ .$$ Solve for $t$: $$t = \sqrt{\frac{2x}{a}}\ .$$ (Let’s ...

The eccentricity is 1.0. The eccentricity $e$ of an orbit can be found from the radius of apoapse and periapse as: $$e=\frac{r_a-r_p}{r_a+r_p}$$ and the semimajor axis $a$ can as well, from: $$a=\frac{r_a+r_p}{2}$$ If you throw an object horizontally (velocity perpendicular to position vector) you will end up in a closed orbit if you throw at slower ...
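The two formulas above translate directly to code; a minimal sketch (the sample radii are made-up illustrative numbers for a low Earth orbit, in km):

```python
def eccentricity(r_a, r_p):
    # e = (r_a - r_p) / (r_a + r_p), apoapsis and periapsis radii
    return (r_a - r_p) / (r_a + r_p)

def semimajor_axis(r_a, r_p):
    # a = (r_a + r_p) / 2
    return (r_a + r_p) / 2.0

# e.g. 200 x 400 km altitudes above a 6378 km Earth radius:
r_p, r_a = 6378.0 + 200.0, 6378.0 + 400.0
print(eccentricity(r_a, r_p), semimajor_axis(r_a, r_p))
```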
Breaking that down:
- launch due East
- site in the Ecuadorean Andes
- sometime before local midnight
- on a July 4
- when there's a new moon

Launch due east. With the exception of launch sites that cannot launch due east lest a failed launch rain debris on some other country, this is the preferable direction. Launching to the east takes advantage ...

This is not a recurrence for a Legendre polynomial $P_{n}(\mu)$ but an Associated Legendre Function (ALF). This1 states much more than is needed here. ALFs are often denoted $P_{n,m}(\mu)$ in geophysics. The degree $n$ and order $m$ are non-negative integers $0, 1, 2, \dots$ with $0 \le m \le n.$ Argument $\mu$ is the sine of the latitude (measured north or ...
The eccentricity of a radial orbit is $1$, regardless of its energy.This is a class of orbits where the type of orbit cannot be inferred from the eccentricity alone. With a "traditional" parabolic orbit of $e=1$, the angular momentum $L$ has a well defined value, but the semi-major axis $a$ is not defined. In the case of a vertical bounded free-fall orbit, ...
The expression on the right is meant to give the eccentricity vector but the vector notation has been lost. Here it is in this answer: $$ e = {v^2 r \over {\mu}} - {(r \cdot v ) v \over{\mu}} - {r\over{\left|r\right|}}$$ and the vector nature is not clear either. We should write it as $$ \mathbf{e} = {v^2 \mathbf{r} \over {\mu}} - {(\mathbf{r} \cdot \...

When you're trying to rotate something, there are two cases:

1) The torque you're applying is large compared to the angular momentum the body has, i.e. when the body isn't rotating. Then it starts to rotate in the direction of the torque you're applying. This is the more intuitive case.

2) The torque you're applying is small compared to the angular ...

I've looked through your numbers, and they do seem to be correct. The range values included are likely intended to be in km, but seem to be inaccurate. The range values are not official at this point in time, and do not have access to the spacecraft ephemeris data yet. I suspect when that data comes down the range values will be estimated better. Bottom ...

Check out the diagram at the top of the page that you got the equation from. Let's define our terms:
$\dot{m}_e V_e$ is the momentum thrust term
$\dot{m}_0 V_0$ is the incoming momentum term
$(p_e - p_0) A_e$ is the pressure thrust term

The incoming momentum term is important for jet engines because the engine swallows the incoming stream and then ...

Ideally if you could design the nozzle to match the exhaust pressure in a vacuum (i.e. nearly zero), the third term drops automatically. If $p_0$ is zero, then $p_e$ would have to go to zero as well because an ideally designed nozzle results in no pressure drag (i.e. ambient freestream pressure and exhaust pressure are the same). In reality, such a nozzle ...
Let's start by assuming you don't decelerate halfway. Work in units with $c=1$. With a constant acceleration of $a$, the rapidity $\phi=a\tau$ at a proper time $\tau$ after you start from rest, so $$\beta=\tanh a\tau,\,\gamma=\cosh a\tau,\,dx=\beta dt=\beta\gamma d\tau=\sinh a\tau d\tau,$$where $x$ is the distance travelled and $dt=\gamma d\tau$ is the ...
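The no-braking relations above integrate to $x = (\cosh a\tau - 1)/a$ and $t = (\sinh a\tau)/a$, which are easy to evaluate numerically; a sketch in units where $c = 1$, with 1 g expressed in these units (about 1.0323 per year, i.e. $g \cdot 1\,\mathrm{yr}/c$) being an assumption of the example:

```python
import math

a = 1.0323   # 1 g in units of c per year (g * 1 yr / c)

def travel(x):
    """Constant proper acceleration from rest, no braking; x in light-years.
    Returns (proper time tau, coordinate time t) in years, from
    x = (cosh(a*tau) - 1)/a  and  t = sinh(a*tau)/a."""
    tau = math.acosh(1 + a * x) / a
    t = math.sinh(a * tau) / a
    return tau, t

tau, t = travel(4.37)   # roughly the distance to Alpha Centauri
print(tau, t)           # a few years of proper time, a bit over x years of coordinate time
```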
The potential of fly-by ping pong is pretty much unlimited, provided you have enough time at your disposal. Given an initial transfer with a perihelion slightly lower than the orbit of Venus, a Venus flyby can increase the aphelion to a bit further out than the Earth's orbit. On the following Earth flyby, the perihelion can be lowered, and you can just ...

I will give my current best shot at this problem, and others should feel free to strengthen the argument with additional mathematics. (Or poke holes!) You ask two questions; I will answer the first, as the second has been partially answered by the update. Are there other inclination change strategies that are more efficient for some values of $\alpha$? ...

Given either geocentric latitude, longitude, and radius or geodetic latitude, longitude, and altitude, the computation of Earth-centered, Earth-fixed cartesian coordinates is fairly simple. For geocentric coordinates $R,\theta,\lambda$, one uses $$\begin{aligned}R\ &\text{is the radial distance from the center of the Earth} \\ \theta\ &\text{is ...}\end{aligned}$$
I've answered some aspects of this, but considering a 1km railgun in my answer to the "parent" question. A much longer railgun doesn't make a lot of difference to the calculations there, except for the acceleration and power. The issues with reaction de-orbiting the railgun are the same. Concerning the shape of a longer railgun. Let us consider a 10g 4 km/s ...
Update: As of 2019-01-17, the page has been updated with range figures more in line with what you've calculated. According to the JHUAPL website (click on 'Learn more about these images', for some reason a direct link didn't work), range should be in km: The following ancillary information is provided for each image posted: the date of the observation ...

note: This is a very helpful extended comment that may be of use to the OP but it can't currently be posted as a comment until this user reaches 50 reputation points. Oh this is a fantastic question. It is common to fall into the following trap when making these types of calculations. Check carefully your reference frames. Delta V numbers are all relative, ...

Short answer: you stop nutation the way you stop any oscillation, by giving it an appropriate opposite impulse as it crosses zero, the oscillation's central, zero energy point.

Longer answer: The terminology in this area can be messy. So let's start with a simple model. The classic motions of a spinning top or (Navy) gyroscope are "spin", "precession", and "...
The article assumes that the Earth's shape is an oblate spheroid, i.e., a "squashed" sphere. If we "unsquash" everything back, including satellites' positions (which means multiplying the $z$-coordinates of the satellites by $1/\sqrt{1-\epsilon_e^2}$), the line segment connecting the new positions would intersect the sphere if and only if the line segment ...
Looks like they've used some figures that are very close to the sample calculations shown in the original paper with Katherine Johnson, which is available from NASA archives. The figure is for the distance of the vehicle from the center of the earth at the time of the retro rocket firing (or burnout) which initiates reentry. They've changed the original ...

The only equation needed to produce these curves is the thrust equation in a couple of different guises: $$F = \dot{m}v_e + (p_e-p_{atm})A_e $$ $$I_{sp}= F /(g_0 \dot{m})$$ You also need isentropic flow charts or tables to calculate the nozzle parameters that change with changing area ratio. An example isentropic flow chart is shown here, from the 1953 book "...
This is not an answer. Here's a clever idea that doesn't work:
- Find the Earth's position relative to the solar system barycenter at several points in time.
- Find the best fit ellipse to those positions (similar to finding osculating elements, but not instantaneous ones).
- Look at the foci of the result to determine where the central object must be and it's ...

The most common method, by far, is brute force, by propagating the relevant satellite and then calculating the visibility. There are however many ways to optimize this brute force:
- Large delta-t until "close" and then smaller delta-t after that
- Using multi-threading to reduce processing time

A few GitHub examples:
https://github.com/shupp/Predict
https://...

Dimensional analysis is a zeroth-order way to approximate (thank you Mr. Ross; 9th grade Physics). There will likely be better answers, but let's see what happens.
meters/sec^2 x seconds = meters/sec
9.8 m/sec^2 x 150 sec (MECO, where you're going mostly sideways) gives ~1500 m/s delta-v
People usually give something like 0.9 to 1.5 km/s when forced to ...
(This answer continues the one above.) A table of $P_{n,m}(\mu)$ values can be envisioned as a square array. Rows down the page are indexed by degree $n$, and columns to the right by order $m$. Due to the underlying math, the only sensible $P_{n,m}(\mu)$ values occur for $0 \le m \le n$. The effect is that the table is lower-triangular instead of square; $...

I’m not a space engineer, but I found this thread and I think it can answer your question: Delta V as a Function of Altitude. The lift-off delta v to a $100\,\mathrm{km}$ altitude is in the range of about $1.4\,\mathrm{km/s}$ for an ideal system. And the answer with the explained equation: To just reach a height of 150 km at least once, you don't need to achieve a true orbit (...

The caption in the book (Sutton) reads: Normalized vehicle velocity increment as a function of normalized exhaust velocity for various payload fractions with negligible inert mass of propellant tanks. The optima of each curve are connected by a line that represents Eq. 17–9. The text reads: For a given mission, theoretically there is an optimum range ...
If I understand you correctly, the state vector you would use would be defined in an arbitrary reference frame and you would like to find the position of the central body in that same arbitrary reference frame that would result in an orbit which includes that state vector. Mathematically there are infinitely many solutions to this problem; any position you ...

Is North Korea's KMS-4 in a proper Sun-synchronous orbit?
tl;dr: Based on an approximate analysis of its TLE, KMS-4 is in an imperfect Sun-synchronous orbit, and will drift slightly in sun-synchrony by about 4.8 degrees each year.
How do I tell? I want to understand the mechanics.
tl;dr: With an inclination of about 97.4 degrees at a low altitude of ~...
Thermodynamics has always been a tough thing for me. There are lots of assumptions in this subject (those assumptions, I know, are necessary, I know the science of thermodynamics is a very practical science).
First Law of Thermodynamics states mathematically: $$\Delta U=Q+W$$ (proper sign conventions must be used). This is just the law of conservation of energy and a very straightforward equation, but when we come to chemical thermodynamics this equation changes its form and becomes: $$\Delta U=Q+p\,\Delta V$$ My intuition says that as soon as pressure and volume come into any equation it becomes specific to gases. So, my first question is: why are thermodynamic equations seemingly just for gases?
Let's imagine an isothermal expansion of a gas (that simple piston and gas experiment) under a constant pressure; now the work $W$ is $$W=p\,\Delta V$$ but if I use the ideal gas law, i.e. $$pV=nRT$$ $$p\,\Delta V = \Delta nRT + nR\,\Delta T\tag1$$
since the expansion is isothermal therefore $\Delta T = 0$ and I can think that during expansion no atom or molecule has been annihilated therefore $\Delta n = 0$, so after all we get $$p\,\Delta V = 0$$ $$W=0$$
I want to know my mistakes in the above consideration.
There is a question in my book:
A swimmer coming out of a pool is covered with a film of water weighing $18\ \mathrm g$. How much heat must be supplied to evaporate this water at $298\ \mathrm K$? Calculate the internal energy change of vaporization at $100\ \mathrm{^\circ C}$. $\Delta_\mathrm{vap}H^\circ = 40.66\ \mathrm{kJ\ mol^{-1}}$ for water at $373\ \mathrm K$.
My book gives its solution like this: $$\ce{H2O(l) -> H2O(g)}$$ The amount of substance in $18\ \mathrm g$ of $\ce{H2O(l)}$ is just $1\ \mathrm{mol}$. Since $\Delta U=Q-p\,\Delta V$, therefore
$$\Delta U=\Delta H-p\,\Delta V$$ $$\Delta U=\Delta H-\Delta nRT$$ $$\Delta U=40.66 \times 10^3\ \mathrm{J\ mol^{-1}}-1\ \mathrm{mol}\times8.314\ \mathrm{J\ K^{-1}mol^{-1}}\times373\ \mathrm K$$ $$ \Delta U=37.56\ \mathrm{kJ\ mol^{-1}}$$
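The last line of the book's computation is just arithmetic, which is easy to check (the values are copied from the solution above):

```python
R  = 8.314      # J/(K mol)
dH = 40.66e3    # J/mol, vaporization enthalpy at 373 K
dn = 1.0        # mol of gas formed per mol vaporized (the book's assumption)
T  = 373.0      # K

dU = dH - dn * R * T
print(dU / 1e3)   # ~37.56 kJ/mol, matching the book
```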
I have a lot of problems with this solution which goes directly to the foundations of science of thermodynamics. (I must say it's because of these books that science becomes a rotten subject, these books destroy the real essence of science).
How is $\Delta n=1\ \mathrm{mol}$? Why is the temperature taken as $373\ \mathrm K$ and not $298\ \mathrm K$? Since the process starts at $298\ \mathrm K$, shouldn't we use that? At $373\ \mathrm K$ the process becomes an isothermal one (latent heat), so $\Delta U$ ought to be zero, if we think the process of vaporization starts from $373\ \mathrm K$.
Any help will be much appreciated. Thank you.
|
Using currently available fuels and technology and an unlimited budget, how big would a rocket without stages have to be to make it to a stable orbit? And how big would a rocket without stages have to be to escape Earth's gravity? Edit: or, for the easier question, how big would the rocket have to be if the structure were massless and only the fuel had mass?
A rocket "without stages" is a one-stage rocket. There have been proposals of this type before; you can often find mentions of SSTO, which means "single stage to orbit". The X-33 was a fairly recent R&D attempt at this: spaceplane and rocket all in one.
The present question takes a different tack: it asks about the minimum size of an SSTO with no requirement for reentry, and apparently no payload requirement either. That's difficult to answer in a purely theoretical sense without just saying "it's zero". If you allow engineering into the discussion, it seems like there should obviously be an answer.
Further complicating things, the size of the rocket doesn't necessarily affect the propellant mass fraction of the tank itself, so it is important to look up this number for tanks specifically. NASA gives these examples:
Propellant (percent propellant for Earth orbit):
Solid rocket: 96%
Kerosene-oxygen: 94%
Hypergols: 93%
Methane-oxygen: 90%
Hydrogen-oxygen: 83%
There's no obvious academic reason I can't assume an orbital solid-rocket tank that weighs 1 kg and is still 96% propellant. As far as tanks go, their mass is physically proportional to the volume of stuff held in them (to first order).
Nonetheless, my 1 kg orbital rocket is sure to be highly ineffective. In fact, the delta-v to LEO will increase, because air drag and gravity drag will be higher for this rocket (air drag, mainly). Of the normal 10 km/s to orbit, around 1 km/s is typically air drag.
A more rigorous way to answer your question, then, is to find out when drag gets so large that the tank mass itself satisfies the rocket equation as the payload. To do that, I need a metric for air drag. I'll assume the rocket flies at about the same speed as other rockets (because decreasing the speed increases gravity drag), which means drag is purely a consequence of the surface-area-to-mass ratio.
$$ \Delta v_{drag} \propto \frac{A}{M} \propto \frac{R^2}{R^3} \propto \frac{1}{R} \propto M^{-1/3} \\ \Delta v_{drag} = \Delta v_{nominal} \left( \frac{ M }{ M_{nominal} } \right)^{-1/3} $$
I'm ball-parking things, so I'll take the nominal case to be a shuttle-class vehicle with 1 km/s of drag delta-v and a mass of $10^6$ kg. Now we formalize the total delta-v and plug it into the rocket equation.
$$ \Delta v = 9\ \text{km/s} + \Delta v_{drag} \\ v_e = 4\ \text{km/s} \\ \Delta v = v_e \ln \frac {m_0} {m_1} $$
Plug and chug...
$$ 9\ \text{km/s} + 1\ \text{km/s} \left( \frac{ M }{ 10^6\ \text{kg} } \right)^{-1/3} = 4\ \text{km/s} \times \ln \frac{1}{0.04} \\ M = 17{,}179\ \text{kg} $$
The more I look at this, the more I think this really is your answer: a rocket that weighs 17 tonnes. Its delta-v to orbit is 12.8 km/s, as opposed to the normal (more favorable) 10 km/s, because of the higher air drag at this size. It would stand about 47 feet tall, and it would deliver its own tank into orbit (which weighs about 680 kg). Nothing else.
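The plug-and-chug step above can be reproduced directly: since $M$ appears only in the drag term, the equation inverts in closed form (a sketch using the same ball-park numbers):

```python
import math

v_e = 4.0               # km/s, assumed exhaust velocity
dv_vacuum = 9.0         # km/s, delta-v to orbit excluding air drag
m0_over_m1 = 1 / 0.04   # mass ratio of a 96% propellant solid rocket
M_nominal = 1.0e6       # kg, nominal shuttle-class launch mass

# dv_drag = 1 km/s * (M / M_nominal)**(-1/3), so the rocket equation
# 9 + (M / 1e6)**(-1/3) = v_e * ln(m0/m1) solves for M directly:
dv_available = v_e * math.log(m0_over_m1)
drag_term = dv_available - dv_vacuum   # km/s left to spend on drag
M = M_nominal * drag_term ** -3
print(M)  # roughly 1.7e4 kg, i.e. the ~17 tonne figure
```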
This is your answer, because you cannot go any smaller and still reach orbit without adding multiple stages.
Is this an efficient way to deliver tanks into orbit? No. However, as people have pointed out before, if we wanted tanks in orbit, we would have used the shuttle external tank, which was 300 m/s short of full orbit when it was dropped into the ocean.
It's still arguably the most efficient way to deliver a tank to orbit at that size. If you want more tanks in orbit for cheaper, you'll need larger rockets, so that they'll be more efficient. I think the problem reduces to the same thing if you keep scaling it up and leave the rest as empty space: a completely insane idea, but it follows the same mathematics used here. In other words, you strap a large empty tank onto the solid booster rocket.
Obviously, a better approach would be to take up inflatable modules in a normal rocket. If you absolutely needed rigid tanks, perhaps you could wrap them like Russian dolls around each other, forming a rocket larger than the 17 tonne baseline. Then you would unpack them when you got to orbit. Now you have lots of tanks, and can proceed with your evil plan.
I hereby call our plan "Single Tank To Orbit", abbreviated STTO.
A couple of additional points to AlanSE's answer:
You can experiment with the rocket equation yourself using WolframAlpha; just plug in your own values for final mass (payload), exhaust velocity, etc.
The Lockheed Martin X-33 was supposed to be a technology demonstrator for the VentureStar single-stage-to-orbit spaceplane, so you can use its design specifications to compare with estimates from the rocket equation. VentureStar was expected to deliver 20 tonnes of payload to low Earth orbit and then return safely to the surface; its mass at launch would have been about 1000 tonnes.
One possibility for avoiding the tyranny of the rocket equation is to use atmospheric oxygen as the oxidizer, that is, to use air-breathing jet engines to provide at least part of the delta-v. Scramjet technology could provide atmospheric speeds of up to Mach 12 to 17, reducing the delta-v needed for the final acceleration into orbit.
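To see why shaving atmospheric delta-v matters, rearrange the rocket equation for the required mass ratio (a sketch; the ~3.5 km/s figure for Mach 12 at altitude and the 4 km/s exhaust velocity are my assumptions, not figures from the answers above):

```python
import math

v_e = 4.0              # km/s, assumed exhaust velocity
dv_all_rocket = 9.5    # km/s, rough all-rocket delta-v to orbit (assumption)
dv_airbreathing = 3.5  # km/s, roughly Mach 12 at altitude (assumption)

# Required launch-to-burnout mass ratio m0/m1 = exp(dv / v_e)
ratios = [math.exp(dv / v_e)
          for dv in (dv_all_rocket, dv_all_rocket - dv_airbreathing)]
print(ratios)  # the air-breathing case needs a much smaller mass ratio
```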
|
In classical thermodynamics the change in internal energy is defined by the first law as$$\Delta U = q + w$$so that only the difference in $U$ is known; $q$ is the heat absorbed by the 'system' and $w$ the work done on the system.
For example in a closed system (no exchange of matter with the environment) we can write for a reversible change\begin{align} \mathrm{d}q &= T\mathrm{d}S \\ \mathrm{d}w &= -p\mathrm{d}V \end{align}and then, if the only form of work on a gas is volume change,$$ \mathrm{d}U = T\mathrm{d}S - p\mathrm{d}V$$and this is the fundamental equation for a closed system. Thus only differences in internal energy are measurable from thermodynamics, and this follows from the first law. (Even if you integrate this equation from, say, state $a$ to $b$, the result will be $U(b)-U(a)$, in other words $\Delta U$.)
Thermodynamics was developed before the nature of matter was known, i.e. it does not depend on matter being formed of atoms and molecules. However, if we use additional knowledge about the nature of molecules, then the internal energy (and entropy) can be determined from statistical mechanics.
The internal energy ($U$, not $\Delta U$) of a perfect monatomic gas is the ensemble average and is $$U=(3/2)NkT$$or in general $U=(N/Z)\sum_j \epsilon_j\exp(−\epsilon_j/(kT))$, where $Z$ is the partition function, $k$ Boltzmann's constant, $\epsilon_j$ the energy of level $j$, $T$ the temperature and $N$ the number of particles (Avogadro's number for one mole). The absolute value of the entropy $S$ (for a perfect monatomic gas) can also be determined and is given by the Sackur-Tetrode equation.
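As a concrete instance of the general formula, the sum is easy to evaluate for a two-level system with levels $0$ and $\epsilon$ (an illustrative sketch, not part of the answer above; the level spacing is an arbitrary assumption):

```python
import math

def internal_energy(eps, T, N=1.0, k=1.380649e-23):
    """U = (N/Z) * sum_j eps_j * exp(-eps_j/(kT)) for levels [0, eps]."""
    weights = [1.0, math.exp(-eps / (k * T))]  # Boltzmann factors
    Z = sum(weights)                           # partition function
    return N * (0.0 * weights[0] + eps * weights[1]) / Z

eps = 1e-21  # J, arbitrary assumed level spacing
print(internal_energy(eps, 5))    # low T: nearly all in the ground state, U ~ 0
print(internal_energy(eps, 1e6))  # high T: levels equally populated, U -> N*eps/2
```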
|
A simple illustration of the trapezoid rule for definite integration:$$ \int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right). $$
First, we define a simple function and sample it between 0 and 10 at 200 points
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return (x - 3) * (x - 5) * (x - 7) + 85

x = np.linspace(0, 10, 200)
y = f(x)
Choose a region to integrate over and take only a few points in that region
a, b = 1, 8  # the left and right boundaries
N = 5  # the number of points
xint = np.linspace(a, b, N)
yint = f(xint)
Plot both the function and the area below it in the trapezoid approximation
plt.plot(x, y, lw=2)
plt.axis([0, 9, 0, 140])
plt.fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)dx$",
         horizontalalignment='center', fontsize=20);
Compute the integral both at high accuracy and with the trapezoid approximation
from __future__ import print_function
from scipy.integrate import quad

integral, error = quad(f, a, b)
integral_trapezoid = sum((xint[1:] - xint[:-1]) * (yint[1:] + yint[:-1])) / 2
print("The integral is:", integral, "+/-", error)
print("The trapezoid approximation with", len(xint), "points is:", integral_trapezoid)
The integral is: 565.2499999999999 +/- 6.275535646693696e-12
The trapezoid approximation with 5 points is: 559.890625
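Since the trapezoid error scales as $O(h^2)$, refining the grid closes the gap to the quad result quickly. A self-contained convergence sketch (redefining the same $f$ so it runs standalone; the exact value 565.25 follows from integrating the cubic $x^3-15x^2+71x-20$ in closed form):

```python
import numpy as np

def f(x):
    return (x - 3) * (x - 5) * (x - 7) + 85

def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n sample points on [a, b].
    x = np.linspace(a, b, n)
    y = f(x)
    return np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1])) / 2

exact = 565.25  # closed-form value of the integral over [1, 8]
for n in (5, 50, 500):
    approx = trapezoid(f, 1, 8, n)
    print(n, approx, abs(approx - exact))  # n=5 reproduces 559.890625 above
```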
|
Let $\phi$ be a convex function on $(-\infty, \infty)$, $f$ a Lebesgue integrable function over $[0,1]$ and $\phi\circ f$ also integrable over $[0,1]$. Then we have:
$$\phi\Big(\int_{0}^{1} f(x)dx\Big)\leq\int_{0}^{1}\Big(\phi\circ f(x)\Big)dx.$$
I am thinking about a counterexample to the converse of this statement. In other words, I am trying to find a $\phi$ which is convex on $\mathbb{R}$ and a Lebesgue integrable function $f$ (on some set) satisfying $\phi\Big(\int_{0}^{1} f(x)dx\Big)\leq\int_{0}^{1}\Big(\phi\circ f(x)\Big)dx$, but with $\phi\circ f$ not Lebesgue integrable (on some set).
I tried to use the convexity and non-integrability of $\dfrac{1}{x}$. However, $\dfrac{1}{x}$ is not convex on the whole of $\mathbb{R}$, so I tried $\left|\dfrac{1}{x}\right|$ instead. So define $\phi(x):=\left|\dfrac{1}{x}\right|$.
We know that $f(x)=x$ is Lebesgue integrable, and if we restrict to $\mathbb{R^{+}}$, then $\phi\circ f=\dfrac{1}{x}$, which is not Lebesgue integrable.
Then, we have $$\phi\Big(\int_{1}^{2} f(x)dx\Big)=\dfrac{2}{3}<\int_{1}^{2}\Big(\phi\circ f(x)\Big)dx=\log(2).$$
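The last display can be confirmed numerically (a quick sketch with scipy, using $\phi(x)=|1/x|$ and $f(x)=x$ on $[1,2]$ as above):

```python
import numpy as np
from scipy.integrate import quad

def phi(x):
    # phi(x) = |1/x|, as defined in the question
    return abs(1.0 / x)

def f(x):
    # the integrable function from the question
    return x

lhs = phi(quad(f, 1, 2)[0])               # phi(integral of f) = phi(3/2) = 2/3
rhs = quad(lambda x: phi(f(x)), 1, 2)[0]  # integral of phi∘f = log 2
print(lhs, rhs)
```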
Is my argument correct? I feel that I am somehow contradicting myself, or that my attempt to refute the converse of Jensen's inequality was wrong from the beginning.
Thank you so much for any ideas!
|
I'm trying to understand lines in affine and projective space in order to solve Problems 2.15 and 4.13 in Algebraic Curves by William Fulton: http://www.math.lsa.umich.edu/~wfulton/CurveBook.pdf
Fulton defines a line through points $P=(a_1,\dots,a_n), Q=(b_1,\dots,b_n) \in \Bbb{A}^n$ to be $\{(a_1+t(b_1-a_1),\dots,a_n+t(b_n-a_n)) \mid t \in k\}$,
and a line through points $P=[a_0:\dots:a_n], Q=[b_0:\dots:b_n] \in \Bbb{P}^n$ to be $\{[\mu a_0+\lambda b_0:\dots:\mu a_n+\lambda b_n] \mid \mu, \lambda \in k$, not both zero$\}$.
In particular I'm trying to show (in both the affine and projective cases) that a line corresponds to a linear subvariety of dimension $m=1$, which Fulton defines to be a variety of the form $V=V(F_1,\dots,F_r)$ ($\deg F_i=1$) that can be mapped to $V(X_{m+1},\dots,X_n)$, i.e. to $V(X_2,\dots,X_n)$ (or $V(X_1,\dots,X_n)$ in the projective case), by an affine/projective change of coordinates.
I.e. that any line is such a linear subvariety and vice versa. Note that I'm only familiar with the formalism presented in Fulton, so a more classical approach is preferred.
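For intuition, here is a sketch (my own, under Fulton's affine definition, in the case $n=2$) of how eliminating the parameter $t$ exhibits the parametrized line as the zero set of a linear form:

$$X_1 = a_1+t(b_1-a_1),\quad X_2 = a_2+t(b_2-a_2) \;\Longrightarrow\; (b_2-a_2)(X_1-a_1)-(b_1-a_1)(X_2-a_2)=0,$$

so the line is $V(F)$ for the degree-one form $F=(b_2-a_2)(X_1-a_1)-(b_1-a_1)(X_2-a_2)$, and any affine change of coordinates sending $F$ to $X_2$ (completed by an independent linear form sent to $X_1$) maps $V(F)$ to $V(X_2)$.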
|
The Annals of Probability, Volume 22, Number 1 (1994), 160-176.
Survival Asymptotics for Brownian Motion in a Poisson Field of Decaying Traps
Abstract
Let $W(t)$ be the Wiener sausage in $\mathbb{R}^d$, that is, the $a$-neighborhood for some $a > 0$ of the path of Brownian motion up to time $t$. It is shown that integrals of the type $\int^t_0\nu(s)\, d|W(s)|$, with $t \mapsto \nu(t)$ nonincreasing and $\nu(t) \sim \nu t^{-\gamma}$ as $t \rightarrow \infty$, have a large deviation behavior similar to that of $|W(t)|$ established by Donsker and Varadhan. Such a result gives information about the survival asymptotics for Brownian motion in a Poisson field of spherical traps of radius $a$ when the traps decay independently with lifetime distribution $\nu(t)/\nu(0)$. There are two critical phenomena: (i) in $d \geq 3$ the exponent of the tail of the survival probability has a crossover at $\gamma = 2/d$; (ii) in $d \geq 1$ the survival strategy changes at time $s = \lbrack\gamma/(1 + \gamma)\rbrack t$, provided $\gamma < 1/2, d = 1$, respectively, $\gamma < 2/d, d \geq 2$.
Article information
Source: Ann. Probab., Volume 22, Number 1 (1994), 160-176.
First available in Project Euclid: 19 April 2007
Permanent link: https://projecteuclid.org/euclid.aop/1176988853
Digital Object Identifier: doi:10.1214/aop/1176988853
Mathematical Reviews number (MathSciNet): MR1258871
Zentralblatt MATH identifier: 0793.60086
JSTOR: links.jstor.org
Citation
Bolthausen, Erwin; Hollander, Frank Den. Survival Asymptotics for Brownian Motion in a Poisson Field of Decaying Traps. Ann. Probab. 22 (1994), no. 1, 160--176. doi:10.1214/aop/1176988853. https://projecteuclid.org/euclid.aop/1176988853
|
Violino, Giulio, Ellison, Sara L, Sargent, Mark, Coppin, Kristen E K, Scudder, Jillian M, Mendel, Trevor J and Saintonge, Amelie (2018)
Galaxy pairs in the SDSS - XIII. The connection between enhanced star formation and molecular gas properties in galaxy mergers. Monthly Notices of the Royal Astronomical Society, 476 (2). pp. 2591-2604. ISSN 0035-8711
Abstract
We investigate the connection between star formation and molecular gas properties in galaxy mergers at low redshift (z$\leq$0.06). The study we present is based on IRAM 30-m CO(1-0) observations of 11 galaxies with a close companion selected from the Sloan Digital Sky Survey (SDSS). The pairs have mass ratios $\leq$4, projected separations r$_{\mathrm{p}} \leq$30 kpc and velocity separations $\Delta$V$\leq$300 km s$^{-1}$, and have been selected to exhibit enhanced specific star formation rates (sSFR). We calculate molecular gas (H$_{2}$) masses, assigning to each galaxy a physically motivated conversion factor $\alpha_{\mathrm{CO}}$, and we derive molecular gas fractions and depletion times. We compare these quantities with those of isolated galaxies from the extended CO Legacy Database for the GALEX Arecibo SDSS Survey sample (xCOLDGASS, Saintonge et al. 2017) with gas quantities computed in an identical way. Ours is the first study which directly compares the gas properties of galaxy pairs and those of a control sample of normal galaxies with rigorous control procedures and for which SFR and H$_{2}$ masses have been estimated using the same method. We find that the galaxy pairs have shorter depletion times and an average molecular gas fraction enhancement of 0.4 dex compared to the mass matched control sample drawn from xCOLDGASS. However, the gas masses (and fractions) in galaxy pairs and their depletion times are consistent with those of non-mergers whose SFRs are similarly elevated. We conclude that both external interactions and internal processes may lead to molecular gas enhancement and decreased depletion times.
Item Type: Article
Schools and Departments: School of Mathematical and Physical Sciences > Physics and Astronomy
Research Centres and Groups: Astronomy Centre
Subjects: Q Science > QB Astronomy
Depositing User: Mark Sargent
Date Deposited: 20 Feb 2018 10:19
Last Modified: 03 Jul 2018 13:14
URI: http://srodev.sussex.ac.uk/id/eprint/73715
|