Electrochemical Impedance Spectroscopy: Experiment, Model, and App
Electrochemical impedance spectroscopy is a versatile experimental technique that provides information about an electrochemical cell’s different physical and chemical phenomena. By modeling the physical processes involved, we can constructively interpret the experiment’s results and assess the magnitudes of the physical quantities controlling the cell. We can then turn this model into an app, making electrochemical modeling accessible to more researchers and engineers. Here, we will look at three different ways of analyzing EIS: experiment, model, and simulation app.
Electrochemical Impedance Spectroscopy: The Experiment
Electrochemical impedance spectroscopy (EIS) is a widely used experimental method in electrochemistry, with applications such as electrochemical sensing and the study of batteries and fuel cells. This technique works by first polarizing the cell at a fixed voltage and then applying a small additional voltage (or occasionally, a current) to perturb the system. The perturbing input oscillates harmonically in time to create an alternating current, as shown in the figure below.
An oscillating perturbation in cell voltage gives an oscillating current response.
For a certain amplitude and frequency of applied voltage, the electrochemical cell responds with a particular amplitude of alternating current at the same frequency. In real systems, the response may also contain components at other frequencies — we’ll return to this point below.
EIS experiments typically vary the frequency of the applied perturbation across a range from millihertz to kilohertz. The relative amplitude of the response and the time shift (or phase shift) between the input and output signals both change with the applied frequency.
These factors depend on the rates at which physical processes in the electrochemical cell respond to the oscillating stimulus. Different frequencies separate different processes because those processes occur on different timescales. At lower frequencies, there is time for diffusion or slow electrochemical reactions to proceed in response to the alternating polarization of the cell. At higher frequencies, the applied field changes direction faster than the chemistry can respond, so the response is dominated by capacitance from the charging and discharging of the double layer.
The time-domain response is not the simplest or most succinct way to interpret these frequency-dependent amplitudes and phase shifts. Instead, we define a quantity called an
impedance. Like resistance in a static system, impedance is the ratio of voltage to current. However, it uses the real and imaginary parts of a complex number to represent the relation of both amplitude and phase to the input signal and output response. The mathematical tool that relates the impedance to the time-domain response is a Fourier transform, which represents the frequency components of the oscillating signal.
To explain the idea of impedance more fully for a simple case, consider the input voltage as a cosine wave oscillating at an angular frequency (ω):

V(t) = V_0\,\cos(\omega t)
Then the response is also a cosine wave, but with a phase offset (φ):

I(t) = I_0\,\cos(\omega t + \phi)

Compared to the time shift in the image above, the phase offset is given as \phi = -\omega\,\delta t. The magnitude of the current and its phase offset depend on the physics and chemistry in the cell.
Now, let’s consider the resistance from Ohm’s law:

R(t) = \frac{V(t)}{I(t)} = \frac{V_0\,\cos(\omega t)}{I_0\,\cos(\omega t + \phi)}
This quantity varies in time with the same frequency as the perturbing signal. It equals zero at times when the numerator also equals zero and becomes singular when the denominator equals zero. So unlike the resistance in a DC system, it’s not a very useful quantity!
Instead, from Euler’s formula, let’s express the time-varying quantities as the real parts of complex exponentials, so that:

V(t) = \mathrm{Re}\left[V_0\,\exp(i\omega t)\right]

and

I(t) = \mathrm{Re}\left[I_0\,\exp(i\phi)\,\exp(i\omega t)\right]
We denote the coefficients V_0 and I_0\,\exp(i\phi) as quantities \bar{V} and \bar{I}, respectively.
These are complex amplitudes that can be understood in terms of the Fourier transformation of the original time-domain sinusoidal signals. They express the distinct amplitudes and phase difference of the voltage and current. Because all of the quantities in the system are oscillating sinusoidally, we understand the physical effects by comparing these complex quantities, rather than the time-domain quantities. To describe the oscillating problem (often called
phasor theory), we define a complex analogue of resistance as:

Z = \frac{\bar{V}}{\bar{I}} = \frac{V_0}{I_0}\,\exp(-i\phi)
This is the impedance of the system and, as the name suggests, it’s the quantity we measure in electrochemical impedance spectroscopy. It’s a complex quantity with a magnitude and phase, representing both resistive and capacitive effects. Resistance contributes the real part of the complex impedance, which is in-phase with the applied voltage, while capacitance contributes the imaginary part of the complex impedance, which is precisely out-of-phase with the applied voltage.
EIS specialists look at the impedance in the form of a spectrum, normally with a Nyquist plot. This plots the imaginary component of impedance against the real component, with one data point for every frequency at which the impedance has been measured. Below is an example from a simulation — we’ll discuss how it’s modeled in the next section.
Simulated Nyquist plot from an electrochemical impedance spectroscopy experiment. Points toward the top right are at lower frequencies (mHz), while those toward the bottom left are at higher frequencies (>100 Hz).
In the figure above, the semicircular region toward the left side shows the coupling between double-layer capacitance and electrode kinetic effects at frequencies faster than the physical process of diffusion. The diagonal “diffusive tail” on the right comes from diffusion effects observed at lower frequencies.
EIS experiments are useful because information about many different physical effects can be extracted from a single analysis. There is a quantitative relationship between properties like diffusion coefficients, kinetic rate constants, and dimensions of the features in Nyquist plots. Often, EIS experiments are interpreted using an “equivalent circuit” of resistors and capacitors that yields a similar frequency-dependent impedance to the one shown in the Nyquist plot above. This idea was discussed in my colleague Scott’s blog post on electrochemical resistances and capacitances.
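To make the equivalent-circuit idea concrete, here is an illustrative sketch — not the COMSOL model, just a Randles-type circuit with made-up parameter values — that reproduces the characteristic semicircle-plus-diffusive-tail shape of the Nyquist plot:

```python
import numpy as np

def randles_impedance(freq_hz, R_s=10.0, R_ct=100.0, C_dl=20e-6, sigma_w=50.0):
    """Impedance of a Randles-type equivalent circuit: a solution
    resistance R_s in series with the parallel combination of the
    double-layer capacitance C_dl and (R_ct + a Warburg element)."""
    omega = 2 * np.pi * freq_hz
    Z_w = sigma_w / np.sqrt(omega) * (1 - 1j)   # semi-infinite Warburg (diffusion)
    Z_faradaic = R_ct + Z_w                     # kinetics in series with diffusion
    Y = 1j * omega * C_dl + 1 / Z_faradaic      # admittances add in parallel
    return R_s + 1 / Y

freqs = np.logspace(-2, 5, 50)                  # 10 mHz to 100 kHz
Z = randles_impedance(freqs)
# A Nyquist plot graphs -Im(Z) against Re(Z), one point per frequency.
for f, z in zip(freqs[::10], Z[::10]):
    print(f"{f:10.2e} Hz  Re = {z.real:8.1f}  -Im = {-z.imag:8.1f}")
```

At high frequency the impedance collapses to the series resistance, while at low frequency the 45° Warburg tail dominates — the same qualitative features as the simulated spectrum above.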
When there is a linear relation between the voltage and current, only one frequency will appear in the Fourier transform. This simplifies the analysis significantly.
For the simple harmonic interpretation of the experiment in terms of impedance, we need the current response to oscillate at the same frequency as the voltage input. This means that the system must respond linearly. For an electrochemical cell, we can usually accomplish this by ensuring that the applied voltage is small compared to the quantity
RT/F — the ratio of the gas constant multiplied by the temperature to the Faraday constant. This is the characteristic “thermal voltage” in electrochemistry and is about 25 mV at normal temperatures. Smaller voltage changes usually induce a linear response, while larger voltage changes cause an appreciably nonlinear response.
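As a quick check, the thermal voltage can be computed directly from the physical constants:

```python
R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol
T = 298.15        # room temperature, K

thermal_voltage = R * T / F
print(f"RT/F at {T} K = {thermal_voltage * 1000:.1f} mV")  # ~25.7 mV
```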
Of course, with simulation to predict the time-domain current, we can always consider a nonlinear case and perform a Fourier transform numerically to study the effect on the impedance. In practice, the interpretation in terms of impedance illustrated above is best suited to the harmonic assumption. Impedance measurements are therefore often used in a complementary manner with transient techniques, such as amperometry or voltammetry, which are better suited for investigating nonlinear or hysteretic effects.
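As a sketch of this idea — assuming, purely for illustration, a symmetric Butler-Volmer-like nonlinearity i ∝ sinh(V/V_T) — the following shows numerically that a small-amplitude drive gives a nearly pure harmonic response, while a large-amplitude drive generates a substantial third harmonic:

```python
import numpy as np

def harmonic_content(V0, f0=1.0, n_periods=32, n_per_period=128):
    """Drive a sinh-type nonlinearity with a sinusoidal voltage of
    amplitude V0 and return the ratio of the FFT magnitudes at 3*f0
    and f0 (the relative strength of the third harmonic)."""
    V_T = 0.0257                         # thermal voltage, V
    n = n_periods * n_per_period
    t = np.arange(n) / (f0 * n_per_period)
    V = V0 * np.cos(2 * np.pi * f0 * t)
    I = np.sinh(V / V_T)                 # symmetric nonlinear response
    spectrum = np.abs(np.fft.rfft(I)) / n
    fundamental = spectrum[n_periods]    # bin at f0
    third = spectrum[3 * n_periods]      # bin at 3*f0
    return third / fundamental

print("5 mV drive  :", harmonic_content(0.005))   # nearly linear
print("100 mV drive:", harmonic_content(0.100))   # strong 3rd harmonic
```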
Let’s look at a simple example of the physical theory that underpins these ideas to see how the impedance spectrum relates to the real controlling physics.
Electrochemical Impedance Spectroscopy: The Model
To model an EIS experiment, we must describe the key underlying physical and chemical effects, which are the electrode kinetics, double-layer capacitance, and diffusion of the electrochemical reactants. In electroanalytical systems, a large quantity of artificially added supporting electrolytes keeps the electric field low so that solution resistance can be neglected. In this case, we can describe the mass transport of chemical species in the system using the diffusion equation (Fick’s laws) with suitable boundary conditions for the electrode kinetics and capacitance. In the COMSOL Multiphysics® software, we use the
Electroanalysis interface together with an Electrode Surface boundary feature to describe these equations.
For more details about how to set up this model, you can download the Electrochemical Impedance Spectroscopy tutorial example in the Application Library.
Model tree for the Electroanalysis interface in an EIS model.
Under
Transport Properties, we can specify the diffusion coefficients of the redox species under consideration. We need at least the reduced and oxidized species of a single redox couple, such as the common ferro-/ferricyanide couple, to use as an analytical reference. The Concentration boundary condition defines the fixed bulk concentrations of these species. The Electrode Reaction and Double Layer Capacitance subnodes of the Electrode Surface boundary feature contribute the Faradaic and non-Faradaic currents, respectively. For the double-layer capacitance, we typically use an empirically measured equivalent capacitance, and we specify the electrode reaction according to a standard kinetic expression such as the Butler-Volmer equation.
Note that we’re not referring to equivalent circuit properties at all here. In COMSOL Multiphysics, all of the inputs in the description of the electrochemical problem are physical or chemical quantities, while the output is a Nyquist plot. When analyzing the problem in reverse, we’re able to use an observed Nyquist plot from our experiments to make inferences about the real values of these physical and chemical inputs.
In the settings for the Electrode Surface feature, we represent the impedance experiment by applying a
Harmonic Perturbation to the cell voltage.

Settings for the Electrode Surface boundary feature in an EIS model.
Here, the quantity
V_app is the applied voltage.
The harmonic perturbation is applied with respect to a resting steady voltage (or current) on the cell. In this case, we have set this to a reference value of zero volts. With more advanced models, we might consider using the results of another COMSOL Multiphysics model, one that’s significantly nonlinear for example, to find the resting conditions to which the perturbation is applied. If you’re interested in understanding the mathematics of the harmonic perturbation in greater detail, my colleague Walter discussed them in a previous blog post.
When studying lithium-ion batteries, for example, we can perform a time-dependent analysis of the cell’s discharge, studying its charge transport, diffusion and migration of the lithium electrolyte, and the electrode kinetics and diffusion of the intercalated lithium atoms. We can pause this simulation at various times to consider the impedance measured from a rapid perturbation. For further insight into the physics involved, you can read my colleague Tommy’s blog post on modeling electrochemical impedance in a lithium-ion battery.
Electrochemical Impedance Spectroscopy: The Simulation App
A frequent demand for electrochemical simulations is that they “fit” experimental data in order to determine unknown physical quantities or, more generally, to interpret the data at all. Even for experienced electroanalytical chemists, it can be difficult to intuitively “see” the physics and chemistry in the underlying graphs like the Nyquist plot. However, by simulating the plots under a range of conditions, the influence of different effects on the overall graph is revealed.
Simulation is helpful for analyzing EIS, but it can also be time consuming for the experts involved. As was the case with my old research group, these experts can spend more time writing programs and running models to fit data together with experimental researchers than on the science. Wouldn’t it be nice if all electrochemical researchers could load experimental data into a simple interface, simulate impedance spectra for a given physical model and inputs, and even perform automatic parameter fitting? The good news is that we can! With the Application Builder in COMSOL Multiphysics, we can create an easy-to-use EIS app based on an underlying model. As a model can contain any level of physical detail, the app provides direct access to the physical data and isn’t confined to simple equivalent circuits.
To highlight this, we have an EIS demo app based on the model available in the Application Library. The app user can set concentrations for electroactive species and tune the diffusion coefficients as well as the electrode kinetic rate constant and double-layer capacitance. After clicking the
Compute button, the app generates results that can be visualized through Nyquist and Bode plots.

The EIS simulation app in action.
As well as enabling physical parameter estimation, this app is very helpful for teaching, since we can quickly change inputs and visualize the results that would occur in the experiment. A natural extension for the app is to import experimental data to the same Nyquist plot for direct comparison. We can also build up the underlying physical model to consider the influence of competing electrochemical reactions or follow-up homogeneous chemistry from the products of an electrochemical reaction.
Concluding Thoughts
Here, we’ve introduced electrochemical impedance spectroscopy and discussed some methods used to model it. We also saw how a simulation app built from a simple theoretical model can provide greater insight into the relationship between the theory of an electrochemical system and its behavior as observed in an experiment.
|
Δ isobars and nuclear saturation

Abstract
In this paper, we construct a nuclear interaction in chiral effective field theory with explicit inclusion of the $$\mathrm{{\Delta}}$$-isobar $$\mathrm{{\Delta}}(1232)$$ degree of freedom at all orders up to next-to-next-to-leading order (NNLO). We use pion-nucleon ($${\pi}N$$) low-energy constants (LECs) from a Roy-Steiner analysis of $${\pi}N$$ scattering data, optimize the LECs in the contact potentials up to NNLO to reproduce low-energy nucleon-nucleon scattering phase shifts, and constrain the three-nucleon interaction at NNLO to reproduce the binding energy and point-proton radius of $$^{4}\mathrm{He}$$. For heavier nuclei we use the coupled-cluster method to compute binding energies, radii, and neutron skins. We find that radii and binding energies are much improved for interactions with explicit inclusion of $$\mathrm{{\Delta}}(1232)$$, while $$\mathrm{{\Delta}}$$-less interactions produce nuclei that are not bound with respect to breakup into $${\alpha}$$ particles. Finally, the saturation of nuclear matter is significantly improved, and its symmetry energy is consistent with empirical estimates.
Authors’ affiliations: Chalmers Univ. of Technology, Göteborg (Sweden), Dept. of Physics; Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States), Physics Division; Univ. of Tennessee, Knoxville, TN (United States), Dept. of Physics and Astronomy. Journal: Physical Review C, Volume 97, Issue 2 (2018); ISSN 2469-9985; Publisher: American Physical Society (APS).
Ekström, A., Hagen, G., Morris, T. D., Papenbrock, T., and Schwartz, P. D. Δ isobars and nuclear saturation. United States: N. p., 2018. Web. doi:10.1103/PhysRevC.97.024332.
|
I have regression results about the effect of a treatment on two outcome variables, $N$ and $D$.
The coefficient on $N$ is positive. The coefficient on $D$ is negative.
These results suggest that the effect on $\frac{N}{D}$ should be positive. The treatment increases the numerator, and decreases the denominator.
However, when I create a new variable $\frac{N}{D}$ and use this as an outcome variable, I get a negative coefficient on $\frac{N}{D}$.
I don't understand why this is happening. Full setup below.
I have an experiment in a panel data setting. All subjects are viewed twice (once pre- and once post- treatment). Between the two periods, randomly selected subjects are treated with a binary variable. The control group remains untreated.
I have two outcome variables, $N_{i,t}$ and $D_{i,t}$. Both underlying $N$ and $D$ variables are strictly positive.
I'm estimating regressions as:
$Outcome_{i,t}=\beta_{T}\times Treatment + TimeFEs + SubjectFEs+\epsilon$
Where:
The treatment coefficient ($\beta_{T}$) is positive for $N$, and negative for $D$.
However, when I create a new variable for each observation (call it "$R_{i,t}$" = $\frac{N_{i,t}}{D_{i,t}}$) and use it as the outcome, I get a negative treatment coefficient.
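The puzzle can be reproduced with a toy example (all numbers made up). With one pre- and one post-period and two-way fixed effects, the treatment coefficient reduces to a difference-in-differences of means, so the three coefficients can be computed directly:

```python
import numpy as np

# Hypothetical pre/post values (N, D) for three treated subjects.
treated_pre  = np.array([[1.0, 1.0], [1.0, 0.1], [1.0, 1.0]])
treated_post = np.array([[3.0, 1.0], [1.0, 0.2], [1.0, 0.7]])
# One control subject, unchanged between periods.
control_pre  = np.array([[1.0, 1.0]])
control_post = np.array([[1.0, 1.0]])

def did(outcome):
    """Difference-in-differences of means: with one pre and one post
    period and subject + time fixed effects, this equals beta_T."""
    d_treated = outcome(treated_post).mean() - outcome(treated_pre).mean()
    d_control = outcome(control_post).mean() - outcome(control_pre).mean()
    return d_treated - d_control

beta_N = did(lambda x: x[:, 0])
beta_D = did(lambda x: x[:, 1])
beta_R = did(lambda x: x[:, 0] / x[:, 1])

print(f"beta_N = {beta_N:+.3f}")   # positive
print(f"beta_D = {beta_D:+.3f}")   # negative
print(f"beta_R = {beta_R:+.3f}")   # negative!
```

The point is that $\beta$ is an average effect: the subjects driving the average increase in $N$ need not be the ones driving the average decrease in $D$, and the mean of a ratio is not the ratio of the means.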
|
For the reaction$$\ce{CaSO4 (s) <=> Ca^2+ (aq) + SO4^{2-} (aq)},$$the appropriate equilibrium constant using activities can be set up as$$K_\mathrm{eq}=\frac{a(\ce{Ca^2+})\,a(\ce{SO4^2-})}{a(\ce{CaSO4})}.$$
The activity $a$ of a pure substance in a condensed phase is approximately one, since its behaviour can often be treated ideally, i.e. the chemical potential $\mu$ at the considered conditions is approximately the same as the chemical potential at standard conditions, $\mu^\circ$. This follows directly from the definition of the relative activity: $$a = \exp\left\{\frac{\mu-\mu^\circ}{\mathcal{R}T}\right\}$$
For substances whose solubility is very low, the concentrations of the ions are very low, so as a further approximation we can assume that in these solutions the activity coefficients are approximately one. Therefore, we can write concentrations instead of activities and obtain for the equilibrium constant the expression also known as the solubility product.$$K_\mathrm{eq} \approx K_\mathrm{sp} = c(\ce{Ca^2+})\,c(\ce{SO4^2-})$$
However, this is largely beside the point of the question itself. Since the solution is saturated, the solid is in equilibrium with the solution, which means that the rates of dissolution and precipitation are equal in magnitude. At this point the reaction Gibbs energy takes a definite value: zero.
chemical equilibrium
Reversible processes [processes which may be made to proceed in the forward or reverse direction by the (infinitesimal) change of one variable], ultimately reach a point where the rates in both directions are identical, so that the system gives the appearance of having a static composition at which the Gibbs energy, $G$, is a minimum. At equilibrium the sum of the chemical potentials of the reactants equals that of the products, so that:
$$\Delta G_\mathrm{r} = \Delta G_\mathrm{r}^\circ + \mathcal{R}T\,\ln\,K = 0\\\Delta G_\mathrm{r}^\circ = - \mathcal{R}T\,\ln\,K\\$$
The equilibrium constant, $K$, is given by the mass-law effect.
Source: IUPAC Gold Book
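As a numeric illustration — taking $K_\mathrm{sp} \approx 4.9 \times 10^{-5}$ for $\ce{CaSO4}$ at 25 °C as an assumed literature value — the standard reaction Gibbs energy follows from the last relation:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # temperature, K
K_sp = 4.9e-5  # assumed literature solubility product of CaSO4 at 25 C

# Delta_r G° = -RT ln K
dG_standard = -R * T * math.log(K_sp)
print(f"Delta_r G° = {dG_standard / 1000:.1f} kJ/mol")  # ~ +24.6 kJ/mol
```

The positive value reflects that dissolution of this sparingly soluble salt is unfavorable under standard conditions, even though at saturation $\Delta G_\mathrm{r} = 0$.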
|
LaTeX supports many worldwide languages by means of special packages. This article explains how to import and use those packages to create documents in Italian.
Contents
The Italian language has accented words. For this reason, the preamble of your document must be modified accordingly to support these characters and some other features.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[italian]{babel}
\usepackage[T1]{fontenc}

\begin{document}

\tableofcontents

\vspace{2cm} % Add a 2cm space

\begin{abstract}
Questo è un breve riassunto dei contenuti del documento scritto in italiano.
\end{abstract}

\section{Sezione introduttiva}
Questa è la prima sezione, possiamo aggiungere alcuni elementi aggiuntivi e tutto digitato correttamente. Inoltre, se una parola è troppo lunga e deve essere troncato babel cercherà per troncare correttamente a seconda della lingua.

\section{Teoremi Sezione}
Questa sezione è quello di vedere cosa succede con i comandi testo definendo
\[ \lim x = \sin{\theta} + \max \{3.52, 4.22\} \]

\end{document}
There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections.
If you are looking for instructions on how to use more than one language in a single document, for instance English and Italian, see the International language support article.
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle the variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up the input encoding. In this case the package properly handles characters in the Italian alphabet. To use this package, add the next line to the preamble of your document:

\usepackage[utf8]{inputenc}

The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
For proper LaTeX document generation you must also choose a font encoding that supports the specific characters of the Italian language; this is accomplished by the fontenc package:

\usepackage[T1]{fontenc}

Even though the default encoding works well for Italian, using this specific encoding will avoid glitches with some specific characters. The default LaTeX font encoding is OT1.
To extend the default LaTeX capabilities, providing proper hyphenation and translating the names of the document elements, import the babel package for the Italian language:

\usepackage[italian]{babel}

As you can see in the example in the introduction, instead of "abstract" and "Contents" the Italian words "Sommario" and "Indice" are used.
Sometimes, for formatting reasons, some words have to be broken up into syllables separated by a hyphen (-) to continue the word on a new line. For example, matematica could become mate-matica. The package babel, whose usage was described in the previous section, usually does a good job of breaking up words correctly, but if this is not the case, you can use a couple of commands in your preamble.
\usepackage{hyphenat} \hyphenation{mate-mati-ca recu-perare}
The first command imports the package hyphenat, and the second line is a list of space-separated words with explicitly defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document.
|
How can we show that the halting problem for one-counter additive machines is decidable?
A simple way is the following:
Suppose that the increments are $+1, -1$; the number of states of the machine is $m$ and the current value of the counter is $n$.
You can notice that if the counter reaches $n+m+1$ without hitting $0$, then by the pigeonhole principle the machine, while the counter climbed from $n$ to $n+m+1$, has entered the same state $s_i$ twice ($s_i \to ... \to s_i$) with a positive net change in the counter value; so it will keep repeating the same loop and increase the counter forever.
After hitting $0$ — for the same reason — if the counter reaches $m+1$, then the machine will never halt.
So it's enough to simulate the machine up to $m(n+m+1)$ steps; if it doesn't halt in that time it will never halt: it will loop forever on the same values $(s_i,c_j) \to ... \to (s_i,c_j)$ or increase forever the counter: $(s_i,c_j) \to ...\to (s_i,c_k) \to ...;\; c_k > c_j+m$ .
If the increments are $\pm v, v \in [1,h]$ then the machine can be reduced to an equivalent one with $\pm 1$ increments.
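The argument above can be turned into a decision procedure. Below is a minimal sketch, assuming a deterministic machine with $\pm 1$ increments and a zero-test; the dictionary encoding and the two example machines are my own illustrative choices:

```python
def halts(delta, start_state, n, halt_state="H"):
    """Decide halting for a deterministic one-counter machine with
    +1/-1 increments.  delta maps (state, counter_is_zero) to
    (next_state, increment).  Simulating m*(n+m+1) steps suffices:
    if the machine is still running after that, it never halts."""
    m = len({s for (s, _) in delta})       # number of control states
    bound = m * (n + m + 1)
    state, counter = start_state, n
    for _ in range(bound):
        if state == halt_state:
            return True
        state, inc = delta[(state, counter == 0)]
        counter += inc
    return state == halt_state

# Counts the counter down to zero, then halts.
countdown = {("s0", False): ("s0", -1), ("s0", True): ("H", 0)}
# Increments forever, regardless of the counter value.
runaway = {("s0", False): ("s0", +1), ("s0", True): ("s0", +1)}

print(halts(countdown, "s0", 5))   # True
print(halts(runaway, "s0", 5))     # False
```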
|
Calculational Exercises
1. Consider \(\mathbb{R}^3 \) with two orthonormal bases: the canonical basis \(e = (e_1 , e_2 , e_3 )\) and the basis \(f = (f_1 , f_2 , f_3)\), where
\[ f_1 = \frac{1}{\sqrt{3}}(1,1,1), f_2 = \frac{1}{\sqrt{6}}(1,-2,1), f_3 = \frac{1}{\sqrt{2}}(1,0,-1) \]
Find the matrix, \(S\), of the change of basis transformation such that
\[ [v]_f = S[v]_e \quad \text{for all } v \in \mathbb{R}^3, \]
where \([v]_b\) denotes the column vector of \(v\) with respect to the basis \(b\).
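A numerical sanity check for Exercise 1 (assuming NumPy): one can verify that $f$ is orthonormal, using the standard fact that for an orthonormal basis stacked as the rows of a matrix $F$ we have $F F^T = I$ and $[v]_f = Fv$:

```python
import numpy as np

f1 = np.array([1, 1, 1]) / np.sqrt(3)
f2 = np.array([1, -2, 1]) / np.sqrt(6)
f3 = np.array([1, 0, -1]) / np.sqrt(2)

F = np.vstack([f1, f2, f3])              # basis vectors as rows

# Orthonormality: F F^T should be the identity matrix.
print(np.allclose(F @ F.T, np.eye(3)))   # True

# For an orthonormal basis, [v]_f = F v.
v = np.array([2.0, -1.0, 3.0])
coords = F @ v
# Reconstructing v from its f-coordinates recovers the original vector.
print(np.allclose(coords[0]*f1 + coords[1]*f2 + coords[2]*f3, v))  # True
```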
2. Let \(v \in \mathbb{C}^4\) be the vector given by \(v = (1, i, -1, -i)\). Find the matrix (with respect to the canonical basis on \(\mathbb{C}^4\)) of the orthogonal projection \(P \in \cal{L}(\mathbb{C}^4)\) such that
\[ null(P) = \{v\}^\perp. \]
3. Let \(U\) be the subspace of \(\mathbb{R}^3\) that coincides with the plane through the origin that is perpendicular to the vector \(n = (1, 1, 1) \in \mathbb{R}^3.\)
(a) Find an orthonormal basis for \(U\).
(b) Find the matrix (with respect to the canonical basis on \(\mathbb{R}^3\)) of the orthogonal projection \(P \in \cal{L}(\mathbb{R}^3)\) onto \(U\), i.e., such that \(range(P) = U\).
4. Let \(V = \mathbb{C}^4\) with its standard inner product. For \( \theta \in \mathbb{R}\), let
\[ v_\theta = \left( \begin{array}{c} 1 \\ e^{i\theta} \\ e^{2i\theta} \\ e^{3i\theta} \end{array} \right) \in \mathbb{C}^4.\]
Find the canonical matrix of the orthogonal projection onto the subspace \(\{v_\theta\}^\perp\).

Proof-Writing Exercises
1. Let \(V\) be a finite-dimensional vector space over \(\mathbb{F}\) with dimension \(n \in \mathbb{Z}_+ \), and suppose that \(b = (v_1 , v_2 , \ldots , v_n) \) is a basis for \(V\) . Prove that the coordinate vectors \([v_1 ]_b, [v_2 ]_b, \ldots, [v_n ]_b\) with respect to \(b\) form a basis for \(\mathbb{F}^n.\)
2. Let \(V\) be a finite-dimensional vector space over \(\mathbb{F}\), and suppose that \(T \in \cal{L}(V)\) is a linear operator having the following property: Given any two bases \(b\) and \(c\) for \(V\) , the matrix \(M(T, b)\) for \(T\) with respect to \(b\) is the same as the matrix \(M(T, c)\) for \(T\) with respect to \(c\). Prove that there exists a scalar \(\alpha \in \mathbb{F}\) such that \(T = \alpha I_V\), where \(I_V\) denotes the identity map on \(V\).
|
These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use (e.g., Excel, Python, MATLAB, etc.).
### 0. Describing orbits
In an elliptical orbit, the satellite's distance from the object that it orbits changes. The degree to which this happens is called the *eccentricity* of the orbit. A circular orbit has zero eccentricity, while a strongly elongated orbit like a comet's has an eccentricity close to one.
We will use a few words to describe points on the orbit. The points nearest and furthest from the body being orbited (the host) are called *apsides* (singular, *apsis*). The point of closest approach is called the *periapsis* or *pericenter*. If the object being orbited is a star, it
may also be called the *periastron*; if it is the Sun, the *perihelion*; and if it is the Earth, the *perigee*. Likewise, the furthest point from the host is the *apoapsis* or *apocenter*; for specific hosts, this may also be called *apastron*, *aphelion*, or *apogee*.
### 1. **Creating the model**
The physics of a small satellite orbiting a large body are the same, whether this is the Moon orbiting the Earth, or a planet orbiting the Sun.
As a reminder, if the large body is at the origin and the satellite is at position $(x,y)$, the acceleration of the satellite is given by
$$a_x = \frac{-GMx}{r^3}$$
$$a_y = \frac{-GMy}{r^3}$$
where $r = \sqrt{x^2 + y^2}$.
**Write a program that allows you to simulate this motion numerically using the Euler-Cromer algorithm and visualize it.** Your program should be able to both generate a picture or animation of the orbit and a plot of the magnitude of the orbiting object's position vector vs. time.
It is always a good idea to test any computer model by simulating a system whose behavior you are familiar with, to ensure that your simulation reproduces all the features that it should.
To test your program, consider the most familiar orbiting system we have: Earth's motion around the Sun. This system has the following properties:
* The Sun has a mass of $1.99 \times 10^{30}$ kg, and moves only slightly compared to its distance from the Earth. (Thus, you can consider it to be stationary.)
* Earth orbits the Sun in a nearly circular orbit at a distance of 1 AU = $1.5 \times 10^{11}$ m
First, **determine initial conditions for the Earth: what should its starting position and velocity vectors be?** It will be useful to draw a cartoon of Earth's orbit, place the Earth somewhere in it, and then draw an arrow representing the Earth's velocity vector at that point. To determine the magnitude of Earth's velocity,
think about the following:
* How much time does it take for the Earth to complete one orbit?
* How far does Earth move during that time?
**Simulate the Earth's orbit around the Sun and confirm that you see the behavior that you expect**: a closed orbit that is nearly circular and that takes the expected amount of time to travel around the Sun.
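A minimal Euler-Cromer implementation of this test case might look like the following sketch (visualization omitted; the constants are the ones given above):

```python
import math

GM_SUN = 6.674e-11 * 1.99e30       # gravitational parameter of the Sun, m^3/s^2
AU = 1.5e11                        # Earth-Sun distance, m

# Initial conditions: start on the x-axis, velocity along +y.
x, y = AU, 0.0
v = math.sqrt(GM_SUN / AU)         # circular-orbit speed, ~29.8 km/s
vx, vy = 0.0, v

dt = 3600.0                        # one-hour time step
period = 2 * math.pi * AU / v      # expected orbital period, ~1 year
steps = int(period / dt)

for _ in range(steps):
    r = math.sqrt(x*x + y*y)
    ax, ay = -GM_SUN * x / r**3, -GM_SUN * y / r**3
    vx += ax * dt                  # Euler-Cromer: update the velocity first...
    vy += ay * dt
    x += vx * dt                   # ...then the position with the NEW velocity
    y += vy * dt

print(f"final radius: {math.sqrt(x*x + y*y) / AU:.4f} AU")  # stays ~1 AU
```

After one period the Earth returns close to its starting point, with the orbital radius holding at roughly 1 AU throughout — the expected behavior for a closed, nearly circular orbit.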
### 2. **The Euler and Euler-Cromer algorithms**
Now, investigate the behavior of your simulation by comparing your Euler-Cromer results from the first Exercise to those produced using the simple Euler method.
**Experiment with different values of the time step $\Delta t$ for the two algorithms.** How do their behaviors differ for large $\Delta t$? What about at small $\Delta t$? (Which algorithm works better?)
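One way to make the comparison quantitative, sketched under the same assumptions as above, is to track how far each method drifts from the initial specific orbital energy after one simulated year:

```python
import numpy as np

G, M = 6.674e-11, 1.99e30      # Sun as the central body
AU, year = 1.5e11, 3.156e7
v0 = 2 * np.pi * AU / year

def energy_drift(dt, euler_cromer):
    """Relative change in specific orbital energy after one year of integration."""
    x, y, vx, vy = AU, 0.0, 0.0, v0
    E0 = 0.5 * v0**2 - G * M / AU
    for _ in range(int(year / dt)):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -G * M * x / r3, -G * M * y / r3
        if euler_cromer:
            vx += ax * dt; vy += ay * dt   # new velocity ...
            x += vx * dt;  y += vy * dt    # ... moves the position
        else:
            x += vx * dt;  y += vy * dt    # plain Euler: old velocity moves the position
            vx += ax * dt; vy += ay * dt
    E = 0.5 * (vx**2 + vy**2) - G * M / np.hypot(x, y)
    return abs(E / E0 - 1.0)

for dt in (3600.0, 36000.0):
    print(dt, energy_drift(dt, True), energy_drift(dt, False))
```

Plain Euler systematically gains energy each step and spirals outward, so its drift grows with $\Delta t$ much faster than Euler-Cromer's, which stays bounded.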
### 3. **The Moon's orbit around the Earth**
Another familiar system is the Moon's orbit around the Earth. However, its orbit is more eccentric. The Moon's orbit has the following parameters:
* Mass of the Earth: $5.97 \times 10^{24}$ kg
* Lunar apogee (point furthest from Earth): $4.05 \times 10^8$ m
* Lunar perigee (point closest to Earth): $3.63 \times 10^8$ m
As before, use initial conditions in which the Moon's displacement and velocity vectors are perpendicular. This corresponds to either apogee or perigee (why?)
However, here you don't know the initial velocity that will reproduce the actual orbit of the Moon.
So, you'll need to find it. **Place the Moon at either apogee or perigee, and then by trial and error,
adjust the initial velocity so that you obtain the correct value for the other apsis.**
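Trial and error is the point of the exercise, but conservation of energy and angular momentum between the two apsides gives a closed-form cross-check (a sketch; distances in metres):

```python
import numpy as np

G = 6.674e-11
M_earth = 5.97e24
r_apo, r_peri = 4.05e8, 3.63e8        # apogee and perigee distances, m

# v_a * r_a = v_p * r_p (angular momentum) combined with energy conservation
# gives the apogee speed in closed form:
v_apo = np.sqrt(2 * G * M_earth * r_peri / (r_apo * (r_apo + r_peri)))
v_peri = v_apo * r_apo / r_peri

# Kepler's third law then gives the period from the semi-major axis
a = 0.5 * (r_apo + r_peri)
T = 2 * np.pi * np.sqrt(a**3 / (G * M_earth))
print(v_apo, v_peri, T / 86400.0)      # ~0.96 km/s, ~1.08 km/s, ~27.4 days
```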
Then, examine the orbital period of the Moon. Does this match with what you know about the Moon's motion? In particular, **investigate the length of the orbital period and compare it to what we actually observe from the Moon.**
### 4. **Monitoring the orbital energy**
Modify your code to keep track of the specific kinetic energy, specific potential energy, and total specific orbital energy of the Moon. (The *specific energy* is the energy per unit mass of an orbiting body.) In particular:
* $T_s = \frac{1}{2}(v_x^2 + v_y^2)$
* $U_s = -\frac{GM}{r}$
* $E_s = T_s + U_s$
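The bookkeeping can be sketched like this (an illustrative Euler-Cromer loop; the apogee speed of about 964 m/s is an assumed result of the trial-and-error step above):

```python
import numpy as np

G, M = 6.674e-11, 5.97e24            # Earth as the central body
x, y = 4.05e8, 0.0                    # start the Moon at apogee (m)
vx, vy = 0.0, 964.0                   # assumed apogee speed (m/s)
dt = 60.0
n_steps = 40_000                      # about one orbital period at this dt

T_s, U_s = np.empty(n_steps), np.empty(n_steps)
for i in range(n_steps):
    r3 = (x * x + y * y) ** 1.5
    vx += -G * M * x / r3 * dt
    vy += -G * M * y / r3 * dt
    x += vx * dt
    y += vy * dt
    T_s[i] = 0.5 * (vx * vx + vy * vy)   # specific kinetic energy
    U_s[i] = -G * M / np.hypot(x, y)     # specific potential energy

E_s = T_s + U_s
# T_s peaks at perigee, bottoms out at apogee, U_s mirrors it,
# and their sum stays very nearly constant.
print(E_s.min(), E_s.max())
```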
**For an eccentric orbit like the Moon's, plot the kinetic energy, potential energy, and total energy vs. time. Can you identify the perigee and apogee points from your plots?**
Then play with your initial conditions by slowly increasing the Moon's initial velocity. **What sorts of orbits do you see if the total energy is very negative? Only slightly negative? Positive?**
### 5. **Timesteps and timescales**
Determine roughly how many *timesteps per orbit* you need to simulate the motion of the Moon around the Earth with acceptable results. (What this means is up to you!)
Now, simulate the orbit of Halley's comet around the Sun. (You'll need to change the central mass back to $M_\odot = 1.99\times 10^{30}$ kg.) It has an extremely eccentric orbit, with an aphelion of $5.25\times 10^{12}$ m and a perihelion
of $8.77\times 10^{10}$ m. As before, you'll need to use trial and error to find the initial velocity at aphelion that gives you the correct perihelion distance.
**How many timesteps per orbit are required to get an acceptable simulation of Halley's comet around the Sun? Compare this to the number needed for the Moon; why are they different?**
One of the common difficulties in computer simulation is the presence of dissimilar timescales: when the simulation needs to simulate both processes that occur very quickly and those that occur very slowly. **How is that present in the Halley's comet
simulation, and why does it require you to do so much more work to simulate one orbit well?**
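The same apsidal conservation-law shortcut used for the Moon makes the timescale disparity explicit (a sketch; the values follow from the orbital parameters above):

```python
import numpy as np

G, M_sun = 6.674e-11, 1.99e30
r_ap, r_per = 5.25e12, 8.77e10       # aphelion and perihelion, m

v_ap = np.sqrt(2 * G * M_sun * r_per / (r_ap * (r_ap + r_per)))
v_per = v_ap * r_ap / r_per
a = 0.5 * (r_ap + r_per)
T = 2 * np.pi * np.sqrt(a**3 / (G * M_sun))

# Characteristic timescale r/v at each apsis: the comet "turns the corner"
# at perihelion thousands of times faster than it crawls near aphelion,
# so a fixed time step must resolve the fastest part of the orbit.
timescale_ratio = (r_ap / v_ap) / (r_per / v_per)
print(v_ap, v_per, T / 3.156e7, timescale_ratio)
```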
|
Intersections of two functions
Given functions $f$ and $g$,
scipy.optimize.fsolve gives an elegant solution:

import numpy as np
from scipy.optimize import fsolve

def f(xy):
    x, y = xy
    return (y - x ** 2, y - np.sin(x))

fsolve(f, [1.0, 1.0])
gives
array([0.87672622, 0.76864886]),
the intersection point of $y=x^2$ and $y=\sin(x)$.
Intersections of two arrays
There is a function called
numpy.intersect1d that returns the sorted, unique values present in both arrays, and
functools.reduce(np.intersect1d, list_of_arrays) gives the intersection of all arrays in the list. The indices can also be returned with the
return_indices=True option.
If the two arrays hold data values, using
np.flatnonzero(np.abs(arr1 - arr2) < tol) is a good solution; it returns the indices at which the values of
arr1 and
arr2 are within
tol of each other.
Another method is based on the idea that $f(x)-g(x)$ changes sign as $x$ crosses an intersection point $x_0$. Therefore, if the grid spacing $\mathrm{d}x$ is small enough, this method gives a very good estimate of the intersections.
def intersects(x, arr1, arr2):
    return x[np.flatnonzero(np.diff(np.sign(arr1 - arr2)))]
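As a self-contained check of the sign-change method (restating the function so the snippet runs on its own):

```python
import numpy as np

def intersects(x, arr1, arr2):
    """x-values just before arr1 - arr2 changes sign, i.e. the grid points
    immediately preceding each crossing."""
    return x[np.flatnonzero(np.diff(np.sign(arr1 - arr2)))]

x = np.linspace(-1.0, 1.5, 10_000)
crossings = intersects(x, x**2, np.sin(x))
print(crossings)    # y = x^2 and y = sin(x) cross near x = 0 and x = 0.8767
```

The accuracy is limited by the grid spacing, so a fine grid (or a follow-up `fsolve` started from each flagged point) sharpens the estimates.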
|
The rational numbers $\Q$ are a countable, totally ordered set, so any subset of the rationals is also countable and totally ordered. In fact, up to isomorphism, the subsets of the rationals are the `only' countable, totally ordered sets!
Example 5.6.1 Let $A=\N\times \N$ using the lexicographic ordering. Under this ordering, $A$ is totally ordered (in fact, well ordered). We show that $A$ is isomorphic to a subset of $\Q$. Let $f\colon A\to \Q$ be the function $$ f((n,m))= 2n-{1\over m}, $$ and let $B$ be the image of $f$. Clearly $f\colon A\to B$ is surjective. To convince yourself that $f$ is an isomorphism, look at some values: $$ \matrix{ f((1,1)) = 2-1, & f((1,2))= 2-1/2, & f((1,3))=2-1/3, &…,\cr f((2,1)) = 4-1, & f((2,2))= 4-1/2, & f((2,3))=4-1/3, &…,\cr f((3,1)) = 6-1, & f((3,2))= 6-1/2, & f((3,3))=6-1/3, &…,\cr} $$ In general, if we fix $n$, then $f((n,m))$ is a sequence that increases from $2n-1$ toward $2n$. These rational numbers are ordered just like the lexicographic ordering of $\N\times \N$.
This example may seem surprising at first, because the rationals seem ``one-dimensional'' while $\N\times \N$ seems ``two-dimensional.''
Theorem 5.6.2 Suppose $A$ is any countable, totally ordered set. Then $A$ is isomorphic to a subset of the rational numbers.
Proof. Since $A$ is countable, we can arrange it in a sequence $a_1, a_2, a_3, …$. We describe a procedure to define $f(a_i)$ for each $a_i$ in turn. Let $f(a_1)$ be any rational. Suppose we have defined $f(a_1), f(a_2), …, f(a_n)$ in such a way that all order relations are preserved (that is, for all $i,j\le n$, $a_i\le a_j$ if and only if $f(a_i)\le f(a_j)$). We want to define $f$ on $a_{n+1}\in A$. Partition the set $\{a_1,…, a_n\}$ into two subsets: $$ X=\{a_i:i\le n \hbox{ and } a_i< a_{n+1}\}, Y=\{a_i:i\le n \hbox{ and } a_i>a_{n+1}\}. $$ In $\Q$, every element of $f(X)$ is smaller than every element of $f(Y)$. Choose $q$ strictly larger than the elements of $f(X)$ and strictly smaller than the elements of $f(Y)$. For each $i\le n$, the relationship between $q$ and $f(a_i)$ is the same as the relationship between $a_{n+1}$ and $a_i$. Therefore, letting $f(a_{n+1})=q$, we have extended the function to one more element in such a way that all order relations are preserved. The resulting function defined on all of $A$ is thus an isomorphism from $A$ to the range of $f$.
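The proof is effectively a greedy algorithm; the following Python sketch (names are mine, and Fraction stands in for $\Q$) carries out the construction on a finite enumeration, choosing $q$ as a midpoint when both sides are nonempty:

```python
from fractions import Fraction

def order_embedding(seq):
    """Greedy embedding into Q following the proof: each new element is sent
    strictly between the images of the earlier elements below and above it."""
    images = []
    for n, a in enumerate(seq):
        lower = [images[i] for i in range(n) if seq[i] < a]
        upper = [images[i] for i in range(n) if seq[i] > a]
        if not lower and not upper:
            q = Fraction(0)
        elif not upper:                       # a is the largest so far
            q = max(lower) + 1
        elif not lower:                       # a is the smallest so far
            q = min(upper) - 1
        else:                                 # a midpoint lies strictly between
            q = (max(lower) + min(upper)) / 2
        images.append(q)
    return images

# Enumerate a few elements of N x N in an arbitrary order; Python tuples
# compare lexicographically, matching the ordering in Example 5.6.1.
pts = [(2, 1), (1, 1), (1, 3), (2, 2), (1, 2)]
imgs = order_embedding(pts)
# Order relations are preserved: pts[i] < pts[j] iff imgs[i] < imgs[j]
```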
Exercises 5.6
Ex 5.6.1 Show that the positive rationals, $\Q^+$, are isomorphic to the negative rationals, $\Q^-$. (Hint: $-1/x$.)
Ex 5.6.2 Show that $\{0,1\}\times \Z$ (using the natural ordering on each factor and the lexicographic ordering on the product) is isomorphic to $$\{…, -4, -2, -1, -1/2, -1/4,…, 1/4, 1/2, 1, 2, 4, …\}.$$
Ex 5.6.3 Let $I=\{q\in \Q: 0\le q< 1\}$. Show that $\Z\times I$ (with the lexicographic ordering) is isomorphic to $\Q$. (Hint: add.)
If $A$ and $B$ are partially ordered sets, then $f\colon A\to B$ is an
embedding if for all $x,y\in A$, $x\le y$ iff $f(x)\le f(y)$.
Ex 5.6.4 Show that an embedding is necessarily an injection, and hence $A$ is isomorphic to the image of $f$.
Ex 5.6.5 Verify that the identity function is an embedding and that the composition of two embeddings is an embedding.
Suppose $A$ and $B$ are partially ordered sets and there is an embedding of $A$ in $B$ and an embedding of $B$ in $A$. We would like to conclude that $A$ and $B$ are isomorphic—sort of a Schröder-Bernstein theorem for partial orders.
Ex 5.6.6 Show that this is true if $A$ is finite.
Ex 5.6.7 Show that this does not hold in general by finding two totally ordered sets that can be embedded in each other but are not isomorphic. (Hint: try intervals.)
In spite of this, it can be proved that when $A$ and $B$ are
well ordered and each can be embedded in the other, then they are isomorphic.
|
Speaker
Dr Anton Lymanets (University of Tuebingen)
Description
The CBM experiment at the future Facility for Antiproton and Ion Research (FAIR) will explore the properties of nuclear matter at high net baryon densities and moderate temperatures. Its key detector, the Silicon Tracking System (STS), will reconstruct the tracks of charged particles created in interactions of the heavy-ion beam with a nuclear target at projectile energies ranging from 10 to 40 GeV/nucleon. Operation at a 10 MHz interaction rate with charged-particle multiplicities up to 1000 requires fast and radiation-hard silicon sensors. The necessary momentum resolution of 1% imposes stringent requirements on the sensor material budget (0.3% X$_0$) and the detector module structure. The STS will occupy a volume of about 1 m$^3$ defined by the aperture of a dipole magnet. It will consist of 8 tracking stations based on double-sided silicon microstrip detectors. The sensors, with 58 $\mu m$ pitch, sizes up to $62 \times 62$ mm$^2$ and 1024 strips per side, have AC-coupled strips oriented at a $\pm$7.5° stereo angle. Short corner strips on the opposite edges of the sensors are interconnected via a second metallization layer, thus avoiding insensitive areas. The complex design and the large number of silicon sensors needed for the construction of the STS (about 1300) require a set of quality assurance procedures involving optical inspection, electric characterization and readout tests. We report on the development of an optical inspection system using NI LabVIEW software and the Vision package for pattern recognition. The STS readout electronics, with 2.1 million channels, will dissipate about 40 kW of power. To cope with this, bi-phase CO$_2$ evaporative cooling will be used. The performance of a test system will be presented, in particular the cooling efficiency of a custom-made heat exchanger for the front-end electronics.
Primary author
Dr Anton Lymanets (University of Tuebingen)
|
Journal of Differential Geometry, Volume 99, Number 2 (2015), 215-253.

On the spectrum of bounded immersions

Abstract
In this article, we investigate some of the relations between the spectrum of a non-compact, extrinsically bounded submanifold $\varphi : M^m \to N^n$ and the Hausdorff dimension of its limit set $\lim \varphi$. In particular, we prove that if $\varphi : M^2 \to \mathbb{R}^3$ is a (complete) minimal surface immersed into an open, bounded, strictly convex subset $\Omega$ with $C^3$-boundary, then $M$ has discrete spectrum, provided that $\mathcal{H}_{\Psi} (\lim \varphi \cap \Omega) = 0$, where $\mathcal{H}_{\Psi}$ is the Hausdorff measure of order $\Psi(t) = t^2 \lvert \log t \rvert$. Our main theorem, Thm. 2.4, applies to a number of examples recently constructed, by many authors, in the light of Nadirashvili’s discovery of complete bounded minimal disks in $\mathbb{R}^3$, as well as to solutions of Plateau’s problems for non-rectifiable Jordan curves, giving a fairly complete answer to a question posed by S.T. Yau in his Millennium Lectures.
On the other hand, we present a simple criterion, called the ball property, whose fulfilment guarantees the existence of elements in the essential spectrum. As an application, we show that some of the examples of Jorge-Xavier and Rosenberg-Toubiana of complete minimal surfaces between two planes have essential spectrum $\sigma_{\mathrm{ess}}(-\Delta) = [0, \infty)$.

Article information
Source: J. Differential Geom., Volume 99, Number 2 (2015), 215-253.
First available in Project Euclid: 16 January 2015
Permanent link to this document: https://projecteuclid.org/euclid.jdg/1421415562
Digital Object Identifier: doi:10.4310/jdg/1421415562
Mathematical Reviews number (MathSciNet): MR3302039
Zentralblatt MATH identifier: 1327.53076

Citation
Bessa, G. Pacelli; Jorge, Luquésio P.; Mari, Luciano. On the spectrum of bounded immersions. J. Differential Geom. 99 (2015), no. 2, 215--253. doi:10.4310/jdg/1421415562. https://projecteuclid.org/euclid.jdg/1421415562
|
LaTeX uses internal counters that provide numbering of pages, sections, tables, figures, etc. This article explains how to access and modify those counters and how to create new ones.
A counter can be easily set to any arbitrary value with \setcounter. See the example below:
\section{Introduction}
This document will present several counting examples,
how to reset and access them. For instance,
if you want to change the numbers in a list.

\begin{enumerate}
  \setcounter{enumi}{3}
  \item Something.
  \item Something else.
  \item Another element.
  \item The last item in the list.
\end{enumerate}
In this example, \setcounter{enumi}{3} sets the value of the item counter in the list to 3. This is the general syntax to manually set the value of any counter. See the reference guide for a complete list of counters.
All the commands in this section that change a counter's state change it globally.
Counters in a document can be incremented, reset, accessed and referenced. Let's see an example:
\section{Another section}
This is a dummy section with no purpose whatsoever
but to contain text. This section has assigned
the number \thesection.

\stepcounter{equation}
\begin{equation} \label{1stequation}
\int_{0}^{\infty} \frac{x}{\sin(x)}
\end{equation}
In this example, two counters are used:

\thesection prints the current value of the counter
section at this point. For further methods to print a counter, take a look at how to print counters below.

\stepcounter{equation} increments the counter
equation. Other similar commands are
\addtocounter and
\refstepcounter; see the reference guide.
Further commands to manipulate counters include:

\counterwithin<*>{<ctr1>}{<ctr2>}: adds <ctr2> to the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \the<ctr2>.\arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngctr package.

\counterwithout<*>{<ctr1>}{<ctr2>}: removes <ctr2> from the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngctr package.

\addtocounter{<ctr>}{<num>}: adds <num> to the value of the counter <ctr>.

\setcounter{<ctr>}{<num>}: sets <ctr>'s value to <num>.

\refstepcounter{<ctr>}: works like \stepcounter, but you can use LaTeX's referencing system to add a \label and later \ref the counter. The printed reference will be the current expansion of \the<ctr>.
The basic way to create a new counter is with \newcounter. Below is an example that defines a numbered environment called example:
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}

\newcounter{example}[section]
\newenvironment{example}[1][]{\refstepcounter{example}\par\medskip
  \textbf{Example~\theexample. #1} \rmfamily}{\medskip}

\begin{document}
This document will present...

\begin{example}
This is the first example. The counter will be reset at each section.
\end{example}

Below is a second example

\begin{example}
And here's another numbered example.
\end{example}

\section{Another section}
This is a dummy section with no purpose whatsoever
but to contain text. This section has assigned
the number \thesection.

\stepcounter{equation}
\begin{equation} \label{1stequation}
\int_{0}^{\infty} \frac{x}{\sin(x)}
\end{equation}

\begin{example}
This is the first example in this section.
\end{example}
\end{document}
In this LaTeX snippet the new environment
example is defined; it uses 3 counting-specific commands:

\newcounter{example}[section]: creates a new counter called example that is reset every time a new
section starts; omit the optional parameter if you don't want your defined counter to be automatically reset.

\refstepcounter{example}: increments the example counter and makes it the current target for any
\label afterwards.

\theexample: prints the current (formatted) value of the example counter.
For further information on user-defined environments, see the article about defining new environments.
You can print the current value of a counter in different ways:

\theCounterName prints the formatted value of the counter, e.g.
2.1 for the first subsection in the second section.

\arabic{CounterName} prints the value as an Arabic number, e.g.
2.

\value{CounterName} expands to the value of the counter where LaTeX expects a number (e.g.
\setcounter{section}{\value{subsection}}).

\alph{CounterName} prints the value as a lowercase letter, e.g.
b.

\Alph{CounterName} prints the value as an uppercase letter, e.g.
B.

\roman{CounterName} prints the value as a lowercase Roman numeral, e.g.
ii.

\Roman{CounterName} prints the value as an uppercase Roman numeral, e.g.
II.

\fnsymbol{CounterName} prints the value as a footnote symbol, e.g.
†.
\theCounterName is the macro responsible for printing
CounterName's value in a formatted manner. For new counters created by
\newcounter it is initialized to print an Arabic number. You can change this by using
\renewcommand. For example, if you want the subsection counter to be printed with the current section in italics and the current subsection in uppercase Roman numerals, you could do the following:
\renewcommand\thesubsection{\textit{\thesection}.\Roman{subsection}}
\section{Example}
\subsection{Example}\label{sec:example:ssec:example}
This is the subsection \ref{sec:example:ssec:example}.

Default counters in LaTeX
For the document structure: part, chapter, section, subsection, subsubsection, paragraph, subparagraph, page, equation.

For floats: figure, table.

For footnotes: footnote, mpfootnote.

For the enumerate environment: enumi, enumii, enumiii, enumiv.

Counter manipulation commands:

\addtocounter{CounterName}{number}: increases CounterName by number.

\stepcounter{CounterName}: increases CounterName by one and resets all the counters subsidiary to it.

\refstepcounter{CounterName}: works like
\stepcounter, but makes the counter visible to the referencing mechanism (\ref{label} returns the counter value).

\setcounter{CounterName}{number}: sets CounterName's value to number.

\newcounter{NewCounterName}: creates a new counter. If you want the
NewCounterName counter to be reset to zero every time that another counter OtherCounterName is increased, use: \newcounter{NewCounterName}[OtherCounterName]

\value{CounterName}: expands to the value of CounterName where LaTeX expects a number, e.g.
\setcounter{section}{\value{subsection}}.

\theCounterName: prints the formatted value of CounterName, for example
\thechapter,
\thesection, etc. Note that this might result in more than just the counter; for example, with the standard definitions of the
article class,
\thesubsection will print
Section.Subsection (e.g.
2.1).
|
Theorem: Two diagonalizable matrices $A$ and $B$ share the same eigenvector matrix $S$ iff $AB=BA$
Proof: $(\Leftarrow)$ Suppose $AB=BA$.
$$ABx=BAx=B\lambda x=\lambda Bx$$ Thus $x$ and $Bx$ are both eigenvectors of $A$, sharing the same $\lambda$. Assume that the eigenvalues of $A$ are distinct (it means the eigenspaces are all one dimensional) then $Bx$ must be a multiple of $x$. In other words $x$ is an eigenvector of $B$ as well as $A$.
Above, I can't follow the meaning of "if the eigenvalues are distinct then the eigenspaces are all one dimensional".
For example, let $$A=\begin{pmatrix}4 & -5 \\ 2 & -3\end{pmatrix}$$ It has two distinct eigenvalues, $-1$ and $2$. But its eigenvectors are $x_1=\begin{pmatrix}1 &1\end{pmatrix}^\mathrm{T}$ and $x_2=\begin{pmatrix}5 &2 \end{pmatrix}^\mathrm{T}$; they are not one dimensional.
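A quick numerical illustration of the $(\Leftarrow)$ direction, using the matrix $A$ from the example and a polynomial in $A$ as a commuting partner $B$ (a sketch; the choice of $B$ is mine):

```python
import numpy as np

A = np.array([[4.0, -5.0],
              [2.0, -3.0]])
B = A @ A + A                     # any polynomial in A commutes with A
assert np.allclose(A @ B, B @ A)

evals, V = np.linalg.eig(A)       # eigenvalues -1 and 2; eigenvectors in columns
for k in range(2):
    x = V[:, k]
    Bx = B @ x
    # Bx is a multiple of x: the 2x2 matrix [x | Bx] is singular,
    # so each eigenvector of A is an eigenvector of B as well
    assert abs(np.linalg.det(np.column_stack([x, Bx]))) < 1e-10
```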
|
Answer
$(5, \dfrac{3 \pi}{2})$
Work Step by Step
Here, $r=\sqrt {x^2+y^2}=\sqrt{(0)^2+(-5)^2}= \sqrt {25}=5$. Since $x=0$, $\tan \theta =\dfrac{y}{x}$ is undefined; the point $(0,-5)$ lies on the negative $y$-axis, so $\theta =\dfrac{3 \pi}{2}$. Hence, the polar coordinates are $(5, \dfrac{3 \pi}{2})$.
|
We show, for a large family of properties of homogeneous spaces, that such a property holds for every homogeneous space of a connected linear group as soon as it holds for homogeneous spaces of $\text{SL}_{n}$ with finite stabilizer. In particular, we reduce to this special case the verification of an important conjecture of Colliot-Thélène on the Brauer–Manin obstruction to the Hasse principle and to weak approximation. Recent work of Harpaz and Wittenberg shows that the main result also applies to the analogous conjecture (the so-called conjecture (E)) for zero-cycles.
In this article we construct a p-adic three-dimensional eigenvariety for the group $U(2,1)(E)$, where $E$ is a quadratic imaginary field and $p$ is inert in $E$. The eigenvariety parametrizes Hecke eigensystems on the space of overconvergent, locally analytic, cuspidal Picard modular forms of finite slope. The method generalizes the one developed in Andreatta, Iovita and Stevens [$p$-adic families of Siegel modular cuspforms, Ann. of Math. (2) 181 (2015), 623–697] by interpolating the coherent automorphic sheaves when the ordinary locus is empty. As an application of this construction, we reprove a particular case of the Bloch–Kato conjecture for some Galois characters of $E$, extending the results of Bellaiche and Chenevier to the case of a positive sign.
In this paper, we prove that the set of all $F$-pure thresholds on a fixed germ of a strongly $F$-regular pair satisfies the ascending chain condition. As a corollary, we verify the ascending chain condition for the set of all $F$-pure thresholds on smooth varieties or, more generally, on varieties with tame quotient singularities, which is an affirmative answer to a conjecture given by Blickle, Mustaţǎ and Smith.
We develop the analog of crystalline Dieudonné theory for $p$-divisible groups in the arithmetic of function fields. In our theory $p$-divisible groups are replaced by divisible local Anderson modules, and Dieudonné modules are replaced by local shtukas. We show that the categories of divisible local Anderson modules and of effective local shtukas are anti-equivalent over arbitrary base schemes. We also clarify their relation with formal Lie groups and with global objects like Drinfeld modules, Anderson’s abelian $t$-modules and $t$-motives, and Drinfeld shtukas. Moreover, we discuss the existence of a Verschiebung map and apply it to deformations of local shtukas and divisible local Anderson modules. As a tool we use Faltings’s and Abrashkin’s theories of strict modules, which we review briefly.
Let $X:=\mathbb{A}_{R}^{n}$ be the $n$-dimensional affine space over a discrete valuation ring $R$ with fraction field $K$. We prove that any pointed torsor $Y$ over $\mathbb{A}_{K}^{n}$ under the action of an affine finite-type group scheme can be extended to a torsor over $\mathbb{A}_{R}^{n}$, possibly after pulling $Y$ back over an automorphism of $\mathbb{A}_{K}^{n}$. The proof is effective. Other cases, including $X=\alpha_{p,R}$, are also discussed.
A polarized variety is K-stable if, for any test configuration, the Donaldson–Futaki invariant is positive. In this paper, inspired by classical geometric invariant theory, we describe the space of test configurations as a limit of a direct system of Tits buildings. We show that the Donaldson–Futaki invariant, conveniently normalized, is a continuous function on this space. We also introduce a pseudo-metric on the space of test configurations. Recall that K-stability can be enhanced by requiring that the Donaldson–Futaki invariant is positive on any admissible filtration of the co-ordinate ring. We show that admissible filtrations give rise to Cauchy sequences of test configurations with respect to the above mentioned pseudo-metric.
We describe all degenerations of three-dimensional anticommutative algebras $\mathfrak{A}\mathfrak{c}\mathfrak{o}\mathfrak{m}_{3}$ and of three-dimensional Leibniz algebras $\mathfrak{L}\mathfrak{e}\mathfrak{i}\mathfrak{b}_{3}$ over $\mathbb{C}$. In particular, we describe all irreducible components and rigid algebras in the corresponding varieties.
We exhibit invariants of smooth projective algebraic varieties with integer values, whose nonvanishing modulo $p$ prevents the existence of an action without fixed points of certain finite $p$-groups. The case of base fields of characteristic $p$ is included. Counterexamples are systematically provided to test the sharpness of our results.
According to a well-known theorem of Serre and Tate, the infinitesimal deformation theory of an abelian variety in positive characteristic is equivalent to the infinitesimal deformation theory of its Barsotti–Tate group. We extend this result to 1-motives.
Let U be the unipotent radical of a Borel subgroup of a connected reductive algebraic group G, which is defined over an algebraically closed field k. In this paper, we extend work by Goodwin and Röhrle concerning the commuting variety of Lie(U) for Char(k) = 0 to fields whose characteristic is good for G.
A general conjecture is stated on the cone of automorphic vector bundles admitting nonzero global sections on schemes endowed with a smooth, surjective morphism to a stack of $G$-zips of connected Hodge type; such schemes should include all Hodge-type Shimura varieties with hyperspecial level. We prove our conjecture for groups of type $A_{1}^{n}$, $C_{2}$, and $\mathbf{F}_{p}$-split groups of type $A_{2}$ (this includes all Hilbert–Blumenthal varieties and should also apply to Siegel modular $3$-folds and Picard modular surfaces). An example is given to show that our conjecture can fail for zip data not of connected Hodge type.
This paper is dedicated to a problem raised by Jacques Tits in 1956: the Weyl group of a Chevalley group should find an interpretation as a group over what is nowadays called $\mathbb{F}_{1}$, the field with one element. Based on Part I of The geometry of blueprints, we introduce the class of Tits morphisms between blue schemes. The resulting Tits category $\text{Sch}_{{\mathcal{T}}}$ comes together with a base extension to (semiring) schemes and the so-called Weyl extension to sets. We prove for ${\mathcal{G}}$ in a wide class of Chevalley groups—which includes the special and general linear groups, symplectic and special orthogonal groups, and all types of adjoint groups—that a linear representation of ${\mathcal{G}}$ defines a model $G$ in $\text{Sch}_{{\mathcal{T}}}$ whose Weyl extension is the Weyl group $W$ of ${\mathcal{G}}$. We call such models Tits–Weyl models. The potential of Tits–Weyl models lies in (a) their intrinsic definition that is given by a linear representation; (b) the (yet to be formulated) unified approach towards thick and thin geometries; and (c) the extension of a Chevalley group to a functor on blueprints, which makes it, in particular, possible to consider Chevalley groups over semirings. This opens applications to idempotent analysis and tropical geometry.
The work is devoted to the variety of two-dimensional algebras over algebraically closed fields. First we classify such algebras modulo isomorphism. Then we describe the degenerations and the closures of certain algebra series in the variety of two-dimensional algebras. Finally, we apply our results to obtain analogous descriptions for the subvarieties of flexible and bicommutative algebras. In particular, we describe rigid algebras and irreducible components for these subvarieties.
In this paper we establish Springer correspondence for the symmetric pair $(\text{SL}(N),\text{SO}(N))$ using Fourier transform, the parabolic induction functor, and a nearby cycle sheaf construction. As an application of our results we see that the cohomology of Hessenberg varieties can be expressed in terms of irreducible representations of Hecke algebras of symmetric groups at $q=-1$. Conversely, we see that the irreducible representations of Hecke algebras of symmetric groups at $q=-1$ arise in geometry.
The Dieudonné crystal of a $p$-divisible group over a semiperfect ring $R$ can be endowed with a window structure. If $R$ satisfies a boundedness condition, this construction gives an equivalence of categories. As an application we obtain a classification of $p$-divisible groups and commutative finite locally free $p$-group schemes over perfectoid rings by Breuil–Kisin–Fargues modules if $p\geqslant 3$.
Let $U$ be an affine smooth curve defined over an algebraically closed field of positive characteristic. The Abhyankar conjecture (proved by Raynaud and Harbater in 1994) describes the set of finite quotients of Grothendieck’s étale fundamental group $\pi_{1}^{\acute{\text{e}}\text{t}}(U)$. In this paper, we consider a purely inseparable analogue of this problem, formulated in terms of Nori’s profinite fundamental group scheme $\pi^{N}(U)$, and give a partial answer to it.
When $p>2$, we construct a Hodge-type analogue of Rapoport–Zink spaces under the unramifiedness assumption, as formal schemes parametrizing ‘deformations’ (up to quasi-isogeny) of $p$-divisible groups with certain crystalline Tate tensors. We also define natural rigid analytic towers with expected extra structure, providing more examples of ‘local Shimura varieties’ conjectured by Rapoport and Viehmann.
We show that the anti-canonical volume of an $n$-dimensional Kähler–Einstein $\mathbb{Q}$-Fano variety is bounded from above by certain invariants of the local singularities, namely $\operatorname{lct}^{n}\cdot \operatorname{mult}$ for ideals and the normalized volume function for real valuations. This refines a recent result by Fujita. As an application, we get sharp volume upper bounds for Kähler–Einstein Fano varieties with quotient singularities. Based on very recent results by Li and the author, we show that a Fano manifold is K-semistable if and only if a de Fernex–Ein–Mustaţă type inequality holds on its affine cone.
Colmez [Périodes des variétés abéliennes à multiplication complexe, Ann. of Math. (2) 138(3) (1993), 625–683; available at http://www.math.jussieu.fr/∼colmez] conjectured a product formula for periods of abelian varieties over number fields with complex multiplication and proved it in some cases. His conjecture is equivalent to a formula for the Faltings height of CM abelian varieties in terms of the logarithmic derivatives at $s=0$ of certain Artin $L$-functions. In a series of articles we investigate the analog of Colmez’s theory in the arithmetic of function fields. There abelian varieties are replaced by Drinfeld modules and their higher-dimensional generalizations, so-called $A$-motives. In the present article we prove the product formula for the Carlitz module and we compute the valuations of the periods of a CM $A$-motive at all finite places in terms of Artin $L$-series. The latter is achieved by investigating the local shtukas associated with the $A$-motive.
Varieties of the form $G\times S_{\!\text{reg}}$, where $G$ is a complex semisimple group and $S_{\!\text{reg}}$ is a regular Slodowy slice in the Lie algebra of $G$, arise naturally in hyperkähler geometry, theoretical physics and the theory of abstract integrable systems. Crooks and Rayan [‘Abstract integrable systems on hyperkähler manifolds arising from Slodowy slices’, Math. Res. Lett., to appear] use a Hamiltonian $G$-action to endow $G\times S_{\!\text{reg}}$ with a canonical abstract integrable system. To understand examples of abstract integrable systems arising from Hamiltonian $G$-actions, we consider a holomorphic symplectic variety $X$ carrying an abstract integrable system induced by a Hamiltonian $G$-action. Under certain hypotheses, we show that there must exist a $G$-equivariant variety isomorphism $X\cong G\times S_{\!\text{reg}}$.
|
In exercises 1 - 2, find a unit normal vector to the surface at the indicated point.
1) \( f(x,y)=x^3,\quad (2,−1,8)\)
Answer: \( (\frac{\sqrt{145}}{145})(12\hat{\mathbf i}−\hat{\mathbf k})\)
2) \( \ln\left(\dfrac{x}{y−z}\right)=0\) when \( x=y=1\)
In exercises 3 - 7, find a normal vector and a tangent vector at point \( P\).
3) \( x^2+xy+y^2=3,\quad P(−1,−1)\)
Answer: Normal vector: \( \hat{\mathbf i}+\hat{\mathbf j}\), tangent vector: \( \hat{\mathbf i}−\hat{\mathbf j}\)
4) \( (x^2+y^2)^2=9(x^2−y^2),\quad P(\sqrt{2},1)\)
5) \( xy^2−2x^2+y+5x=6,\quad P(4,2)\)
Answer: Normal vector: \( 7\hat{\mathbf i}−17\hat{\mathbf j}\), tangent vector: \( 17\hat{\mathbf i}+7\hat{\mathbf j}\)
6) \( 2x^3−x^2y^2=3x−y−7,\quad P(1,−2)\)
7) \( ze^{x^2−y^2}−3=0, \quad P(2,2,3)\)
Answer: \( −1.094\hat{\mathbf i}−0.18238\hat{\mathbf j}\)
In exercises 8 - 19, find the equation for the tangent plane to the surface at the indicated point. (Hint: If the given function is not already solved for \(z\), start by solving it for \( z\) in terms of \( x\) and \( y\).)
8) \( −8x−3y−7z=−19,\quad P(1,−1,2)\)
9) \( z=−9x^2−3y^2,\quad P(2,1,−39)\)
Answer: \( −36x−6y−z=−39\)
10) \( x^2+10xyz+y^2+8z^2=0,\quad P(−1,−1,−1)\)
11) \( z=\ln(10x^2+2y^2+1),\quad P(0,0,0)\)
Answer: \( z=0\)
12) \( z=e^{7x^2+4y^2}, \quad P(0,0,1)\)
13) \( xy+yz+zx=11,\quad P(1,2,3)\)
Answer: \( 5x+4y+3z−22=0\)
14) \( x^2+4y^2=z^2,\quad P(3,2,5)\)
15) \( x^3+y^3=3xyz,\quad P(1,2,\frac{3}{2})\)
Answer: \( 4x−5y+4z=0\)
16) \( z=axy,\quad P(1,\frac{1}{a},1)\)
17) \( z=\sin x+\sin y+\sin(x+y),\quad P(0,0,0)\)
Answer: \( 2x+2y−z=0\)
18) \( h(x,y)=\ln\sqrt{x^2+y^2},\quad P(3,4)\)
19) \( z=x^2−2xy+y^2,\quad P(1,2,1)\)
Answer: \( −2(x−1)+2(y−2)−(z−1)=0\)
In exercises 20 - 25, find parametric equations for the normal line to the surface at the indicated point. (Recall that to find the equation of a line in space, you need a point on the line, \( P_0(x_0,y_0,z_0)\), and a vector \( \vecs v=⟨a,b,c⟩\) that is parallel to the line. Then the equations of the line are: \(\quad x=x_0+at,\quad y=y_0+bt, \quad z=z_0+ct.)\)
20) \( −3x+9y+4z=−4,\quad P(1,−1,2)\)
21) \( z=5x^2−2y^2,\quad P(2,1,18)\)
Answer: \( x=20t+2,y=−4t+1,z=−t+18\)
22) \( x^2−8xyz+y^2+6z^2=0,\quad P(1,1,1)\)
23) \( z=\ln(3x^2+7y^2+1),\quad P(0,0,0)\)
Answer: \( x=0,y=0,z=t\)
24) \( z=e^{4x^2+6y^2},\quad P(0,0,1)\)
25) \( z=x^2−2xy+y^2\) at point \( P(1,2,1)\)
Answer: \( x−1=2t;y−2=−2t;z−1=t\)
In exercises 26 - 28, use the figure shown here.
26) The length of line segment \( AC\) is equal to what mathematical expression?
27) The length of line segment \( BC\) is equal to what mathematical expression?
Answer: The differential of the function \( z(x,y)\): \( dz=f_x\,dx+f_y\,dy\)
28) Using the figure, explain what the length of line segment \( AB\) represents.
29) Show that \( f(x,y)=e^{xy}x\) is differentiable at point \( (1,0).\)
Answer: Using the definition of differentiability, we have \( e^{xy}x≈x+y\).
30) Show that \( f(x,y)=x^2+3y\) is differentiable at every point. In other words, show that \( Δz=f(x+Δx,y+Δy)−f(x,y)=f_xΔx+f_yΔy+ε_1Δx+ε_2Δy\), where both \( ε_1\) and \( ε_2\) approach zero as \( (Δx,Δy)\) approaches \( (0,0).\)
Answer: \( Δz=2xΔx+3Δy+(Δx)^2\). Since \( (Δx)^2→0\) as \( Δx→0\) (take \( ε_1=Δx\) and \( ε_2=0\)), \( z\) satisfies the definition of differentiability.
31) Find the total differential of each function:
a. \( z=x^3 + y^3 - 5\)
b. \( z=e^{xy}\)
c. \( z=y\cos x+\sin y\)
d. \( P = t^2 + 3t + tu^3\)
e. \( w=e^y\cos(x)+z^2\)
Answers:
a. \( dz = 3x^2\,dx +3y^2\,dy \)
b. \( dz = ye^{xy}\,dx +xe^{xy}\,dy \)
c. \( dz = -y\sin x\,dx +(\cos x + \cos y)\,dy \)
d. \( dP = (2t + 3 + u^3)\, dt + 3t u^2 \,du \)
e. \( dw = -e^y\sin(x)\,dx +e^y\cos(x)\,dy +2z\,dz\)
32) a. Find the total differential \(dz\) of the function \( z=\dfrac{xy}{y+x}\) and then
b. State its value where \( x\) changes from \( 10\) to \( 10.5\) and \( y\) changes from \( 15\) to \( 13\). Answer: a. \( dz = \dfrac{y^2}{(x+y)^2}\, dx + \dfrac{x^2}{(x+y)^2} \,dy \) b. \(dx = 0.5\) and \(dy = -2\) so \( \begin{align*} dz &= f_x(10, 15) \, dx + f_y(10,15)\, dy \\ &= \frac{15^2}{25^2}\,dx + \frac{10^2}{25^2}\, dy \\ &= \frac{225}{625}\, (0.5) + \frac{100}{625} (-2) \\ &= \frac{9}{25}\left(\frac{1}{2}\right) + \frac{4}{25} (-2) \\ &= \frac{18}{100} - \frac{32}{100} \\ &= .18 - .32 = -0.14 \end{align*}\)
33) Let \( z=f(x,y)=xe^y.\) State its total differential. Then compute \( Δz\) from \( P(1,2)\) to \( Q(1.05,2.1)\) and then find the approximate change in \( z\), \(dz\), from point \( P\) to point \( Q\). Recall \( Δz=f(x+Δx,y+Δy)−f(x,y)\), and \( dz\) and \( Δz\) should be approximately equal, if \(dx\) and \(dy\) are both reasonably small.
Answer: Total Differential: \(dz = e^y\,dx + xe^y\, dy \) \( Δz≈1.185422\) and \( dz≈1.108.\) Note that they are relatively close.
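As a quick numeric sanity check of these two values (a hypothetical Python sketch, not part of the original exercise):

```python
import math

# z = x*e^y; move from P(1, 2) to Q(1.05, 2.1), so dx = 0.05 and dy = 0.1.
f = lambda x, y: x * math.exp(y)

dx, dy = 0.05, 0.1
dz = math.exp(2.0) * dx + 1.0 * math.exp(2.0) * dy  # dz = e^y dx + x e^y dy at P
delta_z = f(1.05, 2.1) - f(1.0, 2.0)                # exact change

print(round(delta_z, 6), round(dz, 6))  # -> 1.185422 1.108358
```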
34) The volume of a right circular cylinder is given by \( V(r,h)=πr^2h.\) Find the differential \( dV\). Interpret the formula geometrically.
Answer: \( dV = 2 \pi r h\, dr + \pi r^2 \,dh \)
35) See the preceding problem. Use differentials to estimate the amount of aluminum in an enclosed aluminum can with diameter \( 8.0cm\) and height \( 12cm\) if the aluminum is \( 0.04\) cm thick.
Answer: \( 16\,\text{cm}^3\)
36) Use the differential \( dz\) to approximate the change in \( z=\sqrt{4−x^2−y^2}\) as \( (x,y)\) moves from point \( (1,1)\) to point \( (1.01,0.97).\) Compare this approximation with the actual change in the function.
37) Let \( z=f(x,y)=x^2+3xy−y^2.\) Find the exact change in the function and the approximate change in the function as \( x\) changes from \( 2.00\) to \( 2.05\) and \( y\) changes from \( 3.00\) to \( 2.96\).
Answer: \( Δz=\) exact change \( =0.6449\), approximate change is \( dz=0.65\). The two values are close.
38) The centripetal acceleration of a particle moving in a circle is given by \( a(r,v)=\frac{v^2}{r},\) where \( v\) is the velocity and \( r\) is the radius of the circle. Approximate the maximum percent error in measuring the acceleration resulting from errors of \( 3\%\) in \( v\) and \( 2\%\) in \( r\). (Recall that the percentage error is the ratio of the amount of error over the original amount. So, in this case, the percentage error in a is given by \( \frac{da}{a}\).)
39) The radius \( r\) and height \( h\) of a right circular cylinder are measured with possible errors of \( 4\%\) and \( 5\%\), respectively. Approximate the maximum possible percentage error in measuring the volume (Recall that the percentage error is the ratio of the amount of error over the original amount. So, in this case, the percentage error in \( V\) is given by \( \frac{dV}{V}\).)
Answer: \( 13\%\) or \( 0.13\)
40) The base radius and height of a right circular cone are measured as \( 10\) in. and \( 25\) in., respectively, with a possible error in measurement of as much as \( 0.1\) in. each. Use differentials to estimate the maximum error in the calculated volume of the cone.
41) The electrical resistance \( R\) produced by wiring resistors \( R_1\) and \( R_2\) in parallel can be calculated from the formula \( \frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}\). If \( R_1\) and \( R_2\) are measured to be \( 7Ω\) and \( 6Ω\), respectively, and if these measurements are accurate to within \( 0.05Ω\), estimate the maximum possible error in computing \( R\). (The symbol \( Ω\) represents an ohm, the unit of electrical resistance.)
Answer: \( 0.025\)
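The differential computation behind this answer can be sketched in a few lines (a hypothetical check; differentiating \( \frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}\) gives \( dR = (R/R_1)^2\,dR_1 + (R/R_2)^2\,dR_2\)):

```python
# Differentiating 1/R = 1/R1 + 1/R2 gives dR/R^2 = dR1/R1^2 + dR2/R2^2,
# i.e. dR = (R/R1)^2 dR1 + (R/R2)^2 dR2 for worst-case same-sign errors.
R1, R2 = 7.0, 6.0
dR1 = dR2 = 0.05

R = R1 * R2 / (R1 + R2)
dR = (R / R1) ** 2 * dR1 + (R / R2) ** 2 * dR2
print(round(dR, 3))  # -> 0.025
```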
42) The area of an ellipse with axes of length \( 2a\) and \( 2b\) is given by the formula \( A=πab\). Approximate the percent change in the area when \( a\) increases by \( 2\%\) and \( b\) increases by \( 1.5\%.\)
43) The period \( T\) of a simple pendulum with small oscillations is calculated from the formula \( T=2π\sqrt{\frac{L}{g}}\), where \( L\) is the length of the pendulum and \( g\) is the acceleration resulting from gravity. Suppose that \( L\) and \( g\) have errors of, at most, \( 0.5\%\) and \( 0.1\%\), respectively. Use differentials to approximate the maximum percentage error in the calculated value of \( T\).
Answer: \( 0.3\%\)
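The relative-error calculation here reduces to two lines (a hypothetical sketch; logarithmic differentiation of \( T=2π\sqrt{L/g}\) gives \( dT/T = \tfrac{1}{2}\,dL/L + \tfrac{1}{2}\,dg/g\)):

```python
# ln T = ln(2*pi) + (1/2) ln L - (1/2) ln g,
# so dT/T = (1/2)(dL/L) + (1/2)(dg/g); add magnitudes for the worst case.
rel_L, rel_g = 0.005, 0.001   # 0.5% and 0.1%
rel_T = 0.5 * rel_L + 0.5 * rel_g
print(round(100 * rel_T, 2))  # -> 0.3 (percent)
```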
44) Electrical power \( P\) is given by \( P=\frac{V^2}{R}\), where \( V\) is the voltage and \( R\) is the resistance. Approximate the maximum percentage error in calculating power if \( 120\) V is applied to a \( 2000\,Ω\) resistor and the possible percent errors in measuring \( V\) and \( R\) are \( 3\%\) and \( 4\%\), respectively.
For exercises 45 - 49, find the linear approximation of each function at the indicated point.
45) \( f(x,y)=x\sqrt{y},\quad P(1,4)\)
Answer: \( L(x,y) = 2x+\frac{1}{4}y−1\)
46) \( f(x,y)=e^x\cos y;\quad P(0,0)\)
47) \( f(x,y)=\arctan(x+2y),\quad P(1,0)\)
Answer: \( L(x,y) = \frac{1}{2}x+y+\frac{1}{4}π−\frac{1}{2}\)
48) \( f(x,y)=\sqrt{20−x^2−7y^2},\quad P(2,1)\)
49) \( f(x,y,z)=\sqrt{x^2+y^2+z^2},\quad P(3,2,6)\)
Answer: \( L(x,y,z) = \frac{3}{7}x+\frac{2}{7}y+\frac{6}{7}z\)
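To see how good this linear approximation is near \( P\), a quick (hypothetical) numeric comparison:

```python
import math

# f(x, y, z) = sqrt(x^2 + y^2 + z^2); at P(3, 2, 6), f(P) = 7 and
# grad f(P) = (3/7, 2/7, 6/7), which simplifies to L(x, y, z) = (3x + 2y + 6z)/7.
f = lambda x, y, z: math.sqrt(x * x + y * y + z * z)
L = lambda x, y, z: (3 * x + 2 * y + 6 * z) / 7

print(f(3.1, 2.05, 5.9), L(3.1, 2.05, 5.9))  # nearly equal close to P
```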
50) [T] Find the equation of the tangent plane to the surface \( f(x,y)=x^2+y^2\) at point \( (1,2,5),\) and graph the surface and the tangent plane at the point.
51) [T] Find the equation for the tangent plane to the surface at the indicated point, and graph the surface and the tangent plane: \( z=\ln(10x^2+2y^2+1),\quad P(0,0,0).\)
Answer:
\( z=0\)
52) [T] Find the equation of the tangent plane to the surface \( z=f(x,y)=\sin(x+y^2)\) at point \( (\frac{π}{4},0,\frac{\sqrt{2}}{2})\), and graph the surface and the tangent plane.
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
Paul Seeburger (Monroe Community College) edited the LaTeX and created all but part e of exercise 31.
|
February 4, 2015
ece1229
I’ve now posted a first set of notes for the antenna theory course that I am taking this term at UofT.
Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught from slides that match the textbook so closely that there is little value in my taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with little details. My usual notes collection for the class will contain musings on details that were unclear, or in some cases, details that were provided in class but are not in the text (and too long to pencil into my book).
The notes linked above include:
Reading notes for chapter 2 (Fundamental Parameters of Antennas) and chapter 3 (Radiation Integrals and Auxiliary Potential Functions) of the class text. Geometric Algebra musings: how to formulate Maxwell’s equations when magnetic sources are also included (those modeling magnetic dipoles). Some problems for chapter 2 content.
[Click here for a PDF of this post with nicer formatting]
This is a small addition to Phasor form of (extended) Maxwell’s equations in Geometric Algebra.
Relative to the observer frame implicitly specified by \( \gamma_0 \), here’s an expansion of the curl of the electric four potential
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:720}
\begin{aligned} \grad \wedge A_{\textrm{e}} &= \inv{2}\lr{ \grad A_{\textrm{e}} - A_{\textrm{e}} \grad } \\ &= \inv{2}\lr{ \gamma_0 \lr{ \spacegrad + j k } \gamma_0 \lr{ A_{\textrm{e}}^0 - \BA_{\textrm{e}} } - \gamma_0 \lr{ A_{\textrm{e}}^0 - \BA_{\textrm{e}} } \gamma_0 \lr{ \spacegrad + j k } } \\ &= \inv{2}\lr{ \lr{ -\spacegrad + j k } \lr{ A_{\textrm{e}}^0 - \BA_{\textrm{e}} } - \lr{ A_{\textrm{e}}^0 + \BA_{\textrm{e}} } \lr{ \spacegrad + j k } } \\ &= \inv{2}\lr{ - 2 \spacegrad A_{\textrm{e}}^0 + j k A_{\textrm{e}}^0 - j k A_{\textrm{e}}^0 + \spacegrad \BA_{\textrm{e}} - \BA_{\textrm{e}} \spacegrad - 2 j k \BA_{\textrm{e}} } \\ &= - \lr{ \spacegrad A_{\textrm{e}}^0 + j k \BA_{\textrm{e}} } + \spacegrad \wedge \BA_{\textrm{e}} \end{aligned} \end{equation}
In the above expansion when the gradients appeared on the right of the field components, they are acting from the right (i.e. implicitly using the Hestenes dot convention.)
The electric and magnetic fields can be picked off directly from above, and in the units implied by this choice of four-potential are
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:760}
\BE_{\textrm{e}} = - \lr{ \spacegrad A_{\textrm{e}}^0 + j k \BA_{\textrm{e}} } = -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{e}} + k \BA_{\textrm{e}} } \end{equation} \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:780} c \BB_{\textrm{e}} = \spacegrad \cross \BA_{\textrm{e}}. \end{equation}
For the fields due to the magnetic potentials
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:800}
\lr{ \grad \wedge A_{\textrm{m}} } I = - \lr{ \spacegrad A_{\textrm{m}}^0 + j k \BA_{\textrm{m}} } I - \spacegrad \cross \BA_{\textrm{m}}, \end{equation}
so the fields are
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:840}
c \BB_{\textrm{m}} = - \lr{ \spacegrad A_{\textrm{m}}^0 + j k \BA_{\textrm{m}} } = -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{m}} + k \BA_{\textrm{m}} } \end{equation} \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:860} \BE_{\textrm{m}} = -\spacegrad \cross \BA_{\textrm{m}}. \end{equation}
Including both electric and magnetic sources the fields are
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:900}
\BE = -\spacegrad \cross \BA_{\textrm{m}} -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{e}} + k \BA_{\textrm{e}} } \end{equation} \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:920} c \BB = \spacegrad \cross \BA_{\textrm{e}} -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{m}} + k \BA_{\textrm{m}} } \end{equation}
|
I started reading about modular forms of half-integral weight $k/2$ for $\Gamma' \subset \Gamma_0(4)$. As far as I understand, these are holomorphic functions $f\colon \mathbb{H} \rightarrow \mathbb{C}$ which satisfy $$f(z)|[\zeta]_{k/2} = f(z)$$ for all $\zeta \in \widetilde{\Gamma}'$, where $\widetilde{\Gamma}'=\lbrace (\gamma,j(\gamma,z)):\gamma \in \Gamma'\rbrace$. Here $j(\gamma,z)= \theta(\gamma z)/\theta(z)$ for the classical Jacobi theta series. It is always mentioned that these functions $f$ are holomorphic at the cusps; an easy sufficient condition is that for a cusp $\gamma \tau$ the Fourier series of the function $f(\gamma \tau)$ has no principal part, and so is just a Taylor series.
My questions are: Did I get it right so far? Is there a way to write the holomorphy condition at the cusps using the slash operator?
Thanks for the help!
p.s. Let the slash operator defined by $f(z)|[\zeta]_{k/2}=f(\alpha z)\phi(z)^{-k}$ for $\zeta \in G:=\lbrace (\alpha,\phi(z)):\alpha \in GL_2(\mathbb{Q}), \phi:\mathbb{H} \rightarrow \mathbb{C}\rbrace$.
|
pandas.Series.ewm
Series.ewm(com=None, span=None, halflife=None, alpha=None, min_periods=0, freq=None, adjust=True, ignore_na=False, axis=0)
Provides exponentially weighted functions.
New in version 0.18.0.
Parameters: com: float, optional
Specify decay in terms of center of mass, \(\alpha = 1 / (1 + com),\text{ for } com \geq 0\)
span: float, optional
Specify decay in terms of span, \(\alpha = 2 / (span + 1),\text{ for } span \geq 1\)
halflife: float, optional
Specify decay in terms of half-life, \(\alpha = 1 - exp(log(0.5) / halflife),\text{ for } halflife > 0\)
alpha: float, optional
Specify smoothing factor \(\alpha\) directly, \(0 < \alpha \leq 1\)
New in version 0.18.0.
min_periods: int, default 0
Minimum number of observations in window required to have a value (otherwise result is NA).
freq: None or string alias / date offset object, default=None (DEPRECATED)
Frequency to conform to before computing statistic
adjust: boolean, default True
Divide by decaying adjustment factor in beginning periods to account for imbalance in relative weightings (viewing EWMA as a moving average)
ignore_na: boolean, default False
Ignore missing values when calculating weights; specify True to reproduce pre-0.15.0 behavior
Returns:
a Window sub-classed for the particular operation
Notes
Exactly one of center of mass, span, half-life, and alpha must be provided. Allowed values and relationship between the parameters are specified in the parameter descriptions above; see the link at the end of this section for a detailed explanation.
The freq keyword is used to conform time series data to a specified frequency by resampling the data. This is done with the default parameters of resample() (i.e. using the mean).
When adjust is True (default), weighted averages are calculated using weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
When adjust is False, weighted averages are calculated recursively as: weighted_average[0] = arg[0]; weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
When ignore_na is False (default), weights are based on absolute positions. For example, the weights of x and y used in calculating the final weighted average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and (1-alpha)**2 and alpha (if adjust is False).
When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on relative positions. For example, the weights of x and y used in calculating the final weighted average of [x, None, y] are 1-alpha and 1 (if adjust is True), and 1-alpha and alpha (if adjust is False).
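The weighting rules above can be mirrored in a short reference implementation (a sketch for intuition only, with NaN handling omitted; this is not the pandas source):

```python
def ewm_mean(values, alpha, adjust=True):
    """Reference EWMA over a list of floats (NaN handling omitted)."""
    out = []
    if adjust:
        # weights (1-alpha)**(n-1), ..., (1-alpha), 1 over the observed points
        num = den = 0.0
        for x in values:
            num = (1 - alpha) * num + x
            den = (1 - alpha) * den + 1.0
            out.append(num / den)
    else:
        # recursive form: w[i] = (1-alpha)*w[i-1] + alpha*x[i]
        avg = values[0]
        out.append(avg)
        for x in values[1:]:
            avg = (1 - alpha) * avg + alpha * x
            out.append(avg)
    return out

# com=0.5 corresponds to alpha = 1/(1 + com) = 2/3, matching df.ewm(com=0.5)
print(ewm_mean([0.0, 1.0, 2.0], alpha=2/3))
```

The adjusted output for `[0, 1, 2]` reproduces the first three rows of the `df.ewm(com=0.5).mean()` example below.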
More details can be found at http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows
Examples
>>> df = DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
     B
0  0.0
1  1.0
2  2.0
3  NaN
4  4.0

>>> df.ewm(com=0.5).mean()
          B
0  0.000000
1  0.750000
2  1.615385
3  1.615385
4  3.670213
|
Based on such demands, I designed a co-pilot which can:
1. Automatically identify the clusters;
2. Remove periodic boundary conditions and put the center of mass at $(0,0,0)^T$;
3. Adjust the view vector along the minor axis of the aggregate;
4. Classify the aggregate.
After obtaining clusters in step 1, we must remove the periodic boundary conditions of each cluster. If in step 1 one uses BFS or DFS + a linked cell list, then one can remove periodic boundary conditions during clustering; but this method has limitations: it does not work properly if the cluster percolates throughout the box. Therefore, in this step, I use circular statistics to deal with the clusters. In a simulation box with periodic boundary conditions, the distance between an NP in an aggregate and the midpoint never exceeds $L/2$ in the corresponding box dimension. The midpoint here is not the center of mass; e.g., the distance between the nozzle point and the center of mass of an ear-syringe bulb is clearly larger than its half length. The midpoint is actually a "de-duplicated" center of mass. Besides, the circular mean also puts most points in the center in case of percolation. Therefore, in part 2, we have the following steps:
1. Choose an $r_\text{cut}$ to test whether the aggregate is percolated;
2. If the aggregate is percolated, evaluate the circular mean $r_c$ of all points;
3. Set all coordinates $r$ as $r\to \text{pbc}(r-r_c)$;
4. If the aggregate is not percolated, the midpoint is evaluated by calculating the circular mean of the coordinates $r$ where $\rho(r)>0$; $\rho(r)$ is calculated using a bin size smaller than the $r_\text{cut}$ used in step 1;
5. Same as step 3, update the coordinates;
6. After step 5, the aggregates are unwrapped from the box; set $r\to r-\overline{r}$ to put the center of mass at $(0,0,0)^T$.
The circular mean of angles $\alpha_i$ is defined as the minimizer $$\alpha=\underset{\beta}{\operatorname{argmin}}\sum_i (1-\cos(\alpha_i-\beta))$$
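A minimal NumPy sketch of this circular-mean unwrapping (hypothetical helper names; shown for one box dimension, applied dimension-by-dimension in practice):

```python
import numpy as np

def circular_mean(x, L):
    """Circular mean of coordinates x in a periodic box of length L:
    map coordinates to angles, average on the unit circle, map back."""
    theta = 2.0 * np.pi * x / L
    mean_angle = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    return (mean_angle % (2.0 * np.pi)) * L / (2.0 * np.pi)

def unwrap_cluster(x, L):
    """Apply r -> pbc(r - r_c): minimum-image displacement from the midpoint."""
    xc = circular_mean(x, L)
    return (x - xc + L / 2.0) % L - L / 2.0

# A cluster wrapped across the boundary of a box of length 10
x = np.array([9.8, 9.9, 0.1, 0.2])
print(unwrap_cluster(x, 10.0))  # contiguous coordinates near 0
```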
Adjusting the view vector is simple: evaluate the eigenspace of the gyration tensor $rr^T/n$ and sort the eigenvectors by eigenvalue, i.e., $\lambda_1\ge\lambda_2\ge\lambda_3$. The minor axis is then the corresponding eigenvector $v_3$, and the aggregate can be rotated by $[v_1, v_2, v_3]$ so that the minor axis is the $z$-axis.
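This eigen-sorting step might look like the following (a hypothetical sketch using `numpy.linalg.eigh` on centred coordinates `r` of shape (n, 3)):

```python
import numpy as np

def view_along_minor_axis(r):
    """Rotate centred coordinates r (n x 3) so that the minor axis of the
    gyration tensor r r^T / n ends up along z."""
    S = r.T @ r / len(r)          # 3x3 gyration tensor
    w, v = np.linalg.eigh(S)      # eigenvalues in ascending order
    R = v[:, ::-1]                # columns v1, v2, v3 with l1 >= l2 >= l3
    if np.linalg.det(R) < 0:      # keep a proper rotation, not a reflection
        R[:, 2] *= -1.0
    return r @ R

rng = np.random.default_rng(0)
r = rng.standard_normal((500, 3)) * np.array([1.0, 3.0, 2.0])  # anisotropic cloud
r -= r.mean(axis=0)
q = view_along_minor_axis(r)
print(q.var(axis=0))  # variances now sorted from largest to smallest
```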
The last step is a bit more tricky; the best attempt I made was to use an SVC, a binary classification method. I used about 20 samples labeled as "desired"; these 20 samples were extended to 100 samples by adding some noise, e.g., moving coordinates a little bit, or adding or removing several NPs randomly, without "breaking" the category of the morphology. Together with 100 "undesired" samples, I trained the SVC with a Gaussian kernel. The result turned out to be pretty good. I also tried to use an ANN to classify all 5 categories of morphologies obtained from simulations, but the ANN model did not work very well; perhaps the reason was a lack of samples, or the model I built was too rough. I didn't try other multi-class methods; anyway, that part of the work was done, and I stopped developing this co-pilot a long time ago.
|
The heat equation is
$$Au_{xx} = u_{t},$$ for $-\infty<x<\infty$ and $t>0$. Also, $u(x,t) \rightarrow 0$ as $ x \rightarrow \pm \infty$.
The initial condition:
$$ u(x,0) = \exp{(-|x|)},$$
for $-\infty<x<\infty$.
Want the solution form: (Using the Fourier Transform)
$$u(x,t) = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{\cos(wx)}{1+w^2} \exp(-Aw^2t)\, dw.$$
The fundamental FT equations are:
$$F(w) = \mathcal{F}[f(x)] = \int_{-\infty}^{\infty} f(x) \exp(-iwx)\,dx$$ $$f(x) = \mathcal{F}^{-1}[F(w)] = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(w) \exp(iwx)\,dw$$
Attempt:
Taking fourier transform of both sides of the PDE,
$$-Aw^2U(w,t) = \frac{\partial U}{\partial t} (w,t)$$ Then by solving this as an ODE and using the integrating factor, $$U(w,t) = U(w,0)\exp{(-Aw^2t)}$$
Taking the inverse Fourier transform of both sides, and using $U(w,0) = \mathcal{F}[e^{-|x|}] = \frac{2}{1+w^2}$,
$$u(x,t) = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{1}{1+w^2} \exp{(iwx)} \exp{(-Aw^2t)} dw.$$
But clearly this is not the required answer. Euler's formula can be used to get
$$u(x,t) = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{\cos(wx)}{1+w^2}\exp(-Aw^2t)\, dw + \frac{i}{\pi}\int_{-\infty}^{\infty} \frac{\sin(wx)}{1+w^2} \exp(-Aw^2t)\, dw.$$
However, that means that the integral on the right must be equal to zero (which, computationally, it indeed is). But my lack of knowledge about that type of integral makes this impossible to show. I feel as if there's something easy that is being missed. Any help would be appreciated.
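For what it's worth, the vanishing of the sine integral can be checked numerically (a hypothetical sketch; the integrand $\frac{\sin(wx)}{1+w^2}e^{-Aw^2t}$ is odd in $w$, so its integral over a symmetric range is zero):

```python
import numpy as np

A, t, x = 1.0, 0.5, 1.3                    # arbitrary test values
w = np.linspace(-50.0, 50.0, 200001)       # symmetric grid (tails are negligible)
dw = w[1] - w[0]
f = np.exp(-A * w**2 * t) / (1.0 + w**2)

cos_part = np.sum(f * np.cos(w * x)) * dw / np.pi
sin_part = np.sum(f * np.sin(w * x)) * dw / np.pi
print(cos_part, sin_part)  # sin_part is numerically ~0 (odd integrand)
```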
|
What is the area, in square units, of triangle ABC in the figure shown if points A, B, C and D are coplanar, angle D is a right angle, AC = 13, AB = 15 and DC = 5?
Use the Pythagorean theorem to find DA: 13^2 - 5^2 = 144, and the square root of 144 is 12, so DA must be 12.
Notice that AB has length 15 and DA has length 12. Applying the Pythagorean theorem to right triangle ADB (a scaled 3:4:5 right triangle), side DB must have length 9.
Since DC + BC = DB = 9, and DC is 5, BC is 4.
So now we know the three sides of ABC, which are 13, 15, and 4. We can use Heron's formula to figure out the area.
Heron's formula is:
A = sqrt(s(s-a)(s-b)(s-c)), where a, b, c are the side lengths and
s = (a+b+c)/2
We plug the sides into the 2nd equation, s = (13+15+4)/2, and we get s = 16.
Then plug your answers into the first equation. A=sqrt((16)(16-15)(16-13)(16-4)), and we get A=24.
So the area of ABC is 24
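A one-liner check of the arithmetic (hypothetical, just to verify):

```python
import math

# Heron's formula for the triangle with sides 13, 15, 4
a, b, c = 13, 15, 4
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))
print(s, area)  # -> 16.0 24.0
```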
What is the area, in square units, of triangle ABC in the figure shown if points A, B, C and D are coplanar, angle D is a right angle, AC = 13, AB = 15 and DC = 5?
1.
\(\begin{array}{|rcll|} \hline DA^2+DC^2 &=& AC^2 \quad & | \quad DC=5,~ AC = 13 \\ DA^2+5^2&=& 13^2 \\ DA^2 &=& 13^2-5^2 \\ DA^2 &=& 144 \\ DA^2 &=& 12^2 \\ \mathbf{DA} &\mathbf{=}& \mathbf{12} \\ \hline \end{array} \)
2.
\(\begin{array}{|rcll|} \hline BD^2+DA^2 &=& AB^2 \quad & | \quad DA=12,~ AB = 15 \\ BD^2+12^2&=& 15^2 \\ BD^2 &=& 15^2-12^2 \\ BD^2 &=& 81 \\ BD^2 &=& 9^2 \\ \mathbf{BD} &\mathbf{=}& \mathbf{9} \\ \hline \end{array}\)
3.
\(\begin{array}{|rcll|} \hline BC &=& BD-DC \quad & | \quad BD=9,~ DC = 5 \\ BC &=& 9-5 \\ \mathbf{BC} &\mathbf{=}& \mathbf{4} \\ \hline \end{array}\)
4. \(\begin{array}{|rcll|} \hline A &=& \dfrac{BC\cdot DA}{2} \quad & | \quad BC=4,~ DA = 12 \\ A &=& \dfrac{4\cdot 12}{2} \\ A &=& 2\cdot 12 \\ \mathbf{A} &\mathbf{=}& \mathbf{24} \\ \hline \end{array}\)
The area of triangle ABC is
24
|
Answer
(a) $m\angle P = 63^{\circ}$ (b) $NP = 13~cm$
Work Step by Step
(a) Since $\angle M \cong \angle Q$, the measure of the angle $m\angle M = 90^{\circ}$.
We can see that $m\angle NRQ = 90^{\circ}$ since it is supplementary to $\angle PRN$.
Then $m\angle MNR = 90^{\circ}$, since the sum of the four angles in $MNRQ$ is $360^{\circ}$.
We can find $m\angle P$:
$m\angle P + m\angle PRN + m\angle RNP = 180^{\circ}$
$m\angle P + m\angle PRN + (m\angle MNP - m\angle MNR) = 180^{\circ}$
$m\angle P + m\angle PRN + (m\angle P+54^{\circ} - m\angle MNR) = 180^{\circ}$
$2m\angle P + m\angle PRN + 54^{\circ} - m\angle MNR = 180^{\circ}$
$2m\angle P + 90^{\circ} + 54^{\circ} - 90^{\circ} = 180^{\circ}$
$2m\angle P + 54^{\circ} = 180^{\circ}$
$2m\angle P = 126^{\circ}$
$m\angle P = \frac{126^{\circ}}{2} = 63^{\circ}$
(b) We can find $PR$:
$PR = PQ - QR = PQ - MN = 20~cm - 15~cm = 5~cm$
Note that $RN = MQ = 12~cm$.
We can find the length of $\overline{NP}$:
$NP = \sqrt{(PR)^2+(RN)^2} = \sqrt{(5~cm)^2+(12~cm)^2} = \sqrt{169~cm^2} = 13~cm$
|
In the general equation of the compressibility factor $Z$, we define $Z$ as $$Z=\frac{pV}{nRT}$$ Here, what is $p$? Is it $p_\text{real}$ or $p_\text{ideal}$? Also, what is $V$? $V_\text{real}$ or $V_\text{container}$?
Both quantities are of the real gas.
Note that for the ideal gas behaviour, the compressibility factor $$Z=\frac{V_\text{real}}{V_\text{ideal}}=\frac{pV_\text{real}}{nRT}=1$$
for any combination of $p, V, T$.
But real gases, contrary to the ideal gas, have nonzero molecular volume, and there are intermolecular interactions.
The volume of molecules increases $Z$, as it makes the pressure higher, because the collision frequency is higher.
The cohesive forces between molecules decrease $Z$, as molecular attraction (like hydrogen bonds or dipole interactions) decreases the effective number of independent molecules and therefore the pressure.
This is reflected in the van der Waals equation - probably the simplest state equation of real gases, for not too high pressure: $$\begin{align} \left(p + \frac{an^2}{V^2}\right)\left(V - nb\right)&=nRT \\ \left(p + \frac{a}{V_\mathrm m^2}\right)\left(V_\mathrm m - b\right)&=RT \end{align}$$
$$\begin{align} Z &= \frac{p\cdot V_\text{real}}{nRT} \\ Z&= \frac{p\cdot V_\text{real}}{\left(p + \frac{an^2}{V_\text{real}^2}\right)(V_\text{real} - nb)} \\ Z&= \frac{1}{\left(1 + \frac{an^2}{p \cdot V_\text{real}^2}\right)\left(1 - \frac{nb}{V_\text{real}}\right)} \\ Z&= \frac{1}{\left(1 + \frac{a}{pV_{\mathrm m,\text{real}}^2}\right)\left(1 - \frac{b}{V_{\mathrm m,\text{real}}}\right)} \end{align}$$
For $|x|\ll1$, $1/(1+x)\approx1-x$.
For not too far from ideal behaviour, we can apply the above approximation.
$$\begin{align} Z&=\left(1 - \frac{an^2}{pV_{\mathrm{real}}^2}\right)\left(1 + \frac{bn}{V_{\mathrm{real}}}\right) \\ Z&=\left(1 - \frac{a}{pV_{\mathrm m,\text{real}}^2}\right)\left(1 + \frac{b}{V_{\mathrm m,\text{real}}}\right) \\ \end{align}$$
We can therefore also afford to neglect minor terms.
$$\begin{align} Z&=1 - \frac{an^2}{pV_{\mathrm{real}}^2}+ \frac{bn}{V_{\mathrm{real}}}\\ Z&=1 - \frac{a}{pV_{\mathrm m,\text{real}}^2}+ \frac{b}{V_{\mathrm m,\text{real}}}\\ \end{align}$$
To address the clarified scenario, as the same amounts of the ideal and real gas are under the same temperature and pressure:
$$V_\text{ideal gas}=\frac{nRT}{p}$$ $$V_\text{real gas}=Z \frac{nRT}{p}$$
Iteration is probably the easiest way to get the result, with $V_\text{ideal gas}$ as the first approximation. The other option is to solve for the root of the cubic equation, which we probably do not want to do.
$$\begin{align} V_\text{real}&=Z(p, V_\text{real})\frac{nRT}{p}\\ V_\text{real}&=\frac{1}{\left(1 + \frac{an^2}{p \cdot V_\text{real}^2}\right)\left(1 - \frac{nb}{V_\text{real}}\right)} \frac{nRT}{p} \\ V_\text{real}&=\left( 1 - \frac{an^2}{pV_{\rm{real}}^2}+ \frac{bn}{V_{\rm{real}}}\right) \frac{nRT}{p} \\ \end{align}$$
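That fixed-point iteration can be sketched in a few lines (a hypothetical helper, rearranging the van der Waals equation as $V = nb + nRT/(p + an^2/V^2)$; the constants below are roughly CO₂-like values in L·atm units, assumed for illustration):

```python
def vdw_volume(p, T, a, b, n=1.0, R=0.0820574, tol=1e-12, max_iter=200):
    """Iterate V = n*b + n*R*T / (p + a*n^2/V^2), starting from the
    ideal-gas volume as the first approximation."""
    V = n * R * T / p                  # ideal-gas guess
    for _ in range(max_iter):
        V_new = n * b + n * R * T / (p + a * n**2 / V**2)
        if abs(V_new - V) < tol:
            break
        V = V_new
    return V

# CO2-like constants assumed: a = 3.59 L^2 atm / mol^2, b = 0.0427 L / mol
V = vdw_volume(p=10.0, T=300.0, a=3.59, b=0.0427)
Z = 10.0 * V / (0.0820574 * 300.0)
print(V, Z)  # Z < 1 here: attraction dominates at these conditions
```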
The van der Waals equation can also be expressed in terms of reduced properties
$$\left( P_\mathrm{r} + \frac{3}{V_\mathrm{r}^2}\right) \left( V_\mathrm{r}-\frac{1}{3}\right) = \frac{8}{3}T_\mathrm{r}$$
This yields a critical compressibility factor $3/8$.
The values of pressure, temperature and volume are divided by the respective critical values of the given gas.
The compressibility factor for a gas is defined as the ratio of the volume of the real gas to the volume of the ideal gas. Thus $$ Z = \frac{V_\text{real}}{V_\text{ideal}} $$ Using the ideal gas equation $$ PV_\text{ideal} = nRT $$ $$ V_\text{ideal} = \frac{nRT}{P} $$ So $$ Z = \frac{V_\text{real}}{\frac{nRT}{P} } $$ $$ Z = \frac{PV_\text{real}}{nRT} $$ Hence, except for $V_\text{real}$, all other quantities are those of the ideal gas.
|
Renormalization Group
8 Lectures
Dr V. Niarchos
The course introduces the concept of renormalisation group flow in quantum field theory building on previous lessons about renormalisation in other lectures on QFT. Applications to different field theories, including gauge theories, will be described.
Outline of the course
-
Wilson's renormalization group. Brief review of regularisation/renormalization in continuum QFT with examples. Introduction of Wilson's renormalization group using the path integral formulation of QFT. Implementation in scalar $\phi^4$ theory.
-
Callan-Symanzik equation. Renormalisation conditions. Callan-Symanzik equations. Introduction of $\beta$ and $\gamma$ functions. Renormalisation group equations.
-
Renormalisation group flow in renormalised perturbation theory. Examples in scalar $\phi^4$ theory in different dimensions, QED, non-abelian gauge theories. Asymptotic freedom in gauge theories.
Books for the course
M. E. Peskin and D. V. Schroeder, An introduction to QFT (Addison Wesley, 1994)
J. Zinn-Justin, Quantum field theory and critical phenomena (Oxford University Press, 2002)
K. G. Wilson and J. B. Kogut, The renormalization group and the epsilon expansion, Phys. Rept. 12:75–200, 1974.
|
Communications in Analysis and Geometry Volume 16 (2008) Number 5 Evolution of an extended Ricci flow system
Pages: 1007 – 1048
DOI: http://dx.doi.org/10.4310/CAG.2008.v16.n5.a5
Abstract
We show that Hamilton’s Ricci flow and the static Einstein vacuum equations are closely connected by the following system of geometric evolution equations: \begin{align*} \partial_t g &= -2\,\mathrm{Rc}(g) + 2\alpha_n\, du\otimes du, \\ \partial_t u &= \Delta^{g}u, \end{align*} where $g(t)$ is a Riemannian metric, $u(t)$ a scalar function and $\alpha_n$ a constant depending only on the dimension $n\geq 3$. This provides an interesting and useful link from problems in low-dimensional topology and geometry to physical questions in general relativity.
Published 1 January 2008
|
Is there a good errata for Atiyah-Macdonald available? A cursory Google search reveals a laughably short list here, with just a few typos. Is there any source available online which lists inaccuracies and gaps?
Dear Tim, on page 31 they consider a ring $A$ and two $A$-algebras defined by their structural ring morphisms $f:A\to B$ and $g:A\to C$. They then define the tensor product as a ring $D=B\otimes _A C$ and want to make it an $A$-algebra. For that they must define the structural morphism $A\to D$, and they claim that it is given by the formula $a \mapsto f(a)\otimes g(a)$. This is false since that map is not a ring morphism. The correct structural map $A\to D$ is actually $a\mapsto 1_B\otimes g(a) =f(a)\otimes 1_C$.
PS: To prevent misunderstandings, let me add that Atiyah-MacDonald is, to my taste, the best mathematics book I have ever seen, all subjects considered.

EDIT OF JULY 26, 2017
Proposition 2.4 page 21 reads:
Let $M$ be a finitely generated $A$-module, let $\mathfrak a$ be an ideal of $A$, and let $\phi$ be an $A$-module endomorphism of $M$ such that $\phi(M)\subseteq\mathfrak a M$. Then $\phi$ satisfies an equation of the form $$\phi^n+a_1\,\phi^{n-1}+\cdots+a_n=0$$ where the $a_i$ are in $\mathfrak a$.
Strictly speaking, this makes no sense (it seems to me) because $\phi$ and the $a_i$ belong to different rings. I suggest the following restatement:
Let $M$ be a finitely generated $A$-module, let $\mathfrak a$ be an ideal of $A$, let $\phi$ be an $A$-module endomorphism of $M$ such that $\phi(M)\subseteq\mathfrak a M$, and let $\psi:A\to\operatorname{End}_A(M)$ be the natural morphism. Then $\phi$ satisfies an equation of the form $$\phi^n+\psi(a_1)\,\phi^{n-1}+\cdots+\psi(a_n)=0$$ where the $a_i$ are in $\mathfrak a$.
Another fix would be to equip $\operatorname{End}_A(M)$ with its natural $A$-module structure and change the display to $$ \phi^n+a_1\,\phi^{n-1}+\cdots+a_n\,\phi^0=0. $$
END OF EDIT OF JULY 26, 2017

EDIT OF JUNE 9, 2011
Page 102, penultimate paragraph:
"... $f$ induces a homomorphism $\widehat{f}:\widehat{G}\to\widehat{H}$, which is continuous."
No topology has been defined on $\widehat{G}$ and $\widehat{H}$.
[July 7, 2011, GMT. The topology on $\widehat{G}$ can be described as follows. For any subset $S$ of $G$, let $\widehat{S}\subset\widehat{G}$ be the set of equivalence classes of Cauchy sequences in $S$, and say that a subset $V$ of $\widehat{G}$ is a neighborhood of $0$ if there is a neighborhood $W$ of $0$ in $G$ such that $\widehat{W}\subset V$.]
By the way, there is (I think) a somewhat similar "mistake" in the article Atiyah wrote with Wall in "Algebraic Number Theory" Ed. Cassels and Froehlich (see Erratum for Cassels-Froehlich). Atiyah and Wall forgot to mention the crucial compatibility between change of groups and connecting morphisms. (See p. 99.)
END OF EDIT OF JUNE 9, 2011
Page 25, first line of the proof of (2.13): change (2.11) to (2.12).
Page 29, about two third of the page: change (2.14) to (2.13).
EDIT. Page 39, last line: change $m$ to $m_i$ (three times).

EDIT OF NOV. 22, 2010. Page 63, proof of Lemma 5.14. The current text reads
Conversely, if $x\in r(\mathfrak a^e)$ then $x^n=\sum a_i\,x_i$ for some $n>0$, where the $a_i$ are elements of $\mathfrak a$ and the $x_i$ are elements of $C$. Since each $x_i$ is integral over $A$ it follows from (5.2) that $M=A[x_1,\dots,x_n]\ \dots$
It would be better (I think) to write something like
Conversely, if $x\in r(\mathfrak a^e)$ then $x^n=a_1\,x_1+\cdots+a_m\,x_m$ for some $m,n>0$, where the $a_i$ are elements of $\mathfrak a$ and the $x_i$ are elements of $C$. Since each $x_i$ is integral over $A$ it follows from (5.2) that $M=A[x_1,\dots,x_m]\ \dots$
[July 8, 2011, GMT. Page 90. It seems to me that the second part of the proof of Theorem 8.7 can be simplified. We must check the uniqueness of the decomposition of an Artin ring $A$ as a finite product of Artin local rings $A_i$. To do this it suffices to observe that, for each minimal primary ideal $\mathfrak q$ of $A$, there is a unique $i$ such that $\mathfrak q$ is the kernel of the canonical projection onto $A_i$.]
[July 7, 2011, GMT. Page 107, lines 4-5. Instead of $A^*=A[x_1,\dots,x_r]$ read $A^*=A[y_1,\dots,y_r]$ where $y_i=(0,x_i,0,\dots)$.]
[July 7, 2011, GMT. Page 112, proof of Proposition (10.24). Instead of $\mathfrak{a}^{k+n(i)}$ read $\mathfrak{a}^{\max(0,k-n(i))}$.]
[July 9, 2015. The integer $d(M)$ (and in particular $d(A)$) is defined on p. 117 after the proof of Theorem 11.1. Another definition of $d(A)$ is given on p. 119 after the proof of Proposition 11.6 via the equality $d(A)=d(G_{\mathfrak m}(A))$. But the old meaning of $d(?)$ is used again in the proof of Proposition 11.20 p. 122, where the expression $d(G_{\mathfrak q}(A))$ occurs at the beginning of the last display. To avoid any confusion, let me denote by $D(M)$ the integer given by the first definition, and set $d(A):=D(G_{\mathfrak m}(A))$.
It seems to me the proof of Proposition 11.3 p. 118 is not entirely correct. I suggest to keep the proof, but to weaken slightly the statement, the new statement being: If $P(M/xM,t)\neq0$ and $D(M/xM)\ge1$, then $P(M,t)\neq0$ and $D(M/xM)=D(M)-1$.
This new statement applies to the first equality in the last display in the proof of Proposition 11.20 p. 122 if $d:=\dim A\ge1$ (the case $d=0$ being trivial). - On the third line of the proof $\mathfrak q$ should be $\mathfrak q^2$.]
On page 8, the proof of part ii of Proposition 1.11 begins "Suppose $\mathfrak{p}\not\subseteq\mathfrak{a}_i$ for all $i$." It should be $\not\supseteq$.
On page 29, the example at the top has two typos: it says "$(x)=2x$", when it should be "$f(x)=2x$", and the exact sequence at the end of that same line says "$0\rightarrow\mathbb{Z}\otimes \stackrel{f\otimes 1}{\longrightarrow} \mathbb{Z}\otimes N$", when it should be
"$0\rightarrow\mathbb{Z}\otimes N\stackrel{f\otimes 1}{\longrightarrow} \mathbb{Z}\otimes N$".
On p.55, exercise 4.2 reads "If $\mathfrak a = r(\mathfrak a)$, then $\mathfrak a$ has no embedded prime ideals". I believe it should include the assumption that $\mathfrak a$ is decomposable.
A-M defines embedded primes for decomposable ideals only. And it doesn't seem that a radical ideal should automatically be decomposable. If you take something like a reduced (nonnoetherian) ring with infinitely many minimal prime ideals, I expect the zero ideal will be radical but not decomposable...
Nearly all the mistakes pointed out so far were fixed in the Russian translation, which was done by Manin. But not all. I'll list in parentheses the page numbers of the translation where the original error still occurs for the 5 people who might care. (The translation is usually 11 page numbers ahead of the original.) Scan the answers posted before this one to determine which mistakes I am referring to.
p. 29 (---> p. 41): on line 8, change (2.14) to (2.13)
p. 55 (---> p. 66): exercise 2
p. 71 (---> p. 82): exercise 23
p. 88 (---> p. 99): exercise 27(v)
There were also completely original mistakes added especially for the translation! On page 30 line -7 and page 31 lines 10 and 14 of the translation, the tensor product signs should be direct sum signs. On page 32 in the statement of Nakayama's Lemma, the ideal a should be in fraktur font.
Minor typos:
p.34, exercise 2.23: Second sentence should start "For each finite subset $J$ of $\Lambda$".
p.48, exercise 3.27(i): The bracketed text should read "Use Exercises 25 and 26".
p.71, exercise 5.23: The hint should start "The only hard part is (iii) => (i). Suppose (i) is false".
p.88, exercise 7.27(v): The last clause should read "the homomorphism $f_{!}$ is a $K_1(A)$-module homomorphism".
p.127, index entry for "flat, faithfully": Should cite p. 46, not p. 29.
On page 23, in the third line of the sketch for Proposition 2.9, change "$v \circ u \circ f = 0$ " to "$f \circ v \circ u = 0$".
On p.89, the second to last line of the proof of Proposition 8.4 should say $\mathfrak{N}^k \subseteq \mathfrak{ N}$ instead of $\mathfrak{N}^k \supseteq \mathfrak{N}$.
On page 31, the first line refers to Proposition 2.11, when it should be 2.12.
Also minor: On p. 91, the $a$'s and $\mathfrak a$'s in the proof of Prop 8.8 seems to be a little jumbled.
I guess you want something like "Let $\mathfrak a$ be an ideal of $A$, other than $(0)$ or $(1)$. We have $\mathfrak m = \mathfrak N$, hence $\mathfrak m$ is nilpotent by (8.4) and therefore there exists a positive integer $r$ such that $\mathfrak a \subseteq \mathfrak m^r$ and $\mathfrak a \not\subseteq \mathfrak m^{r + 1}$; hence there exists $y \in \mathfrak a$ and $a \in A$ such that $y = ax^r$ but $y \not\in (x^{r + 1})$," etc.
page 81, line 5: change $f_i \in A[x]$ to $f_i \in \mathfrak{a}$
On page 41 in the proof of proposition 3.10., change
"i) $\implies$ ii) by (3.5) and (2.20)" to "i) $\implies$ ii) by (3.7) and (2.19)"
On page 52 in remark 1) at the bottom of the page, change
"(see Chapter 1, Exercise 25)" to "(see Chapter 1, Exercise 27)"
On page 65 at the end of the proof of proposition 5.18. the black square to denote end of proof is missing.
On page 66 we need to correct the proof of corollary 5.22., one correct version is the following: We start with the quotient map $\pi: A[x^{-1}] \to A[x^{-1}] /m$ where $m$ is a maximal ideal containing $x^{-1}$. We take an algebraic closure $\Omega$ of the field $A[x^{-1}] /m$ and consider the map $i \circ \pi: A[x^{-1}] \to \Omega$. Then by the previous theorem, (5.21), we can extend $i \circ \pi$ to some valuation ring $B$ of $K$ containing $A[x^{-1}]$: $g: B \to \Omega$ such that $g|_{A[x^{-1}]} = i \circ \pi$. Then $g(x^{-1}) = 0$. Hence $x^{-1} \in ker(g)$ and since the kernel is a proper ideal of $B$, $x^{-1}$ is not a unit in $B$ and hence $x$ is not in $B$. (also see math.SE)
On page 77 in the proof of proposition 6.7., change
"...a composition series, by ii);..." to "...a composition series, by i);..."
In p.45, Ex.3.12.iv, one can avoid the tedious argument provided in the hint by noting that $K\otimes_A M\cong(A-\{0\})^{-1}M$. (I originally did it as hinted ...)
In p.68, Ex.5.10.ii, (b') is actually equivalent to a weaker (c') that asserts only that $f^*:\mathrm{Spec}(B_\mathfrak{q})\to\mathrm{Spec}(A_\mathfrak{p})\cap V(\ker f)$ is surjective. However, (a') does imply the original (c').
Here are a few more small miscellaneous mistakes and typos:
page 18, line -6: $M''$ not defined (it is $M/M'$)
page 20, line -12: in the expression for $A$ as a direct product, it should be $i=1$ not $i=I$.
page 28, line -5: $=$ should read $\cong$
page 29, line -13 (first line of proof that iv) $\Rightarrow$ iii)): $x_{i}$ should read $x_{i}'$
page 91, in the last example, it is not true that ${\mathfrak m}^{2}=0$. It is a non-zero principal ideal. But the following statement is still true, $\dim \left( {\mathfrak m}/{\mathfrak m}^{2} \right)=2$. It is generated by $x^2$ and $x^3$.
page 102, Lemma 10.1(iv) $H=0$ should read $H=\{ 0 \}$
Here are a few more minor errors:
p. 42, proof of Prop. 3.11(iv), last line: change "by i)" to "by ii)".
p. 72, Exercise 29 (clarification): by "a local ring of $A$" they mean "a localization of $A$ at some prime ideal". (See this question.)
p. 72, Exercise 31: for condition (2), change "for all $x,y \in K^*$" to "for all $x,y \in K^*$ such that $x+y \neq 0$". (Similarly on p. 94, unless one uses the convention $v(0)=+\infty$.)
p. 74, Example 2: change "if $a \neq 0$" to "if $a \neq 0$ and $a \neq \pm 1$".
p. 75, proof of Prop. 6.2: change "hence $M_n=M$" to "hence $M_n=N$".
p. 82, definition of irreducible ideal: it is understood (by the first sentence of the section) that an irreducible ideal $\mathfrak{a}$ is also required to be
proper (otherwise Lemma 7.12 would be false, for example). Also, there should perhaps be an entry "Ideal, irreducible, 82" in the index on p. 127.
p. 94, paragraph before Prop. 9.2: change "$v \colon K^* \to Z$" to "$v \colon K^* \to \mathbf{Z}$".
p. 111, definition of $G(A)$ in the middle of the page: change $a^n$ to $\mathfrak{a}^n$.
And finally some
really important corrections... ;-)
p. 30, middle of the page: missing comma in "finite set of elements $x_1,\ldots x_n$".
p. 61, proof of Prop. 5.6(ii): missing space before the brackets in "$x/s \in S^{-1}B(x \in B,\, s \in S)$".
p. 83, page header: change "NEOTHERIAN" to "NOETHERIAN".
Here is what I found (in LaTeX unlike in the original); only two of the errors mentioned below (both indicated) have been reported in previous answers here.
p.56, line 24 (Exercise 13(iv)): change second ${\mathfrak p}^{(n)}$ to ${\mathfrak p}^n$
p.76, line 11: change "Exercise" to "Example"
p.76, line 14: change "Exercise" to "Example"
p.91, line -11 (line 2 of second example): change (8.7) to (8.8)
(this was already found by Zev Chonoles. See their answer above from Feb 5, 2011)
p.97, line -1: change (9.7) to (9.6)
p.104, line -4: change $\left\{ G'_n \cap G_n \right\}$ to $\left\{ G' \cap G_n \right\}$
p.114, line -12: change $0 \rightarrow {\mathfrak b}^{m} M \rightarrow M/{\mathfrak b}^{m} M \rightarrow 0$ to $0 \rightarrow {\mathfrak b}^{m} M \rightarrow M \rightarrow M/{\mathfrak b}^{m} M \rightarrow 0$
(this was already found by Mahdi Majidi-Zolbanin. See their answer above from July 17, 2012)
On page 91, the second line in the second Example should refer to Proposition 8.8, not Theorem 8.7.
Page 69, Ex5.17: this is not the weak form, and the result is rather trivial.
Page 114, Exercise 5, the short exact sequence is missing the middle term.
Page 118, line 16, Example: Poincaré series should be $P(A,t)=(1-t)^{-s}l(A_0)$.
p. 107, lines 4-5: change "$A^* = A[x_1,\dots,x_r]$" to "$A^*$ is a quotient of $A[x_1,\dots,x_r]$".
Proof of Theorem 11.22, page 123, direction $iii) \Rightarrow i)$: The map should be $\alpha : k[t_1,\dots,t_d] \rightarrow G_{\mathfrak{m}}(A)$.
Last line of page 91, exercise 1, should it be hence $0=p_i^{(r)}$ for all large $r$?
In Chapter 2 (p. 19), for $A$-submodules $P,N$ of an $A$-module $M$, the quotient $(N:P)$ is defined as a subset of the ring $A$ of scalars. Later, in Corollary 3.15 (p. 43), it is proved that for any multiplicatively closed subset $S$ of $A$ the equality $S^{-1}(N:P)=\bigl(S^{-1}N:S^{-1}P\bigr)$ holds, as long as $P$ is finitely generated. Note that in this case the result deals with an ideal in the ring $S^{-1}A$.
Later (p. 96) the notion of fractional ideal is introduced: these are certain $A$-submodules of the field of fractions $K$ of $A$, where $A$ is a domain. For a fractional ideal $M$, the set $(A:M)=\{x\in K: xM\subseteq A\}$ is defined; note that this new definition differs from the one given previously. Proposition 9.6 (p. 97) proves an equivalence regarding this new notion, but the proof uses Corollary 3.15, which does not directly apply, though the correct reasoning is entirely similar.
Here are some mistakes (it is possible that I am wrong):
page 68, -line 7, the "$f$" should be "$f_1$";
page 68, -line 5, "larger than $g_m$" is already enough;
page 68, -ex 10 ii), "(b') $\Rightarrow$ (c')" needs an extra condition that $f$ is injective;
page 89, -line -4, "$\mathfrak{N}^k\supseteq\mathfrak{N}$" should be "$\mathfrak{N}^k\subseteq\mathfrak{N}$";
page 97, -line -1, “(9.7)” should be “(9.6)”;
...
Maybe in page 22, in the proof of corollary 2.7, $=$ should be $\cong$?
|
Answer
The weight of the box is 97.1 pounds.
Work Step by Step
The angle of inclination of the ramp with the horizontal is $18{}^\circ $. Resolving the weight vector into components along and perpendicular to the ramp, the perpendicular component is balanced by the ramp, and the component along the ramp must equal the 30-pound force, so we get $\begin{align} & \sin 18{}^\circ =\frac{30}{x} \\ & x\sin 18{}^\circ =30 \\ & x=\frac{30}{\sin 18{}^\circ } \\ & \approx 97.1\text{ pounds} \end{align}$ So, the weight of the box is $97.1\text{ pounds}$.
|
(b) This is a case of indirect recursion, so we first convert it to direct recursion and then apply the rules for removing left recursion.
To convert it to direct recursion, the first production remains unchanged, while in the second production we substitute the right-hand side of the first production wherever it occurs. In the question, $S$ appears in the middle of the $A$-production, so we substitute the right-hand side of the $S$-production there. After substituting, it looks like:
$A \rightarrow Ac\mid Aad \mid bd \mid \epsilon$
Now remove direct recursion from it
For removal of direct recursion rule:
$A \rightarrow A\alpha_1 \mid \ldots \mid A\alpha_n \mid \beta_1 \mid \ldots \mid \beta_m$
Replace these with two sets of productions, one set for $A:$
$A \rightarrow \beta_1A^\prime \mid \ldots \mid \beta_mA^\prime$
and another set for the fresh nonterminal $A^{\prime}$
$A^\prime \rightarrow \alpha_1A^\prime \mid \ldots \mid \alpha_nA^\prime \mid \epsilon$
After applying this rule we'll get:
$A \rightarrow bdA'\mid A'$ $A' \rightarrow cA'\mid adA' \mid \epsilon$
Now complete production without left recursion is:
$S \rightarrow Aa \mid b$ $A \rightarrow bdA'\mid A'$ $A' \rightarrow cA'\mid adA' \mid \epsilon$
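The removal rule above can be carried out mechanically. Below is a toy helper for the direct-recursion case; the representation (productions as lists of symbols, with `[]` standing for $\epsilon$) and all names are my own:

```python
def remove_direct_left_recursion(nt, productions):
    # Split the nt-productions into left-recursive ones (A -> A alpha)
    # and the rest (A -> beta), then rebuild with a fresh nonterminal A'.
    alphas = [p[1:] for p in productions if p and p[0] == nt]
    betas = [p for p in productions if not p or p[0] != nt]
    if not alphas:
        return {nt: productions}   # nothing to do
    new = nt + "'"
    return {
        nt: [b + [new] for b in betas],          # A  -> beta_1 A' | ... | beta_m A'
        new: [a + [new] for a in alphas] + [[]], # A' -> alpha_1 A' | ... | epsilon
    }

# A -> Ac | Aad | bd | epsilon
prods = [["A", "c"], ["A", "a", "d"], ["b", "d"], []]
print(remove_direct_left_recursion("A", prods))
# A -> bd A' | A' ;  A' -> c A' | ad A' | epsilon
```

This reproduces the grammar derived by hand above, with the $\epsilon$-production $A \rightarrow \epsilon$ contributing the bare $A'$ alternative.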
|
For the following exercises, use implicit differentiation to find \(\frac{dy}{dx}\).
300) \(x^2−y^2=4\)
301) \(6x^2+3y^2=12\)
Answer:
\(\frac{dy}{dx}=\frac{−2x}{y}\)
302) \(x^2y=y−7\)
303) \(3x^3+9xy^2=5x^3\)
Answer: \(\frac{dy}{dx}=\frac{x}{3y}−\frac{y}{2x}\)
304) \(xy−\cos(xy)=1\)
305) \(y\sqrt{x+4}=xy+8\)
Answer: \(\frac{dy}{dx}=\frac{y−\frac{y}{2\sqrt{x+4}}}{\sqrt{x+4}−x}\)
306) \(−xy−2=\frac{x}{7}\)
307) \(y\sin(xy)=y^2+2\)
Answer: \(\frac{dy}{dx}=\frac{y^2\cos(xy)}{2y−\sin(xy)−xy\cos(xy)}\)
308) \((xy)^2+3x=y^2\)
309) \(x^3y+xy^3=−8\)
Answer: \(\frac{dy}{dx}=\frac{−3x^2y−y^3}{x^3+3xy^2}\)
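These derivatives are easy to spot-check with a computer algebra system. For instance, SymPy's `idiff` reproduces the answer to exercise 301 (assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Exercise 301: 6x^2 + 3y^2 = 12, written as F(x, y) = 0
F = 6*x**2 + 3*y**2 - 12
dydx = sp.idiff(F, y, x)   # implicit derivative dy/dx
print(dydx)                # -2*x/y
```

The same call checks any of the equations above once they are rearranged into the form \(F(x,y)=0\).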
For the following exercises, find the equation of the tangent line to the graph of the given equation at the indicated point. Just for observation, use a calculator or computer software to graph the function and the tangent line.
310) \([T] x^4y−xy^3=−2,(−1,−1)\)
311) \([T] x^2y^2+5xy=14,(2,1)\)
Answer:
\(y=−\frac{1}{2}x+2\)
312) \([T] \tan(xy)=y,(\frac{π}{4},1)\)
313) \([T] xy^2+\sin(πy)−2x^2=10,(2,−3)\)
Answer:
\(y=\frac{1}{π+12}x−\frac{3π+38}{π+12}\)
314) \([T] \frac{x}{y}+5x−7=−\frac{3}{4}y,(1,2)\)
315) \([T] xy+\sin(x)=1,(\frac{π}{2},0)\)
Answer:
\(y=0\)
316) [T] The graph of a folium of Descartes with equation \(2x^3+2y^3−9xy=0\) is given in the following graph.
a. Find the equation of the tangent line at the point \((2,1)\). Graph the tangent line along with the folium.
b. Find the equation of the normal line to the tangent line in a. at the point \((2,1)\).
317) For the equation \(x^2+2xy−3y^2=0,\)
a. Find the equation of the normal to the tangent line at the point \((1,1)\).
b. At what other point does the normal line in a. intersect the graph of the equation?
Answer: a. \(y=−x+2\) b. \((3,−1)\)
318) Find all points on the graph of \(y^3−27y=x^2−90\) at which the tangent line is vertical.
319) For the equation \(x^2+xy+y^2=7\),
a. Find the \(x\)-intercept(s).
b.Find the slope of the tangent line(s) at the x-intercept(s).
c. What does the value(s) in b. indicate about the tangent line(s)?
Answer: a. \((\pm\sqrt{7},0)\) b. \(−2\) c. They are parallel since the slope is the same at both intercepts.
320) Find the equation of the tangent line to the graph of the equation \(\sin^{−1}x+\sin^{−1}y=\frac{π}{6}\) at the point \((0,\frac{1}{2})\).
321) Find the equation of the tangent line to the graph of the equation \(\tan^{−1}(x+y)=x^2+\frac{π}{4}\) at the point \((0,1)\).
Answer: \(y=−x+1\)
322) Find \(y′\) and \(y''\) for \(x^2+6xy−2y^2=3\).
323) [T] The number of cell phones produced when \(x\) dollars is spent on labor and \(y\) dollars is spent on capital invested by a manufacturer can be modeled by the equation \(60x^{3/4}y^{1/4}=3240\).
a. Find \(\frac{dy}{dx}\) and evaluate at the point \((81,16)\).
b. Interpret the result of a.
Answer: \(a. −0.5926\) b. When $81 is spent on labor and $16 is spent on capital, the amount spent on capital is decreasing by $0.5926 per $1 spent on labor.
324) [T] The number of cars produced when x dollars is spent on labor and y dollars is spent on capital invested by a manufacturer can be modeled by the equation \(30x^{1/3}y^{2/3}=360\).
(Both \(x\)and \(y\) are measured in thousands of dollars.)
a. Find \(\frac{dy}{dx}\) and evaluate at the point \((27,8)\).
b. Interpret the result of a.
325) The volume of a right circular cone of radius \(x\) and height \(y\) is given by \(V=\frac{1}{3}πx^2y\). Suppose that the volume of the cone is \(85πcm^3\). Find \(\frac{dy}{dx}\) when \(x=4\) and \(y=16\).
Answer: \(−8\)
326) For the following exercises, consider a closed rectangular box with a square base with side \(x\) and height \(y\).
Find an equation for the surface area of the rectangular box, \(S(x,y)\).
327) If the surface area of the rectangular box is 78 square feet, find \(\frac{dy}{dx}\) when \(x=3\) feet and \(y=5\) feet.
Answer: \(−2.67\)
|
Solve the initial value problem: $$ u_{tt} - u_{xx} = 0 \\ u(x,0) = 0 \\ u_t(x,0) = \begin{cases} \cos \pi (x-1) & \text{if } 1 < x < 2 \\ 0 & \text{otherwise} \end{cases} $$ Find and draw the solution at $t=3$ for
(a). $x\in (-\infty, \infty)$.
(b). $x\in (0, \infty)$ with $u_x(0,t) = 0$.
(c). $x\in (-\infty, 3)$ with $u(3,t) = 0$.
(d). $x\in (0,3)$ with $u_x(0,t)=u_x(3,t)=0$.
For (a), since we are on the entire real line, we can just use d'Alembert's solution and obtain $u = \sin(\pi(x-3t))-\sin(\pi(x+3t))$, but how do we do it for the other cases, where we have a semi-infinite domain?
For (d) we can use separation of variables, correct?
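As a numerical reference for case (a), d'Alembert's formula with $u(x,0)=0$ gives $u(x,t)=\frac{1}{2}\int_{x-t}^{x+t} g(s)\,ds$ (wave speed $c=1$, matching $u_{tt}-u_{xx}=0$). A quick sketch, with helper names of my own:

```python
import numpy as np

def g(s):
    # initial velocity u_t(x, 0): cos(pi(s-1)) on (1, 2), zero elsewhere
    return np.where((s > 1) & (s < 2), np.cos(np.pi * (s - 1)), 0.0)

def u(x, t, n=20000):
    # d'Alembert with zero initial displacement:
    #   u(x, t) = (1/2) * integral_{x-t}^{x+t} g(s) ds
    ds = 2.0 * t / n
    s = x - t + ds * (np.arange(n) + 0.5)   # midpoint rule
    return 0.5 * np.sum(g(s)) * ds

# At t = 3 the value is nonzero only where (x-3, x+3) partially overlaps (1, 2)
print(u(-1.5, 3.0), u(4.5, 3.0))   # approximately +0.159 and -0.159
```

The semi-infinite cases (b)–(d) can then be checked against this by the method of images (even extension for $u_x=0$, odd extension for $u=0$).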
|
I remember when I first learnt that testing for primality is in P (as noted in the paper Primes is in P, which explains the AKS algorithm). Some time later, I was talking with a close friend of mine (who has since received his bachelors in Computer Science). He had thought it was hard to believe that it was possible to determine whether a number was prime without factoring that number. That’s pretty cool. The AKS algorithm doesn’t even rely on anything really deep – it’s just a clever application of many (mostly) elementary results. Both of us were well aware of the fact that numbers are hard, as an understatement, to factor. My interest in factoring algorithms has suddenly surged again, so I’m going to look through some factoring algorithms (other than my interesting algorithm, that happens to be terribly slow).
The most fundamental of all factoring algorithms is to simply try lots of factors. One immediately sees that one only needs to try prime numbers up to $\sqrt{n}$. Of course, there is a problem – in order to only trial divide by primes, one needs to know which numbers are prime. So if one were to literally only divide by primes, one would either maintain a Sieve of Eratosthenes or perform something like AKS on each candidate to see if it's prime. Or one could perform Miller-Rabin a few times (with different bases) to try 'almost only' primes (not in the measurable sense). As the prime number theorem says that $\pi(n) \sim \dfrac{n}{\log n}$, one would expect about $O(\sqrt{n})$ or more bit operations. This is why we call this the trivial method.
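A minimal sketch of the trivial method. Trial dividing by every integer up to $\sqrt{n}$ is in fact equivalent to dividing only by primes, since a composite trial divisor's prime factors have already been stripped out by the time it is reached:

```python
def trial_division(n):
    # Strip factors d = 2, 3, 4, ...; a composite d can never divide the
    # remaining n, because d's prime factors were already divided out.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains above sqrt(original n) is prime
    return factors

print(trial_division(8051))   # [83, 97]
```

The runtime is dominated by the loop up to $\sqrt{n}$, which is what makes this method hopeless for large semiprimes.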
The first major improvement didn't come about until 1975, when John Pollard proposed a new algorithm, commonly referred to as the Pollard-$\rho$ algorithm. Already, the complexity of the algorithm is far more intense than the previous. The main idea of the algorithm is the following observation: if we are trying to factor $n$ and $d$ is a relatively small factor of $n$, then it is likely there exist two numbers $x_i$ and $x_j$ such that $d|(x_i - x_j)$ but $n \not | (x_i - x_j)$, so that $\gcd(x_i - x_j, n) > 1$ and a factor has been found. But how is this implemented if we don't know any divisor $d$ of $n$?
That’s a very interesting question, and it has a sort of funny answer. Perhaps the naive way to use this idea would be to pick a random number $a$ and another random number $b$. Then check whether $\gcd(a-b,n) > 1$. If it's not, try a random number $c$, and check $\gcd(c-a,n)$ and $\gcd(c-b,n)$, and so on. This is of course very time-consuming, as on the $j$th step one needs to do $j-1$ gcd tests.
But we can be (far) wittier. This is one of those great insights that made me say – whoa, now there's an idea – when I first heard it. Suppose that instead, we have some polynomial $f(x)$ that we will use to pick our random numbers, i.e., we will choose our next random number $x_n$ by the iteration $x_n = f(x_{n-1})$. Then if we hit a point where $x_j \equiv x_k \mod{d}$, with $k < j$, then we will also have $f(x_j) \equiv f(x_k) \mod{d}$. This is how the method got its name – after a few random numbers, the sequence will loop back in on itself just like the letter $\rho$.
Of course, the sequence couldn't possibly go on for more than $d$ numbers without having some repeat mod $d$. But the greatest reason why we use this function is because it allows us to reduce the number of gcd checks we need to perform. Suppose that the 'length' of the loop is $l$: i.e., if $x_i \equiv x_j$, with $j > i$, then $l$ is the smallest positive integer such that $x_{j+l} \equiv x_j \equiv x_i$. Also suppose that the loop starts at the $m$th random number. Then if we are at the $k$th number, with $k \geq m$ and $l|k$, then we are 'in the loop' so to speak. And since $l|k$, $l|2k$ as well. So then $x_{2k} \equiv x_k \mod{d}$, and so $\gcd(x_{2k} - x_k, n) > 1$.
Putting this together means that we should check $\gcd(x_k - x_{k/2}, n)$ for every even $k$, and that's that. Now we do not have to do $k-1$ gcd calculations on the $k$th number, but instead one gcd calculation on every other random number. We left out the detail about the polynomial $f(x)$, which might seem a bit problematic. But most of the time, we just choose a polynomial of the form $f(x) = x^2 + a$, where $a \not \equiv 0, -2 \mod{n}$. (This just prevents hitting a degenerate sequence 1,1,1,1,1…)
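The loop just described can be sketched in a few lines (Floyd tortoise-and-hare cycle finding with $f(x)=x^2+a$; the function name and the retry-on-failure convention are my own):

```python
import math

def pollard_rho(n, a=1, x0=2):
    # Tortoise x_k and hare x_{2k} walk the sequence x -> x^2 + a (mod n);
    # once x_{2k} = x_k (mod d) for a hidden factor d, the gcd is nontrivial.
    if n % 2 == 0:
        return 2
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + a) % n          # tortoise: one step
        y = (y * y + a) % n          # hare: two steps
        y = (y * y + a) % n
        d = math.gcd(abs(x - y), n)
    return d if d < n else None      # d == n: the walk degenerated; retry with another a

print(pollard_rho(8051))  # a nontrivial factor of 8051 = 83 * 97
```

When the gcd comes out as $n$ itself, the whole cycle collapsed at once; in practice one simply restarts with a different constant $a$ or seed $x_0$.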
Of course, there are a few additional improvements that can be made. This, which I have heard called the "Tortoise and the Hare" approach (named after the slow-moving $x_i$ being compared to the fast-moving $x_{2i}$), is not the only way of finding cycles. There is a method called Brent's variant that finds cycles in a different way, reducing the number of modular multiplications. The key idea is to have the 'tortoise' sit at $x_{2^i}$ and compare it to the 'hare', who moves from $x_{2^i + 1}$ up to $x_{2^{i+1}}$. Then the tortoise sits at the next power of 2. The main source of the savings is that at each step, Brent's algorithm only needs to evaluate $f(x)$ once, while implementing Pollard's algorithm requires 3 evaluations (one for the tortoise, two for the hare).
In addition, one might not want to perform Euclid’s Algorithm after each step. Instead, one might do 100 steps at a time, and then perform Euclid’s algorithm on the product (effectively replacing 99 gcd computations by 99 modular multiplications, which saves time).
There is also undoubtedly an art to choosing the polynomial well, but I don’t know it. Fortunately, this sort of algorithm can easily be implemented in parallel with other polynomials. Unfortunately, although it picks up small factors quickly, its worst case running time is poor. The complexity of the algorithm is such that as far as I know, its big O running time isn’t fully proven. The sudden jump in factoring! More to come –
|
AP Statistics Curriculum 2007 Normal Std From Socr
Current revision as of 16:28, 8 February 2012
General Advance-Placement (AP) Statistics Curriculum - Standard Normal Variables and Experiments

Standard Normal Distribution
The Standard Normal Distribution is a continuous distribution with the following density:
* Standard Normal ''density'' function <math>f(x)= {e^{-x^2 \over 2} \over \sqrt{2 \pi}}.</math>
* Standard Normal ''cumulative distribution'' function <math>\Phi(y)= \int_{-\infty}^{y}{{e^{-x^2 \over 2} \over \sqrt{2 \pi}} dx}.</math>
* Why are these two functions, <math>f(x), \Phi(y)</math>, well-defined density and distribution functions, i.e., <math>\int_{-\infty}^{\infty} {f(x)dx}=1</math>? See the appendix below.
Note that the following exact ''areas'' are bound between the Standard Normal Density Function and the x-axis on these symmetric intervals around the origin:
* The area -1.0 < x < 1.0 = 0.8413 - 0.1587 = 0.6826
* The area -2.0 < x < 2.0 = 0.9772 - 0.0228 = 0.9544
* The area -3.0 < x < 3.0 = 0.9987 - 0.0013 = 0.9974

The inflection points (<math>f''(x)=0</math>) of the Standard Normal density function are <math>x=\pm 1</math>. The Standard Normal distribution is also a special case of the more general normal distribution where the mean is set to zero and the variance is set to one. The Standard Normal distribution is often called the ''bell curve'' because the graph of its probability density resembles a bell.

Experiments
Suppose we decide to test the state of 100 used batteries. To do that, we connect each battery to a volt-meter by randomly attaching the positive (+) and negative (-) battery terminals to the corresponding volt-meter's connections. Electrical current always flows from + to -, i.e., the current goes in the direction of the voltage drop. Depending upon which way the battery is connected to the volt-meter we can observe positive or negative voltage recordings (voltage is just a difference, which forces current to flow from higher to the lower voltage.) Denote
<math>X_i</math> = {measured voltage for battery ''i''} - this is a random variable with mean 0 and unit variance. Assume the distribution of each <math>X_i</math> is Standard Normal, <math>X_i \sim N(0,1)</math>. Use the Normal Distribution (with mean=0 and variance=1) in the SOCR Distribution applet to address the following questions. The Distributions help-page may be useful in understanding the SOCR Distribution Applet. How many batteries, from the sample of 100, can we expect to have:
* Voltage > 1? P(X > 1) = 0.1586, thus we expect 15-16 batteries to have voltage exceeding 1.
* |Voltage| > 1? P(|X| > 1) = 1 - 0.682689 = 0.3173, thus we expect 31-32 batteries to have absolute voltage exceeding 1.
* Voltage < -2? P(X < -2) = 0.0227, thus we expect 2-3 batteries to have voltage less than -2.
* Voltage <= -2? P(X <= -2) = 0.0227, thus we expect 2-3 batteries to have voltage less than or equal to -2.
* -1.7537 < Voltage < 0.8465? P(-1.7537 < X < 0.8465) = 0.761622, thus we expect 76 batteries to have voltage in this range.

Appendix
The derivation below illustrates why the standard normal density function, <math>f(x)= {e^{-x^2 \over 2} \over \sqrt{2 \pi}}</math>, represents a well-defined density function, i.e., <math>f(x)\ge 0</math> and <math>\int_{-\infty}^{\infty} {f(x)dx}=1</math>.

Clearly the exponential function is always non-negative (in fact it's strictly positive for each real value argument). To show that <math>\int_{-\infty}^{\infty} {e^{-x^2/2}dx}=\sqrt{2\pi}</math>, let <math>I=\int_{-\infty}^{\infty} {e^{-x^2/2}dx}</math>. Then <math>I^2=\int_{-\infty}^{\infty}{\int_{-\infty}^{\infty}{e^{-(x^2+y^2)/2}\,dx\,dy}}</math>. Change variables from Cartesian to polar coordinates: <math>x = r\cos(\theta)</math>, <math>y = r\sin(\theta)</math>. Hence <math>x^2+y^2=r^2</math> and <math>dx\,dy=r\,dr\,d\theta</math>. Therefore <math>I^2=\int_0^{2\pi}{\int_0^{\infty}{e^{-r^2/2}\,r\,dr}\,d\theta}=\int_0^{2\pi}{d\theta}=2\pi</math>, since <math>\int_0^{\infty}{e^{-r^2/2}\,r\,dr}=\left[-e^{-r^2/2}\right]_0^{\infty}=1</math>. Thus <math>I=\sqrt{2\pi}</math> and <math>\int_{-\infty}^{\infty}{f(x)dx}=1</math>.

SOCR Home page: http://www.socr.ucla.edu
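The probabilities used in the battery experiment above (and the tabulated areas) can also be reproduced without the applet, using the error function from Python's standard library; the helper name `Phi` is mine:

```python
import math

def Phi(z):
    # standard normal CDF, expressed through the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(1 - Phi(1), 4))                  # P(X > 1)      = 0.1587
print(round(1 - (Phi(1) - Phi(-1)), 4))      # P(|X| > 1)    = 0.3173
print(round(Phi(-2), 4))                     # P(X < -2)     = 0.0228
print(round(Phi(0.8465) - Phi(-1.7537), 4))  # P(-1.7537 < X < 0.8465) = 0.7616
```

The identity used is <math>\Phi(z)=\tfrac{1}{2}\left(1+\operatorname{erf}(z/\sqrt{2})\right)</math>, a direct change of variables in the CDF integral.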
|
How to Model Different Types of Damping in COMSOL Multiphysics®
In a previous blog post, we introduced various physical phenomena that cause damping in structures and showed how such damping can be represented mathematically. Today, we follow up by looking at how to actually include damping in finite element models.
How to Consider Damping in Finite Element Analysis
When performing a structural dynamics analysis, modeling the damping can be an important, and difficult, part of the task.
A transient analysis of a vibroacoustic micromirror, with viscous and thermal damping taken into account.
Below, you’ll find an overview of what to take into consideration when modeling damping effects in your finite element analyses with the COMSOL Multiphysics® software.
Eigenfrequency Analysis
An eigenfrequency problem can be solved with or without damping in COMSOL Multiphysics. As long as there are any dissipative effects in your model, these will be taken into account, and the computed eigenfrequencies will be complex-valued. This is automatic, so you do not need to add any special settings in the solver.
Eigenfrequencies in a model without (upper table) and with (lower table) damping.
In most cases, not only the eigenfrequencies but also the eigenmodes are complex-valued when damping is involved. The interpretation of a complex-valued mode shape is that the phase angle provides information about the phase shift between different points in the structure under free vibrations. That is, if the displacements in two points have different phase angles, they will not reach their peak values simultaneously.
In most cases, the effects of damping on mode shapes and eigenfrequencies are marginal. The main reason to include damping in an eigenfrequency analysis is to estimate how much different resonances will be damped.
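To see what a complex-valued eigenfrequency encodes, consider a single-degree-of-freedom oscillator (a minimal sketch in plain Python with assumed values for m, c, and k; this is not COMSOL input):

```python
import math

# Damped SDOF: m*u'' + c*u' + k*u = 0 has the underdamped eigenvalue pair
#   lambda = -zeta*omega0 +/- i*omega0*sqrt(1 - zeta^2)
m, c, k = 1.0, 2.0, 1.0e4              # assumed values
omega0 = math.sqrt(k / m)              # undamped angular eigenfrequency, rad/s
zeta = c / (2.0 * math.sqrt(k * m))    # damping ratio

lam = complex(-zeta * omega0, omega0 * math.sqrt(1 - zeta**2))

# One common convention reports the complex eigenfrequency as lam/(2*pi*i):
# its real part is the damped frequency in Hz, its imaginary part encodes decay.
f_complex = lam / (2j * math.pi)

# The damping ratio can be recovered from the complex eigenvalue:
zeta_back = -lam.real / abs(lam)
```

Note that sign and normalization conventions for reported eigenfrequencies vary between tools; the invariant facts are that |lambda| recovers the undamped frequency and -Re(lambda)/|lambda| recovers the damping ratio.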
Frequency-Response Analysis
If the excitation frequency is in the vicinity of a natural frequency (say within ±50%), the damping model is of paramount importance, as shown in the response curves in the previous blog post. This is a case where you really have to spend some effort on obtaining appropriate values of the damping. Close to the resonance, the results are completely controlled by the damping, so choosing between a loss factor of 0.01 or 0.02 can, in the end, mean a factor of 2 in a stress prediction.
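The factor-of-2 sensitivity mentioned above is easy to reproduce for a single degree of freedom with loss factor damping (a sketch with assumed values m = 1 kg, k = 10^4 N/m, and unit force):

```python
import math

# SDOF with loss factor damping in the frequency domain:
#   (-omega^2 * m + (1 + i*eta) * k) * u = f
def amplitude(omega, eta, m=1.0, k=1.0e4, f=1.0):
    return abs(f / (-omega**2 * m + (1 + 1j * eta) * k))

omega_n = math.sqrt(1.0e4 / 1.0)   # resonance frequency, rad/s

a1 = amplitude(omega_n, 0.01)      # response at resonance, loss factor 0.01
a2 = amplitude(omega_n, 0.02)      # response at resonance, loss factor 0.02
print(a1 / a2)   # -> 2.0: halving the loss factor doubles the resonant response
```

At resonance the elastic and inertial terms cancel, so the response is f/(eta*k): entirely controlled by the damping, exactly as described above.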
Time-Domain Analysis
In a time-domain analysis, the damping will, in most cases, have a limited impact on the results. The exceptions are when simulating wave propagation or if the time history of the loads is such that some resonances are strongly excited.
There is, however, another important aspect of damping in time-domain analysis: It can stabilize the time stepping. It is common that spurious, less interesting waves are generated in the structure. Unless they are properly suppressed, the time steps may become unnecessarily small. To suppress such waves, it is advantageous to introduce a damping model that mainly provides significant damping at high frequencies.
Response Spectrum Analysis
In response spectrum analysis, the damping is part of the design response spectrum, so it should not be explicitly modeled. A single damping value is used to represent the whole structure.
Numerical Models for Damping The Finite Element Formulation
In matrix form, the finite-element-discretized equations of motion can be written as

M \ddot{u} + C \dot{u} + K u = f

where M is the mass matrix, C is the viscous damping matrix, K is the stiffness matrix, u is the displacement vector, and the right-hand side consists of the force vector f.
The mass and stiffness matrices are computed given the geometry and basic material parameters, such as mass density and Young’s modulus. The damping matrix can, however, be formed in many different ways. Different types of damping contributions can often also be combined.
In the frequency domain, where it is assumed that the excitation and response are harmonic with angular frequency \omega, the corresponding equation is

\left(-\omega^2 M + i \omega C + K \right) \hat{u} = \hat{f}

Here, the displacement and force vectors \hat{u} and \hat{f} are complex-valued.
Loss Factor Damping
Loss factor damping is the primary method for describing losses in the material in a frequency-domain analysis. The mathematical description is, as discussed in the previous blog post, by a complex-valued multiplier to the stiffness.
In COMSOL Multiphysics, you can include loss factor damping through the
Damping subnode under a material model. For the Linear Elastic Material, you can even give individual loss factors for the different elements in the constitutive matrix. Entering loss factor damping values.
In reality, it is common that the loss factor has some frequency dependency. In a frequency-response analysis, this can be readily incorporated by making the loss factor a function of the built-in variable
freq. You can either use an expression, as shown below, or reference any type of function of the frequency.
Frequency-dependent loss factor.
To see how loss factor damping enters the system of equations, assume that the same loss factor \eta is used everywhere. Then, the damping matrix can be identified as

C = \frac{\eta}{\omega} K

The equation of motion then becomes

\left(-\omega^2 M + (1 + i \eta) K \right) \hat{u} = \hat{f}
Viscous Damping
In a viscous damping model, stresses proportional to the strain rate appear in the solid material. In the most general case, the constitutive tensor relating the stress to the strain rate can contain 21 independent constants. Since damping is difficult to measure and quantify, these values would seldom be known, and it is more common to work with isotropic viscous damping models.
Viscous damping in the
Solid Mechanics interface in COMSOL Multiphysics uses two constants:

- Bulk viscosity, \eta_b
- Shear viscosity, \eta_v
The former provides a damping proportional to volume change, and the latter to shape change.
The viscous stress tensor can be written as

{\boldsymbol \sigma}_q = \eta_b \dot{\epsilon}_v \mathbf{I} + 2 \eta_v \dot{{\boldsymbol \epsilon}}_d

where \epsilon_v is the volumetric strain and {\boldsymbol \epsilon}_d is the deviatoric part of the strain tensor.
Since the damping stress is proportional to the strain rate, it will be more prominent at higher frequencies.
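A minimal numerical sketch of the bulk/shear split described above, with assumed viscosity values and a plain 3×3 tensor representation:

```python
# Isotropic viscous damping stress:
#   sigma = eta_b * (volumetric strain rate) * I + 2 * eta_v * (deviatoric strain rate)
eta_b, eta_v = 0.5, 0.2   # assumed bulk and shear viscosities, Pa*s

def viscous_stress(de):
    """de: 3x3 strain-rate tensor as nested lists; returns the viscous stress tensor."""
    vol = de[0][0] + de[1][1] + de[2][2]   # trace = volumetric strain rate
    sigma = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            dev = de[i][j] - (vol / 3.0 if i == j else 0.0)   # deviatoric part
            sigma[i][j] = (eta_b * vol if i == j else 0.0) + 2.0 * eta_v * dev
    return sigma

# Pure volume change: the deviatoric part vanishes, so only eta_b contributes.
s = viscous_stress([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
```

For a pure shear strain rate, only the \eta_v term contributes, mirroring how the two constants separate volume change from shape change.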
Viscous damping is another option in the
Damping node. Entering viscous damping.
Viscous damping can be used in any dynamic study type.
Rayleigh Damping
Rayleigh damping is a simple way of generating a damping matrix as a pure linear combination of the mass and stiffness matrices,

C = \alpha M + \beta K
This damping model does not have a direct connection to physical damping processes. Originally, it was introduced because it gives a damping matrix that can be diagonalized by the eigenmodes from the undamped eigenfrequency problem, thus giving a complete dynamic decoupling between the different modes.
The stiffness matrix term (“beta damping”) can, however, be interpreted as being directly proportional to the strain rate. Actually, a pure beta damping corresponds to a viscous damping with

\eta_b = \beta K, \qquad \eta_v = \beta G

where K and G are the elastic bulk and shear moduli, respectively.
Beta damping, just as viscous damping, provides a damping that is stronger at higher frequencies. The mass proportional term α, on the contrary, provides a damping that is strong at low frequencies. It acts on the velocity of the structure, so it will damp a rigid body motion.
Rayleigh damping is also given in the
Damping subnode under a material model. This design actually provides the freedom to create a type of damping that is a generalization of the original Rayleigh damping. In order for the damping matrix to be a linear combination of the mass and stiffness matrices at the system level, the Rayleigh damping parameters must be the same in all Damping nodes. If not, this property is true only at the element level.
There are two ways in which you can provide the Rayleigh damping parameters, either as a direct input of α and β or by providing the damping ratio at two different frequencies.
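The second input mode corresponds to solving a small linear system: with C = \alpha M + \beta K, the modal damping ratio is \zeta(\omega) = \alpha/(2\omega) + \beta\omega/2, so two target ratios at two frequencies determine \alpha and \beta. A sketch with assumed target values:

```python
import math

# Rayleigh damping: zeta(omega) = alpha/(2*omega) + beta*omega/2
# Assumed targets: 2% damping at 10 Hz and 4% damping at 100 Hz.
f1, zeta1 = 10.0, 0.02
f2, zeta2 = 100.0, 0.04
w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2

# Closed-form solution of the 2x2 system for (alpha, beta)
beta = 2.0 * (zeta2 * w2 - zeta1 * w1) / (w2**2 - w1**2)
alpha = 2.0 * zeta1 * w1 - beta * w1**2

def zeta(omega):
    """Resulting damping ratio at any angular frequency."""
    return alpha / (2.0 * omega) + beta * omega / 2.0
```

Between the two anchor frequencies the resulting damping ratio dips below both targets, and outside them it grows; this is the characteristic "bathtub" shape of Rayleigh damping.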
Entering Rayleigh damping in a Damping subnode to a material model.

Dissipative Material Models
Some material models contain an inherent dissipation. In this context, the most interesting case is probably viscoelasticity. When you use such a material model, it will usually provide significant damping. You would, in most cases, not combine it with a Damping node on the same domain.

Selecting a viscoelastic material model.

Thermoelastic Damping
Thermoelastic damping can be directly incorporated into a model through a setting in the
Thermal Expansion multiphysics coupling. Thermoelastic damping in a coupled heat transfer and structural analysis.
The effect of including thermoelastic damping is that a heat source term, proportional to the rate of stress change, is added to the heat balance equations,

Q = -T \, {\boldsymbol \alpha} : \frac{\partial {\boldsymbol \sigma}}{\partial t}

Here, T is the temperature, \boldsymbol \sigma is the stress tensor, and \boldsymbol \alpha is the coefficient of thermal expansion tensor.

Modal Damping
Solving linear structural dynamics problems using mode superposition is a very efficient technique. When using mode superposition together with damping, there are certain things to note.
The initial eigenfrequency analysis should be performed using the undamped problem, and the damping is then included only during the mode superposition step. The most convenient way of ensuring this is through the
Physics and Variables Selection section in the settings for each study step. Study step settings for the eigenfrequency study (left) and subsequent mode superposition study (right).
All types of damping contributions are allowed in the mode superposition. This may not sound surprising, but it is an effect of the fact that the eigenmodes are not assumed to be decoupled in the modal solvers in COMSOL Multiphysics. This means that you can solve a wider range of damped problems than in many other implementations of a mode superposition method.
In addition to damping provided by various physics features, you can also provide a damping ratio for each eigenmode: so-called
modal damping. Modal damping is particularly useful if you know from experience that some modes are more heavily damped than others. This is the case when different physical phenomena are connected to the mode shapes. Modal damping is given directly in the settings for the modal solver. Entering modal damping.
Modal damping is added to any other damping contributions.
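Conceptually, mode superposition with modal damping amounts to summing damped single-degree-of-freedom responses, one per mode, each with its own damping ratio. A schematic sketch (the two modes and all numbers below are assumed, purely for illustration):

```python
# Mode superposition with per-mode damping ratios.
# Each entry: (omega_k [rad/s], zeta_k, modal force f_k, mode amplitude at output point)
modes = [
    (100.0, 0.01, 1.0, 1.0),
    (350.0, 0.05, 0.5, -0.7),
]

def response(omega):
    """Complex frequency response at the output point, summed over damped modes."""
    u = 0j
    for wk, zk, fk, phik in modes:
        u += phik * fk / (wk**2 - omega**2 + 2j * zk * wk * omega)
    return u

static = response(0.0)   # quasi-static limit: sum of phi_k*f_k/omega_k^2
```

Each mode contributes a resonance peak whose height is governed by its own zeta_k, which is exactly what assigning individual modal damping ratios controls.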
Infinity Boundary Conditions
When you need to model losses due to acoustic emission or anchor losses, the important thing is to equip your model with boundary conditions that allow outgoing waves to disappear without reflection. COMSOL Multiphysics offers several options here, depending on the involved physics interfaces and whether the analysis is in the time domain or frequency domain.
In the frequency domain, a
perfectly matched layer (PML) is a good alternative for a boundary condition toward “infinity”. The PML formulation, which is available for many different physics interfaces, essentially attenuates outgoing waves so that the reflected energy is very small. The effect is that damping will be present in the analysis, since the energy in the outgoing waves is lost.
A PML is modeled using a few layers of elements on the exterior of the computational domain.
Defining a domain as a PML.
In the
Solid Mechanics interface, you can also find a special type of boundary condition called a Low-Reflecting Boundary. Its purpose is the same as for the PML: to avoid wave reflection. Although it is not as efficient as a PML when the waves strike the boundaries at oblique angles, the Low-Reflecting Boundary node has two advantages:

- It can be used in time-domain analyses
- Since it is a boundary condition, it does not require extra domains to be meshed outside of the computational domain

The Low-Reflecting Boundary node.
Another way of simulating waves traveling toward infinity is by using the boundary element method (BEM) formulation for acoustic waves.
Friction Between Sliding Surfaces
If friction is an important source of damping, you will often have to make some engineering approximations. In principle, you can, of course, solve a time-dependent problem with full contact modeling including friction. Unfortunately, in the vast majority of cases, this will be prohibitively expensive in terms of computer resources.
An alternative is to replace the contact zone by a thin elastic layer and equip it with viscous or loss factor damping. The problem, however, is how to estimate the shear stiffness and corresponding loss factor. General methods for estimating these parameters are a topic of recent and ongoing research. You may have to do initial local analyses of the joint to investigate its properties.
Damping in Other Features
There are many other features beyond the material models through which you can supply damping in your model. Some examples include:
- Spring Foundation feature
- Thin Elastic Layer feature
- Spring-Damper feature
- Joints and gears in the Multibody Dynamics interface
- Damper and Impedance features in the Lumped Mechanical System interface
- Bearings in the Rotordynamics and Multibody Dynamics interfaces
- Any load that is entered as velocity dependent
- Complex-valued material data

Concluding Remarks
Modeling damping in structural dynamics is an essential and nontrivial part of the model definition. COMSOL Multiphysics provides you with a broad range of options for describing damping. Obtaining the correct data for the materials and components in a structure is, however, often challenging.
Next Steps
Learn about the Structural Mechanics Module, which includes specialized features and functionality available for modeling damping:
Read Part 1 of this blog series: Damping in Structural Dynamics: Theory and Sources

If you are interested in modeling damping, you can find several examples showing different approaches in the Application Gallery:

- Static and Eigenfrequency Analyses of an Elbow Bracket
- Viscoelastic Structural Damper — Transient Analysis
- Bracket — Transient Analysis (bracket_frequency.mph)
- Wave Propagation in Rock Under Blast Loads
- Piezoelectric Tonpilz Transducer
- Thermoelastic Damping in a MEMS Resonator
- Disc Resonator Anchor Losses
- Vibrating Micromirror with Viscous and Thermal Damping: Transient Behavior
|
As already mentioned,$$(\spadesuit) \qquadC(\Bbb{T}) \rtimes_{\alpha} \Bbb{Z}_{2} \cong\{f \in C([0,1] \to {\text{M}_{2}}(\Bbb{C})) \mid\text{$ f(0) $ and $ f(1) $ are diagonal}\}.$$Hence, by the definition of a continuous field of $ C^{*} $-algebras, $ C(\Bbb{T}) \rtimes_{\alpha} \Bbb{Z}_{2} $ is a continuous field of $ C^{*} $-algebras over the compact Hausdorff space $ [0,1] $ with the following structure:
The fibers over $ 0 $ and $ 1 $ are the $ C^{*} $-subalgebra of $ {\text{M}_{2}}(\Bbb{C}) $ consisting of all diagonal matrices, which is isomorphic to $ \Bbb{C} \oplus \Bbb{C} $. The fibers over $ (0,1) $ are $ {\text{M}_{2}}(\Bbb{C}) $. The generating $ * $-subalgebra of cross-sections is simply the set on the right-hand side of the relation $ (\spadesuit) $.
This agrees with a $ 1976 $ result by Ru-Ying Lee, which states that a $ C^{*} $-algebra $ A $ is a continuous field over a locally compact Hausdorff space $ X $ if and only if there exists a continuous open map from the primitive-ideal space of $ A $, $ \text{Prim}(A) $,
onto $ X $.
As $ (\Bbb{Z}_{2},\Bbb{T},\alpha) $ is a
second-countable transformation group, certain results in the theory of transformation-group $ C^{*} $-algebras show that $ \text{Prim}(C(\Bbb{T}) \rtimes_{\alpha} \Bbb{Z}_{2}) $ is homeomorphic to a closed and bounded interval of $ \Bbb{R} $.
Of course, any $ C^{*} $-algebra is a continuous field over a single point, but this is uninteresting.
|
That was an excellent post and qualifies as a treasure to be found on this site!
wtf wrote:
When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that. In fact even our friend Max gets that. http://blogs.discovermagazine.com/crux/ ... g-physics/
Thanks for the link and I would have showcased it all on its own had I seen it first
The point I am making is something different. I am pointing out that:
All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics That doesn't mean that they necessarily do; only that so far, that's how the history has worked out.
I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate.
But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, y = 1/x as x approaches infinity, then y approaches 0, but we don't actually USE infinity in any calculations, but we extrapolate.
There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets.
Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity?
1) The rigorization of Newton's calculus culminated with infinitary set theory.
Newton discovered his theory of gravity using calculus, which he invented for that purpose.
I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it.
However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun.
I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant.
2) Einstein's gneral relativity uses Riemann's differential geometry.
In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean. Like spheres, and torii, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics.
Isn't this the same problem as previous? dx=0?
3) Fourier series link the physics of heat to the physics of the Internet; via infinite trigonometric series.
In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're composing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math.
I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infinite-th 'a'?
4) Quantum theory is functional analysis.

If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series.

Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers.

QM rests on the mathematics of uncountable sets, in an essential way.
Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces.
Like Max said, "Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations."
We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such.
Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it."
He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion. Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god.
I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it.
ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig.
Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol
5) Cantor was led to set theory from Fourier series.
In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity).
I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity.
In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here:
If you begin by studying the flow of heat through an iron rod; you will inexorably discover transfinite set theory.
Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake.
I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light. I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat.
https://www.quora.com/What-is-heat-1
https://www.quora.com/What-is-meant-by-heat
https://www.quora.com/What-is-heat-in-physics
https://www.quora.com/What-is-the-definition-of-heat
https://www.quora.com/What-distinguishes-work-and-heat
Physics is a mess. What gamma rays are, depends who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing....
I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality.
It just means we're using averages rather than discrete actualities and it's close enough.
I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is.
I think it means there are really no separate things and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself.
Anyway, great post! Please don't be mad. Everyone here values your presence and are intimidated by your obvious mathematical prowess
Don't take my pushback too seriously
I'd prefer if we could collaborate as colleagues rather than competing.
|
AP Statistics Curriculum 2007 Infer 2Proportions From Socr
Revision as of 23:05, 11 May 2010
General Advance-Placement (AP) Statistics Curriculum - Inferences about Two Proportions

Testing for Equality of Two Proportions
Suppose we have two populations and we are interested in estimating whether the proportions of subjects that have a certain characteristic of interest (e.g., fixed gender) in each population are equal. To make this inference we obtain two samples {X_1, X_2, ..., X_{n_1}} and {Y_1, Y_2, ..., Y_{n_2}}, where each X_i and Y_i indicates whether the i-th observation in the corresponding sample had the characteristic of interest. That is, X_i = 1 if the i-th observation in the first sample has the characteristic (and X_i = 0 otherwise), and similarly for Y_i.

The raw sample proportions of observations having the characteristic of interest are \hat{p}_1 = {\sum_{i=1}^{n_1}{X_i} \over n_1} and \hat{p}_2 = {\sum_{i=1}^{n_2}{Y_i} \over n_2}.

The corrected sample proportions (for small samples) are \tilde{p}_1 = {\sum_{i=1}^{n_1}{X_i}+0.5 \over n_1+1} and \tilde{p}_2 = {\sum_{i=1}^{n_2}{Y_i}+0.5 \over n_2+1}.

By the independence of the samples, the standard error of the difference of the two proportion estimates is:
* Raw proportions: SE(\hat{p}_1 - \hat{p}_2) = \sqrt{{\hat{p}_1(1-\hat{p}_1)\over n_1} + {\hat{p}_2(1-\hat{p}_2)\over n_2}}
* Corrected proportions: SE(\tilde{p}_1 - \tilde{p}_2) = \sqrt{{\tilde{p}_1(1-\tilde{p}_1)\over n_1+1} + {\tilde{p}_2(1-\tilde{p}_2)\over n_2+1}}

Hypothesis Testing the Difference of Two Proportions
* Null Hypothesis: H_o: p_1 = p_2, where p_1 and p_2 are the population proportions of interest.
* Alternative Research Hypotheses:
** One sided (uni-directional): H_1: p_1 > p_2, or H_1: p_1 < p_2
** Double sided: H_1: p_1 \not= p_2
* Test Statistics: Z_o = {\hat{p}_1 - \hat{p}_2 - 0 \over SE(\hat{p}_1 - \hat{p}_2)} \sim N(0,1)

Genders of Siblings Example
Is the gender of a second child influenced by the gender of the first child, in families with more than one child? A research hypothesis needs to be formulated first, before collecting/looking at/interpreting the data that will be used to address it: mothers whose 1st child is a girl are more likely to have a girl as a second child, compared to mothers with a boy as 1st child. Data: 20 years of birth records of one hospital in Auckland, New Zealand.
Second Child
First Child	Male	Female	Total
Male	3,202	2,776	5,978
Female	2,620	2,792	5,412
Total	5,822	5,568	11,390
Let p_1 = true proportion of girls among mothers whose first child was a girl, and p_2 = true proportion of girls among mothers whose first child was a boy. The parameter of interest is p_1 - p_2.

Hypotheses: H_o: p_1 - p_2 = 0 (skeptical reaction). H_1: p_1 - p_2 > 0 (research hypothesis).
Second Child
Group	Number of births	Number of girls	Proportion
1 (Previous child was girl)	n_1 = 5412	2792	\hat{p}_1 = 0.5159
2 (Previous child was boy)	n_2 = 5978	2776	\hat{p}_2 = 0.4644

* Test Statistics: Z_o = {\hat{p}_1 - \hat{p}_2 - 0 \over \sqrt{{\hat{p}_1(1-\hat{p}_1)\over n_1} + {\hat{p}_2(1-\hat{p}_2)\over n_2}}} \sim N(0,1) and Z_o = 5.4996.
* P-value: P(Z > Z_o) < 1.9\times 10^{-8}. This small p-value provides extremely strong evidence to reject the null hypothesis that there are no differences between the proportions of mothers that had a girl as a second child but had either a boy or a girl as their first child. Hence there is strong statistical evidence implying that the genders of siblings are not independent.
* Practical significance: The practical significance of the effect (of the gender of the first child on the gender of the second child, in this case) can only be assessed using confidence intervals. A 95% CI(p_1 - p_2) = [0.033; 0.070] is computed by \hat{p}_1 - \hat{p}_2 \pm 1.96 SE(\hat{p}_1 - \hat{p}_2). Clearly, this is a practically negligible effect and no reasonable person would make important prospective family decisions based on the gender of their (first) child.

This SOCR Analysis Activity illustrates how to use the SOCR Analyses to compute the p-values and answer the hypothesis testing challenge.

SOCR Home page: http://www.socr.ucla.edu
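As a cross-check of the numbers above, the test statistic, p-value, and confidence interval can be recomputed directly (a sketch in Python; not part of the original SOCR activity):

```python
import math

# Two-proportion z-test on the Auckland birth-record data from the example above.
x1, n1 = 2792, 5412   # girls among mothers whose first child was a girl
x2, n2 = 2776, 5978   # girls among mothers whose first child was a boy

p1, p2 = x1 / n1, x2 / n2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se                                 # ~ 5.4996

# One-sided p-value via the standard normal CDF, and a 95% CI for p1 - p2
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
p_value = 1 - Phi(z)                               # below 1.9e-8
ci = (p1 - p2 - 1.96 * se, p1 - p2 + 1.96 * se)    # ~ [0.033, 0.070]
```

The computed Z_o, p-value, and confidence interval reproduce the values stated in the example.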
|
Nonlinear Systems of Equations and Problem-Solving
As with linear systems, a nonlinear system of equations (and conics) can be solved graphically and algebraically for all of its variables.
Learning Objectives
Solve nonlinear systems of equations graphically and algebraically
Key Takeaways
Key Points
- Subtracting one equation from another is an effective means for solving linear systems, but it often is difficult to use in nonlinear systems, in which the terms of two equations may be very different.
- Substitution of a variable into another equation is usually the best method for solving nonlinear systems of equations.
- Nonlinear systems of equations may have one or multiple solutions.
Key Terms
- system of equations: A set of formulas with multiple variables which can be solved using a specific set of values.
- conic section: Any of the four distinct shapes that are the intersections of a cone with a plane, namely the circle, ellipse, parabola, and hyperbola.
- nonlinear: An algebraic term that is raised to the power of two or higher; equivalently, a function with a curved graph.
Conic Sections
A conic section (or just conic) is a curve obtained as the intersection of a cone (more precisely, a right circular conical surface) with a plane. In analytic geometry, a conic may be defined as a plane algebraic curve of degree 2. There are a number of other geometric definitions possible. The four types of conic section are the hyperbola, the parabola, the ellipse, and the circle, though the circle can be considered to be a special case of the ellipse.
The type of a conic corresponds to its eccentricity. Conics with eccentricity less than [latex]1[/latex] are ellipses, conics with eccentricity equal to [latex]1[/latex] are parabolas, and conics with eccentricity greater than [latex]1[/latex] are hyperbolas. In the focus-directrix definition of a conic, the circle is a limiting case of the ellipse with an eccentricity of [latex]0[/latex]. In modern geometry, certain degenerate cases, such as the union of two lines, are included as conics as well.
System of Equations
In a system of equations, two or more relationships are stated among variables. A system is generally solvable when there are at least as many independent equations as there are variables. If each equation is graphed, the solution for the system can be found at the point where all the functions meet. The solution can be found either by inspection of a graph, typically by using graphing or plotting software, or algebraically.
Nonlinear Systems
Nonlinear systems of equations, such as conic sections, include at least one equation that is nonlinear. A nonlinear equation is defined as an equation possessing at least one term that is raised to a power of 2 or more. When graphed, these equations produce curved lines.
Since at least one function has curvature, it is possible for nonlinear systems of equations to contain multiple solutions. As with linear systems of equations, substitution can be used to solve nonlinear systems for one variable and then the other.
Solving nonlinear systems of equations algebraically is similar to doing the same for linear systems of equations. However, subtraction of one equation from another can become impractical if the two equations have different terms, which is more commonly the case in nonlinear systems.
Example
Consider, for example, the following system of equations:
[latex]\begin{align} y &= x^2 \; \qquad (1) \\ y &= x + 6 \quad (2) \end{align}[/latex]
We can solve this system algebraically by using equation [latex](1)[/latex] as a substitution. The quantity [latex]x^2[/latex] must be equivalent to the quantity [latex]y[/latex], so we substitute [latex]x^2[/latex] for [latex]y[/latex] in equation [latex](2)[/latex]:
[latex]\displaystyle{ \begin{align} y&=x+6 \\ x^2&=x+6 \end{align} }[/latex]
This quadratic equation can be solved by moving all the equation’s components to the left before using the quadratic formula:
[latex]x^2-x-6=0[/latex]
Using the quadratic formula, with [latex]a = 1[/latex], [latex]b = -1[/latex], and [latex]c = -6[/latex], it can be determined that the solutions are [latex]x = -2[/latex] and [latex]x = 3[/latex].
The solutions for [latex]x[/latex] can then be plugged into either of the original systems to find the value of [latex]y[/latex]. In this example, we will use equation [latex](1)[/latex]:
[latex]\displaystyle{ \begin{align} y&=x^2 \\ y&=(-2)^2 \\ y&=4 \end{align} }[/latex]
[latex]\displaystyle{ \begin{align} y&=x^2 \\ y&=(3)^2\\ y&=9 \end{align} }[/latex]
Thus, for [latex]x = -2[/latex], [latex]y = 4[/latex], and for [latex]x = 3[/latex], [latex]y = 9[/latex].
Our final solutions are: [latex](-2,4)[/latex] and [latex](3,9)[/latex].
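A brief check of this worked example in Python, re-deriving the roots of x^2 - x - 6 = 0 with the quadratic formula and verifying that each solution pair satisfies both original equations:

```python
from math import sqrt

# Solve x^2 - x - 6 = 0 with the quadratic formula (a = 1, b = -1, c = -6).
a, b, c = 1, -1, -6
disc = b * b - 4 * a * c
roots = sorted([(-b - sqrt(disc)) / (2 * a), (-b + sqrt(disc)) / (2 * a)])
print(roots)  # [-2.0, 3.0]

# Each root, paired with y = x^2, must also satisfy y = x + 6.
for x in roots:
    y = x ** 2
    assert y == x + 6
```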
Models Involving Nonlinear Systems of Equations
Nonlinear systems of equations can be used to solve complex problems involving multiple known relationships.
Learning Objectives
Use nonlinear systems of equations to solve problems in the real world
Key Takeaways
Key Points
- Problems involving simultaneously moving bodies can be solved using systems of equations. If at least one body accelerates or decelerates, the system is nonlinear.
- If the relationship between multiple unknown numbers is described in as many ways as there are numbers, all unknowns can be found using systems of equations. If at least one of those relationships is nonlinear, the system is nonlinear.
- Substitution is the best method for solving simultaneous equations, although to answer a question, one may not need to solve for every variable.
Key Terms
- system of equations: A set of formulas with multiple variables which can be solved using a specific set of values.
Nonlinear systems of equations are not just for hypothetical discussions—they can be used to solve complex problems involving multiple known relationships.
Real World Examples
Consider, for example, a car that begins at rest and accelerates at a constant rate of [latex]4[/latex] meters per second each second. Its position in meters ([latex]y[/latex]) can be determined as a function of time in seconds ([latex]t[/latex]), by the formula:
[latex]\displaystyle{ \begin{align} y&=\frac{1}{2}\left(4\right)t^2\\ y&=2t^2 \end{align} }[/latex]
Now consider a second car, traveling at a constant speed of [latex]20[/latex] meters per second. Its position ([latex]y[/latex]) in meters can be determined as a function of time ([latex]t[/latex]) in seconds, using the following formula:
[latex]y=20t[/latex]
When the first car begins to accelerate, the second car is [latex]400[/latex] meters ahead of it. To express the position of the second car relative to the first as a function of time, we can modify the second equation as such:
[latex]y=20t+400[/latex]
To determine where the cars are when they are alongside one another and how much time has passed since the first began to accelerate, we can algebraically solve the system of equations using substitution:
[latex]\begin{align} y&=20t+400\\ 2t^2&=20t+400 \end{align}[/latex]
Solving for [latex]t[/latex], we can find that the cars are side-by-side after [latex]20[/latex] seconds.
Substituting [latex]20[/latex] for [latex]t[/latex] into the equations for either of the cars, we can find that the cars meet [latex]800[/latex] meters ahead of the first car’s starting point. Note that a question on an exam may not prompt solutions for both variables.
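The algebra above can be verified with a short script: dividing 2t^2 = 20t + 400 by 2 gives t^2 - 10t - 200 = 0, which the quadratic formula solves directly.

```python
from math import sqrt

# 2t^2 = 20t + 400  ->  t^2 - 10t - 200 = 0 after dividing by 2.
a, b, c = 1, -10, -200
disc = b * b - 4 * a * c
roots = [(-b - sqrt(disc)) / (2 * a), (-b + sqrt(disc)) / (2 * a)]  # -10, 20
t = max(roots)            # keep the physically meaningful positive root
y = 2 * t ** 2            # position of the accelerating car
print(t, y)  # 20.0 800.0
```

Note that the negative root is discarded: it corresponds to a time before the first car began to accelerate.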
Some other real-world examples of nonlinear systems include:
- Triangulation of GPS signals. A device like your cellphone receives signals from GPS satellites, which have known orbital positions around the Earth. A signal from a single satellite allows a cellphone to know that it is somewhere on a circle. Additional signals are additional circles that intersect each other, and the cellphone’s actual position is at the intersection. Three or more signals reduce the solution of the system to a single coordinate point.
- The conservation of mechanical energy can produce a system of nonlinear equations when there is an elastic (perfectly bouncy) collision. The kinetic energy of the objects depends on the speed squared, and the momentum depends on the speed directly.
- Manufacturing and design of everything, from electronic parts to metal tools to the architecture of buildings, uses computer-aided design software that helps create three-dimensional shapes from the intersection of curved lines. Rendering and visualizing these objects, and formulating a plan for constructing them, requires the software to solve nonlinear systems.
Additional Example
In addition to practical scenarios like the above, nonlinear systems can be used in abstract problems. For example, a question on an exam could ask:
The product of two numbers is 12, and the sum of their squares is 40. What are the numbers?
In this case, we could make an equation for each known relationship:
[latex]\begin{align} x\cdot y&=12 \\ x^2+y^2&=40 \end{align}[/latex]
Substitution can be used to calculate that the numbers are 2 and 6.
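The substitution can be carried out explicitly: y = 12/x turns x^2 + y^2 = 40 into x^4 - 40x^2 + 144 = 0, a quadratic in u = x^2. A short script recovers all four solution pairs (the exam answer 2 and 6, plus the sign-flipped pairs):

```python
from math import sqrt

# x*y = 12 and x^2 + y^2 = 40.  Substituting y = 12/x and clearing
# denominators gives x^4 - 40x^2 + 144 = 0, a quadratic in u = x^2.
a, b, c = 1, -40, 144
disc = b * b - 4 * a * c
u_values = [(-b - sqrt(disc)) / (2 * a), (-b + sqrt(disc)) / (2 * a)]  # 4, 36

pairs = set()
for u in u_values:
    for x in (sqrt(u), -sqrt(u)):
        pairs.add((round(x), round(12 / x)))
print(sorted(pairs))  # [(-6, -2), (-2, -6), (2, 6), (6, 2)]
```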
Nonlinear Systems of Inequalities
Systems of nonlinear inequalities can be solved by graphing boundary lines.
Learning Objectives
Practice techniques for solving nonlinear systems of inequalities
Key Takeaways
Key Points
- A nonlinear system of inequalities may have at least one solution; if it does, a solution may be bounded or unbounded.
- A solution for a nonlinear system of inequalities will be in a region that satisfies every inequality in the system.
- The best way to show solutions to nonlinear systems of inequalities is graphically, by shading the area that satisfies all of the system’s constituent inequalities.
Key Terms
- inequality: A statement that, of two quantities, one is specifically less than or greater than another. Symbols: [latex]<[/latex] or [latex]\leq[/latex] or [latex]>[/latex] or [latex]\geq[/latex], as appropriate.
- nonlinear: A polynomial expression of degree 2 or higher.
- system of equations: A set of formulas with multiple variables which can be solved using a specific set of values.
A system of inequalities consists of two or more inequalities, which are statements that one quantity is greater than or less than another. A nonlinear inequality is an inequality that involves a nonlinear expression—a polynomial function of degree 2 or higher. The most common way of solving one inequality with two variables [latex]x[/latex] and [latex]y[/latex] is to shade the region on a graph where the inequality is satisfied.
Every inequality has a boundary line, which is the equation produced by changing the inequality relation to an equals sign. The boundary line is drawn as a dashed line (if [latex]<[/latex] or [latex]>[/latex] is used) or a solid line (if [latex]\leq[/latex] or [latex]\geq[/latex] is used). One side of the boundary will have points that satisfy the inequality, and the other side will have points that falsify it. By testing individual points, the correct region can be shaded. If we have two inequalities, therefore, we shade in the
overlap region, where both inequalities are simultaneously satisfied.
Consider, for example, the system including the parabolic nonlinear inequality:
[latex]y>x^2[/latex]
and the linear inequality:
[latex]y < x+2[/latex]
All points below the line [latex]y=x+2[/latex] satisfy the linear inequality, and all points above the parabola [latex]y=x^2[/latex] satisfy the parabolic nonlinear inequality.
Graphing both inequalities reveals one region of overlap: the area where the parabola dips below the line. This area is the solution to the system.
The limits of each inequality intersect at [latex](-1, 1)[/latex] and [latex](2, 4)[/latex]. Note that the area above [latex]y=x^2[/latex] that is also below [latex]y=x+2[/latex] is closed between those two points. Whereas the solution region for a system of linear inequalities is always an infinite, unbounded area (two lines can cross one another at most once), the solution region for a nonlinear system of inequalities can, in many instances, be a finite, bounded area.
This need not be the case with all nonlinear inequalities, but reversing the direction of both inequalities in the previous example would lead to an infinite solution area.
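Without a graph at hand, the shading procedure can be mirrored by testing points directly, as in this small sketch:

```python
def in_region(x, y):
    """True when (x, y) satisfies both y > x^2 and y < x + 2."""
    return y > x ** 2 and y < x + 2

assert in_region(0, 1)        # between the parabola and the line
assert not in_region(0, 3)    # above the line
assert not in_region(0, -1)   # below the parabola

# The boundary curves meet where x^2 = x + 2, i.e. at x = -1 and x = 2:
for x in (-1, 2):
    assert x ** 2 == x + 2
print("region bounded between (-1, 1) and (2, 4)")
```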
|
HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN)/HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment has highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085 Notice détaillée - Notices similaires 2019-10-09 06:01
Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064 Notice détaillée - Notices similaires 2019-10-09 06:00
The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 Notice détaillée - Notices similaires 2019-10-09 06:00
The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0v\bar{v})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+v\bar{v})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061 Notice détaillée - Notices similaires 2019-09-21 06:01
Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066 Notice détaillée - Notices similaires 2019-09-20 08:41
Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023 Notice détaillée - Notices similaires 2019-04-09 06:05
The new CGEM Inner Tracker and the new TIGER ASIC for the BES III Experiment / Marcello, Simonetta (INFN, Turin ; Turin U.) ; Alexeev, Maxim (INFN, Turin ; Turin U.) ; Amoroso, Antonio (INFN, Turin ; Turin U.) ; Baldini Ferroli, Rinaldo (Frascati ; Beijing, Inst. High Energy Phys.) ; Bertani, Monica (Frascati) ; Bettoni, Diego (INFN, Ferrara) ; Bianchi, Fabrizio Umberto (INFN, Turin ; Turin U.) ; Calcaterra, Alessandro (Frascati) ; Canale, N (INFN, Ferrara) ; Capodiferro, Manlio (Frascati ; INFN, Rome) et al. A new detector exploiting the technology of Gas Electron Multipliers is under construction to replace the innermost drift chamber of BESIII experiment, since its efficiency is compromised owing the high luminosity of Beijing Electron Positron Collider. The new inner tracker with a cylindrical shape will deploy several new features. [...] SISSA, 2018 - 4 p. - Published in : PoS EPS-HEP2017 (2017) 505 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.505 Notice détaillée - Notices similaires 2019-04-09 06:05
CaloCube: a new homogenous calorimeter with high-granularity for precise measurements of high-energy cosmic rays in space / Bigongiari, Gabriele (INFN, Pisa)/Calocube The direct observation of high-energy cosmic rays, up to the PeV region, will depend on highly performing calorimeters, and the physics performance will be primarily determined by their acceptance and energy resolution. Thus, it is fundamental to optimize their geometrical design, granularity, and absorption depth, with respect to the total mass of the apparatus, probably the most important constraints for a space mission. Furthermore, a calorimeter based space experiment can provide not only flux measurements but also energy spectra and particle identification to overcome some of the limitations of ground-based experiments. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 481 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.481
|
Hi, please help:
\({{x+2}\over x^2-2x}-{{2}\over x-2}=-1\)
i need to have X solved...i have no idea...thank you kindly..
solve for X \dfrac{x+2}{x^2-2x} - \dfrac{2}{x-2}=-1
\(\dfrac{x+2}{x^2-2x} - \dfrac{2}{x-2}=-1\)
\(\begin{array}{|rcll|} \hline \dfrac{x+2}{x^2-2x} - \dfrac{2}{x-2} &=& -1 \quad | \quad x-2 \ne 0 \Rightarrow x \ne 2 \\\\ && \quad | \quad x^2-2x = x(x-2) \ne 0 \Rightarrow x \ne 0,\ x \ne 2 \\\\ \dfrac{(x+2)(x-2)-2(x^2-2x)}{(x^2-2x)(x-2)} &=& -1 \\\\ (x+2)(x-2)-2(x^2-2x) &=& -(x^2-2x)(x-2) \\ x^2-4-2x^2+4x &=& -(x^2-2x)(x-2) \\ -x^2+4x-4 &=& -(x^2-2x)(x-2) \quad | \quad \cdot (-1) \\ x^2-4x+4 &=& (x^2-2x)(x-2) \\ (x-2)(x-2) &=& (x^2-2x)(x-2) \quad | \quad : (x-2) \\ x-2 &=& x^2-2x \\ x^2-2x &=& x-2 \\ x^2-3x+2 &=& 0 \\ (x-1)(x-2) &=& 0 \\\\ x = 1 && x \ne 2 \\ \hline \end{array}\)
Heureka,
thank you very much, although I fail to understand something:
why can't x be equal to 2?
If you look closely at the denominators: if x = 2, then x − 2 = 2 − 2 = 0 (and x² − 2x = 0 as well). You should know by now that you cannot divide by zero, because division by zero is undefined.
Solve for x:
(x + 2)/(x^2 - 2 x) - 2/(x - 2) = -1
Bring (x + 2)/(x^2 - 2 x) - 2/(x - 2) together over the common denominator x^2 - 2x = x(x - 2); the left side simplifies to -1/x:
-1/x = -1
Take the reciprocal of both sides:
-x = -1
Multiply both sides by -1:
x = 1
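Both derivations can be checked numerically. Here is a small sketch using Python's exact rational arithmetic, which also confirms why x = 0 and x = 2 are excluded:

```python
from fractions import Fraction

def lhs(x):
    """Left-hand side (x+2)/(x^2-2x) - 2/(x-2), in exact arithmetic."""
    x = Fraction(x)
    return (x + 2) / (x ** 2 - 2 * x) - 2 / (x - 2)

assert lhs(1) == -1          # x = 1 solves the equation

# x = 0 and x = 2 make a denominator vanish, so they are excluded:
for bad in (0, 2):
    try:
        lhs(bad)
        raise AssertionError("expected a division by zero")
    except ZeroDivisionError:
        pass
print("x = 1 is the only admissible solution")
```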
|
I think there are some important things to remark concerning what you wrote above:
To a). Your calculation is absolutely correct, but since the question as you posted it in your comment speaks about explaining an "obvious" fact in "a few sentences", I suspect it aimed for something more like the following (I'm not entirely sure, though, as the ordering of the given subtasks seems to disagree with my interpretation):
Suppose we already know that $f(x)=x^4-2$ is irreducible over $\mathrm{GF}(5)$ (part d)). This would tell us that the degree of $\alpha$ over $\mathrm{GF}(5)$ is $4$ (part c)), so $K:=\mathrm{GF}(5)(\alpha)$ is a field with $5^4=625$ elements. Its multiplicative group $K^\times=K\setminus\{0\}$ has order $\lvert K^\times \rvert=624$. So $a^{624}=1$ for any $a\in K^\times$, in particular $$a^{625}=a\cdot a^{624}=a\cdot 1=a,\quad\text{for any }a\in K^\times$$
Remark: Obviously, this equality holds for $a=0$ as well, so that every $a\in K$ is a root of $x^{625}-x\in\mathrm{GF}(5)[x]$. Considering that a polynomial of degree $n$ over a field can have only a maximum of $n$ roots (in some algebraic closure), one thus gets that every finite field $L$ of characteristic $p$ is a splitting field of $x^{\lvert L\rvert}-x$ over $\mathrm{GF}(p)$.
To c). Here I am not sure what the first line of your solution is used for. And later on, I think it would have been more precise to explicitly give some $\mathrm{GF}(5)$-basis of $\mathrm{GF}(5)(\alpha)$; you might for example prove that $\{1,\alpha,\alpha^2,\alpha^3\}$ is such a basis (your argument already shows it is generating, so you only have to use the irreducibility of $f$ to prove linear independence), hence $$[\mathrm{GF}(5)(\alpha):\mathrm{GF}(5)]=\dim_{\mathrm{GF}(5)}(\mathrm{GF}(5)(\alpha))=4.$$
To d). How exactly did you end up with only checking there is no $a\in \mathrm{GF}(5)$ with $a^2=2$ to ensure that $f$ has no irreducible factor of degree $2$? You were definitely right about checking for roots to exclude factors of degree $1$, but the point of excluding factors of degree $2$ could need some supplementation, I think. Basically, there are two natural ways to do this (unfortunately both are a bit lengthy):
As $f$ is monic, you may without loss of generality assume both degree $2$ factors monic. So you could multiply out the assumption $$f(x)=(x^2+ax+b)(x^2+cx+d)$$compare the coefficients and try to solve for $a,b,c,d\in \mathrm{GF}(5)$, getting a contradiction.
As $\mathrm{GF}(5)$ consists of only five elements, there are just $5^2=25$ monic polynomials of degree $2$ in $\mathrm{GF}(5)[x]$. By testing for roots in $\mathrm{GF}(5)$ you might find and list all irreducible ones, and then, for any of these, test if it divides $f$.
To e). If I'm not mistaking what you wrote, this is an important misconception, I think! You could of course label $\alpha$ as $\sqrt[4]{2}$, which makes perfect sense, but the use of $i$ suggests you are thinking about complex numbers. However, the question happens in characteristic $5$! The procedure is similar, though. You already know one root of $f$ in $K=\mathrm{GF}(5)(\alpha)$, which is $\alpha$ (by definition). Now, what are the other roots? This is where the roots of unity come in. By definition, a $n$-th root of unity over a field $k$ (in some fixed algebraic closure of $k$) is nothing but a root of the polynomial $x^n-1\in k[x]$, that is, the roots of unity are the elements $\zeta $ with $\zeta ^n=1$. Now, if $\alpha$ is a root of $f$, that is $\alpha^4=2$, and $\zeta$ is a fourth root of unity over $\mathrm{GF}(5)$, then $$(\zeta\alpha)^4=\zeta^4\alpha^4=\alpha^4=2,$$so $\zeta\alpha$ is a root of $f$ as well, for any fourth root of unity $\zeta$. Now $x^4-1\in\mathrm{GF}(5)[x]$ is separable (as the characteristic $5$ does not divide $4$), so there are four different 4th roots of unity over $\mathrm{GF}(5)$, call them $1,\zeta,\eta,\tau$ (actually, the group of roots of unity is cyclic as a finite subgroup of a multiplicative group of a field, so we might as well write them as $1,\zeta,\zeta^2,\zeta^3$), and the four elements $\alpha,\zeta\alpha,\eta\alpha,\tau\alpha$ are all roots of $f$. As $f$ is of degree $4$, these are of course all roots. Now, what are the fourth roots of unity over $\mathrm{GF}(5)$? As the multiplicative group $\mathrm{GF}(5)^\times$ has order $4$, we immediately have $a^4=1$ for all $a\in \mathrm{GF}(5)^\times$ - that means, the $4$-th roots of unity are nothing but $1,2,3,4\in\mathrm{GF}(5)$, and the roots of $f$ in $\mathrm{GF}(5)(\alpha)$ are thus $$\alpha,2\alpha,3\alpha,4\alpha.$$
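A quick computational check of these facts about $\mathrm{GF}(5)$, sketched with Python integer arithmetic mod 5:

```python
p = 5

# Fermat: a^4 = 1 for every nonzero a in GF(5), so a^4 - 2 is never 0 mod 5
# and f(x) = x^4 - 2 has no root in GF(5).
assert all(pow(a, 4, p) == 1 for a in range(1, p))
assert all((pow(a, 4, p) - 2) % p != 0 for a in range(p))

# Consequently every nonzero element of GF(5) is a 4th root of unity:
fourth_roots_of_unity = [a for a in range(1, p) if pow(a, 4, p) == 1]
print(fourth_roots_of_unity)  # [1, 2, 3, 4]

# If alpha^4 = 2 in GF(625), then (c * alpha)^4 = c^4 * alpha^4 = 2 for each
# such c, so the roots of f are alpha, 2*alpha, 3*alpha, 4*alpha.
```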
|
Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source
College of Mathematics and Statistics, Chongqing University, Chongqing University, Chongqing 401331, China
$\left\{\begin{array}{llll}u_t=\Delta u-\chi_1\nabla\cdot( u\nabla w)+\mu_1u(1-u-a_1v),\quad &x\in \Omega,\quad t>0,\\v_t=\Delta v-\chi_2\nabla\cdot( v\nabla w)+\mu_2v(1-a_2u-v),\quad &x\in\Omega,\quad t>0,\\w_t=\Delta w- w+u+v,\quad &x\in\Omega,\quad t>0,\\\end{array}\right.$
Mathematics Subject Classification: Primary: 35B40, 92C17; Secondary: 35B65, 35K4.
Citation: Ke Lin, Chunlai Mu. Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6): 2233-2260. doi: 10.3934/dcdsb.2017094
[1]
Liangchen Wang, Yuhuan Li, Chunlai Mu.
Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source.
[2]
Pan Zheng, Chunlai Mu, Xuegang Hu.
Boundedness and blow-up for a chemotaxis system with generalized volume-filling effect and logistic source.
[3]
Shijie Shi, Zhengrong Liu, Hai-Yang Jin.
Boundedness and large time behavior of an attraction-repulsion chemotaxis model with logistic source.
[4]
Ling Liu, Jiashan Zheng.
Global existence and boundedness of solution of a parabolic-parabolic-ODE chemotaxis-haptotaxis model with (generalized) logistic source.
[5]
Chunhua Jin.
Global classical solution and stability to a coupled chemotaxis-fluid model with logistic source.
[6] [7]
Xie Li, Zhaoyin Xiang.
Boundedness in quasilinear Keller-Segel equations with nonlinear sensitivity and logistic source.
[8]
Masaaki Mizukami.
Boundedness and asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity.
[9] [10]
Abelardo Duarte-Rodríguez, Lucas C. F. Ferreira, Élder J. Villamizar-Roa.
Global existence for an attraction-repulsion chemotaxis fluid model with logistic source.
[11]
Giuseppe Viglialoro, Thomas E. Woolley.
Eventual smoothness and asymptotic behaviour of solutions to a chemotaxis system perturbed by a logistic growth.
[12]
Rachidi B. Salako, Wenxian Shen.
Spreading speeds and traveling waves of a parabolic-elliptic chemotaxis system with logistic source on $\mathbb{R}^N$.
[13]
Rachidi B. Salako, Wenxian Shen.
Existence of traveling wave solutions to parabolic-elliptic-elliptic chemotaxis systems with logistic source.
[14]
Rachidi B. Salako.
Traveling waves of a full parabolic attraction-repulsion chemotaxis system with logistic source.
[15] [16]
Monica Marras, Stella Vernier-Piro, Giuseppe Viglialoro.
Decay in chemotaxis systems with a logistic term.
[17]
Wei Mao, Liangjian Hu, Xuerong Mao.
Asymptotic boundedness and stability of solutions to hybrid stochastic differential equations with jumps and the Euler-Maruyama approximation.
[18]
Tomás Caraballo, Francisco Morillas, José Valero.
Asymptotic behaviour of a logistic lattice system.
[19] [20]
Tobias Black.
Global existence and asymptotic stability in a competitive two-species chemotaxis system with two signals.
|
I have the following homework question that I am struggling with. I have read the corresponding chapter from the book, but no guidance there.
Consider a linked list $X: X_1 \to X_2 \to X_3 \ldots$. Assume that the cost of examining a particular element $X_i$ is $C_i$. Note that to examine $X_i$, one needs to scan through all elements in front of $X_i$. Let $P_i$ be the probability of searching for element $X_i$, so the total cost for all searches is $$ \sum_{j=1}^{n} \left( P_j \cdot \sum_{i=1}^{j} C_i \right) $$
Show that storing elements in non-increasing order of $P_i/C_i$ does not necessarily minimize the total cost.
Show that storing elements in non-decreasing order of $P_i$ does not necessarily minimize the total cost.
Any help and direction how to approach the problem will be highly appreciated.
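Not part of the assignment, but for small instances you can brute-force the cost formula to test conjectured orderings; the 3-element instance below is made up for illustration.

```python
import itertools

# Expected total search cost of a storage order: sum over stored positions j
# of P[j] * (prefix sum of examination costs C up to and including j).
def total_cost(order, P, C):
    cost, prefix = 0.0, 0.0
    for i in order:
        prefix += C[i]
        cost += P[i] * prefix
    return cost

# Hypothetical instance: access probabilities P and examination costs C.
P = [0.5, 0.3, 0.2]
C = [3.0, 1.0, 1.0]
best = min(itertools.permutations(range(3)), key=lambda o: total_cost(o, P, C))
```

Comparing `total_cost` over all $3!$ orderings against the order produced by a candidate rule (sorting by $P_i$ alone, by $C_i$ alone, or by $P_i/C_i$) is a quick way to hunt for the counterexamples the problem asks for.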
|
We now turn our attention to finding derivatives of inverse trigonometric functions. These derivatives will prove invaluable in the study of integration later in this text. The derivatives of inverse trigonometric functions are quite surprising in that their derivatives are actually algebraic functions. Previously, derivatives of algebraic functions have proven to be algebraic functions and derivatives of trigonometric functions have been shown to be trigonometric functions. Here, for the first time, we see that the derivative of a function need not be of the same type as the original function.
Example \(\PageIndex{4A}\): Derivative of the Inverse Sine Function
Use the inverse function theorem to find the derivative of \(g(x)=\sin^{−1}x\).
Solution
Since for \(x\) in the interval \([−\dfrac{π}{2},\dfrac{π}{2}],f(x)=\sin x\) is the inverse of \(g(x)=\sin^{−1}x\), begin by finding \(f′(x)\). Since
\[f′(x)=\cos x \nonumber\]
and
\[f′(g(x))=\cos ( \sin^{−1}x)=\sqrt{1−x^2} \nonumber\]
we see that
\[g′(x)=\dfrac{d}{dx}(\sin^{−1}x)=\dfrac{1}{f′(g(x))}=\dfrac{1}{\sqrt{1−x^2}} \nonumber\]
Analysis
To see that \(\cos(\sin^{−1}x)=\sqrt{1−x^2}\), consider the following argument. Set \(\sin^{−1}x=θ\). In this case, \(\sin θ=x\) where \(−\dfrac{π}{2}≤θ≤\dfrac{π}{2}\). We begin by considering the case where \(0<θ<\dfrac{π}{2}\). Since \(θ\) is an acute angle, we may construct a right triangle having acute angle \(θ\), a hypotenuse of length \(1\) and the side opposite angle \(θ\) having length \(x\). From the Pythagorean theorem, the side adjacent to angle \(θ\) has length \(\sqrt{1−x^2}\). This triangle is shown in Figure \(\PageIndex{2}\). Using the triangle, we see that \(\cos(\sin^{−1}x)=\cos θ=\sqrt{1−x^2}\).
In the case where \(−\dfrac{π}{2}<θ<0\), we make the observation that \(0<−θ<\dfrac{π}{2}\) and hence
\(\cos(\sin^{−1}x)=\cos θ=\cos(−θ)=\sqrt{1−x^2}\).
Now if \(θ=\dfrac{π}{2}\) or \(θ=−\dfrac{π}{2}\), then \(x=1\) or \(x=−1\), and since in either case \(\cos θ=0\) and \(\sqrt{1−x^2}=0\), we have
\(\cos(\sin^{−1}x)=\cosθ=\sqrt{1−x^2}\).
Consequently, in all cases,
\[\cos(\sin^{−1}x)=\sqrt{1−x^2}.\]
Example \(\PageIndex{4B}\): Applying the Chain Rule to the Inverse Sine Function
Apply the chain rule to the formula derived in Example \(\PageIndex{4A}\) to find the derivative of \(h(x)=\sin^{−1}(g(x))\) and use this result to find the derivative of \(h(x)=\sin^{−1}(2x^3).\)
Solution
Applying the chain rule to \(h(x)=\sin^{−1}(g(x))\), we have
\(h′(x)=\dfrac{1}{\sqrt{1−(g(x))^2}}g′(x)\).
Now let \(g(x)=2x^3,\) so \(g′(x)=6x^2.\) Substituting into the previous result, we obtain
\(h′(x)=\dfrac{1}{\sqrt{1−4x^6}}⋅6x^2=\dfrac{6x^2}{\sqrt{1−4x^6}}\)
Exercise \(\PageIndex{4}\)
Use the inverse function theorem to find the derivative of \(g(x)=\tan^{−1}x\).
Hint
The inverse of \(g(x)\) is \(f(x)=\tan x\). Use Example \(\PageIndex{4A}\) as a guide.
Answer
\(g′(x)=\dfrac{1}{1+x^2}\)
The derivatives of the remaining inverse trigonometric functions may also be found by using the inverse function theorem. These formulas are provided in the following theorem.
Derivatives of Inverse Trigonometric Functions
\[\begin{align} \dfrac{d}{dx}\sin^{−1}x&=\dfrac{1}{\sqrt{1−(x)^2}} \label{trig1} \\[4pt] \dfrac{d}{dx}\cos^{−1}x&=\dfrac{−1}{\sqrt{1−(x)^2}} \label{trig2} \\[4pt] \dfrac{d}{dx}\tan^{−1}x&=\dfrac{1}{1+(x)^2} \label{trig3} \\[4pt] \dfrac{d}{dx}\cot^{−1}x&=\dfrac{−1}{1+(x)^2} \label{trig4} \\[4pt] \dfrac{d}{dx}\sec^{−1}x&=\dfrac{1}{|x|\sqrt{(x)^2−1}} \label{trig5} \\[4pt] \dfrac{d}{dx}\csc^{−1}x&=\dfrac{−1}{|x|\sqrt{(x)^2−1}} \label{trig6} \end{align}\]
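As a numerical sanity check (added here, not part of the original text), a central-difference approximation can be compared against the first three formulas; Python's `math` module supplies `asin`, `acos`, and `atan`.

```python
import math

# Central-difference derivative, compared against the closed forms above.
def numeric_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.3
checks = [
    (math.asin, 1 / math.sqrt(1 - x**2)),   # d/dx sin^{-1} x
    (math.acos, -1 / math.sqrt(1 - x**2)),  # d/dx cos^{-1} x
    (math.atan, 1 / (1 + x**2)),            # d/dx tan^{-1} x
]
for f, exact in checks:
    assert abs(numeric_deriv(f, x) - exact) < 1e-8
```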
Example \(\PageIndex{5A}\): Applying Differentiation Formulas to an Inverse Tangent Function
Find the derivative of \(f(x)=\tan^{−1}(x^2).\)
Solution
Let \(g(x)=x^2\), so \(g′(x)=2x\). Substituting into Equation \ref{trig3}, we obtain
\(f′(x)=\dfrac{1}{1+(x^2)^2}⋅(2x).\)
Simplifying, we have
\(f′(x)=\dfrac{2x}{1+x^4}\).
Example \(\PageIndex{5B}\): Applying Differentiation Formulas to an Inverse Sine Function
Find the derivative of \(h(x)=x^2 \sin^{−1}x.\)
Solution
By applying the product rule, we have
\(h′(x)=2x\sin^{−1}x+\dfrac{x^2}{\sqrt{1−x^2}}\)
Exercise \(\PageIndex{5}\)
Find the derivative of \(h(x)=\cos^{−1}(3x−1).\)
Hint
Use Equation \ref{trig2} with \(g(x)=3x−1\).
Answer
\(h′(x)=\dfrac{−3}{\sqrt{6x−9x^2}}\)
Example \(\PageIndex{6}\): Applying the Inverse Tangent Function
The position of a particle at time \(t\) is given by \(s(t)=\tan^{−1}\left(\dfrac{1}{t}\right)\) for \(t≥ \tfrac{1}{2}\). Find the velocity of the particle at time \( t=1\).
Solution
Begin by differentiating \(s(t)\) in order to find \(v(t)\). Thus,
\(v(t)=s′(t)=\dfrac{1}{1+\left(\dfrac{1}{t}\right)^2}⋅\dfrac{−1}{t^2}\).
Simplifying, we have
\(v(t)=−\dfrac{1}{t^2+1}\).
Thus, \(v(1)=−\dfrac{1}{2}.\)
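A quick numerical check of this example (added here, not in the original): differentiating \(s(t)=\tan^{−1}(1/t)\) by central differences reproduces \(v(1)=−\tfrac12\).

```python
import math

def s(t):
    # position s(t) = arctan(1/t)
    return math.atan(1 / t)

def v(t, h=1e-6):
    # central-difference approximation to s'(t)
    return (s(t + h) - s(t - h)) / (2 * h)
```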
Exercise \(\PageIndex{6}\)
Find the equation of the line tangent to the graph of \(f(x)=\sin^{−1}x\) at \(x=0.\)
Hint
\(f′(0)\) is the slope of the tangent line.
Answer
\(y=x\)
|
Answer
The solution set is $$\{60^\circ, 120^\circ\}$$
Work Step by Step
$$2\sin\theta-\sqrt3=0$$ over the interval $[0^\circ,360^\circ)$. Carrying out the usual algebra: $$\sin\theta=\frac{\sqrt3}{2}$$ $$\theta=\sin^{-1}\frac{\sqrt3}{2}$$ Over the interval $[0^\circ,360^\circ)$, there are two values of $\theta$ for which $\sin\theta=\frac{\sqrt3}{2}$, namely $60^\circ$ and $120^\circ$. In other words, $$\theta=\{60^\circ, 120^\circ\}$$ which is the solution set of the problem.
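The same two solutions can be recovered numerically (a sketch, not part of the original solution): the inverse sine gives the first-quadrant angle, and its supplement gives the second.

```python
import math

# Solve 2*sin(theta) - sqrt(3) = 0 over [0°, 360°).
principal = math.degrees(math.asin(math.sqrt(3) / 2))   # first-quadrant root
solutions = sorted({round(principal, 9), round(180 - principal, 9)})
```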
|
Hint Applet (Trigonometric Substitution) Sample Problem
A standard WeBWorK PG file with an embedded applet has six sections:
1. A tagging and description section, that describes the problem for future users and authors,
2. An initialization section, that loads required macros for the problem,
3. A problem set-up section that sets variables specific to the problem,
4. An Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet),
5. A text section, that gives the text that is shown to the student, and
6. An answer, hint and solution section, that specifies how the answer(s) to the problem is(are) marked for correctness, gives hints after a given number of tries and gives a solution that may be shown to the student after the problem set is complete.
The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above.
Other applet sample problems: GraphLimit Flash Applet Sample Problem GraphLimit Flash Applet Sample Problem 2 Derivative Graph Matching Flash Applet Sample Problem Hint Applet (Trigonometric Substitution) Sample Problem
PG problem file Explanation ##DESCRIPTION ##KEYWORDS('integrals', 'trigonometric','substitution') ## DBsubject('Calculus') ## DBchapter('Techniques of Integration') ## DBsection('Trigonometric Substitution') ## Date('8/20/11') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2010') ## AuthorText1('') ## Section1('') ## Problem1('20') ##ENDDESCRIPTION ######################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ########################################
This is the tagging and description section.
The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code.
All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", "parserFormulaUpToConstant.pl", );
This is the initialization section, which loads the macros required by the problem.
# Set up problem TEXT(beginproblem()); $showPartialCorrectAnswers = 1; $a = random(2,9,1); $a2 = $a*$a; $a3 = $a2*$a; $a4 = $a2*$a2; $a4_3 = 3*$a4; $a2_5 = 5*$a2; $funct = FormulaUpToConstant("-sqrt{$a2-x^2}/{x}-asin({x}/{$a})");
This is the problem set-up section, which sets the variables specific to this problem.
################################### # Create link to applet ################################### $appletName = "trigSubWW"; $applet = FlashApplet( codebase => findAppletCodebase("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, ); ################################### # Configure applet ################################### $applet->configuration(qq {<xml><trigString>sin</trigString></xml>}); $applet->initialState(qq {<xml><trigString>sin</trigString></xml>}); TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer.<br/> It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT
This is the Applet link section, which inserts the applet and configures it.
Those portions of the code that begin the line with # are comments.
You must include the section that follows
The lines below warn students who are using Internet Explorer that another browser should be used to view the page:
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT
The text between the BEGIN_TEXT and END_TEXT markers is the text that is shown to the student.
BEGIN_TEXT Evaluate the indefinite integral. $BR \[ \int\frac{\sqrt{$a2 - x^2}}{x^2}dx \] $BR \{ans_rule( 60) \} END_TEXT ################################## Context()->texStrings;
This is the text section, which gives the text that is shown to the student.
################################### # # Answers # ## answer evaluators ANS( $funct->cmp() ); TEXT($PAR, $BBOLD, $BITALIC, "Hi $studentLogin, If you don't get this in 5 tries I'll give you a hint with an applet to help you out.", $EITALIC, $EBOLD, $PAR); $showHint=5; Context()->normalStrings; TEXT(hint( $PAR, MODES(TeX=>'object code', HTML=>$applet->insertAll( debug =>0, reinitialize_button => 0, includeAnswerBox=>0, )) )); ################################## Context()->texStrings; SOLUTION(EV3(<<'END_SOLUTION')); $BBOLD Solution: $EBOLD $PAR To evaluate this integral use a trigonometric substitution. For this problem use the sine substitution. \[x = {$a}\sin(\theta)\] $BR$BR Before proceeding note that \(\sin\theta=\frac{x}{$a}\), and \(\cos\theta=\frac{\sqrt{$a2-x^2}}{$a}\). To see this, label a right triangle so that the sine is \(x/$a\). We will have the opposite side with length \(x\), and the hypotenuse with length \($a\), so the adjacent side has length \(\sqrt{$a2-x^2}\). $BR$BR With the substitution \[x = {$a}\sin\theta\] \[dx = {$a}\cos\theta \; d\theta\] $BR$BR Therefore: \[\int\frac{\sqrt{$a2 - x^2}}{x^2}dx= \int \frac{{$a}\cos\theta\sqrt{$a2 - {$a2}\sin^2\theta}} {{$a2}\sin^2\theta} \; d\theta\] \[=\int \frac{\cos^2\theta}{\sin^2\theta} \; d\theta\] \[=\int \cot^2\theta \; d\theta\] \[=\int \csc^2\theta-1 \; d\theta\] \[=-\cot\theta-\theta+C\] $BR$BR Substituting back in terms of \(x\) yields: \[-\cot\theta-\theta+C =-\frac{\sqrt{$a2-x^2}}{x}-\sin^{-1}\left(\frac{x}{$a}\right)+C \] so \[ \int\frac{\sqrt{$a2 - x^2}}{x^2}dx =-\frac{\sqrt{$a2-x^2}}{x}-\sin^{-1}\left(\frac{x}{$a}\right)+C\] END_SOLUTION Context()->normalStrings; ################################## ENDDOCUMENT();
This is the answer, hint and solution section. It specifies how the answer is marked for correctness, releases the hint (here, the applet) after five tries, and gives a solution that may be shown to the student after the problem set is complete.
|
Perhaps it would make more sense if it were written as follows:
$$\forall x(hasTail(x) \rightarrow dog(x))$$
Your statement is equivalent to the above statement: $$\begin{align} \lnot \lnot \forall x(hasTail(x) \rightarrow dog(x))&\equiv \lnot \exists x( \lnot(hasTail(x) \rightarrow dog(x)))\tag{1}\\ \\ & \equiv \lnot \exists x (\lnot(\lnot hasTail(x) \lor dog(x)))\tag{2}\\ \\ &\equiv \lnot \exists x(hasTail(x) \land \lnot dog(x))\tag{3}\\ \\&\equiv \lnot \exists x (\lnot dog(x) \land hasTail(x))\tag{4}\end{align}$$
$(1)$ follows from the equivalence $\lnot \forall x P(x) \equiv \exists x (\lnot P(x))$
$(2)$ follows from the equivalence $p \rightarrow q\equiv \lnot p \lor q$
$(3)$ follows from DeMorgan's Law
$(4)$ because of the commutativity of $\land$
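One can also machine-check the equivalence on small finite domains (an illustration I'm adding, not part of the original answer), by enumerating every assignment of the two predicates:

```python
from itertools import product

def equivalent_on_domain(n):
    """True iff ∀x(hasTail(x) → dog(x)) agrees with ¬∃x(¬dog(x) ∧ hasTail(x))
    for every assignment of the two predicates on a domain of size n."""
    for tails, dogs in product(product([False, True], repeat=n), repeat=2):
        lhs = all((not tails[x]) or dogs[x] for x in range(n))
        rhs = not any((not dogs[x]) and tails[x] for x in range(n))
        if lhs != rhs:
            return False
    return True
```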
|
The setup is as in this question:
Given a norm $N$ over ${\bf M}_n(\mathbb C)$, it is a natural question to find the best constant $C_N$ such that $$N([A,B])\le C_N N(A)N(B),\qquad\forall A,B\in{\bf M}_n(\mathbb C).$$
Equivalently, $C_N$ is the maximum of $N(AB-BA)$ provided that $N(A)=N(B)=1$.
Given examples of $C_N$ are
$C_N=\sqrt{2}$ if $N$ is the Frobenius norm;
$C_N=2$ if $N$ is the operator norm $\| \cdot\|_2$;
$C_N=4$ if $N$ is the numerical radius $r(A)=\sup\limits_{x\ne0}\dfrac{|x^*Ax|}{\|x\|^2}$ (see this answer to an MO question).
if $N$ is the induced $p$-norm, defined for $1\le p\le\infty$ by $\|A\| _p = \sup \limits _{x \ne 0} \frac{\| A x\| _p}{\|x\|_p}$, we have $C_N=2$ for $p=\infty$ (with $\|A\|_\infty $ being just the maximum absolute row sum of the matrix). Indeed, the lower bound $2$ for $\|\cdot\|_\infty $ is obtained by taking e.g. $A=\begin{pmatrix} 1&0\\1&0\end{pmatrix}$ and $B=\begin{pmatrix} 0&1\\0&-1\end{pmatrix}$, and it should be easy to prove that $2$ is also the general upper bound for $\|\cdot\|_\infty $.
Similarly, $C_N=2$ for $p=1$ (with $\|A\|_1 $ being the maximum absolute column sum of the matrix).
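The $2\times2$ lower-bound example is easy to verify directly; here is a small pure-Python check (added for illustration) of the induced $\infty$-norm computation:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inf_norm(A):
    # induced ∞-norm: maximum absolute row sum
    return max(sum(abs(v) for v in row) for row in A)

A = [[1, 0], [1, 0]]
B = [[0, 1], [0, -1]]
AB, BA = matmul(A, B), matmul(B, A)
comm = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]
```

With `inf_norm(A) == inf_norm(B) == 1` and `inf_norm(comm) == 2`, the pair attains the claimed lower bound $C_N \ge 2$ for $\|\cdot\|_\infty$.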
Knowing that $C_N\equiv2$ for $p=1,2,\infty$, is it true that the same holds for the induced $p$-norms for all $p\ge1$?
If $N$ runs over all possible matrix norms, what is the range of $C_N$? In particular, is it bounded below and/or above?
(To avoid trivialities, let's keep it homogeneous by only considering "normalized" norms, i.e. require $N(I_n)=1$. This does not seem to be part of the standard definition of a norm.)
|
Electronic Journal of Probability
Electron. J. Probab., Volume 23 (2018), paper no. 9, 21 pp.
Approximation of smooth convex bodies by random polytopes
Abstract
Let $K$ be a convex body in $\mathbb{R} ^n$ and $f : \partial K \rightarrow \mathbb{R} _+$ a continuous, strictly positive function with $\int \limits _{\partial K} f(x) \mathrm{d} \mu _{\partial K}(x) = 1$. We give an upper bound for the approximation of $K$ in the symmetric difference metric by an arbitrarily positioned polytope $P_f$ in $\mathbb{R} ^n$ having a fixed number of vertices. This generalizes a result by Ludwig, Schütt and Werner [36]. The polytope $P_f$ is obtained by a random construction via a probability measure with density $f$. In our result, the dependence on the number of vertices is optimal. With the optimal density $f$, the dependence on $K$ in our result is also optimal.
Article information
Source: Electron. J. Probab., Volume 23 (2018), paper no. 9, 21 pp.
Dates: Received: 20 June 2017; Accepted: 22 December 2017; First available in Project Euclid: 12 February 2018
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1518426057
Digital Object Identifier: doi:10.1214/17-EJP131
Mathematical Reviews number (MathSciNet): MR3771746
Zentralblatt MATH identifier: 1395.52008
Citation
Grote, Julian; Werner, Elisabeth. Approximation of smooth convex bodies by random polytopes. Electron. J. Probab. 23 (2018), paper no. 9, 21 pp. doi:10.1214/17-EJP131. https://projecteuclid.org/euclid.ejp/1518426057
|
Let $m,n \in \mathbb{N}$ and $m \geq n \geq 2$ and $x_1,x_2,...x_n \in \mathbb{N}_{\geq 1}$ such as $x_1+x_2+...+x_n=m$. Find $\min P$ with $P= \sum_{i=1}^{n} x_i^2.$
This is a low-tech explanation of Joe Silverman's comment (but see also Geoff Robinson's response).
Proposition. The minimum is attained when the variables differ by at most $1$, i.e. when each $x_i$ is either $\lfloor m/n\rfloor$ or $\lceil m/n\rceil$. Proof. If $x_i-x_j\geq 2$, say, then replacing $x_i$ (resp. $x_j$) by $x_i-1$ (resp. $x_j+1$) yields a better $n$-tuple, because$$(x_i-1)+(x_j+1)=x_i+x_j\qquad\text{but}\qquad (x_i-1)^2+(x_j+1)^2<x_i^2+x_j^2.$$ Remark. Note that the proposition determines a unique $n$-tuple for the minimum, namely the number of $\lfloor m/n\rfloor$'s and $\lceil m/n\rceil$'s is determined by the residue of $m$ modulo $n$.
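The proposition is easy to confirm on small cases by brute force (a check I'm adding for illustration):

```python
from itertools import combinations_with_replacement

def balanced_value(m, n):
    # balanced n-tuple: with m = q*n + r, use r parts of q+1 and n-r parts of q
    q, r = divmod(m, n)
    return r * (q + 1) ** 2 + (n - r) * q ** 2

def brute_min(m, n):
    # exhaustive minimum of sum x_i^2 over integers x_i >= 1 with sum m
    best = None
    for xs in combinations_with_replacement(range(1, m - n + 2), n):
        if sum(xs) == m:
            v = sum(x * x for x in xs)
            best = v if best is None else min(best, v)
    return best
```

For example, `brute_min(7, 3)` and `balanced_value(7, 3)` both give 17, from the tuple $(2,2,3)$.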
It is hard to give a precise answer, since some choices of $m,n$ are easier than others. Lagrange's identity gives $\left( \sum_{i} x_{i}\right)^{2} + \sum_{i<j} (x_i -x_j)^{2} = n P$, which is relevant.
This is an integer quadratic programming problem; see this 2014 paper (by Christian Bliek, Pierre Bonami, and Andrea Lodi) for a survey of what is known. (This problem is positive definite, so not as hard as the hardest case.)
|
The notation $M = \int \vec F\cdot d\vec r$ elides an important detail: the path along which $F$ is to be integrated. The usual construction is to integrate along a path that starts at a fixed point $p$ and ends at a variable point $x$. But there are two trouble spots with this construction.
The first is not so bad. How do we know that there is a path from $p$ to each point $x$? So we have to add the assumption that the domain of $\vec F$ is connected, or technically path-connected. If the domain is connected, we at least have an $M(x)$ for each $x$.
The second is trickier. $M$ needs to be a function, and it should depend only on $x$, not on the specific path one takes from $p$ to $x$. Otherwise you and I could accidentally pick different paths and get different results, and we wouldn't know which one to call $M(x)$. So $\vec F$ needs the property that line integrals are independent of path. This is equivalent to the property that line integrals around closed loops are zero.
If line integrals around closed loops are zero, it follows from Stokes's theorem that $\nabla \times \vec F = \vec 0$. And the implication almost goes the other way, too, except for the possibility that the path wraps around a “hole” in the domain. Try$$ \vec F(x,y,z) = \left<- \frac{y}{x^2+y^2}, \frac{x}{x^2+y^2},0\right>$$Then $\nabla \times \vec F = \vec 0$. However, the domain of $\vec F$ excludes the $z$-axis, where $x=y=0$. And in fact, if we integrate $\vec F$ around the unit circle in the $xy$-plane, we get $2\pi$, not $0$. So this $\vec F$ is not conservative, despite having a curl of zero.
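A numerical line integral confirms this (my illustration, not part of the original answer): integrating this field around the unit circle returns $2\pi$ rather than $0$.

```python
import math

def circulation(n=10000):
    # Riemann sum for the line integral of F around the unit circle,
    # parameterized by r(t) = (cos t, sin t, 0).
    total = 0.0
    dt = 2 * math.pi / n
    for k in range(n):
        t = k * dt
        x, y = math.cos(t), math.sin(t)
        r2 = x * x + y * y
        fx, fy = -y / r2, x / r2
        total += (fx * -math.sin(t) + fy * math.cos(t)) * dt  # F · dr
    return total
```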
So we rule out that possibility and require that the domain of $\vec F$ be not just connected but simply connected: that any closed loop in the domain is contractible to a point. Picture a slipknot closing up. If the domain has no holes, a loop in the domain can contract to a point, but if there is a hole, a loop around that hole can't move “past” it.
So if $\vec F$ has a curl of zero, and the domain of $\vec F$ is connected and simply connected, your construction does result in a potential function and proves that $\vec F$ is conservative.
It's worth noting that the sufficient conditions are not just differential ($\nabla \times \vec F = \vec 0$), but topological (the domain is connected and simply connected). To follow up on this point: if we have a domain and we want to know more about its shape, we might try to construct vector fields with zero curl which aren't conservative. If we could find one, we would know the domain is not simply connected. This kind of study is called algebraic topology.
|
I have in mind $$\max\left( \sqrt{(x-1) (y-x)}+\sqrt{(1-x) (7-y)}+\sqrt{(y-7) (x-y)}\right)$$ for $x\geq -2\land x\leq 3\land y\geq 0\land y\leq 11 $, of course, taking into account real values of the roots only. In order to avoid complex numbers I consider
f=(Sqrt[(x - 1)*(y - x)] + Sqrt[(7 - y)*(1 - x)] + Sqrt[(x - y)*(y - 7)])*Boole[(x - 1)*(y - x) >= 0]* Boole[(7 - y)*(1 - x) >= 0]*Boole[(x - y)*(y - 7) >= 0]
Unfortunately, its plot does not give me any hint.
Plot3D[f, {x, -2, 3}, {y, 0, 11}]
Both
NMaximize[{f, x >= -2 && x <= 3 && y >= 0 && y <= 11},{x, y},AccuracyGoal-> 3,MaxIterations->200]
NMaximize::cvmit: Failed to converge to the requested accuracy or precision within 200 iterations. {0., {x -> 3., y -> 11.}}
and
FindMaximum[{f, x >= -2 && x <= 3 && y >= 0 && y <= 11}, {x, y}]
{0., {x -> 0.914707, y -> 9.31719}}
do not produce a correct answer in view of
f /. {x -> 1, y -> 3}
$ 2 \sqrt{2}$
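Not a Mathematica fix, but a brute-force cross-check one can run (a Python sketch using the same convention as the Boole factors: any point with a negative radicand is treated as infeasible). On a grid containing the line $x=1$ it finds values up to $3$, attained at $(x,y)=(1,4)$, where the radicands are $0,0,9$ — larger than the $2\sqrt2$ attained at $(1,3)$.

```python
import math

def f(x, y):
    terms = [(x - 1) * (y - x), (1 - x) * (7 - y), (y - 7) * (x - y)]
    if any(t < -1e-12 for t in terms):
        return float("-inf")   # outside the real-valued region
    return sum(math.sqrt(max(t, 0.0)) for t in terms)

# 0.025-step grid over -2 <= x <= 3, 0 <= y <= 11
best = max(
    (f(-2 + 0.025 * i, 0.025 * j), -2 + 0.025 * i, 0.025 * j)
    for i in range(201) for j in range(441)
)
```

This suggests both solver runs above undershoot the true maximum: the `0.` from `NMaximize` and the asker's $2\sqrt2$ are both below $f(1,4)=3$.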
|
Let $M^3$ be a complete Riemannian manifold and $\Sigma ^2\subset M^3$ an embedded minimal compact surface. Consider the normal variation $\phi: \Sigma \times \Bbb{R}\to M$ given by
$$\phi(p,t)=\exp_p(tN(p)),$$
where $N$ is a normal vector field along $\Sigma.$
I want to show that:
Lemma: There is $\delta>0$ such that $\phi:\Sigma\times [0,\delta) \to M$ is an immersion and even an embedding on $\Sigma\times [0,\delta)$.
I tried a sketch with tubular neighborhood techniques, but couldn't conclude anything in this direction. In fact, I know that there is a tubular neighborhood of $\Sigma$, but I don't know how to immerse it in $M$.
Anyone has a little help?
Thanks so much.
|
Solve system of equations
$\begin{cases} 3x^2 + \sin 2y - \cos y - 3 = 0 \\ x^3 - 3x - \sin y - \cos 2y + 3 = 0 \end{cases}$
I tried the substitution $x = \cos t$ or something similar, but got nowhere.
The equations are $x^2=A$ and $x^3-3x=B$ where $$A=\frac{3+\cos y - \sin 2y}{3},\quad B=\sin y +\cos 2y - 3.$$ Now note that $x^3-3x=x(x^2-3),$ so one can substitute the value from $x^2=A$ here, obtaining $x(A-3)=B,$ that is, $x=B/(A-3).$ Then putting this $x$ back into $x^2=A$ gives the relation, involving only $A,B,$ that $$B^2=A(A-3)^2.\tag{1}$$ Here $A,B$ are trig functions of $y$ admitting period $2\pi,$ and a root finder indicates there are four solutions for $y$ namely $y=0.066078,\ 0.523599,\ 2.61799,\ 3.06375.$ In degrees, the second and third appear to be 30 and 150 respectively, while the first and last seem about 3.785 and 175.54 degrees, not that "nice" as solutions. (I haven't checked analytically whether the 30 and 150 degree solutions are exactly correct.)
Anyway, for each of the four possible $y$ values, one can compute its corresponding $x$ from $x=B/(A-3)$
Added: I checked the $y$ being 30 and 150 degree solutions and they are exact, each leading to $x=1.$ That is, in radians, two of the four solutions are $(x,y)=(1,\pi/6),\ (1,5\pi/6).$ Also the two other solutions for $y$ appear to add up to $\pi$ and each of those leads to the same value of $x$ for the other two pairs $(x,y)$ of solutions, though these others are not particularly nice. Going with the equation $(1)$ and trying to say put all in terms of cosine gives a high degree polynomial to solve, I think degree 8 once one uses $\sin^2 y + \cos^2 y=1$ to eliminate the other trig function.
Edit: Correction-- the "other two $y$ solutions" (besides $\pi/6,5\pi/6$) in fact do not add to $\pi$ but to about $3.1298.$ Furthermore their corresponding $x$ values are about $1.1352$ from the smaller $y$ and $0.8481$ from the larger. I noticed this while trying (unsuccessfully) to show that $B^2-A(A-3)^2$ was symmetric around $\pi/2$; it wasn't.
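One can confirm the exact pairs numerically (a check added here, not part of the answer): the residuals of both equations vanish at $(x,y)=(1,\pi/6)$ and $(1,5\pi/6)$.

```python
import math

def residuals(x, y):
    # left-hand sides of the two equations; both are 0 at a solution
    r1 = 3 * x**2 + math.sin(2 * y) - math.cos(y) - 3
    r2 = x**3 - 3 * x - math.sin(y) - math.cos(2 * y) + 3
    return r1, r2

for y in (math.pi / 6, 5 * math.pi / 6):
    assert all(abs(r) < 1e-12 for r in residuals(1.0, y))
```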
Invoking the double-angle formulas $\sin 2 y = 2 \sin y \cos y$ and $\cos 2 y = 1 - 2 \sin^2 y$, and abbreviating "$\sin y$" and "$\cos y$" by "$s$" and "$c$", we can manipulate the equations into these forms $$\begin{align} c \;( 2 s - 1 ) &= - 3 ( x^2 - 1 ) &= (x-1)\;P \tag{1}\\ s \;( 2 s - 1 ) &= -(x - 1 )^2 ( x + 2 ) &= (x-1)\;Q \tag{2} \end{align}$$ where $P := -3(x+1)$ and $Q := -(x-1)(x+2)$. From here, we see a family of solutions that make each side of $(1)$ and $(2)$ vanish simultaneously:
$x=1$, and all $y$ such that $\sin y = \frac12$ (in particular, $y = \frac\pi6$ and $y = \frac{5\pi}{6}$)
We also see that there are no other solutions with either $x=1$ or $\sin y = \frac12$. Moreover, there are no other solutions with either $P=0$ or $Q=0$: if $x=-1$, then $(2)$ gives an impossible relation, $s(2s-1)=-4$; likewise, if $x=-2$, then $(1)$ gives $c(2s-1) = -9$. Consequently, we can write $$\frac{c}{P} = \frac{x-1}{2s-1} = \frac{s}{Q}$$ which implies $$c = \pm\frac{P}{R}\qquad s = \pm\frac{Q}{R}$$ where $R := \sqrt{P^2+Q^2}$ and the "$\pm$"s match.
Substituting into $(1)$ (or $(2)$), then,$$2 Q \mp R = ( x - 1 ) R^2 \quad\to\quad \left(\;2 Q - ( x - 1 )R^2\;\right)^2 = R^2$$Rewriting $Q$ and $R^2 = P^2 + Q^2$ in terms of $x$, we have$$x^{10}+ 2 x^9 + 9 x^8 + 28 x^7 + 38 x^6 + 48 x^5 + 73 x^4 - 118 x^3 - 345 x^2 - 48 x + 276 = 0$$for which
Mathematica finds two real roots
$$x_1 \approx 1.1352 \qquad x_2 \approx 0.848153$$
The corresponding values of $Q/P$ give the tangents of the associated $y$s: $$\tan y_1 = \frac{-0.423884}{-6.4056} = \phantom{-}0.0661739$$ $$\tan y_2 = \frac{\phantom{-}0.432483}{-5.54446} = -0.0780028$$
One can check that
Only first-quadrant solutions $y_1$ and second-quadrant solutions $y_2$ actually satisfy the original equations.
Here's a graph of one period of the situation:
Equation $(1)$ defines the blue squiggles (right and middle); equation $(2)$ defines the red squiggle (left) and ovals. The line $x=1$ appears in gold.
Solving the first equation for $x$ $$ 3x^2+\sin(2y)-\cos(y)-3=0 $$ $$ x=\pm\sqrt{\frac{3+\cos(y)-\sin(2y)}{3}} $$ Substituting this into the second equation $$ \left(\pm\sqrt{\frac{3+\cos(y)-\sin(2y)}{3}}\right)^3\mp3\sqrt{\frac{3+\cos(y)-\sin(2y)}{3}}-\sin(y)-\cos(2y)+3=0 $$ Simplifying radicals $$ \pm\frac{1}{3^{3/2}}(3+\cos(y)-\sin(2y))^{3/2}\mp\frac{3}{\sqrt{3}}(3+\cos(y)-\sin(2y))^{1/2}-\sin(y)-\cos(2y)+3=0 $$ Expanding $\sin$'s, factoring the first two terms & ignoring $\pm,\mp$ signs to just take positive roots as this is already messy enough $$ \frac{1}{\sqrt{3}}(3+\cos(y)-2\sin(y)\cos(y))^{1/2}\left(-2+\frac{1}{3}\cos(y)-\frac{2}{3}\sin(y)\cos(y)\right)-\sin(y)-\cos^2(y)+\sin^2(y)+3=0 $$ Substituting $a=\sin(y)$, $b=\cos(y)$ $$ \frac{1}{\sqrt{3}}(3+b-2ab)^{1/2}\left(\frac{1}{3}b-\frac{2}{3}ab-2\right)-a-b^2+a^2+3=0 $$
We arrive at this messy (but solvable) equation. I can't think of an efficient way to solve it but solutions for the variables $a$ & $b$ can be computed with
Wolfram Alpha at the following link
$\quad$ $\quad$ $\quad$ $\quad$ $\quad$ $\quad$ $\quad$ Solutions $\quad$ (wait a minute for it to load)
If you substitute the original forms of $a$ & $b$ in terms of $y$ each solution gives an equation of the form $\cos(y)=f(\sin(y))$
Sorry i couldn't come up with anything neater nor more general but the important thing is that a solution can be found. Also, I may be being ignorant & not noticing an easier solution. If there are any apparent errors in my work, please let me know. Thanks for the interesting question.
Following on from coffeemath equation (1), and with the substitution $t=\tan(y/2)$, WolframAlpha gives the polynomial $$(t^2-4t+1)(5t^{10}-130t^9+66t^8-640t^7+128t^6-1044t^5+178t^4-656t^3+51t^2-122t+4)=0$$ The quadratic corresponds to $y=\pi/6$ and $y=5\pi/6$.
|
@Qmechanic answered your question in the comments, but evidently the message did not automatically sink in. Let me try to illustrate it with the routine demonstration you probably were exposed to when learning the uses of the Dirac bra-ket notation.
The short answer is that, for an infinite-dimensional Hilbert space, the identity on the r.h.s. of your commutator equation has a divergent trace; this leads to $\langle a | a\rangle \neq 1$, because that inner product is actually singular. So your equation 1) is fine, since the r.h.s. is infinity.
But your equation 2) is flawed, since the relevant expression involves a 0 multiplying a stronger infinity, amounting to infinity, again, as in 1).
I will illustrate this with $A=\hat x$ and $B=\hat p /\hbar$, as in standard QM courses. Absorb $\hbar$ in $\hat p$ to make the formalism more familiar.
Starting from the standard operator equation $[\hat x ,\hat p]=i 1\!\!1 $, first take its non-diagonal matrix elements, before building up to your 2),$$\langle x|\hat x \hat p - \hat p \hat x|y\rangle = (x-y)\langle x|\hat p|y\rangle=(x-y)\int dp ~ \langle x| p\rangle \langle p| \hat p|y\rangle \\ =(x-y)\int dp ~ \langle x| p\rangle p \langle p |y\rangle \\=\frac{ (x-y)}{2\pi}\int dp ~ p ~e^{i(x-y)p} =-i (x-y)\partial_x \delta (x-y) \\ =i \delta(x-y).$$ As always, $\langle x| p\rangle=\exp(ixp) ~/\sqrt{2\pi}$. Check the last equality by operating on a well-behaved test function. It trivially reflects homogeneity of degree $-1$, $\delta(\lambda x)=\delta(x)/\lambda$, so differentiate this by $\lambda$ and set $\lambda=1$.
That is, the expression diverges for $x\to y$, just like 1). The crucial point is that as the prefactor $(x-y)$ decreases, the matrix element multiplying it diverges, and faster.
All matrix elements of a commutator being available, as above, you may reconstitute your original operator equations from these, by insertion of resolutions of the identity on either side.
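The same obstruction can be seen in finite dimension (a toy illustration I'm adding, not part of the original answer): the trace of any commutator of finite matrices is exactly zero, so $[\hat x,\hat p]=i\,1\!\!1$ can only hold when the identity has no finite trace. Below, $\hat x$ is a diagonal grid-position matrix and $\hat p$ a central-difference derivative — an arbitrary discretization chosen just for the demo.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, h = 8, 0.5
X = [[i * h if i == j else 0 for j in range(n)] for i in range(n)]
# central-difference momentum p = -i d/dx: entries ±1/(2h) off the diagonal
P = [[-1j / (2 * h) if j == i + 1 else (1j / (2 * h) if j == i - 1 else 0)
      for j in range(n)] for i in range(n)]
XP, PX = matmul(X, P), matmul(P, X)
comm = [[XP[i][j] - PX[i][j] for j in range(n)] for i in range(n)]
trace = sum(comm[i][i] for i in range(n))   # trace of a commutator: always 0
```

Here the diagonal of `comm` is identically zero, whereas $i\,1\!\!1$ would put $i$ at every diagonal entry — a finite truncation simply cannot satisfy the canonical commutation relation.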
|
I am writing some Calculus content, and I would like a "big list" of useful functions which are defined by definite integrals, but are not elementary functions.
Two examples of such functions are
$$ \mathrm{Erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\mathrm{d} t $$
which is fundamentally important to statistics, and
$$ \mathrm{Si}(x) = \int_0^x \frac{\sin(t)}{t} \mathrm{d} t $$
which comes up all the time in signal processing.
I would like to be able to sketch such functions, express some definite integrals (like $\int_0^1 e^{-4t^2} dt$) in terms of such functions, etc.
So what other functions are important enough to have their own name, and are given as integrals of elementary functions?
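For instance (my worked example, using the `erf` available in Python's `math` module): substituting $u=2t$ gives $\int_0^1 e^{-4t^2}\,dt = \frac{\sqrt\pi}{4}\,\mathrm{Erf}(2)$, which a direct Riemann sum confirms.

```python
import math

def integral_by_erf():
    # substitute u = 2t:  ∫₀¹ e^(-4t²) dt = (√π / 4) · Erf(2)
    return math.sqrt(math.pi) / 4 * math.erf(2)

def integral_by_midpoint(n=100000):
    # direct midpoint Riemann sum of the same integral
    h = 1.0 / n
    return sum(math.exp(-4 * ((k + 0.5) * h) ** 2) for k in range(n)) * h
```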
|
Difference between revisions of "Conservation of Angular Momentum"
(copied stuff from other conservation of angular momentum page, will redirect other page to this page)
m (Maths formatting)
Line 27:
:<math>\mathbf{L_i}=\mathbf{L_f}</math>
+ :<math>= I \omega</math>
Thus
Line 88:
and thus the solution is:
+ :<math>\cos\theta = 1 - \frac{6m^2 h}{l(2m+M)(3m+M)}</math>
or
+ :<math>\theta = \arccos\left(1 - \frac{6m^2 h}{l(2m+M)(3m+M)}\right)</math>
[[Category:Physics]]
[[Category:Mechanics]]
Revision as of 15:33, 13 December 2016
The conservation of angular momentum is a fundamental concept of physics along with other conservation laws such as those of energy and linear momentum. It states that the angular momentum of a system remains constant unless changed through an action of external forces.
In Newtonian mechanics, the angular momentum of a point mass about a point is defined as <math>\mathbf{L} = \mathbf{r} \times \mathbf{p}</math>, where <math>\mathbf{r}</math> is the position vector of the point mass with respect to the point of reference and <math>\mathbf{p}</math> is the linear momentum vector of the point mass.
The principle of angular momentum can be applied to a system of particles by summing the angular momentum of each particle about the same point. This can be represented as:
:<math>\mathbf{L} = \sum_i \mathbf{L}_i</math>
where <math>\mathbf{L}</math> is the total angular momentum of the system and <math>\mathbf{L}_i</math> is the angular momentum of the i-th particle.
Proof of conservation
For a constant radius, the second term is zero. From this, it can be concluded that in the absence of an external moment, angular momentum must be conserved.
Example
Imagine a rod of length l and mass M suspended vertically from one end, and a small block of mass m moving with velocity v that collides with the other end and sticks to it. The maximum angle of displacement of the rod from the vertical axis can be calculated using the conservation of angular momentum:
:<math>mvl = I\omega</math>
Thus
:<math>\omega = \frac{mvl}{I}</math>
After the collision, there is conservation of energy such that the final potential energy of the rod (with the sticking block mass) equals the initial kinetic energy just after the collision:
:<math>\tfrac{1}{2}I\omega^2 = \left(\tfrac{M}{2} + m\right)gl(1-\cos\theta)</math>
Now solve the two sides of the above equation separately:
Solving the above four equations yields:
Plugging in for angular velocity from the initial equations above yields:
Calculating the moment of inertia
It now becomes necessary to calculate the moment of inertia <math>I</math> for a rod of length l and mass M, with a small block of mass m at its end.
An ordinary rod of length l has the following moment of inertia relative to an axis of rotation at one end:
:<math>I_{\text{rod}} = \tfrac{1}{3}Ml^2</math>
The moment of inertia is additive, defined as follows:
:<math>I = \sum_i m_i r_i^2</math>
where <math>m_i</math> is the mass at each (perpendicular) distance <math>r_i</math> from the axis of rotation.
Thus the moment of inertia <math>I</math> for a rod of length l and mass M, with a small block of mass m at its end, is simply this:
:<math>I = \tfrac{1}{3}Ml^2 + ml^2</math>
which is:
:<math>I = \left(\tfrac{M}{3} + m\right)l^2</math>
Plugging this back into the unsolved equation above yields:
:<math>\cos\theta = 1 - \frac{3m^2v^2}{gl(2m+M)(3m+M)}</math>
If we complicate the problem further by assuming the small block began at rest and slid down an incline of height h, then applying conservation of energy to the moment in time just prior to its collision with the rod yields the following velocity of impact:
:<math>v = \sqrt{2gh}</math>
and hence
:<math>v^2 = 2gh</math>
and thus the solution is:
:<math>\cos\theta = 1 - \frac{6m^2 h}{l(2m+M)(3m+M)}</math>
or
:<math>\theta = \arccos\left( 1 - \frac{6m^2 h}{l(2m+M)(3m+M)} \right)</math>
|
It has been stated that the molar conductance ($\Lambda_m$) of strong electrolytes is not affected to a great extent by dilution, and so to find the limiting value of the molar conductance ($\Lambda_m^0$) we can extrapolate the graph of molar conductance vs the square root of concentration ($\sqrt c$). But as the concentration nears zero, shouldn't the molar conductance decrease sharply to the range of, say, $10^{-8}$, which is the conductance of distilled water, since the amount of solute (the strong electrolyte) would be negligible? Also, in that case, shouldn't extrapolating the graph be impossible?
The molar conductivity of a strong electrolyte $(\Lambda_m)$ is given as the ratio of measured conductivity $(\kappa)$ to the molar concentration $(c)$: $$\Lambda_m=\frac{\kappa}{c}$$
However, as you note, the value of $\Lambda_m$ is not independent of concentration. The reason for this is that $\kappa$ does not scale linearly with concentration.
$$\Lambda_m = \Lambda_m^\circ -K\sqrt{c}$$
In this equation, $\Lambda_m^\circ$ is the limiting or intrinsic conductivity of the electrolyte and $K$ is an empirical constant. The interpretation of $\Lambda_m^\circ$ is the molar conductivity at infinite dilution as $c\rightarrow0$.
$$\lim_{c\rightarrow 0}{\Lambda_m}=\Lambda_m^\circ$$
The infinite dilution scenario is different from the scenario when there were no ions in solution to begin with, just as mathematically $\lim_{x \rightarrow 0}{f(x)}$ does not always equal $f(0)$ (often because $f(0)$ is undefined).
In the limiting case $c\rightarrow 0$, the ion concentration never reaches zero, but it keeps getting arbitrarily close. You can never dilute a solution containing a solute to zero concentration (i.e. pure solvent).
If you graph $\Lambda_m$ as a function of $\sqrt{c}$, you should get a straight line with a non-zero intercept $(\Lambda_m^\circ)$. This value is an empirical parameter used to model data. It can also tell you something about the intrinsic conductivity of ions (see this Wikipedia article). However, as you say, at exactly $c=0$ the conductivity should be negligible. At any other value of $c$, however, the conductivity should follow from the equation given. Thus, the molar conductivity does not follow the linear relation in an infinitesimal range around $c=0$:
$$\Lambda_m=\begin{cases} 10^{-8}\approx0, &c=0\\ \Lambda_m^\circ - K\sqrt{c}, &c>0 \end{cases}$$
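Numerically, the extrapolation looks like this. The sketch below (synthetic data with made-up constants of roughly KCl-like magnitude) fits the linear relation $\Lambda_m = \Lambda_m^\circ - K\sqrt{c}$ by ordinary least squares; the fitted intercept recovers $\Lambda_m^\circ$ without ever measuring at $c = 0$:

```python
import math, random

# Assumed "true" parameters (made up, roughly KCl-like magnitudes).
LAMBDA0, K = 126.5, 88.0
random.seed(1)

cs = [0.001, 0.005, 0.01, 0.02, 0.05, 0.1]                   # concentrations, mol/L
xs = [math.sqrt(c) for c in cs]
ys = [LAMBDA0 - K * x + random.gauss(0, 0.05) for x in xs]   # noisy measurements

# Ordinary least squares for y = a + b*x; the intercept a estimates Lambda_m^0.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar
```

In a real experiment one would fit measured $\Lambda_m$ values, of course; the point is that the straight-line behaviour in $\sqrt{c}$ at finite concentrations is what licenses the extrapolation, not any measurement at $c = 0$ itself.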
On increasing the dilution, the conductivity ($\kappa$) does decrease, as it is proportional to the number of ions in a unit volume. But the molar conductance is the product $\Lambda_m=\kappa V$, where $V$ is the volume containing one mole of the dissolved electrolyte. It is proportional to the number of ions in $V$ volume of the electrolytic solution, which includes all the ions in one mole of the electrolyte plus the additional contribution from water. Hence, on increasing the dilution, $\kappa$ goes down but $V$ goes up in such a way that their product goes up. Therefore, on increasing the dilution, the conductivity will fall to the level of distilled water, but the molar conductance still refers to the volume containing one mole of electrolyte and hence will have a value higher than that of distilled water and of the concentrated solution of the electrolyte.
[The general term conductance (as defined by $C=\frac{1}{R}$) is different from the quantity molar conductance. The former deals with an electrolytic setup with a definite cell constant and fixed conditions. The latter is equal to the conductance of $V$ volume of electrolyte between two electrodes placed 1 unit apart.]
Since conductivity is directly proportional to the number of ions per unit volume, a reduction in concentration reduces the number of ions per unit volume and hence the conductivity.
|
LaTeX:LaTeX on AoPS
LaTeX About - Getting Started - Diagrams - Symbols - Downloads - Basics - Math - Examples - Pictures - Layout - Commands - Packages - Help
This article explains how to use LaTeX in the AoPSWiki, the AoPS Community, and the AoPS Classroom. See Packages to know which packages are prebuilt into the AoPS site.
Getting Started with LaTeX
The Very Basics
LaTeX uses a special "math mode" to display mathematics. There are two types of this "math mode":
In-line Math Mode
In in-line math mode, we use
$ signs to enclose the math we want to display, and it displays in-line with our text. For example, typing
$\sqrt{x} = 5$ gives us
Display Math Mode
In display math mode, we enclose our code in double dollar signs, and it displays the math centered and on its own line. For example,
$$\sqrt{x} = 5$$ gives us
In-line vs. Display
Besides displaying in-line vs. displaying centered and on a new line, the two modes render differently in other ways. Note that
$\sum_{k=1}^n k^2$ gives us whereas
$$\sum_{k=1}^n k^2$$ gives us
Basic Expressions Multiplication
Sometimes, when we're multiplying, we don't need a multiplication symbol. For instance, we can write $ab$ instead of $a \times b$ without ambiguity. However, when we're multiplying numbers, for instance, a multiplication symbol comes in handy. The standard symbol is given by
$\cdot$. For example,
$12\cdot\frac{1}{2}$ gives us
Fractions
We can make fractions via
$\frac{...}{...}$. For instance,
$\frac{x+y}{2}$ will give us
Roots
Square roots in LaTeX are pretty simple; we just type
$\sqrt{...}$. For instance,
$\sqrt{2}$ gives us Cube roots, fourth roots, and so on are only slightly more difficult; we type
$\sqrt[n]{...}$. For instance,
$\sqrt[4]{x-y}$ gives
Superscripts & Subscripts
To get superscripts (or exponents), we use the caret symbol
^. Typing
$x^2+y^2$ gives Subscripts are obtained via an underscore (holding shift and the minus sign on most keyboards). For instance,
$a_k$ yields
Groups
Most operations in LaTeX (such as superscripts and subscripts) can only see the "group" of characters immediately following it. We use curly braces
{...} to indicate groups longer than one character. For instance, if we wrote
$x^2015$, we'd expect to get but we instead get This is because each character in the string
2015 is in its own group until we tell LaTeX that
2015 should be one whole group. To convey this information to LaTeX, we write
$x^{2015}$ and we get
Beyond the Basic Expressions Grouping Basic Expressions
Our ordinary parentheses
(...) and brackets
[...] work to group expressions in LaTeX. For instance,
$(x+y)[z+w]$ gives us We can also group expressions using curly braces, but we can't just type
{...}. Rather, we must type
\{...\}. This is because LaTeX uses plain curly braces for other things, such as fractions and superscripts and subscripts.
When we put (vertically) large expressions inside of parentheses (or brackets, or curly braces, etc.), the parentheses don't resize to fit the expression and instead remain relatively small. For instance,
$$f(x) = \pi(\frac{\sqrt{x}}{x-1})$$ comes out as To automatically adjust the size of parentheses to fit the expression inside of them, we type
\left(...\right). If we do this for our equation above, we get We can use
\left and
\right for all sorts of things... parentheses (as we saw), brackets
$\left[...\right]$, braces
$\left\{...\right\}$, absolute values
$\left|...\right|$, and much more (norms, floor and ceiling functions, inner products, etc.).
Lists
To make a list, such as a sequence, we use
\dots. For example,
$a_0,a_1,\dots,a_n$ will give us
Sums
There are two basic ways to write out sums. First, we can use
+ and
\cdots. An example of this way would be
$a_1+a_2+\cdots+a_n$ This will give us Second, we could use summation notation, or
\sum. Such an example is
$\sum_{i=0}^n a_i$, giving Note the use of superscripts and subscripts to obtain the summation index.
Products
Again, there are two basic ways to display products. First, we can use
\cdot and
\cdots. An example is
$n! = n\cdot(n-1)\cdots 2\cdot 1$, which of course gives The alternative is to use product notation with
\prod. For instance,
$n! = \prod_{k=1}^n k$, giving
Equalities and Inequalities Inequalities
The commands
>, <, \geq, \leq, and
\neq give us $>$, $<$, $\geq$, $\leq$, and $\neq$, respectively.
Aligning Equations
To align multiple equations, we use the
align* environment. For example, we might type a system of equations as follows:
\begin{align*} ax + by &= 1 \\ cx + dy &= 2 \\ ex + fy &= 3. \end{align*}
(You do not need dollar signs.) The
& symbol tells LaTeX where to align to and the \\ symbols break to the next line. This code will output the rendered system. An example of a string of equations is:
\begin{align*} ((2x+3)^3)' &= 3(2x+3)^2 \cdot (2x+3)' \\ &= 3(2x+3)^2 \cdot 2 \\ &= 6(2x+3)^2. \end{align*}
Again, the
& symbol tells LaTeX where to align to, and the \\ symbols break to the next line. This code outputs
Numbering Equations
To number equations, we use the
align environment. This is the same environment as the
align* environment, but we leave the
* off. The
* suppresses numbering. To number one equation, the code
\begin{align} ax + by = c \end{align}
will produce We don't have to use
& or \\ since there is nothing to align and no lines to break. To number several equations, such as a system, the code
\begin{align} ax + by &= c \\ dx + ey &= f \\ gx + hy &= i \end{align}
will produce In general,
align will auto-number your equations from first to last.
Comments in Equations
Again, we use the
align* environment. The code
\begin{align*} ax + by &= c & \text{because blah} \\ dx + ey &= f & \text{by such-and-such} \end{align*}
will produce (You can use
align to get numbering
and comments!) Definition by Cases
To define, say, a function by cases, we use the
cases environment. The code
$$ \delta(i,j) = \begin{cases} 0 & \text{if } i \neq j \\ 1 &\text{if } i = j \end{cases} $$
gives us As usual, the
& is for aligning and the \\ is for line-breaking.
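Putting several of these pieces together (cases, the align* environment, \left...\right sizing, and fractions), a complete display might look like this:

```latex
\begin{align*}
|x| &= \begin{cases} x & \text{if } x \geq 0 \\ -x & \text{if } x < 0 \end{cases} \\
\left| \frac{x+y}{2} \right| &\leq \frac{|x| + |y|}{2}.
\end{align*}
```

The & aligns the two relations on their relation symbols, the cases environment sits happily inside align*, and \left| ... \right| stretches the bars around the tall fraction.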
|
The EMV method appeared in its original form in Mathematical Reflections, here: http://reflections.a...irelymixing.pdf
As a matter of fact, the idea of this method is extremely simple: one just rewrites the inequality in terms of $a - b$, $b - c$, $c - a$, and then compares the remaining terms.
The drawback of this method is that it only deals with 3 or 4 variables. Moreover, it sometimes requires some computation to transform the inequality into the $a - b$, $b - c$, $c - a$ form.
In this paper, I will write about the global derivative, a general form of the EMV theorem. Notably, this theorem is very simple but has numerous kinds of applications. The application I like the most is the solution to Surányi's inequality - one of the most beautiful inequalities ever.
---------------------------------------------------------------------------------------
Chapter. The Global Derivative
Section. The Foundation
You may know everything about the normal derivative of a single-variable or multi-variable function, but it is unlikely that you have heard about the global derivative and its applications to inequalities. Before we start, let's have a look at a simple method of proving inequalities, called the entirely mixing variable method. The entirely mixing variable method was my first and foremost motivation for creating the global derivative.
The complete article about the entirely mixing variable method can be found in Mathematical Reflections, volume 5/2006, or can be directly accessed at
[url=http://reflections.awesomemath.org/2006\_5/2006\_5\_entirelymixing.pdf]http://reflections.a...memath.org/2006
The second motivation comes from the following famous inequality, which first appeared in Crux Mathematicorum and was reused in some national mathematical competitions recently. It was found by the mathematician Vasile Cirtoaje.
Example 1.
Let $a,b,c$ be three real numbers. Prove that
$(a^2 + b^2 + c^2)^2\ge 3(a^3b + b^3c + c^3a).$
At first glance, you may think that this simple-looking problem calls for nothing but an easy and direct application of basic inequalities. So perhaps you cannot imagine why it has equality cases both at $a = b = c$ (certainly) and at $(a,b,c) = \left(\sin^2 \dfrac {4\pi}{7}, \sin^2 \dfrac {2\pi}{7}, \sin^2 \dfrac {\pi}{7}\right).$ I'll say that the second case of equality is nice but "bad", since it has nothing in common with the original form. Finally, we really want to know: how did people solve it?
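To make the exotic equality case concrete, here is a quick numerical sanity check (a sketch, not a proof; function and variable names are my own): random sampling suggests the inequality holds everywhere, and the trigonometric triple above gives equality to machine precision.

```python
import math, random

def gap(a, b, c):
    # LHS - RHS of Example 1: (a^2+b^2+c^2)^2 - 3(a^3 b + b^3 c + c^3 a)
    return (a * a + b * b + c * c) ** 2 - 3 * (a ** 3 * b + b ** 3 * c + c ** 3 * a)

random.seed(0)
worst = min(gap(*(random.uniform(-5, 5) for _ in range(3))) for _ in range(20000))

# The exotic equality case quoted above:
t = (math.sin(4 * math.pi / 7) ** 2,
     math.sin(2 * math.pi / 7) ** 2,
     math.sin(math.pi / 7) ** 2)
residual = gap(*t)
```

Since both sides are homogeneous of degree 4, any scalar multiple of the triple also gives equality, which is why the equality case is quoted only up to proportionality.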
The author's (Vasile Cirtoaje's) original solution is as "incredible" as the problem itself. It comes from an identity
$4(a^2 + b^2 + c^2 - ab - bc - ca)\left((a^2 + b^2 + c^2)^2 - 3(a^3b + b^3c + c^3a)\right)$
$= \left((a^3 + b^3 + c^3) - 5(a^2b + b^2c + c^2a) + 4(ab^2 + bc^2 + ca^2)\right)^2$
$+ 3\left((a^3 + b^3 + c^3) - (a^2b + b^2c + c^2a) - 2(ab^2 + bc^2 + ca^2) + 6abc\right)^2.$
Some may spontaneously respond that this solution is not fully mathematically satisfying, since the identity is too unusual and accidental. As a matter of fact, there is another solution \footnote{This solution was sent to Crux Mathematicorum by a reader named Stefan.} that you may like more
$2(a^2 + b^2 + c^2)^2 - 6(a^3b + b^3c + c^3a) = \sum_{cyc} (a^2 - 2ab + bc - c^2 + ca)^2 \ge 0.$
This solution inspired me to discover another identity, as follows
$6(a^2 + b^2 + c^2)^2 - 12(a^3b + b^3c + c^3a) = \sum_{cyc} (a^2 - 2b^2 + c^2 + 3bc - 3ca)^2 \ge 0.$
We cannot deny that all three identities above are really miraculous, but in some sense, or at least for me, they are still not mathematically convincing, since we can't explain what drives us to them! I found the third identity by sheer luck, when expanding a general formulation (when I do so, I don't know what I will get)
$(a^2 - 2b^2 + c^2 + kbc - kca)^2 + (b^2 - 2c^2 + a^2 + kca - kab)^2 + (c^2 - 2a^2 + b^2 + kab - kbc)^2,$
and you may agree with me that this kind of "lucky, subjective and obscure mathematics" is not what we want to pursue. In the meantime, the same story continues with another simple-looking inequality.
Example 2.
If $a,b,c$ are three real numbers, then
$a^4 + b^4 + c^4 + ab^3 + bc^3 + ca^3\ge 2(a^3b + b^3c + c^3a).$
How could you imagine that in this inequality, the equality holds for
$(a,b,c) \sim \left(1 + 2\cos \dfrac {\pi}{9}, 1 + 2\cos \dfrac {2\pi}{9}, - 1\right).$
In the proposal, I will present a general approach that proves them by the global derivative. More interestingly, every property of the global derivative can be proved easily with elementary knowledge of analysis, and the applications of the global derivative cover not only $3$-variable inequalities but general $n$-variable inequalities as well.
Section. The Global Derivative - The Method and Theorems.
Definition.
Assume that $f(x_1,x_2,...,x_n): R^n\to R$ is a continuous $C^1$ function on $R^n$. The global derivative of $f$, denoted $[f]$, is defined as follows
$[f] = \sum_{i = 1}^n D_i f,$
in which $D_i f$ is the partial derivative of $f$ with respect to the variable $x_i$.
The global derivative, in general, has all the beautiful properties that the normal derivative has. The following are some of them
$ [f(g)] = [f](g).[g] \ \ \ \ ;\ \ \ \ \ [af + bg] = a[f] + b[g];$
$ [fg] = [g]f + [f]g \ \ \ \ \ ;\ \ \ \ \ \left[\dfrac{f}{g}\right] = \dfrac{[f]g - [g]f}{g^2};$
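These rules are easy to sanity-check numerically: $[f] = \sum_i D_i f$ is the directional derivative of $f$ along the all-ones vector, so a central finite difference approximates it. The helper below is purely illustrative (its name, test points, and tolerances are my own); it verifies the product rule and the difference property $[x-y]=0$ discussed below.

```python
def gdiff(f, xs, eps=1e-6):
    # [f] = sum of partials = directional derivative along (1, 1, ..., 1),
    # approximated here by a central finite difference.
    up = f([x + eps for x in xs])
    dn = f([x - eps for x in xs])
    return (up - dn) / (2 * eps)

# Difference property: [x - y] = 0.
assert abs(gdiff(lambda v: v[0] - v[1], [3.7, -1.2])) < 1e-9

# Product rule: [fg] = [f] g + [g] f, checked at a sample point.
f = lambda v: v[0] * v[1]
g = lambda v: v[0] + v[1] ** 2
p = [1.3, 0.4]
lhs = gdiff(lambda v: f(v) * g(v), p)
rhs = gdiff(f, p) * g(p) + gdiff(g, p) * f(p)
assert abs(lhs - rhs) < 1e-6
```
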
Moreover, it has some other special properties. The most important one is that it annihilates differences. That is,
$[x - y] = 0,$
for any two variables $x,y$. This simple property underlies many applications of the original EMV theorem. In terms of inequalities, the global derivative plays a very important role through the following theorem
Theorem 1.
Suppose that $f(x_1,x_2,...,x_n): R^n\to R$ is a continuous $C^1$ function. The inequality $f(x_1,x_2,...,x_n)\ge 0$, with $x_1,x_2,...,x_n\ge 0$, holds if the two following conditions are fulfilled at once
$(i). f(x_1,x_2,...,x_n)\ge 0 \mbox{\ if \ } x_1x_2...x_n = 0.$
$(ii). [f] \ge 0 \ \forall x_1,x_2,...,x_n\ge 0.$
This theorem has many applications to inequalities. For example, it provides a way to generalize the famous Schur inequality (although people keep trying to generalize this important inequality, there is no real generalization of the Schur inequality so far), and it also leads to a simple proof of Surányi's inequality\footnote{This inequality was given in the Miklós Schweitzer Mathematical Competition (a competition for undergraduate students) with a pretty complicated solution.}, one of the most beautiful elementary inequalities ever.
Example 3
[Surányi]
If $a_1,a_2,...,a_n$ are non-negative real numbers then
$(n - 1)(a_1^n + a_2^n + ... + a_n^n) + na_1a_2...a_n\ge (a_1 + a_2 + ... + a_n)\left(a_1^{n - 1} + a_2^{n - 1} + ... + a_n^{n - 1}\right).$
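As a sanity check (not a proof), the inequality can be probed numerically for small $n$; the names below are my own:

```python
import math, random

def gap(a):
    # (n-1) * sum(a_i^n) + n * prod(a_i) - (sum a_i) * sum(a_i^(n-1))
    n = len(a)
    return ((n - 1) * sum(x ** n for x in a) + n * math.prod(a)
            - sum(a) * sum(x ** (n - 1) for x in a))

random.seed(2)
worst = min(gap([random.uniform(0, 3) for _ in range(n)])
            for n in range(2, 6) for _ in range(5000))
assert worst > -1e-9                 # never (noticeably) negative on random samples
assert gap([1.0, 1.0, 1.0]) == 0.0   # equality when all variables coincide
```

Note that for $n = 2$ the gap is identically zero, which random sampling confirms: the inequality degenerates to $(a_1+a_2)^2 = a_1^2 + a_2^2 + 2a_1a_2$.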
Another important property of the global derivative is the following statement
Theorem 2.
Assume that $f(x_1,x_2,...,x_n): R^n\to R$ is a smooth function. Denote $f_0 = f$ and $f_k = [f_{k - 1}]$ for $k \ge 1$. We have the following identity
$f(x_1 + t,x_2 + t,...,x_n + t) = \sum_{k = 0}^{\infty} \dfrac{f_k(x_1,x_2,...,x_n)}{k!}t^k.$
Theorem 1 can be proved with introductory knowledge of the one-variable derivative, and Theorem 2 can be proved by Taylor's formula.
The above theorem is the main key for us to prove the 4-degree inequalities which have miraculous cases of equality, as mentioned in the previous section. It can actually provide necessary and sufficient conditions for an inequality to hold.
Theorem 3.
Assume that $F(x_1,x_2,...,x_n)$ is a cyclic polynomial of $n$ real variables $x_1,x_2,...,x_n$ with degree $4$ such that $F(x_1,x_2,...,x_n) = 0$ if $x_1 = x_2 = ... = x_n$. Denote $F_0 = F, F_1 = [F_0]$ and $F_2 = [F_1]$, then
(i).The inequality $F\ge 0$ holds for all real numbers $x_1,x_2,...,x_n$ if and only if for any $x_1,x_2,...,x_{n - 1}\ge 0$, the following condition holds
$F_0|_{x_n = 0}\ge 0 \text{\ and \ } F_1^2|_{x_n = 0}\le 2(F_0F_2)|_{x_n = 0}.$
(ii).The inequality $F\ge 0$ holds for all non-negative real numbers $x_1,x_2,...,x_n$ if and only if for any $x_1,x_2,...,x_{n - 1}\ge 0$, then at least one of two following conditions holds
$(1).\ F_0|_{x_n = 0} \ge 0 ; F_1|_{x_n = 0} \ge 0 \text{\ and \ } F_2|_{x_n = 0} \ge 0.$
$(2). \ F_0|_{x_n = 0}\ge 0 \text{\ and \ } F_1^2|_{x_n = 0}\le 2(F_0F_2)|_{x_n = 0}.$
This theorem can help prove the mentioned Vasile's Inequality in a few lines.
We now advance to another exciting application of the global derivative. Some years ago, Ho Joo Lee found a great result on symmetric polynomials of degree 3, generally known as the PID theorem, as follows
Theorem
[Ho Joo Lee theorem]
Let $P(a,b,c)$ be a symmetric polynomial of degree $3$. The following conditions are equivalent to each other
$(i).\ P(1,1,1),P(1,1,0),P(1,0,0) \ge 0.$
$(ii).\ P(a,b,c) \ge 0 \ \forall a,b,c\ge 0.$
Nowadays, symmetric polynomial inequalities of degree $3$ are among the most basic and easiest forms of inequalities, but problems in cyclic form are still challenging. With the help of the global derivative, I found another result that yields proofs for all cyclic inequalities of degree $3$.
Theorem 4.
Let $P(a,b,c)$ be a cyclic homogeneous polynomial of degree $3$.
The inequality $P \ge 0$ holds for all non-negative variables $a,b,c$ if and only if
$P(1,1,1) \ge 0 \ ;\ P(a,b,0) \ge 0 \ \forall a,b\ge 0;$
The global derivative is one of the simplest yet most powerful methods in the proposal. The examples above are applications to three-variable inequalities, but its strength is not lost when dealing with $n$-variable inequalities. The article on the global derivative ends here, and I hope it conveys a first impression of the structure of the whole paper.
---------------------------------------------------------------------
I hope that those of you who love inequalities will find many better methods. Still, keep in mind that inequalities are just like a game within mathematics: beautiful and interesting, but without any real influence on the development of modern mathematics in general (so being good at inequalities is not enough to become a mathematician). I hope the forum will enjoy a lively, enthusiastic, and open atmosphere of discussion, in a spirit of mutual respect.
Many of you (including hungkhtn) have liked, or will come to like, inequalities because they usually look very neat and beautiful. And the wonderful ideas in inequalities are the ones that lead you to the most beautiful solutions.
|
Given an initial velocity vi, the mass of the cue ball, the coefficient of friction between the ball and the surface, and the radius from the center at which the cue has struck, how would I determine the change in velocity?
Unfortunately it is not enough to know where the cue has struck the cue ball, as the spin on the cue ball depends on many more factors (for instance, the spin on the cue ball is generated by accelerating through it with the cue). Perhaps you can replace this parameter by the spin on the ball, i.e. its angular momentum vector.
The way I would then model this problem is the following. Assume that the balls are identified by the following parameters
radius $r$; mass $m$; position $\mathbf r$; velocity vector $\mathbf v$ of the centre of mass; angular velocity pseudovector $\boldsymbol\omega$.
The vertical component of the spin $\boldsymbol\omega$ is dissipated through friction, and this leads to the introduction of a friction parameter, say $\mu_z$.
As for the motion of the ball, one has to distinguish between two fundamental regimes: sliding and rolling. When the ball is sliding, the velocity vector changes direction because of the horizontal component of the spin, which together with the grip on the cloth causes the ball to steer left/right and/or accelerate/decelerate. The force is in the direction of the vector $\mathbf k\times\boldsymbol\omega$, where $\mathbf k$ is in the direction of the vertical axis. The magnitude depends on another coefficient, say $\mu_s$, so something of the form $$\mathbf F_s = \mu_s r m g\,\widehat{\boldsymbol\omega\times\mathbf k}$$ This force is responsible for changing both the angular velocity $\boldsymbol\omega$ and the velocity of the centre of mass of the ball $\mathbf v$. Furthermore, another force that acts on the centre of mass is dynamical friction, which acts against the direction of the motion.
As soon as the ball starts rolling, it continues on a straight line along the direction of $\mathbf v$ until it comes to a halt. In this case the acting force is just friction; however, another coefficient, say $\mu_d$, might be necessary due to the different nature of this friction.
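To illustrate the sliding-to-rolling transition, here is a minimal sketch in Python. It is not the answer's exact parametrization: it uses the common textbook model in which a single kinetic-friction force acts at the contact point, opposite the slip velocity (all names, parameter values, and the Euler integrator are my own choices). A classic consequence of this model is that a ball struck dead centre (pure sliding, no initial spin) settles into rolling at 5/7 of its initial speed.

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def slide_to_roll(v0, w0, r=0.028575, m=0.17, mu=0.2, g=9.81, dt=1e-5):
    """Integrate the sliding phase until the contact point stops slipping."""
    v, w = list(v0), list(w0)
    I = 0.4 * m * r * r                         # solid sphere: (2/5) m r^2
    for _ in range(2_000_000):                  # hard cap against non-termination
        wc = cross(w, (0.0, 0.0, -r))           # spin's contribution at the contact point
        u = [v[0] + wc[0], v[1] + wc[1], 0.0]   # slip velocity of the contact point
        slip = math.hypot(u[0], u[1])
        if slip < 1e-4:                         # rolling without slipping reached
            return v, w
        F = [-mu * m * g * u[0] / slip, -mu * m * g * u[1] / slip, 0.0]
        tau = cross((0.0, 0.0, -r), F)          # friction torque about the centre
        for i in range(3):
            v[i] += F[i] / m * dt
            w[i] += tau[i] / I * dt
    raise RuntimeError("did not reach rolling")

v, w = slide_to_roll((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
assert abs(v[0] - 5 / 7) < 1e-2    # classic 5/7 rolling speed
```

Feeding in a nonzero initial $\boldsymbol\omega$ (follow, draw, or side spin) makes the same loop reproduce the steering effects described above.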
Hope these few ideas helped. I also recommend reading through this paper for further ideas.
|
SolidsWW Flash Applet Sample Problem 1
Revision as of 10:07, 10 August 2011
Flash Applets embedded in WeBWorK questions
solidsWW Example
Sample Problem with solidsWW.swf embedded
A standard WeBWorK PG file with an embedded applet has six sections:
A tagging and description section, that describes the problem for future users and authors,
An initialization section, that loads required macros for the problem,
A problem set-up section, that sets variables specific to the problem,
An Applet link section, that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet),
A text section, that gives the text that is shown to the student, and
An answer and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete.
The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above. A screenshot of the applet embedded in this WeBWorK problem is shown below:
There are other example problems using this applet:
solidsWW Flash Applet Sample Problem 2
solidsWW Flash Applet Sample Problem 3
And other problems using applets:
Derivative Graph Matching Flash Applet Sample Problem
USub Applet Sample Problem
trigwidget Applet Sample Problem
solidsWW Flash Applet Sample Problem 1
GraphLimit Flash Applet Sample Problem 2
Other useful links:
Flash Applets Tutorial
Things to consider in developing WeBWorK problems with embedded Flash applets
PG problem file Explanation ##DESCRIPTION ## Solids of Revolution ##ENDDESCRIPTION ##KEYWORDS('Solids of Revolution') ## DBsubject('Calculus') ## DBchapter('Applications of Integration') ## DBsection('Solids of Revolution') ## Date('7/31/2011') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2011') ## AuthorText1('') ## Section1('') ## Problem1('') ########################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ##########################################
This is the tagging and description section of the problem file.
The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code.
All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", );
This is the initialization section of the problem.
TEXT(beginproblem()); $showPartialCorrectAnswers = 1; Context("Numeric"); $a = random(2,10,1); $b = random(2,10,1); $xy = 'y'; $func1 = "$a*sin(pi*y/8)+2"; $func2 = "$b*sin(pi*y/2)+2"; $xmax = max(Compute("$a+2"), Compute("$b+2"),9); $shapeType = 'circle'; $correctAnswer = Compute("64*$a+4*pi*$a^2+32*pi");
This is the problem set-up section.
The solidsWW.swf applet will accept a piecewise-defined function either in terms of x or in terms of y. We set $xy to 'y' here, so the profile functions are given in terms of y.
######################################### # How to use the solidWW applet. # Purpose: The purpose of this applet # is to help with visualization of # solids # Use of applet: The applet state # consists of the following fields: # xmax - the maximum x-value. # ymax is 6/5ths of xmax. the minima # are both zero. # captiontxt - the initial text in # the info box in the applet # shapeType - circle, ellipse, # poly, rectangle # piece: consisting of func and cut # this is a function defined piecewise. # func is a string for the function # and cut is the right endpoint # of the interval over which it is # defined # there can be any number of pieces # ######################################### # What does the applet do? # The applet draws three graphs: # a solid in 3d that the student can # rotate with the mouse # the cross-section of the solid # (you'll probably want this to # be a circle # the radius of the solid which # varies with the height #########################################
This is the Applet link section of the problem.
Those portions of the code that begin the line with # are comments.
################################### # Create link to applet ################################### $appletName = "solidsWW"; $applet = FlashApplet( codebase => findAppletCodebase ("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, #answerBoxAlias => 'answerBox', height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, submitActionScript => '' );
You must include the section that follows, which creates the link to the applet.
################################### # Configure applet ################################### $applet->configuration(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0x0000ff</theColor> <profile> <piece func='$func1' cut='8'/> <piece func='$func2' cut='10'/> </profile> </plot></xml>}); $applet->initialState(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0x0000ff</theColor> <profile> <piece func='$func1' cut='8'/> <piece func='$func2' cut='10'/> </profile> </plot></xml>}); TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll( debug=>0, includeAnswerBox=>0, )));
The lines
$applet->initialState(qq{<xml><plot>
<xy>$xy</xy>
<captiontxt>'Compute the volume of the figure shown.'</captiontxt>
<shape shapeType='$shapeType' sides='3' ratio='1.5'/>
<xmax>$xmax</xmax>
<theColor>0x0000ff</theColor>
<profile>
<piece func='$func1' cut='8'/>
<piece func='$func2' cut='10'/>
</profile>
</plot></xml>}); configure the applet.
The configuration of the applet is done in xml. The argument of the function is set to the value held in the variable
The code
Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission.
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT
The text between the
BEGIN_TEXT $BR $BR Find the volume of the solid of revolution formed by rotating the curve \[x=\begin{cases} $a\sin\left(\frac{\pi y}{8}\right)+2 &y\le 8\\ $b\sin\left(\frac{\pi y}{2}\right)+2 &8<y\le 10\end{cases}\] about the \(y\)-axis. \{ans_rule(35) \} $BR END_TEXT Context()->normalStrings;
This is the text section of the problem.
################################ # # Answers # ## answer evaluators ANS( $correctAnswer->cmp() ); ENDDOCUMENT();
This is the answer and solution section of the problem.
The Flash applets are protected under the following license: Creative Commons Attribution-NonCommercial 3.0 Unported License.
|
An even more ambitious, more stringy type of supersymmetry could actually be more compatible with the LHC data
A year or so ago, we entered the serious LHC era, in which many high-energy physicists refocused from formal, top-down theory to experiments and phenomenology, i.e. to thinking about the signs of new physics we may soon see.
While the speed with which the LHC may uncover new physics has surely been disappointing for many victims of wishful thinking, it can't be excluded that some signs of new physics will emerge in a few months or years. Aside from the Higgs boson(s), supersymmetry remains the most likely scheme that could appear as the first discovery.
Minimal and extended supersymmetry
Supersymmetry is a new, very abstract form of a symmetry. It is mathematically analogous to ordinary symmetries such as the rotational symmetry \(SO(3)\).
The rotational symmetry forms something known as a Lie (continuous) group and a very efficient way to study and classify such possible symmetries is to look at the infinitesimal transformations (transformations by infinitely small angles). Those may be expressed as tiny variations of the identity matrix, \({\bf 1}+iM\), where \(M\) is typically an infinitesimal Hermitian matrix (I needed the \(i\) factor for the sum to be unitary).
The information about the "shape" of the rotational symmetry group or another group is encoded in the linear space of possible matrices \(M\). This space is known as the Lie algebra and contains an operation \([M,N]\), the commutator, which knows everything about the multiplication rules of the original group as well as its "curved structure". The commutator may be viewed as an abstract operation but if you represent the generators by genuine particular matrices, it reduces to \(MN-NM\), indeed.
Supersymmetry is one of the "generalized forms of a Lie group" which is also determined by its Lie algebra of infinitesimal transformations. However, it must be a "superalgebra" which means that the objects \(M,N\) are no longer ordinary operators that have real eigenvalues we may imagine. Instead, the generators of a supersymmetry algebra are operators such as \(Q_\alpha\) which are "anticommuting" or "Grassmann-odd". These adjectives are also morally equivalent to "fermionic" which contrasts with the adjective "bosonic" or "commuting" or "Grassmann-even" which may be used for the original generators \(M,N\) of an ordinary Lie algebra.
Grassmann-odd, fermionic numbers don't commute with each other. Instead, in the simplest case, they anticommute,
\[ \{\theta_i,\theta_j\}\equiv \theta_i \theta_j + \theta_j\theta_i = 0. \] If you exchange the ordering in the multiplication, you change the sign of the product. This is analogous e.g. to a cross product of two vectors, \(\vec a\times \vec b = -\vec b \times \vec a\). However, the multiplication of the Grassmann-odd numbers (and even operators) is still associative, unlike the cross product, and you can't ever represent Grassmann-odd numbers by ordinary real numbers. Instead, they're numbers that can't be "enumerated". They don't belong to any well-defined set. You may only get "ordinary real or complex numbers" if you multiply an even number of Grassmann-odd numbers.
Based on this description, you could guess that the Grassmann-odd numbers are "unreal" in a similar way as a wave function. The absolute value of a wave function must be first squared to get something with a clear measurable interpretation, namely a probability. The same thing holds for Grassmann-odd numbers. They would really make no sense (or would be identically equal to zero) in any classical theory. But in a quantum theory, they're as sensible as the ordinary numbers as a description of wave functions and operators.
Much like bosons and fermions are equally natural or legitimate, Grassmann-even and Grassmann-odd numbers are equally legitimate as well. This conclusion would almost certainly be surprising for someone who hasn't studied quantum field theory in detail: but once he studies it, he understands it's an established fact. When you exchange two identical particles (or two "distant" operators of the same kind), you can't change the physics, so the probability distributions shouldn't change. If the wave function gets multiplied by \(\pm 1\), that's still OK: the probabilities are unchanged because they are squared amplitudes, and even \(-1\) squares to one, which is good because you get back to the original state if you perform the permutation twice.
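To make the sign rule tangible, here is a minimal sketch (my own illustration, not from the post) that multiplies Grassmann monomials, each represented as a coefficient together with a tuple of generator indices:

```python
def gmul(a, b):
    """Product of two Grassmann monomials (coeff, indices).

    Swapping two adjacent generators flips the sign; a repeated
    generator makes the product vanish, since theta_i * theta_i = 0.
    """
    ca, ia = a
    cb, ib = b
    idx = list(ia) + list(ib)
    if len(set(idx)) != len(idx):          # theta_i * theta_i = 0
        return (0, ())
    # bubble-sort the indices, flipping the sign at every transposition
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return (sign * ca * cb, tuple(idx))

t1, t2 = (1, (1,)), (1, (2,))
print(gmul(t1, t2))   # (1, (1, 2))
print(gmul(t2, t1))   # (-1, (1, 2)) -- anticommutation
print(gmul(t1, t1))   # (0, ())     -- a Grassmann number squares to zero
```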
It's time to say what the supersymmetry algebra is. It has the generators \(Q_\alpha\) which must transform as spacetime spinors. They commute with the Hamiltonian so they're "conserved charges". It's remarkable that one may have conserved charges carrying spin 1/2 at all; it is one of the counterexamples to the Coleman-Mandula theorem (if you formulated this theorem in a sloppy way and omitted some of its assumptions). If you know what spinors are, and I am not going to explain it here, you know that the dimension of a spinor representation grows as a power of two where the exponent is roughly one half of the spacetime dimension:
\[ D({\rm spinor}) = 2^{D({\rm spacetime})/2} \] Well, you should really subtract \(1\) from the spacetime dimension first, and then take the integer part of one-half of the result, to get the dimension of the smallest irreducible spinor representation. Let's not go into these technicalities. The fact that \(Q_\alpha\) are spinors determines their commutator with the Lorentz generators. They commute with the energy-momentum, as I suggested previously. And the most important new commutator – well, this one and only this one is an anticommutator – says
\[ \{Q_\alpha,Q_\beta\} = p_\mu \gamma^\mu_{\alpha\beta} + {\rm central \,\,charges} \] It's still a bit schematic – I am not distinguishing dotted and undotted spinor indices etc. – but it contains the right flavor of the truth. (The anticommutator given by the curly bracket is just like a commutator but with a plus sign.) In words, the supersymmetry generators anticommute to energy-momentum, i.e. ordinary spacetime translations: the gamma matrices are just coefficients needed for the equation to be nicely Lorentz-covariant (i.e. for it to respect the pairing of indices etc.). There may also be additional similar terms on the right hand side known as "central charges" – various conserved winding numbers of strings and wrapping numbers of branes are good examples in string/M-theory.
Because the number of the components of a spinor grows quickly with the spacetime dimension and because each generator of a symmetry constrains your theory – and supersymmetry is no exception – you quickly get an overconstrained theory. So the spacetime dimension shouldn't be too high. It turns out that the maximum spacetime dimension in which you may get a non-trivial, interacting theory with \(D\) large spacetime dimensions which also contains massless particles and no negative probabilities is \(D=11\). The corresponding theory has 32 "real supercharges" – generators of the supersymmetry algebra which are rewritten as a collection of independent Hermitian Grassmann-odd operators in this counting.
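Using the prescription quoted above — subtract one from the spacetime dimension, take the integer part of one half, and raise two to that power — a two-line sketch (my own, glossing over the real-versus-complex counting technicalities the text also skips) reproduces the counts mentioned here:

```python
def min_spinor_dim(D):
    """Components of the smallest irreducible spinor, per the rule quoted above."""
    return 2 ** ((D - 1) // 2)

print(min_spinor_dim(11))  # 32 -- the real supercharges of 11-dimensional supergravity
print(min_spinor_dim(4))   # 2  -- the complex components of a 4d Weyl spinor
```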
The effective supersymmetric field theory in \(D=11\) inevitably contains gravity because the 32 supercharges are able to change the spin of particles by a few units and you inevitably get to \(j=2\) of a graviton, too. And it is called the eleven-dimensional supergravity. It is arguably the most symmetric and most beautiful or "the simplest" extension of Einstein's general theory of relativity. As a quantum field theory, it is probably non-renormalizable perturbatively, with new counterterms arising at 7 loops, and even if they cancel to all orders, it is surely inconsistent at the non-perturbative level. The unique short-distance completion (a fully consistent theory, at all distances, that reduces to the other one at long distances) of this theory is nothing else than the eleven-dimensional M-theory, a "higher-dimensional sibling" of the ten-dimensional superstring vacua.
Going to lower dimensions
I could describe how the possible number of supersymmetry generators depends on dimensions below 11 but let's jump directly to \(D=4\) because many readers still incorrectly think that our spacetime only has 4 dimensions. The \(D=4\) Weyl/chiral spinor has two complex components which carry the information of 4 real components. Alternatively, you may describe them as 4 real components of a Majorana/real spinor. The Majorana and Weyl conditions can't be imposed simultaneously in \(D=4\). Instead, you only have either Weyl spinors or the Majorana spinors as the minimal ones, and they're pretty much the same thing when it comes to the information they carry (although we typically manipulate with them differently). The Dirac spinor is made out of two Weyl (or two Majorana) spinors.
If you compactify eleven-dimensional M-theory on a 7-dimensional torus, the original 32 real supercharges in 11 dimensions are divided into 8 groups of 4 supercharges, i.e. 8 Weyl spinors. One Weyl spinor of supercharges is the minimum amount of supersymmetry you may have in \(D=4\), the so-called minimal or \({\mathcal N}=1\) supersymmetry. On the other hand, the 32 supercharges inherited from the maximally supersymmetric theory, M-theory, produce the maximal or \({\mathcal N} = 8 \) supergravity. Numbers in between are possible, especially the important cases \({\mathcal N}=2 \) and \( {\mathcal N}=4\) but some values that are not powers of two are possible, too (but much less attractive for the researchers, at least so far).
I can't go into details, but the more supersymmetries you get, the more accurately you may calculate various physical predictions. The "simplest compactifications" in M-theory or string theory have a large number of supercharges, they correspond to simple compactification manifolds (simple shapes of the hidden dimensions), they are relatively unique, and they are pretty well-understood.
On the other hand, vacua with a small amount of supersymmetry are numerous, unconstrained, they form large portions of the "landscape", and supersymmetry is only slightly helpful if you want to easily calculate predictions: many terms in all these predictions must be calculated, anyway, and sometimes the exact results are not known. Non-supersymmetric vacua are the messiest ones, of course. They're really not under control, they're potentially unstable, they may fail to exist at all, and they may also form an even larger landscape than the supersymmetric ones (or they may have a miraculous, not quite understood reason why they are or why it is unique). We have many guesses but we don't know for sure.
How many supersymmetries are in the real world?
Ten-dimensional string vacua have either 16 or 32 real supercharges, either the full 11-dimensional amount or one half of it. Because the 10-dimensional spinor is smaller than the 11-dimensional one – there exists a Majorana/Weyl spinor in \(D=10\) that is both real and chiral, and therefore allows one to reduce the dimension from 32 real to 16 real components – we are able to get below 32.
Heterotic string theory, the first 10-dimensional stringy starting point that was shown to be capable of explaining all types of particles and forces we observe in Nature, has 16 supercharges to start with. You may say that it's only 1/2 of the maximal number because the supercharges only arise from the left-moving excitations moving along strings, while all the supersymmetries linked to the right-moving excitations are eliminated by replacing the right-moving excitations by the old 26-dimensional bosonic string theory.
When you compactify a theory on a Calabi-Yau three-fold, you reduce the number of real supercharges to 1/4 of the original number. So you may see that by compactifying a heterotic string theory on a Calabi-Yau three-fold, you get a nice 4-dimensional theory with the minimal \({\mathcal N} = 1\) supersymmetry. Even the minimal supersymmetry has some big advantages over no supersymmetry but the next "extended" i.e. \({\mathcal N}>1\) supersymmetry, namely \({\mathcal N}=2\) supersymmetry, already seems "too much of a good thing".
Why? If you extend the known list of particles to all of their supersymmetric cousins, you will be able to derive that the resulting theory has too many particle species, too few independent interactions, and doesn't allow many things such as the CP-violation and the electroweak "chiral" couplings of the gauge field to the fermions, either. For years, everyone would assume that only the minimal, \({\mathcal N}=1\) supersymmetry, may be preserved in Nature down to energies that are approximately accessible to our accelerators. (At very high energies, probably inaccessible experimentally, the maximum \( {\mathcal N}=8\) SUSY still exists morally in some sense, namely locally on the manifold where it looks like a flat 10-dimensional type II string theory or 11-dimensional M-theory.)
A fun quotation. At TASI 99, Paul Aspinwall was one of the lecturers. He gave great lectures about the \({\mathcal N}=2\) theories which are interesting for many mathematical reasons; the Seiberg-Witten analysis of magnetic monopoles and monodromies in gauge theories belongs here. Paul Aspinwall would say that \({\mathcal N}=2\) is the most beautiful, naturally balanced choice interpolating between the too high supersymmetries which are too constraining and make the theories too trivial, and the minimal or no supersymmetry which is too unconstrained and messy. "The only problem is that our world doesn't seem to have an \({\mathcal N}=2\) supersymmetry, but this is not my fault."
Everyone laughed. But was he right? Finally, I am getting to the new hep-ph paper:
Before the experts leave this article with the word "obvious bullshit", let me say that they only talk about the extended supersymmetry of the gauge sector. It would really lead to contradictions if you tried to extend the quarks and leptons to \({\mathcal N}=2\) multiplets.
Does it make any sense for some particles, but not others, to respect the extended supersymmetry? Well, it could make more sense than what you might think. In a braneworld scenario of string theory (and this is of course my comment, but one that you may hear from pretty much any string theorist), the gauge bosons may live on branes that preserve a higher number of supersymmetries than the branes or their intersections where the leptons and quarks live. Of course, the supersymmetry respected by the "whole theory" is just the greatest common denominator, i.e. at most \({\mathcal N}=1\) which is spontaneously broken, anyway. But the gauge fields could have more supercharges.
The non-gravitational forces we know, the electroweak force and the strong force, could actually naturally be extended to whole \({\mathcal N}=2\) multiplets. Instead of a gluon and a single gluino, you would have a gluon, two gluinos, and a bosonic scalar sgluino (or whatever the name should be). We know such a "replacement" of non-supersymmetric theories by highly (extended) supersymmetric theories (or an even more extended one) from another context: AdS/QCD. When applied to heavy ion physics and nuclear physics in general, people want to understand phenomena that seem to agree with the old non-supersymmetric QCD. However, they often use the maximally supersymmetric \({\mathcal N}=4\) gauge theory and they get a good agreement. An extended supersymmetrization of the gauge fields seems to be a rather harmless, and probably healthy, modification.
The authors of the paper show how to distinguish \({\mathcal N}=1\) from \({\mathcal N}=2\) supersymmetry at the LHC. And they even find out that with the extended, more ambitious supersymmetry, the lower limits on the squark masses actually weaken! So if the gauge fields describing the three forces are described by a theory with extended supersymmetry, some of the supersymmetric particles could actually be lighter than what a naive analysis of the current LHC data seems to allow!
|
Search strategies for exotic decays of the Higgs boson in the gg+MET final state at the LHC

Abstract
In this study, we devise a search strategy for the exotic decay of the 125 GeV
Higgs boson in the $\gamma\gamma+MET$ final state. The studied final state comes in two different topologies: resonant and non-resonant. In the resonant case, the Higgs decays into two scalars, one being undetected and the other decaying resonantly into two photons. In the non-resonant case, based on low-scale SUSY-breaking models, the Higgs decays into two neutralinos, each subsequently decaying into a photon and a gravitino. We estimate the sensitivity of these searches using a DELPHES detector simulation, targeting $100$ fb$^{-1}$ of $\sqrt{s}=14$ TeV
$pp$ data from the LHC.
Figures from ggMET
pdf Figure 1a: Feynman diagrams for the non-resonant signal scenarios (Based on low scale SUSY breaking models, the Higgs decays into two neutralinos, each subsequently decaying into a photon and a gravitino)
pdf Figure 1b: Feynman diagrams for the resonant signal scenarios (Higgs decays into two scalars, one being undetected and the other decaying resonantly into two photons)
pdf Figure 2: Signal selection efficiency after trigger selection vs. mass for different signal scenarios and types
pdf Figure 3: Missing transverse energy distribution of signal and background for the gluon-gluon production mode
pdf Figure 4: Distribution of Δφ between the two photons for signal and background for the gluon-gluon production mode
pdf Figure 5: MT distribution (MT of $\gamma\gamma+MET$, $\mu\mu$) of signal and backgrounds for the ZH production mode
pdf Figure 6: Photons invariant mass distribution of signal and backgrounds for the ZH production mode
pdf Figure 7: Distribution of Δφ between the diphoton and dimuon systems for signal and backgrounds for the ZH production mode
pdf Figure 8: Di muon invariant mass distribution of signal and backgrounds for the ZH production mode
pdf Figure 9: Pt of dimuon for signal and backgrounds for the ZH production mode
pdf Figure 10: ∆φ between Diphoton and MET distribution of signal and backgrounds for the ZH production mode
pdf Figure 11: leading Photon Pt distribution of signal and backgrounds for the ZH production mode
pdf Figure 12: subleading Photon Pt distribution of signal and backgrounds for the ZH production mode
pdf Figure 13: Transverse Mass distribution of signal and backgrounds for the ZH production mode
pdf Figure 14: Significance plots for different trigger scenarios in the gluon fusion analysis, for a reference signal branching ratio of BR(h → γγ + E/T ) = 10%
pdf Figure 15: Significance plots for different trigger and signal scenarios in the gluon fusion analysis, for a reference signal branching ratio of BR(h → γγ + E/T ) = 10%
pdf Figure 16: 5σ branching ratios for the ggF channel, for resonant (in red) and non-resonant (in black) final states, using the γ + E/T trigger, for a reference signal branching ratio of BR(h → γγ + E/T ) = 10%
pdf Figure 17: Branching ratios for 95% confidence level exclusion in the ZH case, resonant and non-resonant topologies, requiring at least one photon (Nγ ≥ 1, in green and blue, respectively) and at least two photons (Nγ ≥ 2 in black and red, respectively), for a reference signal branching ratio of BR(h → γγ + E/T ) = 10% . The shaded areas correspond to a variation in systematics up to 10%
|
LHCb Collaboration; Aaij, R; Adeva, B; Adinolfi, M; Bernet, R; Bowen, E; Bursche, A; Chiapolini, N; Chrzaszcz, M; Dey, B; Elsasser, C; Graverini, E; Lionetto, F; Lowdon, P; Mauri, A; Müller, K; Serra, N; Steinkamp, O; Storaci, B; Straumann, U; Tresch, M; Vollhardt, A; Weiden, A; et al. (2015).
Measurement of $CP$ violation in $B^0 \rightarrow J/\psi K^0_S$ decays. Physical Review Letters, 115:031601. Abstract
Measurements are presented of the $CP$ violation observables $S$ and $C$ in the decays of $B^0$ and $\overline{B}{}^0$ mesons to the $J/\psi K^0_S$ final state. The data sample corresponds to an integrated luminosity of $3.0\,\text{fb}^{-1}$ collected with the LHCb experiment in proton-proton collisions at center-of-mass energies of $7$ and $8\,\text{TeV}$. The analysis of the time evolution of $41500$ $B^0$ and $\overline{B}{}^0$ decays yields $S = 0.731 \pm 0.035 \, \text{(stat)} \pm 0.020 \,\text{(syst)}$ and $C = -0.038 \pm 0.032 \, \text{(stat)} \pm 0.005\,\text{(syst)}$. In the Standard Model, $S$ equals $\sin(2\beta)$ to a good level of precision. The values are consistent with the current world averages and with the Standard Model expectations.
Additional indexing
|
I am trying to determine how to find the tightest upper and lower bounds when using the squeeze theorem. If there is a general technique for determining them, I would appreciate the insight. Specifically, I have this problem:
$$\lim_{n\to \infty} \sqrt[n]{2\left(\frac12\right)^n+\left(\frac23\right)^n+3\left(\frac12\right)^n}$$
I know that:
$$\sqrt[n]{\left(\frac23\right)^n}\le \sqrt[n]{2\left(\frac12\right)^n+\left(\frac23\right)^n+3\left(\frac12\right)^n} \le \sqrt[n]{6\left(\frac23\right)^n}$$
What steps were taken to find the upper bound?
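As a numeric sanity check (my own sketch, not part of the question): the expression under the root is $5\left(\frac12\right)^n+\left(\frac23\right)^n$, and the $\left(\frac23\right)^n$ term dominates for large $n$, so the $n$-th root should approach $2/3$:

```python
# The (2/3)^n term dominates 5*(1/2)^n for large n, so the n-th root -> 2/3.
for n in (10, 100, 1000):
    val = (2 * 0.5**n + (2 / 3)**n + 3 * 0.5**n) ** (1 / n)
    print(n, val)
# the printed values approach 2/3 ≈ 0.6667
```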
|
Max Koecher (for example, in
The Minnesota Notes on Jordan Algebras and Their Applications; new edition: Springer Lecture Notes in Mathematics, number 1710, 1999), defined a domain of positivity for a symmetric nondegenerate bilinear form $B: X \times X \rightarrow \mathbb{R}$ on a finite dimensional real vector space $X$, to be an open set $Y \subseteq X$ such that $B(x,y) > 0$ for all $x,y \in Y$, and such that if $B(x,y) > 0$ for all $y \in Y$, then $x \in Y$. (More succinctly, perhaps, we could say it's a maximal set $Y \subseteq X$ such that $B(Y,Y) > 0$.) Aloys Krieger and Sebastian Walcher, in their notes to chapter 1 of this book, state that "In the language used today, a domain of positivity is a self-dual open proper convex cone." [I now believe this is wrong; see my answer below for what I think is true instead.] It's quite easy to prove that it's an open proper convex cone. (Proper means it contains no nonzero linear subspace of $X$, i.e. that its closure is pointed.) But, although I have a vague recollection of having encountered a proof once in a paper on homogeneous self-dual cones, I haven't succeeded in finding it again, or in supplying it myself. I'm pretty sure Krieger and Walcher's claim is correct—for example, the 1958 paper by Koecher that is generally cited (along with a 1960 paper by Vin'berg) for the proof of the celebrated result that the (closed) finite-dimensional homogeneous self-dual cones are precisely the cones of squares in finite dimensional formally real Jordan algebras, is titled "The Geodesics of Domains of Positivity" (but in German).
The most natural way to prove this would be to find a positive semidefinite nondegenerate $B'$, such that the cone is a domain of positivity for $B'$ as well. In principle, $B'$ might depend on the domain $Y$. (While maximal in the subset ordering, domains of positivity for a given form $B$ are not unique.) But a tempting possibility, independent of $Y$, is to transform to a basis for $X$ in which $B$ is diagonal, with diagonal elements $\pm 1$, change the minus signs to plus signs, and transform back to obtain $B'$.
To clarify the question: we will define a cone $K$ in a real vector space $X$ to be self-dual iff there
exists an inner product—that is, a positive definite bilinear form $\langle . , . \rangle: X \times X \rightarrow \mathbb{R}$—such that $K = K^*_{\langle . , . \rangle}$. Here $K^*_{\langle . , . \rangle}$ is the dual with respect to the inner product $\langle . , . \rangle$, that is $K^*_{\langle . , . \rangle} := \{ y \in X: \forall x \in K ~\langle y, x \rangle > 0 \}$. So in asking for a proof that a domain of positivity is a self-dual cone, we are asking whether some inner product $\langle . , . \rangle$ with respect to which $K$ is self-dual exists. Above, I considered the case $K=Y$, and called the inner product I was looking for, $B'$.
Does anyone know, or can anyone come up with, a proof?
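Not a proof, but a numeric illustration (my own sketch, not part of the question) of the classic example: the forward light cone $\{x : x_0 > |\vec x|\}$ is a domain of positivity for the indefinite Minkowski form $B(x,y) = x_0 y_0 - \vec x\cdot\vec y$:

```python
import numpy as np

# Sample points strictly inside the forward light cone {x : x_0 > |x_vec|}
# and check B(x, y) > 0 pairwise, where B is the (indefinite, symmetric,
# nondegenerate) Minkowski form B(x, y) = x_0 y_0 - x_vec . y_vec.
rng = np.random.default_rng(0)

def sample_cone(n, d=3):
    v = rng.normal(size=(n, d))
    x0 = np.linalg.norm(v, axis=1) + rng.uniform(0.01, 1.0, size=n)
    return np.column_stack([x0, v])

def B(x, y):
    return x[0] * y[0] - x[1:] @ y[1:]

X, Y = sample_cone(200), sample_cone(200)
print(all(B(x, y) > 0 for x in X for y in Y))  # True, by Cauchy-Schwarz
```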
|
My confusion started from thinking about the quantum superposition principle.
Several websites say that quantum superposition means any state can be represented as an infinite superposition of orthogonal states.
Also, I saw that a Hilbert space is an infinite-dimensional vector space, so I didn't have any questions at first. (Thanks for telling me this is not correct.)
When I try to consider some questions about spin, I find that I may have some wrong concepts. For example, in the Stern-Gerlach experiment, there are two possible results for the spin of an electron: up or down. A pure state is represented by $$|\Psi\rangle = \alpha |\uparrow\rangle+\beta |\downarrow\rangle$$$$\left | \alpha \right |^{2}+\left | \beta \right |^{2}=1$$
There are only two states in the superposition (thanks for telling me this is not correct), which differs from my understanding of quantum superposition.
At this moment, I recall that an operator is an infinite-dimensional matrix, and a state vector is an n×1 matrix. Then $\widehat{A}|\Psi\rangle$ is a matrix multiplication. For matrix multiplication, the number of columns in the first matrix should equal the number of rows in the second. If $|\Psi\rangle = \alpha |\uparrow\rangle+\beta |\downarrow\rangle$, then its matrix only has two rows, but the operator has infinitely many columns, therefore $\widehat{A}|\Psi\rangle$ cannot be calculated? I must have gotten something wrong; can anyone help me?
-------recently added-------
Since the electron must have gone through one of the two slits, but we have no way of knowing which one without performing a measurement, the total wavefunction can be written $|\Psi\rangle = \frac{1}{\sqrt{2}}( |\psi_1 \rangle + |\psi_2 \rangle) $, where the $\frac{1}{\sqrt{2}}$ factor is just for normalisation. So the electron is in a superposition of $|\psi_1\rangle$ and $|\psi_2\rangle$.
Then the matrix form of the wave function is $$\begin{bmatrix} \psi_1 \\ \psi_2 \end{bmatrix} $$ The operator has infinitely many columns, so no operator can act on the wave function? Or... is matrix multiplication between an infinite matrix and a finite matrix different from multiplication of finite matrices?
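For what it's worth, here is a concrete two-dimensional check (my own sketch, not part of the original question): in the spin-1/2 space the operators are finite 2×2 matrices, so their product with a two-component state is perfectly well-defined.

```python
import numpy as np

# In the two-dimensional spin space, an observable is a 2x2 Hermitian
# matrix, not an infinite one, so A|psi> is an ordinary 2x2 times 2x1 product.
Sz = 0.5 * np.array([[1, 0], [0, -1]])   # spin-z operator (units of hbar)
up = np.array([1.0, 0.0])                # |up>
down = np.array([0.0, 1.0])              # |down>
alpha = beta = 1 / np.sqrt(2)
psi = alpha * up + beta * down           # a normalised superposition
print(Sz @ psi)                          # approx [0.354, -0.354]
print(psi @ Sz @ psi)                    # expectation value <Sz> = 0
```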
|
When you first encountered the trigonometric functions it was probably in the context of "triangle trigonometry,'' defining, for example, the sine of an angle as the "side opposite over the hypotenuse.'' While this will still be useful in an informal way, we need to use a more expansive definition of the trigonometric functions. First an important note: while degree measure of angles is sometimes convenient because it is so familiar, it turns out to be ill-suited to mathematical calculation, so (almost) everything we do will be in terms of
radian measure of angles.
To define the radian measurement system, we consider the unit circle in the \(xy\)-plane:
An angle at the center of the circle determines an arc of the circle, and the arc is said to subtend the angle. In the figure, this arc is the portion of the circle from point \((1,0)\) to point \(A\). The length of this arc is the radian measure of the angle \(x\); the fact that the radian measure is an actual geometric length is largely responsible for the usefulness of radian measure. The circumference of the unit circle is \(2\pi r=2\pi(1)=2\pi\), so the radian measure of the full circular angle (that is, of the 360 degree angle) is \(2\pi\).
While an angle with a particular measure can appear anywhere around the circle, we need a fixed, conventional location so that we can use the coordinate system to define properties of the angle. The standard convention is to place the starting radius for the angle on the positive \(x\)-axis, and to measure positive angles counterclockwise around the circle. In the figure, \(x\) is the standard location of the angle \(\pi/6\), that is, the length of the arc from \((1,0)\) to \(A\) is \(\pi/6\). The angle \(y\) in the picture is \(-\pi/6\), because the distance from \((1,0)\) to \(B\) along the circle is also \(\pi/6\), but in a clockwise direction.
Now the fundamental trigonometric definitions are: the cosine of \(x\) and the sine of \(x\) are the first and second coordinates of the point \(A\), as indicated in the figure. The angle \(x\) shown can be viewed as an angle of a right triangle, meaning the usual triangle definitions of the sine and cosine also make sense. Since the hypotenuse of the triangle is 1, the "side opposite over hypotenuse'' definition of the sine is the second coordinate of point \(A\) over 1, which is just the second coordinate; in other words, both methods give the same value for the sine.
The simple triangle definitions work only for angles that can "fit'' in a right triangle, namely, angles between 0 and \(\pi/2\). The coordinate definitions, on the other hand, apply to any angles, as indicated in this figure:
The angle \(x\) is subtended by the heavy arc in the figure, that is, \(x=7\pi/6\). Both coordinates of point \(A\) in this figure are negative, so the sine and cosine of \(7\pi/6\) are both negative.
The remaining trigonometric functions can be most easily defined in terms of the sine and cosine, as usual:
\[\eqalign{ \tan x &= {\sin x\over \cos x}\cr \cot x &= {\cos x \over \sin x}\cr \sec x &= {1\over \cos x}\cr \csc x &= {1\over \sin x}\cr }\]
and they can also be defined as the corresponding ratios of coordinates.
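A small numeric sketch of the unit-circle definitions above: place the point \(A\) at arc length \(x\) along the unit circle; its coordinates are \((\cos x, \sin x)\), and the other functions are the stated ratios. (This illustration is mine, not part of the text.)

```python
import math

x = 7 * math.pi / 6
A = (math.cos(x), math.sin(x))   # both coordinates negative, as claimed above
tan_x = A[1] / A[0]              # tangent as the ratio of the coordinates
print(A)                         # approx (-0.866, -0.5)
print(math.isclose(tan_x, math.tan(x)))  # True
```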
Although the trigonometric functions are defined in terms of the unit circle, the unit circle diagram is not what we normally consider the graph of a trigonometric function. (The unit circle is the graph of, well, the circle.) We can easily get a qualitatively correct idea of the graphs of the trigonometric functions from the unit circle diagram. Consider the sine function, \(y=\sin x\). As \(x\) increases from 0 in the unit circle diagram, the second coordinate of the point \(A\) goes from 0 to a maximum of 1, then back to 0, then to a minimum of \(-1\), then back to 0, and then it obviously repeats itself. So the graph of \(y=\sin x\) must look something like this:
|
I was studying Alexander Holevo's book on
Quantum Systems, Channels and Information, which is a small but terse introduction to Quantum Information Theory. The theory of Tensor products is used almost everywhere so I decided to study the same topic from different books. There is one property that I find hard to prove and I hope that someone could offer me some assistance in proving it.
N.B: As math stackexchange people may not be comfortable with the physics style notations (a.k.a Dirac's bra and ket notation), I'll use notations from John Conway Functional Analysis. If requested, I'll rephrase the question in Dirac's notation.
Let $\mathcal{H}_1$ and $\mathcal{H}_2$ be two finite dimensional Hilbert spaces (over $\mathbb{C}$) with inner products $\langle.\rangle_1$ and $\langle.\rangle_2$ respectively.
Let $\{e_i\}$ and $\{f_j\}$ be bases for $\mathcal{H}_1$ and $\mathcal{H}_2$ respectively.
Let $T \in \mathcal{L}(\mathcal{H}_1 \otimes\mathcal{H}_2)$, where $\otimes$ represents the tensor product. Then for any operator $S \in \mathcal{L}(\mathcal{H}_2)$, show that $$Tr_{\mathcal{H}_2}(T(I_{H_1}\otimes S)) = Tr_{\mathcal{H}_2}((I_{H_1}\otimes S)T)$$
where $Tr_{H_2}(A)$ for any operator $A \in \mathcal{L}(\mathcal{H}_1 \otimes\mathcal{H}_2)$, called the
partial trace of $A$ w.r.t $\mathcal{H}_2$, is defined as that operator on $\mathcal{H}_1$ that satisfies the following (this is as per Holevo's book)
$$\langle\phi,Tr_{H_2}(A) \psi\rangle_1 = \sum_{j}\langle\phi \otimes f_j,A (\psi \otimes f_j)\rangle$$ for every $\phi,\psi \in \mathcal{H}_1$. The inner product on RHS is the inner product for $\mathcal{H}_1 \otimes \mathcal{H}_2$ and is given by $$\langle \phi_1 \otimes \phi_2, \psi_1 \otimes \psi_2 \rangle = \langle \phi_1, \psi_1 \rangle_1 \langle \phi_2, \psi_2 \rangle_2$$ for any $\phi_i, \psi_i \in \mathcal{H}_i;$ $i=1,2$.
It is somewhat similar to showing $Tr(AB)=Tr(BA)$ for the usual trace. However I had a lot of trouble showing this. Also I wanted to know if it suffices to prove the result for $\phi = \psi$ as I think I've managed some kind of proof for that part.
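While working on a proof, a quick numerical sanity check (not a proof, and my own sketch) may help build confidence in the identity. It verifies the statement for random operators on \(\mathbb{C}^2\otimes\mathbb{C}^3\), computing the partial trace by reshaping; it assumes NumPy, and the function name `ptrace2` is mine:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 2, 3                      # dims of H1 and H2

def ptrace2(A, d1, d2):
    """Partial trace over H2: reshape the (d1*d2)x(d1*d2) matrix into a
    (d1, d2, d1, d2) tensor and trace out the H2 indices (axes 1 and 3)."""
    return np.trace(A.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

T = rng.normal(size=(d1 * d2, d1 * d2)) + 1j * rng.normal(size=(d1 * d2, d1 * d2))
S = rng.normal(size=(d2, d2)) + 1j * rng.normal(size=(d2, d2))
IS = np.kron(np.eye(d1), S)        # I_{H1} tensor S

lhs = ptrace2(T @ IS, d1, d2)      # Tr_{H2}( T (I tensor S) )
rhs = ptrace2(IS @ T, d1, d2)      # Tr_{H2}( (I tensor S) T )
assert np.allclose(lhs, rhs)       # the identity holds numerically
print(lhs.shape)                   # (2, 2): an operator on H1, as expected
```

The index bookkeeping behind the check mirrors the proof: summing the H2 index of \(T(I\otimes S)\) and of \((I\otimes S)T\) gives the same double sum after relabeling the two H2 summation indices.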
|
Let $A\subset \mathbb{R}^n$ and $B\subset \mathbb{R}^n$ be closed rectangles, and let $f:A\times B \rightarrow \mathbb{R}$ be integrable. For $x\in A$, let \begin{equation} \mathscr{L}(x) = \mathbf{L} \int_B f(x, y) dy \end{equation} denote the lower integral of $f$ on $y\in B$. Then $\mathscr{L}$ is integrable on $A$ and: \begin{equation} \int_{A\times B} f = \int_A \mathscr{L} = \int_A \left(\mathbf{L} \int_B f(x, y) dy \right) dx \end{equation}
He gives the following remarks:
[Not relevant for this question] In practice it is often the case that $h(x) = \int_B f(x, y) dy$ is integrable, so that: \begin{equation} \int_{A\times B} f = \int_{A} \left(\int_B f(x, y) dy\right) dx \end{equation} can be applied. This certainly occurs if $f$ is continuous.

The worst irregularity commonly encountered is that $h(x)$ is not integrable for a finite number of $x\in A$. In this case, $\mathscr{L}(x) = \int_B f(x, y) dy$ for all but these finitely many $x$. Since $\int_A\mathscr{L}$ remains unchanged if $\mathscr{L}$ is redefined at a finite number of points, we can still write $\int_{A\times B} f = \int_{A} \left(\int_B f(x, y) dy\right) dx$, provided that $\int_B f(x, y) dy$ is defined arbitrarily, say as 0, when it does not exist.

Let $f:[0, 1] \times [0, 1]\rightarrow \mathbb{R}$ be defined by: \begin{equation} f(x, y) = \begin{cases} 1 & \text{if }x\text{ is irrational} \\ 1 & \text{if }x\text{ is rational and }y\text{ is irrational}\\ 1-\frac{1}{q} & \text{if }x = p/q\text{ in lowest terms and }y\text{ is rational} \end{cases} \end{equation} Then $f$ is integrable and $\int_{[0, 1]\times[0, 1]} f = 1$. Now $\int_{0}^1 f(x, y) dy = 1$ if $x$ is irrational and does not exist if $x$ is rational. Therefore $h$ is not integrable if $h(x) = \int_{0}^1 f(x, y) dy$ is set equal to zero when the integral does not exist.
My questions are:

On remark 3, why does it need to be a finite number of points? If we had any set of measure 0, wouldn't it be enough to just define $h(x) = 0$ on those points? In that case, the equality \begin{equation} \int_A \mathscr{L}(x) = \int_A h(x) \end{equation} would still hold. On remark 4, why does $\int_0^1 f(x, y) dy$ not exist for $x$ rational? The set of discontinuities in this case has measure 0, and by Theorem 3-8 of the same book this would be enough to guarantee the existence of this integral, wouldn't it?
|
Given $f\colon\mathbb{R} \to \mathbb{R}$ such that $f$ is differentiable on $\mathbb{R}$ and $\lim_{x \to \infty}f(x)$ does not exist, show/prove formally that there exists $x_0 \in \mathbb{R}$ such that $f'(x_0)=0$.
My strategy is to show that $f$ is not monotonic, because every monotonic function has a limit as $x \to \infty$ (either $\lim_{x \to \infty}f(x)=l$ with $l \in \mathbb{R}$, or $l=\pm \infty$). Then I can say that there exist $x_1, x_2 \in \mathbb{R}$ such that $f'(x_1)<0$ and $f'(x_2)>0$, and use the intermediate value property of the derivative (Darboux's theorem).
Can I really say that a continuous function is not monotonic just because $\lim_{x \to \infty}f(x)$ does not exist, or is that a "one way" statement?
Do all non-monotonic continuous functions have an $x_0 \in \mathbb{R}$ such that $f'(x_0)=0$?
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
|
Difference between revisions of "LaTeX:Symbols"
Revision as of 15:13, 7 March 2019
LaTeX About - Getting Started - Diagrams - Symbols - Downloads - Basics - Math - Examples - Pictures - Layout - Commands - Packages - Help
This article will provide a short list of commonly used LaTeX symbols.
Contents

Finding Other Symbols
Here are some external resources for finding less commonly used symbols:
Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use LaTeX on the web) maintains a list of supported commands. The Comprehensive LaTeX Symbol List.

Operators

Relations
Symbol Command Symbol Command Symbol Command \le \ge \neq \sim \ll \gg \doteq \simeq \subset \supset \approx \asymp \subseteq \supseteq \cong \smile \sqsubset \sqsupset \equiv \frown \sqsubseteq \sqsupseteq \propto \bowtie \in \ni \prec \succ \vdash \dashv \preceq \succeq \models \perp \parallel \mid \bumpeq
Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a few examples, plus a few other negations; it works for many of the others as well.
Symbol Command Symbol Command Symbol Command \nmid \nleq \ngeq \nsim \ncong \nparallel \not< \not> \not= \not\le \not\ge \not\sim \not\approx \not\cong \not\equiv \not\parallel \nless \ngtr \lneq \gneq \lnsim \lneqq \gneqq
To use other relations not listed here, such as =, >, and <, in LaTeX, you may just use the symbols on your keyboard.
Greek Letters
Symbol Command Symbol Command Symbol Command Symbol Command \alpha \beta \gamma \delta \epsilon \varepsilon \zeta \eta \theta \vartheta \iota \kappa \lambda \mu \nu \xi \pi \varpi \rho \varrho \sigma \varsigma \tau \upsilon \phi \varphi \chi \psi \omega
Symbol Command Symbol Command Symbol Command Symbol Command \Gamma \Delta \Theta \Lambda \Xi \Pi \Sigma \Upsilon \Phi \Psi \Omega

Arrows
Symbol Command Symbol Command \gets \to \leftarrow \Leftarrow \rightarrow \Rightarrow \leftrightarrow \Leftrightarrow \mapsto \hookleftarrow \leftharpoonup \leftharpoondown \rightleftharpoons \longleftarrow \Longleftarrow \longrightarrow \Longrightarrow \longleftrightarrow \Longleftrightarrow \longmapsto \hookrightarrow \rightharpoonup \rightharpoondown \leadsto \uparrow \Uparrow \downarrow \Downarrow \updownarrow \Updownarrow \nearrow \searrow \swarrow \nwarrow
(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)
Dots
Symbol Command Symbol Command \cdot \vdots \dots \ddots \cdots \iddots

Accents
Symbol Command Symbol Command Symbol Command \hat{x} \check{x} \dot{x} \breve{x} \acute{x} \ddot{x} \grave{x} \tilde{x} \mathring{x} \bar{x} \vec{x}
When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents:
Symbol Command Symbol Command \vec{\jmath} \tilde{\imath}
\tilde and \hat have wide versions that allow you to accent an expression:
Symbol Command Symbol Command \widehat{7+x} \widetilde{abc}

Others

Command Symbols
Some symbols are used in commands so they need to be treated in a special way.
Symbol Command Symbol Command Symbol Command Symbol Command \textdollar or $ \& \% \# \_ \{ \} \backslash
(Warning: Using $ for will result in . This is a bug as far as we know. Depending on the version of LaTeX this is not always a problem.)
European Language Symbols
Symbol Command Symbol Command Symbol Command Symbol Command {\oe} {\ae} {\o} {\OE} {\AE} {\AA} {\O} {\l} {\ss} !` {\L} {\SS}

Bracketing Symbols
In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX; type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands:
Symbol Command Symbol Command Symbol Command \{ \} \| \backslash \lfloor \rfloor \lceil \rceil \langle \rangle
You might notice that if you use any of these to typeset an expression that is vertically large, like
(\frac{a}{x} )^2
the parentheses don't come out the right size:
If we put \left and \right before the relevant parentheses, we get a prettier expression:
\left(\frac{a}{x} \right)^2
gives
And with system of equations:
\left\{\begin{array}{l}x+y=3\\2x+y=5\end{array}\right.
Gives
See that there's a dot after
\right. You must put that dot or the code won't work.
In addition to the \left and \right commands, when doing floor or ceiling functions with fractions, using
\left\lceil\frac{x}{y}\right\rceil
and \left\lfloor\frac{x}{y}\right\rfloor
give both <math>\left\lceil\frac{x}{y}\right\rceil\text{ and }\left\lfloor\frac{x}{y}\right\rfloor\text{, respectively.}</math>
And, if you type this
\underbrace{a_0+a_1+a_2+\cdots+a_n}_{x}
Gives
Or
\overbrace{a_0+a_1+a_2+\cdots+a_n}^{x}
Gives
\left and \right can also be used to resize the following symbols:
Symbol Command Symbol Command Symbol Command \uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow

Multi-Size Symbols
Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation}, \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes.
In each of the following, the two images show the symbol in display mode, then in inline mode.
Symbol Command Symbol Command Symbol Command \sum \int \oint \prod \coprod \bigcap \bigcup \bigsqcup \bigvee \bigwedge \bigodot \bigotimes \bigoplus \biguplus
|
Re: In the figure above, is the area of triangular region ABC
23 Oct 2012, 22:39
teal wrote:
does anyone have any other alternative method to solve?
You can solve sufficiency questions of geometry by drawing some diagrams too.
Attachment: Ques3.jpg
We need to compare the areas of ABC and DAB. Notice that given triangle ABC with a particular area, the length of AD is not fixed. If AD is very small (shown by the dotted lines), the area of DAB will be very close to 0. If AD is very large, the area will be much larger than the area of ABC. So for only one value of AD will the area of DAB be equal to the area of ABC.
Now look at the statements:
(1) $(AC)^2=2(AD)^2$, i.e. \(AD = AC/\sqrt{2}\). The area of ABC is decided by AC and BC, not just AC. We can vary the length of BC to see that the relation between AC and AD is not enough to say whether the areas will be the same (see diagram). So insufficient.
Attachment: Ques4.jpg
(2) ∆ABC is isosceles. We have no idea about the length of AD so insufficient.
Using both, the ratio of the sides of ABC is \(AC:BC:AB = 1:1:\sqrt{2}\). Area of ABC = 1/2*1*1 = 1/2
Area of DAB = \(1/2*AD*AB = 1/2*1/\sqrt{2}*\sqrt{2} = 1/2\)
The areas of both triangles are the same.

Karishma, Veritas Prep GMAT Instructor
24 Oct 2012, 02:04
jfk wrote:
Attachment: OG13DS79v2.png
In the figure above, is the area of triangular region ABC equal to the area of triangular region DBA ?
(1) (AC)^2=2(AD)^2
(2) ∆ABC is isosceles.
Source: OG13 DS79
The area of a right triangle can be easily expressed as half the product of the two legs. Area of triangle \(ABC\) is \(0.5AC\cdot{BC}\) and that of the triangle \(DBA\) is \(0.5AD\cdot{AB}\).
The question in fact is "Is \(AC\cdot{BC} = AB\cdot{AD}\)?"
(1) From \(AC^2=2AD^2\), we deduce that \(AC=\sqrt{2}AD\), which, if we plug into \(AC\cdot{BC} = AB\cdot{AD}\), we get \(\sqrt{2}AD\cdot{BC}=AB\cdot{AD}\), from which \(\sqrt{2}BC=AB\). This means that triangle \(ABC\) should necessarily be isosceles, which we don't know. Not sufficient.
(2) Obviously not sufficient, we don't know anything about \(AD\).
(1) and (2): If \(AC=BC=x\), then \(AB=x\sqrt{2}\), and \(AD=\frac{x}{\sqrt{2}}\). Then \(AC\cdot{BC}=AB\cdot{AD}=x^2\). Sufficient.
Answer C.

PhD in Applied Mathematics. Loves GMAT Quant questions and running.
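As an illustrative numerical check of the combined statements (my own sketch, not part of the original solution), take the legs equal to 1:

```python
import math

# Combined statements: ABC is an isosceles right triangle with legs AC = BC = 1.
AC = BC = 1.0
AB = math.sqrt(AC**2 + BC**2)   # hypotenuse = sqrt(2)
AD = AC / math.sqrt(2)          # statement (1): AC^2 = 2*AD^2

area_ABC = 0.5 * AC * BC        # right angle at C, legs AC and BC
area_DBA = 0.5 * AD * AB        # right angle at A, legs AD and AB

assert math.isclose(area_ABC, area_DBA)   # both equal 1/2, hence answer C
print(area_ABC, area_DBA)
```

Scaling all lengths by a common factor multiplies both areas by the same factor, so the equality is independent of the choice of unit.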
We cannot simplify or reduce further. Insufficient.
(1) and (2)
(1) actually answers the question we are left with in (2):
\(AB = AD*\sqrt{2}?\)
Sufficient. Answer is C.
I am not sure if I could have done this in 2 mins.
Key takeaways:
- Rephrase and simplify the question as much as possible, given known formulas etc.
- To check C: use the work you do for checking (1) also for checking (2), and vice versa.
- Don't get thrown off by complicated formulas.
Hope it helps others. Please let me know if I made any mistake!
05 Jun 2013, 18:41
I still don't get it. To compare the area of (ABC) and the area of (DBA): is 1/2(AC)(CB) = 1/2(AD)(AB)?
In hint 1: (AC)^2 = 2(AD)^2 -> AC = \sqrt{2}(AD) -> AD > AC (1)
Then look at the graph: (AB) is the hypotenuse of triangle (ABC) -> AB > CB (2)
(1) and (2): -> (AC)(CB) < (AB)(AD) -> the answer is NO, their areas are not equal. Thus, the answer must be A.
Can anyone please explain my mistake in here? Thanks
06 Jun 2013, 00:07
ltkenny wrote:
I still don't get it. To compare the area of (ABC) and the area of (DBA): is 1/2(AC)(CB) = 1/2(AD)(AB)?
In hint 1: (AC)^2 = 2(AD)^2 -> AC = \sqrt{2}(AD) -> AD > AC (1)
Then look at the graph: (AB) is the hypotenuse of triangle (ABC) -> AB > CB (2)
(1) and (2): -> (AC)(CB) < (AB)(AD) -> the answer is NO, their areas are not equal. Thus, the answer must be A.
Can anyone please explain my mistake in here? Thanks
There's your error. \(AD = AC/\sqrt{2}\), so AD < AC (not AD > AC).
18 Sep 2013, 17:14
EvaJager wrote:
jfk wrote:
Attachment: OG13DS79v2.png
In the figure above, is the area of triangular region ABC equal to the area of triangular region DBA ?
(1) (AC)^2=2(AD)^2
(2) ∆ABC is isosceles.
Source: OG13 DS79
The area of a right triangle can be easily expressed as half the product of the two legs. Area of triangle \(ABC\) is \(0.5AC\cdot{BC}\) and that of the triangle \(DBA\) is \(0.5AD\cdot{AB}\).
The question in fact is "Is \(AC\cdot{BC} = AB\cdot{AD}\)?"
(1) From \(AC^2=2AD^2\), we deduce that \(AC=\sqrt{2}AD\), which, if we plug into \(AC\cdot{BC} = AB\cdot{AD}\), we get \(\sqrt{2}AD\cdot{BC}=AB\cdot{AD}\), from which \(\sqrt{2}BC=AB\). This means that triangle \(ABC\) should necessarily be isosceles, which we don't know. Not sufficient.
(2) Obviously not sufficient, we don't know anything about \(AD\).
(1) and (2): If \(AC=BC=x\), then \(AB=x\sqrt{2}\), and \(AD=\frac{x}{\sqrt{2}}\). Then \(AC\cdot{BC}=AB\cdot{AD}=x^2\). Sufficient.
Answer C.
I'm a little confused with this method.
1) In statement 1, how can you deduce that the ABC needs to be an isosceles? 2) When you combine the statements, How do you know that \(AD=\frac{x}{\sqrt{2}}\)?
18 Sep 2013, 18:49
russ9 wrote:
EvaJager wrote:
jfk wrote:
Attachment: OG13DS79v2.png
In the figure above, is the area of triangular region ABC equal to the area of triangular region DBA ?
(1) (AC)^2=2(AD)^2
(2) ∆ABC is isosceles.
Source: OG13 DS79
The area of a right triangle can be easily expressed as half the product of the two legs. Area of triangle \(ABC\) is \(0.5AC\cdot{BC}\) and that of the triangle \(DBA\) is \(0.5AD\cdot{AB}\).
The question in fact is "Is \(AC\cdot{BC} = AB\cdot{AD}\)?"
(1) From \(AC^2=2AD^2\), we deduce that \(AC=\sqrt{2}AD\), which, if we plug into \(AC\cdot{BC} = AB\cdot{AD}\), we get \(\sqrt{2}AD\cdot{BC}=AB\cdot{AD}\), from which \(\sqrt{2}BC=AB\). This means that triangle \(ABC\) should necessarily be isosceles, which we don't know. Not sufficient.
(2) Obviously not sufficient, we don't know anything about \(AD\).
(1) and (2): If \(AC=BC=x\), then \(AB=x\sqrt{2}\), and \(AD=\frac{x}{\sqrt{2}}\). Then \(AC\cdot{BC}=AB\cdot{AD}=x^2\). Sufficient.
Answer C.
I'm a little confused with this method.
1) In statement 1, how can you deduce that the ABC needs to be an isosceles? 2) When you combine the statements, How do you know that \(AD=\frac{x}{\sqrt{2}}\)?
In statement 1, the question boils down to: Is \(\sqrt{2}BC=AB\)? or Is \(BC/AB=1/\sqrt{2}\)?
In a right triangle, the ratio of a leg to the hypotenuse will be \(1:\sqrt{2}\) only if the third side is also 1, i.e. only if the triangle is isosceles. You can figure this out from the Pythagorean theorem: \(1^2 + x^2 = \sqrt{2}^2\) gives \(x = 1\).
On combining the statements, we know that ABC is isosceles, so AC = BC. So the ratio of sides is \(AC:BC:AB = 1:1:\sqrt{2}\), i.e. the sides are \(x, x\) and \(\sqrt{2}x\). From statement 1 we know that \(AC = \sqrt{2}AD\). So \(AC = x = \sqrt{2}AD\), and hence \(AD = x/\sqrt{2}\).
This method is way too mechanical and prone to errors. Try to use the big-picture approach.
18 Nov 2013, 21:37
teal wrote:
does anyone have any other alternative method to solve?
Don't use numbers, use logic.
Vandygrad above helped me break this down. Look up what an isosceles triangle is: a triangle that has 2 equal sides and 2 equal angles. With that information on hand, you know that it's a 45/45/90 triangle. Also look up how to find the area of a triangle: A = Base*Height/2, or A = 1/2(B*H).

The question stem asks: is the area of ABC the same as the area of DBA? Yes/No?

1) No values are given for ABC or DBA. Insufficient. So you have an equation, but no known lengths, hence it is insufficient to solve for the area. Knowing the formula is irrelevant; values are important. Time saver.

2) ABC is isosceles: so you know the triangle has 2 equal sides and 2 equal angles, but there are no lengths with which to determine the area of triangle ABC. Insufficient.

Combined) C is solved for. Given that ABC is isosceles, you have two equal sides; knowing that, you can reduce statement 1 and then use the Pythagorean theorem to solve. Stop here. Sufficient.
12 Mar 2015, 08:58
I understand the mathematical/algebraic approach, but is it possible to answer this question using logic alone?
For example:
Statement 1 fixes the ratio of AC to AD, but point B is still free to move in space, so insufficient. Statement 2 fixes the ratio of AC to BC, but point D is still free to move in space, so insufficient.
Together, the ratio of all the sides are fixed, so sufficient.
12 Mar 2015, 19:53
swaggerer wrote:
I understand the mathematical/algebraic approach, but is it possible to answer this question using logic alone?
For example:
Statement 1 fixes the ratio of AC to AD, but point B is still free to move in space, so insufficient. Statement 2 fixes the ratio of AC to BC, but point D is still free to move in space, so insufficient.
Together, the ratio of all the sides are fixed, so sufficient.
|
I need to find when the sun reaches the Zenith at a given latitude.
What I've done so far:
$L=23.5 \cdot \sin(\frac{2\pi}{365.25}\cdot D) $
Here L is the latitude (<23.5) and D is number of days from March 21.
This is based on my understanding that the sun executes SHM around the celestial equator. I then solve this equation and get two roots, which are the required days.
My questions are:
Since the orbit of the earth is not exactly circular, how much deviation actually occurs?
Am I right in understanding that since the declination is continuously shifting, the sun may actually never reach the Zenith? In such a scenario, I am only calculating the day when the little circle that the sun is instantaneously travelling on, meets the Zenith.
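The two roots of the harmonic model above can be computed directly. A small illustrative Python function (the names and structure are my own, not from the question):

```python
import math

def zenith_days(latitude_deg, tilt=23.5, year=365.25):
    """Days after March 21 on which the solar declination in the simple
    harmonic model delta(D) = tilt * sin(2*pi*D/year) equals the latitude."""
    if abs(latitude_deg) > tilt:
        return []                       # sun never overhead outside the tropics
    x = math.asin(latitude_deg / tilt)  # principal root of the sine equation
    d1 = year * x / (2 * math.pi)
    d2 = year / 2 - d1                  # second root, from sin(pi - x) = sin(x)
    return sorted(d % year for d in (d1, d2))

# At the equator the sun is overhead at the two equinoxes:
print(zenith_days(0.0))   # day 0 and about 182.6 days after March 21
```

The two returned days are the two solutions of \(L = 23.5\sin(\frac{2\pi}{365.25}D)\) within one year, which is exactly what the equation in the question produces.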
|
The angular resolution of the telescope really has no direct bearing on our ability to detect Oort cloud objects, beyond how that angular resolution affects the depth to which one can detect the light from faint objects. Any telescope can detect stars, even though their actual discs are way beyond the angular resolution of the telescope.
The detection of Oort cloud objects is simply a question of detecting the (unresolved) reflected light in exactly the same way that one detects a faint (unresolved) star. Confirmation of the Oort cloud nature of the object would then come by observing at intervals over a year or so and obtaining a very large ($>2$ arcseconds) parallax.
The question amounts to how deep do you need to go? We can do this in two ways (i) a back of the envelope calculation assuming the object reflects light from the Sun with some albedo. (ii) Scale the brightness of comets when they are distant from the Sun.
(i) The luminosity of the Sun is $L=3.83\times10^{26}\ W$. Let the distance to the Oort cloud be $D$ and the radius of the (assumed spherical) Oort object be $R$. The light from the Sun incident on the object is $\pi R^2 L/4\pi D^2$. Now assume that a fraction $f$ of this is reflected uniformly into a $2\pi$ solid angle. This latter point is an approximation: the light will not be reflected isotropically, but this will represent some average over any viewing angle.
To a good approximation, as $D \gg 1$ au, we can assume that the distance from the Oort object to the Earth is also $D$. Hence the flux of light received at the Earth is$$F_{E} = f \frac{\pi R^2 L}{4\pi D^2}\frac{1}{2\pi D^2} = f \frac{R^2 L}{8\pi D^4}$$
Putting some numbers in, let $R=10$ km and let $D= 10,000$ au. Cometary material has a very low albedo, but let's be generous and assume $f=0.1$.$$ F_E = 3\times10^{-29}\left(\frac{f}{0.1}\right) \left(\frac{R}{10\ km}\right)^2 \left(\frac{D}{10^4 au}\right)^{-4}\ Wm^{-2}$$
To convert this to a magnitude, assume the reflected light has the same spectrum as sunlight. The Sun has an apparent visual magnitude of -26.74, corresponding to a flux at the Earth of $1.4\times10^{3}\ Wm^{-2}$. Converting the flux ratio to a magnitude difference, we find that the apparent magnitude of our fiducial Oort object is 52.4.
(ii) Halley's comet is similar (10 km radius, low albedo) to the fiducial Oort object considered above. Halley's comet was observed by the VLT in 2003 with a magnitude of 28.2, at a distance of 28 au from the Sun. We can now just scale this magnitude, but it scales as distance to the power of four, because the light must be received and then we see it reflected. Thus at 10,000 au, Halley would have a magnitude of $28.2 - 10 \log (28/10^{4})= 53.7$, in reasonable agreement with my other estimate. (Incidentally, my crude formula in (i) above suggests an $f=0.1$, $R=10\ km$ comet at 28 au would have a magnitude of 26.9. Given that Halley probably has a smaller $f$, this is excellent consistency.)
The observation of Halley by the VLT represents the pinnacle of what is possible with today's telescopes. Even the Hubble Ultra Deep Field only reached visual magnitudes of about 29. Thus a big Oort cloud object remains more than 20 magnitudes below this detection threshold!
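Both magnitude estimates can be reproduced with a few lines of arithmetic; this sketch of mine just re-evaluates the formulas from part (i):

```python
import math

# Inputs from the text: albedo f = 0.1, radius R = 10 km, distance D = 10^4 au.
L_sun = 3.83e26               # solar luminosity, W
au = 1.496e11                 # astronomical unit, m
f, R, D = 0.1, 1.0e4, 1.0e4 * au

# Flux at Earth: F_E = f * R^2 * L / (8 * pi * D^4)
F_E = f * R**2 * L_sun / (8 * math.pi * D**4)

# Magnitude relative to the Sun (m = -26.74 at F = 1.4e3 W/m^2).
m = -26.74 - 2.5 * math.log10(F_E / 1.4e3)
print(f"F_E = {F_E:.2e} W/m^2, m = {m:.1f}")   # ~3e-29 W/m^2, m ~ 52.4
```

The same scaling run at 28 au instead of 10^4 au reproduces the magnitude-27-ish comparison with Halley made above.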
The most feasible way of detecting Oort objects is when they occult background stars. The possibilities for this are discussed by Ofek & Nakar 2010 in the context of the photometric precision provided by Kepler. The rate of occultations (which are of course single events and unrepeatable) was calculated to be between zero and 100 in the whole Kepler mission, depending on the size and distance distribution of the Oort objects. As far as I am aware, nothing has come of this (yet).
|
This is essentially a follow up motivated by this answer to my question about the gauge transformation interpretation of identity types.
A field $$\psi:\mathcal M\to\mathbb C^n$$ is a section of the $\mathbb C^n$-bundle over the spacetime manifold $\mathcal M$. We have a local gauge transformations $$\psi(x)\mapsto \psi'(x):=U(x)\,\psi(x)\ :\ (\mathcal M\to \mathbb C^n)\to(\mathcal M\to \mathbb C^n).$$
Now consider a language with type polymorphism and the class $M$ of all its types whose elements can be put into a list. Let $\Psi$ be the polymorphic function which, for every type $X\in M$, maps an $X$-list to an integer, namely its length. For example, using Haskell syntax, if $X=\mathrm{Bool}$, then $\Psi_\mathrm{Bool}\left([\mathrm{True},\mathrm{True},\mathrm{False}]\right) = 3$. In System F notation, we have
$$\Psi:\forall X.\left([X]\to\mathrm{Int}\right).$$
The gauge transformations should correspond to maps
$$u\ :\ \forall X.\left([X]\to\mathrm{Int}\right)\ \longrightarrow\ \forall X.\left([X]\to\mathrm{Int}\right).$$
I could come up with some $u$'s, for example mapping the length function $\Psi$ to a map $\Psi':=u\,\Psi$ which instead returns 42 times the length of a list. But that would be, in physics terms, a global gauge transformation because it's not sensitive to the type $X$. I think, given that the only invariant of a finite dimensional vector space is its cardinality, it shouldn't be possible to construct a local transformation in this case. What would be a hands-on example for a local gauge transformation in this sense?
Moreover, I wanted to draw an everyday-life parallel to identity types. Well, first there is the minor obstacle that the above transformation can't be given by an expression in most languages, as types are usually not first-class objects. I guess this design choice is made because otherwise type inference would be spoiled. In homotopy type theory you have a realization of "types are terms too" (via n-categories?), and then it's possible. But in any case, I still can't quite pin down the specification of when a type is an identity type. I understand "identity" for homotopy-equivalent spaces and gauge-invariant Lagrangians, but are there non-geometric structures, specifically programming-relevant ones, which behave identically before and after the transformation? edit: I have now made two visualizations of the example here and then:
Btw., I know that HoTT does implement dependent types, not "just" parametrically polymorphic ones, but that should not be an obstacle.
|
1) Let us work in units where the speed-of-light $c=1$ is one.
In Ref. 1 is derived the radial geodesic equation for a particle in the equatorial plane
$$\tag{7.47} (\frac{dr}{d\lambda})^2+2V(r)~=~E^2, $$
with potential
$$ \tag{7.48} 2V(r)~:=~(1-\frac{r_s}{r})((\frac{L}{r})^2+\epsilon). $$
Here $\epsilon=0$ for a massless particle and $\epsilon=1$ for a massive particle. The energy $E$ and angular momentum $L$ are constants of motion (which reflect Killing-symmetries of the Schwarzschild metric); $\lambda$ is the affine parameter of the geodesic; and $r_s\equiv\frac{2GM}{c^2}$ is the Schwarzschild-radius. (More precisely, in the massive case $\epsilon=1$, the quantities $E$ and $L$ are specific quantities, i.e. quantities per unit rest mass; and $\lambda$ is proper time.)
2) By differentiating eq. (7.47) wrt. $\lambda$, we find that the condition for a circular orbit
$$r(\lambda)~\equiv~ r_{*} \qquad\Rightarrow\qquad \frac{dr}{d\lambda}~\equiv~0$$
is
$$\tag{1}V'(r_{*})~=~0\qquad\Leftrightarrow\qquad \frac{2r_{*}}{r_s}~=~3+\epsilon(\frac{r_{*}}{L})^2.$$
3) Let us next investigate an incoming particle, which has non-constant radial coordinate $\lambda\mapsto r(\lambda)$, and that is precisely on the critical border between being captured and not being captured by the black hole. It would have a radial turning point $\frac{dr}{d\lambda}=0$ precisely at the radius $r=r_{*}$, so that
$$\tag{2} 2V(r_{*})~=~E^2\qquad\Leftrightarrow\qquad (1-\frac{r_s}{r_{*}})((\frac{L}{r_{*}})^2+\epsilon)~=~E^2.$$
4)
The massless case $\epsilon=0$. Eq. (1) yields
$$\tag{3}r_{*}~=~\frac{3}{2}r_s.$$
Plugging eq. (3) into eq. (2) then yields the ratio
$$\tag{4} \frac{L}{E}~=~\frac{3}{2}\sqrt{3}r_s. $$
We next use that $L$ and $E$ are constants of motion, so that we can easily identify them at spacial infinity $r=\infty$, where special relativistic formulas apply. The critical impact parameter $b$ is precisely this ratio
$$\tag{5} b~=~\frac{L}{p}~=~\frac{L}{E}~\stackrel{(4)}{=}~\underline{\underline{\frac{3}{2}\sqrt{3}r_s}}. $$
5)
The non-relativistic case $v_{\infty}\ll 1$. The specific energy $E\approx 1$ consists mostly of rest energy. Solving eqs. (1) and (2) then leads to a unique solution
$$\tag{6} r_{*}~\approx~ 2r_s~\approx~ L.$$
The critical impact parameter $b$ becomes
$$\tag{7} b~=~\frac{L}{v_{\infty}}~\approx~\underline{\underline{2r_s\frac{c}{v_{\infty}}}}, $$
cf. Ref. 2. The cross section is $\sigma=\pi b^2$.
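Both critical impact parameters can be evaluated numerically; here is a small sketch of mine, in units where $c=1$ and $r_s=1$:

```python
import math

r_s = 1.0   # work in units of the Schwarzschild radius, with c = 1

# Massless case, eq. (5): b = (3/2)*sqrt(3)*r_s, about 2.598 r_s.
b_photon = 1.5 * math.sqrt(3) * r_s
sigma_photon = math.pi * b_photon**2     # capture cross-section = (27/4)*pi*r_s^2

# Non-relativistic case, eq. (7): b ~ 2*r_s*c/v_inf, here with v_inf = 0.01 c.
v_inf = 0.01
b_slow = 2 * r_s / v_inf

print(round(b_photon, 3), b_slow)
```

Note how the slow-particle cross-section $\sigma = \pi b^2 \propto v_{\infty}^{-2}$ grows without bound as $v_{\infty}\to 0$, as gravitational focusing dominates.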
References:
1. S. Carroll, Lecture Notes on General Relativity, Chapter 7, p. 172-179. The pdf file is available from his website.
2. V.P. Frolov and I.D. Novikov, Black Hole Physics: Basic Concepts and New Developments, p. 48.
|
1. Solution: For any $\varphi\in\mathcal{L}(V,\mathbb{F})$, if $\dim \mathrm{range}\, \varphi=0$, then $\varphi$ is the zero map. If $\dim \mathrm{range}\, \varphi=1$, then $\varphi$ is surjective since $\dim\mathbb{F}=1$. Moreover, $\dim…
Exercises 1, 2 and 4. For Problem 2, please also see Carson Rogers's comment. 4. Solution: For any $f\in \mathcal{L}(V_1\times \cdots\times V_m,W)$ and given $i\in \{1,\cdots,m\}$, define $f_i:V_i\to W$ by…
1. Suppose $T\in\mathcal{L}(U, V)$ and $S\in\mathcal{L}(V, W)$ are both invertible linear maps. Prove that $ST\in\mathcal{L}(U, W)$ is invertible and that $(ST)^{-1}=T^{-1}S^{-1}$. Solution: See Linear Algebra Done…
1. Solution: Suppose for some basis $v_1$, $\cdots$, $v_n$ of $V$ and some basis $w_1$, $\cdots$, $w_m$ of $W$, the matrix of $T$ has at most $\dim \mathrm{range}\, T-1$ nonzero…
1. Give an example of a linear map $T$ such that $\dim \mathrm{null}\, T=3$ and $\dim \mathrm{range}\, T = 2$. Solution: Assume $V$ is a 5-dimensional vector space with a basis…
1. Suppose $b,c\in \mathbb{R}$. Define $T: \mathbb{R}^3 \to \mathbb{R}^2$ by \[T(x, y, z)= (2x-4y+3z+b, 6x+cxyz).\] Show that $T$ is linear if and only if $b=c=0$. Solution: If $T$…
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
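The invariant formula can be checked numerically on a hand-picked example. The sketch below takes $\omega = x\,dy$, $X = \partial_x$, $Y = x\,\partial_y$ on $\Bbb R^2$ (so $d\omega = dx\wedge dy$ and $d\omega(X,Y) = x$) and evaluates the right-hand side $X\omega(Y) - Y\omega(X) - \omega([X,Y])$ with finite differences; the vector fields and base point are my choices for illustration.

```python
# Check d(omega)(X,Y) = X(omega(Y)) - Y(omega(X)) - omega([X,Y])
# for omega = x dy, X = d/dx, Y = x d/dy on R^2 (a hand-picked example).

def omega(V, p):
    # omega = x dy pairs with the dy-component of the vector field V at p
    x, y = p
    vx, vy = V(p)
    return x*vy

def X(p): return (1.0, 0.0)        # X = d/dx
def Y(p): return (0.0, p[0])       # Y = x d/dy

def d_along(V, g, p, h=1e-6):
    # directional derivative of the scalar function g along V at p
    x, y = p
    vx, vy = V(p)
    return (g((x + h*vx, y + h*vy)) - g((x - h*vx, y - h*vy)))/(2*h)

def bracket(V, W, p, h=1e-6):
    # Lie bracket computed componentwise: [V,W]^i = V(W^i) - W(V^i)
    def comp(i):
        return (d_along(V, lambda q: W(q)[i], p, h)
                - d_along(W, lambda q: V(q)[i], p, h))
    return (comp(0), comp(1))

p = (1.5, 0.3)
rhs = (d_along(X, lambda q: omega(Y, q), p)
       - d_along(Y, lambda q: omega(X, q), p)
       - omega(lambda q: bracket(X, Y, q), p))
# For omega = x dy, d(omega) = dx ^ dy, so d(omega)(X,Y) at p equals x = 1.5
print(rhs)
```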
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor, (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not $s$.)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_1,\ldots,X_n \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
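A brute-force closure computation makes the subgroup orders easy to check; the sketch below (permutations stored as tuples, with a minimal helper of my own rather than any standard library) confirms that $\langle(1\,2)\rangle$, $\langle(1\,2\,3)\rangle$, $\langle(1\,2\,3\,4)\rangle$ and $\langle(1\,2),(1\,2\,3)\rangle$ have orders $2$, $3$, $4$ and $6$.

```python
def compose(p, q):
    # (p . q)(i) = p[q[i]]; permutations of {0,1,2,3} stored as tuples
    return tuple(p[i] for i in q)

def generated(gens):
    # closure of the generators under composition
    elems = set(gens) | {tuple(range(len(gens[0])))}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

t2 = (1, 0, 2, 3)   # the 2-cycle (1 2), acting on {0,1,2,3}
t3 = (1, 2, 0, 3)   # the 3-cycle (1 2 3)
t4 = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4)

for gens in ([t2], [t3], [t4], [t2, t3]):
    print(len(generated(gens)))   # 2, 3, 4, 6
```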
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
|
I have the function $$f(x)=\frac{2x}{10+x}$$ and I am asked to find its power series representation, which I found to be $$\sum_{n=0}^{\infty} (-1)^{n} \frac{2x^{n+1}}{10^{n+1}}$$ and I found the radius of convergence to be $R=10$. All up to here is clear and easy, but when I am asked to find the first few terms I tried the following:
For $c_0$ I plugged $x=0$ into my original function $f(x)$, which equals $0$ [correct answer].
For $c_1$ I plugged $x=0$ into the 1st derivative of $f(x)$, which equals $\frac{1}{5}$ [correct answer].
For $c_2$ I plugged $x=0$ into the 2nd derivative of $f(x)$ [incorrect answer].
For $c_3$ I plugged $x=0$ into the 3rd derivative of $f(x)$ [incorrect answer].
For $c_4$ I plugged $x=0$ into the 4th derivative of $f(x)$ [incorrect answer].
So if $c_0$ and $c_1$ are correct why would the others not be as well? Am I missing something profoundly important?
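One way to see what is going on: the Taylor coefficient is $c_n = f^{(n)}(0)/n!$, not $f^{(n)}(0)$ itself, and the two agree for $n=0,1$ only because $0!=1!=1$. The sketch below checks this numerically with central finite differences (the stencils and step size are my choices); dividing each derivative by $n!$ reproduces the series coefficients $2(-1)^{n+1}/10^{n}$.

```python
import math

def f(x):
    return 2*x/(10 + x)

# central finite-difference stencils for the n-th derivative at 0
stencils = {
    1: {-1: -0.5, 1: 0.5},
    2: {-1: 1.0, 0: -2.0, 1: 1.0},
    3: {-2: -0.5, -1: 1.0, 1: -1.0, 2: 0.5},
    4: {-2: 1.0, -1: -4.0, 0: 6.0, 1: -4.0, 2: 1.0},
}

def deriv(n, h=0.05):
    return sum(w*f(k*h) for k, w in stencils[n].items())/h**n

# Taylor coefficient c_n = f^(n)(0)/n!  (dividing by n! is the key step);
# the series from the question gives c_n = 2*(-1)^(n+1)/10^n for n >= 1
for n in range(1, 5):
    c_series = 2*(-1)**(n + 1)/10**n
    c_taylor = deriv(n)/math.factorial(n)
    print(n, c_series, c_taylor)
```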
|
Yes. Denote the entrywise absolute value of a complex matrix $X$ by $|X|$. In general, if $|X|\le Y$ entrywise, then $\rho(X)\le\rho(|X|)\le\rho(Y)$.
Your $P$ is an orthogonal projection. Therefore the moduli of its entries are bounded above by $1$. Hence $|A|\le|B||D|$ and $\rho(A)\le\rho(|B||D|)$. So, it suffices to prove that the latter quantity is strictly smaller than $1$.
Suppose $(\lambda,v)$ is an eigenpair of $|B||D|$. Clearly, $\|\,|B|\,\|_\infty=\|B\|_\infty=1$. As $D$ is diagonal, $\|\,|D|\,\|_\infty=\|D\|_2<1$. Therefore $|\lambda|\|v\|_\infty=\|\,|B|\,|D|\,v\|_\infty\le\|\,|B|\,\|_\infty\|\,|D|\,\|_\infty\|v\|_\infty<\|v\|_\infty$. Thus $\rho(|B||D|)<1$. $\square$
Remark. Your conjecture is true because $D$ is diagonal. Otherwise it is false in general. Here is a counterexample. Let $B=\pmatrix{-\frac12&-\frac12\\ 0&1}$. Its largest singular value $\sigma_1$ satisfies $\sigma_1^2=\frac14(3+\sqrt{5})$, so $\sigma_1>1$. Let $B=USV^T$ be its singular value decomposition. Set $P=U\pmatrix{1\\ &0}U^T$ and $D=rVU^T$ for some $r$ that is smaller than but sufficiently close to $1$. Then $A=PBD=r\sigma_1U\pmatrix{1\\ &0}U^T$, whose spectral radius is $r\sigma_1>1$.
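A quick numerical check of this counterexample (with NumPy; $r=0.99$ is an assumed example value close to $1$):

```python
import numpy as np

B = np.array([[-0.5, -0.5],
              [ 0.0,  1.0]])
U, S, Vt = np.linalg.svd(B)          # B = U diag(S) Vt
sigma1 = S[0]                        # sigma1^2 = (3 + sqrt(5))/4 > 1

r = 0.99                             # < 1 but close to 1
P = U @ np.diag([1.0, 0.0]) @ U.T    # orthogonal projection
D = r * Vt.T @ U.T                   # ||D||_2 = r < 1, but D is NOT diagonal

A = P @ B @ D
rho = max(abs(np.linalg.eigvals(A))) # spectral radius = r*sigma1 > 1
print(sigma1**2, rho)
```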
|
First, note that this is not specific to probability or the Stieltjes integral. You could have asked the question about a plain vanilla integral $$\int_{-\infty}^\theta f(x)dx = \int_{-\infty}^\infty H(\theta-x)f(x)dx$$ where $H$ is the step function. Actually, we can simplify even further and consider the derivative with respect to $\theta$ of $$\theta= \int_0^\theta dx = \int_0^\infty H(\theta-x)dx$$ where $\theta > 0.$ Everything about your question still applies here.
We know the derivative of the LHS is one. So what's wrong with the logic of bringing the derivative inside the integral and saying it's zero? First, it's not always okay to bring a derivative under the integral sign. Second, I understand the logic that it's zero almost everywhere so the integral must be zero, but that's applying rules about functions to something that's fundamentally a distribution (or at least you must treat it as a distribution if you bring the derivative inside). By the same logic the Dirac delta function is zero almost everywhere so it must integrate to zero... in fact we shall see that this analogy is exact.
Let's go back to the definition of the derivative. We want $$\lim_{h\to0}\frac{f(\theta+h)-f(\theta)}{h} = \lim_{h\to 0} \frac{1}{h}\left(\int_0^\infty H(\theta+h-x)dx - \int_0^\infty H(\theta-x)dx\right).$$ Of course these integrals are just silly ways of writing $\theta+h$ and $\theta$ so they converge and there's no question of us combining them under one and we can write our expression $$ \lim_{h\to 0} \int_0^\infty \frac{H(\theta+h-x)-H(\theta-x)}{h}dx.$$
Now we face the all-important question of whether we can bring the limit inside the integral. If we do, we know that we get zero at almost every $x.$ But at the all-important point $x=\theta$ the derivative is undefined. This is where all the action is.
So instead of trying to bring the limit inside, let's look at what the inside function looks like as a function of $x$ for fixed $h.$ It's a rectangle of width $h$ and height $\frac{1}{h}$ with support between $\theta$ and $\theta+h$. Hmm. This is a rectangle of area one that gets arbitrarily tall and thin as $h\to 0.$ So it is an approximation to a Dirac delta. Of course for any $h\ne 0$ the integral is one so the limit is also one, as we knew it had to be.
So, we see the "derivative of the step function" behaves exactly as a delta function. This gives us the distribution identity $$ \frac{d}{d\theta} H(\theta-x) = \delta(\theta-x)$$
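The limiting behaviour can be seen numerically: for each fixed $h$, the difference quotient integrates to $1$ while its support shrinks to the point $x=\theta$. A sketch (the grid and the value $\theta=2$ are my choices):

```python
# Numerical check: g_h(x) = (H(theta+h-x) - H(theta-x))/h integrates to 1
# for every h, while its support (theta, theta+h] shrinks as h -> 0:
# a "nascent" Dirac delta.
H = lambda u: 1.0 if u >= 0 else 0.0
theta = 2.0

def integral_gh(h, a=0.0, b=10.0, n=100000):
    dx = (b - a)/n
    s = 0.0
    for i in range(n):
        x = a + (i + 0.5)*dx       # midpoint rule
        s += (H(theta + h - x) - H(theta - x))/h * dx
    return s

for h in (1.0, 0.1, 0.01):
    print(h, integral_gh(h))       # ~1 for every h
```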
|
While the answer for your particular case depends on a lot of factors, you can estimate how hot an object gets that is facing the sun (and whose back side is insulated) using the Stefan-Boltzmann law. Let us assume that the material is perfectly black (emissivity of 1.0) and facing the sun which delivers approximately 1 kW of power per square meter at the surface of the Earth (a bit more at the top of the atmosphere).
The black surface will heat up until it loses that kW as fast as it is coming in. It will do so by a combination of radiation and convection.
We all know that things inside a car can get very hot. This is because the windows let the sunlight in but block the long-wavelength radiation re-emitted inside (the "greenhouse" effect). But for this calculation I will initially ignore this effect, although it does play a role.
Stefan-Boltzmann: $$J = \sigma T^4$$
To lose 1 kW over 1 m² requires a temperature of $$T=\sqrt[4]{\frac{1000}{5.67\cdot 10^{-8}}}\approx 364~\mathrm{K}$$
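The same number can be reproduced in a couple of lines (constants rounded as in the text):

```python
# Equilibrium temperature of a black surface absorbing 1 kW/m^2 and
# radiating from its front face only (Stefan-Boltzmann, emissivity 1).
sigma = 5.67e-8        # W m^-2 K^-4, Stefan-Boltzmann constant
J = 1000.0             # W/m^2, approximate solar irradiance at the surface

T = (J/sigma)**0.25
print(T)               # ~364 K, i.e. about 91 degrees Celsius
```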
This assumes only the surface facing the sun loses heat by radiation: in other words this is only valid for a black surface mounted on a good insulator. It is an upper limit when we ignore convection and greenhouse effect (which are opposite - convection will cause the material to be cooler, and greenhouse effect will make it hotter). It is clearly not enough to melt your lettering.
Incidentally there are some interesting links on the temperature of objects in the sun. I quite liked this one describing measurements on cars. It shows that things left in the sun can get significantly hotter than the surrounding air (although there are some issues with the method used, the conclusions are mostly valid).
One final note - how hot things are is not the same as how hot they seem to be. Touching a good conductor (metal) will exaggerate the apparent temperature (hot feels hotter and cold feels colder) when compared to touching thermally insulating materials; and using non contact thermometers can lead to errors when comparing surfaces of different colors (emissivity) - but not enough to explain the reading of 160 F mentioned in the above article.
|
The energy levels of a particle in a one-dimensional box are given by $$E_n = \frac{n^2\hbar^2 \pi ^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.\tag1$$ We see that the energy is quantised and depends on the quantum number $n$. We also see that the energy difference between those levels gets larger with $n$. The only variable in this equation is the length of the box $L$: the larger the box, the denser the states.
For an extended pi system like the ones in the dyes we make various simplifications. First and foremost, we reduce a multidimensional problem to one dimension. We also treat the barriers of the box as infinitely large potentials. Then we can only estimate the length of the box. We are completely ignoring the fact that the molecule will have vibrations. And there are a couple more assumptions we make. All of them will influence the accuracy of the calculation. You should discuss these issues thoroughly and come to your own conclusions.
pH13 outlined a procedure that has been established in the literature, which neatly correlates the experimental findings with the calculations.
[1] I personally find this method a little bit sketchy and not very intuitive at all. My first issue is the choice of the box itself; it seems rather arbitrary to me, as they decide to ignore the phenyl rings, add half a bond length beyond each end, and choose the average bond distance to be that of benzene.
In this case, the box extends from nitrogen to nitrogen plus one-half of a bond length beyond each nitrogen. That is, $L = (p + 3)l$, where $p =$ the number of carbon atoms in the chain and $l$ is the average bond distance in the chain. The number of electrons in the chain, $N$, is $p + 3$.
[...]
[...] we let $l = 1.39 \times 10^{-8}~\mathrm{cm}$ (the bond distance in benzene) [...]
For a slightly simpler model I have estimated the length of the box to be $L=1.20~\mathrm{nm}=1.2\times10^{-9}~\mathrm{m}$ and the number of π electrons to be $N= 22$.
With the reasonable assumption that orbitals are doubly occupied, we can hence find the quantum number of the energy level of the HOMO to be $n_\mathrm{HOMO}=N/2$ and, in analogy to that, $n_\mathrm{LUMO}=N/2+1$. The energy of the photon needed to excite one electron from the HOMO to the LUMO is, to a first approximation, the difference $\Delta E_\mathrm{gap}$. Therefore we will find the formula you used after some transformations.\begin{align}\Delta E &= E_\mathrm{LUMO} - E_\mathrm{HOMO}\\ &= \frac{n_\mathrm{LUMO}^2 h^2}{8m_\mathrm{e}L^2} - \frac{n_\mathrm{HOMO}^2 h^2}{8m_\mathrm{e}L^2}\\ &= \frac{h^2}{8m_\mathrm{e}L^2}\left[n_\mathrm{LUMO}^2 -n_\mathrm{HOMO}^2\right]\\ &= \frac{h^2}{8m_\mathrm{e}L^2}\left[\left(\frac{N}{2}+1\right)^2 -\left(\frac{N}{2}\right)^2 \right]\\ &= \frac{h^2}{8m_\mathrm{e}L^2}\left[N+1\right]\tag2\\\Delta E &= \frac{hc}{\lambda}\\\lambda &= \frac{8m_\mathrm{e}cL^2}{h(N+1)}\tag3\end{align}
Plugging in the data, we find $\Delta E = 5.965~\mathrm{eV}$ and $\lambda=207~\mathrm{nm}$. Well, that's almost the same as you got, and that is very disappointing. Could it be, that there are a couple of approximations, that do not work well?
Farrell also admits that his approach is flawed:
Agreement between predicted transition energies and experimental energies, Table 1, is not good (although the agreement is rather surprising in view of the approximations made).
Therefore a parameter $\gamma$, which is adjusted to give a good fit to the experimental results, is introduced. The length of his box becomes $L=(p+3+\gamma)l$.
Elongating the conjugated bridge between the nitrogen atoms, lengthening the box, will result in a better fit, even without that adjustment. In other words, the less important the end groups (phenyl moieties) become, the better the simple model performs. That is not surprising at all. The following table is reproduced from that paper (with slight modification). The number of carbons in the chain is $p$.
\begin{array}{lcccc}p & \lambda_\mathrm{exp}/\mathrm{nm} & \Delta E_{\mathrm{exp}}/\mathrm{eV} & \Delta E_{\mathrm{th.}\, \gamma=0}/\mathrm{eV} & \Delta E_{\mathrm{th.}\, \gamma=0.68}/\mathrm{eV}\\\hline3 & 422 & 2.94 & 3.78 & 3.05\\5 & 555 & 2.23 & 2.74 & 2.32\\7 & 650 & 1.91 & 2.14 & 1.88\\9 & 760 & 1.63 & 1.76 & 1.57\\\end{array}
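The table can be reproduced, to within about $0.01~\mathrm{eV}$ of rounding in the constants, directly from $\Delta E = h^2(N+1)/(8m_\mathrm{e}L^2)$ with $L=(p+3+\gamma)l$, $N=p+3$ and $l = 1.39\ \text{Å}$; a sketch:

```python
h  = 6.626e-34         # J s, Planck constant
me = 9.109e-31         # kg, electron mass
eV = 1.602e-19         # J per eV
l  = 1.39e-10          # m, average bond length (benzene)

def gap_eV(p, gamma=0.0):
    N = p + 3                      # number of pi electrons in the chain
    L = (p + 3 + gamma)*l          # box length
    return h**2*(N + 1)/(8*me*L**2)/eV

for p in (3, 5, 7, 9):
    # close to the table's columns for gamma = 0 and gamma = 0.68
    print(p, round(gap_eV(p), 2), round(gap_eV(p, 0.68), 2))
```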
Conclusion
The failure of the model for short chains is not surprising. It will get better the longer the box is and the less influential the end moieties are. This system is especially well suited for discussing such approximations and gaining a basic understanding of quantum chemistry. Stay sceptical.
Reference: John J. Farrell, J. Chem. Educ. 1985, 62, 351-352 [mirror]
Addendum
As conjugation increases, the energy gap between HOMO and LUMO should decrease. With this relationship, as $N$ increases, so does the energy gap. Makes no sense
It actually does. The more particles you fill into such a well, the higher the energy of the ensemble gets. What makes the energy gap smaller and the states more dense is the extension of the actual box: the larger $L$, the closer the states are together. If the box grows faster than the number of electrons you fill into it, then the gap shrinks. The gap is linearly dependent on $N$, but inversely quadratic in $L$.
|
Let $T$ and $\epsilon$ be two independent random variables. Let $R=T+\epsilon$. Given this information, we can derive the posterior distribution $f(T=t|R=r)$. I have the following question: I haven't figured out how to write mathematically the posterior distribution that I am interested in. I will try to describe it from the perspective of simulation.
1. Generate many realizations of $T$, i.e., $\mathcal{T}=\{t_i\}_{i=1}^N$ with $N$ being very large.
2. For each $t_i$, generate one $\epsilon_i$. So we have $\mathcal{E}=\{\epsilon_i\}_{i=1}^N$.
3. Construct $r_i=t_i+\epsilon_i$. So we have $\mathcal{R}=\{r_i\}_{i=1}^N$.
4. Remove all $i$'s such that $r_i<\bar{r}$, where $\bar{r}$ is some pre-specified threshold. Hence, we have the truncated sets $\mathcal{T}',~\mathcal{E}',~\mathcal{R}'$.
Then, when you observe $R=r'_i\in\mathcal{R}'$, what is the density that this $r'_i$ is generated from $t'_i$? I mean, something like $f(T'=t'_i|R'=r'_i)$.
I think the simulation process I've described is clear but I don't know how to express this last posterior distribution mathematically. Also, how to derive this posterior distribution?
I think the distribution I have described is $f(T=t|R=r,R\geq \bar{r})$, is it correct? How to derive this distribution?
I was thinking the following way \begin{align*} f(T=t|R=r,R\geq \bar{r})&=\frac{f(T=t,R=r,R\geq \bar{r})}{f(R=r,R\geq \bar{r})}\\ &=\frac{f(R=r,R\geq \bar{r}|T=t)f(T=t)}{f_R(r)1_{\{r\geq\bar{r}\}}}\\ &=\frac{f(\epsilon=r-t,\epsilon\geq \bar{r}-t|T=t)f_T(t)}{f_R(r)1_{\{r\geq\bar{r}\}}}\\ &=\frac{f(\epsilon=r-t,\epsilon\geq \bar{r}-t)f_T(t)}{f_R(r)1_{\{r\geq\bar{r}\}}},~\because~\epsilon\perp T\\ &=\frac{f_\epsilon(r-t)1_{\{r\geq\bar{r}\}}f_T(t)}{f_R(r)1_{\{r\geq\bar{r}\}}}\\ &=\frac{f_\epsilon(r-t)f_T(t)}{f_R(r)}1_{\{r\geq \bar{r}\}} \end{align*} Is this derivation correct?
I am suspect about my derivation. Because if I don't do truncation, then \begin{equation} f(T=t|R=r)=\frac{f_\epsilon(r-t)f_T(t)}{f_R(r)}. \end{equation}
I think truncation is a big thing. But from my derivation it seems that truncation just adds an indicator function to the original "untruncated posterior". Hence, I think my derivation is wrong.
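A Monte Carlo sanity check supports the derivation above: conditioning on the exact value $R=r$ with $r\geq\bar r$ already implies the truncation event, so the truncated and untruncated posteriors coincide at such $r$. The sketch below uses hypothetical standard-normal choices for $T$ and $\epsilon$ (so that analytically $T\mid R=r \sim N(r/2, 1/2)$) and a small observation window; these are my assumptions, not part of the question.

```python
import random
random.seed(0)

# Hypothetical example: T ~ N(0,1), eps ~ N(0,1), R = T + eps.
# Analytically, T | R=r ~ N(r/2, 1/2); the extra event {R >= rbar}
# is implied by observing r >= rbar, so it should change nothing.
rbar, r, width = 1.5, 2.0, 0.1
kept = []
for _ in range(400000):
    t = random.gauss(0, 1)
    e = random.gauss(0, 1)
    R = t + e
    if R < rbar:                   # the truncation step
        continue
    if abs(R - r) < width:         # "observe" R = r (within a small window)
        kept.append(t)

m = sum(kept)/len(kept)
v = sum((t - m)**2 for t in kept)/len(kept)
print(m, v)                        # ~ r/2 = 1.0 and ~ 1/2
```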
|
I really like the proof contained in the paper
Derivation of the Biot-Savart Law from Ampere's Law Using the Displacement Current from Robert Buschauer (2013)
It's simple and it fulfills the role of convincing the reader.
Basically the author works with one point charge $q$ situated at the origin of the Z axis, $(0,0,0)$. He supposes the particle moves along the Z axis toward positive Z values with velocity $v$. By symmetry, he takes the magnetic field line to be an arbitrary circle of radius $c$ centred at $(0,0,a)$. The angle, measured from the origin $(0,0,0)$, between any point on the circle and the centre of the circle is $\alpha$.
The starting point is the 4th Maxwell equation of electromagnetism, the Ampère-Maxwell law, which states that an electric flux changing in time through an area produces a circulation of the magnetic field. This law generates a magnetic force which, using special relativity, can be shown to be just a plain electric force in another reference frame.
$$\oint B\, dl = \mu_0\epsilon_0 \; d/dt(\int_A E.dA)$$
On the left side, the solution consists of integrating $\oint B\, dl$ around this circle (the butterfly-net ring). As $B$ is constant along it by symmetry, we have
$$\oint B\, dl = 2\pi c B \tag{1}$$
On the right side, $\mu_0 \epsilon_0\, d/dt \left(\int_A E\; dA\right)$, as the surface (the butterfly net) we choose a spherical cap of radius $r$, to ensure that all its points have the same value of the electric field:
$$ E = \frac{q}{4\pi\epsilon_0 r^2}$$
Let's first calculate the integral on the right side. We adopt here a slightly different convention in spherical coordinates. Just to remember, the element for integration in spherical coordinates is $\; r^2 \sin \phi \, dr \, d\phi \, d\theta$.
Let $\theta$ (in the XY plane) vary from $0$ to $2\pi$, and let the angle $\phi$ with the vertical (Z axis) run from $0$ to $\alpha$:
$$\Phi_E = \int_A E\; dA = \frac{q}{4\pi \epsilon_0 r^2} \int_A dA = \frac{q}{4\pi \epsilon_0 r^2}\, r^2 \int_{0}^{2\pi} d\theta \int_{0}^{\alpha} \sin \phi\; d\phi = \frac{q}{4\pi \epsilon_0}\, 2\pi (1 - \cos \alpha) = \frac{q}{2\epsilon_0} (1 - \cos\alpha)$$
Thus
$$\Phi_E = \frac{q}{2\epsilon_0} (1 - \cos\alpha)$$
$$\frac{d \Phi_E}{dt} = - \frac{q}{2\epsilon_0} \frac{d \cos \alpha}{dt} \tag{2}$$
Putting $\alpha$ as a function of $z$, we have, by the chain rule:
$$\frac{d \cos \alpha}{dt} = \frac{d \cos\alpha}{dz} \, \frac{dz}{dt} \tag{3}$$
However, as $z$ is decreasing with the motion at velocity $v$, we have
$$\frac{dz}{dt} = -v \tag{4}$$
On the other hand: $$ \cos \alpha = z / r = z / \sqrt{c^2 + z^2}$$
Differentiating, $\dfrac{d \cos \alpha}{dz} = \dfrac{c^2}{r^3}$ where $r = \sqrt{c^2 + z^2}$.
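The derivative step is easy to verify with a central finite difference (the values of $c$ and $z$ are arbitrary test numbers):

```python
import math

# Check d(cos alpha)/dz = c^2 / r^3 with r = sqrt(c^2 + z^2)
c, z, h = 0.7, 1.3, 1e-6
cos_a = lambda zz: zz/math.sqrt(c**2 + zz**2)

numeric  = (cos_a(z + h) - cos_a(z - h))/(2*h)   # central difference
analytic = c**2/(c**2 + z**2)**1.5
print(numeric, analytic)
```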
By (1), (2), (3), (4):
$$2\pi c B = \frac{q \mu_0}{2}\, v\, \frac{c^2}{r^3}$$
$$B = \frac{\mu_0 q v c}{4\pi r^3}$$
but $\sin \alpha = c / r$, so we can substitute $c = r \sin \alpha$:
$$B = \mu_0 q v \sin \alpha /4\pi r^2 $$
Vectorizing, we have a cross product:
$$\vec B = \frac{\mu_0 q \; \vec v \times \vec r}{4\pi r^3}$$
At some infinitesimal point we can consider an element of electric current as a point charge, so we can add the contributions of other point charges by integration (any force is additive!) for use in real applications. Thus we have, in scalar notation:
$$dB = \frac{\mu_0\, dq \; v \sin \alpha}{4\pi r^2}$$
Considering $dq = i\,dt$ and $v = ds/dt$, we finally reach the Biot-Savart law:
$$dB = \frac{\mu_0\, i \; ds \sin \alpha}{4\pi r^2}$$
|
The CO2 molecules are trying to change phase, but the surrounding liquid exerts a pressure. If you consider a simplified mechanical equilibrium between a gas bubble and the surrounding liquid
$$\Delta P \pi R^2 = \sigma 2\pi R$$
where $\sigma$ is the surface tension, you have a relation between bubble size and the pressure difference between liquid and vapour. In the middle of the bulk liquid, there is just not enough 'generation' of vapour molecules in a small region to achieve a stable bubble. Consider now the formation of a bubble on a perfectly flat surface of the glass and you'll see that the LHS halves, as the area on which the pressure difference acts is now a hemisphere, not a sphere.
This is however still not enough (and also perfectly flat surfaces don't really exist, but that's a minor point for an engineer like me). For a real surface, there will be tiny grooves. The bubbles don't form at the sharp points, but inside the tiny cavities in between, as the balance between the area exposed to the pressure difference and the achievable surface tension is favourable.
The bubble grows inside these cavities until it is large enough to sustain a spherical shape, and then rises to the top because of buoyancy.
There is one more big clue about these cavities: because of the favourable balance of surface forces, they contain small gas pockets from the start, as the liquid is prevented from entering by surface tension during wetting (pouring in the champagne); it would be very hard to effectively evacuate these. These give perfect nucleation sites.
If you're interested, I recommend having a look at Rohsenow's work on nucleate boiling (which is more or less the same phenomenon). Makes boiling some pasta seem a lot less trivial.
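For a feel of the numbers in the force balance $\Delta P = 2\sigma/R$: with a rough, assumed surface tension of $0.05~\mathrm{N/m}$ for champagne, a micron-sized bubble already needs an excess pressure of about one atmosphere, which is why homogeneous nucleation in the bulk is so hard.

```python
# Pressure excess needed to sustain a spherical bubble of radius R:
# Delta P * pi R^2 = sigma * 2 pi R  =>  Delta P = 2 sigma / R
sigma = 0.05          # N/m, rough surface tension of champagne (assumed)
for R in (1e-7, 1e-6, 1e-5):
    dP = 2*sigma/R
    print(R, dP)      # Pa; ~1 atm (1e5 Pa) already at R = 1 micron
```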
|
The reason for using $H_0: \mu = 12$ is that, among the set of values that correspond to $\mu \geq 12$, $\mu = 12$ is the most conservative (also called least favorable) configuration.
Let us be more precise about what conservative means here. Say we set a certain value of the observed statistic $\hat{\mu}$ at which we are willing to consider the null hypothesis false (also called the critical value $c$). $c$ should naturally be smaller than 12 to provide evidence against the null. Since $\hat{\mu}$ is just one of many possible realizations of the statistic, there is always some possibility of observing a value of $\hat{\mu}$ below $c$ even if $\mu \geq 12$. Luckily, if we know the distribution of the test statistic, we can compute the probability of observing a value of $\hat{\mu}$ that is smaller than or equal to $c$. This probability is called the probability of Type 1 error.
You can compute the probability of Type 1 error for all configurations that correspond to the hypothesis $\mu \geq 12$. In the figure I plot the distributions of the test statistic under two such configurations, $C_1: \mu = 12$ and $C_2: \mu = 13$. I also plot the probabilities of Type 1 error under the critical value $10.36$ for the two hypotheses (the shaded area under the respective curve). It is easy to see that the probability of Type 1 error is always bigger for the configuration $C_1: \mu = 12$ than for any other $C_2$ that would also correspond to the hypothesis $\mu \geq 12$. I assumed normality here, but this result holds for any distribution that the test statistic can take.
To sum up, the common practice (which also makes a lot of sense!) is to choose, within the set of configurations of the test that correspond to the null hypothesis, the one that gives you the highest probability of Type 1 error.
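The comparison in the figure can be reproduced with the standard normal CDF; the unit standard deviation below is an assumption chosen so that the critical value $10.36$ corresponds to roughly a $5\%$ Type 1 error at $\mu=12$:

```python
import math

def Phi(x):                      # standard normal CDF via math.erf
    return 0.5*(1 + math.erf(x/math.sqrt(2)))

c = 10.36                        # critical value from the answer
sd = 1.0                         # assumed sd of the test statistic

alpha_12 = Phi((c - 12)/sd)      # P(Type 1 error) under mu = 12
alpha_13 = Phi((c - 13)/sd)      # P(Type 1 error) under mu = 13
print(alpha_12, alpha_13)        # ~0.05 vs ~0.004: mu = 12 is least favorable
```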
|
After I reviewed briefly the so-called black hole wars, and expressed my doubts about black hole complementarity, there are still many things to be said. However, I would like to skip over various solutions proposed in the last decades, and discuss the one that I consider most natural.
All the discussions taking place within the last year around black hole complementarity and the firewall are concentrated near the event horizon.
But why look for the information at the event horizon, when it was supposed to be lost at the singularity?
Remember the old joke about the policeman helping a drunk man search for his lost keys under a streetlight, only to find out later that the drunk man actually lost them in the park? When asked why he searched for the keys under the streetlight, the drunk man replied that the park was too dark. In science, this behavior is called the streetlight effect.
By analogy, the dark place is the singularity, because it is not well understood. The lightened place is the event horizon. This is Schwarzschild's equation describing the metric of the black hole:
$${d} s^2 = -(1-\frac{2m}{r}){d} t^2 +(1-\frac{2m}{r})^{-1}{d} r^2 + r^2{d}\sigma^2,$$
where ${d}\sigma^2 = {d}\theta^2 + \sin^2\theta {d} \phi^2$ is the metric of the unit sphere $S^2$, $m$ the mass of the body, and the units were chosen so that $c=1$ and $G=1$.
Schwarzschild's metric has two singularities, one at the event horizon, and the other one at the "center". But in coordinates like those proposed by Eddington-Finkelstein, or by Kruskal-Szekeres, the metric becomes regular at the event horizon, showing that this singularity is due to the coordinates used by Schwarzschild. Fig. 1. represents the Penrose-Carter diagram of the Schwarzschild black hole. The yellow lines represent the event horizon, and we see that the metric is regular there.
Figure 1. Penrose-Carter diagram of the Schwarzschild black hole.
While at the event horizon the darkness was dispersed by finding appropriate coordinates, it persisted at the central singularity, represented with red. This is a spacelike singularity, and it is not actually at the center of the black hole, but in the future. This kind of singularity could not be removed completely, because it was not due exclusively to the coordinates.
However, in my paper Schwarzschild Singularity is Semi-Regularizable, I showed that we can eliminate the part of the singularity due to the coordinates, by the transformation $r = \tau^2$, $t = \xi\tau^4$. The Schwarzschild metric in the new coordinates becomes
$${d} s^2 = -\frac{4\tau^4}{2m-\tau^2}{d} \tau^2 + (2m-\tau^2)\tau^4(4\xi{d}\tau + \tau{d}\xi)^2 + \tau^4{d}\sigma^2.$$
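This transformation can be spot-checked numerically (a sketch of my own, not from the paper): treating the differentials as arbitrary small numbers, substituting $r=\tau^2$, $t=\xi\tau^4$ into the $dt$, $dr$ part of the Schwarzschild line element should reproduce the $d\tau$, $d\xi$ part of the metric above (the angular part trivially becomes $\tau^4 d\sigma^2$).

```python
import random

# Numeric spot-check of the coordinate change r = tau**2, t = xi*tau**4.
random.seed(0)

def check_once():
    m = random.uniform(0.5, 2.0)
    tau = random.uniform(0.1, 0.9)      # tau**2 < 2m, inside the horizon
    xi = random.uniform(-1.0, 1.0)
    dtau = random.uniform(-1e-3, 1e-3)  # differentials as small numbers
    dxi = random.uniform(-1e-3, 1e-3)

    dr = 2*tau*dtau                      # dr from r = tau**2
    dt = 4*xi*tau**3*dtau + tau**4*dxi   # dt from t = xi*tau**4
    f = 1 - 2*m/tau**2                   # Schwarzschild factor 1 - 2m/r

    original = -f*dt**2 + dr**2/f        # t-t and r-r parts of ds^2
    claimed = (-4*tau**4/(2*m - tau**2)*dtau**2
               + (2*m - tau**2)*tau**4*(4*xi*dtau + tau*dxi)**2)
    return abs(original - claimed)

max_err = max(check_once() for _ in range(1000))
```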
The metric is still singular, because it is degenerate, but the coordinate singularity was removed. The metric extends analytically through the singularity $r=0$, and the Penrose-Carter diagram becomes as in Fig. 2.
Figure 2. Penrose-Carter diagram of the extended Schwarzschild black hole.
In the new coordinates, the singularity behaves well. Although the metric is degenerate at the singularities, in arXiv:1105.0201 I showed that this kind of metric allows the construction of invariant geometric objects in a natural way. These objects can be used to write evolution equations beyond the singularity.
The Schwarzschild metric is eternal, but in the case relevant to the problem of information loss, the black hole is created and then evaporates. The analytic extension through the singularity presented earlier also works for this case, and the Penrose-Carter diagram is shown in Fig. 3.B.
Figure 3. A. Penrose diagram for the evaporating black hole, standard scenario. B. Penrose diagram for the evaporating black hole, when the solution is analytically extended through the singularity (as in arXiv:1111.4837). In the new solution, the geometry can be described in terms of finite quantities, without changing Einstein's equation. Fields can pass through the singularity and beyond it.
Information is no longer blocked at the singularity. The physical fields can evolve beyond the singularity, carrying the information, which is therefore recovered if the black hole evaporates.
This is not a modification of General Relativity; it is just a change of variables. The proposed objects remain finite at the singularity, and the standard equations can be rewritten in terms of these new, finite objects. These objects are natural and don't require a modification of Einstein's General Relativity. The proposed fix is made not by changing the theory, but by changing our understanding of the mathematics expressing the theory.
One principal inspiration for me in finding this solution was the work of David Finkelstein, especially his brilliant solution to the problem of the apparent singularity at the event horizon. Imagine how happy I was when, at the end of December 2012, I received the following encouragement from him by email:
Dear Cristi Stoica, I write concerning your paper "Schwarzschild Singularity is Semi-Regularizable" (arXiv 1111.4837v2). I write first to thank you for the deep pleasure that this paper afforded me. Your regularization of the central true singularity of the Schwarzschild metric is a remarkable and beautiful example of thinking outside the box. It is a natural, generally covariant, and deep result on a problem that has drawn wide attention, that of gravitational singularities. You found your solution easily once you conceived the idea, and yet it has been overlooked for these many decades by the truly great minds in the field.
[...] With good wishes for your future explorations, David Finkelstein
|
I'm trying to find the Fourier transform of $xe^{-\frac{x^2}{2}}$. I was given the hint that when $g(x) = e^{-\pi x^2}$, $\mathcal{F}(g) = g(x)$.
This is how far I have come. So we start off with the usual Fourier transform: $$ \int_{\mathbb{R}} e^{-2 \pi i x \xi} xe^{-x^2/2} \mathrm{d}x $$
Next we do a variable change, letting $x = \sqrt{2 \pi} y$ and therefore $\mathrm{d}x = \sqrt{2 \pi}\mathrm{d}y$.
$$ \int_{\mathbb{R}} e^{-2 \pi i \sqrt{2 \pi} y \xi} \sqrt{2 \pi} ye^{-\pi y^2} \sqrt{2 \pi} \mathrm{d}y = 2 \pi \int_{\mathbb{R}} e^{-2 \pi i \sqrt{2 \pi} y \xi} ye^{-\pi y^2} \mathrm{d}y $$
Now we have something in the form $e^{-\pi x^2}$ inside the integral, but I'm not sure how to use this to my advantage.
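Not an answer, but a numerical check one can run (a sketch; the closed form below is my own guess obtained from the hint via the scaling and differentiation rules, using the convention $\hat{f}(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$): one expects $\mathcal{F}\big(xe^{-x^2/2}\big)(\xi) = -i(2\pi)^{3/2}\,\xi\,e^{-2\pi^2\xi^2}$. Since the integrand is odd times $\cos$ in its real part, only the imaginary part survives.

```python
import math

# Trapezoidal approximation of F(x e^{-x^2/2})(xi); returns (re, im).
def fourier_numeric(xi, L=12.0, n=200_000):
    h = 2*L/n
    re = im = 0.0
    for k in range(n + 1):
        x = -L + k*h
        w = 0.5 if k in (0, n) else 1.0   # trapezoid endpoint weights
        f = x*math.exp(-x*x/2)
        re += w*f*math.cos(2*math.pi*x*xi)*h
        im += -w*f*math.sin(2*math.pi*x*xi)*h
    return re, im

xi = 0.3
re, im = fourier_numeric(xi)
# conjectured closed form: -i (2 pi)^{3/2} xi exp(-2 pi^2 xi^2)
closed_form_im = -(2*math.pi)**1.5 * xi * math.exp(-2*math.pi**2*xi**2)
```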
|
The Schrödinger equation is the basis for understanding quantum mechanics, but how can one derive it? I asked my instructor, but he told me that it came from the experience of Schrödinger and his experiments. My question is,
can one derive the Schrödinger equation mathematically?
Be aware that a "mathematical derivation" of a physical principle is, in general, not possible. Mathematics does not concern the real world, we always need empirical input to decide which mathematical frameworks correspond to the real world.
However, the Schrödinger equation can be seen arising naturally from classical mechanics through the process of quantization. More precisely, we can motivate quantum mechanics from classical mechanics purely through Lie theory, as is discussed here, yielding the quantization prescription
$$ \{\cdot\,,\cdot\} \mapsto \frac{1}{\mathrm{i}\hbar}[\cdot\,,\cdot]$$
for the classical Poisson bracket. Now, the classical evolution of observables on the phase space is
$$ \frac{\mathrm{d}}{\mathrm{d}t} f = \{f,H\} + \partial_t f$$
and so its quantization is the operator equation
$$ \frac{\mathrm{d}}{\mathrm{d}t} f = \frac{\mathrm{i}}{\hbar}[H,f] + \partial_t f$$
which is the equation of motion in the Heisenberg picture. Since the Heisenberg and Schrödinger picture are unitarily equivalent, this is a "derivation" of the Schrödinger equation from classical phase space mechanics.
(Note: not all steps can be included here, it would be too long to remain in the context of a forum-discussion-answer.)
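To make the quantization prescription concrete, here is a small numerical sketch of my own (with $\hbar=m=\omega=1$, all assumed for illustration): for the harmonic oscillator, the quantized evolution equation above gives $\dot{x} = \frac{\mathrm{i}}{\hbar}[H,x] = p/m$, i.e. $[H,x] = -\mathrm{i}p$. On a truncated ladder-operator basis this identity holds exactly, entry by entry.

```python
import math

N = 8  # truncation dimension of the oscillator basis

def zeros():
    return [[0j]*N for _ in range(N)]

a = zeros()                     # annihilation operator: a[n][n+1] = sqrt(n+1)
for n in range(N - 1):
    a[n][n + 1] = math.sqrt(n + 1)

adag = [[a[j][i].conjugate() for j in range(N)] for i in range(N)]  # a-dagger

def mat_add(A, B, s=1):         # A + s*B
    return [[A[i][j] + s*B[i][j] for j in range(N)] for i in range(N)]

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def scal(c, A):
    return [[c*A[i][j] for j in range(N)] for i in range(N)]

x = scal(1/math.sqrt(2), mat_add(a, adag))        # x = (a + a†)/sqrt(2)
p = scal(1j/math.sqrt(2), mat_add(adag, a, -1))   # p = i(a† - a)/sqrt(2)
H = mat_mul(adag, a)            # H = a†a (the constant 1/2 drops out of [H,x])

comm = mat_add(mat_mul(H, x), mat_mul(x, H), -1)  # [H, x]
rhs = scal(-1j, p)                                # -i p

err = max(abs(comm[i][j] - rhs[i][j]) for i in range(N) for j in range(N))
```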
In the path integral formalism, each path is attributed a wavefunction $\Phi[x(t)]$ that contributes to the total amplitude to go from, let's say, $a$ to $b$. The $\Phi$'s have the same magnitude but differing phases, with the phase given by the classical action $S$ as defined in the Lagrangian formalism of classical mechanics. So far we have: $$ S[x(t)]= \int_{t_a}^{t_b} L(\dot{x},x,t) dt $$ and $$\Phi[x(t)]=e^{(i/\hbar) S[x(t)]}$$
Denote the total amplitude by $K(a,b)$, given by: $$K(a,b) = \sum_{\text{paths } a \to b}\Phi[x(t)]$$
To approach the wave equation describing the wavefunction as a function of time, we start by dividing the time interval between $a$ and $b$ into $N$ small intervals of length $\epsilon$. For better notation, let's use $x_k$ for a given path between $a$ and $b$, and denote the full amplitude, including its time dependence, as $\psi(x_k,t)$ ($x_k$ taken over a region $R$):
$$\psi(x_k,t)=\lim_{\epsilon \to 0} \int_{R} \exp\left[\frac{i}{\hbar}\sum_{i=-\infty}^{+\infty}S(x_{i+1},x_i)\right]\frac{dx_{k-1}}{A} \frac{dx_{k-2}}{A}... \frac{dx_{k+1}}{A} \frac{dx_{k+2}}{A}... $$
Now consider the above equation if we want to know the amplitude at the next instant in time $t+\epsilon$:
$$\psi(x_{k+1},t+\epsilon)=\int_{R} \exp\left[\frac{i}{\hbar}\sum_{i=-\infty}^{k}S(x_{i+1},x_i)\right]\frac{dx_{k}}{A} \frac{dx_{k-1}}{A}... $$
The above is similar to the equation preceding it, the difference being that the added factor $\exp[(i/\hbar)S(x_{k+1},x_k)]$ does not involve any of the terms $x_i$ with $i<k$, so the integration over those terms can be performed with this factor taken outside. All this reduces the last equation to:
$$\psi(x_{k+1},t+\epsilon)=\int_{R} \exp\left[\frac{i}{\hbar}S(x_{k+1},x_k)\right]\psi(x_k,t)\frac{dx_{k}}{A}$$
Now a quote from Feynman's original paper, regarding the above result:
This relation giving the development of $\psi$ with time will be shown, for simple examples, with suitable choice of $A$, to be equivalent to Schroedinger's equation. Actually, the above equation is not exact, but is only true in the limit $\epsilon \to 0$ and we shall derive the Schroedinger equation by assuming this equation is valid to first order in $\epsilon$. The above need only be true for small $\epsilon$ to the first order in $\epsilon$.
In his original paper, following up the calculations for 2 more pages, from where we left things, he then shows that:
Canceling $\psi(x,t)$ from both sides, and comparing terms to first order in $\epsilon$ and multiplying by $-\hbar/i$ one obtains
$$-\frac{\hbar}{i}\frac{\partial \psi}{\partial t}=\frac{1}{2m}\left(\frac{\hbar}{i}\frac{\partial}{\partial x}\right)^2 \psi + V(x) \psi$$ which is Schroedinger's equation.
I would strongly encourage you to read his original paper, don't worry it is really well written and readable.
References: Space-Time Approach to Non-Relativistic Quantum Mechanics by R. P. Feynman, April 1948.
Feynman Path Integrals in Quantum Mechanics, by Christian Egli
According to Richard Feynman in his Lectures on Physics, Volume 3 (paraphrased): "The Schrödinger equation cannot be derived." According to Feynman, it was imagined by Schrödinger, and it just happens to provide the predictions of quantum behavior.
Fundamental laws of physics cannot be derived (turtles all the way down and all that).
However, they can be motivated in various ways. Direct experimental evidence aside, you can argue by analogy - in case of the Schrödinger equation, comparisons to Hamiltonian mechanics and the Hamilton-Jacobi equation, fluid dynamics, Brownian motion and optics have been made.
Another approach is arguing by mathematical 'beauty' or necessity: You can look at various ways to model the system and go with the most elegant approach consistent with constraints you imposed (ie reasoning in the vein of 'quantum mechanics is the only way to do X' for 'natural' or experimentally necessary values of X).
While it is in general impossible to derive the laws of physics in the mathematical sense of the word, a strong motivation or rationale can be given most of the time. This impossibility arises from the very nature of the physical sciences, which attempt to stretch the all-too-imperfect logic of the human mind onto the natural phenomena around us. In doing so, we often make connections or intuitive hunches which happen to be successful at explaining the phenomena in question. However, if one had to point out which logical sequence was used in producing the hunch, he would be at a loss — more often than not such a logical sequence simply does not exist.
"Derivation" of the Schroedinger equation and its successful performance at explaining various quantum phenomena is one of the best (read audacious, mind-boggling and successful) examples of the intuitive thinking and hypothesizing which lead to great success. What many people miss is that Schroedinger simply took the ideas of Luis de Broglie further to their bold conclusion.
In 1924 de Broglie suggested that every moving particle could have a wave phenomenon associated with it. Note that he didn't say that every particle was a wave or vice versa. Instead, he was simply trying to wrap his mind around the weird experimental results which were produced at the time. In many of these experiments, things which were typically expected to behave like particles also exhibited a wave behavior. It is this conundrum which led de Broglie to produce his famous hypothesis of $\lambda = \frac{h}{p}$. In turn, Schroedinger used this hypothesis as well as the result from Planck and Einstein ($E = h\nu$) to produce his eponymous equation.
It is my understanding that Schroedinger originally worked using Hamilton-Jacobi formalism of classical mechanics to get his equation. In this, he followed de Broglie himself who also used this formalism to produce some of his results. If one knows this formalism, he can truly follow the steps of the original thinking. However, there is a simpler, more direct way to produce the equation.
Namely, consider a basic harmonic phenomenon:
$ y = A \sin (\omega t - \delta)$
for a particle moving along the $x$-axis,
$ y = A \sin \frac{2\pi v}{\lambda} \left(t - \frac{x}{v}\right) $
Suppose we have a particle moving along the $x$-axis. Let's call the wave function (similar to the electric field of a photon) associated with it $\psi (x,t)$. We know nothing about this function at the moment. We simply gave a name to the phenomenon which experimentalists were observing and are following de Broglie's hypothesis.
The most basic wave function has the following form: $\psi = A e^{-i\omega(t - \frac{x}{v})}$, where $v$ is the velocity of the particle associated with this wave phenomenon.
This function can be re-written as
$\psi = A e^{-i 2 \pi \nu (t - \frac{x}{\nu\lambda})} = A e^{-i 2 \pi (\nu t - \frac{x}{\lambda})}$, where $\nu$ is the frequency of oscillations and $E = h \nu$. We see that $\nu = \frac{E}{2 \pi \hbar}$. The latter is, of course, the result from Einstein and Planck.
Let's bring the de Broglie's result into this thought explicitly:
$\lambda = \frac{h}{p} = \frac{2\pi \hbar}{p}$
Let's substitute the values from de Broglie's and Einstein's results into the wave function formula.
$\psi = A e^{-i 2 \pi (\frac{E t}{2 \pi \hbar} - \frac{x p}{2 \pi \hbar})} = A e^{- \frac{i}{\hbar}(Et - xp)} \quad (*)$
this is a wave function associated with the motion of an unrestricted particle of total energy $E$, momentum $p$ and moving along the positive $x$-direction.
We know from classical mechanics that the energy is the sum of kinetic and potential energies.
$E = K.E. + P.E. = \frac{m v^2}{2} + V = \frac{p^2}{2 m} + V$
Multiply the energy by the wave function to obtain the following:
$E\psi = \frac{p^2}{2m} \psi + V\psi$
Next, rationale is to obtain something resembling the wave equation from electrodynamics. Namely we need a combination of space and time derivatives which can be tied back into the expression for the energy.
Let's now differentiate $(*)$ with respect to $x$.
$\frac{\partial \psi}{\partial x} = A (\frac{ip}{\hbar}) e^{\frac{-i}{\hbar}(Et - xp)}$
$\frac{\partial^2 \psi}{\partial x^2} = -A \left(\frac{p^2}{\hbar^2}\right) e^{\frac{-i}{\hbar}(Et - xp)} = -\frac{p^2}{\hbar^2} \psi$
Hence, $p^2 \psi = -\hbar^2 \frac{\partial^2 \psi}{\partial x^2}$
The time derivative is as follows:
$\frac{\partial \psi}{\partial t} = - A \frac{iE}{\hbar} e^{\frac{-i}{\hbar}(Et - xp)} = \frac{-iE}{\hbar}\psi$
Hence, $E \psi = \frac{-\hbar}{i} \frac{\partial \psi}{\partial t}$
The expression for energy we obtained above was $E\psi = \frac{p^2}{2m} \psi + V\psi$
Substituting the results involving time and space derivatives into the energy expression, we obtain
$\frac{-\hbar}{i} \frac{\partial \psi}{\partial t} = \frac{- \hbar ^2}{2m} \frac{\partial ^2 \psi}{\partial x^2} + V\psi$
This, of course, became better known as the Schroedinger equation.
There are several interesting things in this "derivation." One is that both the Einstein's quantization and de Broglie's wave-matter hypothesis were used explicitly. Without them, it would be very tough to come to this equation intuitively in the manner of Schroedinger. What's more, the resulting equation differs in form from the standard wave equation so well-known from classical electrodynamics. It does because the orders of partial differentiation with respect to space and time variables are reversed. Had Schrodinger been trying to match the form of the classical wave equation, he would have probably gotten nowhere.
However, since he looked for something containing $p^2\psi$ and $E\psi$, the correct order of derivatives was essentially pre-determined for him.
Note: I am not claiming that this derivation follows Schroedinger's work. However, the spirit, thinking and the intuition of the times are more or less preserved.
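The end result can also be checked numerically (a small sketch of my own, with $\hbar = m = 1$ and arbitrary assumed values of $p$ and a constant $V$): the plane wave $\psi = e^{-\frac{i}{\hbar}(Et - xp)}$ with $E = \frac{p^2}{2m} + V$ should satisfy $-\frac{\hbar}{i}\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V\psi$, which we verify with finite differences.

```python
import cmath

# Assumed illustrative values; V is taken constant so E = p^2/(2m) + V.
hbar, m, p, V = 1.0, 1.0, 0.7, 0.3
E = p**2/(2*m) + V

def psi(x, t):
    return cmath.exp(-1j/hbar*(E*t - x*p))

x0, t0, h = 0.4, 0.2, 1e-4
# central finite differences for the time and space derivatives
dpsi_dt = (psi(x0, t0 + h) - psi(x0, t0 - h))/(2*h)
d2psi_dx2 = (psi(x0 + h, t0) - 2*psi(x0, t0) + psi(x0 - h, t0))/h**2

lhs = -(hbar/1j)*dpsi_dt
rhs = -hbar**2/(2*m)*d2psi_dx2 + V*psi(x0, t0)
```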
In Mathematics you derive theorems from axioms and the existing theorems.
In Physics you derive laws and models from existing laws, models and observations.
In this case we can start from the observations of the photoelectric effect to get the relation between photon energy and frequency. Then continue with special relativity, where we observed that the speed of light is constant in all reference frames. From this, by generalizing the kinetic energy, we can get the mass-energy equivalence. Combining the two, we can assign a mass to the photon, and consequently we can get the momentum of a photon as a function of the wavenumber.
Generalizing the energy-frequency and the momentum-wavenumber relations, we have the de Broglie relations, which are applicable to any particle.
Assuming that a particle has $0$ energy when it stands still (you can do that; it doesn't cause too much trouble if you leave the constant term there, since in the later phases you can simply put it into the left side of the equation), we can deal with the kinetic energy alone. Substituting the non-relativistic kinetic energy into the relation and reordering, we have the following dispersion relation:
$$\omega = \frac{\hbar k^2}{2m}$$
The wave equation can be derived from the dispersion relation of the matter waves using the way I mentioned in that answer.
In this case we will need the Laplacian and the first time derivative:
$$\nabla^2 \Psi + \partial_t \Psi = -k^2\Psi - \frac{i \hbar k^2}{2m}\Psi$$
Multiplying the time derivative with $-\frac{2m}{i\hbar}$, we can zero the right side:
$$\nabla^2 \Psi - \frac{2m}{i\hbar} \partial_t \Psi = -k^2\Psi + k^2\Psi = 0$$
We can reorder it to obtain the time dependent schrödinger equation of a free particle:
$$ \partial_t \Psi = \frac{i\hbar}{2m} \nabla^2 \Psi$$
To my mind there are two senses in which we can "derive" a result in physics. New theories try to address the shortcomings of older ones by upgrading what we already have, giving new results. They also recover old results. I suppose we can call both derivations.
For example, the TISE and TDSE were first obtained because quantum mechanics said that, where classical mechanics would imply $f=0$, we should have $\hat{f}\left|\psi\right\rangle = 0$, with $\hat{f}$ the operator promotion of $f$, which in this case is $f=E-\frac{p^2}{2m}-V$ with operators $E=i\hbar\partial_t,\,\mathbf{p}=-i\hbar\boldsymbol{\nabla}$. (Some results become the weaker $\left\langle\psi\right|\hat{f}\left|\psi\right\rangle = 0$, e.g. with $f=\frac{d\mathbf{p}}{dt}+\boldsymbol{\nabla}V$, so I'm not being entirely honest here. But we expect $\hat{E}$-eigenstates are important because the probability distribution of $E$ is conserved.)
Note that the above paragraph summarises how Schrödinger was derived in the first sense, and its ending parenthesis hints at how Newton's second law was "derived" in my second sense. And everyone talking about path integrals is hinting at a type-2 derivation for both results (path integrals obtain a transition amplitude in terms of $e^{iS}$ with $S$ the classical action now miraculously coming out of a hat, so technically our direct recovery is of Lagrangian mechanics rather than the equivalent Newtonian formulation).
I'll leave people to fight over which, if either, type of derivation is "valid" or "better", but physical insight requires frequent doses of both. I think it's worth distinguishing them in a discussion like this.
|
In a previous question about differences in Newtonian and GTR gravitational force for the case of star-planet gravitational interactions, an approximate relationship was noted between the expressions for the gravitational force exerted by a star on an orbiting planet (the solution is valid in situations with low gravity, slow orbital speed, and a spherical source). This was based on texts in Walter 2008 and Goldstein et al 2001.
Walter derived an approximate relationship assuming a circular orbit. Goldstein focussed on deriving an orbit-averaged expression for perihelion precession.
On re-examining these texts it seems to me that GTR (General Theory of Relativity) provides more than just an orbit-averaged approximation. Rather, it provides a phase-specific formula for total acceleration (in Schwarzschild space-time).
The equation of motion for a Newtonian orbit is $$u''_\theta + u_\theta =\frac{\mu}{h^2}$$ where $u_\theta = 1/r_\theta$, $u'_\theta = \mathrm{d}u_\theta/\mathrm{d}\theta$, $h$ is the specific angular momentum (which is constant), $\mu = GM$ is the gravitational parameter, $G$ is Newton's universal gravitational constant, $M$ is the mass of the star, $r$ is the distance from star to planet, and $\theta$ is the true anomaly.
Using $h^2 = Vt_\theta^2 \,r_\theta^2$ where $Vt$ is the instantaneous transverse component of velocity (in vector terms = full velocity minus radial velocity) we can obtain $$u''_\theta + u_\theta = \frac{1}{Vt_\theta^2} \left( \frac{\mu}{r_\theta^2} \right)$$ where the term in brackets is the Newtonian acceleration.
Walter presents the following equation for a GTR orbit (Schwarzschild model)
$$u''_\theta + u_\theta =\frac{\mu}{h^2} + \frac{3\mu}{c^2}\,u_\theta^2$$
Now, using $h^2 = Vt_\theta^2 \,r_\theta^2$ and $u_\theta^2 = 1/r_\theta^2$ we get
$$u''_\theta + u_\theta =\frac{\mu}{Vt_\theta^2 \,r_\theta^2} + \frac{3\mu}{c^2 \,r_\theta^2} =\frac{1}{Vt_\theta^2 } \left( \frac{\mu}{r_\theta^2} \,+ \frac{\mu}{r_\theta^2} \,\frac{3\,Vt_\theta^2}{c^2 } \right)$$ where the terms in brackets are the Newtonian acceleration and the extra acceleration according to GTR. And so the ratio of Newtonian acceleration to GTR-specific acceleration, at any angle $\theta$, is $$1 \, \mathrm{to} \, \frac {3 Vt_\theta^2}{c^2}.$$ Goldstein emphasises that the GR acceleration does not indicate a velocity-dependence, so an alternative, more palatable, distance-dependent form of the ratio of accelerations, at any angle $\theta$, would be $$1 \, \mathrm{to} \, \frac {3 h^2}{c^2 \, r_\theta^2} \equiv 1 \, \mathrm{to} \, \frac {3 GM.P}{c^2 \, r_\theta^2} $$ where $h$ (specific angular momentum) and $P$ (semi-latus rectum) are the values for the orbit of the particular subject planet.
And so (ignoring other massive perturbing bodies) the magnitude of the total instantaneous radial force on the planet (of mass $m$) towards the Sun is given by:- $$ \mathrm{F/m} \,= \, \frac {GM}{r_\theta^2} + \frac {3 GM.GM.P}{c^2 \, r_\theta^4} $$
N.B. The GTR/Schwarzschild equations relate to proper time and Schwarzschild radial distance, not their Newtonian equivalents so strictly the ratio of accelerations is still an approximation.
Is this analysis valid, or have I missed something?
Update
I have accepted Stan Liou's answer as being very helpful in (a) providing a derivation from GTR/Schwarzschild of the formulae presented by Walter and Goldstein et al. and (b) indicating the imperfect correspondence between GTR terms/concepts and Newtonian terms/concepts.
My understanding is as follows. In a Newtonian central-force elliptical-orbit model, the addition of an extra centre-directed acceleration ($dV/dt$), which varies through the orbit as $(+3 V_t^2/c^2\,\equiv \, +3 h^2/r^2c^2)$ times the co-temporal standard Newtonian centre-directed gravitational acceleration, can be shown (by first-order perturbation analysis or numerical modelling) to produce apsidal precession at a rate (radians per orbit) defined by a Newtonian formula $$\epsilon = 24 \pi^3 \frac{a^2}{T^2 c^2(1-e^2)}.$$ This formula is well known (see for example Wikipedia, Apsidal precession). According to, but not clearly referenced by, this article, the formula (or an algebraic equivalent using other terms) was well known c. 1895, i.e. before the publications of Gerber 1898 and Einstein 1915. The formula predicts very well the long-term values of apsidal precession determined using Newtonian models from observations of solar planets.
Various writers (Einstein, Goldstein, Walter, presumably many others) present mathematical arguments indicating how an identical formula can be derived from Einstein's GTR. The presented arguments may involve approximations (e.g. Walter's use of near-circular orbits, Goldstein's use of orbit-averaged precession) and non-mathematical "correspondences" between the concepts/terms of the GTR model and the concepts/terms of the Newtonian model.
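As a worked numerical example of the precession formula quoted above (a sketch; the orbital elements below are standard approximate values for Mercury, my own inputs rather than numbers from the question):

```python
import math

# Evaluate epsilon = 24 pi^3 a^2 / (T^2 c^2 (1 - e^2)) for Mercury.
a = 5.791e10          # semi-major axis, m (approximate)
T = 87.969 * 86400.0  # orbital period, s (approximate)
e = 0.2056            # eccentricity
c = 2.998e8           # speed of light, m/s

eps = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))  # rad per orbit

# convert to the conventionally quoted arcseconds per century
orbits_per_century = 100 * 365.25 * 86400.0 / T
arcsec_per_century = eps * orbits_per_century * (180/math.pi) * 3600
```

This lands at roughly 43 arcseconds per century, the familiar anomalous perihelion precession of Mercury.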
|
Differentiation is "linear" meaning you can tackle sums one term at a time. So $\sin(\pi x)$ and $\cos(\pi x)$ can be differentiated separately and then put back together.
As for each of those terms, $\sin(\pi x)$ can be pulled apart as: $y=\sin(u)$ and $u=\pi x$. Then the chain rule says: $$\frac{dy}{dx} = \frac{dy}{du}\frac{du}{dx} = \cos(u) \cdot \pi = \pi \cos(\pi x)$$
Likewise for the cosine term. So...$$\frac{d}{dx}\left[\sin(\pi x)+\cos(\pi x)\right] = \pi \cos(\pi x) - \pi \sin(\pi x)$$
Putting this together with the quotient rule one finds that...$$\frac{d}{dx}\left[\frac{\cos(\pi x)}{\sin(\pi x)+\cos(\pi x)}\right]$$ $$= \frac{-\pi \sin(\pi x)\left(\sin(\pi x)+\cos(\pi x)\right)-\cos(\pi x)\left(\pi \cos(\pi x) - \pi \sin(\pi x)\right)}{\left(\sin(\pi x)+\cos(\pi x)\right)^2}$$I'll leave it to you to simplify. :)
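If you want to double-check the quotient-rule result before simplifying, a quick numeric spot-check (a sketch of my own) compares the formula against a central finite difference at a sample point:

```python
import math

def f(x):
    return math.cos(math.pi*x)/(math.sin(math.pi*x) + math.cos(math.pi*x))

def df_formula(x):
    # the unsimplified quotient-rule expression from above
    s, c = math.sin(math.pi*x), math.cos(math.pi*x)
    return (-math.pi*s*(s + c) - c*(math.pi*c - math.pi*s))/(s + c)**2

x0, h = 0.1, 1e-6
fd = (f(x0 + h) - f(x0 - h))/(2*h)   # central finite difference
```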
|
Let's say that you are standing on the ground and you observe two horizontal forces of equal magnitude, but acting in opposite directions, on the block, because of which the net vector sum of all horizontal forces is zero, i.e. $$\sum F_x=0 \implies \left(a_{x}\right)_{net}=0 $$Thus the block moves with constant velocity. Now, you want to calculate the net work done by the forces. The work done by a force is given by,\begin{align}W =&\int\limits_{x=x_1}^{x=x_2}\vec{F}(x)\cdot \vec{dx}\\&=\int\limits_{x=x_1}^{x=x_2}|\vec{F}(x)|\times|\vec{dx}|\times \cos\theta\end{align}where $\theta$ is the angle between the force vector $\vec{F}(x)$ and the displacement vector $\vec{dx}$, when joined head-to-head or tail-to-tail.
Now, since in your case the force vector remains constant throughout the motion, our work equation becomes,\begin{align}W&=\int\limits_{x=x_1}^{x=x_2}|\vec{F}(x)|\times|\vec{dx}|\times \cos\theta \\& = |F|\times\cos\theta \int\limits_{x=x_1}^{x=x_2}|\vec{dx}|\\& = |F|\times\cos\theta\times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\\end{align}One of the forces acts along the direction of displacement, so the angle made by this force with the displacement vector is $0^\circ$; let us call this force $F_1$. The other force makes an angle of $180^\circ$ with the displacement vector; let us call this force $F_2$. Here, $|F_1|=|F_2|$ (the magnitudes of $F_1$ and $F_2$ are equal).
Other than these two forces there can be two additional forces, if we assume that the experiment is being conducted somewhere with gravity: a gravitational force, and a normal reaction force if the block is moving on a surface, thereby compressing it due to its weight (because of gravity). The gravitational force always acts in the vertically downward direction, and the normal reaction force always acts normal to the surface. Here we assume that the block is moving on a horizontal surface, so the gravitational force and the normal reaction are anti-parallel to each other and both make an angle of $90^\circ$ with the horizontal displacement.
Since there is no acceleration in vertical direction,i.e. $$\left(\vec{a_{vertical}}\right)_{net} = 0$$$$|\text{gravitational force}| = |\text{normal reaction force}|=|mg|$$
And since there was no initial velocity in vertical direction and the acceleration is also taken $0$, there will be no displacement in vertical direction.
Thus, work done by $F_1$ would be,\begin{align}W_1& = |F_1|\times\cos 0 \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = |F_1|\times 1 \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = |F_1|\times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\\end{align}
Similarly, work done by force $F_2$ would be,\begin{align}W_2& = |F_2|\times\cos 180^\circ \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = |F_2|\times (-1) \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = -|F_2|\times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\\end{align}
For the gravitational force, work done would be,\begin{align}W_G& = |mg|\times\cos 90^\circ \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = |mg|\times 0 \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = 0\\\end{align}
For the normal reaction force, work done would be,\begin{align}W_N& = |\text{normal reaction}|\times\cos 90^\circ \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = |mg|\times 0 \times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\& = 0\\\end{align}
Therefore, the net work done by all the forces is the sum of the work done by the individual forces. That is,\begin{align}W_{net}&= W_1 + W_2 + W_G + W_N\\&=|F_1|\times\left(|\vec{x_2}|-|\vec{x_1}|\right) + \left(-|F_2|\times\left(|\vec{x_2}|-|\vec{x_1}|\right)\right) + 0 + 0\\&=|F_1|\times\left(|\vec{x_2}|-|\vec{x_1}|\right) - |F_1|\times\left(|\vec{x_2}|-|\vec{x_1}|\right)\\&= 0\\\end{align}
Thus, $$W_{net}= 0$$
The net work done on the block by all the forces is $0$; the work done by individual forces may or may not be $0$.
This is also evident from the fact that the kinetic energy (given by $\frac{1}{2} m v^2$) remains constant throughout the motion: the velocity of the block stays constant because $\left(a_{x}\right)_{net}=0$, the two applied forces being equal and anti-parallel.
With the kinetic energy constant, $\Delta K= 0$, and since $W_{net} = \Delta K$, we get $W_{net} = 0$.
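The bookkeeping above is easy to verify numerically. In this sketch the force magnitudes, mass, and displacement are arbitrary illustrative values, not quantities from the problem:

```python
import math

# Illustrative values (arbitrary): two equal anti-parallel horizontal forces
# and a horizontal displacement; gravity and the normal force act vertically.
F1, F2 = 5.0, 5.0          # N; F1 = F2 since the net acceleration is zero
m, g = 2.0, 9.81           # kg, m/s^2
dx = 3.0                   # m, horizontal displacement |x2| - |x1|

W1 = F1 * math.cos(0) * dx                # force along the displacement
W2 = F2 * math.cos(math.pi) * dx          # force opposite to the displacement
WG = m * g * math.cos(math.pi / 2) * dx   # gravity is perpendicular
WN = m * g * math.cos(math.pi / 2) * dx   # normal force is perpendicular

W_net = W1 + W2 + WG + WN
print(W1, W2, W_net)   # W_net comes out 0 up to floating-point rounding
```

The individual works are nonzero ($\pm 15$ J here), but their sum vanishes, exactly as the derivation predicts.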
We earlier assumed that the observer was standing stationary on the ground; now suppose there is another observer moving with the same velocity as the block. For him, in his inertial frame of reference, the block hasn't moved, because the displacement of the frame and of the block are equal, as they move with the same velocity.
Thus this observer will say that the work done by the individual forces is $0$, and hence $W_{net}= 0$.
As I said earlier, whether the "work done by individual forces may or may not be $0$" depends on the frame of reference. We will always calculate the work of any individual force in any reference frame from our basic work equation, that is,
$$W =\int\limits_{x=x_1}^{x=x_2}\vec{F}(x)\cdot d\vec{x}$$
But in this example, all the observers in their respective inertial frames of reference agree on $\bf{W_{net}}$. (In general, $W_{net}=\Delta K$ need not be the same in every frame, since kinetic energy is frame-dependent.)
In your experiment where you are riding the cycle, you said that the cycle was moving with
constant velocity $\implies \Delta K= 0$, which means that $W_{net}$ must equal $0$. Now, taking the rider and the cycle as our system, we observe that there are two external forces acting on it: air drag and muscle force (excluding the gravitational force and the normal reaction from the ground). We also exclude the internal forces, since by Newton's Third Law they act in pairs and cancel each other's work as a whole (just as $F_1$ and $F_2$ cancelled each other's work in the example above).
We can write $W_{net}$ as $W_{external} + W_{internal}$; in this case $W_{internal}=0$, as stated above. The external forces are air drag and muscle force.
Caution: don't confuse muscle force with an internal force. We regard it as an external force, because any organism (like a human being) can generate energy whenever it wants from the fat and other nutrients inside the body. You can look at it as $\text{Sun's light energy }\underrightarrow{\text{photosynthesis}}\text{ food }\underrightarrow{\text{digestion}}\text{ ATP = muscle energy}$; muscle energy is in fact the Sun's energy, an external source.
Now,\begin{align}& W_{net} = W_{external} = W_{\text{Air Drag}} + W_{\text{Muscle}} = \Delta K = 0\\& \implies W_{\text{Air Drag}} + W_{\text{Muscle}} = 0 \\& \implies W_{\text{Air Drag}} = - W_{\text{Muscle}}\end{align}
Thus the rider keeps the energy of the system constant (for a constant velocity) by continuously compensating the energy lost to air drag with his muscle energy.
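The balance $W_{\text{Muscle}} = -W_{\text{Air Drag}}$ can be sketched with a quadratic drag model. Everything below (the drag model, the drag coefficient, area, density, speed, and distance) is an illustrative assumption, not a value from the text:

```python
# Sketch of the rider's energy balance at constant speed, assuming
# quadratic air drag F = (1/2) * rho * Cd * A * v^2 (all numbers arbitrary).
rho = 1.2          # air density, kg/m^3
Cd, A = 0.9, 0.5   # drag coefficient and frontal area (assumed)
v = 6.0            # riding speed, m/s
d = 1000.0         # distance covered, m

F_drag = 0.5 * rho * Cd * A * v**2   # drag force magnitude, N
W_drag = -F_drag * d                 # drag opposes the motion, so W < 0
W_muscle = -W_drag                   # required so that W_net = dK = 0

print(f"muscle work over {d:.0f} m: {W_muscle:.0f} J")
```

Whatever the drag model, the structure is the same: the rider's positive muscle work exactly cancels the negative work of drag.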
|
While I was looking at the periodic table today, I realised that there were gases that were much lighter than helium such as hydrogen. If hydrogen is lighter than helium, why do we insist on using helium in balloons?
As other answers have noted, the only gas lighter than helium is hydrogen, which has some flammability issues that make it more difficult to handle safely than helium.
Also, in practice, hydrogen is not significantly "lighter" than helium. While the molecular mass (and thus, per the ideal gas law, the density) of hydrogen gas is about half that of helium, what determines the buoyancy of a balloon is the
difference between the density of the gas inside the balloon and the air outside.
The density of air at STP is about $\rho_{\ce{air}}=\pu{1.2754 kg m-3}$ , while the densities of hydrogen and helium gas are $\rho_{\ce{H2}}=\pu{0.08988 kg m-3}$ and $\rho_{\ce{He}}=\pu{0.1786 kg m-3}$ respectively. The buoyant forces of a hydrogen balloon and a helium balloon in air (neglecting the weight of the skin and the pressure difference between the inside and the outside, which both decrease the buoyancy somewhat) are proportional to the density differences $\rho_{\ce{air}} -\rho_{\ce{H2}}=\pu{1.1855 kg m-3}$ and $\rho_{\ce{air}} -\rho_{\ce{He}}=\pu{1.0968 kg m-3}$. Thus, helium is only about $7.5\%$ less buoyant in air than hydrogen.
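These numbers are easy to check; a few lines of Python reproduce the $7.5\%$ figure from the densities quoted above:

```python
# Densities at STP in kg/m^3, as quoted in the text.
rho_air, rho_H2, rho_He = 1.2754, 0.08988, 0.1786

# Buoyant lift per cubic metre is proportional to the density difference
# (neglecting envelope weight and overpressure, as in the text).
lift_H2 = rho_air - rho_H2
lift_He = rho_air - rho_He

deficit = (lift_H2 - lift_He) / lift_H2
print(f"helium is {deficit:.1%} less buoyant than hydrogen")  # ~7.5%
```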
Of course, if the surrounding air were replaced with a lighter gas, the density difference between hydrogen and helium would become more significant. For example, if you wished to go ballooning on Jupiter, which has an atmosphere consisting mostly of hydrogen and some helium, a helium balloon would simply
sink, and even a pure hydrogen balloon (at ambient temperature) would not lift much weight. Of course, you could always just fill the balloon with ambient Jovian air and heat it up to produce a hot hydrogen balloon (not to be confused with a Rozière balloon, which is used on Earth and has separate chambers for hot air and hydrogen / helium). P.S. A quick way to approximately obtain this result is to note that a hydrogen molecule consists of two protons (and some electrons, which have negligible mass), and thus has a molecular mass of about $\pu{2 Da}$, while a helium atom has two protons and two neutrons, for a total mass of about $\pu{4 Da}$.
Air, meanwhile, is mostly oxygen and nitrogen: oxygen has a molecular mass of about $\pu{32 Da}$ (8 protons + 8 neutrons per atom, two atoms per molecule), while nitrogen is close to $\pu{28 Da}$ (one proton and one neutron per atom less than oxygen). Thus, the average molecular mass of air should be between $28$ and $\pu{32 Da}$; in fact, since air is about three quarters nitrogen, it's about $\pu{29 Da}$, and so the buoyancies of hydrogen and helium in air are proportional to $29 - 2 = 27$ and $29 - 4 = 25$ respectively. Thus, hydrogen should be about $\frac{(27 - 25)}{25} = \frac{2}{25} = \frac{8}{100} = 8\%$ more buoyant than helium, or, in other words, helium should be about $\frac{2}{27} \approx 7.5\%$ less buoyant than hydrogen.
P.P.S. To summarize some of the comments below, there are other possible lifting gases as well, but none of them appear to be particularly viable competitors for helium, at least not at today's helium prices.
For example, methane (molecular mass $\approx \pu{16 Da}$) has about half the buoyancy of hydrogen or helium in the Earth's atmosphere, and is cheap and easily available from natural gas. However, like hydrogen, it's also flammable, and while it's somewhat less dangerous by some measures (burn speed and flammability range), it's
more dangerous by others (total energy content per volume). In any case, the reduced buoyancy, together with the flammability, is probably enough to sink (pun not intended) methane as a viable alternative to helium.
A much less flammable choice would be water vapor which, with a molecular mass of $\approx \pu{18 Da}$, is only slightly less buoyant than methane at the same temperature and pressure. The obvious problem with water is that it's a liquid at ambient temperatures, which means it has to be heated to make it lift anything at all. This wouldn't be so bad (after all, you get extra lift from the expansion due to heat), except for the fact that it makes any failure in the heating system a potential disaster — whereas a hot air balloon will just gently drift down if the burner fails, a hot steam balloon can experience catastrophic buoyancy loss if the vapor condenses.
Despite these drawbacks, hot steam balloons have certainly been suggested, studied and tried in the past — alas, not always particularly successfully (although, apparently, there have been much more successful attempts as well). There are various ways in which the condensation issue could potentially be reduced, such as adding extra insulation layers to the balloon envelope, or even surrounding the steam balloon with a more conventional hot air envelope. So far, however, it seems that steam balloons remain firmly in the realm of nifty but impractical ideas.
Other potential lifting gases, with molecular mass similar to methane and water, include ammonia and neon. Neon, being a noble gas like helium, would certainly work and be safe, but alas, it's both less buoyant and more expensive than helium.
Ammonia, on the other hand, while much less flammable than methane, is rather toxic and corrosive (not to mention
really stinky, which, given its other properties, is probably a good thing). I don't think I'd like to fly in an ammonia balloon, but apparently, some people do! It seems that its main advantage (besides being much cheaper than helium) is its relatively low vapor pressure, which makes it easier to store and handle in compressed form.
Thus, at least for some niche applications (mainly hobbyists and some weather balloons, AFAICT), ammonia might actually be the most viable alternative to helium (and hot air) today, with methane / natural gas perhaps coming second. If helium were to become more scarce and expensive, these low-cost lifting gases (and possibly other alternatives, like helium recovery or even steam balloons) might become more practical. Then again, so would hydrogen — its safety issues, though well known, are not insurmountable, especially not for things like unmanned weather balloons where the risks are much less.
Actually, hydrogen is the only gas that is lighter than helium. However, it has a very big disadvantage: It is highly flammable. On the other hand, helium is almost completely inert - this is why it is very much safer to use the latter.
What might happen when you use hydrogen instead of helium was impressively proven by history when the "Hindenburg" zeppelin exploded on 6 May 1937. There is video footage, that can be seen on youtube.
In some of the comments it was mentioned, that hydrogen alone might not be the cause of the Hindenburg disaster, there were other contributing factors. However, using hydrogen remains dangerous, as this weather balloon experiment shows. In a more scientific setup the burning of a hydrogen balloon is compared to oxygen and a mixture of oxygen and hydrogen. Unfortunately a video of a helium filled balloon is not available, but it basically only ruptures and pops because of the different pressures on the in- and outside.
Yes, hydrogen is lighter than helium, but helium, on the other hand, is an inert gas (very unreactive). Also, hydrogen is highly flammable, which would make it unsafe to use in balloons.
One counter-argument: Helium is essentially a "fossil gas", and there's a limited supply of easy-to-get helium (until we get practical fusion reactors running, at least). Hydrogen, on the other hand, is universally available in $\ce{H2O}$ and needs only a bit of electricity to break it out. Since helium has important industrial uses
other than balloons, I expect that we will eventually find it becomes too expensive to throw away on toys.
With hydrogen, you are just one touch away from disaster: a hydrogen balloon goes anywhere near the birthday candles and you end up with a fireball. Helium, on the other hand, is so inert that you can inhale it, and all it will do is make you sound like a chipmunk for a minute or so.
From my thinking, helium is a stable, noble gas, whereas hydrogen, though the lightest gas, is flammable; that is why helium is used in weather balloons.
|
This page indicates that Greek letters can be inserted into Emacs by using
M-i. However, Emacs 23.2.1 in a Debian Squeeze variant inserts the "tab" character when
M-i is pressed. How can I insert Greek letters such as α and β in Emacs?
You can use another prefix, like:
(global-set-key (kbd "C-x <ESC> a") "α")(global-set-key (kbd "C-x <ESC> b") "β")
Or use
global-abbrev-table as it's explained on the page you mentioned.
M-x
set-input-method RET TeX will allow you to write e.g.
\beta to get
β,
\sum or
\Sigma to get
Σ etc.
It can be toggled on and off with
toggle-input-method, bound to
C-\ and
C-<.
You can use
ucs-insert bound to
C-x
8
RET to insert any Unicode character by name or by value.
For example to insert a lambda you can do
C-x
8
RET
GREEK SMALL LETTER LAMBDA
RET→ λ
C-x
8
RET
03bb
RET→ λ
A tab-completion is also available.
C-x
8
RET
* lambda
TAB will list every Unicode character whose name ends with "lambda".
You can set your input method to Greek:
M-x set-input-method RET greek
or
C-x RET C-\ greek
(which is the same). To toggle the input method off again, press
C-\ (
toggle-input-method).
Expanding the answer by @Oleg Pavliv:
To solve this problem once and for all in your
.emacs file, you need to choose a key pattern (like
M-g + <latin letter>) and a memorizable correspondence table
<greek letter> - <latin letter>. I suggest not to invent anything new, but to use the correspondences from the PostScript Symbol encoding. This leads me to the following:
(global-set-key (kbd "M-g a") "α")
(global-set-key (kbd "M-g b") "β")
(global-set-key (kbd "M-g g") "γ")
(global-set-key (kbd "M-g d") "δ")
(global-set-key (kbd "M-g e") "ε")
(global-set-key (kbd "M-g z") "ζ")
(global-set-key (kbd "M-g h") "η")
(global-set-key (kbd "M-g q") "θ")
(global-set-key (kbd "M-g i") "ι")
(global-set-key (kbd "M-g k") "κ")
(global-set-key (kbd "M-g l") "λ")
(global-set-key (kbd "M-g m") "μ")
(global-set-key (kbd "M-g n") "ν")
(global-set-key (kbd "M-g x") "ξ")
(global-set-key (kbd "M-g o") "ο")
(global-set-key (kbd "M-g p") "π")
(global-set-key (kbd "M-g r") "ρ")
(global-set-key (kbd "M-g s") "σ")
(global-set-key (kbd "M-g t") "τ")
(global-set-key (kbd "M-g u") "υ")
(global-set-key (kbd "M-g f") "ϕ")
(global-set-key (kbd "M-g j") "φ")
(global-set-key (kbd "M-g c") "χ")
(global-set-key (kbd "M-g y") "ψ")
(global-set-key (kbd "M-g w") "ω")
(global-set-key (kbd "M-g A") "Α")
(global-set-key (kbd "M-g B") "Β")
(global-set-key (kbd "M-g G") "Γ")
(global-set-key (kbd "M-g D") "Δ")
(global-set-key (kbd "M-g E") "Ε")
(global-set-key (kbd "M-g Z") "Ζ")
(global-set-key (kbd "M-g H") "Η")
(global-set-key (kbd "M-g Q") "Θ")
(global-set-key (kbd "M-g I") "Ι")
(global-set-key (kbd "M-g K") "Κ")
(global-set-key (kbd "M-g L") "Λ")
(global-set-key (kbd "M-g M") "Μ")
(global-set-key (kbd "M-g N") "Ν")
(global-set-key (kbd "M-g X") "Ξ")
(global-set-key (kbd "M-g O") "Ο")
(global-set-key (kbd "M-g P") "Π")
(global-set-key (kbd "M-g R") "Ρ")
(global-set-key (kbd "M-g S") "Σ")
(global-set-key (kbd "M-g T") "Τ")
(global-set-key (kbd "M-g U") "Υ")
(global-set-key (kbd "M-g F") "Φ")
(global-set-key (kbd "M-g J") "Φ")
(global-set-key (kbd "M-g C") "Χ")
(global-set-key (kbd "M-g Y") "Ψ")
(global-set-key (kbd "M-g W") "Ω")
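Since the bindings all follow the same pattern, the table can equivalently be generated with a loop. This is a compact sketch of the same configuration (same `M-g` prefix; shown for the lowercase letters, the capitals work the same way):

```elisp
;; Same M-g bindings as above, generated from an alist of
;; Latin-key / Greek-letter pairs (PostScript Symbol correspondence).
(dolist (pair '(("a" . "α") ("b" . "β") ("g" . "γ") ("d" . "δ")
                ("e" . "ε") ("z" . "ζ") ("h" . "η") ("q" . "θ")
                ("i" . "ι") ("k" . "κ") ("l" . "λ") ("m" . "μ")
                ("n" . "ν") ("x" . "ξ") ("o" . "ο") ("p" . "π")
                ("r" . "ρ") ("s" . "σ") ("t" . "τ") ("u" . "υ")
                ("f" . "ϕ") ("j" . "φ") ("c" . "χ") ("y" . "ψ")
                ("w" . "ω")))
  (global-set-key (kbd (concat "M-g " (car pair))) (cdr pair)))
```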
The easiest way to sporadically insert Greek characters in Emacs is to use
abbrev-mode with this abbrev table of Greek letters.
To use the above gist, start emacs and invoke
M-x edit-abbrevs which will start the Abbrevs editor. Then cut and paste the
definitions within it under the
(global-abbrev-table) section (to make them globally available) or place them underneath another heading e.g.
(text-mode-abbrev-table).
Ensure to enable abbrev-mode in a given buffer with
M-x abbrev-mode RET, or enable abbrev-mode globally by adding
(setq-default abbrev-mode t) to your init file. Alternatively if you want to enable abbrev-mode only for e.g. text and derived modes, use
(add-hook 'text-mode-hook (lambda () (abbrev-mode 1))).
See the emacs wiki about abbrev-mode for more.
C-x 8 RET, as described by @Daimrod above, is fine for a one-off insertion.
If you want to bind a key to insert a given Unicode character: Load library
ucs-cmds.el, then use
C-1 C-x 8 RET. That inserts the character you choose and also creates a command with the same name, which you can bind to a key.
For example,
C-1 C-x 8 RET GREEK SMALL LETTER LAMBDA RET defines command
greek-small-letter-lambda, which inserts that character when called.
You can create multiple such commands at once, using macro
ucsc-make-commands, also from
ucs-cmds.el. For example, to create individual commands for
each of the Greek letters, just do this:
(ucsc-make-commands "^greek [a-z]+ letter")
Then you can bind, say, command
greek-small-letter-beta to
C-c b or whatever:
(global-set-key (kbd "C-c b") 'greek-small-letter-beta)
A cousin of Rasmus' answer for non-TeX/LaTeX needs:
M-x set-input-method RET greek RET or C-x RET C-\ greek RET
OR
M-x set-input-method RET greek-babel RET or C-x RET C-\ greek-babel RET
Either of these give you input methods where typing for instance the single keystroke, 'a' simply gets you an alpha (α), 'b' gets you a beta (β) and so forth.
After that, all you have to type is
C-\
to toggle back and forth from your default input method to the greek method very quickly. Quite handy.
But you have to watch out: The keys don't always match the sounds. For instance, typing hello gets you ηελλο. This is because they made it so the 'h' key becomes a greek eta (η), simply because the η looks like an h, and because the capital Greek eta (H) is the same as our 'H', even though it was pronounced differently.
The advantage of the greek-babel input method over just the greek will be appreciated mostly by those who work with the more advanced/complicated Attic Greek, which used a lot of accents. (Attic was used in Athens in Plato's time ~400BC, although the accents and lower-case letters were added centuries later.) You can hit the two keys < o and you get ὁ. The backwards apostrophe is an 'h' sound, called a 'rough breathing' -- thus ὁ is pronounced 'ho'. Accents are super easy and FAST. And you can combine accents over the same character. For instance, hit the three keys > ' a and you get ἄ. If you don't need the accents, just use the greek input method.
With both the greek and greek-babel input methods, you can also hit C-h C-\ to get a Help buffer on the input method, which includes a lovely table of all the possible keystroke combinations available to you. (C-x o to move to next window, so you can get to that info window and scroll down..)
To
find out how to enter a single character that you already see somewhere on the screen, you can copy-paste that character to some emacs buffer/file, and then call
M-x describe-char. With α, it yields
[...] to input: type "C-x 8 RET 3b1" or "C-x 8 RET GREEK SMALL LETTER ALPHA" [...]
PS: re your profile, try also
['a'] * 28
|
Ellipse Geometry – a Problem
I found the following nice problem in my Facebook account. Facebook, however, is a mystery to me and I am always unable to find a posting a second time. Unless you answer something silly like „Following“, it is lost. The best I could find was this link.
The problem is to prove:
The intersections of two perpendicular tangents to an ellipse form a circle.
Of course, this can be computed by Analytic Geometry and I carry this out below. It is no fun, however. E.g., you can find the radius of the circle if the claim is correct, take a point on the circle, compute the two tangents to the ellipse and check that they are perpendicular.
Let us find a more geometrical solution! At first, there is a well-known „construction“ of ellipses by folding circular paper. You can do it with actual paper: cut out a paper circle and mark a point A inside it. Then fold the paper so that the boundary lies on A, i.e., a boundary point P is reflected along the folding line to A. The folding line is the middle perpendicular of AP. If you do this often enough, the folds will outline the green ellipse. There are even videos on YouTube showing this.
For the proof, you need that ellipses are the set of points where the sum of distances to A and M is constant. In our case, it is „obviously“ constantly equal to the radius of the circle. We have the following.
The set of all middle perpendiculars to a fixed point A and points P on a circle is the set of tangents to an ellipse.
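The „obvious“ constant sum of distances can be checked numerically before proving anything. With M the circle's center, A the marked point, and Q the intersection of a fold line (the middle perpendicular of AP) with MP, we expect $|QA| + |QM|$ to equal the radius for every P. In this sketch the radius and coordinates are arbitrary choices:

```python
import math, random

R = 2.0           # circle radius (illustrative)
M = (0.0, 0.0)    # circle center = one focus
A = (0.7, 0.3)    # marked point inside the circle = other focus

def sub(p, q): return (p[0] - q[0], p[1] - q[1])
def dot(p, q): return p[0]*q[0] + p[1]*q[1]
def dist(p, q): return math.hypot(p[0] - q[0], p[1] - q[1])

for _ in range(100):
    th = random.uniform(0, 2 * math.pi)
    P = (M[0] + R * math.cos(th), M[1] + R * math.sin(th))
    d = sub(P, M)
    # Q = M + t*d lies on MP and on the middle perpendicular of AP,
    # i.e. it is equidistant from A and P; solving |Q-A|^2 = |Q-P|^2 gives t:
    t = (R**2 - dist(M, A)**2) / (2 * dot(sub(P, A), d))
    Q = (M[0] + t * d[0], M[1] + t * d[1])
    assert math.isclose(dist(Q, A) + dist(Q, M), R, rel_tol=1e-9)
print("fold point always satisfies |QA| + |QM| = R")
```

Since Q is equidistant from A and P, $|QA| + |QM| = |QP| + |QM| = R$, which is exactly what the assertions confirm.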
Now you know how to get two perpendicular tangents. In the following construction, I did just that. The four blue lines are four middle perpendiculars. They are constructed by selecting a point P on the black circle, the line PA and a rectangular to that line. Then the blue lines are the four middle perpendiculars on A and the four intersections of our green lines with the circle.
We have to prove that
the midpoint Z on AM is the center of the rectangle and that the rectangle has the same diameter, independent of the choice of P on the black circle.
Then the corners of the blue rectangle will always be on the same circle.
Now, we stretch the blue rectangle by a factor 2 from the center at A. The point Z will be stretched to the point M then. The blue rectangle will become a rectangle through the intersections as in the following construction.
It is now „quite obvious“ that M is the center of the blue rectangle. Thus Z was the center of the half-sized rectangle. This proves our first claim.
For the second claim, we need to show that the length of the diagonal of the large blue rectangle does not depend on the choice of the red point. The diagonal has the length
\(\sqrt{c^2+d^2}\)
by Pythagoras. Now we have another claim.
The sum of the squares of the lengths of each two perpendicular secants of a fixed circle that meet in a fixed point inside the circle is constant.
To prove that, have a look at the dashed rectangle with diagonal AM. Using this rectangle and Pythagoras, you can „easily“ express the diagonal of the blue rectangle in terms of the length of AM and the radius of the circle. This proves the second claim.
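The claim about perpendicular secants is also easy to confirm numerically. For a circle of radius $r$ centered at the origin and an interior point A, the sum of the squared chord lengths works out to $8r^2 - 4|AM|^2$, which is indeed independent of the direction. A quick sketch with arbitrary values:

```python
import math, random

R = 2.0          # circle radius (illustrative), center at the origin
A = (0.8, 0.5)   # fixed point inside the circle

def chord_len_sq(theta):
    """Squared length of the chord through A with direction theta."""
    u = (math.cos(theta), math.sin(theta))
    proj = A[0]*u[0] + A[1]*u[1]
    # squared distance from the center to the line through A along u
    m2 = A[0]**2 + A[1]**2 - proj**2
    return 4 * (R**2 - m2)

expected = 8 * R**2 - 4 * (A[0]**2 + A[1]**2)   # 8 r^2 - 4 |AM|^2
for _ in range(100):
    th = random.uniform(0, math.pi)
    total = chord_len_sq(th) + chord_len_sq(th + math.pi / 2)
    assert math.isclose(total, expected)
print("sum of squared chord lengths is constant:", expected)
```

The reason is the one given in the text: the squared distances of the two perpendicular chords from the center add up to $|AM|^2$, by the dashed rectangle and Pythagoras.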
The images in this posting have been done with C.a.R. (Compass and Ruler), a Java program developed by the author. It allows beautiful exports of images and the automatic creation of polar sets for sets of lines. That feature was used to compute the ellipse in the second image. The first ellipse was done using the two focal points. C.a.R. has also ellipses defined by 5 points, or even by an equation or a parameterization.
I promised to compute the problem using Analytic Geometry. I am using the Computer Algebra system Maxima via my Euler Math Toolbox for this.
The first method computes two perpendicular tangents to the ellipse with the equation
\(x^2 + c^2 y^2 = 1\)
To find a tangent perpendicular to a vector v, we can maximize the expression
\(v_1 x + v_2 y\)
on the ellipse using the method of Lagrange. If the vector v has norm 1, the value of this maximum will be the distance of the tangent from the origin. The Lagrange equations for this are
\(v_1 = 2 \lambda x, \quad v_2 = 2 \lambda c^2 y, \quad x^2+c^2 y^2 = 1\)
After solving this, we get
\(v_1 x + v_2 y = 2 \lambda (x^2+c^2y^2) = 2 \lambda\)
Then we do the same with the orthogonal vector, i.e., we maximize
\(-v_2 x + v_1 y\)
We then show that the sum of squares of these two values is constant.
In EMT and Maxima, this is the following code.
>&solve([v1=2*la*x,v2=2*la*c^2*y,x^2+c^2*y^2=1],[x,y,la]); ...
> la1 &= la with %[1]

    sqrt(v2^2 + c^2*v1^2)
    ---------------------
             2 c

>&solve([-v2=2*la*x,v1=2*la*c^2*y,x^2+c^2*y^2=1],[x,y,la]); ...
> la2 &= la with %[1]

    sqrt(c^2*v2^2 + v1^2)
    ---------------------
             2 c

>&factor(la1^2+la2^2)

    (c^2 + 1)*(v2^2 + v1^2)
    -----------------------
             4*c^2
Thus the circle of intersections has the radius
\(r = \dfrac{\sqrt{1+c^2}}{c} = \sqrt{1 + \dfrac{1}{c^2}}\)
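A quick numeric cross-check of this radius: parameterizing the ellipse as $(\cos t, \sin t/c)$, the distance of the tangent with unit normal $v$ from the origin is $\sqrt{v_1^2+v_2^2/c^2}$ (the value $2\lambda$ above), and the intersection point of two perpendicular tangents lies at distance $\sqrt{d_1^2+d_2^2}$ from the center:

```python
import math, random

c = 1.7   # ellipse x^2 + c^2 y^2 = 1 (arbitrary choice of c)

def tangent_dist(v):
    # distance from the origin to the tangent with unit outer normal v:
    # max of v1*x + v2*y over the ellipse, as computed by Lagrange above
    return math.sqrt(v[0]**2 + v[1]**2 / c**2)

r = math.sqrt(1 + 1 / c**2)   # claimed radius of the circle of intersections
for _ in range(100):
    t = random.uniform(0, 2 * math.pi)
    v = (math.cos(t), math.sin(t))
    w = (-v[1], v[0])          # perpendicular direction
    # the intersection of the two tangents is d1*v + d2*w, at distance
    # sqrt(d1^2 + d2^2) from the center
    d = math.hypot(tangent_dist(v), tangent_dist(w))
    assert math.isclose(d, r)
print("perpendicular tangents always intersect at distance", r)
```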
This is confirmed by the special case of tangents parallel to the axes.
There are several other methods. One is to construct the tangents using tangents to a circle. For this, the ellipse needs to be stretched by 1/c into y-direction. It will then become a circle. We need to compute points on the image of our circle with radius r under this mapping, then take the tangents to the mapped ellipse and map back. Below is the plan of such a construction.
The computations are very involved, however.
Another method to find both tangents is the following: We look at all lines through a given point (e.g., determined by their slope) and find the ones that intersect the ellipse only once. The product of the two slopes solving this problem should be $-1$. Again, this is a complicated computation.