The standard proof, apparently due to Dedekind, that algebraic numbers form a field is quick and slick; it uses the fact that $[F(\alpha) : F]$ is finite iff $\alpha$ is algebraic, and entirely avoids the (to me, essential) issue that algebraic numbers are roots of some (minimal) polynomial. This seems to be because finding minimal polynomials is
hard and largely based on circumstance.
There are more constructive proofs which, given algebraic $\alpha$, $\beta$, find an appropriate poly with $\alpha \beta$, $\alpha + \beta$, etc., as a root -- but of course these are not generally minimal.
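Those constructive proofs can be made concrete with resultants (a sketch of mine using SymPy, not part of the original question): if \(p\) has roots \(\alpha_i\) and \(q\) has roots \(\beta_j\), then \(\mathrm{Res}_y(p(y),\,q(x-y))\) vanishes exactly at the sums \(\alpha_i+\beta_j\), and \(\mathrm{Res}_y(p(y),\,y^{\deg q}\,q(x/y))\) at the products \(\alpha_i\beta_j\):

```python
from sympy import symbols, resultant, expand, factor

x, y = symbols('x y')

p = y**2 - 2                              # roots: +-sqrt(2)

# roots alpha_i + beta_j, with q(t) = t**2 - 3 (roots +-sqrt(3))
sum_poly = resultant(p, (x - y)**2 - 3, y)
print(expand(sum_poly))                   # x**4 - 10*x**2 + 1

# roots alpha_i * beta_j: y**2 * q(x/y) = x**2 - 3*y**2
prod_poly = resultant(p, x**2 - 3*y**2, y)
print(factor(prod_poly))                  # (x**2 - 6)**2, not irreducible
```

As the repeated factor \((x^2-6)^2\) shows, the resultant has the desired roots but is generally not minimal; one still has to factor it and pick the irreducible factor containing the intended root.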
You would of course want an algorithm to compute such min. polies, but assuming this is infeasible (as it seems), my question is a bit different:
Every algebraic number $\alpha$, $\beta$, $\alpha\beta$, $\alpha + \beta$, etc., has a
unique corresponding minimal polynomial, call it $p_{\alpha}(x)$, $p_{\beta}(x)$, $p_{\alpha \beta}(x)$, etc., and these polies have other roots, the conjugates, $\alpha_1$,...,$\alpha_n$, $\beta_1$,...,$\beta_m$, etc. Suppose I want to define an operation on this set of polies in the most naive way: $p_{\alpha}(x) \star p_{\beta}(x) = p_{\alpha\beta}(x)$. (Note that this is NOT the usual, direct multiplication of polies.)
But is this even well-defined? More specifically: suppose I swap $\alpha$ with one of its conjugates, $\beta$ with one of its conjugates, and multiply those together. Is the minimal polynomial of the new product the same as before? I.e., is this new product a conjugate of the old one? Meaning, I would need $p_{\alpha_1 \beta_1}(x) = p_{\alpha_i \beta_k}(x)$ for any combination of conjugates in order for this proposed operation to even make sense. And this seems unlikely -- that would be sort of miraculous, right?
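In fact the naive operation already fails for $\alpha = \beta = \sqrt{2}$: a quick check (mine, using SymPy's `minimal_polynomial`) shows that swapping $\beta$ for its conjugate $-\sqrt{2}$ changes the minimal polynomial of the product:

```python
from sympy import sqrt, minimal_polynomial, Symbol

x = Symbol('x')
a = sqrt(2)  # conjugates: +sqrt(2) and -sqrt(2), the two roots of x**2 - 2

p1 = minimal_polynomial(a * a, x)     # product of "matching" conjugates
p2 = minimal_polynomial(a * (-a), x)  # swap beta for its conjugate

print(p1)  # x - 2
print(p2)  # x + 2
```

So $p_{\alpha_1 \beta_1} \neq p_{\alpha_1 \beta_2}$ here, confirming the suspicion: $\star$ only becomes well-defined after collecting all conjugate products $\alpha_i \beta_j$ into a single (generally reducible) polynomial.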
What about a similar operation for $\alpha + \beta$, $\alpha - \beta$, etc?
More broadly, given two algebraic numbers $\alpha$, $\beta$, I'm interested in the set of minimal polynomials corresponding to those algebraic numbers which can be generated by performing the field operations on $\alpha$, $\beta$ -- call this the "set of minimal polies attached to the number field" or something -- and if a similar field (or even just ring) structure can be put on these polies by defining appropriate operations on them. (Not the usual operations, which will clearly give you polies with roots outside of your number field.) I'm ultimately after questions like:
(1) How do the conjugates of $\alpha \beta$, $\alpha + \beta$, etc., relate to the conjugates of $\alpha$ and $\beta$?
(2) How do the coefficients of the min. polies of $\alpha \beta$, $\alpha + \beta$, etc., relate to those of the min. polies of $\alpha$ and $\beta$? Obviously, the algebraic integers form a ring; what else can be said?
(3) Degree?
It may be impractical to calculate any one such min. poly explicitly, but maybe interesting things can be said about the collection as a whole? |
This table is, indeed, calculated by assuming that a ship can produce constant acceleration away from its origin, instantaneously pivot \$180^\circ\$ at the midpoint of its journey, and constantly decelerate for the second half of its journey. The greatest velocity thus attained (w.r.t. origin) in the table above is roughly 1 million m/s--approximately 1/3 of one percent of c--so we're comfortable sticking with classical calculations.
The formula that produces the above times is:
$$ t = 2 \times \sqrt{\dfrac d a} $$
where \$t\$ is measured in seconds, \$d\$ in meters, and \$a\$ in meters/second\$^2\$. (\$g\$ is rounded to 10 m/s\$^2\$ for ease of calculation. Note that the table gives distances in km, so you've got to tack on three zeroes to end up in meters.)
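For concreteness, the formula can be wrapped in a few lines of Python (my sketch, with the km-to-m conversion noted above):

```python
from math import sqrt

def trip_time(distance_km: float, accel: float = 10.0) -> float:
    """Total time (s) to cover a distance (given in km), accelerating to
    the midpoint and decelerating after it: t = 2 * sqrt(d / a)."""
    d = distance_km * 1000.0  # tack on three zeroes: km -> m
    return 2.0 * sqrt(d / accel)

# Example: Earth-Moon distance, ~384,400 km, at ~1 g (10 m/s^2)
print(trip_time(384_400))  # 12400.0 s, i.e. about 3.4 hours
```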
Those who would like a refresher on their classical kinematics, read on:
Recall that the distance travelled when starting at rest and undertaking constant acceleration is given by
$$ d= \frac 1 2 at^2 $$
Solving for t gives us
$$ t=\sqrt{2 \times \frac d a} $$
In our situation we consider the time \$t_{1/2}\$ to accelerate to the midpoint of the journey, at distance \$d_{1/2} = d/2\$:
$$ t_{\frac 1 2}=\sqrt{\dfrac{2 \times d_{\frac 1 2}}{a}} = \sqrt{\dfrac d a} $$
Doubling this gives a total trip time of
$$ t=2 \times \sqrt{\frac d a} $$ |
exponential equation solve problem
b = 4*3^(2*x-1) == 5*4^(x+2)
show(b)
solve(b, x)
The Solution is:
[4^(x + 2) == 4/5*3^(2*x - 1)]
Shouldn't the solve algorithm solve for x?
Well...
sage: b.solve(x)[0].log().log_expand().solve(x)
[x == 1/2*(log(3) + 4*log(2) - log(4/5))/(log(3) - log(2))]
<Swing>"Who could ask for anything more?"</Swing>
Maybe looking for other solutions? (Hint, hint...)

EDIT: Further hint: the logarithm is a "multivalued function" in the complex field (i.e. not a function strictly speaking). Any time you take a log, you introduce further, possibly spurious, solutions...

EDIT 2: Full solution, since no one seemed to see the problem:
Original problem:
b = 4*3^(2*x-1) == 5*4^(x+2)
If the members of this equation are equal, so are their logs. So we might try to solve:
Lb = b.log().expand_log()
Lb
(2*x - 1)*log(3) + 2*log(2) == 2*(x + 2)*log(2) + log(5)
But the converse is not true! More specifically:
z, z_1, z_2 = var("z, z_1, z_2", domain="integer")
(e^(x+2*I*pi*z)).maxima_methods().exponentialize()
e^x
Therefore, we have to consider the solutions of :
Lb2 = (Lb.lhs() + 2*I*pi*z_1 == Lb.rhs() + 2*I*pi*z_2)
Lb2
2*I*pi*z_1 + (2*x - 1)*log(3) + 2*log(2) == 2*I*pi*z_2 + 2*(x + 2)*log(2) + log(5)
for any integer values of z_1 and z_2. The solutions are:
Sol = Lb2.solve(x, to_poly_solve=True)
Sol
[x == 1/2*(-2*I*pi*z_1 + 2*I*pi*z_2 + log(5) + log(3) + 2*log(2))/(log(3) - log(2))]
i. e. $$\left[x = \frac{-2 i \pi z_{1} + 2 i \pi z_{2} + \log\left(5\right) + \log\left(3\right) + 2 \log\left(2\right)}{2 {\left(\log\left(3\right) - \log\left(2\right)\right)}}\right]$$
which depends only on the difference $z = z_2 - z_1$.
Checking these solutions is not as direct as one could wish. But one can check that the ratio of the two members is one :
(b.rhs()/b.lhs()).subs(Sol).log().log_expand().expand().factor().exp()
1
One can note that the non-real roots of this equation are somehow missed by Sage (and Maxima). This is also true for giac and sympy. But Mathematica returns them:
mathematica.Reduce(b, x)
Element[C[1], Integers] && x == -((2*I)*Pi*C[1] + 2*Log[2] + Log[3] + Log[5])/ (2*(Log[2] - Log[3]))
i. e. $$c_1\in \mathbb{Z}\land x=-\frac{2 i \pi c_1+\log (5)+\log (3)+2 \log (2)}{2 (\log (2)-\log (3))}$$
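The branch structure can be checked numerically in plain Python (my verification, independent of Sage): the real root \(x_0\) and the shifted root \(x_0 + i\pi/(\log 3 - \log 2)\) (the case \(c_1 = 1\) above) both satisfy the original equation under the principal branch of complex exponentiation:

```python
from math import log, pi

# real solution from Sol above (z_1 = z_2)
x0 = (log(5) + log(3) + 2*log(2)) / (2*(log(3) - log(2)))

def residual(x):
    """Relative residual of the original equation 4*3^(2x-1) == 5*4^(x+2)."""
    lhs = 4 * 3**(2*x - 1)
    rhs = 5 * 4**(x + 2)
    return abs(lhs - rhs) / abs(rhs)

print(residual(x0))                            # ~0: the real root checks out

# one non-real branch: shift the imaginary part by pi/(log 3 - log 2)
x1 = x0 + 1j * pi / (log(3) - log(2))
print(residual(x1))                            # ~0 as well
```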
HTH,
Asked: 2019-02-13 08:20:11 -0500
Last updated: Feb 15 |
Difference between revisions of "Group cohomology of dihedral group:D8"
The homology groups over the integers are given as follows:

<math>H_q(D_8;\mathbb{Z}) = \left \lbrace \begin{array}{rl} (\mathbb{Z}/2\mathbb{Z})^{(q + 3)/2}, & q \equiv 1 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{(q + 1)/2} \oplus \mathbb{Z}/4\mathbb{Z}, & q \equiv 3 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{q/2}, & q \mbox{ even }, q > 0 \\ \mathbb{Z}, & q = 0 \\\end{array}\right.</math>
The first few homology groups are given below:
Revision as of 04:51, 15 January 2013
This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8. View group cohomology of particular groups | View other specific information about dihedral group:D8

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action):
First homology group: first homology group for trivial group action equals tensor product with abelianization
Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization | Hopf's formula for Schur multiplier
General: universal coefficients theorem for group homology | homology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group homology

Over the integers
The homology groups over the integers are given as follows:
The first few homology groups are given below:
Over an abelian group
The first few homology groups with coefficients in an abelian group are given below:
Cohomology groups for trivial group action

FACTS TO CHECK AGAINST (cohomology group for trivial group action):
First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms
Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization
In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers | Cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology

Over the integers
The first few cohomology groups are given below:
Over an abelian group
The first few cohomology groups with coefficients in an abelian group are:
Cohomology ring with coefficients in integers

PLACEHOLDER FOR INFORMATION TO BE FILLED IN

Second cohomology groups and extensions

Schur multiplier
This has implications for projective representation theory of dihedral group:D8.
Schur covering groups
The three possible Schur covering groups for dihedral group:D8 are: dihedral group:D16, semidihedral group:SD16, and generalized quaternion group:Q16. For more, see second cohomology group for trivial group action of D8 on Z2, where these correspond precisely to the stem extensions.
Second cohomology groups for trivial group action
Group acted upon | Order | Second part of GAP ID | Second cohomology group for trivial group action (as an abstract group) | Order of second cohomology group | Extensions | Number of extensions up to pseudo-congruence (i.e., number of orbits under automorphism group actions) | Cohomology information
cyclic group:Z2 | 2 | 1 | elementary abelian group:E8 | 8 | direct product of D8 and Z2; SmallGroup(16,3); nontrivial semidirect product of Z4 and Z4; dihedral group:D16; semidihedral group:SD16; generalized quaternion group:Q16 | 6 | second cohomology group for trivial group action of D8 on Z2
cyclic group:Z4 | 4 | 1 | elementary abelian group:E8 | 8 | direct product of D8 and Z4; nontrivial semidirect product of Z4 and Z8; SmallGroup(32,5); central product of D16 and Z4; SmallGroup(32,15); wreath product of Z4 and Z2 | 6 | second cohomology group for trivial group action of D8 on Z4
Klein four-group | 4 | 2 | elementary abelian group:E64 | 64 | [SHOW MORE] | 11 | second cohomology group for trivial group action of D8 on V4

Baer invariants
Subvariety of the variety of groups | General name of Baer invariant | Value of Baer invariant for this group
abelian groups | Schur multiplier | cyclic group:Z2
groups of nilpotency class at most two | 2-nilpotent multiplier | cyclic group:Z2
groups of nilpotency class at most three | 3-nilpotent multiplier | trivial group
any variety of groups containing all groups of nilpotency class at most three | -- | trivial group |
Compressed Sensing (CS) is a technique that allows the reconstruction of signals starting from a limited number of linear
measurements that is potentially much smaller than the number of Nyquist-rate samples. The possibility of such a sub-Nyquist representation hinges on an assumption on the considered class of signals, i.e., that they are sparse. To model signals and the sparsity assumption we may collect the signal samples from a given time window in the \(n\)-dimensional vector \(x\in\mathbb{R}^n\) and consider its representation w.r.t. a set of vectors arranged as the columns of a matrix \(S\). It is commonly assumed that the set of vectors spans the whole \(\mathbb{R}^n\), though it may be redundant and contain more than \(n\) vectors. For simplicity's sake we here assume that there is no redundancy and thus that \(S\) is a basis. With this we know that \(x=S\xi\) with \(\xi=S^{-1}x\), and \(x\) is said to be \(\kappa\)-sparse if \(\xi\) has at most \(\kappa\) non-zero components with \(\kappa\ll n\).
If this happens the actual number of degrees of freedom in \(x\) is smaller than \(n\). Thus, the signal information content can be represented by a measurement vector \(y\in\mathbb{R}^m\) obtained by projections on a proper set of sensing vectors arranged as rows of the
sensing matrix \(A\): $$ y = Ax = A S \xi $$ If \(m \lt n\) measurements are enough to capture the needed signal information content, then a signal compression is achieved, with a corresponding compression ratio defined as \(CR = \frac{n}{m}\).
If the matrix \(AS\) satisfies certain conditions, \(\xi\) (and thus \(x\)) can be recovered from \(y\) despite the fact that \(A\) (and thus \(A S\)) is a dimensionality reduction, provided that \(m = \mathcal{O}(\kappa \log n)\). This hints at the fact that non-negligible values of \(CR\) may be obtained. Roughly speaking, the conditions on \(AS\) require that generic \(\kappa\)-sparse vectors are mapped almost-isometrically into the measurements. Such conditions are commonly satisfied by setting \(A\) as an instance of a random matrix with independent and identically distributed entries, Gaussian or Bernoulli distributions being the most commonly adopted. Once this is done, \(x\) is computed from \(y = A x\) by enforcing the a priori knowledge that its representation \(\xi\) is sparse. Algorithmically, the sparse signal recovery can be obtained by solving the following convex optimization problem $$\begin{array}{c}\displaystyle\hat{x}=\arg\left\{\min_{\zeta}\|S^{-1} \zeta\|_{1}\right\}\\\text{s.t.}\;\; \|A \zeta -y\|^2_2 \le \epsilon^2\end{array}$$ where \(\|S^{-1} \zeta\|_{1}\) is a sparsity-promoting norm, i.e., a quantity that decreases as the representation of the signal w.r.t. the basis \(S\) conforms with the sparsity prior, and \( \|A \zeta -y\|^2_2\) is the usual squared Euclidean norm that measures the accuracy with which the measurements \(y\) are matched by the solution. The parameter \(\epsilon\ge 0\) is chosen proportionally to the amount of noise affecting \(y\), with \(\epsilon = 0\) corresponding to the noiseless case.
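As a toy numerical illustration (my sketch, not from the original page: it takes \(S\) as the identity and uses greedy orthogonal matching pursuit as a simple stand-in for the convex \(\ell_1\) program above), a \(\kappa\)-sparse vector can be recovered from \(m \lt n\) random Gaussian measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                       # ambient dim, measurements, sparsity

# k-sparse representation (S = identity here, so x = xi)
xi = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
xi[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # i.i.d. Gaussian sensing matrix
y = A @ xi                                # m < n measurements, CR = n/m = 2

# Orthogonal matching pursuit: greedily grow the support, least-squares refit
residual, sel = y.copy(), []
for _ in range(k):
    sel.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
    residual = y - A[:, sel] @ coef

xi_hat = np.zeros(n)
xi_hat[sel] = coef
print(np.allclose(xi_hat, xi, atol=1e-8))  # True (w.h.p. for these sizes)
```

With the rakeness-based design described below, one would simply draw the rows of \(A\) from a correlated Gaussian distribution instead of using i.i.d. entries.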
Donoho, D.L., "Compressed sensing," Information Theory, IEEE Transactions on, vol.52, no.4, pp.1289-1306, 2006 - doi: 10.1109/TIT.2006.871582
Candes, E.J.; Wakin, M.B., "An Introduction To Compressive Sampling," Signal Processing Magazine, IEEE, vol.25, no.2, pp.21-30, 2008 - doi: 10.1109/MSP.2007.914731
Haboba, J.; Mangia, M.; Pareschi, F.; Rovatti, R.; Setti, G., "A Pragmatic Look at Some Compressive Sensing Architectures With Saturation and Quantization," Emerging and Selected Topics in Circuits and Systems, IEEE Journal on, vol.2, no.3, pp.443-459, 2012 - doi: 10.1109/JETCAS.2012.2220392
The general CS theory does not depend on the specific class of signals to acquire, and claims of universality are quite frequent in the literature. Yet, from a purely engineering perspective, whenever degrees of freedom and a priori knowledge on the application are simultaneously present, the obvious way to go is to use the available knobs to optimize the system. To this end, one may note that most of the practical interest in CS comes from two key facts:
- Although theoretical upper bounds exist on the error committed by signal recovery algorithms, real performances are typically much better than those predicted by formal guarantees, yielding extremely effective signal acquisition.
- The mathematical conditions that allow signal recovery algorithms to succeed can be matched (with very high probability) by simply drawing \(A\) at random. Although theoretical guarantees depend on the choice of specific distributions, in practice a wide class of random matrices allows for effective recovery.
Starting from these considerations, one may adjust the statistical distribution of \(A\) to optimize the acquisition performance depending on the actual statistics of the signal \(x\). This is what the
rakeness approach does, yielding a random policy for drawing \(A\) that increases the CS performance, either by reducing the minimum \(m\) needed for a correct reconstruction or by improving reconstruction quality for a fixed \(m\).
More specifically, one exploits the fact that in real-world applications, the input vectors are usually not only sparse but also non-white or localized, i.e., their energy content is not uniformly distributed in the whole signal space. The idea is to choose the generic sensing matrix row \(a=(a_1,\dots,a_n)\) so as to increase the average energy collected (i.e., "raked") when the input signal is projected onto it, while preserving at the same time the randomness of \(A\) that allows one to expect a satisfactory reconstruction in all cases.
Formally, let us model \(a\) and \(x\) as realizations of two independent stochastic processes whose \(n\times n\) correlation matrices are given by \(\mathcal{A}=\mathbf{E}[aa^\top]\) and \(\mathcal{X}=\mathbf{E}[xx^\top]\). We define the rakeness as the average energy of the projection of \(x\) on \(a\), i.e., the quantity $$\begin{array}{rcl}\rho(a,x)& =& \mathbf{E}[(a^\top x)^2]=\mathbf{E}[a^\top x x^\top a]\\&=&\mathbf{E}[a^\top\mathcal{X}a]=\mathbf{E}[{\rm tr}(aa^\top\mathcal{X})]\\&=&{\rm tr}(\mathcal{A}\mathcal{X})=\sum_{i=1}^{n}\sum_{j=1}^n \mathcal{A}_{i,j}\mathcal{X}_{i,j}\end{array}$$ Clearly, \(\rho\) is a measure of the average alignment between random vectors. It can also be used to measure the whiteness of the distribution of a random vector, since this last concept is related to the average alignment between any pair of vectors drawn independently from such a distribution. Hence, \(\rho(a,a)={\rm tr}(\mathcal{A}^2)=\sum_{i=1}^n\sum_{j=1}^n(\mathcal{A}_{i,j})^2\) can be bounded from above to guarantee that the sensing rows are distributed in a sufficiently isotropic way.
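A quick numerical check (mine, not from the page) of the chain of identities above: the closed form \(\rho={\rm tr}(\mathcal{A}\mathcal{X})=\sum_{i,j}\mathcal{A}_{i,j}\mathcal{X}_{i,j}\) should match a Monte Carlo estimate of \(\mathbf{E}[(a^\top x)^2]\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 16, 200_000

# two arbitrary symmetric positive-definite correlation matrices
B = rng.normal(size=(n, n)); Acorr = B @ B.T / n
C = rng.normal(size=(n, n)); Xcorr = C @ C.T / n

# closed form: rho = tr(A X) = sum_ij A_ij X_ij
rho_closed = np.trace(Acorr @ Xcorr)

# Monte Carlo: draw a ~ N(0, Acorr) and x ~ N(0, Xcorr) independently
La, Lx = np.linalg.cholesky(Acorr), np.linalg.cholesky(Xcorr)
a = rng.normal(size=(trials, n)) @ La.T
x = rng.normal(size=(trials, n)) @ Lx.T
rho_mc = np.mean(np.einsum('ij,ij->i', a, x) ** 2)

print(rho_closed, rho_mc)  # agree to Monte Carlo accuracy (a few percent)
```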
Rakeness-based design aims at maximizing the ability of the projections of collecting the signal energy while keeping them random enough to span the signal space.This is mapped into the following optimization problem:$$\begin{array}{ll} \displaystyle \max_{\mathcal{A}} & \;\;\displaystyle \mathrm{tr}(\mathcal{A}\mathcal{X})\\[0.5em]{\rm s.t.} &\begin{array}{l}\mathcal{A}\succeq 0\\\mathrm{tr}(\mathcal{A})=e\\\rho(a,a)=\mathrm{tr}(\mathcal{A}^2)\le \tau e^2\end{array}\end{array}$$where \(\mathcal{A}\succeq 0\) indicates the need for a positive semi-definite \(\mathcal{A}\), the energy of each row (\({\rm tr(\mathcal{A})}=\sum_{i=1}^n\mathcal{A}_{i,i}\)) is normalized to a certain \(e\), and \(\tau\le 1\) controls the strength of the randomness constraint. Tuning of \(\tau\) on a proper and known range is not critical (see
Bertoni 2013), since it does not appreciably alter the overall system performance. The solution of this optimization problem (given in analytical terms in Mangia 2012) yields a correlation matrix \(\mathcal{A}\) that identifies the stochastic process to be used for generating the sensing vectors.
The following picture sketches the rakeness-based design flow. For a given input signal class, i.e., a given \(\mathcal{X}\), the matrix \(\mathcal{A}\) is computed offline. After that, the whole sensing process works without any additional processing with respect to a standard CS encoder. Applying rakeness only means using sensing vectors with an optimized correlation profile in place of purely random ones.
It has been demonstrated (see references below) that exploiting rakeness-based design means increasing the quality of the reconstructed signal for a given value of \(m\) or, vice versa, decreasing the minimum \(m\) guaranteeing a prescribed reconstruction quality.
Mangia, M.; Rovatti, R.; Setti, G., "Rakeness in the Design of Analog-to-Information Conversion of Sparse and Localized Signals," Circuits and Systems I: Regular Papers, IEEE Transactions on, vol.59, no.5, pp.1001-1014, 2012 - doi: 10.1109/TCSI.2012.2191312
Bertoni, N.; Mangia, M.; Pareschi, F.; Rovatti, R.; Setti, G., "Correlation tuning in compressive sensing based on rakeness: A case study," Electronics, Circuits, and Systems (ICECS), 2013 IEEE 20th International Conference on, pp.257-260, 2013 - doi: 10.1109/ICECS.2013.6815403
Mangia, M.; Rovatti, R.; Setti, G., "Analog-to-information conversion of sparse and non-white signals: Statistical design of sensing waveforms," Circuits and Systems (ISCAS), 2011 IEEE International Symposium on, pp.2129-2132, 2011 - doi: 10.1109/ISCAS.2011.5938019
acquired signals
In this toy case the signal dimensionality is \(n=64\) and \(S\) is chosen as the \(n\)-dimensional DCT basis. To generate a sparse and localized signal we start from a Gaussian \(n\)-dimensional vector \(g\) with zero mean and a non-diagonal correlation matrix \(\mathcal{G}\) resulting from a band-pass spectral profile like the one in the figure below. Then we compute \(\gamma=S^{-1}g\) to represent such a vector w.r.t. \(S\) and produce the vector \(\xi\) by zeroing all the components of \(\gamma\) except the \(\kappa=6\) largest ones. The signal is then obtained by mapping back into the signal domain, \(x=S\xi\). Non-idealities are modelled as additive white Gaussian noise, injected into the input signal so that the intrinsic signal-to-noise ratio is equal to 40 dB. Clearly \(x\) is \(\kappa\)-sparse and, since we retain the \(\kappa\) largest components of \(\gamma=S^{-1}g\), we have that \(\mathcal{X}\simeq\mathcal{G}\), i.e., that \(x\) is localized. The true matrix \(\mathcal{X}\) is estimated by a Monte Carlo simulation and is reported in the figure below.
sensing matrices
Such signals are acquired by means of two different sensing matrices, both with \(m=12\) rows: a matrix \(A\) made of i.i.d. Gaussian zero-mean and unit-variance entries, and a matrix \(A\) with independent rows, each of them generated by a multivariate Gaussian distribution with zero mean and a correlation matrix \(\mathcal{A}\) given by the rakeness-based approach (see the above plots). A typical pair of matrices is rendered as colormaps below: visual inspection confirms that the right-hand matrix has slightly more low-pass rows w.r.t. the left-hand one.
results
In both cases, the signal is recovered by solving the standard \(\ell_1\)-minimization problem. The reconstructions are reported, along with the input signal, in the next figure. Performance is measured in terms of Reconstruction SNR (RSNR), i.e., if \(x\) is the true signal and \(\hat{x}\) the one output by the \(\ell_1\)-minimization, \(RSNR=\left[\|x\|_2/\|x-\hat{x}\|_2\right]_{dB}\). In this case RSNR = 4.1 dB for the i.i.d. projection matrix (the one that would have been used by standard CS) and RSNR = 40.1 dB for the rakeness-based projection matrix.
Though this only implicitly enters the design flow, rakeness-based design assumes that the input signal is localized, i.e., that its energy is not uniformly distributed in the whole signal space, so that the correlation matrix \(\mathcal{X}={\mathbf E}[xx^\top]\) is not a multiple of the identity. To quantify localization we may measure how much the eigenvalues \(\lambda_j(\mathcal{X})\), \(j=1,\dots,n\), of \(\mathcal{X}\) deviate from a uniform distribution of the total energy \({\rm tr}(\mathcal{X})\). A straightforward option is $$\mathfrak{L}_x=\sum_{j=1}^{n}\left(\frac{\lambda_j(\mathcal{X})}{\mathrm{tr}(\mathcal{X})}-\frac{1}{n}\right)^2=\frac{\mathrm{tr}\left(\mathcal{X}^2\right)}{\mathrm{tr}^2(\mathcal{X})}-\frac{1}{n} $$
The above localization measure satisfies \(0\le\mathfrak{L}_x\le 1-\frac{1}{n}\) where the upper bound corresponds to a unique non-zero eigenvalue and thus to a distribution producing vectors along a unique direction identified by the corresponding eigenvector. The lower bound corresponds to an isotropic distribution producing vectors with uncorrelated entries. If \(\mathfrak{L}_x=0\) the solution of the rakeness maximization problem would yield \(\mathcal{A}=\frac{e}{n}\mathcal{I}\) where \(\mathcal{I}\) is the identity matrix and thus, classical i.i.d. sensing would be the optimal choice. Yet, typical real-world signals have \(\mathfrak{L}_x>0\) and the \(\mathcal{A}\) resulting from the rakeness maximization problem achieves better results w.r.t. classical i.i.d. sensing.
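These bounds are easy to verify numerically; the following sketch (mine, not from the page) evaluates \(\mathfrak{L}_x\) for the two extreme cases:

```python
import numpy as np

def localization(X: np.ndarray) -> float:
    """L_x = tr(X^2) / tr(X)^2 - 1/n for a correlation matrix X."""
    n = X.shape[0]
    return np.trace(X @ X) / np.trace(X) ** 2 - 1.0 / n

n = 64
v = np.random.default_rng(2).normal(size=n)

print(localization(np.eye(n)))       # 0.0: white, isotropic distribution
print(localization(np.outer(v, v)))  # 1 - 1/n: energy in a single direction
```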
Finally, both localization and rakeness depend quadratically on the second-order statistical features of the entailed vectors. This results in a link between the two quantities, $$ \mathfrak{L}_a=\frac{\mathrm{tr}\left(\mathcal{A}^2\right)}{\mathrm{tr}^2(\mathcal{A})}-\frac{1}{n}= \frac{\rho(a,a)}{e^2}-\frac{1}{n} $$ and allows us to recast the rakeness maximization problem as $$ \begin{array}{ll} \displaystyle \max_{\mathcal{A}} & \;\;\displaystyle \mathrm{tr}(\mathcal{A}\mathcal{X})\\[0.5em] {\rm s.t.} & \begin{array}{l} \mathcal{A}\succeq 0\\ \mathrm{tr}(\mathcal{A})=e\\ \mathfrak{L}_a\le\ell\mathfrak{L}_x \end{array} \end{array} $$ where the randomness constraint capping \(\rho(a,a)\) is translated into a more intuitive constraint requiring that the sensing waveforms (\(a\)) are localized no more than \(\ell\) times as much as the waveforms they have to sense (\(x\)).
Cambareri, V.; Mangia, M.; Pareschi, F.; Rovatti, R.; Setti, G., "A rakeness-based design flow for Analog-to-Information conversion by Compressive Sensing," Circuits and Systems (ISCAS), 2013 IEEE International Symposium on, pp.1360-1363, 2013 - doi: 10.1109/BioCAS.2011.6107818
Having a physical implementation in mind, binary antipodal or ternary matrices relax the product operations required to compute the measurement vector into much simpler operations.
A useful set of examples of the proposed simulation environment for different scenarios and different encoding approaches. Each one is associated with a piece of Matlab code and a brief discussion.
The official download page of the Matlab files implementing rakeness-based CS. |
Table of Contents
Fundamental Groups under Homeomorphisms on Topological Spaces
We now look at a very important theorem regarding the fundamental groups of topological spaces.
Theorem 1: Let $X$ and $Y$ be path-connected topological spaces. If $X$ and $Y$ are homotopically equivalent then $\pi_1(X, x) \cong \pi_1(Y, y)$ for every $x \in X$ and for every $y \in Y$.
Recall from the Homeomorphic Topological Spaces are Homotopically Equivalent page that if two spaces are homeomorphic then they are also homotopically equivalent. Therefore, if $X$ and $Y$ are homeomorphic path-connected topological spaces then for every $x \in X$ and for every $y \in Y$:
(1) $\pi_1(X, x) \cong \pi_1(Y, y)$
Thus, the fundamental group of a space is a topological property. This theorem is extremely powerful for showing that two topological spaces are not homeomorphic.
Theorem 2: The closed unit disk $D^2$ is not homeomorphic to the sphere $S^2$. Proof: Suppose instead that $D^2$ is homeomorphic to $S^2$ and let $f : D^2 \to S^2$ be a homeomorphism. Let $x^*$ be an interior point of $D^2$. Then $D^2 \setminus \{ x^* \}$ must be homeomorphic to $S^2 \setminus \{ f(x^*) \}$. Consider the fundamental group of $D^2 \setminus \{ x^* \}$. The circle $S^1$ is a deformation retract of $D^2 \setminus \{ x^* \}$ and so $\pi_1(D^2 \setminus \{ x^* \}) \cong \pi_1(S^1) \cong \mathbb{Z}$. Now consider the fundamental group of $S^2 \setminus \{ f(x^*) \}$. The space can be deformed into a disk as illustrated below. Therefore $\pi_1(S^2 \setminus \{ f(x^*) \})$ is the trivial group. But these fundamental groups are different, which contradicts Theorem 1. Therefore the assumption that $D^2$ was homeomorphic to $S^2$ was false. So $D^2$ is not homeomorphic to $S^2$. $\blacksquare$ |
Continued fractions provide a representation of numbers which is, in a sense, generic and canonical. It does not depend on an arbitrary choice of a base. Such a representation should be the best in a sense. In this section we quantify this naive idea.
A rational number \(a/b\) is referred to as a "good" approximation to a number \(\alpha\) if \[\frac{c}{d} \neq \frac{a}{b} \hspace{5mm} \text{and} \hspace{5mm} 0<d \leq b\] imply \[|d\alpha - c| > |b\alpha -a|.\]
Remarks. 1. Our "good approximation" is "the best approximation of the second kind" in a more usual terminology. 2. Although we use this definition only for rational \(\alpha\), it may be used for any real \(\alpha\) as well. Neither the results of this section nor the proofs alter. 3. Naively, this definition means that \(a/b\) approximates \(\alpha\) better than any other rational number whose denominator does not exceed \(b\). There is another, more common, definition of "the best approximation". A rational number \(x/y\) is referred to as "the best approximation of the first kind" if \(c/d\neq x/y\) and \(0<d\leq y\) imply \(|\alpha - c/d|>|\alpha - x/y|\). In other words, \(x/y\) is closer to \(\alpha\) than any rational number whose denominator does not exceed \(y\). In our definition we consider a slightly different measure of approximation, which takes into account the denominator, namely \(b|\alpha - a/b|=|b\alpha -a|\) instead of taking just the distance \(|\alpha - a/b|\).
Theorem [good]. Any "good" approximation is a convergent.
Proof. Let \(a/b\) be a "good" approximation to \(\alpha = [a_0;a_1,a_2,\ldots,a_n]\). We have to prove that \(a/b=p_k/q_k\) for some \(k\).
Thus we have \(a/b>p_1/q_1\) or \(a/b\) lies between two consecutive convergents \(p_{k-1}/q_{k-1}\) and \(p_{k+1}/q_{k+1}\) for some \(k\). Assume the latter. Then \[\left\vert \frac{a}{b} - \frac{p_{k-1}}{q_{k-1}} \right\vert \geq \frac{1}{bq_{k-1}}\] and \[\left\vert \frac{a}{b} - \frac{p_{k-1}}{q_{k-1}} \right\vert < \left\vert \frac{p_k}{q_k} - \frac{p_{k-1}}{q_{k-1}} \right\vert = \frac{1}{q_kq_{k-1}}.\] It follows that \[\label{l7} b>q_k.\] Also \[\left\vert \alpha - \frac{a}{b} \right\vert \geq \left\vert \frac{p_{k+1}}{q_{k+1}} - \frac{a}{b} \right\vert \geq \frac{1}{bq_{k+1}},\] which implies \[\left\vert b\alpha - a \right\vert \geq \frac{1}{q_{k+1}}.\] At the same time Theorem [inequ] (its right inequality multiplied by \(q_k\)) reads \[\left\vert q_k \alpha - p_k \right\vert \leq \frac{1}{q_{k+1}}.\] It follows that \[\left\vert q_k \alpha - p_k \right\vert \leq \left\vert b\alpha - a \right\vert,\] and the latter inequality together with ([l7]) shows that \(a/b\) is not a "good" approximation of \(\alpha\) in this case.
This finishes the proof of Theorem [good].
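The statement is easy to experiment with; here is a small sketch (mine, using Python's exact fractions) that computes the convergents \(p_k/q_k\) via the standard recurrences \(p_k=a_kp_{k-1}+p_{k-2}\), \(q_k=a_kq_{k-1}+q_{k-2}\) and checks that the second-kind measure \(|q_k\alpha-p_k|\) strictly decreases along them:

```python
from fractions import Fraction

def convergents(cf):
    """Yield the convergents p_k/q_k of [a0; a1, a2, ...]."""
    p0, q0, p1, q1 = 1, 0, cf[0], 1
    yield Fraction(p1, q1)
    for a in cf[1:]:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield Fraction(p1, q1)

alpha = Fraction(415, 93)  # 415/93 = [4; 2, 6, 7]
errs = [abs(c.denominator * alpha - c.numerator)
        for c in convergents([4, 2, 6, 7])]
print(errs)  # [Fraction(43, 93), Fraction(7, 93), Fraction(1, 93), Fraction(0, 1)]
```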
Exercises
Prove that if \(a/b\) is a "good" approximation then \(a/b \geq a_0\).
Show that if \(a/b>p_1/q_1\) then \(a/b\) is not a "good" approximation to \(\alpha\). |
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time.
I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what’s on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view random questions on a variety of topics or to download paper practice tests.
MTEL General Curriculum Mathematics Practice
Question 1
Some children explored the diagonals in 2 x 2 squares on pages of a calendar (where all four squares have numbers in them). They conjectured that the sum of the diagonals is always equal; in the example below, 8+16=9+15. Which of the equations below could best be used to explain why the children's conjecture is correct?
\( \large 8x+16x=9x+15x\)
Hint:
What would x represent in this case? Make sure you can describe in words what x represents.
\( \large x+(x+2)=(x+1)+(x+1)\)
Hint:
What would x represent in this case? Make sure you can describe in words what x represents.
\( \large x+(x+8)=(x+1)+(x+7)\)
Hint:
x is the number in the top left square, x+8 is one below and to the right, x+1 is to the right of x, and x+7 is below x.
\( \large x+8+16=x+9+15\)
Hint:
What would x represent in this case? Make sure you can describe in words what x represents.
Question 2
Here is a number trick: 1) Pick a whole number. 2) Double your number. 3) Add 20 to the result. 4) Multiply the result by 5. 5) Subtract 100. 6) Divide by 10. The result is always the number that you started with! Suppose you start by picking N. Which of the equations below best demonstrates that the result after Step 6 is also N?
\( \large N*2+20*5-100\div 10=N\)
Hint:
Use parentheses or else order of operations is off.
\( \large \left( \left( 2*N+20 \right)*5-100 \right)\div 10=N\)
\( \large \left( N+N+20 \right)*5-100\div 10=N\)
Hint:
With this answer you would subtract 10, instead of subtracting 100 and then dividing by 10.
\( \large \left( \left( \left( N\div 10 \right)-100 \right)*5+20 \right)*2=N\)
Hint:
This answer is quite backwards.
Question 3
Use the problem below to answer the question that follows: T-shirts are on sale for 20% off. Tasha paid $8.73 for a shirt. What is the regular price of the shirt? There is no tax on clothing purchases under $175. Let p represent the regular price of this t-shirt. Which of the following equations is correct?
\( \large 0.8p=\$8.73\)
Hint:
80% of the regular price = $8.73.
\( \large \$8.73+0.2*\$8.73=p\)
Hint:
The 20% off was off of the ORIGINAL price, not off the $8.73 (a lot of people make this mistake). Plus this is the same equation as in choice c.
\( \large 1.2*\$8.73=p\)
Hint:
The 20% off was off of the ORIGINAL price, not off the $8.73 (a lot of people make this mistake). Plus this is the same equation as in choice b.
\( \large p-0.2*\$8.73=p\)
Hint:
Subtract p from both sides of this equation, and you have −0.2 × $8.73 = 0, which is false.
Question 4
Taxicab fares in Boston (Spring 2012) are $2.60 for the first \(\dfrac{1}{7}\) of a mile or less and $0.40 for each \(\dfrac{1}{7}\) of a mile after that. Let d represent the distance a passenger travels in miles (with \(d>\dfrac{1}{7}\)). Which of the following expressions represents the total fare?
\( \large \$2.60+\$0.40d\)
Hint:
It's 40 cents for 1/7 of a mile, not per mile.
\( \large \$2.60+\$0.40\dfrac{d}{7}\)
Hint:
According to this equation, going 7 miles would cost $3; does that make sense?
\( \large \$2.20+\$2.80d\)
Hint:
You can think of the fare as $2.20 to enter the cab, and then $0.40 for each 1/7 of a mile, including the first 1/7 of a mile (or $2.80 per mile).
Alternatively, you pay $2.60 for the first 1/7 of a mile, and then $2.80 per mile for d-1/7 miles. The total is 2.60+2.80(d-1/7) = 2.60+ 2.80d -.40 = 2.20+2.80d.
\( \large \$2.60+\$2.80d\)
Hint:
Don't count the first 1/7 of a mile twice.
Question 5
Cell phone plan A charges $3 per month plus $0.10 per minute. Cell phone plan B charges $29.99 per month, with no fee for the first 400 minutes and then $0.20 for each additional minute. Which equation can be used to solve for the number of minutes, m (with m>400) that a person would have to spend on the phone each month in order for the bills for plan A and plan B to be equal?
\( \large 3.10m=400+0.2m\)
Hint:
These are the numbers in the problem, but this equation doesn't make sense. If you don't know how to make an equation, try plugging in an easy number like m=500 minutes to see if each side equals what it should.
\( \large 3+0.1m=29.99+.20m\)
Hint:
Doesn't account for the 400 free minutes.
\( \large 3+0.1m=400+29.99+.20(m-400)\)
Hint:
Why would you add 400 minutes and $29.99? If you don't know how to make an equation, try plugging in an easy number like m=500 minutes to see if each side equals what it should.
\( \large 3+0.1m=29.99+.20(m-400)\)
Hint:
The left side is $3 plus $0.10 times the number of minutes. The right is $29.99 plus $0.20 times the number of minutes over 400.
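The break-even point for these two plans can also be computed directly; a sketch (Python, solving the equation from the correct choice, \(3+0.1m=29.99+0.2(m-400)\), by elementary algebra):

```python
# Plan A: 3 + 0.10*m.  Plan B (m > 400): 29.99 + 0.20*(m - 400).
# Setting them equal: 3 + 0.1m = 29.99 + 0.2m - 80, so 0.1m = 53.01.
m = (3 - (29.99 - 0.20 * 400)) / (0.20 - 0.10)

plan_a = 3 + 0.10 * m
plan_b = 29.99 + 0.20 * (m - 400)
assert abs(plan_a - plan_b) < 1e-9
```

Both plans cost the same ($56.01) at about 530 minutes per month, which also gives a quick sanity check on the equation.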
Question 6
A sales company pays its representatives $2 for each item sold, plus 40% of the price of the item. The rest of the money that the representatives collect goes to the company. All transactions are in cash, and all items cost $4 or more. If the price of an item in dollars is p, which expression represents the amount of money the company collects when the item is sold?
\( \large \dfrac{3}{5}p-2\)
Hint:
The company gets 3/5=60% of the price, minus the $2 per item.
\( \large \dfrac{3}{5}\left( p-2 \right)\)
Hint:
This is sensible, but not what the problem states.
\( \large \dfrac{2}{5}p+2\)
Hint:
The company pays the extra $2; it doesn't collect it.
\( \large \dfrac{2}{5}p-2\)
Hint:
This has the company getting 2/5 = 40% of the price of each item, but that's what the representative gets.
Question 7
Here is a student's work solving an equation: \( x-4=-2x+6\) \( x-4+4=-2x+6+4\) \( x=-2x+10\) \( x-2x=10\) \( x=10\) Which of the following statements is true?
The student‘s solution is correct.
Hint:
Try plugging into the original solution.
The student did not correctly use properties of equality.
Hint:
After \( x=-2x+10\), the student subtracted 2x on the left and added 2x on the right.
The student did not correctly use the distributive property.
Hint:
Distributive property is \(a(b+c)=ab+ac\).
The student did not correctly use the commutative property.
Hint:
Commutative property is \(a+b=b+a\) or \(ab=ba\).
Question 8
Solve for x: \(\large 4-\dfrac{2}{3}x=2x\)
\( \large x=3\)
Hint:
Try plugging x=3 into the equation.
\( \large x=-3\)
Hint:
Left side is positive, right side is negative when you plug this in for x.
\( \large x=\dfrac{3}{2}\)
Hint:
One way to solve: \(4=\dfrac{2}{3}x+2x=\dfrac{8}{3}x\), so \(x=\dfrac{3 \times 4}{8}=\dfrac{3}{2}\). Another way is to just plug x=3/2 into the equation and see that each side equals 3 -- on a multiple choice test, you almost never have to actually solve for x.
\( \large x=-\dfrac{3}{2}\)
Hint:
Left side is positive, right side is negative when you plug this in for x.
Question 9
Which of the following is equivalent to \( \large A-B+C\div D\times E\)?
\( \large A-B-\dfrac{C}{DE} \)
Hint:
In the order of operations, multiplication and division have the same priority, so do them left to right; same with addition and subtraction.
\( \large A-B+\dfrac{CE}{D}\)
Hint:
In practice, you're better off using parentheses than writing an expression like the one in the question. The PEMDAS acronym that many people memorize is misleading. Multiplication and division have equal priority and are done left to right. They have higher priority than addition and subtraction. Addition and subtraction also have equal priority and are done left to right.
\( \large \dfrac{AE-BE+CE}{D}\)
Hint:
Use order of operations, don't just compute left to right.
\( \large A-B+\dfrac{C}{DE}\)
Hint:
In the order of operations, multiplication and division have the same priority, so do them left to right
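The left-to-right rule for equal-priority operators in Question 9 can be checked directly, since most programming languages follow the same convention. A quick sketch (Python; the sample values are mine):

```python
# A - B + C / D * E is read as (A - B) + ((C / D) * E):
# division and multiplication share priority and associate left to right,
# and so do addition and subtraction.
A, B, C, D, E = 20.0, 4.0, 6.0, 3.0, 5.0

left_to_right = A - B + C / D * E
expected      = (A - B) + ((C / D) * E)   # choice b: A - B + CE/D
assert left_to_right == expected

# The common misreading C / (D * E) gives a different value:
assert left_to_right != A - B + C / (D * E)
```

With these values the expression evaluates to 26, while the misreading gives 16.4.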
Question 10
Use the solution procedure below to answer the question that follows: \( \large {\left( x+3 \right)}^{2}=10\) \( \large \left( x+3 \right)\left( x+3 \right)=10\) \( \large {x}^{2}+9=10\) \( \large {x}^{2}+9-9=10-9\) \( \large {x}^{2}=1\) \( \large x=1\text{ or }x=-1\) Which of the following is incorrect in the procedure shown above?
The commutative property is used incorrectly.
Hint:
The commutative property is \(a+b=b+a\) or \(ab=ba\).
The associative property is used incorrectly.
Hint:
The associative property is \(a+(b+c)=(a+b)+c\) or \(a \times (b \times c)=(a \times b) \times c\).
Order of operations is done incorrectly.
The distributive property is used incorrectly.
Hint:
\((x+3)(x+3)=x(x+3)+3(x+3)\)=\(x^2+3x+3x+9.\)
Learning Objectives
In this section students will:
Identify the degree and leading coefficient of polynomials. Add and subtract polynomials. Multiply polynomials. Use FOIL to multiply binomials. Perform operations with polynomials of several variables.
Earl is building a doghouse, whose front is in the shape of a square topped with a triangle. There will be a rectangular door through which the dog can enter and exit the house. Earl wants to find the area of the front of the doghouse so that he can purchase the correct amount of paint. Using the measurements of the front of the house, shown in Figure \(\PageIndex{1}\), we can create an expression that combines several variable terms, allowing us to solve this problem and others like it.
First find the area of the square in square feet.
\[\begin{align*} A &= s^2\\ &= {(2x)}^2\\ &= 4x^2 \end{align*}\]
Then find the area of the triangle in square feet.
\[\begin{align*} A &= \dfrac{1}{2}bh\\ &= \dfrac{1}{2}(2x)\left (\dfrac{3}{2} \right )\\ &= \dfrac{3}{2}x \end{align*}\]
Next find the area of the rectangular door in square feet.
\[\begin{align*} A &= lw\\ &= x\times1\\ &= x \end{align*}\]
The area of the front of the doghouse can be found by adding the areas of the square and the triangle, and then subtracting the area of the rectangle. When we do this, we get
\(4x^2+\dfrac{3}{2}x-x\) \(ft^2\)
or
\(4x^2+\dfrac{1}{2}x\) \(ft^2\)
In this section, we will examine expressions such as this one, which combine several variable terms.
Identifying the Degree and Leading Coefficient of Polynomials
The formula just found is an example of a
polynomial, which is a sum or difference of terms, each consisting of a variable raised to a nonnegative integer power. A number multiplied by a variable raised to an exponent, such as \(384\pi\), is known as a coefficient. Coefficients can be positive, negative, or zero, and can be whole numbers, decimals, or fractions. Each product \(a_ix^i\), such as \(384\pi w\), is a term of a polynomial. If a term does not contain a variable, it is called a constant.
A polynomial containing only one term, such as \(5x^4\), is called a
monomial. A polynomial containing two terms, such as \(2x−9\), is called a binomial. A polynomial containing three terms, such as \(−3x^2+8x−7\), is called a trinomial.
We can find the
degree of a polynomial by identifying the highest power of the variable that occurs in the polynomial. The term with the highest degree is called the leading term because it is usually written first. The coefficient of the leading term is called the leading coefficient. When a polynomial is written so that the powers are descending, we say that it is in standard form.
How to: Given a polynomial expression, identify the degree and leading coefficient.
Find the highest power of x to determine the degree. Identify the term containing the highest power of x to find the leading term. Identify the coefficient of the leading term.
For the following polynomials, identify the degree, the leading term, and the leading coefficient.
1. \(3+2x^2−4x^3\)
2. \(5t^5−2t^3+7t\)
3. \(6p−p^3−2\)

Solution

1. The highest power of \(x\) is \(3\), so the degree is \(3\). The leading term is the term containing that degree, \(−4x^3\). The leading coefficient is the coefficient of that term, \(−4\).
2. The highest power of \(t\) is \(5\), so the degree is \(5\). The leading term is the term containing that degree, \(5t^5\). The leading coefficient is the coefficient of that term, \(5\).
3. The highest power of \(p\) is \(3\), so the degree is \(3\). The leading term is the term containing that degree, \(−p^3\). The leading coefficient is the coefficient of that term, \(−1\).
Exercise \(\PageIndex{1}\)
Identify the degree, leading term, and leading coefficient of the polynomial \(4x^2−x^6+2x−6\).
Answer
The degree is \(6\), the leading term is \(−x^6\), and the leading coefficient is \(−1\).
Adding and Subtracting Polynomials
We can add and subtract polynomials by combining like terms, which are terms that contain the same variables raised to the same exponents. For example, \(5x^2\) and \(−2x^2\) are like terms, and can be added to get \(3x^2\), but \(3x\) and \(3x^2\) are not like terms, and therefore cannot be added.
How to: Given multiple polynomials, add or subtract them to simplify the expressions.
1. Combine like terms.
2. Simplify and write in standard form.
Find the sum.
\((12x^2+9x−21)+(4x^3+8x^2−5x+20)\)
Solution
\[\begin{align*} &4x^3+(12x^2+8x^2)+(9x-5x)+(-21+20)\qquad \text{Combine like terms} \\ &4x^3+20x^2+4x-1\qquad \qquad \qquad \qquad \qquad \qquad \; \; \; \text{Simplify} \end{align*}\]
Analysis
We can check our answers to these types of problems using a graphing calculator. To check, graph the problem as given along with the simplified answer. The two graphs should be equivalent. Be sure to use the same window to compare the graphs. Using different windows can make the expressions seem equivalent when they are not.
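The same check can be done without a graphing calculator by sampling both expressions at several points; a sketch (Python, with coefficient lists in descending powers — the representation is my choice):

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coefficients in descending powers (Horner's method)."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# (12x^2 + 9x - 21) + (4x^3 + 8x^2 - 5x + 20) versus the simplified 4x^3 + 20x^2 + 4x - 1
for x in [-2, -1, 0, 0.5, 1, 3]:
    total = poly_eval([12, 9, -21], x) + poly_eval([4, 8, -5, 20], x)
    assert abs(total - poly_eval([4, 20, 4, -1], x)) < 1e-9
```

If the simplified answer were wrong, two cubics that differ would disagree at some of these sample points (they can agree at no more than three).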
Exercise \(\PageIndex{2}\)
Find the sum.
\((2x^3+5x^2−x+1)+(2x^2−3x−4)\)
Answer
\(2x^3+7x^2−4x−3\)
Find the difference.
\((7x^4−x^2+6x+1)−(5x^3−2x^2+3x+2)\)
Solution
\(7x^4−5x^3+(−x^2+2x^2)+(6x−3x)+(1−2)\) Combine like terms
\(7x^4−5x^3+x^2+3x−1\) Simplify
Analysis
Note that finding the difference between two polynomials is the same as adding the opposite of the second polynomial to the first.
Exercise \(\PageIndex{3}\)
Find the difference.
\((−7x^3−7x^2+6x−2)−(4x^3−6x^2−x+7)\)
Answer
\(−11x^3−x^2+7x−9\)
Multiplying Polynomials
Multiplying polynomials is a bit more challenging than adding and subtracting polynomials. We must use the distributive property to multiply each term in the first polynomial by each term in the second polynomial. We then combine like terms. We can also use a shortcut called the FOIL method when multiplying binomials. Certain special products follow patterns that we can memorize and use instead of multiplying the polynomials by hand each time. We will look at a variety of ways to multiply polynomials.
Multiplying Polynomials Using the Distributive Property
To multiply a number by a polynomial, we use the distributive property. The number must be distributed to each term of the polynomial. We can distribute the \(2\) in \(2(x+7)\) to obtain the equivalent expression \(2x+14\). When multiplying polynomials, the distributive property allows us to multiply each term of the first polynomial by each term of the second. We then add the products together and combine like terms to simplify.
How to: Given the multiplication of two polynomials, use the distributive property to simplify the expression.
Multiply each term of the first polynomial by each term of the second. Combine like terms. Simplify.
Find the product.
\((2x+1)(3x^2−x+4)\)
Solution
\[\begin{align*} &2x(3x^2-x+4)+1(3x^2-x+4)\qquad \text{ Use the distributive property }\\ &(6x^3-2x^2+8x)+(3x^2-x+4)\qquad \text{ Multiply }\\ &6x^3+(-2x^2+3x^2)+(8x-x)+4\qquad \text{ Combine like terms } \\ &6x^3+x^2+7x+4\qquad \text{ Simplify } \end{align*}\]
Analysis
We can use a table to keep track of our work, as shown in Table \(\PageIndex{1}\). Write one polynomial across the top and the other down the side. For each box in the table, multiply the term for that row by the term for that column. Then add all of the terms together, combine like terms, and simplify.
         \(3x^2\)   \(−x\)      \(+4\)
\(2x\)   \(6x^3\)   \(−2x^2\)   \(8x\)
\(+1\)   \(3x^2\)   \(−x\)      \(4\)
Exercise \(\PageIndex{4}\)
Find the product.
\((3x+2)(x^3−4x^2+7)\)
Answer
\(3x^4−10x^3−8x^2+21x+14\)
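The term-by-term multiplication described above is exactly a convolution of coefficient lists; a sketch (Python, with coefficients in ascending powers — the representation and function name are mine):

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists in ascending powers.

    Each a[i]*b[j] contributes to the coefficient of x^(i+j) -- the same
    'every term times every term' rule as the distributive property.
    """
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (2x + 1)(3x^2 - x + 4): ascending coefficients [1, 2] and [4, -1, 3]
product = poly_mul([1, 2], [4, -1, 3])
assert product == [4, 7, 1, 6]   # 4 + 7x + x^2 + 6x^3
```

Reading the result in descending powers recovers \(6x^3+x^2+7x+4\), matching the worked example above.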
Using FOIL to Multiply Binomials
A shortcut called FOIL is sometimes used to find the product of two binomials. It is called FOIL because we multiply the first terms, the outer terms, the inner terms, and then the last terms of each binomial.
The FOIL method arises out of the distributive property. We are simply multiplying each term of the first binomial by each term of the second binomial, and then combining like terms.
FOIL to simplify expression
Given two binomials, use FOIL to simplify the expression.
Multiply the first terms of each binomial. Multiply the outer terms of the binomials. Multiply the inner terms of the binomials. Multiply the last terms of each binomial. Add the products. Combine like terms and simplify.
Use FOIL to find the product.
\((2x−18)(3x+3) \nonumber\)
Solution
Find the product of the first terms.
Find the product of the outer terms.
Find the product of the inner terms.
Find the product of the last terms.
\[\begin{align*} &6x^2+6x-54x-54\qquad \text{Add the products}\\ &6x^2+(6x-54x)-54\qquad \text{Combine like terms} \\ &6x^2-48x-54\qquad \qquad \qquad \text{Simplify} \end{align*}\]
Exercise \(\PageIndex{5}\)
Use FOIL to find the product.
\((x+7)(3x−5)\)
Answer
\(3x^2+16x−35\)
Perfect Square Trinomials
Certain binomial products have special forms. When a binomial is squared, the result is called a perfect square trinomial. We can find the square by multiplying the binomial by itself. However, there is a special form that each of these perfect square trinomials takes, and memorizing the form makes squaring binomials much easier and faster. Let’s look at a few perfect square trinomials to familiarize ourselves with the form.
\({(x+5)}^2=x^2+10x+25\)
\({(x-3)}^2=x^2-6x+9\)
Notice that the first term of each trinomial is the square of the first term of the binomial and, similarly, the last term of each trinomial is the square of the last term of the binomial. The middle term is double the product of the two terms. Lastly, we see that the first sign of the trinomial is the same as the sign of the binomial.
Expand \((3x−8)^2\).
Solution
Begin by squaring the first term and the last term. For the middle term of the trinomial, double the product of the two terms: \({(3x−8)}^2=9x^2−48x+64\).
Exercise \(\PageIndex{6}\)
Expand \({(4x−1)}^2\).
Answer
\(16x^2−8x+1\)
Difference of Squares
Another special product is called the difference of squares, which occurs when we multiply a binomial by another binomial with the same terms but the opposite sign. Let’s see what happens when we multiply \((x+1)(x−1)\) using the FOIL method.
The middle term drops out, resulting in a difference of squares. Just as we did with the perfect squares, let’s look at a few examples.
\((x+5)(x-5)=x^2-25\)
\((x+11)(x-11)=x^2-121\)
\((2x+3)(2x-3)=4x^2-9\)
Because the sign changes in the second binomial, the outer and inner terms cancel each other out, and we are left only with the square of the first term minus the square of the last term.
Q&A
Is there a special form for the sum of squares?
No. The difference of squares occurs because the opposite signs of the binomials cause the middle terms to disappear. There are no two binomials that multiply to equal a sum of squares.
Difference of Squares
When a binomial is multiplied by a binomial with the same terms separated by the opposite sign, the result is the square of the first term minus the square of the last term. \[(a+b)(a−b)=a^2−b^2\]
Multiply \((9x+4)(9x−4)\).
Solution
Square the first term to get \({(9x)}^2=81x^2\). Square the last term to get \(4^2=16\). Subtract the square of the last term from the square of the first term to find the product of \(81x^2−16\).
Exercise \(\PageIndex{7}\)
Multiply \((2x+7)(2x−7)\).
Answer
\(4x^2−49\)
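Both special products can be verified by sampling, since two polynomials of degree \(n\) that agree at more than \(n\) points are identical; a sketch (Python, helper name mine):

```python
def agree(f, g, points):
    """Check that two polynomial expressions agree at every sample point."""
    return all(abs(f(x) - g(x)) < 1e-9 for x in points)

xs = [-3, -1, 0, 0.5, 2, 7]

# Perfect square trinomial: (3x - 8)^2 = 9x^2 - 48x + 64
assert agree(lambda x: (3*x - 8)**2, lambda x: 9*x**2 - 48*x + 64, xs)

# Difference of squares: (9x + 4)(9x - 4) = 81x^2 - 16
assert agree(lambda x: (9*x + 4)*(9*x - 4), lambda x: 81*x**2 - 16, xs)
```

The same check applied to a supposed "sum of squares" factorization would fail at every nonzero sample point, consistent with the Q&A above.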
Performing Operations with Polynomials of Several Variables
We have looked at polynomials containing only one variable. However, a polynomial can contain several variables. All of the same rules apply when working with polynomials containing several variables. Consider an example:
\[\begin{align*} &(a+2b)(4a-b-c)\\ &a(4a-b-c)+2b(4a-b-c)\qquad \qquad\qquad\qquad\quad \text{ Use the distributive property }\\ &4a^2-ab-ac+8ab-2b^2-2bc\qquad \qquad\qquad\qquad\qquad \text{ Multiply }\\ &4a^2+(-ab+8ab)-ac-2b^2-2bc\qquad \qquad\qquad\qquad \; \text{ Combine like terms } \\ &4a^2+7ab-ac-2bc-2b^2\qquad \qquad \qquad \qquad \qquad \qquad\text{ Simplify } \end{align*}\]
Multiply \((x+4)(3x−2y+5)\).
Solution
\[\begin{align*} &x(3x-2y+5)+4(3x-2y+5)\qquad \text{ Use the distributive property }\\ &3x^2-2xy+5x+12x-8y+20\qquad \text{ Multiply }\\ &3x^2-2xy+(5x+12x)-8y+20\qquad \text{ Combine like terms } \\ &3x^2-2xy+17x-8y+20\qquad \qquad\text{ Simplify } \end{align*}\]
Exercise \(\PageIndex{8}\)
Multiply \((3x−1)(2x+7y−9)\).
Answer
\(6x^2+21xy−29x−7y+9\)
Key Equations
perfect square trinomial: \({(x+a)}^2=(x+a)(x+a)=x^2+2ax+a^2\)

difference of squares: \((a+b)(a−b)=a^2−b^2\)

Key Concepts

A polynomial is a sum of terms each consisting of a variable raised to a non-negative integer power. The degree is the highest power of the variable that occurs in the polynomial. The leading term is the term containing the highest degree, and the leading coefficient is the coefficient of that term. See Example.
We can add and subtract polynomials by combining like terms. See Example and Example.
To multiply polynomials, use the distributive property to multiply each term in the first polynomial by each term in the second. Then add the products. See Example.
FOIL (First, Outer, Inner, Last) is a shortcut that can be used to multiply binomials. See Example.
Perfect square trinomials and difference of squares are special products. See Example and Example.
Follow the same rules to work with polynomials containing several variables. See Example.
I'm trying to prove what seems to be an elementary general topology exercise (I'm trying to prove it in order to use it in a basic complex analysis course). It's the following property:
Let $(X,d)$ be a metric space, let $G\subset X$ be an open but not closed subset of $X$ and $x\in G$. Denoting, for each $r>0$, the open ball $B(x,r):=\{y\in X\mid d(x,y)<r\}$, we define
$R=\sup\{r>0\mid B(x,r)\subset G\}\\ D= \operatorname{dist}(\{x\},\partial G) \quad \text{(Since $G$ is not clopen, it's $\partial G \ne \varnothing$})\\ (\text{where for each }A, B\subset X \text{ we define } \operatorname{dist}(A,B):=\inf\{d(a,b)\mid a\in A, b \in B\}). $
Then, $R=D$.
So far I've managed to prove that $R\leq D$:
Suppose $D<R$. Then there exists some $δ>0$ s.t. $D<δ<R$, and therefore $B(x,δ)\subset G$. It's not difficult to see that, since $D<δ$, we will have $B(x,δ)\cap \partial G\ne\varnothing$. But this contradicts the fact that $G\cap\partial G = \varnothing$ for every open but not clopen set $G$. Therefore, $R\le D$.
But I'm having trouble trying to prove that $D\leq R$. I think the proof would have to do with examining what happens at the boundary of $\overline{B}(x,R)$, but I'm not even sure whether the whole statement is true, since I'm starting to suspect that maybe we need more hypotheses in order to make it true (the statement feels very intuitive in $\mathbb{R}^2$, for example).
Any thoughts on the problem?
Edit: Since I've received an answer which provides a counterexample for the $D\le R$ part (the one by Snake707), the statement has been proven false and therefore the only thing we can assure in metric spaces is that $R\le D$. I guess the question now is: When is also $D \le R$ true and therefore $D=R$? Are there some sufficient or necessary conditions over the particular space which is $X$ in order to make the statement true? (maybe asking for connectivity or requiring that $X$ is Banach or some type of space where all balls are connected, since as I said in $\mathbb{R}^2$ is intuitive that is going to be true. I don't know now).
As soon as my exam period ends, I'll come back to this problem to see whether I can settle it. In the meantime, let's see if somebody can give a satisfactory answer before that.
I'm talking about functions of the form ||X||^p, when p>1, for different values of p.
I know these are all convex functions, but I don't know how to graph them.
Let $f\colon A \to B$ be a function. This means $f$ takes as an argument elements of the set $A$ and returns elements of the set $B$.
For example, $g(x) = x^2$ usually is a function that takes and returns real numbers, thus $g\colon \mathbb R\to\mathbb R$.
Usually the graph of a function $f$ can only be sketched, if the sum of the dimensions of $A$ and $B$ is less or equal to 3.
In order to graph functions that map from $\mathbb R$ to $\mathbb R$ you can use e.g. desmos.
Now you were originally asking about a function $\|X\|^p$. Your notation implies that $X$ is supposed to be a vector. As I said before, you can only graph a function if the sum of the dimensions is less than or equal to 3. This means you can graph $\|X\|^p$ if $X$ is two-dimensional. You can use this in order to graph it in this case. Note that it cannot handle the norm directly. The usual norms for two-dimensional vectors are \begin{align*} \left\|\begin{bmatrix}x\\y\end{bmatrix}\right\|_q &= \left(|x|^q + |y|^q\right)^{\frac1q} \end{align*} for some $0<q<\infty$. You can thus graph it by typing
( (abs(x)^1.5 + abs(y)^1.5)^(1/1.5) )^5
into the graphic 3D plotter from above. It will give you a plot of $\|X\|_q^p$, where $q=1.5$ and $p=5$.
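The plotted quantity can also be computed directly before graphing; a minimal sketch (Python; function names are mine):

```python
def q_norm(v, q):
    """The l_q norm of a vector: (sum |v_i|^q)^(1/q), for 0 < q < infinity."""
    return sum(abs(x) ** q for x in v) ** (1.0 / q)

def f(v, q=1.5, p=5):
    """The function being plotted: ||v||_q raised to the power p."""
    return q_norm(v, q) ** p

# Sanity check: the Euclidean (q=2) norm of (3, 4) is 5.
assert abs(q_norm([3, 4], 2) - 5.0) < 1e-9
```

Evaluating `f` on a grid of `(x, y)` points gives the same surface the 3D plotter draws.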
This question is somehow related to my question at https://math.stackexchange.com/questions/683915/derived-pseudo-functor and to the question here: A homotopy commutative diagram that cannot be strictified .
Consider a model category $M$. Let us assume that, for all small category $S$, we can endow the category $Fun (S, M) $ with the injective model structure.
So, considering a given small category $S$, we get the notion of a homotopy limit $ Ho(Fun (S, M))\to Ho(M) $.
My first question is: which would be the correct notion of homotopy limit $ Fun (S, Ho(M))\to Ho(M) $? The second question is: how can I study the existence of such a homotopy limit?
I know that if the diagonal functor $ \Delta : Ho (M)\to Fun (S, Ho(M)) $ has a right adjoint, then the answer is clear. But, also, I do know that this situation is rare for model categories $ M $ (considering $S$ being not discrete). However, in the general situation, I guess that the correct notion of a such homotopy limit is the following:
Consider the projections $\lambda : M\to Ho(M) $ and $ \alpha : Fun(S, M)\to Ho (Fun (S, M)) $. The functor $ Fun ( S, \lambda ) $ has a total derived functor $ i: Ho ( Fun (S, M) ) \to Fun (S , Ho (M) ) $. I would define the homotopy limit $ Fun ( S, Ho(M)) \to Ho(M) $ as being a right Kan extension of the usual homotopy limit $ Ho(Fun (S, M))\to Ho (M) $ along that derived functor $ i $.
I guess that one example of such a homotopy limit is the weak 2-limit (homotopy limit) of a pseudofunctor $ A: S\to Cat $.
But, if this is the correct notion of such a homotopy limit, my question remains: how can I study the existence of that right Kan extension? Where can I find about this kind of homotopy limit?
Thank you in advance
This is a heuristic explanation of Witten's statement, without going into the subtleties of axiomatic quantum field theory issues, such as vacuum polarization or renormalization.
A particle is characterized by a definite momentum plus possible other quantum numbers. Thus, one-particle states are by definition states with a definite eigenvalue of the momentum operator; they can have further quantum numbers. These states should exist even in an interacting field theory, describing a single particle away from any interaction.

In a local quantum field theory, these states are associated with local field operators: $$| p, \sigma \rangle = \int e^{ipx} \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x$$ where $\psi$ is the field corresponding to the particle and $\sigma$ describes the set of quantum numbers additional to the momentum.

A symmetry generator $Q$, being the integral of a charge density according to Noether's theorem, $$Q = \int j_0(x') d^3x'$$ should generate a local field when it acts on a local field: $[Q, \psi_1(x)] = \psi_2(x)$. (In the case of internal symmetries $\psi_2$ depends linearly on the components of $\psi_1(x)$; in the case of space-time symmetries it depends on the derivatives of the components of $\psi_1(x)$.)
Thus in general:
$$[Q, \psi_{\sigma}(x)] = \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}(x)$$
Where the dependence of the coefficients $C_{\sigma\sigma'}$ on the momentum operator $\nabla$ is due to the possibility that $Q$ contains a space-time symmetry. Thus for an operator $Q$ satisfying $Q|0\rangle = 0$, we have $$ Q | p, \sigma \rangle = \int e^{ipx} Q \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x = \int e^{ipx} [Q , \psi_{\sigma}^{\dagger}(x)] |0\rangle d^4x = \int e^{ipx} \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) \int e^{ipx} \psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) | p, \sigma' \rangle $$ Thus the action of the operator $Q$ is a representation on the one-particle states. The fact that $Q$ commutes with the Hamiltonian is responsible for the energy degeneracy of its action, i.e., the states $| p, \sigma \rangle$ and $Q| p, \sigma \rangle$ have the same energy.

This post imported from StackExchange Physics at 2015-06-16 14:50 (UTC), posted by SE-user David Bar Moshe
Higher Order Homogenous Differential Equations Real, Distinct Roots of The Characteristic Equation
Consider the following $n^{\mathrm{th}}$ order linear homogenous differential equation with the constant coefficients $a_0, a_1, ..., a_n \in \mathbb{R}$:

(1) \begin{align} a_0 \frac{d^{n}y}{dt^{n}} + a_1 \frac{d^{n-1}y}{dt^{n-1}} + ... + a_{n-1} \frac{dy}{dt} + a_n y = 0 \end{align}
Recall from the Higher Order Homogenous Differential Equations - Constant Coefficients page that the characteristic equation of this differential equation is:

(2) \begin{align} a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n = 0 \end{align}
Let $r_1$, $r_2$, …, $r_n$ be the roots of the characteristic equation and suppose that these roots are real and distinct, that is, $r_i \neq r_j$ for $i \neq j$, $i, j = 1, 2, ..., n$. Then $y_1(t) = e^{r_1t}$, $y_2(t) = e^{r_2t}$, …, $y_n(t) = e^{r_nt}$ are all solutions to our differential equation. Since each root is real and distinct, the next question to ask is whether or not $y_1$, $y_2$, …, $y_n$ form a fundamental set of solutions. The answer boils down to whether or not $y_1$, $y_2$, …, $y_n$ are a linearly independent set of functions. The following proposition tells us that indeed they are, provided that the roots $r_1$, $r_2$, …, $r_n$ are distinct.
Proposition 1: If $r_1, r_2, ..., r_n \in \mathbb{R}$ are $n$ distinct real numbers, then the set of functions $y_1(t) = e^{r_1t}$, $y_2(t) = e^{r_2t}$, …, $y_n(t) = e^{r_nt}$ are linearly independent on all of $\mathbb{R}$. Proof: To show that $y_1$, $y_2$, …, $y_n$ are a linearly independent set of functions on all of $\mathbb{R}$, we must show that the following equation implies that the constants $k_1 = k_2 = ... = k_n = 0$:

\begin{align} k_1e^{r_1t} + k_2e^{r_2t} + ... + k_ne^{r_nt} = 0 \end{align}

First multiply both sides of the equation above by $e^{-r_1t}$:

\begin{align} k_1 + k_2e^{(r_2 - r_1)t} + ... + k_ne^{(r_n - r_1)t} = 0 \end{align}

We will now differentiate the equation above with respect to $t$ to get that:

\begin{align} k_2(r_2 - r_1)e^{(r_2 - r_1)t} + ... + k_n(r_n - r_1)e^{(r_n - r_1)t} = 0 \end{align}

We will now multiply both sides of the equation above by $e^{-(r_2 - r_1)t}$:

\begin{align} k_2(r_2 - r_1) + k_3(r_3 - r_1)e^{(r_3 - r_2)t} + ... + k_n(r_n - r_1)e^{(r_n - r_2)t} = 0 \end{align}

We will now differentiate both sides of the equation above with respect to $t$ again to get:

\begin{align} k_3(r_3 - r_1)(r_3 - r_2)e^{(r_3 - r_2)t} + ... + k_n(r_n - r_1)(r_n - r_2)e^{(r_n - r_2)t} = 0 \end{align}

If we continue this process over and over again, we eventually get that:

\begin{align} k_n(r_n - r_1)(r_n - r_2) \cdots (r_n - r_{n-1})e^{(r_n - r_{n-1})t} = 0 \end{align}

Note that $e^{(r_n - r_{n-1})t} \neq 0$ since the exponential function is never zero, and $(r_i - r_j) \neq 0$ for each $i \neq j$, $i, j = 1, 2, ..., n$ since $r_1$, $r_2$, …, $r_n$ are distinct numbers. Thus we must have that $k_n = 0$ and so:

\begin{align} k_1e^{r_1t} + k_2e^{r_2t} + ... + k_{n-1}e^{r_{n-1}t} = 0 \end{align}

Repeating the steps from above, we have that $k_n = k_{n-1} = ... = k_2 = k_1 = 0$, and so $y_1$, $y_2$, …, $y_n$ are a linearly independent set of functions. $\blacksquare$
From Proposition 1 above, we see that the general solution to an $n^{\mathrm{th}}$ order linear homogenous differential equation with constant coefficients, whose roots to the characteristic equation are real and distinct, is of the form:

(10) \begin{align} y(t) = C_1e^{r_1t} + C_2e^{r_2t} + ... + C_ne^{r_nt} \end{align}
Example 1 Find the general solution to the differential equation $\frac{d^3y}{dt^3} - 6 \frac{d^2y}{dt^2} + 11\frac{dy}{dt} -6y = 0$.
We first note that the characteristic equation for the differential equation is:

(11) \begin{align} r^3 - 6r^2 + 11r - 6 = 0 \end{align}
We can immediately see that $r = 1$ is a root of this characteristic equation, and in applying polynomial long division, it is not too hard to see that $r = 2$ and $r = 3$ are also roots of the characteristic equation, as you should verify. Thus the general solution to our differential equation is:

(12) \begin{align} y(t) = C_1e^{t} + C_2e^{2t} + C_3e^{3t} \end{align}
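Because substituting \(y=e^{rt}\) into the differential equation produces \((r^3-6r^2+11r-6)e^{rt}=0\), verifying the general solution of Example 1 reduces to checking the roots of the characteristic polynomial; a sketch (Python):

```python
def char_poly(r):
    # Characteristic polynomial of y''' - 6y'' + 11y' - 6y = 0
    return r**3 - 6*r**2 + 11*r - 6

# r = 1, 2, 3 are roots, so y = C1*e^t + C2*e^(2t) + C3*e^(3t) solves the equation.
assert all(char_poly(r) == 0 for r in (1, 2, 3))

# And they are the only ones: the cubic factors as (r - 1)(r - 2)(r - 3).
for r in (0.5, 1.7, 4):
    assert char_poly(r) != 0
```

Since the three roots are real and distinct, Proposition 1 guarantees the three exponentials form a fundamental set of solutions.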
When you studied fractions, you had lots of different ways to think about them. But the first way, and the one we keep coming back to, is to think of a fraction as the answer to a division problem.
Example \(\PageIndex{1}\):
Suppose 6 pies are to be shared equally among 3 children. This yields 2 pies per child. We write:
$$\frac{6}{3} = 2 \ldotp$$
The fraction \(\frac{6}{3}\) is equivalent to the answer to the division problem \(6 \div 3 = 2\). It represents the number of pies one whole child receives.
In the same way…
sharing 10 pies among 2 kids yields \(\frac{10}{2} = 5\) pies per kid,
sharing 8 pies among 2 kids yields \(\frac{8}{2} = 4\) pies per kid,
sharing 5 pies among 5 kids yields \(\frac{5}{5} = 1\) pie per kid, and
the answer to sharing 1 pie among 2 children is \(\frac{1}{2}\), which we call “one-half.”
We associate the number “\(\frac{1}{2}\)” to the picture .
In the same way, the picture represents “one third,” that is, \(\frac{1}{3}\).
(This is the amount of pie an individual child would receive if one pie is shared among three children.)
The picture is called “one fifth” and is indeed \(\frac{1}{5}\), the amount of pie an individual child receives when one pie is shared by five kids.
And the picture is called “three fifths” to represent \(\frac{3}{5}\), the amount of pie an individual receives if three pies are shared by five kids.
We know how to do division in our “Dots & Boxes” model.
Example: 3906 ÷ 3
Suppose you are asked to compute \(3906 \div 3\). One way to interpret this question (there are others) is:
“How many groups of 3 fit into 3906?”
In our “Dots & Boxes” model, the dividend 3906 looks like this:
and three dots looks like this:
So we are really asking:
“How many groups of 3 fit into the picture of 3906?”
Notice what we have in the picture:
One group of 3 in the thousands box. Three groups of 3 in the hundreds box. Zero groups of 3 in the tens box. Two groups of 3 in the ones box.
This shows that 3 goes into 3906 one thousand, three hundreds and two ones times. That is,
$$3906 \div 3 = 1302 \ldotp$$
Of course, not every division problem works out evenly! Here’s a different example.
Example: 1024 ÷ 3
Suppose you are asked to compute \(1024 \div 3\). One way to interpret this question is:
“How many groups of 3 fit into 1024?”
So we’re looking for groups of three dots in this picture:
One group of three is easy to spot:
To find more groups of three dots, we must “unexplode” a dot:
We need to unexplode again:
This leaves one stubborn dot remaining in the ones box and no more group of three. So we conclude:
$$1024 \div 3 = 341\; \text{R}\, 1, \quad \text{meaning} \quad 1024 = 341 \cdot 3 + 1 \ldotp$$
In words: 1024 gives 341 groups of 3, plus one extra dot.
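The quotient and remainder above can be confirmed with integer division (a quick check of my own, not part of the “Dots & Boxes” lesson):

```python
# divmod returns (quotient, remainder) of integer division
quotient, remainder = divmod(1024, 3)
print(quotient, remainder)   # 341 1
print(341 * 3 + 1 == 1024)   # True
```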
We can put these two ideas together — fractions as the answer to a division problem and what we know about division in the “Dots & Boxes” model — to help us think more about the connection between fractions and decimals.
Example: 1/8
The fraction \(\frac{1}{8}\) is the result of dividing 1 by 8. Let’s actually compute \(1 \div 8\) in a “Dots & Boxes” model, making use of decimals. We want to find groups of eight in the following picture:
Clearly none are to be found, so let’s unexplode:
(We’re being lazy and not drawing all the dots. As you follow along, you might want to draw the dots rather than the number of dots, if it helps you keep track.)
Now there is one group of 8, leaving two behind. We write a tick-mark on top, to keep track of the number of groups of 8, and leave two dots behind in the box.
We can unexplode the two dots in the \(\frac{1}{10}\) box:
This gives two groups of 8 leaving four behind. Remember: the two tick marks represent two groups of 8. And there are four dots left in the \(\frac{1}{100}\) box.
Unexploding those four remaining dots:
Now we have five groups of 8 and no remainder.
Remember: the tick marks kept track of how many groups of eight there were in each box. We have
One group of 8 dots in the \(\frac{1}{10}\) box. Two groups of 8 dots in the \(\frac{1}{100}\) box. Five groups of 8 dots in the \(\frac{1}{1000}\) box.
So we conclude that:
$$\frac{1}{8} = 1 \div 8 = 0.125 \ldotp$$
Of course, it’s a good habit to check our answer:
$$0.125 = \frac{125}{1000} = \frac{5 \cdot 25}{5 \cdot 200} = \frac{5 \cdot 5}{5 \cdot 40} = \frac{5 \cdot 1}{5 \cdot 8} = \frac{1}{8} \ldotp$$
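The “unexplode and count groups” procedure above is just long division carried past the decimal point. Here is a small sketch of that digit-by-digit process (my own illustration; the function name is made up):

```python
def decimal_digits(numerator, denominator, max_digits=10):
    """Long division of a fraction numerator/denominator (< 1) past the
    decimal point, mirroring the unexplode-and-group steps: multiply the
    remainder by 10, count groups of the divisor, carry the remainder along."""
    digits = []
    remainder = numerator
    for _ in range(max_digits):
        if remainder == 0:
            break
        remainder *= 10                          # "unexplode" into the next box
        digits.append(remainder // denominator)  # groups found in this box
        remainder %= denominator                 # dots left behind
    return "0." + "".join(str(d) for d in digits)

print(decimal_digits(1, 8))  # 0.125
```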
On Your Own
Work on the following exercises on your own or with a partner. Be sure to show your work.
Perform the division in a “Dots & Boxes” model to show that \(\frac{1}{4}\), as a decimal, is \(0.25\).
Perform the division in a “Dots & Boxes” model to show that \(\frac{1}{2}\), as a decimal, is \(0.5\).
Perform the division in a “Dots & Boxes” model to show that \(\frac{3}{5}\), as a decimal, is \(0.6\).
Perform the division in a “Dots & Boxes” model to show that \(\frac{3}{16}\), as a decimal, is \(0.1875\).
In simplest terms, what fraction is represented by each of these decimals? $$0.75, \qquad 0.625, \qquad 0.16, \qquad 0.85, \qquad 0.0625 \ldotp$$

Repeating Decimals
Not all fractions lead to simple decimal representations.
Example: 1/3
Consider the fraction \(\frac{1}{3}\). We seek groups of three in the following picture:
Unexploding requires us to look for groups of 3 in:
Here there are three groups of 3 leaving one behind:
Unexploding gives:
We find another three groups of 3 leaving one behind:
Unexploding gives:
And we seem to be caught in an infinitely repeating cycle.
We are now in a philosophically interesting position. As human beings, we cannot conduct this, or any, activity an infinite number of times. But it seems very tempting to write:
$$\frac{1}{3} = 0.33333 \ldots,$$
with the ellipsis “…” meaning “keep going forever with this pattern.” We can imagine what this means, but we cannot actually write down those infinitely many 3’s represented by the “…” notation.
Many people make use of a vinculum (horizontal bar) to represent infinitely long repeating decimals. For example, \(0. \bar{3}\) means “repeat the 3 forever”:
$$0. \bar{3} = 0.33333 \ldots,$$
and \(0.296 \overline{412}\) means “repeat the 412 forever”:
$$0.296 \overline{412} = 0.296412412412412 \ldots$$
Now we’re in a position to give a perhaps more satisfying answer to the question \(1024 \div 3\). In the example above, we found the answer to be
$$1024 \div 3 = 341\; \text{R} 1 \ldotp$$
But now we know we can keep dividing that last stubborn dot by 3. Remember, that represents a single dot in the ones place, so if we keep dividing by three it really represents \(\frac{1}{3}\). So we have:
$$1024 \div 3 = 341\; \text{R} 1 = 341 \frac{1}{3} = 341.3333333 \ldots = 341. \bar{3} \ldotp$$
Example: 6/7
As another (more complicated) example, here is the work that converts the fraction \(\frac{6}{7}\) to an infinitely long repeating decimal. Make sure to understand the steps one line to the next.
With this 6 in the final right-most box, we have returned to the very beginning of the problem. (Do you see why? Remember, we started with a six in the ones box!)
This means that we will simply repeat the work we have done and obtain the same sequence of answers: \(857142\). And then again, and then again, and then again. We have:
$$\begin{split} \frac{6}{7} &= 0.857142857142857142857142 \ldots \\ &= 0. \overline{857142} \ldotp \end{split}$$
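The cycle we just observed, where a recurring remainder forces the digits to recur, can be captured in a short sketch that tracks remainders during long division (my own illustration; names are made up, and parentheses stand in for the vinculum):

```python
def repeating_decimal(numerator, denominator):
    """Long division that records each remainder; when a remainder
    repeats, the digits between its two occurrences form the repetend."""
    digits = []
    seen = {}                       # remainder -> index where it first appeared
    whole = numerator // denominator
    remainder = numerator % denominator
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    if remainder == 0:              # terminating decimal
        return f"{whole}." + "".join(digits)
    start = seen[remainder]         # repetend begins where the remainder recurred
    return (f"{whole}." + "".join(digits[:start])
            + "(" + "".join(digits[start:]) + ")")

print(repeating_decimal(6, 7))     # 0.(857142)
print(repeating_decimal(1024, 3))  # 341.(3)
```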
On Your Own
Work on the following exercises on your own or with a partner. Be sure to show your work.
Compute \(\frac{4}{7}\) as an infinitely long repeating decimal.
Compute \(\frac{1}{9}\) as an infinitely long repeating decimal.
Use a “Dots & Boxes” model to compute \(133 \div 6\). Write the answer as a decimal.
Use a “Dots & Boxes” model to compute \(255 \div 11\). Write the answer as a decimal. |
Is there a standard way of writing
$a$ is divisible by $b$ in mathematical notation?
From what I've searched, it seems that writing $a \equiv 0 \pmod b$ is one way? But you can also write $b \mid a$ (the middle character is a pipe)? And sometimes that pipe is replaced by three vertical dots?
Or is there a way of writing
$a$ is a multiple of $b$ which I think means the same thing?
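Whatever notation one settles on, the underlying statement is the same; in code it is just a remainder test (an illustration of the meaning, not a notational recommendation):

```python
def divides(b, a):
    """b | a  <=>  a is a multiple of b  <=>  a ≡ 0 (mod b)."""
    return a % b == 0

print(divides(3, 12))  # True:  3 | 12
print(divides(5, 12))  # False: 5 does not divide 12
```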
EDIT: thanks for the answers, is there a way to extend this and write something like: $b \mid a$ when $a = k$ |
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ... |
The Method of Undetermined Coefficients
We will now look at a method for solving second order linear nonhomogeneous differential equations, for $a, b, c \in \mathbb{R}$, of the form:

$$a \frac{d^2y}{dt^2} + b \frac{dy}{dt} + cy = g(t) \ldotp$$
Note that the corresponding second order linear homogeneous differential equation $a \frac{d^2y}{dt^2} + b \frac{dy}{dt} + cy = 0$ has constant coefficients. The method can sometimes still be used when the corresponding second order linear homogeneous differential equation does not have constant coefficients, but it is much more difficult to apply in most cases.
Recall from the Second Order Nonhomogeneous Differential Equations page that the general solution to $\frac{d^2y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = g(t)$ is of the form $y = Cy_1(t) + Dy_2(t) + Y(t)$, where $y = Y(t)$ is a particular solution to this differential equation and $y = y_1(t)$ and $y = y_2(t)$ form a fundamental set of solutions to $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$. Solving the corresponding second order linear homogeneous differential equation with constant coefficients is simple, and we will sometimes denote this solution (known as the complementary solution) by $y_h(t) = Cy_1(t) + Dy_2(t)$ (where the subscript $h$ stands for "homogeneous"). Thus, the difficulty arises in finding a particular solution $Y(t)$ (which we will sometimes denote by $y_p$, where the subscript $p$ stands for "particular") to the second order linear nonhomogeneous differential equation.
One such technique for doing so is known as The Method of Undetermined Coefficients, which we'll look at now. We first find the general solution to the corresponding second order linear homogeneous differential equation $a\frac{d^2y}{dt^2} + b \frac{dy}{dt} + cy = 0$. We will then check to make sure that the function $g(t)$ is of a particular form, namely, that $g(t)$ is a function involving only exponential, sine, and cosine functions or polynomials, along with combinations of these.
If $g(t)$ can be written as $g(t) = g_1(t) + g_2(t) + ... + g_n(t)$, where each $g_i(t)$ for $i = 1, 2, ..., n$ is of one of the simplest forms mentioned above, then we reduce the problem to solving each of the following second order linear nonhomogeneous differential equations:

$$a\frac{d^2y}{dt^2} + b \frac{dy}{dt} + cy = g_i(t), \quad i = 1, 2, ..., n \ldotp$$
We then assume that $Y_i(t)$ is a particular solution to these differential equations, where $Y_i(t)$ is of the corresponding form containing exponential, sine, cosine, or polynomial functions. If necessary, we can multiply $Y_i(t)$ by $t$ or $t^2$ to prevent duplication with the form of the solutions of the corresponding second order linear homogeneous differential equation. If we let $s = 0, 1, 2$ be the smallest nonnegative integer such that no term of $Y_i(t)$ is a solution to the corresponding homogeneous differential equation, then the following table gives us the forms to assume for $Y_i(t)$ (where the $A$'s and $B$'s are the undetermined coefficients):
Form of $g_i(t)$ and the corresponding form of $Y_i(t)$:

If $g_i(t) = a_0 + a_1t + a_2t^2 + ... + a_nt^n$, take $Y_i(t) = t^s (A_0 + A_1t + A_2t^2 + ... + A_nt^n)$.
If $g_i(t) = (a_0 + a_1t + a_2t^2 + ... + a_nt^n)e^{\alpha t}$, take $Y_i(t) = t^s (A_0 + A_1t + A_2t^2 + ... + A_nt^n)e^{\alpha t}$.
If $g_i(t) = (a_0 + a_1t + a_2t^2 + ... + a_nt^n ) \left\{\begin{matrix} \sin \beta t\\ \cos \beta t \end{matrix}\right.$, take $Y_i(t) = t^s [(A_0 + A_1t + A_2t^2 + ... + A_nt^n)\cos \beta t + (B_0 + B_1t + B_2t^2 + ... + B_nt^n)\sin \beta t]$.
If $g_i(t) = e^{\alpha t}\left\{\begin{matrix} \sin \beta t\\ \cos \beta t \end{matrix}\right.$, take $Y_i(t) = t^s e^{\alpha t}[A\cos \beta t + B\sin \beta t]$.
If $g_i(t) = (a_0 + a_1t + a_2t^2 + ... + a_nt^n)e^{\alpha t} \left\{\begin{matrix} \sin \beta t\\ \cos \beta t \end{matrix}\right.$, take $Y_i(t) = t^s [(A_0 + A_1t + A_2t^2 + ... + A_nt^n)\cos \beta t + (B_0 + B_1t + B_2t^2 + ... + B_nt^n) \sin \beta t]e^{\alpha t}$.
To obtain a particular solution to our main second order linear nonhomogeneous differential equation, we sum up the $Y_i(t)$'s for $i = 1, 2, ..., n$, that is:

$$Y(t) = Y_1(t) + Y_2(t) + ... + Y_n(t) \ldotp$$
Therefore the corresponding general solution to the second order linear nonhomogeneous differential equation $\frac{d^2y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = g(t)$ will be of the form:

$$y = Cy_1(t) + Dy_2(t) + Y(t) \ldotp$$
Let's now look at an example of using the method of undetermined coefficients.
Example 1 Solve the second order linear nonhomogeneous differential equation $\frac{d^2y}{dt^2} - \frac{dy}{dt} - 2y = -2t + 4t^2$.
Let's first solve for the complementary solution. The corresponding homogeneous differential equation is:

$$\frac{d^2y}{dt^2} - \frac{dy}{dt} - 2y = 0 \ldotp$$
The characteristic equation is $r^2 - r - 2 = 0$, which can be factored as $(r - 2)(r + 1) = 0$. Thus $r_1 = 2$ and $r_2 = -1$, and so the complementary solution is:

$$y_h(t) = Ce^{2t} + De^{-t} \ldotp$$
Now we need to find a particular solution to our differential equation. Note that $g(t) = -2t + 4t^2$ is a polynomial, and so the method of undetermined coefficients will work well here. Assume the form $Y(t) = t^s(A + Bt + Ct^2)$. Note that in this case $s = 0$, since no polynomial term appears in our complementary solution. Thus $Y(t) = A + Bt + Ct^2$.
Now we differentiate $Y$ twice to get that:

$$Y'(t) = B + 2Ct, \quad Y''(t) = 2C \ldotp$$
Plugging these values into our second order linear nonhomogeneous differential equation, we have that:

$$2C - (B + 2Ct) - 2(A + Bt + Ct^2) = -2t + 4t^2$$

$$(2C - B - 2A) + (-2B - 2C)t - 2Ct^2 = -2t + 4t^2 \ldotp$$
Thus we obtain the following system of equations:

$$-2C = 4, \quad -2B - 2C = -2, \quad 2C - B - 2A = 0 \ldotp$$
The above equations imply that $C = -2$, $B = 3$, and $A = -\frac{7}{2}$, and so our particular solution is:

$$Y(t) = -\frac{7}{2} + 3t - 2t^2 \ldotp$$
Therefore the general solution to our second order linear nonhomogeneous differential equation is:

$$y(t) = Ce^{2t} + De^{-t} - \frac{7}{2} + 3t - 2t^2 \ldotp$$ |
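As a sanity check (my own addition), one can verify numerically that the particular solution $Y(t) = -\tfrac{7}{2} + 3t - 2t^2$ satisfies $Y'' - Y' - 2Y = -2t + 4t^2$ at a few sample points, using the exact derivatives $Y'(t) = 3 - 4t$ and $Y''(t) = -4$:

```python
def Y(t):   return -3.5 + 3 * t - 2 * t**2   # particular solution
def Yp(t):  return 3 - 4 * t                 # Y'(t)
def Ypp(t): return -4.0                      # Y''(t)

def residual(t):
    # Left side minus right side of  Y'' - Y' - 2Y = -2t + 4t^2
    return (Ypp(t) - Yp(t) - 2 * Y(t)) - (-2 * t + 4 * t**2)

# The residual vanishes at every sample point
print(all(abs(residual(t)) < 1e-12 for t in (-1.0, 0.0, 0.5, 2.0)))  # True
```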
Words Over a Set
Definition: Let $A = \{ a, b, ... \}$ be a nonempty set. The Inverse Set of $A$ is the set $A^{-} = \{ a^{-1}, b^{-1}, ... \}$. A Word over $A \cup A^{-}$ is a finite sequence $w$ of elements in $A \cup A^{-}$. The Empty Word is the empty sequence $()$. We are not assuming a group context here: if $A$ is any nonempty set, it is a set of symbols, and $A^{-}$ is also a set of symbols.
For example, if $A = \{ a, b, c, d, e \}$ then an example of a word over $A \cup A^{-}$ is:(1)
Definition: Let $A$ be a nonempty set. If $w$ is a word over $A \cup A^{-}$ then the Length of $w$ denoted $|w|$ is the number of elements in the sequence.
Definition: Let $A$ be a nonempty set. If $w = a_1^{\epsilon_1}a_2^{\epsilon_2}...a_n^{\epsilon_n}$ is a word over $A \cup A^{-}$, where $a_1, a_2, ..., a_n \in A$ and $\epsilon_1, \epsilon_2, ..., \epsilon_n \in \{ 1, -1\}$, then the Inverse Word, denoted by $w^{-1}$, is defined as $w^{-1} = a_n^{-\epsilon_n}a_{n-1}^{-\epsilon_{n-1}}...a_1^{-\epsilon_1}$.
In the example above, we have that:(2)
Definition: Let $A$ be a nonempty set. If $u$ and $v$ are words over $A \cup A^{-}$ then the Concatenation of $u$ with $v$ is the word $uv$ obtained by adjoining the sequence $v$ at the end of the sequence $u$.
For example, if $A = \{ a, b, c \}$, $u = aba$ and $v = cb^{-1}$ then:

$$uv = abacb^{-1} \ldotp$$
Let $A$ be a nonempty set and let $\mathcal R$ be a set of words over $A$. We define an equivalence relation $\sim$ on the set of words on $A \cup A^{-}$ with respect to $\mathcal R$. If $u$ and $v$ are words over $A \cup A^{-}$ we say that $u \sim v$ if there is a finite sequence of words $w_1, w_2, ..., w_n$ with $w_1 = u$ and $w_n = v$ such that for all $i \in \{ 1, 2, ..., n - 1\}$:
1) $w_{i+1}$ is obtained from $w_i$ by inserting an occurrence of $xx^{-1}$ (for some $x \in A \cup A^{-}$) into the word $w_i$.
2) $w_{i+1}$ is obtained from $w_i$ by deleting an occurrence of $xx^{-1}$ from the word $w_i$.
3) $w_{i+1}$ is obtained from $w_i$ by inserting a word in $\mathcal R$ or $\mathcal R^{-}$ into the word $w_i$.
4) $w_{i+1}$ is obtained from $w_i$ by deleting a word in $\mathcal R$ or $\mathcal R^{-}$ from the word $w_i$.
We will shortly see that the set of these equivalence classes with a particular operation forms a group. |
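The moves in cases 1) and 2) alone (ignoring $\mathcal R$) amount to free reduction, which is easy to sketch in code. Below, a word is a list of symbols, with the inverse of "a" written as "a-"; this encoding and the function names are my own, purely for illustration:

```python
def inv(symbol):
    """Inverse of a symbol: 'a' <-> 'a-'."""
    return symbol[:-1] if symbol.endswith("-") else symbol + "-"

def reduce_word(word):
    """Delete occurrences of x x^{-1} until none remain (free reduction).
    A stack makes this a single left-to-right pass: a new symbol either
    cancels against the previous one or is pushed."""
    stack = []
    for symbol in word:
        if stack and stack[-1] == inv(symbol):
            stack.pop()          # cancel x x^{-1}
        else:
            stack.append(symbol)
    return stack

# a b b^{-1} c  ~  a c ;   a a^{-1}  ~  the empty word
print(reduce_word(["a", "b", "b-", "c"]))  # ['a', 'c']
print(reduce_word(["a", "a-"]))            # []
```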
Taiwanese Journal of Mathematics Taiwanese J. Math. Volume 19, Number 2 (2015), 381-396. INFINITELY MANY SOLUTIONS FOR A CLASS OF SUBLINEAR SCHRÖDINGER EQUATIONS Abstract
In this paper, we deal with the existence of infinitely many solutions for a class of sublinear Schrödinger equations $$ \left\{ \begin{array}{ll} -\triangle u+V(x)u=f(x, u), \ \ \ \ x\in {\mathbb{R}}^{N},\\ u\in H^{1}({\mathbb{R}}^{N}). \end{array} \right. $$ Under the assumptions that $\inf_{{\mathbb{R}}^{N}}V(x) \gt 0$ and that $f(x, t)$ has indefinite sign and is sublinear as $|t|\to +\infty$, we establish some existence criteria to guarantee that the above problem has at least one or infinitely many nontrivial solutions by using the genus properties in critical point theory.
Article information Source Taiwanese J. Math., Volume 19, Number 2 (2015), 381-396. Dates First available in Project Euclid: 4 July 2017 Permanent link to this document https://projecteuclid.org/euclid.twjm/1499133636 Digital Object Identifier doi:10.11650/tjm.19.2015.4044 Mathematical Reviews number (MathSciNet) MR3332303 Zentralblatt MATH identifier 1357.35159 Citation
Chen, Jing; Tang, X. H. INFINITELY MANY SOLUTIONS FOR A CLASS OF SUBLINEAR SCHRÖDINGER EQUATIONS. Taiwanese J. Math. 19 (2015), no. 2, 381--396. doi:10.11650/tjm.19.2015.4044. https://projecteuclid.org/euclid.twjm/1499133636 |
Retract Subspaces of a Topological Space
Definition: Let $X$ be a topological space and let $A \subset X$ be a topological subspace. Then $A$ is said to be a Retract of $X$ if there exists a continuous function $r : X \to A$ called a Retraction Map such that $r \circ \mathrm{in} = \mathrm{id}_A$. Here, $\mathrm{in} : A \to X$ is the Inclusion Map, which sends each point $a \in A$ to $a \in X$. Then $r \circ \mathrm{in} : A \to A$.
Intuitively, a subspace $A$ is a retract of $X$ if $X$ can be continuously mapped onto $A$ while fixing all of the points in $A$.
For example, let $D^2$ be the closed unit disk in $\mathbb{R}^2$. That is:

$$D^2 = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 \leq 1 \} \ldotp$$
And let $\displaystyle{\frac{1}{2}D^2}$ be the closed disk centered at the origin with radius $\frac{1}{2}$ in $\mathbb{R}^2$, that is:

$$\frac{1}{2}D^2 = \left \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 \leq \frac{1}{4} \right \} \ldotp$$
We claim that $\displaystyle{\frac{1}{2}D^2}$ is a retract of $D^2$. Define a retraction map $r : D^2 \to \frac{1}{2}D^2$ by:

$$r(x, y) = \begin{cases} (x, y) & \text{if } (x, y) \in \frac{1}{2}D^2 \\ (a, b) & \text{otherwise} \end{cases}$$
Where $(a, b)$ is such that $d((x, y), (a, b)) = \mathrm{inf}_{(c, d) \in \frac{1}{2}D^2} d((x, y), (c, d))$. In other words, each point $(x, y) \in D^2$ is mapped to the point $(a, b)$ in $\displaystyle{\frac{1}{2}D^2}$ whose distance from $(x, y)$ is minimized. Clearly $r$ is a continuous map. Furthermore, $r \circ \mathrm{in} = \mathrm{id}_A$. So indeed, $\displaystyle{\frac{1}{2}D^2}$ is a retract of $D^2$.
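Concretely, for a point outside the small disk, the nearest point $(a, b)$ is its radial projection onto the circle of radius $\frac{1}{2}$. A quick numerical sketch of this retraction (my own illustration, not part of the original text):

```python
import math

def retract(x, y, radius=0.5):
    """Nearest-point retraction of the unit disk onto the disk of the
    given radius: points inside stay fixed; points outside are
    projected radially onto the boundary circle."""
    norm = math.hypot(x, y)
    if norm <= radius:
        return (x, y)                                 # already in (1/2)D^2: fixed
    return (radius * x / norm, radius * y / norm)     # radial projection

print(retract(1.0, 0.0))  # (0.5, 0.0)  boundary point pulled in
print(retract(0.2, 0.1))  # (0.2, 0.1)  interior point unchanged
```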
Below is a visual representation of the retract:
Theorem 1: Let $X$ be a topological space and let $a \in X$. Then $\{ a \}$ is a retract of $X$. Proof: Consider the function $r : X \to \{ a \}$ defined for all $x \in X$ by $r(x) = a$. Then $r$ is trivially a continuous function. Furthermore, for the inclusion map $\mathrm{in} : \{ a \} \to X$ defined by $\mathrm{in}(a) = a$, we have that:

$$(r \circ \mathrm{in})(a) = r(\mathrm{in}(a)) = r(a) = a \ldotp$$

Hence $r \circ \mathrm{in} = \mathrm{id}_{\{ a \}}$. So $\{ a \}$ is a retract of $X$. $\blacksquare$ |
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seem to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example upon entering $\frac xy$ will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia and this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will reply to your feedback in our chat. — Wei Zhong 1 min ago
I still think that it would be useful if you added to your post where do you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information to my posted answers. I will reply to you shortly in chat. — Wei Zhong 29 secs ago
We are an open-source project hosted on GitHub (http://github.com/approach0). Welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technologically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has internal structure; usually a developer would alternatively use an HTTP POST request with a JSON-encoded body. This makes development much easier because JSON is rich-structured and makes it easy to separate math keywords.
@MartinSleziak Right now there are two solutions for "query link" problem you addressed. First is to use browser back/forward button to navigate among query history.
@MartinSleziak Second is to use the command line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later (it just needs some extra effort).
@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after exact matches.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users to input Greek symbols even if they do not remember how to spell them.
@MartinSleziak Yes, you can; Greek letters are tokenized to the same thing as normal alphabet letters.
@MartinSleziak As for integral upper bounds, I think it is a problem with a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is to use the arrow key to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on Math Stack Exchange. This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned.
@MartinSleziak As I mentioned, the index is too small currently. You will probably get what you want when this project develops to the next stage, which is to enlarge the index and publish.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main site. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak 57 mins ago
"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls (favorite editor/distro/fonts etc.) while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly ok with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I have seen such a poll for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt to write English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko. |
In my syllabus we only study about rate laws under the topic "Rates of reactions", but as I was interested in it, I started to search about that topic. And then I found the Arrhenius equation$$\ln k ...
I need to calculate the half life for a first order reaction for which I need the rate constant $k$. From literature I observed that for a very closely simulated experiment I obtained the numerical ...
I was wondering about this. I have heard, and known for a while, that the famous and celebrated Prototype Kilogram - the lump of metal whose mass is used to define the standard mass scale worldwide - ...
How can I convert a reaction rate $k$ with units of $\mathrm{1\over M~~s}$ to $\mathrm{1\over ppm~~s}$?For context: I want to make a numerical simulation of a system of reactions that happens in air ...
In the Eyring equation (EE),$$k = \frac{k_\mathrm B T}{h} \exp\left(\frac{-\Delta G_{\mathrm f}}{RT}\right),$$the units of $k$ are $\mathrm{s^{-1}}$. However, in general rate constants are usually ...
The modified Arrhenius equation is used to express the rate constant in a chemical mechanism model I'm working with. The equations is as follows:$$k_\mathrm{f} = A\times T^b\times\exp\left(-\frac{E_\...
I understand that $k_\text{cat}$ measures the turnover number of an enzyme. This measure is therefore a quantity of molecule conversions per unit of time. I suspect that my problem is more that of a ... |
Answer
$s=\dfrac{\pi}{4}$
Work Step by Step
RECALL: $\frac{\pi}{4}$ is a special angle and $\cos{(\frac{\pi}{4})}=\dfrac{\sqrt2}{2}$ Thus, if $\cos{s} = \frac{\sqrt2}{2}$, then $s=\dfrac{\pi}{4}$.
Analysis of the Minimal Compliance Problem and Related Filters Andreas Thalhammer Nov. 25, 2014, 3:30 p.m. S2 059
In this talk we consider the topology optimization of elastic continua. For this, we derive the so-called \textit{Minimal Compliance Problem:}
\begin{align*}
\ell(\mathbf{u}(\rho)) &\to \min_{\rho\in L_\infty(\Omega)}\\ \text{subject to } \quad \int_\Omega \rho ( \mathbf{x}) d \mathbf{x} &\leq m_0, \\ \rho_{\min} \leq \rho(\mathbf{x}) &\leq 1,\qquad\; \dot{\forall} \mathbf{x} \in \Omega. \end{align*}
Whereas we are able to show existence of solutions for this problem, a disadvantage of this formulation is that we cannot guarantee a $0-1$-structure for the material-void problem, since we allow intermediate values of the density $\rho$.
In order to force a \textit{0-1}-structure, we are discussing material interpolation methods such as SIMP and RAMP. Although the sharp contrast in the numerical output is forced by these methods, existence of solutions of the modified problem - in contrast to the original formulation - cannot be proved directly.
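As a concrete illustration (my own sketch, not taken from the talk), the standard SIMP and RAMP interpolation laws map a density rho in [rho_min, 1] to a stiffness factor, penalizing intermediate densities so that the optimizer is pushed toward a 0-1 design:

```python
# Sketch of the standard SIMP and RAMP material interpolation laws.
# The penalization exponents p and q below are typical textbook choices,
# not values given in the abstract; E0 is normalized to 1.

def simp(rho, p=3.0):
    """SIMP: E(rho) = rho**p * E0."""
    return rho ** p

def ramp(rho, q=8.0):
    """RAMP: E(rho) = rho / (1 + q*(1 - rho)) * E0."""
    return rho / (1.0 + q * (1.0 - rho))

# Intermediate densities contribute little stiffness per unit material,
# which is exactly the mechanism that forces the 0-1 structure:
for rho in (0.1, 0.5, 1.0):
    print(f"rho={rho:.1f}  SIMP={simp(rho):.4f}  RAMP={ramp(rho):.4f}")
```

Both laws agree at rho = 0 and rho = 1; the difference lies in how strongly intermediate densities are penalized.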
As a possible remedy, the RIDC method is presented, whose main idea is to add an additional constraint $P_S(\rho) \leq \varepsilon_P$ to the Minimal Compliance Problem. For this modified problem, it is possible to show existence of solutions if the integral operators $P$ and $S$ satisfy specific assumptions. |
Literature on Carbon Nanotube Research
I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!
Contents
1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
7 High-Performance Carbon Nanotube Fiber

Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials Science and Engineering,
A334, 173-178, 2002
The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device, on which CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young modulus and bending stiffness. Breaking tension is reached for the MWNT at 150 GPa and between 3.5% and 5% of strain. During the measurements, 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.
Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science,
304, 276-278, 2004
The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like “elastic smoke,” because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows drawing the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).
Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science,
306, 1358-1361, 2004
In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on the 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given:
<math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math>
where <math>\alpha</math> is the helix angle of the spun yarn, i.e. fiber direction relative to yarn axis. The constant <math>k=\sqrt{dQ/\mu}/3L</math> is given by the fiber diameter d=1nm, the fiber migration length Q (distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the quantity <math>\mu=0.13</math>, which is the friction coefficient of CNTs (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here.
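To get a feel for the formula, here is a small numerical sketch. The values of d, mu and L are the ones quoted above; the migration length Q is not given in the text, so the value below is an assumed placeholder, as is the range of helix angles:

```python
import math

# Sketch of the yarn-strength formula from Zhang et al. (2004):
#   sigma_yarn / sigma_fiber = cos^2(alpha) * (1 - k / sin(alpha)),
#   k = sqrt(d*Q/mu) / (3*L).
# d, mu, L as quoted in the text; Q is an assumed placeholder value.

def strength_ratio(alpha_deg, d=1e-9, Q=5e-7, mu=0.13, L=30e-6):
    alpha = math.radians(alpha_deg)
    k = math.sqrt(d * Q / mu) / (3 * L)
    return math.cos(alpha) ** 2 * (1 - k / math.sin(alpha))

for alpha_deg in (10, 20, 30):
    print(f"helix angle {alpha_deg:2d} deg -> "
          f"sigma_yarn/sigma_fiber = {strength_ratio(alpha_deg):.3f}")
```

For micron-scale fibers k is tiny, so the ratio is dominated by the cos^2(alpha) factor: a smaller helix angle costs less strength.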
In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.
Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. lack of embedded amorphous carbon and imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon.
In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting point in the paper is that you can determine the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper.
In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth, so we could grow infinitely long CNTs.
This article can be found in our archive.
Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. The longer the fibers, the better they can be spun into the yarns that will make our SE ribbon; we hope these fibers will have a couple of hundred GPa of tensile strength. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. This way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. They found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate. This can apparently be avoided by putting an aluminium oxide layer of 10nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7mm! Also, in a range from 0.5 to 1.5mm fiber length the forests grown with this method can be spun into yarns.
The growth rate with this method was initially <math>60{\rm \mu m\ min.^{-1}}</math> and could be sustained for 90 minutes. This is very different from the <math>1{\rm \mu m\ min.^{-1}}</math> reported by G. Zhang et al. (2005), which shows that the growth is very dependent on the method and materials used. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7mm after 2h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This allowed them to determine that the forest grew from the base, which is in line with the in situ observations by S. Hofmann et al. (2007).
Overall the paper is somewhat short on the details of the process, but the results are very interesting. Perhaps the 5mm CNTs are long enough to be spun into a usable yarn.
In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle due to its inherent shape as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and thus drives the growth process mechanically.
Of course, for the SE community the most interesting part of this paper is the question: can we grow CNTs that are long enough that we can spin them into a yarn that would hold the 100GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock.
If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.
High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science,
318, 1892, 2007.
The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (low-density, porous, solid material) of SWNT and MWNT that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20 mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values, as the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel.
They show in their results plot (Figure 3A) that the fibers typically split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample which have no weak point, and some which have one or more, provided the length of the fibers is on the order of the average spacing between weak points. This can be seen from the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20mm long fiber is 20 times higher than encountering one on a 1mm long fiber.
In conclusion, the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength of better than 3GPa using the proposed method is enormous. This brings us back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
Most of us think that counting is as easy as 1, 2, 3... When counting objects, one needs to be careful to not count an object more than once or miss an object. In this section, we will explore some ideas behind counting.
The Multiplication Principle
If a process can be broken down into two steps, performed in order, with m ways of completing the first step and n ways of completing the second step after the first step is completed, then there are (m)(n) ways of completing the process.
Example \(\PageIndex{1}\):
Suppose that pizza can be ordered in 3 sizes, 2 crust choices, 4 choices of toppings, and 2 choices of cheese toppings. How many different ways can a pizza be ordered?
Solution
To determine the number of possibilities, we will use the multiplication principle. Let \(S = \) pizza size, \(C =\) crust choice, \(T =\) topping choice, and \(Ch =\) cheese choice.
Since we need to choose one choice from each category, we can write that we need to choose size, and crust, and topping, and cheese.
Let's use the multiplication principle:
\[\begin{align} \text{Ways} &= (S)(C)(T)(Ch) \nonumber \\[5pt] &= (3)(2)(4)(2) \nonumber \\[5pt] &= 48 \nonumber\end{align} \nonumber\]
What if there was only one choice for cheese? How would this affect the calculation?
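The count (including the single-cheese variant just asked about) can be verified by brute-force enumeration; the concrete size, crust, and topping names below are made up for illustration:

```python
from itertools import product

# Enumerate every (size, crust, topping, cheese) combination and count them.
sizes    = ["small", "medium", "large"]
crusts   = ["thin", "thick"]
toppings = ["pepperoni", "mushroom", "onion", "sausage"]
cheeses  = ["mozzarella", "cheddar"]

orders = list(product(sizes, crusts, toppings, cheeses))
print(len(orders))  # 3 * 2 * 4 * 2 = 48

# With only one cheese choice the count drops to 3 * 2 * 4 * 1 = 24:
print(len(list(product(sizes, crusts, toppings, ["mozzarella"]))))  # 24
```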
Example \(\PageIndex{2}\):
Count the number of possible outcomes when:
A coin is tossed four times. A standard die is rolled five times.
Permutation
Definition: permutation
A permutation is an ordered arrangement of objects.
The number of permutations of \(n\) distinct objects, taken all together, is \(n!\), where
\[n! = n(n-1)(n-2)\cdots 1 \label{perm}\].
Note that \(0!=1\).
Example \(\PageIndex{3}\)
Miss James wants to seat 30 of her students in a row for a class picture. How many different seating arrangements are there? 17 of Miss James' students are girls and 13 are boys. In how many different ways can she seat 17 girls together on the left, then the 13 boys together on the right?
Solution
Let's start with the girls. There are 17 of them, and so, when seating the first girl in the row, there are 17 choices. The next spot will have 16 choices left, then 15, and so on. Thus, the number of choices for seating the girls can be written \(17!\).
For the boys, by the same reasoning, there are \(13!\) ways to seat them on the right.
Now let's apply the multiplication principle: we need to seat the girls and the boys at the same time. For each permutation we might pick for the girls, we need to apply each different case for the boys as a distinct possibility. So, our result is \((17!)(13!)\). This means there are \(2.215 \cdot 10^{24}\) different ways to seat these students with girls on the left and boys on the right!
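A quick sanity check of this number with Python's `math.factorial`:

```python
import math

# 17! ways to order the girls times 13! ways to order the boys.
arrangements = math.factorial(17) * math.factorial(13)
print(f"{arrangements:.4e}")  # about 2.215e24
```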
The number of permutations of \(r \) objects picked from \(n\) objects, where \(0 \leq r \leq n\), is
\[_nP_r = \displaystyle \frac{n!}{(n-r)!}.\label{pick}\]
When reading this out loud, we say "n Pick r" - when we pick something, like a team for sports or favorite desserts, the order matters.
Example \(\PageIndex{4}\):
Using the digits 1,3,5,7, and 9, with no repetitions of digits, how many three–digit numbers can be made?
Solution
We have \(n = 5\) objects, and we want to pick \(r = 3\) of them. So via Equation \ref{pick}:
\[ \begin{align} _nP_r &= \displaystyle \frac{5!}{(5-3)!} \nonumber\\[5pt] &= \displaystyle \frac{5!}{2!} \nonumber \\[5pt] &= \displaystyle \dfrac{(5)(4)(3)(2)(1)}{(2)(1)} \nonumber \\[5pt] &= (5)(4)(3) \nonumber \\[5pt] &= 60 \nonumber \end{align} \nonumber \]
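Such counts can be checked with Python's `math.perm` (available since Python 3.8), which computes \(_nP_r\) directly:

```python
import math

# nPr = n! / (n - r)!
print(math.perm(5, 3))  # 60 three-digit numbers

# Equivalent to the factorial formula:
print(math.factorial(5) // math.factorial(5 - 3))  # 60
```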
Combination
The following was already defined in 3.3 Finite Difference Calculus.
The number of combinations of \(r \) objects chosen from \(n\) objects, where \(0 \leq r \leq n\), is
\[_nC_r = \displaystyle \dfrac{n!}{(n-r)! r!} \label{combo}\]
\(_nC_r \) is also denoted as \( \displaystyle n \choose r\). When reading this out loud, we say "n Choose r" - when we choose objects, like candies out of a bag or clothes from a closet, the order doesn't matter.
Example \(\PageIndex{5}\):
Evaluate \(_6C_2\), and \(_4C_4\).
Solution
Let's try \(_6C_2\), or \(\displaystyle 6 \choose 2\):
\(_nC_r = \displaystyle \frac{6!}{(6-2)!2!}\)
\(_nC_r = \displaystyle \frac{6!}{(4)!2!}\)
\(_nC_r = \displaystyle \frac{(6)(5)(4)(3)(2)}{(4)(3)(2)(2)}\)
\(_nC_r = \displaystyle \frac{(6)(5)}{(2)}\)
\(_nC_r = \displaystyle \frac{30}{2}\)
\(_nC_r = 15\)
Now let's tackle \(\displaystyle 4 \choose 4\):
\(_nC_r = \displaystyle \frac{4!}{(4-4)!4!}\)
\(_nC_r = \displaystyle \frac{4!}{0!4!}\)
The result of \(0!\) is \(1\).
\(_nC_r = \displaystyle \frac{4!}{4!}\)
\(_nC_r = 1\)
This makes sense: there is only one way to choose four things from a group of four things. You choose all of them, and that is the only option.
Example \(\PageIndex{6}\):
How many 5-member committees are possible if we are choosing members from a group of 30 people?
Solution
Let's see: we have 30 people to choose from, so \(n = 30\). We want to choose 5 members, so \(r = 5\). Lastly, we don't care about the order in which we choose, so we use \(_nC_r\):
\(\displaystyle 30 \choose 5\)\( = \displaystyle \frac{30!}{(30-5)!5!}\)
\(\displaystyle 30 \choose 5\)\( = \displaystyle \frac{30!}{25!5!}\)
\(\displaystyle 30 \choose 5\)\( = \displaystyle \frac{(30)(29)(28)(27)(26)}{5!}\)
\(\displaystyle 30 \choose 5\)\( = \displaystyle \frac{(30)(29)(28)(27)(26)}{120}\)
\(\displaystyle 30 \choose 5\)\( = 142 506\)
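These combination counts can be checked with Python's `math.comb` (Python 3.8+), which implements \(_nC_r = \frac{n!}{(n-r)!\,r!}\):

```python
import math

print(math.comb(6, 2))   # 15
print(math.comb(4, 4))   # 1
print(math.comb(30, 5))  # 142506 possible 5-member committees
```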
Example \(\PageIndex{7}\):
In how many ways can 3 men and 3 women sit in a row, if no two men and no two women are next to each other? In how many ways can 3 men and 3 women sit in a circle, if no two men and no two women are next to each other? Pascal's Triangle
Pascal's triangle is named after the mathematician Blaise Pascal. Each entry is generated by adding the two entries diagonally above it, with the edge entries equal to 1; this corresponds to the recurrence \(_nC_r = {}_{n-1}C_{r-1}+{}_{n-1}C_{r}\).
The triangle is useful when calculating \(_nC_r\) as well: count down \(n\) rows and then count in \(r\) entries, starting the count at 0 in both cases. For example, \(_7C_2\) means that we look at row 7, entry 2: 21.
Some useful properties of the triangle:
Row \(n\) gives the coefficients of \((a + b)^n\).
The entries of the \(n\)th row are \(C(n,0), C(n,1), \ldots, C(n,n)\).
The sums of the rows are consecutive powers of 2.
The third entry of each row (from row 2 onward) yields the triangular numbers.
Binomial Expansion
\((x+y)^n = x^n+n x^{n-1}y + \cdots+\binom{n}{k} x^k y^{n-k} + \cdots+y^n\).
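The recurrence and the listed properties can be sketched in a few lines of Python; rows and entries are counted from 0, matching the convention used for \(_nC_r\) above:

```python
# Build Pascal's triangle with the recurrence nCr = (n-1)C(r-1) + (n-1)Cr.
def pascal(rows):
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        triangle.append(
            [1] + [prev[i - 1] + prev[i] for i in range(1, len(prev))] + [1]
        )
    return triangle

t = pascal(8)
print(t[7])       # row 7: [1, 7, 21, 35, 35, 21, 7, 1]
print(t[7][2])    # 7C2 = 21
print(sum(t[5]))  # row sums are powers of 2: 2**5 = 32
print([row[2] for row in t[2:]])  # triangular numbers: 1, 3, 6, 10, ...
```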
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
No, they're mostly notational variations. There are different connotations to the different notations, and different notations are common in different fields where they can mean quite different things. Also, sometimes they are used in a particular context for different (but usually related) things. You'll, of course, have to see how it has been defined in that context.
Any use of $\in$ usually strongly suggests a set-theoretic context (though you occasionally see it in a type-theoretic context). The $f:X\to Y$ notation was, I believe, popularized in mathematics by category theory. In this case, it means $f\in\mathsf{Hom}(X,Y)$. This is compatible with $f\in Y^X$ in the case that the category is the category of sets and functions. In a general categorical context, $Y^X$ is usually reserved for exponential objects for which it makes no sense to ask if $f\in Y^X$, though you (usually) can ask if there is an arrow $1 \to Y^X$.
In logic, when $X$ and $Y$ are sorts, usually $f:X\to Y$ means that $f$ is a function symbol. In this case, $X$ and $Y$ aren't sets and neither is $X \to Y$. It makes no sense to say $f\in X\to Y$ unless $\in$ isn't being thought of as set membership or we want to say $X\to Y$ means the set of function symbols from $X$ to $Y$. This latter view starts to get rather similar to the categorical view.
In type theory, usually $f:X\to Y$ means something like $f$ has type $X\to Y$ and $X\to Y$ is the type of functions from $X$ to $Y$. Types are not sets, and part of the purpose of this notation is to remind one of that fact. Indeed, types are more like sorts, and this notation is similar to how it is used in logic. Sometimes $Y^X$ is also used, where $Y^X$ is used in a more first-class sense in a way that is compatible with the similar distinction I mentioned for category theory, so we might write $X\to Z^Y$ rather than $X\to(Y\to Z)$. Sometimes no distinction is being made and the notation is completely synonymous, and the choice is made purely for typographical reasons. For computer programming, the choice is mostly typographical: `f : X -> Y` is easier to write and easier to read, though programming languages are also closely related to type theories.
It doesn't really make sense to talk about which of these notations is "more fundamental" than the others in general. That said, the $f:X\to Y$ notation is usually making the least commitments if multiple notations are in use. Often $f:X\to Y$ isn't a proposition with a truth value, so it wouldn't make much sense to write $\{f\mid f:X\to Y\}$. Admittedly, in the context where it isn't a proposition, you usually aren't using set theory. If you really wanted to, you could meta-theoretically write $\{t\mid \cdot\vdash t:X\to Y\}$ to mean the set of closed terms of type $X\to Y$, say. This would have nothing to do with a set of functions though. |
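As a small illustration of the programming-language reading (my own example, not from the discussion above), Python's type hints use essentially the $f:X\to Y$ convention, with `Callable[[X], Y]` playing the role of the function type:

```python
from typing import Callable

# The annotation asserts "f has type int -> int" without any claim
# that this type is a set the function is a member of.
def apply_twice(f: Callable[[int], int], x: int) -> int:
    return f(f(x))

square: Callable[[int], int] = lambda n: n * n
print(apply_twice(square, 2))  # (2**2)**2 = 16
```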
The weight 2 Bianchi modular forms are particularly important in regard to their conjectural connections with abelian varieties of $\textrm{GL}_2$-type. In the weight 2 case, we have $F: \mathcal{H}_3 \rightarrow \mathbb{C}^3$ and $$(F |_k\gamma)(z)=\dfrac{1}{|r|^2+|s|^2} \begin{pmatrix} \bar{r}^2 & 2\bar{r}s & s^2 \\ -\bar{r}\bar{s} & |r|^2-|s|^2 & rs \\ \bar{s}^2 & -2r\bar{s} & r^2 \end{pmatrix} F(\gamma z)$$ where $\gamma=\begin{pmatrix} a &b \\ c&d \end{pmatrix}$ and $r=cx+d$ and $s=cy$.
Let $\beta_1:=-\frac{dx}{y}, \beta_2:= \frac{dy}{y}, \beta_3:=\frac{d\bar{x}}{y} $ be a basis of differential 1-forms on $\mathcal{H}_3$. A differential form $\omega$ is
harmonic if $\Delta \omega =0$ where $\Delta=d \circ \delta + \delta \circ d$ is the usual Laplacian with $d$ being the exterior derivative and $\delta$ the codifferential operator. Then $\textrm{PSL}_2(\mathbb{C})$ acts on the space of differential 1-forms as $$\gamma \cdot {}^t(\beta_1,\beta_2,\beta_3)_{(z)} = Sym^2(J(\gamma,z)){}^t(\beta_1,\beta_2,\beta_3)_{(z)}.$$A weight $2$ Bianchi modular form for $\Gamma$ can be alternatively described as a real analytic function $F=(F_1,F_2,F_3) : \mathcal{H}_3 \rightarrow \mathbb{C}^3$ such that$$F_1\beta_1 + F_2 \beta_2+F_3\beta_3$$ is a harmonic differential 1-form on $\mathcal{H}_3$ that is $\Gamma$-invariant. It is called cuspidal if it satisfies the extra property $$\int_{\mathbb{C} / \mathcal{O}_K} (F| \gamma )(x,y) dx = 0$$ for every $\gamma \in \textrm{PSL}_2(\mathcal{O}_K).$
This condition is equivalent to saying that the constant coefficient in the Fourier-Bessel expansion of $F|\gamma$ is equal to zero for every $\gamma \in \textrm{PSL}_2(\mathcal{O}_K)$.
Last edited by Holly Swisher on 2019-05-08 14:04:34.
For exercises 1 - 15, find \(f′(x)\) for each function.
1) \(f(x)=x^2e^x\)
Answer: \(f'(x) = 2xe^x+x^2e^x\)
2) \(f(x)=\dfrac{e^{−x}}{x}\)
3) \(f(x)=e^{x^3\ln x}\)
Answer: \(f'(x) = (3x^2\ln x+x^2)e^{x^3\ln x}\)
4) \(f(x)=\sqrt{e^{2x}+2x}\)
5) \(f(x)=\dfrac{e^x−e^{−x}}{e^x+e^{−x}}\)
Answer: \(f'(x) = \dfrac{4}{(e^x+e^{−x})^2}\)
6) \(f(x)=\dfrac{10^x}{\ln 10}\)
7) \(f(x)=2^{4x}+4x^2\)
Answer: \(f'(x) = 2^{4x+2}⋅\ln 2+8x\)
8) \(f(x)=3^{\sin 3x}\)
9) \(f(x)=x^π⋅π^x\)
Answer: \(f'(x) = πx^{π−1}⋅π^x+x^π⋅π^x\ln π\)
10) \(f(x)=\ln(4x^3+x)\)
11) \(f(x)=\ln\sqrt{5x−7}\)
Answer: \(f'(x) = \dfrac{5}{2(5x−7)}\)
12) \(f(x)=x^2\ln 9x\)
13) \(f(x)=\log(\sec x)\)
Answer: \(f'(x) = \dfrac{\tan x}{\ln 10}\)
14) \(f(x)=\log_7(6x^4+3)^5\)
15) \(f(x)=2^x⋅\log_37^{x^2−4}\)
Answer: \(f'(x) = 2^x⋅\ln 2⋅\log_3 7^{x^2−4}+2^x⋅\dfrac{2x\ln 7}{\ln 3}\)
For exercises 16 - 23, use logarithmic differentiation to find \(\dfrac{dy}{dx}\).
16) \(y=x^{\sqrt{x}}\)
17) \(y=(\sin 2x)^{4x}\)
Answer: \(\dfrac{dy}{dx} = (\sin 2x)^{4x}[4⋅\ln(\sin 2x)+8x⋅\cot 2x]\)
18) \(y=(\ln x)^{\ln x}\)
19) \(y=x^{\log_2 x}\)
Answer: \(\dfrac{dy}{dx} = x^{\log_2 x}⋅\dfrac{2\ln x}{x\ln 2}\)
20) \(y=(x^2−1)^{\ln x}\)
21) \(y=x^{\cot x}\)
Answer: \(\dfrac{dy}{dx} = x^{\cot x}⋅[−\csc^2 x⋅\ln x+\frac{\cot x}{x}]\)
22) \(y=\dfrac{x+11}{\sqrt[3]{x^2−4}}\)
Answer: \(\dfrac{dy}{dx} = \dfrac{x+11}{\sqrt[3]{x^2−4}}\left[\dfrac{1}{x+11}- \dfrac{2x}{3\left(x^2−4\right)}\right]\)
23) \(y=x^{−1/2}(x^2+3)^{2/3}(3x−4)^4\)
Answer: \(\dfrac{dy}{dx} = x^{−1/2}(x^2+3)^{2/3}(3x−4)^4⋅\left[\dfrac{−1}{2x}+\dfrac{4x}{3(x^2+3)}+\dfrac{12}{3x−4}\right]\)
24) [T] Find an equation of the tangent line to the graph of \(f(x)=4xe^{(x^2−1)}\) at the point where
\(x=−1.\) Graph both the function and the tangent line.
25) [T] Find the equation of the line that is normal to the graph of \(f(x)=x⋅5^x\) at the point where \(x=1\). Graph both the function and the normal line.
Answer:
\(y=\frac{−1}{5+5\ln 5}x+(5+\frac{1}{5+5\ln 5})\)
26) [T] Find the equation of the tangent line to the graph of \(x^3−x\ln y+y^3=2x+5\) at the point where \(x=2\). (Hint: Use implicit differentiation to find \(\dfrac{dy}{dx}\).) Graph both the curve and the tangent line.
27) Consider the function \(y=x^{1/x}\) for \(x>0.\)
a. Determine the points on the graph where the tangent line is horizontal.
b. Determine the points on the graph where \(y′>0\) and those where \(y′<0\).
Answer: a. \(x=e\approx 2.718\) b. \(y′>0\) on \((0,e)\) and \(y′<0\) on \((e,∞)\)
28) The formula \(I(t)=\dfrac{\sin t}{e^t}\) is the formula for a decaying alternating current.
a. Complete the following table with the appropriate values.
| \(t\) | \(\dfrac{\sin t}{e^t}\) |
|---|---|
| \(0\) | (i) |
| \(π/2\) | (ii) |
| \(π\) | (iii) |
| \(3π/2\) | (iv) |
| \(2π\) | (v) |
| \(5π/2\) | (vi) |
| \(3π\) | (vii) |
| \(7π/2\) | (viii) |
| \(4π\) | (ix) |
b. Using only the values in the table, determine where the tangent line to the graph of \(I(t)\) is horizontal.
29) [T] The population of Toledo, Ohio, in 2000 was approximately 500,000. Assume the population is increasing at a rate of 5% per year.
a. Write the exponential function that relates the total population as a function of \(t\).
b. Use a. to determine the rate at which the population is increasing in \(t\) years.
c. Use b. to determine the rate at which the population is increasing in 10 years
Answer: a. \(P=500,000(1.05)^t\) individuals b. \(P′(t)=24395⋅(1.05)^t\) individuals per year c. \(39,737\) individuals per year
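A numerical check of parts b and c, using the model from part a:

```python
import math

# P(t) = 500000 * 1.05**t, so P'(t) = 500000 * ln(1.05) * 1.05**t.
def P_prime(t):
    return 500_000 * math.log(1.05) * 1.05 ** t

print(round(P_prime(0)))   # about 24395 individuals per year
print(round(P_prime(10)))  # about 39737 individuals per year
```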
30)[T] An isotope of the element erbium has a half-life of approximately 12 hours. Initially there are 9 grams of the isotope present.
a. Write the exponential function that relates the amount of substance remaining as a function of \(t\), measured in hours.
b. Use a. to determine the rate at which the substance is decaying in \(t\) hours.
c. Use b. to determine the rate of decay at \(t=4\) hours.
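Parts a.–c. follow the standard half-life model; a small numerical sketch (function names are my own):

```python
import math

A0, HALF_LIFE = 9.0, 12.0   # grams, hours

def A(t):
    """Grams remaining after t hours: A0 * (1/2)^(t/12)."""
    return A0 * 0.5 ** (t / HALF_LIFE)

def A_prime(t):
    """Decay rate in grams per hour: A0 * (ln(1/2)/12) * (1/2)^(t/12)."""
    return A0 * (math.log(0.5) / HALF_LIFE) * 0.5 ** (t / HALF_LIFE)

print(A(12), A_prime(4))   # 4.5 grams after one half-life; about -0.41 g/hr at t = 4
```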
31) [T] The number of cases of influenza in New York City from the beginning of 1960 to the beginning of 1961 is modeled by the function
\[N(t)=5.3e^{0.093t^2−0.87t},\quad (0≤t≤4),\nonumber\]
where \(N(t)\) gives the number of cases (in thousands) and \(t\) is measured in years, with \(t=0\) corresponding to the beginning of 1960.
a. Show work that evaluates \(N(0)\) and \(N(4)\). Briefly describe what these values indicate about the disease in New York City.
b. Show work that evaluates \(N′(0)\) and \(N′(3)\). Briefly describe what these values indicate about the disease in New York City.
Answer: a. At the beginning of 1960 there were 5.3 thousand cases of the disease in New York City. At the beginning of 1964 (\(t=4\)) there were approximately 723 cases. b. At the beginning of 1960 the number of cases of the disease was decreasing at a rate of \(4.611\) thousand per year; at the beginning of 1963, the number of cases was decreasing at a rate of \(0.2808\) thousand per year.
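The numbers in this answer can be reproduced directly from the model; a quick sketch:

```python
import math

def N(t):
    """Cases in thousands, t years after the start of 1960."""
    return 5.3 * math.exp(0.093 * t ** 2 - 0.87 * t)

def N_prime(t):
    """dN/dt by the chain rule: N(t) * (2 * 0.093 * t - 0.87)."""
    return N(t) * (0.186 * t - 0.87)

print(N(0), N(4))              # 5.3 and about 0.723 thousand cases
print(N_prime(0), N_prime(3))  # about -4.611 and -0.281 thousand per year
```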
32) [T] The relative rate of change of a differentiable function \(y=f(x)\) is given by \(\frac{100⋅f′(x)}{f(x)}\%.\) One model for population growth is a Gompertz growth function, given by \(P(x)=ae^{−b⋅e^{−cx}}\) where \(a,b\), and \(c\) are constants.
a. Find the relative rate of change formula for the generic Gompertz function.
b. Use a. to find the relative rate of change of a population at \(x=20\) months when \(a=204,\,b=0.0198,\) and \(c=0.15.\)
c. Briefly interpret what the result of b. means.
33) For the following exercises, use the population of New York City from 1790 to 1860, given in the following table.
| Year since 1790 | Population |
|---|---|
| 0 | 33,131 |
| 10 | 60,515 |
| 20 | 96,373 |
| 30 | 123,706 |
| 40 | 202,300 |
| 50 | 312,710 |
| 60 | 515,547 |
| 70 | 813,669 |
New York City Population Over Time. Source: http://en.wikipedia.org/wiki/Largest..._United_States_by_population_by_decade
34) [T] Using a computer program or a calculator, fit a growth curve to the data of the form \(p=ab^t\).
Answer: \(p=35741(1.045)^t\)
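One way to produce such a fit, sketched with NumPy (assuming an ordinary least-squares fit on the logarithm of the population; a calculator's built-in ExpReg may differ slightly in the last digits):

```python
import numpy as np

t = np.arange(0, 80, 10)           # years since 1790
pop = np.array([33131, 60515, 96373, 123706,
                202300, 312710, 515547, 813669])

# Fit p = a * b**t by least squares on ln(p) = ln(a) + t * ln(b).
slope, intercept = np.polyfit(t, np.log(pop), 1)
a, b = np.exp(intercept), np.exp(slope)
print(round(a), round(b, 3))       # roughly 35740 and 1.045
```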
35) [T] Using the exponential best fit for the data, write a table containing the derivatives evaluated at each year.
36) [T] Using the exponential best fit for the data, write a table containing the second derivatives evaluated at each year.
Answer:

| Year since 1790 | \(P''\) |
|---|---|
| 0 | 69.25 |
| 10 | 107.5 |
| 20 | 167.0 |
| 30 | 259.4 |
| 40 | 402.8 |
| 50 | 625.5 |
| 60 | 971.4 |
| 70 | 1508.5 |
37) [T] Using the tables of first and second derivatives and the best fit, answer the following questions:
a. Will the model be accurate in predicting the future population of New York City? Why or why not?
b. Estimate the population in 2010. Was the prediction from a. correct? |
Note: This question was asked in stats.stackexchange.com and math.stackexchange.com, with expired bounties on both sites.
Given a sequence of iid random variables $X_i$ (without loss of generality from $U(0,1)$), an integer $k \ge 1$ and some $p \in (0,1)$, construct the sequence of random vectors $Z^{(j)}$, $j=0,1,...$ in the following way. Let
$$Z^{(0)}=(X_{(1)},...,X_{(k)}),$$
where $X_{(l)}$ is the $l$-order statistic of sample $\{X_1,...,X_k\}$. Introduce notations
\begin{align} Z^{(j)}&=(Z_{j,1},...,Z_{j,k}),\\\\ m_j&=\min(Z_{j-1,1},...,Z_{j-1,k},X_{k+j}),\\\\ M_j&=\max(Z_{j-1,1},...,Z_{j-1,k},X_{k+j}) \end{align}
Then
$$Z^{(j)}=(Y_{(1)},...,Y_{(k)})$$
where $Y_{(l)}$ is the $l$-order statistic of the following set, which is:
1. the set $\{Z_{j-1,1},...,Z_{j-1,k},X_{k+j}\}\setminus \{m_j\}$ with probability $p$,
2. the set $\{Z_{j-1,1},...,Z_{j-1,k},X_{k+j}\}\setminus \{M_j\}$ with probability $1-p$.
The decision between cases 1. and 2. is made independently from the $X_i$ (and hence from the $Z^{(i)}$).
The $Z^{(j)}$ are supported on the $k$-dimensional simplex $S_k = \{(x_1, \dots, x_k) \in \mathbb{R}^k \, | \, 0 \le x_1 \le x_2 \le \dots \le x_k \le 1 \}$.
It appears that the $Z^{(j)}$ converge in distribution. Is this known? Is anything known about the limiting distribution?
For the case $k=1$, the answer is the following. Denote the cdf of $Z^{(j)}$ by $F_j$.
The cdf of $\min(X_{n+1},Z^{(n)})$ (for $U(0,1)$ case) is
$$x+F_n(x)−xF_n(x)$$ and the cdf of $\max(X_{n+1},Z^{(n)})$ is
$$xF_n(x)$$.
Hence
\begin{align} F_{n+1}(x)&=p(x+F_n(x)−xF_n(x))+(1−p)xF_n(x)\\\\ &=px+(p(1-x)+(1-p)x)F_n(x) \end{align}
Since $p(1-x)+(1-p)x\in(0,1)$ we have that
$$\lim F_{n}(x)=\frac{px}{1-p(1-x)-(1-p)x}$$
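This \(k=1\) limit is easy to check by simulation; a sketch (the parameter values are arbitrary, and the min/max branch probabilities follow the recursion for \(F_{n+1}\) written above):

```python
import numpy as np

rng = np.random.default_rng(1)
p, x0 = 0.7, 0.5              # arbitrary test values
chains, steps = 20_000, 200

Z = rng.random(chains)         # Z^(0): a single U(0,1) draw per chain (k = 1)
for _ in range(steps):
    X = rng.random(chains)
    # Branches as in the recursion for F_{n+1} above:
    # with probability p keep the min of {Z, X}, otherwise keep the max.
    take_min = rng.random(chains) < p
    Z = np.where(take_min, np.minimum(Z, X), np.maximum(Z, X))

empirical = (Z <= x0).mean()
limit = p * x0 / (1 - p * (1 - x0) - (1 - p) * x0)
print(empirical, limit)        # both close to 0.7
```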
I am looking for general results (case $k>1$) either for the limiting distribution of the whole vector $Z^{(j)}$ or of some of its components (marginal distributions). |
Let me answer the second query.
We know $$\mathbf{ B}=\text{curl}\;\mathbf{ A} $$ where $\bf B$ is the magnetic field & $\bf A$ is vector-potential.
Now, using the Maxwell equation $\nabla\times\mathbf B= \mu_0 \mathbf J$ (for steady currents), we get $$\text{curl}\;(\text{curl}\; \mathbf A)= \mu_0 \mathbf J . $$ After a bit of algebra, the $x$-component becomes $$-\frac{\partial^2 A_x}{\partial x^2}-\frac{\partial^2 A_x}{\partial y^2}- \frac{\partial^2 A_x}{\partial z^2} +\frac{\partial}{\partial x}\left(\frac{\partial A_x}{\partial x}+ \frac{\partial A_y}{\partial y}+\frac{\partial A_z}{\partial z}\right)= \mu_0 J_x.$$
Now, for convenience, we take $\text{div}\; \mathbf A= 0\; .$
This makes the above relation look like $$-\frac{\partial^2 A_x}{\partial x^2}-\frac{\partial^2 A_x}{\partial y^2}- \frac{\partial^2 A_x}{\partial z^2} = \mu_0 J_x\;.$$ This is Poisson's equation for each component, so the vector potential at $(x_1,y_1,z_1)$ is given by $$\mathbf A(x_1,y_1,z_1)= \frac{\mu_0}{4\pi}\int \frac{\mathbf{J}(x_2,y_2,z_2)\;\mathrm dv_2}{r_{12}}\;.$$
Now, consider a loop of wire carrying current $I$.
Now, $\mathrm{d} v_2= a\; \mathrm{d} l$, where $a$ is the cross-sectional area of the wire and $\mathrm dl$ is the length of an infinitesimal section; since $Ja = I$, this gives $\mathbf{J} \; \mathrm{d} v_2= I\; \mathrm{d}\mathbf{l} .$
Therefore our vector-potential for the thin-wire carrying steady current is given by $$\mathbf A= \frac{\mu_0 I}{4\pi}\int \frac{\mathrm d\mathbf l}{r_{12}}.$$
Let us focus on that section of wire which happens to let the current in the $\hat{\mathbf x}$ direction & is located at the origin of our frame.
Then at a certain point \((x,y)\) in the \(xy\) plane, the contribution to the vector-potential from the infinitesimal wire-section at the origin is given by $$\mathrm{d}\mathbf A= \hat{\mathbf x} \;\frac{\mu_0 I}{4\pi} \frac{\mathrm{d} l}{\sqrt{x^2 + y^2}} \;.$$
Since \(\mathbf A\) is in the \(xy\) plane, its curl must point in the \(\hat{\mathbf z}\) direction. Therefore, \begin{align}\mathrm{d}\mathbf B &= \text{curl}\;\mathrm{d}\mathbf A\\& =\hat{\mathbf z}\left(-\frac{\partial A_x}{\partial y}\right)\\ &= \hat{\mathbf z}\;\frac{\mu_0 I}{4\pi} \frac{ \mathrm{d} l\; y}{(x^2 + y^2)^{3/2}}\\ &= \hat{\mathbf z}\;\frac{\mu_0 I}{4\pi} \frac{\mathrm{d}l\; \sin\varphi}{r^2}\;. \end{align}
Now, that being deduced, we can say that this is valid in any coordinate system; all that matters is the relative orientation of the element $\mathrm d\mathbf l$ and the radius vector $\bf r$ from the element to the point concerned.
The contribution to the magnetic field from any element of the wire is therefore a vector perpendicular to the plane containing $\mathrm{d} \mathbf l$ and $\bf r$, with magnitude proportional to $\sin\varphi$, where $\varphi$ is the angle between them.
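As a numerical sanity check of this geometric picture, one can sum the elementary contributions around a full circular loop and compare with the standard result \(B=\mu_0 I/(2R)\) at the loop's center; a sketch (loop radius, current and discretization are arbitrary choices):

```python
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability (SI)
I, R, N = 1.0, 1.0, 2000      # current (A), loop radius (m), number of segments

# Discretize a circular loop in the xy-plane.
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
pts = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(N)], axis=1)
dl = np.roll(pts, -1, axis=0) - pts              # segment vectors d l

# Sum dB = mu0 I / (4 pi) * dl x r_hat / r^2 at the center of the loop.
field_point = np.zeros(3)
r = field_point - (pts + 0.5 * dl)               # from segment midpoint to field point
rnorm = np.linalg.norm(r, axis=1, keepdims=True)
dB = MU0 * I / (4 * np.pi) * np.cross(dl, r / rnorm) / rnorm**2
Bz = dB[:, 2].sum()

print(Bz, MU0 * I / (2 * R))   # the two values agree to about 1e-5 relative error
```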
This can be compactly written as $${\mathrm{d}\mathbf B= \frac{\mu_0}{4\pi} \frac{I\; \mathrm{d}\mathbf l \times \hat{ \mathbf r}}{r^2}}\; .$$ And this is Biot-Savart Law, which you wanted. |
Saying that warm air "holds" more moisture is technically incorrect, but is a common colloquialism. Let's break it down to the technicalities.
Let's consider a glass of water with a vacuum (no air) above it. What will happen? The molecules that are at the top most layer of the water will evaporate. At what rate will the water evaporate? Better yet, what is evaporation?
Evaporation is when the water molecules gain enough kinetic energy (how fast they vibrate) to break the bonds that hold them to one another. Kinetic energy is dependent on temperature. So the molecules vibrate faster, break their bonds, and enter the vacuum as a vapor. Some molecules will stay as a vapor in the vacuum, but others will reenter the liquid. When the molecules enter the liquid as fast as they are leaving, then it is saturated.
If the system is cooled down, then the rate at which molecules leave the liquid slows down. The molecules entering the liquid do not slow down at the same rate, causing the liquid to grow back toward its initial state.
Note that I specifically said it is a vacuum. Instead of a glass of water, picture the water as little drops. The atmosphere can act to warm or cool these drops, and vice-versa.
In the more nitty-gritty aspect of this, the equation that describes the vapor pressure as a function of temperature is called the Clausius-Clapeyron equation/relation. The American Meteorological Society has one approximate solution, but I prefer this equation: $$e_{sat}(T)=611 Pa \exp\left[\frac{L_v}{R_v}(273.15^{-1}-T^{-1})\right],$$ where $L_v$ is the latent heat of vaporization, $R_v$ is the specific gas constant for water vapor, and $T$ is the absolute temperature in Kelvin. Combined with the ideal gas law for water vapor (assuming saturation) $$e_{sat}(T)V=m_vR_vT,$$ and given the volume ($V$) we can write an expression for the mass of water vapor $m_v$. The equation comes out to $$m_v=611 Pa \exp\left[\frac{L_v}{R_v}(273.15^{-1}-T^{-1})\right]V R_v^{-1}T^{-1}$$
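A small numerical sketch of these formulas (the values of \(L_v\) and \(R_v\) below are typical assumed constants):

```python
import math

L_V = 2.5e6    # latent heat of vaporization, J/kg (assumed typical value)
R_V = 461.5    # specific gas constant for water vapor, J/(kg K)

def e_sat(T):
    """Saturation vapor pressure in Pa at absolute temperature T in K."""
    return 611.0 * math.exp((L_V / R_V) * (1.0 / 273.15 - 1.0 / T))

def m_v(T, V=1.0):
    """Mass (kg) of water vapor saturating a volume V (m^3), via e_sat*V = m*R_V*T."""
    return e_sat(T) * V / (R_V * T)

print(m_v(273.15), m_v(293.15))   # the saturated mass more than triples over 20 K
```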
To answer your final question, the molecules are approximated as being infinitesimally small, per the ideal gas law. To be more specific, one molecule of liquid water occupies only about $3\times 10^{-29}$ cubic meters (roughly $10^{-27}$ cubic feet), so the added volume is considered negligible. In short, the molecules are treated as point masses. |
Prove that the function $\sqrt x$ is uniformly continuous on $\{x\in \mathbb{R} | x \ge 0\}$.
To show uniform continuity I must show for a given $\epsilon > 0$ there exists a $\delta>0$ such that for all $x_1, x_2 \ge 0$ we have $|x_1 - x_2| < \delta$ implies that $|f(x_1) - f(x_2)|< \epsilon.$
What I did was $\left|\sqrt x - \sqrt x_0\right| = \left|\frac{(\sqrt x - \sqrt x_0)(\sqrt x + \sqrt x_0)}{(\sqrt x + \sqrt x_0)}\right| = \left|\frac{x - x_0}{\sqrt x + \sqrt x_0}\right| < \frac{\delta}{\sqrt x + \sqrt x_0}$
but I found a proof online that took $\delta = \epsilon^2$, and I didn't understand where that came from. For $\delta =\epsilon^2$ to give $\frac{\delta}{\sqrt x + \sqrt x_0} \le \frac{\delta}{\epsilon} = \epsilon$, we would need $\sqrt x + \sqrt x_0 \ge \epsilon$. But why would $\epsilon \le \sqrt x + \sqrt x_0$? (If it is not, then $\sqrt x$ and $\sqrt x_0$ are each less than $\epsilon$, so $|\sqrt x - \sqrt x_0| < \epsilon$ anyway.) Ah, I think I understand it now just by typing this out and from an earlier hint by Michael Hardy here. |
Inaccessible cardinal Inaccessible cardinals are the traditional entry-point to the large cardinal hierarchy, although weaker notions such as the worldly cardinals can still be viewed as large cardinals.
A cardinal $\kappa$ being inaccessible implies the following:
- $V_\kappa$ is a model of ZFC, and so inaccessible cardinals are worldly.
- The worldly cardinals are unbounded in $\kappa$, so $V_\kappa$ satisfies the existence of a proper class of worldly cardinals.
- $\kappa$ is an aleph fixed point and a beth fixed point, and consequently $V_\kappa=H_\kappa$.
- (Solovay) There is an inner model of a forcing extension satisfying ZF+DC in which every set of reals is Lebesgue measurable; in fact, this is equiconsistent to the existence of an inaccessible cardinal.
- For any $A\subseteq V_\kappa$, the set of all $\alpha<\kappa$ such that $\langle V_\alpha;\in,A\cap V_\alpha\rangle\prec\langle V_\kappa;\in,A\rangle$ is club in $\kappa$.
An ordinal $\alpha$ being inaccessible is equivalent to the following:
- $V_{\alpha+1}$ satisfies $\mathrm{KM}$.
- $\alpha>\omega$ and $V_\alpha$ is a Grothendieck universe.
- $\alpha$ is $\Pi_0^1$-indescribable.
- $\alpha$ is $\Sigma_1^1$-indescribable.
- $\alpha$ is $\Pi_2^0$-indescribable.
- $\alpha$ is $0$-indescribable.
- $\alpha$ is a nonzero limit ordinal and $\beth_\alpha=R_\alpha$, where $R_\beta$ is the $\beta$-th regular cardinal, i.e. the least regular $\gamma$ such that $\{\kappa\in\gamma:\mathrm{cf}(\kappa)=\kappa\}$ has order-type $\beta$.
- $\alpha = \beth_{R_\alpha}$.
- $\alpha = R_{\beth_\alpha}$.
- $\alpha$ is a weakly inaccessible strong limit cardinal (see weakly inaccessible below).

Weakly inaccessible cardinal
A cardinal $\kappa$ is
weakly inaccessible if it is an uncountable regular limit cardinal. Under GCH, this is equivalent to inaccessibility, since under GCH every limit cardinal is a strong limit cardinal. So the difference between weak and strong inaccessibility only arises when GCH fails badly.

Every inaccessible cardinal is weakly inaccessible, but forcing arguments show that any inaccessible cardinal can become a non-inaccessible weakly inaccessible cardinal in a forcing extension, such as after adding an enormous number of Cohen reals (this forcing is c.c.c. and hence preserves all cardinals and cofinalities and hence also all regular limit cardinals). Meanwhile, every weakly inaccessible cardinal is fully inaccessible in any inner model of GCH, since it will remain a regular limit cardinal in that model and hence also be a strong limit there. In particular, every weakly inaccessible cardinal is inaccessible in the constructible universe $L$. Consequently, although the two large cardinal notions are not provably equivalent, they are equiconsistent.
There are a few equivalent definitions of weakly inaccessible cardinals. In particular:
- Letting $R$ be the transfinite enumeration of regular cardinals, a limit ordinal $\alpha$ is weakly inaccessible if and only if $R_\alpha=\aleph_\alpha$.
- A nonzero cardinal $\kappa$ is weakly inaccessible if and only if $\kappa$ is regular and there are $\kappa$-many regular cardinals below $\kappa$; that is, $\kappa=R_\kappa$.
- A regular cardinal $\kappa$ is weakly inaccessible if and only if $\mathrm{REG}$ is unbounded in $\kappa$ (showing the correlation between weakly Mahlo cardinals and weakly inaccessible cardinals, as "stationary in $\kappa$" is replaced with "unbounded in $\kappa$").

Levy collapse
The Levy collapse of an inaccessible cardinal $\kappa$ is the $\lt\kappa$-support product of $\text{Coll}(\omega,\gamma)$ for all $\gamma\lt\kappa$. This forcing collapses all cardinals below $\kappa$ to $\omega$, but since it is $\kappa$-c.c., it preserves $\kappa$ itself, and hence ensures $\kappa=\omega_1$ in the forcing extension.
Inaccessible to reals
A cardinal $\kappa$ is
inaccessible to reals if it is inaccessible in $L[x]$ for every real $x$. For example, after the Levy collapse of an inaccessible cardinal $\kappa$, which forces $\kappa=\omega_1$ in the extension, the cardinal $\kappa$ is of course no longer inaccessible, but it remains inaccessible to reals.

Universes
When $\kappa$ is inaccessible, then $V_\kappa$ provides a highly natural transitive model of set theory, a universe in which one can view a large part of classical mathematics as taking place. In what appears to be an instance of convergent evolution, the same universe concept arose in category theory out of the desire to provide a hierarchy of notions of smallness, so that one may form such categories as the category of all small groups, or small rings or small categories, without running into the difficulties of Russell's paradox. Namely, a
Grothendieck universe is a transitive set $W$ that is closed under pairing, power set and unions. That is,

- (transitivity) If $b\in a\in W$, then $b\in W$.
- (pairing) If $a,b\in W$, then $\{a,b\}\in W$.
- (power set) If $a\in W$, then $P(a)\in W$.
- (union) If $a\in W$, then $\cup a\in W$.
The
Grothendieck universe axiom is the assertion that every set is an element of a Grothendieck universe. This is equivalent to the assertion that the inaccessible cardinals form a proper class.

Degrees of inaccessibility
A cardinal $\kappa$ is
$1$-inaccessible if it is inaccessible and a limit of inaccessible cardinals. In other words, $\kappa$ is $1$-inaccessible if $\kappa$ is the $\kappa^{\rm th}$ inaccessible cardinal, that is, if $\kappa$ is a fixed point in the enumeration of all inaccessible cardinals. Equivalently, $\kappa$ is $1$-inaccessible if $V_\kappa$ is a universe and satisfies the universe axiom.
More generally, $\kappa$ is $\alpha$-inaccessible if it is inaccessible and for every $\beta\lt\alpha$ it is a limit of $\beta$-inaccessible cardinals.
$1$-inaccessibility is already consistency-wise stronger than the existence of a proper class of inaccessible cardinals, and $2$-inaccessibility is stronger than the existence of a proper class of $1$-inaccessible cardinals. More specifically, a cardinal $\kappa$ is $\alpha$-inaccessible if and only if for every $\beta<\alpha$: $$V_{\kappa+1}\models\mathrm{KM}+\text{There is a proper class of }\beta\text{-inaccessible cardinals}$$
As a result, if $\kappa$ is $\alpha$-inaccessible then for every $\beta<\alpha$: $$V_\kappa\models\mathrm{ZFC}+\text{There exists a }\beta\text{-inaccessible cardinal}$$
Therefore $2$-inaccessibility is weaker than $3$-inaccessibility, which is weaker than $4$-inaccessibility... all of which are weaker than $\omega$-inaccessibility, which is weaker than $\omega+1$-inaccessibility, which is weaker than $\omega+2$-inaccessibility... all of which are weaker than hyperinaccessibility, etc.
Hyper-inaccessible
A cardinal $\kappa$ is
hyperinaccessible if it is $\kappa$-inaccessible. One may similarly define that $\kappa$ is $\alpha$-hyperinaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$, it is a limit of $\beta$-hyperinaccessible cardinals. Continuing, $\kappa$ is hyperhyperinaccessible if $\kappa$ is $\kappa$-hyperinaccessible.
More generally, $\kappa$ is
hyper${}^\alpha$-inaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$ it is $\kappa$-hyper${}^\beta$-inaccessible, where $\kappa$ is $\alpha$-hyper${}^\beta$-inaccessible if it is hyper${}^\beta$-inaccessible and for every $\gamma<\alpha$, it is a limit of $\gamma$-hyper${}^\beta$-inaccessible cardinals.
Every Mahlo cardinal $\kappa$ is hyper${}^\kappa$-inaccessible. |
Now let \(\theta \) be any angle. We say that \(\theta \) is in
standard position if its initial side is the positive \(x\)-axis and its vertex is the origin \((0,0) \). Pick any point \((x,y) \) on the terminal side of \(\theta \) a distance \(r>0 \) from the origin (see Figure 1.4.3(c)). (Note that \(r = \sqrt{ x^2 + y^2 } \). Why?) We then define the trigonometric functions of \(\theta \) as follows:
\[\label{1.2} \sin\;\theta ~=~ \dfrac{y}{r} \qquad\qquad
\cos\;\theta ~=~ \dfrac{x}{r} \qquad\qquad \tan\;\theta ~=~ \dfrac{y}{x}\]
\[\label{1.3}\csc\;\theta ~=~ \dfrac{r}{y} \qquad\qquad
\sec\;\theta ~=~ \dfrac{r}{x} \qquad\qquad \cot\;\theta ~=~ \dfrac{x}{y}\] As in the acute case, by the use of similar triangles these definitions are well-defined (i.e. they do not depend on which point \((x,y) \) we choose on the terminal side of \(\theta\)). Also, notice that \(| \sin\;\theta | \le 1 \) and \( | \cos\;\theta | \le 1 \), since \(| y | \le r \) and \( | x | \le r \) in the above definitions. Notice that in the case of an acute angle these definitions are equivalent to our earlier definitions in terms of right triangles: draw a right triangle with angle \(\theta \) such that \(x = \text{adjacent side} \), \(y = \text{opposite side} \), and \(r = \text{hypotenuse} \). For example, this would give us \(\sin\;\theta = \frac{y}{r} = \frac{\text{opposite}}{\text{hypotenuse}} \) and \(\cos\;\theta = \frac{x}{r} = \frac{\text{adjacent}}{\text{hypotenuse}} \), just as before (see Figure 1.4.4(a)).
Example 1.20
Find the exact values of all six trigonometric functions of \(120^\circ\).
Solution:
We know \(120^\circ = 180^\circ - 60^\circ\). By Example 1.7 in Section 1.2, we see that we can use the point \((-1,\sqrt{3})\) on the terminal side of the angle \(120^\circ\) in QII, since we saw in that example that a basic right triangle with a \(60^\circ\) angle has adjacent side of length \(1\), opposite side of length \(\sqrt{3}\), and hypotenuse of length \(2\), as in the figure on the right. Drawing that triangle in QII so that the hypotenuse is on the terminal side of \(120^\circ\) makes \(r = 2\), \(x=-1\), and \(y=\sqrt{3}\). Hence:
\[\nonumber \sin\;120^\circ \;=\; \dfrac{y}{r} \;=\; \dfrac{\sqrt{3}}{2} \qquad
\cos\;120^\circ \;=\; \dfrac{x}{r} \;=\; \dfrac{-1}{2} \qquad \tan\;120^\circ \;=\; \dfrac{y}{x} \;=\; \dfrac{\sqrt{3}}{-1} \,=\, -\sqrt{3}\]
\[\nonumber \csc\;120^\circ \;=\; \dfrac{r}{y} \;=\; \dfrac{2}{\sqrt{3}} \qquad
\sec\;120^\circ \;=\; \dfrac{r}{x} \;=\; \dfrac{2}{-1} \;=\; -2 \qquad \cot\;120^\circ \;=\; \dfrac{x}{y} \;=\; \dfrac{-1}{\sqrt{3}}\]
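These coordinate definitions translate directly into a computation; a small sketch (the helper name is my own, and ratios that the text calls "undefined" are reported as `inf` here):

```python
import math

def trig_from_point(x, y):
    """Six trig values of the standard-position angle through (x, y) != (0, 0)."""
    r = math.hypot(x, y)
    return {
        "sin": y / r,
        "cos": x / r,
        "tan": y / x if x else math.inf,   # "undefined" reported as inf
        "csc": r / y if y else math.inf,
        "sec": r / x if x else math.inf,
        "cot": x / y if y else math.inf,
    }

vals = trig_from_point(-1, math.sqrt(3))       # the point used for 120 degrees above
print(vals["sin"], vals["cos"], vals["tan"])   # 0.866..., -0.5, -1.732...
```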
Example 1.21
Find the exact values of all six trigonometric functions of \(225^\circ \).
Solution:
We know that \(225^\circ = 180^\circ + 45^\circ \). By Example 1.6 in Section 1.2, we see that we can use the point \((-1,-1) \) on the terminal side of the angle \(225^\circ \) in QIII, since we saw in that example that a basic right triangle with a \(45^\circ \) angle has adjacent side of length \(1 \), opposite side of length \(1 \), and hypotenuse of length \(\sqrt{2} \), as in the figure on the right. Drawing that triangle in QIII so that the hypotenuse is on the terminal side of \(225^\circ \) makes \(r = \sqrt{2} \), \(x=-1 \), and \(y=-1 \). Hence:
\[\nonumber \sin\;225^\circ \;=\; \dfrac{y}{r} \;=\; \dfrac{-1}{\sqrt{2}} \qquad
\cos\;225^\circ \;=\; \dfrac{x}{r} \;=\; \dfrac{-1}{\sqrt{2}} \qquad \tan\;225^\circ \;=\; \dfrac{y}{x} \;=\; \dfrac{-1}{-1} \,=\, 1\]
\[\nonumber \csc\;225^\circ \;=\; \dfrac{r}{y} \;=\; -\sqrt{2} \qquad
\sec\;225^\circ \;=\; \dfrac{r}{x} \;=\; -\sqrt{2} \qquad \cot\;225^\circ \;=\; \dfrac{x}{y} \;=\; \dfrac{-1}{-1} \,=\, 1\]
Example 1.22
Find the exact values of all six trigonometric functions of \(330^\circ \).
Solution:
We know that \(330^\circ = 360^\circ - 30^\circ \). By Example 1.7 in Section 1.2, we see that we can use the point \((\sqrt{3},-1) \) on the terminal side of the angle \(330^\circ \) in QIV, since we saw in that example that a basic right triangle with a \(30^\circ \) angle has adjacent side of length \(\sqrt{3} \), opposite side of length \(1 \), and hypotenuse of length \(2 \), as in the figure on the right. Drawing that triangle in QIV so that the hypotenuse is on the terminal side of \(330^\circ \) makes \(r = 2 \), \(x=\sqrt{3} \), and \(y=-1 \). Hence:
\[\nonumber \sin\;330^\circ \;=\; \dfrac{y}{r} \;=\; \dfrac{-1}{2} \qquad
\cos\;330^\circ \;=\; \dfrac{x}{r} \;=\; \dfrac{\sqrt{3}}{2} \qquad \tan\;330^\circ \;=\; \dfrac{y}{x} \;=\; \dfrac{-1}{\sqrt{3}}\]
\[\nonumber \csc\;330^\circ \;=\; \dfrac{r}{y} \;=\; -2 \qquad
\sec\;330^\circ \;=\; \dfrac{r}{x} \;=\; \dfrac{2}{\sqrt{3}} \qquad \cot\;330^\circ \;=\; \dfrac{x}{y} \;=\; -\sqrt{3}\]

Example 1.23

Find the exact values of all six trigonometric functions of \(0^\circ \), \(90^\circ \), \(180^\circ \), and \(270^\circ \).

Solution:
These angles are different from the angles we have considered so far, in that the terminal sides lie along either the \(x\)-axis or the \(y\)-axis. So unlike the previous examples, we do not have any right triangles to draw. However, the values of the trigonometric functions are easy to calculate by picking the simplest points on their terminal sides and then using the definitions in formulas Equation \ref{1.2} and Equation \ref{1.3}.
For instance, for the angle \(0^\circ \) use the point \((1,0) \) on its terminal side (the positive \(x\)-axis), as in Figure 1.4.6. You could think of the line segment from the origin to the point \((1,0) \) as sort of a degenerate right triangle whose height is \(0 \) and whose hypotenuse and base have the same length \(1 \). Regardless, in the formulas we would use \(r = 1 \), \(x = 1 \), and \(y = 0 \). Hence:
\[\nonumber \sin\;0^\circ \;=\; \dfrac{y}{r} \;=\; \dfrac{0}{1} \;=\; 0 \qquad
\cos\;0^\circ \;=\; \dfrac{x}{r} \;=\; \dfrac{1}{1} \;=\; 1 \qquad \tan\;0^\circ \;=\; \dfrac{y}{x} \;=\; \dfrac{0}{1} \;=\; 0\]
\[\nonumber \csc\;0^\circ \;=\; \dfrac{r}{y} \;=\; \dfrac{1}{0} \;=\; \text{undefined}\qquad
\sec\;0^\circ \;=\; \dfrac{r}{x} \;=\; \dfrac{1}{1} \;=\; 1 \qquad \cot\;0^\circ \;=\; \dfrac{x}{y} \;=\; \dfrac{1}{0} \;=\; \text{undefined}\]
Note that \(\csc\;0^\circ \) and \(\cot\;0^\circ \) are undefined, since division by \(0 \) is not allowed.
Similarly, from Figure 1.4.6 we see that for \(90^\circ \) the terminal side is the positive \(y\)-axis, so use the point \((0,1) \). Again, you could think of the line segment from the origin to \((0,1) \) as a degenerate right triangle whose base has length \(0 \) and whose height equals the length of the hypotenuse. We have \(r = 1 \), \(x = 0 \), and \(y = 1 \), and hence:
\[\nonumber \sin\;90^\circ \;=\; \dfrac{y}{r} \;=\; \dfrac{1}{1} \;=\; 1 \qquad
\cos\;90^\circ \;=\; \dfrac{x}{r} \;=\; \dfrac{0}{1} \;=\; 0 \qquad \tan\;90^\circ \;=\; \dfrac{y}{x} \;=\; \dfrac{1}{0} \;=\; \text{undefined}\]
\[\nonumber \csc\;90^\circ \;=\; \dfrac{r}{y} \;=\; \dfrac{1}{1} \;=\; 1\qquad
\sec\;90^\circ \;=\; \dfrac{r}{x} \;=\; \dfrac{1}{0} \;=\; \text{undefined}\qquad \cot\;90^\circ \;=\; \dfrac{x}{y} \;=\; \dfrac{0}{1} \;=\; 0\]
Likewise, for \(180^\circ \) use the point \((-1,0) \) so that \(r = 1 \), \(x = -1 \), and \(y = 0 \). Hence:
\[\nonumber \sin\;180^\circ \;=\; \dfrac{y}{r} \;=\; \dfrac{0}{1} \;=\; 0 \qquad
\cos\;180^\circ \;=\; \dfrac{x}{r} \;=\; \dfrac{-1}{1} \;=\; -1 \qquad \tan\;180^\circ \;=\; \dfrac{y}{x} \;=\; \dfrac{0}{-1} \;=\; 0\]
\[\nonumber \csc\;180^\circ \;=\; \dfrac{r}{y} \;=\; \dfrac{1}{0} \;=\; \text{undefined}\quad\;\;\;
\sec\;180^\circ \;=\; \dfrac{r}{x} \;=\; \dfrac{1}{-1} \;=\; -1\quad\;\;\; \cot\;180^\circ \;=\; \dfrac{x}{y} \;=\; \dfrac{-1}{0} \;=\; \text{undefined}\]
Lastly, for \(270^\circ \) use the point \((0,-1) \) so that \(r = 1 \), \(x = 0 \), and \(y = -1 \). Hence:
\[\nonumber \sin\;270^\circ \;=\; \dfrac{y}{r} \;=\; \dfrac{-1}{1} \;=\; -1 \qquad
\cos\;270^\circ \;=\; \dfrac{x}{r} \;=\; \dfrac{0}{1} \;=\; 0 \qquad \tan\;270^\circ \;=\; \dfrac{y}{x} \;=\; \dfrac{-1}{0} \;=\; \text{undefined}\]
\[\nonumber \csc\;270^\circ \;=\; \dfrac{r}{y} \;=\; \dfrac{1}{-1} \;=\; -1\qquad
\sec\;270^\circ \;=\; \dfrac{r}{x} \;=\; \dfrac{1}{0} \;=\; \text{undefined}\qquad \cot\;270^\circ \;=\; \dfrac{x}{y} \;=\; \dfrac{0}{-1} \;=\; 0\]

The following table summarizes the values of the trigonometric functions of angles between \(0^\circ \) and \(360^\circ \) which are integer multiples of \(30^\circ \) or \(45^\circ\):

Table 1.3 Table of trigonometric function values
Since \(360^\circ \) represents one full revolution, the trigonometric function values repeat every \(360^\circ \). For example, \(\sin\;360^\circ = \sin\;0^\circ \), \(\cos\;390^\circ = \cos\;30^\circ \), \(\tan\;540^\circ = \tan\;180^\circ \), \(\sin\;(-45^\circ) = \sin\;315^\circ \), etc. In general, if two angles differ by an integer multiple of \(360^\circ\) then each trigonometric function will have equal values at both angles. Angles such as these, which have the same initial and terminal sides, are called
coterminal.
In Examples 1.20-1.22, we saw how the values of trigonometric functions of an angle \(\theta \) larger than \(90^\circ \) were found by using a certain acute angle as part of a right triangle. That acute angle has a special name: if \(\theta \) is a nonacute angle then we say that the
reference angle for \(\theta \) is the acute angle formed by the terminal side of \(\theta \) and either the positive or negative \(x\)-axis. So in Example 1.20, we see that \(60^\circ \) is the reference angle for the nonacute angle \(\theta = 120^\circ\); in Example 1.21, \(45^\circ \) is the reference angle for \(\theta = 225^\circ\); and in Example 1.22, \(30^\circ \) is the reference angle for \(\theta = 330^\circ \).
Example 1.24
Let \(\theta = 928^\circ \).
Figure 1.4.7

(a) Which angle between \(0^\circ \) and \(360^\circ \) has the same terminal side (and hence the same trigonometric function values) as \(\theta\,\)? (b) What is the reference angle for \(\theta\,\)?

Solution: (a) Since \(928^\circ = 2 \times 360^\circ + 208^\circ \), \(\theta \) has the same terminal side as \(208^\circ \), as in Figure 1.4.7. (b) \(928^\circ \) and \(208^\circ \) have the same terminal side in QIII, so the reference angle for \(\theta = 928^\circ \) is \(208^\circ - 180^\circ = 28^\circ \).
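This reduction is mechanical enough to code; a sketch (helper names are illustrative, and quadrantal angles are not special-cased):

```python
def coterminal(theta):
    """The angle in [0, 360) with the same terminal side as theta (in degrees)."""
    return theta % 360

def reference_angle(theta):
    """Acute angle between the terminal side of theta and the x-axis."""
    t = theta % 360
    if t <= 90:
        return t
    if t <= 180:
        return 180 - t
    if t <= 270:
        return t - 180
    return 360 - t

print(coterminal(928), reference_angle(928))   # 208 28
```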
Example 1.25

Suppose that \(\cos\;\theta = -\frac{4}{5} \). Find \(\sin\;\theta \) and \(\tan\;\theta \).

Solution:

When \(\theta \) is in QII, we see from Figure 1.4.8(a) that the point \((-4,3) \) is on the terminal side of \(\theta \), and so we have \(x = -4 \), \(y = 3 \), and \(r = 5 \). Thus, \(\sin\;\theta = \frac{y}{r} = \frac{3}{5} \) and \(\tan\;\theta = \frac{y}{x} = \frac{3}{-4} \).
When \(\theta \) is in QIII, we see from Figure 1.4.8(b) that the point \((-4,-3) \) is on the terminal side of \(\theta \), and so we have \(x = -4 \), \(y = -3 \), and \(r = 5 \). Thus, \(\sin\;\theta = \frac{y}{r} = \frac{-3}{5} \) and \(\tan\;\theta = \frac{y}{x} = \frac{-3}{-4} =
\frac{3}{4} \).
Thus, either \(\fbox{\(\sin\;\theta = \frac{3}{5} \) and \(\tan\;\theta = -\frac{3}{4}\)}\) or \(\fbox{\(\sin\;\theta = -\frac{3}{5} \) and \(\tan\;\theta = \frac{3}{4}\)}\).
Since reciprocals have the same sign, \(\csc\;\theta \) and \(\sin\;\theta \) have the same sign, \(\sec\;\theta \) and \(\cos\;\theta \) have the same sign, and \(\cot\;\theta \) and \(\tan\;\theta \) have the same sign. So it suffices to remember the signs of \(\sin\;\theta \), \(\cos\;\theta \), and \(\tan\;\theta\):
For an angle \(θ\) in standard position and a point \((x, y)\) on its terminal side:
- \(\sin\;\theta \) has the same sign as \(y\)
- \(\cos\;\theta \) has the same sign as \(x\)
- \(\tan\;\theta \) is positive when \(x \) and \(y \) have the same sign
- \(\tan\;\theta \) is negative when \(x \) and \(y \) have opposite signs |
I am a bit confused as to whether both of these things are the same, since people seem to refer to both as being given by the "Liouville action",
\(\frac{1}{4\pi}\int d^2z \left( \vert \partial \phi \vert ^2 + \mu e^{\phi } \right) + \text{boundary terms} \)
- Is the above action (which I would have thought is for the Liouville CFT) the same as $S_{ZT}$ which is said to satisfy the identity $S_{E}(M_3) = \frac{c}{3}S_{ZT}(M_2,\Gamma_{i=1,...,g})$?
In the above equality $S_E$ is the Einstein gravity action evaluated on-shell on a 3-manifold which is formed by "filling in" the interiors of a choice $\Gamma_{i=1,\ldots,g}$ of $g$ non-contractible cycles on a genus-$g$ Riemann surface $M_2$. (...in the context of $AdS/CFT$ one would want to interpret $M_2$ as being the conformal boundary of the space-time $M_3$...)
If $S_{ZT}$ is indeed the same as the Liouville CFT action then how does one understand the need to choose these non-contractible cycles to define that?
- In the same strain I would want to know what exactly is the meaning of the conjecture made in equation 5.1 in this paper, http://arxiv.org/abs/1303.6955 |
I have a problem evaluating the integral below: $$ \int_0^a J_0\left(b\sqrt{a^2-x^2}\right)\cosh(cx)\, dx$$ where $J_0$ is the zeroth-order Bessel function of the first kind.
I found the integral expression below on Gradshteyn and Ryzhik's book 7th edition, section 6.677, number 6: $$ \int_0^a J_0\left(b\sqrt{a^2-x^2}\right)\cos(cx) dx = \frac{\sin\left(a\sqrt{b^2+c^2}\right)}{\sqrt{b^2+c^2}}.$$
My naive intuition says that I can solve the integral above by changing $c\to ic$, so I would get
$$ \int_0^a J_0\left(b\sqrt{a^2-x^2}\right)\cosh(cx) dx = \begin{cases} \frac{\sin\left(a\sqrt{b^2-c^2}\right)}{\sqrt{b^2-c^2}}, & \text{if } b > c\\ \frac{\sinh\left(a\sqrt{c^2-b^2}\right)}{\sqrt{c^2-b^2}}, & \text{if } b < c\\ a, & \text{otherwise.} \end{cases} $$
I am not quite sure about this because it involves substituting a complex number. If anyone could confirm whether this is correct or incorrect, it would be much appreciated! Confirmation with a proof would be even better.
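The conjecture can at least be checked numerically before seeking a proof; a sketch using only NumPy (the series-based `J0` helper is my own and is adequate for the moderate arguments used here). The substitution itself is legitimate because both sides are entire functions of \(c\), so agreement for real \(c\) extends to imaginary \(c\):

```python
import numpy as np

def J0(z, terms=30):
    """Bessel J0 from its power series in z^2 (adequate for moderate |z|)."""
    z = np.asarray(z, dtype=float)
    total = np.zeros_like(z)
    term = np.ones_like(z)
    for k in range(terms):
        total += term
        term = term * (-(z * z) / (4 * (k + 1) ** 2))
    return total

def lhs(a, b, c, n=20_001):
    """Trapezoidal value of the integral on [0, a]; the integrand is smooth in x."""
    x = np.linspace(0, a, n)
    y = J0(b * np.sqrt(np.clip(a * a - x * x, 0.0, None))) * np.cosh(c * x)
    h = x[1] - x[0]
    return float(h * (y.sum() - 0.5 * (y[0] + y[-1])))

def rhs(a, b, c):
    """The conjectured closed form from the substitution c -> i*c."""
    s = b * b - c * c
    if s > 0:
        return float(np.sin(a * np.sqrt(s)) / np.sqrt(s))
    if s < 0:
        return float(np.sinh(a * np.sqrt(-s)) / np.sqrt(-s))
    return float(a)

print(lhs(1, 3, 1) - rhs(1, 3, 1), lhs(1, 1, 2) - rhs(1, 1, 2))   # both tiny
```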
Many thanks! |
Let us show that $a_n=\left(1+\frac{1}{n}\right)^n$ for $n\geq 1$ gives an increasing sequence bounded by $3$.
Increasing. The product of $k$ positive numbers can always be written as the $k$-th power of their geometric mean. In particular$$ 1\cdot\left(1+\frac{1}{n}\right)^n = \text{GM}\big(1,\underbrace{1+\tfrac{1}{n}}_{n\text{ times}}\big)^{n+1}\color{red}{<}\text{AM}\big(1,\underbrace{1+\tfrac{1}{n}}_{n\text{ times}}\big)^{n+1}=\left[\frac{1+n\left(1+\tfrac{1}{n}\right)}{n+1}\right]^{n+1} $$by the AM-GM inequality. If we expand the RHS, we exactly get $a_n<a_{n+1}$.
Bounded by $3$. By the binomial theorem and the fact that $\binom{n}{k}\leq\frac{n^k}{k!}$ we have:$$ \left(1+\frac{1}{n}\right)^n = 1+\sum_{k=1}^{n}\binom{n}{k}\frac{1}{n^k}\leq 1+\sum_{k=1}^{n}\frac{1}{k!} $$for any $n\geq 1$, hence $a_n$ is bounded by $1+\sum_{k\geq 1}\frac{1}{k!}$. On the other hand, for any $k\geq 3$ we have $k!\geq 2\cdot 3^{k-2}$, hence$$ a_n \leq 1+1+\frac{1}{2}+\sum_{k\geq 3}\frac{1}{2\cdot 3^{k-2}}=\frac{11}{4}\color{red}{<3}. $$
Since any increasing and bounded sequence is convergent to its supremum, this shows that$$ \lim_{n\to +\infty}\left(1+\frac{1}{n}\right)^n $$is a mathematical constant less than three, and with few efforts you may also prove that such mathematical constant is exactly$$ \sum_{k\geq 0}\frac{1}{k!} $$a better-suited representation for numerical purposes, since such series is rapidly convergent. $\left(\sum_{k\geq 0}\frac{1}{2^k k!}\right)^2$ or $\left(\sum_{k\geq 0}\frac{(-1)^k}{k!}\right)^{-1}$ are even better.
Relations with the factorial. By defining $b_n$ as $\frac{n^n}{n!}$ we have $b_1=1$ and$$ \frac{b_{n+1}}{b_n} = \frac{(n+1)^{n+1}}{(n+1)!}\cdot\frac{n!}{n^n}=\left(1+\frac{1}{n}\right)^n, $$hence$$ b_{n+1}=b_1\prod_{k=1}^{n}\frac{b_{k+1}}{b_k}=\prod_{k=1}^{n}\left(1+\frac{1}{k}\right)^k \in [2^n,3^n),$$since each factor lies in $[2,3)$. This gives a weak version of Stirling's inequality,
$$ \frac{(n+1)^{n}}{3^n}<n!\leq\frac{(n+1)^n}{2^n}.$$ |
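The claims above are easy to check by machine with exact rational arithmetic; this is just a verification sketch of the statements already proved, not part of the proof:

```python
import math
from fractions import Fraction

def a(n):
    # a_n = (1 + 1/n)^n computed exactly as a rational number
    return Fraction(n + 1, n) ** n

prev = a(1)
for n in range(2, 40):
    cur = a(n)
    assert cur > prev                 # the sequence is increasing
    assert cur < Fraction(11, 4)      # bounded by 11/4 < 3
    prev = cur

# weak Stirling inequality: (n+1)^n / 3^n < n! <= (n+1)^n / 2^n,
# checked with integer arithmetic to avoid rounding
for n in range(1, 15):
    f = math.factorial(n)
    assert (n + 1) ** n < f * 3 ** n
    assert f * 2 ** n <= (n + 1) ** n
```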
Is the particle in a box under harmonic driving electric field solvable analytically?
Here is the Schrodinger equation:
$$ i\frac{\partial \psi(x,t)}{\partial t}=\left[-\frac{1}{2} \frac{\partial^2}{\partial x^2}+V(x)+F(t)\,x\right]\psi(x,t) $$
where the potential $V(x)$ is
$$ V(x)= \begin{cases} 0 & 0\leq x\leq L \\ \infty & \text{otherwise} \end{cases} $$
The driving force $F(t)$ is
$$ F(t)=F_0\cos(\omega_0 t) $$
and $F_0$ and $\omega_0$ are constants. |
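I don't know an analytic solution, but for reference here is a minimal numerical sketch of what I mean (the basis truncation $N$, box size, drive parameters, and units $\hbar=m=1$ are all my own assumptions): expand $\psi$ in box eigenstates and integrate the resulting coupled amplitude ODEs with RK4.

```python
import math

L, N = 1.0, 6            # box size and number of eigenstates kept (truncation)
F0, w0 = 1.0, 5.0        # drive amplitude and frequency (arbitrary choices)

E = [0.5 * (n * math.pi / L) ** 2 for n in range(1, N + 1)]  # box energies

def xmat(m, n, pts=2000):
    # <m|x|n> = (2/L) * integral of x sin(m pi x/L) sin(n pi x/L), midpoint rule
    h = L / pts
    s = sum((i + 0.5) * h
            * math.sin(m * math.pi * (i + 0.5) * h / L)
            * math.sin(n * math.pi * (i + 0.5) * h / L) for i in range(pts))
    return 2.0 / L * s * h

X = [[xmat(m, n) for n in range(1, N + 1)] for m in range(1, N + 1)]

def deriv(t, c):
    # i dc_n/dt = E_n c_n + F(t) * sum_m <n|x|m> c_m
    F = F0 * math.cos(w0 * t)
    return [-1j * (E[n] * c[n] + F * sum(X[n][m] * c[m] for m in range(N)))
            for n in range(N)]

def rk4_step(t, c, dt):
    k1 = deriv(t, c)
    k2 = deriv(t + dt / 2, [c[i] + dt / 2 * k1[i] for i in range(N)])
    k3 = deriv(t + dt / 2, [c[i] + dt / 2 * k2[i] for i in range(N)])
    k4 = deriv(t + dt, [c[i] + dt * k3[i] for i in range(N)])
    return [c[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(N)]

c = [1.0 + 0j] + [0j] * (N - 1)   # start in the ground state
t, dt = 0.0, 5e-4
for _ in range(200):
    c = rk4_step(t, c, dt)
    t += dt
norm = sum(abs(ci) ** 2 for ci in c)  # should stay ~1 (unitarity check)
```

This is only a consistency check on the truncated dynamics (the norm is conserved to high accuracy), not an answer to whether a closed-form solution exists.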
Say I have 4 random variables. $X^{(1)}$ and $X^{(2)}$ are jointly multivariate normal with mean 0 and covariance $\Sigma_X$, and $Y^{(1)}$ and $Y^{(2)}$ are jointly multivariate normal with mean 0 and covariance $\Sigma_Y$. There are no dependencies between these two pairs. Now I want to know the following expected value: $$E\left [\frac{\mathrm{cov}(X_{1:N}^{(1)},X_{1:N}^{(2)})}{\sqrt{\mathrm{var}(X_{1:N}^{(1)}+Y_{1:N}^{(1)})}\sqrt{\mathrm{var}(X_{1:N}^{(2)}+Y_{1:N}^{(2)})}}\right ] $$
If it helps, I think I'm right in saying that this fraction is essentially a semi-partial correlation. That is, if you define $Z^{(1)}=X^{(1)}+Y^{(1)}$ and $Z^{(2)}=X^{(2)}+Y^{(2)}$, it is the semi-partial correlation between $Z^{(1)}$ and $Z^{(2)}$, controlling for $Y^{(1)}$ and $Y^{(2)}$. I don't particularly care about that interpretation though, I'm really just interested in finding the solution to this expected value.
So far I've managed to realize that the numerator and denominator aren't independent, and thus the problem can't be easily partitioned into independent expected values. It seems, then, that it would have to be worked out in terms of the joint distribution of the numerator and denominator, which is rather daunting. So I guess my hope is someone here is equal to the task, has some pointers, or maybe even knows about an established solution?
From doing some simulations, it seems that the distribution of this thing looks just like that of a correlation coefficient, so I have a suspicion that the pdf will have the same form, with different parameters (since the denominator here is different from what it would be in a normal correlation). But then the question is what those parameters would be. |
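For what it's worth, the simulations I mentioned look roughly like the following sketch (pure Python; the correlations $0.8$ and $0.3$ and the sample size are arbitrary example values, and $\Sigma_X$, $\Sigma_Y$ are taken to be $2\times 2$ with unit variances):

```python
import math
import random

random.seed(0)

def draw_pair(rho):
    # one draw of a zero-mean bivariate normal with unit variances, correlation rho
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho * rho) * z2

def cov(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (n - 1)

def sample_stat(N, rho_x, rho_y):
    # one realization of cov(X1,X2) / sqrt(var(X1+Y1) var(X2+Y2)) over N points
    X1, X2, Z1, Z2 = [], [], [], []
    for _ in range(N):
        x1, x2 = draw_pair(rho_x)
        y1, y2 = draw_pair(rho_y)
        X1.append(x1); X2.append(x2)
        Z1.append(x1 + y1); Z2.append(x2 + y2)
    return cov(X1, X2) / math.sqrt(cov(Z1, Z1) * cov(Z2, Z2))

# Monte Carlo estimate of the expectation; with unit variances the large-N
# value should be near rho_x / 2 (numerator -> rho_x, each variance -> 2)
est = sum(sample_stat(200, 0.8, 0.3) for _ in range(500)) / 500
```

The empirical distribution of `sample_stat` is what looks like a (rescaled) correlation coefficient to me.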
I needed to compute $\sin 18^{\circ}$.
Now, these two relations hold for every $x$:
$\cos 5x=16\cos^5x-20\cos^3x+5\cos x$ $\sin5x=16\sin^5x-20\sin^3x+5\sin x$, which can be easily proved using the multiple angle formulae.
Now, one thing to observe is : $\sin5(18^{\circ})=1$ and $\cos5(18^{\circ})=0$
So, $$\begin{align}16\cos^518^{\circ}-20\cos^318^{\circ}+5\cos18^{\circ}&=0 \\\implies16\cos^418^{\circ}-20\cos^218^{\circ}+5&=0 \tag{1} \end{align}$$ And, $$\begin{align}16\sin^518^{\circ}-20\sin^318^{\circ}+5\sin18^{\circ}&=1\\ \implies16\sin^418^{\circ}-20\sin^218^{\circ}+5&=\dfrac{1}{\sin18^{\circ}} \tag{2} \end{align}$$
So, before moving forward with my computation, I would like to ask: Is there any problem in these two equations?
Now, equation (1) consists of squares of cosines, which clearly means that (1) can be represented consisting of sines like this:
$$\begin{align}16\cos^418^{\circ}-20\cos^218^{\circ}+5 &=0\\ \implies16(1-\sin^218^{\circ})^2 -20(1-\sin^218^{\circ})+5&=0 \\ \implies 16\sin^418^{\circ}-12\sin^218^{\circ}+1&=0 \end{align}$$
Now, using the above equation, we can easily deduce that $\sin^218^{\circ}=\dfrac{12\pm4\sqrt5}{32}=\dfrac{3\pm\sqrt5}{8}$.
Now, if I put this value in equation (2), I get something like this:
$\dfrac{1}{\sin18^{\circ}}=16\dfrac{(3\pm\sqrt5)^2}{64}-20\dfrac{(3\pm\sqrt5)}{8}+5$, which reduces to $1\mp\sqrt5$, which upon solving does not give me the correct value for $\sin18^{\circ}$. So, what mistake am I doing here?
Edit: By solving, I mean that the actual value is $\sin18^{\circ}=\dfrac{\sqrt5-1}{4}$, but in my computation, the negative $1$ and the denominator $4$ are missing.
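Here is a quick numerical check I added, which at least pins down which sign of the root $\sin^2 18^{\circ}$ actually satisfies (it is the minus root, and with that root equation (2) gives $1/\sin 18^{\circ} = 1+\sqrt5$, consistent with the known value):

```python
import math

s = math.sin(math.radians(18))         # the value being computed

# the two roots of 16 t^2 - 12 t + 1 = 0, with t = sin^2(18 deg)
t_minus = (3 - math.sqrt(5)) / 8
t_plus = (3 + math.sqrt(5)) / 8

print(abs(s * s - t_minus))            # essentially 0: sin^2(18 deg) is the MINUS root
print(abs(s * s - t_plus))             # clearly nonzero: not the plus root
print(1 / s, 1 + math.sqrt(5))         # these two agree
```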
Finite temperature is introduced in AdS space by inserting a black hole. In the AdS/CFT correspondence, the Wilson loop is at $u \rightarrow \infty$. But the black hole horizon itself would be at $u \rightarrow u_0$, i.e. a finite $u$, implying that the black hole mass is finite.
For reference, let me specify the metric in 10D space as $\frac{u^2}{a^2} (-H dt^2 + dx_{||}^2) + \frac{a^2}{u^2}(\frac{du^2}{H} + d\Omega^2)$, with H = $1 - u^4/a^4$. This shows that the black hole horizon is at u = a. And the dimension u is interpreted as energy dimension.
Does this mean that the gauge particles described by the Wilson loop are heavier than the black hole, since the Wilson loop is at $u = \infty$?
Is it mathematical jugglery to introduce finite temperature without worrying about the physical possibility?
This post imported from StackExchange Physics at 2018-06-19 08:51 (UTC), posted by SE-user Angela |
How can I, as a trusted user of a middleman company (such as PhishTank), verify whether a phishing site is valid if the scam listens only on a unique referrer link (randomly generated) and blocks any other access method?
To set the scene with a threat scenario:
An attacker sent an email to a local bank officer; the email looks very similar to an official email from a higher-tier employee in their company, and the timing was planned. Later they detect it was a spear-phishing attack by a former employee. They report the attack on PhishTank (for example), but there it can't be verified because the link doesn't allow direct access (only with a unique referrer, as in the email). How can they still verify whether it was a valid report or not?
Now the real question,
On a technical view, how does such an attack work?
I am on an Ubuntu system that has the multiverse repository enabled, and I’d like to see whether I can disable it.
How can I check whether any package from multiverse is installed on the system?
I am trying to solve a Maze puzzle using the A* algorithm. I am trying to analyze the algorithm based on different applicable heuristics.
Currently, I explored Manhattan and Euclidean distances. Which other heuristics are available? How do we compare them? How do we know whether a heuristic is better than another?
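To make the comparison concrete, here is a small sketch (my own toy setup, not from any particular textbook) that runs A* on a 4-connected grid with each heuristic. On such grids both heuristics are admissible, and Manhattan distance dominates Euclidean pointwise ($|dx|+|dy| \ge \sqrt{dx^2+dy^2}$), which is one standard way of saying one admissible heuristic is "better" than another:

```python
import heapq
import math

def astar(grid, start, goal, h):
    """A* on a 4-connected grid of '.' (free) and '#' (wall); unit step cost.
    Returns (optimal cost, number of nodes expanded)."""
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}
    frontier = [(h(start, goal), 0, start)]
    expanded = 0
    while frontier:
        f, gc, node = heapq.heappop(frontier)
        if gc > g.get(node, math.inf):
            continue  # stale heap entry
        expanded += 1
        if node == goal:
            return gc, expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = gc + 1
                if ng < g.get((nr, nc), math.inf):
                    g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc), goal), ng, (nr, nc)))
    return math.inf, expanded

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

Both heuristics return the same optimal cost; the dominating (larger, still admissible) heuristic typically expands fewer nodes, which is one measurable way to compare them.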
I have a language $ L= \{a^nb^nc^m : n, m \ge 0\}$ .
Now, I wanted to determine whether this language is linear or not.
So, I came up with this grammar:
$ S \rightarrow A\thinspace|\thinspace Sc$
$ A \rightarrow aAb \thinspace | \thinspace \lambda$
I'm pretty sure (though not completely) that this grammar is linear and consequently the language is too.
Now, when I use the pumping lemma for linear languages with $ w$ , $ v$ and $ y$ chosen as follows, I find that this language is not linear.
$ w = a^nb^nc^n, \space v = a^k, \space y=c^k$
$ w_0 = a^{n-k}b^nc^{n-k}$
now, $ w_0 \notin L \space (\because n_a \neq n_b)$
So, I’m unable to find whether the language is linear or not and what goes wrong in above logic with either case. Please help.
Title says it all, but to clarify:
Define a problem, called $ IsInNP$ , as follows:
Given a Turing Machine $ M$ , $ IsInNP$ is the problem of deciding if the problem that $ M$ decides is in $ NP$ .
What is the complexity class of $ IsInNP$ ? Is it even decidable? Is the answer the same for any other complexity class, like $ NP$ -hard? And are those questions even sensible to ask?
By the way, I am aware that the class $ NP$ is not enumerable, but since I do not quite understand enumerability, and it seems that recursively enumerable problems can still be decidable, I do not know whether that means deciding if a problem is in $ NP$ , or any other complexity class, is decidable.
Also, I am aware of Rice’s Theorem, and I believe it can be interpreted as saying that deciding whether a problem is in $ NP$ is undecidable, but I am not certain.
Bonus question if the above questions are sensible: given a property $ S$ that only $ NP$ problems possess, does the above also mean that deciding whether a problem decided by a Turing Machine $ M_2$ has property $ S$ is in the same complexity class as $ IsInNP$ ?
Let $ L$ be a language defined over $ \Sigma = \left \{ a, b \right \}$ such that $ L = \left \{ x\#y \mid x,y \in \Sigma^*, \#\text{ is a fixed separator symbol, and } x \neq y \right \}$ . State whether the language $L$ is a CFL or not, and give valid reasons for the same.
Now, I think that the given language is not a CFL. I have used the pumping lemma test for showing that L is not a CFL. Specifically, I have done the following:
Consider a string $ w = abb\#aab$ . Obviously, $ w \in L$ .
Let $ u = \epsilon$ , $ v = a$ , $ w = bb\#aa$ , $ x = b$ , $ y = \epsilon$ .
Here, $ |vx| \geq 1$
But, $ uv^2wx^2y = aabb\#aabb \notin L$ Therefore, pumping lemma test result is negative. Therefore, we can conclude that the given language is not a CFL.
Now, I have a doubt regarding the above method- I know that given a CFL, if we want to perform the pumping lemma test for the CFL, we must always use strings which are of length greater than or equal to the minimum pumping length. In fact, this also conforms to the condition that the length of the string $ w$ used for the pumping lemma test (denoted by $ |w|$ ) must be greater than or equal to $n$ .
Therefore, when I use $ w = abb\#aab$ for doing the pumping lemma test, I implicitly make the assumption that 7 is greater than or equal to the minimum pumping length (if $ L$ were to be a CFL).
Am I correct or incorrect in doing so?
I learned a few years ago that this is impossible unless one simply 'executes' (in the modern computing sense) the input with the language rules, but I have some problems in just using this statement.
My fundamental doubt is whether the statement itself is well-stated. If I use the term 'execution' to describe the act of matching the rules against the input one element at a time, is this statement valid?
Is this statement (that deciding whether an input sequence follows a language is impossible without an execution) not limited exactly to RE? In other words, I wonder whether this statement also holds for languages in other classes.
I’m not even sure how I can search for this statement and confirm from the external source.
(By RE, here I indicate the recursively enumerable languages, not the regular expression)
You’re hosting a 1 v 1 basketball league with a game schedule. At the end of the league you have each player report their supposed win-loss record (there are no ties), but you want to check whether the proposed standings were actually possible given the schedule.
For example: you have four players (Alice+Bob+Carol+Dave) and your schedule is a simple round robin. The reported standings [A: 3-0, B: 1-2, C: 1-2, D: 1-2] and [A: 2-1, B: 1-2, C: 1-2, D: 2-1] would be possible, but the standing [A: 3-0, B: 0-3, C: 0-3, D: 3-0] would not be.
Now suppose the schedule is instead a 3-game head-to-head between Alice+Bob and Carol+Dave. The reported standing [A: 3-0, B: 0-3, C: 0-3, D: 3-0] is now possible, but [A: 3-0, B: 1-2, C: 1-2, D: 1-2] would no longer be.
(The schedule does not need to be symmetric in any way. You could have Alice only play against Bob 10 times, then make Bob+Carol+Dave play 58 round robins against each other.)
Problem: Given a schedule with k participants and n total games, efficiently check whether a proposed win-loss standings could actually occur from that schedule.
The O($ 2^n$ ) brute force method is obvious: enumerate all possible game outcomes and see if any match the proposed standings. And if k is small, increasing n doesn't add much complexity – it's very easy to check a two-person league's standings regardless of whether they play ten games or ten billion games. Beyond that I haven't made much headway in finding a better method, and was curious if anyone had seen a similar problem before.
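For concreteness, here is the brute force method described above, with the earlier example schedules encoded as player indices (0..3 = Alice, Bob, Carol, Dave):

```python
from itertools import product

def standings_possible(schedule, wins):
    """Brute force: try all 2^n outcome assignments.

    schedule: list of games (i, j) between player indices i and j
    wins:     proposed number of wins per player (losses follow from the schedule)
    """
    for outcome in product((0, 1), repeat=len(schedule)):
        tally = [0] * len(wins)
        for (i, j), first_wins in zip(schedule, outcome):
            tally[i if first_wins else j] += 1
        if tally == list(wins):
            return True
    return False

round_robin = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
head_to_head = [(0, 1)] * 3 + [(2, 3)] * 3
```

Running this reproduces the examples: the cyclic standing is possible in the round robin but not in the head-to-head schedule, and vice versa for the sweep standing.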
I've separately asked about the mechanics of whether an invisible creature can see themselves and their own gear in D&D 5E, and from a rules perspective, they can't. But from a flavor perspective, I want to know if there's any indication in the various D&D published materials (maybe in novels, movies, articles, or setting sourcebooks for prior editions) of whether or not people can see themselves (and their things) when invisible. Does somewhere explain just what a character experiences when they look at themselves while invisible: for example, whether they're surprised by seeing the ground through their own feet, or whether there's some ghostly effect where they can tell they're invisible to others but can still see themselves to some extent?
How do other IAs/UXDs treat external links & what are the perceived pros/cons of different options? Specifically looking for recommendations on:
- whether & how to visually distinguish external links from internal links
- whether to open them in the same window/tab or a different window or tab
I’ve found some great feedback re these questions on http://www.ixda.org/search.php?tag=external+links & http://www.useit.com/alertbox/open_new_windows.html
Looking for additional opinions, thoughts & especially any usability test findings contributors of this site might have re these questions. |
Tangent Lines at Points
Consider a curve represented by the function $f$, and suppose that we want to find the slope of a line tangent to the point $P(a, f(a))$. To calculate this slope, suppose we take the point $Q(x, f(x))$. We can easily create a secant line from $P$ to $Q$ from which we use basic algebra to calculate that $m = \frac{f(x) - f(a)}{x - a}$.
Notice that this is the slope of the secant line between $P$ and $Q$, but we want to find the slope of the tangent line at $P$. Notice that as $Q$ gets closer and closer to the point $P$, the secant line between $P$ and $Q$ gets closer to the tangent line at $P$.
Thus, we define the slope of the tangent line at $P$ to be:

(1) $$m = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$
There is also another common form for the slope of a tangent line that is sometimes more useful. If we let $h = x - a$, it follows that $x = a + h$. Furthermore, as $x \to a$, $x - a = h \to 0$, and thus, we can rewrite the above formula as:

(2) $$m = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}$$
Definition of the Derivative
With what we just learned about tangent lines at points, we can now finally define a derivative:
Definition: If $f$ is a function, then the Derivative of $f$ at a value $a$ denoted $f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h} = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$ is the slope of the tangent line to $f$ at the point $(a, f(a))$ provided that this limit exists. If this limit exists and is finite, then $f$ is said to be Differentiable at $a$. If this limit does not exist, then we say $f$ is Not Differentiable at $a$.
We now have a definition of what a derivative is and how they can help us calculate the slope of a tangent line at a given point. Do note that derivatives can also be functions if we replace the value $a$ with the variable $x$, that is, $f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$. This will be important later.
Now, to figure out the equation of a tangent line, recall the point-slope form of a line:

(3) $$y - y_1 = m(x - x_1)$$
We can modify this formula by letting $x_1 = a$, $y_1 = f(a)$ and $m = f'(a)$; therefore, the equation of the tangent line that passes through $(a, f(a))$ and has the slope $f'(a)$ is given by the following equation:

(4) $$y - f(a) = f'(a)(x - a)$$
We will now look at an example applying the definition of the derivative.
Example 1 Calculate the slope of the tangent line at the point $(2, 4)$ of the function $f(x) = x^2$.
Applying the definition of the derivative, we get that:

(5) $$f'(2) = \lim_{x \to 2} \frac{x^2 - 4}{x - 2} = \lim_{x \to 2} (x + 2) = 4$$
Therefore, the slope of the tangent line at the point $(2, 4)$ is $f'(2) = 4$.
Example 2 Calculate the slope of the tangent line at the point $(a, f(a))$ of the function $f(x) = x^3$.
Once again, we will apply the definition of the derivative to get that:

(6) $$f'(a) = \lim_{h \to 0} \frac{(a + h)^3 - a^3}{h} = \lim_{h \to 0} \left(3a^2 + 3ah + h^2\right) = 3a^2$$
Therefore, the slope of the tangent line at the point $(a, f(a))$ is $f'(a) = 3a^2$
Example 3 Calculate the equation of the tangent line at the point $(2, 8)$ of the function $f(x) = x^3$.
From example 2, we note that the slope of the tangent line at point $(a, f(a))$ is $f'(a) = 3a^2$. Applying this to what we're given in example 3, we see that $(a, f(a)) = (2, 8)$ and therefore, the slope of the tangent line at point $(2, 8)$ is $f'(2) = 3(2)^2 = 12$. We note that $a = 2$ and $f(a) = 8$, so applying this to the formula for the equation of a tangent line, we get:

(7) $$y - 8 = 12(x - 2) \quad \Longrightarrow \quad y = 12x - 16$$
Thus, the equation of the tangent line at point $(2, 8)$ for the function $f(x) = x^3$ is $y = 12x - 16$. |
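The slope found in Example 3 can also be checked numerically: the difference quotient $\frac{f(2+h)-f(2)}{h}$ should approach $12$ as $h \to 0$. A small sketch:

```python
def f(x):
    return x ** 3

a = 2.0
for h in (1e-2, 1e-4, 1e-6):
    q = (f(a + h) - f(a)) / h      # difference quotient; here it equals 12 + 6h + h^2
    assert abs(q - 12.0) < 10 * h + 1e-8   # the error shrinks like O(h)

# the tangent line y = 12x - 16 touches the curve at (2, 8)
assert 12 * a - 16 == f(a)
```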
Table of Contents
The Method of Undetermined Coefficients Examples 1
Recall from The Method of Undetermined Coefficients page that we may have a second order linear nonhomogeneous differential equation with constant coefficients of the form $a \frac{d^2y}{dt^2} + b \frac{dy}{dt} + cy = g(t)$ where $a, b, c \in \mathbb{R}$; the method applies when $g(t)$ is of a form containing polynomials, sines, cosines, or exponential functions.
To solve these types of differential equations, we first need to solve the corresponding linear homogeneous differential equation $a \frac{d^2y}{dt^2} + b \frac{dy}{dt} + cy = 0$ for the homogeneous solution $y_h(t)$. We then need to find a particular solution $Y(t)$ which will be of a particular form dependent on the combination of functions forming $g(t)$ (see the page linked above).
We can then solve for the coefficients and obtain a general solution $y = y_h(t) + Y(t)$.
We will now look at some examples of applying this method.
Example 1 Solve the following second order linear nonhomogeneous differential equation $\frac{d^2y}{dt^2} + \frac{dy}{dt} - 6y = 12e^{3t} + 12e^{-2t}$ using the method of undetermined coefficients.
The corresponding second order homogeneous differential equation is $\frac{d^2y}{dt^2} + \frac{dy}{dt} - 6y = 0$ and the characteristic equation is $r^2 + r - 6 = (r + 3)(r - 2) = 0$. The roots to the characteristic equation are $r_1 = -3$ and $r_2 = 2$ and so the solution to the homogeneous second order differential equation is:

(1) $$y_h(t) = C_1 e^{-3t} + C_2 e^{2t}$$
We now want to find a particular solution $Y(t)$. Assume that $Y(t) = Ae^{3t} + Be^{-2t}$. No part of the assumed form of $Y(t)$ is contained in the solution to the corresponding second order homogeneous differential equation from above, so we do not need to multiply by $t$. The first and second derivatives of $Y$ are given below.

(2) $$Y'(t) = 3Ae^{3t} - 2Be^{-2t}$$

(3) $$Y''(t) = 9Ae^{3t} + 4Be^{-2t}$$
Substituting the values of $Y(t)$, $Y'(t)$, and $Y''(t)$ into our differential equation gives us:

(4) $$(9A + 3A - 6A)e^{3t} + (4B - 2B - 6B)e^{-2t} = 6Ae^{3t} - 4Be^{-2t} = 12e^{3t} + 12e^{-2t}$$
The equation above implies that $A = 2$ and $B = -3$. Therefore a particular solution to the second order nonhomogeneous differential equation is $Y(t) = 2e^{3t} -3e^{-2t}$. Thus, the general solution is given by:

(5) $$y = C_1 e^{-3t} + C_2 e^{2t} + 2e^{3t} - 3e^{-2t}$$
Example 2 Solve the following second order linear nonhomogeneous differential equation $\frac{d^2y}{dt^2} + 9y = t^2 e^{3t} + 6$ using the method of undetermined coefficients.
The corresponding second order homogeneous differential equation is $\frac{d^2y}{dt^2} + 9y = 0$, and the corresponding characteristic equation is $r^2 + 9 = 0$. Therefore $r^2 = -9$ and $r = 0 \pm 3i$, so the roots of the characteristic equation are $r_1 = 3i$ and $r_2 = -3i$. The solution to the corresponding second order homogeneous differential equation is:

(6) $$y_h(t) = C_1 \cos(3t) + C_2 \sin(3t)$$
We now need to find a particular solution for the second order nonhomogeneous differential equation. Assume the form $Y(t) = (P + Qt + Rt^2)e^{3t} + S$. The first and second derivatives of $Y$ are:

(7) $$Y'(t) = \left(3P + Q + (3Q + 2R)t + 3Rt^2\right)e^{3t}$$

(8) $$Y''(t) = \left(9P + 6Q + 2R + (9Q + 12R)t + 9Rt^2\right)e^{3t}$$
Substituting the values of $Y(t)$, $Y'(t)$, and $Y''(t)$ into the second order nonhomogeneous differential equation, we have that:

(9) $$\left(18P + 6Q + 2R + (18Q + 12R)t + 18Rt^2\right)e^{3t} + 9S = t^2 e^{3t} + 6$$
The equation above implies that:

(10) $$18R = 1, \quad 18Q + 12R = 0, \quad 18P + 6Q + 2R = 0, \quad 9S = 6$$
Therefore $S = \frac{2}{3}$, $R = \frac{1}{18}$, $Q = -\frac{1}{27}$ and $P = \frac{1}{162}$. Therefore, a particular solution to the second order nonhomogeneous differential equation is $Y(t) = \left ( \frac{1}{162} - \frac{1}{27} t + \frac{1}{18} t^2 \right ) e^{3t} + \frac{2}{3}$ and so the general solution to the second order nonhomogeneous differential equation given is:

(11) $$y = C_1 \cos(3t) + C_2 \sin(3t) + \left( \frac{1}{162} - \frac{1}{27} t + \frac{1}{18} t^2 \right) e^{3t} + \frac{2}{3}$$
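The particular solution from Example 2 can be checked numerically by forming the residual $Y'' + 9Y - (t^2 e^{3t} + 6)$ with a finite-difference second derivative; it should vanish at every $t$:

```python
import math

def Y(t):
    # the particular solution found in Example 2
    return (1/162 - t/27 + t**2/18) * math.exp(3*t) + 2/3

def second_derivative(fn, t, h=1e-4):
    # central-difference approximation of fn''(t)
    return (fn(t + h) - 2 * fn(t) + fn(t - h)) / (h * h)

for t in (0.0, 0.4, 1.1):
    residual = second_derivative(Y, t) + 9 * Y(t) - (t**2 * math.exp(3*t) + 6)
    assert abs(residual) < 1e-4   # zero up to discretization/rounding error
```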
I am looking for some graph theory concepts and definitions around embedding a DAG into another DAG. I could only find a few lines on Wikipedia around this so I wonder if someone can help me find good references for the important concepts and definitions.
For example, assume I have a DAG $G=(V,E)$ where there are two types of edges, red and black: $E_{red} \subset E$, $E_{black} \subset E$, and $E_{red} \cap E_{black} = \emptyset$.
I assume that the red edges represents some "core" DAG and the black edges represents "details".
If I want to extract this "core" DAG, I might do something like this:
Put $E_{red}$ and all related vertices into a new graph $G' = (V',E')$.
For every pair $e_p,e_c \in E_{red}$, if $e_c$ is reachable from $e_p$ and there is no other $e \in E_{red}$ on the path, then add the edge $(end(e_p),beginning(e_c))$ to $G'$.
$G'$ might then be thought of as having removed the black edges (and thus the details) from $G$.
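The extraction procedure above, as I understand it, might be sketched like this (the names and edge representation are my own; a path with "no other red edge on it" is realized as a black-edges-only search):

```python
from collections import defaultdict

def extract_core(edges):
    """Extract the red 'core' DAG described above.

    edges: iterable of (u, v, color) with color in {"red", "black"}.
    Returns the edge set of G': all red edges, plus a shortcut edge
    (end(e_p), beginning(e_c)) whenever red edge e_c is reachable from
    red edge e_p along black edges only.
    """
    adj = defaultdict(list)
    for u, v, color in edges:
        adj[u].append((v, color))

    red = [(u, v) for u, v, c in edges if c == "red"]
    red_tails = {u for u, _ in red}
    core = set(red)

    # from the head of each red edge, search forward along black edges only;
    # hitting the tail of another red edge yields a shortcut edge in G'
    for _, head in red:
        stack, seen = [head], {head}
        while stack:
            x = stack.pop()
            if x != head and x in red_tails:
                core.add((head, x))
            for y, color in adj[x]:
                if color == "black" and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return core
```

(If two red edges share a vertex, no shortcut is needed since they are already connected in $G'$.)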
My question is, what am I doing, and where can I read about embedding DAGs and extracting embeddings based on edge colors? |
I apologize that this is perhaps not adequate for mathoverflow but I have struggled with this for days now and become desperate...
The reduced K-group $\tilde{K}(S^0)$ of the zero sphere is the ring $\mathbb{Z}$, being the kernel of the ring morphism $K(S^0)\to K(x_0)$. The ring structure on $K(S^0)$ and $K(x_0)$ comes from the tensor product $\otimes$ of vector bundles.
If $H$ is the canonical line bundle over $S^2$ then $(H-1)^2=0$ where the product comes from $\otimes$. The Bott periodicity theorem states that the induced map $\mathbb{Z}\left[H\right]/(H-1)^2\to K(S^2)$ is an isomorphism of rings. So $\tilde{K}(S^2)\cong \mathbb{Z}\left[H-1\right]/(H-1)^2$, I think, and every square in $\tilde{K}(S^2)$ is zero.
The reduced external product gives rise to a map $\tilde{K}(S^0)\to \tilde{K}(S^2)$ which is a ring (?) isomorphism (see e.g. Hatcher, Vector Bundles and K-Theory, Theorem 2.11), but then not every square in $\tilde{K}(S^2)$ would be zero. How can this be?
Aside from that, I do not understand the relation of $\otimes:K(X)\otimes K(X)\to K(X)$ to the composition of the external product with the map induced by the diagonal map, $K(X)\otimes K(X)\to K(X\times X)\to K(X)$.
I was told that using gradient methods for Gaussian mixture models may end up with Dirac delta function(s). I hadn't thought of this problem before, but when I verify this, it does seem to be a problem.
For example, let us consider a mixture of 2 Gaussians, and data points $x_1, x_2, \cdots, x_m$ ($m\gg 2$). The following model gives a likelihood of infinity:
One mixture component $c_1$ fits a single data point, say $x_1$, with a Dirac delta function. The other component $c_2$ fits the remaining data points with a wide-spreading Gaussian.
The likelihood
$$\begin{align} p(\mathcal{D})&=\prod_{i=1}^mp(x_i)\\ &=\prod_{i=1}^m\bigg[p(c_i=1)p(x_i|c_i=1)+p(c_i=2)p(x_i|c_i=2)\bigg] \end{align} $$
Then for $x_1$, its probability density is infinite. For $x_2,\cdots,x_m$ the first term is zero, but the second term is non-zero. Then the overall likelihood is infinite.
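A quick numerical sketch of this blow-up (using a fixed toy data set of my own choosing so the effect is easy to isolate): pinning one component's mean at $x_1$ and shrinking its standard deviation makes the log-likelihood grow by about $\log 10$ per decade of shrinkage, i.e. without bound.

```python
import math

# fixed, well-separated toy "data" (my choice, not from any real problem)
data = [0.1 * i for i in range(20)]

def log_lik(data, pi, mu1, s1, mu2, s2):
    def pdf(x, mu, s):
        return math.exp(-((x - mu) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return sum(math.log(pi * pdf(x, mu1, s1) + (1 - pi) * pdf(x, mu2, s2))
               for x in data)

# pin component 1 at x_1 = data[0] and shrink its width: the x_1 term
# behaves like -log(s1), so the likelihood diverges as s1 -> 0
lls = [log_lik(data, 0.5, data[0], s1, 1.0, 0.6)
       for s1 in (1e-2, 1e-3, 1e-4, 1e-5)]
for a, b in zip(lls, lls[1:]):
    assert b - a > 2.0   # each decade of shrinkage adds roughly log(10) ~ 2.3
```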
I am wondering if my understanding is correct. If it is, I am confused why EM doesn't encounter this problem, as fitting GMM with Dirac delta functions is not typically discussed in textbooks.
I am further puzzled by the objective of fitting GMMs. It seems that we don't have to (and it's not right to) maximize the likelihood. The maximum likelihood is infinity as shown above; you don't need to maximize it, it is already there. But EM algorithms try to maximize the likelihood by alternately making the lower bound of the likelihood tight and optimizing within the lower bound. This raises the doubt that EM works at all only because it cannot find global optima. Otherwise, EM would fit a Dirac delta.
I am quite confused and not sure what's wrong. |
The definition of incompressible is often unclear and changes depending on which community uses it. So let's look at some common definitions:
Constant density
This means the density is constant everywhere in space and time. So:$$\frac{D\rho}{Dt} = \frac{\partial \rho}{\partial t} + \vec{u}\cdot\nabla{\rho} = 0$$Because density is constant everywhere in space and time, the temporal derivative is zero, and the spatial gradient is zero.
Low Mach Number
This shows up when the flow velocity is relatively low and so all pressure changes are hydrodynamic (due to velocity motion) rather than thermodynamic. The effect of this is that $\partial \rho / \partial p = 0$. In other words, the small changes in pressure due to flow velocity changes do not change the density. This has a secondary effect -- the speed of sound in the fluid, $c = \sqrt{\partial p/\partial \rho}$, is infinite in this instance. So there is an infinite speed of sound, which makes the equations elliptic in nature.
Although we assume density is independent of pressure, it is possible for density to change due to changes in temperature or composition if the flow is chemically reacting. This means:
$$\frac{D\rho}{Dt} \neq 0$$because $\rho$ is a function of temperature and composition. If, however, the flow is not reacting or multi-component, you will also get the same equation as the constant density case:
$$\frac{D\rho}{Dt} = 0$$
Therefore, incompressible can mean constant density, or it can mean low Mach number, depending on the community and the application. I prefer to be explicit in the difference because I work in the reacting flow world where it matters. But many in the non-reacting flow communities just use incompressible to mean constant density.
Example of non-constant density
Since it was asked for an example where the material derivative is zero but density is not constant, here goes:
$$\frac{D\rho}{Dt} = \frac{\partial \rho}{\partial t} + \vec{u}\cdot\nabla \rho = 0$$
Rearrange this:
$$\frac{\partial \rho}{\partial t} = -\vec{u}\cdot\nabla\rho$$
Any flow satisfying this balance with $\nabla\rho \neq 0$ is a flow where $\rho \neq \text{const.}$ yet $D\rho/Dt = 0$. It has to be an unsteady flow.
Is there another example of steady flow? In steady flow, the time derivative is zero, so you have:
$$\vec{u}\cdot\nabla\rho = 0$$
If velocity is not zero, $\vec{u} \neq 0$, then $\nabla \rho$ must be perpendicular to $\vec{u}$; with no body forces (gravity) or temperature/composition differences to sustain such a gradient, any moving, steady flow must have constant density.
If velocity is zero, you can have a gradient in density without any issues. Think of a column of the atmosphere for example -- density is higher at the bottom than the top due to gravity, and there is no velocity. So again, $D\rho/Dt = 0$ but density is not constant everywhere. The challenge here of course is that the continuity equation is not sufficient to describe the situation since it becomes $0 = 0$. You would have to include the momentum equation to incorporate the gravity forces. |
I need your help for my task.
I need to calibrate the Hull-White model for the zero coupon bond price to market data. I refer to the John Hull and Alan White paper. I want to ask you a few questions; correct me if I'm wrong in my steps.
1. We need to know the spot rate $r$ at a certain time $t$ and the yield curve at time $t$, which imply $P(r,t,T)$.
2. We can calculate the continuously compounded interest rate $R(r,t,T)$ at time $t$ by the formula $-\frac{1}{(T-t)}\ln{P(r,t,T)}$.
3. After that, we must choose the volatility function for the spot rate ($\sigma_{r}(r,t)$) and the volatility of the continuously compounded interest rate ($\sigma_{R}(r,t,T)$), right? How can we choose them? It is said that the volatility refers to the standard deviation of proportional changes in the value of the variable. So we must have data for each day, then difference $R(r,t,T)$ between two consecutive days, express the difference proportionally as a percentage, then find the standard deviation. Is that true?
4. Then it's said that we can find $B(0,T)$ from the data with the equation $B(t,T) = \frac{R(r,t,T)\sigma_{R}(r,t,T)(T-t)}{r\sigma_{r}(r,t)}$. Can it be done just by inputting all the variables we've got at $t = 0$?
5. After that we find the $B(t,T)$ of the Hull-White model by the formula $\frac{B(0,T) - B(0,t)}{\frac{\partial B(0,t)}{\partial t}}$. How can you find $B(0,t)$ and $\frac{\partial B(0,t)}{\partial t}$?
6. We can continue by finding $\hat{A}(t,T)$, $a(t)$, and $\phi(t)$.
7. Then we can model the interest rate with $\sigma$, $a$, $\phi$ as functions of time.
8. We can calculate $B(t,T)$ and $A(t,T)$ for the ZCB price with the Hull-White model.
Are these steps correct? I am really confused.
On the last question: I am not sure how good you are at representation theory, but the following fact is true: take $so(d,2)$ (we need $so(3,2)$ for this to work), use the conformal base, i.e. Lorentz generators $L_{ab}$, translations $P_a$, conformal boosts $K_a$ and dilatation $D$, $a,b=1..d$. $P$ and $K$ behave as raising/lowering generators with respect to $D$: $[D,P]=+P$, $[D,K]=-K$. Take the vacuum to carry a spin-$s$ representation of the Lorentz algebra and a weight $\Delta$ with respect to $D$, i.e. $|\Delta\rangle^{a_1...a_s}$. When $\Delta=d+s-2$, there is a singular vector, $P_m|\Delta\rangle^{ma_2...a_s}$. This is standard representation theory: finding raising/lowering operators, defining the vacuum, looking for singular vectors. Actually, singular vectors are exactly the conformally-invariant equations one can impose.
On the field language this means that $\partial_m J^{m a_2...a_s}=0$ is a conformally invariant equation iff the conformal dimension of $J$ is $\Delta=d+s-2$. Despite the fact that $J^{a_1...a_s}$ is a good conformal operator for any value of the conformal dimension, only for $d+s-2$ its divergence decouples. (Perhaps you have seen $L_{-2}+\alpha L_{-1}^2$ as a singular vector in the Virasoro algebra, now it is replaced with $P_m$ or $\partial_m$).
Now, having $J^{a_1..a_s}$ of weight $\Delta$ we can consider its contragradient representation or on the field language couple it via $\int \phi_{a_1..a_s}J^{a_1...a_s}$ to some other field $\phi$. That we need a conformally invariant coupling implies $\Delta_\phi=d-\Delta_J=s-2$. Not surprisingly something special must happen for $\Delta_J=d+s-2$.
$$\int (\phi_{a_1...a_s}+\partial_{a_1}\xi_{a_2...a_s})J^{a_1...a_s}=\int \phi J-\int \xi_{a_2...a_s}\partial_m J^{ma_2..a_s}=\int \phi J$$we see that the statement dual to the conservation of $J$ is the gauge invariance of $\phi$.
I have not read the paper yet, but as far as I can see they play with the dimension of $J$ and for $d+s-2$ and $2-s$ it describes a conserved tensor and a gauge field just because of representation theory of the conformal group (decoupling of certain null states). At any given moment of time in the paper $J$ has some fixed dimension and is either a conserved tensor, a gauge field or just a spin-s conformal field of generic dimension $\Delta$.
On the last but one, you are right in that gauge invariance has a little to do with conformallity. The answer is spin and dimension dependent. For $s=0$ there is $m^2$ for which the scalar is conformal. For $s=1$ and certain $m^2$ the Maxwell field is a gauge field but the Maxwell equation is conformal in $d=4$ only. Beyond $d=4$ a gauge spin-one field is not conformal, or a spin-one conformal field is not a gauge field. For $s\geq2$ the situation is even more tricky: in $AdS_4$ the gauge fields are conformal, but in Minkowski space they are not conformal (in terms of gauge potentials $\phi_{\mu_1...\mu_s}$). You may have a look at http://arxiv.org/abs/0707.1085
On the second: first of all, the transversality is in the right place in 5.1. Secondly, your confusion (inspired by my answer to another question) is that there are two different classes of fields people are interested in. First is the class of usual particles, where we talk about representations of the Poincare algebra $iso(d-1,1)$ if we are in $d$-dimensional Minkowski space, or $so(d-1,2)$ and $so(d,1)$ if we are in anti de Sitter or de Sitter space (there we need harmonicity, tracelessness, transversality). Conformal fields are in the second class. Conformal means that the field must carry a representation of the conformal group $so(d,2)$ for Minkowski-$d$; note that $iso(d-1,1)\subset so(d,2)$. The conformal group of anti de Sitter-$d$ is also $so(d,2)$. Note that the symmetry algebra of AdS-$(d+1)$ is exactly the conformal group of Minkowski-$d$. So when we talk about conformal fields we are interested in representations of $so(d,2)$ (the signature can vary depending on the problem; it is some real form of $so(d+2)$). I would like to stress that conformal fields in $d$ dimensions are in one-to-one correspondence with usual fields in $AdS_{d+1}$, for the algebra is the same, which is at the core of the AdS/CFT correspondence.
For example, a spin-$0$ field in Minkowski space obeys $\square \phi=0$. It gives rise to an irreducible representation of $iso(d-1,1)$. Coincidentally, the same representation turns out to be an irreducible representation of a bigger algebra, $so(d,2)$, the conformal algebra. There exists also a spin-$0$ conformal field of weight $\Delta$, say $\phi_\Delta(x)$. Without imposing any equations it is an irreducible representation of $so(d,2)$. As a representation of its subalgebra $iso(d-1,1)$ it decomposes into an integral of representations (Fourier) and is highly reducible. There is a special weight $\Delta=(d-2)/2$ for which $\phi_\Delta(x)$ is reducible, and the decoupling of null states is achieved via $\square \phi=0$ (analogous to the conservation of $J$ above). Note that $J$ above is an irreducible representation of $so(d,2)$ but is highly reducible under $iso(d-1,1)$. For the special weight $d+s-2$ we have to impose the conservation condition in order to project out the null states, but again the conserved tensor is an irreducible representation of $so(d,2)$ and reducible under $iso(d-1,1)$. So your confusion arises because the fields are conformal: these are representations of a bigger algebra, they are more 'fat' and require fewer equations (or even none at all) to project onto an irreducible.
$S^3$ is the analog of Minkowski-$3$ (compactified and Euclidean); then $so(4)$ is the analog of $iso(3,1)$, and they are interested in normalizable functions: these are the spherical harmonics, or polynomials in the coordinates. Then they discuss the labelling of these representations using $so(4)\sim su(2)\oplus su(2)$ and proceed to doing some integrals. This post imported from StackExchange Physics at 2014-03-07 13:47 (UCT), posted by SE-user John |
Brouwer's Fixed Point Theorem
Recall from one of the results on The Induced Mapping from the Fundamental Groups of Two Topological Spaces page that $S^1$ is not a retract of $D^2$. We will use this result to prove the famous Brouwer fixed point theorem.
Theorem 1 (Brouwer's Fixed Point Theorem): Every continuous function from the closed unit disk to itself has a fixed point. That is, if $f : D^2 \to D^2$ is continuous then there exists a point $(x, y) \in D^2$ such that $f(x, y) = (x, y)$. Proof: Let $f : D^2 \to D^2$ be a continuous function and suppose that no fixed point exists. That is, for every $(x, y) \in D^2$ we have that $f(x, y) \neq (x, y)$. Since $f(x, y) \neq (x, y)$, there is a unique ray starting at $f(x, y)$ and passing through $(x, y)$, and this ray hits the boundary of $D^2$ (the unit circle $S^1$) at a unique point. Define $g : D^2 \to S^1$ by letting $g(x, y)$ be that unique boundary point. Then $g$ is clearly a continuous function, and $g \circ \mathrm{in} = \mathrm{id}_{S^1}$ (a point already on $S^1$ is sent to itself). So $g$ is a retraction of the closed unit disk $D^2$ onto the unit circle $S^1$. But this is impossible by the result stated at the top of the page. So there exists an $(x, y) \in D^2$ such that $f(x, y) = (x, y)$. $\blacksquare$
Theorem 2 (Brouwer's Fixed Point Theorem for $D^n$): Every continuous function $f : D^n \to D^n$ has a fixed point. Here, $D^1$ is the closed unit interval, $D^2$ is of course the closed unit disk, $D^3$ is the closed unit ball, etc…
The proof of Theorem 2 is analogous to that of Theorem 1, so we will omit it.
Theorem 3 (General Brouwer's Fixed Point Theorem): If $A \subset \mathbb{R}^n$ is homeomorphic to $D^n$ then every continuous function $f : A \to A$ has a fixed point. |
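Brouwer's theorem only asserts existence; the proof above is non-constructive. For the special case of a contraction, however, simple iteration does converge to the fixed point. A minimal sketch (the particular map below is a made-up example; any continuous self-map of the disk would have a fixed point):

```python
import math

# A continuous self-map of the closed unit disk: rotate by 30 degrees about
# the point (0.3, 0.1) and contract by a factor of 1/2 toward it.
def f(p):
    cx, cy = 0.3, 0.1
    x, y = p[0] - cx, p[1] - cy
    c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
    return (cx + 0.5 * (c * x - s * y), cy + 0.5 * (s * x + c * y))

# Because this map happens to be a contraction, iterating it converges to the
# fixed point whose existence Brouwer guarantees.
p = (0.0, 0.0)
for _ in range(200):
    p = f(p)
```

After 200 iterations the point is numerically fixed: `f(p)` agrees with `p` to machine precision, and the point lies inside $D^2$.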
A discrete memoryless source
W has words $w_1,w_2,w_3,w_4,w_5,w_6$ that occur with probabilities $0.05,0.05,0.15,0.2,0.25,0.3$ respectively.
Does there exist a compact instantaneous binary encoding for this source with word lengths $2, 2, 4, 4, 5$ and $5$?
Shannon's noiseless coding theorem says that a compact encoding has expected word length $n$ where $n\leq \frac{H(W)}{\log_2(D)}$. I get $H(W)=2.328$ and $n=1.901$, but in this case the expected word length is $3.67$, so it cannot be a compact encoding.
I feel like I have gone wrong here; could somebody tell me how? I have gone on to the next question, where I perform Huffman coding to achieve a compact instantaneous encoding, but the expected word length for that encoding is
still higher than $1.901$, so I feel like my value for $n$ must be wrong, but I can't see how? |
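For reference, the noiseless coding theorem bounds the expected word length from below, $\bar n \ge H(W)/\log_2 D$, so for a binary code no encoding can beat $H(W)\approx 2.328$ bits per word. A quick stdlib check of the numbers in the question; the assignment of the proposed lengths below (shortest codewords to the likeliest words) is one natural choice, not specified in the question:

```python
from math import log2

probs = [0.05, 0.05, 0.15, 0.2, 0.25, 0.3]
H = -sum(p * log2(p) for p in probs)   # source entropy in bits per word

# Proposed lengths 2,2,4,4,5,5, shortest codewords to the most probable words.
lengths = [5, 5, 4, 4, 2, 2]
kraft = sum(2.0 ** -l for l in lengths)                # <= 1 => an instantaneous code exists
expected = sum(p * l for p, l in zip(probs, lengths))  # expected word length
```

This confirms $H(W)\approx 2.328$ and a Kraft sum of $0.6875 \le 1$, so an instantaneous code with these lengths exists, with expected length $3.0 \ge H(W)$ as the lower bound requires.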
Notice:
If you happen to see a question you know the answer to, please do chime in and help your fellow community members. We encourage our forum members to be more involved: jump in and help out your fellow researchers with their questions. The GATK forum is a community forum, and helping each other with using GATK tools and research is the cornerstone of our success as a genomics research community. We appreciate your help!
Test-drive the GATK tools and Best Practices pipelines on Terra
Check out this blog post to learn how you can get started with GATK and try out the pipelines in preconfigured workspaces (with a user-friendly interface!) without having to install anything.
Genotype Refinement workflow for germline short variants
Contents: 1. Overview | 2. Summary of workflow steps | 3. Output annotations | 4. Example | 5. More information about priors | 6. Mathematical details
1. Overview
The core GATK Best Practices workflow has historically focused on variant discovery --that is, the existence of genomic variants in one or more samples in a cohort-- and consistently delivers high quality results when applied appropriately. However, we know that the quality of the individual genotype calls coming out of the variant callers can vary widely based on the quality of the BAM data for each sample. The goal of the Genotype Refinement workflow is to use additional data to improve the accuracy of genotype calls and to filter genotype calls that are not reliable enough for downstream analysis. In this sense it serves as an optional extension of the variant calling workflow, intended for researchers whose work requires high-quality identification of individual genotypes.
While every study can benefit from increased data accuracy, this workflow is especially useful for analyses that are concerned with how many copies of each variant an individual has (e.g. in the case of loss of function) or with the transmission (or de novo origin) of a variant in a family.
If a “gold standard” dataset for SNPs is available, that can be used as a very powerful set of priors on the genotype likelihoods in your data. For analyses involving families, a pedigree file describing the relatedness of the trios in your study will provide another source of supplemental information. If neither of these applies to your data, the samples in the dataset itself can provide some degree of genotype refinement (see section 5 below for details).
After running the Genotype Refinement workflow, several new annotations will be added to the INFO and FORMAT fields of your variants (see below).
Note that GQ fields will be updated, and genotype calls may be modified. However, the Phred-scaled genotype likelihoods (PLs) which indicate the original genotype call (the genotype candidate with PL=0) will remain untouched. Any analysis that made use of the PLs will produce the same results as before.
2. Summary of workflow steps
Input
Begin with recalibrated variants from VQSR at the end of the germline short variants pipeline. The filters applied by VQSR will be carried through the Genotype Refinement workflow.
Step 1: Derive posterior probabilities of genotypes
Tool used: CalculateGenotypePosteriors
Using the Phred-scaled genotype likelihoods (PLs) for each sample, prior probabilities for a sample taking on a HomRef, Het, or HomVar genotype are applied to derive the posterior probabilities of the sample taking on each of those genotypes. A sample’s PLs were calculated by HaplotypeCaller using only the reads for that sample. By introducing additional data like the allele counts from the 1000 Genomes project and the PLs for other individuals in the sample’s pedigree trio, those estimates of genotype likelihood can be improved based on what is known about the variation of other individuals.
SNP calls from the 1000 Genomes project capture the vast majority of variation across most human populations and can provide very strong priors in many cases. At sites where most of the 1000 Genomes samples are homozygous variant with respect to the reference genome, the probability of a sample being analyzed of also being homozygous variant is very high.
For a sample for which both parent genotypes are available, the child’s genotype can be supported or invalidated by the parents’ genotypes based on Mendel’s laws of allele transmission. Even the confidence of the parents’ genotypes can be recalibrated, such as in cases where the genotypes output by HaplotypeCaller are apparent Mendelian violations.
Step 2: Filter low quality genotypes
Tool used: VariantFiltration
After the posterior probabilities are calculated for each sample at each variant site, genotypes with GQ < 20 based on the posteriors are filtered out. GQ20 is widely accepted as a good threshold for genotype accuracy, indicating that there is a 99% chance that the genotype in question is correct. Tagging those low quality genotypes indicates to researchers that these genotypes may not be suitable for downstream analysis. However, as with the VQSR, a filter tag is applied, but the data is not removed from the VCF.
Step 3: Annotate possible de novo mutations
Tool used: VariantAnnotator
Using the posterior genotype probabilities, possible de novo mutations are tagged. Low confidence de novos have child GQ >= 10 and AC < 4 or AF < 0.1%, whichever is more stringent for the number of samples in the dataset. High confidence de novo sites have all trio sample GQs >= 20 with the same AC/AF criterion.
Step 4: Functional annotation of possible biological effects
Tool options: Funcotator (experimental)
Especially in the case of de novo mutation detection, analysis can benefit from the functional annotation of variants to restrict variants to exons and surrounding regulatory regions. Funcotator is a new tool that is currently still in development. If you would prefer to use a more mature tool, we recommend you look into SnpEff or Oncotator, but note that these are not GATK tools so we do not provide support for them.
3. Output annotations
The Genotype Refinement workflow adds several new info- and format-level annotations to each variant. GQ fields will be updated, and genotypes calculated to be highly likely to be incorrect will be changed. The Phred-scaled genotype likelihoods (PLs) carry through the pipeline without being changed. In this way, PLs can be used to derive the original genotypes in cases where sample genotypes were changed.
Population Priors
New INFO field annotation PG is a vector of the Phred-scaled prior probabilities of a sample at that site being HomRef, Het, and HomVar. These priors are based on the input samples themselves along with data from the supporting samples if the variant in question overlaps another in the supporting dataset.
Phred-Scaled Posterior Probability
New FORMAT field annotation PP is the Phred-scaled posterior probability of the sample taking on each genotype for the given variant context alleles. The PPs represent a better-calibrated estimate of genotype probabilities than the PLs and are recommended for use in further analyses instead of the PLs.
Genotype Quality
Current FORMAT field annotation GQ is updated based on the PPs. The calculation is the same as for GQ based on PLs.
Joint Trio Likelihood
New FORMAT field annotation JL is the Phred-scaled joint likelihood of the posterior genotypes for the trio being incorrect. This calculation is based on the PLs produced by HaplotypeCaller (before application of priors), but the genotypes used come from the posteriors. The goal of this annotation is to be used in combination with JP to evaluate the improvement in the overall confidence in the trio’s genotypes after applying CalculateGenotypePosteriors. The calculation of the joint likelihood is given as:
$$ JL = -10 \log_{10} \left( 1 - GL_{\text{mother}} \cdot GL_{\text{father}} \cdot GL_{\text{child}} \right) $$
where the GLs are the genotype likelihoods in [0, 1] probability space.
Joint Trio Posterior
New FORMAT field annotation JP is the Phred-scaled posterior probability of the output posterior genotypes for the three samples being incorrect. The calculation of the joint posterior is given as:
$$ JP = -10 \log_{10} \left( 1 - GP_{\text{mother}} \cdot GP_{\text{father}} \cdot GP_{\text{child}} \right) $$
where the GPs are the genotype posteriors in [0, 1] probability space.
Low Genotype Quality
New FORMAT field filter lowGQ indicates samples with posterior GQ less than 20. Filtered samples tagged with lowGQ are not recommended for use in downstream analyses.
High and Low Confidence De Novo
New INFO field annotation for sites at which at least one family has a possible de novo mutation. Following the annotation tag is a list of the children with de novo mutations. High and low confidence are output separately.
4. Example
Before:
1 1226231 rs13306638 G A 167563.16 PASS AC=2;AF=0.333;AN=6;… GT:AD:DP:GQ:PL 0/0:11,0:11:0:0,0,249 0/0:10,0:10:24:0,24,360 1/1:0,18:18:60:889,60,0
After:
1 1226231 rs13306638 G A 167563.16 PASS AC=3;AF=0.500;AN=6;…PG=0,8,22;… GT:AD:DP:GQ:JL:JP:PL:PP 0/1:11,0:11:49:2:24:0,0,249:49,0,287 0/0:10,0:10:32:2:24:0,24,360:0,32,439 1/1:0,18:18:43:2:24:889,60,0:867,43,0
The original call for the child (first sample) was HomRef with GQ0. However, given that, with high confidence, one parent is HomRef and one is HomVar, we expect the child to be heterozygous at this site. After family priors are applied, the child’s genotype is corrected and its GQ is increased from 0 to 49. Based on the allele frequency from 1000 Genomes for this site, the somewhat weaker population priors favor a HomRef call (PG=0,8,22). The combined effect of family and population priors still favors a Het call for the child.
The joint likelihood for this trio at this site is two, indicating that the genotype for one of the samples may have been changed. Specifically a low JL indicates that posterior genotype for at least one of the samples was not the most likely as predicted by the PLs. The joint posterior value for the trio is 24, which indicates that the GQ values based on the posteriors for all of the samples are at least 24. (See above for a more complete description of JL and JP.)
5. More information about priors
The Genotype Refinement Pipeline uses Bayes’s Rule to combine independent data with the genotype likelihoods derived from HaplotypeCaller, producing more accurate and confident genotype posterior probabilities. Different sites will have different combinations of priors applied based on the overlap of each site with external, supporting SNP calls and on the availability of genotype calls for the samples in each trio.
Input-derived Population Priors
If the input VCF contains at least 10 samples, then population priors will be calculated based on the discovered allele count for every called variant.
Supporting Population Priors
Priors derived from supporting SNP calls can only be applied at sites where the supporting calls overlap with called variants in the input VCF. The values of these priors vary based on the called reference and alternate allele counts in the supporting VCF. Higher allele counts (for ref or alt) yield stronger priors.
Family Priors
The strongest family priors occur at sites where the called trio genotype configuration is a Mendelian violation. In such a case, each Mendelian violation configuration is penalized by a de novo mutation probability (currently $10^{-6}$). Confidence also propagates through a trio. For example, two GQ60 HomRef parents can substantially boost a low GQ HomRef child and a GQ60 HomRef child and parent can improve the GQ of the second parent. Application of family priors requires the child to be called at the site in question. If one parent has a no-call genotype, priors can still be applied, but the potential for confidence improvement is not as great as in the 3-sample case.
Caveats
Right now family priors can only be applied to biallelic variants and population priors can only be applied to SNPs. Family priors only work for trios.
6. Mathematical details
Note that family priors are calculated and applied before population priors. The opposite ordering would result in overly strong population priors because they are applied to the child and parents and then compounded when the trio likelihoods are multiplied together.
Review of Bayes’s Rule
HaplotypeCaller outputs the likelihoods of observing the read data given that the genotype is actually HomRef, Het, and HomVar. To convert these quantities to the probability of the genotype given the read data, we can use Bayes’s Rule. Bayes’s Rule dictates that the probability of a parameter given observed data is equal to the likelihood of the observations given the parameter multiplied by the prior probability that the parameter takes on the value of interest, normalized by the prior times likelihood for all parameter values:
$$ P(\theta|Obs) = \frac{P(Obs|\theta)P(\theta)}{\sum_{\theta} P(Obs|\theta)P(\theta)} $$
In the best practices pipeline, we interpret the genotype likelihoods as probabilities by implicitly converting the genotype likelihoods to genotype probabilities using non-informative or flat priors, for which each genotype has the same prior probability. However, in the Genotype Refinement Pipeline we use independent data such as the genotypes of the other samples in the dataset, the genotypes in a “gold standard” dataset, or the genotypes of the other samples in a family to construct more informative priors and derive better posterior probability estimates.
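The Bayes step can be sketched in a few lines. This is a conceptual illustration, not GATK's implementation: it converts Phred-scaled likelihoods (PLs) over {HomRef, Het, HomVar} into posterior probabilities under a chosen prior; the specific PL and prior values are made-up examples:

```python
# Sketch of the Bayes step: PL -> likelihood -> posterior under a prior.
def posteriors_from_pls(pls, prior):
    likes = [10.0 ** (-pl / 10.0) for pl in pls]    # undo Phred scaling
    unnorm = [l * p for l, p in zip(likes, prior)]  # numerator of Bayes' rule
    z = sum(unnorm)                                 # normalizing constant
    return [u / z for u in unnorm]

# With a flat prior the PL ordering is unchanged...
flat = posteriors_from_pls([0, 24, 360], [1/3, 1/3, 1/3])
# ...but a sufficiently strong Het prior (e.g. trio-derived) can flip the call.
skewed = posteriors_from_pls([0, 24, 360], [0.001, 0.998, 0.001])
```

With the flat prior the HomRef genotype (PL=0) remains most probable; with the skewed prior the Het genotype becomes most probable, which mirrors how the family priors corrected the child's GQ0 HomRef call in the example above.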
Calculation of Population Priors
Given a set of samples in addition to the sample of interest (ideally non-related, but from the same ethnic population), we can derive the prior probability of the genotype of the sample of interest by modeling the sample’s alleles as two independent draws from a pool consisting of the set of all the supplemental samples’ alleles. (This follows rather naturally from the Hardy-Weinberg assumptions.) Specifically, this prior probability will take the form of a multinomial Dirichlet distribution parameterized by the allele counts of each allele in the supplemental population. In the biallelic case the priors can be calculated as follows:
$$ P(GT = HomRef) = \dbinom{2}{0} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 2)}{\Gamma(nSamples + 2)\Gamma(RefCount)} $$
$$ P(GT = Het) = \dbinom{2}{1} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 1)\Gamma(AltCount + 1)}{\Gamma(nSamples + 2)\Gamma(RefCount)\Gamma(AltCount)} $$
$$ P(GT = HomVar) = \dbinom{2}{2} \ln \frac{\Gamma(nSamples)\Gamma(AltCount + 2)}{\Gamma(nSamples + 2)\Gamma(AltCount)} $$
where Γ is the Gamma function, an extension of the factorial function.
The prior genotype probabilities based on this distribution scale intuitively with the number of samples. For example, a set of 10 samples, 9 of which are HomRef, yields a prior probability of about 90% that another sample is HomRef, whereas a set of 50 samples, 49 of which are HomRef, yields about a 97% probability.
Calculation of Family Priors
Given a genotype configuration for a given mother, father, and child trio, we set the prior probability of that genotype configuration as follows:
$$ P(G_M,G_F,G_C) = P(\vec{G}) = \begin{cases} 1-10\mu-2\mu^2 & \text{no MV} \\ \mu & \text{1 MV} \\ \mu^2 & \text{2 MVs} \end{cases} $$
where the 10 configurations with a single Mendelian violation are penalized by the de novo mutation probability $\mu$ and the two configurations with two Mendelian violations by $\mu^2$. The remaining configurations are considered valid and are assigned the remaining probability so that the total sums to one.
This prior is applied to the joint genotype combination of the three samples in the trio. To find the posterior for any single sample, we marginalize over the remaining two samples, as in the example below for the posterior probability of the child having a HomRef genotype:
$$ P(G_C{=}HomRef \mid D) = \frac{\sum_{G_M,G_F} P(G_M,G_F,G_C{=}HomRef)\, P(D \mid G_M) P(D \mid G_F) P(D \mid G_C{=}HomRef)}{\sum_{G_M,G_F,G_C} P(G_M,G_F,G_C)\, P(D \mid G_M) P(D \mid G_F) P(D \mid G_C)} $$
This quantity P(Gc|D) is calculated for each genotype, then the resulting vector is Phred-scaled and output as the Phred-scaled posterior probabilities (PPs). |
If $x$ is a positive rational number, but not an integer, then can $x^{x^{x^x}}$ be a rational number ?
We can prove that if $x$ is a positive rational number but not an integer, then $x^x$ can not be rational:
Denote $x=\dfrac{b}{a},(a,b)=1,$ and $x^x=\dfrac{d}{c},(c,d)=1.$ Then $$\left(\dfrac{b}{a}\right)^\dfrac{b}{a}=\dfrac{d}{c} \hspace{12pt}\Rightarrow \hspace{12pt}\left(\dfrac{b}{a}\right)^b=\left(\dfrac{d}{c}\right)^a \hspace{12pt}\Rightarrow \hspace{12pt}b^b c^a=d^a a^b$$ Since $(a,b)=1$ and $(c,d)=1,$ we have $c^a\mid a^b$ and $a^b\mid c^a$, hence $a^b=c^a.$ Since $(a,b)=1,$ $a^b$ must be an $ab$-th power of an integer; write $a^b=t^{ab},$ so that $a=t^a,$ where $t$ is a positive integer. This is impossible if $t>1$ (since then $t^a\geq 2^a>a$), so we get $t=1,a=1$, hence $x$ is an integer.
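To make the step $\left(\frac{b}{a}\right)^b=\left(\frac{d}{c}\right)^a$ concrete: for $x=3/2$, rationality of $x^x$ would force $(3/2)^3=27/8$ to be the square of a rational, which fails because 27 and 8 are not both perfect squares. A stdlib-only sketch of that check:

```python
from fractions import Fraction
from math import isqrt

def is_perfect_square(n):
    r = isqrt(n)
    return r * r == n

def is_rational_square(fr):
    # A positive rational in lowest terms is the square of a rational iff its
    # numerator and denominator are both perfect squares.
    fr = Fraction(fr)
    return is_perfect_square(fr.numerator) and is_perfect_square(fr.denominator)

# x = 3/2: if x^x = d/c were rational, then (3/2)^3 = (d/c)^2 would make
# 27/8 a rational square, but it is not.
assert not is_rational_square(Fraction(27, 8))
assert is_rational_square(Fraction(9, 4))  # sanity check: (3/2)^2
```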
Then from the Gelfond–Schneider theorem, we can prove that if $x$ is a positive rational number but not an integer, then $x^{x^x}$ cannot be rational. In fact, it cannot even be an algebraic number, because the base $x$ is algebraic (and neither $0$ nor $1$) while the exponent $x^x$ is algebraic and, by the above, irrational.
Can we prove that $x^{x^{x^x}}$ is irrational? Can $x^{x^{\dots (n-th)^{\dots x}}}~(n>1)$ be rational? |
Moving-average smoothing
ma computes a simple moving average smoother of a given time series.
Keywords: ts
Usage
ma(x, order, centre = TRUE)
Arguments
x
Univariate time series
order
Order of moving average smoother
centre
If TRUE, then the moving average is centred for even orders.
Details
The moving average smoother averages the nearest order periods of each observation. As neighbouring observations of a time series are likely to be similar in value, averaging eliminates some of the randomness in the data, leaving a smooth trend-cycle component. $$\hat{T}_{t} = \frac{1}{m} \sum_{j=-k}^{k} y_{t+j}$$ where \(k=\frac{m-1}{2}\).
When an even order is specified, the observations averaged will include one more observation from the future than the past (k is rounded up). If centre is TRUE, the values from two moving averages (where k is rounded up and down respectively) are averaged, centring the moving average.
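The centred even-order behaviour can be illustrated with a plain-Python sketch (this mimics the description above; it is not the forecast package's code). For odd orders a single centred window is used; for even orders with centre=TRUE the two off-centre windows are averaged, which is equivalent to giving half weight to the two end observations:

```python
def centred_ma(x, order):
    # Simple moving average smoother; returns only the fully-covered positions.
    m = order
    k = m // 2
    if m % 2 == 1:
        return [sum(x[t - k:t + k + 1]) / m for t in range(k, len(x) - k)]
    # even order, centred: average of the two off-centre m-MAs
    # = weights [0.5, 1, ..., 1, 0.5] / m over a window of m+1 points
    w = [0.5] + [1.0] * (m - 1) + [0.5]
    return [sum(wi * xi for wi, xi in zip(w, x[t - k:t + k + 1])) / m
            for t in range(k, len(x) - k)]

numbers = [3, 5, 3, 6, 4, 5, 3, 4, 4, 6, 3]
smoothed = centred_ma(numbers, 5)  # first value is (3+5+3+6+4)/5 = 4.2
```

The order-5 result matches the hand calculation in the community example below; for order 2 on `[1, 2, 3, 4]` the centred values are `(0.5*1 + 2 + 0.5*3)/2 = 2.0` and `3.0`.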
Value
Numerical time series object containing the simple moving average smoothed values.
See Also
Aliases: ma
Examples
# NOT RUN {
plot(wineind)
sm <- ma(wineind, order=12)
lines(sm, col="red")
# }
Documentation reproduced from package forecast, version 8.7, License: GPL-3
Community examples
twigt.arie@gmail.com at Sep 23, 2018, forecast v8.4
## Example to get an understanding how values are calculated by applying the `ma()` function.
```{r}
# create a numerical vector which is easy to understand
numbers <- c(3, 5, 3, 6, 4, 5, 3, 4, 4, 6, 3, 5, 4, 6, 3, 5, 4, 4, 6, 3, 5)
# apply the 'ma' function
ma(numbers, order = 5)
```
Calculate the average for the first 5 numbers by hand.
```{r}
(sum(3, 5, 3, 6, 4))/5
``` |
In my post Trigonometry Yoga, I discussed how defining sine and cosine as lengths of segments in a unit circle helps develop intuition for these functions.
I learned the circle definitions of sine and cosine in my junior year of high school, in the class that would now be called pre-calculus (it was called “Trig Senior Math”). Two years earlier, I’d learned the triangle definitions of sine, cosine, and tangent in geometry class. I don’t remember any of my teachers ever mentioning a circle definition of the tangent function.
The geometric definition of the tangent function, which predates the triangle definition, is the length of a segment tangent to the unit circle. The tangent really is a tangent! Just as for sine and cosine, this one-variable definition helps develop intuition. Here is the definition, followed by an applet to help you get a feel for it:
Let OA be a radius of the unit circle, let B = (1,0), and let \( \theta =\angle BOA\). Let C be the intersection of \(\overrightarrow{OA}\) and the line x=1, i.e. the tangent to the unit circle at B. Then \(\tan \theta\) is the y-coordinate of C, i.e. the signed length of segment BC.
Move the blue point below; the tangent is the length of the red segment. (If a label is getting in the way, right click and toggle “show label” from the menu).
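The definition is easy to check numerically: intersecting the line through O and A with the vertical line x = 1 gives a y-coordinate equal to tan θ. A small sketch (function name is mine, for illustration):

```python
import math

# The line through O and A = (cos t, sin t) meets x = 1 at C = (1, y):
# scaling A by 1/cos t puts its x-coordinate at 1, so y = sin t / cos t.
def tangent_as_intersection(theta):
    c, s = math.cos(theta), math.sin(theta)
    return s * (1.0 / c)     # y-coordinate of C, the signed length BC

for t in (0.3, 1.0, -0.7, 2.5):   # 2.5 rad lies in the second quadrant
    assert math.isclose(tangent_as_intersection(t), math.tan(t))
```

Note that in the second and third quadrants the scaling factor 1/cos t is negative, i.e. the intersection lies on the backward extension of the ray OA, which is why the signed length convention matters there.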
The circle definition of the tangent function leads to geometric illustrations of many standard properties and identities. (If this were my class, I would stop here and tell you to explore on your own and with others).
Some things to notice:
\(\left| \tan \theta \right|\) gets big as \(\theta\) approaches \(\pm 90{}^\circ \).
\(\tan (\pm 90{}^\circ)\) is undefined, because at these angles, \(\overline{OA}\) is parallel to x=1, so the two lines don’t intersect, and point C doesn’t exist.
As \(\theta\) increases toward \(90{}^\circ \), \(\tan \theta\) tends toward \(+\infty\); as \(\theta\) decreases toward \(-90{}^\circ \), \(\tan \theta\) tends toward \(-\infty\).
\(\tan \theta\) is positive in the first and third quadrants, negative in the second and fourth quadrants.
\(\tan \theta\)=\(\tan (\theta+180{}^\circ)\) — the angles \(\theta\) and \(\theta +180{}^\circ\) form the same line. Thus the period of the tangent function is \(180 {}^\circ = \pi\) radians.
\(\tan \theta\) = \(- \tan (-\theta)\). Moving from \(\theta\) to \(-\theta\) reflects \(OC\) about the x-axis.
\(\tan \theta\) is equal to the slope of OA (rise = \(\tan \theta\) , run =1), which is also equal to \(\dfrac{\sin\theta}{\cos\theta}\), as well as Opposite over Adjacent for angle \(\theta\) in right triangle CBO.
\(\tan (45{}^\circ)=1\). When \(\theta=45{}^\circ\), triangle CBO is a 45-45-90 triangle, and OB=1. Similarly, \(\tan (-45{}^\circ)=-1\), etc.
For small values of \(\theta\), \(\tan \theta\) is close to \(\sin \theta\), which is close to the arc length of AB, i.e. the measure of \(\theta\) in radians.
If we define \(\arctan\) as the function whose input is the signed length of BC and whose output is the angle \(\theta\) corresponding to that tangent length, then the domain of that function is the reals, and it makes sense to define the range as \(-90 {}^\circ< \theta <90{}^\circ\) (in radians \(-\pi/2<\theta < \pi/2\), and arctan’s output is an arc length). This range includes all the angles we need and avoids the discontinuity at \(\theta= \pm 90{}^\circ =\pm \pi/2\) radians.
For \(\left| \theta \right|\leq 45{}^\circ\), \(\left| \tan \theta \right|\leq 1\). Half of the input values of \(\tan \theta\) give outputs with absolute values less than or equal to 1, and the other half give values on the rest of the number line. This mapping also occurs with fractions and slopes, but there’s something very compelling about seeing the lengths change dynamically. Applets like the one above could also help students develop intuition about slopes.
\(\tan (180{}^\circ-\theta) = -\tan \theta\). We reflect BC over the x-axis to form \(B{C}’\). Then \(\angle BO{C}’=\theta\) and \(\angle BOD =(180{}^\circ-\theta)\). \(B{C}’\) (the blue segment) is the tangent of \((180{}^\circ-\theta)\).
\(\tan (\theta \pm 90{}^\circ)\) = \(-1/\tan \theta\). The picture below illustrates the geometry of this identity when \(\theta\) is in the first quadrant.
The line formed at \(\theta + 90{}^\circ\) is perpendicular to OC and \(\triangle COB\sim \triangle ODB\). Thus \(\dfrac{BD}{OB}=\dfrac{OB}{BC}\), and with appropriate signs, \(\tan (\theta + 90{}^\circ)\) = \(-1/\tan \theta\). Since \(\tan \theta\)=\(\tan (\theta+180{}^\circ)\), \(\tan (\theta +90{}^\circ)=\tan(\theta-90{}^\circ)\).
The applet below shows the geometry in all quadrants, and it gives a dynamic sense of the relationship between \(\tan\theta\) and \(\tan(-\theta)\). Again, move the blue point:
Special Bonus: The Secant Function
The signed length of the segment OC is called the secant function, \(\sec\theta\).
Using similar triangles, we see that \(\sec \theta = \dfrac{1}{\cos \theta}\).
The Pythagorean Theorem applied to \(\triangle COB\) shows that \(\tan^2\theta+1=\sec^2 \theta\).
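Both facts are quick to confirm numerically (a sketch, checking several angles across the quadrants):

```python
import math

# sec t = 1/cos t, and tan^2 t + 1 = sec^2 t, away from odd multiples of 90°
for deg in (10, 75, 160, 200, 340):
    t = math.radians(deg)
    sec = 1.0 / math.cos(t)
    assert math.isclose(math.tan(t) ** 2 + 1.0, sec ** 2)
```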
When \(\left|\tan \theta\right|\) is big, so is \(\left|\sec \theta\right|\); and \(\sec \theta\) is close to \(\pm 1\) when \(\theta\) is close to the x-axis, i.e. when \(\tan \theta\) is close to 0.
The graphs of the two functions look nice together: |
Assume I have these two elliptic curves: \begin{align*} E:Y^2&=X^3+b_2X^2+b_4X+b_6\\ E':Y^2&=X^3+gb_2X^2+g^2b_4X+g^3b_6, \end{align*} over $\mathbb{F}_q$, where $g$ is not a square in $\mathbb{F}_q$, and $\mathbb{F}_q$ does not have characteristic $2$. I know that $\#E(\mathbb{F}_q)=q+1-t$ and am asked to prove that $\#E'(\mathbb{F}_q)=q+1+t$. I am however not really sure how to do this. I know that by definition $\#E(\mathbb{F}_q)=q+1-\tau$ and $\#E(\mathbb{F}_q)=q+1-\pi-\pi'$, where $\pi$ and $\pi'$ are the zeroes of $T^2-\tau T+q$. Any ideas on how I could approach this problem?
I think that the following trick is wanted. Consider the polynomials $$ f(X)=X^3+b_2X^2+b_4X+b_6 $$ and $$ h(X')=X'^3+gb_2X'^2+g^2b_4X'+g^3b_6. $$ We see that $g^3f(X)=h(gX)$. Because $g^3=g\cdot g^2$ is a non-square (as $g$ is), if we fix the value $X=x\in F_q$ then one and only one of the following alternatives will occur:
1) We have $h(gx)=f(x)=0$. 2) $h(gx)$ is a non-zero square, and $f(x)$ is a non-zero non-square. 3) $h(gx)$ is a non-zero non-square, and $f(x)$ is a non-zero square.
In all cases the equations $$ y^2=h(gx)\qquad\text{and}\qquad y^2=f(x) $$ have exactly two solutions $y\in F_q$ between them. In respective cases 1) one solution $y=0$ to each, 2) two solutions to former, none to the latter, 3) none to the former, two to the latter.
[Edit] Adding more details. Let $q_i,i=1,2,3$ be the number of those elements $x\in \Bbb{F}_q$ such that we are in case $i$. Taking into account the point at infinity we see that the numbers of points on the two curves are $$\begin{aligned} \#E(\Bbb{F}_q)&=q_1+2q_3+1,\\ \#E'(\Bbb{F}_q)&=q_1+2q_2+1. \end{aligned}$$ This is because if $x$ is in case 1, then there is one point of the form $(x,0)\in E$, and one point $(gx,0)\in E'$. If $x$ is in case 2, then there are two points of the form $(gx,y)\in E'$ but no points of the form $(x,y)\in E$. And if $x$ is in case 3, then the reverse holds.
The claim follows from this as each $x$ falls into exactly one of the three cases, so $q_1+q_2+q_3=q$. Adding the two counts gives $\#E(\Bbb{F}_q)+\#E'(\Bbb{F}_q)=2(q_1+q_2+q_3)+2=2q+2$, so $\#E'(\Bbb{F}_q)=2q+2-(q+1-t)=q+1+t$. [/Edit] |
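The counting argument is easy to confirm by brute force over a small field. In the sketch below the prime, the non-square $g$, and the curve coefficients are arbitrary choices of mine, not taken from the question:

```python
# Brute-force point counts of E: y^2 = x^3 + b2 x^2 + b4 x + b6 and its
# quadratic twist over F_q, including the point at infinity.
def count_points(q, b2, b4, b6):
    roots = {}
    for y in range(q):
        roots[y * y % q] = roots.get(y * y % q, 0) + 1  # square-root counts
    total = 1  # point at infinity
    for x in range(q):
        total += roots.get((x ** 3 + b2 * x ** 2 + b4 * x + b6) % q, 0)
    return total

q, g = 11, 2            # 2 is a non-square mod 11 (squares are 1,3,4,5,9)
b2, b4, b6 = 2, 3, 4    # arbitrary example coefficients
nE = count_points(q, b2, b4, b6)
nE_twist = count_points(q, g * b2 % q, g * g * b4 % q, g ** 3 * b6 % q)
# The traces are opposite: (q + 1 - t) + (q + 1 + t) = 2q + 2.
assert nE + nE_twist == 2 * q + 2
```

In character language this is the observation that $\chi(h(gx))=\chi(g^3)\chi(f(x))=-\chi(f(x))$, so the character sums for the two curves cancel.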
I’m not sure to whom the image or the idea is due. Please comment if you have information. (See comments below for current information.)
The rules will naturally generalize those in Connect-Four. Namely, starting from an empty board, the players take turns placing their coins into the $\omega\times 4$ grid. When a coin is placed in a column, it falls down to occupy the lowest available cell. Let us assume for now that the game proceeds for $\omega$ many moves, whether or not the board becomes full, and the goal is to make a connected sequence in a row of $\omega$ many coins of your color (you don’t have to fill the whole row, but rather a connected infinite segment of it suffices). A draw occurs when neither or both players achieve their goals.
In the $\omega\times 6$ version of the game that is shown, and indeed in the $\omega\times n$ version for any finite $n$, I claim that neither player can force a win; both players have drawing strategies.
Theorem. In the game Connect-$\omega$ on a board of size $\omega\times n$, where $n$ is finite, neither player has a winning strategy; both players have drawing strategies. Proof. For a concrete way to see this, observe that either player can ensure that there are infinitely many of their coins on the bottom row: they simply place a coin into some far-out empty column. This blocks a win for the opponent on the bottom row. Next, observe that neither player can afford to follow the strategy of always answering those moves on top, since this would lead to a draw, with a mostly empty board. Thus, it must happen that infinitely often we are able to place a coin onto the second row. This blocks a win for the opponent on the second row. And so on. In this way, either player can achieve infinitely many of their coins on each row, thereby blocking any row as a win for their opponent. So both players have drawing strategies. $\Box$
Let me point out that on a board of size $\omega\times n$, where $n$ is odd, we can also make this conclusion by a strategy-stealing argument. Specifically, I argue first that the first player can have no winning strategy. Suppose $\sigma$ is a winning strategy for the first player on the $\omega\times n$ board, with $n$ odd, and let us describe a strategy for the second player. After the first move, the second player mentally ignores a finite left-initial segment of the playing board, which includes that first move and has an odd number of cells altogether in it (and hence an even number of empty cells remaining); the second player will now aim to win on the now-empty right-side of the board, by playing as though playing first in a new game, using strategy $\sigma$. If the first player should ever happen to play on the ignored left side of the board, then the second player can answer somewhere there (it doesn’t matter where). In this way, the second player plays with $\sigma$ as though he is the first player, and so $\sigma$ cannot be winning for the first player, since the second player would win in this stolen manner.
Similarly, let us argue by strategy-stealing that the second player cannot have a winning strategy on the board $\omega\times n$ for odd finite $n$. Suppose that $\tau$ is a winning strategy for the second player on such a board. Let the first player always play at first in the left-most column. Because $n$ is odd, the second player will eventually have to play first in the second or later columns, leaving an even number of empty cells in the first column (perhaps zero). At this point, the first player can play as though he was the second player on the right-side board containing only that fresh move. If the opponent plays again to the left, then our player can also play in that region (since there were an even number of empty cells). Thus, the first player can steal the strategy $\tau$, and so it cannot be winning for the second player.
I am unsure about how to implement the strategy stealing arguments when $n$ is even. I shall give this more thought. In any case, the theorem for this case was already proved directly by the initial concrete argument, and in this sense we do not actually need the strategy stealing argument for this case.
Meanwhile, it is natural also to consider the $n\times\omega$ version of the game, which has only finitely many columns, each infinite. The players aim to get a sequence of $\omega$-many coins in a column. This is clearly impossible, as the opponent can prevent a win by always playing atop the most recent move. Thus:
Theorem. In the game Connect-$\omega$ on a board of size $n\times\omega$, where $n$ is finite, neither player has a winning strategy; both players have drawing strategies.
Perhaps the most natural variation of the game, however, occurs with a board of size $\omega\times\omega$. In this version, like the original Connect-Four, a player can win by either making a connected row of $\omega$ many coins, or a connected column or a connected diagonal of $\omega$ many coins. Note that we orient the $\omega$ size column upwards, so that there is no top cell, but rather, one plays by selecting a not-yet-filled column and then occupying the lowest available cell in that column.
Theorem. In the game Connect-$\omega$ on a board of size $\omega\times\omega$, neither player has a winning strategy. Both players have drawing strategies. Proof. Consider the strategy-stealing arguments. If the first player has a winning strategy $\sigma$, then let us describe a strategy for the second player. After the first move, the second player will ignore finitely many columns at the left, including that first actual move, aiming to play on the empty right-side of the board as though the first player using stolen strategy $\sigma$ (but with colors swapped). This will work fine, as long as the first player also plays on that part of the board. Whenever the first player plays on the ignored left-most part, simply respond by playing atop. This prevents a win in that part of the board, and so the second player will win on the right-side by pretending to be first there. So there can be no such winning strategy $\sigma$ for the first player.
If the second player has a winning strategy $\tau$, then as before let the first player always play in the first column, until $\tau$ directs the second player to play in another column, which must eventually happen if $\tau$ is winning. At that moment, the first player can pretend to be second on the subboard omitting the first column. So $\tau$ cannot have been winning after all for the second player. $\Box$
In the analysis above, I was considering the game that proceeded in time $\omega$, with $\omega$ many moves. But such a play of the game may not actually have filled the board completely. So it is natural to consider a version of the game where the players continue to play transfinitely, if the board is not yet full.
So let us consider now the transfinite-play version of the game, where play proceeds transfinitely through the ordinals, until either the board is filled or one of the players has achieved the winning goal. Let us assume that the first player also plays first at limit stages, at $\omega$ and $\omega\cdot 2$ and $\omega^2$, and so on, if game play should happen to proceed for that long.
The concrete arguments that I gave above continue to work for the transfinite-play game on the boards of size $\omega\times n$ and $n\times\omega$.
Theorem. In the transfinite-play version of Connect-$\omega$ on boards of size $\omega\times n$ or $n\times\omega$, where $n$ is finite, neither player can have a winning strategy. Indeed, both players can force a draw while also filling the board in $\omega$ moves. Proof. It is clear that on the $n\times\omega$ board, either player can force each column to have infinitely many coins of their color, and this fills the board, while also preventing a win for the opponent, as desired.
On the $\omega\times n$ board, consider a variation of the strategy I described above. I shall simply always play in the first available empty column, thereby placing my coin on the bottom row, until the opponent also plays in a fresh column. At that moment, I shall play atop his coin, thereby placing another coin in the second row; immediately after this, I also play subsequently in the left-most available column (so as to force the board to be filled). I then continue playing in the bottom row, until the opponent also does, which she must, and then I can add another coin to the second row and so on. By always playing the first-available second-row slot with all-empty second rows to the right, I can ensure that the opponent will eventually also make a second-row play (since otherwise I will have a winning condition on the second row), and at such a moment, I can also make a third-row play. By continuing in this way, I am able to place infinitely many coins on each row, while also forcing that the board becomes filled. $\Box$
Unfortunately, the transfinite-play game seems to break the strategy-stealing arguments, since the play is not symmetric for the players, as the first player plays first at limit stages.
Nevertheless, following some ideas of Timothy Gowers in the comments below, let me show that the second player has a drawing strategy.
Theorem. In the transfinite-play version of Connect-$\omega$ on a board of size $\omega\times\omega$, the second player has a drawing strategy. Proof. We shall arrange for the second player to block all possible winning configurations for the first player, or else to have column wins for both players. To block all row wins, the second player will arrange to occupy infinitely many cells in each row; to block all diagonal wins, the second player will aim to occupy infinitely many cells on each possible diagonal; and to block the column wins, the second player will aim either to have infinitely many cells on each column or to copy a winning column of the opponent on another column.
To achieve these things, we simply play as follows. Take the columns in successive groups of three. On the first column in each block of three, that is on the columns indexed $3m$, the second player will always answer a move by the first player on this column. In this way, the second player occupies every other cell on these columns—all at the same height. This will block all diagonal wins, because every diagonal winning configuration will need to go through such a cell.
On the remaining two columns in each group of three, columns $3m+1$ and $3m+2$, let the second player simply copy moves of the opponent on one of these columns by playing on the other. These moves will therefore be opposite colors, but at the same height. In this way, the second player ensures that he has infinitely many coins on each row, blocking the row wins. And also, this strategy ensures that in these two columns, at any limit stage, either neither player has achieved a winning configuration or both have.
Thus, we have produced a drawing strategy for the second player. $\Box$
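The column-response rule in this drawing strategy is completely mechanical, and can be sketched as follows (my own formalization; only the choice of answering column is shown, since a coin always falls to the lowest available cell of its column):

```python
# Second player's response rule from the proof: columns come in groups of
# three. A move on column 3m is answered on the same column (landing atop),
# while moves on columns 3m+1 and 3m+2 are copied onto the partner column,
# at the same height.
def respond(column: int) -> int:
    r = column % 3
    if r == 0:
        return column          # answer atop, same column
    elif r == 1:
        return column + 1      # copy onto the partner column
    else:
        return column - 1
```

For example, an opponent move on column 4 (a $3m+1$ column) is answered on column 5, its partner, so the two columns stay at equal heights.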
Thus, there is no advantage to going first. What remains is to determine if the first player also has a drawing strategy, or whether the second player can actually force a win.
Gowers explains in the comments below also how to achieve such a copying mechanism to work on a diagonal, instead of just on a column.
I find it also fascinating to consider the natural generalizations of the game to larger ordinals. We may consider the game of Connect-$\alpha$ on a board of size $\kappa\times\lambda$, for any ordinals $\alpha,\kappa,\lambda$, with transfinite play, proceeding until the board is filled or the winning conditions are achieved. Clearly, there are some instances of this game where a player has a winning strategy, such as the original Connect-Four configuration, where the first player wins, and presumably many other instances.
Question. In which instances of Connect-$\alpha$ on a board of size $\kappa\times\lambda$ does one of the players have a winning strategy?
It seems to me that the groups-of-three-columns strategy described above generalizes to show that the second player has at least a drawing strategy in Connect-$\alpha$ on board $\kappa\times\lambda$, whenever $\alpha$ is infinite.
Stay tuned…
Reduction of Order on Second Order Linear Homogeneous Differential Equations Examples 1
Recall from the Reduction of Order on Second Order Linear Homogeneous Differential Equations page that if we have a second order linear homogeneous differential equation $\frac{d^2y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ and if $y = y_1(t)$ is a nonzero solution to this differential equation, then we suppose that $y = v(t) y_1(t)$ is also a solution to this differential equation. We then saw that by taking the first and second derivatives, $y'$ and $y''$, we could reduce our differential equation to:

(1)
$$y_1(t) v''(t) + \left ( 2y_1'(t) + p(t) y_1(t) \right ) v'(t) = 0$$
But this is merely a first order differential equation of the function $v'(t)$ and so we can use techniques of first order differential equations to solve for $v(t)$ and then for $y_2 = v(t) y_1(t)$ and possibly form a fundamental set of solutions to the original differential equations.
Let's look at some examples of reduction of order on second order linear homogeneous differential equations.
Example 1 Find a second solution to the differential equation $t^2 \frac{d^2y}{dt^2} + 2t \frac{dy}{dt} - 2y = 0$ for $t > 0$ given that $y_1(t) = t$ is a solution.
Let $y = v(t) y_1(t) = tv(t)$. If we differentiate this function, we get that $y' = tv'(t) + v(t)$, and if we differentiate again, we get that $y''(t) = tv''(t) + v'(t) + v'(t) = tv''(t) + 2v'(t)$. If we plug this into our differential equation then we have that:

(2)
$$t^2 \left ( tv''(t) + 2v'(t) \right ) + 2t \left ( tv'(t) + v(t) \right ) - 2tv(t) = t^3 v''(t) + 4t^2 v'(t) = 0$$
We can now solve this differential equation using techniques for solving first order differential equations. Let $u(t) = v'(t)$. Then we obtain the following first order differential equation:

(3)
$$u'(t) + \frac{4}{t} u(t) = 0$$
Let's solve this differential equation by using integrating factors. Let $\mu (t) = e^{ \int \frac{4}{t} \: dt} = e^{4\ln t} = e^{\ln t^4} = t^4$. Now let's multiply both sides of the differential equation above by $\mu (t)$ to get that:

(4)
$$t^4 u'(t) + 4t^3 u(t) = \frac{d}{dt} \left ( t^4 u(t) \right ) = 0 \quad \Rightarrow \quad t^4 u(t) = C \quad \Rightarrow \quad u(t) = Ct^{-4}$$
Now note that $u(t) = v'(t)$ so $v'(t) = Ct^{-4}$. Integrating both sides of this equation, we get that $v(t) = -C\frac{t^{-3}}{3} = -\frac{C}{3t^3}$. Therefore:

(5)
$$y_2 = t v(t) = t \cdot \left ( -\frac{C}{3t^3} \right ) = -\frac{C}{3t^2}$$
Note that $-\frac{C}{3}$ is just a constant, and more simply, $y_2 = \frac{1}{t^2}$ is another solution to this differential equation.
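As a sanity check (not part of the original example), we can verify numerically with central differences that $y_2(t) = \frac{1}{t^2}$ satisfies $t^2 y'' + 2t y' - 2y = 0$:

```python
# Numerical spot-check that y2(t) = 1/t^2 solves t^2 y'' + 2t y' - 2y = 0,
# using central finite differences at a few sample points.
def residual(y, t, h=1e-5):
    d1 = (y(t + h) - y(t - h)) / (2 * h)          # approximates y'
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # approximates y''
    return t**2 * d2 + 2 * t * d1 - 2 * y(t)

y2 = lambda t: 1.0 / t**2
for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(residual(y2, t)) < 1e-4
```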
Example 2 Find a second solution to the differential equation $t \frac{d^2y}{dt^2} - \frac{dy}{dt} + 4t^3 y = 0$ for $t > 0$ given that $y_1(t) = \sin t^2$ is a solution.
First let's rewrite our differential equation by dividing by $t$ to get:

(6)
$$\frac{d^2y}{dt^2} - \frac{1}{t} \frac{dy}{dt} + 4t^2 y = 0$$
Let $y = v(t) y_1(t) = v(t) \sin t^2$. Instead of computing the first and second derivatives of $y$, let's instead use the formula:

(7)
$$y_1(t) v''(t) + \left ( 2y_1'(t) + p(t) y_1(t) \right ) v'(t) = 0$$
Now let $u(t) = v'(t)$ and so we obtain the following first order differential equation:

(8)
$$\sin (t^2) u'(t) + \left ( 4t \cos (t^2) - \frac{1}{t} \sin (t^2) \right ) u(t) = 0$$
I had a problem in my book that I tried to prove. Here is the problem:
"Let $x_1,\ldots,x_n$ be different real numbers and $y_1,\ldots,y_n,s_1,\ldots,s_n$ some real numbers. Prove that there exists a polynomial $p(x)$ of a degree less than $2n$ such that $p(x_i)=y_i$ and $p'(x_i)=s_i$ for every $i=1,2,\ldots,n.$"
Here is my attempt:
Let $V= \{ p(x)\in \mathbb{R}[x] \mid \deg(p(x))<2n\}$. Then $\exists\, h(x)\in V$ s.t. $\deg(h(x))<n<2n$; from here we can now apply the Lagrange interpolation theorem, so $h(x_i)=y_i$.
Let $Q= \{ g(x)\in \mathbb{R}[x] \mid g(x)=p'(x),\ p(x) \in V, \deg(p(x)) < n \}$
where $\dim(Q)=n-1$.
Now let $g(x)\in Q \Rightarrow \deg(g(x))<n-1$
Let $A: Q\rightarrow \mathbb{R}^n$
$g(x) \rightarrow (g(x_1),\ldots,g(x_n))$
Now assume $g(x)\in \ker(A) \Rightarrow g(x_i) = 0$, $n$ zeros $\Rightarrow \ g(x)=0 \ \Rightarrow \ker(A) = \{0\} \Rightarrow A$ injective $+$surjective $\Rightarrow g(x_i)=s_i$
But since $\deg(h(x))<n \Rightarrow h'(x) \in Q \Rightarrow g(x)=h'(x) \Rightarrow g(x_i)=h'(x_i)=s_i$
My professor had a quick look and stated that this was a good attempt but it was not correct. He said that my proof does not show with certainty that the $h(x)$ I get in the end, with $h'(x_i)=s_i$, will also fulfill $h(x_i)=y_i$.
I didn't understand him, because in my opinion I'm sure that the derivative of $h(x)$, which fulfills $h(x_i)=y_i$, lies in $Q$, and therefore I can let my $g(x)$ be equal to $h'(x)$.
I would be grateful if someone could explain this.
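Not an answer to the question about the proof, but the existence claim itself is easy to test numerically: for $n=2$ nodes we can solve the $2n \times 2n$ linear system for the coefficients of $p$ directly (the data below are made-up sample values):

```python
# Illustration of the existence claim for n = 2 nodes: solve for the 2n = 4
# coefficients of p(x) = c0 + c1 x + c2 x^2 + c3 x^3 with p(x_i) = y_i and
# p'(x_i) = s_i. The data values are made up for the example.
xs, ys, ss = [0.0, 1.0], [1.0, 2.0], [0.0, 3.0]

# Build the 4x4 linear system: rows for p(x_i), then rows for p'(x_i).
A = [[x**k for k in range(4)] for x in xs] + \
    [[k * x**(k - 1) if k > 0 else 0.0 for k in range(4)] for x in xs]
b = ys + ss

# Plain Gaussian elimination with partial pivoting.
n = len(b)
for i in range(n):
    piv = max(range(i, n), key=lambda r: abs(A[r][i]))
    A[i], A[piv] = A[piv], A[i]
    b[i], b[piv] = b[piv], b[i]
    for r in range(i + 1, n):
        m = A[r][i] / A[i][i]
        for col in range(i, n):
            A[r][col] -= m * A[i][col]
        b[r] -= m * b[i]
c = [0.0] * n
for i in reversed(range(n)):
    c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, n))) / A[i][i]

p  = lambda x: sum(ck * x**k for k, ck in enumerate(c))
dp = lambda x: sum(k * ck * x**(k - 1) for k, ck in enumerate(c) if k > 0)
assert all(abs(p(x) - y) < 1e-9 for x, y in zip(xs, ys))
assert all(abs(dp(x) - s) < 1e-9 for x, s in zip(xs, ss))
```

With these sample values the solver recovers $p(x) = 1 + x^3$, which indeed has degree less than $2n = 4$.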
An exponential sum is an expression of the form \[ \sum_{n=1}^N e^{2 \pi i f(n)},\] where \( f \) is a real-valued function defined on the positive integers. Such sums are used in the solution of various problems in number theory; in this article we will just play around with a few examples, draw their graphs and try to explain some of their features. By the "graph" of an exponential sum we mean the sequence of partial sums, plotted in the complex plane, with successive points joined by straight line segments. That is, we start at the origin; draw a line interval corresponding to the first term of the sum; from the end of this interval draw another, corresponding to the second term of the sum; and so on.
A good place to start is the exponential sum with \( f(n)=( \ln n)^4 \) and, say, \( N=5000 \). The graph was dubbed "the Loch Ness monster" by John Loxton in a 1981 article.
Graph of the exponential sum \( \sum_{n=1}^{5000} e^{2\pi i (\ln n )^4} \). Click on the image to enlarge.
To explain the shape of the graph we concentrate on the angle between successive line segments. Because of the factor \( 2 \pi \) in the exponential sum, we are measuring angles in units of full circles: that is, \( \frac14 \) represents a right angle, \( \frac12 \) represents a \( 180^\circ \) angle, and so on. The "blobs" in the Loch Ness monster occur when the angle between the \( n\)th line segment and the \( (n+1)\)th is close to an integer plus a half for several consecutive \( n \), so that the line segments (all of which have the same length) "fold back" upon one another. Correspondingly, the "smooth" parts correspond to values of \( n \) where the angle is close to an integer, so that the curve maintains substantially the same direction for a while.
We can understand this in more detail by noting that the angle between the \( n \)th and the \( (n+1) \)th line interval is \( f(n+1)-f(n) \), which is approximately the derivative \( f'(n) \). For the "monster" we have \[ f'(n)= \frac{4(\ln n)^3}{n} ; \] this tends to zero as \( n\to\infty \), and will at some stage take values of interest close to \( \frac12\), 1, \( \frac{3}{2} \) and so on. In fact, the "blob" at the bottom of the diagram is a result of the fact that \( f'(n) \approx \frac{1}{2} \) when \( n \) is about 4900; the previous one comes from \( f'(n) \approx \frac{3}{2} \) and so forth. The small blob at top left corresponds to \( f'(n)\approx \frac{11}{2} \). The almost vertical smooth section in the middle of the diagram is when \( f'(n) \approx1 \), the previous one \( f'(n) \approx2 \).
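The partial sums are straightforward to compute. The script below (my own illustration) generates the points of the Loch Ness monster; feeding `points` to any complex-plane plotting routine reproduces the graph:

```python
# Partial sums of the "Loch Ness monster" exponential sum, f(n) = (ln n)^4.
# Plotting the complex numbers in `points`, joined by segments, gives the
# graph described in the article.
import cmath, math

def partial_sums(f, N):
    z, points = 0j, [0j]
    for n in range(1, N + 1):
        z += cmath.exp(2j * math.pi * f(n))
        points.append(z)
    return points

points = partial_sums(lambda n: math.log(n) ** 4, 5000)
# Every term has modulus 1, so consecutive points are at distance 1.
assert len(points) == 5001
assert abs(abs(points[1] - points[0]) - 1) < 1e-12
```

Swapping in a different $f$, such as the birthday cubic mentioned below, produces the corresponding graph.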
A less visually appealing example, but one which provides much food for thought, is given by \( f(n)= \frac{1}{2} \pi n^2 \).
Since \( f'(n)=n \pi \), and since angles of integer size don't count, the behaviour of this graph is governed by the fractional parts (that is, the parts after the decimal point) of multiples of \( \pi \). The origin of this graph is near the bottom left of the diagram; by counting segments you can easily see that the section between the 3rd and 4th segments is almost straight. The graph appears to end near the middle of the diagram. In fact, this is not so: at this point, the 57th segment doubles back on the 56th so closely that you can't visually tell them apart. Further back towards the beginning, you can distinguish two nearby but separate paths. Two extremely accurate fractional approximations to \( \pi \) are \( \frac{22}{7} \) (well known) and \( \frac{355}{113} \) (less well known); if you add the pairs of numbers I have just mentioned you obtain \( 3+4=7 \) and \( 56+57=113 \), the denominators of these fractions. This is no coincidence: the numbers associated with significant features of the graph are not merely a topic for your next trivia night, but are profoundly connected with the value of \( \pi \).
Different types of functions \( f(n) \) give very different graphs and may not have any particularly noticeable "blobs" at all. If \( f \) is a polynomial the graphs often (though not always) display a beautiful symmetry, combined with a fascinating level of fine detail. Taking, for example, the cubic \( f(n)=n/11+n^2/21+n^3/31 \), we obtain the following delightful graph.
You can even personalise the graphs by choosing \[ f(n)= \frac{n}{dd} + \frac{n^2}{mm} + \frac{n^3}{yy} , \] where \( dd.mm.yy \) is your own date of birth. Here's mine: if you can deduce the values of \( dd, mm \) and \( yy \), don't forget to send me a birthday card.
Graph of the exponential sum \( \sum_{n=1}^{10620} e^{2\pi i f(n)} \). Click on the image to enlarge.
"Real-life" applications of these ideas? Forget it. This is mathematics for fun. Nothing wrong with that.
David Angell, School of Mathematics and Statistics, UNSW.

Further information: J.H. Loxton, The graphs of exponential sums, Mathematika 30 (1983), 153--163.
So I'd like some help w/ this question.
Given 3 assets with means, variances, and correlation:
Two portfolios are created (A and B), each with the three assets above with weights ($w_n$) as follows:
Portfolio A: $w_1=0.2$, $w_2=0$, $w_3=0.8$
Portfolio B: $w_1=0.4$, $w_2=0.1$, $w_3=0.5$.
The assets' correlation are
$\rho_{12}=0.5,\rho_{13}=0.2,\rho_{23}=0$.
I would like to know the correlation of the two portfolios?
My attempt: So I've calculated the two portfolios' expected values and variances as follows: $E(A)=0.084, Var(A)=0.0024576$
$E(B)=0.092, Var(B)=0.006193$
And If I use $$\rho_{AB}=\frac{Cov(A,B)}{\sigma_A \sigma_B}=\frac{E(AB)-E(A)E(B)}{\sigma_A \sigma_B}$$ how would I calculate $E(AB)-E(A)E(B)$? Is $E(AB)$ the Expected Value of the assets' products, or is it the Expected Value of their weighted products? Thanks
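One way to see what is needed: for portfolios $A = \sum_i w_i^A X_i$ and $B = \sum_j w_j^B X_j$, bilinearity of covariance gives $\mathrm{Cov}(A,B) = \sum_{i,j} w_i^A w_j^B \,\mathrm{Cov}(X_i, X_j) = w_A^{\top} \Sigma\, w_B$, so $E(AB)$ is not needed once the covariance matrix $\Sigma$ is known. A sketch (the asset standard deviations below are placeholders, since the question's table did not survive; the weights and correlations are from the question):

```python
# Correlation of two portfolios via Cov(A, B) = w_A' Sigma w_B.
wA = [0.2, 0.0, 0.8]
wB = [0.4, 0.1, 0.5]
rho = [[1.0, 0.5, 0.2],
       [0.5, 1.0, 0.0],
       [0.2, 0.0, 1.0]]
sigma = [0.10, 0.15, 0.05]          # assumed standard deviations (placeholders)

# Covariance matrix: Sigma[i][j] = rho[i][j] * sigma[i] * sigma[j]
Sigma = [[rho[i][j] * sigma[i] * sigma[j] for j in range(3)] for i in range(3)]

cov_AB = sum(wA[i] * Sigma[i][j] * wB[j] for i in range(3) for j in range(3))
var_A = sum(wA[i] * Sigma[i][j] * wA[j] for i in range(3) for j in range(3))
var_B = sum(wB[i] * Sigma[i][j] * wB[j] for i in range(3) for j in range(3))
rho_AB = cov_AB / (var_A ** 0.5 * var_B ** 0.5)
```

Plugging in the actual standard deviations from the question's table gives the portfolio correlation directly.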
Invertibility of a Linear Map
Definition: If $T \in \mathcal L (V, W)$ then the linear map $T$ is said to be Invertible if $\exists S \in \mathcal L (W, V)$ such that $ST = I_V$ and $TS = I_W$. The linear map $S$ is said to be the Inverse Linear Map of $T$ which we denote by $S = T^{-1}$.
Before we look more into the invertibility of linear maps, we will first look at an important theorem which tells us that if $T \in \mathcal L (V, W)$ is invertible, then the inverse $S = T^{-1}$ is unique.
Theorem 1 (Uniqueness of Inverses): If $T \in \mathcal L (V, W)$ is an invertible linear map, then the inverse linear map $T^{-1} \in \mathcal L (W, V)$ is unique. Proof:Let $T \in \mathcal L (V, W)$ be an invertible linear map, and suppose that $S, S' \in \mathcal L (W, V)$ are inverses to $T$. Therefore we have that $ST = I_V = S'T$ and $TS = I_W = TS'$. So $S = S I_W = S(TS') = (ST)S' = I_V S' = S'$, which implies that $S = S'$ and so the inverse linear map of $T$ is unique. $\blacksquare$
Theorem 2: If $T \in \mathcal L (V, W)$, then $T$ is an invertible linear map if and only if $T$ is bijective. Proof:$\Rightarrow$ Suppose that $T \in \mathcal L (V, W)$ is an invertible linear map. To show that $T$ is bijective we must show that $T$ is both injective and surjective. We will first show that $T$ is injective. Let $u, v \in V$ and suppose that $T(u) = T(v)$. Then we have that $u = T^{-1}(T(u)) = T^{-1}(T(v)) = v$ which implies that $u = v$ and so $T$ is injective. Now we will show that $T$ is surjective. Let $w \in W$. So we have that $w = T(T^{-1}(w))$, and $T^{-1} (w) \in V$. So $\forall w \in W$ $\exists T^{-1} (w) \in V$ such that $w = T(T^{-1}(w))$. Therefore $T$ is surjective. We thus conclude that since $T$ is both injective and surjective then $T$ is bijective. $\Leftarrow$ Suppose that $T \in \mathcal L (V, W)$ is a bijective linear map. We want to show that $T$ is invertible, which we will show by manually constructing an inverse. Let $w \in W$. Since $T$ is surjective, then $\exists v \in V$ such that $w = T(v)$. Now let $S$ be a map, and let $v = S(w)$ be the unique element in $V$ that is mapped to $w$. We note that $v = S(w)$ uniquely maps to $w$ since $T$ is injective. Therefore $w = T(v) = T(S(w))$. Therefore $TS = I_W$. We now need to show that $ST = I_V$. Let $v \in V$, and so:
\begin{equation} T(S(T(v))) = T((ST)(v)) = (TS)(T(v)) = I_W T(v) = T(v) \end{equation}
Since $T$ is injective, we have that $S(T(v)) = v$ and so $ST = I_V$. Now we only need to show that $S$ is a linear map, that is show that $S \in \mathcal L (W, V)$. Let $w, x \in W$. Notice that:
\begin{equation} T(S(w) + S(x)) = T(S(w)) + T(S(x)) = w + x \end{equation}
So $S(w) + S(x)$ is the unique element from $V$ that is mapped to $w + x$. So therefore $S(w) + S(x) = S(w + x)$. Lastly, let $a \in \mathbb{F}$. Then $T(aS(w)) = aT(S(w)) = aw$, and so $aS(w)$ is the unique element from $V$ that is mapped to $aw$, so therefore $aS(w) = S(aw)$. Therefore $S$ is a linear map from $W$ to $V$ and so $S \in \mathcal L (W, V)$, so $T$ has an inverse. $\blacksquare$
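As a concrete finite-dimensional illustration of the definition (my own example, not from the page), the matrix map below on $\mathbb{R}^2$ has an inverse $S$ with $ST = TS = I$:

```python
# The linear map on R^2 given by the matrix T is invertible: the matrix S
# below satisfies ST = TS = I, matching the definition above (and Theorem 2,
# since T is bijective).
T = [[2.0, 1.0],
     [1.0, 1.0]]
# Inverse of a 2x2 matrix [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]].
a, b = T[0]; c, d = T[1]
det = a * d - b * c
S = [[d / det, -b / det],
     [-c / det, a / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
assert matmul(S, T) == I and matmul(T, S) == I
```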
The issue can be characterized as a confusion of prior and posterior probability, or maybe as the dissatisfaction of not knowing the joint distribution of certain random variables.
Conditioning
As an introductory example, we consider a model for the experiment of drawing, without replacement, two balls from an urn with $n$ balls numbered from $1$ to $n$. The typical way to model this experiment is with two random variables $X$ and $Y$, where $X$ is the number of the first ball and $Y$ is the number of the second ball, and with the joint distribution $P(X=x \land Y=y) = 1/(n(n-1))$ for all $x,y \in N := \{1,\dots,n\}$ with $x \neq y$. This way, all possible outcomes have the same, positive probability, and the impossible outcomes (e.g., drawing the same ball twice) have zero probability. It follows that $P(X=x)=1/n$ and $P(Y=x)=1/n$ for all $x \in N$.
Let the experiment be conducted and the second ball revealed to us, while the first ball is kept secret. Denote by $t$ the number of the second ball. Then, still, $P(X=x)=1/n$ for all $x \in N$. However, for each $x \in N$, our
degree of belief that the event $X=x$ happened should now be $P(X=x \vert Y=t) = P(X=x \land Y=t) / P(Y=t)$, which in case of $x \neq t$ is $1/(n-1)$, and in case of $x = t$, it is $0$. This is the probability of $X=x$ conditioned on the information that $Y=t$ happened, also called the posterior probability of $X=x$, meaning the updated probability of $X=x$ after we obtained the evidence that $Y=t$ happened. It is still $P(X=x)=P(Y=x)=1/n$ for all $x \in N$; those are the prior probabilities.
Not conditioning on evidence means ignoring evidence. However, we can only condition on what is expressible in the probabilistic model. In our example with the two balls from the urn, we cannot condition on the weather or on how we feel today. In case we have reason to believe that such evidence is relevant to the experiment, we must first change our model in order to allow us to express this evidence as formal events.
Let $C$ be the indicator random variable that says whether the first ball has a lower number than the second ball, that is, $C = 1 \Longleftrightarrow X < Y$. Then $P(C=1) = 1/2$. Let again $t$ be the number of the second ball, which is revealed to us, but the number of the first ball is secret. Then it is easy to see that $P(C=1 \vert Y=t) = (t-1) / (n-1)$. In particular, $P(C=1 \vert Y=1) = 0$, which in our model means that $C=1$ has certainly not happened. Moreover, $P(C=1 \vert Y=n) = 1$, which in our model means that $C=1$ has certainly happened. It is still $P(C=1) = 1/2$.
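Both conditional probabilities in the urn example can be confirmed by exhaustive enumeration; the script below (my own check) does this for $n = 5$ and $t = 3$ using exact rational arithmetic:

```python
# Exhaustive check of the urn example for n = 5: enumerating all equally
# likely ordered draws confirms P(X=x | Y=t) = 1/(n-1) for x != t and
# P(C=1 | Y=t) = (t-1)/(n-1).
from itertools import permutations
from fractions import Fraction

n = 5
outcomes = list(permutations(range(1, n + 1), 2))   # all (X, Y), equally likely

def cond_prob(event, given):
    sub = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in sub if event(o)), len(sub))

t = 3
assert cond_prob(lambda o: o[0] == 1, lambda o: o[1] == t) == Fraction(1, n - 1)
assert cond_prob(lambda o: o[0] < o[1], lambda o: o[1] == t) == Fraction(t - 1, n - 1)
```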
Confidence Interval
Let $X = (X_1, \dots, X_n)$ be a vector of $n$ i.i.d. random variables. Let $(l,u)$ be a confidence interval estimator (CIE) with confidence level $\gamma$ for a real parameter of the distribution of the random variables in $X$, that is, $l$ and $u$ are real-valued functions with domain $\mathbb{R}^n$, such that if $\theta \in \mathbb{R}$ is the true value of the parameter, then $P(l(X) \leq \theta \leq u(X)) \geq \gamma$.
Let $C$ be the indicator random variable that says whether $(l,u)$ determined the correct parameter, that is, $C = 1 \Longleftrightarrow l(X) \leq \theta \leq u(X)$. Then $P(C=1) \geq \gamma$.
Let us collect data so that we have values $x = (x_1,\dots,x_n) \in \mathbb{R}^n$, where $x_i$ is the realization of $X_i$ for all $i$. Then our
degree of belief that the event $C=1$ happened should be $\delta := P(C=1 \vert X = x)$. In general, we cannot compute this conditional probability, but we know that it is either $0$ or $1$, since $(C = 1 \land X = x) \Longleftrightarrow ((l(x) \leq \theta \leq u(x)) \land X = x)$. If $l(x) \leq \theta \leq u(x)$ is false, then the latter statement is false, and thus $\delta=0$. If $l(x) \leq \theta \leq u(x)$ is true, then the latter statement is equivalent to $X=x$, and thus $\delta=1$. If we only know the values $l(x)$ and $u(x)$ and not the data $x$, we can still argue in a similar way that $\delta \in \{0,1\}$.
It is still $P(C=1) \geq \gamma$. If, for our degree of belief that $C=1$ happened, we like this prior probability more, then we must ignore $x$, and this also means ignoring the confidence interval $[l(x),u(x)]$. Saying that $[l(x),u(x)]$ contained $\theta$ with probability at least $\gamma$ would mean acknowledging this evidence and at the same time ignoring it.
Learning More, Knowing Less
What makes this situation so difficult to grasp may be the fact that we cannot compute the conditional probability $\delta$. But this is not particular to the CIE situation; rather, it may occur whenever we have insufficient information about the joint distribution of random variables. As an example, let $X$ and $Y$ be discrete random variables and let their marginal distributions be given, that is, for each $x \in \mathbb{R}$, we know $P(X=x)$ and $P(Y=x)$. This does not give us their joint distribution, that is, we do not know $P(X=x \land Y=y)$ for any $x,y \in \mathbb{R}$. Assume that a result of this experiment should be reported as the value of the random vector $(X,Y)$, that is, results should be reported as pairs of real numbers.
Let the underlying experiment be conducted, and assume that we learn that $Y=7$ happened, while the value for $X$ is still unknown to us. This does not change $P(X=x)$ for any $x$. However, it would be problematic to say that the result of the experiment was of the form $(x,7)$, where $x \in \mathbb{R}$, and that the probability for each particular real number $x$ of being the first component of this pair was $P(X=x)$. It is problematic since in this way, we would acknowledge the evidence $Y=7$ and at the same time ignore it. We acknowledge the evidence $Y=7$ by reporting the second component of the pair as being $7$. We ignore it by using the prior probability $P(X=x)$, where in fact our degree of belief for $X=x$ should now be $P(X=x \vert Y=7) = P(X=x \land Y=7) / P(Y=7)$, which unfortunately we cannot compute.
It may be unsatisfactory that, in a sense, knowing more about $Y$ forces us to say less about $X$. But to the best of my knowledge, this is how things are.
Basic Theorems Regarding the Boundary of a Set in a Topological Space
Recall from The Boundary of a Set in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then a point $x \in X$ is called a boundary point of $A$ if $x$ is contained in the closure of $A$ but not in the interior of $A$, that is, $x \in \bar{A} \setminus \mathrm{int}(A)$.
Equivalently, $x$ is a boundary point of $A$ if every $U \in \tau$ with $x \in U$ intersects $A$ and $A^c$ nontrivially. Furthermore, we said that the set of all boundary points of $A$ is called the boundary of $A$ and is denoted $\partial A$.
We also proved a very important theorem in that the boundary of any set $A$ is a closed set, i.e., $\partial A$ is closed.
We will now look at an important theorem regarding the boundary of sets in a topological space.
Theorem 1: Let $(X, \tau)$ be a topological space and let $A, B \subseteq X$. Then $\partial A \cup \partial B \supseteq \partial (A \cup B)$. Proof: Let $x \in \partial (A \cup B)$. Then $x \in \overline{A \cup B} \setminus \mathrm{int} (A \cup B)$. Since $x \in \overline{A \cup B}$ we have that $x \in \bar{A} \cup \bar{B}$ by one of the theorems on the Basic Theorems Regarding the Closure of Sets in a Topological Space page. Therefore $x \in \bar{A}$ or $x \in \bar{B}$ $(*)$. Similarly, since $x \not \in \mathrm{int} (A \cup B)$ we have that $x \not \in \mathrm{int} (A) \cup \mathrm{int} (B)$ (since if so then $x \in \mathrm{int} (A \cup B)$ since $\mathrm{int} (A) \cup \mathrm{int} (B) \subseteq \mathrm{int} (A \cup B)$ by one of the theorems on the Basic Theorems Regarding the Interior Points of Sets in a Topological Space page). Hence $x \not \in \mathrm{int} (A)$ and $x \not \in \mathrm{int} (B)$ $(**)$. Combining $(*)$ and $(**)$ we see that $x \in \bar{A} \setminus \mathrm{int}(A) = \partial A$ or $x \in \bar{B} \setminus \mathrm{int}(B) = \partial B$, that is, $x \in \partial A \cup \partial B$. Therefore $\partial A \cup \partial B \supseteq \partial (A \cup B)$. $\blacksquare$
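Theorem 1 can be sanity-checked by brute force on a small finite topological space (my own example): below we take $X = \{1,2,3\}$ with topology $\{\emptyset, \{1\}, \{1,2\}, X\}$ and verify $\partial(A \cup B) \subseteq \partial A \cup \partial B$ for all pairs of subsets.

```python
# Brute-force check of Theorem 1 on the finite topological space
# X = {1, 2, 3} with topology {emptyset, {1}, {1, 2}, X}.
from itertools import combinations

X = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(A):
    # union of all open sets contained in A
    return frozenset().union(*(U for U in tau if U <= A))

def closure(A):
    # complement of the union of open sets disjoint from A
    return X - frozenset().union(*(U for U in tau if not (U & A)))

def boundary(A):
    return closure(A) - interior(A)

subsets = [frozenset(s) for r in range(4) for s in combinations(X, r)]
for A in subsets:
    for B in subsets:
        assert boundary(A | B) <= boundary(A) | boundary(B)
```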
Complex Numbers Examples 1
Recall from the Complex Numbers page that numbers in the form $z = a + bi$ where $a, b \in \mathbb{R}$ and $i = \sqrt{-1}$ are called complex numbers and the set of all complex numbers is denoted by $\mathbb{C}$. We will now look at some examples regarding complex numbers.
Example 1 Graph the numbers $-2 - i$ and $2 + i$ in the complex plane.
The graph of these two complex numbers is given below:
Example 2 Determine the values of $i^1$, $i^2$, … $i^n$ for $n \in \mathbb{N}$.
We first have that $i^1 = i$. Then $i^2 = (\sqrt{-1})^2 = -1$, and multiplying by $i$ again gives us that $i^3 = i^2 \cdot i = -i$. Multiplying by $i$ once more gives us $i^4 = -i \cdot i = -i^2 = 1$.
It's not hard to see that the values of $i^n$ cycle with period four:(1)
\begin{align} i^{4k+1} = i \quad i^{4k+2} = -1 \quad i^{4k+3} = -i \quad i^{4k+4} = 1 \end{align}
for $k = 0, 1, 2, ...$.
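As a quick numerical check of this cycle, here is a short Python snippet using `1j`, Python's literal for $i$:

```python
# Powers of i cycle through i, -1, -i, 1 with period four.
i = 1j
for n in range(1, 9):
    print(n, i ** n)
# Every power equals the one four steps later:
print(all(i ** n == i ** (n + 4) for n in range(1, 20)))  # True
```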
Example 3 Verify that the sequence $\{ \frac{i^{2n}}{n} \}_{n=1}^{\infty}$ converges to zero.
Note that $i^{2n} = (i^2)^n = (-1)^n$, so $i^{2n} = -1$ if $n$ is odd and $i^{2n} = 1$ if $n$ is even. The first few terms of this sequence are $\left \{ -1, \frac{1}{2}, -\frac{1}{3}, \frac{1}{4}, ... \right \}$.
We note that the numerator of this sequence is bounded, that is, $-1 \leq i^{2n} \leq 1$ for all $n \in \mathbb{N}$, while the denominator is unbounded since $\lim_{n \to \infty} n = \infty$, and so $\lim_{n \to \infty} \frac{i^{2n}}{n} = 0$.
Example 4 Simplify the product $(2i - 3i^2 + 4)(-i + 2)$. Is this a real number?
When we expand this product we get that:(2)
\begin{align} (2i - 3i^2 + 4)(-i + 2) = -2i^2 + 4i + 3i^3 - 6i^2 - 4i + 8 = 2 + 4i - 3i + 6 - 4i + 8 = 16 - 3i \end{align}
So this number is not a real number since its imaginary part is nonzero. Alternatively, we could have simplified the product from the start by noting that $-3i^2 = 3$, and so $(2i - 3i^2 + 4)(-i + 2) = (2i + 7)(-i + 2)$ and thus:(3)
\begin{align} (2i + 7)(-i + 2) = -2i^2 + 4i - 7i + 14 = 2 - 3i + 14 = 16 - 3i \end{align}
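The arithmetic in this example can be verified with Python's built-in complex numbers; a small sketch:

```python
# Check the Example 4 product using Python's complex type (1j denotes i).
i = 1j
z = (2 * i - 3 * i ** 2 + 4) * (-i + 2)
print(z)            # (16-3j)
print(z.imag == 0)  # False, so the product is not a real number
```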
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time.
I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what’s on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view random questions on a variety of topics or to download paper practice tests.
MTEL General Curriculum Mathematics Practice
Question 1
Which of the following is equal to eleven billion four hundred thousand?
\( \large 11,400,000\)
Hint:
That's eleven million four hundred thousand.
\(\large11,000,400,000\)
\( \large11,000,000,400,000\)
Hint:
That's eleven trillion four hundred thousand. (Under the older British naming convention for "billion" this answer would be correct, but not under US conventions.)
\( \large 11,400,000,000\)
Hint:
That's eleven billion four hundred million.
Question 2
Which of the following is equal to one million three hundred thousand?
\(\large1.3\times {{10}^{6}}\)
\(\large1.3\times {{10}^{9}}\)
Hint:
That's one billion three hundred million.
\(\large1.03\times {{10}^{6}}\)
Hint:
That's one million thirty thousand.
\(\large1.03\times {{10}^{9}}\)
Hint:
That's one billion thirty million.
Question 3
In each expression below N represents a negative integer. Which expression could have a negative value?
\( \large {{N}^{2}}\)
Hint:
Squaring always gives a non-negative value.
\( \large 6-N\)
Hint:
A story problem for this expression is, if it was 6 degrees out at noon and N degrees out at sunrise, by how many degrees did the temperature rise by noon? Since N is negative, the answer to this question has to be positive, and more than 6.
\( \large -N\)
Hint:
If N is negative, then -N is positive
\( \large 6+N\)
Hint:
For example, if \(N=-10\), then \(6+N = -4\)
Question 4
A class is using base-ten blocks to represent numbers. A large cube represents 1000, a flat represents 100, a rod represents 10, and a little cube represents 1. Which of these is not a correct representation for 2,347?
23 flats, 4 rods, 7 little cubes
Hint:
Be sure you read the question carefully: 2300+40+7=2347
2 large cubes, 3 flats, 47 rods
Hint:
2000+300+470 \( \neq\) 2347
2 large cubes, 34 rods, 7 little cubes
Hint:
Be sure you read the question carefully: 2000+340+7=2347
2 large cubes, 3 flats, 4 rods, 7 little cubes
Hint:
Be sure you read the question carefully: 2000+300+40+7=2347
Question 5
Which of the following is an irrational number?
\( \large \sqrt[3]{8}\)
Hint:
This answer is the cube root of 8. Since 2 x 2 x 2 =8, this is equal to 2, which is rational because 2 = 2/1.
\( \large \sqrt{8}\)
Hint:
It is not trivial to prove that this is irrational, but you can get this answer by eliminating the other choices.
\( \large \dfrac{1}{8}\)
Hint:
1/8 is the RATIO of two integers, so it is rational.
\( \large -8\)
Hint:
Negative integers are also rational, -8 = -8/1, a ratio of integers.
Question 6
Here are some statements:
I) \(5\) is an integer
II) \( -5 \) is an integer
III) \(0\) is an integer
Which of the statements are true?
I only
I and II only
I and III only
I, II, and III
Hint:
The integers are ...-3, -2, -1, 0, 1, 2, 3, ....
Question 7
Which of the lists below contains only irrational numbers?
\( \large\pi , \quad \sqrt{6},\quad \sqrt{\dfrac{1}{2}}\)
\( \large\pi , \quad \sqrt{9}, \quad \pi +1\)
Hint:
\( \sqrt{9}=3\)
\( \large\dfrac{1}{3},\quad \dfrac{5}{4},\quad \dfrac{2}{9}\)
Hint:
These are all rational.
\( \large-3,\quad 14,\quad 0\)
Hint:
These are all rational.
Question 8
If x is an integer, which of the following must also be an integer?
\( \large \dfrac{x}{2}\)
Hint:
If x is odd, then \( \dfrac{x}{2} \) is not an integer, e.g. 3/2 = 1.5.
\( \large \dfrac{2}{x}\)
Hint:
Only an integer if x = -2, -1, 1, or 2.
\( \large-x\)
Hint:
-1 times any integer is still an integer.
\(\large\sqrt{x}\)
Hint:
Usually not an integer, e.g. \( \sqrt{2} \approx 1.414 \).
Question 9
In January 2011, the national debt was about 14 trillion dollars and the US population was about 300 million people. Someone reading these figures estimated that the national debt was about $5,000 per person. Which of these statements best describes the reasonableness of this estimate?
It is too low by a factor of 10
Hint:
14 trillion \( \approx 15 \times {{10}^{12}} \) and 300 million \( \approx 3 \times {{10}^{8}}\), so the true answer is about \( 5 \times {{10}^{4}} \) or $50,000.
It is too low by a factor of 100
It is too high by a factor of 10
It is too high by a factor of 100
Question 10
Use the expression below to answer the question that follows. \( \large \dfrac{\left( 4\times {{10}^{3}} \right)\times \left( 3\times {{10}^{4}} \right)}{6\times {{10}^{6}}}\) Which of the following is equivalent to the expression above?
2
Hint:
\(10^3 \times 10^4=10^7\), and note that if you're guessing when the answers are so closely related, you're generally better off guessing one of the middle numbers.
20
Hint:
\( \dfrac{\left( 4\times {{10}^{3}} \right)\times \left( 3\times {{10}^{4}} \right)}{6\times {{10}^{6}}}=\dfrac {12 \times {{10}^{7}}}{6\times {{10}^{6}}}=\)\(2 \times {{10}^{1}}=20 \)
200
Hint:
\(10^3 \times 10^4=10^7\)
2000
Hint:
\(10^3 \times 10^4=10^7\), and note that if you're guessing when the answers are so closely related, you're generally better off guessing one of the middle numbers.
Question 11
Use the expression below to answer the question that follows: \( \large \dfrac{\left( 7,154 \right)\times \left( 896 \right)}{216}\) Which of the following is the best estimate of the expression above?
2,000
Hint:
The answer is bigger than 7,000.
20,000
Hint:
Estimate 896/216 first.
3,000
Hint:
The answer is bigger than 7,000.
30,000
Hint:
\( \dfrac{896}{216} \approx 4\) and \(7154 \times 4\) is over 28,000, so this answer is closest.
Question 12
Use the expression below to answer the question that follows. \( \large 3\times {{10}^{4}}+2.2\times {{10}^{2}}\) Which of the following is closest to the expression above?
Five million
Hint:
Pay attention to the exponents. Adding 3 and 2 doesn't work because they have different place values.
Fifty thousand
Hint:
Pay attention to the exponents. Adding 3 and 2 doesn't work because they have different place values.
Three million
Hint:
Don't add the exponents.
Thirty thousand
Hint:
\( 3\times {{10}^{4}} = 30,000;\) the other term is much smaller and doesn't change the estimate.
Question 13
Use the expression below to answer the question that follows. \(\large \dfrac{\left( 155 \right)\times \left( 6,124 \right)}{977}\) Which of the following is the best estimate of the expression above?
100
Hint:
6124/977 is approximately 6.
200
Hint:
6124/977 is approximately 6.
1,000
Hint:
6124/977 is approximately 6. 155 is approximately 150, and \( 6 \times 150 = 3 \times 300 = 900\), so this answer is closest.
2,000
Hint:
6124/977 is approximately 6.
If you found a mistake or have comments on a particular question, please contact me (please copy and paste at least part of the question into the form, as the numbers change depending on how quizzes are displayed). General comments can be left here.
Recall that a differential equation is an equation (it has an equal sign) that involves derivatives. Just as biologists have a classification system for life, mathematicians have a classification system for differential equations. We can place all differential equations into two types: ordinary differential equations and partial differential equations.
A partial differential equation is a differential equation that involves partial derivatives. An ordinary differential equation is a differential equation that does not involve partial derivatives.
Examples \(\PageIndex{1}\)
\[ \dfrac{d^2y}{dx^2} + \dfrac{dy}{dx} = 3x\; \sin \; y \]
is an ordinary differential equation since it does not contain partial derivatives. While
\[ \dfrac{\partial y}{\partial t} + x \dfrac{\partial y}{\partial x} = \dfrac{x+t}{x-t} \]
is a partial differential equation, since \(y\) is a function of the two variables \(x\) and \(t\) and partial derivatives are present.
In this course we will focus on only ordinary differential equations.
Order
Another way of classifying differential equations is by order. Any ordinary differential equation can be written in the form
\[F(x,y,y',y'',...,y^{(n)})=0 \]
by setting everything equal to zero. The order of a differential equation is the order of the highest derivative that appears in the equation.
Examples \(\PageIndex{2}\)
\[ \dfrac{d^2y}{dx^2} + \dfrac{dy}{dx} = 3x\, \sin \; y \]
is a second order differential equation, since a second derivative appears in the equation.
\[ 3y^4y''' - x^3y' + e^{xy}y = 0 \]
is a third order differential equation.
Once we have written a differential equation in the form
\[ F(x,y,y',y'',...,y^{(n)}) = 0 \]
we can talk about whether a differential equation is linear or not. We say that the differential equation above is a linear differential equation if
\[ \dfrac{\partial^2 F}{\partial y^{(i)} \, \partial y^{(j)} } = 0 \]
for all \(i\) and \(j\). Any linear ordinary differential equation of order \(n\) can be written as
\[ a_0(x)y^{(n)} + a_1(x)y^{(n-1)} +\, ... + a_{n-1}(x)y' + a_n(x)y = g(x) . \]
Examples \(\PageIndex{3}\)
\[ 3x^2y'' + 2\ln \, (x)y' + e^x \, y = 3x\, \text{cos} \, x \]
is a second order linear ordinary differential equation.
\[ 4yy''' - x^3y' + \text{cos}\, y = e^{2x} \]
is
not a linear differential equation because of the \(4yy'''\) and the \(\cos y\) terms.
Nonlinear differential equations are often very difficult or impossible to solve. One approach to getting around this difficulty is to linearize the differential equation.
Example \(\PageIndex{4}\): Linearization
\[ y'' + 2y' + e^y = x \]
is nonlinear because of the \( e^y \) term. However, the Taylor expansion of the exponential function
\[ e^y = 1 + y + \frac{y^2}{2} + \frac{y^3}{6} + \, ... \]
can be approximated by the first two terms
\[ e^y \approx 1 + y. \]
We instead solve the much easier linear differential equation
\[y'' + 2y' + 1 + y = x. \]
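To see why replacing \(e^y\) with \(1 + y\) is reasonable only near \(y = 0\), here is a small Python comparison (the sample points are arbitrary):

```python
import math

# Compare e^y with its linear approximation 1 + y at a few points.
for y in [0.01, 0.1, 0.5, 1.0]:
    exact = math.exp(y)
    approx = 1 + y
    print(f"y = {y}: e^y = {exact:.4f}, 1 + y = {approx:.4f}, error = {exact - approx:.4f}")
# The approximation is excellent for small y and degrades as y grows.
```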
We say that a function \(f(x)\) is a solution to a differential equation if plugging \(f(x)\) into the equation makes the equation hold.
Example \(\PageIndex{5}\)
Show that
\[ f(x) = x + e^{2x} \]
is a solution to
\[ y'' - 2y' = -2. \]
Solution
Taking derivatives:
\[ f'(x) = 1 + 2e^{2x} , f''(x) = 4e^{2x}. \]
Now plug in to get
\[ 4e^{2x} - 2(1 + 2e^{2x}) = 4e^{2x} - 2 - 4e^{2x} = -2 . \]
Hence it is a solution.
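The same verification can be spot-checked numerically; a sketch in plain Python using the derivatives computed above (a spot check at a few points, not a proof):

```python
import math

# f(x) = x + e^(2x), with f'(x) = 1 + 2e^(2x) and f''(x) = 4e^(2x).
def f_prime(x):
    return 1 + 2 * math.exp(2 * x)

def f_double_prime(x):
    return 4 * math.exp(2 * x)

# y'' - 2y' should equal -2 at every x (up to floating-point rounding).
for x in [-1.0, 0.0, 0.5, 2.0]:
    print(x, f_double_prime(x) - 2 * f_prime(x))
```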
Two questions that we will ask repeatedly in a differential equations course are:
Does there exist a solution to the differential equation? Is the solution given unique?
In the example above, the answer to the first question is yes since we verified that
\[f(x)=x+e^{2x} \]
is a solution. However, the answer to the second question is no. It can be verified that
\[s(x) = 4 + x\]
is also a solution.
Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
I'm taking a stochastic processes class, and we looked at the example of Gambler's ruin with infinite target, i.e. the gambler stops when he reaches 0 fortune or N, in the limit of N going to infinity.
For a finite target, in the case of $p=q$, the probability to ruin and expected time to ruin for starting fortune $a$ are given by:
$$P_a = 1 - \frac{a}{N}$$
$$T_a = a(N-a)$$
As $N\rightarrow \infty$, $P_a \rightarrow 1$ and $T_a \rightarrow \infty$. Now there seems to be something very wrong with the probability measure on the set of possible trajectories. In calculating the probability to ruin, the set of trajectories going to infinity is of measure zero, but in calculating the stopping time, it seems that the set of trajectories going to infinity have been assigned some positive measure. So what is going wrong when taking the limit $N\rightarrow \infty$?
When $N$ is finite, the number of possible trajectories is countably infinite; is this still the case in the limit $N\rightarrow \infty$?
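As a sanity check on the finite-$N$ formulas quoted above, here is a Monte Carlo sketch in Python; the choices $a = 3$, $N = 10$, the seed, and the trial count are arbitrary:

```python
import random

# Symmetric gambler's ruin (p = q = 1/2): start at fortune a, stop at 0 or N.
def play(a, N, rng):
    x, steps = a, 0
    while 0 < x < N:
        x += rng.choice((-1, 1))
        steps += 1
    return x == 0, steps

rng = random.Random(0)
a, N, trials = 3, 10, 20000
ruins = 0
total_steps = 0
for _ in range(trials):
    ruined, steps = play(a, N, rng)
    ruins += ruined
    total_steps += steps

print(ruins / trials)        # close to P_a = 1 - a/N = 0.7
print(total_steps / trials)  # close to T_a = a*(N - a) = 21
```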
For an elliptic curve over $\mathbb{Q}$ that is defined with large coefficients, it can take mathematical software (such as Sage) a long time to calculate the analytic rank. However, it seems to quickly know if the rank is even or odd.
I would like to understand how they determine this so quickly.
This is hinted at in a PlanetMath article as the "root number" obtained from the sign of the functional equation:
$$\Lambda(E,s) = \pm \Lambda(E,2 - s)$$
Where $\Lambda$ is related to the $L$ function by: $$\Lambda(E,s) = N^{s/2} (2\pi)^{-s} \Gamma(s) L(E,s)$$ where $N$ is the conductor of $E$ over $\mathbb{Q}$.
(I'm still a little confused on the detailed definitions. Another reference, with slightly different definition relating $\Lambda$ and $L$, http://www.math.harvard.edu/~gross/preprints/ell2.pdf )
Anyway, the expansion definition of $L$ does not look like it would be valid for both sides of the functional equation, which would prevent just evaluating $\Lambda$ to check the sign. And due to the speed, I'm guessing Sage isn't evaluating $L$ at all here (or can immediately tell just by looking at a couple terms in the expansion?).
Is there some trick that allows extracting the sign without evaluating $L$?
Dirac's Theorem
Theorem 1 (Dirac's Theorem): If $G = (V(G), E(G))$ is a connected graph on $n$ vertices, where $n \geq 3$, such that $\deg (x) + \deg (y) \geq n$ for every pair of distinct vertices $x, y \in V(G)$, then $G$ is a Hamiltonian graph.
Let's verify Dirac's theorem by testing to see if the following graph is Hamiltonian:
Clearly the graph is Hamiltonian. However, let's test all pairs of vertices:
$\deg(x) + \deg(y) \geq n$: True/False?
$\deg(a) + \deg(b) \geq 6$, so $5 + 5 \geq 6$: True
$\deg(a) + \deg(c) \geq 6$, so $5 + 4 \geq 6$: True
$\deg(a) + \deg(d) \geq 6$, so $5 + 5 \geq 6$: True
$\deg(a) + \deg(e) \geq 6$, so $5 + 5 \geq 6$: True
$\deg(a) + \deg(f) \geq 6$, so $5 + 4 \geq 6$: True
$\deg(b) + \deg(c) \geq 6$, so $5 + 4 \geq 6$: True
$\deg(b) + \deg(d) \geq 6$, so $5 + 5 \geq 6$: True
$\deg(b) + \deg(e) \geq 6$, so $5 + 5 \geq 6$: True
$\deg(b) + \deg(f) \geq 6$, so $5 + 4 \geq 6$: True
$\deg(c) + \deg(d) \geq 6$, so $4 + 5 \geq 6$: True
$\deg(c) + \deg(e) \geq 6$, so $4 + 5 \geq 6$: True
$\deg(c) + \deg(f) \geq 6$, so $4 + 4 \geq 6$: True
$\deg(d) + \deg(e) \geq 6$, so $5 + 5 \geq 6$: True
$\deg(d) + \deg(f) \geq 6$, so $5 + 4 \geq 6$: True
So by Dirac's theorem, this graph must be Hamiltonian. We will now look at Ore's theorem.
Ore's Theorem
Ore's theorem is a vast improvement to Dirac's theorem:
Theorem 2 (Ore's Theorem): If $G = (V(G), E(G))$ is a connected graph on $n$ vertices, where $n \geq 3$, such that $\deg (x) + \deg (y) \geq n$ for each pair of non-adjacent vertices $x, y \in V(G)$ with $x \neq y$, then $G$ is a Hamiltonian graph.
Recall that two vertices $x$ and $y$ are said to be adjacent if they are connected by an edge, that is, if $\left\{ x, y \right\} \in E(G)$. Thus, non-adjacent vertices $x$ and $y$ are vertices such that $\left\{x, y \right\} \notin E(G)$.
Let's see if this theorem is accurate by testing the graph from earlier, let's call it $G$. This graph is on $6$ vertices and is clearly Hamiltonian. Let's verify this with Ore's theorem. The only pair of non-adjacent vertices is $c$ and $f$, since $\left\{ c, f \right\} \notin E(G)$. For Ore's theorem to apply, it must follow that:(1)
\begin{align} \deg (c) + \deg (f) = 4 + 4 = 8 \geq 6 \end{align}
Thus according to Ore's theorem, the graph $G$ is Hamiltonian.
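Since the example graph appears only as a figure, here is a Python sketch of both degree checks, assuming the graph is $K_6$ minus the edge $\{c, f\}$ (a hypothetical reconstruction, consistent with the degrees and the single non-adjacent pair listed above):

```python
from itertools import combinations

# Hypothetical reconstruction of the example graph: K6 minus the edge {c, f}.
vertices = ["a", "b", "c", "d", "e", "f"]
edges = {frozenset(p) for p in combinations(vertices, 2)} - {frozenset({"c", "f"})}

def deg(v):
    return sum(1 for e in edges if v in e)

n = len(vertices)

# Dirac-style check: deg(x) + deg(y) >= n for every pair of distinct vertices.
dirac = all(deg(x) + deg(y) >= n for x, y in combinations(vertices, 2))

# Ore's check: the same bound, but only for non-adjacent pairs.
ore = all(deg(x) + deg(y) >= n
          for x, y in combinations(vertices, 2)
          if frozenset({x, y}) not in edges)

print(dirac, ore)  # True True
```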
Note that these theorems provide a sufficient condition for a graph to be Hamiltonian. If a graph $G$ does not pass this test, it does not follow that $G$ is not Hamiltonian. However, if the graph $G$ does pass this test, then it is definitely Hamiltonian.
The Law of Cosines is presented as a geometric result that relates the parts of a triangle:
While true, there’s a deeper principle at work.
The Law of Interactions: The whole is based on the parts and the interaction between them.
The wording “Law of Cosines” gets you thinking about the mechanics of the formula, not what it means. Part of my learning strategy is rewording ideas into ones that make sense.
The Law of Cosines, after cranking through geometric steps we’re prone to forget, looks like $c^2 = a^2 + b^2 – 2ab\cos(C)$.
This is suspiciously like the expansion that if $c = (a + b)$, then $c^2 = a^2 + b^2 + 2ab$
The difference is that $2ab$ has an extra factor, $\cos(C)$, which measures the “actual overlap percentage” ($2ab$ assumes we fully overlap, i.e. where $\cos(C) = 1$).
So, the Law of Cosines is really a generalization of how $c^2 = (a + b)^2$ expands when components aren’t fully lined up. We’re treating geometric lines as terms in an algebraic expansion.
Analogy: The Assistant Chef
Imagine a restaurant with a single chef, Alice. She’s overworked, so Bob is hired as her assistant (sous chef).
Based on Alice’s current performance, and Bob’s performance in his interview, what happens when they work together?
Surely the new result must be their combined effort: Total = Alice + Bob.
Hah! Office workers everywhere are rolling their eyes. You can’t just assume people contribute identically when they’re put together: there are interactions to account for.
Beyond their individual contributions, the two might slow each other down (Where'd you put the whisk again?), or find ways to work together (I'm peeling carrots anyway, use some of mine.).
In a system with several parts, start with the individual contributions and then ask if their interaction will:
Help each other
Hurt each other
Ignore each other
The original idea that "Total = Alice + Bob" is more generally expressed as: Total = Alice + Bob + (interaction between Alice and Bob).
Exploring The Scenario
We need to separate the list of participants (Alice, Bob) from the result of their interaction.
Take the numbers 5 and 3. We can write them like so:
Parts = (5, 3)
and we’re pretty sure they combine to make 8. But is there another way to get that conclusion?
Yes: we multiply. Beyond repeated counting, multiplication shows what happens when the parts of a system interact: $(5 + 3)(5 + 3) = 64$.
We've gone from "parts view", $(5, 3)$, to "interaction view", $(5 + 3)^2$. The result of interaction mode says the system would result in 64 if it did interact with itself.
One caveat: when going to interaction view, we wrote down $(5 + 3)(5 + 3)$, but we can’t simplify $(5 + 3) = 8$ on the outset. We’re using addition for bookkeeping until multiplication can combine the parts.
Oh, another caveat: why can we just add the interactions, but not the parts? Great question. The individual parts might be pointing in different dimensions, and don't line up nicely on the same scale. The interacting parts turn into area, which can be combined to the same result no matter the orientation.
(I’ll investigate this concept more in a follow-up. It’s a neat idea that area is a generic, easily combinable quantity but individual paths are not.)
Generalizing the Principle
Simple setups like (5, 3) are easy to think through, like eyeballing $2x + 3 = 7$ and guessing $x = 2$. But a more complex scenario like $x^2 + 3x = 15$ requires a systematic approach.
The Law of Cosines is a systematic approach to working through the parts:
List the parts
Get every interaction as area
Add to find the total contribution
Convert into the equivalent "single part"
The last step is often implied. Once we've merged the jumble of interactions, we want the single part that could represent the entire system. Is there a single person (Charlie) whose efforts are identical to that of Alice and Bob working together?
The Law of Cosines gives us a way to find Charlie.
What’s the Deal with Cosine?
When two parts interact, they can help, hurt, or ignore each other:
Perfect alignment means they help 100% (5 and 3)
Perfect mis-alignment means they hurt 100% (5 and -3)
Partial alignment or mis-alignment means they help or hurt by a percentage
No alignment means they ignore each other
How do we measure alignment? With cosine.
Using our trig analogy, cosine is the percentage an angle moves along the ground.
A 0-degree angle follows the ground perfectly (100%), and moving vertically doesn’t follow it at all (0%). Other angles are a fraction in-between.
If the parts in our system can be written as paths, and we know the angle between them is theta ($\theta$), then we can measure the overlap with cosine. One path acts as the ground, and the other is the path we’re following:
When paths are perfectly aligned, their full strength is used ($ab$ and $ba$). The interaction factor $\cos(\theta)$ modifies that strength to show how much they actually work together.
So, our jumble of interactions becomes $c^2 = a^2 + b^2 + 2ab\cos(\theta)$.
Phew! And that’s the Law of Cosines: collect every interaction, account for the alignment, and simplify it to a single part. (The formula is usually written without the square root, but usually you want $c$, not $c^2$.)
Now, why is the Law of Cosines often written with a negative sign? Well, the assumption is that in a typical triangle, a small internal angle $C$ means the sides are negatively aligned, while theta ($\theta$) is an external look at their alignment:
Similarly, a large internal angle means the sides are positively aligned, and will help each other. Typically, a small angle means you’re moving in the same direction, but this internal/external difference means we reverse the sign.
Personally, I don’t memorize whether there’s a positive or negative sign: I think about whether the parts will help or hurt each other in the scenario, and make the interaction positive or negative. Don’t be a slave to the formula.
Quick Practice Problem
Let’s say my triangle has side $a = 10$ and side $b = 20$. What is side $c$ when the angle between $a$ and $b$ is:
45 degrees in alignment
Here, we need the Law of Cosines. $a$ and $b$ are pointing partially in the same direction. We switch to interaction mode to get to a common, combinable unit (area):
$a^2 = 100$
$b^2 = 400$
$2ab = 2 \cdot 10 \cdot 20 = 400$, but we need to adjust by the interaction factor. That is $\cos(45) \approx .707$, so the real interaction is $400 \cdot .707 = 282.8$
The overall interactions are:

$c^2 = a^2 + b^2 + 2ab\cos(\theta) = 100 + 400 + 282.8 = 782.8$

and the equivalent single side is $c = \sqrt{782.8} \approx 27.98$.
70 degrees in mis-alignment
Again, we need the Law of Cosines. We can see that the angles fight each other, so the interaction will be negative: $\cos(70) \approx 0.342$, so the interaction is $-2ab\cos(70) \approx -400 \cdot 0.342 = -136.8$, giving

$c^2 = 100 + 400 - 136.8 = 363.2 \qquad c = \sqrt{363.2} \approx 19.06$

Our intuition says this arrangement should be smaller than the previous one (since the sides aren't working together), and it is.

Full alignment or mis-alignment
When our “triangle” has an angle of 0 degrees (or 180), all the parts are lying flat. Here, the parts are in the same dimension, and can be treated as regular numbers:
Fully aligned: 10 + 20 = 30
Fully mis-aligned: 10 - 20 = -10 (pointing in direction of B).
The Law of Cosines still works, of course:
Full alignment: $a^2 + b^2 + 2ab\cos(\theta) = 100 + 400 + 400\cos(0) = 900$ and $c = \sqrt{900} = 30$
Full mis-alignment: $a^2 + b^2 + 2ab\cos(\theta) = 100 + 400 + 400\cos(180) = 100$, which means $c = \sqrt{100} = 10$ (pointing backwards).
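The worked numbers above can be bundled into a small helper; a Python sketch using the article's "interaction" convention, where $\theta = 0$ means full alignment (the function name is my own):

```python
import math

# c from the interaction form c^2 = a^2 + b^2 + 2ab*cos(theta),
# where theta is the external alignment angle between the sides.
def combined_side(a, b, theta_degrees):
    theta = math.radians(theta_degrees)
    return math.sqrt(a ** 2 + b ** 2 + 2 * a * b * math.cos(theta))

print(combined_side(10, 20, 0))    # fully aligned: 30.0
print(combined_side(10, 20, 45))   # partially aligned: about 27.98
print(combined_side(10, 20, 90))   # no interaction: about 22.36
print(combined_side(10, 20, 180))  # fully mis-aligned: 10.0
```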
Again, we shouldn’t robotically follow the formula: have a rough idea what the result should be, and think through the calculations. (“The overall interaction is this, so the individual side would that…”).
Thinking of interactions is one interpretation: next time, we’ll see it as the Law of Projections.
Happy math.
Appendix: Pythagorean Theorem
The Law of Cosines resembles the Pythagorean Theorem, no?
Now you might suspect why. The Pythagorean Theorem is the special case of zero interaction, which happens when the sides are at right angles. After all, a 90-degree angle is vertical, and has 0% overlap with the ground.
The Law of Cosines becomes $c^2 = a^2 + b^2 + 2ab\cos(90) = a^2 + b^2$.
If we know the parts won't interact, we can ignore interaction effects. However, the self-interactions are still there and must be combined: $a^2$ and $b^2$ are fine, but the crossover terms $ab$ and $ba$ disappear.
Here's another version of the Pythagorean Theorem. We can't combine $a$ and $b$ directly, so combine their interactions and reduce them to a single part: $c = \sqrt{a^2 + b^2}$.
Appendix: The Geometric Proof
You might be hankering for a geometric proof. Here’s one from quora, based on a paper by Knuth:
The insight is that we take our original $a-b-c$ triangle and scale it by $a$ (giving the $a^2-ab-ac$ triangle) and $b$ (giving the $ab-b^2-bc$ triangle). These two triangles build a larger, similar triangle $ac-bc-c^2$, and with some trig, the bottom portion can be shown to equal $a^2 + b^2 – 2ab\cos(\theta)$.
While interesting, I don’t like these types of proofs up front. The Law of Cosines is about interactions, not re-arranging triangles. Does this explanation get you thinking about what cosine represents? About when it should be positive, negative, or zero?
Appendix: Another Way to Remember
Imagine sides A and B are pointing in the same direction along the horizontal number line. This means $c = a + b$, and the Law of Cosines reduces to $c^2 = (a + b)^2 = a^2 + b^2 + 2ab$.
So, for a 180-degree interior angle, we get a regular algebraic statement. This helps me remember, on the fly, when to add vs. subtract. We add $2ab\cos(\theta)$ when the interior angle is large.
ADEPT Summary
ADEPT Summary:
Concept: Law of Cosines
Analogy: Imagine an assistant chef whose interactions may (or may not) be helpful.
Diagram / Example: Suppose $a = 10$ and $b = 20$ in a triangle. If they are aligned 45-degrees, their interaction is $a^2 + b^2 + 2ab\cos(45) = 782.8$ and the remaining side is $\sqrt{782.8} = 27.97$ units long.
Plain-English: The Law of Interactions: The whole is based on the parts and the interaction between them.
Technical: Triangle with internal angle C: $c^2 = a^2 + b^2 - 2ab\cos(C)$. General interaction: $c^2 = a^2 + b^2 + 2ab\cos(\theta)$
2018-08-25 06:58
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$–p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
Preliminary Definitions for The Theory of First Order ODEs
Before we move on to some of the theory regarding first order ordinary differential equations we will need to state some important definitions from real analysis.
Pointwise and Uniform Convergence of Sequences of Functions
Definition: Let $(f_m)$ be a sequence of real-valued functions defined on $D \subseteq \mathbb{R}^n$. Then $(f_m)$ is said to Converge Pointwise to the function $f$ on $D$ if for all $\epsilon > 0$ and for all $x \in D$ there exists an integer $M$ such that if $m \geq M$ we have that $| f_m(x) - f(x) | < \epsilon$.
Definition: Let $(f_m)$ be a sequence of real-valued functions defined on $D \subseteq \mathbb{R}^n$. Then $(f_m)$ is said to Converge Uniformly to the function $f$ on $D$ if for all $\epsilon > 0$ there exists an integer $M$ such that if $m \geq M$ we have that $| f_m(x) - f(x) | < \epsilon$ for all $x \in D$.
The definition for a sequence of functions $(f_m)$ to converge pointwise to $f$ on $D$ and to converge uniformly to $f$ on $D$ are very similar but still DIFFERENT!
Note that $(f_m)$ converging pointwise to $f$ means that for all $\epsilon > 0$, for each individual $x \in D$ we can make the difference $| f_m(x) - f(x) | < \epsilon$ by choosing an integer $M$ sufficiently large. On the other hand, $(f_m)$ converging uniformly to $f$ means that for all $\epsilon > 0$ we can make the difference $| f_m(x) - f(x) | < \epsilon$ for ALL $x \in D$ by choosing an integer $M$ sufficiently large. So for pointwise convergence, the choice of $M$ is dependent on both $\epsilon$ and $x$, while for uniform convergence, the choice of $M$ is dependent on only $\epsilon$. Hence pointwise convergence is a "weaker" property than uniform convergence.
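To make the distinction concrete, here is a small numerical sketch (my own example, not from the text): $f_m(x) = x^m$ on $[0, 1)$ converges pointwise to $0$, but $\sup_{x} |f_m(x)|$ never falls below any fixed $\epsilon < 1$, so the convergence is not uniform.

```python
import numpy as np

# Illustrative example (not from the text): f_m(x) = x^m on [0, 1)
# converges pointwise to f(x) = 0, but not uniformly, since
# sup_{x in [0,1)} |f_m(x) - 0| = 1 for every m.
xs = np.linspace(0.0, 0.999, 1000)

def f_m(m, x):
    return x ** m

# Pointwise: at each fixed x < 1, f_m(x) -> 0 as m grows.
assert f_m(200, 0.5) < 1e-10
assert f_m(5000, 0.99) < 1e-10

# Not uniform: the sup of |f_m - 0| over [0, 1) stays bounded away from 0.
for m in [10, 100, 1000]:
    sup_diff = np.max(np.abs(f_m(m, xs)))
    assert sup_diff > 0.3  # an M working for every x at once does not exist
```

Here the integer $M$ needed at a point $x$ blows up as $x \to 1$, which is exactly the dependence on $x$ that uniform convergence forbids.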
Cauchy and Uniformly Cauchy Sequences of Functions
Definition: Let $(f_m)$ be a sequence of real-valued functions defined on $D \subseteq \mathbb{R}^n$. Then $(f_m)$ is said to be Cauchy if for all $\epsilon > 0$ and for all $x \in D$ there exists an integer $M$ such that if $m, n \geq M$ we have that $| f_m(x) - f_n(x) | < \epsilon$.
Definition: Let $(f_m)$ be a sequence of real-valued functions defined on $D \subseteq \mathbb{R}^n$. Then $(f_m)$ is said to be Uniformly Cauchy if for all $\epsilon > 0$ there exists an integer $M$ such that if $m, n \geq M$ we have that $| f_m(x) - f_n(x) | < \epsilon$ for all $x \in D$.
As with the remarks made above, the concept of a sequence $(f_m)$ being Cauchy is a weaker property than that of being uniformly Cauchy.
Theorem 1: Let $(f_m)$ be a sequence of real-valued functions that are continuous and defined on a compact set $D \subseteq \mathbb{R}^n$. Then $(f_m)$ is uniformly Cauchy on $D$ if and only if there exists a continuous function $f$ defined on the compact set $D$ such that $(f_m)$ converges uniformly to $f$ on $D$.
Uniformly Bounded Sets of Functions
Definition: Let $\mathcal F$ be a collection of functions defined on $D \subseteq \mathbb{R}^n$. Then $\mathcal F$ is said to be Uniformly Bounded on $D$ if there exists an $M > 0$ such that $| f(x) | \leq M$ for all $x \in D$ and for all $f \in \mathcal F$.
For example, consider the following collection of functions which we define on $[0, 1]$:
\begin{align} \quad \mathcal F &= \left \{ f_n(x) = \frac{1}{n}x : n \in \mathbb{N} \right \} \\ &= \left \{ x, \frac{1}{2}x, \frac{1}{3}x, ... \right \} \end{align}
It is not hard to show that $\mathcal F$ is uniformly bounded. Take $M = 1$. Then $\displaystyle{\biggr \lvert \frac{1}{n}x \biggr \rvert = \biggr \lvert \frac{1}{n} \biggr \rvert | x | \leq \frac{1}{n} \leq M = 1}$ for all $x \in [0, 1]$ and for all $f_n \in \mathcal F$.
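A quick numerical check of this bound (illustrative only, mirroring the argument above):

```python
import numpy as np

# Sketch (illustrative, not from the text): the family f_n(x) = x / n on
# [0, 1] is uniformly bounded by M = 1, since |x / n| <= 1/n <= 1 there.
xs = np.linspace(0.0, 1.0, 501)
M = 1.0

# One bound M works for every member of the family at once.
sup_over_family = max(np.max(np.abs(xs / n)) for n in range(1, 101))
assert sup_over_family <= M
```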
Equicontinuous Sets of Functions
Definition: Let $\mathcal F$ be a collection of functions defined on $D \subseteq \mathbb{R}^n$. Then $\mathcal F$ is said to be Equicontinuous on $D$ if for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $| x - y | < \delta$ then $| f(x) - f(y) | < \epsilon$ for all $x, y \in D$ and for all $f \in \mathcal F$.
Lipschitz Conditions on Continuous Functions
Definition: Let $f \in C(D, \mathbb{R})$. Then $f$ is said to satisfy a Lipschitz Condition on $D$ if there exists an $L > 0$ such that for all $(t, x), (t, y) \in D$ we have that $| f(t, x) - f(t, y) | \leq L |x - y|$. The constant $L$ is called a Lipschitz Constant, and $f(t, x)$ is said to be Lipschitz Continuous in the variable $x$.
Contraction Mappings
Definition: Let $(X, d)$ be a metric space. A Contraction Mapping on this metric space is a function $T : X \to X$ with the property that there exists a $k$ with $0 < k < 1$ with $d(T(x), T(y)) \leq kd(x, y)$ for all $x, y \in X$.
Fixed Points
Definition: Let $(X, d)$ be a metric space and let $T : X \to X$ be a contraction mapping. A Fixed Point $x^* \in X$ is a point with the property that $T(x^*) = x^*$.
Banach's Fixed Point Theorem
Theorem 1: Let $(X, d)$ be a complete metric space and let $T : X \to X$ be a contraction mapping. Then $T$ has a unique fixed point $x^* \in X$.
Recall that a metric space $(X, d)$ is said to be complete if every Cauchy sequence in $X$ converges to a point in $X$.
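As an illustration of the theorem (my own example, not from the text): $T(x) = \cos(x)$ is a contraction on $[0, 1]$ since $|T'(x)| = |\sin(x)| \leq \sin(1) < 1$ there, and $[0, 1]$ with the usual metric is complete, so fixed-point iteration converges to the unique fixed point.

```python
import math

# Illustrative sketch: iterate a contraction mapping to its unique fixed
# point, as guaranteed by Banach's fixed point theorem. Here T = cos on
# [0, 1]; the fixed point solves cos(x*) = x* (approximately 0.739).

def iterate_to_fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

x_star = iterate_to_fixed_point(math.cos, 0.5)
assert abs(math.cos(x_star) - x_star) < 1e-10  # T(x*) = x*
assert abs(x_star - 0.7390851332) < 1e-6
```

The geometric convergence rate is governed by the contraction constant $k = \sin(1) \approx 0.84$.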
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subjected to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
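If you want to try that rule outside a game engine, a minimal sketch (all parameter values and names invented for illustration) of the proposed A/B rule on a 1-D grid:

```python
import random

# Sketch of the rule proposed above, with assumed details: a 1-D grid of
# cells of type 'A' or 'B'; a B turns into A if any neighbour is A, and an
# A turns into B after AGE_LIMIT steps.
AGE_LIMIT = 10

def step(types, ages):
    n = len(types)
    new_types, new_ages = list(types), [a + 1 for a in ages]
    for i, t in enumerate(types):
        neighbours = [types[j] for j in (i - 1, i + 1) if 0 <= j < n]
        if t == "B" and "A" in neighbours:
            new_types[i], new_ages[i] = "A", 0   # infection by a neighbour
        elif t == "A" and ages[i] >= AGE_LIMIT:
            new_types[i], new_ages[i] = "B", 0   # particle "extinction" timer
    return new_types, new_ages

random.seed(0)
types = [random.choice("AB") for _ in range(50)]
ages = [0] * 50
for _ in range(100):
    types, ages = step(types, ages)
assert set(types) <= {"A", "B"}
```

Whether this particular discretization produces the reaction-diffusion-like patterns speculated above would need experimenting with the age limit and neighbourhood.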
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled red markers and some scribbles
The documentary then showed one of the bird's eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays and they notate the range of indices of the tensor array
In some tiles, there's a swirl of dirt mount, they represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible laying across the grid, holding up a triangle pool of water
Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied so that the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also ask about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta}{}_{\gamma\delta\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the range that the $\alpha,\beta$ indices run over. For example, to encode a patch of nonzero curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$
However, even if the indices are restricted to certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
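One hypothetical way to realize such an object in code (all names, ranges, and index sets here are assumptions made for illustration, not from the dream description) is a mapping keyed by index tuples, where the trailing indices are drawn from restricted, non-consecutive sets:

```python
from itertools import product

# Hypothetical sketch: a "tensor array" stored as a dict keyed by index
# tuples, where the trailing indices run over restricted sets rather than
# full consecutive ranges.
alpha_range = range(4)
beta_range = range(4)
gamma_set = {4, 9}   # restricted index sets, as in the example above
delta_set = {2, 3}

A = {
    (a, b, g, d): 0.0
    for a, b, g, d in product(alpha_range, beta_range, gamma_set, delta_set)
}

# Components exist only for the allowed index combinations:
assert (0, 0, 4, 2) in A
assert (0, 0, 5, 2) not in A
assert len(A) == 4 * 4 * 2 * 2
```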
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I had liked to talk to him here.
@DanielSank I am only curious, what he did. Maybe he attacked the whole network? Or he took a site-level conflict to the IRL world? As I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon, but I can't see here the meaning of "campaign" enough well-defined. And yes, it is a little bit of source of fear for me, that maybe my behavior can be also measured as if "I would campaign for my caging". |
A) Atom is indivisible
B) Gases combine in a simple ratio
C) There is no influence of gravity on the molecules of a gas
D) None of the above
A) There are intermolecular attractions
B) Molecules have considerable volume
C) No intermolecular attractions
D) The velocity of molecules decreases after each collision
A) The average velocity of the molecules
B) The most probable velocity of the molecules
C) The square root of the average square velocity of the molecules
D) The most accurate form in which velocity can be used in these calculations
A) Molecular mass
B) Atomic mass
C) Equivalent mass
D) None of these
A) Pressure of the gas
B) Temperature of the gas
C) Volume of the gas
D) Pressure, volume and temperature of the gas
A) 1.5 RT
B) RT
C) 0.5 RT
D) 2.5 RT
A) \[P=\frac{2}{3}E\]
B) \[P=\frac{3}{2}E\]
C) \[P=\frac{1}{2}E\]
D) \[P=2E\]
A) Pressure
B) Force
C) Temperature
D) Molar mass
A) Two times that of a hydrogen molecule
B) Same as that of a hydrogen molecule
C) Four times that of a hydrogen molecule
D) Half that of a hydrogen molecule
A) Kinetic energy of the gas becomes zero but the molecular motion does not become zero
B) Kinetic energy of the gas becomes zero and molecular motion also becomes zero
C) Kinetic energy of the gas decreases but does not become zero
D) None of the above
A) Three times the absolute temperature
B) Absolute temperature
C) Two times the absolute temperature
D) 1.5 times the absolute temperature
A) The pressure exerted by the gas is proportional to the mean velocity of the molecules
B) The pressure exerted by the gas is proportional to the root mean square velocity of the molecules
C) The root mean square velocity is inversely proportional to the temperature
D) The mean translational kinetic energy of the molecules is proportional to the absolute temperature
A) Have equal average kinetic energies
B) Have equal molecular speeds
C) Occupy equal volumes
D) Have equal effusion rates
Which of the following expressions correctly represents the relationship between the average molar kinetic energy, \[\overline{K.E.}\], of CO and \[{{N}_{2}}\] molecules at the same temperature [CBSE PMT 2000]
A) \[{{\overline{KE}}_{CO}}={{\overline{KE}}_{{{N}_{2}}}}\]
B) \[{{\overline{KE}}_{CO}}>{{\overline{KE}}_{{{N}_{2}}}}\]
C) \[{{\overline{KE}}_{CO}}<{{\overline{KE}}_{{{N}_{2}}}}\]
D) Cannot be predicted unless the volumes of the gases are given
A) The average translational KE per molecule is the same in \[{{N}_{2}}\] and \[C{{O}_{2}}\]
B) The rms speed remains constant for both \[{{N}_{2}}\] and \[C{{O}_{2}}\]
C) The density of \[{{N}_{2}}\] is less than that of \[C{{O}_{2}}\]
D) The total translational KE of both \[{{N}_{2}}\] and \[C{{O}_{2}}\] is the same
A) Decreases
B) Increases
C) Does not change
D) Becomes zero
A) The most probable speed increases
B) The fraction of the molecules with the most probable speed increases
C) The distribution becomes broader
D) The area under the distribution curve remains the same as under the lower temperature
A) \[\frac{RT}{PM}\]
B) \[\frac{P}{RT}\]
C) \[\frac{M}{V}\]
D) \[\frac{PM}{RT}\]
A) \[P=0.5\,atm,\,T=600\,K\]
B) \[P=2\,atm,\,T=150\,K\]
C) \[P=1\,atm,\,T=300\,K\]
D) \[P=1.0\,atm,\,T=500\,K\]
A) 298 K
B) 273 K
C) 193 K
D) 173 K
A) \[6.02\times {{10}^{23}}\]
B) \[1.2\times {{10}^{24}}\]
C) \[3.01\times {{10}^{23}}\]
D) \[2.01\times {{10}^{23}}\]
A) 0.00065
B) 0.65
C) 14.4816
D) 14.56
At \[{{100}^{o}}C\] and 1 atm, if the density of liquid water is 1.0 g \[c{{m}^{-3}}\] and that of water vapour is 0.0006 g \[c{{m}^{-3}}\], then the volume occupied by water molecules in 1 litre of steam at that temperature is [IIT 2000]
A) 6 \[c{{m}^{3}}\]
B) 60 \[c{{m}^{3}}\]
C) 0.6 \[c{{m}^{3}}\]
D) 0.06 \[c{{m}^{3}}\]
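For the steam question above, the arithmetic can be checked directly from the densities given in the question:

```python
# Volume occupied by the water molecules (at liquid density) in 1 L of
# steam at 100 deg C, using the densities from the question above.
rho_liquid = 1.0       # g/cm^3
rho_vapour = 0.0006    # g/cm^3
V_steam = 1000.0       # cm^3 (1 litre)

mass = rho_vapour * V_steam          # 0.6 g of water in 1 L of steam
V_molecules = mass / rho_liquid      # same mass compressed to liquid density

assert abs(mass - 0.6) < 1e-12
assert abs(V_molecules - 0.6) < 1e-12   # 0.6 cm^3, i.e. option C
```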
A) 1.33
B) 1.66
C) 2.13
D) 1.99
A) S.T.P.
B) \[{{0}^{o}}C,\,2\,atm\]
C) \[{{273}^{o}}C,\,1\,atm\]
D) \[{{273}^{o}}C,\,2\,atm\]
A) At which all molecular motion ceases
B) At which liquid helium boils
C) At which ether boils
D) All of the above
Consider the following statements : (1) Joule-Thomson experiment is isoenthalpic as well as adiabatic. (2) A negative value of \[{{\mu }_{JT}}\] (Joule-Thomson coefficient) corresponds to warming of a gas on expansion. (3) The temperature at which neither cooling nor heating effect is observed is known as inversion temperature. Which of the above statements are correct
A) 1 and 2
B) 1 and 3
C) 2 and 3
D) 1, 2 and 3
A) Partially potential and partially kinetic
B) Only potential
C) Only kinetic
D) None of the above
A) Hydrogen
B) Oxygen
C) Methane
D) All the same
A) Energy
B) Force
C) Energy per unit volume
D) Force per unit volume
A) 3 M
B) \[\sqrt{3}\]M
C) \[M/3\]
D) \[M/\sqrt{3}\]
Spatial resolution
Geometric effects
When working with a transmission electron microscope (TEM) in scanning (STEM) or focused probe mode, the spatial resolution depends on several effects. For probes greater than ~2 nm and thicker samples (greater than ~75 nm), you can approximate the resolution with simple geometric arguments that relate to the beam broadening in the specimen due to both elastic and inelastic scattering. As shown schematically in the figure below, the Auger electron signal is generated from a narrow region at the entrance and exit surfaces of the sample. The energy dispersive x-ray spectroscopy (EDS) signal is generated from the total interaction volume of the electron beam. This interaction volume is significantly broadened by electron scattering in the sample. On the other hand, the electron energy loss spectroscopy (EELS) signal detects only the energy change in the primary electron beam, which is predominately forward scattered. The broadening due to elastic scattering can affect the EELS signal, but you can limit this effect with an angle-limiting aperture that keeps high-angle scattering from entering the spectrometer.
Secondary excitations
In addition to the direct excitation process, it is also possible to observe secondary, non-local excitations. In many cases, these result from the scattering of high energy electrons and x-rays in the system. The detection of trace amounts of a material in one region is highly suspect if that substance resides as a major component elsewhere in the system.
Below 1 nm
The above kinematic arguments apply for high energy electrons down to about ~2 nm. To form a probe smaller than 1 nm, the probe convergence angle must increase. If the probe angle is too large, the effects of spherical aberration will create broad tails to the probe. The image resolution will come predominately from the sharp center of the probe, but if a significant fraction of the beam current is in the probe tails it will affect the microanalysis resolution. Even in the case of an optimized probe angle,
\(\alpha _{0} = (4\lambda /C_{s})^{1/4}\)
the effect of the convergence angle as the probe passes through the sample will increase the interaction volume in proportion to the specimen thickness \(t\) and will be approximately \(\alpha _{0}t\).
With aberration correction, STEM probes at atomic dimensions can form. In this case, you need to consider the wave nature of electrons and the long range of coulomb interactions. It is often necessary to use simulations to understand the spatial extent of the interaction. A general trend is that low energy loss events tend to have lower spatial resolution. This can be understood either by impact parameter arguments (low energy losses can be excited from longer distances) or wave-optical considerations (low energy losses have a narrow angular range and therefore cannot be localized spatially, according to the uncertainty principle). In either case, if the collection angle is too small, the signal will be delocalized by the diffraction limit of the collection process. These considerations led Egerton to propose a spatial resolution that contains 50% of the signal (\(d_{50}\)) given by:
\((d_{50})^{2} = (0.5\lambda /\theta _{E}^{3/4})^{2} + (0.6\lambda /\beta )^{2}\)
where \(\beta\) is the EELS collection angle and \(\theta _{E}\) is the characteristic scattering angle for the measured energy loss event (\(\theta _{E}\approx E/2E_{0}\)), with \(E_{0}\) being the primary beam energy.
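A numerical illustration of the \(d_{50}\) expression (the beam energy, energy loss, wavelength and collection angle below are assumed example values, not taken from the text):

```python
import math

# Evaluate the d_50 formula above for assumed example conditions.
E0 = 200e3            # primary beam energy, eV (assumed)
E = 300.0             # energy loss, eV (assumed)
wavelength = 2.51e-12 # electron wavelength at 200 kV, m (~2.51 pm)
beta = 10e-3          # EELS collection semi-angle, rad (assumed)

theta_E = E / (2 * E0)   # characteristic scattering angle, rad

# (d_50)^2 = (0.5*lambda/theta_E^(3/4))^2 + (0.6*lambda/beta)^2
d50 = math.sqrt((0.5 * wavelength / theta_E ** 0.75) ** 2
                + (0.6 * wavelength / beta) ** 2)

assert 1e-10 < d50 < 1e-9  # sub-nanometre for these example values
```

For these numbers \(d_{50}\) comes out around 0.3 nm, consistent with the general trend that lower energy losses (smaller \(\theta_E\)) are less localized.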
Effect of noise
The spatial resolution you obtain in a measurement is typically limited not by the underlying scattering physics, but by the measurement noise and the dose (per unit area) you can deposit on the sample. As a general consideration, you need a signal-to-noise ratio (SNR) of about 3 to see a feature. If you cannot achieve the required SNR with the allowed dose, you can increase the signal at the expense of spatial resolution by increasing the analyzed area, either by enlarging the probe diameter or by summing adjacent pixels.
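The SNR gain from summing adjacent pixels can be illustrated under an assumed Poisson-noise model (a sketch, not from the text): summing \(n\) pixels improves the SNR by roughly \(\sqrt{n}\), trading spatial resolution for signal.

```python
import numpy as np

# Assumed Poisson counting-noise model: SNR of a Poisson signal with mean
# counts N is sqrt(N), so binning 16 pixels gives ~4x the per-pixel SNR.
rng = np.random.default_rng(0)
mean_counts = 9.0                       # counts per pixel (assumed)
pixels = rng.poisson(mean_counts, size=(10_000, 16))

snr_single = pixels[:, 0].mean() / pixels[:, 0].std()       # ~ sqrt(9) = 3
snr_binned = pixels.sum(axis=1).mean() / pixels.sum(axis=1).std()

assert snr_binned > 3 * snr_single  # binning 16 pixels gives ~4x SNR
```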
The dose limit is a function of the induced physical damage to the specimen or stability of the sample and analysis system.
Typical probe/signal combinations and spatial resolutions:
EELS: probe \(e^{-}\) → signal \(e^{-}\); spatial resolution Å – µm
EDS: probe \(e^{-}\) → signal \(\gamma\); spatial resolution nm – mm
XPS: probe \(\gamma\) → signal \(e^{-}\); spatial resolution µm – mm
Auger: probe \(e^{-}\) → signal \(e^{-}\); spatial resolution µm – mm
CL: probe \(e^{-}\) → signal \(h\nu\); spatial resolution nm – mm
Covering Maps are Open Maps
Recall from the Covering Spaces page that if $X$ is a topological space then a covering space of $X$ is a pair $(\tilde{X}, p)$ where $\tilde{X}$ is a path connected and locally path connected topological space and $p : \tilde{X} \to X$ is a continuous map such that for every $x \in X$ there exists a path connected open neighbourhood $U$ of $x$ such that $p$ restricted to every path component of $p^{-1}(U)$ is a homeomorphism onto $U$.
We will now state a nice result which tells us that if $X$ is a topological space and $(\tilde{X}, p)$ is a covering space of $X$ then $p$ is an open map.
Theorem 1: Let $X$ be a topological space. If $(\tilde{X}, p)$ is a covering space of $X$ then $p : \tilde{X} \to X$ is an open map. Recall that a map between topological spaces is said to be open if the image of any open set in the domain is an open set in the range. Proof: Let $U$ be an open set in $\tilde{X}$. If $U = \emptyset$ then trivially $p(U) = p(\emptyset) = \emptyset$, which is open in $X$. So assume that $U \neq \emptyset$. Let $x \in p(U)$; we want to show that $x$ is an interior point of $p(U)$. Choose $u \in U$ with $p(u) = x$, and let $V$ be an elementary neighbourhood of $x$, that is, a path connected open neighbourhood as in the definition above. Let $C$ be the path component of $p^{-1}(V)$ containing $u$. Then $p$ restricted to $C$ is a homeomorphism onto $V$. Since $\tilde{X}$ is locally path connected, the path component $C$ is open in $\tilde{X}$, and since $U$ is open in $\tilde{X}$ we have that $C \cap U$ is open in $C$. Since $p$ is a homeomorphism from $C$ onto $V$, we have that $p(C \cap U)$ is open in $V$ and hence open in $X$. But $x \in p(C \cap U) \subseteq p(U)$, so $x \in \mathrm{int} (p(U))$, which shows that $p(U)$ is open. Hence $p : \tilde{X} \to X$ is an open map. $\blacksquare$
Table of Contents
The Interior Points of Sets in a Topological Space
Recall from the The Open Neighbourhoods of Points in a Topological Space page that if $(X, \tau)$ is a topological space and $x \in X$ then an open neighbourhood of $x$ is any open set $U$ ($U \in \tau$) such that $x \in U$.
Given a subset $A \subseteq X$, we will give a special name to the points $a \in A$ that contain an open neighbourhood $U$ fully contained in $A$.
Definition: Let $(X, \tau)$ be a topological space and let $A \subseteq X$. A point $a \in A$ is called an Interior Point of $A$ if there exists an open neighbourhood $U$ ($U \in \tau$) of $a$ such that $a \in U \subseteq A$. The set of all interior points of $A$ is called the Interior of $A$ and is denoted $\mathrm{int} (A)$.
Let's now look at some simple results regarding interior points of a subset of $X$.
Proposition 1: Let $(X, \tau)$ be a topological space. a) The interior of the whole set $X$ is $X$, that is, $\mathrm{int} (X) = X$. b) The interior of the empty set is the empty set, that is, $\mathrm{int} (\emptyset) = \emptyset$. Proof of a): $X$ is an open set. For each $x \in X$, $X$ is an open neighbourhood of $x$, and so every $x \in X$ is an interior point of $X$. Thus $\mathrm{int} (X) = X$. Proof of b): $\emptyset$ is an open set. Since $\emptyset$ has no points, it vacuously satisfies the definition above, and thus $\mathrm{int} (\emptyset) = \emptyset$. $\blacksquare$
Proposition 2: Let $(X, \tau)$ be a topological space. If $U \subseteq A \subseteq X$ and $U$ is open, then $U \subseteq \mathrm{int} (A)$. Proof: Let $U \subseteq A \subseteq X$ and let $U$ be an open set. Then each $a \in U$ is such that $a \in U \subseteq A$ with $U$ open, so $a \in \mathrm{int} (A)$. Therefore $U \subseteq \mathrm{int} (A)$. $\blacksquare$
Proposition 3: Let $(X, \tau)$ be a topological space. If $A \subseteq X$ then $\mathrm{int} (A)$ is the largest open subset of $A$. Proof: For each $a \in \mathrm{int} (A)$ there exists an open neighbourhood $U_a$ of $a$ with $a \in U_a \subseteq A$, and by Proposition 2, $U_a \subseteq \mathrm{int} (A)$. Hence $\mathrm{int} (A) = \bigcup_{a \in \mathrm{int}(A)} U_a$ is a union of open sets and is therefore open. Moreover, by Proposition 2 every open subset $U$ of $A$ satisfies $U \subseteq \mathrm{int} (A)$. Therefore $\mathrm{int} (A)$ is the largest open subset of $A$. $\blacksquare$
Proposition 4 (Idempotency of the Interior of a Set): Let $(X, \tau)$ be a topological space and $A \subseteq X$. Then the interior of the interior of $A$ is equal to the interior of $A$, that is, $\mathrm{int}(\mathrm{int}(A)) = \mathrm{int}(A)$. Proof: By Proposition 3, $\mathrm{int}(A)$ is open, and so every point of $\mathrm{int}(A)$ is an interior point of $\mathrm{int}(A)$; therefore $\mathrm{int}(A) \subseteq \mathrm{int}(\mathrm{int}(A))$. Conversely, the interior of a set is always contained in the set, so $\mathrm{int}(\mathrm{int}(A)) \subseteq \mathrm{int}(A)$. Hence $\mathrm{int}(\mathrm{int}(A)) = \mathrm{int}(A)$. $\blacksquare$
Example 1
Consider the set $X = \{ a, b, c \}$ with the nested topology $\tau = \{ \emptyset, \{ a \}, \{ a, b \}, \{a, b, c \} \}$. If we choose the set $A = \{ a, c \} \subset X$, we note that $a \in A$ is an interior point of $A$ if we let $U = \{ a \} \in \tau$, since $a \in U = \{ a \} \subseteq A$.
However, the point $c \in A$ is not an interior point with respect to the topology $\tau$. The only subset $U \in \tau$ that contains $c$ is $U = \{ a, b, c \}$, and $U \not\subseteq A$.
Therefore $\mathrm{int} (A) = \{ a \}$.
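Example 1 can also be checked mechanically (an illustrative encoding, not from the text): by Proposition 3 the interior is the union of all open sets contained in $A$.

```python
# Compute the interior of A = {a, c} in the topology
# tau = { {}, {a}, {a, b}, {a, b, c} } as the union of all open sets
# contained in A (Proposition 3).
tau = [set(), {"a"}, {"a", "b"}, {"a", "b", "c"}]
A = {"a", "c"}

interior = set().union(*(U for U in tau if U <= A))
assert interior == {"a"}  # matches Example 1
```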
Example 2
For another example, consider the set $\mathbb{R}^2$ with the topology induced by the standard metric $d(\mathbf{x}, \mathbf{y}) = \| \mathbf{x} - \mathbf{y} \| = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2}$ for all $\mathbf{x} = (x_1, x_2), \mathbf{y} = (y_1, y_2) \in \mathbb{R}^2$. A set $S \subseteq \mathbb{R}^2$ is open if for every $\mathbf{x} \in S$ there exists a positive real number $r > 0$ such that the open disk centered at $\mathbf{x}$ with radius $r$, denoted $B(\mathbf{x}, r) = \{ \mathbf{y} \in \mathbb{R}^2 : d(\mathbf{x}, \mathbf{y}) < r \}$, is contained in $S$, that is, $B(\mathbf{x}, r) \subseteq S$.
If $a < b$ and $c < d$ then graphically we can represent the subset $A = [a, b) \times [c, d) \subseteq \mathbb{R}^2$ as:
The interior points of $A = [a, b) \times [c, d)$ are the points $\mathbf{x} = (x_1, x_2)$ such that $a < x_1 < b$ and $c < x_2 < d$. Any points with $x_1 = a$ and/or $x_2 = c$ cannot be interior points since there would then exist no positive real number $r > 0$ such that the disk centered at $\mathbf{x}$ with radius $r$ would be a subset of $A$ as illustrated in the following image:
Therefore the set of interior points is $\mathrm{int} (A) = (a, b) \times (c, d)$. |
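The failure of boundary points to be interior can also be illustrated numerically. The sketch below uses illustrative values for $a, b, c, d$ and a sampled (rather than exact) disk test to check whether a small disk around a point stays inside $A$:

```python
import math

# Illustrative check: in A = [a,b) x [c,d), an interior point admits some r > 0
# with B(x, r) inside A, while a point with first coordinate a does not,
# since every disk around it contains points with first coordinate below a.
a, b, c, d = 0.0, 2.0, 0.0, 1.0

def in_A(p):
    return a <= p[0] < b and c <= p[1] < d

def disk_inside(p, r):
    # sample 16 directions on the circle of radius r/2 around p
    return all(in_A((p[0] + (r / 2) * math.cos(t), p[1] + (r / 2) * math.sin(t)))
               for t in [k * math.pi / 8 for k in range(16)])

print(disk_inside((1.0, 0.5), 0.25))                               # True: interior
print(any(disk_inside((a, 0.5), r) for r in [0.1, 0.01, 0.001]))   # False: boundary
```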
Every day one sees politicians on TV assuring us that nuclear deterrence works because no nuclear weapon has been exploded in anger since 1945. They clearly have no understanding of statistics.
With a few plausible assumptions, we can easily calculate that the time until the next bomb explodes could be as little as 20 years.
Be scared, very scared.
The first assumption is that bombs go off at random intervals. Since we have had only one so far (counting Hiroshima and Nagasaki as a single event), this can’t be verified. But given the large number of small influences that control when a bomb explodes (whether in war or by accident), it is the natural assumption to make. The assumption is given some credence by the observation that the intervals between wars are random [download pdf].
If the intervals between bombs are random, that implies that the distribution of the length of the intervals is exponential in shape. The nature of this distribution has already been explained in an earlier post about the random lengths of time for which a patient stays in an intensive care unit. If you haven’t come across an exponential distribution before, please look at that post before moving on.
All that we know is that 70 years have elapsed since the last bomb, so the interval until the next one must be greater than 70 years. The probability that a random interval is longer than 70 years can be found from the cumulative form of the exponential distribution.
If we denote the true mean interval between bombs as $\mu$ then the probability that an interval is longer than 70 years is
\[ \text{Prob}\left( \text{interval > 70}\right)=\exp{\left(\frac{-70}{\mu}\right)} \]
We can get a lower 95% confidence limit (call it $\mu_\mathrm{lo}$) for the mean interval between bombs by the argument used in Lecture on Biostatistics, section 7.8 (page 108). If we imagine that $\mu_\mathrm{lo}$ were the true mean, we want it to be such that there is a 2.5% chance that we observe an interval that is greater than 70 years. That is, we want to solve
\[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.025\]
That’s easily solved by taking natural logs of both sides, giving
\[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.025\right)}}= 19.0\text{ years}\]
A similar argument leads to an upper confidence limit, $\mu_\mathrm{hi}$, for the mean interval between bombs, by solving
\[ \exp{\left(\frac{-70}{\mu_\mathrm{hi}}\right)} = 0.975\]
so \[ \mu_\mathrm{hi} = \frac{-70}{\ln{\left(0.975\right)}}= 2765\text{ years}\]
If the worst case were true, and the mean interval between bombs were 19 years, then the distribution of the time to the next bomb would have an exponential probability density function, $f(t)$,
\[ f(t) = \frac{1}{19} \exp{\left(\frac{-t}{19}\right)} \]
There would be a 50% chance that the waiting time until the next bomb would be less than the median of this distribution, $-19 \ln(0.5) = 13.2$ years.
In summary, the observation that there has been no explosion for 70 years implies that the mean time until the next explosion lies (with 95% confidence) between 19 years and 2765 years. If it were 19 years, there would be a 50% chance that the waiting time to the next bomb could be less than 13.2 years. Thus there is no reason at all to think that nuclear deterrence works well enough to protect the world from incineration.
Another approach
My statistical colleague, the ace probabilist Alan Hawkes, suggested a slightly different approach to the problem,
via likelihood. The likelihood of a particular value of the interval between bombs is defined as the probability of making the observation(s), given a particular value of $\mu$. In this case, there is one observation, that the interval between bombs is more than 70 years. The likelihood, $L\left(\mu\right)$, of any specified value of $\mu$ is thus
\[L\left(\mu\right)=\text{Prob}\left( \text{interval > 70 | }\mu\right) = \exp{\left(\frac{-70}{\mu}\right)} \]
Plotting this function (graph on right) shows that it increases continuously with $\mu$, so the maximum likelihood estimate of $\mu$ is infinity. An infinite wait until the next bomb is perfect deterrence.
But again we need confidence limits for this. Since the upper limit is infinite, the appropriate thing to calculate is a one-sided lower 95% confidence limit. This is found by solving
\[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.05\]
which gives
\[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.05\right)}}= 23.4\text{ years}\]
Summary
The first approach gives 95% confidence limits for the average time until we get incinerated as 19 years to 2765 years. The second approach gives the lower limit as 23.4 years. There is no important difference between the two methods of calculation. This shows that the bland assurances of politicians that “nuclear deterrence works” are not justified.
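The arithmetic behind both approaches is easy to reproduce. A short sketch using only the formulas above:

```python
from math import log

# Reproduce the confidence limits in the text from the single observation
# that the interval since the last explosion exceeds t = 70 years.
t = 70.0

# Two-sided 95% limits for the mean interval mu: solve exp(-t/mu) = p.
mu_lo = -t / log(0.025)          # lower limit, first approach
mu_hi = -t / log(0.975)          # upper limit
median_worst = mu_lo * log(2)    # median waiting time if mu = mu_lo

# One-sided lower 95% limit, likelihood approach: exp(-t/mu) = 0.05.
mu_lo_one_sided = -t / log(0.05)

print(round(mu_lo, 1), round(mu_hi), round(median_worst, 1), round(mu_lo_one_sided, 1))
# 19.0 2765 13.2 23.4
```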
It is not the purpose of this post to predict when the next bomb will explode, but rather to point out that the available information tells us very little about that question. This seems important to me because it contradicts directly the frequent assurances that deterrence works.
The only consolation is that, since I’m now 79, it’s unlikely that I’ll live long enough to see the conflagration.
Anyone younger than me would be advised to get off their backsides and do something about it, before you are destroyed by innumerate politicians.
Postscript
While talking about politicians and war it seems relevant to reproduce Peter Kennard’s powerful image of the Iraq war.
and with that, to quote the comment made by Tony Blair’s aide, Lance Price
It’s a bit like my feeling about priests doing the twelve stations of the cross. Politicians and priests masturbating at the expense of kids getting slaughtered (at a safe distance, of course).
|
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is.
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots$ $$ = (1-y), \ \text{where}\ \ y=\frac{1}{2z^2}+\frac{1}{4!...
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly.
This question arised from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too much trivial. But I have yet not seen any formal proof of the following statement : "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity the inference rule an use of MP seems to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdot\cdot\cdot,a_{n-1}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Table of Contents
Second Countability under Homeomorphisms on Topological Spaces
Recall from the Homeomorphisms on Topological Spaces page that if $X$ and $Y$ are topological spaces then a bijective map $f : X \to Y$ is said to be a homeomorphism if it is continuous and open.
Furthermore, if such a homeomorphism exists then we say that $X$ and $Y$ are homeomorphic and write $X \simeq Y$.
Also, recall from the Second Countable Topological Spaces page that a topological space $(X, \tau)$ is said to be second countable if there exists a basis $\mathcal B$ of the topology on $X$ that is countable.
We will now look at a nice topological property which says that if $f$ is a homeomorphism from $X$ to $Y$ and if $X$ is a second countable topological space then $Y$ is a second countable topological space.
Theorem 1: Let $X$ and $Y$ be topological spaces and let $f : X \to Y$ be a homeomorphism. If $X$ is second countable then $Y$ is second countable. Proof: Let $X$ be second countable. Then there exists a countable basis $\mathcal B$ of the topology defined on $X$, so for every open set $U$ in $X$ there exists a subset $\mathcal B^* \subseteq \mathcal B$ such that:
\[ U = \bigcup_{B \in \mathcal B^*} B \]
So then:
\[ f(U) = f \left( \bigcup_{B \in \mathcal B^*} B \right) = \bigcup_{B \in \mathcal B^*} f(B) \]
Since $f$ is a homeomorphism we have that $f(U)$ is an open set of $Y$. We claim that the following set is a countable basis of the topology on $Y$:
\[ \tilde{\mathcal B} = \{ f(B) : B \in \mathcal B \} \]
Clearly $\tilde{\mathcal B}$ is a countable set since $\mathcal B$ is a countable set. We only need to show that $\tilde{\mathcal B}$ is a basis of the topology on $Y$. Now suppose that $\tilde{\mathcal B}$ is not a basis of the topology on $Y$. Then there exists an open set $V$ in $Y$ such that for all subsets $\mathcal B^* \subseteq \mathcal B$ we have that:
\[ V \neq \bigcup_{B \in \mathcal B^*} f(B) \]
Then for all subsets $\mathcal B^* \subseteq \mathcal B$ we have that:
\[ f^{-1}(V) \neq \bigcup_{B \in \mathcal B^*} B \]
But since $f$ is a homeomorphism and $V$ is an open set in $Y$ we must have that $f^{-1} (V)$ is an open set in $X$. But then the open set $f^{-1}(V)$ cannot be written as a union of any subcollection of the sets in the basis $\mathcal B$ of the topology on $X$. This contradicts $\mathcal B$ being a basis of the topology on $X$, and so the assumption that $\tilde{\mathcal B}$ is not a basis of the topology on $Y$ is false. Hence $\tilde{\mathcal B}$ is a countable basis of the topology on $Y$, so $Y$ is second countable. $\blacksquare$ |
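For a finite space one can check the construction in Theorem 1 mechanically: push each basis element through $f$ and verify that the image family is again a basis. The space, topology, and map below are purely illustrative:

```python
# A finite sanity check of Theorem 1: push a basis through a bijection f that
# carries the topology of X onto that of Y, and verify the image family
# (the set called B-tilde in the proof) is again a basis.

def all_unions(sets):
    """All unions of subcollections of `sets` (frozensets), incl. the empty union."""
    out = {frozenset()}
    for s in sets:
        out |= {u | s for u in out}
    return out

def is_basis(basis, tau):
    return all(frozenset(U) in all_unions([frozenset(B) for B in basis]) for U in tau)

tau_X = [set(), {0}, {0, 1}, {0, 1, 2}]
basis_X = [{0}, {0, 1}, {0, 1, 2}]
f = {0: "a", 1: "b", 2: "c"}                     # bijection X -> Y

tau_Y = [{f[x] for x in U} for U in tau_X]       # image topology, so f is a homeomorphism
basis_Y = [{f[x] for x in B} for B in basis_X]

print(is_basis(basis_X, tau_X), is_basis(basis_Y, tau_Y))  # True True
```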
I am looking for a nice and readable description of how to implement the BDT model: $d \log(r(t)) = [\theta(t)-\frac{\sigma'(t)}{\sigma(t)}\log(r(t))]dt + \sigma(t) dW$.
I assume I already have steady-state IR curve $r^*(t)$ and volatility curve $\sigma^*(t)$.
It makes no difference whether it would be binomial tree or Monte-Carlo or FDM implementation. Monte-Carlo seems to be easy but I'm not sure whether I can use $\theta(t) = r^*(t)$ and $\sigma(t)=\sigma^*(t)$.
I went through Derman's article and Haug's "Options pricing formulas" but found no answer. |
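For what it's worth, the Monte-Carlo route amounts to an Euler discretization of the SDE above. The sketch below is only illustrative: `theta`, `sigma`, and `sigma_prime` are placeholder functions, and (as the question suspects) simply setting $\theta(t) = r^*(t)$ is generally not a valid calibration, since $\theta$ must be fitted so the model reprices the input curve.

```python
import math, random

# Hedged sketch: Euler scheme for d log r = [theta(t) - (sigma'(t)/sigma(t)) log r] dt
# + sigma(t) dW, working in x = log r so that rates stay positive.
def simulate_bdt_path(r0, theta, sigma, sigma_prime, T=1.0, n_steps=252, seed=0):
    rng = random.Random(seed)
    dt = T / n_steps
    x = math.log(r0)
    path = [r0]
    for i in range(n_steps):
        t = i * dt
        drift = theta(t) - (sigma_prime(t) / sigma(t)) * x
        x += drift * dt + sigma(t) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(math.exp(x))
    return path

# Placeholder inputs: zero drift term and constant 20% log-volatility
# (constant sigma makes the mean-reversion term sigma'/sigma vanish).
path = simulate_bdt_path(r0=0.03,
                         theta=lambda t: 0.0,
                         sigma=lambda t: 0.2,
                         sigma_prime=lambda t: 0.0)
print(len(path), all(r > 0 for r in path))  # 253 True
```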
Basis of a Vector Space
We will now look at a new definition regarding vector spaces.
Definition: A set of vectors $\{ v_1, v_2, ..., v_n \}$ is said to be a Basis of the $\mathbb{F}$-vector space $V$ if both $V = \mathrm{span} (v_1, v_2, ..., v_n)$ and $\{v_1, v_2, ..., v_n \}$ is a linearly independent set.
From the definition, to prove that a set of vectors $\{ v_1, v_2, ..., v_n \}$ from $V$ is a basis of $V$, we must show that this set of vectors is a spanning set of $V$ and that this set of vectors is linearly independent.
For example, consider the vector space $\mathbb{R}^3$ which we are familiar with already, and recall the standard unit vectors $\vec{i} = (1, 0, 0)$, $\vec{j} = (0, 1, 0)$ and $\vec{k} = (0, 0, 1)$. The set of vectors $\{ \vec{i}, \vec{j}, \vec{k} \}$ forms a basis of $\mathbb{R}^3$. First notice that any vector $(x, y, z) \in \mathbb{R}^3$ can be written in the form $(x, y, z) = a_1(1, 0, 0) + a_2(0,1,0) + a_3(0, 0, 1) = (a_1, a_2, a_3)$ (by choosing $a_1 = x$, $a_2 = y$, and $a_3 = z$) and so $\mathbb{R}^3 = \mathrm{span} (\vec{i}, \vec{j}, \vec{k})$. Furthermore, this set of vectors is linearly independent since $a_1(1,0,0) + a_2(0,1,0) + a_3(0,0,1) = (a_1, a_2, a_3) = (0, 0, 0)$ is only true for $a_1 = a_2 = a_3 = 0$.
For another example, consider the set of polynomials with degree less than or equal to $n$. The set $\{ 1, x, x^2, ..., x^n \}$ forms a basis as you should verify.
We will now look at a very important theorem which defines whether a set of vectors is a basis of a finite-dimensional vector space or not.
Theorem 1: A set of vectors $B = \{ v_1, v_2, ..., v_n \}$ from the vector space $V$ is a basis if and only if each vector $v \in V$ can be written uniquely as a linear combination of the vectors in $B$, that is $v = a_1v_1 + a_2v_2 + ... + a_nv_n$. Proof: $\Rightarrow$ Let $B = \{ v_1, v_2, ..., v_n \}$ be a basis of the vector space $V$, and let $v \in V$. We know that by definition $B$ is also a spanning set, and so $v = a_1v_1 + a_2v_2 + ... + a_nv_n$ where $a_i \in \mathbb{F}$. Now suppose also that $v = b_1v_1 + b_2v_2 + ... + b_nv_n$. In other words, suppose that $v$ has two representations as a linear combination of the vectors in $B$. It then follows from subtracting these equations that:
\[ 0 = (a_1 - b_1)v_1 + (a_2 - b_2)v_2 + ... + (a_n - b_n)v_n \]
Now recall that by definition the basis $B$ is also a linearly independent set, which implies that the only representation of $0$ as a linear combination of $v_1, v_2, ..., v_n$ is the trivial one, and so $a_1 - b_1 = 0$, $a_2 - b_2 = 0$, …, $a_n - b_n = 0$, which implies that $a_1 = b_1$, $a_2 = b_2$, …, $a_n = b_n$. Therefore $v \in V$ has a unique representation as a linear combination of the basis vectors. $\Leftarrow$ Instead, suppose that every vector $v \in V$ can be written uniquely as a linear combination of the vectors in $B$, that is $v = a_1v_1 + a_2v_2 + ... + a_nv_n$ with $a_i \in \mathbb{F}$. By definition, the set of vectors $\{ v_1, v_2, ..., v_n \}$ is then a spanning set of $V$. Also, since $0 \in V$ we note that $0 = a_1v_1 + a_2v_2 + ... + a_nv_n$ has a unique representation, which must trivially be $a_1 = a_2 = ... = a_n = 0$, and so the set of vectors $\{ v_1, v_2, ..., v_n \}$ is also linearly independent. Since $\{ v_1, v_2, ..., v_n \}$ is a spanning set of $V$ and is linearly independent, then by definition $\{ v_1, v_2, ..., v_n \}$ is a basis of $V$. $\blacksquare$ Example 1 Consider the vector space $M_{22}$ of all $2 \times 2$ matrices whose entries are in $\mathbb{F}$. Find a basis of $M_{22}$.
The simplest basis for $M_{22}$ is the set of vectors $\left \{ \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 1 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} \right \}$.
First note that this set of vectors spans $M_{22}$, since $a_1 \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix} + a_2 \begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix} + a_3 \begin{bmatrix} 0 & 0\\ 1 & 0\end{bmatrix} + a_4 \begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} = \begin{bmatrix} a_1 & a_2\\ a_3 & a_4\end{bmatrix}$, and any vector $x \in M_{22}$ can be written as a linear combination of this set of vectors, that is $\begin{bmatrix} a_1 & a_2\\ a_3 & a_4\end{bmatrix} = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}$ by choosing $a_1 = x_{11}$, $a_2 = x_{12}$, $a_3 = x_{21}$ and $a_4 = x_{22}$, and so $M_{22} = \mathrm{span} \left ( \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 1 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} \right )$.
Now we need to show that this set of vectors is linearly independent. Clearly $\begin{bmatrix} a_1 & a_2\\ a_3 & a_4\end{bmatrix} =\begin{bmatrix} 0 & 0\\ 0 & 0\end{bmatrix}$ if and only if $a_1 = a_2 = a_3 = a_4 = 0$, and so this set is linearly independent.
Therefore this set in all is a basis for $M_{22}$.
Example 2 Consider the vector space of complex numbers $\mathbb{C}$. Find a basis of $\mathbb{C}$.
The simplest basis is the set of vectors $\{ 1, i \}$ (regarding $\mathbb{C}$ as a vector space over $\mathbb{R}$). We note that $\mathbb{C} = \mathrm{span} (1, i)$ since any complex number $z$ can be written in the form $a + bi$ where $a, b \in \mathbb{R}$. Furthermore, this set of vectors is linearly independent since $a + bi = 0$ if and only if $a = b = 0$.
Example 3 Consider the vector space of complex numbers $\mathbb{C}$. Prove that the set of vectors $\{ 1 + i, 1 - i \}$ is a basis of $\mathbb{C}$.
To prove this we must show that $\mathbb{C} = \mathrm{span} (1 + i, 1 - i)$ and that $\{ 1 + i, 1 - i \}$ is a linearly independent set of vectors. First, let's show that this set spans $\mathbb{C}$.
We note that $1 = \frac{1}{2}(1 + i) + \frac{1}{2}(1 - i)$ and that $i = \frac{1}{2} (1 + i ) - \frac{1}{2}(1 - i)$, so both of the vectors $1$ and $i$, which we verified in Example 2 above to span $\mathbb{C}$, lie in $\mathrm{span}(1 + i, 1 - i)$, and so $\mathbb{C} = \mathrm{span} (1 + i, 1 - i)$.
Now we will show this set of vectors is linearly independent. We have that:
\[ a(1 + i) + b(1 - i) = (a + b) + (a - b)i = 0 \]
The vector equation above is only satisfied if $a + b = 0$ and $a - b = 0$, and so $a = b = 0$. Therefore $\{ 1 + i, 1 - i \}$ is a basis of $\mathbb{C}$.
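The unique decomposition in this example can be checked numerically: writing $z = a(1+i) + b(1-i)$ gives $a + b = \mathrm{Re}\, z$ and $a - b = \mathrm{Im}\, z$, which solve to explicit coordinates. A small illustrative sketch:

```python
# Check of Example 3: every z in C decomposes as z = a(1+i) + b(1-i) with
# real a, b given by a = (Re z + Im z)/2 and b = (Re z - Im z)/2.
def coords(z):
    a = (z.real + z.imag) / 2
    b = (z.real - z.imag) / 2
    return a, b

z = 3.5 - 2.0j
a, b = coords(z)
print((a, b))                             # (0.75, 2.75)
print(a * (1 + 1j) + b * (1 - 1j) == z)   # True: the decomposition reconstructs z
```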
Example 4 Prove that if $B = \{ v_1, v_2, ..., v_n \}$ is a basis of the vector space $V$, then $B' = \{ v_1 - v_2, v_2 - v_3, ..., v_{n-1} - v_n, v_n \}$ is also a basis of $V$.
To show that $B'$ is a basis, we must show that $V = \mathrm{span} (B')$ and that $B'$ is a linearly independent set.
First, since $B$ is a basis of $V$, by the definition of a basis, $B$ spans $V$, that is $\{ v_1, v_2, ..., v_n \}$ spans $V$. We want to show that $B'$ also spans $V$. Let $v \in V$, and consider the following vector equation:
\[ v = a_1(v_1 - v_2) + a_2(v_2 - v_3) + ... + a_{n-1}(v_{n-1} - v_n) + a_nv_n \]
Since $v \in \mathrm{span} (B)$ there exist scalars $b_1, b_2, ..., b_n \in \mathbb{F}$ such that:
\[ v = b_1v_1 + b_2v_2 + ... + b_nv_n \]
Comparing coefficients, the two expressions agree when $b_1 = a_1$, $b_2 = a_2 - a_1$, …, $b_n = a_n - a_{n-1}$, and these equations are solved by taking $a_k = b_1 + b_2 + ... + b_k$ for each $k$. So $v$ is a linear combination of the vectors in $B'$ and hence $v \in \mathrm{span} (B')$.
Now we need to show that $B'$ is linearly independent. Consider the vector equation:
\[ a_1(v_1 - v_2) + a_2(v_2 - v_3) + ... + a_{n-1}(v_{n-1} - v_n) + a_nv_n = 0 \]
Expanding and regrouping gives $a_1v_1 + (a_2 - a_1)v_2 + ... + (a_n - a_{n-1})v_n = 0$. Since $B = \{ v_1, v_2, ..., v_n \}$ is a basis of $V$ and hence linearly independent, this implies that $a_1 = 0$, $a_2 - a_1 = 0$, $a_3 - a_2 = 0$, and so forth, or in other words, $a_1 = a_2 = ... = a_n = 0$. So then $B'$ is also linearly independent. $\blacksquare$ |
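Working in $B$-coordinates, the spanning argument of Example 4 can be verified for a concrete $n$: the prefix sums $a_k = b_1 + ... + b_k$ recover the original coordinates. The coordinate values below are illustrative:

```python
from itertools import accumulate

# Check of Example 4 in coordinates: take the B-coordinates (b_1,...,b_n) of a
# vector v, form the prefix sums a_k = b_1 + ... + b_k, and confirm that
# a_1(v_1 - v_2) + ... + a_{n-1}(v_{n-1} - v_n) + a_n v_n has the same
# B-coordinates as v.
b = [3, -1, 4, 2]             # arbitrary B-coordinates of some vector v
a = list(accumulate(b))       # prefix sums: [3, 2, 6, 8]

n = len(b)
recovered = [0] * n
for k in range(n):            # add the contribution of a_k * (v_{k+1} - v_{k+2})
    recovered[k] += a[k]      # (and of a_n * v_n for the last term)
    if k + 1 < n:
        recovered[k + 1] -= a[k]

print(recovered == b)  # True
```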
I've been searching a lot for relevant answers to my question. However, I was unable to find a problem formulation with satisfactory answers that would help for my problem.
In a nutshell, I would like to find an initial feasible basis, $\mathcal{B}$, to start my simplex algorithm. And I would like to find this initial basis using an initial feasible basic solution, $x_0$, that satisfies $Ax_0=b$. Note that $A$ and $x_0$ already contain the necessary slack and surplus variables such that the problem is in the following form:
$$min\ \ c^Tx \\ Ax = b\\ x\geq 0$$
A more detailed explanation that might help in finding an initial basis in this specific scenario:
The problem I am solving requires two linear programming problems to be solved in an alternating fashion. In standard form these problems are formulated as:
$$min\ \ c^T\begin{bmatrix}v \\ s\end{bmatrix}\\ \begin{bmatrix}Gw_1 & \pm I & 0 \\ Gw_2 & 0 & \pm I\end{bmatrix}\begin{bmatrix}v \\ s\end{bmatrix} = b\\ v\geq 0,\ s\geq 0$$ and,
$$min\ \ d^T\begin{bmatrix}w_1 \\w_2\\ s\end{bmatrix}\\ \begin{bmatrix}I_2\otimes (Gv) & \pm I\end{bmatrix}\begin{bmatrix}w_1 \\ w_2 \\ s\end{bmatrix} = b\\ w_i\geq 0,\ s\geq 0$$
Here, vector $v$, and scalars $w_i$, denote the two sets of structural variables and $s$ denotes the shared slack and surplus variables as the equality constraints are equivalent. (They are derived from: $Gvw_1 \leq b$, and $Gvw_2 \leq b$).
Now suppose ($v^*$, $s^*$) is the minimizer for a given $\bar{w}$. Note that $(\bar{w},\ s^*)$ is still a basic feasible solution as the equality constraints are equivalent between the two problems. Now I'd like to start from the feasible basic solution $(\bar{w},\ s^*)$, and find the corresponding basis to start the simplex algorithm.
The trivial columns that must be included in the basis are all entries where $\begin{bmatrix}\bar{w}\\ s^*\end{bmatrix}>0$. However, this does not necessarily yield a complete basis, and I am not able to find a solution on how to complete this basis besides trial and error.
I hope the problem is sufficiently explained and if there are any question regarding this problem I'd be happy to clarify / elaborate. |
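One standard way to finish such a partial basis, sketched below under the assumption that exact (rational) arithmetic is acceptable: pivot on the support columns first, then greedily admit any further column of $A$ that increases the rank until $m$ columns are chosen. The matrix and support here are illustrative, not from the post:

```python
from fractions import Fraction

# Hedged sketch of basis completion by Gaussian elimination: columns listed in
# `support` (where x0 > 0) are tried first; a column is admitted only if it is
# independent of those already chosen, i.e. it still has a usable pivot.
def complete_basis(A, support):
    m = len(A)
    cols = list(range(len(A[0])))
    order = support + [j for j in cols if j not in support]
    basis, rows_used = [], []
    reduced = [[Fraction(v) for v in row] for row in A]
    for j in order:
        col = [reduced[i][j] for i in range(m)]
        pivot = next((i for i in range(m) if i not in rows_used and col[i] != 0), None)
        if pivot is None:
            continue              # column j is dependent on the chosen ones
        basis.append(j)
        rows_used.append(pivot)
        for i in range(m):        # eliminate column j from all other rows
            if i != pivot and reduced[i][j] != 0:
                f = reduced[i][j] / reduced[pivot][j]
                for k in cols:
                    reduced[i][k] -= f * reduced[pivot][k]
        if len(basis) == m:
            break
    return sorted(basis)

A = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]
print(complete_basis(A, support=[1]))  # [0, 1]: column 1 forced in, rank completed
```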
Uplifting cardinals Uplifting cardinals were introduced by Hamkins and Johnstone in [1], from which some of this text is adapted.
An inaccessible cardinal $\kappa$ is
uplifting if and only if for every ordinal $\theta$ it is $\theta$-uplifting, meaning that there is an inaccessible $\gamma>\theta$ such that $V_\kappa\prec V_\gamma$ is a proper elementary extension.
An inaccessible cardinal is
pseudo uplifting if and only if for every ordinal $\theta$ it is pseudo $\theta$-uplifting, meaning that there is a cardinal $\gamma>\theta$ such that $V_\kappa\prec V_\gamma$ is a proper elementary extension, without insisting that $\gamma$ is inaccessible.
Being
strongly uplifting (see further) is a boldface variant of being uplifting.
It is an elementary exercise to see that if $V_\kappa\prec V_\gamma$ is a proper elementary extension, then $\kappa$ and hence also $\gamma$ are $\beth$-fixed points, and so $V_\kappa=H_\kappa$ and $V_\gamma=H_\gamma$. It follows that a cardinal $\kappa$ is uplifting if and only if it is regular and there are arbitrarily large regular cardinals $\gamma$ such that $H_\kappa\prec H_\gamma$. It is also easy to see that every uplifting cardinal $\kappa$ is uplifting in $L$, with the same targets. Namely, if $V_\kappa\prec V_\gamma$, then we may simply restrict to the constructible sets to obtain $V_\kappa^L=L^{V_\kappa}\prec L^{V_\gamma}=V_\gamma^L$. An analogous result holds for pseudo-uplifting cardinals.
Contents 1 Consistency strength of uplifting cardinals 2 Uplifting cardinals and $\Sigma_3$-reflection 3 Uplifting Laver functions 4 Connection with the resurrection axioms 5 Strongly Uplifting 6 Weakly superstrong cardinal 7 References Consistency strength of uplifting cardinals Theorem.
1. If $\delta$ is a Mahlo cardinal, then $V_\delta$ has a proper class of uplifting cardinals.
2. Every uplifting cardinal is pseudo uplifting and a limit of pseudo uplifting cardinals.
3. If there is a pseudo uplifting cardinal, or indeed, merely a pseudo $0$-uplifting cardinal, then there is a transitive set model of ZFC with a reflecting cardinal and consequently also a transitive model of ZFC plus Ord is Mahlo.
Proof. For (1), suppose that $\delta$ is a Mahlo cardinal. By the Lowenheim-Skolem theorem, there is a club set $C\subset\delta$ of cardinals $\beta$ with $V_\beta\prec V_\delta$. Since $\delta$ is Mahlo, the club $C$ contains unboundedly many inaccessible cardinals. If $\kappa<\gamma$ are both in $C$, then $V_\kappa\prec V_\gamma$, as desired. Similarly, for (2), if $\kappa$ is uplifting, then $\kappa$ is pseudo uplifting and if $V_\kappa\prec V_\gamma$ with $\gamma$ inaccessible, then there are unboundedly many ordinals $\beta<\gamma$ with $V_\beta\prec V_\gamma$ and hence $V_\kappa\prec V_\beta$. So $\kappa$ is pseudo uplifting in $V_\gamma$. From this, it follows that there must be unboundedly many pseudo uplifting cardinals below $\kappa$. For (3), if $\kappa$ is inaccessible and $V_\kappa\prec V_\gamma$, then $V_\gamma$ is a transitive set model of ZFC in which $\kappa$ is reflecting, and it is thus also a model of Ord is Mahlo. QED
Uplifting cardinals and $\Sigma_3$-reflection Every uplifting cardinal is a limit of $\Sigma_3$-reflecting cardinals, and is itself $\Sigma_3$-reflecting. If $\kappa$ is the least uplifting cardinal, then $\kappa$ is not $\Sigma_4$-reflecting, and there are no $\Sigma_4$-reflecting cardinals below $\kappa$.
The analogous observation for pseudo uplifting cardinals holds as well, namely, every pseudo uplifting cardinal is $\Sigma_3$-reflecting and a limit of $\Sigma_3$-reflecting cardinals; and if $\kappa$ is the least pseudo uplifting cardinal, then $\kappa$ is not $\Sigma_4$-reflecting, and there are no $\Sigma_4$-reflecting cardinals below $\kappa$.
Uplifting Laver functions
Every uplifting cardinal admits an ordinal-anticipating Laver function, and indeed, a HOD-anticipating Laver function, a function $\ell:\kappa\to V_\kappa$, definable in $V_\kappa$, such that for any set $x\in\text{HOD}$ and $\theta$, there is an inaccessible cardinal $\gamma$ above $\theta$ such that $V_\kappa\prec V_\gamma$, for which $\ell^*(\kappa)=x$, where $\ell^*$ is the corresponding function defined in $V_\gamma$.
Connection with the resurrection axioms
Many instances of the (weak) resurrection axiom imply that ${\frak c}^V$ is an uplifting cardinal in $L$:
RA(all) implies that ${\frak c}^V$ is uplifting in $L$. RA(ccc) implies that ${\frak c}^V$ is uplifting in $L$. wRA(countably closed)+$\neg$CH implies that ${\frak c}^V$ is uplifting in $L$. Under $\neg$CH, the weak resurrection axioms for the classes of axiom-A forcing, proper forcing, semi-proper forcing, and posets that preserve stationary subsets of $\omega_1$, respectively, each imply that ${\frak c}^V$ is uplifting in $L$.
Conversely, if $\kappa$ is uplifting, then various resurrection axioms hold in a corresponding lottery-iteration forcing extension.
Theorem. (Hamkins and Johnstone) The following theories are equiconsistent over ZFC: There is an uplifting cardinal. RA(all) RA(ccc) RA(semiproper)+$\neg$CH RA(proper)+$\neg$CH for some countable ordinal $\alpha$, RA($\alpha$-proper)+$\neg$CH RA(axiom-A)+$\neg$CH wRA(semiproper)+$\neg$CH wRA(proper)+$\neg$CH for some countable ordinal $\alpha$, wRA($\alpha$-proper})+$\neg$CH wRA(axiom-A)+$\neg$CH wRA(countably closed)+$\neg$CH Strongly Uplifting
(Information in this section comes from [2])
Strongly uplifting cardinals are precisely strongly pseudo uplifting ordinals, strongly uplifting cardinals with weakly compact targets, superstrongly unfoldable cardinals and almost-hugely unfoldable cardinals.
Definitions
An ordinal is
strongly pseudo uplifting iff for every ordinal $θ$ it is strongly $θ$-uplifting, meaning that for every $A⊆V_κ$, there exists some ordinal $λ>θ$ and an $A^*⊆V_λ$ such that $(V_κ;∈,A)≺(V_λ;∈,A^*)$ is a proper elementary extension.
An inaccessible cardinal is
strongly uplifting iff for every ordinal $θ$ it is strongly $θ$-uplifting, meaning that for every $A⊆V_κ$, there exists some inaccessible(*) $λ>θ$ and an $A^*⊆V_λ$ such that $(V_κ;∈,A)≺(V_λ;∈,A^*)$ is a proper elementary extension. By replacing starred "inaccessible" with "weakly compact" and other properties, we get strongly uplifting with weakly compact etc. targets.
A cardinal $\kappa$ is
$\theta$-superstrongly unfoldable iff for every $A\subseteq\kappa$, there is some transitive $M$ with $A\in M\models\text{ZFC}$ and some $j:M\rightarrow N$ an elementary embedding with critical point $\kappa$ such that $j(\kappa)\geq\theta$ and $V_{j(\kappa)}\subseteq N$.
A cardinal $\kappa$ is
$\theta$-almost-hugely unfoldable iff for every $A\subseteq\kappa$, there is some transitive $M$ with $A\in M\models\text{ZFC}$ and some $j:M\rightarrow N$ an elementary embedding with critical point $\kappa$ such that $j(\kappa)\geq\theta$ and $N^{<j(\kappa)}\subseteq N$.
$κ$ is then called
superstrongly unfoldable (resp. almost-hugely unfoldable) iff it is $θ$-superstrongly unfoldable (resp. $θ$-almost-hugely unfoldable) for every $θ$; i.e. the target of the embedding can be made arbitrarily large. Equivalence
For any ordinals $κ$, $θ$, the following are equivalent:
$κ$ is strongly pseudo $(θ+1)$-uplifting. $κ$ is strongly $(θ+1)$-uplifting. $κ$ is strongly $(θ+1)$-uplifting with weakly compact targets. $κ$ is strongly $(θ+1)$-uplifting with totally indescribable targets, and indeed with targets having any property of $κ$ that is absolute to all models $V_γ$ with $γ > κ, θ$.
For any cardinal $κ$ and ordinal $θ$, the following are equivalent:
$κ$ is strongly $(θ+1)$-uplifting. $κ$ is superstrongly $(θ+1)$-unfoldable. $κ$ is almost-hugely $(θ+1)$-unfoldable. For every set $A ∈ H_{κ^+}$ there is a $κ$-model $M⊨\mathrm{ZFC}$ with $A∈M$ and $V_κ≺M$ and a transitive set $N$ with an elementary embedding $j:M→N$ having critical point $κ$ with $j(κ)> θ$ and $V_{j(κ)}≺N$, such that $N^{<j(κ)}⊆N$ and $j(κ)$ is inaccessible, weakly compact and more in $V$. $κ^{<κ}=κ$ holds, and for every $κ$-model $M$ there is an elementary embedding $j:M→N$ having critical point $κ$ with $j(κ)> θ$ and $V_{j(κ)}⊆N$, such that $N^{<j(κ)}⊆N$ and $j(κ)$ is inaccessible, weakly compact and more in $V$. Relations to other cardinals If $δ$ is a subtle cardinal, then the set of cardinals $κ$ below $δ$ that are strongly uplifting in $V_δ$ is stationary. If $0^♯$ exists, then every Silver indiscernible is strongly uplifting in $L$. In $L$, $κ$ is strongly uplifting iff it is unfoldable with cardinal targets. Every strongly uplifting cardinal is strongly uplifting in $L$. Every strongly $θ$-uplifting cardinal is strongly $θ$-uplifting in $L$. Every strongly uplifting cardinal is strongly unfoldable of every ordinal degree $α$ and a stationary limit of cardinals that are strongly unfoldable of every ordinal degree and so on. Relation to boldface resurrection axiom
The following theories are equiconsistent over $\mathrm{ZFC}$:
There is a strongly uplifting cardinal.
The boldface resurrection axiom for all forcing, for proper forcing, for semi-proper forcing and for c.c.c. forcing.
The weak boldface resurrection axioms for countably-closed forcing, for axiom-$A$ forcing, for proper forcing and for semi-proper forcing, respectively, plus $¬\mathrm{CH}$.

Weakly superstrong cardinal
(Information in this section comes from [3])
Hamkins and Johnstone called an inaccessible cardinal $κ$
weakly superstrong if for every transitive set $M$ of size $κ$ with $κ∈M$ and $M^{<κ}⊆M$, there exist a transitive set $N$ and an elementary embedding $j:M→N$ with critical point $κ$ such that $V_{j(κ)}⊆N$.
It is called
weakly almost huge if for every such $M$ there is such $j:M→N$ for which $N^{<j(κ)}⊆N$.
(As usual one can call $j(κ)$ the target.)
A cardinal is superstrongly unfoldable if it is weakly superstrong with arbitrarily large targets, and it is almost hugely unfoldable if it is weakly almost huge with arbitrarily large targets.
If $κ$ is weakly superstrong, it is $0$-extendible and $\Sigma_3$-extendible. Weakly almost huge cardinals also are $\Sigma_3$-extendible. Because $\Sigma_3$-extendibility can always be destroyed, all these cardinal properties (among others) are never Laver indestructible.
References

1. Hamkins, Joel David and Johnstone, Thomas A. Resurrection axioms and uplifting cardinals, 2014.
2. Hamkins, Joel David and Johnstone, Thomas A. Strongly uplifting cardinals and the boldface resurrection axioms, 2014.
3. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; and Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19-35, 2013. |
It is sometimes difficult to quantify how connected a graph is. We will now define the connectivity of a graph in terms of what are called vertex cutsets and edge cutsets.
Vertex Cutsets
Definition: For a connected graph $G = (V(G), E(G))$, a Vertex Cutset is a subset $W \subseteq V(G)$ such that removing the vertices of $W$ from $G$ (leaving the subgraph induced on $V(G) \backslash W = \{ x \in V(G) : x \notin W \}$) disconnects $G$.
For example, let's look at the following graph:
The vertex set of the graph above, let's call it $G$, is: $V(G) = \{ a, b, c, d, e, f, g, h \}$. Let's let $W$ be a vertex cutset, say $W = \{ e, f \}$. Hence it follows that $V(G) \backslash W = \{ a, b, c, d, g, h \}$, which results in the following graph:
Hence, $W$ is indeed a vertex cutset of $G$ since the resulting graph is disconnected.
Vertex Connectivity
Definition: For a graph $G = (V(G), E(G))$, if $W$ is a vertex cutset such that $\mid W \mid \leq \mid U \mid$ for every vertex cutset $U$ of $G$, then the Vertex Connectivity of $G$, denoted $\kappa (G)$, is the least number of vertices whose removal disconnects $G$, that is $\kappa (G) = \mid W \mid$. If $G$ is a complete graph, that is $G = K_n$ (which has no vertex cutset), then we set $\kappa (G) = \kappa (K_n) = n - 1$.
Essentially, $\kappa (G)$ is the size of a smallest vertex cutset of the graph $G$. In the example from earlier, for $W = \{ e, f \}$ we have $\mid \: W \: \mid = 2$. However, $\kappa (G) \neq 2$, since there exists another vertex cutset that is smaller than $W$. In fact, if $U = \{ g \}$, then removing $U$ disconnects the graph, so $U$ is a smallest vertex cutset of $G$, and hence $\kappa (G) = 1$.
Edge Cutsets
Definition: For a connected graph $G = (V(G), E(G))$, an Edge Cutset is a subset $M \subseteq E(G)$ such that removing the edges of $M$ from $G$ (leaving the edge set $E(G) \backslash M = \{ \{x , y \} \in E(G) : \{ x, y \} \notin M \}$) disconnects $G$.
Essentially, edge cutsets are the edge analogue of vertex cutsets. For example, consider the following graph:
The edge set of this graph, let's call it $G$, is $E(G) = \{ \{a, b \}, \{b, c \}, \{b, d \}, \{c, d \}, \{c, e \}, \{d, e\}, \{d, f \}, \{e, f \} \}$. Suppose that $M = \{\{d, f \}, \{e, f \} \}$. Then $E(G) \backslash M = \{ \{a, b \}, \{b, c \}, \{b, d \}, \{c, d \}, \{c, e \}, \{d, e\} \}$, in which the vertex $f$ is isolated, so removing $M$ disconnects $G$. Hence $M$ is an edge cutset of $G$.
Edge Connectivity
Similarly to defining vertex connectivity, we will also define edge connectivity.
Definition: For a graph $G = (V(G), E(G))$, if $M$ is an edge cutset such that $\mid M \mid \leq \mid N \mid$ for every edge cutset $N$ of $G$, then the Edge Connectivity of $G$, denoted $\lambda (G)$, is the least number of edges whose removal disconnects $G$, that is $\lambda (G) = \mid M \mid$.
Once again, in the example from earlier, $M$ was not the smallest edge cutset of $G$. In fact, if $N = \{ \{a, b\} \}$, then removing $N$ disconnects $G$ (it isolates the vertex $a$), and $| \: N \: | ≤ | \: M \: |$; indeed, $N$ is a smallest edge cutset of $G$. Hence $\lambda (G) = | \: N \: | = 1$.
Note that if a graph $G$ has an edge that is a bridge, let's say edge $e = \{ x, y\}$, then the removal of $e$ will result in a disconnected graph by definition. Hence it follows that $\lambda (G) = 1$. |
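Since the examples in this section are tiny, a brute-force computation is a reasonable way to check these connectivity values. The following sketch (my own illustration, not part of the original text; it is exponential in the number of edges, so only suitable for small graphs) tries edge subsets of increasing size until removal disconnects the graph:

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Depth-first check that the graph on `vertices` with `edges` is connected."""
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(vertices)

def edge_connectivity(vertices, edges):
    """Smallest number of edges whose removal disconnects the graph
    (brute force over edge subsets, smallest first)."""
    for k in range(len(edges) + 1):
        for cutset in combinations(edges, k):
            remaining = [e for e in edges if e not in cutset]
            if not is_connected(vertices, remaining):
                return k
    return len(edges)
```

For a path a-b-c, every edge is a bridge, so the function returns 1, matching the observation above that a graph with a bridge has $\lambda(G) = 1$.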
Let $X$ be a continuous random variable and let $Q_X$ be the associated quantile function. Show that the expected shortfall $ES_X[p]$ at the confidence level $p$, which is defined as
$$ES_X[p]=\Bbb E[X|X\leq Q_X(1-p)]$$ has the representation $$ES_X[p]=\frac{1}{1-p}\int_0^{1-p} Q_X(a)\,da.$$
Can some one give me a hint for this?
I know the definition of the quantile function $Q_X(p)=\inf\{x: F_X(x) \geq p\}$. I can think of it on an intuitive level, but want some thoughts to get started mathematically. |
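Not a proof, but one way to build intuition before attempting it is a quick Monte Carlo sanity check of the identity (here assuming, purely for illustration, that $X$ is standard normal):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.95                      # confidence level
n = 400_000
x = rng.standard_normal(n)    # assumed distribution: standard normal

# Left-hand side: E[X | X <= Q_X(1-p)], estimated from the empirical tail
q = np.quantile(x, 1 - p)
lhs = x[x <= q].mean()

# Right-hand side: (1/(1-p)) * integral_0^{1-p} Q_X(a) da,
# approximated by averaging the empirical quantile over a midpoint grid
m = 2000
a = (np.arange(m) + 0.5) / m * (1 - p)
rhs = np.quantile(x, a).mean()
```

The two estimates agree to within sampling error, which is exactly what the claimed representation predicts.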
Can someone please help me out with the following question?
Q. A simple harmonic oscillator, of mass m and natural frequency w_0, experiences an oscillating driving force f(t) = ma cos(wt). Therefore its equation of motion is:
[tex]\frac{{d^2 x}}{{dt^2 }} + \omega _0 ^2 x = a\cos \left( {\omega t} \right)[/tex]
Given that at t = 0 we have x = dx/dt = 0, find the function x(t). Describe the solution if w is approximately, but not exactly, equal to w_0.
I got:
[tex]x\left( t \right) = \frac{a}{{\left( {\omega _0 ^2 - \omega ^2 } \right)}}\left( {\cos \left( {\omega t} \right) - \cos \left( {\omega _0 t} \right)} \right)[/tex]
The answer says a couple of things about the behaviour of the solution for w ~ w_0 but I can't figure out how they got it. For instance "for large t it shows beats of maximum amplitude 2((w_0)^2 - w^2)^-1." How is that deduced and how would I determine which are the main characteristics of motion that I need to note? Any help would be appreciated.
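A hint for the w ~ w_0 behaviour (this is the standard sum-to-product manipulation, not taken from the book's answer): write

[tex]\cos \left( {\omega t} \right) - \cos \left( {\omega _0 t} \right) = 2\sin \left( {\frac{{\left( {\omega _0 - \omega } \right)t}}{2}} \right)\sin \left( {\frac{{\left( {\omega _0 + \omega } \right)t}}{2}} \right)[/tex]

so that

[tex]x\left( t \right) = \frac{{2a}}{{\omega _0 ^2 - \omega ^2 }}\sin \left( {\frac{{\left( {\omega _0 - \omega } \right)t}}{2}} \right)\sin \left( {\frac{{\left( {\omega _0 + \omega } \right)t}}{2}} \right)[/tex]

For w close to w_0 the first sine varies slowly, so it acts as an envelope on the fast oscillation at frequency (w_0 + w)/2. That slow envelope is the beat pattern, and its maximum value gives the stated maximum amplitude 2a/((w_0)^2 - w^2).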
Compactness of Finite Sets in a Topological Space
Recall from the Compactness of Sets in a Topological Space page that if $X$ is a topological space then a set $A \subseteq X$ is said to be compact in $X$ if every open cover of $A$ has a finite subcover.
We will now look at a nice theorem that says that any finite set in any topological space is compact.
Theorem 1: Let $X$ be any topological space. If $A \subseteq X$ is a finite set then $A$ is compact in $X$. Proof: Let $A \subseteq X$ be a finite set, say $A = \{ a_1, a_2, ..., a_n \}$. Let $\mathcal F = \{ A_i : i \in I \}$ be an open cover of $A$. Since $\mathcal F$ covers $A$, for each $k \in \{ 1, 2, ..., n \}$ there exists an index $i_k \in I$ such that $a_k \in A_{i_k}$. So the subcollection indexed by $I^* = \{ i_1, i_2, ..., i_n \} \subseteq I$ satisfies $A \subseteq \bigcup_{k=1}^{n} A_{i_k}$. Moreover, $\mid I^* \mid \leq n$. So $\{ A_i : i \in I^* \}$ is a finite subcover of $A$. Therefore $A$ is compact in $X$. $\blacksquare$
Corollary 1: If $X$ is a finite topological space then every subset $A \subseteq X$ is compact in $X$. Proof:Every subset $A$ of a finite set $X$ is finite. So by Theorem 1, $A$ is compact in $X$. $\blacksquare$ |
\(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}
\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}} \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)} \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}} \def\ker{\mathbf{ker}}\def\im{\mathbf{im}} \def\tr{\mathrm{tr\,}} \def\braket#1#2{\langle #1|#2\rangle} \) 1. We know that an operator can be represented as a matrix if you fix the basis. Changing the basis changes the matrix. One can try to make the matrix simpler, bringing it to one of the normal forms. If the operator is given by a matrix (in some basis), then a change of basis (in the source, or in the target space) changes that matrix.
The
normal forms depend on the type of the linear operator. The simplest one occurs when the operator acts between different spaces: \[ A:U\to V, \] so that one can choose the bases in \(U\) and \(V\) independently. 2. Changing the basis (in \(U\) or in \(V\)) leads to multiplication of the matrix \(A_{ij}\) on the right and on the left by the change-of-basis matrices: \[ A'_{i'j'}=\sum_{i,j}B_{i'i}A_{ij}C_{jj'}=\sum_{i,j}\braket{e'_{i'}}{e_i}A_{ij}\braket{f_{j}}{f'_{j'}} \] (with \(B\) and \(C\) being the invertible change-of-basis matrices). Theorem: With an appropriate choice of bases in \(U\) and \(V\), the matrix of an operator \(A\) can be reduced to the normal form \[ \left(\begin{array}{c|c} E_r&0\\ \hline 0&0\\ \end{array}\right), \] where \(r\) is the rank of \(A\). Exercise: Consider the mapping that takes a quadratic polynomial \(q\) to its values at \(0,1,2\) and \(3\). Find the normal form of this operator.
There are many ways to think about this normal form. Essentially it says that any linear mapping is glued out of zeros (mappings that collapse everything to \(0\)), isomorphisms and trivial embeddings.
3. The situation in the case when \(U=V\), and the bases in these spaces should be the same, is much more involved. In this case a change of basis acts on \(A\) by conjugation: \[ A\mapsto B^{-1}AB. \]
One important observation: recall that to each endomorphism \(A:U\to U\) one can associate its characteristic polynomial:
\[ P_A(z)=\det(A-zE). \] It is an invariant of the operator \(A\), not of the matrix: change of the basis leaves characteristic polynomial unchanged. 4. The roots of the characteristic polynomial are called eigenvalues. The set of eigenvalues of an operator \(A\) is called its spectrum and denoted as \(\sigma(A)\). Exercise: Find eigenvalues of the matrix of rotation by \(\phi\) in \(\Real^2\).
The coefficients of the characteristic polynomial have meanings: if \(d=\dim U\),
\[ P_A(z)=(-z)^d+\tr(A) (-z)^{d-1}+\ldots+\det(A), \] where \(\tr=\sum_i \bra{f_i}A\ket{f_i}\), the sum of the diagonal elements of the matrix for \(A\) (in any basis). The coefficients are the symmetric functions of the eigenvalues of \(A\). 5. If \(f\) is a polynomial, and \(A\) is an operator, then eigenvalues of \(f(A)\) are \(f(\lambda), \lambda\in\sigma(A)\). Example: The eigenvalues of circulant matrix are the roots of unity. 6. Now, we are ready to discuss the normal form for endomorphisms. Define a Jordan block of size \(r\) the matrix \[ J_r=\left( \begin{array}{ccccc} \lambda&1&0&\ldots&0\\ 0&\lambda&1&\ldots&0\\ \ldots&\ldots&\ldots&\ldots&\ldots\\ 0&0&0&\ldots&\lambda\\ \end{array} \right). \] Theorem: For any operator (over \(\Comp\)) there exists a basis in which the operator is represented by a matrix consisting of diagonal Jordan blocks.
The total size of the blocks corresponding to an eigenvalue \(\lambda\) is the multiplicity of \(\lambda\) in the complete factorization of the characteristic polynomial: in
\[ P_A(z)=\prod_{\lambda\in\sigma(A)}(z-\lambda)^{m_\lambda}, \] \(m_\lambda\) is the total size of the Jordan blocks corresponding to \(\lambda\). 7. If all the blocks are of size 1, the matrix (and the corresponding operator) is called diagonalizable. If the characteristic polynomial of an operator in \(d\)-dimensional space has \(d\) distinct roots, the operator is necessarily diagonalizable.
In particular, a
generic operator is diagonalizable.
Diagonalizable operators (by definition) have a basis consisting of the eigenvectors of \(A\): \(v\in U\) is called an eigenvector corresponding to the eigenvalue \(\lambda\) if
\[ Av=\lambda v. \] Exercises. Find the Jordan normal form for the operator of differentiation acting on the polynomials of degree at most \(d\). Find all eigenvectors. Find eigenvalues and eigenvectors for the matrix
\[
\left(\begin{array}{cc} 0&1\\ .01&0\\ \end{array}\right). \] |
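A quick numerical sanity check of the last exercise (an illustration added here, not part of the original notes): the characteristic polynomial of this matrix is \(z^2 - 0.01\), so the eigenvalues should be \(\pm 0.1\). Note they satisfy \(\lambda_1 + \lambda_2 = \tr A = 0\) and \(\lambda_1 \lambda_2 = \det A = -0.01\), as the coefficient formula above predicts.

```python
import numpy as np

# Characteristic polynomial of [[0, 1], [0.01, 0]] is z^2 - 0.01,
# so the eigenvalues are +-0.1, with eigenvectors (1, +-0.1).
A = np.array([[0.0, 1.0],
              [0.01, 0.0]])
eigvals = np.sort(np.linalg.eigvals(A).real)
```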
To begin, we need to find distances. Starting with the Pythagorean Theorem, which relates the sides of a right triangle, we can find the distance between two points.
Definition: Pythagorean Theorem
The Pythagorean Theorem states that the sum of the squares of the legs of a right triangle will equal the square of the hypotenuse of the triangle.
In graphical form, given the triangle shown,\(a^{2} +b^{2} =c^{2}\).
We can use the Pythagorean Theorem to find the distance between two points on a graph.
Example 1
Find the distance between the points (-3, 2) and (2, 5).
Solution
By plotting these points on the plane, we can then draw a right triangle with these points at each end of the hypotenuse. We can calculate the horizontal width of the triangle to be 5 and the vertical height to be 3.
From these we can find the distance between the points using the Pythagorean Theorem:
\(\begin{array}{l} {dist^{2} =5^{2} +3^{2} =34} \\ {dist=\sqrt{34} } \end{array}\)
Notice that the width of the triangle was calculated using the difference between the \(x\) (input) values of the two points, and the height of the triangle was found using the difference between the \(y\) (output) values of the two points. Generalizing this process gives us the distance formula.
distance formula
The distance between two points \((x_{1} ,y_{1} )\) and \((x_{2} ,y_{2} )\) can be calculated as
\(dist=\sqrt{(x_{2} -x_{1} )^{2} +(y_{2} -y_{1} )^{2} }\)
Exercise
Find the distance between the points (1, 6) and (3, -5).
Answer
\(5\sqrt{5}\)
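As a quick check of this answer (an added illustration, not part of the original text), the distance formula is straightforward to evaluate in code:

```python
from math import sqrt

def distance(p1, p2):
    """Distance between two points, using the distance formula above."""
    (x1, y1), (x2, y2) = p1, p2
    return sqrt((x2 - x1)**2 + (y2 - y1)**2)

# The exercise above: (3-1)^2 + (-5-6)^2 = 4 + 121 = 125
d = distance((1, 6), (3, -5))   # sqrt(125) = 5*sqrt(5), about 11.18
```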
Circles
If we wanted to find an equation to represent a circle with a radius of \(r\) centered at a point (\(h\), \(k\)), we notice that the distance between any point (\(x\), \(y\)) on the circle and the center point is always the same: \(r\). Noting this, we can use our distance formula to write an equation for the radius:
\(r=\sqrt{(x-h)^{2} +(y-k)^{2} }\)
Squaring both sides of the equation gives us the standard equation for a circle.
equation of a circle
The
equation of a circle centered at the point (\(h\), \(k\)) with radius \(r\) can be written as \((x-h)^{2} +(y-k)^{2} =r^{2}\)
Notice that a circle does not pass the vertical line test. It is not possible to write \(y\) as a function of \(x\) or vice versa.
Example 2
Write an equation for a circle centered at the point (-3, 2) with radius 4.
Solution
Using the equation from above, \(h = -3\), \(k = 2\), and the radius \(r = 4\). Using these in our formula,
\((x-(-3))^{2} +(y-2)^{2} =4^{2}\) simplified, this gives
\((x+3)^{2} +(y-2)^{2} =16\)
Example 3
Write an equation for the circle graphed here.
Solution
This circle is centered at the origin, the point (0, 0). By measuring horizontally or vertically from the center out to the circle, we can see the radius is 3. Using this information in our formula gives:
\((x-0)^{2} +(y-0)^{2} =3^{2}\) simplified, this gives
\(x^{2} +y^{2} =9\)
Exercise
Write an equation for a circle centered at (4, -2) with radius 6.
Answer
\((x-4)^{2} +(y+2)^{2} =36\)
Notice that, relative to a circle centered at the origin, horizontal and vertical shifts of the circle are revealed in the values of \(h\) and \(k\), which are the coordinates for the center of the circle.
Points on a Circle
As noted earlier, an equation for a circle cannot be written so that \(y\) is a function of \(x\) or vice versa. To find coordinates on the circle given only the \(x\) or \(y\) value, we must solve algebraically for the unknown values.
Example 4
Find the points on a circle of radius 5 centered at the origin with an \(x\) value of 3.
Solution
We begin by writing an equation for the circle centered at the origin with a radius of 5.
\(x^{2} +y^{2} =25\)
Substituting in the desired \(x\) value of 3 gives an equation we can solve for \(y\).
\(\begin{array}{l} {3^{2} +y^{2} =25} \\ {y^{2} =25-9=16} \\ {y=\pm \sqrt{16} =\pm 4} \end{array}\)
There are two points on the circle with an \(x\) value of 3: (3, 4) and (3, -4).
Example 5
Find the \(x\) intercepts of a circle with radius 6 centered at the point (2, 4).
Solution
We can start by writing an equation for the circle. \[(x-2)^{2} +(y-4)^{2} =36\]
To find the \(x\) intercepts, we need to find the points where
y = 0. Substituting in zero for \(y\), we can solve for \(x\).
\((x-2)^{2} +(0-4)^{2} =36\)
\((x-2)^{2} +16=36\) \((x-2)^{2} =20\) \(x-2=\pm \sqrt{20}\) \(x=2\pm \sqrt{20} =2\pm 2\sqrt{5}\)
The \(x\) intercepts of the circle are \(\left(2+2\sqrt{5} ,0\right)\) and \(\left(2-2\sqrt{5} ,0\right)\)
Example 6
In a town, Main Street runs east to west, and Meridian Road runs north to south. A pizza store is located on Meridian 2 miles south of the intersection of Main and Meridian. If the store advertises that it delivers within a 3-mile radius, how much of Main Street do they deliver to?
Solution
This type of question is one in which introducing a coordinate system and drawing a picture can help us solve the problem. We could either place the origin at the intersection of the two streets, or place the origin at the pizza store itself. It is often easier to work with circles centered at the origin, so we’ll place the origin at the pizza store, though either approach would work fine.
Placing the origin at the pizza store, the delivery area with radius 3 miles can be described as the region inside the circle described by \(x^{2} +y^{2} =9\).
Main Street, located 2 miles north of the pizza store and running east to west, can be described by the equation \(y = 2\).
To find the portion of Main Street the store will deliver to, we first find the boundary of their delivery region by looking for where the delivery circle intersects Main Street. To find the intersection, we look for the points on the circle where \(y = 2\). Substituting \(y = 2\) into the circle equation lets us solve for the corresponding \(x\) values.
\[\begin{array}{l} {x^{2} +2^{2} =9} \\ {x^{2} =9-4=5} \\ {x=\pm \sqrt{5} \approx \pm 2.236} \end{array}\]
This means the pizza store will deliver 2.236 miles down Main Street east of Meridian and 2.236 miles down Main Street west of Meridian. We can conclude that the pizza store delivers to a 4.472 mile long segment of Main St.
In addition to finding where a vertical or horizontal line intersects the circle, we can also find where an arbitrary line intersects a circle.
Example 7
Find where the line \(f(x)=4x\) intersects the circle \((x-2)^{2} +y^{2} =16\).
Solution
Normally, to find an intersection of two functions \(f(x)\) and \(g(x)\) we would solve for the \(x\) value that would make the functions equal by solving the equation \(f(x) = g(x)\). In the case of a circle, it isn’t possible to represent the equation as a function, but we can utilize the same idea.
The output value of the line determines the \(y\) value: \(y=f(x)=4x\). We want the \(y\) value of the circle to equal the \(y\) value of the line, which is the output value of the function. To do this, we can substitute the expression for \(y\) from the line into the circle equation.
\((x-2)^{2} +y^{2} =16\) replace \(y\) with the line formula: \(y=4x\)
\((x-2)^{2} +(4x)^{2} =16\) expand \(x^{2} -4x+4+16x^{2} =16\) simplify \(17x^{2} -4x+4=16\) since this equation is quadratic, we arrange one side to be 0 \(17x^{2} -4x-12=0\)
Since this quadratic doesn’t appear to be easily factorable, we can use the quadratic formula to solve for \(x\):
\(x=\frac{-(-4)\pm \sqrt{(-4)^{2} -4(17)(-12)} }{2(17)} =\frac{4\pm \sqrt{832} }{34}\), or approximately \(x \approx 0.966\) or -0.731
From these
x values we can use either equation to find the corresponding \(y\) values.
Since the line equation is easier to evaluate, we might choose to use it:
\(\begin{array}{l} {y=f(0.966)=4(0.966)=3.864} \\ {y=f(-0.731)=4(-0.731)=-2.923} \end{array}\)
The line intersects the circle at the points (0.966, 3.864) and (-0.731, -2.923).
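The computation in this example is easy to verify numerically (an added sketch, not part of the original text): solve the quadratic with the quadratic formula and confirm that both intersection points lie on the circle.

```python
from math import sqrt

# (x-2)^2 + (4x)^2 = 16  ->  17x^2 - 4x - 12 = 0
a, b, c = 17.0, -4.0, -12.0
disc = b**2 - 4*a*c                  # discriminant: 16 + 816 = 832
x1 = (-b + sqrt(disc)) / (2*a)       # about 0.966
x2 = (-b - sqrt(disc)) / (2*a)       # about -0.731

# y values come from the line y = 4x
points = [(x, 4*x) for x in (x1, x2)]
```

Each computed point satisfies the circle equation \((x-2)^{2} +y^{2} =16\) up to rounding, confirming the intersections found above.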
Exercise
A small radio transmitter broadcasts in a 50 mile radius. If you drive along a straight line from a city 60 miles north of the transmitter to a second city 70 miles east of the transmitter, during how much of the drive will you pick up a signal from the transmitter?
Answer
The circle can be represented by \(x^{2} +y^{2} =50^{2}\). Finding a line from (0,60) to (70,0) gives \(y=60-\frac{60}{70} x\). Substituting the line equation into the circle gives \(x^{2} +\left(60-\frac{60}{70} x\right)^{2} =50^{2}\). Solving this equation, we find \(x = 14\) or \(x = 45.29\), corresponding to points (14, 48) and (45.29, 21.18). The distance between these points is 41.21 miles.
Important Topics of This Section Distance formula Equation of a Circle Finding the \(x\) coordinate of a point on the circle given the \(y\) coordinate or vice versa Finding the intersection of a circle and a line |
If your working modelling assumptions are such that the dynamics of the log price process $\ln(S_t)$ is
space homogeneous, you have that the price of a European vanilla option is itself a space-homogeneous function of degree one. You can then appeal to Euler's theorem to get the relationship you need.
More specifically, define the price at time $t$ of the option expiring at $T$ and struck at $K$ as
$$ V = DF(t,T)\, \Bbb{E}_t^\Bbb{Q} \left[ (w(S_T - K))^+ \right] := V(S_t, K, T-t, \theta) $$where $\theta$ figures the relevant model parameters and $w=\pm1$ the call/put factor. Now under the space homogeneity assumption we've just mentioned, you can write that$$ V(xS_t,xK,T-t,\theta) = x V(S_t,K,T-t,\theta), \forall x \geq 0$$
Taking the derivative with respect to $x$ on both sides and then setting $x=1$ gives:
$$ \frac{\partial V}{\partial S} S + \frac{\partial V}{\partial K} K = V $$hence$$ \frac{\partial V}{\partial K} = \frac{1}{K} \left( V - \frac{\partial V}{\partial S} S \right) $$
which is what you are looking for.
And indeed if you are pricing a digital call ($D$ below) for instance, using the notation $C$ to denote the European call price\begin{align}D &= -\frac{dC}{dK} \\ &= -\left[ \frac{\partial C}{\partial K} + \frac{\partial C}{\partial \Sigma} \frac{\partial \Sigma}{\partial K} \right] \\ &= -\left[ \frac{1}{K}\left( C - \Delta S\right) + \nu \frac{\partial \Sigma}{\partial K} \right]\end{align}where for a maturity $T$ and strike level $K$, $C$ is the corresponding European call price, $\Delta$ its BS Delta, $\nu$ its BS Vega and $\partial \Sigma/\partial K$ the IV skew. We have moved from the second line to the third using the result which we just derived. |
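As a hedged numerical illustration (my own check, assuming a Black-Scholes model with a flat smile so that the skew term $\partial \Sigma/\partial K$ vanishes), the homogeneity identity $\partial C/\partial K = (C - \Delta S)/K$ can be verified against a finite-difference bump of the strike:

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def bs_delta(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return N(d1)

S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.2

# Digital call via bumped strikes: D = -dC/dK (flat smile, so no skew term)
h = 1e-4
D_fd = -(bs_call(S, K + h, T, r, sigma)
         - bs_call(S, K - h, T, r, sigma)) / (2 * h)

# Digital call via the homogeneity identity: D = -(C - Delta * S) / K
C = bs_call(S, K, T, r, sigma)
D_hom = -(C - bs_delta(S, K, T, r, sigma) * S) / K
```

Both routes give the undiscounted-exercise-probability form $e^{-rT} N(d_2)$, as Black-Scholes homogeneity in $(S, K)$ requires.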
Path Connectivity of Connected Topological Spaces
Recall from the Path Connected Topological Spaces page that a topological space $X$ is said to be path connected if for each pair of points $x, y \in X$ there exists a continuous function $\alpha : [0, 1] \to X$, called a path, such that $\alpha(0) = x$ and $\alpha(1) = y$.
Interestingly enough, path connectivity is actually a stronger concept than regular connectivity. As we will show in the following theorem, every path connected topological space is also a connected topological space.
Theorem 1: If $X$ is a path connected topological space then $X$ is also a connected topological space. Proof: Let $X$ be a path connected topological space and assume instead that $X$ is disconnected. Since $X$ is disconnected, there exist open sets $A, B \subset X$, where $A, B \neq \emptyset$, $A \cap B = \emptyset$, and $X = A \cup B$. Let $a \in A$ and $b \in B$. Since $X$ is path connected there exists a continuous function $\alpha : [0, 1] \to X$ such that $\alpha(0) = a$ and $\alpha(1) = b$. Since $\alpha$ is continuous, $\alpha^{-1}(A)$ and $\alpha^{-1}(B)$ are both open in $[0, 1]$. We claim that $\{ \alpha^{-1}(A), \alpha^{-1}(B) \}$ is a separation of $[0, 1]$. We have already established that $\alpha^{-1}(A)$ and $\alpha^{-1}(B)$ are open in $[0, 1]$. Furthermore, since $0 \in \alpha^{-1}(A)$ and $1 \in \alpha^{-1}(B)$ we see that $\alpha^{-1}(A), \alpha^{-1}(B) \neq \emptyset$. Since $X = A \cup B$ we see that $[0, 1] = \alpha^{-1}(A) \cup \alpha^{-1}(B)$. We only need to show that $\alpha^{-1}(A) \cap \alpha^{-1}(B) = \emptyset$. Suppose not. Then there exists an $x \in \alpha^{-1}(A) \cap \alpha^{-1}(B)$, so $\alpha(x) \in A \cap B$. But $A \cap B = \emptyset$, which is a contradiction. Therefore $\alpha^{-1}(A) \cap \alpha^{-1}(B) = \emptyset$, so $\{ \alpha^{-1}(A), \alpha^{-1}(B) \}$ is a separation of $[0, 1]$. But $[0, 1]$ is a connected topological space (with the subspace topology), so we have arrived at a contradiction. Therefore the assumption that $X$ is disconnected was false. So if $X$ is path connected, $X$ is also connected. $\blacksquare$ |
Before the Dirac equation was introduced, it was difficult to describe the behaviour of particles moving at relativistic speeds. The Dirac equation introduced four new components to the wave function. These four components are divided into two energy states, positive and negative, each with spin 1/2 (up and down). Using the Dirac equation, new spin properties were assigned and the magnetic moment was given as:
\(\mu _{D}=\frac{qS}{m}\)
where,
S is the spin vector, q is the charge, and m is the mass.

What is the Dirac equation?
The Dirac equation is a relativistic wave equation describing all spin-1/2 particles, such as electrons and quarks, for which parity inversion (sign inversion of the spatial coordinates) is a symmetry. The equation was first formulated in the year 1928 by P. A. M. Dirac. It was used to predict the existence of antiparticles, and it also admits solutions for freely moving electrons.
Dirac equation formula
\(\left ( \beta mc^{2}+c\sum_{n=1}^{3}\alpha _{n}p_{n} \right )\psi (x,t)=i\hbar\frac{\partial \psi (x,t)}{\partial t}\)
Where,
𝜓=𝜓(x,t) is the electron wave function, m is the electron rest mass, x, t are the spacetime coordinates, p1, p2, p3 are the momentum components, c is the speed of light, and \(\hbar\) is the reduced Planck constant
All these physical constants are the reflection of special relativity and quantum mechanics. The purpose of formulating this equation was to study the relative motion of the electron and to treat the atom as consistent with relativity.
Applications of the Dirac Equation
In quantum mechanics, the Dirac field is used to resolve paradoxical features. The Dirac sea was studied with the help of "hole theory", according to which the vacuum is occupied by many negative-energy electrons in eigenstates.
Other Formulations of the Equation
Polar form: With the help of a Lorentz transformation, the Dirac spinor can be represented using two degrees of freedom, i.e. as derivatives of scalar and pseudoscalar bilinear quantities.
As a differential equation: The spinor function of the Dirac equation, for three out of four components, can be represented as a partial differential equation for one component.
Curved spacetime: The equation can also be formulated in curved spacetime.
What is the Dirac Field?
The Dirac field is an example of a fermion field, in which the canonical equal-time commutation relations are replaced with canonical equal-time anti-commutation relations.
Related articles: Questions Related To Dirac Equations
Q1. Who suggested the concept of a matter wave?
Ans: de Broglie suggested the concept of a matter wave.
Q2. What is the value of the electron spin g-factor?
Ans: The Dirac equation predicts a spin g-factor of 2 (the orbital g-factor is 1).
Q3. What is the g-factor?
Ans: The g-factor is a dimensionless quantity used to characterize the magnetic moment and angular momentum of an atom. It is also known as the g value or dimensionless magnetic moment.
Q4. What is the total probability of finding the particle in space?
Ans: The total probability of finding the particle in space is unity.
Q5. Define group velocity.
Ans: Group velocity is the velocity with which the overall envelope (modulation) of a wave's amplitudes propagates.
To learn more about other laws of Physics, stay tuned with BYJU’S |
Currently going through a video on Counting Minimum Cuts by Tim Roughgarden. $(A_{i},B_{i}) = \big((A_{1},B_{1}), ..., (A_{t},B_{t})\big) \forall i \in \Bbb{R}$ $P\big((A_{i},B_{i})\big) \geq \frac{1}{\begin{pmatrix} n \\ 2 \end{pmatrix}} = p$, which I interpret as the lower bound on the probability of having at least one minimal cut. In the problem set that follows two answers A and B are highlighted as being correct. I understand why A is correct; but am puzzled why B is also marked as correct. A: For every graph $G$ with $n$ nodes and every min cut $(A,B)$ (I am assuming same thing as $(A_{i},B_{i})$) $P\big((A,B)\big) \geq p$. B There exists a graph $G$ with $n$ nodes and a min cut $(A,B)$ (again assuming same thing as $(A_{i},B_{i})$) of $G$ such that $P\big((A,B)\big) \leq p$.
I don't understand what you mean by "$(A_i,B_i) = ((A_1,B_1),\ldots,(A_t,B_t))$", an obviously false statement. Perhaps you meant $(A_i,B_i) \in \{(A_1,B_1),\ldots,(A_t,B_t)\}$?
I don't quite understand your interpretation of the statement $P((A_i,B_i)) \geq p := 1/\binom{n}{2}$. Here is the correct interpretation:
For any minimum cut $C$, the probability that Karger's algorithm outputs $C$ is at least $p := 1/\binom{n}{2}$.
This is exactly what A states.
For B, you need a graph $G$ with some min cut $C$ satisfying $P(C) \leq 1/\binom{n}{2}$. One such example is a cycle, an example you were probably shown in class: a cycle on $n$ vertices has exactly $\binom{n}{2}$ min cuts, and each is output with probability exactly $1/\binom{n}{2}$. |
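A simulation sketch of this cycle example (my own illustration; a simplified contraction via union-find rather than a tuned implementation of Karger's algorithm): on a 4-cycle, every run outputs one of the $\binom{4}{2} = 6$ min cuts, and by symmetry each appears with probability exactly $1/6$, attaining the bound in statement B with equality.

```python
import random
from collections import Counter

def karger_cut(edges, n):
    """One run of Karger's contraction on a multigraph with vertices 0..n-1.
    Returns the resulting 2-partition as a canonical frozenset pair."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    components = n
    live = list(edges)                       # edges between distinct supernodes
    while components > 2:
        u, v = random.choice(live)           # contract a uniformly random edge
        parent[find(u)] = find(v)
        components -= 1
        live = [(a, b) for a, b in live if find(a) != find(b)]
    side = frozenset(x for x in range(n) if find(x) == find(0))
    other = frozenset(range(n)) - side
    return frozenset({side, other})

# 4-cycle: every 2-arc partition is a min cut; there are C(4,2) = 6 of them
n = 4
cycle = [(i, (i + 1) % n) for i in range(n)]
random.seed(1)
counts = Counter(karger_cut(cycle, n) for _ in range(6000))
```

With 6000 runs, each of the 6 cuts shows up roughly 1000 times, consistent with each being hit with probability $1/\binom{4}{2}$.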
This is something I haven't seen online yet, indicator functions with values in a finite field. Probably for a good reason, but I would like to know why, and if there are still things that can be said. For instance what can we say of the relationship (if any) between the support of the convolution of two indicator functions with values in a finite field vs the addition of the sets they indicate.
More precisely: Let $F$ be a finite field and let $A, B$ be sets with additive cyclic structure, say $A, B \subseteq \mathbb{Z}_N$ with $N$ coprime to the characteristic of $F$. Define the "characteristic functions" $1_A, 1_B : \mathbb{Z}_N \to F$ by $1_A(x) = 1$ if $x \in A$, and $1_A(x) = 0$ otherwise. Similarly for $1_B$. Is there any relationship/proposition we can infer between the set $A + B$ and the support of the (cyclic) convolution $$ 1_A \ast 1_B(k) = \sum_{j \in \mathbb{Z}_N} 1_A(j) 1_B(k-j)? $$ For instance, if $1_A \ast 1_B(k) = 0$ for all $k \in \mathbb{Z}_N$, would this say anything at all about $A + B$?
Note in the case of the usual characteristic functions with values in $\mathbb{R}$ we have the nice property that $A + B$ is precisely the support of $1_A \ast 1_B$. So I wondered whether we can at least get something (although probably not quite this I imagine) when the values of the characteristic functions are either the additive or multiplicative identity in a finite field. Thanks!
Remark: Recall that the convolution here is taken modulo $p$, where $p$ is the characteristic of the field. Moreover note that the support of $1_A \ast 1_B$ is contained in $A + B$, since, if the convolution is non-zero modulo $p$, it is non-zero in $\mathbb{R}$ and so the support lies in $A + B$ by the fact mentioned above. This however does not say anything on whether or not the support is empty modulo $p$... |
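Here is a tiny computational example (my own construction, not taken from the question) showing that over $F_2$ the convolution can vanish identically while $A + B$ is the whole group: take $N = 5$, $A = \mathbb{Z}_5$, $B = \{0, 1\}$, so every $k$ has exactly $|B| = 2$ representations and hence $1_A \ast 1_B \equiv 0 \pmod 2$.

```python
p, N = 2, 5
A = set(range(N))   # A = Z_5
B = {0, 1}

def conv(k):
    """(1_A * 1_B)(k), the cyclic convolution computed in F_p."""
    return sum(1 for j in range(N) if j in A and (k - j) % N in B) % p

support = {k for k in range(N) if conv(k) != 0}
sumset = {(a + b) % N for a in A for b in B}

# Every k has exactly |B| = 2 representations, so the convolution
# vanishes identically mod 2 -- yet A + B is all of Z_5.
```

So the vanishing of $1_A \ast 1_B$ over a finite field gives no information about $A + B$ beyond the containment of the support noted in the remark.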