Mini Research Project Time Updates Added CCSD(T) $n_i$ and dipole moments and tweaked the discussion (the delay was caused by a system-wide storage upgrade on the machines, which took nearly a week to complete). Preamble This response is in no way meant to be contrary to what Geoff has already posted. I happen to enjoy these types of questions and I like to tackle them as little research projects for fun. Therefore, my answer will be brutally detailed and in-depth. That said, this should be a good introduction to how I would approach this problem if I were going to do this at the 'production level' of research, but this definitely goes far beyond the purpose of any reasonable response for an SE answer. It's All About the Dipoles, Baby Okay, so I wouldn't use THAT heading in a paper, but maybe in a talk, depending on who my audience was... We can easily determine the dipole moment of cis-2-butene via electronic structure theory. I have modeled two conformers of cis-2-butene as shown below. I will refer to the geometry on the left as StructA and the geometry on the right as StructB. A couple of things to note, since I don't include any captions to tables: energies are always reported in $\mathrm{kcal\ mol}^{-1}$ and dipole moments ($\mu$) are given in Debye. Computational Methods Full geometry optimizations and corresponding harmonic vibrational frequency computations were performed with second-order Møller-Plesset perturbation theory (MP2) and a variety of density functional theory (DFT) methods using the Gaussian 09 software package for the conformers of cis-2-butene. Both conformers were characterized in $C_{2v}$ symmetry. The DFT methods implemented include B3LYP, B3LYP-GD3(BJ), M06-2X, MN12SX, N12SX, and APFD. The B3LYP-GD3(BJ) method employs Grimme's 3rd-generation dispersion correction as well as the Becke-Johnson damping function. All DFT computations employed a pruned numerical integration grid having 90 radial shells and 590 angular points per shell. The heavy-aug-cc-pVTZ basis set was employed for these computations, where the heavy (non-hydrogen) atoms were augmented with diffuse functions (e.g. cc-pVTZ for H and aug-cc-pVTZ for carbon). This basis set is abbreviated as haTZ. The CCSD(T) method (i.e. the coupled-cluster method that includes all single and double substitutions as well as a perturbative treatment of the connected triple excitations) was similarly employed using the CFOUR software package. The magnitudes of the components of the residual Cartesian gradients of the optimized geometries were less than $6.8\times 10^{-6} E_h\ a_0^{-1}$. Single point energies were computed with the explicitly correlated MP2-F12 [specifically MP2-F12 3C(FIX)] and CCSD(T)-F12 [specifically CCSD(T)-F12b with unscaled triples contributions] methods in conjunction with the haTZ. These computations were performed with the Molpro 2010.1 software package using the default density fitting (DF) and resolution of the identity (RI) basis sets. Natural bond orbital (NBO) analyses were performed for the MP2 optimized structures using the haTZ basis set and the SCF density. All computations employed the frozen core approximation (i.e. the carbon 1s$^2$ electrons were kept frozen). Computational Methods in English Two different conformations of cis-2-butene were characterized with a variety of cheap (but usually okay) approximations (i.e. DFT methods) as well as reliable (but generally more expensive) wave function methods (i.e. MP2 and CCSD(T)). The wave function methods are necessary to validate the DFT results.
CCSD(T) is the gold standard and gives very good results for single-reference, closed-shell, well-behaved systems, so we will use this as our 'best estimate'. We use a variety of methods in order to look for agreement in the results. If we see good agreement across the board, we can be confident in our results. If we see massive discrepancies, then we will have to be careful when we analyze the results. Geometries have been converged to a tight threshold (i.e. we have good molecules) and our DFT computations use a relatively dense integration grid (which leads to more accurate results). Notice that I employ a heavy-aug-cc-pVTZ (haTZ) basis set. Why leave the diffuse functions off hydrogen? The purpose of diffuse functions is to describe electron density far away from the nucleus of an atom. Therefore we slap these functions onto the carbon atoms, which are relatively large compared to hydrogen. Hydrogen, on the other hand, has only one electron and therefore has a small electron density when isolated. In cis-2-butene, hydrogen is bonded to carbon at a rather short bond distance. The electron density around the hydrogen is even more reduced than that of an isolated hydrogen atom in the gas phase. Therefore, it would be impractical to include diffuse functions on hydrogen. Doing so may even lead to erroneous results, since we would be trying to describe electron density far away from the hydrogen nucleus when in reality there is virtually none to be found. Finally, we perform single point energies using explicitly correlated methods. Because the CCSD(T) opts and freqs will likely not be done in time for this posting, we can instead gauge how the resulting geometry from each optimization procedure varies from another geometry given by a different method. If all of the energies are similar (within a few tenths of a $\mathrm{kcal\ mol}^{-1}$), then we can be confident not only that our geometries are very similar but that small deviations in the geometry will have little effect on the corresponding energies, at least in this region of the potential energy surface (PES). Large deviations will usually mean that the method which produced the 'outlier' is not a good approximation for the system (I do not expect cis-2-butene to be problematic at all). Explicitly correlated methods accelerate convergence to the CBS limit. These methods have been shown to give results that a large basis set and a canonical method would provide, but with a much smaller basis set. For example, the result that I would get with conventional CCSD(T)/aug-cc-pV5Z could be obtained using CCSD(T)-F12/aug-cc-pVTZ. This makes the computations less intensive and much more feasible. Results The number of imaginary frequencies ($n_i$), relative MP2-F12 and CCSD(T)-F12 energetics ($\Delta E^{\mathrm{MP2-F12}}$ and $\Delta E^{\mathrm{CC-F12}}$, respectively, in $\mathrm{kcal\ mol^{-1}}$) and dipole moment ($\mu_z$ in Debye) are given in the following table for both conformers of cis-2-butene for a variety of methods. The relative energies were determined by taking the difference between the energy at the respective geometry and that at the reference CCSD(T) geometry [e.g. E(CCSD(T)) - E(MP2)]. StructB is a second-order saddle point ($n_i = 2$) on every single PES considered and therefore is not a minimum energy structure. StructA is, however, a minimum ($n_i = 0$) on every PES considered. The characterization of the nature of the stationary point is consistent between CCSD(T), our best estimate, MP2, and the DFT methods.
Single point energies reveal negligible differences in the optimized geometries. The energies associated with the MP2 optimized structures are the reference point for all other relative energies. Deviations grow no larger than 0.27 $\mathrm{kcal\ mol}^{-1}$ for the MP2-F12 and CCSD(T)-F12 relative energies. In addition, there is good agreement between the MP2-F12 and CCSD(T)-F12 relative energies, suggesting that higher-order correlation effects are small. The dipoles for StructA and StructB are very similar, with very small magnitudes on the order of a couple tenths of a Debye. To put these quantities into perspective, the dipole moment of water is 1.85 D. We all know that water has a pretty large dipole moment, so by comparison, cis-2-butene has a very WEAK dipole moment. You can compare to the dipole moments of other molecules by referring to this NIST reference. Clearly, the rotation of the methyl groups has a very small effect on the dipole moments of each conformer. The MP2 and DFT dipole moments deviate from the best estimate by no more than 0.03 D, but remain in qualitative agreement for both StructA and StructB. The figure below shows the directionality of the dipole. The head of the (unscaled) arrow points toward the negative pole while the tail of the (unscaled) arrow is oriented to the positive pole. The numbers on the atoms represent 'natural charges' from a natural bond orbital (NBO) analysis of the MP2 optimized StructA. Clearly the carbon atoms have a small negative charge to them, as these atoms are sucking (great scientific term here) electron density away from the neighboring hydrogen atoms. This is because the nuclear charge (i.e. the number of protons) on carbon is much greater than that of hydrogen (6 vs. 1). It was suggested that I add some data highlighting the energy difference between the two different conformers. The following table presents the energy difference (where $\Delta E_{\mathrm{A-B}}$ is equivalent to E(A) - E(B)) of each optimized geometry using the MP2-F12 and CCSD(T)-F12 single point energies (in $\mathrm{kcal\ mol}^{-1}$). We can see that StructA is about 1.5 $\mathrm{kcal\ mol}^{-1}$ lower in energy than StructB. This makes sense because StructB is a higher-order saddle point and StructA is a minimum on the PES. (I'm actually quite surprised that the energy difference is this large for a couple of methyl rotations...) Conclusions The dipole moment for two conformers of cis-2-butene has been examined using seven different computational approaches and a triple-$\zeta$ quality basis set. The performance of these methods has been tested by evaluating the energies of each geometry with the 'gold standard' CCSD(T) method (the explicitly correlated variant). Good agreement is seen across the board with respect to the Hessian indices, energies, and dipole moments. Only StructA was a minimum on each PES. Both conformers of cis-2-butene have a very weak dipole moment on the order of 0.2 -- 0.3 D. The positive pole is in the vicinity of the methyl groups whereas the negative pole is centered around the sp2 hybridized carbons. FAQ So you may be asking yourself (or rather, should be asking yourself) questions such as those listed below. I will tackle them one at a time. 1.) Why did we look at two conformers of cis-2-butene and why does it matter? StructA is a minimum, which means that if you were to characterize this guy in the gas phase, you'd find StructA rather than StructB.
This is important because if we were to report these results to other scientists, they will want to know what they can expect to find without wondering. Therefore, StructA is going to be our conformer of interest rather than StructB. The latter will still provide insightful results, but that is about it for the purposes of this examination. 2.) Why did we use a variety of methods to characterize these molecular systems? Computational methods are simply approximations. They are not guaranteed to give the 'correct' answer. So we try to address this by using a variety of methods (i.e. approximations) and we analyze the results accordingly. If the results are in agreement, then we can feel confident that the results are correct, since we tested them against a list of methods. You can never rely on just one method unless it is rigorous, well-tested, and has been shown repeatedly to perform well in the literature. When we say 'perform well', this usually means that the computational results are in reasonable agreement with experimental results. This is important because experimental results are the LAW (for all intents and purposes). If the computations disagree with experiment, 99.9% of the time this means that your computational approach sucked and that your approximation was either flawed or misapplied. By using a host of methods, we can put a little more faith into the results we observe, because massive disagreement between the results of the methods is very unlikely in a normal situation. 3.) What is the point of doing a bunch of energy points? Again, because we used a slew of methods to characterize the geometries of cis-2-butene, we end up with non-identical geometries each time we use a new approximation. For instance, the methyl C-H bond lengths from B3LYP are going to be a little bit different from those obtained with MP2. So then that begs the question, "How do these small differences affect the property of the system that we are interested in?" Generally, minute differences will have little effect on the resulting energies of each molecule under consideration. Energy is a very important property that chemists love to look at. So, if we take each geometry (and each one is unique from the others) and we evaluate the energy of the molecule at the same level of theory (in this case, MP2-F12 and CCSD(T)-F12), then we can quickly see how 'resolved' each geometry was. There should be very good agreement between the relative energies of each geometry (probably to within a few tenths of a $\mathrm{kcal\ mol}^{-1}$). 4.) Okay, so why MP2-F12 AND CCSD(T)-F12? We use MP2-F12 AND CCSD(T)-F12 to test for 'higher-order correlation effects'. MP2 methods are much cheaper than CCSD(T) methods, but MP2 is not as rigorous and can be error-prone in a host of molecular systems. Therefore, we test the performance of MP2 by busting out our 'gold standard', which is the CCSD(T) method. If MP2 agrees well with CCSD(T), then we can feel confident in our MP2 results and never have to revisit the difficult, time-consuming CCSD(T) computations ever again. Also, CCSD(T) will tell us how DFT performed as well. DFT methods must always be calibrated against something more rigorous, since DFT is known for 'getting the right answer for the wrong reasons' and it isn't always right. The 'F12' bit just means that these are 'explicitly correlated' methods. Rather than give an introduction to what this means, you should understand why we use it instead.
You may have noticed that whenever we do a computational job, we specify a method AND a basis set (e.g. heavy-aug-cc-pVTZ). These basis sets can be measured by how many atomic orbitals (or functions) are given in the set. The more that are given, the better the 'basis-set approximation'. Think of this in terms of Riemann sums, where you try to approximate the area under a curve using a set of rectangles. Each rectangle is a basis function and the number of rectangles you use forms a basis set. The more rectangles you use, the better your approximation of the area under that curve will be. Basis sets in computational chemistry behave the same way. As you approach an infinite set of rectangles, you approach the exact answer. As you approach an infinite number of basis functions, you approach what is called the CBS (complete basis set) limit. At the CBS limit, you have an exact answer. We cannot implement an infinite basis set in chemistry (for obvious reasons), and very large basis sets are cost-prohibitive. Therefore, people have devised these F12 approximations, which are constructed in such a way as to give results that are comparable to those that you'd get with a large basis set, but you can get them by using a relatively small basis set instead! This is a powerful approach to convergent quantum chemistry that saves you a lot of time while maintaining a set of very good results. 5.) Why didn't you provide more pretty pictures? That's just the nature of the beast. Computational chemistry is usually short on graphics but very dense on spreadsheets. Plus... I'm no artist. I actually spent a good couple hours trying to get some electrostatic potentials posted, but the new version of G09 hates the molecular viewer programs I currently use, so I ditched that idea.
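As an aside, the point-charge picture behind those NBO natural charges can be turned into a rough dipole estimate by hand via $\mu = \sum_i q_i \mathbf{r}_i$. Here is a minimal sketch; the charges and coordinates below are illustrative placeholders I made up for the example, not my actual MP2/haTZ values:

```python
import numpy as np

# Point-charge estimate of a molecular dipole: mu = sum_i q_i * r_i.
# Charges (in units of e) and coordinates (in angstroms) are illustrative
# placeholders, NOT the actual NBO/MP2 values from this post.
charges = np.array([-0.35, -0.35, 0.18, 0.18, 0.17, 0.17])  # sums to zero
coords = np.array([
    [ 0.67,  0.00, 0.0],   # sp2 carbon
    [-0.67,  0.00, 0.0],   # sp2 carbon
    [ 1.24,  0.93, 0.0],   # H
    [-1.24,  0.93, 0.0],   # H
    [ 2.10, -0.50, 0.0],   # methyl H (lumped)
    [-2.10, -0.50, 0.0],   # methyl H (lumped)
])

E_ANG_TO_DEBYE = 4.803  # 1 e*angstrom is about 4.803 D

mu_vec = E_ANG_TO_DEBYE * (charges[:, None] * coords).sum(axis=0)
print("dipole vector (D):", mu_vec)
print("magnitude (D):", np.linalg.norm(mu_vec))
```

With the real natural charges and Cartesian coordinates, the same few lines give a useful sanity check on the direction of the ab initio dipole, though the magnitude will differ since point charges ignore the higher multipole contributions.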
Axiom: Axiomatization of 1-Based Natural Numbers Axioms The following axioms are intended to capture the behaviour of the ($1$-based) natural numbers $\N_{>0}$, the element $1 \in \N_{>0}$, and the operations of addition $+$ and multiplication $\times$ as they pertain to $\N_{>0}$:

$(A)$: $\exists_1 1 \in \N_{> 0}: a \times 1 = a = 1 \times a$

$(B)$: $\forall a, b \in \N_{> 0}: a \times \paren {b + 1} = \paren {a \times b} + a$

$(C)$: $\forall a, b \in \N_{> 0}: a + \paren {b + 1} = \paren {a + b} + 1$

$(D)$: $\forall a \in \N_{> 0}, a \ne 1: \exists_1 b \in \N_{> 0}: a = b + 1$

$(E)$: $\forall a, b \in \N_{> 0}$, exactly one of these three holds: $a = b \lor \paren {\exists x \in \N_{> 0}: a + x = b} \lor \paren {\exists y \in \N_{> 0}: a = b + y}$

$(F)$: $\forall A \subseteq \N_{> 0}: \paren {1 \in A \land \paren {z \in A \implies z + 1 \in A} } \implies A = \N_{> 0}$

Note The above axiom schema specifies the old-fashioned definition of the natural numbers as: $\text{The set of natural numbers} = \set {1, 2, 3, \ldots}$ as opposed to the more modern approach which defines them as: $\text{The set of natural numbers} = \set {0, 1, 2, 3, \ldots}$ In order to eliminate confusion, on $\mathsf{Pr} \infty \mathsf{fWiki}$ the set $\set {1, 2, 3, \ldots}$ will be denoted as $\N_{> 0}$ or $\N_{\ne 0}$ or $\N_{\ge 1}$. When $\N$ is used, $\N = \set {0, 1, 2, 3, \ldots}$ is to be understood.
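For readers who like to machine-check such axiom schemata, here is a minimal Lean 4 sketch of one way to bundle $(A)$ through $(F)$; the names and the packaging into a structure are my own choices, subsets are modeled as predicates, and axiom $(E)$ is stated only as "at least one of the three holds" rather than "exactly one":

```lean
-- Illustrative Lean 4 bundling of axioms (A)-(F); `Pos` plays ℕ_{>0}.
structure OneBasedNat (Pos : Type) where
  one : Pos
  add : Pos → Pos → Pos
  mul : Pos → Pos → Pos
  -- (A): 1 is a two-sided multiplicative identity
  axA : ∀ a, mul a one = a ∧ mul one a = a
  -- (B): a × (b + 1) = (a × b) + a
  axB : ∀ a b, mul a (add b one) = add (mul a b) a
  -- (C): a + (b + 1) = (a + b) + 1
  axC : ∀ a b, add a (add b one) = add (add a b) one
  -- (D): every element other than 1 is a successor in exactly one way
  axD : ∀ a, a ≠ one → ∃! b, a = add b one
  -- (E): trichotomy (weakened here to "at least one holds")
  axE : ∀ a b, a = b ∨ (∃ x, add a x = b) ∨ (∃ y, a = add b y)
  -- (F): induction, with predicates standing in for subsets A ⊆ ℕ_{>0}
  axF : ∀ A : Pos → Prop, A one → (∀ z, A z → A (add z one)) → ∀ n, A n
```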
Schrodinger Equation in momentum space? I don't know if this makes any sense at all, but I'm studying QM and just trying to generalize some things I'm learning. Please let me know where I go wrong... Basically, by my understanding, the most general form of the Schrodinger Equation can be written as follows: [tex]i\hbar\frac{\partial\Psi}{\partial t}=\hat{H}\Psi[/tex] Then we have that H is given by: [tex]\hat{H}=\frac{\hat{p}^{2}}{2m}+V[/tex] Then we just need to know what the 'p' operator is. My book shows that in configuration space it is: [tex]\hat{p}=-i\hbar\nabla[/tex] (I generalized it a bit, the book just has an x derivative); however, in momentum space it is just: [tex]\hat{p}=p[/tex] Now... if we take the momentum operator in configuration space and plug that into H, and then plug that into the form of the Schrodinger Equation I wrote at the top, we get the familiar form: [tex]i\hbar\frac{\partial\Psi}{\partial t}=-\frac{\hbar^{2}}{2m}\nabla^{2}\Psi+V\Psi[/tex] Ok... but what I am wondering now is what happens if we use the momentum operator in momentum space instead of configuration space; then, plugging that into H and H into the first equation as before, we would end up with: [tex]i\hbar\frac{\partial\Psi}{\partial t}=\frac{p^{2}}{2m}\Psi+V\Psi[/tex] This seems very odd though; I think it must be wrong but I don't know why... perhaps it is right but only for a wavefunction expressed in momentum space? Can anyone help me understand what is going on here? Perhaps there is no justification in plugging the momentum operator for momentum space into H like I did... I just don't know? Thanks for any insight.
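One piece of the puzzle can be checked numerically: for a wavefunction expressed in momentum space, $\hat p$ really is just multiplication by $p$. A quick sketch (assuming numpy, units $\hbar = 1$; the grid and wave packet are arbitrary choices) compares $\mathcal{F}^{-1}[p\,\tilde\psi(p)]$ with $-i\,\partial_x \psi$ computed directly:

```python
import numpy as np

# Check that the momentum operator acts as multiplication by p in momentum
# space: compare ifft(p * fft(psi)) with -i * d(psi)/dx.  Units: hbar = 1.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2) * np.exp(2j * x)       # Gaussian packet with momentum ~2

p = 2 * np.pi * np.fft.fftfreq(N, d=dx)    # momentum grid in fft ordering
p_psi_spectral = np.fft.ifft(p * np.fft.fft(psi))
p_psi_direct = -1j * np.gradient(psi, dx)  # -i d/dx via finite differences

# Small residual, limited by the finite-difference accuracy:
print(np.max(np.abs(p_psi_spectral - p_psi_direct)))
```

The subtlety the question runs into is the potential term: $V(x)$ is simple multiplication only in position space; in momentum space it becomes a nonlocal (convolution-type) operator.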
I'm a bit confused by the concept of "mixture model". I'm studying the hidden Markov model, which is frequently referred to as a "mixture model". But I don't know what the term "mixture" implies. Suppose the HMM has state variable sequence $v_1,v_2,\dots,v_t$, $v_i\in\{1,\dots,k\}$. The emission model has pdf $f(y_i\mid\theta_{v_i})$. For each state $m$, the emission probability is a Gaussian mixture $f(y\mid\theta_m)=\frac{1}{3}\mathcal{N}(\mu_m,\sigma^2)+\frac{1}{3}\mathcal{N}(\mu_m-10,\sigma^2)+\frac{1}{3}\mathcal{N}(\mu_m+20,\sigma^2)$ Why is it a mixture model? Is it because it has multiple states and each state has its own emission probability? Or is it because each state uses a Gaussian mixture? According to Wikipedia, the second understanding is more probable. However, intuitively, I think the first understanding is more reasonable. ---------------added on July 13, 2015-------------------------- I know the definition of a mixture model in Wikipedia. I (think I) know what a mixture model is. Now I need to know what is not a mixture model. Is a multi-state model a mixture model in general? Another piece of evidence to support the first understanding: in the first three paragraphs of section 7, the paper claims the observation $y_{t+1}$ is drawn from the mixture component indexed by $v_{t+1}$. Here, $v_{t+1}\in\{1,\dots,k\}$ is a state variable, and given the realization of the variable, $f(y_{t+1}\mid\theta_{v_{t+1}})$ is an emission probability. Hence this multi-state model is called a "mixture model" and each probability (corresponding to a state), without being summed up, is called a "mixture component" in the paper. Moreover, there might be a way to "mediate" the two understandings: if each state $m$ has an occurrence probability $\pi_m$, then the marginal emission probability (with the state variable integrated out) is $\sum_{m=1}^k\pi_mf(y\mid\theta_m)$, which is obviously a mixture model. Hence we have a general claim: a multi-state distribution can also be called a mixture model due to the above transformation. Is this claim valid? Why?
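To make the two readings concrete, here is a small generative sketch (numpy, with made-up parameters) of exactly the model in the question: the hidden state follows a Markov chain (the first sense of "mixture", after marginalizing the state), and each state's emission is itself a 3-component Gaussian mixture (the second sense):

```python
import numpy as np

rng = np.random.default_rng(0)

K = 2                                    # hidden states
T_mat = np.array([[0.9, 0.1],            # state transition matrix
                  [0.2, 0.8]])
mu = np.array([0.0, 50.0])               # per-state location mu_m
offsets = np.array([0.0, -10.0, 20.0])   # within-state GMM offsets
sigma = 1.0

def sample(T=200):
    v, ys = 0, []
    for _ in range(T):
        v = rng.choice(K, p=T_mat[v])            # Markov state sequence
        c = rng.integers(3)                      # within-state component
        ys.append(rng.normal(mu[v] + offsets[c], sigma))
    return np.array(ys)

print(sample()[:10])
```

Both levels of discreteness produce a multimodal marginal for $y$, which is why both usages of "mixture" show up in the literature.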
Lethbridge Number Theory and Combinatorics Seminar: Farzad Aryan Date: 09/22/2014 Time: 12:00 University of Lethbridge On Binary and Quadratic Divisor Problem Let $d(n)=\sum_{d|n} 1$. This is known as the divisor function. It counts the number of divisors of an integer. Consider the following shifted convolution sum \begin{equation*} \sum_{an-m=h}d(n) \, d(m) \, f(an, m), \end{equation*} where $f$ is a smooth function which is supported on $[x, 2x]\times[x, 2x]$ and oscillates mildly. In 1993, Duke, Friedlander, and Iwaniec proved that $$ \sum_{an-m=h}d(n) \, d(m) \, f(an, m) = \textbf{Main term}(x)+ \mathbf{O}(x^{0.75}).$$ Here, we improve (unconditionally) the error term in the above formula to $\mathbf{O}(x^{0.61})$, and conditionally, under the assumption of the Ramanujan-Petersson conjecture, to $\mathbf{O}(x^{0.5})$. We will also give some new results on shifted convolution sums of functions coming from Fourier coefficients of modular forms. Location: B660 University Hall
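For intuition, the shifted convolution sum is cheap to evaluate by brute force for small $x$. A sketch (my own illustration, taking $a=1$, a fixed shift $h$, and $f$ replaced by the indicator of $[x,2x]^2$, which only crudely mimics the smooth, mildly oscillating $f$ of the theorem):

```python
from sympy import divisor_count

# Brute-force Sum_{n - m = h} d(n) d(m) over the box [x, 2x]^2 (a = 1).
def shifted_convolution(x, h):
    total = 0
    for m in range(x, 2 * x + 1):
        n = m + h                  # enforce the shift condition n - m = h
        if x <= n <= 2 * x:
            total += divisor_count(n) * divisor_count(m)
    return total

print(shifted_convolution(1000, 1))
```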
Course Notes for lecture 4, ECE301 Fall 2008, Prof. Boutin Note: these were taken by students: they are NOT the official instructor's notes. Watch out for typos and mistakes! Periodic Functions The definition of a periodic function given in class is as follows: the function x(n) is periodic if and only if there exists an integer N such that x(n+N) = x(n). The value of N is called the "period". As an example, we can use the function $ x(n) = e^{\omega_0 j n} $. To test when this function is periodic, we do the following: $ x(n+N) = x(n) $ $ e^{\omega_0 j (n+N)} = e^{\omega_0 j n} $ $ e^{\omega_0 j n} e^{\omega_0 j N} = e^{\omega_0 j n} $ $ e^{\omega_0 j N} = 1 $ $ \cos(\omega_0 N) + j\sin(\omega_0 N) = 1 $ which is true if $ \omega_0 N = 2\pi k $ for some integer k. This leads to the conclusion that if $ {\omega_0 \over 2\pi} = {k \over N} $ or, put another way, $ {\omega_0 \over 2\pi} $ is a rational number, then the function is periodic. Put yet another way: if the equation is of the form $ e^{\omega_0 j n} $ and $ \omega_0 $ is made up of $ \pi $ and a rational component (contains no irrationals besides $ \pi $), then the function is periodic.
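The conclusion is easy to illustrate numerically (a quick sketch; the two $\omega_0$ values are chosen to show one rational and one irrational multiple of $2\pi$):

```python
import numpy as np

# x(n) = exp(j*w0*n) is periodic iff w0/(2*pi) is rational (= k/N).
def smallest_period(w0, max_N=1000, tol=1e-9):
    n = np.arange(200)
    x = np.exp(1j * w0 * n)
    for N in range(1, max_N + 1):
        if np.allclose(np.exp(1j * w0 * (n + N)), x, atol=tol):
            return N                       # smallest period found
    return None                            # no period up to max_N

print(smallest_period(2 * np.pi * 3 / 8))  # w0/2pi = 3/8 -> period 8
print(smallest_period(1.0))                # w0/2pi irrational -> None
```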
I think that the answer is that there is no correct answer. Whilst this is a simple question, the answer is kinda open. You're asking about the entropy of correlated data with respect to random number generation. The key reference for this is NIST Special Publication 800-90B, Recommendation for the Entropy Sources Used for Random Bit Generation. Look to sections 5 and 6. You will be faced with two questions: is my data correlated (non-IID), and if so, what is its entropy rate? You have already realised that the standard $ -K\sum_{i=1}^n {p_i \log p_i} $ isn't appropriate. Is your data correlated? This is the first step, and NIST suggests permutation testing. There are 11 tests NIST recommends, all of which are somewhat complex. To help (not), they have released some code on GitHub that implements these tests. I'm unsure of how official this is, but see the well-received How to interpret the entropy results for a NIST test file as to their soundness. An immediately apparent problem for implementation is that as $ T, T_i \in \mathbb{R} $, the equality test in Fig. 4, section 2.2.2 becomes difficult without bounds. In a nutshell, the code doesn't work well enough for field use. What is the entropy? If you manage to satisfy yourself that the data is non-IID, NIST recommends taking the lowest value of another 10 complex statistical tests as the min. entropy value. Similarly, the Python and C++ code has problems outputting a coherent and credible entropy estimate. Perhaps you can implement the tests yourself in another fashion. What I do. I too build random number generators and I use similar permutation tests, but focused solely on a compression test. Compression exploits correlation and eliminates redundancy. Compression is also used in both the NIST non-IID confirmation and entropy estimation tests above. One can calculate what I have termed a correlation factor, $ c_f $, explored in What exactly does compression say about correlation of data? If $ c_f \simeq 1 $ then the data is IID, and non-IID if $ c_f > 1 $. I then use the non-permuted compressed file size and divide twice by two to get a conservative entropy rate. This method seems to be as good as any, and more robust than most. I estimate about 1.6 bits / byte from a 24V Zener diode at 10 kSa/s. That's 16 kbps of cryptographic-strength entropy. Notes. I now use fp8 and paq8px compressors as they are about the best I can find that will compile for me. They'll compress IID data to within 1% of the theoretical Shannon limit. Divide by two as a security precaution to allow for improvements in compression algorithms, whilst realising that these improvements are asymptotic with diminishing returns. Divide by two again just because.
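A minimal sketch of the compression-based estimate just described, with zlib standing in for the fp8/paq8px compressors mentioned above (zlib is weaker, so the resulting estimate is even more conservative):

```python
import os
import zlib

def entropy_estimate_bits_per_byte(data: bytes) -> float:
    # The compressed size upper-bounds the information content; divide by
    # two twice, as described above, as a safety margin for future,
    # better compression algorithms.
    compressed_bits = 8 * len(zlib.compress(data, level=9))
    return compressed_bits / len(data) / 2 / 2

sample = os.urandom(65536)                      # stand-in for sampler output
print(entropy_estimate_bits_per_byte(sample))   # ~2 b/B for IID random bytes
```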
This is not an answer to your question but a long comment on its motivation. Multiplication is at least two conceptually distinct things, only one of which can reasonably be described as repeated addition: The natural map $\mathbb{Z} \times A \to A$ given by $(n, a) \mapsto na$ where $A$ is an abelian group; this really is repeated addition, and is in particular bilinear. The composition $\text{End}(A) \times \text{End}(A) \to \text{End}(A)$ of endomorphisms of an abelian group. What's confusing is that these two definitions agree in familiar cases. If $A = \mathbb{Z}$ (the abelian group), repeated addition gives a natural map $\mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$. On the other hand, $\text{End}(\mathbb{Z}) \cong \mathbb{Z}$ (the ring), and composition of endomorphisms gives a natural map $\mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$. These happen to be the same map, but this is an illusion caused by the fact that we're looking at such a fundamental abelian group $\mathbb{Z}$. Similarly, if $A = \mathbb{R}$ (the abelian group), repeated addition gives a natural map $\mathbb{Z} \times \mathbb{R} \to \mathbb{R}$. On the other hand, the reasonable endomorphisms of $\mathbb{R}$ form a ring isomorphic to $\mathbb{R}$ (the ring), giving a natural map $\mathbb{R} \times \mathbb{R} \to \mathbb{R}$, and the restriction to $\mathbb{Z}$ in the first factor of the second map gives the first. But when we think of multiplication of real numbers, the first picture is misleading: in what sense is $\pi \times \pi = \pi^2$ repeated addition? Simple: it isn't. It's better conceptualized as composition of scalings of the real line (the endomorphism definition). The endomorphism definition generalizes immediately to multiplication of complex numbers and matrix multiplication, a context where "repeated addition" doesn't even begin to capture what multiplication is all about. This is probably a big reason why people have a difficult time with complex numbers: nobody's explained to them that they're just composing rotations and scalings of the plane. Exponentiation is at least three conceptually distinct things, only one of which can reasonably be described as repeated multiplication: The natural map $\mathbb{Z} \times G \to G$ given by $(n, g) \mapsto g^n$ where $G$ is a group; this really is repeated multiplication. Note that for fixed $n$ we don't get a homomorphism in general if $G$ is non-abelian, but for fixed $g$ we get a homomorphism $\mathbb{Z} \to G$. The natural map $B \to B$ given by $x \mapsto e^x = \exp(x) = \sum \frac{x^k}{k!}$ where $B$ is a topological ring and the series converges (which for example is always true in a Banach algebra). If it exists for all $x \in B$, this map is a homomorphism from the additive group of $B$ to the multiplicative group of $B$; moreover, the homomorphism $t \mapsto e^{tx}$ from $\mathbb{R}$ to $B^{\times}$ is (in nice cases) uniquely determined by the fact that its derivative at $t = 0$ (in nice cases where this exists) is $x$. In other words, this is a very, very natural map. Any map that extends or is analogous to one or both of the above two maps. The reason maps in the third category exist is because of the nice homomorphism properties that anything behaving like an exponential ought to satisfy, which we often want to imitate in other settings (e.g. the exponential map in Riemannian geometry). 
Thus, for example, we have an exponential $(a, x) \mapsto a^x = e^{x \log a}$ where $a$ is a positive real and $x$ is an element of a topological ring, generalizing the second definition, such that $a^{x+y} = a^x a^y$ (when $x, y$ commute) and $(ab)^x = a^x b^x$ and if $x$ is chosen to be a scalar multiple of the identity, we get back a special case of the first map for $G = (\mathbb{R}_{>0}, \times)$. But I still think this is misleading. One sign is that exponentials with arbitrary bases behave quite badly once $a$ is allowed to be anything other than a positive real. The first time I tried to graph the equation $$y = (-10)^x$$ on my calculator impressed this point on me very strongly. (Try it and see what happens.) Of course this is due to the fact that logarithms aren't well-defined in general, which, while interesting, only further emphasises the point that instead of allowing both arbitrary bases and exponents we should stick to repeated multiplication, $e^x$, and logarithms, which are the real stars of the show. So insofar as multiplication and exponentiation are repeated addition and multiplication, at least this is sensible because addition and multiplication are associative. Exponentiation is not associative, and it shouldn't be, because in many more general cases its two inputs are different types of things. Therefore, there's no reason to expect repeated exponentiation to have any reasonable properties along the lines of the natural and useful homomorphism properties of multiplication and exponentiation, and as far as I know, it doesn't.
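One can see the bad behavior of $(-10)^x$ directly by computing $e^{x \log(-10)}$ with the principal branch of the logarithm (a quick sketch in Python; `cmath` uses the principal branch, which is only one of infinitely many possible branch choices):

```python
import cmath

# (-10)^x interpreted as exp(x * Log(-10)) with the principal logarithm:
# Log(-10) = ln(10) + i*pi, so the values spiral through the complex plane
# and land on the real axis only at special values of x.
log_m10 = cmath.log(-10)
for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(x, cmath.exp(x * log_m10))
```

The output is real at the integers ($-10$ at $x=1$, $100$ at $x=2$) and genuinely complex in between, which is why a real-valued calculator plot of $y = (-10)^x$ comes out as a scatter of isolated points or an error.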
Analytical problem: what you are expecting is positive diffusion: you want the $T_i$ values to spread over your domain as time passes, to eventually reach $T_i(t\rightarrow \infty) = \text{const}$. If $\alpha$ were a negative number, you would have what is called negative diffusion: you'd get exactly the opposite, i.e. the gradients would grow greater through time. The sign of $\alpha$ hence dictates the behaviour of your analytical solution. $\alpha < 0$ case: Ideally, the numerical solution should have the same behaviour as the analytical solution. However, finite difference theory assumes the solution to be smooth: if the solution features gradients that are too sharp, then your numerical method will not be able to handle them. We have just said that in the case where $\alpha < 0$, the gradients grow greater with time. The error generated by the simulation will not be smeared out, as would be the case with positive diffusion $\alpha > 0$, but instead will be amplified. For that reason, if $\alpha < 0$, you know for sure your simulation is going to blow up at some point. $\alpha > 0$ case: If $\alpha > 0$, you are however not safe. If your time step is too large, your simulation will not be stable either. The stability condition $\Delta t < \frac{\Delta x ^2}{2 \alpha}$ indicates whether your numerical method has a chance of being stable or not. Note that it is a necessary condition for your numerical method to be stable, not a sufficient condition. Yet in practice, it turns out to be a very powerful tool. Also, the mesh Fourier number for a diffusive term can be defined as $\alpha \frac{\Delta t}{\Delta x^2}$. In practice it is more convenient to write the stability condition in terms of the mesh Fourier number: $\alpha \frac{\Delta t}{\Delta x^2} < \frac{1}{2}$. This way you can see that the parameters of your simulation $\Delta t$, $\Delta x$ and $\alpha$ are all on the left-hand side and $\frac{1}{2}$ is the critical value that must not be exceeded for the simulation to have a chance of being stable. In practice, the value for $\alpha$ is given by your problem and you will have chosen $\Delta x$ already. Hence, $\Delta t$ is the only parameter you can play with so that the stability condition on diffusion is observed. The value of the critical mesh Fourier number depends on the space and time discretisation you have chosen. Some time integrators have broader stability regions than others, hence they will allow larger mesh Fourier numbers. Practically speaking, this means you'd be able to choose larger time steps while still having a stable numerical method. To summarise: if $\alpha < 0$, you will have negative diffusion and your simulation will in any case not be stable. If $\alpha > 0$, your simulation might be stable... or it may not be! The stability condition on diffusion (and the mesh Fourier number) helps you choose the time step $\Delta t$ for your numerical method to be stable. I recommend you make a dummy simulation and play with the parameters to see what happens, as in the sketch below. No need to waste time programming something big: spreadsheet software is enough for your particular case. Edit: partial rewrite of my answer to make it clearer
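Here is such a dummy simulation in Python rather than a spreadsheet (an FTCS finite-difference sketch with arbitrary parameters), which lets you watch the stability condition kick in:

```python
import numpy as np

# Explicit FTCS scheme for u_t = alpha * u_xx with a spike initial condition
# and zero Dirichlet boundaries.  Try dt just below and just above the
# stability limit dx^2 / (2 * alpha).
def simulate(alpha, dx, dt, steps=500, nx=51):
    u = np.zeros(nx)
    u[nx // 2] = 1.0                       # initial spike
    r = alpha * dt / dx**2                 # mesh Fourier number
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

dx, alpha = 0.1, 1.0
dt_crit = dx**2 / (2 * alpha)
print(np.max(np.abs(simulate(alpha, dx, 0.9 * dt_crit))))  # stays bounded
print(np.max(np.abs(simulate(alpha, dx, 1.1 * dt_crit))))  # blows up
```

Running it with a negative `alpha` shows the negative-diffusion blow-up as well, no matter how small the time step.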
Nontrivial ordered ω-limit sets in a linear degenerate parabolic equation 1. Department of Mathematics I, Wüllnerstr. 5-7, RWTH Aachen, 52056 Aachen, Germany $ u_t=a(x) (\Delta u+\lambda_1 u) \qquad $ (*) with zero Dirichlet data in a smoothly bounded domain $\Omega \subset \mathbb{R}^n$, $n\ge 1$. Here $a$ is positive in $\Omega$ and Hölder continuous in $\bar\Omega$, and $\lambda_1>0$ denotes the principal eigenvalue of $-\Delta$ in $\Omega$ with Dirichlet data. It is shown that if $\int_\Omega \frac{(\operatorname{dist}(x,\partial\Omega))^2}{a(x)}dx=\infty$ then there exist initial data in $W^{1,\infty}(\Omega)$ such that the solution of (*) is bounded but not convergent as $t\to\infty$: It has a totally ordered $\omega$-limit set which is not a singleton. Under the above condition, the occurrence of even unbounded ordered $\omega$-limit sets is demonstrated. Conversely, if $\frac{(\operatorname{dist}(x,\partial\Omega))^2}{a(x)}$ is integrable then any solution emanating from initial data in $W^{1,\infty}(\Omega)$ converges to some stationary solution of (*) as time approaches infinity. Mathematics Subject Classification: Primary: 35B40, 35B41, 35K6. Citation: Michael Winkler. Nontrivial ordered ω-limit sets in a linear degenerate parabolic equation. Discrete & Continuous Dynamical Systems - A, 2007, 17 (4): 739-750. doi: 10.3934/dcds.2007.17.739
This question is based on the application of the pdf which was an earlier question of mine asked here: Confusion regarding pdf of circularly symmetric complex gaussian rv. Suppose $v \sim CN(0,2\sigma^2_v)$ is a circularly symmetric complex Gaussian random variable which acts as the measurement noise in this model $$y_n = A + v_n \tag{1} $$ where $y$ is the observation and $A$ is a scalar unknown value which needs to be estimated. I am having a slight confusion about whether there should be a 2 in the denominator of Eq (3) and Eq (4) with the $\exp(.)$ term. Based on the answer in the link, there should be no sqrt term with $\pi$ in the denominator if $v \sim CN(0,2\sigma^2_v)$. If $v \sim N(0,\sigma^2_v)$ then there is a sqrt term. Can somebody please check if I have correctly written out the log-likelihood? I think I am missing a 2 in the denominator of the $\exp[.]$ term in Eq (3) but I am not quite sure. Thank you for your time and help. $$P_y(y_1,y_2,...,y_N) = \prod_{n=1}^N\frac{1}{2\pi \sigma^2_v} \exp \bigg(\frac{-{({y_n-A})}^H ({y_n-A})}{2\sigma^2_v} \bigg) \tag{2}$$ Taking the log, $$\ell = -N\ln(2\pi\sigma^2_v) - \frac{1}{\sigma^2_v} \bigg[\sum_{n=1}^{N} {(y_n - A)}{(y_n - A)}^{\mathsf{H}} \bigg]. \tag{3}$$ $$ = -N\ln(2\pi\sigma^2_v)- \frac{1}{2 \sigma^2_v}\bigg[ \sum_{n=1}^{N}y_n y_n^\mathsf{H} - 2 \sum_{n=1}^N y_n A\bigg] - \frac{1}{2 \sigma^2_v}\bigg[ \sum_{n=1}^N {AA}^\mathsf{H} \bigg] \tag{4}$$
In Peskin & Schroeder p. 39 they introduce the 4x4 matrices $$\left(\mathcal{J}^{\mu\nu}\right)_{\alpha\beta} = i \left(\delta^{\mu}_{\;\alpha} \delta^{\nu}_{\;\beta} - \delta^{\mu}_{\;\beta}\delta^{\nu}_{\;\alpha}\right) \tag{3.18}$$ which they "pull out of a hat" and which represent the Lorentz algebra, that is, they satisfy $$\left[ \mathcal{J}^{\mu\nu} , \mathcal{J}^{\rho\sigma} \right] = i \left( \eta^{\nu\rho} \mathcal{J}^{\mu\sigma} - \eta^{\mu\rho}\mathcal{J}^{\nu\sigma} - \eta^{\nu\sigma}\mathcal{J}^{\mu\rho} + \eta^{\mu\sigma}\mathcal{J}^{\nu\rho} \right)\tag{3.17}$$ I am having difficulties interpreting the objects $\left( \mathcal{J}^{\mu\nu} \right)_{\alpha\beta}$. In his lecture notes on QFT, David Tong describes on p. 82 the objects $\left(\mathcal{J}^{\mu\nu}\right)$ as the six antisymmetric matrices describing the six transformations of the Lorentz group: 3 rotations $\mathbf{J}=\left(J_x,J_y,J_z\right)$ and 3 boosts $\mathbf{K}=\left(K_x,K_y,K_z\right)$, where as I understand it each $J_i$ and $K_i$ is a matrix. Then he says that the indices $\alpha\beta$ refer to the components of the matrices $\left(\mathcal{J}^{\mu\nu}\right)$, and gives as an example $$\left(\mathcal{J}^{01}\right)^{\alpha}_{\;\beta} = \left[ \begin{matrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix} \right]$$ which generates boosts in the $x^1$ direction. So far so good. I am interested in inserting (3.18) into (3.17) in order to check that the matrices satisfy the Lorentz algebra; however, I am confused by the four indices. It seems to me that $\left(\mathcal{J}^{\mu\nu}\right)_{\alpha\beta}$ and $\left(\mathcal{J}^{\mu\nu}\right)$ cannot really be the same objects, so how do I retrieve the latter?
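One way to untangle the indices is to note that the matrix that acts on 4-vectors is $(\mathcal{J}^{\mu\nu})^{\alpha}{}_{\beta} = \eta^{\alpha\rho}(\mathcal{J}^{\mu\nu})_{\rho\beta} = i(\eta^{\mu\alpha}\delta^{\nu}_{\;\beta} - \eta^{\nu\alpha}\delta^{\mu}_{\;\beta})$, i.e. (3.18) with its first index raised by the metric; for each fixed pair $(\mu,\nu)$ this is an ordinary 4x4 matrix, and (3.17) becomes a statement about matrix commutators that can be checked by brute force. A numerical sketch (my own, assuming the mostly-minus metric of P&S):

```python
import numpy as np
import itertools

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus Minkowski metric

# (J^{mu nu})^alpha_beta = i (eta^{mu alpha} delta^nu_beta
#                             - eta^{nu alpha} delta^mu_beta)
def J(mu, nu):
    M = np.zeros((4, 4), dtype=complex)
    for a in range(4):
        for b in range(4):
            M[a, b] = 1j * (eta[mu, a] * (nu == b) - eta[nu, a] * (mu == b))
    return M

# Check (3.17) for all 4^4 index combinations:
for mu, nu, rho, sig in itertools.product(range(4), repeat=4):
    lhs = J(mu, nu) @ J(rho, sig) - J(rho, sig) @ J(mu, nu)
    rhs = 1j * (eta[nu, rho] * J(mu, sig) - eta[mu, rho] * J(nu, sig)
                - eta[nu, sig] * J(mu, rho) + eta[mu, sig] * J(nu, rho))
    assert np.allclose(lhs, rhs)
print("Lorentz algebra (3.17) verified for all index combinations")
```

Tong's boost matrix corresponds to this mixed-index object up to his convention of dropping the factor of $i$ from the generators.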
Global Modeling of a Non-Maxwellian Discharge in COMSOL® Global modeling of plasmas is a powerful approach to study large chemistry sets. In these models, the reactions are represented by rate coefficients. In particular, the rate coefficients of electron impact collisions depend on the electron energy distribution function (EEDF), which is often non-Maxwellian and can be computed from an approximation of the Boltzmann equation (BE). Here, we explain how to create a global model fully coupled with the BE in the two-term approximation using the COMSOL Multiphysics® software. Setting Up a Global Model of a Non-Maxwellian Discharge The equations in a global model are greatly simplified because the spatial information of the different quantities in the plasma reactor is treated as volume-averaged. Without the spatial derivatives, the numerical solution of the equation set becomes considerably simpler and the computational time is reduced. Consequently, this type of model is useful when investigating a broad region of parameters for plasmas with complex chemistries. For a closed reactor without net mass creation at the surfaces, a mixture of k=1,\dotsc,Q species and j=1,\dotsc,N reactions is described by mass fraction balance equations for Q-1 species, in which V is the reactor volume, \rho is the mass density, w_k is the mass fraction of species k, A_l is the area of surface l, h_l is a correction factor of surface l, R_s is the surface rate expression of surface l, R_k is the volume rate expression for species k, and M_k is the molar mass. The sum in the last term is over surfaces where species are lost or created. One of the species mass fractions is found from mass conservation. The electron number density is obtained from the charge neutrality condition n_e = \sum_k Z_k n_k, where n_k is the number density and Z_k is the charge number, the sum running over the ions. The electron energy density, n_{\varepsilon}, can be computed from an energy balance equation whose right-hand side ends in the term +\sum_l \sum_{ions} e h_l \frac{A_l}{V}R_{surf,k,l} N_A \left( \varepsilon_e + \varepsilon_i \right), where n_{\varepsilon}=n_e \overline \varepsilon, \overline \varepsilon is the mean electron energy, P_{abs} is the power absorbed by the plasma, and e is the elementary charge. This last term accounts for the kinetic energy transported to the surface by electrons and ions. The summation is over all positive ions and boundaries with surface reactions, \varepsilon_e is the mean kinetic energy lost per electron lost, \varepsilon_i is the mean kinetic energy lost per ion lost, and N_A is Avogadro’s number. In the equations above, the source terms R_k and R_{\varepsilon} are computed using rate coefficients that represent the effect of collisions. In particular, for electron impact collisions, the rate coefficients depend on the EEDF, which is often a non-Maxwellian distribution and depends on the discharge conditions. In practice, the EEDF can be obtained by solving an approximation of the electron BE using fundamental collision cross-section data. Once the EEDF is known, the rate coefficients are computed by a suitable averaging of the electron impact cross sections over the EEDF. Describing the Boltzmann Equation and Two-Term Approximation The BE that describes the evolution of an ensemble of electrons in a six-dimensional phase space is \frac{\partial f}{\partial t} + \textbf{v} \cdot \nabla f - \frac{e}{m}\textbf{E} \cdot \nabla_v f = C[f], where f is the EEDF, \textbf{v} is the velocity coordinates, m is the electron mass, \textbf{E} is the electric field, \nabla_v is the velocity gradient operator, and C is the rate of change in f due to collisions.
Normally, a rather simplified BE is solved instead. It is assumed that the electric field and the collision probabilities are spatially uniform. The BE is then written in terms of spherical coordinates in the velocity space and f is expanded in spherical harmonics. The series is truncated after the second term, and the so-called two-term approximation of f is f \approx f_0\left(v,z,t\right) + f_1\left(v,z,t\right)\cos\theta, where f_0 is the isotropic part of f, f_1 is an anisotropic perturbation, v is the magnitude of the velocity, \theta is the angle between the velocity and the field direction, and z is the position along this direction. The problem is further simplified by solving only steady-state cases where the electric field and EEDF are either stationary or oscillate at a high frequency. The last piece of simplification consists of separating the energy dependence of the EEDF from its time and space dependence, where F_{0,1} is an energy distribution function constant in time and space that verifies the normalization \int_0^\infty \sqrt{\varepsilon}\, F_0\left(\varepsilon\right) \mathrm{d}\varepsilon = 1, where \gamma=\sqrt{2e/m} and \varepsilon = \left( v / \gamma \right)^2. Using the above-mentioned approximations and after some manipulations, the equation for F_0 can be written in the form of a 1D convection-diffusion-reaction equation (for more details, see Ref. 1). This equation can be used to compute an EEDF, provided a set of electron collision cross sections and a reduced electric field, E/N (ratio of the electric field strength to the gas number density). Depending on the operating conditions, it might be necessary to include the effect of superelastic collisions and electron-electron collisions. Quite often, the input quantity of interest is the mean electron energy. In this case, a Lagrange multiplier is introduced to solve for the reduced electric field such that the constraint \overline{\varepsilon} = \int_0^\infty \varepsilon^{3/2}\, F_0\left(\varepsilon\right) \mathrm{d}\varepsilon is satisfied. Once the EEDF is computed, the rate coefficients needed for a plasma global model are computed from k_k = \gamma \int_0^\infty \varepsilon\, \sigma_k\left(\varepsilon\right) F_0\left(\varepsilon\right) \mathrm{d}\varepsilon, where \sigma_k is the cross section of reaction k. The figure below plots a computed EEDF obtained for argon at \overline{\varepsilon} = 5 eV and a corresponding Maxwellian. Note how the computed EEDF strongly deviates from a Maxwellian and how sharply it falls above the first excitation level of argon at 11.5 eV. In the same figure, the cross section for the excitation of the lumped level (corresponding to the first 4-s levels of argon) is plotted. With the information in this figure, the rate coefficient for the excitation of the 4-s levels can be computed. Also important to note from this figure is that the computed EEDF and the cross section vary by several orders of magnitude in the overlapping region. In consequence, a small variation in \overline{\varepsilon} (or E/N) causes a large change in the rate coefficients. This example is for argon, but the same behavior is found in many other gases, and it is one of the reasons why plasmas have very nonlinear behavior. In a practical application, the BE in the two-term approximation can be solved to provide rate coefficients to a global model. In such cases, the EEDF is computed every time the input conditions for the BE have changed. Coupling the Global Model with the BE in the Two-Term Approximation In this section, we show how to make a global model fully coupled with the BE in the two-term approximation using COMSOL Multiphysics.
A three-step procedure is advised: Create a global model where an analytic EEDF is used Use the EEDF Initialization study to solve only for the EEDF Solve the fully coupled problem Creating an EEDF Initialization Study After having the global model working for an analytic EEDF (step 1), you can decide to investigate further to see if the EEDF used is suitable for your needs. You can do this by using an EEDF Initialization study to solve the BE in the two-term approximation. This study solves the BE for the electron impact cross sections provided and a choice of the reduced electric field or the mean electron energy. This procedure is exemplified in the screenshots below. First, select the Boltzmann equation, two-term approximation (linear) option — or the Boltzmann equation, two-term approximation (quadratic) option — in the Electron Energy Distribution Function Settings section. Then, set the Reduced electric field so that it’s used in the solution of the EEDF. At this stage, you can compare the computed EEDF and the rate coefficients with the ones you used in the global model in step 1 and assess if the model needs further improvements. If you decide to solve the fully coupled problem, add another study and use the solution of the EEDF Initialization study as the initial condition, as shown below. Using the solution from the EEDF Initialization study is a requirement. Coupling the Global Model Equations and BE The coupling between the global model equations and the BE can happen in two different ways, depending on whether you use the Local field approximation or the Local energy approximation to define the mean electron energy in the Plasma Properties section. When using the Local field approximation, the excitation of the system is given from a reduced field. This electric field can be constant (a parameterization can be made over E/N) or can come from a solution of an equation (e.g., a circuit equation). When using the Local energy approximation, the global model equation for the electron mean energy is solved and the power absorbed by the plasma needs to be set by the user. In this case, the E/N is found so that the equation below is satisfied. Example: A Plasma Sustained by a Direct Current Voltage Source As a practical example, we chose to model an argon plasma created within a 4-mm gap by a direct current (DC) voltage source of 1 kV in series with a 100-kΩ resistance at 100 mTorr. This model is inspired by Ref. 2. We emphasize that the model has no spatial description and that geometrical parameters and volume-averaged quantities are used to describe the plasma in the gap. The voltage applied to the plasma, V_p, comes from the circuit equation V_p = V_{dc} - R I_p, where V_{dc} is the applied voltage and R is the circuit resistance. The plasma current, I_p, is computed from I_p = e n_e \left(\mu N\right) \frac{E}{N} A, where A is the plasma cross-sectional area and \mu N is the reduced electron mobility. Solving for E/N, we obtain \frac{E}{N} = \frac{V_{dc}}{N d + e n_e \left(\mu N\right) A R}, where d is the gap distance between electrodes. If we choose to use the Local field approximation, the equation for E/N above can be used directly in the EEDF Inputs section, as shown in the screenshot below. If we choose to use the Local energy approximation, the power absorbed by the plasma can be defined as in the Mean Electron Energy section, as in the screenshot below.
In this model, both approaches give very similar results, since the same electron energy loss/gain from collision events is accounted for in the BE and the mean electron energy equation, and because no energy losses to the wall are included in the mean electron energy equation. In the figure below, the temporal evolution of the charged species and the reduced electric field is shown. Initially, there is no plasma in the gap and the electric field (black line, right axis) maintains a constant value. When breakdown starts to occur, there is a rapid increase of the charge carriers and of the current flowing into the circuit, resulting in a voltage drop across the gap. After this transient regime, a steady state is reached where the plasma is sustained with a reduced electric field of only 4 Td. The temporal evolution of the EEDF is presented below. Initially, the EEDF has a large population above 15 eV, as is necessary to facilitate the plasma breakdown. After the plasma formation, and due to the decrease of the electric field, the electron population cools down and the EEDF develops a tail with a steeper slope. As time progresses, and with the increase of the argon excited-state density, the influence of the superelastic reactions on the EEDF becomes noticeable, with the appearance of a bump at the high-energy end. Note that the time variation is presented on a log scale in this animation. Next Steps To try the example featured in this blog post, click the button below. Doing so will take you to the Application Gallery, where, with a valid software license, you can download the MPH-file for the model in addition to the step-by-step documentation. You can also read more about modeling plasma physics in the following blog posts: Introduction to Plasma Modeling with Non-Maxwellian EEDFs The Boltzmann Equation, Two-Term Approximation Interface Electron Energy Distribution Function References G.J.M. Hagelaar and L.C. Pitchford, “Solving the Boltzmann equation to obtain electron transport coefficients and rate coefficients for fluid models,” Plasma Sources Science and Technology, vol. 14, pp. 722–733, 2005. S. Pancheshnyi, B. Eismann, G. Hagelaar, and L. Pitchford, “ZDPlasKin: A New Tool for Plasmachemical Simulations”, The Eleventh International Symposium on High Pressure, Low Temperature Plasma Chemistry, 2008.
This answer is a response to a comment by the OP on yoda's answer. Suppose that $h(t)$, the impulse response of a continuous-time linear time-invariant system, has the property that $$\int_{-\infty}^{\infty} |h(t)| \mathrm dt = M$$ for some finite number $M$. Then, for each and every bounded input $x(t)$, the output $y(t)$ is bounded also. If $|x(t)| \leq \hat{M}$ for all $t$, where $\hat{M}$ is some finite number, then $|y(t)| \leq \hat{M}M$ for all $t$, where $\hat{M}M$ is also a finite number. The proof is straightforward. $$\begin{align*} |y(t)| &= \left |\int_{-\infty}^\infty h(\tau)x(t - \tau)\mathrm d\tau\right |\\ &\leq \int_{-\infty}^\infty |h(\tau)x(t - \tau)|\mathrm d\tau\\ &\leq \int_{-\infty}^\infty |h(\tau)|\cdot|x(t - \tau)|\mathrm d\tau\\ &\leq \hat{M}\int_{-\infty}^\infty |h(\tau)|\mathrm d\tau\\ &= \hat{M}M. \end{align*}$$ In other words, $y(t)$ is bounded whenever $x(t)$ is bounded. Thus, the condition $\displaystyle\int_{-\infty}^{\infty} |h(t)| \mathrm dt < \infty$ is sufficient for BIBO-stability. The condition $\displaystyle\int_{-\infty}^{\infty} |h(t)| \mathrm dt < \infty$ is also necessary for BIBO-stability. Assume that every bounded input produces a bounded output. Now consider the input $x(t) = \text{sgn}(h(-t)) ~\forall~ t$. This is clearly bounded ($|x(t)| \leq 1$ for all $t$), and at $t=0$, it produces the output $$\begin{align*} y(0) &= \int_{-\infty}^\infty h(0-\tau)x(-\tau)\mathrm d\tau\\ &= \int_{-\infty}^\infty h(-\tau)\text{sgn}(h(-\tau))\mathrm d\tau\\ &= \int_{-\infty}^\infty |h(-\tau)|\mathrm d\tau\\ &= \int_{-\infty}^\infty |h(t)|\mathrm dt. \end{align*}$$ Our assumption that the system is BIBO-stable means that $y(0)$ is necessarily finite, that is, $$\int_{-\infty}^{\infty} |h(t)| \mathrm dt < \infty$$ The proof for discrete-time systems is similar, with the obvious change that all the integrals are replaced by sums. Ideal LPFs are not BIBO-stable systems because the impulse response is not absolutely integrable, as stated in the answer by yoda. But his answer does not really answer the question "Can anyone give me a proof that ideal LPF can indeed be BIBO unstable?" A specific example of a bounded input signal that produces an unbounded output from an ideal LPF (and thus proves that the system is not BIBO-stable) can be constructed as outlined above (see also my comment on the main question).
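To see concretely why the ideal LPF fails the condition above: its impulse response is a sinc, and the partial integrals $\int_{-T}^{T} |\operatorname{sinc}(t)|\, \mathrm dt$ keep growing roughly like $\log T$ instead of converging. A quick numerical sketch:

```python
import numpy as np

# Partial L1 integrals of sinc(t) = sin(pi t)/(pi t) over [-T, T]:
# they grow ~ log(T) without bound, so the ideal LPF impulse response
# is not absolutely integrable and the filter is not BIBO-stable.
t = np.linspace(-10000, 10000, 2_000_001)    # dt = 0.01
dt = t[1] - t[0]
abs_sinc = np.abs(np.sinc(t))                # np.sinc is the normalized sinc
for T in [10, 100, 1000, 10000]:
    mask = np.abs(t) <= T
    print(T, np.sum(abs_sinc[mask]) * dt)    # slowly but steadily increasing
```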
The Annals of Probability Ann. Probab. Volume 25, Number 2 (1997), 787-802. Ladder heights, Gaussian random walks and the Riemann zeta function Abstract Let $\{S_n: n \geq 0\}$ be a random walk having normally distributed increments with mean $\theta$ and variance 1, and let $\tau$ be the time at which the random walk first takes a positive value, so that $S_{\tau}$ is the first ladder height. Then the expected value $E_{\theta} S_{\tau}$, originally defined for positive $\theta$, may be extended to be an analytic function of the complex variable $\theta$ throughout the entire complex plane, with the exception of certain branch point singularities. In particular, the coefficients in a Taylor expansion about $\theta = 0$ may be written explicitly as simple expressions involving the Riemann zeta function. Previously only the first coefficient of the series developed here was known; this term has been used extensively in developing approximations for boundary crossing problems for Gaussian random walks. Knowledge of the complete series makes more refined results possible; we apply it to derive asymptotics for boundary crossing probabilities and the limiting expected overshoot. Article information Source Ann. Probab., Volume 25, Number 2 (1997), 787-802. Dates First available in Project Euclid: 18 June 2002 Permanent link to this document https://projecteuclid.org/euclid.aop/1024404419 Digital Object Identifier doi:10.1214/aop/1024404419 Mathematical Reviews number (MathSciNet) MR1434126 Zentralblatt MATH identifier 0880.60070 Citation Chang, Joseph T.; Peres, Yuval. Ladder heights, Gaussian random walks and the Riemann zeta function. Ann. Probab. 25 (1997), no. 2, 787--802. doi:10.1214/aop/1024404419. https://projecteuclid.org/euclid.aop/1024404419
Another form of cos(30°−A) + sin(60°+A)?

$\displaystyle 2\,\sin(\frac{1}{3}\,\pi +A)$. Reply if you need more explanations.

The expression is cos(30°−A) + sin(60°+A); I am asked for another form of it.

Please rephrase your entire question using proper and syntax-checked sentences and expressions. No, that is not an equation. An equation contains the equal sign. Is what you mean: 1. $\displaystyle (\cos(30)-A) + \sin(60 + A)$ 2. $\displaystyle \cos(30-A) + \sin(60 + A)$ 3. $\displaystyle \cos(30-A + \sin(60 + A))$

Originally Posted by Math Help: Is what you mean: 1. $\displaystyle (\cos(30)-A) + \sin(60 + A)$ 2. $\displaystyle \cos(30-A) + \sin(60 + A)$ 3. $\displaystyle \cos(30-A + \sin(60 + A))$

In the cases of 1 and 3, there are no better ways to reduce the number of terms. In the case of 2, I already gave the result.
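The identity follows from $\cos(30°-A) = \sin(60°+A)$, so the sum collapses to $2\sin(60°+A)$. A quick symbolic check (a minimal sketch using sympy, with the angles in radians):

import sympy as sp

A = sp.symbols('A')
expr = sp.cos(sp.pi/6 - A) + sp.sin(sp.pi/3 + A)     # cos(30deg - A) + sin(60deg + A)
print(sp.simplify(expr - 2*sp.sin(sp.pi/3 + A)))      # prints 0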
First question: 1) Is the sum of subgroup indices of the dihedral group with $2n$ elements equal to $\sigma_2(n)+2\cdot \sigma(n)$? Second question: 2) Is $\sigma_2(n)+2\cdot \sigma(n) \le L(H(D_n))$? where $L(x) = x+\exp(x)\cdot \log(x)$, $H(D_n) = H_n-5/2+2 H_{n/2+1}+2/(n+4)$ if $n\equiv 0 \pmod 2$, and $H(D_n) = H_n-5/2+2 H_{(n+3)/2}$ if $n\equiv 1 \pmod 2$. The inequality might be seen as the Lagarias inequality for the dihedral group, or if you want "the Riemann hypothesis for the dihedral group", where the "usual" Riemann hypothesis is for the cyclic group (the Lagarias inequality is equivalent to RH). Notation: $H_n=$ $n$-th harmonic number, $\sigma(n) = $ sum of divisors, $\sigma_2(n) = $ sum of squared divisors. Context: Replace $\sigma(G)=\sum_{U \le G} |U|$ in the following question with $\sigma(G) = \sum_{U \le G} [G:U]$ and set $G=D_n = \langle r,s\rangle$, $S= \{r,s\}$. For the second question, it seems numerically that the upper bound imposed on $\sigma(n)$ is bigger than the Lagarias upper bound, so question 2) does not imply RH. Edit: Here is some SAGE code which tests the two questions:

# Lee norm on Z/(n)
def lee(a,n):
    return min(a%n,(-a)%n)

# harmonic numbers:
def harmonic(n):
    return sum([1/k for k in range(1,n+1)])

# harmonic numbers for DihedralGroup(n):
def HDn(n):
    return harmonic(n)+sum([1/(lee(a,n)+2) for a in range(n)])

# closed form of the harmonic numbers for DihedralGroup(n):
def HDn2(n):
    if n%2 == 0:
        h = 1/2+2*(harmonic(n/2+1)-3/2)+2/(n+4)
    else:
        h = 1/2+2*(harmonic((n+3)/2)-3/2)
    return harmonic(n)+h

# sum of subgroup indices:
def sigmaGr2(G):
    return sum([len(G)/len(U.list()) for U in (G.subgroups())])

# conjectured sum of subgroup indices for DihedralGroup(n):
def sigmaDihedral(n):
    return sigma(n,2)+2*sigma(n)

for n in range(1,21):
    h = HDn2(n)
    s = sigma(n,2)+2*sigma(n)
    S = sigmaGr2(DihedralGroup(n))
    print(n, S == s and s <= (h+exp(h)*log(h)).N())

Related question for the cyclic group and $S=\{\pm 1\}$: https://mathoverflow.net/questions/331228/a-question-about-lagarias-inequality Second Edit: In Theorem 3.1 by Keith Conrad it is stated that: (a) For each $d|n$ there is one subgroup with index $2d$. (b) For each $d|n$ there are $d$ subgroups with index $d$. From this it follows that: $$\sigma(D_n) = \sum_{d|n} 2d + \sum_{d|n} \sum_{i=0}^{d-1} d = 2 \sigma(n) + \sigma_2(n)$$ which was to be shown. Question 2) remains open.
Large scale albedo changes Albedo is the reflectivity of solar radiation off the surface of a planet or moon or something. It represents how much of the solar radiation is sent back into space as opposed to how much is retained. The planetary albedo of the Earth is about 0.367. However, this varies greatly from place to place. Cumulus clouds and fresh snow can have an albedo of 0.7 or more; that is, 70% of incident radiation is reflected right back into space. On the other hand, the ocean's albedo is about 0.08, which means that 92% of the solar radiation striking the ocean stays with it. There are certain things that humans can do to change the albedo. Most plausibly for ancient man, a forest might have an albedo of 0.1 while dry bare soil might be 0.3. Thus, if Man could change the environment from one to the other over a large enough area, he could change the Earth's albedo. How much area is reasonable for Man to change? There were some large scale deforestation incidents in the ancient world. Possibly the biggest was the reduction of the Gangetic Plains. 10,000 years ago the area from Delhi to Bengal was covered in moist deciduous tropical forest; an area of around 500,000 km$^2$. This was ultimately replaced by cropland over the period from roughly 1000 BC to 1000 AD. Crops and irrigated soil have an albedo not too much higher than a forest, so the climate change impact wasn't much. But that Ganges plain is actually pretty dry; had humans not spent so much time irrigating, it could have ended up a barren semi-desert like the African Sahel. This 500,000 km$^2$ represents 0.1% of the Earth's surface. If 0.1% of the Earth's surface increased in albedo by 20 percentage points, the overall Earth's albedo would have changed from 0.367 to about 0.3677; not much. But what if super self-destructive humans did this to similar deciduous tropical forests around the world? There are 2.7 million km$^2$ of miombo woodland in southern Africa; about 0.7 million km$^2$ more in India's northern Deccan Plateau; 0.4 million in Indochina; 0.3 million on the Pacific coast of Central America; and 0.9 million in the Dry Chaco of Argentina and Bolivia. This is about 5.5 million km$^2$ of forest total, or up to 1.1% of the Earth's surface. The albedo effect of turning all this forest into barrens would be from 0.367 to 0.375. What does that albedo change do? Albedo can be used along with stellar luminosity and orbital distance to calculate planetary effective temperature using $$T_{eff} = \sqrt[4]{\frac{1}{4}\frac{L(1-a)}{4\pi\epsilon\sigma R^2}}$$ where $\sigma$ is the Stefan-Boltzmann constant ($5.67\times10^{-8} \text{W m}^{-2}\text{K}^{-4}$); $\epsilon$ is planetary emissivity (0.96 for Earth); $R$ is distance from the sun ($1.50\times10^{11} \text{ m}$); $L$ is the luminosity of the sun ($3.83\times10^{26} \text{ W}$); and $a$ is albedo. For Earth's albedo of 0.367, $T_{eff} \approx 250.5 \text{ K}$. This is low; planetary average temp is more like 288 K, and the difference is mostly due to the greenhouse effect. But it is close enough to demonstrate the changes. When the albedo rises to 0.375, $T_{eff} \approx 249.7 \text{ K}$. This is about 1 degree of cooling from albedo changes, or about the same magnitude as the heating that we have currently caused due to climate change. Conclusion The ancients had it in their power to change the climate the same amount that we have changed it today. However, this would require them to cut down some 5 million km$^2$ of forest; an area about twice the size of Argentina.
And not just cut it down, but leave it fallow so it turns into a scrubby desert with lots of exposed dirt. Now, humanity certainly has the capacity to do this, even with stone age technology. All these forests I listed have dry seasons of more than 6 months. It wouldn't be too much work to set everything on fire at the end of the dry season. As a double bonus, all these forests have heavy monsoonal rains during the wet season, so if you timed it just right, you could get the monsoon to wash all the ashes away, leaving parched, unfertile soil that would take centuries to recover. This seems a bit extreme, but given the sorts of things we have done and are doing to the Earth, maybe we should consider ourselves lucky this didn't happen already.
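A quick check of the arithmetic above (a minimal sketch, using the constants quoted in the text):

import math

def t_eff(albedo, L=3.83e26, R=1.5e11, emissivity=0.96, sigma=5.67e-8):
    # planetary effective temperature [K] from the formula in the text
    return (L * (1.0 - albedo) / (16.0 * math.pi * emissivity * sigma * R**2)) ** 0.25

print(t_eff(0.367))                  # ~250.5 K
print(t_eff(0.367) - t_eff(0.375))   # ~0.8 K of cooling from the albedo change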
Jet-hadron correlations relative to the event plane at the LHC with ALICE (Elsevier, 2017-11) In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ... Measurements of the dielectron continuum in pp, p-Pb and Pb-Pb collisions with ALICE at the LHC (Elsevier, 2017-11) Dielectrons produced in ultra-relativistic heavy-ion collisions provide a unique probe of the whole system evolution as they are unperturbed by final-state interactions. The dielectron continuum is extremely rich in physics ... Exploring jet substructures with jet shapes in ALICE (Elsevier, 2017-11) The characterization of the jet substructure can give insight into the microscopic nature of the modification induced on high-momentum partons by the Quark-Gluon Plasma that is formed in ultra-relativistic heavy-ion ... Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE (Elsevier, 2017-11) We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity odd ($v_1^{odd}$) and ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
This is the first in a series of posts about computational aeroacoustics, abbreviated as CAA. Aeroacoustics is mainly concerned with the generation of sound or noise by a fluid flow. Such flows can be studied by both experimental and computational methods. In an experimental approach, the aerodynamically generated sound is measured in an anechoic chamber, which is designed to prevent reflections of sound from the walls, making the room echo-free. The computational techniques for simulating flow-generated noise can be classified into two broad categories: Direct Approaches The governing equations of compressible fluid flow are the compressible Navier-Stokes equations, and they also describe the generation and propagation of acoustic noise. We can therefore solve computational aeroacoustics problems by solving the transient compressible Navier-Stokes equations both in the source region, where the flow disturbances generate noise, and in the propagation region, where the generated acoustic waves propagate. Acoustic waves have to be resolved in both regions so that the noise can be accurately simulated at observation locations. However, the solution of the Navier-Stokes equations with a fine mesh over large domains to determine far-field noise is computationally very expensive. As is the case in the experimental approaches, it is essential to prevent the reflection of the acoustic waves at the artificially truncated boundary of the computational domain (i.e. it does not stretch to infinity) in order to obtain an accurate result. A variety of numerical techniques have been developed for this purpose: Navier-Stokes Characteristic Boundary Conditions (NSCBC) Artificial dissipation and damping in an absorbing zone Grid stretching and numerical filtering in a "sponge layer" or "exit zone" Perfectly matched layer (PML) Hybrid Approaches (Acoustic Analogy) David P. Lockard and Jay H. Casper [1] state that: The physics-based, airframe noise prediction methodology under investigation is a hybrid of aeroacoustic theory and computational fluid dynamics (CFD). The near-field aerodynamics associated with an airframe component are simulated to obtain the source input to an acoustic analogy that propagates sound to the far field. The acoustic analogy employed within this current framework is that of Ffowcs Williams and Hawkings, who extended the analogies of Lighthill and Curle to the formulation of aerodynamic sound generated by a surface in arbitrary motion. Lighthill's analogy Lighthill derived the following wave equation \eqref{eq:Lighthill} from the compressible Navier-Stokes equations: \begin{align} \frac{\partial^2 \rho}{\partial t^2} - c_0^2 \nabla^2 \rho = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j}, \tag{1} \label{eq:Lighthill} \end{align} where the so-called Lighthill (turbulence) stress tensor is expressed as \begin{align} T_{ij} = \rho u_i u_j + \left( p-c_0^2 \rho \right)\delta_{ij} -\mu \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{ij}\frac{\partial u_k}{\partial x_k} \right), \tag{2} \label{eq:Tij} \end{align} \(\delta_{ij}\) is the Kronecker delta and \(c_0\) is the speed of sound in the medium in its equilibrium state. Curle's analogy The existence of objects (solid walls) is not considered in Lighthill's theory; Curle extended the theory so that they can be dealt with.
The density variation at the observer location \(\boldsymbol{x}\) is calculated from the following equation \eqref{eq:Curle} \begin{align} \rho'(\boldsymbol{x}, t) &= \rho(\boldsymbol{x}, t) - \rho_0 \\ &=\frac{1}{4 \pi c_0^2} \frac{\partial^2}{\partial x_i \partial x_j} \int_{V}\frac{[T_{ij}]}{r}\,dV - \frac{1}{4 \pi c_0^2} \frac{\partial}{\partial x_i}\int_{S}\frac{[P_i]}{r}\,dS, \tag{3} \label{eq:Curle} \end{align} where \begin{align} P_i = -n_j \left\{ \delta_{ij}p -\mu \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{ij}\frac{\partial u_k}{\partial x_k} \right) \right\}, \tag{4} \label{eq:Pi} \end{align} \(r\) is the distance between the receiver \(\boldsymbol{x}\) and the source position \(\boldsymbol{y}\) in \(V\) (or on \(S\)), and the operator \([\,\cdot\,]\) denotes evaluation at the retarded time \(t-r/c_0\). Ffowcs Williams and Hawkings (FW-H) analogy References (English) [1] Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals [2] CFD Online - Acoustic Solver with openfoam [3] WIKIBOOKS - Engineering Acoustics/Analogies in aeroacoustics References (Japanese; titles translated) [4] Chisachi Kato, "Prediction of flow-generated sound by large-scale computational fluid dynamics" [5] Akiyoshi Iida, "Noise generation mechanisms of blowers (fans), noise reduction and quieting countermeasures, and case studies" [6] Takuya Oshima, "A study on the numerical prediction of aerodynamic sound generated from rows of columnar objects in a flow" [7] Newsletters Nagare, Dec. issue 2010
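As a toy illustration of the direct approach, and of why the truncated boundary needs special treatment, here is a minimal 1-D sketch. It is not from the cited references; the first-order one-way (Sommerfeld) boundary conditions below are the simplest possible stand-in for the NSCBC/PML machinery listed above:

import numpy as np

# Toy 1-D direct simulation: wave equation u_tt = c^2 u_xx, leapfrog in time,
# with crude first-order non-reflecting boundary conditions at both ends.
c, L, nx = 340.0, 1.0, 401
dx = L / (nx - 1)
dt = 0.5 * dx / c                          # CFL number 0.5, stable
x = np.linspace(0.0, L, nx)
u = np.exp(-((x - 0.5) / 0.05) ** 2)       # initial Gaussian pulse (the "source")
u_prev = u.copy()                          # zero initial velocity

for n in range(800):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    # one-way conditions u_t = +/- c u_x: let the wave exit the domain
    u_next[0] = u[0] + c * dt / dx * (u[1] - u[0])
    u_next[-1] = u[-1] - c * dt / dx * (u[-1] - u[-2])
    u_prev, u = u, u_next

# after the pulse has reached both ends, only a small residual remains;
# with hard (reflecting) walls instead, the full amplitude would bounce back
print(np.abs(u).max())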
In 16-bit 2's complement representation, the decimal number −28 is: Which one of the following is NOT a valid identity? Consider Z = X − Y, where X, Y and Z are all in sign-magnitude form. X and Y are each represented in n bits. To avoid overflow, the representation of Z would require a minimum of: Consider three 4-variable functions f1, f2, and f3, which are expressed in sum-of-minterms form as f1 = $\sum$ (0, 2, 5, 8, 14), f2 = $\sum$ (2, 3, 6, 8, 14, 15), f3 = $\sum$ (2, 7, 11, 14). For the following circuit with one AND gate and one XOR gate, the output function f can be expressed as: What is the minimum number of 2-input NOR gates required to implement a 4-variable function expressed in sum-of-minterms form as f = $\sum$ (0, 2, 5, 7, 8, 10, 13, 15)? Assume that all the inputs and their complements are available. Answer: _______________. Let $\oplus$ and $\odot$ denote the Exclusive OR and Exclusive NOR operations, respectively. The number of states in the state transition diagram of this circuit that have a transition back to the same state on some value of "in" is _____. Consider the minterm list form of a Boolean function F given below. $F(P,Q,R,S)=\sum m(0,2,5,7,9,11)+d(3,8,10,12,14)$ The n-bit fixed-point representation of an unsigned real number X uses f bits for the fraction part. Let i = n − f. The range of decimal values for X in this representation is: Consider the Karnaugh map given below, where X represents "don't care" and blank represents 0. Assume that for all inputs (a, b, c, d), the respective complements $(\overline a,\;\overline b,\;\overline c,\;\overline d)$ are also available. The above logic is implemented using 2-input NOR gates only. The minimum number of gates required is _____________. The representation of the value of a 16-bit unsigned integer X in the hexadecimal number system is BCA9. The representation of the value of X in the octal number system is: Given the following binary number in 32-bit (single precision) IEEE-754 format: 00111110011011010000000000000000 The decimal value closest to this floating-point number is: If w, x, y, z are Boolean variables, then which one of the following is INCORRECT? Given $f(w,x,y,z)=\sum\nolimits_m(0,1,2,3,7,8,10)+\sum\nolimits_d(5,6,11,15)$, where d represents the don't-care condition in Karnaugh maps, which of the following is a minimum product-of-sums (POS) form of $f(w,x,y,z)$? Consider a binary code that consists of only four valid codewords as given below: 00000, 01011, 10101, 11110. Let the minimum Hamming distance of the code be p and the maximum number of erroneous bits that can be corrected by the code be q. Then the values of p and q are: The next state table of a 2-bit saturating up-counter is given below. The counter is built as a synchronous sequential circuit using T flip-flops. The expressions for T1 and T0 are: Consider the Boolean operator # with the following properties: $x\#0 = x$, $x\#1 = \overline x$, $x\#x = 0$ and $x\#\overline x = 1$. Then $x\#y$ is equivalent to: We want to design a synchronous counter that counts the sequence 0-1-0-2-0-3 and then repeats. The minimum number of J-K flip-flops required to implement this counter is: Consider the two cascaded 2-to-1 multiplexers as shown in the figure.
The minimal sum-of-products form of the output $X$ is: Consider an eight-bit ripple-carry adder for computing the sum of A and B, where A and B are integers represented in 2's complement form. If the decimal value of A is one, the decimal value of B that leads to the longest latency for the sum to stabilize is ___________. Let $x_1 \oplus x_2 \oplus x_3 \oplus x_4 = 0$, where $x_1, x_2, x_3, x_4$ are Boolean variables and $\oplus$ is the XOR operator. Which one of the following must always be TRUE? Let X be the number of distinct 16-bit integers in 2's complement representation. Let Y be the number of distinct 16-bit integers in sign-magnitude representation. Then X − Y is ________. Consider a 4-bit Johnson counter with an initial value of 0000. The counting sequence of this counter is: The binary operator ≠ is defined by the following truth table. Which one of the following is true about the binary operator ≠? A positive edge-triggered D flip-flop is connected to a positive edge-triggered JK flip-flop as follows. The Q output of the D flip-flop is connected to both the J and K inputs of the JK flip-flop, while the Q output of the JK flip-flop is connected to the input of the D flip-flop. Initially, the output of the D flip-flop is set to logic one and the output of the JK flip-flop is cleared. Which one of the following is the bit sequence (including the initial state) generated at the Q output of the JK flip-flop when the flip-flops are connected to a free-running common clock? Assume that J = K = 1 is the toggle mode and J = K = 0 is the state-holding mode of the JK flip-flop. Both the flip-flops have non-zero propagation delays. The minimum number of JK flip-flops required to construct a synchronous counter with the count sequence (0, 0, 1, 1, 2, 2, 3, 3, 0, 0, ...) is _______. The number of min-terms after minimizing the following Boolean expression is _____. $\left[D' + AB' + A'C + AC'D + A'C'D'\right]$ A half adder is implemented with XOR and AND gates. A full adder is implemented with two half adders and one OR gate. The propagation delay of an XOR gate is twice that of an AND/OR gate. The propagation delay of an AND/OR gate is 1.2 microseconds. A 4-bit ripple-carry binary adder is implemented by using four full adders. The total propagation time of this 4-bit binary adder in microseconds is _____. Let # be a binary operator defined as X # Y = X' + Y', where X and Y are Boolean variables. Consider the following two statements: S1: (P # Q) # R = P # (Q # R); S2: Q # R = R # Q. Which of the following is/are true for the Boolean variables P, Q and R? Given the function F = P' + QR, where F is a function in three Boolean variables P, Q and R and P' = !P, consider the following statements: (S1) F = ∑ (4, 5, 6); (S2) F = ∑ (0, 1, 2, 3, 7); (S3) F = ∏ (4, 5, 6); (S4) F = ∏ (0, 1, 2, 3, 7). Which of the following is true? Consider the following Boolean expression for F: $F(P,Q,R,S) = PQ + \bar{P}QR + \bar{P}Q\bar{R}S$. The minimal sum-of-products form of F is: The base (or radix) of the number system such that the following equation holds is ____________. $312/20 = 13.1$ Consider the 4-to-1 multiplexer with two select lines S1 and S0 given below. The minimal sum-of-products form of the Boolean expression for the output F of the multiplexer is: The dual of a Boolean function F(x1, x2, ..., xn, +, ·, ′), written as FD, is the same expression as that of F with + and · swapped. F is said to be self-dual if F = FD. Let $k = 2^n$. The number of self-dual functions with n Boolean variables is:
A circuit is built by giving the output of an n-bit binary counter as input to an n-to-$2^n$ line decoder. This circuit is equivalent to a: Consider the equation $(123)_5 = (x8)_y$ with x and y as unknowns. The number of possible solutions is _____. Consider the following minterm expression for F: $F(P,Q,R,S)=\sum(0,2,5,7,8,10,13,15)$. The minterms 2, 7, 8 and 13 are 'do not care' terms. The minimal sum-of-products form for F is: Consider the following combinational function block involving four Boolean variables x, y, a, b, where x, a, b are inputs and y is the output. f (x, y, a, b) { if (x is 1) y = a; else y = b; } Which one of the following digital logic blocks is the most suitable for implementing this function? The above synchronous sequential circuit built using JK flip-flops is initialized with $Q_2Q_1Q_0 = 000$. The state sequence for this circuit for the next 3 clock cycles is: Let ⊕ denote the Exclusive OR (XOR) operation. Let '1' and '0' denote the binary constants. Consider the following Boolean expression for F over two variables P and Q: $F(P,Q) = 1 \oplus P \oplus P \oplus Q \oplus P \oplus Q \oplus Q \oplus 0$. The equivalent expression for F is: The smallest integer that can be represented by an 8-bit number in 2's complement form is: In the following truth table, V = 1 if and only if the input is valid. What function does the truth table represent? Which one of the following expressions does NOT represent exclusive NOR of x and y? The truth table represents the Boolean function: The decimal value 0.5 in IEEE single precision floating point representation has: What is the minimal form of the Karnaugh map shown below? Assume that X denotes a don't care term. Which one of the following circuits is NOT equivalent to a 2-input XNOR (exclusive NOR) gate? The simplified SOP (Sum of Products) form of the Boolean expression $(P+\bar{Q}+\bar{R})\cdot(P+\bar{Q}+R)\cdot(P+Q+\bar{R})$ is: Consider the following circuit involving three D-type flip-flops used in a certain type of counter configuration. If at some instance prior to the occurrence of the clock edge, P, Q and R have values 0, 1 and 0 respectively, what shall be the value of PQR after the clock edge? If all the flip-flops were reset to 0 at power on, what is the total number of distinct outputs (states) represented by PQR generated by the counter? The minterm expansion of $f(P,Q,R) = PQ + Q\bar{R} + P\bar{R}$ is: P is a 16-bit signed integer. The 2's complement representation of P is $(F87B)_{16}$. The 2's complement representation of 8*P is: The Boolean expression for the output f of the multiplexer shown below is: What is the Boolean expression for the output f of the combinational logic circuit of NOR gates given below? In the sequential circuit shown below, if the initial value of the output $Q_1Q_0$ is 00, what are the next four values of $Q_1Q_0$? $(1217)_8$ is equivalent to: What is the minimum number of gates required to implement the Boolean function (AB+C) if we have to use only 2-input NOR gates? In the IEEE floating point representation the hexadecimal value 0x00000000 corresponds to: In the Karnaugh map shown below, X denotes a don't care term. What is the minimal form of the function represented by the Karnaugh map? Let r denote the number system radix. The only value(s) of r that satisfy the equation $\sqrt{(121)_r} = (11)_r$ is/are: Given f1, f3 and f in canonical sum-of-products form (in decimal) for the circuit: f1 = ∑m (4, 5, 6, 7, 8); f3 = ∑m (1, 6, 15); f = ∑m (1, 6, 8, 15); then f2 is: If P, Q, R are Boolean variables, then $(P+\bar{Q})(P\cdot\bar{Q}+P\cdot R)(\bar{P}\cdot\bar{R}+\bar{Q})$ simplifies to: What is the maximum number of different Boolean functions involving n Boolean variables?
How many 3-to-8 line decoders with an enable input are needed to construct a 6-to-64 line decoder without using any other logic gates? Consider the following Boolean function of four variables: $f(w,x,y,z)=\sum(1,3,4,6,9,11,12,14)$. The function is: Let $f(w,x,y,z)=\sum(0,4,5,7,8,9,13,15)$. Which of the following expressions are NOT equivalent to f? (P) x'y'z' + w'xy' + wy'z + xz (Q) w'y'z' + wx'y' + xz (R) w'y'z' + wx'y' + xyz + xy'z (S) x'y'z' + wx'y' + w'y Define the connective * for the Boolean variables X and Y as: X * Y = XY + X'Y'. Let Z = X * Y. Consider the following expressions P, Q and R. P: X = Y * Z; Q: Y = X * Z; R: X * Y * Z = 1. Which of the following is TRUE? Suppose only one multiplexer and one inverter are allowed to be used to implement any Boolean function of n variables. What is the minimum size of the multiplexer needed? In a look-ahead carry generator, the carry generate function $G_i$ and the carry propagate function $P_i$ for inputs $A_i$ and $B_i$ are given by: $P_i = A_i \oplus B_i$ and $G_i = A_i B_i$. The expressions for the sum bit $S_i$ and the carry bit $C_{i+1}$ of the look-ahead carry adder are given by: $S_i = P_i \oplus C_i$ and $C_{i+1} = G_i + P_i C_i$, where $C_0$ is the input carry. Consider a two-level logic implementation of the look-ahead carry generator. Assume that all $P_i$ and $G_i$ are available for the carry generator circuit and that the AND and OR gates can have any number of inputs. The number of AND gates and OR gates needed to implement the look-ahead carry generator for a 4-bit adder with $S_3$, $S_2$, $S_1$, $S_0$, and $C_4$ as its outputs are respectively: The control signal functions of a 4-bit binary counter are given below (where X is "don't care"): The counter is connected as follows: Assume that the counter and gate delays are negligible. If the counter starts at 0, then it cycles through the following sequence:
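Several of these exercises are mechanical conversions that are easy to check by machine. As an illustration, here is a minimal Python sketch for the 16-bit 2's complement conversion asked about at the start of this set (the helper name is mine):

def twos_complement_16(x):
    # 16-bit two's complement: masking maps -28 to 2**16 - 28 = 0xFFE4
    return x & 0xFFFF

print(hex(twos_complement_16(-28)))               # 0xffe4
print(format(twos_complement_16(-28), '016b'))    # 1111111111100100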
\[ F_{net}= \dot{m} (U_2 -U_1) + P_2 A_2 - P_1 A_1 \label{gd:iso:eq:monImp} \tag{64} \] The net force is denoted here as \(F_{net}\). Mass conservation can also be applied to our control volume: \[ \dot{m} = \rho_1A_1U_1 = \rho_2A_2U_2 \label{gd:iso:eq:massImp} \tag{65} \] Combining equation (64) with equation (??) and utilizing the identity in equation (50) results in \[ F_{net}= kP_2A_2{M_2}^2 - kP_1A_1{M_1}^2 + P_2 A_2 - P_1 A_1 \label{gd:iso:eq:MomMassImp} \tag{66} \] Rearranging equation (66) and dividing it by \(P_0 A^{*}\) results in \[ {F_{net} \over P_0 A^{*}} = \overbrace{P_2A_2 \over P_0 A^{*}}^{f(M_2)} \overbrace{\left( 1 + k{M_2}^2 \right)}^{f(M_2)} - \overbrace{P_1A_1 \over P_0 A^{*}}^{f(M_1)} \overbrace{\left( 1 + k{M_1}^2 \right)}^{f(M_1)} \label{gd:iso:eq:beforeDefa} \tag{67} \] Examining equation (67) shows that the right hand side is only a function of Mach number and specific heat ratio, \(k\). Hence, if the right hand side is only a function of the Mach number and \(k\), then the left hand side must be a function of only the same parameters, \(M\) and \(k\). Defining a function that depends only on the Mach number creates the convenience for calculating the net forces acting on any device. Thus, the Impulse function is defined as \[ F = PA\left( 1 + k{M}^2 \right) \label{gd:iso:eq:impulsDef} \tag{68} \] The value of the Impulse function at \(M=1\) is denoted as \(F^{*}\): \[ F^{*} = P^{*}A^{*}\left( 1 + k \right) \label{gd:iso:eq:impulsDefStar} \tag{69} \] The ratio of the Impulse function is defined as \[ {F \over F^{*}} = {P_1A_1 \over P^{*}A^{*}} {\left( 1 + k{M_1}^2 \right) \over \left( 1 + k \right) } = {1 \over \underbrace{\dfrac{P^{*}}{P_{0}} }_{\left(\frac{2 }{ k+1} \right)^{\frac{k }{ k-1}}}} \; \overbrace{P_1A_1 \left( 1 + k{M_1}^2 \right) \over P_0 A^{*}}^{\text{see eq. (67)}} \; {1 \over \left( 1 + k \right) } \label{gd:iso:eq:ImpulseRatio} \tag{70} \] This ratio differs only by a coefficient from the ratio defined in equation (67), which makes the ratio a function of \(k\) and the Mach number. Hence, the net force is \[ F_{net} = P_0 A^{*} (1+k) {\left( k+1 \over 2 \right)^{k \over k-1}} \left( {F_2 \over F^{*} } - { F_1 \over F^{*}}\right) \label{gd:iso:eq:NetForce} \tag{71} \] To demonstrate the usefulness of this function, consider a simple situation of the flow through a converging nozzle. Example 11.11 Fig. 11.10 Schematic of a flow of a compressible substance (gas) through a converging nozzle for example (??) Consider a flow of gas into a converging nozzle with a mass flow rate of \(1[kg/sec]\); the entrance area is \(0.009[m^2]\) and the exit area is \(0.003[m^2]\). The stagnation temperature is \(400K\) and the pressure at point 2 was measured as \(5[Bar]\). Calculate the net force acting on the nozzle and the pressure at point 1. Solution 11.11 The solution is obtained by getting the data for the Mach number. To obtain the Mach number, the ratio \(P_2A_2/A^{*}P_0\) needs to be calculated. The denominator must be determined first to obtain this ratio.
Utilizing Fliegner's equation (59) provides the following \[ A^{*} P_0 = \dfrac{\dot{m} \sqrt{R\,T} }{ 0.058} = \dfrac{1.0 \times \sqrt{400 \times 287} }{ 0.058} \sim 70061.76 [N] \] and \[ \dfrac{A_2\, P_2 }{ A^{\star}\, P_0} = \dfrac{ 500000 \times 0.003 }{ 70061.76 } \sim 2.1 \] Isentropic Flow (input: \(\dfrac{A\, P }{ A^{\star} \, P_0}\), \(k = 1.4\)): \(M = 0.27353\), \(\dfrac{T}{T_0} = 0.98526\), \(\dfrac{\rho}{\rho_0} = 0.96355\), \(\dfrac{A}{A^{\star}} = 2.2121\), \(\dfrac{P}{P_0} = 0.94934\), \(\dfrac{A\,P}{A^{\star}P_0} = 2.1000\), \(\dfrac{F}{F^{\star}} = 0.96666\). With the area ratio \({A \over A^{\star}}= 2.2121\), the area ratio at point 1 can be calculated: \[ \dfrac{ A_1 }{ A^{\star}} = \dfrac{A_2 }{ A^{\star}} \dfrac{A_1 }{ A_2} = 2.2121 \times \dfrac{0.009 }{ 0.003} = 5.2227 \] Utilizing Potto-GDC again provides: Isentropic Flow (input: \(\dfrac{A}{A^{\star}}\), \(k = 1.4\)): \(M = 0.11164\), \(\dfrac{T}{T_0} = 0.99751\), \(\dfrac{\rho}{\rho_0} = 0.99380\), \(\dfrac{A}{A^{\star}} = 5.2227\), \(\dfrac{P}{P_0} = 0.99132\), \(\dfrac{A\,P}{A^{\star}P_0} = 5.1774\), \(\dfrac{F}{F^{\star}} = 2.1949\). The pressure at point \(1\) is \[ P_1 = P_2 \,{P_0 \over P_2}\, { P_1 \over P_0} = 5.0 \times \dfrac{0.99132}{0.94934} \sim 5.221[Bar] \] The net force is obtained by utilizing equation (71): \[ \begin{align*} F_{net} &= P_2 A_2 \,{P_0 A^{*} \over P_2 A_2}\, (1+k) {\left( k+1 \over 2 \right)^{k \over k-1}} \left( {F_2 \over F^{*} } - { F_1 \over F^{*}}\right) \\ & = 500000 \times {1 \over 2.1}\times 2.4 \times 1.2^{3.5} \times \left( 2.1949 - 0.96666 \right) \sim 614[kN] \end{align*} \] Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
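The Potto-GDC table lookups in this example can be reproduced with a short script. A minimal sketch (the helper names are mine; the \(F/F^{\star}\) column follows the table's convention, in which the factor \(\left(\frac{k+1}{2}\right)^{k/(k-1)}\) appears separately in equation (71)):

import math

def isentropic(M, k=1.4):
    # isentropic-flow ratios for a perfect gas at Mach number M
    t = 1.0 + 0.5 * (k - 1.0) * M * M                # T0/T
    T_T0 = 1.0 / t
    rho_rho0 = t ** (-1.0 / (k - 1.0))
    P_P0 = t ** (-k / (k - 1.0))
    A_Astar = (1.0 / M) * ((2.0 / (k + 1.0)) * t) ** ((k + 1.0) / (2.0 * (k - 1.0)))
    AP = A_Astar * P_P0                              # A P / (A* P0)
    F_Fstar = AP * (1.0 + k * M * M) / (1.0 + k)     # table convention (see note above)
    return T_T0, rho_rho0, A_Astar, P_P0, AP, F_Fstar

def mach_from_AP(target, k=1.4):
    # invert AP(M) on the subsonic branch, where AP decreases monotonically with M
    lo, hi = 1e-6, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if isentropic(mid, k)[4] > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M2 = mach_from_AP(2.1)
print(M2)                  # ~0.27353
print(isentropic(M2))      # reproduces the first table row above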
To be clear, Maxwell's equations are known as "Lorentz-invariant" equations, which means that they take the same form in every Lorentz-transformed frame of reference. Special relativity actually came about from studying Maxwell's (classical) equations without charges or currents. Then we get: $$\nabla \cdot \mathbf{E}=0$$$$\nabla \cdot \mathbf{B}=0$$$$\nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$$$\nabla\times\mathbf{B} = \mu_0\epsilon_0\frac{\partial \mathbf{E}}{\partial t}$$ Take the curl of Faraday's Law:$$\nabla\times(\nabla\times\mathbf{E})=\nabla(\nabla\cdot \mathbf{E})-\nabla^2\mathbf{E}=-\frac{\partial}{\partial t}(\nabla\times\mathbf{B})$$ And substitute Gauss's law for $\nabla\cdot\mathbf{E}$ and Ampere's law for $\nabla\times\mathbf{B}$ and you'll find the wave equation for $\mathbf{E}$:$$(\nabla^2-\frac{1}{c^2}\frac{\partial^2}{\partial t^2})\mathbf{E}=0$$ where $c=1/\sqrt{\mu_0\epsilon_0}\approx 2.998\times 10^{8}\ \mathrm{m/s}$. (No gauge choice is needed in this derivation since it works directly with the fields; the analogous wave equations for the potentials are usually derived in what's called the "Lorenz gauge", named after Ludvig Lorenz, not Hendrik Lorentz.) Back when people thought there was some kind of fluid or "ether" that electromagnetic waves traveled through, it made sense that if you're moving relative to the fluid, then the velocity of waves in the fluid will change. The Michelson-Morley experiment helped to show that there was no "ether" that light travels through. Einstein's insight was that electromagnetic waves travel at the same speed no matter what frame of reference you are using. This $\textbf{principle of relativity}$ is only satisfied if velocities don't add in the Galilean sense and rather follow a different set of rules for transforming frames of reference, the Lorentz transformations. It was in a sense luck that the theory of electromagnetism we discovered was Lorentz invariant, but in another sense it was inevitable since the theory is inherently relativistic. Notice that nowhere here do I discuss the particle nature of light. Special relativity really has nothing to do with classical versus quantum theory. It's all about the difference between Galilean invariance and Lorentz invariance. Aside: When asked later on why he believed in special relativity, Einstein quoted Fizeau's experiment.
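For what it's worth, the numerical value quoted above follows directly from the vacuum constants; a minimal sketch:

import math

mu0 = 4e-7 * math.pi        # vacuum permeability [H/m] (pre-2019 SI definition)
eps0 = 8.8541878128e-12     # vacuum permittivity [F/m]
c = 1.0 / math.sqrt(mu0 * eps0)
print(c)                    # ~2.998e8 m/s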
Astrophysics > Cosmology and Nongalactic Astrophysics Title: Primordial magnetic fields and the HI signal from the epoch of reionization (Submitted on 2 Nov 2009) Abstract: The implication of primordial magnetic-field-induced structure formation for the HI signal from the epoch of reionization is studied. Using semi-analytic models, we compute both the density and ionization inhomogeneities in this scenario. We show that: (a) the global HI signal can only be seen in emission, unlike in the standard $\Lambda$CDM models; (b) the density perturbations induced by primordial fields leave distinctive signatures of the magnetic field Jeans' length on the HI two-point correlation function; (c) the length scale of ionization inhomogeneities is $\lesssim 1 \rm\ Mpc$. We find that the peak expected signal (two-point correlation function) is $\simeq 10^{-4} \rm\ K^2$ in the range of scales $0.5\hbox{-}3 \rm\ Mpc$ for magnetic field strengths in the range $5 \times 10^{-10} \hbox{-} 3 \times 10^{-9} \rm\ G$. We also discuss the detectability of the HI signal. The angular resolution of the on-going and planned radio interferometers allows one to probe only the largest magnetic field strengths that we consider. They have the sensitivity to detect the magnetic field-induced features. We show that the future SKA has both the angular resolution and the sensitivity to detect the magnetic field-induced signal in the entire range of magnetic field values we consider, in an integration time of one week. Submission history: From: Shiv Sethi [v1] Mon, 2 Nov 2009 06:23:33 GMT (49kb)
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation with the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder, is the space of all coordinate choices larger than that of all possible moves in Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag?
(one possible physical scenario where I can envision this occurring is when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. My knowledge of quantum mechanics is still poor. So, what I meant about a "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying. My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school, regardless of code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online (for free)?
Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the server of the university - which means remotely running another environment - I found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; and it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Equations (1), (2), and (3) can be converted into a dimensionless form. Dimensionless forms are heavily used in this book because they simplify and clarify the solution. It can also be noted that in many cases the dimensionless set of equations is more easily solved. From the continuity equation (1), substituting for the density, \(\rho\), via the equation of state yields \[ \dfrac{P_x }{ R\, T_x } \,U_x = \dfrac{P_y }{ R\, T_y } \, U_y \label{shock:eq:continutyNonD} \tag{7} \] Squaring equation (7) results in \[ \dfrac{ {P_x }^{2} }{ R^{2} \,{T_x}^2}\, {U_x}^{2} = \dfrac{ {P_y }^{2} }{ R^{2} \,{T_y}^2}\, {U_y}^{2} \label{shock:eq:massNonD0} \tag{8} \] Multiplying the two sides by the ratio of the specific heats, \(k\), provides a way to introduce the speed of sound definition for a perfect gas, \(c^2 = k\,R\,T\), to be used for the Mach number definition, as follows: \[ \dfrac{ {P_x }^{2} }{ T_x \,\underbrace{k\, R\, T_x}_{{c_x}^{2}} }\, {U_x}^{2} = \dfrac{ {P_y }^{2} }{ T_y \,\underbrace{k\, R\, T_y}_{{c_y}^{2}} }\, {U_y}^{2} \label{shock:eq:massNonD1} \tag{9} \] Note that the speed of sound is different on the two sides of the shock. Utilizing the definition of the Mach number results in \[ \dfrac{ {P_x}^{2} }{ T_x } {M_x}^{2} = \dfrac{ {P_y}^{2} }{ T_y } {M_y}^{2} \label{shock:eq:massNonD2} \tag{10} \] Rearranging equation (10) results in \[ \dfrac{T_y }{ T_x} = \left( \dfrac{ P_{y} }{ P_{x}} \right)^{2} \left( \dfrac{M_y }{ M_x} \right)^{2} \label{shock:eq:nonDimMass} \tag{11} \] Energy equation (3) can be converted to a dimensionless form which can be expressed as \[ T_y \left( 1 + \dfrac{k-1 }{ 2}\, {M_y}^{2} \right) = T_x \left( 1 + \dfrac{k-1 }{ 2}\, {M_x}^{2} \right) \label{shock:eq:energyDless} \tag{12} \] It can also be observed that equation (12) means that the stagnation temperature is the same, \({T_0}_y = {T_0}_x\). Under the perfect gas model, \(\rho\, U^{2}\) is identical to \(k\, P\, M^{2}\) because \[ \rho U^{2} = \overbrace{\dfrac{P }{ R\,T}}^{\rho} \overbrace{\left( \dfrac{U^2 }{ \underbrace{k\,R\,T}_{c^2}}\right)} ^{M^2} k\,R\,T = k\, P\, M^{2} \label{shock:eq:Rindenty} \tag{13} \] Using the identity (13) transforms the momentum equation (2) into \[ P_x + k\, P_x\, {M_x}^{2} = P_y + k\, P_y\, {M_y}^{2} \label{shock:eq:Punarranged} \tag{14} \] Rearranging equation (14) yields \[ \dfrac{P_y }{ P_x} = \dfrac{1 + k\,{M_{x}}^2 }{ 1 + k\,{M_{y}}^2} \label{gd:shock:eq:pressureRatio} \tag{15} \] The pressure ratio in equation (15) can be interpreted as the loss of the static pressure.
The loss of total pressure can be expressed by utilizing the relationship between the static and total pressure (see equation (??)) as \[ \dfrac{{P_0}_y}{{P_0}_x} = \dfrac{P_y\left(1 + \dfrac{k-1}{2}\,{M_y}^{2}\right)^{\frac{k}{k-1}}}{P_x\left(1 + \dfrac{k-1}{2}\,{M_x}^{2}\right)^{\frac{k}{k-1}}} \label{shock:eq:totalPressureRatio} \tag{16} \] Combining the mass equation (11) with the energy equation (12) eliminates the temperature ratio and yields \[ \dfrac{P_y}{P_x} = \dfrac{M_x}{M_y}\,\sqrt{\dfrac{1 + \dfrac{k-1}{2}\,{M_x}^{2}}{1 + \dfrac{k-1}{2}\,{M_y}^{2}}} \label{shock:eq:combineDimMassEnergy} \tag{17} \] Combining the results of (17) with equation (15) results in \[ \left( \dfrac{1 + k\,{M_{x}}^2 }{ 1 + k\,{M_{y}}^2} \right)^{2} = \left( \dfrac{ M_x }{ M_y }\right)^{2} { 1 + \dfrac{ k-1 }{ 2} {M_x}^{2} \over 1 + \dfrac{ k-1 }{ 2} {M_y}^{2}} \label{shock:eq:toBeSolved} \tag{18} \] Equation (18) is a symmetrical equation in the sense that if \(M_y\) is substituted with \(M_x\) and \(M_x\) substituted with \(M_y\) the equation remains the same. Thus, one solution is \[ M_y = M_x \label{shock:eq:Msolution1} \tag{19} \] It can be observed that equation (18) is biquadratic, and it has a real solution only in a certain range, which will be discussed later. The solution can be obtained by rewriting equation (18) as a polynomial (fourth order). It is also possible to cross-multiply equation (??) and divide it by \(\left({M_x}^2- {M_y}^2\right)\), which results in \[ 1 + \dfrac{k -1 }{ 2} \left({M_{x}}^2+ {M_{y}}^2 \right) - k \,{M_{x}}^2\, {M_{y}}^2 = 0 \label{shock:eq:generalSolution} \tag{20} \] Equation (20) becomes Shock Solution \[ \label{shock:eq:solution2} {M_y}^2 = \dfrac{ {M_x}^2 + \dfrac{2 }{ k -1} } {\dfrac{2\,k }{ k -1}\, {M_x}^2 - 1 } \tag{21} \] The first solution (19) is the trivial solution in which the two sides are identical and no shock wave occurs. Clearly, in this case, the pressure and the temperature from both sides of the nonexistent shock are the same, i.e. \(T_x=T_y,\; P_x=P_y\). The second solution is where the shock wave occurs. The pressure ratio between the two sides can now be expressed as a function of only a single Mach number, for example, \(M_x\). Utilizing equation (15) and equation (21) provides the pressure ratio as only a function of the upstream Mach number as \begin{align*} {P_y \over P_x} = \dfrac{2\,k }{ k+1 } {M_x}^2 - \dfrac{k -1 }{ k+1} \qquad \text{or} \end{align*} Shock Pressure Ratio \[ \label{shock:eq:pressureMx} \dfrac{P_y }{ P_x} = 1 + \dfrac{ 2\,k }{ k+1} \left({M_x}^2 -1 \right ) \tag{22} \] The density and upstream Mach number relationship can be obtained in the same fashion to become Shock Density Ratio \[ \label{shock:eq:densityMx} \dfrac{\rho_y }{ \rho_x} = \dfrac{U_x }{ U_y} = \dfrac{( k +1) {M_x}^{2} }{ 2 + (k -1) {M_x}^{2} } \tag{23} \] The fact that the pressure ratio is a function of the upstream Mach number, \(M_x\), provides an additional way of obtaining an additional useful relationship.
And the temperature ratio, as a function of the pressure ratio, is transformed into Shock Temperature Ratio \[ \label{shock:eq:temperaturePbar} \dfrac{T_y }{ T_x} = \left( \dfrac{P_y }{ P_x} \right) \left( \dfrac{\dfrac{k + 1 }{ k -1 } + \dfrac{P_y }{ P_x}} { 1+ \dfrac{k + 1 }{ k -1 } \dfrac{P_y }{ P_x}} \right) \tag{24} \] In the same way, the relationship between the density ratio and the pressure ratio is Shock \(P-\rho\) \[ \label{shock:eq:densityPbar} \dfrac{\rho_y }{ \rho_x} = \dfrac{ 1 + \left( \dfrac{k +1 }{ k -1} \right) \left( \dfrac{P_y }{ P_x} \right) } { \left( \dfrac{k+1}{ k-1}\right) +\left( \dfrac{P_y }{ P_x} \right)} \tag{25} \] which is associated with the shock wave. Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
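The closed-form jump conditions above are straightforward to evaluate; a minimal sketch (for \(M_x = 2\) and \(k = 1.4\) it reproduces the familiar values \(M_y \approx 0.577\), \(P_y/P_x = 4.5\), \(\rho_y/\rho_x \approx 2.67\)):

import math

def normal_shock(Mx, k=1.4):
    # jump ratios across a normal shock, equations (21)-(23)
    My2 = (Mx * Mx + 2.0 / (k - 1.0)) / (2.0 * k / (k - 1.0) * Mx * Mx - 1.0)
    My = math.sqrt(My2)
    p_ratio = 1.0 + 2.0 * k / (k + 1.0) * (Mx * Mx - 1.0)        # P_y / P_x
    rho_ratio = (k + 1.0) * Mx * Mx / (2.0 + (k - 1.0) * Mx * Mx)  # rho_y / rho_x
    T_ratio = p_ratio / rho_ratio                                  # perfect gas: T = P/(rho R)
    return My, p_ratio, rho_ratio, T_ratio

print(normal_shock(2.0))   # (0.5774..., 4.5, 2.666..., 1.6875)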
A free-floating planet candidate from the OGLE and KMTNet surveys (2017) Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ... OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017) We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ... OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017) We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ... OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ... OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ... OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ... OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
It is known that if $\delta$ is a Woodin cardinal and $\kappa < \delta$, then the stationary tower forcing $\mathbb Q^\kappa_{<\delta}$ preserves cardinals up to $\kappa$ and forces $\delta = \kappa^+$. Thus if there is a Woodin cardinal $\delta$ then there is a forcing preserving cardinals up to $\aleph_\omega$ and making $\delta = \aleph_{\omega+1}$. But it is also known that $\mathbb Q^\kappa_{<\delta}$ is not $\delta$-c.c. Question: Is there some large cardinal assumption that implies the existence of a cardinal $\kappa > \aleph_{\omega+1}$ and a $\kappa$-c.c. forcing $\mathbb P$ which preserves $\aleph_n$ for finite $n$ and makes $\kappa = \aleph_{\omega+1}$? I'm no expert on these things, but naively I would suggest two possible approaches: (a) Find a large cardinal $\delta$ that implies the existence of a $\delta$-saturated tower of ideals with similar effects as the stationary tower. (b) Find an inaccessible cardinal $\delta$ with a precipitous tower of ideals of height $\delta$ that preserves the $\aleph_n$'s but actually collapses $\delta$, so that $\delta^+$ is the witness. Update: (b) is ruled out by Mohammad's result here. Note: It is consistent relative to large cardinals that there is some $\kappa$-c.c. forcing collapsing a regular $\kappa$ to be $\aleph_{\omega+1}$ while preserving cardinals below $\aleph_\omega$. Namely an $\aleph_{\omega+2}$-saturated ideal on $\aleph_{\omega+1}$, which can be forced from a huge cardinal. But I want to see if it is outright implied by large cardinals, because then it is much easier to combine with other things. New Idea: Foreman-Magidor-Shelah show in "Martin's Maximum Part I" that if $\mu$ is regular and $\kappa > \mu$ is supercompact, then $\mathrm{Col}(\mu,<\kappa)$ forces that $NS_\mu$ is precipitous. I believe this was improved by Goldring to a Woodin cardinal. So perhaps for large $\kappa$, $\mathrm{Col}(\aleph_{\omega+1},<\kappa) * \dot{\mathcal{P}(\aleph_{\omega+1}) / NS}$ does the trick. If we force below $cof(\omega_n)$ for $n > 0$, then we are sure to collapse $\kappa = \aleph_{\omega+2}$ (by a theorem of Shelah), and the whole forcing is $\kappa$-dense, so $\kappa^+$ should be the witness. But the problem is, what happens below $\aleph_\omega$? Despite being precipitous, could forcing with $NS_{\aleph_{\omega+1}}$ actually make $\kappa$ countable? The proof of precipitousness given in the paper is a bit abstract so I have no idea how the generic ultrapower compares to the generic extension.
The Kunen inconsistency

The Kunen inconsistency, the theorem showing that there can be no nontrivial elementary embedding from the universe to itself, remains a focal point of large cardinal set theory, marking a hard upper bound at the summit of the main ascent of the large cardinal hierarchy, the first outright refutation of a large cardinal axiom.
On this main ascent, large cardinal axioms assert the existence of elementary embeddings $j:V\to M$ where $M$ exhibits increasing affinity with $V$ as one climbs the hierarchy. The $\theta$-strong cardinals, for example, have $V_\theta\subset M$; the $\lambda$-supercompact cardinals have $M^\lambda\subset M$; and the huge cardinals have $M^{j(\kappa)}\subset M$. The natural limit of this trend, first suggested by Reinhardt, is a nontrivial elementary embedding $j:V\to V$, the critical point of which is accordingly known as a Reinhardt cardinal. Shortly after this idea was introduced, however, Kunen famously proved that there are no such embeddings, and hence no Reinhardt cardinals in $\text{ZFC}$. Since that time, the inconsistency argument has been generalized by various authors, including Harada [1](p. 320-321), Hamkins, Kirmayer and Perlmutter [2], Woodin [1](p. 320-321), Zapletal [3] and Suzuki [4, 5]:

- There is no nontrivial elementary embedding $j:V\to V$ from the set-theoretic universe to itself.
- There is no nontrivial elementary embedding $j:V[G]\to V$ of a set-forcing extension of the universe to the universe, and neither is there $j:V\to V[G]$ in the converse direction. More generally, there is no nontrivial elementary embedding between two ground models of the universe. More generally still, there is no nontrivial elementary embedding $j:M\to N$ when both $M$ and $N$ are eventually stationary correct.
- There is no nontrivial elementary embedding $j:V\to \text{HOD}$, and neither is there $j:V\to M$ for a variety of other definable classes, including $\text{gHOD}$ and the $\text{HOD}^\eta$, $\text{gHOD}^\eta$. If $j:V\to M$ is elementary, then $V=\text{HOD}(M)$.
- There is no nontrivial elementary embedding $j:\text{HOD}\to V$. More generally, for any definable class $M$, there is no nontrivial elementary embedding $j:M\to V$.
- There is no nontrivial elementary embedding $j:\text{HOD}\to\text{HOD}$ that is definable in $V$ from parameters.

It is not currently known whether the Kunen inconsistency may be undertaken in ZF. Nor is it known whether one may rule out nontrivial embeddings $j:\text{HOD}\to\text{HOD}$ even in $\text{ZFC}$.

Metamathematical issues

Kunen formalized his theorem in Kelley-Morse set theory, but it is also possible to prove it in the weaker system of Gödel-Bernays set theory. In each case, the embedding $j$ is a $\text{GBC}$ class, and the elementarity of $j$ is asserted as $\Sigma_1$-elementarity, which implies $\Sigma_n$-elementarity when the two models have the same ordinals.

References

[1] Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
[3] Zapletal, Jindrich. A new proof of Kunen's inconsistency. Proc. Amer. Math. Soc. 124(7):2203-2204, 1996.
[4] Suzuki, Akira. Non-existence of generic elementary embeddings into the ground model. Tsukuba J. Math. 22(2):343-347, 1998.
[5] Suzuki, Akira. No elementary embedding from $V$ into $V$ is definable from parameters. J. Symbolic Logic 64(4):1591-1594, 1999.
July 4th, 2017, 03:26 PM # 1 Senior Member Joined: Jan 2017 From: Toronto Posts: 209 Thanks: 3 double integral with Jacobian determinant Suppose we want to integrate f(x, y) = x + y over the diamond U with boundary lines y = x + 1, y = x - 1, y = -x + 1 and y = -x - 1. Fill in the three blank spaces below (two bounds and the integrand) to write the integral of f(x, y) = x + y as an integral in u and v, using the substitution x = u + v + 1 and y = u - v + 2. The first and third bounding lines above have already been converted into (u, v) bounds and included below. DO NOT EVALUATE this integral. (Hint: You may use the fact that "Jacobian determinant" = -2 for this substitution.) $\displaystyle \int_{0}^{?} \int_{?}^{-1} ????? \,du\, dv $ My question is how to use the substitution and the Jacobian determinant to find the missing bounds, in particular the inner lower limit. July 4th, 2017, 08:56 PM # 2 Senior Member Joined: Dec 2012 From: Hong Kong Posts: 853 Thanks: 311 Math Focus: Stochastic processes, statistical inference, data mining, computational linguistics Convert each boundary line to the (u, v) variables: $\displaystyle y = x + 1 \Rightarrow u - v + 2 = u + v + 1 + 1 \Rightarrow v = 0$ $\displaystyle y = x - 1 \Rightarrow u-v+2=u+v+1-1 \Rightarrow 2 = 2v \Rightarrow v = 1$ $\displaystyle y=-x+1 \Rightarrow u-v+2=-u-v-1+1 \Rightarrow 2u = -2 \Rightarrow u = -1$ $\displaystyle y=-x-1 \Rightarrow u-v+2=-u-v-1-1 \Rightarrow 2u = -4 \Rightarrow u = -2$ So the missing outer (v) limit is 1 and the missing inner lower (u) limit is -2. As for the Jacobian determinant, $\displaystyle x + y = 2u + 3 \Rightarrow u = \frac{x+y-3}{2}$ $\displaystyle x - y = 2v - 1 \Rightarrow v = \frac{x-y+1}{2}$ $\displaystyle \dfrac{\partial(u, v)}{\partial(x, y)} = \begin{vmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y}\\[1em] \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y} \end{vmatrix}=\begin{vmatrix} \dfrac{\partial (x+y-3)/2}{\partial x} & \dfrac{\partial (x+y-3)/2}{\partial y}\\[1em] \dfrac{\partial (x-y+1)/2}{\partial x} & \dfrac{\partial (x-y+1)/2}{\partial y} \end{vmatrix}=\begin{vmatrix} \frac{1}{2} & \frac{1}{2}\\[1em] \frac{1}{2} & -\frac{1}{2} \end{vmatrix} = -\frac{1}{2}$ There are two common conventions for "the" Jacobian determinant, each the reciprocal of the other; your textbook's value of $-2$ is $\partial(x, y)/\partial(u, v)$, the reciprocal of the $-\frac{1}{2}$ computed above. Finally, $\displaystyle x + y = u + v + 1 + u - v + 2 = 2u+3$, and since the change-of-variables formula uses the absolute value of the Jacobian, $|-2| = 2$, the integral is $\displaystyle \int_{0}^1 \int^{-1}_{-2} 2(2u+3)\, du\, dv$.
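The corrected bounds and Jacobian can be verified symbolically. Below is a minimal Python check using sympy (the variable names simply mirror the thread's u and v):

```python
import sympy as sp

u, v = sp.symbols('u v')
x = u + v + 1          # substitution from the problem
y = u - v + 2

# Jacobian d(x,y)/d(u,v); its absolute value is the area-scaling factor
J = sp.Matrix([[sp.diff(x, u), sp.diff(x, v)],
               [sp.diff(y, u), sp.diff(y, v)]]).det()
print(J)               # -2, matching the hint's "Jacobian determinant"

# integrand f(x, y) = x + y rewritten in (u, v)
f = sp.simplify(x + y)                              # 2*u + 3
I = sp.integrate(sp.Abs(J) * f, (u, -2, -1), (v, 0, 1))
print(I)               # 0, consistent with the symmetry of x + y on the diamond
```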
I think @ADG has provided a nice summary of when it is and isn't acceptable to post answers involving CAS. CAS is a lovely tool that I certainly use to check my hand-derived results and sometimes to get around tedious algebra that isn't the entire point of a problem. However, CAS can be downright misleading, if not thoroughly disconcerting, if used mindlessly, even if technically correct. I'll discuss a real example here on M.SE. The problem concerns a double integration. Really, the trick to analytical evaluation lies in a change in the order of integration. That is where the thinking is. Maybe a CAS can recognize the thought pattern and produce the correct answer. I don't know of one, however. All I know is what happened when someone (a Maple salesperson?) answered the question with Maple I/O. So, I reproduce the pure CAS answer: $$-1/12\,{\frac {2\,{\mbox{$_3$F$_2$}(1/6,1/2,1/2;\,7/6,3/2;\,1)}\Gamma \left( 5/6 \right) \Gamma \left( 2/3 \right) -{\pi }^{3/2}}{\Gamma \left( 5/6 \right) \Gamma \left( 2/3 \right) }}$$ To the inexperienced reader trying to learn something, this is enough to discourage. Seriously, if you were struggling in Calc III and were presented with this answer, wouldn't you be tempted to give up? The sad part is that the answer is quite correct, numerically. But we have generalized hypergeometric and ugly-looking gammas. That integral must be so very hard! This is why CAS-only solutions are unacceptable in many cases, even if the OP only asked for the result of evaluating the integral. There is a level of thought - at this time, human thought - that the problem deserves, and that someone posting an answer at M.SE needs to describe. The OP needs to be taught to recognize that a change in order of integration can reduce some of these double integrals to simple single integrals. In this case, as the accepted solution explains, the double integral evaluates to $\pi/24$. That's it. I don't care if the CAS solution agrees with this somehow, either numerically or through a complicated series of identities; the CAS has failed to present the answer in a useful form. It, and any answer like it that favors mindlessness and I/O over understanding and exposition, should be downvoted thoroughly.
In this section we will discuss geometric progression problems that are important for the exam. Before attempting these problems, students should know all the formulas of geometric progression; if you are not conversant with them, please learn the following first.

Formulas for geometric progression problems

If $a_1, a_2, a_3, \ldots, a_n$ are in G.P. with first term $a$ and common ratio $r$, then
1) nth term of the G.P.: $a_n = a r^{n-1}$
2) sum of $n$ terms: $S_n = a\left(\frac{r^n - 1}{r - 1}\right)$ for $r > 1$; equivalently, for $r \neq 1$, $S_n = a\left(\frac{1 - r^n}{1 - r}\right)$
3) if $r = 1$, then $S_n = na$; in terms of the last term $l$, $S_n = \frac{a - lr}{1 - r}$ or $S_n = \frac{lr - a}{r - 1}$
4) G.M. $= \sqrt{ab}$
5) A.M. $= \frac{a + b}{2}$

1) If the 4th, 10th and 16th terms of a G.P. are x, y and z respectively, prove that x, y, z are in G.P.
Solution: The nth term of a G.P. is $a_n = ar^{n-1}$, so $a_4 = ar^{3}$, $a_{10} = ar^{9}$, $a_{16} = ar^{15}$, i.e. $x = ar^3$, $y = ar^9$, $z = ar^{15}$. Then $y^2 = a^2 r^{18}$, while $xz = ar^3 \cdot ar^{15} = a^2 r^{18}$. Hence $y^2 = xz$, so x, y, z are in G.P.

2) If the A.M. and G.M. of the roots of a quadratic equation are 8 and 5 respectively, obtain the quadratic equation.
Solution: If A and G are the arithmetic and geometric means between two positive numbers a and b, then the quadratic equation having a, b as its roots is $x^{2} - 2Ax + G^{2} = 0$, where $A = \frac{a + b}{2}$ and $G = \sqrt{ab}$. Here $8 = \frac{a + b}{2}$, so $a + b = 16$; and $5 = \sqrt{ab}$, so $ab = 25$. The quadratic equation with roots a, b is $x^{2} - (a + b)x + ab = 0$, i.e. $x^{2} - 16x + 25 = 0$.

3) Insert 5 geometric means between 576 and 9.
Solution: Let $a_1, a_2, a_3, a_4, a_5$ be 5 geometric means between $a = 576$ and $b = 9$, so the sequence $576, a_1, a_2, a_3, a_4, a_5, 9$ is in G.P. The common ratio is $r = \left(\frac{b}{a}\right)^{\frac{1}{n + 1}} = \left(\frac{9}{576}\right)^{\frac{1}{5 + 1}} = \left(\frac{1}{64}\right)^{\frac{1}{6}} = \frac{1}{2}$. Then $a_1 = ar = 576 \times \frac{1}{2} = 288$, $a_2 = ar^2 = 576 \times \frac{1}{4} = 144$, $a_3 = ar^3 = 576 \times \frac{1}{8} = 72$, $a_4 = ar^4 = 576 \times \frac{1}{16} = 36$, $a_5 = ar^5 = 576 \times \frac{1}{32} = 18$. Hence 288, 144, 72, 36, 18 are the 5 geometric means inserted between 576 and 9.
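Example 3 is easy to check numerically. A minimal Python sketch (the variable names are ours, not the text's):

```python
# Insert n geometric means between a and b, as in Example 3
a, b, n = 576, 9, 5

r = (b / a) ** (1 / (n + 1))            # common ratio: (b/a)^(1/(n+1))
means = [a * r ** k for k in range(1, n + 1)]

print(r)        # 0.5
print(means)    # [288.0, 144.0, 72.0, 36.0, 18.0]
```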
The Feferman-Schütte ordinal, $\Gamma_0$

The Feferman-Schütte ordinal, denoted $\Gamma_0$ ("gamma naught"), is the first ordinal fixed point of the Veblen function. It figures prominently in the ordinal analysis of the proof-theoretic strength of several mathematical theories.

Veblen hierarchy

Every increasing continuous ordinal function $f$ has an unbounded set of fixed points. Proof: when $f$ is increasing, $f(\alpha)\geq \alpha$ for all $\alpha$; when $f$ is also continuous, $$ f \left( \bigcup_n f^n (\alpha + 1) \right) = \bigcup_n f^n (\alpha + 1) $$ is a fixed point greater than $\alpha$. Since the set of fixed points is an unbounded, well-ordered set, there is an ordinal function $\varphi^{[f]}$ listing these fixed points; it is in turn increasing and continuous.

The Veblen hierarchy is the sequence of functions $\varphi_\alpha$ defined by
- $\varphi_0(x) = \omega^x$
- $\varphi_{\alpha + 1} = \varphi^{[\varphi_\alpha]}$
- for $0 < \beta = \bigcup \beta$ (i.e. $\beta$ a limit), $\varphi_\beta(x)$ enumerates the fixed points common to all $\varphi_\alpha$ for $\alpha < \beta$. (For $\alpha < \beta$, the fixed-point sets of the $\varphi_\alpha$ are all closed, so their intersection is closed; it is unbounded because $\bigcup_\alpha \varphi_\alpha(t+1)$ is a common fixed point greater than $t$.)

In particular, the function $\varphi_1$ enumerates the epsilon numbers, i.e. $\varphi_1(\alpha)=\varepsilon_\alpha$.

The Veblen functions have the following properties:
- if $\beta<\gamma$ then $\varphi_\alpha(\beta)<\varphi_\alpha(\gamma)$
- if $\alpha<\beta$ then $\varphi_\alpha(0)<\varphi_\beta(0)$
- if $\alpha>\gamma$ then $\varphi_\alpha(\beta)=\varphi_\gamma(\varphi_\alpha(\beta))$
- $\varphi_\alpha(\beta)$ is an additive principal number.

An ordinal $\alpha$ is an additive principal number if $\alpha>0$ and $\delta+\eta<\alpha$ for all $\delta, \eta<\alpha$. Let $P$ denote the set of all additive principal numbers.

We define the normal form for ordinals $\alpha$ such that $0<\alpha<\Gamma_0=\min\{\beta \mid \varphi(\beta,0)=\beta\}$:
- $\alpha=_{NF}\varphi_\beta(\gamma)$ if and only if $\alpha=\varphi_\beta(\gamma)$ and $\beta,\gamma<\alpha$
- $\alpha=_{NF}\alpha_1+\alpha_2+\cdots+\alpha_n$ if and only if $\alpha=\alpha_1+\alpha_2+\cdots+\alpha_n$, $\alpha>\alpha_1\geq\alpha_2\geq\cdots\geq\alpha_n$, and $\alpha_1,\alpha_2,\ldots,\alpha_n\in P$

Let $T$ denote the set of all ordinals which can be generated from the ordinal 0 using the Veblen functions and the operation of addition:
- $0 \in T$
- if $\alpha=_{NF}\varphi_\beta(\gamma)$ and $\beta,\gamma \in T$ then $\alpha\in T$
- if $\alpha=_{NF}\alpha_1+\alpha_2+\cdots+\alpha_n$ and $\alpha_1,\alpha_2,\ldots,\alpha_n\in T$ then $\alpha\in T$

To each limit ordinal $\alpha\in T$ we assign a fundamental sequence, i.e.
a strictly increasing sequence $(\alpha[n])_{n<\omega}$ whose limit is the ordinal $\alpha$:
- if $\alpha=\alpha_1+\alpha_2+\cdots+\alpha_k$ then $\alpha[n]=\alpha_1+\alpha_2+\cdots+(\alpha_k[n])$
- if $\alpha=\varphi_0(\beta+1)$ then $\alpha[n]=\varphi_0(\beta)\times n$
- if $\alpha=\varphi_{\beta+1}(0)$ then $\alpha[0]=0$ and $\alpha[n+1]=\varphi_\beta(\alpha[n])$
- if $\alpha=\varphi_{\beta+1}(\gamma+1)$ then $\alpha[0]=\varphi_{\beta+1}(\gamma)+1$ and $\alpha[n+1]=\varphi_\beta(\alpha[n])$
- if $\alpha=\varphi_{\beta}(\gamma)$ and $\gamma$ is a limit ordinal then $\alpha[n]=\varphi_{\beta}(\gamma[n])$
- if $\alpha=\varphi_{\beta}(0)$ and $\beta$ is a limit ordinal then $\alpha[n]=\varphi_{\beta[n]}(0)$
- if $\alpha=\varphi_{\beta}(\gamma+1)$ and $\beta$ is a limit ordinal then $\alpha[n]=\varphi_{\beta[n]}(\varphi_{\beta}(\gamma)+1)$

The Feferman-Schütte ordinal $\Gamma_0$ is the least ordinal not in $T$.

Gamma function

The Gamma function (in this context) enumerates the ordinals $\alpha$ such that $\varphi(\alpha,0)=\alpha$. Its fundamental sequences are:
- if $\alpha=\Gamma_0$ then $\alpha[0]=0$ and $\alpha[n+1]=\varphi(\alpha[n],0)$
- if $\alpha=\Gamma_{\beta+1}$ then $\alpha[0]=\Gamma_{\beta}+1$ and $\alpha[n+1]=\varphi(\alpha[n],0)$
- if $\alpha=\Gamma_{\beta}$ and $\beta$ is a limit ordinal then $\alpha[n]=\Gamma_{\beta[n]}$

References

Veblen, Oswald. Continuous Increasing Functions of Finite and Transfinite Ordinals. Transactions of the American Mathematical Society 9 (1908), pp. 280-292.
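The fundamental-sequence rules above are mechanical enough to transcribe into code. Below is a rough Python sketch, entirely our own construction: it represents terms of $T$ as nested tuples, assumes its inputs are already in normal form (this is not checked), and applies the rules literally. The demo walks up the sequence $0, \varphi_0(0), \varphi_0(\varphi_0(0)), \ldots$ converging to $\varepsilon_0 = \varphi_1(0)$.

```python
# Terms of T as nested tuples: 0 is Z; phi_a(b) is ("phi", a, b);
# a1 + ... + ak is ("sum", (a1, ..., ak)). Inputs must already be in
# normal form; nothing here checks or re-normalizes them.
Z = ("0",)
ONE = ("phi", Z, Z)                       # 1 = phi_0(0) = omega^0

def is_succ(t):
    """A term denotes a successor ordinal iff it ends in '+ 1'."""
    return t == ONE or (t[0] == "sum" and t[1][-1] == ONE)

def pred(t):
    """Strip one trailing '+ 1' (only called when is_succ(t))."""
    if t == ONE:
        return Z
    rest = t[1][:-1]
    return rest[0] if len(rest) == 1 else ("sum", rest)

def plus_one(t):
    if t == Z:
        return ONE
    parts = t[1] if t[0] == "sum" else (t,)
    return ("sum", parts + (ONE,))

def fs(a, n):
    """The n-th member alpha[n] of the fundamental sequence of limit term a."""
    if a[0] == "sum":                     # (a1+...+ak)[n] = a1+...+(ak[n])
        rest, tail = a[1][:-1], fs(a[1][-1], n)
        if tail == Z:
            return rest[0] if len(rest) == 1 else ("sum", rest)
        tail_parts = tail[1] if tail[0] == "sum" else (tail,)
        return ("sum", rest + tail_parts)
    _, b, g = a
    if g != Z and not is_succ(g):         # gamma a limit: phi_b(g)[n] = phi_b(g[n])
        return ("phi", b, fs(g, n))
    if b != Z and not is_succ(b):         # beta a limit
        if g == Z:                        # phi_b(0)[n] = phi_{b[n]}(0)
            return ("phi", fs(b, n), Z)
        return ("phi", fs(b, n), plus_one(("phi", b, pred(g))))
    if b == Z:                            # phi_0(d+1)[n] = phi_0(d) * n
        term = ("phi", Z, pred(g))
        return Z if n == 0 else (term if n == 1 else ("sum", (term,) * n))
    # b = c+1: iterate phi_c, starting from 0 (g = 0) or from phi_b(g-1)+1
    x = Z if g == Z else plus_one(("phi", b, pred(g)))
    for _ in range(n):
        x = ("phi", pred(b), x)
    return x

def show(t):
    if t == Z:
        return "0"
    if t[0] == "phi":
        return "phi(%s,%s)" % (show(t[1]), show(t[2]))
    return "+".join(show(p) for p in t[1])

eps0 = ("phi", ONE, Z)                    # epsilon_0 = phi_1(0)
for n in range(4):
    print(n, show(fs(eps0, n)))           # 0, phi(0,0), phi(0,phi(0,0)), ...
```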
Let me clear up a few definitions before answering. We say a set $X$ is indexed by a set $I$ if there is a function from $I$ onto $X$--we typically denote this function by something like $i\mapsto X_i,$ which allows us to write $X=\{X_i\}_{i\in I}$ or $X=\{X_i\mid i\in I\}.$ Every set can be indexed by itself, of course. Given an indexed set of sets $\{X_i\}_{i\in I}$ we may define the Cartesian product of the sets $X_i$ by $$\prod_{i\in I}X_i:=\left\{f:I\to\bigcup_{i\in I}X_i\mid \forall i\in I,f(i)\in X_i\right\}.$$ That is, the elements of a Cartesian product are functions from the index set into the union of all the sets, such that each $i$th element is a member of the $i$th set in the collection. This is the same idea as what is going on in the finite case, only the "tuples" aren't necessarily ordered (or even orderable, as far as we care). To see why these are "the same," take any two sets $A,B.$ We can readily index the set $\{A,B\}$ with the set $\{1,2\},$ by $1\mapsto A,2\mapsto B$--that is, we can rewrite $A=X_1,B=X_2$ for the indexed notation. So, on the one hand, the elements of $$\prod_{i\in\{1,2\}}X_i$$ are functions on $\{1,2\}$ such that $1$ is sent to an element of $X_1=A,$ and $2$ is sent to an element of $X_2=B.$ But such functions are effectively ordered pairs whose first entry is an element of $A$ and whose second entry is an element of $B,$ meaning that this product is effectively the same as $A\times B$ in the sense you may be more familiar with. Note that the indexing makes a difference--if we'd chosen the indexing $1\mapsto B,2\mapsto A,$ then it would be "more natural" to say that the Cartesian product (in the sense defined above) was the same as $B\times A$ in the sense that you're used to. Now, where is it that the Axiom of Choice comes into play in ensuring that arbitrary Cartesian products of non-empty sets are non-empty? Well, let's start with an arbitrary non-empty indexed set of non-empty sets $\{X_i\}_{i\in I}.$ As it stands, we don't know anything about the set $I$ or the sets $X_i,$ except that they are non-empty. They may not necessarily be orderable at all so far as we are assuming. Even if we were to add the assumption of orderability, then we would still have to choose an ordering for each set $X_i$ (and for $I,$ if we like, though it isn't really necessary), and unless $X_i$ is a singleton, there will be more than one way to order $X_i$ if it can be ordered at all. Even if we assume that each set $X_i$ is already ordered, it won't be enough! After all, not every ordered set has a least element, as mentioned before, and though we could always "fix" a given order of that nature by pulling an element out of the usual order, and calling it the least element under a new order, we may have to choose which element to make the least element for every $X_i$! So, we would have to make the assumption that each $X_i$ is ordered in such a way that it has a least element, or your argument doesn't work. This is a fairly tall assumption, and (it turns out) requires the Axiom of Choice to prove that it is even possible! After all, even if we know that $X_i$ admits such an order for each $i\in I,$ we still have to choose an order for each such $X_i,$ and in general unless $X_i$ is a singleton, there will be more than one way to do this! Let me outline (most of) the proof of the equivalence of some common formulations of the Axiom of Choice. The following are equivalent: The Cartesian product of (a non-empty indexed set of) non-empty sets is non-empty. 
For every non-empty set $\mathcal A$ of pairwise-disjoint non-empty sets, there exists a set (which we'll call a "choice set" for $\mathcal A$) containing exactly one element from each element of $\mathcal A.$ Every non-empty set of non-empty sets admits a choice function. Every set admits a well-order. For the first implication (Cartesian product implies choice set), recall that every set can be indexed, so taking the Cartesian product of the elements of $\mathcal A,$ and picking out an element $f$ from this Cartesian product, the range of $f$ can be shown to be a choice set for $\mathcal A.$ For the second implication (choice set implies choice function), start with a non-empty set $X$ of non-empty sets, then note that $\mathcal A:=\bigl\{\{A\}\times A\mid A\in X\bigr\}$ is a non-empty set of pairwise disjoint non-empty sets. Furthermore, a choice set for $\mathcal A$ can be shown to be a choice function for $X.$ Proving the third implication (choice function implies well-orderability) is the most non-trivial of the results, and I won't get into it here, but I wanted to mention it. For the fourth implication (well-orderability implies Cartesian product), start with a non-empty indexed set of non-empty sets $\{X_i\}_{i\in I}.$ Since $\bigcup_{i\in I}X_i$ is a set, then it is well-orderable, so pick any well-ordering for this union. Now, by definition of well-ordering, we have a least element for each $X_i,$ so we can show that $\prod_{i\in I}X_i$ is non-empty, letting $f:I\to\bigcup_{i\in I}X_i$ be defined by letting $f(i)$ be the least element of $X_i$ for each $i\in I.$ Alternately, we can leave out the well-orderability, and prove that choice function implies Cartesian product. Indeed, suppose we have a non-empty indexed set of non-empty sets $\{X_i\}_{i\in I}.$ Let $g$ be a choice function for $\{X_i\}_{i\in I},$ meaning that $g:\{X_i\}_{i\in I}\to\bigcup_{i\in I}X_i$ is such that $g(X_i)\in X_i$ for all $i\in I.$ Defining $f(i)=g(X_i)$ for each $i\in I$ makes $f$ an element of $\prod_{i\in I}X_i,$ as desired.
If $\displaystyle x\in A$ verifies that $\displaystyle \forall\epsilon:B(x,\epsilon)\cap A\ne\varnothing,$ what is such a point called? Thank you! Originally Posted by Connected: If $\displaystyle x\in A$ verifies that $\displaystyle \forall\epsilon:B(x,\epsilon)\cap A\ne\varnothing,$ what is such a point called? Is there a typo here? Because this is always true, since $\displaystyle x\in B(x,\epsilon)\cap A$. Last edited by bkarpuz; Mar 30th 2011 at 05:09 PM. No, no typo; it's just that I want to know what the point $\displaystyle x$ is called. Originally Posted by Connected: No, no typo; it's just that I want to know what the point $\displaystyle x$ is called. Then I want to learn this too. Haha, okay, perhaps another known example: $\displaystyle x\in A$ is said to be an interior point of $\displaystyle A$ if $\displaystyle \exists\delta>0:B(x,\delta)\subseteq A.$ $\displaystyle x\in X$ is said to be an accumulation (limit) point of $\displaystyle A$ provided that $\displaystyle (B(x,\varepsilon)\backslash\{x\})\cap A\neq\emptyset$ for every $\displaystyle \varepsilon>0$. Maybe you are looking for this one? Got it, the concept is "adherent point." Originally Posted by Connected: Got it, the concept is "adherent point." What you mean to say is that $\displaystyle x\in A$, etc. I think the other name it has is "closure point"; does that make sense?
In this puzzle you need to obtain the highest score under the following rules: 1) Choose two digits from 1, 2, 3, 4, 5, 6, 7, 8 and 9. These digits are represented by two letters, say $T$ and $M$. You are also given the digit 0. 2) Make up an equation using each of your three digits, $T$, 0 and $M$, exactly once. The numerical result of the equation should be a three-digit number, which can be $TM0$, $T0M$, $MT0$, $M0T$, $0MT$ or $0TM$. You may not concatenate digits, but you may use as many of the following symbols as you would like: $+$, $-$, $\div$, $\times$, !, $\sqrt{}$, (, ). Note that multiple ! symbols are each treated as factorial functions applied in sequence; see the note below. 3) Your score is your three-digit number divided by the sum of your digits. So for example if you start with 0, 2, 4 then your equation could be $$(0*2)+4!=024$$ which would score $024 \div (2+4+0) = 24\div 6 = 4$. This question was inspired by this question. Note: $4!! = (4!)! = 24!$; this is allowed. Double, triple, etc. factorials are not allowed; $4!! \ne 4 \times 2$.
HP 17bII+ Silver solver 09-14-2018, 09:20 PM Post: #1 HP 17bII+ Silver solver I'm very satisfied with my HP 17bII+ Silver, which I find very powerful, but also nice and not too complicated-looking (no [f] and [g] shifted functions on keys like the HP 12c or HP 35s that I used to use at work every day, but a really complete calculator except for trig and complex-number calculations). So the 17BII+ is my new everyday calculator. I won't revisit the argument that a good (HP) calculator is a perfect complement or substitute for Excel. I chose the 17BII+ after having carefully studied the programs I use in my day-to-day work: - price and cost calculations: can be modeled in the solver, with 4 or 5 equations and shared variables - margin calculations: built-in functions - time value of money: built-in functions - time functions: built-in functions For the few moments I need trig or complex calculations, I always have Free42 or a real 35s / 15c not really far from me. After having rebuilt my work environment in the (so powerful) solver, I started to study the behavior of the solver, looking at loops (the Σ function), conditional branching (the IF function), menu selection (the S function), and the Get and Let functions (G(), L()). I googled a lot and found an interesting PDF file about the Solvers of the 19B, 17B, 17BII and, according to the author, the 17BII+ Silver. The file is here: http://www.mh-aerotools.de/hp/documents/...ET-LET.pdf I tried a few equations from the "Using New and Old Values" chapter, pages 4 and following. There I found lots of differences with my actual 17BII+ Silver, which I would like to share here with you. For instance, the following equation found on page 4: Code: In the next example, found on page 5: Code: Then the equation: Code: I don't understand the first 2 cases. In the first one, A should be set to -B, not B, if the "old" value of A is not being used. In the second one, the solver does not use the "old" value, but it also does not solve the equation, as there is no defined solution. In the last case, I understand that the equation is evaluated twice before finding an answer. So the old value is used there, but not in the way I would expect. I finally found one, and only one, way to use iterations in the solver, with the equation: Code: Note that neither A=G(A)+1 nor A=1+G(A) nor G(A)+2=A works. I'm not disappointed, as the solver is a really interesting and useful feature of the calculator; I'm just surprised not to have found more working cases of iterations, or a clear understanding of how the solver works. Comments are welcome. Regards, Thibault 09-14-2018, 10:14 PM (This post was last modified: 09-14-2018 10:25 PM by rprosperi.) Post: #2 RE: HP 17bII+ Silver solver The 17BII+ (Silver Edition) is an excellent machine; in fact it has the best keyboard of any machine made today by HP. But the solver does have a bug, and there are also a few other smaller issues making the solver slightly inferior to the 17B/17BII/19B/19BII/27S version. See these 3 articles for details: http://www.hpmuseum.org/cgi-sys/cgiwrap/...ead=242551 http://www.hpmuseum.org/forum/thread-657...l#pid58685 http://www.hpmuseum.org/cgi-sys/cgiwrap/...ead=134189 Overall these are not dramatic issues, and once you understand the solver bug, you can likely create equations that avoid the issue. I have most HP machines, but the 17BII and 17BII+ are the ones I use most often for real work (vs. playing, exploring or following along interesting threads here).
Edit: added 3rd link --Bob Prosperi 09-15-2018, 12:05 AM Post: #3 RE: HP 17bII+ Silver solver (09-14-2018 09:20 PM)pinkman Wrote: I'm not disappointed, as the solver is a really interesting and useful feature of the calculator; I'm just surprised not to have found more working cases of iterations, or a clear understanding of how the solver works. Thibault, when Kinpo built the 17bii+ calculator years ago (both the gold one and the silver one), they basically goofed the solver implementation. This has been discussed at length in the HPMuseum forum. The solvers on the 17b and 17bii work fine, just as you would expect them to. If you plan on making significant use of the solver and its incredible capabilities, forget the + and get an original 17b or 17bii. You won't be sorry. Also, get the manual (it's on the Museum DVD) Technical Applications for the HP-27s and HP-19b. It applies to the 17b as well. The Sigma function also works fine on the 17b and 17bii. 09-15-2018, 04:36 AM Post: #4 RE: HP 17bII+ Silver solver Thanks to both of you for the details, links and advice. I've read the threads carefully; I had not found them by myself. It's the end of the night now; I'll try to do some testing later. 09-15-2018, 11:55 PM (This post was last modified: 09-15-2018 11:58 PM by rprosperi.) Post: #5 RE: HP 17bII+ Silver solver If you want to really explore the capabilities of the awesome Pioneer Solver, and confidently try to push it without worrying about using L() this way or that, I agree with Don: buy a 17BII and use that for the Solver stuff, but continue to use the 17BII+ for everyday stuff. Here's a very nice 17BII for only $25 (shipping included, in US) and you can even find them cheaper if you're willing to wait: https://www.ebay.com/itm/HP-17Bll-Financ...3252533550 The 17BII is bug-free for solver use, while the 17BII+ has a much better LCD, readable in a wider range of lighting and use conditions. In case you haven't seen this yet, here's an example of what can be done with the solver: http://www.hpmuseum.org/forum/thread-2630.html --Bob Prosperi 09-16-2018, 09:59 PM Post: #6 RE: HP 17bII+ Silver solver Well, I'll try to find one, even if I'm not in the US. I also want to continue using my actual 17bII+ at work, as the solver is powerful enough for me (for the moment), and it looks really good. Don and you, Bob, have done a lot to help understand what the solver can do; that's pretty good stuff. Regards, Thibault 09-16-2018, 10:23 PM Post: #7 RE: HP 17bII+ Silver solver Nah, Don and Gerson are the real Pioneer Solver Masters, I just have a good collection of links. Of all the various tools built in to the various calculator models I've explored, the Pioneer Solver is easily the one that most exceeds its initial apparent capability. This awesome tool must have been incredibly well tested by the QA team. The fact that Gerson's Trig formulas, despite their sheer size (and audacity!), still return amazingly accurate results says more about the underlying design and code than any comments I could add. Enjoy exploring it, and when you've mastered it (or at least tamed it a bit), come back here and share some interesting Solver formulas. There are numerous folks here who enjoy entering the formulas and running some test cases. (Well, I suppose "enjoy" is not really the right word for entering the formulas; "feel good about accomplishing it successfully" is more accurate.) When I see some of these long Solver equations, it reminds me of TECO commands back in the PDP-11 days.
--Bob Prosperi 09-16-2018, 11:05 PM Post: #8 RE: HP 17bII+ Silver solver If you feel like entering long formulas you can solve the 8-queens problem. ):0) Cheers Thomas 09-17-2018, 06:30 AM (This post was last modified: 09-18-2018 09:47 AM by Don Shepherd.) Post: #9 RE: HP 17bII+ Silver solver How about a nifty, elegant, simple number base conversion Solver equation for the 17b/17bii, courtesy Thomas Klemm: BC:ANS=N+(FROM-TO)\(\times \Sigma\)(I:0:LOG(N)\(\div\)LOG(TO):1:L(N:IDIV(N:TO))\(\times\)FROM^I) Note: either FROM or TO must be 10 unless you are doing HEX conversions 09-17-2018, 07:49 AM Post: #10 09-17-2018, 09:17 AM (This post was last modified: 09-17-2018 10:28 AM by Don Shepherd.) Post: #11 RE: HP 17bII+ Silver solver (09-17-2018 07:49 AM)Thomas Klemm Wrote: Wow, I didn't realize that Thomas. Stunning. Don 09-17-2018, 10:24 AM Post: #12 RE: HP 17bII+ Silver solver As has already been written, unfortunately the solver in this calculator is flawed. Part of the problem comes from the fact that it evaluates the equation twice, which leads to incrementing by two instead of one, etc. But there seem to be more quirks. This is too bad, as the solver is a very capable tool (with G() and L()), while still easy and versatile to use (with the menu buttons). Accomplishing something similar (solve for one variable today, for another tomorrow) with modern tools like Excel is more complicated. Martin 09-17-2018, 12:48 PM Post: #13 RE: HP 17bII+ Silver solver (09-16-2018 10:23 PM)rprosperi Wrote: Nah, Don and Gerson are the real Pioneer Solver Masters, I just have a good collection of links. An obvious correction is in order here: Don, Gerson and Thomas Klemm are the real Pioneer Solver Masters... No disrespect intended by the omission. A review of past posts of significant solver formulas quickly reveals just how often all three of these guys were the authors. --Bob Prosperi 09-28-2018, 01:33 PM Post: #14 RE: HP 17bII+ Silver solver Rest assured I have great respect for everyone mentioned above. Thanks for the advice; my new old 17bii is now in my hands ($29 on French TAS), and I'm just waiting for the next rainy Sunday to push the solver to its limits (mmh, to MY limits I guess). 09-28-2018, 07:47 PM (This post was last modified: 09-28-2018 07:48 PM by Jlouis.) Post: #15 RE: HP 17bII+ Silver solver Just one doubt here: are the solvers of the 18C and 19B/BII the same as those of the 17BII and 27S? TIA 09-29-2018, 01:37 AM Post: #16 RE: HP 17bII+ Silver solver (09-28-2018 07:47 PM)Jlouis Wrote: Just one doubt here: are the solvers of the 18C and 19B/BII the same as those of the 17BII and 27S? Check out the document by Martin Hepperle in this thread, which lists the functions available for each calculator. The document is an excellent introduction to the Solver application. Solver GET-LET ~Mark Who decides? 09-29-2018, 02:25 AM Post: #17 RE: HP 17bII+ Silver solver (09-29-2018 01:37 AM)mfleming Wrote:(09-28-2018 07:47 PM)Jlouis Wrote: Just one doubt here: are the solvers of the 18C and 19B/BII the same as those of the 17BII and 27S? Thanks mfleming, I should have read the beginning of the manual. It is the same solver for the calculators in my question. I prefer to use the 19bII due to the dedicated alpha keyboard, but the 27s is a pleasure to use too. Cheers 09-29-2018, 11:13 AM Post: #18 RE: HP 17bII+ Silver solver 09-29-2018, 11:53 AM (This post was last modified: 09-29-2018 05:12 PM by Don Shepherd.)
Post: #19 RE: HP 17bII+ Silver solver (09-28-2018 07:47 PM)Jlouis Wrote: Just one doubt here: are the solvers of the 18C and 19B/BII the same as those of the 17BII and 27S? Here is a 3-page cross-reference I created a few years ago of Solver functions present in the 19bii, 27s, and 17bii. solver 1.PDF (Size: 1.08 MB / Downloads: 33) solver 2.PDF (Size: 968.77 KB / Downloads: 21) solver 3.PDF (Size: 494.75 KB / Downloads: 19) 09-29-2018, 01:36 PM (This post was last modified: 09-29-2018 01:49 PM by Jlouis.) Post: #20 RE: HP 17bII+ Silver solver (09-29-2018 11:53 AM)Don Shepherd Wrote: Really, thanks Don for taking the time to make these documents. This is what makes MoHC a fantastic community. Cheers JL Edited: Looks like the 19BII is a little more complete than the 27s and that the 17BII is the weaker one.
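For anyone curious how Thomas Klemm's one-line BC equation quoted earlier in the thread actually works, here is a rough Python transcription (the function name and the step-by-step form are ours). It mirrors the Σ loop with the L(N:IDIV(N:TO)) write-back: the digits of N are read in base FROM, and the result's digits give the same value written in base TO.

```python
import math

def convert(N, FROM, TO):
    """Rough transcription of BC:ANS=N+(FROM-TO)*Sigma(...); N >= 1."""
    ans = N
    for i in range(int(math.log(N) / math.log(TO)) + 1):
        N = N // TO                     # the Solver's L(N : IDIV(N : TO))
        ans += (FROM - TO) * N * FROM ** i
    return ans

print(convert(5, 10, 2))    # 101  (decimal 5 shown as binary digits)
print(convert(101, 2, 10))  # 5    (binary 101 back to decimal)
```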
Why don’t photons interact with the Higgs field and hence remain massless? Massless photon Photons interact with the "Higgs doublet" but they don't interact with the "ordinary" component of the Higgs field whose excitations are the Higgs bosons. The reason is that the Higgs vacuum expectation value is only nonzero for the component of the Higgs field whose total electric charge, $Q=Y+T_3$ where $Y$ is the hypercharge and $T_3$ is the $z$-component of the $SU(2)_w$ weak isospin gauge group, is equal to zero, i.e. for $Y=\pm 1/2$ and $T_3=\mp 1/2$. That's why the coefficient of the $(h+v) A_\mu A^\mu$ term is zero. In other words, the vacuum condensate of the Higgs field that fills the space is charged under the weak charges, including the hypercharge and the weak $SU(2)$ charge, but exactly under the right combination of these charges, the electric charge, the condensate is neutral. It would be "bad" if the vacuum carried a nonzero electric charge. It doesn't. So the $A_\mu A^\mu$ interaction, whose coefficient is proportional to the electric charge of the Higgs field, isn't there. The photon remains massless and the electromagnetic interaction remains a long-range force, dropping as a power law at long distances (instead of the exponential decrease typical of short-range forces: W-bosons and Z-bosons do interact with the Higgs condensate, so they get massive and their forces become short-range). OPERA anomaly The OP's question used to have two parts, but the second part has been deleted. I won't delete this part of the answer, though, because the votes and other things may have already reacted to it. Yes, the anomaly of the OPERA neutrino speed measurement has been resolved. First, ICARUS, using detectors in the very same cave, measured the speed as well and got $v=c$ within the error margin (the same error margin as OPERA's). Second, a few months ago, OPERA found out that they had a loosely connected fiber-optic cable to a computer card. Using some independent data OPERA recorded, it was possible to determine that the cable error (plus another source of error whose mean value is much smaller) shifts the timing by $73\pm 9$ nanoseconds in the right direction (it's the right direction because the cable problem had delayed some older neutrino-free measurements of the time but was fixed once the neutrinos were being measured), so when the error is corrected, the "neutrinos $60\pm 10$ nanoseconds too fast" become "neutrinos coming $13\pm 15$ nanoseconds after light", which is consistent with $v=c$. Note that relativity with very light but massive neutrinos predicts $c-v\sim 10^{-20} c$ for these neutrinos, experimentally indistinguishable from $v=c$. The spokesman of the experiment and the physics coordinators have already resigned; the spokesman resigned first, before another no-confidence vote but after some preparatory votes for it. It seems that they had known the mistake since December 8th, 2011, but they were hiding it for a few months (it was leaked to Science News by someone else in February) and they wanted to keep doing experiments for additional months, even in May 2012, even though the error had been known to eliminate the anomaly for quite some time. They apparently enjoyed the unjustifiable fame.
The massless photon: The zero mass is not due to a special value of the Weinberg angle, the angle which determines the masses of the other three bosons $W^+$, $W^-$ and $Z$. The mass is zero because the vacuum expectation value of the Higgs field doublet is single valued rather than two valued. This means it can in principle always be expressed as $\langle \phi \rangle ~=~ \left(\begin{array}{c} 0 \\ v \end{array}\right)$ It's the 0 here which leaves one of the four bosons massless. Just to show a bit of the math: The gauge transform of the Higgs field is defined with $\beta$ corresponding to an Abelian field and the three $\alpha$ corresponding to non-Abelian fields. $\phi \longrightarrow ~~\exp \,\frac{i}{2}\left\{\beta \left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right) + \alpha^1\left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right) + \alpha^2\left(\begin{array}{cc} 0 & \!-\!i\\ i & 0 \end{array}\right) + \alpha^3\left(\begin{array}{cc} 1 & 0\\ 0 & \!-\!1 \end{array}\right) \right\}~\phi$ The $\beta$ corresponds to the hypercharge Y (see also Luboš Motl's post) and $\alpha^1$, $\alpha^2$ and $\alpha^3$ correspond to the three components of the isospin T. Now the combination $\beta=\alpha^3$ picks out (including the overall factor $\tfrac{1}{2}$ from the exponent) the matrix $\frac{1}{2}\left[\left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right) + \left(\begin{array}{cc} 1 & 0\\ 0 & \!-\!1 \end{array}\right)\right] = \left(\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\right)$ So it is this combination which doesn't interact with the vacuum expectation value, $\left(\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\right)\left(\begin{array}{c} 0 \\ v \end{array}\right)\equiv\left(\begin{array}{c} 0 \\ 0 \end{array}\right)$ and it is this combination which represents the massless photon. Hans. There is an aspect to this question that nobody seems to have addressed, and that is: although the higgs (the 'radial' component of the field) is neutral, and therefore doesn't interact with the photon at 'tree level', we still see the decay $h \rightarrow \gamma \gamma$. This is because, roughly, by quantum effects a higgs will fluctuate into a particle/antiparticle pair (electrons, quarks, etc.) which can then produce photons. So while the higgs does not strictly interact with the photon, at low energies we can parameterize a low-energy effective interaction where the higgs does interact with the photon. This is expressed diagrammatically in Feynman diagrams (not reproduced here) which I have borrowed from http://resonaances.blogspot.com/2012/07/h-day-3-how-to-pump-up-higgs-to-gamma.html. Before saying anything about why photons don't interact with the Higgs field, I would like to emphasize the very meaning of mass. By Einstein's famous equation $E=mc^{2}$, mass and energy are the same thing. This connection between matter and energy was at the heart of the solution to the problem of how and from where particles (gauge bosons, except the photon) get their mass, as proposed in 1964 by Peter Higgs and others who worked with him. Now why and how do we know that photons are massless? The gauge bosons are a group of particles responsible for the fundamental interactions of nature; along with other groups of particles (quarks and leptons), they form the Standard Model of particle physics. Photons are the gauge bosons responsible for electromagnetic interactions. In particular, in order for the theory of electromagnetism to hold together in a mathematically self-consistent way, the photon has to be exactly massless.
If it did have a mass, then calculations within the theory of a massive photon would give different answers for the same quantity depending on how it was computed (inconsistent results). Now, mathematically, the other 3 gauge bosons (say the W and Z bosons, which are responsible for the weak nuclear interaction) should also have zero mass. But experimentally they are about a hundred times heavier than a proton. So in order to solve this mathematical inconsistency, a radical new idea was introduced: space everywhere in the universe is filled with a quantum field, called the Higgs field. The Answer Some particles travel through this field without noticing it's there. For example, the photon simply doesn't interact in any way with this field; thus it doesn't have any mass. Other particles interact strongly with the Higgs field; this slows them down, and the slowing due to these interactions is felt as mass. Now why doesn't the photon interact at all? Well, simply speaking, it's just a property of the photon, similar to other particles like electrons or protons having charge as one of their properties. In order to describe this property we can use mathematics, as other answers did. Just as the electromagnetic field is made of photons, the Higgs field is made up of smaller bits called Higgs bosons. PS: Since a very simple but detailed answer (an answer for the layman) was needed, I didn't use any mathematics or very deep physics. First of all, photons are observed to be massless, and the $W^\pm$ and $Z$ are observed to have mass. So we have to build a model that agrees with this. Mathematically (at a non-rigorous but intuitive level), the $SU(2)_W\times U(1)_Y$ electroweak gauge symmetry is a 4-dimensional Lie group ($=U(2)$). Given the experimental facts, we want to find a representation under which the orbit of the non-zero vev under this Lie group action is 3-dimensional, so that the 3-dimensional orbit contributes the longitudinal components of $W^\pm, Z$ (3 of them); since the original Lie group is 4-dimensional, there is 1 dimension that acts trivially on the vev: that is the photon. What representation satisfies this? The simplest choice is 2-component complex vectors (i.e. a doublet) transforming under $U(2)$ matrices. The orbit of $U(2)$ acting on a non-zero complex doublet is $S^3$, leaving one dimension of the $U(2)$ acting trivially. (This can be attributed to the fact that knowing two-component vectors $v, u$ with $Uv = u$, you cannot uniquely determine $U$.) On the other hand, a complex doublet takes values in an $\mathbb{R}^4$; modding this $\mathbb{R}^4$ out by the $S^3$ orbits leaves one dimension, which is the higgs. So the rough idea is, given the symmetry group and the representation: $$dim(\mbox{symmetry group})-dim(\mbox{orbit of action})=dim(\mbox{residual symmetry})$$ $$dim(\mbox{rep space})-dim(\mbox{orbit of action})=dim(\mbox{physical degrees of freedom})$$ (The dimension of the orbit of the action is, for a global symmetry, the number of massless Goldstone bosons; for a gauge symmetry, the longitudinal components of the massive gauge fields. If the symmetry is global, "physical degrees of freedom" is replaced by "massive particles".)
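As a quick numerical illustration of the matrix algebra in Hans's answer above, one can check that the $\beta=\alpha^3$ generator combination annihilates the vev while the orthogonal combination does not. This is a minimal sketch; the value of $v$ is a placeholder, not a physical input:

```python
import numpy as np

v = 1.0                                    # placeholder vev magnitude
vev = np.array([0.0, v])                   # <phi> = (0, v)

I2 = np.eye(2)
s3 = np.array([[1.0, 0.0], [0.0, -1.0]])   # the alpha^3 (third Pauli) matrix

photon_dir = 0.5 * (I2 + s3)               # beta = +alpha^3 combination
z_dir      = 0.5 * (I2 - s3)               # the orthogonal combination

print(photon_dir @ vev)                    # [0. 0.] -> annihilates the vev: massless
print(z_dir @ vev)                         # [0. 1.] -> does not: gets a mass
```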
Answer The function will have a minimum, and the value of the minimum will be $y=-2$. Work Step by Step The value of $A$ is positive, so the parabola opens upwards. Therefore, the function will have a minimum. For a quadratic in the form $Ax^{2}+Bx+C$, the $x$-coordinate of the minimum is $-\frac{B}{2A}$. Here $A=\frac{3}{2}$, $B=6$, and $C=4$, so the $x$-coordinate is $-6\div\left(2\times\frac{3}{2}\right)=-6\div 3=-2$. Substituting back, the value of the minimum is $y=\frac{3}{2}(-2)^{2}+6(-2)+4=6-12+4=-2$.
This situation poses a simple mathematical problem, while the physical situation occurs in cases where a specific flow rate is required with a given pressure ratio (range) (this problem was considered by some to be somewhat complicated). The specific flow rate can be converted to an entrance Mach number, and this simplifies the problem. Thus, the problem is reduced to finding, for a given entrance Mach number \(M_1\) and a given pressure ratio, the flow parameters, like the exit Mach number \(M_2\). The procedure is based on the fact that the entrance star pressure ratio can be calculated using \(M_1\). Thus, using the pressure ratio to calculate the star exit pressure ratio provides the exit Mach number \(M_2\). An example of such an issue is the following example, which also combines the "Naughty professor" problems.

Example 11.21
Calculate the exit Mach number for \(P_2/P_1 = 0.4\) and entrance Mach number \(M_1 = 0.25\).

Solution 11.21
The star pressure ratio can be obtained from a table or from Potto-GDC as

Fanno Flow (input: \(M_1\), k = 1.4)
\(M_1\) | \(\dfrac{4\,f\,L}{D}\) | \(\dfrac{P}{P^{\star}}\) | \(\dfrac{P_0}{P_0^{\star}}\) | \(\dfrac{\rho}{\rho^{\star}}\) | \(\dfrac{U}{U^{\star}}\) | \(\dfrac{T}{T^{\star}}\)
0.2500 | 8.4834 | 4.3546 | 2.4027 | 3.6742 | 0.27217 | 1.1852

And the star pressure ratio at the exit can be calculated as
\begin{align*} {P_2 \over P^{*} } = {{P_2 \over P_1 } {P_1 \over P^{*} } } = 0.4 \times 4.3546 = 1.74184 \end{align*}
And the corresponding exit Mach number for this pressure ratio reads

Fanno Flow (input: \(\dfrac{P}{P^{\star}}\), k = 1.4)
\(M\) | \(\dfrac{4\,f\,L}{D}\) | \(\dfrac{P}{P^{\star}}\) | \(\dfrac{P_0}{P_0^{\star}}\) | \(\dfrac{\rho}{\rho^{\star}}\) | \(\dfrac{U}{U^{\star}}\) | \(\dfrac{T}{T^{\star}}\)
0.60694 | 0.46408 | 1.7418 | 1.1801 | 1.5585 | 0.64165 | 1.1177

To show off a bit, Potto-GDC can carry out these calculations in one click:

Fanno Flow (input: \(M_1\) and \(\dfrac{P_2}{P_1}\), k = 1.4)
\(M_1\) | \(M_2\) | \(\dfrac{4\,f\,L}{D}\) | \(\dfrac{P_2}{P_1}\)
0.250 | 0.60693 | 8.0193 | 0.400

As can be seen from Figure 11.38, the dominating parameter is \(\dfrac{4\,f\,L}{D}\). The results are very similar to those for isothermal flow; the only difference is in the small dimensionless friction, \(\dfrac{4\,f\,L}{D}\).

Contributors: Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later, or the Potto license.
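The table lookups in this example can be reproduced numerically. A minimal Python sketch (the function names are ours) uses the standard Fanno star-pressure relation \(P/P^{\star} = \frac{1}{M}\sqrt{\frac{k+1}{2 + (k-1)M^2}}\) and inverts it on the subsonic branch by bisection:

```python
import math

def p_over_pstar(M, k=1.4):
    """Fanno-flow static pressure ratio P/P* at Mach number M."""
    return (1.0 / M) * math.sqrt((k + 1.0) / (2.0 + (k - 1.0) * M * M))

def mach_from_p_ratio(target, k=1.4, lo=1e-6, hi=1.0):
    """Invert P/P* on the subsonic branch, where it decreases with M."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if p_over_pstar(mid, k) > target:
            lo = mid          # ratio too large -> M too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

M1 = 0.25
p1_star = p_over_pstar(M1)       # 4.3546, matching the first table
p2_star = 0.4 * p1_star          # P2/P* = (P2/P1)(P1/P*) = 1.7418
M2 = mach_from_p_ratio(p2_star)  # 0.60694, matching the second table
print(p1_star, p2_star, M2)
```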
https://doi.org/10.1351/goldbook.B00746 The term applies to either of the equations: \[\frac{k_{\text{HA}}}{p} = G\left ( \frac{q\ K_{\text{HA}}}{p} \right )^{\alpha}\] \[\frac{k_{\text{A}}}{q} = G\left ( \frac{q\ K_{\text{HA}}}{p} \right )^{-\beta} \] (or their logarithmic forms) where \(\alpha\), \(\beta\) and \(G\) are constants for a given reaction series (\(\alpha\) and \(\beta\) are called 'Brønsted exponents'), \(k_{\text{HA}}\) and \(k_{\text{A}}\) are catalytic coefficients (or rate coefficients) of reactions whose rates depend on the concentrations of HA and/or of A −. \(K_{\text{HA}}\) is the acid dissociation constant of the acid HA, \(p\) is the number of equivalent acidic protons in the acid HA, and \(q\) is the number of equivalent basic sites in its conjugate base A −. The chosen values of \(p\) and \(q\) should always be specified. (The charge designations of H and A are only illustrative.) The Brønsted relation is often termed the 'Brønsted catalysis law' (or the 'catalysis law'). Although justifiable on historical grounds, this name is not recommended, since Brønsted relations are known to apply to many uncatalysed and pseudo-catalysed reactions (such as simple proton transfers). The term 'pseudo-Brønsted relation' is sometimes used for reactions which involve nucleophilic catalysis instead of acid–base catalysis. Various types of Brønsted parameters have been proposed, such as \(\beta_{\text{lg}}\), \(\beta_{\text{nuc}}\), \(\beta_{\text{eq}}\) for leaving group, nucleophile and equilibrium constants, respectively. See also: linear free-energy relation
No, I don't think auto-regulation explains much about the population sizes of predators. Group selection may explain such auto-regulation, but I don't think it is of any considerable importance for this discussion. The short answer is, as @shigeta said, [predators] tend to starve to death as they are too many! To get a better sense of what @shigeta said, you'll want to look at various models of predator-prey (or consumer-resource) interactions. For example, the famous Lotka-Volterra equations describe the population dynamics of two co-existing species where one is the prey and the other is the predator. Let's first define some variables: $x$ : number of prey $y$ : number of predators $t$ : time $\alpha$, $\beta$, $\xi$ and $\gamma$ are parameters describing how each species influences the population size of the other. The Lotka-Volterra equations are: $$\frac{dx}{dt} = x(\alpha - \beta y)$$$$\frac{dy}{dt} = -y(\gamma - \xi x)$$ You can show that for some parameters the matrix for this system has complex eigenvalues, meaning that the long-term behavior of the system is cyclic (periodic). If you simulate such a system, you'll see that the population sizes of the two species fluctuate periodically (in the figure, not reproduced here, the blue line represents the predators and the red line the prey). Representing the same data in phase space, with the population sizes of the two species on the $x$ and $y$ axes, you get a closed loop, with arrows showing the direction in which the system moves. If the population size of the predators ($y$) reaches 0 (extinction), then $\frac{dx}{dt} = x(\alpha - \beta y)$ becomes $\frac{dx}{dt} = x\alpha$ (whose general solution is $x_t = e^{\alpha t}x_0$), and therefore the population of prey will grow exponentially. If the population size of the prey ($x$) reaches 0 (extinction), then $\frac{dy}{dt} = -y(\gamma - \xi x)$ becomes $\frac{dy}{dt} = -y\gamma$, and therefore the population of predators will decrease exponentially. Following this model, your question is actually: why are the parameters $\alpha$, $\beta$, $\xi$ and $\gamma$ not "set" in a way that predators cause the extinction of the prey (and therefore their own extinction)? One might equivalently ask the opposite question: why don't prey evolve to escape predators so completely that the population of predators crashes? As shown, you don't need a complex model to allow the co-existence of predators and prey. You could describe your model a bit more accurately in another post and ask why, in your model, the prey always go extinct. But there are tons of possibilities for making your model more realistic, such as adding spatial heterogeneity (places to hide, for example, as suggested by @AudriusMeškauskas). One can also consider other trophic levels, stochastic effects, selection pressure varying through time (and other types of balancing selection), age-, sex- or health-specific mortality rates due to predation (e.g. predators may preferentially target young or diseased individuals), several competing species, etc. I would also like to talk about other things that might be of interest in your model (two of them require your model to allow evolutionary processes): 1) Lineage selection: predators that eat too much end up disappearing because they cause their prey to go extinct. This hypothesis has nothing to do with some kind of auto-regulation for the good of the species. Of course, you'd need several species of predators and prey in your model.
This kind of hypothesis is usually considered very unlikely to have any explanatory power. 2) The life-dinner principle: while the wolf runs for its dinner, the rabbit runs for its life. There is therefore higher selection pressure on the rabbits, which causes rabbits to run, on average, slightly faster than wolves. This evolutionary process protects the rabbits from extinction. 3) You may consider..
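Here is the simulation sketch mentioned above: a minimal Python integration of the Lotka-Volterra equations. The parameter values and initial population sizes are arbitrary choices for illustration.

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

alpha, beta, gamma, xi = 1.0, 0.1, 1.5, 0.075   # illustrative parameter values

def lotka_volterra(state, t):
    x, y = state                         # x: prey, y: predators
    return [x * (alpha - beta * y),      # dx/dt
            -y * (gamma - xi * x)]       # dy/dt

t = np.linspace(0, 50, 2000)
sol = odeint(lotka_volterra, [10.0, 5.0], t)    # initial sizes x0 = 10, y0 = 5

plt.plot(t, sol[:, 0], 'r', label='prey')       # red: prey
plt.plot(t, sol[:, 1], 'b', label='predators')  # blue: predators
plt.xlabel('t'); plt.legend(); plt.show()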
Here is Prob. 25 (a), Chap. 4 in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition: If $A \subset \mathbb{R}^k$ and $B \subset \mathbb{R}^k$, define $A + B$ to be the set of all sums $x+y$ with $x \in A$, $y \in B$. (a) If $K$ is compact and $C$ is closed in $\mathbb{R}^k$, prove that $K+C$ is closed. Hint: Take $z \not\in K+C$, put $F = z-C$, the set of all $z-y$ with $y \in C$. Then $K$ and $F$ are disjoint. Choose $\delta$ as in Exercise 21. Show that the open ball with center $z$ and radius $\delta$ does not intersect $K+C$. Now here is Prob. 21, Chap. 4 in Baby Rudin, 3rd edition: Suppose $K$ and $F$ are disjoint sets in a metric space $X$, $K$ is compact, $F$ is closed. Prove that there exists $\delta > 0$ such that $d(p, q) > \delta$ if $p \in K$, $q \in F$. Hint: $\rho_F$ is a continuous positive function on $K$. Show that the conclusion may fail for two disjoint closed sets if neither is compact. My effort: We show that the set $\mathbb{R}^k \setminus (K+C)$ is open. Let $z$ be any point of $\mathbb{R}^k \setminus (K+C)$, and let the set $F$ be defined as $$F \colon= z-C = \left\{ \ z-y \ \colon \ y \in C \ \right\}.$$ The map $x \mapsto z-x$ of $\mathbb{R}^k$ into $\mathbb{R}^k$ is continuous, and under this map the inverse image of the closed set $C$ equals $F$, which implies that $F$ is also closed in $\mathbb{R}^k$. Now we show that the sets $K$ and $F$ are disjoint. Let $p$ be a point of $K \cap F$. Then as $p \in F$, we have $p = z-y$ for some $y \in C$, and so $$z= p+y \in K+C, \ \mbox{ because } \ p \in K \ \mbox{ and } \ y \in C.$$ But $z \not\in K+C$. So $K$ and $F$ are disjoint. Thus $K$ and $F$ are two disjoint sets in $\mathbb{R}^k$ such that $K$ is compact and $F$ is closed. So we can find a positive real number $\delta$ such that $$d(p, q) = \Vert p-q \Vert > \delta \ \mbox{ for every point $p \in K$ and for every point $q \in F$}. \ \tag{1} $$ Now let $p \in \mathbb{R}^k$ be such that $\Vert p-z \Vert < \delta$. If $p \in K+C$, then we note that $$p = x + y \ \mbox{ for some } \ x \in K \ \mbox{ and some } \ y \in C.$$ Then $$p-z = (x+y) - z = x - (z-y).$$ Now as $x \in K$ and $z-y \in F$, we must have $$\Vert x-(z-y) \Vert > \delta$$ by (1) above, which is the same as $$\Vert p-z \Vert > \delta,$$ and this contradicts the choice of $p$. Thus we can conclude that if $p \in \mathbb{R}^k$ and $\Vert p-z\Vert < \delta$, then $p$ cannot belong to $K+C$, which implies that $z$ is an interior point of $\mathbb{R}^k \setminus (K+C)$. But $z$ was an arbitrary point of $\mathbb{R}^k \setminus (K+C)$. So $\mathbb{R}^k \setminus (K+C)$ is open, and therefore $K+C$ is closed. Is this proof correct? If so, is my presentation clear enough too? If not, where do the problems lie?
I am trying to get a grasp of hard-margin SVMs. In the lecture I am watching, the professor talks about a classification equation which, when a positive sample is input, returns a value of $1$ or more, and when a negative sample is input, returns a value of $-1$ or less. The graph below shows the vector $\overrightarrow{w}$, which is perpendicular to the separating hyperplane, and an arbitrary point with unknown class $\overrightarrow{u}$. In the lecture it says that: $$\overrightarrow{w} \cdot \overrightarrow{u_+} + b \geq 1 \textrm{ , for positive class samples}$$ $$\overrightarrow{w} \cdot \overrightarrow{u_-} + b \leq -1 \textrm{ , for negative class samples} $$ To me it seems that we should take the projection of $\overrightarrow{u}$ on $\frac{\overrightarrow{w}}{|\overrightarrow{w}|}$, since this would give the component of $\overrightarrow{u}$ in the direction of $\overrightarrow{w}$. If this component is greater than the distance $b$ to the decision boundary/hyperplane, then $\overrightarrow{u}$ is a positive sample; if it is less, then it is a negative sample. In math terms: $$\frac{\overrightarrow{w}}{|\overrightarrow{w}|} \cdot \overrightarrow{u_+} - b > 0$$ $$\frac{\overrightarrow{w}}{|\overrightarrow{w}|} \cdot \overrightarrow{u_-} - b < 0$$ If $\overrightarrow{u}$ lies on the decision boundary, then the above expressions will be equal to 0. If $\overrightarrow{u}$ lies on a support-vector hyperplane, then the above expressions will be equal to $m$ or $-m$ for positive and negative samples respectively, where $m$ is the margin from the decision boundary to any support vector. I don't understand the lecturer's equations. Why is the value for positive sample classification $\geq 1$? Why is the value for negative sample classification $\leq -1$? Furthermore, what does $b$ represent in the lecturer's equations? Basically, I have trouble understanding the proposed equations, but they should be doing the same thing as mine.
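For what it's worth, the $\pm 1$ in such equations is usually just a normalization: any separating $(\overrightarrow{w}, b)$ can be rescaled so that the samples closest to the hyperplane score exactly $\pm 1$, without moving the hyperplane itself. A small numpy sketch, where the data and the separating hyperplane are made up for illustration:

import numpy as np

# Hypothetical linearly separable 2-D samples.
X_pos = np.array([[2.0, 1.0], [1.5, 2.0]])      # positive class
X_neg = np.array([[-1.0, -2.0], [-2.0, -0.5]])  # negative class

w, b = np.array([1.0, 1.0]), 0.0    # some separating hyperplane w.x + b = 0

# Raw score of the sample closest to the hyperplane (the functional margin).
scale = min(np.abs(X_pos @ w + b).min(), np.abs(X_neg @ w + b).min())

# Dividing w and b by that score leaves the hyperplane unchanged but makes
# the closest samples score exactly +1 / -1.
w, b = w / scale, b / scale
print(X_pos @ w + b)                # all >= +1
print(X_neg @ w + b)                # all <= -1
print(1 / np.linalg.norm(w))        # geometric margin to the support vectors

Note that in this convention $b$ is an (unnormalized) offset of the hyperplane from the origin, not a distance; the distance from the origin is $|b|/|\overrightarrow{w}|$.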
If economic growth is indeed highly desirable (see this question), why must this growth be exponential? With finite resources, exponential growth might hit limits rapidly (or be impossible?). Why not express growth in linear rather than exponential terms? Growth as is meant here "must" be nothing in particular. It is a specific metric, the percentage change in yearly GNP/GDP, and it is what it is. In Blanchard and Fischer's "Lectures on Macroeconomics", in the introductory chapter 1, page 2, Figure 1.1, the logarithm of USA GNP 1874-1986 is graphed, and it is impressively linear, bar a disturbance around World War II (a dive before it that was roughly equally compensated immediately after). But this means that $$\ln Y \approx at \Rightarrow Y \approx e^{at}$$ (for the US economy, $a \approx 0.030\;\; \text{to} \;\;0.037$ for the period). It is the data that told us that "growth was exponential" during this period. (Note that "exponential growth" usually includes the concept of a constant growth rate, while in informal language "exponential" may also refer to exploding paths, paths with an increasing growth rate.) And so economic models were deemed relevant if they could replicate the observed data to a respectable degree. The question "can this go on forever?" is an altogether different issue, starting with the meaning of the word "forever". Because linear functions don't match the data. You can't express a series such as $$[1,2,4,8,16]$$ as a linear function $$f(t)=at+b$$ for any choice of $a$ and $b$. We use today's capital stock to produce tomorrow's output, some fraction of which is invested, so you should expect something like $dK/dt=\alpha f(K)$ where $f$ is increasing in $K$. Growth makes most sense as a percentage. Looking at absolute numbers does have value, but percentage growth allows for some pretty good comparisons. You seem to think exponential growth means infinite growth. It is a pretty logical assumption to make, but I believe it takes these models and uses them in a way they were not meant to be used. Economists seldom care about making predictions 200 years in the future. Exponential growth is quite bad at forecasting that far ahead in anything; in shorter time scales it isn't too bad (source needed). I'll try and make it clearer: Consider a basic model of GDP growth. Suppose GDP is growing at 1% per year ($r=1.01$) and initially is at \$1,000,000. Let $Y_t$ denote the GDP $t$ years after the initial value of $Y_0 = \$1,000,000$. If one asks what the GDP will be in 50 years, there are two options. At 1% per year growth, the dynamic equation would be \begin{gather*} Y_{t+1} - Y_t = 0.01 \, Y_t \end{gather*} and the corresponding iteration equation is \begin{gather*} Y_{t+1} = 1.01 \, Y_t \end{gather*} Starting with the initial condition, $Y_0 = 1,000,000$, we could calculate $Y_1 = 1.01 \times 1,000,000 = 1,010,000$, $Y_2 = 1.01 \times 1,010,000 = 1,020,100$, and so on for 50 iterations. This is equivalent to: \begin{gather*} Y_t = 1.01^t \left( 1,000,000 \right) \end{gather*} so that we immediately have a formula for the GDP after 50 years: \begin{gather*} Y_{50} = 1.01^{50} \left( 1,000,000 \right) \approx 1,644,632. \end{gather*} A point I am trying to make here is that exponential growth is really just the size of something as a function of itself in a different state or time frame. If you want exponential growth over a longer timeframe, it makes sense to extend the model. What if $r$ were endogenous to the model? As $Y$ gets larger, $r$ gets smaller.
Still growing exponentially, and the size of the economy in $t+1$ is still dependent on the size of the economy in $t$.
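To make the dependence on the previous period concrete, here is the 50-year computation from the answer above, done both ways in a tiny Python sketch (the \$1,000,000 starting value and 1% rate are the example's):

# Iterative form: Y_{t+1} = 1.01 * Y_t, starting at Y_0 = 1,000,000.
Y = 1_000_000.0
for _ in range(50):
    Y = 1.01 * Y
print(round(Y))                      # 1644632

# Closed form: Y_t = 1.01**t * Y_0 gives the same figure directly.
print(round(1.01**50 * 1_000_000))   # 1644632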
The simplest way to do so (for Pearson correlation) is to use Fisher's z-transformation. Let $r$ be the correlation in question and $n$ the sample size used to obtain it. $\tanh$ is the hyperbolic tangent; $\operatorname{atanh}$ or $\tanh^{-1}$ is the inverse hyperbolic tangent. Let $z = \operatorname{atanh}(r)$; then $z$ is approximately normally distributed with variance $\frac{1}{n-3}$. Using this, you can construct a confidence interval for the population correlation $\rho$: $ C.I.(\rho) = \tanh\left(\tanh^{-1}(r) \pm q \cdot \frac{1}{\sqrt{n-3}}\right) $, where $q$ is the value that describes the level of confidence you want (i.e., the value you would read from a normal distribution table, e.g., 1.96 for 95% confidence). If zero is in the confidence interval, then you fail to reject the null hypothesis that the correlation is zero. Also, note that you cannot use this for correlations of $\pm 1$: if the correlation is exactly one for data that is truly continuous, then you only need 3 data points to determine that. For one-sided tests, simply use the z-score you'd use for a one-sided p-value, transform back, and see whether your null value is within the range of that interval. Edit: You can use a one-sided test with the same values. Also, the interval is for the theoretical value $\rho$ (though it is computed from the sample value $r$), since that's the more appropriate use of confidence intervals. Source: http://en.wikipedia.org/wiki/Fisher_transformation
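A minimal Python version of this recipe (the function name is mine; scipy is used only for the normal quantile):

import numpy as np
from scipy.stats import norm

def fisher_ci(r, n, conf=0.95):
    """CI for a population correlation rho from a sample correlation r."""
    z = np.arctanh(r)                  # Fisher's z = atanh(r)
    q = norm.ppf(1 - (1 - conf) / 2)   # e.g. 1.96 for 95% confidence
    half = q / np.sqrt(n - 3)          # z has variance 1/(n-3)
    return np.tanh(z - half), np.tanh(z + half)

print(fisher_ci(r=0.30, n=50))   # if 0 lies inside, fail to reject rho = 0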
Strongly compact cardinal

The strongly compact cardinals have their origins in the generalization of the compactness theorem of first-order logic to infinitary languages, for an uncountable cardinal $\kappa$ is strongly compact if the infinitary logic $L_{\kappa,\kappa}$ exhibits the $\kappa$-compactness property. It turns out that this model-theoretic concept admits fruitful embedding characterizations, which, as with so many large cardinal notions, have become the focus of study. Strong compactness rarefies into a hierarchy, and a cardinal $\kappa$ is strongly compact if and only if it is $\theta$-strongly compact for every ordinal $\theta\geq\kappa$. The strongly compact embedding characterizations are closely related to those of supercompact cardinals, which are characterized by elementary embeddings with a high degree of closure: $\kappa$ is $\theta$-supercompact if and only if there is an embedding $j:V\to M$ with critical point $\kappa$ such that $\theta<j(\kappa)$ and every subset of $M$ of size $\theta$ is an element of $M$. By weakening this closure requirement to insist only that $M$ contains a small cover for any subset of size $\theta$, or even just a small cover of the set $j''\theta$ itself, we arrive at the $\theta$-strongly compact cardinals. It follows that every $\theta$-supercompact cardinal is $\theta$-strongly compact, and so every supercompact cardinal is strongly compact. Furthermore, since every ultrapower embedding $j:V\to M$ with critical point $\kappa$ has $M^\kappa\subset M$, for $\theta$-strong compactness we may restrict our attention to the case $\kappa\leq\theta$.

Contents
1 Diverse characterizations
1.1 Strong compactness characterization
1.2 Strong compactness embedding characterization
1.3 Cover property characterization
1.4 Fine measure characterization
1.5 Filter extension characterization
1.6 Discontinuous ultrapower characterization
1.7 Discontinuous embedding characterization
1.8 Ketonen characterization
1.9 Regular ultrafilter characterization
2 Strongly compact cardinals and forcing
3 Relation to other large cardinal notions
4 Topological Relevance
5 References

Diverse characterizations

There are diverse equivalent characterizations of the strongly compact cardinals.

Strong compactness characterization

An uncountable cardinal $\kappa$ is strongly compact if every $\kappa$-satisfiable theory in the infinitary logic $L_{\kappa,\kappa}$ is satisfiable. The signature of an $L_{\kappa,\kappa}$ language consists, just as in the first-order context, of a set of finitary function, relation and constant symbols. The $L_{\kappa,\kappa}$ formulas, however, are built up in an infinitary process, by closing under infinitary conjunctions $\wedge_{\alpha<\delta}\varphi_\alpha$ and disjunctions $\vee_{\alpha<\delta}\varphi_\alpha$ of any size $\delta<\kappa$, as well as infinitary quantification $\exists\vec x$ and $\forall\vec x$ over blocks of variables $\vec x=\langle x_\alpha\mid\alpha<\delta\rangle$ of size less than $\kappa$. A theory in such a language is satisfiable if it has a model under the natural semantics. A theory is $\kappa$-satisfiable if every subtheory consisting of fewer than $\kappa$ many of its sentences is satisfiable. First-order logic is precisely $L_{\omega,\omega}$, and the classical compactness theorem asserts that every $\omega$-satisfiable $L_{\omega,\omega}$ theory is satisfiable.
Similarly, an uncountable cardinal $\kappa$ is defined to be strongly compact if every $\kappa$-satisfiable $L_{\kappa,\kappa}$ theory is satisfiable (and we call this the $\kappa$-compactness property). The cardinal $\kappa$ is weakly compact, in contrast, if every $\kappa$-satisfiable $L_{\kappa,\kappa}$ theory, in a language having at most $\kappa$ many constant, function and relation symbols, is satisfiable.

Strong compactness embedding characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if there is an elementary embedding $j:V\to M$ of the set-theoretic universe $V$ into a transitive class $M$ with critical point $\kappa$, such that $j''\theta\subset s\in M$ for some set $s\in M$ with $|s|^M\lt j(\kappa)$. [1]

Cover property characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if there is an ultrapower embedding $j:V\to M$, with critical point $\kappa$, that exhibits the $\theta$-strong compactness cover property, meaning that for every $t\subset M$ of size $\theta$ there is $s\in M$ with $t\subset s$ and $|s|^M<j(\kappa)$.

Fine measure characterization

An uncountable cardinal $\kappa$ is $\theta$-strongly compact if and only if there is a fine measure on $\mathcal{P}_\kappa(\theta)$. The notation $\mathcal{P}_\kappa(\theta)$ means $\{\sigma\subset\theta\mid |\sigma|<\kappa\}$. [1]

Filter extension characterization

An uncountable cardinal $\kappa$ is $\theta$-strongly compact if and only if every $\kappa$-complete filter of size at most $\theta$ on a set extends to a $\kappa$-complete ultrafilter on that set. [1]

Discontinuous ultrapower characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if there is an ultrapower embedding $j:V\to M$ with critical point $\kappa$, such that $\sup j''\lambda<j(\lambda)$ for every regular $\lambda$ with $\kappa\leq\lambda\leq\theta^{\lt\kappa}$. In other words, the embedding is discontinuous at all such $\lambda$.

Discontinuous embedding characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if for every regular $\lambda$ with $\kappa\leq\lambda\leq\theta^{\lt\kappa}$, there is an embedding $j:V\to M$ with critical point $\kappa$ and $\sup j''\lambda<j(\lambda)$.

Ketonen characterization

An uncountable regular cardinal $\kappa$ is $\theta$-strongly compact if and only if there is a $\kappa$-complete uniform ultrafilter on every regular $\lambda$ with $\kappa\leq\lambda\leq\theta^{\lt\kappa}$. An ultrafilter $\mu$ on a cardinal $\lambda$ is uniform if all final segments $[\beta,\lambda)= \{\alpha<\lambda\mid \beta\leq\alpha\}$ are in $\mu$. When $\lambda$ is regular, this is equivalent to requiring that all elements of $\mu$ have the same cardinality.

Regular ultrafilter characterization

An uncountable cardinal $\kappa$ is $\theta$-strongly compact if and only if there is a $(\kappa,\theta)$-regular ultrafilter on some set. An ultrafilter $\mu$ is $(\kappa,\theta)$-regular if it is $\kappa$-complete and there is a family $\{X_\alpha\mid\alpha<\theta\}\subset \mu$ such that $\bigcap_{\alpha\in I}X_\alpha=\emptyset$ for any $I$ with $|I|=\kappa$.

Strongly compact cardinals and forcing

If there are proper-class-many strongly compact cardinals, then there is a generic model of $\text{ZF}$ + "all uncountable cardinals are singular".
If each strongly compact cardinal is a limit of measurable cardinals, and if the limit of any sequence of strongly compact cardinals is singular, then there is a forcing extension $V[G]$ that is a symmetric model of $\text{ZF}$ + "all uncountable cardinals are singular" + "every uncountable cardinal is both almost Ramsey and a Rowbottom cardinal carrying a Rowbottom filter". This also follows directly from the existence of a proper class of supercompact cardinals, as every supercompact cardinal is simultaneously strongly compact and a limit of measurable cardinals.

Relation to other large cardinal notions

Strongly compact cardinals are measurable. The least strongly compact cardinal can be equal to the least measurable cardinal, or to the least supercompact cardinal, by results of Magidor. [2] (It cannot be equal to both at once because the least measurable cardinal cannot be supercompact.) Even though strongly compact cardinals imply the consistency of the negation of the singular cardinal hypothesis (SCH), for any singular strong limit cardinal $\kappa$ above the least strongly compact cardinal, $2^\kappa=\kappa^+$ (also known as "SCH holds above strong compactness"). [2] If there is a strongly compact cardinal $\kappa$, then for all $\lambda\geq\kappa$ and $A\subseteq\lambda$, $\lambda^+$ is ineffable in $L[A]$. It is not currently known whether the existence of a strongly compact cardinal is equiconsistent with the existence of a supercompact cardinal. The ultrapower axiom gives a positive answer to this, but it is itself not known to be consistent with the existence of a supercompact cardinal in the first place. Every strongly compact cardinal is strongly tall, although the existence of a strongly compact cardinal is equiconsistent with "the least measurable cardinal is the least strongly compact cardinal, and therefore the least strongly tall cardinal", so it could be the case that the least of the measurable, tall, strongly tall, and strongly compact cardinals all line up.

Topological Relevance

Strongly compact cardinals are related to the topological notion of compactness, interestingly enough.

Specific Relations

If $\kappa$ is uncountable, then the following are equivalent (under the axiom of choice):
$\kappa$ is strongly compact.
If $X$ is a space in which every $\kappa$-complete ultrafilter converges, then $X$ is $\kappa$-compact.
The product of any collection of $\kappa$-compact spaces is itself $\kappa$-compact.

Intuition

A topological space $X$ is called $\kappa$-compact when every open cover has a subcover of size below $\kappa$. More intuitively, it "looks" as though it has size below $\kappa$. For example, the $\aleph_0$-compact subspaces of the real number line are just the subspaces which are closed and bounded. A shape with finite area could be considered $\aleph_0$-compact, even though its number of points is not only infinite but continuum-sized. The product of a collection of spaces is a little difficult to describe intuitively. However, it notably increases the number of "dimensions" of a space. For example, the product of $n$ copies of the real number line is just $n$-dimensional Euclidean space (the line, the plane, etc.). Also, the general intuition is that it doesn't make spaces any bigger than the biggest one in the collection, so the product of a bunch of small spaces and a big space should be no "bigger" than the big space. The idea is that the product of $\kappa$-compact spaces should itself be $\kappa$-compact, since the product doesn't make spaces any "bigger."
However, there are examples of two $\aleph_1$-compact spaces (they "look countably infinite") whose product is not $\aleph_1$-compact (it "looks uncountable"). However, if $\kappa$ is strongly compact, then this intuition holds: the product of any collection of $\kappa$-compact spaces is itself $\kappa$-compact. One can perhaps see why strongly compact cardinals are so big, then; they imply that combining a bunch of small-relative-to-$\kappa$ spaces together, by adding arbitrarily many dimensions, keeps the space looking small relative to $\kappa$. Tychonoff's theorem is precisely the statement that the product of $\aleph_0$-compact spaces is $\aleph_0$-compact; that is, if you combine a bunch of finite-looking spaces together and keep adding more and more dimensions, you get a space which is finite-looking. (Sources to be added)

References

Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition)
Jech, Thomas J. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer-Verlag, Berlin, 2003.
The slick answer is: since each of $f$ and $g$ is clearly injective, apply the Cantor-Bernstein theorem to $f$ and $g$. This gives a bijection $(0,1)\to[0,1)$. It may be implicit in the question that you're supposed to "crank the handle" on the proof of Cantor-Bernstein in order to find an explicit description of the particular bijection it produces. In that case you'd start by finding the ranges (not merely the domains) of $f$ and $g$, namely $(0,1)$ and $[\frac12,1)$. What happens next depends on the details of the proof of Cantor-Bernstein you're working with, but in one that is easiest to apply here, we look for the part of $[0,1)$ that is not in the range of $f$, which is the singleton $\{0\}$. If we iterate $f\circ g$ on this, we get $A=\{1-2^{-n}\mid n\in \mathbb N\}$. Then the bijection $H:[0,1)\to(0,1)$ is$$ H(x) = \begin{cases} g(x) & x\in A \\ f^{-1}(x) & x\notin A \end{cases} $$Now unfold this definition and invert it to get $h$.
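The question's $f$ and $g$ are not quoted above, but a pair consistent with the stated ranges is the inclusion $f(x)=x$ from $(0,1)$ into $[0,1)$ and $g(x)=(x+1)/2$ from $[0,1)$ onto $[\frac12,1)$. Under that assumption, the resulting $H$ is easy to compute; a small Python sketch:

from fractions import Fraction

def g(x):
    return (x + 1) / 2          # assumed g: [0,1) -> (0,1), range [1/2, 1)

def in_A(x):
    # A = {1 - 2**(-n) : n = 0, 1, 2, ...}; x is in A iff 1/(1-x) is a power of 2
    m = 1 / (1 - x)
    return m == int(m) and int(m) & (int(m) - 1) == 0

def H(x):
    # g on the orbit A, f^{-1} (= identity, since f is the inclusion) elsewhere
    return g(x) if in_A(x) else x

for x in [Fraction(0), Fraction(1, 2), Fraction(3, 4), Fraction(1, 3)]:
    print(x, '->', H(x))        # 0 -> 1/2, 1/2 -> 3/4, 3/4 -> 7/8, 1/3 -> 1/3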
I'm writing a math paper in LaTeX and for math formulas I'm just using $$math formula$$. Is it possible to number formulas? Or do I have to use some specific commands such as align?

Vanilla displayed equation idioms:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
The \verb|equation| environment creates numbered formulas you can
label and refer to elsewhere:
% this commented blank line prevents start of a new paragraph
\begin{equation}\label{eq:pythagoras}
a^2 + b^2 = c^2 .
\end{equation}
% this commented blank line prevents start of a new paragraph
Equation~\ref{eq:pythagoras} is the heart of the Pythagorean theorem.
Use the \verb|\eqref| macro to put parentheses around equation
references: \eqref{eq:pythagoras}.
For equations with no numbers, use \verb|equation*|:
%
\begin{equation*}
2 + 2 = 4 .
\end{equation*}
For multiline formulas, use \verb|align| or \verb|align*|:
%
\begin{align}
e^{i\pi} & = \cos(\pi) + i\sin(\pi) \notag \\
         & = -1 .
\end{align}
\end{document}

Edit: TeX will complain if it sees a blank line before the \end of one of these environments. Debugging that error message is difficult since you tend to think that TeX automatically does the right thing with white space. See "File ended while scanning use of \align*".

For inline math use $...$ (or \( ... \)). For display-style equations, if you use aligned then alignment with enumerate's \item works better. Code:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{enumerate}
\item $\begin{aligned}[t]
        E &= mc^2 \\
        F &= ma
       \end{aligned}$
\item $\begin{aligned}[t]
        E &= mc^2 \\
        F &= ma
       \end{aligned}$
\end{enumerate}
\end{document}
I have been reading about regression models for missing-data imputation and I'm quite confused regarding the following: if I can perfectly predict the value of feature f2 using feature f1, why would I use f2? If both were real, would this mean that they are highly correlated, even if in a non-linear fashion? As far as I know, this class of imputation methods tries to predict a feature using another set of features. EDIT 1: To give some technical/theoretical background, in section 3.2.1 of the book "Flexible Imputation of Missing Data": For univariate $Y$ we write lowercase $y$ for $Y$. Any predictors in the imputation model are collected in $X$. Symbol $X_{obs}$ indicates the subset of $n_1$ rows of $X$ for which $y$ is observed, and $X_{mis}$ is the complementing subset of $n_0$ rows of $X$ for which $y$ is missing. The vector containing the $n_1$ observed data in $y$ is denoted by $y_{obs}$, and the vector of $n_0$ imputed values in $y$ is indicated by $\dot{y}$. This section reviews four different ways of creating imputations under the normal linear model. The four methods are: Predict. $\dot{y} = \hat{\beta_{0}} + X_{mis} \hat{\beta_{1}}$, where $\hat{\beta_{0}}$ and $\hat{\beta_{1}}$ are least squares estimates calculated from the observed data. Section 1.3.4 named this regression imputation. In mice this method is available as "norm.predict". Predict + noise. $\dot{y} = \hat{\beta_{0}} + X_{mis} \hat{\beta_{1}} + \dot{\epsilon}$, where $\dot{\epsilon}$ is randomly drawn from the normal distribution as $\dot{\epsilon} \sim N(0, \hat{\sigma}^2)$. Section 1.3.5 named this stochastic regression imputation. In mice this method is available as "norm.nob". Bayesian multiple imputation. $\dot{y} = \dot{\beta_{0}} + X_{mis} \dot{\beta_{1}} + \dot{\epsilon}$, where $\dot{\epsilon} \sim N(0, \dot{\sigma}^2)$ and $\dot\beta_{0}$, $\dot\beta_{1}$ and $\dot\sigma$ are random draws from their posterior distribution, given the data. Section 3.1.3 named this “predict + noise + parameter uncertainty.” The method is available as "norm". Bootstrap multiple imputation. $\dot{y} = \dot{\beta_{0}} + X_{mis} \dot{\beta_{1}} + \dot{\epsilon}$, where $\dot{\epsilon} \sim N(0, \dot{\sigma}^2)$ and $\dot\beta_{0}$, $\dot\beta_{1}$ and $\dot\sigma$ are the least squares estimates calculated from a bootstrap sample taken from the observed data. This is an alternative way to implement “predict + noise + parameter uncertainty.” The method is available as "norm.boot".
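To make the first two recipes concrete, here is a minimal numpy sketch of "predict" and "predict + noise" for a single predictor. The data are simulated; in practice one would use mice's "norm.predict" / "norm.nob" directly.

import numpy as np

rng = np.random.default_rng(0)

# Simulated example: y depends linearly on one predictor x; ~30% of y missing.
n = 200
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=1.5, size=n)
miss = rng.random(n) < 0.3
y_obs, X_obs, X_mis = y[~miss], x[~miss], x[miss]

# Least-squares estimates (beta0_hat, beta1_hat) from the observed rows.
A = np.column_stack([np.ones(X_obs.size), X_obs])
(b0, b1), res, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
sigma2_hat = res[0] / (y_obs.size - 2)        # residual variance estimate

y_dot_predict = b0 + b1 * X_mis               # "Predict" (norm.predict)
y_dot_noise = y_dot_predict + rng.normal(     # "Predict + noise" (norm.nob)
    scale=np.sqrt(sigma2_hat), size=X_mis.size)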
I have a case where $$\lim_{x\rightarrow\infty}\frac{f\left(x\right)}{h\left(x\right)}$$ I know that $\lim_{x\rightarrow\infty} f(x)=0$ and $\lim_{x\rightarrow\infty} h(x)=\infty$. So at the end I have "$\frac{0}{\infty}$". I know that infinity is not a real number, but I am not sure if the limit is indeterminate. (Also, there are people saying contradictory things on the internet.) I know very well that it is not possible to use l'Hôpital's rule. My guess is this: as we know that $\lim_{x\rightarrow\infty}\frac{1}{\infty}=0$, we can just write $$\lim_{x\rightarrow\infty} \frac{1-0}{\infty}=0$$ $$\lim_{x\rightarrow\infty} \frac{1}{\infty}-\frac{0}{\infty}=0$$ So, in this case, $\frac{0}{\infty}=0$. What could be the answer and its explanation?
Is the binomial theorem actually more efficient than just distributing a given binomial? I believe it is more confusing for me to remember and work out using the binomial theorem as a guide than to just distribute when given a problem like: $$(x-4)^6$$ I think distributing that would be equally as painful as using the binomial theorem. Am I alone, or is there actually a reason to practice the painful procedure? The binomial theorem allows you to write out the expansion of your polynomial immediately. It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?" Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$. It is a good thing. There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\cdots$). Another that comes to mind immediately is the series expansion of $e^x$... $$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$ As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series... $$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$ If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation: the coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand by hand with the binomial theorem; without it, it's doable but very tedious). A useful shorthand for lower powers of the exponent is to remember the first few rows of Pascal's Triangle along with the rule for adding more if you need them. Example of the first few rows: $$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$ The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try). One more thing: the binomial theorem admits an interesting generalization to powers of general polynomials, sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?) Marty Cohen and Phicar answered your question correctly. In mathematics, because we usually practice on small numbers, it can be tempting to think this way. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case, and the actual number of cases in which it wouldn't be efficient is very small.
The binomial theorem gives you a powerful tool: a computation for any exponent in, perhaps, the fastest way possible, one that is inefficient for at most finitely many exponents but best possible for infinitely many. This is one trap in mathematics: we implicitly assume that mathematical tools are created to treat familiar cases, but they are created to treat all cases in the neatest way possible, and they will be cumbersome for some simple cases. (Example: integrating $f(x)=mx$ over the interval $[0,a]$. The region is just a triangle; there is no need for integral calculus here.) I remember asking why we need analysis if calculus seems to work well, but I was supposing (without realizing it) that calculus is used only on familiar cases, whereas we want it to be useful also for very imaginative cases! To add to the other answers, in higher-level math it's important to be able to generalize the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just a training exercise for being familiar with, and having intuition for, the general binomial theorem. For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore.
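As a concrete check of how mechanical the theorem makes the $(x-4)^6$ example, here is a one-liner in Python (math.comb is the binomial coefficient):

from math import comb

# (x - 4)**6 = sum over k of C(6, k) * x**k * (-4)**(6 - k)
coeffs = [comb(6, k) * (-4) ** (6 - k) for k in range(7)]
print(coeffs)   # [4096, -6144, 3840, -1280, 240, -24, 1]  (k = 0, ..., 6)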
Mertens function has, by residues, an explicit formula of $M(x)=\displaystyle\sum_{\rho}\frac{x^\rho}{\rho\zeta'(\rho)}-2+\sum_{n=1}^\infty\frac{(-1)^{n-1}\,(2\pi/x)^{2n}}{(2n)!\; n\, \zeta(2n+1)}$ where $\rho$ ranges over the zeros of $\zeta(s)$, as usual. Meanwhile, if we use this generalized identity for the number-of-divisors function, $d_z(n)=\displaystyle\prod_{p^\alpha | n}\frac{(z)(z+1)..(z+\alpha-1)}{\alpha!}$, it's not much work to see that the Möbius function $\mu(n)$ is equal to $d_{-1}(n)$, and, with $D_z(n) = \sum_{j=1}^n d_z(j)$, that $M(n) = D_{-1}(n)$. Is there an explicit formula, similar to that of $M(n)$ above, for the more general case of $D_z(n)$, of which the formula for $M(n)$ is a specialization? Some more detail, in response to Eric N.: I understand that we can't use residues to get an explicit formula, for the reasons mentioned. But does that lead naturally to the idea that there isn't / couldn't be an explicit formula for $D_k(n), k>0$ that uses the zeta zeros? I want to make a visual, intuitive argument here. Here's an identity for $D_z(n)$ for complex $z$: $\displaystyle D_z(n) = \frac{z^0}{0!}1+\frac{z^1}{1!}\sum_{j=2}^n \kappa(j)+\frac{z^2}{2!}\sum_{j=2}^n \sum_{k=2}^{\lfloor \frac{n}{j} \rfloor} \kappa(j) \kappa(k)+\frac{z^3}{3!}\sum_{j=2}^n \sum_{k=2}^{\lfloor \frac{n}{j} \rfloor}\sum_{l=2}^{\lfloor \frac{n}{j k} \rfloor} \kappa(j) \kappa(k) \kappa(l)+\frac{z^4}{4!}\cdots$ where $\kappa(n) = \frac{\Lambda(n)}{\log n}$. Define $\displaystyle P_k(n)=\sum_{j=2}^{n}\kappa(j) P_{k-1}(\lfloor \frac{n}{j} \rfloor)$ with $P_0(n)=1$, and restate that as $\displaystyle D_z(n) = \frac{z^0}{0!}P_0(n)+\frac{z^1}{1!}P_1(n)+\frac{z^2}{2!}P_2(n)+\frac{z^3}{3!}P_3(n)+\frac{z^4}{4!}P_4(n)+\cdots$ $P_k(n) = 0$ if $n < 2^k$, so only about $\log_2 n$ terms are non-zero. This means that once you've computed those non-zero $P_k(n)$ terms, it's trivial to compute $D_z(n)$ for any $z$ in about $\log_2 n$ operations. Now, use this identity to animate, in Mathematica, $\displaystyle\frac{(D_z(n)-1)}{z}$ over the range $z = 1$ to $z = -1$:

K[n_] := FullSimplify[MangoldtLambda[n]/Log[n]]   (* kappa(n) = Lambda(n)/log n *)
P[n_, k_] := P[n, k] = Sum[K[j] P[Floor[n/j], k - 1], {j, 2, n}]   (* memoized *)
P[n_, 0] := 1
DD[n_, k_] := Sum[k^j/j! P[n, j], {j, 0, Log[2, n]}]
Animate[DiscretePlot[(DD[n, z = Cos[k]] - 1)/z, {n, 1, 100}], {k, 0, 2 Pi, .0001}]

What you'll see, if you watch this animation, is a line that starts as $f(x)=x-1$, races down and, at its fastest, is the Riemann prime counting function right when $z=0$, and then finally comes to a halt at (1 - Mertens function), before it cycles back up - all in all, a nice gradual transformation between those three important functions. I know it's only an appeal to visuals, but I feel like what's going on at $D_{-.2}(n)/-.2$ looks continuous with what's going on at $D_{.2}(n)/.2$. Here's a closer look at that:

Animate[DiscretePlot[(DD[n, z = Cos[k]*.2] - 1)/z, {n, 1, 400}], {k, 0, 2 Pi, .0001}]

I guess anything's possible, but it deeply offends my sense of symmetry to think the zeta zeros are accounting for the high-frequency part of the line there, from $-.2$ to $0$, and then, in the blink of an eye, something else is accounting for what is almost the exact same high-frequency information. Are my instincts wrong? A few more notes about all this: that identity for $D_z(n)$ stems from Linnik's identity, summed, inverted, and generalized a bit. There's a corresponding identity for $d_z(n)$ as well. In the notation above, $P_1(n) = \Pi(n)$, the Riemann prime counting function.
As was casually demonstrated above, $\displaystyle \lim_{z \to 0}\frac{D_z(n)-1}{z} = \Pi(n)$. You can get this last result more easily by taking $d_z(n)=\displaystyle\prod_{p^\alpha | n}\frac{(z)(z+1)..(z+\alpha-1)}{\alpha!}$ and noting that $\displaystyle \lim_{z \to 0}\frac{d_z(n)}{z} = \frac{\Lambda(n)}{\log n}$ except at $n=1$, where the limit is infinite.
Some of your students will become engineers, and engineers use complex numbers all the time, e.g., to represent impedance. This kind of thing is by far the most common application. Complex numbers are also used in quantum mechanics.

"after algebra II, they never use complex numbers until pretty much complex analysis." I assume you mean "they never use ...

I have worked with a lot of students coming out of courses such as yours who: passed the course by blindly memorising proofs, theorems, and algorithms; learnt nothing (lasting) except solving some calculating exercises and a very vague idea of some terminology; had no idea what they just learnt, why they learnt it, and how it relates to their field of study; ...

We owe students a presentation of the Fundamental Theorem of Algebra -- that every nonconstant polynomial has a root; or, equivalently, the marvelous fact that every polynomial of degree n has precisely n roots (including multiplicities). When made, it serves as a capstone and culmination of all the work that the student has done in elementary algebra. Of ...

At my University, there are four different first-semester Linear Algebra courses taken by undergraduates: Math 214, Applied Linear Algebra, is "an introduction to matrices and linear algebra... The emphasis is on concepts and problem solving. The sequence 214-215 is not for math majors. It is designed as an alternate to the sequence 215-216 for engineering ...

I am surprised that nobody has mentioned differential equations. If you know about complex numbers and Euler's formula, then there is a beautiful unified theory of linear differential equations with constant coefficients in terms of the characteristic equation. Without complex numbers the theory becomes somewhat ad hoc, with different solutions depending on ...

I believe you need to listen beyond what your student is saying. Your student is not saying "I want to do some applications in class." What your student is really saying is "I'm bored and lost and this is rapidly becoming a waste of time for me, so I'm making this suggestion because I care enough about my own learning and trying to connect with you is how I ...

I think the fundamental tenet of this question is simply false. Here are some of the many encounters that undergraduate college students have in my classes: In Calculus II, as an application of power series, we discuss Euler's identity: $$e^{i\theta} = \cos(\theta)+i\sin(\theta).$$ Still in Calculus II, as an application of the preceding example, we derive ...

There's a reason why you can't find a good non-geometric example: when the dimensions of the axes on a graph are distinct, perpendicularity is units-dependent. There is thus no natural aspect ratio with which to draw the graph. If you change the scale for just one axis, you destroy any perpendicularity that was present, thus demonstrating that it was an ...

Consider something besides an "all or nothing" approach. Here's what I did a couple of times when the topic was optional and I didn't have much time, but I still wanted to give students an introduction to the method. Simply restrict yourself to introducing the method by defining the Laplace transform, computing it for some simple examples, explaining what ...

Symmetries give profound insight into structure.
This is demonstrated by Noether's theorem, proved just under 100 years ago by Emmy Noether (probably the greatest female mathematician in history). Noether's theorem states that every symmetry of a physical system is there because something is being conserved. Examples: translational symmetry -> ...

There is Methods of Modern Mathematical Physics by Reed and Simon, a 4-volume book which teaches functional analysis with a focus on operators in Hilbert spaces. Its main aim is to provide a sound mathematical background for the methods used in quantum mechanics, but it serves well as a textbook on functional analysis (for example, it covers some ...

People tend to use the presence of symmetry in a phenomenon to simplify a model of it. For example, in physics, systems with certain symmetries are the easiest to model (e.g., an infinite plane, an infinitely long cylinder, a sphere, etc.). But we have to be careful here. Ian Stewart has a very nice article on symmetry-breaking. (The article is from his ...

At the introductory level, Erwin Kreyszig's Introductory Functional Analysis with Applications is excellent. See the reviews at amazon.com. It's probably a bit elementary for you at this point (still, it could be very suitable for others reading this thread for recommendations), but I recommend at least looking through a copy at the library. I've actually had ...

Just another question from a math guy showing his ignorance of math as a service course. 2nd-order diffyQ with constant coefficients (the most important diffyQ for applications) has complex roots in the characteristic equation. [IOW very standard part of a standard ODE course; OFTEN part of even a second-semester calculus course during the diffyQ survey...was ...

Going back to Euclid, I have found that questions such as "Given a large supply of rods of length $15$ and $21$, what lengths can be measured?" can appeal to students. This also motivates the result $\gcd(a,b) = ra+sb$ of the (extended) Euclidean Algorithm; see the short implementation sketch at the end of this run of excerpts.

Some nice geometrical applications arise in the analysis of periodic curves such as roulettes (Spirograph curves), star polygons, etc. Concrete experience with implementations in toys like Spirograph also provides excellent motivation for more abstract concepts such as cyclic groups.

Given a set of sites, determine which points are closer to one of the sites than to any of the others. If "closer" is measured by Euclidean distance, then given the distinct points A and B, the points equidistant from A and B lie on the perpendicular bisector of the segment AB. The regions one obtains are known as the Voronoi diagram associated with the sites, and ...

Peter Lax: Functional Analysis. It is sometimes difficult to use because of its cryptic style, but it is a great source of applications and a great source of historical references on applications which motivated functional-analytic concepts.

One possible class of examples of high-dimensional systems is discretizations of infinite-dimensional ones. Differential equations, as suggested in the comments, are one option, but let me propose another one that hopefully seems interesting and useful to students. This is a linear problem that is solved several times every day in hospitals around the world....
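Returning to the rods-of-length-15-and-21 question above: the measurable lengths are exactly the multiples of $\gcd(15,21)=3$, and the extended Euclidean Algorithm produces the coefficients $r,s$ explicitly. A small Python sketch:

def extended_gcd(a, b):
    """Return (g, r, s) with g = gcd(a, b) = r*a + s*b."""
    if b == 0:
        return a, 1, 0
    g, r, s = extended_gcd(b, a % b)
    return g, s, r - (a // b) * s

g, r, s = extended_gcd(15, 21)
print(g, r, s)   # 3 3 -2, i.e. 3 = 3*15 - 2*21: three 15-rods against two 21-rods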
Also having a short period of time to introduce my DE students to Laplace transforms, I began with two "Axioms": (1) $\mathcal{L}\{c_1y_1(t)+c_2y_2(t)\}=c_1Y_1(s)+c_2Y_2(s)$ and (2) $\mathcal{L}\{y^\prime\}=sY(s)-y(0)$. From these two we derived the Laplace transforms for polynomial, exponential and sinusoidal functions and proceeded to use those results and ...

Most people are not going to use most of what they learn in high school. High school (in the US, at least) is about building a broad foundation for students to be able to jump into any specialization when they go to college. Some examples of things I haven't had to do since high school: name the 5 (I think 5, maybe it changed?) biological kingdoms; diagram a ...

Here's a word problem for the greatest common divisor: 12 boys and 15 girls are to march in a parade. The organizer wants them to march in rows, with each row having the same number of children, and with each row composed of children of the same gender. What is the largest number of children per row that satisfies these constraints? There should be $\...

Builders use plumb lines and levels to determine vertical and horizontal directions (which are perpendicular) in order to ensure that floors are horizontal and walls are vertical. When dealing with vectors (in the plane or in space), which are used constantly in statics and dynamics, one often represents each vector as the sum of a pair of perpendicular ...

I love this example. Suppose you want to cut out an irregular triangle from the center of a piece of paper. You can do it with one straight cut, first folding flat along angle bisectors. And of course the angle bisectors meet at the center of the incircle (Proposition 4, Book IV of Euclid) ...

I challenge the assertion that students need to see applications in everything. When I first started teaching I labored under the delusion that I should explain connections to physics whenever I could (in calculus, DEQns, linear algebra, etc.). Now, I think my efforts do have an audience, but not the main audience that I find in my classes. Personally, I've ...

I'd like to expand a bit on The Chef's answer. Specifically, there's no need to require any kind of emphasis on "real world" applications. That is: generally, "real world" applications refers to some kind of industry mechanism in which some piece of mathematics is directly used (circuits, GPS, etc.). A student uninterested in a particular aspect of ...

What is Mathematical Modeling? You might think this has a simple, straightforward answer, but unfortunately we have no such luck. The definition of mathematical modeling varies depending on the author, unlike other more clearly defined mathematical terms like "prime" or "group". Pollak Definition: Arguably, one leading mathematician in the field of modeling ...
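As a quick illustration of how the two Laplace-transform "Axioms" quoted at the start of this run of excerpts already pin down a transform, take $y(t)=e^{at}$, which satisfies $y' = ay$ with $y(0)=1$. Applying (2) to the left side of $\mathcal{L}\{y'\}=\mathcal{L}\{ay\}$ and (1) to the right side gives $$sY(s) - 1 = aY(s) \quad\Longrightarrow\quad Y(s)=\frac{1}{s-a},$$ the familiar transform of the exponential; the polynomial and sinusoidal transforms fall out of the same pattern.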
The following question kept me wondering for some time: Given the symmetric matrices $A,B,C\in\mathbb{R}^{n×n}$ where $A$ and $C$ are positive definite (hence invertible), and $B$ is positive semidefinite (hence not necessarily invertible) with $\text{trace}(B)\neq0$, prove that $\text{trace}\big\{C^{-1/2}BC^{−1/2}(A^{−1}+C^{−1/2}BC^{−1/2})^{−1}\big(A+\frac{n}{\text{trace}(B)}C\big)^{-1}\big\}\geq \text{trace}\big\{\big(A+\frac{n}{\text{trace}(B)}C\big)^{−2}A\big\}$ If it would help, one can also consider the simpler version with $C=I$: prove that $\text{trace}\big\{B(A^{−1}+B)^{−1}\big(A+\frac{n}{\text{trace}(B)}I\big)^{-1}\big\}\geq \text{trace}\big\{\big(A+\frac{n}{\text{trace}(B)}I\big)^{−2}A\big\}$. Please note that the matrix inversion lemma is not applicable at first to $C^{-1/2}BC^{−1/2}(A^{−1}+C^{−1/2}BC^{−1/2})^{−1}$ since $B$ is positive semidefinite. Although I'm not sure, it seems like $\text{trace}(B)=\text{trace}\big\{(\text{trace}(B)/n)\,I\big\}$ should be utilized first in some way to arrive at some inequality to which the matrix inversion lemma can later be applied. I would highly appreciate it if anyone could provide some help or suggestions on this. Based on the suggestion of user2097, an invertible (positive definite) $B$ modifies the inequality to $\text{trace}\big\{(A+C^{1/2}B^{-1}C^{1/2})^{−1}\big(A+\frac{n}{\text{trace}(B)}C\big)^{-1}A\big\}\geq \text{trace}\big\{\big(A+\frac{n}{\text{trace}(B)}C\big)^{−2}A\big\}$ which looks easier, but I was still unable to prove it.
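Not a proof, but before investing in one it can be worth stress-testing the inequality numerically on random matrices. A Python sketch (dimension and trial count are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n = 4

def rand_pd():
    X = rng.standard_normal((n, n))
    return X @ X.T + np.eye(n)            # symmetric positive definite

for trial in range(10_000):
    A, C = rand_pd(), rand_pd()
    X = rng.standard_normal((n, rng.integers(1, n + 1)))
    B = X @ X.T                           # PSD, possibly singular, trace > 0
    w, V = np.linalg.eigh(C)
    Cmh = V @ np.diag(w**-0.5) @ V.T      # C^{-1/2}
    Bt = Cmh @ B @ Cmh                    # C^{-1/2} B C^{-1/2}
    S = np.linalg.inv(A + (n / np.trace(B)) * C)
    lhs = np.trace(Bt @ np.linalg.inv(np.linalg.inv(A) + Bt) @ S)
    rhs = np.trace(S @ S @ A)
    assert lhs >= rhs - 1e-8, (trial, lhs, rhs)
print("no counterexample found")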
The fundamental idea behind stochastic linear programming is the concept of recourse. Recourse is the ability to take corrective action after a random event has taken place. A simple example of two-stage recourse is the following:

Choose some variables, $x$, to control what happens today.
Overnight, a random event happens.
Tomorrow, take some recourse action, $y$, to correct what may have gotten messed up by the random event.

We can formulate optimization problems to choose $x$ and $y$ in an optimal way. In this example, there are two periods; the data for the first period are known with certainty, and some data for the future periods are stochastic, that is, random.

Example

You are in charge of a local gas company. When you buy gas, you typically deliver some to your customers right away and put the rest in storage. When you sell gas, you take it either from storage or from newly arrived supplies. Hence, your decision variables are 1) how much gas to purchase and deliver, 2) how much gas to purchase and store, and 3) how much gas to take from storage and deliver to customers. Your decision will depend on the price of gas both now and in future time periods, the storage cost, the size of your storage facility, and the demand in each period. You will decide these variables for each time period considered in the problem. This problem can be modeled as a simple linear program with the objective to minimize overall cost. The solution is valid if the problem data are known with certainty, that is, if the future events unfold as planned. More than likely, the future will not be precisely as you have planned; you don't know for sure what the price or demand will be in future periods, though you can make good guesses. For example, if you deliver gas to your customers for heating purposes, the demand for gas and its purchase price will be strongly dependent on the weather. Predicting the weather is rarely an exact science; therefore, not taking this uncertainty into account may invalidate the results from your model. Your "optimal" decision for one set of data may not be optimal for the actual situation.

Scenarios

Suppose in our example that we are experiencing a normal winter and that the next winter can be one of three scenarios: normal, cold, or very cold. To formulate this problem as a stochastic linear program, we must first characterize the uncertainty in the model. The most common method is to formulate scenarios and assign a probability to each scenario. Each of these scenarios has different data, as shown in the following table:

Scenario    Probability  Gas Cost ($)  Demand (units)
Normal      1/3          5.0           100
Cold        1/3          6.0           150
Very Cold   1/3          7.5           180

Both the demand for gas and its cost increase as the weather becomes colder. The storage cost is constant: say, 1 unit of gas costs $1 per year to store. If we solve the linear program for each scenario separately, we arrive at three purchase/storage strategies:

Normal - Normal
Year  Purchase to Use  Purchase to Store  Storage  Cost
1     100              0                  0        500
2     100              0                  0        500
Total Cost = $1000

Normal - Cold
Year  Purchase to Use  Purchase to Store  Storage  Cost
1     100              0                  0        500
2     150              0                  0        900
Total Cost = $1400

Normal - Very Cold
Year  Purchase to Use  Purchase to Store  Storage  Cost
1     100              180                180      1580
2     0                0                  0        0
Total Cost = $1580

We do not know which of the three scenarios will actually occur next year, but we would like our current purchasing decision to put us in the best position to minimize our expected cost.
Bear in mind that by the time we make our second purchasing decision, we will know which of the three scenarios has actually happened. Stochastic programs seek to minimize the cost of the first-period decision plus the expected cost of the second-period recourse decision: \[ \begin{array}{ll} \min & c^T x + E_{\omega} Q(x,\omega) \\ \mbox{s.t.} & Ax = b \\ & x \geq 0 \end{array}\] where \[ \begin{array}{lll} Q(x,\omega) = & \min & d(\omega)^T y \\ & \mbox{s.t.} & T(\omega)x + W(\omega) y = h(\omega) \\ & & y \geq 0 \end{array} \] The first linear program minimizes the first-period direct costs, \(c^T x \), plus the expected recourse cost, \(Q(x,\omega) \), over all of the possible scenarios while meeting the first-period constraints, \(Ax = b\). The recourse cost \(Q\) depends both on \(x\), the first-period decision, and on the random event, \(\omega\). The second LP describes how to choose \(y(\omega)\) (a different decision for each random scenario \(\omega\)). It minimizes the cost \(d^Ty\) subject to the recourse constraint \(Tx + Wy = h\). This constraint can be thought of as requiring some action to correct the system after the random event occurs. In our example, this constraint would require the purchase of enough gas to supplement the original amount on hand in order to meet the demand. One important thing to notice in stochastic programs is that the first-period decision, \(x\), is independent of which second-period scenario actually occurs. This is called the nonanticipativity property. The future is uncertain, and so today's decision cannot take advantage of knowledge of the future.

Deterministic Equivalent

The formulation above looks a lot messier than the deterministic LP formulation that we discuss elsewhere. However, we can express this problem in a deterministic form by introducing a different second-period \(y\) variable for each scenario. This formulation is called the deterministic equivalent: \[ \begin{array}{lllll} \min & c^Tx + \sum_{i=1}^N p_i d_i^T y_i & & & \\ \mbox{s.t.} & Ax & = & b & \\ & T_i x + W_i y_i & = & h_i & i=1,\ldots,N \\ & x & \geq & 0 & \\ & y_i & \geq & 0 & i=1,\ldots,N \end{array} \] where \(N\) is the number of scenarios and \(p_i\) is the probability of scenario \(i\)'s occurrence. For our three-scenario problem, we have \[ \begin{array}{lllll} \min & c^Tx + p_1 d_1^T y_1 + p_2 d_2^T y_2 + p_3 d_3^T y_3 & & & \\ \mbox{s.t.} & Ax & = & b & \\ & T_1 x + W_1 y_1 & = & h_1 & \\ & T_2 x + W_2 y_2 & = & h_2 & \\ & T_3 x + W_3 y_3 & = & h_3 & \\ & x & \geq & 0 & \\ & y_i & \geq & 0 & i=1,2,3 \end{array}\] Notice that the nonanticipativity constraint is met. There is only one first-period decision, \(x\), whereas there are \(N\) second-period decisions, one for each scenario. The first-period decision cannot anticipate one scenario over another and must be feasible for each scenario. That is, \(Ax = b\) and \(T_i x + W_i y_i = h_i\) for \(i = 1,...,N\). Because we solve for all the decisions, \(x\) and \(y_i\), simultaneously, we are choosing \(x\) to be (in some sense) optimal over all the scenarios. Another feature of the deterministic equivalent is worth noting. Because the \(T\) and \(W\) matrices are repeated for every scenario in the model, the size of the problem increases linearly with the number of scenarios. Since the structure of the matrices remains the same and the constraint matrix has a special shape, solution algorithms can take advantage of these properties.
Taking uncertainty into account leads to more robust solutions but also requires more computational effort to obtain the solution. Strictly speaking, a deterministic equivalent is any mathematical program that can be used to compute the optimal first-stage decision, so these will exist for continuous probability distributions as well, when one can represent the second-stage cost in some closed form. The deterministic equivalent is often called the extensive form. Because stochastic programs require more data and computation to solve, most people have opted for simpler solution strategies. One method requires the solution of the problem for each scenario. The solutions to these problems are then examined to find where the solutions are similar and where they are different. Based on this information, subjective decisions can be made to decide the best strategy.

Expected-Value Formulation

A more quantifiable approach is to solve the original LP where all the random data have been replaced with their expected values. Hopefully, with this approach, we will do all right on average. For our example, then, we consider the (expected value) problem data to be:

Year  Gas Cost ($)  Demand
1     5.0           100
2     6.167         143.33

Solving this problem gives the following result:

Year  Purchase to Use  Purchase to Store  Storage  Cost
1     100              143.33             143.33   1360
2     0                0                  0        0
Cost = $1360.00

Let's compute what happens in each scenario if we implement the expected-value solution:

Scenario   Recourse Action                    Recourse Cost  Total Cost
Normal     Store 43.33 excess @ $1 per unit   43.33          1403.33
Cold       Buy 6.67 units @ $6 per unit       40             1400
Very Cold  Buy 36.67 units @ $7.5 per unit    275            1635

The expected total cost over all scenarios is \(\frac{1}{3} 1403.33 + \frac{1}{3} 1400 + \frac{1}{3} 1635 = 1479.44\).

Stochastic Programming Solution

Forming and solving the stochastic linear program gives the following solution:

Year / Scenario  Purchase to Use  Purchase to Store  Storage  Cost
1                100              100                100      1100
2  Normal        0                0                  0        0
2  Cold          50               0                  0        300
2  Very Cold     80               0                  0        600

Cost = \(1100 + \frac{1}{3}300 + \frac{1}{3} 600 = 1400\). Similarly, we can compute the costs of the stochastic programming solution for each scenario:

Scenario   Recourse Action                  Recourse Cost  Total Cost
Normal     None                             0              1100
Cold       Buy 50 units @ $6 per unit       300            1400
Very Cold  Buy 80 units @ $7.5 per unit     600            1700

The expected total cost over all scenarios is \(\frac{1}{3}1100 + \frac{1}{3}1400 + \frac{1}{3}1700 = 1400\). The difference in these average costs (\$79.44) is the value of the stochastic solution over the expected-value solution. Also, notice that the cost of the stochastic solution in each scenario is greater than or equal to the optimal cost for that scenario solved separately (\(1100\geq 1000\), \(1400 \geq 1400\), \(1700 \geq 1580\)). By solving each scenario alone, one assumes perfect information about the future to obtain a minimum cost. The stochastic solution is minimizing over a number of scenarios and, as a result, sacrifices the minimum cost for each scenario in order to obtain a robust solution over all the scenarios. Randomness in problem data poses a serious challenge for solving many linear programming problems. The solutions obtained are optimal for the specific problem but may not be optimal for the situation that actually occurs. Being able to take this randomness into account is critical for many problems where the essence of the problem is dealing with the randomness in some optimal way. Stochastic programming enables the modeller to create a solution that is optimal over a set of scenarios.
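Since the example is small, its deterministic equivalent can be solved directly. A Python sketch using scipy's LP solver and the numbers from the tables above (the first-period purchase of 100 units for immediate use, costing $500, is fixed and kept out of the LP as a constant):

import numpy as np
from scipy.optimize import linprog

p = [1/3, 1/3, 1/3]              # scenario probabilities
d = [5.0, 6.0, 7.5]              # second-period gas cost per scenario
demand = [100.0, 150.0, 180.0]   # second-period demand per scenario

# Variables: [s, y1, y2, y3], where s = units bought and stored in period 1
# ($5 purchase + $1 storage), and y_i = recourse purchase in scenario i.
c = np.array([6.0] + [pi * di for pi, di in zip(p, d)])

# Demand constraints s + y_i >= demand_i, written as -(s + y_i) <= -demand_i.
A_ub = np.array([[-1.0, -1.0, 0.0, 0.0],
                 [-1.0, 0.0, -1.0, 0.0],
                 [-1.0, 0.0, 0.0, -1.0]])
b_ub = -np.array(demand)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x)          # s = 100, y = (0, 50, 80): the stochastic solution above
print(500 + res.fun)  # expected total cost 1400, matching the table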
Let me attempt to answer your question. Since your question is about the SO(10) GUT model, I will assume that you know the simpler version of GUT, namely the SU(5) GUT model, and also a little group theory. You have 4 different questions: 01. Isn't this term ($\psi^{T} C \psi$) already invariant under SO(10)? 02. Doesn't this term ($\psi^{T} C \psi$) yield Majorana mass terms? 03. Why are terms of this form ($\psi^{T} C \psi \phi_{10}$) invariant? 04. Why does the decomposition quoted above ($16 \times 16= 10+120+126$) tell us which Higgs representation we must use in order to get invariant terms? Since all these questions are interrelated, while answering I may not follow this particular ordering but rather mix things up; I apologize for that. ANS to QUS#01: Yes, it is! SO(10) is an orthogonal group, so any SO(10)-invariant quantity (say X) must remain the same under a group rotation, i.e., $O^{T}XO=X$, so you are right about that. ANS to QUS#02: The term ($\psi^{T} C \psi$) is neither a mass term nor a Yukawa term; it is just an invariant fermion bilinear. To understand how to form fermion bilinears, and how many such terms are possible, consult any introductory field theory/particle physics book, for example: Introduction to Elementary Particles, D. Griffiths; chapter 7, page 224, equation 7.68 [ http://www.amazon.com/Introduction-Elementary-Particles-David-Griffiths/dp/3527406018 ]. ANS to QUS#03 and #04: Now, to generate mass for the fermions, you first need to construct Yukawa coupling terms in the Lagrangian. Always remember two things: (a) the structure of a Yukawa coupling is $\sim$ fermion * fermion * Higgs scalar, and (b) the Lagrangian always has to be group invariant. From (a) we get, for SO(10), $16\times 16 \times \text{Higgs}$, but from (b) this Higgs representation has to be chosen in such a way that these terms are invariant. At this stage you need a little bit of group theory. Group theory says $16 \times 16= 10+120+126$, which means fermion * fermion $\neq 1$ ($1=$ group singlet, and singlets are group invariants). So, to form a Yukawa coupling, which has to be group invariant, the only options you have are the 10, 120, or 126 representations; choosing any other representation of SO(10) will not give you a singlet. To understand in detail why other Higgs representations will not form a singlet, consult Group Theory for Unified Model Building, R. Slansky; page 105 [http://inspirehep.net/record/10204?ln=en ]. This answers QUS#04 and QUS#03. Now, while answering QUS#02 above, I was very brief, because to explain it clearly I need the elements from the last paragraph. So let's go back to your QUS#02. Now you know that the possible Yukawa coupling terms are: (i) $16_{F} \times 16_{F} \times 10_{H}$ (ii) $16_{F} \times 16_{F} \times 120_{H}$ (iii) $16_{F} \times 16_{F} \times 126_{H}$, where F and H stand for Fermion and Higgs respectively. For minimality of SO(10) GUT models, the 120 representation of the Higgs is not used, so I will not talk about it but rather concentrate on the 10- and 126-dimensional representations. Let's assume that you already know SU(5) GUT, as your question involves SO(10) GUT. Again from group theory one can write down the branching rules of the representations we are interested in for SO(10) $\rightarrow$ SU(5) (Group Theory for Unified Model Building, R. Slansky; page 106) as: (A) $16=1+\bar{5}+10$ (B) $10=5+\bar{5}$ (C) $126=1+5+\bar{10}+\ldots$ (the dots stand for higher-dimensional representations that we are not interested in).
Now, if you substitute these into the Yukawa couplings, you will get 10 terms that are invariants, but I will pick only two of them, for illustration and to attack the question of Majorana mass that you are interested in: (i) $1_{F}\times \bar{5}_{F} \times 5_{H}$ (ii) $1_{F}\times 1_{F}\times 1_{H}$. If you know SU(5) GUT, you already know that the first term (i) is a Dirac mass term whereas the second term (ii) is a Majorana mass term, as their basic forms are: Dirac mass $\sim$ Right-Handed Fermion * Left-Handed Fermion * Higgs = R * L * $\langle\phi_{H}\rangle$; Majorana mass $\sim$ Right-Handed Fermion * Right-Handed Fermion * Higgs = R * R * $\langle\phi_{H}\rangle$. (Instead of $\phi_{H}$ I write $\langle\phi_{H}\rangle$ to represent the vacuum expectation value [vev for short]: to give the fermions mass, the scalar fields have to acquire a vev. To understand vevs, see any introductory field theory book, for example: Quantum Field Theory, L. Ryder, chapter 8.) This is because in (i) $1_{F}$ contains the right-handed neutrino and $\bar{5}_{F}$ contains the left-handed neutrino, hence a Dirac-type mass; on the contrary, in (ii) both $1_{F}$ factors contain the right-handed neutrino, hence a Majorana-type mass. Hope that it will help. Thanks!
You wanted $\;s=\sqrt{u^2-1}$, so $\;s^2=u^2-1$, $\;2s\,ds=2u\,du$, and $\;s\,ds=u\,du$. Substituting gives us:$$\begin{align}\int\frac{du}{u\sqrt{u^2-1}} &= \int\frac{u\,du}{u^2 \sqrt{u^2-1}} \\&= \int\frac{s\,ds}{(s^2+1)s}\\&= \int\frac{ds}{s^2+1} \\ \\&= \arctan s + C.\end{align}$$ Now don't forget to "back substitute": $$s = \sqrt{u^2-1} = \sqrt{(x+1)^2 - 1} = \sqrt{x^2 + 2x}$$ So our answer is: $$\quad\arctan(\sqrt{x^2 + 2x}) + C$$ Note that with trig substitution, and trigonometric solutions, there can be any number of equivalent antiderivatives (the constants may vary), as is evident from the number of trigonometric identities! E.g.: $$\arctan\sqrt{x^2+2x}-\left(\frac\pi2-\arcsin\left(\frac1{x+1}\right)\right) = 0$$ $$-\arcsin\frac1{x+1}=\arctan\sqrt{x^2+2x}-\text{constant} = \arctan \sqrt{x^2 + 2x} + C$$ To check your integration answers, you can always take the derivative of your solution; if you can recover the original integrand, you are "good to go." Try differentiating each solution (yours and the text's), and both will give you the equivalent of your original integrand.
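As the last paragraph suggests, differentiation is the easiest check. Here is a quick SymPy verification (my own addition, not part of the original answer); the names `ours` and `texts` are just labels for the two antiderivatives:

```python
# Differentiate both antiderivatives and confirm each reduces to the
# integrand 1/(u*sqrt(u**2 - 1)) with u = x + 1.
import sympy as sp

x = sp.symbols('x', positive=True)
u = x + 1
integrand = 1 / (u * sp.sqrt(u**2 - 1))

ours = sp.atan(sp.sqrt(x**2 + 2*x))    # the answer derived above
texts = -sp.asin(1 / (x + 1))          # the textbook's form

print(sp.simplify(sp.diff(ours, x) - integrand))   # expected: 0
print(sp.simplify(sp.diff(texts, x) - integrand))  # expected: 0
print((ours - texts).subs(x, 1))                   # -> pi/2, a constant
```

The last line evaluates the difference at a sample point and returns exactly pi/2, the constant offset noted above.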
Extended q-Dedekind-type Daehee-Changhee sums associated with extended q-Euler polynomials. Advances in Difference Equations, volume 2015, Article number 272 (2015). Abstract In the present paper, we aim to specify a p-adic continuous function for an odd prime inside a p-adic q-analog of the extended Dedekind-type sums of higher order according to extended q-Euler polynomials (or weighted q-Euler polynomials), which are derived from a fermionic p-adic q-deformed integral on \(\mathbb{Z}_{p}\). Introduction Let p be chosen as a fixed odd prime number. In this paper \(\mathbb{Z}_{p}\), \(\mathbb{Q}_{p}\), \(\mathbb{C}\), and \(\mathbb{C}_{p}\) will, respectively, denote the ring of p-adic rational integers, the field of p-adic rational numbers, the complex numbers, and the completion of an algebraic closure of \(\mathbb{Q}_{p}\). Let \(v_{p}\) be the normalized exponential valuation of \(\mathbb{C}_{p}\) with \(\vert p\vert _{p}=p^{-v_{p} ( p ) }=p^{-1}\). When one talks of a q-extension, q is variously considered as an indeterminate, a complex number \(q\in\mathbb{C}\), or a p-adic number \(q\in \mathbb{C}_{p}\). If \(q\in\mathbb{C}\), we assume that \(\vert q\vert <1\). If \(q\in \mathbb{C}_{p}\), we assume that \(\vert 1-q\vert _{p}<1\) (see, for details, [1–16]). The following fermionic measure is defined by Kim: for any positive integer n and \(0\leq a< p^{n}\), \[\mu_{-q} ( a+p^{n}\mathbb{Z}_{p} ) =\frac{ ( -q ) ^{a}}{ [ p^{n} ] _{-q}}.\] Extended q-Euler polynomials (also known as weighted q-Euler polynomials) are defined by \[\widetilde{E}_{n,q}^{ ( \alpha ) } ( x ) =\int_{\mathbb{Z}_{p}} [ x+\xi ] _{q^{\alpha}}^{n}\,d\mu_{-q} ( \xi )\] for \(n\in \mathbb{Z}_{+}:= \{ 0,1,2,3,\ldots \} \). We note that \(\lim_{q\rightarrow1}\widetilde{E}_{n,q}^{ ( \alpha ) } ( x ) =E_{n} ( x ) \), where \(E_{n} ( x ) \) are the nth Euler polynomials, defined by the generating function \(\frac{2}{e^{t}+1}e^{xt}=\sum_{n=0}^{\infty}E_{n} ( x ) \frac{t^{n}}{n!}\) (for details, see [13]). In the case \(x=0\) in (1), we have \(\widetilde{E}_{n,q}^{ ( \alpha ) } ( 0 ) :=\widetilde{E}_{n,q}^{ ( \alpha ) }\), which are called extended q-Euler numbers (or weighted q-Euler numbers). Extended q-Euler numbers and polynomials satisfy explicit formulas and, for \(d\in \mathbb{N}\) with \(d\equiv1\ ( \operatorname{mod}2 ) \), a distribution relation in terms of the mth periodic Euler functions \(\overline{E}_{m} ( x ) \); see [13]. Kim [6] derived some interesting properties for Dedekind-type DC sums and considered a p-adic continuous function for an odd prime number to contain a p-adic q-analog of the higher order Dedekind-type DC sums \(k^{m}S_{m+1} ( h,k ) \). Simsek [15] gave a q-analog of Dedekind-type sums and derived interesting properties. Furthermore, Araci et al. studied Dedekind-type sums in accordance with modified q-Euler polynomials with weight α [14], modified q-Genocchi polynomials with weight α [4], and weighted q-Genocchi polynomials [16]. Recently, weighted q-Bernoulli numbers and polynomials were first defined by Kim in [11]. Subsequently, many mathematicians, utilizing Kim's paper [11], have introduced various generalizations of known special polynomials such as Bernoulli polynomials, Euler polynomials, and Genocchi polynomials, called weighted q-Bernoulli, weighted q-Euler, and weighted q-Genocchi polynomials in [1, 2, 11–13]. With the same motivation, we give a weighted p-adic q-analog of the higher order Dedekind-type DC sums \(k^{m}S_{m+1} ( h,k ) \), which are derived from a fermionic p-adic q-deformed integral on \(\mathbb{Z}_{p}\). Extended q-Dedekind-type sums associated with extended q-Euler polynomials Let w be the Teichmüller character \((\operatorname {mod}p)\).
For \(x\in \mathbb{Z}_{p}^{\ast}:= \mathbb{Z}_{p}\setminus p \mathbb{Z}_{p}\), set \(\langle x\rangle =\frac{x}{w ( x ) }\). Let a and N be positive integers with \(( p,a ) =1\) and \(p\mid N \). We now consider a quantity \(\widetilde{C}_{q}^{ ( \alpha ) } ( m,a,N:q^{N} ) \). In particular, if \(m+1\equiv0\ (\operatorname{mod}p-1)\), then Thus, \(\widetilde{C}_{q}^{ ( \alpha ) } ( m,a,N:q^{N} ) \) is a continuous p-adic extension of Let \([ \cdot ] \) be the Gauss symbol and let \(\{ x \} =x- [ x ] \). Thus, we are now ready to introduce the q-analog of the higher order Dedekind-type DC sums \(\widetilde{J}_{m,q}^{ ( \alpha ) } ( h,k:q^{l} ) \) by the rule If \(m+1\equiv0\ ( \operatorname{mod}p-1 ) \), where \(p\mid k\), \(( hM,p ) =1\) for each M. By (1), we easily state the following: where \((hM)_{k}\) denotes the integer x such that \(0\leq x< k\) and \(x\equiv hM\ ( \operatorname{mod}k ) \). It is not difficult to indicate the following: So, where \(( p^{-1}a ) _{N}\) denotes the integer x with \(0\leq x< N\), \(px\equiv a\ ( \operatorname{mod}N ) \) and m is an integer with \(m+1\equiv0\ (\operatorname{mod}p-1)\). Therefore, we have where \(p\nmid k\) and \(p\nmid hM\) for each M. Thus, we give the following definition, which seems interesting for further studying the theory of Dedekind sums. Definition 1 Let h, k be positive integers with \(( h,k ) =1\) and \(p\nmid k\). For \(s\in \mathbb{Z}_{p}\), we define p-adic Dedekind-type DC sums as follows: As a result of the above definition, we state the following theorem. Theorem 2.1 Suppose that \(m+1\equiv0\ (\operatorname{mod}p-1)\), and let \(( p^{-1}a ) _{N}\) denote the integer x with \(0\leq x< N\) and \(px\equiv a\ ( \operatorname {mod}N ) \). Then we have References 1. Araci, S, Acikgoz, M, Park, KH: A note on the q-analogue of Kim's p-adic log gamma-type functions associated with q-extension of Genocchi and Euler numbers with weight α. Bull. Korean Math. Soc. 50(2), 583-588 (2013) 2. Araci, S, Erdal, D, Seo, JJ: A study on the fermionic p-adic q-integral representation on \(\mathbb{Z}_{p}\) associated with weighted q-Bernstein and q-Genocchi polynomials. Abstr. Appl. Anal. 2011, Article ID 649248 (2011) 3. Araci, S, Acikgoz, M, Seo, JJ: Explicit formulas involving q-Euler numbers and polynomials. Abstr. Appl. Anal. 2012, Article ID 298531 (2012). doi:10.1155/2012/298531 4. Araci, S, Acikgoz, M, Esi, A: A note on the q-Dedekind-type Daehee-Changhee sums with weight α arising from modified q-Genocchi polynomials with weight α. J. Assam Acad. Math. 5, 47-54 (2012) 5. Kim, T: A note on p-adic q-Dedekind sums. C. R. Acad. Bulgare Sci. 54, 37-42 (2001) 6. Kim, T: Note on q-Dedekind-type sums related to q-Euler polynomials. Glasg. Math. J. 54, 121-125 (2012) 7. Kim, T: Note on Dedekind type DC sums. Adv. Stud. Contemp. Math. 18, 249-260 (2009) 8. Kim, T: The modified q-Euler numbers and polynomials. Adv. Stud. Contemp. Math. 16, 161-170 (2008) 9. Kim, T: q-Volkenborn integration. Russ. J. Math. Phys. 9, 288-299 (2002) 10. Kim, T: On a q-analogue of the p-adic log gamma functions and related integrals. J. Number Theory 76, 320-329 (1999) 11. Kim, T: On the weighted q-Bernoulli numbers and polynomials. Adv. Stud. Contemp. Math. 21(2), 207-215 (2011) 12. Rim, SH, Jeong, J: A note on the modified q-Euler numbers and polynomials with weight α. Int. Math. Forum 6(65), 3245-3250 (2011) 13. Ryoo, CS: A note on the weighted q-Euler numbers and polynomials. Adv. Stud. Contemp. Math. 21, 47-54 (2011) 14. Seo, JJ, Araci, S, Acikgoz, M: q-Dedekind-type Daehee-Changhee sums with weight α associated with modified q-Euler polynomials with weight α. J. Chungcheong Math. Soc.
27(1), 1-8 (2014) 15. Simsek, Y: q-Dedekind type sums related to q-zeta function and basic L-series. J. Math. Anal. Appl. 318, 333-351 (2006) 16. Şen, E, Acikgoz, M, Araci, S: A note on the modified q-Dedekind sums. Notes Number Theory Discrete Math. 19(3), 60-65 (2013) Acknowledgements The authors thank the reviewers for their helpful comments and suggestions, which have improved the quality of the paper. Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed equally to this work. All authors read and approved the revised manuscript.
Here's a geometric argument, but it isn't as slick as some of the Calculus-based ones. Consider the unit circle about $O$, through $R$ and $S$, with $\theta = \angle ROS$. The perpendicular from $S$ to $\overline{OR}$ has length $\sin\theta$, while the perpendicular from $R$ up to $T$ on the extension of $\overline{OS}$ has length $\tan\theta$. Let $M$ be the midpoint of $\overline{ST}$. Then $$2\;|\text{area of sector}\;ROS| = \theta \qquad\text{and}\qquad 2\;|\triangle ORM| = \frac{1}{2}\left(\sin\theta + \tan\theta\right)$$ "All we need to do" is show that the triangle has more area than the sector. This seems pretty clear; after all, the triangle contains almost all of the sector, except for the circular segment defined by $\overline{KR}$, where $K$ is the intersection of $\overline{RM}$ and the circle. There is a concern, though, that the excess area in the triangular region $KSM$ could be less than that of the tiny sliver of a circular segment for small $\theta$; we need to dispel that concern. There's probably a simpler route to this, but I coordinatized and, with the help of Mathematica, found$$M = \left(\frac{1 + \cos\theta}{2}, \frac{\sin\theta (1 + \cos\theta)}{2 \cos\theta}\right)$$$$K = \left(\frac{1 + 3 \cos\theta + 2 \cos^2\theta + 2 \cos^3\theta}{1 + 3 \cos\theta + 4 \cos^2\theta}, \frac{2 \sin\theta \cos\theta ( 1 + \cos\theta)}{1 + 3 \cos\theta + 4 \cos^2\theta}\right)$$so that (after a bit more symbol-crunching)$$\frac{|\overline{MK}|}{|\overline{KR}|} = \frac{1 + 3 \cos\theta}{4 \cos^2\theta} = 1 + \frac{1 + 3 \cos\theta - 4 \cos^2\theta}{4 \cos^2\theta} = 1 + \frac{(1-\cos\theta)(1 + 4 \cos\theta)}{4 \cos^2\theta} > 1$$ for $0 < \theta < \pi/2$. This says that $\overline{MK}$ is longer than $\overline{KR}$, so that we could reflect $R$ in $K$ to get $R^\prime$, and copy circular segment $KR$ as circular segment $KR^\prime$ inside $\triangle ORM$ yet tangent to the unit circle (and therefore outside of sector $ORS$). Consequently, the triangle definitely has more area than the sector, so we're done. $\square$
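For anyone who wants to double-check the Mathematica output, here is a short SymPy verification (my own addition, not part of the original answer). It checks that $K$ lies on the unit circle, that $R$, $K$, $M$ are collinear, and that the stated ratio is exact:

```python
import sympy as sp

t = sp.symbols('theta', positive=True)
c, s = sp.cos(t), sp.sin(t)

M = sp.Matrix([(1 + c) / 2, s * (1 + c) / (2 * c)])
D = 1 + 3*c + 4*c**2
K = sp.Matrix([(1 + 3*c + 2*c**2 + 2*c**3) / D, 2*s*c*(1 + c) / D])
R = sp.Matrix([1, 0])

# K on the unit circle: x^2 + y^2 - 1 should simplify to 0.
print(sp.simplify(K.dot(K) - 1))
# R, K, M collinear: cross product of (M - R) and (K - R) should be 0.
print(sp.simplify((M - R)[0]*(K - R)[1] - (M - R)[1]*(K - R)[0]))
# Squared ratio |MK|^2/|KR|^2 minus ((1 + 3c)/(4c^2))^2 should be 0.
ratio2 = (M - K).dot(M - K) / ((K - R).dot(K - R))
print(sp.simplify(ratio2 - ((1 + 3*c) / (4 * c**2))**2))
```

All three identities are exact polynomial-trigonometric identities, so each print should return 0.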
@JosephWright Well, we still need table notes etc. But just being able to selectively switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as the email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry, I explain badly in English :) I mean, if the rule was to use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can, I am sure, tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself "for fun".
I don't mean these as serious suggestions, and as you say you've already thought of everything. It's just that I'm getting to those points myself, so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today, as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild"; I don't see any way of having a macro that by default doesn't expand. @JosephWright it has a series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so have not tried it yet, but given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them; it would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
A practical approach that works in many examples [but not always, I know] is to try to find the nesting structure of the strings in the language. "Nested dependencies" have to be generated at the same time in different parts of the string. Also we have the basic toolbox: concatenation: $S\to S_1S_2$ if you can split the language into two consecutive parts, use this production union: $S\to S_1 \mid S_2$ split into disjoint parts iteration: $S\to S_1S \mid \varepsilon$ Example 1 Here is an example for the nesting (thank you Raphael). $L=\{b^ka^l(bc)^ma^nb^o \mid k,l,m,n,o\in {\Bbb N},k\neq o,2l=n,m\ge 2 \}$ Replace $n$ by $2l$. We can now drop $n$ in the conditions. Replace $k \neq o$ by $k > o \text{ or } k < o$ (confused? $o$ is 'oh' not 'zero'). Apply the tools for union. We work with $k > o$ here. Also $k>o$ iff $k=s+o$ and $s>0$ where $s$ is a new variable. Replace $k$ by $s+o$. $L_1 =\{b^{s+o}a^l(bc)^ma^{2l}b^o \mid l,m,o,s\in {\Bbb N},s>0,m\ge 2 \}$ Some simple rewrites. $L_1 =\{bb^sb^o a^l bcbc(bc)^m (aa)^{l}b^o \mid l,m,o,s\in {\Bbb N} \}$ Now we see the nesting structure, and start building a grammar. $S_1 \to TV$, $T\to bU$, $U\to bU \mid \varepsilon$ (see: concatenation and iteration here) $V \to bVb \mid W$ (we generate $o$ $b$'s on both sides) $W \to aWaa\mid X$ $X\to YZ$, $Y\to bcbc$, $Z\to bcZ\mid \varepsilon$ Example 2 $K =\{ a^kb^lc^m \mid l=m+k\}$ A first "obvious" rewrite. $K =\{ a^kb^{m+k}c^m \mid m,k\ge 0\} = \{ a^kb^mb^kc^m \mid m,k\ge 0\}$ In linguistics this is called "cross-serial dependency": the interleaving $k,m,k,m$ (usually) strongly indicates non-context-freeness. Of course $m+k=k+m$ and we are saved. $K =\{ a^kb^{k+m}c^m \mid m,k\ge 0\} = \{ a^kb^kb^mc^m \mid m,k\ge 0\}$ with productions $S\to XY$, $X\to aXb\mid \varepsilon$, $Y\to bYc\mid \varepsilon$ Similarly $K'= \{ a^kb^lc^m \mid m=k+l\} = \{ a^kb^lc^lc^k \mid k,l\ge 0\}$ with productions $S\to aSc \mid X$, $X\to bXc\mid \varepsilon$ Final comment: these techniques help you come up with a candidate context-free grammar that will hopefully recognize your language. A correctness proof may still be needed, to ensure that the grammar really works to recognize your language (nothing more, and nothing less).
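Once you have a candidate grammar, a cheap sanity check is to enumerate derivations and test them against the defining predicate. Here is a small sketch for the grammar of $K$ (my own addition); it only checks soundness, i.e. everything generated lies in $K$, not completeness:

```python
# Enumerate strings a^k b^k b^m c^m produced by S -> XY, X -> aXb | eps,
# Y -> bYc | eps, and confirm each satisfies the condition l = m + k.
from itertools import product

def generate(max_reps=4):
    """Yield the strings derivable with at most max_reps uses of each loop."""
    for k, m in product(range(max_reps + 1), repeat=2):
        yield 'a' * k + 'b' * k + 'b' * m + 'c' * m

def in_K(w):
    """Membership test for K = { a^k b^l c^m : l = m + k }."""
    k = len(w) - len(w.lstrip('a'))
    m = len(w) - len(w.rstrip('c'))
    l = len(w) - k - m
    return w == 'a' * k + 'b' * l + 'c' * m and l == m + k

assert all(in_K(w) for w in generate())
print(sorted(set(generate()), key=len)[:5])  # '', 'ab', 'bc', 'aabb', ...
```

A symmetric check, enumerating all words of $K$ up to some length and parsing them, would cover the completeness direction.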
I'm curious why rockets are so big. Since both the gravitational potential one needs to overcome in order to put things into orbit and the chemical energy burned from the fuel are proportional to the mass, it would seem that a scaled-down rocket should still be able to launch a (scaled-down) satellite. So why not build a small rocket, say the size of a human? I can imagine small rockets would be easier to manufacture in large quantities and easier to transport. And maybe someone could make a business out of small rockets, carrying individuals' own satellites. The problem is what Konstantin Tsiolkovsky discovered 100 years ago: as speed increases, the mass required (in fuel) increases exponentially. This relation, specifically, is$$\Delta v=v_e\ln\left(\frac{m_i}{m_f}\right)$$where $v_e$ is the exhaust velocity, $m_i$ the initial mass and $m_f$ the final mass. The above can be rearranged to get $$ m_f=m_ie^{-\Delta v/v_e}\qquad m_i=m_fe^{\Delta v/v_e} $$ or by taking the difference between the two, $$ M_f=1-\frac{m_f}{m_i}=1-e^{-\Delta v/v_e} $$ where $M_f$ is the exhaust mass fraction. If we assume we are starting from rest to reach 11.2 km/s (i.e., Earth's escape velocity) with a constant $v_e=4$ km/s (typical velocity for NASA rockets), we'd need $$ M_f=1-e^{-11.2/4}=0.939 $$ which means almost 94% of the mass at launch needs to be fuel! If we have a 2000 kg craft (about the size of a car), we would need nearly 31,000 kg of fuel in a craft that size. The liquid propellant has a density similar to water (so 1000 kg/m$^3$), so you'd need an object with a volume of 31.0 m$^3$ to hold it. Our car-sized object's interior would be around 3 m$^3$, a factor of 10 too small! This means we need a bigger craft, which means more fuel! And this explains why this mass-speed relation has been dubbed "the tyranny of the rocket problem". This also explains the fact that modern rockets are multi-staged. In an attempt to reduce the required fuel, once a stage uses all of its fuel, it is released from the rocket and the next stage is ignited (doing this over land is dangerous for obvious reasons, hence NASA launching rockets over water), and the mass of the craft is lowered by the mass of the (empty) stage. More on this can be found at these two Physics.SE posts: TL;DR: This answer arrives at roughly the same conclusion as Kyle Kanos' answer, i.e. in addition to payload considerations, the difficulty lies in stuffing a small rocket with a mass of fuel exceeding the mass of the rocket itself. This answer, however, is more rigorous in how the $\Delta v$ budget is treated. The rocket equation: Consider the Tsiolkovsky rocket equation, which describes the motion of vehicles that propel themselves by expelling part of their mass with a certain velocity. A simplified version which only takes (constant) gravity and thrust into account is given below: $$ \Delta v(t) = v_e \cdot \ln \frac{m_0}{m(t)} - g\left(\frac{m_f}{\dot m}\right) $$ where $v_e$ is the effective exhaust velocity, $m_f$ is the mass of the fuel aboard, $\dot m$ is the mass burn rate (constant with respect to time), $m_0$ is the initial mass of the rocket and $m(t)$ is the current mass of the rocket. Note that this is essentially a momentum exchange equation: you have a finite amount of momentum available from expulsion of fuel, which you must spend on increasing the velocity of the rocket + remaining fuel system, as well as overcoming gravity (i.e. dragging the planet ever so slightly).
A form of the Tsiolkovsky equation that does not take this into account (as in the other answer) will give you non-physical results. Constrained variables: Now, what can we play with in this equation? Assuming $t_{escape}$ is the time at which the rocket escapes Earth's gravity: $\Delta v(t_{escape})$ is simply our desired escape velocity (assuming the rocket starts from rest), which is dictated by where we're trying to send the rocket $m(t_{escape})$ will optimally be the mass of the rocket without any fuel The effective exhaust velocity $v_e$ and the rate of mass flow $\dot m$ are a function of the type of engine/propellant available This means none of these quantities are negotiable; we are constrained by the demands of the mission and the available technology. Developing a relationship between rocket and fuel mass: All we are left to play with is the initial masses of the rocket fuel $m_f$ and rocket body $m_r$. Let us substitute in the values of $v$ and $m$ at the instant when the rocket escapes gravity, noting that $m_0 = m_f + m_r$: $$ \begin{align} v_{escape} & = v_e \cdot \ln \frac{m_f + m_r}{m_r} - g\left(\frac{m_f}{\dot m}\right)\\ & = v_e \cdot \ln\left(1 + \frac{m_f}{m_r}\right) - g\left(\frac{m_f}{\dot m}\right) \end{align} $$ Rearranging, we have: $$ m_r = m_f \cdot \left(\exp\left(\frac{v_{esc} + g\left(\frac{m_f}{\dot m}\right)}{v_e}\right) -1\right)^{-1} $$ Note that this is effectively providing $m_r$ as a function of $m_f$, since all the other parameters are fixed by the constraints of the mission and equipment as well as environmental constants. Since the relationship isn't immediately obvious, here is a plot of $m_r$ against $m_f$ for selected values of the constants: In red, we have a plot of rocket mass versus initial fuel mass, while in blue we have a plot of the ratio of initial fuel mass to total mass. Note that the axis for the blue plot starts at 0.9!! This indicates that regardless of what rocket mass you picked, the net initial mass of your vehicle would have to consist almost entirely of fuel. So what does this mean? Filling a vehicle with a mass of fuel exceeding its own is increasingly difficult for small rockets, but not so difficult for much larger rockets (think of how the enclosed volume of a hollow body scales versus mass). This is why making smaller and smaller rockets becomes progressively more difficult. In addition, a minimum limit on the rocket mass we can choose is imposed by the weight of the payload it must carry, which could be anything from a satellite to a single person. Upper limit on payload: A very interesting thing happens near the inflection point of the rocket mass - fuel mass curve. Before the inflection point, adding more fuel allowed us to hoist a larger payload to the desired velocity. However, somewhere around $4 \cdot 10^6$ kg of fuel mass (for our selected parameter values) we discover that adding more fuel starts to decrease the payload that can be hoisted! What is happening here is that the cost of the additional fuel having to fight against gravity begins to win out against the benefit of having a high fuel to payload mass ratio. This shows there is a theoretical upper limit to the payload that can be hoisted on Earth using the propellant technology we have available. It is not possible to simply keep increasing the payload and fuel masses in equal proportion in order to lift arbitrarily large loads, as would be suggested by using the Tsiolkovsky equation with no extra terms for gravity. 
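To see both effects numerically, here is a small sketch of the relationships derived above. The escape and exhaust velocities are the values used in the answers; the constant burn rate is my own illustrative guess, since no value is given in the text:

```python
import math

v_esc = 11.2e3   # m/s, Earth escape velocity (from the answers above)
v_e = 4.0e3      # m/s, typical exhaust velocity (from the answers above)
g = 9.81         # m/s^2
mdot = 5000.0    # kg/s, assumed burn rate (illustrative only)

# Gravity-free fuel fraction from the first answer: ~0.939.
print(1 - math.exp(-v_esc / v_e))

def rocket_mass(m_f):
    """Dry mass liftable to v_esc with fuel mass m_f, per the formula above."""
    return m_f / (math.exp((v_esc + g * m_f / mdot) / v_e) - 1.0)

for m_f in (1e4, 1e5, 1e6, 4e6, 1e7):
    m_r = rocket_mass(m_f)
    print(f"fuel {m_f:10.0f} kg -> dry mass {m_r:8.0f} kg, "
          f"fuel fraction {m_f / (m_f + m_r):.3f}")
```

With these made-up numbers the liftable dry mass rises, peaks, and then falls as more fuel is added, which is exactly the behavior described around the inflection point, and the fuel fraction stays above 0.9 throughout, as in the blue plot.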
Consider the problem in the form of a ratio: what is the ratio of the mass used to lift the rocket (fuel) to the mass finally put into orbit (cockpit)? That proportion will be much the same for smaller objects that must be put into orbit. If you use the same ratio or proportion to calculate the needed fuel mass for a small craft, you will find you can't even carry the device holding your fuel. This is also why rockets use stages. The type of fuel used also has an impact, but those are details that need a new question. Because most payloads are quite heavy. I am not sure what kind of payloads you had in mind, and I am no expert on this, but I think that most launches contain satellites, which might be heavier than you think; for instance, the satellite in this BBC documentary weighs 6000 kg. And according to Wikipedia, miniaturized satellites weigh less than 500 kg (so heavier is normal). And some of those miniaturized satellites are using excess capacity on larger launch vehicles. I also think that smaller rockets would experience the turbulence of our atmosphere much more violently. Also think of the relatively higher costs in terms of personnel (such as mission control). And I would also expect that certain aspects do not scale linearly with size, but for me this would just be speculation. Mainly because you need a lot of speed to go into space, and to reach that speed, you need to accelerate. If you need a high speed, you will need to accelerate for a long time, thus the need for a large quantity of fuel. You also need to compensate for gravity during the whole lift. There are ways to reduce that fuel requirement, like a horizontal takeoff: you reach a high altitude and then launch, so you keep the engine, but you still need a lot of energy to fight against gravity, and wings can't lift you very high, so that would not be such a good fuel economy, and the plane would still need to be quite big. $ E = mc^2 $ The larger the mass, the more energy can be produced. And we still haven't found any fuel which in small quantities gives the needed amount of energy. I know you will be thinking of nuclear energy; we cannot fit a nuclear reactor inside a rocket with current technology, and even if we could fit one, I don't think our existing knowledge of nuclear science is sufficient to ensure accident-free reactors at such velocities.
In many studies, we measure more than one variable for each individual. For example, we measure precipitation and plant growth, or number of young with nesting habitat, or soil erosion and volume of water. We collect pairs of data and instead of examining each variable separately (univariate data), we want to find ways to describe bivariate data, in which two variables are measured on each subject in our sample. Given such data, we begin by determining if there is a relationship between these two variables. As the values of one variable change, do we see corresponding changes in the other variable? We can describe the relationship between these two variables graphically and numerically. We begin by considering the concept of correlation. Definition: Correlation Correlation is defined as the statistical association between two variables. A correlation exists between two variables when one of them is related to the other in some way. A scatterplot is the best place to start. A scatterplot (or scatter diagram) is a graph of the paired (x, y) sample data with a horizontal x-axis and a vertical y-axis. Each individual (x, y) pair is plotted as a single point. Figure 1. Scatterplot of chest girth versus length. In this example, we plot bear chest girth (y) against bear length (x). When examining a scatterplot, we should study the overall pattern of the plotted points. In this example, we see that the value for chest girth does tend to increase as the value of length increases. We can see an upward slope and a straight-line pattern in the plotted data points. A scatterplot can identify several different types of relationships between two variables. A relationship has no correlation when the points on a scatterplot do not show any pattern. A relationship is non-linear when the points on a scatterplot follow a pattern but not a straight line. A relationship is linear when the points on a scatterplot follow a somewhat straight line pattern. This is the relationship that we will examine. Linear relationships can be either positive or negative. Positive relationships have points that incline upwards to the right. As x values increase, y values increase. As x values decrease, y values decrease. For example, when studying plants, height typically increases as diameter increases. Figure 2. Scatterplot of height versus diameter. Negative relationships have points that decline downward to the right. As x values increase, y values decrease. As x values decrease, y values increase. For example, as wind speed increases, wind chill temperature decreases. Figure 3. Scatterplot of temperature versus wind speed. Non-linear relationships have an apparent pattern, just not linear. For example, as age increases height increases up to a point then levels off after reaching a maximum height. Figure 4. Scatterplot of height versus age. When two variables have no relationship, there is no straight-line relationship or non-linear relationship. When one variable changes, it does not influence the other variable. Figure 5. Scatterplot of growth versus area. Linear Correlation Coefficient Because visual examinations are largely subjective, we need a more precise and objective measure to define the correlation between the two variables.
To quantify the strength and direction of the relationship between two variables, we use the linear correlation coefficient: $$r = \dfrac {\sum \left(\dfrac {x_i-\bar x}{s_x}\right) \left(\dfrac {y_i - \bar y}{s_y}\right)}{n-1}$$ where \(\bar x\) and \(s_x\) are the sample mean and sample standard deviation of the x’s, and \(\bar y\) and \(s_y\) are the mean and standard deviation of the y’s. The sample size is n. An alternate computation of the correlation coefficient is: $$r = \dfrac {S_{xy}}{\sqrt {S_{xx}S_{yy}}}$$ where $$S_{xx} = \sum x^2 - \dfrac {(\sum x)^2}{n}$$ $$S_{xy} = \sum xy - \dfrac {(\sum x)(\sum y )}{n}$$ $$S_{yy} = \sum y^2 - \dfrac {(\sum y)^2}{n}$$ The linear correlation coefficient is also referred to as Pearson’s product moment correlation coefficient in honor of Karl Pearson, who originally developed it. This statistic numerically describes how strong the straight-line or linear relationship is between the two variables, as well as its direction, positive or negative. The properties of “r”: It is always between -1 and +1. It is a unitless measure, so “r” would be the same value whether you measured the two variables in pounds and inches or in grams and centimeters. Positive values of “r” are associated with positive relationships. Negative values of “r” are associated with negative relationships. Examples of Positive Correlation Figure 6. Examples of positive correlation. Examples of Negative Correlation Figure 7. Examples of negative correlation. Note Correlation is not causation!!! Just because two variables are correlated does not mean that one variable causes another variable to change. Examine these next two scatterplots. Both of these data sets have an r = 0.01, but they are very different. Plot 1 shows little linear relationship between x and y variables. Plot 2 shows a strong non-linear relationship. Pearson’s linear correlation coefficient only measures the strength and direction of a linear relationship. Ignoring the scatterplot could result in a serious mistake when describing the relationship between two variables. Figure 8. Comparison of scatterplots. When you investigate the relationship between two variables, always begin with a scatterplot. This graph allows you to look for patterns (both linear and non-linear). The next step is to quantitatively describe the strength and direction of the linear relationship using “r”. Once you have established that a linear relationship exists, you can take the next step in model building.
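As a quick illustration (my own example numbers, not data from the text), the defining formula and SciPy's built-in function agree:

```python
# Compute r two ways: directly from the definition above and with scipy.
import numpy as np
from scipy import stats

x = np.array([135.0, 120.0, 150.0, 140.0, 125.0])  # hypothetical lengths
y = np.array([45.0, 40.0, 55.0, 50.0, 41.0])       # hypothetical chest girths

n = len(x)
r_manual = np.sum((x - x.mean()) / x.std(ddof=1) *
                  (y - y.mean()) / y.std(ddof=1)) / (n - 1)
r_scipy, _ = stats.pearsonr(x, y)

print(round(r_manual, 4), round(r_scipy, 4))  # identical values (about 0.98)
```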
In the case of the Riemann zeta function, this proves the GUE random matrix model prediction in derivative aspect. In more detail, towards the bottom of the second page they say: Theorem 3 in the case of the Riemann zeta function is the derivative aspect Gaussian Unitary Ensemble (GUE) random matrix model prediction for the zeros of Jensen polynomials. To make this precise, recall that Dyson, Montgomery, and Odlyzko ... conjecture that the non-trivial zeros of the Riemann zeta function are distributed like the eigenvalues of random Hermitian matrices. These eigenvalues satisfy Wigner's Semicircular Law, as do the roots of the Hermite polynomials $H_d(X)$, when suitably normalized, as $d\rightarrow+\infty$ ... The roots of $J_{\gamma}^{d,0}(X)$, as $d\rightarrow+\infty$, approximate the zeros of $\Lambda\left(\frac{1}{2}+z\right)$, ... and so GUE predicts that these roots also obey the Semicircular Law. Since the derivatives of $\Lambda\left(\frac{1}{2}+z\right)$ are also predicted to satisfy GUE, it is natural to consider the limiting behavior of $J_{\gamma}^{d,n}(X)$ as $n\rightarrow+\infty$. The work here proves that these derivative aspect limits are the Hermite polynomials $H_d(X)$, which, as mentioned above, satisfy GUE in degree aspect. I am hoping someone can further explain this. In particular, does this result shed any light on the horizontal distribution of the zeros of the derivative of the Riemann zeta function? Edit: Speiser showed that the Riemann hypothesis is equivalent to $\zeta^\prime(s)\ne 0$ for $0<\sigma<1/2$. Since then quite a lot of work has gone into studying the horizontal distribution of the zeros of $\zeta^\prime(s)$. For example, Duenez et al. compared this distribution with the radial distribution of zeros of the derivative of the characteristic polynomial of a random unitary matrix. (Caveat: I'm not up to date on all the relevant literature.) This is a very significant question. If the GUE distribution holds for the Riemann zeros, then rarely but infinitely often there will be a pair of zeros with less than half the average (rescaled) gap. From this, by the work of Conrey and Iwaniec, one gets good lower bounds for the class number problem. In this paper Farmer and Ki showed that if the derivative of the Riemann zeta function has sufficiently many zeros close to the critical line, then the zeta function has many closely spaced zeros, which, by the above, also solves the class number problem. The question of modeling the horizontal distribution of the zeros of $\zeta^\prime(s)$ with the radial distribution of zeros of the derivative of the characteristic polynomial of a random unitary matrix is intimately connected to the class number problem. Based on the answer of Griffin below, I don't think that's what the Griffin-Ono-Rolen-Zagier paper does, but it's worth asking about.
Tsukuba Journal of Mathematics Tsukuba J. Math. Volume 34, Number 1 (2010), 117-128. Odd dimensional Riemannian submanifolds admitting the almost contact metric structure in a Euclidean sphere Abstract We investigate some odd dimensional Riemannian submanifolds admitting the almost contact metric structure $(\phi, \xi, \eta, \langle , \rangle)$ of a certain Euclidean sphere from the viewpoint of the weak $\phi$-invariance of the second fundamental form. The family of such submanifolds contains some homogeneous submanifolds of the ambient sphere. In the latter half of this paper, we calculate the mean curvature and the length of the derivative of the mean curvature vector of these homogeneous submanifolds. Article information Source Tsukuba J. Math., Volume 34, Number 1 (2010), 117-128. Dates First available in Project Euclid: 8 September 2010 Permanent link to this document https://projecteuclid.org/euclid.tkbjm/1283967411 Digital Object Identifier doi:10.21099/tkbjm/1283967411 Mathematical Reviews number (MathSciNet) MR2723727 Zentralblatt MATH identifier 1198.53016 Keywords real hypersurfaces; complex projective spaces; real hypersurfaces of type (A); Hopf hypersurfaces; ruled real hypersurfaces; homogeneous submanifold; strongly $\phi$-invariant; weakly $\phi$-invariant; the first standard minimal embedding; Euclidean spheres; mean curvature vector; length of the mean curvature vector Citation Okumura, Kazuhiro. Odd dimensional Riemannian submanifolds admitting the almost contact metric structure in a Euclidean sphere. Tsukuba J. Math. 34 (2010), no. 1, 117-128. doi:10.21099/tkbjm/1283967411. https://projecteuclid.org/euclid.tkbjm/1283967411
Hypothesis Test about the Population Mean (μ) when the Population Standard Deviation (σ) is Known We are going to examine two equivalent ways to perform a hypothesis test: the classical approach and the p-value approach. The classical approach is based on standard deviations. This method compares the test statistic (Z-score) to a critical value (Z-score) from the standard normal table. If the test statistic falls in the rejection zone, you reject the null hypothesis. The p-value approach is based on area under the normal curve. This method compares the area associated with the test statistic to alpha (α), the level of significance (which is also area under the normal curve). If the p-value is less than alpha, you would reject the null hypothesis. As a past student poetically said: If the p-value is a wee value, Reject Ho. Both methods must have: Data from a random sample. Verification of the assumption of normality. A null and alternative hypothesis. A criterion that determines if we reject or fail to reject the null hypothesis. A conclusion that answers the question. There are four steps required for a hypothesis test: State the null and alternative hypotheses. State the level of significance and the critical value. Compute the test statistic. State a conclusion. The Classical Method for Testing a Claim about the Population Mean (μ) when the Population Standard Deviation (σ) is Known Example \(\PageIndex{1}\): A Two-sided Test A forester studying diameter growth of red pine believes that the mean diameter growth will be different from the known mean growth of 1.35 inches/year if a fertilization treatment is applied to the stand. He conducts his experiment, collects data from a sample of 32 plots, and gets a sample mean diameter growth of 1.6 in./year. The population standard deviation for this stand is known to be 0.46 in./year. Does he have enough evidence to support his claim? Solution Step 1) State the null and alternative hypotheses. Ho: μ = 1.35 in./year H1: μ ≠ 1.35 in./year Step 2) State the level of significance and the critical value. We will choose a level of significance of 5% (α = 0.05). For a two-sided question, we need two-sided critical values, \(-Z_{\alpha/2}\) and \(+Z_{\alpha/2}\). The level of significance is divided by 2 (since we are testing "not equal"). We must have two rejection zones that can deal with either a greater-than or less-than outcome (to the right (+) or to the left (-)). We need to find the Z-score associated with the area of 0.025. The red areas are equal to α/2 = 0.05/2 = 0.025, or 2.5% of the area under the normal curve. Go into the body of values and find the negative Z-score associated with the area 0.025. Figure 1. The rejection zone for a two-sided test. The negative critical value is -1.96. Since the curve is symmetric, we know that the positive critical value is 1.96. ±1.96 are the critical values. These values set up the rejection zone. If the test statistic falls within these red rejection zones, we reject the null hypothesis. Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value. $$z = \frac {\bar {x} -\mu}{\frac {\sigma}{\sqrt {n}}}$$ For this problem, the test statistic is $$z = \frac {1.6-1.35}{\frac {0.46}{\sqrt {32}}} =3.07$$ Step 4) State a conclusion. Compare the test statistic to the critical value. If the test statistic falls into the rejection zones, reject the null hypothesis.
In other words, if the test statistic is greater than +1.96 or less than -1.96, reject the null hypothesis. Figure 2. The critical values for a two-sided test when α = 0.05. In this problem, the test statistic falls in the red rejection zone. The test statistic of 3.07 is greater than the critical value of 1.96. We will reject the null hypothesis. We have enough evidence to support the claim that the mean diameter growth is different from (not equal to) 1.35 in./year. Example \(\PageIndex{2}\): A Right-sided Test A researcher believes that there has been an increase in the average farm size in his state since the last study five years ago. The previous study reported a mean size of 450 acres with a population standard deviation (σ) of 167 acres. He samples 45 farms and gets a sample mean of 485.8 acres. Is there enough information to support his claim? Solution Step 1) State the null and alternative hypotheses. Ho: μ = 450 acres H1: μ > 450 acres Step 2) State the level of significance and the critical value. We will choose a level of significance of 5% (α = 0.05). For a one-sided question, we need a one-sided positive critical value \(Z_{\alpha}\). The level of significance is all in the right side (the rejection zone is just on the right side). We need to find the Z-score associated with the 5% area in the right tail. Figure 3. Rejection zone for a right-sided hypothesis test. Go into the body of values in the standard normal table and find the Z-score that separates the lower 95% from the upper 5%. The critical value is 1.645. This value sets up the rejection zone. Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value. $$z = \frac {\bar {x} -\mu}{\frac {\sigma}{\sqrt {n}}}$$ For this problem, the test statistic is $$z = \frac {485.8-450}{\frac {167}{\sqrt {45}}} =1.44$$ Step 4) State a conclusion. Compare the test statistic to the critical value. Figure 4. The critical value for a right-sided test when α = 0.05. The test statistic does not fall in the rejection zone. It is less than the critical value. We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean farm size has increased from 450 acres. Example \(\PageIndex{3}\): A Left-sided Test A researcher believes that there has been a reduction in the mean number of hours that college students spend preparing for final exams. A national study stated that students at a 4-year college spend an average of 23 hours preparing for 5 final exams each semester with a population standard deviation of 7.3 hours. The researcher sampled 227 students and found a sample mean study time of 19.6 hours. Does this indicate that the average study time for final exams has decreased? Use a 1% level of significance to test this claim. Solution Step 1) State the null and alternative hypotheses. Ho: μ = 23 hours H1: μ < 23 hours Step 2) State the level of significance and the critical value. This is a left-sided test, so alpha (0.01) is all in the left tail. Figure 9. The rejection zone for a left-sided hypothesis test. Go into the body of values in the standard normal table and find the Z-score that defines the lower 1% of the area. The critical value is -2.33. This value sets up the rejection zone. Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value.
$$z = \frac {\bar {x} - \mu}{\frac {\sigma} {\sqrt {n}}}$$ For this problem, the test statistic is $$z= \frac {19.6-23}{\frac {7.3}{\sqrt {227}}}=-7.02$$ Step 4) State a conclusion. Compare the test statistic to the critical value. Figure 10. The critical value for a left-sided test when α = 0.01. The test statistic falls in the rejection zone. The test statistic of -7.02 is less than the critical value of -2.33. We reject the null hypothesis. We have sufficient evidence to support the claim that the mean final exam study time has decreased below 23 hours. Testing a Hypothesis using P-values The p-value is the probability of observing our sample mean given that the null hypothesis is true. It is the area under the curve to the left or right of the test statistic. If the probability of observing such a sample mean is very small (less than the level of significance), we would reject the null hypothesis. Computations for the p-value depend on whether it is a one- or two-sided test. Steps for a hypothesis test using p-values: State the null and alternative hypotheses. State the level of significance. Compute the test statistic and find the area associated with it (this is the p-value). Compare the p-value to alpha (α) and state a conclusion. Instead of comparing the Z-score test statistic to the Z-score critical value, as in the classical method, we compare the area of the test statistic to the area of the level of significance. Note: The Decision Rule. If the p-value is less than alpha, we reject the null hypothesis. Computing P-values If it is a two-sided test (the alternative claim is ≠), the p-value is equal to two times the area beyond the absolute value of the test statistic. If the test is a left-sided test (the alternative claim is "<"), then the p-value is equal to the area to the left of the test statistic. If the test is a right-sided test (the alternative claim is ">"), then the p-value is equal to the area to the right of the test statistic. Let's look at Example \(\PageIndex{1}\) again. A forester studying diameter growth of red pine believes that the mean diameter growth will be different from the known mean growth of 1.35 in./year if a fertilization treatment is applied to the stand. He conducts his experiment, collects data from a sample of 32 plots, and gets a sample mean diameter growth of 1.6 in./year. The population standard deviation for this stand is known to be 0.46 in./year. Does he have enough evidence to support his claim? Step 1) State the null and alternative hypotheses. Ho: μ = 1.35 in./year H1: μ ≠ 1.35 in./year Step 2) State the level of significance. We will choose a level of significance of 5% (α = 0.05). Step 3) Compute the test statistic. For this problem, the test statistic is: $$z=\frac{1.6-1.35}{\frac{0.46}{\sqrt {32}}}=3.07$$ The p-value is two times the area beyond the absolute value of the test statistic (because the alternative claim is "not equal"). Figure 11. The p-value compared to the level of significance. Look up the area for the Z-score 3.07 in the standard normal table. The area (probability) is equal to 1 – 0.9989 = 0.0011. Multiply this by 2 to get the p-value = 2 * 0.0011 = 0.0022. Step 4) Compare the p-value to alpha and state a conclusion. Use the Decision Rule (if the p-value is less than α, reject H0). In this problem, the p-value (0.0022) is less than alpha (0.05). We reject the H0. We have enough evidence to support the claim that the mean diameter growth is different from 1.35 inches/year. Let's look at Example \(\PageIndex{2}\) again.
A researcher believes that there has been an increase in the average farm size in his state since the last study five years ago. The previous study reported a mean size of 450 acres with a population standard deviation (σ) of 167 acres. He samples 45 farms and gets a sample mean of 485.8 acres. Is there enough information to support his claim? Step 1) State the null and alternative hypotheses. Ho: μ = 450 acres H1: μ > 450 acres Step 2) State the level of significance. We will choose a level of significance of 5% (α = 0.05). Step 3) Compute the test statistic. For this problem, the test statistic is $$z= \frac {485.8-450}{\frac {167}{\sqrt {45}}}=1.44$$ The p-value is the area to the right of the Z-score 1.44 (the hatched area). This is equal to 1 – 0.9251 = 0.0749. The p-value is 0.0749. Figure 12. The p-value compared to the level of significance for a right-sided test. Step 4) Compare the p-value to alpha and state a conclusion. Use the Decision Rule. In this problem, the p-value (0.0749) is greater than alpha (0.05), so we Fail to Reject the H0. The area of the test statistic is greater than the area of alpha (α). We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean farm size has increased. Let's look at Example \(\PageIndex{3}\) again. A researcher believes that there has been a reduction in the mean number of hours that college students spend preparing for final exams. A national study stated that students at a 4-year college spend an average of 23 hours preparing for 5 final exams each semester with a population standard deviation of 7.3 hours. The researcher sampled 227 students and found a sample mean study time of 19.6 hours. Does this indicate that the average study time for final exams has decreased? Use a 1% level of significance to test this claim. Step 1) State the null and alternative hypotheses. H0: μ = 23 hours H1: μ < 23 hours Step 2) State the level of significance. This is a left-sided test, so alpha (0.01) is all in the left tail. Step 3) Compute the test statistic. For this problem, the test statistic is $$z=\frac {19.6-23}{\frac {7.3}{\sqrt {227}}}=-7.02$$ The p-value is the area to the left of the test statistic (the little black area to the left of -7.02). The Z-score of -7.02 is not on the standard normal table. The smallest probability on the table is 0.0002. We know that the area for the Z-score -7.02 is smaller than this area (probability). Therefore, the p-value is <0.0002. Figure 13. The p-value compared to the level of significance for a left-sided test. Step 4) Compare the p-value to alpha and state a conclusion. Use the Decision Rule. In this problem, the p-value (p<0.0002) is less than alpha (0.01), so we Reject the H0. The area of the test statistic is much less than the area of alpha (α). We reject the null hypothesis. We have enough evidence to support the claim that the mean final exam study time has decreased below 23 hours. Both the classical method and p-value method for testing a hypothesis will arrive at the same conclusion. In the classical method, the critical Z-score is the number on the z-axis that defines the level of significance (α). The test statistic converts the sample mean to units of standard deviation (a Z-score). If the test statistic falls in the rejection zone defined by the critical value, we will reject the null hypothesis. In this approach, two Z-scores, which are numbers on the z-axis, are compared. In the p-value approach, the p-value is the area associated with the test statistic.
In this method, we compare α (which is also area under the curve) to the p-value. If the p-value is less than α, we reject the null hypothesis. The p-value is the probability of observing such a sample mean when the null hypothesis is true. If the probability is too small (less than the level of significance), then we believe we have enough statistical evidence to reject the null hypothesis and support the alternative claim.

Software Solutions

Minitab (referring to Ex. 8)

One-Sample Z
Test of mu = 23 vs < 23
The assumed standard deviation = 7.3

  N    Mean   SE Mean   99% Upper Bound       Z       P
227  19.600     0.485            20.727   -7.02   0.000

Excel

Excel does not offer 1-sample hypothesis testing.
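Since Excel lacks this test, the calculation is easy to reproduce in a few lines of Python. The sketch below is my addition (assuming scipy is available, not part of the original text); it reruns Example 8, and the numbers match the Minitab output above.

from math import sqrt
from scipy.stats import norm

# Example 8: left-sided one-sample z-test with sigma known.
mu0, sigma = 23, 7.3        # hypothesized mean, known population SD
xbar, n = 19.6, 227         # sample mean, sample size
alpha = 0.01

z = (xbar - mu0) / (sigma / sqrt(n))   # test statistic: -7.02
p_value = norm.cdf(z)                  # left-tail area
z_crit = norm.ppf(alpha)               # critical value: -2.33

print(f"z = {z:.2f}, critical value = {z_crit:.2f}, p-value = {p_value:.4f}")
# z < z_crit and p < alpha, so we reject the null hypothesis.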
Hypothesis Test about the Population Mean (μ) when the Population Standard Deviation (σ) is Unknown

Frequently, the population standard deviation (σ) is not known. We can estimate it with the sample standard deviation (s). However, the test statistic will then no longer follow the standard normal distribution; we must rely on the student’s t-distribution with n-1 degrees of freedom. Because we use the sample standard deviation (s), the test statistic changes from a Z-score to a t-score.

$$z=\frac {\bar {x}-\mu}{\frac {\sigma}{\sqrt {n}}} \longrightarrow t = \frac {\bar {x} - \mu}{\frac {s}{\sqrt {n}}}$$

The steps for a hypothesis test are the same as those we covered in Section 2.

State the null and alternative hypotheses.
State the level of significance and the critical value.
Compute the test statistic.
State a conclusion.

Just as with the hypothesis test from the previous section, the data for this test must come from a random sample and require either that the population from which the sample was drawn be normal or that the sample size be sufficiently large (n≥30). A t-test is robust, so small departures from normality will not adversely affect the results of the test. That being said, if the sample size is smaller than 30, it is always good to verify the assumption of normality through a normal probability plot.

We still have the same three pairs of null and alternative hypotheses, and we can still use either the classical approach or the p-value approach. Selecting the correct critical value from the student’s t-distribution table depends on three factors: the type of test (one-sided or two-sided alternative hypothesis), the sample size, and the level of significance.

For a two-sided test (“not equal” alternative hypothesis), the critical value (tα/2) is determined by dividing alpha (α), the level of significance, by two, to deal with the possibility that the result could be less than OR greater than the known value. If your level of significance is 0.05, you would use the 0.025 column to find the correct critical value (0.05/2 = 0.025). If your level of significance is 0.01, you would use the 0.005 column (0.01/2 = 0.005).

For a one-sided test (a “less than” or “greater than” alternative hypothesis), the critical value (tα) is determined by putting all of alpha (α), the level of significance, in the one tail. If your level of significance is 0.05, you would use the 0.05 column to find the correct critical value for either a left- or right-sided question. If you are asking a “less than” (left-sided) question, your critical value will be negative. If you are asking a “greater than” (right-sided) question, your critical value will be positive.

Example \(\PageIndex{1}\) Find the critical value you would use to test the claim that μ ≠ 112 with a sample size of 18 and a 5% level of significance.

Solution In this case, the critical value (\(t_{α/2}\)) would be 2.110. This is a two-sided question (≠), so you would divide alpha by 2 (0.05/2 = 0.025) and go down the 0.025 column to 17 degrees of freedom.

Example \(\PageIndex{2}\) What would the critical value be if you wanted to test that μ < 112 for the same data?

Solution In this case, the critical value would be -1.740. This is a one-sided question (<), so all of alpha stays in one tail (0.05). You would go down the 0.05 column with 17 degrees of freedom to get the correct critical value, and make it negative because the question is left-sided.
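If a printed t-table is not at hand, the same critical values can be pulled from scipy’s t-distribution. This sketch is my addition and reproduces Examples 1 and 2:

from scipy.stats import t

df = 18 - 1   # n - 1 = 17 degrees of freedom

# Example 1: two-sided test at alpha = 0.05 puts 0.025 in each tail.
t_two_sided = t.ppf(1 - 0.025, df)    # 2.110
# Example 2: left-sided test at alpha = 0.05 puts all 0.05 in the left tail.
t_left = t.ppf(0.05, df)              # -1.740

print(round(t_two_sided, 3), round(t_left, 3))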
Example \(\PageIndex{3}\): A Two-sided Test

In 2005, the mean pH level of rain in a county in northern New York was 5.41. A biologist believes that the rain acidity has changed. He takes a random sample of 11 rain dates in 2010 and obtains the following data. Use a 1% level of significance to test his claim.

4.70, 5.63, 5.02, 5.78, 4.99, 5.91, 5.76, 5.54, 5.25, 5.18, 5.01

The sample size is small and we don’t know anything about the distribution of the population, so we examine a normal probability plot. The distribution looks normal, so we will continue with our test. Figure 14. A normal probability plot for Example 9. The sample mean is 5.343 with a sample standard deviation of 0.397.

Solution

Step 1) State the null and alternative hypotheses.
Ho: μ = 5.41
H1: μ ≠ 5.41
Step 2) State the level of significance and the critical value. This is a two-sided question, so alpha is divided by two. Figure 15. The rejection zones for a two-sided test. tα/2 is found by going down the 0.005 column with 10 degrees of freedom. tα/2 = ±3.169.
Step 3) Compute the test statistic. The test statistic is a t-score. $$t=\frac {\bar {x}-\mu}{\frac {s}{\sqrt {n}}}$$ For this problem, the test statistic is $$t=\frac {5.343-5.41}{\frac {0.397}{\sqrt {11}}} = -0.560$$
Step 4) State a conclusion. Compare the test statistic to the critical value. Figure 16. The critical values for a two-sided test when α = 0.01. The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean rain pH has changed.

Example \(\PageIndex{4}\): A One-sided Test

Cadmium, a heavy metal, is toxic to animals. Mushrooms, however, are able to absorb and accumulate cadmium at high concentrations. The government has set safety limits for cadmium in dry vegetables at 0.5 ppm. Biologists believe that the mean level of cadmium in mushrooms growing near strip mines is greater than the recommended limit of 0.5 ppm, negatively impacting the animals that live in this ecosystem. A random sample of 51 mushrooms gave a sample mean of 0.59 ppm with a sample standard deviation of 0.29 ppm. Use a 5% level of significance to test the claim that the mean cadmium level is greater than the acceptable limit of 0.5 ppm. The sample size is greater than 30, so we are assured of a normal distribution of the means.

Solution

Step 1) State the null and alternative hypotheses.
Ho: μ = 0.5 ppm
H1: μ > 0.5 ppm
Step 2) State the level of significance and the critical value. This is a right-sided question, so alpha is all in the right tail. Figure 17. Rejection zone for a right-sided test. tα is found by going down the 0.05 column with 50 degrees of freedom. tα = 1.676.
Step 3) Compute the test statistic. The test statistic is a t-score. $$t=\frac {\bar {x}-\mu}{\frac {s}{\sqrt {n}}}$$ For this problem, the test statistic is $$t=\frac {0.59-0.50}{\frac {0.29}{\sqrt {51}}}=2.216$$
Step 4) State a conclusion. Compare the test statistic to the critical value. Figure 18. Critical value for a right-sided test when α = 0.05. The test statistic falls in the rejection zone. We reject the null hypothesis. We have enough evidence to support the claim that the mean cadmium level is greater than the acceptable safe limit.

BUT, what happens if the significance level changes to 1%? The critical value is now found by going down the 0.01 column with 50 degrees of freedom. The critical value is 2.403. The test statistic is now LESS THAN the critical value.
The test statistic does not fall in the rejection zone. The conclusion changes: we do NOT have enough evidence to support the claim that the mean cadmium level is greater than the acceptable safe limit of 0.5 ppm.

Note

The level of significance is the probability that you, as the researcher, set to decide if there is enough statistical evidence to support the alternative claim. It should be set before the experiment begins.

P-value Approach

We can also use the p-value approach for a hypothesis test about the mean when the population standard deviation (σ) is unknown. However, when using a student’s t-table, we can only estimate the range of the p-value, not a specific value as when using the standard normal table. The student’s t-table has area (probability) across the top row, with t-scores in the body of the table. To find the p-value (the area associated with the test statistic), go to the row with the correct number of degrees of freedom. Go across that row until you find the two values that your test statistic falls between, then go up those columns to find the estimated range for the p-value.

Example \(\PageIndex{5}\): Estimating a P-value from a Student’s T-table

Table 3. Portion of the student’s t-table.

Solution

If your test statistic is 3.789 with 3 degrees of freedom, you would go across the 3 df row. The value 3.789 falls between the values 3.482 and 4.541 in that row. Therefore, the p-value is greater than 0.01 but less than 0.02 (0.01<p<0.02).

Conclusion

If your level of significance is 5%, you would reject the null hypothesis, as the p-value (0.01-0.02) is less than alpha (α) of 0.05. If your level of significance is 1%, you would fail to reject the null hypothesis, as the p-value (0.01-0.02) is greater than alpha (α) of 0.01. Software packages typically output p-values, and it is easy to use the Decision Rule to answer your research question by the p-value method.

Software Solutions

Minitab (referring to Ex. 12)

One-Sample T
Test of mu = 0.5 vs > 0.5

 N    Mean   StDev   SE Mean   95% Lower Bound      T      P
51  0.5900  0.2900    0.0406            0.5219   2.22  0.016

Additional example: www.youtube.com/watch?v=WwdSjO4VUsg.

Excel

Excel does not offer 1-sample hypothesis testing.
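As with the z-test, the t-test is simple to script when only summary statistics are available. A sketch (my addition, using scipy) for the cadmium example:

from math import sqrt
from scipy.stats import t

# Cadmium example: right-sided one-sample t-test from summary statistics.
mu0 = 0.5
xbar, s, n = 0.59, 0.29, 51

t_stat = (xbar - mu0) / (s / sqrt(n))   # 2.216
p_value = t.sf(t_stat, n - 1)           # right-tail area: ~0.016

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p = 0.016: reject H0 at the 5% level but fail to reject at the 1% level,
# matching the two conclusions reached above.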
Seeing how the essential question was answered, I want to stress something else in your post which needs to be pointed out: it is true that cardinals (namely, aleph numbers) are usually treated as ordinals; however, addition and multiplication of cardinals and ordinals are very different, and most of all, exponentiation is different as well.

For ordinals $\alpha$ and $\beta$ we define the sum to be:

$\alpha + 0 = \alpha$
$\alpha + (\beta + 1) = (\alpha + \beta) + 1$ (where $+1$ is the successor ordinal)
$\alpha + \beta$ for a limit ordinal $\beta$ is the limit of $\alpha+\gamma$ for $\gamma<\beta$

One notices that ordinal addition is usually non-commutative, as $2+\omega = \sup\{2+n\colon n<\omega\} = \omega \neq \omega+2$. Ordinal multiplication is defined in a similar way, as is exponentiation (namely, a simple rule for zero, a rule for successors, and a limit for limits), and an interesting result is that $\omega^\omega$ is countable when dealing with ordinal exponentiation.

In contrast, if $\lambda$ and $\mu$ are infinite cardinals then $\lambda + \mu = \mu + \lambda = \lambda \cdot \mu = \mu \cdot \lambda = \max \{\lambda, \mu\}$, and exponentiation is defined as $\lambda^\mu = |\{f \mid f\colon\mu\to\lambda\}|$, that is, the cardinality of the collection of functions from $\mu$ into $\lambda$. For further information and definitions you can see the Wikipedia articles on ordinal and cardinal arithmetic.

So once you were dealing with the cardinality of the basis, you were looking for cardinal arithmetic and not ordinal arithmetic, which are two different things.
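To make the contrast concrete (my own added example, using only the standard definitions above), take base 2 with exponent $\omega$ versus $\aleph_0$, which are “the same object” as sets but behave completely differently under the two exponentiations:

$$2^\omega = \sup\{2^n \colon n<\omega\} = \omega \quad \text{(ordinal exponentiation: countable)}$$
$$2^{\aleph_0} = |\{f \mid f\colon\aleph_0\to 2\}| > \aleph_0 \quad \text{(cardinal exponentiation: uncountable, by Cantor's theorem)}$$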
Hypothesis Test for a Population Proportion (p)

Frequently, the parameter we are testing is the population proportion. We are studying the proportion of trees with cavities for wildlife habitat. We need to know if the proportion of people who support green building materials has changed. Has the proportion of wolves that died last year in Yellowstone increased from the year before? Recall that the best point estimate of p, the population proportion, is given by

$$\hat {p} = \dfrac {x}{n}$$

where x is the number of individuals in the sample with the characteristic studied and n is the sample size. The sampling distribution of p̂ is approximately normal with a mean \(\mu_{\hat {p}} = p\) and a standard deviation

$$\sigma_{\hat {p}} = \sqrt {\dfrac {p(1-p)}{n}}$$

when np(1 – p) ≥ 10. We can use both the classical approach and the p-value approach for testing.

The steps for a hypothesis test are the same that we covered in Section 2.

State the null and alternative hypotheses.
State the level of significance and the critical value.
Compute the test statistic.
State a conclusion.

The test statistic follows the standard normal distribution. Notice that the standard error (the denominator) uses p instead of p̂, which was used when constructing a confidence interval about the population proportion. In a hypothesis test, the null hypothesis is assumed to be true, so the known proportion is used.

$$ z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$$

The critical value comes from the standard normal table, just as in Section 2. We still use the same three pairs of null and alternative hypotheses as in the previous sections, but the parameter is now p instead of μ:

For a two-sided test, alpha will be divided by 2, giving a ±Zα/2 critical value.
For a left-sided test, alpha will be all in the left tail, giving a –Zα critical value.
For a right-sided test, alpha will be all in the right tail, giving a Zα critical value.

Example \(\PageIndex{1}\)

A botanist has produced a new variety of hybrid soy plant that is better able to withstand drought than other varieties. The botanist knows the seed germination rate for the parent plants is 75%, but does not know the seed germination rate for the new hybrid, so he tests the claim that it is different from the parent plants. To test this claim, 450 seeds from the hybrid plant are tested and 321 have germinated. Use a 5% level of significance to test the claim that the germination rate is different from 75%.

Solution

Step 1) State the null and alternative hypotheses.
Ho: p = 0.75
H1: p ≠ 0.75
Step 2) State the level of significance and the critical value. This is a two-sided question, so alpha is divided by 2. Alpha is 0.05, so the critical values are ±Zα/2 = ±Z.025. Look on the negative side of the standard normal table, in the body of values, for 0.025. The critical values are ±1.96.
Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample proportion is from the known proportion. It is also a Z-score, just like the critical value. $$ z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$$ For this problem, the test statistic is $$z=\dfrac {0.713-0.75}{\sqrt {\dfrac {0.75(1-0.75)}{450}}} = -1.81$$
Step 4) State a conclusion. Compare the test statistic to the critical value. Figure 19. Critical values for a two-sided test when α = 0.05. The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis.
We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.

Let’s answer this question using the p-value approach. Remember, for a two-sided alternative hypothesis (“not equal”), the p-value is two times the area beyond the test statistic. The test statistic is -1.81, and we want the area to the left of -1.81 from the standard normal table. On the negative page, find the Z-score -1.81 and the area associated with it. The area = 0.0351. This is a two-sided test, so multiply the area by 2 to get the p-value = 0.0351 x 2 = 0.0702. Now compare the p-value to alpha. The Decision Rule states that if the p-value is less than alpha, reject H0. In this case, the p-value (0.0702) is greater than alpha (0.05), so we fail to reject H0. We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.

Example \(\PageIndex{2}\):

You are a biologist studying the wildlife habitat in the Monongahela National Forest. Cavities in older trees provide excellent habitat for a variety of birds and small mammals. A study five years ago stated that 32% of the trees in this forest had suitable cavities for this type of wildlife. You believe that the proportion of cavity trees has increased. You sample 196 trees and find that 79 trees have cavities. Does this evidence support your claim that there has been an increase in the proportion of cavity trees? Use a 10% level of significance to test this claim.

Solution

Step 1) State the null and alternative hypotheses.
Ho: p = 0.32
H1: p > 0.32
Step 2) State the level of significance and the critical value. This is a right-sided question, so alpha is all in the right tail. Alpha is 0.10, so the critical value is Zα = Z.10. Look on the positive side of the standard normal table, in the body of values, for 0.90. The critical value is 1.28. Figure 20. Critical value for a right-sided test where α = 0.10.
Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample proportion is from the known proportion. It is also a Z-score, just like the critical value. $$ z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$$ For this problem, the test statistic is: $$z= \frac {0.403-0.32}{\sqrt {\frac {0.32(1-0.32)}{196}}}=2.49$$
Step 4) State a conclusion. Compare the test statistic to the critical value. Figure 21. Comparison of the test statistic and the critical value. The test statistic is larger than the critical value (it falls in the rejection zone). We reject the null hypothesis. We have enough evidence to support the claim that there has been an increase in the proportion of cavity trees.

Now use the p-value approach to answer the question. This is a right-sided question (“greater than”), so the p-value is equal to the area to the right of the test statistic. Go to the positive side of the standard normal table and find the area associated with the Z-score of 2.49. The area is 0.9936. Remember that this table is cumulative from the left, so to find the area to the right of 2.49, we subtract from one. p-value = (1 – 0.9936) = 0.0064. The p-value is less than the level of significance (0.10), so we reject the null hypothesis. We have enough evidence to support the claim that the proportion of cavity trees has increased.

Software Solutions

Minitab (referring to Ex. 15)
Test and CI for One Proportion
Test of p = 0.32 vs p > 0.32

Sample   X    N  Sample p   90% Lower Bound   Z-Value   p-Value
     1  79  196  0.403061          0.358160      2.49     0.006

Using the normal approximation.

Excel

Excel does not offer 1-sample hypothesis testing.
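For completeness, a Python sketch (my addition) of the cavity-tree test; the normal-approximation results agree with the Minitab output above.

from math import sqrt
from scipy.stats import norm

# Cavity-tree example: right-sided one-proportion z-test.
p0 = 0.32
x, n = 79, 196
p_hat = x / n                               # 0.403061

z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)  # 2.49
p_value = norm.sf(z)                        # right-tail area: ~0.006

print(f"p-hat = {p_hat:.6f}, z = {z:.2f}, p-value = {p_value:.3f}")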
Mathematics is like any activity, sport, or skill: it must be honed and practiced. With that in mind, I have been bolstering up my abilities in algebra with a fantastic book, A Book of Abstract Algebra, by Charles C. Pinter. [1] As I go through the chapters, I will be posting and discussing selected relevant exercises that have applications. This first post is a short one on operations and concatenation. I discussed operators here, but will revisit the topic.

What’s an operation?

First, we define an operation. From Pinter:

Definition (operation): An operation on a given set A, often denoted \ast, is a rule which assigns to each pair of elements (a,b) in A exactly one element, denoted a\ast b, in A.

Pinter stresses several things about this definition that should not just be read and glossed over. First, a\ast b has to be defined for every single pair of elements in the set A. Some obvious examples include multiplication on integers, or addition on real numbers. Some rules try to sneak by but are not operations, including division on the real numbers. Why does this fail? Remember that 0 is a real number. I can divide 0 by anything I like, but can I divide anything by 0? No. A real number divided by 0 is not defined. [2]

Secondly, the rule sends a pair of elements to exactly one element. Mathematically, this means a rule must be well defined to be an operation. It can’t take the same pair of elements and get two possible answers.

Lastly, a rule must be closed to be an operation. Whatever (a,b) gets mapped to must also be in the set A. Here again, addition of real numbers is closed: adding two real numbers yields a real number. Division also becomes our counterexample here, but let’s look at division on the space of just integers. Dividing 2 by 3 is certainly defined, but the result is not an integer; it’s a rational number. [3]

On to Concatenation

Operations are not just defined on numbers, and they don’t necessarily have to be your standard arithmetic examples (addition, subtraction, multiplication). We can define operations on any space we want, provided we satisfy the definition. Here we will talk about concatenation as an operation on the space of sequences of symbols. If you’re a programmer, you probably assume we’re looking at binary sequences, but we concatenate English words too. (We just call these compound words, like “lifetime” or “backbone”.)

Let’s call A the alphabet. If we are living in binary, then A = \{0,1\}. If we are speaking of English letters, the alphabet is A = \{a,b,c,\ldots,z\}. Now we can call A^{\ast} the set of all sequences of symbols in the alphabet A. [4]

Now we will define an operation on A^{*}: concatenation. Many of you already know what this is, so we’ll formally define it here: if \mathbf{a} and \mathbf{b} are two sequences in A^{*}, where \mathbf{a} = a_{1}a_{2}\ldots a_{n} and \mathbf{b} = b_{1}b_{2}\ldots b_{m} (and each a_{i}, b_{i} is drawn from the alphabet), then the concatenation of \mathbf{a} and \mathbf{b} is given by \mathbf{ab} = a_{1}a_{2}\ldots a_{n}b_{1}b_{2}\ldots b_{m}. That is, just stick \mathbf{b} to the end of \mathbf{a}. Quick example: if we live in binary, then for \mathbf{a} = 1010 and \mathbf{b} = 001, the concatenation is \mathbf{ab} = 1010001. Let’s also note that there is an empty sequence or NULL sequence \lambda that consists of nothing at all. It’s pretty easy to check that concatenation meets the definition of an operation.

Pinter asks us to do three simple things here:

1. Prove that the operation defined above is associative.

Associativity is the property of an operation that allows us to group however we like.
Addition is associative: (1+2) + 3 = 1 + (2+3). In general, to show associativity of an operation, we need to show that for \mathbf{a,b,c} \in A^{*}, (\mathbf{ab})\mathbf{c} = \mathbf{a}(\mathbf{bc}).

First, we will define three generic sequences in our A^{*}. Let

\begin{aligned}\mathbf{a} &= a_{1}a_{2}\ldots a_{m}\\\mathbf{b} &=b_{1}b_{2}\ldots b_{n}\\\mathbf{c} &= c_{1}c_{2}\ldots c_{p}\end{aligned}

Notice here that I did not specify an alphabet, and that the lengths of \mathbf{a}, \mathbf{b}, and \mathbf{c} are different. We need to keep this as general as possible to prove the statement. Every restriction we place (same length, specific alphabet, etc.) weakens the argument. Now we will show associativity formally:

\begin{aligned}(\mathbf{ab})\mathbf{c}&=(a_{1}a_{2}\ldots a_{m}b_{1}b_{2}\ldots b_{n})c_{1}c_{2}\ldots c_{p}\\&= a_{1}a_{2}\ldots a_{m}b_{1}b_{2}\ldots b_{n}c_{1}c_{2}\ldots c_{p}\end{aligned}

Here we can see that we can put parentheses any way we want to, and it doesn’t affect the end word. That is,

\begin{aligned}(\mathbf{ab})\mathbf{c}&=(a_{1}a_{2}\ldots a_{m}b_{1}b_{2}\ldots b_{n})c_{1}c_{2}\ldots c_{p}\\&= a_{1}a_{2}\ldots a_{m}b_{1}b_{2}\ldots b_{n}c_{1}c_{2}\ldots c_{p}\\&=a_{1}a_{2}\ldots a_{m}(b_{1}b_{2}\ldots b_{n}c_{1}c_{2}\ldots c_{p})\\&=\mathbf{a}(\mathbf{bc})\end{aligned}

So we’ve concluded that the grouping in which you perform the operation on several elements doesn’t matter, and thus concatenation is associative.

2. Explain why the operation is not commutative.

A commutative operation is an operation where the order of the elements in the operation doesn’t matter. That is, for an operation \ast, a\ast b = b\ast a. Examples of commutative operations are addition and multiplication on real numbers (2+5 = 5+2, for example). An example of a noncommutative operation is matrix multiplication. In general, the order in which you multiply matrices matters. As an explicit illustration, \begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\1&1\end{pmatrix}=\begin{pmatrix}2&1\\1&1\end{pmatrix}, but \begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix}=\begin{pmatrix}1&1\\1&2\end{pmatrix}.

We can see now why concatenation is not commutative. If you append \mathbf{b} to \mathbf{a}, you will definitely not get the same result as appending \mathbf{a} to \mathbf{b}. “Townhome” and “hometown” are certainly two different words, but the two pieces, “home” and “town”, are the same; switching the order of concatenation gave a different result. Since I have produced a single counterexample, concatenation cannot be commutative in general. [5]

3. Prove there is an identity element for this operation.

An identity element is an element that lives in A^{*} that, when applied via the operation in question to any other element in the set, in any order, returns that other element. Formally, \mathbf{e} \in A^{*} is an identity element for operation \ast if and only if \mathbf{ea} = \mathbf{ae} = \mathbf{a} for every single element \mathbf{a} in A^{*}. Some examples: the identity element for regular addition on the real numbers is 0 (0 + x = x + 0 = x for any real number x), and the identity element for multiplication on the real numbers is 1.

For concatenation, we seek an element that, when concatenated with any other element, in any order, returns that element. Proving existence has several strategies. The simplest one (in theory) is to find a candidate element and show it meets the criteria. [6] Here, since concatenation involves essentially “compounding” two things together, the only way we could keep an element “unchanged” is to concatenate nothing to it. The NULL element \lambda (which is certainly a word, just a NULL word.
Computer scientists are very familiar with this concept) fits our bill here. Try it out: if you take nothing and concatenate a word to it, you just get that word. Conversely, concatenating nothing to a given word doesn’t change the word. So the NULL element is our identity element.

Conclusion

Operations and rules aren’t equivalent. A rule is anything we make up for some purpose, but an operation has to meet specific criteria to be called one. This post was meant to show that operations are not restricted to just the space of numbers that we deal with every day; we can look at spaces of objects, matrices, functions, words, sequences…anything we like. In addition, we can define operations as long as we satisfy the definition. This represents the power of abstract algebra: we can take structure we thought belonged only to a very restrictive space (like numbers, or even matrices) and look at other things, like concatenation, in a different light. (A quick code spot-check of the three properties above follows the footnotes.)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Footnotes

[1] Dover editions are amazing. There are many fantastic books for a far more affordable price than the “mainstream” publishers. (Looking at you, Springer.)
[2] This is why it’s not entirely correct to say that division is the inverse operation of multiplication. Division isn’t an operation on the set of real numbers, unless you restrict the space to exclude 0.
[3] Division is usually a good place to start if you’re looking for counterexamples in algebra.
[4] I didn’t specify a length here. So if we are in binary, 00, 01011, and 1 are all in the set of sequences of symbols built from the binary alphabet. Feel free to try other alphabets as well: binary, Greek, Cyrillic, hexadecimal, etc.
[5] Challenge question: can you find an alphabet and sequence space under which concatenation would be commutative? Matrix multiplication is commutative in very special cases. Does this also hold true for concatenation?
[6] This seems simple, but some problems do not yield an obvious candidate. Sometimes it takes years to find one when proving existence. A lot relies on intuition and experience, which isn’t exactly what you want to hear. But in this particular case, we can look at stuff we are already familiar with, 0 and 1, and try to find something “similar” in our new space.
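A quick spot-check of the three exercises in Python (my addition, not from Pinter): strings under + are exactly A* under concatenation, with the empty string standing in for the NULL word \lambda.

# Python strings under "+" model A* under concatenation.
a, b, c = "1010", "001", "11"

assert (a + b) + c == a + (b + c)   # 1. associativity
assert a + b != b + a               # 2. a counterexample to commutativity
assert "" + a == a + "" == a        # 3. the empty string is the identity

print(a + b)   # prints 1010001, the example from the text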
Taiwanese Journal of Mathematics Taiwanese J. Math. Volume 15, Number 4 (2011), 1447-1456. Ergodic Retractions for Semigroups in Strictly Convex Banach Spaces Abstract We study the existence of ergodic retractions for semigroups of mappings in strictly convex Banach spaces. We prove, for instance, the following theorem. Let $(X,\|\cdot\|)$ be a strictly convex Banach space and let $\Gamma$ be a norming set for $X$. Let $C$ be a bounded and convex subset of $X$, and suppose $C$ is compact in the $\Gamma$-topology. If $\mathcal S$ is a right amenable semigroup, $\varphi=\{T_s:s\in\mathcal S\}$ is a semigroup on $C$ with a nonempty set $F=F(\varphi)$ of common fixed points, and each $T_s$ is ($F$-quasi-) nonexpansive, then there exists an ($F$-quasi-) nonexpansive retraction $R$ from $C$ onto $F$ such that $RT_s=T_sR=R$ for each $s\in \mathcal S$, and every $\Gamma$-closed, convex and $\varphi$-invariant subset of $C$ is also $R$-invariant. Article information Source Taiwanese J. Math., Volume 15, Number 4 (2011), 1447-1456. Dates First available in Project Euclid: 18 July 2017 Permanent link to this document https://projecteuclid.org/euclid.twjm/1500406356 Digital Object Identifier doi:10.11650/twjm/1500406356 Mathematical Reviews number (MathSciNet) MR2848966 Zentralblatt MATH identifier 1244.47049 Subjects Primary: 47H09: Contraction-type mappings, nonexpansive mappings, A-proper mappings, etc. 47H10: Fixed-point theorems [See also 37C25, 54H25, 55M20, 58C30] 47H20: Semigroups of nonlinear operators [See also 37L05, 47J35, 54H15, 58D07] Keywords $\Gamma$-topology mean nonexpansive ergodic retraction nonexpansive mapping nonexpansive semigroup quasi-nonexpansive ergodic retraction quasi-nonexpansive mapping quasi-nonexpansive semigroup Citation Kaczor, Wieslawa; Reich, Simeon. Ergodic Retractions for Semigroups in Strictly Convex Banach Spaces. Taiwanese J. Math. 15 (2011), no. 4, 1447--1456. doi:10.11650/twjm/1500406356. https://projecteuclid.org/euclid.twjm/1500406356
Hypothesis Test about a Variance

When people think of statistical inference, they usually think of inferences involving population means or proportions. However, the particular population parameter needed to answer an experimenter’s practical questions varies from one situation to another, and sometimes a population’s variability is more important than its mean; product quality, for instance, is often defined in terms of low variability.

The sample variance \(s^2\) can be used for inferences concerning a population variance \(\sigma^2\). For a random sample of n measurements drawn from a normal population with mean μ and variance \(\sigma^2\), the value \(s^2\) provides a point estimate for \(\sigma^2\). In addition, the quantity \(\frac {(n-1)s^2}{\sigma^2}\) follows a chi-square (\(\chi^{2}\)) distribution with \(df = n - 1\).

The properties of the chi-square (\(\chi^{2}\)) distribution are:

Unlike the Z and t distributions, the values in a chi-square distribution are all positive.
The chi-square distribution is asymmetric, unlike the Z and t distributions.
There are many chi-square distributions. We obtain a particular one by specifying the degrees of freedom \((df = n - 1)\) associated with the sample variance \(s^2\).

Figure 22. The chi-square distribution.

One-sample \(\chi^{2}\) test for testing the hypotheses:

Null hypothesis: \(H_0: \sigma^{2} = \sigma^{2}_{0}\) (constant)
Alternative hypothesis:
\(H_a: \sigma^2 > \sigma_{0}^{2}\) (one-tailed): reject \(H_0\) if the observed \(\chi^2 > \chi_{U}^{2}\) (upper-tail value at α).
\(H_a: \sigma^2 < \sigma_{0}^{2}\) (one-tailed): reject \(H_0\) if the observed \(\chi^2 < \chi_{L}^{2}\) (lower-tail value at α).
\(H_a: \sigma^2 \ne \sigma_{0}^{2}\) (two-tailed): reject \(H_0\) if the observed \(\chi^2 > \chi_{U}^{2}\) or \(\chi^{2} < \chi_{L}^{2}\) at α/2.

where the \(\chi^2\) critical value in the rejection region is based on degrees of freedom \(df = n - 1\) and a specified significance level α.

Test statistic: $$\chi^2 = \frac{(n-1)S^2}{\sigma _{0}^{2}}$$

As with the previous sections, if the test statistic falls in the rejection zone set by the critical value, you will reject the null hypothesis.

Example \(\PageIndex{1}\):

A forester wants to control a dense understory of striped maple that is interfering with desirable hardwood regeneration using a mist blower to apply an herbicide treatment. She wants to make sure the treatment has a consistent application rate, in other words, low variability not exceeding 0.25 gal./acre (a variance of 0.06 gal.²). She collects sample data (n = 11) on this type of mist blower and gets a sample variance of 0.064 gal.². Using a 5% level of significance, test the claim that the variance is significantly greater than 0.06 gal.².

\(H_0: \sigma^{2} = 0.06\)
\(H_1: \sigma^{2} > 0.06\)

The critical value is 18.307. Any test statistic greater than this value will cause you to reject the null hypothesis. The test statistic is

$$\chi^2 = \frac {(n-1)S^2}{\sigma_{0}^{2}}=\frac {(11-1)0.064}{0.06}=10.667$$

We fail to reject the null hypothesis. The forester does NOT have enough evidence to support the claim that the variance is greater than 0.06 gal.².

You can also estimate the p-value using the same method as for the student’s t-table. Go across the row for the degrees of freedom until you find the two values that your test statistic falls between. In this case, going across row 10, the two table values are 4.865 and 15.987. Now go up those two columns to the top row to estimate the p-value (0.1-0.9). The p-value is greater than 0.1 and less than 0.9.
Both are greater than the level of significance (0.05), causing us to fail to reject the null hypothesis.

Software Solutions

Minitab (referring to Ex. 16)

Test and CI for One Variance

Method
Null hypothesis         Sigma-squared = 0.06
Alternative hypothesis  Sigma-squared > 0.06
The chi-square method is only for the normal distribution.

Tests
Test Method   Statistic   DF   P-Value
Chi-Square        10.67   10     0.384

Excel

Excel does not offer 1-sample \(\chi^2\) testing.
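Excel aside, the chi-square variance test takes only a few lines in Python. This sketch (my addition) checks the mist-blower example against the Minitab output:

from scipy.stats import chi2

# Mist-blower example: right-sided test of a variance.
sigma2_0 = 0.06
s2, n = 0.064, 11
df = n - 1

chi2_stat = df * s2 / sigma2_0        # 10.667
crit = chi2.ppf(0.95, df)             # 18.307 for alpha = 0.05
p_value = chi2.sf(chi2_stat, df)      # 0.384, matching Minitab

print(f"chi2 = {chi2_stat:.3f}, critical value = {crit:.3f}, p = {p_value:.3f}")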
ISSN: 1078-0947 eISSN: 1553-5231

Discrete & Continuous Dynamical Systems - A, June 2009, Volume 24, Issue 2

Abstract: This paper presents a geometric description on Lie algebroids of Lagrangian systems subject to nonholonomic constraints. The Lie algebroid framework provides a natural generalization of classical tangent bundle geometry. We define the notion of nonholonomically constrained system, and characterize regularity conditions that guarantee that the dynamics of the system can be obtained as a suitable projection of the unconstrained dynamics. The proposed novel formalism provides new insights into the geometry of nonholonomic systems, and allows us to treat in a unified way a variety of situations, including systems with symmetry, morphisms, reduction, and nonlinearly constrained systems. Various examples illustrate the results.

Abstract: This paper studies the internal controllability and stabilizability of a family of Boussinesq systems recently proposed by J. L. Bona, M. Chen and J.-C. Saut to describe the two-way propagation of small amplitude gravity waves on the surface of water in a canal. The space of the controllable data for the associated linear system is determined for all values of the four parameters. As an application of this newly established exact controllability, some simple feedback controls are constructed such that the resulting closed-loop systems are exponentially stable. When the parameters are all different from zero, the local exact controllability and stabilizability of the nonlinear system are also established.

Abstract: We prove that a $C^2$ diffeomorphism $f$ of a compact manifold $M$ satisfies Axiom A and the strong transversality condition if and only if it is Hölder stable, that is, any $C^1$ diffeomorphism $g$ of $M$ sufficiently $C^1$ close to $f$ is conjugate to $f$ by a homeomorphism which is Hölder on the whole manifold.

Abstract: We establish a quenched Central Limit Theorem (CLT) for a smooth observable of random sequences of iterated linear hyperbolic maps on the torus. To this end we also obtain an annealed CLT for the same system. We show that, almost surely, the variance of the quenched system is the same as for the annealed system. Our technique is the study of the transfer operator on an anisotropic Banach space specifically tailored to use the cone condition satisfied by the maps.

Abstract: We consider one parameter families of analytic vector fields and diffeomorphisms, including for a parameter value, say $\varepsilon = 0$, the product of rotations in $\R^{2m}\times \R^n$ such that for positive values of the parameter the origin is a hyperbolic point of saddle type. We address the question of determining the limit stable invariant manifold when $\varepsilon$ goes to zero as a subcenter invariant manifold when $\varepsilon = 0$.

Abstract: We give an explicit formula for exponential decay properties of positive solutions for a class of semilinear elliptic equations with Hardy term in the whole space $\R^n$.

Abstract: Let $T_{f}\colon S^1\to S^1$ be a circle homeomorphism with two break points $a_b$, $c_b$, meaning that the derivative $Df$ of its lift $f\colon\mathbb{R}\rightarrow\mathbb{R}$ has discontinuities at the points $\tilde{a}_b$, $\tilde{c}_b$, which are the representative points of $a_b$, $c_b$ in the interval $[0,1)$, and with irrational rotation number $\rho_f$.
Suppose that $Df$ is absolutely continuous on every connected interval of the set $[0,1]\setminus\{\tilde{a}_b, \tilde{c}_b\}$, that $D\log Df \in L^{1}([0,1])$, and that the product of the jump ratios of $Df$ at the break points is nontrivial, i.e. $\frac{Df_{-}(\tilde{a}_{b})}{Df_{+}(\tilde{a}_{b})}\frac{Df_{-}(\tilde{c}_{b})}{Df_{+}(\tilde{c}_{b})} \ne 1$. We prove that the unique $T_f$-invariant probability measure $\mu_{f}$ is then singular with respect to Lebesgue measure on $S^1$.

Abstract: We consider nonlinear elliptic problems driven by the $p$-Laplacian with a nonsmooth potential depending on a parameter $\lambda > 0$. The main result guarantees the existence of two positive, two negative and a nodal (sign-changing) solution for the studied problem whenever $\lambda > 0$ belongs to a small interval $(0, \lambda^{*})$ and $p \ge 2$. We do not impose any symmetry hypothesis on the nonlinear potential. The constant-sign solutions are obtained by using variational techniques based on nonsmooth critical point theory (minimization argument, Mountain Pass theorem, and a Brézis-Nirenberg type result for $C^1$-minimizers), while the nodal solution is constructed by an upper-lower solutions argument combined with the Zorn lemma and a nonsmooth second deformation theorem.

Abstract: In this paper, we consider the analytic reducibility problem of an analytic $d$-dimensional quasi-periodic cocycle $(\alpha, A)$ on $U(n)$ where $\alpha$ is a Diophantine vector. We prove that, if the cocycle is conjugated to a constant cocycle $(\alpha, C)$ by a measurable conjugacy $(0, B)$, then for almost all $C$ it is analytically conjugated to $(\alpha, C)$ provided that $A$ is sufficiently close to some constant. Moreover $B$ is actually analytic if it is continuous.

Abstract: In this paper, a one-dimensional bipolar hydrodynamic model is considered. This system takes the form of Euler-Poisson with electric field and frictional damping added to the momentum equations. The large time behavior of $L^\infty$ entropy solutions of the bipolar hydrodynamic model is firstly studied. Previous works on this topic are mainly concerned with the smooth solution in which no vacuum occurs and the initial data is small. It is proved in this paper that any bounded entropy solution strongly converges to the similarity solution of the porous media equation or the heat equation in $L^2(\mathbb{R})$ with time decay rate. The initial data can contain vacuum and can be arbitrarily large. The method is also applied to improve the convergence rate of [F. Huang, R. Pan, Arch. Rational Mech. Anal., 166 (2003), 359-376] for compressible Euler equations with damping. As a by-product, it is shown that the bounded $L^\infty$ entropy solution of the bipolar hydrodynamic model converges to the entropy solution of Euler equations with damping as $t\rightarrow\infty$.

Abstract: We consider Anosov thermostats on a closed surface and the X-ray transform on functions which are up to degree two in the velocities. We show that the subspace where the X-ray transform fails to be s-injective is finite dimensional. Furthermore, if the surface is negatively curved and the thermostat is pure Gaussian (i.e. no magnetic field is present), then the X-ray transform is s-injective.

Abstract: This paper is concerned with the following Lotka-Volterra cross-diffusion system $$u_t = \Delta[(1+k\rho(x)v)u] + u(a-u-c(x)v) \quad \text{in } \Omega \times (0, \infty),$$ $$\tau v_t = \Delta v + v(b+d(x)u-v) \quad \text{in } \Omega \times (0, \infty)$$ in a bounded domain $\Omega \subset \mathbb{R}^N$ with Neumann boundary conditions $\partial_\nu u = \partial_\nu v = 0$ on $\partial\Omega$.
In the previous paper [18], the author has proved that the set of positive stationary solutions forms a fishhook-shaped branch $\Gamma$ under a segregation of $\rho(x)$ and $d(x)$. In the present paper, we give some criteria on the stability of solutions on $\Gamma$. We prove that the stability of solutions changes only at every turning point of $\Gamma$ if $\tau$ is large enough. In a different case that $c(x) > 0$ is large enough, we find a parameter range such that multiple Hopf bifurcation points appear on $\Gamma$.

Abstract: We study critical threshold phenomena in a dynamic continuum traffic flow model known as the Payne and Whitham (PW) model. This model is a quasi-linear hyperbolic relaxation system, and when equilibrium velocity is specifically associated with pressure, the equilibrium characteristic speed resonates with one characteristic speed of the full relaxation system. For a scenario of physical interest we identify a lower threshold for finite time singularity in solutions and an upper threshold for the global existence of the smooth solution. The set of initial data leading to global smooth solutions is large, in particular allowing initial velocity of negative slope.

Abstract: In this paper, we consider a non-autonomous stochastic Lotka-Volterra competitive system $dx_i(t) = x_i(t)\left[\left(b_i(t)-\sum_{j=1}^{n} a_{ij}(t)x_j(t)\right)dt + \sigma_i(t)\,dB_i(t)\right]$, where $B_i(t)$ ($i=1, 2, \cdots, n$) are independent standard Brownian motions. Some dynamical properties are discussed and the sufficient conditions for the existence of global positive solutions, stochastic permanence, extinction as well as global attractivity are obtained. In addition, the limit of the average in time of the sample paths of solutions is estimated.

Abstract: We prove that the Cauchy problem for the three-dimensional Zakharov-Kuznetsov equation is locally well-posed for data in $H^s(\R^3)$, $s > \frac{9}{8}$.

Abstract: We are interested in the planar Lorentz process with a periodic configuration of strictly convex obstacles and with finite horizon. Its recurrence comes from a criteria of Conze in [8] or of Schmidt in [15] and from the central limit theorem for the billiard in the torus ([2,4,19]). Another way to prove recurrence is given by Szász and Varjú in [18]. Total ergodicity follows from these results (see [16] and [12]). In this paper we answer a question of Szász about the asymptotic behaviour of the number of visited cells when the time goes to infinity. It is not more difficult to study the asymptotic of the number of obstacles hit by the particle when the time goes to infinity. We give an estimate for the expectation and a result of almost sure convergence. For the simple random walk in $\mathbb{Z}^2$, this question has been studied by Dvoretzky and Erdös in [10]. We adapt the proof of Dvoretzky and Erdös. The lack of independence is compensated by a strong decorrelation result due to Chernov ([6]) and by some refinement (obtained in [14]) of the local limit theorem proved by Szász and Varjú in [18].

Abstract: Skew-product semiflows induced by semi-convex and type-K competitive almost periodic delay differential equations are studied. If $M$ is a compact positively invariant subset of the skew-product semiflow, then continuous separation of the skew-product semiflow on $M$ holds. Furthermore, if two minimal subsets $M_{1}$ and $M_{2}$ of the skew-product semiflow satisfy the completely strongly type-K ordering $M_{1} \ll^{C}_{K} M_{2}$, then $M_{1}$ is an attractor.
Finally, these results are applied to a nonautonomous delayed Hopfield-type neural network with a diagonal-nonnegative type-K monotone interconnection matrix, and sufficient conditions are obtained for the existence of global or partial attractors.

Abstract: We give an explicit geometric description of the $\times2$, $\times3$ system, and use this to study a uniform family of Markov partitions related to those of Wilson and Abramov. The behaviour of these partitions is stable across expansive cones and transitions in this behaviour detect the non-expansive lines.

Abstract: We study a free boundary problem modelling the growth of non-necrotic tumors with fluid-like tissues. The fluid velocity satisfies Stokes equations with a source determined by the proliferation rate of tumor cells which depends on the concentration of nutrients, subject to a boundary condition with stress tensor effected by surface tension. It is easy to prove that this problem has a unique radially symmetric stationary solution. By using a functional approach, we prove that there exists a threshold value $\gamma^* > 0$ for the surface tension coefficient $\gamma$, such that in the case $\gamma > \gamma^*$ this radially symmetric stationary solution is asymptotically stable under small non-radial perturbations, whereas in the opposite case it is unstable.
I've read all the existing answers long ago but still feel that none have gotten to the heart of the issue.

We obtain mathematical results through a process of reasoning. That reasoning must be logical and enough to convince anyone that our results are correct given our initial assumptions. That is the actual purpose of a proof. It does not matter what form the reasoning takes, whether using only words or only mathematical symbols or only a diagram. The requirement is simply to convince the other person. If we cannot do so, then our reasoning is insufficient or incorrect.

This proper attitude must start right from the basics. For example, $\frac{1+2}{1+3} \ne \frac{\not{1}+2}{\not{1}+3}$. Explaining to the student that one cannot do that is almost useless. Instead, the student should be asked: "Why do you cancel?" and then "Why does cancelling keep the value the same?". The problem is that if this is not done from the beginning of arithmetic, it simply causes students to create for themselves a deep quagmire of guesswork in order to heuristically write down things which they believe will get them their grades. If you have seen students who try to mimic their teachers' phrasing but clearly without understanding the meaning, or students who care only about how to get the answer and not why the method is correct, you know what I mean.

As a result, very few students have a full grasp of even the fundamentals, namely the field of rationals. What I mean by this is that few are able to state all the field axioms correctly and prove results like the uniqueness of inverses (when they exist), that $0 \times x = 0$, and that $-x \times -y = x \times y$. (Out of these, fewer still can give any explanation as to the rationale for the axioms, but that is another topic.)

It is obvious that with a proper foundation as I briefly described above, no student would ever write $(a+b)^2 = a^2+b^2$. Why? Because they know that "$x^2$" is defined as "$x \times x$" and "$()$" are used to denote what to do first, so $(a+b)^2 = (a+b) \times (a+b)$. Moreover, they would also know the distributivity field axiom that gives first $(a+b) \times (a+b) = a \times (a+b) + b \times (a+b)$ and then, after 2 more applications, the full expansion, using the commutativity and associativity axioms. Likewise none of the other mistakes that you mentioned would occur.

Furthermore, if students cannot handle the field axioms correctly, one might as well throw the induction axiom out of the window. The way it is taught in most textbooks and curricula is seriously lacking, precisely because it is not based on sufficiently formal reasoning. A simple example that most students who were brought up with textbook induction fail to solve is: Given a function $f:\mathbb{Z}\to\mathbb{R}$ such that $f(0) = 0$ and $f(1) = 1$ and $f(x+1) + 6 f(x-1) = 5 f(x)$ for any $x \in \mathbb{Z}$, prove that $f(x) = 3^x - 2^x$ for any $x \in \mathbb{Z}$. It is not hard at all, but only those who understand the logical structure of induction would be able to give a correct proof.

Finally, proper reasoning naturally requires sufficient precision, because one cannot reason logically about statements whose meaning is undefined or unclear. Vagueness in mathematics is one great recipe for confusion. This must start with the teacher.
A teacher who is sloppy with mathematical statements or steps in reasoning is simply telling the students that it is alright to be sloppy, and by extension that it is alright if they do not know what they are doing as long as they get the answer! One terrible example of sloppiness in most high-school curricula is solving differential equations by "separating variables". Try giving the following to any student: Solve for $y$ as a function of a real variable $x$ given that the differential equation $\frac{dy}{dx} = 2\sqrt{y}$ holds. You know what answer to expect, and I hope you know the correct answer. Even Wolfram Alpha gets it wrong. Now for students who give the wrong answer, tell them that it is wrong but do not tell them the correct answer, and ask if they can identify the mistake and fix it. Most will fail to identify the mistake, and fixing the mistake will require the foundation in logic that most students do not have.

Here are the solution sketches for the problems I've given above. I strongly encourage you to thoroughly check your own work to verify whether each step follows completely logically from the preceding deductions, and to look at these sketches only to confirm.

Problem: Given a function $f:\mathbb{Z}\to\mathbb{R}$ such that $f(0) = 0$ and $f(1) = 1$ and $f(x+1) + 6 f(x-1) = 5 f(x)$ for any $x \in \mathbb{Z}$, prove that $f(x) = 3^x - 2^x$ for any $x \in \mathbb{Z}$.

Hints: Induction only allows you to derive something about the natural numbers; the desired theorem is about integers. Also, if you cannot prove the implication needed for the induction, a key technique that often works is to strengthen the induction hypothesis to include enough information so that you can prove the implication step. Of course, that also means that the implication you need to prove has changed!

Solution sketch: First notice that the theorem to be proven is that $f(x) = 3^x - 2^x$ for all integers $x$, and so induction in one direction is not enough! Also, notice that it is impossible to prove that $f(x) = 3^x - 2^x$ implies $f(x+1) = 3^{x+1} - 2^{x+1}$, and hence the induction hypothesis must contain information about at least two 'data points' for $f$. The easiest choice is to let $P(x)$ be "$f(x) = 3^x - 2^x$ and $f(x-1) = 3^{x-1} - 2^{x-1}$". Then one must prove $P(x+1)$, which expands to "$f(x+1) = 3^{x+1} - 2^{x+1}$ and $f(x) = 3^x - 2^x$". I would not accept it if the student does not fully prove $P(x+1)$. This would handle the natural numbers, and a similar induction would handle the negative integers. It is of course possible to combine both inductions into one, which is worth exploring, although in general it is good to keep a proof as modular as possible.

Problem: Solve for $y$ as a function of a real variable $x$ given that the differential equation $\frac{dy}{dx} = 2\sqrt{y}$ holds.

Hint: The answer is not $y = (x+a)^2$, which you would get by the method of separating variables. What went wrong? Note that the error would still be there if you used the theorem that allows change of variables in an integral. Look carefully at each deduction step. One step cannot be justified based on any axiom. Think basic arithmetic. After you get that, you need to consider cases and use the completeness axiom for reals to extend the open intervals on which the standard solution works.

Solution sketch: The field axioms only give you a multiplicative inverse when it is not zero. Now how to solve the problem? Split into cases.
Note that you need to work on intervals, since having isolated points where $y$ is nonzero is useless. First prove that for any point where $y \ne 0$, there is an open interval around $x$ on which $y \ne 0$. Then we can use the completeness axiom for reals to extend the interval in both directions as far as $y \ne 0$. Now we can use any method to solve for $y$ on that interval. Note that the method of separating variables is formally invalid, so we should use the change of variables substitution. But the prerequisite for that is that $\frac{dy}{dx}$ is continuous, so we need to prove that! Well, $y$ is differentiable and hence continuous, so $2\sqrt{y}$ is continuous. So we get the solution on the extended interval, and it shows that $y$ becomes zero in exactly one direction in this example. Hence after some checking you will get either $y = 0$ or $y = \cases{ 0 & \text{if } x \le a \\ (x-a)^2 & \text{if } x > a }$ for some real $a$.

Alternative subproof: In fact, the substitution theorem can be completely avoided as follows. On any interval $I$ where $y \ne 0$, we have $y'^2 = 4y$, where "${}'$" denotes the derivative with respect to $x$. Thus $(y'^2)' = (4y)'$, which gives $2y'y'' = 4y'$, and hence $y'' = 2$ since $y' = 2\sqrt{y} \ne 0$. Thus $y' = 2x+c$ on $I$ for some real $c$, and hence $y = x^2+cx+d$ on $I$ for some real $d$. Note that most of the above steps are not reversible, and hence we need to check all the solutions we finally obtain against the original differential equation; doing so forces $c^2 = 4d$. After simple manipulation we obtain the same result for $y$ on $I$ as in the other solution. The other parts of the solution still need to be there.
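Both sketches are easy to sanity-check mechanically. The snippet below is my addition, using sympy; it verifies consequences of the claims and is not a substitute for the proofs.

import sympy as sp

# The induction problem: f(x) = 3^x - 2^x satisfies
# f(x+1) + 6 f(x-1) = 5 f(x); check exactly on a range of integers.
x = sp.symbols('x')
f = 3**x - 2**x
for k in range(-5, 6):
    assert f.subs(x, k + 1) + 6 * f.subs(x, k - 1) == 5 * f.subs(x, k)

# The ODE: on a branch where y > 0, write t = x - a > 0 and y = t^2;
# then dy/dt = 2t equals 2*sqrt(y), because sqrt(t^2) = t for t > 0.
t = sp.symbols('t', positive=True)
y = t**2
assert sp.diff(y, t) - 2 * sp.sqrt(y) == 0
# The y = 0 branch satisfies y' = 2*sqrt(y) trivially, so the piecewise
# solution above solves the equation everywhere.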
Putting it all Together Using the Classical Method

To Test a Claim about μ when σ is Known

Write the null and alternative hypotheses.
State the level of significance and get the critical value from the standard normal table.
Compute the test statistic. $$z=\frac {\bar {x}-\mu}{\frac {\sigma}{\sqrt {n}}}$$
Compare the test statistic to the critical value (Z-score) and write the conclusion.

To Test a Claim about μ when σ is Unknown

Write the null and alternative hypotheses.
State the level of significance and get the critical value from the student’s t-table with n-1 degrees of freedom.
Compute the test statistic. $$t=\frac {\bar {x}-\mu}{\frac {s}{\sqrt {n}}}$$
Compare the test statistic to the critical value (t-score) and write the conclusion.

To Test a Claim about p

Write the null and alternative hypotheses.
State the level of significance and get the critical value from the standard normal distribution.
Compute the test statistic. $$z=\frac {\hat {p}-p}{\sqrt {\frac {p(1-p)}{n}}}$$
Compare the test statistic to the critical value (Z-score) and write the conclusion.

Table 4. A summary table for critical Z-scores.

To Test a Claim about a Variance

Write the null and alternative hypotheses.
State the level of significance and get the critical value from the chi-square table using n-1 degrees of freedom.
Compute the test statistic. $$\chi^2 = \frac {(n-1)S^2}{\sigma^{2}_{0}}$$
Compare the test statistic to the critical value and write the conclusion.
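All of the critical values collected in the tables above can also be generated on demand. A sketch (my addition, using scipy), here for α = 0.05 and n = 25:

from scipy.stats import norm, t, chi2

alpha, n = 0.05, 25
df = n - 1

z_two   = norm.ppf(1 - alpha / 2)    # two-sided z:  +/- 1.960
z_right = norm.ppf(1 - alpha)        # right-sided z: 1.645
t_two   = t.ppf(1 - alpha / 2, df)   # two-sided t:  +/- 2.064
t_right = t.ppf(1 - alpha, df)       # right-sided t: 1.711
chi_hi  = chi2.ppf(1 - alpha, df)    # upper-tail chi-square: 36.415
chi_lo  = chi2.ppf(alpha, df)        # lower-tail chi-square: 13.848

print(z_two, z_right, t_two, t_right, chi_hi, chi_lo)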
From S.L. Linear Algebra: Show that the association $A \rightarrow g_A$ is an isomorphism between the space of $m \times n$ matrices and the space of bilinear maps of $\mathbb{K}^m \times \mathbb{K}^n$ into $\mathbb{K}$.

Note: In calculus, if $f$ is a function of $n$ variables, one associates with $f$ a matrix of second partial derivatives ($\frac{\partial^2 f}{\partial x_i \partial x_j}$), which is symmetric. This matrix represents the second derivative, which is a bilinear map.

The notation $g_A$ means that the bilinear map $g$ has a matrix $A$ associated with it, such that $g_A(X, Y)=X^TAY$.

Confusion: It is clear that $A$ is simply an arbitrary $m \times n$ matrix and $g_A$ is a bilinear map $\mathbb{K}^m \times \mathbb{K}^n \rightarrow \mathbb{K}$. The association above, $A \rightarrow g_A$, can be "expanded" to $A \rightarrow (\mathbb{K}^m \times \mathbb{K}^n \rightarrow \mathbb{K})$, which makes it seem like a composite map $L: A \circ \mathbb{K}$. Thus if we have $X \in \mathbb{K}^m$ and $Y \in \mathbb{K}^n$, then it seems like $L(X, Y)=g_A(X^TAY)=(X^TAY)^2$. I'm highly confused about the definition of the mapping $L$; if I were certain that the definition I've obtained is correct, then perhaps I could show that the kernel of $L$ is trivial (showing injectivity), and then by rank-nullity that the map is bijective. Is the result above appropriate by any means? Perhaps the example in the Note was a linear map similar to $L$ above (which is why the second derivative is discussed there)? It seems to me the Note is talking about a specialization of the Hessian matrix, which contains the second derivatives as elements. In short, what is the correct definition of the association $A \rightarrow g_{A}$?
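For reference, the standard reading of the association (a sketch, not a quotation from the book) is as a map between two vector spaces: $$\Phi : \mathrm{Mat}_{m\times n}(\mathbb{K}) \to \mathrm{Bil}(\mathbb{K}^m\times\mathbb{K}^n,\,\mathbb{K}), \qquad \Phi(A) = g_A, \quad g_A(X,Y) = X^{T}AY.$$ Here $\Phi$ is linear in $A$, and it is injective because the entries of $A$ can be recovered from $g_A$ by feeding in standard basis vectors: $g_A(e_i, e_j) = e_i^{T}Ae_j = a_{ij}$.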
J.M. Hure addressed parts of the problem in Solutions of the axi-symmetric Poisson equation from elliptic integrals (2005), giving a form of the field in the vicinity of a loop charge. I will post his published result here with his notation. These equations are for a mass distributed in a loop with linear mass density $\lambda$, radius $a$, elevated to a height $z$ (as far as I'm concerned, $z=0$). The coordinates used are then $R$ for the distance to the vertical axis and $Z$ for the vertical position. Note that this notation is very different from what I use. Two intermediary values are introduced, $k$ and $k'$. The answer is for the gravitational field in the case of Newtonian gravity (standard use of $G$) and is denoted $< \kappa_R, \kappa_\Phi, \kappa_Z>$. The equations follow. $$k^2 = \frac{ 4 a R }{ (a+R)^2 + (z-Z)^2 }$$$$k'^2 = 1-k^2$$ $$\kappa_R = \frac{G}{R} \sqrt{\frac{a}{R} } k \left( E(k)-K(k)+\frac{(a-R) k^2 E(k)}{2 a k'^2 } \right)$$$$\kappa_\Phi = 0$$$$\kappa_Z = \frac{G (z-Z)}{ 2 R \sqrt{a R} } \frac{k^3 E(k) }{k'^2}$$ And yes, this is what I had. I gave the potential in my question because it was the simplest expression that could convey the mathematics for a loop geometry. Hure also notes a 1964 paper by Durand regarding these equations, Électrostatique. I. Les distributions, which may contain the first published form for the loop geometry. Hure also discusses self-gravitation of the loop geometry and introduces the concept of a loop singularity. I think he means that a loop singularity implies infinite self-gravitation, although his wording does not say so explicitly. Nonetheless, he avoids the problem by writing an integral for self-gravitation using a mass density, $\rho$, and not a linear mass density, $\lambda$ (similar to my $F_{effective}$ below), indicating that the issue of infinite self-gravitation was well understood. Hure's paper then goes on to offer empirical methods for the self-gravitation as well as some other problems. Before I move on to my own solution, I should note one difference about the gravitational problem. This question is about the electric wire force, in which case the charge is a surface charge. In the case of gravitation, the entire volume contains a mass density, meaning that ultimately I address a surface integral here, but the gravitational problem requires a volume integral. The volume integral is not easy with the $r \ll R$ method I apply here. I've tried. I will write my own equation for the field here, which is mathematically equivalent to the one above. I will make one new definition: $r$, the closest distance from a point to the loop. (These should really be negative, since I've written them for the electric field, but I'll keep them positive for consistency with the above equations, which are for the gravitational field.) $$r = \sqrt{ (\rho-R)^2 + z^2 }$$ $$< F_\rho, F_\theta, F_z> = \nabla E(\rho,z)$$ $$F_\rho = \frac{2 k R \lambda }{ l \rho } \left( K \left( 2 \frac{ \sqrt{ R \rho } }{l} \right) + \frac{ \rho^2-R^2-z^2}{r^2 } E\left( 2 \frac{ \sqrt{ R \rho } }{l} \right) \right)$$$$F_\theta = 0$$$$F_z = \frac{ 4 k R \lambda z }{r^2 l} E\left( 2 \frac{ \sqrt{ R \rho } }{l} \right)$$ Starting from here I will directly address the question I asked. Recall the equation for the field around an infinite line charge above.
The potential then follows a $\ln(r)$ form. Keep in mind that the potential is a relative measure, and since the potential around a ring charge given above was found by integrating the $1/r$ differential potentials, it will have the same offset. $$E(r) = - 2 \lambda k \ln(r) = 2 k \lambda \ln \left( \frac{C}{r} \right) $$ The objective is to find a simplified approximation of the previously given $E(\rho,z)$ as $r$ goes to zero; one part of it will be identified as equivalent to $E(r)$ above, along with another, yet unknown, component. I did just that, and it was no easy task. Here is what I obtained, with the two parts identified and written separately to make the result crystal clear. $$E(\rho,z) \approx 2 k \lambda \left( \ln\left( \frac{8 R}{r} \right) + \frac{\rho-R}{R} \left( 1-\ln\left( \frac{8 R}{r} \right) \right) \right)$$$$E_{line}(\rho,z) = 2 k \lambda \ln\left( \frac{8 R}{r} \right)$$$$E_{loop}(\rho,z) = 2 k \lambda \frac{\rho-R}{R} \left( 1-\ln\left( \frac{8 R}{r} \right) \right)$$ Again, this is an approximation of the previous equation for $E(\rho,z)$ as $r$ goes to zero. I found it to be a good approximation for $r/R<0.1$ or so, and it becomes more accurate as $r$ decreases further. It is almost exact for a wire radius of centimeters or millimeters with a loop radius of meters. Next, the potential due to the global loop geometry, $E_{loop}$, is used to answer the problem. The gradient of this quantity gives the field, $\vec{F}_{loop}$, which can then be used to obtain the force. The problem is that this field acts very non-uniformly at a given $r$. Note that $E_{line}$ acts uniformly, and the potential and field of the infinite-line approximation dominate in magnitude over the global loop effects but produce no net force on the wire, while $E_{loop}$ does; this means that a uniform surface charge distribution is a valid approximation. The average value of $\vec{F}_{loop}$ at some $r$ is obtained by integrating; the vector nature is dropped, noting that the field is directed outward, in the positive $\rho$ direction. $$\rho(r,\psi) = R+r \cos{\psi}$$$$z(r,\psi) = r \sin{\psi}$$ $$F_{effective} = \frac{1}{2 \pi} \int_0^{2 \pi} \left( \frac{d}{d\rho} E_{loop}(\rho,z) \Big|_{(\rho,z)=(\rho(r,\psi),z(r,\psi))} \right) d\psi = \frac{\lambda k}{R} \left( \ln\left( \frac{8 R}{r} \right) - \frac{3}{2} \right) $$ This is very close to the final answer. Returning to the math for the tension in the wire, an expression is written for it in terms of the desired quantities. I also evaluate it for the previously stated values. $$T = \lambda R F_{effective} = \lambda^2 k \left( \ln\left( \frac{8 R}{r} \right) - \frac{3}{2} \right) = 1.7\ \mathrm{GN}$$ Now, the other part of the question was about what happens as the wire thickness goes to zero. For this case (constant charge) the force goes to infinity. Well, we all knew that (otherwise why would I have asked the question?). But what about the cases of constant voltage and constant surface charge density? The previous expression is rewritten using $V$ for the wire voltage and $\sigma$ for the surface charge density. Note that the voltage of a wire of radius $r$ can validly be found by the infinite-line-charge approximation, as per my prior arguments. $$T = \frac{V^2}{16 k} \frac{ \ln\left( \frac{8 R}{r} \right) - \frac{3}{2} }{\ln\left( \frac{8 R}{r}\right)^2} = 4 \sigma^2 \pi^2 r^2 k \left( \ln\left( \frac{8 R}{r} \right) - \frac{3}{2} \right)$$ Now the limits can be addressed. Here is what happens to the force in the wire as the wire radius goes to zero with the respective quantity held constant.
For completeness, I address the same limits in terms of tensile stress as well. Each line below gives the limit of the tension, then of the tensile stress:

Constant charge --> infinity, infinity
Constant voltage --> zero, infinity
Constant surface charge density --> zero, infinity

This is an interesting result. Due to the tensile-stress limits, no charged wire bent into a curve with an extraordinarily small wire radius can exist without tearing itself apart. I welcome disagreement on that point.
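A quick numerical sketch of the constant-charge limit (all values below are hypothetical placeholders: k is Coulomb's constant, R the loop radius, lam the linear charge density); it shows the tension growing only logarithmically as r shrinks while the stress diverges much faster:

import numpy as np

k = 8.9875e9        # Coulomb constant, N m^2 / C^2
R = 1.0             # loop radius, m (assumed)
lam = 1e-5          # linear charge density, C/m (assumed)

for r in (1e-2, 1e-4, 1e-6, 1e-8):          # wire radius, m
    L = np.log(8.0 * R / r) - 1.5
    T = lam**2 * k * L                      # tension at constant charge
    stress = T / (np.pi * r**2)             # tensile stress
    print(f"r = {r:.0e} m: T = {T:.3e} N, stress = {stress:.3e} Pa")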
One way to calculate pi is using the Monte Carlo method devised by Nicholas Constantine Metropolis. It goes as follows. Start with a circle of radius r=1 centred on the origin. The circle sits within a square of dimension l=2r=2, also centred on the origin. The ratio of the areas of the circle and square is given by: $$\rho = \frac{\text{area of circle}}{\text{area of square}} = \frac{\pi r^2}{(2r)^2} = \frac{\pi}{4}$$ So to get pi, all we have to do is find this ratio and multiply by 4. Simple... But wait, how do we get the areas, and therefore the ratio, without using the formulas? This is where Monte Carlo comes in handy. If we start placing random points within the square and measure what proportion also fall within the circle, then we'll get a convergent estimate of this ratio. Create a Monte Carlo code to calculate pi using the intrinsic function random_number: Exercise: Compile and run with different values of NTHROW. You can print out the values of pi as a function of n and visualise them with gnuplot using pi-1D.plt, or with xmgrace. Exercise: Convert this code into a 3D code by using the ratio between the box and sphere volumes.
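The exercise targets Fortran's random_number intrinsic; as a language-neutral sketch of the same algorithm (NTHROW is the number of throws, as in the exercise):

import random

def mc_pi(nthrow):
    """Monte Carlo estimate of pi: throw points uniformly into the
    square [-1, 1] x [-1, 1] and count the fraction landing inside
    the unit circle; that fraction converges to pi/4."""
    hits = 0
    for _ in range(nthrow):
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / nthrow

for n in (10**3, 10**5, 10**7):
    print(n, mc_pi(n))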
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is.

Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...$

I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...

No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...

The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it.

Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2); (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."

That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above)

In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000
C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999
C -3.6485676800000002 0.0734728100000000 -1.4738058999999999
C -2.9689624299999999 0.9078326800000001 -0.5942069900000000
C -2.0858929200000000 0.3286240400000000 0.3378783500000000
C -1.8445799400000003 -1.0963522200000000 0.3417561400000000
C -0.8438543100000000 -1.3752198200000001 1.3561451400000000
C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s

But I think that to prove the implication for transitivity, a use of the inference rule MP (modus ponens) seems to be necessary.
But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?

@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or on the FOL axioms (without the equality axioms). This would allow one, in some cases, to define an "equality like" relation for set theories in which we don't have the Axiom of Extensionality.

Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.

@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.

@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.

Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n}$ to be zero because by triangul...

@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?

Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
A Naive Bayes predictor makes its predictions using this formula: $$P(Y=y|X=x) = \alpha P(Y=y)\prod_i P(X_i=x_i|Y=y)$$ where $\alpha$ is a normalizing factor. This requires estimating the parameters $P(X_i=x_i|Y=y)$ from the data. If we do this with $k$-smoothing, then we get the estimate $$\hat{P}(X_i=x_i|Y=y) = \frac{\#\{X_i=x_i,Y=y\} + k}{\#\{Y=y\}+n_ik}$$ where there are $n_i$ possible values for $X_i$. I'm fine with this. However, for the prior, we have $$\hat{P}(Y=y) = \frac{\#\{Y=y\}}{N}$$ where there are $N$ examples in the data set. Why don't we also smooth the prior? Or rather, do we smooth the prior? If so, what smoothing parameter do we choose? It seems slightly silly to also choose $k$, since we're doing a different calculation. Is there a consensus? Or does it not matter too much?
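For concreteness, here is the arithmetic on a toy class with made-up counts, computing both the smoothed likelihoods from the formula above and the two candidate priors (unsmoothed vs. smoothed the same way, which is the question at hand):

import numpy as np

# Toy example (all numbers hypothetical). One feature X_i with
# n_i = 3 possible values, one class y, N training examples total.
count_xy = np.array([5.0, 0.0, 2.0])  # #{X_i = x, Y = y} for x = 0, 1, 2
count_y = count_xy.sum()              # #{Y = y}
N, n_classes = 20, 2
k = 1.0                               # smoothing parameter

# k-smoothed likelihoods, exactly as in the formula above
p_x_given_y = (count_xy + k) / (count_y + len(count_xy) * k)

# The prior, unsmoothed vs. smoothed analogously
prior_plain = count_y / N
prior_smooth = (count_y + k) / (N + n_classes * k)
print(p_x_given_y, prior_plain, prior_smooth)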
I am really confused about how to convert a formula I have for acceleration into the velocity at that point. The equation is: $$a=\omega^2\cdot52.24\cdot\cos(\theta)$$ where $\omega$ is the angular velocity in radians per second and $\theta$ is the angle it is at (starting at 0). I want to find the velocity at this point but have no idea how. I know I have to integrate, but what do I integrate exactly? And how would I write it out? I have entered $\int_0^1\omega(t)^2\cdot52.24\cdot \cos(0)\,dt\,$ into Wolfram Alpha, but it just spits back the same expression as the answer.
Fastest Mixing Markov Chain

Introduction
This example is derived from the results in Boyd, Diaconis, and Xiao (2004), section 2. Let \(\mathcal{G} = (\mathcal{V}, \mathcal{E})\) be a connected graph with vertices \(\mathcal{V} = \{1,\ldots,n\}\) and edges \(\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}\). Assume that \((i,i) \in \mathcal{E}\) for all \(i = 1,\ldots,n\), and \((i,j) \in \mathcal{E}\) implies \((j,i) \in \mathcal{E}\). Under these conditions, a discrete-time Markov chain on \(\mathcal{V}\) will have the uniform distribution as one of its equilibrium distributions. We are interested in finding the Markov chain, i.e., constructing the transition probability matrix \(P \in {\mathbf R}_+^{n \times n}\), that maximizes its asymptotic rate of convergence to the uniform distribution. This is an important problem in Markov chain Monte Carlo (MCMC) simulations, as it directly affects the sampling efficiency of an algorithm.

The asymptotic rate of convergence is determined by the second largest eigenvalue of \(P\), which in our case is \(\mu(P) := \lambda_{\max}(P - \frac{1}{n}{\mathbf 1}{\mathbf 1}^T)\), where \(\lambda_{\max}(A)\) denotes the maximum eigenvalue of \(A\). As \(\mu(P)\) decreases, the mixing rate increases and the Markov chain converges faster to equilibrium. Thus, our optimization problem is
\[ \begin{array}{ll} \underset{P}{\mbox{minimize}} & \lambda_{\max}(P - \frac{1}{n}{\mathbf 1}{\mathbf 1}^T) \\ \mbox{subject to} & P \geq 0, \quad P{\mathbf 1} = {\mathbf 1}, \quad P = P^T \\ & P_{ij} = 0, \quad (i,j) \notin \mathcal{E}. \end{array} \]
The element \(P_{ij}\) of our transition matrix is the probability of moving from state \(i\) to state \(j\). Our assumptions imply that \(P\) is nonnegative, symmetric, and doubly stochastic. The last constraint ensures transitions do not occur between unconnected vertices.

The function \(\lambda_{\max}\) is convex, so this problem is solvable in CVXR. For instance, the code for the Markov chain in Figure 2 below (the triangle plus one edge) is

P <- Variable(n, n)
ones <- matrix(1, nrow = n, ncol = 1)
obj <- Minimize(lambda_max(P - 1/n))
constr1 <- list(P >= 0, P %*% ones == ones, P == t(P))
constr2 <- list(P[1,3] == 0, P[1,4] == 0)
prob <- Problem(obj, c(constr1, constr2))
result <- solve(prob)

where we have set \(n = 4\). We could also have specified \(P{\mathbf 1} = {\mathbf 1}\) with sum_entries(P, 1) == 1, which uses the sum_entries atom to represent the row sums.

Example
In order to reproduce some of the examples from Boyd, Diaconis, and Xiao (2004), we create functions to build up the graph, solve the optimization problem, and finally display the chain graphically.

## Boyd, Diaconis, and Xiao. SIAM Rev. 46 (2004) pgs. 667-689 at pg. 672
## Form the complementary graph
antiadjacency <- function(g) {
  n <- max(as.numeric(names(g)))  ## Assumes names are integers starting from 1
  a <- lapply(1:n, function(i) c())
  names(a) <- 1:n
  for (x in names(g)) {
    for (y in 1:n) {
      if (!(y %in% g[[x]])) a[[x]] <- c(a[[x]], y)
    }
  }
  a
}

## Fastest mixing Markov chain on graph g
FMMC <- function(g, verbose = FALSE) {
  a <- antiadjacency(g)
  n <- length(names(a))
  P <- Variable(n, n)
  o <- rep(1, n)
  objective <- Minimize(norm(P - 1.0/n, "2"))
  constraints <- list(P %*% o == o, t(P) == P, P >= 0)
  for (i in names(a)) {
    for (j in a[[i]]) {
      ## (i-j) is a not-edge of g!
      idx <- as.numeric(i)
      if (idx != j) constraints <- c(constraints, P[idx, j] == 0)
    }
  }
  prob <- Problem(objective, constraints)
  result <- solve(prob)
  if (verbose) cat("Status: ", result$status, ", Optimal Value = ", result$value)
  list(status = result$status, value = result$value, P = result$getValue(P))
}

disp_result <- function(states, P, tol = 1e-3) {
  if (!("markovchain" %in% rownames(installed.packages()))) {
    rownames(P) <- states
    colnames(P) <- states
    print(P)
  } else {
    P[P < tol] <- 0
    P <- P/apply(P, 1, sum)  ## Normalize so rows sum to exactly 1
    mc <- new("markovchain", states = states, transitionMatrix = P)
    plot(mc)
  }
}

Results
Table 1 from Boyd, Diaconis, and Xiao (2004) is reproduced below. We reproduce the results for various rows of the table.

g <- list("1" = 2, "2" = c(1,3), "3" = c(2,4), "4" = 3)
result <- FMMC(g, verbose = TRUE)
## Status: optimal , Optimal Value = 0.7071067
disp_result(names(g), result$P)

g <- list("1" = 2, "2" = c(1,3,4), "3" = c(2,4), "4" = c(2,3))
result <- FMMC(g, verbose = TRUE)
## Status: optimal , Optimal Value = 0.6363633
disp_result(names(g), result$P)

g <- list("1" = c(2,4,5), "2" = c(1,3), "3" = c(2,4,5), "4" = c(1,3), "5" = c(1,3))
result <- FMMC(g, verbose = TRUE)
## Status: optimal , Optimal Value = 0.4285714
disp_result(names(g), result$P)

g <- list("1" = c(2,3,5), "2" = c(1,4,5), "3" = c(1,4,5), "4" = c(2,3,5), "5" = c(1,2,3,4,5))
result <- FMMC(g, verbose = TRUE)
## Status: optimal , Optimal Value = 0.25
disp_result(names(g), result$P)

Extensions
It is easy to extend this example to other Markov chains. To change the number of vertices, we would simply modify n, and to add or remove edges, we need only alter the constraints in constr2. For instance, the bipartite chain in Figure 3 is produced by setting \(n = 5\) and

constr2 <- list(P[1,3] == 0, P[2,4] == 0, P[2,5] == 0, P[4,5] == 0)

Session Info
sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices datasets utils methods base
##
## other attached packages:
## [1] markovchain_0.6.9.14 CVXR_0.99-6
##
## loaded via a namespace (and not attached):
## [1] igraph_1.2.4.1 Rcpp_1.0.1 knitr_1.23
## [4] magrittr_1.5 bit_1.1-14 lattice_0.20-38
## [7] R6_2.4.0 highr_0.8 matlab_1.0.2
## [10] stringr_1.4.0 tools_3.6.0 parallel_3.6.0
## [13] grid_3.6.0 xfun_0.7 R.oo_1.22.0
## [16] scs_1.2-3 htmltools_0.3.6 RcppParallel_4.4.3
## [19] yaml_2.2.0 bit64_0.9-7 digest_0.6.19
## [22] bookdown_0.11 Matrix_1.2-17 gmp_0.5-13.5
## [25] ECOSolveR_0.5.2 R.utils_2.8.0 evaluate_0.14
## [28] rmarkdown_1.13 blogdown_0.12.1 stringi_1.4.3
## [31] compiler_3.6.0 Rmpfr_0.7-2 R.methodsS3_1.7.1
## [34] stats4_3.6.0 expm_0.999-4 pkgconfig_2.0.2

References
Boyd, S., P. Diaconis, and L. Xiao. 2004. "Fastest Mixing Markov Chain on a Graph." SIAM Review 46 (4): 667-89.
Quadratic Formula (deterministic---no guess and check about it)
The QF yields that $-{\frac13}$ and $5$ are roots. So $$3x^2-14x-5=c\left(x+\frac13\right)(x-5)$$ Comparing leading coefficients, $c$ must be $3$: $$\begin{align}3x^2-14x-5&=3\left(x+\frac13\right)(x-5)\\&=(3x+1)(x-5)\end{align}$$

Use Parabola Vertex Form (deterministic---no guess and check about it)
The $x$-coordinate of the vertex of the parabola $y=3x^2-14x-5$ is $-{\frac{b}{2a}}=-{\frac{-14}{2\cdot3}}={\frac73}$. The $y$-coordinate is $3\left(\frac73\right)^2-14\left(\frac73\right)-5=\frac{49}{3}-\frac{2\cdot49}{3}-5=-{\frac{49}{3}}-\frac{15}{3}=-{\frac{64}{3}}$. So $y=c\left(x-\frac73\right)^2-\frac{64}{3}$. Comparing leading coefficients, $c=3$, so $$\begin{align}y&=3\left(x-\frac73\right)^2-\frac{64}{3}\\&=\frac{1}{3}\left(9\left(x-\frac73\right)^2-64\right)\\&=\frac{1}{3}\left(3\left(x-\frac73\right)-8\right)\left(3\left(x-\frac73\right)+8\right)\\&=\frac{1}{3}\left(3x-15\right)\left(3x+1\right)\\&=\left(x-5\right)\left(3x+1\right)\end{align}$$

Complete the Square (deterministic---no guess and check about it)
Starting with $3x^2-14x-5$, always multiply and divide by $4a$ to avoid fractions: $$\begin{align}&3x^2-14x-5\\&=\frac{4\cdot3}{4\cdot3}\left(3x^2-14x-5\right)\\&=\frac{1}{12}\left(36x^2-12\cdot14x-60\right)\\&=\frac{1}{12}\left(\left(6x\right)^2-2(6x)(14)-60\right)\\&=\frac{1}{12}\left(\left(6x\right)^2-2(6x)(14)+14^2-14^2-60\right)\\&=\frac{1}{12}\left((6x-14)^2-196-60\right)\\&=\frac{1}{12}\left((6x-14)^2-256\right)\\&=\frac{1}{12}(6x-14-16)(6x-14+16)\\&=\frac{1}{6\cdot2}(6x-30)(6x+2)\\&=(x-5)(3x+1)\\\end{align}$$

AC Method (involves integer factorization and a list of things to inspect)
$$3x^2-14x-5$$ Take $3\cdot(-5)=-15$. List pairs that multiply to $-15$: $$(-15,1),(-5,3),(-3,5),(-1,15)$$ We could have stopped at the first pair, because $-15+1=-14$, the middle coefficient. Use this to replace the $-14$: $$3x^2-15x+x-5$$ Group two terms at a time and factor out the GCF: $$3x(x-5)+1(x-5)$$$$(3x+1)(x-5)$$

Prime Factor what you can, version 1 (involves integer factorization and a list of things to inspect)
If $3x^2-14x-5$ factors, then prime factoring $3$, it factors as $$(3x+?)(x+??)$$ And $(?)(??)=-5$. There are only four possibilities. $(?,??)$ is one of $$(1,-5),(-1,5),(5,-1),(-5,1)$$ Multiplying out $(3x+?)(x+??)$ for each of the four cases reveals $3x^2-14x-5=(3x+1)(x-5)$.

Rational Root Theorem (involves integer factorization and a list of things to inspect)
If $3x^2-14x-5$ factors, there are rational roots. They must be of the form $\pm\frac{a}{b}$ where $a\mid5$ and $b\mid3$. The only options are $\pm5,\pm{\frac53},\pm1,\pm{\frac13}$. Check these eight inputs to $3x^2-14x-5$ and find that $-{\frac13}$ and $5$ are roots. So $$3x^2-14x-5=c\left(x+\frac13\right)(x-5)$$ Comparing leading coefficients, $c$ must be $3$.

Prime Factor what you can, version 2 (using the Rational Root Theorem to speed up version 1)
If $3x^2-14x-5$ factors, then prime factoring $3$, it factors as $$(3x+?)(x+??)$$ The latter factor reveals that if the thing factors at all, one of its roots is an integer. Considering the RRT, check if any of $\pm5,\pm1$ are roots, and discover that $5$ is. Conclude $$(3x+?)(x-5)$$ and then conclude $$(3x+1)(x-5)$$

Graphing to improve efficiency of the Rational Root Theorem method
Using the vertex formula again, locate the vertex at $\left(\frac73,-{\frac{64}{3}}\right)$. Since $a=3$, consider the sequence $\{3\cdot1,3\cdot3,3\cdot5,3\cdot7,\ldots\}$.
Extend horizontally outward from the vertex by $1$ in each direction, move up $3$, and plot a point. Extend horizontally outward again by $1$, move up $9$, and plot a point. Continue until you've plotted points that cross the $x$-axis. Now you have a rough idea where the roots are. Returning to the Rational Root Theorem approach, you can now eliminate many of the potential roots from the initial list, speeding up that approach.
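Whichever method one prefers, the result is easy to machine-check; a quick sketch with SymPy (assumed installed):

from sympy import symbols, factor, expand

x = symbols('x')
p = 3*x**2 - 14*x - 5
print(factor(p))                   # (x - 5)*(3*x + 1)
print(expand((3*x + 1)*(x - 5)))   # 3*x**2 - 14*x - 5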
Lesson Overview
In this lesson, we'll derive a formula known as Green's Theorem. This formula is useful because it gives us a simpler way of calculating a specific subset of line integral problems, namely those in which the curve is closed (plus a few extra criteria described below). We won't concern ourselves with using this formula to solve problems in this article; we'll save that for future lessons. In this lesson, we will require that the curve \(C\) be closed and satisfy some other restrictions (even with these conditions, our analysis will be pretty general); after doing so, we'll take the line integral, then do some calculus and algebra to derive a simple formula for calculating that line integral. Although this derivation might seem tedious at times, just remember that it's mostly calculus and algebra you are already familiar with. We derive Green's Theorem for any continuous, smooth, closed, simple, piecewise curve that is split into two separate curves; even though we won't prove it in this article, it turns out that our analysis is more general and applies to the same curve split into any number \(n\) of curves.

Green's Theorem Proof (Part 1)
In this lesson, we're going to focus on proving Green's Theorem. We discussed in a previous lesson how to calculate any line integral by parameterizing the integrand and limits of integration. Solving that parameterized integral can be quite tedious sometimes, but it is, in general, how we calculate any line integral. But what if we considered calculating a special subset of line integrals that involve taking the line integral of a vector field around certain types of closed curves? When it comes to calculating these kinds of line integrals, we don't have to use the complicated parameterized definite integrals discussed earlier; we can use Green's theorem instead. These particular kinds of closed curves can be fully described as follows: they are any arbitrary curve \(C\) on the \(xy\)-plane that is piecewise smooth, positively oriented, simple, and closed, as illustrated in Figure 1. That description might sound like a mouthful, but let's break down the meaning of each term. A closed curve, as the name suggests, is any curve such that if you start at a point on that curve and then "walk around" it, you'll come back to the same point you started at. A simple curve is any curve that doesn't crisscross and intersect itself; for example, a curve shaped like the number eight would not be a simple curve. A positively oriented curve is one that you travel around counterclockwise, and a piecewise-smooth curve is one that can be subdivided into finitely many smooth curves. Whenever we take a line integral of a vector field around these kinds of curves, it is usually easier to calculate the line integral using Green's theorem. The kinds of vector fields whose line integrals we can calculate using Green's theorem are pretty general but must meet a few criteria: they can be any arbitrary vector field \(\vec{F}(x,y)\) defined as $$\vec{F}(x,y)=P(x,y)\hat{i}+Q(x,y)\hat{j},$$ so long as the vector field \(\vec{F}(x,y)\) is differentiable at every point inside the region \(R\) (enclosed by the curve \(C\)) and at every point along the curve \(C\).
(We'll see why these criteria must be met as we prove Green's theorem; Green's theorem involves taking the partial derivatives of the vector field.) Our goal is to calculate the line integral, \(∮_c\vec{F}(x,y)·d\vec{S}\), for the particular kind of vector field and curve just described. (Notice that a circle is drawn on the integral; this signifies that the curve \(C\) is closed.) Regardless of whether we use Green's theorem or the technique already discussed involving parameterizing the integrand and limits of integration, the first step in calculating this integral is the same: we must first evaluate the dot product \(\vec{F}(x,y)·d\vec{S}\). Doing so, we have $$\vec{F}(x,y)·d\vec{S}=(P(x,y)\hat{i}+Q(x,y)\hat{j})·(dx\hat{i}+dy\hat{j}).$$ Since \(\hat{i}\) is perpendicular to \(\hat{j}\), the cross terms vanish. The other two terms involve the dot products \(\hat{i}·\hat{i}\) and \(\hat{j}·\hat{j}\), which equal one since each pair of unit vectors is parallel with magnitudes of one. Thus, the dot product can be simplified to $$(P(x,y)\hat{i}+Q(x,y)\hat{j})·(dx\hat{i}+dy\hat{j})=P(x,y)dx\hat{i}·\hat{i}+Q(x,y)dy\hat{j}·\hat{j}=P(x,y)dx+Q(x,y)dy.$$ Substituting this simplified version of the dot product into the line integral \(∮_c\vec{F}(x,y)·d\vec{S}\), we have $$∮_c\vec{F}(x,y)·d\vec{S}=∮_c(P(x,y)dx+Q(x,y)dy)=∮_cP(x,y)dx+∮_cQ(x,y)dy.\tag{1}$$ One way to calculate the line integral in Equation (1) would be to parameterize the right-hand side; this would allow us to calculate any line integral. But as I previously mentioned, this process can, in general, get quite complicated. When we consider the subset of line integrals of vector fields over the kinds of closed curves we just discussed, we can instead calculate \(∮_cP(x,y)dx\) in terms of \(x\), then \(∮_cQ(x,y)dy\) in terms of \(y\), and then add the two results together. Let's first calculate \(∮_cP(x,y)dx\) in terms of \(x\), starting by splitting the curve \(C\) into two separate curves \(C_1\) and \(C_2\) as illustrated in Figure 2. To get everything in terms of \(x\), let's represent the \(y\)-coordinate of each point on the curves \(C_1\) and \(C_2\) in Figure 2 as a function of \(x\); \(y_1(x)\) will specify the \(y\)-coordinate associated with each point on \(C_1\) and \(y_2(x)\) will specify the \(y\)-coordinate associated with each point on \(C_2\). Substituting both of these functions into the integral \(∮_cP(x,y)dx\), we have $$∮_cP(x,y)dx=\int_{x=a}^{x=b}P(x,y_1(x))dx+\int_{x=b}^{x=a}P(x,y_2(x))dx.\tag{2}$$ (Notice that since everything in each integral is represented in terms of a single variable, the line integral simplifies to a definite integral. As we discussed in the lesson on the Introduction of Line Integrals, if the integrand and limits of integration, which are in general expressed with respect to the arclength \(S\), can instead be represented with respect to, say, \(x\) or \(y\), then the line integral can be simplified to a definite integral.) The limits of integration in the integral \(\int_{x=b}^{x=a}P(x,y_2(x))dx\) go from \(x=b\) to \(x=a\).
If we "swap" the lower and upper limits of integration of that integral to get \(\int_{x=a}^{x=b}P(x,y_2(x))dx\), we are essentially just changing the order of subtraction of the anti-derivative; if we add a minus sign in front of the integral \(\int_{x=a}^{x=b}P(x,y_2(x))dx\), then that integral will be the same as the integral \(\int_{x=b}^{x=a}P(x,y_2(x))dx\). Thus, we have $$\int_{x=b}^{x=a}P(x,y_2(x))dx=-\int_{x=a}^{x=b}P(x,y_2(x))dx.\tag{3}$$ Substituting Equation (3) into (2), we have $$∮_cP(x,y)dx=\int_{x=a}^{x=b}P(x,y_1(x))dx-\int_{x=a}^{x=b}P(x,y_2(x))dx=\int_{x=a}^{x=b}\biggl(P(x,y_1(x))-P(x,y_2(x))\biggr)dx.$$ Thus, $$∮_cP(x,y)dx=\int_{x=a}^{x=b}\biggl(P(x,y_1(x))-P(x,y_2(x))\biggr)dx.\tag{4}$$ The next step is to multiply the right-hand side of Equation (4) by \((-1)·(-1)=1\). Initially, this step might seem pretty ad hoc, but essentially what we're going to do is "build up an integral in reverse." In the next few steps, you'll see that the integrand in the right-hand side of Equation (4) can be written as an integral. Multiplying the right-hand side of Equation (4) by \((-1)·(-1)=1\), we have $$(-1)·(-1)·\int_{x=a}^{x=b}\biggl(P(x,y_1(x)-P(x,y_2(x)\biggr)dx=-1·\int_{x=a}^{x=b}\biggl[-1·\biggl(P(x,y_1(x)-P(x,y_2(x)\biggr)\biggr]dx$$ $$=-\int_{x=a}^{x=b}\biggl(P(x,y_2(x)-P(x,y_1(x)\biggr)dx.$$ Thus, we have $$∮_cP(x,y)dx=-\int_{x=a}^{x=b}\biggl(P(x,y_2(x)-P(x,y_1(x)\biggr)dx.\tag{5}$$ Notice that the integrand, \(P(x,y_2(x)-P(x,y_1(x)\), in the right-hand side of Equation (5), is the same thing as \(\int_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x)}{∂y}dy\) and thus $$P(x,y_2(x)-P(x,y_1(x)=\int_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x)}{∂y}dy.\tag{6}$$ (This step can be quite confusing so let me explain why it is valid. The partial derivative, \(\frac{∂P(x,y(x)}{∂y}\), is the same thing as taking the ordinary derivative of \(P(x,y)\) with respect to \(y\) with \(y(x)\) set equal to some constant—in other words, \(\frac{∂P(x,y(x)}{∂y}=\frac{dP(constant,y)}{dy}\). When we evaluate the integral, or anti-derivative, of the integrand \(\frac{∂P(x,y(x)}{∂y}=\frac{dP(x,constant)}{dy}\), we "undo the derivative" so to speak. This means that the anti-derivative (another name for the integral) of \(\frac{dP(constant,y)}{dy}=\frac{∂P(x,y(x)}{∂y}\) is just \(P(x,y(x)\). That would be the solution if we were taking an indefinite integral, but since we are taking the definite integral from \(y(x)=y_1(x)\) to \(y(x)=y_2(x)\), the solution to the integral is actually \(P(x,y(x))|_{y(x)=y_1(x)}^{y(x)=y_2(x)}=P(x,y_2(x)-P(x,y_1(x)\).) Substituting Equation (6) into (5), we have $$-∫_{x=a}^{x=b}(P(x,y_2(x)-P(x,y_1(x))dx=-∫_{x=a}^{x=b}\biggl(∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x)}{∂x}dy\biggr)dx.\tag{7}$$ Thus, we have $$∮_cP(x,y)dx=-∫_{x=a}^{x=b}∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dydx.\tag{8}$$ Equation (8) is essentially just the volume contained between the surface \(-\frac{∂P(x(y),y)}{∂y}\) and the region \(R\). In other words, Equation (8) represents the infinite sum of infinitesimally skinny columns of volume \(P_y(x(y)),y)dxdy\) over the region \(R\). 
The notation for writing this is $$-∫_{x=a}^{x=b}∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dydx=∫∫_R-\frac{∂P(x,y)}{∂y}dA.\tag{9}$$ Canceling the minus signs on both sides of Equation (9), we have $$∫_{x=a}^{x=b}∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dydx=∫∫_R\frac{∂P(x,y)}{∂y}dA.$$ Finally, if we substitute this result into Equation (8), we have $$∮_cP(x,y)dx=-∫∫_R\frac{∂P(x,y)}{∂y}dA.\tag{10}$$

Green's Theorem Proof (Part 2)
Equation (10) allows us to calculate the line integral \(∮_cP(x,y)dx\) entirely in terms of \(x\). The final step we need to complete the calculation of the line integral of \(\vec{F}(x,y)\) is to calculate the line integral \(∮_cQ(x,y)dy\) and then add the result to Equation (10). To calculate the line integral \(∮_cQ(x,y)dy\), we'll go through a procedure analogous to the one we used to calculate \(∮_cP(x,y)dx\). First, let's split up the curve \(C\) into the two separate curves \(C_1\) and \(C_2\) illustrated in Figure 3. Let's express the \(x\)-coordinate of each point on the curves \(C_1\) and \(C_2\) as \(x_1(y)\) and \(x_2(y)\), respectively. Analogously to what we did previously, let's write the integral \(∮_cQ(x,y)dy\) as the sum of two line integrals of the form $$∮_cQ(x,y)dy=∮_{c_1}Q(x_1(y),y)dy+∮_{c_2}Q(x_2(y),y)dy.\tag{11}$$ Just like last time, our goal is to write the right-hand side of Equation (11) as a double integral. First, we do some manipulations to write the two line integrals as a single definite integral; then we do some algebra and calculus to rewrite the integrand as another definite integral. Since the two line integrals on the right-hand side of Equation (11) are expressed in terms of \(y\), we can rewrite them as definite integrals to get $$∮_{c_1}Q(x_1(y),y)dy+∮_{c_2}Q(x_2(y),y)dy=∫_{y=a}^{y=b}Q(x_1(y),y)dy+∫_{y=b}^{y=a}Q(x_2(y),y)dy.\tag{12}$$ Substituting \(∫_{y=b}^{y=a}Q(x_2(y),y)dy=-∫_{y=a}^{y=b}Q(x_2(y),y)dy\) into the right-hand side of Equation (12) and making the same algebraic simplifications as before, we have $$∫_{y=a}^{y=b}Q(x_1(y),y)dy+∫_{y=b}^{y=a}Q(x_2(y),y)dy=∫_{y=a}^{y=b}Q(x_1(y),y)dy-∫_{y=a}^{y=b}Q(x_2(y),y)dy$$ $$=∫_{y=a}^{y=b}(Q(x_1(y),y)-Q(x_2(y),y))dy=∫_{y=a}^{y=b}Q(x(y),y)|_{x(y)=x_2(y)}^{x(y)=x_1(y)}dy$$ $$=∫_{y=a}^{y=b}\biggl(∫_{x_2(y)}^{x_1(y)}\frac{∂Q(x(y),y)}{∂x}dx\biggr)dy.$$ Thus, $$∮_cQ(x,y)dy=∫_{y=a}^{y=b}\biggl(∫_{x_2(y)}^{x_1(y)}\frac{∂Q(x(y),y)}{∂x}dx\biggr)dy.\tag{13}$$ Equation (13) is essentially just the volume contained between the surface \(\frac{∂Q(x,y)}{∂x}\) and the region \(R\). In other words, Equation (13) represents the infinite sum of infinitesimally skinny columns of volume \(Q_x(x,y)dxdy\) over the region \(R\).
The notation for writing this is $$∫_{y=a}^{y=b}\biggl(∫_{x_2(y)}^{x_1(y)}\frac{∂Q(x(y),y)}{∂x}dx\biggr)dy=∫∫_R\frac{∂Q(x,y)}{∂x}dA.\tag{14}$$ Let's substitute Equations (10) and (14) into (1) to get $$∮_c\vec{F}(x,y)·d\vec{S}=∫∫_R\frac{∂Q(x,y)}{∂x}dA+∫∫_R-\frac{∂P(x,y)}{∂y}dA.$$ Simplifying the right-hand side of the above equation, we get Green's theorem, which states that $$∮_c\vec{F}(x,y)·d\vec{S}=∫∫_R\biggl(\frac{∂Q(x,y)}{∂x}-\frac{∂P(x,y)}{∂y}\biggr)dA,\tag{15}$$ or, equivalently, $$∮_cP(x,y)dx+∮_cQ(x,y)dy=∫∫_R\biggl(\frac{∂Q(x,y)}{∂x}-\frac{∂P(x,y)}{∂y}\biggr)dA.\tag{16}$$ In the next couple of lessons, we'll use Green's theorem to solve some line integrals of vector fields over piecewise smooth, simple, closed curves. This article is licensed under a CC BY-NC-SA 4.0 license. Sources: Khan Academy
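A quick numerical sanity check of Equation (15), as a sketch: take F = (P, Q) = (-y, x) around the unit circle, for which dQ/dx - dP/dy = 2, so the double integral over the unit disk is 2*pi, and the line integral agrees.

import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 100001)
x, y = np.cos(t), np.sin(t)
dxdt, dydt = -np.sin(t), np.cos(t)       # derivatives of the parameterization
line_integral = np.trapz(-y * dxdt + x * dydt, t)
print(line_integral, 2.0 * np.pi)        # both ~ 6.28319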
Suppose there is a ring in space where there is no gravity. The width of the ring is $r$, which is negligible compared to its inner radius $R$. The ring is in a horizontal position. Now imagine a human starts to move along this ring. Before the start of the motion, the angular and linear momentum are zero, and the system is isolated. When the human walks on the ring, the linear and angular momentum must remain zero. $$m_{human}R^2\omega_{human} = -M_{ring}R^2\omega_{ring} \rightarrow \omega_{human}=-\frac{M_{ring}}{m_{human}}\omega_{ring}$$ There is no problem with this part. Now we consider the linear momentum. The linear momentum of the ring after the beginning of motion is zero, because for every element of mass there is another element of mass moving in the opposite direction. However, the linear momentum of the human, who can be considered a point mass, is not zero. The magnitude of the human's linear momentum is $m_{human}\,\omega_{human}R$. So the total linear momentum is not zero, despite the fact that the initial linear momentum was zero and the system is isolated. If nothing is wrong here, I have to conclude that $\omega_{human} = 0$. What am I missing?
Materials Pencil; Eraser; Blank A4 sheet of paper; and A ruler (also to use as a straightedge). Puzzle: Draw a square with a side-length of $a=16$cm. Then, draw eight circles inside. The circles must have $2$cm in diameter; they must all be the same size; they must not touch each other; their distances must all be different from each other (i.e. their positions must be randomised); and they must not touch the perimeter of the square. $$\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc$$ Now, draw two points on each side of the square. Begin with the top side, and from left to right, make points $A_1$ and $B_1$. The distance between these two points is arbitrary, but neither of them can be on a corner of the square. Now, rotate the square $90^\circ$ anti-clockwise. You will now have a new top side, where you can put points $A_2$ and $B_2$ from left to right. Their distance is also arbitrary and they cannot be on the corners of the square, but their distance also cannot be the same as that of any previously placed pair of points. Continue with this method to make points $A_3$ and $B_3$, and then $A_4$ and $B_4$, making sure that the distance between each pair of points on a side is unique. Now, connect the points with a line in the following fashion. $$A_1\to A_2\to A_3\to A_4\to B_1\to B_2\to B_3\to B_4\to A_1$$ But, ensure that the lines neither touch nor intersect any of the circles! Note: You might have to place your points carefully. Aim: $$\verb|Ensure the lines neither touch nor intersect any of the circles!|$$ Edit: Removed Part 2 of the puzzle as it is much too difficult and incorrect (or perhaps impossible) in very specific cases of drawing circles. Thus, I am adding a bonus: Bonus: What is the minimum value of $a$ for the specific case you have chosen (i.e. for how you have randomly positioned the circles)? Note that you cannot move the circles to different positions after plotting them. I used the tag connections-puzzle because you have to connect points with lines.
Suppose I fit a Binomial regression and obtain the point estimates and variance-covariance matrix of the regression coefficients. That will allow me to get a CI for the expected proportion of successes in a future experiment, $p$, but I need a CI for the observed proportion. There have been a few related answers posted, including simulation (suppose I don't want to do that) and a link to Krishnamoorthy et al. (which does not quite answer my question). My reasoning is as follows: if we use just the Binomial model, we are forced to assume that $p$ is sampled from a Normal distribution (with the corresponding Wald CI), and therefore it is impossible to get a CI for the observed proportion in closed form. If we assume that $p$ is sampled from a Beta distribution, then things are much easier because the count of successes will follow the Beta-Binomial distribution. We will have to assume that there is no uncertainty in the estimated beta parameters, $\alpha$ and $\beta$. There are three questions:

1) A theoretical one: is it ok to use just the point estimates of the beta parameters? I know that to construct a CI for a future observation in multiple linear regression $Y = x'\beta + \epsilon$, $\epsilon \sim N(0, \sigma^2)$, they do that w.r.t. the error term variance, $\sigma^2$. I take it (correct me if I am wrong) that the justification is that in practice $\sigma^2$ is estimated with far greater precision than the regression coefficients, and we won't gain much by trying to incorporate the uncertainty of $\sigma^2$. Is a similar justification applicable to the estimated beta parameters, $\alpha$ and $\beta$?

2) What package is better (R: gamlss-bb, betareg, aod?; I also have access to SAS)?

3) Given the estimated beta parameters, is there an (approximate) shortcut to get the quantiles (2.5%, 97.5%) for the count of future successes or, better yet, for the proportion of future successes under the Beta-Binomial distribution?
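On question 3, if the point estimates are simply plugged in, the Beta-Binomial quantiles are directly available in recent SciPy; a minimal sketch (the parameter values and future-experiment size below are made up):

from scipy.stats import betabinom

# Hypothetical point estimates of the beta parameters and the size m
# of the future experiment
alpha_hat, beta_hat, m = 4.2, 11.7, 50

lo, hi = betabinom.ppf([0.025, 0.975], m, alpha_hat, beta_hat)
print(lo / m, hi / m)   # approximate 95% interval for the future proportion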
We’re surrounded by time series. It’s one of the more common plots we see in day-to-day life. Finance and economics are full of them: stock prices, GDP over time, and 401K value over time, to name a few. The plot looks deceptively simple; just a nice univariate squiggle. No crazy vectors, no surfaces, just one predictor: time. It turns out time is a tricky and fickle explanatory variable, which makes analysis of time series a bit more nuanced than it seems at first glance. This nuance is obscured by the ease of automatic implementation of time series modeling in languages like R [1]. As nice as this is for practitioners, the mathematics behind this analysis is lost. Ignoring the mathematics can lead to improper use of these tools. This series will examine some of the mathematics behind stationarity and what is known as ARIMA (Auto-Regressive Integrated Moving Average) modeling. Part 1 will examine the very basics, showing that time series modeling is really just regression with a twist.

Back to Basics: Regression Line
We all know the basic equation for a line: y = mx + b. Now, if I change the independent, or explanatory, variable [2] to time and denote it t, then a basic equation for some phenomenon y that depends linearly on time can be written as y = mt + b. Plugging in some numbers, perhaps y = 2t + 4. Next step: make it regular linear regression. That means there are some additional error terms that cause our line to be imperfect. These error terms can be due to all sorts of things, but typically are attributed to natural variation. For each point in time at which we take a measurement, we get an error term \epsilon_{t}. Let’s denote by y_{t} the value of y at time t. Then, with our error terms, the regression equation becomes y_{t} = mt + b + \epsilon_{t}. Traditional regression analysis typically uses the method of least squares to estimate m and b, and assumes that the residuals \epsilon_{t} are all just drawn randomly and independently from the same distribution (typically a normal distribution) with constant variance. That is, it’s assumed that the \epsilon_{t} are i.i.d. and don’t form a random process that actually does depend on previous error terms.

Time for the Twist
So what happens when those error terms aren’t exactly just being drawn randomly out of a hat? As I mentioned in the introduction, time is a bit tricky. We can’t assume that the residuals at each point in time are truly independent of each other. In time series analysis, we replace that \epsilon_{t} with a random process we’ll call X_{t}. That is, we now assume there is an actual order to the residuals, and there is some kind of process that governs them. A random process \{X_{t}, t \geq 0\} is a sequence of random variables that may or may not be identically distributed, or even independent. To proceed further, we’ll need to define a couple of things: the autocovariance function and the notion of stationarity [3].

Autocovariance Function
If we take a random process \{X_{t}\}, it’s really just a collection of random variables with an ordering or indexing. Just like with regular random variables, we can calculate the covariance between them, measuring the joint variability. Here, we call it the autocovariance between two random variables in the time series, and the formula is defined exactly as regular covariance. We denote by \gamma_{X} the autocovariance function of the random process \{X_{t}\}, and define it for two points in the sequence X_{r}, X_{s} as \gamma_{X}(r,s) = \text{Cov}(X_{r}, X_{s}) = E[(X_{r}-E[X_{r}])(X_{s}-E[X_{s}])], where E[\cdot] is the expectation (or mean) of the random variable, and r,s are indices.
(See this post for an explanation of expectation.)

Stationarity
Stationarity is an important concept in the study of random processes, as its existence yields many mathematical properties we need to make inferences and forecasts later. It was important in discussing Poisson processes, and it shows up again here. The definition of stationarity for time series looks a little different, but the meaning is inherently the same.

Definition: Stationarity
A time series \{X_{t}, t \in \mathbb{Z}\} is said to be stationary if the following hold:
(i) E[|X_{t}|^{2}] < \infty for all t
(ii) E[X_{t}] = m for all t
(iii) \gamma_{X}(r,s) = \gamma_{X}(r+t, s+t) for all r,s,t

We’ll pick apart each piece of the definition to understand the notion of stationarity. The first part just ensures we have a finite variance for all points in time. We prefer not to deal with infinity. The second part means that all variables in the random process must have the same mean. If the mean varies with time, it’s not a stationary process. Finally, the third requirement of the definition may be viewed this way. Two random variables in the sequence X_{r} and X_{s} that are r-s apart [4] have a certain autocovariance. If we shift both variables by the same amount t in time, then those two new random variables X_{r+t} and X_{s+t} should have the same autocovariance as the first pair X_{r}, X_{s}. That is, in a stationary process, the autocovariance only depends on the distance apart the variables are in the sequence, not on their location in time. This means we can actually write the autocovariance function of a stationary process in a special way, one that only involves the distance between the current point X_{t} and some lag X_{t+h}: \gamma_{X}(h) = \text{Cov}(X_{t+h}, X_{t}) for all t,h.

Let’s test an example of a random process for stationarity. If a random process is indeed stationary, then all parts of the definition should be satisfied; in particular, \gamma_{X}(h) should not have any dependence on t, only on h.

Example: Let’s assume A and B are two uncorrelated random variables (that is, \text{Cov}(A,B) = 0). Assume the mean of both A and B is 0 (E[A] = E[B] = 0), and the variance of A and B is 1 (\text{Var}[A] = \text{Var}[B] = 1). Suppose the angle \theta \in [-\pi, \pi], and the random process is given by X_{t} = A\cos(\theta t) + B\sin(\theta t). Is \{X_{t}\} stationary?

First, note that A and B are the only random variables here. The mean of both is 0, and both cosine and sine stay between \pm 1, so we will definitely have part (i) of our definition. Next, we have to make sure the mean is the same for all X_{t}: \begin{aligned}E[X_{t}] &= E[A\cos(\theta t) + B\sin(\theta t)]\end{aligned} Now, \cos(\theta t) and \sin(\theta t) aren’t random at all, so the expectation of them is just…well…them. We also know that the expectation is a linear operator [5], so E[X_{t}] = \cos(\theta t)E[A] + \sin(\theta t)E[B] = 0, because E[A] = E[B] = 0. Good, part (ii) is satisfied. Finally, we have to see if \gamma_{X}(h) has any t in it: \begin{aligned}\gamma_{X}(h) &= \text{Cov}(X_{t+h},X_{t})\\&=\text{Cov}[A\cos(\theta (t+h)) + B\sin(\theta(t+h)), A\cos(\theta t) + B\sin(\theta t)]\end{aligned} OK, stick with me. We’re going to use the fact that \text{Cov}(u+v, w+z) =\text{Cov}(u,w) +\text{Cov}(u,z) +\text{Cov}(v,w) +\text{Cov}(v,z).
Now,
\begin{aligned}\gamma_{X}(h) &= \text{Cov}(X_{t+h},X_{t})\\&=\text{Cov}[A\cos(\theta (t+h))+B\sin(\theta(t+h)), A\cos(\theta t)+B\sin(\theta t)]\\&=\text{Cov}(A\cos(\theta(t+h)),A\cos(\theta t))+\text{Cov}(A\cos(\theta(t+h)),B\sin(\theta t))\\&\qquad+\text{Cov}(B\sin(\theta(t+h)),A\cos(\theta t))+\text{Cov}(B\sin(\theta(t+h)),B\sin(\theta t))\end{aligned}
Remember that the cosines and sines aren’t random. That means they are just constants in terms of the covariance, so we can pull them out, multiplying them together. That is, \text{Cov}(A\cos(\theta(t+h)), A\cos(\theta t)) = \cos(\theta(t+h))\cos(\theta t)\text{Cov}(A,A) for the first term. We do the same with the other three terms:
\begin{aligned}\gamma_{X}(h)&=\cos(\theta(t+h))\cos(\theta t)\text{Cov}(A,A)+\cos(\theta(t+h))\sin(\theta t)\text{Cov}(A,B)\\&\qquad+\sin(\theta(t+h))\cos(\theta t)\text{Cov}(B,A)+\sin(\theta(t+h))\sin(\theta t)\text{Cov}(B,B)\end{aligned}
Now, we already know that \text{Cov}(A,B) = \text{Cov}(B,A) = 0, and \text{Cov}(A,A) = \text{Var}(A) = 1 = \text{Var}(B) = \text{Cov}(B,B). Then we get
\begin{aligned}\gamma_{X}(h)&=\cos(\theta (t+h))\cos(\theta t)+\sin(\theta(t+h))\sin(\theta t)\\&=\cos(\theta t-\theta(t+h))\\&=\cos(-\theta h)\\&=\cos(\theta h)\end{aligned}
because cosine is an even function. Notice that the autocovariance function doesn’t depend on t, so our process is indeed stationary.

Conclusion
Now that we understand some terminology we’ll need later, we can go to the most high-level definition of a time series. A time series \{Y_{t}, t \geq 0\} is a random process given by the classical decomposition Y_{t} = m_{t} + s_{t} + X_{t}, where m_{t} is called the trend component (like our line y = mt + b), s_{t} is called the seasonal component [7], and X_{t} is a stationary process. There is one particular type of stationary process that we can study, called the ARIMA (Auto-Regressive Integrated Moving Average) process.
After removing the trend and seasonal components, we must still estimate X_{t} and get some sort of equation for it (as it’s no longer guaranteed to just be a set of i.i.d. normal random variables). The next articles in this series will take a deeper dive into ARIMA modeling. Time series analysis can get quite mathematically heavy. However, it is important to at least have a working familiarity with the mathematics behind these models. The easier the implementation, the greater the danger of misuse, because a practitioner (and even some instructors) isn’t required to understand the mathematical nuances that can pepper something as complicated as time series.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Footnotes
[1] For example, decomposition of time series into trend, seasonal, and noise is done automatically, and ARIMA modeling of that noise can also be done with a simple function call of auto.arima().
[2] Sometimes called the input.
[3] We have discussed the notion of stationarity in terms of the Poisson process before, but we’ll go over it again here in this context.
[4] WLOG (without loss of generality), we can assume r > s.
[5] If you don’t know what this is, don’t worry. It just means that the expectation of a sum of stuff equals the sum of the expectations of the individual stuffs.
[6] All math is connected somehow. You never know when this shows up.
[7] We didn’t discuss this bit in this article. This is just another function that has a seasonal period. It’s typically used when we have to account for something like sales peaking at the beginning of a month, as an example.
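The worked stationarity example above is easy to check by simulation; a quick sketch drawing many independent realizations of A and B and estimating the lag-h autocovariance at several times t:

import numpy as np

# X_t = A cos(theta t) + B sin(theta t), with A, B uncorrelated,
# mean 0, variance 1. The empirical lag-h autocovariance should be
# ~cos(theta h) regardless of t.
rng = np.random.default_rng(0)
theta, h = 0.7, 3
A = rng.standard_normal(200_000)
B = rng.standard_normal(200_000)

for t in (0, 5, 20):
    X_t = A * np.cos(theta * t) + B * np.sin(theta * t)
    X_th = A * np.cos(theta * (t + h)) + B * np.sin(theta * (t + h))
    print(t, np.cov(X_t, X_th)[0, 1])   # each ~ cos(2.1) ~ -0.505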
I'm probably missing something elementary here, but I guess the only way to be sure is to ask. I have encountered a situation where, given an $n$th-degree polynomial $p_n(z)$ with complex coefficients and a positive real number $\rho$, I need to find the value(s) of $\theta$, $0\leq\theta<2\pi$, such that $|p_n(\rho\exp(i\theta))|$ is minimized (i.e., find the lowest point of the absolute value of a complex polynomial on the circle of radius $\rho$). I know about the usual methods for univariate minimization (golden section, Brent's method, Newton's method), but I am wondering if there may be special, more efficient methods, given that the function to be minimized can be turned into a "trigonometric polynomial". Or would finding these minima be of the same level of difficulty as finding the roots of the polynomial itself?

Thus far, the only simplification I have been able to come up with is that if all the coefficients of $p_n(z)$ are real, I can restrict the search for the optimal $\theta$ to the interval $[0,\pi]$, since $p_n(\bar{z})=\overline{p_n(z)}$. A "grid search", using the FFT to evaluate the polynomial at equispaced points around the circle, was one idea I thought of, but it seemed wasteful since I have been unable to find a way to reuse the work done by the FFT when the number of points around the circle is doubled. In short: might there be an easier, more obvious way I am missing?

Addendum: The application where I'm considering this procedure as a subroutine operates as follows:

1. The complex polynomial and an initial estimate of $\rho$ are given.
2. The minimization procedure finds the value of $\theta$ where the objective function is minimized; if there is more than one candidate $\theta$, the value nearest to the positive real axis is taken (this is the rather ad hoc portion of the application I'm looking at).
3. The tentative $\theta$ is passed to an "oracle": (a) if a success flag is returned, the algorithm exits; otherwise (b) a smaller value of $\rho$ is computed through another black-box procedure, and we return to step 2.
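Edit: for reference, here is a minimal sketch of the FFT grid-search idea mentioned above (Python with numpy; the coefficient ordering, grid size, and example polynomial are illustrative choices only). It uses the fact that evaluating $p$ at $\rho e^{2\pi i j/N}$ for $j=0,\dots,N-1$ is $N$ times the inverse DFT of the zero-padded, radius-scaled coefficients.

```python
import numpy as np

def min_on_circle(coeffs, rho, N=4096):
    """Coarse minimizer of |p(rho*e^{i*theta})| by FFT grid search.

    coeffs: c_0, ..., c_n with p(z) = sum_k c_k z^k.
    p(rho*e^{2*pi*i*j/N}) = sum_k (c_k rho^k) e^{2*pi*i*j*k/N},
    which is N * ifft of the zero-padded sequence c_k * rho^k.
    """
    coeffs = np.asarray(coeffs, dtype=complex)
    scaled = np.zeros(N, dtype=complex)
    scaled[:len(coeffs)] = coeffs * rho ** np.arange(len(coeffs))
    values = N * np.fft.ifft(scaled)        # p at the N grid points
    j = np.argmin(np.abs(values))
    return 2 * np.pi * j / N, np.abs(values[j])

# Example: p(z) = 1 - 2z + z^3 on the circle |z| = 0.5
theta, val = min_on_circle([1, -2, 0, 1], 0.5)
print(theta, val)
```

The coarse grid minimum could then be polished with one of the univariate methods mentioned above (e.g. Brent's method) on the bracketing interval.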
The quantum wavepacket spreading is underlain by the dispersiveness of the free Schrödinger equation, which is basically a diffusion equation of sorts. For simplicity, let's consider an electron coming from outer space, and stick to one dimension, with m and ħ set equal to one (we'll reinstate them later). Also, let's not use "collapse" (reserved for observation) to denote wavepacket spreading. In momentum space, there is no counterintuitive stuff: momenta are conserved and the wave packet profile in momentum space preserves its shape upon propagation: no forces. Restricting attention to one dimension, the solution to the Schrödinger equation satisfying the Gaussian initial condition starting at the origin with minimal space uncertainty, $$u(x,0) =\int \frac{dk}{2\sqrt{\pi}}e^{ikx}e^{-(k-\bar{k})^2/4}= e^{-x^2+i\bar{k} x},$$ is seen to be $$\begin{align} u(x,t) &= \frac{1}{\sqrt{1 + 2it}} e^{-\frac{1}{4}\bar{k}^2} ~ e^{-\frac{1}{1 + 2it}\left(x - \frac{i\bar{k}}{2}\right)^2}\\ &= \frac{1}{\sqrt{1 + 2it}} e^{-\frac{1}{1 + 4t^2}(x - \bar{k}t)^2}~ e^{i \frac{1}{1 + 4t^2}\left((\bar{k} + 2tx)x - \frac{1}{2}t\bar{k}^2\right)} \\&= e^{i\bar{k}x-it\bar{k}^2/2}~~\frac{e^{-\frac{1-2it}{1 + 4t^2}(x - \bar{k}t)^2}~ }{\sqrt{1 + 2it}} ~.\end{align}$$ The leading part is the plane wave corresponding to the "center" of the wave packet, and the trailing part contains the real exponent which demarcates the envelope, so the probability density $|u|^2\sqrt{2/\pi}$ propagates "classically" with group velocity $\bar k$, as it rapidly spreads, $$|u(x,t)|^2 = \frac{1}{\sqrt{1+4t^2}}~e^{-\frac{2(x-\bar{k}t)^2}{1+4t^2}}~. $$ A bit too rapidly... The width here, $\sqrt{1+4t^2} \to 2t$, after reinstating the naturalized constants, amounts to $$\Delta x=A\sqrt{1+(\hbar t/mA^2)^2},$$ for $A$ the initial width. If we took it to be ångströms, plugging in values for the electron mass, we see a spread to kilometers in a millisecond. That is, the normalized probability envelope above has all but dissolved into a delocalized flapjack, and the electron consists of plane wave components, which is why distant cosmic rays are modeled by plane waves. (1. Yes, multi-multi-multiple kilometers, before detection.) The electron has not disappeared into nothingness (2. It's all there, so it will keep coming and be detected, ultimately, with the same probability); it is just that its precise location has quantum-diffused all over the place. With some probability, these electrons will come to your detector, normally modeled by plane waves, and will have mostly momenta/velocities close to $\bar k$, as posited; this distribution has not changed. The consequent spread in detection times, non-relativistically, will be $$\Delta t = \frac{\Delta x} {\bar{v}}= \frac{t}{A\bar{k}}~. $$ Where did they come from (in x; the momenta in 3D specify the direction)? Who knows. Quantification may be fiendishly conducive to errors by dozens of orders of magnitude, however. See Tzara 1988 for reconciliation/reassurance that the short ultrarelativistic neutrino burst of the SN1987A supernova did not violate the above standard wavepacket-spreading notions, as erroneously claimed: one must compute the spreading in the wavepacket's rest frame and then transform to the terrestrial detector frame! Phew...

Numbers: Prompted by the question, let's summarize the basic numerics of the spread.
Define a characteristic time $$\tau=A^2 \frac{m}{\hbar}, \qquad \Longrightarrow \qquad \Delta x= A \frac{t}{\tau}.$$ For an electron and $A \sim$ ångström, $\tau = 10^{-16}\,\mathrm{s}$, whence the above spread to a kilometer in milliseconds. But for an iron nucleus and $A$ in microns, we have, instead, $\tau = 10^{-7}\,\mathrm{s}$. This would amount to only an expansion to 10 m in a second. Can you estimate the ages required to expand the probabilistic size of a quantum basketball by such a factor of 10?
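A back-of-the-envelope script for the spreading formula (Python; the constants are standard values, and only the electron/ångström case from the text is computed, so other masses and widths can simply be plugged in):

```python
import math

HBAR = 1.054571817e-34  # J*s

def spread(A, m, t):
    """Width Delta_x = A*sqrt(1 + (t/tau)**2) of a Gaussian packet,
    with characteristic time tau = m*A**2/hbar."""
    tau = m * A**2 / HBAR
    return A * math.sqrt(1 + (t / tau)**2), tau

# Electron, initial width ~ 1 angstrom, after 1 ms:
dx, tau = spread(A=1e-10, m=9.109e-31, t=1e-3)
print(f"electron: tau = {tau:.1e} s, spread after 1 ms = {dx:.1e} m")  # ~ km scale
```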
The KS test uses the statistic $$ D_n=\sup_x |\hat{F}_n(x)-F_0(x)| $$ where $F_0(x)$ is the distribution to be tested and $\hat{F}_n(x)$ is the empirical distribution. Under the null hypothesis, $D_n$ (suitably rescaled) is distributed according to the Kolmogorov distribution, and the KS test is performed by testing $D_n$ against the Kolmogorov distribution. My question concerns the bootstrapped version, i.e., $$ D^*_n=\sup_x |\hat{F}_n^*(x)-F_0(x)| $$ where $\hat{F}_n^*(x)$ is obtained through bootstrapping from $\hat{F}_n(x)$. Given the data, what is the distribution of $D^*_n$ generated by the bootstrap? What is the relationship between this distribution and the Kolmogorov distribution for $n\rightarrow \infty$? Can I derive a KS test based on the distribution of $D^*_n$ given the data? Is there some known result about this? Thank you for any help.
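Edit: to make the procedure concrete, here is the bootstrap scheme I have in mind (a Python sketch with numpy/scipy assumed available; the standard normal $F_0$ and the sample sizes are purely illustrative):

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(0)
x = rng.normal(size=100)          # observed data
n, B = len(x), 2000

d_star = np.empty(B)
for b in range(B):
    resample = rng.choice(x, size=n, replace=True)    # draw from F_n-hat
    d_star[b] = kstest(resample, norm.cdf).statistic  # sup |F*_n - F_0|

print(np.quantile(d_star, [0.5, 0.95]))  # bootstrap distribution of D*_n
```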
You are right, the horizontal tail of a conventional airplane appears to have a higher incidence, but the actual angle of attack is smaller than that of the wing. The wing, flying ahead of the tail, produces downwash, so the flow at the tail location has a distinct downward component. The downwash angle can be calculated from the lift coefficient and the geometry of the aircraft. To simplify things, let's assume the wing acts on the air with density $\rho$ flowing at speed $v$ through a circle with a diameter equal to the span $b$ of the wing. If we just look at this stream tube, the mass flow is $$\frac{dm}{dt} = \frac{b^2}{4}\cdot\pi\cdot\rho\cdot v$$ Lift $L$ is then the momentum change per unit time caused by the wing, and in steady flight it is equal to the weight. With the downward air speed $v_z$ imparted by the wing, lift is: $$L = \frac{b^2}{4}\cdot\pi\cdot\rho\cdot v\cdot v_z = S\cdot c_L\cdot\frac{v^2}{2}\cdot\rho$$ $S$ is the wing area and $c_L$ the overall lift coefficient. If we now solve for the vertical air speed, we get $$v_z = \frac{S\cdot c_L\cdot\frac{v^2}{2}\cdot\rho}{\frac{b^2}{4}\cdot\pi\cdot\rho\cdot v} = \frac{2\cdot c_L\cdot v}{\pi\cdot AR}$$ with $AR = \frac{b^2}{S}$ the aspect ratio of the wing. Now we can divide the vertical speed by the air speed to calculate the angle by which the air has been deflected by the wing. Let's call it $\alpha_w$: $$\alpha_w = \arctan\left(\frac{v_z}{v}\right) = \arctan \left(\frac{2\cdot c_L}{\pi\cdot AR}\right)$$ A typical airliner cruise lift coefficient is 0.4, and a typical aspect ratio is around 8: this results in a downwash angle of nearly 2° if the lift distribution over the span is elliptical. In reality, it is more triangular, so the downwash angle is larger near the center of the aircraft; note that the engine nacelles of the DC-9 and the MD-80 range are tilted 3° up to align them with the local flow. The angle of attack at the tail is lower by this downwash angle, and if the incidence difference between wing and tail is less than that, the tail surface will still appear angled upward relative to the local flow. To achieve static stability, the tail has to fly at a slightly lower angle of attack than the wing.
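A one-line check of the downwash formula (Python; the $c_L = 0.4$ and $AR = 8$ values are the ones quoted above):

```python
import math

def downwash_angle_deg(c_L, AR):
    """alpha_w = arctan(2*c_L / (pi*AR)), elliptical lift distribution."""
    return math.degrees(math.atan(2 * c_L / (math.pi * AR)))

print(downwash_angle_deg(0.4, 8))  # ~1.8 degrees, i.e. "nearly 2 deg"
```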
This problem can be tidied up using the floor function, where $\lfloor x\rfloor$ denotes the largest integer less than or equal to $x$. For example, we have $\lfloor 3.14\rfloor=3$ and $\lfloor 6\rfloor=6$. In particular, if $n$ and $m$ are positive integers, then $\lfloor n/m\rfloor$ is the quotient you obtain when you divide $n$ by $m$ (i.e., throw away the remainder). Then, for integers $q>0$, you have a family of languages $$L_q=\{a^nb^m\mid \lfloor n/m\rfloor=q\}$$ We'll generalize your proof slightly to show that $L_q$ is not regular for any integer $q>0$. For a fixed $q$, assume $L_q$ were regular. Then the Pumping Lemma implies that there is an integer $p$ such that any string $w\in L_q$ of length greater than or equal to $p$ can be written as $w=xyz$ with

1. $xy^iz\in L_q$ for any $i\ge 0$,
2. $|y|>0$,
3. $|xy|\le p$.

Choose the string $a^{pq}b^p\in L_q$ and write $a^{pq}b^p=xyz$ as above. Then, as you noted, we may say $xy=a^k$ without loss of generality (the case $xy=b^k$ is handled in the same way as below, and the case $xy=a^ib^j$ produces an immediate contradiction for $xy^2z$, since that string has the wrong form to be in $L_q$). Now we know that $y=a^j$ for some $0<j\le p$. The first inequality comes from condition (2) of the PL and the second comes from condition (3). As you noted, we'll then have $$xyz=(a^r)(a^j)(a^{pq-r-j}b^p)$$ and so $$xz=(a^r)(a^{pq-r-j}b^p)=a^{pq-j}b^p$$ Now we'll show that $xz\notin L_q$, contradicting condition (1) of the PL. If, to the contrary, $xz\in L_q$, we'd have to have $$\left\lfloor\frac{pq-j}{p}\right\rfloor=q$$ but $$\left\lfloor\frac{pq-j}{p}\right\rfloor=\left\lfloor q-\frac{j}{p}\right\rfloor$$ Now we come to the heart of your question: since $0<j\le p$ by (2) and (3), we'll have $q-1\le q-(j/p) < q$, so $$\left\lfloor\frac{pq-j}{p}\right\rfloor=q-1$$ and so $xz\notin L_q$, giving us the contradiction we needed and completing the proof that $L_q$ is not regular for any $q>0$.
My question is: what is the time-dependent part of the $\vec B$ field that is not determined by Faraday's law? The problem I am working on states that we should 'set it to zero', but I don't see it. Given $$\vec E (r, \theta, \phi, t) = A \frac{\sin \theta}{r} \cos(kr - \omega t) \hat \phi,$$ I use Faraday's law, $\vec \nabla \times \vec E = - \frac{\partial \vec B}{\partial t}$, and the expression for the curl in spherical polar coordinates to find that $$\vec \nabla \times \vec E = \frac{2A \cos \theta}{r^2} \cos(kr - \omega t) \hat r + \frac{kA \sin \theta}{r} \sin(kr - \omega t) \hat \theta.$$ Integrating with respect to time to find $\vec B$ yields \begin{align} \vec B &= - \left[\frac{2A \cos \theta}{r^2} \hat r \int \cos(kr - \omega t)\,dt + \frac{kA \sin \theta}{r} \hat \theta \int \sin(kr - \omega t)\,dt\right] \\ &= \frac{2A \cos \theta}{r^2 \omega} \sin(kr - \omega t) \hat r - \frac{kA \sin \theta}{r \omega} \cos(kr - \omega t) \hat \theta + \vec C, \end{align} where $\vec C$ is a constant of integration in time. Assuming that what I have done is okay, what is meant by a time-dependent component that is not determined by Faraday's law? Both components of the magnetic field are time dependent and they are determined. I hope my question is clear. Thanks.
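Edit: for completeness, here is a quick symbolic check of the curl above (Python with sympy assumed available; the two lines implement the standard spherical-coordinate curl components for a field that has only a $\hat\phi$ component depending on $r$ and $\theta$):

```python
import sympy as sp

r, th, t = sp.symbols('r theta t', positive=True)
A, k, w = sp.symbols('A k omega', positive=True)

E_phi = A * sp.sin(th) / r * sp.cos(k*r - w*t)

# For E = E_phi(r, theta, t) * phi-hat, the spherical curl reduces to:
curl_r  = sp.simplify(sp.diff(sp.sin(th) * E_phi, th) / (r * sp.sin(th)))  # r-hat part
curl_th = sp.simplify(-sp.diff(r * E_phi, r) / r)                          # theta-hat part

print(curl_r)   # 2*A*cos(theta)*cos(k*r - omega*t)/r**2
print(curl_th)  # A*k*sin(theta)*sin(k*r - omega*t)/r
```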
This article is the fourth part in a series based on a report reviewing the technical report of Meyer and Tschudin[11], who have extended the notion of an artificial chemistry to an artificial packet chemistry with the intention of exploiting the natural behavior of chemical reactions to design better flow-management policies for computer networks. Part 1 introduced the chemistry foundations necessary for study of this technical report, including the Law of Mass Action. Part 2 elaborated on the mathematical model of an artificial packet chemistry. Part 3 discussed the various types of mathematical analyses for various queueing questions now available with the expression of a computer network and its flow as an artificial packet chemistry. This part will discuss an actual engineering application of all the ideas discussed thus far: a scheduler and a chemical control plane.

Implementation of a Scheduler Based on the Law of Mass Action

Likely, this section will be of greatest interest to network engineers. The authors have indeed designed and implemented a scheduler that utilizes this approach in an elegant fashion. In addition, they discuss a "chemical control plane" that can automatically be compiled from the abstract model. In another application, they relax the static nature of the network to allow an active-networking approach that reshapes the queueing network at run-time. The authors do discuss specifics of implementation, though this article will only briefly touch on them.

Scheduler

Each network node/reaction vessel has its own scheduler. The scheduler computes the next occurrence time of each rule $r \in R_{i}$ in its local node (this is equivalent to "serving" or processing a packet, or a set of packets for bimolecular reactions) according to the Law of Mass Action. It then sorts the events into a priority queue, waits until the first event occurs, and executes it. The main difficulty for a scheduler is to dynamically react and reschedule events properly as packets are added to or drained from its queues. The authors note that an efficient mass-action scheduler can be implemented that requires only $O(\log(|\mathcal{R}|))$ time to enqueue or dequeue packets. This is based on the Next Reaction Method[4] of Gibson and Bruck. Here we'll recount an explicit example that illustrates the concept. If we return to Figure 1, reproduced below, we can walk through Meyer and Tschudin's scheduler implementation. There are two queues, $X$ and $Y$. Reaction 1 (Server 1) is bimolecular: $X+Y \rightarrow Z$, so the server pulls packets from two queues to execute the service. Reaction 2 (Server 2) is unimolecular, pulling only from queue $Y$. If we assume the reaction constants $k_{1} = 1000/(\text{packet}\cdot\text{s})$ and $k_{2} = 1000/\text{s}$, that $X$ begins with two packets in its queue, and that $Y$ begins with three packets in its queue, then the reaction rates $\nu_{r}$, $r=1,2$, are respectively $\nu_{1} = k_{1}c_{X}c_{Y} = 1000\cdot2\cdot3 = 6000$ and $\nu_{2} = k_{2}c_{Y} = 1000\cdot 3 = 3000$. The occurrence time is the reciprocal of the reaction rate, so the occurrence times $\tau_{r}$ are respectively $\tau_{1} = \frac{1}{6}$ ms and $\tau_{2} = \frac{1}{3}$ ms. That means the first server executes its action first, extracting packets from both $X$ and $Y$. Since the occurrence time of $r_{2}$ is coupled with $r_{1}$ (both servers pull from queue $Y$), the action of $r_{1}$ requires a rescheduling of $r_{2}$.
After $r_{1}$ pulls a packet each from $X$ and $Y$, there is 1 packet left in $X$ and 2 in $Y$, which means we have to recalculate the rate $\nu_{2} = 1000\cdot 2 = 2000$. The occurrence time of $r_{2}$ is at $\frac{1}{3}$ ms, so its time of execution hasn't arrived. But thanks to $r_{1}$'s effect, we have to rescale and reschedule the occurrence time of $r_{2}$. This is done by the following:
$$\tau_{r,\text{new}} = \frac{\nu_{r,\text{old}}}{\nu_{r,\text{new}}}(\tau_{r,\text{old}}-t_{\text{now}}) + t_{\text{now}},$$
where $(\tau_{r,\text{old}} -t_{\text{now}})$ is the time remaining between the current time and the original execution time, and the multiplier in front rescales that remaining wait by the ratio of the old rate to the new one, exactly as in Gibson and Bruck's Next Reaction Method. In this example, at $t_{\text{now}} = 1/6$ ms, $r_{2}$ was supposed to fire at $1/3$ ms; since its rate dropped from 3000 to 2000, the event is prolonged to $\frac{3000}{2000}\left(\frac{1}{3}-\frac{1}{6}\right)+\frac{1}{6} = \frac{5}{12}$ ms. There are other timed scheduling algorithms utilized in computer networking, such as Earliest Deadline First, which require tagging each packet with a timestamp. This scheduler does not require such an imposition.
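A minimal sketch of this rescheduling step (Python; a toy illustration of the formula rather than Meyer and Tschudin's actual implementation, which uses an indexed priority queue per Gibson and Bruck):

```python
def reschedule(tau_old, nu_old, nu_new, t_now):
    """Rescale a pending occurrence time after a coupled reaction changed its
    rate (Next Reaction Method): the remaining wait scales by nu_old/nu_new."""
    return t_now + (nu_old / nu_new) * (tau_old - t_now)

# The example above: r2 was due at 1/3 ms, r1 fired at t_now = 1/6 ms,
# and r2's rate dropped from 3000/s to 2000/s after r1 consumed a Y packet.
print(reschedule(1/3, 3000.0, 2000.0, 1/6))  # -> 0.41666... ms, i.e. 5/12 ms
```

Because rates only change for reactions sharing a queue with the one that fired, only those coupled entries of the priority queue need updating, which is where the $O(\log|\mathcal{R}|)$ bound comes from.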
If the differential equation $\dot{x} = \lambda-kx$ that approximates a unimolecular reaction is converted to the frequency domain via the Laplace transform, the transfer function $F(s)$ has a cut-off frequency at $k$, the reaction constant: $$F(s) = \frac{\mu(s)}{\lambda(s)} = \frac{k}{s+k}$$ That is, higher-frequency flows will be attenuated, much like dark glasses do with sunlight. Applying this filter at an ingress point of a network leads to less chaotic traffic patterns, but at the cost of a delay $\frac{1}{k}$ and memory to buffer the packets. The mean queue length for this single queue will therefore grow proportionally with the delay and flow rate; that is, $\hat{x} = \frac{\lambda}{k}$.

Another way the LoMA queues described by Meyer and Tschudin differ from standard M/M/1 queueing models is that the service rate is ultimately unbounded (for infinite-capacity queues/networks), since it is proportional to the queue length. This is undesirable in a network, and thus the authors borrow from biological systems and design an abstract enzymatic reaction to limit the rate of packet flow. In biological systems, enzyme molecules bind to reactant molecules $X$, called substrates, in order to prevent a particular molecule from reacting immediately. Some number of enzyme molecules $E$ exist, and they can either exist free-form or bound in a complex ($EX$). The more enzyme molecules in bound form, the more slowly the transmission rate grows for an increasing arrival rate. At equilibrium, the influx and efflux of substrate-enzyme complex molecules are equal according to Kirchhoff's law, so $$k_{w}c_{X}c_{E} = k_{s}c_{EX},$$ where $k_{w}$ is the binding constant and $k_{s}$ the processing constant. Take a look at Figure 8 above in the chemical control plane to see this in action. The number of enzymes is constant, so $c_{E} + c_{EX} = e_{0}$, which yields the Michaelis-Menten equation, expressing the transmission rate $\mu$ in terms of the queue length $c_{X}$: $$\mu = \nu_{\max}\frac{c_{X}}{K_{M} + c_{X}},$$ which yields a hyperbolic saturation curve. Here $\nu_{\max} = k_{s}e_{0}$, and $K_{M} = \frac{k_{s}}{k_{w}}$ specifies the concentration of $X$ at which half of $\nu_{\max}$ is reached. When the queue length at queue $X$ is high, the transmission rate converges to $\nu_{\max}$; it behaves like a normal unimolecular reaction when the queue length is short (a small numerical sketch of this saturation appears at the end of this section).

The authors also extend this model to handle dynamic changes to the topology of the queueing network, which means that instances of queues and flow relations can be generated "on the fly," as it were. Tschudin[13] has created an executable string and multiset rewriting system called Fraglets that allows for the implementation and running of protocols based on the ideas put forth thus far. The paper describes how to explicitly implement the enzymatic rate limiter of the chemical control plane in Figure 8. In this implementation, rather than flow interactions being static and determined at the design phase, each fraglet (packet) sorts itself into a queue. After a packet is serviced, the header of a fraglet is treated as code, allowing a packet to determine its own route, comparable to active networking. The relationship between the abstract model and the execution layer remains, which allows a mathematical model of the behavior of a Fraglets implementation to be generated automatically, and a queueing network to be designed and then easily realized in the Fraglets language.
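Here is the promised numerical sketch of the enzymatic rate limiter (Python; the rate constants are made-up illustrative values, not ones from the paper):

```python
def mm_rate(c_x, k_s=1000.0, k_w=10.0, e_0=5.0):
    """Michaelis-Menten transmission rate: mu = v_max * c_X / (K_M + c_X),
    with v_max = k_s * e_0 and K_M = k_s / k_w."""
    v_max, K_M = k_s * e_0, k_s / k_w
    return v_max * c_x / (K_M + c_x)

for c_x in (1, 10, 100, 1000, 10000):
    print(f"queue length {c_x:5d} -> rate {mm_rate(c_x):7.1f}/s")  # caps at 5000/s
```

For short queues the rate grows nearly linearly in the queue length, like an ordinary unimolecular reaction; for long queues it saturates at $\nu_{\max} = k_s e_0$.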
Continuation

The final part of this work will discuss an application of artificial packet chemistry to the implementation of a congestion control algorithm, briefly discuss design motifs, and conclude.

References

[1] Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries: a review. Artificial Life 7 (2001), 225–275.
[2] Feinberg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
[3] Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
[4] Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
[5] Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
[6] Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
[7] Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
[8] Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
[9] Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
[10] McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
[11] Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
[12] Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
[13] Tschudin, C. Fraglets: a metabolistic execution model for communication protocols. Proceedings of the 2nd Annual Symposium on Autonomous Intelligent Networks and Systems (2003).

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
This tutorial assumes you already have a LaTeX distribution and a text editor installed. If that is not the case, follow the installation steps on the Getting Started page. The remainder of this tutorial presents examples in Texmaker. If you do not use Texmaker, don't worry, only a few steps will be different! Alternatively, use Overleaf, since it eliminates installation issues and is most likely the platform you'll be doing the majority of your group assignments on, since it offers real-time collaboration. Think Google Docs, but for LaTeX. As is mentioned on the downloads page, the LaTeX distribution comes with a couple of different TeX engines. To keep things simple, we'll use pdfLaTeX in this tutorial, which Texmaker is configured to use by default.

The Document

A LaTeX document is a plain text file consisting of text and routines as understood by the TeX compiler. This means you can use any plain text editor to write LaTeX documents, e.g. Texmaker, TeXstudio, Notepad, Wordpad, TextEdit, nano, vi, etc. The list is long! As an example, the following snippet is a minimal LaTeX document that writes "Hello World!" in the top left corner of a page, followed by a numbered equation.

\documentclass{article}
\usepackage[left=1cm, top=1cm, right=1cm, bottom=1cm]{geometry}

\begin{document}
Hello World!

\begin{equation}
  \sum \limits_{i = 0}^{N} a_i
\end{equation}
\end{document}

This snippet, as with all LaTeX documents, consists of two parts: a preamble and a document body. The preamble is the name of the content before the \begin{document} line and is where all document-specific formatting is placed. For example, \documentclass{article} includes settings from the article document class, e.g., page style, size of margins, font sizes of paragraphs and headings, definitions of useful routines and much more. Following the document class, you can include your own customisations and define your own commands and routines.

Preamble

Usually, someone else has already made the change you desire, and instead of spending hours just to change the size of the margins, you can simply include the geometry package and specify the (left, top, right and bottom) sizes in the unit you desire:

\usepackage[left=1cm, top=1cm, right=1cm, bottom=1cm]{geometry}

A LaTeX package is a black box of features and customisations that can be included in a LaTeX document. The geometry package is an example of a package that makes it easier to change page dimensions. However, the geometry package has much more under the hood than the snippet above shows. Your LaTeX distribution comes with many packages by default, and they are originally fetched from CTAN – Packages, an archive of thousands of packages that are freely available for anyone to use. On this website, the documentation for a specific package can also be found. Commonly used packages include: float, biblatex, amsmath and cleveref. If you are interested, you can read about them in their documentation on CTAN. A preamble can quickly grow with many packages and other customisations, and a cluttered LaTeX document is not what we want! That is where a document class shines: it abstracts all the customisations and reduces them to a single line, \documentclass{article}. With that being said, many classes exist: if you want to make a poster, use the tikzposter class, or if you want to write a book, use the memoir class. All of the different classes serve a specific purpose, but the most general and simplest class to use is the article class.
The syntax of LaTeX consists of certain special characters, as you might have noticed from the snippet above: \, { and }, [ and ] (sometimes). Apart from these, the following characters are also special: # (argument reference), & (alignment), % (comments), $, ^, and _. The first three are not covered in this tutorial.

\ (backslash): The backslash followed by a word is understood by the TeX compiler as a reference to a defined command (or control sequence). Thus, in the document example above, six commands are referenced/invoked: documentclass, usepackage, begin, sum, limits, and end.

{ and } (curly brackets): Curly brackets must be written in pairs, and everything inside such a group can be thought of as an argument.

[ and ] (square brackets): Usually, when used in combination with a command, square brackets hold optional arguments. documentclass accepts optional arguments, e.g., 12pt and a4paper to set the base font size to 12pt (points) and the document dimensions to A4 paper: \documentclass[12pt, a4paper]{article}. Note: the default paper format is configured during the installation of the LaTeX distribution. Optional arguments are also used in the package snippet where the margins are set when invoking the usepackage command with the geometry argument.

Document Body

All the properties included in the preamble, either specifically defined there or included by a package, become available in the body of the document. The body of the document is the content inside the document environment, started by the \begin{document} command and ended by the \end{document} command. Everything within this document environment is output as text in the compiled document, except commands, which first get evaluated and are then substituted by their returned value. Environments isolate their content and may have special commands only available inside them. As an example, a powerful environment that makes LaTeX so great is the math environment (also referred to as math mode), which typesets mathematical expressions differently from normal text and enables shorthand versions of sub- and superscript as _ and ^. These two operators only affect the following character, except in the situation where the character is a {, which then starts a group of characters that must be ended with a }. The grouped characters inside these brackets will all be affected by the operator. One way to think of it is to perceive _{input} as a function that takes the argument input. Since the math environment is used often, shorthand versions of \begin{math} and \end{math} are defined as a pair of the special character $: $\pi = 3.14$. This simple way of writing mathematical expressions should be used for inline math, that is, when you want to introduce math inside a paragraph. By using double dollar signs, $$\pi = 3.14$$, the expression is centered on its own line. The above usage of the shorthand versions of the math environment is specific to plain TeX syntax. While they work in LaTeX, the revised LaTeX way of doing it is to use \( and \) for inline math, and \[ and \] for display math. The use of brackets is much more convenient, and errors are much easier to spot. The following snippet covers these shorthand operators. It compiles into a discrete sum (\sum \limits) of length N (^{N}) with i initialised to 0 (_{i = 0}), centered on the page (\[ and \]). The \limits command simply moves the sub- and superscript to below and above the summation symbol.
Leaving it out will preserve the normal behaviour, which is rarely desired for sums (e.g. in-line sums).

\[ \sum \limits_{i = 0}^{N} a_i \]

Often you want to reference an equation in your text. While the display math usage (as in the snippet above) doesn't tie a number to the math expression, the equation environment does! The snippet can be rewritten as the following, which will label the equation as (1).

\begin{equation}
  \sum \limits_{i = 0}^{N} a_i
\end{equation}

This is just a brief sneak peek at references and labels in LaTeX! To keep this tutorial at the basics, let's skip ahead and compile the document.

Compiling The Document

In Texmaker, go to File -> New and paste the example from the beginning of this tutorial into the text field. Save the file somewhere, perhaps your desktop. In the top toolbar of Texmaker there are two arrows, as illustrated in the following image.

Texmaker's compile toolbar

These are buttons which will perform the action that is selected in the dropdown menu. By default, the left one performs a Quick Build and the right one updates the PDF viewer. Texmaker has a built-in PDF viewer which you can have side-by-side with your LaTeX document, so you don't have to switch back and forth between windows to see your changes. The other options in the right dropdown are not important, since you will only compile your document into a PDF (that is, by using pdfLaTeX). "What does Quick Build do?" you might wonder. It's a combination of programs that are executed in sequence when you click the arrow. If you browse the configuration in Texmaker (Options -> Configure Texmaker), you can find the following under the Quick Build pane.

Texmaker's Quick Build options

Texmaker is simply a text editor that interfaces with the LaTeX distribution on your system. Thus, Texmaker executes the programs specified by the selected Quick Build option, which are provided by the LaTeX distribution. Texmaker cannot execute these programs if they aren't installed, meaning you must have a LaTeX distribution available on your system. The default Quick Build setup is sufficient for regular documents, but once you start to include a bibliography, label references, nomenclature, etc., multiple files have to be processed over several steps. By using Quick Build, you don't have to think about doing these steps manually; instead, you just click the arrow button and wait for the program sequence to terminate. With the default Quick Build setting, Texmaker will run pdfLaTeX followed by updating the PDF viewer on the right-hand side. If you now click the left arrow in the compile toolbar, you should see your first LaTeX-compiled PDF document in the PDF viewer, which will look like the following (cropped to fit here):

Compiled PDF document

Murphy's Law

Murphy's Law also applies to LaTeX, unfortunately. While you won't expect it, errors will happen often, and they will, as a whole, be your new worst enemy which you will have to learn to beat. The LaTeX syntax is very strict, and the smallest mistake can make your document uncompilable – which is horrible when you have to turn in your report in less than an hour!

Next Steps

This tutorial walked through the process of creating a simple LaTeX document and compiling it to a PDF document with the pdfLaTeX compiler. The LaTeX distribution that you use comes with other compilers than just pdfLaTeX.
In this section, we consider three more families of discrete probability distributions. There are some similarities between the three, which can make them hard to distinguish at times. So throughout this section we will compare the three to each other and to the binomial distribution, and point out their differences.

Hypergeometric Distribution

Consider the following example.

Example \(\PageIndex{1}\)

An urn contains a total of \(N\) balls, where some number \(m\) of the balls are orange and the remaining \(N-m\) are grey. Suppose we draw \(n\) balls from the urn without replacement, meaning once we select a ball we do not place it back in the urn before drawing the next one. Then some of the balls in our selection may be orange and some may be grey. We can define the discrete random variable \(X\) to give the number of orange balls in our selection. The probability distribution of \(X\) is referred to as the hypergeometric distribution, which we define next.

Definition \(\PageIndex{1}\)

Suppose that in a collection of \(N\) objects, \(m\) are of type 1 and \(N-m\) are of another type 2. Furthermore, suppose that \(n\) objects are randomly selected from the collection without replacement. Define the discrete random variable \(X\) to give the number of selected objects that are of type 1. Then \(X\) has a hypergeometric distribution with parameters \(N, m, n\). The probability mass function of \(X\) is given by
\begin{align} p(x) = P(X=x) &= P(x\ \text{type 1 objects and}\ n-x\ \text{type 2}) \notag \\ &= \frac{(\text{# of ways to select}\ x\ \text{type 1 objects from}\ m) \times (\text{# of ways to select}\ n-x\ \text{type 2 objects from}\ N-m)}{\text{total # of ways to select}\ n\ \text{objects of any type from}\ N} \notag \\ &= \frac{\displaystyle{\binom{m}{x}\binom{N-m}{n-x}}}{\displaystyle{\binom{N}{n}}} \label{hyperpmf} \end{align}

In some sense, the hypergeometric distribution is similar to the binomial, except that the method of sampling is crucially different. In each case, we are interested in the number of times a specific outcome occurs in a set number of repeated trials, where we could consider each selection of an object in the hypergeometric case as a trial. In the binomial case we are interested in the number of "successes" in the trials, and in the hypergeometric case we are interested in the number of a certain type of object being selected, which could be considered a "success". However, the trials in a binomial distribution are independent, while the trials in a hypergeometric distribution are not, because the objects are selected without replacement. If, in Example 3.6.1, the balls were drawn with replacement, then each draw would be an independent Bernoulli trial and the distribution of \(X\) would be binomial, since the number of balls in the urn would be the same each time another ball is drawn. However, when the balls are drawn without replacement, the draws are not independent, since the number of balls in the urn decreases after each draw, as does the number of balls of a given type.

Exercise \(\PageIndex{1}\)

Suppose your friend has 10 cookies, 3 of which are chocolate chip. Your friend randomly divides the cookies equally between herself and you. What is the probability that you get all the chocolate chip cookies?

Answer

Let random variable \(X=\) the number of chocolate chip cookies you get. Then \(X\) is hypergeometric with \(N=10\) total cookies, \(m=3\) chocolate chip cookies, and \(n=5\) cookies selected by your friend to give to you.
We want the probability that you get all the chocolate chip cookies, i.e., \(P(X=3)\), which is $$P(X=3) = \frac{\displaystyle{\binom{3}{3}\binom{7}{2}}}{\displaystyle{\binom{10}{5}}} = 0.083\notag$$ Note that \(X\) has a hypergeometric distribution and not a binomial one, because the cookies are being selected (or divided) without replacement.

Geometric Distribution & Negative Binomial Distribution

The geometric and negative binomial distributions are related to the binomial distribution in that the underlying probability experiment is the same, i.e., independent trials with two possible outcomes. However, the random variable defined in the geometric and negative binomial case highlights a different aspect of the experiment, namely the number of trials needed to obtain a specific number of "successes". We start with the geometric distribution.

Definition \(\PageIndex{2}\)

Suppose that a sequence of independent Bernoulli trials is performed, with \(p = P(\text{"success"})\) for each trial. Define the random variable \(X\) to give the number of the trial on which the first success occurs. Then \(X\) has a geometric distribution with parameter \(p\). The probability mass function of \(X\) is given by
\begin{align} p(x) = P(X=x) &= P(1^{st}\ \text{success on}\ x^{th}\ \text{trial}) \notag \\ &= P(\text{first}\ (x-1)\ \text{trials are failures and}\ x^{th}\ \text{trial is a success}) \notag \\ &= (1-p)^{x-1}p, \quad\text{for}\ x = 1, 2, 3, \ldots \label{geompmf} \end{align}

Exercise \(\PageIndex{2}\)

Verify that the pmf for a geometric distribution (Equation \ref{geompmf}) satisfies the two properties for pmf's, i.e.,

1. \(p(x) \geq 0\), for \(x=1, 2, 3, \ldots\)
2. \(\displaystyle{\sum^{\infty}_{x=1} p(x) = 1}\)

Hint: It's called "geometric" for a reason!

Answer

Note that \(0\leq p\leq 1\), so that we also have \(0\leq (1-p) \leq 1\) and \(0\leq (1-p)^{x-1} \leq 1\), for \(x=1, 2, \ldots\). Thus, it follows that \(p(x) = (1-p)^{x-1}p \geq 0\). Recall the formula for the sum of a geometric series: $$\sum^{\infty}_{x=1} ar^{x-1} = \frac{a}{1-r}, \quad\text{if}\ |r|<1.\notag$$ Note that the sum of the geometric pmf is a geometric series with \(a=p\) and \(r = 1-p < 1\). Thus, we have $$\sum_{x=1}^{\infty} p(x) = \sum^{\infty}_{x=1} (1-p)^{x-1}p = \frac{p}{1 - (1-p)} = \frac{p}{p} = 1\ \checkmark\notag$$

Example \(\PageIndex{2}\)

Each of the following is an example of a random variable with the geometric distribution.

1. Toss a fair coin until the first heads occurs. In this case, a "success" is getting heads ("failure" is getting tails), and so the parameter \(p = P(h) = 0.5\).
2. Buy lottery tickets until getting the first win. In this case, a "success" is getting a lottery ticket that wins money, and a "failure" is not winning. The parameter \(p\) will depend on the odds of winning for a specific lottery.
3. Roll a pair of fair dice until getting the first double 1's. In this case, a "success" is getting double 1's, and a "failure" is simply not getting double 1's (so anything else). To find the parameter \(p\), note that the underlying sample space consists of all possible rolls of a pair of fair dice, of which there are \(6\times6 = 36\), because each die has 6 possible sides. Each of these rolls is equally likely, so $$p = P(\text{double 1's}) = \frac{\text{# of ways to roll double 1's}}{36} = \frac{1}{36}.\notag$$
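A quick numerical sanity check of the two pmf's above (a Python sketch; only the standard library is needed):

```python
from math import comb

# Hypergeometric: P(X = 3) for the cookie exercise (N=10, m=3, n=5)
p_cookies = comb(3, 3) * comb(7, 2) / comb(10, 5)
print(round(p_cookies, 3))         # 0.083

# Geometric: the pmf (1-p)^(x-1) * p sums to 1 (truncated sum is close to 1)
p = 1 / 36                         # P(double 1's) with a pair of fair dice
total = sum((1 - p) ** (x - 1) * p for x in range(1, 2000))
print(total)                       # ~1.0
```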
The negative binomial distribution generalizes the geometric distribution by considering any number of successes.

Definition \(\PageIndex{3}\)

Suppose that a sequence of independent Bernoulli trials is performed, with \(p = P(\text{"success"})\) for each trial. Fix an integer \(r\) greater than or equal to 2 and define the random variable \(X\) to give the number of the trial on which the \(r^{th}\) success occurs. Then \(X\) has a negative binomial distribution with parameters \(r\) and \(p\). The probability mass function of \(X\) is given by
\begin{align*} p(x) = P(X=x) &= P(r^{th}\ \text{success is on}\ x^{th}\ \text{trial}) \\ &= \underbrace{P((r-1)\ \text{successes in first}\ (x-1)\ \text{trials})}_{\text{binomial with}\ n=x-1} \times P(r^{th}\ \text{success on}\ x^{th}\ \text{trial}) \\ &= \binom{x-1}{r-1}p^{r-1}(1-p)^{(x-1)-(r-1)}\times p \\ &= \binom{x-1}{r-1}p^r(1-p)^{x-r}, \quad\text{for}\ x=r, r+1, r+2, \ldots \end{align*}

Example \(\PageIndex{3}\)

For examples of the negative binomial distribution, we can alter the geometric examples given in Example 3.6.2.

1. Toss a fair coin until getting 8 heads. In this case, the parameter \(p\) is still given by \(p = P(h) = 0.5\), but now we also have the parameter \(r = 8\), the number of desired "successes", i.e., heads.
2. Buy lottery tickets until winning 5 times. In this case, the parameter \(p\) is still given by the odds of winning the lottery, but now we also have the parameter \(r = 5\), the number of desired wins.
3. Roll a pair of fair dice until getting 100 double 1's. In this case, the parameter \(p\) is still given by \(p = P(\text{double 1's}) = \frac{1}{36}\), but now we also have the parameter \(r = 100\), the number of desired "successes".

In general, note that a geometric distribution can be thought of as a negative binomial distribution with parameter \(r=1\). Note that for both the geometric and negative binomial distributions, the number of possible values the random variable can take is infinite. These are still discrete distributions, though, since we can "list" the values. In other words, the possible values are countable. This is in contrast to the Bernoulli, binomial, and hypergeometric distributions, where the number of possible values is finite. We again note the distinction between the binomial distribution and the geometric and negative binomial distributions. In the binomial distribution, the number of trials is fixed, and we count the number of "successes". Whereas, in the geometric and negative binomial distributions, the number of "successes" is fixed, and we count the number of trials needed to obtain the desired number of "successes".
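And a corresponding check of the negative binomial pmf (Python; the coin example with r = 8 and an arbitrary trial count x = 16 are illustrative choices). Note the convention caveat in the comments: scipy's nbinom, if you use it for a cross-check, counts failures before the r-th success rather than total trials, so the argument shifts by r.

```python
from math import comb

def nbinom_pmf(x, r, p):
    """P(r-th success occurs on trial x) = C(x-1, r-1) p^r (1-p)^(x-r)."""
    return comb(x - 1, r - 1) * p**r * (1 - p) ** (x - r)

r, p = 8, 0.5                      # 8 heads with a fair coin
print(nbinom_pmf(16, r, p))        # P(8th head lands exactly on the 16th toss)

# Optional cross-check (assumes scipy is installed); scipy counts failures:
# from scipy.stats import nbinom; nbinom.pmf(16 - r, r, p) gives the same value.
```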