Integral Bifurcation Method together with a Translation-Dilation Transformation for Solving an Integrable 2-Component Camassa-Holm Shallow Water System
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 736765, 21 pages
Research Article
Center for Nonlinear Science Research, College of Mathematics, Honghe University, Yunnan, Mengzi 661100, China
Received 12 September 2012; Accepted 15 November 2012
Academic Editor: Michael Meylan
Copyright © 2012 Weiguo Rui and Yao Long. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
An integrable 2-component Camassa-Holm (2-CH) shallow water system is studied by using the integral bifurcation method together with a translation-dilation transformation. Many traveling wave solutions of nonsingular and singular type, such as solitary wave solutions, kink wave solutions, loop soliton solutions, compacton solutions, smooth periodic wave solutions, periodic kink wave solutions, singular wave solutions, and singular periodic wave solutions, are obtained. Furthermore, their dynamic behaviors are investigated. It is found that the waveforms of some traveling wave solutions vary with changes of a parameter; that is to say, the dynamic behavior of these waves partly depends on the ratio of the wave amplitude to the water level.
1. Introduction
In this paper, employing the integral bifurcation method together with a translation-dilation transformation, we study an integrable 2-component Camassa-Holm (2-CH) shallow water system [1] as follows: which is a nonlinear dispersive wave equation modeling the propagation of unidirectional irrotational shallow water waves over a flat bed [2], as well as water waves moving over an underlying shear flow [3]. Equation (1.1) also arises in the study of certain non-Newtonian fluids [4] and models finite-length, small-amplitude radial deformation waves in cylindrical hyperelastic rods [5], where and are two dimensionless parameters. The two components, respectively, describe the horizontal fluid velocity and the density in the shallow water regime: the variable describes the horizontal velocity of the fluid in the direction at time , and the variable is related to the free surface elevation from the equilibrium position (or scalar density) under the boundary assumptions. The parameter denotes the level of water, the parameter denotes the typical amplitude of the water wave, and the parameter denotes the typical wavelength of the water wave. The constant denotes the speed of the water current, which is related to the shallow water wave speed. The two cases , respectively, correspond to the situations in which the gravity acceleration points downwards and upwards. In particular, when the speed of the water current and the parameter , (1.1) takes the following form, where . System (1.2) appeared in [1], where it was first derived by Constantin and Ivanov from the Green-Naghdi equations [6, 7] from the hydrodynamical point of view. Under the scaling , (1.1) can be reduced to the following two-component generalization of the well-known 2-component Camassa-Holm (2-CH) system [1, 8–10]: where . Equation (1.3) has attracted much interest since it appeared; most attention has been paid to local well-posedness, blow-up phenomena, global existence, and so forth. When , (1.3) has been studied by many authors; see [11–26] and the references cited therein. In particular, under the parametric conditions , (1.3) can be reduced to the celebrated Camassa-Holm equation [27, 28]. In 2006, the integrability of (1.3) for was proved, and some peakon and multikink solutions of this system were presented by Chen et al. in [9]. In 2008, the Lax pair of (1.3) for any value of was given by Constantin and Ivanov in [1]. In [29], by using the method of dynamical systems, under the traveling wave transformation , some explicit parametric representations of exact traveling wave solutions of (1.3) were obtained, but the loop solitons were not obtained there. In [30], under , one-loop and two-loop soliton solutions and multisoliton solutions of (1.3) were obtained by using Darboux transformations.
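Since the displayed equations did not survive extraction, it may help to recall the celebrated Camassa-Holm equation of [27, 28] referred to above. In its standard form from the literature, with dispersion parameter $\kappa$ (this is the well-known form, not a reconstruction of the paper's own display (1.4)), it reads:

```latex
u_t + 2\kappa u_x - u_{xxt} + 3 u u_x = 2 u_x u_{xx} + u u_{xxx},
```

which for $\kappa = 0$ admits the famous peakon solutions $u(x,t) = c\,e^{-|x - ct|}$.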
Although (1.1) can be reduced to (1.3) by the scaling transformation , the dynamic properties of some traveling wave solutions of these two equations are very different. In fact, the dynamic behaviors of some traveling waves of system (1.1) partly depend on the relation () between the amplitude of the wave and the level (depth) of the water. In other words, their dynamic behaviors vary with changes of the parameter , that is, with changes of the ratio of the wave amplitude to the water depth . In addition, compared with the research results on (1.3), there are few results on (1.1) in the existing literature. Thus, it is necessary to study (1.1) further.
It is worth mentioning that the solutions obtained in this paper are different from those in existing references, such as [9, 12–17, 29]. On the other hand, under different transformations and by using different methods, different results will be obtained. In this paper, by using the integral bifurcation method [31], we investigate different kinds of new traveling wave solutions of (1.1) and their dynamic properties under the translation-dilation transformation . Incidentally, the integral bifurcation method possesses some advantages of both the bifurcation theory of planar dynamical systems [32] and the auxiliary equation method (see [33, 34] and the references cited therein); it is easily combined with computer methods [35] and is useful for many nonlinear partial differential equations (PDEs), including some PDEs with high-power terms, such as the K(m,n) equation [36]. By using this method, we will obtain some new traveling wave solutions of (1.1), and some interesting phenomena will be revealed.
The rest of this paper is organized as follows. In Section 2, we will derive the two-dimensional planar system of (1.1) and its first integral equations. In Section 3, by using the integral
bifurcation method, we will obtain some new traveling wave solutions of nonsingular type and singular type and investigate their dynamic behaviors.
2. The Two-Dimensional Planar System of (1.1) and Its First Integral Equations
Obviously, (1.1) can be rewritten in the following form: In order to change the PDE (2.1) into an ordinary differential equation, we make a transformation where and is an arbitrary nonzero constant. In fact, (2.2) is a translation-dilation transformation, which has been used extensively in the literature. Its idea comes from many existing references: for example, in [37], the expression is a dilation transformation; in [38], the expression is a translation transformation; and in [39, 40], the expression first used by Parkes and Vakhnenko is also a translation-dilation transformation.
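Since the exact expression (2.2) was lost in extraction, a schematic form of such a translation-dilation ansatz may be sketched as follows (the symbols $u_0$, $v$, $c$ here are illustrative and not necessarily the paper's notation):

```latex
u(x,t) = u_0 + v\,\phi(\xi), \qquad \xi = x - c t,
```

where the constant $u_0$ supplies the translation (e.g. an undisturbed background level) and the factor $v$ supplies the dilation of the wave profile $\phi$.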
After substituting (2.2) into (2.1), integrating them once yields where the integral constant and denotes . From the second equation of (2.3), we easily obtain where . Substituting (2.4) into the
first equation of (2.3), we obtain where denotes .
Let . Thus (2.5) can be reduced to 2-dimensional planar system as follows: where denotes .
Making a transformation , (2.6) becomes where is a parameter. From the geometric point of view, the parameter is a fast variable, while the parameter is a slow one. System (2.8) is still a singular system, even though system (2.6) becomes (2.8) under the transformation (2.7). This case is not the same as that in [29]. Of course, as in [29], we could also change system (2.6) into a regular system under another transformation , but we do not do so for the following two reasons: (i) we need to keep the singularity of the original system; (ii) we need not make any analysis of phase portraits as in [29].
Obviously, systems (2.6) and (2.8) have the same first integral as follows: where is an integral constant.
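The mechanism behind such a first integral can be sketched generically (this is the standard step of the integral bifurcation method, not the paper's exact system): for a planar system of the form $d\phi/d\xi = y$, $dy/d\xi = F(\phi)$, multiplying the second equation by $y$ and integrating gives

```latex
y\,\frac{dy}{d\phi} = F(\phi)
\quad\Longrightarrow\quad
y^2 = 2\int F(\phi)\,d\phi + h,
```

where $h$ is the integral constant; each admissible level set is then integrated by quadrature, $\xi = \int d\phi / y(\phi)$, to produce the traveling wave solutions discussed in Section 3.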
3. Traveling Wave Solutions of Nonsingular Type and Singular Type for (1.1) and Their Dynamic Behaviors
In this section, we will investigate different kinds of exact traveling wave solutions for (1.1) and their dynamic behaviors.
It is easy to see that (2.9) can be reduced to four kinds of simple equations when the parametric conditions satisfy the following four cases:
Case 1. and .
Case 2. and .
Case 3. and .
Case 4. .
We mainly aim at new results for (1.1), so we only discuss the first two typical cases in this section. The other two cases can be discussed similarly, and we omit them here.
3.1. The Exact Traveling Wave Solutions under Case 1
Under the parametric conditions and , (2.9) can be reduced to where . Substituting (3.1) into the first expression in (2.8) yields Write . Then if and only if . By using the exact solutions of (3.2), we can obtain different kinds of exact traveling wave solutions of parametric type of (1.1); see the following discussion. (i) If , then (3.2) has one exact solution as follows: Substituting (3.4) into (2.7) and then integrating yields Substituting (3.4) into (2.2) and (2.4), and then combining with (3.5), we obtain a couple of soliton-like solutions of (1.1) as follows: (ii) If , then (3.2) has one exact solution as follows: Similarly, by using (3.8), (2.7), (2.2), and (2.4), we obtain two couples of soliton-like solutions of (1.1) as follows: (iii) If and , then (1.1) has one couple of soliton-like solutions as follows: (iv) If and (i.e., ), then (1.1) has one couple of soliton-like solutions as follows: (v) If and (i.e., ), then (1.1) has one couple of kink and antikink wave solutions as follows: (vi) If and (i.e., ), then (1.1) has one couple of soliton-like solutions as follows: In order to show the dynamic properties of the above soliton-like solutions and kink and antikink wave solutions intuitively, as examples, we plot the graphs of some of them; see Figures 1, 2, 3, and 4. Figures 1(a)–1(h) show the profiles of the multiwaveform of the first solution of (3.6) for fixed parameters and different -values. Figures 2(a)–2(d) show the profiles of the multiwaveform of solution (3.7) for fixed parameters and different -values. Figures 3(a)–3(d) show the profiles of the multiwaveform of solution (3.9) for fixed parameters and different -values. Figures 4(a)-4(b) show the profiles of the kink wave solution (the first formula of (3.12)) for fixed parameters and different -values.
We observe that some profiles of the above soliton-like solutions are very sensitive to one of the parameters; that is, their profiles are transformable (see Figures 1–3). But the others are not: their waveforms do not vary no matter how the parameters change (see Figure 4). Some of these phenomena are very similar to those in [41, 42]. In [41], the waveforms of a soliton-like solution of the generalized KdV equation vary with changes of a parameter and depend strongly on the velocity. Similarly, in [42], the properties of some traveling wave solutions of the generalized KdV-Burgers equation depend on the dissipation coefficient : if the dissipation coefficient , the solution appears as a monotone kink-profile solitary wave; if , it appears as a damped oscillatory wave.
From Figures 1(a)–1(h), it is easy to see that the profiles of solution (3.6) vary gradually; its properties depend on the parameter . As the parametric values of increase from to , the solution (3.6) exhibits eight kinds of waveforms: Figure 1(a) shows the shape of an antikink wave when ; Figure 1(b) shows a transmutative antikink wave when ; Figure 1(c) shows a thin, dark solitary wave when ; Figure 1(d) shows a fat, dark solitary wave when ; Figure 1(e) shows a compacton wave when ; Figure 1(f) shows a fat loop soliton when ; Figure 1(g) shows a thin loop soliton when ; and Figure 1(h) shows an oblique loop soliton when .
Similarly, from Figures 2(a)–2(d), it is also easy to see that the profiles of solution (3.7) are transformable, but their changes are not gradual; they depend strongly on the parameter . As the parametric values of increase from to , the solution (3.7) exhibits four kinds of waveforms: Figure 2(a) shows a smooth kink wave when ; Figure 2(b) shows a fat, bright solitary wave when ; Figure 2(c) shows a thin, bright solitary wave when ; Figure 2(d) shows a singular wave, a cracked loop soliton, when . In particular, the changes of waveform from Figure 2(a) to Figure 2(b) and from Figure 2(c) to Figure 2(d) occur abruptly.
As in Figure 1, the profiles of the first solution of (3.9) in Figure 3 vary gradually. As the parametric values of increase from to , the waveform changes from a bright solitary wave to a bright compacton, then to a fat loop soliton, and finally to a thin loop soliton.
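The transition from solitary wave to loop soliton described above can be illustrated numerically. The sketch below is not the paper's actual solution (those formulas were lost in extraction); it uses an assumed generic parametric profile $u = \operatorname{sech}^2\xi$, $x = \xi - b\tanh\xi$, where the illustrative parameter $b$ plays the role of the fold-controlling parameter: the profile $u(x)$ stays single-valued while $x(\xi)$ is monotone, and it folds into a loop once monotonicity fails.

```python
import numpy as np

# Illustrative parametric traveling-wave profile u = u(xi), x = x(xi).
# When xi -> x is monotone, u(x) is single-valued (solitary wave);
# when it is not, the profile folds back on itself (loop soliton).
def profile(b, n=2001):
    xi = np.linspace(-10.0, 10.0, n)
    u = 1.0 / np.cosh(xi) ** 2      # assumed sech^2 amplitude shape
    x = xi - b * np.tanh(xi)        # b controls the "fold" strength
    return x, u

def is_single_valued(b):
    x, _ = profile(b)
    return bool(np.all(np.diff(x) > 0))  # is x(xi) strictly increasing?

# dx/dxi = 1 - b*sech^2(xi) is positive everywhere iff b < 1.
print(is_single_valued(0.5))   # solitary wave -> True
print(is_single_valued(3.0))   # loop soliton  -> False
```

Since $dx/d\xi = 1 - b\,\operatorname{sech}^2\xi$, the fold appears exactly when $b > 1$, mirroring how the paper's waveforms switch type as a parameter crosses a threshold.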
Unlike solutions (3.6), (3.7), and (3.9), the first solution of (3.12) is stable: its profile is not transformable no matter how the parameters vary. Both Figures 4(a) and 4(b) show the shape of a smooth kink wave.
3.2. The Traveling Wave Solutions Under Case 2
Under the parametric conditions and , (2.9) becomes where . Substituting (3.14) into the first expression in (2.8) yields We know that the case corresponds to the situation in which the gravity acceleration points upwards. As an example, in this subsection, we only discuss the case ; the case can be discussed similarly, but we omit it here. In particular, when , the above values of reduce to .
(i) If , then . Under these parametric conditions, (3.15) has a Jacobi elliptic function solution as follows: or where is the modulus of the Jacobi elliptic functions and .
Substituting (3.16) and (3.17) into (2.7), respectively, we obtain Substituting (3.16) into (2.2) and (2.4), using (3.18) and the transformation , we obtain a couple of periodic solutions of (1.1) as
follows: Substituting (3.17) into (2.2) and (2.4), using (3.19) and the transformation , we also obtain a couple of periodic solutions of (1.1) as follows:
(ii) If , then . Under these parametric conditions, (3.15) has a Jacobi elliptic function solution as follows:
As in case (i), using the same method, we obtain a couple of periodic solutions of (1.1) as follows: or In particular, when (i.e., ), . From (3.23) and (3.24), we obtain a couple of kink-like solutions of (1.1) as follows:
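The degeneration used here — Jacobi elliptic solutions turning into kink-like (hyperbolic) solutions as the modulus tends to 1 — can be checked numerically. The sketch below is illustrative only and is not tied to the paper's elided formulas; note that SciPy's `ellipj(u, m)` takes the parameter $m = k^2$, where $k$ is the modulus:

```python
import numpy as np
from scipy.special import ellipj

u = np.linspace(-3.0, 3.0, 101)

# Intermediate modulus: the basic identity sn^2 + cn^2 = 1 holds.
sn, cn, dn, _ = ellipj(u, 0.6)
assert np.allclose(sn**2 + cn**2, 1.0)

# Limit m -> 1: sn -> tanh and cn -> sech, so periodic elliptic
# solutions degenerate into kink-like / solitary-wave profiles.
sn1, cn1, dn1, _ = ellipj(u, 1.0)
assert np.allclose(sn1, np.tanh(u))
assert np.allclose(cn1, 1.0 / np.cosh(u))
```

This is exactly the mechanism by which the periodic solutions (3.23) and (3.24) pass over into the kink-like solutions above.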
(iii) If , then . Under these parametric conditions, (3.15) has a Jacobi elliptic function solution as follows: As in case (i), we similarly obtain a couple of periodic solutions of (1.1) as follows: where .
(iv) If , then . Under these parametric conditions, (3.15) has a Jacobi elliptic function solution as follows: Similarly, we obtain a couple of periodic solutions of (1.1) as follows: where .
(v) If , then . Under these parametric conditions, (3.15) has a Jacobi elliptic function solution as follows:
As in case (i), using the same method, we obtain a couple of periodic solutions of (1.1) as follows:
In order to show the dynamic properties of the above periodic solutions intuitively, as examples, we plot the graphs of solutions (3.20), (3.23), and (3.24); see Figures 5 and 6.
Figures 5(a) and 5(b) show two shapes of smooth, continuous periodic waves; both are of nonsingular type. Figure 6(a) shows the shape of a periodic kink wave, and Figure 6(b) shows the shape of a singular periodic wave; both Figures 6(a) and 6(b) show discontinuous periodic waves of singular type.
From the above illustrations, we find that the waveforms of some solutions partly depend on the wave parameters. Indeed, in 2006, Vakhnenko and Parkes's work [43] successfully explained similar phenomena. In [43], a graphical interpretation of the solution of the generalized Degasperis-Procesi equation (gDPE) is presented: in that analysis, the different projections of a 3D spiral (whether one loop of the spiral or half a loop) are the essence of the possible solutions. Of course, this approach can also be applied to the 2-component Camassa-Holm shallow water system (1.1). By using Vakhnenko and Parkes's theory, the phenomena that appear in this work are easily understood, so we omit this analysis here. However, it is necessary to say something about the cracked loop soliton (Figure 2(d)) and the singular periodic wave (Figure 6(b)): what are the 3D curves, with their projections, associated with the two peculiar solutions (3.7) and (3.24) shown in Figure 2(d) and Figure 6(b)? In order to answer this question, by using Vakhnenko and Parkes's approach, as an example we give the 3D curves of solution (3.24) and their projection curve in the -plane, which are shown in Figure 7. For convenience in distinguishing them, we colour the 3D curves red and their projection curve green. From Figure 7, we can see that the 3D curves do not intersect themselves, but their projection curve does intersect itself in the -plane.
4. Conclusion
In this work, by using the integral bifurcation method together with a translation-dilation transformation, we have obtained some new traveling wave solutions of nonsingular and singular type of the 2-component Camassa-Holm equation. These new solutions include soliton solutions, kink wave solutions, loop soliton solutions, compacton solutions, smooth periodic wave solutions, periodic kink wave solutions, singular wave solutions, and singular periodic wave solutions. By investigating these new exact solutions of parametric type, we found some new traveling wave phenomena; that is, the waveforms of some solutions partly depend on the wave parameters. For example, the waveforms of solutions (3.6), (3.7), and (3.9) vary with changes of a parameter; these are three peculiar solutions. Solution (3.6) has five kinds of waveforms, comprising an antikink wave, a transmutative antikink wave, a dark soliton, a compacton, and a loop soliton as the parameter varies. Solution (3.7) has three kinds of waveforms, comprising a kink wave, a bright soliton, and a singular wave (cracked loop soliton) as the parameter varies. Solution (3.9) also has three kinds of waveforms, comprising a bright soliton, a compacton, and a loop soliton as the parameter varies. These phenomena show that the dynamic behavior of these waves partly depends on the ratio of the wave amplitude to the water level.
Acknowledgments
The authors thank the reviewers very much for their useful comments and helpful suggestions. This work was financially supported by the Natural Science Foundation of China (Grant no. 11161020) and by the Natural Science Foundation of Yunnan Province (no. 2011FZ193).
References
1. A. Constantin and R. I. Ivanov, "On an integrable two-component Camassa-Holm shallow water system," Physics Letters A, vol. 372, no. 48, pp. 7129–7132, 2008.
2. H. R. Dullin, G. A. Gottwald, and D. D. Holm, "Camassa-Holm, Korteweg-de Vries-5 and other asymptotically equivalent equations for shallow water waves," Fluid Dynamics Research, vol. 33, no. 1-2, pp. 73–95, 2003.
3. R. S. Johnson, "The Camassa-Holm equation for water waves moving over a shear flow," Fluid Dynamics Research, vol. 33, no. 1-2, pp. 97–111, 2003.
4. V. Busuioc, "On second grade fluids with vanishing viscosity," Comptes Rendus de l'Académie des Sciences I, vol. 328, no. 12, pp. 1241–1246, 1999.
5. H.-H. Dai, "Exact travelling-wave solutions of an integrable equation arising in hyperelastic rods," Wave Motion, vol. 28, no. 4, pp. 367–381, 1998.
6. R. S. Johnson, "Camassa-Holm, Korteweg-de Vries and related models for water waves," Journal of Fluid Mechanics, vol. 455, pp. 63–82, 2002.
7. A. E. Green and P. M. Naghdi, "Derivation of equations for wave propagation in water of variable depth," Journal of Fluid Mechanics, vol. 78, no. 2, pp. 237–246, 1976.
8. S.-Q. Liu and Y. Zhang, "Deformations of semisimple bihamiltonian structures of hydrodynamic type," Journal of Geometry and Physics, vol. 54, no. 4, pp. 427–453, 2005.
9. M. Chen, S.-Q. Liu, and Y. Zhang, "A two-component generalization of the Camassa-Holm equation and its solutions," Letters in Mathematical Physics, vol. 75, no. 1, pp. 1–15, 2006.
10. R. I. Ivanov, "Extended Camassa-Holm hierarchy and conserved quantities," Zeitschrift für Naturforschung, vol. 61, pp. 133–138, 2006.
11. C. Guan and Z. Yin, "Global existence and blow-up phenomena for an integrable two-component Camassa-Holm shallow water system," Journal of Differential Equations, vol. 248, no. 8, pp. 2003–2014, 2010.
12. C. Guan and Z. Yin, "Global weak solutions for a two-component Camassa-Holm shallow water system," Journal of Functional Analysis, vol. 260, no. 4, pp. 1132–1154, 2011.
13. O. G. Mustafa, "On smooth traveling waves of an integrable two-component Camassa-Holm shallow water system," Wave Motion, vol. 46, no. 6, pp. 397–402, 2009.
14. G. Gui and Y. Liu, "On the global existence and wave-breaking criteria for the two-component Camassa-Holm system," Journal of Functional Analysis, vol. 258, no. 12, pp. 4251–4278, 2010.
15. Z. Guo, "Asymptotic profiles of solutions to the two-component Camassa-Holm system," Nonlinear Analysis: Theory, Methods & Applications, vol. 75, no. 1, pp. 1–6, 2012.
16. L. Tian, Y. Xu, and J. Zhou, "Attractor for the viscous two-component Camassa-Holm equation," Nonlinear Analysis: Real World Applications, vol. 13, no. 3, pp. 1115–1129, 2012.
17. M. Yuen, "Perturbational blowup solutions to the 2-component Camassa-Holm equations," Journal of Mathematical Analysis and Applications, vol. 390, no. 2, pp. 596–602, 2012.
18. Z. Guo, M. Zhu, and L. Ni, "Blow-up criteria of solutions to a modified two-component Camassa-Holm system," Nonlinear Analysis: Real World Applications, vol. 12, no. 6, pp. 3531–3540, 2011.
19. Z. Popowicz, "A 2-component or N = 2 supersymmetric Camassa-Holm equation," Physics Letters A, vol. 354, no. 1-2, pp. 110–114, 2006.
20. J. Escher, M. Kohlmann, and J. Lenells, "The geometry of the two-component Camassa-Holm and Degasperis-Procesi equations," Journal of Geometry and Physics, vol. 61, no. 2, pp. 436–452, 2011.
21. J. Escher, O. Lechtenfeld, and Z. Yin, "Well-posedness and blow-up phenomena for the 2-component Camassa-Holm equation," Discrete and Continuous Dynamical Systems A, vol. 19, no. 3, pp. 493–513, 2007.
22. Z. Guo and Y. Zhou, "On solutions to a two-component generalized Camassa-Holm equation," Studies in Applied Mathematics, vol. 124, no. 3, pp. 307–322, 2010.
23. Z. Guo, "Blow-up and global solutions to a new integrable model with two components," Journal of Mathematical Analysis and Applications, vol. 372, no. 1, pp. 316–327, 2010.
24. P. Zhang and Y. Liu, "Stability of solitary waves and wave-breaking phenomena for the two-component Camassa-Holm system," International Mathematics Research Notices, vol. 2010, no. 11, pp. 1981–2021, 2010.
25. Z. Guo and L. Ni, "Persistence properties and unique continuation of solutions to a two-component Camassa-Holm equation," Mathematical Physics, Analysis and Geometry, vol. 14, no. 2, pp. 101–114, 2011.
26. Z. Guo and M. Zhu, "Wave breaking for a modified two-component Camassa-Holm system," Journal of Differential Equations, vol. 252, no. 3, pp. 2759–2770, 2012.
27. R. Camassa and D. D. Holm, "An integrable shallow water equation with peaked solitons," Physical Review Letters, vol. 71, no. 11, pp. 1661–1664, 1993.
28. R. Camassa, D. D. Holm, and J. M. Hyman, "A new integrable shallow water equation," Advances in Applied Mechanics, vol. 31, pp. 1–33, 1994.
29. J. B. Li and Y. S. Li, "Bifurcations of travelling wave solutions for a two-component Camassa-Holm equation," Acta Mathematica Sinica, vol. 24, no. 8, pp. 1319–1330, 2008.
30. J. Lin, B. Ren, H.-M. Li, and Y.-S. Li, "Soliton solutions for two nonlinear partial differential equations using a Darboux transformation of the Lax pairs," Physical Review E, vol. 77, no. 3, Article ID 036605, 10 pages, 2008.
31. W. Rui, B. He, Y. Long, and C. Chen, "The integral bifurcation method and its application for solving a family of third-order dispersive PDEs," Nonlinear Analysis: Theory, Methods & Applications, vol. 69, no. 4, pp. 1256–1267, 2008.
32. J. Li and Z. Liu, "Smooth and non-smooth traveling waves in a nonlinearly dispersive equation," Applied Mathematical Modelling, vol. 25, no. 1, pp. 41–56, 2000.
33. J. Hu, "An algebraic method exactly solving two high-dimensional nonlinear evolution equations," Chaos, Solitons and Fractals, vol. 23, no. 2, pp. 391–398, 2005.
34. E. Yomba, "The extended F-expansion method and its application for solving the nonlinear wave, CKGZ, GDS, DS and GZ equations," Physics Letters A, vol. 340, no. 1–4, pp. 149–160, 2005.
35. W. Rui, Y. Long, B. He, and Z. Li, "Integral bifurcation method combined with computer for solving a higher order wave equation of KdV type," International Journal of Computer Mathematics, vol. 87, no. 1–3, pp. 119–128, 2010.
36. X. Wu, W. Rui, and X. Hong, "Exact traveling wave solutions of explicit type, implicit type, and parametric type for K(m,n) equation," Journal of Applied Mathematics, vol. 2012, Article ID 236875, 23 pages, 2012.
37. P. Rosenau, "On solitons, compactons, and Lagrange maps," Physics Letters A, vol. 211, no. 5, pp. 265–275, 1996.
38. H.-H. Dai and Y. Li, "The interaction of the ω-soliton and ω-cuspon of the Camassa-Holm equation," Journal of Physics A, vol. 38, no. 42, pp. L685–L694, 2005.
39. E. J. Parkes and V. O. Vakhnenko, "Explicit solutions of the Camassa-Holm equation," Chaos, Solitons & Fractals, vol. 26, no. 5, pp. 1309–1316, 2005.
40. V. O. Vakhnenko and E. J. Parkes, "Periodic and solitary-wave solutions of the Degasperis-Procesi equation," Chaos, Solitons & Fractals, vol. 20, no. 5, pp. 1059–1073, 2004.
41. W. Rui, C. Chen, X. Yang, and Y. Long, "Some new soliton-like solutions and periodic wave solutions with loop or without loop to a generalized KdV equation," Applied Mathematics and Computation, vol. 217, no. 4, pp. 1666–1677, 2010.
42. W. Zhang and X. Li, "Approximate damped oscillatory solutions for generalized KdV-Burgers equation and their error estimates," Abstract and Applied Analysis, vol. 2011, Article ID 807860, 26 pages, 2011.
43. V. O. Vakhnenko and E. J. Parkes, "The solutions of a generalized Degasperis-Procesi equation," Reports of the National Academy of Sciences of Ukraine, no. 8, pp. 88–94, 2006.
Page 0 - Mathematics Online Resources at Education Index
We're continually reviewing new sites and adding resources, and appreciate your comments and suggestions.
If you're taking algebra and need some help, or just want more information, you can check out this site. It has a lot of hints, help, and examples to make algebra easier...and maybe even more fun.
Online mathematical resources for professional mathematicians, professors, and more. A complete online list of members, member services, mathematics journals, papers, and much more. A valuable
On this site, suitable for elementary through college students (and beyond), you can read through others' questions regarding math. If your question has not been answered there, send it in for an
answer by one of 100 math doctors from around the world.
This is a series of explanations and experiments with light refraction, reflection, and characteristics of travel. It's straightforward and informative with helpful figures to go along with the text.
All of it is nicely tied together in that, combined, the principles explain rainbows.
Automatically create practice worksheets for improving math fact skills. Each worksheet you generate is unique, since the problems are randomly re-ordered every time. Worksheets are precisely
formatted for printing at 8.5" x 11", with pagination. No software to download. Free for parents, students, teachers, and other educators.
This site provides teachers with ways to increase their effectiveness with lessons, activities, articles, and the highlighting of 13 outstanding sites every month.
Constructed for K-12 math students, this site allows you to "learn about Vectors and Vector Operations by sketching and playing with them." Drag your mouse across the grid to create the vectors and
then follow the directions to perform calculations on them.
The title pretty much explains it all. You can find information by subject, region, or timeline. Site also includes reference to related online and print resources.
This site contains "many interactive science and engineering solution tools." If you’re computing problems in math, geometry, linear algebra, curve fitting or regression, integration,
differentiation, statistics and more, there are tools here that you can use. You may also download the library and pay a shareware fee.
Devoted to the Chaos Theory, this colorful page offers information on the theory's history, real life examples, a glossary of chaos terminology, diagrams, fractals... all the CHAOS you could want.
Site dedicated to identifying problems and areas of mathematical research needed in other sciences.
This is a good site, especially for those who feel they 'hate math' or can't be any good at it. It has a very good essay on the fear of math so many people seem to have and provides games and puzzles
dealing with arithmetic, algebra, geometry, and probability. Another great feature is the CTK Exchange for questions and answers.
Magic Squares are fun mathematics, and this site is a great resource on the subject. From basic explanations to lesson plans to other web resources on the same topic, what you need to enjoy magic
squares is right here!
This amusingly named site pertains to "the invasion of our schools by the New-New Math". Interesting reading. Worth a visit.
This site contains software and materials for teaching college mathematics; a great resource for teachers and students alike. And it's very nicely done.
An interactive site targeting grades K-12. It's designed to encourage participation in problem solving through the Web. Very impressive, and a neat idea.
(33 entries)
Share photos of your school, college or university with others.. | {"url":"http://www.educationindex.com/math/page0.html","timestamp":"2014-04-20T16:16:08Z","content_type":null,"content_length":"17638","record_id":"<urn:uuid:5170fee1-2795-44f4-90c7-4dc591b6d5e3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Workshop on Geometry of Banach spaces and infinite dimensional Ramsey theory
One of the acknowledged major successes of set theory in analysis has been the use of Ramsey theory in the study of Banach spaces. The first such use was the concept of a spreading model of a Banach
space due to Brunel and Sucheston. This was a way of joining the finite and infinite dimensional structure of a Banach space in an asymptotic manner. Perhaps the best known of these applications of
Ramsey theory is Rosenthal's theorem: A Banach space $Y$ does not contain an isomorphic copy of $l_1$ if and only if every bounded sequence in $Y$ has a weakly Cauchy subsequence. Closely related to
this is a subsequent theorem of Rosenthal about pointwise compact sets of
functions: Any sequence of continuous functions on a Polish space which is pointwise bounded, and every cluster point of which is a Borel function, has a converging subsequence. Shortly afterwards
Bourgain, Fremlin, and Talagrand extended this result to the conclusion that the sequence is actually sequentially dense in its pointwise closure. This, along with earlier work of Odell and
Rosenthal, resulted in a renewed interest in spaces of Baire-class 1 functions; namely, pointwise limits of continuous functions. Using the results of Bourgain, Fremlin and Talagrand, Godefroy showed
that this class of spaces enjoys some interesting permanence properties. For example, if a compact space $K$ is representable as a compact set of Baire class-1 functions then so is $P(K)$, the space
of all Radon probability measures on $K$ with the weak$^*$ topology. Recent results have been obtained towards a fine structure theory of compact sets of first Baire class by Todorcevic; in
particular, every compact set of first Baire class contains a dense metrizable subspace.
The involvement of infinite dimensional Ramsey theory was recently lifted to a higher level of sophistication by W. T. Gowers in his positive solution to the homogeneous space problem of Banach: if a
Banach space is isomorphic to all of its infinite dimensional subspaces then it is isomorphic to a Hilbert space. The Ramsey theoretic part of his result, stating that every Banach space contains a
subspace which either has an unconditional basis or is hereditarily indecomposable, was combined with some analytical work of Komorowski and Tomczak-Jaegermann.
The strongest potential Ramsey theorem in a Banach space setting can be stated as: is every uniformly continuous real valued function on a unit sphere $S_X$ of a Banach space $X$ oscillation stable?
This is called the distortion problem. It was solved by Odell and Schlumprecht in the 90's in the negative. However, the existence of a distortable space of "bounded distortion" remains unknown. Such a
space could lead to a new form of a weak Ramsey theorem of some sort. Interesting work on this problem has been done by Odell, Schlumprecht, Maurey, V. Milman, Tomczak-Jaegermann, among others.
Recent solutions of the two most famous problems in this area of Banach space theory, the distortion problem (Odell and Schlumprecht) and the unconditional basic sequence problem (Gowers and Maurey),
are closely tied to a deeper understanding of a particular example of a non-classical Banach space due to Tsirelson. The inductive definition of its norm involving "admissible families" of sets
makes the space susceptible to a set theoretical analysis where the notion of "admissible" appears with a different name, "relatively small" (Ketonen-Solovay). Gowers' Ramsey-theoretic dichotomy
for Banach spaces also seems susceptible to a further set-theoretical analysis, in particular in the direction of Ellentuck-type theorems, which are so abundant in the infinite dimensional Ramsey
theory. Bringing together people in these two areas will very likely result in a much better understanding of these problems.
The online pre-registration form has now been removed from this site. To attend the workshop, please complete a walk-in registration form upon your arrival at The Fields Institute.
A block of rooms for participants has been arranged at the hotels listed below. Please request the Fields Institute rate when booking. Rooms must be reserved before October 9, 2002 to ensure
availability. For additional accommodation resources, please see the Fields Housing page.
Days Inn (10-15 minutes walk from the Institute)
30 Carleton Street, Toronto, ON, M5B 2E9
Tel: 416 977-6655
Toll Free: 1-800-367-9601 (8:30 am - 6 pm)
(approx. $99/night CDN)

Quality Hotel (10-15 minutes walk from the Institute)
280 Bloor Street West, Toronto, ON, M5S 1V8
Tel: (416) 968-0010
Fax: (416) 968-7765
(approx. $100/night CDN)
To receive on-going information about this program please subscribe to the
mail list
or contact | {"url":"http://www.fields.utoronto.ca/programs/scientific/02-03/set_theory/workshop2/","timestamp":"2014-04-18T01:18:26Z","content_type":null,"content_length":"15323","record_id":"<urn:uuid:11c209c9-ac6c-4de9-a353-03233dadc54c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
• This page allows you to choose 4 integers (0 ~ 100,000) and creates a 4x4 magic square with the specified 4 numbers in the top row. ("Magic square" in Wikipedia)
• All numbers are unique unless duplicate(s) / triplicate / quadruplicate are specified for the top row.
• Negative numbers are used only if no solution is available otherwise.
• The numbers in the top row appear with leading 0's if they are entered with leading 0's.
(Useful for a date matrix, e.g., enter 20, 14, 04, 18 for April 18, 2014.)
• Read more at "behind the Magic Square Maker".
• This program is written in R (r-project.org).
The "brew" package is used to integrate R and HTML.
• Magic square maker • How many days
• Pythagorean triples finder • Prime factorization
• Bivariate baseball score plots • 3+3 designs | {"url":"http://data.vanderbilt.edu/~koyamat/brew/magicsquare.html","timestamp":"2014-04-18T19:23:07Z","content_type":null,"content_length":"5152","record_id":"<urn:uuid:28871d23-b878-4120-b479-b1d7ea84f729>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
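The page does not say how it builds the square, but one classical way to construct a 4x4 magic square with an arbitrary top row, and to check the properties claimed above, is an additive offset template. The Python sketch below is an illustration only, not necessarily the site's algorithm; entries may repeat or go negative for some inputs, cases the site states it handles separately.

```python
def magic_square(a, b, c, d):
    """One classical 4x4 template: every row, column and both diagonals
    sum to a + b + c + d, with (a, b, c, d) as the top row.  Illustrative
    only; entries may repeat or go negative for some inputs."""
    return [
        [a,     b,     c,     d    ],
        [d + 1, c - 1, b - 3, a + 3],
        [b - 2, a + 2, d + 2, c - 2],
        [c + 1, d - 1, a + 1, b - 1],
    ]

# The "date matrix" example from above: April 18, 2014 -> top row 20, 14, 4, 18.
sq = magic_square(20, 14, 4, 18)
target = 20 + 14 + 4 + 18  # 56
assert all(sum(row) == target for row in sq)                       # rows
assert all(sum(row[j] for row in sq) == target for j in range(4))  # columns
assert sum(sq[i][i] for i in range(4)) == target                   # main diagonal
assert sum(sq[i][3 - i] for i in range(4)) == target               # anti-diagonal
```

The leading-zeros behavior described above is a display detail and is not captured by this sketch.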
Precise minimax redundancy and regret
Results 1 - 10 of 29
- IEEE Transactions on Information Theory , 2006
"... Abstract — A framework with two scalar parameters is introduced for various problems of finding a prefix code minimizing a coding penalty function. The framework involves a two-parameter class
encompassing problems previously proposed by Huffman [1], Campbell [2], Nath [3], and Drmota and Szpankowsk ..."
Cited by 9 (6 self)
Add to MetaCart
Abstract — A framework with two scalar parameters is introduced for various problems of finding a prefix code minimizing a coding penalty function. The framework involves a two-parameter class
encompassing problems previously proposed by Huffman [1], Campbell [2], Nath [3], and Drmota and Szpankowski [4]. It sheds light on the relationships among these problems. In particular, Nath’s
problem can be seen as bridging that of Huffman with that of Drmota and Szpankowski. This leads to a linear-time algorithm for the last of these with a solution that solves a range of Nath
subproblems. We find simple bounds and linear-time Huffmanlike optimization algorithms for all nontrivial problems within the class.
- IEEE Trans. Inf. Theory
"... ..."
- in Proc., IEEE Information Theory Workshop , 2002
"... Abstract—The problem of selecting a code for finite monotone sources with x symbols is considered. The selection criterion is based on minimizing the average redundancy (called Minave criterion)
instead of its maximum (i.e., Minimax criterion). The average probability distribution € x, whose associa ..."
Cited by 5 (0 self)
Add to MetaCart
Abstract—The problem of selecting a code for finite monotone sources with x symbols is considered. The selection criterion is based on minimizing the average redundancy (called Minave criterion)
instead of its maximum (i.e., Minimax criterion). The average probability distribution € x, whose associated Huffman code has the minimum average redundancy, is derived. The entropy of the average
distribution (i.e.,
"... Abstract — We study the Tunstall code using the machinery from the analysis of algorithms literature. In particular, we propose an algebraic characterization of the Tunstall code which, together
with tools like the Mellin transform and the Tauberian theorems, leads to new results on the variance and ..."
Cited by 5 (2 self)
Add to MetaCart
Abstract — We study the Tunstall code using the machinery from the analysis of algorithms literature. In particular, we propose an algebraic characterization of the Tunstall code which, together with
tools like the Mellin transform and the Tauberian theorems, leads to new results on the variance and a central limit theorem for dictionary phrase lengths. This analysis also provides a new argument
for obtaining asymptotic results about the mean dictionary phrase length and average redundancy rates. I.
"... Abstract—Conventional wisdom states that the minimum expected length for fixed-to-variable length encoding of an n-block memoryless source with entropy H grows as nH+O(1). However, this
performance is obtained under the constraint that the code assigned to the whole n-block is a prefix code. Droppin ..."
Cited by 4 (3 self)
Add to MetaCart
Abstract—Conventional wisdom states that the minimum expected length for fixed-to-variable length encoding of an n-block memoryless source with entropy H grows as nH + O(1). However, this performance
is obtained under the constraint that the code assigned to the whole n-block is a prefix code. Dropping this unnecessary constraint we show that the minimum expected length grows as
nH − (1/2) log n + O(1) unless the source is equiprobable. I.
, 2008
"... A variable-to-fixed length encoder partitions the source string into variable-length phrases that belong to a given and fixed dictionary. Tunstall, and independently Khodak, designed
variable-to-fixed length codes for memoryless sources that are optimal under certain constraints. In this paper, we s ..."
Cited by 4 (2 self)
Add to MetaCart
A variable-to-fixed length encoder partitions the source string into variable-length phrases that belong to a given and fixed dictionary. Tunstall, and independently Khodak, designed
variable-to-fixed length codes for memoryless sources that are optimal under certain constraints. In this paper, we study the Tunstall and Khodak codes using analytic information theory, i.e., the
machinery from the analysis of algorithms literature. After proposing an algebraic characterization of the Tunstall and Khodak codes, we present new results on the variance and a central limit
theorem for dictionary phrase lengths. This analysis also provides a new argument for obtaining asymptotic results about the mean dictionary phrase length and average redundancy rates.
- IEEE Trans. Inf. Theory , 2008
"... Abstract — Let P = {p(i)} be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm,
there are nontrivial P for which known methods find a source code that is optimal in the sense of minimizing ..."
Cited by 4 (3 self)
Add to MetaCart
Abstract — Let P = {p(i)} be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there
are nontrivial P for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize
one of a family of nonlinear objective functions, β-exponential means, those of the form log_a Σ_i p(i)·a^{n(i)}, where n(i) is the length of the ith codeword and a is a positive constant. Applications
of such minimizations include a novel problem of maximizing the chance of message receipt in single-shot communications (a < 1) and a previously known problem of minimizing the chance of buffer
overflow in a queueing system (a> 1). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to
distributions with lighter tails. The latter algorithm is applied to Poisson distributions and both are extended to alphabetic codes, as well as to minimizing maximum pointwise redundancy. The
aforementioned application of minimizing the chance of buffer overflow is also considered. Index Terms — Communication networks, generalized entropies, generalized means, Golomb codes, Huffman
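The β-exponential mean objective described in this abstract can be made concrete with a short numeric sketch. The function below (names are mine, not from the paper) evaluates log_a Σ_i p(i)·a^{n(i)} for a toy prefix code and shows that it approaches the ordinary expected codeword length as a → 1, while a > 1 penalizes long codewords more heavily.

```python
import math

def exponential_mean_length(probs, lengths, a):
    """log_a( sum_i p(i) * a**n(i) ), the beta-exponential mean from the
    abstract; n(i) is the i-th codeword length.  As a -> 1 this tends to
    the ordinary expected length sum_i p(i) * n(i)."""
    return math.log(sum(p * a ** n for p, n in zip(probs, lengths)), a)

# A toy prefix code with codewords 0, 10, 110, 111 (lengths 1, 2, 3, 3).
probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]

expected = sum(p * n for p, n in zip(probs, lengths))       # 1.75 bits
near_one = exponential_mean_length(probs, lengths, 1.0001)  # close to 1.75
heavy = exponential_mean_length(probs, lengths, 2.0)        # about 2.0: long words cost more

assert abs(near_one - expected) < 1e-3
assert heavy > expected
```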
- In Proceedings of the International Symposium on Information Theory , 1944
"... Abstract — This paper presents new lower and upper bounds for the optimal compression of binary prefix codes in terms of the most probable input symbol, where compression efficiency is
determined by the nonlinear codeword length objective of minimizing maximum pointwise redundancy. This objective re ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract — This paper presents new lower and upper bounds for the optimal compression of binary prefix codes in terms of the most probable input symbol, where compression efficiency is determined by
the nonlinear codeword length objective of minimizing maximum pointwise redundancy. This objective relates to both universal modeling and Shannon coding, and these bounds are tight throughout the
interval. The upper bounds also apply to a related objective, that of d th exponential redundancy. I.
"... Abstract — There are several applications in information transfer and storage where the order of source letters is irrelevant at the destination. For these source-destination pairs, multiset
communication rather than the more difficult task of sequence communication may be performed. In this work, w ..."
Cited by 2 (1 self)
Add to MetaCart
Abstract — There are several applications in information transfer and storage where the order of source letters is irrelevant at the destination. For these source-destination pairs, multiset
communication rather than the more difficult task of sequence communication may be performed. In this work, we study universal multiset communication. For classes of countable-alphabet sources that
meet Kieffer’s condition for sequence communication, we present a scheme that universally achieves a rate of n + o(n) bits per multiset letter for multiset communication. We also define redundancy
measures that are normalized by the logarithm of the multiset size rather than per multiset letter and show that these redundancy measures cannot be driven to zero for the class of finite-alphabet
memoryless multisets. This further implies that finite-alphabet memoryless multisets cannot be encoded universally with vanishing fractional redundancy. I.
, 2008
"... Analytic information theory aims at studying problems of information theory using analytic techniques of computer science and combinatorics. Following Hadamard’s precept, these problems are
tackled by complex analysis methods such as generating functions, Mellin transform, Fourier series, saddle poi ..."
Cited by 2 (0 self)
Add to MetaCart
Analytic information theory aims at studying problems of information theory using analytic techniques of computer science and combinatorics. Following Hadamard’s precept, these problems are tackled
by complex analysis methods such as generating functions, Mellin transform, Fourier series, saddle point method, analytic poissonization and depoissonization, and singularity analysis. This approach
lies at the crossroad of computer science and information theory. In this survey we concentrate on one facet of information theory (i.e., source coding better known as data compression), namely the
redundancy rate problem. The redundancy rate problem determines by how much the actual code length exceeds the optimal code length. We further restrict our interest to the average redundancy for
known sources, that is, when statistics of information sources are known. We present precise analyses of three types of lossless data compression schemes, namely fixed-to-variable (FV) length codes,
variable-to-fixed (VF) length codes, and variable-to-variable (VV) length codes. In particular, we investigate average redundancy of Huffman, Tunstall, and Khodak codes. These codes have succinct
representations as trees, either as coding or parsing trees, and we analyze here some of their parameters (e.g., the average path from the root to a leaf). | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=442631","timestamp":"2014-04-25T05:00:30Z","content_type":null,"content_length":"36572","record_id":"<urn:uuid:a01d578a-7ff5-4788-857a-1fe22d77ac82>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cg Programming/Unity/Nonlinear Deformations
This tutorial introduces vertex blending as an example of a nonlinear deformation. The main application is actually the rendering of skinned meshes.
While this tutorial is not based on any other specific tutorial, a good understanding of Section “Vertex Transformations” is very useful.
Blending between Two Model TransformationsEdit
Most deformations of meshes cannot be modeled by the affine transformations with 4×4 matrices that are discussed in Section “Vertex Transformations”. The deformation of bodies by tight corsets is
just one example. A more important example in computer graphics is the deformation of meshes when joints are bent, e.g. elbows or knees.
This tutorial introduces vertex blending to implement some of these deformations. The basic idea is to apply multiple model transformations in the vertex shader (in this tutorial we use only two
model transformations) and then blend the transformed vertices, i.e. compute a weighted average of them with weights that have to be specified for each vertex. For example, the deformation of the
skin near a joint of a skeleton is mainly influenced by the position and orientation of the two (rigid) bones meeting in the joint. Thus, the positions and orientations of the two bones define two
affine transformations. Different points on the skin are influenced differently by the two bones: points at the joint might be influenced equally by the two bones while points farther from the joint
around one bone are more strongly influenced by that bone than the other. These different strengths of the influence of the two bones can be implemented by using different weights in the weighted
average of the two transformations.
For the purpose of this tutorial, we use two uniform transformations float4x4 _Trafo0 and float4x4 _Trafo1, which are specified by the user. To this end a small JavaScript (which should be attached
to the mesh that should be deformed) allows us to specify two other game objects and copies their model transformations to the uniforms of the shader:
@script ExecuteInEditMode()

public var bone0 : GameObject;
public var bone1 : GameObject;

function Update ()
{
   if (null != bone0)
   {
      // pass bone0's model matrix to the shader uniform _Trafo0
      // (reconstructed: the original assignment was lost in extraction)
      renderer.sharedMaterial.SetMatrix("_Trafo0",
         bone0.transform.localToWorldMatrix);
   }
   if (null != bone1)
   {
      renderer.sharedMaterial.SetMatrix("_Trafo1",
         bone1.transform.localToWorldMatrix);
   }
   if (null != bone0 && null != bone1)
   {
      transform.position = 0.5 * (bone0.transform.position
         + bone1.transform.position);
      transform.rotation = bone0.transform.rotation;
   }
}
using UnityEngine;

public class MyClass : MonoBehaviour
{
   public GameObject bone0;
   public GameObject bone1;

   void Update ()
   {
      if (null != bone0)
      {
         // pass bone0's model matrix to the shader uniform _Trafo0
         // (reconstructed: the original assignment was lost in extraction)
         renderer.sharedMaterial.SetMatrix("_Trafo0",
            bone0.transform.localToWorldMatrix);
      }
      if (null != bone1)
      {
         renderer.sharedMaterial.SetMatrix("_Trafo1",
            bone1.transform.localToWorldMatrix);
      }
      if (null != bone0 && null != bone1)
      {
         transform.position = 0.5f * (bone0.transform.position
            + bone1.transform.position);
         transform.rotation = bone0.transform.rotation;
      }
   }
}
The two other game objects could be anything — I like cubes with one of the built-in semitransparent shaders such that their position and orientation is visible but they don't occlude the deformed mesh.
In this tutorial, the weight for the blending with the transformation _Trafo0 is set to input.vertex.z + 0.5:
float weight0 = input.vertex.z + 0.5;
and the other weight is 1.0 - weight0. Thus, the part with positive input.vertex.z coordinates is influenced more by _Trafo0 and the other part is influenced more by _Trafo1. In general, the weights
are application dependent and the user should be allowed to specify weights for each vertex.
The application of the two transformations and the weighted average can be written this way:
float4 blendedVertex =
weight0 * mul(_Trafo0, input.vertex)
+ (1.0 - weight0) * mul(_Trafo1, input.vertex);
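Because the blend is just a per-vertex weighted average of two matrix transforms, it is easy to sanity-check on the CPU. Below is a minimal Python sketch (the two transforms are hypothetical; mat_vec follows Cg's column-vector mul convention): at z = 0.5 the weight is 1 and only _Trafo0 acts, at z = -0.5 only _Trafo1 acts, and in between the results are interpolated.

```python
def mat_vec(m, v):
    """4x4 matrix times 4-vector, column-vector convention as in Cg's mul."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def blend(trafo0, trafo1, vertex):
    """weight0 * (Trafo0 * v) + (1 - weight0) * (Trafo1 * v),
    with weight0 = v.z + 0.5 as in the shader."""
    w0 = vertex[2] + 0.5
    a = mat_vec(trafo0, vertex)
    b = mat_vec(trafo1, vertex)
    return [w0 * x + (1.0 - w0) * y for x, y in zip(a, b)]

# Two hypothetical bone transforms: translate by +1 in x, and by -1 in x.
T0 = [[1, 0, 0,  1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
T1 = [[1, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# At z = +0.5 only T0 acts; at z = -0.5 only T1 acts.
assert blend(T0, T1, [0, 0,  0.5, 1]) == [ 1.0, 0.0,  0.5, 1.0]
assert blend(T0, T1, [0, 0, -0.5, 1]) == [-1.0, 0.0, -0.5, 1.0]
# At z = 0 the two translations average out.
assert blend(T0, T1, [0, 0, 0, 1]) == [0.0, 0.0, 0.0, 1.0]
```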
Then the blended vertex has to be multiplied with the view matrix and the projection matrix. The view transformation is not available directly but it can be computed by multiplying the model-view
matrix (which is the product of the view matrix and the model matrix) with the inverse model matrix (which is available as _World2Object times unity_Scale.w except for the bottom-right element, which
is 1):
float4x4 modelMatrixInverse =
_World2Object * unity_Scale.w;
modelMatrixInverse[3][3] = 1.0;
float4x4 viewMatrix =
mul(UNITY_MATRIX_MV, modelMatrixInverse);
output.pos =
mul(UNITY_MATRIX_P, mul(viewMatrix, blendedVertex));
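The identity this step relies on is view = (view × model) × model^(-1). A quick numeric check with small hypothetical matrices in plain Python:

```python
def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical model matrix M (translate +2 in x) and its inverse.
M     = [[1, 0, 0,  2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
M_inv = [[1, 0, 0, -2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
assert matmul(M, M_inv) == [[1, 0, 0, 0], [0, 1, 0, 0],
                            [0, 0, 1, 0], [0, 0, 0, 1]]

# Hypothetical view matrix V (translate -3 in z).
V = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -3], [0, 0, 0, 1]]

MV = matmul(V, M)              # what UNITY_MATRIX_MV holds: view * model
recovered = matmul(MV, M_inv)  # the shader's mul(UNITY_MATRIX_MV, modelMatrixInverse)
assert recovered == V
```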
In order to illustrate the different weights, we visualize weight0 by the red component and 1.0 - weight0 by the green component of a color (which is set in the fragment shader):
output.col = float4(weight0, 1.0 - weight0, 0.0, 1.0);
For an actual application, we could also transform the normal vector by the two corresponding transposed inverse model transformations and perform per-pixel lighting in the fragment shader.
Complete Shader CodeEdit
All in all, the shader code looks like this:
Shader "Cg shader for vertex blending" {
SubShader {
Pass {
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
// Uniforms set by a script
uniform float4x4 _Trafo0; // model transformation of bone0
uniform float4x4 _Trafo1; // model transformation of bone1
struct vertexInput {
float4 vertex : POSITION;
struct vertexOutput {
float4 pos : SV_POSITION;
float4 col : COLOR;
vertexOutput vert(vertexInput input)
vertexOutput output;
float weight0 = input.vertex.z + 0.5;
// depends on the mesh
float4 blendedVertex =
weight0 * mul(_Trafo0, input.vertex)
+ (1.0 - weight0) * mul(_Trafo1, input.vertex);
float4x4 modelMatrixInverse =
_World2Object * unity_Scale.w;
modelMatrixInverse[3][3] = 1.0;
float4x4 viewMatrix =
mul(UNITY_MATRIX_MV, modelMatrixInverse);
output.pos =
mul(UNITY_MATRIX_P, mul(viewMatrix, blendedVertex));
output.col = float4(weight0, 1.0 - weight0, 0.0, 1.0);
// visualize weight0 as red and weight1 as green
return output;
float4 frag(vertexOutput input) : COLOR
return input.col;
This is, of course, only an illustration of the concept but it can already be used for some interesting nonlinear deformations such as twists around the z axis.
For skinned meshes in skeletal animation, many more bones (i.e. model transformations) are necessary and each vertex has to specify which bone (using, for example, an index) contributes with which
weight to the weighted average. However, Unity computes the blending of vertices in software; thus, this topic is less relevant for Unity programmers.
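The generalization described here can be sketched in a few lines: each vertex carries a list of bone indices and weights, and the skinned position is the weighted sum of the bone-transformed positions. The Python sketch below is a hypothetical illustration of that idea, not Unity's actual implementation.

```python
def mat_vec(m, v):
    """4x4 matrix times 4-vector (column-vector convention)."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def skin_vertex(vertex, bone_indices, weights, bone_matrices):
    """General vertex skinning: v' = sum_i w_i * (M[b_i] * v).
    Weights are expected to sum to 1; the two-bone case above is the
    special case bone_indices = [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9
    out = [0.0, 0.0, 0.0, 0.0]
    for b, w in zip(bone_indices, weights):
        t = mat_vec(bone_matrices[b], vertex)
        out = [o + w * x for o, x in zip(out, t)]
    return out

# Hypothetical palette of three bone matrices (pure translations).
bones = [
    [[1, 0, 0,  1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],  # bone 0: +1 in x
    [[1, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],  # bone 1: -1 in x
    [[1, 0, 0,  0], [0, 1, 0, 4], [0, 0, 1, 0], [0, 0, 0, 1]],  # bone 2: +4 in y
]

# A vertex influenced 50/50 by bones 0 and 2.
v = skin_vertex([0, 0, 0, 1], [0, 2], [0.5, 0.5], bones)
assert v == [0.5, 2.0, 0.0, 1.0]
```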
Congratulations, you have reached the end of another tutorial. We have seen:
• How to blend vertices that are transformed by two model matrices.
• How this technique can be used for nonlinear transformations and skinned meshes.
Further ReadingEdit
If you still want to learn more
• about the model transformation, the view transformation, and the projection, you should read the description in Section “Vertex Transformations”.
• about vertex skinning, you could read the section about vertex skinning in Chapter 8 of the “OpenGL ES 2.0 Programming Guide” by Aaftab Munshi, Dan Ginsburg, and Dave Shreiner, published 2009 by Addison-Wesley.
Unless stated otherwise, all example source code on this page is granted to the public domain.
Last modified on 8 April 2014, at 16:53 | {"url":"http://en.m.wikibooks.org/wiki/Cg_Programming/Unity/Nonlinear_Deformations","timestamp":"2014-04-19T04:21:34Z","content_type":null,"content_length":"41152","record_id":"<urn:uuid:e9d71d7a-9e5d-4e14-b3f7-2f7341889091>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00224-ip-10-147-4-33.ec2.internal.warc.gz"} |
A calculation the size of Manhattan: U-M professor part of team to solve century-old math equation
John Stembridge, a professor of mathematics, is a member of an international team of 18 mathematicians and computer scientists who have successfully mapped the Lie group E[8], one of the largest and
most complicated structures in mathematics. The findings will be unveiled today (March 19) at a presentation by David Vogan, professor of mathematics at MIT and member of the team.
The magnitude of the E[8] calculation is being compared to the Human Genome Project, and is so large it would cover Manhattan if it were written out on paper, according to the American Institute of
Mathematics (AIM), one of the leading math institutes in the country.
E[8] is an example of a Lie (pronounced "Lee") group. Lie groups were invented by the 19th century Norwegian mathematician Sophus Lie to study symmetry. Underlying any symmetrical object, such as a
sphere, is a Lie group. Balls, cylinders or cones are familiar examples of symmetric three-dimensional objects. Mathematicians study symmetries in higher dimensions. In fact, E[8] is the group of
symmetries of a geometric object like a sphere, cylinder or cone, but the object in this case is 57-dimensional. E[8] itself is 248-dimensional. For details on E[8] visit aimath.org/E8/.
"E[8] was discovered over a century ago, in 1887, and until now, no one thought the structure could ever be understood," says Jeffrey Adams, project leader and mathematics professor at the University
of Maryland. "This groundbreaking achievement is significant both as an advance in basic knowledge, as well as a major advance in the use of large scale computing to solve complicated mathematical
problems. The mapping of E[8] may well have unforeseen implications in mathematics and physics which won't be evident for years to come."
"We've determined the basic building blocks that are needed to understand how E[8] is represented in nature,'' says Stembridge, one of the co-principal investigators. "This could lead to better
understanding of the physical models of the universe."
Stembridge says the way the calculation was achieved by a group of 18 mathematicians and computer scientists working in intensive collaboration for four years also is a significant breakthrough.
Partners on this project also include MIT, Cornell University, the University of Utah and University of Maryland.
"Mathematicians more typically work in groups of one, two or three," Stembridge says. "We have a large group spread all over the U.S. and in France. That is unusual."
Stembridge says the mapping of E[8] also will create opportunities for students at U-M. "Students might get to work on a cool problem as part of this project."
The magnitude and nature of the E[8] calculation invites the comparison with the Human Genome Project, according to AIM. The human genome, which contains all the genetic information of a cell, is
less than a gigabyte in size. The result of the E[8] calculation, which contains all the information about E[8] and its representations, is 60 gigabytes. This is enough to store 45 days of continuous
music in MP3 format.
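As a quick back-of-the-envelope check of the music comparison, assuming a typical 128 kbit/s MP3 bitrate (the article does not state one), 60 gigabytes does come out to roughly the quoted figure:

```python
gigabytes = 60
bits = gigabytes * 1e9 * 8    # 60 GB expressed in bits
bitrate = 128e3               # assumed MP3 bitrate, bits per second
seconds = bits / bitrate
days = seconds / 86400        # about 43 days, close to the quoted 45
assert 40 < days < 50
```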
The computation required sophisticated new mathematical techniques and computing power not available even a few years ago. While many scientific projects involve processing large amounts of data, the
E[8] calculation is very different, as the size of the input is comparatively small, but the answer itself is enormous and very dense.
The E[8] calculation is part of an ambitious project sponsored by AIM and the National Science Foundation, known as the Atlas of Lie Groups and Representations. The goal of the project is to
determine the unitary representations of all the Lie groups (E[8] is the largest of the exceptional Lie groups). This is one of the most important unsolved problems of mathematics. The E[8]
calculation is a major step, and signals that the Atlas team is well on the way to solving the problem, mathematicians say.
"This is an impressive achievement," says Hermann Nicolai, director of the Albert Einstein Institute in Potsdam, Germany. "While mathematicians have known for a long time about the beauty and the
uniqueness of E[8], we physicists have come to appreciate its exceptional role only more recently. Understanding the inner workings of E[8] is not only a great advance for pure mathematics, but may
also help physicists in their quest for a unified theory." | {"url":"http://www.ur.umich.edu/0607/Mar19_07/04.shtml","timestamp":"2014-04-21T14:41:30Z","content_type":null,"content_length":"19291","record_id":"<urn:uuid:b59bed75-544c-45ee-bc5d-f5d65b564e78>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fast separation in a graph with an excluded minor
Bruce Reed, David R. Wood
Let G be an n-vertex m-edge graph with weighted vertices. A pair of vertex sets A, B ⊆ V(G) is a 2/3-separation of order |A ∩ B| if A ∪ B = V(G), there is no edge between A \ B and B \ A, and both A \ B
and B \ A have weight at most 2/3 the total weight of G. Let ℓ ∈ ℤ+ be fixed. Alon, Seymour and Thomas [J. Amer. Math. Soc. 1990] presented an algorithm that, in O(n^{1/2} m) time, either outputs a K_ℓ-minor
of G, or a separation of G of order O(n^{1/2}). Whether there is an O(n+m) time algorithm for this theorem was left as an open problem. In this paper, we obtain an O(n+m) time algorithm at the expense of an
O(n^{2/3}) separator. Moreover, our algorithm exhibits a tradeoff between running time and the order of the separator. In particular, for any given ε ∈ [0, 1/2], our algorithm either outputs a K_ℓ-minor of
G, or a separation of G with order O(n^{(2−ε)/3}) in O(n^{1+ε} + m) time.
Full Text:
GZIP Compressed PostScript PostScript PDF original HTML abstract page | {"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAE0110","timestamp":"2014-04-19T13:22:23Z","content_type":null,"content_length":"11464","record_id":"<urn:uuid:c4d122fa-95ff-42bb-9280-4ba41910eff1>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: J. Stat. Mech. (2008) P03006
Journal of Statistical Mechanics: Theory and Experiment (an IOP and SISSA journal)
Optimal spatial transportation networks where link costs are sublinear in link capacity
D J Aldous
Department of Statistics, University of California Berkeley, 367 Evans Hall
# 3860, Berkeley, CA 94720, USA
E-mail: aldous@stat.berkeley.edu
URL: www.stat.berkeley.edu/users/aldous
Received 30 January 2008
Accepted 20 February 2008
Published 11 March 2008
Online at stacks.iop.org/JSTAT/2008/P03006
Abstract. Consider designing a transportation network on n vertices in the plane, with traffic demand uniform over all source-destination pairs. Suppose the cost of a link of length ℓ and capacity c scales as ℓ c^β for fixed 0 < β < 1. Under appropriate standardization, the cost of the minimum cost Gilbert network grows | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/799/1660114.html","timestamp":"2014-04-20T19:17:24Z","content_type":null,"content_length":"7965","record_id":"<urn:uuid:f7021529-8254-4650-93bb-6e4b26699a91>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
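The sublinear exponent in the link cost (c^β with β < 1) is what produces economies of scale: one merged link carrying combined traffic costs less than two parallel links. A toy numeric illustration (the values of β and the capacities are my own, not from the paper):

```python
def link_cost(length, capacity, beta):
    """Gilbert-style link cost: linear in length, sublinear in capacity."""
    return length * capacity ** beta

beta = 0.5                                   # any fixed value in (0, 1)
merged = link_cost(1.0, 2.0, beta)           # one link of capacity 2
separate = 2 * link_cost(1.0, 1.0, beta)     # two parallel links of capacity 1
# merged = 2**0.5 ≈ 1.414, which is less than separate = 2
```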
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Optimization of a Objective function that consist many numerically integrated function
Replies: 1 Last Post: Dec 9, 2013 10:37 AM
Messages: [ Previous | Next ]
Re: Optimization of a Objective function that consist many numerically
integrated function
Posted: Dec 9, 2013 10:37 AM
On 12/9/2013 12:34 AM, Rubayet wrote:
> Hello,
> I am dealing with a nonlinear objective function that contains many
> numerical integral functions. Also i have to pass some extra
> parameters inside of each integral function.I have tried several
> times to solve this but i have failed .
> Below i have shown the format of my objective function :
> expected total cost = normal function(P,D,R,thita) + integral function
> 1(P,D,thita,lamda,R) + integral function 2((P,D,thita,lamda,R)+.........
> Here P,D,R,thita is input parameters .
> Also i have to employ numerical integration because this integral
> function cannot be solved analytically.
> Can any body help me to resolve/figure out this optimization problem
> in simple way?
Perhaps these documentation sections will be of use:
and a worked example:
Alan Weiss
MATLAB mathematical toolbox documentation
Date Subject Author
12/9/13 Optimization of a Objective function that consist many numerically integrated function Rubayet
12/9/13 Re: Optimization of a Objective function that consist many numerically Alan Weiss
integrated function | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2610000&messageID=9338669","timestamp":"2014-04-20T17:06:14Z","content_type":null,"content_length":"18742","record_id":"<urn:uuid:2104af9f-8f72-4df8-b179-42647e51cbac>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
Configuration properties for DATA_PROFILE that affect loading:
Name: COPY_DATA
Type: BOOLEAN
Valid Values: true | false
Default: true
Setting this to true will enable copying of data from source to profile workspace.
Name: FORCE_COPY_DATA
Type: BOOLEAN
Valid Values: true | false
Default: false
Setting this to true will always force a profile to run.
Name: CALCULATE_DATATYPES
Type: BOOLEAN
Valid Values: true | false
Default: false
Setting this to true will enable data type discovery for the selected table.
Name: CALCULATE_COMMON_FORMATS
Type: BOOLEAN
Valid Values: true | false
Default: false
This tells the profiler if common formats are to be discovered for all sources in this profile.
Name: NULL_VALUE
Type: STRING
Valid Values: any string value
Default: null
This value will be considered as the null value when profiling. Please enclose the value in single quotes. An unquoted null (the current default value) will be considered a database null.
Name: SAMPLE_RATE
Type: NUMBER
Valid Values: 1-100
Default: 100
This value will be the percent of total rows that will be randomly selected during loading.
Configuration properties for DATA_PROFILE that affect profiling:
Name: CALCULATE_DOMAINS
Type: BOOLEAN
Valid Values: true | false
Default: true
Setting this to true will enable domain discovery.
Name: DOMAIN_MAX_COUNT
Type: NUMBER
Valid Values: 1-any number
Default: true
The maximum number of distinct values in a column in order for that column to be discovered as possibly being defined by a domain. Domain Discovery of a column occurs if the number of distinct values
in that column is at or below the Max Distinct Values Count property, AND, the number of distinct values as a percentage of total rows is at or below the Max Distinct Values Percent property.
Name: DOMAIN_MAX_PERCENT
Type: NUMBER
Valid Values: 1-100
Default: true
The maximum number of distinct values in a column, expressed as a percentage of the total number of rows in the table, in order for that column to be discovered as possibly being defined by a domain.
Domain Discovery of a column occurs if the number of distinct values in that column is at or below the Max Distinct Values Count property, AND, the number of distinct values as a percentage of total
rows is at or below the Max Distinct Values Percent property.
Name: DOMAIN_MIN_COUNT
Type: NUMBER
Valid Values: 1-any number
Default: true
The minimum number of rows for the given distinct value in order for that distinct value to be considered as compliant with the domain. Domain Value Compliance for a value occurs if the number of
rows with that value is at or above the Min Rows Count property, AND, the number of rows with that value as a percentage of total rows is at or above the Min Rows Percent property.
Name: DOMAIN_MIN_PERCENT
Type: NUMBER
Valid Values: 1-100
Default: true
The minimum number of rows, expressed as a percentage of the total number of rows, for the given distinct value in order for that distinct value to be considered as compliant with the domain. Domain
Value Compliance for a value occurs if the number of rows with that value is at or above the Min Rows Count property, AND, the number of rows with that value as a percentage of total rows is at or
above the Min Rows Percent property.
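The two pairs of domain thresholds above each combine with a logical AND, as the descriptions state. A small sketch of that decision logic (illustrative Python; this is not Warehouse Builder's actual implementation, just the rules transcribed):

```python
def is_domain_candidate(distinct_count, total_rows, max_count, max_percent):
    """A column is considered for domain discovery only if its number of
    distinct values is at or below DOMAIN_MAX_COUNT AND those distinct
    values, as a percentage of total rows, are at or below
    DOMAIN_MAX_PERCENT."""
    percent = 100.0 * distinct_count / total_rows
    return distinct_count <= max_count and percent <= max_percent

def is_domain_value_compliant(value_rows, total_rows, min_count, min_percent):
    """A distinct value complies with the domain only if its row count is
    at or above DOMAIN_MIN_COUNT AND its percentage of total rows is at
    or above DOMAIN_MIN_PERCENT."""
    percent = 100.0 * value_rows / total_rows
    return value_rows >= min_count and percent >= min_percent
```

For example, a column with 5 distinct values in 10,000 rows (0.05%) passes thresholds of 20 values and 1%, while the same 5 distinct values in only 10 rows (50%) does not.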
Name: CALCULATE_UK
Type: BOOLEAN
Valid Values: true | false
Default: true
Setting this to true will enable unique key discovery.
Name: UK_MIN_PERCENT
Type: NUMBER
Valid Values: 1-100
Default: 75
This is the minimum percentage of rows that need to satisfy a unique key relationship.
Name: CALCULATE_FD
Type: BOOLEAN
Valid Values: true | false
Default: true
Setting this to true will enable functional dependency discovery.
Name: FD_MIN_PERCENT
Type: NUMBER
Valid Values: 1-100
Default: 75
This is the minimum percentage of rows that need to satisfy a functional dependency relationship.
Name: FD_UK_LHS_COUNT
Type: NUMBER
Valid Values: 1-number of attributes of source less 1
Default: 1
This is the maximum number of attributes for unique key and functional dependency profiling.
Name: CALCULATE_FK
Type: BOOLEAN
Valid Values: true | false
Default: true
Setting this to true will enable foreign key discovery.
Name: FK_MIN_PERCENT
Type: NUMBER
Valid Values: 1-100
Default: 75
This is the minimum percentage of rows that need to satisfy a foreign key relationship.
Name: CALCULATE_REDUNDANT_COLUMNS
Type: BOOLEAN
Valid Values: true | false
Default: false
Setting this to true will enable redundant column discovery with respect to a foreign key-unique key pair.
Name: REDUNDANT_MIN_PERCENT
Type: NUMBER
Valid Values: 1-100
Default: 75
This is the minimum percentage of rows that are redundant.
Name: CALCULATE_DATA_RULES
Type: BOOLEAN
Valid Values: true | false
Default: false
Setting this to true will enable data rule profiling for the selected table.
Name: CALCULATE_PATTERNS
Type: BOOLEAN
Valid Values: true | false
Default: false
Setting this to true will enable pattern discovery.
Name: MAX_NUM_PATTERNS
Type: NUMBER
Valid Values: any number less than the number of rows of the source
Default: 10
This tells the profiler to get the top-N patterns for the attribute. | {"url":"http://docs.oracle.com/cd/E18283_01/owb.112/e14406/chap7002.htm","timestamp":"2014-04-21T01:45:40Z","content_type":null,"content_length":"16863","record_id":"<urn:uuid:b1059969-b58a-4491-bad1-53366b22be5b>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Boston Math Tutor
Find a Boston Math Tutor
...I have been informally singing for as long as I can remember, and formally singing for over four years. I have training in solo vocals as well as in a choral setting, so I am aware of
techniques and exercises that improve vocal range and timbre as well as ear training. I currently sing in an ac...
38 Subjects: including algebra 1, probability, SAT math, precalculus
...My name is Sean, and I am a graduate student in the Department of Psychological and Brain Sciences at Boston University. Prior to my move to Boston, I received a B.S. (with high honors) in
Biomedical Sciences from Rochester Institute of Technology (RIT). I am passionate about teaching and have ...
22 Subjects: including ACT Math, SAT math, reading, writing
...As a tax and trial attorney with many years experience, I bring effective communication skills and the ability to work diligently and efficiently with you on the subjects you need help with.
Give me a try. You won't be disappointed. I enjoy helping young students learn math.
11 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I left Taiwan in 2006 and have been tutoring students in Hawaii since. I recently moved to Boston in 2011 and am glad to see there is a strong academic focus in the city and a belief that
tutoring can improve a student's academic performance. In terms of my credentials, I am a full-time engineer helping people save money on energy and water.
42 Subjects: including algebra 1, English, accounting, trigonometry
...I, myself, am a visual learner so I try to come up with fun, neat tricks to help me remember difficult subjects. I enjoy helping others, and hopefully you will allow me to help you and/or your child overcome their academic obstacles and struggles. I sure have had mine to get where I'm at! I successfully completed high school math with grades of ninety and above.
16 Subjects: including algebra 2, elementary (k-6th), trigonometry, ESL/ESOL
Nearby Cities With Math Tutor
Brighton, MA Math Tutors
Brookline, MA Math Tutors
Cambridge, MA Math Tutors
Charlestown, MA Math Tutors
Chelsea, MA Math Tutors
Dorchester, MA Math Tutors
East Boston Math Tutors
Everett, MA Math Tutors
Jamaica Plain Math Tutors
Malden, MA Math Tutors
Medford, MA Math Tutors
Revere, MA Math Tutors
Roxbury, MA Math Tutors
Somerville, MA Math Tutors
South Boston, MA Math Tutors | {"url":"http://www.purplemath.com/boston_ma_math_tutors.php","timestamp":"2014-04-19T02:38:28Z","content_type":null,"content_length":"23683","record_id":"<urn:uuid:942db841-41c4-4174-81d5-f8a78ca05619>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hans Saar's thesis
I would love to have a look on some results which are claimed by some people to be in Saar's thesis:
H. Saar, Kompakte, vollständig beschränkte Abbildungen mit Werten in einer nuklearen C${}^\ast$-Algebra, Diplomarbeit, Universität des Saarlandes, Saarbrücken, 1982.
Unfortunately, I cannot localize the mathematician named H. Saar (in particular, the Math Genealogy Project gives no outputs).
I would appreciate any help with finding his thesis.
EDIT: quid is quite right. The Universität des Saarlandes Library lists it as a Master's thesis. If you google for it, you'll see it is pretty frequently cited, and it is very unusual (at least in my opinion) for a Master's thesis to have such impact.
reference-request c-star-algebras oa.operator-algebras
Searching for 'Saar' in the catalogue of the relevant library infomath-bib.de/en/welcome.shtml gives an entry for the document. In principle it seems feasible to get it (or a copy) via some interlibrary loan from there, or by asking someone there. Side remark: this is not (the analog of) a PhD thesis; it is more like a master's or senior thesis. It is thus unsurprising that it is not in the Math Genealogy database. – quid Apr 3 '13 at 17:06
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged reference-request c-star-algebras oa.operator-algebras or ask your own question. | {"url":"http://mathoverflow.net/questions/126413/hans-saars-thesis","timestamp":"2014-04-20T18:35:09Z","content_type":null,"content_length":"46361","record_id":"<urn:uuid:5bce7006-bc71-4027-9fed-6cd358ae0137>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
When is the Freudenthal compactification an ANR?
Let $X$ be a locally compact metric ANR (or, if preferred, a locally compact simplicial complex). If needed, assume that $X$ has finitely many ends or is of finite dimension. My question is:
What are the necessary conditions for the Freudenthal compactification of $X$ to be a metric ANR?
In the literature I have found some sufficient conditions (though up to date I was not able to dig through the papers I refer to and compare the conditions), like:
A full characterization of $X$'s with the property I ask for is probably difficult. An analogous question for the one-point compactification is stated open (Problem 79SC6) in the book J. van Mill,
G.M. Reed: Open Problems in Topology. At least in the case of finitely many ends these two problems look at least close to being equivalent. The book has some references to the papers of Dydak ('On
$LC^n$ divisors' and 'On maps preserving $LC^n$ divisors'), which probably give an answer in the finite dimensional case, but which at the moment I still do not understand (if some short exposition
is possible, it would be welcome).
Perhaps for me even better than the answer to the first question would be a good answer to the following one:
What are some simple examples of $X$ whose Freudenthal compactification is not an ANR and why is it so?
I think gt.geometric-topology and at.algebraic-topology would be relevant tags. – Sergey Melikhov Apr 19 '12 at 16:53
1 Answer
Regarding simple examples: take $X$ to be the infinite mapping telescope of an inverse sequence of connected compact polyhedra $P_i$ (so that $X$ has one end) such that for some $j$, the
inverse sequence of the groups $G_i=H_j(P_i)$ does not satisfy the Mittag-Leffler condition or its inverse limit does not inject in any $G_i$. Then the Freudenthal compactification, which
in this case is the same as the one-point compactification $X^+$, is not an ANR.
The Mittag-Leffler condition is not satisfied for instance if each $P_i=S^j$ and each bonding map $P_{i+1}\to P_i$ is a degree $2$ map ($j>0$). The inverse limit does not inject in any
$G_i$ for instance if each $P_i$ is the $j$-fold suspension $S^{j-1}*[2^i]$ over the discrete space of cardinality $2^i$ (here $j>0$ in order to have $P_i$ connected), and each bonding
map $P_{i+1}\to P_i$ is the suspension over a trivial $2$-fold cover.
More generally one can take the $P_i$ to be disconnected, but such that the inverse limit of $\pi_0(P_i)$ injects into some $\pi_0(P_i)$. Then $X$ has finitely many ends (though there might be infinitely many proper homotopy classes of proper maps $[0,\infty)\to X$ if $\pi_1(P_i)$ do not satisfy the Mittag-Leffler condition). So if the Freudenthal compactification of $X$ is an ANR, then $X^+$ is an ANR.
So why is $X^+$ not an ANR if the $G_i$ do not satisfy the Mittag-Leffler condition or their inverse limit does not inject in any $G_i$? This follows from Dydak's necessary and sufficient
condition for $X^+$ to be ANR, where $X$ is the infinite mapping telescope of an inverse sequence of compact polyhedra $P_i$, in terms of the homology groups and the fundamental groups of
the $P_i$. I've tried to understand this result, and have written up a somewhat different proof of the homological part (which perhaps also counts as a short exposition that you request),
see Theorem 6.12 in http://arxiv.org/abs/0812.1407. Theorem 3.12 also gives a similar necessary and sufficient condition for the mapping telescope $X$ to be forward tame, in terms of the
homotopy groups of the $P_i$. See also Lemma 3.4 for a quick explanation of some terms.
Not the answer you're looking for? Browse other questions tagged gn.general-topology compactifications examples gt.geometric-topology at.algebraic-topology or ask your own question. | {"url":"https://mathoverflow.net/questions/94462/when-is-the-freudenthal-compactification-an-anr","timestamp":"2014-04-18T15:58:30Z","content_type":null,"content_length":"56497","record_id":"<urn:uuid:93f3da66-b569-40c5-bf50-271d63acb956>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maxwell-Boltzmann distribution
The Maxwell-Boltzmann distribution is an important relationship that finds many applications in physics and chemistry. It forms the basis of the kinetic theory of gases, which accurately explains many fundamental gas properties, including pressure and diffusion. The Maxwell-Boltzmann distribution also finds important applications in electron transport and other phenomena.
The Maxwell-Boltzmann distribution can be derived using statistical mechanics (see the derivation of the partition function). It corresponds to the most probable energy distribution, in a
collisionally-dominated system consisting of a large number of non-interacting particles. Since interactions between the molecules in a gas are generally quite small, the Maxwell-Boltzmann
distribution provides a very good approximation of the conditions in a gas.
In many other cases, however, the condition of elastic collisions dominating all other processes is not even approximately fulfilled. That is true, for instance, for the physics of the ionosphere and
space plasmas where recombination and collisional excitation (i.e. radiative processes) are of far greater importance: in particular for the electrons. Not only would the assumption of a Maxwell
distribution yield quantitatively wrong results, but even prevent a correct qualitative understanding of the physics involved.
The Maxwell-Boltzmann distribution can be expressed as:

N[i]/N = exp(-E[i]/kT) / Σ[j] exp(-E[j]/kT)     (1)

where N[i] is the number of molecules at equilibrium temperature T having energy level E[i], N is the total number of molecules in the system, and k is Boltzmann's constant. Essentially, Equation 1 provides a means for calculating the fraction of molecules (N[i]/N) that have energy E[i] at a given temperature T. Because velocity and speed are related to energy, Equation 1 can be used to derive relationships between temperature and the speeds of molecules in a gas.
Maxwell-Boltzmann Velocity Distribution
For the case of an "ideal gas" consisting of non-interacting atoms in the ground state, all energy is in the form of kinetic energy. From the Particle in a box problem in Quantum mechanics we know
that the energy levels for a gas in a rectangular box with sides of lengths a[x], a[y], a[z] are given by:

E = (h^2/8m)(n[x]^2/a[x]^2 + n[y]^2/a[y]^2 + n[z]^2/a[z]^2)     (2)

where n[x], n[y], and n[z] are the quantum numbers for x, y, and z motion, respectively. However, for a macroscopic-sized box the energy levels are very closely spaced, so they can be considered continuous and we can replace the sum with an integral. Furthermore, we can recognize that h^2 n[i]^2/4a[i]^2 corresponds to the square of the ith component of momentum, p[i]^2, giving:

N[i]/N = (1/q) exp(-(p[x]^2 + p[y]^2 + p[z]^2)/2mkT)     (3)
where q corresponds to the denominator in Equation 1. This distribution of N[i]/N is proportional to the probability distribution function f[p] for finding a molecule with these values of momentum components, so:

f[p](p[x], p[y], p[z]) = c exp(-(p[x]^2 + p[y]^2 + p[z]^2)/2mkT)     (4)

The constant of proportionality, c, can be determined by recognizing that the probability of a molecule having some momentum must be 1. Therefore the integral of Equation 4 over all p[x], p[y], and p[z] must be 1.
It can be shown that:

∫ exp(-p[i]^2/2mkT) dp[i] = (2πmkT)^(1/2)     (5)

(the integral running over all values of a single momentum component), so in order for the integral of Equation 4 to be 1,

c = (2πmkT)^(-3/2)     (6)

Substituting Equation 6 into Equation 4 gives the normalized momentum distribution:

f[p](p[x], p[y], p[z]) = (2πmkT)^(-3/2) exp(-(p[x]^2 + p[y]^2 + p[z]^2)/2mkT)     (7)

Finally, recognizing that the velocity probability distribution f[v] is proportional to the momentum probability distribution function (f[v] = m^3 f[p], since p[i] = m v[i]), we get:

f[v](v[x], v[y], v[z]) = (m/2πkT)^(3/2) exp(-m(v[x]^2 + v[y]^2 + v[z]^2)/2kT)     (8)

This is the Maxwell-Boltzmann velocity distribution.
Velocity Distribution in One Direction
For the case of a single direction, Equation 8 reduces to:

f(v[x]) = (m/2πkT)^(1/2) exp(-m v[x]^2/2kT)     (9)

This distribution has the form of a Gaussian error curve. As expected for a gas at rest, the average velocity in any particular direction is zero.
Distribution of Speeds
Usually, we are more interested in the speed of molecules rather than the component velocities, where the speed v is defined such that:

v^2 = v[x]^2 + v[y]^2 + v[z]^2     (10)

The corresponding speed distribution is:

F(v) = 4π v^2 (m/2πkT)^(3/2) exp(-m v^2/2kT)     (11)
Average Speed
Although Equation 11 gives the distribution of speeds, or in other words the fraction of molecules having a particular speed, we are often more interested in quantities such as the average speed of
the particles rather than the actual distribution. In the following subsections we will define and derive the most probable speed, the mean speed and the root-mean-square speed.
Most Probable Speed
The most probable speed, v[p], is the speed most likely to be possessed by any molecule in the system and corresponds to the maximum value, or mode, of F(v). To find it, we calculate dF/dv, set it to zero, and solve for v:

v[p] = (2kT/m)^(1/2)     (12)
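As a quick numerical sanity check, one can verify that v[p] = (2kT/m)^(1/2) really is the mode of the speed distribution by evaluating F(v) just below and just above it. The sketch below uses Python with illustrative values (the N2 molecular mass and 300 K are my own choices, not from the text):

```python
import math

def F(v, m, k, T):
    """Maxwell-Boltzmann speed distribution."""
    a = m / (2.0 * k * T)
    return 4.0 * math.pi * v * v * (a / math.pi) ** 1.5 * math.exp(-a * v * v)

k = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26       # mass of one N2 molecule, kg (illustrative)
T = 300.0          # temperature, K

vp = math.sqrt(2.0 * k * T / m)   # most probable speed (~422 m/s for N2 at 300 K)
# F is larger at vp than slightly below or above it, confirming vp is the mode.
assert F(vp, m, k, T) > F(0.99 * vp, m, k, T)
assert F(vp, m, k, T) > F(1.01 * vp, m, k, T)
```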
Mean Speed
The mean speed, v̄, or average speed, can be calculated using the expression:

v̄ = ∫[0,∞] v F(v) dv     (13)

Substituting in Equation 11 and performing the integration gives:

v̄ = (8kT/πm)^(1/2)     (14)

Note that v̄ and v[p] differ by a constant factor of (4/π)^(1/2).
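The speed-distribution results can also be checked by direct numerical integration: the sketch below (plain Python; the N2 mass and the temperature are my own illustrative choices) confirms that F(v) integrates to 1 and that the numerically computed mean speed matches (8kT/πm)^(1/2):

```python
import math

def F(v, m, k, T):
    """Maxwell-Boltzmann speed distribution (the result derived above)."""
    a = m / (2.0 * k * T)
    return 4.0 * math.pi * v * v * (a / math.pi) ** 1.5 * math.exp(-a * v * v)

def trapezoid(f, lo, hi, n=20000):
    """Simple composite trapezoid rule."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

k = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26       # mass of one N2 molecule, kg (illustrative)
T = 300.0          # temperature, K

# 5000 m/s is far into the exponential tail, so truncating there is safe.
norm = trapezoid(lambda v: F(v, m, k, T), 0.0, 5000.0)
mean = trapezoid(lambda v: v * F(v, m, k, T), 0.0, 5000.0)
mean_formula = math.sqrt(8.0 * k * T / (math.pi * m))
# norm ≈ 1, and mean agrees with the closed-form result to high accuracy
```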
Root-mean-square Speed
The root-mean-square speed, v[rms], is given by:

v[rms] = ( ∫[0,∞] v^2 F(v) dv )^(1/2)     (15)

Substituting for F(v) and performing the integration, we get:

v[rms] = (3kT/m)^(1/2)     (16) | {"url":"http://www.fact-index.com/m/ma/maxwell_boltzmann_distribution.html","timestamp":"2014-04-17T06:42:39Z","content_type":null,"content_length":"9918","record_id":"<urn:uuid:7ca988a7-ef8c-401f-b956-a1d2ad65d3f1>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Project idea?
January 6th, 2011, 04:19 PM
Project idea?
Hi, I'm mostly new to Java. I started to learn it for a project I wanted to do for school. My initial idea was to make a Soduko solver, even though I know it's possible for me to make, I wanted
to make an "easier" program to start out with. I don't want something like a clock.... I want something that is interesting, somewhat of a challenge, and "cool".
Was just wondering if you guys had any ideas of projects for me to do?
Thanks :)
January 6th, 2011, 05:56 PM
Re: Project idea?
A Sudoku solver is fairly easy to make (at least brute force solvers are). Hmm... You could make an AI which will play tic-tac-toe with you, or maybe some other simple game.
If you're looking for a challenge of your analytical and math skills (and a test of how good your number theory is) you can take a look at Project Euler. This will definitely be harder than any
Sudoku solver (well, not all of the problems are harder), though the programming portions are actually quite easy. And depending on who you are, solving these problems is kind of cool :)
January 6th, 2011, 07:22 PM
Re: Project idea?
Well if you say a sudoku solver is kind of simple, I might stick with it :)
Could you give me some tips on it? I've searched the web, but haven't found /too/ much stuff on it.
Thanks! :)
January 6th, 2011, 10:09 PM
Re: Project idea?
See: Sudoku algorithms - Wikipedia, the free encyclopedia. There's actually an implementation of the back-track (a.k.a. brute force) algorithm for solving a Sudoku about 3/4 of the way down the page (link). It's implemented in Ruby, but you can try treating this as pseudocode and converting it to Java. | {"url":"http://www.javaprogrammingforums.com/%20java-theory-questions/6726-project-idea-printingthethread.html","timestamp":"2014-04-18T04:12:55Z","content_type":null,"content_length":"6088","record_id":"<urn:uuid:6157b0f4-8a8c-4861-a61e-17b5338703ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
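The back-tracking (brute-force) approach recommended in the thread is short enough to sketch in full. The poster was aiming at Java; the version below is a Python transliteration of the same idea, offered only as a reference sketch:

```python
def valid(board, r, c, val):
    """True if placing val at (r, c) violates no row, column, or 3x3 box."""
    if any(board[r][j] == val for j in range(9)):
        return False
    if any(board[i][c] == val for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != val for i in range(3) for j in range(3))

def solve(board):
    """Backtracking solver: fill the first empty cell with each candidate
    in turn and recurse; undo the placement on failure."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for val in range(1, 10):
                    if valid(board, r, c, val):
                        board[r][c] = val
                        if solve(board):
                            return True
                        board[r][c] = 0
                return False          # no candidate fits: backtrack
    return True                       # no empty cells left: solved

# A well-known example puzzle (0 marks an empty cell).
puzzle = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
    [8, 0, 0, 0, 6, 0, 0, 0, 3],
    [4, 0, 0, 8, 0, 3, 0, 0, 1],
    [7, 0, 0, 0, 2, 0, 0, 0, 6],
    [0, 6, 0, 0, 0, 0, 2, 8, 0],
    [0, 0, 0, 4, 1, 9, 0, 0, 5],
    [0, 0, 0, 0, 8, 0, 0, 7, 9],
]
solved = solve(puzzle)   # mutates puzzle in place; True when a solution is found
```

This naive search is exponential in the worst case, but on typical published puzzles it finishes in milliseconds.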
A subgroup of the Weyl group
Let $D$ be a connected Dynkin diagram with an automorphism $\nu$ of order 2. Let $Q=Q(D)$ denote the root lattice of $D$. Let $W=W(D)$ denote the Weyl group, it acts effectively on $Q$ and it is
generated by reflections $r_\alpha$ for $\alpha\in D$. The automorphism $\nu$ acts on $Q$. Let $W_0$ denote the centralizer of $\nu$ in $W\subset {\rm Aut}\, Q$.
I want to understand this group $W_0$. Let $D^\nu$ denote the subset of $\nu$-fixed vertices in $D$. For $\beta\in D^\nu$ we have $r_\beta\in W_0$. I assume that for all $\gamma\in D\smallsetminus D^
\nu$, the vertices $\gamma$ and $\nu(\gamma)$ are not connected by an edge (thus I exclude the case $D={\bf A}_{2n}$). Then $r_\gamma$ and $r_{\nu(\gamma)}$ commute, and we have $r_\gamma r_{\nu(\
gamma)}\in W_0$.
Question. Is it true that $W_0$ is generated by $r_\beta$ for $\beta\in D^\nu$ and by $r_\gamma r_{\nu(\gamma)}\in W_0$ for $\gamma\in D\smallsetminus D^\nu$?
I am interested in the case $D={\bf D}_n$, but I would prefer to get a classification-free answer.
lie-groups algebraic-groups root-systems
This set-up is well-studied in connection with the construction of quasi-split groups over various fields such as finite fields. I think Carter's old book Simple Groups of Lie Type may be a good source for an affirmative answer to your question, but I'd need to check more carefully. There are also accounts by Tits, Satake, etc. – Jim Humphreys Aug 23 '13 at 13:20
P.S. The early sections of Chapter 13 in Carter's book deal with your situation, though his notation differs from yours. Some of the arguments are classification-free, involving generation of a
reflection subgroup of the Weyl group. Then there is case-by-case discussion of the actual possibilities. He is of course treating all possible diagram symmetries. – Jim Humphreys Aug 23 '13 at
@Jim Humphreys: Thank you, it was very helpful. I am interested in twisting compact groups over $\mathbb{R}$, rather than split groups over finite fields. – Mikhail Borovoi Aug 23 '13 at 18:17
For the Weyl group it doesn't really make any difference. I think Helgason's book deals more directly with your issue, but the formalism for roots and Weyl groups doesn't change. Satake and Tits
also have lecture note treatments covering the Lie groups, as do some textbooks I don't have at hand. Carter's treatment is quite concrete in any case. – Jim Humphreys Aug 23 '13 at 20:37
@Jim Humphreys: Sure! Carter's Proposition 13.1.2 gives the affirmative answer to my question. – Mikhail Borovoi Aug 23 '13 at 21:20
1 Answer
As indicated in my comments, the 1968 book Simple Groups of Lie Type by R.W. Carter has a good elementary treatment of your question (to which the answer is yes) in Chapter 13. All of
this goes back pretty far in the history of Lie theory, with the Weyl group and root system arising from a simple Lie algebra (or Lie group) or from a simple algebraic group. Typically
the discussion of symmetries of the Dynkin diagram occurs along with specific constructions in the Lie algebra or associated group. But Carter's exposition is clear and detailed, showing
how to pass from the original Weyl group to its subgroup commuting with the given diagram symmetry. In particular, this subgroup has a natural set of generators as in your question.

While there are many treatments in textbooks or lecture notes, one online source may be useful: the 1967-68 Yale lecture notes on Chevalley groups by Steinberg here. See his section 11,
where he starts with standard examples and then treats the general theory. In all such sources, notation varies quite a bit but the ideas are pretty much the same.
Not the answer you're looking for? Browse other questions tagged lie-groups algebraic-groups root-systems or ask your own question. | {"url":"http://mathoverflow.net/questions/140208/a-subgroup-of-the-weyl-group","timestamp":"2014-04-23T18:48:38Z","content_type":null,"content_length":"58161","record_id":"<urn:uuid:54d47e6c-ddbb-4e6b-aced-b67029de0ad8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Windham, NH Math Tutor
Find a Windham, NH Math Tutor
...My hourly rate is flexible, depending upon the needs of the student and the distance I will have to travel to the tutoring location. I look forward to working with you! I have taught Algebra 1
numerous times, in the classroom as well as on an individualized basis. I specialize in making the abstract concepts of algebra more concrete for thinkers who might not be mathematically
28 Subjects: including geometry, ACT Math, algebra 1, algebra 2
I have been tutoring, specializing in mathematics for the past nine years. I offer math tutoring for students third grade through pre-calculus in your home, at my home office or at a convenient
public location. My specialty is in individual tutoring, dedicating myself to helping students understan...
12 Subjects: including precalculus, algebra 1, algebra 2, geometry
As a 2011 Graduate of Merrimack College with an English degree, a member of the school's English club and someone who got a 750 in the writing portion of the SATs, I have extensive experience in
writing and literature-related subjects. I also have worked at Artworks art studio in Medford, MA for th...
27 Subjects: including prealgebra, algebra 1, grammar, SAT math
...My areas of expertise are Geometry, Algebra I, Pre-Algebra, and Algebra II. I am a Math MCAS specialist. I believe that you must work in a safe, relaxed environment which is why I like working
at one of the local libraries when I tutor someone or a small group.
4 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I obtained my Bachelor's degree in Biopsychology and have extensive coursework and research experience in this subject. I emphasize the importance of study skills to retain course material. I
also utilize my own experiences to challenge students to think like psychologists and bring those skills to tackle future endeavors.
10 Subjects: including algebra 1, algebra 2, biology, chemistry
Related Windham, NH Tutors
Windham, NH Accounting Tutors
Windham, NH ACT Tutors
Windham, NH Algebra Tutors
Windham, NH Algebra 2 Tutors
Windham, NH Calculus Tutors
Windham, NH Geometry Tutors
Windham, NH Math Tutors
Windham, NH Prealgebra Tutors
Windham, NH Precalculus Tutors
Windham, NH SAT Tutors
Windham, NH SAT Math Tutors
Windham, NH Science Tutors
Windham, NH Statistics Tutors
Windham, NH Trigonometry Tutors
Nearby Cities With Math Tutor
Amherst, NH Math Tutors
Atkinson, NH Math Tutors
Derry, NH Math Tutors
Hampstead, NH Math Tutors
Hudson, NH Math Tutors
Litchfield, NH Math Tutors
Londonderry, NH Math Tutors
Lynnfield Math Tutors
Milford, NH Math Tutors
North Reading Math Tutors
North Salem, NH Math Tutors
Pelham, NH Math Tutors
Pepperell Math Tutors
Salem, NH Math Tutors
Tyngsboro Math Tutors
Begin by dividing both sides of the equation by sqrt(y). This gives, ( dy /dx ) / sqrt(y) = sqrt(x), and after multiplying by dx, dy / sqrt(y) = sqrt(x) dx. Now integrate both sides and get S dy/sqrt
(y) = S sqrt(x) dx ... 2*sqrt(y) = (2/3)*x*sqrt(x) + C. The initial...
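For readers who want to verify the implicit solution numerically, here is a short Python sketch. It picks an arbitrary constant C = 4, recovers y from 2*sqrt(y) = (2/3)*x*sqrt(x) + C, and checks that a finite-difference derivative matches dy/dx = sqrt(x)*sqrt(y):

```python
import math

# Recover y from the implicit solution 2*sqrt(y) = (2/3)*x*sqrt(x) + C.
# C = 4 is an arbitrary choice; any constant keeping sqrt(y) positive works.
def y_of(x, c=4.0):
    two_sqrt_y = (2.0 / 3.0) * x * math.sqrt(x) + c
    return (two_sqrt_y / 2.0) ** 2

# The original equation was dy/dx = sqrt(x)*sqrt(y); compare against a
# central-difference derivative at a few sample points.
h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    dydx = (y_of(x + h) - y_of(x - h)) / (2 * h)
    assert abs(dydx - math.sqrt(x) * math.sqrt(y_of(x))) < 1e-4
print("implicit solution satisfies dy/dx = sqrt(x*y)")
```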
a cylinder has a surface area of 402 cm^2. The height is three times greater than the radius. What is the height of the cylinder? (answer)
2πr^2 + 2πr(3r) = 2πr^2 + 6πr^2 = 8πr^2. I hope this helped, and if you need more assistance, I or another tutor will be happy to help.
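The setup above can be finished numerically. Setting 8πr^2 equal to the given surface area of 402 cm^2 and taking h = 3r:

```python
import math

# With h = 3r, the surface area collapses to 8*pi*r^2 = 402 (see the setup above).
r = math.sqrt(402 / (8 * math.pi))
h = 3 * r
print(round(r, 2), round(h, 2))  # r ≈ 4.0 cm, h ≈ 12.0 cm
```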
Factor the following sum of two cubes. (answer)
z^3 + 125 = z^3 + 5^3 = (z + 5)(z^2 - 5z + 25). This follows from the "sum of cubes" formula, below. a^3 + b^3 = (a + b)(a^2 - ab + b^2). Hope this helps. If there's any more I can do to help please ask. I or another tutor...
When trying to figure the slope from your own choice of coordinates on a line, does it matter when coordinates you choose? (answer)
Any choice of points on a line will give you the same result when you calculate the slope. For lines slope is a constant, and so will not be dependent on which points you choose to "plug into" the
formula. Hope this answers your question, and if there is anything else you need assistance...
Choose the equation below that represents the line passing through the point (-3, -1) with a slope of 4. (answer)
There is a formula well-known as the point-slope equation. It gives the equation of the line through a given point with a given slope. Here it is: y - y1 = m(x - x1), where (x1, y1) is the given
point and m is the slope. I hope this helps. If you would like more assistance, please feel...
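Applying the point-slope form to this problem can be checked with a short sympy sketch. Plugging in (x1, y1) = (-3, -1) and m = 4 and solving for y gives the slope-intercept form:

```python
import sympy as sp

x, y = sp.symbols("x y")
x1, y1, m = -3, -1, 4

# point-slope form: y - y1 = m*(x - x1)
line = sp.Eq(y - y1, m * (x - x1))
slope_intercept = sp.solve(line, y)[0]
print(slope_intercept)  # 4*x + 11
```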
in a subtraction problem with two numbers that have variables, does the subtraction sign go to the second number making it a negative number? (answer)
-9x - 13 - 9x is equivalent to -9x - 13 + (-9x). Remember that subtracting a number is equivalent to adding its opposite. Put another way, a - b = a + (-b). I hope this helps, and if you need more
explained, please feel free to ask. I or another tutor will be happy to assist...
(2x+5)/5 = (x-2)/3 (answer)
I'm reading your equation as ( 2x + 5 )/5 = ( x - 2 )/3. "Cross-multiplying" refers to rewriting 'A/B = C/D' as 'AD = BC.' Try this first.
algebraic expression for each word phrase?a. 16 more than a number n b. 22 less than a number n c. 3 times a number n d. the quotient of a number n and 12 (answer)
Here it looks like they would like expressions written algebraically that are equivalent to the given statements involving the variable 'n.' A few examples might clear it up for you. (i) two more than a number n: n + 2. (ii) four times a number n: 4n. (iii)...
Pre calculus (answer)
Problem: To maximize 8R + 10F [profit] subject to the two constraints 8R + 7F <= 208 [fab dept], R + 3F <= 60 [fin dept]. Work with this and if you would like more help, I or another tutor will be happy to assist you.
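One way to work this LP by hand is to check the corner points of the feasible region, since the maximum of a linear objective over a polygon occurs at a corner. A minimal Python sketch of that idea, using the constraints stated above:

```python
# The feasible region of this small LP is a polygon, so the maximum of
# 8R + 10F sits at one of its corner points.
def feasible(R, F):
    return 8 * R + 7 * F <= 208 and R + 3 * F <= 60 and R >= 0 and F >= 0

# corners: origin, the axis intercepts, and the intersection of the two
# constraint lines (8R + 7F = 208 and R + 3F = 60 meet at R = 12, F = 16)
corners = [(0, 0), (26, 0), (0, 20), (12, 16)]
assert all(feasible(R, F) for R, F in corners)

best = max(corners, key=lambda p: 8 * p[0] + 10 * p[1])
print(best, 8 * best[0] + 10 * best[1])  # (12, 16) 256
```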
Summation Convergency tests by 3-condition test (answer)
The Alternating Series Test: ∑(-1)^n B_n converges when the following two conditions are met: (i) lim B_n = 0 and (ii) {B_n} is (eventually) decreasing. Note: AST doesn't apply when either of the conditions is not met, and so never is a test for divergence. If the first condition isn't...
how do you simplify this expression: 12x(9xy^12)(-5x^-8y^7)? (answer)
Remember the following rules when dealing with exponents. a^m * a^n = a^(m+n). a^m / a^n = a^(m-n). a^(-n) = 1/a^n. Also remember that a^0 = 1. Also it may help to remember that you can regroup the terms according to bases. To illustrate I'll use another example, and leave yours for...
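The tutor left the original expression as an exercise; for reference, applying those exponent rules to 12x(9xy^12)(-5x^-8 y^7) can be checked with a short sympy sketch:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)

# the asker's expression: 12x * (9x*y^12) * (-5*x^-8*y^7)
# coefficients multiply (12*9*(-5) = -540), like bases add exponents
expr = 12 * x * (9 * x * y**12) * (-5 * x**-8 * y**7)
print(sp.simplify(expr))  # -540*y**19/x**6
```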
latest mathematician? (answer)
Sadly, I'd say there are no famous mathematicians today, in the same sense that celebrity athletes, actors, politicians, and the like are famous. But that doesn't stop people from going into the
field and continuing the work of mathematicians past. They just don't do it for the fame...
Havin a problem with this slope equation. Can you help?(9,-5); 3x-5y=6 (answer)
Read the instructions for this problem carefully, and make sure you have included them in this post. As you've written it here so far, I can't figure out what the problem is asking you to do. I certainly could guess, but that wouldn't be too instructive, and I would really like to help.
How do I find the (LCM) or the (LCD) of fraction? (answer)
Yes they are the same number, but LCM, for Least Common Multiple is a more descriptive term, as it hints at a way to find it. To find these, one could list the multiples of each number (or
denominator). For example, say one needs the LCM of 6 and 15. 6: 6, 12, 18, 24, 30, 36,...
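The listing method above translates directly into a small Python sketch: collect multiples of each number and take the smallest one they share, then cross-check with the gcd formula lcm(a, b) = a*b / gcd(a, b):

```python
import math

# Mimic the listing method: collect multiples of each number and take
# the smallest one they have in common.
def lcm_by_listing(a, b):
    multiples_a = {a * k for k in range(1, b + 1)}  # a*b is always common
    return min(m for m in (b * k for k in range(1, a + 1)) if m in multiples_a)

print(lcm_by_listing(6, 15), 6 * 15 // math.gcd(6, 15))  # 30 30
```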
y=-x^2+4x+3 (answer)
I think a technique called "completing the square" is needed here. It rewrites y = Ax^2 + Bx + C in the form y = A(x + P)^2 + Q. It is accomplished in several steps. To demonstrate, I'll use an example
like your own, leaving it for you to try yourself...
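Once you have tried completing the square yourself, a sympy sketch like this can confirm an answer (here checking the candidate form -(x - 2)^2 + 7 against the original quadratic):

```python
import sympy as sp

x = sp.Symbol("x")
y = -x**2 + 4 * x + 3

# Candidate completed-square form A*(x + P)**2 + Q with A = -1, P = -2, Q = 7;
# expanding it back should reproduce the original quadratic.
completed = -(x - 2)**2 + 7
print(sp.expand(completed))  # -x**2 + 4*x + 3
assert sp.expand(completed) == y
```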
explain why the negation of an if then statement can not be an if then statement (answer)
Perhaps a Truth Table might shed some light on this. Below is a TT for "if p, then q."
p | q | if p, then q
T | T | T
T | F | F   [note this case: "if T, then F" = F]
F | T | T
F | F | T
Notice...
What are 3 common multiples of 7 and 21? (answer)
The "multiples" of a number, N, are the numbers N, 2*N, 3*N, ..., etc. For example, the multiples of 4 are listed below in the usual order. 4, 8, 12, 16, 20, ..., and so on. So by "common multiple"
one means a number that is a multiple of 2 or more numbers. Try...
how to simplify (2x-3)^2 (answer)
(a + b)^2 = (a + b)(a + b) = a^2 + ab + ba + b^2 = a^2 + 2ab + b^2. Above I've given a short derivation of a very useful formula which demonstrates how to "square a binomial," which is just a fancy way of saying "raise the sum to the 2nd power." In...
Roy and S ally are building a 12 ft by 8 ft fence for there back yard. If the cost is $9 per square foot plus 7% tax, how much will this cost Roy and Sally? (answer)
First work out the area by multiplying the dimensions of the fence. Now that you have the area, multiply this by the unit cost per square foot, to get the pre-tax cost of the fence. Now calculate the
tax by multiplying the pre-tax cost, by 7% or .07. After adding the tax to the...
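The steps in the answer above can be followed line by line in Python:

```python
# Follow the steps described above.
area = 12 * 8                 # square feet
pre_tax = area * 9            # $9 per square foot
tax = pre_tax * 0.07          # 7% tax
total = pre_tax + tax
print(area, pre_tax, round(total, 2))  # 96 864 924.48
```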
how to do (6-c)+3(2c-7)=1 (answer)
Let's work an example like yours that will require similar steps to complete. Say we need to solve the following equation for 'b': (5-b) + 4(3b - 1) = 23. The first step would be to expand the LHS by
distributing the 4. 5 - b + 12b - 4 = 23. Now simplify this as follows,... | {"url":"http://www.wyzant.com/resources/answers/users/view/79377660","timestamp":"2014-04-23T20:52:48Z","content_type":null,"content_length":"39970","record_id":"<urn:uuid:c22c826d-f453-4463-a3f6-2d334bd1a071>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Add Mixed Numbers
Method 1 of 2: Adding the Whole Numbers and Fractions Separately
1. Add the whole numbers together. The whole numbers are 1 and 2, so 1 + 2 = 3.
2. Find the lowest common denominator (LCD) of both fractions. The LCD is the lowest number that is evenly divisible by both numbers. Since the denominators of the fractions are 2 and 4, the LCD is
4, because 4 is the smallest number that is divisible by both 2 and 4.
3. Convert the fractions to have the LCD as their denominator. Before you can add the fractions, they have to have 4 as their denominator, so you have to make the fractions maintain their value
while having a new base. Here's how to do it:
□ Since the denominator of the fraction 1/2 has to be multiplied by 2 to get 4 as the new base, you should multiply the numerator 1 by 2 as well. 1 * 2 = 2, so the new fraction is 2/4. The fraction 2/4 = 1/2, but has been put in a larger ratio to have a larger base. This means that the numbers are equivalent fractions. They have a different base, but their value remains the same.
□ Since the fraction 3/4 already has the base of 4, you don't have to change it.
4. Add the fractions. Once you have a common denominator, the fractions can be added by simply adding the numerators: 1/2 + 3/4 = 2/4 + 3/4 = 5/4.
5. Convert any improper fractions into mixed numbers. An improper fraction is a fraction where the numerator is equal to or larger than the denominator. You need to convert the improper fraction
into a mixed number before you can add it to the sum of the whole numbers. Since the original problem used mixed numbers, your answer should use mixed numbers too. Here's how to do it:
□ First, divide the numerator by the denominator. Do long division to divide 4 into 5. 4 goes into 5 one time, so the quotient is 1. The remainder, or the number that is left over, is 1.
□ Make your quotient the new whole number. Take your remainder and place it over the original denominator to finish converting the improper fraction into a mixed number. The quotient is 1, the
remainder is 1, and the original denominator was 4, so the final answer is 1 1/4.
6. Add the sum of the whole numbers and the sum of the fractions. To get your final answer, you have to add the two sums you found. 1 + 2 = 3 and 1/2 + 3/4 = 1 1/4, so 3 + 1 1/4 = 4 1/4.
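The whole procedure can be checked with Python's exact fraction arithmetic, which handles the common denominator automatically:

```python
from fractions import Fraction

a = 1 + Fraction(1, 2)   # 1 1/2
b = 2 + Fraction(3, 4)   # 2 3/4
total = a + b            # Fraction finds the common denominator for us

# convert the improper fraction back to a mixed number
whole = total.numerator // total.denominator
part = total - whole
print(f"{total} = {whole} {part}")  # 17/4 = 4 1/4
```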
Method 2 of 2: Converting the Mixed Numbers to Improper Fractions and Adding Them
1. Convert the mixed numbers into improper fractions. You can do this by multiplying the denominator and whole number of a mixed number, and then adding this to the numerator of the fraction of the
mixed number. Your answer will be the new numerator while the denominator remains the same.
□ To convert 1 1/2 to a mixed number, multiply the whole number 1 by the denominator 2, and then add it to the numerator. Put your new answer over the original base.
☆ 1 * 2 = 2, and 2 + 1 = 3. Place the answer 3 over the original denominator and you have 3/2.
□ To convert 2 3/4 to a mixed number, multiply the whole number 2 by the denominator 4. 2 * 4 = 8.
☆ Next, add this number to the original numerator and place it over the original denominator. 8 + 3 = 11. Put 11 over 4 to get 11/4.
2. Find the lowest common multiple (LCM) of both the denominators. The LCM is the lowest number that is evenly divisible by both numbers. If the denominators are already the same, skip this step.
□ If one of the denominators is divisible by the other, the larger denominator is the LCM. The LCM of 2 and 4 is 4 because 4 is evenly divisible by 2.
3. Make the denominators the same. You can do this by finding equivalent fractions. Multiply the denominator by a number that will give you the LCM as the product. Multiply the numerator by the same
number. Do this with both fractions.
□ Since the denominator of 3/2 has to be multiplied by 2 to get the new denominator of 4, you should multiply the numerator by 2 to find the equivalent fraction of 3/2. 3 * 2 = 6, so the new
fraction is 6/4.
□ Since 11/4 already has the denominator of 4, you're in luck. You don't have to change it.
4. Add the two fractions. Now that the denominators are the same, just add the numerators to get your answer while keeping the same base: 6/4 + 11/4 = 17/4.
5. Convert the improper fraction back into a mixed fraction. Since the original problem was in mixed number form, you can convert it back into a mixed number. Here's how to do it:
□ First, divide the numerator by the denominator. Divide 4 into 17. 4 goes into 17 4 times, so the quotient is 4. The remainder, or the number that is left over, is 1.
□ Make your quotient the new whole number. Take your remainder and place it over the original denominator to finish converting the improper fraction into a mixed number. The quotient is 4, the
remainder is 1, and the original denominator was 4, so the final answer is 4 1/4.
While I'm not running the UCL Undergrad colloquium anymore, I promised the new management that I could be their back-up speaker in case someone else drops out. Someone did, and I had an opportunity
give an undergrad-level talk on motivations for studying Ergodic theory. I tried to present it as a game: trying to determine orbits of increasingly complex systems. You can
download my notes as a pdf
or just read them here:
1. First Game: Sets and Functions
Let $X$ be an arbitrary set (e.g. points in space, animals in a zoo.) and let $f: X \rightarrow X$ be some function on $X$. For some $x \in X$, we're gonna look at what happens to the set $\{ x, f
(x), f(f(x)), \ldots \}$. We'll call this set the orbit of $x$, and denote it $\text{orb}(x)$. We'll write $f^0(x)$ for $x$, $f^2(x)$ for $f(f(x))$, $f^3(x)$ for $f(f(f(x)))$, et cetera. In this way
we have an action of the natural numbers $\mathbb{N}$ on $X$. To be explicit, each $n \in \mathbb{N}$ acts on each $x \in X$ by
$\displaystyle n \mapsto f^n(x)$
It might be worth pointing out that:
$\displaystyle n+m \mapsto f^{n+m}(x) = f^n ( f^m(x))$
If $f$ is invertible, then we have an action of the group $\mathbb{Z}$ on $X$. In fact, we can play our game with any group $G$ that acts on $X$, but for now we're only considering $\mathbb{N}$.
We're gonna play three games with the pair $(X, f)$. The first has to do with sets and functions, the second with topological spaces and continuous functions, and the third with a measurable space
$X$. Each game will get progressively harder, but progressively more rewarding.
So here's the first game: given a set $X$, we have to find an $x \in X$ and an $f: X \rightarrow X$ so that as we cycle through $x$, $f(x)$, $f^2(x), \ldots$ we end up with all of $X$. Id est, $\text{orb}(x) = X$.
At first glance it may seem impossible to tell without more information about X. But we can already exclude an entire category of sets $X$. Can you see it?
That's right, uncountable sets won't play. $\text{orb}(x) = \{ f^n(x) \; : \; n \in \mathbb{N}\}$, thus it's always countable. Let's try it for a really easy set: the natural numbers $\mathbb{N}$ themselves. Can you find a number $n$ and a function $f: \mathbb{N} \rightarrow \mathbb{N}$ such that $\text{orb}(n) = \mathbb{N}$?
I hope no one has trouble seeing that $n=0$ and $f(n) = n + 1$ does the trick. What about for the integers $X = \mathbb{Z}$? Can one find a function $f: \mathbb{Z} \rightarrow \mathbb{Z}$ and an
integer $n$ such that $\text{orb}(n) = \mathbb{Z}$ ? How about for $\mathbb{Q}$?
Now lets try to finish the game: Can we do this for an arbitrary countably infinite set $X$?
Since $X$ is countably infinite, we know we have a bijection $\phi: X \rightarrow \mathbb{N}$. Let $x \in X$, and choose $\phi$ so that $\phi (x) = 0$. Now remember our function for the natural
numbers? We'll rename it $p(n) = n+1$ here. We have something like this:
$\displaystyle X \xrightarrow{\phi} \mathbb{N} \xrightarrow{\phi^{-1}} X$
So we pass into the natural numbers, use the orbit there, and then pass back into $X$. Id est, let $f : X \rightarrow X$ by $f(x)= \phi^{-1} \circ p \circ \phi (x)$. Then $\text{orb}(x) = X$. As it
turns out, the condition that there exists an $f: X \rightarrow X$ and an $x \in X$ such that $\text{orb}(x) = X$ is exactly the same condition that X is countably infinite, as $\text{orb}(x)$ puts
$X$ in a bijection with the natural numbers. This game was rigged so that we could win!
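The $f = \phi^{-1} \circ p \circ \phi$ trick can be sketched concretely for $X = \mathbb{Z}$. Take $\phi$ to be the usual interleaving bijection $\mathbb{Z} \rightarrow \mathbb{N}$ with $\phi(0) = 0$; the induced $f$ then walks through all of $\mathbb{Z}$:

```python
# phi: a bijection Z -> N with phi(0) = 0 (interleave positives and negatives)
def phi(z):
    return 2 * z - 1 if z > 0 else -2 * z

def phi_inv(n):
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

def f(z):                      # f = phi^{-1} ∘ successor ∘ phi
    return phi_inv(phi(z) + 1)

orbit, z = [], 0
for _ in range(7):
    orbit.append(z)
    z = f(z)
print(orbit)  # [0, 1, -1, 2, -2, 3, -3]
```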
But while we're at it, let's try a slightly harder game: given a countably infinite set $X$, can we find a function $f: X \rightarrow X$ such that $\text{orb}(x) = X$ for all $x \in X$ ?
No! Can you see why not? Let $x, y \in X$. Since $\text{orb}(x) = \text{orb}(y) = X$, there exist numbers $n, m$ such that $f^n(x) = y$ and $f^m(y) = x$. Hence $f^m(f^n(x)) = x$. In other words, $f^{n+m}(x) = x$, meaning the action of the natural numbers on $X$ is periodic, and $|\text{orb}(x)| \leq n+m$. The orbit of an element tells us something special about that element, and unless we're in a finite set, not every element can visit the entire set.
2. Second Game: Orbits for Topological Spaces
So as we stated, the last game (finding a function and element such that the orbit is the entire set) was rigged so that we always win. But it's helpful to see just how badly it's rigged. In the category of sets, the only ``structure'' we have to preserve is the cardinality of the set. Bijective functions obviously preserve cardinality. Since countably infinite sets are all bijective with the natural numbers, by definition they are, in a sense, all the same. The category of countably infinite sets has essentially one object, $\mathbb{N}$. Anything we prove about $\mathbb{N}$ as a countably
infinite set is automatically true for all other countably infinite sets.
So to make the game a bit more interesting, we're going to have to play in a more exciting category. Instead of looking at countably infinite sets, we're going to look at topological spaces. Loosely
speaking, a topological space is our most general notion of a space. It gives us a mathematical language for describing when points are ``near each other'', or ``in a neighborhood''. If we say that
$X$ is a topological space, we mean that there exists a collection of open subsets $U \subset X$, and these subsets must behave in a particular way. (For more information, see a decent book on
topology, like those by Munkres or Armstrong, or talk to students in our General Topology study group.)
Instead of dealing with arbitrary functions $f: X \rightarrow X$, we'll also want a new condition on $f$. Since $X$ is a topological space, we want functions that live on topological spaces, in the
same way a linear map lives on vector spaces, or group homomorphism live on groups. These functions are the ``continuous functions''. We require $f: X \rightarrow X$ to be continuous, that is, if $U
\subset X$ is open, then $f^{-1} (U) = \{ x \in X : f(x) \in U \}$ must be open, too.
Now that we have $X$ a topological space and $f$ a continuous function (we'll call the pair $(X, f)$ a topological dynamical system), we can get back to our game. When is $\text{orb}(x) = X$ ?
Almost never. Topological spaces (like $\mathbb{R}$ or $\mathbb{C}$) are almost always uncountable, and as we already discussed, this is impossible. So we have to modify our game a bit. If $\text
{orb}(x) \neq X$, what is the next best thing? (If you've not taken measure theory, functional analysis, or general topology, you may be forgiven for not knowing.)
In this game, we want $\text{orb}(x)$ to be dense in $X$. If you've never seen dense sets before, think of the rationals $\mathbb{Q}$ sitting in $\mathbb{R}$. In $\mathbb{R}$, the open sets are
exactly the open intervals. And any open interval in $\mathbb{R}$ intersects $\mathbb{Q}$. If you've had more analysis, another way to say this is that a subset of $X$ is dense in $X$ if its closure
is all of $X$.
So this is the game: given a pair $(X,f)$, $X$ a topological space and $f$ a continuous function, can you find an element $x \in X$ such that $\overline{\text{orb}(x)} = X$ (id est, the closure of $\text{orb}(x)$ is $X$)?
This game is a lot harder, and it'll require some new tools. Let me give you a definition and a few lemmas.
Def 1 Let $(X, f)$ be a topological dynamical system. We call the system minimal if given a closed set $V \subset X$ such that $f(V) = V$, then $V = X$ or $V = \emptyset$. (That is, if the only
invariant closed sets are the whole space or the null set.)
Minimality allows us to win the game. Actually, it's equivalent to winning; consider the following proposition:
Let $(X, f)$ be a topological dynamical system. The system is minimal if and only if $\overline{\text{orb}(x)} = X$ for all $x \in X$.
Proof: Assume that $(X, f)$ is minimal. $\overline{\text{orb}(x)}$ is clearly closed, invariant under $f$, and non-empty. Thus it must be X. Conversely, assume X is not minimal, and let $V$ be a
closed, non-empty $f$-invariant subset of X. Let $x \in V$. Since V is $f$-invariant, $\text{orb}(x) \subset V$, and hence its closure can't be all of X. $\Box$
This game is called Topological Dynamics. Another lemma, not stated here, says that any topological space $X$ has a minimal subsystem. The proof is a simple application of Zorn's lemma. We can actually state (and prove!) a key theorem in the subject:
Thrm 2 (Birkhoff Recurrence Theorem) Let $(X,f)$ be a minimal topological dynamical system. Then there is some $x \in X$ and a sequence $1 \leq n_1 < n_2 < \ldots $ of natural numbers such that
$f^{n_k}(x) \rightarrow x$ as $k \rightarrow \infty$.
Let $x \in X$. Since the system is minimal, we know that $\text{orb}(x)$ is dense in $X$. If there is some $n$ such that $f^n(x)=x$, then we're done. Otherwise, $\text{orb}(x)$ dense means that its closure is the whole space, which means that we can find some sequence $y_k \in \text{orb}(x)$ such that $y_k \rightarrow x$. Naturally, each $y_k = f^{n_k}(x)$. So we're done. $\Box$
Who else plays this game? It's actually been used successfully to prove theorems in number theory. More specifically, it's been used to prove a Ramsey-type problem about colouring the integers. I'll
state it.
Thrm 3 (Van der Waerden's Theorem) Suppose that the integers are coloured in r colours. Then for every $k \geq 2$ there is a monochromatic arithmetic progression of length k.
The proof uses a generalization of the above Recurrence Theorem. It also uses compact metric spaces instead of general topological spaces. But, of course, compact metric spaces is a sub-category of
topological spaces, and they're much easier to work with (general topological spaces can be quite pathological.)
So we've seen how to make the game more interesting by looking at more exciting categories. We've stated an equivalent condition to winning the game (minimality), and though I haven't shown you any specific examples of the game, I've given you some of the rewards (e.g. Van der Waerden's Theorem) of winning. Let's look at the third game:
3. Third game: Measurable Systems
Now we're going to look at another category of sets and functions. This time $X$ must be a compact metric space. But we want a bit more structure. $X$ must be measurable, that is, for certain ``well-behaved'' subsets $U \subset X$, we have a function, called a measure, that assigns a ``size'' to $U$, $\mu(U) \in [0,1]$, and we require the size of the total space to be 1, id est, $\mu(X) = 1$. The subsets for which $\mu$ is defined are called the measurable sets. If you've taken measure theory or probability, then you should recognize that we want $X$ to be a probability space. You can check
those courses for more details on this category.
The function $f: X \rightarrow X$ must meet new requirements, too. It must ``live'' on measurable spaces, id est, $f$ must be a measurable map. The definition is similar to that for continuous
functions: $f$ is measurable if for all measurable subsets $U \subset X$, $f^{-1}(U)$ is also a measurable set. The measuring function $\mu$ must also be $f$-invariant. That is, $\mu(f^{-1}(U)) = \mu(U)$ for every measurable set $U$.
We call the triple $(X, \mu, f)$ a measure-preserving system. Like in the last two games, we want to know what happens to $\text{orb}(x)$. But things get very subtle here. Like in the last game, we
had a condition (minimal systems) that told us when we were onto something. We have a similar condition here:
Def 4 We call the measure preserving system $(X, \mu, f)$ ergodic if the only $f$-invariant sets have measure 0 or measure 1.
This game is called Ergodic Theory. Who do you think plays it and why? Can you prove a result similar to our proposition on minimality for ergodicity?
At the core, this game is concerned with the statistical study of paths of motion of points in some space, whether it be the phase space of a Hamiltonian in physics, or the state spaces of Markov chains (random processes) in statistics.
The idea is to think of the action of $n \in \mathbb{N}$ as a particular time, and then $\text{orb}(x) = \{ f^n(x) : n \in \mathbb{N}\}$ is the time-path of state $x$. This allows us to get a ``time average'' and a ``space average'' of functions on the measure preserving system. Let $\phi: X \rightarrow \mathbb{C}$ be an integrable function.
Def 5 Let $(X, \mu, f)$ be a measure preserving system. The time average of $\phi$ starting from $x \in X$ is denoted
$\displaystyle <\phi>_x = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=0}^{n} \phi(f^i(x))$
Def 6 The space average of $\phi$ is denoted
$\displaystyle \overline{\phi} = \int \phi(x) d\mu$
(This is the integral with respect to the measure $\mu$. If you've not seen this before, talk to someone who has taken measure theory.)
To think about these definitions, consider a subset $U \subset X$, and the indicator function on $U$ (that's the function $I: X \rightarrow \mathbb{C}$, $I(x) = 1$ if $x \in U$, otherwise $I(x)=0$). The time average $<I>_x$ is the fraction of time that the orbit of $x$ spends in $U$, while the space average is the probability that a random state $x$ is in $U$.
One of the important results of ergodic theory is the following theorem:
Thrm 7 For an ergodic measure preserving system $(X, \mu, f)$ and $\phi: X \rightarrow \mathbb{C}$ an integrable function, the limit $<\phi>_x$ exists and is equal to $\overline{\phi}$ for almost all $x \in X$.
Since the two are almost always equal, almost all paths cover the state space in the same way. In other words, this theorem tells us that for a sufficiently large amount of time (id est, a
sufficiently large sample) we can learn information about the entire system (or entire population.) Think ``law of large numbers''.
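The theorem can be illustrated numerically with a standard ergodic system, the irrational rotation of the circle (a sketch, not a proof): the time average of an indicator function along one orbit should approach the measure of the set.

```python
import math

# Irrational rotation x -> x + alpha (mod 1) on the circle, which is ergodic.
# The time average of the indicator of U = [0, 0.3) should approach the
# space average mu(U) = 0.3.
alpha = math.sqrt(2) % 1
x, hits, N = 0.0, 0, 200_000
for _ in range(N):
    hits += x < 0.3
    x = (x + alpha) % 1

time_average = hits / N
print(time_average)  # close to 0.3
```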
While we can't play too many ergodic games, I do want to give an example.
Let $X = \mathbb{R}^2/\mathbb{Z}^2$, that is, let $X$ be a torus, and let $\alpha \in \mathbb{R}^2$. Let $f: X \rightarrow X$ be the function $f(x) = x + \alpha$. With a bit of work from measure theory, you can show that this is a measure preserving system, and that it is ergodic exactly when $1$, $\alpha_1$, and $\alpha_2$ are rationally independent.
When $\alpha = (\sqrt{2},\sqrt{3})$ and $x = (0,0)$, then $\text{orb}(x)$ is dense in $X$.
If $\alpha = (\frac{1}{3},\frac{2}{5})$ and $x = (0,0)$, then $\overline{\text{orb}(x)}$ is a finite set of fifteen points.
If $\alpha = (\sqrt{2},\sqrt{2})$ and $x = (0,0)$, then $\text{orb}(x)$ is dense in the subspace $\{ (x,x) : x \in \mathbb{R}/\mathbb{Z} \}$.
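The second case above can be checked in exact arithmetic: with $\alpha = (1/3, 2/5)$ the orbit of $(0,0)$ is periodic with period $\mathrm{lcm}(3,5) = 15$, so it consists of exactly fifteen points.

```python
from fractions import Fraction

# Exact-arithmetic check: rational rotation alpha = (1/3, 2/5) on the torus
# R^2/Z^2 gives a periodic orbit with lcm(3, 5) = 15 distinct points.
alpha = (Fraction(1, 3), Fraction(2, 5))
x = (Fraction(0), Fraction(0))
orbit = set()
for _ in range(100):
    orbit.add(x)
    x = ((x[0] + alpha[0]) % 1, (x[1] + alpha[1]) % 1)

print(len(orbit))  # 15
```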
These weird facts pop out of a much deeper theorem, Ratner's theorem, that I cannot even state, but it roughly says that orbit closures are ``algebraic sets''. Ratner actually has several related
theorems, and these theorems are related to the proof of the Oppenheim conjecture. The Oppenheim conjecture is an important statement about analytic number theory and real quadratic forms in several
variables. It was proven using Ergodic theorems on Lie Groups. The proof of this theorem could actually make a third year project.
4. Sources
I lifted material for this talk from several sources, most prominently:
1. Wikipedia
MathGroup Archive: January 2010 [00530]
Re: restricting interpolating functions to be positive
• To: mathgroup at smc.vnet.net
• Subject: [mg106607] Re: restricting interpolating functions to be positive
• From: Noqsi <jpd at noqsi.com>
• Date: Mon, 18 Jan 2010 02:35:17 -0500 (EST)
• References: <higdjs$kfi$1@smc.vnet.net> <201001131058.FAA06854@smc.vnet.net>
On Jan 17, 5:11 am, Ray Koopman <koop... at sfu.ca> wrote:
> On Jan 15, 12:18 am, schochet123 <schochet... at gmail.com> wrote:
> > On Jan 14, 12:46 pm, DrMajorBob <btre... at austin.rr.com> wrote:
> >> For some data, that works pretty well; for other samples
> >> it has HUGE peaks, reaching far above any of the data:
> > This problem can be avoided by using a transformation that
> > is close to the identity for positive values:
> > trans[x_] = (Sqrt[1 + x^2] + x)/2
> > inv[y_] = (4 y^2 - 1)/(4 y)
> There's a whole family of transformations here:
> trans[c_,x_] = (Sqrt[4c + x^2] + x)/2
> inv[c_,y_] = y - c/y
> Extremely small values of c give results that approach those obtained
> by chopping the interpolation at 0.
These transformations have the property that the interpolation behaves
differently for small and large numbers, with the transition at a
scale given by Sqrt[c]. The Log/Exp pair I suggested is scaleless.
{Sqrt[x],y^2} is another scaleless transformation.
All such transformations will exhibit artifacts from some point of
view given some data set: there is no flawless universal interpolation method.
The {Sqrt[x],y^2} pair is useful for data that's proportional to
counts of independent events (like electric current over a potential
barrier), as it approximately gives each point its proper "weight" in
the analysis, given the correlation of the shot noise variance with
the expectation. However, the interpolation may "bounce" off the x-axis.
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jan/msg00530.html","timestamp":"2014-04-19T17:16:10Z","content_type":null,"content_length":"27127","record_id":"<urn:uuid:a36c5c09-42b0-4a69-b28e-3a946e7675e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
First Order RL Circuit
Complete Response of a First Order RL Circuit
The figure below shows the complete response of an RL circuit to the input voltage
The complete response of the RL circuit is described by the equation
The values of the parameters A and B, as well as the values of the resistance R and inductance L, can be changed using the scrollbars.
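The equation itself appears to have been an image that did not survive extraction. Under the usual first-order convention — A the initial inductor current, B the final (steady-state) current, and time constant tau = L/R (an assumption here, not confirmed by the page) — the response can be sketched as:

```python
import math

def inductor_current(t, A, B, R, L):
    """First-order RL complete response, assuming A is the initial current,
    B the final current, and tau = L / R the time constant (SI units)."""
    tau = L / R
    return B + (A - B) * math.exp(-t / tau)

# Values from item 1: A = 0 mA, B = 4 mA, R = 1000 Ohms, L = 5 mH (tau = 5 us).
A, B, R, L = 0.0, 4e-3, 1000.0, 5e-3
print(inductor_current(0.0, A, B, R, L))     # starts at A
print(inductor_current(50e-6, A, B, R, L))   # essentially B after 10 time constants
```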
1. Set A=0 mA, B=4 mA, R=1000 Ohms and L = 5 mH to see the circuit obtained by applying Thevenin's Theorem in Example 8.3-2.
2. Set A=40 mA, B=20 mA, R=200 Ohms and L = 5 mH. to see the circuit obtained by applying Thevenin's Theorem in Example 8.3-4.
3. Set A=10 mA, R=1000 Ohms and L = 20 mH. Vary B.
4. Set B=10 mA, R=1000 Ohms and L = 20 mH. Vary A.
5. Set A=9 mA, B=9 mA and R=1000 Ohms. Vary L.
6. Set A=-9 mA, B=-9 mA and L = 2 mH. Vary R.
7. Set A = 7 mA, B = -8 mA and R = 4400 Ohms. Predict the value of L required to cause the inductor current to be zero at time t = 20 us. Check this prediction using the scrollbars.
8. Set A = 3 mA, B = -5 mA and L = 85 mH. Predict the value of R required to cause the inductor current to be zero at time t = 15 us. Check this prediction using the scrollbars. | {"url":"http://people.clarkson.edu/~jsvoboda/eta/plots/RL.html","timestamp":"2014-04-18T10:42:06Z","content_type":null,"content_length":"1977","record_id":"<urn:uuid:bfa07db7-e100-422d-9e52-82d10c80c2ea>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Snellville Science Tutor
Find a Snellville Science Tutor
...Although I prefer tutoring for the MCAT, I am also willing to tutor for specific subjects covered on the exam. I am a senior majoring in biology with a minor in chemistry. In addition, I have
three years experience tutoring the subjects on the MCAT.
5 Subjects: including biology, chemistry, physics, MCAT
...I feel qualified to teach a basic beginners’ course, but not an intermediate/advanced level. If your needs require Beginners' Arabic, you may count on me. I studied phonics in college, for
my BA and did well in the subject.
38 Subjects: including anthropology, archaeology, English, reading
...I also taught in the college environment for over 10 years and I am currently teaching Math. I have tutored middle and high school math for 20+ years. I enjoy working with the students and
receive many rewards when I see their successes.
20 Subjects: including ACT Science, calculus, GRE, GMAT
...After earning my B.S. in Mathematics at Georgia State University I was offered the position of Mathematics and Science Lab Supervisor. In that position I continued to tutor students and to
train other tutors as well. I love math, and helping others to learn it.
15 Subjects: including chemistry, biology, calculus, geometry
...I am a Georgia Tech Graduate and have been teaching and tutoring "all subjects" on the SAT and ACT to hundreds of students for over thirty-five years with excellent results. Over the years, I
have been able to assist first time students obtain high initial scores and dramatically improve scores ...
42 Subjects: including ACT Science, reading, English, writing
Woodstock, GA Science Tutors | {"url":"http://www.purplemath.com/snellville_ga_science_tutors.php","timestamp":"2014-04-21T12:49:04Z","content_type":null,"content_length":"23738","record_id":"<urn:uuid:fac6d3fe-853c-4d37-89c3-27a65b2e78d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determinants and the Area of a Triangle
Date: 12/14/98 at 13:50:41
From: Frank Chiaravalli
Subject: Matrices and determinants
The area of a triangle having vertices (A,B), (C,D), and (E,F) is the
absolute value of the determinant of M, where:
| A B 1 |
M = 1/2 | C D 1 |
| E F 1 |
How did the textbook arrive at this formula?
Many thanks for any help you can give us. Quite a few students have
been asking about this.
Date: 12/14/98 at 14:41:59
From: Doctor Anthony
Subject: Re: Matrices and determinants
Draw a figure with vertices (A,B), (C,D), (E,F) in the first quadrant.
For the sake of argument let (A,B) be nearest the y axis, (C,D)
farthest from the y axis and (E,F) between the other two vertices and
lower than either so that it is closest to the x axis.
Now draw verticals from the vertices to the x axis:
So the area of the triangle is:
(1/2)[(B+D)(C-A) - (B+F)(E-A) - (F+D)(C-E)]
(1/2)[BC-BA+DC-DA - BE+BA-FE+FA - FC+FE-DC+DE]
(1/2)[BC - DA - BE + FA - FC + DE]
(1/2)[-AD - BE - CF + ED + FA + BC]
compared with:
|A B 1|
(1/2)|C D 1| = (1/2)[AD + BE + CF - ED - FA - BC]
|E F 1|
and apart from being opposite in sign the two expressions are the same.
So the determinant gives twice the area of the triangle.
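A quick numerical check of this (a Python sketch; the example triangle is arbitrary):

```python
def triangle_area(A, B, C, D, E, F):
    """Area of the triangle with vertices (A,B), (C,D), (E,F):
    half the absolute value of det [[A,B,1],[C,D,1],[E,F,1]],
    expanded along the first row."""
    det = A * (D - F) - B * (C - E) + (C * F - E * D)
    return abs(det) / 2

# A 3-4-5 right triangle with legs along the axes has area (1/2)*3*4 = 6.
print(triangle_area(0, 0, 3, 0, 0, 4))   # → 6.0
```

Traversing the vertices in the opposite order flips the determinant's sign, which is why the formula takes the absolute value.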
- Doctor Anthony, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/55063.html","timestamp":"2014-04-20T03:59:05Z","content_type":null,"content_length":"7137","record_id":"<urn:uuid:8d8a33b6-c697-4046-9402-080685324b15>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
solve the equation: with Logarithm & one word problem
April 19th 2010, 10:40 PM #1
Apr 2010
solve the equation: with Logarithm & one word problem
the stress I had a few days ago has gone down so much since I became a member here; I actually feel prepared for my test coming up, thanks to all of you. Out of the 45 problems, these are the last few
I'm having serious problems with. I wish I understood these instead of bugging you guys, haha.
These last 4 problems I can't get a single answer on: problems #20, #21 and #22.
Problem #24 is a word problem that I can't figure out as well; help with any problems will be much appreciated.
I scanned the packet; here are the few problems I'm having trouble with.
The test is Friday and I want to know the material as well as possible, because I failed the last test. I normally don't fail things; it's just harder for me to understand math than it is for others. It's
very frustrating.
Last edited by murkr; April 20th 2010 at 11:22 PM.
To #20:
Two logarithms are equal if their arguments are equal too:
Solve for x
to #21
$\frac13 \log_2(x+6)=\log_8(3x)$ Use the base-change-formula to get equal bases for all logarithms:
$\frac13 \log_2(x+6)=\frac{\log_2(3x)}{\log_2(8)}$ Since $\log_2(8)=3$ your equation becomes:
$\frac13 \log_2(x+6)=\frac13 \log_2(3x)$ Therefore
Solve for x.
to #22:
$\log_{21}(x+6)=1-\log_{21}(x)~\implies~\log_{21}(x+6)+\log_{21}(x)= 1$
$\log_{21}(x^2+6x)=\log_{21}(21)$ Therefore:
Solve for x. Keep in mind that a logarithm is only defined for positive numbers.
thank you so much, that was all very helpful, i cant thank you enough! very good
i believe you made a small mistake in problem #22 though, you posted (x+6) you must have looked at the problem above, it was (x+4) i just changed the number around and got the correct answer.
in the end i got (x+3)(x-7) and as you said only postive numbers so the answer is 3 im assuming. thank you again!!
can anyone else help me with problems #23 and #24?
thank you so much, that was all very helpful, i cant thank you enough! very good
i believe you made a small mistake in problem #22 though, you posted (x+6) you must have looked at the problem above, it was (x+4) i just changed the number around and got the correct answer.
in the end i got (x+3)(x-7) <<<<< here is your mistake - so we are quit
and as you said only postive numbers so the answer is 3 im assuming. thank you again!!
can anyone else help me with problems #23 and #24?
Thanks for spotting my mistake!
You probably got:
which yields $x = 3 \text{ or } x=-7$
but only x = 3 is a valid answer as you stated correctly
Number 24 isn't really a word problem. They are asking you to solve the equation 1+ 1.5 ln(x+1)= 10. As always, you solve an equation by "backing out": subtract 1 from both sides to get 1.5 ln
(x+1)= 9. Divide both sides by 1.5 to get ln(x+1)= 9/1.5= 6.
Notice that each step has been an "inverse"- the opposite of some operation. Initially, we had 1 added on the left so we do the opposite- subtract 1 from both sides. Then we had 1.5 multiplying
the function of x so again we do the opposite- divide both sides by 1.5. Now we have a natural logarithm. What is the "opposite" of that? The exponential! ln(x) and $e^x$ are "inverse functions":
$ln(e^x)= x$ and $e^{ln(x)}= x$.
Taking the exponential of both sides, $e^{ln(x+1)}= x+ 1= e^6$.
Finally, of course, subtract 1 from both sides. $x= e^6- 1$. If you want a decimal (but only approximate) answer, use a calculator.
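A numeric check of this answer (a Python sketch):

```python
import math

# Solve 1 + 1.5*ln(x + 1) = 10 by undoing each operation in turn.
x = math.exp((10 - 1) / 1.5) - 1     # x = e**6 - 1, about 402.43

# Substituting back should recover the right-hand side, 10.
lhs = 1 + 1.5 * math.log(x + 1)
print(x)
print(abs(lhs - 10) < 1e-9)   # → True
```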
ok thank you so much! that really helped me ALOT
for #23 i got 219 as the answer.. but i cant find what point is on the graph of G.. answers A and C both have 219.. but i cant figure out the second point. thanks again in advance, you are a
really big help for me
Last edited by murkr; April 20th 2010 at 05:52 PM.
Apr 2010 | {"url":"http://mathhelpforum.com/algebra/140223-solve-equation-logarithm-one-word-problem.html","timestamp":"2014-04-20T01:41:02Z","content_type":null,"content_length":"60078","record_id":"<urn:uuid:c88fadee-2d66-4056-8a1d-b940d1691a42>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with Arzela-Ascoli theorem proof
February 28th 2010, 05:02 PM
Help with Arzela-Ascoli theorem proof
Our teacher gave us the standard proof of this theorem a few days ago, the one found in many textbooks: find a countable dense subset of the set on which the functions are
defined, diagonalize, and so on.
The thing is, in this proof I cannot understand why equicontinuity is necessary, rather than just plain uniform continuity of each function in the diagonal sequence.
Any help would be appreciated.
February 28th 2010, 05:13 PM
Assuming this really is the most famous proof...
At one point we need to say "...there exists a $\delta>0$ such that $d(x,x')<\delta\implies |f_n(x)-f_n(x')|<\frac{\varepsilon}{3}$ for all $f_n\in \Delta$". If all the $f_n\in \Delta$ were
merely uniformly continuous we would certainly have a corresponding $\delta_n>0$ which satisfies the right conditions. But, to make sure it works for all $f_n$'s we would need to take $\inf_{n\in
\mathbb{N}}\delta_n$ and there is a very real possibility that this is zero.
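For a concrete instance of how that infimum can vanish (a standard example, not from this thread): each member of the family $f_n(x)=\sin(nx)$ on $[0,2\pi]$ is uniformly continuous, yet the family is not equicontinuous.

```latex
|f_n(x)-f_n(x')| \le n\,|x-x'|
  \quad\Longrightarrow\quad \delta_n = \varepsilon/n \text{ suffices for } f_n \text{ alone,}
\qquad\text{but}\qquad
\Bigl|\,f_n(0)-f_n\!\bigl(\tfrac{\pi}{2n}\bigr)\Bigr| = 1
  \quad\text{while}\quad \tfrac{\pi}{2n}\longrightarrow 0,
\qquad\text{so}\qquad \inf_{n\in\mathbb{N}} \delta_n = 0 .
```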
February 28th 2010, 06:05 PM
The thing is that I don't see why the same delta is needed
You're absolutely right: if the same $\delta$ were needed, then the inf of the deltas could be zero.
But the thing is, I don't see why the same delta is needed. I have the following proof (which I'll summarize in parts to avoid a long text):
The proof follows much on the lines of the proof given in Elements of Real Analysis, Bartle
Given $C(K,\mathbb{R}^{q})$ (the set of continuous functions from K to $\mathbb{R}^{q}$), where K is compact and a subset of $\mathbb{R}^{p}$, we want to show that if $\Im$ is a subset of $C(K,\mathbb{R}^{q})$ and $\Im$ is bounded and equicontinuous, then every sequence of functions in $\Im$ has a convergent subsequence.
We can find a set $A := \left\lbrace x_{1},x_{2},...\right\rbrace$ which is countable and dense in K; then we can find a sequence of functions, which I'll label $\left\lbrace g_{n}\right\rbrace$,
such that for every point in A the sequence $\left\lbrace g_{n}(x_{i})\right\rbrace$ converges in $\mathbb{R}^{q}$.
Then from equicontinuity we can find a $\delta$ which gives $\|g_{n}(x)-g_{n}(y)\|<\epsilon/3$ for every function, whenever $\|x-y\|<\delta$. Taking open balls with center at each
$x_{i}$ and radius $\delta$, the union of these balls covers K, and because K is compact we can extract a finite subcover. Using the fact that the sequences $\left\lbrace g_{n}(x_{i})\right\rbrace$ are
Cauchy in $\mathbb{R}^{q}$, we find that for $x \in K$, $\|g_{n}(x)-g_{m}(x)\|<\epsilon$
Why can't we take open balls with different radius delta (i.e., use uniform continuity)?
February 28th 2010, 06:12 PM
Because, if memory serves, we later need to make a statement about $|f_m(x)-f_n(x)|$, and if we don't have a $\delta$ that works for all of them we're S.O.L.
If my memory doesn't serve, tell me and I'll go look at the proof.
February 28th 2010, 06:41 PM
most probably but I don't see it.
Continuing with the proof:
Then $\exists \left\lbrace z_{1},z_{2},...,z_{r} \right\rbrace$ such that K is contained in the union of the open balls with radius delta and center at each of these points. Then (because
there is a finite number of points) we can find an N such that $\|g_{n}(z_{i})-g_{m}(z_{i})\|<\epsilon/3$ for all $n,m\geq N$ (using the norm of $\mathbb{R}^{q}$); then you just add things up using the triangle
inequality and you get the result.
February 28th 2010, 06:44 PM
I need to go back and look at the proof, I'll get back to you (it's a pretty long proof if I remember correctly).
I'm sure someone will answer it before I do though.
March 29th 2010, 10:23 AM
I now know why
We cannot take different deltas because then the union of the open balls for these deltas might not cover K. For instance in [0, 1], if our countable set does not include 1, we could have open
balls that get tinier as we choose x closer and closer to 1, but they never include 1. | {"url":"http://mathhelpforum.com/differential-geometry/131298-help-arzela-ascoli-theorem-proof-print.html","timestamp":"2014-04-25T00:44:42Z","content_type":null,"content_length":"21811","record_id":"<urn:uuid:d3536278-831e-432c-8bf7-090b14c54e4a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Sagar D.
My name is Sagar D. I am a tutor for Trigonometry, College Algebra, Pre-calculus, Geometry, General Physics, General Chemistry, Biology, and Microbiology. I am also a tutor at California State
University at Los Angeles since September, 2012.
In May 2011, I graduated from University of Texas at Arlington (UT Arlington) with my Baccalaureate degree in Biology with minor in Mathematics. At UT Arlington, I worked as a tutor for 14 months. I
used to tutor for following subjects: PRE-ALGEBRA, COLLEGE ALGEBRA, TRIGONOMETRY, STATISTICS, CALCULUS I, CALCULUS II, PHYSICS I, PHYSICS II, GENERAL BIOLOGY, MICROBIOLOGY, GENERAL CHEMISTRY, and
ORGANIC CHEMISTRY. I have taken some high level courses that help me to be better tutor. Some of the relevant courses are: Anatomy and Human Physiology I and II, Biochemistry, Calculus III,
Differential Equations and Linear Algebra, Genetics, Immuno-biology, and Toxicology. During my undergraduate studies, I was also an undergraduate Teaching/Research Assistant for the Microbiology
lab. My work experience required me to deal with a variety of students, which enhanced my skill at interpreting and explaining concepts in different ways so that each student can
understand well. My dedication and honest work ethic earned me promotions from basic to Level I and then to Level II tutor.
Currently, I am doing my master's in Applied Biotechnology at California State University. Along with my studies, I have also been tutoring at CSU Los Angeles since September 2012, so I already have more
than 3 years of tutoring experience in the above-mentioned subjects. Apart from my studies, I enjoy reading scientific journals, books, and biographies of scientists. I started home
tutoring, and have been tutoring actively, after finishing the first year of my master's. I thoroughly enjoy teaching, and it has always been my passion. The interest might have something to do with my family
background, as my parents are also primary school teachers. I like helping students understand difficult concepts and develop problem-solving skills. My aim in life is to earn a doctorate
in the genetics/molecular biology field and work as a university professor. I would also like to have my own research lab.
Sagar's subjects | {"url":"http://www.wyzant.com/Tutors/CA/South-Gate/8001236/?g=3JY","timestamp":"2014-04-16T14:26:54Z","content_type":null,"content_length":"88974","record_id":"<urn:uuid:caa62533-94da-49a8-9c76-6f2e643585c2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
- The American Saddlebred Today Horse Care Wiki is Under Development
An amortization calculator is a tool offered for free on the internet and on lenders' websites. It calculates a loan's amortization — how the repayment breaks down over time. When you
shop for a loan, this tool can help you in many ways to find the best possible loan for your needs.
How to Use an Amortization Calculator
To use an amortization calculator, you first have to find one. They are free of charge and available almost everywhere, so you don't have to wait for a lender to provide one. You will need the following data:
The principal amount of money you will be borrowing from the lender, whether to buy a house or anything else.
The interest rate the lender has offered you. To know what rate applies to you, compare the rates of different lenders or get free quotes from several companies, and enter the rate you qualify for.
The terms of the loan you're applying for. The calculator needs the length of the loan and your schedule of payments for repaying the borrowed funds.
Provide all of this information accurately to get a reliable result.
What an Amortization Calculator Will Produce
After some figuring, the amortization calculator will produce the following. Here are the things to consider:
Your monthly payment for the loan, based on the information you provided.
The breakdown of each payment: how much interest and how much principal is paid in every monthly payment. Expect the interest portion to be higher during the loan's early years and to fall as more of the principal is paid off.
The total interest cost for the loan. This is often a larger number than anyone wants to see.
The total cost of the loan, including both the interest and the principal amount. This is the full amount you'll pay for the loan.
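The figures in that list come from the standard fixed-rate annuity formula. A minimal sketch (in Python; the loan numbers below are made up for illustration, not tied to any lender):

```python
def amortization(principal, annual_rate, years):
    """Return (monthly_payment, total_interest) for a fixed-rate loan,
    using the standard annuity formula M = P*r*(1+r)**n / ((1+r)**n - 1)."""
    n = years * 12                 # number of monthly payments
    r = annual_rate / 12           # monthly interest rate
    if r == 0:
        payment = principal / n    # zero-interest edge case
    else:
        payment = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)
    total_interest = payment * n - principal
    return payment, total_interest

# Example: $200,000 borrowed at 6% for 30 years.
payment, interest = amortization(200_000, 0.06, 30)
print(round(payment, 2))    # monthly payment, roughly $1199
print(round(interest, 2))   # total interest paid over the life of the loan
```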
This tool gives you the needed information instantly. You can also go back and modify the loan inputs to suit your needs. For example, the loan term can be
lengthened to see whether that lowers your monthly payment. If you find that the interest is too high, you can go back to the beginning, find other loans that offer lower
rates, and refigure. House prices can also be compared. Best of all, you will know each of these things before you commit. | {"url":"http://american-saddlebred.com/wiki/index.php?title=EcheverriaLeverett813","timestamp":"2014-04-18T23:17:28Z","content_type":null,"content_length":"14284","record_id":"<urn:uuid:aa9b9444-779e-4a8c-acda-f8b22b8cd228>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Marietta, GA Math Tutor
Find a Marietta, GA Math Tutor
...Varying teaching methods are used to demystify those abstract concepts that impede the understanding of mathematical concepts, by moving from simple to complex and from what the learner knows
to the instructional objectives. Questions are encouraged and steps reviewed to give the learner an oppo...
12 Subjects: including calculus, elementary (k-6th), discrete math, C
I have 6+ years of in-class language teaching experience at university level, plus 7+ years of interpretation / translation / personalized tutoring experiences servicing a wide range of students
and clients. Students range from age 4 to 60+. Clients include many Chinese business delegations, also C...
9 Subjects: including SAT math, Chinese, SAT writing, grammar
...I have designed all of my websites from scratch using HTML, Frames, PHP, MySQL databases, GIF animations, and Photoshop. I develop unique graphics and buttons for the websites that I design.
Sample websites: 3rd World Bikers Club in Georgetown SC (animated motorcycles, 3 worlds background, cust...
59 Subjects: including precalculus, SAT math, reading, English
...I am fully certified in this area in the state of Georgia. I have had much success in developing an individualized teaching method for ADD/ADHD students and helping them achieve their goals. I
have taught in the special RESA program in Georgia for students that have psychiatric/severe special needs including aspergers, autism, bi-polar for more than 8 years.
47 Subjects: including algebra 1, algebra 2, ACT Math, SAT math
...I earned a master's degree in special education, specifically behavioral disorders. One of my skills is breaking down subject matter into smaller chunks that make it meaningful to the student.
The best tutorial method for special needs students is the hands on, kinesthetic approach to learning.
18 Subjects: including algebra 2, English, algebra 1, prealgebra
Woodstock, GA Math Tutors | {"url":"http://www.purplemath.com/marietta_ga_math_tutors.php","timestamp":"2014-04-16T19:44:43Z","content_type":null,"content_length":"23890","record_id":"<urn:uuid:62291f52-2986-469e-8d00-d04e973ea16c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
g05hkc Univariate time series, generate n terms of either a symmetric GARCH process or a GARCH process with asymmetry of the form (ε[t-1] + γ)^2
g05hlc Univariate time series, generate n terms of a GARCH process with asymmetry of the form (|ε[t-1]| + γ ε[t-1])^2
g05hmc Univariate time series, generate n terms of an asymmetric Glosten, Jagannathan and Runkle (GJR) GARCH process
g13fac Univariate time series, parameter estimation for either a symmetric GARCH process or a GARCH process with asymmetry of the form (ε[t-1] + γ)^2
g13fbc Univariate time series, forecast function for either a symmetric GARCH process or a GARCH process with asymmetry of the form (ε[t-1] + γ)^2
g13fcc Univariate time series, parameter estimation for a GARCH process with asymmetry of the form (|ε[t-1]| + γ ε[t-1])^2
g13fdc Univariate time series, forecast function for a GARCH process with asymmetry of the form (|ε[t-1]| + γ ε[t-1])^2
g13fec Univariate time series, parameter estimation for an asymmetric Glosten, Jagannathan and Runkle (GJR) GARCH process
g13ffc Univariate time series, forecast function for an asymmetric Glosten, Jagannathan and Runkle (GJR) GARCH process
© The Numerical Algorithms Group Ltd, Oxford UK. 2002 | {"url":"http://www.nag.com/numeric/CL/manual/html/indexes/kwic/garch.html","timestamp":"2014-04-18T00:34:19Z","content_type":null,"content_length":"5056","record_id":"<urn:uuid:2079e04e-e491-48be-859b-a7070e6f0e73>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
SAS-L archives -- May 2008, week 3 (#167) LISTSERV at the University of Georgia
Date: Fri, 16 May 2008 10:35:14 -0700
Reply-To: Dale McLerran <stringplayer_2@YAHOO.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Dale McLerran <stringplayer_2@YAHOO.COM>
Subject: Re: PROC GLIMMIX--How to Check Linearity Assumption
In-Reply-To: <81074a80-9057-4b89-b627-6239b8762e3a@i76g2000hsf.googlegroups.com>
Content-Type: text/plain; charset=iso-8859-1
--- Shiling Zhang <shiling99@YAHOO.COM> wrote:
> > How do I check that the 50K obs of the above predictors are
> > linearly related to the FRAUD variable?
> It is neither necessary nor sufficient. In fact, the model assumes
> that
> {the log-odds of FRAUD}, NOT FRAUD itself, is linearly related to your
> linear predictors.
> Here is a way to views it.
> 1) Bin a predictor into, say 30 bins. i=1 to 30
> 2) Calculate logodds of FRAUD for each bin.
> 3) plot logodds of FRAUD against bined predictor values(mean,
> median)
> Based on what you see, you may take a proper transformation. One is
> parametric and the other is non-parametric. If you have a large
> number
> of events (FRAUD), the non-parametric way is preferred. ......
> HTH
Yes, this is most certainly one way to examine the linearity
assumption. This approach works best if you have only a single
predictor variable. In the multivariate setting where the effect
of each predictor is conditional on the effects of other predictors,
then this approach may not work as well. Also, I don't see the need
for any more than 10 bins in most circumstances.
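The binning recipe quoted above can be sketched outside SAS too — here in plain Python (the simulated data, true coefficients, and bin count are all made up for illustration):

```python
import math
import random

random.seed(0)

# Simulated data: the true log-odds is linear in x, so the binned
# empirical log-odds plotted against x should look roughly linear.
n = 50_000
data = []
for _ in range(n):
    x = random.gauss(0, 1)
    p = 1 / (1 + math.exp(-(-2 + 0.8 * x)))   # true model: logit(p) = -2 + 0.8*x
    data.append((x, 1 if random.random() < p else 0))

# Step 1: sort by the predictor and cut into 10 equal-count bins.
data.sort()
k = 10
bins = [data[i * n // k:(i + 1) * n // k] for i in range(k)]

# Steps 2-3: empirical log-odds per bin against the bin's mean x.
logodds = []
for b in bins:
    rate = sum(y for _, y in b) / len(b)
    logodds.append(math.log(rate / (1 - rate)))
    print(f"mean x {sum(x for x, _ in b)/len(b):6.2f}   log-odds {logodds[-1]:6.2f}")
```

In real use, x would be a model predictor and y the FRAUD indicator; a transformation is suggested when the plotted points bend away from a straight line.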
Another way to examine whether linearity holds is to include terms
in your model which represent some departure from linearity. If
there is significant improvement in the model fit when these terms
are included, then the assumption of linearity in the predictors
does not hold. Often, this is performed simply by including
polynomials of your predictors. A little more sophisticated approach
may be to employ a spline basis for representing nonlinearity. My
favorite spline basis is to use restricted cubic splines as discussed in
Harrell, Frank. "Regression Modeling Strategies: With Applications
to Linear Models, Logistic Regression, and Survival Analysis."
Springer, 2001.
There are a couple of SAS macros available for generating splined
variables. Go to
and follow the links from there.
Dale McLerran
Fred Hutchinson Cancer Research Center
mailto: dmclerra@NO_SPAMfhcrc.org
Ph: (206) 667-2926
Fax: (206) 667-5977 | {"url":"http://www.listserv.uga.edu/cgi-bin/wa?A2=ind0805c&L=sas-l&O=D&P=19231","timestamp":"2014-04-19T07:01:16Z","content_type":null,"content_length":"11652","record_id":"<urn:uuid:8d7191b0-1cc8-4c27-b067-d72291151d96>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Snellville Science Tutor
Find a Snellville Science Tutor
...Although I prefer tutoring for the MCAT, I am also willing to tutor for specific subjects covered on the exam. I am a senior majoring in biology with a minor in chemistry. In addition, I have
three years experience tutoring the subjects on the MCAT.
5 Subjects: including biology, chemistry, physics, MCAT
...I feel qualified to teach a basic beginners’ course, but not an intermediate/advanced level. If your needs require Beginners' Arabic, you may count on me. I studied phonics in college for
my BA and did well in the subject.
38 Subjects: including anthropology, archaeology, English, reading
...I also taught in the college environment for over 10 years and I am currently teaching Math. I have tutored middle and high school math for 20+ years. I enjoy working with the students and
receive many rewards when I see their successes.
20 Subjects: including ACT Science, calculus, GRE, GMAT
...After earning my B.S. in Mathematics at Georgia State University I was offered the position of Mathematics and Science Lab Supervisor. In that position I continued to tutor students and to
train other tutors as well. I love math, and helping others to learn it.
15 Subjects: including chemistry, biology, calculus, geometry
...I am a Georgia Tech Graduate and have been teaching and tutoring "all subjects" on the SAT and ACT to hundreds of students for over thirty-five years with excellent results. Over the years, I
have been able to assist first time students obtain high initial scores and dramatically improve scores ...
42 Subjects: including ACT Science, reading, English, writing
Nearby Cities With Science Tutor
Alpharetta Science Tutors
Buford, GA Science Tutors
Decatur, GA Science Tutors
Duluth, GA Science Tutors
Dunwoody, GA Science Tutors
Grayson, GA Science Tutors
Johns Creek, GA Science Tutors
Lawrenceville, GA Science Tutors
Lilburn Science Tutors
Loganville, GA Science Tutors
Norcross, GA Science Tutors
Roswell, GA Science Tutors
Stone Mountain Science Tutors
Tucker, GA Science Tutors
Woodstock, GA Science Tutors | {"url":"http://www.purplemath.com/snellville_ga_science_tutors.php","timestamp":"2014-04-21T12:49:04Z","content_type":null,"content_length":"23738","record_id":"<urn:uuid:fac6d3fe-853c-4d37-89c3-27a65b2e78d9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copyright © University of Cambridge. All rights reserved.
'Brimful' printed from http://nrich.maths.org/
For revolution of a curve $y=f(x)$ about the $x$ axis between $0$ and $a$, the volume of revolution is
$$V=\pi\int^a_0y^2 dx$$
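As a quick sanity check of this formula (an added example, not part of the original problem): rotating $y=x$ about the $x$ axis between $0$ and $a$ sweeps out a cone of base radius $a$ and height $a$, and
$$V=\pi\int^a_0 x^2\,dx=\frac{\pi a^3}{3},$$
which agrees with the familiar $\frac{1}{3}\pi r^2 h$ with $r=h=a$.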
How can we alter this to work out the volume obtained by rotating about the $y$ axis? | {"url":"http://nrich.maths.org/6426/clue?nomenu=1","timestamp":"2014-04-21T14:59:07Z","content_type":null,"content_length":"3145","record_id":"<urn:uuid:c7027317-aad4-466d-a2c5-f82d3214b205>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: [SI-LIST] : Question on transmission line simulation by AGILE NT ADS /Momentum
From: ABOULHOUDA,SAMIR (A-England,ex1) (samir_aboulhouda@agilent.com)
Date: Mon Apr 30 2001 - 06:23:49 PDT
To answer Nianci's question: LineCalc vs. Momentum? I would say it
depends on the frequency range, the dimensions and, above all, on the
complexity of your circuit. First of all, we should be aware that sims in
Momentum are less accurate at low frequencies than sims at high frequencies.
At high frequency and for long structures, where radiation effects can no
longer be neglected, Momentum is definitely more accurate and may be
recommended. To overcome the limitation at low frequencies, there is now
something called "Momentum RF Modes". With Momentum RF Mode, you can expect
simulation solutions stable down to DC. Momentum RF mode is usually the more efficient
mode when a circuit is electrically small, complex, and does not radiate...
and due to the difference of method used to mesh the structures, Momentum RF
Mode requires less CPU time and memory.
-----Original Message-----
From: Dan Swanson [mailto:DSWANSON@BartleyRF.com]
Sent: Monday, April 30, 2001 1:08 PM
To: 'Nianci Wang'; 'si-list@silab.eng.sun.com'
Subject: RE: [SI-LIST] : Question on transmission line simulation by HP
AD S /Momentum
In LineCalc, the basic calculation for microstrip impedance
assumes an infinitely thin strip. To account for thickness, a
corrected strip width is computed internally. The equations they
use were developed by curve fitting to a large data base of
field-solver runs.
In Momentum, the basic calculation is again for an infinitely thin
strip. If they ask you for thickness at any point (I don't remember),
it will be a correction to the surface impedance for loss. You can
approximate thick strips by adding a fictitious air layer that is
the thickness of your strip and then add a second layer of metal
on top. Depending on the length in wavelengths, you may want
to stitch the sides together at a few points. For large, complicated
circuits this is not very efficient because you have doubled the
number of unknowns and forced the solution time to be 6 to 8 times longer.
In the end, you are comparing two field-solvers.
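For reference, the kind of closed-form estimate that LineCalc starts from can be sketched in a few lines. The snippet below is an illustrative sketch using the textbook Hammerstad-style zero-thickness microstrip approximations — it is not the actual LineCalc or Momentum code, both of which add further corrections (e.g. the thickness-adjusted effective width described above):

```python
import math

def microstrip_z0(w, h, er):
    """Characteristic impedance (ohms) of a zero-thickness microstrip of
    width w over a substrate of height h and relative permittivity er,
    using the classic Hammerstad closed-form approximations."""
    u = w / h
    if u <= 1:
        eeff = (er + 1) / 2 + (er - 1) / 2 * ((1 + 12 / u) ** -0.5
                                              + 0.04 * (1 - u) ** 2)
        return 60 / math.sqrt(eeff) * math.log(8 / u + u / 4)
    eeff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 / u) ** -0.5
    return (120 * math.pi / math.sqrt(eeff)
            / (u + 1.393 + 0.667 * math.log(u + 1.444)))

# A roughly 50-ohm line on FR4-like material: w/h around 2, er about 4.4
print(round(microstrip_z0(w=2.0, h=1.0, er=4.4), 1))  # about 49 ohms
```

Narrowing the strip (smaller w/h) raises the impedance, as expected; a field-solver run is ultimately the reference these closed forms were curve-fitted against.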
Dan Swanson EMAIL: d.swanson@ieee.org
Bartley RF Systems, Inc. TEL: 978-834-4085
37 South Hunt Road FAX: 978-388-7077
Amesbury, MA 01913
> -----Original Message-----
> From: Nianci Wang [SMTP:nwang@amcc.com]
> Sent: Friday, April 27, 2001 4:24 PM
> To: 'si-list@silab.eng.sun.com'
> Subject: [SI-LIST] : Question on transmission line simulation by HP
> ADS /Momentum
> Hi,
> I have a question on simulating finite metal thickness transmission line
> by
> using HP ADS Line Calc and Momentum.
> Basically I did simulation by using both line Calc and Momentum for
> several
> different structures, I found that Momentum always give me a higher
> characteristic impedance than that of Line Calc. The thicker the metal is,
> the
> larger the difference between two simulation results. I did talk with HP
> ADS
> help line and got the answer was that Momentum gave more accurate
> simulation
> results. I wonder if any one did the simulation by each line Calc or
> Momentum
> and compared the result with that of the measurement.
> Thank you
> Nianci
> **** To unsubscribe from si-list or si-list-digest: send e-mail to
> majordomo@silab.eng.sun.com. In the BODY of message put: UNSUBSCRIBE
> si-list or UNSUBSCRIBE si-list-digest, for more help, put HELP.
> si-list archives are accessible at http://www.qsl.net/wb6tpu
> ****
This archive was generated by hypermail 2b29 : Thu Jun 21 2001 - 10:11:46 PDT | {"url":"http://www.qsl.net/wb6tpu/si-list/0684.html","timestamp":"2014-04-19T19:39:45Z","content_type":null,"content_length":"9723","record_id":"<urn:uuid:66e9cc68-6893-45f5-9269-154268a38a36>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rate of change calculas
April 17th 2008, 07:54 PM #1
Mar 2008
Rate of change calculas
Hey this is a double post im sorry the first post did not work.
I just can't work this one out. I was told I need to use the first derivative test, but I don't know how to apply it to this question.
Help would be much appreciated as I have not done this level of maths in some time.
I am also aware i speled calculus wrong :P
The temperature of a chemical reaction is given by the formula
where T is the temperature and t is the time in seconds since the reaction was started.
Draw a neat graph showing the reaction temperature during the first eight seconds of the reaction.
What is the maximum temperature reached during this reaction?
What is the maximum rate of increase in temperature achieved by this reaction? At what time is this rate achieved?
Last edited by Bastler; April 17th 2008 at 08:10 PM.
What work have you done so far?
I can give you hints..
a) Do you know what the function $f(x)=e^{-x}$ looks like in a plot of f(x) vs. x? If so, you should have some intuition into the nature of your temperature decay function.
What happens at t=0? You should be able to find the 'y' intercept.
Have you seen a Gaussian before? It is of similar form, $e^{-x^2}$; perhaps your graph will look like a bell curve?
b) How do we find max's and min's in calculus? This is where you should take the first derivative, set it equal to 0.
c) Essentially they're asking at what point is the slope greatest. When you hear "slope of a function", your mind should think 'derivatives!'
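Since the formula itself did not survive in this archive, here is a numeric sketch of the whole procedure using a made-up stand-in function, T(t) = 30·t·e^(−t/2) — purely hypothetical, so substitute the real formula from the assignment. It applies the first derivative test numerically by scanning a fine grid over the first eight seconds:

```python
import math

def T(t):
    # Hypothetical reaction temperature; replace with the real formula.
    return 30 * t * math.exp(-t / 2)

ts = [i / 1000 for i in range(8001)]        # 0..8 s in 1 ms steps
temps = [T(t) for t in ts]

# (b) maximum temperature: where T'(t) changes sign, i.e. the grid max
t_max = max(ts, key=T)
print(t_max, round(T(t_max), 2))            # here: t = 2 s, T ≈ 22.07

# (c) maximum rate of increase: the largest forward-difference slope
slopes = [(temps[i + 1] - temps[i]) / 0.001 for i in range(8000)]
i_best = max(range(8000), key=lambda i: slopes[i])
print(ts[i_best], round(slopes[i_best], 1)) # fastest warming at the start
```

For this stand-in function, T' = 30·e^(−t/2)·(1 − t/2) confirms the grid result: the peak is at t = 2 with T = 60/e, and the slope is largest at t = 0.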
April 17th 2008, 09:26 PM #2
Feb 2008 | {"url":"http://mathhelpforum.com/calculus/34991-rate-change-calculas.html","timestamp":"2014-04-17T07:01:01Z","content_type":null,"content_length":"32522","record_id":"<urn:uuid:3a96ec13-b197-4977-bac5-85ac239e5362>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Term Symbols
The hierarchy of labels for the electrons of multi-electron atoms is configuration, term, level, and state.
The term uses the multiplicity 2S + 1, total orbital angular momentum L, and total angular momentum J. It assumes that all the spins combine to produce S, all the orbital angular momenta couple to
produce L, and then the spin and orbital terms combine to produce a total angular momentum J. The angular momentum symbol follows the spectroscopic notation scheme.
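The coupling arithmetic just described can be made concrete. The sketch below is an editorial illustration (not HyperPhysics code): it enumerates the terms for two non-equivalent electrons, where every combination of S, L, and J is allowed; equivalent electrons, for which the exclusion principle removes some combinations, are deliberately not handled:

```python
def terms_two_electrons(l1, l2):
    """Enumerate LS-coupling term symbols (2S+1, L letter, allowed J)
    for two NON-equivalent electrons with orbital quantum numbers l1, l2.
    (Equivalent electrons need extra Pauli bookkeeping, omitted here.)"""
    letters = "SPDFGHIK"
    terms = []
    for S in (0, 1):                              # two spin-1/2 electrons
        for L in range(abs(l1 - l2), l1 + l2 + 1):
            Js = [J for J in range(abs(L - S), L + S + 1)]
            terms.append((2 * S + 1, letters[L], Js))
    return terms

# Example: a 2p3p configuration (two non-equivalent p electrons, l = 1)
for mult, letter, Js in terms_two_electrons(1, 1):
    print(f"{mult}{letter}, J = {Js}")
```

For two non-equivalent p electrons this lists the six terms 1S, 1P, 1D, 3S, 3P, 3D, with the 3P term splitting into the levels J = 0, 1, 2.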
Different terms will in general have different energies, and the order of those energies is usually that given by Hund's Rules, although there are exceptions. The different terms for a given
configuration are obtained by forming the different combinations of angular momenta for the electrons outside closed shells, making sure the Pauli Exclusion Principle is obeyed. | {"url":"http://hyperphysics.phy-astr.gsu.edu/hbase/atomic/term.html","timestamp":"2014-04-17T10:18:45Z","content_type":null,"content_length":"2266","record_id":"<urn:uuid:102c50d7-c2f1-4bb2-9be9-9797f2e71a24>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
Everyday Math Unit 2 - Addition & Subtraction
Need a little help practicing your addition and subtraction facts or with bigger addition/subtraction problems? Try one of these websites to help practice those skills!
Math Mayhem is a website where you can practice your math facts (addition, subtraction, multiplication, or division) against other people on the web. You can input your first name or leave it as
guest. It's a race against the clock to answer as many problems as you can in the time allowed. At the end you'll see your score compared to others who played at the same time. It's a lot of fun! Try it out by
clicking here!
"Math Baseball" is a game that Everyday Math has as well as the great website, FunBrain. Here you can play the game at home, by yourself! You can choose addition, subtraction, multiplication, or
division. For each right answer you get you a hit. FunBrain will decide if that hit is a single, double, triple, or home run. If you get an answer wrong then you get an out. Just like in baseball,
three outs and the inning (game) is over. Try "Math Baseball" here!
If you need a visual for numbers, try using virtual base ten blocks! Your job is to drag base ten block pieces (cubes, rods, and flats) to create numbers. Hands at the bottom will clap and you'll
hear cheers if you get it correct! It's a good way to practice what digits in a number mean! Practice using base ten blocks here!
Is rounding hard/confusing for you? Here is a site where you can play the "Sea Shell Rounding" activity. Make the numbers that end in 1 through 4 into the next lower number that ends in 0. For
example 84 rounded to the nearest ten would be 80. Numbers that end in a digit of 5 or more should be rounded up to the next ten. The number 58 rounded to the nearest ten would be 60. Look at the
number on each shell. Click on the nearest ten. Collect a sand dollar each time you choose the correct answer! Try it by clicking here! | {"url":"http://henneberrythirdgrade.blogspot.com/2012/07/everyday-math-unit-2-addition.html","timestamp":"2014-04-19T22:05:50Z","content_type":null,"content_length":"72944","record_id":"<urn:uuid:c870fbbd-48fa-4c5b-8a01-941ac3d79d65>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simple linear equations
Simple linear equations resources
Linear relationships
In many business applications, two quantities are related linearly. This means a graph of their relationship forms a straight line. This leaflet discusses one form of the mathematical equation which
describes linear relationships.
Simple linear equations
This leaflet shows how simple linear equations can be solved by performing the same operations on both sides of the equation.
Solving linear equations
Equations always involve one or more unknown quantities which we try to find when we solve the equation. The simplest equations to deal with are linear equations. On this leaflet we describe how
these are solved.
Solving linear equations
This leaflet explains how simple linear equations are solved. (Engineering Maths First Aid Kit 2.12) | {"url":"http://www.mathcentre.ac.uk/types/quick-reference/linearequations/","timestamp":"2014-04-17T15:49:57Z","content_type":null,"content_length":"8503","record_id":"<urn:uuid:14d23dcd-3289-40b5-90a2-8055e69e7e3e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Graphs on surfaces and their applications. Appendix by Don B. Zagier.
(English) Zbl 1040.05001
Encyclopaedia of Mathematical Sciences 141(II). Berlin: Springer (ISBN 3-540-00203-0/hbk). xv, 455 p. EUR 101.60 (2004).
The goal of this book is to explain the interrelations between three distinct ways to consider an embedded graph: as a topological object, as a sequence of permutations, as a way of representing a
ramified covering of the sphere by a compact two-dimensional manifold.
Chapter 1 introduces the objects of study, namely constellations (finite sequences of permutations), ramified coverings of the sphere, embedded graphs in a variety of forms, and Riemann surfaces.
Chapter 2 is concerned with dessins d’enfants and the combinatorial and geometric consequences of Belyi’s theorem which asserts the faithfulness of a certain group action on maps.
In Chapter 3 the subject of matrix integrals in map enumeration is introduced. The main emphasis here is the interpretation of the matrix integrals in combinatorial terms as well as the encoding of a
combinatorial problem in terms of a matrix integral.
The method of matrix integrals is a tool to compute the Euler characteristic of moduli spaces of complex algebraic curves, which are the subject of Chapter 4.
A general study of meromorphic functions is provided in Chapter 5. Enumerative questions are related to algebraic geometry and singularity theory via the Lyashko-Looijenga mapping, while the flexible
classification of meromorphic functions is related to braid group action on constellations.
Chapter 6 explains the structure of the Hopf algebra on chord diagrams interpreted as one-vertex maps.
An appendix by Don Zagier provides the reader with a short course on representation and character theory of finite groups, with applications to the enumeration of constellations.
The authors introduce their objects of study, namely graphs and maps, with great care and detail. A wealth of applications is pointed out, but of course these applications are sometimes introduced
with minimal or no explanation. The Appendix provides some elegant and concise proofs of results used in the main body of the book and the bibliography contains 313 entries. Well-chosen examples and
strategically placed exercises support the reader in gaining an understanding of graphs on surfaces and their fascinating applications.
05-02 Research monographs (combinatorics)
05C10 Topological graph theory
05C30 Enumeration in graph theory
14H55 Riemann surfaces; Weierstrass points; gap sequences
15A52 Random matrices (MSC2000)
20Bxx Permutation groups
20F36 Braid groups; Artin groups
30Fxx Riemann surfaces (one complex variable)
32G15 Moduli of Riemann surfaces, Teichmüller theory
57M15 Relations of manifolds with graph theory
57M12 Special coverings
57M27 Invariants of knots and 3-manifolds
81T18 Feynman diagrams
81T40 Two-dimensional field theories, conformal field theories, etc. | {"url":"http://zbmath.org/?q=an:1040.05001&format=complete","timestamp":"2014-04-18T00:23:30Z","content_type":null,"content_length":"24106","record_id":"<urn:uuid:bb4ad905-4830-4a73-848c-f9b10574cb51>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
N Richlnd Hls, TX Calculus Tutor
Find a N Richlnd Hls, TX Calculus Tutor
...It is currently believed to be the second largest family of flowering plants (only the Asteraceae is larger), with between 21,950 and 26,049 currently accepted species, found in 880 genera.[1]
[2] The number of orchid species equals more than twice the number of bird species, and about four times ...
93 Subjects: including calculus, chemistry, physics, English
...Gavin M. I have taught all areas of math for years 8, 10 and 11. I have also taught applied math for year 12. I have completed a double degree in Mechanical Engineering (Mechatronics) and
Computer Science at the University of Melbourne.
56 Subjects: including calculus, chemistry, physics, statistics
...He lives, along with his wife and 3-year-old daughter, near Dallas, TX. He has traveled quite a bit, both in the United States and abroad. He and his family spent three years in Mexico,
initially serving at an orphanage in Chihuahua.
37 Subjects: including calculus, reading, chemistry, Spanish
...I have used physics as part of my research for over 20 years. My mathematical background includes algebra, geometry, trigonometry, 3 semesters of calculus, differential equations and linear
algebra. In addition, my research required that I use my math skills on a daily basis to analyze data and design experiments.
55 Subjects: including calculus, chemistry, reading, statistics
...I have been tutoring these subjects for the past 25 years. I passed the ACT exam 25 years ago with a score of 29 as a requirement before entering BYU. I have since helped several students
prepare for this college entrance exam.
48 Subjects: including calculus, chemistry, physics, ASVAB | {"url":"http://www.purplemath.com/N_Richlnd_Hls_TX_calculus_tutors.php","timestamp":"2014-04-19T15:17:34Z","content_type":null,"content_length":"24196","record_id":"<urn:uuid:2363cfa3-6647-4e48-b400-d8254801a19b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finite interpolation by a nondecreasing polynomial
up vote 10 down vote favorite
Let $x_1 < x_2 < \ldots < x_n$ and $y_1 < y_2 < \ldots < y_n$ be two sequences of $n$ real numbers. It is well known that there are polynomials that "interpolate" in that $f(x_i)=y_i$ for all $i$,
and the Lagrange interpolating polynomial even warrants a solution of degree $< n$. Now, what happens if we want the polynomial $f$ to be nondecreasing on the interval $[x_1,x_n]$? Is there always
a solution, and is there a bound on the degree also?
approximation-theory polynomials
add comment
4 Answers
active oldest votes
This problem has appeared before in literature and is now well understood, I guess. The general version is when you have no restriction on the $y_i$'s and you ask for an interpolating
polynomial that is monotone on each sub-interval $[x_i,x_{i+1}]$. The first paper proving the existence of such a polynomial is:
W.Wolibner, "Sur un polynom d'interpolation", Colloq. Math (2) 1951, 136-137
but it is a non-constructive proof, as it uses the Weierstrass approximation theorem much like the answer given by Harald Hanche-Olsen above. Another proof for the case $0=y_0\le \cdots \le y_n=1$
is given in "Polynomial Approximations to Finitely Oscillating Functions" by W.J. Kammerer (Theorem 4.1) and the non-constructive aspect of his proof is the use of uniform
convergence of appropriate Bernstein polynomials. In "Piecewise monotone polynomial interpolation", S.W. Young proves the same theorem and makes the final remark that the existence of
such a monotone interpolating polynomial is in fact equivalent to the Weierstrass theorem. On the other hand Rubinstein has some papers devoted to proving the existence of interpolating polynomials which are increasing in all of $\mathbb R$.
up vote 12 down vote accepted
The first paper which gives bounds on the degrees is, I think,
E. Passow, L. Raymon, "The degree of piecewise monotone interpolation", which is here
and an improvement is made in "Exact estimates for monotone interpolation" by G.L. Iliev. Note that the bounds are in terms of $$A=\max \Delta y_i=\max (y_{i}-y_{i-1}) \\qquad B=\min \
Delta y_i \qquad C=\min \Delta x _i$$
And no uniform bound exists.
add comment
To add to Gjergji Zaimi's informative answer: It is easy to see that the degree cannot be bounded in terms of $n$ alone, even when $n=3$.
Suppose that we want $f$ of degree $m$ such that $f(0)=0$, $f(1)=\epsilon$, and $f(2)=1$, and $f$ is increasing on $[0,1]$, where $\epsilon>0$ is small. Then $|f(k/m)| \le \epsilon$ for $k=0,\ldots,m$, so the Lagrange interpolation formula shows that for fixed $m$, the coefficients of $f$ are $O(\epsilon)$, so $f(2)$ is $O(\epsilon)$ and cannot be $1$ if $\epsilon$ is small enough. In other words, the degree of any solution $f$ must grow as $\epsilon$ shrinks.
up vote 11 down vote
1 Nice argument. So this brings up the question about sparse interpolation -- in which the number of non-zero terms is bounded. – Victor Miller Mar 1 '10 at 7:29
add comment
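The $n=3$ obstruction described in the preceding answer is easy to verify numerically. The sketch below (an editorial addition) fits the degree-2 Lagrange interpolant through $(0,0)$, $(1,\epsilon)$, $(2,1)$ and checks that its derivative at $0$, which works out to $(4\epsilon-1)/2$, turns negative once $\epsilon<1/4$ — so no quadratic can be increasing on $[0,1]$ for small $\epsilon$:

```python
import numpy as np

def quadratic_through(eps):
    """Coefficients (highest degree first) of the unique quadratic
    with f(0)=0, f(1)=eps, f(2)=1."""
    xs, ys = [0.0, 1.0, 2.0], [0.0, eps, 1.0]
    return np.polyfit(xs, ys, 2)

for eps in (0.3, 0.1, 0.01):
    coeffs = quadratic_through(eps)
    fprime0 = np.polyval(np.polyder(coeffs), 0.0)  # derivative at x = 0
    print(eps, fprime0)                            # negative for eps < 1/4

# Closed form: f(x) = ((1-2e)/2) x^2 + ((4e-1)/2) x, so f'(0) = (4e-1)/2.
```

Any higher-degree increasing interpolant must therefore exist outside degree 2, matching the answer's conclusion that the degree grows as $\epsilon$ shrinks.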
I don't know if this has been studied, but at least if you forget about a bound on the degree, a sledgehammer approach gives you a positive answer. For simplicity, assume $x_i\in[0,1]$ and
proceed by induction on $n$, with the induction hypothesis being the existence of an increasing interpolating polynomial $p_n$. To get from $n-1$ to $n$, let $$p_n(x)=p_{n-1}(x)+(x-x_1)\cdots
(x-x_{n-1})q_n(x)$$ where $q_n$ is a polynomial to be determined. Given $p_{n-1}'(x)>\epsilon>0$ we have a little wiggle room: We merely need $|q_n(x)|$ and $|q_n'(x)|$ to be very small for
$0<x<x_{n-1}$ (exactly how small is left as an exercise) and $q_n'(x)>0$ for $x_{n-1}<x<1$. To achieve this, write $$q_n(x)=\int_0^x r_n(t)\,dt$$ and use the Weierstrass approximation theorem to let
up vote $r_n$ approximate a suitable continuous function. Adjust with a positive multiplicative constant to hit $p_n(x_n)=y_n$ exactly.
4 down
vote The astute reader will notice a problem with this: If $p_{n-1}(x_n)\ge y_n$ this prescription loses. So we have to make sure that $r_{n-1}$, after shooting up to a nice big value around $x_
{n-1}$, comes quickly back down to a small value in order to have this not happen. This complicates the proof quite a bit though, and I am not about to work through the details. I'd be
interested to hear about pointers to the literature.
add comment
I just came across the paper by Powers and Reznick "Polynomials that are Positive on an interval" http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.9773&rep=rep1&type=
up vote 1 down vote
In particular they point to a theorem of Schmüdgen who gives a characterization of polynomials positive on a specific compact set.
add comment
Not the answer you're looking for? Browse other questions tagged approximation-theory polynomials or ask your own question. | {"url":"http://mathoverflow.net/questions/16673/finite-interpolation-by-a-nondecreasing-polynomial/16699","timestamp":"2014-04-19T22:32:54Z","content_type":null,"content_length":"65276","record_id":"<urn:uuid:abb4faff-9f86-4f49-bc73-2d37486ffc4e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics in Many-Sheeted Space-Time
Authors: Matti Pitkänen
This book is devoted to what might be called classical TGD.
1. Classical TGD identifies space-time surfaces as a kind of generalized Bohr orbit. It is an exact part of quantum TGD.
2. The notions of many-sheeted space-time, topological field quantization, and the notion of a field/magnetic body follow from simple topological considerations. Space-time sheets can have arbitrarily large sizes, and their interpretation as quantum coherence regions implies that in the TGD Universe macroscopic quantum coherence is possible on arbitrarily long scales. Also long-ranged classical color and electro-weak fields are predicted.
3. The TGD Universe is fractal, containing fractal copies of standard model physics at various space-time sheets, labeled by p-adic primes assignable to elementary particles and by the level of the dark matter hierarchy, characterized partially by the value of Planck constant labeling the pages of the book-like structure formed by singular covering spaces of the imbedding space M^4 × CP_2 glued together along a four-dimensional back. Particles at different pages are dark relative to each other since the local interactions defined in terms of the vertices of Feynman diagrams involve only particles at the same page. The simplest view about the hierarchy of Planck constants is as an effective hierarchy describable in terms of local, singular coverings of the imbedding space. The basic observation is that for Kähler action the time derivatives of the imbedding space coordinates are many-valued functions of the canonical momentum densities. If all branches for given values of the canonical momentum densities are allowed, one obtains the analogs of many-sheeted Riemann surfaces, with each sheet giving the same contribution to the Kähler action, so that Planck constant is effectively a multiple of the ordinary Planck constant.
4. Zero energy ontology brings in an additional powerful interpretational principle.
The topics of the book are organized as follows.
1. In Part I extremals of Kähler action are discussed and the notions of many-sheeted space-time, topological field quantization, and topological condensation and evaporation are introduced.
2. In Part II many-sheeted cosmology and astrophysics are summarized. p-Adic and dark matter hierarchies imply that TGD inspired cosmology is fractal. Cosmic strings and their deformations giving rise to magnetic flux tubes are basic objects of TGD inspired cosmology. Magnetic flux tubes can in fact be interpreted as carriers of dark energy giving rise to accelerated expansion via negative magnetic "pressure". The study of imbeddings of Robertson-Walker cosmology shows that critical and over-critical cosmologies are unique apart from their duration. The idea about the dark matter hierarchy was originally motivated by the observation that planetary orbits could be interpreted as Bohr orbits with an enormous value of Planck constant, and this picture leads to a rather detailed view about macroscopically quantum coherent dark matter in astrophysics and cosmology.
3. Part III includes old chapters about implications of TGD for condensed matter physics. The phases of CP_2 complex coordinates could define phases of order parameters of macroscopic quantum phases manifesting themselves in the properties of living matter and even in hydrodynamics. For instance, the Z^0 magnetic gauge field could make itself visible in hydrodynamics, and Z^0 magnetic vortices could be involved with super-fluidity.
Comments: 985 Pages.
Download: PDF
Submission history
[v1] 3 Aug 2009
[v2] 14 Oct 2009
[v3] 19 Oct 2009
[v4] 3 Nov 2010
[v5] 2011-12-04 00:23:03
[v6] 2012-01-30 22:33:44
[v7] 2012-03-16 03:37:55
[v8] 2012-07-01 08:31:55
[v9] 2012-09-06 22:51:33
[vA] 2012-09-22 02:24:03
[vB] 2012-10-11 23:22:20
[vC] 2012-11-13 01:13:11
[vD] 2013-01-01 07:54:39
[vE] 2013-02-17 01:35:34
[vF] 2013-09-09 07:04:49
Unique-IP document downloads: 486 times
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted
as unhelpful.
comments powered by | {"url":"http://vixra.org/abs/0908.0017","timestamp":"2014-04-21T07:14:12Z","content_type":null,"content_length":"11594","record_id":"<urn:uuid:eac56214-3c48-4676-90f2-f5e52119e1d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00416-ip-10-147-4-33.ec2.internal.warc.gz"} |
: Use FLOOR to assign condition based on participant number
TOPIC: Use FLOOR to assign condition based on participant number
Use FLOOR to assign condition based on participant number 10 months 2 weeks ago #96611
Dear readers, I am having a problem with the following. In my survey, I assign people to condition based on their participant number, using the FLOOR function. However, some numbers
are SKIPPED completely, and I can't pinpoint the error. Here is the process I use:
Participants' number is filled in in a question called 'ppno'
The next question is equation type, called 'condition'. The equation is {((((ppno/3)-(FLOOR(ppno/3)))*3)}
• cjvanlissa
• Then there are 3 questions, each of which is a different condition of my experimental manipulation. The questions have these relevance equations:
• Fresh condition == 0
Lemon condition == 1
• condition == 2
• Posts: 4
• Karma: 0 Now, when I entered participant number 211, I first noticed the problem that it SKIPS these questions completely. Yet I don't see how the outcome of ((((211/3) - (floor(211/3)))*3)
• could be anything other than 0|1|2
Could anyone point me in the right direction please? I've been at this for 3 hours and it seems like it should be simple!
Thank you so much,
Use FLOOR to assign condition based on participant number 10 months 2 weeks ago #96636
Yeah, that should equal 1 but I think you have an extra bracket. Try this:
{(((ppno/3) - (floor(ppno/3))) * 3)}
Cheers,
Tony Partner
Solutions, code and workarounds presented in these forums are given without any warranty, implied or otherwise.
LimeSurvey is open-source and run entirely by volunteers so please consider donating to support the project.
• tpartner
• LimeSurvey Team
• Posts: 3813
• Thank you received: 683
• Karma: 327
Use FLOOR to assign condition based on participant number 10 months 1 week ago #96664
cjvanlissa (Posts: 4, Karma: 0)

Thanks for your response tpartner! Unfortunately, the way you wrote it up is exactly like it is in my actual survey. I mistyped it in my original post.

To be clear: the problem is not that this code NEVER works, the problem is that the code doesn't work for SOME values of ppno. The first time the code failed me is when using ppno = 211. The problem is consistent, i.e. it NEVER works with ppno = 211, and I've since then discovered it also fails with some other numbers.

Any idea how it could be that the code fails only for some values of 'ppno'?
Use FLOOR to assign condition based on participant number 10 months 1 week ago #96679
tpartner (LimeSurvey Team, Posts: 3813, Thank you received: 683, Karma: 327)

Try this to make sure you get a whole number as a result:

{ceil(((ppno/3) - (floor(ppno/3))) * 3)}

Cheers,
Tony Partner
Use FLOOR to assign condition based on participant number 10 months 1 week ago #96681
cjvanlissa (Posts: 4, Karma: 0)

Ace! That did the trick! Thank you for your help
Use FLOOR to assign condition based on participant number 10 months 1 week ago #96711
cjvanlissa (Posts: 4, Karma: 0)

Actually, still not working. It worked for 211, but a different ppno (95) still resulted in the questionnaire skipping the conditional questions.

This is really bizarre; the code has been working perfectly for months, and now suddenly it results in major issues. Even updating Limesurvey did not make a difference.
Use FLOOR to assign condition based on participant number 10 months 1 week ago #96714
tpartner (LimeSurvey Team, Posts: 3813, Thank you received: 683, Karma: 327)

I get exactly 3 with a ppno of 95.

Try putting something like this in a text-display question to see the equation component values:

ppno/3: {(ppno.shown/3)}
floor(ppno/3): {floor(ppno.shown/3)}
(ppno/3) - (floor(ppno/3)): {((ppno.shown/3) - (floor(ppno.shown/3)))}
ceil(((ppno/3) - (floor(ppno/3))) * 3): {ceil(((ppno.shown/3) - (floor(ppno.shown/3))) * 3)}

Cheers,
Tony Partner
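For the record, here is a minimal sketch (mine, not from the thread) of why both variants can misbehave, assuming the expressions are evaluated in IEEE-754 double precision, as both PHP and Python use. The fractional-part trick lands a hair below or above the true integer, so `condition == 1` can fail for 211 while `ceil()` can overshoot to 3 for 95; integer arithmetic avoids the problem entirely:

```python
import math

def cond_raw(n):
    # the thread's original expression: (n/3 - floor(n/3)) * 3
    return (n / 3 - math.floor(n / 3)) * 3

def cond_ceil(n):
    # tpartner's ceil() variant
    return math.ceil(cond_raw(n))

print(cond_raw(211))   # slightly below 1, so "condition == 1" never fires
print(cond_ceil(95))   # 3, but no question has relevance "condition == 3"

def cond_int(n):
    # integer remainder: exact, no floating-point rounding involved
    return n % 3
```

A robust fix in the survey itself would be to round the computed value before comparing, or to work in pure integer arithmetic; the exact Expression Manager syntax available depends on the LimeSurvey version, so check the version's function list.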
Powered by Kunena Forum
Physics Questions and Answers
Students often face hard-to-solve and mind-numbing physics problems that cause a lot of distress in the studying process. Not everyone can cope with the hardships physics problems pose, and many end up with a bunch of physics questions that need to be solved. Our service is the solution provider for your physics questions. Ask your question here and get physics answers that will help you do your assignment in the quickest way possible with maximum results. Our experts will gladly provide physics answers for your benefit.
How is energy produced in a nuclear reactor?
A hiker sets out on a trek heading [N35degreesE] at a pace of 5.0km/h for 48.0 min. He then heads west at 4.5km/h for 40.0 min. Finally, he heads [N30degreesW] at 6.0km/h, until he reaches the campground 1.5 hours later. What is his total displacement?
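For questions like the one above, a component-wise vector sum settles it. Here is a sketch (my own working, not the site's posted answer), using the convention that a bearing of [N35°E] means 35° east of due north:

```python
import math

def leg(bearing_deg, speed_kmh, hours):
    """Bearing measured from north, positive toward east.
    Returns (east, north) displacement components in km."""
    d = speed_kmh * hours
    rad = math.radians(bearing_deg)
    return d * math.sin(rad), d * math.cos(rad)

legs = [leg(35, 5.0, 48 / 60),    # [N35E] for 48 min
        leg(-90, 4.5, 40 / 60),   # due west for 40 min
        leg(-30, 6.0, 1.5)]       # [N30W] for 1.5 h

east = sum(e for e, n in legs)
north = sum(n for e, n in legs)
distance = math.hypot(east, north)
bearing = math.degrees(math.atan2(east, north))  # negative = west of north
print(round(distance, 1), round(bearing, 1))     # about 12.2 km, roughly [N25W]
```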
1. Below is the description of the position in metres of the puck as it moves in the x plane, where time is in seconds:
(a) How long did the puck move after 2 seconds?
(b) What is the velocity of the puck at 2 seconds?
(c) What is the acceleration of the puck at 2 seconds?
2. An object moves in uniform circular motion with centripetal acceleration of 25 m/s^2. If the diameter of the circle is 70m,
(a) Find the velocity at which the object is moving.
(b) Find the revolution period of the motion.
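For the circular-motion question, a_c = v²/r gives v = √(a_c·r), and the period follows from T = 2πr/v. A quick numeric sketch (my working, not the site's posted answer):

```python
import math

a_c = 25.0        # centripetal acceleration, m/s^2
r = 70.0 / 2      # radius from the 70 m diameter

v = math.sqrt(a_c * r)   # from v^2 / r = a_c
T = 2 * math.pi * r / v  # one revolution at constant speed
print(round(v, 1), round(T, 2))  # about 29.6 m/s and 7.43 s
```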
1 newton is represented as
Why is the radius of curvature at its minimum when a projected ballistic ball reaches its maximum height?
A 25g rifle bullet traveling 220m/s buries itself in a 3.9kg pendulum hanging on a 2.6m long string, which makes the pendulum swing upward in an arc. Determine the vertical and horizontal components
of the pendulum's displacement.
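This is the classic ballistic-pendulum setup: momentum is conserved through the perfectly inelastic collision, then mechanical energy is conserved on the upswing. A sketch of the numbers (my working, taking g = 9.8 m/s²):

```python
import math

m, v, M, L = 0.025, 220.0, 3.9, 2.6   # bullet mass/speed, pendulum mass, string length
g = 9.8

v_after = m * v / (m + M)             # momentum conservation: m*v = (m + M)*v_after
h = v_after**2 / (2 * g)              # energy conservation: vertical rise of the pendulum
x = math.sqrt(L**2 - (L - h)**2)      # horizontal displacement from the string geometry
print(round(h, 3), round(x, 3))       # about 0.100 m up and 0.715 m across
```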
Farmer Crockett is preparing tomato seedlings for his spring planting by growing the small plants over five 46-ohm strip heaters wired in parallel. a) How much current does each heater draw from a 120-V line? b) How much current do they all draw together?
Timmy is playing with a new electronics kit he has received for his birthday. He takes out four resistors with resistances of 15 ohms, 20 ohms, 20 ohms, and 30 ohms. a) How would Timmy have to wire the resistors so that they would allow the maximum amount of current to be drawn? Calculate the total resistance in this circuit. b) How must he wire the resistors so that they draw a minimum amount of current? Calculate the total resistance in this circuit.
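The two circuit questions above reduce to Ohm's law and the series/parallel combination rules; a quick numeric sketch (my own working, not the site's posted answer):

```python
# Heater problem: five 46-ohm heaters in parallel across 120 V.
V, R, n = 120.0, 46.0, 5
per_heater = V / R          # current through one heater (A), by Ohm's law
total = n * per_heater      # parallel branch currents simply add

# Resistor problem: 15, 20, 20, 30 ohms.
rs = [15.0, 20.0, 20.0, 30.0]
r_series = sum(rs)                         # series: largest R, minimum current
r_parallel = 1 / sum(1 / r for r in rs)    # parallel: smallest R, maximum current
print(round(per_heater, 2), round(total, 1), r_series, round(r_parallel, 6))
```

The parallel combination works out to exactly 5 ohms (1/15 + 1/20 + 1/20 + 1/30 = 1/5) and the series combination to 85 ohms.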
Titan’s size (R = 2575 km) and density (ρ = 1.9 g cm^-3) are much smaller than Earth’s (R = 6378 km and ρ = 5.5 g cm^-3). Therefore, we would expect that its surface gravity gT is weaker than Earth’s g, so any material on Titan would weigh much less than it does on Earth (the weight of any mass m on any planet (or moon) is given by the product of its mass m and surface gravity g, i.e., W = mg). However, atmospheric pressure on Titan’s surface is greater than Earth’s, about 1.6 Earth atmospheres (pressure is the weight per square meter on the surface of the planet (or moon), i.e., P = W / A where A is the area)! This means that the amount of mass in Titan’s atmosphere above any square meter of Titan’s surface must be much greater than the amount of mass in the atmosphere of Earth above any square meter on Earth’s surface. Calculate the amount of mass of atmosphere per square meter (we’ll call this σ = m/A) on Titan’s surface compared to that on Earth. (Hint: You’ll need to calculate Titan’s surface gravity relative to Earth’s first.)
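Since g = GM/R² with M = (4/3)πρR³ for a uniform sphere, g is proportional to ρR, so the comparison can be done entirely in ratios without G. A sketch of the arithmetic, using the values given in the problem statement:

```python
# ratio of surface gravities: g is proportional to density * radius
rho_t, r_t = 1.9, 2575.0      # Titan (g/cm^3, km)
rho_e, r_e = 5.5, 6378.0      # Earth
g_ratio = (rho_t * r_t) / (rho_e * r_e)

# sigma = P / g, so sigma_Titan / sigma_Earth = (P_t / P_e) / (g_t / g_e)
p_ratio = 1.6
sigma_ratio = p_ratio / g_ratio
print(round(g_ratio, 3), round(sigma_ratio, 1))  # about 0.139 and 11.5
```

So Titan's atmosphere piles roughly 11 to 12 times more mass above each square meter of surface than Earth's does.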
Why do we need to measure extremely small intervals of time?
College Mathematics for Business, Economics, Life Sciences, and Social Sciences, Eleventh Edition
College Mathematics for Business, Economics, Life Sciences, and Social Sciences, Eleventh Edition | 978-0-13-157225-6
ISBN-13: 9780131572256
Author(s): Raymond A. Barnett; Michael R. Ziegler; Karl E. Byleen
Price Information
Rental Options | Expiration Date
eTextbook Digital Rental | 360 days
Our price: $71.99
Regular price: $180.00
You save: $108.01
Additional product details
ISBN-13 9780136055198, ISBN-10 0136055192
ISBN-10 0-13-157225-3, ISBN-13 978-0-13-157225-6
Author(s): Raymond A. Barnett; Michael R. Ziegler; Karl E. Byleen
Publisher: Pearson
Copyright year: © 2008
Marketing Promotion
Three Ways to Study
with eTextbooks!
• Read online from your computer or mobile device.
• Read offline on select browsers and devices when the internet won't be available.
• Print pages to fit your needs.
CourseSmart eTextbooks let you study the best way – your way.
Russell's paradox, major league version - Statistical Modeling, Causal Inference, and Social Science
Russell’s paradox, major league version
How fast is Rickey? Rickey is so fast that he can steal more bases than Rickey. (And nobody steals more bases than Rickey.)
2 Comments
1. But there's no mention of sets!
There could be a Zeno's version:
For Rickey to steal second base, he has to get from 1st to 2nd. To get from 1st to 2nd, he has to get halfway from 1st to 2nd, and so on.
2. It is neat. The illegality of Rickey not being a member of a set that Rickey is a member of goes nicely with the idea that "nobody" steals more bases than Rickey, because that invokes an empty set but also creates another, sort of recursive illegality in which Rickey is always stealing more bases than Rickey but nobody is always stealing more. So we have a countable infinity "inside" another
Algebra (Graduate Texts in Mathematics)
Date: 22 Nov 2010
Algebra (Graduate Texts in Mathematics)
by Thomas W. Hungerford
Springer | 2003 | ISBN: 0387905189 | 528 pages | File type: PDF | 11.6 MB
Algebra fulfills a definite need to provide a self-contained, one-volume, graduate-level algebra text that is readable by the average graduate student and flexible enough to accommodate a wide variety of instructors and course contents. The guiding philosophical principle throughout the text is that the material should be presented in the maximum usable generality consistent with good pedagogy. Therefore it is essentially self-contained, stresses clarity rather than brevity, and contains an unusually large number of illustrative exercises. The book covers the major areas of modern algebra in sufficient breadth and depth, which is a necessity for most mathematics students.
Hobart, IN Math Tutor
Find a Hobart, IN Math Tutor
I have several degrees in Education and Business. I have 4 years' experience serving as a Coordinator of Assessment and Curriculum. I am dedicated to expanding the student learning experience,
while contributing my knowledge of the Public School System to develop a superior curriculum and student development program.
15 Subjects: including geometry, study skills, ACT Math, special needs
...Therefore, my ability to adapt to different learning curves will be a great asset when I am a teacher. I will be able to teach every student to his or her potential. Throughout my college
career I was involved in an internship through Northern Illinois University where I helped tutor students in the local high school; as well as several local students.
11 Subjects: including prealgebra, ACT Math, algebra 1, algebra 2
...I worked as a teacher's aide and Title I liaison for Elwood Middle School for 8 years where I also did after-school help sessions and summer tutoring. I just completed my student teaching for
my elementary license in December 2011. As the oldest of 9 children, I have always been around young children.
25 Subjects: including algebra 2, ACT Math, GED, trigonometry
...I am working on getting my master's degree in clinical psychology; I will be graduating this coming May. I have tutored many students and adults in the past 5 years, in the areas of Arabic,
Hebrew and Math. My number one goal with all of my clients is to achieve their goals on learning the new ...
8 Subjects: including geometry, physics, psychology, differential equations
...I am a recent alum of the prestigious Teach for America program, where I worked in a Baltimore City School and was responsible for seeing math test results more than double from their previous
years. While in Baltimore I began working with MERIT to help tutor some of the city's most promising yo...
20 Subjects: including ACT Math, trigonometry, SAT math, linear algebra
Related Hobart, IN Tutors
Hobart, IN Accounting Tutors
Hobart, IN ACT Tutors
Hobart, IN Algebra Tutors
Hobart, IN Algebra 2 Tutors
Hobart, IN Calculus Tutors
Hobart, IN Geometry Tutors
Hobart, IN Math Tutors
Hobart, IN Prealgebra Tutors
Hobart, IN Precalculus Tutors
Hobart, IN SAT Tutors
Hobart, IN SAT Math Tutors
Hobart, IN Science Tutors
Hobart, IN Statistics Tutors
Hobart, IN Trigonometry Tutors
Logarithm Function (Python, functional)
From LiteratePrograms
When the waters of the Flood subsided, Noah spake unto the animals: "Go forth and multiply." All the animals did as they were bid, save a pair of serpents, who said, "We cannot multiply, for we are
adders." Then Noah bade them, "Come thou hither, upon the rough furniture where I partake my meals." They did so, and in due time, became exceeding numerous, for even adders can multiply with a log table.
Calculate base-10 logarithms to a specified number of decimal places.
theory
If one wishes to use logarithms in Python, then the built-in math.log10 is the obvious solution.
Here we take advantage of the algebraic structure of $\mathbb{R}$ to produce an entirely functional expression approximating log10 to a given number of digits.
practice
For much of this program, we deal with pairs (x,l), where l accumulates an integer approximation to the logarithm, and x encodes the remainder of the argument for which we seek the logarithm.
• shift, in expressing the identity $l + \log_{10}x = (l + s) + \log_{10}(x/10^s)$, shifts the decimal point of x.
• scale is a shift with an argument of -1, 0, or 1, either bringing the x closer to units magnitude while incrementing (or decrementing) l, or, with an s at 0, returning x and l unchanged.
• ipart is the value which iterated scales converge to, at which point l is the integer part of the logarithm.
<<extracting the integer part>>=
shift = lambda x,l,s: (x/(10.0**s), l+s)
scale = lambda x,l: shift(x, l, (x >= 10)-(x < 1))
ipart = lambda x: fix(scale, (x,0))
To extract the fractional digits, we repeat the steps above for each fractional decimal place. A single digit is l, and the digits which come after it are handled by shifting the effective decimal
place in the recursion: $\log_{10}(x^{10}) = 10\,\log_{10}x$
In a lazy language, the restriction d on number of decimal places extracted could be handled in the top-level expression, but as python is eager, we must propagate it through the entire expression,
truncating with a value of 0.0 after the last decimal place desired.
<<approximating log10>>=
digit = lambda x,l,d: l + alog(x**10, d-1)/10.0
alog = lambda x,d=12: (d>=0) and digit(*ipart(x)+(d,)) or 0.0
Question: why not write (d<0) and 0.0 or digit(*ipart(x)+(d,))?
Question: which instances of 10 in the code above refer to the base of the logarithm, and which refer to the decimal output?
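Since the chunks above reference a fixpoint combinator that is only defined in the next section, here is a self-contained sketch of the whole pipeline in one place, ported to Python 3 syntax (the article's own chunks assume Python 2, and the fixpoint is written with the ternary operator that the agendum asks for, which also sidesteps the and/or falsy-value pitfall):

```python
# a minimal Python 3 rendering of the article's chunks, for experimentation
fix = lambda f, v1, v0=None: v0 if v0 == v1 else fix(f, f(*v1), v1)
shift = lambda x, l, s: (x / 10.0**s, l + s)
scale = lambda x, l: shift(x, l, (x >= 10) - (x < 1))
ipart = lambda x: fix(scale, (x, 0))
digit = lambda x, l, d: l + alog(x**10, d - 1) / 10.0
alog = lambda x, d=12: digit(*ipart(x) + (d,)) if d >= 0 else 0.0

print(ipart(4500))   # (4.5, 3): the integer part of log10(4500) is 3
print(alog(4.5, 6))  # approximately 0.653212 (log10(4.5) = 0.6532125..., truncated)
```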
wrapping up
We now provide the ancillary detail of arranging for a computation to converge.
Exercise: in this case, there is no danger of scale entering a loop. Write a more general fixpoint, which detects convergence to a loop and terminates properly. In such cases, does it matter which
loop element is returned?
<<converging to a fixpoint>>=
fix = lambda f,v1,v0=None: (v0 == v1) and v0 or fix(f, f(*v1), v1)
Finally, we wrap everything up in a module, which, when run, uses the same test vectors as Logarithm Function (Python)
<<converging to a fixpoint>>
<<extracting the integer part>>
<<approximating log10>>
if __name__ == "__main__":
value = 4.5
print alog(value, 3)
print alog(value, 6)
print alog(value)
... and add a quick sanity check: $log_{10} 10^x = 10^{log_{10} x} = x$
e = 2.7182818284590451
print alog(10**e)
print 10**alog(e)
print e
The output should be:
agendum
• Rewrite this article according to the following exercise, but not until several years after the appropriate python version has been available.
Exercise: rewrite alog and fix to use the ternary operator.
Here's the question you clicked on:
i am having trouble remembering how to do this: write the system of inequalities with the solution shown in the graph. you will end up with 2 inequality equations. will attach graph!
• one year ago
Pine Lake Algebra 2 Tutor
...I enjoy helping students to write clearly and to achieve their essay goals. The SAT reading section can be a confusing section if a student is not prepared for the format and patterns in the
questions. I begin by learning about what the student likes to read and then comparing those preferences to the types of reading on the SAT.
17 Subjects: including algebra 2, chemistry, physics, geometry
Awarded Wyzant.com's Top 50 Tutor Award for 2013. I've been successfully helping students learn a wide variety of subjects for over 15 years. I'm a friendly, calm individual who is good at
observing what it is you're struggling with.
28 Subjects: including algebra 2, physics, calculus, economics
...I have certification from the Georgia Professional Standards Commission (GPSC) in Special Education in all subjects for grades PreK-12 and in Early Childhood Education grades PreK-8. I began
teaching in 2001, working in the field of Early Intervention and Special/Deaf Education. Since then, I have worked with various age groups from birth to high school in MS, TN, and GA.
45 Subjects: including algebra 2, chemistry, English, reading
...I have tutored and taught on the high school and college level. I am very comfortable helping students discover and learn math. I have a Bachelor of Science and Master of Arts in mathematics.
19 Subjects: including algebra 2, geometry, trigonometry, SAT math
...I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level:- pre algebra- algebra-
trigonometry- geometry- pre calculus- calculusIn high school, I took and excelled at all of the listed cl...
16 Subjects: including algebra 2, calculus, geometry, algebra 1
Proof that the rationals are countable.
September 27th 2011, 04:54 AM #1
Senior Member
Dec 2008
Proof that the rationals are countable.
Is this proof ok?
If I assume that the set of two-tuples $X = \{(a,b):a,b\in \mathbb{N}\}$ is countable (which I think follows directly from the proof that the countable union of countable sets is countable).
Can we make the natural bijection $f:\mathbb{Q}^+\rightarrow X$ s.t.
$\frac{a}{b}\rightarrow (a,b)$. Then we can see that the positive rationals are countable.
Making the similar bijection $g:\mathbb{Q}^-\rightarrow X$ s.t.
$\frac{a}{b}\rightarrow (-a,b)$. Then we see that the negative rationals are countable.
If we then take $(\mathbb{Q}^+\cup \{0\}\cup\mathbb{Q}^-)=\mathbb{Q}$, then as the countable union of countable sets is countable, we have that $\mathbb{Q}$ is countable.
Note: I make the bijection to the set of two-tuples as it is countable, so it has a bijection to the natural numbers and the composition of bijections is a bijection.
Thanks for any help
Re: Proof that the rationals are countable.
The mapping $f:\mathbb{Q}^+\to X$, $f\left(\frac{a}{b}\right)=(a,b)$ is not a function because it maps the same element of the domain into different elements of the codomain.
Re: Proof that the rationals are countable.
ah, is there any way to modify this or is it just plain wrong
Re: Proof that the rationals are countable.
Here is one way to show that $\mathbb{N}\times\mathbb{N}$ is countable: define $f(a,b)=2^a\cdot 3^b$. Show that $f$ is an injection.
Subsets of countable sets are countable. Can you identify the positive rationals with a subset of $\mathbb{N}\times\mathbb{N}$?
Re: Proof that the rationals are countable.
Consider reducing rational fractions to lowest terms.
Re: Proof that the rationals are countable.
Ah ok thanks very much I get that.
However you said that the mapping is not a bijection as it sends the same element to different elements of the set X e.g. $\frac{2}{2}\ \mbox{and}\ 1$ go to different elements but is this not
some extra structure specific the rationals?
If we didn't consider this structure is this a bijection? because then the reduced rationals would be a subset of this?
thanks for any help
Re: Proof that the rationals are countable.
If we didn't consider this structure is this a bijection?
I need more specifics to answer. What exactly is the domain of the alleged bijection? If the domain is the set of all fractions in lowest terms, $\left(\frac{a}{b}\right)\mapsto(a,b)$ is an
injection but not a surjection: e.g., (2, 2) is not in the image.
Re: Proof that the rationals are countable.
Sorry for not being too clear
I meant when you said that my origional mapping $f:\mathbb{Q}^+\rightarrow X\ s.t. \frac{a}{b}\rightarrow (a,b)$ was not a bijection (as it was not a function) because it takes the same elements
in $\mathbb{Q}^+$ to different elements in X for example $\frac{2}{2}\ \mbox{and}\ \frac{1}{1}$.
I was asking if we could not just consider the set of all fractions where $\frac{2}{2}\ \mbox{and}\ \frac{1}{1}$ are distinct elements, and then create the bijection as before. We would then have
that this set is countable and that $\mathbb{Q}^+$ is a subset of this set and is so countable?
thanks for any help (sorry if I am not being that clear)
Re: Proof that the rationals are countable.
The set of all fractions where $\frac{2}{2}\ \mbox{and}\ \frac{1}{1}$ are distinct elements seems to be the same as $\mathbb{N}\times\mathbb{N}$. We just write $\frac{a}{b}$ instead of $(a, b)$.
So you could directly show that $\mathbb{Q}^+$ is a subset of $\mathbb{N}\times\mathbb{N}$ (or, more precisely, that there is an injection from $\mathbb{Q}^+$ to $\mathbb{N}\times\mathbb{N}$).
Re: Proof that the rationals are countable.
ok thanks very much for all the help
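As a concrete, finite sanity check of the approach suggested above (my illustration, not part of the thread): the map f(a,b) = 2^a·3^b is injective by unique prime factorization, and identifying each positive rational with its lowest-terms pair embeds $\mathbb{Q}^+$ into $\mathbb{N}\times\mathbb{N}$:

```python
from fractions import Fraction
from math import gcd

def f(a, b):
    # injective on N x N by unique prime factorization
    return 2**a * 3**b

pairs = [(a, b) for a in range(1, 40) for b in range(1, 40)]
codes = [f(a, b) for a, b in pairs]
assert len(set(codes)) == len(codes)   # no collisions on this finite patch

# positive rationals a/b in lowest terms form a subset of N x N
lowest = [(a, b) for a in range(1, 40) for b in range(1, 40) if gcd(a, b) == 1]
values = {Fraction(a, b) for a, b in lowest}
assert len(values) == len(lowest)      # distinct pairs give distinct rationals
print(len(pairs), len(lowest))
```

Of course no finite check proves injectivity; the point is just to make the encoding tangible.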
Adventures in Geometrical Intuition
31 October 2010
Over the past few days I’ve posted several strictly theoretical pieces that have touched on geometrical intuition and what I have elsewhere called thinking against the grain — A Question for
Philosophically Inclined Mathematicians, Fractals and the Banach-Tarski Paradox, and A visceral feeling for epsilon zero.
Not long previously, in my post commemorating the passing of Benoît Mandelbrot, I discussed the rehabilitation of geometrical intuition in the wake of Mandelbrot’s work. The late nineteenth and early
twentieth century work in the foundations of mathematics largely made the progress that it did by consciously forswearing geometrical intuition and seeking instead logically rigorous foundations that
made no appeal to our ability to visualize or conceive particular spatial relationships. Mandelbrot said that, “The eye had been banished out of science. The eye had been excommunicated.” He was
right, but the logically motivated foundationalists were right also: we are misled by geometrical intuition at least as often as we are led rightly by it.
Geometrical intuition, while it suffered during a period of relative neglect, was never entirely banished, never excommunicated to the extent of being beyond rehabilitation. Even Gödel, who
formulated his paradoxical theorems employing the formal machinery of arithmetization, therefore deeply indebted to the implicit critique of geometrical intuition, wrote: “I only wanted to show that
an innate Euclidean geometrical intuition which refers to reality and is a priori valid is logically possible and compatible with the existence of non-Euclidean geometry and with relativity theory.”
(Collected Papers, Vol. III, p. 255) This is, of course, to damn geometrical intuition by way of faint praise, but being damned by faint praise is not the same as being condemned (or excommunicated).
Geometrical intuition was down, but not out.
As Gödel observed, even non-Euclidean geometries are compatible with Euclidean geometrical intuition. When non-Euclidean geometries were first formulated by Bolyai, Lobachevski, and Riemann (I
suppose I should mention Gauss too), they were interpreted as a death-blow to geometrical intuition, but it became apparent as these discoveries were integrated into the body of mathematical
knowledge that what the non-Euclidean geometries had done was not to falsify geometrical intuition by way of counter-example, but to extend geometrical intuition through further (and unexpected)
examples. The development of mathematics here exhibits not Aristotelian logic but Hegelian dialectical logic: Euclidean geometry was the thesis, non-Euclidean geometry was the antithesis, and
contemporary geometry, incorporating all of these discoveries, is the synthesis.
Bertrand Russell, who was central in the philosophical struggle to find rigorous logical formulations for mathematical theories that had previously rested on geometrical intuition, wrote: “A logical
theory may be tested by its capacity for dealing with puzzles, and it is a wholesome plan, in thinking about logic, to stock the mind with as many puzzles as possible, since these serve much the same
purpose as is served by experiments in physical science.” (from the famous “On Denoting” paper) Though Russell thought of this as a test of logical theories, it is also a wholesome plan to stock the
mind with counter-intuitive geometrical examples. Non-Euclidean geometry greatly contributed to the expansion and extrapolation of geometrical intuition by providing novel examples toward which
intuition can expand.
In the interest of offering exercises and examples for geometrical intuition, In Fractals and the Banach-Tarski Paradox I suggested the construction of a fractal by raising a cube on each side of a
cube. I realized that if instead of raising a cube we sink a cube inside, it would make for an interesting pattern. With a cube of the length of 3, six cubes indented into this cube, each of length
1, would meet the other interior cubes at a single line.
If we continue this iteration the smaller cubes inside (in the same proportion) would continue to meet along a single line. Iterated to infinity, I suspect that this would look interesting. I’m sure
it’s already been done, but I don’t know the literature well enough to cite its previous incarnations.
The two dimensional version of this fractal looks like a square version of the well-known Sierpinski triangle, and the pattern of fractal division is quite similar.
One particularly interesting counter-intuitive curiosity is the ability to construct a figure of infinite length starting with an area of finite volume. If we take a finite square, cut it in half,
and put the halves end-to-end, and then cut one of the halves again, and again put them end-to-end, and iterate this process to infinity (as with a fractal construction, though this is not a
fractal), we take the original finite volume and stretch it out to an infinite length.
With a little cleverness we can make this infinite line constructed from a finite volume extend infinitely in both directions by cutting up the square and distributing it differently. Notice that, with these constructions, the area remains exactly the same, unlike Banach-Tarski constructions in which additional space is "extracted" from a mathematical continuum (which could be of any size).
Thinking of these above two constructions, it occurred to me that we might construct an interesting fractal from the second infinite line of finite area. This is unusual, because fractals usually
aren’t constructed from rearranging areas in quite this way, but it is doable. We could take the middle third of each segment, cut it into three pieces, and assemble a “U” shaped construction in the
middle of the segment. This process can be iterated with every segment, and the result would be a line that is infinite two over: it would be infinite in extent, and it would be infinite between any
two arbitrary points. This constitutes another sense in which we might construct an infinite fractal.
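The first construction above is easy to audit numerically. Model the square as having unit area; after n halvings the strip consists of n unit-length pieces of heights 1/2, 1/4, ..., 1/2^n, plus the last uncut remainder of height 1/2^n. The area is preserved exactly while the length grows without bound. A sketch (my own illustration of the construction described in the text):

```python
def strip(n):
    # heights of the pieces after n halvings of a unit square
    heights = [2.0**-k for k in range(1, n + 1)] + [2.0**-n]
    area = sum(h * 1.0 for h in heights)   # every piece has unit length
    length = float(len(heights))           # pieces laid end to end
    return area, length

for n in (1, 10, 50):
    print(n, strip(n))   # area stays exactly 1.0; length is n + 1
```

The dyadic heights make the floating-point sum exact, so the invariance of the area is visible literally, not just approximately.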
. . . . .
Fractals and Geometrical Intuition
2. A Question for Philosophically Inclined Mathematicians
3. Fractals and the Banach-Tarski Paradox
4. A visceral feeling for epsilon zero
5. Adventures in Geometrical Intuition
6. A Note on Fractals and Banach-Tarski Extraction
3 Responses to “Adventures in Geometrical Intuition”
1. 13 January 2011 at 6:51 pm
Thanks for some really great insights. Do you think Russell is saying if you play enough of these puzzles that hold meaningful results it builds up a weight of truth relative to one another? Then
this can be built upon. (As Russell’s student Wittgenstein says of language ‘one learns the game by watching others play.’)
Perhaps the reference for the cube is the
Menger Sponge
Which relates to the Sierpinski carpet
I am very very new to both mathematics and philosophy so you will have to forgive any blundering errors. I am actually writing a definition of art but it turns out there are parallels between the
infinite complexity of strange attractors of non periodic flows and the layering of definitions of art within societies.
Can you give me any sources of Mandelbrot discussing intuition in print?
Thanks Again,
15 January 2011 at 4:44 pm
Dear Josh,
Thanks for your note!
While I would not have formulated Russell’s approach in these particulars, it strikes me as a very fair way of putting it. Certainly if we test a logical theory against a lot of paradoxes and
find that the given logical theory has an answer to all of them, they do “build up a weight of truth relative to one another.” This is formal thought undertaken in an inductive spirit, i.e.,
in the spirit of empirical science, and this is an especially appropriate way to characterize Russell’s work.
I thought I knew Wittgenstein pretty well, but I was not familiar with the quote that you cited, “one learns the game by watching how others play.” (I have since found the source in
Philosophical Investigations, I, sec. 54) This unquestionably sounds like the later Wittgenstein. It is also a profoundly conservative statement. I do not disagree that this is how the rules
of games are usually learned, and there are some famous psychological experiments that seem to bear this out.
What we may call the speciation of games, however, must emerge from changing rules of existing games, or formulating rules de novo, and one does not learn this from watching how others play
the game. However, one might well learn this by watching how others change the game, or how others create games. But this could also be called “cheating,” depending upon how exactly the game
is changed. What exactly is the difference between cheating and revising a game?
Quine has an argument that I think he called “change of logic, change of topic,” which holds that when we formulate a new logic, we have really changed the topic of discussion. This could be
applied, mutatis mutandis, to games, such that an innovation in rules always results in a new game. This would produce a very finely-grained account of game speciation. It would also seem a
little silly from a common sense perspective, because then every time we change the wild card in a game of poker, we would no longer be playing poker but some other card game. This dilemma in
turn suggests a sorites paradox in relation to game speciation. These are all very interesting questions that invite a treatment in some depth of detail rather than this kind of
off-the-top-of-my-head version.
Thanks for the examples you sent to me. I will look into them.
Best wishes, | {"url":"http://geopolicraticus.wordpress.com/2010/10/31/adventures-in-geometrical-intuition/","timestamp":"2014-04-21T02:29:13Z","content_type":null,"content_length":"85946","record_id":"<urn:uuid:84e22cc5-fe43-42e6-a1a4-b03cf2d77767>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
Amusements in Mathematics
Times change, whether we like it or not, sometimes for the better and sometimes for the worse. There was a time before Facebook, before World of Warcraft, before the internet, before television,
movies, radio: a time when people had to turn elsewhere to satisfy the universal and eternal human need for entertainment. Some turned to recreational mathematics — trying to solve problems such as
A gentleman who recently died left the sum of ₤8,000 to be divided among his widow, five sons and four daughters. He directed that every son should receive three times as much as a daughter, and
that every daughter should have twice as much as their mother. What was the widow’s share?
That is the seventh problem in Amusements in Mathematics, a collection of problems and solutions, first published in 1917. (The answer is ₤205 2s. 6d. and 10/13 of a penny.)
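As a quick check of Dudeney's answer (my own verification, not part of the review): if the widow's share is m, each daughter gets 2m and each son 6m, so m + 5·6m + 4·2m = 39m = ₤8,000. Exact arithmetic with Python's fractions module recovers the odd-looking answer in pre-decimal currency:

```python
from fractions import Fraction

m = Fraction(8000, 39)              # widow's share in pounds: 39m = 8000
pounds = m.numerator // m.denominator
rest_pence = (m - pounds) * 240     # 240 old pence to the pound
shillings = int(rest_pence) // 12   # 12 pence to the shilling
pence = rest_pence - 12 * shillings # 88/13 pence = 6 pence and 10/13
print(pounds, shillings, pence)     # -> 205 2 88/13
```

The remainder 88/13 pence is 6 and 10/13 pence, i.e. ₤205 2s. 6d. and 10/13 of a penny, exactly as stated.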
Recreational mathematics, mathematics for the fun of it, goes back a long way. The first book devoted to it was Problèmes Plaisants & Délectables by Claude-Gaspar Bachet (1581–1638), first published
in 1612 and still in print, with a new edition being published this year. It contains the problem of getting four pints of water given containers holding 8, 5, and 3 pints and many other recreational
problems still being posed today.
Recreational mathematics had its greatest flowering in the second half of the nineteenth century and into the twentieth, when many periodicals for general audiences had puzzle sections. In his
preface, Dudeney gives credit to The Strand Magazine, Cassell's Magazine, The Queen, Tit-Bits, and The Weekly Dispatch, where some of his problems originally appeared. Lewis Carroll’s A Tangled Tale
first appeared in 1880 as a serial in The Monthly Packet. In the 1890s the Brooklyn Daily Eagle had a puzzle column.
Dudeney and Sam Loyd (1841–1911) were the puzzle giants of their day, Dudeney in England and Loyd in the United States. For a time they collaborated but Loyd, who tried to make a living as a
recreational mathematician, allegedly stole some problems from Dudeney, who broke off the relationship. Dudeney wisely kept his day job.
Other recreational mathematicians of the day included W. W. Rouse Ball (1850–1925) and, in France, Édouard Lucas (1842–1891) who originated the Tower of Hanoi puzzle that lives on as a first exercise
in recursion. Hubert Phillips (1891–1964), whose nom de problème was “Caliban”, created original puzzles in the 1930s in England.
In spite of radio, movies, television, and everything else, recreational mathematics carries on. The work of Martin Gardner (1914–2010) does not need to be summarized here. A Canadian, J. A. H.
Hunter, had puzzle columns in newspapers in the middle of the twentieth century. In 1961, Joseph Madachy started, all by himself, Recreational Mathematics Magazine, which was taken over by Baywood
Publishers and renamed the Journal of Recreational Mathematics in 1968. It continues to this day. Besides Amusements in Mathematics, Dover Publications offers many other recreational mathematics
titles.
The book contains 430 problems. Its sections are labeled
• Arithmetical and algebraical problems
• Geometric problems
• Points and lines problems
• Moving counter problems
• Unicursal and route problems
• Combination and group problems
• Chessboard problems
• Measuring, weighing, and packing puzzles
• Crossing river problems
• Problems concerning games
• Puzzle games
• Magic square problems
• Mazes and how to thread them
• The paradox party
• Unclassified problems
Statements of problems consume about 150 pages and 100 pages are given over to answers and solutions. The solutions section contains mini-essays on mazes and on magic squares. Some problems have
answers only but many have interesting comments, at various lengths, on the solutions. Though Dudeney does not indicate which is which, many problems are new while others are not. (The problem of
cutting out corners from a sheet of material and folding to make a box of maximum volume is here, but with just an answer. How Dudeney expected his non-calculus readers, who must have been the vast
majority, to find it he does not say.)
The problems range in difficulty from the simple, such as the inheritance problem quoted above, to the truly difficult, such as bisecting a line segment using compasses alone. Though readers will
probably not want to try to solve many of the problems, it is fun to read their solutions. And that, fun, is the point of recreational mathematics.
Woody Dudley retired from teaching in 2004 and retired from doing recreational mathematics problems some years before that. Life contains enough problems. | {"url":"http://www.maa.org/publications/maa-reviews/amusements-in-mathematics","timestamp":"2014-04-16T20:44:36Z","content_type":null,"content_length":"102698","record_id":"<urn:uuid:3db5cb95-146c-45f4-9306-1d19f16f00a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
R Tutorial
The kurtosis of a univariate population is defined by the following formula, where μ[2] and μ[4] are the second and fourth central moments:

    kurtosis = μ[4] / μ[2]^2 − 3

Intuitively, the kurtosis is a measure of the peakedness of the data distribution. Negative kurtosis indicates a flat data distribution, which is said to be platykurtic. Positive kurtosis
indicates a peaked distribution, which is said to be leptokurtic. Incidentally, the normal distribution has zero kurtosis, and is said to be mesokurtic.
Find the kurtosis of eruption duration in the data set faithful.
We apply the function kurtosis from the e1071 package to compute the kurtosis of eruptions. As the package is not in the core R library, it has to be installed and loaded into the R workspace.
> library(e1071) # load e1071
> duration = faithful$eruptions # eruption durations
> kurtosis(duration) # apply the kurtosis function
The kurtosis of eruption duration is -1.5116, which indicates that the eruption duration distribution is platykurtic. This is consistent with the fact that its histogram is not bell-shaped.
Find the kurtosis of eruption waiting period in faithful.
The default algorithm of the function kurtosis in e1071 is based on the formula g[2] = m[4]∕s^4 - 3, where m[4] and s are the fourth central moment and sample standard deviation respectively. See the
R documentation for selecting other types of kurtosis algorithm. | {"url":"http://www.r-tutor.com/elementary-statistics/numerical-measures/kurtosis","timestamp":"2014-04-20T21:14:38Z","content_type":null,"content_length":"39270","record_id":"<urn:uuid:e6a16a10-7a6b-4370-b597-d1ce330591aa>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
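For readers outside R, the default formula quoted above can be sketched in a few lines of Python (an illustrative re-implementation, not the e1071 source):

```python
# Excess kurtosis g2 = m4 / s**4 - 3, following the formula quoted in
# the tutorial: m4 is the fourth central moment (divisor n) and s is
# the sample standard deviation (divisor n - 1). Illustration only.
def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n       # fourth central moment
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1) # sample variance
    return m4 / s2 ** 2 - 3
```

A negative result, as for the eruption durations, indicates a platykurtic distribution; a positive one, leptokurtic.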
Write 88.2% as a decimal. I know the answer, I just need the explanation.
tell me what you got for the answer
you get that by dividing 88.2 by 100 88.2/100 = 0.882 or you can just move the decimal point 2 spots to the left
so in general x% = x/100
the word "percent" refers to a division by 100. If we take a number n, then 1% of n is equal to n/100. A decimal here means a number between 0 and 1, so with n=1 we get 1/100 = 1%. If you want 88.2
of those ... then 88.2*1/100
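The rule x% = x/100 is a one-liner in any language; here is a Python version (my own addition, not from the thread):

```python
def percent_to_decimal(x):
    """x% as a decimal: divide by 100 (move the point two places left)."""
    return x / 100
```

So percent_to_decimal(88.2) gives 0.882, matching the answer above.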
Petr N. Vabishchevich
Keldysh Institute of Applied Mathematics, 4 Miusskaya Square, 125047 Moscow, Russia
Abstract: Numerical algorithms for solving problems of mathematical physics on modern parallel computers employ various domain decomposition techniques. Domain decomposition schemes are developed
here to solve numerically initial/boundary value problems for the Stokes system of equations in the primitive variables pressure–velocity. Unconditionally stable schemes of domain decomposition are
based on the partition of unit for a computational domain and the corresponding Hilbert spaces of grid functions.
Keywords: Viscous incompressible flows, numerical methods, domain decomposition techniques, operator-splitting schemes
Classification (MSC2000): 65M06; 65M12; 76D07
Full text of the article: (for faster download, first choose a mirror)
Electronic fulltext finalized on: 10 May 2012. This page was last modified: 12 Jun 2012.
© 2012 Mathematical Institute of the Serbian Academy of Science and Arts
© 2012 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition | {"url":"http://www.emis.de/journals/PIMB/105/12.html","timestamp":"2014-04-18T10:40:24Z","content_type":null,"content_length":"4790","record_id":"<urn:uuid:0726f364-9afa-4f24-b838-1da98ae3d2df>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hobart, IN Math Tutor
Find a Hobart, IN Math Tutor
I have several degrees in Education and Business. I have 4 years' experience serving as a Coordinator of Assessment and Curriculum. I am dedicated to expanding the student learning experience,
while contributing my knowledge of the Public School System to develop a superior curriculum and student development program.
15 Subjects: including geometry, study skills, ACT Math, special needs
...Therefore, my ability to adapt to different learning curves will be a great asset when I am a teacher. I will be able to teach every student to his or her potential. Throughout my college
career I was involved in an internship through Northern Illinois University, where I helped tutor students at the local high school as well as several local students.
11 Subjects: including prealgebra, ACT Math, algebra 1, algebra 2
...I worked as a teacher's aide and Title I liaison for Elwood Middle School for 8 years where I also did after-school help sessions and summer tutoring. I just completed my student teaching for
my elementary license in December 2011. As the oldest of 9 children, I have always been around young children.
25 Subjects: including algebra 2, ACT Math, GED, trigonometry
...I am working on getting my master's degree in clinical psychology; I will be graduating this coming May. I have tutored many students and adults in the past 5 years, in the areas of Arabic,
Hebrew and Math. My number one goal with all of my clients is to achieve their goals on learning the new ...
8 Subjects: including geometry, physics, psychology, differential equations
...I am a recent alum of the prestigious Teach for America program, where I worked in a Baltimore City School and was responsible for seeing math test results more than double from their previous
years. While in Baltimore I began working with MERIT to help tutor some of the city's most promising yo...
20 Subjects: including ACT Math, trigonometry, SAT math, linear algebra
Related Hobart, IN Tutors
Hobart, IN Accounting Tutors
Hobart, IN ACT Tutors
Hobart, IN Algebra Tutors
Hobart, IN Algebra 2 Tutors
Hobart, IN Calculus Tutors
Hobart, IN Geometry Tutors
Hobart, IN Math Tutors
Hobart, IN Prealgebra Tutors
Hobart, IN Precalculus Tutors
Hobart, IN SAT Tutors
Hobart, IN SAT Math Tutors
Hobart, IN Science Tutors
Hobart, IN Statistics Tutors
Hobart, IN Trigonometry Tutors | {"url":"http://www.purplemath.com/hobart_in_math_tutors.php","timestamp":"2014-04-21T14:49:15Z","content_type":null,"content_length":"23865","record_id":"<urn:uuid:7edb7ee6-54a4-4b41-9dfe-83e7a2371810>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Incomplete gamma function: General characteristics
General characteristics
Domain and analyticity
Symmetries and periodicities
Mirror symmetry
Poles and essential singularities
With respect to z
With respect to a
Branch points
With respect to z
With respect to a
Branch cuts
With respect to z
With respect to a
© 1998-2014 Wolfram Research, Inc. | {"url":"http://functions.wolfram.com/GammaBetaErf/Gamma2/04/ShowAll.html","timestamp":"2014-04-21T07:20:56Z","content_type":null,"content_length":"44361","record_id":"<urn:uuid:962a5e13-0a3b-45c7-bdd4-78ba95635795>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Duke Physics
Henry Greenside
Stable propagation of a burst through a one-dimensional homogeneous excitatory chain model of songbird nucleus HVC
Physical Review E (2006)
Characterization of the domain chaos convection state by the largest Lyapunov exponent
Physical Review E (2006)
Enhanced tracer transport by the spiral defect chaos state of a convecting fluid
Physical Review E (2005)
Pattern Formation and Dynamics in Rayleigh-Benard Convection: Numerical Simulations of Experimentally Realistic Geometries
Physica D (2003)
Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method
Chaos (2003)
Efficient algorithm on a nonstaggered mesh for simulating Rayleigh-Benard convection in a box
Physical Review E (2003)
Mean Flow Dynamics of Stripe Textures and Spiral Defect Chaos in Rayleigh-Benard Convection
Physical Review E (2003)
Pattern formation near onset of a convecting fluid in an annulus
Phys. Rev. E (2001)
PhD - Princeton
MS - Princeton
BA - Harvard University
Simulating Complex Dynamics In Intermediate And Large-Aspect-Ratio Convection Systems
In 18th Symposium on Energy Engineering Sciences. Argonne National Laboratory.
Spatiotemporal Chaos in Large Systems: The Scaling of Complexity With Size
In Semi-Analytic Methods for the Navier Stokes Equations, edited by K. Coughlin, pp. 9-40. American Mathematical Society, Providence, RI.
The Spatiotemporal Dynamics of Generalized Tonic-Clonic Seizure EEG Data: Relevance To the Clinical Practice of Electroconvulsive Therapy
In Nonlinear Dynamics in Brain Function. In Press.
The EEG Effects of ECT: Implications for rTMS
In Progress in Neuropsychopharmacology and Biological Psychiatry. In Press.
2001 Barbara and Randall Smith Duke Faculty Enrichment Award | {"url":"http://www.phy.duke.edu/content/henry-greenside","timestamp":"2014-04-19T18:03:46Z","content_type":null,"content_length":"28334","record_id":"<urn:uuid:09f58e5e-263d-4501-acbf-33382496dacb>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: intuitionistic and classical truth
Neil Tennant neilt at mercutio.cohums.ohio-state.edu
Fri Sep 4 17:02:04 EDT 1998
your remarks about not understanding what is meant by "saying what truth
*consists in*" do not, as far as I can see, express any sort of philosophical
bafflement, so much as *express a particular (and controversial) philosophical
view about truth*. If you wonder at what truth consists in---thereby
implying that there is nothing for it to consist in---then you are being
a deflationist about truth.
But there is a long tradition (scoffed at, of course, by redundancy theorists,
deflationists and pragmatists such as Rorty) of philosophizing about truth
according to which the truth of a proposition consists in the existence of
an appropriate kind of *truth-maker*. This is a metaphysical intuition driving
the work of philosophers as diverse as, say, David Armstrong (in his
conception of instantiated universals, and highly realist truth-conditions
for causal claims) and, say, Michael Dummett (in his neo-Wittgensteinian
rehabilitation of Brouwer, in which he gives an *anti-realist* account of
the truth-makers---i.e., the proofs---behind the *objective truths* of
mathematics). The anti-realism itself consists, of course, in the view that
one cannot guarantee that every declarative sentence will be determinately
true or false, independently of the means (i.e., proofs) by means of which
one might establish what the truth-value is.
I really do think that Dummett put Brouwer's doctrine into a better light
by explicating Brouwer's "constructions" as intuitionistic proofs, and
re-reading the justificatory burden of Brouwerian intuitions as discharged
by reflecting on the meaning-constituting character of logical rules of
inference (in particular, the introduction rules for logical operators in
a system of natural deduction).
Of course, there is a need to go further than the logical operators, for these
were, for Brouwer, merely part of mathematics generally. One needs to be able
to give a similar "rule-based" or inferential accou of the important
*mathematical* operators, such as "number of", "set of" etc. One way to do
this is to follow Martin-L"of into the intuitionistic theory of types, whereby
one acquires the ontological riches needed for mathematics, but does so by
smooth extrapolation of the Dummettian treatment of (first order) logic.
There are also other ways, tailored to the particular mathematical domain
in question. For example, in the case of arithmetic, it is possible to give
a neo-Fregean but *intuitionistic* derivation of the Peano-Dedekind principles
from some very weak second-order principles. The latter principles are
arguably meaning-constituting (for the notions of zero and successor) in the
same way that, for example, the rule of &-introduction is meaning-constituting
for the operator &.
When one speaks of "classical mathematics" nowadays, one always "up-dates"
the conception under discussion by imagining that one can appeal to the
nicest deductive system (say, a Gentzen-Prawitz system), and a nice model-
theory (or at least truth-definition) couched in the informal language of
set theory. One assumes furthermore that one is entitled to take any of the
branches of mathematics under discussion in its most recent and elegant
characterization. The discussion will then progress against the background
(hopefully) of full knowledge of the Gödel phenomena, the limitations of
formalism, the implicit realism of the classical logical apparatus, the known
"reductions" of different branches of mathematics to set theory, the known
calibrations of existential strength to be had from reverse mathematics, etc.
Why, then, is the intuitionist not allowed a corresponding degree of intellectual freedom in "updating" the doctrine of intuitionism? Why cannot he press into
service, for purposes of foundational debate, all the mathematical *and
philosophical* improvements that have been made since Brouwer's day? Whenever
intuitionism is mentioned among mathematicians, they immediately take its
canonical identity as given by *Brouwerian* doctrines---which, unfortunately,
can be made to appear rather laughable by judicious albeit selective quotations
from that unfortunate crank's work. It is high time for mathematicians and
foundationalists to talk of intuitionism in an up-to-date fashion, in a way
that assumes (among disputants) that one is dealing with the most sophisticated
re-constructive developments in this field since the time of Brouwer. [By the
way, these general remarks are not at all directed at you personally; indeed,
I'm sure you would agree, since I for one would be recommending the Tait
analysis of strict finitism from your J.Phil. article!] We need to have
an updated conception of intuitionism that would be au fait with, say,
Gentzen-Prawitz proof theory, relative consistency results due to Sieg,
Friedman, Pohlers et al., Martin-Löf's type theory, the meaning-theoretic
foundational work of Dummett, Bishop's constructivism, etc.
But given the dreadful disdain of many a core mathematician for classical
logic---as brought out by Adrian Mathias's compelling papers on Bourbaki---
one could be forgiven for thinking that it will take another century before
the sort of sophisticated appreciation of intuitionism that is needed would
be widespread within the mathematical community.
Neil Tennant
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-September/002062.html","timestamp":"2014-04-17T04:19:47Z","content_type":null,"content_length":"7725","record_id":"<urn:uuid:0145ebc8-848a-41b7-b910-92baebf2c752>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
tricky biased coin flipping question
March 19th 2010, 11:09 PM
tricky biased coin flipping question
I'm having trouble with the following question;
"A biased coin is tossed until more heads than tails appear.
The coin is biased such that it lands on heads with probability 2/3 and tails with probability 1/3
What is the expected number of flips and the variance?"
Any help would be appreciated;
March 24th 2010, 06:05 AM
I'm having trouble with the following question;
"A biased coin is tossed until more heads than tails appear.
The coin is biased such that it lands on heads with probability 2/3 and tails with probability 1/3
What is the expected number of flips and the variance?"
Any help would be appreciated;
I don't know if there is a simple way to do this. I had to look up some combinatorial stuff, and do some summations with Maple.
Here goes. The first time that you will toss more heads than tails, you will have made an odd number of tosses. Let's write this as $2n+1$ with $n$ non-negative. If $p$ is the probability of
tossing tail and $q$ that of head, you have tossed $n$ tails and $n+1$ heads, and the probability of that is
${1\over 2n+1}\left({2n+1\atop n}\right)q^{n+1}p^n$.
The combinatorial factor in the beginning is the number of ways of tossing heads and tails with the number of heads exceeding the number of tails for the first time at toss $2n+1$. I got this
from the Ballot Theorem in William Feller, An Introduction to Probability Theory and Its Applications. Note that
$\sum_{n=0}^\infty{1\over 2n+1}\left({2n+1\atop n}\right)q^{n+1}p^n$
is only equal to 1 if $q>1/2$. Otherwise there is a finite probability that the number of heads will never exceed the number of tails.
The expected number of tosses is now equal to
$\sum_{n=0}^\infty\left({2n+1\atop n}\right)q^{n+1}p^n={2q\over\sqrt{1-4pq}(1+\sqrt{1-4pq})}.$
With $q=2/3$ and $p=1/3$ this becomes equal to 3. For the variance we need
$\sum_{n=0}^\infty(2n+1)\left({2n+1\atop n}\right)q^{n+1}p^n={2q(4pq+\sqrt{1-4pq})\over(1-4pq)^{3/2}(1+\sqrt{1-4pq})},$
which equals 33 with $q=2/3$ and $p=1/3$. The variance is then $33-3^2=24$. | {"url":"http://mathhelpforum.com/advanced-statistics/134663-tricky-biased-coin-flipping-question-print.html","timestamp":"2014-04-18T09:07:22Z","content_type":null,"content_length":"9149","record_id":"<urn:uuid:19b27936-6a8e-497e-9736-9174aabbbd91>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
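As a numerical cross-check of these closed forms (my own addition, not from the thread), the two series can be summed directly in Python. Using the ratio of successive terms t_n = C(2n+1, n) q^(n+1) p^n avoids evaluating large binomial coefficients:

```python
p, q = 1/3, 2/3            # P(tail), P(head) as in the problem
t = q                      # t_0 = C(1, 0) * q = q
mean = 0.0                 # accumulates E[N]   = sum of t_n
second = 0.0               # accumulates E[N^2] = sum of (2n+1) * t_n
for n in range(400):       # terms decay like (4pq)**n = (8/9)**n
    mean += t
    second += (2 * n + 1) * t
    # C(2n+3, n+1) / C(2n+1, n) = (2n+2)(2n+3) / ((n+1)(n+2))
    t *= p * q * (2 * n + 2) * (2 * n + 3) / ((n + 1) * (n + 2))

variance = second - mean ** 2
print(mean, variance)      # both close to the exact values 3 and 24
```

The truncated sums agree with the closed-form results E = 3 and Var = 24 to well beyond machine precision for these parameter values.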
BJU Press BJU Math 11: Algebra 2, Homeschool Kit
BJU Press' math utilizes the concept approach, making a thorough understanding of math the primary goal of each lesson. Problem solving skills, systematic learning and review, Christian principles
and enjoyable lessons create apt learning conditions. Math 11: Algebra II builds upon previous math, looking at radical, exponential, rational, logarithmic and trig equations. A Graphing calculator
will be used year-long.
Kit Includes:
Teachers Edition
Student Textbook
Tests & Answer Key
This resource is also known as Bob Jones Algebra 2 Homeschool Kit, Grade 11
Customer Questions & Answers: | {"url":"http://answers.christianbook.com/answers/2016/product/201103/bju-press-bju-math-11-algebra-2-homeschool-kit-questions-answers/questions.htm","timestamp":"2014-04-18T13:14:03Z","content_type":null,"content_length":"73123","record_id":"<urn:uuid:7ad471c0-0364-4f6f-ac3b-f48017a968e3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
3. Recent HRIBF Research - Coupled-Cluster Approach to Nuclear Structure
(G. Hagen, spokesperson)
One of the major aims in the nuclear structure and reaction community today is to understand nuclear properties from the basic interactions among protons and neutrons. This effort has been
labeled the "ab-initio" approach to nuclear structure and reactions. The "ab-initio" approach aims for a theory that is capable of not only explaining experimental data, but also making predictions,
and therefore providing guidance for future experimental setups. In the nuclear structure/reaction context, this approach involves treating the nucleus as a many-body quantum system. The quantum
many-body problem is a difficult undertaking. Today there exist several theoretical methods capable of virtual exact solution of the nuclear Hamiltonian in the lightest region of the nuclear chart (A
<=12); these include the Faddeev [Nog00], Hyperspherical Harmonics [Bar99], No-Core shell-model [Nav00] and Green's function Monte-Carlo [Piep01] approaches. Due to the combinatorial or exponential
scaling, these methods are limited to the lightest region of the nuclear chart.
Scientists are exploring different methods to extend the "ab-initio" program to medium-mass nuclei. Coupled-cluster theory is a very promising candidate for this purpose. Recently Coupled-cluster
theory has seen a renaissance in nuclear structure. Coupled-cluster theory originated in nuclear theory, pioneered by Coester and Kümmel in the late 1950s, Ref. [Coe60]. In quantum chemistry
there was a parallel development of Coupled-cluster theory, and today it defines the state-of-the-art many-body theory in the quantum chemistry community, and a recent review of Coupled-cluster
theory can be found in Ref.[Bar07]. Coupled-cluster theory is an ideal compromise between computational cost on the one hand and accuracy on the other hand. It brings in correlation in a very
economical way when compared to other "ab-initio" methods. It has a polynomial scaling with system size, favoring it over methods with exponential or combinatorial scaling. Coupled-cluster is also
capable of systematic improvements and recovers the exact wave function in the full limit. Coupled-cluster theory maintains the very important feature of size-extensivity: the energy of the system
scales correctly with the number of particles in the system regardless of the order of approximation made. This crucial property must be a component of any "ab-initio" nuclear structure effort that
moves into heavier regions of the nuclear chart.
"Ab-initio" calculations of light nuclei, starting from Hamiltonians with two-body forces only have shown consistent failure to meet experimental mass values. These calculations have therefore
revealed the need for three-nucleon-forces (3NF'S) in order to account for this systematic discrepancy. The existence of 3NF's is not surprising since nucleons are not elementary point particles. A
theory starting from nucleon degrees of freedom is therefore an effective theory where internal degrees of freedom (quark and gluon) are integrated out. The relevant low-energy degrees of freedom are
given by a cutoff or resolution scale at which properties of the system are resolved and probed. The higher the resolution scale, the more details of the inner structure are revealed. The removal of
degrees of freedom by a cutoff at a given energy scale, has to be compensated by additional many-body forces in order to recover the richness of the system where all degrees of freedom are taken into
account. The hope is that two- and three-body forces will be sufficient to approximately renormalize the nuclear many-body problem in a range of energy cutoffs. The modern understanding is that there
is no unique 3NF: each nucleon-nucleon force has its associated cutoff, and therefore has to be accompanied by its own 3NF. A frontier in nuclear structure concerns how one can consistently
relate 3NF's to a given realistic nucleon-nucleon force. A systematic way of relating low-energy nuclear physics to QCD through Chiral Effective Field Theory (EFT) was recently developed. Chiral EFT
starts from an effective Lagrangian consistent with the symmetries of QCD. The relevant low-energy degrees of freedom of Chiral EFT are the nucleons and pions, all other degrees of freedom are
integrated out of the theory. Expanding the nuclear amplitude in powers of a typical nucleon momentum or pion mass over the chiral-symmetry-breakdown scale (~1 GeV) yields a perturbative series
in which NN forces, 3NF's, and forces of higher rank appear systematically at a given order. At each order in the theory there is a finite number of diagrams determined by one- and two-pion
exchange terms and contact terms. This approach further accounts for the natural hierarchy of forces, i.e. NN > NNN > NNNN ... In Ref.[Hag1] we performed large-scale coupled-cluster calculations of
the ground state energies of ^4He, ^16O and ^40Ca using a Hamiltonian with a renormalized two-body force of the low-momentum type (V-lowk). Our results were reasonably well converged with respect to
the basis size, and we estimated that the ^40Ca ground-state energy was converged to within 1% of the exact result. The calculated ground-state energies of ^16O (-148.2 MeV) and ^40Ca (-502.9 MeV) were
largely overbound when compared to the experimental mass values of -127.6 MeV and -342.1 MeV, respectively. This is not surprising since we did not include the corresponding 3NF's which should
accompany the two-body interaction we used.
One of our major aims in the coupled-cluster project is to investigate the role of 3NF's in medium-mass nuclei and in isotopic chains with extreme isospin asymmetry. Recently we developed and
implemented coupled-cluster theory for three-body Hamiltonians [Hag1] and performed a benchmark calculation of the binding energy of ^4He using a renormalized two-body interaction accompanied with a
3NF at NNLO in the Chiral EFT expansion. Our results were in excellent agreement with the numerical exact Faddeev-Yakubovsky calculation starting with the same Hamiltonian. We further found that the
3NF could be very well approximated by a density dependent zero-, one- and two-body term, see Fig. 3-1. This finding is very promising, since we can account for the full 3NF using well developed
tools and machinery for two-body Hamiltonians. It remains to be seen whether this finding also holds in heavier nuclei.
Figure 3-1: Relative contributions ΔE/E to the binding energy of ^4He at the CCSD level. The different points denote the contributions from (1) low-momentum NN interactions, (2) the vacuum
expectation value of the 3NF, (3) the normal-ordered one-body Hamiltonian due to the 3NF, (4) the normal-ordered two-body Hamiltonian due to the 3NF, and (5) the residual 3NFs. The dotted line
estimates the corrections due to omitted three-particle--three-hole clusters.
Another frontier in the nuclear structure and reaction community today concerns the theoretical understanding of structure properties and reaction mechanisms of nuclei located far away from the
valley of beta-stability. At the limits of matter (neutron/proton drip lines), exotic features, which are not seen in the well-bound and stable nuclei, start to emerge, such as extreme matter
clusterizations, melting and reorganizing of shell structure, ground states embedded in the continuum, and extremely dilute and extended matter densities. Another peculiar feature, which appears in some
of these exotic nuclei, is that the one-neutron decay threshold is above the two-neutron decay threshold. Some of these nuclei, like ^6He and the cardinal case of ^11Li, have been labeled as
Borromean nuclei. It is a great theoretical challenge to account for the properties of nuclei at the drip lines. In standard shell model approaches, the nuclear wave function is expanded in a finite
set of harmonic oscillator states. While this approach works well for well-bound and stable nuclei, it is obvious that this description is not the appropriate description when moving towards the drip
lines, where the nuclei become loosely bound and even unbound in their ground states. The proximity of the scattering continuum in these systems, directly relates to the exotic properties observed in
these nuclei. As the outermost nucleons approach the scattering thresholds, the tail of their wave functions extend far out in radial space and therefore accounts for the spatially dilute matter
distributions or halo densities observed in some of these nuclei.
A very promising way to account for these properties is by expanding the wave function in a Berggren basis [Berg68]. The Berggren basis is a generalized single-particle basis where bound, resonant,
and continuum states are treated on equal footing. A representation of the many-body wave function in such a basis allows for description of both halo-densities of loosely-bound nuclei and
calculation of lifetimes and decay widths of unbound nuclear states. This approach has been applied with great success in shell-model calculations here at Oak Ridge in W. Nazarewicz's group, led by J.
Rotureau [Rot06] and N. Michel [Mic02]. In Ref.[Hag3] we applied a Berggren basis within the Coupled-cluster framework for the first time, and calculated masses and lifetimes of the helium chain
(^3-10He). This was the first "ab-initio" calculation of lifetimes of a whole isotopic chain. The results are summarized in Fig.3-2. The black dotted line gives our calculated masses, while the red
dotted line gives the experimental mass values. The inset gives our calculated widths of the helium isotopes compared with experimental values. This figure shows that our results are in
semi-quantitative agreement with experiment. With this interaction, all helium isotopes lack binding compared to experiment. However, the even-odd mass pattern is reproduced fairly well. We see that
^5He is unstable with respect to one-neutron emission, while ^6He is stable towards one-neutron emission. However, ^6He is not stable towards two-neutron emission. This is mainly due to the missing
three-body forces and the omission of full triples in our calculation. ^8He is stable towards one-, two-, and three-neutron emissions but not stable against the emission of four neutrons to the
continuum and ^4He. We believe that the growing discrepancy between theory and experimental mass values as we move along the helium chain is due to the lack of 3NF's. But for larger systems, triples
corrections should play a more prominent role as well. By combining both of these missing ingredients, we believe that our results should be closer to the experimental values.
Figure 3-2: CCSD results (black dotted line) and experimental values (red dotted line) for the ground state of the helium chain ^3-10He using a Gamow-HF basis and a low-momentum interaction
generated from the N^3LO interaction model.
In summary, we are now in a position where we can answer questions in nuclear structure and reactions, which could previously not be addressed. In the near future we are going to explore the role of
3NF's in medium size nuclei. We will look at saturation properties of modern realistic forces in medium mass nuclei. We will implement and derive the Equation-of-Motion CCSD method, so that we can
study excited states and properties of closed-shell nuclei and their neighboring nuclei. A particularly interesting project is to perform an "ab-initio" Coupled-cluster calculation of halo nuclei,
and for the first time give "ab-initio" predictions of the drip lines. We are also aiming at merging of structure and reaction theory within the Coupled-cluster framework.
[Nog00] Phys. Rev. Lett. 85, 944 (2000).
[Bar99] Phys. Rev. C 61, 054001 (2000).
[Piep01] Ann. Rev. Nucl. Part. Sci. 51, 53 (2001).
[Nav00] Phys. Rev. C 62, 054311 (2000).
[Coe60] Nucl. Phys. 17, 477 (1960).
[Bar07] Rev. Mod. Phys. 79, 291 (2007).
[Hag1] Phys. Rev. C 76, 044305 (2007).
[Hag2] Phys. Rev. C 76, 034302 (2007).
[Berg68] Nucl. Phys. A 109, 265 (1968).
[Rot06] Phys. Rev. Lett. 97, 110603 (2006).
[Mic02] Phys. Rev. Lett. 89, 042502 (2002).
[Hag3] Phys. Lett. B 656, 169 (2007).
Verifying Subsets as Subspaces through scalar vector
April 25th 2009, 09:06 PM
I am having trouble with this example in the book (Intro to Linear Algebra, Johnson Riess Arnold, Fifth edition, p. 174):
"Let W be the subset of R^2 defined by W={x: x = [ x1 x2 ], x1 and x2 any integers }. Demonstrate that W is not a subspace of R^2."
W passes the 0 test and the x+y test. However, it fails the scalar test. It says that if a=1/2, then x is in W but ax is not. I don't understand this. Does that mean if a=3/4, then ax is not as
well? What if a=3, is ax in W? Is ax only in W if a=1?
Why can we say that if a=XXX, then x is in W? I'm not sure how they came to that conclusion.
Thanks very much :)
April 25th 2009, 09:47 PM
Lattice of Integers
This is just the Cartesian product of two copies of $\mathbb{Z}$. So as a vector space over $\mathbb{R}$ it is clearly not closed under scalar multiplication. Let $S:=\mathbb{R} - \mathbb{Z}$ be the
set of real numbers that are not integers. For any $s \in S$, consider $s\langle 1,1\rangle = \langle s, s\rangle \notin W$. You may notice if you multiply instead by a scalar that is an integer, your product will stay in W, as the
integers are closed under multiplication.
If you are interested, you can consider it as something of a vector space over $\mathbb{Z}$. These are called modules, and this W is in fact an example of a two dimensional free module with basis
$\{(0,1), (1,0)\}$.
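The closure failure can also be checked mechanically; here is a small sketch added for illustration (the helper `in_W` is hypothetical, not from the thread):

```python
def in_W(v):
    """Membership test for W = Z x Z: both coordinates must be integers."""
    return all(float(c).is_integer() for c in v)

def scale(s, v):
    """Scalar multiplication on R^2 represented as a tuple."""
    return tuple(s * c for c in v)

x = (1, 1)
print(in_W(x))               # True: x lies in W
print(in_W(scale(0.5, x)))   # False: (0.5, 0.5) has left W
print(in_W(scale(3, x)))     # True: integer scalars keep the product in W
```

This mirrors the argument above: W is closed under integer scalars but not under arbitrary real scalars, so it fails to be a subspace of R^2.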
April 25th 2009, 09:58 PM
thanks for the reply - it's clear and ALMOST makes sense. Just one question: what do you mean when you say it's not closed under scalar multiplication?
April 25th 2009, 10:17 PM
Oh closed just means if you "multiply" two things from a set they stay in that set.
For vector spaces you need to be able to add two things in the set and have them stay in the set. That is closed under addition.
Also you need to be "closed under scalar multiplication." If it is a vector space over $\mathbb{R}$, then if you take an element from $\mathbb{R}$ and multiply something in your candidate for a
vector space by it, it should remain in that candidate, ie be closed under scalar multiplication where the scalar comes from $\mathbb{R}$.
In your case the "candidate" I am referring to is W.
April 25th 2009, 10:20 PM
hm so in this case, since we are starting with an integer, does that mean we have to end up with an integer? is that why we can't have any fractions like a=1/2?
Sorry, I'm kind of math retarded
April 25th 2009, 10:35 PM
Well you are trying to determine if W is a subspace of $\mathbb{R}^2$ which is a real two dimensional vector space. So if W is to be a subspace, it too needs to be a REAL vector space.
For W to be a real vector space it needs to be the case that for EVERY $u,v \in W$ and EVERY $r\in \mathbb{R}$
1) $u+v \in W$
2) $ru\in W$
I am claiming condition 2 will only hold if r is an integer itself. Read my other post to see why it fails when r is NOT an integer.
The point is since W does not satisfy the definition of being a real vector space, it is not a subspace.
April 25th 2009, 10:44 PM
hmm....maybe I don't understand vectors correctly. Can vectors be fractions? ie, can we have something like <1.5 2>?
How do we know in this case that W doesn't have any fractions in it?
The way I'm thinking about this is, W is this space that encloses everything from one starting point to the ending point. For example, if W is between 1 - 5, it should contain every number,
fraction, etc between 1-5.
I'm trying to be as clear as I can but unfortunately it's been 5-6 years since I've done math...
April 25th 2009, 10:56 PM
What is W? I thought $W=\{(x_1,x_2)\in \mathbb{R}^2| x_1, x_2 \in \mathbb{Z} \}$. Is this incorrect?
It sounds to me like you are thinking it is intervals in the real line with integer endpoints?
April 25th 2009, 11:20 PM
A vector in $\mathbb{R}^n$ is just an ordered n-tuple with entries in $\mathbb{R}$, $(x_1, x_2, ..., x_n)$. This specifies both a direction and a magnitude.
An n-dimensional REAL vector space is characterized by the fact that any vector can be uniquely written as a linear combination of basis vectors; typically you use the standard basis for $\mathbb{R}^n$, $\{(1,0,...,0), (0,1,0,...,0), ..., (0,0,...,1)\}$.
There is no reason the vector you supplied is not a vector, the problem is that if I am understanding W correctly you cannot have one of the entries be 1.5 because it is not an integer. That is
the whole point, if W were a Real vector space, you should be able to multiply any element of W by any real number and it should give you back another element of W. This is simply not the case
here with W. So it is not a subspace of $\mathbb{R}^2$.
The question is just asking you to observe that if you only had to make sure your set W was closed under multiplication by scalars coming from the integers and not all of the real numbers, it
would actually be a "two dimensional free module over the integers." While it turns out free modules and vector spaces are pretty similar, there are some distinct differences.
April 25th 2009, 11:30 PM
ah ok this makes a lot more sense. Thanks for sticking with me - I appreciate it very much!!
April 25th 2009, 11:38 PM
No worries, glad you got it figured out. Take care, and glad to hear you are revisiting mathematics after a 6 year departure.
JULY 6 - JULY 10
Young and Freedman - 12th Ed - CHAPTER 22
30. Do not use Gauss' Law for any part of this problem.
We have already found the result for the E field due
to an infinitely thin sheet of charge in two different
ways; you may simply use that result here. First
consider each of the four infinitely thin sheets of
charge separately. Then at each of the locations
A, B, and C, add the vector contributions to the
field from each of the four infinitely thin sheets.
Show your work carefully. GBA
54. Omit part (a) of this problem. To find the E field
strength within the slab, use a Gaussian pillbox
which is centered on the center of the slab, and which
has its parallel faces equidistant from the central
plane of the slab. With such a Gaussian pillbox, the
symmetry argument in part (a) is not required; instead
a simpler symmetry argument about the E field direction
within the slab will suffice. You must give your
symmetry argument clearly for this problem. GBA
(P prefixes) HRW Problem Supplement #1 - CHAPTER 22
26. Calculate the number of coulombs of positive charge
in 250 cm^3 of (neutral) water (about a glassful).
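As an order-of-magnitude cross-check (an editorial sketch, not part of the assignment): 250 cm^3 of water has mass about 250 g, and each H2O molecule carries 10 protons, so

```python
# Standard constants (rounded values).
N_A = 6.022e23        # Avogadro's number [1/mol]
e   = 1.602e-19       # elementary charge [C]
M   = 18.015          # molar mass of water [g/mol]

mass_g = 250.0                    # 250 cm^3 at ~1 g/cm^3
protons_per_molecule = 10         # O contributes 8, each H contributes 1

Q = (mass_g / M) * N_A * protons_per_molecule * e
print(Q)                          # roughly 1.3e7 C
```

a glass of water contains on the order of 10^7 coulombs of positive charge (balanced, of course, by an equal negative charge).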
YOUNG - CHAPTER 21
60. The potassium chloride molecule (KCl) has a dipole
moment of 8.9 x 10^-30 Cm. (a) Assuming that this
dipole moment arises from two charges, each of
magnitude 1.6 x 10^-19 C, separated by distance d,
calculate d. (b) What is the maximum magnitude of
the torque that a uniform electric field with
magnitude 6.0 x 10^5 N/C can exert on a KCl
molecule? Sketch the relative orientations of the
electric dipole moment p (p is a vector) and the
electric field E (E is a vector) when the torque
is a maximum. (c) Suppose that the electric field
of part (b) points in the +y direction. If the
dipole swings from an initial orientation pointing in
a direction 20 degrees below the -x axis to a final
orientation pointing in a direction 20 degrees above
the -x axis, what is the change in the dipole's
electric potential energy?
78. (a) Suppose all the electrons in 20.0 g of carbon
atoms were located at the North Pole of the earth and
all the protons at the South Pole. What would be the
total force of attraction exerted on each group of
charges by the other? The atomic number of carbon
is 6, and the atomic mass of carbon is 12 g/mol.
(b) What would be the magnitude and direction of
the force exerted by the charges in part (a) on a
third charge that is equal to the charge at the
South Pole, and located at a point of the surface
of the earth at the equator? Draw a diagram showing
the locations of the charges and the forces on the
charge at the equator.
YOUNG - CHAPTER 22
30. This is essentially problem 22:32 in Y&F 12e but
with a nonuniform electric field. The x-component
of the E field is given by (-5.00 N/(Cm))x. The
y-component of the E field is zero. The z-component
of the E field is given by (+3.00 N/(Cm))z.
The value of L is 0.30 m.
(a) Find the electric flux through each of the six
cube faces S_1 through S_6. (b) In the uniform
field case, the electric flux through the entire
cube would have been zero (what goes in must come
out); but, in this case you will find that the
electric flux through the entire cube is not zero.
Find the total electric charge inside the cube.
44. A small, INSULATING, spherical shell with inner
radius a and outer radius b is concentric with a
larger INSULATING spherical shell with inner
radius c and outer radius d (Figure 22.39 in
Y&F 12e). The inner shell has total charge +q
distributed uniformly over its volume, and the
outer shell has charge -q distributed uniformly
over its volume. (a) Calculate the charge
densities in the inner shell and the outer shell.
(b) Calculate the electric field (magnitude and
direction) in terms of q and the distance r from
the common center of the two shells for (i) r<a;
(ii) a<r<b; (iii) b<r<c; (iv) c<r<d; (v) r>d.
(c) Show your results in a graph of the radial
component of E (E is a vector) as a function of
the distance r.
45. A long coaxial cable consists of an inner cylindrical
conductor with radius a and an outer coaxial cylinder
(also conducting) with inner radius b and outer
radius c. The outer cylinder is mounted on insulating
supports and has no net charge. The inner cylinder
has a uniform positive charge per unit length lambda.
Calculate the electric field (a) at any point between
the cylinders, a distance r from the axis; (b) at
any point outside the outer cylinder. (c) Graph the
magnitude of the electric field as a function of
the distance r from the axis of the cable, from
r=0 to r=2c. (d) Find the charge per unit length
on the inner surface and on the outer surface of
the outer cylinder.
48. A very long, solid cylinder with radius R has
positive charge uniformly distributed throughout it,
with charge per unit volume rho. (a) Derive the
expression for the electric field inside the volume
at a distance r from the axis of the cylinder in
terms of the charge density rho. (b) What is the
electric field at a point outside the volume in
terms of the charge per unit length lambda in the
cylinder? (c) Compare the answers to parts (a)
and (b) for r=R. (d) Graph the electric field
magnitude as a function of r from r=0 to r=3R.
YOUNG - CHAPTER 23
48. Three small spheres with charge 2.00 microC are
arranged in a line, with sphere 2 in the middle.
Adjacent spheres are initially 8.0 cm apart. The
spheres have masses m1=20.0 g, m2=85.0 g, and
m3=20.0 g, and their radii are much smaller than
their separation. The three spheres are released
from rest. (a) What is the acceleration of sphere 1
just after it is released? (b) What is the speed of
each sphere when they are far apart?
WOLFSON - CHAPTER 23
16. A charge 3q is at the origin, and a charge -2q is on
the positive x axis at x = a. Where would you place
a third charge so it would experience no net electric force?
22. Three identical charges +q and a fourth charge -q form
a square of side a. (a) Find the magnitude of the
electric force on a charge Q placed at the center of
the square. (b) Describe the direction of this force.
32. A 1.0 microC charge and a 2.0 microC charge are 10 cm
apart. (a) Find a point where the net electric field
is zero. (b) Sketch the net electric field lines.
34. A positive charge +2q lies on the x axis at x = -a,
and a charge -q lies at x = +a. (a) Find an expression
for the electric field as a function of x for points to
the right of the charge -q. (b) Taking q = 1.0 microC
and a = 1.0 m, plot the field as a function of position
for x = 5.0 m to x = 25 m.
50. A seimcircular loop of radius a carries positive charge
Q distributed uniformly over its length (the loop begins
at the point (0,a) and extends counterclockwise to the
point (0,-a)). Find the electric field at the center of
the loop. Show all the steps in your work.
54. A thin rod of length L carries charge Q distributed
uniformly over its length (the rod begins at the point
(-L/2,0) and extends rightward to the point (+L/2,0)).
(a) What is the line charge density on the rod?
(b) What must be the electric field direction on the
rod's perpendicular bisector (i.e. the y axis).
(c) Find an expression for the electric field at a
point P at the location (0,+y). (d) Show that your
result for (c) reduces to the field of a point charge
Q for y >> L.
56. How strong an electric field is needed to accelerate
electrons in an old-style TV tube from rest to one-tenth
the speed of light in a distance of 5.0 cm?
58. An oscilloscope display requires that a beam of
electrons moving at 8.2 Mm/s be deflected through
an angle of 22 degrees by a uniform electric field
that occupies a region 5.0 cm long. What should be
the field strength?
(P prefixes) HRW Problem Supplement #1 - CHAPTER 23
25. A semi-infinite nonconducting rod (i.e. infinite in one
direction only) has uniform positive linear charge density
lambda. Show that the electric field at point P makes an
angle of 45 degrees with the rod and that this result is
independent of the distance R. (HINT: Separately find
the parallel and perpendicular (to the rod) components
of the electric field at P, and then compare those components.)
_   +++++++++++++++++++++++++ =====> very long
|
R
|
-   P   <== this point is a distance R from the end
            of the rod
37. In Millikan's experiment, an oil drop of radius
1.64 microns and density 0.851 g/cm^3 is suspended in
the experimental chamber when a downward-directed
electric field of 1.92 X 10^5 N/C is applied. Find
the charge on the drop, in terms of e.
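A numerical sanity check for this one (an editorial sketch, not part of the problem supplement): suspension means the electric force balances gravity, |q|E = mg with m = (4/3)πr³ρ:

```python
import math

r   = 1.64e-6     # drop radius [m]
rho = 851.0       # oil density [kg/m^3]
E   = 1.92e5      # field magnitude [N/C]
g   = 9.81        # [m/s^2]
e   = 1.602e-19   # elementary charge [C]

m = rho * (4.0 / 3.0) * math.pi * r**3   # mass of the spherical drop
q = m * g / E                            # charge magnitude needed for balance
print(q / e)                             # close to 5, so |q| = 5e
```

Since the field points downward but the electric force must point upward, the charge is negative: q = -5e.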
55. A charge (uniform linear density = 9.0 nC/m) lies on a
string that is stretched along an x axis from x = 0 to
x = 3.0 m. Determine the magnitude of the electric
field at x = 4.0 m on the x axis.
(P prefixes) HRW Problem Supplement #1 - CHAPTER 24
24. A charge of uniform linear density 2.0 nC/m is distributed
along a long, thin, nonconducting rod. The rod is coaxial
with a long, hollow, conducting cylinder (inner radius =
5.0 cm, outer radius = 10 cm). The net charge on the
conductor is zero. (a) What is the magnitude of the
electric field 15 cm from the axis of the cylinder? What
is the surface charge density on (b) the inner surface and
(c) the outer surface of the conductor?
66. A metallic spherical shell of inner radius a1 and outer
radius a2 has charge Qa. Concentric with it is another
metallic spherical shell of inner radius b1 (where b1 > a2)
and outer radius b2; shell b has charge Qb. Both Qa and Qb
are positive. Find the electric field strength at points a
distance r from the common center, where (a) r < a1,
(b) a2 < r < b1, and (c) r > b2. (d) Determine how the
charges are distributed on the inner and outer surfaces of
the shells (give the surface charge densities). (e) Graph
the electric field strength as a function of r over the
range r=0 to r=2*b2.
WOLFSON - CHAPTER 24
24. A thin spherical shell of radius 15 cm carries 4.8 microC,
distributed uniformly over its surface. At the center of
the shell is a point charge. (a) If the electric field at
the surface of the sphere is 750 kN/C and points outward,
what is the charge of the point charge? (b) What is the
field just inside the shell?
56. A point charge -q is at the center of a insulating thin
spherical shell carrying charge -(3/2)q. That shell, in
turn, is concentric with a larger shell (also insulating
and also thin) carrying charge +2q. (a) Draw a cross section
of this structure, and sketch the electric field lines
using the convention that 8 lines correspond to a charge
of magnitude q. (b) If the shells were metallic instead
of insulating, how would the field lines that you have
drawn be different, and what would be the total charge on
each of the inner and outer surfaces of each of the two shells?
FISHBANE - CHAPTER 24
38. Two large, thin, metallic plates are placed parallel to
each other, separated by 15 cm. The top plate carries a
uniform charge density of 24 microC/m^2, while the bottom
plate carries a uniform charge density of -38 microC/m^2.
What is the electric field (magnitude and direction) (a)
halfway between the plates? (b) above the two plates?
(c) below the two plates? (d) What are the surface charge
densities on the top and bottom surfaces of both plates?
45. A metal sphere of radius a is surrounded by a metal shell
of inner radius b and outer radius R. The flux through a
spherical Gaussian surface located between a and b is
Q/epsilon0, and the flux through a spherical Gaussian
surface just outside radius R is 2Q/epsilon0. (a) What
are the total charges on the inner sphere and on the shell?
(b) Where are the charges located, and (c) what are the
charge densities?
Halliday and Resnick - 2nd Edition - CHAPTER 25
1. Water in an irrigation ditch of width w = 3.22 m and
depth d = 1.04 m flows with a speed of 0.207 m/s. The
mass flux of the flowing water through an imaginary
surface is the product of the water's density (1000 kg/m^3)
and its volume flux through that surface. Find the mass
flux through the following imaginary surfaces: (a) a
surface of area wd, entirely in the water, perpendicular
to the flow; (b) a surface with area 3wd/2, of which wd
is in the water, perpendicular to the flow; (c) a surface
of area wd/2, entirely in the water, perpendicular to the
flow; (d) a surface of area wd, half in the water and
half out, perpendicular to the flow; (e) a surface of
area wd, entirely in the water, with its normal 34 degrees
from the direction of flow.
(P prefixes) HRW Problem Supplement #1 - CHAPTER 25
88. Three particles with the same charge q and same mass m are
initially fixed in place to form an equilateral triangle
with edge lengths d. (a) If the particles are released
simultaneously, what are their speeds when they have
traveled a large distance (effectively an infinite distance)
from each other? (Measure the speeds in the original rest
frame of the particles.)
Suppose, instead, the particles are released one at a time:
The first one is released, and then, when the first one is
at a large distance, a second one is released, and then, when
that second one is at a large distance, the last one is
released. What then are the final speeds of (b) the first
particle, (c) the second particle, and (d) the last particle?
WOLFSON - CHAPTER 26
3. Four 50 microC charges are brought from far apart onto a
line where they are spaced at 2.0 cm intervals. How much
work does it take to assemble this charge distribution?
A Prefixes
A1: An isolated copper sphere of radius 16 cm is found to have
an electric field at its surface which is radially inward
(i.e. pointing towards the center of the sphere) and which
has a strength 1150 N/C. (a) How many excess electrons are
on this copper sphere? (b) A square made up of nine copper
atoms (3 atoms by 3 atoms) takes up an area of about one
square nanometer. Approximately how many copper atoms are
on the surface of this copper sphere? (c) On the surface
of the copper sphere, there are about how many copper atoms
per excess electron?
A2: Assume that water flowing through a pipe with a circular
cross section flows most rapidly at the center of the pipe
and least rapidly near the pipe walls (this might be
reasonable because of friction between the water and the
pipe walls). Assume that the water speed at the walls is
half as great as the speed at the center, and assume that
the decrease is linear (i.e. the graph of water speed
versus radius is a straight line). If the radius of the
pipe is 10 cm and if the volume flux of water is 5.0 liters
per second, find the speed of the water at the center of
the pipe. (One liter is 1000 cubic centimeters or 0.001
cubic meters). HINT: Draw the graph of water speed versus
distance from the centerline of the pipe; you must do
the integral for volume flux symbolically.
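One way to check a final answer numerically (this sketch assumes the linear profile v(r) = v0(1 - r/(2R)) described above; carrying out the integral by hand gives Q = 2πv0R²/3, hence v0 = 3Q/(2πR²)):

```python
import math

R = 0.10       # pipe radius [m]
Q = 5.0e-3     # volume flux [m^3/s] (5.0 liters per second)

# From the symbolic result Q = 2*pi*v0*R^2/3:
v0 = 3 * Q / (2 * math.pi * R**2)
print(v0)      # ~0.24 m/s at the center of the pipe

# Midpoint-rule check that this v0 reproduces the stated flux:
# flux = integral over r of v(r) * 2*pi*r dr
n = 100_000
dr = R / n
flux = sum(v0 * (1 - (k + 0.5) * dr / (2 * R)) * 2 * math.pi * (k + 0.5) * dr * dr
           for k in range(n))
print(flux)    # ~5.0e-3 m^3/s
```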
A3: Consider a thin disk, of negligible thickness, of radius R
which is oriented perpendicular to the x axis such that the
x axis runs through the center of the disk. The disk is
centered at x=0, and has positive charge density sigma.
(a) Show that the electric field on the positive x axis is
given by
E_x = (sigma/(2epsilon0))*(1-1/sqrt(1+(R/x)^2))
(b) Show how to apply the binomial expansion to your result
for (a) in order to demonstrate that the commonly-used
expression, E=sigma/(2epsilon0), is valid to within
1.0% for all values of x which are 1.0% of R or less.
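Part (b)'s claim can be probed numerically before doing the binomial expansion (a quick sketch, not a substitute for the requested derivation):

```python
import math

def field_ratio(x_over_R):
    """E_x divided by sigma/(2*eps0) for the on-axis field of the disk."""
    return 1.0 - 1.0 / math.sqrt(1.0 + (1.0 / x_over_R) ** 2)

for f in (0.01, 0.005, 0.001):
    deviation = 1.0 - field_ratio(f)   # fractional shortfall vs. infinite sheet
    print(f, deviation)
```

At x = 0.01R the deviation is just under 1.0%, consistent with the claim; the binomial expansion shows the fractional deviation is approximately x/R.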
A4: A thin wooden stick of length 12 cm has a tiny metal sphere
glued to each end. A charge of +3 microC is placed on one
sphere and a charge of -2 microC is placed on the other.
The center of mass is located 7 cm from the positively-
charged sphere. The system is mounted on a fixed
horizontal E-W axle passing through the center of mass
about which the system is free to rotate with no friction.
When the system is then placed in a horizontal uniform
southward electric field of 800 N/C, the resulting
equilibrium position of the system is horizontal with
the positive charge due S of the negative charge.
(a) When the system is in its equilibrium position,
what is the horizontal force on the system by the axle?
(b) What amount of torque (about the axle) is required
to hold the system at an angular displacement of 25
degrees away from the equilibrium position?
(c) What minimum amount of work must be done to move
the system from its equilibrium position to an angular
displacement of 25 degrees? (HINT: write the needed
torque as a function of displacement, then integrate
from zero to 25 degrees to find the needed work.)
(d) What work would be done by the electric field
during the displacement described in part (c)?
A5: Two conductors, A and B, are each in the shape of a
tetrahedron, but of different sizes. They are charged
in the following manner:
1. Tetrahedron A is charged from an electrostatic
generator to charge q.
2. Tetrahedron A is briefly touched to tetrahedron B.
3. Steps 1 and 2 are repeated until the charge on
tetrahedron B reaches a maximum value; i.e.
the process is repeated until further repetitions
will no longer result in the transfer of additional
charge from tetrahedron A to tetrahedron B.
If the charge on tetrahedron B was q/5 after the first
time it touched tetrahedron A, what is the final charge
on tetrahedron B? HINT: the way to make progress is
to think carefully about the first and second transfers,
and then about the final attempted transfer (in which
no charge is actually transferred). Be sure to express
your logic clearly and thoroughly when writing up this
Binary search tree implemented by use of an array
02-09-2006 #1
Registered User
Join Date
Feb 2006
so...this is the problem...
i need to make a BST that has random integers in all nodes and then do an inorder print, and i have to construct a function that would return how many nodes have only one child, and how many of them
have both children.
does anybody have any kind of written source code that could help me?!
i know how to do this by use of pointers but this array sh** is too idiotic and i really don't know how to do this. help...
> i know how to do this by use of pointers
So instead of storing a pointer in a node of your tree, store array indices.
The only slightly tricky bit is allocating 'nodes' from your array, which is basically down to marking each element of the array as 'free' or 'allocated'
but i need the code for the structure,and everything else...
just to see wha to do...
The root of the tree is index 0.
The left child of any node is at "current index" * 2 + 1.
The right child of any node is at "current index" * 2 + 2.
Just remember those rules and you should be able to duplicate the necessary components of your pointer-based tree. Like Salem said, you need to figure out a way to mark/differentiate the indices as
being in use or available.
So, for example to insert the value 18 into an empty tree you check the root (index 0) to see if it is available. Since the tree is initially empty, the root is available and you can place 18 in
that index and then mark it as unavailable/in use.
Next say we try to insert the value 46. Well, the root (index 0) has been marked as unavailable and its value is 18 which is less than 46 and so we need to move on to the right node which is at
index 2 (0*2+2). Since index 2 is available, we put 46 into that index and mark it as unavailable... and so on and so on.
The marking of the indices as available or unavailable can be done in a couple of ways. The first would be to have a second array of essentially boolean values to keep track of what's going on
in the array you are using to store the actual data in. The other way would be to store not just simply integers in the tree, but perhaps a struct consisting of an int and a bool "available/
unavailable" value. ...if you know your values to be stored are going to look a certain way (all positive integers for example) then you can also say that a special value(s) will be used (any
negative value in this example) to indicate that the node is empty and available for use.
> but i need the code for the structure,and everything else...
But you first said
"i know how to do this by use of pointers "
So either you know how to implement a binary tree using the traditional approach or you don't.
Algorithms and data structures are independent of implementation, so once you actually understand the principle, implementing them in a variety of styles shouldn't be that much of an issue.
A Simulation Study Using a Local Ensemble Transform Kalman Filter for Data Assimilation in New York Harbor
September 27, 2008
By Hoffman, Ross N Ponte, Rui M; Kostelich, Eric J; Blumberg, Alan; Szunyogh, Istvan; Vinogradov, Sergey V; Henderson, John M
ABSTRACT Data assimilation approaches that use ensembles to approximate a Kalman filter have many potential advantages for oceanographic applications. To explore the extent to which this holds, the
Estuarine and Coastal Ocean Model (ECOM) is coupled with a modern data assimilation method based on the local ensemble transform Kalman filter (LETKF), and a series of simulation experiments is
conducted. In these experiments, a long ECOM “nature” run is taken to be the “truth.” Observations are generated at analysis times by perturbing the nature run at randomly chosen model grid points
with errors of known statistics. A diverse collection of model states is used for the initial ensemble. All experiments use the same lateral boundary conditions and external forcing fields as in the
nature run. In the data assimilation, the analysis step combines the observations and the ECOM forecasts using the Kalman filter equations. As a control, a free-running forecast (FRF) is made from
the initial ensemble mean to check the relative importance of external forcing versus data assimilation on the analysis skill. Results of the assimilation cycle and the FRF are compared to truth to
quantify the skill of each.
The LETKF performs well for the cases studied here. After just a few assimilation cycles, the analysis errors are smaller than the observation errors and are much smaller than the errors in the FRF.
The assimilation quickly eliminates the domain-averaged bias of the initial ensemble. The filter accurately tracks the truth at all data densities examined, from observations at 50% of the model grid
points down to 2% of the model grid points. As the data density increases, the ensemble spread, bias, and error standard deviation decrease. As the ensemble size increases, the ensemble spread
increases and the error standard deviation decreases. Increases in the size of the observation error lead to a larger ensemble spread but have a small impact on the analysis accuracy.
(ProQuest: … denotes formulae omitted.)
1. Introduction
Large advances in data gathering and ocean modeling capabilities in the last decade have started to make “operational oceanography” more than just a concept. In this new reality in oceanography, it
is becoming more and more important that nowcasting and forecasting methods be both fast and accurate, provide information about uncertainties, make best use of disparate in situ and satellite
datasets, and be implemented under different data constraints and dynamical regimes. Such ocean data assimilation systems (DASs) have a broad range of applications in areas as diverse as fisheries,
oil, and tourism industries, search and rescue operations, oceanographie field research, and national security. Of particular interest is the coastal zone, including harbors and estuaries.
The implementation of a fully functional DAS for regional application in the oceanic coastal zones is a challenging task for several reasons, among them the large dimensionality of the problem, the
need for accommodating a variety of asynchronous data in both dense and sparse sampling regimes, the stringent requirements for estimating uncertainties of analyses and forecasts, the portability and
modularity needed to allow integration in an operational environment, and timeliness requirements. A number of global- and basin-scale assimilation efforts, some at eddy-resolving resolutions, are
currently underway (e.g., Lermusiaux et al. 2006). Most recent efforts are focused on advanced DASs related to four- dimensional variational data assimilation (4DVAR) methods (e.g., Stammer et al.
2002; Wunsch and Heimbach 2007) or the various implementations of Kalman filter methods (e.g., Fukumori 2002; Lermusiaux et al. 2006). Application of these advanced methods for data assimilation in
coastal regions is an active area of research, but optimal interpolation schemes (Mellor and Ezer 1991; Fana et al. 2004) are still in use in operational systems.
One geographical region with considerable ongoing efforts in modeling and data collection is the New York Harbor and adjacent littoral zone (Fig. 1). A quasioperational DAS based on the Estuarine and
Coastal Ocean Model (ECOM) and a simple optimal interpolation scheme has been developed as part of the New York Harbor Observing and Prediction System (NYHOPS; Blumberg et al. 1999; Bruno et al.
2006). NYHOPS collects observations of the harbor and surrounding waters, but these data are not yet assimilated to improve forecasts. To explore the potential of an advanced DAS for the coastal
zone, we have used the NYHOPS as a test bed for a state-of-the-art data assimilation method made possible by recent advances in adapting the Kalman filter for very large nonlinear dynamical systems.
Our DAS is based on the local ensemble transform Kalman filter (LETKF), originally developed for atmospheric applications by Ott et al. (2004) and Hunt et al. (2007).
As a first step, this paper describes some simulation experiments and results. The experimental results presented here are based on a number of simplifications and assumptions appropriate for a
proof-of-concept study. A long run of the model, called the nature run, is taken to be the truth. To begin our data assimilation, an initial ensemble of model states is created by choosing snapshots
from the nature run prior to the start of the assimilation experiment and treating them as realizations valid at the nominal synoptic time. Then, each ensemble member is advanced to the next synoptic
time using the forecast model, and the observations are combined with the forecasts (i.e., the background ensemble) to produce an ensemble of analyses. This process is iterated. As it proceeds, the
process fills gaps in sparsely observed regions, converts observations to improved estimates of model variables, and filters observation noise. All this is done in a manner that is physically
consistent with the dynamics of the ocean as represented by the model. In our experiments, the simulated observations are created by sampling the nature run and adding random errors to those values.
At each step, to monitor the quality of the system, we compare results to the truth and to a free-running forecast without any data assimilation.
In what follows, we describe our methodology (section 2), including the ocean model (ECOM), the assimilation method (LETKF), and the interface between these components (in sections 2a, 2b, and 2c,
respectively). Results are discussed in section 3, including the experimental setup (section 3a), a detailed look at the baseline experiment (section 3b), and an overview of sensitivity experiments
(section 3c) that explore how the quality of the analysis depends on ensemble size, data density, and observation error magnitudes. A summary of our experiments and key findings is provided in
section 4, and suggested directions for future work are discussed in section 5.
2. Methodology
The goal of a DAS is to combine all available information and provide an optimal estimate of the state of the system and its respective uncertainty (e.g., Daley 1991; Kalnay 2002; Evensen 2006).
Information is of two forms: past and present observations of the system and the dynamics of the system (as expressed by a model). In this section, we describe the ECOM, which we use to represent the
dynamics of the ocean, the analysis algorithm for updating the state estimate based on the latest observations, and the interface between the model and analysis.
a. Estuarine and Coastal Ocean Model
The ECOM is a state-of-the-art, three-dimensional, hydrodynamic ocean model developed by Blumberg (1996) as a derivative of the Princeton Ocean Model (Blumberg and Mellor 1987). The model
realistically computes water circulation, temperature, salinity, and mixing and transport in rivers, lakes, bays, estuaries, and the coastal ocean. Recent enhancements include generalized open
boundary conditions, tracers, and bottom boundary layer submodels. The overall ECOM framework fully integrates sediment transport, water quality, particle tracking, heat flux, and wave modules.
Anything predicted or diagnosed by the ECOM or its submodels can potentially be used by the LETKF.
The model numerically solves the continuity and nonlinear Reynolds momentum equations under hydrostatic and Boussinesq approximations. The free surface elevation is computed prognostically, and tides
and storm surges can be easily simulated with minimal computational cost due to a mode-splitting technique in which the volume transport and vertical velocity are solved separately. The external-mode
shallow-water equations, which are obtained from vertically integrating the three-dimensional equations, are solved by a leapfrog explicit scheme. The vertically dependent terms are solved less
frequently using a leapfrog scheme with an Asselin time filter. The vertical model coordinate sigma is defined by the ratio of the depth and the local height of the water column such that the free
surface is at sigma = 0 and the bottom of the ocean is at sigma = -1. A sigma-coordinate system (Phillips 1957) is preferable to a z-coordinate system in the vicinity of large bathymetric
irregularities that are common in coastal areas. The parameterization of turbulence uses a second-order closure scheme (Mellor and Yamada 1982). Successful applications of the ECOM to oceanic,
coastal, and estuarine regions include studies in Chesapeake Bay (Blumberg and Goodrich 1990), Delaware Bay (Galperin and Mellor 1990), Massachusetts Bay (Signell et al. 1994), the Oregon Continental
Shelf (Allen et al. 1995), New York Harbor (Blumberg et al. 1999), Onondaga Lake (Ahsan and Blumberg 1999), and Mississippi Sound (Blumberg et al. 2000). Extensive comparisons with data have shown
that the model has good predictive capabilities, which suggests that the important physical processes are realistically reproduced (e.g., Vinogradova et al. 2005).
Input parameters for the ECOM include bathymetry, the initial ocean state (temperature, salinity, and surface elevation), and time- variable boundary conditions and atmospheric forcing. River
discharges can also be introduced as time-variable fluxes. For the experiments described here, the domain is the New York Harbor and adjacent littoral zone, an implementation used quasi-operationally
and known as the NYHOPS (Blumberg et al. 1999; Bruno et al. 2006). The spatial extent of the NYHOPS domain (see Fig. 1) incorporates the New York-New Jersey Harbor and extends beyond to include the
Hudson River Estuary up to the Troy Dam, all of Long Island Sound, and the New York Bight out to the continental shelf. A 59 x 94 computational grid employs an orthogonal-curvilinear coordinate
system that resolves the complex and irregular shoreline of the New York/New Jersey Harbor-New York Bight region. The resolution of the computational grid varies from 500 m in the rivers to about 42
km in the New York Bight. The local height of the water column varies from approximately 150 m to less than 2 m in the NYHOPS domain.
b. The local ensemble transform Kalman filter
The local ensemble transform Kalman filter belongs to the larger family of ensemble Kalman filter (EnKF) data assimilation schemes. Its design allows for efficient implementation on parallel, high-
performance computing architectures.
The goal of data assimilation is to determine the trajectory (time series of states) that best fits a set of noisy observations of the system from the past and the present. Let x be an m- dimensional
vector representing the state of the system at a given time. For a grid point model, such as the ECOM, the components of ? are the state variables at the different grid points. Suppose that we have a
time series of observations and assume both that the observational error is Gaussian with zero mean and a known error covariance matrix and that the observations depend on x in a known way. There are
three pieces of information associated with the set of observations collected at time t_j: the vector y^o_j, whose components are the observations; the observation operator H_j,
which defines the functional relation between x and y^o_j; and the observation error covariance matrix R_j. That is,
y^o_j = H_j[x(t_j)] + epsilon_j,  (1)
where epsilon_j is a Gaussian random variable with mean 0 and covariance matrix R_j. Let the present time be t_n. It can be shown that when the dynamics and the observation operator
are linear, the present state associated with the most likely trajectory can be obtained by finding the minimum, x^a_n, of the cost function
… (2)
where the first term reflects the effects of all observations collected up to time t_{n-1} and the second term reflects the effects of observations collected at t_n. The Kalman filter equations
solve this least squares problem using the state estimate x^a_{n-1} and the estimate of the analysis error covariance matrix P^a_{n-1} from time t_{n-1} in the following manner:
(i) The background state x^b_n is obtained by propagating the state estimate x^a_{n-1} from t_{n-1} to t_n using the dynamics
… (3)
where M_{t_{n-1},t_n} is the (linear) operator of the dynamics.
(ii) The background error covariance matrix P^b_n is obtained by propagating the estimate of the analysis error covariance matrix P^a_{n-1} from t_{n-1} to t_n using
P^b_n = M_{t_{n-1},t_n} P^a_{n-1} M^T_{t_{n-1},t_n}.  (4)
(iii) The analysis error covariance matrix P^a_n is given by
P^a_n = (I - K_n H_n) P^b_n,  (5)
where the Kalman gain matrix K_n is defined by
K_n = P^b_n H^T_n (H_n P^b_n H^T_n + R_n)^{-1}.  (6)
(iv) The state estimate x^a_n is obtained by
… (7)
This formulation of the Kalman filter assumes that M_{t_{n-1},t_n} provides a perfect representation of the true dynamics, a condition which is not satisfied when an imperfect numerical
model is employed to simulate the true dynamics. Thus, in addition to the effects of the initial condition uncertainties at the beginning of the forecast step, model errors introduced during the
forecast step also contribute to the uncertainty in the background x^b_n. The effect of model errors is often taken into account by adding Q to the right-hand side of (4), where Q is the model error
covariance matrix. This formulation assumes that the effect of the model errors can be represented by a bulk Gaussian random error term that has zero mean and error covariance Q and is uncorrelated
with the errors in the initial conditions (e.g., Evensen 2006, 28-29). Because the results of this paper are for the perfect model scenario, we make only two brief comments on the representation of
model errors. First, in practice Q is often parameterized with a multiplicative variance inflation, that is, by modifying the right-hand side of (4) by a multiplicative factor of (1 + gamma), gamma > 0.
The extended Kalman filter (EKF; Jazwinski 1970) extends the applicability of the Kalman filter equations to nonlinear systems by using the nonlinear model in (3), …, and substituting the linearized
(tangent linear) model for M_{t_{n-1},t_n} in (4). Heuristically, this approach assumes that although the model dynamics is nonlinear, the uncertainties in the state estimates
are small; thus, their evolution can be approximated by the linearized dynamics. However, implementation of the EKF on a state-of-the-art ocean or numerical weather prediction (NWP) model is
problematic for the following reasons:
(i) Explicitly forecasting P^b_n for a high-dimensional nonlinear system using the tangent linear model in (4) and then calculating (5) and (7) is so computationally expensive that it is
infeasible without major approximations (e.g., Fukumori and Malanotte-Rizzoli 1995). This is true even for relatively small domain systems.
(ii) The use of the tangent linear model in (4) can potentially lead to unbounded linear instability of the filter (e.g., section 4.2.3 in Evensen 2006).
The most popular approach to avoid the difficulties posed by the EKF is to use a three- (3DVAR) or fourdimensional variational (4DVAR) method (see, e.g., Courtier 1997; Lorenc 1997). These techniques
apply standard unconstrained optimization methods to directly minimize (2), thus eliminating the need for calculating (5) and (7), but do require an adjoint calculation of the gradient of (2). In the
variational schemes the state estimate is updated by (3), but (4) is not used. Instead, 3DVAR simply uses a precomputed, time-independent P^b_n, whereas 4DVAR obtains P^b_n
starting from the same time-independent P^a_{n-1} in each analysis cycle. It is important to note that the minimization of the 4DVAR cost function, like the EKF, requires the use of the
tangent linear model (and its adjoint). Although some groups (e.g., Stammer et al. 2002; Wunsch and Heimbach 2007) use similar methods for ocean state estimation, here we consider a different
approach: an ensemble Kalman filter.
The EnKF approach makes Kalman filtering feasible by replacing (4) with a much cheaper approach for the calculation of P^b_n: at time t_{n-1}, a k-member ensemble of initial conditions,
[x^a(i)_{n-1}, i = 1, 2, ..., k], is selected such that the spread of the ensemble around the ensemble mean x^a_{n-1} accurately represents P^a_{n-1}; then the members of the
ensemble are propagated using the nonlinear model to generate a background ensemble [x^b(i)_n, i = 1, 2, ..., k]. Typically, the state space dimension m is orders of magnitude larger than
the ensemble size k. Then the EnKF estimates of x^b_n and P^b_n are
… (8)
… (9)
where X^b is the m x k matrix whose ith column is x^b(i) - x^b. The improvement in computational efficiency results from the fact that although the evaluation of (4) requires m
linearized model integrations in the EKF, the evaluation of (8) requires only k integrations of the full model. Although the smaller number of model integrations significantly reduces the
computational burden, there are a number of issues any EnKF scheme has to address before it can be implemented on a state-of-the-art ocean or NWP model, namely,
(i) Although the rank of P^b is m, the rank of its estimate in (9) is k - 1, which could make solving (5) and (7) problematic; (ii) solving (5) and (7) for a typical model and a large number of
observations can still be computationally prohibitive;
(iii) the EnKF is more sensitive to model errors than 3DVAR and 4DVAR because it uses the model dynamics to propagate the error statistics through many analysis cycles;
(iv) a computational algorithm, such as that given by Hunt et al. (2007), is needed to generate the analysis ensemble, [x^a(i), i = 1, 2, ..., k], such that
… (10)
… (11)
where X^a is the m x k matrix whose ith column is x^a(i) - x^a; and
(v) P^b typically underestimates the actual uncertainty. One way to compensate is to multiply P^b by a constant factor (1 + gamma) before each analysis, where gamma is called the variance
inflation factor.
Fortunately, all these issues can be satisfactorily addressed. Extensive numerical investigations with operational weather forecast models show that the approximations embodied in the EnKF work well
and that some of the EnKF schemes are scalable to very large systems (e.g., Keppenne and Rienecker 2002; Whitaker et al. 2004, 2007; Szunyogh et al. 2005, 2008). Many different variants of the EnKF
have been developed since the publication of the first EnKF scheme (Evensen 1994), and the main differences between these schemes are essentially in how the abovementioned challenges are met. A
chronological history of ensemble Kalman filtering is provided by Evensen (2006, 255-265), and a detailed mathematical analysis of the relationship between the different schemes and a summary of the
latest developments are presented in Hunt et al. (2007).
The goal of the local ensemble transform Kalman filter (LETKF) is to optimize the computational performance of the EnKF without a loss of accuracy. The LETKF has been shown to be a computationally
efficient algorithm for NWP applications (Whitaker et al. 2007; Szunyogh et al. 2008).
The terms “local” and “transform” refer to the features of the LETKF approach described below.
* Local: The analysis ensemble members and their mean (the state estimate) are obtained independently for each model grid point G using observational information from a local volume containing G. The
analysis ensemble members, [x^a(i)_{n-1}, i = 1, 2, ..., k], and the analysis x^a are assembled from the grid point estimates. A similar approach was first used in the local ensemble Kalman
filter of Ott et al. (2004).
* Transform: The analysis perturbations X^a are obtained by linearly transforming the background perturbations X^b using the method of Hunt et al. [2007, Eqs. (22) and (23)]. Essentially,
the cost function is minimized in the space spanned by the background ensemble. A similar approach was first used in the ensemble transform Kalman filter scheme of Bishop et al. (2001).
The LETKF has been coupled to the Global Forecast System (GFS), which is the global atmospheric model of the National Centers for Environmental Prediction (NCEP) and the National Weather Service
(NWS; Szunyogh et al. 2008), and to the National Aeronautics and Space Administration (NASA)-National Oceanic and Atmospheric Administration (NOAA) finite-volume general circulation model (fvGCM)
developed by Lin and Rood (1996). Tests of the implementation of the LETKF on the NCEP GFS with real observations show that the scheme is about as accurate as the operational data assimilation system
of NCEP-NWS in data-dense regions and considerably more accurate than the operational system in data-sparse regions (Szunyogh et al. 2008). The interface between the ECOM and the LETKF, which we
describe in the next section, has been developed based on the interface that was originally designed and coded for the NCEP GFS.
c. ECOM/LETKF interface
As noted above, the initial applications of the LETKF have been to numerical weather prediction. There is a one-to-one mapping between the prognostic variables of the GFS weather model and the ECOM
(Table 1). In both models, a sigma-coordinate system is used, and the surface (top of the ocean, bottom of the atmosphere) is level 1. The two-dimensional prognostic variables that define sigma (the
surface pressure for the GFS and the free surface for the ECOM) are at level 1. As a result of these similarities, only a few changes to the LETKF software were required, as will be noted in various
contexts in this section.
To handle tide- or wind-forced gravity waves, the ECOM uses a time-splitting technique (Blumberg and Mellor 1987). In the NYHOPS, the time step is 5 s for the external (barotropic) mode and 50 s for
the internal (baroclinic) mode. Although the model can represent very high-frequency phenomena, we do not expect to predict or analyze these frequencies accurately. Therefore, to deal with more
predictable and observable quantities, we work in terms of time- averaged quantities. In fact, real observations are often time averaged over a period ranging from a few minutes to an hour.
Experience shows that, in spite of the small time steps used by the ECOM, the interesting features of the NYHOPS domain in Fig. 1 can be represented by hourly averages. Therefore, in these
preliminary experiments, 1-h averages of the forecasts of the prognostic variables are used as the background. Also, the observations are generated by adding random observational noise from the
specified error distribution to 1-h averages of the fields from the nature run at randomly selected model grid points. The analysis solutions are then given in terms of these time-averaged quantities.
Once initialized with a first ensemble of analyses, x^sup a(i)^^sub 0^, i = 1, 2,…, k, the data assimilation cycles through the following forecast and analysis steps:
(i) the ensemble of ECOM forecasts from t^sub n-1^ to t^sub n^ provides the background ensemble, x^sup b(i)^^sub n^, i = 1, 2,…,k; and
(ii) the LETKF analysis combines the background ensemble and the simulated observations to produce the analysis ensemble x^sup a(i)^^sub n^, i = 1, 2,…, k.
After each analysis, we must restart the model after making (relatively small) changes to the model prognostic variables. For this purpose, we also run a forecast from the ensemble average and use
the “hot restart” dataset that results as a template to create hot-restart files from the ensemble of analyses that only include time-averaged prognostic variables at the analysis time. In a hot
restart, all fields necessary to exactly continue a model integration are stored. For example, in the ECOM, to accommodate the time stepping scheme, model prognostic variables at different times are
included. The overall data flow in our numerical experiments is illustrated for a single data assimilation cycle in Fig. 2 and described in some detail in the appendix. Note that we carry out k + 1
forecast runs with the ECOM at every analysis time: one for each of the k ensemble members and an additional one from the ensemble mean analysis. The conversion procedures shown in Fig. 2 are needed
because the LETKF and the ECOM require the ensemble data to be in slightly different formats.
3. Results
Here we describe our baseline experiment and results in detail. We then summarize the results of sensitivity experiments in which we vary three aspects of the system: density (i.e., amount) of
observations, ensemble size (i.e., number of ensemble members), and expected observation error (i.e., the standard deviation of the observation errors).
a. Baseline experiment description
The ECOM nature run was started from climatological values at 0000 UTC 1 Jan 2004. Thereafter external forcing was provided by meteorological inputs from operational NCEP Eta 12-km analyses, observed
U.S. Geological Survey (USGS) river discharge data, NOAA water level observations, and lateral open ocean salinity and temperature boundary conditions from climatology and water level from a global
ocean tidal model. We archived values from the ECOM nature run as hourly averages, valid at the half hour, beginning at 0030 UTC 1 March 2004 and continuing for 90 days. Our experiments have focused
on the first half of an interesting event from 26 April through 4 May 2004, when strong discharge from local rivers, associated with heavy local rainfall and spring melting in the northern reaches of
the Hudson River, forced a plume of warmer and fresher water to emerge from the New York Harbor and spread southward along the New Jersey coast.
Figure 3 shows two frames each from animations of T and S in the uppermost model layer (layer 1). The baseline experiment begins at 0530 UTC 26 April and the frames shown in Fig. 3 correspond to 0600
UTC 27 April and 1600 UTC 28 April 2004. Note how the plume of low-S, high-T water from New York Harbor expands southward during this interval. (Although not described here, experiments for a
period earlier in April, when the New York Harbor nature run was more quiescent, show qualitatively similar results to those presented here.)
The baseline experiment runs for 96 h and contains 32 3-h forecasts, each followed by an analysis. As mentioned earlier, 1-h average quantities are used throughout: for the background (i.e., forecast), the observations (and thus also the output from the LETKF), and in all plots presented here. The nominal ensemble size is k = 16. The LETKF analyzes the principal prognostic variables (h, T, u, v, S). At each analysis time, synthetic observations are generated by randomly selecting, on average, 10% of the elements in the model state vector x. This selection method implies that if temperature is observed at a given map location, then there may be zero or a few other observations of temperature or other prognostic variables in the vertical column at that map location. The standard deviations of the observation errors are uniform in the vertical and are 0.1 m, 0.5°C, 0.05 m s^-1, and 1 psu for surface height, temperature, currents, and salinity, respectively. The variance inflation factor is γ = 0.09. The localization is done in terms of grid distances, and the search volume extends out to two grid lengths in both horizontal directions
and one to two grid lengths in the vertical direction, increasing with depth. This localization strategy provides a natural search volume because observations are generated randomly at a specified
fraction of grid points. At any given grid point, between a few and a few dozen observations are used (Fig. 4).
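As we read the procedure above, observation generation can be sketched as follows. The function and table names are our own; the error standard deviations are those quoted for the baseline experiment, with the 0.05 m s^-1 value applied to both current components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline observation-error standard deviations (u and v share the
# currents value); the dict name and layout are illustrative.
OBS_SIGMA = {"h": 0.1, "T": 0.5, "u": 0.05, "v": 0.05, "S": 1.0}

def generate_observations(truth, var_names, density=0.10, rng=rng):
    """Observe, on average, a fraction `density` of the state vector,
    adding unbiased Gaussian noise whose standard deviation depends on
    the variable type of each element.

    truth     : 1D array, the nature-run state vector
    var_names : 1D array of variable labels ("h", "T", "u", "v", "S")
    Returns the indices of observed elements and the noisy values.
    """
    observed = rng.random(truth.size) < density
    sigma = np.array([OBS_SIGMA[v] for v in var_names])
    obs = truth + sigma * rng.standard_normal(truth.size)
    # Physical constraint: reset negative simulated salinities to 0 psu
    # (the source of the slight positive salinity bias noted in Fig. 7a).
    obs[(np.asarray(var_names) == "S") & (obs < 0.0)] = 0.0
    return np.flatnonzero(observed), obs[observed]
```

With density = 0.10 this typically observes about 10% of the elements, so any one vertical column may contain zero, one, or several observations of a given variable.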
The initial ensemble is made up of data sampled regularly in time from the nature run during March and the first half of April 2004. For the baseline experiment, nature is sampled roughly every fifth
high tide (approximately every 62 h), beginning with 1030 UTC 3 March, 0030 UTC 6 March, and 1330 UTC 8 March, and ending with 0530 UTC 11 April. The times for the start of the experiment, the
initial ensemble members, and the template used to create the first ensemble of initial conditions for the ECOM are all close to the time of high tide at the Battery station at the southern tip of
Manhattan. In this way, the component of the flow due to the tidal forcing is approximately the same, and in fact minimized, in all these states. This should limit spurious large-amplitude
The time required to perform the analysis for all grid points is typically an order of magnitude less than the time required for a single 3-h ECOM forecast. Thus, in contrast to the case reported for
the NCEP GFS with large operational observational datasets by Szunyogh et al. (2008), we did not need to use domain decomposition and multiprocessing within the LETKF, but we did find it useful to
run the ECOM for different ensemble members on different processors.
b. Baseline experiment results
In the baseline experiment, the LETKF analyses very quickly asymptote to the nature run (i.e., to the “truth”). For comparison, a free-running forecast (FRF) was started from the mean of the initial
analysis ensemble. This comparison is important to determine the extent to which the ensemble tracks the truth because of the data assimilation and not simply because the system evolution is largely
determined by the forcing fields (surface winds, freshwater inputs, fluxes of heat and moisture at the surface, and inflow boundary conditions).
We begin by examining the impact of the data assimilation at a single location indicated in Fig. 1: the model grid point at the head of Hudson Canyon, located southeast of Sandy Hook beach, New Jersey, and south of Jamaica Bay, Long Island. At this location, Fig. 3 shows that there are large temporal changes in surface temperature and salinity. Figure 5 shows time-depth cross sections at this location for T and S for the analysis, FRF, and nature run. The FRF begins with a state that is representative of the cooler and saltier conditions during the first half of the nature run. The
FRF recovers neither the temperature nor the salinity structure of the nature run. In contrast, the baseline analysis does a very good job of representing the true state of the ocean at this location
after roughly 36 h of assimilation.
The colder conditions in the FRF seen in Fig. 5 hold over the entire domain. Figure 6 shows the temporal behavior of domain-averaged temperature for the nature run, the analysis, the forecast (or background), and the FRF. The mean values are calculated over all ocean grid points or over all observed locations (roughly 10% of all ocean points for the baseline experiment). For the rest of the paper, each statistic that is presented (here, mean temperature) is calculated for a particular sample. This sample might include one time or a range of times, a single level or all levels, and a single location or the entire horizontal domain. For all statistics presented, all grid points are given equal weight even though there is a great variation in the volume associated with a grid point. Figure 6 shows that the FRF has a roughly 3°C cold bias and shows little improvement even at four days. The assimilation of data quickly eliminates the domain-averaged bias from the background and the analysis.
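The equal-weighting convention can be made concrete; the temperatures and cell volumes below are invented for illustration, and the volume-weighted mean is shown only for contrast (the paper does not use it):

```python
import numpy as np

# Equal-weight vs. volume-weighted domain means (illustrative values only).
temps = np.array([10.0, 12.0, 14.0])     # per-gridpoint temperatures (deg C)
volumes = np.array([1.0, 5.0, 10.0])     # per-gridpoint cell volumes (made up)

equal_weight_mean = temps.mean()                         # the paper's convention
volume_weight_mean = np.average(temps, weights=volumes)  # shown for contrast
```

Here the equal-weight mean is 12.0 while the volume-weighted mean is 13.125, so the two conventions can differ noticeably when cell volumes vary strongly with depth.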
The analysis provides unbiased estimates for the other model variables as well as for temperature. For example, Fig. 7a shows the evolution of domain-averaged bias for salinity. Note how smoothly the analysis settles down to the truth and does not track the jitter in the bias between the observations and background (see curves T – A and O – B in Fig. 7a). By design, the data errors are Gaussian and unbiased, and the expected observation errors are uniform: 1 psu for salinity. This is clearly seen in the O – T curves that follow, but here there is a slight positive (O > T) salinity bias due to resetting simulated observations to zero psu when simulated observation errors would result in negative salinity. Similarly, Fig. 7b shows the evolution of domain-averaged bias for the u-current. Here there is a clear oscillation in the FRF bias during the first day or so of the experiment, which is also seen in h and v (not shown).
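That resetting negative salinities produces a small positive O - T bias is easy to check numerically; the 0.5-psu truth value below is an arbitrary stand-in for near-fresh water, not a value from the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncating Gaussian observation noise at 0 psu moves only the low tail
# upward, so the mean observation exceeds the truth (O > T).
truth_S = np.full(100_000, 0.5)                            # near-fresh truth (psu)
obs_S = truth_S + 1.0 * rng.standard_normal(truth_S.size)  # sigma = 1 psu
obs_S[obs_S < 0.0] = 0.0                                   # reset negative values
clipping_bias = float((obs_S - truth_S).mean())            # small and positive
```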
Not only is the analysis unbiased, it is also accurate. Figure 8a shows the temperature error evolution. Here, and in what follows, we will use the term “error” to mean the standard deviation of the
difference between the truth and some other quantity, such as the ensemble mean analysis. Note the very fast approach of the analysis error to its asymptotic value: after just a few 3-h cycles, the
errors are greatly reduced and are smaller than the observation errors and much smaller than the errors in the FRF. The fact that the analysis and background errors quickly asymptote to very small
values relative to the observation errors is a key indicator that the observed information is assimilated efficiently. Insofar as the analysis errors are smaller than the FRF errors, one can conclude
that the small errors in the state estimates are not simply due to the model response to the strong external forcing. In Fig. 8a, the small deviations of the observation errors from the expected
value (curve O – T) are due to the finite nature of the sample.
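Under this definition, the bias and error of any state estimate over a chosen sample are simply (a minimal sketch; `bias_and_error` is our own helper name):

```python
import numpy as np

def bias_and_error(estimate, truth):
    """Return (bias, error) for a sample: the mean and the standard
    deviation of the difference between an estimate (e.g., the ensemble
    mean analysis) and the truth over the sampled points and times."""
    diff = np.asarray(estimate, dtype=float) - np.asarray(truth, dtype=float)
    return float(diff.mean()), float(diff.std())
```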
It is apparent that different variables in the ocean model respond to the external forcing on different time scales. Figure 8b shows the evolution of the surface height error. The sample sizes are
roughly a tenth of those for temperature because the surface height is a two-dimensional field, whereas three-dimensional fields like temperature have 10 layers. The behavior of the assimilation is
again very accurate, with negligible errors after 48 h, but the FRF is also very good and asymptotes to similar very small errors after 72 h. Salinity errors are more like temperature in that the FRF
very slowly approaches the truth, whereas velocity errors are more like surface height in that the FRF quickly approaches the truth. These differences relate to the fact that u, v, and h are for the
most part tidally forced and can adjust rapidly to the perfect forcing, but adjustments of the T and S fields involve thermodynamic processes that have longer time scales and depend not only on the
forcing but also on advection and diffusion processes.
The fast approach of the data assimilation system to asymptotic behavior is also seen in the ensemble spread. Here, and in what follows, the ensemble spread is the rms value, over the time interval and domain under consideration, of the ensemble standard deviation calculated at each grid point and synoptic time. Figure 9 shows the evolution of ensemble spread for the temperature in the entire domain, which shows that the initial ensemble spread is reduced quickly by the assimilation of observed information. Also, there is negligible growth or decay of the ensemble spread during the forecast step. This is consistent with the perfect-model, perfect-external-forcing experiment design and suggests that the model dynamics have only a weak sensitivity to initial conditions.
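This definition of spread can be written compactly; the sketch assumes the ensemble is stored with members along the leading axis:

```python
import numpy as np

def ensemble_spread(ensemble):
    """Rms, over grid points and synoptic times, of the ensemble standard
    deviation computed at each grid point and time.

    ensemble : array of shape (k, ntimes, npoints), members first (assumed)
    """
    sd = ensemble.std(axis=0)                # per-point, per-time ensemble std dev
    return float(np.sqrt((sd ** 2).mean()))  # rms over the whole sample
```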
We began this discussion at a single point and then turned to domain-wide statistics. We now show that the spatial variation of analysis error is fairly smooth. For example, analysis errors vary
quite smoothly in the vertical. Figure 10 shows vertical profiles of the errors as defined in the discussion of Fig. 8, calculated at each model layer for temperature and salinity and for the time
interval from 48 to 96 h after the start of the experiment. Again, a line is plotted that corresponds to the expected observation errors. Except for layers 2 and 3 for temperature, all other analysis
errors are smaller than the expected observation errors. There is a general decrease of error with depth. Vertical profiles of errors of u- and v-currents show a monotonic decrease of error as depth
increases, which mirrors the decrease of variability in the currents with depth, likely because of bottom friction (not shown).
Over most of the domain, the analysis errors are also much more spatially homogeneous than those of the FRF, as is evident in Fig. 11. The top two rows in Fig. 11 show the spatial dependence of the salinity
error in layer 1. The analysis is doing a good job everywhere, but especially in the freshwater plume, inner harbor, estuaries, rivers, and Long Island Sound. These areas, especially the rivers, are
much easier to see in the gridpoint coordinate version of the maps. Note the particularly large errors of the FRF within the range 1
c. Sensitivity experiments
All results described so far have been for our baseline experiment with observation density d = 0.10, ensemble size k = 16, and nominal error levels. We now examine the sensitivity of the results to
the choice of d, k, and observation error standard deviation. The range of choices for the parameters in the sensitivity experiments is listed in Table 2. The nominal observation error standard
deviations are multiplied by a factor e in these experiments. In this section, we discuss only the sensitivity of the analysis errors to the parameters.
Figures 12 and 13 show results for different data densities ranging from d = 0.50 to d = 0.02. As the data density increases, the analysis error and the ensemble spread decrease and the time required to reach asymptotic behavior decreases. Figure 12 shows this for temperature error. Note that in the sequence of decreasing data density, the time to reach asymptotic behavior for d = 0.02 is much
greater than for d = 0.05. However, the differences between these analyses are all small after 48 h of data assimilation. Figure 13 shows vertical profiles of the ensemble spread for hours 48 through
96 for the varying data densities. As the data density increases, the information content of the analysis increases and the ensemble spread decreases. The variation of ensemble spread is large
relative to the variation of the analysis error. Furthermore, the magnitude of the ensemble spread in all cases is smaller than that of the analysis error. One reason for this is that the ensemble
estimates the analysis uncertainty in the reduced dimensional space spanned by the ensemble members. Although this should be the correct uncertainty estimate for the purpose of the LETKF, for other
purposes better estimates of analysis error might be based on the correlations between the analysis error and the ensemble spread observed in simulation experiments like these.
Figure 14 presents summary statistics for all sensitivity experiments listed in Table 2. There are three groupings in Fig. 14 for variations in d, k, and e. The ensemble size experiments have d =
0.50. The other experiments are variations about the baseline experiment. All statistics in Fig. 14 are calculated for the sample composed of the entire domain and hours 48 through 96 of the
experiments. The values are then normalized by the baseline observation errors and presented as percentages. Of course, these are summary statistics that cover a variety of different regimes in the
New York Harbor (see Fig. 11).
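The normalization used for these summary statistics amounts to the following; the inputs are hypothetical, not values read from Fig. 14:

```python
# Normalize a summary statistic by the baseline observation error and
# express it as a percentage; the numbers here are illustrative only.
analysis_error_T = 0.2        # deg C, hypothetical rms analysis error for T
baseline_obs_error_T = 0.5    # deg C, baseline T observation error
normalized_pct = 100.0 * analysis_error_T / baseline_obs_error_T
```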
The analysis error is reduced for T, u, v, and S as k or d increases. Variations of analysis error are small and show evidence of saturation as the ensemble size or data density increases. Typically,
doubling k reduces the error by 10%; doubling d reduces error by 2%. The percentage reduction of error for doubling k tends to decrease as the ensemble size increases. A similar relationship holds
for temperature and increasing data density. If the error for 2k were reduced by a constant factor relative to the error for k, then the log of the error would be proportional to the log of k, and
similarly for the observation density. Comparing the d = 0.50, k = 8 case (“k = 8”) to the d = 0.10, k = 16 (“d = 0.10”) cases, we see that even though the second experiment has only 20% of the
observations, it provides an equivalent or superior analysis by doubling the ensemble size. Increasing the ensemble size from k = 16 to k = 32 further improves error levels. Figure 15 shows how the
ensemble size affects the approach to asymptotic behavior as well as the quality of the analysis for T and h. In particular, for T, when k = 4, the filter still converges but a much longer time is
required to reach asymptotic behavior. In this case, the slow improvement with time may be an effect of the correct external forcing acting over time. The k = 4 time evolution of the T analysis error
parallels the FRF evolution of forecast error after the first few steps. For h, the FRF and k = 4 experiments converge to asymptotic behavior at approximately the same rate. Otherwise, h analysis
errors vary little from experiment to experiment. This result is consistent with the interpretation that h is to a large extent externally (tidally) forced and that the external forcing is specified
to exactly match that of the nature run.
The analysis bias is quite small for h, u, and v. For T and S, bias decreases with more or better data and with increasing ensemble size. For bias, we do not discern any other clear trends in the summary statistics.
The ensemble spread statistics show clear trends. The spread decreases when the number of observations or the accuracy of the observations is increased, but it increases when the number of ensemble
members is increased. Relative to the observing errors, spreads are largest in the d = 0.02 case for currents, and are very small for h. Except for the comparison of d = 0.02 to d = 0.05, doubling d
results in approximately constant decreases in spread. Similarly, with every doubling of ensemble size, the ensemble spread increases by a similar amount (i.e., the difference between successive
experiments is approximately constant). These constants are different for each variable and for k and d. If the spread were reduced by a constant factor for doubling of d or k, then the spread would
have a linear relationship with log d and log k.
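The log-log relationship asserted here is easy to verify numerically; the 10% reduction per doubling is the figure quoted earlier for ensemble size, and the remaining numbers are illustrative:

```python
import numpy as np

# If doubling k multiplies the error by a constant factor r, then
# err(k) = err0 * r**log2(k/k0), so log(err) is linear in log(k)
# with slope log(r)/log(2).
k0, err0, r = 4, 1.0, 0.90
ks = np.array([4.0, 8.0, 16.0, 32.0])
errs = err0 * r ** np.log2(ks / k0)

# Fitting a line to log(err) vs. log(k) recovers the predicted slope.
slope = np.polyfit(np.log(ks), np.log(errs), 1)[0]
```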
In all comparisons, except for h and especially for T, the error and bias of the FRF are much larger than any of the data assimilation experiment results. The differences between the statistics of
different experiments are small by comparison to the difference between any one experiment and the FRF. In the sensitivity experiments, the largest impacts are noticeably larger errors and biases for
k = 4 and d = 0.02 than for other choices of the parameters. Variations in the rms error and the bias are surprisingly small when the observation error is doubled or halved. Apparently, at a data
density of 10% the LETKF is very efficient at filtering out uncorrelated, unbiased observation errors. Figure 16 shows this for T. With larger observation errors, it does take more time to reach
asymptotic levels of analysis error. Even though the analysis errors change little with observation error size, the ensemble spread that indicates an estimate of the analysis uncertainty is very
sensitive to the specified observation error levels. Although the analysis is robust to the specification of the a priori observation error statistics, as mentioned before, estimates of the analysis
uncertainty might be improved by a statistical postprocessing of the information provided by the ensemble spread. In general, the ensemble spread is sensitive to all three factors: observation density, ensemble size, and observation error magnitude.
4. Summary
We have coupled a modern coastal ocean model (ECOM) with a modern data assimilation method (LETKF) and conducted a series of simulation experiments taking a long ECOM nature run to be the truth.
Observations are generated at analysis times by sampling the nature run at model grid points with a specified density of observations and perturbing these values with random errors with specified
statistics (normal; unbiased; with given standard deviations). A diverse collection of model states is used for the initial ensemble. As a control, a free-running forecast is made from the initial
ensemble mean. The FRF is an important point of comparison because, to a large extent, the coastal ocean is forced by tides, inflows at open lateral boundaries, and fluxes of momentum, heat, and
moisture at the surface; and in all experiments described here the external forcing is fixed and identical to that used in the nature run. During the data assimilation, the ECOM advances the ensemble
3 h to provide a background for the analysis step. The analysis step combines the observations and the background, using the ensemble to estimate the background uncertainty and using the specified
observation standard deviations to estimate the observation uncertainty in the Kalman filter equations. The state estimation errors of the analysis and the FRF are quantified by comparing each to the
nature run.
The following are some of the findings from our experiments, which may be dependent on the particulars of our study: the domain, the season, and the external forcing. The assimilation quickly
eliminates the domain-averaged bias of the initial ensemble. The FRF is unable to do this for temperature or salinity. After just a few analysis cycles, errors are greatly reduced by the assimilation
of observations. Analysis errors are mostly smaller than observation errors and are much smaller than the errors of the FRF. The fact that the analysis and background errors quickly asymptote to
small values relative to the observation errors is a key indicator that observed information is assimilated efficiently. Insofar as the analysis errors are smaller than the FRF errors, one can
conclude that the result that the analysis asymptotes to the nature run is not simply the effect of the model response to the external forcing. FRF temperature and salinity very slowly approach the
truth, whereas current and surface height very quickly approach the truth. These differences relate to the different dependencies and adjustment time scales of dynamic and thermodynamic variables to
external forcing. Spatially, the analysis does a good job everywhere, especially in the inner harbor, estuaries, and Long Island Sound. The southeast part of the domain is relatively quiescent, and
therefore the analysis and FRF are similar there. As the data density increases, ensemble spread, bias, and error standard deviation all decrease. The filter accurately tracks the truth at all data
densities examined: from observing 50% down to 2% of the model grid points. As the ensemble size increases, ensemble spread increases and error standard deviation decreases. For an ensemble of just
four members, the filter still converges, but a much longer time is required to reach asymptotic behavior. Comparing the d = 0.50, k = 8 case to the d = 0.10, k = 16 case, we see that even though the
second experiment has only 20% of the observations, it still provides superior analyses by doubling the ensemble size. Increasing the ensemble size from k = 16 to k = 32 provides still smaller
analysis errors. Increases in the size of the observation error lead to a larger ensemble spread but have a small impact on the analysis accuracy.
5. Future work
To further define the characteristics of the DAS and gain more experience in the choice of adjustable DAS parameters such as the ensemble size, localization, inflation factor, or initialization,
further idealized sensitivity experiments of the kind performed here will be useful. Extra experiments might include assimilating different sets of observations with different data densities (e.g.,
relatively dense surface coverage from SST and very sparse subsurface observations), examining the effect of incorrect external forcing during the data assimilation, or examining the sensitivity to
different lengths of the data assimilation step. Such experiments will be a helpful guide when working with real data and uncertain external forcing. In addition, continuing experiments with
“simulated” data will allow flexible choices of domains and data types that are not easily realized with current actual data collection systems. Such experiments should be useful in assessing the
robustness and scalability of the DAS for dense observing systems.
To demonstrate the skill of the ECOM/LETKF system in realistic and diversified settings, experiments need to make use of real observations taken at any time and location within the domain and to
include the effects of uncertainties in surface forcing, model physics, open boundary conditions, and any other factors contributing to model error. The NYHOPS setup provides a focus on the dynamics
of very shallow regions with convoluted coastal geometries and strong boundary forcing by river discharge, typical of estuarine and harbor domains. Studying a second domain with a focus on deeper
shelf and outer shelf zones and the frontal dynamics typical of shelf-break fronts would be useful to further test the methodology in different dynamical conditions.
Without having the benefit of knowing the truth, as with the experiments in this paper, one needs validation and verification metrics (e.g., Wilks 2006) to assess the behavior of the DAS. Other tools
required to work with real datasets include quality control procedures. Simple data quality control methods like the “background check,” which compares the observation minus forecast to the expected
forecast error, are quite suitable for the LETKF. Model errors resulting from poorly known forcing fields, open lateral boundary conditions, or missing physics can all introduce biases in the
forecasts, and methods to correct such problems can be implemented within the LETKF framework (Baek et al. 2006). Finally, with many available submodels for biogeochemistry, sediment transport, water
quality, waves, and particle tracking, there are opportunities to extend the assimilation to nonstandard data such as ocean color and turbidity, chemical tracers, wave energy, and locations of
drifting buoys and autonomous underwater vehicles (a.k.a. gliders). These opportunities exist because the LETKF method is completely general in the sense that when the observation errors can be
assumed to be Gaussian, any observation of a physical parameter that has a known functional dependence on the variables of the dynamical model can potentially be usefully assimilated regardless of
the magnitude of the observation error.
Acknowledgments. The authors thank Gregg Jacobs and Craig Bishop for helpful discussions. This work was supported by the U.S. Navy SPAWAR SBIR program (Contract N00039-06-C-0050).
1 Many models have a so-called "warm restart" capability that uses an Euler forward time step from a given initial state. A warm-restart capability may also include some special balancing or
initialization procedures to reduce undesirable initial behavior (e.g., the excitation of spurious gravity waves). Because the present version of the ECOM model does not have a warm-restart
capability, we use the modified hot-restart procedure described here.
Ahsan, Q., and A. Blumberg, 1999: Three-dimensional hydrothermal model of Onondaga Lake, New York. J. Hydrol. Eng., 125, 912-923.
Allen, J. S., P. A. Newberger, and J. Federiuk, 1995: Upwelling circulation on the Oregon continental shelf. Part I: Response to idealized forcing. J. Phys. Oceanogr., 25, 1843-1866.
Baek, S.-J., B. R. Hunt, E. Kalnay, E. Ott, and I. Szunyogh, 2006: Local ensemble Kalman filtering in the presence of model bias. Tellus, 58A, 293-306.
Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420-436.
Blumberg, A. F., 1996: An estuarine and coastal ocean version of POM. Proc. Princeton Ocean Model Users Meeting (POM96), Princeton, NJ, Princeton University, 9.
_____, and G. L. Mellor, 1987: A description of a three- dimensional coastal ocean circulation model. Three-Dimensional Coastal Ocean Models, N. Heaps, Ed., American Geophysical Union, 1- 16.
_____, and D. M. Goodrich, 1990: Modeling of wind-induced destratification in Chesapeake Bay. Estuaries Coasts, 13, 236-249.
_____, L. A. Khan, and J. P. St. John, 1999: Three-dimensional hydrodynamic simulations of the New York Harbor, Long Island Sound, and the New York Bight. J. Hydraul. Eng., 125, 799-816.
_____, Q. Ahsan, and J. K. Lewis, 2000: Modeling hydrodynamics of the Mississippi Sound and adjoining rivers, bays, and shelf waters. Proc. OCEANS 2000 MTS/IEEE Conf. and Exhibition, Vol. 3,
Providence, RI, IEEE, 1983-1989.
Bruno, M. S., A. F. Blumberg, and T. O. Herrington, 2006: The urban ocean observatory-Coastal ocean observations and forecasting in the New York Bight. J. Mar. Sci. Environ., C4, 1-9.
Courtier, P., 1997: Variational methods. J. Meteor. Soc. Japan, 75, 211-218.
Daley, R., 1991: Atmospheric Data Analysis. Cambridge University Press, 457 pp.
Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99 (C5), 10 143-10 162.
_____, 2006: Data Assimilation: The Ensemble Kalman Filter. Springer, 280 pp.
Fan, S., L.-Y. Oey, and P. Hamilton, 2004: Assimilation of drifter and satellite data in a model of the northeastern Gulf of Mexico. Cont. Shelf Res., 24, 1001-1013.
Fukumori, I., 2002: A partitioned Kalman filter and smoother. Mon. Wea. Rev., 130, 1370-1383.
_____, and P. Malanotte-Rizzoli, 1995: An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model. J. Geophys. Res., 100 (C4), 6777-6793.
Galperin, B., and G. L. Mellor, 1990: A time-dependent, three- dimensional model of the Delaware Bay and River system. Part 1: Description of the model and tidal analysis. Estuarine Coastal Shelf
Sci., 31, 231-253.
Hunt, B., E. J. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230, 112-126.
Jazwinski, A. H., 1970: Stochastic Processes and Filtering Theory. Academic Press, 376 pp.
Kalnay, E., 2002: Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press, 364 pp.
Keppenne, C., and M. Rienecker, 2002: Initial testing of a massively parallel ensemble Kalman filter with the Poseidon isopycnal ocean general circulation model. Mon. Wea. Rev., 130, 2951- 2965.
Lermusiaux, P. F. J., and Coauthors, 2006: Quantifying uncertainties in ocean predictions. Oceanography, 19, 90-103.
Lin, S.-J., and R. B. Rood, 1996: Multidimensional flux-form semi- Lagrangian transport schemes. Mon. Wea. Rev., 124, 2046-2070.
Lorenc, A. C., 1997: Development of an operational variational assimilation scheme. J. Meteor. Soc. Japan, 75, 339-346.
Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys. Space Phys., 20, 851-875.
_____, and T. Ezer, 1991: A Gulf Stream model and an altimetry assimilation scheme. J. Geophys. Res., 96, 8779-8795.
Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56A, 415-428.
Phillips, N. A., 1957: A coordinate system having some special advantages for numerical forecasting. J. Meteor., 14, 184-185.
Signell, R. P., H. L. Jenter, and A. F. Blumberg, 1994: Modeling the seasonal circulation in Massachusetts Bay. Proc. Third Int. Conf. on Estuarine and Coastal Modeling, ASCE, 578-590.
Stammer, D., and Coauthors, 2002: Global ocean circulation during 1992-1997, estimated from ocean observations and a general circulation model. J. Geophys. Res., 107, 3118, doi:10.1029/2001JC000888.
Szunyogh, I., E. J. Kostelich, G. Gyarmati, D. J. Patil, B. R. Hunt, E. Kalnay, E. Ott, and J. A. Yorke, 2005: Assessing a local ensemble Kalman filter: Perfect model experiments with the National
Centers for Environmental Prediction global model. Tellus, 57A, 528- 545.
_____, _____, _____, E. Kalnay, B. R. Hunt, E. Ott, E. Satterfield, and J. A. Yorke, 2008: A local ensemble transform Kalman filter data assimilation system for the NCEP global model. Tellus, 60A,
113-130, doi:10.1111/j.1600-0870.2007.00274.x.
Vinogradova, N., S. Vinogradov, D. Nechaev, V. Kamenkovich, A. F. Blumberg, Q. Ahsan, and H. Li, 2005: Evaluation of the northern Gulf of Mexico littoral initiative (NGLI) model based on the observed temperature and salinity in the Mississippi Bight. Mar. Tech. Soc. J., 39, 25-38.
Whitaker, J. S., G. P. Compo, X. Wei, and T. M. Hamill, 2004: Reanalysis without radiosondes using ensemble data assimilation. Mon. Wea. Rev., 132, 1190-1200.
_____, T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2007: Ensemble data assimilation with the NCEP global forecast system. Mon. Wea. Rev., 136, 463-482.
Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. 2nd ed. Academic Press, 648 pp.
Wunsch, C., and P. Heimbach, 2007: Practical global ocean state estimation. Physica D, 230, 197-208, doi:10.1016/j.physd.2006.09.040.
ROSS N. HOFFMAN AND RUI M. PONTE
Atmospheric and Environmental Research, Inc., Lexington, Massachusetts
Department of Mathematics and Statistics, Arizona State University, Tempe, Arizona
Stevens Institute of Technology, Hoboken, New Jersey
University of Maryland, College Park, College Park, Maryland
Atmospheric and Environmental Research, Inc., Lexington, Massachusetts
(Manuscript received 11 April 2007, in final form 31 December 2007)
Corresponding author address: Dr. Ross N. Hoffman, Atmospheric and Environmental Research, Inc., 131 Hartwell Avenue, Lexington, MA 02421-3126.
E-mail: ross.n.hoffman@aer.com
Interface Definitions
a. File formats
The LETKF analysis requires all ensemble members, whereas the ECOM operates on a single ensemble member. Interface routines are therefore required to convert the LETKF analysis file to an ensemble of ECOM analysis files and, conversely, to convert an ensemble of ECOM forecast files into the LETKF background file. We call these functions LETKF to ECOM (L2E) and ECOM to LETKF (E2L).
Input to the ECOM must be in the ECOM restart format (ERF), which contains a single snapshot (i.e., instantaneous, not time averaged) of the model state and all auxiliary information required for a
hot restart. The ECOM output includes an ERF file and an ECOM nature format (ENF) file that holds the time-averaged model state at the end of the forecast interval. The LETKF ensemble format (LEF)
files contain ensembles of forecast or analysis state vectors. Here, each of these files contains data associated with a single time.
b. The forecast-analysis procedure
As defined in Fig. 2, the forecast-analysis procedure starts with a current analysis and advances to the next analysis. We denote the initial and final times within the procedure as t_i = t_(n-1) and t_f = t_n = t_i + Δt, where Δt is the time increment between analyses. At the end of such an assimilation cycle we increment t_i by Δt and continue.
To begin the forecast-analysis procedure, L2E converts the previous analysis (i.e., at t^sub i^) to ECOM initial conditions (in ERF), using as a template the ERF file from the previous ECOM forecast
initialized with the ensemble mean conditions. Some of the quantities taken from the template are strictly constants for the experiment, and the others provide reasonable estimates. In particular,
the turbulence parameters for each ensemble member’s initial conditions are taken from the ensemble mean forecast. Other quantities taken from the template that are not constant will be recalculated
before the second time step begins. A test showed little difference resulting from setting some of these to zero instead of copying these from the template.
Next, using the ERF files created by L2E, each ensemble member is advanced in time by the ECOM from t_i to t_f + t_a/2, where t_a is the averaging time (1 h). The resulting forecasts valid at t_f and averaged over the interval from t_f - t_a/2 to t_f + t_a/2 are then combined by E2L for use as the background ensemble by the LETKF. At the same time (see leftmost part of Fig. 2), the ensemble mean is advanced from t_i to t_f to provide the template in the next cycle.
A LETKF observation format (LOF) file is generated at the analysis time from the nature run. In this procedure, a random number generator is seeded by making use of the current experiment time. For
each variable at each potential observation location, a random normal number with zero mean and the given standard deviation is taken as the observation error. Then the observation is kept if a
uniform random number on the interval (0, 1) is smaller than the specified data density (e.g., d = 0.10). With this experimental design, the errors have the same structure in all experiments. First, the data locations from an experiment with higher data density form a superset of the data locations for any experiment with lower data density. Second, if two experiments share an observation of a particular variable at a particular grid point and time, then the observation errors scaled by the specified observation standard deviation for each experiment will be the same. This approach makes the interpretation of sensitivity experiments clearer. For diagnostic calculations of statistics, at each analysis time a second LOF file with complete coverage (d = 1.00) and with zero errors is generated.
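The selection scheme described here is easy to mimic in code. The sketch below is illustrative only (not the authors' implementation; the function name, variable names, and use of NumPy are mine). Seeding the generator with the experiment time is what makes the lower-density data sets nest inside the higher-density ones.

```python
import numpy as np

def generate_observations(truth, obs_std, density, time_seed):
    # Seed the generator from the current experiment time so that every
    # experiment sees the same error realizations at shared locations.
    rng = np.random.default_rng(time_seed)
    errors = rng.normal(0.0, obs_std, size=truth.shape)
    # Keep a candidate location only if a uniform draw falls below d.
    keep = rng.uniform(0.0, 1.0, size=truth.shape) < density
    return truth + errors, keep

truth = np.linspace(10.0, 20.0, 1000)  # stand-in for nature-run values
obs10, keep10 = generate_observations(truth, obs_std=0.5, density=0.10, time_seed=20080901)
obs05, keep05 = generate_observations(truth, obs_std=0.5, density=0.05, time_seed=20080901)

# Lower-density locations nest inside the higher-density ones, and shared
# locations carry identical errors, as described in the text.
assert set(np.flatnonzero(keep05)) <= set(np.flatnonzero(keep10))
```

Because both calls share the seed, the d = 0.05 locations are guaranteed to be a subset of the d = 0.10 locations, and shared observations carry identical errors.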
Copyright American Meteorological Society Sep 2008
(c) 2008 Journal of Atmospheric and Oceanic Technology. Provided by ProQuest LLC. All rights Reserved.
Here's the question you clicked on:
what does x|y mean?
Logarithm question
Can someone please help me start on this: http://82.221.28.13/webct/RelativeRe...s/image004.png thanks
lol. Can't see anything. But while I'm here I may as well throw down some properties of logs in case they help... $\log (ab) = \log(a) + \log(b)$. $\log(a/b) = \log(a) - \log(b)$ as long as $b \neq 0$. $\log(e^x) = x$ where $e$ is the base of the natural exponential.
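A quick numerical check of those identities, in case it helps (a throwaway Python snippet; any language would do):

```python
import math

a, b = 5.0, 3.0
# product rule: log(ab) = log(a) + log(b)
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
# quotient rule: log(a/b) = log(a) - log(b)
assert math.isclose(math.log(a / b), math.log(a) - math.log(b))
# log undoes exponentiation: log(e^x) = x
x = 2.5
assert math.isclose(math.log(math.exp(x)), x)
print("all log identities check out")
```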
Yes thanks I got it... I wasn't sure if I had to use the laws of logarithms on both sides of the equal sign (right and left)
Applied Math and Science Education Repository - Browse Resources
(35 resources)
Normal Tool
This applet, created by Tom Malloy of the University of Utah, demonstrates probability as the area under the normal and the standard normal curves. Students can manipulate mean, standard deviation,
and lower and upper...
more info
Numerical Summaries
This site, created by Michelle Lacey of Yale University, gives a definition and an example of numerical summaries. Topics include: mean, median, quantiles, variance, and standard deviation. While
brief, the site is...
more info
This site provides a PowerPoint presentation, created by Dr. Elizabeth Garrett-Mayer of Johns Hopkins University, of a lesson and examples of point estimation, odds ratios, hazard ratios, risk
differences and precision....
more info
Created by authors Carol Kuiper and Jack Tedeski, this site is a table of instructions for TI, Casio, HP, Sharp, and some common other brands of calculators. Instructions are given for how to enter
data (both one and...
more info
Repeated Measures
This applet simulates a two-condition experiment. You specify the number of subjects per sample (n), and the population means for each condition (Mean A and Mean B). You can also specify the
correlation between the two...
more info
MathGroup Archive: December 2007 [00202]
[Date Index] [Thread Index] [Author Index]
Re: Convert Binary Image to Column Vector
• To: mathgroup at smc.vnet.net
• Subject: [mg83977] Re: Convert Binary Image to Column Vector
• From: Szabolcs Horvát <szhorvat at gmail.com>
• Date: Thu, 6 Dec 2007 02:38:31 -0500 (EST)
• References: <fj65hr$fp1$1@smc.vnet.net>
Tara.Ann.Lorenz at gmail.com wrote:
> Can a 30x30 binary image be converted directly into a column vector of
> 900 components in Mathematica?
It depends on what you mean by "image",
> Or must the image first be put into
> matrix form?
and what you mean by "matrix".
> I am using the command:
> Flatten@Transpose@t1,
This command works if t1 is a two-dimensional array, i.e. a nested list
that looks like this when output is set to StandardForm:
> where t1 is my binary image, and it seems to
> give me a vector but I'm not sure if it's the one that I desire.
> If I first convert my image into matrix form
How did you do this conversion? If the image is inside
Graphics[Raster[...]], then you need to extract the contents of Raster[...].
> and define my matrix as
> "m" and then use this command,
> Flatten@Transpose@m, then I get an output of "Transpose[...]" and my
> 30 x 30 matrix is explicitly written within the Transpose brackets.
> This does not provide me a column vector with 900 components, which is
> what I am looking for.
In Mathematica there are no column or row vectors, only vectors and
matrices (one might call n*1 matrices column vectors and 1*n matrices
row vectors). Have you read the post I suggested in my answer to your last question?
Here's the link again:
If you use TraditionalForm output, try switching to StandardForm for a
while to better understand how matrices/vectors are represented.
Introductory Algebra for College Students
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
ASTR 5540
ASTR 5540 Mathematical Methods Fall 1998: Syllabus
If you add that up, it comes to 110%. To make it 100%, I will drop the worst 10% of your score.
Problem sets will be handed out every one or two weeks; we'll see how that turns out in practice. Normally problems will be due on a Thursday, and the aim is to have them graded by Tuesday, when we
will devote some portion of the class period to discussing the problems.
The midterm will be on Probability and Statistics, while the final will be on Differential Equations. Both midterm and final will be 24-hour take-home exams, which you will complete in any consecutive 24-hour period over a certain time span.
In addition to the problem sets, you will each write a report, and deliver an associated 5-10 minute presentation in class, on Th 12 Nov, with overflow presentations going to Th 19 Nov. Details at
Reports and Presentations.
The course will be divided into two parts:
1. Probability & Statistics (6 weeks)
2. Differential Equations (8 weeks)
1. Probability & Statistics
• The need for a prior
• Parameter estimation
• The Rules of Probability, Bayes Theorem
• Likelihood
• χ²
• Additivity of irreducible moments, Central Limit Theorem
• Marginalization, Fisher information matrix
• Maximum entropy
2. Differential Equations
• Ordinary Differential Equations
□ Numerical methods
□ Basic theorems, boundary conditions
□ Sturm-Liouville theory
□ Green's Functions, transforms
• Partial Differential Equations
□ Classification: elliptic, parabolic, hyperbolic, boundary conditions
□ Characteristics
□ Separation of variables, transforms, similarity
See texts.
ASTR 5540 Homepage
Updated 9 Sep 1998
Dynamic Bayesian Combination of Multiple Imperfect Classifiers
Posted on 06/15/2012
Using the new Voxcharta.org system, I was the only physicist at my institute to upvote this paper, Dynamic Bayesian Combination of Multiple Imperfect Classifiers (pdf), more in the realm of machine
learning or computer science than traditional astrophysics or astronomy. As such I was nominated to discuss it at our weekly journal club. Here I give a brief review of concepts needed to follow the
paper, and then go in depth into how we can use the opinions of multiple lay people as to whether an object is a supernova or not to achieve a highly accurate classification at the expert level.
Some review to understand the paper
Markov Chains
Markov Chain
• sequence of random variables with the Markov Property: given the present state, future and past states are independent
• Markov Chain Monte Carlo: an algorithm for sampling from a probability distribution based on constructing a Markov Chain which has the desired distribution as its equilibrium distribution
Gibbs Sampling
• MCMC algorithm for obtaining random samples from a multivariate probability distribution
• Good for e.g. sampling posterior distribution of a Bayesian Network
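To make the Gibbs sampling bullet concrete, here is a minimal sampler for a toy target of my own choosing (a standard bivariate normal with correlation ρ), where each full conditional is itself a one-dimensional normal:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each coordinate is redrawn from its exact full conditional:
        x | y ~ N(rho*y, 1 - rho^2),    y | x ~ N(rho*x, 1 - rho^2)
    """
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    out = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # sample x from p(x | y)
        y = rng.gauss(rho * x, sd)   # sample y from p(y | x)
        if i >= burn_in:             # discard the warm-up part of the chain
            out.append((x, y))
    return out

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
corr = sum(x * y for x, y in samples) / len(samples)  # should approach rho
```

The chain's equilibrium distribution is the target, so the empirical correlation of the retained samples converges to ρ.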
Bayes Theorem
posterior = likelihood × prior / marginal likelihood
Bayesian interpretation of probability
• The equations above are thought of as modeling a hypothesis and evidence which supports the hypothesis.
• P(H|E)=P(E|H)P(H)/P(E)
Bayesian inference
• Derivation from Bayes Theorem if H1 is one hypothesis and H2 is another:
1. P(H1|E)=P(E|H1)P(H1)/P(E)
2. P(H2|E)=P(E|H2)P(H2)/P(E)
3. dividing 1./2. => P(H1|E)/P(H2|E)=P(E|H1)/P(E|H2)*P(H1)/P(H2)
• Think of the left hand side giving us the odds of hypothesis 1 over 2 after the the evidence is seen, and the terms on the right hand side giving us probabilities prior to the evidence being seen
• Thus Bayes' rule lets us relate probability models before and after new evidence is observed, in the Bayesian interpretation.
• Example: If we use this model for a binary classifier that decides whether something is a funny picture of a cat or not: H1 could be “this object is a funny picture of a cat” and H2 could be
“this object is NOT a funny picture of a cat”. Bayes rule lets us see new evidence and compute the new probabilities based on that evidence.
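To make step 3 of the derivation concrete, here is the odds form of Bayes' rule applied to the cat-picture example; all the probabilities below are invented for illustration:

```python
def posterior_odds(prior_h1, prior_h2, lik_h1, lik_h2):
    """Odds form of Bayes' rule:
    P(H1|E)/P(H2|E) = [P(E|H1)/P(E|H2)] * [P(H1)/P(H2)]."""
    return (lik_h1 / lik_h2) * (prior_h1 / prior_h2)

# H1 = "funny cat picture", H2 = "not one", E = "the detector fired".
# Cats are rare in this feed, and the detector is decent but imperfect.
odds = posterior_odds(prior_h1=0.01, prior_h2=0.99, lik_h1=0.90, lik_h2=0.05)
posterior_h1 = odds / (1.0 + odds)   # convert odds back to a probability
```

Note that even a good detector yields a modest posterior here, because the prior for H1 is so small.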
Variational Bayes Methods
• an alternative to MCMC methods such as Gibbs sampling for approximating the posterior distribution
Receiver Operating Characteristic Curves
• graphical plot which illustrates performance of binary classification
• true positive rate vs. false positive rate is plotted
• a curve as each point on the plot corresponds to a given threshold of the classifier.
• the closer the area under the curve is to one, the better the classifier performance
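A bare-bones sketch of how the points on such a curve, and the trapezoidal area under them, are computed (the scores and labels are toy values chosen for illustration):

```python
def roc_points(scores, labels):
    """Sweep a threshold over classifier scores and return (FPR, TPR) points.
    labels are 1 (positive) / 0 (negative); higher score = more positive."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for th in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 0)
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    1,   0,   0]
print(auc(roc_points(scores, labels)))  # 0.8125
```

A perfect classifier would hug the top-left corner (AUC = 1); random scoring gives the diagonal (AUC = 0.5).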
Dirichlet Prior
• multivariate generalization of the beta distribution
• often used a a prior distribution in Bayesian statistics
• its form depends on its concentration hyperparameters
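A standard way to draw from a Dirichlet is to normalize independent Gamma draws; the snippet below checks that each component's sample mean approaches α_k / Σα. The symmetric α = 5 and the three-category interpretation are illustrative choices of mine, not values from the paper:

```python
import random

def sample_dirichlet(alphas, rng):
    """Draw from Dirichlet(alphas) by normalizing independent Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
# Three categories, e.g. "definitely", "possibly", "definitely not".
samples = [sample_dirichlet([5.0, 5.0, 5.0], rng) for _ in range(20000)]
mean = [sum(s[k] for s in samples) / len(samples) for k in range(3)]
# Each component's mean is alpha_k / sum(alpha) = 1/3 here.
```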
The paper itself
http://www.galaxyzoo.org/ is an amazing project trying to use the judgement of hundreds of thousands of citizen scientists on questions of astronomical relevance–classifying an object observed by a telescope as a particular type of galaxy, for example–questions on which humans still outperform computers. However, expert humans, professional astronomers, still do better than lay people. So the question is twofold: how best to use the judgement of lay people to get to the scientific truth as quickly as possible, and relatedly, how to help them more quickly approach the accuracy of a professional.
This paper addresses how to combine the judgements from multiple people for the Galaxy Zoo Supernovae project. Supernovae are violent stellar explosions, and volunteers are asked to classify an object as "definitely a supernova", "possibly a supernova", or "definitely not a supernova"; for many of the objects, multiple volunteers' judgements are collected. The vast majority of objects are not in fact supernovae. For the first run of the project, professional astronomers checked the work–so the paper is in the unique position of knowing the "right" answer, and being able to test out different methods of combining volunteer judgements to see which method is most likely to get the right answer. The authors' model is called the "Independent Bayesian Classifier Combination" and is detailed in equation 1 of the paper.
looks a bit complicated, eh? Well let’s tease it out graphically using terms I reviewed above
In this model, c is a group of classifications for a given object indexed by i. The classifiers themselves are indexed by k. So by specifying a k we pick a classifier-say Alice, and by specifying an
i we pick an object that that classifier classified, say SuperNovaCandidateA. t is the actual true classification of the object, in this case we know the answer as we have the object classified by a
professional astronomer. So far so good. So we have a bit more in our model, κ are the parameters on which the true classification depends on. Presumably the professional astronomer considered many
factors in her or his judgment and didn’t pull it out of thin air-this is an attempt to capture the fact that the correct classification depends on several factors. Finally, there is π. We can see
π has a k index, which gives us a clue that it is dependent on who classified the object. It is in fact something called the confusion matrix which represents a classifier dependent function
indicating how confident we are in that classifier’s classification. If classifier Alice always gets identifications of supernova correct, but sometimes misses a few (no false positives, several
false negatives), and classifier Bob always identifies them but sometimes says something is a supernova which isn’t (a few false positives, no false negatives) this is the factor which is able to
model that. The last two things in our model are α and ν. These come from the fact that we are assuming Dirichlet priors on the distributions of both π and κ, and α and ν are the hyperparameters of those distributions.
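One way to internalize the model is to sample forward from it. The sketch below is my own reading of the generative story (the hyperparameter values, array shapes, and function name are invented for illustration, not taken from the paper):

```python
import numpy as np

def sample_ibcc(n_objects, n_classifiers, nu, alpha, seed=0):
    """Forward-sample the IBCC generative story:
      kappa ~ Dirichlet(nu)              proportions of the true classes
      t_i ~ Categorical(kappa)           true label of object i
      pi[k, j] ~ Dirichlet(alpha[j])     row j of classifier k's confusion matrix
      c[i, k] ~ Categorical(pi[k, t_i])  classifier k's score for object i
    """
    rng = np.random.default_rng(seed)
    n_classes, n_scores = alpha.shape
    kappa = rng.dirichlet(nu)
    t = rng.choice(n_classes, size=n_objects, p=kappa)
    pi = np.stack([[rng.dirichlet(alpha[j]) for j in range(n_classes)]
                   for _ in range(n_classifiers)])  # shape (K, classes, scores)
    c = np.empty((n_objects, n_classifiers), dtype=int)
    for i in range(n_objects):
        for k in range(n_classifiers):
            c[i, k] = rng.choice(n_scores, p=pi[k, t[i]])
    return t, pi, c

# Two true classes (supernova / not), three answer scores, and an alpha that
# concentrates each confusion-matrix row near the "right" answer.
alpha = np.array([[8.0, 2.0, 1.0],
                  [1.0, 2.0, 8.0]])
t, pi, c = sample_ibcc(n_objects=500, n_classifiers=5,
                       nu=np.array([1.0, 4.0]), alpha=alpha)
```

Inference in the paper runs this story in reverse: given only c, recover t, π, and κ.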
OK, so now that we have a model, what do we do with it?
Variational Bayesian IBCC
Using the model we perform inference for the unknown variables t, π, and κ plugging in the classification of each of the various classifiers for each of the various objects. In the end we are mainly
interested in t, which we can use to compute the efficacy of our model of combining the various judgments of amateur classifiers by comparing to the true classification by the professional
astronomer. If our model is perfect, t will always match the professional! Moreover, as the model is a tad complex, we'd also want to see whether the complexity is worth it; namely, do we get better results with it than with a simpler model (say, having the individual classifiers vote on whether the object is a supernova or not, or taking the average of their scores)? These results are shown in figure 2 of the paper:
as we can see the ROC curve of the VB-IBCC is the best of all the methods. Moreover, the authors show that it is quicker than its closest competitor Gibbs-IBCC. Great news! But we can do even more;
the model has also inferred the confusion matrix π.
Communities of Decision Makers Based on Confusion Matrices
The authors group decision makers by applying an approach they detail in a previous paper, Overlapping community detection using Bayesian non-negative matrix factorization, Psorakis et. al. 2011. Its
methodology is best explored by going into the details of the original paper and I’ll just present the results here, figure 3 from the paper, prototypical confusion matrices for each of the 5
communities inferred with this methodology, the confusion matrices themselves coming out of the results of the current paper.
Here we can see, for example, that group 1 are confident when an object is not a supernova, but less certain in their classification when it is. Group 2 are extremists: mostly they classify as "definitely not" or "definitely is" a supernova, and when they are uncertain the object tends not to be a supernova after all. Definitely neat to see such easy-to-describe groups come out of a big chunk of math.
Pretty cool–we have a method of combining information from multiple decision makers of varying expertise that is, among the tested methods, the most likely to give us the correct classification of an object. Moreover, for free we get information about the type of classifier our Alice or our Bob is, meaning that information could potentially be used to guide classifiers into improving their accuracy even further. The rest of the paper explores these themes, developing models for how confusion matrices change over time (which involves tweaking the model above, which assumes a static confusion matrix for each classifier) and then exploring these changes for the Galaxy Zoo Supernovae classification communities. Next up for the authors is exploring ways to select users for a task based on the results of this work, either for training purposes (to improve the performance of the classifiers themselves) or, when a correct classification is especially important, to use the confusion-matrix information to select a high-quality classifier.
Math problem (cubic equation-related)
Find the exact value of x so that the equation http://www1.wolframalpha.com/Calcula...image/gif&s=42 has exactly 2 solutions when solved for a.
You mean, I take it, 2 distinct real solutions. In that case, the equation must be of the form $(a-u)^2(a-v) = a^3 - a - x = 0$ for some u and v. Multiplying that out, we get $a^3 - (2u+v)a^2 + (u^2 + 2uv)a - u^2 v = a^3 - a - x = 0$. We must have $2u + v = 0$, $u^2 + 2uv = -1$, and $u^2 v = x$. Solve the first two equations for u and v and then get x from the third equation.
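Carrying those steps through explicitly:

```latex
2u + v = 0 \;\Rightarrow\; v = -2u, \qquad
u^2 + 2uv = u^2 - 4u^2 = -3u^2 = -1 \;\Rightarrow\; u = \pm\frac{1}{\sqrt{3}},
\qquad
x = u^2 v = \frac{1}{3}(-2u) = \mp\frac{2}{3\sqrt{3}} = \mp\frac{2\sqrt{3}}{9}.
```

So $x = \pm\frac{2\sqrt{3}}{9}$, which are exactly the critical values of $a^3 - a$ (attained at $a = \mp 1/\sqrt{3}$), where the cubic acquires a double root.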
Learning permutations with exponential weights
Results 1 - 10 of 25
"... We analyze the computational complexity of market maker pricing algorithms for combinatorial prediction markets. We focus on Hanson’s popular logarithmic market scoring rule market maker (LMSR).
Our goal is to implicitly maintain correct LMSR prices across an exponentially large outcome space. We ex ..."
Cited by 31 (17 self)
We analyze the computational complexity of market maker pricing algorithms for combinatorial prediction markets. We focus on Hanson’s popular logarithmic market scoring rule market maker (LMSR). Our
goal is to implicitly maintain correct LMSR prices across an exponentially large outcome space. We examine both permutation combinatorics, where outcomes are permutations of objects, and Boolean
combinatorics, where outcomes are combinations of binary events. We look at three restrictive languages that limit what traders can bet on. Even with severely limited languages, we find that LMSR
pricing is #P-hard, even when the same language admits polynomial-time matching without the market maker. We then propose an approximation technique for pricing permutation markets based on a recent
algorithm for online permutation learning. The connections we draw between LMSR pricing and the vast literature on online learning with expert advice may be of independent interest.
- In ACM EC , 2010
"... We explore the striking mathematical connections that exist between market scoring rules, cost function based prediction markets, and no-regret learning. We first show that any cost function
based prediction market can be interpreted as an algorithm for the commonly studied problem of learning from ..."
Cited by 30 (10 self)
We explore the striking mathematical connections that exist between market scoring rules, cost function based prediction markets, and no-regret learning. We first show that any cost function based
prediction market can be interpreted as an algorithm for the commonly studied problem of learning from expert advice by equating the set of outcomes on which bets are placed in the market with the
set of experts in the learning setting, and equating trades made in the market with losses observed by the learning algorithm. If the loss of the market organizer is bounded, this bound can be used
to derive an O ( √ T) regret bound for the corresponding learning algorithm. We then show that the class of markets with convex cost functions exactly corresponds to the class of Follow the
Regularized Leader learning algorithms, with the choice of a cost function in the market corresponding to the choice of a regularizer in the learning problem. Finally, we show an equivalence between
market scoring rules and prediction markets with convex cost functions. This implies both that any market scoring rule can be implemented as a cost function based market maker, and that market
scoring rules can be interpreted naturally as Follow the Regularized Leader algorithms. These connections provide new insight into how it is that commonly studied markets, such as the Logarithmic
Market Scoring Rule, can aggregate opinions into accurate estimates of the likelihood of future events.
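For reference, the LMSR named in these abstracts has a closed-form cost function and price vector. The formulas below are the standard ones (not code from either paper); b is the liquidity parameter and q the vector of outstanding shares per outcome:

```python
import math

def lmsr_cost(q, b):
    """Hanson's LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous prices: the gradient of C, a probability distribution."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

b = 100.0
# Buying 50 shares of outcome 0 from an empty market costs the change in C.
cost = lmsr_cost([50.0, 0.0, 0.0], b) - lmsr_cost([0.0, 0.0, 0.0], b)
prices = lmsr_prices([50.0, 0.0, 0.0], b)  # outcome 0 now priced above 1/3
```

Because the prices are a softmax of q/b, they always sum to one, which is what lets them be read as a probability estimate over outcomes.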
- JOURNAL OF MACHINE LEARNING RESEARCH , 2009
"... Permutations are ubiquitous in many real-world problems, such as voting, ranking, and data association. Representing uncertainty over permutations is challenging, since there are n!
possibilities, and typical compact and factorized probability distribution representations, such as graphical models, ..."
Cited by 19 (8 self)
Permutations are ubiquitous in many real-world problems, such as voting, ranking, and data association. Representing uncertainty over permutations is challenging, since there are n! possibilities,
and typical compact and factorized probability distribution representations, such as graphical models, cannot capture the mutual exclusivity constraints associated with permutations. In this paper,
we use the “low-frequency” terms of a Fourier decomposition to represent distributions over permutations compactly. We present Kronecker conditioning, a novel approach for maintaining and updating
these distributions directly in the Fourier domain, allowing for polynomial time bandlimited approximations. Low order Fourier-based approximations, however, may lead to functions that do not
correspond to valid distributions. To address this problem, we present a quadratic program defined directly in the Fourier domain for projecting the approximation onto a relaxation of the polytope of
legal marginal distributions. We demonstrate the effectiveness of our approach on a real camera-based multi-person tracking scenario.
"... We study sequential prediction problems in which, at each time instance, the forecaster chooses a binary vector from a certain fixed set S ⊆ {0, 1} d and suffers a loss that is the sum of the
losses of those vector components that equal to one. The goal of the forecaster is to achieve that, in the l ..."
Cited by 17 (5 self)
We study sequential prediction problems in which, at each time instance, the forecaster chooses a binary vector from a certain fixed set S ⊆ {0, 1} d and suffers a loss that is the sum of the losses
of those vector components that equal to one. The goal of the forecaster is to achieve that, in the long run, the accumulated loss is not much larger than that of the best possible vector in the
class. We consider the “bandit ” setting in which the forecaster has only access to the losses of the chosen vectors. We introduce a new general forecaster achieving a regret bound that, for a
variety of concrete choices of S, is of order √(nd ln |S|), where n is the time horizon. This is not improvable in general and is better than previously known bounds. We also point out that
computationally efficient implementations for various interesting choices of S exist. 1
- In COLT , 2010
"... We develop an online algorithm called Component Hedge for learning structured concept classes when the loss of a structured concept sums over its components. Example classes include paths
through a graph (composed of edges) and partial permutations (composed of assignments). The algorithm maintains ..."
Cited by 14 (3 self)
We develop an online algorithm called Component Hedge for learning structured concept classes when the loss of a structured concept sums over its components. Example classes include paths through a
graph (composed of edges) and partial permutations (composed of assignments). The algorithm maintains a parameter vector with one non-negative weight per component, which always lies in the convex
hull of the structured concept class. The algorithm predicts by decomposing the current parameter vector into a convex combination of concepts and choosing one of those concepts at random. The
parameters are updated by first performing a multiplicative update and then projecting back into the convex hull. We show that Component Hedge has optimal regret bounds for a large variety of
structured concept classes. 1
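Component Hedge builds on the basic exponential-weights (Hedge) update, which is compact enough to state in full; the loss numbers below are illustrative:

```python
import math

def hedge(losses, eta):
    """Basic exponential weights (Hedge) over n experts.
    losses: list of per-round loss vectors. Returns the final weight vector."""
    n = len(losses[0])
    w = [1.0 / n] * n                    # start from the uniform distribution
    for loss in losses:
        # Multiplicatively downweight each expert by its loss, then renormalize.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
        z = sum(w)
        w = [wi / z for wi in w]
    return w

# Expert 0 is consistently best, so its weight should come to dominate.
losses = [[0.1, 0.9, 0.5]] * 30
w = hedge(losses, eta=0.5)
```

The structured variants replace the per-expert weights with per-component weights plus a projection step, which is what keeps the update tractable when the concept class is exponentially large.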
"... Representing distributions over permutations can be a daunting task due to the fact that the number of permutations of n objects scales factorially in n. One recent way that has been used to
reduce storage complexity has been to exploit probabilistic independence, but as we argue, full independence ..."
Cited by 9 (3 self)
Representing distributions over permutations can be a daunting task due to the fact that the number of permutations of n objects scales factorially in n. One recent way that has been used to reduce
storage complexity has been to exploit probabilistic independence, but as we argue, full independence assumptions impose strong sparsity constraints on distributions and are unsuitable for modeling
rankings. We identify a novel class of independence structures, called riffled independence, which encompasses a more expressive family of distributions while retaining many of the properties
necessary for performing efficient inference and reducing sample complexity. In riffled independence, one draws two permutations independently, then performs the riffle shuffle, common in card games,
to combine the two permutations to form a single permutation. In ranking, riffled independence corresponds to ranking disjoint sets of objects independently, then interleaving those rankings. We
provide a formal introduction and present algorithms for using riffled independence within Fourier-theoretic frameworks which have been explored by a number of recent papers.
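As a concrete illustration of the riffle step described above (my own sketch, not code from the paper; it uses the standard Gilbert-Shannon-Reeds card-shuffling rule as the interleaving model):

```python
import random

def riffle(ranking_a, ranking_b, rng=random):
    """Interleave two rankings of disjoint item sets while preserving each
    ranking's internal order. The next item is drawn from a deck with
    probability proportional to how many items that deck still holds
    (the Gilbert-Shannon-Reeds riffle rule)."""
    merged, i, j = [], 0, 0
    while i < len(ranking_a) or j < len(ranking_b):
        left = len(ranking_a) - i
        right = len(ranking_b) - j
        if rng.random() < left / (left + right):
            merged.append(ranking_a[i]); i += 1
        else:
            merged.append(ranking_b[j]); j += 1
    return merged

# E.g. rank drinks and animals independently, then riffle them together.
combined = riffle(["tea", "coffee"], ["cat", "dog", "fox"])
```

Whatever interleaving the shuffle picks, the relative order within each input ranking survives in `combined`, which is exactly the "rank disjoint sets independently, then interleave" picture.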
Recall our special variant of Hedge: we are allowed to use only distributions p(t) from some fixed convex subset P of the simplex of all distributions. The goal then is to minimize regret relative to an arbitrary distribution p ∈ P. Such a version of Hedge is given in Figure 1, and a statement of its performance below. This algorithm is implicit in the work of [4, 6].

Algorithm MW(P)
Initialization: an arbitrary probability distribution p(1) ∈ P on the experts, and some η > 0.
For t = 1, 2, ..., T:
1. Choose distribution p(t) over experts, and observe the cost vector ℓ(t).
2. Compute the probability vector p̂(t+1) using the following multiplicative update rule: for every expert i, p̂_i(t+1) = p_i(t) exp(−η ℓ_i(t)) / Z(t), (1) where Z(t) = ∑_i p_i(t) exp(−η ℓ_i(t)) is the normalization factor.
3. Set p(t+1) to be the projection of p̂(t+1) onto the set P using the RE as a distance function, i.e. p(t+1) = arg min_{p ∈ P} RE(p ‖ p̂(t+1)).

Figure 1: The Multiplicative Weights Algorithm with Restricted Distributions

Theorem 1.1. Assume that η > 0 is chosen so that for all t and i, η ℓ_i(t) ≥ −1. Then algorithm MW(P) generates distributions p(1), ..., p(T) ∈ P, such that for any p ∈ P,

∑_{t=1}^{T} ℓ(t) · p(t) − ∑_{t=1}^{T} ℓ(t) · p ≤ η ∑_{t=1}^{T} (ℓ(t))² · p(t) + RE(p ‖ p(1)) / η.

Here, (ℓ(t))² is the vector that is the coordinate-wise square of ℓ(t).

Proof. We use the relative entropy between p and p(t), RE(p ‖ p(t)) := ∑_i p_i ln(p_i / p_i(t)), as a "potential" function. We have RE(p ‖ p̂(t+1)) − RE(p ‖ p(t)) = ∑_i p_i ln(p_i(t) / p̂_i(t+1))
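A minimal numerical sketch of the multiplicative update above (my own illustration, not from the excerpt). With P taken to be the full probability simplex, the relative-entropy projection in step 3 reduces to the division by Z(t), so one round looks like:

```python
import math

def mw_update(p, losses, eta):
    """One multiplicative-weights round. When P is the whole simplex,
    dividing by Z(t) is itself the relative-entropy projection into P."""
    unnorm = [pi * math.exp(-eta * li) for pi, li in zip(p, losses)]
    z = sum(unnorm)
    return [w / z for w in unnorm]

# Three experts, uniform start; expert 0 incurs loss 1 every round.
p = [1/3, 1/3, 1/3]
for _ in range(50):
    p = mw_update(p, [1.0, 0.0, 0.0], eta=0.5)
# Nearly all weight ends up shared by the two loss-free experts.
```

For a restricted P one would follow the update with the arg-min RE projection of step 3, which in general requires a convex optimization.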
We design an online algorithm for Principal Component Analysis. In each trial the current instance is centered and projected into a probabilistically chosen low dimensional subspace. The regret of
our online algorithm, i.e. the total expected quadratic compression loss of the online algorithm minus the total quadratic compression loss of the batch algorithm, is bounded by a term whose
dependence on the dimension of the instances is only logarithmic. We first develop our methodology in the expert setting of online learning by giving an algorithm for learning as well as the best
subset of experts of a certain size. This algorithm is then lifted to the matrix setting where the subsets of experts correspond to subspaces. The algorithm represents the uncertainty over the best
subspace as a density matrix whose eigenvalues are bounded. The running time is O(n²) per trial, where n is the dimension of the instances.
ACM Transactions on Economics and Computation, to appear, 2012
We propose a general framework for the design of securities markets over combinatorial or infinite state or outcome spaces. The framework enables the design of computationally efficient markets
tailored to an arbitrary, yet relatively small, space of securities with bounded payoff. We prove that any market satisfying a set of intuitive conditions must price securities via a convex cost
function, which is constructed via conjugate duality. Rather than deal with an exponentially large or infinite outcome space directly, our framework only requires optimization over a convex hull. By
reducing the problem of automated market making to convex optimization, where many efficient algorithms exist, we arrive at a range of new polynomial-time pricing mechanisms for various problems. We
demonstrate the advantages of this framework with the design of some particular markets. We also show that by relaxing the convex hull we can gain computational tractability without compromising the
market institution’s bounded budget. Although our framework was designed with the goal of deriving efficient automated market makers for markets with very large outcome spaces, this framework also
provides new insights into the relationship between market design and machine learning, and into the complete market setting. Using our framework, we illustrate the mathematical parallels between
cost function based markets and online learning and establish a correspondence between cost function based markets and market scoring rules for complete markets.
4pPA12. Burgers' equation generalized for absorption obeying a power law.
ASA 125th Meeting Ottawa 1993 May
Thomas L. Szabo
Imaging Systems, Hewlett Packard, 3000 Minuteman Rd., Andover, MA 01810
Burgers' equation applies to finite-amplitude waves propagating in a medium with absorption that has a quadratic frequency dependence. Numerical solutions of a modified Burgers' equation have been obtained in the frequency domain for other types of losses; however, a complete set of time-domain nonlinear equations corresponding to power-law attenuation has not been available. Power-law attenuation is defined by the equation α(ω) = α₀|ω|^y, where α₀ and y are arbitrary real constants, and ω is angular frequency. Blackstock [J. Acoust. Soc. Am. 77, 2050–2053 (1985)] has suggested that Burgers' equation could be generalized if an appropriate operator L could be found such that the equation could become p_z − L*p = B p p_τ, where p is pressure, B is a constant, τ is delayed time, and the subscripts denote derivative operations. The L operators have been derived based on a new causality principle and parabolic wave equations for power-law loss. The resulting time-domain equations extend Burgers' approach to finite-amplitude propagation in media with arbitrary power-law absorption.
need a help urgent >>>
February 21st 2009, 12:42 AM #1
hi everyone;
two days ago I had my first lecture in stats, so I could not understand parameter and sample. Please help me in answering this example:
According to the Financial Times newspaper (15-10-1987), the average size of a UK household had fallen from 3.14 persons in 1970 to 2.66 persons in 1987.
a) The 1987 figure of 2.66 is claimed to be the value of a population parameter. What are the population and the parameter?
b) What procedure must be taken to be 100% certain that the value of the population parameter is exactly 2.66?
c) What procedure was likely used to arrive at the 1987 figure of 2.66? Use the terms sample, sample statistic, and inference in your answer.
If I understand your question correctly, then:
a) A population is a set which interests you. The population in your case is all the houses in the UK; you are interested in the size of them. The parameter is an attribute of the population, usually something you do not know and want to find out. In this case, the parameter is the "average size of a UK household", which I would have defined as "expectation" rather than "average", but you won't understand why at this stage. Another definition of population can be the distribution of the data, but that's a more abstract definition.
b) In order to be 100% sure, and I repeat, 100% sure, you must check every house in your population; there is no other way.
c) Since checking the entire population is not reasonable, some other procedure was probably taken. A sample, which is a subset of the population, was taken, let's say, for example, 300 houses.
Then the average of this sample was calculated, and was used for inference on the parameter we want to find out. Inference means taking the sample, calculating some value from it (called a "statistic", in this case the average), and using the statistic to estimate the true value of the parameter we want, in this case the population's average.
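A small simulation of part (c) may make the three terms concrete (my own addition; the numbers are made up purely for illustration, not real UK data):

```python
import random

random.seed(0)

# A made-up "population": one household size per UK household.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(1_000_000)]

# The parameter: the population average, which in real life is unknown.
parameter = sum(population) / len(population)

# A sample: a small subset of the population that we actually measure.
sample = random.sample(population, 300)

# The sample statistic: the average of the sample.
statistic = sum(sample) / len(sample)

# Inference: use the statistic as an estimate of the unknown parameter.
```

Running this, the sample average of just 300 households lands close to the population average, which is exactly how a figure like 2.66 would have been arrived at.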
Knaster–Tarski theorem
Recently I was stuck on how to prove the following.
It eventually occurred to me that the problem might have something to do with the theory of posets and functions which preserve order in posets. I already knew something about posets (partially
ordered sets). So I looked them up on the Internet – and found that the problem actually had something to do with complete lattices. A complete lattice is a poset in which every subset has a supremum
and an infimum. The power set of a set X with the subset order is certainly a complete lattice, because if C is a collection of subsets of X, its supremum is the union of the sets in C and its infimum is their intersection. (NB: this applies to the empty collection as well. The supremum of the empty collection is Ø and the infimum of the empty collection is the whole set X.)
I presently found out about the
Knaster–Tarski theorem
: if L is a complete lattice and f : L → L is an order-preserving function, then the set of all fixed points of f is also a complete lattice.
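As a side illustration of why such fixed points exist (my own sketch, not part of the original post): on the power set of a finite set, the least fixed point of an order-preserving map can be found by simply iterating from the empty set.

```python
def least_fixed_point(f, universe):
    """Least fixed point of a monotone f from subsets of `universe` to
    subsets of `universe`, by Kleene iteration from the bottom element.
    Since {} <= f({}) <= f(f({})) <= ... and the power set of a finite
    universe has no infinite ascending chains, the iteration stabilizes."""
    current = frozenset()
    while True:
        nxt = frozenset(f(current))
        if nxt == current:
            return current
        current = nxt

# A monotone map on subsets of {0,...,5}: always contains 0, and adds
# n+1 whenever it contains n. Its least fixed point is the whole set.
universe = set(range(6))
f = lambda s: {0} | {n + 1 for n in s if n + 1 in universe}
lfp = least_fixed_point(f, universe)  # equals {0, 1, 2, 3, 4, 5}
```

The iteration here is the computational shadow of the Knaster–Tarski argument: monotonicity keeps the chain ascending, and completeness of the lattice guarantees a place for it to stop.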
This doesn’t solve the original problem directly, but I sensed that I was on the right track. And then I found this:
http://www.cas.mcmaster.ca/~forressa/ac … 1-talk.pdf
So, in the original problem, taking
will do the trick.
http://www.artofproblemsolving.com/Foru … p?t=183245
Last edited by JaneFairfax (2008-01-21 06:39:07)
solve perpendicular vectors problem
December 7th 2009, 04:34 PM #1
(the vectors in the problem are really in column form, but i don't know how to type that so i put them in component form)
vectorP = 2i +5j
vectorQ = -i+ 2j
☆find the value of p·q
(I know the dot product is 8, that was ez) but the next part i don't get...
☆ if... s = kp - (p·q)q
find the value of the constant k such that s is perpendicular to q
plz help
The dot product $\mathbf{p}\cdot\mathbf{q}$ increases when $\mathbf{p}$ and $\mathbf{q}$ point in the same direction, and drops to $0$ exactly when $\mathbf{p}$ and $\mathbf{q}$ are perpendicular.
Therefore, $\mathbf{s}$ will be perpendicular to $\mathbf{q}$ when
$\mathbf{s}\cdot\mathbf{q} = (k\mathbf{p}-(\mathbf{p}\cdot\mathbf{q})\mathbf{q})\cdot\mathbf{q} = k(\mathbf{p}\cdot\mathbf{q})-(\mathbf{p}\cdot\mathbf{q})|\mathbf{q}|^2 = (\mathbf{p}\cdot\mathbf{q})(k-|\mathbf{q}|^2) = 0.$
Here, we have used the formula $\mathbf{q}\cdot\mathbf{q}=|\mathbf{q}|^2$.
ummm... sorry i feel kinda lame, but im in high school and i kinda got lost in the first step, plz explain, im not so good with the math...i still don't see how to get the value of k
Given that
$(\mathbf{p}\cdot\mathbf{q})(k-|\mathbf{q}|^2) = 8(k-|\mathbf{q}|^2) = 0,$
we may divide both sides by $8$, giving
$k-|\mathbf{q}|^2 = 0.$
Now, we may add $|\mathbf{q}|^2$ to both sides, giving
$k = |\mathbf{q}|^2 = (-1)^2+2^2 = 5.$
Hope this helps!
i get it now, grazie!
Last edited by 3k1yp2; December 7th 2009 at 06:26 PM. Reason: now i got it
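A quick numerical check of the answer above (my own addition, not part of the original thread):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

p, q = (2, 5), (-1, 2)

pq = dot(p, q)   # 2*(-1) + 5*2 = 8
k = dot(q, q)    # |q|^2 = (-1)^2 + 2^2 = 5

# s = k*p - (p.q)*q
s = (k * p[0] - pq * q[0], k * p[1] - pq * q[1])  # (18, 9)

perp = dot(s, q)  # 0, so s is indeed perpendicular to q
```

With k = 5 the vector s comes out as (18, 9) and s·q = −18 + 18 = 0, confirming the perpendicularity.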
Elements I.29
Euclid's Elements: Book I: Proposition 29
Proposition 29: A straight line falling on parallel straight lines makes the alternate angles equal to one another, the exterior angle equal to the interior and opposite angle, and the interior
angles on the same side equal to two right angles.
For let the straight line EF fall on the parallel straight lines AB, CD;
I say that it makes the alternate angles AGH, GHD equal, the exterior angle EGB equal to the interior and opposite angle GHD, and the interior angles on the same side, namely BGH, GHD, equal to two
right angles.
For, if the angle AGH is unequal to the angle GHD, one of them is greater.
Let the angle AGH be greater.
Let the angle BGH be added to each; therefore the angles AGH, BGH are greater than the angles BGH, GHD.
But the angles AGH, BGH are equal to two right angles; I. 13 therefore the angles BGH, GHD are less than two right angles.
But straight lines produced indefinitely from angles less than two right angles meet; Post. 5
therefore AB, CD, if produced indefinitely, will meet; but they do not meet, because they are by hypothesis parallel.
Therefore the angle AGH is not unequal to the angle GHD, and is therefore equal to it.
Again, the angle AGH is equal to the angle EGB; I. 15 therefore the angle EGB is also equal to the angle GHD. C.N. 1
Let the angle BGH be added to each; therefore the angles EGB, BGH are equal to the angles BGH, GHD. C.N. 2
But the angles EGB, BGH are equal to two right angles; I. 13
Therefore the angles BGH, GHD are also equal to two right angles.
Therefore etc.
This proposition is very important as it contains the first reference to Postulate 5. This postulate has a similar effect on the structure of Euclid's Elements to that of Elements I.4. Removing this postulate enabled geometers to "discover" non-Euclidean geometry.
Finding integrals
September 9th 2008, 07:13 PM #1
Finding integrals
find the indefinite integral of (x^2)/[(9-x^2)^(1/2)]
I think I'm supposed to try to set x = 3sin(u), thus dx = 3cos(u) du. I think I'm on the right track but I just keep getting the integral of 9sin^2(u) du and I get all messed up
i hope that makes sense. i dont know how to type all crazy math like
That's very good. Now use the identity: $\sin^2 x = \frac{1 - \cos 2x}{2}$
This is much easier to integrate.
okay, then i split it up as 2 integrals like so: 9{integral(1/2) - integral[cos(2u)/2]}
then the antiderivative of those comes out to 9[(u/2) - (sin(2u)/4)]
then when i plug arcsin(x/3) into u it gets all messed up and i have to take the sin of the arcsin?
i dont know if im doing something wrong
Use the double angle identity for Sine:
$\sin{2x} = 2\sin{x}\cos{x}$
Then I'm sure you know how to figure out $\sin{x}$ and $\cos{x}$
umm i understand that the sin(u) would then equal x/3 but what happens to the cos(u)? i have cos(u)=dx/3?
$tan\theta = \frac{\text{opposite}}{\text{adjacent}}$
$sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}$
$cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}$
Where did we get a triangle from?
$sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}~=~\frac{ x}{3}$
Now using this information find out what $cos\theta$ is.
Hint use pythargeon theorem to find the adjacent side
finally i see it thank you!!!
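For anyone who wants to double-check the final antiderivative numerically (my own addition to the thread):

```python
import math

def F(x):
    # Antiderivative obtained from the substitution x = 3 sin(u):
    # integral of x^2 / sqrt(9 - x^2) dx
    #   = (9/2) arcsin(x/3) - (x/2) sqrt(9 - x^2) + C
    return 4.5 * math.asin(x / 3) - 0.5 * x * math.sqrt(9 - x * x)

def integrand(x):
    return x * x / math.sqrt(9 - x * x)

# Spot-check that F'(x) matches the integrand with a central difference.
x, h = 1.7, 1e-6
approx_derivative = (F(x + h) - F(x - h)) / (2 * h)
```

The central-difference slope of F agrees with the integrand to many decimal places, which is a quick sanity check that the triangle step was carried out correctly.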
Ardmore, PA Science Tutor
Find an Ardmore, PA Science Tutor
Hey Everybody! I am an experienced tutor teaching subjects like physics, calculus, algebra, Spanish and even guitar. I was originally an engineer for a helicopter company for nearly 4 years and I
resigned to start a career in education.
16 Subjects: including mechanical engineering, Spanish, physical science, calculus
...Scored 800/800 on January 26, 2013 SAT Writing exam, with a 12 on the essay. Able to help focus students on necessary grammar rules and help them with essay composition. I majored in
Operations Research and Financial Engineering at Princeton, which involved a great deal of higher level math similar to that seen on the Praxis test.
19 Subjects: including ACT Science, calculus, statistics, geometry
...I believe strongly in helping students see the larger patterns and pictures of subject areas and to see the practicality of what they are learning in their daily lives.I have a wide range of
strategies for helping students understand elementary math. I believe strongly in helping students to con...
32 Subjects: including sociology, anthropology, ACT Science, reading
...I now work full-time as an engineer in the greater Philadelphia area. My experience ranges from tutoring first graders on reading and small sentence structure to 12th graders on college
application essays to undergraduates on philosophy term papers. My goal is to train you on how to become a be...
37 Subjects: including mechanical engineering, philosophy, ACT Science, physics
...I have developed effective strategies for helping students to discover their authentic voices as writers. I am also a conservatory-trained, professional musician. I provide instruction in
guitar, bass (acoustic and electric), mandolin, music theory, and musical composition and arranging.I taught public speaking classes to high school students when I taught at Strath Haven High
24 Subjects: including anthropology, English, reading, public speaking
04 - Physics of a real stepper motor
So lets now leave our imaginary world and look at how this extends to real stepper motors.
Our rotating ‘Bar A’ only had two poles – one at each end of the bar – and so we only got two steps per revolution. A real stepper motor may have, for example, 48 steps per revolution and each step
would therefore move through 360/48 or 7.5 degrees per step. How does this happen? Well, rather than it being a bar, then think of it as a cylinder where every 7.5 degrees the polarity is either
North or South but it always alternates from one to the other. That’s why stepper motors always have an ‘even’ number of steps. Here’s what it may look like if we look down on it and there are only 8
Now instead of one Bar B we actually have two. Each of these two electro-magnets will have a coil around it. To complicate things slightly, there are two different ways that these coils are configured: 'bi-polar' and 'uni-polar'.
We will look at bi-polar first because it is the simplest to understand, whereas a uni-polar motor has other advantages and disadvantages but can, when required, be used as if it were a bi-polar motor.
We will find out why later.
4.1 - Bi-polar stepper motors
A bi-polar stepper motor looks like this:
As you can see it has two coils and will normally therefore have 4 leads. If you have no documentation for your motor then you can just use an ohmmeter to work out each pair of pins. I.e. 1a and 1b
will have a low resistance between them, and 2a and 2b will also have a low resistance. If you measure across two pins and have an infinite resistance then they are from different coils. So once you
know which two pins are coil 1 and which are coil 2 then you will still need to find out which are ‘a’ and which are ‘b’ – but more on that later.
Bi-polar stepper motor driver
To drive the motor then let us consider one of the coils – say coil 1. In order to become an electromagnet we need to be able to change the direction of the current through the coil. So each side of the coil needs to be either high or low.
If '1a' is high and '1b' is low then we will have a current through the coil in one direction.
If '1a' is low and '1b' is high then we will have a current through the coil in the opposite direction.
In the other two cases: where they are both high, or both low, then no current flows and so it is no longer a magnet.
A suitable circuit to do this is an H-Bridge. As you will see from that link, you need four switches in the H-Bridge to be able to give the four control states we have mentioned above.
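To see how the drive states are sequenced in software, here is a sketch of a common full-step table (illustrative only, not tied to any particular driver chip):

```python
# Full-step drive table for a bi-polar stepper. Each tuple is the polarity
# the H-Bridge applies to (coil 1, coil 2): +1 = current in one direction,
# -1 = current in the other. Each entry rotates the magnetic field by 90
# electrical degrees, so walking forwards through the table turns the
# rotor one way and walking backwards reverses it.
FULL_STEP = [
    (+1, +1),
    (-1, +1),
    (-1, -1),
    (+1, -1),
]

def step_sequence(n_steps, direction=+1):
    """Yield the (coil 1, coil 2) drive states for n_steps full steps."""
    for i in range(n_steps):
        yield FULL_STEP[(i * direction) % len(FULL_STEP)]

states = list(step_sequence(6))  # wraps back to the first state after four
```

Each of the four table entries corresponds to one combination of the H-Bridge control states described above.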
4.2 - Uni-polar stepper motors
A uni-polar stepper motor looks like this:
The only difference is that each of the coils now has 3 wires and will normally therefore have 6 leads. The new wire on each coil is called a 'centre tap' and is connected to the middle of the coil. If
you have no documentation on your motor then use your ohmmeter to check the connections. By checking for infinite resistance then you should be able to identify the leads that are for one coil, and
the other leads for the other coil. Given a group of 3 leads you can tell which is the centre tap because, for coil 1, the resistance between ‘1a’ and the centre tap will be same as that between the
centre tap and ‘1b’. The resistance between ‘1a’ and ‘1b’ will be double this value. NB the resistance tend to be very low, a few ohms, so you will need to select the appropriate resistance scale on
your meter.
Uni-polar stepper motor driver
As mentioned earlier: you can drive a uni-polar motor 'as if' it was a bi-polar motor. To do this you just ignore the centre tap and then use the other two leads per coil as if it was a bi-polar. Otherwise: you want to use the centre tap and, assuming you connect it to ground, you will need two switches to dictate which direction the current flows through the coil.
Note that this mode of operation means that you are only using half of the coil in each direction. This means the 'half coil' only has half of the total resistance. Using Volts = Amps x Resistance (and assuming your supply voltage is the same), if the resistance is halved then the current drain is doubled.
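Plugging illustrative numbers into Volts = Amps x Resistance (the values here are made up purely for the comparison):

```python
# With the same supply voltage, halving the winding resistance
# (driving only half of a centre-tapped coil) doubles the current drawn.
supply_volts = 12.0
full_coil_ohms = 8.0
half_coil_ohms = full_coil_ohms / 2  # only half the winding carries current

i_bipolar = supply_volts / full_coil_ohms    # whole coil: 1.5 A
i_unipolar = supply_volts / half_coil_ohms   # half coil: 3.0 A, i.e. double
```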
So why would you choose uni-polar over bi-polar if it requires twice as much current?
Compared to the bi-polar H-Bridge driver, which requires ‘4 switches; per coil then the uni-polar circuit only needs ‘2 switches’ per coil. So less electronics!
The price you pay is that you may be using twice as much current and because you are using half of the coil at a time then you may only get half the torque. Despite its name (uni versus bi) it sounds
as if it is less capable but remember you can always use a uni-polar motor as if it was a bi-polar by ignoring the centre tap. So a uni-polar can be thought of as a bi-polar with extra choices.
Complexity, Vol. 7, No. 5, May/June 2002, pp. 14-21
Abstract: I'll discuss how Gödel's paradox "This statement is false/unprovable" yields his famous result on the limits of axiomatic reasoning. I'll contrast that with my work, which is based on the
paradox of "The first uninteresting positive whole number", which is itself a rather interesting number, since it is precisely the first uninteresting number. This leads to my first result on the
limits of axiomatic reasoning, namely that most numbers are uninteresting or random, but we can never be sure, we can never prove it, in individual cases. And these ideas culminate in my discovery
that some mathematical facts are true for no reason, they are true by accident, or at random. In other words, God not only plays dice in physics, but even in pure mathematics, in logic, in the world
of pure reason. Sometimes mathematical truth is completely random and has no structure or pattern that we will ever be able to understand. It is not the case that simple clear questions have simple
clear answers, not even in the world of pure ideas, and much less so in the messy real world of everyday life.
This talk was given Monday 13 May 2002 at Monash University in Melbourne, Australia, and previously to summer visitors at the IBM Watson Research Center in 2001. There are no section titles; the
displayed material is what I wrote on the whiteboard as I spoke.
When I was a small child I was fascinated by magic stories, because they postulate a hidden reality behind the world of everyday appearances. Later I switched to relativity, quantum mechanics,
astronomy and cosmology, which also seemed quite magical and transcend everyday life. And I learned that physics says that the ultimate nature of reality is mathematical, that math is more real than
the world of everyday appearances. But then I was surprised to learn of an amazing, mysterious piece of work by Kurt Gödel that pulled the rug out from under mathematical reality! How could this be?!
How could Gödel show that math has limitations? How could Gödel use mathematical reasoning to show that mathematical reasoning is in trouble?!
Applying mathematical methods to study the power of mathematics is called meta-mathematics, and this field was created by David Hilbert about a century ago. He did this by proposing that math could
be done using a completely artificial formal language in which you specify the rules of the game so precisely that there is a mechanical procedure to decide if a proof is correct or not. A formal
axiomatic theory of the kind that Hilbert proposed would consist of axioms and rules of inference with an artificial grammar and would use symbolic logic to fill in all the steps, so that it becomes
completely mechanical to apply the rules of inference to the axioms in every possible way and systematically deduce all the logical consequences. These are called the theorems of the formal theory.
You see, once you do this, you can forget that your formal theory has any meaning and study it from the outside as if it were a meaningless game for generating strings of symbols, the theorems. So
that's how you can use mathematical methods to study the power of mathematics, if you can formulate mathematics as a formal axiomatic theory in Hilbert's sense. And Hilbert in fact thought that all
of mathematics could be put into one of his formal axiomatic theories, by making explicit all the axioms or self-evident truths and all the methods of reasoning that are employed in mathematics.
In fact, Zermelo-Fraenkel set theory with the axiom of choice, ZFC, uses first-order logic and does this pretty well. And you can see some interesting work on this by my friend Jacob T. Schwartz at
his website at http://www.settheory.com.
But then in 1931 Kurt Gödel showed that it couldn't be done, that no formal axiomatic theory could contain all of mathematical truth, that they were all incomplete. And this exploded the normal
Platonic view of what math is all about.
How did Gödel do this? How can mathematics prove that mathematics has limitations? How can you use reasoning to show that reasoning has limitations?
How does Gödel show that reasoning has limits? The way he does it is he uses this paradox:
"This statement is false!"
You have a statement which says of itself that it's false. Or it says
"I'm lying!"
"I'm lying" doesn't sound too bad! But "the statement I'm making now is a lie, what I'm saying right now, this very statement, is a lie", that sounds worse, doesn't it? This is an old paradox that
actually goes back to the ancient Greeks, it's the paradox of the liar, and it's also called the Epimenides paradox, that's what you call it if you're a student of ancient Greece.
And looking at it like this, it doesn't seem something serious. I didn't take this seriously. You know, so what! Why should anybody pay any attention to this? Well, Gödel was smart, Gödel showed why
this was important. And Gödel changed the paradox, and got a theorem instead of a paradox. So how did he do it? Well, what he did is he made a statement that says of itself,
"This statement is unprovable!"
Now that's a big, big difference, and it totally transforms a game with words, a situation where it's very hard to analyze what's going on. Consider
"This statement is false!"
Is it true, is it false? In either case, whatever you assume, you get into trouble, the opposite has got to be the case. Why? Because if it's true that the statement is false, then it's false. And if
it's false that the statement is false, then it's true.
But with
"This statement is unprovable!"
you get a theorem out, you don't get a paradox, you don't get a contradiction. Why? Well, there are two possibilities. With
"This statement is false!"
you can assume it's true, or you can assume it's false. And in each case, it turns out that the opposite is then the case. But with
"This statement is unprovable!"
the two possibilities that you have to consider are different. The two cases are: it's provable, it's unprovable.
So if it's provable, and the statement says it's unprovable, you've got a problem, you're proving something that's false, right? So that would be very embarrassing, and you generally assume by
hypothesis that this cannot be the case, because it would really be too awful if mathematics were like that. If mathematics can prove things that are false, then mathematics is in trouble, it's a
game that doesn't work, it's totally useless.
So let's assume that mathematics does work. So the other possibility is that this statement
"This statement is unprovable!"
is unprovable, that's the other alternative. Now the statement is unprovable, and the statement says of itself that it's unprovable. Well then it's true, because what it says corresponds to reality.
And then there's a hole in mathematics, mathematics is "incomplete", because you've got a true statement that you can't prove. The reason that you have this hole is because the alternative is even
worse, the alternative is that you're proving something that's false.
The argument that I've just sketched is not a mathematical proof, let me hasten to say that for those of you who are mathematicians and are beginning to feel horrified that I'm doing everything so
loosely. This is just the basic idea. And as you can imagine, it takes some cleverness to make a statement in mathematics that says of itself that it's unprovable. You know, you don't normally have
pronouns in mathematics, you have to have an indirect way to make a statement refer to itself. It was a very, very clever piece of work, and this was done by Gödel in 1931.
The only problem with Gödel's proof is that I didn't like it, it seemed strange to me, it seemed beside the point, I thought there had to be a better, deeper reason for incompleteness. So I came up
with a different approach, another way of doing things. I found a different source for incompleteness.
Now let me tell you my approach. My approach starts off like this... I'll give you two versions, a simplified version, and a slightly less-of-a-lie version.
The simplified version is, you divide all numbers... you think of whether numbers are interesting or uninteresting, and I'm talking about whole numbers, positive integers,
1, 2, 3, 4, 5, ...
That's the world I'm in, and you talk about whether they're interesting or uninteresting.
Somehow you separate them into those that are interesting, and those that are uninteresting, okay? I won't tell you how. Later I'll give you more of a clue, but for now let's just keep it like that.
So, the idea is, then, if somehow you can separate all of the positive integers, the whole numbers, 1, 2, 3, 4, 5, into ones that are interesting and ones that are uninteresting, you know, each
number is either interesting or uninteresting, then think about the following whole number, the following positive integer:
"The first uninteresting positive integer"
Now if you think about this number for a while, it's precisely what? You start off with 1, you ask is it interesting or not. If it's interesting, you keep going. Then you look and see if 2 is
interesting or not, and precisely when you get to the first uninteresting positive integer, you stop.
But wait a second, isn't that sort of an interesting fact about this positive integer, that it's precisely the first uninteresting positive integer?! I mean, it stands out that way, doesn't it? It's
sort of an interesting thing about it, the fact that it happens to be precisely the smallest positive integer that's uninteresting! So that begins to give you an idea that there's a problem, that
there's a serious problem with this notion of interesting versus uninteresting.
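The scan just described fits in a few lines of Python. The `is_interesting` predicate is of course a hypothetical stand-in, since the whole point of the paradox is that no such classification can fully work; here a toy predicate (membership in a fixed set) plays the role:

```python
def first_uninteresting(is_interesting, limit=1000):
    # Scan 1, 2, 3, ... and return the first integer that the
    # (hypothetical) predicate classifies as uninteresting.
    for n in range(1, limit + 1):
        if not is_interesting(n):
            return n
    return None  # everything up to the limit was interesting

# Toy stand-in: "interesting" just means "appears in a fixed set".
toy_interesting = {1, 2, 3, 4, 5, 6, 7}
print(first_uninteresting(lambda n: n in toy_interesting))  # -> 8
```

The paradox is precisely that the number this loop returns is, by virtue of being returned, distinguished from all the others.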
Interestingly enough, last week I gave this talk at the University of Auckland in New Zealand, and Prof. Garry Tee showed me the Penguin Dictionary of Curious and Interesting Numbers by David Wells
that was published in Great Britain in 1986. And I'll read what it says on page 120: "39---This appears to be the first uninteresting number, which of course makes it an especially interesting
number, because it is the smallest number to have the property of being uninteresting." So I guess if you read his dictionary you will find that the entries for the positive integers 1 through 38
indicate that each of them is interesting for some reason!
And now you get into a problem with mathematical proof. Because let's assume that somehow you can use mathematics to prove whether a number is interesting or uninteresting. First you've got to give a
rigorous definition of this concept, and later I'll explain how that goes. If you can do that, and if you can also prove whether particular positive integers are interesting or uninteresting, you get
into trouble. Why? Well, just think about the first positive integer that you can prove is uninteresting.
"The first provably uninteresting positive integer"
We're in trouble, because the fact that it's precisely the first positive integer that you can prove is uninteresting, is a very interesting thing about it! So if there cannot be a first positive
integer that you can prove is uninteresting, the conclusion is that you can never prove that particular positive integers are uninteresting. Because if you could do that, the first one would ipso
facto be interesting!
But I should explain that when I talk about the first provably uninteresting positive integer I don't mean the smallest one, I mean the first one that you find when you systematically run through all
possible proofs and generate all the theorems of your formal axiomatic theory. I should also add that when you carefully work out all the details, it turns out that you might be able to prove that a
number is uninteresting, but not if its base-two representation is substantially larger than the number of bits in the program for systematically generating all the theorems of your formal axiomatic
theory. So you can only prove that a finite number of positive integers are uninteresting.
So that's the general idea. But this paradox of whether you can classify whole numbers into uninteresting or interesting ones, that's just a simplified version. Hopefully it's more understandable
than what I actually worked with, which is something called the Berry paradox. And what's the Berry paradox?
Berry Paradox
I showed you the paradox of the liar, "This statement is false, I'm lying, what I'm saying right now is a lie, it's false." The Berry paradox talks about
"The first positive integer that can't be named in less than a billion words"
Or you can make it bytes, characters, whatever, you know, some unit of measure of the size of a piece of text:
Berry Paradox
"The first positive integer that can't be named in less than a billion words/bytes/characters"
So you use texts in English to name a positive integer. And if you use texts up to a billion words in length, there are only a finite number of them, since there are only a finite number of words in
English. Actually we're simplifying, English is constantly changing. But let's assume English is fixed and you don't add words and a dictionary has a finite size. So there are only a finite number of
words in English, and therefore if you consider all possible texts with up to a billion words, there are a lot of them, but it's only a finite number, as mathematicians say jokingly in their in-house jargon.
And most texts in English don't name positive integers, you know, they're novels, or they're actually nonsense, gibberish. But if you go through all possible texts of up to a billion words, and
there's only a finite list of them, every possible way of using an English text that size to name a number will be there somewhere. And there are only a finite number of numbers that you can name
with this finite number of texts, because to name a number means to pick out one specific number, to refer to precisely one of them. But there are an infinite number of positive integers. So most
positive integers, almost all of them, require more than a billion words, or any fixed number of words. So just take the first one. Since almost all of them need more than a billion words to be
named, just pick the first one.
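The finiteness claim here is just geometric-series arithmetic. A quick sketch, with a made-up vocabulary size standing in for "all the words in English":

```python
def texts_up_to(vocab_size, max_words):
    # Number of distinct word sequences of length 1..max_words:
    # vocab_size + vocab_size^2 + ... + vocab_size^max_words.
    return sum(vocab_size ** k for k in range(1, max_words + 1))

# Astronomically large, but finite -- so only finitely many positive
# integers can ever be named by texts of bounded size.
n = texts_up_to(vocab_size=500_000, max_words=10)
print(len(str(n)), "digits")  # 57 digits: huge, but finite
```

Since only finitely many numbers can be named, almost every positive integer needs a longer text, and "the first one that doesn't fit" is guaranteed to exist.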
So this number is there. The only problem is, I just named it in much less than a billion words, even with all the explanation! [Laughter] Thanks for smiling and laughing! If nobody smiles or laughs,
it means I blew it, it means that I didn't explain it well! And if everybody laughs, then that's my lucky day!
So there's a problem with this notion of naming, and this is called the Berry paradox. And if you think that the paradox of the liar, "this statement is false", or "what I'm saying now is a lie", is
something that you shouldn't take too seriously, well, the Berry paradox was taken even less seriously. I took it seriously though, because the idea I extracted from it is the idea of looking at the
size of computer programs, which I call program-size complexity.
Program-Size Complexity
For me the central idea of this paradox is how big a text does it take to name something. And the paradox originally talks about English, but that's much too vague! So to make this into mathematics
instead of just being a joke, you have to give a rigorous definition of what language you're using and how something can name something else. So what I do is I pick a computer-programming language
instead of using English or any real language, any natural language, I pick a computer-programming language instead. And then what does it mean, how do you name an integer? Well, you name an integer
by giving a way to calculate it. A program names an integer if its output is that integer, you know, it outputs that integer, just one, and then it stops. So that's how you name an integer using a computer program.
And then what about looking at the size of a text measured in billions of words? Well, you don't want to talk about words, that's not a convenient measure of software size. People in fact in practice
use megabytes of code, but since I'm a theoretician I use bits. You know, it's just a multiplicative constant conversion factor! In biology the unit is kilobases, right? So every field has its way of
measuring information.
Okay, so what does it mean then for a number to be interesting or uninteresting, now that I'm giving you a better idea of what I'm talking about. Well, interesting means it stands out some way from
the herd, and uninteresting means it can't be distinguished really, it's sort of an average, typical number, one that isn't worth a second glance. So how do you define that mathematically using this
notion of the size of computer programs? Well, it's very simple: a number is uninteresting or algorithmically random or irreducible or incompressible if there's no way to name it that's more concise
than just writing out the number directly. That's the idea.
In other words, if the most concise computer program for calculating a number just says to print 123796402, in that case, if that's the best you can do, then that number is uninteresting. And that's
typically what happens. On the other hand, if there is a small, concise computer program that calculates the number, that's atypical, that means that it has some quality or characteristic that
enables you to pick it out and to compress it into a smaller algorithmic description. So that's unusual, that's an interesting number.
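True program-size complexity is uncomputable, but an off-the-shelf compressor gives a crude, hedged stand-in for the idea: a patterned string has a short description, while a (pseudo-)random one is essentially incompressible:

```python
import random
import zlib

def approx_complexity(data: bytes) -> int:
    # Length of the zlib-compressed form: a rough upper bound standing
    # in for the (uncomputable) shortest-program size.
    return len(zlib.compress(data, 9))

patterned = b"12" * 500              # 1000 bytes with obvious structure
random.seed(0)
noisy = random.randbytes(1000)       # 1000 pseudo-random bytes

print(approx_complexity(patterned))  # tiny: the pattern compresses away
print(approx_complexity(noisy))      # near 1000: barely compresses at all
```

Compression is only an approximation from above, of course; no program can compute the true shortest description.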
Once you set up this theory properly, it turns out that most numbers, the great majority of positive integers, are uninteresting. You can prove that as a theorem. It's not a hard theorem, it's a
counting argument. There can't be a lot of interesting numbers, because there aren't enough concise programs. You know, there are a lot of positive integers, and if you look at programs with the same
size in bits, there are only about as many programs of the same size as there are integers, and if the programs have to be smaller, then there just aren't enough of them to name all of those
different positive integers.
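The counting argument itself fits in one line of arithmetic: there are at most 2^(n−c) − 1 programs shorter than n − c bits, so at most that many of the 2^n n-bit strings can be compressed by c bits, a fraction below 2^−c:

```python
from fractions import Fraction

def max_compressible_fraction(n_bits: int, savings: int) -> Fraction:
    # At most 2^(n - savings) - 1 programs are shorter than
    # n - savings bits, so at most that many of the 2^n
    # n-bit strings can be compressed by `savings` bits.
    return Fraction(2 ** (n_bits - savings) - 1, 2 ** n_bits)

for c in (1, 8, 20):
    print(c, float(max_compressible_fraction(64, c)))
```

So fewer than one string in a million can be compressed by 20 bits, and the fraction halves with every extra bit of savings demanded.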
So it's very easy to show that the vast majority of positive integers cannot be named substantially more concisely than by just exhibiting them directly. Then my key result becomes, that in fact you
can never prove it, not in individual cases! Even though most positive integers are uninteresting in this precise mathematical sense, you can never be sure, you can never prove it---although there
may be a finite number of exceptions. But you can only prove it in a small number of cases. So most positive integers are uninteresting or algorithmically incompressible, but you can almost never be
sure in individual cases, even though it's overwhelmingly likely.
That's the kind of "incompleteness result" I get. (That's what you call a result stating that you can't prove something that's true.) And my incompleteness result has a very different flavor than
Gödel's incompleteness result, and it leads in a totally different direction. Fortunately for me, everyone liked the liar paradox, but nobody took the Berry paradox really seriously!
Let me give you another version of this result. Let's pick a computer programming language, and I'll say that a computer program is elegant if no program that is smaller than it is produces the same
output that it does. Then you can't prove that a program is elegant if it's substantially larger than the algorithm for generating all the theorems of the formal axiomatic theory that you are using,
if that's written in that same computer programming language. Why?
Well, start generating all the theorems until you find the first one that proves that a particular computer program that is larger than that is elegant. That is, find the first provably elegant
program that's larger than the program in the same language for generating all the theorems. Then run it, and its output will be your output.
I've just described a program that produces the same output as a provably elegant program, but that's smaller than it is, which is impossible! This contradiction shows that you can only prove that a
finite number of programs are elegant, if you are using a fixed formal axiomatic theory.
By the way, this implies that you can't always prove whether or not a program halts, because if you could do that then it would be easy to determine whether or not a program is elegant. So I'm really
giving you an information-theoretic perspective on what's called Turing's halting problem, I'm connecting that with the idea of algorithmic information and with program-size complexity.
I published an article about all of this in Scientific American in 1975, it was called "Randomness and mathematical proof," and just before that I called Gödel on the phone to tell him about it, that
was in 1974.
I was working for IBM in Buenos Aires at the time, and I was visiting the IBM Watson Research Center in New York---that was before I joined IBM Research permanently. And just before I had to go back
to Buenos Aires I called Gödel on the phone at the Princeton Institute for Advanced Study and I said, "I'm fascinated by your work on incompleteness, and I have a different approach, using the Berry
paradox instead of the paradox of the liar, and I'd really like to meet you and tell you about it and get your reaction." And he said, "It doesn't make any difference which paradox you use!" (And his
1931 paper said that too.) I answered, "Yes, but this suggests to me a new information-theoretic view of incompleteness that I'd very much like to tell you about." He said, "Well, send me a paper on
this subject and call me back, and I'll see if I give you an appointment."
I had one of my first papers then, actually it was the proofs of one of my first papers on the subject. It was my 1974 IEEE Transactions on Information Theory paper; it's reprinted in Tymoczko, New
Directions in the Philosophy of Mathematics. And I mailed it to him. And I called back. And incredibly enough, he made a small technical remark, and he gave me an appointment. I was delighted, you
can imagine, my hero, Kurt Gödel! And the great day arrives, and I'm in my office in the Watson Research Center at Yorktown Heights, NY, and it was April 1974, spring. In fact, it was the week before
Easter. And I didn't have a car. I was coming from Buenos Aires, I was staying at the YMCA in White Plains, but I figured out how to get to Princeton, New Jersey by train. You know, I'd take the
train into New York City and then out to Princeton. It would only take me three hours, probably, to do it!
So I'm in my office, ready to go, almost, and the phone rings. And I forgot to tell you, even though it was the week before Easter, it had snowed. It wasn't a whole lot of snow; you know, nothing
would stop me from visiting my hero Gödel at Princeton. So anyway, the phone rings, and it's Gödel's secretary, and she says, "Prof. Gödel is extremely careful about his health, and because it's
snowed, he's not going to be coming in to the Institute today, so your appointment is canceled!"
And as it happened, that was just two days before I had to take a plane back to Buenos Aires from New York. So I didn't get to meet Gödel! This is one of the stories that I put in my book
Conversations with a Mathematician.
So all it takes is a new idea! And the new idea was waiting there for anybody to grab it. The other thing you have to do when you have a new idea is, don't give up too soon. As George Polya put it in
his book How to Solve It, theorems are like mushrooms, usually where there's one, others will pop up! In other words, another way to put it, is that usually the difference between a professional,
expert mathematician with lots of experience and a young, neophyte mathematician is not that the older mathematician has more ideas. In fact, the opposite is usually the case. It's usually the kids
that have all the fresh ideas! It's that the professional knows how to take more advantage of the few ideas he has. And one of the things you do, is you don't give up on an idea until you get all the
milk, all the juice out of it!
So what I'm trying to lead up to is that even though I had an article in Scientific American in 1975 about the result I just told you, that most numbers are random, algorithmically random, but you
can never prove it, I didn't give up, I kept thinking about it. And sure enough, it turned out that there was another major result there, that I described in my article in Scientific American in
1988. Let me try to give you the general idea.
The conclusion is that
Some mathematical facts
are true for no reason,
they're true by accident!
Let me just explain what this means, and then I'll try to give an idea of how I arrived at this surprising conclusion. The normal idea of mathematics is that if something is true it's true for a
reason, right? The reason something is true is called a proof. And a simple version of what mathematicians do for a living is they find proofs, they find the reason that something is true.
Okay, what I was able to find, or construct, is a funny area of pure mathematics where things are true for no reason, they're true by accident. And that's why you can never find out what's going on,
you can never prove what's going on. More precisely, what I found in pure mathematics is a way to model or imitate, independent tosses of a fair coin. It's a place where God plays dice with
mathematical truth. It consists of mathematical facts which are so delicately balanced between being true or false that we're never going to know, and so you might as well toss a coin. You can't do
better than tossing a coin. Which means the chance is half you're going to get it right if you toss the coin and half you'll get it wrong, and you can't really do better than that.
So how do I find this complete lack of structure in an area of pure mathematics? Let me try to give you a quick summary. For those of you who may have heard about it, this is what I like to call Ω,
it's a real number, the halting probability.
Omega Number
"Halting Probability"
And some people are nice enough to call this "Chaitin's number". I call it Ω. So let me try to give you an idea of how you get to this number. By the way, to show you how much interest there is in Ω,
let me mention that this month there is a very nice article on Ω numbers by Jean-Paul Delahaye in the French popular science magazine Pour la Science, it's in the May 2002 issue.
Well, following Vladimir Tasic, Mathematics and the Roots of Postmodern Thought, the way you explain how to get to this number that shows that some mathematical facts are true for no reason, they're
only true by accident, is you start with an idea published by Émile Borel in 1927, of using one real number to answer all possible yes/no questions, not just mathematical questions, all possible yes/
no questions in English---and in Borel's case it was questions in French. How do you do it?
Well, the idea is you write a list of all possible questions. You make a list of all possible questions, in English, or in French. A first, a second, a third, a fourth, a fifth:
Question # 1
Question # 2
Question # 3
Question # 4
Question # 5
The general idea is you order questions say by size, and within questions of the same size, in some arbitrary alphabetical order. You number all possible questions.
And then you define a real number, Borel's number, it's defined like this:
Borel's Number
The Nth digit after the decimal point, d[N], answers the Nth question!
Well, you may say, most of these questions are going to be garbage probably, if you take all possible texts from the English alphabet, or French alphabet. Yes, but a digit has ten possibilities, so
you can let 1 mean the answer is yes, 2 mean the answer is no, and 3 mean it's not a valid yes/no question in English, because it's not valid English, or it is valid English, but it's not a question,
or it is a valid question, but it's not a yes/no question, for example, it asks for your opinion. There are various ways to deal with all of this.
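The 1/2/3 digit trick is easy to act out with a toy oracle. The oracle here is an invented stand-in answering True, False, or None; the real point, of course, is that no computable oracle exists for all questions:

```python
def borel_digits(questions, oracle):
    # Encode each answer as one decimal digit:
    # 1 = yes, 2 = no, 3 = not a valid yes/no question.
    return "".join({True: "1", False: "2", None: "3"}[oracle(q)]
                   for q in questions)

toy_answers = {"Is 7 prime?": True, "Is 8 prime?": False}
oracle = lambda q: toy_answers.get(q)  # None for unrecognized "questions"
print("0." + borel_digits(
    ["Is 7 prime?", "Is 8 prime?", "colorless green ideas?"], oracle))
# -> 0.123
```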
So you can do all this with one real number---and a real number is a number that's measured with infinite precision, with an infinite number of digits d[N] after the decimal point---you can give the
answers to all yes/no questions! And these will be questions about history, questions about philosophy, questions about mathematics, questions about physics.
It can do this because there's an awful lot you can put into a real number. It has an infinite amount of information, because it has an infinite number of digits. So this is a way to say that real
numbers are very unreal, right? So let's start with this very unreal number that answers all yes/no questions, and I'll get to my Ω number in a few steps.
The next step is to make it only answer questions about Turing's halting problem. So what's Turing's halting problem? Well, the halting problem is a famous question that Turing considered in 1936.
It's about as famous as Gödel's 1931 work, but it's different.
Turing's Halting Problem 1936
[1931 Gödel]
And what Turing showed is that there are limits to mathematical reasoning, but he did it very differently from Gödel, he found something concrete. He doesn't say "this statement is unprovable" like
Gödel, he found something concrete that mathematical reasoning can't do: it can't settle in advance whether a computer program will ever halt. This is the halting problem, and it's in a wonderful
paper, it's the beginning of theoretical computer science, and it was done before there were computers. And this is the Turing who then went on and did important things in cryptography during the
Second World War, and built computers after the war. Turing was a Jack of all trades.
So how do you prove Turing's result that there's no algorithm to decide if a computer program---a self-contained computer program---will ever halt? (Actually the problem is to decide that it will
never halt.) Well, that's not hard to do, in many different ways, and I sketched a proof before, when I was talking about proving that programs are elegant.
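One standard contradiction can even be acted out in Python: feed any candidate `halts()` its own "paradox" program and watch it get the answer wrong. Here the candidate that always answers "never halts" is refuted on the spot:

```python
def make_paradox(halts):
    # The classic construction: do the opposite of whatever
    # the candidate decider predicts about this very program.
    def paradox():
        if halts(paradox):
            while True:      # loop forever, contradicting halts() == True
                pass
        # else: fall through and halt, contradicting halts() == False
    return paradox

def always_no(program):      # a (doomed) candidate halting decider
    return False

p = make_paradox(always_no)
p()   # halts immediately -- so always_no was wrong about p
print("always_no misclassifies its own paradox program")
```

The same trap springs on any candidate decider, which is why no algorithm can settle the halting problem in general.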
So let's take Borel's real number, and let's change it so that it only answers instances of the halting problem. So you just find a way of numbering all possible computer programs, you pick some
fixed language, and you number all programs somehow: first program, second program, third program, you make a list of all possible computer programs in your mind, it's a mental fantasy.
Computer Program # 1
Computer Program # 2
Computer Program # 3
Computer Program # 4
Computer Program # 5
And then what you do is you define a real number whose Nth digit---well, let's make it binary now instead of decimal---whose Nth bit tells us if the Nth computer program ever halts.
Turing's Number
The Nth bit after the binary point, b[N], tells us if the Nth computer program ever halts.
So we've already economized a little, we've gone from a decimal number to a binary number. This number is between zero and one, and so is Borel's number, there's no integer part to this real number.
It's all in the fractional part. You have an infinite number of digits or bits after the decimal point or the binary point. In the previous number, Borel's original one, the Nth digit answers the Nth
yes/no question in French. And here the Nth bit of this new number, Turing's number, will be 0 if the Nth computer program never halts, and it'll be 1 if the Nth computer program does eventually halt.
So this one number would answer all instances of Turing's halting problem. And this number is uncomputable, Turing showed that in his 1936 paper. There's no way to calculate this number, it's an
uncomputable real number, because the halting problem is unsolvable. This is shown by Turing in his paper.
So what's the next step? This still doesn't quite get you to randomness. This number gets you to uncomputability. But it turns out this number, Turing's number, is redundant. Why is it redundant?
Well, the answer is that there's a lot of repeated information in the bits of this number. We can actually compress it more, we don't have complete randomness yet. Why is there a lot of redundancy?
Why is there a lot of repeated information in the bits of this number? Well, because different cases of the halting problem are connected. These bits b[N] are not independent of each other. Why?
Well, let's say you have K instances of the halting problem. That is to say, somebody gives you K computer programs and asks you to determine in each case, does it halt or not.
K instances of the halting problem ?
Is this K bits of mathematical information? K instances of the halting problem will give us K bits of Turing's number. Are these K bits independent pieces of information? Well, the answer is no, they
never are. Why not? Because you don't really need to know K yes/no answers, it's not really K full bits of information. There's a lot less information. It can be compressed. Why?
Well, the answer is very simple. If you have to ask God or an oracle that answers yes/no questions, you don't really need to ask K questions to the oracle, you don't need to bother God that much! You
really only need to know what? Well, it's sufficient to know how many of the programs halt.
And this is going to be a number between zero and K, a number that's between zero and K.
0 ≤ # that halt ≤ K
And if you write this number in binary it's really only about log₂ K bits.
# that halt ≈ log₂ K bits
If you know how many of these K programs halt, then what you do is you just start running them all in parallel until you find that precisely that number of programs have halted, and at that point you
can stop, because you know the other ones will never halt. And knowing how many of them halt is a lot less than K bits of information, it's really only about log₂ K bits, it's the number of bits
you need to be able to express a number between zero and K in binary, you see.
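The run-them-in-parallel trick can be sketched directly, with "programs" modeled as toy iterators (an invented stand-in for real programs, whose halting we obviously cannot decide in general):

```python
import itertools

def classify(programs, num_that_halt):
    # Run all K programs in parallel, round-robin, one step each.
    # Once `num_that_halt` of them have halted, the survivors never will.
    halted = set()
    live = dict(enumerate(iter(p) for p in programs))
    while len(halted) < num_that_halt:
        for i in list(live):
            try:
                next(live[i])            # advance this program one step
            except StopIteration:        # it halted
                halted.add(i)
                del live[i]
    return [i in halted for i in range(len(programs))]

def halts_after(k):
    return iter(range(k))        # toy program: halts after k steps

def runs_forever():
    return itertools.count()     # toy program: never halts

progs = [halts_after(3), runs_forever(), halts_after(10)]
print(classify(progs, num_that_halt=2))   # -> [True, False, True]
```

The single number `num_that_halt`, about log₂ K bits, is enough to settle all K yes/no questions.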
So different instances of the halting problem are never independent, there's a lot of redundant information, and Turing's number has a lot of redundancy. But essentially just by using this idea of
telling how many of them halt, you can squeeze out all the redundancy. You know, the way to get to randomness is to remove redundancy! You distill it, you concentrate it, you crystallize it. So what
you do is essentially you just take advantage of this observation---it's a little more sophisticated than that---and what you get is my halting probability.
So let me write down an expression for it. It's defined like this:
Omega Number
Ω = ∑_{p halts} 2^−|p|
|p| = size in bits of program p
0 < Ω < 1
Then write Ω in binary !
So this is how you get randomness, this is how you show that there are facts that are true for no reason in pure math. You define this number Ω, and to explain this I would take a long time and I
don't have it, so this is just a tease!
For more information you can go to my books. I actually have four small books published by Springer-Verlag on this subject: The Limits of Mathematics, The Unknowable, Exploring Randomness and
Conversations with a Mathematician. These books come with LISP software and a Java applet LISP interpreter that you can get at my website.
So you define this Ω number to be what? You pick a computer programming language, and you look at all programs p that halt, p is a program, and you sum over all programs p that halt. And what do you
sum? Well, if the program p is K bits long, it contributes 1/2^K, one over two to the K, to this halting probability.
In other words, each K-bit program has probability 1/2^K, and you'll immediately notice that there are 2^1000 thousand-bit programs, so probably this sum will diverge and give infinity,
if you're not careful. And the answer is yes, you're right if you worry about that. So you have to be careful to do things right, and the basic idea is that no extension of a valid program is a valid
program. And if you stipulate that the programming language is like that, that its programs are "self-delimiting", then this sum is in fact between zero and one and everything works. Okay?
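The self-delimiting condition is exactly what keeps the sum finite: for any prefix-free set of programs, Kraft's inequality says the contributions 2^−|p| total at most 1. A toy check:

```python
from fractions import Fraction

def is_prefix_free(codes):
    # Self-delimiting: no program is an extension of another program.
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def halting_mass(codes):
    # Each K-bit "program" contributes 2^-K, as in the definition of Ω.
    return sum(Fraction(1, 2 ** len(p)) for p in codes)

codes = ["0", "10", "110", "1110"]   # prefix-free: no code extends another
print(is_prefix_free(codes), halting_mass(codes))  # True 15/16, below 1
```

Drop the prefix-free requirement and the mass can blow up; with it, the sum over all halting programs is a genuine probability between zero and one.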
Anyway, I don't want to go into the details because I don't have time. So if you do everything right, this sum
∑_{p halts} 2^−|p|
actually converges to a number between zero and one which is the halting probability Ω. This is the probability that a program, each bit of which is generated by an independent toss of a fair coin,
eventually halts. And it's a way of summarizing all instances of the halting problem in one real number and doing it so cleverly that there's no redundancy.
So if you take this number and then you write it in binary, this halting probability, it turns out that those bits of this number written in binary, these are independent, irreducible mathematical
facts, there's absolutely no structure. Even though there's a simple mathematical definition of Ω, those bits, if you could see them, could not be distinguished from independent tosses of a fair
coin. There is no mathematical structure that you would ever be able to detect with a computer, there's no algorithmic pattern, there's no structure that you can capture with mathematical
proofs---even though Ω has a simple mathematical definition. It's incompressible, irreducible mathematical information. And the reason is, because if you knew the first N bits of this number Ω, it
would solve the halting problem for all programs up to N bits in size, it would enable you to answer the halting problem for all programs p up to N bits in size. That's how you prove that this Ω
number is random in the sense I explained before of being algorithmically incompressible information.
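That dovetailing argument can be acted out on a made-up finite "universe" of programs with known step counts, a stand-in for a real universal machine: run everything in parallel, and once the halting mass seen so far matches the first N bits of Ω, every small program still running can never halt:

```python
from fractions import Fraction

# Toy prefix-free universe: program -> steps to halt (None = never halts).
universe = {"0": 5, "10": None, "110": 12, "1110": 3}

omega = sum(Fraction(1, 2 ** len(p))
            for p, s in universe.items() if s is not None)
# Here omega = 1/2 + 1/8 + 1/16 = 11/16 = 0.1011 in binary.

def solve_halting(n_bits):
    # Knowing the first n_bits of omega, dovetail all programs until the
    # halting mass seen so far reaches that truncation.  Any unhalted
    # program of <= n_bits would then add >= 2^-n_bits, pushing omega
    # past its first n_bits -- impossible, so it never halts.
    truncation = Fraction(int(omega * 2 ** n_bits), 2 ** n_bits)
    halted, mass, t = set(), Fraction(0), 0
    while mass < truncation:
        t += 1
        for p, s in universe.items():
            if s is not None and s <= t and p not in halted:
                halted.add(p)
                mass += Fraction(1, 2 ** len(p))
    return {p: p in halted for p in universe if len(p) <= n_bits}

print(solve_halting(3))   # -> {'0': True, '10': False, '110': True}
```

In the real theory the universe of programs is infinite, but the logic is the same: N bits of Ω settle the halting problem for every program up to N bits in size.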
And that means that not only can't you compress it into a smaller algorithm, you can't compress it into fewer bits of axioms. So if you wanted to be able to determine K bits of Ω, you'd need K bits of axioms to be able to prove what those K bits are. It has---its bits have---no structure or pattern that we are capable of seeing.
However, you can prove all kinds of nice mathematical theorems about this Ω number. Even though it's a specific real number, it really mimics independent tosses of a fair coin. So for example you can
prove that 0's and 1's happen in the limit exactly fifty percent of the time, each of them. You can prove all kinds of statistical properties, but you can't determine individual bits!
So this is the strongest version I can come up with of an incompleteness result...
Actually, in spite of this, Cristian Calude, Michael Dinneen and Chi-Kou Shu at the University of Auckland have just succeeded in calculating the first 64 bits of a particular Ω number. The halting
probability Ω actually depends on the choice of computer or programming language that you write programs in, and they picked a fairly natural one, and were able to decide which programs less than 85
bits in size halt, and from this to get the first 64 bits of this particular halting probability.
This work by Calude et al. is reported on page 27 of the 6 April 2002 issue of the British science weekly New Scientist, and it's also described in Delahaye's article in the May 2002 issue of the
French monthly Pour la Science, and it'll be included in the second edition of Calude's book on Information and Randomness, which will be out later this year.
But this doesn't contradict my results, because all I actually show is that an N-bit formal axiomatic theory can't enable you to determine substantially more than N bits of the halting probability.
And by N-bit axiomatic theory I mean one for which there is an N-bit program for running through all possible proofs and generating all the theorems. So you might in fact be able to get some initial
bits of Ω.
Now, what would Hilbert, Gödel and Turing think about all of this?!
I don't know, but I'll tell you what I think it means, it means that math is different from physics, but it's not that different. This is called the quasi-empirical view of mathematics, and Tymoczko
has collected a bunch of interesting papers on this subject, in his book on New Directions in the Philosophy of Mathematics. This is also connected with what's called experimental mathematics, a
leading proponent of which is Jonathan Borwein, and there's a book announced called The Experimental Mathematician by Bailey, Borwein and Devlin that's going to be about this. The general idea is
that proofs are fine, but if you can't find a proof, computational evidence can be useful, too.
Now I'd like to tell you about some questions that I don't know how to answer, but that I think are connected with this stuff that I've been talking about. So let me mention some questions I don't
know how to answer. They're not easy questions.
Well, one question is positive results on mathematics:
Positive Results
Where do new mathematical concepts come from ?
I mean, Gödel's work, Turing's work and my work are negative in a way, they're incompleteness results, but on the other hand, they're positive, because in each case you introduce a new concept:
incompleteness, uncomputability and algorithmic randomness. So in a sense they're examples that mathematics goes forward by introducing new concepts! So how about an optimistic theory instead of
negative results about the limits of mathematical reasoning? In fact, these negative metamathematical results are taking place in a century which is a tremendous, spectacular success for mathematics,
mathematics is advancing by leaps and bounds. So there's no reason for pessimism. So what we need is a more realistic theory that gives us a better idea of why mathematics is doing so splendidly,
which it is. But I'd like to have some theoretical understanding of this, not just anecdotal evidence, like the book about the Wiles proof of Fermat's result. [Simon Singh, Fermat's Enigma; see also
the musical Fermat's Last Tango.]
So this is one thing that I don't know how to do and I hope somebody will do.
Another thing which I think is connected, isn't where new mathematical ideas come from, it's where do new biological organisms come from. I want a theory of evolution, biological evolution. [In
Chapter 12 of A New Kind of Science, Stephen Wolfram says that he thinks there is nothing to it, that you get life right away, we're just universal Turing machines, but I think there's more to it
than that.]
Biological Evolution
Where do new biological ideas come from ?
You see, in a way biological organisms are ideas, or genes are ideas. And good ideas get reused. You know, it's programming, in a way, biology.
Another question isn't theoretical evolutionary biology---which doesn't exist, but that is what I'd like to see---another question is where do new ideas come from, not just in math! Our new ideas.
How does the brain work? How does the mind work? Where do new ideas come from? So to answer that, you need to solve the problem of AI or how the brain works!
Where do new ideas come from ?
In a sense, where new mathematical concepts come from is related to this, and so is the question of the origin of new biological ideas, new genes, new ideas for building organisms---and the ideas
keep getting reused. That's how biology seems to work. Nature is a cobbler!---So I think these problems are connected, and I hope they have something to do with the ideas I mentioned, my ideas, but
perhaps not in the form that I've presented them here.
So I don't know how to answer these questions, but maybe some of you will be able to answer them. I hope so! The future is yours, do great things!
Thank you!
D. Bailey, J. Borwein, K. Devlin, The Experimental Mathematician, A. K. Peters, to appear.
C. Calude, Information and Randomness, Springer-Verlag, 2002.
G. J. Chaitin, "Information-theoretic computational complexity," IEEE Information Theory Transactions, 1974, pp. 10-15.
G. J. Chaitin, "Randomness and mathematical proof," "Randomness in arithmetic," Scientific American, May 1975, July 1988, pp. 47-52, 80-85.
G. J. Chaitin, The Limits of Mathematics, The Unknowable, Exploring Randomness, Conversations with a Mathematician, Springer-Verlag, 1998, 1999, 2001, 2002.
M. Chown, "Smash and grab," New Scientist, 6 April 2002, pp. 24-28.
J.-P. Delahaye, "Les nombres oméga," Pour la Science, May 2002, pp. 98-103.
G. Polya, How to Solve It, Princeton University Press, 1988.
J. Rosenblum, J. S. Lessner, Fermat's Last Tango, Original Cast Records OC-6010, 2001.
S. Singh, Fermat's Enigma, Walker and Co., 1997.
V. Tasic, Mathematics and the Roots of Postmodern Thought, Oxford University Press, 2001.
T. Tymoczko, New Directions in the Philosophy of Mathematics, Princeton University Press, 1998.
D. Wells, The Penguin Dictionary of Curious and Interesting Numbers, Penguin Books, 1986.
S. Wolfram, A New Kind of Science, Wolfram Media, 2002.
Manhattan GMAT Challenge Problem of the Week – 21 Jan 10
Welcome back to this week’s Challenge Problem! As always, the problem and solution below were written by one of our fantastic instructors. Each challenge problem represents a 700+ level question. If
you are up for the challenge, however, set your timer for 2 minutes and go!
What is the 18th digit to the right of the decimal point in the decimal expansion of 1/37?
(A) 0
(B) 2
(C) 4
(D) 7
(E) 9
There is no way to intuit the answer; we must simply divide 1 by 37. However, we can be certain that we will be able to spot a pattern. The GMAT would not require us to compute all the way to the
18th digit. Moreover, every fraction with integers on top and bottom can be expanded into a decimal that either terminates (e.g., 1/4 = 0.25) or repeats. For instance, 1/3 = 0.333…, with an infinite
series of repeating 3’s. Likewise, 1/11 = 0.090909…, with an infinite series of 09’s.
Since the denominator of the fraction given in the problem is 37, we know that the fraction will repeat. (For a fraction to terminate, the denominator, after reducing, must have only 2’s or 5’s or
both as factors.) So we should do the long division, looking for the repeating cycle.
Perform the long division. You will probably have to experiment with the multiples of 37 to discover that 7 × 37 = 259.
Once we get back to a remainder of 1, then we know the cycle will start all over again. Therefore, 1/37 = 0.027027…, with a repeating cycle of 027. This cycle contains 3 digits, so every third digit
will be the same. For instance, 7 will be the 3rd digit, the 6th digit, the 9th digit, and so forth in the decimal expansion. Since 18 is also a multiple of 3 (like 3, 6, and 9), the digit 7 will be
in the 18th position as well.
The correct answer is (D) 7.
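As a sanity check on the long division (not part of the original solution), the digit-by-digit procedure is easy to mechanize; the function name here is mine.

```python
# Long division, one decimal digit at a time: the next digit is
# (10 * remainder) // divisor and the new remainder is
# (10 * remainder) % divisor.
def decimal_digits(numerator, divisor, n):
    digits, r = [], numerator % divisor
    for _ in range(n):
        r *= 10
        digits.append(r // divisor)
        r %= divisor
    return digits

digits = decimal_digits(1, 37, 18)
print(digits)       # the cycle 0, 2, 7 repeated six times
print(digits[17])   # 7, the 18th digit to the right of the decimal point
```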
To view the current Challenge Problem, simply visit the Challenge Problem page on Manhattan GMAT’s website.
The author Michael Dinerstein gets email notifications for all questions or replies to this post.
A Course in the Geometry of n Dimensions
Results 1 - 10 of 13
, 2003
"... In this paper, we study a multiple antenna system where the transmitter is equipped with quantized information about instantaneous channel realizations. Assuming that the transmitter uses the
quantized information for beamforming, we derive a universal lower bound on the outage probability for any f ..."
Cited by 188 (13 self)
In this paper, we study a multiple antenna system where the transmitter is equipped with quantized information about instantaneous channel realizations. Assuming that the transmitter uses the
quantized information for beamforming, we derive a universal lower bound on the outage probability for any finite set of beamformers. The universal lower bound provides a concise characterization of
the gain with each additional bit of feedback information regarding the channel. Using the bound, it is shown that finite information systems approach the perfect information case as (t − 1)2^(−B/(t−1)), where B
is the number of feedback bits and t is the number of transmit antennas. The geometrical bounding technique, used in the proof of the lower bound, also leads to a design criterion for good
beamformers, whose outage performance approaches the lower bound. The design criterion minimizes the maximum inner product between any two beamforming vectors in the beamformer codebook, and is
equivalent to the problem of designing unitary space time codes under certain conditions. Finally, we show that good beamformers are good packings of 2-dimensional subspaces in a 2t-dimensional real
Grassmannian manifold with chordal distance as the metric.
- Journal of Econometrics , 1994
"... Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance
that they have found the global, rather than a local, optimum. We test a new optimization algorithm, simu ..."
Cited by 126 (1 self)
Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance that
they have found the global, rather than a local, optimum. We test a new optimization algorithm, simulated annealing, on four econometric problems and compare it to three common conventional
algorithms. Not only can simulated annealing find the global optimum, it is also less likely to fail on difficult functions because it is a very robust algorithm. The promise of simulated annealing
is demonstrated on the four econometric problems.
- J. Theor. Biol , 1991
"... Schneider, T. D. (1991). Theory of molecular machines. I. Channel capacity ..."
, 2006
"... on the authority of the Rector Magnificus of Wageningen University, ..."
, 1993
"... this paper is to examine key aspects of simulating this case. CORRELATION AND GEOMETRY Correlation is largely perceived to be a statistical phenomena. It is. But it is also a geometric phenomena
(Herr, 1980). To see this, we must first view the data in vector form (Halmos, 1974). Let x i be the vect ..."
Cited by 2 (1 self)
this paper is to examine key aspects of simulating this case. CORRELATION AND GEOMETRY Correlation is largely perceived to be a statistical phenomena. It is. But it is also a geometric phenomena
(Herr, 1980). To see this, we must first view the data in vector form (Halmos, 1974). Let x i be the vector x i =
, 1999
"... Quaternion splines are a classical tool for orientation control in computer animation and robotics. In this paper, we design a rational quaternion spline with many desirable properties: it is a
fully general NURBS curve of arbitrary continuity; it has a closed form algebraic description, leading ..."
Cited by 1 (0 self)
Quaternion splines are a classical tool for orientation control in computer animation and robotics. In this paper, we design a rational quaternion spline with many desirable properties: it is a fully
general NURBS curve of arbitrary continuity; it has a closed form algebraic description, leading to simple derivative computation; its construction is an efficient generalization of classical
interpolation techniques in Euclidean space, leading to simple implementation and easy incorporation into existing NURBS-based modelers; it is coordinate-frame invariant; and it is of high quality.
- 30th International Conference on Machine Learning (ICML 2013), JMLR W&CP , 2013
"... We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomlyprojected data. We make no restrictive assumptions (such as
sparsity or separability) on the data: Instead we use the fact that, in a classification setting, the questi ..."
Cited by 1 (1 self)
We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomlyprojected data. We make no restrictive assumptions (such as
sparsity or separability) on the data: Instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class
labels? ’ and we therefore derive the exact probability of ‘label flipping ’ under Gaussian random projection in order to quantify this effect precisely in our bounds. 1.
, 1990
"... functions with simulated annealing* ..."
, 1997
"... ... kind of problem encompasses two main entities: the field production system and the geological reservoir. Each of these entities presents a wide set of decision variables and the choice of
their values is an optimization problem. In view of the large number of decision variables it is infeasible ..."
... kind of problem encompasses two main entities: the field production system and the geological reservoir. Each of these entities presents a wide set of decision variables and the choice of their
values is an optimization problem. In view of the large number of decision variables it is infeasible to try to enumerate all possible combinations. Analysis tools encoded in computer programs can
spend hours or days of processing for a single run, depending on their sophistication and features. Also, it can be costly to prepare the input data if many hypotheses are going to be considered and
if it is desirable to allow the parameters to vary. A typical
The Pied Piper of Hamelin
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This problem, based on the well-known story, opens the door to a whole realm of mathematical calculations that can be explored with or without a spreadsheet. It also gives opportunities for pupils to create
further questions to answer.
Possible approach
If possible, it would be good to read a version of The Pied Piper of Hamelin with the children so that they are familiar with the story before starting this investigation.
On a second reading, you could use the story to talk about the number of legs at particular times. You could also pose some theoretical questions, such as asking the children to imagine you've opened
the book at a page which had 10 legs on it in total. How many people and how many rats could there have been? Learners could work on this in pairs using mini-whiteboards and then you can talk about
the possibilities as a whole group. This will lead into a general chat about the number of animals/people and how the number of each affects the other.
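For the teacher's own reference, the 10-legs question can be enumerated in a few lines, assuming 2 legs per person and 4 per rat (the code is illustrative, not part of the activity):

```python
# Every (people, rats) pair giving exactly 10 legs in total.
total = 10
pairs = [(people, rats)
         for people in range(total // 2 + 1)
         for rats in range(total // 4 + 1)
         if 2 * people + 4 * rats == total]
print(pairs)   # [(1, 2), (3, 1), (5, 0)]
```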
You might also want to spend some time sharing ways of recording what the children are doing. Some might be drawing pictures or symbols for the rats/people, others might be recording numbers only. It
is worth talking about the different ways and the advantages/disadvantages of each. You may find that after some discussion, a few children adopt a different way of recording to the one they started with.
Key questions
How many legs do your rats have?
What could you replace a rat with?
Can you tell me about the way you are working out so many possibilities?
(And for the pupils who have gone much further)
What have you noticed about all your results so far?
Can you explain why . . . . . has happened?
Possible extension
Look at animals with other numbers of legs and perhaps three types of different-legged animals at the same time - eg. birds, spiders and pigs.
Possible support
Some models, toys or pictures representing the different animals may help some pupils to get started.
Supplemental Materials for "Dynamic Factor Graphs for Time Series Modeling"
The 5 videos below compare nearest neighbor matching and Dynamic Factor Graphs for the estimation of missing joint angles from motion capture marker data (test sequence of 260 frames at 30Hz).
4 sets of joint angles were missing: 2 sequences of 65 frames, of either left leg or the entire upper body (missing joints are displayed in red on specific frames).
Original motion capture sequence: MoCap_Walk2.mpeg.
Reconstruction of two sequences with missing left leg using nearest neighbors: MoCap_Walk2_NN_ReconstructedLeg.mpeg.
Reconstruction of two sequences with missing upper body using nearest neighbors: MoCap_Walk2_NN_ReconstructedUp.mpeg.
Reconstruction of two sequences with missing left leg using a DFG: MoCap_Walk2_DFG_ReconstructedLeg.mpeg.
Reconstruction of two sequences with missing upper body using a DFG: MoCap_Walk2_DFG_ReconstructedUp.mpeg.
prime subfield
March 7th 2009, 11:07 AM
prime subfield
Let $F$ be a field and let $E$ be the prime subfield of $F$. Prove that every automorphism of $F$ fixes $E$.
March 7th 2009, 02:04 PM
If $\sigma: F\to F$ is an automorphism then $\sigma (1) = 1$.
Let $n\cdot 1 = \underbrace{1 + 1 + \cdots + 1}_{n\text{ times}}$ for $n>0$.
If $n=0$ then let $n\cdot 1 = 0$ and if $n<0$ then $n\cdot 1 = - [(-n)\cdot 1]$.
Now let $D = \{ n\cdot 1 | n \in \mathbb{Z} \}$ then $\sigma (x) = x$ for all $x\in D$.
To see this notice $\sigma (1+...+1) = \sigma (1) + ... + \sigma(1) = n\cdot \sigma(1) = n\cdot 1$.
This is because $\sigma (a+b) = \sigma(a) + \sigma(b)$ and $\sigma (-a) = -\sigma(a)$ since $\sigma$ is automorphism.
The prime subfield is $E = \{ \alpha/\beta | \alpha,\beta \in D , \beta \neq 0 \}$.
Notice that $\sigma (x) = x$ for all $x\in E$.
To see this notice, $\sigma (\alpha/\beta) = \sigma(\alpha)/\sigma(\beta) = \alpha/\beta$.
March 7th 2009, 05:35 PM
I understand the first two cases. But why is that the third case? How about $\{(n\cdot 1)^{-1}| n \in \mathbb{Z}\}$?
March 7th 2009, 05:50 PM
March 7th 2009, 06:23 PM
March 7th 2009, 06:58 PM
The prime subfield is the intersection of all subfields i.e. it is the smallest subfield. If $E$ is a subfield then it means $1\in E$ but then $n\cdot 1\in E$ because $E$ is closed under addition
and it has $0$ and it has all additive inverses. Therefore $D$ must be contained in $E$. However, $E$ is a field and so $\alpha/\beta \in E$ for all $\alpha,\beta \in D, \beta \neq 0$. Therefore,
the set of fractions is contained in $E$, but this set of fractions is itself a field and so $E$ is in fact the set of all these fractions that we can form.
Boston SAT Math Tutor
Find a Boston SAT Math Tutor
...My aural skills are my highest level musical skill I have and I have tons of strategies that can be used by anyone to achieve high level results. My strong collegiate choral background
(membership in three different choirs that have performed nationally and internationally) has fortified those s...
16 Subjects: including SAT math, writing, geometry, algebra 1
...I taught at the International Learning Center at the Boston YMCA, at Roxbury Community College, at Catholic Charities, and, presently, at the Jewish Vocational Center in downtown Boston. In
college, I had a second minor in speech and worked with W. Norwood Brigance, one of the most renowned teachers and authors in the field.
24 Subjects: including SAT math, English, reading, writing
...I have always been invested in the math and sciences, and I felt as though I could make a much greater impact by pursuing something that I loved to learn about. Thus, I moved to Boston and
began a post-baccalaureate program at Harvard University Extension School. Here I completed coursework in ...
15 Subjects: including SAT math, chemistry, physics, English
...Teachers and professors can get caught up using too much jargon which can confuse students. I find real life examples and a crystal clear explanation are crucial for success. My schedule is
flexible as I am a part time graduate student.
19 Subjects: including SAT math, Spanish, chemistry, calculus
...I am a patient and encouraging teacher, and am used to helping students who are struggling in a subject that, besides being inherently difficult, does not come naturally to them. My work as a
high school teacher in particular has given me a rare opportunity to develop a pedagogy suited for strug...
9 Subjects: including SAT math, physics, calculus, geometry
Problem Set 4
Assigned: March 23
Due: April 6
Problem 1
Construct a resolution proof of sentence (F) from Problem set 3, Problem 3, sentences A-E. You should show (a) the Skolemized form of the axioms and of the negated goal; (b) each of the resolutions
involved in the proof. You need not show the intermediate stages of the conversion to clausal form, or resolutions that a theorem prover might carry out but are not part of the final proof.
Note: The solutions to problem set 3 will be posted on or about March 30. You should check that you are starting from the right first-order encoding of the sentences.
Converting the sentences to clausal form gives the following.
A.1. W(A,K).
A.2. W(A,G).
B. ~B(T,p1) V M(p1,SK1(p1)).
C.1. ~W(p1,s) V D(p1,s).
C.2. ~W(p1,s) V ~M(p1,p2) V D(p2,s).
D. B(T,A).
E. ~M(p1,p2) V ~W(p1,G) V ~W(p2,G).
Negation of F: W(p,G) V ~D(p,G).
Resolution proof:
Resolving B with D with binding p1=A gives
(G) M(A,SK1(A)).
Resolving C.2 with A.2 with binding p1=A, s=G gives
(H) ~M(A,p2) V D(p2,G).
Resolving H with G with binding p2=SK1(A) gives
(I) D(SK1(A),G).
Resolving G with E with binding p1=A, p2=SK1(A) gives
(J) ~W(A,G) V ~W(SK1(A),G).
Resolving A.2 with J gives
(K) ~W(SK1(A),G).
Resolving F with I with binding p=SK1(A) gives
(L) W(SK1(A),G).
Resolving L with K gives the null clause.
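The bindings quoted at each resolution step come from unifying a literal of one clause with the complementary literal of the other. A minimal unifier sufficient for these flat literals (my own sketch, with no occurs check, not part of the assigned solution) might look like:

```python
# Minimal unification for the flat literals in the proof: lowercase
# strings (p1, p2, s) are variables, uppercase strings (A, G) are
# constants, and the Skolem term SK1(A) is the tuple ("SK1", "A").
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify(xs, ys, subst=None):
    subst = dict(subst or {})
    for x, y in zip(xs, ys):
        x, y = subst.get(x, x), subst.get(y, y)   # dereference bindings
        if x == y:
            continue
        if is_var(x):
            subst[x] = y
        elif is_var(y):
            subst[y] = x
        elif isinstance(x, tuple) and isinstance(y, tuple) and x[0] == y[0]:
            subst = unify(x[1:], y[1:], subst)    # recurse into the term
            if subst is None:
                return None
        else:
            return None
    return subst

# Step (H): unify W(p1, s) from C.2 with W(A, G) from A.2
print(unify(("p1", "s"), ("A", "G")))             # {'p1': 'A', 's': 'G'}
# Step (I): unify M(A, p2) from H with M(A, SK1(A)) from G
print(unify(("A", "p2"), ("A", ("SK1", "A"))))    # {'p2': ('SK1', 'A')}
```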
Problem 2
Consider the following knowledge base:
1. parent(ed,anne).
2. parent(mary,anne)
3. parent(ed,bill)
4. parent(mary,bill)
5. parent(anne,george)
6. parent(bill,harry)
7. male(ed).
8. female(mary).
9. painter(mary).
10. soldier(george).
10.A. male(bill).
11. parent(X,Y) => child(Y,X).
12. child(X,Y) ^ male(X) => son(X,Y).
13. child(X,Y) ^ female(X) => daughter(X,Y).
14. parent(X,Y) => ancestor(X,Y)
15. parent(X,Y) ^ ancestor(Y,Z) => ancestor(X,Z).
What new facts can be inferred using forward chaining? (You should explain how facts are combined with the rules to get new facts; e.g.
Combining rule 11 with fact 1 under the substitution X=ed Y=anne gives the new fact
18. child(anne,ed).
Answer: Using the algorithm presented in class, in which one fact is resolved with one rule at each steps gives the following:
Combining rule 11 with fact 2 under the substitution X=mary, Y=anne gives the new fact
19. child(anne,mary).
Combining rule 11 with fact 3 under the substitution X=ed, Y=bill gives the new fact
20. child(bill,ed).
Combining rule 11 with fact 4 under the substitution X=mary, Y=bill gives the new fact
21. child(bill,mary).
Combining rule 11 with fact 5 under the substitution X=anne, Y=george gives the new fact
22. child(george,anne).
Combining rule 11 with fact 6 under the substitution X=bill, Y=harry gives the new fact
23. child(harry,bill).
Combining rule 12 with fact 18 under the substitution X=anne, Y=ed gives the new rule
24. male(anne) => son(anne,ed).
Combining rule 12 with fact 19 under the substitution X=anne, Y=mary gives the new rule
25. male(anne) => son(anne,mary).
Combining rule 12 with fact 20 under the substitution X=bill, Y=ed gives the new rule
26. male(bill) => son(bill,ed).
Combining rule 12 with fact 21 under the substitution X=bill, Y=mary gives the new rule
27. male(bill) => son(bill,mary).
Combining rule 12 with fact 22 under the substitution X=george, Y=anne gives the new rule
28. male(george) => son(george,anne).
Combining rule 12 with fact 23 under the substitution X=harry, Y=bill gives the new rule
29. male(harry) => son(harry,bill).
Combining rule 13 with fact 18 under the substitution X=anne, Y=ed gives the new rule
30. female(anne) => daughter(anne,ed).
Combining rule 13 with fact 19 under the substitution X=anne, Y=mary gives the new rule
31. female(anne) => daughter(anne,mary).
Combining rule 13 with fact 20 under the substitution X=bill, Y=ed gives the new rule
32. female(bill) => daughter(bill,ed).
Combining rule 13 with fact 21 under the substitution X=bill, Y=mary gives the new rule
33. female(bill) => daughter(bill,mary).
Combining rule 13 with fact 22 under the substitution X=george, Y=anne gives the new rule
34. female(george) => daughter(george,anne).
Combining rule 13 with fact 23 under the substitution X=harry, Y=bill gives the new rule
35. female(harry) => daughter(harry,bill).
Combining fact 1 with rule 14 under binding X=ed, Y=anne gives the new fact
36. ancestor(ed,anne)
Combining fact 2 with rule 14 under binding X=mary, Y=anne gives the new fact
37. ancestor(mary,anne)
Combining fact 3 with rule 14 under binding X=ed, Y=bill gives the new fact
38. ancestor(ed,bill)
Combining fact 4 with rule 14 under binding X=mary, Y=bill gives the new fact
39. ancestor(mary,bill)
Combining fact 5 with rule 14 under binding X=anne, Y=george gives the new fact
40. ancestor(anne,george).
Combining fact 6 with rule 14 under binding X=bill, Y=harry gives the new fact
41. ancestor(bill,harry).
Combining fact 1 with rule 15 under binding X=ed, Y=anne gives the new rule
42. ancestor(anne,Z) => ancestor(ed,Z).
Combining fact 2 with rule 15 under binding X=mary, Y=anne gives the new rule
43. ancestor(anne,Z) => ancestor(mary,Z).
Combining fact 3 with rule 15 under binding X=ed, Y=bill gives the new rule
44. ancestor(bill,Z) => ancestor(ed,Z).
Combining fact 4 with rule 15 under binding X=mary, Y=bill gives the new rule
45. ancestor(bill,Z) => ancestor(mary,Z).
Combining fact 5 with rule 15 under binding X=anne, Y=george gives the new rule
46. ancestor(george,Z) => ancestor(anne,Z).
Combining fact 6 with rule 15 under binding X=bill, Y=harry gives the new rule
47. ancestor(harry,Z) => ancestor(bill,Z).
Combining fact 10.A with rule 26 gives the new fact
48. son(bill,ed).
Combining fact 10.A with rule 27 gives the new fact
49. son(bill,mary).
Combining fact 40 with rule 42 under binding Z=george gives the new fact
50. ancestor(ed,george)
Combining fact 40 with rule 43 under binding Z=george gives the new fact
51. ancestor(mary,george)
Combining fact 41 with rule 44 under binding Z=harry gives the new fact
52. ancestor(ed,harry)
Combining fact 41 with rule 45 under binding Z=harry gives the new fact
53. ancestor(mary,harry)
The forward chaining algorithm described in the textbook (fig 9.3) uses a more efficient but more complicated algorithm, technically called "hyperresolution", in which one matches all the conditions
on the left hand side against facts, and then infers the right-hand side. This cuts way down on the number of inferences that are made, but involves more complex matching. If this is used on the
assignment, then inferences 24-35 and 42-47 go away, and inferences 48 to 53 have the following justifications:
Combining facts 20 and 10.A with rule 12 under the substitution X=bill, Y=ed gives
48. son(bill,ed)
Combining facts 21 and 10.A with rule 12 under the substitution X=bill, Y=mary gives
49. son(bill,mary)
Combining facts 1 and 40 with rule 15 under the substitution X=ed, Y=anne, Z=george gives
50. ancestor(ed,george).
Combining facts 2 and 40 with rule 15 under the substitution X=mary, Y=anne, Z=george gives
51. ancestor(mary,george).
Combining facts 3 and 41 with rule 15 under the substitution X=ed, Y=bill, Z=harry gives
52. ancestor(ed,harry).
Combining facts 4 and 41 with rule 15 under the substitution X=mary, Y=bill, Z=harry gives
53. ancestor(mary,harry).
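As a concrete illustration of the hyperresolution-style step, here is a minimal forward chainer in Python (our sketch, not the textbook's Figure 9.3 code; facts are tuples, variables are capitalized strings, and the KB names mirror the assignment's rules 14 and 15):

```python
def is_var(t):
    # variables are capitalized strings, e.g. "X"; constants are lowercase
    return isinstance(t, str) and t[0].isupper()

def unify(pattern, fact, binding):
    """Extend binding so that pattern matches fact, or return None."""
    if len(pattern) != len(fact):
        return None
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def forward_chain(facts, rules):
    """Run rules to a fixed point. Each rule is (premises, conclusion);
    ALL premises are matched against known facts before the conclusion
    is asserted -- the hyperresolution-style step described above."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            bindings = [{}]
            for prem in premises:
                bindings = [b2 for b in bindings for fact in facts
                            if (b2 := unify(prem, fact, b)) is not None]
            for b in bindings:
                new = tuple(b.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# The family KB: rule 14 (parent => ancestor) and
# rule 15 (parent ^ ancestor => ancestor), run over the parent facts.
family_facts = [("parent", "ed", "anne"), ("parent", "mary", "anne"),
                ("parent", "ed", "bill"), ("parent", "mary", "bill"),
                ("parent", "anne", "george"), ("parent", "bill", "harry")]
family_rules = [([("parent", "X", "Y")], ("ancestor", "X", "Y")),
                ([("parent", "X", "Y"), ("ancestor", "Y", "Z")],
                 ("ancestor", "X", "Z"))]
derived = forward_chain(family_facts, family_rules)
# derived contains, among others, inferences 50-53 above,
# e.g. ("ancestor", "ed", "george") and ("ancestor", "mary", "harry")
```

Because every premise is matched in one step, the partially instantiated rules 42-47 of the naive trace never need to be created.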
Problem 3
Show how backward chaining can be used to prove the goals:
(A) son(Q,mary).
(B) ancestor(mary,R) ^ soldier(R).
You should trace paths that do not lead to a solution as well as those that do. Assume that backward chaining is executed in Prolog order. Your trace should follow the form in the backward chaining
G1: ?- son(Q,mary). Resolve with Rule 12
G2: ?- child(Q,mary), male(Q). Resolve with rule 11
G3: ?- parent(mary,Q), male(Q) Resolve with fact 2.
G4: ?- male(anne). Fail. Return to G3.
G3: ?- parent(mary,Q), male(Q) Resolve with fact 4.
G5: ?- male(bill). Resolve with fact 10.A.
Succeed with Q=bill
G3: Succeed with Q=bill
G2: Succeed with Q=bill
G1: Succeed with Q=bill
G1: ?- ancestor(mary,R),soldier(R). Resolve with rule 14
G2:?- parent(mary,R),soldier(R). Resolve with fact 2
G3: ?- soldier(anne). Fail. return to G2.
G2: ?- parent(mary,R),soldier(R). Resolve with fact 4
G4: ?- soldier(bill). Fail. return to G2.
G2: Fail. Return to G1
G1: ?- ancestor(mary,R),soldier(R). Resolve with rule 15
G5: ?- parent(mary,Y1),ancestor(Y1,R),soldier(R). Resolve with fact 2
G6: ?- ancestor(anne,R), soldier(R). Resolve with rule 14
G7: ?- parent(anne,R), soldier(R). Resolve with fact 5.
G8: ?- soldier(george). Resolve with fact 10. Succeed.
G7: Succeed with R=george.
G6: Succeed with R=george.
G5: Succeed with R=george.
G1: Succeed with R=george.
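The Prolog-order search traced above can be sketched as a small backward chainer in Python (our illustrative implementation; the facts and rules loosely mirror the assignment's KB, and clause order determines the backtracking order just as in the traces):

```python
import itertools

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    # follow variable bindings in substitution s to their value
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

fresh = itertools.count()

def rename(term, mapping):
    # standardize a rule's variables apart before each use
    if is_var(term):
        mapping.setdefault(term, f"{term}_{next(fresh)}")
        return mapping[term]
    if isinstance(term, tuple):
        return tuple(rename(t, mapping) for t in term)
    return term

facts = [("parent", "mary", "anne"), ("parent", "mary", "bill"),
         ("parent", "anne", "george"), ("male", "bill"),
         ("soldier", "george")]
rules = [([("parent", "Y", "X")], ("child", "X", "Y")),              # ~rule 11
         ([("child", "X", "Y"), ("male", "X")], ("son", "X", "Y")),  # ~rule 12
         ([("parent", "X", "Y")], ("ancestor", "X", "Y")),           # ~rule 14
         ([("parent", "X", "Y"), ("ancestor", "Y", "Z")],
          ("ancestor", "X", "Z"))]                                   # ~rule 15

def solve(goals, s):
    """Yield substitutions proving all goals, depth-first in Prolog order."""
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for fact in facts:                     # try facts, then rules, in order
        s2 = unify(goal, fact, s)
        if s2 is not None:
            yield from solve(rest, s2)
    for premises, conclusion in rules:
        m = {}
        s2 = unify(goal, rename(conclusion, m), s)
        if s2 is not None:
            yield from solve([rename(p, m) for p in premises] + rest, s2)

sA = next(solve([("son", "Q", "mary")], {}))                          # 3(A)
sB = next(solve([("ancestor", "mary", "R"), ("soldier", "R")], {}))   # 3(B)
# walk("Q", sA) gives "bill"; walk("R", sB) gives "george", as in the traces
```

The failed branches in the traces (e.g. male(anne), soldier(anne), soldier(bill)) correspond to dead-end substitutions that `solve` silently backtracks past before yielding the first answer.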
Lesson 39: Equations with Algebraic Fractions
The lesson begins with using the LCD (least common denominator) to clear the fractions in the equation; a review of proportions then follows to motivate the "shortcut" of cross multiplication for
equations containing only a fraction on each side. The possible existence of extraneous solutions and solving the equations graphically are discussed before application problems are presented. The
lesson concludes with exercises in manipulating formulas.
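As a concrete illustration of the extraneous-solution issue mentioned above (the equation is our example, not taken from the lesson):

```python
# Clearing the fractions in x/(x-3) = 3/(x-3) + 2 by multiplying both
# sides by the LCD (x - 3) gives x = 3 + 2*(x - 3), whose only solution
# is x = 3. But x = 3 zeroes the original denominators, so it must be
# rejected as extraneous.

def check_solution(x):
    """Return True if x satisfies the ORIGINAL fractional equation."""
    try:
        lhs = x / (x - 3)
        rhs = 3 / (x - 3) + 2
        return abs(lhs - rhs) < 1e-9
    except ZeroDivisionError:
        return False   # outside the domain: candidate is extraneous

x = 3
assert x == 3 + 2 * (x - 3)    # x = 3 does solve the cleared equation...
print(check_solution(x))       # ...but prints False: it is extraneous
```

This is why checking candidates against the original equation (or its graph) is a necessary final step.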
Algebraic and Geometric Topology 4 (2004), paper no. 32, pages 721-755.
Peripheral separability and cusps of arithmetic hyperbolic orbifolds
D.B. McReynolds
Abstract. For X = R, C, or H it is well known that cusp cross-sections of finite volume X-hyperbolic (n+1)-orbifolds are flat n-orbifolds or almost flat orbifolds modelled on the (2n+1)-dimensional
Heisenberg group N_{2n+1} or the (4n+3)-dimensional quaternionic Heisenberg group N_{4n+3}(H). We give a necessary and sufficient condition for such manifolds to be diffeomorphic to a cusp
cross-section of an arithmetic X-hyperbolic (n+1)-orbifold. A principal tool in the proof of this classification theorem is a subgroup separability result which may be of independent interest.
Keywords. Borel subgroup, cusp cross-section, hyperbolic space, nil manifold, subgroup separability.
AMS subject classification. Primary: 57M50. Secondary: 20G20.
DOI: 10.2140/agt.2004.4.721
E-print: arXiv:math.GT/0409278
Submitted: 2 April 2004. (Revised: 24 August 04.) Accepted: 3 September 2004. Published: 11 September 2004.
D.B. McReynolds
University of Texas, Austin, TX 78712, USA
Email: dmcreyn@math.utexas.edu
Sunol Precalculus Tutors
...Thank you for your interest. ---------- I'm a graduate of Berkeley (BA, biochemistry) and Stanford (PhD, immunology). So, which side should I sit on at the Big Game? I have been a college-level
teaching assistant. I have tutored many students over the years, including my son, who is now in the twelfth grade.
17 Subjects: including precalculus, chemistry, writing, geometry
I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra,
trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years.
11 Subjects: including precalculus, calculus, statistics, geometry
...My passion for teaching started before I was born: my father is a lifelong quantum physics professor (now retired) and my late mother taught in high school. One day when I was a child, my dad
proudly said he had begun teaching my brother when he was three. I cried out, "I'm four and a half already!"
8 Subjects: including precalculus, physics, geometry, calculus
...I tutor middle school and high school math students. I can also teach Chinese at all levels. I am patient and kind.
11 Subjects: including precalculus, calculus, statistics, geometry
...I guide them to find the correct answers by themselves. So when they are in the test or exam, they can solve any problems. I show them the meaning behind each of exercises.
13 Subjects: including precalculus, calculus, algebra 1, algebra 2
Mplus Discussion >> Sample selction
Sanjoy Bhattacharjee posted on Wednesday, April 09, 2008 - 1:18 pm
Prof. Muthen,
I have the following model to estimate
Y2, Y3, Y4 = g(eta1) (g is a vector function)
eta1 = h(X2)
Y1 = f(X1, eta1)
Where eta1 is a continuous latent variable and Y2, Y3, Y4 are ordinal indicators capturing eta1. Y1 is BINARY and the variable of primary interest. X’s are exogenous variables.
We have a sample selection issue. I was wondering whether it would be OK to use the IMR (inverse Mills ratio) as one of the explanatory variables in the Y1 equation and then use WLSMV. I am
calculating the IMR from the selection model, Z = z(X4).
Could you kindly suggest me some related references?
Thanking you.
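For readers following along: in a Heckman-style setup the inverse Mills ratio is phi(z)/Phi(z), the standard normal density over its CDF, evaluated at the fitted probit selection index. A minimal sketch (our code, independent of Mplus):

```python
import math

def inverse_mills_ratio(z):
    """phi(z) / Phi(z) at the fitted probit selection index z."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pdf / cdf

# Each observation's IMR would be appended to the data set and entered as
# an extra explanatory variable in the Y1 equation, as proposed above.
```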
Linda K. Muthen posted on Thursday, April 10, 2008 - 8:59 am
This sounds like a reasonable approach. You may want to model the heteroscedasticity of your y1 residuals.
Sanjoy Bhattacharjee posted on Thursday, April 10, 2008 - 9:34 am
Thank you Madam.
Checking heteroscedasticity in equation Y1 is in fact a good idea.
Could you kindly suggest me an example to handle heteroscedasticity using MPlus.
Bengt O. Muthen posted on Thursday, April 10, 2008 - 5:31 pm
I can think of 2 approaches that could be useful. One draws on UG ex3.9. The other draws on UG ex5.3 with CONSTRAINT = x, where the heteroscedasticity is a function of the covariate x. But you would
have to explore.
Patchara Popaitoon posted on Thursday, September 08, 2011 - 8:29 pm
Dear Prof. Muthen,
I have a simple question. I would like to select only some groups of the sample from the main dataset and save it for another analysis. Could you please advise which command I can use? Thanks.
Linda K. Muthen posted on Friday, September 09, 2011 - 6:55 am
You can select observations using the USEOBSERVATIONS option of the VARIABLE command. If you want to save that data set, use the FILE option of the SAVEDATA command.
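Putting the two options together, a minimal input sketch might look like this (file and variable names are placeholders, and the exact syntax should be checked against the Mplus User's Guide):

```text
DATA:      FILE IS batch1.dat;
VARIABLE:  NAMES ARE w1 y1-y3;
           USEOBSERVATIONS = (w1 EQ 1);
           USEVARIABLES = y1-y3;
ANALYSIS:  TYPE = BASIC;
SAVEDATA:  FILE IS w1_subset.dat;
```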
Patchara Popaitoon posted on Friday, September 09, 2011 - 7:15 am
Dear Linda,
Thanks for the advice. I had tried this but the saved file contained no data. Please suggest. Here is the command.
File is C:\mplus\batch1.dat
names are ...(I have all the variables in the dataset written here including w1 which is used in useobservation command)...;
Useobservation = w1 EQ 1;
Savedata: File is C:\mplus\w1;
I got the file out of the run but there is no data in the file. It is supposed to be about 600 observations here. Thanks.
Linda K. Muthen posted on Friday, September 09, 2011 - 7:34 am
Please send the relevant files and your license number to support@statmodel.com.
Can anyone help me with Hidden Markov question?
October 15th 2012, 07:06 PM #1
Sep 2012
Can anyone help me with Hidden Markov question?
I've been working on this for a few hours with no luck. I'd appreciate any help.
I'm trying to calculate both the most probable state path (via the Viterbi algorithm) and the marginal probability of the sequence "ILDE" being generated by the model (via the Forward algorithm),
for the model defined below.
a. The model has two possible states: transmembrane state (TM) and non-transmembrane state (NT)
b. The state transition matrix A (rows: S[t], columns: S[t+1]):
          TM    NT
    TM   0.8   0.2
    NT   0.2   0.8
c. The emission matrix E (rows: state, columns: amino acid):
           L      I      E      D
    TM   0.45   0.45   0.05   0.05
    NT   0.05   0.05   0.45   0.45
Assume that transition from start state to TM or NT has equal chance 0.5.
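Both quantities for this model can be computed in a few lines of Python (our sketch of the standard algorithms, using the matrices above and the 0.5/0.5 start distribution):

```python
states = ["TM", "NT"]
start = {"TM": 0.5, "NT": 0.5}
trans = {"TM": {"TM": 0.8, "NT": 0.2},
         "NT": {"TM": 0.2, "NT": 0.8}}
emit = {"TM": {"L": 0.45, "I": 0.45, "E": 0.05, "D": 0.05},
        "NT": {"L": 0.05, "I": 0.05, "E": 0.45, "D": 0.45}}

def viterbi(obs):
    """Most probable state path for obs, and its probability."""
    v = {s: start[s] * emit[s][obs[0]] for s in states}
    back = []                              # backpointers, one dict per step
    for o in obs[1:]:
        bp, nv = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[p] * trans[p][s])
            bp[s] = prev
            nv[s] = v[prev] * trans[prev][s] * emit[s][o]
        back.append(bp)
        v = nv
    last = max(states, key=lambda s: v[s])
    path = [last]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1], v[last]

def forward(obs):
    """Marginal probability of obs under the model (Forward algorithm)."""
    f = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        f = {s: sum(f[p] * trans[p][s] for p in states) * emit[s][o]
             for s in states}
    return sum(f.values())

path, p = viterbi("ILDE")
# path == ["TM", "TM", "NT", "NT"], p == 0.0026244 (to rounding)
# forward("ILDE") == 0.00364225 (to rounding)
```

The Viterbi table can be filled in by hand the same way: at each position keep, for each state, the best product of (previous cell) x (transition) x (emission); the Forward table sums over the previous states instead of maximizing.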
Re: Can anyone help me with Hidden Markov question?
Hey rico.
I'm not quite familiar with what you want to do, but I do understand Markov probability models. Given your transition matrix from S_t to S_t+1 and your matrix E, can you explain what
you are trying to find under these two constraints?
Re: Can anyone help me with Hidden Markov question?
Please take a look at attached files. I need to do Viterbi and Forward algorithm calculation the way it shows in the table. I spent another 2 hours last night and still no luck.
Re: Can anyone help me with Hidden Markov question?
Do you only have to do it for a particular sequence or a general calculation (say for n steps)?
If it's only for particular sequence, then post your calculations for each iteration of the chain line by line and we'll see what's going on.